2024-10-09
clustercontrol2-2.2.5-1655
clustercontrol-notifications-2.2.5-364
- Address an improvement to add 'ilert' as a vendor option with the notification services (CLUS-4869, CLUS-4908)
2024-10-03
clustercontrol2-2.2.5-1647
- Remove the dependency on the clustercontrol-proxy package from the clustercontrol2 package
2024-10-02
clustercontrol-controller-2.2.0-10707
clustercontrol2-2.2.5-1642
- Address an issue to set 'safety backup' when doing a backup restore to be disabled by default with all database types (CLUS-4840)
- Address an issue when doing PITR using a deprecated / removed variable 'skip-gtids' with MariaDB (CLUS-4870)
- Address an issue with database growth running every 1h to every 12h (CLUS-4670)
- Address an issue when querying for runtime variables with PostgreSQL (CLUS-4832)
- Address an improvement to deploy with local mirrored repository (CLUS-4621)
- Address an improvement to use systemd for starting, stopping and restarting MongoDB (CLUS-4337)
- Address an issue to support insecure S3 uploads (CLUS-4763)
- Address an issue to support 'Create new local repository' with CCv2 UI (CLUS-4345)
- Address an issue to obfuscate plain text password in the job specification (CLUS-3495)
- Address a few minor cosmetic issues with the User registration form (CLUS-4557)
- Address an issue with Galera Overview dashboard where the cluster size was incorrect when there was a split brain issue (CLUS-4787)
- Address a cosmetic issue with the deployment wizard for PostgreSQL when using pg_vector (CLUS-4736)
- Address an issue with 'PostgreSQL Overview' dashboard with TimescaleDB deployments (CLUS-4859)
2024-09-23
clustercontrol-controller-2.2.0-10600
CMON Controller
- Address additional issues with mysqldump and database names with '_' in their names (CLUS-4647)
- Address a regression with PITR for MariaDB/MySQL where binary logs were applied from the beginning regardless of when the full backup was taken (CLUS-4806, CLUS-4795)
- Address a race condition issue with the pbm agent script when started by several CMON threads (CLUS-4648)
- Address an improvement for local repositories (CLUS-4624, CLUS-4621) - NOTE a UI update will arrive later
- Address an issue with 's9s cli' where cloud credentials were exposed to non-admin users (CLUS-4784)
- Address an issue to update the 1.9.5 CMON API docs to 2.2.0 (CLUS-4771)
- Address an issue to remove duplicate and obsolete operational reports (CLUS-2417)
- Address an issue with charts/graphs not showing up properly with operational reports (CLUS-4143, CLUS-2417)
- Address an improvement to make WAL archiving default with pg_basebackup (CLUS-4768, CLUS-4572)
2024-09-17
clustercontrol2-2.2.5-1615
Web App
- Address an issue with Dashboard node dropdown when switching between clusters (CLUS-4696)
- Address an issue with incorrect MaxScale port showing in the nodes pages (CLUS-4678)
- Address an issue where the Time value is always 0 with the Query Monitor -> DB Connections for MySQL (CLUS-4663)
- Address an issue with an incorrect color selection for load balancers in the 'Add load balancer' wizard (CLUS-4668)
- Address an issue with the 'Rebuild Replica' dialog loading spinner being stuck for PostgreSQL (CLUS-4785)
2024-09-13
clustercontrol-controller-2.2.0-10461
clustercontrol-controller-2.1.0-10462
CMON Controller
- Address a regression with 'show global status' when there is no Prometheus node available (CLUS-4767, CLUS-4770)
- Address an issue with mysqldump and database names with dot in their names when using UCS2 encoding (CLUS-4647)
- Git HASH is now included with `cmon --version` (CLUS-4658)
=====================================================================
ClusterControl v2.2.0
2024-09-11
clustercontrol2-2.2.5-1603
clustercontrol-proxy-2.2.4-42
clustercontrol-controller-2.2.0-10417
clustercontrol-cloud-2.2.5-413
clustercontrol-clud-2.2.5-413
clustercontrol-ssh-2.2.5-201
clustercontrol-notifications-2.2.5-360
s9s-tools-1.9.2024082802-release1
===
Welcome to the September release of ClusterControl v2.2.0. This update includes support for Valkey, a Redis-compliant open source (BSD) high-performance key/value datastore, as well as MariaDB 11.4 LTS and Ubuntu 24.04 LTS.
Key features include:
- Support for Valkey 7.2.5
- Valkey is an open source (BSD) high-performance key/value datastore that supports a variety of workloads such as caching, message queues, and can act as a primary database. Valkey can run as either a standalone daemon or in a cluster, with options for replication and high availability.
- Limitations
- Only supported for RPM based distros (for example RockyLinux 8 or 9)
- Support for MariaDB 11.4 LTS - MariaDB Community Server 11.4 LTS: What This Means For Customers | MariaDB
- MariaDB Community Server 11.4 will be a long-term maintenance (LTS) release. Version 11.4 will be the first release of the 11.x series to be LTS. This means that we will provide five years of bug fixes for this release once it’s GA.
- Release notes - Changes and Improvements in MariaDB 11.4
- Support for Ubuntu 24.04 LTS (Noble Numbat) - Canonical releases Ubuntu 24.04 LTS Noble Numbat | Canonical
- Ubuntu 24.04 LTS will be the first LTS to enjoy a whopping 12 years of support. Usually, LTS releases get five years of security and maintenance updates with an additional five years of extended security support, making a total of 10 years of support before reaching EOL (End-Of-Life).
- Redis Cluster
- Added additional backup options
- Compression, encryption, and verify backup
- MongoDB Improvements
- `shardsvr` removed from Replicaset setups
- HAProxy Improvements
- Support for monitoring (alerts) HAProxy node health (operational status)
- Vault Integration Improvements
- This addresses a limitation of the Vault integration with the CMON controller where 'cmon-events' still needed credentials in the /etc/cmon.cnf file
- The 'cmon-events' process now supports Vault integration and thus the /etc/cmon.cnf file is no longer required
# /etc/cmon-events.cnf
vault_token = hvs.A2j7ScVH2j7qC8CAQni7sJ31
vault_addr = http://127.0.0.1:8200
vault_secret_path = 4efbf326-7087-47d7-9200-6c4a135494af/cluster_0
- Support for FIPS-140-2
- Correctly signed ClusterControl packages with SHA-256
- Linking CMON Controller with OpenSSL 3.0.x library
- Elasticsearch Improvements
- Support for importing Elasticsearch 7.x and 8.x clusters
- ClusterControl Ops-C Improvements (Multiple Controllers support)
- Added support for using 'cmon-proxy' to serve requests for the CC Ops-C web application.
The Apache server is no longer required and can be completely uninstalled.
$ apt update
$ apt purge apache2 # purge the Apache server since it's no longer required
$ apt purge clustercontrol2 # purge the CCv2 web UI since it is no longer required
# install clustercontrol-mcc
$ apt install clustercontrol-mcc
# register the local CMON controller with cmon-proxy
$ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc # serve web application on port 443
$ systemctl restart cmon-proxy
# change TLS cert if needed in /usr/share/ccmgr/ccmgr.yaml and restart cmon-proxy
$ cat /usr/share/ccmgr/ccmgr.yaml
filename: /usr/share/ccmgr/ccmgr.yaml
webapproot: /var/www
fetch_jobs_hours: 12
fetch_backups_days: 7
instances:
  - xid: cqf1b1cfnetrgl51vhrg
    url: 127.0.0.1:9501
    name: local
    use_cmon_auth: true
    frontend_url: localhost
    cmon_ssh_host: 127.0.0.1:9511
timeout: 30
logfile: /var/log/ccmgr.log
users:
  - username: admin
    passwordhash: fa31fa6e142d9d1b:6870:21790d5c340c9fe207d6a5e29fe39eaddeff200b517e90e19318dddb515a424f
frontend_path: /var/www/html/clustercontrol-mcc
port: 443
tls_cert: /usr/share/ccmgr/server.crt
tls_key: /usr/share/ccmgr/server.key
session_ttl: 1800000000000
# open ClusterControl Ops-C at https://<cmon controller ip>:443 and log on with one of your existing CC admin users
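As an aside, the `session_ttl` value in ccmgr.yaml is expressed in nanoseconds (it appears to be a serialized Go time.Duration; that reading is an assumption here). The value shown above works out to a 30-minute session timeout:

```shell
# Convert session_ttl (nanoseconds) to minutes: ns -> s -> min
echo $((1800000000000 / 1000000000 / 60))   # prints 30
```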
User registration form has been added when using CC Ops-C with a single local CMON controller.
- CCv2 UI (clustercontrol2 package)
- Support for showing redundant indexes for MySQL
- Support for using Apache mod_php with the Apache server instead of using mod_proxy
- Install php (and remove mod_proxy)
- Copy the Apache configuration from /usr/share/cmon/apache/cc-frontend.conf to /etc/httpd/conf.d/cc-webapp.conf (as an example)
- In the Apache configuration, comment out the IncludeOptional /usr/share/cmon/apache/cc-frontend-cmon-api.conf and IncludeOptional /usr/share/cmon/apache/cc-frontend-license.conf lines
- Uncomment either IncludeOptional /usr/share/cmon/apache/cc-frontend-php-proxy-socket.conf or IncludeOptional /usr/share/cmon/apache/cc-frontend-php-proxy-curl.conf (the latter requires the php curl module)
- The Apache configuration for RPM based distros is now stored as /usr/share/cmon/apache/cc-frontend.conf when the clustercontrol2 package is installed manually.
- The cc-proxy.conf Apache configuration file is no longer required and is no longer part of the clustercontrol2 package
- After upgrading from 2.1.0 or an older version of the clustercontrol2 package, the Apache configuration file cc-frontend.conf (if it exists) will be renamed to cc-webapp.conf
======================================================================================================================================================
2024-08-27
clustercontrol-controller-2.1.0-10190
CMON Controller
- Address an issue with Redis Cluster where ACL rules were propagated recursively when adding nodes (CLUS-4546)
- Address an issue with 'DB Status' not showing up properly with MariaDB Galera Cluster and Replication (CLUS-4575)
- Address an improvement with concurrent threads when using PBM with MongoDB (CLUS-4648)
- Address an issue with an empty cluster list when migrating to Vault integration for CMON credentials (CLUS-4620)
- Git HASH is now shown with `cmon --version` (CLUS-4658)
- Address a race condition with Redis Cluster when adding a node which causes the operation to fail (CLUS-4651)
- Address regressions with Operational Reports API; getReports, getReportTemplates, listSchedules (CLUS-4597)
- Address an issue where it was possible to reuse the backup directory for pgBackRest with other backup methods (CLUS-4102)
- Address an issue to remove a DB node from a load balancer’s configuration when a DB node is removed from the cluster (CLUS-575)
- Address an issue when upgrading from TimescaleDB 15 to TimescaleDB 16 (CLUS-4338)
2024-08-20
clustercontrol2-2.2.4-1550
Web application / CCv2
- Address an issue with an unresponsive button in Query Monitor->Top Queries (CLUS-4285)
- Address additional issues with editing ProxySQL rules and null values (CLUS-4587)
- Address an issue with password validation when changing password (CLUS-4582)
- Address an issue with an incorrect color used in the topology view (CLUS-4537)
- Address an issue to remove the backup verification node from the topology view (CLUS-4543)
- Address an issue with showing incorrect tooltip with the deployment wizard for Galera (CLUS-4398)
- Address an issue where edit buttons disappeared in the configuration files editor (CLUS-4387)
2024-08-04
clustercontrol-controller-2.1.0-9960
clustercontrol2-2.2.4-1520
CMON Controller
- Address an issue where Operational reports scheduling failed to be sent (CLUS-3702)
- Address an issue where the remaining CMON agent node failed to be removed after removing a DB node (CLUS-4036)
- Address an issue where adding a Replica node with Percona XtraDB failed due to the use of the wrong DB template because the node role was unknown (CLUS-4524)
- Address an issue with loading up and editing large configuration files (CLUS-4486)
- Misc improvements for an upcoming S9S CLI release have been added (CLUS-4412)
- Support for '--compression-level' has been added
- The '--recurrence' option is only valid with the 'scheduleBackup' operation
- Clarified the error message when a non-existent user is deleted
Web application / CCv2
- Address an issue where adding a Replica node with Percona XtraDB failed due to the use of the wrong DB template because the node role was unknown (CLUS-4524)
2024-07-26
clustercontrol-controller-2.1.0-9819
CMON Controller
- Address an issue deploying MaxScale 22.0x on Ubuntu/Debian and Rocky/AlmaLinux (CLUS-4420)
- Address an issue with database growth and table stats being blank / empty (CLUS-4474)
2024-07-25
clustercontrol-mcc-2.2.4-141
clustercontrol-proxy-2.2.4-42
ClusterControl Ops-C (multi cmon controller support) web application:
- Address improvements to remove the Apache server dependency and instead use the 'cmon-proxy' process to serve the web application (CLUS-4384)
$ apt update
$ apt purge apache2 # purge the Apache server since it's no longer required
$ apt purge clustercontrol2 # purge the CCv2 web UI since it is no longer required
# install clustercontrol-mcc
$ apt install clustercontrol-mcc
# register the local CMON controller with cmon-proxy
$ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc # serve web application on port 443
$ systemctl restart cmon-proxy
# change TLS cert if needed in /usr/share/ccmgr/ccmgr.yaml and restart cmon-proxy
$ cat /usr/share/ccmgr/ccmgr.yaml
filename: /usr/share/ccmgr/ccmgr.yaml
webapproot: /var/www
fetch_jobs_hours: 12
fetch_backups_days: 7
instances:
  - xid: cqf1b1cfnetrgl51vhrg
    url: 127.0.0.1:9501
    name: local
    use_cmon_auth: true
    frontend_url: localhost
    cmon_ssh_host: 127.0.0.1:9511
timeout: 30
logfile: /var/log/ccmgr.log
users:
  - username: admin
    passwordhash: fa31fa6e142d9d1b:6870:21790d5c340c9fe207d6a5e29fe39eaddeff200b517e90e19318dddb515a424f
frontend_path: /var/www/html/clustercontrol-mcc
port: 443
tls_cert: /usr/share/ccmgr/server.crt
tls_key: /usr/share/ccmgr/server.key
session_ttl: 1800000000000
# open ClusterControl Ops-C at https://<cmon controller ip>:443 and log on with one of your existing CC admin users
2024-07-25
clustercontrol2-2.2.4-1501
Web application / CCv2
- Address a potential issue when editing ProxySQL rules (null field) causing a ProxySQL node to crash. Note that this issue hasn't been reproducible in our test env (CLUS-4413)
2024-07-24
clustercontrol-controller-2.1.0-9780
clustercontrol-controller-2.0.0-9779
clustercontrol-controller-1.9.8-9778
clustercontrol2-2.2.4-1492
CMON Controller
- Address additional Local File Inclusion (LFI) vulnerabilities reported with the CMON API where the content of an included file in a HTTP request was shown (CLUS-4448)
- Address an issue with lengthy restart timeouts with MongoDB (CLUS-4337) - only for CMON controller v2.1.0
Web application / CCv2
- Address misc UI issues (CLUS-4392, CLUS-4382, CLUS-4416, CLUS-4333, CLUS-4376, CLUS-4346, CLUS-4392, CLUS-4375, CLUS-4038)
- Navigating from 'Cluster overview' to 'Redis overview'
- Galera: 'Resync' node action had no effect
- Primary node will be default node when opening a dashboard
- Top menu items were incorrectly highlighted
- Zero state pages were missing for Elasticsearch
- Input field for systems settings were too small to edit at times
2024-07-18
clustercontrol-controller-2.1.0-9676
CMON Controller
- Address a Local File Inclusion (LFI) vulnerability reported with the CMON API where the content of an included file in a HTTP request was shown. This fix prevents the content of a file from being shown when a malicious request is sent (CLUS-4432)
- Address an issue with (re-)importing a single Galera node where the cluster ID was not properly set/used (CLUS-4364)
2024-07-15
clustercontrol-controller-2.1.0-9624
CMON Controller
- Address an issue where failover for ProxySQL should not take place in maintenance mode (CLUS-4380)
- Address an issue with setting an expired CC user password which is not allowed (CLUS-4389)
2024-07-09
clustercontrol-controller-2.1.0-9571
CMON Controller
- Address various ProxySQL issues (CLUS-3937, CLUS-4265)
- Always check blacklist / whitelist when handling failovers
- Switchover is handled inside a job
- Maintenance mode is correctly handled with failovers
- Address an issue with missing configuration file template with Percona MongoDB 7 (CLUS-4324)
- Address an issue deploying TimescaleDB for v16.x (CLUS-4334)
- Address an issue with Prometheus exporter parameters with MySQL userstats for MariaDB and Percona (CLUS-4129)
2024-06-27
clustercontrol-controller-2.1.0-9406
clustercontrol2-2.2.4-1434
CMON Controller
- Address an issue with Prometheus exporter parameters with MySQL userstats for MariaDB and Percona (CLUS-4129)
Web application
- Address an issue with missing Redis metrics dashboard for Redis Cluster (CLUS-4123)
- Address an issue with a missing 'pre-loader' and error handling with the config file management page when a file fails to be read up (CLUS-4294)
- Address an issue with links in toaster not being presented as a link (CLUS-4254)
ClusterControl v2.1.0
2024-06-24
clustercontrol2-2.2.4-1422
clustercontrol-controller-2.1.0-9344
clustercontrol-mcc-2.2.4-83
clustercontrol-proxy-2.2.0-25
s9s-tools-1.9.2024062410-release1
clustercontrol-cloud-2.0.0-400
clustercontrol-clud-2.0.0-400
clustercontrol-ssh-2.0.0-166
clustercontrol-notifications-2.0.0-344
==
As a reminder, from ClusterControl version 1.9.7, the legacy web application (CCv1) is no longer the default. These features are now exclusively accessible through the ClusterControl v2 (CCv2) web application or the s9s command-line tool.
Key features include:
- Support for PostgreSQL 16
- Release notes: PostgreSQL 16 Released!
- Redis Cluster
A distributed implementation of the popular in-memory data store, Redis. It is designed to provide high availability and horizontal scalability by dividing the dataset into multiple partitions and distributing them across multiple nodes.
- Backup management
- Support for cloud storage
- Minor upgrades
- User access (ACL) management
- Create, update and delete ACL rules
- Cluster/Node actions: Promote replica and manual failover
- Limitations - These features will be available in an upcoming release or patch:
- Backup management
- Compression, encryption, and verify backup
- Cluster/Node actions:
- Re-sharding (Scale with Redis Cluster)
- Redis metrics dashboard
- Redis Sentinel
- Backup management
- User access (ACL) management
- Create, update and delete ACL rules
- Minor upgrades
- ClusterControl Operations Center - Horizontal scaling of ClusterControl
- Use multiple ClusterControl installations / CMON controllers to handle large-volume environments of 1000+ nodes.
- Provide multi-tenancy isolation where each tenant is assigned their own CMON Controller.
- Consolidate and manage multiple CMON controllers from a single point / web application
- New features
- Jobs and backups stats aggregation for the Controllers overview page
- WEB SSH Console into nodes
Modifications to the cc-frontend.conf (Apache config file) are required for existing environments:
<Location /api/v2/cmon-ssh/cmon/ws/>
  RewriteEngine On
  RewriteCond %{REQUEST_URI} ^/cmon-ssh/cmon/ws/(.*)$
  RewriteCond %{REQUEST_URI} ^/api/v2/cmon-ssh/cmon/ws/(.*)$
  RewriteRule ^(.*)$ ws://127.0.0.1:9511/cmon/ws/%1 [P,L]
</Location>
<LocationMatch /api/v2/cmon-ssh/>
  ProxyPass http://127.0.0.1:9511/
  ProxyPassReverse http://127.0.0.1:9511/
</LocationMatch>
===========================================================================
2024-06-10
clustercontrol-controller-2.0.0-9169
clustercontrol2-2.2.3-1379
CMON Controller
- Address an issue with empty HAProxy dashboards (CLUS-4017, CLUS-4018, CLUS-4099)
- Address an issue with MongoDB 4.4 deployment (CLUS-4160)
- Address an issue with minor upgrades for PostgreSQL where the new minor versions failed to be fetched (CLUS-4144)
- Address an issue with Redis Sentinel deployment on CentOS 9 Stream (CLUS-4124)
Web application
- Address an issue to clarify firewall and AppArmor/SELinux settings with the deployment wizard (CLUS-3968)
- Address an issue with an incorrect 'replica lag' chart showing up on the primary node dashboard (CLUS-4130)
2024-06-05
clustercontrol2-2.2.3-1369
Web application
- Address an issue with 'import node' being disabled for all cluster types (CLUS-4136)
2024-05-30
clustercontrol-controller-2.0.0-9011
clustercontrol2-2.2.3-1362
CMON Controller
- Address an issue with PostgreSQL and MariaDB minor upgrades - in some cases no upgrades were done, only checks for upgradable packages (CLUS-4051)
- Address a Redis issue where CMON tries to connect using an internal/private IP/hostname (CLUS-3818 / CCX)
- Address an issue initializing the CMON database schema with MySQL 8.4.x changes to foreign keys (CLUS-4045)
- Address an issue toggling binary logging back on on Galera nodes (CLUS-3488)
- Address an issue with minor upgrades with MongoDB Inc (vendor) on MongoDB 7.0.x for Ubuntu 22.04 LTS (CLUS-3944)
- Address an issue to improve job log messages when SELinux (CentOS 8) was preventing the Prometheus service from properly starting up (CLUS-3645) - NOTE: SELinux needs to be properly pre-configured for Prometheus.
- Address an issue with major upgrades between PostgreSQL v13 to v14 (CLUS-4010)
Web application / CCv2
- Address an issue with Redis import dialog where node configuration step was blank (CLUS-4044)
- Address a cosmetic issue with the Redis add node dialog’s preview showing inaccurate info (CLUS-4053)
- Address an issue to handle 'unknown state' with N/A and not include them in the stats (CLUS-3816)
- Address an issue with node actions that cannot be performed on a Prometheus node (CLUS-4096 / CCX)
- Address an issue with incorrect MySQL backup options when 'PITR compatible' is enabled - 'Partial backups' and 'One dump file per DB' is not supported with it set (CLUS-4061)
- Address an issue where an 'ENABLE_PRIVACY' option was missing from CCv2 to prevent usage tracking with Google Analytics.
Add a new configuration file named <webroot>/clustercontrol2/user.config.js with the content
```
window.FEAS_ENV = {
  ENABLE_PRIVACY: 'true',
}
```
- Address an issue with Redis topology view not correctly showing primary/replica relationships (CLUS-4069)
- Address an issue with Redis Sentinel cluster missing 'Upload backup to cloud' option (CLUS-4080)
- Address a cosmetic issue with tooltips having no rounded edges (CLUS-4011)
- Address issues with the 'Top Queries' page. 'Relative %' is now removed and a new toggle has been added to show the complete query or truncate it (CLUS-4075)
- Address an issue to describe the advanced crontab format with backup schedules in a more clear way (CLUS-4002)
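For reference, backup schedules use the standard five-field crontab format. The annotated example below is illustrative only (not taken from the release):

```
# field order: minute(0-59) hour(0-23) day-of-month(1-31) month(1-12) day-of-week(0-6, 0 = Sunday)
#
#   0 3 * * 0      -> every Sunday at 03:00
#   */15 * * * *   -> every 15 minutes
#   30 1 1 * *     -> at 01:30 on the first day of every month
```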
==================================================================
ClusterControl v2.0.0
clustercontrol2-2.2.3-1338
clustercontrol-controller-2.0.0-8821
clustercontrol-cloud-2.0.0-400
clustercontrol-clud-2.0.0-400
clustercontrol-ssh-2.0.0-166
clustercontrol-notifications-2.0.0-344
clustercontrol-mcc-2.2.0-66
clustercontrol-proxy-2.2.0-9
s9s-tools-1.9.2024051420-release1
Welcome to the May release of ClusterControl. This update includes support for Redis Cluster and scaling of ClusterControl in high-volume node or multi-tenant environments. Operations teams can now also deploy ClusterControl within Kubernetes using a Helm chart and provision database clusters using a ClusterControl Terraform provider. Additionally, this version has support for the latest major releases of MongoDB and Debian.
Key features include:
- Support for Redis Cluster
A distributed implementation of the popular in-memory data store, Redis. It is designed to provide high availability and horizontal scalability by dividing the dataset into multiple partitions and distributing them across multiple nodes.
- Deploy & Import
- System and host dashboards
- Cluster topology view
- Configuration files management
- Backup and restore
- Local storage only and retention setting
- Scaling
- Add and remove primaries or replicas
- Limitations - These features will be addressed in an upcoming release:
- Backup management
- Cloud storage
- Advanced settings - compression, encryption, verify backup
- Minor upgrades
- User access (ACL) management
- ClusterControl Operations Center - Horizontal scaling of ClusterControl
- Use multiple ClusterControl installations / CMON controllers to handle large-volume environments of 1000+ nodes
- Provide multi-tenancy isolation where each tenant is assigned their own CMON Controller.
- Consolidate and manage multiple CMON controllers from a single point / web application
- Aggregated status dashboard
- Clusters, Nodes and CMON Controllers
- CMON-Proxy LDAP authentication
- Limitations
- WEB SSH (disabled / non-functional with the current release)
- Terraform Provider for ClusterControl - Terraform Registry
- Use Infrastructure as Code (IaC) tools to provision ClusterControl resources such as database clusters, nodes and load balancers.
- Integrate ClusterControl with your existing Terraform manifests and CI/CD pipelines environments.
- Use our Terraform Provider to seamlessly provide automatic database cluster provisioning on public and private clouds.
- Documentation - Terraform provider for ClusterControl
- ClusterControl Helm Chart - ClusterControl helm-chart
- Install and use ClusterControl on Kubernetes
- Public helm github repository - https://github.com/severalnines/helm-charts/tree/main/charts/clustercontrol
- ClusterControl helm chart published on clustercontrol 2024.4.4 · severalnines/clustercontrol
- New Major Versions support
- MongoDB 7
- Debian 12
- Misc
- PITR with MSSQL Server 'transaction log' backups
- MySQL binary log backups (and restore) to/from cloud storage
- New backup method named 'binlog'
- Separate backups of the MySQL binary logs
- CC UI
- Misc UI/UX improvements
- Known Limitations
- Major upgrades with TimescaleDB v13 and v14
- 'copy' and 'link' methods are currently not working properly
=================================================================================
2024-04-24
clustercontrol2-2.1.0-1313
- Address an issue with URL redirects being incorrect when clicking on or pasting a link to ClusterControl (CLUS-3634)
- Address an issue with alignment when long names/titles appear in alarms (CLUS-3632)
- Address an issue with a broken dialog when selecting 'upgrades' with a single node (CLUS-3770)
- Address an issue with minor upgrade where the primary was upgraded even though only the replica was selected (CLUS-3859)
- Address a potential issue where 'enable' MySQL binary log was not shown in the menu after disabling it for MySQL Galera cluster (CLUS-3477)
2024-04-24
clustercontrol-controller-1.9.8-8622
- Address issues with memory leaks and segmentation faults for CMON (CLUS-3814, CLUS-3825)
- Address an issue with license handling of MSSQL Server (and Redis Cluster) (CLUS-3734)
- Address an issue with Redis backups on the CMON controller having incorrect IP address (CLUS-3810)
2024-04-08
clustercontrol-controller-1.9.8-8364
- Address an issue with the Prometheus exporter installation job reporting incorrect job status when failing (CLUS-3645)
- Address an issue with backup verification for MariaDB where the server version failed to be obtained (CLUS-3452)
- Address an issue with backup verification when 'qpress’ compression was enabled (CLUS-3198)
- Address additional issues (potential deadlocks) with MongoDB cluster status being in unknown state (CLUS-3445)
- Address issues with deploying Elasticsearch using deprecated / EOL versions (CLUS-3769)
2024-03-22
clustercontrol-controller-1.9.8-8180
- Address an issue with column grants when syncing ProxySQL instances (CLUS-3571)
- Address a potential performance issue when CMON starts with a large number of managed clusters and the clusters are stuck in 'UNKNOWN' state (CLUS-3568)
A new command line option has been added that can be used with the CMON service file:
--disable-dsl-advisors # disables advisors
- Address a potential issue with backup restore causing CMON to crash (CLUS-3479)
- Address improvements when deleting and uninstalling a PostgreSQL node (CLUS-3152)
2024-03-22
clustercontrol-controller-1.9.7-8179
- Address a potential performance issue when CMON starts with a large number of managed clusters and the clusters are stuck in 'UNKNOWN' state (CLUS-3568)
A new command line option has been added that can be used with the CMON service file:
--disable-dsl-advisors # disables advisors
- Address an issue with MongoDB deployment and debug strings in the output which caused issues with 'mongosh’ (CLUS-3653)
- Address uninitialised variables and debug log output issues with the MongoDB codebase (CLUS-3605, CLUS-3606)
2024-03-13
clustercontrol-controller-1.9.7-8075
- Address a potential performance issue when CMON starts with a large number of managed clusters and the clusters are stuck in 'UNKNOWN' state (CLUS-3445)
A new command line option has been added that can be used with the CMON service file:
--cron-thread-delay=600 # seconds to delay before starting up cron job threads
Use it to delay the creation of 'cron' threads when CMON starts so that the clusters' state can be correctly set.
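One way to wire the option in persistently is a systemd drop-in; a sketch, where the unit name and binary path are assumptions to adjust for your installation:

```
# /etc/systemd/system/cmon.service.d/override.conf  (assumed unit name)
[Service]
ExecStart=
ExecStart=/usr/sbin/cmon --cron-thread-delay=600
```

Run `systemctl daemon-reload` and restart the cmon service for the override to take effect.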
2024-03-12
clustercontrol2-2.1.0-1238
- Address an issue with the backup list and the error 'n.map is not a function' (CLUS-3564)
- Address an issue using long names at cluster deployment (CLUS-3521)
- Address an issue where a tooltip was missing when hovering over the cluster name in the left navigation bar (CLUS-3349)
- Address an issue where the 'Storage Host' was blank in the backup schedules list (CLUS-3493)
- Address an issue with host verification using the backup verification dialog (CLUS-3396)
- Address an issue to restore the 'WEB SSH' feature for CCv2 (CLUS-3322)
- Address an issue where the 'Show diff' button was disabled in the configuration files page (CLUS-3526)
- Address an issue with broken pagination with ProxySQL’s Top Queries (CLUS-3554)
2024-03-11
clustercontrol-controller-1.9.8-8020
clustercontrol2-2.1.0-1232
- Address valgrind issue - Uninitialized variables in CmonMongoConnection (CLUS-3605)
- Address potential deadlock issues with MS SQLServer and always on - async mode (CLUS-3602, CLUS-3533, CLUS-3413)
- Address an issue with cluster removal when backup records failed to be deleted (CLUS-3512)
- Address an issue to add static code analyzer for CMON (CLUS-3104)
- Address an issue when importing a PostgreSQL server to allow '/' to be omitted from the cluster name (CLUS-3539)
Web app
- Address an issue correcting text with the add node dialog for PostgreSQL (CLUS-3494)
- Address an issue with top queries not showing correct values for ProxySQL (CLUS-3496)
- Address an issue with selecting the primary Redis node to restore a backup on (CLUS-3492)
2024-02-26
ClusterControl v1.9.8
clustercontrol2-2.1.0-1203
clustercontrol-controller-1.9.8-7845
clustercontrol-cloud-1.9.8-396
clustercontrol-notifications-1.9.8-340
clustercontrol-ssh-1.9.8-158
clustercontrol-1.9.8-8662
s9s-tools-1.9.2024022611-release1
As a reminder, starting with ClusterControl version 1.9.7, the legacy web application (CCv1) is no longer the default. These features are now exclusively accessible through the ClusterControl v2 (CCv2) web application or the s9s command-line tool.
Key features by database:
- PostgreSQL
- Major upgrades (e.g., upgrade from v13 to v14)
- pgvector extension support
- MongoDB
- Minor upgrades
- Support for users' own self-signed certificate configurations
- net.tls.certificateKeyFile is set
- net.tls.certificateKeyFile and net.tls.CAFile are set, and net.tls.allowInvalidCertificates: true
- CMON connects with 'mongo' using --tlsAllowInvalidCertificates for some specific MongoDB configuration cases
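The supported configurations correspond to mongod.conf fragments along these lines (a sketch; the certificate paths are placeholders, not from the release notes):

```yaml
# mongod.conf sketch -- paths are placeholders
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem
    allowInvalidCertificates: true   # only for the self-signed case described above
```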
- MySQL
- DB Status and Variable pages
- Add replicas (add primary with Galera, and create replica cluster) with identical software version - major.minor.patch - as the primary node(s)
- General
- Web SSH console for CCv2
-
# Existing installation: Add these lines to the /etc/apache2/config.d/cc-frontend.conf file
<Location /cmon-ssh/cmon/ws/>
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/cmon-ssh/cmon/ws/(.*)$
RewriteRule ^(.*)$ ws://127.0.0.1:9511/cmon/ws/%1 [P,L]
</Location>
<LocationMatch /cmon-ssh/>
ProxyPass http://127.0.0.1:9511/
ProxyPassReverse http://127.0.0.1:9511/
</LocationMatch>
# Existing installation: Add the line below to the section <LocationMatch /api/v2/>
Header edit Set-Cookie ^(.*)$ "$1; Path=/"
- UI/UX improvements for CCv2
Features
Major Upgrades for PostgreSQL
In-place upgrade using the "pg_upgrade" tool, which upgrades the data files to the next major PostgreSQL version, e.g., v14 to v15.
Options
- Copy the old cluster's data set to the new cluster's data directory. This requires enough disk space to hold two full data sets!
- Use hardlinks (--links) instead of copying files. This is faster than copying and does not need double the disk space; however, one limitation is that you cannot use the old cluster once the new cluster has started.
- pg_dumpall, a failsafe option in case "pg_upgrade" fails to complete successfully
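As a rough sketch of the first two options, here is what the pg_upgrade invocations could look like for a v14 to v15 upgrade. The binary and data paths below are illustrative Debian/Ubuntu package defaults, not what ClusterControl runs verbatim; the commands are only printed, not executed.

```shell
# Illustrative paths for a v14 -> v15 upgrade; adjust to your installation.
OLD_BIN=/usr/lib/postgresql/14/bin
NEW_BIN=/usr/lib/postgresql/15/bin
OLD_DATA=/var/lib/postgresql/14/main
NEW_DATA=/var/lib/postgresql/15/main

# Copy mode (default): duplicates the data files, so the disks must hold
# two full data sets at once.
echo "$NEW_BIN/pg_upgrade -b $OLD_BIN -B $NEW_BIN -d $OLD_DATA -D $NEW_DATA"

# Link mode: hardlinks instead of copies. Faster, no double disk space,
# but the old cluster must not be started once the new one has run.
echo "$NEW_BIN/pg_upgrade -b $OLD_BIN -B $NEW_BIN -d $OLD_DATA -D $NEW_DATA --link"
```

Run as the postgres OS user with both clusters stopped; pg_upgrade's --check flag can be used first for a dry run.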
pgvector with PostgreSQL
pgvector is an open-source extension for PostgreSQL that enables storing and searching over machine learning-generated embeddings. It provides several capabilities that allow users to identify both exact and approximate nearest neighbors, making it a powerful tool for applications such as search, recommendation, and anomaly detection.
ClusterControl supports pgvector by enabling this extension with our PostgreSQL deployment wizard through an additional 'extensions' step. While we intend to add more extensions to PostgreSQL in future releases, pgvector is currently the only extension available for selection.
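For illustration only, this is the kind of SQL a client could run once the wizard has enabled the extension. The table and query are hypothetical, not something ClusterControl creates; the SQL is only assembled here so it can later be fed to psql against the deployed cluster.

```shell
# Hypothetical pgvector usage; nothing here talks to a live server.
SQL=$(cat <<'EOF'
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
-- exact nearest neighbor by Euclidean distance (the <-> operator)
SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1;
EOF
)
echo "$SQL"
```

In practice you would pipe this to psql on the primary node; approximate nearest-neighbor search additionally needs an index such as ivfflat.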
Minor upgrades with MongoDB
ClusterControl v1.9.8 now supports minor upgrades with MongoDB for replicaset and sharded clusters. It performs a "major.minor" version software package check and then initiates the upgrade process.
Misc
- MySQL status and variables pages. List, search and show the differences of the server status and configuration variables for MySQL based clusters
- Additional cluster details header to show cluster and node status
- Grouped menu items - in this version we have grouped menu items where it makes sense and added icons so that certain groups of actions are easier to identify.
- Shortcuts for the auto recovery options were added to the cluster list so they can quickly be toggled on or off without having to navigate through the cluster menu.
2024-02-22
clustercontrol-1.9.7-8660
- Address an issue where 'integrations' are not shown in the UI (CLUS-3478)
2024-02-21
clustercontrol-controller-1.9.7-7812
- Address an issue where CMON did not take any action when the hostgroup was empty. An issue with generating alarms has been corrected (CLUS-3066)
- Address an issue with too many cronjobs (>128) defined for simultaneous execution (CLUS-3445)
- Address an issue with clusters disappearing from the cluster list with an offline / no internet environment when trying to 'Installing Zip requirement: zip' (CLUS-3447)
2024-02-13
clustercontrol-controller-1.9.7-7600
- Address an issue where the MySQL binary logs were copied with backup verification jobs (CLUS-3199, CLUS-3143)
2024-01-29
clustercontrol-controller-1.9.7-7465
- Address an issue with inaccurate replication lag (CLUS-3297)
- Address an issue when the ProxySQL hostgroup is empty and CMON did not take any action (CLUS-3066)
- Address an issue with deployment of Oracle MySQL 8 on Ubuntu 20. GPG key updated. (CLUS-3261)
2024-01-16
clustercontrol2-2.0.0-1161
- Address an issue with Add Replication Node for MySQL. Picking a custom template/configuration was missing (CLUS-3270)
- Address an issue with the cluster list view where navigating to the last few clusters were not possible (CLUS-3265)
- Address an issue where the 'Repository' option was missing when importing a cluster (CLUS-3089)
- Address an issue with the UI toaster message when saving a backup schedule (CLUS-3187)
- Address an issue with the 'Add Replication' node wizard where the selected MySQL configuration template is switched back to the default template when moving between the steps (CLUS-3144)
- Address a cosmetic issue with the ellipsis alignment in the alarm details dialog (CLUS-3274)
2023-12-22
clustercontrol2-2.0.0-1133
- Address an issue with DB Users for MySQL when enable_is_queries=0 (CLUS-3067)
- Address an issue where the option to clear an alarm was missing (CLUS-3055)
- Address an issue where '[]' was enclosing recipients email addresses for scheduled operational reports (CLUS-3120)
- Address an issue with backup verification schedule where the verification process started immediately instead as scheduled (CLUS-3124)
- Address issues with showing unsupported MariaDB versions (CLUS-3144)
- Address an issue where the nodes page was shown instead of the dashboard (CLUS-3183, CLUS-3113, CLUS-2882, CLUS-3248)
- Address an issue with Query Monitor->Query Outliers showing incorrect durations in days / weeks (CLUS-3165)
- Address an issue with ProxySQL when removing servers from a host group and modifying a server’s settings such as weight (CLUS-3091)
2023-12-15
clustercontrol-controller-1.9.7-7128
- Address an issue with forcing failover for MySQL Replication regardless of the state of the primary node (CLUS-3239)
2023-12-12
clustercontrol-controller-1.9.7-7051
- Address an issue with MongoDB 4.2.8 and connection failures when the 'hello()' method is not available (CLUS-3065)
- Address an issue with the LDAP configuration not being persisted between UI sessions (CLUS-3043)
- Address an issue with operational reports where the remaining license expiration days was not accurate (CLUS-2779)
2023-11-22
clustercontrol-controller-1.9.7-6806
- Address issues with the SMTP mail server configurations that used a custom port, TLS enabled and special characters in password/usernames (CLUS-2946)
- Address an issue with MySQL user management showing an 'unknown json error' for users without grants instead of a proper error message (CLUS-2899, CLUS-2867)
- Address an issue restoring a backup with pgBackRest (PITR) using a stop time of 'now()' (CLUS-2726)
- Address additional issues with plain text passwords being shown in MongoDB job logs (CLUS-2823)
2023-11-17
clustercontrol2-2.0.0-1093
- Address an issue where the pgBackRest backup records were not marked as deleted (CLUS-3004)
- Address an upgrade issue where both MySQL and ProxySQL nodes were upgraded instead of only the MySQL node (CLUS-3030)
- Address an issue where host discovery for SSH connectivity was broken when using a custom SSH port (CLUS-2997)
- Address an improvement to artificially support 'schema.*' for PostgreSQL. Using the wildcard will translate the statement into `GRANT SELECT ON ALL TABLES IN SCHEMA foo TO user` (CLUS-2962)
- Address an issue where the alarm notifications (number of alarms) were not in sync with the number of alarms shown in the alarms table (CLUS-2950)
- Address an issue where EOL MongoDB 4.2 was still available to be selected in the deployment wizard (CLUS-2917)
- Address an issue where Keepalived nodes were shown as primary nodes (CLUS-2940)
- Address an issue where the 'Transaction Log' page was missing with MySQL (CLUS-2807)
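The 'schema.*' handling above (CLUS-2962) can be sketched with a tiny helper that mirrors the translation the UI performs. The function name and shape are ours, purely for illustration, not ClusterControl's code:

```shell
# Hypothetical translation of a 'schema.*' privilege target into the
# PostgreSQL statement the changelog entry describes.
grant_for_wildcard() {
  local target="$1" user="$2"
  local schema="${target%.*}"   # strip the trailing '.*'
  echo "GRANT SELECT ON ALL TABLES IN SCHEMA $schema TO $user"
}

grant_for_wildcard "foo.*" "app_user"
# -> GRANT SELECT ON ALL TABLES IN SCHEMA foo TO app_user
```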
2023-11-17
clustercontrol-1.9.7-8653
- Address an issue where 'Invalid Cluster ID' was shown in the Query Monitor page (CLUS-3039)
2023-10-26
clustercontrol-controller-1.9.7-6677
- Address an issue where 'free monitoring' is no longer available with MongoDB community editions (CLUS-2744)
- Address an issue verifying backups on Elasticsearch. Only a checksum verification is performed at this time (CLUS-2824)
- Address an issue where MongoDB passwords were shown in clear text in logs (CLUS-2823)
- Address an issue where a MySQL Replication Primary node was set to read-only if one of the replicas momentarily ran out of disk space (CLUS-2569)
2023-10-18
clustercontrol2-2.0.0-1057
- Address an issue where the 'Resync' node menu action was not visible (CLUS-2869)
- Address an issue where an option to set the 'data dir' for the Prometheus installation (agent based monitoring) was missing (CLUS-2823)
- Address an issue where some charts were missing with MongoDB Shards deployments (CLUS-2883)
- Address an issue where the time could not be correctly selected in the 'time picker' when scheduling a backup (CLUS-2869)
- Add cluster information in the header of the cluster page with cluster type, status, auto recovery and nodes (CLUS-2856)
- Address an issue where 'Delete Job' was missing from the Job menu (CLUS-2855)
- Address an issue with the CC logo (CLUS-2760)
2023-10-16
clustercontrol-controller-1.9.7-6636
- MongoDB 4.4 is now the oldest version that can be deployed (CLUS-2818)
- Address an issue with restoring backups on a cluster where hostnames were used (CLUS-28263)
- Address an issue with Elasticsearch failing to find a valid node during deployment (CLUS-2826)
- Address an issue with pgBackRest backup retention for differential backups. There is now a new configuration variable `pgbackrest_diff_backup_count_retention` with a default value of 20, which is applied to repo1-retention-diff (CLUS-2817)
- Address an issue where passwords were shown in plain text in the mongodb logs (CLUS-2823)
2023-10-04
clustercontrol-controller-1.9.7-6597
- Address an issue where cmon backup metadata was created in the stanza directory with pgBackRest (CLUS-2789)
- Address an issue where operational reports were not sorted by ID DESC (CLUS-2514)
- Address an issue where mongosh was not installed with deb/rpm packages (CLUS-2741)
- Address an issue where the 'backup user' password after backup restore was invalid (CLUS-2589)
- Address an issue when deploying a delayed MySQL replica (CLUS-2754)
- Address an issue promoting a replica when it is delayed or lagging (CLUS-2729)
- Address an issue where a replica fails when configuring 'delayed’ threshold manually (CLUS-2739)
- Address a potential memory corruption issue with large number of running internal threads stuck on reading LDAP cert parameter from the config file (CLUS-2737)
- Address an issue to recognize non UTC timestamp in WAL files (CLUS-2757)
- Address an issue where pgBackRest full and incremental backup records were not properly deleted with the backup retention period setting (CLUS-2740, CLUS-2511)
- Address an issue where, in certain scenarios, auto recovery was disabled by running jobs (CLUS-2400)
- Address various minor issues where PITR failed on master if backups were taken on replicas (CLUS-2387)
2023-09-22
clustercontrol2-2.0.0-1029
- Address an issue with the topology view and MongoDB sharded cluster (CCV2-1048)
- Address an issue where the database configuration file(s) were missing. There is now a 'refresh' button that will load the configuration files if the page is empty (CLUS-2732)
- Address an issue improving the 'waiting to load' progress indicator (CLUS-2745)
- Address an issue improving hint text(s) in the deployment wizard (CLUS-2649)
- Address an issue with the backup restore dialog where 'CEST' was always shown regardless of the timezone (CLUS-2716)
- Address an issue where the license confirmation dialog was not able to be closed without reloading the page (CCV2-1050)
- Address an issue with missing 'verify backup' and 'failover' options with the PostgreSQL backup wizard (CLUS-2762)
2023-09-20
clustercontrol-controller-1.9.7-6542
- Address an issue when doing a reconfigure 'backup dir’ with PBM (CLUS-2526)
- Address an issue with pgBackRest and automatically rebuilding replicas after restore. This is now enabled by default (CLUS-2711)
- Address an issue with LDAP authentication failed when there is a DN change on the directory service side (CLUS-2633)
- Address an issue where PBM fails after upgrade to 1.9.7 (CLUS-2640)
2023-09-13
clustercontrol-controller-1.9.7-6524
clustercontrol2-2.0.0-1005
- Address an issue when restoring with pg_basebackup where the replicas were not automatically rebuilt. This is now the default behavior (CLUS-2711)
- Address issues with PITR and pg_basebackup. If no PITR time is set, recovery_target is set to immediate (CLUS-2609)
- Address an issue with not masking password when pg_basebackup failed in the job log (CLUS-2638)
- Address an issue with ProxySQL when adding a new user (CLUS-2616, CLUS-2678)
Web app
- Address an issue with primary colors and corner radius for some components (CCV2-1040)
- Address an issue where network usage charts randomly switched series during auto updates (CCV2-1030)
- Address an issue with a broken jobs page when there are no jobs (CCV2-1023)
- Address an issue with database name validation missing in the deployment wizard (CCV2-1016)
- Address issues when restoring with pg_basebackup. 'Immediate' is the default behavior if PITR is not selected. Date and time are now correctly converted to UTC for the job task/spec (CCV2-1027)
- Address an issue with team permissions with user management where a permission was missing from the dropdown (CCV2-1029)
- Address an issue when deploying additional ProxySQL nodes with native clustering where it was not possible to continue to the next step (CCV2-2706)
2023-09-11
clustercontrol-controller-1.9.7-6517
clustercontrol-1.9.7-8642
- Address an issue failing to promote replicas when there is nothing to catch up on the primary (CLUS-2559)
- Address an issue with the license at deployment time (CLUS-2478)
- Address an issue disabling 'readonly' on a node when auto_manage_readonly=false (CLUS-2501)
- Address an issue with creating cluster from backup with MySQL Galera (CLUS-2594)
- Address an issue with MariaDB Enterprise where --no-backup-locks is not supported (CLUS-2612)
- Address an issue to raise alarms when CMON starts up with potential invalid configuration files and silently fails (CLUS-2367)
Web app
- Address an issue where 'backup_failover_host=auto' was incorrectly sent to CMON (CLUS-2592)
2023-08-26
clustercontrol-controller-1.9.7-6488
- Address an issue where the 'vendor=postgres' failed to be recognized after upgrading to 1.9.7 (CLUS-2601)
2023-08-25
ClusterControl v1.9.7
clustercontrol-1.9.7-8636
clustercontrol-controller-1.9.7-6487
clustercontrol-cloud-1.9.7-395
clustercontrol-notifications-1.9.7-339
clustercontrol-ssh-1.9.7-151
clustercontrol2-2.0.0-979
Welcome to the August release of ClusterControl.
The key features are:
- Enterprise Vendors for PostgreSQL and MongoDB - use enterprise software repositories
- Support for PostgreSQL 15, MongoDB 6.0, MariaDB 10.11 and MS SQL Server 2022
- Support for RedHat 9, AlmaLinux 9 and RockyLinux 9
- General Availability of our new web frontend - ClusterControl v2
- Mail notifications, Certificate management
- Advisors, Incident management services, Topology View
Features
Enterprise Vendors
ClusterControl now supports using enterprise vendors for PostgreSQL and MongoDB. You can manage and monitor EDB Postgres Advanced Server and MongoDB Enterprise clusters by using their enterprise repositories when provisioning databases. You need a valid repository token (depending on the vendor) in order to use the enterprise packages.
The s9s CLI adds two new vendor (--vendor) options:
- enterprisedb for EDB Postgres Advanced Server
- mongodbenterprise for MongoDB Enterprise
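A sketch of how the new vendor values might be used with the s9s CLI. The hostnames, versions and cluster names below are placeholders, and the commands are only assembled here, not executed; check `s9s cluster --help` for the full option list.

```shell
# Assembled but not executed; a real deployment needs reachable hosts
# and a valid vendor repository token configured.
CMDS=$(cat <<'EOF'
# EDB Postgres Advanced Server
s9s cluster --create --cluster-type=postgresql \
    --vendor=enterprisedb --provider-version=14 \
    --nodes="10.0.0.21" --cluster-name=edb-cluster --wait

# MongoDB Enterprise replica set
s9s cluster --create --cluster-type=mongodb \
    --vendor=mongodbenterprise --provider-version=6.0 \
    --nodes="10.0.0.31;10.0.0.32;10.0.0.33" --cluster-name=mongo-ee --wait
EOF
)
echo "$CMDS"
```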
ClusterControl v2.0 - New web frontend
We have been migrating the feature set of the ClusterControl GUI (v1) to a new web frontend. This is a completely rewritten web application which aims to modernize the UI, improve the user experience and ease of use.
It is now in a state where we have feature parity with the old user interface. You can start using ClusterControl (GUI) v2 without having to rely on the functionality of the old web application.
‘CCv2’ is now the default web application when you install ClusterControl following the product download page instructions. The old web application ‘CCv1’ will still be accessible until the end of the year, at which time it will no longer be supported.
Upgrade to CCv2
You can upgrade/move from the CCv1 web application to the CCv2 web application with a few easy steps (existing ClusterControl installation).
This setup will still provide access to the old 'CCv1' user interface while the new web application is accessible at another port.
Upgrade notes:
- The 'Incident management services' configurations for PagerDuty, Opsgenie, Slack etc. need to be manually re-created in CCv2 and removed from CCv1.
- The 'DB Status' and 'DB Variables' pages are coming in a future CCv2 patch
Redhat/AlmaLinux/RockyLinux
$ yum clean all
$ yum install clustercontrol2
$ systemctl restart httpd
Open a web browser, go to https://{ClusterControl_host}:9443 and log on with the user credentials used with ‘CCv1’
Ubuntu/Debian
$ apt update
$ apt install clustercontrol2
$ systemctl restart apache2
Open a web browser, go to https://{ClusterControl_host}:9443 and log on with the user credentials used with ‘CCv1’
2023-08-11
clustercontrol-controller-1.9.6-6467
- Address an issue where backups (records) with PgBackRest were not deleted/removed properly with the retention period (CLUS-2511)
- Address an issue deploying MaxScale on MariaDB (CLUS-2441)
- Address an issue where partial backups with Xtrabackups created full backups (CLUS-2449)
- Address an issue with backup retention where backups were not removed (CLUS-2503)
- Address an issue with MongoDB when enabling SSL and agent-based monitoring stopped working (CLUS-2428)
2023-08-07
clustercontrol-1.9.6-8624
- Address an issue with the MariaDB deployment wizard showing a v10.11 option which will be supported in the upcoming v1.9.7 (CLUS-2248)
2023-07-31
clustercontrol-controller-1.9.6-6447
clustercontrol-1.9.6-8620
- Address an issue deploying PgBouncer when trying to get the default socket directory from PostgreSQL. New socket directory is /tmp. (CLUS-2245)
- Address an issue where PITR restore hung with pg_basebackup (CLUS-2374)
A new CMON configuration variable, postgresql_wait_recovery_on_restoration_timeout, can be used to change the default wait time of 30 minutes.
- Address an issue with the Redis Sentinel cluster status when a backup failed to be restored (CLUS-2429)
- Address an issue with MongoDB and PBM where a failed backup could delete previous backups (CLUS-2418)
- Address an issue where the MongoDB Prometheus cmon exporter user’s password was logged in plain text (CLUS-2392)
Web frontend
- Address a minor issue with the user registration form (CLUS-2480)
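The postgresql_wait_recovery_on_restoration_timeout setting mentioned above could be changed with the same s9s --change-config pattern used elsewhere in this changelog. The cluster id is a placeholder and the unit is assumed to be seconds here; check the CMON documentation for the exact unit. The command is only printed, not executed.

```shell
# Illustrative only: cluster-id 42 is a placeholder, and the value's unit
# (seconds is assumed) should be confirmed in the CMON documentation.
CMD="s9s cluster --change-config --cluster-id=42 --opt-name=postgresql_wait_recovery_on_restoration_timeout --opt-value=3600"
echo "$CMD"
```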
2023-07-03
clustercontrol-controller-1.9.6-6408
clustercontrol-1.9.6-8611
- Address various issues with Redis Sentinel clusters
- Address an issue where the 'internal IP' was ignored and the external IP was used instead (CLUS-2395)
- Address an issue when importing ProxySQL users that have an existing user with a '%' wildcard hostname (Reverting fix for CLUS-2270)
- Address an issue with PITR using mysqldump with Percona PXC Galera 5.7 (CLUS-2401)
- Address an issue where the MongoDB Prometheus cmon exporter user’s password was logged in plain text (CLUS-2392)
Web frontend
- Address an issue with the UI performance when loading up a cluster list when there are a substantial number of clusters (CLUS-2491)
Add define('SKIP_CLUSTER_CHECK_ACCESS', true); to the <webroot>/clustercontrol/bootstrap.php file
- Address an issue with missing the 'monitor' username and password when importing ProxySQL CCv1 (CLUS-2260)
2023-06-26
clustercontrol-1.9.6-8607
clustercontrol2-0.9.3-820
- Address an issue with data retention size (storage.tsdb.retention.size) missing with Prometheus - CCv1 (CLUS-2223)
- Address an issue with missing 'monitor' username and password when importing ProxySQL - CCv2 (CCV2-831)
2023-06-19
clustercontrol-controller-1.9.6-6382
- Address an issue when deploying MongoDB Replicaset on GCP (CLUS-2218)
2023-06-12
clustercontrol-1.9.6-8601
- Address an issue on how Cluster to Cluster replication clusters are sorted with Active vs Read-only state (CLUS-1881)
- Address an issue with offline installation on Ubuntu Jammy and the setup-cc.sh script
2023-06-02
clustercontrol-controller-1.9.6-6358
- Address a startup issue with Redis Sentinel and the systemd service configuration (CCV2-690, CLUS-2213)
- Address an issue rebuilding a MSSQL Server replica when all nodes are in sync (CLUS-2247)
- Address an issue to prevent recreating existing users when the global '%' wild-pattern is used with ProxySQL (CLUS-1471)
- Address an issue with the cluster state being 'Unknown' when importing a Redis Sentinel cluster which has TLS/SSL enabled (CLUS-2220)
- Address an issue parsing MySQL user grants with Oracle MySQL 8 using 'DB Users and Schemas' (CLUS-2245)
2023-05-19
clustercontrol-controller-1.9.6-6328
clustercontrol-1.9.6-8578
- Address a potential CMON memory leak when managing Redis Sentinel cluster (CLUS-2155)
- Address an issue deploying PostgreSQL on Ubuntu 22.04 (CLUS-2174)
- Address an issue with switchovers for PostgreSQL. Proper use of 'CHECKPOINT' has been updated (CLUS-2101)
- Address an issue where changes to the Prometheus configuration was not persisted (CLUS-2080)
Web frontend
- Address an issue when retrieving datapoints for the dashboards to improve situations when the pages failed to load properly (CLUS-1901)
- PostgreSQL v10 is now deprecated as a deployment option
- Build from source for HAProxy is now deprecated / not available
2023-05-03
clustercontrol2-0.9.3-739
- Address an issue with the dashboard reloading and showing "no data" (CCV2-792)
- Address an issue to customize the dashboard refresh rate (CCV2-769)
Add a new variable to the <webroot>/clustercontrol2/config.js file named
MONITORING_DEFAULT_REFRESH_RATE_SECONDS: 60
- Address an issue with broken pagination for the backup page (CCV2-793)
- Address an issue with ProxySQL when enabling the 'import configuration' option (CCV2-738)
- Address an issue with HAProxy and selecting 'internal ip' for deployment
- Address an issue when installing the clustercontrol2 package before Apache on OpenSUSE
2023-05-03
clustercontrol-controller-1.9.6-6307
clustercontrol-1.9.6-8564
- Address an issue where NaN is shown with NDB Cluster on the overview page (CLUS-2122)
- Address an issue to add additional default options (single-transaction and quick) to 'mysqldump' (CLUS-1979)
- Address an issue to include the man pages for CMON HA (CLUS-2136)
- Address an issue to silence 'invalid configuration' alarms for HAProxy when a backend node is not available (CLUS-2102)
Web frontend
- Address an issue to deploy HAProxy when switching from Database nodes to PgBouncer nodes in the deployment wizard (CLUS-2002)
2023-04-20
ClusterControl v1.9.6
clustercontrol-1.9.6-8556
clustercontrol-controller-1.9.6-6288
clustercontrol-cloud-1.9.6-389
clustercontrol-notifications-1.9.6-330
clustercontrol-ssh-1.9.6-145
In this release we have prioritized improving our high availability setup (CMON HA) for ClusterControl. It is an active/passive solution using the Raft protocol between a CMON controller 'leader' and 'followers'.
A CMON controller is the main process in ClusterControl and it's the 'control plane' to manage and monitor the databases.
To use CMON HA, you will basically need to do the following:
- Use a shared MySQL database cluster that all CMON controllers will use as the shared storage.
Our recommendation is to use a MySQL Galera cluster which provides additional high availability of the database cluster.
- Install several ClusterControl nodes, at minimum 3 nodes for the quorum / voting process to work.
- These should be set up to connect to / share the same CMON database and also have identical rpc_key (cmon.cnf) and RPC_TOKEN (bootstrap.php) secrets. Add the IP of the host of the Controller node to the RPC_BIND_ADDRESSES parameter in the /etc/default/cmon file on all CMON nodes.
- Next, enable CMON HA using the s9s CLI on the selected 'leader' CMON controller node:
s9s controller --enable-cmon-ha
- Restart the other CMON controller 'follower' nodes.
systemctl restart cmon
- Verify that you have a leader and a few followers using the s9s CLI
s9s controller --list --long
Finally, in order to set up the web application to handle 'leader' failures transparently, we can use a load balancer like HAProxy which will fail over to the 'leader' node, which provides the 'main/active' web application. This involves installing HAProxy and configuring it to health check the CMON nodes, which can be achieved with a combination of xinetd and a custom script that calls the CMON RPC API interface.
Please see the online documentation on CMON HA for more detailed setup instructions.
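One possible shape of the HAProxy piece is sketched below. This is a hedged illustration only: the addresses, the certificate path and the xinetd check port (9201 here) are site-specific assumptions, and the xinetd service must run a script that exits healthy only on the current CMON 'leader' (for example by querying the CMON RPC API); consult the CMON HA documentation for the supported setup.

```
# haproxy.cfg fragment (illustrative; not a ClusterControl-shipped config)
frontend ccv2_web
    bind *:443 ssl crt /etc/haproxy/cc.pem
    default_backend ccv2_nodes

backend ccv2_nodes
    # xinetd on each node serves an HTTP health check on port 9201 (an
    # assumption) that returns 200 only on the 'leader' node
    option httpchk GET /
    server cc1 10.0.0.11:9443 check port 9201 ssl verify none
    server cc2 10.0.0.12:9443 check port 9201 ssl verify none
    server cc3 10.0.0.13:9443 check port 9201 ssl verify none
```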
2023-04-11
clustercontrol-controller-1.9.5-6270
clustercontrol-1.9.5-8550
- Address an issue with the keepalived details page being blank/empty (CLUS-1860)
Web frontend
- Address an issue when retrieving datapoints for the dashboards to improve situations when the pages failed to load properly (CLUS-1901)
- Address an issue with hiding a user's email address if the user's account has been disabled (CLUS-1653)
2023-04-03
clustercontrol-controller-1.9.5-6258
- Address an issue with the MariaDB systemd override.conf file (CLUS-1749)
- Address an issue with MariaDB and permissions on the systemd service file (CLUS-1683)
- Address an issue with syntax errors when importing ProxySQL users with column based grants (CLUS-1887)
- Address an issue with log messages (printf) when deleting backups during restore (CLUS-2040)
- Address an issue with repetitive 'Node auto recovery delay backoff reset: 0' log messages
2023-03-24
clustercontrol2-0.9.3-712
In this release for ClusterControl v2 (our new web application) we have added:
- Mail server settings (sendmail or SMTP)
- Query Monitoring (ssh and agent based)
- Minor upgrades / patching
- Operations Reports (scheduled and on-demand)
- Configuration (files) Management
- Enable SSL/TLS encryption between client & server and replication with MySQL
- Enable SSL/TLS encryption between client & server with PostgreSQL
- Misc UX improvements
2023-03-17
clustercontrol-controller-1.9.5-6225
- Address an issue with confusing log messages during node recovery with delayed backoff (CLUS-1939)
- Address an issue to improve log messages when a backup user fails to be created with MySQL replication and MySQL Galera clusters
- Address an issue with the 'postinst' script and libcrypto on OpenSUSE/SLES
- Address an issue with the backup user not being created when importing Oracle MySQL 8 (CLUS-1844)
For existing clusters with issues with the backup user you might need to do a reset:
$ s9s cluster --change-config --cluster-id=122 --opt-name=backup_user --opt-value=""
OK.
$ s9s cluster --change-config --cluster-id=122 --opt-name=backup_user_password --opt-value=""
OK.
$ s9s cluster --list-config --cluster-id=122 | egrep '(backup_user|backup_password)'
backup_user ""
backup_user_password "<hidden>"
- Update CMON-DOCS to show RPCv2 documentation at the top of the page
2023-03-02
clustercontrol-cloud-1.9.5-386
clustercontrol2-0.9.1-701
- Address an issue with excessive 'error 500' in the cmon-cloud.log (CLUS-1846)
Web frontend
- Address issues with mail server settings (CCV2-722)
2023-03-01
clustercontrol-controller-1.9.5-6154
- Address additional issues setting up a replica cluster with MariaDB Galera 10.6 (CLUS-1850)
- Address an issue with the 'CMON agent server' and timeouts when our software/package repository server might be slow to respond
- Address an issue improving error messages when deployment of MariaDB Galera 10.6 fails (CLUS-1936)
- Address an issue deploying HAProxy with PgBouncer without passing the admin user and password via the S9S CLI (CLUS-1935, CLUS-1895)
- Address an issue with parsing '\' with pg_hba.conf files (CLUS-1923)
- Address an issue when creating a maintenance period with a non-existing cluster (CLUS-1809)
- Address an issue with IP address changes with CMON’s internal 'hostname_internal'. It now correctly updates when MySQL’s 'report_host' is changed (CLUS-1929)
2023-02-13
clustercontrol-controller-1.9.5-6136
- Address an issue with excessive log entries with S3 backup uploads in the CMON log. These are now only shown in debug mode. (CLUS-1930)
- Address an issue with error messages when installing Prometheus exporters which are informative in nature and not truly errors (CLUS-1133, CLUS-1926)
- Address an issue with importing a non-existing cluster successfully (Redis Sentinel) (CLUS-1799)
2023-02-13
clustercontrol-controller-1.9.5-6132
clustercontrol-1.9.5-8511
- Address an issue deleting old backups that were in failed status. Failed backups are now also deleted. (CLUS-1874)
- Address an issue with the database growth chart showing duplicated days (CLUS-1813)
- Address an issue when creating a maintenance job on a non-existing cluster (CLUS-1809)
- Address an issue with the s9s CLI when creating a maintenance job with a timezone other than UTC. The scheduled start time was incorrectly set 1 hour in the future (CLUS-1808)
- Address an issue when setting up a replica cluster with MySQL based clusters using mysqldump PITR backups. Only xtrabackup/mariabackup made backups are supported moving forward (CLUS-1850, CLUS-1856)
Web frontend
- Address an issue when setting up a replica cluster with MySQL based clusters. Only xtrabackup/mariabackup made backups are supported moving forward and mysqldump PITR backups are filtered out. (CLUS-1892, CLUS-1850, CLUS-1856)
2023-02-13
clustercontrol-controller-1.9.5-6119
clustercontrol-1.9.5-8507
clustercontrol2-0.9.1-688
- Address an issue with WSREP_CLUSTER_ADDRESS (Galera) where it was set to use the public addresses instead of the internal/local addresses (with AWS) (CLUS-1853)
- Address an issue with the locale setting C.UTF-8 when it is not available in some environments. It will now fall back to the C locale
- Address an issue with the systemd service for PostgreSQL on CentOS and Oracle Linux 8. Wait instead of sending a SIGKILL after the process times out.
- Address an issue with installing the correct PostgreSQL version on RHEL 8 where the default package to install is v10 (CLUS-1857)
Web frontend
- Address an issue with maintenance status showing an incorrect timezone (CLUS-1810)
- Address an improvement to add support to provide a different 'datadir' for a new MySQL replica (CLUS-1836) NOTE: This is only available in the new CCv2 frontend.
2023-02-06
clustercontrol-controller-1.9.5-6102
clustercontrol-ssh 1.9.5-142
- Address an issue with Xtrabackup and partial backups on Percona/Oracle MySQL 8.0. Variable 'include' depends on the Xtrabackup version used. (CLUS-1853)
- Address an issue with an incorrect job message when removing a Primary MS SQL Server after failover (CCV-595)
- Address an issue with 'DB Users' and MySQL 5.7 when 'password' is in the grants column (CLUS-1872)
- Address an issue where adding a new replica did not enable/install the query monitoring agent on the new node with MariaDB & PostgreSQL (CLUS-1861)
- Address an issue when importing a Galera cluster when special character '£' was used in the cmon user’s password (CLUS-1839)
- Address an issue to automatically restart the cmon-ssh process on Centos 8 (CLUS-1854)
2023-01-31
clustercontrol-controller-1.9.5-6084
- Address an issue where the Query Monitoring agent failed to install when using 'no internet'/offline repositories (CLUS-1853)
- Address an issue where a storage alarm was not cleared even though the disk space was reclaimed (CLUS-1672)
- Address an issue where synchronous replication with PostgreSQL failed to be properly setup (CLUS-1797)
- Address an issue to show in the job log if compression has been enabled for MS SQL Server backups (CLUS-1812)
2023-01-26
clustercontrol-1.9.5-8502
clustercontrol-controller-1.9.5-6068
- Address an issue where the backup compression option had no effect with MS SQL Server (CLUS-1812)
-
Address an issue where the Query Monitoring agent was not installed on a newly added node (CLUS-1823)
-
Address an issue with binary characters from last_sql_error field of MySQL host instances
Web frontend
- Address an issue when adding recipients for mail alerts (CLUS-1653)
- Address an issue where the test mail for notifications could not be sent using another email address than the logged-in user's (CCV2-600)
- Add a sleep step before creating the 'ccrpc' user with the offline install script
2023-01-12
clustercontrol-controller-1.9.5-6046
- Address an issue where CMON segfaults when running the 'get top queries' API request for MS SQL Server (CLUS-1780)
clustercontrol-controller-1.9.5-6045
- Address an issue parsing the UUID during bootstrapping Galera cluster (CLUS-1803)
- Address an issue with dropping wsrep_on session variable on connection (CLUS-1714)
- Address an improvement to send license expiration alerts 60 days prior by default instead of 30 days (CLUS-548)
- Address an issue where MaxScale deployment failed due to db user creation failure (CLUS-1764)
- Address improvements to support CMON-Proxy
2023-01-10
clustercontrol2-0.9.1-667
In this release for ClusterControl v2 (our new web application) we have added support for:
- PgBouncer
- Deploy/Import and node details page
- Database Growth Charts
- Database User Management with MySQL and PostgreSQL
- Restore External Backup
2022-12-19
clustercontrol2-0.9.0-643
- Address an issue where the 'restart job' button was missing (CCV2-656)
- Address an issue where the backup path was hardcoded for MS SQL Server (CLUS-1771)
2022-12-16
clustercontrol-controller-1.9.5-6004
clustercontrol-1.9.5-8494
clustercontrol2-0.9.0-639
- Address an issue installing using offline repositories with MS SQL Server (CLUS-1719)
- Address an issue with an incorrect URL to documentation when 'host is already in a cluster' errors are in the cmon log (CLUS-1743)
- Address an issue with the log collection job stopping if the pgbackrest.log (PostgreSQL/pgbackrest) file was not found. Changed from Error to Warning, which allows the job to continue collecting the rest of the logs (CLUS-1746)
- Address an issue with deployment failure for MS SQL Server on Centos/RHEL8 (CLUS-1720)
Web frontend
- Address an issue with pagination in the 'Activity center->Jobs' page (ClusterControlv2) (CCV2-650)
- Address an improvement to configure UI session expiration time. Add a define('SESSIONS_LIFETIME', 3600); parameter to the <webroot>/html/clustercontrol/bootstrap.php file (CLUS-1557)
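As a sketch of the setting described above, a longer two-hour session could be configured like this in <webroot>/html/clustercontrol/bootstrap.php (the 7200 value is an illustrative example, not a recommendation):

```php
<?php
// Illustrative example only: raise the UI session lifetime to 7200s (2 hours).
// The entry above states the parameter defaults to 3600.
define('SESSIONS_LIFETIME', 7200);
```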
2022-12-06
clustercontrol-1.9.5-8489
- Address an issue with offline installation using the <webroot>/app/tools/setup-cc.sh
2022-11-30
clustercontrol-controller-1.9.4-5956
- Address an issue with stopping failed master when >1 writeable nodes are found (CLUS-1527)
- Address an issue to restore an encrypted backup from another cluster (CLUS-1612)
- Address an improvement to stop a replica if the replication has a failure instead of keeping the replica running. A new CMON configuration parameter replication_stage_failure_stop_node=true|false can be used to set the behaviour (CLUS-1532)
- Address an issue with HAProxy so that when a PgBouncer node is removed from the cluster, HAProxy redirects to an existing node in the cluster, i.e., not the previously removed PgBouncer node (CLUS-1614)
- Address an issue where we could end up with multiple primaries after failover (PostgreSQL) (CLUS-1602)
- Address an issue where CMON keeps overwriting the 'mysql-server_version' variable for ProxySQL. It now sets it once at deployment (CLUS-1638)
- Address an issue with recovery jobs stuck in 'running' state (CLUS-1681)
- Address an issue with email alerts where the date header was off by 2h (CLUS-1682)
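For reference, the replication-stop behaviour introduced above is toggled through a CMON configuration parameter; a hedged example of how it might be set (the per-cluster file path is an assumption based on the usual layout):

```ini
# Illustrative fragment, e.g. /etc/cmon.d/cmon_<cluster_id>.cnf (path assumed).
# Stop a replica when a replication stage fails instead of leaving it running.
replication_stage_failure_stop_node=true
```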
2022-11-28
clustercontrol-controller-1.9.5-5950
- Address an issue with the cmon agent 'access denied' when the agents are installed on several nodes (CLUS-1665)
- Address an issue with the cmon agent unable to reconnect after database nodes are restarted (CLUS-1665, CLUS-1680)
- Address an issue with email alerts where the date header was off by 2h (CLUS-1682)
- Address an issue parsing the CMON configuration when upgrading. Fixes to the CMON and MySQL parsers (CLUS-1674)
2022-11-22
clustercontrol-controller-1.9.5-5938
- Address an issue with recovery jobs stuck in 'running' state (CLUS-1681)
2022-11-21
clustercontrol-controller-1.9.5-5934
- Address an issue with Redis / Redis Sentinel where the bind address was set to the private IP and not 0.0.0.0 by default (CLUS-1671)
2022-11-15
clustercontrol-1.9.5-8481
clustercontrol2-0.9.0-624
- Support for Ubuntu 22.04 LTS. Only works at the moment with PostgreSQL, MariaDB, MongoDB and MySQL Cluster (NDB)
- New CCv2 release v0.9.0
- ProxySQL Pages: Scheduler scripts, Process List, Monitoring
- Cluster list sorting / filtering
- User Management (RBAC, LDAP)
- System Logs, Error Reporter
- User Registration/Onboarding
- Bug fixes & polishing
2022-11-07
clustercontrol-controller-1.9.5-5911
- Address an issue to use a systemd override file for custom PGDATA and PGPORT with PostgreSQL (CLUS-1539)
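A systemd override file of the kind referenced above is typically a drop-in unit fragment; a minimal sketch, assuming a custom data directory and port (the unit name, paths and values here are illustrative, not taken from the entry):

```ini
# Hypothetical drop-in: /etc/systemd/system/postgresql.service.d/override.conf
# Point the PostgreSQL unit at a custom data directory and port.
[Service]
Environment=PGDATA=/data/pgsql/data
Environment=PGPORT=5433
```

After adding a drop-in like this, a `systemctl daemon-reload` is normally required for the change to take effect.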
2022-11-01
clustercontrol-controller-1.9.5-5901
- Address an issue with the topology view being incorrect when using short hostname or IP (CLUS-1645)
- Address an issue where CMON keeps overwriting the 'mysql-server_version' variable for ProxySQL. It now sets it once at deployment (CLUS-1638)
2022-10-24
clustercontrol-controller-1.9.5-5884
- Address a potential infinite loop issue when parsing the pg_hba.conf (PostgreSQL)
2022-10-24
clustercontrol-controller-1.9.5-5880
clustercontrol-1.9.5-8464
- Address an issue when updating grants in the pg_hba.conf file to only modify our own changes (CLUS-1627)
- Address an issue rebuilding a replica with sudo password set for the SSH user (CLUS-1630)
- Address an issue where we could end up with multiple primaries after failover (PostgreSQL) (CLUS-1602)
- Address an issue with HAProxy so that when a PgBouncer node is removed from the cluster, HAProxy redirects to an existing node in the cluster, i.e., not the previously removed PgBouncer node (CLUS-1614)
Web frontend
- Address an issue restoring encrypted backups with Xtrabackup that have the extension .aes256 (CLUS-1626, CLUS-1634)
- Address an issue with HAProxy deployment to show also the internal/data networks to be selected (CLUS-1535)
- Address PgBackrest tooltip issues in the backup page (CLUS-1629)
- Address an issue with re-render when adding a PgBouncer node. Subsequent nodes can now be added without manually having to resize or refresh the page (CLUS-1607)
2022-10-17
clustercontrol-controller-1.9.5-5861
clustercontrol-1.9.5-8445
- Address an improvement to stop a replica if the replication has a failure instead of keeping the replica running. A new CMON configuration parameter replication_stage_failure_stop_node=true|false can be used to set the behaviour (CLUS-1532)
- Address an issue to silence gethostname errors from the logs by reverting to use the hostname value in the cmon cnf files when checking for SMTP servers (EHLO) (CLUS-1624)
- Address an issue to restore an encrypted backup from another cluster (CLUS-1612, CLUS-1622)
- Address an issue to deploy ProxySQL on PXC 8 and MySQL 8 (CLUS-1507, CLUS-1616)
- Address an issue with parsing 'REVOKE' permission when showing/retrieving MySQL DB users (CLUS-1625)
Web frontend
- Address an issue to restore an encrypted backup from another cluster. Select the cluster to use for the backup decryption key. (CLUS-1612, CLUS-1622)
2022-10-11
clustercontrol-controller-1.9.5-5841
Controller
- Address an issue where MongoDB backups were shown as failed (new 'restore_to_time' to parse instead) after upgrade to 1.9.5 (CLUS-1601)
- Address an issue to disable binary logging on PXC (CLUS-878)
- Address an issue when creating a MySQL replica cluster with a non root user (CLUS-1508)
2022-10-07
clustercontrol-controller-1.9.5-5838
clustercontrol2-0.8.0-561
Controller
- Address an issue when creating a ProxySQL user with a regular expression for the DB schema
Web frontend
- What's new in the CCv2 web application:
- ProxySQL Pages: Variables, Users, Top Queries
- Change CMON Runtime / System Settings via the Cluster's settings menu
- New branding/theme
2022-10-03
clustercontrol-controller-1.9.5-5827 | 1.9.4-5826
clustercontrol-1.9.5-8438
clustercontrol2-0.7.0-551
- Address an issue installing MySQL 5.7 on Debian 11 (CLUS-1604)
- Address an issue calculating DB growth where the number of tables to include was limited to 25. It's now a configurable variable, cmon_max_table_on_db_size_calc, set to 100 by default. (CLUS-1561)
- Address an issue where a deleted user's sessions were automatically terminated (CCV2-542)
- Address an issue using a 'local mirrored repository' when adding new nodes. The 'vendor repo' was used instead of the 'local repository' (CLUS-1600)
- Address an issue to include cluster_id in the cluster list result set (CCV2-516)
- Address a segfault issue with MongoDB and DB growth calculation (CLUS-1516)
- Address an improvement to pass controller-id and mysql-db to cmon at start (CLUS-1588)
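The DB growth table limit mentioned above (CLUS-1561) is a plain cmon configuration value; a hedged example raising it (the value 200 is illustrative):

```ini
# Illustrative cmon configuration fragment: include up to 200 tables in the
# DB growth calculation (the entry above states the default is 100).
cmon_max_table_on_db_size_calc=200
```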
Web frontend
- Address an issue starting/stopping and rebuilding nodes for MS SQL Server (CCV2-492, CCV2-521)
- Address an issue to perform SSH connectivity check using the short form hostname with MS SQL Server deployments (CCV2-486, CCV2-475)
- Address an issue fetching the trial license (CCV2-545)
2022-09-27
clustercontrol-controller-1.9.5-5810 | 1.9.4-5809
- Address an issue to limit the number of node restart/recovery attempts for Galera. An exponential back-off mechanism (with base 3s) has been added. This applies to node and cluster recovery. (CLUS-1266) (Only in 1.9.5-5810)
- Address an issue rebuilding a PostgreSQL replica node on Centos if the postgresql.conf is stored in the data directory. (CLUS-1569)
- Address an issue with PgBouncer where it did not stop when it got unregistered. (CLUS-1580) (Only in 1.9.5-5810)
- Address an issue where an 'Invalid Security Configuration' alarm was sent even though 'cluster_ssl_enforce' was set to false on MariaDB 10.2 (CLUS-1550)
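The exponential back-off for Galera recovery attempts above (base 3s) can be sketched as follows; this is a minimal illustration assuming a doubling schedule and a retry cap, neither of which is specified in the entry:

```python
# Minimal sketch of a capped exponential back-off with a 3s base, illustrating
# the recovery behaviour described above. The doubling factor and the attempt
# cap are assumptions, not CMON internals.
def backoff_delays(base_seconds=3, max_attempts=5):
    """Seconds to wait before each successive recovery attempt."""
    return [base_seconds * (2 ** attempt) for attempt in range(max_attempts)]
```

With these assumed defaults the waits grow as 3s, 6s, 12s, 24s, 48s before recovery gives up.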
2022-09-19
clustercontrol-controller-1.9.5-5794
clustercontrol-1.9.5-8430
- Address an issue with MS SQL Server to improve automatic failover and prevent potential multiple primaries. The best secondary node to promote to a new primary is determined by the 'sync lag'. (CCV2-519, CCV2-464)
- Address an issue for MaxScale where a new promoted Primary was not properly reflected in MaxScale. The old primary was still receiving traffic. (CLUS-1558)
- Address an issue with failing QMON agent installation for PostgreSQL with existing users such as a 'monitor' user (CLUS-1547)
- Address an issue to improve forced failover for MS SQL Server (CLUS-1560)
- Address an issue where the SQLServerAdmin user was different when adding a new node for MS SQL Server (CCV2-501)
- Address an issue rebuilding a MS SQL Server node where the state must be either NOT_HEALTHY or NOT_SYNCHRONIZING (CLUS-1555)
Web frontend
- Address an issue with backup schedules where the next scheduled time for CEST was incorrect (CLUS-1546)
2022-09-12
clustercontrol-controller-1.9.5-5779
- Address an issue for MaxScale where dynamic changes to /var/lib/maxscale/maxscale.cnf.d files didn’t have any effect (CLUS-1538)
- Address an issue removing/uninstalling ProxySQL nodes (CLUS-1544)
- Address an issue installing Garbd and Keepalived with yum (CLUS-5344)
ClusterControl v1.9.5
2022-08-31
clustercontrol-1.9.5-8416
clustercontrol-controller-1.9.5-5742
clustercontrol-ssh-1.9.5-129
clustercontrol-cloud-1.9.5-355
clustercontrol-notifications-1.9.5-326
Use OpenSUSE 15/SUSE Linux Enterprise Server 15 with ClusterControl.
OpenSUSE is an open-source community project with Linux-based distributions, sponsored by SUSE Software Solutions Germany GmbH and other companies.
SUSE Linux Enterprise Server (SLES) receives more intense testing than the upstream OpenSUSE community distro, with the intention that only mature, stable versions of packages make it through to SLES.
Features
OpenSUSE 15/SUSE Linux Enterprise Server 15
Support for:
- MariaDB Server v10.x with mariabackup (Xtrabackup is not available)
- MariaDB Galera Cluster v10.x with mariabackup (Xtrabackup is not available)
- PostgreSQL v10+
- MySQL NDB Cluster
Current limitations:
- ProxySQL is currently not supported
- HAProxy is currently only supported on OpenSUSE
- Co-located PostgreSQL nodes are not supported
- TimescaleDB is not supported on OpenSUSE/SLES
- Oracle MySQL Replication is not supported
=============================
2022-08-31
clustercontrol-controller-1.9.4-5750 | 1.9.3-5749
- Address an issue with unregistering HAProxy nodes (CLUS-1518)
- Upgrade of the my.cnf template file for MariaDB Galera Cluster 10.6+
2022-08-29
clustercontrol-controller-1.9.4-5742
- Address an issue to still show FQDN for MS SQL Server when short names are used to setup Availability Groups (CCV2-502)
- Address an issue with Oracle MySQL and Percona 8.x to use caching_sha2_password as the default authentication plugin
- Address an issue with ProxySQL when importing MySQL users with caching_sha2_password. A 'plaintext password' is required otherwise the user will be skipped during import. (CLUS-1342)
2022-08-23
clustercontrol-controller-1.9.4-5730 | 1.9.3-5729
- Address an issue to set the correct backend 'mysql-server_version' in ProxySQL (CLUS-1473) - Only in 1.9.4
- Address an issue where root@localhost was still used (testing the connection) when adding a new replication slave (CLUS-1455) - Only in 1.9.4
- Address an issue for MaxScale where static changes to /etc/maxscale.cnf had no effect. The new default load_persisted_configs=false gives changes to /etc/maxscale.cnf priority over dynamic changes, which are saved to /var/lib/maxscale/maxscale.cnf.d (CLUS-1476)
2022-08-17
clustercontrol-controller-1.9.4-5720 | 1.9.3-5719
clustercontrol-1.9.4-8407
clustercontrol2-0.7.0-516
- Address an issue where the HAProxy admin user and password in the advanced settings were not properly set (CLUS-1409)
- Address an issue with PostgreSQL primary failover failure. All cluster nodes are now granted in all pg_hba.conf files at cluster deployment. (CLUS-1459)
- Address an issue to drop the cmon_host_log table since it is no longer in use
- Address an issue with MS SQL Server deployments using FQDN (CCV2-485, CCV2-473 Note: Additional improvements are in progress) - Only in 1.9.4
Web frontend
- Address an issue with MongoDB Replicaset deployment to support arbitrator nodes (CLUS-1458) - Only in 1.9.4
- Address an issue with MS SQL Server deployments to use the hostname returned from the SSH connection check (CCV2-475 Note: Additional improvements are in progress)
- Address an issue to show a link to the T&C on the login page (CLUS-1464)
2022-08-08
clustercontrol-controller-1.9.4-5695 | 1.9.3-5694 | 1.9.2-5690
clustercontrol-notifications-1.9.4-321 | 1.9.3-322 | 1.9.2-323
clustercontrol-1.9.4-8402
clustercontrol2-0.7.0-509
- Address an issue where ProxySQL deployment fails on MariaDB 10.6 (CLUS-1460)
- Address an issue where DB user / account creation fails on MariaDB 10.6 (CLUS-1431, CLUS-1449, CLUS-1456)
- Address an issue to include Controller / CMON and cluster name for notifications (CLUS-1416)
- Address an issue removing Keepalived nodes (CLUS-1462) - Only in 1.9.4
Web frontend
- Address an issue where the cluster load graph was stuck on loading (CLUS-1451)
- Address an issue with FQDN when deploying MS SQL Server 2019 (CCV2-473, CCV2-475)
2022-07-29
clustercontrol-controller-1.9.4-5669 | 1.9.3-5668
clustercontrol-1.9.4-8396 | 1.9.3-8395
- Address an issue with backwards compatibility on MongoDB where the role detection using the 'isMaster' command was removed for older versions (CLUS-1446)
- Address an issue where the deployment job for HAProxy with PostgreSQL got stuck (CLUS-1450)
- Address an issue with ProxySQL synchronization where using a user other than 'proxysql-monitor' failed to migrate to the new instance (CLUS-1367)
- Address further improvements to support ProxySQL users with sha2 passwords (CLUS-1342)
Web frontend
- Address an issue adding a MongoDB arbiter which failed because the incorrect type was used (regular router/mongos) (CLUS-1444)
- Address an issue with PgBouncer deploy/import page not showing up properly (CLUS-1437)
2022-07-28
clustercontrol-controller-1.9.3-5661
- Address an issue to limit the number of audit log records returned to 100 if no boundary is specified (CLUS-1407)
- Address an issue where the deployment job for HAProxy got stuck (CLUS-1450)
2022-07-22
clustercontrol-controller-1.9.4-5638 | 1.9.3-5639
- Address an issue where a stopped cluster changed state to 'failure' instead of 'stopped' (CLUS-1425)
- Address an issue where the ProxySQL synchronization job failed but showed up as successful (CLUS-1426)
- Address an issue where deleting a ProxySQL user only deleted it from the backend and not the frontend (CLUS-1428)
- Address an issue when updating the ProxySQL frontend user entry which did not update the backend entry (CLUS-1430)
- Address an issue with minor upgrades with MariaDB where packages to upgrade failed to match (CLUS-1362)
ClusterControl v1.9.4
2022-07-18
clustercontrol-1.9.4-8386
clustercontrol-controller-1.9.4-5624
clustercontrol-notifications-1.9.4-312
clustercontrol-ssh-1.9.4-127
clustercontrol-cloud-1.9.4-353
clustercontrol-clud-1.9.4-353
clustercontrol2-0.7.0
A shared filesystem snapshot repository can now be set up automatically with Elasticsearch at deployment time, and an AWS S3-compliant cloud snapshot repository can be used for backups instead of local storage.
We are continuing to add features to the new ClusterControl v2 web frontend. In this release you are able to:
- Deploy and Import
- New database: MongoDB Sharded cluster
- Load balancers: ProxySQL, MaxScale and Garbd
- Automatic setup of a shared NFS filesystem for Elasticsearch at deployment
- Cluster to cluster replication with MySQL and PostgreSQL clusters
- AWS S3 compliant cloud snapshot repository for Elasticsearch
- User profile and License management
- ProxySQL and MaxScale Nodes pages
Features
Scale In and Out
Add and remove nodes with:
- Redis Sentinel
- MS SQL Server 2019
- Elasticsearch
Elasticsearch 7.x|8.x
Full-text search and analytics engine.
- Deploy one node for test or development environments
- Deploy three or more nodes for clustered deployments with master or data roles
- Basic User Authentication with username and password
- TLS/SSL API endpoint encryption
- Backup Management with local storage repository
- Scaling out or in master or data nodes
Current limitations:
- No dashboards / performance charts
- Only Ubuntu 20.04 and RedHat/Centos 8 are supported
- Scheduled backups, configuration files management, and Upgrades are not supported at this time
Misc
- Customize the time before failover, and use systemctl instead of pg_ctl, for PostgreSQL
- replication_failover_wait_extra_sampling_rounds (default 0)
- Customize mail subject for notification mails
- email_subject_prefix
- Start a node in bootstrap mode for Galera
ClusterControl v2
- Deploy and Import
- New database: MongoDB Sharded cluster
- Load balancers: ProxySQL, MaxScale and Garbd
- Automatic setup of a shared NFS filesystem for Elasticsearch at deployment
- Cluster to cluster replication with MySQL and PostgreSQL clusters
- AWS S3 compliant cloud snapshot repository for Elasticsearch
- User profile and License management
- ProxySQL and MaxScale Nodes pages
FAQ
How can I start using Elasticsearch?
Please follow the instructions outlined in the ‘Getting Started with ClusterControl v2’ to install ClusterControl v2 and then use the ‘Service Deployment’ wizard to deploy an Elasticsearch cluster.
Can I separate the location of my Elasticsearch master and data nodes?
Yes, you can separate the master and data nodes to be on different hosts.
What are the minimum requirements for a clustered Elasticsearch deployment?
You need:
- At least three nodes in the 'master' role
- At least two nodes in the 'data' role
- At least two nodes of each role; 'master' and 'data'
Can I rebuild an Elasticsearch node?
No, there is no need; the Elasticsearch cluster automatically partitions the dataset across all available data nodes. You can remove and add a data node if you need to, for example, terminate and relaunch the VM the node runs on.
Can I deploy other load balancers like ProxySQL with ClusterControl v2's web application?
Yes, we have added support to deploy and import these load balancers:
- ProxySQL
- MaxScale
- Garbd
- PgBouncer (coming soon)
- Rules
- Servers
Work In Progress (coming soon)
- Monitor
- Top Queries
- Users
- Variables
- Scheduler scripts
- Node performance
- Process list
2022-07-18
clustercontrol-1.9.3-8378
Web frontend
- Address an issue where audit log record retrieval was not restricted. It now uses pagination correctly. (CLUS-1407)
2022-07-07
clustercontrol-controller-1.9.3-5598 | 1.9.2-5599
clustercontrol-1.9.3-8374
- Address an issue where the my.cnf file was modified after adding a MySQL (Galera) node. The my.cnf is now considered immutable if the cluster has been imported (CLUS-1396)
- Address an issue starting/stopping EOL version of PostgreSQL 9.6 (CLUS-1406)
Web frontend
- Address an issue to toggle off native ProxySQL clustering if import configuration is set and vice versa (CLUS-1368)
2022-06-24
clustercontrol-controller-1.9.3-5569
- Address an issue where the same parameter name (MySQL) with lower and upper case was saved twice (CLUS-1372)
- Address an issue to upgrade CMON when the MySQL password used had special characters (CLUS-1365)
2022-06-24
clustercontrol-controller-1.9.3-5565 | 1.9.2-5564
clustercontrol-1.9.3-8355 | 1.9.2-8354
- Address an issue with rebuild/add replication node (MySQL 5.7) with parsing the binary log position (Only in 1.9.3) (CLUS-1357)
- Address an issue with rebuilding replication node (MySQL) due to the format of host:port (CLUS-1370)
- Address an issue with ProxySQL clustering when synchronizing settings for new nodes (further improvements planned) (Only in 1.9.3) CLUS-1340
Web frontend
- Address an issue with rebuilding a replication node (MySQL) where the primary node is not recognized to be part of the cluster (CLUS-1360)
2022-06-08
clustercontrol-controller-1.9.3-5544 | 1.9.2-5545
clustercontrol-1.9.3-8345 | 1.9.2-8344
- Address an issue with MySQL DB user management to handle SENSITIVE_VARIABLES_OBSERVER introduced with MySQL 8.0.29 (CLUS-1331) (Only in 1.9.3)
- Address further improvements to correctly handle backup retentions (CLUS-1331)
- Address an issue where cluster de-registration cleanup is now taken care of in the backend/cmon (CLUS-1336)
Web frontend
- Address an issue where adding an external mail notification address failed (CLUS-1339)
- Address an issue to indicate whether a MySQL node is readonly or writable when rebuilding replication secondary (CLUS-1314) (Only in 1.9.3)
- Address an issue where cluster de-registration cleanup is now taken care of in the backend/cmon (CLUS-1336)
2022-06-08
clustercontrol-controller-1.9.3-5529 | 1.9.2-5530
clustercontrol-1.9.3-8335 | 1.9.2-8336
- Address an issue where rebooting a host is stuck. A hard timeout of 60s has been enabled. (CLUS-1322) (Only in 1.9.3)
- Address an issue with backup retention where backups were not purged in time (CLUS-1331) (Only in 1.9.3)
- Address an issue where backups and backup records were left after deleting a cluster (CLUS-1218)
- Address an issue where CMON retries connecting to a MySQL node with an insecure connection. Use the new connect_heartbeat_when_faulty=30 (30s default) to block connection attempts until the correct certs are used/configured (CLUS-1278)
Web frontend
- Address an issue when deleting a cluster to choose whether backups should be removed/deleted as well or left intact (CLUS-1218)
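The connection-blocking parameter introduced above is a plain cmon configuration entry; a hedged example raising it (the 60s value is illustrative; the entry states 30s is the default):

```ini
# Illustrative fragment: block reconnection attempts to a MySQL node with a
# faulty/insecure connection for 60s (default per the entry above is 30s).
connect_heartbeat_when_faulty=60
```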
2022-05-31
clustercontrol-controller-1.9.3-5500 | 1.9.2-5501
- Address an issue where PgBackRest failed because the backup host’s port was not passed forward (CLUS-1323 | CLUS-1319)
- Address an issue to customize the time before attempting a PostgreSQL failover. Increase the number of sampling rounds with 'replication_failover_wait_extra_sampling_rounds (default 0)’ (CLUS-523)
- Address an issue where CMON segmentation faults during backup verification (CLUS-1321)
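The failover-delay parameter above is likewise a cmon configuration value; an illustrative example adding two extra sampling rounds (the value 2 is an assumption):

```ini
# Illustrative fragment: wait two extra sampling rounds before attempting a
# PostgreSQL failover (the entry above states the default is 0).
replication_failover_wait_extra_sampling_rounds=2
```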
2022-05-27
clustercontrol-controller-1.9.2-5487 | 1.9.3-5488
- Address an issue where rebuilding a replication secondary (slave) failed on Galera clusters (CLUS-1303)
2022-05-24
clustercontrol-1.9.2-8326 | 1.9.3-8324
clustercontrol-controller-1.9.2-5480 | 1.9.3-5475
- Address an issue where MariaDB 10.5.16 and later changed xtrabackup_binlog_info. This caused a parsing error which could lead to rebuilding/staging of secondary replication nodes failing (CLUS-1316)
- Address an issue where importing MySQL/MariaDB systems could fail due to a parsing error of the IP/hostname address (CLUS-1315)
- Address an issue in ProxySQL when syncing nodes could cause privileges to not be fully synced (CLUS-1289)
Web frontend
- Address an issue with license stats when counting the number of Galera nodes (CLUS-1317)
2022-05-23
clustercontrol-controller-1.9.3-5466
- Address an issue with MariaDB where the wrong mysqld binary name was used (mysqld vs mariadbd) (CLUS-1312)
- Address an issue where backup records were not added for some failed backup jobs (CLUS-1272) (clustercontrol-controller-1.9.2-5471)
- Address an issue to use systemctl instead of pg_ctl to start/stop PostgreSQL (CLUS-1260)
- Address an issue to customize the mail subject with a new parameter - 'email_subject_prefix' (CLUS-1244)
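As a sketch of the mail-subject parameter above (the prefix string is an illustrative example):

```ini
# Illustrative fragment: prefix all notification mail subjects.
email_subject_prefix=[PROD-DB]
```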
2022-05-19
clustercontrol-1.9.3-8321
clustercontrol-ssh-1.9.3-125
- Address issues upgrading the 'clustercontrol-ssh' package
Web frontend
- Address issues with informing and re-directing users to CCv2 with Elasticsearch, Redis and SQL Server (CLUS-1301, CLUS-1304)
2022-05-13
clustercontrol-controller-1.9.3-5451
clustercontrol-controller-1.9.2-5450
clustercontrol-1.9.2-8318
- Address an issue where pg_backrest created the directory /home/pgbackrest. It now uses /var/lib/pgbackrest instead (CLUS-1173)
- Address an issue with warnings of a non-existing maxscale1.log file (CLUS-1246)
- Address an issue with MS SQL Server deployment due to failure of creating a SA user (CCv2-281)
Web frontend
- Address an issue with Cluster to Cluster replication where it showed duplicated clusters (CC v1.9.2) (CLUS-1229)
========================================================================
ClusterControl v1.9.3
2022-05-10
clustercontrol-1.9.3-8303
clustercontrol-controller-1.9.3-5439
clustercontrol-cloud-1.9.3-350
clustercontrol-notifications-1.9.3-310
clustercontrol-ssh-1.9.3-122
clustercontrol2-0.6.0-425
In this initial version you can use Elasticsearch with single node deployment for development or use it for production in a clustered deployment complete with backup management and scaling.
ClusterControl v2 (new web frontend) is becoming more feature complete and we have added support for Elasticsearch, MySQL Galera, PostgreSQL, TimescaleDB and MongoDB Replicaset.
We still have some ways to go, however more of the core features to manage our full range of supported database technologies are becoming available to use.
In ClusterControl v1’s web application there is now a dedicated audit log page to track user and system activities for increased security and compliance.
Features
Elasticsearch 7.x|8.x
Full-text search and analytics engine.
- Deploy one node for test or development environments
- Deploy three or more nodes for clustered deployments with master or data roles
- Basic User Authentication with username and password
- TLS/SSL API endpoint encryption
- Backup Management with local storage repository
- Scaling out master or data nodes
- No cloud storage (object) repository
- No dashboards / performance charts
- Only Ubuntu 20.04 and RedHat/Centos 8 are supported
- Configuration files management and upgrades are not supported at this time
Audit Logging
- New Audit Log page accessible via the Activity->Audit Log (ClusterControl v1 web frontend) and Activity Center (ClusterControl v2 web frontend)
- Activities tracked:
- User authentication, cloud credentials, database configuration and users changes
- CMON configuration changes and jobs
- CC user management changes (Users and ACLs), ops report
- New CMON audit log configuration parameters
- audit_logfile, default log filename (/var/log/cmon_audit.log)
- audit_ignore_users, skip default users (system, ccrpc)
Misc
- Streaming backups with xtrabackup can now be compressed with the addition of new CMON configuration parameters (no frontend at this time)
- xtrabackup_compress_threads, xtrabackup_parallel, xtrabackup_manual_args
- The cluster overview in ClusterControl v1 application is now using data from the Prometheus server (if available)
- Delete local backup after cloud upload
- New bootstrap option when starting Galera nodes
- New audit log configuration parameters
- audit_logfile, default log filename (/var/log/cmon_audit.log)
- audit_ignore_users, skip default users (system, ccrpc)
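The new parameters listed above are plain cmon configuration entries; a combined, hedged example (all values are illustrative; the audit defaults are the ones stated above):

```ini
# Illustrative cmon configuration fragment; values are examples only.
audit_logfile=/var/log/cmon_audit.log
audit_ignore_users=system,ccrpc
xtrabackup_compress_threads=4
xtrabackup_parallel=2
```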
ClusterControl v2
- Deploy/Import and manage MySQL Galera, PostgreSQL, TimescaleDB and MongoDB Replicaset clusters
- Backup management, Cluster and node actions
- Dashboards / performance charts for database and load balancers
- Deploy/Import and manage HAProxy with Keepalived
- HAProxy status/performance node page
- SSH host health-check to verify SSH connectivity
- Multiple maintenance modes
- Rebuild node for MS SQL Server nodes
FAQ
How can I start using Elasticsearch?
Please follow the instructions outlined in the ‘Getting Started with ClusterControl v2’ to install ClusterControl v2 and then use the ‘Service Deployment’ wizard to deploy an Elasticsearch cluster.
Can I separate the location of my Elasticsearch master and data nodes?
Yes, you can separate the master and data nodes to be on different hosts.
What are the minimum requirements for a clustered Elasticsearch deployment?
You need:
- At least three nodes in the 'master' role
- At least two nodes in the 'data' role
- At least two nodes of each role; 'master' and 'data'
Can I rebuild an Elasticsearch node?
No, there is no need; the Elasticsearch cluster automatically partitions the dataset across all available data nodes. You can remove and add a data node if you need to, for example, terminate and relaunch the VM the node runs on.
Can I deploy other load balancers like ProxySQL with ClusterControl v2's web application?
No, not with this release unfortunately. We are working on adding the remaining supported load balancers and connection pooling. Expect it to be available in an upcoming release of the ClusterControl v2’s user interface.
- ProxySQL
- MaxScale
- Garbd
- PgBouncer
====================================================================
ClusterControl v1.9.2
2022-04-24
clustercontrol-controller-1.9.2-5384
- Address an issue to allow adding a new slave even if log_slave_updates was not set on a master (CLUS-1254)
- Address an issue to only install required snmp packages if snmp has been enabled (CLUS-1138)
- Address an issue promoting a MS SQL Server replica to become a new primary (CCV2-323)
2022-04-14
clustercontrol-controller-1.9.2-5345
- Address an issue with the installation of the snmptrap package (CLUS-1138)
- Address an issue deploying MariaDB Galera with SSL enabled (CLUS-1208)
2022-03-28
clustercontrol-controller-1.9.2-5308
- Address an issue with MySQL 8.x when accessing DB Users and Schemas. New privileges for MySQL 8.x added to our parser. (CLUS-1154)
2022-03-21
clustercontrol-controller-1.9.2-5292
clustercontrol-1.9.2-8273
- Address an issue when PgBackRest was setup to use the same host as the backup verification host (CLUS-1173)
- Address an issue with deprecated SSL parameters that are no longer in use with MySQL 5.7.34 and 8.0.24 and newer versions (CLUS-1093)
- Address an issue to execute snmptrap with sudo to run as root (CLUS-1138)
- Address an issue with MS SQL Server to handle overriding existing /etc/odbcinst.ini to use the correct odbc ini file (CCV2-305)
- Address an issue where setting disk space thresholds for Critical and Warning to 100% still sent alarms (CLUS-1144)
Web frontend - Address an issue with blank ProxySQL pages for Monitoring, Users, Rules, and Variables (CLUS-1162)
2022-03-07
clustercontrol-controller-1.9.2-5242
clustercontrol-1.9.2-8244
- Address an issue with snmptrap installation on RHEL 8 and improve tracing for troubleshooting (CLUS-1138)
- Address an issue with MS SQL server cluster where a node was verified twice (CCV-279)
- Address an issue with MS SQL Server which requires that the hostname used for the nodes is identical to what the 'hostname' command returns on the hosts. The hostname requirement now stops a deployment job much earlier (CCV-281)
- Address an issue importing a MariaDB Galera cluster where 'wsrep_incoming_addresses' values were all 'AUTO' (CLUS-1137)
- Address an issue where the disk filled up without any alarms. The 'home', 'staging' and 'backup' directories are now all monitored (CLUS-1082)
Web frontend - Address an issue when upgrading ClusterControl with ProxySQL v1.x which has been deprecated. An upgrade message is now shown instead of a broken page.
Note: We no longer support ProxySQL v1.x since ClusterControl v1.9.0. Please upgrade to ProxySQL 2.x. (CLUS-1130)
2022-02-28
clustercontrol-controller-1.9.2-5213
clustercontrol-1.9.2-8236
- Address an issue with PgBackRest installations where the rpm EPEL repo was not installed (CLUS-745)
- Address an issue where the /etc/passwd file was used to check for existing users. This file is no longer used. (CLUS-1134)
- Address an issue with single node MS SQL server clusters which failed to show up in the cluster and node lists (CCV2-279)
- Address issues with the QMON agents where the returned status was 'none' and with the storage usage of /var/lib/cmnd/.storageConnection. The default storage retention has been changed from 7 days to 24h (CLUS-1112)
- Address an issue with CC user management where the system users and groups failed to be automatically created
- Address an issue with log 'pollution' where pulling non-existing log files was repeatedly mentioned (CLUS-953). This fix is also in build 1.9.1-5214
- Address an issue with patch management where 'Installed packages' was empty with PostgreSQL 14 (CLUS-1039)
- Address an issue with MySQL recovery when the error code is either 2002 or 2003 (CLUS-1117)
- Address an issue with license checks for MySQL and MongoDB (CLUS-791)
- Upgrade of MySQL Oracle repository signing key
Web frontend - Address an issue with spotlight where the QMON agents’ state was shown as unknown (CLUS-782)
2022-02-18
clustercontrol-controller-1.9.2-5177
clustercontrol-1.9.2-8229
- Address an issue where MaxScale failed to be deployed due to trying to install the deprecated 'mmmon' monitor (CLUS-1068)
Web frontend - Address an issue where MongoDB replicaset cluster failed to be converted to a sharded cluster (CLUS-903)
- Address an issue where PostgreSQL v9.6 still could be selected for deployment (CLUS-1104)
2022-02-16
clustercontrol-controller-1.9.2-5168
- Address further issues with Percona xtrabackup v8.0.27 and later versions (CLUS-1072)
2022-02-14
clustercontrol-controller-1.9.2-5160
clustercontrol-1.9.2-8225
- Address an issue where a failed authentication response is missing an error message (CLUS-1067)
- Address an issue where ProxySQL credentials were exposed in the job log (CLUS-1079)
- Address an issue to support Percona xtrabackup v8.0.27 and later versions (CLUS-1072)
- Address an issue where stopping and (re)starting a MySQL Replication node failed due to the node not being part of the topology (CLUS-1091)
Web frontend - Address a potential php API vulnerability where code could be executed through the endpoint (CLUS-1033)
- Address an issue with the PostgreSQL import wizard where the 'basedir' is not needed (CLUS-1035)
- Address an issue where adding a MongoDB shard fails due to a wrong configuration template in the job spec (CLUS-826)
- Address an issue where TimescaleDB v13 could not be deployed
2022-02-04
clustercontrol-controller-1.9.2-5116
clustercontrol-1.9.2-8218
- Address an issue where failed MongoDB import job log entries were missing in the cmon logs (CLUS-1066)
- Address an issue where the cmon logs were spammed with cmon cloud refresh log entries (CLUS-1065)
- Address an issue with CMON crashing when importing a MongoDB cluster (CLUS-1061)
- Address an issue where restarting CMON would change a secondary MS SQL Server replication state (CLUS-1049)
- Address an issue where the MS SQL Server password was not ODBC string safe
Web frontend - Address an issue where a ProxySQL node could not be installed on an arbitrary host (CLUS-945 take 2)
- Address an issue where the created timestamp for failed backups was missing in the alarms log/view (CLUS-733)
- Address an issue where the keepalived deployment button was broken (CLUS-1040)
2022-01-27
clustercontrol-controller-1.9.2-5090
- Address an issue where ‘read-only’ was set for the MySQL server (primary) due to differences in the hostname saved in CC at deploy/import time vs the ‘report_host’ in the my.cnf file (CLUS-919 reopened)
- Address issues with MS SQL Server
- Improve ‘Start node’ by waiting 30s (default) to check if the SQL Server has been started and is accepting queries after a start ‘service’ has been requested (CLUS-1042)
- CREATE MASTER KEY was created twice (CLUS-1038)
- Address an issue where the CMON log file contains cmon-cloud entries when there are no cloud integrations configured (CLUS-992)
2022-01-25
clustercontrol-1.9.2-8207
clustercontrol-controller-1.9.2-5082
- Address issues with MS SQL Server
- Create a default DB 'admindb' for AlwaysOn Availability Group (CLUS-1005)
- Promote Secondary with force=false did not work properly (CLUS-1017)
- The monitor user and certs for backups were only created on the primary and not for all nodes (CLUS-1042)
- Address issues with imperative scripts (CLUS-1011)
- validate_sst_auth.js - not applicable on PXC 8.0
- grant_no_password.js - mysql.pxc.sst.role does not have a password so that is ok
- unused_indexes.js - do not inspect p_s schema
- Address an issue with HAProxy and PostgreSQL where RW and RO ports (rw_splitting) were not handled properly (CLUS-1031)
(web frontend) - Address an issue with shell_exec (PHP) where a user could inject and run potentially malicious code (CLUS-1033)
- Address an issue with broken documentation links on the deployment/import wizard (CLUS-995)
- Address an issue with inconsistent timestamp on failed backup alarms for the web frontend (CLUS-733)
====
2022-01-13
clustercontrol-1.9.2-8197
clustercontrol2-0.50-281
clustercontrol-controller-1.9.2-5066
clustercontrol-cloud-1.9.2-332
clustercontrol-ssh-1.9.2-118
clustercontrol-notifications-1.9.2-306
This is a high availability deployment which does not require a shared storage solution and instead uses a replication setup for the MS SQL Server nodes.
It can be used for read-scaling, to offload backups to a secondary node, or to provide fault tolerance.
You can also now enable SNMP traps to send alarms/alerts to SNMP monitoring systems.
This will enable our customers to centralize notifications for their managed services and devices with EMS management platforms.
It can help reduce email load and makes alerts more visible in a centralized location, where they can be acted on in a more timely manner.
And finally, support has been added for the latest versions of AlmaLinux 8.x, RockyLinux 8.x, Debian 11.x, MariaDB 10.6, PostgreSQL v14 and TimescaleDB.
Features
Microsoft SQL Server 2019
Always On availability groups with Microsoft SQL Server 2019
- Deploy 1 primary and up to 7 replica nodes.
- Automatic failover of failed primary / Promote replica to primary.
- Performance monitoring.
- Upload backups to the cloud.
Current limitations:
- Only asynchronous replication mode is supported for AlwaysOn.
- Only Ubuntu 20.04 and RedHat/Centos 8 are supported.
- Rebuild replication node is currently not supported.
- Only MS SQL Server 2019 is supported.
- Configuration files management, Scaling and Upgrades are not supported.
- Backups cannot be stored on the Controller (CMON) host.
- SSL/TLS is not supported.
Simple Network Management Protocol - SNMP (v2c)
- Send ClusterControl alarms / alerts to SNMP monitoring systems with SNMP traps.
- Easy to enable by adding an SNMP monitoring/target host, port and a trap community string.
- Edit the CMON configuration file directly or make the changes using the 'Runtime configuration'
- Automatically generates a ClusterControl MIB file.
- Severalnines Private Enterprise Number (PEN) is 57397.
- Current limitations:
- Only support for SNMP v2c protocol.
- No client side filtering of the alarms - all alarms are sent to the SNMP monitoring host.
- Only one SNMP monitoring target/host can be specified.
ClusterControl v2
- Deploy Always On availability group for MS SQL Server 2019
- Cloud backups - upload backups to any S3 compliant cloud storage provider
- Performance monitoring
- MS SQL Server 2019
New versions support
- AlmaLinux 8.x, RockyLinux 8.x, Debian 11.x.
- MariaDB 10.6.
- PostgreSQL v14.
- TimescaleDB with PostgreSQL v13 and v14.
FAQ
How can I start using 'Always on Availability Groups' with MS SQL Server?
Please follow the instructions outlined in 'Getting Started with ClusterControl v2' to install ClusterControl v2 and then use the 'Service Deployment' wizard to deploy an 'AlwaysOn AG' MS SQL Server 2019 database.
What are Always On Availability Groups (AlwaysOn AG) for MS SQL Server?
It is a replicated environment where you have one primary replica node and up to 7 secondary replicas using asynchronous replication. It doesn't require a shared storage solution like AlwaysOn Failover Cluster Instances.
How many secondary MS SQL Server replicas can I deploy?
You can deploy up to 7 replica nodes.
How can I get started using SNMP with ClusterControl?
You need to edit your cluster's CMON cnf file and add a few parameters. The file is usually located at /etc/cmon.d/cmon_N.cnf where N is your 'clusterID'. The parameters to add are:
- snmp_trap_target_host=<SNMP monitoring host> - The SNMP monitoring host/target (required).
- snmp_trap_target_port=162 - The default SNMP port to use (optional).
- snmp_community_str='private' - Default SNMP trap community string (optional).
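For illustration, a cmon_N.cnf with these parameters might look like the following; the hostname and community string are placeholders, not defaults:

```ini
# /etc/cmon.d/cmon_1.cnf (cluster ID 1) -- SNMP trap settings
snmp_trap_target_host=monitor.example.com   ; required: SNMP monitoring host (placeholder)
snmp_trap_target_port=162                   ; optional: default SNMP trap port
snmp_community_str='private'                ; optional: trap community string
```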
After you have set these configuration parameters the last step is to restart the CMON process.
$ systemctl restart cmon
You can verify that SNMP has been enabled by:
1) Checking the /var/log/cmon.log file for the following package installation outputs:
2021-12-23T04:55:56.617Z : (INFO) Installing SNMP requirement: snmp
2021-12-23T04:56:29.649Z : (INFO) Installing SNMP requirement: snmp-mibs-downloader
2) Verifying that the ClusterControl (CMON) MIB file has been generated/created:
ls -l /usr/share/snmp/mibs/SEVERALNINES-CLUSTERCONTROL-MIB.txt
-rw-r--r-- 1 root root 47012 Dec 23 04:57 /usr/share/snmp/mibs/SEVERALNINES-CLUSTERCONTROL-MIB.txt
This file can also be copied to the SNMP manager to be used there if needed.
Which version of the SNMP protocol do you support?
There are three different versions of SNMP:
- SNMP version 1 (SNMPv1) - This was the first implementation, operating within the Structure of Management Information (SMI) specification, and is described in RFC 1157.
- SNMP version 2 (SNMPv2) - This version was improved to support more efficient error handling. It was first introduced as RFC 1441; the widely used community-based variant is described in RFC 1901 and is often referred to as SNMPv2c.
- SNMP version 3 (SNMPv3) - This version improves security and privacy. It was introduced in RFC 3410.
The most recent version, SNMP version 3, includes new security features that add support for authentication and encryption of SNMP messages as well as protecting packets during transit.
We support only SNMPv2c at the moment.
How does ClusterControl provide SNMP integration / support?
ClusterControl uses the command line tool 'snmptrap' to send traps to an SNMP monitoring host.
What is the Severalnines Private Enterprise Number (PEN)?
It is 57397.
What is a Management Information Base - MIB?
SNMP Management Information Bases (called MIBs for short) are data structures that define what can be collected from the local device or software and what can be changed and configured.
There are many MIBs defined by standards bodies such as the IETF and ISO, as well as proprietary MIBs defined by specific IT equipment vendors.
What does our ClusterControl MIB look like?
The ClusterControl MIB is available here (https://gist.github.com/alex-s9s/79c462fbe7f517bff24a2724f9b67637).
ClusterControl v1.9.1
2022-01-12
clustercontrol-controller-1.9.1-5063
- Address an issue with creating a MySQL user with Proxy grant privileges (CLUS-991)
- Address an issue where the hostname was still missing in the webhook event (CLUS-931)
- Address an issue where backup restore failed with pg_dump backups (CLUS-986)
- Address an issue with inconsistent backup alarm timestamp when a backup fails (CLUS-733)
- Address an issue to improve log output when backups are taking time during the checksum process
2021-12-22
clustercontrol-controller-1.9.1-5034
- Address an issue where storage space alerts were sent even though thresholds at 95/98% were not reached (CLUS-923, CLUS-592)
- Address an issue with incorrect maxscale log file names (CLUS-979)
- Address an issue to abort deployment jobs earlier if there is a non-compatible OS on the controller host (CLUS-971)
2021-12-15
clustercontrol-1.9.1-8186
- Address an issue where an arbitrary ProxySQL host could not be entered (CLUS-945)
- Address an issue when editing ProxySQL users where the UI elements on the page were squished together (CLUS-958)
- Address an issue with notifying users that the license has expired (CLUS-954)
2021-12-13
clustercontrol-controller-1.9.1-5006
- Address a text typo with the cmon-controller package (CLUS-965)
- Address an issue with inconsistent alarm counters (CLUS-151)
- Address an issue to disable alarms for specific mount points (CLUS-946)
- Address an issue with ProxySQL configuration synching (importing) with a custom port (CLUS-891)
- Address an issue to support AlmaLinux 8.4 (CLUS-936)
2021-12-06
clustercontrol-controller-1.9.1-4990
clustercontrol-1.9.1-8173
- Address an issue with MySQL Replication on Oracle Linux 7.9 where Percona backup tools were not properly installed (CLUS-947)
- Address an issue where the Galera nodes for ProxySQL were always offline (CLUS-939)
- Address an issue using an older version of the Prometheus mysqld_exporter with MariaDB 10.5 (CLUS-909)
- Address an issue for ProxySQL host groups where the MySQL servers’ operational status are now taken from ProxySQL’s runtime_mysql_servers (CLUS-816)
- Address an issue where ProxySQL nodes showed incorrect node status (CLUS-939, CLUS-816)
Frontend - Address an issue where ProxySQL nodes showed incorrect node status (CLUS-816)
2021-12-01
clustercontrol2-0.4.0-253
- Address an issue with the cluster selection dropdown in the backup wizard
2021-11-29
clustercontrol-controller-1.9.1-4975
clustercontrol-notifications-1.9.1-303
- Address an issue with user specified ProxySQL admin user being ignored at deployment time (CLUS-906)
- Address an issue when importing HAProxy configuration due to keywords missing in our lexer (CLUS-929)
- Address an issue with webhooks payload missing node hostname (CLUS-931)
- Address an issue with HAProxy deployment where the s9smysqlchk user was created with insufficient privileges (CLUS-775)
- Address an issue where wsrep_on was turned off with PXC 8.0 clusters (CLUS-834)
- Address an issue so that cmon-events is now restarted automatically at package upgrade
2021-11-24
clustercontrol-1.9.1-8170
clustercontrol-notifications-1.9.1-300
clustercontrol-cloud-1.9.1-328
- Address an issue with editing backup schedules where ‘exclude tables’ reverted to ‘include tables’ instead (CLUS-914)
- Address an issue with the notifications configuration for ‘all clusters’. It previously only considered clusters that were managed at the time the configuration was created. It now correctly sends notifications for all managed clusters regardless of when a cluster was created/added (CLUS-396)
- Address an issue where the node’s action menu did not appear for the nodes on the nodes page (CLUS-877)
- Address an issue with multipart uploads to cloud object storage (S3) where the number of parts per upload has been increased to 1000 (CLUS-858)
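As background, a cap on the number of parts per multipart upload implies a minimum part size that grows with the backup size. A minimal illustrative sketch of that arithmetic — not ClusterControl's actual upload code, and the function name is invented:

```python
import math

# Illustrative only -- not ClusterControl's actual upload code.
# With at most `max_parts` parts per multipart upload, the part size must
# grow with the object size (S3 also enforces a 5 MiB minimum part size).
MIN_PART = 5 * 1024 * 1024  # 5 MiB S3 minimum

def part_size(object_size, max_parts=1000):
    """Smallest part size (bytes) that fits the object into max_parts parts."""
    return max(MIN_PART, math.ceil(object_size / max_parts))

# A 200 GiB backup split across at most 1000 parts needs ~205 MiB parts.
print(part_size(200 * 1024**3) // (1024 * 1024))  # prints 204
```

With only a handful of parts allowed, very large uploads would otherwise hit the part-count ceiling, which is consistent with the 200GB Azure upload limit fixed in an earlier entry.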
2021-11-23
clustercontrol2-0.4.0-248
In this build we've added support for:
- Dashboards (agent based) with the Prometheus node
- Enable and reconfigure agent based monitoring (Prometheus exporters)
- System Overview, Cluster Overview, and Database Performance dashboards
- Cluster specific pages
- Dashboard/Performance monitoring, Nodes, Backups, Alarms, Jobs and Logs pages
2021-11-15
2021-11-08
2021-11-03
MS SQL Server 2019 has been one of our most customer-requested databases and is also one of the most recommended and popular enterprise-grade databases with over 30 years on the market.
As our first supported closed source database, its initial release includes automated deployments, node recovery and backup management.
In addition, we now support MongoDB with the most recent GA versions v4.4 and v5.0.
Let us know what you think about these features and changes anytime.
Features
Microsoft SQL Server 2019
Currently, MS SQL Server 2019 can be deployed as a standalone/single server. We support ‘full', ‘differential’ and ‘transaction log’ backups; the backups can only be stored on the database node.
Current limitations:
- Only Ubuntu 20.04 and RedHat/Centos 8 are supported.
- Only MS SQL Server 2019 is supported.
- Configuration files management, Scaling and Upgrades are not supported.
- Backups cannot be stored on the Controller (CMON) host.
- SSL/TLS is not supported.
- Always On availability groups.
- Performance monitoring (in the CCv2 frontend).
- Cloud upload for backups.
- Backup verification.
MongoDB
We have added support for these new GA versions of MongoDB:
- Percona Server for MongoDB v4.4 and v5.0
- MongoDB Inc v4.4 and v5.0
ClusterControl v2
- Importing a Redis with Sentinel cluster is now supported.
- New cluster actions
- Enable/disable maintenance mode
- Enable/disable readonly (MySQL)
- Restart cluster (Galera)
- Remove cluster
- New node actions for MySQL
- Enable/disable maintenance mode
- Enable/disable readonly
- Enable/disable binary logging (Galera)
- Promote replica
- Stop replica
- Reset replica
- Rebuild replica
- Change replication primary
- Reboot host
- Restart node
- Stop node
- Remove node
Misc
- Split brain improvements with MySQL Replication (Primary/Replicas). The nodes will be set to ‘read only’ if we cannot reliably detect a new primary.
- PgBackRest backups can now enable encryption from the web application (previously only available with the s9s command line client).
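The split-brain rule above — promote only when a new primary can be reliably detected, otherwise stay read-only — can be sketched as follows. This is a hypothetical illustration of the decision, not CMON's actual implementation; all names are invented:

```python
# Hypothetical sketch of the split-brain rule described above -- not CMON's code.
# If exactly one replica is unambiguously the most up-to-date, promote it;
# otherwise keep every node read-only to avoid a split brain.
def choose_primary(replicas):
    """replicas maps hostname -> replication position (higher = more recent).
    Returns the host to promote, or None to keep all nodes read-only."""
    if not replicas:
        return None
    best = max(replicas.values())
    leaders = [host for host, pos in replicas.items() if pos == best]
    return leaders[0] if len(leaders) == 1 else None

print(choose_primary({"db1": 120, "db2": 115}))  # db1 is unambiguously ahead
print(choose_primary({"db1": 120, "db2": 120}))  # tie -> None (stay read-only)
```

The design choice is conservative: a tie (or no reachable replicas) means no promotion, trading availability for consistency.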
FAQ
How can I start using MS SQL Server 2019 or Redis with ClusterControl?
We are transitioning to a new version of our ClusterControl web frontend/application and these new databases are currently only available by using our technical preview of ClusterControl v2. Please follow the instructions outlined in the 'Getting Started with ClusterControl v2' to install ClusterControl v2 and then use the ‘Service Deployment’ wizard to deploy or import your Redis or MS SQL Server 2019 database.
What are the OS requirements for deploying MS SQL Server 2019?
Deploy MS SQL Server 2019 on Ubuntu 20.04 or RedHat/Centos 8.
What are the OS requirements for deploying Redis?
Deploy Redis v5 & v6 on Ubuntu 20.04 or RedHat/Centos 8.
What are the features and limitations with Redis?
Redis databases are deployed with Sentinel nodes that are co-located on the same host. You can deploy 1, 3 or 5 nodes. Backups are created for:
- RDB (database). The RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
- AOF (Append Only File). The AOF persistence logs every write operation received by the server; these operations are replayed at server startup, reconstructing the original dataset.
- Performance monitoring (in the CCv2 frontend) will be supported in an upcoming release.
- No cloud upload for backups -- will be supported in an upcoming release.
- Configuration files management, Scaling and Upgrades are not supported.
- No backup verification.
- SSL/TLS is not supported.
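For reference, the two persistence modes described above map onto standard redis.conf directives; an illustrative sketch with example values (not necessarily ClusterControl's defaults):

```ini
# Illustrative redis.conf persistence settings (example values)
save 900 1            ; RDB: snapshot if >= 1 key changed in 900s
save 300 10           ; RDB: snapshot if >= 10 keys changed in 300s
appendonly yes        ; AOF: log every write operation
appendfsync everysec  ; fsync the AOF once per second
```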
=======================================================================
ClusterControl v1.9.0
2021-10-01
clustercontrol2-0.2.0-215 Tech Preview
- Fix a list limit where only at most 5 clusters were shown (backup schedule)
- Fix a 'cannot read properties of undefined' when trying to open the backup settings dialog
2021-09-30
clustercontrol-1.9.0-8122
clustercontrol-controller-1.9.0-4846
- Enable changing passwords on the 'Settings->Runtime Configuration' page without having to restart the ClusterControl controller (CMON)
- Fix an issue ordering the cluster list by name when using Cluster 2 Cluster replication
- Fix an issue where the failover backup host dropdown was empty with 'auto' as the backup host
- Fix for the blank confirmation dialog when setting a maintenance mode
- MySQL: Fix an issue with 'Add replication slave or replication cluster' when the 'db admin user' username is not 'root'
- Fix an issue with warning/critical alarms for disk-space usage. Only warnings with a hardcoded threshold were sent out
- Increased a timeout to fix the issue where the PBM (Percona Backup for MongoDB) agent failed to start (pid is 0)
- Cloud Deployment: Ubuntu 20.04 is no longer shown when selecting MongoDB v4.0 or v4.2
2021-09-22
clustercontrol-controller-1.9.0-4814
- Redis: Fix an issue using a custom dbfilename and/or appendfilename in the redis.conf
2021-09-13
clustercontrol-1.9.0-8079
clustercontrol-controller-1.9.0-4801
- The Prometheus server version that is installed is now set to use v2.29.2
- PgBackRest now has an option in the frontend to enable encryption
- A potential fix for an issue where the PBM (Percona Backup for MongoDB) agent failed to start (pid is 0)
- Fix an issue where the PBM (Percona Backup for MongoDB) agent was stuck waiting for PITR to be disabled when restoring (it now correctly detects 'pitr.enabled=true')
- Fix an issue with PgBackRest and using a dedicated repository host where the host port was misconfigured
- Fix an issue with disk space alarms not clearly showing which filesystem is out of space
- A potential fix for an issue with Cluster 2 Cluster replication where doing inserts/updates fail
2021-09-06 clustercontrol-1.9.0-8067
- MySQL replicas can now be forced to be promoted as a primary
- PgBackRest nodes are now shown in their own section on the nodes page
- Importing Galera Cluster now allows replicas to use a different user than the root user
- Fix a regression with sorting the cluster list based on the name
- Fix an issue adding a MySQL replica node due to failing SSH check to the host
- Fix for a presentation issue when having more than 1 cluster to cluster replication
2021-09-01 clustercontrol2-0.2.0-190 Tech Preview
In this new tech preview release of ClusterControl v2 we have added support for:
- MySQL Replication cluster deployments
- Backup management with MySQL based clusters
- A cluster topology mini-map / tooltip which gives a quick view into the nodes status and arrangement
Please see 'Getting started with ClusterControl v2' on how to install ClusterControl v2.
2021-08-30 clustercontrol-controller-1.9.0-4769
- Fixed a crashing issue causing CMON to segfault.
- MySQL Replication: Fixed an issue adding a replication slave with a PITR mysqldump backup.
- PostgreSQL: Fixed an issue where streaming pg_basebackups could run forever if there was a socat process running on the storage host.
- Query Monitor V2: Fixed an issue when deploying QM v2 where the agent service was not enabled after installation, so the agent would not restart automatically after a reboot.
2021-08-20
clustercontrol-controller-1.9.0-4756
clustercontrol-1.9.0-8051
clustercontrol-cloud-1.9.0-310
- Deploying ProxySQL v1 is no longer supported! ProxySQL v2 is the only option as of this patch.
- MySQL: Fix to handle custom directory paths in a configuration at deployment or during an add node (using --copy-back or --move-back)
- PostgreSQL: Backup verification server was not properly removed when the option was set
- Timescale: Fix for deploying TimescaleDB v12
- Backup: Fix an issue where uploading a backup to Azure stopped at 200GB
- UI fixes:
- HAProxy: Fix an issue with server addresses showing up incorrectly
- Fix an issue with the Cluster action menu disappearing when the activity viewer was open
2021-08-15 clustercontrol-controller-1.9.0-4747 clustercontrol-1.9.0-8040
Controller:
- Fixed a bug where Adding HAProxy Load Balancer in a C2C replication (slave cluster) failed.
- Fixed a bug installing HAProxy on Ubuntu 20.04 due to "Unable to locate package haproxy18".
- Fixed a bug where adding a replication slave for a Galera cluster failed due to an error loading the wrong configuration template.
- Fixed a bug where a Cluster Failure was reported as Warning instead of Critical.
- Fixed an issue where failing to upload a backup to the cloud, e.g. Azure, did not raise any alarm, and added more error messages in case of failure.
- Fixed an SQL injection issue when configuring the 'mail server' settings.
- Fixed an issue where backup verification failed because only a hostname was sent in the request and the port was missing.
- Fixed an issue where desyncing a Galera node during backup would always report that the backup failed.
- Added a feature making semi-sync replication (MySQL/MariaDB replication) optional. A replica can however only have semi-sync enabled if the master is already configured with semi-sync.
Frontend:
- Fixed an issue where it was impossible to create a schedule for xtrabackup/mariabackup.
- Added a feature to make retention by size configurable for Prometheus.
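Size-based retention corresponds to Prometheus's own --storage.tsdb.retention.size startup flag, which can be combined with time-based retention; an illustrative invocation with placeholder paths and values:

```shell
# Illustrative: cap the Prometheus TSDB at 10GB in addition to time retention
prometheus \
  --storage.tsdb.path=/var/lib/prometheus \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=10GB
```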
2021-08-10 clustercontrol-1.9.0-8025
- Create Secondary Cluster: Fixed an issue where the selected Master was not set in the job, thus the selected master host was never used.
- Registration: Fixed a user registration issue.
- Fixed a bug on the Email Notification page where it was possible to get an Internal Server Error if an older PHP version was used.
- Improved Node Action dialogs and fixed typos.
2021-08-09 clustercontrol-controller-1.9.0-4730
- LDAP: Fixed a bug preventing LDAP login to work.
- MySQL: Fixed an issue where RESET PRIMARY was used instead of RESET MASTER.
- Galera: Fixed an issue restoring backups on more than one node when the datadir was not empty.
- MySQL: The node action Reset Slave [All] now implicitly stops the replica threads. Previously, a user had to stop the slave before the reset slave command could be issued.
- Upgrade: Fixed an issue where selecting one node would upgrade all nodes.
- Webhook and Email Notifications: Fixed an issue where only "Ended" events were handled and not when the event was "Created".
2021-07-30 clustercontrol-1.9.0-8008
- MySQL
- Select async or semi-synchronous replication when deploying a replication cluster or adding a replica
- Set a custom configuration template and 'datadir' when adding a replica
- Topology View: Fix to allow more than 30 characters with node names
- Backup: Set a limit to the number of cores used when compressing with PIGZ
- Email notification: Fix for an issue where not all CC users are showing up properly in the list
2021-07-26 clustercontrol-controller-1.9.0-4710
- Redis: Fixed a deployment issue where the /var/log/redis/ directory was not accessible by the user 'redis'.
- CMON events: Fixed an error in HTTP headers preventing CMON to send events to other services.
- Garbd: Fixed a garbd RedHat/CentOS installation bug.
2021-07-16
clustercontrol-1.9.0-7991
clustercontrol-controller-1.9.0-4693
clustercontrol-cloud-1.9.0-307
clustercontrol-notifications-1.9.0-277
clustercontrol-ssh-1.9.0-109
clustercontrol2-0.1.0-165
We also have a new Query Monitor system using an agent based approach for MySQL and PostgreSQL based databases. Using query monitor agents we are able to provide better accuracy and insights into your database performance with less load on the nodes. Analyze your database workloads by checking query digests, top queries or outliers.
In addition, we have worked on several improvements with pgBackRest for backing up PostgreSQL, and you can also upload your backups to any cloud storage provider that supports AWS's S3 object storage API.
Features
Redis Management and Monitoring
An Open Source in-memory data structure store that can be used as a database, cache or message broker; see redis.io for more information. Please see ClusterControl Tech Preview for instructions on how to install the technology preview to try out Redis Management and Monitoring.
- Deploy Redis v5 or v6 nodes with Sentinels - one primary and up to five replicas.
- Backup and Restore with AOF and RDB.
- Fault detection and failover orchestration by the Sentinels.
- Stats collection and monitoring dashboard with Prometheus (coming soon).
Query Monitor
A new agent based Query Monitoring system for MySQL and PostgreSQL.
- Install / remove query monitor agents on the db nodes.
- Start / stop collecting query stats with the agents.
- New 'Query Workload' overview showing query digests, latency, throughput and concurrency with a scatter chart.
PostgreSQL
- pgBackRest Improvements
- Backup methods: pgbackrestfull, pgbackrestdiff, pgbackrestinc
- Install pgBackRest on primary or replica/standby nodes with a default stanza.
- Install pgBackRest at cluster deployment.
- Register/import existing pgBackRest nodes, Unregister (keep config files) and uninstall pgBackRest (completely remove it).
- Backup repository configurations - local (on one of the db hosts) or dedicated backup host.
- Backup on replica/standby nodes.
- Backup reconfiguration - regenerate configuration.
- Support for stanza configured retention - set type to 'time' to match ClusterControl's retention value in days.
- Methods supported
- Restore backup on primary - replicas need to be rebuilt separately.
- Restore with custom tablespaces locations.
- Backup verification.
- Point In Time Recovery with --pitr-stop-time
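The time-based retention above corresponds to pgBackRest's repository retention options; an illustrative pgbackrest.conf fragment with placeholder stanza name, paths and values:

```ini
# Illustrative pgbackrest.conf -- stanza name and values are placeholders
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full-type=time   ; interpret retention in days, not backup count
repo1-retention-full=7           ; keep full backups for 7 days

[mycluster]
pg1-path=/var/lib/pgsql/14/data
```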
Misc
- Upload backups to any AWS S3 API compliant cloud storage provider.
====================================================================
ClusterControl v1.8.2
2021-06-24 clustercontrol-1.8.2-7950
- The cluster name in the web application correctly reflects the name change when using the s9s command line client.
- The trial license with a new installation is properly activated.
- Fixed an issue with webhooks integration not receiving alarm events. An incorrect cluster id was used/saved.
Remove and add back the webhook and/or other integrations again to make sure the correct cluster id is used/saved.
2021-06-23 clustercontrol-controller-1.8.2-4631
Controller
- MySQL Replication: Fixed a bug with Promote Slave on Percona Server. The issue was that the gtid_mode was not detected correctly.
2021-06-16 clustercontrol-controller-1.8.2-4612
Controller
- Fixed an issue where not all of the properties were reset in the controller following a RESET SLAVE in MySQL/MariaDB.
- Removed the installation of the MySQL X plugin during Percona XtraDb Cluster 5.7 and Percona Server 5.7 deployments.
- Fixed an issue where CMON failed to correctly invoke the script specified in option replication_pre_failover_script.
- Improved logging for CRON JOB COLLECT_CONFIG errors.
2021-06-08 clustercontrol-1.8.2-7915
Frontend
- Fixed an issue with the backup scheduling. In some cases, only the last schedule would be shown.
- Fixed an issue with an empty message in the Schedule maintenance window.
- Fixed an error loading the ClusterControl UI, where it got stuck on "Loading Cluster List...".
2021-06-07 clustercontrol-controller-1.8.2-4596
Controller
- Updated the s9s-tools APT GPG key (as the old one expired) which is used during post-install during package installation.
- New Feature: Added custom init service name support for MySQL Replication. The initd/systemctl script can be specified with init_service_name in the cmon_X.cnf file.
- Fixed an issue where the semi-sync plugin was installed unnecessarily during PXC deployment.
- Fixed an issue installing TimeScaleDb on PostgreSQL 13.
- Fixed an issue where pgbackrest was failing after cmon upgrade due to 'su must be run from a terminal'.
- Fixed an issue with HAProxy: If an add secondary/replica job failed, then the slave could still be added to the HAProxy loadbalancer configuration.
- Fixed an issue where ClusterControl would show that a failed PostgreSQL streaming replication link was actually working.
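The custom init service name mentioned above is set in the per-cluster cmon configuration file. A minimal sketch, assuming cluster id 1 and a service named 'mysqld' (both are illustrative values; cmon typically needs a restart to pick up configuration file changes):

```ini
# /etc/cmon.d/cmon_1.cnf (X = cluster id; values below are assumptions)
cluster_id=1
init_service_name=mysqld   # initd/systemctl service name used to start/stop MySQL
```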
2021-05-31 clustercontrol-controller-1.8.2-4580 clustercontrol-1.8.2-7906
Frontend
- Added back a missing Manage -> Processes page in 1.8.2.
- Changed default repository to "Use Vendor Repositories" option for Clone Cluster dialog.
- Fixed an issue where the Clone Cluster option was missing for users created with the new User Management system.
Controller
- Fixed an issue installing TimescaleDb on PG12 and on RHEL 8.3, which failed because the timescaledb-2-postgresql-12 package was absent.
- Fixed a bug restoring Percona Xtrabackup incremental backups using Percona Xtrabackup 8.0.
- Fixed an issue configuring Garbd using SSL on MariaDB 10.5.
- Fixed a backup issue using pgbackrest, which failed after upgrading ClusterControl from 1.8.1 to 1.8.2.
2021-05-26 clustercontrol-controller-1.8.2-4570
Controller
- Fixed a bug with ldaps:// connections caused by OpenLDAP option handling issues.
- Fixed a bug when restoring a backup from the controller and the backup path or filename contained whitespace.
- Fixed an issue with PgBackRest when the WAL-level is different compared to the stanza name.
- Fixed an issue with sssd which caused cmon to fail sometimes. The fix was to avoid using sss_ssh_knownhostsproxy and setting SSH_OPTIONS_PROXYCOMMAND to 'none'.
2021-05-20 clustercontrol-notifications-1.8.2-272
- Fixes an issue on the web client where the notifications setup dialog is always showing up at login.
- Changed how cmon-events retrieves the initial RPC API token.
2021-05-19 clustercontrol-1.8.2-7892, clustercontrol-controller-1.8.2-4551, clustercontrol-cloud-1.8.2-301
Frontend:
- Fixed an issue where the Clone Cluster option was not present for PXC 5.7 on Debian 9.
Controller
- Fixed a bug to prevent multiple recovery jobs running on the same Prometheus host.
- Fixed a grant/privilege parsing issue to handle DELETE VERSIONING ROWS in MariaDB 10.5.
- Improved the cmon.service systemd script to automatically restart cmon after failure.
- Fixed an issue deploying garbd on Ubuntu 20.04 using PXC 5.7 and PXC 8.0.
- PostgreSQL: Added a correct recommendation when changing runtime configuration. E.g., work_mem can be changed per session, but a server restart is required to change it globally.
- Fixed an issue with Clone Cluster. It failed because of an SSL error. Also, Galera SSL Encryption is not yet supported when Cloning Clusters. In that case, create a new cluster from backup.
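The automatic-restart improvement above relies on standard systemd directives. An illustrative excerpt of what such a unit file can contain (not the shipped cmon.service verbatim; the values are assumptions):

```ini
# cmon.service (illustrative excerpt)
[Service]
Restart=on-failure   # restart cmon automatically after a failure
RestartSec=5         # wait 5 seconds between restart attempts
```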
Cmon Cloud:
- Fixed an issue deploying Debian 10 clusters on AWS.
2021-05-12 clustercontrol-1.8.2-7878 clustercontrol-cloud-1.8.2-298 clustercontrol-controller-1.8.2-4531
Frontend / Cmon Cloud:
- Support for uploading backups to any cloud storage provider with an AWS S3 API compliant object storage API. You need the latest ClusterControl web, CMON controller, and cloud packages.
- PostgreSQL: Fix upgrades for single node deployments
- Fixed an issue with opening the SSH Web console
Controller
- MariaDB: Failed to 'RESET MASTER' when creating a slave cluster. The fix is to toggle only the wsrep_on session variable for MariaDB too, instead of the global one.
- MongoDB: Fixed an issue with PBM reporting a backup file size of 0
- MySQL Deployment: Failed to create a cluster due to --super-read-only.
2021-05-07 clustercontrol-controller-1.8.2-4518
- MySQL 5.7 and 8.0: Adding a replication slave could fail due to --super-read-only.
- PostgreSQL: Fixed an issue where the replication credentials are not used when you import an existing PostgreSQL slave node.
- Fix for a crash situation by disabling the parsing of the global ssh known hosts file.
2021-04-27 clustercontrol-controller-1.8.2-4494
- Database Growth: Added more log messages to assist in debugging.
- PostgreSQL: ClusterControl could not detect that PostgreSQL replication was running and working fine. This was fixed by improving the primary_conninfo collection; data from pg_stat_wal_receiver is now also used.
2021-04-26 clustercontrol-1.8.2-7862 clustercontrol-cloud-1.8.2-287
- Create slave cluster: It now shows backup time for the backups that can be used when staging a slave cluster
- Query Monitoring: Fix layout issues in the settings page
- Topology View: PgBouncer is now properly visualized / shown
- Backups: Fixes the backup page stuck on 'please wait' for some users
- PHP 5.4 (RHEL/CentOS 6): Revert PHP array syntax to make the web application work with PHP 5.4
- Navigation: Fix an issue with re-directing to the cluster list page when opening the activity viewer
- Community License: Fix a login issue choosing to run ClusterControl with the community license
- Mail notifications: Fix an issue with loading a new team/group into the list view
- PostgreSQL: The Query Monitor->DB Connections now shows the 'database' and 'user'
- Alarm Badge: Fix for an issue with the badge counts being different on the cluster list vs in the cluster overview page
- Users and Schemas: Prevent empty MySQL grant statements from being sent
- Cloud Deployment: Add options for Ubuntu 20.04, CentOS 8 and Debian 10. Debian 8 is no longer supported.
2021-04-20 clustercontrol-cloud-1.8.2-286
Cloud:
- Fixed an issue when deploying to CentOS 7 on AWS. Updated AMIs.
2021-04-09 clustercontrol-controller-1.8.2-4478
Controller:
- NDB Cluster: Updated version to 8.0.23.
- PostgreSQL: Top Queries in PostgreSQL 13 was not working due to column name changes in the pg_stat_statements view (total_time changed to total_exec_time, etc.).
- PostgreSQL: Cluster-to-cluster replication was not shown correctly when using PostgreSQL 13 due to a column name change in pg_stat_wal_receiver.
2021-04-09 clustercontrol-controller-1.8.2-4467
Controller:
- Alarms: Changed the Redundant indexes alarm text, which suggested navigating to the 'Table Analyzer' instead of the 'Schema Analyzer'.
- General: Improved stability by disabling code in release builds that in rare cases could cause crashes.
2021-04-07 clustercontrol-1.8.2-7804
Frontend:
- DB Schema and Users: Fixes an issue with opening the individual tabs / pages like the 'Create Database' page
- Create New Admin User: Shows a better error message and a possible fix to try if there is an issue creating a new admin user for the new user management system.
Follow the instructions and try resetting the so-called 'ccrpc' user needed by the frontend in this specific case. The script to run is located at /var/www/html/clustercontrol/app/tools/reset-ccrpc.sh
2021-04-06 clustercontrol-1.8.2-7788
Frontend:
- The backup list now also shows the backup job duration
- DB User Management: Fix for lower casing the table name when adding a grant in the form of <schema>.<tableName>
- Query Monitor: Fix a performance issue with Query Statistics - we now paginate the result sets properly
- Fix for a permission issue when importing keepalived nodes
- Prometheus MySQL Dashboard (exporter): Collect and show MySQL server user stats
- PgBouncer: Prevent 'space' in PgBouncer pool names and fix for showing the wrong message when importing a PgBouncer node
2021-03-31 clustercontrol-1.8.2-7760 clustercontrol-controller-1.8.2-4453
Frontend:
- Deploy: A help text regarding custom templates was missing in the tool tip for the Configuration Template.
- PgBouncer: Added syntax highlighting in the configuration file editor.
- ProxySQL: Hide blacklisted users from ProxySQL import users wizard.
- New User management: Fixed a couple of issues with access to Key Management and Email Notifications when logged in as a user of the new user management.
Controller:
- PgBouncer: Import PgBouncer failed with 'Pid not found' even though PgBouncer was running and the pid file existed.
- PgBouncer: Fixed an issue where the create pool dialog allowed the user to enter a pool name with spaces.
- Reverted a change that disabled socat when used with a hostname starting with a number. In some older versions of socat, a hostname starting with a number would cause socat to fail/hang.
- Database Growth (PostgreSQL): Added more job messages making it easier to debug problems.
- Error reports: More data collected, e.g. runtime_mysql_galera_hostgroups from ProxySQL.
2021-03-26 clustercontrol-1.8.2-7748 clustercontrol-controller-1.8.2-4443
Frontend:
- Prevent <webroot>/clustercontrol/app/webroot/build directory from being viewable
- Use separate S9S_USER_CONFIG config file for the 'ccrpc' command line user.
Controller:
- MariaDb 10.5: Added support for new privileges (READONLY ADMIN)
2021-03-24 clustercontrol-controller-1.8.2-4436
Controller:
- MySQL: Rebuild replication slave failed because super_read_only=ON
- MySQL: Cluster 2 Cluster Replication setup failed due to a secure connection requirement when the 'caching_sha2_password' plugin was used.
2021-03-23
clustercontrol-1.8.2-7738
clustercontrol-controller-1.8.2-4431
clustercontrol-cloud-1.8.2-280
clustercontrol-ssh-1.8.2-105
clustercontrol-notifications-1.8.2-267
The ClusterControl web application is now able to use the improved and more secure RPC v2 API which has been used by our s9s command line tool for quite some time.
One of the key changes is that it allows us to provide a better User Management system to manage users, teams and access control.
This unfortunately requires a few steps to be taken on your part in order to switch to the new system:
- Existing ClusterControl users need to be re-created. Create a new 'admin' user in the new system with the 'create admin user' wizard, then log on with that user and create any remaining users and teams again.
- An existing LDAP server configuration needs to be re-created. Log on with your new 'admin' user and then create the LDAP configuration again.
We are planning to completely phase out the old User Management system from June 2022; until then, both versions will be supported concurrently by the web application.
Critical bug fixes and other minor cosmetic fixes are going to be maintained with both versions of the User Management systems until the old system is phased out.
Feature Details
New User and LDAP Management
A new system to manage users via the ClusterControl controller with improved security, central user database, access control and LDAP support. Both the web application and the s9s command line client will now use the same user database.
- Create users, teams with access control for different roles
- LDAP support for authenticating ClusterControl users with the most popular LDAP server options
- Unified user database to keep web application users and command line users in perfect unison
- Underlying user management system based on the Unix/Linux filesystem permissions
New Patch Management
An improved and redesigned patch management system to upgrade MySQL, PostgreSQL and ProxySQL nodes.
- Show installed packages and versions
- Check/update for new packages to upgrade
- Selective upgrade of nodes
PgBouncer - connection pooler for PostgreSQL
Pool/optimize connections to one or more databases.
- Deploy PgBouncer on one or more nodes
- Manage multiple pools per node
- Pool modes
- Session
- Transaction
- Statement
- Prometheus exporter and dashboard
PostgreSQL
- Support for PostgreSQL v13 (deployment and import)
- Enable pgaudit extension for audit logging
- Class of statements that can be logged
- READ: SELECT and COPY when the source is a relation or a query.
- WRITE: INSERT, UPDATE, DELETE, TRUNCATE, and COPY when the destination is a relation.
- FUNCTION: Function calls and DO blocks.
- ROLE: Statements related to roles and privileges: GRANT, REVOKE, CREATE/ALTER/DROP ROLE.
- DDL: All DDL that is not included in the ROLE class.
- MISC: Miscellaneous commands, e.g. DISCARD, FETCH, CHECKPOINT, VACUUM, SET.
- MISC_SET: Miscellaneous SET commands, e.g. SET ROLE.
- ALL: Include all of the above.
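Enabling pgaudit and selecting statement classes is done in postgresql.conf. A minimal sketch, assuming the pgaudit extension is installed and only the WRITE and DDL classes should be logged (the class choice is illustrative):

```ini
# postgresql.conf (illustrative; changing shared_preload_libraries requires a server restart)
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write, ddl'   # log the WRITE and DDL statement classes
```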
Tags / Labels
Tag clusters to quickly identify one or more clusters that are used for a specific reason.
- Add tags at cluster deployment
- Add tags at cluster import
- Search / filter out clusters that have specific tags
Misc
- MySQL Cluster 8.0 (NDB) support
- Percona MongoDB 4.x audit log support - only via s9s command line at the moment
- Host/server connection check whenever an IP/hostname is entered with form wizards
=======================================================================
2021-03-16 clustercontrol-controller-1.8.1-4419
Controller:
- PostgreSQL: Improved the PostgreSQL connection error handling as it could connect to a non-existing schema.
- PostgreSQL: Fixed an issue where the database name was lowercased causing the Query Statistics tab in the frontend to fail.
2021-03-10 clustercontrol-1.8.1-7691, clustercontrol-controller-1.8.1-4400
Frontend:
- ProxySQL: Fixed an issue where the ProxySQL UI was not working correctly when creating a new query rule from top queries.
- Logs Viewer: Fixed an issue where the log viewer (Logs->System Logs) would show an incorrect amount of logs.
Controller:
- PostgreSQL: Fixed a bug preventing users from killing a query in the Running Queries view.
2021-03-02 clustercontrol-1.8.1-7660, clustercontrol-controller-1.8.1-4392
Frontend:
- Topology View: Added maintenance mode visualization support in the topology view.
- LDAP: Fixed an issue where the default user's timezone has changed after upgrading to ClusterControl 1.8.1.
Controller:
- MaxScale: Fixed a dependency issue with libcurl3 for older Ubuntus.
- PgBackrest: Fixed an issue where installing PgBackrest failed with "check command requires option: pg1-path".
2021-02-23 clustercontrol-1.8.1-7639, clustercontrol-controller-1.8.1-4383
Frontend:
- Backup: Improved field validation for the cron settings.
- Jobs: A scheduled backup job displayed the current time instead of the scheduled time in the job title.
- MySQL: Kill connection did not function due to an issue with host ids.
Controller:
- MaxScale: Updated to version 2.5.7.
- MySQL: Kill connection did not function due to an issue with host ids.
- Percona XtraDb Cluster: Fixed an issue preventing deployment of 8.0 on Debian 10 (Buster).
- Logging: Improved logging of dead connections.
2021-02-11 clustercontrol-1.8.1-7621, clustercontrol-controller-1.8.1-4369
Frontend:
- Security: Fixed an issue to redirect the user to the login page when a request was blackholed after a session timeout.
Controller:
- MySQL 8.0: Added support for new privileges added in MySQL 8.0.23.
- Added systemd scripts for CMON.
2021-02-08 clustercontrol-1.8.1-7605, clustercontrol-controller-1.8.1-4362
Frontend:
- Query Monitor: Show the time on the 'Running Queries' page in a more user-friendly format.
Controller:
- MySQL 8.0: Fixed a bug when an ALTER USER command to change the db user password was issued when using the 'caching_sha2_password' plugin.
- ProxySQL: Updating the admin user/password did not update the cluster admin/password when ProxySQL Clustering is enabled.
- Logging: Fixed a bug regarding the log message. A log message from the watchdog thread was printed out using the wrong timezone.
- PostgreSQL & HAProxy: The postgresql checker script was configured with the wrong password on the slave nodes.
2021-01-22 clustercontrol-1.8.1-7559
- Azure integration: Fixed an issue related to the selection of subnets.
- Backup: Added an option to skip the md5sum check for a backup job. Md5sum checks may take a long time on large backup files.
- Backup: Fixed an issue where the wrong warning was printed in the Restore Backup section when the option "Restore To Standalone Node" was selected.
- Backup Feature: Rerun failed scheduled backup jobs from the Jobs view.
2021-01-21 clustercontrol-controller-1.8.1-4343
- Backup: Make it optional to skip the md5sum check for a backup job. The md5sum check may take a long time to execute on large backups. This can be configured by setting backup_create_hash in the UI (Settings -> Runtime Configuration, no restart needed), or by setting backup_create_hash in /etc/cmon.d/cmon_X.cnf (where X is the cluster id) and restarting cmon.
- Percona XtraDb Cluster 8.0: Fixed a failed deploy on Ubuntu 20.04 and fixed certificate creation.
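A minimal sketch of the configuration-file variant of the md5sum setting described above, assuming cluster id 1 (the value 'false' is the illustrative choice to skip the check):

```ini
# /etc/cmon.d/cmon_1.cnf (X = cluster id; restart cmon after editing)
backup_create_hash=false   # skip the md5sum check for backup jobs
```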
2021-01-13 clustercontrol-controller-1.8.1-4327
- Fixed a bug where netcat_ports used by backup could collide with ports used by e.g. Prometheus exporters and services.
- MongoDb Backup: Check free space on storage host before starting backup.
- CPU Usage Alarm: Made it configurable how long CPU usage should be above a Warning/Critical threshold before raising the alarm. This can be set in the cmon configuration file (host_stats_window_size) or from the CMON Settings in the frontend and the s9s CLI.
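A sketch of the host_stats_window_size setting in the cmon configuration file. The value shown is an assumption for illustration, not a documented default:

```ini
# /etc/cmon.d/cmon_1.cnf (illustrative value)
host_stats_window_size=600   # window over which CPU usage must stay above the threshold
```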
2021-01-07 clustercontrol-1.8.1-7527
- MySQL root password is now correctly set when using the cloud deployment (wizard)
- The MySQL verify backup job now contains the entered PITR position or time
- The backups page was missing the 'backed up tables info' for MySQL partial backups with tables
2020-12-30 clustercontrol-controller-1.8.1-4314
- Percona Backup for MongoDB: Fixed an issue where the controller always reported the backup as failed, even though PBM reported it as completed and there were no error logs.
2020-12-28 clustercontrol-controller-1.8.1-4311
- MySQL Replication: Fixed a bug regarding package dependencies when adding a replication slave using Oracle MySQL 8.0 on CentOS/RHEL 8.
2020-12-23 clustercontrol-controller-1.8.1-4304
- MariaDB: Added 'module_hotfixes=1' to repo files to overcome the MariaDB issue https://jira.mariadb.org/browse/MDEV-20673
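For reference, a yum repo file with the workaround applied might look like this — the baseurl is an assumption for illustration:

```ini
# /etc/yum.repos.d/mariadb.repo (illustrative)
[mariadb]
name = MariaDB
baseurl = https://yum.mariadb.org/10.5/centos8-amd64
gpgcheck = 1
module_hotfixes = 1   # work around MDEV-20673 on RHEL/CentOS 8
```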
2020-12-18 clustercontrol-controller-1.8.1-4299
- Verify Backup: Fixed an issue where the Verify Backup would fail on MariaDb 10.2 and later if the MariaDb version of the restore host could not be determined. The error presented itself as "/usr/bin/mariabackup: unknown option '--apply-log-only'"
2020-12-14 clustercontrol-controller-1.8.1-4294
- ProxySQL: Deployment fails because of a weak proxydemo user password. Now the proxydemo user is not created at all.
2020-12-11 clustercontrol-controller-1.8.1-4292
- Verify Backup: The Verify Backup job failed for MySQL based systems because the Backup Verification Server was in read-only mode.
- Verify Backup: When verifying 'mysqldumps', the binary logs of the Backup Verification Server were not removed.
2020-12-10 clustercontrol-1.8.1-7500
- Remove duplicate headers in 'Query Monitor'
2020-12-09 clustercontrol-controller-1.8.1-4285
- PostgreSQL: Couldn't remove a PostgreSQL slave in case of co-location.
- MariaDb: Fixed a bug where Backup Verification didn't work on MariaDB 10.2 and later (--apply-log-only is no longer supported).
- MongoDb: mongodump logging improvement to add the last 50 lines of its output to the backup job log.
2020-12-05 clustercontrol-1.8.1-7491
- Fix for ClusterControl user login. The login username was required to be an email address which prevented LDAP login with a plain username.
2020-11-30 clustercontrol-controller-1.8.1-4274
- MongoDb Percona Backup: Fixed an issue when deleting a backup. The deletion was reported as successful, but it actually failed.
- MySQL Replication: Fixed a bug where the read_only setting in my.cnf could be inconsistently set on the nodes. The my.cnf file must always have read_only=ON on all nodes.
- MySQL Replication: Fixed an issue where, if auto_manage_readonly = true, a user could set read_only = OFF on a slave. Now the read_only flag is set back to read_only = ON as soon as this is detected.
- Backups: Fixed the job title of backups executed from schedules. Before, the title was set to 'Backup schedule #NN', where NN is a number. Now the backup title is properly set to 'Create Backup'.
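The read_only invariant described above corresponds to a my.cnf fragment like the following on the replica nodes (illustrative; ClusterControl manages the runtime flag itself):

```ini
# my.cnf on replica nodes (illustrative fragment)
[mysqld]
read_only = ON
```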
2020-11-23 clustercontrol-1.8.1-7473
- Backup->Settings->Backup Settings: Customize/set netcat ports to use when streaming backups
- Cluster 2 Cluster: 2nd PostgreSQL slave cluster is now properly linked
- Delete Job: Add back support to delete pending jobs via the Activity Viewer
- Restore PITR backup: Correctly set/select the node to restore the backup (PITR) on
- Import Cluster: You no longer need to specify vendor or version - it's automatically detected
- Cluster Overview: Setting the same time range sometimes resulted in showing a different range
- Misc
- Fix incorrect backup restore confirmation messages
- Fix modal and sidebar issues with the restore backup dialog - it now closes properly
- Fix broken backup scheduler icon for the jobs in the Activity Viewer
- Fix overlapping text in the Logs->Jobs page for smaller screen resolutions
- Empty role is now not being created when cancelling the role creation
2020-11-18 clustercontrol-controller-1.8.1-4258
Controller:
- MongoDB: Installing PBM on mongodb.org release v4.2 failed.
- MaxScale: Due to an SSH environment problem a node transitioned between Offline/Online. Now there is better logging in this area.
- Scheduled backups: Fixed a bug where the scheduled backup printed out the same ID number (#0) for all backups. Note: It is only a printout/formatting issue.
ClusterControl v1.8.1 - 2020-11-13
clustercontrol-1.8.1-7442
clustercontrol-controller-1.8.1-4249
clustercontrol-notifications-1.8.1-261
clustercontrol-ssh-1.8.1-99
clustercontrol-cloud-1.8.1-263
In this release, we introduce a new consistent backup method for MongoDb Replica Sets and Sharded Clusters, native ProxySQL Clustering support, version updates of supported databases, PITR improvements for MySQL, and last but not least security enhancements on the web UI.
Security is always a top priority, and as a part of the security enhancements, a number of vulnerabilities were fixed in the web UI.
MongoDb 3.6 and later can now use Percona Backup For MongoDb to create consistent backups of replica sets and sharded clusters.
ProxySQL Clustering offers a convenient way to keep a number of ProxySQL servers in sync. If, e.g., a user is created on one node, the change is propagated to the other clustered ProxySQL nodes.
Feature Details
MongoDb Backup
- Uses Percona Backup For MongoDb
- Backup and Restore Replica sets and Sharded Clusters
ProxySQL Clustering
- Leverage the built in clustering of ProxySQL to keep ProxySQL instances in sync
Version Updates
- Percona XtraDb Cluster 8.0
- MariaDb and MariaDb Cluster 10.5
Security Improvements
- Prevent Clickjacking
- Updated Jquery (3.5.0)
- CGI Generic Cross-Site Request Forgery Detection
MySQL PITR enhancements
- Backup a node and perform PITR on any node in the Cluster
ClusterControl v1.8.0
2020-11-04 clustercontrol-controller-1.8.0-4223
Controller:
- Prometheus: Fixed an issue where the port used by a Prometheus exporter was incremented and could be different on each node. This made it problematic to maintain e.g firewalls.
2020-11-03 clustercontrol-controller-1.8.0-4218
Controller:
- PostgreSQL: Add missing quoting of PostgreSQL passwords in cmon configuration file.
- Deployment fails for MariaDB 10.4 on CentOS 8.
- Deployment fails for MariaDB 10.3 on CentOS 8.
2020-10-29 clustercontrol-controller-1.8.0-4210
Controller:
- Prometheus: Fixed an issue where the port used by a Prometheus exporter was incremented and could be different on each node. This made it problematic to maintain e.g firewalls.
- Notifications: Improvements to the Memory/RAM usage alarm.
2020-10-26 clustercontrol-controller-1.8.0-4202
Controller:
- Alarms: Improved reporting on the memory-/RAM-usage alarms to include the actual bytes used and total bytes available.
- General: Optimized and reduced the CPU usage of the controller when using Prometheus.
- Keepalived: Fixed a bug when uninstalling Keepalived where the controller tried to copy a non-existing config file.
2020-10-16 clustercontrol-controller-1.8.0-4195
Controller:
- General: Fixed high CPU usage caused by Prometheus sampling with many clusters.
- Cloud Deployment: Use private/internal IPs to communicate with DB nodes when 'use_private_network' option is set.
- Deployment: Fixed an issue with Percona installation on Debian/Ubuntu. It was pulling -dbg packages and it prolonged the installation time.
2020-10-14 clustercontrol-1.8.0-7331 clustercontrol-controller-1.8.0-4190
Frontend
- Alarms: Refactored the alarms page to make it more space-efficient.
Controller:
- MariaDb Cluster 10.4 failed to deploy on CentOS 8. As a side-effect, due to dependency issues, percona-toolkit could not be installed.
2020-10-13 clustercontrol-controller-1.8.0-4188
Controller:
- PostgreSQL: Change the owner of a pre-created datadir owned by a user other than 'postgres'.
- Query Monitor: 'Last seen' was set to 'now' and not the actual 'last seen'.
2020-10-07 clustercontrol-1.8.0-7312 clustercontrol-controller-1.8.0-4181
Frontend
- Cloud Deploy: Added an option to use private IP address only when creating the VMs.
Controller:
- HAProxy: Failed to Enable/Disable HAProxy because node_address was not taken into account.
- Deployment: Improved logic to determine the AppStream repository name for CentOS/RHEL/Oracle.
- MySQL Cluster: include ndb_mgm -e show in the error-report.
2020-09-29 clustercontrol-controller-1.8.0-4166
Controller:
- MySQL Cluster: Fixed an issue when handling status replies of MySQL Clusters containing many nodes.
2020-09-29 clustercontrol-controller-1.8.0-4165
Controller:
- Improved error handling & logging (including error-reporting) for MySQL NDB Clusters.
- ProxySQL Galera: Fix for crash for when updating ProxySQL in case the host group does not exist or is not defined.
- PostgreSQL: Fixed an issue when deploying on CentOS/RedHat using the option "Do Not Setup Vendor Repositories", as it wrongly added --repo=pgdg to the yum install command.
2020-09-23 clustercontrol-1.8.0-7277 clustercontrol-controller-1.8.0-4156 clustercontrol-cloud-1.8.0-254
Frontend
- ProxySQL: Top Queries page shows super-long non-truncated digest text.
- Cloud Deployment: Fixed an issue where the wrong unit (MB) was passed to the cmon-cloud service. GB was expected.
Controller:
- HAProxy: Fixed an issue to refresh/sample HAProxy on certain actions.
- MySQL Replication: Fixed an issue to allow removing down/failed master(s) when there is more than one node in the cluster.
- APT repository mirroring fixes (updated aptly and fixed gpg handling on newer systems) for Ubuntu 18.04.
Cloud:
- Azure: Fixed a timeout issue where it was not possible to get the status of a VM within 10m (Could not get VM statuses in 10m0s.)
2020-09-16
clustercontrol-1.8.0-7250
clustercontrol-controller-1.8.0-4145
clustercontrol-notifications-1.8.0-257
clustercontrol-cloud-1.8.0-252
clustercontrol-ssh-1.8.0-96
Feature Details
Scalability
- ClusterControl supports hundreds of nodes.
- Lowered CPU consumption.
Vault Integration
- Credentials stored in cmon configuration files can now be moved to Vault.
Tagging
- Supported via the S9S CLI.
- Set tags to the existing clusters and on cluster creation.
- Filter by tags.
ClusterControl v1.7.6
2020-09-10 clustercontrol-1.7.6-7237
Frontend
- OpsGenie: Fixed a bug where the Integration could not be created due to 'Failed to parse request body:...'
- OpsGenie: Updated instructions. The API Key of the Team must be used: https://docs.opsgenie.com/docs/api-key-management
- HAProxy: Fixed a bug where disabling HAProxy failed because the UI didn't send the hostname parameter to the job.
- MariaBackup: Fixed an issue with partial backups. It was not possible to specify databases/tables.
2020-09-08 clustercontrol-controller-1.7.6-4130
Controller:
- Oracle/MySQL 8.0 compatibility & testing fixes.
2020-09-02 clustercontrol-1.7.6-7207 clustercontrol-controller-1.7.6-4120
Frontend
- Topology View: Fixed an issue to avoid flickering on cluster update.
- Topology View: Fixed an issue showing the topology for 3 or more multi-masters.
- ProxySQL: The IP/Hostname Address was truncated in ProxySQL Processlist.
Controller:
- MongoDb: Node menu does not appear for MongoDB Replicaset due to an internal error.
- MongoDb: Fixed a bug causing Mongo shard recovery to fail; mongos processes are now started last.
- Prometheus: Upgraded node_exporter to 1.0.1 (no incompatible changes) + some typo fixes.
- Prometheus: Bumped the Prometheus version to 2.20.1
- [improvement] OS Support: Updated compatibility matrix to support Ubuntu 20.04.
2020-08-27 clustercontrol-controller-1.7.6-4108
Controller:
- OS Detection: Made detection more tolerant of extra lines introduced by SSH login to a host, which caused an issue when detecting the operating system.
- Cmon Db Schema: Fixed an issue with the cmon_log_entries table that had an invalid Foreign Key.
- ProxySQL: Deployment failed on Debian 10 in combination with Percona or Oracle/MySQL 8.0.
- Fixed an issue on MySQL based systems where rebuilding a replication slave would fail when using uppercased hostnames.
2020-08-17 clustercontrol-1.7.6-7158 clustercontrol-controller-1.7.6-4083
Frontend
- Dashboards: Fixed an issue in the Replication dashboard where the Master Server ID was presented as a decimal value and not an integer.
Controller:
- ProxySQL: Fixed a bug with logs getting flooded with SAVE MYSQL SERVERS TO DISK /LOAD MYSQL... commands if there was a MySQL server not present in the ProxySQL Server's mysql_servers table.
- Alarms: Fixed a timezone problem between the reported date in the Alarm Digest email and the triggered Alarm. Now the datetime has the same Timezone in both cases.
2020-08-11 clustercontrol-1.7.6-7146 clustercontrol-controller-1.7.6-4077
Frontend
- Fixed an issue with retrieving roles where non super-admin LDAP users always ended up at the first cluster in the list.
- Fixed an issue editing an OpsGenie integration which prompted the user to enter information in a field, but it was not possible.
Controller:
- HAProxy: Fixed an issue using non-default ports as a user-specified value would be set to the default.
- Mariabackup: A fix to support Mariabackup 10.4.14 which has dropped support for some options.
- Deploy: Fixed a bug deploying Percona on CentOS 8.
2020-08-04 clustercontrol-1.7.6-7124 clustercontrol-notifications-1.7.6-254
Frontend
- Galera: Fixed an issue which made it impossible to activate Galera SSL Encryption when SSL Encryption was enabled
- Galera: Server Load graphs were not properly initialized when there was no data to graph (e.g, because a server was down for a period of time).
- ServiceNow integration: Fixed a layout issue and improved usability by adding an 'All Clusters' option to simplify setups with many clusters. This fix also includes two new fields: 'Service' and 'Configuration Item'.
Notifications service:
- ServiceNow: Added support for 'Service' and 'Configuration Item'.
2020-08-03 clustercontrol-controller-1.7.6-4068
Controller
- Fixed a bug deploying Percona Server and XtraDb Cluster on CentOS 8.
2020-08-03 clustercontrol-controller-1.7.6-4066
Controller
- PostgreSQL: Fixed an issue with PgBackRest using user-defined stanzas to prevent default configuration options defined by the controller from being set as command-line options. Thus, only the options set in the user-defined stanza will be used and nothing else.
2020-07-27 clustercontrol-1.7.6-7082 clustercontrol-controller-1.7.6-4059
Frontend
- Cloud Deployment: Fixed an issue where the wrong disk size unit (GB) was sent in the job instead of MB.
- Backup/Restore (MySQL based): Fixed a couple of UX issues. Restoring a backup using Point in-time recovery (PITR) may only be executed on the cluster nodes where the backup was created. All other options are now disabled (Restore on standalone/create cluster from backup).
- Import MySQL Replication: The option 'Import as a standalone node' has been removed as it did not have any purpose.
Controller
- MySQL 8.0: Fixed an issue with parsing privileges in a GRANT statement (INNODB_REDO_LOG_ENABLE privilege was missing)
- Replication: Fixed an issue retrieving/creating user account information in case there was one server in the setup and it was not configured as a master. The error manifested itself as 'Server not found while trying to create an account'
- Replication: Improved the behavior by not raising an alarm or disabling auto-recovery if auto_manage_readonly=false is set and the cluster has multiple writable masters.
- Replication: The Enable/Disable read-only job failed to run if auto_manage_readonly=false.
- ProxySQL: Fixed a failure syncing instances in ProxySQL (GRANT option).
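The two auto_manage_readonly items above refer to a CMON configuration flag. A minimal sketch of disabling it for one cluster (the file path and cluster id are illustrative):

```
# /etc/cmon.d/cmon_1.cnf -- illustrative per-cluster configuration
# When false, CMON does not manage the read_only flag on replication nodes,
# and (after this fix) no longer raises a multiple-writable-masters alarm.
auto_manage_readonly=false
```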
2020-07-20 clustercontrol-1.7.6-7059 clustercontrol-controller-1.7.6-4047
Frontend
- Integrations: Fixed a bug adding a cluster to an integration channel when the cluster name was only 2 characters long.
- PostgreSQL: Enabled TimescaleDB for PostgreSQL 12.
- User mgmt: Fixed an issue preventing a SuperAdmin user logged in via LDAP from changing a cluster's team.
Controller
- PostgreSQL: Fixed an issue where the controller failed to parse pg_hba.conf when a line ended with whitespace before the newline.
- PostgreSQL: Ensure that directories created for PostgreSQL, including newly created parent directories such as /etc/postgres, have the correct permissions.
- General: Extended OS compatibility matrix with Ubuntu 'focal' / 20.04.
2020-07-10 clustercontrol-controller-1.7.6-4036
Controller
- A fix for a race condition occurring when the SSH connection was lost momentarily while sampling processes. This could cause processes (e.g. HAProxy, Garbd, ProxySQL) to report the wrong state for a short period of time.
- MongoDB: Consistent backup failed because the Storage Host was not set.
2020-07-05 clustercontrol-controller-1.7.6-4026
Controller
- Galera: Improved Cluster Split detection. Now, the cluster_size is measured over a three-second period, and the cluster will enter failed state if the cluster_size is not the same on all nodes after this period of time.
- PostgreSQL: Rebuilding a PostgreSQL node as a slave could make it appear with the role set to master (but non-writable, and streaming from the writable master).
- ProxySQL: Removed unnecessary log messages when installing ProxySQL.
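The improved split detection above compares the cluster size reported by each node. The same value can be checked manually on any Galera node using a standard Galera status variable:

```sql
-- Run on each Galera node; values that differ across nodes indicate a split.
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
```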
2020-06-28 clustercontrol-1.7.6-6996 clustercontrol-controller-1.7.6-4013
Frontend
- Query Monitor: Running Queries only appeared on the last page when filtering on hosts.
- User Management: The number of visible Teams/LDAP groups was limited; more than 33 groups could not be shown in the UI.
Controller
- Query Monitor: Purge Query Monitor for MySQL did not purge the performance_schema events_statements_summary_by_digest.
- Ping time was set incorrectly (to a large value) when ICMP was blocked by a firewall or disabled in the configuration. It is now set to -1 in these cases.
- HAProxy: It was possible to import a non-existent HAProxy instance.
- ProxySQL: Fixed a bug syncing instances when a database name contained a backslash.
- ProxySQL: Fixed an issue with MariaDB when importing users with a role in ProxySQL.
- ProxySQL: Fixed an issue where installing ProxySQL 1.x failed on CentOS 7.
- ProxySQL: Could not install ProxySQL 1.x on two nodes in the same job.
- ProxySQL: Failed to stop the ProxySQL service while removing and uninstalling the node.
2020-06-20 clustercontrol-1.7.6-6976 clustercontrol-controller-1.7.6-3996
Frontend
- Query Monitor: Fixed a bug when purging data in the Query Monitor.
Controller
- SSL Certificates: Fixed a bug that prevented self-signed certificates from being imported. The error manifested itself as: "Error 'CA certificate: Empty PEM string'".
- MaxScale: The MaxScale nodes could in some situations appear as "not available"/"offline" even though the process was running.
- Backups: Fixed a bug where a backup could end up in the wrong backup directory if the backup was re-executed too soon after a failed backup.
2020-06-16 clustercontrol-1.7.6-6959 clustercontrol-controller-1.7.6-3985
Frontend
- MongoDB: Dashboards metrics updated to support new mongodb_exporter. A re-install of the MongoDB Exporter is needed, which is done from the Dashboards action menu.
- Schema Analyzer: Showed no data for the Community Edition.
- Stop Node: The action is now always visible, even if the node is down/unknown.
- Backup Scheduling: Fixed an issue specifying the time when using advanced settings.
- User management: A user with the 'Admin' role could not open the mail notifications page.
- User management: Users were shown in the wrong group (fixed the 'All users' logic).
- User management: LDAP users could not see alarms or jobs.
Controller
- Monitoring/disk: A fix was added to avoid monitoring the NFS filesystem.
- PostgreSQL: In case of an inconsistent view (master down, but a load balancer or slave reports it is up), the state is now double-checked using SSH.
- PostgreSQL: Log the replication failure alarm reason and server disconnected reason in the alarm text.
2020-06-08 clustercontrol-controller-1.7.6-3972
Controller
- ProxySQL: Support MariaDB roles when importing users to ProxySQL.
- MongoDB: Upgraded mongodb_exporter to v0.11.0 to support newer MongoDB versions.
- HAProxy: Fixed an issue where the node appeared as Online even though the VM was not running.
- Galera: Failed to create slave cluster from backup for Galera.
- PostgreSQL: Cluster-to-Cluster replication shows Cluster Failure in the slave cluster.
- PostgreSQL: Failed to Create Slave Cluster on TimescaleDB.
- Notifications: Extended the fallback email address query with dcps.users having company_id=0 and the RPCv2 owner user of the cluster. This ensures that an admin who can see all clusters will get notifications from all clusters.
2020-05-15 clustercontrol-1.7.6-6868 clustercontrol-controller-1.7.6-3940
Frontend
- Fixed a CC Teams and Users management issue (removed the strict linking on company for SuperAdmin).
Controller
- PostgreSQL: Slave rebuild did not work for PostgreSQL.
- PostgreSQL: Failed to create a PostgreSQL cluster from pg_basebackup.
- PostgreSQL: Remove slave (recovery|standby) signal files after restoring pg_basebackup.
- Galera: Automatic failover was not working on a MariaDB Cluster with slave nodes.
- Galera: Creating a Galera Cluster from Backup failed.
- Galera: Creating a Slave Cluster using PXC 5.6 failed. See note above.
- Galera: Recovery failed repeatedly if automatic recovery was enabled (huge dataset) because of a systemd script timeout. CMON now patches the vendor's broken systemd script.
- Galera: PXC 5.7 failed to deploy on CentOS 8.
- ProxySQL: Fix for wildcard handling in MySQL grants (fixes a ProxySQL import users issue...).
- MySQL/Galera: Added parser support for new MySQL 8.0 privileges.
- MongoDB: Fixed a deployment failure by preventing any owner/access changes to /var/run and /run, which caused SSH connections to fail.
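For the systemd timeout item above, CMON patches the vendor's unit file automatically. An equivalent manual override might look like the following drop-in (the unit name and path are illustrative; MariaDB and PXC use different unit names):

```
# /etc/systemd/system/mysql.service.d/timeout.conf -- illustrative drop-in
[Service]
# Allow a long SST/recovery on large datasets instead of timing out
TimeoutStartSec=infinity
```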
2020-05-06 clustercontrol-1.7.6-6854 clustercontrol-controller-1.7.6-3910
Frontend
- HAProxy: Auto-filling the HAProxy socket, port, and credentials fields was not working in the import section (fixed the template).
Controller
- PostgreSQL: Fixed a pg_hba parsing error (whitespace in empty lines).
- PostgreSQL: Bugfix for duplicated pg_hba entries.
- PostgreSQL: Bugfix for repetitive CREATE ROLE calls following a failover.
- MongoDb: Backups created by s9s CLI did not contain the node name in the backup file.
- MaxScale: Remove/Register fixes
- MySQL: A strong root password is now auto-created if not specified explicitly by the job.
- MySQL: Refresh variables after restarting a node so the node is up to date in the UI.
- Prometheus: Make sure tar and gzip are installed so the packages can be deployed.
- cmon_upgrade.log: Added timestamps and filenames.
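The tar/gzip item above is a pre-flight dependency check. A hedged sketch of the kind of check involved (the messages are illustrative, not CMON's actual output):

```shell
# Pre-flight check (illustrative): verify tar and gzip exist before
# deploying Prometheus packages
if command -v tar >/dev/null && command -v gzip >/dev/null; then
    echo "ok: tar and gzip available"
else
    echo "missing: install tar/gzip first"
fi
```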
2020-05-03 clustercontrol-1.7.6-6846
Frontend
- HAProxy: Import error, fixed the job spec.
- Fixed redundant nodes when selecting 'Stream from Master' in the 'Create Slave Cluster' dialog.
- Backup: Fixed the error "Cannot set unknown key encrypt_backup on RecordType" shown in the UI when configuring a backup with backup verification (added the property to the Verification model and fixed unit tests).
- CSS Fixes.
2020-04-27 clustercontrol-controller-1.7.6-3892
Controller
- s9s_error_reporter is not working for Cluster ID 0 (fixed error-report fallback path).
- PostgreSQL: Include the pgdg common repo (for pgbackrest on centos/rhel).
- PostgreSQL: Fixes to failover in case of deleting/erasing the master's datadir.
- PostgreSQL: Ping now performs a disconnect first, so we can detect if no new connection can be made to the PostgreSQL server.
- MySQL: Bugfix for parsing the role syntax of a MySQL Db user, which could lead to the frontend failing to handle the request to show database users in Manage->Schema and Users.
2020-04-22 clustercontrol-1.7.6-6830 clustercontrol-controller-1.7.6-3880
Frontend
- Backup Schedule: When changing the backup method (from non-PgBackRest) to PgBackRest, the UI could become stuck.
- Overview graph: The Cluster Overview graph was truncated in some cases to 30 minutes instead of 1 hour.
Controller
- Verify Backup: A user will be notified by email if the verification fails.
- PostgreSQL: Backup Verification made a backup of datadir before restoring the backup, which was unnecessary.
- PostgreSQL: Fixed an issue with replication lag calculation and alarming.
- PostgreSQL: Skip nodes from failover that are lagging more than the MAX_REPLICATION_LAG setting.
- MySQL Replication: A fix for rebuilding replication slave where there was a race condition checking if MySQL is down.
- Galera: Fixed an issue when manipulating my.cnf files that could manifest itself as "Got error Could not read 'wsrep_provider_options'" when enabling Galera SSL Encryption.
- Galera: Add a Replication Slave in PXC 5.7 overwrote the my.cnf if it was a symlink.
- Password Escaping: Fixed a password escaping issue in the cmon configuration that could lead to, e.g., the Prometheus database exporters failing to connect to the database.
- LibSSH: Fixes to prevent zombie/defunct ssh proxy command processes (such as sssd_ssh_known_hosts_proxy) due to a missing waitpid in libssh.
2020-04-10
clustercontrol-1.7.6-6815
clustercontrol-controller-1.7.6-3854
clustercontrol-notifications-1.7.6-251
clustercontrol-cloud-1.7.6-241
clustercontrol-ssh-1.7.6-92
Feature Details
Cloud Deployment of HAProxy
- Deploy a database stack containing your favorite SQL database and HAProxy load balancer.
MySQL Freeze Frame (BETA)
- Snapshot MySQL process list before cluster failure.
Misc
- CMON Upgrade operations are logged in a log file.
- Many improvements and fixes for PostgreSQL Backup, Restore, and Verify Backup.
- A number of legacy ExtJS pages have been migrated to AngularJS.
ClusterControl v1.7.5
2020-04-08 clustercontrol-1.7.5-6810 clustercontrol-notifications-1.7.5-249 clustercontrol-cloud-1.7.5-239
Frontend
- Opsgenie Integration: A fix to allow the user to specify region when setting up the integration.
Notifications
- Opsgenie Integration: Fixed an issue resulting in the error "Failed to parse request body: parse error: expected string offset 11 of teams".
- Fixed an issue handling region.
- Improved and fixed a bug with http_proxy handling. An http_proxy/https_proxy can now be specified in /etc/proxy.env or /etc/environment.
Cloud
- Improved and fixed a bug with http_proxy handling. An http_proxy/https_proxy can now be specified in /etc/proxy.env or /etc/environment.
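The proxy handling above reads standard proxy environment variables. A minimal sketch of /etc/proxy.env (the host and port are illustrative):

```
# /etc/proxy.env -- read by the cloud and notifications services
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
```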
2020-04-07 clustercontrol-controller-1.7.5-3844
Controller
- HAProxy: Using ports 5433 (read/write) and 5434 (read-only) by default for PostgreSQL
- HAProxy: PostgreSQL - Read/write splitting was not setup when installing HAProxy from the S9s CLI.
- HAProxy: Installing HAProxy attempted to use the Backup Verification Server too.
- PostgreSQL: Fixed a never-ending 'Failover to a New Master' job, plus a cluster status bugfix (the cluster must be in the Cluster Failed state when there is no writable master).
- PostgreSQL: Dashboards: Failed to deploy agents in some cases on the Data nodes.
- PostgreSQL: recovery.conf/postgresql.auto.conf is now imported and can be edited in the UI.
- PostgreSQL: pg_hba.conf is now editable in the UI.
- PostgreSQL: pg_basebackup restore: first undo any previous PITR related options before restoring.
- PostgreSQL: Failed to Start Node for PostgreSQL.
- PostgreSQL: Fix pg_ctl status retval and output handling.
- PostgreSQL: Rebuild replication slave did not reset restore_command.
- Percona Server 8.0: Verification of partial backup failed.
- ProxySQL: Could not edit backend server properties in ProxySQL for Galera.
2020-04-01 clustercontrol-controller-1.7.5-3828 clustercontrol-notifications-1.7.5-243
Notifications
- cmon-events did not read MySQL connection details from /etc/cmon-events.cnf.
- Password handling: Passwords containing special characters were rejected by the cmon-events service.
- Remember to restart the service after the upgrade: "service cmon-events restart".
Controller
- Spelling fix for cluster action 'Schedule and Disable Maintenance Mode'.
- PostgreSQL: Verify Backup: Recreate the datadir and config file if missing on the Backup Verification Server.
- PostgreSQL: Failed to Start Node for PostgreSQL
- PostgreSQL: Failed to PITR pg_basebackup because standby_mode was ON, preventing the node from leaving recovery.
- PostgreSQL: Hide passwords from PostgreSQL logs
- Error Reporting: Fixed a number of small issues.
2020-03-31 clustercontrol-1.7.5-6794
Frontend
- Spelling fix for cluster action 'Schedule and Disable Maintenance Mode'.
2020-03-30 clustercontrol-controller-1.7.5-3819 clustercontrol-1.7.5-6791
Frontend
- PostgreSQL: Point in time recovery (PITR) - fixes when selecting stop time and timezone.
- PostgreSQL: Fixed and improved restore backup to show the correct options for pg_basebackup regarding PITR.
- Cloud Deploy: Added missing references to our online documentation on how to create/add cloud credentials.
- Sync Clusters: Sync the UI view of clusters with the controller.
Controller
- PostgreSQL: Recovery of Slaves will not commence if the master is down
- PostgreSQL: Verify Backup now works when Install Software is enabled and Terminate Server is disabled.
- PostgreSQL: Promote failed when WAL replay is paused.
- PostgreSQL: Point in time recovery (PITR) fixes for pg_basebackup.
- Notifications: Alarms raised by the controller are only sent once to each recipient.
Limitations:
- PostgreSQL PITR:
- If no writes have been made after the backup, then PITR may fail.
- Specifying a time too far in the future may also cause issues.
- We recommend using pg_basebackup in order to use PITR.
- PostgreSQL Backups [pgbackrest & pg_basebackup]:
- pgbackrest uses an archive_command that is not compatible with pg_basebackup, which means, e.g., that a pg_basebackup backup cannot be restored using PITR on a PostgreSQL server whose archive_command is configured for pgbackrest.
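The incompatibility above comes down to archive_command. An illustrative comparison of the two styles (the paths and stanza name are assumptions, not ClusterControl defaults):

```
# postgresql.conf, pgbackrest-managed archiving -- WAL segments go into the
# pgbackrest repository, so they cannot be replayed for pg_basebackup PITR
archive_command = 'pgbackrest --stanza=demo archive-push %p'

# postgresql.conf, plain file-copy archiving usable for pg_basebackup PITR
archive_command = 'cp %p /var/lib/pgsql/wal_archive/%f'
```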
2020-03-23 clustercontrol-controller-1.7.5-3797 clustercontrol-1.7.5-6757
Frontend
- Verify Backup: The temporary directory field was mandatory but not used at all.
- Prometheus: The graph for disk usage was incomplete.
- Prometheus: Not possible to change Prometheus deployment options when deployment failed.
- PostgreSQL: Point in time recovery (PITR) depends on PostgreSQL archive_command. An archive command suitable for PgBackRest is not working for pg_basebackup. Now, PITR options are only shown for a backup method if the underlying archive command supports it.
- PostgreSQL: PITR: Fixed timezone transformation for PITR
- Query Monitor: Fixed bug saving settings.
- Overview/Node Graphs: In some circumstances the date range could be the same for From Date and To Date, resulting in zero data points and no graph displayed.
- Audit Log: The timestamp in the auth.log file is off by 1h (default UTC)
- Error Reporting: A wrong Error Report Default Destination was shown.
Controller
Bugs Fixed:
- ProxySQL: Version is not updated in Topology view
- PostgreSQL: PG Master node fails if you Enable WAL archiving after promoting it
- PostgreSQL: Verify pg_basebackup (potentially other pg backup methods too) fails.
- PostgreSQL: Promoting a slave where a master cannot be determined or reached.
- PostgreSQL: Fixed an issue with pg_basebackup and multiple tablespaces (NOTE: encryption isn't supported for multiple tablespaces).
- PostgreSQL: PgBackRest with Auto Select backup host fails.
- PostgreSQL: Restoring PgBackRest backup on PostgreSQL12 failed.
- PostgreSQL: Make sure the recovery signal file is not present when enabling WAL log archiving.
- PostgreSQL: Fallback to server version from configuration when the information is not available in the host instance.
- PostgreSQL: Verify WAL archive directory for log files before performing PITR.
- Query Monitor: Disabling the Query Monitor by setting enable_query_monitor=-1 in /etc/cmon.d/cmon_X.cnf was not working.
- Galera: Force stop on the node does not prevent further auto-recovery jobs.
- Galera: The node recovery job failed but was shown in green.
- Galera: Backup was not working for non-synced nodes in a Galera Cluster. This allows mysqldumps to be taken on non-synced nodes, as the xtrabackup/mariabackup tools prevent this.
- MariaDB: MariaDB 10.3/10.4 promote slave action fails.
- Repository Manager: Updated and added missing versions and removed some deprecated versions.
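The Query Monitor item above refers to the per-cluster CMON configuration. A sketch of the intended setting (the filename follows the cluster id, as in the original item; a cmon restart is typically needed to apply it):

```
# /etc/cmon.d/cmon_X.cnf -- X is the cluster id
# -1 disables the query monitor for this cluster
enable_query_monitor=-1
```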
Behavior change:
- Backup Verification Server: Applies to MySQL based systems only (PostgreSQL coming soon). It is now possible to reuse an up-and-running Backup Verification Server (BVS). Thus, a BVS does not need to be shut down before verifying the backup.
- Host Discovery: A new way to execute host discovery and logging to /var/log/cmon_discovery*.log
2020-03-04 clustercontrol-1.7.5-6697
Frontend
- Auth logging: Added TZ support. The server TZ is used by default, but another TZ can be set in /var/www/html/clustercontrol/bootstrap.php.
2020-03-03 clustercontrol-controller-1.7.5-3735 clustercontrol-1.7.5-6695
Frontend
- Auth logging. Login/logouts and failed login attempts are stored in /var/www/html/clustercontrol/app/tmp/logs/auth.log
Controller
- PostgreSQL: Fixed a bug in Database Growth.
2020-03-01 clustercontrol-controller-1.7.5-3730 clustercontrol-1.7.5-6685
Frontend
- Cloud Deployment Wizard: Updated to the latest supported vendor versions.
- PostgreSQL: Fixed an issue showing replay_location in, e.g., the Topology View.
Controller
- MongoDB: The wrong template was used for MongoDB and Percona MongoDB 4.2.
- Query Monitor (MySQL): The datadir and slow_query_log_file variables were read too often.
- TimescaleDB: Rebuild slave failed on an installed but not registered TimescaleDB.
- MySQL/Galera: Upgrade MySQL/Galera packages in one batch instead of installing/upgrading them one-by-one.
- Include latest haproxy sample in the error-report.
- General: staging_dir from cmon.cnf was not respected.
- Percona 8.0: Could not deploy ProxySQL on a separate non-db node in Percona 8.0.
2020-02-09 clustercontrol-controller-1.7.5-3679 clustercontrol-1.7.5-6646
Frontend
- The Create Slave Cluster action was not working immediately after deploying a cluster.
- MaxScale: Make MaxScale available for Keepalived.
- Load balancers: Added options to avoid disabling SELinux and firewall.
- Cluster List: Fixed sorting of clusters.
Controller
- ProxySQL: Fixed a bug deploying ProxySQL on a separate node in a Percona Server 8.0 Cluster.
- Prometheus/Dashboards: Fixed a DNS resolution issue so that mysqld_exporter, with the db_exporter_use_nonlocal_address property set, properly handles the skip_name_resolve flag.
- PostgreSQL: Fixed an issue when the controller always tried to connect to a 'postgres' DB even if no database was specified.
2020-01-20 clustercontrol-controller-1.7.5-3666 clustercontrol-1.7.5-6627
Frontend
- Nodes Page: Fixed the time range on the Db Performance Graph.
- Nodes Page: Fixed the 'Swap Space' graph.
- Cluster List: Fixed a sorting order issue.
Controller
- ProxySQL: Fixed a bug where backup failed.
- Replication/MySQL 8.0: Fixed a bug deploying MySQL 8.0 (Oracle) on CentOS 8/RHEL 8 and also fixed adding a replication slave.
- Prometheus/Dashboards: Make sure dnsLookup is called for the 'useNonLocalAddress' option, so the IP address is granted access too.
- MaxScale: Enabled installation on Debian10/Buster.
2020-01-20 clustercontrol-controller-1.7.5-3638 clustercontrol-1.7.5-6619
Frontend
- MongoDB: Added versions 4.0 and 4.2 for both the mongodb.org and Percona vendors in the UI.
- MySQL/Backup: Added 'qpress' compression option.
- Backups: Netcat/socat port is now specified in 'Global Settings'.
- Backups: Added check on Failover host so it cannot be set to the same value as the primary backup host.
- Cluster List: Fixed a sorting order issue.
Controller
- MySQL/Backup: Auto-install 'qpress' during restore/verify when required.
- MySQL/Replication: Fixed a segfault that could happen during master failover in MySQL 8.0.
- MySQL: Disable unsupported variables for 5.5.
- ProxySQL: Avoid executing SQL init commands on the connection (crashing bug in ProxySQL 1.4.10, fixed in ProxySQL 1.4.13).
- MongoDB 4.2: Fixed an issue importing a cluster caused by newlines in the keyfile.
- MongoDB: Fixed a missing cloud badge on MongoDB clusters created in the cloud.
- PostgreSQL: Improve the free disk space detection before rebuild slave.
- PostgreSQL: Create cluster in the cloud failed because no postgresql version was specified.
- PostgreSQL: Auto-rebuilding failed replication slaves now uses the full node rebuild strategy instead of pg_rewind, as the latter is known to fail in a number of scenarios.
- Dashboards/prometheus exporters: New configuration option: db_exporter_use_nonlocal_address
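The new option above is set in the per-cluster CMON configuration. A hedged sketch (the file path is illustrative):

```
# /etc/cmon.d/cmon_<cluster_id>.cnf -- illustrative
# When enabled, database exporters connect using the node's non-local
# (network) address instead of localhost
db_exporter_use_nonlocal_address=1
```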
2020-01-07 clustercontrol-controller-1.7.5-3616 clustercontrol-1.7.5-6604
Frontend
- Cluster Overview (MySQL based clusters): Fixed an issue with the Query Outliers which relied on deprecated code.
- Node Actions: The Stop Node action is always visible so it is always possible to stop a node.
Controller
- Notifications: Fixed an error with certain SMTP servers ("550 5.6.11 SMTPSEND.BareLinefeedsAreIllegal").
- PostgreSQL 9.7 with TimescaleDB: Add node failed on CentOS 7 & CentOS 8.
2019-12-18
clustercontrol-1.7.5-6599
clustercontrol-controller-1.7.5-3601
clustercontrol-notifications-1.7.5-201
clustercontrol-ssh-1.7.5-88
clustercontrol-cloud-1.7.5-225
In this release we are introducing cluster-wide maintenance mode, taking snapshots of the MySQL database status and processlist before a cluster failure, and support for new versions of PostgreSQL, MongoDB, CentOS and Debian.
We have previously supported maintenance mode for one node at a time; however, more often than not you want to put all cluster nodes into maintenance.
Cluster-wide maintenance mode enables you to set a maintenance period for all the database nodes / cluster at once.
To assist in finding the root cause of failed database nodes we are now taking snapshots of the MySQL status and processlist which will show you the state of the database node around the time where it failed. Cluster incidents can then be inspected in an operational report or from the s9s command line tool.
Finally, we have added support for CentOS/RedHat 8, Debian 10, and deploying/importing MongoDB v4.2 and Percona MongoDB v4.0.
Feature Details
Cluster Wide Maintenance
- Enable/disable cluster-wide maintenance mode with a cron-based schedule.
- Enable/disable recurring jobs such as cluster or node recovery with automatic maintenance mode.
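Cluster-wide maintenance can also be driven from the s9s CLI. A hedged sketch (the exact flags vary by s9s version; the cluster id and window are illustrative):

```
# Create a maintenance window for every node in cluster 1 (illustrative)
s9s maintenance --create \
    --cluster-id=1 \
    --start="2019-12-20 01:00:00" \
    --end="2019-12-20 03:00:00" \
    --reason="OS patching"
```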
MySQL Freeze Frame (BETA)
- Snapshot MySQL status before cluster failure.
- Snapshot MySQL process list before cluster failure (coming soon).
- Inspect cluster incidents in operational reports or from the s9s command line tool.
Updated Version Support
- CentOS 8 and Debian 10 support.
- PostgreSQL 12 support.
- MongoDB 4.2 and Percona MongoDB v4.0 support.
Misc
- Synchronize time range selection between the Overview and Node pages.
- Improvements to node status updates to be more accurate and with less delay.
- Enable/disable Cluster and Node recovery are now regular CMON jobs.
- Topology view with cluster to cluster replication.
ClusterControl v1.7.4
2019-12-16 clustercontrol-1.7.4-6594 clustercontrol-controller-1.7.4-3596
Frontend
- AWS: Updated region dropdown list.
Controller
- PostgreSQL: Failed to start PostgreSQL after a VM halt & reboot (because of a missing socket directory).
- HAProxy: Fixed a parser issue and added '^' to the supported string/regexp characters list.
2019-12-01 clustercontrol-controller-1.7.4-3565
Controller
- Replication: Removing BVS server failed.
- Deploy: Dropped ntp package dependency during deployment.
2019-11-23 clustercontrol-1.7.4-6537 clustercontrol-controller-1.7.4-3556
Frontend
- Topology View: Show link to remote cluster.
- Rebuild Replication Slave: Wrong cluster id was sent in the job.
Controller
- ProxySQL 2.x: Setting writer_is_also_reader=2 in mysql_galera_hostgroups.
- ProxySQL 1.x: "Sync Instances" failed with "no such table: mysql_galera_hostgroups".
- HAProxy: Fixed the HAProxy parser (extended the supported characters with: |[]).
- MySQL: Backups could not be taken with xtrabackup 2.4.12.
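The writer_is_also_reader setting above lives in ProxySQL 2.x's mysql_galera_hostgroups table. A hedged sketch of inspecting and adjusting it on the ProxySQL admin interface (the hostgroup ids are illustrative):

```sql
-- On the ProxySQL admin interface (typically port 6032)
SELECT writer_hostgroup, reader_hostgroup, writer_is_also_reader
  FROM mysql_galera_hostgroups;

UPDATE mysql_galera_hostgroups SET writer_is_also_reader = 2
 WHERE writer_hostgroup = 10;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```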
2019-11-18 clustercontrol-controller-1.7.4-3543
Controller
- Fixed a crashing bug.
2019-11-17 clustercontrol-1.7.4-6513 clustercontrol-controller-1.7.4-3541
Frontend
- Rebuild Replication Slave: Wrong cluster id was sent in the job.
Controller
- Email/digest: Fixed an issue where too many digest messages were sent in certain cases.
- Email/digest: Fixed an issue sending blank digest emails.
- Host Discovery: Fixed a deadlock issue.
- PostgreSQL: Rebuilding a slave failed as the master could not be found.
2019-11-09 clustercontrol-controller-1.7.4-3526 clustercontrol-1.7.4-6502
Frontend
- Query Monitor -> Running Queries: Added a Refresh button and fixed an issue limiting the result set to 200 records.
- Alarms: Fixed a bug with Ignore alarms.
Controller
- ProxySQL: Deploy failed with the import configuration option.
- Cluster to cluster replication: Failed to locate master when creating slave cluster from backup.
- Replication: Percona Server 8.0 replication cluster creation failed during repl_user user creation.
2019-11-06 clustercontrol-controller-1.7.4-3519
Controller
- Cluster to cluster replication: Check the master exists in the parent cluster before attempting to stage the cluster.
2019-11-01 clustercontrol-controller-1.7.4-3512 clustercontrol-1.7.4-6483
Frontend
- MySQL: Added an option to RESET SLAVE / RESET SLAVE ALL.
- MongoDb: Removed MongoDb 3.2 as an option on Ubuntu 18.04.
- Dashboard: Added Dashboards to the ACL list.
- Query Monitor -> Running Queries: Added a Refresh button and fixed an issue limiting the result set to 200 records. However, automatic reloading affects the pagination; this will be fixed in the next patch release.
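The RESET SLAVE option above maps to standard MySQL statements, which differ in scope:

```sql
STOP SLAVE;
RESET SLAVE;      -- deletes relay logs but keeps the master connection settings
RESET SLAVE ALL;  -- also clears the master host, port, and credentials
```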
Controller
- ProxySQL 2.0: The proxysql_galera_checker script is not needed any longer and instead ClusterControl uses the mysql_galera_hostgroups table.
- PostgreSQL: Copy mandatory values from the master's config into the slave's config when configuring replication (as per https://www.postgresql.org/docs/9.6/hot-standby.html#HOT-STANDBY-ADMIN ).
- PostgreSQL: Database growth had an issue when detecting disk space.
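The mandatory values copied for hot standby above are the resource settings PostgreSQL requires to be at least as high on the standby as on the primary. An illustrative excerpt (the values are examples, not ClusterControl defaults):

```
# postgresql.conf on the standby -- must be >= the primary's values
max_connections = 100
max_prepared_transactions = 0
max_worker_processes = 8
max_locks_per_transaction = 64
```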
2019-10-28
clustercontrol-1.7.4-6459
2019-10-25
clustercontrol-1.7.4-6451
2019-10-24
clustercontrol-1.7.4-6442
clustercontrol-controller-1.7.4-3503
clustercontrol-cloud-1.7.4-220
clustercontrol-ssh-1.7.4-84
clustercontrol-notifications-1.7.4-190
In this release we now support cluster to cluster replication for MySQL Galera and PostgreSQL clusters.
One primary use case is for disaster recovery by having a hot standby site/cluster which can take over when the main site/cluster has failed.
We also added support for MariaDB 10.4/Galera 4.x, ProxySQL 2.0 and managing database users for PostgreSQL clusters.
Feature Details
Cluster to Cluster Replication
- Asynchronous MySQL replication between MySQL Galera clusters.
- Streaming replication between PostgreSQL clusters.
- Clusters can be rebuilt with a backup or by streaming from a master cluster.
Misc
- MariaDB 10.4/Galera 4.x support.
- ProxySQL 2.0 support.
- Database User Management for PostgreSQL clusters.
=========================================================================
2019-10-21 clustercontrol-controller-1.7.3-3496
Controller
- HAProxy: Added 'tcp-check connect' to configuration templates.
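The 'tcp-check connect' directive adds an explicit TCP connect step to HAProxy health checks. An illustrative backend snippet (the names, addresses, and check port are assumptions, not the actual template):

```
# haproxy.cfg -- illustrative backend using tcp-check
backend galera_rw
    option tcp-check
    tcp-check connect port 9200
    server db1 10.0.0.11:3306 check port 9200
```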
2019-10-20 clustercontrol-controller-1.7.3-3494 clustercontrol-1.7.3-6429
Frontend
- PostgreSQL: Fixed an issue with the charts on the Cluster Overview page
Controller
- PostgreSQL: Query Monitor -> Query Statistics, Exclusive Lock Waits was not working correctly and did not display all data.
- Dashboard/SCUMM: Fixed an issue recovering Prometheus exporters in the case of cluster nodes co-located across multiple clusters.
- MongoDb: Importing a single node will now fail if the node is not set up as a replica set member. Thus, it is the user's responsibility to convert the node to a member before importing it.
2019-10-13 clustercontrol-controller-1.7.3-3482 clustercontrol-1.7.3-6403
Frontend
- Dashboards/PostgreSQL: Fixed an issue with Idle and Active Connections
- Backup: Could not use the backup verification server due to a bug in host discovery.
- Email Notifications: Improvements to email validation of 'External' users and Adding/Removing of these 'External' users.
Controller
- Dashboards/PostgreSQL:
- Active and Idle connection dashboards were not working for PostgreSQL.
- Reverted back to postgres_exporter 0.4, as 0.5 was very buggy. A redeploy of the postgres_exporter is needed.
- Node Charts: The Node CPU chart was incorrect on CentOS 6/RHEL 6 because it had one less column (no guest-low counter value).
- Notification: Fixed daily limit handling of e-mail message recipients, where -1 was not handled correctly.
- Error-reporting: There was a problem with file listing when multiple files were specified, since we bash-escape the paths for safety; now it works fine.
- HAProxy: Fixed an issue parsing the HAProxy config file.
- HAProxy: While setting up HAProxy for PostgreSQL, reading the old password from the checker script failed.
- PostgreSQL: Importing a node/cluster: if logging_collector=OFF and the user has not specified a custom log file, the job is aborted and the user must specify one.
2019-09-29 clustercontrol-controller-1.7.3-3450 clustercontrol-1.7.3-6368
Frontend
- MySQL: Performance -> Transaction Log uses timestamp and not epoch.
- Fixed usability issues with Runtime Configuration, making it easier to read.
- PostgreSQL: Fixes in Import / Add Replication Slave dialogs with respect to Port and Logfile fields.
Controller
- MySQL: Performance -> Transaction Log uses timestamp and not epoch.
- MySQL: Fixed an issue with excessive logging of long running queries.
- HAProxy: Fix of parsing errors during collect_configs cronjob (in case of HAProxy and ProxySQL nodes).
- error-reporter: Include complete cmon log files and not only the last rows.
2019-09-24 clustercontrol-controller-1.7.3-3440
Controller
- MySQL based systems: Fixed an issue with excessive logging of long-running queries.
- SSH Communication: A number of improvements which fixes intermittent errors like 'test sudo failed' and 'SUDO failed'.
2019-09-17 clustercontrol-controller-1.7.3-3428
Controller
- MySQL Replication Clusters: A fix to update the status of the failed server in ProxySQL. The old master will now be marked as OFFLINE_SOFT. Any node that is not part of the replication topology is marked as OFFLINE_SOFT.
- Added a fix for a crash that could occur if a database connection could not be established.
2019-09-10 clustercontrol-controller-1.7.3-3413
Controller
- MariaDb: Setting innodb_thread_concurrency=0 due to https://jira.mariadb.org/browse/MDEV-20247
2019-09-08 clustercontrol-1.7.3-6340, clustercontrol-controller-1.7.3-3407
Frontend
- Backup: Fixed an issue with scheduling a backup. When using cron settings, timezone conversions to UTC could move a specified hour to another day.
- LDAP: The wrong LDAP status was shown in the UI.
- Email Notifications: Adding a recipient without having any clusters installed failed.
Controller
- ProxySQL: Inserting a query rule with a duplicate query rule id caused the query rule ids smaller than the duplicate to become negative.
- Prometheus version bump to v2.12
- PostgreSQL: On RedHat systems the default datadir was set to 'main' instead of 'data'.
- MongoDb: Retention failed because all mongo backups were recognised as partial, and a partial backup can only be removed if there is more than one full backup.
- A fix for an infinite amount of 'Job query is working again.' log messages in the cmon log.
- Removing storage of log messages in a deprecated table called 'collected_logs'.
2019-08-24 clustercontrol-1.7.3-6322, clustercontrol-controller-1.7.3-3388
Frontend
- PostgreSQL: Add Slave: help text next to "logfile" text box.
Controller
- Import/Add Cluster: Specified sudo password was not respected.
- MongoDb: Importing a cluster failed even if the CAFile is specified following an error where it was not specified, because existing cert data was not updated in cmon's certificate storage.
- Controller: Keep trying to connect to the MySQL server even if it has not started, instead of giving up and exiting.
- PostgreSQL: The whitelist was not working as documented.
- SCUMM/Prometheus: General small improvements with disk device detection and mapping.
2019-08-17 clustercontrol-controller-1.7.3-3374
Controller
- PostgreSQL: Fixed a crashing bug caused by assuming that 'cluster_name' always has a value.
- PostgreSQL/pgbackrest: Fixed an issue where a backup appeared as failed when the backup.manifest was encrypted. Please note that the backup.manifest record is not decrypted, so some metadata may not be updated (pending feature request).
- Controller backup/save controller: Fixed an issue saving the controller with a non-quoted password causing mysqldump to fail.
- ProxySQL: Fixed an issue where an error message was repeated due to trying to connect from a remote node using the 'admin' user, which is forbidden in ProxySQL.
- Error Reporting: Fixed a user handling issue, causing the error report to fail.
- MySQL: Database Growth, adding more verbose logging in case of issue.
2019-08-15 clustercontrol-1.7.3-6298, clustercontrol-controller-1.7.3-3370
Controller
- Performance -> Transaction Log: Fixed an issue with pagination.
Frontend
- Performance -> Transaction Log: Fixed an issue with pagination.
- Fixed an issue with JS code generation for older browsers by upgrading corejs.
2019-07-29 clustercontrol-1.7.3-6279, clustercontrol-controller-1.7.3-3336
Controller
- Added support for openntpd as an alternative to the ntp dependency.
- MySQL 8.0: Fixed an issue where the keyword 'groups' was used in a query.
- Improved error reporting in case of SSH errors when trying to determine the MySQL connect string.
- PostgreSQL: Create a symlink to a custom log file during add existing cluster as well, not only during add existing node.
- PostgreSQL: When adding an existing cluster, a custom specified log file will be used if logging_collector is off.
- PostgreSQL: Fixed an issue detecting log files.
- MySQL: A password could be visible in the 'ps' output of a node when the cmon database was updated at controller startup.
- Create/register cluster: Handle 'company_id' if provided, otherwise we try to query it up by user_id as a fallback.
Frontend
- Fixed an issue where a cluster could not be registered due to a missing company id / team id.
2019-07-24 clustercontrol-1.7.3-6270
Frontend
- Fix an issue saving and pushing out edited configuration files (Configuration Management).
- Fix an issue with the Overview page not being properly shown after switching between tabs (PostgreSQL).
2019-07-18 clustercontrol-1.7.3-6255, clustercontrol-controller-1.7.3-3319
Controller
- Postgres: Fixes in log file handling to check if the log collector is already enabled. This could result in, e.g., the wrong log file being used.
- Postgres: A fix in multi-node support when adding nodes that could lead to nodes not being part of the replication topology.
- Postgres: Fixed an issue when the logfile was not owned by the postgres user.
- Postgres: Updated the repository signature.
- TimescaleDb: Fixed an issue adding a replication slave due to a version mismatch.
- TimescaleDb: Fixed an issue when rebooting TimeScale (and PostgreSQL) master results in two master nodes.
- MariaDb/Replication: Fixed an issue with Promote Slave (switchover).
- MariaDb/Galera: Fixed a check for the wsrep_sst_method to check whether xtrabackup vs. mariabackup is used.
- MySQL/MariaDb: Importing a cluster could fail as it assumed bind_address existed as a server system variable.
Frontend
- Add a workaround to sort the cluster list by name, status, or type with a new bootstrap.php variable (instead of using cluster_id by default):
- define('CLUSTER_LIST_SORT_BY', 'name'); # sort by cluster name
- Add additional information on how to use the 'Stanza Name' with PgBackRest backups
- Add missing confirmation dialog for MongoDB restore backup
2019-07-16 clustercontrol-1.7.3-6242
Frontend
- Fix a HTML formatting issue when trying to change non-dynamic parameters in Configuration Management (MySQL).
- Fix an issue with the Nodes->DB Performance chart which requested unfiltered datasets.
2019-07-12 clustercontrol-1.7.3-6226
Frontend
- Fix missing mysqldump backups (PITR) for 'Add Replication Slave' when rebuilding with a backup.
- Fix incompatible array notation with PHP v5.3.
ClusterControl v1.7.3
2019-07-02
clustercontrol-1.7.3-6209
clustercontrol-controller-1.7.3-3293
clustercontrol-cloud-1.7.3-217
clustercontrol-ssh-1.7.3-79
clustercontrol-notifications-1.7.3-182
In this release we have added support for running multiple PostgreSQL instances on the same server with improvements to PgBackRest to support those environments.
We have also added additional cluster types to our cloud deployment and support for scaling out cloud deployed clusters with automated instance creation. Deploy MySQL Replication, PostgreSQL, and TimeScaleDB clusters on AWS, GCE, and Azure.
Feature Details
PostgreSQL
- Manage multiple PostgreSQL instances on the same host.
- Improvements to pgBackRest with non-standard instance ports and custom stanzas.
- New Configuration Management page to manage your database configuration files.
- Added metrics to monitor Logical Replication clusters.
Cloud Integration
- Automatically launch a cloud instance and scale out your database cluster by adding a new DB node (Galera) or replication slave (Replication).
- Deploy following new replication database clusters:
- Oracle MySQL Server 8.0
- Percona Server 8.0
- MariaDB Server 10.3
- PostgreSQL 11.0 (Streaming Replication).
- TimescaleDB 11.0 (Streaming Replication).
Misc
- Backup verification jobs with xtrabackup can use the --use-memory parameter to limit the memory usage.
- A running backup verification server will show up in the Topology view as well.
- MongoDB sharded clusters can add/register an existing MongoDB configuration node.
- The clustercontrol-cmonapi (CMON API) package is deprecated from now on and no longer required.
- A few more legacy ExtJS pages have been migrated to AngularJS:
- Configuration Management for MySQL, MongoDB, and MySQL NDB Cluster.
- Email Notifications Settings.
- Performance->Transaction Logs.
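The `--use-memory` limit mentioned under Misc can also be persisted in the backup tool's option group so every verification run picks it up; an illustrative my.cnf fragment (the 1G value is an example, not a recommendation):

```ini
[xtrabackup]
# Caps memory used during the --prepare (apply-log) phase of verification
use-memory = 1G
```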
ClusterControl v1.7.2
2019-06-12 clustercontrol-controller-1.7.2-3142
Controller
- Fixed a CmonDb schema issue on older MySQL server versions manifesting itself as 'Specified key was too long; max key length is 767 bytes'.
- MaxScale: A fix for imported MaxScale. When importing MaxScale, the utility maxctrl is used and works currently only with socket communication on the MaxScale host itself.
- Jobs: Log files contain job spec with sensitive data.
- MariaDb: Fixed an issue where deployment of MariaDB 10.0 on CentOS 6 failed.
- Postgres: Fixed a bug that could crash cmon in case wal log retention was disabled and fixed a printout in PITR job output.
2019-06-11 clustercontrol-1.7.2-6137
Frontend
- Memory leak fixes when leaving the web application open for extended periods of time (days).
- Fixes to the database software upgrades form to show correct versions supported.
Note: Only upgrades within minor versions are supported.
2019-05-24 clustercontrol-1.7.2-6069 clustercontrol-controller-1.7.2-3199
Frontend
- Deployments: Custom configuration templates can now be selected at deployment.
- Cluster Overview:
- 'Server Load' graphs were not properly displayed (PostgreSQL).
- Changing the 'Server Load' graph would not accurately show only one metric (PostgreSQL).
- Disk Reads/Writes and Uptime were set to 0 (PostgreSQL).
- Disk bytes read/written were not calculated with correct sector value of 512 bytes.
- Switching between dashboards with a specific set of steps could cause the overview page to render an empty page.
Controller
- Deadlock detection temporarily disabled for MySQL/Percona 8.0. It will be supported in the next major release.
- mysqldump failed with MySQL/Percona 8.0 because of missing show_compatibility_56=ON setting. It is now on for versions >= 5.7.6.
- Agent Based Monitoring (Prometheus):
- Uptime was set to 0.
- Disk stats for the controller are now also available.
- node_disk_written_bytes_total|node_disk_read_bytes_total are now also collected.
- Reverting to nc instead of socat on Ubuntu 16.04 due to a bug with socat's server name resolve when it starts with a number.
- Manual failover with MariaDB 10.1 for MySQL Replication cluster is now correctly flushing logs before switchover.
- Restore backup on Mongos (routers) failed to copy the data dir.
2019-05-16 clustercontrol-controller-1.7.2-3185 clustercontrol-1.7.2-6032
Frontend
- Nodes Page: Fixed an issue with y-axis scaling on the Disk Utilization chart.
- Nodes Page: Selecting the menu 'Add Replication Slave' and adding a slave was impossible while a node recovery job was running.
- MongoDB: Fixed an issue where the Restore backup dialog would not close after pressing "Finish".
Controller
- Monitoring/SCUMM: Fixed URL password encoding for the postgres_exporter and the mysql exporter, which could cause 'No data points' in Dashboards -> Postgres Overview.
- Monitoring/SCUMM: A fix for disk stats to be properly shown when using LVM volumes in the Nodes -> Disk charts.
2019-05-07 clustercontrol-controller-1.7.2-3167
Controller
- MySQL 8.0: Updated imperative language files to complete the previous release's fix: "Fixed an issue preventing db users from being created on MySQL 8.0".
2019-05-06 clustercontrol-1.7.2-5997 clustercontrol-controller-1.7.2-3163
Frontend
- Filtering out incomplete/failed backups from restore backup dialogs.
- MySQL Single (standalone servers): Fixed filtration logic to show the Master Nodes for MySQL Single clusters.
Controller
- MySQL 8.0: Fixed an issue preventing db users from being created on MySQL 8.0.
- Config file handling fix for docker (we mount /etc/cmon.d there and /etc/cmon.d/cmon.cnf is the main config)
2019-04-30
clustercontrol-1.7.2-5989
clustercontrol-controller-1.7.2-3155
Frontend
- Query Monitor - Query Outliers
- Fixed an issue related to date range.
- Performance > Innodb Status
- Fixed an issue when the InnoDb Status was not always shown.
Controller
- ProxySQL: Fixed an issue with importing users on MariaDb 10.2 and later.
- Galera: Fixed an issue when the recovery job was closed prematurely. This had the effect that Create Cluster could fail.
- SCUMM: Preserve the exporters of other clusters in the Prometheus configuration during (re)deployment. (NOTE: users with multiple clusters and a wrong Prometheus configuration may need to re-deploy Prometheus on the affected 'No data points' clusters).
- Query Monitor: Fixed an issue where queries were dropped following a schema update when upgrading clustercontrol-controller.
2019-04-19
clustercontrol-1.7.2-5959
clustercontrol-controller-1.7.2-3141
Frontend
- Query Monitor
- Selecting/clicking on a query didn't show the query details.
- Top queries page was empty for a single node galera cluster.
- MongoDB
- Restore backup menu item was missing.
- Restore backup dialog form was empty for single node replica sets.
- Spotlight: Performance improvements when you have several clusters / nodes.
- Cloud deployments now use the same package versions as the on-premise deployments.
Controller
- MySQL Replication: Fixed an issue with slave promotion causing an errant transaction to appear.
- Security: Fixed permissions on all cmon generated config files to be 0600.
- Galera (MariaDb): Increased start timeout for a longer SST in the mariadb.service override systemd file.
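The permissions hardening above amounts to the following, sketched here with an illustrative file and GNU `stat`:

```shell
# Create a generated config file and restrict it to owner read/write (0600),
# as cmon now does for all config files it generates.
cfg=$(mktemp)
printf '[cmon]\nmysql_password=secret\n' > "$cfg"   # illustrative contents
chmod 0600 "$cfg"
stat -c '%a' "$cfg"   # prints 600
```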
2019-04-04
clustercontrol-1.7.2-5926
clustercontrol-controller-1.7.2-3117
clustercontrol-cmonapi-1.7.2-342
clustercontrol-notifications-1.7.2-176
clustercontrol-ssh-1.7.2-73
clustercontrol-cloud-1.7.2-196
We are proud to announce an expansion of the databases we support to include TimescaleDB, a revolutionary new time-series database that leverages the stability, maturity and power of PostgreSQL. TimescaleDB can ingest large amounts of data and then measure how it changes over time. This ability is crucial to analyzing any data-intensive, time-series data.
For ClusterControl, this marks the first time we support time-series data, strengthening our mission to provide complete life cycle support for the best open source databases and expanding our ability to support applications like IoT, Fintech and smart technology.
In this release you can now deploy TimescaleDB and also turn an existing PostgreSQL server into a TimescaleDB server. PostgreSQL clusters also gain a new backup method, pgBackRest, database growth charts, and improvements to managing your configuration files.
MySQL users can start to deploy and import MySQL 8.0 servers with Percona and Oracle MySQL and our new Spotlight search helps you navigate through pages, find nodes and perform actions faster.
Finally, we are also providing a beta version to setup CMON / Controller High Availability using several ClusterControl instances wired with a consensus protocol (raft) between them.
Feature Details
TimescaleDB - optimized for time-series data using SQL
- Deploy a TimescaleDB server with PostgreSQL (v9.6, v10.x and v11.x).
- Turn an existing PostgreSQL server (v9.6, v10.x and v11.x) into a TimescaleDB server.
PostgreSQL
- Database growth graphs. Track the dataset growth on your databases.
- Support for pgBackRest as a backup tool:
- Create full, differential and incremental backups.
- Restore full, differential, incremental backups.
- PITR - Point In Time Recovery is supported.
- Enable compression and specify compression level.
MySQL 8.0 Support
- Cluster deployment and import of 'replication' type clusters available with:
- Percona Server for MySQL 8.0
- Oracle MySQL 8.0 Server
- Support for 'caching_sha2_password'.
CC Spotlight
- Use our new spotlight search to quickly open pages, find nodes/hosts and perform cluster and node actions.
- Click on the search icon or use the keyboard shortcut CTRL+SPACE to bring up the spotlight.
CMON / Controller High Availability (BETA)
- CMON HA uses a consensus protocol (raft) to provide a high availability setup with more than one cmon process.
- Setup a 'leader' CMON process and a set of 'followers' which share storage and state using a MySQL Galera cluster.
Misc
- Support the use of private IPs when you deploy a cluster to AWS.
- MaxScale - improved support for v 2.2 and later using maxctrl.
- Automatic vendor/version detection for importing MariaDb/MySQL based clusters.
============================================================
2019-03-25 clustercontrol-controller-1.7.1-3085
Controller
- Resolve hostnames (to IPv4) when checking a host if it exists already in other clusters.
- MongoDb: Adding the missing sharding:clusterRole:shardsvr value in mongod.conf when the add node job is used.
- MaxScale: connection not authorized after the deploy with CC. More fixes to improve 2.3 and later support.
- Backup: Do not fail backup if wsrep desync can't be turned off, and we must set the retention on backup report even if it was marked as failed.
- Monitoring/SCUMM: haproxy_exporter: Don't append --haproxy.scrape-uri if it is already set.
- Replication: Can't add replication slave to an existing slave. Let's be stricter and do not tolerate >1 writable when setting up.
- s9s_error_reporter: make sure cmon is started, also print out the service status.
- PostgreSQL: Fixing an issue when a system file protection method denied the proxy-disable file removal
- Package handling/ YUM: Fix for a situation when package update gets stuck on user input (to accept some GPG signature).
- SSH: A fix/workaround to handle the 'forced user password change' situation if user password expires (passwd --expire USERNAME) and is prompted to change upon a successful authentication.
- SSH: limit the number of sent newline chars.
- Updated Oracle repository key due to expiration.
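The duplicate-host check described in the first item resolves hostnames to IPv4 before comparing; roughly equivalent shell, assuming a glibc system with `getent`:

```shell
# Resolve a hostname to its first IPv4 address, the form in which hosts are
# compared against those already registered in other clusters.
ip=$(getent ahostsv4 localhost | awk 'NR==1 {print $1}')
echo "$ip"   # e.g. 127.0.0.1
```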
2019-03-18 clustercontrol-1.7.1-5812
Frontend
- Allow empty SMTP username and SMTP password for the SMTP configuration.
- Fix an issue for failing to stop MySQL slave threads (IO and SQL).
2019-03-05 clustercontrol-controller-1.7.1-3056
Controller
- Advisors: Fixed an issue with the wsrep_cluster_address.js where an "internalHostName" method was missing.
- MongoDb: Use the mongodb OS user depending on the OS and package when setting up ssl.
- PostgreSQL: Fixed a PostgreSQL grant failure because of client locale setting.
- PostgreSQL: Workaround a PostgreSQL service initdb bug. Now we call directly the 'initdb' binary. The relevant original bug report: https://www.postgresql.org/message-id/20171208104120.21687.74167@wrigleys.postgresql.org
2019-02-27 clustercontrol-notifications-1.7.1-173
Frontend
- Fix for cmon-events to prevent Avast from reporting it as malware (Telegram API).
- Fix for cmon-events to start even if the MySQL server has not started first.
2019-02-20 clustercontrol-1.7.1-5720
Frontend
- Keepalived: Fixed an issue importing Keepalived.
- HAProxy: Dashboard fixes (SCUMM).
- Nodes Page: Removed the tab 'Logs' as it is deprecated and found in 'Logs->System Logs' instead.
2019-02-18 clustercontrol-controller-1.7.1-3032
Controller
- Maria Backup: Fixed an issue parsing LSN in mariabackup >= 10.2.22.
- Prometheus: Fixed an issue when restarting a failed exporter.
2019-02-13 clustercontrol-controller-1.7.1-3027
Controller
- Galera: A fix to the wsrep_cluster_address.js advisor to also check the internal/private hostname/IP-addresses.
- MySQL: skip missing grant alarms on backup-verification nodes.
2019-02-13 clustercontrol-controller-1.7.1-3026 clustercontrol-1.7.1-5700
Frontend
- ProxySQL: Fixed an issue in the pagination structure in ProxySQL sync making it impossible to Import/Export/Sync ProxySQL Configurations
- Fixed an issue regarding REPLICATION LAG where the lag was presented as a derived value instead of an absolute when viewing the individual servers.
- Fixed an issue with rebuild replication slave from incremental backup dialog.
Controller
- Fixed an issue regarding stats aggregation. This could manifest itself as spikes, particularly in REPLICATION_LAG.
- Keepalived: Small update for registering keepalived; the service port must be corrected to 112.
- Process Management: A fix for a file descriptor leak when an internal object was reused.
- MongoDb 4.0: A fix for creating mongodb replica sets by checking executed mongodb commands for more error messages.
2019-02-06 clustercontrol-controller-1.7.1-3016 clustercontrol-1.7.1-5673
Frontend
- Deploy HAProxy on PostgreSQL: Fixed an issue where the dialog was stripped and did not load completely.
- Performance -> Db Variables: Variables with different values were not marked in red
- Dashboards: System Overview, improved the readability of the CPU Usage chart.
- PostgreSQL Query Monitor: Removed tuning advice and the option to purge queries, as purging is not possible.
Controller
- Configuration Changes: Fixed an issue where the owner and privileges of a config file was not preserved.
- Deploy/Create Cluster From Backup: A fix to prevent the restore backup from running in another job.
- ProxySQL: Replaced the old galera_checker script for ProxySQL with the new 2.0 version.
- ProxySQL: Improved s9s CLI and cmon such that making a proxysql configuration backup can be performed using the s9s CLI.
- Advisors: A new script to check prepared statement exec limits. The advisor script must be manually scheduled by the administrator.
- Alarm Notifications: The Memory Utilisation alarm was not showing all processes in the included 'top' view.
2019-01-22 clustercontrol-controller-1.7.1-2994 clustercontrol-notifications-1.7.1-168
Controller
- MySQL/Galera: Fixed a bug related to the loading of Disk/CPU/Net stats on the Cluster Overview page.
- HAProxy/ProxySql/Garbd: disable firewall/selinux (if requested by the job, default is true for both values).
- Replication: Added a small hint about --report-host argument being required for add existing slaves.
- MongoDb: Fixed an issue when a rolling restart was attempted, but a stop/start of the cluster was required when setting up SSL.
- MongoDb: Added 'server_selection_try_once' and 'server_selection_timeout_ms' settings to allow the user to fine-tune connection settings when, e.g., the network is slow. Run cmon --help-config to see the complete description.
Cmon-events/notifications
- Fixes to logging
- The license check failed due to a wrong field name, preventing e.g. notification plugins from receiving alarm events.
2019-01-13 clustercontrol-controller-1.7.1-2985
Controller
- Bugfix for SSH connection negotiation failure on compression methods.
- HAProxy: A configuration error could occur when adding a new node, a 'none' word was wrongly added to the HAProxy configuration.
- HAProxy: Deploying HaProxy fails when it builds from source. Missing zlib1g-dev / zlib dependency.
- HAProxy: xinetd port was missing a default value. It now defaults to port 9200.
- Point in-time Recovery (MySQL): Binary logs could be applied in the wrong order.
- MySQL Replication: Switchover hooks do not work (replication_pre_switchover_script and replication_post_switchover_script are now executed upon Promote Slave).
- ProxySQL: Importing a user from MySQL fails to duplicate the grants.
- Prometheus: A fix to collect the log file from the Prometheus host, instead of the exporter host.
- Create cluster job fails on permissions of ssh user when the username contained \.
- NDB Cluster: Updated to use MySQL Cluster 7.5.12 binaries.
- Operational Reports: A fix to avoid repetition of node information in the 'System Report'.
- Cloud: A fix to improve the auto registration of the cmon-cloud binary and improved logging. This also requires a new version of cmon-cloud (new build coming soon).
2018-12-29 clustercontrol-1.7.1-5622, clustercontrol-notifications-1.7.1-159
Frontend
- MySQL Galera: Fix 'Add Node' regression where the template file was not set in the job specification.
- Prevent cmon-events from crashing if cmon is not running.
ClusterControl v1.7.1
2018-12-21 clustercontrol-controller-1.7.1-2854, clustercontrol-1.7.1-5617, clustercontrol-cloud-1.7.1-163, clustercontrol-notifications-1.7.1-157, clustercontrol-ssh-1.7.1-70, clustercontrol-cmonapi-1.7.1-338
In this release we have primarily continued to add improvements to our agent based monitoring dashboards and PostgreSQL.
Feature Details
Agent Based Monitoring
- Install/enable Prometheus exporters on your nodes and hosts with MySQL, PostgreSQL and MongoDB based clusters.
- Customize collector flags for the exporters (Prometheus). This allows you for example to disable collecting from MySQL's performance schema if you experience load issues on your server.
- Supported Exporters:
- Node/host metrics
- Process - /proc metrics
- MySQL server metrics
- PostgreSQL metrics
- ProxySQL metrics
- HAProxy metrics
- MongoDB metrics
- Dashboards:
- System Overview
- Cluster Overview
- MySQL Server - General
- MySQL Server - Caches
- MySQL InnoDB Metrics
- Galera Cluster Overview
- Galera Server Overview
- PostgreSQL Overview
- ProxySQL Overview
- HAProxy Overview
- MongoDB Cluster Overview
- MongoDB ReplicaSet
- MongoDB Server
Backup
- Create a cluster from an existing backup with MySQL Galera or PostgreSQL.
PostgreSQL
- Query Monitoring improvements - View query statistics:
- Access by sequential or index scans
- Table I/O statistics
- Index I/O statistics
- Database Wide Statistics
- Table Bloat And Index Bloat
- Top 10 largest tables
- Database Sizes
- Last analyzed or vacuumed
- Unused indexes
- Duplicate indexes
- Exclusive lock waits
- Verify/restore backup on a standalone host.
- Create a cluster from an existing backup.
- Support for PostgreSQL 11. Deploy and import clusters.
MongoDB
- Support to deploy/import and manage MongoDB Inc v4.0
Misc
- New license format. Please contact sales@severalnines.com for a new license.
- Continuing moving ExtJS pages to AngularJS. This time the load balancer and nodes page.
- UI logging for troubleshooting web application issues.
- ClusterControl Backup/Restore - This feature can be used to migrate a setup from one controller to another. Back up the metadata of an entire controller or individual clusters from the s9s CLI. The backup can then be restored on a new controller with a new hostname/IP, and the restore process will automatically recreate database access privileges.
ClusterControl v1.7.0
2018-12-21 clustercontrol-controller-1.7.0-2962
Controller
- Bugfix for SSH connection negotiation failure on compression methods.
- Added support for MaxScale 2.3
- Exporters: New process_exporter version (0.10.10)
- Error Reporting: s9s_error_reporter -i0 collects all config files under /etc/cmon.d/
2018-12-12 clustercontrol-1.7.0-5548, clustercontrol-controller-1.7.0-2939
Frontend
- Keepalived: Fixed an issue where it was listed as a 'master' in the Cluster Node bar.
- Fixed an issue where the replication slaves of a Galera cluster were not shown under 'Show Servers'
- Config Mgmt: Removed the Configuration -> Template item as it is deprecated in its current form.
Controller
- Error Report: Fixed an issue where passwords were not masked.
- Deploy Mongodb: Fixed signing keys issues for APT/YUM repos.
2018-12-10 clustercontrol-controller-1.7.0-2930
Controller
- HAProxy: A fix to remove /dev/shm/proxyoff file when promoting a slave or rebuilding a slave.
2018-12-07 clustercontrol-controller-1.7.0-2928
Controller
- PostgreSQL: Double-check if slave has properly configured the 'trigger_file' option in recovery.conf.
- Fixed an issue with the wrong owner of the stagingDir (~/s9s_tmp)
- Updated a mongodb.org repo key (replaced the key 'Richard Kreuter <richard@10gen.com>' with 'MongoDB 3.4 Release Signing Key <packaging@mongodb.com>')
- ProxySQL: Properly handle '#' in the monitor and admin user passwords.
2018-11-27 clustercontrol-1.7.0-5455 clustercontrol-controller-1.7.0-2904
Frontend
- PHP Sessions fix for PHP v5.3 and earlier: Added the possibility to fall back to the previous file-based session handling. If you experience UI issues, please set:
define('SESSIONS_FALLBACK', true);
in /var/www/html/clustercontrol/bootstrap.php and reload the page.
- Backup: Fixed an issue with cron schedule validation in Scheduled Backups.
- Dashboards: Minor optimizations and re-organization of some dashboards.
Controller
- Galera: Clone cluster did not handle default datadir and wsrep_cluster_name for cloning
- Backup: Backup dir starting with /sys can't be removed, fixed a security check.
- Error Reporting: skip GRA* files from error report.
- Operational Reports: system report: Customizable graphs interval (in days unit)
- Operational Reports: changed title from 'Daily System Report' to 'System Report'
- Fixed a bug escaping passwords.
2018-11-13 clustercontrol-1.7.0-5375 clustercontrol-controller-1.7.0-2876
Frontend
- Fixed an issue with PHP session management on PHP 5.3 and earlier. This manifested itself as, e.g., the Node page loading forever, no data in the UI, and "Internal Error".
Controller
- Backup [mariabackup/xtrabackup]: Clean up qpress archives after restoring an xtrabackup|mariabackup compressed backup
- Verify Backup [mariabackup/xtrabackup]: Fixed a regression where the wrong restore method was selected.
2018-10-30 clustercontrol-controller-1.7.0-2859
Controller
- Deploy/Import Cluster: Fixed an issue to allow \ (backslash) in the admin user password (mysql root password).
2018-10-30 clustercontrol-controller-1.7.0-2854, clustercontrol-1.7.0-5319, clustercontrol-cloud-1.7.0-154, clustercontrol-notifications-1.7.0-153, clustercontrol-ssh-1.7.0-66
Frontend
- Keepalived: Added a fix to show the role, i.e which keepalived node that has the VIP assigned.
- Deploy: Added dot, space and / as allowed symbols for the password field.
- ProxySQL: corrected use of proxysql match digest/pattern fields
- General: Improved session handling.
- SSE (Server Side Events): Improvements to show notifications.
- OS service files fixes to handle non English locales for cmon-cloud, cmon-events, and cmon-ssh.
Controller
- Backup: Restore backup on a Galera cluster (mariabackup/xtrabackup) to a single node shut down the whole cluster even if bootstrap cluster was disabled.
- Backup: mariabackup qpress support.
- Backup: Increased the size of the backup record (TEXT -> MEDIUMTEXT)
- Backup: Fail early if an attempt is made to take an xtrabackup on a MariaDb 10.3 server, and warn if xtrabackup is attempted on the MDB 10.2 series. Using mariabackup on 10.2 and 10.3 is recommended.
- Backup: Verification now supports --use-memory option
- Deploy / MariaDb 10.3: Fix buggy galera_new_cluster (https://jira.mariadb.org/browse/MDEV-17379)
- Galera: Fixed an issue with rebuilding node from the backup.
- Galera/Replication: Fixed an issue preventing a node from being rebuilt if only mariabackup was available on the node. Also improved error messages.
- Keepalived: Added information which node has the VIP assigned
2018-10-19 clustercontrol-1.7.0-5281
Frontend
- Add Node with 'Rebuild from backup': Fix wrong backup id parameter in the job spec.
- Add Node: Moved rebuild backup dropdown.
- Mail server configuration: Fix invalid port length.
- Rebuild from backup: Fix to only show successful backups in the dropdown.
- Removed xtrabackup option from MariaDB v10.3 clusters since it's no longer working with v10.3.
2018-10-16 clustercontrol-controller-1.7.0-2832
Controller
- MariaDb: Fixed an issue with rebuild replication slave to support MariaDb Backup.
- Configuration Management: Fixed an issue preventing to assign decimal values to a database variable.
2018-10-10 clustercontrol-1.7.0-5259 | clustercontrol-controller-1.7.0-2825
Frontend
- SSE (Server Side Events): Fixed an issue where a toaster prompting configuration suggestions was shown when a security token was invalid.
- Advisors: Fixed an issue where the overwrite flag was not respected when importing advisors.
- Cloud: Fixed an issue with subnets and AZs
- Backup: Added 'MySQL Db Only' as a dump type for mysqldump. This creates a dump of only the mysql database.
Controller
- General: Fixed an issue to chown a dir only if ClusterControl created it.
- Advisors: A fix to properly handle multiple partitions in s9s/host/disk_space_usage.js.
- MongoDb: Fixed an issue where a stepDown was attempted on a shard router (mongos), and the restart node job failed.
- Prometheus: Fail install if a running Prometheus server is detected.
- Prometheus: Updates to queries and optimisations.
- Postgres: Fixed an issue when deploying 9.2.
- Galera: Fixed a bug where the desync node did not work when using MariaDb Backup.
- MySQL Replication: Fixed a bug when the node got the wrong node status after a restart.
2018-09-26 clustercontrol-1.7.0-5224 | clustercontrol-controller-1.7.0-2798
Frontend
- Nodes Page: Fixed a regression with the node charts where the last four graphs had "no data points".
- User Management: Fixed a navigational issue making the Clusters list show up as empty.
- Events (Server Side): Fixed a configuration issue regarding CMON events notifications, which could lead to an 'Enable Events' dialog showing up too frequently.
Controller
- Operational Reports: Fixed an issue where the cluster type in the operational reports was missing.
- Operational Reports: Fixed an issue where the creation of operational reports could deadlock.
- Deploy (MySQL based setups): Fixed a deployment issue where a sanity check failed to determine if percona-xtrabackup was successfully installed.
- MongoDb: Fixed an issue with configuration file handling when mongos and mongod processes are colocated.
- Prometheus: A couple of minor optimisations to queries (improved filtering of disk device/fs)
- ProxySQL: Fixed an installation issue on LXD containers.
2018-09-24
clustercontrol-1.7.0-5208
clustercontrol-controller-1.7.0-2792
clustercontrol-cmonapi-1.7.0-333
clustercontrol-cloud-1.7.0-147
clustercontrol-ssh-1.7.0-62
clustercontrol-notifications-1.7.0-139
In this release we are introducing support for agent-based monitoring with Prometheus (an open-source systems monitoring and alerting system). Enable your cluster to use Prometheus exporters to collect metrics on your nodes and hosts. Avoid excessive SSH activity for monitoring and metrics collection, and use SSH connectivity only for management operations.
You can use a set of new dashboards that use Prometheus as the data source and give access to its flexible query language and multi-dimensional data model, with time series data identified by metric name and key/value pairs. In future releases we will be adding more features, such as allowing you to create and import your own dashboards.
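To illustrate the multi-dimensional data model described above: Prometheus selects time series by metric name plus key/value label matchers. The queries below are hedged examples using typical node_exporter/mysqld_exporter metric names, not queries taken from the shipped dashboards; instance names are hypothetical.

```promql
# Per-host user CPU rate, selected by metric name plus labels
rate(node_cpu_seconds_total{mode="user", instance="db1:9100"}[5m])

# Connected threads across all MySQL nodes matching a label regex
mysql_global_status_threads_connected{instance=~"db.*:9104"}
```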
We have also a new security feature to enable Audit Logging for MySQL based clusters. Enable policy-based monitoring and logging of connection and query activity executed on your MySQL servers.
Finally we have added support to easily scale out your cloud deployed clusters by automating the cloud instance creation for the new DB node.
Feature Details
Agent Based Monitoring
- Install a Prometheus v2.3.x server on a specified host.
- Install/enable Prometheus exporters on your nodes and hosts with MySQL and PostgreSQL based clusters.
- Supported Exporters:
- Node/machine metrics
- Process - /proc metrics
- MySQL server metrics
- PostgreSQL metrics
- ProxySQL metrics
- New dashboards:
- Cross Server Graphs
- System Overview
- MySQL Overview
- MySQL Replication
- MySQL Performance Schema
- MySQL InnoDB Metrics
- Galera Cluster Overview
- Galera Graphs
- PostgreSQL Overview
- ProxySQL Overview
Security
- Enable/disable Audit Logging on your MySQL based clusters. Enable policy-based monitoring and logging of connection and query activity.
Cloud
- Cloud Scaling. Automatically launch cloud instances and add nodes to your cloud deployed clusters.
Misc
- Support for MariaDB v10.3
- New 'demote master to slave' action for MySQL replication clusters.
- Customize the timezone for dates and time shown across the application.
- UI toasters/notifications for CMON events and alarms. Enables 'Server Sent' events to be sent to the web application for a more dynamic updated user interface.
- Improved workflow to enable PITR for PostgreSQL.
- Added performance graphs for ProxySQL hosts.
Changes in ClusterControl v1.6.2
2018-09-14 clustercontrol-1.6.2-5148 | clustercontrol-controller-1.6.2-2769
Controller
- Backup (MariaDB Backup): Use mbstream instead of xbstream. This removes the dependency on the Percona XtraBackup package.
- Advisors (MySQL): Improved the TimeZone advisor to check whether the timezones on the MySQL servers are aligned. This fixes an issue with e.g. the CET and CEST timezones, which are treated the same from MySQL's perspective.
- Backup (Verify Backup): Fixed an issue regarding connectivity. Now the Verify Backup does not rely on the MySQL system database tables from cluster db node to perform the verification. This removes the need for a port (9999 by default) to be open between the cluster node(s) and the backup verification server.
- Job handling: Improved parallelism.
Frontend
- MaxScale: Fixed an issue in password validation.
- ACLs: Fixed a number of issues in ACL handling.
2018-08-27 clustercontrol-controller-1.6.2-2726
Controller
- ProxySQL: Fixed an issue with Sync Instance preventing query rules to become active on target instance.
- Backup (MariaDB Backup): Fixed an issue where the incorrect encryption options were passed to Maria Backup.
- Backup (Percona XtraBackup/MariaDB Backup): Fixed the order so that backups are first compressed and then encrypted resulting in smaller backup sizes.
- Galera: Fixed a bug in 'Clone Cluster' which ignored the 'sudo' password (if set) leading to failed cloning.
2018-08-21 clustercontrol-1.6.2-5025 clustercontrol-controller-1.6.2-2718
Frontend
- Fix broken Re-sync Node from backup (MySQL Galera).
- Misc ACL privileges fixes to Deployments, Activity Viewer, Left Side Navigation, and default user.
- Correctly handle empty responses on the User Management page.
Controller
- Backup: Fixed an issue with parallel backups when executed on the controller.
- Backup: A fix to recreate the backup user with the proper privileges.
- ProxySQL: Fixed an issue with broken stats (e.g. 'Questions' was not properly accounted for).
- ProxySQL: Fixed an issue with version detection (added fallbacks).
- PostgreSQL: Added support for creating user entries with masks other than /32 via s9s cli.
- PostgreSQL: Fixed an issue with connection errors from HAProxy to PostgreSQL with IPv6.
- Replication: Fixed an issue where failover scripts did not get executed.
- MongoDb: Updated the repo key.
2018-07-23 clustercontrol-1.6.2-4959
Frontend
- Fix copy and paste in the Query Monitor (PostgreSQL).
- Show trimmed queries in full in the Query Monitor (PostgreSQL).
- Fix dialog labels for AppArmor/SELinux.
- Show the partial backup warning only for xtrabackup/mariadbbackup.
- The Security page is currently only for MySQL/PostgreSQL; fixes for using existing certificates.
ClusterControl v1.6.2
2018-07-16
clustercontrol-1.6.2-4942
clustercontrol-controller-1.6.2-2662
clustercontrol-cmonapi-1.6.2-330
clustercontrol-cloud-1.6.2-141
clustercontrol-ssh-1.6.2-59
clustercontrol-notifications-1.6.2-136
Welcome to our new 1.6.2 release!
Feature Details
Backup
- Continuous Archiving and Point-in-Time Recovery (PITR) for PostgreSQL.
- Rebuild a node from a backup with MySQL Galera clusters to avoid SST.
- Option to restore external backups stored on a DB node (instead of only the Controller host).
MySQL/Galera
- Rebuild a Galera node from a backup to avoid SST.
Security
- Consolidate security functionality on an easily accessible single page.
- Enable/Disable:
- Client/Server SSL encryption for MySQL based clusters.
- SSL replication traffic encryption for MySQL Galera based clusters.
- Transparent Data Encryption (MySQL). Coming soon!
- Audit Logging (MySQL). Coming soon!
ProxySQL
- Clear/Reset Top Queries.
- Advanced query rules options: Error and OK messages, sticky connection and multiplex.
- Autofill match digest for a query rule.
Cloud
- Destructive actions now clean up used cloud resources (accounting).
- Support for scaling DB nodes by automating cloud instance provisioning. Coming Soon!
- Support for load balancers. Coming Soon!
- Support for MySQL Replication clusters. Coming Soon!
Misc
- ClusterControl (CMON) Runtime Configuration page.
- Support for MongoDB v3.6.
Changes in ClusterControl v1.6.1
2018-07-04 clustercontrol-1.6.1-4896
Frontend
- Identical host charts fix for SQL and Data Nodes with MySQL Cluster (NDB).
- 'DB User Management' fix with MySQL Cluster (NDB). Create and edit DB users works again.
2018-06-28 clustercontrol-controller-1.6.1-2621
Controller
- MariaDB: Deployment fix caused by a mix up of authentication_string and password in the mysql.user table.
- Restore slaves/Rebuild nodes (MySQL, Postgres): Made the directory of the datadir backup configurable. Specify 'datadir_backup_path' in /etc/cmon.d/cmon_X.cnf. By default the datadir is copied (after the server has been shut down, but before restoring/rebuilding) using a filesystem copy to {datadir}_bak.
- Error reporting: A fix to also include the include files of a database node configuration file.
- Alarms: Fixed an issue when the measured value was a NaN or INF.
- MySQL: Add Node could fail due to a bug in version detection.
- General: A fix allowing other jobs to run in parallel with remove cluster jobs.
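For the 'datadir_backup_path' option mentioned in the restore/rebuild entry above, a minimal configuration sketch could look like the following; the file name and path are hypothetical examples, not defaults.

```ini
# /etc/cmon.d/cmon_1.cnf  (configuration for cluster 1)
# Override the default {datadir}_bak location used when restoring/rebuilding:
datadir_backup_path=/mnt/backups/datadir_bak
```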
2018-06-26 clustercontrol-1.6.1-4865
FE
- Remove the default zero-sized cc-ldap.log file from the package, which was overwriting the existing LDAP log file.
2018-06-15 clustercontrol-1.6.1-4848 | clustercontrol-controller-1.6.1-2605 | clustercontrol-notifications-111
FE
- Fix schedule backup verification with mysqldump.
- Fix empty configuration template dropdown for add node (MySQL Galera).
- Fix to allow controller host timezone when scheduling maintenance mode.
- Fix for the schedule maintenance mode dialog closing immediately.
- Fix stuck scrolling with the PostgreSQL advisor's page.
- Fix missing validation for the xtrabackup --use-memory option.
- Add 'Lock DDL per table' option for xtrabackup.
- Fixes to cmon-events to handle filtering correctly.
Controller
- Alarms/Notifications: Fixed a bug refreshing alarm thresholds. This prevented user specified thresholds in cmon_X.cnf from being applied.
- Mongo: Added a NUMA node number check before installing or using numactl for mongo.
- PostgreSQL: Fixed host granting (pg_hba).
- PostgreSQL: Show the error log when a node fails to start.
- PostgreSQL: Fixed an issue with a pg_hba file error when using IPv6.
- PostgreSQL/HAProxy: HAProxy did not refresh the Postgres node state after a rebuild of a slave.
- ProxySQL: Include more data in the error report.
- ProxySQL: Added a sanity check on the admin port when registering an existing ProxySQL node.
- ProxySQL: Updated the ProxySQL Galera checker script to version 1.4.8.
- Galera: Fixed a crashing bug in case of missing wsrep_sst_auth.
- MaxScale: The software is no longer installed if it is already present on the node.
Events/Notifications
- Fixed a bug which ignored the configured filter. This caused e.g. a Warning alarm to create a notification when only Critical was configured.
ClusterControl v1.6.1
2018-05-25
clustercontrol-1.6.1-4801 | clustercontrol-controller-1.6.1-2572
clustercontrol-cmonapi-1.6.1-324 | clustercontrol-notifications-1.6.1-94
clustercontrol-cloud-1.6.1-121 | clustercontrol-ssh-1.6.1-53
Feature Details
Backup
- Support for MariaDB Backup for MariaDB based clusters. MariaDB Server 10.1 introduced MariaDB Compression and Data-at-Rest Encryption which is supported by MariaDB Backup (a fork of Percona XtraBackup).
- Support for Schema (--no-data) or Data (--no-create-info) only backups and skipping extended inserts (--skip-extended-insert) with mysqldump.
- Support for --use-memory with xtrabackup.
- Support for custom backup subdirectory names:
- Set the name of the backup subdirectory. This string may hold standard "%X" field separators; "%06I", for example, will be replaced by the numerical ID of the backup in a 6-character-wide format using '0' as the leading fill character.
- Default value: "BACKUP-%I"
- B - The date and time when the backup creation began.
- H - The name of the backup host, the host that created the backup.
- i - The numerical ID of the cluster.
- I - The numerical ID of the backup.
- J - The numerical ID of the job that created the backup.
- M - The backup method (e.g. "mysqldump").
- O - The name of the user who initiated the backup job.
- S - The name of the storage host, the host that stores the backup files.
- % - The percent sign itself. Use two percent signs ("%%"); as with the standard printf() function, they are interpreted as a single percent sign.
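The expansion rules above can be sketched in Python. This is an illustrative reimplementation, not ClusterControl's actual code, and the field values are made up:

```python
import re

# Hypothetical field values for one backup (illustration only).
FIELDS = {
    "B": "2018-05-25T10:00:00",  # backup start date/time
    "H": "db1",                  # host that created the backup
    "i": "1",                    # cluster ID
    "I": "42",                   # backup ID
    "J": "1337",                 # job ID
    "M": "mysqldump",            # backup method
    "O": "admin",                # user who initiated the backup job
    "S": "cc-controller",        # host that stores the backup files
}

def expand(template, fields=FIELDS):
    """Expand '%X' separators; a width like '%06I' zero-pads to 6 characters."""
    def repl(match):
        width, key = match.group(1), match.group(2)
        if key == "%":            # '%%' yields a literal percent sign
            return "%"
        value = fields[key]
        if width:                 # e.g. '06' -> pad with leading zeros
            return value.zfill(int(width))
        return value
    return re.sub(r"%(\d*)([BHiIJMOS%])", repl, template)

print(expand("BACKUP-%I"))    # BACKUP-42
print(expand("BACKUP-%06I"))  # BACKUP-000042
```

With the default template "BACKUP-%I", backup 42 lands in a subdirectory named BACKUP-42.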
PostgreSQL
- Synchronous Replication Slaves.
- Multiple NICs support:
- Deploy DB nodes using management/public IPs for monitoring connections and data/private IPs for replication traffic.
- Deploy HAProxy using management/public IPs and private IPs for configurations.
Misc
- ServiceNow has been added as a new notifications integration.
- Support for MaxScale 2.2.
- Database User Management (MySQL) can now search/filter accounts on username, hostname, schema or table.
- Node page graphs are now showing accurate time ranges and datapoint gaps.
- Query Monitoring is using the CMON RPC API.
- Database Growth is using the CMON RPC API.
- Support for PHP 7.2 with an upgraded CakePHP version 2.10.9
Changes in ClusterControl v1.6.0
2018-05-18 clustercontrol-controller-1.6.0-2553
Controller
- PostgreSQL: Support for init scripts for RHSCL PostgreSQL packages. Please note that further tuning of the environment may be needed.
- PostgreSQL: Improved logic to locate the Postgres log files.
- PostgreSQL: Verifying the configuration and listen_addresses before registering the node.
- PostgreSQL: Better error reporting in case of connection timeouts.
- PostgreSQL: Improvements and better messaging of slave recovery in case of the host being down.
- MySQL/Galera: Properly handle quoted 'wsrep_sst_auth' entries.
- Backup: Running a backup prevented other jobs from being executed.
- Backup: A fix to prevent a backup from being uploaded to the cloud when the user did not ask for it.
- Error reporting: A fix for 'Access denied' when S9s CLI created a user.
- General: Removed the printout 'RPC: No variables available for...'.
2018-05-17 clustercontrol-1.6.0-4767
FE
- Fix to add missing admin port option for ProxySQL installations and registrations.
- Fix replication lag not shown properly for MySQL Replication clusters.
- Fix to allow changing the default region with a cloud credential.
- Fix to restart a failed PostgreSQL job.
2018-05-07 clustercontrol-notifications-1.6.0-88
FE
- Bump version of clustercontrol-notifications to 1.6.0.
2018-05-07 clustercontrol-cloud-1.6.0-115
FE
- Fix an issue with the security group on AWS preventing cloud deployment from working if ClusterControl was installed in the same VPC.
2018-05-04 clustercontrol-1.6.0-4699
FE
- Security Vulnerability: Fixed an issue where it was possible to perform an XSS attack.
- Cloud Deployments: Fixed a missing validation of the SSH Key.
- LDAP: Add support to get the user group from a 'memberof' attribute
2018-05-02 clustercontrol-controller-1.6.0-2514 | clustercontrol-1.6.0-4682 | clustercontrol-cmonapi-1.6.0-310
FE
- LDAP: Fix an issue preventing users from logging in with anything other than an email address.
- Changed default basedir to /usr for MySQL Cluster (NDB) import.
- Fix for an issue where a failed ProxySQL node was added and then could not be removed.
- Fix for an issue with a blank page in DB User Management when default anonymous users (test users) are detected.
- Add validation when trying to use reserved words with PostgreSQL deployments.
- Tune the custom advisor dialog for lower resolution screens.
- Fix a regression preventing error reports from being created from the frontend.
Controller
- NDB Cluster: SELinux settings were not checked correctly.
- NDB Cluster: The Install Software option was not respected.
- NDB Cluster: Fixed an issue detecting disk space and calculating the size of the REDO log.
- Postgres: Add Replication Slave now fails if there is an existing Postgres server running on the node, and it also checks whether the psql client is available.
- PostgreSQL: Forbid using reserved SQL keywords as a PostgreSQL username (usernames are identifiers there, and an identifier cannot be a reserved keyword).
- Backup (xtrabackup): Fixed an issue where an Incremental backup could be created without an existing Full backup. Now, if there is no Full backup, the Incremental backup is executed as a Full backup.
- MaxScale: Version 2.2.x support.
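The incremental-backup fallback described in the xtrabackup entry above boils down to a simple decision; this is a hypothetical illustration, not the controller's actual code:

```python
def resolve_backup_type(requested: str, history: list[str]) -> str:
    """If an incremental backup is requested but no full backup exists yet,
    promote the request to a full backup so there is a base to increment from."""
    if requested == "incremental" and "full" not in history:
        return "full"
    return requested

print(resolve_backup_type("incremental", []))        # full
print(resolve_backup_type("incremental", ["full"]))  # incremental
```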
ClusterControl v1.6.0
2018-04-17 clustercontrol-controller-1.6.0-2493 | clustercontrol-1.6.0-4567 | clustercontrol-cmonapi-1.6.0-303 | clustercontrol-cloud-1.6.0-104 | clustercontrol-ssh-1.6.0-44
Welcome to our new 1.6.0 release! Restoring your database using only a backup for disaster recovery is at times not enough. You often want to restore to a specific point in time or transaction after the backup happened.
You can now do Point In Time Recovery - PITR for MySQL based databases by passing in a stop time or an event position in the binary logs as a recovery target.
We are also continuing to add cloud functionality:
- Launch cloud instances and deploy a database cluster on AWS, Google Cloud and Azure from your on-premise installation.
- Upload/download backups to Azure cloud storage.
Our cluster topology view now supports PostgreSQL replication clusters and MongoDB ReplicaSets and Shards. Easily see how your database nodes are related to each other and perform actions with intuitive drag and drop motion.
As in every release, we continuously work on improving the UX/UI experience for our users. This time around we have re-designed the DB User Management page for MySQL based clusters.
It should be easier to understand and manage your database users with this new user interface.
Let us know what you think about these features and changes anytime at cc-feedback!
Feature Details
Point In Time Recovery - PITR (MySQL)
- Position and timebased recovery for MySQL based clusters.
- Recover until the date and time given by Restore Time (Event time - stop date&time).
- Recover until the stop position is found in the specified binary log file. If you enter binlog.001827, it will scan existing binary log files up to binlog.001827 (inclusive) and not go any further.
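Both recovery targets ultimately boil down to a mysqlbinlog replay. The --stop-datetime and --stop-position options are real mysqlbinlog flags; the file names, timestamp, and position below are hypothetical, and the sketch only composes the command line rather than executing it:

```python
def replay_cmd(binlogs, stop_datetime=None, stop_position=None):
    """Compose (but do not run) the binlog replay pipeline for a PITR target."""
    cmd = ["mysqlbinlog"]
    if stop_datetime is not None:
        # Time-based target: stop at the first event at or after this time.
        cmd.append(f'--stop-datetime="{stop_datetime}"')
    if stop_position is not None:
        # Position-based target: applies to the last file listed;
        # earlier files are replayed in full.
        cmd.append(f"--stop-position={stop_position}")
    return " ".join(cmd + binlogs) + " | mysql -u root -p"

binlogs = ["binlog.001825", "binlog.001826", "binlog.001827"]
print(replay_cmd(binlogs, stop_datetime="2018-04-17 12:30:00"))
print(replay_cmd(binlogs, stop_position=4732))
```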
Deploy and manage clusters on public Clouds (BETA)
Supported cloud providers
- Amazon Web Services (VPC), Google Cloud, and Azure.
Supported databases:
- MySQL Galera, PostgreSQL, MongoDB ReplicaSet
- Current limitations:
- There is currently no 'accounting' in place for the cloud instances. You will need to manually remove created cloud instances.
- You cannot add or remove a node automatically with cloud instances.
- You cannot deploy a load balancer automatically with a cloud instance.
Topology View
Support added for:
- PostgreSQL Replication clusters.
- MongoDB ReplicaSets and Sharded clusters.
Misc
- Improved cluster deployment speed by utilizing parallel jobs. Deploy more than one cluster in parallel.
- Re-designed DB User Management for MySQL based clusters.
- Support to deploy and manage MongoDB cluster on v3.6
Comments
Hi Vinay,
Thanks for the changelog and bugfixes/improvements; very pleased to see this new release with a clear changelog and some instructions on how to upgrade!
Thanks a lot guys for this good work ;)
Regards,
Laurent
Hello team,
I have been experiencing issues with stanzas on Postgres just before you released the fix.
PostgreSQL: Fixed an issue with PgBackRest
Can you please provide more details up on this fix?
Are you excluding the user-managed stanzas, or are you excluding the stanza options and overriding the manually set setting? Also, I am interested in whether such issues are really out of support with the older versions of CC. :) I am using CC in order to minimize the support of this database and not get into database administration.
Thanks in advance!
Teodor
Hi Teodor,
What this means is that if a user specifies a custom stanza name, then when performing a pgBackRest backup, that name will be used instead of the ClusterControl-generated one.
I suggest you raise a Zendesk ticket for deeper questions related to this.
Thanks Teodor.