2018-04-17 clustercontrol-controller-1.6.0-2493 | clustercontrol-1.6.0-4567 | clustercontrol-cmonapi-1.6.0-303 | clustercontrol-cloud-1.6.0-104 | clustercontrol-ssh-1.6.0-44
Welcome to our new 1.6.0 release! Restoring your database using only a backup is at times not enough for disaster recovery. You often want to restore to a specific point in time or transaction that occurred after the backup was taken.
You can now do Point In Time Recovery - PITR for MySQL based databases by passing in a stop time or an event position in the binary logs as a recovery target.
We are also continuing to add cloud functionality:
- Launch cloud instances and deploy a database cluster on AWS, Google Cloud and Azure from your on-premise installation.
- Upload/download backups to Azure cloud storage.
Our cluster topology view now supports PostgreSQL replication clusters and MongoDB ReplicaSets and Shards. Easily see how your database nodes are related to each other and perform actions with intuitive drag-and-drop gestures.
As in every release, we continuously work on improving the UX/UI for our users. This time around we have re-designed the DB User Management page for MySQL based clusters.
It should be easier to understand and manage your database users with this new user interface.
Let us know what you think about these features and changes anytime at cc-feedback!
Point In Time Recovery - PITR (MySQL)
- Position- and time-based recovery for MySQL based clusters.
- Recover until the date and time given by Restore Time (event time; stop date & time).
- Recover until the stop position is found in the specified binary log file. If you enter binlog.001827 it will scan existing binary log files up to binlog.001827 (inclusive) and not go any further.
Deploy and manage clusters on public Clouds (BETA)
Supported cloud providers
- Amazon Web Services (VPC), Google Cloud, and Azure.
- MySQL Galera, PostgreSQL, MongoDB ReplicaSet
- Current limitations:
- There is currently no 'accounting' in place for the cloud instances. You will need to manually remove created cloud instances.
- You cannot add or remove a node automatically with cloud instances.
- You cannot deploy a load balancer automatically with a cloud instance.
Support added for:
- PostgreSQL Replication clusters.
- MongoDB ReplicaSets and Sharded clusters.
- Improved cluster deployment speed by utilizing parallel jobs. Deploy more than one cluster in parallel.
- Re-designed DB User Management for MySQL based clusters.
- Support to deploy and manage MongoDB v3.6 clusters.
Changes in ClusterControl v1.5.1
- Monitoring: SSH Optimizations to reduce the number of SSH connections on remote nodes.
- Monitoring: CPU temperature monitoring is now configurable (disabled by default; see the monitor_cpu_temperature cmon configuration option).
- Galera: Disable P_S queries in Query Monitor during upgrade.
- Galera / add node: Check if the MariaDB version is 10.1.31 or above; in that case mariabackup will be used.
- ProxySQL: Fixed an issue when modifying the variable values from the UI.
- MaxScale: Fixed a template issue with a configuration parameter not compatible with MySQL Monitor.
- MaxScale: Debian 9 support.
- HAProxy: If xinetd failed to install it could lead to the controller crashing.
- Fixed a license barrier when deploying a Galera cluster that caused the error: “Refusing to recover node (no license)”.
- MariaDB 10.1 now requires wsrep_sst_method=mariabackup (new MariaDB deployments will always use mariabackup for SST).
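The monitor_cpu_temperature option noted above is set in the cmon configuration. A minimal sketch, assuming the standard /etc/cmon.d/cmon_X.cnf location and a boolean value (both assumptions; only the option name comes from the release note):

```ini
# /etc/cmon.d/cmon_X.cnf -- X is your cluster id (path assumed)
# CPU temperature monitoring is disabled by default in this release
monitor_cpu_temperature=true
```

Restart the cmon controller after editing the file for the change to take effect.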
2018-03-07 clustercontrol-controller-1.5.1-2411 clustercontrol-1.5.1-4434
- CRITICAL: Fixed another issue where the wrong node was selected due to an indexing problem, which could lead to an action being executed on the wrong node.
- Fixed an issue when importing keepalived.
2018-03-06 clustercontrol-controller-1.5.1-2409 clustercontrol-1.5.1-4425
- PostgreSQL: Explicitly grant nodes by IP (in addition to hostnames) in pg_hba.conf.
- PostgreSQL: config write with includes caused invalid syntax error issues.
- MySQL Cluster: Bug fixes to Database Growth.
- Operational Reports: Improved handling of different gnuplot versions.
- General: Configurable ICMP pinging. Set ‘enable_icmp_ping=false’ to disable ICMP pinging (Azure requires this). By default it is true (recommended).
- Installer: Permissions fixed so there are no writable files after install.
- Fixed an issue where the wrong node was selected due to an indexing problem, which could lead to an action being executed on the wrong node.
- Improved handling of saving email notification settings.
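The enable_icmp_ping option mentioned above can be sketched as follows (the file path is an assumption; the option name and value come from the note):

```ini
# /etc/cmon.d/cmon_X.cnf (path assumed)
# Disable ICMP pinging; Azure requires this
enable_icmp_ping=false
```

The default is true, which is the recommended setting outside of environments that block ICMP.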
2018-02-24 clustercontrol-controller-1.5.1-2390 clustercontrol-1.5.1-4395
- PostgreSQL: Fixed a bug in the PostgreSQL config parsing causing a syntax error using ‘include’.
- Advisors: Bug fixes and corrections
- MySQL Cluster: Fixed a number of issues around hostnames and port settings, which caused node types (data node, management node) to be improperly identified.
- Backup (Verify Backup): Fixed a number of issues handling the Backup Verification Server.
- Backup(Verify Backup): A backup verification email is now sent when the backup has been verified.
- Operational Reports: Fixed Availability report issues. The cluster_events/node_events tables were inadvertently dropped during ClusterControl upgrades, causing the stats to be reset.
- PostgreSQL: pg_basebackup executed on a slave failed on imported clusters due to a missing grant.
- Remove Node: Fixes to make it possible to only unregister a node (remove the node from ClusterControl management).
- Schema: DB schema fixes to the server_node properties column by extending the size.
- Galera/GroupRepl: properly write/update the cmon_X.cnf mysql_server_addresses field to mark non galera|group_repl nodes there correctly.
- Remove Node: Improved consistency in ‘Remove Node’ dialogs.
- MySQL/Galera: New default value for binary logging path which is now outside of the datadir.
- Backup (MySQL based clusters): Fixed an issue where Backup Method and Backup Host dropdowns were empty.
- Deploy/Import/Add Node: Improved Host Discovery showing the actual SSH error.
- Deploy/Import: SSH Key Path validation was missing.
- Charts: It was possible to select a negative range (smaller end date than start date).
- MongoDB: Add Shards dialog got stuck when entering a hostname (SSH check never terminated).
2018-02-06 clustercontrol-controller-1.5.1-2362 clustercontrol-1.5.1-4356
- Stats: RAM stats updated to also account for SReclaimable
- PostgreSQL: enable pg_stats_statements extension only on non read-only nodes.
- Error Reporter: include more info about PostgreSQL clusters (pg_stat_replication table + recovery config file)
- MySQL: Fixed an issue handling !include directives containing quotes. The import config job (automatically executed upon a controller restart) will auto-correct broken MySQL !include directives containing quotes.
- Deployment/Import dialogs: Added validation for SSH Key Path.
- ProxySQL: Filter users with unrelated hosts when deploying ProxySQL
- MongoDB: Fixed a problem specifying hostnames when performing "Add Shard".
- Galera: A path to a 'node_recovery_lock_file' can now be specified in /etc/cmon.d/cmon_N.cnf. If set and the lock file is found on the node, node recovery will fail until an admin/script removes this file. The cmon controller process must be restarted when this parameter has been specified. This feature may be useful for encrypted filesystems.
- MySQL Cluster: Fix to allow deployments on other ports than 3306.
- MySQL general: Error code 2013 (lost connection during query) is not a reason to set a node into disconnected state.
- MySQL general: Fixed handling of 'ignore-db-dir' in config templates on MySQL 5.5 based servers.
- ProxySQL: Improved ProxySQL support so that admin-admin_credentials may contain multiple credentials.
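A hypothetical sketch of the node_recovery_lock_file setting described above (the lock file path is invented for illustration; the option name and file location come from the note):

```ini
# /etc/cmon.d/cmon_N.cnf -- N is the cluster id
# If this file exists on a node, node recovery fails until an admin/script removes it
node_recovery_lock_file=/var/lock/cmon_node_recovery.lock
```

Remember to restart the cmon controller process after adding the parameter.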
2018-01-23 clustercontrol-1.5.1-4335, clustercontrol-controller-1.5.1-2335
- Load balancers: Fix to make it possible to remove haproxy/maxscale even if the host is not reachable.
- Error reporting: Fix to always include cluster id 0 jobs in the error report.
- Galera: Fix to disallow garbd deployment to a host having a running mysqld.
- Replication: Improve handling of read_only when importing existing replication cluster.
- Replication: Alert if a MySQL server is not connected to any master, i.e. hanging loose.
- Postgres: Fix to recover a failed postgres server in case there is only one single postgres node in the system.
- Postgres: Fix to prevent postgres to be restarted in case sending SIGHUP (to reload config) failed.
- Advisors: Fix to present a clear error message for the performance schema advisors in case performance schema tables are not available for a particular MySQL version
- Verify Backup: Fix to correctly stage the standalone node with mysql user info (cmon user, etc)
- Fix properly enabling read/write split with HAProxy and MySQL Galera
- Fix incorrect list of nodes showing up as bootstrapping candidate (Galera)
- Fix leaving user records behind when deleting the whole team.
- Add an option to limit network streaming bandwidth (Mb/s) when doing a backup
- Fix missing "read only" port when adding HAProxy for PostgreSQL
- Fix showing the correct "read/write" port when adding HAProxy for PostgreSQL
- Fix Query Monitor for PostgreSQL to show the complete query and not truncate it
- Fix misleading tooltips when deploying or importing a PostgreSQL cluster
- Remove requirement to have the binlog enabled when adding a "SQL Node" with MySQL NDB Cluster
- Remove incorrect software package option when adding a "SQL Node" with MySQL NDB Cluster
- Fix MaxScale console port issue with using Safari
- Fix schedule backups to work even when the verify backup option is enabled
clustercontrol-ssh-1.5.0-39, clustercontrol-cloud-1.5.0-31, clustercontrol-clud-1.5.0-31
In this release we have added support to optionally use our built-in AES-256 encryption for your backups. Secure your backups for offsite or cloud storage with a flip of a checkbox.
We have also added an option to use a custom retention period per backup schedule.
There is a new Topology view (BETA) initially with MySQL based clusters to show a replication topology (incl. any load balancers) for your cluster. Use drag and drop to perform node actions, for example drag a replication slave on top of a master node which will prompt you to either rebuild the slave or change the replication master.
A new left side navigation bar provides faster page access to some of our features and the node actions are now also accessible directly from the node list.
AES-256 Backup Encryption (and Restore)
Supported backup methods
- mysqldump, xtrabackup (MySQL).
- pg_dump, pg_basebackup (PostgreSQL).
- mongodump (MongoDB).
Topology View (BETA)
- MySQL Replication Topology.
- MySQL Galera Topology.
- Support for MongoDB v3.4.
- Fix to add back restore from backup.
- Multiple NICs support. Management/public IPs for monitoring connections and data/private IPs for replication traffic.
- Left side navigation.
- Global settings breakout.
- Quick node actions.
- Backups: E-mails from hourly scheduled backups were not sent.
- Restore External Backups: Fixed a bug where the command was wrongly quoted.
- MySQL Replication: Improved logging during apply relay log phase and improved logic.
- MySQL Replication: A network outage on the master could lead to the master wrongly joining back when the network became operational again.
- Postgres: API changes to support version 10.x
- Postgres: Fixed a deployment problem of version 10.x on Centos/Redhat.
- Postgres: pg_basebackup fix for version 10.x
- NDB/MySQL Cluster: Respect job datadir parameters when deploying NDB cluster (for ndbd and ndb_mgmd nodes...)
- MongoDB: Ops Monitor, Running Operations showed a blank page due to a bug in a JS script.
- Developer Studio: Better error messages for the host::system(..) call.
- Fixed a license check that did not work correctly with WebSSH.
- Fixed "Can't rebuild PostgreSQL slave - no masters to pick from".
- Clarify how external backup works and remove unsupported options.
- Add ';' as acceptable character for root password when importing existing cluster.
- Fix issues with an empty Performance->DB Variables page for certain setups.
- Monitoring: Revert to show more samples in Overview Graph
- Make cmon stop faster when it couldn't connect to CmonDb
- Error reporting: minor enhancements
- NDB: Fix some issues around executable name handling
- PostgreSQL pg_basebackup issue bugfixed
- Fixed empty log file name handling (avoids annoying messages in the cmon log).
- Handle special characters in database names (mysql dir name decoding).
- Backup / mysqldump: skip dynamic tables in the mysql DB: innodb_index_stats, innodb_table_stats.
- Fix to always send out operational reports by email.
- Deployment: A fix to upgrade openssl if deemed necessary.
clustercontrol-ssh-1.5.0-37, clustercontrol-cloud-1.5.0-31, clustercontrol-clud-1.5.0-31
In this release we have started to add integrations with cloud services, and initially plan to add support for the major public cloud providers: Amazon Web Services, Google Cloud and Azure.
We are reintroducing backup to the cloud where you can now manually upload or schedule backups to be stored on AWS S3 and Google Cloud Storage. You can then download and restore backups from the cloud in case of local backup storage disasters or if you need to reduce local disk space usage for your backups.
For MySQL based clusters we have added support for MariaDB 10.2, and you can now choose to initially stage a slave from an existing backup instead of staging it from a master. Individual databases (mysqldump only) can be backed up with separate dumps/files, and you can trigger verification/restore of a backup N hours after a scheduled backup has completed.
PostgreSQL has an additional backup method, pg_basebackup, that can be used for online binary backups. Backups taken with pg_basebackup can later be used for point-in-time recovery and as the starting point for log shipping or streaming replication standby servers.
We have also added support for synchronous replication failover and deploying HAProxy with Keepalived (for load balancing HA) to be used with PostgreSQL clusters.
Load balancers (HAProxy) can be deployed by explicitly selecting the public/management IP for connecting and provisioning the software. Especially useful for cloud environments if you are provisioning/managing over a public network.
We also have some additional improvements for ProxySQL. You can add or modify schedulers, and mass import existing database users into your ProxySQL instances to quickly set up access.
Cloud Services (AWS S3 and Google Cloud Storage)
- Manual upload or schedule backups to be uploaded after completion to the cloud.
- Download and restore backups from a cloud storage.
- Backup individual databases separately (mysqldump only).
- Upload, download and restore backups stored in the cloud.
- Trigger a verification and restore of a backup after N hours of completion.
- Rebuild a replication slave by staging it from an existing backup.
- Add a new replication slave by staging it from an existing backup.
- New backup method pg_basebackup which makes a binary copy of the database files.
- Synchronous replication failover (support for synchronous_standby_names).
- Support for HAProxy with Keepalived.
- Support for PostgreSQL 10.
- Mass import existing database users into ProxySQL.
- Add and modify scheduler scripts.
- MariaDB v10.2 support (Galera and MySQL Replication).
- MySQL Cluster(NDB) v7.5 support.
- Added support to show and filter DB status variables for MongoDB nodes.
- HTML formatted alarm and digest emails.
- Multiple NIC support when deploying load balancers (HAProxy).
- Continuous improvements to UX/UI and performance.
- New cmon-cloud process and clud client to handle cloud services.
- New Report: Database Growth
- MySQL based cluster: If the 'mysql' database was explicitly backed up, it was restored in the wrong way, causing permission denied errors and the restore to fail.
- Galera: codership repository fixes
- Debian 9 (Stretch) support.
2017-10-25 clustercontrol-1.4.2-3958, clustercontrol-controller-1.4.2-2179
- Resend alarm emails.
- Only collect the relevant log files from each host.
- Accounts daemon fix to prevent performing any operations on accounts-daemon when running the environment as root or when it is not started.
- Galera: Add Replication Slave: Properly detect if a replication slave is actually connected to the master.
- error-reporter: include node type(s) in the host directory names.
- CmonDB 'alarm' table UTF-8 changes.
- HAProxy config check.
- Removed a banner from the Add Existing Slave dialog that made it hard to understand what would happen.
- Set the default Compression Level for mysqldump to "1".
- Galera: Overview Page, "Flow Control Paused" now shows a floating-point value.
- Fixed an issue with multi-core CPU graphing in the host statistics graphs.
- More verbosity when capturing LDAP logs.
- Configuration Management: Applied the byte conversion mechanism to the mysql change parameter dialog.
- Fixed saving the 'History' property and removed the 'SSH Options' property.
- ProxySQL: Query Rules, added IN () format to match pattern generation.
- Query Monitor: Added a query outliers explanation on the Overview page.
- Query Monitor: Renamed Query Histogram to Query Outliers to match what it actually is.
- Backups: always execute commands on controller, only use the seen address (from node's POV) for constructing the netcat sender command line.
- s9s_error_reporter: updates for better compatibility with all s9s cli version
- s9s_error_reporter: Prevent error reporting from being blocked by other jobs.
- Fixed a failed deployment of MariaDB 10.2 and 10.1 for Galera Cluster; mariadb-compat does not exist on Debian.
- mysqldump: Fixed handling of the backup compression level.
- Galera (all vendors): mysql_upgrade must only run if monitored_mysql_root_password is set. The upgrade will fail if it is not possible to connect.
- Galera: Fix advisor to handle wsrep_cluster_address arguments
2017-10-03 clustercontrol-notifications-1.4.2-62, clustercontrol-ssh-1.4.2-32
- System V Init - Prevent/disable the 'cmon-events' process to start (by cron or manually) when <webroot>/clustercontrol/bootstrap.php has set define('CMON_EVENTS_ENABLED', false);
- System V Init - Prevent/disable the 'cmon-ssh' process to start (by cron or manually) when <webroot>/clustercontrol/bootstrap.php has set define('SSH_ENABLED', false);
2017-09-11 clustercontrol-1.4.2-3699, clustercontrol-controller-1.4.2-2091
- Non-default cluster specific SSH port support for host validation when adding a new or an existing node.
- Show all valid nodes for 'Rebuild Replication Slave' and 'Change Replication Master'. All nodes with binary logging enabled are valid options.
- Minor filtering fixes to 'Manage -> Schemas and Users'.
- Removed controller host from PostgreSQL's query monitor.
- Minor performance optimization. Removed redundant repeated timezone call.
- Use cluster specific SSH settings for host validation when adding a new or an existing node.
- New error report tarball naming convention - error-report-TIMESTAMP-clusterCID.tar.gz.
- Include backup records and backup schedules in the error reports.
- Minor fix to backup scheduling when using advanced cron format.
- HAProxy: A problem with hidden properties made it impossible to view HAProxy details in the UI unless the stats admin user and password were admin/admin.
- Alarms: Possibility to disable the SwapV2 alarms (set swap_inout_period=0 in cmon_X.cnf)
- Configuration Management: Correctly exclude non DB nodes from drop downs.
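The SwapV2 alarm option above is a one-line change (the full file path is an assumption; the option name and value come from the note):

```ini
# /etc/cmon.d/cmon_X.cnf
# Set to 0 to disable the SwapV2 alarms
swap_inout_period=0
```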
2017-08-22 clustercontrol-1.4.2-3607, clustercontrol-controller-2058
- Group Replication: SUDO password not set in job.
- MySQL (all variants): Password validation updated to support more characters.
- MySQL (all variants): Import existing MySQL cluster fails if specified user is other than ‘root’.
- PostgreSQL: Fixed a problem restoring a backup on the specified node (by job: server_address, UI sends master/writable).
- Error reporting: Important error reporter fix to be more tolerant of empty/invalid filenames.
- Replication: Cluster state was not set if node/cluster recovery was disabled.
2017-08-14 clustercontrol-1.4.2-3574, clustercontrol-controller-1.4.2-2045
- Group Replication: Create Cluster job did not submit the sudo password if set.
- Galera: Restore backup host dropdown was empty unless the Galera node had log_bin enabled.
- Postgres: small UI fix to remove empty columns.
- MySQL(all variants)/PostgreSQL: use socat for streaming when it is available.
- MySQL (all variants): Super read-only causing create database to fail during restore.
- MySQL (all variants): Backup, failed to read included config files from my.cnf (!includedir), if the included config dir was empty.
- Error reporter: drop -W option from netstat (not supported by rhel/centos 6.x).
- Error reporter: Add missing dependencies for error-reporter (tar/gzip) for minimal distros (e.g., containers).
- MongoDB: Backup creation fix (for the case when the SSH user is not allowed to SSH to the controller itself).
- ProxySQL: Installing an improved galera checker script for new ProxySQL installations.
- ProxySQL: A fix to auto-restart a failed ProxySQL node.
- Docker: Small fix to support HAProxy with Docker.
- Docker: Do not set ulimit inside a container (as this makes some operations fail inside Docker).
- Query Monitor: Doesn't collect queries with mysql local override and P_S=off.
- Replication: Do not recover a node that was shut down by the user.
- Fix password reset script for php v7.
- Fix LDAP regression with Active Directory and "samba account".
- Fix host filtering for Query Monitor.
- Fix LDAP login regression.
- Fix to show all databases for Group Replication backups.
- Fixed a non-fatal duplicated symlink creation error at post-installation.
2017-07-24 clustercontrol-1.4.2-3505, clustercontrol-notifications-1.4.2-57, clustercontrol-ssh-1.4.2-25 clustercontrol-controller-1.4.2-2013
- ProxySQL log rotate: Added log rotation, as ProxySQL logs can grow very big very fast.
- PostgreSQL: Improved master failure handling to prevent an old master from being accidentally restarted.
- Galera/Replication: Adding a node did not update the loadbalancer HAProxy correctly. Xinetd was not started.
- Minor fixes to printouts in cmon log file.
- Add support to disable automatic node discovery at import time for Galera cluster. Manually add IPs/hostnames.
- Add support to filter by host for PostgreSQL's Query Monitoring.
- Fix a race condition for ProxySQL graphs that would eventually consume all memory and crash the browser.
- Fix escapes in match patterns for ProxySQL.
- Remove execution flag for systemd service files for cmon-events and cmon-ssh.
- Fix master selection dropdown for add node. No longer shows non-master nodes.
- Fix transient node switching glitch in the nodes page.
- Fix regression of minimum 2 SQL nodes at deployment (MySQL/NDB). No longer required.
- Fix node selection dropdown when restoring a mysqldump. Only masters allowed.
- Add standalone option when importing a MySQL Replication cluster.
- Remove ProxySQL load balancer option with MySQL/NDB Cluster. Currently not supported.
- Fix activity viewer next/prev causing page to scroll.
- Fix missing sudo password if it was set when verifying/checking a host with deployment/add nodes.
- Fixed a cmon grant error (for root and cmon passwords like "!password$$")
- Skip .sst files in the db_growth calculation.
- Restore mysqldump bugfix (for unusual passwords).
- Properly escape the cmon password.
- Backup: Add compression level for backups.
- Backup (MySQL Replication / Galera): Improved password handling of the backup user.
- Don't do smartctl on /dev/mapper devices at all
- Postgres: Fix a minor systemd override file access rights issue.
- ProxySQL: Can't remove node when the node is unreachable
- Deployment (MySQL5.7 templates): added ignore-db-dir=lost+found
- PostgreSQL: Put slave to failed state when replication is known to be broken
- PostgreSQL: Fix a minor systemd override.conf file access rights issue
- PostgreSQL: An important bugfix for failover (the solution for the nodes stuck in 'startup' replication state)
- Replication: Deeper external checks when there is a master failure. Try to connect from the slaves to the master using the mysql client to determine whether the slave can see the master or gets a 2003/2013 error.
- Galera: Rolling restart could fail due to an old value of the node's cluster size. The wsrep variables are now collected before checking the cluster size, and this is done in a time-controlled loop.
2017-06-21 clustercontrol-1.4.2-3421, clustercontrol-controller-1969, clustercontrol-cmonapi-279, clustercontrol-notifications-1.4.2-53, clustercontrol-ssh-1.4.2-21
In this release we have more improvements for ProxySQL to help you add existing instances in single or active/passive setups with Keepalived.
You can also easily synchronize a ProxySQL configuration that has query rules, users, and host groups with other instances to keep them identical.
Taking backups is essential for any organisation; however, an often overlooked practice is actually verifying that backups are undamaged.
In this first version, verify backups by restoring a mysqldump or an xtrabackup on standalone hosts that are not part of your clusters.
Future updates will allow you to automate/schedule backup verifications, run queries, and use cloud or container resources on which to restore the backups.
Alarms and Events can now easily be sent to incident management services like PagerDuty and VictorOps, or to chat services like Slack and Telegram.
You can also use Webhooks if you want to integrate with other services to act on status changes in your clusters.
Do you need to SSH into the DB nodes? Use our new Web based SSH console to open a terminal window directly to any of your cluster hosts.
Last but not least you can now deploy PostgreSQL in master and slave(s) setups with automatic failover and slave promotion.
- Copy, Export and Import ProxySQL configurations to/from other instances to make them in sync.
- Add Existing standalone ProxySQL instance.
- Add Existing Keepalived in active/passive setups with ProxySQL.
- Support for 3 ProxySQL instances with a Keepalived active/passive setup.
- Simplified Query Cache creation.
- Verify/Restore a mysqldump on standalone host.
- Verify/Restore an xtrabackup on standalone host.
- Customize your backup schedule by using the cron format.
- Send Alarms and Events to
- PagerDuty, VictorOps, and OpsGenie.
- Slack and Telegram.
- User registered Webhooks.
Web SSH Console
- Open a terminal window to any cluster nodes.
- Only supported with Apache 2.4+.
- New Master - Slave(s) cluster deployment wizard (streaming replication).
- Automated failover and slave to master promotion.
- Rebuild slave.
- Fixed TLS connection issues for e-mail sending (SMTP).
- Improved configuration handling of include/includeDir directives.
- Database user management RPC API for the s9s command line client.
- Continuous improvements to UX/UI.
- New cmon-events process to handle notifications to 3rd party services.
- New cmon-ssh process to handle Web SSH console access.
- Improved error reporting for troubleshooting/support.
- Use a custom mysql port when adding a MySQL Asynchronous slave (MySQL Galera).
- Fix for a build issue on Ubuntu/Debian.
- Fix for setting the Settings->Backup's retention period. In future versions Settings->Backups will be deprecated/removed and can be accessed from the Backup page instead.
- Fixed inconsistencies in the displayed backup execution and next-execution times and timezones. The UTC timezone is used across the backup page for now.
- Performance->Transaction Log is disabled by default. Added a slider to set the sampling interval.
- 'Add Node' and 'Add Existing Node' now have a data directory input field to change the data directory used for the new node.
- Alarm category in the Activity Viewer is now correctly showing the component name instead of the type name.
- Fix to show correct server name in the individual server load graphs.
- Fix regression/empty table for Performance->DB Variables.
- Fix to enable editable dropdown to the Add Existing Keepalived form for HAProxy.
- Support for using a custom port when adding a MySQL Asynchronous Slave (MySQL Replication)
- Fix for Configuration Management -> Change to list only valid nodes.
- Performance -> 'Status Time Machine' is now deprecated/removed.
- Transaction deadlock detection is now disabled by default, as it takes a lot of CPU. A new parameter controls how often to check for deadlocks, specified in seconds; 0 means disabled (default). Enable it in /etc/cmon.d/cmon_X.cnf (if you want to enable it, 20 is a good value) and restart cmon.
- Sample controller IP seen by MySQL nodes once after every cmon restart.
- logrotate (wtmp) more often and restart accounts-daemon
- A fix of the show_db_users and show_db_unusued_accounts JS scripts.
2017-05-12 clustercontrol-1.4.1-3121 | clustercontrol-controller-1.4.1-1890 | clustercontrol-cmonapi-274
- ProxySQL: Fix wrong IP in proxysql selected node header.
- PostgreSQL fixes
- Overview page no longer causes high load on the web client.
- Performance -> DB Variables now loads correctly.
- Tooltips added for the graphs.
- LDAP authentication attempts are logged to a separate log file, <webdir>/clustercontrol/app/log/cc-ldap.log
- Minor improvements on how multiple recipients for email notifications are added.
- Galera: Fixed a bug in clone cluster
- Deployment: Fixed a bug using hostnames, which could cause grant/privilege errors from controller preventing the controller to connect to the managed nodes.
- ProxySQL: hashing of passwords in the mysql_users table.
- Backup Reports: Properly transform IPs into hostnames in the backup report (due to a previous UI bug, some backups and schedules used IPs instead of hostnames).
- MongoDB: Degraded cluster state was reported after removing a shard.
- Fixed an issue causing not all recipients to be listed under Settings (top menu) -> Email Notifications
2017-04-24 clustercontrol-1.4.1-3048 | clustercontrol-controller-1.4.1-1856 | clustercontrol-cmonapi-266
- Fix for empty databases list with MySQL backups.
- The MySQL Variables page now uses the RPC API.
- Improved deployment wizard placeholder descriptions.
- Enable 'restore backup' for PostgreSQL.
- Enable using a custom PostgreSQL port (default 5432) for deployments.
- Fixed an issue that allowed negative port numbers in the load balancer forms.
- Fix empty details on the keepalived node page.
- Fix for saving timezone settings other than GMT+0 with email notifications.
- Fix for deploying a single MySQL replication node cluster.
- Require 'force' to be set to stop a read-write MySQL server (MySQL Replication).
- Fix for node(s) reconnection issue to restored master after a restore backup.
- Fix configuration (my.cnf) import to start immediately after a MySQL replication slave has been added (Galera)
- Job log improvement. Show the command/action that was requested.
- Fix with MaxScale to show correct list of masters and slaves in the console.
2017-04-12 clustercontrol-1.4.1-3002 | clustercontrol-controller-1.4.1-1834
- See also 2017-04-11
- New Operation Report - Schema Change Report. With this feature you can spot changes in your database schemas and ensure changes are sound on your system.
- See also 2017-04-11
- Detect schema changes (CREATE TABLE and ALTER TABLE; DROP TABLE is not supported yet). New options schema_change_detection_address, schema_change_detection_databases, and schema_change_detection_pause_time_ms must be set in /etc/cmon.d/cmon_X.cnf to enable the feature. A new Operational Report (Schema Change) must be scheduled.
Creating a report of 100 000 schemas and tables will take about 5-10 minutes depending on hardware. Configure schema_change_detection_address to run on a replication slave or an async slave connected to e.g. a Galera or Group Replication cluster. For NDB, schema_change_detection_address should be set to a MySQL server used for admin purposes. Throttle the detection process with schema_change_detection_pause_time_ms. schema_change_detection_databases is a comma-separated string of database names and also supports wildcards; e.g. 'DB%' will evaluate all databases starting with DB.
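The three options above could be combined in the cluster's cmon configuration like this (the address, database list, and pause time are illustrative assumptions; only the option names come from the note):

```ini
# /etc/cmon.d/cmon_X.cnf
# Run detection against a replication/async slave to keep load off the primaries
schema_change_detection_address=10.0.0.15
# Comma-separated database names; wildcards such as 'DB%' are supported
schema_change_detection_databases=DB%,app_db
# Pause between checks (milliseconds) to throttle the detection process
schema_change_detection_pause_time_ms=100
```

A Schema Change Operational Report must still be scheduled for the feature to produce output.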
2017-04-11 clustercontrol-1.4.1-2998 | clustercontrol-controller-1.4.1-1830
- Fixed a bug making it impossible to restart failed jobs.
- Fixed a bug in the Nodes graphs which made them render incorrectly.
- Replication: Extended the Import dialog (Replication cluster) with a few more options (enable information schema queries).
- Galera: Added multi-NIC support for Add Replication Slave.
- Fixed the title for the Nodes page
- ProxySQL: Handle latency (us/ms) and improvements to graphs.
- Query Monitor: Top Queries was unusable with more than 20 queries.
- Fixed a bug making it impossible to add an existing replication slave.
- Replication (Percona,MySQL): print out messages to show progress while applying relay log.
- JavaScript fixes to take the enable_is_queries setting into account.
- SSH alarms re-organised and an alarm is raised if SSH access is determined to be too slow.
- GroupRepl: Fixed an add-replication-slave bug.
- A JS script to change the password on all MySQL servers (mainly useful for NDB).
- ProxySQL: Small fix for 'latency'. Older versions used Latency_ms, newer versions use Latency_us.
- User option: enable_is_queries = 0|1
2017-04-04 clustercontrol-1.4.1-2967 | clustercontrol-cmonapi-1.4.1-257 | clustercontrol-nodejs-1.4.1-86 | clustercontrol-controller-1.4.1-1811
In this release we have added additional management functions for ProxySQL. You can now view queries passing through ProxySQL, create and edit query rules, host groups/servers, users and variables.
We also have support for managing MySQL Galera and Replication clusters using separate management and data/database IPs for improved security.
- Support for MySQL Galera in addition to Replication clusters.
- Support for active-standby HA setup with KeepAlived.
- Use the Query Monitor to view query digests.
- Manage Query Rules (Query Caching, Query Rewrite).
- Manage Host Groups (Servers).
- Manage ProxySQL DB Users.
- Manage ProxySQL System Variables.
- Manage MySQL Galera and Replication clusters with management/public IPs for monitoring connections and data/private IPs for replication traffic.
- Add Galera nodes or Replication Read Slaves with management and data IPs.
2017-03-29 clustercontrol-1.4.0-2912 | clustercontrol-controller-1.4.0-1798
- Create/Import NDB Cluster changes (remove the 15 node limitation)
- Create NDB Cluster failed due to a bug in RAM detection.
- Replication: Roles were not updated correctly when autorecovery was disabled.
2017-03-13 clustercontrol-1.4.0-2812 | clustercontrol-controller-1.4.0-1769
- Fix for 'Copy Log' to work again
- Fix broken Galera SSL encryption indicator
- Added support to change default ProxySQL listening port
- Further hostname fixes for ProxySQL
- License handling fix with notifications
- Syslog logging fix (command line param --syslog)
In your /etc/default/cmon file add the following line: ENABLE_SYSLOG=1
2017-02-28 clustercontrol-1.4.0-2743 | clustercontrol-controller-1748
- Rebuild Replication Slave did not present available masters
- ProxySQL deployment sends IP instead of hostnames when required
- Further improvements to handle RPC API token mismatches
- Workaround to handle IP addresses instead of hostnames for ProxySQL deployments
- Improvements to avoid creating zombie processes
- Remove false positive SSH alarms when using a hostname in the cmon.cnf file
- Sending backup failure mails as "critical" notification
2017-02-15 clustercontrol-1.4.0-2709 | clustercontrol-controller-1725
- The Cluster list no longer disappears when the CMON process is restarted, stopped or down
- Rebuild slave/change master dialog correctly populates the nodes dropdown
- Selecting a node action could at times cause the wrong dialog to show up
- Improvements to RPC API Token mismatch error messages
- 'Check for updates' in the Settings page is deprecated/removed
- Galera: wsrep_notify_cmd pointing to the (discontinued) wsrep_notify_cc script was wrongly invalidated.
- Galera: Fixes in configuration to support Percona Xtrabackup 2.4.5 and MariaDB Cluster 10.1, due to this bug https://bugs.launchpad.net/percona-xtrabackup/+bug/1647340.
- Avoid sampling from a failed node
- Deployment: removed --purge from apt-get remove, to handle /var/lib/mysql as a mountpoint.
- Correct filtering with config parameters in the Configuration Management
- Read-Only switcher removed from the Overview Page. You can now only change the read-only status from the Nodes page's action menu
- Fix issue with the Nodes page's action menu where the wrong action item was selected and could accidentally be performed instead
- Improvements to the cluster and node status updates cycles.
New <webdir>/clustercontrol/bootstrap.php variable to control refresh intervals.
The default is now 10s, down from the previous 30s.
- Permanently disabled the 'system_check.js' script as it was causing problems for some users
- /var/log/wtmp is automatically rotated when it reaches 10MB in size. 10 files are kept for history, and the rotation runs at 02:00 AM.
- Replication: A backup stored on the controller, when restored on a host other than the one it was created from, would be restored on the wrong host (the originating host).
- Replication: FLUSH LOGS after failover to update SHOW SLAVE HOSTS.
- Galera: Percona XtraDB Cluster 5.5 for Debian/Ubuntu failed to install.
- Clear Alarms: specify 'send_clear_alarm=1' in /etc/cmon.d/cmon_X.cnf and restart cmon to receive email notification when a Cluster Failure, SSH failure, MySQL Disconnected, Node/Cluster Failed Recovery, and Cluster Split alarms have been resolved. 'send_clear_cluster_failure' is an alias for this option.
- OS detection: Failed to detect Debian version if lsb_release was not installed. [bug 1235]
- Aborted jobs now have the correct status.
- Fix for wrong scheduled time shown in Operational Reports
- Fix for inconsistent MongoDB menus
- Fix for confusing 'Change Organizations' option.
You can more easily create a SuperAdmin/Root user to manage all your organizations/teams.
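The clear-alarm notification mentioned above is enabled per cluster; a minimal sketch of the change (X is the cluster id):

```
# /etc/cmon.d/cmon_X.cnf — restart cmon afterwards for this to take effect.
# 'send_clear_cluster_failure' is an alias for the same option.
send_clear_alarm=1
```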
2017-01-20 clustercontrol-1.4.0-2601 | clustercontrol-controller-1.4.0-1675
- Manage -> Configurations: Wrong args sent to change_config_param.js script
- Fix of crashing bug during partial restore.
- Graph missing from Operational Report.
- Replication: Stop Slave (from UI) auto restarted the slave.
- Adding a MySQL Node and having HAProxy caused a problem creating the s9smysqlchk user.
- Fix for an issue with having clusters from multiple controllers in one UI.
- Migration of backups: better error messages, and corrections for when backup files do not exist.
- Sudo: Corrects an issue where the sudo configuration (when using sudo with a password) would overwrite the sudo settings.
- Fixed a bug in Excessive CPU Usage, the number of CPU cores was not taken into account.
- Backup: an overlapping backup schedule will fail to execute and the user is prompted to correct the backup schedule.
2017-01-03 clustercontrol-1.4.0-2542 | clustercontrol-controller-1.4.0-1641
- New advisor: s9s/mysql/galera/check_gra_log_files.js monitors the growth of GRA log files.
- ProxySQL failed to install on CentOS/RHEL 7 when the mysql client was missing.
- SMTP/TLS fixes for email notifications
- Backup Retention: Backups matching the retention period were not removed.
- Restore of a Partial Backup (xtrabackup) shut down the DB nodes unnecessarily.
- Stop Garbd failed on CentOS/RHEL 7
- Fix in the "enable/disable node/cluster recovery" to show a confirmation dialog when changing settings
- Small fix in query monitoring dialog.
2016-12-22 clustercontrol-1.4.0-2527 | clustercontrol-controller-1.4.0-1630
- New Advisor (Top Queries) and fixes
- Updated MySQL Group Replication (GA) to install from Oracle default MySQL repositories instead of MySQL Labs releases.
- Improvements to support Galera 3.19
- Maintenance mode related fix for deployment jobs
- ProxySQL: additional deployment option (implicit transactions)
- If 'vendor' is not set in the cluster's /etc/cmon.d/cmon_X.cnf file (X is the cluster id), then cmon will attempt to auto-detect the vendor. For MySQL based setups, please ensure the correct vendor is set to one of the following: percona, oracle, codership, mariadb; e.g. vendor=mariadb if you are using a MariaDB based setup.
- Query sampling time is no longer needed/used (Query Monitor settings)
- Added option for Implicit Transactions (ProxySQL)
- Text clarification when saving an existing DB user twice
- Fix for correctly saving mail server settings
- Fix for inconsistent password styles
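The 'vendor' setting described in the notes above could look like this (cluster id 1 assumed, purely illustrative):

```
# /etc/cmon.d/cmon_1.cnf
# One of: percona, oracle, codership, mariadb.
vendor=mariadb
```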
2016-12-12 clustercontrol-1.4.0-2491 | clustercontrol-cmonapi-1.4.0-247 | clustercontrol-nodejs-1.4.0-82 | clustercontrol-controller-1.4.0-1614
In this release we are pleased to introduce support for ProxySQL as an additional load balancer option and experimental support for Oracle MySQL Group Replication!
We also have several improvements for MongoDB users. You can now convert a replicaset to a sharded cluster and scale by adding or removing shards.
- Deploy ProxySQL on MySQL Replication clusters (support for additional database types coming).
- Monitor ProxySQL performance (v1).
Experimental support for Oracle MySQL Group Replication
- Deploy Group Replication Clusters.
- Support Read-Write split configuration at deployment for MySQL Replication clusters.
- Enhanced multi-master deployment.
- Flexible replication-topology management.
- Replication error handling (Errant transactions).
- Automated failover.
- Convert a ReplicaSet cluster to a sharded cluster.
- Add or Remove shards from a sharded cluster.
- Add Mongos/Routers to a sharded cluster.
- Step down or freeze a node.
- New Advisors.
Backup, Query Monitor and Advisors
- A re-designed streamlined view into your scheduled and completed backups.
- Note: Upload/Download backups to AWS S3 has been temporarily removed.
- A re-designed Query Monitor with query execution plan output (explain) for MySQL.
- A re-designed Advisors page that makes it easier to see what needs to be acted upon.
- Support for Percona XtraDB Cluster 5.7
- New Operational Report listing available software and security package upgrades.
- New header with navigation breadcrumbs.
- Activity Viewer showing Cluster Logs/Events. See more fine grained levels of logs and events generated and captured by ClusterControl.
- Support for maintenance mode. Put individual nodes into maintenance mode, which prevents ClusterControl from raising alarms and notifications during the maintenance period.
Changes in ClusterControl v1.3.2
2016-10-14 clustercontrol-1.3.2-2167 | clustercontrol-controller-1.3.2-1504
- Allow two MongoDB Replica Set nodes to be deployed. Add an arbiter via 'Add Node'
- Fixes to database growth tables. Enable sorting on database or table columns
- Enable MariaDB 10.0 version for Repository mirroring
2016-09-19 clustercontrol-1.3.2-2066 | clustercontrol-cmonapi-1.3.2-233 | clustercontrol-controller-1.3.2-1455
- Support for v7.4.12 in Create/Deploy MySQL/NDB Cluster (starting from controller build #1446)
- Option to select MongoDB consistent backup (https://github.com/Percona-Lab/mongodb_consistent_backup) is now properly shown for MongoDB Cluster if it is installed
- Fix importing existing MySQL Cluster/NDB cluster (added mgm nodes)
- Fix page refresh issues on Logs->Job
- Fix saving confirmation issues to the Configuration Management (MySQL)
- Fix empty Nodes->DB Variables page (MySQL)
- Fix Cluster and Node recovery status indicators on the cluster list vs cluster specific pages
2016-09-05 clustercontrol-1.3.2-2023 | clustercontrol-controller-1.3.2-1431
- Create/Import Cluster Wizard cosmetic fixes
- Fix Operational Reports and MySQL User Management ACL settings for custom user profiles
- Fix empty graphs on MongoDB Nodes->DB Performance page
- Fixed a bug where restoring partial xtrabackups did not work at all. Partial xtrabackups are now restored to a particular directory, and the user must manually restore the tablespaces to the datadir.
- Fixed a bug that in some situations could cause a node to not be fully removed.
2016-08-08 clustercontrol-1.3.2-1910 | clustercontrol-cmonapi-1.3.2-226 | clustercontrol-nodejs-1.3.2-73 | clustercontrol-controller-1.3.2-1391
- Deploy or add existing MongoDB Sharded clusters (Percona MongoDB and MongoDB Inc v3.2)
- Minor re-designed overview page for sharded clusters and performance graphs
- Support for writing MongoDB based Advisors
- Support for managing MongoDB configurations
- Support for Percona consistent mongodb backup, https://github.com/Percona-Lab/mongodb_consistent_backup (if installed on the ClusterControl host)
New Activity Viewer
- Easily see Alarms and Jobs for all clusters consolidated in a single view
New Deployment and Add Existing Cluster and Servers Dialog
- Re-designed dialog for deploying and adding clusters
- Supports MySQL Replication, MySQL Galera, MySQL/NDB, MongoDB ReplicaSet, MongoDB Shards and PostgreSQL
2016-07-28 clustercontrol-controller 1.3.1-1372
- Fix for a new Percona 5.6 systemd script
- Fix for a new MariaDB 10.1 systemd script
- Fix a busy loop issue (happening after some time with Proxmox provisioned LXC containers)
- Recovery job was marked as succeeded when it had actually failed
2016-07-18 clustercontrol 1.3.1-1820 | clustercontrol-controller 1.3.1-1364 | clustercontrol-cmonapi 1.3.1-215
- Fix for digest mails (encoding and empty bodies) with MS Exchange
- Fix for reports generation crashes
- Fix for 'Create Database' returning 'unable to find host'
- Support for HAProxy 1.6 new stats URL format
- Moving File privilege to the Administration section for 'Create Account'
- Updated AWS SDK to 2.8.30 and removed deprecated requirement on AWS SSH Private Key File
2016-06-20 clustercontrol 1.3.1-1655 | clustercontrol-controller 1.3.1-1324 | clustercontrol-cmonapi 1.3.1-198
- Backup: Fixed an issue with long running backups and overrun of backup log entries (backup would not terminate properly)
- Fix for automatically correcting a faulty 'sudo' configuration.
- Alarms: fixed inconsistent alarm count
- Jobs: Fixed a number of issues such as being able to Restart failed jobs
2016-06-06 maintenance release: clustercontrol-controller 1.3.1-1304 | clustercontrol-1580
- Galera: Fixed a version detection issue of the galera wsrep component.
2016-05-31 clustercontrol 1.3.1-1562 | clustercontrol-controller 1.3.1-1296 | clustercontrol-cmonapi 1.3.1-195 clustercontrol-nodejs 1.3.1-64
MySQL based clusters
- Create MySQL Replication Clusters (master + N slaves) with Percona (5.6|5.7), MariaDB (10.1) or Oracle (5.7) packages
- Enable SSL client/server encryption
- Enable/Disable automatic management of the server read_only variable by setting 'auto_manage_readonly=true|false' in the cmon.cnf file of the replication clusters. Default is true.
- Add Existing MySQL/NDB Cluster. Add an existing production deployed NDB Cluster. 2 MGMT Nodes, X SQL Nodes, Y Data Nodes.
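The auto_manage_readonly option above goes in the replication cluster's cmon configuration; a minimal sketch (X is the cluster id):

```
# /etc/cmon.d/cmon_X.cnf for the replication cluster.
# Disable automatic management of the read_only variable (default is true):
auto_manage_readonly=false
```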
New Backup and Restore options
- Explicitly select a backup failover host to use instead of auto selecting a failover host
- Improved restore mysqldump files
MySQL User Management
- General UI improvements
- Set accounts to require encrypted connections by enabling "REQUIRE SSL"
- Import existing SSL certificates and keys. Upload your certificate, private key and CA (if any) to the ClusterControl Controller host and then import the certificate to be managed by ClusterControl.
- Support for installing ClusterControl on MySQL 5.7
- Correctly show nodes that are in maintenance mode, e.g., during node recovery
- Simplified MariaDB MaxScale deployment. No need to enter a MariaDB enterprise repository URL
- Added "Restart Node" action for all cluster types
- Upgrade to CakePHP 2.8.3
- Job Log improvements
Changes in ClusterControl v1.3.0
2016-05-16 maintenance release: clustercontrol-1.3.0-1469 | clustercontrol-controller 1.3.0-1274
- 'Disable SELinux/Firewall' options are not set by default in Create/Add Cluster.
- Create NDB Cluster: typo in node type.
- Create NDB Cluster: SELinux/Firewall options were not used properly.
- MaxScale: Use community repos.
- Debian/Centos: Fixed OS detection code when lsb_release was missing.
- Handle the case that on some distributions the service name is 'mariadb' and not 'mysql'/'mysqld'.
- Failed to handle quotes (single and double) when validating wsrep_sst_auth settings.
2016-05-09 maintenance release: clustercontrol-controller 1.3.0-1262
- Ubuntu 15.04 fix to handle that my.cnf is a symlink
- Missing SUPER privilege in Create Cluster causing the Incremental Xtrabackup to fail.
2016-05-03 maintenance release: clustercontrol-1.3.0-1438 | clustercontrol-controller 1.3.0-1257
- Permission problem in a web folder
- Fix upgrade issue for 1.3.0 on centos/rhel
- Fixed a compatibility issue with xtrabackup 2.2.x
2016-05-02 maintenance release: clustercontrol-1.3.0-1420 | clustercontrol-controller 1.3.0-1252
- Allow 'strange characters' in user names (all ASCII is now supported except ` ´ '). UTF-8 characters are not supported.
- Made "Disable Firewall" default choice for Redhat/Centos when creating clusters.
- A directory, WWWROOT/cmon, was never created during installation which affected uploading of files.
- Postgres fixes to start a node from UI.
- Wrong status for nodes in MySQL Cluster.
- MySQL standalone nodes were deployed as read only.
- Mongo/HAProxy config file parsing issues fixed.
- Failed to detect CentOS 6.6
- Some settings (thresholds) set in the front-end were not respected by the controller.
- Fixed a compatibility issue with xtrabackup 2.1.x.
2016-04-25 maintenance release: clustercontrol-controller 1.3.0-1242
- mysqldump failed for MariaDB 10.x due to an erroneous parameter being used.
2016-04-24 maintenance release: clustercontrol-1.3.0-1393 | clustercontrol-controller 1.3.0-1240
- New "Install Software" option for Galera Cluster with "Create Database Cluster" and "Create Database Node"
The default "Yes" acts as before: ClusterControl provisions the database nodes with the required packages, and any existing packages may be uninstalled if required.
If set to "No", no packages are provisioned and no existing packages are uninstalled. It is assumed that the DB nodes have been provisioned with all required database packages, for example by a configuration management system. The create cluster/node jobs will then only deploy our Galera my.cnf file and bootstrap the cluster, without provisioning any software. It is important that the mysql server process is stopped before running the job with "Install Software" set to "No".
- MongoDB arbiter is now shown on the "Nodes" page
- Correct wrong assets path. Fixes missing logo in operational reports.
Manual fix: Move /usr/share/cmon/assets/assets to /usr/share/cmon/assets
2016-04-21 maintenance release: clustercontrol-1.3.0-1375
- Fix broken Add Existing Server/Cluster dialog.
2016-04-19 maintenance release clustercontrol-controller 1.3.0-1234 | clustercontrol-1.3.0-1355
- Prefer "netcat-openbsd" over other variants when provisioning a node.
- epel-release URL fix for Centos 7 (using time-proof urls).
- Auto schema upgrade fixes in /etc/init.d/cmon
The cmon init script in 1.3.0 automatically tries to upgrade the cmon schema to the current version.
- Create Cluster Job: Remove unused/wrong keys from the json format.
- Key Management: Fix reload issues with manage key's content table.
- Manage-Hosts: Fix Unknown status for HAProxy and Keepalived.
2016-04-18 clustercontrol 1.3.0-1347 | clustercontrol-controller 1.3.0-1228 | clustercontrol-cmonapi 1.3.0-183 clustercontrol-nodejs 1.3.0-56
- Key Management allows you to manage a set of SSL certificates and keys that can be provisioned on your clusters
- Create certificate authority certificates or self-signed certificates and keys
- Easily Enable and Disable SSL encrypted client-server connections for MySQL and Postgres based clusters
- Additional Operational Reports
- Generate an Availability Summary of uptime/downtime for your managed clusters and see node availability and cluster state history during the reported period
- Generate a backup summary of backup success/failure rates for your managed clusters
- Improved Security
- From this version we set a unique Controller RPC API Token, which enables token authentication for your managed clusters. No user intervention is needed when upgrading from older ClusterControl versions. A unique token is automatically generated, set and enabled for existing clusters.
- Custom scripts/applications utilizing the RPC API need to pass the correct token for the clusters, see http://severalnines.com/downloads/cmon/cmon-docs/current/ccrpc.html#configuration for details on how to pass the token correctly.
- Create/Mirror Repository
- Mirror your database vendor's software repository without having to actually deploy a cluster. A mirrored local repository is used in scenarios where you cannot upgrade a cluster and must lock the db versions in use.
- Additional Backup Retention Periods
- Enable shorter retention periods
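Relatedly, for the RPC API token change under Improved Security above, a custom script must include the cluster's token in every request it sends to the controller. A minimal sketch (the helper and the operation name are illustrative; see the linked documentation for the actual request format):

```python
import json

# Hypothetical sketch: each RPC request body carries the cluster's token
# alongside the requested operation. Field names other than these are not
# taken from the official docs.
def build_rpc_request(operation: str, token: str) -> str:
    return json.dumps({"operation": operation, "token": token})

body = build_rpc_request("getInfo", "0123456789abcdef")
```

The resulting body is what would be sent to the controller's RPC endpoint for the cluster in question.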
MySQL based clusters
- Create a production setup of NDB/MySQL Cluster from ClusterControl
- Deploy Management Nodes, SQL/API Nodes and Data Nodes
- Easily toggle read-only mode on and off for MySQL nodes
MongoDB based clusters
- Create MongoDB ReplicaSet Node
- Support for Percona MongoDB 3.x
- MongoDB 2.x is no longer supported.
Changes in ClusterControl v1.2.12
2016-04-03 - Maintenance release of clustercontrol-controller-1.2.12-1201, clustercontrol-1.2.12-1261
- xtrabackup failed if monitored_mysql_root_user was anything other than 'root', i.e. the value of monitored_mysql_root_user in cmon.cnf was not respected.
- xtrabackup failed if executed on an asynchronous slave connected to a Galera node.
- MongoDB: Shards were not presented correctly
2016-03-20 - Maintenance release of clustercontrol-controller-1.2.12-1184, clustercontrol-1.2.12-1261
- Restore: Copying files larger than 2GB failed.
- Clear alarms when removing a node
- Galera: Setting up an asynchronous slave connected to Galera failed for MariaDB 10.x
- MaxScale: displayed as a slave in the Overview
- MongoDB: Shards were not presented correctly
- MySQL Transaction Log: Pagination issue
2016-03-04 - Maintenance release of all components: clustercontrol-controller-1.2.12-1158, clustercontrol-1.2.12-1195, clustercontrol-cmonapi-1.2.12-171.
- Very old backup schedules could sometimes cause problems
- Improved handling of checks for mount points that do not exist
- Query Monitor: Running Queries did not always show because of a problem in processlist.js
- Missing explains
- Occasionally upgrades could fail because a UI cache was not cleared
- LDAP fixes related to issues when upgrading from 1.2.10 to 1.2.12
- Showed too many node types in Query Monitor -> Running Queries drop down
- Missing possibility to hide graphs opened by 'Show Servers'
- Fixes to queries showing explains
- Operational Reports (BETA). Generate, schedule and email out operational reports. The current default report shows a cluster's health and performance at the time it was generated, compared to 1 day ago.
The report provides information on Node availability, Backup summary, Top queries, Host and Node stats. We will add more options and report types in future releases.
- Custom Advisor dialog. Create threshold based advisors with host or MySQL stats without needing to write your own JS script.
- Notification Services (new clustercontrol-nodejs package). Currently only email and pagerduty notifications are used by custom advisors. More to come.
- Local Mirrored Repository. Create a local mirror of a database vendor's software repository. This allows you to "freeze" the current versions of the software packages used to provision a database cluster for a specific vendor and you can later use that mirrored repository to provision the same set of versions when adding more nodes or deploying other clusters.
- Export graphs as CSV|XLS files
- Search the content in the system logs
MySQL based clusters
- MariaDB 10.1 support.
- Enable binary logging for a node. This node can then be used as the master for a replication slave or use the binary log for point in time recovery.
- Delayed replication option when adding a slave to the Galera Cluster. Delay the replication by N seconds.
- Enable/Disable SSL encryption of Galera replication links.
MySQL Replication Master
- Oracle MySQL 5.7 as vendor. Limitation: Percona Xtrabackup is not supported for MySQL 5.7 yet.
- Semi-sync replication option
- Find the most advanced MySQL slave server to use for Master promotion
MySQL Replication Slave
- Delayed replication option (MySQL 5.6). Delay the replication by N seconds.
- New table lists delayed replication slaves in the cluster
New Backup options
- Auto Select backup host. Allow ClusterControl to automatically select which node to take the backup on.
- Enable backup failover node. If the selected backup node is down a failover node will be elected.
Galera: the node with the highest local index is de-synced and used as the backup failover node.
MySQL Replication: a random slave node is used as the backup failover node.
- "No backup locks" for xtrabackup/innobackupex. Use FLUSH NO_WRITE_TO_BINLOG TABLES and FLUSH TABLES WITH READ LOCK instead of LOCK TABLES FOR BACKUP.
- Manage Garbd and MaxScale configurations. Limitation: MaxScale does not support 'reload' (https://mariadb.atlassian.net/browse/MXS-99), meaning the operator must restart the maxscale daemon (e.g. from the UI).
- Support for MongoDB 3.2
- Support for Postgres 9.5
Changes in ClusterControl v1.2.11
2015-12-11 - patch release clustercontrol-controller build no 1052
- Backup: supports group [mysqldump] in my.cnf file
- Developer Studio: Fixed bugs in import/export of advisors
- Scalability fix: Use poll instead of select
2015-12-04 - patch release clustercontrol-controller build no 1039, clustercontrol (ui) build no 899, cmonapi build no 141.
- Finer granularity on Range Selection without using date selector (15 mins, 30 mins, 45 mins)
- Removed obsolete data columns (Connections and Queries) from cluster bar
- Role and Manage Organizations fixes
- Fixed a bug when using internal repos
- A config file parser fix for include files (parser tried to treat a directory as a file)
- NDB node statuses were reported as "9999" (mysql-unknown) when auto-recovery was disabled
- MariaDB repo creation bugfix
- Fixed a crashing bug when having many clusters on one controller.
2015-11-15 - patch release clustercontrol-controller build no 1023, clustercontrol (ui) build no 883, cmonapi build no 138.
- Fixes to User/Organization management
- Xtrabackup: corrected --no-timestamp option (was -no-timestamp)
- Implemented max-request-size handling for the REST API calls to limit transfers between the controller and REST consumers (such as the UI)
- MySQL Cluster: Stop Node job could fail unnecessarily. / Start Node job stuck in RUNNING state for too long.
- Keepalived: corrected vrrp_script chk_haproxy (was rrp_script chk_haproxy)
2015-11-06 - patch release clustercontrol-controller build no 1007, clustercontrol (ui) build no 854, cmonapi build no 135.
- Default "Admin" Role is missing ACL settings for Create DB Node and Dev Studio
- When viewing Global Jobs, the installation Progress window cannot be resized vertically.
- DB Variables page does not load properly
- Find Most Advanced Node job sent with the wrong cluster id (0) causing it to fail.
- Postgres: postgres|postmaster executable names are both supported meaning that the postmaster process is now properly handled.
- Reading disk partition information failed as a non-root user
2015-11-02 - patch release clustercontrol-controller build no 998, clustercontrol (ui) build no 842, cmonapi build no 135.
- Change the favicon for ClusterControl to the one that is used on our site www.severalnines.com
- MongoDB add node to replica set looks wrong
- Global Job Messages: Local cluster jobs are shown in the popup dialog
- Fix in Manage -> Schema Users. Drop user even if user is empty (''@'localhost')
- Add/Register Existing Galera Node: The "Add Node" button does not react/work if there is no configuration files in the dropdown for the "Add New DB Node" form
- MongoDB add node to replica set dialog - text was cut
- [PostgreSQL] Empty "DB Performance" graphs
- Installation progress window text disappears while scrolling back
- Galera: Register_node job: registers node with wrong type
- Create DB Cluster: Check that the OS is the same on all servers
- Create DB Cluster/Node, Add Node: Install cronie on Redhat/Centos
- Scheduled backups that are stored both on the controller and on the node (full and incrementals) fail to restore.
- Increase size of the 'properties' column in the server_node table to contain 16384 characters. The following is needed on the cmon db: ALTER TABLE server_node MODIFY properties VARCHAR(16384) DEFAULT '';
- CmonHostManager::pull(..): Properly handle failed JSON parsing.
- MongoDB: Check if there is a new member in the replica set and then reload the config
- MySQL: Bugfix for recent replication mysqldump backup issues: exclude temporary databases (names starting with #) from the backup.
- Postgres: Add existing replication slave failed.
- Character set on connection + cmon.tx_deadlock_log: changed to utf8mb4 to properly encode characters in Performance -> Transaction Log, which previously prevented data from being shown. Run mysql -ucmon -p -h127.0.0.1 cmon < /usr/share/cmon/cmon_db.sql to recreate this table.
2015-10-23 - patch release clustercontrol-controller build no 985, clustercontrol (ui) build no 826, cmonapi build no 131.
- Backup fix to support xtrabackup 2.3
- Fixed start-up bugs when initialising internal host structures
- The netcat port defaulted to 9999 (and was impossible to change)
- Cluster failure with "Unknown database some_schema" message
- Remove Node: wsrep_cluster_address is not updated
- Corrected printout in backup
- Corrected sampling of wsrep_flow_cntr_sent/recv
- In Cluster jobs list, Delete and Restart buttons do not work
- Add Replication Slave UI Dialog not showing properly
- Editing a previously created backup schedule alters the hostname, and backup job fails
- Number counter on 'Alarms' and 'Logs' tabs doesn't make sense
- User Management - refresh/reload button and corrected text for CREATE USER
clustercontrol-controller build no 974 | clustercontrol build no 808 | clustercontrol-cmonapi build no 128
Do not forget to apply schema diffs from the version you are upgrading from (1.2.10). If you are already on 1.2.11 there are schema changes to apply!
This is our best release yet for Postgres, with a number of improvements.
- Create a new Postgres node/cluster from the "Create Database Node" dialog, or add a new node with a few clicks
- You can now easily add a new replication slave for your Postgres master node
- The replication performance and status are shown on the overview page for the slave
- You can restore a backup created by ClusterControl on a specific node
- Create your own dashboard with stats to chart/graph on the overview page like MySQL based clusters
- DB performance charts on the Nodes page
- View database status and variables on your postgres nodes side by side
- Create your own postgres "advisors/DB minions" for alarms or email notifications
MaxScale for MySQL based clusters.
MariaDB MaxScale is an open-source, database-centric proxy that works with MariaDB Enterprise, MariaDB Enterprise Cluster, MariaDB 5.5, MariaDB 10 and Oracle MySQL.
- Deploy MaxScale instance for round-robin or read/write splitter with a customizable configuration
- Add an existing running MaxScale instance
- Send commands to "maxadmin" and view the output in ClusterControl
MySQL Based Clusters.
- You can now use Codership as the Galera vendor for Create Cluster and Database Node
- Create a MySQL Replication Master Node from the Create DB Node dialog. Currently only Percona as vendor is supported
- Add/Register an existing running MySQL slave without stopping and provisioning the dataset from the cluster
- Create Cluster and Database Node now support using "internal repositories" for environments where you do not have internet access and have internal repository servers instead
- Removed the limit of only being able to chart 8 DB stats. You can now arrange the charts in a layout with 2 or 3 columns and chart up to 20 stats
- Fixes to Clone Cluster and the UI notification system/look
- Backup individual schemas
- Option to enable 'wsrep_desync' during backup for Galera clusters to workaround stalls/issues with FLUSH TABLES WITH READ LOCK. Puts the backup node into 'Donor/Desynced' state during the backup.
- Manage Email Notifications for all users at once
New System Logs page
- We have a new page specifically for system logs that you access from Logs->System Logs. Currently Database Logs are shown here.
- A tree view lists your DB nodes so you can simply pick the nodes that you want to check the mysql error log for
Revamped Configuration Management
- New implementation and look using our JS engine and a set of js scripts
- Group Changes. Automatically change and persist individual database variables across your DB nodes at once. If it's a dynamic variable we'll change it directly on the nodes
Revamped MySQL User Management
- New implementation and look using our JS Engine and a set of js scripts
- We removed the old implementation where we maintained users created from ClusterControl separately
- Users and privileges are set directly and retrieved from your cluster so you are always in sync
- Create your users across more than one cluster at once
HAProxy and KeepAlived
- You can now add existing running HAProxy and Keepalived instances that have been installed outside of ClusterControl
- Changing Cluster/Node AutoRecovery settings in the UI is not persisted in the cmon configuration files. Hence, restarting the cmon process will load the old settings as defined in the cmon configuration file. To make the settings persistent you must edit the cmon.cnf file (/etc/cmon.cnf or /etc/cmon.d/cmon_X.cnf, where X is the cluster id of the particular cluster).
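As a sketch of the persistence step (the file name and value below are examples; use your actual cluster id and the setting chosen in the UI):

```ini
# /etc/cmon.d/cmon_1.cnf  (or /etc/cmon.cnf for a single-cluster setup)
# Keep this in line with the AutoRecovery setting chosen in the UI, otherwise
# a cmon restart will revert to the value written here.
enable_autorecovery=1   ; 1 = cluster/node auto recovery enabled, 0 = disabled
```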
Changes in ClusterControl v1.2.10
Introducing our new powerful ClusterControl DSL (Domain Specific Language) which allows you to create Advisors, AutoTuners or "mini Programs" on our ClusterControl platform! (BETA)
- Allows you to execute SQL statements and/or run shell commands/programs across all your cluster hosts and retrieve results to be processed for advisors/alerts/actions etc.
- SDK documentation
- Integrated Developer's Studio (Developer IDE)
- Provides a simple but elegant environment to quickly create/edit, compile, run/test and schedule your JS programs.
- ClusterControl Advisors/JS bundle for MySQL based clusters - feel free to modify and share your changes with the community!
- A set of basic advisors with rules, alerts and actions that you can use as a base for your own customizations.
- Import ClusterControl JS bundles from the community or our partners.
- Export ClusterControl JS bundles for others to use/try out.
- Galera Cluster
- Create a Galera Cluster with up to 9 nodes for local/on-premise deployments.
- New cluster action that shows you the most advanced (last committed) node in your cluster, simplifying manual cluster recovery.
- Show long running and deadlocked transactions, great for performance tuning.
- Actions that can be performed on a Node are now also available directly from the overview page.
- New Add Node option to Add an Existing DB Node, i.e., a node that has been provisioned without ClusterControl.
- MySQL Replication clusters using GTIDs support Failover and Slave Promotion (manual).
- Overview page's cluster load graph and the Nodes's page graphs have been migrated to use the faster CMON RPC API.
- Configuration Management uses the CMON RPC API to manage configurations.
- General frontend optimizations for better UI performance.
- Fixed bugs in the SSL/TLS email protocol
Changes in ClusterControl v1.2.9
Feb 8th, 2015
- MySQL Replication (master <-> master) setups should not upgrade.
Support for PostgreSQL Servers!
- Add Existing PostgreSQL Server (standalone). Only v9.x supported.
- Monitor and schedule backups
- Query Monitor
- Port 9500 must be open on the controller for internal communication between UI and the CMON process
- Port 9999 (by default) must be open bi-directionally between controller and data nodes for streaming backups (mysqldumps, xtrabackup, pgdump)
- Bootstrap Cluster. Select a DB node to initialize the cluster from. Optionally enable/force SST for joining nodes and forcefully stop (SIGKILL) nodes
- Stop Cluster forcefully (SIGKILL) or with a graceful shutdown time
- Start DB node. Optionally enable SST at startup
- Stop DB node forcefully (SIGKILL) or with a graceful shutdown time
- Make a non-primary DB node primary
- Replication Slave Setup for Galera Cluster (GTID support). Slaves are bootstrapped with a Xtrabackup stream from a chosen Master
- Failover a replication slave (GTID only) to a new master
- Stage replication slave from master (Xtrabackup streamed from master to slave), useful in event of slave corruption
- Enable SSL Replication Encryption on the Galera Cluster. 2048-bit default key and certificate generated on the ClusterControl node and transferred to all the Galera nodes automatically
- SSL support between controller and managed nodes
- wsrep-recover is used to discover the most advanced Galera Node for recovery operations
- Removed manipulation of wsrep_cluster_address in my.cnf files, meaning ClusterControl no longer makes any alterations to a node's configuration file
- Backup functionality completely re-written, and netcat port for streaming backups is user specified
- Restore ClusterControl-originated or externally made backups on selected hosts
- Alarm is raised if a node has set wsrep_cluster_address=gcomm://
- Improved logging and hints to assist with failed recovery attempts
- Enable/Disable Node/Cluster Auto Recovery from UI
Advanced HAProxy Deployment Settings
- Set, for example, client and server timeouts and max connections for frontend and backend. Select which backend servers are 'active' or 'backup'
- It is possible to enable/disable nodes that are part of a load balancer.
- Built-in HAProxy statistics. No need to launch a separate window to monitor HAProxy performance anymore
- Template configuration is stored on the controller in /usr/share/cmon/templates/ (haproxy.cfg, mysqlchk.*, and mysqlchk_xinetd) and allows for pre-install modifications.
Deadlock and long running queries detection
- db_long_query_time_alarm (set in cmon.cnf). If a query takes longer than db_long_query_time_alarm seconds to execute, an alarm is raised containing detailed information about blocked and long running transactions. Set db_long_query_time_alarm=0 to disable; the default value is 5
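For example, a cmon.cnf fragment (the threshold value shown is illustrative):

```ini
# /etc/cmon.cnf
# Raise an alarm for queries running longer than 10 seconds;
# set to 0 to disable the check (the default is 5).
db_long_query_time_alarm=10
```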
- MySQL Replication / Single MySQL Server
- Failover a replication slave (GTID only) to a new master
- Stage replication slave from master (Xtrabackup streamed from master to slave), useful in event of slave corruption
- MongoDB Cluster
- New Overview page with global lock stats.
A new more “modern” front-end theme
- Re-organized Cluster specific actions into an easy to access list.
- A global alarm list which shows alarms per cluster. No need to drill into each cluster to see the alarms anymore.
- s9s_galera (--install-garbd/--remove-garbd)
- s9s_sw_update deprecated for mariadb/percona apt/yum installs
Most of the above functionality is now handled directly by the Controller process.
Chef recipe & Puppet manifest for ClusterControl Controller (CMON)
- Zabbix Template, see http://www.severalnines.com/blog/clustercontrol-template-zabbix
- Changes in the Controller (CMON)
New configuration options (cmon.cnf):
- enable_mysql_timemachine=[0|1], default is 0 (disabled).
- cmondb_ssl_key = path to the SSL key, for SSL encryption between CMON and the CMON DB.
- cmondb_ssl_cert = path to the SSL cert, for SSL encryption between CMON and the CMON DB.
- cmondb_ssl_ca = path to the SSL CA, for SSL encryption between CMON and the CMON DB.
- cluster_ssl_key = path to the SSL key, for SSL encryption between CMON and managed MySQL Servers.
- cluster_ssl_cert = path to the SSL cert, for SSL encryption between CMON and managed MySQL Servers.
- cluster_ssl_ca = path to the SSL CA, for SSL encryption between CMON and managed MySQL Servers.
- cluster_certs_store = path to the storage location of SSL-related files, defaults to /etc/ssl/<clustertype>/<cluster_id>
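Taken together, a cmon.cnf fragment using the new options might look like the following sketch (all paths are placeholders, not defaults):

```ini
# SSL between CMON and the CMON DB
cmondb_ssl_key=/etc/ssl/cmon/cmondb-key.pem
cmondb_ssl_cert=/etc/ssl/cmon/cmondb-cert.pem
cmondb_ssl_ca=/etc/ssl/cmon/cmondb-ca.pem
# SSL between CMON and the managed MySQL servers
cluster_ssl_key=/etc/ssl/galera/1/cluster-key.pem
cluster_ssl_cert=/etc/ssl/galera/1/cluster-cert.pem
cluster_ssl_ca=/etc/ssl/galera/1/cluster-ca.pem
cluster_certs_store=/etc/ssl/galera/1
```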
- New binary format for host statistics which consumes less space (cpu, memory, disk, network stats)
- Fixed the disk statistics collector to support non-4K block sizes
- E-mails do not contain IP addresses when hostnames are specified in the cmon configuration
- Passwords are no longer logged (to jobs, for example) or sent
- Alarm will be raised when there is a missing MySQL GRANT
- Alarm will be raised/sent when there is a high IO wait for a period (>=50% average in 10 minutes)
- New alarm for Galera configuration problems
- Improved alarm emails (for example: high cpu/mem usage mails will contain the output of 'top' command)
- Several new RPC interfaces (directly on the daemon) for jobs and statistics handling
- The web client has started to migrate over to use RPC API calls instead of the CMON API
- Acceptance testsuite which runs daily using vm instances
- Job failures are much better explained
- Huge refactor of cluster handling; it is now mostly unified
- Improved host/node handling (makes it possible later on to add support for multiple services on a single host)
- Better CentOS7 / systemd support
- cmon init script updates (and unified across distros [redhat/debian])
- Support for more detailed SSH logging if needed
- Agents are no longer supported
Changes in ClusterControl v1.2.8
Sep 17th, 2014
- Create Single DB Node. Launch/provision a single MySQL Galera node or MongoDB ReplicaSet member node to a host.
- Create MySQL DB Users and Privileges across several DB clusters at once.
- LDAP improvements. Better support for AD. Added member+dn support. Groups and Users can be on different baseDNs.
- Support for Alerts and Incident tracking with external providers using a new Alarm/Events plugin system. PagerDuty plugin/integration available.
- Unified Event Viewer. Shows merged log entries (entries from multiple log sources) correlated with alarm/event occurrences.
- New alarms/email notification system. Daily alarm digests (summary). Fine-tune email delivery of different alarms/events.
- "Capacity Planner" (ALPHA). Add the constant define('RPC_PORT','9500'); to the UI's bootstrap.php file to enable access to it.
- Three new default MySQL dashboards: InnoDB IO, Query Performance and Galera Flow Control graphs.
- Audit logging. User activity tracking. Username and originating IP is logged in the Job log.
- Add Node (MySQL/MongoDB) improvements.
- yum/apt repo server for ClusterControl! See this blog post for details.
Changes in ClusterControl v1.2.6
Apr 22nd, 2014
- LDAP Authentication (BETA)
- User Role based access to ClusterControl functions
- OpenStack: Launch OST instances & Deploy a Galera Cluster (BETA)
- Manage multiple Galera Clusters with a single ClusterControl Controller host
- Show Master and Slaves added to a Galera Cluster
- Manage/Monitor MySQL Servers (auto detects if replication is enabled)
- Embedded Classic DB Configurations Wizard deprecated/removed!
Changes in ClusterControl v1.2.5
Feb 11th, 2014
- Support for Galera 3.x builds (Codership & PXC 5.6)
- AWS VPC (Create/Delete and Deploy) BETA
- Custom Expressions (User defined alerts/alarms)
- Support for agent-less monitoring
- Minor UI changes
Changes in ClusterControl v1.2.4c (maintenance release)
Dec 13th, 2013
- Updated s9s_sw_update to reflect changes in Percona Repositories for Ubuntu.
- Bug: Invalid clear of wsrep_cluster_addresses on controller startup.
Changes in ClusterControl v1.2.4
Nov 19th, 2013
- Online backup storage in AWS S3 and Glacier
- Multi-cluster support. Share one Controller Node with multiple clusters
- Add existing Galera cluster via ClusterControl to monitor and manage
- Galera database configurator facelift
- Automatically deploy Galera and MongoDB cluster from ClusterControl
- Time shift stats/graphs
- MongoDB ReplicaSet AWS Deployment for Dev/Test env.
- AWS deployments now use our web site to generate a database configuration. Deploy the latest GA version of Galera/MongoDB.
- InnoDB Status output
- Schema Analyzer (redundant indexes, myisam tables, missing primary keys)
- MongoDB: Stats counters for TokuMX
- MongoDB: auth support (mongodb_user and mongodb_password)
Changes in ClusterControl v.1.2.3
July 15th, 2013
- Clone Galera Cluster via the GUI (s9s_clone)
- Deploy HAProxy and Keepalived with VIP via the GUI
- User defined "dashboards" in the Overview page (quickly select your favorite graphs to show)
- New Overview page for Galera clusters
- MySQL Query Histogram added to the Performance page
- New view for DB variables and status (MySQL) added to the Performance tab. Easier to view and compare status/variables across all nodes
- Execute external/user made scripts (on the controller node)
- Customizable refresh rate (DB variables and status)
- Centralized backups
- Start/stop and rebuild MySQL replication slave for MySQL 5.6
- Reboot host from UI
- Improved sampling of statistics (better resolution)
- [MONGODB] Replica set support
- [MONGODB] Backups with mongodump
- [MONGODB] Tokumx support
- [MONGODB] Arbiter support (add/remove from cmd line)
Changes in ClusterControl v.1.2.2
May 16th, 2013
- Deploy Galera cluster nodes across multiple AZs and regions on AWS (great for test/dev)
- The Job log is available now in the 'Logs' view
- Simple database schema and user management (feature set from our classic cmon gui)
- Activate/deactivate monitoring of external processes (Manage Processes)
- Add node for MariaDB
- Logfile Analyzer - automatically checks and detects problems found in mysql error logs.
Changes in ClusterControl v.1.2.1
May 2nd, 2013
- Added support for MongoDB backup
- New database growth graph
- MySQL status time machine table (show status value differences over time)
- Deploy Galera cluster on AWS (only on a single AZ). Great for test/dev.
- Moved settings (Configurations, Hosts, Processes, Software Packages, Upgrade, Schema graphs) views to new 'Manage' tab
- Fixed bugs in add node
- Centralized backup: store backup data on the controller by using s9s_backupc
- Replication is now 5.6 aware (GTID)
- s9s_backup was changed; an upgrade of s9s_backup on all nodes is required.
- Fixed an email bug for SMTP notifications.
- Recovery improvements in Galera (refuse to recover the cluster if a majority of the nodes cannot be reached); recovery will be retried for a much longer period of time (to avoid "Galera node recovery blocked" messages).
- s9s-admin tools: on the controller, run git clone git://github.com/severalnines/s9s-admin.git for more details.
- Now checks /usr/lib64/ for libgalera_smm.so
Changes in ClusterControl v.1.2.0
March 14, 2013
- Improved alarms
- Improvements to support ClusterControl GUI
- Bug fixes
Changes in ClusterControl v1.1.33
August 1st, 2012
- Controller: Added alarms for Replication, in case a MySQL Server crashes
- Controller: Alarms for Galera, in case a MySQL Server crashes
- Controller: Removed redundant messages and newlines from log messages
- Controller: Persisting db|host_stats_collection interval to cmon db
- Query Monitor: log_queries_not_using_indexes now settable from the Web Interface
- Query Monitor: Set long query time via Web interface. Setting upper bound (1MB) on query size to be parsed.
- Query Monitor: Possibility to override CMON settings in favor of local my.cnf settings
- WWW + Controller: Reworked Configuration Management + web interface
- WWW + Controller: Last mysql error now saved in mysql_server table
- RRD: Optimized rrd graph creation, optimized galera stats collection to reduce db writes
- WWW: Added ‘clear all jobs’ button
- MySQL Cluster: Display an error in the Web UI if an SQL Node is not connected to the cluster
- Galera: Improvements in availability handling, in case createPrimary fails
- Replication: server-id + auto-increment seeding fixed
- Replication: Fixed MaxConnection bug in Replication
- MySQL Cluster: Fixed Index/DataMemory collection problem if MemoryReportFrequency is not set
- MySQL Cluster: Fixed bug in MGM status info, preventing rolling restarts
- MySQL Cluster: Fixed bug in stop node (SQL/Data node)
- Galera: Make node statistics less jumpy during restarts/recovery
- Controller: Clear MySQL replication links when a MySQL Server is removed from the cluster
- Controller: fixed bug causing multiple email messages to be sent in case of an alarm
- Controller: Fixed ProcessList bug if pidfile already had a path to prevent concatenation with datadir
- Controller: Added printout to error log if a pidfile could not be opened by the Process Manager
- Controller: Prevent autorestart of failed agents from happening too fast
- Backup: Fixed a file-length issue (backup file size was sometimes 0)
Changes in ClusterControl v1.1.32
June 25th, 2012
- Added load averages in ClusterControl Web interface
- Removed unnecessary log messages
- Added new configuration parameter to cmon.cnf: enable_autorecovery=1 (default is 1 = enabled; 0 means disabled, i.e., only manual recovery).
- Galera: It is now possible to manually recover a non-Primary Galera node from the ClusterControl web interface.
- Galera: Improved handling of cluster recovery. Pass 1: find the best node to recover from and make it the new Primary. Pass 2: Recover the remainder of the nodes from the new Primary
- Galera: Cleaned up redundant table galera_status_history
- Fixed buffer overrun in query profiling and anonymizing queries (affects agents only)
- Disable autorestart of failed agents from happening too fast
- Galera: Handling of existing provider_options when setting pc.bootstrap
- Buffer overrun in log message
- Backups: Fixed issue with a stale mysql connection
- Added error handling to process stat collection (a process could exist when the vector of PIDs was assembled, but terminate before being used)
- RRD: Fixed "ERROR: /var/lib/cmon//cluster_1_stats.rrd: expected 9 data source readings (got 1) from N"