CLI or dashboard stopping/starting/restarting MySQL cluster nodes ...
For a master-master MySQL replication configuration on CentOS 7, is it frowned upon to run 'systemctl [stop|restart|start] mysqld.service' versus stopping, starting, or restarting nodes from the dashboard, or is it okay to do so?
I don't see any problem starting/stopping the service via systemctl directly. If automatic recovery is turned on, ClusterControl will try to start the MySQL service again after 34 seconds (4 seconds for detection and 30 seconds graceful time before commencing the recovery job).
If you want to perform maintenance on the node, it's recommended to put it into maintenance mode so you won't be notified with false alarms during the maintenance window.
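For example, maintenance mode can also be scheduled from the s9s CLI (a sketch; the IP address, window, and reason below are placeholders, and flag details may vary by version, so check `s9s maintenance --help`):

```shell
# Put a node into maintenance mode for one hour (IP is a placeholder)
s9s maintenance --create \
    --nodes="192.168.1.101" \
    --start="$(date '+%Y-%m-%dT%H:%M:%S')" \
    --end="$(date -d '+1 hour' '+%Y-%m-%dT%H:%M:%S')" \
    --reason="Planned MySQL restart"

# Verify the active maintenance periods
s9s maintenance --list --long
```

With the node in maintenance mode, you can then stop/restart mysqld via systemctl without triggering alarms for the duration of the window.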
Very good. Thanks for that information.
A couple of thoughts come to mind:
- If using the s9s utility ('s9s node --stop --nodes=<ip-addr>'), is this the same as stopping the node from the ClusterControl dashboard, where the downed node is put into maintenance mode for 30 minutes (as observed when downing a node via the dashboard)? Or will the CLI command stop the node until it is restarted via either the dashboard or a 's9s node ...' CLI command?
- Regarding auto-recovery, what are the differences between 'enable_cluster_autorecovery' and 'enable_node_autorecovery', and when would each be enabled in the '/etc/cmon.d/cmon_x.cnf' file (my environment is MySQL 5.7 master-master replication)?
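For context, the relevant lines in my cmon configuration currently look roughly like this (a hypothetical fragment; the file name, cluster ID, and values are placeholders for my setup):

```
# /etc/cmon.d/cmon_1.cnf (fragment)
cluster_id=1
enable_cluster_autorecovery=1
enable_node_autorecovery=1
```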