Installation on an existing Cluster




  • Ashraf Sharif

    Hi Vladimir,

    You need a dedicated host for ClusterControl (we do not recommend co-hosting ClusterControl with the database cluster). Please refer to the ClusterControl Administration Guide here for details:

    You can also refer to our ClusterControl Quick Start guide to get ClusterControl installed on top of your existing database cluster:



  • Theo Smith

    Hi Gents,

    I had a similar issue with this error:

    380 - Message sent to controller

    381 - Verifying job parameters.

    382 - Verifying controller host and cmon password.

    383 - Verifying the SSH connection to

    384 - Verifying the MySQL user/password.

    385 - Found 1 nodes.

    386 - Checking the nodes that those aren't in other cluster.

    Host ( is already in an other cluster.


    I have found that when I delete an existing cluster from ClusterControl 1.2.6, it does not remove the cluster's .cnf file under /etc/cmon.d/

    All I did was remove the .cnf that pertained to my host, then re-add the cluster.

    If you still get this error, also delete the entries that pertain to your hosts from the "hosts" table in the CMON DB ;)
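    Theo's cleanup can be sketched as below. The cluster id in the filename and the SQL are assumptions, not exact ClusterControl commands, and the demo runs against a throwaway directory so it is safe to try; on a real host the directory would be /etc/cmon.d and step 2 would run against the cmon database.

```shell
# Simulate the stale-config cleanup in a temp directory (hypothetical
# cluster id 1; on a real host use /etc/cmon.d instead of "$fake").
fake=$(mktemp -d)
touch "$fake/cmon_1.cnf"        # stale .cnf left behind by the deleted cluster
rm -f "$fake/cmon_1.cnf"        # step 1: remove the leftover config
ls -A "$fake" | wc -l           # prints 0: the directory is clean again
# step 2, if the error persists (run against the CMON DB):
#   DELETE FROM hosts WHERE hostname = '<your node>';
# step 3: restart cmon and re-add the cluster
```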




  • Alexandr Svinarchuk

    Hi! I installed cmon + the CC UI on a separate node. I tuned cmon.cnf and it works fine: all data is being collected from my Galera cluster. But when I try to use the UI, I get nothing.

    I added the existing cluster via the form, and in the Job window I see just the message "Message sent to controller" and no progress.

    What did I do wrong, and how do I add an existing cluster to the UI?

  • Ashraf Sharif

    Hi Alexandr,

    Did you follow this guide to install: ?

    What operating system are you using? Also, please attach /etc/cmon.cnf here (you can hide the mysql_password value).



  • Alexandr Svinarchuk

    Hi Ashraf!

    Yes, I followed the instructions. This is my cmon.cnf:

    # cmon config file

    # id and name of cluster that this cmon agent is monitoring.
    # Must be unique for each monitored cluster, like server-id in mysql

    # os = [redhat|debian]

    # skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere

    # mode = [controller|agent|dual]

    # type = [mysqlcluster|replication|galera]

    # CMON DB config - mysql_password is for the 'cmon' user

    # location of mysql install, e.g /usr/ or /usr/local/mysql

    # hostname is the hostname of the current host

    # ndb_connectstring - comma-separated list of management servers: a:1186,b:1196

    # The user that can SSH without password to the other nodes

    # location of pidfile. The pidfile is written in /tmp/ by default

    # logfile is default to syslog.

    # collection intervals (in seconds)

    # mysql servers in the cluster. "," or " " sep. list
    mysql_server_addresses=[ip of one galera node]

    # mgm and data nodes are only used for MySQL Cluster. "," or " " sep. list

    # configuration file directory for database servers (location of config.ini, my.cnf etc)
    # on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql



  • Ashraf Sharif

    Hi Alexandr,

    If you followed the quick start guide, you don't necessarily need to tweak cmon.cnf's parameters. But when you do, please specify the following two mandatory parameters:

    hostname=[your clustercontrol IP]


    *replace the respective values accordingly

    Ensure that you can perform passwordless SSH from the ClusterControl node to the Galera node:

    ssh -i /root/.ssh/id_rsa -p [ssh_port] root@[one of the galera node's IP address]

    Once done, restart cmon and monitor the output of /var/log/cmon.log.
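    For reference, a minimal controller-side cmon.cnf for this scenario might look like the sketch below. Every value is a placeholder, and the parameter names are taken from the template quoted earlier in the thread; check the Administration Guide for the full list.

```
# /etc/cmon.cnf (sketch; all values are placeholders)
mode=controller
type=galera
os=debian
skip_name_resolve=1
hostname=[your clustercontrol IP]
mysql_password=[password of the 'cmon' user]
mysql_server_addresses=[ip of one galera node]
```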




  • Alexandr Svinarchuk

    Hi Ashraf!

    I reviewed my config:

    1) mysql_server_addresses was set; I just hid it in my post;

    2) I set hostname to the local IP.

    After that everything seemed to start working, but when I try to add the cluster I get the following error:

     177 - Getting node list from the MySQL server.

    Failed to detect galera nodes: wsrep_cluster_address=gcomm://.Choose a node with wsrep_cluster_address=gcomm://.

  •

    Hi Johan,

    I have a problem with "Add an existing cluster".


    54 - Message sent to controller

    55 - Verifying job parameters.

    56 - Verifying controller host and cmon password.

    57 - Verifying the SSH connection to

    58 - Verifying the MySQL user/password.

    59 - Getting node list from the MySQL server.

    60 - Found 1 nodes.

    61 - Checking the nodes that those aren't in other cluster.

    62 - Verifying the SSH connection to the nodes.

    63 - Check SELinux statuses

    64 - Granting the controller on the cluster.

    GRANT failed: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)


    I need your help, please.


  • Ashraf Sharif

    Hi Alexandr,

    Try specifying another node's IP address, as described on this page at step #6:
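    The "Failed to detect galera nodes: wsrep_cluster_address=gcomm://" error above means cmon queried a node that still carries the empty bootstrap address. On each node you would run `mysql -e "SHOW VARIABLES LIKE 'wsrep_cluster_address'"` and point cmon at one whose value lists the other members. A runnable sketch of that check, with the server output simulated by printf since no Galera node is available here:

```shell
# Simulated SHOW VARIABLES output; on a real node, pipe the mysql client's
# tab-separated output through the same awk filter.
printf 'wsrep_cluster_address\tgcomm://10.0.0.11,10.0.0.12\n' |
awk -F'\t' '$2 == "gcomm://" {print "bootstrap node, pick another"; next}
            {print "usable: " $2}'
# prints: usable: gcomm://10.0.0.11,10.0.0.12
```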



  • Ashraf Sharif

    Hi kid_pig,

    Is the MySQL server running on that node? Please verify it with these commands:

    $ service mysql status

    $ ps -ef | grep mysql

    $ mysql -uroot -p'[mysql root password]' -hlocalhost -P[mysql port]

    $ mysql -uroot -p'[mysql root password]' -h127.0.0.1 -P[mysql port]

    Make sure all the commands above run correctly (without errors). Once done, retry adding the cluster.



  •

    Hi Ashraf,

    Thank you for the reply. The MySQL server is Galera.


    root@node1:~# service mysql status

    * MySQL is running (PID: 6270)

     root@node1:~# ps -ef | grep mysql

    root 5624 1 0 Jul10 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe

    mysql 6270 5624 0 Jul10 ? 00:10:52 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/ --socket=/var/run/mysqld/mysqld.sock --port=3306 --wsrep_start_position=9bae18ae-074d-11e4-0800-e1c3cd34a375:0

    root 13779 12907 0 14:50 pts/0 00:00:00 grep --color=auto mysql

     mysql -uroot -p123456 -h127.0.0.1 -P3306

    Welcome to the MySQL monitor. Commands end with ; or \g.

    Your MySQL connection id is 87

    Server version: 5.5.28 Source distribution, wsrep_23.7.r3829

     root@node1:~# mysql -uroot -hlocalhost -p3306

    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) .

    root@node1:~# cat /etc/hosts localhost.localdomain localhost

    fe00::0 ip6-localnet

    ff00::0 ip6-mcastprefix

    ff02::1 ip6-allnodes

    ff02::2 ip6-allrouters clustercontrol node2 node3 node1


    I think I need to fix localhost, but I don't know how to check it.

    Regards Ashraf



  • Ashraf Sharif

    Hi kid_pig,

    Your /etc/hosts is fine. The error indicates that CC couldn't connect to MySQL through the socket. You specified the wrong command, though:

    $ mysql -uroot -hlocalhost -p3306

    It is supposed to be:

    $ mysql -uroot -p123456 -hlocalhost -P3306

    I would advise you to check the MySQL error log at /var/log/mysql/error.log and make sure MySQL is able to connect through the socket. Then try again to add the Galera cluster.
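    One more trap worth flagging for anyone landing on this 1045 error: in the mysql client, lowercase -p is the password flag and uppercase -P is the port flag, so a trailing -p3306 is silently taken as the password "3306". A tiny getopts sketch (an illustration, not the mysql client's actual parser) showing how such attached flags parse:

```shell
# Mimic the -p / -P distinction with getopts.
parse() {
  OPTIND=1                      # reset so the function can be called repeatedly
  while getopts "p:P:" opt; do
    case $opt in
      p) echo "password=$OPTARG" ;;
      P) echo "port=$OPTARG" ;;
    esac
  done
}
parse -p3306                    # prints: password=3306
parse -P3306                    # prints: port=3306
```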




  • Vishnu Rao

    Hi Johan,

    Does the cmon UI (limited account) have features enabled for MySQL Cluster, or is that paid?

    I set up my own NDB cluster without the Configurator.

    I set up cmon; it connected to my management node and all, but the UI does not show it anywhere.

    Do let me know whether MySQL Cluster monitoring via cmon is free, or whether I need to purchase a license.

    Thanking you,

    with regards,

    ch Vishnu





  • Severalnines Support

    Greetings Vishnu!

    You should be able to monitor your MySQL Cluster as part of the community edition.

    Can you explain the steps you took?

    Did you manually install ClusterControl on top of your existing MySQL Cluster using the instructions in this article?

    Or did you do the automated install using the Bootstrap script?

  •

    Thanks everyone, I have finished the project. I reinstalled the MySQL cluster and the ClusterControl server, and it is running now. Regards!

  • Rene Loef

    I've successfully installed ClusterControl on a dedicated host and have added the cluster to the control server.

    I am also able to see the controller node and the mysql nodes, but I'm unable to see the management nodes (also installed on the mysql nodes) and the data nodes. I used the installer script (s9s_bootstrap --install).

    The CMON log is showing only a few errors:

    Jul 24 06:30:10 : (ERROR) Can't get a connected management server (which repeats every hour)


    Jul 24 07:30:10 : (ERROR) Creating processlist failed, failed to get Datadir, no connected ndb_mgmd was found. (also every hour)

    Any pointers would be appreciated..

  •

    Hi Rene,

    did you add


    to /etc/cmon.cnf?

    Where are the ndb binaries located?

    You may have to set, e.g:


    Do you have a firewall between the controller and the mysql cluster, blocking port 1186?
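    The suggestions above can be sketched as a config fragment; the IPs, ports, and path are placeholders, and the parameter names (ndb_connectstring, mysql_basedir) are the ones that come up elsewhere in this thread, so verify them against your ClusterControl version:

```
# /etc/cmon.cnf additions for MySQL Cluster monitoring (all values are placeholders)
ndb_connectstring=10.0.0.21:1186,10.0.0.22:1186
mysql_basedir=/usr/local/mysql/
```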




  • Rene Loef

    Hi Johan,

    the ndb_connectstring is filled with the IP addresses of the correct hosts (and ports). 

    I can also connect to those two hosts from the Controller server using telnet ipaddress port, so the TCP connection/firewall looks to be configured correctly.

    The controller server has the normal MySQL installed from Debian, not the cluster mysql. The controller server currently has no ndb binaries.

    Do I need to set the mysql_basedir to the location of the mysql cluster basedir? Or do I need to copy over the cluster files to the controller server?

  • Rene Loef

    I reinstalled the cluster using the and it seems to be working fine now.

