Installation on an existing Cluster

Comments

  • Ashraf Sharif

    Hi Vladimir,

You need a dedicated host for ClusterControl (we do not recommend co-hosting ClusterControl with the database cluster). Please refer to the ClusterControl Administration Guide here for details:

    http://www.severalnines.com/docs/clustercontrol-administration-guide

You can also refer to our ClusterControl Quick Start Guide to get ClusterControl installed on top of your existing database cluster:

    http://www.severalnines.com/docs/clustercontrol-quick-start-guide

    Regards,

    Ashraf

  • Theo Smith

    Hi Gents,

    I had a similar issue with this error:

    380 - Message sent to controller

    381 - Verifying job parameters.

    382 - Verifying controller host and cmon password.

    383 - Verifying the SSH connection to 10.10.14.10.

    384 - Verifying the MySQL user/password.

    385 - Found 1 nodes.

    386 - Checking the nodes that those aren't in other cluster.

    Host (10.10.14.19) is already in an other cluster.

     

I have found that when I delete an existing cluster from ClusterControl (version 1.2.6), it does not remove the cluster's .cnf file that cmon keeps in /etc/cmon.d/.

All I did was remove the .cnf file that pertained to my host and re-add the cluster.

If you still get this error, delete the entries that pertain to your hosts from the "hosts" table in the CMON DB ;) (see the sketch below)
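For anyone hitting the same error, a minimal sketch of that cleanup, assuming the stale file is named cmon_1.cnf and the hosts table keys on a hostname column (both are assumptions; check your /etc/cmon.d/ listing and the table schema first):

# list the per-cluster cmon configs and remove the stale one
ls /etc/cmon.d/
rm /etc/cmon.d/cmon_1.cnf    # hypothetical file name; pick the one for the deleted cluster

# if the error persists, remove the stale rows from the CMON DB
mysql -ucmon -p cmon -e "DELETE FROM hosts WHERE hostname = '10.10.14.19';"

# restart cmon so it picks up the change
service cmon restart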

    Regards,

    T

     

  • Alexandr Svinarchuk

Hi! I just installed cmon + the ClusterControl UI on a separate node. I tuned cmon.cnf and it works fine; all data is being collected from my Galera cluster. But when I try to use the UI, I get nothing.

I added the existing cluster via the form, and in the Job window I see only the message "Message sent to controller" and no progress.

What did I do wrong, and how do I add an existing cluster to the UI?

  • Ashraf Sharif

    Hi Alexandr,

    Did you follow this guide to install: http://www.severalnines.com/docs/clustercontrol-quick-start-guide/existing-cluster/clustercontrol-ui ?

Which operating system did you use? Also, please attach /etc/cmon.cnf here (you can hide the mysql_password value).

    Regards,

    Ashraf

  • Alexandr Svinarchuk

    Hi Ashraf !

Yes, I followed those instructions. This is my cmon.cnf:

#cmon config file
# id and name of cluster that this cmon agent is monitoring.
# Must be unique for each monitored cluster, like server-id in mysql
cluster_id=1
name=cluster_1
agentless=1
# os = [redhat|debian]
os=redhat
# skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere
skip_name_resolve=1
# mode = [controller|agent|dual]
mode=controller
# type = [mysqlcluster|replication|galera]
type=galera
# CMON DB config - mysql_password is for the 'cmon' user
mysql_port=3306
mysql_hostname=127.0.0.1
mysql_password=*****
local_mysql_port=*****
local_mysql_password=*****
# location of mysql install, e.g /usr/ or /usr/local/mysql
mysql_basedir=/usr
# hostname is the hostname of the current host
hostname=
# ndb_connectstring - comma-separated list of management servers: a:1186,b:1196
ndb_connectstring=127.0.0.1
# The user that can SSH without password to the other nodes
os_user=root
# location of cmon.pid file. The pidfile is written in /tmp/ by default
pidfile=/var/run/
# logfile is default to syslog.
logfile=/var/log/cmon.log
# collection intervals (in seconds)
db_stats_collection_interval=30
host_stats_collection_interval=30
# mysql servers in the cluster. "," or " " sep. list
mysql_server_addresses=[ip of one galera node]
# mgm and data nodes are only used for MySQL Cluster. "," or " " sep. list
datanode_addresses=
mgmnode_addresses=
wwwroot=/var/www/html
# configuration file directory for database servers (location of config.ini, my.cnf etc)
# on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql
#db_configdir=
ssh_port=***
ssh_identity=/root/.ssh/id_rsa

  • Ashraf Sharif

    Hi Alexandr,

If you followed the quick start guide, you don't necessarily need to tweak cmon.cnf's parameters. But when you do, please specify the following two mandatory parameters:

    hostname=[your clustercontrol IP]

    mysql_server_addresses='[dbhost_ip1,dbhost_ip2,dbhost_ip3]'

    *replace the respective values accordingly

Ensure that you can perform passwordless SSH from the ClusterControl node to a Galera node:

    ssh -i /root/.ssh/id_rsa -p [ssh_port] root@[one of the galera node's IP address]
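If that login still prompts for a password, passwordless SSH is typically set up like this (a sketch; the key path is the default one from your config, the IP and port are placeholders):

# generate a key pair on the ClusterControl node, if one does not exist yet
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""

# push the public key to each database node (repeat per node)
ssh-copy-id -i /root/.ssh/id_rsa -p [ssh_port] root@[galera node IP]

# verify: this must log in without any password prompt
ssh -i /root/.ssh/id_rsa -p [ssh_port] root@[galera node IP] "echo OK"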

    Once done, restart cmon and monitor the output of /var/log/cmon.log.

    Regards,

    Ashraf

     

  • Alexandr Svinarchuk

Hi, Ashraf!

I reviewed my config:

1) mysql_server_addresses was set; I just hid it in my post;

2) I set hostname to the local IP.

After that everything seemed to start working, but when I try to add the cluster I get the following error:

177 - Getting node list from the MySQL server.

Failed to detect galera nodes: wsrep_cluster_address=gcomm://.Choose a node with wsrep_cluster_address=gcomm://.

  • kid_pig

Hi Johan,

I have a problem with "Add an existing cluster".

     

    54 - Message sent to controller

    55 - Verifying job parameters.

    56 - Verifying controller host and cmon password.

    57 - Verifying the SSH connection to 172.16.200.27.

    58 - Verifying the MySQL user/password.

    59 - Getting node list from the MySQL server.

    60 - Found 1 nodes.

    61 - Checking the nodes that those aren't in other cluster.

    62 - Verifying the SSH connection to the nodes.

    63 - Check SELinux statuses

    64 - Granting the controller on the cluster.

    GRANT failed: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

     

I need your help, please.

     

  • Ashraf Sharif

    Hi Alexandr,

Try specifying another node's IP address, as described in step #6 on this page:

    http://www.severalnines.com/docs/clustercontrol-quick-start-guide/existing-cluster/clustercontrol-ui
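In short, ClusterControl needs to talk to a node whose wsrep_cluster_address is not the bare gcomm:// bootstrap address. You can check what each node reports with, e.g.:

# on each Galera node, inspect the configured cluster address
mysql -uroot -p -e "SHOW VARIABLES LIKE 'wsrep_cluster_address';"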

    Regards,

    Ashraf

  • Ashraf Sharif

    Hi kid_pig,

Is the MySQL server running on 172.16.200.27? Please verify it with these commands on 172.16.200.27:

$ service mysql status

$ ps -ef | grep mysql

$ mysql -uroot -p'[mysql root password]' -hlocalhost -P[mysql port]

$ mysql -uroot -p'[mysql root password]' -h127.0.0.1 -P[mysql port]

(Note the uppercase -P for the port; lowercase -p is the password option.)

Make sure all of the commands above complete without error. Once done, retry adding the cluster.

    Regards,

    Ashraf

  • kid_pig

Hi Ashraf,

Thank you for the reply.

172.16.200.27 is a MySQL Galera server.

On 172.16.200.27:

    root@node1:~# service mysql status

    * MySQL is running (PID: 6270)

     root@node1:~# ps -ef | grep mysql

    root 5624 1 0 Jul10 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe

    mysql 6270 5624 0 Jul10 ? 00:10:52 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306 --wsrep_start_position=9bae18ae-074d-11e4-0800-e1c3cd34a375:0

    root 13779 12907 0 14:50 pts/0 00:00:00 grep --color=auto mysql

     mysql -uroot -p123456 -h127.0.0.1 -P3306

    Welcome to the MySQL monitor. Commands end with ; or \g.

    Your MySQL connection id is 87

    Server version: 5.5.28 Source distribution, wsrep_23.7.r3829

     root@node1:~# mysql -uroot -hlocalhost -p3306

    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) .

    root@node1:~# cat /etc/hosts

    127.0.0.1 localhost.localdomain localhost

    fe00::0 ip6-localnet

    ff00::0 ip6-mcastprefix

    ff02::1 ip6-allnodes

    ff02::2 ip6-allrouters

    172.16.200.30 clustercontrol

    172.16.200.28 node2

    172.16.200.29 node3

    172.16.200.27 node1

     

I think I need to fix localhost, but I don't know how to check it.

Regards, Ashraf

     

     

  • Ashraf Sharif

    Hi kid_pig,

Your /etc/hosts is fine. The error indicates that ClusterControl couldn't connect to MySQL through the socket. You specified the wrong command, though:

    $ mysql -uroot -hlocalhost -p3306

It is supposed to be:

$ mysql -uroot -p123456 -hlocalhost -P3306

(note the uppercase -P for the port; with a lowercase -p, the 3306 is read as a password, which is why you got "Access denied")

I would advise you to check the MySQL error log at /var/log/mysql/error.log and make sure MySQL is accepting connections through the socket. Then try adding the Galera cluster again.
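A quick way to confirm that the socket exists and accepts connections (a sketch; the socket path and password are taken from your output above):

# confirm the socket file is where the client expects it
ls -l /var/run/mysqld/mysqld.sock

# connect explicitly through the socket
mysql -uroot -p123456 --socket=/var/run/mysqld/mysqld.sock -e "SELECT 1;"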

    Regards,

    Ashraf

     

  • Vishnu Rao

Hi Johan,

Does the cmon UI (limited account) have features enabled for MySQL Cluster, or is that a paid feature?

I set up my own NDB cluster without the Configurator.

I set up cmon... it connected to my management node and all, but the UI does not show it anywhere.

Please let me know whether MySQL Cluster monitoring via cmon is free, or whether I need to purchase a license.

Thanking you,

with regards,

ch Vishnu

  • Severalnines Support

    Greetings Vishnu!

    You should be able to monitor your MySQL Cluster as part of the community edition.

    Can you explain the steps you took?

Did you manually install ClusterControl on top of your existing MySQL Cluster using the instructions in this article?

    Or did you do the automated install using the Bootstrap script?

    http://support.severalnines.com/entries/21952156-Get-Your-Database-Cluster-Under-ClusterControl-

  • kid_pig

Thanks everyone, I have finished the project. I reinstalled the MySQL cluster & the ClusterControl server, and now it is running. Regards!

  • Rene Loef

I've successfully installed ClusterControl on a dedicated host and have added the cluster to the control server.

I am also able to see the controller node and the MySQL nodes, but I'm unable to see the management nodes (also installed on the MySQL nodes) or the data nodes. I used the installer script (s9s_bootstrap --install).

The CMON log is showing only a few errors:

    Jul 24 06:30:10 : (ERROR) Can't get a connected management server (which repeats every hour)

    and

    Jul 24 07:30:10 : (ERROR) Creating processlist failed, failed to get Datadir, no connected ndb_mgmd was found. (also every hour)

Any pointers would be appreciated.

  • Johan

    Hi Rene,

    did you add

    ndb_connectstring="<mgmhost1>,<mgmhost2>"

    to /etc/cmon.cnf?

    Where are the ndb binaries located?

You may have to set, e.g.:

    mysql_basedir=/opt/mysqlcluster/

    Do you have a firewall between the controller and the mysql cluster, blocking port 1186?
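A quick connectivity check from the controller (host names are placeholders; 1186 is the default ndb_mgmd port):

# verify the controller can reach ndb_mgmd on each management host
telnet [mgmhost1] 1186

# if the MySQL Cluster client tools are installed on the controller,
# this queries the management server for the cluster status directly
ndb_mgm -c "[mgmhost1]:1186,[mgmhost2]:1186" -e show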

     

    BR

    johan

  • Rene Loef

    Hi Johan,

The ndb_connectstring is filled with the IP addresses (and ports) of the correct hosts.

    I can also connect to those two hosts from the Controller server using telnet ipaddress port, so the TCP connection/firewall looks to be configured correctly.

    The controller server has the normal MySQL installed from Debian, not the cluster mysql. The controller server currently has no ndb binaries.

    Do I need to set the mysql_basedir to the location of the mysql cluster basedir? Or do I need to copy over the cluster files to the controller server?

  • Rene Loef

    Hi,

I reinstalled the cluster using the Configurator (http://www.severalnines.com/cluster-configurator/) and it seems to be working fine now.

    René
