Multiple monitoring environments.

Comments (21)

  • Johan

    Hi George,

    We can indeed do this, but it requires some "manual" tweaking.

    What needs to be done is to have:

    1) cmon 1.1.27

    2) For each cluster you will have one CMON controller running (on each node in the cluster you will have agents). The controller processes can run on the same box if:
     - one controller will monitor one cluster
     - each controller will have its own /etc/cmon.cnf file, e.g. cmon.cnf.<clusterid> (one for each cluster id), with a unique cluster id and a unique log file, e.g. log-file=/var/log/cmon_<clusterid>
     - each controller must have its own cmon init.d script, e.g. /etc/init.d/cmon_<clusterid>; each init.d script must use a different pid file and point to its own cmon.cnf.<clusterid> file.

    Currently the cmon database is shared between the different clusters, so all clusters will be logging into the database 'cmon'.
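
    To make that concrete, here is a minimal sketch of what the per-cluster files could look like (a sketch only: the cmon.cnf option names follow the notes above, while the init-script variable names are assumptions and may differ in the stock script):

    # /etc/cmon.cnf.2 -- controller config for cluster id 2; all other
    # options stay as in your existing /etc/cmon.cnf
    cluster_id=2
    log-file=/var/log/cmon_2

    # /etc/init.d/cmon_2 -- copy of the stock cmon init script, changed to
    # use a unique pid file and point at the cluster-specific config
    PIDFILE=/var/run/cmon_2.pid
    CMON_CONFIG=/etc/cmon.cnf.2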

    Let me know if this helps; otherwise, I'd be more than happy to help you get this working.

    Best regards,
    Johan

     


     

       

  • George Neill

    I was hoping you wouldn't say that! :)  OK, we'll try that. It might be worthwhile (maybe overkill?) for you guys to look at embedding Lua for the CMON configuration files. I'll report back when we have it figured out.

     

  • George Neill

    Johan,

    Thanks for your help today. We followed your instructions but were NOT able to get both environments under the same web UI. The UI seems to merge all of the information together when we click the "View Cluster" link for either cluster. For now we have decided to just clone our current VM and run two separate instances, but I would still like to be able to host this on one server!

    Later,

    George

  • Severalnines Support

    Hi George,

    I believe we can resolve this offline. It takes some tweaking; we will have to make some changes to the CMON configuration to enable this.

    If anybody else has any such requests, please ping us again and we'll prioritize this.

    Thanks.

    Vinay

  • Patrick Zoblisein

    Hi - were the tweaks for this ever published?

    I'm attempting to place two separate clusters into the same web UI and am getting the same results as the original poster (cmon version 1.1.33). I attempted Johan's steps by creating a unique /etc/init.d/cmon02 on the controller, using a unique cmon02.cnf.

    Cluster 01 in the web UI shows green for all of its components but also lists the components for Cluster 02 in red.

    Cluster 02 in the web UI shows green for all of its components but also lists the components for Cluster 01 in red.

    Thanks

    Patrick

  • Johan

    Hi Patrick,

    Is this MySQL Cluster or Galera?

    Best regards,

    Johan

  • Patrick Zoblisein

    Hi Johan - this is for MySQL Cluster (NDB).

     

    Cluster A (this is a DEV environment):

    Host 1 has two data nodes, a management node and an SQL node.

    Host 2 has two data nodes.

     

    Cluster B

    Host 3 has two data nodes, a management node and an SQL node.

    Host 4 has two data nodes.

     

    Host 5 is where I've deployed the cmon controller and am attempting to monitor both clusters via the same Web UI.

    Controller 1 - /usr/sbin/cmon --config-file=/etc/cmon.cnf  (cluster_id=1 / name = NDB_DBS01)

    Controller 2 - /usr/sbin/cmon --config-file=/etc/cmon-dbs02.cnf (cluster_id=2 / name = NDB_DBS02)

    [root@ data]# ls -l /etc/init.d/cmon*

    -rwxr-xr-x 1 root root 3174 Aug 29 13:20 /etc/init.d/cmon
    -rwxr-xr-x 1 root root 3256 Jan 17 20:50 /etc/init.d/cmon-dbs02


    Please let me know what other specific information you need - I'll be happy to provide it.

    Thanks

    Patrick

  • Johan

    Hi Patrick,

    Thanks for the information. I will write down what you need to do (and give you sample cmon init.d and cmon.cnf files), hopefully today.

    If it had been Galera, we would have another option for doing this.

    Best regards

    Johan

  • Patrick Zoblisein

    Thanks Johan - looking forward to seeing the samples.

    Patrick

  • Johan

    Hi Patrick,

    I am sorry for the late reply.

    Here is a package; you will have to fill out the hostnames etc. in the cmon.cnf.

    It requires that you upgrade to 1.1.36: http://support.severalnines.com/entries/21095371-cmon-1-1-36-released-upgrade-instructions

    Then on the controller:

    * Install cmon_2.cnf.controller as /etc/cmon_2.cnf
    * Install cmon_initd_2 as /etc/init.d/cmon_2
      (Since I don't know if you use RPMs or the tarball version of cmon, update
       SBINDIR=/usr/local/cmon/sbin/
      accordingly. If the init.d script does not work at all (I took it from an Ubuntu system), let me know or patch it yourself.)

    On the agents:
    * Install cmon.cnf.agent as /etc/cmon.cnf
      (It is already prepped for MySQL Cluster and cluster id 2, but you need to change hostnames and paths.)
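
    For reference, the install steps could look like this (a sketch assuming the tarball layout; adjust SBINDIR and paths for RPM installs):

    # On the controller
    $ cp cmon_2.cnf.controller /etc/cmon_2.cnf     # fill in hostnames first
    $ cp cmon_initd_2 /etc/init.d/cmon_2
    $ chmod +x /etc/init.d/cmon_2
    $ /etc/init.d/cmon_2 start

    # On each agent host in cluster 2
    $ cp cmon.cnf.agent /etc/cmon.cnf              # edit hostnames and paths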

    Let me know what happens.

    Best regards,

    Johan

  • Patrick Zoblisein

    Hi Johan - thanks very much for taking the time to try to resolve this - my apologies for taking so long to come back to this.

     

    Sadly, I've not seen any change in behavior.

     

    I have upgraded cmon from 1.1.33 to 1.1.36, followed your instructions above, and deployed the cmon_2 configuration as described. There is no issue with running two cmon controllers from the same host.

     

    The issue of clusters being "cross"-displayed in a single web UI persists. It's as if the cid (index.php?id=2&action=view) is being ignored when querying the cmon db.

     

    Thanks

    Patrick

  • Patrick Zoblisein

    Hi Johan - I think I may have found the SQL which is causing this - can you review and confirm/deny when you have a chance?

     

    mysql> SELECT n.nodeid, n.host, n.nodegroup, n.uptime, n.start_phase,n.status, n.start_mode, n.failed_restarts, n.startok, n.version, n.report_ts, (now()-n.report_ts)>60 as report_warning,n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=1 AND n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='NDBD' order by n.nodeid, n.status;

     

    For the following output, though, I added n.cid to demonstrate which nodes belong to which cluster:

     

    mysql> SELECT n.cid, n.nodeid, n.host, n.nodegroup, n.uptime, n.start_phase,n.status, n.start_mode, n.failed_restarts, n.startok, n.version, n.report_ts, (now()-n.report_ts)>60 as report_warning,n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=1 AND n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='NDBD' order by n.nodeid, n.status;
    +-----+--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    | cid | nodeid | host      | nodegroup | uptime | start_phase | status  | start_mode | failed_restarts | startok | version | report_ts           | report_warning | node_type | cmon_status |
    +-----+--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    | 1   | 20     | x.x.x.151 | 0         | 413773 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 1   | 21     | x.x.x.152 | 0         | 413697 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 1   | 22     | x.x.x.151 | 1         | 413772 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 1   | 23     | x.x.x.152 | 1         | 413694 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 2   | 40     | x.x.x.153 | 0         | 2961   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    | 2   | 41     | x.x.x.154 | 0         | 2958   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    | 2   | 42     | x.x.x.153 | 1         | 2960   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    | 2   | 43     | x.x.x.154 | 1         | 2957   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    +-----+--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    8 rows in set (0.00 sec)

    Removing `ON n.cid=x AND` from the JOIN and adding n.cid to the WHERE clause gives me the results I was originally expecting. (In a LEFT JOIN, a predicate such as n.cid=1 in the ON clause never filters rows out of the left table; it only decides which hosts rows match, so every NDBD row is returned regardless of cluster. The cid restriction has to live in the WHERE clause.)

     

    mysql> SELECT n.nodeid, n.host, n.nodegroup, n.uptime, n.start_phase,n.status, n.start_mode, n.failed_restarts, n.startok, n.version, n.report_ts, (now()-n.report_ts)>60 as report_warning,n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='NDBD' and n.cid=1 order by n.nodeid, n.status;
    +--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    | nodeid | host      | nodegroup | uptime | start_phase | status  | start_mode | failed_restarts | startok | version | report_ts           | report_warning | node_type | cmon_status |
    +--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    | 20     | x.x.x.151 | 0         | 413773 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 21     | x.x.x.152 | 0         | 413697 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 22     | x.x.x.151 | 1         | 413772 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    | 23     | x.x.x.152 | 1         | 413694 | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:12:53 | 1              | NDBD      | 0           |
    +--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    4 rows in set (0.00 sec)

    mysql> SELECT n.nodeid, n.host, n.nodegroup, n.uptime, n.start_phase,n.status, n.start_mode, n.failed_restarts, n.startok, n.version, n.report_ts, (now()-n.report_ts)>60 as report_warning,n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='NDBD' and n.cid=2 order by n.nodeid, n.status;
    +--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    | nodeid | host      | nodegroup | uptime | start_phase | status  | start_mode | failed_restarts | startok | version | report_ts           | report_warning | node_type | cmon_status |
    +--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    | 40     | x.x.x.153 | 0         | 2961   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    | 41     | x.x.x.154 | 0         | 2958   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    | 42     | x.x.x.153 | 1         | 2960   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    | 43     | x.x.x.154 | 1         | 2957   | 0           | STARTED | NR         | 0               | 0       | 7.2.10  | 2013-02-12 15:13:00 | 1              | NDBD      | 0           |
    +--------+-----------+-----------+--------+-------------+---------+------------+-----------------+---------+---------+---------------------+----------------+-----------+-------------+
    4 rows in set (0.00 sec)

     

    Thanks

    Patrick

  • Patrick Zoblisein

    Hi Johan - I made the following changes to ndb_dao.php on my 1.1.36 controller server, which seem to have resolved my issue:

     

    ++++$query="SELECT n.nodeid, n.host, n.nodegroup, n.uptime, n.start_phase,n.status, n.start_mode, n.failed_restarts, n.startok, n.version, n.report_ts, (now()-n.report_ts)>60 as report_warning,n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='NDBD' and n.cid=$cid order by n.nodeid, n.status";

     

    ----$query="SELECT n.nodeid, n.host, n.nodegroup, n.uptime, n.start_phase,n.status, n.start_mode, n.failed_restarts, n.startok, n.version, n.report_ts, (now()-n.report_ts)>60 as report_warning,n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=$cid AND n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='NDBD' order by n.nodeid, n.status";

     

     

    ++++function get_api_record($cid,$type)

    ---- function get_api_record($id,$type)

     

    ++++$result=mysql_query("SELECT n.nodeid, n.host, n.status, n.version, n.report_ts,(now()-n.report_ts)>300 as report_warning, n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=h.cid AND n.hostid=h.id WHERE n.node_type='". $type .  "' and n.cid=$cid order by nodeid, status");

    ----$result=mysql_query("SELECT n.nodeid, n.host, n.status, n.version, n.report_ts,(now()-n.report_ts)>300 as report_warning, n.node_type, ifnull((now()-h.cmon_status)<60,0) as cmon_status FROM node_state n LEFT JOIN hosts h ON n.cid=$id AND n.cid=h.cid AND n.hostid=h.id WHERE node_type='". $type .  "' order by nodeid, status");

     

    Thanks

    Patrick

  • Johan

    Patrick, 

    That is excellent! Thanks for the patches. They are going in.

    Best regards,

    Johan

  • Jan Ressmeyer

    Hello,

    My multiple-cluster configuration with only one CMON controller hangs because I use only IP addresses instead of hostnames in the configuration. E.g., the following query must return exactly 1 row:

     

    mysql> select * from hosts where hostname = (SELECT value FROM cmon_configuration WHERE param = 'CMON_HOSTNAME1' AND cid = 1);

    +----+-----+----------+-------------+-----------+------------+---------------------+-----+--------------+---------------------+
    | id | cid | hostname | ping_status | ping_time | ip         | report_ts           | msg | cmon_version | cmon_status         |
    +----+-----+----------+-------------+-----------+------------+---------------------+-----+--------------+---------------------+
    | 1  | 1   | CMON-1   | 1           | 23        | 10.67.1.26 | 2014-01-13 16:27:20 |     | 1.2.3        | 2014-01-13 16:27:20 |
    +----+-----+----------+-------------+-----------+------------+---------------------+-----+--------------+---------------------+
    1 row in set (0.00 sec)

    Otherwise the PHP code hangs, because the cid cannot be looked up from the hosts table.

     

     

  • Ashraf Sharif

    Hi Jan,

    It is not supported in 1.2.3; you should upgrade to 1.2.4d.

    Basically you need to create the directory /etc/cmon.d/ and put the two clusters' cmon.cnf files in that directory (and not have one in /etc/cmon.cnf). We are happy to help make this work for you if you are interested.
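
    For illustration, the resulting layout could look like this (the file names here are assumptions; the point is one cmon.cnf per cluster, each with its own cluster_id):

    $ mkdir -p /etc/cmon.d
    $ ls /etc/cmon.d
    cmon_1.cnf    # contains cluster_id=1
    cmon_2.cnf    # contains cluster_id=2
    $ service cmon restart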

    Regards,

    Ashraf

  • Dawid Fijoł

    Hello,

    Just wanted to know if it is possible to have production and dev clusters monitored in one ClusterControl in the newest version?

    ClusterControl UI version: 1.2.10.148b3cf
    ClusterControl CMON version: 1.2.10.741
    CMON API version: 1.2.10.4103a2a

  • Ashraf Sharif

    Hi Dawid,

    Yes, it has been supported since v1.2.6. Use the 'Add Existing Server/Cluster' feature to add multiple clusters to ClusterControl.

     

    Regards,

    Ashraf

  • Dawid Fijoł

    Hi Ashraf,

     

    Yes, but when I do it the way you said, I'm only able to add MySQL nodes (API nodes). What about data and management nodes? I'm talking here about MySQL Cluster (NDB).

    Regards,

    David (Dawid)

  • Ashraf Sharif

    Hi David,

    Adding an existing MySQL Cluster (NDB) is not supported from the ClusterControl UI, but it is possible to use our bootstrap script to add MySQL Cluster into ClusterControl. Instructions at: http://www.severalnines.com/ccg/bootstrap-script-mysql-clustermongodb-sharded-cluster

    The bootstrap script will add only one MySQL Cluster, so you may use it to add the production cluster. Once imported, you will have to manually add the dev cluster as cluster ID 2. To achieve this, run the following steps on the ClusterControl node:

    1. Copy CMON configuration file from cluster ID 1 as a template for cluster ID 2:

    $ mkdir /etc/cmon.d/
    $ cp /etc/cmon.cnf /etc/cmon.d/cmon_2.cnf

     

    2. Update the following lines inside /etc/cmon.d/cmon_2.cnf:

     

    cluster_id=2
    logfile=/var/log/cmon_2.log
    mysql_server_addresses=<your dev SQL nodes in comma separated list>
    datanode_addresses=<your dev data nodes in comma separated list>
    mgmnode_addresses=<your dev management nodes in comma separated list>
    ndb_connectstring=<NDB connection string of the cluster>

    ** Details on the configuration options can be found here: http://www.severalnines.com/ccg/configuration-file

    3. Set up passwordless SSH to all nodes in the dev cluster using the following command:

    $ ssh-copy-id <IP address>

    4. Restart CMON service to apply the changes:

    $ service cmon restart

    At this point, ClusterControl should start provisioning all nodes defined in the configuration files (both /etc/cmon.cnf and /etc/cmon.d/cmon_2.cnf). I would recommend monitoring the output of the CMON controller for cluster ID 2 at /var/log/cmon_2.log. If everything is configured correctly, you should see the second cluster listed in the ClusterControl UI under the Database Clusters list.
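
    As a quick check of steps 3 and 4, something like the following should do (the IP addresses and the root user are placeholders, not your actual values):

    $ for ip in 10.0.0.13 10.0.0.14; do ssh-copy-id root@$ip; done    # dev cluster hosts
    $ service cmon restart
    $ tail -f /var/log/cmon_2.log    # watch cluster ID 2 being provisioned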

    Regards,

    Ashraf

  • Dawid Fijoł

    Hi Ashraf,

    This worked without any problem. Really straightforward to get this done.

    Thanks for your help!

    David
