Single MariaDB database monitor

Comments (5)

  • Ashraf Sharif (Official comment)

    Hi Dimas,

Attach to the ClusterControl container and edit the following line in /etc/cmon.d/cmon.cnf:

    hostname=172.17.0.2

Restart the cmon service with supervisorctl to load the change:

    $ supervisorctl restart cmon

    Then, try again to import the server from the ClusterControl UI.
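
    If you are not already inside the container, the whole sequence from the Docker host looks roughly like this (a sketch; the container name clustercontrol is an assumption, substitute your own):

    $ docker exec -it clustercontrol bash
    $ vi /etc/cmon.d/cmon.cnf    # set hostname=172.17.0.2
    $ supervisorctl restart cmon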

    Regards,
    Ashraf

  • Dimas

    Log:

[20:56:44]:Zeus: 'CREATE USER 'cmon'@'2276041346a4' IDENTIFIED BY 'xxxxx'' failed: ERROR 1133 (28000) at line 1: Can't find any matching row in the user table
    [20:56:43]:Zeus: Granting user cmon.
    [20:56:43]:Zeus:3306: Node is writable.
    [20:56:43]:Granting controller (2276041346a4).
    [20:56:42]:Detected that skip_name_resolve is used on the target server(s).
    [20:56:42]:Zeus:3306: skip_name_resolve=skip_name_resolve ON
    [20:56:42]:Check SELinux statuses.
    [20:56:42]:Zeus: Checking ssh/sudo with credentials ssh_cred_job_6656.
    [20:56:41]:Checking that nodes are not in another cluster.
    [20:56:41]:Found in total 1 nodes.
    [20:56:41]:Found node: 'Zeus'
    [20:56:41]:Zeus:3306: monitored_mysql_root_password is not set, please set it later in the generated cmon.cnf
    [20:56:41]:Zeus:3306: Sanity check...
    [20:56:41]:Zeus:3306: Verifying the MySQL user/password.
    [20:56:40]:Zeus: Checking ssh/sudo with credentials ssh_cred_job_6656.
    [20:56:40]:Adding existing MySQL cluster.
    [20:56:40]:Generating configuration for cluster Zeus.
    [20:56:38]:Adding cluster with type 'replication', vendor 'percona'.
Job spec: {
      "command": "add_cluster",
      "group_id": 1,
      "group_name": "admins",
      "user_id": 1,
      "user_name": "myser",
      "job_data": {
        "api_id": 1,
        "basedir": "/usr",
        "cluster_name": "Zeus",
        "cluster_type": "replication",
        "company_id": "1",
        "config_template": "my.cnf.gtid_replication",
        "datadir": "/var/lib/mysql",
        "db_password": "password",
        "db_user": "user",
        "disable_firewall": true,
        "disable_selinux": true,
        "enable_cluster_autorecovery": false,
        "enable_information_schema_queries": true,
        "enable_node_autorecovery": false,
        "generate_token": true,
        "install_software": true,
        "monitored_mysql_port": "3306",
        "port": "3306",
        "ssh_keyfile": "/root/.ssh/id_rsa",
        "ssh_port": "22",
        "ssh_user": "root",
        "sudo_password": "",
        "tag": "",
        "user_id": 1,
        "vendor": "percona",
        "version": "5.7",
        "nodes": [
          {
            "hostname": "Zeus",
            "hostname_data": "Zeus",
            "hostname_internal": "Zeus",
            "port": null
          }
        ]
      }
    }

  • Ashraf Sharif

    Hi Dimas,

The following line means ClusterControl has to use the IP address (instead of the hostname) when creating the user:

[20:56:42]:Detected that skip_name_resolve is used on the target server(s).

However, when ClusterControl tried to create the "cmon" user, the value it used in place of the IP address was actually the container hostname, "2276041346a4". This looks like a problem with detecting the correct IP address in the entrypoint.sh script, around these lines:
    https://github.com/severalnines/docker/blob/master/entrypoint.sh#L19-L20

    On the Docker host, attach to the ClusterControl container and send me the output of the following commands:

    $ ip a
    $ hostname
    $ ip a | grep eth0 | grep inet | awk {'print $2'} | cut -d '/' -f 1 | head -1
    $ hostname -i | awk {'print $1'} | tr -d ' '
    $ grep ^hostname /etc/cmon.d/cmon.cnf
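
    As a comparison point, reading the primary IPv4 address from the routing table tends to be more robust than grepping ip a output (a sketch, not the script's actual code):

    $ ip -4 route get 1 | awk '{for (i = 1; i <= NF; i++) if ($i == "src") {print $(i+1); exit}}'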

    Regards,
    Ashraf

  • Dimas

    sh-4.2# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/sit 0.0.0.0 brd 0.0.0.0
    5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    sh-4.2# hostname
    2276041346a4
    sh-4.2# ip a | grep eth0 | grep inet | awk {'print $2'} | cut -d '/' -f 1 | head -1
    172.17.0.2
    sh-4.2# hostname -i | awk {'print $1'} | tr -d ' '
    172.17.0.2
    sh-4.2# grep ^hostname /etc/cmon.d/cmon.cnf
    hostname=2276041346a4
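
    This output pins down the mismatch: both detection pipelines agree on 172.17.0.2, yet cmon.cnf was generated from the container hostname. Applied in this same session, the fix from the official comment above reduces to (a sketch):

    sh-4.2# sed -i 's/^hostname=.*/hostname=172.17.0.2/' /etc/cmon.d/cmon.cnf
    sh-4.2# supervisorctl restart cmon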

  • Dimas

    Thx 👍

