System failed to initiate a job command. You may want to try again.

Comments

19 comments

  • Dinesh Ghutake

    Hi, I am also getting the same error described above. Has anybody found a solution for it?

    I also checked the httpd logs; I don't find any error in error.log, only one warning in the ssl_error_log file. Is the warning below causing this issue? If so, what is the solution for it?

    tail -f ssl_error_log -n 1000
    [Mon Jul 06 07:23:16 2015] [warn] RSA server certificate wildcard CommonName (CN) `*.severalnines.local' does NOT match server name!?

     

    Thanks in advance!

     

  • Ashraf Sharif

    Hi Dinesh,

    That error message shows up when the UI is not able to push the job request to the REST API. Can you verify that you can access http://[ClusterControl_host]/cmonapi ?
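
    If it does not load, a quick check from the ClusterControl host itself would be something like this (a sketch; substitute your own host address for 127.0.0.1):

    # Any HTTP response (even an authentication error) means the web
    # server is at least reachable:
    curl -i http://127.0.0.1/cmonapi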

    Also, what is the output if you run the following command on ClusterControl:

    mysql -uroot -p -e 'select url from dcps.apis'

    Regards,

    Ashraf

  • Dinesh Ghutake

    Hi, thank you very much for your quick response.

    What is the job request API URI? Can I test it independently?

     

    Yes, I am able to access http://<my clustercontrol Host>/cmonapi.

    When I execute the "select url from dcps.apis" query, I get the following result:

    http://127.0.0.1/cmonapi

    One more piece of information: when I tried to log in to cmonapi, I got a registration error.

    Many thanks,

    Dinesh Ghutake

  • Dinesh Ghutake

    Hi, I have made some progress on it. What I did is stop the cmon service, drop the cmon database, and re-execute /var/www/html/clustercontrol/app/tools/setup-cc.sh, which reinstalled the cmon database.

    Then I tried again to add the cluster node from the ClusterControl UI, and now I am getting an SSH error. Any idea how I can resolve this?

     

  • Ashraf Sharif

    Hi Dinesh,

    You also need to set up passwordless SSH on the ClusterControl node, so it can SSH to itself without a password. Run the following commands on the ClusterControl node as user dinesh:

    su - dinesh
    ssh-copy-id -i /home/dinesh/.ssh/id_rsa dinesh@10.232.24.198
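
    As a quick sanity check, the following should now log in without prompting for a password:

    # Should print the hostname without asking for a password:
    ssh dinesh@10.232.24.198 "hostname"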

    Then retry adding the cluster.

    Regards,

    Ashraf

  • Dinesh Ghutake

    Hi Ashraf, thank you for your kind help and support. That worked, and I have been able to make progress now.

    Now I am getting a permission denied error while adding the cluster. I am looking into whether the issue is that the MySQL root user can't access the database.

     

    15 - Message sent to controller
    16 - Verifying controller host and cmon password.
    17 - Verifying the SSH access to the controller.
    18 - Verifying job parameters.
    19 - Verifying the SSH connection to 10.247.97.147.
    20 - Verifying the MySQL user/password.
    21 - Can't access the MySQL server: sh: /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql: Permission denied, command: /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql -NAB -u'root' -pxxxxxx -e 'SELECT 1' 2>&1
    22 - Can't access the MySQL server: sh: /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql: Permission denied, command: /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql -NAB -u'root' -h127.0.0.1 -P3306 -pxxxxxx -e 'SELECT 1' 2>&1
    23 - Can't access the MySQL server: sh: /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql: Permission denied, command: /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql -NAB -u'root' -h10.247.97.147 -P3306 -pxxxxxx -e 'SELECT 1' 2>&1
    24 - Could not connect to mysql server.
    Job failed.

     

  • Ashraf Sharif

    Hi Dinesh,

    Does the user 'dinesh' have sudo privileges to execute superuser commands? It says "Permission denied". Please verify that the basedir is accessible by the specified user. I can see the basedir is under another user's home directory, /home/chaitanya, but I can't confirm whether the user has enough privileges.
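
    As a sketch of how to verify this (assuming standard sudo and util-linux tools; adjust the binary path to your basedir):

    # List the sudo rights granted to the user (run as root):
    sudo -l -U dinesh

    # Walk the permissions of every path component down to the mysql
    # binary; each directory must be traversable by the SSH user:
    namei -l /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql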

    Regards,

    Ashraf

  • thiraviam

    Hi Dinesh,

    I have also come across the same issue in a MySQL Galera cluster, and I resolved it by setting one common root password for all the nodes.

    Add SSH user: root, SSH key path: /root/.ssh/id_rsa.pub
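
    For example, something along these lines on each node (the password shown is only a placeholder):

    # Set the same root password on every node (MySQL 5.6 syntax):
    mysql -uroot -p -e "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('SamePasswordEverywhere');"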

  • Dinesh Ghutake

    Hi Thiraviam, thank you for your suggestion. Since I am using private cloud VMs, I cannot SSH as the root user. As Ashraf suggested, I have created the user chaitanya, and I have been able to make progress now.

    From the ClusterControl node I am able to connect to both Percona cluster nodes, but on the second node (10.247.97.146) I am NOT able to execute queries, because ClusterControl by default looks for the mysql client at /usr/bin/mysql and that client is not available on the 146 node (the cluster node installation is under my home directory).

    On the 147 node it works fine, because I already had another MySQL installed there; I was able to create a soft link, so ClusterControl can execute queries on VM 147 via /usr/bin/mysql.
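
    For reference, the soft link amounts to pointing /usr/bin/mysql at an existing client binary, roughly like this (the exact source path depends on which client is linked):

    sudo ln -s /home/chaitanya/percona/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64/bin/mysql /usr/bin/mysql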

    My question is: why does ClusterControl always look for /usr/bin/mysql instead of using the given base directory, e.g. /home/chaitanya/percona/<perconamysql>/bin/mysql?

    What would be a possible solution for this?

    Any help is appreciated. Thanks in advance!

    215 - Message sent to controller
    216 - Verifying controller host and cmon password.
    217 - Verifying the SSH access to the controller.
    218 - Verifying job parameters.
    219 - Verifying the SSH connection to 10.247.97.147.
    220 - Verifying the MySQL user/password.
    221 - monitored_mysql_root_password is not set, please set it later the generated cmon.cnf
    222 - Getting node list from the MySQL server.
    223 - Found node: '10.247.97.146'
    224 - Found node: '10.247.97.147'
    225 - Found in total 2 nodes.
    226 - Checking the nodes that those aren't in another cluster.
    227 - Verifying the SSH connection to the nodes.
    228 - Check SELinux statuses
    229 - 10.247.97.146: Failed to determine skip_name_resolve: , command: /usr/bin/mysql -NAB -u'cmon' -pxxxxxx -e "SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME='skip_name_resolve'" 2>/dev/null
    Job failed.
  • Ashraf Sharif

    Hi Dinesh,

    In job message 229, /usr/bin/mysql is being executed on the controller node, not on the target MySQL server, to determine whether skip_name_resolve is enabled. So ensure that on the ClusterControl node you have the following lines configured in /etc/cmon.cnf:

    mysql_basedir=[path to MySQL basedir on ClusterControl server, eg: /usr ]
    mysql_bindir=[path to MySQL binary directory on ClusterControl server, eg: /usr/bin ]

    Restart CMON and try to add the node again.
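
    For example (assuming the stock init script installed by the setup):

    # Restart the controller so it re-reads /etc/cmon.cnf:
    service cmon restart

    # Then confirm a mysql client really exists where mysql_bindir points:
    ls -l /usr/bin/mysql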

     

    Regards,
    Ashraf

  • Dinesh Ghutake

    Hi Ashraf, thanks for your response, and I appreciate your help. I modified /etc/cmon.cnf as you suggested, but I am facing the same issue.

    I guess the controller node is trying to execute the query on one of the cluster nodes (VM 146) and looking for the mysql client at /usr/bin/mysql, but cannot find the mysql client there.

    Shall I install the mysql client at /usr/bin/mysql on 146 and try again?

    /etc/cmon.cnf

    mysql_basedir=/var/cluster-control/mysql
    mysql_bindir=/var/cluster-control/mysql/bin

     

    240 - Message sent to controller
    241 - Verifying controller host and cmon password.
    242 - Verifying the SSH access to the controller.
    243 - Verifying job parameters.
    244 - Verifying the SSH connection to 10.247.97.147.
    245 - Verifying the MySQL user/password.
    246 - monitored_mysql_root_password is not set, please set it later the generated cmon.cnf
    247 - Getting node list from the MySQL server.
    248 - Found node: '10.247.97.146'
    249 - Found node: '10.247.97.147'
    250 - Found in total 2 nodes.
    251 - Checking the nodes that those aren't in another cluster.
    252 - Verifying the SSH connection to the nodes.
    253 - Check SELinux statuses
    254 - 10.247.97.146: Failed to determine skip_name_resolve: , command: /usr/bin/mysql -NAB -u'cmon' -pxxxxxx -e "SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_VARIABLES WHERE VARIABLE_NAME='skip_name_resolve'" 2>/dev/null
    Job failed.

  • Ashraf Sharif

    Hi Dinesh,

    I couldn't reproduce the problem with the latest version:

    clustercontrol-controller-1.2.10-822.x86_64
    clustercontrol-cmonapi-1.2.10-99.x86_64
    clustercontrol-1.2.10-518.x86_64

    Here is my cmon.cnf (I used MariaDB 10.x client):

    mysql_basedir=/root/mariadb-10.0.20-linux-x86_64/
    mysql_bindir=/root/mariadb-10.0.20-linux-x86_64/bin

    And the target MySQL server's basedir:
    /usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl101/

    If you are running an earlier version, please upgrade to at least the version I mentioned by following these steps: http://support.severalnines.com/entries/21095371

    Also, examine /var/log/cmon.log and look for entries at the "ERROR" log level, if there are any.
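
    For example, to pull the most recent ERROR entries:

    grep -n "ERROR" /var/log/cmon.log | tail -n 50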

    Regards,

    Ashraf

  • Dinesh Ghutake

    Hi Ashraf,

    I was able to resolve the above issue by creating a soft link to the mysql.sock file at /var/lib/mysql/mysql.sock.
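
    The link is along these lines (the source path here is a placeholder; use wherever your Percona instance actually creates its socket):

    sudo ln -s /path/to/actual/mysql.sock /var/lib/mysql/mysql.sock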

    I think I am pretty close now; I am getting access denied for the 'cmon' user on VM 146.

    Just for info: while adding the cluster I am using cmon as the database user.

    On both VM 146 and VM 147, and on the ClusterControl node, I have granted the cmon user; the grants look the same on all these VMs.

    Cluster Control node-->

    mysql> show grants for 'cmon';
    +--------------------------------------------------------------------------------------------------------------------------------+
    | Grants for cmon@% |
    +--------------------------------------------------------------------------------------------------------------------------------+
    | GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'%' IDENTIFIED BY PASSWORD '*E8C5459B50EF1C73187CBEFB6D0FAF5C0F4E0812' WITH GRANT OPTION |
    +--------------------------------------------------------------------------------------------------------------------------------+
    1 row in set (0.00 sec)

    VM 146-->

    mysql> show grants for 'cmon';
    +--------------------------------------------------------------------------------------------------------------------------------+
    | Grants for cmon@% |
    +--------------------------------------------------------------------------------------------------------------------------------+
    | GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'%' IDENTIFIED BY PASSWORD '*E8C5459B50EF1C73187CBEFB6D0FAF5C0F4E0812' WITH GRANT OPTION |
    +--------------------------------------------------------------------------------------------------------------------------------+
    1 row in set (0.00 sec)

     

    VM 147 -->

     

    mysql> show grants for 'cmon';
    +--------------------------------------------------------------------------------------------------------------------------------+
    | Grants for cmon@% |
    +--------------------------------------------------------------------------------------------------------------------------------+
    | GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'%' IDENTIFIED BY PASSWORD '*E8C5459B50EF1C73187CBEFB6D0FAF5C0F4E0812' WITH GRANT OPTION |
    +--------------------------------------------------------------------------------------------------------------------------------+
    1 row in set (0.02 sec)

     

    456 - Message sent to controller
    457 - Verifying controller host and cmon password.
    458 - Verifying the SSH access to the controller.
    459 - Verifying job parameters.
    460 - Verifying the SSH connection to 10.247.97.147.
    461 - Verifying the MySQL user/password.
    462 - monitored_mysql_root_password is not set, please set it later the generated cmon.cnf
    463 - Getting node list from the MySQL server.
    464 - Found node: '10.247.97.146'
    465 - Found node: '10.247.97.147'
    466 - Found in total 2 nodes.
    467 - Checking the nodes that those aren't in another cluster.
    468 - Verifying the SSH connection to the nodes.
    469 - Check SELinux statuses
    470 - Detected that skip_name_resolve is not used on the target server(s).
    471 - Granting the controller on the cluster.
    472 - Node is Synced : 10.247.97.146
    473 - 10.247.97.146: GRANT failed: Warning: Using a password on the command line interface can be insecure. ERROR 1045 (28000) at line 1: Access denied for user 'cmon'@'localhost' (using password: YES)
    Job failed.

     

    Any idea? Thanks in advance!

  • Ashraf Sharif

    Hi Dinesh,

    It clearly says:
    ERROR 1045 (28000) at line 1: Access denied for user 'cmon'@'localhost' (using password: YES) 

    Can you try this command on 146?

    mysql -ucmon -p -hlocalhost

    Also, please show the output of the following statement on 146:
    SHOW GRANTS FOR 'cmon'@'localhost';
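
    If that grant turns out to be missing on 146, here is a sketch of how it could be added (substitute the real cmon password for the placeholder):

    mysql -uroot -p -e "GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'localhost' IDENTIFIED BY 'cmon_password';"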

    Regards,

    Ashraf

  • Johan

    Hi Dinesh,

    Do the 'root' and 'cmon' passwords contain non-alphanumeric characters?

    Did you manually install Percona from tar.gz files or something else?

    What cloud provider are you using? 

    I would like to set up a similar environment to reproduce your problems, because the installation process should be much smoother than this, and it seems the error messages are not clear enough.

    BR
    johan

  • Dinesh Ghutake

    Hi Ashraf/Johan,

    To answer Ashraf's queries, here is the output:

    [chaitanya@IPPGITVERCTRL01 Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64]$ ./bin/mysql -ucmon -p -hlocalhost
    Enter password:
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 1302
    Server version: 5.6.22-72.0-25.8-log Percona XtraDB Cluster binary (GPL) 5.6.22-25.8, Revision 978, wsrep_25.8.r4150

    Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> SHOW GRANTS FOR 'cmon'@'localhost';

    +----------------------------------------------------------------------------------------------------------------------+
    | Grants for cmon@localhost |
    +----------------------------------------------------------------------------------------------------------------------+
    | GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'localhost' IDENTIFIED BY PASSWORD '*E8C5459B50EF1C73187CBEFB6D0FAF5C0F4E0812' |
    +----------------------------------------------------------------------------------------------------------------------+
    1 row in set (0.00 sec)

    mysql>

    It seems I am missing some step here, but I cannot figure out which.

    Hi Johan,

    The 'root' and 'cmon' passwords contain only alphabetic characters.

    Yes, we used tar.gz archives to manually install the Percona DB cluster, since the Yum repo was not available on our VMs (Red Hat 5.7) and there was already another instance of MySQL present.

    Our cluster is working fine; we have been running a performance lab on it for the last month.

    It's our internal company private cloud. 

    I agree the installation process was smooth for ClusterControl itself; I am getting these issues while adding existing cluster nodes to ClusterControl.

    Thank you very much for your help.

  • Johan

    Hi,

    Is the other mysql instance still running on the VM? On what port? 

    BR

    johan

  • Dinesh Ghutake

    Hi,

    No, all other MySQL instances have been stopped and removed from the ClusterControl VMs.

    Regards,

    Dinesh 

  • Dinesh Ghutake

    The /etc/cmon.cnf file on the cluster controller node looks like this:

    mysql_port=3306

    mysql_hostname=10.232.24.198
    mysql_password=*****
    hostname=10.232.24.198
    mysql_basedir=/var/cluster-control/mysql
    mysql_bindir=/var/cluster-control/mysql/bin
    skip_name_resolve=1
    mode=controller
    type=galera
    local_mysql_port=3306
    local_mysql_password=******
    os_user=root
    logfile=/var/log/cmon.log
    mysql_server_addresses=10.247.97.147,10.247.97.146

    Your help is much appreciated.

    Regards, 

    Dinesh

