Upgrade ClusterControl to Latest Version

Comments

  • Ashraf Sharif

    Hi David,

    It seems like you executed the script under the /usr/bin directory. Try again: remove the /usr/bin/s9s_repo directory and restart the script from another location (it is recommended to run it from your $HOME).
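
    For example, something along these lines should work (the paths are the ones from this thread, and this assumes the leftover directory is safe to remove):

    # remove the stale directory left behind by the failed run
    rm -rf /usr/bin/s9s_repo
    # re-run the upgrade from the clone in your home directory
    cd ~/s9s-admin/ccadmin
    ./s9s_upgrade_cmon --latest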

    Regards,

    Ashraf

  • Johan

    Hi,

    That is fine. The UI and the CMON API (a RESTful API) are still on 1.2.4, but the ClusterControl server needed a patch (fixes for the comments above saying the upgrade did not work) and became 1.2.4a.

    Best regards

    Johan

  • Niko

    Hey,

    I did. Now UI says:

    ClusterControl UI version: 1.2.4

    ClusterControl Server Version: 1.2.4a

    CMON API Version: 1.2.4

    How do I update the rest?

  • Johan

    Hi,

    Please do:

    ./s9s_upgrade_cmon --latest --force

    Make sure you have the latest s9s_upgrade_cmon first:

    cd s9s-admin

    git pull

    cd ccadmin

    ./s9s_upgrade_cmon ... 

     

     

     

  • Niko

    Hey,

    I tried to update the software with the automatic scripts, but for some reason it only shows the status and won't start updating:

    ./s9s_upgrade_cmon --latest

    ===================================

          Collecting Information

    ** Current CMON version: 1.2.4

    ** Latest CMON version: 1.2.4a

    ** Latest CCUI version: 1.2.4

    ClusterControl is already up-to-date.

    How can I update from 1.2.4 to 1.2.4a?

  • Henry Snow

    I'm going from 1.2.2 using the same upgrade process as above. Hostnames and IPs have been modified.

  • Principe Orazio

    From 1.2.3 with this command:

    To upgrade to the latest version, clone the Git repo on the ClusterControl host:

    $ git clone https://github.com/severalnines/s9s-admin.git

    If you already have that clone, it is very important that you update it:

    $ cd s9s-admin

    $ git pull

    Navigate to the s9s_upgrade_cmon script folder:

    $ cd s9s-admin/ccadmin

     

    Start the upgrade process:

    ./s9s_upgrade_cmon --latest

  • Johan

    Hi,

    That is strange, I can't reproduce this upgrading from 1.2.3 to 1.2.4. What version did you upgrade from?

    Thanks,

    Johan

  • Principe Orazio

    Hi, I simply solved it by rolling back to the previous version, and everything now works without problems.

    Thanks for your help

    Regards

    Principe

  • Johan

    Did you restart cmon on the controller?

    service cmon restart

    Wait a few minutes, then what do you see in Manage -> Processes? (Please send a screenshot + /var/log/cmon.log from the controller.)
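
    For reference, the relevant part of the controller log can be grabbed with something like this (the line count is just a suggestion):

    # capture the tail of the log once the restart has had a few minutes to settle
    tail -n 200 /var/log/cmon.log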

    Best regards

    Johan

  • Henry Snow

    I'm having the same issue as Principe. Logs look fairly similar.

  • Principe Orazio

    Stopping cmon:  ok

    Starting cmon: Checking for default config file at /etc/cmon.cnf: found

    Deprecated parameter: nodaemon

       To start cmon in nodaemon mode:    ./cmon -d

    found pidfile /var/run/cmon.pid with pid 23372

    Going to daemonize.. - cmon will write a log in /var/log/cmon.log from now

    Starting cmon  : ok

     

     

     

  • Johan

    @principe - can you run this on the controller:

    service cmon restart

    What do you get then?

  • David Majchrzak

    @Ashraf Sharif: This was actually straight from /root/s9s-admin/ccadmin/

    The problem was an empty directory, /usr/bin/s9s_repo.

    I deleted the directory, and the script continued and upgraded successfully.
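
    For anyone hitting the same thing, the fix was roughly the following (the directory is the one named in the error output):

    # the empty directory that made the "Backup S9S binary" step fail
    rmdir /usr/bin/s9s_repo
    # then re-run the upgrade from the clone
    cd /root/s9s-admin/ccadmin
    ./s9s_upgrade_cmon --latest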

     

  • Johan

    @principe  - hold on, trying to reproduce this.

  • Principe Orazio

    Any suggestions for me?

    Regards

  • Chris

    Hi guys

     

    I've tried to upgrade an old 1.3.6 CC but it fails. How can I get the new CC onto an already running cluster (Percona and 1.3.6 CC)?

     

    KR

  • David Majchrzak

    Tried updating to 1.2.4 but it seems to fail a bit:

    ./s9s_upgrade_cmon --latest

    ===================================

    Collecting Information

    Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    ** Current CMON version: 1.2.3d

    ** Latest CMON version: 1.2.4

    New version found.

    ** Proceed with upgrading? (Y/n): Y

    Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    ===================================

    Backup CMON controller

    Creating backup directory: /tmp/s9s_backup_2013-11-20-11_33_22 [ OK ]

    Backup CMON schema Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    [ OK ]

    Backup CMON cron [ OK ]

    Backup CMON web app [ OK ]

    Backup ClusterControl web app [ OK ]

    Backup ClusterControl schema Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.

    [ OK ]

    Backup CMON share directory [ OK ]

    Backup CMON configuration files [ OK ]

    Backup CMON log file [ OK ]

    Backup CMON init.d [ OK ]

    Backup my.cnf [ OK ]

    Backup S9S binary cp: omitting directory `/usr/bin/s9s_repo'

    [ Backup S9S binary ]

    [ ERROR ]

    Upgrade failed. Unfortunately, the detected backup is incomplete. Kindly proceed with manual restoration.

  • Principe Orazio

    mysql> select * from ext_proc \G

    *************************** 1. row ***************************

           id: 7

          cid: 1

     hostname: 192.168.2.20

          bin: /usr/bin/garbd

         opts: -a gcomm://192.168.2.22:4567 -g  -d

          cmd: /usr/bin/garbd  -a gcomm://192.168.2.22:4567 -g my_wsrep_cluster -d

    proc_name: garbd

       status: 0

         port: 4567

       active: 1

    report_ts: 2013-11-20 10:31:21

    *************************** 2. row ***************************

           id: 16

          cid: 1

     hostname: 192.168.2.11

          bin: /usr/sbin/haproxy

         opts: -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)

          cmd: /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)

    proc_name: haproxy

       status: 0

         port: 9600

       active: 1

    report_ts: 2013-11-20 10:31:41

    *************************** 3. row ***************************

           id: 17

          cid: 1

     hostname: 192.168.2.12

          bin: /usr/sbin/haproxy

         opts: -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)

          cmd: /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)

    proc_name: haproxy

       status: 0

         port: 9600

       active: 1

    report_ts: 2013-11-20 10:31:42

    *************************** 4. row ***************************

           id: 18

          cid: 1

     hostname: 192.168.2.11

          bin: /usr/sbin/keepalived

         opts:  

          cmd: nohup nice /usr/sbin/keepalived

    proc_name: keepalived

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:31:42

    *************************** 5. row ***************************

           id: 19

          cid: 1

     hostname: 192.168.2.12

          bin: /usr/sbin/keepalived

         opts:  

          cmd: nohup nice /usr/sbin/keepalived

    proc_name: keepalived

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:31:42

    *************************** 6. row ***************************

           id: 25

          cid: 1

     hostname: 192.168.2.21

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:40:13

    *************************** 7. row ***************************

           id: 26

          cid: 1

     hostname: 192.168.2.24

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:40:13

    *************************** 8. row ***************************

           id: 27

          cid: 1

     hostname: 192.168.2.22

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:40:13

    *************************** 9. row ***************************

           id: 28

          cid: 1

     hostname: 192.168.2.23

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:40:13

    *************************** 10. row ***************************

           id: 29

          cid: 1

     hostname: 192.168.2.25

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:40:14

    *************************** 11. row ***************************

           id: 31

          cid: 1

     hostname: 192.168.2.12

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 10:58:53

    *************************** 12. row ***************************

           id: 33

          cid: 1

     hostname: 192.168.2.11

          bin: /usr/sbin/cmon

         opts:  

          cmd: service cmon restart  

    proc_name: cmon

       status: 0

         port: 0

       active: 1

    report_ts: 2013-11-20 11:20:10

    12 rows in set (0.00 sec)

     

     

    mysql> select * from cmon.hosts \G

    *************************** 1. row ***************************

                 id: 1

                cid: 1

           hostname: 192.168.2.20

        ping_status: 1

          ping_time: 27

                 ip: 192.168.2.20

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:29

    wall_clock_time: 1384945049

    *************************** 2. row ***************************

                 id: 2

                cid: 1

           hostname: 192.168.2.21

        ping_status: 1

          ping_time: 4112

                 ip: 192.168.2.21

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:29

    wall_clock_time: 1384945049

    *************************** 3. row ***************************

                 id: 3

                cid: 1

           hostname: 192.168.2.22

        ping_status: 1

          ping_time: 327

                 ip: 192.168.2.22

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:31

    wall_clock_time: 1384945051

    *************************** 4. row ***************************

                 id: 4

                cid: 1

           hostname: 192.168.2.23

        ping_status: 1

          ping_time: 267

                 ip: 192.168.2.23

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:30

    wall_clock_time: 1384945050

    *************************** 5. row ***************************

                 id: 5

                cid: 1

           hostname: 192.168.2.24

        ping_status: 1

          ping_time: 438

                 ip: 192.168.2.24

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:30

    wall_clock_time: 1384945050

    *************************** 6. row ***************************

                 id: 6

                cid: 1

           hostname: 192.168.2.25

        ping_status: 1

          ping_time: 320

                 ip: 192.168.2.25

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:31

    wall_clock_time: 1384945051

    *************************** 7. row ***************************

                 id: 29

                cid: 1

           hostname: 192.168.2.11

        ping_status: 1

          ping_time: 289

                 ip: 192.168.2.11

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:32

    wall_clock_time: 1384945102

    *************************** 8. row ***************************

                 id: 30

                cid: 1

           hostname: 192.168.2.12

        ping_status: 1

          ping_time: 302

                 ip: 192.168.2.12

          report_ts: 2013-11-20 11:57:04

                msg:

       cmon_version: 1.2.4

        cmon_status: 2013-11-20 11:57:30

    wall_clock_time: 1384945105

    8 rows in set (0.00 sec)

     

  • Johan

    Can you run the following on the controller:

    mysql -ucmon -p -h127.0.0.1

    use cmon;

    select * from cmon.ext_proc \G

    select * from cmon.hosts \G

    Best regards

    Johan

     

     

  • Principe Orazio

    I have done what is described above on this page:

     

    ./s9s_upgrade_cmon --latest

  • Johan

    Hi,

    Did you update the UI?

    BR

    johan

  • Principe Orazio

    Strange, mysql is running and accessible by the user cmon.
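
    (Checked with a quick connection test along these lines; the node IP below is just a placeholder:)

    mysql -ucmon -p -h<node_ip> -e 'SELECT 1;'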

     

  • Johan

    Hi,

    It looks quite fine (there is no mysql server running on 192.168.2.11?).

    Can you go to Manage -> Processes and take a screenshot of it?

    Best regards

    Johan

     

  • Principe Orazio

    Going to daemonize.. - cmon will write a log in /var/log/cmon.log from now

    Starting cmon  :Nov 20 11:17:05 : (INFO) Starting cmon version 1.2.4 with the following parameters:

         mysql-password=xxxxxxx

        mysql-hostname=192.168.2.20

        mysql-port=3306

        ndb-connectstring=<only applicable for  mysqlcluster>

        cluster-id=1

        hostname=192.168.2.11

        db_hourly_stats_collection_interval=5

         db_stats_collection_interval=30

         host_stats_collection_interval=60

         ssh_opts=-nq

         ssh_port=

         ssh_identity=/root/.ssh/id_rsa

         os_user=root

         os_user_home=/root

         os=redhat

         mysql_basedir=/usr/

         enable_autorecovery=1

     

    Nov 20 11:17:05 : (INFO) If that doesn't look correct, kill cmon and restart with -? for help on the parameters, or change the params in /etc/init.d/cmon

    Nov 20 11:17:05 : (INFO) Looking up (1): 192.168.2.11

    Nov 20 11:17:05 : (INFO) IPv4 address: 192.168.2.11 (192.168.2.11)

    Nov 20 11:17:05 : (INFO) Testing connection to mysqld..

    Nov 20 11:17:05 : (INFO) Setting up threads to handle monitoring

    Nov 20 11:17:05 : (INFO) Starting up Alarm Handler

    Nov 20 11:17:05 : (INFO) Added 192.168.2.11:/usr/sbin/cmon to managed processes (ext_proc)

    Nov 20 11:17:05 : (INFO) Starting MySQL Agent Collector

    Nov 20 11:17:06 : (INFO) Starting host collector

     ok

    Nov 20 11:17:08 : (INFO) Starting Process Manager

    Nov 20 11:17:09 : (INFO) Starting Bencher Thread

     

    This is the log; the cmon process is running, but it seems that the old parameters are wrong?
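
    (If it helps to narrow it down, the parameters the log refers to can be inspected directly; both files are the ones mentioned in the output above:)

    # check which hostname/IP settings the init script and config file still carry
    grep -n '192.168.2' /etc/init.d/cmon /etc/cmon.cnf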

    Regards

  • Johan

    Hi,

    Can you post the /var/log/cmon.log from any of the problematic agents?

    Also, on the same problematic node you got the /var/log/cmon.log from, can you run:

    ps -ef | grep cmon

     

    If you don't want to put it here, please open a support ticket.

    Best regards

    Johan

     

  • Principe Orazio

    Hi, I installed all of them from the UI during the trial period.

  • Johan

    Hi Principe,

    Did you install garbd, haproxy, keepalived from scripts or from the UI?

    Best regards

    Johan

  • Principe Orazio

    Hi, I upgraded to the new version and everything seems to be done. The only thing that I don't understand is the message "Agent is not responding for .... "; it appears for the garbd, haproxy, and keepalived hosts, while for the cluster nodes everything is all right.

    I tried running service cmon restart, but it continues to show me this message.

    What can I do? Any suggestions?

    Thanks

    Principe Orazio

  • Ashraf Sharif

    Hi,

    For the new UI, you need to install it by going to http://cluster_control_ip/install . Enter the required details, like the MySQL root password, email address and password, and most importantly take note of the ClusterControl API Access Token. Click "Install" and follow the integration wizard. Once completed, the new UI will be accessible at https://cluster_control_ip/clustercontrol.
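
    As a quick sanity check that the installer page is being served, something like this can be run from any host that can reach the controller (the hostname below is just a placeholder):

    curl -I http://cluster_control_ip/install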

    For an example of the upgrade, you can refer to our blog post here: http://www.severalnines.com/blog/how-upgrade-clustercontrol.
