Please follow these upgrade instructions in our Administration Guide:
https://severalnines.com/docs/administration.html#upgrading-clustercontrol
[DEPRECATED - DON'T READ BELOW]
We have built an automatic upgrade script which is available under our Git repository:
https://github.com/severalnines/s9s-admin.git
IMPORTANT CHANGE! Port 9500 must be opened on the Controller. The Apache Server needs to connect to the Controller on localhost:9500/127.0.0.1:9500.
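The requirement above can be sanity-checked from the controller itself. A minimal sketch, assuming a Bash-capable shell (it relies on Bash's /dev/tcp redirection) and that the cmon controller is already running; `check_port` is an illustrative helper, not part of the s9s tooling:

```shell
# Hedged check: does anything answer on 127.0.0.1:9500 on this host?
check_port() {
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}
check_port 127.0.0.1 9500
```

If this prints "closed", verify that cmon is running and that no local firewall rule blocks loopback traffic on port 9500.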
To upgrade to the latest version, clone the Git repo in ClusterControl host:
$ git clone https://github.com/severalnines/s9s-admin.git
If you already have that clone, it is very important you update it:
$ cd s9s-admin
$ git pull
Navigate to the s9s_upgrade_cmon script folder:
$ cd s9s-admin/ccadmin
Start the upgrade process:
./s9s_upgrade_cmon --latest
The script will compare the current ClusterControl version with the latest version available at our download site and perform the upgrade on all hosts if necessary. As a safety precaution, it will back up ClusterControl on every host (the default backup path is $HOME/s9s_backup/s9s_backup_[date & time]) before performing any upgrade.
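For reference, the timestamped directory name can be reproduced with `date`. This is only a sketch of the naming convention described above; the exact timestamp format is an assumption, inferred from the `s9s_backup_2013-11-20-11_33_22` path visible in the upgrade log later in the comments:

```shell
# Illustrative only: mirror the default backup path layout described above.
BACKUP_DIR="$HOME/s9s_backup/s9s_backup_$(date +%F-%H_%M_%S)"
echo "$BACKUP_DIR"
```

Knowing the layout makes it easy to locate the right directory for a later --restore run.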
Other Parameters
To force the upgrade, even if the script detects that the installed ClusterControl version is already up-to-date, specify the --force (-f) option:
./s9s_upgrade_cmon --latest --force
By default, the upgrade skips backing up the CMON and dcps databases, since in some cases these backups can eat up a lot of disk space. You can instruct the script to perform the database backup during the upgrade with the --backup-db (-n) option; make sure you have sufficient free space beforehand:
./s9s_upgrade_cmon --latest --backup-db
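Before adding --backup-db, it is worth checking free space first. A minimal sketch; the 1024 MB threshold is an arbitrary illustration, so size it to your actual cmon and dcps databases:

```shell
# Hedged precheck: free megabytes on the filesystem holding $HOME,
# where the backups land by default.
FREE_MB=$(df -Pm "$HOME" | awk 'NR==2 {print $4}')
if [ "$FREE_MB" -lt 1024 ]; then
    echo "Warning: only ${FREE_MB} MB free - the database backup may fail" >&2
fi
echo "Free space: ${FREE_MB} MB"
```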
You can restore ClusterControl to the original version with the --restore and --backupdir options:
./s9s_upgrade_cmon --restore=all --backupdir=/tmp/s9s_backup_0-0-0-00/
If you are using a non-default installation path (the default is /usr/local/cmon for Debian/Ubuntu and /usr for Red Hat/CentOS), you can use the --topdir (-t) option to specify the custom path, as in the example below:
./s9s_upgrade_cmon --latest --topdir=/opt/cmon
It is also possible to define a custom backup path, overriding the default during the upgrade/backup process, with the --backupdir (-d) option, as in the example below:
./s9s_upgrade_cmon --latest --backupdir=/root/backup/s9s_backup
or
./s9s_upgrade_cmon --backup=all --backupdir=/root/backup/s9s_backup
Your restoration process (if needed) will then use the custom backup path as well:
./s9s_upgrade_cmon --restore=all --backupdir=/root/backup/s9s_backup
The --restore and --backup options accept three arguments:
- all - restore/back up all hosts (the ClusterControl controller host and all agent hosts)
- controller - restore/back up the controller host only
- agent - restore/back up all agent hosts only
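Expressed as a shell fragment, the accepted values boil down to a three-way case. `validate_target` below is a hypothetical helper for illustration, not part of s9s_upgrade_cmon:

```shell
# Illustrative validation of the --restore/--backup argument values listed above.
validate_target() {
    case "$1" in
        all|controller|agent) echo "valid" ;;
        *) echo "invalid" ;;
    esac
}
validate_target controller   # prints "valid"
```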
**For sudo users, kindly run the commands with 'sudo'.
Comments
Hi guys
I've tried to upgrade an old 1.3.6 CC but it fails. How can I get the new CC on an already running cluster (Percona and 1.3.6 CC)?
KR
Hi,
Did you mean version 1.1.36? Can you clone the git repo again and restart the upgrade process with the following command:
$ ./s9s_upgrade_cmon --latest 2>&1 | tee upgrade.log
And attach the upgrade.log to us via support ticket.
Hi
Thanks for the tip. The issue was lack of disk space; now it's OK. But I now have two instances of CC: the old one working well, and the new one empty! How do I import the old settings?
KR
Hi,
For the new UI, you need to install it by going to http://cluster_control_ip/install . Enter the required details, like the MySQL root password, email address and password, and most importantly take note of the ClusterControl API Access Token. Click "Install" and follow the integration wizard. Once completed, the new UI is accessible at https://cluster_control_ip/clustercontrol.
For an example of an upgrade, you can refer to our blog post here: http://www.severalnines.com/blog/how-upgrade-clustercontrol.
Hi, I upgraded to the new version and everything seems to be done. The only thing I don't understand is the message "Agent is not responding for .... "; this message appears for the garbd, haproxy, and keepalived hosts, while the cluster nodes are all right.
I tried running service cmon restart but it continues to show me this message.
What can I do? Any suggestions?
Thanks
Principe Orazio
Hi Principe,
Did you install garbd, haproxy and keepalived from scripts or from the UI?
Best regards
Johan
Hi, I installed all of them from the UI during the trial period
Hi,
Can you post /var/log/cmon.log from any of the problematic agents?
Also, on the same problematic node you got /var/log/cmon.log from, can you run:
ps -ef |grep cmon
If you don't want to put it here, please open a support ticket.
Best regards
Johan
Going to daemonize.. - cmon will write a log in /var/log/cmon.log from now
Starting cmon :Nov 20 11:17:05 : (INFO) Starting cmon version 1.2.4 with the following parameters:
mysql-password=xxxxxxx
mysql-hostname=192.168.2.20
mysql-port=3306
ndb-connectstring=<only applicable for mysqlcluster>
cluster-id=1
hostname=192.168.2.11
db_hourly_stats_collection_interval=5
db_stats_collection_interval=30
host_stats_collection_interval=60
ssh_opts=-nq
ssh_port=
ssh_identity=/root/.ssh/id_rsa
os_user=root
os_user_home=/root
os=redhat
mysql_basedir=/usr/
enable_autorecovery=1
Nov 20 11:17:05 : (INFO) If that doesn't look correct, kill cmon and restart with -? for help on the parameters, or change the params in /etc/init.d/cmon
Nov 20 11:17:05 : (INFO) Looking up (1): 192.168.2.11
Nov 20 11:17:05 : (INFO) IPv4 address: 192.168.2.11 (192.168.2.11)
Nov 20 11:17:05 : (INFO) Testing connection to mysqld..
Nov 20 11:17:05 : (INFO) Setting up threads to handle monitoring
Nov 20 11:17:05 : (INFO) Starting up Alarm Handler
Nov 20 11:17:05 : (INFO) Added 192.168.2.11:/usr/sbin/cmon to managed processes (ext_proc)
Nov 20 11:17:05 : (INFO) Starting MySQL Agent Collector
Nov 20 11:17:06 : (INFO) Starting host collector
ok
Nov 20 11:17:08 : (INFO) Starting Process Manager
Nov 20 11:17:09 : (INFO) Starting Bencher Thread
This is the log; the cmon process is running, but it seems that the old parameters are wrong?
Regards
Hi,
It looks quite fine (there is no mysql server running on 192.168.2.11?) .
Can you go to Manage -> Processes and take a screenshot of it?
Best regards
Johan
Strange, mysql is running and accessible by the user cmon.
Hi,
Did you update the UI?
BR
johan
I have done what is described above on this page:
./s9s_upgrade_cmon --latest
Can you run this on the controller:
mysql -ucmon -p -h127.0.0.1
use cmon;
select * from cmon.ext_proc \G
select * from cmon.hosts \G
Best regards
Johan
mysql> select * from ext_proc \G
*************************** 1. row ***************************
id: 7
cid: 1
hostname: 192.168.2.20
bin: /usr/bin/garbd
opts: -a gcomm://192.168.2.22:4567 -g -d
cmd: /usr/bin/garbd -a gcomm://192.168.2.22:4567 -g my_wsrep_cluster -d
proc_name: garbd
status: 0
port: 4567
active: 1
report_ts: 2013-11-20 10:31:21
*************************** 2. row ***************************
id: 16
cid: 1
hostname: 192.168.2.11
bin: /usr/sbin/haproxy
opts: -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)
cmd: /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)
proc_name: haproxy
status: 0
port: 9600
active: 1
report_ts: 2013-11-20 10:31:41
*************************** 3. row ***************************
id: 17
cid: 1
hostname: 192.168.2.12
bin: /usr/sbin/haproxy
opts: -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)
cmd: /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)
proc_name: haproxy
status: 0
port: 9600
active: 1
report_ts: 2013-11-20 10:31:42
*************************** 4. row ***************************
id: 18
cid: 1
hostname: 192.168.2.11
bin: /usr/sbin/keepalived
opts:
cmd: nohup nice /usr/sbin/keepalived
proc_name: keepalived
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:31:42
*************************** 5. row ***************************
id: 19
cid: 1
hostname: 192.168.2.12
bin: /usr/sbin/keepalived
opts:
cmd: nohup nice /usr/sbin/keepalived
proc_name: keepalived
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:31:42
*************************** 6. row ***************************
id: 25
cid: 1
hostname: 192.168.2.21
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:40:13
*************************** 7. row ***************************
id: 26
cid: 1
hostname: 192.168.2.24
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:40:13
*************************** 8. row ***************************
id: 27
cid: 1
hostname: 192.168.2.22
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:40:13
*************************** 9. row ***************************
id: 28
cid: 1
hostname: 192.168.2.23
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:40:13
*************************** 10. row ***************************
id: 29
cid: 1
hostname: 192.168.2.25
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:40:14
*************************** 11. row ***************************
id: 31
cid: 1
hostname: 192.168.2.12
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 10:58:53
*************************** 12. row ***************************
id: 33
cid: 1
hostname: 192.168.2.11
bin: /usr/sbin/cmon
opts:
cmd: service cmon restart
proc_name: cmon
status: 0
port: 0
active: 1
report_ts: 2013-11-20 11:20:10
12 rows in set (0.00 sec)
mysql> select * from cmon.hosts \G
*************************** 1. row ***************************
id: 1
cid: 1
hostname: 192.168.2.20
ping_status: 1
ping_time: 27
ip: 192.168.2.20
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:29
wall_clock_time: 1384945049
*************************** 2. row ***************************
id: 2
cid: 1
hostname: 192.168.2.21
ping_status: 1
ping_time: 4112
ip: 192.168.2.21
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:29
wall_clock_time: 1384945049
*************************** 3. row ***************************
id: 3
cid: 1
hostname: 192.168.2.22
ping_status: 1
ping_time: 327
ip: 192.168.2.22
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:31
wall_clock_time: 1384945051
*************************** 4. row ***************************
id: 4
cid: 1
hostname: 192.168.2.23
ping_status: 1
ping_time: 267
ip: 192.168.2.23
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:30
wall_clock_time: 1384945050
*************************** 5. row ***************************
id: 5
cid: 1
hostname: 192.168.2.24
ping_status: 1
ping_time: 438
ip: 192.168.2.24
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:30
wall_clock_time: 1384945050
*************************** 6. row ***************************
id: 6
cid: 1
hostname: 192.168.2.25
ping_status: 1
ping_time: 320
ip: 192.168.2.25
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:31
wall_clock_time: 1384945051
*************************** 7. row ***************************
id: 29
cid: 1
hostname: 192.168.2.11
ping_status: 1
ping_time: 289
ip: 192.168.2.11
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:32
wall_clock_time: 1384945102
*************************** 8. row ***************************
id: 30
cid: 1
hostname: 192.168.2.12
ping_status: 1
ping_time: 302
ip: 192.168.2.12
report_ts: 2013-11-20 11:57:04
msg:
cmon_version: 1.2.4
cmon_status: 2013-11-20 11:57:30
wall_clock_time: 1384945105
8 rows in set (0.00 sec)
Tried updating to 1.2.4 but it seems to fail a bit:
./s9s_upgrade_cmon --latest
===================================
Collecting Information
Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
** Current CMON version: 1.2.3d
** Latest CMON version: 1.2.4
New version found.
** Proceed with upgrading? (Y/n): Y
Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
===================================
Backup CMON controller
Creating backup directory: /tmp/s9s_backup_2013-11-20-11_33_22 [ OK ]
Backup CMON schema Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
[ OK ]
Backup CMON cron [ OK ]
Backup CMON web app [ OK ]
Backup ClusterControl web app [ OK ]
Backup ClusterControl schema Warning: Using unique option prefix pass instead of password is deprecated and will be removed in a future release. Please use the full name instead.
[ OK ]
Backup CMON share directory [ OK ]
Backup CMON configuration files [ OK ]
Backup CMON log file [ OK ]
Backup CMON init.d [ OK ]
Backup my.cnf [ OK ]
Backup S9S binary cp: omitting directory `/usr/bin/s9s_repo'
[ Backup S9S binary ]
[ ERROR ]
Upgrade failed. Unfortunately, the detected backup is incomplete. Kindly proceed with manual restoration.
Hi David,
It seems like you executed the script from the /usr/bin directory. Try again by removing the /usr/bin/s9s_repo directory and restarting the script from another location (it is recommended to run the script from your $HOME).
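A hedged recap of that advice as commands (paths are taken from the error output above; double-check that the directory really is the stray empty one before removing it):

```shell
# Remove the stray directory that confused the backup step, then re-run the
# upgrade from under $HOME, capturing a log in case support needs it.
rm -rf /usr/bin/s9s_repo
cd "$HOME/s9s-admin/ccadmin"
./s9s_upgrade_cmon --latest 2>&1 | tee upgrade.log
```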
Regards,
Ashraf
Any suggestions for me?
Regards
@principe - hold on, trying to reproduce this.
@Ashraf Sharif: This was actually straight from /root/s9s-admin/ccadmin/
The problem was an empty directory /usr/bin/s9s-repo
Deleted the directory and the script continued and upgraded successfully.
@principe - can you do on the controller:
service cmon restart
What do you get then?
Stopping cmon: ok
Starting cmon: Checking for default config file at /etc/cmon.cnf: found
Deprecated parameter: nodaemon
To start cmon in nodaemon mode: ./cmon -d
found pidfile /var/run/cmon.pid with pid 23372
Going to daemonize.. - cmon will write a log in /var/log/cmon.log from now
Starting cmon : ok
I'm having the same issue as Principe. Logs look fairly similar.
Did you restart cmon on the controller?
service cmon restart
Wait a few minutes; then what do you see in Manage -> Processes (please send a screenshot + /var/log/cmon.log from the controller)?
Best regards
Johan
Hi, I simply solved it by rolling back to the previous version, and everything now works without problems.
Thanks for your help
Regards
Principe
Hi,
That is strange, I can't reproduce this upgrading from 1.2.3 to 1.2.4. What version did you upgrade from?
Thanks,
Johan
From 1.2.3 with this command:
To upgrade to the latest version, clone the Git repo in ClusterControl host:
$ git clone https://github.com/severalnines/s9s-admin.git
If you already have that clone, it is very important you update it:
$ cd s9s-admin
$ git pull
Navigate to the s9s_upgrade_cmon script folder:
$ cd s9s-admin/ccadmin
Start the upgrade process:
./s9s_upgrade_cmon --latest
I'm going from 1.2.2 using the same upgrade process as above. Hostnames and IPs have been modified.
Hey,
I tried to update the software with the automatic script, but for some reason it only shows the status and won't start updating:
./s9s_upgrade_cmon --latest
===================================
Collecting Information
** Current CMON version: 1.2.4
** Latest CMON version: 1.2.4a
** Latest CCUI version: 1.2.4
ClusterControl is already up-to-date.
How can I update from 1.2.4 to 1.2.4a?
Hi,
Please do:
./s9s_upgrade_cmon --latest --force
Make sure you have the latest s9s_upgrade_cmon first:
cd s9s-admin
git pull
cd ccadmin
./s9s_upgrade_cmon ...