If you have not used the Configurator to set up your cluster, you can still install ClusterControl on your existing database cluster. Make sure the database cluster is running at full capacity prior to this installation.
Please follow the instructions here in our Getting Started Guide.
Comments
Hello
I moved the comments to separate lines and also commented out:
#datanode_addresses=
#mgmnode_addresses=
It still fails when starting.
I'm not sure if the following line is incorrect:
mysql_server_addresses=xx.xx.xx.xx, xx.xx.xx.yy,xx.xx.xx.zz
Hi,
If you start with:
sudo /usr/sbin/cmon
What happens then?
Please send what it prints out, and also what you have in /var/log/cmon.log
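For example (a rough sketch; adjust the cmon path to wherever it is installed on your system):
# run cmon in the foreground so any startup error is printed to the terminal
sudo /usr/sbin/cmon --config-file=/etc/cmon.cnf
# in another terminal, watch the log while it starts
tail -f /var/log/cmon.log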
Thanks
Johan
sudo /usr/local/cmon/sbin/cmon
Checking for default config file at /etc/cmon.cnf
Found default config file at /etc/cmon.cnf
Invalid 'os' specified. os=redhat|debian
For some weird reason it's not able to read the data in the file. As such, I copied the file contents from this page itself, from the Severalnines instructions.
Copying the file contents into Notepad and then re-pasting them using nano seems to have fixed this particular issue.
Now the command runs successfully, but the process dies instantly:
service cmon start
Starting cmon --config-file=/etc/cmon.cnf : ok
The process is dying, but cmon.log does show some issues. I will work on them and get back if I still can't succeed!
Thanks for your help so far.
Hi,
Check for trailing whitespace after
os=debian
(and for all other parameters too)
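For example, trailing whitespace and hidden characters (such as carriage returns from copy/paste) can be spotted with something like:
# list lines that end in spaces or tabs, with line numbers
grep -nE '[[:space:]]+$' /etc/cmon.cnf
# show invisible characters; a trailing ^M means Windows line endings
cat -A /etc/cmon.cnf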
BR
Johan
Hi Johan
As I mentioned, I copied out the file contents, pasted them into Notepad, and then pasted them back again using PuTTY, and the file started working.
But now I get a different error:
May 22 20:28:50 : (INFO) IPv4 address: xx.xx.xx.yy (xx.xx.xx.yy)
May 22 20:28:50 : (WARNING) CMON_HOSTNAME compares to 127.0.0.1
May 22 20:28:50 : (ERROR) failed to verify host - mysql servers (are you using IPs or hostname? Be consistent.
May 22 20:28:50 : (ERROR) Critical error (mysql error code -1) occured - shutting down
LOL
I just discovered the setup script inside the bin folder. Why don't your instructions mention that magic script?!
OK, the setup script worked wonders, but the cluster controller is trying to SSH into our MySQL nodes!
Is this necessary?
We would like to use the manager only for monitoring and would not like it to perform restarts.
As the nodes connect through a private network but the CCM is set up on a server outside the private network, it's using a different IP (the public IP) to restart the nodes, which isn't correct.
Can we configure cmon to work only as a monitor and not make any changes to the cluster?
Hi,
Great that it worked. Did you use cmon_install_controller.sh?
Sure, you can set this in /etc/cmon.d/cmon_<cluster>.cnf:
enable_autorecovery=0
Restart cmon. It will not try to do recovery then.
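For example, something like this (the filename below assumes cluster id 1; adjust it to your own cluster id):
# turn off automatic cluster recovery (file name is just an example)
echo "enable_autorecovery=0" | sudo tee -a /etc/cmon.d/cmon_1.cnf
# restart cmon so the new setting is picked up
sudo service cmon restart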
However, in the current version it will still try to SSH in order to pull log files etc.
That part could potentially be fixed.
BR
johan
Hi Johan
Yes, the script cmon_install_controller.sh worked like magic.
Everything worked without any major issue after that, except for the frequent retries to connect to the cluster.
Hi!
I'm going to try this again after an excellent demo presented to us.
Some feedback on the installation guide: (I'm installing on Debian Wheezy)
Minor detail,
cp etc/init.d/cmon /etc/init.d/
should be
cp cmon/etc/init.d/cmon /etc/init.d/
chmod +x /etc/init.d/cmon
Why is there no tgz for the agent installation? I can only find RPMs. And is the step
cp -rf cmon/www/* /var/www/
cp etc/init.d/cmon /etc/init.d/
really necessary on the agent installations? It seems to me that the www part should only be on the controller?
Also, the agent installation script does not seem to be aware of whether it is installed on a data node or an SQL node; it recommends
setting grants on a data node as well.
Bah, now I see the "Automated install" link at the bottom of the page... hehe, maybe you should put that link at the top of the page?
Going to try that one instead. =)
BR
Johan W
Hi Johan W,
Thanks for pointing that out. We have made corrections to the post as suggested.
Automated installation is always the preferred way to start, as described on the following page:
http://support.severalnines.com/entries/21952156-Get-Your-Database-Cluster-Under-ClusterControl-
Regards,
Ashraf
Hi all. I just want to try this product, but I've got some issues. OS: CentOS 6.5 on all machines. I've got a 6-node Percona XtraDB Cluster, with the controller alongside it.
This is my cmon.cnf:
#cmon config file
# id and name of cluster that this cmon agent is monitoring.
# Must be unique for each monitored cluster, like server-id in mysql
cluster_id=1
name=default_cluster_1
# os = [redhat|debian]
os=redhat
# skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere
skip_name_resolve=1
# mode = [controller|agent|dual]
mode=controller
# type = [mysqlcluster|replication|galera]
type=galera
# CMON DB config - mysql_password is for the 'cmon' user
mysql_port=3306
mysql_hostname=127.0.0.1
mysql_password=*****
# location of mysql install, e.g /usr/ or /usr/local/mysql
mysql_basedir=/usr
#hostname is the hostname of the current host
hostname=10.60.0.7
# ndb_connectstring - comma-separated list of management servers: a:1186,b:1196
#ndb_connectstring=127.0.0.1
# The user that can SSH without password to the other nodes
os_user=sm1ly
# location of cmon.pid file. The pidfile is written in /tmp/ by default
pidfile=/var/run/
# logfile is default to syslog.
logfile=/var/log/cmon.log
# collection intervals (in seconds)
db_stats_collection_interval=30
host_stats_collection_interval=30
# mysql servers in the cluster. "," or " " sep. list
#mysql_server_addresses=10.60.0.70,10.60.0.71,10.60.0.72,10.60.0.73,10.60.0.74 # THIS option tries to connect to MySQL and breaks
# mgm and data nodes are only used for MySQL Cluster. "," or " " sep. list
#datanode_addresses=
#mgmnode_addresses=
wwwroot=/var/www/html
# configuration file directory for database servers (location of config.ini, my.cnf etc)
# on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql
#db_configdir=
# SSH settings
#ssh_port=51515 # THIS option doesn't work
#ssh_keyfile=/home/sm1ly/.ssh/id_rsa # THIS option doesn't work
#staging_dir=/var/stuff/staging # THIS option doesn't work
Version:
[root@salt1 sm1ly]# rpm -qa | grep cmon
cmon-controller-1.2.5-147.x86_64
And where can I find a default cmon.cnf with all the options listed?
Also, cmon doesn't write to the log very often; it looks like it starts, but then it's dead.
Thanks.
Apr 10 11:12:59 : (INFO) Looking up and adding '10.60.0.70'
Apr 10 11:12:59 : (INFO) IPv4 address: 10.60.0.70 (10.60.0.70)
Apr 10 11:12:59 : (WARNING) CMON_HOSTNAME compares to 127.0.0.1
Apr 10 11:12:59 : (ERROR) failed to verify host - mysql servers (are you using IPs or hostname? Be consistent.
Apr 10 11:12:59 : (ERROR) Critical error (mysql error code -1) occured - shutting down
This is the error I get when mysql_server_addresses= is set.
OK, I can't do it without DNS. I've got a DNS server, so that's fine. This is my latest cnf:
#cmon config file
# id and name of cluster that this cmon agent is monitoring.
# Must be unique for each monitored cluster, like server-id in mysql
cluster_id=1
name=abboom
# os = [redhat|debian]
os=redhat
# skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere
skip_name_resolve=0
# mode = [controller|agent|dual]
mode=controller
# type = [mysqlcluster|replication|galera]
type=galera
# CMON DB config - mysql_password is for the 'cmon' user
mysql_port=3306
mysql_hostname=localhost
mysql_password=***
# location of mysql install, e.g /usr/ or /usr/local/mysql
mysql_basedir=/usr/
#hostname is the hostname of the current host
hostname=salt1.abboom.world
# The user that can SSH without password to the other nodes
os_user=sm1ly
# location of cmon.pid file. The pidfile is written in /tmp/ by default
pidfile=/var/run/
# logfile is default to syslog.
logfile=/var/log/cmon.log
# collection intervals (in seconds)
db_stats_collection_interval=30
host_stats_collection_interval=30
# mysql servers in the cluster. "," or " " sep. list
mysql_server_addresses=10.60.0.70,10.60.0.71,10.60.0.72,10.60.0.73,10.60.0.74
wwwroot=/var/www/html
# configuration file directory for database servers (location of config.ini, my.cnf etc)
# on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql
db_configdir=/etc/
ssh_opts=-p,51515
ssh_identity=/home/sm1ly/.ssh/id_rsa
agentless=1
I get these errors:
Apr 10 11:57:56 : (INFO) Testing connection to mysqld...
Apr 10 11:57:56 : (WARNING) ERROR: Failed to connect to database cmon with the following parameters: user=cmon, password=***, host=localhost, port=3306 :
ERROR: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
Apr 10 11:57:56 : (ERROR) Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (errno: 2002)
Why doesn't the db_configdir=/etc/ option work? It doesn't read the config.
My my.cnf:
[mysqld]
bind-address = 0.0.0.0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
OK, I just used ln -s for mysql.sock.
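i.e. roughly like this, with the socket path taken from the my.cnf above (pointing mysql_hostname at 127.0.0.1 in cmon.cnf, to force a TCP connection, would be another way around it):
# make the socket visible at the path the client library expects
sudo ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock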
But are you really kidding me? I've been racking my brain for about 3 hours... and it doesn't work. Really? You want me to buy this? I can't even try this product, just to see what it is...
New issues:
Apr 10 12:12:47 : (INFO) Starting cmon version 1.2.5.152 with the following parameters:
mysql-password=***
mysql-hostname=localhost
mysql-port=3306
ndb-connectstring=<only applicable for mysqlcluster>
cluster-id=1
hostname=salt1.abboom.world
db_hourly_stats_collection_interval=5
db_stats_collection_interval=30
host_stats_collection_interval=30
ssh_opts=-p,51515
ssh_identity=/home/sm1ly/.ssh/id_rsa
os_user=sm1ly
os_user_home=/home/sm1ly
os=redhat
mysql_basedir=/usr/
enable_autorecovery=1
Apr 10 12:12:47 : (INFO) If that doesn't look correct, kill cmon and restart with -? for help on the parameters, or change the params in /etc/init.d/cmon
Apr 10 12:12:47 : (INFO) Looking up (1): salt1.abboom.world
Apr 10 12:12:47 : (INFO) IPv4 address: 10.60.0.7 (salt1.abboom.world)
Apr 10 12:12:47 : (INFO) Testing connection to mysqld...
Apr 10 12:12:47 : (INFO) Community version
Apr 10 12:12:47 : (INFO) Checked tables - seems ok
Apr 10 12:12:47 : (INFO) Schema looks complete.
Apr 10 12:12:47 : (INFO) Registering managed cluster with cluster id=1
Apr 10 12:12:47 : (INFO) Managed cluster has been registered - registered cluster id=1
Apr 10 12:12:47 : (INFO) Creating host salt1.abboom.world(10.60.0.7) skip DNS: false version: 1.2.5.152
Apr 10 12:12:47 : (INFO) Setting up threads to handle monitoring
Apr 10 12:12:47 : (INFO) Starting up Alarm Handler
Apr 10 12:12:47 : (INFO) Aborting DEQUEUED, RUNNING and DEFINED jobs
Apr 10 12:12:47 : (INFO) Looking up and adding 'wmysql1.abboom.world'
Apr 10 12:12:47 : (INFO) IPv4 address: 10.60.0.70 (wmysql1.abboom.world)
Apr 10 12:12:47 : (WARNING) CMON_HOSTNAME compares to localhost
Apr 10 12:12:47 : (ERROR) failed to verify host - mysql servers (are you using IPs or hostname? Be consistent.
Apr 10 12:12:47 : (ERROR) Critical error (mysql error code -1) occured - shutting down
conf:
#cmon config file
# id and name of cluster that this cmon agent is monitoring.
# Must be unique for each monitored cluster, like server-id in mysql
cluster_id=1
name=abboom
# os = [redhat|debian]
os=redhat
# skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere
skip_name_resolve=0
# mode = [controller|agent|dual]
mode=controller
# type = [mysqlcluster|replication|galera]
type=galera
# CMON DB config - mysql_password is for the 'cmon' user
mysql_port=3306
mysql_hostname=localhost
mysql_password=Samkab0l1
# location of mysql install, e.g /usr/ or /usr/local/mysql
mysql_basedir=/usr/
#hostname is the hostname of the current host
hostname=salt1.abboom.world
# The user that can SSH without password to the other nodes
os_user=sm1ly
# location of cmon.pid file. The pidfile is written in /tmp/ by default
pidfile=/var/run/
# logfile is default to syslog.
logfile=/var/log/cmon.log
# collection intervals (in seconds)
db_stats_collection_interval=30
host_stats_collection_interval=30
# mysql servers in the cluster. "," or " " sep. list
mysql_server_addresses=wmysql1.abboom.world,wmysql2.abboom.world,wmysql3.abboom.world,wmysql4.abboom.world,wmysql5.abboom.world
wwwroot=/var/www/html
# configuration file directory for database servers (location of config.ini, my.cnf etc)
# on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql
db_configdir=/etc/
ssh_opts=-p,51515
ssh_identity=/home/sm1ly/.ssh/id_rsa
agentless=1
Ideas?
Hi Alexander,
The mysql_hostname value should be equal to the hostname value. Please use the following /etc/cmon.cnf:
=================
#cmon config file
# id and name of cluster that this cmon agent is monitoring.
# Must be unique for each monitored cluster, like server-id in mysql
cluster_id=1
name=abboom
# os = [redhat|debian]
os=redhat
# skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere
skip_name_resolve=1
# mode = [controller|agent|dual]
mode=controller
# type = [mysqlcluster|replication|galera]
type=galera
# CMON DB config - mysql_password is for the 'cmon' user
mysql_port=3306
mysql_hostname=10.60.0.7
mysql_password=Samkab0l1
# location of mysql install, e.g /usr/ or /usr/local/mysql
mysql_basedir=/usr/
#hostname is the hostname of the current host
hostname=10.60.0.7
# The user that can SSH without password to the other nodes
os_user=sm1ly
# location of cmon.pid file. The pidfile is written in /tmp/ by default
pidfile=/var/run/
# logfile is default to syslog.
logfile=/var/log/cmon.log
# collection intervals (in seconds)
db_stats_collection_interval=30
host_stats_collection_interval=30
# mysql servers in the cluster. "," or " " sep. list
mysql_server_addresses=10.60.0.70,10.60.0.71,10.60.0.72,10.60.0.73,10.60.0.74
wwwroot=/var/www/html
# configuration file directory for database servers (location of config.ini, my.cnf etc)
# on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql
db_configdir=/etc/
ssh_port=51515
ssh_opts=-q
ssh_identity=/home/sm1ly/.ssh/id_rsa
agentless=1
=================
Restart CMON and monitor /var/log/cmon.log. If you get a GRANT error, please run the advised statements from the log output.
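For illustration, the advised statement usually looks something like the one below; run exactly what your log prints, since the controller IP and password here are only placeholders:
# run on each monitored MySQL server, substituting the values from your own log
mysql -uroot -p -e "GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'10.60.0.7' IDENTIFIED BY '<mysql_password from cmon.cnf>' WITH GRANT OPTION; FLUSH PRIVILEGES;"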
Regards,
Ashraf
Hello again. When I deploy HAProxy, it asks me for a login. What would that be?
Hi Alex,
The default HAProxy statistics page username and password are admin/admin. You can change them inside /etc/haproxy/haproxy.cfg.
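For example (the exact line in your generated haproxy.cfg may differ slightly):
# find the current statistics credentials
sudo grep -n 'stats auth' /etc/haproxy/haproxy.cfg
# change them and reload HAProxy
sudo sed -i 's/stats auth admin:admin/stats auth myuser:MyS3cret/' /etc/haproxy/haproxy.cfg
sudo service haproxy reload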
Regards,
Ashraf
And one more question.
I'm trying to deploy a second cluster with the same controller node, but the problem is that the IP of this machine, 10.60.0.7, is wrong. When I try localhost, it doesn't work either.
When I take the package and try to install it, the errors are:
HTTP request sent, awaiting response... 416 Requested Range Not Satisfiable
The file is already fully retrieved; nothing to do.
localhost: scp -q -i/home/sm1ly/.ssh/id_rsa -P51515 ../../repo/epel-release-6-8.noarch.rpm sm1ly@10.60.0.90:/home/sm1ly/s9s_tmp[ok]
10.60.0.90: Executing 'sync'[ok]
10.60.0.90: Executing 'rpm -Uvh /home/sm1ly/s9s_tmp/epel-release-6-8.noarch.rpm' [failed: retrying 1/10]
[failed: retrying 2/10]
[failed: retrying 3/10]
[failed: retrying 4/10]
[failed: retrying 5/10]
[failed: retrying 6/10]
[failed: retrying 7/10]
[failed: retrying 8/10]
[failed: retrying 9/10]
[failed: retrying 10/10]
[failed]
The following command failed:
ssh -q -qtnt -i/home/sm1ly/.ssh/id_rsa -p51515 sm1ly@10.60.0.90 "sudo rpm -Uvh /home/sm1ly/s9s_tmp/epel-release-6-8.noarch.rpm "
Try running the command on the line above again, contact http://support.severalnines.com/tickets/new, attach the output from deploy.sh and the error from running the command to the Support issue.
[sm1ly@salt1 install]$ ssh -q -qtnt -i/home/sm1ly/.ssh/id_rsa -p51515 sm1ly@10.60.0.90 "sudo rpm -Uvh /home/sm1ly/s9s_tmp/epel-release-6-8.noarch.rpm "
Preparing... ########################################### [100%]
package epel-release-6-8.noarch is already installed
I just tried to remove it.
And another thing: it installed socat earlier. Why twice?
Guys, really? I told this script NO, I DO NOT WANT TO REMOVE MY MYSQL. Why did it remove it, along with the init scripts... oh guys...
And you want to give only 12 support requests for enterprise? With those bugs?
10.60.0.91: Executing 'yum -y remove postfix'[ok]
10.60.0.91: Executing 'yum -y remove sysbench'[ok]
10.60.0.91: Executing 'yum -y remove cmon*'[ok]
ssh: illegal option -- 5
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
[-D [bind_address:]port] [-e escape_char] [-F configfile]
[-I pkcs11] [-i identity_file]
[-L [bind_address:]port:host:hostport]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
[-R [bind_address:]port:host:hostport] [-S ctl_path]
[-W host:port] [-w local_tun[:remote_tun]]
[user@]hostname [command]
[the same "ssh: illegal option -- 5" usage output repeated four more times]
10.60.0.92: Executing 'killall -9 mysqld mysqld_safe cmon'[ok]
10.60.0.92: Executing 'yum -y remove redland'[ok]
OK, it installed. And what did I get? IT BROKE THE DATABASE! I don't have a single cluster left. LOL, I'm going to roll back to a snapshot.
Really, guys, we want to buy it, but please fix it. How are we supposed to use it?
You've got a lot of problems with IPs and hostnames.
Why can't I change the port when trying to add a cluster?
How does it check it? And where is the log?
I can only find:
ClustersController::initconfigurator() - APP/Controller/ClustersController.php, line 608
ReflectionMethod::invokeArgs() - [internal], line ??
Controller::invokeAction() - CORE/Cake/Controller/Controller.php, line 485
Dispatcher::_invoke() - CORE/Cake/Routing/Dispatcher.php, line 103
Dispatcher::dispatch() - CORE/Cake/Routing/Dispatcher.php, line 85
[main] - APP/webroot/index.php, line 96
2014-04-10 18:09:16 Debug: s9s_ur=http://www.severalnines.com/galera-configurator3/tmp/dc007755053493628247/s9s-galera-percona-3.2
.0-rpm.tar.gz
2014-04-10 18:09:16 Debug: Unable to login to host server
2014-04-10 18:12:39 Debug: s9s_ur=http://www.severalnines.com/galera-configurator3/tmp/dc007755053493628247/s9s-galera-percona-3.2
.0-rpm.tar.gz
2014-04-10 18:12:40 Debug: Unable to login to host server
LOL, I used port 22 on all machines, then I added a cluster, and what? It broke everything again.
Hi Alex,
What did you mean by 'broke the base' and 'it brokes all again'? There are several ways to get ClusterControl installed, as described in detail in this blog post: http://www.severalnines.com/blog/several-ways-install-clustercontrol . Each DB cluster needs to have a dedicated ClusterControl host.
When you 'add existing galera cluster' using ClusterControl A (CC A), you need to have CC W and Galera X,Y,Z ready (i.e. a separate ClusterControl server). If you are using the bootstrap script, you can install ClusterControl directly on CC W, without the need to use CC A.
This functionality has limited support for custom SSH ports, and we are going to introduce a new way of importing an existing Galera cluster in our next version (1.2.6), which is scheduled to be released next week.
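In the meantime, you can sanity-check that the controller reaches a node over the custom port with the key and passwordless sudo; for example (host, port and key path taken from your output above):
# should print OK without prompting for any password
ssh -q -i /home/sm1ly/.ssh/id_rsa -p 51515 sm1ly@10.60.0.90 "sudo -n true" && echo OK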
Regards,
Ashraf
Hello. I have some problems with installation.
Please tell me: what is the cluster controller, what is cmon, and what is the agent? Where can I read about the structure of your product? Are there any limitations on where the programs must be placed (I mean, which of them must be on a cluster node, and which must never be on a cluster node)?
Here is my case: I have a MySQL Cluster on one machine (just testing it for my own purposes).
First of all I tried to install everything on the same machine with install-cc.sh. It did not work (it just didn't log in; I tried to add values to the database myself, but it did not help).
Now I'm trying to install it manually. I'm running the cluster on one machine with IP 192.168.1.36. I have installed ClusterControl on the same host; moreover, it's working now and there are no error messages in the log file except some warnings like "can't pull log <something>". After that I tried to install the GUI with install-cc.sh, and I managed it (I chose "do not install cc"). I successfully logged in, but there was no info about my cluster. When I choose "add existing cluster" -> "MySQL" and input all parameters, I get this:
170 - Message sent to controller
171 - Verifying job parameters.
172 - Verifying controller host and cmon password.
173 - Verifying the SSH connection to 192.168.1.36.
174 - Verifying the MySQL user/password.
175 - Found 1 nodes.
176 - Checking the nodes that those aren't in other cluster.
Host (192.168.1.36) is already in an other cluster.
Here is my /etc/cmon.cnf:
#cmon config file
# id and name of cluster that this cmon agent is monitoring.
# Must be unique for each monitored cluster, like server-id in mysql
cluster_id=1
name=default_cluster_1
agentless=1
# os = [redhat|debian]
os=debian
# skip_name_resolve = [0|1] - set 1 if you use ip addresses only everywhere
skip_name_resolve=1
# mode = [controller|agent|dual]
mode=controller
# type = [mysqlcluster|replication|galera]
type=mysqlcluster
# location of mysql install, e.g /usr/ or /usr/local/mysql
mysql_basedir=/opt/mysql/server-5.6/
# CMON DB config - mysql_password is for the 'cmon' user
mysql_port=3306
mysql_hostname=192.168.1.36
mysql_password=cmon
#hostname is the hostname of the current host
hostname=test_hostname
# ndb_connectstring - comma-separated list of management servers: a:1186,b:1196
ndb_connectstring=192.168.1.36:1186
# The user that can SSH without password to the other nodes
os_user=root
# location of cmon.pid file. The pidfile is written in /tmp/ by default
pidfile=/var/run/
# logfile is default to syslog.
logfile=/var/log/cmon.log
# collection intervals (in seconds)
db_stats_collection_interval=30
host_stats_collection_interval=30
# mysql servers in the cluster. "," or " " sep. list
mysql_server_addresses=192.168.1.36
# mgm and data nodes are only used for MySQL Cluster. "," or " " sep. list
datanode_addresses=192.168.1.36
mgmnode_addresses=192.168.1.36
wwwroot=/var/www/
# configuration file directory for database servers (location of config.ini, my.cnf etc)
# on RH, usually it is /etc/, on Debian/Ubuntu /etc/mysql. If different please set:
db_configdir=/home/user/Cluster/53/