This article describes the requirements for servers that are part of a ClusterControl setup.
The following topics are covered below:
- Resolving Hostnames and IP addresses
- Sudo / root - Operating System user
- Passwordless SSH RSA/DSA
- SELinux
- AppArmor
- Time Zone
- Firewall
Resolving Hostnames and IP addresses
We recommend that you use IP addresses. If you use IP addresses you can skip this section.
Each server that is part of the infrastructure must have:
- a real hostname (unless you have specified SKIP_NAME_RESOLVE in the configurator; note that SKIP_NAME_RESOLVE is recommended)
If you rely on hostnames, you need a correct /etc/hosts file. The file should be identical on all servers in your cluster.
Below is a GOOD example:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.0.1.10 clustercontrol
10.0.1.11 server_a
10.0.1.12 server_b
The /etc/hosts file must be copied to (and be identical on) all machines in the cluster. MySQL Cluster, MySQL Replication, Galera, and ClusterControl will not work as expected otherwise.
Below is a VERY BAD example (having hostname on the same line as 127.0.0.1 or 127.0.1.1 is bad practice):
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 clustercontrol localhost.localdomain localhost
10.0.1.10 clustercontrol
10.0.1.11 server_a
10.0.1.12 server_b
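The difference between the two layouts can be checked mechanically. Below is a minimal sketch; the helper name check_hosts is hypothetical, and the hostnames are the examples from above:

```shell
# Hypothetical helper: return non-zero if HOSTNAME shares a line with a
# loopback address (127.0.0.1 or 127.0.1.1) in an /etc/hosts-style FILE.
# Usage: check_hosts FILE HOSTNAME
check_hosts() {
    if grep -E "^127\.0\.[01]\.1[[:space:]](.*[[:space:]])?$2([[:space:]]|\$)" "$1" >/dev/null; then
        echo "BAD: $2 is mapped to a loopback address in $1"
        return 1
    fi
    echo "OK: $2 is not on a loopback line in $1"
}
```

For example, check_hosts /etc/hosts clustercontrol would flag the VERY BAD file above and accept the GOOD one.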
DNS
If you have DNS and it resolves hostname X to X.example.com, then you should enter the FQDN when entering the hostnames in the configurator.
Otherwise privileges may be GRANTed to host X while MySQL expects X.example.com (i.e., the FQDN).
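To see which name the MySQL server will associate with a connecting client, you can check both forward and reverse resolution. A quick sketch using getent; the hostname and address are the examples from the /etc/hosts section above and will not resolve on your system unless configured:

```shell
# Forward lookup: what the short name resolves to (example hostname)
getent hosts server_a || echo "server_a does not resolve here"
# Reverse lookup: the name MySQL will see for a client connecting from this address
getent hosts 10.0.1.11 || echo "no reverse mapping for 10.0.1.11"
# If the reverse lookup returns server_a.example.com, GRANT to the FQDN.
```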
Tested Operating Systems
We have tested, and have user reports confirming, that deployment of Galera, Replication, and MySQL Cluster, as well as ClusterControl itself, works on:
- CentOS 5.8 / 6.0 / 6.2
- Red Hat 5.8 and later
- Ubuntu 10.04 / 11.04 / 12.04
- Debian 6.0
- CentOS 5.4 and earlier
- Fedora Core 16
Sudo / root - Operating System user
If you install as user 'root' you can skip this section. We recommend installing and running as 'root'; it is the easiest option.
If you install as a user other than 'root', the following must be true:
- The Operating System user (OS User) must exist on all hosts
- The OS User must not be 'mysql'
- The 'sudo' program must be installed on all machines
- The OS User must be allowed to run 'sudo', i.e., it must be in sudoers (see password-less 'sudo' below)
Password-less 'sudo' Howto
On all machines (unless you prepare your image) you can do (this is not required for the Galera, MySQL Cluster, and MongoDB Configurators):
> sudo visudo
#add the following line at the end. Replace OSUSER with the OS User you have entered in the Configurator:
OSUSER ALL=(ALL) NOPASSWD: ALL
## OR (for the Galera, MySQL Cluster, and MongoDB configurators you no longer
## need password-less sudo and can instead have the following line):
# OSUSER ALL=(ALL) ALL
# save and exit the file.
Open a new terminal to verify it works. You should now be able to do (e.g):
sudo ls /usr
without entering a password.
Also test:
ssh -qt osuser@ipaddress "sudo ls /usr"
where osuser is the name of the user you intend to use during the installation, and ipaddress is the ip address of a computer in your cluster.
If you have to enter a password, then you have to login to that machine and run 'sudo visudo' and do as stated above.
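The per-host test above can be repeated over the whole cluster in one pass. A minimal sketch; the helper name check_cluster_sudo and the addresses in the example call are hypothetical:

```shell
# Hypothetical helper: verify password-less sudo on every node in one pass.
# "sudo -n" fails immediately instead of prompting if a password is required.
# Usage: check_cluster_sudo OSUSER HOST...
check_cluster_sudo() {
    user=$1; shift
    for host in "$@"; do
        if ssh -qt "$user@$host" "sudo -n true" 2>/dev/null; then
            echo "$host: OK"
        else
            echo "$host: sudo still asks for a password"
        fi
    done
}
# Example (hypothetical addresses):
# check_cluster_sudo osuser 10.0.1.10 10.0.1.11 10.0.1.12
```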
SSH - passwordless
The Operating System User (OS User) specified in the Configurator must be able to ssh from the ClusterControl server to all other servers without password.
The deployment script, deploy.sh, will allow you to set up password-less SSH, so normally you don't have to do anything here.
If you want to prepare images (AMIs, VMware, etc.) you should proceed as in the example below. The example shows how to set up password-less SSH from the server clustercontrol to your images.
#Make sure you are the OS User and do:
clustercontrol> ssh-keygen -t rsa
#press enter when it asks passwords etc
#FOREACH server including clustercontrol DO
clustercontrol> ssh-copy-id <OS User>@<server>
#Make sure the files $HOME/.ssh/authorized_keys and $HOME/.ssh/authorized_keys2 exist
# on the images you want to SSH to without a password, with $HOME/.ssh having
# permission 700 and the authorized_keys files permission 600.
# The id_rsa.pub file only needs to be on the server you want to SSH passwordless from.
You should now be able to ssh from clustercontrol to the other server(s).
ssh <username>@<serverip>
If you can't, check permissions of the .ssh dir and the files in it.
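If the login still prompts for a password, tightening the permissions usually fixes it; sshd refuses keys when ~/.ssh or authorized_keys is group- or world-writable. The conventional safe values:

```shell
# Create the directory and file if missing, then set the permissions sshd expects
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```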
Some users have also set the following in their /etc/ssh/sshd_config file:
RSAAuthentication yes
Don't forget to restart your sshd daemon ( /etc/init.d/sshd restart ) if you make changes in the sshd_config file.
DSA
If you use DSA (cmon defaults to RSA), then you need to follow these instructions.
Encrypted Homedirs
If the sudo user's home directory is encrypted, then you need to follow these instructions.
SELinux
The deployment script (deploy.sh) allows you to disable SELinux, so normally you don't have to do anything here.
MySQL requires that SELinux accepts MySQL as a service and that it is allowed to open ports.
Often SELinux is set in enforcing mode, which typically means that the MySQL Server won't start or that no clients are able to connect to it.
For MySQL Cluster it often means that the data nodes are stuck in start phase 0 or 1.
Setting SELinux in permissive mode
Edit /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Make sure you have
SELINUX=permissive
or
SELINUX=disabled
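You can confirm which mode the next boot will use by reading the last uncommented SELINUX= line from the config file. A minimal sketch; the helper name selinux_mode is hypothetical:

```shell
# Hypothetical helper: print the effective SELINUX= value from a config file,
# skipping the commented-out example lines that precede it
selinux_mode() {
    sed -n 's/^SELINUX=//p' "$1" | tail -n 1
}
# Example: selinux_mode /etc/selinux/config
```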
The server requires a reboot for this option to take effect.
You can also temporarily disable enforcement (until the next reboot):
setenforce 0
which is equivalent to: echo '0' > /selinux/enforce
If you want to use SELinux then you need to set it up so it works with MySQL (port 3306). It is not covered here.
AppArmor
Todo: Suggestions and recommendations are welcome
Security
The deployment packages specify which ports must be open in the firewall. Disallow access to those ports from the outside world.
SSH
- It must be possible to SSH from the ClusterControl server to the other nodes in the cluster without a password, thus the other nodes must accept connections on the SSH port specified in the Configurator.
- Lock down SSH access so that it is not possible to SSH into the nodes from any server other than the ClusterControl server.
- Lock down the ClusterControl server so that it is not possible to SSH into it directly from the outside world.
In practice this means that only very few people in the organization have access to the servers. The fewer the better.
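As an illustration, the SSH lockdown described above could be expressed with iptables rules such as the following. Treat this as a sketch only: 10.0.1.10 stands in for the ClusterControl server's address, and your distribution's firewall tooling may differ.

```shell
# Accept SSH only from the ClusterControl server, drop it from everywhere else
iptables -A INPUT -p tcp --dport 22 -s 10.0.1.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```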
Time Zone
ClusterControl requires all servers' clocks (controller and agents) to be synchronized and to run in the same time zone. Verify this by using the following command:
$ date
Mon Sep 17 22:59:24 UTC 2013
E.g., change the time zone from UTC to Pacific time:
rm /etc/localtime
ln -sf /usr/share/zoneinfo/US/Pacific /etc/localtime
UTC is however recommended; see http://www.cyberciti.biz/faq/howto-linux-unix-change-setup-timezone-tz-variable/
Configure an NTP client on each host with a working time server to avoid time drift between hosts, which could cause inaccurate reporting and missing graphs.
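To see why running every host in the same zone matters, note that an epoch timestamp is zone-independent while its rendering is not. A quick demonstration with GNU date (the timestamp corresponds to the 22:59:24 UTC sample above):

```shell
# The same instant rendered in two zones; only the offset differs
TZ=UTC date -d @1379458764          # ...22:59:24 UTC...
TZ=US/Pacific date -d @1379458764   # ...15:59:24 PDT...
```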
Comments
This is very helpful and generic enough to follow in other distros not listed here. It would be nice to see a bit more info on SLES 11.x. The MySQL Cluster 7.2.x binaries (rpms) work very well in this environment, but I haven't found much information regarding the SLES environment in your site, which is a pity since it has many nice options to simplify and admin's life. Keep up the good work, and please add SLES 11.x to your list of tested distros and config features.