Add an Existing PostgreSQL Cluster (Master & Slave)
Pre-Conditions
- You have a plain and simple PostgreSQL master & slave streaming replication setup (v9.5/9.6.x or v10.x).
- You want to manually add this cluster to ClusterControl.
- You have already installed ClusterControl on a dedicated server/host.
In the steps below we are going to assume the following:
- ClusterControl has been installed on host 10.10.10.10
- The master is running on host 10.10.10.11
- The slave is running on host 10.10.10.12
Create ClusterControl DB Users
- ClusterControl needs a "superuser" DB user to connect to your servers. This DB user is primarily used to collect statistics, and to monitor and manage your nodes and the replication channel.
# On your master server execute the following statement
CREATE ROLE <MY_CCADMIN_USER> WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN ENCRYPTED PASSWORD '<MY_CCADMIN_PASSWORD>';
- ClusterControl creates a 'replication' DB user when deploying a new PostgreSQL cluster from scratch via our application. You can skip this step if you already have an existing 'replication' user for your setup.
# On your master server execute the following statement
CREATE ROLE <MY_REPLICATION_USER> WITH REPLICATION LOGIN ENCRYPTED PASSWORD '<MY_REPLICATION_USER_PASSWORD>';
- Check the results with \du
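For illustration, a minimal psql session with the hypothetical user names 'ccadmin' and 'cc_repl' (substitute your own names and strong passwords) could look like this:
$ sudo -u postgres psql
postgres=# CREATE ROLE ccadmin WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN ENCRYPTED PASSWORD 'S3cr3tAdm1n';
postgres=# CREATE ROLE cc_repl WITH REPLICATION LOGIN ENCRYPTED PASSWORD 'S3cr3tRepl';
postgres=# \du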
Allow ClusterControl DB Users to Connect to the Cluster
Client authentication is controlled by the 'pg_hba.conf' file, usually located in /etc/postgresql/<version>/main/ on Debian-based systems (on RHEL-based systems it typically lives in the data directory).
The MY_CCADMIN_USER needs to connect from the ClusterControl host (10.10.10.10), and the MY_REPLICATION_USER needs to connect from the master and slave hosts.
Add records for MY_CCADMIN_USER and MY_REPLICATION_USER on both the master (10.10.10.11) and the slave (10.10.10.12).
If you already have records for your replication user then you can skip this step.
- Allow MY_REPLICATION_USER replication user to connect
...
host replication MY_REPLICATION_USER 10.10.10.11/32 md5
host replication MY_REPLICATION_USER 10.10.10.12/32 md5
...
- Allow MY_CCADMIN_USER ClusterControl user to connect from the ClusterControl host
...
host all MY_CCADMIN_USER 10.10.10.10/32 md5
...
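As an illustration, with the hypothetical user names 'ccadmin' and 'cc_repl' from before, the combined pg_hba.conf entries would look like this:
...
host replication cc_repl 10.10.10.11/32 md5
host replication cc_repl 10.10.10.12/32 md5
host all ccadmin 10.10.10.10/32 md5
...
After reloading the configuration (see the reload step below) you can verify the ClusterControl user is accepted by connecting from the ClusterControl host, for example with: psql -h 10.10.10.11 -U ccadmin -d postgres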
Enable Query Monitoring Capture for your Cluster
In order to track execution statistics of the SQL statements running on your server, the pg_stat_statements module needs to be loaded and its track setting enabled. Edit the 'postgresql.conf' file usually located in /etc/postgresql/<version>/main/ and add the following (note that a change to shared_preload_libraries only takes effect after a full server restart, not a reload):
...
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
...
Also check your listen_addresses and port settings. We usually set them to:
...
listen_addresses = '*'
port = 5432
...
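As an optional sanity check (not strictly required by the steps in this guide), once the server has been restarted with the module loaded you can create the extension in a database and confirm that statements are being tracked:
postgres=# CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
postgres=# SELECT query, calls FROM pg_stat_statements LIMIT 5;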
Reload the Configuration Changes on the PostgreSQL Servers
On both the master and slave servers do:
$ pg_ctl reload
or open psql and run
postgres=# SELECT pg_reload_conf();
If you changed shared_preload_libraries in the previous step, a reload is not enough; restart the server instead (for example with service postgresql restart).
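A quick way to confirm the new settings are active is to check them from psql:
postgres=# SHOW pg_stat_statements.track;
postgres=# SHOW listen_addresses;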
Create ClusterControl's Cluster Configuration File
Now that your PostgreSQL cluster is all set, we move on to ClusterControl. We need to manually create a new cluster configuration file for your cluster.
Recall the setup from our example:
- ClusterControl host is on 10.10.10.10
- Master host is on 10.10.10.11
- Slave host is on 10.10.10.12
Cluster configuration files are created and stored in /etc/cmon.d/ on the ClusterControl host. The naming convention is to use cmon_N.cnf where N is the cluster ID for your cluster. Since we have no existing clusters at the moment we'll use 1 as the cluster ID.
An example of a real cmon configuration file is shown below:
root@n1:/etc/cmon.d# cat cmon_1.cnf
cdt_path=/
cluster_id=1
cluster_type=postgresql_single
cmon_user=cmon
group_owner=1
hostname=10.10.10.10
logfile=/var/log/cmon_1.log
mode=controller
monitored_mountpoints=/var/lib/postgresql/9.6/main
mysql_hostname=127.0.0.1
mysql_password='cmon'
mysql_port=3306
name='cluster_1'
os=debian
osuser=root
owner=1
pidfile=/var/run
postgresql_user=admin
postgresql_password=admin123
postgresql_server_addresses=10.10.10.11:5432,10.10.10.12:5432
repl_password=f7QKwOERXA
repl_user=cmon_replication
rpc_key=17FdXKi5idO9FIp9
server_version=9.6
ssh_identity=/root/.ssh/id_rsa
ssh_port=22
vendor=postgres
You are going to use the exact same configuration as above, changing only a few parameters. Adjust the parameters listed below for your environment:
...
cluster_id=<set the cluster ID, usually 1 if this is the first cluster>
hostname=<the IP of the ClusterControl host>
monitored_mountpoints=<your data directory, usually /var/lib/postgresql/9.6/main>
name=<your cluster name, can be anything>
os=<your OS, use 'debian' or 'redhat'>
postgresql_user=<MY_CCADMIN_USER>
postgresql_password=<MY_CCADMIN_USER_PASSWORD>
postgresql_server_addresses=<master IP:port>,<slave IP:port>
repl_user=<MY_REPLICATION_USER>
repl_password=<MY_REPLICATION_USER_PASSWORD>
# this rpc_key is important for a later step when we are registering the cluster to show up in the UI/frontend
rpc_key=<UNIQUE/RANDOM API ACCESS TOKEN, use random alphanumeric characters (up to 40 chars), for example 17FdXKi5idO9FIp9>
server_version=<your postgres server version 9.5 | 9.6 | 10.0>
ssh_identity=<root user or sudo user's ssh key, for example /root/.ssh/id_rsa>
ssh_port=22
...
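If you need a convenient way to generate a random token for the rpc_key, one option (just an example, any random alphanumeric string of up to 40 characters works) is:
$ openssl rand -hex 20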
Create the /etc/cmon.d/cmon_1.cnf file and restart the 'cmon' process. Check /var/log/cmon_1.log for any issues/errors.
$ service cmon restart
$ tail -100f /var/log/cmon_1.log
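To sanity-check that the controller started cleanly, you can also grep the log for problems (assuming the log path from the configuration above):
$ grep -iE 'error|fail' /var/log/cmon_1.log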