- You have a plain and simple PostgreSQL master and slave streaming replication setup (v9.5, v9.6.x, or v10.x).
- You want to manually add this cluster to ClusterControl
- You have already installed ClusterControl on a dedicated server/host
In the steps below we are going to assume the following:
- ClusterControl has been installed on host 10.10.10.10
- The master is running on host 10.10.10.11
- The slave is running on host 10.10.10.12
Create ClusterControl DB Users
- ClusterControl needs a "superuser" DB user to connect to your servers. This DB user is primarily used to collect statistics and to monitor and manage your nodes and the replication channel.
# On your master server execute the following statement
CREATE ROLE <MY_CCADMIN_USER> WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN ENCRYPTED PASSWORD '<MY_CCADMIN_PASSWORD>';
- ClusterControl creates a 'replication' DB user when deploying a new PostgreSQL cluster from scratch via our application. You can skip this step if you already have an existing 'replication' user for your setup.
# On your master server execute the following statement
CREATE ROLE <MY_REPLICATION_USER> WITH REPLICATION LOGIN ENCRYPTED PASSWORD '<MY_REPLICATION_USER_PASSWORD>';
- Check the results with \du
Allow ClusterControl DB Users to Connect to the Cluster
Client authentication is controlled by the 'pg_hba.conf' file, usually located in /etc/postgresql/<version>/main/ on Debian/Ubuntu (on RHEL-based systems it typically resides in the data directory, e.g. /var/lib/pgsql/<version>/data/).
The MY_CCADMIN_USER needs to connect from the ClusterControl host, 10.10.10.10, and the MY_REPLICATION_USER needs to connect from the master and slave hosts.
Add records for the MY_CCADMIN_USER and MY_REPLICATION_USER on both the master and slave (master is on 10.10.10.11 and the slave on 10.10.10.12).
If you already have records for your replication user then you can skip this step.
- Allow MY_REPLICATION_USER replication user to connect
host replication MY_REPLICATION_USER 10.10.10.11/32 md5
host replication MY_REPLICATION_USER 10.10.10.12/32 md5
- Allow MY_CCADMIN_USER ClusterControl user to connect from the ClusterControl host
host all MY_CCADMIN_USER 10.10.10.10/32 md5
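If you manage several nodes, the additions above can be scripted. The sketch below appends the entries idempotently; it writes to a local file for illustration, and the PG_HBA path is an assumption — on a real server you would point it at the actual pg_hba.conf (as root) on each host.

```shell
#!/bin/sh
# Sketch: add the pg_hba.conf entries from this walkthrough idempotently.
# PG_HBA points at a local copy for illustration; on a real server it
# would be e.g. /etc/postgresql/<version>/main/pg_hba.conf (edit as root).
PG_HBA=./pg_hba.conf
touch "$PG_HBA"

add_entry() {
    # Append the line only if it is not already present verbatim.
    grep -qxF "$1" "$PG_HBA" || printf '%s\n' "$1" >> "$PG_HBA"
}

# Replication user, allowed from the master and the slave.
add_entry "host replication MY_REPLICATION_USER 10.10.10.11/32 md5"
add_entry "host replication MY_REPLICATION_USER 10.10.10.12/32 md5"
# ClusterControl admin user, allowed from the ClusterControl host.
add_entry "host all MY_CCADMIN_USER 10.10.10.10/32 md5"
```

Because `add_entry` checks for an existing identical line first, the script is safe to re-run after adding more hosts.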
Enable Query Monitoring Capture for your Cluster
In order to track execution stats of the SQL statements running on your servers we need to load the pg_stat_statements module and enable its tracking. Edit the 'postgresql.conf' file usually located in /etc/postgresql/<version>/main/ and add the following (note that a change to shared_preload_libraries requires a full server restart, a reload is not enough):
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
Also check your listen_addresses and port. We usually set them to:
listen_addresses = '*'
port = 5432
Reload the Configuration Changes on the PostgreSQL Servers
On both the master and slave server do:
$ pg_ctl reload -D <your data directory>
or open psql and run
postgres=# SELECT pg_reload_conf();
Create ClusterControl's Cluster Configuration File
Now that your PostgreSQL cluster is all set we move on to ClusterControl. We need to manually create a new cluster configuration file for your cluster.
As before, in our example we have the following setup:
- ClusterControl host is on 10.10.10.10
- Master host is on 10.10.10.11
- Slave host is on 10.10.10.12
Cluster configuration files are created and stored in /etc/cmon.d/ on the ClusterControl host. The naming convention is to use cmon_N.cnf where N is the cluster ID for your cluster. Since we have no existing clusters at the moment we'll use 1 as the cluster ID.
Example of a real cmon configuration file below:
$ cat /etc/cmon.d/cmon_1.cnf
You are going to use the exact same configuration as above with changes only to a few parameters. Change the listed parameters below for your environment:
cluster_id=<set the cluster ID, usually 1 if this is the first cluster>
hostname=<the IP of the ClusterControl host>
monitored_mountpoints=<your data directory, usually /var/lib/postgresql/9.6/main>
name=<your cluster name, can be anything>
os=<your OS, use 'debian' or 'redhat'>
postgresql_server_addresses=<master IP:port>,<slave IP:port>
# this rpc_key is important for a later step when we are registering the cluster to show up in the UI/frontend
rpc_key=<UNIQUE/RANDOM API ACCESS TOKEN, use random alphanumeric characters (up to 40 chars), for example 17FdXKi5idO9FIp9>
server_version=<your postgres server version 9.5 | 9.6 | 10.0>
ssh_identity=<root user or sudo user's ssh key, for example /root/.ssh/id_rsa>
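Putting the pieces together, here is a sketch that generates a random rpc_key and writes a cmon_1.cnf using the example values from this walkthrough. The cluster name 'pg_cluster_1' and the local output path are assumptions for illustration; the real file belongs in /etc/cmon.d/cmon_1.cnf on the ClusterControl host.

```shell
#!/bin/sh
# Sketch: generate an rpc_key and write a cmon_1.cnf with the example
# values from this walkthrough. The output path and cluster name are
# placeholders -- write the real file to /etc/cmon.d/cmon_1.cnf as root.

# 40 random hex characters (20 bytes from /dev/urandom, hex-encoded).
RPC_KEY=$(od -An -N20 -tx1 /dev/urandom | tr -d ' \n')

cat > ./cmon_1.cnf <<EOF
cluster_id=1
hostname=10.10.10.10
monitored_mountpoints=/var/lib/postgresql/9.6/main
name=pg_cluster_1
os=debian
postgresql_server_addresses=10.10.10.11:5432,10.10.10.12:5432
rpc_key=${RPC_KEY}
server_version=9.6
ssh_identity=/root/.ssh/id_rsa
EOF
```

Keep a note of the generated rpc_key value — you will need to enter it in the web application when registering the cluster.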
Create the /etc/cmon.d/cmon_1.cnf file and restart the 'cmon' process. Check /var/log/cmon_1.log for any issues/errors.
$ service cmon restart
$ tail -100f /var/log/cmon_1.log
Register the PostgreSQL Cluster in the Web Application
Now it's time to get your cluster to show up in our web application. Open your browser to http://<clustercontrol host>/clustercontrol and navigate to 'Settings->Cluster Registrations'.
In the Cluster Registrations page, click on the 'Synchronize' link.
The last step is to enter the correct 'rpc_key' token that you set in the /etc/cmon.d/cmon_1.cnf file. Ignore the error toast notification that appears and click on the 'here' link next to the 'Change the RPC API Token ...' message shown in the cluster list.
Enter the 'rpc_key' token from the /etc/cmon.d/cmon_1.cnf file in the 'Change RPC API Token' dialog.
If everything went well then your PostgreSQL cluster should show up in the cluster list.