To avoid a single point of failure with HAProxy, set up two identical HAProxy instances (one active and one standby) and use Keepalived to run VRRP between them. VRRP assigns a virtual IP address (VIP) to the active HAProxy and transfers it to the standby HAProxy in case of failure. Failover is seamless because the two HAProxy instances share no state.
In this example, we are using two nodes to act as the load balancer with IP failover in front of our database cluster. The VIP floats between LB1 (master) and LB2 (backup). When LB1 goes down, the VIP is taken over by LB2, and once LB1 comes up again, the VIP fails back to LB1 since it holds the higher priority number.
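As a sketch, the Keepalived configuration on LB1 could look like the following (the interface name, virtual_router_id, and priority values are assumptions for illustration; ClusterControl generates the actual file):

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while an haproxy process exists
    interval 2
}

vrrp_instance VI_1 {
    interface eth0                # NIC carrying the VIP (assumed name)
    state MASTER                  # BACKUP on LB2
    virtual_router_id 51
    priority 101                  # LB2 uses a lower value, e.g. 100
    virtual_ipaddress {
        192.168.10.100
    }
    track_script {
        chk_haproxy
    }
}
```

On LB2, set state to BACKUP with a lower priority; because LB1 advertises the higher priority, it reclaims the VIP as soon as it recovers.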
We are using the following hosts/IPs:

- LB1 (master): 192.168.10.101
- LB2 (backup): 192.168.10.102
- Virtual IP: 192.168.10.100
You may refer to the following diagram for the architecture:
1. Before we start the deployment, make sure LB1 and LB2 are accessible over passwordless SSH. Copy the SSH key to the load balancer nodes:
$ ssh-copy-id -i ~/.ssh/id_rsa 192.168.10.101
$ ssh-copy-id -i ~/.ssh/id_rsa 192.168.10.102
2. Install HAProxy on both nodes. In the ClusterControl UI, go to Manage -> Load Balancer.
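For reference, a minimal HAProxy listener for the cluster could look like the fragment below. The backend server names/IPs and the 9200 health-check port are assumptions for illustration; the haproxy.cfg that ClusterControl generates will differ in its details:

```
listen mysql_33306
    bind *:33306
    mode tcp
    balance leastconn
    option httpchk                       # the mysqlchk script answers over HTTP
    # Backend database nodes (hypothetical IPs for this sketch)
    server db1 192.168.10.111:3306 check port 9200
    server db2 192.168.10.112:3306 check port 9200
```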
Click "Install HAProxy" when you are happy with the settings. The HAProxy configuration template is stored on the controller at /usr/share/cmon/templates/haproxy.cfg, and the same directory also contains the template for the mysqlchk script.
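The mysqlchk script itself is conceptually simple: it answers HAProxy's httpchk probes with an HTTP 200 when MySQL responds and a 503 when it does not. Below is a minimal sketch of that idea, not the ClusterControl template; it assumes mysqladmin is on the PATH and that credentials are handled elsewhere (e.g. via ~/.my.cnf):

```shell
#!/bin/sh
# Minimal mysqlchk-style health check (illustrative sketch, not the
# ClusterControl template): emit an HTTP response for HAProxy's httpchk.

mysql_healthcheck() {
    if mysqladmin ping >/dev/null 2>&1; then
        printf "HTTP/1.1 200 OK\r\n\r\nMySQL is running.\r\n"
    else
        printf "HTTP/1.1 503 Service Unavailable\r\n\r\nMySQL is down.\r\n"
    fi
}

mysql_healthcheck
```

In practice such a script is bound to a TCP port (commonly via xinetd) so that HAProxy can probe it with `check port ...` as in the listener configuration.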
3. You will notice that these two load balancer nodes have been installed and provisioned by ClusterControl. You can verify this by logging into ClusterControl > Nodes, where you should see a screen similar to the screenshot below:
Note: this step requires that you have two load balancers installed.
1. Navigate to Manage -> Load Balancer and select the Keepalived tab.
Installation completed! You can now access your database servers through the VIP, 192.168.10.100, on port 33306.