Forums/Knowledge Base/HAProxy

Install HAProxy and Keepalived (Virtual IP)

Ashraf Sharif
posted this on April 19, 2013 12:11

To avoid a single point of failure with your HAProxy, set up two identical HAProxy instances (one active and one standby) and use Keepalived to run VRRP between them. VRRP assigns a virtual IP address to the active HAProxy, and moves it over to the standby HAProxy on failure. This is seamless because the two HAProxy instances need no shared state.

In this example, we are using 2 nodes to act as the load balancer with IP failover in front of our database cluster. The VIP floats between LB1 (master) and LB2 (backup). When LB1 goes down, the VIP is taken over by LB2, and once LB1 is up again the VIP fails back to LB1, since it holds the higher priority number.

We are using the following hosts/IPs:

VIP: 192.168.10.100
LB1: 192.168.10.101
LB2: 192.168.10.102

DB1: 192.168.10.111
DB2: 192.168.10.112
DB3: 192.168.10.113
ClusterControl: 192.168.10.115

You may refer to the following diagram for the architecture:

haproxy_keepalived.PNG

 

Install HAProxy

1. Log into the ClusterControl node to perform this installation. We have built a script to deploy HAProxy automatically, available in our Git repository at https://github.com/severalnines/s9s-admin. Navigate to the installation directory that you used to deploy the database cluster and clone the repo:

$ cd /root/s9s-galera-2.2.0/mysql/scripts/install
$ git clone https://github.com/severalnines/s9s-admin.git

2. Before we start the deployment, make sure LB1 and LB2 are accessible via passwordless SSH. Copy the SSH key to the load balancer nodes:

$ ssh-copy-id -i ~/.ssh/id_rsa 192.168.10.101
$ ssh-copy-id -i ~/.ssh/id_rsa 192.168.10.102 

3. Install HAProxy on both nodes:

$ ./s9s-admin/cluster/s9s_haproxy --install -i 1 -h 192.168.10.101
$ ./s9s-admin/cluster/s9s_haproxy --install -i 1 -h 192.168.10.102

4. You will notice that these 2 load balancer nodes have been installed and provisioned by ClusterControl. You can verify this by logging into ClusterControl > Nodes, where you should see a screen similar to the one below:

haproxy.png

 

Install Keepalived

The following steps should be performed on both LB1 and LB2.

1. Install Keepalived package:

On RHEL/CentOS:

$ yum install -y epel-release
$ yum install -y keepalived
$ chkconfig keepalived on 

On Debian/Ubuntu:

$ sudo apt-get install -y keepalived
$ sudo update-rc.d keepalived defaults 

2. Tell the kernel to allow binding a non-local IP on the hosts and apply the changes:

$ echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
$ sysctl -p
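A quick sanity check (reading the standard /proc path, equivalent to `sysctl net.ipv4.ip_nonlocal_bind`) confirms the kernel accepted the change:

```shell
# ip_nonlocal_bind lets the standby node's HAProxy bind to the VIP even
# while the address is still held by the active node. After `sysctl -p`
# above, this should read 1:
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```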

 

Configure Keepalived and Virtual IP

1. Log into LB1 and add the following lines to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
        script "killall -0 haproxy"   # verify the pid existence
        interval 2                    # check every 2 seconds
        weight 2                      # add 2 points of prio if OK
}

vrrp_instance VI_1 {
        interface eth0                # interface to monitor
        state MASTER
        virtual_router_id 51          # assign one ID for this router
        priority 101                  # 101 on master, 100 on backup
        virtual_ipaddress {
                192.168.10.100        # the virtual IP
        }
        track_script {
                chk_haproxy
        }
}
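The `killall -0 haproxy` check above relies on signal 0, which reports whether a process exists without actually disturbing it; Keepalived treats a zero exit status as "HAProxy is alive". A minimal demonstration of the mechanism, using the shell's own PID in place of HAProxy:

```shell
# Signal 0 delivers nothing; only the exit status matters.
kill -0 $$ && echo "process exists"                    # the current shell certainly exists
kill -0 4194304 2>/dev/null || echo "no such process"  # beyond Linux's maximum PID
```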

2. Log into LB2 and add the following lines to /etc/keepalived/keepalived.conf (note the lower priority):

vrrp_script chk_haproxy {
        script "killall -0 haproxy"   # verify the pid existence
        interval 2                    # check every 2 seconds
        weight 2                      # add 2 points of prio if OK
}

vrrp_instance VI_1 {
        interface eth0                # interface to monitor
        state MASTER
        virtual_router_id 51          # assign one ID for this router
        priority 100                  # 101 on master, 100 on backup
        virtual_ipaddress {
                192.168.10.100        # the virtual IP
        }
        track_script {
                chk_haproxy
        }
}
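With a positive weight, Keepalived adds the weight to the configured priority whenever the tracked script succeeds. A small arithmetic sketch of why LB1 holds the VIP while its HAProxy is healthy, and cedes it as soon as the check fails:

```shell
# Effective VRRP priority = configured priority, plus weight when
# chk_haproxy exits 0 (HAProxy process found).
LB1_PRIO=101; LB2_PRIO=100; WEIGHT=2

echo "both healthy:     LB1=$((LB1_PRIO + WEIGHT)) LB2=$((LB2_PRIO + WEIGHT))"  # 103 vs 102 -> LB1 is MASTER
echo "LB1 haproxy down: LB1=$LB1_PRIO LB2=$((LB2_PRIO + WEIGHT))"               # 101 vs 102 -> LB2 takes over
```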

3. Start Keepalived on both nodes:

$ sudo /etc/init.d/keepalived start

4. Verify the Keepalived status. LB1 should hold the VIP and the MASTER state, while LB2 should run in BACKUP state without the VIP:

LB1 IP:

$ ip a | grep -e "inet.*eth0"
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.100/32 scope global eth0

LB1 Keepalived state:

$ cat /var/log/messages | grep VRRP_Instance
Apr 19 15:47:25 lb1 Keepalived_vrrp[6146]: VRRP_Instance(VI_1) Transition to MASTER STATE
Apr 19 15:47:25 lb1 Keepalived_vrrp[6146]: VRRP_Instance(VI_1) Entering MASTER STATE

LB2 IP:

$ ip a | grep -e "inet.*eth0"
inet 192.168.10.102/24 brd 192.168.10.255 scope global eth0

LB2 Keepalived state:

$ cat /var/log/messages | grep VRRP_Instance
Apr 19 15:47:25 lb2 Keepalived_vrrp[6146]: VRRP_Instance(VI_1) Transition to MASTER STATE
Apr 19 15:47:25 lb2 Keepalived_vrrp[6146]: VRRP_Instance(VI_1) Received higher prio advert
Apr 19 15:47:25 lb2 Keepalived_vrrp[6146]: VRRP_Instance(VI_1) Entering BACKUP STATE
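The VIP check above can also be scripted. A small sketch (the `holds_vip` helper is our own illustration, not part of the s9s tooling) that decides a node's role from `ip a` output; here it is exercised against sample output copied from LB1 above rather than a live interface:

```shell
VIP=192.168.10.100

# Hypothetical helper: succeed if the ip output on stdin contains the VIP.
holds_vip() {
    grep -q "inet ${VIP}/"
}

# Sample output from LB1 above (LB2's output would lack the second line):
sample='inet 192.168.10.101/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.100/32 scope global eth0'

if printf '%s\n' "$sample" | holds_vip; then
    echo "this node holds the VIP (MASTER)"   # printed for this sample
else
    echo "this node is BACKUP"
fi
```

On a live node, replace the sample with `ip a | holds_vip`.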

**Debian/Ubuntu: on some distributions /var/log/messages does not exist; look for similar log entries in /var/log/syslog.

Installation completed! You can now access your database servers through the VIP, 192.168.10.100, on port 33306.

 

Comments

JOE YU

How can one avoid split-brain in the above Keepalived configuration?

E.g., when communication is broken between the Keepalived master and backup hosts?

July 01, 2013 18:44
Ashraf Sharif
Severalnines

You can refer to following pages for detailed explanation on how to avoid Keepalived split-brain:

http://scale-out-blog.blogspot.com/2011/01/virtual-ip-addresses-and-their.html

http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.failover.html

July 01, 2013 19:36
JOE YU

Thanks for reply.

I have read these two docs already, but still can't find a way to solve the problem without using Pacemaker or another component that makes the architecture complex.

Not sure if using the same state (like BACKUP) and the same priority (like 100) in both keepalived.conf files can avoid Keepalived split-brain.

 

Thanks,

July 01, 2013 19:44
Aiman Farhat

Hi Ashraf,

Thanks for this nice article. I am having one issue and can't figure it out. The virtual IP gets assigned to the master, and on failover the VIP gets assigned to the backup, but I can't ping the IP address (10.134.41.180) from the backup or externally.

 ip addr show:

eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:a8:0d:cf brd ff:ff:ff:ff:ff:ff
inet 10.134.41.103/25 brd 10.134.41.127 scope global eth0
inet 10.134.41.180/32 scope global eth0
inet6 fe80::250:56ff:fea8:dcf/64 scope link
valid_lft forever preferred_lft forever

 

Thanks for your help

Aiman

July 24, 2013 21:51
Ashraf Sharif
Severalnines

Hi Aiman,

 

Try checking the firewall settings on the backup host and the ARP table on the client host. You can use the command "arp -an" to verify the latest virtual IP mapping; it should map to the backup host's MAC address. Depending on your router or switch, you might face an ARP cache problem, where the virtual IP has failed over but the mapping is not yet updated in your client's ARP table.

July 25, 2013 08:07
Aiman Farhat

Hi Ashraf,

Thank you for the speedy reply. It turns out that the VIP I was using was not on the same subnet. I got it working now.

Thanks again.

Aiman

July 25, 2013 12:45
Shaun Botsis

Hi Ashraf

 

I get the following error when trying to provision HAProxy to a vanilla debian7 install.

 

ll# ./s9s-admin/cluster/s9s_haproxy --install -i 1 -h 172.16.200.48
cmon12341
load opts 1
Testing ssh to 172.16.200.48: ssh -q -p22 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -oNumberOfPasswordPrompts=0 -oConnectTimeout=10 -oIdentityFile=/root/.ssh/id_rsa -oNumberOfPasswordPrompts=0 root@172.16.200.48  ls -al /usr
[ok]
Using loadbalancing policy 'leastconn'.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
No hostnames found.

What am I doing wrong ?

Thanks

November 03, 2014 20:38
Ashraf Sharif
Severalnines

Hi Shaun,

Could you update line 76 in /usr/bin/s9s_haproxy to:

chmod 644 $MYCNF_CMON

Then try the HAProxy deployment again and verify whether it works.

Regards,
Ashraf

November 04, 2014 03:09
Shaun Botsis

hi Ashraf

I get exactly the same response after updating

chmod 600 $MYCNF_CMON >> chmod 644 $MYCNF_CMON

November 04, 2014 04:29
Ashraf Sharif
Severalnines

Hi Shaun,

Can you edit /usr/bin/s9s_haproxy starting from line 71:

[mysql_cmon]
user=cmon
password=
EOF

to:

[mysql_cmon]
host=127.0.0.1
port=3306
user=cmon
password=
EOF

 

Then save and try the deployment again. It seems CMON didn't use the correct credentials when connecting to the CMON DB.

 

Regards,

Ashraf

November 04, 2014 05:02
Shaun Botsis

Thanks for the help Ashraf :)  I got it working. 

 

There was one more issue, relating to the package manager not having the HAProxy package available.

On my Debian 7 install I had to add "deb http://ftp.debian.org/debian/ wheezy-backports main" to /etc/apt/sources.list.

November 08, 2014 16:47
Ashraf Sharif
Severalnines

Hi Shaun,

Yes, we are aware of that. Certain distribution repositories do not include HAProxy, so installing with the "use source" option in the ClusterControl UI would solve this.

Regards,

Ashraf

November 10, 2014 03:49