In the following sections I will walk you through using Chef to deploy a MySQL Galera cluster on EC2, and then monitor and manage it using Severalnines' ClusterControl.
In the end we should have 5 nodes deployed:
* 1 node with Chef Server, also serving as a Chef workstation, using Chef v0.10.8
* 3 nodes running MySQL Galera
* 1 node running ClusterControl v1.1.28
The distribution used for the EC2 nodes in this post is Ubuntu Server 11.10 and the Chef version used is 0.10.8.
While these recipes have been tested primarily on Debian/Ubuntu distributions, they have been written with RHEL/CentOS support in mind as well, although that platform is less tested.
We'll first set up a node with Chef Server and Severalnines' cookbook repository, then prepare an EC2 image with Chef Client installed and configured to connect to our Chef Server. If you already have a Chef environment, you can of course skip to the 'Install MySQL Galera' section.
Finally, we'll create a couple of data bags and set roles for our nodes, which specify the recipes to run on each node.
Download Severalnines cookbook repository and install Chef Server
First we'll launch a new Ubuntu Server 11.10 EC2 instance, install git and then clone Severalnines' GitHub cookbook repository.
Log on to your new EC2 instance and get the cookbook repository.
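On a fresh instance, that could look roughly like the following; the repository URL is a placeholder, so use the actual one from Severalnines' GitHub page. Cloning into a directory named 'cookbooks' matches the paths used later in this post.

$ sudo apt-get update
$ sudo apt-get install -y git
$ git clone <severalnines-cookbook-repo-url> cookbooks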
Since we're also installing the Chef Client tools on this host, you will be prompted for a Chef Server URL; just enter 'none'. 'knife configure -i' will also prompt you for input.
... output from 'knife configure -i' ...
WARNING: No knife configuration file found
Where should I put the config file? [/home/ubuntu/.chef/knife.rb]
Please enter the chef server URL: [http://ip-10-245-66-151.ec2.internal:4000] http://localhost:4000
Please enter a clientname for the new client: [ubuntu] chef_ws
Please enter the existing admin clientname: [chef-webui]
Please enter the location of the existing admin client's private key: [/etc/chef/webui.pem] .chef/webui.pem
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef/validation.pem] .chef/validation.pem
Please enter the path to a chef repository (or leave blank): /home/ubuntu
Creating initial API user...
Created client[chef_ws]
Configuration file written to /home/ubuntu/.chef/knife.rb
Check that 'knife' is set up correctly.
$ knife client list
chef-validator
chef-webui
chef_ws
When prompted, use the Chef Server's private IP address for the Chef URL, e.g., http://<chef-server-private-ip>:4000
You also need to copy over the Chef Server's /etc/chef/validation.pem file to the Chef Client node's /etc/chef/ directory.
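One way to do this, with placeholder host names, is to copy the key over from the Chef Server:

$ scp /etc/chef/validation.pem ubuntu@<chef-client-node>:/tmp/
$ ssh ubuntu@<chef-client-node> 'sudo mv /tmp/validation.pem /etc/chef/'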
Then restart the Chef client.
$ sudo /etc/init.d/chef-client restart
**NOTE 1:** When you launch several instances from the same Chef node AMI, you need to delete /etc/chef/client.pem if it has already been registered for another node. Each new node requires its own client.pem file. Just delete it and restart the Chef client to register the node and create a new client.pem file.
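That is:

$ sudo rm /etc/chef/client.pem
$ sudo /etc/init.d/chef-client restart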
Verify that the nodes have checked in with the Chef Server:
$ knife node list
You should see a list of your nodes.
Install MySQL Galera
We should now have one node with Chef Server running and three bare-bones nodes with Chef Client running. The next step is to create a data bag and upload the cookbook.
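Uploading could look like this, assuming the cookbook names cmon and galera used elsewhere in this post and that knife's cookbook_path points at the cloned repository:

$ knife cookbook upload cmon galera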
By default the MySQL root password used will be 'password', which you can change by editing the cookbook's attributes directly or by setting overrides in a role, which we'll create later. You might also want to set a stricter grant for the "wsrep user", which by default is set to "GRANT ALL ON *.* TO 'wsrep_sst'@'%' IDENTIFIED BY 'wsrep'". You can do this by changing the server.rb recipe in the galera cookbook.
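For example, a stricter grant could limit the SST user to the cluster's subnet; the subnet and password below are placeholder values:

GRANT ALL ON *.* TO 'wsrep_sst'@'10.0.0.%' IDENTIFIED BY '<new-password>';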
There are only a few keys that you should need to change from the sample:
* **controller_host_ipaddress** - The IP address of the ClusterControl controller.
* **type** - The cluster type, set to **galera** since we're handling a Galera cluster. Other cluster deployments that ClusterControl supports are 'replication' and 'mysqlcluster'.
* **cc_pub_key** - The controller recipe will generate an ssh key, and this is the placeholder to paste its public key into. The agent nodes will store the public key in their authorized_keys file so that the controller can access the nodes.
* **agent_hosts** - A list of IP addresses where the ClusterControl agents are to be deployed.
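Put together, a filled-in config.json could look roughly like the sketch below; the IP addresses are placeholders, cc_pub_key is left empty until the controller has generated its key (see further down), and the actual sample file in the cookbook may contain additional keys:

{
  "id": "config",
  "controller_host_ipaddress": "10.0.0.10",
  "type": "galera",
  "cc_pub_key": "",
  "agent_hosts": ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
}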
Let's upload the data bag:
$ knife data bag from file s9s_controller cookbooks/cmon/data_bags/s9s_controller/config.json
Updated data_bag_item[s9s_controller::config]
And let's create a few ClusterControl roles that we can use for the nodes.
Here we need to set three overrides, because the galera cookbook installs the MySQL server under /usr/local. Also, if the root password has been changed from the default, this is a good place to make sure the recipe uses the new password.
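As an illustration only, a controller role with overrides could look like the sketch below; the recipe and attribute names here are assumptions, so check the cmon cookbook's attributes files for the real ones:

$ cat cookbooks/roles/cc_controller.rb
name "cc_controller"
description "ClusterControl Controller"
run_list ["recipe[cmon::controller]"]
# Attribute names below are hypothetical examples, not taken from the cookbook
override_attributes "cmon" => {
  "mysql_basedir" => "/usr/local/mysql",
  "mysql_root_password" => "password"
}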
The web application role:
$ cat cookbooks/roles/cc_webapp.rb
name "cc_webapp"
description "ClusterControl Web Application"
run_list ["recipe[cmon::webserver]", "recipe[cmon::webapp]"]
Upload the roles to chef:
$ knife role from file cookbooks/roles/cc_controller.rb
Updated Role cc_controller!
$ knife role from file cookbooks/roles/cc_agent.rb
Updated Role cc_agent!
$ knife role from file cookbooks/roles/cc_webapp.rb
Updated Role cc_webapp!
For ClusterControl's controller node we'll apply two roles: one is cc_controller, and the other is cc_webapp since we would like to have the web application installed as well.
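Assigning the roles can be done with knife; the node name below is a placeholder:

$ knife node run_list add <controller-node-name> 'role[cc_controller]'
$ knife node run_list add <controller-node-name> 'role[cc_webapp]'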
Before adding the agent roles we'll wait until the controller node is up and running. This is only so we can grab the public ssh key (/root/.ssh/id_rsa.pub) that we need to paste into our s9s_controller data bag (cc_pub_key) and then re-upload to Chef, so that the agents can get hold of it.
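In other words, on the controller node:

$ sudo cat /root/.ssh/id_rsa.pub

Then paste the output into the cc_pub_key key in cookbooks/cmon/data_bags/s9s_controller/config.json and re-upload the data bag as before:

$ knife data bag from file s9s_controller cookbooks/cmon/data_bags/s9s_controller/config.json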
After that we can finish by deploying the agent recipes on our Galera nodes.
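As with the controller, this can be done by adding the cc_agent role to each Galera node's run list; the node names are placeholders:

$ knife node run_list add <galera-node-1> 'role[cc_agent]'
$ knife node run_list add <galera-node-2> 'role[cc_agent]'
$ knife node run_list add <galera-node-3> 'role[cc_agent]'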