[ERROR] /usr//sbin/mysqld: unknown variable 'wsrep_provider=/usr/lib64/galera/libgalera_smm.so'
Fresh install on CentOS 6.4 failed on the second node. Strangely, it installed and synced on the first node just fine, and on the first node I can see the Galera MySQL packages installed. On the second node, however, it appears that only MySQL was installed, not Galera. The script seems to be busted.
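For what it's worth, the error at the top is consistent with that: a plain (non-Galera) mysqld build does not know any of the wsrep_* options, so the wsrep_provider line in my.cnf alone makes it abort at startup. A quick sketch for checking which build a node got (the mysqld path is taken from the error message; adjust it for your install):

```shell
# Count the wsrep options known to the server binary.
# A Galera-patched build lists many; a stock MySQL build lists none,
# which is exactly why it rejects 'wsrep_provider' at startup.
# The '|| true' keeps set -e shells happy when the count is 0.
wsrep_opts=$(/usr/sbin/mysqld --verbose --help 2>/dev/null | grep -c wsrep || true)
if [ "$wsrep_opts" -gt 0 ]; then
  echo "galera-enabled build ($wsrep_opts wsrep options)"
else
  echo "plain mysql build - galera package missing on this node"
fi
```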
Make sure to use N+1 hosts, e.g., 3 Galera hosts + 1 host for our ClusterControl node.
You only used 3 hosts, which is a problem since we require a dedicated host for ClusterControl.
On the ClusterControl node we install a plain vanilla MySQL server, not a Galera version, which is why you see different packages.
Use 4 hosts, for example:
192.168.21.4 - ClusterControl node
192.168.21.5 - db 1
192.168.21.6 - db 2
192.168.21.7 - db 3
You were 100% correct. Being a programmer myself, I think it would be good to add that validation to the script, to check and ensure it is not being installed on the same server, or to give the user the option. In my case, I'm running 3 redundant servers with MySQL and HTTP on all of them, so it would have been better for me to include ClusterControl on them as well. Now I'm forced to deploy and install another server.
Those are just my bits, cheers!
Yes, I agree with you that we need to provide better validation overall.
We already have various checkpoints in our web configurator and scripts, but I guess this one slipped through. We'll look into that.
I don't recall offhand, so I need to check whether ClusterControl can be installed manually on top of only 3 hosts. There might be issues with how the stats are collected and how the GUI will handle it, though. I'll let you know.
Thanks for the feedback!
I did a quick test installing ClusterControl on only 3 Galera nodes, and there are a few issues early on which make this an unsupported option at the moment:
- MySQL statistics in your cluster will always include the backend cmon process activities
- Galera node recovery does not work on the db node which hosts the "cmon controller"
If you would like to try this setup, you can follow http://support.severalnines.com/entries/20613923-Installation-on-an-existing-Cluster with a few extra steps. Please note this is not a supported setup and you might run into various other issues later on.
- On your "cmon controller" host, change the 'mode' parameter in /etc/cmon.cnf to 'dual' instead of 'controller', and add your MySQL servers to the 'mysql_server_addresses' parameter.
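For reference, the relevant part of /etc/cmon.cnf on that host would look roughly like this (a sketch; the IPs reuse the example layout from earlier in the thread, and the exact format of the address list follows the linked installation guide):

```ini
# /etc/cmon.cnf on the db node that also hosts the cmon controller.
# 'dual' means this host acts as both controller and managed db node.
mode=dual
# The managed Galera nodes (example IPs from earlier in the thread).
mysql_server_addresses=192.168.21.5,192.168.21.6,192.168.21.7
```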
Thanks for the testing, it is very much appreciated! I think this qualifies as a feature request. The reason is that I'm running 3 redundant servers, so it makes sense to me to deploy ClusterControl on those 3 servers and get the redundancy: if one server fails, I can still maintain control over my environment. In this design I get lower deployment cost yet more capability, making your solution more attractive.
I do believe this would be a great feature for OpenStackers and the like deploying multi-node setups. I would much rather deploy it into the cluster than pay the additional cost of deploying and managing an extra node, and then have to think about some sort of redundancy for that node.
Also, the ClusterControl MySQL database doesn't have to run on the cluster; it could run as a separate instance. The possibilities are endless, but the focus should always be on making the user experience and service availability the highest quality while decreasing deployment cost.