Allow configurable sockets on monitored nodes
Hello folks,
I just tried out the ClusterControl software, and in general it's very nice.
But I found some rough edges and would like to suggest some optimizations.
We are running multiple cluster instances on one physical host, each of them naturally on a different port and with its own socket (different from the default socket).
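For context, such a setup looks roughly like this (a minimal sketch; the ports, socket paths, and data directories below are illustrative, not our actual configuration):

```ini
# Hypothetical my.cnf for two MySQL instances on one host.
# Each instance gets its own port, socket, and datadir;
# the default socket (/var/run/mysqld/mysqld.sock) is not used at all.
[mysqld1]
port    = 3307
socket  = /var/run/mysqld/mysqld-cluster1.sock
datadir = /var/lib/mysql-cluster1

[mysqld2]
port    = 3308
socket  = /var/run/mysqld/mysqld-cluster2.sock
datadir = /var/lib/mysql-cluster2
```

A client on the node then has to connect either through the instance's own socket (`mysql --socket=/var/run/mysqld/mysqld-cluster1.sock`) or via TCP to the instance's port (`mysql --protocol=TCP --host=127.0.0.1 --port=3307`); the default socket simply does not exist on such a host.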
If I try to add one of our existing clusters, the controller correctly connects to the remote node via SSH, but then tries to use the default socket.
I expected it to use the port configured in the "add cluster" wizard, but instead it falls back to the default socket, which is not used on our servers.
The non-default socket that the instance actually uses cannot be configured.
Could you either add the possibility to specify the port the controller should connect to on the monitored node, or add an option to make the controller connect via TCP to the configured port on localhost?
This would allow multiple cluster instances on one host and allow us to use ClusterControl at all ;-).
Best
Steffen
-
Hi
Thanks for the feedback! This is a known issue that we are aware of. More changes than just configurable sockets/ports would be needed to handle these types of deployments well from ClusterControl's point of view.
I've added your input to our feature backlog.