CMON HA Cluster - settings reset after leader goes down

Comments

3 comments

  • Official comment
    Sebastian Insausti

    Hi Remo,

    I saw you already created a ticket in our support portal.

    We'll continue with this topic there.

    Thank you.

  • Paul Namuag

    Hi Remo Liebmann,

    It's great to hear from someone trying out our CMON HA.

    From a high-availability standpoint, that should not be the outcome: the data should be replicated among the standby nodes or Galera nodes that are available in the cluster.

    It would help us to know how you set up CMON HA, and which version you have installed and tested so far.
  • Remo Liebmann

    Hi,

    The setup is:

    - 3 Nodes on Debian 12
    - all in the same VLAN
    - I've only used the install procedure described in the docs: https://docs.severalnines.com/clustercontrol/latest/admin-guide/redundancy-high-availability/#cmon-ha (the enable/verify steps are sketched below)
    - 1 Galera database cluster (3 Nodes) for testing (with HAProxy, Keepalived on all 3 Nodes)

    Versions:
    - Controller:2.3.2.13291. Buildv:2.3.2 sha:4f2f1585 b:475 n:release-mcc-2.3.2
    - MariaDB (on CC-Nodes): 10.11.11-MariaDB-0+deb12u1
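
    In case it helps to reproduce: this is roughly how I enabled and then verified CMON HA per that docs page (a sketch from memory; the flags come from the s9s CLI, and the output format may differ between versions):

      # Enable CMON HA on the first controller, as described in the linked docs
      s9s controller --enable-cmon-ha --print-json

      # List the controllers; one node should be reported as the leader
      # and the other two as followers
      s9s controller --list --long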

    The behavior I'm observing (the checks I run are sketched after this list):
    - I shut down the leader node (the cluster is registered and all services such as HAProxy and Keepalived are visible before that)
    - After the remaining nodes have elected a new leader, the cluster remains visible, but only the primary DB nodes are shown
    - When I try to add the load balancers back through the GUI, it fails; I have to remove the cluster and import it again
    - After rebooting the old leader node and then shutting down the new leader node, the same thing occurs
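
    For reference, these are the checks I run on each remaining controller node after shutting down the leader (a sketch; the service name and log path assume a default package install):

      # Confirm the cmon controller service is still running on this node
      systemctl status cmon

      # Watch the controller log for leader-election and cluster-visibility
      # messages (default log location on a package install)
      tail -f /var/log/cmon.log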

    Also, while installing CMON HA, no cmon cron job was installed or created. Might this be the reason?
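
    To check for the cron job, I looked in the usual places (plain shell; nothing CMON-specific is assumed here):

      # Search the system-wide cron locations for any cmon-related entry
      grep -ri cmon /etc/cron.d/ /etc/crontab 2>/dev/null

      # And check root's personal crontab
      crontab -l -u root | grep -i cmon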
