This post explains how to set up an NFS cluster with failover between two servers, using Corosync as the cluster engine and Pacemaker as the cluster resource manager.
This post continues the series on setting up a highly available NFS server. Check out the first post, covering the iSCSI storage part, here.
Pacemaker is an open source high-availability resource manager. Its job is to keep the configuration of all the cluster's resources and the relations between servers and resources. For example, if we need to set up a VIP (virtual IP), mount a filesystem, or start a service on the active node of the cluster, Pacemaker will bring up all the resources assigned to that server in the order we specify in the configuration, ensuring all the services start correctly.
Corosync is an open source cluster engine that allows messages to be exchanged between the servers of a cluster, both to check their health status and to inform the other components of the cluster when one of the servers goes down, so that the failover process can start.
Resource agents are scripts that manage different services, based on the OCF standard. The system already ships with a number of these scripts, and most of the time they will be enough for typical cluster setups, but it is of course possible to develop new ones depending on your needs and requirements.
So, after this brief introduction to the cluster components, let's get started with the configuration!
– Install package dependencies:
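A sketch assuming a Debian/Ubuntu system, as in the first post of the series; package names may differ on other distributions:

    apt-get install corosync pacemaker nfs-kernel-server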
– Generate a private key to ensure the authenticity and privacy of the messages sent between the nodes of the cluster:
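The corosync-keygen tool does this; it gathers entropy from /dev/random, so it may take a while on an idle machine:

    corosync-keygen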
NOTE: This command will generate the private key at /etc/corosync/authkey. Copy the key file to the other server.
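For example, assuming the other node is reachable as NFS2-SRV:

    scp /etc/corosync/authkey root@NFS2-SRV:/etc/corosync/authkey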
– Disable the quorum policy, since we need to deploy a 2-node configuration:
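With the crm shell, this is a single cluster property:

    crm configure property no-quorum-policy=ignore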
– Setup the VIP resource of the cluster:
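A sketch using the ocf:heartbeat:IPaddr2 agent; the resource name (p_ip_nfs), IP address, and netmask are placeholders to adapt to your network:

    crm configure primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.100" cidr_netmask="24" \
        op monitor interval="30s"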
– Setup the init script for the NFS server:
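Pacemaker can drive an LSB init script directly; a sketch, with p_nfs as an illustrative resource name:

    crm configure primitive p_nfs lsb:nfs-kernel-server \
        op monitor interval="30s"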
NOTE: The nfs-kernel-server init script will be managed by the cluster, so disable the service from starting at boot time.
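On a Debian-based system, for instance:

    update-rc.d -f nfs-kernel-server remove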
– Configure the mount point for the NFS export:
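A sketch using the ocf:heartbeat:Filesystem agent; the device (the iSCSI LUN from the first post), mount point, and filesystem type below are placeholders:

    crm configure primitive p_fs_nfs ocf:heartbeat:Filesystem \
        params device="/dev/sdb1" directory="/srv/nfs" fstype="ext4" \
        op monitor interval="30s"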
– Configure a resource group with the NFS service, the mountpoint and the VIP:
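A group keeps the resources on the same node and starts them in the listed order: the filesystem first, then the VIP, and finally the NFS service (resource names follow the earlier sketches):

    crm configure group g_nfs p_fs_nfs p_ip_nfs p_nfs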
– Prevent healthy resources from being moved around the cluster by configuring resource stickiness:
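Any positive score makes resources prefer to stay where they are; the value below is just an example:

    crm configure rsc_defaults resource-stickiness=200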
Check Cluster Status:
– Check the status of the resources of the cluster:
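Either of these shows the state of the nodes and the resources:

    crm status
    crm_mon -1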
– If the resources are running on NFS2-SRV and we want to fail over to NFS1-SRV:
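Assuming the group name g_nfs from the sketch above:

    crm resource move g_nfs NFS1-SRV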
– Remove all constraints created by the move command:
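The move command pins the group to the target node with a location constraint, so clear it once the failover is done (again assuming the g_nfs group):

    crm resource unmove g_nfs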
Original post by Iván Mora (SysOps Engineer @ CAPSiDE); it can be found at opentodo.net.
To find the first part of how to set up the iSCSI storage, please click here.