In this article, you’ll learn how to set up a highly available, redundant NFS server cluster using iSCSI with DM-Multipath.
The objective of this scenario is to create redundant, fault-tolerant NFS storage with automatic failover, keeping the NFS exports available through most failure scenarios.
This post covers the configuration of the iSCSI initiator on both NFS servers and the setup of device mapper multipathing (DM-Multipath).
For this environment, there are two servers running Ubuntu 14.04.2 LTS, each with two NICs: one NIC provides the NFS service to the clients and the other connects to the iSCSI SAN network. On the iSCSI SAN storage device, two physical adapters have already been set up, each with its network interfaces configured, providing redundant network access and two physical paths to the storage system.
Both NFS servers will have the LUN device attached, each using a different InitiatorName, and will have device mapper multipathing set up (this allows you to combine multiple I/O paths between server nodes and storage arrays into a single device). These I/O paths are physical SAN connections that can include separate cables, switches, and controllers; the net effect is that each NFS server sees a single block device.
The cluster software used is Corosync and the resource manager is Pacemaker. Pacemaker is responsible for assigning a VIP (virtual IP address), mounting the file system from the block device, and starting the NFS service with the specific exports for the clients on the active node of the cluster. If the active node fails, the resources are migrated to the passive node and the services continue to operate as if nothing had happened. A follow-up article on opentodo.net continues with the Corosync and Pacemaker setup.
So let’s get started with the setup!
iSCSI Initiator Configuration
– Edit the initiator name configuration file on both NFS servers:
NOTE: initiator identifiers on both servers are different but they are associated with the same LUN device.
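A minimal sketch of that file, assuming the standard open-iscsi location /etc/iscsi/initiatorname.iscsi and placeholder IQNs (use the initiator names registered on your SAN):

    # /etc/iscsi/initiatorname.iscsi on the first NFS server
    InitiatorName=iqn.2015-04.net.opentodo:nfs1

    # /etc/iscsi/initiatorname.iscsi on the second NFS server
    InitiatorName=iqn.2015-04.net.opentodo:nfs2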
– Run a discovery of the iSCSI targets:
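For example, with iscsiadm from open-iscsi (the portal address 192.168.1.10 is a placeholder for your SAN's portal IP):

    # run on both NFS servers; replace the IP with your SAN portal address
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10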
– Connect and log in to the iSCSI target:
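A sketch with a placeholder target name and portal:

    # log in to the discovered target from both servers
    iscsiadm -m node --targetname iqn.2015-04.com.example:storage.nfs -p 192.168.1.10 --login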
– Check the sessions established with the iSCSI SAN device:
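For example:

    # list the active iSCSI sessions; with both paths up, two sessions should appear
    iscsiadm -m session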
– At this point, the block devices should be available on both servers like locally attached disks. You can check this by listing the block devices on each server:
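For instance, with fdisk (lsblk would work as well):

    # list all block devices; the iSCSI LUN shows up once per path
    fdisk -l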
In this case, /dev/sda is the local disk of the server, while /dev/sdb and /dev/sdc correspond to the iSCSI block devices (one device for each adapter). Now we need to set up device mapper multipathing over these two devices, /dev/sdb and /dev/sdc, so that if one of the adapters fails, the LUN device continues working in our system and multipath switches the disk used for our block device.
– First, we need to retrieve the unique SCSI identifier to configure in the multipath configuration, by running the following command against one of the iSCSI devices:
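On Ubuntu 14.04 the scsi_id helper is shipped under /lib/udev, so a sketch would be:

    # print the WWID of the LUN (both paths return the same identifier)
    /lib/udev/scsi_id -g -u -d /dev/sdc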
– Create the multipath configuration file /etc/multipath.conf with the following content:
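The original file contents are not reproduced here; a minimal sketch of such a configuration, with a placeholder wwid (the identifier returned by scsi_id) and assumed defaults, could look like this:

    defaults {
        user_friendly_names yes
    }

    multipaths {
        multipath {
            # placeholder WWID; replace with the identifier printed by scsi_id
            wwid  360a98000503365344e6f537769323455
            # the alias makes the device appear as /dev/mapper/nfs
            alias nfs
        }
    }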
– Restart the multipath-tools service:
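On Ubuntu 14.04 the service is called multipath-tools:

    # reload the multipath configuration
    service multipath-tools restart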
– Check the disks available in the system again:
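For example, by listing the device mapper nodes:

    # the multipath device appears under /dev/mapper with the configured alias
    ls -l /dev/mapper/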
Now, as you can see, we have a new block device using the alias set up in the multipath configuration file: /dev/mapper/nfs. I’ve partitioned this disk and created the file system on it, so the resulting block device is /dev/mapper/nfs-part1, which you can mount on your system with the mount utility.
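A sketch of those partitioning and mounting steps, assuming an ext4 file system and a hypothetical mount point /mnt/nfs (neither is prescribed here):

    # create a single partition on the multipath device
    fdisk /dev/mapper/nfs

    # map the new partition to /dev/mapper/nfs-part1 (kpartx utility)
    kpartx -a /dev/mapper/nfs

    # create the file system and mount the partition
    mkfs.ext4 /dev/mapper/nfs-part1
    mount /dev/mapper/nfs-part1 /mnt/nfs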
– You can check the health of the multipath block device, and whether both paths are operational, by running the following command:
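A typical choice is multipath's own listing tool:

    # show the multipath topology; both paths should be active and ready
    multipath -ll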