In this article, you’ll learn how to set up a highly available, redundant NFS server cluster using iSCSI with DM-Multipath.

The objective of this scenario is to create redundant, fault-tolerant NFS storage with automatic failover, ensuring maximum availability of the NFS exports.

This post covers the configuration of the iSCSI initiator on both NFS servers and the setup of device mapper multipathing (DM-Multipath).

For this environment, there are two servers running Ubuntu 14.04.2 LTS, each with two NICs: one interface provides the NFS service to the clients, and the other connects to the iSCSI SAN network. The iSCSI SAN storage device has two physical adapters, each with two network interfaces already set up, providing redundant network access and two physical paths to the storage system.

Both NFS servers will attach the same LUN device, each using a different InitiatorName, and will have device mapper multipathing set up (this allows you to combine multiple I/O paths between the server nodes and the storage array into a single device). These I/O paths are physical SAN connections that can include separate cables, switches, and controllers, so to the NFS servers the LUN effectively appears as a single block device.

iSCSI Diagram

The cluster software used is Corosync and the resource manager is Pacemaker. Pacemaker is responsible for assigning a VIP (virtual IP address), mounting the filesystem from the block device, and starting the NFS service with the specific exports for the clients on the active node of the cluster. If the active node fails, the resources are migrated to the passive node and the services continue to operate as if nothing had happened. A follow-up article at opentodo.net continues with the Corosync and Pacemaker setup.
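As a preview of that follow-up, the Pacemaker resources described above could be sketched with the crm shell roughly as follows. The IP address, mount point, filesystem type, and resource names here are illustrative assumptions, not values taken from this article:

# Hypothetical crm configuration sketch (adjust names and values to your environment)
primitive p_vip ocf:heartbeat:IPaddr2 params ip="192.168.1.100" cidr_netmask="24"
primitive p_fs ocf:heartbeat:Filesystem params device="/dev/mapper/nfs-part1" directory="/mnt/nfs" fstype="ext4"
primitive p_nfs lsb:nfs-kernel-server
group g_nfs p_vip p_fs p_nfs

Grouping the resources ensures they start in order, together, on the same (active) node.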

So let’s get started with the setup!

iSCSI Initiator Configuration

-Install dependencies:

# aptitude install multipath-tools open-iscsi

-Server 1
Edit configuration file /etc/iscsi/initiatorname.iscsi:

InitiatorName=iqn.1647-03.com.cisco:01.vdsk-nfs1

-Server 2
Edit configuration file /etc/iscsi/initiatorname.iscsi:

InitiatorName=iqn.1647-03.com.cisco:01.vdsk-nfs2

NOTE: the initiator identifiers on the two servers are different, but both are associated with the same LUN device.

-Run a discovery on the iSCSI targets:

# iscsiadm -m discovery -t sendtargets -p 10.54.61.35
# iscsiadm -m discovery -t sendtargets -p 10.54.61.36
# iscsiadm -m discovery -t sendtargets -p 10.54.61.37
# iscsiadm -m discovery -t sendtargets -p 10.54.61.38

-Connect and log in to the iSCSI targets:

# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.35 --login
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.36 --login
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b -p 10.54.61.37 --login
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b -p 10.54.61.38 --login

-Check the sessions established with the iSCSI SAN device:

# iscsiadm -m node
10.54.61.35:3260,1 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a
10.54.61.36:3260,2 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a
10.54.61.37:3260,1 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b
10.54.61.38:3260,2 iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.b

-At this point, the block devices should be available on both servers as if they were locally attached. You can check by simply running fdisk:

# fdisk -l

Disk /dev/sdb: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               63  1953118439   976559188+  83  Linux

Disk /dev/sdc: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               63  1953118439   976559188+  83  Linux

In this case, /dev/sda is the server’s local disk, while /dev/sdb and /dev/sdc correspond to the iSCSI block device (one path through each adapter). Now we need to set up device mapper multipathing across these two devices, so that if one of the adapters fails, the LUN remains available to the system and multipath transparently switches I/O to the surviving path.

Multipath Configuration

-First, we need to retrieve the unique SCSI identifier (WWID) to use in the multipath configuration, by running the following command against one of the iSCSI devices:

# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3600c0ff000d823e5ed6a0a4b01000000
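If you are scripting the setup, the identifier can be captured in a shell variable instead of copied by hand (a trivial sketch using the same device as above):

# WWID=$(/lib/udev/scsi_id --whitelisted --device=/dev/sdb)
# echo $WWID
3600c0ff000d823e5ed6a0a4b01000000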

-Create the multipath configuration file /etc/multipath.conf with the following content:

##
## This is a template multipath-tools configuration file
## Uncomment the lines relevant to your environment
##
defaults {
    user_friendly_names yes
    polling_interval 3
    selector "round-robin 0"
    path_grouping_policy multibus
    path_checker directio
    failback immediate
    no_path_retry fail
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
}

multipaths {
    multipath {
        # id retrieved with the utility /lib/udev/scsi_id
        wwid 3600c0ff000d823e5ed6a0a4b01000000
        alias nfs
    }
}

-Restart the multipath-tools service:

# service multipath-tools restart

-Check the disks available in the system again:

# fdisk -l

Disk /dev/sdb: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               63  1953118439   976559188+  83  Linux

Disk /dev/sdc: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               63  1953118439   976559188+  83  Linux

Disk /dev/mapper/nfs: 1000.0 GB, 1000000716800 bytes
255 heads, 63 sectors/track, 121576 cylinders, total 1953126400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

            Device Boot      Start         End      Blocks   Id  System
/dev/mapper/nfs1                63  1953118439   976559188+  83  Linux

Disk /dev/mapper/nfs-part1: 1000.0 GB, 999996609024 bytes
255 heads, 63 sectors/track, 121575 cylinders, total 1953118377 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Now, as you can see, we have a new block device, /dev/mapper/nfs, using the alias set in the multipath configuration file. I partitioned the disk and created the filesystem on /dev/mapper/nfs-part1, so that is the device you can mount on your system with the mount utility.
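For example, mounting it manually could look like this (the mount point /mnt/nfs is an arbitrary choice for illustration; in the finished cluster, Pacemaker performs the mount on the active node):

# mkdir -p /mnt/nfs
# mount /dev/mapper/nfs-part1 /mnt/nfs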

-You can check the health of the multipath block device, and that both paths are operational, by running the following command:

# multipath -ll
nfs (3600c0ff000d823e5ed6a0a4b01000000) dm-3 HP,MSA2012i
size=931G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 6:0:0:0 sdb 8:16 active ready running
`- 5:0:0:0 sdc 8:32 active ready running
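
To test the failover behavior, you could log out of one of the sessions established earlier and check that the device stays usable through the remaining paths, then log back in to restore the path:

# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.35 --logout
# multipath -ll
# iscsiadm -m node -T iqn.2054-02.com.hp:storage.msa2012i.0390d423d2.a -p 10.54.61.35 --login

After the logout, multipath -ll should report the corresponding path as failed while I/O continues over the other path.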


Original post by Iván Mora (SysOps Engineer @ CAPSiDE) can be found at opentodo.net, along with the second post in this series on highly available servers.

