Tuesday, March 11, 2008

Using the same LUN in multiple servers

Setting up SAN and sharing a single LUN to multiple servers.

When I first started working with SAN, all I knew was that I could create LUNs (slices of space on the storage) and use them in the servers as drives. What I didn't know was that I cannot (and should not) access the same LUN from multiple servers simultaneously. I learned that the hard way.

Usually I test things before putting them in production. In my first attempt I created a LUN and made it part of two Storage Groups, each with one server.

Well, I could see the 'test' LUN in both servers and mount it on both machines simultaneously. I tried writing some data to the LUN from server1 and viewing it from server2. It did not show up. I could not even list the files written on server1 from server2, or vice versa. This is actually not a problem with the SAN or the servers; the problem was with my setup. While I can assign the same LUN to multiple storage groups, one LUN must NOT be accessed from both servers simultaneously. We ended up losing data here.

But why?

Because there is no way for a server to know about the disk blocks changed by the other server. Since LUNs are presented to servers as SCSI drives, the servers cannot coordinate changes to disk blocks without special software. We need a clustered file system for this. A clustered file system is a file system that is simultaneously mounted on multiple servers. There are several of them, such as GFS, OCFS, Veritas CVM, Lustre, etc.
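For illustration only, here is roughly what my failed test looked like (the hostname prompts and /dev/sdb1 are examples; do not try this on data you care about):

server1# mount /dev/sdb1 /mnt
server1# touch /mnt/written-on-server1

server2# mount /dev/sdb1 /mnt
server2# ls /mnt (written-on-server1 may not show up; each server caches blocks and metadata independently, and concurrent writes will corrupt the file system)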

We will now see how to configure GFS.

The installation described below was done with the following setup:
SAN storage box
SAN fibre switch - Brocade
4 servers - each connected with one HBA card directly to the SAN. I had 4 RAID groups, each with one server. Two servers (cluster nodes) were running CentOS 4.4 and the other two 4.3.
Create one LUN in the SAN and assign it to all RAID groups.

Note: I used dlm as cluster locking method. More on that later.

To know more about setting up the hardware click here
Official documentation of setting up GFS can be found here

Below are a few things to be done before implementing GFS:

You will be doing the whole setup as root.

Make sure the clocks on the GFS nodes are synchronized (I used NTP for this; a sample is sketched after this list).

You will need one fencing device for the cluster to work. I used Brocade Fabric Switch for fencing. GFS will not work without fencing.

Always make sure that the kernel you use has the GFS modules; some kernels don't. All you have to do is install a kernel with GFS support and boot the server with that kernel (I used 2.6.9-55.0.2.ELsmp).

We will use one server (the server node) for deploying the config across the other nodes. Enable passwordless login from this server to the other nodes in the cluster (also sketched after this list).

You will need GUI access on the server node for doing the cluster configuration. I have also given a sample text configuration file for those who don't have GUI access. The file can be used as is (of course, after substituting your values).

All the nodes & the fencing device in the cluster must be in the same subnet. Otherwise you may have to configure multicast.

FQDNs of all nodes must be resolvable. Otherwise, put appropriate entries in /etc/hosts (see the sketch below).
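A minimal sketch of these preparation steps, assuming four nodes named node1 to node4 on 192.168.1.x (the hostnames, addresses and NTP server are examples; substitute your own):

# ntpdate pool.ntp.org (one-time sync)
# chkconfig ntpd on && service ntpd start (keeps the clocks in sync from now on)

# ssh-keygen -t rsa (on the server node; accept the defaults)
# ssh-copy-id root@node2.example.net (repeat for node3 and node4; or append the public key to ~/.ssh/authorized_keys manually)

# cat >> /etc/hosts << EOF
192.168.1.11 node1.example.net node1
192.168.1.12 node2.example.net node2
192.168.1.13 node3.example.net node3
192.168.1.14 node4.example.net node4
EOF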

Also keep the following details handy:

Number of file systems (1 in my case). More can be added later.
File-system name (sk)
Journals (I used 4)
GFS nodes (4)
Storage devices and partitions

You can easily install Cluster Suite & GFS from the csgfs repo, as explained below (run these commands as root on all nodes and the server node).

# cd /etc/yum.repos.d/
# wget http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo
# rpm --import /usr/share/doc/centos-release-4/RPM-GPG-KEY

# yum install rgmanager system-config-cluster magma magma-plugins cman cman-kernel dlm dlm-kernel fence gulm iddev GFS GFS-kernel gnbd gnbd-kernel lvm2-cluster GFS-kernheaders gnbd-kernheaders
Note: If you want the servers to run an SMP kernel, install the appropriate rpms (e.g. cman-kernel-smp).
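For example, assuming the smp variants follow the same naming pattern as cman-kernel-smp (verify the exact package names in the csgfs repo with yum search first):

# yum install cman-kernel-smp dlm-kernel-smp GFS-kernel-smp gnbd-kernel-smp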

Boot the server with the new kernel (the one that has GFS support). Repeat this on all cluster nodes and the server node.
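To confirm you actually came up on the GFS-capable kernel after the reboot, compare the running kernel with its module directory (the version string below is from my setup; yours may differ):

# uname -r
2.6.9-55.0.2.ELsmp
# ls /lib/modules/$(uname -r)/kernel/fs/ | grep gfs
gfs
gfs_locking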

Configuration Steps:
Step 1: Defining & creating the cluster config

In the server node type:
# system-config-cluster

Choose File->New
Give cluster name
Choose DLM as locking method
Click ok to accept the configuration.

Then you will see the cluster you just created.

Step 2: Adding a fence device

Click on Fence devices and Add a Fence Device. A window named Fence Device Configuration opens up. Choose the device you have for fencing (mine is Brocade Switch).
Fill in the Name, IP address, Login & Password for the device.
Click ok to accept the changes.

Step 3: Adding a node to the cluster
In the cluster window, choose Cluster Nodes and click Add a Cluster Node.
Give the FQDN of the server node. Leave the quorum votes field blank. Click OK.

Step 4: Adding fencing for the node
Choose the node you just added.
Click on Manage fencing for this node. This will open a new window called Fence Configuration.
Now choose Add a Fence Level; there is no need to enter anything.
Click Close (make sure the fence level has been added for that node; you will see the details if you click on the node name).

Repeat Steps 3 & 4 for all the other nodes, substituting each node's name for the server node name in Step 3.

Save the config file (default: /etc/cluster/cluster.conf). You will now get a Send to Cluster button. Click it to copy the config file to all the cluster nodes (this is done through scp; that is why we configured the server to log in to the nodes without a password).

Below is a sample config file:
# cat /etc/cluster/cluster.conf
<cluster alias="Test" config_version="1" name="Test">
  <fence_daemon post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="server1.example.net" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
    <clusternode name="server2.example.net" votes="1">
      <fence>
        <method name="1"/>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_brocade" ipaddr="x.x.x.x" login="user" name="Switch" passwd="test"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
You can also create your own file in text mode and copy it to the nodes. Add the other nodes' info to the config file between the tags, as in the example above (a sample copy command follows).
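A minimal sketch of copying the file by hand, assuming passwordless SSH is already set up and the node names are examples (create /etc/cluster on the nodes first if it does not exist):

# for node in node2 node3 node4; do scp /etc/cluster/cluster.conf root@$node:/etc/cluster/; done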

Step 5: Check if you have all needed modules

Verify that the configuration is copied to all the servers and then proceed.

Reboot all the nodes with the GFS kernel and check whether the gfs & dlm modules are loaded. Below are the commands and the output from my server node:
# lsmod|grep gfs
gfs 290652 0
lock_harness 8992 2 gfs,lock_dlm
cman 124896 3 gfs,lock_dlm,dlm

# lsmod|grep dlm
lock_dlm 44640 0
lock_harness 8992 2 gfs,lock_dlm
dlm 116580 1 lock_dlm
cman 124896 3 gfs,lock_dlm,dlm

If nothing shows up, load the modules manually:
# modprobe gfs
# modprobe lock_dlm
# modprobe dlm

To make the modules load at boot, run the commands below; they append modprobe lines to /etc/rc.modules and make the file executable:
# echo modprobe dlm >> /etc/rc.modules
# echo modprobe lock_dlm >> /etc/rc.modules
# echo modprobe gfs >> /etc/rc.modules
# chmod +x /etc/rc.modules
Also turn on the cluster services with chkconfig: ccsd, cman, fenced, gfs, rgmanager, and clvmd (the last only if you are planning to use clustered LVM); see the example below.
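A sketch of enabling the services on each node (the service names are the ones installed by the csgfs packages above; drop clvmd if you are not using clustered LVM):

# for svc in ccsd cman fenced clvmd gfs rgmanager; do chkconfig $svc on; done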

Proceed to Step 6 only if Step 5 is successful, i.e., all modules are detected after a fresh reboot of the server.

Step 6: Making a GFS partition or LVM (Do this only if Step 5 was successful)

Now we are going to create a clustered file system (GFS) partition on the server and use it on all nodes.

On the server, create a partition on the LUN drive (now shown as a SCSI drive on the server). It can be a normal partition or LVM. Below are the steps to create a clustered LVM (on my system the LUN disk was scanned as sdb; verify and substitute yours in the commands below).

# fdisk /dev/sdb
Create one partition.
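If you are doing this interactively, the fdisk session looks roughly like this (one primary partition spanning the whole disk, accepting the defaults):

# fdisk /dev/sdb
Command (m for help): n (new partition)
Command action: p (primary), partition number 1, accept the default first and last cylinders
Command (m for help): w (write the partition table and exit)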

Choose either of the two options.

Step 6: Option 1
# pvcreate /dev/sdb1
# vgcreate VG /dev/sdb1
# vgchange -aey VG (to make it a clustered VG)

# lvcreate -L1023G -n LV1 VG ( Remember to substitute the size of your disk )
# gfs_mkfs -p lock_dlm -t Test:sk -j 8 /dev/VG/LV1
Step 6: Option 2
For a normal partition the command would be:
# gfs_mkfs -p lock_dlm -t Test:sk -j 8 /dev/sdb1

Note: I have used Test:sk; substitute your clustername:fsname here. The -j flag sets the number of journals; GFS needs at least one journal per node that will mount the file system.

The gfs_mkfs command above creates the GFS file system on our test LV, LV1.

Step 6a: Activating clustered LVMs (Optional - Skip this if you are using an ordinary partition).

Since we used a clustered LVM, you have to modify the following two lines in the /etc/lvm/lvm.conf file. Do this on all nodes and the server node:

locking_type = 2 (by default it is 1)
locking_library = "liblvm2clusterlock.so"


After this, restarting clvmd will show all logical volumes, including the clustered LVMs. Note: you have to restart clvmd on all nodes. If volumes still don't show up, rebooting all the nodes and the server often solves the problem.
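A quick sketch of the restart-and-verify step on each node (lvs and lvdisplay are standard LVM2 commands; VG and LV1 are the names created above):

# service clvmd restart
# lvs (LV1 should be listed under VG)
# lvdisplay /dev/VG/LV1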

Step 7: Mounting the GFS partition/LVM on all nodes

Issue this command in all cluster nodes and server node.

For normal partition:
# mount -t gfs /dev/sdb1 /mnt

For CLVMs:
# mount -t gfs /dev/VG/LV1 /mnt

You can now write data to /mnt from one node and access it from the other servers without problems.
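For example, a quick cross-node check (the hostname prompts are just for illustration):

server1# touch /mnt/hello-from-server1
server2# ls /mnt
hello-from-server1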

To make the mounts permanent across reboots, put the below entry in /etc/fstab on all nodes and the server (create the mount point, e.g. /MyData, first):

For normal partitions:
/dev/sdb1 /MyData gfs defaults 0 0

For CLVMs:
/dev/VG/LV1 /MyData gfs defaults 0 0
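You can test the fstab entry without rebooting (assuming /MyData as the mount point):

# mkdir -p /MyData
# mount -a (mounts everything in /etc/fstab that is not already mounted)
# df -h /MyData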

GFS module in CentOS RPM

How do you check whether the kernel you have on the server has the GFS modules?

When I was installing GFS support on one of my servers, I wanted to check whether the kernel I was using had the GFS modules; without them GFS will not work. It was simple to find out.

After installing the kernel and gfs rpms, just list the files under the kernel modules directory:


# cd /lib/modules/2.6.9-55.0.2.ELsmp
# ls kernel/fs/
autofs4 cramfs ext3 freevxfs gfs_locking hfsplus jffs2 msdos nfs_common nls udf
cifs exportfs fat gfs hfs jbd lockd nfs nfsd smbfs vfat


You can even list the files in gfs and gfs_locking directories

# ls kernel/fs/gfs
gfs.ko

# ls kernel/fs/gfs_locking/
lock_dlm lock_gulm lock_harness lock_nolock
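As an alternative quick check, you can search for the module files or query them directly (uname -r gives the running kernel version):

# find /lib/modules/$(uname -r) -name 'gfs*.ko'
# modinfo gfs (prints module details if gfs.ko is installed for the running kernel)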


It gets tricky if you want other modules such as xfs too. You need to find a kernel RPM that has both the xfs and gfs modules, or, if you have time to play, you can compile them all yourself.