
Saturday, December 27, 2008

Apache and Tomcat with modjk using tar

I have already put together a tutorial to connect Apache HTTP with Apache Tomcat; in that one I installed httpd from an RPM. This tutorial explains the steps to connect Apache (httpd) with Tomcat using mod_jk, only this time everything is built from tar bundles. I am not going to explain how to install Tomcat here; there is a separate tutorial on that: http://techsk.blogspot.com/2008/04/installing-tomcat.html.

In the tutorial below I will be installing Apache in the default directory (/usr/local/apache2). You can change that location on your server by adding --prefix=<your-install-directory> in the configuration step before calling make. I am also building Apache HTTP with SSL support.

When I get some time I will put together the steps for enabling SSL in Apache and for using Tomcat's SSL keys in Apache.

Here we go:

First download httpd:
# cd /usr/local/src/
# wget http://apache.mirrors.redwire.net/httpd/httpd-2.2.10.tar.gz
# wget http://www.apache.org/dist/httpd/httpd-2.2.10.tar.gz.md5
# md5sum httpd-2.2.10.tar.gz (the output must match the sum in the downloaded .md5 file)
# tar zxfv httpd-2.2.10.tar.gz
# cd httpd-2.2.10
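If you script your downloads, md5sum can check a tarball against the published .md5 file directly (assuming the usual "hash  filename" format those files use). A small sketch with a fabricated stand-in file, so it can be tried anywhere; with the real download you would only run the md5sum -c line:

```shell
# Sketch: verify a tarball against its .md5 checksum file.
# httpd-demo.tar.gz is a fabricated stand-in, not the real httpd tarball.
echo "demo contents" > httpd-demo.tar.gz
md5sum httpd-demo.tar.gz > httpd-demo.tar.gz.md5   # stands in for the wget'ed .md5
md5sum -c httpd-demo.tar.gz.md5                    # prints: httpd-demo.tar.gz: OK
```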

The command below configures (prepares) the Apache build with most modules compiled as shared objects. Run ./configure --help to see what is available.
# ./configure --enable-so --enable-mods-shared=most --enable-ssl    (Hint: --prefix=</path/to/install/directory>)
# make
# make install
# cd /usr/local/src/
# wget http://www.trieuvan.com/apache/tomcat/tomcat-connectors/jk/source/jk-1.2.27/tomcat-connectors-1.2.27-src.tar.gz
# wget http://www.apache.org/dist/tomcat/tomcat-connectors/jk/source/jk-1.2.27/tomcat-connectors-1.2.27-src.tar.gz.md5
# md5sum tomcat-connectors-1.2.27-src.tar.gz (the output must match the sum in the downloaded .md5 file)
# tar zxfv tomcat-connectors-1.2.27-src.tar.gz
# cd tomcat-connectors-1.2.27-src/native/
#  ./configure --with-apxs=/usr/local/apache2/bin/apxs
# make
# make install

That is it for the installation. Now we will configure Apache to connect it with Tomcat.
# cd /usr/local/apache2/conf/
# cp httpd.conf httpd.conf_18Nov08
# vi httpd.conf
Add the following code below the LoadModule section.

# Load mod_jk
#
LoadModule jk_module modules/mod_jk.so

# Configure mod_jk
#
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info

JkMount *.jsp test

(Here 'test' must match the worker name you define in workers.properties. The JkMount directives will often go into your VirtualHost configuration instead.)
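For reference, a hypothetical VirtualHost carrying the JkMount lines might look like the fragment below (the server name, document root and /myapp context are made up):

```
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /usr/local/apache2/htdocs

    # forward JSPs and one servlet context to the worker defined in workers.properties
    JkMount /*.jsp test
    JkMount /myapp/* test
</VirtualHost>
```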

Uncomment the line: Include conf/extra/httpd-ssl.conf
Save and Exit.
# cd /usr/local/apache2/conf/
# vi workers.properties (create new file and paste the below code)

# workers.properties
#

# In Unix, we use forward slashes:
ps=/

# list the workers by name

worker.list=test

# ------------------------
# First tomcat server
# ------------------------
worker.test.port=8009
worker.test.host=localhost
worker.test.type=ajp13

#
# END workers.properties
#
Save and exit.

Start Apache.
# /usr/local/apache2/bin/apachectl start

Now you will be able to browse all JSP pages through Apache; it silently forwards them to Tomcat through mod_jk.
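A classic mod_jk mistake is a JkMount worker name that is not listed in worker.list. That cross-check can be scripted; the sketch below runs against miniature stand-ins for the two files (with a real setup you would point the paths at /usr/local/apache2/conf instead):

```shell
# Cross-check: the worker named in JkMount must appear in worker.list.
tmp=$(mktemp -d)
printf 'JkMount *.jsp test\n' > "$tmp/httpd.conf"
printf 'worker.list=test\n'   > "$tmp/workers.properties"

worker=$(awk '/^JkMount/ {print $3; exit}' "$tmp/httpd.conf")
if grep -q "worker\.list=.*$worker" "$tmp/workers.properties"; then
  echo "worker '$worker' is defined"
else
  echo "worker '$worker' missing from worker.list"
fi
```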
Blogged with the Flock Browser

Connecting Apache with tomcat using modjk

Apache HTTP and Tomcat are web servers meant for different purposes. While Apache HTTP can be used to serve static (html) and dynamic (cgi/perl) content, Tomcat is used to serve Java pages (jsp/servlets). We can link Apache HTTP and Apache Tomcat using mod_jk (a connector):

What can you do by linking them both?

1. Effectively serve both static (html) and dynamic (jsp/servlet) pages.
2. Load balance/Cluster between two or more tomcat services.

There may be more ....

I did the installation on CentOS 4.4 (32-bit). I installed Apache httpd (RPM) using yum. I assume that you have already installed Tomcat.

Below is the tutorial:

# yum install httpd httpd-devel (httpd-devel is needed to install any module in httpd)
# wget http://apache.siamwebhosting.com/tomcat/tomcat-connectors/jk/source/jk-1.2.26/tomcat-connectors-1.2.26-src.tar.gz (download any latest version).
# wget http://www.apache.org/dist/tomcat/tomcat-connectors/jk/source/jk-1.2.26/tomcat-connectors-1.2.26-src.tar.gz.md5 (md5 check sum for the above).

Check if you have downloaded a genuine package
# md5sum tomcat-connectors-1.2.26-src.tar.gz (This must be equal to the sum downloaded with the second wget).

# tar zxfv tomcat-connectors-1.2.26-src.tar.gz
# cd tomcat-connectors-1.2.26-src
# cd native
# ./configure --with-apxs=/usr/sbin/apxs
# make
# make install

# cd /etc/httpd/conf/
# cp httpd.conf httpd.conf.orig (I always keep a copy of the original).
# vi httpd.conf

Add the following code below the LoadModule section.


# Load mod_jk
#
LoadModule jk_module modules/mod_jk.so

# Configure mod_jk
#
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info

JkMount *.jsp test

(Here 'test' must match the worker name you define in workers.properties. The JkMount directives will often go into your VirtualHost configuration instead.)

Save and Exit.

# cd /etc/httpd/conf/
# vi workers.properties (create new file and paste the below code)

# workers.properties
#

# In Unix, we use forward slashes:
ps=/

# list the workers by name

worker.list=test

# ------------------------
# First tomcat server
# ------------------------
worker.test.port=8009
worker.test.host=localhost
worker.test.type=ajp13

#
# END workers.properties
#

Restart Apache. Now you can access Tomcat through Apache: any request for .jsp files will be forwarded to the worker named test.

Wednesday, November 19, 2008

After a long time!

I was too busy to update my pages here. With most of the deadlines behind me, I hope to add more articles starting next week.

You can expect something on serving dynamic pages from Apache with an apache + mod_jk + tomcat setup, a failover setup for two Tomcat instances with Apache and mod_jk, LVM setup, problems/errors and fixes in LVM, and striping in LVM.

I will also write up my experiences with the ISMS implementation in our company, so that it will be helpful for others who need it.

Thanks for coming here.

Monday, August 04, 2008

Dual MySQL services in single server

Requirement:
To install one more MySQL instance (server) and make it run on another port. The second instance must use a different data directory (of course!). I assume that you already have a working copy of MySQL on the server, listening on any port (default: 3306) and storing data anywhere (default: /var/lib/mysql).

Steps:

# cd /usr/local/src/
Download MySQL from here: http://dev.mysql.com/get/Downloads/MySQL-5.1/mysql-5.1.26-rc.tar.gz/from/pick#mirrors

# tar zxfv mysql-5.1.26-rc.tar.gz
# cd mysql-5.1.26-rc
See what options are available with ./configure --help

Below are options we need to consider:
  --bindir=DIR           user executables [EPREFIX/bin]
  --sbindir=DIR          system admin executables [EPREFIX/sbin]
  --libexecdir=DIR       program executables [EPREFIX/libexec]
  --datadir=DIR          read-only architecture-independent data [PREFIX/share]
  --sysconfdir=DIR       read-only single-machine data [PREFIX/etc]
  --sharedstatedir=DIR   modifiable architecture-independent data [PREFIX/com]
  --localstatedir=DIR    modifiable single-machine data [PREFIX/var]
  --libdir=DIR           object code libraries [EPREFIX/lib]
  --includedir=DIR       C header files [PREFIX/include]
  --oldincludedir=DIR    C header files for non-gcc [/usr/include] - installation will take some libs from here.
  --infodir=DIR          info documentation [PREFIX/info]
  --mandir=DIR           man documentation [PREFIX/man]

Note that all are created by default under PREFIX/EPREFIX. By default EPREFIX=PREFIX.

Some more options to look into:
--with-tcp-port=port-number (default 3306)
--with-mysqld-user=username - no default

--with-unix-socket-path=SOCKET
    Where to put the unix-domain socket.  SOCKET must be an absolute file name.

 --with-plugins=PLUGIN[,PLUGIN..]
                          Plugins to include in mysqld. (default is: none)
                          Must be a configuration name or a comma separated list of plugins.
                          Available configurations are: none max max-no-ndb all.
                          Available plugins are: partition daemon_example
                          ftexample archive blackhole csv example federated
                          heap innobase myisam myisammrg ndbcluster.

Also consider these compiler settings: CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti"

I will be using /data/mysql for this instance's settings and data. The new instance should listen on port 33060. Let us start the config now:

# ./configure --prefix=/data/mysql --with-unix-socket-path=/data/mysql/mysql.sock --with-tcp-port=33060 --with-plugins=all
# make
# make install

This will take about 15 minutes

# cp support-files/my-medium.cnf /data/mysql/my.cnf (if you want to configure anything)
# cd /data/mysql/
# chown -R mysql .
# chgrp -R mysql .
# ls (there will not be a var directory yet; the command below creates it along with all the necessary system databases)

# ./bin/mysql_install_db --user=mysql

Start the server:
# /data/mysql/bin/mysqld_safe --defaults-file=/data/mysql/my.cnf &
(If you created /data/mysql/my.cnf, --defaults-file makes sure this instance reads that file and no other; otherwise you can leave the option off.)

And test it:
# mysql -h 127.0.0.1 -P 33060 -p
(Note: -P is silently ignored when the client connects over the default Unix socket, so force a TCP connection with -h 127.0.0.1, or use -S /data/mysql/mysql.sock instead.)

It worked.
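If you later want to tune this instance, a minimal my.cnf for it might look like the fragment below (the values simply mirror the configure flags used above; the datadir assumes the default PREFIX/var layout):

```
[mysqld]
port    = 33060
socket  = /data/mysql/mysql.sock
datadir = /data/mysql/var

[client]
port    = 33060
socket  = /data/mysql/mysql.sock
```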

You can set up other accounts, change the root password, etc. after this. Refer to the manual below for all other options.

http://dev.mysql.com/doc/refman/5.1/en/quick-install.html

Sunday, June 01, 2008

how to run a cronjob on last Friday of every month

A crontab entry in linux looks like this:
10 10 * * 7 /path/to/your/script

This says to run /path/to/your/script at 10:10am on Sunday.

The first five fields are:
minute (0-59)
hour (0-23)
day of the month(1-31)
month of the year (1-12)
day of the week (0-7 with 0 &7 = Sunday, 1 = Monday etc.,)

To run a script on the last Friday of every month you would typically put in an entry like:
10 10 28-31 * 5 /path/to/your/script

But the above will run your script on the 28th, 29th, 30th and 31st of every month, and additionally on every Friday. When both the day-of-month and day-of-week fields are restricted, cron executes the command if either one matches.

To run your script only on every last Friday/weekday of a month use the below entry:
10 10 * * 5 [ $(date +"\%m") -ne $(date -d 7days +"\%m") ] && /path/to/your/script

The above entry is executed every week. It has two parts:

First part:
[ $(date +"\%m") -ne $(date -d 7days +"\%m") ] checks whether the current month differs from the month seven days from now (the same weekday next week). If the two months are equal, today's Friday is not the last Friday of the month; if they differ, it is.

Second Part: your actual script /path/to/your/script

We have glued both parts together with &&. This gates the execution of the second part: it runs only when the first part succeeds (the months returned by the two date commands differ).

Also note the \ before %m in the entry. Cron treats an unescaped % specially (it marks the start of the command's standard input), so it has to be escaped with \.
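The month-comparison trick can be sanity-checked from an interactive shell using fixed dates (GNU date assumed; the helper function name is mine). 2024-01-19 and 2024-01-26 are both Fridays, and only the 26th is the last Friday of its month:

```shell
# Succeeds when the given date's weekday does not recur in the same month,
# i.e. when it is the last such weekday of its month.
is_last_weekday_of_month() {
  [ "$(date -d "$1" +%m)" -ne "$(date -d "$1 + 7 days" +%m)" ]
}

is_last_weekday_of_month 2024-01-19 && echo yes || echo no   # prints: no
is_last_weekday_of_month 2024-01-26 && echo yes || echo no   # prints: yes
```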

References:
http://www.unix.com/answers-frequently-asked-questions/13527-cron-crontab.html
http://www.unix.com/unix-advanced-expert-users/11562-can-cron-do.html
http://forums.macosxhints.com/archive/index.php/t-34624.html

Thursday, May 29, 2008

Enabling https (SSL) in Tomcat

To enable SSL in Tomcat you need a certificate signed by a Certificate Authority (some hosting companies also sell SSL certificates). The work does not end with the purchase: you first create a certificate request, which your CA will sign and authorize.

Below are the steps I followed to enable SSL in Tomcat.

Note: Remember to use the full path of each filename. I usually create these files in a new subdirectory.

Creating Certificate Request

$ keytool -genkey -alias tomcat -keyalg RSA -keystore tomcat.key

The above command generates a keystore file (tomcat.key) using the RSA algorithm, with the key aliased as tomcat. A keystore is simply a file that holds your key and, later, the CA's certificates. The command prompts you for the following information, so please keep it handy:
Keystore password: <any thing you want>
First & Last Name: <your name/any authorized person's name>
Organizational Unit: <Dept in your company>
Organization Name: <Your company name>
Your City:
State:
Country Code: <eg: US>

The following command actually creates the CSR:
$ keytool -certreq -keyalg RSA -alias tomcat -file certreq.csr -keystore tomcat.key

Depending on whom you bought the certificate from, you may have to paste the contents of certreq.csr into your CA's site, or your CA may ask you to send it as an email attachment.

Keys from CA:

You will get the issued certificate from the CA. You may also have to download two more certificates (root & intermediate) from your CA's website. Import the root cert first, then the intermediate, followed by your site's certificate. Below are the steps:

First import root cert.
$ keytool -import -alias root -keystore tomcat.key -trustcacerts -file <root.crt>

Then the intermediate cert
$ keytool -import -alias intermed -keystore tomcat.key -trustcacerts -file <intermediate.crt>

Finally your site's cert
$ keytool -import -alias tomcat -keystore tomcat.key -trustcacerts -file <yourdomain.com.crt>

yourdomain.com.crt is the cert file issued by CA for your domain.

How to list the imported Certificates?

$ keytool -list -keystore tomcat.key

The keystore file is now ready. Let us now configure Tomcat to use it.

Edit the server.xml of the Tomcat instance you want to enable https on. Search for the lines below:

<Connector port="8443" maxHttpHeaderSize="8192"
               maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
               enableLookups="false" disableUploadTimeout="true"
               acceptCount="100" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" />

The above lines are commented by default. Remove the comment signs <!-- before and  --> after them.

Add the below line anywhere in between them:
keystoreFile="/path/to/keystore/tomcat.key" keystorePass="changeit"
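Putting it together, the uncommented connector would read roughly as below (keystore path and password are placeholders for your own values):

```
<Connector port="8443" maxHttpHeaderSize="8192"
           maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
           enableLookups="false" disableUploadTimeout="true"
           acceptCount="100" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/keystore/tomcat.key"
           keystorePass="changeit" />
```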

Restart the tomcat service. We are done.

You should now be able to browse https://localhost:8443 from the machine where Tomcat is installed, and view the certificate to cross-check it.

Thursday, April 03, 2008

How to configure 2 or more tomcat instances in the same server

We have already seen how to install tomcat in Linux.

Now let us create one more instance. As you may already know, we cannot have two services listening on the same IP and port, so for the second Tomcat instance we have to change the port settings.

This tutorial is in continuation of my first post on installing tomcat. You can read it here. I assume that you already have one tomcat instance up and running (as a special user 'myuser').

Login as or su as myuser:
# su - myuser
$ cd ~

$ unzip /your/download/directory/apache-tomcat-x.x.x.zip (or)

if downloaded gzip file then
$ tar zxfv /your/download/directory/apache-tomcat-x.x.x.tar.gz

$ mv apache-tomcat-x.x.x test-tomcat1
$ cd ~/test-tomcat1/conf

Changing the default ports:
$ cp server.xml server.xml.orig
$ vi server.xml

Change the shutdown port, 8005 by default (as shown in the line below):

Server port="8005" shutdown="SHUTDOWN"

to:
Server port="9005" shutdown="SHUTDOWN"


Change the non-SSL HTTP/1.1 connector port 8080 (as shown in the line below):

Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" ...

to:
Connector port="9080" maxHttpHeaderSize="8192" maxThreads="150" ...


Change the AJP connector port 8009 (as shown in the line below):
Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3"

to:
Connector port="9009" enableLookups="false" redirectPort="9443" protocol="AJP/1.3"


If you have enabled the proxied HTTP connector port 8082 (found in the line below):
Connector port="8082" maxThreads="150" minSpareThreads="25" ...

Change it to:
Connector port="9082" maxThreads="150" minSpareThreads="25" ...


If you have enabled the https port 8443 (found in the lines below):

Connector
port="8443" maxHttpHeaderSize="8192"
keystoreFile="/home/tomcat/keystore/tomcat.key" scheme="https"
secure="true" clientAuth="false" sslProtocol="TLS"

Change it to:
Connector
port="9443" maxHttpHeaderSize="8192"
keystoreFile="/home/myuser/keystore/domain.key" scheme="https"
secure="true" clientAuth="false" sslProtocol="TLS"


Now give executable permission to the scripts:
$ cd ~/test-tomcat1/bin/
$ chmod u+x *.sh

Create a directory for your Servlet's log files:
$ mkdir ~/myservlet1

And add aliases:

$ alias tomcat1-down='~/test-tomcat1/bin/shutdown.sh'
$ alias tomcat1-up='cd ~/myservlet1; ~/test-tomcat1/bin/startup.sh'

(Note the quotes: without them the definition breaks at the space/semicolon. Also note the directory is test-tomcat1, this instance's install, not test-tomcat.)

Remember to put alias entries in ~/.bashrc so that they are persistent.

Startup tomcat:
$ tomcat1-up

Open your browser. You must be able to view both tomcat test pages by now.
Page1: http://localhost:8080/
Page2: http://localhost:9080/
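All of the edits above follow one pattern: an 8xxx port becomes 9xxx. As a convenience (my own shortcut, not part of the original steps), a single sed pass can do the whole job; it is demonstrated below on a miniature server.xml, and you should diff the result against the .orig backup before trusting it on a real file:

```shell
# Shift every 8xxx port attribute (port= and redirectPort=) to 9xxx in one pass.
tmp=$(mktemp -d)
cat > "$tmp/server.xml" <<'EOF'
<Server port="8005" shutdown="SHUTDOWN">
  <Connector port="8080" redirectPort="8443"/>
  <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
</Server>
EOF

# matching on 'ort="8xxx"' covers both port= and redirectPort= without
# touching the attribute names; -i.orig keeps a backup copy
sed -i.orig 's/ort="8\([0-9][0-9][0-9]\)"/ort="9\1"/g' "$tmp/server.xml"
grep -c '"9' "$tmp/server.xml"   # prints: 3 (every connector line now uses 9xxx)
```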

Tuesday, April 01, 2008

Installing tomcat in Linux

We will need Java to install tomcat. You can download it here. I used JDK 6 Update 5.

I am not going into the details of installing jdk. Make a note of where you install it. My jdk location is: /usr/java/jdk1.6.0_5

Download tomcat from here. I used 6.0.16

By default Tomcat listens on port 8080, so you will not need super-user permissions (only ports below 1024 require root). Create a dedicated user for running Tomcat:
# useradd myuser
# passwd myuser

# su - myuser

$ unzip /your/download/directory/apache-tomcat-x.x.x.zip (or)

if downloaded gzip file then
$ tar zxfv /your/download/directory/apache-tomcat-x.x.x.tar.gz

This creates a directory apache-tomcat-x.x.x. It is always good practice to rename the directory and give it some meaningful name:
$ mv apache-tomcat-x.x.x test-tomcat

Set JAVA_HOME (it should point at the JDK directory itself, not its bin subdirectory):
$ export JAVA_HOME=/usr/java/jdk1.6.0_5

Optionally set CATALINA_HOME (if you are running only one instance):
$ export CATALINA_HOME=~/test-tomcat

Remember to put the above entries in user's .bash_profile.

Starting tomcat:

$ cd ~/test-tomcat/bin/
$ chmod u+x *.sh ( so that the sh scripts can be executed)
$ cd ~/myservlet/ (where you store all config files needed by your servlet)
$ ~/test-tomcat/bin/startup.sh

Note: Normally any output/log file generated by your servlet will be created in the directory from which you started Tomcat. I generally use a directory named after my servlet; for this tutorial I am using ~/myservlet as that directory.

If your servlet isn't generating logs of its own then you can view the logs in ~/test-tomcat/logs/ directory. This directory has many files. Two files are particularly important.
1. catalina.out is the main log file (watch out: it grows quickly on a busy site with heavy traffic and logging).
2. There are also individual files for each day (eg format: catalina-yyyy-mm-dd.log).

As I said above, the Tomcat service listens on port 8080 and serves webpages/servlets through it. It also listens on the following default ports:

8005 - Shutdown port (you can telnet to this port from localhost and shut down the Tomcat service by issuing SHUTDOWN in the telnet session; AFAIK you cannot enable this for remote connections).

8009 - AJP connector port (used to connect this tomcat instance with other tomcat or apache instances. Apache communicates with tomcat through this port).

8443 - SSL port (disabled by default).

To make life easier add aliases for shutting down and starting the tomcat instances:
$ alias tomcat-down='~/test-tomcat/bin/shutdown.sh'

$ alias tomcat-up='cd ~/myservlet; ~/test-tomcat/bin/startup.sh'

Remember to put the alias entries in ~/.bashrc (note the quotes; without them the definition breaks at the space/semicolon). Also note that the above aliases will work only if Tomcat is installed in that user's home directory.

Tuesday, March 11, 2008

Using the same LUN in multiple servers

Setting up SAN and sharing a single LUN to multiple servers.

When I first started working with SAN, all I knew was that I could create LUNs (slices of space in the storage) and use them in servers as drives. What I did not know was that I cannot (and should not) access the same LUN from multiple servers simultaneously. I learned it the hard way.

Usually I test things before putting them in production. In my first attempt I created a LUN and made it part of two Storage Groups, each with one server.

Well, I could see the 'test' LUN on both servers and mount it on both machines simultaneously. I tried writing some data to the LUN from server1 and viewing it from server2: it did not show up. I could not even list the files written on server1 from server2, or vice versa. It is actually not a problem with the SAN or the servers; the problem was with my setup. While I can assign the same LUN to multiple storage groups, one LUN must NOT be accessed from both servers simultaneously. We ended up losing data here.

But why?

Because there is no way for a server to know about the disk blocks changed by the other server. Since LUNs are presented to servers as SCSI drives, servers cannot coordinate changes in disk blocks without special software. We need clustered file systems for this. A clustered file system is a file system that is simultaneously mounted on multiple servers. There are several of them, such as GFS, OCFS, Veritas CVM, Lustre, etc.

We will now see how to configure GFS.

The below described installation was done with the following setup:
SAN Storage box
SAN Fibre switch - Brocade
4 servers - each connected with one HBA card directly to the SAN. I had 4 RAID groups, each with one server. Two servers (cluster nodes) were running CentOS 4.4 and the other two 4.3.
Create one LUN in the SAN and assign it to all RAID groups.

Note: I used dlm as cluster locking method. More on that later.

To know more about setting up the hardware click here
Official documentation of setting up GFS can be found here

Below are few things to be done before implementing GFS:

You will be doing the whole setup as root.

Make sure clocks on the GFS nodes are synchronized (I used ntp for this).

You will need one fencing device for the cluster to work. I used Brocade Fabric Switch for fencing. GFS will not work without fencing.

Always make sure that the kernel you use has the gfs modules; some kernels don't. All you have to do is install a kernel with gfs support and boot the server with that kernel (I used 2.6.9-55.0.2.ELsmp).

We will use one server (the server node) for deploying the config across the other nodes. Enable passwordless login from this server to the other nodes in the cluster.

You will need GUI access on the server node for doing the cluster configuration. I have also given a sample text configuration file for those who don't have GUI access; the file can be used as-is (of course, after substituting your values).

All the nodes & fencing device in the cluster must be in the same subnet. Else you may have to configure multicast.

FQDN names of all nodes must be resolvable. Else put appropriate entries in /etc/hosts
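If DNS cannot resolve the node names, /etc/hosts entries like the ones below are enough (addresses and names are made up; there is one line per cluster node, on every node):

```
# /etc/hosts: map each cluster node's FQDN to its address
192.168.10.11   server1.example.net   server1
192.168.10.12   server2.example.net   server2
192.168.10.13   server3.example.net   server3
192.168.10.14   server4.example.net   server4
```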

Also keep the following details handy

Number of file systems ( 1 in my case). More can be added later.
File-system name (sk)
Journals ( I used 4)
GFS nodes ( 4 )
Storage devices and partitions

You can easily install Cluster Suite & gfs from the csgfs repo, as explained below (run these commands as root in all nodes and the server).

# cd /etc/yum.repos.d/
# wget http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo
# rpm --import /usr/share/doc/centos-release-4/RPM-GPG-KEY

# yum install rgmanager system-config-cluster magma magma-plugins cman cman-kernel dlm dlm-kernel fence gulm iddev GFS GFS-kernel gnbd gnbd-kernel lvm2-cluster GFS-kernheaders gnbd-kernheaders
Note: If you want the server to run with smp kernel install the appropriate rpms (eg: cman-kernel-smp)

Boot the server with the new kernel (that has gfs support). Repeat this in all cluster nodes and server node.

Configuration Steps:
Step 1: Defining & creating the cluster config

In the server node type:
# system-config-cluster

Choose File->New
Give cluster name
Choose DLM as locking method
Click ok to accept the configuration.

Then you will see the cluster you just created.

Step 2: Adding a fence device

Click on Fence devices and Add a Fence Device. A window named Fence Device Configuration opens up. Choose the device you have for fencing (mine is Brocade Switch).
Fill in the Name, IP address, Login & Password for the device.
Click ok to accept the changes.

Step 3: Adding a node to the cluster
In the cluster window choose cluster nodes and click on add a cluster node
Give the FQDN of the server node. Leave the quorum votes blank. Click OK.

Step 4: Adding fencing for the node
Choose the node you just added.
Click on Manage fencing for this node. This will open a new window called Fence Configuration.
Now choose add a fence level. No need to enter anything.
Click Close (make sure that the fence level was added for that node; you will see the details if you click on the node name).

Repeat the steps 3 & 4 for all other nodes. Substitute the other node names instead of server node name in Step 3.

Save the config file (default: /etc/cluster/cluster.conf). You will now get a Send to Cluster button. Click it to copy the config file to all the cluster nodes (this is done over scp; that is why we configured the server to log in to the nodes without a password).

Below is a sample config file (cat /etc/cluster/cluster.conf):
<cluster alias="Test" config_version="1" name="Test">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="server1.example.net" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
<clusternode name="server2.example.net" votes="1">
<fence>
<method name="1"/>
</fence>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_brocade" ipaddr="x.x.x.x" login="user" name="Switch" passwd="test"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
You can also create the file yourself in text mode and copy it to the nodes. Enter the other nodes' info between the tags, as in the example above.

Step 5: Check if you have all needed modules

Verify that the configuration is copied to all the servers and then proceed.

Reboot all the nodes with gfs kernel and check whether they have gfs & dlm enabled. Below are the steps and output from my server node:
# lsmod|grep gfs
gfs 290652 0
lock_harness 8992 2 gfs,lock_dlm
cman 124896 3 gfs,lock_dlm,dlm

# lsmod|grep dlm
lock_dlm 44640 0
lock_harness 8992 2 gfs,lock_dlm
dlm 116580 1 lock_dlm
cman 124896 3 gfs,lock_dlm,dlm

If nothing shows up, please follow the below steps:
# modprobe gfs
# modprobe lock_dlm
# modprobe dlm

To make the modules persistent across reboots, run the commands below (they append modprobe lines to /etc/rc.modules):
# echo modprobe dlm >> /etc/rc.modules
# echo modprobe lock_dlm >> /etc/rc.modules
# echo modprobe gfs >> /etc/rc.modules
# chmod +x /etc/rc.modules
Also turn on these cluster services with chkconfig: ccsd, cman, fenced, gfs, rgmanager, and clvmd (the last only if you are planning to cluster LVMs).

Proceed to Step 6 only if Step 5 is successful, i.e., if all modules are detected after a fresh reboot of the server.

Step 6: Making a GFS partition or LVM (Do this only if Step 5 was successful)

Now we are going to create a clustered File system(GFS) partition in the server and use it in all nodes.

On the server, create a partition inside the LUN drive (now shown as a SCSI drive on the server). It can be a normal partition or an LVM. Below are the steps to create a clustered LVM (on my system the LUN disk was scanned as sdb; verify and substitute yours in the config below).

# fdisk /dev/sdb
Create one partition.

Choose either of the two options.

Step 6: Option 1
# pvcreate /dev/sdb1
# vgcreate VG /dev/sdb1
# vgchange -aey VG (To make it as clustered VG)

# lvcreate -L1023G -n LV1 VG ( Remember to substitute the size of your disk )
# gfs_mkfs -p lock_dlm -t Test:sk -j 8 /dev/VG/LV1
Step 6: Option 2
For a normal partition the command would be:
# gfs_mkfs -p lock_dlm -t Test:sk -j 8 /dev/sdb1

Note: I have used Test:sk; substitute your clustername:fsname here.

The gfs_mkfs command above creates the GFS file system on our test LV, LV1.

Step 6a: Activating clustered LVMs (Optional - Skip this if you are using an ordinary partition).

Since we used a clustered LVM, you have to modify the following two lines in the /etc/lvm/lvm.conf file. Do this on all nodes and the server node:

locking_type = 2 (by default it is 1)
locking_library = "liblvm2clusterlock.so"


After this, restarting clvmd will list all logical volumes, including the clustered LVMs. Note: you have to restart clvmd on all nodes. Rebooting all the nodes and the server will often clear up problems.

Step 7: Mounting the GFS partition/LVM on all nodes

Issue this command in all cluster nodes and server node.

For normal partition:
# mount -t gfs /dev/sdb1 /mnt

For CLVMs:
# mount -t gfs /dev/VG/LV1 /mnt

You can now write data in /mnt and access it from the other servers without problems.

To make the mounts persist across reboots, put the entry below in /etc/fstab on all nodes and the server (the examples use /MyData as the mount point; create that directory first):

For normal partitions:
/dev/sdb1 /MyData gfs defaults 0 0

For CLVMs:
/dev/VG/LV1 /MyData gfs defaults 0 0

GFS module in CentOS RPM

How to check if the kernel you have in the server has gfs modules?

When I was installing gfs support on one of my servers I wanted to check whether the kernel I used had the gfs modules; without them gfs cannot work. It was simple to find out.

After installing the kernel and gfs rpms, just list the files under the kernel modules directory:


# cd /lib/modules/2.6.9-55.0.2.ELsmp
# ls kernel/fs/
autofs4 cramfs ext3 freevxfs gfs_locking hfsplus jffs2 msdos nfs_common nls udf
cifs exportfs fat gfs hfs jbd lockd nfs nfsd smbfs vfat


You can even list the files in gfs and gfs_locking directories

# ls kernel/fs/gfs
gfs.ko

# ls kernel/fs/gfs_locking/
lock_dlm lock_gulm lock_harness lock_nolock


It gets tricky if you also want other modules like xfs: you need to find a kernel RPM that carries both the xfs and gfs modules, or if you have time to play you can compile them all yourself.

Tuesday, February 26, 2008

Intro to SAN config

I am trying to explain the terms used while configuring SAN. You can use the below details as reference for creating/reconfiguring LUNs in a SAN Storage.

The model I worked on is an EMC CX-320C. It had two DAEs (Disk Array Enclosures): DAE1 was populated with FC drives and DAE2 with SATA II. You should be able to figure out the corresponding options in other storages easily.

EMC has a web interface, called Navisphere Manager, for configuring the drives and RAID levels and preparing them for storing data.

In EMC's interface you will see a list of things: hosts connected to the SAN, physical interfaces, SPA and SPB details (and the drives in them), RAID Groups, and Storage Groups. Initially there won't be any RAID/Storage Group.

First we have to configure the drives into RAID groups of a specific level (0/3/5) and then add them to a Storage Group.

RAID Group: (RG)
A collection of drives operating in a RAID. When creating a new RAID group you have to specify the following:
1. RAID group id (Unique).
2. Number of disks you are going to add (you can select the drives from specific DAE).
3. RAID type - HotSpare, Disk, Unbound (more on this later). If you select more than 3 drives, RAID 0/3/5 will show up.


Best practices while creating RGs:
1. Do not combine different disk types into a single RG. Combining them makes the RAID array operate at the lowest speed available. E.g., create separate RGs for FC drives and SATA II drives; combining them forces the FC drives to operate at SATA II speed.
2. Remember to keep one drive as a global hot spare for the whole array/DAE, so that it kicks in when one of the drives in the array fails.
3. RAID 0 is for highest performance (no redundancy) while RAID 5 is for high redundancy (with, of course, a performance hit).
4. Do not combine drives from two DAEs into a single RG, even if they are of the same type and speed.

LUN:

After creating a RAID group you will create LUNs. LUNs are simply partitions presented as (SCSI) drives to servers, and are always part of a RAID group. You cannot assign an RG as a whole to a Storage Group; instead you assign individual LUNs to SGs.


If you specified 'Unbound' as the RAID type while creating the RG, you can specify the RAID type when creating a LUN. Remember that all LUNs in an RG must be of the same RAID type.

Storage Group:

A Storage Group is a collection of LUNs, hosts, SAN Copy connections, etc. (we use these three; there may be more that I am not aware of). A unique name must be given to every Storage Group.

Create a Storage Group as you desire and add LUNs and hosts to it. You can either select LUNs and connect hosts from the Storage Group, or add a LUN to a Storage Group from an RG.

Note the following:
1. You can add one LUN to multiple Storage Groups, but remember that only one server from those SGs may access the LUN. If you want more than one server to access a single LUN simultaneously, the LUN must carry a clustered file system (e.g., GFS).
2. You can add a host to only one SG. Adding the same host to another SG will remove it from the old SG.
3. An SG can contain any number of LUNs.