Linux Fibre Channel SAN
So I needed a SAN for my lab environment that could serve my Windows cluster, so it was time to build one. I tried several builds and a couple of different Fibre Channel cards (QLogic 2462 and Emulex 1150). The Linux distros I tried were CentOS 5.3/6.4, OpenSUSE 13.1, Fedora 19, Debian 7.2, and Ubuntu 12.04.3, with both LIO (TCM) and SCST. I found that Ubuntu worked best with the QLogic 2462 and SCST. This was not an easy project, but Ubuntu has proven reliable and works well for my Windows 2012 R2 Hyper-V cluster, including VMs using synthetic (virtual) Fibre Channel adapters.
The hardware this is built on is:
ASUS Sabertooth motherboard with an FX-8150 CPU
24 GB DDR3 RAM (far more than is really needed)
QLogic QLA2462 Fibre Channel Card
ATI Graphics adapter
LSI MegaRAID 9260-8i SAS controller with 512MB cache and BBU (see my posting on installing LSI MegaRAID Storage Manager on Ubuntu if you want to use that to monitor your physical disks)
2-500GB SATA 7.2K Drives RAID-1 (System)
8-450GB 15K SAS RAID-1 (Fast VM Host Storage)
2-1TB 7.2K SATA RAID-1 (Slow VM Host Storage)
4-2TB 7.2K SATA RAID-1 (Data Drives)
I did a very basic server build on Ubuntu, using only about a third of the system drive space. The rest I am saving for witness drives and a VTL (which I will build later).
Now the fun part: building SCST. I enter sudo with the -i switch so I stay root through the whole process:
sudo -i
Install the prerequisites:
apt-get install fakeroot kernel-wedge build-essential makedumpfile kernel-package libncurses5 libncurses5-dev subversion
apt-get build-dep --no-install-recommends linux-image-$(uname -r)
Get the kernel source and create a couple of symbolic links to make building the kernel easier. I am using kernel 3.15; you can see your kernel folders by doing an ls on /usr/src:
mkdir /usr/src
cd /usr/src
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.15.5.tar.gz
tar -zxvf linux-3.15.5.tar.gz
ln -s linux-3.15.5 linux
ln -s linux-3.15.5 kernel
Now let's start to build SCST by downloading the source:
cd ~
svn co https://svn.code.sf.net/p/scst/svn/trunk scst
You used to have to patch the kernel heavily, but with recent SCST releases and 3.x kernels most of that is no longer needed. One patch remains: scst_exec_req_fifo. The patch file depends on your kernel version; I am using kernel 3.15:
cp ~/scst/scst/kernel/scst_exec_req_fifo-3.15.patch /usr/src/linux/
cd /usr/src/linux
patch -p1 < scst_exec_req_fifo-3.15.patch
Now we need to swap the in-kernel QLogic FC driver (qla2xxx) for SCST's target-capable version, then make it:
mv /usr/src/linux/drivers/scsi/qla2xxx /usr/src/linux/drivers/scsi/qla2xxx_orig
cp ~/scst/qla2x00t /usr/src/linux/drivers/scsi/qla2xxx -r
In older kernels you may need to enable the QLogic QLA2xxx target-mode driver; either way, you need to run menuconfig so the configuration gets saved.
cd /usr/src/linux
make menuconfig
Now it is time to compile the kernel. Compiling can take a long time; to speed it up a wee bit, set the concurrency level based on the number of cores your processor has (CONCURRENCY_LEVEL = cores + 1, so if you have 4 cores it will be 5):
export CONCURRENCY_LEVEL=5
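Rather than hard-coding the number, you can derive it from the actual core count. A small sketch (assumes GNU coreutils' nproc, present on Ubuntu):

```shell
#!/bin/sh
# Compute CONCURRENCY_LEVEL as cores + 1 instead of hard-coding it.
CONCURRENCY_LEVEL=$(( $(nproc) + 1 ))
export CONCURRENCY_LEVEL
echo "building with CONCURRENCY_LEVEL=$CONCURRENCY_LEVEL"
```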
make-kpkg clean
fakeroot make-kpkg --initrd --append-to-version=-scst kernel_image kernel_headers
cd /usr/src
dpkg -i linux-image-3.15.5-scst_3.15.5-scst-10.00.Custom_amd64.deb
dpkg -i linux-headers-3.15.5-scst_3.15.5-scst-10.00.Custom_amd64.deb
Now we need to update grub and reboot so the new kernel can be loaded.
update-grub
reboot
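After the reboot, it is worth confirming you are actually on the new kernel before going further. A quick hedged check (the helper just looks for our -scst suffix in `uname -r`):

```shell
#!/bin/sh
# Report whether the running kernel carries the custom suffix we
# appended with --append-to-version.
running_kernel_has() {
    case "$(uname -r)" in *"$1"*) return 0 ;; *) return 1 ;; esac
}

if running_kernel_has "-scst"; then
    echo "custom SCST kernel is running"
else
    echo "still on the stock kernel - check grub"
fi
```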
Blacklist and unload the old QLogic card driver:
sudo -i
echo "blacklist qla2xxx" > /etc/modprobe.d/blacklist-qla2xxx.conf
rmmod qla2xxx
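Before building the SCST version it's worth verifying the stock driver is really gone. A small check that reads /proc/modules directly (so it works even if lsmod isn't on the PATH):

```shell
#!/bin/sh
# Return success if the named module appears in /proc/modules.
is_loaded() {
    awk '{print $1}' /proc/modules | grep -qx "$1"
}

if is_loaded qla2xxx; then
    echo "qla2xxx is still loaded - rmmod it before continuing"
else
    echo "qla2xxx is not loaded"
fi
```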
The next step is to build SCST:
cd ~/scst
BUILD_2X_MODULE=y CONFIG_SCSI_QLA_FC=y CONFIG_SCSI_QLA2XXX_TARGET=y make all install
cd ~/scst/scst/src
make all
make install
cd ~/scst/scstadmin
make all
make install
cd ~/scst/qla2x00t
make
cd ~/scst/qla2x00t/qla2x00-target
make
make install
update-rc.d scst defaults
Let's make sure our QLogic card is working and that we can see our WWNs:
cat /sys/class/fc_host/host*/port_name
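Sysfs prints port names as raw hex (e.g. 0x21000024ff123456), while scstadmin and switch zoning tools usually show the colon-separated form. A small hypothetical helper (the example WWN below is made up) to reformat them:

```shell
#!/bin/sh
# Reformat a sysfs port_name (0x-prefixed hex) into colon-separated
# WWN notation: insert a colon after every byte, then trim the last one.
wwn_pretty() {
    echo "$1" | sed -e 's/^0x//' -e 's/../&:/g' -e 's/:$//'
}

wwn_pretty 0x21000024ff123456   # -> 21:00:00:24:ff:12:34:56
```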
Now unload any leftover driver and handler modules so the SCST init script can load everything cleanly from its own configuration (rmmod will complain harmlessly about modules that aren't loaded):
rmmod qla2xxx
rmmod qla2x00tgt
rmmod scst_disk
rmmod scst_vdisk
rmmod scst_user
rmmod scst_modisk
rmmod scst_processor
rmmod scst_raid
rmmod scst_tape
rmmod scst_cdrom
rmmod scst_changer
rmmod ib_srpt
rmmod iscsi_scst
rmmod qla2xxx_scst
service scst restart
The final part is to create your targets and initiators. Your targets are the WWNs on your local card; your initiators are the WWNs on the machines that need access to the LUNs you are going to advertise. Hypothetically, here is our system:
Local System (SAN) has an WWN address of 21:00:00:2c:21:81:fe:e4
You have a 2-node cluster that needs access to the same LUNs. The initiator addresses (the FC card WWNs on each cluster node) are:
Node 1: 10:00:00:00:B2:63:1A:88
Node 2: 10:00:00:00:A4:EF:2B:77
I will be advertising an entire logical disk and a 1GB logical volume that I will use as a witness disk for my cluster. The disk and volume I want to advertise are:
(Logical Disk) /dev/sdd
(Logical Volume) /dev/SANDisk/1GB-Witness
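The /dev/SANDisk/1GB-Witness path implies an LVM volume group named SANDisk. If you still need to carve out that witness volume, a sketch follows; /dev/sde is a hypothetical spare disk, and RUN defaults to echo so the commands are only previewed (set RUN= to actually apply them):

```shell
#!/bin/sh
# Preview (or, with RUN=, execute) creating the 1GB witness LV
# inside a volume group named SANDisk on a hypothetical disk.
RUN="${RUN:-echo}"
$RUN pvcreate /dev/sde
$RUN vgcreate SANDisk /dev/sde
$RUN lvcreate -L 1G -n 1GB-Witness SANDisk
```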
First, enable your target:
scstadmin -enable_target 21:00:00:2c:21:81:fe:e4 -driver qla2x00t -noprompt
Now let's create the Target group for the disks (replace GroupName with the name you want to give the group):
scstadmin -add_group GroupName -driver qla2x00t -target 21:00:00:2c:21:81:fe:e4 -noprompt
Now we need to add the disks/volumes and put them in the group (change DeviceName to a name for your device; GroupName is the name you used in the last step):
scstadmin -open_dev DeviceName -handler vdisk_blockio -attributes filename=/dev/SANDisk/1GB-Witness -noprompt
scstadmin -open_dev DeviceName-1 -handler vdisk_blockio -attributes filename=/dev/sdd -noprompt
Now assign a LUN to each Device:
scstadmin -add_lun 1 -driver qla2x00t -target 21:00:00:2c:21:81:fe:e4 -group GroupName -device DeviceName -noprompt
scstadmin -add_lun 2 -driver qla2x00t -target 21:00:00:2c:21:81:fe:e4 -group GroupName -device DeviceName-1 -noprompt
Now add your Cluster Nodes (initiators):
scstadmin -add_init 10:00:00:00:B2:63:1A:88 -driver qla2x00t -target 21:00:00:2c:21:81:fe:e4 -group GroupName -noprompt
scstadmin -add_init 10:00:00:00:A4:EF:2B:77 -driver qla2x00t -target 21:00:00:2c:21:81:fe:e4 -group GroupName -noprompt
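One caveat: the scstadmin settings above live only in kernel memory. To have the SCST init script restore them after a reboot, write them out to the config file it reads (a sketch, assuming the scstadmin built from the SCST trunk is on the PATH):

```shell
#!/bin/sh
# Persist the running SCST configuration to /etc/scst.conf so
# "service scst start" can reload it at boot.
if command -v scstadmin >/dev/null 2>&1; then
    scstadmin -write_config /etc/scst.conf
else
    echo "scstadmin not found; skipping"
fi
```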
Your cluster machines should now see the two LUNs you just advertised, and they should be accessible like regular disks. You can create partitions on them and format them just as if they were local.
Once it is up and running, you can see the sessions connected to it by running:
scstadmin -list_sessions