Not everyone has all their NetApp Filers in service. You may have found some sitting in storage, or you may simply no longer need them. Either way, the Filer shelves are useful FC-AL arrays and shouldn't be considered usable only when attached to a Filer head. In this article I'll cover some tips for using the shelves from your Filers on any x86 Linux system, with some help from QLogic and Sistina LVM. I'll also cover some procedures for working with EVMS.
Getting Started:

Obviously, you'll need a Linux system with a free PCI slot, plus the Filer shelves. You'll also need a Fibre Channel Arbitrated Loop card with a copper interface; you can either use the QLogic 2100 from the Filer head or pick up a QLA2100 or QLA2200 off eBay for about $100. Make sure you also grab all the cables supplied with the shelves; if you are using an FC7/8 this includes the copper terminator.
As for software, you'll need a Linux box; the distribution doesn't matter, but you should have compilers and build tools installed. If your Linux system doesn't have LVM (and/or Linux MD RAID) support you'll need to build a custom kernel. Building a custom kernel is outside the scope of this article, but there is a large amount of documentation shipped with the kernel, and you can find help at The Linux Documentation Project. It's recommended that you build LVM and MD support into your kernel, along with the EVMS patches discussed below.
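For reference, here's a sketch of the relevant 2.4 kernel configuration options; the option names below are from a stock 2.4 tree, so verify them against your own .config:

CONFIG_BLK_DEV_LVM=y        # Sistina LVM
CONFIG_MD=y                 # multi-device support menu
CONFIG_BLK_DEV_MD=y         # Linux MD RAID
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y         # SCSI disk support (the FC disks show up as SCSI disks)
CONFIG_DEVFS_FS=y           # devfs, used for the /dev/scsi/... paths later in this article
CONFIG_SCSI_QLOGIC_FC=m     # the in-kernel qlogicfc driver, built as a module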
I do not recommend building the QLogic 2100 driver statically into the kernel; build it as a module instead. I generally prefer that all the supporting services (LVM, MD, EVMS) are built into the kernel and that the FC drivers are modularized. Using the FC driver as a module adds flexibility: if you stop, start, or add disks to your arrays you don't have to reboot the system, because the driver probes the loop and adds disks each time the module is loaded.
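For example, if you add disks to a shelf you can pick them up without a reboot by simply reloading the driver. A quick sketch, assuming nothing on the loop is mounted or otherwise in use:

[root@nexus benr]# umount /filer_shelf              # nothing on the loop may be in use
[root@nexus benr]# rmmod qlogicfc
[root@nexus benr]# insmod qlogicfc
[root@nexus benr]# cat /proc/partitions             # the new disks should now appear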
While LVM and Linux MD are already in the kernel code base, you can get the EVMS patches here. The kernel patches are included in the tarball with the userland code. I recommend using the latest stable kernel (2.4.20 at the time of this writing) for your custom kernel. You can find information on how to apply the EVMS patches to the kernel's code base in the tarball's README. Once you've applied the patches and rebuilt the kernel, you can finish by building the userland code.
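The exact patch file names change between EVMS releases, so treat this as an illustrative sketch rather than the literal commands; the tarball's README has the real names:

[root@nexus benr]# cd /usr/src/linux-2.4.20
[root@nexus linux-2.4.20]# patch -p1 < /path/to/the-evms-kernel-patch   # file name is illustrative
[root@nexus linux-2.4.20]# make menuconfig          # enable the new EVMS options
[root@nexus linux-2.4.20]# make dep bzImage modules modules_install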
Once you have your new kernel compiled and booted, you can add the QLogic module. When the module is loaded, check the dmesg output to verify that the disks initialized and LIPed; you can then confirm that they were added by looking at /proc/partitions or the /proc/scsi tree.
[root@nexus benr]# insmod qlogicfc
Using /lib/modules/2.4.20/kernel/drivers/scsi/qlogicfc.o
[root@nexus benr]# lsmod
Module                  Size  Used by
qlogicfc              170000   0  (unused)
[root@nexus benr]#

[benr@nexus scsi]$ cat scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 01 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 02 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 03 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 04 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 05 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 06 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 07 Lun: 00
  Vendor: SEAGATE  Model: ST136403FC  Rev: NA10
  Type:   Direct-Access               ANSI SCSI revision: 02
[benr@nexus scsi]$ cat isp2x00/1
QLogic ISP2100 SCSI on PCI bus 00 device 78 irq 3 base 0x1000
In order to use the NetApp disks with LVM you must destroy the first sector of each disk. Understand what you're doing before you do this. I've never tried to put a disk that I've wiped like this back into service on a NetApp Filer, but I would imagine it would be repaired by the firmware upgrade performed during a boot of the Filer head.
[root@nexus /root]# pvcreate /dev/sdb
pvcreate -- device "/dev/sdb" has a partition table

Remove each partition from the disk (/dev/sd?2 and /dev/sd?4) <- LVM Bug 266, use the devfs names.

Use this process:

[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target5/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target5/lun0/disc
[root@nexus benr]# pvcreate /dev/scsi/host1/bus0/target5/lun0/disc
pvcreate -- physical volume "/dev/scsi/host1/bus0/target5/lun0/disc" successfully created
[root@nexus benr]#
Above you can see that we need to remove the partition table in order to use the disk with LVM. We can use the full devfs SCSI path as above, or the shortened Linux device names (/dev/sdX) as below.
[root@nexus benr]# dd if=/dev/zero of=/dev/sda bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# blockdev --rereadpt /dev/sda
[root@nexus benr]# pvcreate /dev/sda
pvcreate -- physical volume "/dev/sda" successfully created
[root@nexus benr]#
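If you're wiping a whole shelf it's less typing to loop over the disks. A sketch, assuming the seven shelf disks are /dev/sda through /dev/sdg (check /proc/partitions first):

[root@nexus benr]# for d in /dev/sd[a-g]; do
>   dd if=/dev/zero of=$d bs=1k count=1
>   blockdev --rereadpt $d
>   pvcreate $d
> done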
Now that the devices are wiped and physical volumes have been created on each of the disks we want to use, we can create an LVM volume group (VG) and then create the logical volume itself.
[root@nexus benr]# cat /proc/partitions
major minor  #blocks  name

   8     0   35566480 scsi/host1/bus0/target1/lun0/disc
   8    16   35566480 scsi/host1/bus0/target2/lun0/disc
   8    32   35566480 scsi/host1/bus0/target3/lun0/disc
   8    48   35566480 scsi/host1/bus0/target4/lun0/disc
   8    64   35566480 scsi/host1/bus0/target5/lun0/disc
   8    80   35566480 scsi/host1/bus0/target6/lun0/disc
   8    96   35566480 scsi/host1/bus0/target7/lun0/disc
   3     0   20000232 ide/host0/bus0/target0/lun0/disc
   3     1    5124703 ide/host0/bus0/target0/lun0/part1
   3     2          1 ide/host0/bus0/target0/lun0/part2
   3     5   14337981 ide/host0/bus0/target0/lun0/part5
   3     6     530113 ide/host0/bus0/target0/lun0/part6

[root@nexus benr]# vgcreate cuddlevg /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
vgcreate -- no valid physical volumes in command line

[root@nexus benr]# vgcreate cuddlevg /dev/scsi/host1/bus0/target1/lun0/disc /dev/scsi/host1/bus0/target2/lun0/disc /dev/scsi/host1/bus0/target3/lun0/disc /dev/scsi/host1/bus0/target4/lun0/disc /dev/scsi/host1/bus0/target5/lun0/disc /dev/scsi/host1/bus0/target6/lun0/disc /dev/scsi/host1/bus0/target7/lun0/disc
vgcreate -- INFO: using default physical extent size 32 MB
vgcreate -- INFO: maximum logical volume size is 2 Terabyte
vgcreate -- doing automatic backup of volume group "cuddlevg"
vgcreate -- volume group "cuddlevg" successfully created and activated

[root@nexus benr]# vgchange -a y cuddlevg
vgchange -- volume group "cuddlevg" already active

[root@nexus benr]# pvdisplay /dev/scsi/host1/bus0/target1/lun0/disc
--- Physical volume ---
PV Name               /dev/scsi/host1/bus0/target1/lun0/disc
VG Name               cuddlevg
PV Size               33.92 GB [71132960 secs] / NOT usable 32.19 MB [LVM: 132 KB]
PV#                   1
PV Status             NOT available
Allocatable           yes
Cur LV                0
PE Size (KByte)       32768
Total PE              1084
Free PE               1084
Allocated PE          0
PV UUID               N7cLGL-oAtB-uJqR-y7a8-JaMs-GCqA-5qGz5E

[root@nexus benr]# vgchange -a y cuddlevg
[root@nexus benr]# vgdisplay cuddlevg
--- Volume group ---
VG Name               cuddlevg
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           2 TB
Max PV                256
Cur PV                7
Act PV                7
VG Size               237.12 GB
PE Size               32 MB
Total PE              7588
Alloc PE / Size       0 / 0
Free  PE / Size       7588 / 237.12 GB
VG UUID               tpjZEX-w48w-pq3F-VIbH-mkDK-fgDh-15CWY2

[root@nexus benr]# lvcreate -i7 -l4 -l7588 cuddlevg -n fc8lv
lvcreate -- INFO: using default stripe size 16 KB
lvcreate -- doing automatic backup of "cuddlevg"
lvcreate -- logical volume "/dev/cuddlevg/fc8lv" successfully created

[root@nexus benr]# lvdisplay fc8lv
lvdisplay -- a path needs supplying for logical volume argument "fc8lv"
[root@nexus benr]# lvdisplay /dev/
Display all 435 possibilities? (y or n)
[root@nexus benr]# lvdisplay /dev/cuddlevg/
fc8lv  group
[root@nexus benr]# lvdisplay /dev/cuddlevg/fc8lv
--- Logical volume ---
LV Name                /dev/cuddlevg/fc8lv
VG Name                cuddlevg
LV Write Access        read/write
LV Status              available
LV #                   1
# open                 0
LV Size                237.12 GB
Current LE             7588
Allocated LE           7588
Stripes                7
Stripe size (KByte)    16
Allocation             next free
Read ahead sectors     1024
Block device           58:0

[root@nexus benr]# mkfs -t jfs /dev/cuddlevg/fc8lv
mkfs.jfs version 1.1.0, 20-Nov-2002
Warning!  All data on device /dev/cuddlevg/fc8lv will be lost!
Continue? (Y/N) y
Format completed successfully.
248643584 kilobytes total disk space.
You have new mail in /var/spool/mail/benr
[root@nexus benr]#
[root@nexus benr]# mount -t jfs /dev/cuddlevg/fc8lv /filer_shelf
[root@nexus benr]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/ide/host0/bus0/target0/lun0/part1  4.8G  4.1G  530M  89% /
/dev/ide/host0/bus0/target0/lun0/part5   13G  8.2G  4.6G  64% /home
/dev/cuddlevg/fc8lv                     237G   30M  237G   1% /filer_shelf
[root@nexus benr]#
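If you want the filesystem to be easy to remount later, an /etc/fstab entry like the one below works; I'd use noauto here because the logical volume only exists after the FC module has been loaded and the volume group activated with vgchange -a y (the entry itself is just an illustration):

/dev/cuddlevg/fc8lv    /filer_shelf    jfs    noauto    0 0

After that, insmod the driver, run vgchange -a y cuddlevg, and a plain "mount /filer_shelf" will do.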
Let's use Bonnie++ to benchmark it.
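The invocation isn't captured in my notes for this run, but it was the same form used for the EVMS tests later in the article:

[benr@nexus benr]$ bonnie++ -d /filer_shelf/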
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nexus.homeste 1256M  6020  87 50626  49 34853  39  6871  98 59553  31 563.2   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   952   6 +++++ +++   360   2   413   6 +++++ +++   401   3
Here you can see that we're getting about 50MB/s writes (output) and 59.5MB/s reads (input). Not blistering performance for a 7-disk stripe, but given that we're recycling old gear it's pretty good.
Here's a look at how to set up and install QLogic's own FC driver, which performs better than the in-kernel driver we used above. You can get the driver from QLogic's website.
When using two shelves the in-kernel qlogicfc.o doesn't cut it; use the QLogic driver from qlogic.com. I used version 6.05.00b9. DO NOT install the module code into the kernel source tree, it's too big a pain. Unpack the source, fix the makefile, then run make. The fix is adding qla2100.o to the DRIVER= line, like this:

[benr@nexus qlogic]$ tar xfvz qla2x00-v6.05.00b9-dist.tgz
qlogic/
qlogic/drvrsetup
qlogic/ipdrvrsetup
qlogic/libinstall
qlogic/libremove
qlogic/qla2xipsrc-v1.0b5.tgz
qlogic/qlapi-v2.00beta4-rel.tgz
qlogic/readme.txt
qlogic/qla2x00src-v6.05.00b9.tgz
[benr@nexus qlogic]$ cd ql
bash: cd: ql: No such file or directory
[benr@nexus qlogic]$ cd qlogic/
[benr@nexus qlogic]$ ./drvrsetup
Extracting QLogic driver source...
Done.
[benr@nexus qlogic]$ vi makefile
   (Change the line:  DRIVER = qla2200.o qla2300.o
    to:               DRIVER = qla2100.o qla2200.o qla2300.o)
[benr@nexus qlogic]$ make
[benr@nexus qlogic]$ cp qla2100.o /lib/modules/2.4.20/kernel/drivers/scsi/
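If you want the new driver loaded automatically at boot, an alias in /etc/modules.conf (the 2.4-era modutils config) will do it; the host adapter number below is only an example and depends on your system:

alias scsi_hostadapter1 qla2100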
This time let's use IBM's EVMS (Enterprise Volume Management System) as an alternative to LVM/LVM2.
FC8 via EVMS:

[root@nexus benr]# cat /proc/partitions
major minor  #blocks  name

   3     0   20000232 ide/host0/bus0/target0/lun0/disc
   3     1    5124703 ide/host0/bus0/target0/lun0/part1
   3     2          1 ide/host0/bus0/target0/lun0/part2
   3     5   14337981 ide/host0/bus0/target0/lun0/part5
   3     6     530113 ide/host0/bus0/target0/lun0/part6

[root@nexus benr]# insmod qla2100
Using /lib/modules/2.4.20/kernel/drivers/scsi/qla2100.o
[root@nexus benr]# cat /proc/partitions
major minor  #blocks  name

   8     0   17783240 scsi/host1/bus0/target0/lun0/disc
   8     2 1521506992 scsi/host1/bus0/target0/lun0/part2
   8     4 1428870442 scsi/host1/bus0/target0/lun0/part4
   8    16   17783240 scsi/host1/bus0/target1/lun0/disc
   8    18 1521506992 scsi/host1/bus0/target1/lun0/part2
   8    20 1428870442 scsi/host1/bus0/target1/lun0/part4
   8    32   17783240 scsi/host1/bus0/target2/lun0/disc
   8    34 1521506992 scsi/host1/bus0/target2/lun0/part2
   8    36 1428870442 scsi/host1/bus0/target2/lun0/part4
   8    48   17783240 scsi/host1/bus0/target3/lun0/disc
   8    50 1521506992 scsi/host1/bus0/target3/lun0/part2
   8    52 1428870442 scsi/host1/bus0/target3/lun0/part4
   8    64   17783240 scsi/host1/bus0/target4/lun0/disc
   8    66 1521506992 scsi/host1/bus0/target4/lun0/part2
   8    68 1428870442 scsi/host1/bus0/target4/lun0/part4
   8    80   17783240 scsi/host1/bus0/target5/lun0/disc
   8    82 1521506992 scsi/host1/bus0/target5/lun0/part2
   8    84 1428870442 scsi/host1/bus0/target5/lun0/part4
   8    96   17783240 scsi/host1/bus0/target6/lun0/disc
   8    98 1521506992 scsi/host1/bus0/target6/lun0/part2
   8   100 1428870442 scsi/host1/bus0/target6/lun0/part4
   3     0   20000232 ide/host0/bus0/target0/lun0/disc
   3     1    5124703 ide/host0/bus0/target0/lun0/part1
   3     2          1 ide/host0/bus0/target0/lun0/part2
   3     5   14337981 ide/host0/bus0/target0/lun0/part5
   3     6     530113 ide/host0/bus0/target0/lun0/part6

[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target0/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target1/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target2/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target3/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target4/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target5/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# dd if=/dev/zero of=/dev/scsi/host1/bus0/target6/lun0/disc bs=1k count=1
1+0 records in
1+0 records out
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target0/lun0/disc
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target1/lun0/disc
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target2/lun0/disc
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target3/lun0/disc
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target4/lun0/disc
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target5/lun0/disc
[root@nexus benr]# blockdev --rereadpt /dev/scsi/host1/bus0/target6/lun0/disc
[root@nexus benr]# cat /proc/partitions
major minor  #blocks  name

   8     0   17783240 scsi/host1/bus0/target0/lun0/disc
   8    16   17783240 scsi/host1/bus0/target1/lun0/disc
   8    32   17783240 scsi/host1/bus0/target2/lun0/disc
   8    48   17783240 scsi/host1/bus0/target3/lun0/disc
   8    64   17783240 scsi/host1/bus0/target4/lun0/disc
   8    80   17783240 scsi/host1/bus0/target5/lun0/disc
   8    96   17783240 scsi/host1/bus0/target6/lun0/disc
   3     0   20000232 ide/host0/bus0/target0/lun0/disc
   3     1    5124703 ide/host0/bus0/target0/lun0/part1
   3     2          1 ide/host0/bus0/target0/lun0/part2
   3     5   14337981 ide/host0/bus0/target0/lun0/part5
   3     6     530113 ide/host0/bus0/target0/lun0/part6

[root@nexus benr]# evms_activate
[root@nexus benr]# evmsgui

(At this point, we only see Volumes, Segments, Disks and Plugins....)
The volume creation itself is done inside the EVMS GUI (evmsgui).
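Once the volume has been created in evmsgui and a filesystem put on it, activating and mounting it is the same as any other block device. A sketch, using the volume name (fc8_volume) that shows up in the df output below:

[root@nexus benr]# evms_activate
[root@nexus benr]# mount /dev/evms/fc8_volume /filer_shelf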
Let's benchmark this thing now that we're using EVMS.
[root@nexus /]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/ide/host0/bus0/target0/lun0/part1  4.8G  4.1G  490M  90% /
/dev/ide/host0/bus0/target0/lun0/part5   13G  8.3G  4.5G  65% /home
/dev/evms/fc8_volume                    118G   15M  118G   1% /filer_shelf

[benr@nexus benr]$ bonnie++ -d /filer_shelf/
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nexus.homeste 1256M  5604  81 19477  18 15546  17  6861  98 59964  37 677.7   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   998   7 +++++ +++   376   2   422   5 +++++ +++   401   3
nexus.homestead.com,1256M,5604,81,19477,18,15546,17,6861,98,59964,37,677.7,5,16,998,7,+++++,+++,376,2,422,5,+++++,+++,401,3
[benr@nexus benr]$

(Output = writes, Input = reads: roughly 20MB/s buffered writes and 60MB/s buffered reads.)
Surprisingly poor numbers.
Let's benchmark again using two FC9 shelves, 14 disks total.
Dual FC9's:

[root@nexus benr]# cat /proc/partitions
major minor  #blocks  name

   8     0   35566480 scsi/host1/bus0/target0/lun0/disc
   8    16   35566480 scsi/host1/bus0/target1/lun0/disc
   8    32   35566480 scsi/host1/bus0/target2/lun0/disc
   8    48   35566480 scsi/host1/bus0/target3/lun0/disc
   8    64   35566480 scsi/host1/bus0/target4/lun0/disc
   8    80   35566480 scsi/host1/bus0/target5/lun0/disc
   8    96   35566480 scsi/host1/bus0/target6/lun0/disc
   8   112   35566480 scsi/host1/bus0/target7/lun0/disc
   8   128   35566480 scsi/host1/bus0/target8/lun0/disc
   8   144   35566480 scsi/host1/bus0/target9/lun0/disc
   8   160   35566480 scsi/host1/bus0/target10/lun0/disc
   8   176   35566480 scsi/host1/bus0/target11/lun0/disc
   8   192   35566480 scsi/host1/bus0/target12/lun0/disc
   8   208   35566480 scsi/host1/bus0/target13/lun0/disc
   3     0   20000232 ide/host0/bus0/target0/lun0/disc
   3     1    5124703 ide/host0/bus0/target0/lun0/part1
   3     2          1 ide/host0/bus0/target0/lun0/part2
   3     5   14337981 ide/host0/bus0/target0/lun0/part5
   3     6     530113 ide/host0/bus0/target0/lun0/part6

[root@nexus benr]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/ide/host0/bus0/target0/lun0/part1  4.8G  4.1G  490M  90% /
/dev/ide/host0/bus0/target0/lun0/part5   13G  8.3G  4.5G  65% /home
/dev/evms/fc9_vol                       474G   60M  474G   1% /filer_shelf

[root@nexus benr]# chmod 777 /filer_shelf/
You have new mail in /var/spool/mail/benr
[root@nexus benr]# exit
exit
[benr@nexus benr]$ bonnie++ -d /filer_shelf/
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nexus.homeste 1256M  5636  82 33744  33 23145  28  6306  93 60607  37 712.0   7
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   775   5 +++++ +++   345   2   326   4 +++++ +++   358   3
[benr@nexus benr]$
Again, poor numbers. Make your own decision.
This paper is mostly an assortment of notes jotted down while I set up my environment, but you can see what's possible and the flexibility that Linux provides. Indeed, you could use Solaris, AIX, or any other UNIX platform you like, but Linux provides the most power and flexibility at the least cost.
Hopefully this paper gives you some ideas for turning your spare stock of NetApp gear into useful storage for your office or personal needs.