Here is a (hopefully) growing collection of scripts to help manage VxVM disk storage subsystems. All scripts are GPL'd and written in Perl.

Thank You!


Thanks to a very generous individual, two JNI HBAs have been donated, along with two differential SCSI cards. That means that once I scrape together some cash for an AIT/DLT drive, I can do NetBackup tutorials! I am eternally grateful.


lux_parse.pl: UPDATED Oct 10th!

This is a small Perl script which gathers data about your disk subsystem. It is intended (and useful) only for systems relying on A5x00 (aka Photon) disk arrays. The problem with using these arrays is this: you commonly use them for high-speed, redundant disk systems based on RAID 0+1 or RAID 1+0 layouts. In these cases you commonly create two (or more) Fibre Channel arbitrated loops and mirror volumes disk for disk. To manage this type of storage design you must name each vmdisk something equivalent to its location, so you carefully name your vmdisks something like "arrayA_r0" to designate that this disk is in disk array "A", in rear slot 0. This is a great way to manage your disks. But what happens when a failure occurs? There are two sparing methods: relocd and sparecheck. Sparecheck will spare out whole disks, whereas relocd will spare out subdisks one by one, which is very messy. The real problem is that when you use sparecheck, the vmdisk that spares out the failed disk is renamed to whatever the failed disk was named, so when you look at your "vxprint" output without checking the SCSI IDs you won't notice that anything is wrong... yet in reality you may now have a volume in which both mirrors are using disks in the same disk array, breaking the redundancy we built in.
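
As a quick illustration of that naming convention (this is not part of lux_parse.pl, and the exact name format here is just an assumption), the array letter and slot can be pulled straight back out of the vmdisk name with a simple regex:

#!/usr/bin/perl -w
use strict;

# Hypothetical vmdisk name following the "arrayA_r0" convention described above.
my $vmdisk = "arrayA_r0";

if ($vmdisk =~ /^array([A-Z])_([fr])(\d+)$/) {
    my $array = $1;
    my $side  = ($2 eq "f") ? "front" : "rear";
    my $slot  = $3;
    print "$vmdisk lives in array $array, $side slot $slot\n";
}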

So, what I really needed was a tool that could tell me all of the following things:

 - the physical A5x00 slot location (enclosure name, front/rear, slot number)
 - the drive's short WWN and its status as reported by luxadm
 - the SCSI ID (cXtXdXsX) and the ssd driver instance number
 - the VxVM disk (vmdisk) name and the disk group it belongs to

The point being that I can run this simple script, print out a nice list, and then make sure that each vmdisk name matches the disk's location. Other info, like the driver instance number, is nice for the records. The previous method of doing this was to sit down with about 100 pages of output from path_to_inst, vxdisk list, vxprint, luxadm display X, and maybe even a listing of /dev/dsk, and then spend hours putting together a disk map. But this takes ages and is prone to error. I think it's a good exercise for new admins, and a useful exercise for older admins, but when you come back from a two-week vacation, can't determine exactly what has changed, and need to manually verify the config, you don't want to wait to finish a map.

It should be noted that other tools, such as jtplex and STORtools, come close but fall short of the mark. I had little to no luck with jtplex, on top of the fact that it didn't list A5x00 disk locations. STORtools, on the other hand, makes wonderful disk maps, but it is unaware of VxVM, so you still end up doing a manual disk map, albeit with most of the hard work done for you. Besides this, STORtools should NOT be used lightly or frequently, and certainly not in critical production environments where there is never a good time. Therefore I built this script, which tries to be as non-intrusive as possible. It is run per Photon, so you can spread out the runs... we all know what happens when you run luxadm too much or too soon after a previous run: SES gets confused and cranky. The script dumps the luxadm display X information to a /tmp file, the contents of the /dev/dsk/ directory to a /tmp file (much faster than polling the directory itself), and the vxdisk list output to a /tmp file, then runs regexes against these temp files. When it's done it removes them.
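
If you're curious what that gather-then-parse approach looks like, here is a rough sketch. It is not lux_parse.pl itself: the enclosure argument handling and the vxdisk parsing shown here are illustrative, and the real script matches far more than this.

#!/usr/bin/perl -w
use strict;

my $box = shift or die "usage: $0 <enclosure_name>\n";

# Poke each data source exactly once, caching the output in /tmp.
system("luxadm display $box > /tmp/luxadm.$box.out") == 0
    or die "luxadm display $box failed\n";
system("ls /dev/dsk > /tmp/devdsk.tmp.out");
system("vxdisk list > /tmp/vxdisk.tmp.out");

# Build a device -> (vmdisk, dg) map from the cached vxdisk output,
# rather than re-running the command for every drive.
my %vm;
open(VXD, "/tmp/vxdisk.tmp.out") or die "can't read vxdisk dump: $!\n";
while (<VXD>) {
    next if /^DEVICE/;                              # skip the header line
    my ($dev, $type, $disk, $dg, $status) = split;
    $vm{$dev} = [ $disk, $dg ] if defined $dg && $dg ne "-";
}
close(VXD);

# ... match the luxadm and /dev/dsk dumps against %vm here ...

# Clean up the temp files when done.
unlink("/tmp/luxadm.$box.out", "/tmp/devdsk.tmp.out", "/tmp/vxdisk.tmp.out");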

Future updates: I'm thinking of adding at least one more feature to the script, other than cleaning up my ugly code. I want it to run through the vxprint information to tell me which volumes the disk is a part of and what the subdisk's name is. I'll be writing that in soon.
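
For the curious, here is roughly how I expect that to work. This is only a sketch, not code from the script: the field positions are assumed from the standard vxprint -ht record layout, and the disk name is made up.

#!/usr/bin/perl -w
use strict;

my $vmdisk = "cuddle-f2";          # hypothetical vmdisk we want to trace
my (%plex_of_sd, %vol_of_plex);

# Walk `vxprint -ht` once: pl records map plexes to volumes, and
# sd records map subdisks to plexes and name the vmdisk they sit on.
open(VXP, "vxprint -ht |") or die "vxprint failed: $!\n";
while (<VXP>) {
    my @f = split;
    next unless @f;
    if ($f[0] eq "pl") {                            # pl <plex> <volume> ...
        $vol_of_plex{$f[1]} = $f[2];
    } elsif ($f[0] eq "sd" && $f[3] eq $vmdisk) {   # sd <subdisk> <plex> <disk> ...
        $plex_of_sd{$f[1]} = $f[2];
    }
}
close(VXP);

foreach my $sd (sort keys %plex_of_sd) {
    my $plex = $plex_of_sd{$sd};
    printf("%s: subdisk %s in plex %s (volume %s)\n",
           $vmdisk, $sd, $plex, $vol_of_plex{$plex} || "-");
}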

You're all familiar with the standard disclaimers and the restrictions of the GPL. Redistribute, but credit Cuddletech and/or myself. Change it all you like, but if you make some nifty changes, let me know. Etc., etc. If you lose your entire subsystem because of my script, it's your own damned fault; don't blame me. And lastly, read it before you run it. I don't trust other people's scripts, and I don't expect you to trust mine. Do note that you must run the script as root, since only root has permission to run luxadm.

lux_parse.pl v1.0, Oct 10th, 2002: Initial release; bad-looking code, debugging stuff left in, bad variable names, etc.
lux_parse.pl v2.0, Oct 11th, 2002: Cleaned-up code, column format, more expandable, handles most types of errors from the Photon.

Here is an example of v2.0's output. Several errors have been simulated (*yoink!*) to test it:

bash-2.03# ./lux_parse.pl.v2.0 cuddlestor
Cleaning up: ... done.
Drive Listing for A5100 named cuddlestor:
A5x Slot        Short WWN       Status
                SCSI ID Instance        VMDisk  DG
--------        ----------------        ------  ---
cuddlestor,f0   ERROR   Off(Bypassed:AB)
                -       -       -       -
cuddlestor,f1   20372d0f69      On (O.K.)
                c1t1d0s2        ssd12   cuddle-f2       cuddledg
cuddlestor,f2   20370971e8      On (O.K.)
                c1t2d0s2        ssd10   -       -
cuddlestor,f3   2037097752      On (O.K.)
                c1t3d0s2        ssd3    cuddle-f3       cuddledg
cuddlestor,f4   20370970f3      On (O.K.)
                c1t4d0s2        ssd0    cuddle-f4       cuddledg
cuddlestor,f5   20370d44c8      On (O.K.)
                c1t5d0s2        ssd7    cuddle-f5       cuddledg
cuddlestor,f6   ERROR   Not Installed
                -       -       -       -
cuddlestor,r0   20370e0b08      On (O.K.)
                c1t16d0s2       ssd9    cuddle-r0       cuddledg
cuddlestor,r1   20370e85f8      On (O.K.)
                c1t17d0s2       ssd13   cuddle-r1       cuddledg
cuddlestor,r2   20370d3bef      On (O.K.)
                c1t18d0s2       ssd5    cuddle-r2       cuddledg
cuddlestor,r3   20370d44ee      On (O.K.)
                c1t19d0s2       ssd8    cuddle-r3       cuddledg
cuddlestor,r4   203714322b      On (O.K.)
                c1t20d0s2       ssd2    cuddle-f0       cuddledg
cuddlestor,r5   20370971df      On(No UNIX Label)
                c1t21d0s2       ssd6    -       -
cuddlestor,r6   20370d44d2      On(No UNIX Label)
                c1t22d0s2       ssd11   -       -
Done...Cleaning up: vxdisk.tmp.out, luxadm.cuddlestor.out, devdsk.tmp.out... done.
bash-2.03# 

You'll notice that in the above example I've gotten v2.0 to adequately deal with failure conditions. I'll test against other conditions when I find a new way to break my array, but these are the three most common: disk bypassed, disk not installed, and UNIX label fault (i.e., the VTOC is crap).

Next is a more practical example: a system with 6 Photons (I'll only show one), all A5200s, using DMP, and so on. As you can see, this Photon's layout could be cleaned up a bit, and that's the point of it all. Again, this is v2.0 output:

bash-2.02# ./lux_parse.pl.v2.0 a
Cleaning up: ... done.
Drive Listing for A5200 named a:
A5x Slot        Short WWN       Status
                SCSI ID Instance        VMDisk  DG
--------        ----------------        ------  ---
a,f0    20374fe451      On (O.K.)
                c2t0d0s2        ssd424  arraya01        prod1gr
                c4t0d0s2        ssd58           
a,f1    2037935ec6      On (O.K.)
                c2t1d0s2        ssd563  arraya02        prod1gr
                c4t1d0s2        ssd564          
a,f2    20379742b3      On (O.K.)
                c2t2d0s2        ssd565  arraya03        prod1gr
                c4t2d0s2        ssd566          
a,f3    2037199aca      On (O.K.)
                c2t3d0s2        ssd413  arraya04        prod1gr
                c4t3d0s2        ssd45           
a,f4    203719dda1      On (O.K.)
                c2t4d0s2        ssd405  arraya05        prod1gr
                c4t4d0s2        ssd32           
a,f5    20374fe161      On (O.K.)
                c2t5d0s2        ssd393  arraya06        prod1gr
                c4t5d0s2        ssd16           
a,f6    203719d9a4      On (O.K.)
                c2t6d0s2        ssd402  arraya07        prod1gr
                c4t6d0s2        ssd29           
a,f7    203719d392      On (O.K.)
                c2t7d0s2        ssd415  arraya08        prod1gr
                c4t7d0s2        ssd47           
a,f8    203719dce9      On (O.K.)
                c2t8d0s2        ssd414  arraya09        prod1gr
                c4t8d0s2        ssd46           
a,f9    203719dc8e      On (O.K.)
                c2t9d0s2        ssd421  arraya10        prod1gr
                c4t9d0s2        ssd55           
a,f10   203719de70      On (O.K.)
                c2t10d0s2       ssd429  arraya11        prod1gr
                c4t10d0s2       ssd0            
a,r0    203719d7d2      On (O.K.)
                c2t16d0s2       ssd420  arraya12        prod1gr
                c4t16d0s2       ssd54           
a,r1    203719d868      On (O.K.)
                c2t17d0s2       ssd408  arraya13        prod1gr
                c4t17d0s2       ssd38           
a,r2    203719dcc6      On (O.K.)
                c2t18d0s2       ssd412  arraya14        prod1gr
                c4t18d0s2       ssd42           
a,r3    2037935ec1      On (O.K.)
                c2t19d0s2       ssd572  arraya15        prod1gr
                c4t19d0s2       ssd571          
a,r4    203719d02f      On (O.K.)
                c2t20d0s2       ssd406  arraya16        prod1gr
                c4t20d0s2       ssd36           
a,r5    203719d905      On (O.K.)
                c2t21d0s2       ssd404  arraya17        prod1gr
                c4t21d0s2       ssd33           
a,r6    2037936d80      On (O.K.)
                c2t22d0s2       ssd580  arraya18        prod1gr
                c4t22d0s2       ssd579          
a,r7    203719dcab      On (O.K.)
                c2t23d0s2       ssd417  arraya19        prod1gr
                c4t23d0s2       ssd50           
a,r8    203719d76b      On (O.K.)
                c2t24d0s2       ssd411  arraya20        prod1gr
                c4t24d0s2       ssd41           
a,r9    203719d961      On (O.K.)
                c2t25d0s2       ssd396  arraya21        prod1gr
                c4t25d0s2       ssd23           
a,r10   203793836f      On (O.K.)
                c2t26d0s2       ssd586  arraya22        prod1gr
                c4t26d0s2       ssd585          
Done...Cleaning up: vxdisk.tmp.out, luxadm.a.out, devdsk.tmp.out... done.
bash-2.02# 
For questions or comments, mail benr@cuddletech.com