The Storage Networking Industry Association’s (SNIA) Storage Developer Conference (SDC) is not, despite the fancy name, a place for storage hobbyists or the faint of heart. Attendees are leaders in our industry, highly informed and knowledgeable. If they are interested in something, we all will be soon. If you follow the storage press at all, the two big things on their minds won’t surprise you:
- Solid State Disk (SSD)
- Data de-duplication (de-dup)
From performance talks to corruption-analysis talks, from ZFS talks to NFSv4 talks, every session included a slide on, or fielded a question about, both of these. Frankly, there were very few answers. Sun’s “hybrid storage architecture” for ZFS (for those in the know, this is L2ARC and ZIL offload, both placed on special SSDs) was one of the few concrete ones. Most of the talks simply noted “SSD will change everything… it’s too early to tell how.” And given that the show is largely concerned with primary storage, not secondary backup, de-dup came up constantly but rarely had a place.
If de-duplication is a new term for you, here’s the quick and dirty pitch. Imagine having to architect backups for 300 helpdesk PCs, all running a standardized Windows XP and Office stack, plus helpdesk software and, naturally, other user applications. Let’s say the average PC has 80GB of data on its local drive. So that’s 300 * 80GB to back up, perhaps nightly. A nightmare. Historically, you’d reduce the backup load either by putting user home directories on a centralized file server and backing up only the file server, not the PCs, or by excluding paths such as C:\Windows (or whatever the hell they call it now). De-duplication typically uses hashing algorithms, either on the client or on the backup server, to avoid storing duplicate data blocks. That means you back up only one copy of Windows XP, plus 299 references to it. If someone sends out a 5MB PDF of the company handbook and there are 300 local copies of it, that’s 1.5GB of the same file, but with de-duplication we store only a single 5MB copy plus references to it.
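The hashing mechanism above can be sketched in a few lines of Python. This is a toy illustration only: the fixed block size, in-memory dictionaries, and function names are all made up for the sketch, and real products use variable-size chunking and far more robust fingerprinting.

```python
import hashlib
import os

BLOCK_SIZE = 4096  # assumed fixed-size chunks, for simplicity

store = {}       # hash -> block data, each unique block stored once
references = []  # per-file lists of block hashes (the "299 references")

def dedup_write(data: bytes) -> list:
    """Split data into blocks; store each unique block only once."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # first time we've seen this block
            store[digest] = block
        refs.append(digest)       # duplicates cost only a reference
    return refs

# 300 copies of the same "handbook" cost one copy of storage:
handbook = os.urandom(20000)
for _ in range(300):
    references.append(dedup_write(handbook))
```

After the loop, the store holds one physical copy of the handbook’s blocks while the catalog holds 300 lists of references to them, which is the whole trick.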
From the example you can see that customers backing up Oracle databases or customized purpose-built servers might not be in dire need of this technology (although they are interested too), but if you’re backing up server farms or desktop systems, this is something you can’t wait another second to get your hands on; especially if you’re backing up to tape!
I should note, de-dup is becoming more than just a backup technology. Storage admins see uses for it on file servers and in other applications. I’m certain that in 5 years de-duplication methodology will be used in ways I’d laugh at today.
As for SSD: it’s coming. I remember a lab 10 years ago where we had a “Solid State Disk,” which in the pre-flash era meant a box with bank upon bank of RAM and a big battery. Today SSD is cheap and getting cheaper. But how will it be used?
Today we have the concept of “tiered storage.” This means different things depending on whom you talk to. In some cases, such as Pillar Data, it’s done by partitioning drive cylinders so that tier 1 data sits on the outer (faster) tracks and tiers 2, 3, and 4 on the inner (slower) tracks. In other cases it means putting important, fast-access data on smaller 15K or 10K RPM FC or SAS disks as “tier 1,” and bulk data on larger “nearline” 7,200 RPM SATA disks. For customers using HSM (Hierarchical Storage Management) you can even automate data migration back and forth across tiers, all the way out to tape, which until recently was cheaper per gig than disk.
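As a hypothetical illustration of the HSM idea, here’s a toy placement policy that assigns data to a tier by access frequency. The thresholds, tier names, and function are all invented for the sketch; real HSM products use far richer policies (age, size, business rules) than a single counter.

```python
# Invented thresholds: accesses per day required to earn each tier.
TIERS = [
    (1000, "tier 1: 15K RPM FC/SAS"),
    (10,   "tier 2: 7200 RPM SATA"),
    (0,    "tier 3: tape"),
]

def place(accesses_per_day: int) -> str:
    """Return the tier whose threshold this data's 'heat' meets."""
    for threshold, tier in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][1]  # fallback: coldest tier
```

The point of automating this is exactly the migration the paragraph describes: as a file’s access count decays, it drifts down the stack toward tape without an administrator touching it.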
So many storage administrators and architects see SSD pushing into tier 1 and pushing 15K spinning media down the stack. Instead of Fast, Slow, Tape, you get Super-Fast, Fast, Slow, and potentially dump tape altogether.
I know I’m a zealot, but Sun really is leading the charge here. The Hybrid Storage Pool architecture is brilliant because it views SSD not as faster disk, but rather as slow (relatively, of course) non-volatile memory. Traditionally you have an in-memory filesystem cache (ZFS’s is called the “ARC”); data flows through the cache and is eventually evicted to make room for fresher data, meaning that if you ask for that data again you go out to disk. ZFS’s L2ARC (Level 2 ARC) extends your in-memory cache onto SSD, so when you go back for data you don’t have to go all the way out to the disks. On busy file servers this is a massive win! A 64GB SSD is a really small disk, but as a secondary disk cache it’s massive! Plus, there is no management involved on the administrator’s part, no data policy or data classification to work out; the filesystem handles it for you.
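The L2ARC idea can be sketched as a two-level read cache where level-1 evictions spill into a second, larger tier instead of being discarded. The class name, the tiny sizes, and the plain LRU policy here are my own simplifications; the real ARC is a considerably smarter adaptive algorithm.

```python
from collections import OrderedDict

class HybridCache:
    """Toy two-level read cache: small 'RAM' tier, larger 'SSD' tier."""

    def __init__(self, l1_size=4, l2_size=16):
        self.l1 = OrderedDict()   # fast, small (stands in for ARC/RAM)
        self.l2 = OrderedDict()   # slower, bigger (stands in for L2ARC/SSD)
        self.l1_size, self.l2_size = l1_size, l2_size
        self.disk_reads = 0       # how often we went all the way to disk

    def read(self, key, disk):
        if key in self.l1:                    # hit in RAM
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:                    # hit on SSD: promote to RAM
            value = self.l2.pop(key)
        else:                                 # miss: fetch from disk
            self.disk_reads += 1
            value = disk[key]
        self._insert_l1(key, value)
        return value

    def _insert_l1(self, key, value):
        self.l1[key] = value
        if len(self.l1) > self.l1_size:       # evict oldest RAM entry...
            old_key, old_val = self.l1.popitem(last=False)
            self.l2[old_key] = old_val        # ...onto "SSD", not into the void
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)   # only now is data truly evicted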
The other component of Sun’s ZFS Hybrid Storage architecture is ZIL offload. Most data access is asynchronous: it can be nicely cached, with writes flushed to disk when convenient. However, some applications, such as databases and NFS, do synchronous (O_DSYNC) I/O, which requires that the filesystem immediately flush the data to stable storage. On a busy file server this is a performance killer. The ZFS ZIL (ZFS Intent Log) is where these synchronous writes go; by putting those writes on super-fast SSD you get several orders of magnitude performance improvement without relying on things like RAID controller write-back caches.
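To see why synchronous I/O hurts so much, compare a buffered write loop with one that forces every write to stable storage before continuing. This is a sketch: `os.fsync()` stands in for O_DSYNC semantics here, and the function names and block sizes are arbitrary.

```python
import os

def write_async(path, blocks):
    """Buffered writes: data lands in the page cache, flushed later."""
    with open(path, "wb") as f:
        for b in blocks:
            f.write(b)

def write_sync(path, blocks):
    """O_DSYNC-style writes: each block must reach stable storage
    before the next write proceeds."""
    with open(path, "wb") as f:
        for b in blocks:
            f.write(b)
            f.flush()
            os.fsync(f.fileno())  # block until the device confirms
```

On spinning disks, each of those fsync() calls pays a seek plus rotational delay; pointing the intent log at SSD makes the same call return far faster, which is where the win comes from.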
Since we’re talking about SSD, let me point out that not all SSDs are created equal. There are two main types of SSD on the market right now: SLC and MLC. Here’s the 60-second explanation:
- Single-Level Cell (SLC): These flash devices have higher performance, more write/erase cycles (and thus greater endurance), and use less power, but cost much more. These are generally considered “enterprise-grade” SSD.
- Multi-Level Cell (MLC): In contrast to SLC, these devices have lower performance and less endurance, but offer much higher density and a lower cost per bit. If you see a “cheap” $300 SSD at Fry’s or NewEgg, it’s almost certainly MLC. These are generally considered “consumer-grade” SSD.
If you see a Sun presentation on hybrid storage, you’ll see these referred to as “read-biased” (MLC: slower but higher capacity) and “write-biased” (SLC: faster but lower capacity). By using the appropriate technology in the appropriate role, they significantly reduce the cost of an SSD deployment. For everyone else out there viewing SSD as just “fast disk,” the decision between SLC and MLC is really a matter of cost: if you can afford SLC, great; if not, MLC; or perhaps even sub-tiering SLC above MLC SSD.
So that’s de-dup and SSD. If you haven’t heard of these, you will. Familiarize yourself with the basics now and you’ll be better prepared for the future.
On a closing note: I talked to several people about SMART data. I’m shocked by how many people tell me to ignore SMART data as untrustworthy and unreliable. I was hoping someone at the show would disagree… I was disappointed. Most of the experts agree: vendors don’t trust SMART data, and in some cases they outright “fudge” it, or at the least disregard conclusions based on it. One person remarked that most drives sent back to Seagate because of a SMART-suggested failure are simply scrubbed, cleared, and re-shipped. So the skepticism about SMART as something admins should seriously monitor continues. If you have it, nifty, but if not, oh well. As for me… I love telemetry, so SMART still has a warm spot in my heart, wrinkles and all.
UPDATE: Just hours before I wrote this, Mr. Harris of StorageMojo wrote about NetApp’s efforts to bring de-dup to primary storage.