Understanding ZFS: Compression
Posted on November 6, 2008
One of the most appealing features ZFS offers is built-in compression. The tradeoff is self-evident: consume additional CPU to conserve disk space. If you're running an OLTP database, compression probably isn't for you; however, if you're doing bulk data archiving it could be a huge win.
ZFS is built with the realization that modern systems typically have large amounts of memory and CPU available, and we should have the means to put those resources to work. Contrast this with the traditional logic that compression slows things down, because we stop and compress the data before flushing it out to disk, which takes time. Consider that in some situations you may have significantly faster CPU and memory than you have I/O throughput, in which case it may in fact be faster to read and write compressed data, because you're reducing the quantity of I/O through the channel! Compression isn't just about saving disk space… keep an open mind.
The first important point about ZFS compression is that it's granular. Within ZFS we create datasets (some people call them "nested filesystems", but I find that terminology confusing), each of which has inherited properties. One of those properties is compression. Therefore, if we create a "home" dataset which mounts to "/home", and then create a "home/user" dataset for each user, we can do interesting things, such as apply per-user quotas (disk usage limits) or reservations (set aside space) or, in this context, enable, disable, or specify differing types of compression. Some users may want compression, others may not, or you may wish all of them to use it by default. ZFS gives us a wide range of flexible options. Most importantly, if we change our mind at some point we can change the setting, and all new data is compressed while the old uncompressed data is still read as expected. This means the change is not disruptive; however, it does mean that if you later want to reclaim every bit of disk you can, you'd need to enable compression and then slosh all the data off and then back in.
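To make the inheritance idea concrete, here's a minimal sketch (the pool "tank" and the user datasets are placeholders, not the system used in the examples below) of setting compression on a parent dataset and overriding it for one child:

# create a parent "home" dataset with compression on; children inherit it
zfs create tank/home
zfs set compression=on tank/home

# per-user datasets inherit compression=on from tank/home
zfs create tank/home/alice
zfs create tank/home/bob

# override the inherited value for one user who doesn't want compression
zfs set compression=off tank/home/bob

# the SOURCE column shows which values are local and which are inherited
zfs get -r compression tank/home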
So, how do we enable compression? Simple: use the zfs set compression=on some/dataset command. If we then "get all" properties on a dataset we'll see some interesting information. Here is an example (pruned for length) from my home directory:
root@quadra ~$ zfs get all quadra/home/benr
NAME              PROPERTY       VALUE                  SOURCE
quadra/home/benr  type           filesystem             -
quadra/home/benr  creation       Thu Oct  9 11:33 2008  -
quadra/home/benr  used           122G                   -
quadra/home/benr  available      432G                   -
quadra/home/benr  referenced     122G                   -
quadra/home/benr  compressratio  1.19x                  -
quadra/home/benr  mounted        yes                    -
quadra/home/benr  quota          none                   default
quadra/home/benr  reservation    none                   default
quadra/home/benr  recordsize     128K                   default
quadra/home/benr  mountpoint     /quadra/home/benr      default
quadra/home/benr  checksum       on                     default
quadra/home/benr  compression    on                     inherited from quadra/home
...
Here we see that compression is “on”, and was inherited automatically from its parent dataset “quadra/home”. We can also see the compression ratio above: 1.19x.
But what are our options? Just on or off? Many ZFS properties have simplistic "defaults"; in this case "on" means we use the "lzjb" compression algorithm. We can instead specify the exact algorithm. Currently, in fairly modern releases of Nevada/OpenSolaris, we have available the default LZJB (a lossless compression algorithm created by Jeff Bonwick, which is extremely fast) and gzip at compression levels 1-9. If you set "compression=gzip" you'll get gzip level 6 compression; however, you can explicitly "set compression=gzip-9". More compression algorithms may be added in the future. (The source is out there, feel free to give us another!)
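For reference, a quick sketch of the values in play (the dataset name "tank/archive" is just a placeholder):

zfs set compression=on tank/archive        # "on" currently means lzjb
zfs set compression=lzjb tank/archive      # name the algorithm explicitly
zfs set compression=gzip tank/archive      # gzip at the default level (6)
zfs set compression=gzip-9 tank/archive    # gzip at maximum compression
zfs set compression=off tank/archive       # disable compression for new writes
zfs inherit compression tank/archive       # revert to the parent dataset's setting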
But how can you see the effect? Did you know the "du" command will show you the on-disk (compressed) size of a file? Let's experiment!
root@quadra ~$ zfs create quadra/test
root@quadra ~$ zfs get compression quadra/test
NAME         PROPERTY     VALUE   SOURCE
quadra/test  compression  off     default
OK, we have a dataset to play with. I've downloaded Moby Dick and combined it into a single text file.
root@quadra test$ ls -lh moby-dick.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:38 moby-dick.txt
root@quadra test$ du -h moby-dick.txt
1.8M   moby-dick.txt
root@quadra test$ head -4 moby-dick.txt
..
< chapter I 2 LOOMINGS >
Call me Ishmael. Some years ago--never mind how long precisely
--having little or no money in my purse, and nothing particular
Alright, so here is Moby Dick in text, weighing in at 1.8M uncompressed. Let's now enable compression (LZJB), copy the file, and see how much benefit we get:
root@quadra test$ zfs set compression=on quadra/test
root@quadra test$ cp moby-dick.txt moby-dick-lzjb.txt
root@quadra test$ sync
root@quadra test$ ls -lh
total 3.5M
-rw-r--r-- 1 root root 1.8M Nov  6 01:40 moby-dick-lzjb.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:38 moby-dick.txt
root@quadra test$ du -ah
1.7M   ./moby-dick-lzjb.txt
1.8M   ./moby-dick.txt
3.5M   .
Nice, we're saving some space. Now let's repeat with gzip.
root@quadra test$ zfs set compression=gzip quadra/test
root@quadra test$ cp moby-dick.txt moby-dick-gzip.txt
root@quadra test$ ls -lh
total 4.6M
-rw-r--r-- 1 root root 1.8M Nov  6 01:44 moby-dick-gzip.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:40 moby-dick-lzjb.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:38 moby-dick.txt
root@quadra test$ du -ah
1.7M   ./moby-dick-lzjb.txt
1.8M   ./moby-dick.txt
1.1M   ./moby-dick-gzip.txt
4.6M   .
Ahhhh. Nice gain there. Remember that this is really gzip-6; let's crank it up to gzip-9!
root@quadra test$ zfs set compression=gzip-9 quadra/test
root@quadra test$ cp moby-dick.txt moby-dick-gzip9.txt
root@quadra test$ ls -lh
total 4.6M
-rw-r--r-- 1 root root 1.8M Nov  6 01:44 moby-dick-gzip.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:46 moby-dick-gzip9.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:40 moby-dick-lzjb.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:38 moby-dick.txt
root@quadra test$ du -ah
1.7M   ./moby-dick-lzjb.txt
1.8M   ./moby-dick.txt
1.1M   ./moby-dick-gzip.txt
512    ./moby-dick-gzip9.txt
4.6M   .
Wow! That's savings. Just to put this in context, I'll test gzip'ing the file the way you're used to (using tmpfs, not ZFS):
root@quadra test$ cd /tmp
root@quadra tmp$ ls -alh moby-dick.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:47 moby-dick.txt
root@quadra tmp$ gzip moby-dick.txt
root@quadra tmp$ ls -alh moby-dick.txt.gz
-rw-r--r-- 1 root root 1.1M Nov  6 01:47 moby-dick.txt.gz
And so we see that just gzip'ing the file by hand matches the compression I got with ZFS's gzip (gzip-6) setting enabled.
But before you get too excited, remember that this is consuming system CPU time. The more compression you do, the more CPU you'll consume. If this is a dedicated storage system you're working on, then consuming a ton of CPU for compression may well be worth it (many appliances have fast CPUs for just this reason); however, if you're running critical apps and CPU really counts, then notch it down or even turn it off. I highly recommend you dry-run your application workload and then load test it hard to see whether or not that extra CPU will be a problem. Whenever possible, try to determine these things before you deploy, not after.
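As a starting point for that kind of dry run, you might copy a representative chunk of data into a compressed test dataset while watching CPU, then check what you actually gained; a rough sketch (the dataset name is a placeholder):

# watch per-thread CPU usage while the test copy runs in another terminal
prstat -mL

# after the test load, see how much space compression actually saved
zfs get compressratio tank/testdata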
To follow up that idea, remember that we can set differing compression levels on different datasets. You may want to put your application data on an uncompressed dataset, but store less commonly used data or backups on a separate dataset where you’ve cranked compression up. Get creative!
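For instance, a sketch of that kind of layout (pool and dataset names are hypothetical):

# hot application data: leave compression off
zfs create tank/app
zfs set compression=off tank/app

# backups and archives: crank compression up
zfs create tank/backup
zfs set compression=gzip-9 tank/backup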
ZFS is an amazing technology and compression is certainly one of its big attractions for the common user. Workstation always low on disk? Compression to the rescue, no stupid FUSE or loopback tricks required. 🙂
A Word Of Warning
At this point I do want to warn you of something. Notice that du displays actual disk consumption, not true file size. Now consider the way most admins actually use the command… to total up cumulative file sizes. On a typical file system, "du -sh ." will nicely total up all the files, which would be roughly the same as if I tar'ed up the files and looked at the tarball's size. When compression is in use you cannot use "du" in this way, because the files are larger than the actual disk usage. So you can get into potentially confusing situations like this:
root@quadra test$ ls -alh
total 5.6M
drwxr-xr-x 2 root root    6 Nov  6 01:46 .
drwxr-xr-x 8 root root    8 Nov  6 01:33 ..
-rw-r--r-- 1 root root 1.8M Nov  6 01:44 moby-dick-gzip.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:46 moby-dick-gzip9.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:40 moby-dick-lzjb.txt
-rw-r--r-- 1 root root 1.8M Nov  6 01:38 moby-dick.txt
root@quadra test$ du -h .
5.6M   .
root@quadra test$ tar cfv md.tar moby-dick*
moby-dick-gzip.txt
moby-dick-gzip9.txt
moby-dick-lzjb.txt
moby-dick.txt
root@quadra test$ ls -lh md.tar
-rw-r--r-- 1 root root 7.0M Nov  6 02:11 md.tar
In the real world, this could come as a shock if you wanted to rsync a bunch of data, totalled it up using "du" to estimate the bits that need to move, and then got nervous when you moved way more bits than you initially expected because you forgot to take compression into the equation. So hopefully some of you can learn here not just how ZFS works, but appreciate "du" in a new way as well. 🙂
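If you do want logical (uncompressed) totals rather than on-disk usage, GNU du can report apparent sizes; a small sketch, assuming GNU coreutils is installed (on OpenSolaris it may be available under a name like gdu or in /usr/gnu/bin):

# sum the apparent (logical) file sizes, ignoring on-disk compression
gdu -sh --apparent-size .

# compare against actual on-disk consumption
du -sh .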