ZFS vs LVM For Dummies

Photo: teclasorg (Creative Commons License)

Warning: This article is an over-simplified and absolutely incomplete view of ZFS vs LVM from a user’s point of view. I’m sure LVM works great for a lot of people, but, well, for me it sucked. And ZFS simplifies my life. Honestly. Here’s why.

Providing Disk Space To Virtual Machines

Ok, I have to admit that my requirements are kind of weird. I want to run a bunch of virtual machines without re-partitioning my hard drive, adding additional disks, or paying for shared storage. We lease our servers, and they’re hosted in a data center far, far away. Re-partitioning disks is a hassle, especially if you don’t have direct access to the box. Sure, you can use your remote console, but re-installing the OS is not a pleasant experience if you can’t insert the DVDs into the drive yourself. So I want to provide disk space for virtual machines (Xen VMs on Debian, Zones on OpenSolaris) using the features of my beloved virtual disk manager.

With LVM I created disk image files, attached them to loop devices, and then added the loop devices to LVM. The following sequence of commands was necessary to create a virtual disk for my virtual machines:

# create a backing image file first (the 10 GB here is just an example size)
dd if=/dev/zero of=my_disk.img bs=1M count=10240
# attach the image to a loop device and register it with LVM
losetup /dev/loop0 my_disk.img
pvcreate /dev/loop0
vgcreate vgmy_disk /dev/loop0
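
For completeness, the new volume group alone is not usable by a guest yet; you still have to create a logical volume in it and either put a filesystem on it or hand the raw LV to Xen. A minimal sketch of those remaining steps, reusing the volume name that shows up again below, with the size picked arbitrarily:

lvcreate -L 10G -n my_logical_volume vgmy_disk   # carve a logical volume out of the group
mkfs.ext3 /dev/vgmy_disk/my_logical_volume       # filesystem for the guest, or skip and give Xen the raw LV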

On ZFS, all that crap with disk images and loopback devices isn’t necessary. On top of that, providing disk space is so much simpler:

zfs create mypool/my_disk   # the dataset lives inside an existing pool ("mypool" is a placeholder for its name)
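
How the dataset then reaches the guest depends on the virtualization layer. For the OpenSolaris zones mentioned above, one common way (an assumption about the setup here, the post doesn’t spell it out) is to delegate the dataset to the zone; "myzone" is a hypothetical zone name:

zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=mypool/my_disk
zonecfg:myzone:dataset> end
zonecfg:myzone> commit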

Giving More Disk Space To Virtual Machines

If I want to add more disk space to my virtual machine, LVM makes me break my fingers with the following command sequence:

# create a second backing image and attach it to a new loop device
dd if=/dev/zero of=my_disk0.img bs=1M count=10240
losetup /dev/loop1 my_disk0.img
# turn it into a physical volume and grow the volume group with it
pvcreate /dev/loop1
vgextend vgmy_disk /dev/loop1
# grow the logical volume, then check and resize the (unmounted) ext filesystem
lvextend -L+10G /dev/vgmy_disk/my_logical_volume
e2fsck -fy /dev/vgmy_disk/my_logical_volume
resize2fs /dev/vgmy_disk/my_logical_volume

Simple, right?

Now the ZFS version:

zfs set quota=20G mypool/my_disk   # raise the dataset’s quota; the extra space comes from the pool
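
To be fair, the quota trick assumes the pool itself still has free space. If it doesn’t, growing the pool is also a one-liner; a sketch, assuming a spare disk or a pre-created image file is available (both paths here are made up):

zpool add mypool /dev/sdc                    # add a whole disk to the pool
zpool add mypool /absolute/path/extra.img    # or add a file-backed vdev (the file must already exist)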

Hmm, let me think: which one do I prefer…

Moving Virtual Disks To Other Servers

If you want to move a virtual disk (possibly consisting of multiple disk images) to another box, there is a lot to do: deactivate the volume group (vgchange -an ...), detach the loop devices, copy the image files over, attach the loop devices again on the new box, then scan for and reactivate the volume group (vgscan; vgchange -ay ...).
In ZFS you just use zfs send … | ssh newbox zfs receive … – that’s it.
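
For the curious, here is a filled-in sketch of that pipeline; the snapshot, pool, and dataset names are made up for illustration, since the post elides them:

zfs snapshot mypool/my_disk@migrate                                          # send operates on snapshots
zfs send mypool/my_disk@migrate | ssh newbox zfs receive otherpool/my_disk   # stream it to the new box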

I know, my approach using disk images was maybe the worst and stupidest idea ever, but it starkly illustrates the complexity of LVM compared to the simplicity of ZFS. LVM, used right, might be easier, but I doubt it comes close to the ease of use and power of ZFS. Or do you have different experiences? Share them with us in the comments. Let the flame wars begin…

5 thoughts on “ZFS vs LVM For Dummies”

  1. I think LVM may be misrepresented here, given that you chose to use loopback files and a new volume group for every virtual machine. I can’t really see why that would be important.

    If you dropped that requirement, and just had a single volume group, you could add all free space to that volume group and drop all of the loopback device management. Further, if you just let the domUs manage what is on the logical volume assigned to them, you can make a choice per domU whether to further use LVM inside the domU (allowing online vgextends and such) or just make a filesystem on the LV directly.

    Not saying this is better than ZFS, just that the process can be streamlined (a rough sketch follows below), and is quite a bit simpler than what is presented here.
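
    A minimal sketch of that streamlined setup, assuming a spare disk /dev/sdb (a made-up device) is handed to one shared volume group and each guest gets its own logical volume:

    pvcreate /dev/sdb                         # one physical volume on the spare disk
    vgcreate vg_guests /dev/sdb               # a single volume group for all guests
    lvcreate -L 10G -n vm1_disk vg_guests     # one LV per domU, handed to Xen as its disk
    lvextend -L+5G /dev/vg_guests/vm1_disk    # growing a guest later is then a single step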


    1. Thanks Clint, I definitely did not dive too deep into all the possibilities of LVM. One issue for me was that I was not able to repartition the hard disk and therefore did not have a big chunk of free space to put a single volume group on. This is the root of all my config complexity, for sure.


  2. Also, with LVM, migrating between physical disks does not require unmounting anything.
    You can just do:
    vgextend vg0 /dev/sdb1       # add the new disk to the volume group
    pvmove /dev/sda1 /dev/sdb1   # move all extents off the old disk, online
    vgreduce vg0 /dev/sda1       # drop the old disk from the volume group

    That’s all.

    I am not trying to say ZFS is bad, but I would really like to see the differences, because I had to choose something. What about stability and recoverability? Does anybody have experience with restoring ZFS after a failure?

