Warning: This article is an over-simplified and absolutely incomplete view of ZFS vs LVM from a user’s point of view. I’m sure LVM works great for a lot of people, but for me it sucked. And ZFS simplifies my life. Honestly. Here’s why.
Providing Disk Space To Virtual Machines
Ok, I have to admit that my requirements are kind of weird. I want to run a bunch of virtual machines without re-partitioning my hard drive, adding additional disks, or paying for shared storage. We lease our servers and they’re hosted in a data center far, far away. Re-partitioning disks is a hassle, especially if you don’t have direct access to the box. Sure, you can use your remote console, but re-installing the OS is not a pleasant experience if you can’t insert the DVDs into the drive. So I want to provide disk space for virtual machines (Xen VMs on Debian, Zones on OpenSolaris) using the features of my beloved volume manager.
On LVM I created disk image files, loopback mounted them and then added the loop devices to LVM. The following sequence of commands was necessary to create a virtual disk for my virtual machines:
losetup /dev/loop0 my_disk.img
pvcreate /dev/loop0
vgcreate vgmy_disk /dev/loop0
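Strictly speaking, that sequence only gets me an empty volume group; to actually hand usable space to a guest there were still a couple more steps, roughly along these lines (the volume name, size, and filesystem are just examples, and all of this needs root):

```shell
# carve a logical volume out of the new volume group
# (name and size are example values)
lvcreate -L 10G -n my_logical_volume vgmy_disk

# put a filesystem on it so the guest has something to mount
mkfs.ext3 /dev/vgmy_disk/my_logical_volume
```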
On ZFS all that crap using disk images and loopback devices isn’t necessary. And additionally, it’s so much simpler to provide disk space:
zfs create my_disk
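To be fair, a ZFS dataset lives inside a pool, so on a real system the command is pool-qualified; assuming a pool named tank (a placeholder name), it would look like this:

```shell
# assuming an existing pool named "tank" (hypothetical name)
zfs create tank/my_disk
# the dataset gets a filesystem and a mountpoint automatically -
# no losetup, no pvcreate, no mkfs
```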
Giving More Disk Space To Virtual Machines
If I want to add more disk space to my virtual machine, LVM makes me break my fingers with the following command sequence:
losetup /dev/loop1 my_disk0.img
pvcreate /dev/loop1
vgextend vgmy_disk /dev/loop1
lvextend -L+10G /dev/vgmy_disk/my_logical_volume
e2fsck -fy /dev/vgmy_disk/my_logical_volume
resize2fs /dev/vgmy_disk/my_logical_volume
Now the ZFS version:
zfs set quota=20G my_disk
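The new quota takes effect immediately, with no unmount, fsck, or resize step; if you want to double-check it, one extra command does it:

```shell
# confirm the new quota took effect (no fsck or resize required)
zfs get quota my_disk
```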
Hmm, let me think, which one I prefer…
Moving Virtual Disks To Other Servers
If you want to move a virtual disk (maybe consisting of multiple disk images) to another box, there is a lot to do: disable the volume group (vgchange -an ...), unmount the loop devices, copy the files, mount the loop devices again, and then scan and enable the volume group again (vgscan; vgchange -ay ...).
In ZFS you just use zfs send … | ssh newbox zfs receive … – that’s it.
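One detail that one-liner glosses over: zfs send operates on snapshots, so in practice the move involves one extra command. A sketch, with placeholder dataset and host names:

```shell
# zfs send works on snapshots, so take one first
zfs snapshot my_disk@migrate

# stream it to the other box ("newbox" and dataset names are placeholders)
zfs send my_disk@migrate | ssh newbox zfs receive my_disk
```

Still two commands instead of a six-step loopback shuffle.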
I know, my approach using disk images was maybe the worst and stupidest idea ever – but it drastically illustrates the complexities of LVM in comparison to the simplicity provided by ZFS. Used right, LVM might be less painful, but I doubt it comes close to the ease of use and power of ZFS. Or do you have different experiences? Share them with us in the comments. Let the flame wars begin…