

Warning: This article is an over-simplified and absolutely incomplete view of ZFS vs LVM from a user’s point of view. I’m sure LVM works great for a lot of people, but, well, for me it sucked. And ZFS simplifies my life. Honestly. Here’s why.
Providing Disk Space To Virtual Machines
Ok, I have to admit that my requirements are kind of weird. I want to run a bunch of virtual machines without re-partitioning my hard drive, adding additional disks, or paying for shared storage. We lease our servers and they’re hosted in a data center far, far away. Re-partitioning disks is a hassle, especially if you don’t have direct access to the box. Sure, you can use your remote console, but re-installing the OS is not a pleasant experience if you can’t insert the DVDs into the disk drive. So I want to provide disk space for virtual machines (Xen VMs on Debian, Zones on OpenSolaris) using the features of my beloved virtual disk manager.
With LVM I created disk image files, attached them to loop devices and then added the loop devices to LVM. The following sequence of commands was necessary to create a virtual disk for my virtual machines:
losetup /dev/loop0 my_disk.img
pvcreate /dev/loop0
vgcreate vgmy_disk /dev/loop0
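And that only gets as far as the volume group; to actually end up with a filesystem the VM can use, something along these lines (the size is just an example) would typically still be needed:
lvcreate -L 10G -n my_logical_volume vgmy_disk
mkfs.ext3 /dev/vgmy_disk/my_logical_volume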
With ZFS none of that crap with disk images and loopback devices is necessary. And additionally, it’s so much simpler to provide disk space:
zfs create my_disk
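In practice zfs create takes a dataset path inside an existing pool, so assuming a pool named tank (the pool name here is just an example) it would look roughly like this:
zfs create tank/my_disk
Or, if the VM should see a raw block device instead of a filesystem, a fixed-size zvol:
zfs create -V 10G tank/my_disk_vol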
Giving More Disk Space To Virtual Machines
If I want to add more disk space to my virtual machine, LVM makes me break my fingers with the following command sequence:
losetup /dev/loop1 my_disk0.img
pvcreate /dev/loop1
vgextend vgmy_disk /dev/loop1
lvextend -L+10G /dev/vgmy_disk/my_logical_volume
e2fsck -fy /dev/vgmy_disk/my_logical_volume
resize2fs /dev/vgmy_disk/my_logical_volume
Simple, right?
Now the ZFS version:
zfs set quota=20G my_disk
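Again assuming a hypothetical pool named tank, the full command plus a quick check would look something like:
zfs set quota=20G tank/my_disk
zfs get quota tank/my_disk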
Hmm, let me think, which one I prefer…
Moving Virtual Disks To Other Servers
If you want to move a virtual disk (maybe consisting of multiple disk images) to another box, there is a lot to do: deactivating the volume group (vgchange -an ...), detaching the loop devices, copying the files, setting up the loop devices again, and scanning and activating the volume group again (vgscan; vgchange -ay ...).
In ZFS you just use zfs send … | ssh newbox zfs receive … – that’s it.
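Since zfs send works on snapshots, a filled-in version of that one-liner (pool, dataset and snapshot names are just examples) would look roughly like:
zfs snapshot tank/my_disk@migrate
zfs send tank/my_disk@migrate | ssh newbox zfs receive tank/my_disk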
I know, my approach using disk images was maybe the worst and stupidest idea ever, but it starkly illustrates the complexity of LVM compared to the simplicity of ZFS. Used right, LVM might be easier, but I doubt it comes close to the ease of use and power of ZFS. Or do you have different experiences? Share them with us in the comments. Let the flame wars begin…
I think LVM may be misrepresented here, given that you chose to use loopback files and a new volume group for every virtual machine. I can’t really see why that would be important.
If you dropped that requirement and just had a single volume group, you could add all free space to that volume group and drop all of the loopback device management. Further, if you just let the domUs manage what is on the logical volume assigned to them, you can choose per domU whether to use LVM inside the domU as well (allowing online vgextends and such) or just make a filesystem on the LV directly.
Not saying this is better than ZFS, just that the process can be streamlined, and is quite a bit simpler than what is presented here.
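For illustration only, the streamlined setup described here might look roughly like this (device, volume group and domU names are made up):
pvcreate /dev/sda3
vgcreate vg_guests /dev/sda3
lvcreate -L 20G -n domu1-disk vg_guests
The LV is then referenced directly in the domU config, e.g. disk = ['phy:/dev/vg_guests/domu1-disk,xvda,w'].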
Thanks Clint, I definitely did not dive too deep into all the possibilities of LVM. One issue for me was that I was not able to repartition the hard disk and therefore did not have any big free space to put a single volume group on. This is the root of all my config complexity for sure.
I don’t suppose you have a way to migrate/convert lvm into zfs?
No, unfortunately I’m not aware of a way to migrate lvm into zfs.
Also, with LVM, migrating between physical disks does not require unmounting anything.
You can just do:
vgextend vg0 /dev/sdb1
pvmove /dev/sda1 /dev/sdb1
vgreduce vg0 /dev/sda1
That’s all.
I am not trying to say ZFS is bad, but I really would like to see the differences, because I had to choose something. What about stability and recoverability? Does anybody have experience with restoring ZFS after a failure?
No offence, but I think you are misusing LVM. It was not meant to be used the way you are using it, so of course it is very weird to use it this way. If you want to base your config on partitions “in” files, then sure, LVM is not the right tool for that. The way LVM was meant to be used is to format a partition for LVM and let it manage it. After that, resizing volumes is really easy:
lvextend -rL +10G /dev/main/repository
lvreduce -rL -10G /dev/vgmain/test
I just wonder if there is a performance penalty for using file-based partitions, because it adds one more filesystem layer which is not necessary.
I fear the article above gives an extremely limited view of the capabilities that LVM brings. Nothing against ZFS, but it isn’t always an option. If you plan to use RAIDz2 for VM or data storage, then definitely use ZFS; that’s six physical drives minimum, right? For systems with fewer drives, or systems where booting from ZFS is not production ready, LVM can fill the gap for OS drives. LVM has been enterprise capable for 20 years on Linux.
LVM is fully supported by libvirt, so presenting an LV is part of the VM creation process. The host OS just knows that there is a VG; libvirt will allocate an LV if you ask. The storage isn’t presented to the host.
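For illustration, an LVM-backed libvirt storage pool can be set up with something like the following (the volume group and all names are made up):
virsh pool-define-as guests_pool logical --source-name vg_guests --target /dev/vg_guests
virsh pool-start guests_pool
virsh pool-autostart guests_pool
virsh vol-create-as guests_pool vm1-root 20G
The last command carves out a 20G LV in the pool that libvirt can hand straight to a guest.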
LVM supports sparse LVs too, called thin provisioning, so we can let customers see that they have 50G while only 10G is actually in use. As they use more and more storage, the LV expands automatically, 10% more at a time. If they never use it, that storage remains available to the other VMs… which are also thin provisioned.
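A rough sketch of thin provisioning with lvcreate (sizes and names are made up):
lvcreate --type thin-pool -L 100G -n tpool vg_guests
lvcreate -V 50G --thinpool tpool -n customer1 vg_guests
The second command creates a volume the guest sees as 50G, while blocks in the pool are only allocated as data is actually written.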
LVM is very fast, which is counterintuitive; more layers usually mean slower, right? With LVM, we can convert between non-mirrored and mirrored storage, in either direction.
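A conversion like that might look roughly like this with lvconvert (names are made up, and a second PV in the VG is assumed for the mirror leg):
lvconvert --type raid1 -m 1 vg_guests/vm1-root
lvconvert -m 0 vg_guests/vm1-root
The first command adds a mirrored copy to an existing linear LV; the second drops back to a single, unmirrored copy.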
We can move entire physical volumes with pvmove while the disks are still in use. So when you outgrow that 256G SSD, moving to a 1TB SSD isn’t a big deal. With hot-swap storage, it won’t need any downtime. Without hot-swap storage, you only need the time to install the new SSD, about 5 minutes, and then all the migration work can happen online. Heck, I need to migrate a VM server from a 500G SATA SSD to a 500G NVMe SSD. Thankfully, it uses LVM for the OS and for about 15 VMs. I just need to install the NVMe tomorrow during a maintenance window, then create a GPT table and two partitions (/boot and an LVM PV for the rest). Sunday, reboot using the NVMe device (maybe reach in to unplug the SATA SSD), and remove the old SSD in a week to be reused in another system.
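A rough sketch of that kind of migration (device names are hypothetical, and /boot lives outside LVM so it would be copied separately):
sgdisk -n 1:0:+512M -n 2:0:0 -t 2:8e00 /dev/nvme0n1
pvcreate /dev/nvme0n1p2
vgextend vg0 /dev/nvme0n1p2
pvmove /dev/sda2 /dev/nvme0n1p2
vgreduce vg0 /dev/sda2
pvremove /dev/sda2
The pvmove step runs while all the LVs stay mounted and the VMs keep running.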
I’m ignorant about how I’d accomplish these things using ZFS. But there is a solution to ignorance.