ZFSBootMenu is a bootloader that provides powerful and flexible discovery, manipulation and booting of Linux on ZFS. Originally inspired by the FreeBSD bootloader, ZFSBootMenu leverages the features of modern OpenZFS to allow users to choose among multiple “boot environments” (which may represent different versions of a Linux distribution, earlier snapshots of a common root, or entirely different distributions), manipulate snapshots in a pre-boot environment and, for the adventurous user, even bootstrap a system installation via zfs recv.
In essence, ZFSBootMenu is a small, self-contained Linux system that knows how to find other Linux kernels and initramfs images within ZFS filesystems. When a suitable kernel and initramfs are identified (either through an automatic process or direct user selection), ZFSBootMenu launches that kernel using the kexec command.
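For anyone curious what that hand-off looks like, it is conceptually similar to the following shell sketch; the dataset name, file names and kernel command line are illustrative placeholders, not what ZFSBootMenu literally runs:

# mount the chosen boot environment read-only (names are made up)
mount -t zfs -o ro,zfsutil zroot/ROOT/debian /mnt
# stage its kernel and initramfs
kexec --load /mnt/boot/vmlinuz --initrd=/mnt/boot/initrd.img --command-line="root=zfs:zroot/ROOT/debian"
# jump into the newly loaded kernel
kexec --exec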
Interesting bootloader, for sure, but I am curious to know how many people use ZFS on Linux. Are there any distributions that use ZFS by default?
As a confirmed ZFS addict, I use it on my main desktop PC (openSUSE), my private NAS (FreeBSD) as well as on all of my company servers, which run FreeBSD and Debian on the bare metal. To my knowledge, there are no distributions that use ZFS by default (Ubuntu had it for a while, but I do not know if they still do). As far as my needs and preferences are concerned, I found Debian to be the most accommodating distribution among the “big name” distros, since they include ZFS as a DKMS package and the maintainers do a great job of providing newer versions via the backports repository.
I have used ZFS since it first became stable on FreeBSD in 2011, and back then you had to install FreeBSD manually because ZFS was not yet a part of the installer. For this reason, I am used to installing the operating system more or less by hand if I want to use ZFS. On Linux, Debian makes this really easy for me as an advanced user thanks to debootstrap, since none of Debian’s configuration tools (such as they are) concern themselves with the disk or partition setup, and thereby leave ZFS alone.
On my openSUSE desktop, I use ext4 for the root partition and my /home directory lives in a ZFS pool. For desktop use this is very handy when I run VirtualBox, which benefits from ZFS compression, snapshotting and so on, and of course I can do send/receive to my NAS for backups.
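To give a concrete idea of that /home-on-ZFS workflow (the pool, dataset and host names below are made up for the example, not my actual setup):

# a compressed dataset for VirtualBox images
zfs create -o compression=lz4 tank/home/vms
# snapshot before anything risky
zfs snapshot tank/home/vms@before-upgrade
# incremental send/receive to the NAS for backups
zfs send -i tank/home/vms@last-backup tank/home/vms@before-upgrade | ssh nas zfs recv -u backup/vms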
On my NAS/home server, I run FreeBSD with ZFS for both root and everything else.
On my laptop, I’m running ZFS root on the whole disk, although I had to do a pretty manual install with KDE Neon (Ubuntu LTS + newer Plasma updates).
I’m going to be reinstalling my desktop soon, and eventually my laptop, and I’m contemplating NetRunner, although probably just Debian testing, and I want to do ZFS root there as well.
Why? Snapshot/recovery/transparent compression/etc.
I’ll have to remember this project, and look into it.
Take a look at https://docs.zfsbootmenu.org/en/v2.2.x/guides/debian/bookworm-uefi.html – we have a basic guide to installing Debian root-on-ZFS with ZFSBootMenu.
I’m using and booting ZFS on Alpine Linux:
https://wiki.alpinelinux.org/wiki/Root_on_ZFS_with_native_encryption
Obviously this article isn’t about btrfs, but I’m curious if anyone with experience with both has an opinion about one FS versus the other?
The bulk of my infrastructure uses RAID + LVM2. It is a bit rough around the edges sometimes (especially LVM2 thin pool quirkiness), but it works and I know what I’m dealing with. I really wish Linux volume management and RAID would be consolidated into a single tool.
LVM thin volumes and snapshots are useful for me, but as always there’s more than one way to approach this, and I’m considering alternatives for the future, especially if I can get the features I need with more streamlined tooling.
I’ve used ZFS for bulk storage on my Linux workstation for over 10 years, but am slowly transitioning to btrfs. The primary reasons I’ve loved ZFS (aside from being a Solaris administrator back when it was released) were the ease of use, CoW snapshots and block compression. However, btrfs has all of these things and more now, and it’s been robust enough for the big NAS guys for many years. The rebalance and ability to change RAID levels on the fly are very neat. I’ve used it for my primary partitions for quite a few years, but will be moving my bulk storage from ZFS to btrfs soon. The biggest issue I’ve had with ZFS on Linux in recent years is that the pace of kernel releases on most distros is faster than the ZoL developers can keep up with, so a kernel update can render your system without ZFS until that’s resolved (you can of course boot a previous kernel or apply largely untested ZFS patches).
dexterous,
Yeah, it would have been nice if Linux mdraid/LVM supported that, but that stack is getting old and isn’t as usable. I think ZFS (and possibly btrfs) offers much nicer ways to grow an array organically.
Right there you may have sold me on btrfs, haha.
Not to get too far into the weeds, but I’ve dealt with Linux’s unstable driver ABI before, maintaining an out-of-tree kernel file system, and you are absolutely right about this. I would imagine many distros keep the drivers in sync themselves before publishing new kernels, but for someone like me who wants to run the latest mainline kernel and use it with my distro, it is a lot of work to handle kernel ABI breakages.
I know ZFS allows one to store and access virtual disk images without having to use a loopback device (in a similar manner to LVM), but am I right that btrfs can only host disk images as files and not as volumes? That could be a con.
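For clarity, the ZFS feature I mean is a zvol, which exposes a real block device rather than a file; roughly (pool and dataset names are invented):

# creates a 20 GiB block device at /dev/zvol/tank/vm/disk0
zfs create -V 20G tank/vm/disk0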
The only distribution I know of that doesn’t take ZFS into account when releasing a kernel package update is Arch. Void Linux, for example, supports multiple different kernel series and will only bump the ‘linux’ meta package when all DKMS drivers (Nvidia, ZFS, etc.) support that specific kernel.
In the ~5 years I’ve been running Void with ZFS, I’ve never once had a problem with a kernel update and ZFS.
zdykstra,
Naturally, distros are only going to distribute official kernels and ZFS drivers that are supported and have been tested together, so you shouldn’t see a problem there. Usually, though, it means they stay a couple of versions behind the latest stable kernel, and if you try to install mainline, that’s when you run into trouble.
For example, Debian 12 includes Linux 6.1, but the latest stable is 6.4.10 and mainline is 6.5.
Granted, the vast majority of users will just stick to whatever the distro offers. They don’t care about the latest kernels, but sometimes it’s annoying when there’s a new kernel feature/fix that you actually want, but you’re still stuck on an older kernel, especially given how long some LTS distros wait to update.
Yes, I’m aware of all of this. Look at the parent comment to which I was replying.
They said that distributions are releasing kernels faster than OpenZFS can support them. The only distribution that I’m aware of with this problem is Arch. We’ve set up automation around installing various Linux distributions onto a ZFS dataset in https://github.com/zbm-dev/zfsbootmenu/tree/master/testing/helpers – and I’ve written additional guides at https://docs.zfsbootmenu.org/en/v2.2.x/ . Again, the only one that’s actually problematic is Arch.
Not just Arch, no. It also happens regularly with Fedora, and it almost feels like there’s an internal battle going on between the kernel and ZoL devs, as kernel devs keep removing symbols that ZoL relies on. This is a fairly recent development and has been occurring regularly over the last two or three years. As I said, I’ve been running ZoL for a good 10 years at least. Many distro maintainers couldn’t care less about ZoL compatibility, as its license is not compatible with the GPL. Don’t take my word for it though; see it for yourself in the regularly posted issues: https://github.com/openzfs/zfs/issues
My home server started out with FreeBSD and ZFS on IDE drives (160 GB!). That pool has been migrated between vdev types (raidz1 to multi-mirror), hard drive types (IDE to SATA, currently 2 TB), and from a single root-on-ZFS pool to a multi-pool setup (root pool on USB to root pool on NVMe; with a separate storage pool).
Last year, I migrated that pool over to Ubuntu so that I could access Plex features that weren’t available on FreeBSD, and to get better support for the Nvidia GPU transcoding capabilities.
No data has been lost on that pool in the 15 years it’s been running, although I did recreate it from backups once (to move from raidz to mirror vdevs). There’s a separate backup pool on an external drive connected via USB3 that ZFS snapshots are sent to each night.
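The nightly job is basically an incremental replication; something along these lines (pool names and snapshot labels are placeholders rather than my actual script):

# snapshot everything, then send only the changes since the previous night
zfs snapshot -r tank@nightly-today
zfs send -R -i tank@nightly-yesterday tank@nightly-today | zfs recv -Fdu backup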
At work, we use FreeBSD+ZFS for all our backup servers. Rsync runs every night to back up remote servers to individual datasets, snapshots are created every morning and then sent to off-site servers running FreeBSD+ZFS. We keep between one and three years of daily snapshots, depending on the server. Recovering individual files from any specific date is a simple scp command. Restoring an entire server is a simple matter of booting the new server off the network, configuring filesystems, and running rsync.
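The per-file recovery works because every snapshot is browsable under the hidden .zfs/snapshot directory on the backup server, so it is just something like this (host, dataset and file paths are invented for the example):

scp backuphost:/backups/webserver/.zfs/snapshot/2023-06-01/etc/nginx/nginx.conf .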