Long ago, during the time of creation, I confidently waved my hand and allocated a 1GB ESP and a 1GB boot partition, thinking to myself with a confident smile that this would surely be more than enough for the foreseeable future. That foreseeable future quickly vanished along with my smile. What was bound to happen eventually came, but I didn’t expect it to arrive so soon. What could possibly require such a large boot partition? And how should we resolve it? Here I’d like to describe the boot partition issue I encountered, along with temporary workarounds and a final solution, mentioning the problems I ran into along the way for reference.
↫ fernvenue
Some of us will definitely run into this issue at some point, so if you’re doing a fresh installation it might make sense to allocate a bit more space to your boot partition. If you have a running system and are bumping into the limitations of your boot partition and don’t want to reinstall, the linked article provides some possible solutions.
At first I thought Thom was talking about a similar issue Windows had (a too-small recovery partition in old installations causing a KB update to fail to install; for more info go here), and I was wondering why Thom wasn’t ranting about bloat in Windows and about those poor Windows users being subjected to such weird errors, then realized the article is about Desktop Linux. My bad.
Satya’s still not gonna notice you
Why are you talking about a scenario you made up in your head just to have a go at Thom? Grow up.
I’m not a fan of how bootloaders have evolved such that this is a problem.
IMHO the boot partition should only contain executables pertinent to the bootloader but not the operating system or drivers. These files don’t belong in the bootloader.
It becomes more onerous when you dual or triple boot operating systems. Every install, despite having its own dedicated partition, ends up with a large kernel and initrd on the boot partition. Because of this, it’s NOT sufficient to backup/snapshot the operating system’s file system; we additionally have to backup/restore its corresponding files on the boot partition. It’s just a shame that because of this we can’t rely solely on LVM / btrfs to backup & restore the OS instance. Obviously the boot loader needs to be able to load the kernel, but grub already has FS drivers and there’s no technical reason these have to be saved in grub’s FS. If this were handled better, a small bootloader partition is indeed all one would need since it wouldn’t contain parts of the OS.
You may be able to get away with running a kernel/initrd out of sync with the installed OS but IMHO it’s kind of jarring that it works this way. It is what it is, but ideally we could snapshot/restore the entire OS including the kernel and initrd without them being an exception.
Think about how cool this would be: we could snapshot the current OS state and no matter how much a new install gets butchered we could be confident that we could restore the snapshot without having to worry about any of this bootloader nonsense.
That’s how I run things with ZFSBootMenu.
My EFI partition just contains ~100MiB of VMLINUZ.EFI and VMLINUZ-BACKUP.EFI for ZFSBootMenu, which only change if I upgrade ZFSBootMenu, and then it pulls everything else out of the /boot folder inside the zroot/ROOT/ubuntu dataset which shares its pool of free space with the zroot/home dataset.
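For anyone curious what a layout like that involves, here’s a rough sketch; device paths, the ESP mountpoint, and dataset names are illustrative, not a recipe for any particular distro:

```shell
# Sketch of a ZFSBootMenu-style ESP (example paths/devices, run as root).
# The ESP holds only the ZFSBootMenu EFI executables:
#   /boot/efi/EFI/ZBM/VMLINUZ.EFI
#   /boot/efi/EFI/ZBM/VMLINUZ-BACKUP.EFI

# Register the loader with the firmware (assumes /dev/sda1 is the ESP):
efibootmgr --create --disk /dev/sda --part 1 \
  --label "ZFSBootMenu" --loader '\EFI\ZBM\VMLINUZ.EFI'

# Kernels and initrds stay inside the pool, under the root dataset,
# so they share the pool's free space and are covered by ZFS snapshots:
ls /boot    # vmlinuz-*, initrd.img-* live in zroot/ROOT/ubuntu
```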
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Yeah, I’d like to see all linux distros go in this direction. While we’re at it I’d like for grub to support LVM & mdraid. Grub isn’t my favorite boot loader, but it’s the de facto standard of modern linux distros.
I suspect you’re understating the goodness of “this direction”. ZFSBootMenu, being inspired by how FreeBSD integrates ZFS, has a recovery shell which can do stuff like rolling back to one of the snapshots that something like sanoid has been managing, making it trivial to roll back the boot environment to before you broke it if you put your OS in a different dataset than /home and /srv and other such userdata. (Or creating a snapshot clone, which is like a git branch, so you can boot into the last working version and then use that to diagnose and fix the broken version rather than discarding it.)
Not as good as Windows’s support for automatically tracking the last successful boot and letting you roll back to that… but much better than what I had with my previous ext4-based setup.
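That rollback/clone workflow boils down to a couple of commands; the dataset and snapshot names below are hypothetical examples of what something like sanoid would create:

```shell
# Assumes the boot environment lives in its own dataset, e.g. zroot/ROOT/ubuntu.
# List the snapshots being kept for it:
zfs list -t snapshot zroot/ROOT/ubuntu

# Roll the boot environment back to a known-good snapshot
# (-r also destroys any snapshots newer than the target):
zfs rollback -r zroot/ROOT/ubuntu@autosnap_2025-01-01_daily

# ...or keep the broken state around for diagnosis by cloning the
# last working snapshot into a new, bootable dataset instead:
zfs clone zroot/ROOT/ubuntu@autosnap_2025-01-01_daily zroot/ROOT/ubuntu-working
```

Because /home and /srv sit in separate datasets, the rollback only touches the OS, not user data.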
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Obviously the snapshot ability isn’t unique to ZFS, so I wouldn’t want this to be limited to ZFS.
I think I can appreciate the goodness of ZFS, haha. On linux I’m kind of biased in favor of mainline drivers and ZFS hasn’t been mainlined due to licensing. Oh well, despite this, I know some people are happy to use the ZFS drivers bundled in their distro.
I think ZFS is technically a robust choice, it might even be the most robust choice. Still btrfs has a nice selling point over ZFS, which is dynamic re-balancing. I love how you can change things without recreating the array. Unfortunately btrfs raid still fails my tests to keep a system running after suffering a disk fault. I’ll just link to a previous discussion, if anyone’s curious…
https://www.osnews.com/story/141259/convert-ntfs-to-btrfs-and-boot-windows-off-btrfs/#comment-10445403
*nod* I’m just saying the desirable aspect of “this direction” includes being able to easily rollback or clone snapshots from the bootloader.
ZFS only comes into play because I want a self-healing, full-checksumming filesystem and I’m too paranoid about data loss to try btrfs; rebalancing isn’t a concern for me because I’ve never felt comfortable using striped RAID.
I’m not even comfortable using whatever that overlay filesystem was that worked as a higher-level “each file is on one disk or another, but the system automates presenting multiple heterogeneous disks as a single filesystem” pseudo-RAID. I stick to JBODing together mutually independent ZFS mirrors because I value knowing that, if I were to power the machine off and yank a single drive, ZFS would complain that I didn’t “export” it (i.e. unset the “in use” flag) first, but I’d have a complete, readable filesystem with each project/application/whatever being either present or absent in its entirety.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Even under ZFS? I use it for everything.
Yeah, to me this should be considered a basic requirement for all raid systems. For what it’s worth mdraid does handle this well and I trust it. All the raid features that should work, do work. It’s robust but I feel I’m missing out on some of the newer features found in btrfs and zfs.
Yeah.
For a variety of reasons, I’ve never found myself running enough separate drives to really make the benefits of striping outweigh the downsides.
That runs counter to the whole “optimize for performance” aspect of what block-level striping is. The whole point of striping is to simulate a drive with higher throughput by sharding the same file across multiple drives and that’s why re-balancing is a thing.
If, instead of sharding each file, you’re just doing something more along the lines of MergerFS, then any kind of balancing heuristic trends toward bin-packing (Which is NP-complete) and you don’t benefit from the performance increases when reading/writing a single file.
(And, for the record, the reason I don’t use MergerFS is because I’d wind up implementing the exact same root/home/tier2/tier3 split that I already do with ZFS datasets and mountpoints… just with an additional layer of code for bugs to hide in… though, granted, having a “tier3” is more an artifact of not having a NAS where I can sound-dampen more than one external on-site, online backup drive, so I’m currently limited to 16TB for root+home+tier2.)
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Raid can let you do both. Mdraid normally needs multiple threads to get a performance advantage under raid 1. I don’t think many mdraid users know that using raid 10 with a “far layout” will double the performance of single threaded reads because blocks are striped across disks. Most people using raid 1 with two disks would be better off using raid 10 under the far layout on the same two disks.
https://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
I don’t know about ZFS, but all of my initial disappointment over mdraid performance not scaling for single threaded IO got quelled once I saw how well mdraid scales with raid 10 far layout.
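For reference, creating that two-disk raid 10 far layout is a one-liner; the device names here are examples and the commands need root:

```shell
# Two-disk RAID 10 with the "far 2" layout (mirrored, but reads stripe
# across both disks, roughly doubling single-threaded read throughput):
mdadm --create /dev/md0 --level=10 --layout=f2 \
  --raid-devices=2 /dev/sdb /dev/sdc

# Confirm the layout was applied:
mdadm --detail /dev/md0 | grep -i layout
```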
If the choice were mine I’d still prefer what btrfs does and don’t want to merge file systems. Btrfs has fantastic support for accommodating arbitrary changes on the fly. Say I build a raid array of 2G+2G+2G+2G and then I want to replace one of the disks with a 5G disk… btrfs can dynamically rebalance the array into a 2G+2G+2G+5G, increasing capacity while maintaining the same level of redundancy – it even supports changing the redundancy afterwards. In this regard it’s quite a bit more flexible than most other raid solutions, including ZFS, which require rebuilding the entire array and/or upgrading all the disks at the same time. ZFS can’t even shrink a volume, which may be seen as less important, but it’s nice that btrfs has no such limitation.
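That disk-swap-and-rebalance scenario maps onto btrfs commands roughly as follows; devices, the devid, and the mountpoint are illustrative:

```shell
# Swap a 2G disk (/dev/sdd) for a 5G one (/dev/sde) in a mounted
# btrfs filesystem at /mnt, without taking the array offline:
btrfs replace start /dev/sdd /dev/sde /mnt
btrfs replace status /mnt

# Tell btrfs to use the full size of the new, larger device
# (4 is the devid of the replaced disk in this example):
btrfs filesystem resize 4:max /mnt

# Rebalance so the extra capacity is usable at the same redundancy level:
btrfs balance start /mnt

# Redundancy can even be changed afterwards, e.g. converting to raid10:
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
```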
These days, I’m of the mind that putting /boot on a separate partition is more bother than benefit.
…especially when I’ve never known “/boot is separated for recovery purposes” as more than an abstract thing because, since the very first time I successfully installed Linux around 2000, I’ve had access to LiveCD/LiveUSB booting as a far superior recovery environment. (eg. access to a web browser and higher resolutions)
I definitely agree, though, that on the machines where I’m not using ZFSBootMenu as my bootloader, I’ll allocate 1GiB of EFI partition. (eg. The little Debian mini PCs I use for low-measurement-noise benchmarking/profiling are optimized to minimize startup time and that meant using the kernel’s EFI stub to avoid needing a separate bootloader.)
Maybe I missed something, but it didn’t really explain what was consuming all that space. I agree these days 500MB for /boot is cutting it close if you like to retain a few backup kernels but I’ve found in general 1GB is more than plenty – even with all the crap in the initrd. Needing more than 1GB seems fringe to me, but most of us are blessed with more storage than we need.
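If you want to see for yourself what’s eating the space, a quick audit looks something like this (the package commands assume a Debian/Ubuntu-style distro):

```shell
# Break down what's consuming /boot (kernels, initrds, microcode, etc.):
du -h --max-depth=1 /boot | sort -h

# On Debian/Ubuntu, list installed kernel packages -- old kernels and
# their matching initrds are the usual culprits:
dpkg -l 'linux-image-*' | grep '^ii'

# Remove kernels that fall outside the distro's retention policy:
sudo apt autoremove --purge
```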
Depending on the bootloader being used, you might need to put those giant GPU firmware blobs on the boot partition.