“It’s the one major part of the PC that’s still reminiscent of the PC’s primordial, text-based beginnings, but the familiarly-clunky BIOS could soon be on its deathbed, according to MSI. The motherboard maker says it’s now making a big shift towards point and click UEFI systems, and it’s all going to kick off at the end of this year.” FINALLY.
It’s about damn time!
Seconded! The sooner the BIOS is gone the better off the PC will be. Of course, there’ll still be BIOS-booting systems for years and years to come, so we’re nowhere close to being rid of it. Still, hopefully this means I’ll be able to actually buy a mainstream UEFI board. I hope they put a command interface in there too though, something like the old OpenFirmware interface. Much better for headless installations or when you know exactly what you want to do. I’d love to just be able to connect over the network and type “boot cd0” or something like that and go without having to change the boot order or hook up a keyboard or mouse.
Indeed!
There aren’t very many boards out there that can redirect the BIOS output to network or serial connections… but they do exist (mostly in the server market).
I doubt this will be any more popular with EFI-based boards though… very few commodity hardware consumers would ever care for this.
Ah, but UEFI is extendable. MSI might not include it in their consumer boards, but there are already EFI shells that can be installed and I bet they’d get ported over pretty quick if these boards become common enough. That’s the really awesome thing about EFI: we’re not necessarily limited to what the manufacturer includes like we are with BIOS.
Phoenix has done a lot of great work in the EFI area.
This has proven valuable to me for migrating away from BIOS to EFI based environments: http://blogs.phoenix.com/phoenix_technologies_bios/uefi/
Can it be?!! They’ve finally caught up to Amiga Kickstart 3.0! And that came out in 1992!
This is great news but I sincerely still don’t really understand what improvements it brings over BIOS other than extensibility. I mean, can it boot computers faster? The only significant advantage I know is that it’s probably easier to port Mac OS X now.
Another thing is that EFI drivers can be much more powerful.
Your graphics card could have an EFI driver, and your OS could just call EFI to use it.
Then, you don’t need to maintain drivers on the OS, and alternative OSes don’t need drivers at all, other than to use EFI – just keep your graphics card firmware flashed, and the drivers stay up to date.
It can boot faster because you can embed the bootloader as an EFI module, which lets you skip the MBR. In fact, there is no MBR in the first place.
It uses GPT instead of the DOS-based partition table, so you can have as many partitions as you want, and all of them are primary.
You can add and remove modules. Apple added a module that allows booting from a DVD drive over the network. It’s not magical, it’s just an extension.
It can replace the old PXE spec to allow easier diskless booting and backups.
It has nearly limitless possibilities.
Oh, and yes, it comes with a GUI, so newcomers can understand it without having their eyes burned by a blue BIOS screen.
And they could have slapped a point-and-click GUI on the standard PC BIOS too… in fact I owned an AMI BIOS Socket 3 machine back in the ’90s that had a point-and-click mouse-based BIOS.
The interface for that BIOS was so horrid, however, that I was glad to have my AWARD BIOS text-based interface back on the next machine I built.
A 486 I had years back had a point-and-click BIOS, no kidding: it supported a PS/2 mouse and had a vaguely Windows 3.1-like interface.
(Posted this in the wrong story – sorry for everyone reading it twice!)
The MBR partitioning scheme cannot address volumes larger than 2 TB. GPT partitioning can be used to overcome this. UEFI is not a hard requirement to boot from GPT; BIOS-based booting just loads the first sector and executes it. The BIOS does not limit the partitioning schemes that can be used; that is determined by the software on the disk, i.e., the bootloader.
What this article meant to say is that GPT BIOS bootloader support is less than universal, and moving to UEFI gives GPT boot support without rewriting the bootloader. Except moving to UEFI really is rewriting the bootloader…
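The 2 TB figure follows directly from the on-disk format: an MBR partition entry has a 32-bit sector count, and PC disks traditionally use 512-byte sectors. A quick sketch of the arithmetic:

```python
# Why classic MBR addressing tops out around 2 TiB: the sector-count field
# of a partition entry is 32 bits, and PC disks traditionally use
# 512-byte sectors.
SECTOR_SIZE = 512          # bytes, the traditional PC sector size
MAX_LBA_FIELD = 2**32 - 1  # largest value a 32-bit field can hold

max_addressable = MAX_LBA_FIELD * SECTOR_SIZE
print(max_addressable / 2**40)  # just under 2.0 TiB
```

With 4096-byte sectors the same 32-bit field would reach 16 TiB, which is why larger sector sizes are mentioned as a theoretical workaround.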
This is good news, however a bit late.
It seems like the main problem with BIOS is its lack of proper driver support, and booting in 16-bit mode, possibly breaking things from time to time.
However, we already have a powerful bootloader, GRUB. GRUB is now so enhanced that it can do many things EFI was supposed to do, and it’s already a pseudo-standard in x86/x64 booting (Solaris, HURD, and Linux all have GRUB as their first choice).
And the problem with GRUB is that it exists in a volatile location where any OS can screw with it. For its full glory, it also requires a host partition to reside on, and some other system (read: BIOS) to bootstrap it from “bootable” media.
BTW, does GRUB 2 yet support chain-booting a bootable CD? That has always annoyed me.
Face it, BIOS and UEFI are stage 0, while GRUB can currently only exist at stage 1.
This is all very good and well, up until you need to install Windows.
Microsoft goes out of their way to break GRUB (Win7 even refuses to boot off any partition or disk that isn’t the primary unless a previous version of Windows exists as the primary; this is clearly engineered deliberately to break GRUB).
Windows 7 has no problem booting from GRUB, and the need for a primary partition does not pose any problem for GRUB. The only problem Windows creates for GRUB — and any other bootloader — is that it wipes the MBR on install so that the active primary partition is booted instead of GRUB.
These days, you should use GRUB2.
Well, considering that the Multiboot2 spec is currently not finalized, GRUB2’s sole apparent benefit at the moment is that it’s a cleanup rewrite of GRUB. Which also means that it is less tested and hence potentially more buggy.
Unless there’s some other benefit which I don’t know of, GRUB2 does not look that interesting at the moment, compared to GRUB legacy. Except for early adopters who like to have an even more messy /boot/grub folder…
Windows 7 doesn’t boot from GRUB. What GRUB does is load Win7’s bootloader (much like GRUB loading another version of GRUB held on a different partition).
I never remotely implied any such thing so I have no idea why you’d bring that up.
And as Windows is always the active primary partition, Windows will always get loaded. Hence my earlier post.
That’s rubbish. GRUB is more than sufficient for most people, particularly since GRUB supports graphical menus (albeit not nearly to the same degree as GRUB2) and GRUB2 is a complete nightmare to configure.
Excuse me, but when you write “Microsoft go out of their way to break GRUB (Win7 even refuses to boot of any partition or disk that isn’t the primary …)”, you imply that the use of a primary partition breaks GRUB. “Never remotely implied”? It’s explicit.
The use of the primary partition breaks GRUB because Windows forcefully overwrites GRUB on install.
So I still can’t see how you came to that conclusion that I thought GRUB couldn’t install on a primary partition.
No, it doesn’t. Overwriting the MBR is the only thing installing Windows does that wipes out GRUB. It’s no big deal, and would happen no matter which bootloader you use, and no matter which partition Windows was installed to. Primary, secondary partition doesn’t matter at all to GRUB. It’s irrelevant.
I think you’re using the wrong word. It’s the Windows installer’s erasing of GRUB that breaks it. It has nothing to do with a primary partition at all; it wouldn’t matter if the installer put Windows on a logical partition. Replacing the MBR would *still* break an already-installed boot loader no matter what it is and no matter where the OS is installed. You said that GRUB breaks because Windows is installed to a primary partition, when this is demonstrably not the reason.
Yeah, I thought Win7’s installer put more in the MBR than XP’s did, thus breaking Win7 if the MBR was then changed to point to GRUB.
But after playing around last night, turns out this isn’t the case.
I still can’t see any logical reason for Win7 refusing to install on any non-primary partition other than to be bloody awkward (thus forcing users to install Windows first and then Linux, rather than the other way round).
This is particularly frustrating when taking into account how inept the Windows installer’s partition manager is (I ended up having to boot a Linux live CD just to partition the disk correctly).
I’d guess it’s so their bootloader can be extremely simple? Having a smart, versatile bootloader just makes it easy to install and run competing OS’s, and how often does Microsoft go out of their way to make it easy to use a competitor’s products?
Yeah, I’ve definitely done that often. Get the partition layout you want with fdisk from a Linux live CD, then turn things over to the brain-damaged Windows installer once everything’s ready for it. Because you never quite know what it’ll do, if it sees multiple partitions that it could write to. Once, I had it stick the boot-loader and kernel in the first active partition, and the rest of the system in a logical partition that it created and stuck inside a primary partition marked as type FAT. I have no earthly idea why it decided to do that.
Well, PowerQuest’s PartitionMagic was one of the first pieces of software to do partitioning the way it should always have been done. Then everybody followed, with Mac OS X even introducing some little usability improvements…
…except for Microsoft which still uses partitioning software from the 80s, only with a shiny graphic shell on top of it.
And then some people wonder why so many sysadmins hate Windows while Linux is “such an immature platform”…
That’s a problem, but not a huge one: most habitual dual-booters learn to overcome it pretty quick. Basically, here are the steps for a multi-boot environment that includes Windows:
1. Create your partition layout. Mark every partition that you don’t want Windows to mess with as a type that’s not Windows (like 82, Linux swap).
2. Install Windows. Put it in the first partition that’s of a type that Windows likes.
3. Install your other OS’s, and GRUB. Put GRUB in the MBR.
The pertinent bit is simple: install GRUB after Windows, and make sure that no other partitions are of a type that Windows likes when you install Windows. IIRC, Windows doesn’t actually have to be in the first primary partition, it just insists that it is at install time. I don’t *think* anything bad will happen if you install Windows in, say, the second primary partition and then later make the first one, say, FAT or NTFS or whatever. At least, I think I did that with an early Ubuntu and Windows XP and got away with it.
Basically, the rules for getting GRUB and Windows to play nice are usually learned through experience, but they’re not complicated once you know them. It’s actually really simple, and eminently doable.
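The type-marking trick in step 1 can be sanity-checked mechanically. A hypothetical helper (the type IDs are the standard one-byte MBR codes as shown by fdisk; the function name is my own):

```python
# A tiny sanity-check for the "mark it as a non-Windows type" trick above.
# MBR partition type IDs are one byte: 0x82 is Linux swap, 0x83 is Linux,
# while 0x07 (NTFS) and 0x0B/0x0C/0x0E (FAT variants) are types the
# Windows installer considers its own.
WINDOWS_TYPES = {0x07, 0x0B, 0x0C, 0x0E}

def windows_will_touch(type_id):
    """True if the Windows installer is likely to consider this partition."""
    return type_id in WINDOWS_TYPES

print(windows_will_touch(0x83))  # False: a Linux partition gets left alone
print(windows_will_touch(0x07))  # True: NTFS looks like fair game
```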
No, whatever the MBR points to is loaded. If you install Windows, it writes to the MBR, so you need to get a rescue CD, preferably of a compatible ISA (IA32 or x86_64, though often x86_64 will work just fine for either), chroot in, and run grub-install to put GRUB back.
It’s not something new in 7, Vista, XP, or even 2000.
How does that break GRUB? It’s been long known that Windows likes to boot from a primary partition, and that GRUB doesn’t give a damn. So, you install Windows on a primary only, and if you’re going to be short on partitions, put your *n*xes on extendeds. If you install Windows after your other OSes, chroot in from a similar boot disc, and re-install GRUB to the MBR.
GRUB is absolute garbage. I can’t believe people would use a boot manager that utterly fails to boot when the partition it was installed from gets corrupted. The main point of having multiple bootable partitions is to have the option to boot to a backup in an emergency – but with GRUB you can’t – completely defeating the purpose. (…and that’s just one of its problems)
I’d like to see GRUB wiped off the planet.
And replaced by what ? LILO ?
GRUB isn’t that bad, actually. If it gets corrupted, you only have to play with grub-install on a LiveCD to get it running again. Or to put it on a separate partition, as several people do… (Myself I don’t because I think that it is reliable enough, but I often read this recommendation)
It’s a broken design. Period. The boot loader shouldn’t need to use any partitions in order to function and boot the available Operating Systems.
If UEFI can help us get rid of the stuff like GRUB it’s all for the better.
“GRUB isn’t that bad, actually.”
Indeed. It’s even worse! My system boots just fine and dandy as long as I don’t have any USB sticks or removable hard drives attached… but if I try to boot with a USB stick in the USB port GRUB fails to load! Apparently having an extra storage medium attached causes GRUB to get gloriously confused and try to look for its files in the wrong place.. D’oh. It’s freaking annoying.
You sure that’s not the BIOS’ doing? I have one machine where, if I have any type of external USB media attached at all, the BIOS literally will not pass control to the hard drive no matter what. It just sits there with a blinking cursor. It’s not USB booting causing it either since I have that turned off on this machine. It’s extremely irritating to say the least.
Yes, I know it’s GRUB: it loads GRUB and GRUB prints out its usual version identification etc, and an error saying it can’t load its files..
Sounds like your BIOS may be presenting the drives in a different order.
I’ve heard others experience similar problems, but I have not seen it personally on any of my machines.
That’s what I was thinking, but then again, Microsoft’s own MBR loader works just fine even with USB sticks attached. Of course, it’s a whole lot less complicated than GRUB, but still.
Perhaps it internally re-maps the drive that it was booted from back to 0 and that allows it to work properly… who knows.
There are tons of nice boot managers. Right now I’m using PLoP, but XosL and BootIT NG are also nice. With any of those you have some recovery options if you screw up your partition table.
IMHO, each OS needs to have its bootloader installed to its own partition. It keeps things simple, and it’s much more difficult to break more than a single OS. Letting GRUB or NTLDR reside in the MBR is setting yourself up for an unbootable situation.
For example, occasionally when I update my Gentoo kernel I’ll make a typo in lilo.conf, which renders that partition unbootable. So I’ll boot another OS to fix it. If LILO was installed in the MBR I’d need to fetch a live CD or USB key and the process would be much more painful. Not to mention that I sometimes don’t have those recovery tools on hand.
Sounds to me like you think pretty much all available bootloaders for normal PCs are rubbish. I don’t know of any one that can function w/o a partition.
Why do you need a partition table to boot? Just because most bootloaders expect one doesn’t mean they all do.
(Note: I realize now you’re talking about chainloading bootloaders)
The BIOS is just gonna execute from the first block on the disk.
For example, you can take a raw Haiku image and write it to the beginning of your disk. This will eliminate the MBR and partition table altogether – allowing Haiku’s stage 1.5 bootloader to be started directly from the BIOS.
In fact, if you want to put it on a partition, you have to actually modify the boot block of the image, because it’s initially hardcoded to assume the image starts at offset 0x0 on the disk rather than at partition 1.
Some BIOSes don’t like this, however, for no good reason (especially when booting from a USB device). They will sometimes insist that a partition table exists before executing the boot sector.
Haiku’s “anyboot” image comes with an MBR and a fake partition table to satisfy the finicky BIOS requirements, but uses some trickery to basically just jump directly to the stage 1.5 loader regardless. Somewhere in between it also contains El Torito support to allow booting directly from a CD.
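The BIOS’s notion of “bootable” really is that minimal: load sector 0 and jump to it if the last two bytes are the 0x55 0xAA boot signature. A sketch of that check:

```python
# The whole BIOS boot contract: sector 0 is executable if and only if
# bytes 510-511 carry the 0x55 0xAA boot signature.
def is_bootable_sector(sector):
    """Check a 512-byte sector for the classic BIOS boot signature."""
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

blank = bytes(512)                 # all zeroes: not bootable
stub = bytes(510) + b"\x55\xaa"    # no code, but signed: the BIOS will jump to it
print(is_bootable_sector(blank))   # False
print(is_bootable_sector(stub))    # True
```

The finicky BIOSes described above add their own extra condition (a plausible-looking partition table), which is exactly what the anyboot image’s fake table satisfies.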
Yes. Specifically, bootloaders that allow booting from many different devices, and booting of different OSes (even if of the same family), for which the MBR is just a bit too small. On top of that, MBR corruption is generally a very minor problem, compared to corruption of data or its container(s).
That said, it would be neat to define a boot and partition setup in which a bootable partition can simply be selected at a mobo menu, like whole devices are, bypassing the need for anything but usefully executable beginning sectors for any locally-attached device, or discoverable network boot device.
Can’t we have less bios rather than more? You know, push the power management down to the hardware itself. Maybe give it enough intelligence so that you can point it to a partition that has the files for booting, and nothing else. A small, well defined part of the computer that does one thing well. Is it really too much to ask for?
No, let’s not do that. It’s better to do things in software than hardware. Just take a look at the features of the 386 that have gone unused because it’s easier and more reliable and more manageable to do them in software (task-switching, segmentation, etc.). Power management isn’t simple and newer policies might be developed over the life of a device or computer. Rather than hardwiring power management, it’s better to have it as a set of drivers that can be tweaked and upgraded. It’s also a lot easier and cheaper to write drivers than to develop complex custom hardware.
Yes, because power management is going away and/or the OS knows more about how to manage the power on hardware than the hardware itself…
No, that’s a strawman. Of course, each device knows how to manage its own power. Certainly the details of each device’s power management should be left to the hardware. But higher level stuff, such as whether the device is on full-power, partial power, or off, etc. should be left as policy decisions to the OS which can balance user’s needs with its overall knowledge of the state of the system to make a good decision regarding power management. That’s far too complicated to be built into the motherboard and policy doesn’t belong there anyway.
What you’re describing is essentially ACPI… Or am I missing something ?
You confuse me. You say “no, you are wrong, instead we should do exactly what you said we should do”. The device manages its own power and the OS tells it what to do. No firmware involved at all. Well, I suppose there would be firmware on the device, but it has no delusions of booting the machine. So “no firmware between the OS and the device” would be a better way of putting it.
Your original post wasn’t entirely clear, but you did say “push the power management down to the hardware itself”, implying that all power management is done in hardware instead of managed by software and system firmware. If that’s not what you meant, then I retract my accusations.
That sentiment is good in theory, but you’d still need software to carry out those functions, even if it’s just firmware.
So personally I’d rather have firmware that can easily be tailored to my personal usage than firmware that cannot. And if that means having “more BIOS rather than less”, then so be it.
A new virus/malware vector… yay. Not to mention more bug opportunities than the Discovery Channel.
I second the “less is more” comment. Whatever form the BIOS takes, it should do as little as possible to do any hardware initialization and self-testing required, then pass control to the bootloader or OS.
/+1.
Let alone that huge amount of DRM “services” that the good people at certain parts of the industry are already trying to shove down our throats. (Which can then be used as additional attack vectors).
– Gilboa
You can already get malware into your MBR / bootloader. It’s just that they’re not really seen in the wild, as there’s very little gain in doing so:
* you can’t easily self-replicate like you could with infected EXEs
* and you can’t easily collect bank details (etc) like you could with traditional trojans / spyware
So I can’t see UEFI being any more of a practical attack vector than the current alternatives already in place.
Hi,
There’s a lot of misinformation, both in the comments here and in the original article. Hopefully the following clears some of it up..
“A UEFI system would be an essential requirement in order for a PC to boot from a drive larger than 2 TB.” Wrong. In the MBR scheme, a partition is described by a 32-bit starting LBA address and a 32-bit number of sectors; therefore a partition can start at LBA address 0xFFFFFFFF and be 0xFFFFFFFF sectors long. This means you can have a 4 TiB disk with a pair of 2 TiB partitions without much problem. In theory it’s also possible to have sectors larger than 512 bytes (e.g. 4096-byte sectors); and in practice it is possible for an OS to use the GPT partitioning scheme without using UEFI.
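Brendan’s point, in numbers: with both 32-bit fields maxed out, the last addressable byte of such a partition sits just shy of 4 TiB (assuming 512-byte sectors):

```python
# An MBR partition entry holds a 32-bit start LBA and a 32-bit sector
# count; start at the limit and run for the limit, and the partition's
# end lands just under 4 TiB (with 512-byte sectors).
SECTOR = 512
MAX32 = 2**32 - 1

last_byte = (MAX32 + MAX32) * SECTOR
print(last_byte / 2**40)  # just under 4.0 TiB
```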
“Not least the fact that a standard BIOS can’t simply be flashed with a new UEFI system.” Actually, a full UEFI system typically uses an “EFI partition” to store things like device drivers, boot loaders, etc. There’s no reason this “EFI partition” can’t contain half of the firmware (and code for a legacy BIOS compatibility layer), with the flash ROM containing only the minimum needed to access the EFI partition. In this way a smaller flash ROM could be used, and the flash ROMs in existing systems are already larger than such a scheme would need.
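For the curious, the EFI partition Brendan mentions is identified in GPT by a well-known type GUID (C12A7328-F81F-11D2-BA4B-00A0C93EC93B). A sketch of recognizing it from a raw partition entry; note that GPT stores the first three GUID fields little-endian on disk, which Python’s uuid module handles via `bytes_le` (the helper name is my own):

```python
import uuid

# The GPT type GUID that marks the EFI System Partition.
ESP_GUID = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def is_esp(type_guid_on_disk):
    """Check 16 raw bytes from a GPT partition entry against the ESP type.

    GPT stores the first three GUID fields little-endian, hence bytes_le.
    """
    return uuid.UUID(bytes_le=bytes(type_guid_on_disk)) == ESP_GUID

print(is_esp(ESP_GUID.bytes_le))  # True
```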
“UEFI is written in C, rather than the assembly code used in a traditional BIOS.” I’ve seen BIOS code written mostly in C (with small amounts of assembly, not unlike the Linux kernel for example), and there’s no reason why EFI firmware can’t be written in assembly.
“Another thing is that EFI drivers can be much more powerful.” They aren’t. For example, the video API (and therefore device drivers for video) is a simple “frame buffer” interface, with no support for 2D/3D acceleration, hardware pointers or anything else. Basically it has the same features as the old VESA VBE standard (just with a slightly cleaner interface). Worse, none of it is designed for re-entrancy (it’s all mostly useless for multi-tasking or multi-CPU situations). The funny part here is that (for video, in the short term) most EFI systems will probably end up using the video card ROM’s legacy/real-mode code behind the scenes.
“Then, you don’t need to maintain drivers on the OS, and alternative OSes don’t need drivers at all, other than to use EFI – just keep your graphics card firmware flashed, and the drivers stay up to date.” Completely wrong. Because UEFI drivers aren’t designed for anything other than UEFI (and aren’t suitable for anything else), you’d end up maintaining UEFI drivers in addition to maintaining the drivers for each OS.
“It can boot faster because you can embed the bootloader as an EFI module, so you can skip the MBR. In fact, there is no MBR in the first place.” UEFI parses the GPT partition table, finds a UEFI partition, then loads modules from that partition. It’s actually slightly slower, not faster. The majority of the time consumed by firmware during boot is used for initializing/configuring/testing devices and the chipset (e.g. memory controllers); this work is the same regardless of what type of firmware is being used, so the extra work (e.g. reading lots more sectors from disk) done by UEFI doesn’t add much to the total boot time.
“And they could have slapped a point-and-click GUI on the standard PC BIOS too.” Correct. A PC BIOS can have a point-and-click GUI; but UEFI doesn’t require a point-and-click GUI either, and can be a simple text thing. I’ve got two Intel motherboards here that support both BIOS and UEFI and use the same text-based “BIOS setup screens” that we’re all used to.
“And the problem with GRUB is that it exists in a volatile location where any OS can screw with it.” While there are many problems with GRUB (e.g. the inadequate and/or unstable “multi-boot” specification that even Linux won’t touch), when it comes to security UEFI isn’t any better: there’s no protection for the “EFI partition” where the UEFI modules and OS boot loaders are stored.
“Ah, but UEFI is extendable.” The old BIOS thing is extendable too. Any device can include a ROM to extend the BIOS (which is how video, SCSI and network cards typically work) and you can have a boot manager (or boot virus) that adds or replaces BIOS functions. You don’t see this too much though; the only example I can remember is a utility (based on SYSLINUX if I remember correctly) that extends the BIOS “int 0x13” interface to pretend a file on the network is a floppy drive, to allow an ancient real-mode OS like DOS to boot from the network while it thinks it’s booting from a floppy.
The biggest advantage of UEFI is that the interfaces are “clean” (e.g. not riddled with backward compatibility quirks) and portable (e.g. same on 32-bit 80×86, 64-bit 80×86 and Itanium). It also represents the first step in getting rid of a lot of legacy stuff from the “PC Compatible” architecture, that has remained for hysterical, er, historical reasons.
-Brendan
Ready to bet that, in the process of removing legacy stuff, they’ll first get rid of the useful stuff which Windows doesn’t require anymore (e.g. direct access to the VGA text-mode video memory) before getting rid of the hacks which Windows uses in its booting process (A20 and its cousins)?
+1 (can’t vote as commented)
excellent comment. Thanks for posting.
Heh heh, you couldn’t expect less from one of OSDev’s senior members.
Excellent and informative. I’d vote you up but since I already commented I’m not trusted to have an informed opinion.
I was wondering about that – VESA is actually pretty reasonable, as long as the video card manufacturer implements it properly. I can’t imagine most mainstream video card manufacturers putting any more effort into their EFI driver than they already do for their VESA driver; it’s not likely the EFI environment and bootloader will need a fully featured 3D-accelerated video driver any time soon.
Yes, that’s true, and I’d still prefer the text-based interface myself.
Ouch – what a mess. This reminds me of the ASUS Express Gate software, which absolutely sucks IMO. Why can’t motherboard manufacturers include a chunk of lockable flash memory for this stuff instead?
This sounds really frightening. The way you make it sound, it won’t be possible to install a new hard drive anymore without worrying about setting up a special partition and installing special drivers before I even install the OS? Actually I find this pretty hard to believe, and I am relatively certain that my Mac (which uses EFI) doesn’t have this requirement. So what gives?
Hi,
That would depend how capable the firmware is. For my computers (both with Intel firmware) the firmware includes enough device drivers to access the EFI partition on hard drives, USB flash and CD-ROMs; and also includes the “EFI shell”. This means that a CD or USB flash can be used to boot an EFI OS, and/or can contain enough utilities to create an EFI partition on the hard disk.
I’ve never owned/used/seen OS X. I have heard Apple systems don’t completely comply with the UEFI specification (I vaguely remember something about Apple using HFS+ for the UEFI partition despite the UEFI specification describing a “variation of FAT”, and some differences involving video that I’m having trouble remembering). The UEFI partition might exist (but could be hidden by OS X so that it’s invisible when using OS X), or it could be using the main OS X partition as the UEFI partition (or as an extension to the UEFI partition). Wikipedia (http://en.wikipedia.org/wiki/EFI_System_partition) says that for Apple the EFI partition normally does exist and is normally empty (which makes me wonder exactly where the OS X boot loader lives). Various comments in Apple forums give me the impression that for Apple the default UEFI partition is a 128 MiB partition near the end of the disk (small enough that users wouldn’t notice the missing disk space if OS X hides it), and that if it’s deleted OS X still boots correctly (but firmware updates fail).
As far as I can tell, Apple computers just aren’t UEFI compatible (instead they use Apple firmware that is “based on UEFI”). This also makes me doubt that it’s possible to install OS X on a standard computer that does comply with the UEFI specifications properly (at least not without additional hackery). I also wouldn’t be surprised if this non-compatibility was deliberate (e.g. to force people to buy Apple hardware with Apple pricing, rather than being able to use standard hardware with standard pricing).
-Brendan
What about coreboot? http://www.coreboot.org/Welcome_to_coreboot
Maybe it could be an option for non-cutting-edge mainboards that “can’t have UEFI”.
I’ve been using IBM servers with UEFI for some time. All I got was trouble.
It really sounds cool, but it has to be implemented properly.
BTW, it boots much slower than a regular BIOS. I know it’s IBM crap, but it hasn’t left me with a good impression of UEFI.
Is UEFI the same thing that Intel tried to sell to the Los Alamos National Laboratory, who simulate nuclear weapons on supercomputers, a few years ago? So the pitch went like this: our new proprietary “BIOS” is so extensible and flexible that hardware vendors can extend it with proprietary plug-ins that can inspect all of memory and take control of the hardware with every interrupt. Please, run your top-secret, national security-critical software on our proprietary hardware and allow us to install opaque blobs of proprietary “plug-ins” on the hardware; of course most of the hardware and proprietary plug-ins are made in China, who of course is absolutely not interested in spying on US nuclear weapons simulations. Yeah, right. No wonder LANL went on to develop LinuxBIOS which is now coreboot.
It surprises me that no geek on this forum has yet warned the other geeks about the dangers that UEFI creates. Not only does it require you to have firmware on your hard disk(s), but it can also defeat *any* security or encryption scheme that you might want to use in software. Worse, proprietary plug-ins could enforce DRM, spy on you, and call home without you even noticing (the network and wifi cards might deliberately hide such traffic from you, too!). It could open all sorts of backdoors and update itself while you browse.
UEFI? Just say NO!
Coreboot? Demand it!
—
Ludovic Brenta.
Hi,
When you’ve got a massive number of computers (nodes in a cluster) with no keyboard/video on each computer/node, something like changing one BIOS option on each computer can be a complete disaster. Imagine a team of 20 people running around plugging laptops into a serial port of each computer/node, averaging about 300 computers/nodes per hour, 8 hours per day for almost 3 days straight. Sounds like a very expensive amount of downtime when there’s 6480 “triblades” to change.
For something like that, you want the minimum amount of firmware possible (just enough to get the memory controller and networking setup, with no configuration options and no messing about with disk or USB controllers that don’t exist) and download some sort of executable from the network. This doesn’t describe BIOS or UEFI (or coreboot, to be honest).
This is all mostly FUD – UEFI is no better/worse than the legacy BIOS, and no better/worse than coreboot.
If you want security then you need to acknowledge that a flash chip can be reflashed; and nothing contained within that flash chip (whether it’s BIOS, UEFI or coreboot) is secure.
There are only two ways to get around that: use old-fashioned ROMs that can’t be modified (or updated), or use a TPM.
Of course most people don’t care about security that much. Most people only care about having stable specifications, so people can write software that works with the firmware today and know that it won’t be broken by API changes tomorrow. The API for the BIOS is stable (but is also ugly – relying on a large number of de-facto specifications and “historical practice”). The specification and API for UEFI is very good. The specification for coreboot doesn’t even exist, and the API can change with no notice because one developer thought they had a “good idea”.
In all of these cases (for maintainability, security and compatibility), coreboot fails.
-Brendan