NeoSmart Technologies has published a (fairly colorful and strongly opinionated) article on the new Windows 8 “touch-friendly” boot menu, and how in many ways it has come to resemble a mini-OS more than a traditional boot loader, introducing a completely new boot sequence and possibly even operating in protected mode. It also touches briefly on changes relating to the new “Secure Boot” initiative.
The Windows kernel is unable to kexec any kernel except other versions of the Windows kernel. Unless they change this, there’s no way to chainload other bootloaders, which breaks software such as Ubuntu’s Wubi, at least by default.
I have no idea why they thought this was a good idea. If I want to dual-boot Windows 7, I need to boot the Windows 8 kernel, load the Windows 8 drivers, select the OS in the boot menu, unload the Windows 8 drivers, kexec the Windows 7 kernel, and then load the Windows 7 drivers.
Good thing they didn’t remove the ability to modify the BCD, so more advanced users can still use the ‘old’ boot menu that doesn’t involve loading a chain of kernels.
I don’t think that any sane alternate-OS user really chainloads their OSes using the messy Windows bootloader. The BCD is hell to edit manually, even for hardcore Windows admins.
Thus, losing the ability to chainload anything other than Windows is not a great loss.
Agreed. The BCD is much harder to edit than the older boot.ini.
CapEnt,
“Thus, losing the ability to chainload anything other than Windows is not a great loss.”
I agree with you today, but just be forewarned that *if* Windows 8 requires Secure Boot to run normally, then you may have no choice but to use their bootloader, in which case losing the ability to chainload Linux *would* be a great loss.
But that’s what EasyBCD (the software in the OP) is for: it lets you chainload Linux/BSD/Mac and legacy Windows from the Windows 8 boot menu. It’s really easy to use, too.
This new bootloader looks pretty, for sure: native resolution on LCD screens, touch support, sound, etc., but it’s overkill without any real advantage other than making system recovery more accessible, and it will quite probably create more failure points that can wreck the bootloader itself.
And I thought a few years ago that GRUB was overengineered…
The new boot loader is idiotic in UEFI mode. It sort of makes sense in BIOS mode for IA32 and x64. With Windows 8 on x64 and ARM, most people should use UEFI, since most systems from the past two years have it. For EFI, a boot loader is redundant (thus idiotic), since the firmware already provides one.
They claim they did it for consistency, but by Windows 9 there won’t be any BIOS systems remaining and everything should be UEFI-based.
Why taint a decently designed firmware for the sake of legacy systems?
Now if only Apple would make their Boot Picker compliant with UEFI boot loading. The Boot Picker doesn’t show arbitrarily located bootloaders that are specified in NVRAM.
Unfortunately, Apple is all about the arbitrary …
One they can’t control (and is often messed up)
Decently designed? I can think of many attributes for UEFI, but “decent” or “designed”?
Maybe let them update their firmware to UEFI first. It’s still EFI 1.3
Any arguments for the “is often messed up”? I haven’t seen any EFI implementations that have problems with the boot loader. It just works. Apple is the only one to have a small quirk, which will probably be solved once users start filing bug reports. Right now it doesn’t affect anyone.
Again, arguments? EFI is very well designed. The problem with its single competitor (OpenFirmware), which is also well designed, is that you have to code in Forth instead of C. C is more natural to everyone. The resulting F-Code is always interpreted, so it’s also a bit slower, while EFI binaries always run at native speed. Even the architecture-independent EFI Byte Code is faster than interpreting F-Code.
The firmwares with no design are the BIOS and CoreBoot: no NVRAM access, no boot loader, no standard interfaces except the ones we’ve had since the original IBM PC. There are so many extensions to Int 10h and Int 13h that nobody can use them correctly and universally.
Coreboot does the Platform Initialization part of a firmware excellently, but doesn’t provide a standardized ABI for applications, OS loaders or anything else. If I want to write a pre-OS Tetris for CoreBoot, I need to compile it for a certain version of CoreBoot. In EFI or OpenFirmware, a Tetris written for EFI 1.0 will work on all EFI releases and CPU architectures.
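To make that portability claim concrete, here is a minimal sketch of a standalone EFI application, assuming a gnu-efi-style toolchain (the efi_main entry point and header names are that toolchain’s convention, not something from the thread). Everything it touches is spec-defined and unchanged since EFI 1.0:

```c
#include <efi.h>
#include <efilib.h>

/* gnu-efi entry point: the firmware hands every application the same two
 * things, its image handle and the system table. */
EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    /* ConOut is the spec-defined text output protocol; this call looks
     * identical on every EFI/UEFI revision. */
    SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                      L"Pre-OS Tetris would start here\r\n");

    /* Block on a keypress through Boot Services, then return to the firmware. */
    UINTN index;
    SystemTable->BootServices->WaitForEvent(1, &SystemTable->ConIn->WaitForKey,
                                            &index);
    return EFI_SUCCESS;
}
```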
EFI 1.1 with a few bits of UEFI. I’m pretty sure that they will migrate to UEFI, since Windows 8 will be a serious incentive for them. Until now, nobody needed it except for Mac OS, which works correctly with 1.1. There are almost no differences between EFI 1.1 and UEFI from an OS perspective: it’s QueryCapsuleInfo and QueryVariableInfo, and that’s it. It’s only two functions.
Don’t be so opinionated without knowing what you’re talking about.
Except when trying to multiboot (oops, vendors only tested against a single-Windows install). Or doing anything else that’s in the specs but outside the narrow scope of actual UEFI implementations.
It’s an operating system, stashed into a firmware chip.
Dynamic loaders, nested calls about 10 levels deep.
For what? Pushing a couple of bits.
Except for Chuck Moore, I suppose. Making it “not everyone”.
Also, there is a c-to-fcode compiler, whose code quality is comparable to what I see in UEFI.
They could have had boot times down to sub-second in 2000 (like LinuxBIOS), instead of in 2011. Speed was no concern in the UEFI design.
Trace what tianocore does on boot – it ain’t fast (and the commercial “UEFI”s out there barely comply with the specs).
coreboot has NVRAM and boot loaders (e.g. GRUB2), but no PC BIOS interfaces; those are deferred to SeaBIOS, just as on UEFI they’re deferred to a CSM.
It does provide a stable interface: Load the payload, then get out of the way.
My two-year-old binary still runs on current coreboot (it’s written all lower case, by the way).
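For illustration, a hedged sketch of what such a payload looks like, assuming libpayload’s small libc-like environment (exact names and build setup vary between coreboot/libpayload versions):

```c
#include <libpayload.h>

/* coreboot's contract, as described above: it initializes RAM and the
 * chipset, loads this binary, and gets out of the way. From here on the
 * payload owns the machine. */
int main(void)
{
    printf("Hello from a coreboot payload\n");
    halt();  /* nothing to return to; libpayload provides halt() */
    return 0;
}
```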
Only if compiled to EBC, and if that code doesn’t use native code.
For native binaries, you can’t even run a 32-bit EFI binary on a 64-bit EFI (on the same CPU).
Given that Windows 8 still boots from MBR, why should they bother? It’s not exactly in Apple’s interest to provide a good Windows experience – just good enough (and Microsoft readily takes the blame)
With all the secrecy and binary components in the UEFI ecosystem, I see why they did UEFI the way it is now – but they sacrificed the design for politics.
I guess coreboot will eventually provide UEFI interfaces – using its payload mechanism so it stays slim and fast for those who can live without the politics.
Yes, pushing a couple of bits: setting up and testing the hardware, and booting an OS from any device connected to the system (SATA, IDE, SCSI, SAS, hardware RAID, software RAID, USB, FireWire, Thunderbolt, SD cards, CompactFlash cards), from the network, from iSCSI, from WAN networks (Lion Recovery), from software-encrypted volumes, etc., all in a standardized way. Support for filesystems, block devices, IP networks and display adapters is required simply to boot an OS. Support for checking the integrity of the boot loader is actually normal. The Secure Boot initiative is something we should have had a long time ago, as long as the user can disable it for non-mainstream OS purposes.
When you add all these things up, what you get is something like the UEFI spec. All of these are actually required to boot an OS. We do boot from iSCSI, NFS, RAID, USB and FireWire, as well as plain hard disks, even if you don’t.
UEFI is a spec and Tiano Core is an implementation. Nobody is saying that a UEFI implementation must be based on Tiano Core.
Furthermore, I’ve seen Phoenix BIOSes based on Tiano Core boot in less than 2 seconds.
I’ve also used Tiano Core EDK2 extensively lately and I can tell you that it’s very well designed and structured and that the code is better written than anything I’ve ever seen. Even a novice can understand Tiano Core code, while the Linux code for most devices is usually undocumented and unreadable.
I don’t see an argument here. GRUB2 is required on some systems because of the failures of those firmwares. On EFI and on OpenFirmware a boot loader is simply not required, as the firmware implements one correctly. Furthermore, it’s much easier for Windows, Linux, Solaris and Mac OS to use the firmware for setting the boot priority (see the Startup Disk control panel) than to expect all of them to understand each other’s boot loaders. Windows knows BCD, Linux works well with GRUB, FreeBSD has its own boot loader. Why can’t all of them use the same mechanism? It’s inexcusable, since a standardized one is now provided by the firmware in both UEFI and OpenFirmware.
Yes, and that’s what it’s all about. I want to be able to have full-disk encryption and my choice of filesystem, all on top of iSCSI over 10G Ethernet resting on some powerful hardware RAID. How can CoreBios handle that in a simple way? In UEFI you can simply build a device path that specifies all of that and point Boot0000 at it. In BIOS you can’t, unless you have a network card that masquerades as a SCSI controller via its ROM. And to make it even better, the BIOS sets booting to the network card, and you need to go into the network card’s boot ROM and set it to boot over iSCSI. You are working with at least two ROM setup programs instead of a single and simple NVRAM variable.
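As a hedged illustration of that last point, here is roughly what inspecting the boot priority looks like against the UEFI spec interfaces (a gnu-efi-style build is assumed; error handling is trimmed). The device path for the iSCSI setup above would live in one of the Boot#### variables this array orders:

```c
#include <efi.h>
#include <efilib.h>

EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *ST)
{
    InitializeLib(ImageHandle, ST);  /* gnu-efi library setup for Print() */

    /* BootOrder lives under the spec-defined global-variable GUID and is
     * just an array of UINT16s naming Boot0000, Boot0001, and so on. */
    EFI_GUID global = EFI_GLOBAL_VARIABLE;
    UINT16 order[64];
    UINTN size = sizeof(order);

    EFI_STATUS status = ST->RuntimeServices->GetVariable(
        L"BootOrder", &global, NULL, &size, order);

    if (!EFI_ERROR(status)) {
        /* Each entry points at a Boot#### variable whose payload is a
         * device path (disk, iSCSI target, PXE, ...). Reordering this
         * array and writing it back with SetVariable() is the whole
         * "set the boot priority" operation. */
        UINTN count = size / sizeof(UINT16);
        Print(L"%d boot entries; first is Boot%x\n", (int)count, (int)order[0]);
    }
    return status;
}
```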
And in CoreBoot, would a video card OptionROM written against a 32-bit CoreBoot API run on a 64-bit CoreBoot implementation? Oh, wait. There is no way to have an OptionROM driver for CoreBoot.
EBC is not used by all manufacturers, true, but that doesn’t mean that EFI doesn’t provide you with an option to cover all the system architectures. I wouldn’t consider this a fault of the spec, but one of the system manufacturers. I don’t, however, see any point in compiling an Intel GPU EFI driver for anything but x86_64. It’s not like the CPU-integrated GPU core also works on ARM, and it’s not like NVIDIA or AMD provide ARM-architecture drivers for Windows.
Quite simple: partitioning and AHCI.
1) MBR Partitioning:
a) EFI Protective Partition
b) Mac OS X Partition
c) Mac OS Recovery Partition
d) Windows Partition
e) Windows Recovery Partition?
That’s already 5 partitions to mirror into an MBR that holds a maximum of 4, so hybrid GPT/MBR partitioning can’t keep working. When they started this, a Boot Camp configuration only had 3 partitions, which left 1 free for arbitrary use. Now you need at least 4 partitions, and Windows would rather have a fifth one.
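For reference, a sketch of the on-disk structure behind that arithmetic; the four-slot limit is baked into the MBR layout itself, and no firmware can extend it:

```c
#include <stdint.h>

/* Classic MBR layout, unchanged since the original IBM PC.
 * (Packed GCC-style for illustration; sizeof(struct mbr) == 512.) */
struct mbr_partition_entry {
    uint8_t  status;        /* 0x80 = bootable */
    uint8_t  chs_first[3];
    uint8_t  type;          /* e.g. 0xEE = GPT protective */
    uint8_t  chs_last[3];
    uint32_t lba_first;
    uint32_t lba_count;
} __attribute__((packed));

struct mbr {
    uint8_t  bootstrap[446];
    struct mbr_partition_entry partitions[4];  /* the hard limit */
    uint16_t signature;     /* 0xAA55 */
} __attribute__((packed));

/* Protective + OS X + OS X Recovery + Windows + Windows Recovery = 5
 * entries, which simply cannot all be mirrored into these four slots. */
```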
Plus, it’s easier for them to migrate to UEFI than to keep maintaining the CSM in the firmware.
2) AHCI. It’s quite simple here: SSD TRIM and the other benefits that come from not using IDE emulation, which will disappear from the silicon of future Intel chipsets.
EFI is a standard, you can implement it on top of anything you want, including CoreBoot. EFI has a platform initialization part that can be anything you want.
The advantage of how EFI is being used right now is that 99% of the firmware in even the cheapest board does not come from the vendors. The PI part comes from the chipset manufacturer (Intel or AMD mostly), so it’s of reasonable quality and the rest of the EFI environment comes from Tiano Core, where everyone submits the bug fixes and the optimizations. Thus, everyone gets the benefit of the code and no duplicated work is done. The board manufacturers just redesign the setup GUI and add support for their particular customizations of the board.
My personal problem with CoreBoot is that it doesn’t solve any of the fundamental problems firmwares have. Let’s face it: hardware manufacturers don’t want to do OSS drivers. So you need a mechanism to build a firmware using hardware drivers in binary form straight from the vendor. You also need a single way of booting anything from anything.
Want to complain about EFI? Here’s my main complaint:
EFI Services are all boot-only.
Think of UGA or GOP. I think that they should be designed to be unloadable or usable by the OS. The OS should be able to use them if it doesn’t have a driver, or unload them if it does. This would obviously make the drivers a lot harder to write (think virtual-addressing-mode conversion of all pointers), but not impossible, as shown by the normal Runtime Services.
As it currently stands, all drivers are lost once you call ExitBootServices, and almost nothing remains once you call SetVirtualAddressMap.
OpenFirmware is actually better than any other firmware in this aspect.
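To spell the complaint out, here is roughly what the hand-off looks like from an OS loader’s side, using the UEFI spec interfaces (a hedged sketch: error handling, the memory-map retry dance, and the VirtualStart fixups a real loader must perform are all trimmed):

```c
#include <efi.h>
#include <efilib.h>

/* Called by an OS loader once it has loaded its kernel image. */
static EFI_STATUS hand_off_to_kernel(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *ST)
{
    UINTN map_size = 0, map_key, desc_size;
    UINT32 desc_version;
    EFI_MEMORY_DESCRIPTOR *map = NULL;

    /* First call reports the required buffer size; allocate a little
     * extra, since the allocation itself alters the map. */
    ST->BootServices->GetMemoryMap(&map_size, map, &map_key,
                                   &desc_size, &desc_version);
    map_size += 2 * desc_size;
    ST->BootServices->AllocatePool(EfiLoaderData, map_size, (VOID **)&map);
    ST->BootServices->GetMemoryMap(&map_size, map, &map_key,
                                   &desc_size, &desc_version);

    /* After this call, every boot-time driver (ConOut, GOP, disk,
     * network) is gone for good. */
    ST->BootServices->ExitBootServices(ImageHandle, map_key);

    /* A real loader fills in each descriptor's VirtualStart first; after
     * this, only the remapped Runtime Services survive. */
    return ST->RuntimeServices->SetVirtualAddressMap(map_size, desc_size,
                                                     desc_version, map);
}
```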
1. PC Power On
2. BIOS POST
3. Bootloader
4. Selection Menu
5. Reboot
6. Bootloader
7. Skip Selection Menu
8. Boot into Selected OS
This is obviously a terrible design with completely unnecessary failure modes. I disagree with the author’s assessment that follows:
“It’s a subtle change as the boot menu is not shown the second time around, but the PC actually reboots after making the selection. We’re not clear on why Microsoft is doing this, but if I’d had to hazard a really wild guess, I’d say it’s to clean up the environment that’s been altered/modified/corrupted by the new boot menu.”
I’m almost positive he’s wrong about this being the reason for the reboot. Bootloaders and kexec can already solve the supposed “corruption” problem using far more obvious approaches; I’ve done it myself. It’s not like we’re in the cowboy years, when the 286 processor required a physical reboot to exit protected mode.
I suspect it has more to do with Secure Boot failure modes inside the mini-OS. Here is my own hypothesis: in an effort to keep the mini-OS simple and more reliable, it may not use the same driver/component validation used by the main kernel. So the mini-OS will still be able to recover a system even if the RAID/network/video/etc. drivers cannot be verified by the motherboard’s Secure Boot feature. Otherwise, just imagine how awkward it would be if Microsoft’s recovery tools failed to run on certain hardware configurations due to Secure Boot restrictions (which was one of my earlier stated concerns).
So instead of risking this scenario, the mini-OS probably ignores Secure Boot policy and loads whatever drivers are necessary to access the hardware and configure/update the OS. This obviously creates a gaping hole with respect to Secure Boot if it were to continue and boot the main Windows kernel, so instead it reboots the system so that Secure Boot can function normally the second time around.
The ridiculous bootup sequence makes sense in the context of my hypothesis.
Does anyone have a better hypothesis?
As far as I’m concerned, it’s yet another example of Microsoft overcomplicating what should be something very simple and quick, especially as far as the end user is concerned. I’m sure they have their reasons, both technical and not so technical, for doing it, but I’m also sure it’s a lot of the unneeded fluff that Microsoft is famous for. The Vista/7 bootloader was hell to edit manually using Microsoft’s own tools, and it was of very little use for booting alternative operating systems. Don’t get me wrong, I understand their desire to bias their booting strategy towards Windows, but going out of their way to do so and simply adding complication to something as inherently simple as a bootloader is an unnecessary waste of time and resources.
Unless I’m way off base here, it would appear that the Win8 bootloader is essentially a stripped-down Windows core, similar if not near-identical to WinPE (technically speaking). I’m also guessing this is designed primarily for use on tablet devices rather than normal desktop/laptop machines. In that case, I can understand it to a degree; they want to provide a consistent, touchable interface to the user… but is it really necessary to jump through the hoops they seem to be doing for that? I’m hoping this new system will be exclusively for tablet devices rather than forced upon desktop users too.