SweClockers has just posted what might be the first video of the anticipated BIOS replacement, UEFI. Although UEFI promises a lot of advantages over and above the BIOS, the most significant one for general end users will be a significant decrease in boot speed (SweClockers claims a boot speed of 1.37 seconds on its test configuration). It will also bring the following advantages: the ability to boot from large disks (over 2TB), a CPU-independent architecture, CPU-independent drivers, a flexible pre-OS environment including networking support, a modular design, a modern graphical interface, a fully fledged software environment, support for 32/64-bit memory addressing, and advanced security including encryption.
I read some forum posts where there is some degree of BIOS nostalgia, mostly questioning the need for a "better looking" and mouse-driven user interface, but it's clear that UEFI's advantages offer much more than just better looks and ease of use. Resistance is also futile, since some reports state that UEFI will be the preferred platform of the next release of Windows. We won't have to wait that long, though, since UEFI will make its debut in the mainstream consumer market next year.
1.37 seconds – it's already difficult enough to press Del fast enough to get in and configure the damn thing.
Hi,
Heh – you turn the computer on, the LCD monitor starts trying to figure out which video mode it is, and by the time you can see anything you’re too late.
– Brendan
Strikes me that an easy way around that would be to go into setup mode if a certain key is held down while booting the machine, kind of like what Macs do to choose the boot drive (you hold the Option key at power-up and you can choose from which device to boot the machine). Maybe implement having Delete held instead of having to be pressed at a specific time, and make whichever key is chosen standard so that we don't have to use Del on some machines, F2 on others, etc.
How about if you press "any key", then the boot is delayed by several seconds to show the start-up menu options?
But just “Where is the any key?” (Homer)
Or maybe just have one key to get into the setup screen, the same one for all manufacturers?
Why not "any key"? Then we would not have to remember which was the one key that you needed to press. It is not as if anybody should be pressing the keyboard at start-up anyway. And if someone did accidentally press a key at start-up, it would then just take slightly longer to boot while the special start-up menu keys were displayed.
It will never happen. God forbid the various PC manufacturers ever come together on such an insignificant (to them) matter. Never mind that it would be a huge boon to users of all computers.
One of the things that I’ve always liked about Macs, even with my current contempt for Apple, is the way most things just make sense. If a way of doing something works on one Mac, it works on them all.
I guess I shouldn’t complain though; such disparity serves to frustrate and annoy the average users, which keeps guys like me working.
One solution would be to just do it coreboot style — pre-computed and tested configurations, so there will be no need for much runtime configuration.
(Some kind of runtime configuration to coreboot would be really nice! Hint Hint!)
Or else, as darknexus said, some kind of standard keypress before boot would do just fine.
Oh bugger these manufacturers! Why can’t they just understand that proprietary drivers make no sense at all in the physical chip space they are working in? Once they get that point, they would be more willing to accommodate coreboot, and we would all be better off. It is not like drivers alone make them any money…
And when will coreboot finally think of crawling out of cluster space? I am waiting to get a purpose-built coreboot machine for my daily needs here!
Once coreboot + SSD + Wayland + NX rolls around next century, terminal services will finally make some useful sense. Instant-on and usable terminal systems. (Being tortured here with Windows Terminal Services, better known as What’s This Sh**.)
That does absolutely nothing for us overclockers. We take joy in getting the absolute most we can for our money and love having the option to fine-tune every little detail of frequency, ratio and voltage to eke out every last drop.
Otherwise, yeah, I could see where some would be lazy and have no problem with burning $1.2k on a CPU instead of $200 for a CPU and $50 on cooling to get there.
What makes you so sure that we cannot implement overclocking in whatever setup that you are referring to?
Assuming you mean coreboot + fixed configurations, it can still be done. Remember how X would allow multiple declarations and, with an obscure key combination, you could step through them?
Also, as I had said, I would very much enjoy runtime configuration for coreboot too. Adopt menuconfig again, anyone?
In coreboot, values that are exposed as configurable can be configured in the OS, as our nvram/CMOS config is self-documenting.
No need to walk through some weird setup utility that doesn't even know your Dvorak keyboard layout 😉
Then the 0.3-seconds-to-Linux-kernel we achieve with coreboot on selected boards (boot time depends a lot on how complex the hardware is to initialize) is totally off-limits? 😉
sounds like a disadvantage to me.
But the self-fulfilling prophecy “no one will want to use Forth” resulted in coreboot, which has gone virtually nowhere in real-world usefulness, and allowed this DRM-infested sub-OS to find its way into commodity hardware.
Great. Now I get to try to find an UltraSparc machine that’s fast enough.
I tried. Seriously, I tried. Even built a significant part of the OpenBIOS code.
Since I’ve moved on, most new components are in fact written in C, not Forth, so that’s not “self-fulfilling”, it’s an unfortunate fact.
And even with OpenBIOS, you still need someone to initialize RAM and other hardware. OpenFirmware says relatively little about that – if you want, you can run OpenBIOS (or the other OpenFirmware implementations) as payload on coreboot: coreboot gets the hardware up and running, OpenFirmware provides the interface to the user and the OS.
No one does.
That's the SmartFirmware-based version, which gets more press _because_ it's written in C, yet when it came to actual use, they used Open Firmware for the OLPC, which is all Forth IIRC. It would be rather silly to start with OpenBoot and add C…
I couldn’t find any C in the firmware code for OpenBoot.
I know Open Firmware can bring up the OLPC, Linux, and emulate enough BIOS to boot Windows XP. I’m willing to bet that BSDen and Solaris would boot on it as well, just based on my recent experience with OpenBoot.
Looking at what OpenBoot is (and downstream/forks are) capable of, it would have been called an OS in the 1980s.
Mouse-driven menus? A naive implementation, with bitmap graphics, exists.
I don't think there's any way to get this working with existing motherboards, but I wish it were a touch more obviously advanced, so someone would make a mobo that uses it/a fork.
You are aware that OpenBoot is not OpenBIOS.
And I know for a fact that OpenBIOS and coreboot are related (which was the link the original comment referred to) because I’m 50% of the original OpenBIOS core team that moved over to coreboot.
And OpenBIOS has a Forth kernel written in C, and was then built in Forth on top of that. Newer extensions are all C.
OpenBoot, SmartFirmware and SLOF were open-sourced years after OpenBIOS was released – which is probably the only reason why some qemu targets still use OpenBIOS.
As for "OpenBoot can bring up Windows": sure it can – by totally hiding itself and providing the legacy PC BIOS interfaces. But that's not the point of OpenFirmware.
Suppose I oversimplified and added unnecessary venom to my history…
I just wish things had gone down differently, regardless of the cause of those events…
OFW is cool, and I wish it was used more…
GRUB2 can be an EFI module (I use it on my Mac) and it is many times faster than loading the BIOS emulation, but slower than a quick-boot-enabled BIOS. But having a modular design here is an advantage, as UEFI could even provide some form of 2D framebuffer acceleration and device drivers directly from the firmware. OS-independent, on-chip drivers can be cool.
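For what it's worth, a firmware-provided framebuffer is roughly what UEFI's Graphics Output Protocol already exposes. A minimal sketch, assuming the gnu-efi headers and toolchain (the protocol is real; the program itself is just an illustration, not any shipping firmware's code):

```c
#include <efi.h>
#include <efilib.h>

/* Query the firmware-provided framebuffer via the Graphics Output Protocol --
 * the kind of OS-independent, on-chip "driver" described above. */
EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    EFI_GUID gop_guid = EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID;
    EFI_GRAPHICS_OUTPUT_PROTOCOL *gop;
    EFI_STATUS status;

    InitializeLib(ImageHandle, SystemTable);

    /* Ask Boot Services for the (first) graphics output instance. */
    status = uefi_call_wrapper(BS->LocateProtocol, 3,
                               &gop_guid, NULL, (void **)&gop);
    if (EFI_ERROR(status))
        return status;

    Print(L"Framebuffer at 0x%lx, %dx%d\n",
          gop->Mode->FrameBufferBase,
          gop->Mode->Info->HorizontalResolution,
          gop->Mode->Info->VerticalResolution);
    return EFI_SUCCESS;
}
```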
Unfortunately, the NVIDIA driver does not support EFI at all on Linux.
I have a UEFI server from IBM here. It certainly doesn’t boot very fast (several minutes to OS loader), but it is kinda nice that you can configure the various cards from within the BIOS and not have to use the various ‘Press F8 to start Qlogic setup’ and the like.
That being said, I can't get the machine to boot with both SAN adapters attached. It renumbers the disks and effing Windows gets confused. Too bad I couldn't convince anyone to use RHEL for this machine…
A decrease in boot speed or a decrease in boot time?
If you go by IBM servers then decrease in speed would be correct; they take a lifetime to boot. I've made the joke that my son will inherit the job of performing the OS install once the server boots. UEFI is a PITA from my limited experience and gains nothing.
My V880 takes about 4-10 minutes to get to the firmware prompt, depending on if diagnostic boot is set.
It’s intense.
lol, you don't want to test ServeRAID configuration then (I've got the 5150 on all 3 servers). The damn thing runs its configuration in some kind of web browser inside UEFI. Click response can take up to 5 seconds and redraw is somewhere in the range of the first VGA cards. Really… damn… slow…
All in all, I love my X series and don't mind UEFI's long booting on servers. Configuring cards inside the UEFI setup instead of waiting to press the right key combination during boot is a kinda nice feature.
They could definitely use some kind of caching and some kind of accelerated graphical interface.
Edit: just watched the video, and the graphics speed is way better there than on IBM's UEFI.
Not a bad idea, but hopefully they don’t remove the ability to use the keyboard as well. Even better, since UEFI is modular, maybe we’ll get an Openfirmware-like command line for us geeks who want it.
I agree with you so much.
They had better not forbid custom UIs and whatnot, and whoever _enforces_ the use of a mouse for _configuration_ needs serious medical attention.
A prettier interface and the ability to use a mouse instead of a keyboard would be great, though. This would mean touch-screen terminals could be debugged without lugging along keyboards.
I have a UEFI motherboard, an Intel DG41TY. It boots very quickly. I think it saves some time by not doing the typical BIOS startup screen where it scans your RAM/IDE/etc.
What mechanisms do they have in place to recover from UEFI crashes? Presumably the code is more complex than that found in a BIOS.
I thought EFI was the future BIOS replacement, but no one except Apple used it. What is the difference between the two? Why another BIOS replacement? Why will this work, when so many others have failed?
When EFI was opened up for OEMs to participate in the development process, it was rebranded UEFI. Apple has kept with the old EFI standard – I'm unsure how much closer the latest releases of their hardware are to the UEFI specification. Having had a look at my iMac and MacBook, both of them are using EFI64, so it appears that Apple is in no hurry to make their EFI implementation 100% UEFI compliant.
Hi,
The BIOS has many limitations (mostly due to historical reasons):
1) The space reserved for option ROMs is severely undersized, but option ROMs are the only real choice for extending the BIOS. For example, a SCSI card needs a ROM to replace the BIOS disk services so you can boot from SCSI; and in the same way network cards require a ROM to allow you to boot from the network. Video cards need a ROM so video works during boot. Because the space is limited, you're screwed if you want to have 3 SCSI controllers and 2 video cards that all work during boot. It's also almost impossible to upgrade a device's ROM. EFI/UEFI solves these problems by allowing drivers to be loaded into RAM from anywhere (instead of relying on device ROMs alone).
2) The interfaces used by the BIOS are seriously inadequate. For example, even if there was lots of space reserved for option ROMs, if you tried to get 2 SCSI cards working during boot there'd be conflicts (as both SCSI ROMs try to take control of disk services). There are also problems with other interfaces. Remember the old "528 MB/504 MiB" disk size limit? The BIOS developers worked around that with clever hacks, creating an 8160 MiB limit. The 8160 MiB limit got worked around with an alternative set of disk functions (and system software needed to be modified to use the alternative disk functions). Of course these alternative disk functions typically don't work for some devices (e.g. floppy, including USB and CD-ROM emulating floppy), so programmers still need to handle both the old disk services and the new ones. EFI/UEFI solves all these problems (completely different interfaces).
3) The interfaces used by the BIOS are mostly intended for assembly language, and can be painful for the majority of people who prefer using high-level languages. EFI/UEFI solves that (interfaces use C calling conventions), which also makes it possible for the same code to be used on different architectures (e.g. 32-bit 80x86, 64-bit 80x86 and Itanium) with very few changes (see the sketch after this list).
4) In most cases a boot loader is (initially) limited to 512 bytes. 512 bytes is too small to work around the differences between different devices, so you end up needing different boot loaders to boot from different devices (e.g. one boot loader for hard disks, one boot loader for "El Torito" CD-ROM, one for PXE/network boot). Also, the boot loader starts in real mode (modern operating systems don't use real mode), and after leaving real mode the BIOS functions can't be used. An OS can use ugly hacks (real mode emulation) to get around that, but they're ugly. EFI/UEFI solves all this (the same boot loader can be used to boot from any device, the "initially 512 bytes" part is gone, and the boot loader can access EFI/UEFI services without real mode).
5) The space for the BIOS itself is not enough. The ROM itself can be large (where larger means more expensive), but the run-time part of it is limited to 128 KiB at most. BIOS manufacturers can’t add features because there’s no “run-time space” left, and they can’t increase the amount of “run-time space”. EFI/UEFI solves this – the ROM only needs to contain enough code to load more from elsewhere, and doesn’t need to contain “everything”; and EFI/UEFI allows the firmware to use as much RAM as it likes (and allows the OS to reclaim that RAM afterwards, so it’s not wasted).
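To make points 3 and 4 concrete, here is a minimal sketch of a UEFI boot-time program, assuming the gnu-efi headers and toolchain (illustrative only). Everything is ordinary C called through the firmware's system table; there is no 512-byte boot sector and no real mode involved:

```c
#include <efi.h>
#include <efilib.h>

/* The firmware loads this application from the EFI System Partition and calls
 * efi_main with plain C calling conventions. */
EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    InitializeLib(ImageHandle, SystemTable);   /* set up the gnu-efi helpers */

    /* Console output goes through a firmware protocol, not INT 10h. */
    Print(L"Hello from pre-OS C code.\n");

    /* A real boot loader would now use Boot Services (SystemTable->BootServices)
     * to read files and allocate memory, then call ExitBootServices() and jump
     * to the kernel. */
    return EFI_SUCCESS;
}
```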
For 80x86 (and Itanium), nobody else has made a serious attempt at establishing a new standard. The closest I can think of is LinuxBIOS/coreboot, but they've only created an implementation and haven't attempted to create a new standard – they rely on payloads to support "the de-facto PC BIOS standard" or the "multiboot" standard or the "whatever Linux expects" pseudo-standard; and the interface payloads use isn't standardised and may change between releases. There have also been other smaller projects, but none of them have any real market influence (and lots of market influence is required if you want to convince people like motherboard manufacturers, Apple and Microsoft to adopt a new standard).
While I’m here; I’d also like to point out that boot times depend on what work needs to be done during boot and how that work is done (e.g. in parallel, sequentially, with all CPUs, with only one CPU, etc). Most of this work is initialising and testing hardware devices; and the type of firmware being used makes very little difference (as all types of firmware need to initialise and test the same hardware devices). Switching to EFI/UEFI does not (in itself) imply faster boot times – you can have fast BIOS code or slow BIOS code, or fast UEFI code or slow UEFI code. You can also skip a lot of the testing and have fast/dodgy firmware instead of slower firmware that detects (and maybe even works around) hardware faults and supports extra features (like remote management).
People complaining about the speed of UEFI on server-class hardware could probably improve boot times by downgrading to less reliable desktop hardware with “let’s skip most tests and laugh while the company’s database gets trashed” firmware… 😉
– Brendan
"SweClockers has just posted what might be the first video of the anticipated BIOS replacement, UEFI."
http://www.youtube.com
search ‘UEFI’
oldest result I can find on page 1 at a quick glance – one year old:
http://www.youtube.com/watch?v=rMFRP3qNFsQ
As that video says, Phoenix Instant Boot – which has been around for over a year – is a UEFI implementation.
Here is a slightly more informative video showing an unmodified Windows 7 install booting on UEFI, however it does not say whether the laptop is using an SSD or not:
http://www.youtube.com/watch?v=DLwaKb6pLrc
On my current PC, by far the most time spent is getting to the boot loader; from there Ubuntu 10.10 boots in around 5 to 10 seconds (it is currently installed on an SSD).
And what is the real difference, other than lots of shiny of course, between this and just having a normal BIOS skip the hardware checks? Because at that speed it is doing exactly squat in the hardware-checking dept, and frankly I can just go into my ECS Business Class board and turn on "speedboot" or whatever it is called, and the boot will be so fast that good luck catching the BIOS screen if your hand isn't on the button.
So other than lifting the above 2TB limit, what good is it? We all know that the more complex the code, the more likely the malware and crashes, and all that pretty has to be sucking up some serious code. And does this mean Windows 8 will have NO option for booting on BIOS? If so, Ballmer needs a good firing right now if he thinks all these folks with nice multicores and oodles of RAM are gonna toss our PCs just to give him a bigger payday.
I know that while I like Windows 7, it will be the LAST MSFT OS on my quad if it pulls that kind of stupidity, not to mention the EU might want to have a word about the Wintel monopoly, seeing as how this would be tying the OS to an arbitrary "feature" instead of simple speed and RAM requirements. To me it looks more like a way to try to force new hardware on us in a dead economy than something we really need. I mean, how many OEMs are even shipping PCs with 2TB+ drives on them?
What I’m really interested in is whether UEFI allows for remote (over network or serial line) administration. I have an old Sun Ultra 10 that can be tweaked over serial line – it’s awesome.
Anything with OpenBoot should be able to be tweaked over the net or over a serial line; in Solaris you can eval/edit Forth code while the OS is running, which I wish you could do under a free *nix, but NooOOOOoOOOoooo.
It’s really, really cool.
Considering the specs of an Ultra 10, you might get more hacktastic fun factor by just turning it on with no OS and learning Forth.
Dude, I wrote a Forth interpreter some 15 years ago. That’s not what I’m interested in. I just need to be able to tweak every tunable of my system over serial line, or better, over the network. I don’t think Sun’s OpenBoot can do the latter. But serial line will suffice for the moment.
For my server machines I set up the Linux installation to use the serial line for the boot loader, the kernel boot messages and login console. So I can do quite a lot of stuff over serial line. But I cannot change BIOS parameters. OpenBoot lets me do that and that’s great. I always appreciate extra flexibility.
I have some vague recollection about some HP (or was it IBM) x86-based server that lets you take control over the BIOS setup via VNC (or was it RDP) protocol. Maybe that kind of stuff can be done with UEFI. That’d be cool!
The part that worries me is that it's proprietary 'untrusted' code running under everything…
If you wrote a Forth, then there’s nothing to learn there… I know OpenBoot has networking support, so I’m sure it wouldn’t be too hard to get into it remotely, though it might take some hacking…
Me too.
What I would also like to see (if UEFI drivers are possible) are real standards for cross-platform/cross-OS interfaces provided by UEFI. For example, say I have 2 gfx cards, 2 sound cards, or 2 network cards created by manufacturers X and Y. I would like EFI to provide 2 instances of the same device that can be queried by the OS for capabilities. In other words, no third-party drivers, but OS-developed drivers.
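UEFI's driver model already works a bit like this: every device that exposes a given interface (a "protocol") gets its own handle, and a pre-boot application or OS loader can enumerate and query all instances the same way regardless of manufacturer. A rough sketch, again assuming gnu-efi, listing every handle that carries the Block I/O protocol (purely illustrative):

```c
#include <efi.h>
#include <efilib.h>

/* Enumerate all firmware handles exposing the Block I/O protocol and print
 * each device's basic capabilities -- same code no matter who made the card. */
EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    EFI_GUID blkio_guid = BLOCK_IO_PROTOCOL;  /* gnu-efi's Block I/O GUID macro */
    EFI_HANDLE *handles = NULL;
    UINTN count = 0, i;
    EFI_STATUS status;

    InitializeLib(ImageHandle, SystemTable);

    /* Ask Boot Services for every handle that supports this protocol. */
    status = uefi_call_wrapper(BS->LocateHandleBuffer, 5, ByProtocol,
                               &blkio_guid, NULL, &count, &handles);
    if (EFI_ERROR(status))
        return status;

    for (i = 0; i < count; i++) {
        EFI_BLOCK_IO *blkio;
        status = uefi_call_wrapper(BS->HandleProtocol, 3, handles[i],
                                   &blkio_guid, (void **)&blkio);
        if (EFI_ERROR(status))
            continue;
        Print(L"Block device %d: %ld blocks, removable: %d\n", (UINT32)i,
              blkio->Media->LastBlock + 1, (UINT32)blkio->Media->RemovableMedia);
    }

    FreePool(handles);
    return EFI_SUCCESS;
}
```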
In fact, Asus and Gigabyte (and maybe others) released the first boards with EFI 2 years ago.
Graphical, mouse driven BIOS, great. So it took only nearly 30 years after Amiga kind of had it? 🙂