A couple nights ago I was looking over the UEFI spec, and I realized it shouldn’t be too hard to write UEFI applications in Rust. It turns out, you can, and here I will tell you how.
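To give a flavour of the result, here is a minimal sketch of a freestanding UEFI “hello world” in Rust. This is an illustration under stated assumptions rather than the article’s actual code: it assumes a no_std build for the x86_64-unknown-uefi target, and the two structs are heavily abbreviated versions of EFI_SYSTEM_TABLE and EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL from the UEFI spec.

```rust
// A minimal sketch, not the article's actual code: a freestanding UEFI
// "hello world" in Rust, assuming a no_std build for the
// x86_64-unknown-uefi target. Struct layouts are abbreviated from the spec.
#![no_std]
#![no_main]

use core::ffi::c_void;
use core::panic::PanicInfo;

type Handle = *mut c_void;
type Status = usize; // EFI_STATUS is a UINTN

#[repr(C)]
struct SimpleTextOutput {
    _reset: *const c_void,
    // OutputString(This, String): String is a null-terminated UCS-2 string.
    output_string: unsafe extern "efiapi" fn(*mut SimpleTextOutput, *const u16) -> Status,
    // ...remaining function pointers omitted...
}

#[repr(C)]
struct SystemTable {
    _hdr: [u8; 24], // EFI_TABLE_HEADER
    _firmware_vendor: *const u16,
    _firmware_revision: u32,
    _console_in_handle: Handle,
    _con_in: *mut c_void,
    _console_out_handle: Handle,
    con_out: *mut SimpleTextOutput,
    // ...remaining fields omitted...
}

// The x86_64-unknown-uefi target expects the entry point to be `efi_main`.
#[no_mangle]
extern "efiapi" fn efi_main(_image: Handle, st: *mut SystemTable) -> Status {
    // "Hi!\r\n" as a null-terminated UCS-2 string: one u16 per character.
    let msg: [u16; 6] = [0x48, 0x69, 0x21, 0x0D, 0x0A, 0];
    unsafe {
        let out = (*st).con_out;
        ((*out).output_string)(out, msg.as_ptr());
    }
    0 // EFI_SUCCESS
}

#[panic_handler]
fn panic(_: &PanicInfo) -> ! {
    loop {}
}
```

Built with `cargo build --target x86_64-unknown-uefi`, the resulting .efi binary can be copied onto a FAT-formatted EFI system partition and launched from the UEFI shell. In practice a wrapper crate such as uefi hides the raw function-pointer tables; they are spelled out here only to make the ABI visible.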
Language gets me giddy, but thank god lots of other people get giddy over stuff like this.
You could write anything. But some of the first third-party software for UEFI was malware, like viruses.
It isn’t that you can write anything to boot; that part is fine. It’s the fact that it adds complexity and leads to more failures, at least from what I’ve seen.
Being a repairman and system builder, I tend to see more than your average Joe, and what I’ve found is that screw-ups that would simply require a CMOS reset on a BIOS board will leave a cooked board with UEFI. For example, power here can often get iffy, with line sags and brownouts, and while the BIOS boards will often handle it fine, or at worst need a reset, the UEFI boards? Tend to cook.
At first I just thought, “Well, it’s the new tech: DDR3, more cores, thinner traces, blah blah blah.” But then I happened to get hold of some Biostar and ASRock boards from the transition, where pretty much the ONLY difference between board A and board B was that board A was BIOS and board B was UEFI, and guess what? The BIOS boards are still going strong; the last of the UEFI systems will be brought back the day after tomorrow, and from the sounds of it, it’ll be joining the others in the dumpster.
Now as to the “why”, I don’t know. Maybe there is a problem with the UEFI chips, maybe they just have insanely low tolerances; hell if I know. All I know is that when I run out of BIOS boards, I’m gonna end up having to buy spares of every damn board I buy, because the UEFI ones just don’t seem to work as well. At least, that is what I’ve seen; YMMV.
If you look at the technical side of things, UEFI is light-years better. BIOS is kind of held together by baling wire, gum, and electrical tape, but it’s old as dirt, so everyone knows how to deal with it. UEFI is just starting to get seriously used, and has a low barrier to entry (as evidenced by this article), so it’s going to take a while for people to figure out how to do it right and which problems to avoid. I only know a little of the software side; I’m not sure how complex it is hardware-wise, but it is different. I have not otherwise heard of hardware problems with UEFI boards, mostly just stupid implementations, like the Samsung bug that let you brick the board by writing a variable to UEFI memory.
Well, not really better.
More modern.
It would be better if the UEFI implementations didn’t suck so bad: http://www.youtube.com/watch?v=V2aq5M3Q76U
Most of them are based on old versions of the same buggy implementation.
Kind of how a MacBook Pro isn’t “better” than an Amiga 500, just more “modern”? I guess so.
Bill Shooter of Bul,
Kind of how Windows 8 isn’t “better” than an Amiga 500, just more “modern”.
See what I did there?
Hi,
There should currently be no hardware differences between “BIOS motherboard” and “UEFI motherboard”. The only difference is what data the manufacturer felt like storing in the flash ROM.
My guess is that you’re not a large enough statistical sample and were just unlucky. I flipped a coin 10 times and it came up “heads” 7 times; therefore there must be a secret government plot to make one side of the coin heavier and the other lighter to influence the results. 😉
– Brendan
In this case, there is a logical explanation other than just random chance: the added UEFI complexity.
See, motherboard manufacturers need to maintain large product portfolios with fast release cycles. Overburdened engineering teams are pretty much the norm in this industry.
The BIOS is a far simpler kind of software; adapting it to new products is far more trivial than with UEFI. And even with BIOS, there are bugs, lots of them.
Most manufacturers took nearly a decade just to deliver a decent ACPI implementation; let’s not even speak about slightly more complex stuff, like virtualization support, hot-swappable storage, USB mass-storage boot, RAID, decent PXE support… and the list goes on.
Now the manufacturers need to mobilize their already spread-thin engineering teams to adapt UEFI code, which is, in effect, almost an operating system by itself. The final result is obvious: far more bugs, in particular for cheaper low-end motherboards, which get lower development budgets.
UEFI is one of those things that looks gorgeous on paper, but in practice it is an invitation to disaster.
I’m pretty sure that most implementations out there are so screwed up that every single one would be very easy to exploit if you can get around the operating system’s protections. I can even point to an easy entry point: those applications that fully manage the motherboard from the OS.
Well, ‘enabling’ the masses to code for UEFI gives me the creeps. The least you can do is force them to code as low-level as possible; then at least you have a higher barrier to entry, and maybe not all the code jockeys will think they can create safe and trustworthy UEFI programs. But that ship has seemingly sailed already.
This is the kind of reasoning that made some early computer scientists hate Grace Hopper’s idea of a compiler: it enabled us peons to work computers. They too thought all computer programming should remain low-level switch flipping, to discourage people from thinking they could do it too.
Your comment reeks of elitism.
That is what UEFI sounds like to me: a way to introduce “Secure Boot”, which just leads to more secure DRM and more closed systems.
Will the PC be a walled garden like iOS?
I hope not.
Thanks, Microsoft, you pushed for that, now we’ll have another 20+ years of legacy UTF-16 strings
It’s actually worse, because it’s not even UTF. It’s just stupid UCS-2 limited to 16 bits. So it’s a mistake on top of a mistake.
The UEFI code signing structures are also MS-derived, to the point where the structure members start with ‘win’.
There are two different 16-bit Unicode encodings:
UCS-2 uses two bytes per character, and can thus encode only the first 65,536 codepoints (the Basic Multilingual Plane, BMP, also known as plane 0).
UTF-16 also uses two bytes per character, but adds surrogate pairs (two 16-bit units) that can encode codepoints beyond the BMP, up to U+10FFFF.
UEFI uses UCS-2. On the positive side, it’s easy to work with (strings take a fixed amount of space per character, and it’s trivial to cut/paste at positions that leave valid strings), and the lower 64K of Unicode still covers all modern languages. It also has decent language support; even C has wchar_t (a 16- or 32-bit single-character type) and supporting functions.
On the negative side, it can’t encode the higher planes of Unicode (which would allow, e.g., using Unicode symbols instead of bitmaps; the emoji are encoded in plane 1), and it’s considered obsolete, superseded by UTF-16. There are also some extensions to the CJK range; I honestly don’t know whether those are in daily use or are historical variants for scholarly purposes.
Everything considered, UCS-2 isn’t a horrible choice for something like this, since it’s simple to work with while still covering all the characters needed to present text in currently used languages (modulo the CJK extensions, but I suspect those aren’t required for decent everyday Chinese/Japanese/Korean).
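To make the difference concrete, here’s a tiny standalone Rust snippet (my own illustration, not something from the thread): char’s encode_utf16 shows how many 16-bit units each codepoint needs. A BMP character fits in one unit, which is exactly the range UCS-2 can represent; anything in a higher plane needs a UTF-16 surrogate pair, which UCS-2 has no way to express.

```rust
// Counting 16-bit code units per codepoint with std's UTF-16 encoder.
fn main() {
    let mut buf = [0u16; 2];

    // 'é' (U+00E9) is in the BMP: one unit, valid as both UCS-2 and UTF-16.
    let bmp = 'é'.encode_utf16(&mut buf);
    println!("U+00E9  -> {:04X?}", bmp); // [00E9]

    // '😀' (U+1F600) is in plane 1: UTF-16 needs a surrogate pair,
    // and UCS-2 simply cannot represent it at all.
    let emoji = '😀'.encode_utf16(&mut buf);
    println!("U+1F600 -> {:04X?}", emoji); // [D83D, DE00]
}
```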
Desiderantes,
“Thanks, Microsoft, you pushed for that, now we’ll have another 20+ years of legacy UTF-16 strings.”
Like others said, it uses UCS-2, which was the standard text encoding used up through Windows 2000.
Another example of mandated legacy Microsoft tech in the UEFI standard is FAT32. It’s reasonable to support FAT32 on the basis that it’s already common for removable media (aka thumb drives). However, it’s a real shame UEFI didn’t include a standard vendor-neutral file system like UDF to provide a non-breaking migration path away from FAT32. Even Windows already supports UDF disks formatted elsewhere. Color me unsurprised; now we’re stuck with legacy FAT32 due to UEFI compatibility constraints.
http://en.wikipedia.org/wiki/Universal_Disk_Format