Linked by Thom Holwerda on Thu 11th May 2006 15:50 UTC, submitted by anonymous
Privacy, Security, Encryption A feature called System Management Mode included in modern x86 CPUs opens the way to the land of kernel space and the quest for ring zero. Federico Biancuzzi interviews French researcher Loïc Duflot to learn about the System Management Mode attack, how to mitigate it, what hardware is vulnerable, and why we should be concerned with recent X Server bugs.
Permalink for comment 123832
RE: Whose idea was SMM mode?
by Brendan on Fri 12th May 2006 11:07 UTC in reply to "Whose idea was SMM mode?"

AFAIK SMM was invented by Intel for the purpose of power management (see the APM specification) and system management (correcting RAM ECC errors in software, etc). It was introduced with the 80486, and SMM is not entirely 16-bit - it uses real mode segment addressing with 4 GB segment limits (similar to "unreal mode").

This isn't an X Server problem. This is shoddy SMM code. SMM is a de facto x86 hypervisor, and this researcher discovered that many SMM implementations have a gaping security hole - it's the equivalent of an OS letting a user process modify the kernel's interrupt table. Don't blame the X Server.

I agree - I'd call this specific problem a "BIOS chipset initialization security hole" (if the BIOS fails to set the "D_LCK" bit, or worse, requires the "D_LCK" bit to be clear).

Unfortunately nothing is that simple. There are plenty of other things in the chipset (and PCI configuration space) that can be messed with if you've got access to all I/O ports - bringing the entire system to a grinding halt wouldn't be hard. For example, you could enable a chipset's "ISA hole" from 15 MB to 16 MB so that 1 MB of physical RAM suddenly disappears, use the ISA DMA controllers and floppy controller to trash RAM (or copy RAM to a floppy), relocate the video card's display memory elsewhere so that nothing written to it can be seen, use an otherwise idle network card to transfer physical RAM to someone on the internet, wipe hard drives, etc.

I also disagree with Loïc Duflot's conclusions. The problem isn't that user-level code (like the X server) is given access to I/O ports, it's that the OS gives it access to all I/O ports (rather than limiting its access to the I/O ports it needs).

It's the same problem for any code running at CPL=0, since that code also has access to all I/O ports. Any device driver running at CPL=0 could potentially have flaws that allow this sort of attack, and something like a trojaned NVIDIA driver could theoretically be used to compromise both Windows and Linux.

One solution would be to limit access to I/O ports to what each piece of code needs. For example, only allow the video driver to access I/O ports associated with the video card, only allow a disk driver to access I/O ports associated with the disk controller, only allow a serial port driver to access I/O ports associated with the serial ports, etc.

This would solve all "I/O port access vulnerabilities", and is fairly common practice for micro-kernels. Most monolithic kernel developers would consider it "bad for performance" though... :-)
