Linked by Hadrien Grasland on Fri 30th Dec 2011 08:24 UTC
Hardware, Embedded Systems
In the world of alternative OS development, portability across multiple architectures is a challenging goal. Sometimes it may be intrinsically hard to come up with hardware abstractions that work well everywhere, but often the core problem is one of missing information. Here, I aim to learn more about the way non-x86 architectures deal with CPU I/O ports, and in particular how they prevent user-mode software from accessing them.
The headaches of Legacy Design.
by Snial on Fri 30th Dec 2011 09:54 UTC

On a non-x86 architecture it's simple: you just map the memory addresses used for I/O out of user space.
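
As a rough kernel-mode sketch in C (the register address below is a made-up placeholder, not any real board's memory map), touching a memory-mapped device is just an ordinary store through a pointer to a page the kernel never maps into user space:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART data register; the address is an
     * illustrative placeholder, not taken from any real memory map. */
    #define UART_DATA_REG ((volatile uint8_t *)0x10000000u)

    /* Kernel-mode helper: writing to the device is a plain store. User
     * programs can't do this because the kernel simply never maps the
     * device's page into their address space. */
    static void uart_putc(char c)
    {
        *UART_DATA_REG = (uint8_t)c;
    }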

Looked at another way, a separate I/O space is merely equivalent to an extra address bit: the CPU needs a dedicated I/O signal, a pin which could instead have been used to provide one more address line.

So dedicated I/O instructions not only complicate the instruction set, compilers, and device-driver portability, they also halve the potential memory space. On the original 8-bit 8080, from which the x86 inherits its I/O model, the extra 64 KB of address space would have been really valuable.
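
To make that portability cost concrete, here is the kind of wrapper a cross-platform driver ends up carrying around; the GCC-style inline assembly and the idea of hiding both cases behind one helper are illustrative assumptions, not any particular kernel's API:

    #include <stdint.h>

    /* Illustrative sketch: a driver targeting both port-mapped and
     * memory-mapped hardware has to hide the difference somewhere. */
    static inline void io_write8(uintptr_t addr, uint8_t val)
    {
    #if defined(__i386__) || defined(__x86_64__)
        /* x86: a dedicated OUT instruction into the separate 64 KB I/O space. */
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"((uint16_t)addr));
    #else
        /* Everywhere else: the device register is just a memory address. */
        *(volatile uint8_t *)addr = val;
    #endif
    }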

A separate I/O address space doesn't really help even as an address space, because virtually all practical computers from the 8-bit era onwards contained both I/O-addressed hardware and memory-mapped hardware.

Video memory (actually an output device) was a common example. The same was true of the early-1980s x86 PCs, where memory addresses from 0xA0000 to 0xFFFFF were reserved for memory-mapped devices, because the 1024 port addresses provided by their (incompletely decoded) I/O slots weren't enough even then, 30 years ago.
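
Video memory makes this easy to see in practice: in legacy VGA text mode the frame buffer sits at 0xB8000, inside the very window mentioned above, and putting a character on screen is a plain store (a kernel-mode sketch, assuming the standard 80x25 colour text mode):

    #include <stdint.h>

    /* Legacy VGA text buffer, inside the 0xA0000-0xFFFFF window; each cell
     * is a character byte plus an attribute byte. */
    #define VGA_TEXT_BASE ((volatile uint16_t *)0xB8000u)

    /* Write 'A' in light grey on black (attribute 0x07) to the top-left
     * corner of the 80x25 text screen. */
    static void vga_put_top_left(void)
    {
        VGA_TEXT_BASE[0] = (uint16_t)('A' | (0x07 << 8));
    }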

So, as you note, I/O portability is a problem for x86 PCs too, since some of their I/O is memory-mapped anyway.

So why did Intel do this? Two reasons spring to mind. Firstly, the early Intel processors were really seen as embedded controllers for driving I/O, and a separate I/O signal reduced external hardware (though this isn't very convincing: you would still need additional decoding to handle more than one chip).

Secondly, and more probably, Intel were really a memory-chip producer new to processor design, employing people with no prior experience in computer architecture to design their early processors. Consequently, they simply copied I/O concepts from older minicomputers such as the PDP-8 and HP 2100 without quite understanding why those machines had such features.

Hope this helps! (Julian Skidmore, designer of the DIY 8-bit computer FIGnition).
