Linked by Thom Holwerda on Mon 28th Sep 2009 23:15 UTC, submitted by poundsmack
It seems like Microsoft Research is really busy these days with research operating systems. We had Singularity, a microkernel operating system written in managed code, and late last week we were acquainted with Barrelfish, a "multikernel" system which treats a multicore system as a network of independent cores, using ideas from distributed systems. Now, we have a third contestant, and it's called Helios.
dragSidious
Member since: 2009-04-17


But I think the point of these research OSs is just that - research. When discussing distributed and network programming, one can easily begin to see that a modern-day desktop is, in many ways, a distributed system. A video card has dedicated CPUs and its own memory, hard drives are getting their own processors for internal encryption, shoot, even the computer's main memory is not straightforward anymore - not with virtual memory, memory management units, etc... Thus far we've managed to wrap these hardware devices underneath a single kernel, but that's only because we were thinking inside the box - going along with tradition.


Well, actually, with hardware development it's going the opposite of what you're saying. Everything is getting sucked into the CPU and made generic.


It's all about Moore's law.

Moore's law says the number of transistors in a processor doubles about every two years. This is due to improvements in lithography, the quality of silicon ingots, and shrinking processes.
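Just to put rough numbers on that doubling, here is a back-of-the-envelope sketch in Python (the starting transistor count and time span are made-up examples, not real chip data):

```python
# Rough Moore's-law projection: transistor count doubles about every 2 years.
def projected_transistors(start_count, years, doubling_period=2.0):
    """Projected transistor count after `years`, assuming one doubling per `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Hypothetical chip with 100 million transistors, projected 10 years out:
print(projected_transistors(100e6, 10))  # ~3.2 billion
```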

As the quality of silicon goes up, wafers get larger. Larger wafers mean less waste and cheaper production. Higher purity increases yields. Shrinking processes and higher-quality lithography mean more elements can be packed into a smaller and smaller area.

The best CPU design people have been able to create so far is a RISC design, which is fundamentally a very small, very fast core. Modern x86 processors are RISC at their core and use extra silicon to create a translation layer for the legacy CISC machine code.
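A toy sketch of that translation-layer idea (purely illustrative; real x86 decoders are vastly more complicated, and the instruction and micro-op names below are made up):

```python
# Toy model of a CISC-to-RISC translation layer: one complex instruction
# that touches memory is expanded into simple load/compute/store micro-ops.
def decode(cisc_instruction):
    """Expand a CISC-style instruction into RISC-like micro-ops."""
    if cisc_instruction == "add [mem], reg":
        return [
            "load  tmp, [mem]",   # read the memory operand
            "add   tmp, reg",     # do the actual arithmetic
            "store [mem], tmp",   # write the result back
        ]
    # Simple register-to-register instructions pass through unchanged.
    return [cisc_instruction]

print(decode("add [mem], reg"))
```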

And so, since the best CPU design is a relatively small core that runs fast and is paired with a large cache, using all the extra silicon area for more and more CPU cores was the logical conclusion.

However, there is a limit to how useful that is. People just are not that multitask-oriented.

Then on top of that you have memory limitations, and the number of I/O pins you can squeeze into a mainboard-to-CPU interface is fundamentally limited.

So the next step is just sucking more and more motherboard functionality into the processor. AMD did it with the memory controller; Intel has recently followed suit.

The step after that is to suck the GPU and most of the northbridge into the central processor. Intel is already doing that with the newer Atom designs in order to be competitive with ARM.

The age of the discrete video card is passing. There will be no special physics cards, no audio acceleration, no nothing.

On modern architectures, even the term "hardware acceleration" is a misnomer. Your OpenGL and DirectX stacks are almost pure software, or at least will be in the next-generation stuff. All "hardware acceleration" is nowadays is software that has been optimized to use both the graphics processor and the central processor.
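A loose sketch of what that means in practice (the backends below are plain Python stand-ins, not calls into any real driver or graphics API):

```python
# "Hardware acceleration" as a software decision: the stack just picks
# whichever backend (CPU or GPU) suits the workload.
def cpu_backend(pixels):
    return [p * 2 for p in pixels]   # do the work on the CPU

def gpu_backend(pixels):
    return [p * 2 for p in pixels]   # pretend this ran on the GPU cores

def brighten(pixels, gpu_available):
    """The 'accelerated' path is just software choosing a backend."""
    use_gpu = gpu_available and len(pixels) > 1000
    backend = gpu_backend if use_gpu else cpu_backend
    return backend(pixels)

print(brighten([1, 2, 3], gpu_available=True))  # small job stays on the CPU
```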

Pretty soon memory bandwidth requirements and latency issues will mean that sticking a huge GPU and video RAM on the far end of a PCI Express bus becomes prohibitively expensive and causes too much overhead. So the GPU will just be another core on your central processor. (Well... actually, more than likely just large blobs of dozens and dozens of tiny, extremely RISC-like cores that will get described as "the GPGPU cores".)


The future of the PC is "SoC" (system on a chip), which is already the standard setup for embedded systems due to the low price and high efficiency that design offers.

Instead of having the CPU, northbridge, southbridge, GPU, etc. as separate chips, all the same functionality will be streamlined and incorporated into a single hunk of silicon.

Then your motherboard will exist as a mere break-out board with all the I/O ports, a place to plug in memory, and voltage regulation.

It'll be cheaper, faster, and more reliable. The only difference between a desktop PC, a smartphone, and a laptop would be one of form factor, the types of I/O included by default, and energy usage.

The discrete GPU will live on in mostly high-end systems for a long time, but even that will pass, since modern NUMA architectures mean you can still put pretty much unlimited numbers of multicore CPU/GPUs in a single system.
(There are already high-end Linux systems with over 4000 CPU cores in a single computer.)

----------

What you're talking about is an extremely old-fashioned computer design.

The mainframe had a bare OS running in the center on a relatively weak central processor. The central processor box had a number of different connections that could be used for almost anything and were often multiplexed across a wide variety of very intelligent hardware: network boxes, tape boxes, DASD units, etc., each with its own complex microcode that offloads everything. This means mainframes have massive I/O capabilities that can be fully utilized with very little overhead.

Of course all of this means they are huge, expensive, difficult to maintain, difficult to program for, and are largely now legacy items running software that would be prohibitively expensive to port to other architectures.

Edited 2009-09-29 05:30 UTC
