Linked by Thom Holwerda on Mon 28th Sep 2009 23:15 UTC, submitted by poundsmack
Microsoft It seems like Microsoft Research is really busy these days with research operating systems. We had Singularity, a microkernel operating system written in managed code, and late last week we were acquainted with Barrelfish, a "multikernel" system which treats a multicore system as a network of independent cores, using ideas from distributed systems. Now, we have a third contestant, and it's called Helios.
State of Singularity
by sukru on Tue 29th Sep 2009 01:01 UTC
sukru
Member since:
2006-11-19

I was very excited when they announced Singularity, and was happy to see an open source release at Codeplex later on.

Unfortunately the development seems to be stalled right now. The last commit was at Nov 14 2008.

Unlike the last two prototype OSes(?), this one actually had potential, since I'm already writing command line C# applications, which it apparently supports.

Hope they resurrect the project soon.

Reply Score: 3

RE: State of Singularity
by kaiwai on Tue 29th Sep 2009 04:29 UTC in reply to "State of Singularity"
kaiwai Member since:
2005-07-06

I was very excited when they announced Singularity, and was happy to see an open source release at Codeplex later on.

Unfortunately the development seems to be stalled right now. The last commit was at Nov 14 2008.

Unlike the last two prototype OSes(?), this one actually had potential, since I'm already writing command line C# applications, which it apparently supports.

Hope they resurrect the project soon.


The reason it has died, like so many of Microsoft's efforts to 'embrace' open source, is the licence they chose. Take Singularity: it is licensed under the Microsoft Research License, so no one is allowed to turn it into even a community-oriented operating system, let alone take the final step of commercialising it as a product.

Unfortunately, Microsoft can't seem to let go of their code, or at least allow others to embrace it and turn it into something useful, without them (Microsoft) getting something directly out of someone else's enhancements. What they should have done is release it under the LGPL or even the CDDL and encourage the development of an ecosystem around it.

Edited 2009-09-29 04:31 UTC

Reply Score: 5

RE[2]: State of Singularity
by Karitku on Tue 29th Sep 2009 10:30 UTC in reply to "RE: State of Singularity"
Karitku Member since:
2006-01-12

It died because it's a RESEARCH OS; no sane person with that much know-how is going to spend time doing research for FREE! Since there are tons of similar projects with a more user-friendly approach, it's pretty clear that the people with an interest are concentrating on those.

Reply Score: 2

RE[2]: State of Singularity
by sukru on Tue 29th Sep 2009 19:20 UTC in reply to "RE: State of Singularity"
sukru Member since:
2006-11-19

I thought it was MS-PL, but after reading your comment I checked, and it actually is licensed "not for commercial use".

On the other hand, open source projects have stalled too (SharpOS, JOS).

I actually liked the idea of virtual machine based hardware drivers, but I guess it's for another time.

Reply Score: 2

RE: State of Singularity
by dragSidious on Tue 29th Sep 2009 04:36 UTC in reply to "State of Singularity"
dragSidious Member since:
2009-04-17

This is not a _prototype_ operating system. It's a research one.

The difference is that a prototype has some chance of actually making it out into the real world. Helios, on the other hand, has zero chance of seeing the light of day.

Microsoft will take what they learned, patent it to make sure nobody else can use those ideas, and then may or may not introduce some of the features in later versions of NT.

---------------

Despite what people want to believe, software development is an evolutionary process, not a revolutionary one.

NT has been under constant development since 1989, with userland portions dating back to the early '80s.

Mac OS X has been under constant development since 1985, with the founding of NeXT Computer, and includes lots of BSD portions.

The BSDs themselves have been under development since 1977.

They are, in turn, based on concepts developed by the original Unix, which was started in the late 1960s and early 1970s.

Unix itself was based on Multics concepts developed through the 1960s and on a video game called "Space Travel". Of course, Linux started as a combination of the Linux kernel (started in 1991) and GNU (1984).

And Linus learned his OS development from Minix, which started in 1987 as a textbook example of an OS for college OS design courses.

Of course Solaris is a release of System7 Unix, which is a direct descendant of the first port of Unix from PDP-11 assembly to C. And earlier Sun OS versions were BSD based.

-------------------


Microsoft NT is about the most modern OS you're going to see that has any widespread success. Every OS since then has been a commercial failure.

For years and years, the best design that researchers could come up with was the microkernel. The only successful kernel of that type is the one used by QNX (started in 1980). (If OS X counts as having a microkernel, then so does NT.)

And the only reason it was successful was that the microkernel design allowed deterministic scheduling (aka realtime); it could not scale to acceptable performance on desktops (the message passing just had too much overhead).
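
Roughly, the extra cost looks like this; a minimal sketch, where msg_send() and msg_recv() are made-up placeholders for whatever IPC primitives a given microkernel provides (not real QNX or NT calls):

    /* Hypothetical sketch: msg_send()/msg_recv() are placeholders, stubbed out
       so this compiles; a real microkernel would trap into the kernel here. */
    #include <stddef.h>
    #include <string.h>

    struct msg { int op; size_t len; char payload[4096]; };

    static void msg_send(int server, const struct msg *m) { (void)server; (void)m; }
    static void msg_recv(int server, struct msg *m)       { (void)server; m->len = 0; }

    /* In a monolithic kernel, read() is a single trap and the kernel copies data
       straight into the caller's buffer. In a microkernel, the same read() becomes
       an IPC round trip to a user-space file server: two messages, two extra
       context switches, and at least one extra copy. */
    static size_t micro_read(int fs_server, char *buf, size_t len)
    {
        struct msg req = { .op = 1 /* READ */, .len = len };
        struct msg reply;

        msg_send(fs_server, &req);             /* switch into the file server  */
        msg_recv(fs_server, &reply);           /* switch back with the reply   */
        memcpy(buf, reply.payload, reply.len); /* extra copy into the caller   */
        return reply.len;
    }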

----------------


My point, ultimately, is that these things Microsoft is working on are pure research. They are not designed to be used, are not meant to be useful; they are just playthings to try out new ideas.

If the things they create cannot be implemented in a productive manner in NT then they will never get used.

Reply Score: 4

RE[2]: State of Singularity
by pablo_marx on Tue 29th Sep 2009 05:04 UTC in reply to "RE: State of Singularity"
pablo_marx Member since:
2006-02-03

Of course Solaris is a release of System7 Unix, which is a direct descendant of the first port of Unix from PDP-11 assembly to C. And earlier Sun OS versions were BSD based.

System7 unix? Do you mean System V Release 4 (SVR4)? Or UNIX V7?

Microsoft NT is about the most modern OS you're going to see that has any widespread success. Every OS since then has been a commercial failure.

Of course, by your UNIX rationale, NT is really a descendant of VMS, which dates back to 1975, which was a descendant of RSX-11 dating back to 1972, of course leading back to RT-11 in 1970. I'm sure some DEC aficionado can neatly tie this back to TOPS-10 or DECSYS, going back to 1963/1964. Doesn't sound that much more modern than UNIX....

Reply Score: 5

RE[3]: State of Singularity
by Dubhthach on Tue 29th Sep 2009 08:07 UTC in reply to "RE[2]: State of Singularity"
Dubhthach Member since:
2006-01-12

>>System7 unix? Do you mean System V Release 4 (SVR4)? Or UNIX V7?

Both, of course, as V7 is a direct ancestor of SVR4 (via System III and UNIX/32V).

Like the way I'm a direct descendant of both my father and my great-grandfather.

Reply Score: 2

RE[2]: State of Singularity
by dvzt on Tue 29th Sep 2009 07:40 UTC in reply to "RE: State of Singularity"
dvzt Member since:
2008-10-23

For years and years, the best design that researchers could come up with was the microkernel. The only successful kernel of that type is the one used by QNX (started in 1980).


I think Tru64 has a microkernel too.

Reply Score: 3

RE[2]: State of Singularity
by Thom_Holwerda on Tue 29th Sep 2009 07:55 UTC in reply to "RE: State of Singularity"
Thom_Holwerda Member since:
2005-06-29

For years and years, the best design that researchers could come up with was the microkernel. The only successful kernel of that type is the one used by QNX (started in 1980). (If OS X counts as having a microkernel, then so does NT.)


Eh...

Me thinks you need to look beyond what you can run on your own desktop. I can guarantee you that there are more microkernel installations out there than there are desktop and server computers in the world.

Reply Score: 2

RE[3]: State of Singularity
by dragSidious on Tue 29th Sep 2009 15:04 UTC in reply to "RE[2]: State of Singularity"
dragSidious Member since:
2009-04-17

You think that QNX is that widely used? I'd also forgotten that VxWorks is a microkernel.

So yeah... a shitload of them in embedded systems.

-------


But having personal, first-hand experience with QNX... it is a pretty detestable thing to work with and on, and it can't scale up very well.

Reply Score: 1

RE[4]: State of Singularity
by Thom_Holwerda on Tue 29th Sep 2009 16:19 UTC in reply to "RE[3]: State of Singularity"
Thom_Holwerda Member since:
2005-06-29

But having personal, first-hand experience with QNX... it is a pretty detestable thing to work with and on, and it can't scale up very well.


Are you serious? QNX powers everything from high-tech medical equipment, down to VCRs, and all the way back up to the Space Shuttle's robotic arm. What do you mean "it doesn't scale"?

Again - there is more to computing than desktops, laptops, and servers. There's a much larger area of computing where general purpose stuff like NT, Linux, and Mac OS X don't dare to go because of a fear of getting curb stomped.

Edited 2009-09-29 16:20 UTC

Reply Score: 3

RE: State of Singularity
by n4cer on Wed 30th Sep 2009 00:02 UTC in reply to "State of Singularity"
n4cer Member since:
2005-07-06

I was very excited when they announced Singularity, and was happy to see an open source release at Codeplex later on. Unfortunately the development seems to be stalled right now. The last commit was at Nov 14 2008. Unlike the last two prototype OSes(?), this one actually had potential, since I'm already writing command line C# applications, which it apparently supports. Hope they resurrect the project soon.


Internally, Singularity transitioned to an incubation project called Midori. There's not much info available about it externally, though you can find a few articles here and there.

Reply Score: 2

Thom, I'm not so sure
by Johnny on Tue 29th Sep 2009 02:20 UTC
Johnny
Member since:
2009-08-15

Thom,
I would say that Helios does sound like a cool OS. But I have some concerns that I'm hoping folks can answer for me:


Applications. Applications are the killer issue, because the best operating system in the world won't succeed in the marketplace without some killer application(s). What applications can run on Helios?


If it were, say, an open source Unix-based operating system, I think it would be checkmate, with a huge volume of open source software ready to go. You could do something like Debian, with all applications precompiled and a package manager that automatically handles the dependencies. Or you could do something like the BSDs, with their ports collections.


But if the OS is a) not Unix-like and b) proprietary, how do you quickly ramp up developer interest and get developers to write code for Helios? What's the incentive if you're not sure it's going to be around five years from now with a large enough market share to be profitable for proprietary software?

Availability. How hard is it to get access to Helios to evaluate the environment? I know it's early, but even betaware is worth looking at if it sort of works on some hardware out there.

Reply Score: 0

RE: Thom, I'm not so sure
by Morph on Tue 29th Sep 2009 04:26 UTC in reply to "Thom, I'm not so sure"
Morph Member since:
2007-08-20

Uh, it's a research project. Experimenting with new ideas and being a proof of concept for new designs is the point. Supporting lots of apps, being readily available, competing with mainstream OSes, etc., isn't.

Reply Score: 3

recycling names
by transputer_guy on Tue 29th Sep 2009 02:42 UTC
transputer_guy
Member since:
2005-07-08

There was an OS called Helios back in the early '90s, built specifically to provide a Unix-like OS on Transputer arrays, hence massively parallel. When the chip died, the OS was abandoned soon after. I only ever saw it at a VR show in London around 1990.

I also have to wonder why Singularity hasn't continued, or whether the experiment is simply considered "done".

google Helios for Transputers

Good job Haiku got done before the BeOS name got recycled too.

Reply Score: 3

RE: recycling names
by trenchsol on Tue 29th Sep 2009 13:00 UTC in reply to "recycling names"
trenchsol Member since:
2006-12-07

Perhaps the Microsoft people have been playing Deus Ex. There was a Helios supercomputer in the game. BTW, any information on Deus Ex III?

Reply Score: 2

Plan 9
by rdean400 on Tue 29th Sep 2009 03:11 UTC
rdean400
Member since:
2006-10-18

It kind of sounds like "Singularity meets Plan 9".

Reply Score: 1

RE: Plan 9
by tobyv on Tue 29th Sep 2009 12:18 UTC in reply to "Plan 9"
tobyv Member since:
2008-08-25

It kind of sounds like "Singularity meets Plan 9".


I was thinking the same.

Multi-kernel systems? User-mode network stack? Plan 9 had this almost 20 years ago.

Add a dash of Oberon and you have MS research, 15 years later.

Reply Score: 1

RE: Plan 9
by jabjoe on Wed 30th Sep 2009 08:54 UTC in reply to "Plan 9"
jabjoe Member since:
2009-05-06

Plan 9's failure makes me feel like crying. It's how Unix should have gone: everything still being a file. These days most Unixes have bits bolted on that ignore that it's a Unix system; they only look at that part in isolation, not at how it fits with the whole. Simplicity before optimality, because what's optimal today might not be tomorrow, and what's optimal in isolation might not be when taken as a whole. Micro-optimization rather than macro-optimization. On top of that, simplicity isn't just easier to use, but easier to maintain.

When Unix was young, everything really did go through a simple, generic abstraction (the file) in a single naming system (the filesystem) that all tools could work with. ALSA and Pulse are what we have now in Linux, whereas we should have something like OSSv4 and an X Audio plugin (which also exists (http://www.chaoticmind.net/~hcb/murx/xaudio/), but is unloved). And we have a separate API for sockets, whereas on Plan 9 there is the /net folder, with sockets as files. Glendix wouldn't be enough to bring Linux up to the Plan 9 design; it needs to come from Linus and the kernel itself.
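
To make the /net point concrete, here is a rough sketch of dialing a TCP connection the Plan 9 way; the exact paths and control strings are from memory, so treat them as approximate rather than authoritative:

    /* Plan 9 style networking: no socket API, just open/read/write on files
       under /net. Paths and control strings here are approximate. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Returns a data fd for a TCP connection, or -1 on error.
       addr uses Plan 9's "host!port" form, e.g. "192.168.1.1!80". */
    int dial_tcp(const char *addr)
    {
        char dir[64], path[128], ctlmsg[128];
        int ctl, data;
        ssize_t n;

        /* Opening the clone file allocates a new connection directory;
           reading it back gives the connection number, e.g. "4". */
        ctl = open("/net/tcp/clone", O_RDWR);
        if (ctl < 0)
            return -1;
        n = read(ctl, dir, sizeof dir - 1);
        if (n <= 0) { close(ctl); return -1; }
        dir[n] = '\0';
        dir[strcspn(dir, " \n")] = '\0';

        /* Writing a control message establishes the connection. */
        snprintf(ctlmsg, sizeof ctlmsg, "connect %s", addr);
        if (write(ctl, ctlmsg, strlen(ctlmsg)) < 0) { close(ctl); return -1; }

        /* From here on it's an ordinary file: plain read()/write(), and any
           tool that understands files understands the network. */
        snprintf(path, sizeof path, "/net/tcp/%s/data", dir);
        data = open(path, O_RDWR);
        close(ctl);
        return data;
    }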

Reply Score: 1

RE[2]: Plan 9
by Mark Williamson on Thu 1st Oct 2009 14:31 UTC in reply to "RE: Plan 9"
Mark Williamson Member since:
2005-07-06

The Glendix folks are hoping to get their Plan 9 compatibility features integrated into mainline Linux if possible. It wouldn't fix the non-file-like APIs but it would mean that some of the nice Plan 9 APIs (e.g. the pseudo filesystems for various things) become available to Linux applications too. Assuming they get any of it upstreamed!

Even if it's just a reasonably clean set of patches that distributors and/or users could apply, that would still be useful.

Reply Score: 2

Awesome
by ExtremeBass on Tue 29th Sep 2009 03:25 UTC
ExtremeBass
Member since:
2009-09-29

No download link? ;)

By the way, I got Barrelfish to compile under Ubuntu 9.04 and run under QEMU. Are the instructions for accomplishing this worth sharing? ;)

Edited 2009-09-29 03:30 UTC

Reply Score: 2

RE: Awesome
by justinholt on Wed 30th Sep 2009 02:41 UTC in reply to "Awesome"
justinholt Member since:
2009-09-30

Please do! If you don't post it could you email me at holt (dot) justin 173 (at) gmail (dot) com

Reply Score: 1

RE: Awesome
by richmassena on Wed 30th Sep 2009 13:46 UTC in reply to "Awesome"
richmassena Member since:
2006-11-26

I would be interested in seeing those instructions.

Reply Score: 1

Return of the operating system
by bebop on Tue 29th Sep 2009 03:40 UTC
bebop
Member since:
2009-05-12

Reading this and really the last two weeks of news at OSNews, I am starting to get the feeling that operating systems are coming back into style.

It seems like just a few years ago, whenever an alternative operating system was presented on OSNews, there would always be at least one post about how we already have Linux/BSD/Windows/[insert OS here]. This would always lead into someone saying that there is no longer a need for alternative operating systems.

However, with the introduction of multi-core CPUs, as well as the rise of mainstream multi-processor computers, there has been a big push to make programs (including OSes) more parallel.

It is great to see that there is interest in locally distributed operating systems. In my (relatively un-researched) opinion, I would say unless fabrication technologies and/or materials dramatically increase speed, the only way to move personal computers forward is to increase parallelism.

This is why I have been enjoying the news so much recently. With not one but THREE open source (depending on who you talk to) projects coming out of Microsoft, as well as an Alpha1 release from Haiku (BeOS arguably being the first "parallel" OS), it has been a fun time for an operating system geek.

Reply Score: 2

Rethinking The Operating System
by SanDiegoDave on Tue 29th Sep 2009 03:55 UTC
SanDiegoDave
Member since:
2009-09-29

I definitely agree with bebop: this is moving forward with a new way of thinking about operating systems. I've had many a discussion over exactly what an operating system IS; although some people think it's the shell, most agree that it's the kernel, and others have other opinions.

I also like Johnny's point about applications - the OS doesn't succeed without applications (though they may be back-end applications, they're apps nonetheless).

But I think the point of these research OSes is just that - research. When discussing distributed and network programming, one can easily see that a modern-day desktop is, in many ways, a distributed system. A video card has its own dedicated processors and memory, hard drives are getting their own processors for internal encryption, shoot, even the computer's main memory is not straightforward anymore - not with virtual memory, memory management units, etc. Thus far we've managed to wrap these hardware devices underneath a single kernel, but that's only because we were thinking inside the box - going along with tradition.

I think Barrelfish is aimed at opening up the possibility of treating a "personal computer" as a distributed system, while Helios is aimed at taming the cloud as a much larger distributed system.

Thom mentions that Microsoft has its eyes on 2020, and I agree completely. These projects are most likely meant to figure out how best to work with these micro and macro distributed systems, so that by the time MS comes out with "Windows MicroCloud" and "Windows MacroCloud" they're backed by a solid decade of experience, not two years of hacking on a previous version of Windows (ahem, WinMe and Longhorn).

Sorry for the long reply - I'm just extremely excited about this stuff!

Reply Score: 2

dragSidious Member since:
2009-04-17


But I think the point of these research OSes is just that - research. When discussing distributed and network programming, one can easily see that a modern-day desktop is, in many ways, a distributed system. A video card has its own dedicated processors and memory, hard drives are getting their own processors for internal encryption, shoot, even the computer's main memory is not straightforward anymore - not with virtual memory, memory management units, etc. Thus far we've managed to wrap these hardware devices underneath a single kernel, but that's only because we were thinking inside the box - going along with tradition.


Well, actually, hardware development is going the opposite way from what you're saying. Everything is getting sucked into the CPU and made generic.


It's all about Moore's law.

Moore's law says the number of transistors in a processor doubles about every two years. This is due to improvements in lithography, the quality of silicon ingots, and shrinking process sizes.

As the quality of silicon goes up, wafers get larger. Larger wafers mean less waste and cheaper production. Higher purity increases yields. Shrinking processes and higher quality lithography mean more elements can be stuck in a smaller and smaller area.

The best CPU design people have been able to create so far is the RISC design: fundamentally a very small, very fast core. Modern x86 processors are RISC at their core and use extra silicon to create a machine code translation layer for the legacy CISC machine code.

And since the best CPU design is a relatively small core that runs fast with a large cache, using all the extra silicon area for more and more CPU cores was the logical conclusion.

However there is a limit to that usefulness. People just are not that multitask oriented.

Then, on top of that, you have memory limitations, and the number of I/O pins you can squeeze into a mainboard-to-CPU interface is fundamentally limited.

So the next step is sucking more and more motherboard functionality into the processor. AMD did it with the memory controller; Intel has recently followed suit.

The step after that is to suck the GPU and most of the northbridge into the central processor. Intel is already doing that with the newer Atom designs in order to be competitive with ARM.

The age of the discrete video card is passing. There will be no special physics cards, no audio acceleration, no nothing.

On modern architectures, even the term "hardware acceleration" is a misnomer. Your OpenGL and DirectX stacks are almost pure software, or at least will be in the next-generation stuff. All "hardware acceleration" is nowadays is software that is optimized to use both the graphics processor and the central processor.

Pretty soon, memory bandwidth requirements and latency issues will mean that sticking a huge GPU and video RAM on the far end of a PCI Express bus becomes prohibitively expensive and causes too much overhead. So the GPU will just be another core on your central processor (well... actually, more than likely, larger blobs of dozens and dozens of tiny, extremely RISC-like cores that will get described as "the GPGPU cores").


The future of the PC is "SoC" (system on a chip), which is already the standard setup for embedded systems due to the low price and high efficiency that design offers.

Instead of having the CPU, northbridge, southbridge, etc. as separate chips, all the same functionality will be streamlined and incorporated into a single hunk of silicon.

Then your motherboard will exist as a mere break-out board with all the I/O ports, a place to plug in memory, and voltage regulation.

It'll be cheaper, faster, and more reliable. The only difference between a desktop PC, a smartphone, and a laptop will be one of form factor, the types of I/O included by default, and energy usage.

The discrete GPU will persist, mostly in high-end systems, for a long time, but even that will pass, as modern NUMA architectures mean you can still put pretty much unlimited numbers of multicore CPU/GPUs in a single system.
(There exist high-end Linux systems with over 4000 CPU cores in a single computer.)

----------

What you're talking about is an extremely old-fashioned computer design.

The mainframe system had a bare OS at the center, running on a relatively weak central processor. The central processor box had a number of different connections that could be used for almost anything, often multiplexed across a wide variety of very intelligent hardware: network boxes, tape boxes, DASD units, etc., each with its own complex microcode that offloads everything. This means that mainframes have massive I/O capabilities that can be fully utilized with very little overhead.

Of course all of this means they are huge, expensive, difficult to maintain, difficult to program for, and are largely now legacy items running software that would be prohibitively expensive to port to other architectures.

Edited 2009-09-29 05:30 UTC

Reply Score: 4

renox Member since:
2005-07-06

Modern x86 processors are RISC at their core


This sentence isn't correct: RISC is a kind of instruction set (visible to the compiler) which allows efficient hardware usage by the compiler.
An x86 compiler cannot access the 'RISC core' inside an x86 CPU, so it's not a 'RISC core'; it's just that both RISC CPUs and x86 CPUs share a lot of silicon.

However there is a limit to that usefulness. People just are not that multitask oriented.


Uh? You can also use multiple cores to accelerate a single piece of software, but yes, it's difficult to program.

The age of the discrete video card is passing. There will be no special physics cards, no audio acceleration, no nothing.


Probably, but note that GPUs now have cheap, huge memory bandwidth (thanks to their fixed on-board memory configuration) that the GCPU won't have at first.
It's possible to use different algorithms that need less memory bandwidth, but first-generation GCPUs won't be competitive with high-end GPUs.

Reply Score: 3

dragSidious Member since:
2009-04-17



This sentence isn't correct: RISC is a kind of instruction set (visible to the compiler) which allows efficient hardware usage by the compiler.
An x86 compiler cannot access the 'RISC core' inside an x86 CPU, so it's not a 'RISC core'; it's just that both RISC CPUs and x86 CPUs share a lot of silicon.


You're splitting hairs. To paraphrase, what you're saying is:

The core of the modern x86 is not "RISC", it just has the same design as "RISC" CPUs. It's a design philosophy in my eyes: instead of a lot of complex instructions, you use a CPU that has a small set of fast instructions, and you depend on your compiler to get it right.

Intel and AMD processors have logic that takes x86 instructions and breaks them down into RISC-like instructions that are then executed by the rest of the processor. You can think of it as a hardware just-in-time compiler, or something like that.



Uh? You can also use multiple cores to accelerate a single piece of software, but yes, it's difficult to program.


So you're agreeing with me, then.


Probably, but note that GPUs now have cheap, huge memory bandwidth (thanks to their fixed on-board memory configuration) that the GCPU won't have at first.
It's possible to use different algorithms that need less memory bandwidth, but first-generation GCPUs won't be competitive with high-end GPUs.


Yes, memory bandwidth is an issue with IGPs.

But the problem with the current design is that with more and more applications using the GPU as a "GPGPU", you will never really have enough dedicated memory on the card. On a modern composited desktop, you're looking at massive amounts of video RAM needed to cache all those application window textures and whatnot.

It's the same reason why, on a modern system with 8GB of RAM, OSes still insist on having swap files and swap partitions: to make things go faster you want to use as much RAM as possible.


So all that latency stuff adds up.

So instead of burning hundreds of thousands of cycles on BOTH your CPU and GPU shoveling megabytes' worth of data back and forth over PCI Express during normal application use, you end up with all the cores sharing the same cache.

Then, instead of spending 200 dollars or whatever on a dedicated external video card, people can spend that money on increasing the memory bandwidth from main memory to the processor and make all that fast dedicated video RAM part of normal main memory.


edit:

Imagine an application that uses GPGPU instructions and CPU instructions in the same execution loop.

Since the GPGPU is only fast at certain things, it would be desirable to be able to program easily using both the GPU and the CPU.

So with a dedicated, separate video card, each time you execute an iteration of that loop, you're burning far more cycles just moving data back and forth over the PCI Express bus than it actually costs to execute the work.

By integrating the GPU and the CPU into the same processor as separate cores, and then using the same memory and cache for both, a much slower CPU and GPU could massively outperform an otherwise faster dedicated video card for that sort of thing.

And be much easier to program for...
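
A hypothetical sketch of the kind of loop I mean; gpu_copy_in(), gpu_run_kernel() and gpu_copy_out() are made-up placeholders for whatever real GPGPU API you'd use, stubbed out so it compiles:

    #include <stddef.h>

    /* Placeholders standing in for a real GPGPU API (CUDA, OpenCL, ...). */
    static void gpu_copy_in(const float *host, size_t n) { (void)host; (void)n; } /* host RAM -> video RAM */
    static void gpu_run_kernel(size_t n)                 { (void)n; }             /* runs on the GPU cores */
    static void gpu_copy_out(float *host, size_t n)      { (void)host; (void)n; } /* video RAM -> host RAM */
    static void cpu_step(float *data, size_t n)          { for (size_t i = 0; i < n; i++) data[i] *= 0.5f; }

    /* With a discrete card, every iteration pays for two trips across the
       PCI Express bus, because the CPU can't touch video RAM directly. */
    static void process(float *data, size_t n, int iterations)
    {
        for (int i = 0; i < iterations; i++) {
            gpu_copy_in(data, n);    /* across the bus              */
            gpu_run_kernel(n);
            gpu_copy_out(data, n);   /* back across the bus         */
            cpu_step(data, n);       /* the part the CPU is good at */
        }
        /* If the GPU were just more cores sharing the CPU's cache and memory
           controller, both copies would disappear and the kernel and cpu_step()
           would work on the same data in place. */
    }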

Edited 2009-09-29 15:32 UTC

Reply Score: 1

OSbunny Member since:
2009-05-23

@dragSidious: Very nicely described. I was also thinking about winmodems, and how increasingly even expansion cards offload processing onto the CPU. Also, in the past you used to have separate math co-processors, but these days they are built into the CPU. So things have been heading in this direction for quite a while now.

But I still think integrating the GPU into the CPU will take a lot of time. Intel doesn't seem to be as good at making GPUs as it is at CPUs. So the likes of Nvidia and ATI will have the edge for many years to come.

Edited 2009-09-30 01:10 UTC

Reply Score: 1

RE: Rethinking The Operating System
by -pekr- on Tue 29th Sep 2009 14:43 UTC in reply to "Rethinking The Operating System"
-pekr- Member since:
2006-03-28

I don't remember precisely, but back when there was a possibility that the Amiga would use the QNX kernel (1998), I studied their Neutrino architecture a bit, and it was exactly like that - everything in the OS was an interconnected network of managers, passing messages here and there. But maybe you mean something different.

I like it when companies try new ideas publicly, or even old ideas under new conditions. This can only make the computing world better....

Reply Score: 2

What is old becomes new?
by smilie on Thu 1st Oct 2009 17:38 UTC
smilie
Member since:
2006-07-19

The paper was interesting but, for some of us, nothing we haven't seen two decades ago, back when mainframes and minicomputers loaded intelligent peripheral controllers during the boot process. I just hope they don't try to patent concepts this old as something new...

The RISC processor was originally designed to provide a low-cost processor for these limited-requirement roles (though today's RISC processors follow a different definition of "RISC" than the one proposed in the 1980s).

Reply Score: 1