Linked by Thom Holwerda on Tue 20th May 2014 21:23 UTC, submitted by BloopFloop

Arrakis is a research operating system from the University of Washington, built as a fork of Barrelfish.

In Arrakis, we ask the question whether we can remove the OS entirely from normal application execution. The OS only sets up the execution environment and interacts with an application in rare cases where resources need to be reallocated or name conflicts need to be resolved. The application gets the full power of the unmediated hardware, through an application-specific library linked into the application address space. This allows for unprecedented OS customizability, reliability and performance.

The first public version of Arrakis has been released recently, and the code is hosted on GitHub.
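To make the library-OS idea above a little more concrete, here is a rough C sketch. Every name in it (net_queue, libos_net_queue_get, libos_net_send) is invented for illustration and is not the actual Arrakis API, and the "hardware queue" is stubbed with plain memory so the sketch compiles and runs.

/* Hypothetical sketch of the library-OS model: the kernel is involved only
 * to hand the application a device queue at startup; after that, I/O is
 * plain memory access from the application's own address space. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct net_queue { char buf[2048]; size_t len; };   /* stand-in for a NIC ring */

/* In a real library OS this would ask the kernel (once) to map a hardware
 * queue into our address space; here it just allocates memory. */
static struct net_queue *libos_net_queue_get(void)
{
    return calloc(1, sizeof(struct net_queue));
}

/* "Sending" means writing descriptors into the app-owned queue: no trap,
 * no system call on the per-packet path. */
static void libos_net_send(struct net_queue *q, const char *data, size_t len)
{
    memcpy(q->buf, data, len);
    q->len = len;
}

int main(void)
{
    struct net_queue *q = libos_net_queue_get();   /* only setup touches the kernel */
    if (q == NULL)
        return 1;
    libos_net_send(q, "ping", 4);                  /* fast path stays in user space */
    printf("queued %zu bytes without a syscall\n", q->len);
    free(q);
    return 0;
}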

Thread beginning with comment 589261
RE: This is awesome
by Alfman on Wed 21st May 2014 09:51 UTC in reply to "This is awesome"
Alfman, Member since: 2011-01-28

thesunnyk,

"I've always been toying with the idea of forking and continuing with Barrelfish. The idea, if you're not aware, is to have a kernel per CPU core. This allows you to think about your computer in an inherently more distributed sense, pushing computation out over the network or otherwise having your computer "span" devices or even the internet."


I like this idea as well! Not sure if it'd be useful for normal people, but what it offers is kind of an alternative to a VPS, with dedicated resources at less than the cost of a dedicated server. This model makes a lot of sense especially with NUMA systems, which are inherently more scalable than uniform memory access because of the cache coherency overhead that x86 mandates.

Edited 2014-05-21 09:52 UTC

Reply Parent Score: 3

RE[2]: This is awesome
by Megol on Wed 21st May 2014 15:08 in reply to "RE: This is awesome"
Megol, Member since: 2011-04-11

thesunnyk,

"I've always been toying with the idea of forking and continuing with Barrelfish. The idea, if you're not aware, is to have a kernel per CPU core. This allows you to think about your computer in an inherently more distributed sense, pushing computation out over the network or otherwise having your computer "span" devices or even the internet.

"

QNX has some support for this, using a network as the "bus" layer.
Other systems have been designed to support a distributed single system image. Limiting the support to kernel design isn't the best way towards that; many other system layers need to be adapted to handle variable latency and link failures properly.
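To illustrate the "variable latency and link failures" point in a minimal way, here is a C sketch with a stubbed transport that drops requests at random. The names (send_request_once, send_request) are made up for illustration and are not taken from QNX, Barrelfish or Arrakis.

/* Once the "bus" is a network, an upper layer can no longer assume a
 * load/store always completes: it has to budget a deadline and retry or
 * fail explicitly.  The transport below is a stub that simulates drops. */
#include <stdio.h>
#include <stdlib.h>

/* Stub remote call: pretend the link drops about half the requests. */
static int send_request_once(int node, int msg, int *reply)
{
    (void)node;
    if (rand() % 2)
        return -1;                  /* simulated link failure / timeout */
    *reply = msg + 1;               /* simulated remote work */
    return 0;
}

/* What a distributed layer has to add: bounded retries and an explicit
 * error path instead of hanging forever on a dead link. */
static int send_request(int node, int msg, int *reply, int max_tries)
{
    for (int attempt = 1; attempt <= max_tries; attempt++) {
        if (send_request_once(node, msg, reply) == 0)
            return 0;
        fprintf(stderr, "node %d: attempt %d failed, retrying\n", node, attempt);
        /* real code would also back off here before retrying */
    }
    return -1;                      /* caller must cope with an unreachable node */
}

int main(void)
{
    int reply;
    if (send_request(1, 41, &reply, 5) == 0)
        printf("reply from node 1: %d\n", reply);
    else
        printf("node 1 unreachable after retries\n");
    return 0;
}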


"I like this idea as well! Not sure if it'd be useful for normal people, but what it offers is kind of an alternative to a VPS, with dedicated resources at less than the cost of a dedicated server. This model makes a lot of sense especially with NUMA systems, which are inherently more scalable than uniform memory access because of the cache coherency overhead that x86 mandates."


Do you know that NUMA was first used in systems without x86 processors? Do you realize that much of the work on scalable coherency protocols has been done on RISC systems?
In short: this isn't something x86-specific; it's common to all systems following the von Neumann design.

Reply Parent Score: 2

RE[3]: This is awesome
by Alfman on Wed 21st May 2014 19:14 in reply to "RE[2]: This is awesome"
Alfman, Member since: 2011-01-28

Megol,

"QNX has some support for this, using a network as the "bus" layer. Other systems have been designed to support a distributed single system image. Limiting the support to kernel design isn't the best way towards that; many other system layers need to be adapted to handle variable latency and link failures properly."


Well, the trouble with this is that NUMA was designed to solve some inherent scalability problems of shared memory systems. And although you can apply some hacks to SMP operating systems to support NUMA better, generic SMP/MT software is built on shared memory design patterns that fundamentally cannot scale. In other words, it hits diminishing returns that cannot be overcome by simply adding more silicon.

I'm only vaguely familiar with Barrelfish, but one of its goals is to do away with the design patterns that imply serialization bottlenecks, which are common in conventional operating systems today. In theory all operating systems could do away with the serial bottlenecks too, but not without "limiting the support to kernel design", as you said. Physics is eventually going to force us to adopt variations of this model if we are to continue scaling.
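As a minimal sketch of that contrast (plain pthreads, nothing Barrelfish-specific): all threads updating one atomic counter bounce a single cache line between cores on every increment, while per-thread counters combined once at the end keep the hot path core-local, which is the spirit of the per-core, message-passing design.

/* Shared-memory pattern vs. partitioned pattern.  Build with something
 * like: cc -O2 -pthread counters.c  (timing each phase shows the gap;
 * a real benchmark would do non-trivial work so the compiler cannot
 * collapse the private loop). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define THREADS 4
#define ITERS   1000000L

static atomic_long shared_counter;               /* one contended cache line */

static void *contended(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        atomic_fetch_add(&shared_counter, 1);    /* coherency traffic every step */
    return NULL;
}

static void *partitioned(void *arg)
{
    long local = 0;
    for (long i = 0; i < ITERS; i++)
        local++;                                 /* stays in this core's cache */
    *(long *)arg = local;                        /* hand the result back once */
    return NULL;
}

int main(void)
{
    pthread_t t[THREADS];
    long results[THREADS] = {0};

    /* Shared-memory style: every thread updates the same location. */
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, contended, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    printf("shared counter:       %ld\n", atomic_load(&shared_counter));

    /* Partitioned style: private work, combined once at the end. */
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, partitioned, &results[i]);
    long total = 0;
    for (int i = 0; i < THREADS; i++) {
        pthread_join(t[i], NULL);
        total += results[i];                     /* the only shared step */
    }
    printf("partitioned counters: %ld\n", total);
    return 0;
}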

Edited 2014-05-21 19:14 UTC

Reply Parent Score: 3