Linked by fvillanustre on Sun 1st May 2011 21:51 UTC
Linux "Qubes OS comes from an elegant concept: if you can isolate functional components within disposable containers, and you can separate those components that can be tainted through their interaction with the outside world from the core subsystems, you stand a good chance to preserve the integrity and security of the base Operating System at the possible expense of needing to jump through some hoops to move data around the system. All in all it sounds like a good proposition if it can be demonstrated to be practical." Read the full review.
already?
by project_2501 on Mon 2nd May 2011 00:32 UTC
project_2501
Member since:
2006-03-20

I'm surprised this hasn't happened already. The pieces are there: Windows, Linux and others come pre-packaged with virtualisation technology, yet there still isn't a "launch app in new isolation context" option.
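
A rough sketch of what such an option could look like on a plain Linux box follows. It only illustrates single-kernel isolation via namespaces, not how Qubes works (Qubes puts domains in separate Xen virtual machines), and the launch_isolated helper is invented for the example and needs root:

# Minimal sketch, assuming a Linux host with glibc: start a program in
# fresh mount/PID/network namespaces. This is single-kernel isolation
# only, much weaker than running the program in a separate VM.
import ctypes
import os
import sys

CLONE_NEWNS = 0x00020000   # new mount namespace
CLONE_NEWPID = 0x20000000  # new PID namespace
CLONE_NEWNET = 0x40000000  # new network namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def launch_isolated(argv):
    # unshare() detaches this process from the parent's namespaces;
    # the next fork() places the child into the fresh PID namespace.
    if libc.unshare(CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWNET) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (may need root)")
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)   # child runs as PID 1 of the new namespace
    os.waitpid(pid, 0)

if __name__ == "__main__":
    launch_isolated(sys.argv[1:] or ["/bin/sh"])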

Reply Score: 3

RE: already?
by makc on Tue 3rd May 2011 12:27 UTC in reply to "already?"
makc Member since:
2006-01-11

Actually this sounds so much like "microkernel" to me ;)

Update: I read the article and it's a whole different story

Edited 2011-05-03 12:34 UTC

Reply Score: 2

Yikes! Mach Kernel Organization
by hackus on Mon 2nd May 2011 01:33 UTC
hackus
Member since:
2006-06-28

vs Static Kernel Organization....

Since this is so obvious...let me start by saying:

1) Separation of these functions is nothing new. It has already been tried to various degrees using Mach.

2) It seems like a great idea; however, implementing the details reveals really bad performance for lots of things.

For one, context switching (a rough illustration of that cost is sketched below).

3) Unlike the static-kernel guys, who have pretty much hammered out how they want to organize a static kernel and what operations should be built into the CPU, hardware-wise, to speed things up, the Mach world has settled none of this.

Nobody in the Mach world has come up with an agreed plan for how to do all of this compartmentalized sharing of messages and security-context switching between parts.

Till everyone agrees, the hardware manufacturers are not going to support it.

Till then, any sort of non-static kernel OS implementation is going to get its arse beat on economies of scale and performance.

Furthermore, advances in static kernel design are gradually eliminating a lot of the concerns over shared address space issues.

By the time the Mach guys figure out what they want or need, static kernels will already be there, and probably beyond.

-Hack
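
A rough way to see the cost being pointed at here: the sketch below bounces one byte between two processes over a pair of pipes, which forces at least two context switches per round trip, and compares that with a plain in-process call. It is only an order-of-magnitude illustration on whatever machine runs it, not a measurement of Mach, Xen or Qubes:

# Compare the cost of a cross-process round trip (pipe ping-pong, which
# forces context switches) with an ordinary in-process function call.
import os
import time

ROUNDS = 20000

def ipc_round_trip():
    a_r, a_w = os.pipe()
    b_r, b_w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: echo every byte straight back
        for _ in range(ROUNDS):
            os.read(a_r, 1)
            os.write(b_w, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(ROUNDS):
        os.write(a_w, b"x")           # each round trip costs at least
        os.read(b_r, 1)               # two context switches
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / ROUNDS

def direct_call():
    def echo(b):
        return b
    start = time.perf_counter()
    for _ in range(ROUNDS):
        echo(b"x")
    return (time.perf_counter() - start) / ROUNDS

print("pipe IPC round trip: %.2f us" % (ipc_round_trip() * 1e6))
print("in-process call:     %.3f us" % (direct_call() * 1e6))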

Reply Score: 1

Not2Sure Member since:
2009-12-07

Yes but we're talking about protecting users from themselves. Hardware has vastly outstripped the performance needs of most users. That's why this big "end of the pc"/tablet/smartphone movement has any legs at all and why ARM is becoming so attractive with its much lower power usage/performance ratio.

For probably 90% of corporate/enterprise users, the performance penalty paid by all this virtualization/context switching would be negligible on current hardware compared to the needs of the task that is putting the system at risk (browsing, installing questionable/untrusted executables, viewing email attachments in insecure plugins/viewers, etc.). The further we go down the absurd road of turning the browser into an operating system, with hardware acceleration, WebGL and so on, the less true that will perhaps be.

I think the biggest hurdle facing user adoption is making its use painless/seamless. Remember, one of the biggest gripes about Vista in the enterprise was all those UAC dialogs. Qubes seems to paint each "domain"'s windows in a different color, which is a nice UI cue, but graduating data from one domain to another beyond the copy/cut/paste metaphor is always the sticking point.

And while it isn't absolutely a new concept, it is a project worth watching imho.

Reply Score: 1

r_a_trip Member since:
2005-07-06

Yes but we're talking about protecting users from themselves.

A noble idea, but software can't protect against user ignorance. Once you've foolproofed a thingamajig, nature goes and invents a bigger fool.

Education would solve a lot of problems out there, but somehow the belligerent end-user won. Now techies are trying to find ways to let untrained individuals wield tremendous power without causing too much damage. Which is strange, because every other power tool comes with the expectation that anybody wielding it knows how to operate it.

A multipurpose computer simply isn't a toaster and treating it like one will, sooner or later, get you burned.

Not that it ultimately matters. Some belligerent end-user or a brainwashed techie will label me an elitist and the quest for uneducated toaster computing will hurry forward unabashed.

As an aside, maybe uneducated computing will become a reality with true A.I. Then again, the computer may opt to kill the dunce who is trying to give it commands :-)

Reply Score: 6

orestes Member since:
2005-07-06

UAC was intentionally designed to be annoying, to raise awareness of apps running as administrative users without good reason and trigger a sort of roundabout kick in the pants to developers via annoyed users.

I absolutely agree though that the big issue will be putting the metaphors into a scheme that feels natural to use for the average user instead of feeling like it's fighting against the user.

If the features are compelling enough the performance hit will be overlooked, just like it has been in most of the big jumps in computing.

Reply Score: 3

Neolander Member since:
2010-03-08

UAC was intentionally designed to be annoying, to raise awareness of apps running as administrative users without good reason and trigger a sort of roundabout kick in the pants to developers via annoyed users.

And in case users were not knowledgeable enough to know about this, and simply blamed the operating system and ignored or disabled the warnings as they remained overly common, what was Microsoft's plan?

Edited 2011-05-02 12:16 UTC

Reply Score: 1

Thom_Holwerda Member since:
2005-06-29

And in case users were not knowledgeable enough to know about this, and simply blamed the operating system and ignored the warnings, what was Microsoft's plan?


It worked, though. That's what matters. It performed its role of getting lazy programmers in line.

Reply Score: 1

Neolander Member since:
2010-03-08

Did it? I still have to click through two warnings* and give a third-party program or script administrative privileges when installing the vast majority of Windows games and utilities found on the internet, as far as I can tell, even though most of these are totally harmless and could run perfectly well with limited privileges.

The worst part is that these warnings are practically useless because of how uninformative they are. At the moment I see a UAC prompt, I have no way of knowing what the privileged application is going to do with its admin rights, and as such I am still basically forced to trust it or forget it, with no security added.

As a consequence, I end up ignoring these warnings most of the time, seeing them only as an annoying extra installation step getting in my way, and a reminder of how broken software installation is on many desktop OSes, including Windows.

* "This file is a binary, do you really want to run it?" and the UAC prompt.

Edited 2011-05-02 13:09 UTC

Reply Score: 1

adricnet Member since:
2005-07-01

Obviously from there it went straight to:

3) Profit!

Reply Score: 1

WorknMan Member since:
2005-11-13

For probably 90% of corporate/enterprise users, the performance penalty paid by all this virtualization/context switching would be negligible on current hardware compared to the needs of the task that is putting the system at risk


I hate it when people say "For 90% of users out there...". What they're really saying is "we're about to piss off the other 10%", and I always seem to be a part of that 10%.

Reply Score: 3

Neolander Member since:
2010-03-08

View it in a positive light: you're exceptional and unique ;)

Reply Score: 2

Fergy Member since:
2006-04-10

For probably 90% of corporate/enterprise users, the performance penalty paid by all this virtualization/context switching would be negligible on current hardware compared to the needs of the task that is putting the system at risk


I hate it when people say "For 90% of users out there...". What they're really saying is "we're about to piss off the other 10%", and I always seem to be a part of that 10%.

It is even worse. What is actually true is that 90% of the _time_ users don't use more than 10% of the system. But that single time when you need to do something and can't is what makes you switch back to the full-featured version.

Edited 2011-05-03 11:25 UTC

Reply Score: 2

moondevil Member since:
2005-07-08

The same is also being tried with Minix 3:

http://www.minix3.org/

Reply Score: 2

Not a review ?
by Neolander on Mon 2nd May 2011 08:02 UTC
Neolander
Member since:
2010-03-08

I fail to see how this qualifies as a review. Unless I'm missing something, no one is actually using the OS there; it's just a discussion of its theoretical merits.

Also, I have to read about it more carefully, but last I heard, "Blue Pill" was a Windows-specific rootkit technique which simply made clever use of hardware virtualisation extensions to hide itself better. I fail to see how Qubes prevents this better than a vanilla Linux kernel, which already puts separate processes in separate address spaces.

Can someone help me understand?

(And am I the only one who thinks that this TXT thing is scary when you start to consider how an evil monopoly could use it? It would probably be the end of all jailbreaking, making a locked-down device remain locked down forever.)

Reply Score: 1

RE: Not a review ?
by Not2Sure on Mon 2nd May 2011 09:38 UTC in reply to "Not a review ?"
Not2Sure Member since:
2009-12-07

Uhh... it would not, in theory, be possible to install a "Blue Pill" from within a virtualized instance after a hardware-verified boot process. Did you even read the linked article?

Reply Score: 1

RE[2]: Not a review ?
by Neolander on Mon 2nd May 2011 10:46 UTC in reply to "RE: Not a review ?"
Neolander Member since:
2010-03-08

I agree that hardware code verification would be a powerful defense against rootkits. Until they manage to corrupt the code which the hardware uses for its verification, that is. However, this hardware feature is not mainstream yet, and will remain so for a very long time. Qubes' security is based on something else.

The Qubes developers state that its security is based on isolating untrusted components in virtual machines. However, the words "virtual machine" are so overused nowadays that they lost their meaning a long time ago. What is being virtualized? What kind of isolation does this new layer provide?

Mainstream OS kernels already provide a form of virtualization: software doesn't access the hardware directly, doesn't share a common address space... So what's new here? In what way is their additional virtualization layer more secure than what the Linux kernel already provides?

Edited 2011-05-02 10:55 UTC

Reply Score: 1