Linked by Thom Holwerda on Thu 21st Jan 2010 19:24 UTC, submitted by Anonymous
OpenBSD "OpenBSD is widely touted as being 'secure by default', something often mentioned by OpenBSD advocates as an example of the security focused approach the OpenBSD project takes. Secure by default refers to the fact that the base system has been audited and considered to be free of vulnerabilities, and that only minimal services are running by default. This approach has worked well; indeed, it has led to 'Only two remote holes in the default install, in a heck of a long time!'. This is a common sense approach, and a secure default configuration should be expected of all operating systems upon an initial install. An argument often made by proponents of OpenBSD is the extensive code auditing performed on the base system to make sure no vulnerabilities are present. The goal is to produce quality code, as most vulnerabilities are caused by errors in the source code. This is a noble approach, and it has worked well for the OpenBSD project, with the base system having considerably fewer vulnerabilities than many other operating systems. Used as an indicator to gauge the security of OpenBSD, however, it is worthless."
RE: Author of the article here
by Mark Williamson on Fri 22nd Jan 2010 16:18 UTC in reply to "Author of the article here"
Member since:
2005-07-06

@allthatiswrong

Hi there - first off, thank you for an interesting and well-written article. It made some really good points and was an enjoyable read. Thanks also for coming here to join in the discussion!


I am also not aware of any of these frameworks actually being bypassed, only the policies. Given the relatively small size of the framework code, it should be much easier to formally verify and audit, and to make sure they are free of vulnerabilities. Due to the way they are designed, breaking into one part of the kernel will not enable bypassing these frameworks.


Agreed that the small size of a security framework should make verification easy - in fact, you could create a formal proof of the security framework's properties (I expect someone has done this). With the security framework in the kernel you have a very powerful way of constraining what all processes, no matter how privileged, can accomplish if compromised.
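The framework/policy split we're both describing can be sketched as a tiny reference monitor: the enforcement code is small, fixed, and therefore auditable, while the policy is just data it consults. This is purely an illustrative sketch; the labels and function names are made up and are not the API of SELinux, TrustedBSD MAC, or any real framework.

```python
# Toy reference monitor illustrating the framework/policy split.
# The enforcement logic is deliberately tiny (and thus auditable);
# the policy is plain data that can change without touching the framework.
# All names here are illustrative only.

POLICY = {
    # (subject label, object label): set of permitted actions
    ("webserver_t", "content_t"): {"read"},
    ("webserver_t", "log_t"): {"read", "append"},
}

def check_access(subject, obj, action, policy=POLICY):
    """Default-deny: an action is allowed only if the policy grants it.

    Note the check ignores traditional privilege (e.g. uid 0): even a
    root-owned subject gets only what its label is granted.
    """
    return action in policy.get((subject, obj), set())

print(check_access("webserver_t", "content_t", "read"))   # True: granted
print(check_access("webserver_t", "content_t", "write"))  # False: not granted
print(check_access("webserver_t", "shadow_t", "read"))    # False: no rule, so deny
```

The point of the default-deny dispatch is that an attacker who compromises a confined process can only loosen its constraints by changing the policy or subverting the monitor itself, which is why keeping the monitor small matters.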

The main thing I disagree with (it wasn't explicitly stated in the article, but I was trying to clarify it with my comment) is the idea that these mechanisms can protect against kernel-level code being exploited. They can protect against all sorts of things, including compromised code running with root privileges. This might well allow you to sandbox a compromised process so that it is less likely to be able to reach kernel bugs in the first place, e.g. by limiting the devices and syscalls the process may access during normal operation.
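Limiting the reachable kernel surface in this way is essentially allowlisting: only the syscalls a process needs for normal operation get through, so a compromised process cannot even probe most kernel entry points for bugs. Below is a hypothetical user-space simulation of that idea; real mechanisms in this family (Linux seccomp, OpenBSD's pledge) enforce the filter inside the kernel, and the names here are invented for illustration.

```python
# Hypothetical sketch of syscall allowlisting, the idea behind kernel
# mechanisms such as Linux seccomp or OpenBSD's pledge. Real filters run
# in the kernel; this user-space simulation only shows the principle:
# syscalls outside the granted set are refused before they reach any
# (potentially buggy) kernel code path.

ALLOWED = {"read", "write", "exit"}  # minimal set for a sandboxed worker

class SandboxViolation(Exception):
    """Raised when a simulated syscall falls outside the allowlist."""

def syscall(name, *args):
    """Dispatch a simulated syscall, enforcing the allowlist first."""
    if name not in ALLOWED:
        # A real filter would kill the process or return an error here.
        raise SandboxViolation(f"syscall {name!r} blocked by policy")
    return f"performed {name}{args}"

print(syscall("write", 1, b"hello"))  # permitted: within the allowlist
try:
    syscall("ioctl", 0, 0x5401)       # device access: blocked
except SandboxViolation as e:
    print(e)
```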

But the issue I really wanted to draw some attention to is that if an attacker manages to get control over what code is executed in kernel mode - say, by exploiting a bug to have the kernel execute injected code - then it is literally impossible for an in-kernel security framework to prevent that attacker from completely controlling the machine. There's unfortunately nothing to stop malicious code at kernel level from simply altering any memory content in the machine - and this is as true of the most trivial device driver as of the most fundamental piece of kernel infrastructure.

The way popular OSes are structured and the way popular hardware works gives all kernel-level code equal and complete trust - and therefore the complete freedom to bypass any security checks that it wishes. The only way you might limit this is by having a separate component (e.g. a hypervisor) that is protected from kernel-level code and can enforce some restrictions on what the kernel level code does.

So for the OSes people use nowadays your only option is really to audit the kernel-level code thoroughly (which unfortunately includes most or all device drivers), since without that being secure your whole security framework may be undermined. However, the security framework can be used to mitigate and contain anything that doesn't run in the kernel, i.e. most code on the machine! With different OS architectures and / or new hardware support we might be able to do even better in future.

So I agree that a modern OS should have a powerful policy framework for constraining user-level code, and that this is arguably more important than simply auditing user-level code. The nice thing about a security framework as you describe is that it can potentially protect against bugs in applications that you, the OS developer, haven't even heard of, let alone audited. And as an administrator it can constrain applications that are provided by a third party or are even closed source, such that you know *exactly* what the application is potentially doing without having to read the source code.

Thanks again for the article and for participating in the discussion.
