Linked by Thom Holwerda on Thu 21st Jan 2010 19:24 UTC, submitted by Anonymous
OpenBSD "OpenBSD is widely touted as being 'secure by default', something often mentioned by OpenBSD advocates as an example of the security focused approach the OpenBSD project takes. Secure by default refers to the fact that the base system has been audited and considered to be free of vulnerabilities, and that only the minimal services are running by default. This approach has worked well; indeed, leading to 'Only two remote holes in the default install, in a heck of a long time!'. This is a common sense approach, and a secure default configuration should be expected of all operating systems upon an initial install. An argument often made by proponents of OpenBSD is the extensive code auditing performed on the base system to make sure no vulnerabilities are present. The goal is to produce quality code as most vulnerabilities are caused by errors in the source code. This is a noble approach, and it has worked well for the OpenBSD project, with the base system having considerably fewer vulnerabilities than many other operating systems. Used as an indicator to gauge the security of OpenBSD however, it is worthless."
Thread beginning with comment 405473
To view parent comment, click here.
To read all comments associated with this story, please click here.
RE: Author of the article here
by Mark Williamson on Fri 22nd Jan 2010 16:18 UTC in reply to "Author of the article here"
Mark Williamson
Member since:
2005-07-06

@allthatiswrong

Hi there - first off, thank you for an interesting and well-written article. It made some really good points and was an enjoyable read. Thanks also for coming here to join in the discussion!


I am also not aware of any of these frameworks actually being bypassed, only the policies. Given the relatively small size of the framework code, it should be much easier to formally verify and audit, and to make sure it is free of vulnerabilities. Due to the way they are designed, breaking into one part of the kernel will not enable bypassing these frameworks.


Agreed that the small size of a security framework should make verification easy - in fact, you could create a formal proof of the security framework's properties (I expect someone has done this). With the security framework in the kernel you have a very powerful way of constraining what all processes, no matter how privileged, can accomplish if compromised.

The main thing I disagree with (it wasn't explicitly mentioned in the article but I was trying to clarify it with my comment) is the idea that these mechanisms can protect against kernel-level code being exploited. They can protect against all sorts of things, including code running with root privileges, etc. This might well allow you to sandbox a compromised process such that it is less likely to be able to take advantage of kernel bugs in the first place, e.g. by limiting the devices and syscalls the process may access during normal operation.

But the issue I really wanted to draw some attention to is that if an attacker manages to get control over what code is executed in kernel mode - say by directly getting the kernel to execute some injected code by exploiting some bug - then it is literally impossible for an in-kernel security framework to prevent that attacker from completely controlling the machine. There's unfortunately nothing to stop malicious code at kernel level from simply altering any memory content in the machine - this is true for the most trivial device driver to the most fundamental piece of infrastructure.

The way popular OSes are structured and the way popular hardware works gives all kernel-level code equal and complete trust - and therefore the complete freedom to bypass any security checks that it wishes. The only way you might limit this is by having a separate component (e.g. a hypervisor) that is protected from kernel-level code and can enforce some restrictions on what the kernel level code does.

So for the OSes people use nowadays your only option is really to audit the kernel-level code thoroughly (which unfortunately includes most or all device drivers), since without that being secure your whole security framework may be undermined. However, the security framework can be used to mitigate and contain anything that doesn't run in the kernel, i.e. most code on the machine! With different OS architectures and / or new hardware support we might be able to do even better in future.

So I'm agreed that a modern OS should have a powerful policy framework for constraining user-level code and that this is arguably more important than simply auditing user-level code. The nice thing about a security framework as you describe is that it can potentially protect against bugs in applications that you, the OS developer, haven't even heard of let alone audited. And as an administrator it can constrain applications that are provided by a third party or are even closed source, such that you know *exactly* what the application is potentially doing without having to read the source code.

Thanks again for the article and for participating in the discussion.

Reply Parent Score: 3

f0dder Member since:
2009-08-05

The way popular OSes are structured and the way popular hardware works gives all kernel-level code equal and complete trust - and therefore the complete freedom to bypass any security checks that it wishes. The only way you might limit this is by having a separate component (e.g. a hypervisor) that is protected from kernel-level code and can enforce some restrictions on what the kernel level code does.
And even with a hypervisor, you might be vulnerable to DMA-based attacks... at least with the original implementations of x86 VMX.

Reply Parent Score: 1

Mark Williamson Member since:
2005-07-06

And even with a hypervisor, you might be vulnerable to DMA-based attacks... at least with the original implementations of x86 VMX.


True! I forgot about that. So if you were going to use a hypervisor to enforce this sort of thing you also need an IOMMU so it can protect itself from DMA. Modern x86 systems do have / are getting that hardware, though I'm not quite clear who has it now :-S

Reply Parent Score: 2

allthatiswrong Member since:
2010-01-22

Hi Mark!

I really am glad you and others found the article interesting and a good read.

How could I not join in the discussion when good points are being made? The whole point of my article is to get people to discuss and think about the issues I raised. By discussing the issues, hopefully we can all learn along the way.

It is interesting that you bring up making a formal proof of frameworks. SELinux and RSBAC have done exactly this. SELinux is an implementation of the FLASK architecture, and RSBAC of the GFAC architecture, both of which are formally verified.

The main point you make is interesting, and the only real argument I have seen thus far. You are saying that these frameworks will not help against any kernel level bugs..., but I am not sure that this is the case.

These frameworks have all been around for almost 10 years...at least since 2002 or so. In that time, I have set them up and seen them mitigate many vulnerabilities in practice, and at the same time have never heard of an example of them being bypassed by a kernel level vulnerability being exploited.

It is my understanding that while these frameworks are a part of the kernel, they are distinct from other parts of the kernel, so exploiting one bug in the kernel will not allow you to disable or override these frameworks.

I am not even aware of this as a theoretical exploit.

If you could expand and clarify on this, I would be most interested, and would update my article to address this.

Thanks!

Reply Parent Score: 1

Mark Williamson Member since:
2005-07-06

Hi Mark!

I really am glad you and others found the article interesting and a good read.

How could I not join in the discussion when good points are being made? The whole point of my article is to get people to discuss and think about the issues I raised. By discussing the issues, hopefully we can all learn along the way.

Yep :-)

We have some good discussions on OSNews and I think this is one of them!


It is interesting that you bring up making a formal proof of frameworks. SELinux and RSBAC have done exactly this. SELinux is an implementation of the FLASK architecture, and RSBAC of the GFAC architecture, both of which are formally verified.


Ah, cool. I thought FLASK might have been formally verified but couldn't remember specifically. RSBAC I am not familiar with (not that I'm that familiar with SELinux - I have it enabled on my Fedora systems because they make it fairly painless but I don't actually tinker with policy!)


The main point you make is interesting, and the only real argument I have seen thus far. You are saying that these frameworks will not help against any kernel level bugs..., but I am not sure that this is the case.

These frameworks have all been around for almost 10 years...at least since 2002 or so. In that time, I have set them up and seen them mitigate many vulnerabilities in practice, and at the same time have never heard of an example of them being bypassed by a kernel level vulnerability being exploited.

It is my understanding that while these frameworks are a part of the kernel, they are distinct from other parts of the kernel, so exploiting one bug in the kernel will not allow you to disable or override these frameworks.

I am not even aware of this as a theoretical exploit.

If you could expand and clarify on this, I would be most interested, and would update my article to address this.

Thanks!


Well, the reason security frameworks are so powerful is that there's a very strong and well-defined boundary between user mode code and kernel code. The security framework lives in the kernel at the highest level of privilege, which malicious application code, even if running as root, can be prevented from accessing. Normally root can effectively access everything in the machine, including altering the kernel - but actually, since the application code is in userspace, that's only possible if kernel code *allows* root to have these privileges. A security framework that lives in the kernel can restrict userspace code in any way it wants, so even root can be confined securely.

Basically the problem I'm describing is rooted in the fact that modern kernels are horribly lacking in isolation because they're "monolithic", in the sense that all code within them shares the same address space and privilege level. The kernel is basically just a big single program within which all code is mutually trusting and equally privileged. So any code that's part of the core kernel or a loadable driver module actually ends up with privilege to read and alter any part of machine memory, access any device, etc. There's usually some sort of supported ABI / API for driver modules that they *ought* to use but there's not actually a solid protection boundary to stop them from accessing other things.

The in-kernel security frameworks in Linux exist as a Linux Security Module, which is an API boundary but it isn't actually a protection boundary - there's nothing to enforce the correct use of the API. There are well-defined places where other kernel code *should* call into the LSM layer and there are parts of the LSM that the rest of the kernel *shouldn't* touch. But there isn't actually anything in the Linux kernel to stop a malicious bit of in-kernel code from corrupting the LSM's state to make it more permissive, or disabling the LSM hooks so that it no longer gets to make access decisions.

That is, unfortunately, a direct consequence of the fact that most popular OSes are structured monolithically - the same problem will exist on Linux, BSD, Windows, etc. It really is possible for any kernel code to alter any other kernel code, even to alter the security framework. There are some ways in which you could, perhaps, make this harder but I don't think any mainstream systems can actually prevent it.

So, all you need in order to circumvent the security framework is to get some malicious code into the kernel - and, assuming your security framework is solid, that needs there to be a kernel bug of some sort. I don't have an example of an actual in-the-wild bug here but I'm pretty sure one could be found with a bit of digging. Instead, here's an example of the kind of bug that could occur ...

Device drivers are a good source of kernel bugs - there are many of them and not everybody can test them, since some devices are rare. An easy mistake to make would be to have a function like:

void send_data_to_userspace(void *user_buffer)
{
    /* BUG: dereferences a raw userspace pointer with no validation */
    memcpy(user_buffer, my_private_kernel_buffer, BUFFER_SIZE);
}

That code will copy data to a user-provided buffer from an in-kernel buffer. It's incorrect though - kernel code should not access user pointers directly, which won't even work on some CPUs. On x86 it will work perfectly, so testing on that platform will not show any odd behaviour. But this code is *trusting* a pointer from userspace, which could actually point anywhere, including into kernel data structures.

Now suppose your webcam driver contains this bug. You wouldn't expect that allowing access to a webcam would defeat your security framework and you've probably allowed *some* applications to access it. But if the application accessing it supplies a maliciously-crafted pointer it could potentially do anything up to and including disabling the entire framework.

If the bug is in a device driver then you can at least use the security framework to disable access to that device once you get the security advisory. If the bug is in the core kernel (in the past various fundamental system calls have had such exploits) then disabling access might not be an option.

Does that make some sense? I might have overcomplicated things by talking about a hypothetical example but basically there are many classes of bugs that might allow kernel compromise and on modern systems kernel compromise = everything compromised :-(

Anyhow, hope I've helped, thanks again.

Reply Parent Score: 3