Linked by Thom Holwerda on Thu 21st Jan 2010 19:24 UTC, submitted by Anonymous
OpenBSD "OpenBSD is widely touted as being 'secure by default', something often mentioned by OpenBSD advocates as an example of the security focused approach the OpenBSD project takes. Secure by default refers to the fact that the base system has been audited and considered to be free of vulnerabilities, and that only the minimal services are running by default. This approach has worked well; indeed, leading to 'Only two remote holes in the default install, in a heck of a long time!'. This is a common sense approach, and a secure default configuration should be expected of all operating systems upon an initial install. An argument often made by proponents of OpenBSD is the extensive code auditing performed on the base system to make sure no vulnerabilities are present. The goal is to produce quality code as most vulnerabilities are caused by errors in the source code. This a noble approach, and it has worked well for the OpenBSD project, with the base system having considerably less vulnerabilities than many other operating systems. Used as an indicator to gauge the security of OpenBSD however, it is worthless."
Mark Williamson
Member since:
2005-07-06

Hi Mark!

I really am glad you and others found the article interesting and a good read.

How could I not join in the discussion when good points are being made? The whole point of my article is to get people to discuss and think about the issues I raised. By discussing the issues, hopefully we can all learn along the way.

Yep :-)

We have some good discussions on OSNews and I think this is one of them!


It is interesting that you bring up making a formal proof of frameworks, for SELinux and RSBAC have done exactly this. SELinux is an implementation of the FLASK architecture, and RSBAC of the GFAC architecture, both of which are formally verified.


Ah, cool. I thought FLASK might have been formally verified but couldn't remember specifically. RSBAC I am not familiar with (not that I'm that familiar with SELinux - I have it enabled on my Fedora systems because they make it fairly painless but I don't actually tinker with policy!)


The main point you make is interesting, and it is the only real argument I have seen thus far. You are saying that these frameworks will not help against kernel-level bugs, but I am not sure that this is the case.

These frameworks have all been around for almost ten years - since 2002 or so. In that time, I have set them up and seen them mitigate many vulnerabilities in practice, and in the same period I have never heard of an example of them being bypassed by exploiting a kernel-level vulnerability.

It is my understanding that while these frameworks are a part of the kernel, they are distinct from other parts of the kernel, so that exploiting one bug in the kernel will not give you access to disable or override these frameworks.

I am not even aware of this as a theoretical exploit.

If you could expand and clarify on this, I would be most interested, and would update my article to address this.

Thanks!


Well, the reason security frameworks are so powerful is that there's a very strong and well-defined boundary between user-mode code and kernel code. The security framework lives in the kernel at the highest privilege level, where malicious application code, even if running as root, can be prevented from touching it. Normally root can effectively access everything on the machine, including altering the kernel - but since application code runs in userspace, that's only possible if kernel code *allows* root to have those privileges. A security framework that lives in the kernel can restrict userspace code in any way it wants, so even root can be confined securely.

Basically the problem I'm describing is rooted in the fact that modern kernels are horribly lacking in isolation because they're "monolithic", in the sense that all code within them shares the same address space and privilege level. The kernel is basically just a big single program within which all code is mutually trusting and equally privileged. So any code that's part of the core kernel or a loadable driver module actually ends up with privilege to read and alter any part of machine memory, access any device, etc. There's usually some sort of supported ABI / API for driver modules that they *ought* to use but there's not actually a solid protection boundary to stop them from accessing other things.
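To illustrate, here's a minimal sketch of a kernel module exercising that "privilege to read and alter any part of machine memory". The address and variable name are hypothetical, but nothing stops a loaded module from doing exactly this:

#include <linux/module.h>
#include <linux/init.h>

/* Hypothetical address of some other subsystem's state - illustrative only. */
static unsigned long *other_subsystem_state =
        (unsigned long *)0xffffffff81234560UL;

static int __init demo_init(void)
{
        /* Same address space, same privilege level: no protection
           boundary stops one piece of kernel code scribbling on
           another's data. */
        *other_subsystem_state = 0;
        return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");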

The in-kernel security frameworks in Linux plug in through the Linux Security Modules (LSM) framework, which is an API boundary but not actually a protection boundary - there's nothing to enforce correct use of the API. There are well-defined places where other kernel code *should* call into the LSM layer, and there are parts of the LSM that the rest of the kernel *shouldn't* touch. But there isn't actually anything in the Linux kernel to stop a malicious bit of in-kernel code from corrupting the LSM's state to make it more permissive, or from disabling the LSM hooks so that it no longer gets to make access decisions.
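To make that concrete, here's a rough sketch of what a hook call site looks like - the names are hypothetical, not the actual Linux source:

struct file;                                            /* opaque, for the sketch */
int security_hook_file_open(struct file *f, int mask);  /* LSM-style hook */
int do_real_open(struct file *f);                       /* the real work */

int vfs_open_sketch(struct file *f, int mask)
{
        /* Ask the loaded security module for a policy decision. */
        int err = security_hook_file_open(f, mask);
        if (err)
                return err;     /* framework denied the access */

        /* Nothing *enforces* this call sequence: compromised kernel code
           can skip the hook, or overwrite the function pointer behind it,
           because it lives in the same address space at the same
           privilege level. */
        return do_real_open(f);
}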

That is, unfortunately, a direct consequence of the fact that most popular OSes are structured monolithically - the same problem will exist on Linux, BSD, Windows, etc. It really is possible for any kernel code to alter any other kernel code, even to alter the security framework. There are some ways in which you could, perhaps, make this harder, but I don't think any mainstream systems can actually prevent it.

So, all you need in order to circumvent the security framework is to get some malicious code into the kernel - and, assuming your security framework itself is solid, that requires a kernel bug of some sort. I don't have an example of an actual in-the-wild bug to hand, but I'm pretty sure one could be found with a bit of digging. Instead, here's an example of the kind of bug I once saw, and how it could play out ...

Device drivers are a good source of kernel bugs - there are many of them, and not everybody can test them, since some devices are rare. An easy mistake to make would be to have a function like:

/* Buggy: writes kernel data straight through a user-supplied pointer. */
void send_data_to_userspace(void *user_buffer)
{
        memcpy(user_buffer, my_private_kernel_buffer, BUFFER_SIZE);
}

That code copies data from an in-kernel buffer to a user-provided buffer. It's incorrect though - kernel code should not dereference user pointers directly, and on some CPUs that won't even work. On x86 it works perfectly, so testing on that platform will show no odd behaviour. But this code is *trusting* a pointer from userspace, which could actually point anywhere, including into kernel data structures.
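For comparison, here's a sketch of the correct Linux idiom, reusing the hypothetical buffer and size from the example above: copy_to_user() validates that the destination really lies in the calling process's address space before writing anything.

#include <linux/uaccess.h>   /* copy_to_user() */
#include <linux/errno.h>     /* EFAULT */
#include <linux/types.h>     /* ssize_t */

ssize_t send_data_to_userspace(void __user *user_buffer)
{
        /* copy_to_user() returns the number of bytes it could NOT copy,
           so nonzero means the pointer was bad (or hostile). */
        if (copy_to_user(user_buffer, my_private_kernel_buffer, BUFFER_SIZE))
                return -EFAULT;   /* refuse, instead of scribbling on memory */
        return BUFFER_SIZE;
}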

Now suppose your webcam driver contains this bug. You wouldn't expect that allowing access to a webcam would defeat your security framework, and you've probably allowed *some* applications to access it. But if an application accessing it supplies a maliciously crafted pointer, it could potentially do anything up to and including disabling the entire framework.

If the bug is in a device driver then you can at least use the security framework to disable access to that device once you get the security advisory. If the bug is in the core kernel (in the past various fundamental system calls have had such exploits) then disabling access might not be an option.

Does that make some sense? I might have overcomplicated things by talking through a hypothetical example, but basically there are many classes of bug that might allow kernel compromise, and on modern systems kernel compromise = everything compromised :-(

Anyhow, hope I've helped, thanks again.

Score: 3

allthatiswrong
Member since:
2010-01-22

Hi Mark,

Sorry for the delayed reply.

You do have a really good point, and I will update my article to address this.

This may also be of interest.

Basically, we can add separate protections into the kernel, and audit the kernel as much as possible to try and prevent things like this.

I am glad to say that I am not aware of any real examples of someone breaking these systems through kernel vulnerabilities, and things like PaX really help here.

As you say though, the advantages MAC provides cannot be denied, and this weak point does nothing to diminish the technology.

Cheers

Score: 1

f0dder
Member since:
2009-08-05

There isn't really anything you can do to guard against malicious kernel-mode code. Sure, you could map the security subsystem (and related kernel structures) as read-only... the malware would just remap it as writable.

OK, so you install checks in the kernel APIs related to page-table manipulation (there are probably more of them than you expect) and disallow deprotecting. So the malware just accesses the page tables directly, which you can't guard against.
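To underline how little read-only mappings buy you against ring0 code, here's a sketch of one classic x86 trick (illustrative only): kernel-mode code doesn't even need to touch the page tables, because it can simply switch off the CPU's write-protect enforcement.

/* x86, GCC inline asm; illustrative only.  CR0.WP is bit 16; while it
   is clear, ring0 writes ignore page-level write protection entirely. */
static void write_through_readonly_pages(void)
{
        unsigned long cr0;

        asm volatile("mov %%cr0, %0" : "=r"(cr0));
        asm volatile("mov %0, %%cr0" : : "r"(cr0 & ~(1UL << 16))); /* WP off */

        /* ... overwrite the "protected" security structures here ... */

        asm volatile("mov %0, %%cr0" : : "r"(cr0));                /* WP back on */
}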

You can add a lot of mitigation, including randomized addresses for kernel structures... you could even go microkernel with separate address spaces for each module... but as soon as a piece of malware gains ring0, none of this is failsafe.

The only way to gain some safety against a ring0 breach is through a hypervisor, but those are fairly complicated to design (i.e., there's the possibility of bugs that could let malware break out) - and simply wrapping an existing kernel in a hypervisor isn't a silver bullet either... a well-designed hypervisor can insulate running OS instances from each other, but it still won't automagically insulate a specific OS instance from ring0 malware.

So ideally, you'd want an OS that's highly hypervisor-aware and requires the use of hypervisor functions to manipulate address mappings (and probably a whole bunch of other things) - only then can pieces of a kernel instance be properly insulated from one another. And there are a lot of subtle pitfalls that might still ultimately let a piece of malware trick the legitimate pieces of kernel code into invoking the hypervisor on its behalf.
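For what it's worth, Xen's paravirtualised MMU works roughly this way: the guest kernel gets no writable mapping of its own page tables and has to ask the hypervisor to change them. A simplified sketch modelled on Xen's mmu_update hypercall (details abridged):

#include <stdint.h>

/* Modelled on Xen's interface; simplified for illustration. */
struct mmu_update {
        uint64_t ptr;   /* machine address of the PTE to update */
        uint64_t val;   /* proposed new PTE contents */
};
int HYPERVISOR_mmu_update(struct mmu_update *reqs, unsigned int count,
                          unsigned int *done, uint16_t domid);

int remap_page(uint64_t pte_machine_addr, uint64_t new_pte)
{
        struct mmu_update req = { .ptr = pte_machine_addr, .val = new_pte };

        /* The hypervisor validates each request against its own
           bookkeeping before applying it, so even compromised ring0
           guest code can't forge a mapping - exactly the kind of
           enforced boundary a monolithic kernel lacks internally. */
        return HYPERVISOR_mmu_update(&req, 1, NULL, 0x7FF0 /* DOMID_SELF */);
}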

Score: 1