In addition to establishing the seL4 Foundation and adding the open-source RISC-V architecture as one of its primary architectures, the seL4 microkernel has seen a lot of work, as well as research into future directions. Among the ambitious research goals is creating a “truly secure, general-purpose OS”. This multi-server OS would be secure, support a range of use cases and security policies, and perform comparably to monolithic systems.
Be sure to flip through the slides of the presentation in question for more information.
Yes, a nice proof-of-concept project with some good features and performance, but on its own it is not enough to guarantee security. I expect a lot of vodka/whiskey/schnapps/bourbon/sake was drunk in windowless rooms after reading this presentation.
Yes, quite.
To me, seL4 is one of the most interesting microkernels in existence. I’ve been following the L4 family on and off for the past 10+ years and had always been surprised that no one had tried to build a general-purpose OS on top of one of the variants. I see that L4Linux is still alive and kicking (https://l4linux.org/). It uses L4Re (the L4 Runtime Environment), but I’m not sure if this works with seL4.
Speaking of the L4 family in general, I’m surprised a bigger push has not been made for IoT applications. It seems to me that microkernels would be perfect for these small and limited devices, and seL4 should probably be at the top of the list. Having a “high-assurance, high-performance” kernel for these applications just sounds like a no-brainer. But I guess it’s just easier to slap Linux on top of everything and let customers worry about the lack of security.
Anyway, I’m pretty sure seL4 is used for military applications, which is why it keeps chugging along without anyone ever paying it any mind. Looking at the list of past and present members is quite telling: General Dynamics, Boeing, Raytheon…
@teco.sb
Because SeL4 is not a truly modern kernel. Oxford and Cambridge have ongoing research into next-generation secure OSes. A kernel also requires some form of userland, which is a security issue of its own. Then there is end use, training, and all the supporting ecosystem. You can’t just drop in a kernel and declare everything secure.
Also don’t forget that GCHQ, the NSA, et al. will be operating behind closed doors, and various influences will be felt which will put any general uptake of a truly secure system in doubt. Then there are compilers and coding practices and real-world use. And not everything which needs to be secure needs a fancy kernel – DOS would do for some tasks. For other tasks, still having a CPU anywhere near it is a bad idea.
Hence:
HollyB,
Do you have a specific criticism of seL4?
It has some fairly compelling proofs of implementation correctness and security enforcement, which is more than most operating systems can boast, “modern” or otherwise.
https://sel4.systems/About/seL4-whitepaper.pdf
IMHO the software industry at large has a lot to learn from these guys.
It’s high assurance only in the sense that flying is not hitting the ground. I’m not reading my way through a white paper because I can’t be bothered. If you have anything specific in mind I’d rather you wrote it up with an appropriate quote. My time is precious – to me. Take it up with them and Oxford and Cambridge. I think you’ll find their list of footnotes is thicker than the white paper. Most of them will boil down to a lot of “Er but” and “on the other hand” and “hope nobody notices this”. If they haven’t done this work someone else in a windowless room most certainly will have.
Like I keep saying:
Go through every line I wrote in the previous comment and think about it instead of skimming past. Examine each one for security issues. Consider the details and system as a whole. Then examine the bigger systems surrounding this. Give it a poke. There’s loads of white papers on this.
It’s the same thing with stealth technology or in fact any other system. All systems leak information. All systems can be usurped. Again, there’s white papers on this.
Again:
HollyB,
Did you have a specific criticism in mind when you wrote your post…? If not, then what do you base your conclusions on?
Those are just the names of two universities – what does that prove, exactly? If you are going to criticize SEL4 in their name, then it’s important to cite specifics, please.
What security issues? Are you talking about 3rd-party software running on top of SEL4? Of course that software isn’t going to be automatically secure. The proofs apply to the kernel. Any software that you want to run on top of it will also need to be verified in addition to the SEL4 microkernel; this should go without saying. However, securing kernels is real progress.
Indeed, there is a lot of research on SEL4, but which specific papers are you referring to? With regard to software leaks, I think you are wrong – SEL4 is one project that set out to prove that secure kernels are possible. In terms of hardware leaks, SEL4 is being used for research on this too. Here’s a paper that goes into hardware-based leaks via timing channels (i.e. Meltdown/Spectre). They suggest it would be possible to fix hardware timing attacks, but we’re at the mercy of hardware manufacturers to cooperate.
https://arxiv.org/pdf/1901.08338.pdf
Folks at Cambridge University worked on proving formal correctness of SEL4 compiled on ARM…
http://talks.cam.ac.uk/talk/index/31068
They’ve also hosted SEL4 speakers to discuss formal verification of larger software engineering projects using SEL4 as a foundation.
http://talks.cam.ac.uk/talk/index/38437
And here’s an Oxford paper suggesting SEL4 and EROS as promising for secure architectures without vulnerabilities.
https://www.cs.ox.ac.uk/files/2690/AOCS.pdf
Researchers at prestigious universities all around the world are finding value in the SEL4 kernel including Oxford and Cambridge, so what are you talking about when you say “Take it up with them and Oxford and Cambridge”?
Sure, formally verifying code isn’t easy. But IMHO this is exactly why more software engineers should be exposed to the techniques used in formally verified projects like SEL4.
I think the main criticism for SEL4 in the past would have been that it doesn’t do enough for general purpose use cases, but that’s exactly what this article is addressing, which is pretty exciting news.
You think that actually matters? The majority of security flaws are related to exploitable weaknesses in hardware (Rowhammer, Meltdown, Spectre). Proving that code (which is buggy because it didn’t anticipate various conditions) matches a mathematical proof (which is also buggy because it failed to anticipate the same conditions) is completely worthless for anything other than marketing hype.
If you look carefully, this is what SeL4 is best at – hype (not security).
They claim it’s the “fastest” microkernel. They lie. It probably does have the lowest “overhead per message”, but that does not translate into performance (e.g. “500 messages at 1,000 cycles per message” is 500,000 cycles in total – no faster than “50 messages at 10,000 cycles per message” achieving the same result); and for modern (multi-CPU) systems it does not translate into scalability either (e.g. one CPU struggling to cope with the overhead of thousands of messages while other CPUs sit idle).
They claim their security features don’t cost performance. They lie. For one example, the presentation (from the article) describes partitioning (specifically, partitioning of the last-level cache to prevent timing side-channel attacks); but what happens when one task needs lots of a shared resource while another task doesn’t (e.g. one task benefits from lots of cache while another barely uses any) is that you get underutilization of the shared resource (e.g. half the cache sitting unused while the task that needs it can’t touch it, because it’s reserved for the task that doesn’t), and that leads to performance disasters.
Another example is capabilities. To prevent brute-force guessing they need to be large (maybe 256 bits) and random/unguessable; and to prevent replay attacks (storing a capability you were given when you needed it, so that you can use it to do naughty things later when you shouldn’t have it) they need to be dynamically generated and revocable (and revoked often, and therefore generated often). That means some kind of cryptographic algorithm (e.g. a secure hash), and all cryptographic algorithms have high overhead, so you’re looking at a frequent need for large extra overhead (even just checking capabilities, without generating or revoking them, is a significant amount of extra overhead).
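(For illustration, here is a minimal sketch of the token-based capability model described above – large, unguessable tokens checked with a MAC on every use. This is an assumed model for the sake of argument, not seL4’s actual design, which uses kernel-protected CSpace indexes as discussed further down; hmac_sha256() is a hypothetical placeholder, not a real library call.)

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t object_id;   /* which resource the token names */
    uint64_t rights;      /* what the holder may do with it */
    uint8_t  mac[32];     /* 256-bit tag making the token unforgeable */
} sparse_cap_t;

/* hypothetical MAC primitive: tag = HMAC-SHA256(key, msg) */
extern void hmac_sha256(const uint8_t key[32], const void *msg, size_t len,
                        uint8_t tag[32]);

static uint8_t system_key[32];  /* secret; rotating it revokes all tokens */

bool cap_check(const sparse_cap_t *cap)
{
    uint8_t expected[32];
    /* every single check pays the cost of a cryptographic hash */
    hmac_sha256(system_key, cap, offsetof(sparse_cap_t, mac), expected);
    /* (a real system would compare in constant time) */
    return memcmp(expected, cap->mac, sizeof expected) == 0;
}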
They also claim “hard real-time, with bounded IRQ latencies”. They lie. It’s impossible to predict the worst-case latency of an IRQ that may have to wait for an indeterminate amount of time while an unknowable number of other (higher-priority) IRQs are being handled. For some CPUs (all modern 80×86) it’s also impossible to determine when the CPU will go into System Management Mode (and how long the firmware will stay in that mode); and for all modern CPUs there are other effects (RAM bandwidth saturation, power/heat management, …); so calculating things like the worst-case “time for a kernel function to return” is completely meaningless.
Brendan,
Absolutely it matters! Proofs are important for robustness and security. We’re one of the few industries that has virtually no engineering standards in place despite the fact that so much of the world runs on our code. If we could get the industry to rally around more tools & training to produce provably correct code, software engineering and indeed the whole world would be better off for it.
It seems like you are complaining about performance. OK. That was never what set SEL4 apart, though. If someone develops proofs for a monolithic kernel, it could well be faster and just as secure. I have nothing against that – if somebody does it, great! 🙂
I’m just tired of dealing with such bad code all the time…even in big successful projects. Maybe this will never change, but I feel we can do better by having CS students learn how to do code proofs and getting used to practicing it themselves. It would put us on a better track for the future.
@brendan
Yup. It’s the same with the Pfizer vaccine: lots of getting their punch in first and media airtime, but they are not the only show in town, nor necessarily the best – for definitions of “best” in a constantly shifting threat landscape. You see the same thing with Intel and Nvidia when they try to rain on someone’s parade the day before a product launch conference.
The people behind SeL4 deserve credit for getting something out the door and making people aware of it, but confidence is not to be confused with competence. As they themselves say – and as a certain someone around here keeps missing from their own presentation slide:
@Alfman
I’m pretty much done with this topic. Read what is written and do your own legwork. I am neither a pot plant for you to gaze at nor your secretary. Read around the subjects and understand the different domains like you didn’t do with digital versus analogue.
HollyB,
What a surprise.
I’ve cited a plethora of papers and there’s tons more in support of SEL4. You’re the one who literally stated “I’m not reading my way through a white paper because I can’t be bothered”. What about you reading what is written and doing your legwork?
I asked you to elaborate on specific SEL4 insecurities, you declined.
You claimed that Oxford and Cambridge disagreed with me, when I asked you for sources, you declined.
You said you would go over papers if I quoted them, which I did and then you declined.
Step back and see just how hypocritical you are being.
Previously you said “As I said my first comment stands as a statement. It wasn’t a discussion and I have washed my hands of the topic”, but I think that cuts to the crux of why these discussions keep devolving. The expectation that your opinion can’t be challenged is neither realistic nor healthy.
I think SEL4 has advanced CS, big whoop. Next time just agree to disagree and we can move on. No big deal.
Seriously give it a try some time 🙂
2003: “We proved it’s secure!”.
2004: “Oh no, someone invented the “blue pill” attack and you got pwned”.
2005: “We proved it’s secure!”.
2006: “Oh no, that motherboard has an SMM vulnerability and you got pwned”.
2007: “We proved it’s secure!”.
2008: “Oh no, that computer has a rowhammer vulnerability and you got pwned”.
2009: “We proved it’s secure!”.
2010: “Oh no, the kernel crashed and nobody knows why. Does your computer’s memory use ECC, or …? Sorry, you got pwned”.
2011: “We proved it’s secure!”.
2012: “Oh no, someone plugged in a malicious Thunderbolt device and you got pwned”.
2013: “We proved it’s secure!”.
2014: “Oh no, there’s a new hyper-threading vulnerability that was just discovered and you got pwned”.
2016: “We proved it’s secure!”.
2016: “Oh no, the CPU has a Meltdown vulnerability and you got pwned”.
2016: “Oh no, the CPU has a Spectre vulnerability (with speculative execution) and you got pwned”.
2016: “Oh no, the CPU has a Spectre vulnerability (with branch prediction) and you got pwned”.
2017: “We proved it’s secure!”.
2017: “Oh no, the CPU has another Spectre vulnerability (with TLBs) and you got pwned”.
2018: “We proved it’s secure!”.
2018: “Oh no, the CPU has another vulnerability (with write-combining buffers) and you got pwned”.
2019: “We proved it’s secure!”.
2020: “Oh no, new motherboards/DDR4 still have the old rowhammer vulnerability and you got pwned”.
But that’s just hardware alone. Did they prove that their proofs aren’t flawed? Of course not (they can’t) – they’re continually improving (fixing the flaws and weaknesses in) their proofs. Did they prove that the missing 2 GiB of stuff that is necessary to make a general-purpose OS (including device drivers, file systems, etc. – everything that isn’t in the deliberately minimized microkernel) all correctly implements all the (likely flawed) design requirements? No, they didn’t, and they can’t do that either.
What their “proofs” really mean is “under ideal circumstances that have been repeatedly shown to have never existed; ignoring the huge number of known existing timing attacks and all “not known yet” vulnerabilities, and ignoring all the software that actually matters (and only caring about an insignificant speck – the micro-kernel alone); we have proven that (if you only have a single CPU and don’t have real hardware) the micro-kernel behaves in the way our (possibly flawed and dodgy) proofs (that need to be constantly fixed) temporarily described at the time”.
No, we’re not. Hollywood also has no engineering standards, because (just like software) there is no way to prove anything in a meaningful way about the quality of a movie. Butchers? Bakers? Candlestick makers? How about proving that people won’t catch a virus and die when they board a cruise ship?
Tell me; what would you think if a bridge engineer slapped a sticker saying “Disclaimer of Warranty: This bridge is licensed as is. You bear the risk of using it.” on every bridge they build? That is exactly what SeL4 does (warranty disclaimers built into GPLv2, etc – their belief in formal verification doesn’t even extend beyond “not fit for any particular purpose”).
No; we’ll just get a false sense of security from paying 3 times as much for software that is no better.
Whoa there. Bad code is a very different issue. A proof only proves that bad code correctly implements a few carefully selected aspects of a bad design; it will not make code good, and will more likely (due to “only 24 hours per day” developer time restrictions) make code worse (time spent on proofs is time that isn’t spent fixing/adding/improving features, performance, compatibility, fault tolerance, security, or anything else that anyone actually cares about).
I’m working on my own (QNX-like) OS based on seL4, and while I’m not yet sure about the overhead of cache partitioning (I am still only working on the memory allocation subsystem), I can say that capabilities shouldn’t impose much overhead at all. seL4 capabilities are nothing but pointers into a CSpace (a hierarchical structure composed of arrays, akin to a page table) and are the size of a normal memory address. They are assigned by user programs rather than the kernel, and no hashing is involved at all.
There is no such thing as “guessing a capability” under seL4, since CSpaces are local to protection domains, can only be manipulated through system calls, and are themselves referenced through capabilities. Determining whether a program is authorized to use a capability is a simple matter of indexing the arrays of the CSpace (which I believe scales linearly with the number of levels in the CSpace, but usually there are relatively few levels) and seeing whether there is a valid object reference at that address. Revoking a capability is just removing the object reference from its address in the CSpace.
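(For the curious, a minimal sketch of what this looks like through the seL4 API, with hypothetical slot numbers – copying and revoking a capability are both plain CSpace-indexing operations, no hashing anywhere:)

#include <sel4/sel4.h>

#define SRC_SLOT  10   /* hypothetical: slot holding some capability */
#define DEST_SLOT 11   /* hypothetical: empty slot to copy it into   */

void copy_then_revoke(void)
{
    /* Copying a cap just walks the CSpace arrays to both slot
       addresses and duplicates the object reference. */
    seL4_Error err = seL4_CNode_Copy(
        seL4_CapInitThreadCNode, DEST_SLOT, seL4_WordBits,
        seL4_CapInitThreadCNode, SRC_SLOT, seL4_WordBits,
        seL4_AllRights);

    /* Revoking deletes every cap derived from the one in SRC_SLOT
       (including the copy above), again by plain CSpace indexing. */
    if (err == seL4_NoError) {
        seL4_CNode_Revoke(seL4_CapInitThreadCNode, SRC_SLOT, seL4_WordBits);
    }
}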
The message size limit isn’t really an issue either. Even though seL4 messages are fixed-size, it is possible to share a buffer of arbitrary size between processes and put long messages in there, so it doesn’t make sense to speak of a limit on how much can be done in a single message.
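(A sketch of that pattern, assuming an endpoint capability in a hypothetical slot EP_CAP and a buffer already mapped into both protection domains – only the (offset, length) pair travels in the fixed-size IPC message:)

#include <sel4/sel4.h>
#include <string.h>

#define EP_CAP 20            /* hypothetical endpoint capability slot */
extern char *shared_buf;     /* mapped into both protection domains   */

void send_long(const void *data, seL4_Word len)
{
    memcpy(shared_buf, data, len);               /* stage the payload      */
    seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 2);
    seL4_SetMR(0, 0);                            /* offset into the buffer */
    seL4_SetMR(1, len);                          /* payload length         */
    seL4_Send(EP_CAP, info);                     /* fixed-size IPC         */
}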
I am a little bit skeptical of the usefulness of formal verification in large dynamic general-purpose OSes though, since it seems like it would be difficult to verify the entire TCB of such systems, even if it’s centralized to only a few components. I’m thinking at some point I may have to switch from seL4 to a fork of it because there may end up being irreconcilable differences, although I won’t do that unless I have to.
Brendan,
Indeed, but you’re talking about hardware vulnerabilities in CPUs, it doesn’t make any sense to blame SEL4 for hardware faults. It’s a software proof that protects us from software errors.
They’ve certainly thought about these things, and I even linked to a paper about this very topic above:
https://arxiv.org/pdf/1901.08338.pdf
When it comes to hardware issues, we’d require cooperation between hardware and software to have proofs that extend into the hardware domain. That’s obviously lacking with proprietary CPU vendors, but with open architectures like RISC-V there’s actually a great opportunity for the hardware ASIC design itself to be validated in addition to the kernel. This would make for an outstanding security platform, the likes of which has not been possible before!
At a physical level, you can always defeat security by tampering with the bus or memory, or even by sanding down the chip’s protective shell and literally probing the data paths, etc. There’s nothing that can be done about a sophisticated adversary with physical access. The proofs we’re talking about exist in the software domain, where such proofs are possible, and that’s where SEL4 is making strides.
It may take a long time before open CPUs become commonplace, but when they do I think that SEL4 has an extremely good shot at being the first to validate both hardware and software.
I was thinking more of engineering industries, such as bridge-building, airplane manufacturing, etc. But even so, I actually do think many of the industries you point to are more regulated than software: unions, regulations, standards bodies certifying technical compliance to standards like Dolby and DTS, FDA agencies to inspect & recall food, etc.
That was exactly my point, actually. Unlike most other industries, software comes with no guarantees that it’s fit for purpose. This isn’t specific to the licenses that SEL4 uses; the entire software industry has these disclaimers. At least with free software it kind of makes sense: you paid nothing, you get no guarantees. But with commercial software it’s particularly lousy that you’re footing the bill, often still with no guarantees 🙁
Realistically the transition costs for large code bases would be enormous and probably infeasible… the same reason C isn’t going away any time soon. But developers who were trained in it from the outset would do much better at incorporating proofs into their natural workflow; it would be second nature to them. It’s possible the reduction in bugs could pay for itself. It’s hard to say for sure, but it would make for a very interesting research topic. 🙂
Well, all code that fails to do what it’s designed to do is bad code. Good code does what it’s designed to do, and proofs do in fact help with this. But I concede “good code” and “bad code” can have very different meanings in various contexts and that I need to be more careful how I say it.
@Brendan
Yep. If Alfman actually read what is written he would know that software proofs of security are not actually proofs of anything. As for the essentially uncheckable bloat which surrounds this holy wonder of a kernel? That’s before you even get into hardware issues.
Why I would want to spend my time reading a mountain of dreary whitepapers when I don’t want to nor have to, I have no idea – especially when he can’t pay attention to a single paragraph. It’s the old “take a perfect sphere and roll it in a straight line” problem, and I’m not minded to go chasing that ball.
I mean, “secure general-purpose operating system”? That’s the first red flag. There’s no such thing, and I listed all the main domain areas right at the beginning – hence what they say themselves in their own slide presentation:
@Alfman
Yes. It’s called snake oil. Please do listen very carefully and pay attention. The staff. The engineers. The scientists. Standards committees. Oversight. The software that checks the software. The hardware that checks the hardware. The software that builds the hardware that builds the hardware. The list goes on. Okay, now human beings are essentially points of weakness for compromise, as are the basic circuits; you are now down to a tank of water, assorted chemicals, a few buckets of sand, and some scrap iron. Because this is where this discussion ends up.
In any case a discrete system is just one element of a bigger problem.
I think those people knocking back whiskey/vodka/bourbon/sake in windowless rooms just sent out for pizza.
The overhead (in terms of “time taken to allocate pages”) should be zero – it’s enough to have a different allocator for each color (see the sketch after this list). The problems are:
a) underutilization of cache/s (e.g. 8 cores sharing the LLC, where a task that benefits from lots of cache is limited to only being able to use 12.5% of the cache even when all other CPUs are idle and not using the cache at all)
b) CPU migration/load balancing (e.g. if a task was using a CPU that became overloaded and other CPUs are idle or lightly loaded; do you do nothing/let the task continue trying to use the already overloaded CPU and have poor performance; or do you reallocate all of the task’s pages when migrating the task to a different CPU to maintain the partitioning and have a huge amount of overhead reallocating all pages; or do you migrate it without reallocating pages and break the partitioning/security?).
c) complexity. Note that it isn’t just one “last-level cache”; it’s all the caches (L1, L2, …), plus TLBs, plus higher-level translation caches (plus branch prediction structures, plus …); with deliberate “address hashing” done on some CPUs (Sandy Bridge and later) to make it difficult to determine the relationship between an address and its location in the cache (which is likely to ensure that partitioning one cache makes it impossible to partition another). On top of that, for a real kernel you have to care about things like shared memory (even if it’s just “physical pages containing a file’s data in the VFS cache are mapped into multiple different processes as read-only”), where the entire partitioning idea breaks down (because the same physical page that’s mapped into 2 or more processes cannot be in 2 or more partitions).
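(The zero-overhead color-indexed allocator mentioned before the list could look something like this minimal sketch, assuming a physically indexed LLC where the color is a simple function of the frame number – which, per point c, does not hold on CPUs that hash the cache index:)

#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define NUM_COLORS 16   /* assumption: cache size / (ways * page size) */

typedef struct page { struct page *next; uintptr_t phys; } page_t;

static page_t *free_list[NUM_COLORS];   /* one free list per color */

static unsigned color_of(uintptr_t phys)
{
    return (phys >> PAGE_SHIFT) % NUM_COLORS;
}

void page_free(page_t *pg)
{
    unsigned c = color_of(pg->phys);
    pg->next = free_list[c];
    free_list[c] = pg;
}

/* Allocating costs the same as an ordinary free-list pop, so the
   partitioning itself adds no per-allocation overhead. */
page_t *page_alloc(unsigned color)
{
    page_t *pg = free_list[color % NUM_COLORS];
    if (pg != NULL)
        free_list[color % NUM_COLORS] = pg->next;
    return pg;   /* NULL if this color's pool is exhausted */
}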
Hrm – that’s different from what I was thinking. In this case (pointers/indexes to data kept in kernel space): if there are 1 million tasks and I have 1 million capabilities, what happens if I give all of my capabilities to all other tasks? Does the kernel crash (because I tricked it into consuming ~16 GiB of RAM), or does the kernel reach some kind of quota and fail to let other tasks have more capabilities? In either case, how is it not a denial-of-service attack waiting to happen?
For more fun: what if I use N threads that all create the maximum number of CNodes (at the maximum depth), then use “seL4_CNode_Rotate()” with carefully selected capability slots designed to trick the kernel into thrashing the living daylights out of the caches (which are already crippled via partitioning)? Can I ruin IRQ latency with all the cache-miss (and TLB-miss?) overhead I cause while the kernel is running with IRQs disabled?
Yeah, shared buffers are a security disaster: typically the receiver validates the data in the shared buffer, then the attacker modifies the data after it was validated, then the receiver uses the malicious data without knowing it was modified after validation. To guard against this, the receiver can copy the data to a private area before it’s validated (so it can’t be changed by the sender after validation); but that causes a performance problem (the overhead of copying), and most programmers won’t remember to do it (and/or won’t want to, due to the extra memory consumption and/or overhead). Shared memory would also break “cache partitioning”, so that’s nice, I guess.
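(A sketch of that “copy before validating” defence against the double-fetch/TOCTOU race described above; the types and names here are illustrative, not any real API:)

#include <stdbool.h>
#include <stddef.h>

typedef struct { size_t len; char payload[256]; } msg_t;

static bool validate(const msg_t *m)
{
    return m->len <= sizeof m->payload;
}

/* Validating the shared copy directly would be racy: the sender could
   rewrite len between the check and the use. Snapshot it first. */
bool receive(const volatile msg_t *shared, msg_t *priv)
{
    const volatile char *src = (const volatile char *)shared;
    char *dst = (char *)priv;
    for (size_t i = 0; i < sizeof *priv; i++)
        dst[i] = src[i];        /* private snapshot; costs one copy   */
    return validate(priv);      /* the sender can no longer interfere */
}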
Large/arbitrary-sized messages give you the ability to do an “atomic move” of physical RAM between virtual address spaces; this avoids the security problem (only one task can access the contents at any point in time) while also avoiding the overhead of copying (and while allowing for network transparency, if you’re into that).
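(Conceptually, something like the following – both helpers are hypothetical placeholders, not a real kernel API; the point is that the frames leave the sender before they reach the receiver, so the data is never shared and never copied:)

#include <stddef.h>

typedef unsigned long frame_t;

/* hypothetical: detach the frames backing [vaddr, vaddr+len) from a task */
extern size_t vm_detach(int sender, void *vaddr, size_t len, frame_t *out);
/* hypothetical: map previously detached frames into another task */
extern void *vm_attach(int receiver, const frame_t *frames, size_t count);

void *move_message(int sender, int receiver, void *vaddr, size_t len)
{
    frame_t frames[64];
    size_t n = vm_detach(sender, vaddr, len, frames); /* sender loses access   */
    return vm_attach(receiver, frames, n);            /* receiver gains access */
}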
Note that I’m not/wasn’t complaining about the performance; I’m complaining about the hype/lies/false claims (e.g. that none of the security costs any performance). If they claimed “significantly more secure, with a relatively small increase in overhead” it’d be fine (likely accurate enough, and not blatant false advertising). I can even guess how their claims started: they measured IPC latency alone using irrelevant micro-benchmarks and found that IPC latency is comparable to (and sometimes better than) system call costs in some monolithic kernels; but over time “low IPC latency” slowly morphed into other forms (“fast IPC” implying more than latency alone, then “fast security”, then “no overhead for any security at all”) until it became pure unsubstantiated bullshit.
My problem with formal verification is that its advocates try to make it sound like a silver bullet that will cure everything from social-engineering attacks to undiscovered hardware vulnerabilities. I have never seen any evidence that it actually finds more problems than (e.g.) a combination of normal static analysis and unit testing. If you read their whitepapers, it looks like half of what they’re using it for is detecting “undefined behavior in C” (NULL pointer dereferencing? Signed integer right shift? Seriously?) that wouldn’t exist in better-designed programming languages to begin with, and ensuring safe coding practices were actually followed (e.g. that parameters from user space are validated before use in kernel system calls).
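(For reference, the C-level behaviours being referred to look like this – each one something the proofs must explicitly rule out:)

int examples(int *p, int x)
{
    int a = *p;      /* undefined if p is NULL: the proof must show p != NULL */
    int b = x >> 1;  /* implementation-defined result for negative x          */
    int c = x + a;   /* undefined on signed overflow                          */
    return b + c;
}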
To be fair; half of what I’ve been complaining about (relying on false assumptions about the correctness of hardware, inability to prove anything about timing side-channels, etc) they do freely admit in their white papers. It’s just the material intended for a wider audience (the general public) that leaves an exaggerated/misleading impression of the benefits (and fails to mention the high developer time costs).
HollyB,
Talking to me through Brendan is passive-aggressive. Here’s the thing. Brendan’s opinion is informed and I respect him for it. He doesn’t ignore my points; he responds to them rationally. He’s not vengeful and doesn’t attack me as a person. Even if we disagree on things, I know he’s sharp and I trust he knows what he’s talking about, because it comes through in all his posts. He’s intellectually stimulating – I’m really glad people like him are here. He’s the type of person we need to keep OSNews discussions interesting and friendly. I genuinely appreciate him and I’d like to think he appreciates me. That sounds sappy, but oh well 🙂
I’ve had my share of overheated debates, but I still make an effort not to hold grudges. I’d much rather have a civil conversation, and I’d much rather have that with you too. However, you don’t really appear interested in civil conversations; you never supply sources to back your views, you act as though everything is beneath you (i.e. “I can’t be bothered”), you won’t answer questions, every post is cop-out after cop-out, and you’re constantly blaming me (and anyone whose opinion you disagree with) for something completely hypocritical. You’re always free to disagree, but you don’t have to be so disagreeable. Look at Brendan’s posts: they have real substance and he puts thought into them, not throwaway lines.
Why is it snake oil? For the Nth time: do you have a SPECIFIC criticism of SEL4’s security such that it deserves to be called snake oil? Can you justify that, or is it just another throwaway line before you “wash your hands” of the topic?
This is where mathematical proofs come into play. They’re not a panacea, for the simple reason that mathematical proofs only cover the logical aspects and not the physical implementation, which can have its own vulnerabilities. You’d still need traditional auditing to make sure that the code & logic you’ve proved is actually what’s being used. But where proofs do add value is in providing strong guarantees that the code you are provisioning does not contain its own faults. The software industry at large is notorious for faulty & vulnerable code, and that’s where proofs would really help. Are they going to solve things like social engineering? No, of course not, but that’s not a reason to avoid proofs for the software itself.
@Brendan
The seL4 kernel never allocates anything after it has finished setting up the initial thread and its address space. User programs have to allocate kernel objects instead (under UX/RT, only the root server will be responsible for allocating kernel objects), so any denial-of-service vulnerabilities would be in user-mode code, not the kernel.
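(A minimal sketch of how user code creates a kernel object under seL4 – by retyping untyped memory it holds a capability to; the slot numbers here are hypothetical:)

#include <sel4/sel4.h>

#define UNTYPED_CAP 5    /* hypothetical: cap to a region of untyped memory */
#define DEST_SLOT   30   /* hypothetical: free CSpace slot for the new cap  */

void make_endpoint(void)
{
    seL4_Error err = seL4_Untyped_Retype(
        UNTYPED_CAP,
        seL4_EndpointObject,             /* type of object to create        */
        0,                               /* size_bits (fixed-size object)   */
        seL4_CapInitThreadCNode, 0, 0,   /* destination CNode, index, depth */
        DEST_SLOT, 1);                   /* destination offset, one object  */
    /* If the untyped region is exhausted this fails -- the kernel itself
       never allocates, so there is nothing of the kernel's to exhaust. */
    (void)err;
}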
UX/RT will avoid TOCTTOU vulnerabilities for the most part by having only connected, file-based IPC and by checking permissions only on open() for most APIs (if parts of an API need separate permissions, they will normally be on separate files). Where permissions need to be checked on every message (as in the implementation of open() itself), the message will be copied out of the shared buffer; simply using the traditional Unix read() instead of the zero-copy equivalents will be sufficient to ensure this (the traditional and zero-copy APIs will be fully interoperable).
The issue with in-kernel message copying is that it complicates page-fault handling significantly. What if the kernel incurs a page fault while it’s copying the message? It is possible for it to call whatever process is responsible for paging, but doing so adds significant complexity. UX/RT will copy messages in the IPC transport-layer library instead (any message that is too big for the shared buffer will be split up). Network transparency should still be possible with user-mode rather than kernel-mode copying.
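(A sketch of that user-mode splitting; the names here are hypothetical, not the real UX/RT API:)

#include <stddef.h>
#include <string.h>

#define SHARED_BUF_SIZE 4096
extern char *shared_buf;                   /* mapped into both tasks         */
extern void ipc_signal_chunk(size_t len);  /* hypothetical fixed-size IPC op */

void transport_send(const char *msg, size_t len)
{
    while (len > 0) {
        size_t n = len < SHARED_BUF_SIZE ? len : SHARED_BUF_SIZE;
        memcpy(shared_buf, msg, n);   /* the copy happens in user mode      */
        ipc_signal_chunk(n);          /* tell the receiver a chunk is ready */
        msg += n;
        len -= n;
    }
}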
That I’m not quite sure about because I haven’t gotten to the point where I can test that yet.
@Alfman
Read what has been written and comprehend it. Read what Brendan wrote and comprehend it. Until then you’re going Dunning-Kruger on the security topic. The people who produced seL4 and their auditors know there are weaknesses in their work, but you’re just listening to the headline. Back your ego up and start again.
What I wrote, beginning from the first paragraph in the first comment, and the following two comments, is the condensed knowledge of people who are experts in the topic. They are people who have been working with security and systems for years, and who know or have worked with some of the people I have already listed. The grand claim of a super-handwavy secure system you make (“if only…”) is what they routinely call “snake oil”. I’d rather take the word of people who know these systems from the ground up, work with them on a frequent basis, know security inside and out, and have a proven track record in the field, than yours – no matter how deep your single-domain expertise is.
Again, basic systems theory: all systems leak information, all systems can be usurped. The proofs are nothing of the sort, because such proofs are mathematically impossible. Yes, you can have a “five nines” level of security, but this is still not secure. The people currently playing a drinking game off your comments in windowless rooms have now slid under the table. These are the people paid to exploit the 0.00009% opportunity.
You really think Oxford and Cambridge, or any of the IHVs you mentioned, are spilling the beans? Don’t be daft. seL4 may be secure in some contexts, but that is about it. It is not today, and never will be, secure in and of itself, and the proofs are bunk. The people who wrote the proofs know they are bunk. Don’t believe me? Phone them up and ask them yourself.
Security is a very difficult topic – like nailing a tent down in the wind.
Sigh…
HollyB,
This far into the conversation and you still have not identified an SEL4-specific insecurity. If auditors found weaknesses, then fine – back that up with a reference.
All hearsay, you’ve got to quote them!!!
Says you… where’s your proof that SEL4’s proofs are wrong or mathematically impossible? If there is a vulnerability somewhere in the system, it’s not going to be in the verified SEL4 portion of it. If you disagree, then for God’s sake provide some kind of evidence, a link, or something! Otherwise it’s just you trash-talking.
We already have a wealth of papers on the topic – I’ve read a few – but your position has been so uninformed that I doubt you’ve read a single one, since you’re always gloating that you “can’t be bothered” to read whitepapers. I actually think it’d be pretty cool to invite someone from the SEL4 project to speak here. We’ve had some nice guests in the past, but presently I feel the lack of professionalism here is a shame.
@Alfman
Ask them yourself! I’m not squabbling online with a non-expert.
HollyB,
I already agree with them. It’s your argument that’s on the line, but whatever. Have a good day.
Yes, I was disappointed when I read that Fuchsia didn’t use any of the research that has been done in the last 20 years.
jgfenix,
It would have been very interesting to see that, although I think Google probably wants full ownership of the code so they can set their own terms.
There’s already Genode, which can use seL4, and if I remember correctly you can write servers/components in SPARK, so you can have a verified microkernel and verified servers.
jgfenix,
I would agree on using a safer language to write userspace programs (granted, with formal proofs the usual human errors won’t occur, but it still kind of feels inappropriate to use C, haha).
I’m not really familiar with SPARK though, what is your experience with it?
https://en.wikipedia.org/wiki/SPARK_(programming_language)
I wish all of these “verification conditions” were verifiable at compile time in all languages; it would go a very long way towards eliminating common faults, especially in unmanaged languages – as would eliminating nulls and segfaults on pointer access.
@Alfman
There are safer languages but no safe languages. There are still major gotchas in all of them. SPARK has its well-known issues. Better, not perfect.
It’s slightly orthogonal to the topic, but “Microsoft To Add ‘Nation-State Activity Alerts’ To Defender for Office 365” (Slashdot). I’m curious what this much-hyped, trendy functionality is actually based on. Certain types of attacks have a fingerprint which can typically be attributed to a high-performance attacker (usually a nation state, but well-resourced criminal organisations have this capability too). Then there are small attacks where attribution is difficult, as “nation state” attackers hide behind random criminals and script kiddies and vice versa: every side pretending to be every other side, and in some cases appropriating each other’s software tools or using the same exploits. At the edge, use of a highly complex zero-day doesn’t imply a state-level attacker, although it can. Some attacks will require monitoring of the network to detect, which is outside the scope of Microsoft’s operations. Yes, Microsoft have some capable network intrusion detection software and operations, and yes, Microsoft are exposing this via the Office 365 dashboard, but hype and reality aren’t always the same thing.
So, yes: “Microsoft To Add ‘Nation-State Activity Alerts’ To Defender for Office 365”. [For lists of “Er” and “but” and “well, um”, see footnotes 1–7482.]
Or you could just not use products like Office 365, or the cloud, or an internet connection when you don’t need to, if you’re that worried.
US domestic politics, I’d imagine. They won’t ever point the finger at China’s MSS or Israel’s Mossad.