CheriBSD is a Capability Enabled, Unix-like Operating System that extends FreeBSD to take advantage of Capability Hardware on Arm’s Morello and CHERI-RISC-V platforms. CheriBSD implements memory protection and software compartmentalization features, and is developed by SRI International and the University of Cambridge.
↫ CheriBSD website
This obviously raises the question – what exactly is CHERI? The FreeBSD Foundation has an article about this from 2023 providing more details.
CHERI extends existing architectures (Armv8-A, MIPS64 (retired), RISC-V, and x86_64 (in development)) with a new hardware type, the CHERI capability. In CHERI systems, all access to memory is via CHERI capabilities either explicitly via new instructions or implicitly via a Default Data Capability (DDC) and Program Counter Capability (PCC) used by instructions with integer arguments. Capabilities grant access to specific ranges of (virtual, or occasionally, physical) memory via a base and length, and can further restrict access with permissions, which are compressed into a 128-bit representation (64-bits for the address and 64-bits for the metadata). In memory and in registers, capabilities are protected by tags that are cleared when the capability data is modified by a non-capability instruction or if a capability instruction would increase the access the capability grants. Tags are stored separately from data and cannot be manipulated directly.
↫ Brooks Davis
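For a rough idea of what this looks like from the programmer’s side, here is a small illustrative sketch in plain C, assuming a CHERI toolchain targeting the pure-capability ABI used for CheriBSD’s memory-safe packages (treat it as a sketch rather than tested code):

/* Illustrative only: assumes a CHERI pure-capability toolchain,
 * e.g. CheriBSD on Morello. On conventional hardware this is just
 * an ordinary, possibly silent, heap overflow. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);   /* the pointer is a capability bounded to the allocation */
    if (buf == NULL)
        return 1;

    strcpy(buf, "hello");     /* in bounds: behaves as usual */
    printf("%s\n", buf);

    buf[16] = 'X';            /* one byte past the end: this exceeds the capability's
                                 bounds, so CheriBSD delivers SIGPROT instead of letting
                                 the store corrupt adjacent memory */
    free(buf);
    return 0;
}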
CheriBSD brings this capability to anyone with compatible hardware, providing access to about 10000 pre-built memory-safe packages alongside more than 260000 pre-built memory-unsafe packages, as well as fully memory-safe versions of the KDE desktop, bhyve, and a ton of others. You can use both types of packages alongside one another, there’s a nice installer, and it basically seems like you’re using regular FreeBSD, just with additional complications, the biggest of which is, of course, the limited hardware support.
I have a feeling that if you’re the kind of person to own CHERI-enabled hardware, you’re most likely already aware of CheriBSD. Still, if this is something you’re looking for, be aware that you’re going to need special hardware. It’s also important to note that DTrace won’t work on CheriBSD, and most optional modules, like firewall systems, don’t work either.

I think we have talked about this before.
CHERI is essentially a C ABI which adds “tagged pointers”. It allows much safer memory access, with built-in memory safety checks, similar to Rust or other safe languages.
With a bonus of being able to run existing software with no or minimal modifications.
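As a rough sketch of what “no or minimal modifications” tends to mean in practice: much of the porting work comes down to code that launders pointers through plain integers, which strips the capability metadata. Assuming a CHERI pure-capability compiler (the names here are just illustrative):

/* Sketch of the most common CHERI porting issue: round-tripping a
 * pointer through a plain 64-bit integer loses the tag and bounds.
 * Assumes a CHERI pure-capability toolchain. */
#include <stdint.h>
#include <stdio.h>

static int value = 42;

int main(void)
{
    int *p = &value;

    /* Bad: a plain long holds only the 64-bit address, so the
     * capability's tag, bounds and permissions are lost; the pointer
     * recreated from it is untagged and faults if dereferenced on CHERI. */
    long addr = (long)(uintptr_t)p;
    int *broken = (int *)(uintptr_t)addr;

    /* Good: uintptr_t is capability-sized under CHERI C, so the whole
     * capability survives the round trip. */
    uintptr_t cap = (uintptr_t)p;
    int *ok = (int *)cap;

    printf("%d\n", *ok);   /* works on CHERI and ordinary hardware alike */
    (void)broken;          /* dereferencing 'broken' would trap on CHERI  */
    return 0;
}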
There is also a CHERI Linux:
https://cheri-linux.org/
And it is done with RISC-V hardware extensions (or some arm64 boards)
The downside? It does not work on “current” regular hardware; like old LISP machines, it depends on hardware tagging. However, there are alternatives like Fil-C that use efficient optimizations, LLVM capabilities, and some hardware tricks to get near-native performance. (It is also another C “ABI”.)
Anyway,
I like the new experimental designs. We are finally moving away from “let’s build the hardware that runs C as fast as possible, who cares about security?”
sukru,
I’m still not happy with it because hardware tagging still remains worse than actually having memory safe languages. For one, tagging is probabilistic, and extensions like MTE on ARM cannot be relied on to stop 100% of errors with confidence.
https://www.usenix.org/system/files/login/articles/login_summer19_03_serebryany.pdf
On top of this, unlike memory safe languages that prevent errors from being possible, tagging is reactive and cannot find any errors until their respective code paths are executed. Latent memory faults will remain in the software. Tagging does not make C less dangerous in life-critical applications.
Granted, unsafe C code does better with tagging than without, because it increases the chance of exploits faulting instead of executing malicious payloads, but reliability-wise we’re still falling short of memory-safe software. Tagging, like ASLR, fails to address the root causes of C’s memory problems. We keep investing in mitigations for unsafe C because of how addicted we are to C, but they all fall short of achieving robust software like safer languages can. There’s so much resistance, but IMHO the long term solution for robust software requires we ditch C or switch to a safe variation of C that doesn’t exhibit C’s usual memory faults.
Alfman,
Unfortunately we cannot always have what we want. And in general, time-tested code is preferable to new “clean” code.
We can go back in time and discuss why C prevailed. And while it is finally possible to do “safe” languages now, it was practically impossible back then. How could you do a Rust-based OS on an 8086 with 64KB of RAM, when “hello world” in Rust has been reduced to 600K in size? (It started out much larger.)
I can have a C program that is only a kilobyte in size, or less. And have assembly do great things in a 512 byte bootloader.
(You might be able to write a bootloader in Rust with “no_std”, but it would likely require wrestling the language, and the code would possibly be larger than assembly)
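To make the size point concrete, here is roughly what I mean by a kilobyte-scale C program: a freestanding sketch, assuming x86-64 Linux and gcc or clang with -nostdlib (exact binary sizes will vary by toolchain and flags):

/* Minimal freestanding C program, no libc.
 * Sketch assuming x86-64 Linux; build with:
 *   cc -static -nostdlib -O2 -o tiny tiny.c
 * The stripped binary lands in the low kilobytes. */
static long sys_write(long fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1), "D"(fd), "S"(buf), "d"(len)   /* SYS_write = 1 */
                      : "rcx", "r11", "memory");
    return ret;
}

static void sys_exit(long code)
{
    __asm__ volatile ("syscall"
                      : : "a"(60), "D"(code)                  /* SYS_exit = 60 */
                      : "rcx", "r11", "memory");
    __builtin_unreachable();
}

void _start(void)
{
    static const char msg[] = "hello\n";
    sys_write(1, msg, sizeof(msg) - 1);
    sys_exit(0);
}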
Now people are thinking about rewriting everything, again. But I would just say “good luck” to them, since it seems like some hard lessons can only be taught by experience.
sukru,
This view, though prevalent, sacrifices software robustness over the long term in exchange for lowering short term costs. The problem, of course, is that over the long term the cons of taking the easy path and keeping broken foundations add up. Building a stronger foundation is not free, but over time it will have been better to switch to one than not to.
This type of short term thinking shows up everywhere: CEOs/politicians/etc want to maximize short term benefits while they’re on the clock and to pass the harmful consequences onto someone else. A successful executive/politician doesn’t actually have to solve the problem if they can cast blame onto someone else. The result is ballooning technical/financial debt, which is exactly what we’re seeing whether we’re talking about unsafe programming languages or macro-economies.
C has accumulated loads of technical debt over decades. It’s human nature to favor short term thinking, but in reality we end up paying more interest on that technical debt than it would cost to build better software foundations.
@Alfman
It is interesting to read this perspective from you as I know you are a fan of C++.
When I read @sukru’s response to you, I knew that they would be thinking of Rust and thought perhaps you would be thinking of C++. When @sukru says “I can have a C program that is only a kilobyte in size, or less. And have assembly do great things in a 512 byte bootloader.”, I imagine that you are thinking that you can have code just as compact and “more safe” when written in modern C++.
I think that Rust can also write safer code that is just as compact. In fact, depending on what you are doing, Rust code can be more compact than C as it handles more of the complexity for you, meaning you need less code to do it yourself (at the assembly level).
But I am curious when you say “This view, though prevalent, sacrifices software robustness over the long term in exchange for lowering short term costs. The problem of course is that long term the cons of taking the easy path keeping broken foundations add up.”. I agree with you 100%. But what is the scope of your thinking?
Specifically, is C++ an example of “broken foundations”? Or are the recent changes in the language resulting in something sound enough to continue with indefinitely?
The “rewrite it in Rust” crowd seems driven by exactly the thinking you outline here. This is why we see Rust in Windows, in the Linux kernel, and in UUtils. Not everyone is a fan of this approach though, and I think the C++ faithful least of all. See Herb Sutter’s recent post.
LeFantome,
I have to jump in…
If we had infinite amount of resources, and robust test suites (end to end regression tests for all bugs we have encountered in the past)… there would be no technical reason to avoid rewriting everything.
But, time and time again, these projects have turned out to be expensive, time consuming, or really buggy.
(I say this as someone who has seen many successful, and also unsuccessful, translation/rewrite efforts first-hand. When you are given 6 months and a great team to port a small project from FORTRAN to C++, it works. I know from experience.)
However, we have also seen rewrites introduce extremely simple logic bugs that are worse than the “memory safety” they bring (like the Rust sudo rewrite basically giving away passwords).
It is more a case of “survivorship bias”: many see the good parts, but fail to acknowledge the hidden costs and the more numerous failures.
LeFantome,
Yes I agree. I don’t see a fundamental reason languages with safer abstractions cannot be used in a memory-constrained context. Rust would work fine even on a 16k Arduino. The language isn’t the problem, rather it’s the standard library, and I see nothing wrong with doing away with the standard library in low level contexts.
If I can equate memory faults to a disease, I would say the mitigations (fuzzing, ASLR, tagged memory) are like treating the symptoms of the disease instead of curing the disease. In terms of “sound enough to continue indefinitely” I would distinguish between optimal solutions and actual predictions, since these are not the same. A continuation of short term decisions indefinitely could be more likely than switching to a path that would be more optimal for the future.
I lost money on green energy investments. To me it’s so “obvious” that humanity “needs” to transition to renewable energy and it’s equally obvious that the sooner we make these transitions the better off we’ll be. And although I still feel my logic is sound, in putting my money on it I totally failed to account for short term reward bubbles even at very great long term expense. Long term benefits routinely get sidelined. Unfortunately it can take one or more disasters to get people out of their short term comfort bubbles. The mere prediction of future disasters isn’t enough, those disasters actually have to happen 🙁
Believe it or not I’m not a big fan of this approach either. Stuffing a safe language like Rust into a pre-existing unsafe code base with unsafe C interfaces is somewhat tedious and defeats some of the core benefits Rust brings to the table. It would be better for projects to be designed for safety from the outset. However, despite this, I don’t see competing against dominant operating systems as a viable path forward, hence the reason Rust advocates are pushing for its adoption to be more symbiotic and incremental.
sukru,
That cuts both ways. Many keep ignoring the long term costs associated with memory faults and unsafe languages. I do not deny that it’s costly to rebuild decades of C code, but at the same time when you look at the long term implications of unsafe language faults continuing generation after generation, the long term costs in aggregate greatly exceed the cost of fixing the foundations. The sacrifices of one generation would stop C’s technical debt from being passed on to future generations.
I’m not necessarily suggesting the C route isn’t sustainable, we might well continue using unsafe languages & interfaces indefinitely. However my point is that if we fail to fix it, future generations will continue paying interest on C’s technical debt well into the future, and that interest *will* end up costing more than the principal (i.e. what it cost to build).
@sukru
Don’t get me wrong. I personally dislike total rewrites in general and have opposed them many times in my career.
The biggest problem is that we rarely fully appreciate what the old code is doing and why. It is almost inevitable that the new code will have bugs or be missing functionality. By the time all that is addressed, the code looks just as bad as the old stuff.
That said, I absolutely agree with refactoring and modernizing code bases. And I believe that you can move to a new language or framework while doing so. People object to the complexity and cost of doing so mostly by ignoring the complexity and cost of not doing it. At least that is my view.
As difficult as it is, I favour the approach being taken by the Linux kernel. They are adding the ability to add new stuff in Rust. At first, it is “optional” new stuff in Rust. Over time, they will start using Rust for mandatory components when they refactor old code. Eventually, this may lead to purposely re-writing important chunks just for the benefit of the new language. This may eventually lead to a fully Rust project. But they may take a very long time, or perhaps never complete, a full migration to Rust. The project will get better everyday in the meantime without a major disruption at any given point. To me, that is the way.
Many disagree.
@Alfman
> I don’t see competing against dominant operating systems as a viable path forward
It does not seem realistic to dethrone Linux given the staggering size of the ecosystem and the massive amount of momentum it has. But that has been said before.
We have an extremely interesting experiment to watch in Redox. It is still WAY too small to know if they will even get off the ground, but I watch it with great interest. The lead developer has massive ambitions for it.
As an aside, I see Redox as a possibly more successful attempt at what GNU set out to do originally. First, philosophically as the idea of a complete and 100% Free Software general-purpose operating system. But also technically given the micro-kernel design. Redox is attempting to create not just a kernel but the C library, COW filesystem, audio sub-system, and display server that GNU never did get around to. Ironically, the bits Redox is not doing are core utilities (using UUtils) and compiler (using Rust) which were the core of the early GNU project. Still, Redox is going to be a soup-to-nuts complete operating system in Rust from scratch. Popularity is no guarantee but it is already staggering what they have accomplished.
A big difference between GNU and Redox is of course the choice of Rust versus C. But an even bigger difference is probably MIT vs GPL. In my view, both are reflections on what we have learned since the 80’s.
@Alfman
> I lost money on green energy investments
As they say, “the market can stay irrational longer than you can stay solvent”. Market valuation is about perception. Which is to say it is fashion and politics. But reality always gets its way in the end.
As for renewable energy, you were and remain completely correct. Thankfully, renewable energy supplied almost all of the world’s growth in energy demand last year. 2026 may be the first year that renewable energy not only generates enough to cover growth in demand but even starts to actually drive down the use of fossil fuels globally. That would be an amazing achievement in this era of hotter temperatures, electric vehicles, AI, and data center proliferation. And solar is providing about 80% of it, while also being the technology improving and dropping in cost the fastest. Solar will probably industrialize Africa, which will truly change the world.
The really big change this year will be grid-scale battery technology that is not only affordable but safe (eg. sodium ion). Australia may go 100% solar at some point which will drive their car market to EV as well. I am not offering investment advice but the transition is inevitable at this point.
But politics still matter. The US is the only place in the world where the rate of solar growth is dropping. Also the only place where vehicle electrification is stalling. And this was before tariff hikes and the possibility of the US bringing Venezuelan oil reserves to the Texas coast at minimal cost.
Safer, less-expensive batteries that work better in both hot and cold will again super-charge EV adoption. They may not really ramp-up until next year though and it may remain out-of-fashion in the US for longer. That said, I think the EREV strategy is the right one for North America with its large vehicles, variable loads, and long hauls. As better batteries drop EV prices at the low-end, and EREV rolls out for larger vehicles, I expect the US to catch up with the rest of the world in terms of adoption. As batteries improve, EREV will eventually go full EV. The trend and timeline should all be crystal clear 12 months from now.
The problem with technology transitions is that it is easy to spot the trend but still surprisingly hard to pick winners. Solar is exploding. Most solar providers have not gotten rich. Which means investors have not either. The EV companies that have gotten the most rich may be the riskiest long-term bets. Like any gold rush, it is often safest to sell shovels.
If you are just worried about the world though, I think we are getting very close to the point where we will not be able to screw it up no matter how stupid we are. Hopefully it happens in time (and before we blow each other up).
LeFantome,
(responding here on purpose to clean thread)
100% of the solar and, I think, EV incentives got the axe in the US. It’s such a stupid policy because it sets us behind the rest of the world in terms of green innovation. We’re not just losing the benefits of renewables, we’re losing out on the financial upside that comes with industry leadership. Politicians keep selling us out to the oil industry for the benefit of a few. Canceling the initiatives that were already in play delivering results can only set us further back. Ugh.
I’m no expert in the field, but from what I’ve read these battery alternatives might not have the energy density to be competitive in cars. I think they would make a lot of sense for grid applications however. Energy storage is one of the weak links for both solar and wind power. Cheap alternatives to Li-ion batteries would be a very welcome addition to the market.
I did see this product review and it was quite critical of the sodium battery. It passed the tests, but because it’s bigger and less dense than LiPo while also being quite a bit more expensive, those are all cons. IIRC the only selling point was that it works when the battery is frozen.
“BLUETTI Pioneer Na SODIUM ION Battery Power Station | Did They Just KILL Lithium?”
https://www.youtube.com/watch?v=OoZ_g_MShTw
Hopefully that’s just an economies-of-scale thing though, and these sodium batteries should get a lot cheaper. If so, they could completely displace LiPo for grid applications, leaving more materials available for EVs and devices where density is more critical.
Very good point.
It’s nice to hear somebody optimistic, although I’m having an uncomfortable laugh because I am less confident that we can’t screw things up (I am in the US, which might skew things for me).
Just watched another review for a Sodium Battery.
“Testing My First Sodium-Ion Solar Battery”
https://www.youtube.com/watch?v=zSPSCC3_hHw
As the other reviewer concluded, they don’t make any financial sense at current prices, but prices could/should come down. I’m more concerned about the specs though. This YouTuber had trouble working with the sodium battery because neither his chargers nor his inverters were designed for sodium, which is inconvenient for battery swapping but ultimately fixable for new applications. The charge/discharge efficiency of the battery chemistry is more concerning to me though (see the round trip efficiency chart at 0:45). Unless the efficiency of sodium batteries can be significantly improved, that’s a large amount of energy being lost to internal battery losses.