A few months after my contract with Haiku, Inc. began, I rewrote the implementation of the Haiku kernel’s condition variables (as opposed to our userspace condition variables, which are from POSIX.) As this new implementation has run in Haiku for over a year and shipped in the latest release with no sign of any remaining issues, I figured it is high time for a deep-dive on the API, its implementation history, and the design of the new implementation I wrote.
I expect this article will be of broader interest than just to Haiku’s community, because Haiku’s condition variables API has some notable (and powerful) features not found in those of other operating systems, and its implementation is thus likewise unique (at least, as far as I have been able to figure out.)
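For readers who haven't seen the API before, here is a rough sketch of the classic wait/notify pattern with a kernel condition variable. This is purely illustrative and not taken from the article: the class and method names (ConditionVariable, ConditionVariableEntry, Init, Add, Wait, NotifyOne, B_CAN_INTERRUPT, MutexLocker) reflect my reading of Haiku's private kernel headers and may not match the current tree exactly, and the request counter and helper functions are hypothetical.

    // Illustrative sketch only -- not from the article. Names follow my
    // reading of Haiku's private kernel header (condition_variable.h)
    // and may be out of date; the surrounding code is hypothetical.

    static ConditionVariable sRequestCondition;  // hypothetical variable
    static mutex sRequestLock;                   // hypothetical lock
    static int32 sPendingRequests = 0;           // hypothetical predicate

    void
    request_init()
    {
        mutex_init(&sRequestLock, "pending requests");
        sRequestCondition.Init(&sPendingRequests, "pending requests");
    }

    status_t
    request_wait()
    {
        MutexLocker locker(sRequestLock);
        while (sPendingRequests == 0) {
            // Attach an entry before dropping the lock, so a notification
            // sent between unlock and sleep is not missed.
            ConditionVariableEntry entry;
            sRequestCondition.Add(&entry);
            locker.Unlock();

            status_t status = entry.Wait(B_CAN_INTERRUPT, 0);
            if (status != B_OK)
                return status;

            locker.Lock();
        }
        sPendingRequests--;
        return B_OK;
    }

    void
    request_post()
    {
        MutexLocker locker(sRequestLock);
        sPendingRequests++;
        sRequestCondition.NotifyOne();
    }

The point of the separate entry object in this pattern is that a waiter can register itself before releasing its own lock, so a wakeup arriving in the gap between unlock and sleep is not lost.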
I’m currently working on a “state of Haiku” sort of article, and I’m incredibly impressed with just how stable, fast, full-featured, and usable Haiku has become on real hardware. I’ve always kept an eye on Haiku in virtual machines, but now I’m running it on real hardware – where it belongs – and it’s been an absolute joy.
The fact that waddlesplash managed to pull off this switch basically without any issues, and with few people noticing, is further illustration that the project’s in a good place.
I’ve said it before and I’ll say it again: part of the reason for Linux’s lack of mainstream adoption is its complete lack of standardisation. Sometimes too much choice is a bad thing.
Haiku, being a complete OS from kernel to userland, doesn’t suffer from this. You don’t have to worry about whether you’re running RHEH, Ubunku, or Arch Haiku. Haiku is Haiku, just like Windows is Windows or MacOS is MacOS. This completely integrated experience makes porting apps, distributing binaries, and, most importantly, end-user support much easier and more streamlined.
Therefore, I stand by my belief that if any FOSS operating system has a chance against Windows or MacOS, it’s Haiku.
The123king,
Many alternative operating systems have technical merit, but they don’t have the support needed for widespread adoption. How do you get around this chicken-and-egg problem? I’m not objecting to your view, but it’s a genuine question from someone who’s skeptical of mass migration without extraordinary resources to pull in industry.
IMHO the most plausible way for a newcomer to get a foothold is to get into new markets; taking over in old markets as an underdog seems futile.
Qt has been ported, providing a great deal of Linux/KDE software, including a full office suite. WINE has been ported, allowing a great deal of Windows software to run unmodified. The OS is POSIX compatible, allowing a vast amount of CLI UNIX-like software to be ported. There’s also quite a large repository of native software, providing an integrated, native experience, sans any compatibility layers or oddball porting issues.
The egg has been laid, it just needs to hatch into a chicken.
I fear that you are right. The desktop computer market matured out to MacOS/Windows. The smartphone market matured out to Android/iOS, and the server market matured out to Windows/Linux. Breaking the mould now is going to be one heck of an uphill battle.
BeOS was somehow both too early and too late. It was technically groundbreaking in the 90s, with its heavy use of threading and support for SMP. However, much of the multiprocessing hardware at the time was dual- or quad-core at best. Nowadays, with Threadrippers pushing 64 cores/128 threads in one package, I would expect Haiku/BeOS to absolutely scream on such hardware.
There is a hole in the market coming up, with massively threaded ARM rigs. I’d expect Apple to release an ARM Mac Pro soon with 128+ cores. With such massively powerful ARM hardware, and Windows being locked into the x86 platform (despite Microsoft’s many attempts to break out of it), there will be plenty of room for a third player in the high-end ARM workstation space, and I believe Haiku could be an ideal solution.
BeOS/Haiku is really only designed to scale out to maybe 4-8 CPUs… sure, you could put in more and write software that scales further, but pretty much nobody has (relatively speaking, because I’m sure you can find a handful of examples), compared to Linux and Windows, which do have software that can scale beyond that.
Beyond that… it’s actually slower than Linux most of the time: in filesystem performance, process management, and pretty much everything else.
Also, the graphical API is akin to GDI drawn entirely on the CPU… which was pretty much state of the art in the early-to-mid 90s, but there is no acceleration for drawing it. So it’s, again, about 10 years behind in having accelerated 2D.
There is work being done to get actual 3D drivers working… so that’s something at least.
The BeOS kernel could not handle more than 8 CPUs. Also, Haiku is always, and often considerably, slower than Linux, no matter how many CPUs you have. It may appear to be faster, but so far that’s only the user experience. It’s no shame being slower than Linux, though, as that’s a very high bar to clear.
However, Haiku was never designed to scale to just 4-8 CPUs, and unlike BeOS, there is no such restriction on the CPU count. As with BeOS, everything is heavily threaded, which means it can often put many CPUs to use.
That does not necessarily mean that its internals are always up to par, though only very few locks are left that slow down the system with more CPUs. In fact, depending on what you do, it shouldn’t really be noticeable at all.
But if you can provide benchmarks to prove your points, that would be much appreciated, and could be used to actually identify any bottlenecks left.
@axeld:
Not sure if this is relevant, but in testing Haiku on a laptop with a Ryzen 7 5700U CPU (8 cores/16 threads), it performed remarkably faster than on my other test system with an i5-9500T (6C/6T). It did get hung up on one thread out of the 16 on the Ryzen chip that stayed at 100% activity, but I think that was more of an AMD bug than anything Haiku-specific, as I also ran into it with OpenBSD; in fact it prevented me from sleeping the laptop in the latter OS.
My point being, throwing more cores/threads at Haiku definitely increased overall performance.
The123king,
Would Haiku even run on an ARM Mac? I searched, but all I found was running it via x86 emulation, like this…
“Installing Haiku on an M1 Mac! – Running Haiku with x86 Emulation!”
https://www.youtube.com/watch?v=LpeyCoAP6qw
Unless Apple intends to return to the server market, I don’t see such massive core counts being all that useful for typical MBP consumers. Don’t get me wrong, I’d still be happy to see more competition in the server and workstation markets; that competition is needed.
It would be a challenge for Apple to scale the M1 design. The engineering tradeoffs created some benefits, but also some costs. By putting memory, GPU cores, special accelerators, and CPU cores in the same physical package, it becomes far more difficult to scale up than with other, discrete approaches. Consider that on a system with a dedicated GPU, you can max out both the GPU and CPU as they don’t step on each other, whereas Apple’s iGPU design results in heavy CPU loads throttling the iGPU and vice versa. In some cases the M2 scores worse than the M1 because the M2 is throttling. So I don’t believe there’s enough headroom to add more cores and have them running full tilt.
So in order for Apple to become more competitive on massive core counts, I think they have two options:
1) reconsider their all-in-one CPU design and go back to more discrete components, or
2) scale up as a cluster of discrete CPU stacks.
Obviously Apple doesn’t do anything for my sake, but my choice would be for them to do both. I personally don’t see enough benefit in the shared iGPU design to be subjected to its cons. Make these discrete, such that both the CPU and GPU can be scaled up independently without stepping on each other. The other approach brings us towards HPC, which is a time-tested way to achieve massive computational power. Traditionally this resides almost exclusively in the domain of enterprise servers, and IMHO it would be hard for Apple to sell those customers on Apple hardware if it’s tied to macOS and vendor-locked NAND storage (as Apple did with the x86 Mac Pro).
I agree with everything but the end of your post. Haiku is great, but way too late.
I’m a musician and I stay on Windows because, even if there is an alternative free OS, the tools I need exist only as proprietary software.
When I’m DJing, I use a ThinkPad running the latest Ubuntu with the free Mixxx software. Everything runs great (thanks to whoever worked on PipeWire).
If this Qt-based software is ported to Haiku, I could find a use case for it. But it won’t be a real native Haiku app, and there isn’t going to be one anytime soon.
Looking forward to your article!
Deep dive… can we stop saying this? It’s the most overused pseudo-technical jargon of the 2020s.
It’s cringe. Note this isn’t directed at anyone personally… just at everyone on the internet using this linguistic crutch instead of actually saying something meaningful.
What do you propose as an alternative phrase or idiom?
Morgan,
“I figured it is high time for an….” Introspection? Examination? Investigation?
Hmm, we may need to take a deep dive into some other alternatives here…
Sadly, I don’t expect much more from Haiku. Getting to this point was a great effort by some brilliant people. There is still activity, but not at the level it was five years ago. There is no corporate sponsorship to speak of. There are no commercial applications at all. There is very little development of native Haiku applications despite the awesomeness of the API. 98% of the apps are ported from other platforms.
Patches are made to fix logged bugs. Some small amount of core development proceeds. I don’t see a grand vision for the next major release. I don’t see anyone taking up the challenge. I don’t see new ideas being coded in Haiku.
Maybe this is the expected outcome of the original vision: to recreate BeOS as it was at its last release. This has been achieved, and in spectacular fashion. Now that it is accomplished, I am not sure what comes next, if anything.