Poetry is in motion. The Haiku Project and its developers announced the release of the Haiku operating system’s third beta, version R1/Beta3, on July 25th, 2021. R1/Beta3 continues the trend of more frequent releases, providing users and developers with an up-to-date and stable platform to work on.
This release combines the best of Haiku’s history as a spiritual successor to BeOS with the hard work of a passionate community. It provides several new features and performance improvements that make Haiku even better.
[…] WebKit, the backend of WebPositive, the bundled web browser developed by the Haiku team, received multiple major improvements. This provides a good base for further work, as well as an improved browsing and website rendering experience in WebPositive, which developers will continue to focus on for the next release, Beta 4, and as Haiku nears its first full release, R1.
Going from beta 2 to beta 3 is a giant leap if you haven’t been keeping up. Haiku is much farther along than people think, with the biggest drawback being, as always, that hardware support is going to be a mixed bag. Haiku is still every bit as clean, fast, and enticing as the original BeOS was over two decades ago, and I’ve scored two junkyard office PCs to see if I can get a proper Haiku box running.
Weird parallel, Thom: I have two “junkyard” machines for testing OpenBSD and Haiku respectively. One is a Dell tower server from 2011 with a low-power AMD GPU added for desktop use, long out of support from Dell and Microsoft but perfect for obscure OSes. The other is a mini-ITX system thrown together from scavenged parts, a bit newer, with an AMD A10 APU. The former is running the latest OpenBSD snapshot; the latter is waiting on a new power supply, after which it will be my new Haiku playground.
Since I went minimalist and no longer develop software, I threw all my old stuff out. If I had my way, I’d be developing high-performance graphics software and still testing it against a P200 and a first-generation dual-core CPU to give compatibility and scalability a good thrashing. The thing about producing software for a constrained system is that it encourages you not to get sloppy, and to code things simple and lightweight and fast. Very fast. People get lazy and complacent with ninja development platforms and start adding fat frameworks which introduce latency and lots of other horrors into the critical path. There are, or were, companies out there with a room full of clunkers so they could do this. None that I hear of now. Apparently Microsoft gave this up too and now simulates everything.
I agree in sentiment, but there comes a time when the past needs to be left in the past. Of course, I say that while I still hold on to my PIII laptop from 2001. Still, though I’m not a developer, I like the idea of writing code for the lowest common denominator; if it won’t run well on a 10+ year old machine, then there’s room for improvement. I don’t think that should change until we’ve hit such a computational leap forward that current-year machines are considered “antique/vintage”.
My mind is stuck in a 1980s/1990s development model, plus I have my pet obsessions. I watched a video with the lead developer of id Software the other week, and one by Linus Tech Tips on Nvidia dropping support for older graphics cards. So I started reverse engineering what I was seeing on screen, as both had sections where Doom Eternal was discussed and played. Old habits die hard!
I think you grasp my point. Some software simply cannot run on older machines. A crude example: a 3D shooter is not going to fly on an 80-character monochrome screen with a 2.5MHz CPU. Other stuff, I think, is a matter of choice. That’s the contentious area, which justifies your point about generational changes.
It’s been a long time, but I remember not everyone getting that graphics cards didn’t just need to double their power when doubling resolution; the power needed to be cubed. At the same time, people were wowed by a few dozen shaders and didn’t grasp that these were going to explode into thousands of shaders. A prediction by one developer that we would never see framerates of 200Hz again is only partially true. Still, people are targeting 60Hz or even 120Hz as a baseline to focus optimisation, so all is not lost.

Another thing people miss is the way networking was handled prior to Windows 2000. Networking would cause a pipeline stall in the OS, but the way Windows 2000 handled things meant the stalling went away, and you could have massive amounts of disc access and networking activity without slowing anything down. In fact, this is one change in OS architecture I don’t think anyone has noticed; today it’s simply taken for granted.
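For what it’s worth, the mechanism behind that kind of change is plausibly overlapped (asynchronous) I/O in the Win32 API: a request is handed to the kernel and the calling thread keeps working instead of stalling until completion. A minimal C sketch, with the file name hypothetical and error handling trimmed for brevity:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open with FILE_FLAG_OVERLAPPED so reads don't block the thread. */
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING,
                           FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    char buf[4096];
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    /* ReadFile returns immediately; the kernel completes the I/O
       while this thread is free to do other work. */
    if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) return 1;

    /* ... do other work here instead of stalling ... */

    DWORD got = 0;
    GetOverlappedResult(h, &ov, &got, TRUE);  /* TRUE = wait for completion */
    printf("read %lu bytes\n", (unsigned long)got);

    CloseHandle(ov.hEvent);
    CloseHandle(h);
    return 0;
}
```

Whether this is exactly the change the commenter has in mind is an assumption on my part; overlapped I/O actually predates Windows 2000, but this completion-driven model is what lets heavy disk and network traffic proceed without blocking the rest of the system.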
The biggest problem with Doom Eternal is texture/shader swapping. On GPUs with 4GB or less of onboard memory, the game causes massive stalling. This is Nvidia’s excuse for dropping older cards. It’s not that they cannot run the game; it’s the design decisions and lack of scalability that cause the stall. I don’t see anything on screen in these videos which really justifies a stall like this. And it’s not just id: Epic has scalability problems with data and graphics on the critical path too.
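As a rough back-of-the-envelope check on that 4GB ceiling, a small C sketch shows how quickly texture data alone exhausts VRAM. The texture size, compression rate, and texture count here are illustrative assumptions, not measurements from Doom Eternal:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions: 4K-by-4K textures, BC7 compression
       (1 byte per texel), full mip chain (~4/3 size overhead). */
    const double texels    = 4096.0 * 4096.0;
    const double bytes_bc7 = texels * 1.0;          /* BC7: 1 byte/texel */
    const double with_mips = bytes_bc7 * 4.0 / 3.0;

    printf("one 4Kx4K BC7 texture + mips: %.1f MiB\n",
           with_mips / (1024.0 * 1024.0));

    /* A scene streaming ~200 such textures already passes 4 GiB,
       before buffers, render targets, and shader data. */
    printf("200 textures: %.2f GiB\n",
           200.0 * with_mips / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```

Two hundred such textures already exceed 4 GiB before geometry, render targets, and shader data are counted, which is why aggressive streaming and mip selection matter so much on smaller cards.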
Can you please share the links to the videos? That sounds very relevant to my interests! 😀
I was a programmer on mainframes and PCs in the ’80s and ’90s, until health issues made it impossible to program. Diabetic pain will strip away your memory and attention span.
Anyway, I remember when I found out about Commander Keen (id Software), and then id’s first true 3D game, “Wolfenstein 3D”, and then the DOOM games. I remember feeling so limited by the fact that DOOM was a pseudo-3D game, because you couldn’t have two characters literally above each other in a level.
Then along came the first Quake demo, which was full 3D, and how liberating true 3D felt.
PS: I first tried to build my own 3D game back in 1981 on an Atari 400 with 4K of RAM, using line draws in BASIC to simulate hallways. Unfortunately, the school where I was learning to program was for mainframe programming; someone had donated the Atari, and there was no money for more memory. Note that it loaded and saved everything to cassette tape, because it had no floppy or hard drive.
A year later we were given an Atari 800 with 8K of RAM (WOW! 8K of RAM!!!). Okay, still nowhere near enough RAM to make a 3D game, but I made a three-dimensional database on it and made $20 off a bet that it couldn’t be done. That guy was an idiot.
Anyway, those were the days when things were thrilling (when id was out-innovating everyone, or at least appeared to be in the Western world), and software was lean and mean, doing everything it could to wring the last little bit of framerate and graphics out of the hardware.
Compare that to nowadays: hardware is tens of thousands of times faster, my smallest computer has 64 GB of RAM, and the software on it is ***SH***** compared to what it SHOULD be. Which is why I’m super excited to see what I can do on Haiku.
When BeOS came out (I don’t remember which version of BeOS was my first that I actually bought and didn’t just acquire, but I still have all my disks for BeOS 3 through 5, plus BeProductive), it was the first OS that I TOTALLY F’ing LOVED, with OS/2 not too far behind it.
OS/2 is STILL not understood by most people. They think it is like Windows, and for all of you who do: thinking that you run OS/2 like Windows is like saying Swahili is English because they use the same alphabet. I’ve met about 50 people who understand OS/2 and thousands who don’t have a clue. We all lost when Windows won. You have zero idea how bad Windows is, or why. PS: I’ve been supporting Windows since Windows 2, and my co-workers are very impressed at how this “old dude” knows so much about Windows and a dozen other OSes.
There were quite a few innovative and competitive engines around at the time. id has always been about pixel pushing. Other engines tended to facilitate different design goals and more complex gameplay. The biggest industry changes came when algorithms and hardware common to both the games and movie industries began to cross over: RenderMan-style shaders were implemented in hardware. This began to mature around the time of the GeForce 3 and onwards. Things are also going back in the other direction, with game engines now being used to host architectural simulations and CGI scenery. The most recent innovation is that this is done in real time and embedded within practical scenes.
The CGI versus practical debate is an old one. I always felt practical effects were better, and to a large degree they were, and still are. CGI attempts to simulate light, and there is no better light than a real light, which is why some CGI looks a bit off. Simulating light is very hard. Another element is design: practical effects can be a more natural way of working, or more artistically stimulating. It’s also a lot better for actors to work with practical sets. It can be a fine line, with a lot of back and forth, and I’m not going to knock CGI; it can do amazing things that would be either impossible or extremely expensive otherwise. Then there is audience perception. How you perceive a scene, which is part and parcel of all the elements including story, is a factor too. If people perceive something as fake, even if it is not, it takes away from the experience. The fact that CGI might have been used, plus something looking too contrived, can break immersion even if it was 100% real.
A lot of the most significant innovations aren’t pixel pushing. The deeper innovations are things like conveying a sense of character presence, atmosphere, pacing, and so on. Games that make you think, or with which you feel a connection, or which have something to say about the world, tend to lean in this direction, even if the way they’re put together is cruder than the pixel pushers.
Another contrast is how everyone today wants 4K because it’s more “real”. Well, this is true in the sense that the screen looks more like a real-world surface; or they want 200Hz or higher framerates because this, again, is more real. Then play back an old standard-definition movie DVD and ask which one looks more real. An old movie DVD will surpass a modern game on the latest ninja PC with a 4K display any day of the week. Likewise, a talented illustrator could do more with five brushstrokes in as many seconds than many could do with the latest AI-assisted Photoshop in as many years.
Not that this has anything to do with Haiku…