Me, almost seven years ago (2010), about the dearth of news about alternative operating systems:
OSNews has moved on. As much as it saddens me to see the technology world settling on Macwinilux (don’t flatter yourself, those three are pretty much the same), it’s a fact I have to deal with. It’s my job to fill OSNews with lots of interesting news to discuss, and even though I would love to be able to talk about how new and exciting operating systems are going to take over the desktop world, I have to be realistic too. Geeks (meaning you and I) have made a very clear choice, and it doesn’t seem like anything’s about to bring back those exciting early days of OSNews.
Me, almost four years ago (2013), about why there are no mobile hobbyist operating systems:
So, what is the cause? I personally think it has to do with how we perceive our smartphones and tablets. They are much more personal, and I think we are less open to messing with them than we were to messing with our PCs a decade ago. Most of us have only one modern smartphone, and we use it every day, so we can’t live with a hobbyist operating system where, say, 3G doesn’t work or WiFi disconnects every five seconds due to undocumented stuff in the chip. Android ROMs may sound like an exception, but they really aren’t; virtually all of them support your hardware fully.
With people unwilling to sacrifice their smartphone to play with alternative systems, it makes sense that fewer people are interested in developing these alternative systems. It is, perhaps, telling that Robert Szeleney, the programmer behind SkyOS, moved to developing mobile games. And that Wim Cools, the developer of TriangleOS, moved towards developing web applications for small businesses. Hard work that puts food on the table, sure, and as people get older priorities shift, but you would expect new people to step up to the plate and take over.
So far, this hasn’t happened. All we can hope for is that the mobile revolution is still young, and that we should give it some more time for a new, younger generation of gifted programmers to go for that grand slam.
I sincerely hope so.
I don’t know, for some mysterious reason I figured I’d link to these seven and four year old stories.
I had white hardware with OPENSTEP as my main workstation from around '96 to '02, when I moved to Mac. The rest of the home network back then was a fluctuating blend of Solaris, FreeBSD, BeOS, SCO, Linux. Reading about Amiga, MorphOS and others was a future hardware shopping list. I still may end up with a new Amiga just to relive a little of that. I miss rolling up my sleeves, getting to know an unknown OS and making it do what I want it to. Back then there were a lot of other OS hobbyists and future sysadmins on IRC. The Mac gives me what I want most out of all the systems over time, but it’s kind of like being with the same woman forever. I miss strange.
I keep thinking about getting an AmigaOne X5000… but I can’t really justify it since I plan on getting a new house soonish. Not to mention I hardly have time to play around with my assortment of Atari STs, Atari 8bits and Amiga 4000D.
Turns out you were spot on: I run macOS these days, and write in Ruby.
Still, Haiku might wheeze out of the starting gate any day now, and ReactOS is still doing some interesting work. Beyond that though? Yeah, it’s a wasteland. Even Plan9 burned out quickly.
Neo900 looks pretty cool in the mobile space, if they can ever get something out the door. There is also the Talos Secure Workstation that looks interesting but probably won’t get out the door.
But yeah, it’s sad that overall we have maybe three desktop operating systems and two mobile operating systems, which are derivatives of the desktop systems.
Which makes me weep. What you were doing with Syllable was very interesting to begin with. At what point did it all turn sour?
I also still hold out hope for Haiku and ReactOS (and don’t forget AROS), and I would add Genode to the list of active, interesting projects. Genode is rapidly maturing into something useful, and is very different from all other mainstream OSes.
https://en.wikipedia.org/wiki/Unikernel
If you look in the unikernel field, you’ll still find quite a few hobbyist OSes around.
The unikernel market, sitting on virtual machines with fairly standard hardware interfaces, is a much simpler problem for a hobbyist OS developer to take on.
It really comes down to one simple problem: most areas a hobbyist OS could go are doomed by the same thing, drivers. Attempting to install AOSP Android on lots of Android devices is doomed by the same problem. And at least since ARM64, attempting to run a different OS on an ARM device risks blowing a critical e-fuse, making it never boot again.
It really pays to look at the Linux kernel and see how many drivers it has that do exactly the same thing, just in a different way, because that type of hardware has no standard interface.
The reality is that to have hobbyist OSes we need hardware standards, so that a hobbyist OS can get by with a minimal number of drivers.
Even Wi-Fi cards: why do we need so many different ways to give them instructions? All this hardware variation does not help when attempting to make a secure OS, be it Linux, Windows or OS X, and it also basically locks hobbyists out of any market with that problem.
You have to remember that the hobbyist OS boom lined up with the era when we had VESA and similar standards for driving video cards, and they worked. The disappearance of the hobbyist OS is the story of the disappearance of standard hardware control interfaces.
Not sure I agree with you entirely on the hardware front. In the VESA days, you had myriad controllers and types all sitting on an ISA bus, with the corresponding I/O and interrupt clashes, and driver hardware probes potentially locking up unrelated devices. PS/2 mice weren’t even really universal until late-’90s ATX motherboards made the connector basically standard.
With PCI, USB xHCI and VGA, you can still provide a reasonable desktop with a minimum of device specific drivers. PCI allows you to find standard hardware connected to the local machine relatively easily, and xHCI and VGA allow you to do input/output (disk/keyboards/mice and graphics). So right off the bat, you have disk, keyboard, mouse and display just with those minimal drivers.
Add in a USB wired network driver, and you have network access as well. A couple of PCI network cards supported by QEMU and you have a fully network-capable VM hobby OS.
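To give a flavour of how little code the discovery side needs, here is a minimal sketch of legacy PCI configuration-space enumeration over I/O ports 0xCF8/0xCFC. The `outl`/`inl` and `kprintf` helpers are assumed (hypothetical) kernel routines, and a real kernel would also want to handle multi-function devices, PCI bridges and PCIe ECAM; this is only meant to show the shape of the code.

```c
#include <stdint.h>

/* Hypothetical port-I/O and console helpers provided by the kernel.
   Assumed signatures: outl(port, value), inl(port), printf-style kprintf. */
void outl(uint16_t port, uint32_t value);
uint32_t inl(uint16_t port);
void kprintf(const char *fmt, ...);

/* Read a 32-bit dword from PCI configuration space using the legacy
   0xCF8 (address) / 0xCFC (data) mechanism. */
static uint32_t pci_config_read32(uint8_t bus, uint8_t dev,
                                  uint8_t func, uint8_t offset)
{
    uint32_t address = (1u << 31)               /* enable bit            */
                     | ((uint32_t)bus  << 16)   /* bus number            */
                     | ((uint32_t)dev  << 11)   /* device number         */
                     | ((uint32_t)func << 8)    /* function number       */
                     | (offset & 0xFC);         /* dword-aligned offset  */
    outl(0xCF8, address);
    return inl(0xCFC);
}

/* Brute-force walk of every bus/device, printing vendor, device and class. */
void pci_scan(void)
{
    for (int bus = 0; bus < 256; bus++) {
        for (int dev = 0; dev < 32; dev++) {
            uint32_t id = pci_config_read32(bus, dev, 0, 0x00);
            if ((id & 0xFFFF) == 0xFFFF)
                continue;                       /* no device present */
            uint32_t cls = pci_config_read32(bus, dev, 0, 0x08);
            kprintf("pci %02x:%02x vendor=%04x device=%04x class=%02x\n",
                    bus, dev, id & 0xFFFF, id >> 16, cls >> 24);
        }
    }
}
```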
Any new Hobby OS should provide a USB stack right after basic PS/2 keyboard/mouse and IDE drivers (or the platform-specific equivalents).
But yes, certainly wifi cards are a horrendous mess of cost-cutting, minimal hardware which requires magic firmware and is difficult to reverse engineer, precluding its use by much Hobby OS software.
But all in all, if I was starting a Hobby OS with a view to making it usable by others, I’d much rather do it now than 20 years ago.
christian,
Sure, modern VMs do give you the benefit of targeting one specific hardware abstraction. There’s no denying that helps a lot. However, for some people like me there’s much less appeal to indy-OS development inside a VM. For me personally, the whole thrill was in having control of the whole machine, but a VM gives you even less control than ordinary userspace software.
I suspect if you ask a lot of us who were doing bare metal IBM-PC programming 20 years ago, most of us will agree it was easier then.
I’d have loved to ignore complexities such as ACPI and SMP 20 years ago. And it is still easy to ignore those now and program to basically ISA devices only (which will still work as expected on your Core i7 with SATA-based SSD), but to do so severely limits growth.
But a VM to all intents and purposes is a bare metal machine. So it isn’t any less of a thrill to do that first jump into paged address space, and have your program carry right on executing (I think that was the moment when I thought “hey, I can do this!”)
But as I was making these baby steps, largely as a result of reading the 386BSD articles referenced a little while back, I was thinking: if only I could have been at this stage 25 years ago.
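For anyone curious what that “first jump” into paging looks like, here is a minimal sketch for 32-bit x86. It assumes the kernel is already in protected mode, runs at a low physical address, and is happy to identity-map the first 4 MiB with a single large page (CR4.PSE); a real kernel would of course build proper page tables and a higher-half mapping.

```c
#include <stdint.h>

/* One page directory; each entry can map a 4 MiB region when the
   page-size (PS) bit is set and CR4.PSE is enabled. */
static uint32_t page_directory[1024] __attribute__((aligned(4096)));

void enable_paging(void)
{
    /* Identity-map 0..4 MiB: present (bit 0) | writable (bit 1) | 4 MiB page (bit 7). */
    page_directory[0] = 0x00000000 | 0x83;

    uint32_t reg;

    /* Allow 4 MiB pages. */
    __asm__ __volatile__("mov %%cr4, %0" : "=r"(reg));
    reg |= (1u << 4);                                   /* CR4.PSE */
    __asm__ __volatile__("mov %0, %%cr4" :: "r"(reg));

    /* Point the MMU at our page directory. */
    __asm__ __volatile__("mov %0, %%cr3"
                         :: "r"((uint32_t)(uintptr_t)page_directory));

    /* Flip the paging bit; because the mapping is an identity map, the
       very next instruction fetch succeeds and execution carries on. */
    __asm__ __volatile__("mov %%cr0, %0" : "=r"(reg));
    reg |= (1u << 31);                                  /* CR0.PG */
    __asm__ __volatile__("mov %0, %%cr0" :: "r"(reg));
}
```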
christian,
Yeah, I know an emulated machine today is, for all intents and purposes, as good as, if not better than, a physical machine back then, but I still have my emotional (and perhaps irrational) attachment to genuine bare metal programming.
You can be proud of it whether it’s in a VM or not, but from a utilitarian perspective if you aren’t intending to support real hardware and target real physical machines, then you take the cons of bare metal programming without the normal benefits. Everything you do to program an OS in the VM could be done easier/better/more efficiently outside of the VM. IMHO bare metal programming minus the bare metal is very hard to justify in practical terms. Of course that’s just my opinion, and if you are doing it as a challenge or to learn, then absolutely jump in. It will definitely achieve that!
Funny you should say that, because as I was writing my last reply, I figured I should try my kernel on bare hardware on a spare machine. Luckily, it booted right up and went into its idle routine outputting to the console, but it wouldn’t take any keyboard input. Strange, I thought, then I realised it’s a USB keyboard. D’oh! Strike one for the VM.
christian,
Yep, my OS from the past would fail there too. However, it’s still possible to read the USB keyboard through the BIOS (this is why DOS works with USB keyboards). If you are using real-mode code this is easy. If you are using protected mode, then the BIOS becomes a bridge too far, haha.
For the sake of argument, you could exit back to real mode or enter VM86 mode temporarily to call the BIOS and read the key. Serial ports are another easy option for headless machines. At some point, though, it’s necessary to bite the bullet and implement a USB stack.
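As a rough idea of what that BIOS fallback looks like, here is a minimal sketch of reading keys via INT 16h. It assumes a 16-bit real-mode build (for example gcc -m16 or a real-mode C compiler); from protected mode you would first have to drop back to real mode or use VM86, exactly as described above.

```c
#include <stdint.h>

/* Blocking read of one keystroke via BIOS INT 16h, AH=0x00.
   The BIOS returns the ASCII code in AL and the scancode in AH. */
static uint16_t bios_read_key(void)
{
    uint16_t ax = 0x0000;                       /* AH=0x00: wait for keystroke */
    __asm__ __volatile__("int $0x16" : "+a"(ax));
    return ax;
}

/* Non-blocking check via INT 16h, AH=0x01: ZF set means no key waiting. */
static int bios_key_available(void)
{
    uint16_t ax = 0x0100;                       /* AH=0x01: check for keystroke */
    uint8_t zf;
    __asm__ __volatile__("int $0x16; setz %0"
                         : "=q"(zf), "+a"(ax) :: "cc");
    return !zf;
}
```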
Alfman, that was true once; it’s not true now.
Since the introduction of flashing the firmware from within the firmware itself, you can boot a FreeDOS disc that runs in real mode and find you don’t have working USB ports, so you have to find a PS/2 keyboard. Even enabling USB legacy emulation of PS/2 fails, because the hardware vendor never checked that it worked, and somewhere along the way a firmware update broke it.
This can get ultra evil. I have had a computer where PS/2 works in the EFI setup and in real mode, yet fails as soon as you boot into Windows/Linux/FreeDOS, and USB doesn’t work in FreeDOS or the EFI setup. So now you have a computer with two keyboards connected: one PS/2 keyboard for firmware setup and one USB keyboard for the running OS. Notice something here: a real-mode OS has no keyboard at all. That is what some x86 motherboards out there are like.
Even serial ports can require odd quirks, like sending strange values to set the speed of operation.
Quirks are just absolutely horrible.
Alfman, take a generic, fully functional USB stack from a VM, attempt to run it on real hardware, and watch quite a few machines fail spectacularly until you have all the different set-up quirks coded in. The worst USB quirk I have seen: a generic USB stack setup resulted in the CPU fan turning off completely. This wasn’t some ARM board; it was an Intel laptop from a vendor I will not name, since they did fix it in a firmware update later. Yet that broken-firmware machine worked fine with Windows, Linux and FreeBSD, because all three send something that isn’t to the standard.
Some of the quirks are big enough to brick the hardware when attempting a hobby OS. That means people attempting hobby OSes at times get badly bitten.
Yes, in 1990 using real-mode code was a workaround for lots of issues for hobby OSes; come forward to 2017 and real-mode code may be completely busted, requiring the OS/bootloader to load drivers with quirk knowledge to get the hardware working.
Alfman, I wish it were how you described, where you drop to real mode and everything works; then hobby OS developers would stand a chance. Switching from protected to real mode and back costs overhead, but an OS can work that way. The problem is that even if a hobby OS developer takes the path of only using real-mode BIOS-exposed functions, they cannot promise, on today’s quirky hardware, that the hobby OS will run on bare metal without providing a restricted list of approved hardware.
People complain about Linux not being able to work on every machine that runs Windows. This is even worse for a hobby OS: it doesn’t have the resources Linux has to map out the hardware quirks.
oiaohm,
You don’t have to tell me, I’ve been saying that quite a bit recently.
To date, I’ve never seen a single instance where the motherboard BIOS keyboard handler failed to support its own USB and PS/2 ports; they are bundled together, after all.
You may not have seen this, but many different users have seen the BIOS keyboard handler not working.
https://ubuntuforums.org/showthread.php?t=1709532
INTERRUPT 16h FUNCTIONS (KEYBOARD), which according to the BIOS should be 100 percent dependable, are in fact no longer dependable. You do find firmware where it doesn’t work at all, or is intermittent: it works for a maximum of two minutes and then dies because a buffer somewhere fills up and it stops taking new input. That leaves most bootloaders keyboardless at the point Interrupt 16h fails, when you hit firmware like that. This is what makes it ultra fun when you attempt to multi-boot on those machines: the intermittent ones appear perfectly functional in the bootloader, yet if you attempt to run something like FreeDOS or a hobby OS using Int 16h, your keyboard magically falls out from under you. So the default BIOS- or EFI-provided keyboard interface can be badly quirked.
Alfman, yes, you have seen first hand where Linux fails to drive a PS/2 keyboard but has enough quirk handling to at least get a USB keyboard up. The same issue can appear under Windows as well, where Windows is missing the quirk to make both keyboards work. The reality that Windows and Linux cannot reliably do PS/2 and USB keyboards and mice on all the hardware out there really doesn’t give a hobby OS much hope of a good run.
Remember, a hobby OS can find itself in a position where it does not have the quirk information to fire up either a PS/2 or a USB keyboard that is stable and will keep on working on bare metal. An OS without a keyboard is fairly much stuffed for the desktop.
Of course, firing up a PS/2 keyboard using the standard-defined interfaces should be 100 percent stable; in reality it’s not. Motherboard makers test their BIOS keyboard stack emulation for a maximum of two minutes. So a modern-day BIOS only promises two minutes of keyboard functionality; after that you had better have fired up your own stuff with valid quirk info. Motherboard makers used to promise more, back when they used FreeDOS to do firmware updating.
Of course, all this becomes a lot simpler under a hypervisor/virtual machine for a hobby OS, as those do in fact test that their BIOS emulation is stable. So inside a virtual machine/hypervisor you can depend on Int 16h working to get yourself a keyboard (or the EFI equivalent); on bare metal, neither can be depended on to be anything more than temporary.
Yes, I agree with you that switching from real mode/VM86 to protected mode and back again to access the keyboard and the like is a “terribly unclean solution”, but this is what Windows 3.11 and earlier did in protected mode. It was good enough for early Windows; if the solution still worked dependably, it would be good enough for an early hobby OS today. The problem is that this solution does not work dependably outside virtual machines.
Yes, a standard defines what every OS should be able to use to get a basic keyboard working (yes, with overhead, and it is unclean), but whether current-day motherboards provide it is a complete luck of the draw. Virtual machines and hypervisors do more complete testing and have better test suites than motherboard makers do.
The thing a hobby OS needs to start out with is dependable hardware interfacing. The only place that has this these days is inside a virtual machine/hypervisor.
So it’s not that hobby OSes are dead; they have been forced to become virtual-machine-hosted OSes, and lots of people don’t run virtual machines, and many would not even consider installing one to try out a hobby OS, which reduces the market a hobby OS can advertise itself in to attract developers. Yes, the hobby OS market has shrunk for a reason.
End users of a hobby OS have to get tolerant of the fact that it may not function on any of the real hardware they have, directly due to how badly quirked that hardware is on bare metal.
In fact, if we could mandate a minimum standard of quality control on motherboards, like requiring that the EFI/BIOS-provided keyboard interfaces remain working, this would make life simpler for all OS providers as a fallback in case of issues. The issues causing hobby OSes to disappear off bare metal are in fact causing problems for every OS running on bare metal; it’s just that OSes with big enough resources have a big enough list of quirks and workarounds to hide the issue from end users most of the time.
I am really not happy with the volume of hardware that is quirked up on basic things like BIOS/EFI keyboard support.
oiaohm,
I’ve never seen that personally. But that’s kind of my point: if the BIOS and bootloaders work, then you can use it too. If those don’t work, then consumers would be entitled to a replacement or refund for a defective product.
Considering that both the BIOS and Windows stopped working for him one day, it’s pretty safe to say something broke.
I’d have told him to try resetting the CMOS, and failing that, return it to the manufacturer if under warranty.
It’s interesting that Ubuntu did work. He neglected to mention whether that was USB or PS/2. It’s entirely possible his PS/2 port was completely dead and USB was disabled in the BIOS, but Ubuntu picked up the keyboard just fine using its own USB stack.
Why are we diagnosing a case from 2011? Haha!
oiaohm,
I think part of the trouble is that manufacturers are only concerned with Windows compatibility and not much else.
The UEFI Forum could create an independent open source reference bootloader/OS whose only purpose is to stress-test the EFI implementation for compliance. All PCs should be required to pass these tests. That would go a long way in providing stronger assurances to developers that they could rely on the very mechanisms that were tested and certified.
Before the late ’90s, with VESA and other standards, you at least got a screen and keyboard up. Yes, you are right that the mouse before the late ’90s may have been DOA.
Today, due to the massive amount of variation and bugs in hardware, attempting to fire up on bare metal may leave you without keyboard, mouse, screen, networking and even serial, all because you are missing some quirk.
“PCI, USB xHCI and VGA”
USB xHCI is not 100 percent the same when it comes from different vendors, so unless you have the quirks for the USB controller in a machine, you don’t have USB ports.
http://forum.asrock.com/forum_posts.asp?TID=1548&title=asrock-970m-…
Yes, USB controllers are quirky in bad ways just dual-booting Windows, without even bringing in someone writing a new OS.
What you have to send to fire up different motherboard southbridges to get access to PCI differs as well, so due to quirks you may not have PCI either.
Back in the hobby OS boom, which was basically the age of DOS, you could drop to generic BIOS calls to fire up the hardware.
If you attempt to use just the basic EFI stuff today, you are in for many rude failures, like it working fine with EFI until you switch to protected mode, at which point EFI functionality fails for no good reason.
“Any new Hobby OS should provide a USB stack right after basic PS/2 keyboard/mouse and IDE drivers (or the platform-specific equivalents).”
The problem, christian, is that doing something like that will work in a virtual machine, as virtual machine implementations are fairly much functional without quirks.
Generic IDE drivers: people find out under Windows that these cause data loss. The Linux/FreeBSD kernels have a stack of detection and quirk alterations to make bare-metal IDE controllers behave. Some of these quirks make the news.
http://www.pcworld.com/article/3123075/linux/linux-wont-install-on-…
Windows 10 Mobile for ARM is restricted to Qualcomm chips only, because the mountain of quirks and undocumented behaviour in ARM is just massive.
christian, sorry to say, none of your ideas survive contact with real-world conditions. A hobby OS developer has a decent time if they restrict themselves to virtual machines, which are fairly quirk-free. When a hobby OS developer starts attempting to go bare metal, the hardware’s stack of quirks comes out at you.
A hobby OS has to reach a certain developer-base size before it can start handling the number of quirks needed to work on bare metal successfully enough to get a following in that field.
Basically, you have not been paying attention. Some of the massive quirks have been making the news. Remember, for every quirk big enough to affect the Linux kernel, there are at least another 10,000 quirks a hobby OS would have to get past to be successful on bare metal.
The hill to get over is just huge.
“I’d much rather do it now than 20 years ago”
I built a hobby OS for myself back in 1992. It was not too bad back then if you did not mind not having a mouse. Doing the same thing now is way harder.
I’d not heard anything about that, it sounds fascinating (in a horrifying kind of way). Tried googling but couldn’t seem to come up with any relevant results. Don’t suppose you could elaborate, or point me to any more on that?
There’s still a lot of exciting stuff going on in OSes, but more and more I’m reading about it from outside of OSnews. For example, I get more of my info regarding Redox OS from Phoronix (a “Linux” site) than I get from OSnews, and I would have thought that I would get blow-by-blow accounts of Redox development. Maybe the thing that’s changed is… Thom? Maybe his heart’s just not in it any more (or maybe it was never in it, just in BeOS / Haiku). The thing is that back in the day, an “OS” was basically just a kernel and a couple of supporting tools. This means that anyone could “compete” by writing a fairly small amount of stuff. Today, by contrast, you need a desktop, applications, and cross-platform compatibility.
If there’s anything that Linux and BSD have taught us, it’s that even with putting all our eggs in that one Linux basket, what we end up with is 1% desktop market penetration and a very timid upgrade cycle by the enterprise. There’s an order of magnitude more software, and making it all work is orders of magnitude more difficult. If that’s the bar for entry as an OSNews article, then frankly it is a very high bar.
I feel like OSnews could help in championing these burgeoning OSes, perhaps encouraging people to set up VMs or having regular reviews of these tiny OSes. That’s going to take some extraordinary effort from Thom, but certainly the OS “market” is not dead.
As for Mobile OSes, there’s also been a bunch of stuff happening, though it’s died again, with Sailfish, Firefox, Ubuntu, etc. I don’t believe the issue here is what Thom believes, however. I (and I bet many others here) have a bunch of older phones sitting around which I could shove new OSes onto. It’s not that we don’t, it’s that we can’t.
Unlike the IBM PC, which is largely hardware compatible with other PCs, save for some drivers, the mobile phone is not. Even the boot process is not standard, much less drivers, most importantly video drivers. Even getting mainline Linux running on a nexus device is a huge pain, and getting OSS drivers for Qualcomm stuff seems to be an impossible feat. How would this translate to a standard “OS distribution” which can be packaged and run without creating binary packages for each individual model of phone, like Cyanogen does? The closed nature of mobile phones makes mobile OSes harder. There needs to be a better solution.
I was hopeful for Project Ara ushering in a new hardware-independent world. But that died before it was born.
I guess my hope lies in the ARMv8 server standardization efforts by Linaro. Maybe Google will embrace it, for fun and giggles? Still driver issues, but then a unicorn appears and… well, umm, magic Turing completeness… the Bazaar… many eyes…
https://en.wikipedia.org/wiki/Project_Ara
Isn’t that somewhat already superseded by projects like Raspberry Pi and Beaglebone (and Pandaboard and…)?
No, Project Ara was a modular phone with upgradable components. The Raspberry Pi is not a phone and is not a great system architecture. Linaro is pushing an ARMv8 system architecture similar to what already exists for x86 PCs. The ARM architecture is varied and not guaranteed to have a common way of booting or of determining what hardware is available. The Raspberry Pi and the like use static device trees listing the fixed hardware available on them. That makes managing them difficult for OS vendors.
Greybus (the “modules” used this bus) was the main feature of the phone, I think. That’s going into mainline Linux, and I hope that means people can doodle with it wherever they are. Open source code is great because it often acts as documentation for these protocols.
I also think that one big problem is that you cannot directly earn money with software platforms anymore.
It is commonly expected that software platforms like operating systems, virtual machines, web frameworks, etc. are free of charge. Money has to be earned with App Stores or products running on top of all this technology. Like games, apps, web-stores, etc.
That’s bad, because it also means not much research is done in those areas. Today’s big operating systems are old, crappy and insecure. That should really change. But only a little is being done here: Microsoft’s Singularity and Midori (http://joeduffyblog.com/2015/11/03/blogging-about-midori/), or Redox (http://www.redox-os.org/). And even Microsoft has stopped that work, as far as I know.
It is really sad that you can earn more money with simple stuff like games and web work, but not so easily with complex software like operating systems…
But I still hope that this will change someday. It must, we cannot spend billions of dollars constantly fixing insecure operating systems. Security should be a business case.
So how, exactly, are Mac, Windows and Linux “the same”? Because they use the same UI paradigms? Well, it’s the same as saying “all green cars are the same… because they are all green”.
Because their UIs are all WIMP (“windows, icons, menus, pointer”). Their security systems are all built on the same old UNIX principles of root, users, groups and ACLs (access control lists). Their process models are also more or less the same. The sandboxing they support is also very similar. And finally, they all use the same monolithic kernel model and are all written in C.
Contrast that with the experimental OSes that try to see if they can rethink one of the things I listed above. For example, Singularity tested the idea that if we just rewrote all software (what a stupid idea), then we could maybe block an entire class of security exploits at the language level. That in turn would mean we did not need the process isolation that mainstream OSes have today. In the early years of computing you had a lot more such ideas playing out, as rewriting all software was actually sort of an option.
That’s better. At least some reasoning. But don’t you think those different approaches experimented with in the early days of computing were “played” out of the mainstream market for a reason? Maybe current UI paradigms and process models, as well as security approaches, have simply proven to be best practice, if three different OSes from completely different vendors decided to use them?
To use the car analogy again, it’s like saying “all non-electric cars are the same because they use an internal combustion engine”. Of course, some very fundamental principles are universal, but that does not mean “all instances of X are the same”…
Ragnarok,
I think there may be some unintentional cross-talk between you and what dpJudas said. I don’t think he means they are literally the same, but rather that they are similar and their combined popularity strangles the market, rendering alternative designs like Singularity less viable.
To reuse your analogy, just because combustion engines dominate the market doesn’t strictly mean it’s the best technology. After all, the big oil companies and their lobbyists play a huge role in that. Hypothetically had we invested in researching & manufacturing alternatives rather than combustion engines, we would probably have better & more efficient engine technology today. Profit motives often promote inferior products/tech, unfortunately.
Okay, I agree there might be some truth in this. My problem was this absolute generalization that “All these OSes are the same” when they are very different in so many aspects… Only certain very generic principles are “the same” between them.
[deleted]
If your comment is just a string of insults towards Americans and myself (I’m not American, by the way), then yes, it will be deleted. Deal with it.
If you cannot tell criticism from insults, then it’s your problem. Or maybe you just can’t deal with criticism? Well, that’s also your problem.
I know from your point of view it looks like pure insults, but that’s because of huge ego that tries to dismiss anything that threatens it. Now if you would get some different perspective, everything would suddenly look different. Although these days it’s very popular to get offended, or insulted, or in general consider yourself perfect, so maybe it’s not just you or American culture that’s to blame.
There is a steady stream of minor news stories from RISC OS, which continues to be used as a secondary OS by thousands of people: https://www.riscosopen.org/forum/forums/1
It’s certainly not the most exciting OS in active development, but then, given that watching operating systems develop is a fairly low-octane pastime at the best of times, perhaps that shouldn’t matter.
Maybe someone from the RISC OS scene could be invited to submit update articles once a month with the latest developments.
Other less glamorous systems must have similar amounts of low-level activity which would interest this readership.
P.S. This comment is about articles on OSNews rather than the general health of the scene. Obviously RISC OS does next to nothing for the general future health of the alternative operating systems scene.
Thousands? That I find hard to believe. If you said “Thousands of AmigaOS users” I might have believed you. I very much doubt there are more than a few hundred regular RISCOS users these days.
Ah well, regular users and people who use it as a secondary system are not necessarily the same. When I used RISC OS it was an irregular couple-of-times-a-month thing, to mess around with a few experiments. Also, as an example, I’m not a regular Windows user, but it’s still a significant secondary system for me (and will be until you can get MS Office on Linux).
I used it a lot in the 1990s, then some more in the mid-2000s, and might possibly give it a spin again on the Raspberry Pi I inherited. By all accounts, I should probably be in your numbers… but really, without the Pi, RISC OS would be virtually gone by now.
No, not really.
There’s a thriving scene of users under emulation:
http://www.virtualacorn.co.uk/index2.htm
You can buy new hardware to run Risc OS under emulation or on native hardware:
http://www.arsvcs.demon.co.uk/
There is a choice of native hardware — e.g. A9 Home:
http://www.cjemicros.co.uk/micros/products/a9home.shtml
And new systems are still appearing:
https://www.riscosopen.org/news/articles/2015/10/23/preview-of-a-who…
VirtualAcorn owner Aaron Timbrell bought RISC OS Ltd:
http://www.riscos.com/
He works for ARM, and ARM licenses his PC ARM emulator for development work, which is what funded VirtualAcorn’s acquisition of RISC OS. So the two forks of the OS — the proprietary, closed-source RISC OS 4/6 and the shared-source RISC OS 5 from ROOL:
https://www.riscosopen.org/content/
… are both alive. RO 4/6 mainly run on old Acorn kit and on emulators, RO 5 on modern ARM hardware.
It’s a lot livelier than the Amiga world, I’d say.
Yeah, I’m looking at some of those pages… are you sure they are still alive? They look like someone’s GCSE web design project from the late 1990s…
Iyonix, second hand, over £500? RiscPCs with a £600+ price tag? I can see why the platform jumped on the Raspberry Pi!
No, the Amiga scene is vibrant. I don’t follow it, but it is like any other good platform from the late ’80s that just won’t die.
Without an official SDK where apps just work off the bat with no tinkering or errors, no new operating system is worth the trouble it will cause the average user. Linux still relies on dependencies and justifies it with geeky technical reasons no average user actually cares about (saving disk space, for one, when the average desktop PC now comes with 1 TB of hard drive space and laptops with 500 GB). For phones, even with an SDK, all the world’s governments and hackers love to find their way onto the phone, so a commercial entity backing the platform and removing spyware by force is helpful, and you won’t get that from a new upstart, since they won’t have the money to fund security research and might not ever get it. I’m sure someone will say “but it’s POSIX compliant”, but that only matters if GUIs were never invented. The alternative operating systems must at least support an official SDK where, once you develop something and distribute the binary, it is guaranteed to work for 10 years like on Windows, Mac, iOS, and Android, if they are ever to be a reasonable choice.
You need more than an SDK, you need a stable ABI as well. Even if the Linux kernel had an official SDK, it’d be no good if third-party modules still needed recompiling with every kernel upgrade as they do now. An SDK is only half the puzzle. A stable base system is the other half.
dark2,
I think you make a valid point about combining the efforts of the indy projects with standardized SDKs. I honestly don’t know if it would make any real difference to the ultimate fate of indy projects, but I agree these are the kinds of solutions we need to combine efforts around.
However, just to nitpick something you said: Windows does not guarantee any third-party software will work at all, much less for 10 years. Third-party software (and even web) developers are always testing their software in numerous versions of Windows, because software can and often does break after Windows upgrades. It’s why “wait for the first service pack before upgrading” has become the conventional wisdom.
Most of the stuff that has trouble is either corporate software that barely got to the point of working before it was finalized, or stuff that doesn’t use the documented SDK. The end result is still the same though, unless there is an installer file that takes care of everything flawlessly for 99% of software, it’s still too much hassle for the end user.
dark2,
I actually do work mostly with corporate software, haha. But sometimes it works fine for years and then suddenly breaks on a new version of Windows for some obscure reason. I agree with you about the user’s expectations, but I merely objected to “guaranteed to work for 10 years like on Windows” because it’s an overstatement. Not to derail the discussion though…
Like everyone here, I’d like to see more of the indy-OS scene succeed, but it’s also my experience that they won’t, because reasons… If anyone can think of something *we* can do to increase indy viability, let’s have that discussion. The indy scene has a lot of technical talent, and I’m always interested in discussing things at technical levels, but the largest impediments are often economic & market related rather than technical. On the PC side the market has mostly stopped growing and on the mobile side operating systems are expected to be customized for specific hardware – there’s no such thing as generic mobile operating system builds which makes it extremely difficult for any indy operating system to gain traction.
Maybe a Mark Shuttleworth could supply much needed resources, but barring that what are the paths for an indy scene to succeed in the long term? Does anyone have ideas? Also, obviously there’s a lot of pessimism to overcome… so the ideal plan would additionally have to address that too, haha.
Mark closed bug #1 with feel-good PR stuff about how Android was now everywhere and desktop operating systems weren’t as important anymore. Linus said the thousands of window managers were fine… Any change would need to come from a famous character like these, and it would also mean having them admit they were wrong about previous statements. You need a business person in charge of the drive, not a techie.
dark2,
I find it difficult to envision many who would want to get into the world of indy OS development, especially if it’s not profitable. Conceivably one could do crowdsourcing or perhaps solicit corporate donors…
http://www.theregister.co.uk/2015/07/08/microsoft_donates_to_openbs…
On the other hand, thinking about it a bit longer makes me realize that perhaps it’s a bit misguided to think about this as a financial problem (even though it is an impediment) because the financial aspects would sort themselves out if only there were demand. A benefactor could spend millions on resources to build an OS and still not make a difference if there’s just no demand for it.
So instead of lack of money, maybe the lack of demand is just as detrimental. Rather than (or in addition to) financial solutions, we’d need to find ways to drive demand. The question is how to drive demand in a mature market where the incumbents are seen as already ‘good enough’?
In theory one needs to come up with a ‘killer feature’ that makes the indy os unique, like ZFS was for BSDs. But in practice how did that work out? As a linux user, I’ll admit being envious of the BSDs over their advanced file system, but it wasn’t enough to make me change my habits.
So perhaps I’m still thinking about it all wrong: instead of promoting new features, maybe the indy-os plan should be to sabotage the competition. Brilliant! haha. Now we’re thinking like microsoft, is this how successful businessmen think?
I think Moore’s law is a large part of the lack of hobbyist systems/applications. Once the speed of improvement plateaus I think we’ll start seeing the hobbyist side pop up more. Right now there’s not much incentive. By that I mean a hobbyist can’t keep up with the changing hardware developments. So right now the best experience/features are only available on closed system software. Once the hardware stops improving, there’s a much greater incentive for the software engineers to step up. The desire to tinker is always going to be there and manufacturer limitations are going to get more annoying once the hope of newer/faster hardware with new manufacturer features fades away.
On the topic of desktop operating systems: I think for them to succeed, Wine needs to hit 100% compatibility/completeness, or the open source community needs to sit down and smash out some working virtual machine drivers for 3DFX cards and really push the compatibility of the VMs up to 100%. Right now there’s a ton of legacy software which still won’t virtualise properly/easily. Until I can emulate DOS 6.22 with a Voodoo 3000 card on Linux, I can’t run all my old games. The same is true for Windows 98 with a Voodoo card, and even XP/Win7/Windows 10 don’t properly virtualise with full DirectX 9/10/11/12 support. These are things which need to be resolved, because the dark truth is that Windows is where the apps are that people want.
Darkmage,
That’s a lot of emphasis on ancient hardware. It would be fun to look at all that stuff again, no doubt. However, I’d be surprised if I still have any software that could use a 3DFX card from that era.
Granted, you are absolutely right that bulletproof support for modern Windows titles under Linux would be a welcome addition for all gamers who’d like to switch to Linux.
The point I was making with the ancient hardware was simply this: if I can’t access my own software from 15+ years ago, when I migrated from Windows (and yes, it’s still painful and I feel it regularly), how the heck is anyone else expected to move either?
In terms of the hardware moving target: the way I see it, hardware has gotten a lot more complex, and there’s no incentive to reverse engineer the radios/firmware of your phone, because next year a new phone will be out and all that work goes straight in the bin. Once development has slowed down, we will see more people working to improve what they have instead of buying new things. We’re already seeing Moore’s law’s limits starting to be hit. We’re at 10nm now, and at 7nm Intel is moving to carbon-based CPUs. Beyond that there are only a few more steppings until we’re at atomic-scale computing: 7nm, 3nm, and then we’re pretty much at the limits, from what I’ve read. Sure, there will be fancy cooling solutions and optimised paths etc., but the limit is approaching.
Darkmage,
Hard to tell what will happen. In the PC world, single-threaded execution hasn’t improved nearly as much as in the past. Instead we’re seeing more cores and specialized CPU opcodes. I predict that a few generations after we get bored of adding more cores, we’ll start seeing FPGAs cropping up in consumer gear so that software can rewire the transistors for specialized applications. Hopefully the damn thing uses an open API though, otherwise I’ll go on a lengthy rant about how they’re holding back innovation. Ah, the future.
The Voodoo hardware was an example. PCI passthrough emulation would work, but it’d also take up a slot on my motherboard (less than ideal especially on a laptop). Dosbox has some patches for Voodoo emulation, but the better option would be for the Voodoo emulation to be ported into QEMU so it’d work on Windows 95/98 and DOS. That’s well out of my skill range as a programmer. What I’m really more annoyed with regarding emulation is things such as DirectX 1-5 not working correctly in emulated OSes (mainly because most Emulation is using WINE for DirectX support with the notable exception of PCI passthrough), lacklustre support for 3D without passing a GPU through to the guest OS etc. Having said this, I’ve already made the jump over to 100% Linux systems and I am working to rewrite certain core tools I used on Windows over to Linux. I understand in a lot of ways this basically makes me a Linux fanatic and well outside of the normal Linux user base. But I like the idea of open systems with code which stays usable for decades after initially being written.
Darkmage,
I agree, improving the experience in the guest with 3D acceleration is something I want as well. VirtualBox supposedly has it, but I’ve never gotten it to work. I believe one problem is actually that acceleration on the host can’t safely/efficiently get passed into the guest, because these video cards lack an MMU to isolate memory access, which means the guest could attempt to attack the host through the GPU mapping operations. Fixing this in software is computationally costly. I really haven’t been following this closely, though, and it may be solved with recent hardware.
I’ve been keeping an eye on 3D acceleration through SPICE, but I really don’t know when it will become mature.
https://www.spice-space.org/features.html
Well, it is not so much a matter of a lack of news about alternative systems as it is a lack of awareness of that news. In the past, people would gladly submit articles on new filesystems or different memory allocators, but today it is usually either Android/iPhone or government surveillance of the populace that makes the news. Such are the changes in technology. The change in focus may be annoying, but the focus on government surveillance is at the very least very important, even if not directly “OSNews”…