Unix Archive

A brief history of Dell UNIX

“Dell UNIX? I didn’t know there was such a thing.” A couple of weeks ago I had my new XO with me for breakfast at a nearby bakery café. Other patrons were drawn to seeing an XO for the first time, including a Linux person from Dell. I mentioned Dell UNIX and we talked a little about the people who had worked on Dell UNIX. He expressed surprise that mention of Dell UNIX evokes the above quote so often and pointed out that Emacs source still has #ifdef for Dell UNIX. Quick Googling doesn’t reveal useful history of Dell UNIX, so here’s my version, a summary of the three major development releases.
↫ Charles H. Sauer

I sure had never heard of Dell UNIX, and despite the original version of the linked article being very, very old – 2008 – there are a few updates from 2020 and 2021 that add links to the files and instructions needed to install, set up, and run Dell UNIX in a virtual machine; 86Box or VirtualBox, specifically.

What was Dell UNIX? In the late ’80s, Dell started the Olympic project, an effort to create a completely new architecture spanning desktops, workstations, and servers, some of which would be using multiple processors. When searching for an operating system for this project, the only real option was UNIX, and as such, the Olympic team set out to develop a UNIX variant. The first version was based on System V Release 3.2, used Motif and the X Window System, included a DOS virtual machine called Merge to run, well, DOS applications, and was compatible with Microsoft Xenix. It might seem strange to us today, but Microsoft’s Xenix was incredibly popular at the time, and compatibility with it was a big deal. The Olympic project turned out to be too ambitious on the hardware front and was cancelled, but the Dell UNIX project continued to be developed.

The next release, Dell System V Release 4, was a massive one, and included a full X Window System desktop environment called X.desktop, an office suite, e-mail software, and a lot more. It also contained something Windows wouldn’t be getting for quite a few years to come: automatic configuration of device drivers. This was apparently so successful that it reduced the number of support calls during the first 90 days of availability by 90% compared to the previous release.

Dell SVR4 finally seemed like real UNIX on a PC. We were justifiably proud of the quality and comprehensiveness, especially considering that our team was so much smaller than those of our perceived competitors at ISC, SCO and Sun(!). The reviewers were impressed. Reportedly, Dell SVR4 was chosen by Intel as their reference implementation in their test labs, chosen by Oracle as their reference Intel UNIX implementation, and used by AT&T USL for in house projects requiring high reliability, in preference to their own ports of SVR4.0. (One count showed Dell had resolved about 1800 problems in the AT&T source.) I was astonished one morning in the winter of 1991-92 when Ed Zander, at the time president of SunSoft, and three other SunSoft executives arrived at my office, requesting Dell help with their plans to put Solaris on X86.
↫ Charles H. Sauer

Sadly, this would also prove to be the last release of Dell UNIX. After a few more point releases, the brass at Dell realised that Dell UNIX, intended to sell Dell hardware, was mostly being sold to people running it on non-Dell hardware, and after a short internal struggle, the entire project was cancelled, since it was costing them more than it was earning them.
As I noted, the article contains the files and instructions needed to run Dell UNIX today, on a virtual machine. I’m definitely going to try that out once I have some time, if only to take a peek at that X.desktop, because that looks absolutely stunning for its time.

What is PID 0?

The very short version: Unix PIDs do start at 0! PID 0 just isn’t shown to userspace through traditional APIs. PID 0 starts the kernel, then retires to a quiet life of helping a bit with process scheduling and power management. Also the entire web is mostly wrong about PID 0, because of one sentence on Wikipedia from 16 years ago. There’s a slightly longer short version right at the end, or you can stick with me for the extremely long middle bit! But surely you could just google what PID 0 is, right? Why am I even publishing this?
↫ David Anderson

What a great read. Just great.
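One easy way to see the “hidden from traditional APIs” part for yourself, at least on Linux: init (PID 1) records PID 0 as its parent, yet no process listing ever shows a PID 0 entry. A minimal sketch of my own, assuming a readable /proc:

```python
# Minimal sketch (Linux-specific): PID 1's recorded parent is PID 0,
# even though PID 0 itself never appears in process listings or /proc.
from pathlib import Path

def ppid_of(pid: int) -> int:
    """Return the PPid field from /proc/<pid>/status."""
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("PPid:"):
            return int(line.split()[1])
    raise ValueError(f"no PPid field found for pid {pid}")

if __name__ == "__main__":
    print(ppid_of(1))                # typically prints 0
    print(Path("/proc/0").exists())  # False: PID 0 has no /proc entry
```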

Evolution of the ELF object file format

The ELF object file format is adopted by many UNIX-like operating systems. While I’ve previously delved into the control structures of ELF and its predecessors, tracing the historical evolution of ELF and its relationship with the System V ABI can be interesting in itself.
↫ MaskRay

The article wasn’t lying. I had no reason to know this – and I’m pretty sure most of you didn’t either – but it turns out the standards that define ELF got caught up in the legal murkiness and nastiness of UNIX. After the dissolution of the committee governing ELF in 1995, stewardship went from one familiar name to the next: first Novell, then The Santa Cruz Operation, then Caldera, which renamed itself to The SCO Group, before eventually ending up at UnXis (now Xinuos) in 2011. In 2015, the last maintainer of ELF left Xinuos, and since then, it’s been effectively unmaintained. Which is kind of wild, considering ELF is a crucial building block of virtually all UNIX and UNIX-like operating systems today. The article mentions there’s a neutral Google Group that discusses, among other things, ELF, but that, too, has seen dwindling activity. Still, that group has reached consensus on some changes; changes that are now not reflected in any of the official texts. It’s a bit of a mess. If you ever wanted to know the status of ELF as a standard, this article’s for you.
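Standards-maintenance woes aside, the format itself is refreshingly easy to poke at. As a small illustration of what an ELF consumer deals with, here is a minimal sketch that reads the identification bytes and two header fields from a binary; the offsets follow the standard ELF header layout, and the /bin/ls path is just an example:

```python
# Minimal sketch: inspect the ELF identification bytes and two header fields.
# Field layout follows the standard ELF header (e_ident, then e_type, e_machine).
import struct

def elf_info(path: str) -> dict:
    with open(path, "rb") as f:
        ident = f.read(16)                       # e_ident[0..15]
        if ident[:4] != b"\x7fELF":
            raise ValueError(f"{path} is not an ELF file")
        endian = "<" if ident[5] == 1 else ">"   # EI_DATA: 1 = little, 2 = big
        e_type, e_machine = struct.unpack(endian + "HH", f.read(4))
        return {
            "class": {1: "ELF32", 2: "ELF64"}.get(ident[4], "unknown"),
            "endianness": {1: "little", 2: "big"}.get(ident[5], "unknown"),
            "type": e_type,        # e.g. 2 = executable, 3 = shared object
            "machine": e_machine,  # e.g. 62 = x86-64, 183 = AArch64
        }

if __name__ == "__main__":
    print(elf_info("/bin/ls"))   # any ELF binary on the system will do
```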

Writing a Unix clone in about a month

I needed a bit of a break from “real work” recently, so I started a new programming project that was low-stakes and purely recreational. On April 21st, I set out to see how much of a Unix-like operating system for x86_64 targets that I could put together in about a month. The result is Bunnix. Not including days I didn’t work on Bunnix for one reason or another, I spent 27 days on this project.
↫ Drew DeVault

Bunnix’s creator, Drew DeVault, has quite a bit of experience with writing operating systems, as they’re also the creator of Helios, an experimental microkernel operating system. Bunnix is remarkably capable for a 30-day project: it supports both BIOS and UEFI boot, and it’ll boot on real hardware too. It doesn’t have USB support, though, so if you’re going the real hardware route, you’ll need to take that into account for mouse and keyboard input. Bunnix has a relatively solid set of drivers, taking the short development time into account: among other things, there’s PCI, AHCI block devices, serial ports, framebuffers, and ext4 support. The kernel supports a virtual filesystem, a /dev filled with block devices, a terminal emulator, and more. Bunnix is single-user for now, so it doesn’t enforce file permissions, but DeVault states it should be relatively easy to implement multiuser support.

A unique characteristic of Bunnix is that it’s written mostly in Hare, complemented by some C. Hare is a relatively new programming language, which we touched on late last year when it was ported to OpenBSD. Implementing file systems proved to be one of the difficulties during development, partly due to Hare.

I also learned a lot about mixing source languages into a Hare project, since the kernel links together Hare, assembly, and C sources – it works remarkably well but there are some pain points I noticed, particularly with respect to building the ABI integration riggings. It’d be nice to automate conversion of C headers into Hare forward declaration modules. Some of this work already exists in hare-c, but has a ways to go. If I were to start again, I would probably be more careful in my design of the filesystem layer.
↫ Drew DeVault

DeVault’s post about Bunnix gives a lot more insight into its development, so I’d highly suggest heading on over to read more. Do note that DeVault considers Bunnix “done”, in the sense that the learning experience is over, and aside from a few random developments here and there, they won’t be doing any work on it anymore.

MacRelix: a Unix-like environment that runs in classic Mac OS

MacRelix is a Unix-like environment that runs in classic Mac OS. MacRelix natively supports classic 68K and PPC Mac OS, as well as Mac OS X on PPC via Carbon.
↫ MacRelix website

The creator of MacRelix, Josh Juran, published an article in 2019 detailing the origins of the project. As a Mac OS developer, he was so unhappy with both CodeWarrior and Apple’s Macintosh Programmer’s Workshop (MPW) that he set out to create what would become MacRelix in 1999. Reading through the limitations and roadblocks he experienced with CodeWarrior and MPW, it’s not hard to see why he got frustrated – CodeWarrior’s targets were apparently a mess and a half to deal with.

Then came target multiplication. Whereas the initial CodeWarrior developer releases shipped with each combination of language (C and Pascal) and architecture (68K and PPC) supported in a separate application, a later version of the IDE unified these, allowing the developer to have a single project file per project. To allow the same project to be built for both 68K and PPC architectures, the project data model included targets: One target would compile for 68K and link against 68K libraries, another would do the same for PPC. Targets could also be used to select an optimized build versus one for debugging. Combining both dichotomies yields four targets: 68K debug, 68K optimized, PPC debug, and PPC optimized. Then if your project involves multiple executables, like a code resource or shared library in addition to an application, you now have eight targets. Or, if you support one of, say, 68020 optimization, profiling, or a third executable, make that twelve. Or, for all of them, twenty-seven.
↫ Josh Juran

Changing an option in your application required you to change it in every single target, too, which I can easily see being incredibly frustrating. MPW, for its part, was a massive improvement, he argues, but while it was clearly inspired by UNIX, it didn’t seem to actually implement any of the features and characteristics of UNIX.

However, very much unlike Unix, the MPW Shell had only a single thread of execution — only one program could be running at once. Not only that, but there was no way for MPW’s compiled plugins (called tools) to invoke other tools or scripts — not even via system() (which blocks the calling program until the called program exits). Therefore, Make couldn’t actually do anything, but only printed out the commands for the user to run manually. You could code in Perl instead of the built-in language, but then your scripts couldn’t run other programs — only MPW shell scripts could do that.
↫ Josh Juran

The limitations Juran was experiencing with these two tools pushed him to create his own solution, which went well beyond what MPW offered, even in 2019 when this article was published.

Nowadays, MacRelix has pipes, signals, system calls, TCP sockets, and more. It works on both 68K and PowerPC Mac systems and builds as Carbon to run natively in OS X. It can be used on any Mac OS version from System 7 to Mac OS X 10.6 “Snow Leopard” (after which Apple removed the Rosetta PowerPC emulator). I haven’t implemented fork() yet, but I know how to do it. In addition to a Unix-like file system interface (which even handles long names by storing them in Desktop database comment fields), MacRelix has a /proc filesystem (with human readable stack crawls) and also maps various parts of Mac OS (e.g. the ROM image in /sys/mac/rom).
↫ Josh Juran

I had never heard of MacRelix, but it seems like an amazing tool Juran put a lot of thought, effort, and love into. Sadly, with the number of PowerPC Mac OS X users being vanishingly small, and the number of classic Mac OS users smaller still, the future of MacRelix seems uncertain. I wonder what parts of it can be salvaged and upgraded to work on ARM macOS or even Intel macOS, because I think the ideas and concepts are incredibly cool.

A related project by Juran is something called FORGE, a portable windowing API built on a virtual file system, meaning that instead of functions and objects, it uses files. Juran mentions the example of a window title – the title is a file, and if you want to change the title of that window you just change the file, which will be instantly reflected in the GUI. Juran includes a Hello World example in his article; even though I’m not a programmer, that little tidbit of code makes perfect sense to me, and I understood it instantly. Of course, anything more complex will quickly leave my wheelhouse, but intuitively, I really like this. FORGE exists as a prototype inside MacRelix, so you can play with this concept while using MacRelix.
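To give a flavour of the files-instead-of-function-calls idea, here is a toy sketch of my own – it is not Juran’s FORGE code, and the directory layout and the Window helper are invented purely for illustration:

```python
# Toy illustration of a FORGE-style, file-based windowing API.
# Everything here (paths, the Window class) is invented for the sketch;
# the real FORGE exposes its own virtual filesystem inside MacRelix.
from pathlib import Path

class Window:
    """A 'window' represented as a directory of plain files."""

    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)
        (self.root / "title").write_text("")
        (self.root / "text").write_text("")

    def set_title(self, title: str) -> None:
        # Changing the title is just rewriting a file; a file-based window
        # server would notice the write and redraw the title bar.
        (self.root / "title").write_text(title)

    def set_text(self, text: str) -> None:
        (self.root / "text").write_text(text)

if __name__ == "__main__":
    win = Window(Path("/tmp/toy-forge/windows/1"))  # stand-in for a virtual FS
    win.set_title("Hello world")
    win.set_text("Hello world\n")
```

The appeal of the design is that anything that can read and write files – a shell script, a text editor, plain cat – can drive the GUI, with no bindings or API headers required.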

Windows NT and NetWare on PA-RISC, and an HP-UX port to x86

Back when I was working on my article about PA-RISC, HP-UX, and UNIX workstations in general, I made extensive use of OpenPA, Paul Weissmann’s invaluable and incredibly detailed resource about HP’s workstation efforts, HP-UX, and tons of related projects and products. Weissmann’s been doing some serious digging, and has unearthed details about a number of essentially forgotten operating system efforts. First, it turns out HP was porting Windows NT to PA-RISC in the early ’90s.

Several magazine sources and USEnet posts around 1993 point to HP pursuing a PA-RISC port to NT, modified the PA-RISC architecture for bi-endianness and even conducted a back-room presentation at the ’94 Comdex conference of a (modified HP 712?) PA-7100LC workstation running Windows NT. Mentions of NT on PA-RISC continued in 1994 with some customer interest but ended around 1995.
↫ Paul Weissmann at OpenPA

The port eventually fizzled out due to a lack of interest from both customers and application developers, and HP realised its time was better spent on what it saw as the future of x86, Intel’s Itanium, instead. HP also planned to work together with Novell to port NetWare to PA-RISC, but the work took longer than expected and it, too, was cancelled. The most recent secretive effort was the port of HP-UX to x86, an endeavour that took place during the final days of the UNIX workstation market.

Parts of the conversation in these documents mention a successful boot of HP-UX on x86 in December of 2009, with porting efforts projected to cost 100M+ between 2010 and 2016. The plan was for mission-critical x86 systems (ProLiant DL980 and Superdome with x86) and first releases projected in 2011 (developer) and 2012 (Superdome and Linux ABI).
↫ Paul Weissmann at OpenPA

I’m especially curious about that last one, as porting HP-UX to x86 seems like a massive effort during a time when it was already obvious Linux had completely obliterated the traditional UNIX market. It really feels like the last death saving throws of a platform everybody already knew wasn’t going to make it.

SysV init 3.09 released

Most of the Linux world has moved to systemd by now, but there are still quite a few other popular init systems, too. One of those is the venerable SysV init, which saw a brand new release yesterday. The biggest improvement also seems like it’ll enable a match made in heaven: SysVinit, but with musl.

On Linux distributions which use the musl C library (instead of glibc) we can now build properly. Specifically, the hddown helper program now builds on musl C systems.
↫ SysVinit 3.09 release notes

It’s important init systems like SysV init and runit don’t just die off or lose steam because of the systemd juggernaut, as competition, alternatives, and different ideas are what makes open source what it is.

Running UNIX on a Nintendo Entertainment System

Who wouldn’t want to run a UNIX-like operating system on their NES or Famicom? Although there’s arguably no practical reason for doing so, decrazyo has cobbled together a working port of Little Unix (LUnix), which was originally written for the Commodore 64 and 128 by Daniel Dallmann. The impetus for this project was initially curiosity, but when decrazyo saw that someone had already written a UNIX-like OS for the 6502 processor, it seemed apparent that the NES was too similar to the C64 to not port it.
↫ Maya Posch for Hackaday

This is peak computing.

The history of Xenix

In the November 1980 issue of BYTE, the publication reported that Microsoft signed an agreement with Western Electric for the rights to develop and market UNIX from Bell Laboratories. The version of UNIX from Microsoft was to be specifically for the PDP-11, the Intel 8086, the Zilog Z8000, and the Motorola 68000, and its name was XENIX. Its major selling points were that it was supposed to be available for 16 bit microcomputers and that it would have MS BASIC, FORTRAN, and COBOL which were already widespread.
↫ Bradford Morgan White

The story of Xenix, Microsoft’s UNIX.

Bringing the Unix philosophy to the 21st century

The Unix philosophy of using compact expert tools that do one thing well and pipelining them together to manipulate data is a great idea and has worked well for the past few decades. This philosophy was outlined in the 1978 Foreword to the Bell System Technical Journal describing the UNIX Time-Sharing System: Items i and ii are oft repeated, and for good reason. But it is time to take this philosophy to the 21st century by further defining a standard output format for non-interactive use.
↫ Kelly Brazil

This seems like a topic people will have calm opinions about.
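To make the “standard output format for non-interactive use” idea concrete: instead of every consumer scraping whitespace-aligned columns with awk and cut, a tool emits something machine-readable (JSON being the obvious candidate) and the next program in the pipeline picks fields by name. A minimal sketch of the idea of my own; the field names and the /proc/mounts source are just an example:

```python
# Minimal sketch of structured, non-interactive output: print JSON on stdout
# so downstream tools can select fields by name instead of by column position.
import json
import sys

def list_mounts():
    """Yield mount entries parsed from /proc/mounts (Linux-specific example)."""
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype, options, *_ = line.split()
            yield {
                "device": device,
                "mountpoint": mountpoint,
                "fstype": fstype,
                "options": options.split(","),
            }

if __name__ == "__main__":
    json.dump(list(list_mounts()), sys.stdout, indent=2)
    sys.stdout.write("\n")
```

A consumer can then pipe this into jq or load it with json.load(), rather than guessing which column the mount point landed in this week.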

The Unix V6 shell and how control flow worked in it

On Unix, ‘test’ and ‘[’ are two names for (almost) the same program and shell builtin. Although today people mostly use it under its ‘[’ name, when it was introduced in V7 alongside the Bourne shell, it only was called ‘test’; the ‘[’ name was only nascent until years later. I don’t know for sure why it was called ‘test’, but there are interesting hints about its potential genesis in the shell used in V6 Research Unix, the predecessor to V7, and the control flow constructs that shell used.
↫ Chris Siebenmann

I’m fairly sure if I read this about 12 more times, I’ll start to maybe understand some of it.

Bringing memory safety to sudo and su by rewriting them in Rust

The sudo and su utilities mediate a critical privilege boundary on just about every open source operating system that powers the Internet. Unfortunately, these utilities have a long history of memory safety issues. By rewriting sudo and su in Rust we can make sure they don’t suffer from any more memory safety vulnerabilities. We’re going to get it done.

Like I said – Rust is everywhere. Of course, these specific rewrites are not necessarily going to be picked up by the various Linux distributions, but the fact that people are starting projects like this means it won’t be long before we see core UNIX utilities rewritten in Rust making their way to our machines.

The mass extinction of UNIX workstations

Back in the ’90s and very early 2000s, a whole market segment of computers existed that we don’t really talk about anymore today: the UNIX workstation. They were non-x86 machines running one of the many commercial UNIX variants, and were used for the very high end of computing. They were expensive, unique, different, and quite often incredibly overengineered.

Countless companies made and sold these UNIX workstations. SGI was a big player in this market, with their fancy, colourful machines with MIPS processors running IRIX. There was also Sun Microsystems (and Oracle in the tail end), selling ever more powerful UltraSPARC workstations running Solaris. Industry legend DEC sold Alpha machines running Digital UNIX (later renamed to Tru64 UNIX when DEC was acquired by Compaq in 1998). IBM of course also sold UNIX workstations, powered by their PowerPC architecture and AIX operating system.

As x86 became ever more powerful and versatile, and with the rise of Linux as a capable UNIX replacement and the adoption of the NT-based versions of Windows, the days of the UNIX workstations were numbered. A few years into the new millennium, virtually all traditional UNIX vendors had ended production of their workstations and in some cases even their associated architectures, with a lacklustre collective effort to move over to Intel’s Itanium – which didn’t exactly go anywhere and is now nothing more than a sour footnote in computing history. Approaching roughly 2010, all the UNIX workstations had disappeared. Development of MIPS, UltraSPARC (for workstations), Alpha, and others had all been wound down, and with a few exceptions, the various commercial UNIX variants started to languish in extended support purgatory, and by now, they’re all pretty much dead (save for Solaris). Users and industries moved on to x86 on the hardware side, and Linux, Windows, and in some cases, Mac OS X on the software side.

I’ve always been fascinated by these UNIX workstations. They were these mysterious, unique computers running software that was entirely alien to me, and they were impossibly expensive. Over the years, I’ve owned exactly one of these machines – a Sun Ultra 5 running Solaris 9 – and I remember enjoying that little machine greatly. I was a student living in a tiny apartment with not much money to spare, but back in those days, you couldn’t load a single page on an online auction website without stumbling over piles of Ultra 5s and other UNIX workstations, so they were cheap and plentiful. Even as my financial situation improved and money wasn’t short anymore, my apartment was still far too small to buy even more computers, especially since UNIX workstations tended to be big and noisy.

Fast forward to the 2020s, however, and everything’s changed. My house has plenty of space, and I even have my own dedicated office for work and computer nonsense, so I’ve got more than enough room to indulge and buy UNIX workstations. It was time to get back in the saddle. But soon I realised times had changed. Over the past few years, I have come to learn that if you want to get into buying, using, and learning from UNIX workstations today, you’ll run into various problems which can roughly be filed into three main categories: hardware availability, operating system availability, and third party software availability. I’ll walk through all three of these and give some examples that I’ve encountered, most of them based on the purchase of a UNIX workstation from a vendor I haven’t mentioned yet: Hewlett Packard.

Hardware availability: a tulip for a house

The first place most people would go to in order to buy a classic UNIX workstation is eBay. Everyone’s favourite auction site and online marketplace is filled with all kinds of UNIX workstations, from the ’80s all the way up to the final machines from the early 2000s. You’ll soon notice, however, that pricing seems to have gone – pardon my Gaelic – absolutely batshit insane. Are you interested in a Sun Ultra 45, from 2005, without any warranty and excluding shipping? That’ll be anywhere from €1500 to €2500. Or are you more into SGI, and looking to buy a 175 MHz Indigo 2 from the mid-’90s? Better pony up at least €1250. Something as underpowered as a Sun Ultra 10 from 1998 will run for anything between €700 and €1300. Getting something more powerful like an SGI Fuel? Forget about it.

Going to refurbishers won’t help you much either. Just these past few days I was in contact with a refurbisher here in Sweden who is charging over €4000 for a Sun Ultra 45. For a US perspective, a refurbisher like UNIX HQ, for instance, has quite a decent selection of machines, but be ready to shell out $2000 for an IBM IntelliStation POWER 285 running AIX, $1300 for a Sun Blade 2500, or $2000-$2500 for an SGI Fuel, to list just a few. Of course, these prices are without shipping or possible customs fees.

It will come as no surprise that shipping these machines is expensive. Shipping a UNIX workstation from the US – where supply is relatively ample – to Europe often costs more than the computer itself, easily doubling your total costs. On top of that, there’s the crapshoot lottery of customs fees, which, depending on the customs official’s mood, can really be just about anything.

I honestly have no idea why pricing has skyrocketed as much as it has. Machines like these were far, far cheaper only 5-10 years ago, but it seems something happened that pushed them up – quite a few of them are definitely not rare, so I doubt rarity is the cause. Demand can’t exactly be high either, so I doubt there are so many people buying these that they’re forcing the price to go up. I do have a few theories, such as some machines being absolutely required in some specific niche somewhere and sellers just sitting on them until one breaks and must be replaced, whatever the cost.

The HP-UX Porting and Archive Centre

The HP-UX Porting and Archive Centre was established in August 1992 in the Department of Computer Science at Liverpool University in the United Kingdom, but has been run by Liverpool-based Connect Internet Solutions Limited since 1995. Its primary aim is to make public domain, freeware and Open Source software more readily available to users of Hewlett-Packard UNIX systems. The archive began with an initial collection of 150 packages, all of which had been successfully compiled and tested locally by staff at the Liverpool centre before being installed and made available on the archive. The centre continues to act as a porting body as well as an archive site – all software held in the archive has been verified to run successfully on HP-UX PA-RISC (and now Itanium) systems. As of October 2012, the Centre held over 1,500 packages!

For reasons that will become apparent somewhere in the coming weeks, I’ve been spending a lot of time exploring and using HP-UX, and the HP-UX Porting and Archive Centre is one of those things that the four enthusiasts running HP-UX might find useful. It’s a vast collection of open source and freeware software built for HP-UX, installable either manually or using a specific script to resolve dependencies. This is one heck of a labour of love, considering HP-UX’s, shall we say, unpopular status.

Sadly, the Archive has a major limitation, one that I ran into: since 2017, only the very latest version of HP-UX – 11.31, also known as 11i v3 – is supported, meaning packages for the version I’m running, 11.11 or 11i v1, were deleted long ago. On top of that, since 2020, all PA-RISC packages are marked as deprecated, meaning they’re no longer updated and will, at some point, be deleted too, leaving only Itanium 2 packages up for download. Using HP-UX as an enthusiast is one hell of a challenge, I can tell you that.

Transcending POSIX: the end of an era?

In this article, we provide a holistic view of the Portable Operating System Interface (POSIX) abstractions by a systematic review of their historical evolution. We discuss some of the key factors that drove the evolution and identify the pitfalls that make them infeasible when building modern applications.

Some light reading to start the week.

The MGR Window System

MGR, sometimes said to be short for “ManaGeR”, sometimes short for “Munger”, is a simple network transparent window system. It was originally developed for the Sun 3 series of workstations by Stephen Uhler and colleagues beginning in 1984 while at Bellcore (later Telcordia, now part of Ericsson) and later enhanced by many others. The window system ran on many different hardware platforms, at least these: Sun 3/xx workstations running SunOS, which was the original development platform, Sun SPARCstations (SunOS and then ported by me to Solaris), Intel x86 based PCs (Coherent, Minix, FreeBSD or Linux), Atari ST (under MiNT), AT&T UnixPC (SysV) and the Macintosh.

I had never heard of MGR before, so this was a great read.

Why V7 Unix matters so much

When I talk about things involving the history of Unix, I often wind up mentioning V7, also known as the Seventh Edition of Research Unix from Bell Labs (for a recent example, in my entry on when Unix got stack size limits). If you’re relatively new to the history of Unix, you might wonder why V7 keeps coming up so often. There are a number of reasons that V7 matters so much, both for the history of Unix and for what we think of as being ‘Unix’ and the Unix way.

The history of Unix is… Complicated.