Unix Archive

UNIX man pages

What might be somewhat more surprising, though, considering its research origins, is that Unix almost since the very beginning had a comprehensive set of online reference documentation for all its commands, system calls, file formats, etc. These are the manual- or man-pages. On Unix systems used interactively, the man-pages have historically always been installed, space permitting. The way the manual pages have evolved and how they are used has changed over the decades. This set of posts is intended to give people unfamiliar with them an overview, as well as offer a review to seasoned users. ↫ Alex Bochannek

Right in this first article in the series there’s an interesting observation I never stopped and thought about: because the original creators of UNIX were writing the content of man pages with the very tools they were creating for UNIX, it led to a virtuous cycle. “Unix tools were used to document Unix, improving the documentation tools themselves as well.” I tend to use the internet now to learn how specific tools and commands work, but having such detailed man pages built right into the operating system was a huge deal pre-internet.
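The structure Bochannek describes is still right there on any Unix-like system: the numbered manual sections split the reference by category, matching the “commands, system calls, file formats” enumeration above. All of these are standard commands:

    man 1 ls        # section 1: user commands
    man 2 open      # section 2: system calls
    man 5 passwd    # section 5: file formats
    apropos spell   # keyword search across all sections (same as 'man -k spell')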

UnixWare in 2025: still actively developed and maintained

It kind of goes by under the radar, but aside from HP-UX, Solaris, and AIX, there’s another traditional classic UNIX still in active development today: UnixWare (and its sibling, OpenServer). Now owned and developed by Xinuos, UnixWare and other related code and IP were acquired by them when the much-hated SCO crashed and burned about 15 years ago, and they’ve been maintaining it ever since. About a year ago, Xinuos released Update Pack 1 and Maintenance Pack 1 for UnixWare 7 Definitive 2018, followed by similar update packs for OpenServer 6 later in 2024. These update packs bring a bunch of bugfixes and performance improvements, as well as a slew of updated open source components, like new versions of Samba, sendmail, GCC and tons of other GNU components, OpenSSH and OpenSSL, and so, so much more, enabling a relatively modern and up-to-date build and porting environment. They can be installed through the patchck update utility, and while the Maintenance Pack is free for existing registered users, the Update Pack requires a separate license. UnixWare, while fully capable as a classic UNIX for workstations, isn’t really aimed at individuals or hobbyists (sadly), and instead focuses on existing enterprise deployments, where such licensing costs are par for the course. UnixWare runs on x86, and can be installed both on real hardware as well as in various virtualised environments. I contacted Xinuos a few days ago for a review license, and they supplied me with one so I can experiment with and write about UnixWare. I’ve currently got it installed in Linux KVM, where it runs quite well, including the full X11R6 CDE desktop environment and graphical administration tools. Installing updates is a breeze thanks to patchck automating the process of finding, downloading, and installing the correct ones. I intend to ask Xinuos about an optimal configuration for running UnixWare on real hardware, too.

The Heirloom Project

Update: there’s a fork called heirloom-ng that is actually still somewhat maintained and contains some more changes and modernisations compared to the old version.

The Heirloom Project provides traditional implementations of standard Unix utilities. In many cases, they have been derived from original Unix material released as Open Source by Caldera and Sun. Interfaces follow traditional practice; they remain generally compatible with System V, although extensions that have become common use over the course of time are sometimes provided. Most utilities are also included in a variant that aims at POSIX conformance. On the interior, technologies for the twenty-first century such as the UTF-8 character encoding or OpenType fonts are supported. ↫ The Heirloom Project website

I had never heard of this before, but I like the approach they’re taking. This isn’t just taking System V tools and making them work on a modern UNIX-like system as-is; they’re also improving them by adding support for modern technologies, without actually changing their classic nature and the way old-fashioned users expect them to work. Sadly, the project seems to be dead, as the code hasn’t been altered since 2008. Perhaps someone new is willing to take up this project? As it currently stands, the tools are available for Linux, Solaris, Open UNIX, HP-UX, AIX, FreeBSD, NetBSD, and OpenBSD, but considering how long the code has gone untouched, I wonder if they still run and work on any of those systems today. They also come in several variants that comply with different versions of the POSIX standard.

Apple’s macOS UNIX certification is a lie

As an online discussion grows longer, the probability of someone mentioning that macOS is a UNIX approaches 1. In fact, it was only late last year that The Open Group announced that macOS 15.0 was, once again, certified as UNIX, continuing Apple’s long-standing tradition of certifying macOS releases as “real” UNIX®. What does any of this actually mean, though? Well, it turns out that if you actually dive into Apple’s conformance statements for macOS’ UNIX certification, it doesn’t really mean anything at all.

First and foremost, we have to understand what UNIX certification really means. In order to be allowed to use the UNIX trademark, your operating system needs to comply with the Single UNIX Specification (SUS), which specifies programming interfaces for C, a command-line shell, and user commands, more or less identical to POSIX, as well as the X/Open Curses specification. The latest version is SUS version 4, originally published in 2008, with amendments published in 2013 and 2016, which were rolled up into version 4 in 2018. The various versions of the SUS that exist, in turn, correspond to a specific UNIX trademark. In table form:

    Trademark    SUS version    SUS published in:    SUS last amended in:
    UNIX® 93     n.a.           n.a.                 n.a.
    UNIX® 95     Version 1      1994                 n.a.
    UNIX® 98     Version 2      1997                 n.a.
    UNIX® 03     Version 3      2002                 2004
    UNIX® V7     Version 4      2008                 2016 (2018 for roll-up)

When you read that macOS is a certified UNIX, which of these versions and trademarks do you assume macOS complies with? You’d assume they would just target the latest trademark and SUS version, right? This would allow macOS to carry the UNIX® V7 trademark, because they would conform to version 4 of the SUS, whose latest amendments date to 2016. The real answer is that macOS 15.0 only conforms to version 3 of the SUS, which dates all the way back to the ancient times of 2004, and as such, macOS is only UNIX® 03 (on both Intel and ARM). However, you can argue this is just semantics, since it’s not like UNIX and POSIX are very inclined to change.

So now, like the UNIX nerd that you are, you want to see all this for yourself. You use macOS, safe in the knowledge that unlike those peasants using Linux or one of the BSDs, you’re using a real UNIX®. So you can just download all the test suites (if you can afford them, but that’s a whole different can of worms) and run them, replicating Apple’s compliance testing, seeing for yourself, on your own macOS 15 installation, that macOS 15 is a real UNIX®, right?

Well, no, you can’t, because the version of macOS 15 Apple certifies is not the version that’s running on everyone’s supported Macs. To gain its much-vaunted UNIX certification for macOS, Apple cheats. A lot.

The various documents Apple needs to submit to The Open Group as part of the UNIX certification process are freely available, and mostly it’s a lot of very technical questions about various very specific aspects of macOS’ UNIX and POSIX compliance few of us would be able to corroborate without extensive research and in-depth knowledge of macOS, UNIX, and POSIX. However, at the end of every one of these Conformance Statements, there’s a text field where the applicant can write down “additional, explanatory material that was provided by the vendor”, and it’s in these appendices where we can see just how much Apple has to cheat to ensure macOS passes the various UNIX® 03 certification tests.
In the first of these four documents, Internationalised System Calls and Libraries Extended V3, Apple’s “additional, explanatory material” reads as follows:

Question 27: By default, core file generation is not enabled. To enable core file generation, you can issue this command: sudo launchctl limit core unlimited

Testing Environment Addendum: macOS version 15.0 Sequoia, like previous versions, includes an additional security mechanism known as System Integrity Protection (SIP). This security policy applies to every running process, including privileged code and code that runs out of the sandbox. The policy extends additional protections to components on disk and at run-time, only allowing system binaries to be modified by the system installer and software updates. Code injection and runtime attachments to system binaries are no longer permitted. To run the VSX conformance test suite we first disable SIP as follows:
– Shut down the system.
– Press and hold the power button. Keep holding it while you see the Apple logo and the message “Continue holding for startup options”.
– Release the power button when you see “Loading startup options”.
– Choose “Options” and click “Continue”.
– Select an administrator account and enter its password.
– From the Utilities menu in the Menu Bar, select Terminal.
– At the prompt, issue the following command: “csrutil disable”.
– You should see a message that SIP is disabled. From the Apple menu, select “Restart”.

By default, macOS coalesces timeouts that are scheduled to occur within 5 seconds of each other. This can randomly cause some sleep calls to sleep for different times than requested (which affects tests of file access times) so we disable this coalescing when testing. To disable timeout coalescing issue this command: sudo sysctl -w kern.timer.coalescing_enabled=0

By default there is no root user. We enable the root user for testing using the following series of steps:
– Launch the Directory Utility by pressing Command and Space, and then typing “Directory Utility”.
– Click the Lock icon in Directory Utility and authenticate by entering an Administrator username and password.
– From the Menu Bar in Directory Utility, choose Edit -> Enable Root User. Then enter a password for the root user, and confirm it.
– Note: If you choose, you can later Disable Root User via the same menu.
↫ Apple’s appendix to Internationalised System Calls and Libraries Extended V3

The second conformance statement, Commands and Utilities V4, has another appendix, and it’s a real doozy (remarks repeated from the previous appendix have been removed for brevity). It opens with yet another “Testing Environment Addendum”, and the third and fourth conformance statements have similar appendices.
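Condensed, the non-Recovery parts of that first appendix come down to two commands, taken verbatim from Apple’s text above:

    # condensed from Apple's appendix quoted above
    sudo launchctl limit core unlimited              # allow core file generation
    sudo sysctl -w kern.timer.coalescing_enabled=0   # disable timer coalescing
    # 'csrutil disable' has to be issued from the Recovery environment's
    # Terminal, and the root account enabled via Directory Utility, per
    # the steps quoted above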

How UNIX spell ran in 64kB RAM

How do you fit a 250kB dictionary in 64kB of RAM and still perform fast lookups? For reference, even with modern compression techniques like gzip -9, you can’t compress this file below 85kB. In the 1970s, Douglas McIlroy faced this exact challenge while implementing the spell checker for Unix at AT&T. The constraints of the PDP-11 computer meant the entire dictionary needed to fit in just 64kB of RAM. A seemingly impossible task. ↫ Abhinav Upadhyay

They still managed to do it, but had to employ some incredibly clever tricks to make it work, and make it work fast. Such skillful engineers interested in optimising and eking the most possible performance out of underpowered hardware still exist today, but they’re not in any position to make lasting changes at any of the companies defining our technology today. Why spend money on skilled engineers, when you can just throw cheap hardware at the problem? I wonder just how many resources the spellchecking feature in Word or LibreOffice Writer takes up.
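The core trick the article walks through: hash each dictionary word into a 27-bit value, sort the hashes, and store only the compressed gaps between successive hashes, which gets close to the information-theoretic floor of roughly 13.6 bits per word. A quick back-of-the-envelope check with bc, assuming the commonly cited figure of roughly 30,000 words (the exact count is in the article):

    # lower bound in bits per word: log2(2^27 / n) + 1/ln 2, with n = 30000
    echo 'l(2^27 / 30000) / l(2) + 1 / l(2)' | bc -l    # ~= 13.57
    # total size: ~30,000 words at ~13.6 bits each
    echo '30000 * 13.6 / 8 / 1024' | bc -l              # ~= 49.8 (kB)

About 50kB for the whole dictionary, leaving room in a 64kB address space for the code itself.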

The history and use of /etc/glob in early Unixes

One of the innovations that the V7 Bourne shell introduced was built in shell wildcard globbing, which is to say expanding things like *, ?, and so on. Of course Unix had shell wildcards well before V7, but in V6 and earlier, the shell didn’t implement globbing itself; instead this was delegated to an external program, /etc/glob (this affects things like looking into the history of Unix shell wildcards, because you have to know to look at the glob source, not the shell). ↫ Chris Siebenmann

I never knew expanding wildcards in UNIX shells was once done by a separate program, but if you stop and think about the original UNIX philosophy, it kind of makes sense. On a slightly related note, I’m currently very deep into setting up, playing with, and actively using HP-UX 11i v1 on the HP c8000 I was able to buy thanks to countless donations from you all, OSNews readers, and one of the things I want to get working is email in dtmail, the CDE email program. However, dtmail is old, and wants you to do email the UNIX way: instead of dtmail retrieving and sending email itself, it expects other programs to do those tasks for you. In other words, to set up and use dtmail (instead of relying on a 2010 port of Thunderbird), I’ll have to learn how to set up things like sendmail, fetchmail, or alternatives to those tools. Those programs will in turn dump the emails in the maildir format for dtmail to work with. Configuring these tools could very well be above my paygrade, but I’ll do my best to try and get it working – I think it’s more authentic to use something like dtmail than a random Thunderbird port. In any event, this, too, feels very UNIX-y, much like delegating wildcard expansion to a separate program. What this also shows is that the “UNIX philosophy” was subject to erosion from the very beginning, and really isn’t a modern phenomenon like many people seem to imply. I doubt many of the people complaining about the demise of the UNIX philosophy today even knew wildcard expansion used to be done by a separate program.
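Back to /etc/glob: the mechanism is easy to mimic. When the V6 shell saw a wildcard, it effectively ran /etc/glob with the command name and the unexpanded patterns, and glob exec’d the command after expanding them. A toy stand-in in modern POSIX sh (the name glob.sh is mine; the real /etc/glob was a compiled program):

    #!/bin/sh
    # toy stand-in for V6's /etc/glob: receive a command plus unexpanded
    # patterns, expand the patterns, then exec the command with the results
    # usage: ./glob.sh ls '*.c'
    cmd=$1
    shift
    set -- $@        # unquoted on purpose: field splitting + pathname expansion
    exec "$cmd" "$@"

Run ./glob.sh ls '*.c' and ls receives the already-expanded file list, just as a V6 command received arguments already expanded by /etc/glob.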

Emulating HP-UX using QEMU

While we’re out here raising funds to make me daily-drive HP-UX 11i v1 – we’re at 59% of the goal, so I’m starting to prepare for the pain – it seems you can actually run older versions, HP-UX 10.20 and 11.00 to be specific, in a virtual machine using QEMU.

QEMU is an open source computer emulation and virtualization software, first released in 2003 by Fabrice Bellard. It supports many different computer systems and includes support for many RISC architectures besides x86. PA-RISC emulation has been included in QEMU since 2018. QEMU emulates a complete computer in software without the need for specific virtualization hardware. With QEMU, a full HP Visualize B160L and C3700 workstation can be emulated to run PA-RISC operating systems like HP-UX Unix and compatible applications. ↫ Paul Weissmann at OpenPA

The emulation is complete enough that it can run X11 and CDE, and you can choose between emulating 32-bit or 64-bit PA-RISC. Device and peripheral support is a bit of a mixed bag: things like USB are only partially supported, and audio doesn’t work at all, since an audio chip commonly found in PA-RISC workstations isn’t supported either. A number of SCSI and networking devices found on HP’s workstations aren’t supported, and a few chipsets don’t work either. As far as operating system support goes, you can run HP-UX 10.20, HP-UX 11.00, Linux, and NetBSD. Newer (11i v1 and later) and older (9.07 and 9.05) versions of HP-UX don’t work, and neither does NeXTSTEP 3.3. Some of these issues probably stem from missing QEMU drivers, others from a lack of testing; PA-RISC is, after all, not one of the most popular of the dead UNIX architectures, with things like SPARC and MIPS usually taking most of the spotlight.

Absolutely nothing beats running operating systems on the bare metal they’re designed for, but with PA-RISC hardware becoming ever harder to obtain, it makes sense for emulation efforts to pick up speed so more people can explore HP-UX. I’m weirdly into HP-UX, despite its reputation as a difficult platform to work with, so I personally really want actual hardware, but for most of you, getting HP-UX 11i to work properly on QEMU is most likely the only way you will ever experience this commercial UNIX.
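If you want to try it, the invocation follows the standard QEMU pattern. A hedged sketch: the machine name matches the B160L workstation mentioned above, but check qemu-system-hppa -machine help for what your build actually provides, and the disk and ISO file names here are placeholders:

    # boot the emulated 32-bit B160L from an HP-UX 10.20 install CD image
    qemu-system-hppa -machine B160L -m 512 \
        -drive file=hpux1020-disk.img,format=raw \
        -cdrom hpux1020-install.iso -boot d \
        -serial mon:stdio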

/tmp should not exist

I commented on Lobsters that /tmp is usually a bad idea, which caused some surprise. I suppose /tmp security bugs were common in the 1990s when I was learning Unix, but they are pretty rare now so I can see why less grizzled hackers might not be familiar with the problems. I guess that’s some kind of success, but sadly the fixes have left behind a lot of scar tissue because they didn’t address the underlying problem: /tmp should not exist. ↫ Tony Finch

Not only is this an excellent, cohesive, and convincing argument against the existence of /tmp, it also contains some nice historical context as to why things are the way they are. Even without the arguments against /tmp, though, it just seems entirely more logical, cleaner, and sensible to have /tmp directories per user, in per-user locations. While I never would’ve been able to explain the problem as eloquently as Finch does, it just feels wrong to have every user resort to the exact same directory for temporary files, like a complex confluence of bad decisions you just know is going to cause problems, even if you don’t quite understand the intricate interplay.
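Acting on the advice doesn’t take much. A minimal per-session sketch, assuming a systemd-style system where XDG_RUNTIME_DIR points at a per-user tmpfs (the /run/user fallback path is an assumption for other setups):

    # give this login a private temp directory and point software at it
    dir="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/tmp"
    mkdir -p "$dir" && chmod 700 "$dir"
    export TMPDIR="$dir"    # mktemp(1) and many programs honor TMPDIR

With that in a login script, temporary files land in a directory only you can read, sidestepping the shared-directory races Finch describes.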

Technology history: where Unix came from

Today, every Unix-like system can trace its ancestry back to the original Unix. That includes Linux, which uses the GNU tools – and the GNU tools are based on the Unix tools. Linux in 2024 is removed from the original Unix design, and for good reason – Linux supports architectures and tools not dreamt of during the original Unix era. But the core command line experience in Linux is still very similar to the Unix command line of the 1970s. The next time you use ls to list the files in a directory, remember that you’re using a command line that’s been with us for more than fifty years. ↫ Jim Hall

An excellent overview of some of the more ancient UNIX commands that are still with us today. One thing I always appreciate when I dive into an operating system closer to “real” UNIX, like OpenBSD, or an actual UNIX, like HP-UX, is just how much more logical sense they make under the hood than a Linux system does. This is not a dunk on modern Linux – it has to cater to endlessly more modern needs than something ancient and dead like HP-UX – but what I learn while using these systems closer to real UNIX has made me appreciate proper UNIX more than I used to in the past. In what surely sounds like utter lunacy to system administrators who actually had to seriously administer HP-UX systems back in the day, I genuinely love using HP-UX, setting it up, configuring it, messing around with it, because it just makes so much more logical sense than the systems we use today. The knowledge gained from using BSD, HP-UX, and others, while not always directly applicable to Linux, does aid me in understanding certain Linux things better than I did before. What I’m trying to say is – go and load up an old UNIX, or at least a modern BSD. Aside from being great operating systems in their own right, they’re much easier to grasp than a modern Linux system, and you’ll learn a lot from the experience.

A brief history of Dell UNIX

“Dell UNIX? I didn’t know there was such a thing.” A couple of weeks ago I had my new XO with me for breakfast at a nearby bakery café. Other patrons were drawn to seeing an XO for the first time, including a Linux person from Dell. I mentioned Dell UNIX and we talked a little about the people who had worked on Dell UNIX. He expressed surprise that mention of Dell UNIX evokes the above quote so often and pointed out that Emacs source still has #ifdef for Dell UNIX. Quick Googling doesn’t reveal useful history of Dell UNIX, so here’s my version, a summary of the three major development releases. ↫ Charles H. Sauer

I sure had never heard of Dell UNIX, and despite the original version of the linked article being very, very old – 2008 – there are a few updates from 2020 and 2021 that add links to the files and instructions needed to install, set up, and run Dell UNIX in a virtual machine; 86Box or VirtualBox specifically.

What was Dell UNIX? In the late ’80s, Dell started the Olympic project, an effort to create a completely new architecture spanning desktops, workstations, and servers, some of which would be using multiple processors. When searching for an operating system for this project, the only real option was UNIX, and as such, the Olympic team set out to develop a UNIX variant. The first version was based on System V Release 3.2, used Motif and the X Window System, included a DOS virtual machine called Merge to run, well, DOS applications, and offered compatibility with Microsoft Xenix. It might seem strange to us today, but Microsoft’s Xenix was incredibly popular at the time, and compatibility with it was a big deal.

The Olympic project turned out to be too ambitious on the hardware front so it got cancelled, but the Dell UNIX project continued to be developed. The next release, Dell System V Release 4, was a massive release, and included a full X Window System desktop environment called X.desktop, an office suite, e-mail software, and a lot more. It also contained something Windows wouldn’t be getting for quite a few years to come: automatic configuration of device drivers. This was apparently so successful, it reduced the number of support calls during the first 90 days of availability by 90% compared to the previous release.

Dell SVR4 finally seemed like real UNIX on a PC. We were justifiably proud of the quality and comprehensiveness, especially considering that our team was so much smaller than those of our perceived competitors at ISC, SCO and Sun(!). The reviewers were impressed. Reportedly, Dell SVR4 was chosen by Intel as their reference implementation in their test labs, chosen by Oracle as their reference Intel UNIX implementation, and used by AT&T USL for in house projects requiring high reliability, in preference to their own ports of SVR4.0. (One count showed Dell had resolved about 1800 problems in the AT&T source.) I was astonished one morning in the winter of 1991-92 when Ed Zander, at the time president of SunSoft, and three other SunSoft executives arrived at my office, requesting Dell help with their plans to put Solaris on X86. ↫ Charles H. Sauer

Sadly, this would also prove to be the last release of Dell UNIX. After a few more point releases, the brass at Dell had realised that Dell UNIX, intended to sell Dell hardware, was mostly being sold to people running it on non-Dell hardware, and after a short internal struggle, the entire project was cancelled since it was costing them more than it was earning them.
As I noted, the article contains the files and instructions needed to run Dell UNIX today, on a virtual machine. I’m definitely going to try that out once I have some time, if only to take a peek at that X.desktop, because that looks absolutely stunning for its time.

What is PID 0?

The very short version: Unix PIDs do start at 0! PID 0 just isn’t shown to userspace through traditional APIs. PID 0 starts the kernel, then retires to a quiet life of helping a bit with process scheduling and power management. Also the entire web is mostly wrong about PID 0, because of one sentence on Wikipedia from 16 years ago. There’s a slightly longer short version right at the end, or you can stick with me for the extremely long middle bit! But surely you could just google what PID 0 is, right? Why am I even publishing this? ↫ David Anderson

What a great read. Just great.

Evolution of the ELF object file format

The ELF object file format is adopted by many UNIX-like operating systems. While I’ve previously delved into the control structures of ELF and its predecessors, tracing the historical evolution of ELF and its relationship with the System V ABI can be interesting in itself. ↫ MaskRay

The article wasn’t lying. I had no reason to know this – and I’m pretty sure most of you didn’t either – but it turns out the standards that define ELF got caught up in the legal murkiness and nastiness of UNIX. After the dissolution of the committee governing ELF in 1995, stewardship went from one familiar name to the next: first Novell, then The Santa Cruz Operation, then Caldera (which renamed itself to The SCO Group), eventually ending up at UnXis (now Xinuos) in 2011. In 2015, the last maintainer of ELF left Xinuos, and since then, it’s been effectively unmaintained. Which is kind of wild, considering ELF is a crucial building block of virtually all UNIX and UNIX-like operating systems today. The article mentions there’s a neutral Google Group that discusses, among other things, ELF, but that, too, has seen dwindling activity. Still, that group has reached consensus on some changes; changes that are now not reflected in any of the official texts. It’s a bit of a mess. If you ever wanted to know the status of ELF as a standard, this article’s for you.

Writing a Unix clone in about a month

I needed a bit of a break from “real work” recently, so I started a new programming project that was low-stakes and purely recreational. On April 21st, I set out to see how much of a Unix-like operating system for x86_64 targets I could put together in about a month. The result is Bunnix. Not including days I didn’t work on Bunnix for one reason or another, I spent 27 days on this project. ↫ Drew DeVault

Bunnix’s creator, Drew DeVault, has quite a bit of experience with writing operating systems, as they’re also the creator of Helios, an experimental microkernel operating system. Bunnix is remarkably capable for a 30-day project, and comes with support for both BIOS and UEFI boot, and it’ll boot on real hardware too. It doesn’t have USB support though, so if you’re going the real hardware route, you’ll need to take that into account for mouse and keyboard input. Bunnix has a relatively solid set of drivers, taking the short development time into account: among other things, there’s PCI, AHCI block devices, serial ports, framebuffers, and ext4 support. The kernel supports a virtual filesystem, a /dev filled with block devices, a terminal emulator, and more. Bunnix is single-user for now, so it doesn’t enforce file permissions, but DeVault states it should be relatively easy to implement multiuser support.

A unique characteristic of Bunnix is that it’s written mostly in Hare, complemented by some C. Hare is a relatively new programming language, which we touched on late last year when it was ported to OpenBSD. Implementing file systems proved to be one of the difficulties during development, partly due to Hare.

I also learned a lot about mixing source languages into a Hare project, since the kernel links together Hare, assembly, and C sources – it works remarkably well but there are some pain points I noticed, particularly with respect to building the ABI integration riggings. It’d be nice to automate conversion of C headers into Hare forward declaration modules. Some of this work already exists in hare-c, but has a ways to go. If I were to start again, I would probably be more careful in my design of the filesystem layer. ↫ Drew DeVault

DeVault’s post about Bunnix gives a lot more insight into its development, so I’d highly suggest heading on over to read more. Do note that DeVault considers Bunnix “done”, in the sense that the learning experience is over, and aside from a few random developments here and there, they won’t be doing any work on it anymore.

MacRelix: a Unix-like environment that runs in classic Mac OS

MacRelix is a Unix-like environment that runs in classic Mac OS. MacRelix natively supports classic 68K and PPC Mac OS, as well as Mac OS X on PPC via Carbon. ↫ MacRelix website

The creator of MacRelix, Josh Juran, published an article in 2019 detailing the origins of the project. As a Mac OS developer, he was so unhappy with both CodeWarrior and Apple’s Macintosh Programmer’s Workshop (MPW) that he set out to create what would become MacRelix in 1999. Reading through the limitations and roadblocks he experienced with CodeWarrior and MPW, it’s not hard to see why he got frustrated – CodeWarrior’s targets were apparently a mess and a half to deal with.

Then came target multiplication. Whereas the initial CodeWarrior developer releases shipped with each combination of language (C and Pascal) and architecture (68K and PPC) supported in a separate application, a later version of the IDE unified these, allowing the developer to have a single project file per project. To allow the same project to be built for both 68K and PPC architectures, the project data model included targets: One target would compile for 68K and link against 68K libraries, another would do the same for PPC. Targets could also be used to select an optimized build versus one for debugging. Combining both dichotomies yields four targets: 68K debug, 68K optimized, PPC debug, and PPC optimized. Then if your project involves multiple executables, like a code resource or shared library in addition to an application, you now have eight targets. Or, if you support one of, say, 68020 optimization, profiling, or a third executable, make that twelve. Or, for all of them, twenty-seven. ↫ Josh Juran

Changing an option in your application required you to change it in every single target, too, which I can easily see being incredibly frustrating. MPW, for its part, was a massive improvement, he argues, but while it was clearly inspired by UNIX, it didn’t seem to actually implement any of the features and characteristics of UNIX.

However, very much unlike Unix, the MPW Shell had only a single thread of execution — only one program could be running at once. Not only that, but there was no way for MPW’s compiled plugins (called tools) to invoke other tools or scripts — not even via system() (which blocks the calling program until the called program exits). Therefore, Make couldn’t actually do anything, but only printed out the commands for the user to run manually. You could code in Perl instead of the built-in language, but then your scripts couldn’t run other programs — only MPW shell scripts could do that. ↫ Josh Juran

The limitations Juran was experiencing with these two tools pushed him to create his own solution, which went well beyond what MPW offered, even in 2019 when this article was published.

Nowadays, MacRelix has pipes, signals, system calls, TCP sockets, and more. It works on both 68K and PowerPC Mac systems and builds as Carbon to run natively in OS X. It can be used on any Mac OS version from System 7 to Mac OS X 10.6 “Snow Leopard” (after which Apple removed the Rosetta PowerPC emulator). I haven’t implemented fork() yet, but I know how to do it. In addition to a Unix-like file system interface (which even handles long names by storing them in Desktop database comment fields), MacRelix has a /proc filesystem (with human readable stack crawls) and also maps various parts of Mac OS (e.g. the ROM image in /sys/mac/rom).
↫ Josh Juran

I had never heard of MacRelix, but it seems like an amazing tool Juran put a lot of thought, effort, and love into. Sadly, with the number of PowerPC Mac OS X users being vanishingly small, and the number of classic Mac OS users smaller still, the future of MacRelix seems uncertain. I wonder what parts of it can be salvaged and upgraded to work on ARM macOS or even Intel macOS, because I think the ideas and concepts are incredibly cool.

A related project by Juran is something called FORGE, a portable windowing API that uses a virtual file system, meaning that instead of using functions as objects, it uses files. Juran mentions the example of a window title – which is a file, and if you want to change the title of that window, you just change the file, which will be instantly reflected in the GUI. His article includes a Hello World example, and even though I’m not a programmer, that little tidbit of code makes perfect sense to me, and I understood it instantly. Of course, anything more complex will quickly leave my wheelhouse, but intuitively, I really like this. FORGE exists as a prototype inside MacRelix, so you can play with this concept while using MacRelix.
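To give a flavour of the files-as-objects idea – and this is a purely hypothetical sketch, with path names invented for illustration rather than FORGE’s real layout – window manipulation looks like ordinary file manipulation:

    # hypothetical FORGE-style session; the paths are invented for illustration
    mkdir /gui/window/hello                        # creating the directory creates a window
    echo 'Hello world' > /gui/window/hello/title   # writing the file retitles the window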

Windows NT and NetWare on PA-RISC, and a HP-UX port to x86

Back when I was working on my article about PA-RISC, HP-UX, and UNIX workstations in general, I made extensive use of OpenPA, Paul Weissmann’s invaluable and incredibly detailed resource about HP’s workstation efforts, HP-UX, and tons of related projects and products. Weissmann’s been doing some serious digging, and has unearthed details about a number of essentially forgotten operating system efforts. First, it turns out HP was porting Windows NT to PA-RISC in the early ’90s.

Several magazine sources and USEnet posts around 1993 point to HP pursuing a PA-RISC port of NT, modifying the PA-RISC architecture for bi-endianness, and even conducting a back-room presentation at the ’94 Comdex conference of a (modified HP 712?) PA-7100LC workstation running Windows NT. Mentions of NT on PA-RISC continued in 1994 with some customer interest but ended around 1995. ↫ Paul Weissmann at OpenPA

The port eventually fizzled out due to a lack of interest from both customers and application developers, and HP realised its time was better spent on PA-RISC’s designated successor, Intel’s Itanium, instead. HP also planned to work together with Novell to port NetWare to PA-RISC, but the work took longer than expected and it, too, was cancelled. The most recent secretive effort was the port of HP-UX to x86, an endeavour that took place during the final days of the UNIX workstation market.

Parts of the conversation in these documents mention a successful boot of HP-UX on x86 in December of 2009, with porting efforts projected to cost 100M+ between 2010 and 2016. The plan was for mission-critical x86 systems (ProLiant DL980 and Superdome with x86) and first releases projected in 2011 (developer) and 2012 (Superdome and Linux ABI). ↫ Paul Weissmann at OpenPA

I’m especially curious about that last one, as porting HP-UX to x86 seems like a massive effort during a time when it was already obvious Linux had completely obliterated the traditional UNIX market. It really feels like the last death saving throws of a platform everybody already knew wasn’t going to make it.

SysV init 3.09 released

Most of the Linux world has moved to systemd by now, but there are still quite a few other popular init systems, too. One of those is the venerable SysV init, which saw a brand new release yesterday. The biggest improvement seems like it’ll enable a match made in heaven: SysV init, but with musl.

On Linux distributions which use the musl C library (instead of glibc) we can now build properly. Specifically, the hddown helper program now builds on musl C systems. ↫ SysVinit 3.09 release notes

It’s important that init systems like SysV init and runit don’t just die off or lose steam because of the systemd juggernaut, as competition, alternatives, and different ideas are what make open source what it is.

Running UNIX on a Nintendo Entertainment System

Who wouldn’t want to run a UNIX-like operating system on their NES or Famicom? Although there’s arguably no practical reason for doing so, decrazyo has cobbled together a working port of Little Unix (LUnix), which was originally written for the Commodore 64 and 128 by Daniel Dallmann. The impetus for this project was initially curiosity, but when decrazyo saw that someone had already written a UNIX-like OS for the 6502 processor, it seemed apparent that the NES was too similar to the C64 to not port it. ↫ Maya Posch for Hackaday

This is peak computing.

The history of Xenix

In the November 1980 issue of BYTE, the publication reported that Microsoft signed an agreement with Western Electric for the rights to develop and market UNIX from Bell Laboratories. The version of UNIX from Microsoft was to be specifically for the PDP-11, the Intel 8086, the Zilog Z8000, and the Motorola 68000, and its name was XENIX. Its major selling points were that it was supposed to be available for 16 bit microcomputers and that it would have MS BASIC, FORTRAN, and COBOL which were already widespread. ↫ Bradford Morgan White

The story of Xenix, Microsoft’s UNIX.

Bringing the Unix philosophy to the 21st century

The Unix philosophy of using compact expert tools that do one thing well and pipelining them together to manipulate data is a great idea and has worked well for the past few decades. This philosophy was outlined in the 1978 Foreword to the Bell System Technical Journal describing the UNIX Time-Sharing System. Items i and ii are oft repeated, and for good reason. But it is time to take this philosophy to the 21st century by further defining a standard output format for non-interactive use. ↫ Kelly Brazil

This seems like a topic people will have calm opinions about.
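For a concrete taste of what “a standard output format for non-interactive use” buys you, here’s a sketch using jc, Brazil’s own output-to-JSON converter, together with jq (both assumed to be installed; the size threshold is arbitrary):

    # turn classic ls output into JSON, then query it by field name
    ls -l /usr/bin | jc --ls | jq -r '.[] | select(.size > 100000) | .filename'

Instead of fragile awk/cut column counting, the pipeline addresses fields by name.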

The Unix V6 shell and how control flow worked in it

On Unix, ‘test’ and ‘[’ are two names for (almost) the same program and shell builtin. Although today people mostly use it under its ‘[’ name, when it was introduced in V7 alongside the Bourne shell, it was only called ‘test’; the ‘[’ name was only nascent until years later. I don’t know for sure why it was called ‘test’, but there are interesting hints about its potential genesis in the shell used in V6 Research Unix, the predecessor to V7, and the control flow constructs that shell used. ↫ Chris Siebenmann

I’m fairly sure if I read this about 12 more times, I’ll start to maybe understand some of it.
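The two names are easy to see side by side; nothing here is V6-specific, this is plain modern POSIX sh:

    # 'test' and '[' evaluate the same expressions; '[' just requires
    # a closing ']' as its final argument
    test -f /etc/passwd && echo 'exists'
    [ -f /etc/passwd ] && echo 'exists'
    # on most systems both also exist as real executables, not just builtins:
    ls -li /bin/test '/bin/[' 2>/dev/null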