IceWM, the venerable X11 window manager, has released a new version, bumping the version number to 4.0.0. This release brings a big update to the alt+tab feature. The Alt+Tab window switcher can now handle large numbers of application windows in both horizontal and in vertical mode. Type the first letter of an application class name in Alt+Tab, to select the next instance window of that application class. Select an application by pressing one of the number keys. Select an application by mouse in Alt+Tab in horizontal mode. Support navigating the quick switch with all navigation keys. Press the menu button on Alt+Tab to open the system menu. QuickSwitchPreview is a new mode to preview applications. These previews are updated while the quick switch is active. ↫ IceWM 4.0 release notes On top of this major set of improvements to alt+tab, there’s the usual list of bug fixes and small changes, as well as a bunch of updated translations.
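For the curious: IceWM reads its options from plain Key=Value lines in ~/.icewm/preferences. The QuickSwitchPreview name comes straight from the release notes, but the exact syntax below is my assumption rather than something verified against the 4.0 manual pages, so check `man icewm-preferences` before copying it:

```ini
; ~/.icewm/preferences -- assumed option syntax; verify with `man icewm-preferences`
QuickSwitchPreview=1   ; new in 4.0: live window previews in the Alt+Tab switcher
QuickSwitchVertical=1  ; long-standing option: lay the switcher out vertically
```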
The new year isn’t even a day old, and Haiku developer X512 dropped something major in Haiku users’ laps: the first alpha version of an accelerated NVIDIA graphics driver for Haiku. Supporting at least NVIDIA Turing and Ampere GPUs, it’s very much in alpha state, but does allow for proper GPU acceleration, with the code surely making its way to Haiku builds in the near future. Don’t expect a flawless experience – this is alpha software – but even then, this is a major milestone for Haiku.
It’s 31 December 2025 today, the last day of the year, but it also happens to mark the end of support for the final version of one of my favourite operating systems: HP-UX. Today is the day HPE puts the final nail in the coffin of their long-running UNIX operating system, marking the end of another vestige of the heyday of the commercial UNIX variants, a reign ended by cheap x86 hardware and the increasing popularisation of Linux. HP-UX’ versioning is a bit of a convoluted mess for those not in the know, but the versions that matter are all part of the HP-UX 11i family. HP-UX 11i v1 and v2 (also known as 11.11 and 11.23, respectively) have been out of support for exactly a decade now, while HP-UX 11i v3 (also known as 11.31) is the version whose support ends today. To further complicate matters, like 11i v2, HP-UX 11i v3 supports two hardware platforms: HP 9000 (PA-RISC) and HP Integrity (Intel Itanium). Support for the HP-UX 11i v3 variant for HP 9000 ended exactly four years ago, and today marks the end of support for HP-UX 11i v3 for HP Integrity. And that’s all she wrote. I have two HP-UX 11i v1 PA-RISC workstations, one of them being my pride and joy: an HP c8000, the last and fastest PA-RISC workstation HP ever made, back in 2005. It’s a behemoth of a machine with two dual-core PA-8900 processors running at 1 GHz, 8 GB of RAM, a FireGL X3 graphics card, and a few other fun upgrades like an internal LTO3 tape drive that I use for keeping a bootable recovery backup of the entire system. It runs HP-UX 11i v1, fully updated and patched as best one can do considering how many patches have either vanished from the web or have never “leaked” from HPE (most patches from 2009 onwards are not available anywhere without an expensive enterprise support contract). The various versions of HP-UX 11i come with a variety of “operating environments” you can choose from, depending on the role your installation is supposed to fulfill.
In the case of my c8000, it’s running the Technical Computing Operating Environment, which is the OE intended for workstations. HP-UX 11i v1 was the last PA-RISC version of the operating system to officially support workstations, with 11i v2 only supporting Itanium workstations. There are some rumblings online that 11i v2 will still work just fine on PA-RISC workstations, but I have not yet tried this out. My c8000 also has a ton of other random software on it, of course, and only yesterday I discovered that the most recent release of sudo configures, compiles, and installs from source just fine on it. Sadly, a ton of other modern open source code does not run on it, considering the slightly outdated toolchain on HP-UX and few people willing and/or able to add special workarounds for such an obscure platform. Over the past few years, I’ve been trying to get into contact with HPE about the state of HP-UX’ patches, software, and drivers, which are slowly but surely disappearing from the web. A decent chunk is archived on various websites, but a lot of it isn’t, which is a real shame. Most patches from 2009 onwards are unavailable, various software packages and programs for HP-UX are lost to time, HP-UX installation discs and ISOs later than 2006-2009 are not available anywhere, and everything that is available is only available via non-sanctioned means, if you know what I mean. Sadly, I never managed to get into contact with anyone at HPE, and my concerns about HP-UX preservation seem to have fallen on deaf ears. With the end-of-life date now here, I’m deeply concerned even more will go missing, and the odds of making the already missing stuff available are only decreasing. I’ve come to accept that very few people seem to hold any love for or special attachment to HP-UX, and that very few people care as much about its preservation as I do. 
HP-UX doesn’t carry the movie star status of IRIX, nor the benefit of being available both as open source and on commodity hardware, like Solaris was, so far fewer people have any experience with it or have developed a fondness for it. HP-UX didn’t star in a Steven Spielberg blockbuster, and it didn’t leave behind influential technologies like ZFS. Despite being supported up until today, it’s mostly forgotten – and not even HPE itself seems to care. And that makes me sad. When you raise your glasses tonight to mark the end of 2025 and welcome the new year, spare a thought for the UNIX everyone forgot still exists. I know I will.
I’d just like to interject for a moment. What you’re referring to as Linux, is in fact, Win32/Linux, or as I’ve recently taken to calling it, loss32: Win32 plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning system made useful by WINE, the ReactOS userland, and other vital system components comprising a full OS as defined by Microsoft. ↫ The loss32 homepage Joking introduction aside, this is exactly what you think it is: a Linux kernel with the Windows user interface running on top through Wine. I’m sure quite a few of us have mused about this very concept at some point in time, but hikari_no_yume went a step further and created this working concept. It’s rough around the edges and needs a ton of work, but I do think the idea is sound and could offer real benefits for certain types of users. It’s definitely a more realistic idea than ReactOS, a project that’s perpetually chasing the dragon but never coming even close to catching it. Not having to recreate the entire Windows NT kernel, drivers, and subsystems, and using Linux instead, is simply a more realistic approach that could bring results within our lifetimes. The added benefit here is that this could still run Linux applications, too, of course. hikari_no_yume is looking for help with the project, and I hope they find it. This is a great idea, with an absolutely amazing name, too.
Nina Kalinina has been on an absolute roll lately, diving deep into VisiOn, uncovering Bellcore MGR, installing Linux on a PC-98 machine, and much more. This time, she’s ported Windows 2 to run on a machine it was never supposed to run on. I bought my first Apricot PC about three years ago, when I realised I wanted an 8086-based computer. At the time, I knew nothing about it and simply bought it because it looked rad and the price was low. I had no idea that it was not IBM PC-compatible, and that there were very few programs available for it. I have been on a quest to get a modern-ish word processor and spreadsheet program for it ever since. Which eventually made me “port” Windows 2 on it. In this post, I will tell you the story of this port. ↫ Nina Kalinina To get Windows 2 working on the Apricot, Kalinina had to create basic video, keyboard, and mouse drivers, allowing Windows 2 to boot into text mode. I wasn’t aware of this, but Windows 2 in text mode is funky: it’s rendering all the text you would see in a full Windows 2 user interface, just without any of the user interface elements. Further developing the video driver from scratch turned out to be too big of an undertaking for now, so she opted to extract the video driver from Windows 1 instead – which required a whole other unique approach. The keyboard and mouse drivers were extracted from Windows 1 in the same way. The end result is a fully working copy of Windows 2, including things like Word and Excel, which was the original goal in the first place. There aren’t many people around doing stuff like this, and it’s great to see such very peculiar, unique itches being scratched. Even if this is only relevant for exactly one person, it’s still been worth it.
I knew digital cameras and phones had to do a lot of processing and other types of magic to output anything human eyes can work with, but I had no idea just how much. This is wild.
There’s been endless talk online about just how bad Apple’s graphical user interface design has become over the years, culminating in the introduction of Liquid Glass across all of the company’s operating systems this year. Despite all the gnashing of teeth and scathing think pieces before the final rollout, it seems the average Apple user simply doesn’t care as much about GUI design as Apple bloggers thought they did, as there hasn’t been any uproar or stories in local media about how you should hold off on updating your iPhone. The examples of just how bad Apple’s GUI design has become keep on coming, though. This time it’s Howard Oakley showing once again how baffling the macOS UI is these days. If someone had told me 12 months ago what was going to happen this past year, I wouldn’t have believed them. Skipping swiftly past all the political, economic and social turmoil, I come to the interface changes brought in macOS Tahoe with Liquid Glass. After three months of strong feedback during beta-testing, I was disappointed when Tahoe was released on 15 September to see how little had been addressed. When 26.1 followed on 3 November it had only regressed, and 26.2 has done nothing. Here I summarise my opinions on where Tahoe’s overhaul has gone wrong. ↫ Howard Oakley at The Eclectic Light Company Apple bloggers and podcasters are hell-bent on blaming Apple’s terrible GUI design over the past 10 years on one man. Their first target was Jony Ive, who was handed control over not just hardware design, but also software design in 2012. When he left Apple, GUI design at Apple would finally surely improve again, and the Apple bloggers and podcasters let out a sigh of relief. History would turn out differently, though – under Ive’s successor, Alan Dye, Apple’s downward trajectory in this area would continue unabated, culminating in the Liquid Glass abomination.
Now that Alan Dye has left Apple, history is repeating itself: the very same Apple bloggers and podcasters are repeating themselves – surely now that Alan Dye is gone, GUI design at Apple will finally improve again. The possibility that GUI design at Apple does not hinge on the whims of just one person, but that instead the entire company has lost all sense of taste and craftsmanship in this area, does not cross their minds. Everyone around Jony Ive and Alan Dye – below, alongside, and above them – had to sign off on Apple’s recent direction in GUI design, and the idea that the entire company would blindly follow whatever one person says, quality be damned, would have me far more worried as an Apple fan. At this point, it’s clear that Apple’s inability to design and build quality user interfaces is not the fault of just one fall guy, but an institutional problem. Anyone expecting a turnaround just because Dye, like Ive before him, is gone isn’t seeing the burning forest for the trees.
We’re all familiar with things like marquee and blink, relics of HTML of the past, but there are far more weird and obscure HTML tags you may not be aware of. Luckily, Declan Chidlow at HTMLHell details a few of them so we can all shake our heads in disbelief. But there are far more obscure tags which are perhaps less visually dazzling but equally or even more interesting. If you’re younger, this might very well be your introduction to them. If you’re older, this still might be an introduction, but also possibly a trip down memory lane or a flashback to the horrors of the first browser war. It depends. ↫ Declan Chidlow at HTMLHell I think my favourite is the dir tag, intended to be used to display lists of files and directories. We’re supposed to use list tags now to achieve the same result, but I do kind of like the idea of having a dedicated tag to indicate files, and perhaps have browsers render these lists in the same way the file manager of the platform it’s running on does. I don’t know if that would have been possible, but it seems like the logical continuation of a hypothetical dir tag. Anyway, should we implement bgsound on OSNews?
If you’re building a package manager and git-as-index seems appealing, look at Cargo, Homebrew, CocoaPods, vcpkg, Go. They all had to build workarounds as they grew, causing pain for users and maintainers. The pull request workflow is nice. The version history is nice. You will hit the same walls they did. ↫ Andrew Nesbitt It’s wild to read some of these stories. I can’t believe CocoaPods had 16,000 directories contained in a single directory, which is absolutely bananas when you know how git actually works. Then there’s the issue that git is case-sensitive, as any proper file system should be, which causes major headaches on Windows and macOS, which are dumb and are case-insensitive. Even Windows’ path length limits, inherited from DOS, cause problems with git. There are just so many problems with using git as a package manager’s database. The basic gist is that git is not a database, and shouldn’t be used as such. It baffles me that seasoned developers would opt for “solutions” like this.
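The workaround these projects converged on is the same one: shard the flat namespace into nested prefix directories so no single directory explodes. A sketch of the idea in Python, based on my recollection of Cargo’s documented registry index layout – treat the exact bucketing rules as an assumption:

```python
def index_path(name: str) -> str:
    """Shard a package name into nested prefix directories, in the
    style of Cargo's registry index layout. This keeps any single
    directory from accumulating tens of thousands of entries, which
    is exactly the problem CocoaPods ran into with a flat layout."""
    name = name.lower()  # also sidesteps case-insensitive filesystems
    if len(name) == 1:
        return f"1/{name}"
    if len(name) == 2:
        return f"2/{name}"
    if len(name) == 3:
        return f"3/{name[0]}/{name}"
    # Four or more characters: bucket by the first two prefix pairs.
    return f"{name[:2]}/{name[2:4]}/{name}"

print(index_path("serde"))  # se/rd/serde
```

Lowercasing the name before computing the path is also a cheap defence against the Windows/macOS case-insensitivity headaches mentioned above, since two names differing only in case then collide loudly instead of silently.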
Christmas is already behind us, but since this is an announcement from 11 December – that I missed – I’m calling this a very interesting and surprising Christmas present. The team and I are beyond excited to share what we’ve been cooking up over the last little while: a full desktop environment running on QNX 8.0, with support for self-hosted compilation! This environment both makes it easier for newly-minted QNX developers to get started with building for QNX, but it also vastly simplifies the process of porting Linux applications and libraries to QNX 8.0. ↫ John Hanam at the QNX Developer Blog What we have here is QNX 8.0 running the Xfce desktop environment on Wayland, a whole slew of build and development tools (clang, gcc, git, and so on), a ton of popular code editors and IDEs, a web browser (looks like GNOME Web?), access to all the ports on the QNX Open-Source Dashboard, and more. For now, it’s only available as a QEMU image to run on top of Ubuntu, but the plan is to also release an x86 image in the coming months so you can run this directly on real hardware. This isn’t quite the same as the QNX of old with its unique Photon microGUI, but it’s been known for a while now that Photon hasn’t been actively developed in a long time and is basically abandoned. Running Xfce on Wayland is obviously a much more sensible solution, and one that’s quite future-proof, too. As a certified QNX desktop enthusiast of yore, I can’t wait for the x86 image to arrive so I can try this out properly. There are downsides. This image, too, is encumbered by annoying non-commercial license requirements and sign-ups, and this also wouldn’t be the first time QNX starts an enthusiast effort, only to abandon it shortly after. Buyer beware, then, but I’m cautiously optimistic.
We’ve got more X11-related news this day, the day of Xmas. Phoenix is a new X server, written from scratch in Zig (not a fork of Xorg server). This X server is designed to be a modern alternative to the Xorg server. ↫ Phoenix’ readme page Phoenix will only support a modern subset of the X11 protocol, focusing on making sure modern applications from roughly the last 20 years or so work. It also takes quite a few pages out of the Wayland playbook by not having a server driver interface and by having a compositor included. On top of that, it will isolate applications from each other, and won’t have a single framebuffer for all displays, instead allowing different refresh rates for individual displays. The project also intends to develop new standards to support things like per-monitor DPI, among many other features. That’s a lot of features and capabilities to promise for an X server, and much like Wayland, the way they aim to get there is by effectively gutting traditional X and leaving a ton of cruft behind. The use of Zig is also interesting, as it can catch some issues before they affect any users thanks to Zig’s runtime safety option. At least it’s not yet another thing written in Rust like every other project competing with an established project. I think this looks like an incredibly interesting project to keep an eye on, and I hope more people join the effort. Competition and fresh, new ideas are good, especially now that everything is gravitating towards Wayland – we need alternatives to promote the sharing of ideas.
Wayback, the tool that allows you to run a legacy X11 desktop environment on top of Wayland, released a new version just before Christmas. Wayback 0.3 overhauls its custom command line option parser to allow for more X.org options to be supported, and its manual pages have been cleaned up. The remaining changes are small fixes, such as typo corrections and similar tweaks. Wayback is now also part of Alpine Linux’ stable releases, and has been made available in Fedora 42 and 43. Wayback remains alpha software and is still under major development – it’s not yet ready for primetime.
Can you use a cheap FPGA board as a base for a new computer inspired by the original IBM PC? Well, yes, of course, so that’s exactly what Yuri Zaporozhets has set out to do. Based on the GateMateA1-EVB, the project’s got some of the basics worked out already – video output, keyboard support, etc. – and work is underway on a DOS-like operating system. A ton of work is still ahead, of course, but it’s definitely an interesting project.
Elementary OS, the user-friendly Linux distribution with its own unique desktop environment and applications, just released elementary OS 8.1. Its minor version number belies just how big of a punch this update packs, so don’t be fooled here. We released elementary OS 8 last November with a new Secure Session—powered by Wayland—that ensures applications respect your privacy and consent, a brand new Dock with productive multitasking and window management features, expanded access to cross-platform apps, a revamped updates experience, and new features and settings that empower our diverse community through Inclusive Design. Over the last year we’ve continued to build upon that work to deliver new features and fix issues based on your feedback, plus we’ve improved support for a range of devices including HiDPI and Multi-touch devices. ↫ Danielle Foré at the elementary OS blog The biggest change from a lower-level perspective is that elementary OS 8.1 changes the default session to Wayland, leaving the X11 session as a fallback in case of issues. Since the release of elementary OS 8, a ton of progress has been made in improving the Wayland session, fixing remaining issues, and so on, and the team now feels it’s ready to serve as the default session. Related to this is a new security feature in the Wayland session where the rest of the screen gets dimmed when a password dialog pops up, and other windows can’t steal focus. The switch to Wayland also allowed the team to bring fractional scaling to elementary OS with 8.1. Elementary OS is based on Ubuntu, and this new release brings an updated Hardware Enablement stack, which brings things like Linux 6.14 and Mesa 25. This is also the first release with support for ARM64 devices that can use UEFI, which includes quite a few popular ARM devices. Of course, the ARM64 version comes as a separate ISO. 
Furthermore, there’s a ton of improvements to the dock – which was released with 8 as a brand-new replacement for the venerable Plank – including bringing back some features that were lost in the transition from Plank to the new dock. Animations are smoother, elementary OS’ application store has seen a slew of improvements from clearer licensing information, to a controller icon for games that support them, to a label identifying applications that offer in-app purchases, and more. There’s a lot more here, like the accessibility improvements we talked about a few months ago, and tons more.
Mount Amiga filesystem images on macOS/Linux using native AmigaOS filesystem handlers via FUSE. amifuse runs actual Amiga filesystem drivers (like PFS3) through m68k CPU emulation, allowing you to read Amiga hard disk images without relying on reverse-engineered implementations. ↫ Amifuse GitHub page Absolutely wild.
Almost two months ago, a tape containing UNIX v4 was found. It was sent off to the Computer History Museum, where bitsavers.org would handle the further processing of the tape, and this process has now completed. You can download the contents of the tape from Archive.org – which is sadly down at the moment – while squoze.net has a readme with instructions on how to actually run the copy of UNIX v4 recovered from the tape.
If you’ve been waiting for the right moment to try FreeBSD on a laptop, take note – 2025 has brought transformative changes. The Foundation’s ambitious Laptop Support & Usability Project is systematically addressing the gaps that have held FreeBSD back on modern laptop hardware. The project started in 2024 Q4 and covers areas including Wi-Fi, graphics, audio, installer, and sleep states. 2025 has been its first full year, and with a financial commitment of over $750k to date there has been substantial progress. ↫ Alice Sowerby for the FreeBSD Foundation I think that’s an understatement. As part of this effort, FreeBSD introduced support for Wi-Fi 4 and 5 in 2025, with 6 being worked on, and sound support has been greatly improved as well, with new tools and better support for automatic sound redirection for HDA cards. Another major area of improvement is support for various forms of sleep and wake, with modern standby coming in FreeBSD 15.1, and possibly hibernate in 15.2. On top of all this, there’s the usual graphics driver updates, as well as changes to the installer to make it a bit more friendly to desktop use cases. The FreeBSD project is clearly taking desktop and especially laptop use seriously lately, and they’re putting their money and developers where their mouth is. Add in the fact that FreeBSD already has pretty decent Wayland support, and the platform will be able to continue to offer the latest KDE releases (and GNOME, if they figure out replacements for its systemd dependencies). With progress like this, we’re definitely going to see more and more people making the move to FreeBSD for desktop and laptop use over the coming years.
For years, link building felt like a numbers game. Agencies poured effort into generating as many backlinks as possible, hoping the sheer volume would push rankings upward. But as search enters a fully AI-driven era, that mindset no longer holds up. The landscape is shifting fast, and the winners in 2025 are the agencies that understand authority has become a matter of quality, context, and credibility, not link counts. AI-powered search now prioritizes brands that show up consistently in trusted conversations across the web. Weak links from irrelevant blogs don’t move the needle—and in some cases, they undermine trust. This change has pushed agencies to move away from volume-based link campaigns and gravitate toward partners who can deliver a genuine quality backlink service that reinforces a brand’s identity across authoritative sources. AI Overviews Elevated the Importance of Context The evolution of AI Overviews, conversational answers, and knowledge-graph-driven evaluations dramatically changed how search engines measure trust. Today’s systems don’t reward random backlinks. They reward contextual relevance, consistent mentions, and credible editorial presence. Instead of scanning for keyword-heavy anchor text, AI systems scan for patterns in how a brand is positioned across the web. If a link sits on a site with no topical relevance or editorial depth, generative engines simply ignore it. But when a brand appears in a well-written article on a publication that already has authority in that niche, the impact is dramatically different. That placement contributes to the brand’s entity profile, strengthens its association with important topics, and increases the chance of being cited or surfaced in AI-generated summaries. This is exactly why agencies are walking away from quantity-first link providers and embracing curated placements backed by real editorial standards. 
Client Expectations Have Evolved—They Want Meaningful Mentions Today’s clients know what good off-page SEO looks like. They don’t ask for “more links”—they ask where those links appeared, who wrote them, and whether those sites carry real authority. When monthly reports show links from low-quality domains, clients push back harder than ever. But when reports include recognized publishers, niche-relevant sites, or thought-leadership-style contributions, clients feel that value instantly. This shift in expectations is the reason so many agencies now rely on specialized partners who focus exclusively on quality. Instead of delivering a bundle of generic links, they prioritize relevance, readership, and editorial credibility. This heightened demand for quality has given rise to a new standard—one where every placement must reinforce trust, not dilute it. Entity-First SEO Made Quality Non-Negotiable As generative search systems rely more heavily on entity understanding, agencies are rethinking their entire off-page strategy. Entity-first SEO demands consistency. A brand must be described similarly across multiple contexts so that AI systems can clearly identify what it does, who it serves, and why it matters. High-quality placements accomplish this effortlessly because well-written articles create natural opportunities to reinforce brand identity. Poor-quality backlinks from random sites do the opposite—they introduce noise. A handful of strong editorial placements can accelerate a brand’s entity development in a way dozens of weaker links never could. This is why agencies are investing their budgets in fewer, better links rather than spreading them thin across low-impact sources. Scaling Quality Internally Is Hard—Partners Make It Possible Most agencies would love to build everything in-house—outreach, writing, editing, and publisher relationships. But doing so at scale is nearly impossible without dramatically increasing headcount.
Quality link building requires real prospecting, personalized outreach, credible content, editorial approvals, and continuous quality checks. It’s no surprise that agencies are increasingly turning to white label blogger outreach partners who specialize in handling this work reliably and consistently. These partners already have the systems, relationships, and processes needed to deliver placement-ready content for any niche. With a trusted white-label partner, agencies can scale output without sacrificing quality. They maintain the client relationship, the strategy, and the reporting—but outsource the labor-intensive part of link acquisition to experts who focus on it exclusively. Real Editorial Links Offer Long-Term Stability One of the biggest advantages of high-quality backlinks is stability. Every major algorithm update over the last few years has punished thin, irrelevant, or manipulative link patterns. Sites dependent on low-quality backlinks see massive volatility. Meanwhile, brands built on real editorial links remain steady—or even improve—after updates. This pattern continues as AI becomes more central to the ranking process. Quality links align naturally with what AI considers trustworthy. They tell search engines, “This brand belongs in the conversation,” which is something no volume-based link strategy can replicate. Agencies now recognize that quality links are an investment in long-term resilience—not just short-term ranking boosts. Compounding Authority Is Becoming a Competitive Advantage Quality backlinks don’t just help a brand rank—they build momentum. A presence on credible sites increases the likelihood of co-citations, unlinked mentions, and secondary references. This creates a compounding effect, where each placement strengthens the next. Over time, a brand that consistently earns editorial links becomes more influential across its niche. It appears more often in AI-driven summaries. It becomes more familiar to search engines. 
It becomes easier for Google to categorize, understand, and trust. This compounding authority is quickly becoming the main competitive edge agencies can offer clients—especially in markets where competitors rely on outdated link tactics. Link Quality Isn’t a Trend—It’s the New Standard The changes we’re seeing in 2025 aren’t temporary. They’re not a passing trend or a response to a single algorithm update. They’re a reflection of a fundamental shift in how search engines interpret trust and how brands build influence online. Agencies that once relied on high-volume link-building playbooks are now rethinking everything. They’re choosing partners who offer a true quality backlink service. They’re leaning on white label blogger outreach to scale without compromising standards. They’re aligning off-page SEO with entity-first strategies that prepare clients for an AI-driven future. Authority is no longer something you can fake or accelerate with shortcuts. It’s something you build consistently, patiently, and strategically through high-quality, relevant, editorially integrated backlinks. And in 2025, that’s the difference between agencies that merely keep up—and agencies that lead.
If Excel rules the world, Word rules the legal profession. Jordan Bryan published a great article explaining why this is the case, and why this is unlikely to change any time soon, no matter how many people from the technology world think they can change this reality. Microsoft Word can never be replaced. OpenAI could build superintelligence surpassing human cognition in every conceivable dimension, rendering all human labor obsolete, and Microsoft Word will survive. Future contracts defining the land rights to distant galaxies will undoubtedly be drafted in Microsoft Word. Microsoft Word is immortal. ↫ Jordan Bryan at The Redline by Version Story Bryan cites two main reasons underpinning Microsoft Word’s immortality in the legal profession. First, lawyers need the various formatting options Word provides, and alternatives often suggested by outsiders, like Markdown, don’t come close to offering even 5% of the various formatting features lawyers and other writers of legal documents require. By the time you add all those features back to Markdown, you’ve recreated Word, but infinitely worse and more obtuse. Also, and this is entirely my personal opinion, Markdown sucks. Second, and this one you’ve surely heard before: Word’s .docx format is effectively a network protocol. Everyone in the legal profession uses it, can read it, work with it, mark it up, apply corrections, and so on – from judges to lawyers to clients. If you try to work with, say, Google Docs, instead, you create a ton of friction in every interaction you have with other people in the legal profession. I vividly remember this from my 15 years as a translator – every single document you ever worked with was a Microsoft Office document. 
Sure, the translation agency standing between the end client and the translator might have abstracted the document into a computer-aided translation tool like Trados, but you’re still working with .docx, and the translated document sent to the client is still .docx, and needs to look identical to the source, just in a different language. In the technology world, there are a lot of people who come barging into some other profession or field, claiming to know everything, and suggesting to “just do x”, without any deference to how said profession or field actually operates. “Just use Markdown and git”, even if the people involved have no clue what a markup language even is, let alone what git is; “just use LibreOffice”, even if the people involved will skewer you for altering the formatting of a document even ever so slightly; we all know examples of this. An industry tends to work a certain way not because they’re stupid or haven’t seen the light – it tends to work that way because there’s a thousand little reasons you’re not aware of that make that way the best way.
In 1979, VisiCalc was released for the Apple II, and to this day, many consider it the very first spreadsheet program. Considering just how important spreadsheets have become since then – Excel rules the world – the first spreadsheet program is definitely an interesting topic to dive into. It turns out that while VisiCalc was the first spreadsheet program for home computers, it’s not actually the first spreadsheet program, period. That honour goes to LANPAR, created ten years before VisiCalc. Ten years before VisiCalc, two engineers at Bell Canada came up with a pretty neat idea. At the time, organizational budgets were created using a program that ran on a mainframe system. If a manager wanted to make a change to the budget model, that might take programmers months to create an updated version. Rene Pardo and Remy Landau discussed the problem and asked “what if the managers could make their own budget forms as they would normally write them?” And with that, a new idea was created: the spreadsheet program. The new spreadsheet was called LANPAR, for “LANguage for Programming Arrays at Random” (but really it was a mash-up of their last names: LANdau and PARdo). ↫ Jim Hall at Technically We Write While there wasn’t a graphical user interface on the screen with a grid and icons and everything else we associate with a spreadsheet today, it was still very much a spreadsheet. Individual cells were delineated with semicolons, you could write down formulas to manipulate these cells, and the program could do forward referencing. The idea was to make it so easy to use, managers at Bell Canada could make budgeting changes overnight, instead of having programmers take weeks or months to do so. I’m not particularly well-versed in Excel and spreadsheets in general, but I can definitely imagine advanced users no longer really seeing the grids and numbers as individual entities, instead visualising everything much more closely to what LANPAR did.
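To make “forward referencing” concrete: it means a formula may refer to a cell that is only defined later in the sheet, and the program works out the evaluation order on its own. A toy illustration in Python – the cell names and formula syntax here are invented for the example, not LANPAR’s actual notation:

```python
# Toy spreadsheet with forward references: a cell's formula may refer
# to cells that appear later in the sheet. Recursive evaluation with
# memoization resolves the dependency order automatically.
# (Illustrative only; the syntax is invented, not LANPAR's, and eval
# is used purely for brevity -- it is unsafe on untrusted input.)

def evaluate(sheet: dict[str, str]) -> dict[str, float]:
    values: dict[str, float] = {}

    def value_of(cell: str, seen: frozenset = frozenset()) -> float:
        if cell in values:
            return values[cell]
        if cell in seen:
            raise ValueError(f"circular reference at {cell}")
        expr = sheet[cell]
        # Substitute referenced cell values into the expression,
        # longest names first so one name can't clobber another.
        for name in sorted(sheet, key=len, reverse=True):
            if name in expr:
                expr = expr.replace(name, repr(value_of(name, seen | {cell})))
        values[cell] = float(eval(expr))
        return values[cell]

    return {cell: value_of(cell) for cell in sheet}

# TOTAL references SALES and COSTS even though they come later.
sheet = {"TOTAL": "SALES - COSTS", "SALES": "120", "COSTS": "45"}
print(evaluate(sheet))  # {'TOTAL': 75.0, 'SALES': 120.0, 'COSTS': 45.0}
```

The point of the sketch is the recursion: asking for TOTAL first still works, because evaluating its formula pulls in SALES and COSTS on demand, which is what spared managers from caring about declaration order.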
Like Neo when he finally peers through the Matrix.