Monthly Archive: July 2024

AI causing burnout, lower productivity

Is machine learning, also known as “artificial intelligence”, really aiding workers and increasing productivity? A study by Upwork – which, as Baldur Bjarnason so helpfully points out, sells AI solutions and hence did not promote this study on its blog as it does with its other studies – reveals that this might not actually be the case.

Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way. For example, survey respondents reported that they’re spending more time reviewing or moderating AI-generated content (39%), investing more time learning to use these tools (23%), and are now being asked to do more work (21%). Forty percent of employees feel their company is asking too much of them when it comes to AI. ↫ Upwork research

This shouldn’t come as a surprise. We’re in a massive hype cycle when it comes to machine learning, and we’re being told it’s going to revolutionise work and lead to massive productivity gains. In practice, however, it seems these tools just can’t measure up to the hyped promises, and are in fact making people do less and work slower. There are countless stories of managers being told by upper management to shove machine learning into everything, from products to employee workflows, whether it makes any sense to do so or not. I know from experience as a translator that machine learning can greatly improve my productivity, but the fact that certain types of tasks benefit from ML doesn’t mean every job suddenly thrives with it.

I’m definitely starting to see some cracks in the hype cycle, and this study highlights a major one. I hope we can all come down to earth again, and really take a careful look at where ML makes sense and where it does not, instead of giving every worker a ChatGPT account and blanket demanding massive productivity gains that in no way match the reality on the office floor. And of course, despite demanding massive productivity increases, it’s not like workers are getting an equivalent increase in salary. We’ve seen massive productivity increases for decades now, while paychecks have not followed suit at all, and many people can actually buy less with their salary today than their parents could decades ago. The demands managers impose by introducing AI are only going to make this discrepancy even worse.

Logitech has an idea for a “forever mouse” that requires a subscription

Logitech CEO Hanneke Faber talked about something called the “forever mouse”, which would be, as the name implies, a mouse that customers could use for a very long time. While you may think this would mean an incredibly well-built mouse, or one that can be easily repaired, which Logitech already makes somewhat possible through a partnership with iFixit, another option the company is considering is a subscription model. Yes.

Faber said subscription software updates would mean that people wouldn’t need to worry about their mouse. The business model is similar to what Logitech already does with video conferencing services (Logitech’s B2B business includes Logitech Select, a subscription service offering things like apps, 24/7 support, and advanced RMA). Having to pay a regular fee for full use of a peripheral could deter customers, though. HP is trying a similar idea with rentable printers that require a monthly fee. The printers differ from the idea of the forever mouse in that the HP hardware belongs to HP, not the user. However, concerns around tracking and the addition of ongoing expenses are similar. ↫ Scharon Harding at Ars Technica

Now, buying a mouse whose terrible software requires a subscription would still be a choice you can avoid, but my mind immediately conjured up a far darker scenario. PC makers have a long history of adding crapware to their machines in return for payments from the producers of said crapware. I can totally see what’s going to happen next. You buy a brand new laptop, unbox it at home, and turn it on. Before you know it, a dialog pops up right after the crappy Windows out-of-box experience, asking you to subscribe to your laptop’s touchpad software in order to unlock its more advanced features, like gestures. But why stop there? The keyboard of that new laptop has RGB backlighting, but if you want to change its settings, you’re going to have to pay for another subscription. Your laptop’s display has additional features and modes for specific types of content and more settings sliders, but you’ll have to pay up to unlock them. And so on.

I’m not saying this will happen, but I’m also not saying it won’t. I’m sorry for birthing this idea into the world.

Microsoft’s CrowdStrike post-mortem

Microsoft has published a post-mortem of the CrowdStrike incident, and goes into great depth to describe where, exactly, the error lies, and how it could lead to such massive problems. I can’t say anything insightful about the technical details and code they show to illustrate all of this – I’ll leave that discussion up to you – but Microsoft also spends a considerable amount of time explaining why security vendors are choosing to use kernel-mode drivers.

Microsoft lists three major reasons why security vendors opt for using kernel modules, and none of them will come as a great surprise to OSNews readers: kernel drivers provide more visibility into the system than a userspace tool would, there are performance benefits, and they’re more resistant to tampering. The downsides are legion, too, of course, as any crash or similar issue in kernel mode has far-reaching consequences. The goal, then, according to Microsoft, is to balance the need for greater insight, performance, and tamper resistance with stability. And while the company doesn’t say it directly, this is clearly where CrowdStrike failed – and failed hard.

While you would want a security tool like CrowdStrike to perform as little as possible in kernelspace, and conversely as much as possible in userspace, that’s not what CrowdStrike did. They are running a lot of stuff in kernelspace that really shouldn’t be there, such as the update mechanism and related tools. In total, CrowdStrike loads four kernel drivers, and much of their functionality can be run in userspace instead.

It is possible today for security tools to balance security and reliability. For example, security vendors can use minimal sensors that run in kernel mode for data collection and enforcement limiting exposure to availability issues. The remainder of the key product functionality, including managing updates, parsing content, and other operations, can occur isolated within user mode where recoverability is possible. This demonstrates the best practice of minimizing kernel usage while still maintaining a robust security posture and strong visibility. Windows provides several user mode protection approaches for anti-tampering, like Virtualization-based security (VBS) Enclaves and Protected Processes that vendors can use to protect their key security processes. Windows also provides ETW events and user-mode interfaces like Antimalware Scan Interface for event visibility. These robust mechanisms can be used to reduce the amount of kernel code needed to create a security solution, which balances security and robustness. ↫ David Weston, Vice President, Enterprise and OS Security at Microsoft

In what is surely an unprecedented event, I agree with the CrowdStrike criticism bubbling under the surface of this post-mortem by Microsoft. Everything seems to point towards CrowdStrike stuffing way more things in kernelspace than needed, and as such creating a far larger surface than necessary for things to go catastrophically wrong. While Microsoft obviously isn’t going to openly and publicly throw CrowdStrike under the bus, it’s very clear what they’re hinting at here, and this is about as close to a public flogging as we’re going to get.
Microsoft’s post-mortem further details a ton of work Microsoft has recently done, is doing, and will soon be doing to strengthen Windows’ security even more, and to lessen the need for kernelspace security drivers, including adding support for Rust to the Windows kernel, which should also aid in mitigating some common problems present in other, older programming languages (while not being a silver bullet either, of course).
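To give a concrete taste of the user-mode route Microsoft is advocating, here’s a minimal sketch of scanning content through the Antimalware Scan Interface mentioned in the quote above. To be clear, this is my own illustrative example of the public AMSI API, not code from Microsoft’s post, and real security products plug into AMSI as providers rather than as callers:

    /* Minimal AMSI sketch: ask the installed antimalware engine to scan a
       buffer entirely from user mode -- no kernel driver involved. */
    #include <windows.h>
    #include <amsi.h>
    #include <stdio.h>
    #pragma comment(lib, "amsi.lib")
    #pragma comment(lib, "ole32.lib")

    int main(void)
    {
        HAMSICONTEXT ctx;
        AMSI_RESULT result;
        /* The EICAR string is the standard harmless antivirus test probe. */
        const char sample[] =
            "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*";

        CoInitializeEx(NULL, COINIT_MULTITHREADED);
        if (FAILED(AmsiInitialize(L"amsi-demo", &ctx)))
            return 1;

        if (SUCCEEDED(AmsiScanBuffer(ctx, (PVOID)sample, sizeof(sample) - 1,
                                     L"eicar-test", NULL, &result)))
            printf("AMSI verdict: %s\n",
                   AmsiResultIsMalware(result) ? "malware" : "clean");

        AmsiUninitialize(ctx);
        CoUninitialize();
        return 0;
    }

If a process like this crashes or receives a bad update, Windows itself keeps running – which is exactly the recoverability argument Microsoft is making against putting all of this in kernel drivers.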

NotMyFault: Microsoft’s tool to create BSoDs

Blue screens of death are not exactly in short supply on Windows machines lately, but what if you really want to cause your own kernel panic or complete system crash, just because you love that shade of crashy blue? Well, there’s a tool for that called NotMyFault, developed by Mark Russinovich as part of Sysinternals.

NotMyFault is a tool that you can use to crash, hang, and cause kernel memory leaks on your Windows system. It’s useful for learning how to identify and diagnose device driver and hardware problems, and you can also use it to generate blue screen dump files on misbehaving systems. The download file includes 32-bit and 64-bit versions, as well as a command-line version that works on Nano Server. Chapter 7 in Windows Internals uses NotMyFault to demonstrate pool leak troubleshooting and Chapter 14 uses it for crash analysis examples. ↫ Mark Russinovich

Using this tool, you can select exactly what kind of crash you want to cause, and after clicking the Crash button, your Windows computer will do exactly as it’s told and crash with a lovely blue screen of death. It comes in both a GUI and a CLI version, and the latter also works on minimal Windows installations that don’t have the Windows shell installed. A tool like this may seem odd, but it can be particularly useful in situations where you’re trying to troubleshoot an issue, and to learn how to properly diagnose crashes. Or, you know, you can use it to create a panic at your workplace.

Managarm: microkernel-based OS with fully asynchronous I/O

Ah, another microkernel-based hobby operating system. The more, the merrier – and I mean this without a hint of sarcasm. There’s definitely been a small resurgence in activity lately when it comes to small hobby and teaching operating systems, some of which are exploring some truly new ideas, and I’m definitely here for it. Today we have managarm.

Some notable properties of managarm are: (i) managarm is based on a microkernel while common Desktop operating systems like Linux and Windows use monolithic kernels, (ii) managarm uses a completely asynchronous API for I/O and (iii) despite those internal differences, managarm provides good compatibility with Linux at the user space level. ↫ managarm GitHub page

It’s a 64-bit operating system with SMP support, an ACPI implementation, networking, USB3 support, and, as the quoted blurb details, a lot of support for Linux and POSIX. It can already run Weston, kmscon, and other things like Bash, the GNU Coreutils, and more. While not explicitly mentioned, I assume the best platform to run managarm on is most likely a virtual machine, and there’s a detailed handbook to help you along during building and using this new operating system.
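To make “completely asynchronous I/O” a bit more concrete: instead of a read() that blocks until data arrives, every operation is split into a submission step and a separate completion step. Managarm’s own interface is its own design, but the general pattern can be illustrated with plain POSIX AIO, which you can try on most Unix-likes today (link with -lrt on glibc; the file path is just an example):

    /* Submit-then-complete I/O with POSIX AIO -- not managarm's API, but
       the same asynchronous pattern its I/O layer is built around. */
    #include <aio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
        if (fd < 0) { perror("open"); return 1; }

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;

        if (aio_read(&cb) != 0) { perror("aio_read"); return 1; } /* returns at once */

        /* ... the program is free to do other work while the read is in flight ... */

        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, NULL);       /* block only when we need the result */
        ssize_t n = aio_return(&cb);      /* completion: number of bytes read   */
        printf("read %zd bytes asynchronously\n", n);

        close(fd);
        return 0;
    }

In a microkernel this style matters doubly, since “I/O” is really message passing with userspace servers, and blocking on every request would serialise everything.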

The bizarre secrets I found investigating corrupt Winamp skins

In January of 2021 I was exploring the corpus of Skins I collected for the Winamp Skin Museum and found some that seemed corrupted, so I decided to explore them. Winamp skins are actually just zip files with a different file extension, so I tried extracting their files to see what I could find. This ended up leading me down a series of wild rabbit holes. ↫ Jordan Eldredge I’m not going to spoil any of this.

Full-featured email server running OpenBSD

This blog post is a guide explaining how to setup a full-featured email server on OpenBSD 7.5. It was commissioned by a customer of my consultancy who wanted it to be published on my blog. Setting up a modern email stack that does not appear as a spam platform to the world can be a daunting task, the guide will cover what you need for a secure, functional and low maintenance email system. ↫ Solène Rapenne

If you ever wanted to set up and run your own email server, this is a great way to do it. Solène, an OpenBSD developer, will walk you through setting up IMAP, POP, and webmail, an SMTP server with server-to-server encryption and hidden personal information, every possible measure to make sure your server is regarded as legitimate, and all the usual firewall and anti-spam stuff you are definitely going to need.

Taking back email from Google – or even Proton, which is now doing both machine learning and Bitcoin, of all things – is probably one of the most daunting tasks for anyone willing to cut ties with as much of big tech as possible. Not only is there the technical barrier, there’s also the fact that the major email providers, like Gmail or whatever Microsoft offers these days, are trying their darnedest to make self-hosting email as cumbersome as possible by trying to label everything you send as spam or downright malicious. It’s definitely not an easy task, but at least with guides like this there’s a set of easy steps to follow to get there.
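The “regarded as legitimate” part mostly comes down to DNS: receiving servers check your SPF, DKIM, and DMARC records before trusting your mail. As a rough sketch of what those zone entries look like – example.com is a placeholder, and the DKIM public key comes out of the key-generation step a guide like this covers:

    example.com.                     IN MX  10 mail.example.com.
    example.com.                     IN TXT "v=spf1 mx -all"
    selector._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public key here>"
    _dmarc.example.com.              IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

Reverse DNS on the server’s IP address matching its hostname matters just as much; many large providers reject mail outright without it.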

OpenAI beta tests SearchGPT search engine

Normally I’m not that interested in reporting on news coming from OpenAI, but today is a little different – the company launched SearchGPT, a search engine that’s supposed to rival Google, but at the same time, they’re also kind of not launching a search engine that’s supposed to rival Google. What?

We’re testing SearchGPT, a prototype of new search features designed to combine the strength of our AI models with information from the web to give you fast and timely answers with clear and relevant sources. We’re launching to a small group of users and publishers to get feedback. While this prototype is temporary, we plan to integrate the best of these features directly into ChatGPT in the future. If you’re interested in trying the prototype, sign up for the waitlist. ↫ OpenAI website

Basically, before adding a more traditional web-search-like feature set to ChatGPT, the company is first breaking it out into a separate, temporary product that users can test, before parts of it are integrated into OpenAI’s main ChatGPT product. It’s an interesting approach, and with just how stupidly popular and hyped ChatGPT is, I’m sure they won’t have any issues assembling a large enough pool of testers. OpenAI claims SearchGPT will be different from, say, Google or AltaVista, by employing a conversation-style interface with real-time results from the web. Sources for search results will be clearly marked – good – and additional sources will be presented in a sidebar. True to the ChatGPT-style user interface, you can keep “talking” after hitting a result to refine your search further.

I may perhaps betray my still relatively modest age, but do people really want to “talk” to a machine to search the web? Any time I’ve ever used one of these chatbot-style user interfaces – including ChatGPT – I find them cumbersome and frustrating, like they’re just adding an obtuse layer between me and the computer, and that I’d rather just be instructing the computer directly. Why try to verbally massage a stupid autocomplete into finding a link to an article I remember from a few days ago, instead of just typing in a few quick keywords? I am more than willing to concede I’m just out of touch with what people really want, so maybe this really is the future of search. I hope I can always disable nonsense like this and just throw keywords at the problem.

Two threads, one core: how simultaneous multithreading works under the hood

Simultaneous multithreading (SMT) is a feature that lets a processor handle instructions from two different threads at the same time. But have you ever wondered how this actually works? How does the processor keep track of two threads and manage its resources between them? In this article, we’re going to break it all down. Understanding the nuts and bolts of SMT will help you decide if it’s a good fit for your production servers. Sometimes, SMT can turbocharge your system’s performance, but in other cases, it might actually slow things down. Knowing the details will help you make the best choice. ↫ Abhinav Upadhyay

Some light reading for the (almost) weekend.
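If you want to see SMT on your own machine before diving into the article, Linux exposes the sibling relationship in sysfs: two logical CPUs that share one physical core report each other as thread siblings. A quick sketch (the sysfs path is Linux-specific):

    /* Print which logical CPUs share a physical core with CPU 0 (Linux).
       With SMT enabled this prints something like "0,8"; without it, "0". */
    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen(
            "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
        if (f == NULL) { perror("fopen"); return 1; }
        if (fgets(line, sizeof line, f) != NULL)
            printf("cpu0 shares a core with: %s", line);
        fclose(f);
        return 0;
    }

Those sibling pairs are exactly the hardware threads competing for one core’s execution resources, which is where the article’s performance caveats come from.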

Intel: Raptor Lake faults excessive voltage from microcode, fix coming in August

What started last year as a handful of reports about instability with Intel’s Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability impediment in decades, Intel has been under increasing pressure to figure out the root cause of the issue and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like Intel’s latest saga is about to reach its end, as today the company has announced that they’ve found the cause of the issue, and will be rolling out a microcode fix next month to resolve it. ↫ Ryan Smith at AnandTech

It turns out the root cause of the problem is “elevated operating voltages”, caused by a buggy algorithm in Intel’s own microcode. As such, it’s at least fixable through a microcode update, which Intel says it will ship sometime in mid-August. AnandTech, my one true source for proper reporting on things like this, is not entirely satisfied, though, as they state microcode is often used to just cover up the real root cause that’s located much deeper inside the processor, and as such, Intel’s explanation doesn’t actually tell us very much at all.

Quite coincidentally, Intel also experienced a manufacturing flaw in a small batch of very early Raptor Lake processors. An “oxidation manufacturing flaw” found its way into a small number of early Raptor Lake processors, but the company claims it was caught early and shouldn’t be an issue any more. Of course, for anyone experiencing issues with their expensive Intel processors, this will linger in the back of their minds, too.

Not exactly a flawless launch for Intel, but it seems its only real competitor, AMD, is experiencing issues as well, as that company has delayed the launch of its new Ryzen 9000 chips due to quality issues. I’m not at all qualified to make any relevant statements about this, but with the recent launch of the Snapdragon X Elite and X Plus chips, these issues couldn’t come at a worse time for Intel and AMD.

FreeBSD as a platform for your future technology

Choosing an operating system for new technology can be crucial for the success of any project. Years down the road, this decision will continue to inform the speed and efficiency of development. But should you build the infrastructure yourself or rely on a proven system? When faced with this decision, many companies have chosen, and continue to choose, FreeBSD. Few operating systems offer the immediate high performance and security of FreeBSD, areas where new technologies typically struggle. Having a stable and secure development platform reduces upfront costs and development time. The combination of stability, security, and high performance has led to the adoption of FreeBSD in a wide range of applications and industries. This is true for new startups and larger established companies such as Sony, Netflix, and Nintendo. FreeBSD continues to be a dependable ecosystem and an industry-leading platform. ↫ FreeBSD Foundation

A FreeBSD marketing document highlighting FreeBSD’s strengths is, of course, hardly a surprise, but considering it’s fighting what you could generously call an uphill battle against the dominance of Linux, it’s still interesting to see what, exactly, FreeBSD highlights as its strengths. It should come as no surprise that its licensing model – the simple BSD license – is mentioned first and foremost, since it’s a less cumbersome license to deal with than something like the GPL. It’s a philosophical debate we won’t be concluding any time soon, but the point still stands. FreeBSD also highlights that it’s apparently quite easy to upstream changes to FreeBSD, making sure that changes benefit everyone who uses it. While I can’t vouch for this, it does seem reasonable to assume that it’s easier to deal with the integrated, one-stop shop that is FreeBSD than with the hodge-podge of hundreds of groups whose software together makes up a Linux system. Like I said, this is a marketing document, so do keep that in mind, but I still found it interesting.

You can contribute to KDE with non-C++ code

Not everything made by KDE uses C++. This is probably obvious to some people, but it’s worth mentioning nevertheless. And I don’t mean this as just “well duh, KDE uses QtQuick which is written with C++ and QML”. I also don’t mean this as “well duh, Qt has a lot of bindings to other languages”. I mean explicitly “KDE has tools written primarily in certain languages and specialized formats”. ↫ Thiago Sueto

If you ever wanted to contribute to KDE but weren’t sure if your preferred programming language or tools were relevant, this is a great blog post detailing how you can contribute if you are familiar with any of the following: Python, Ruby, Perl, Containerfile/Docker/Podman, HTML/SCSS/JavaScript, WebAssembly, Flatpak/Snap, CMake, Java, and Rust. A complex, large project like KDE needs people with a wide variety of skills, so it’s definitely not just C++. An excellent place to start.

New Samsung phones block sideloading by default

The assault on a user’s freedom to install whatever they want on what is supposed to be their phone continues. This time, it’s Samsung adding an additional blocker to users installing applications from outside the Play Store and its own mostly useless Galaxy Store.

Technically, Android already blocks sideloading by default at an operating system level. The permission that’s needed to silently install new apps without prompting the user, INSTALL_PACKAGES, can only be granted to preinstalled app stores like the Google Play Store, and it’s granted automatically to apps that request it. The permission that most third-party app stores end up using, REQUEST_INSTALL_PACKAGES, has to be granted explicitly by the user. Even then, Android will prompt the user every time an app with this permission tries to install a new app. Samsung’s Auto Blocker feature takes things a bit further. The feature, first introduced in One UI 6.0, fully blocks the installation of apps from unauthorized sources, even if those sources were granted the REQUEST_INSTALL_PACKAGES permission. ↫ Mishaal Rahman

I’m not entirely sure why Samsung felt the need to add an additional, Samsung-specific blocking mechanism, but at least for now, you can turn it off in the Settings application. This means that in order to install an application from outside of the Play Store and the Galaxy Store on brand new Samsung phones – the ones shipping with One UI 6.1.1 – you need to both grant the regular Android permission and turn off this nag feature. Having two variants of every application on your Samsung phone wasn’t enough, apparently.

Google won’t be deprecating third-party cookies from Chrome after all

This story just never ever ends. After delays, changes in plans, and more delays, we now have yet another change of plans. After years of stalling, Google has now announced it is, in fact, not going to deprecate third-party cookies in Chrome by default.

In light of this, we are proposing an updated approach that elevates user choice. Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time. We’re discussing this new path with regulators, and will engage with the industry as we roll this out. ↫ Anthony Chavez

Google remains unclear about what, exactly, users will be able to choose between. The consensus seems to be that users will be able to choose between retaining third-party cookies and turning them off, but that’s based on a statement by the British Competition and Markets Authority, and not on a statement from Google itself. It seems reasonable to assume the CMA knows what it’s talking about, but with a company like Google you never know what’s going to happen tomorrow, let alone a few months from now.

While both Safari and Firefox made this move ages ago, it’s taking Google and Chrome a lot longer to deal with this issue, because Google needs to find different ways of tracking you that do not rely on third-party cookies. Google’s own testing of Privacy Sandbox, Chrome’s sarcastically-named alternative to third-party cookies, shows that it seems to perform reasonably well, which should definitely raise some alarm bells about just how private it really is. Regardless, I doubt this saga will be over any time soon.

No, Southwest Airlines is not still using Windows 3.1

A story that’s been persistently making the rounds since the CrowdStrike event is that while several airline companies were affected in one way or another, Southwest Airlines escaped the mayhem because they were still using Windows 3.1. It’s a great story that fits the current zeitgeist about technology and its role in society, underlining that what is claimed to be technological progress is nothing but trouble, and that it’s better to stick with the old. At the same time, anybody who dislikes Southwest Airlines can point and laugh at the bumbling idiots working there for still using Windows 3.1. It’s like a perfect storm of technology news clickbait and ragebait.

Too bad the whole story is nonsense.

But how could that be? It’s widely reported by reputable news websites all over the world, shared on social media like a strain of the common cold, and nobody seems to question it or doubt the veracity of the story. It seems that Southwest Airlines running on an operating system from 1992 is a perfectly believable story to just about everyone, so nobody is questioning it or wondering if it’s actually true. Well, I did, and no, it’s not true.

Let’s start with the actual source of the claim that Southwest Airlines was unaffected by CrowdStrike because they’re still using Windows 3.11 for large parts of their primary systems. This claim is easily traced back to its origin – a tweet by someone called Artem Russakovskii, stating that “the reason Southwest is not affected is because they still run on Windows 3.1”. This tweet formed the basis for virtually all of the stories, but it contains no sources, no links, no background information, nothing. It was literally just this one line. It turned out to be a troll tweet. A reply to the tweet by Russakovskii a day later made that very clear: “To be clear, I was trolling last night, but it turned out to be true. Some Southwest systems apparently do run Windows 3.1. lol.” However, the article linked in that reply doesn’t cite any sources either, so we’re right back where we started.

After quite a bit of digging – that is, clicking a few links and like three minutes of searching online – following the various references and links back to their sources, I managed to find where all these stories actually come from, and arrived at the root claim that spawned all the others. It’s an article by The Dallas Morning News, titled “What’s the problem with Southwest Airlines scheduling system?” At the end of last year, Southwest Airlines’ scheduling system had a major meltdown, leading to a lot of cancelled flights and stranded travelers just around the Christmas holidays. Of course, the media wanted to know what caused it, and that’s where The Dallas Morning News article comes from. In it, we find the paragraphs that started the story that Southwest Airlines is still using Windows 3.1 (and Windows 95!):

Southwest uses internally built and maintained systems called SkySolver and Crew Web Access for pilots and flight attendants. They can sign on to those systems to pick flights and then make changes when flights are canceled or delayed or when there is an illness. “Southwest has generated systems internally themselves instead of using more standard programs that others have used,” Montgomery said. “Some systems even look historic like they were designed on Windows 95.” SkySolver and Crew Web Access are both available as mobile apps, but those systems often break down during even mild weather events, and employees end up making phone calls to Southwest’s crew scheduling help desk to find better routes. During periods of heavy operational trouble, the system gets bogged down with too much demand. ↫ Kyle Arnold at The Dallas Morning News

That’s it. That’s where all these stories trace their origin to. These few paragraphs do not say that Southwest is still using ancient Windows versions; they just state that the systems it developed internally, SkySolver and Crew Web Access, look “historic like they were designed on Windows 95”. The fact that they are also available as mobile applications should further make it clear that no, these applications are not running on Windows 3.1 or Windows 95. Southwest pilots and cabin crews are definitely not carrying around pocket laptops from the ’90s. These paragraphs were then misread, misunderstood, and mangled in a game of social media and bad reporting telephone, and here we are.

The fact that nobody seems to have taken the time to click through a few links to find the supposed source of these claims, instead focusing on cashing in on the clicks and rage these stories would elicit, is a rather damning indictment of the state of online (tech) media. Many of the websites reporting on these stories are part of giant media conglomerates with massive numbers of paid staff, and they’re being outdone by a dude in the Arctic with a small Patreon, minimal journalism training, and some common sense. This story wasn’t hard to debunk – a few clicks and a few minutes of online searching is all it took. Ask yourself – why do these massive news websites not even perform the bare minimum?

A brief history of Dell UNIX

“Dell UNIX? I didn’t know there was such a thing.” A couple of weeks ago I had my new XO with me for breakfast at a nearby bakery café. Other patrons were drawn to seeing an XO for the first time, including a Linux person from Dell. I mentioned Dell UNIX and we talked a little about the people who had worked on Dell UNIX. He expressed surprise that mention of Dell UNIX evokes the above quote so often and pointed out that Emacs source still has #ifdef for Dell UNIX. Quick Googling doesn’t reveal useful history of Dell UNIX, so here’s my version, a summary of the three major development releases. ↫ Charles H. Sauer

I sure had never heard of Dell UNIX, and despite the original version of the linked article being very, very old – 2008 – there are a few updates from 2020 and 2021 that add links to the files and instructions needed to install, set up, and run Dell UNIX in a virtual machine; 86Box or VirtualBox, specifically.

What was Dell UNIX? In the late ’80s, Dell started the Olympic project, an effort to create a completely new architecture spanning desktops, workstations, and servers, some of which would be using multiple processors. When searching for an operating system for this project, the only real option was UNIX, and as such, the Olympic team set out to develop a UNIX variant. The first version was based on System V Release 3.2, used Motif and the X Window System, included a DOS virtual machine called Merge to run, well, DOS applications, and offered compatibility with Microsoft Xenix. It might seem strange to us today, but Microsoft’s Xenix was incredibly popular at the time, and compatibility with it was a big deal.

The Olympic project turned out to be too ambitious on the hardware front, so it got cancelled, but the Dell UNIX project continued to be developed. The next version, Dell System V Release 4, was a massive release, and included a full X Window System desktop environment called X.desktop, an office suite, e-mail software, and a lot more. It also contained something Windows wouldn’t be getting for quite a few years to come: automatic configuration of device drivers. This was apparently so successful that it reduced the number of support calls during the first 90 days of availability by 90% compared to the previous release.

Dell SVR4 finally seemed like real UNIX on a PC. We were justifiably proud of the quality and comprehensiveness, especially considering that our team was so much smaller than those of our perceived competitors at ISC, SCO and Sun(!). The reviewers were impressed. Reportedly, Dell SVR4 was chosen by Intel as their reference implementation in their test labs, chosen by Oracle as their reference Intel UNIX implementation, and used by AT&T USL for in house projects requiring high reliability, in preference to their own ports of SVR4.0. (One count showed Dell had resolved about 1800 problems in the AT&T source.) I was astonished one morning in the winter of 1991-92 when Ed Zander, at the time president of SunSoft, and three other SunSoft executives arrived at my office, requesting Dell help with their plans to put Solaris on X86. ↫ Charles H. Sauer

Sadly, this would also prove to be the last major release of Dell UNIX. After a few more point releases, the brass at Dell realised that Dell UNIX, intended to sell Dell hardware, was mostly being sold to people running it on non-Dell hardware, and after a short internal struggle, the entire project was cancelled, since it was costing them more than it was earning them.
As I noted, the article contains the files and instructions needed to run Dell UNIX today, on a virtual machine. I’m definitely going to try that out once I have some time, if only to take a peek at that X.desktop, because that looks absolutely stunning for its time.

OpenBSD workstation for the people

This is an attempt at building an OpenBSD desktop that could be used by newcomers or by people who don’t care about tinkering with computers and just want a working daily driver for general tasks. Somebody will obviously need to know a bit of UNIX, but we’ll try to limit it to the minimum. ↫ Joel Carnat

An excellent, to-the-point, no-nonsense guide about turning a default OpenBSD installation into a desktop operating system running Xfce. You definitely don’t need intimate, arcane knowledge of OpenBSD to follow along with this one.

OpenBSD gets hardware accelerated video decoding/encoding

Only yesterday, I mentioned that one of the main reasons I decided to switch back to Fedora from OpenBSD was performance issues – and one of them was definitely the lack of hardware acceleration for video decoding/encoding. Without it, decoding and encoding video is done on the processor, which is far less efficient than letting your GPU do it – resulting in issues like stuttering and tearing, as well as a drastic reduction in battery life. Well, that’s changed now. Thanks to the work of, well, many, a major commit has added hardware accelerated video decoding/encoding to OpenBSD.

Hardware accelerated video decode/encode (VA-API) support is beginning to land in #OpenBSD -current. libva has been integrated into xenocara with the Intel userland drivers in the ports tree. AMD requires Mesa support, hence the inclusion in base. A number of ports will be adjusted to enable VA-API support over time, as they are tested. ↫ Bryan Steele

This is great news, and a major improvement for OpenBSD and its community. Apparently, performance in Firefox is excellent, and with simply watching video on YouTube being something a lot of people do with their computers – especially laptops – anyone using OpenBSD is going to benefit immensely from this work.
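If you want to check whether VA-API actually picks up your GPU, the libva project’s vainfo utility does this for you, but the core of the probe is small enough to sketch out against libva directly (the render node path is an assumption – adjust it for your machine):

    /* Minimal VA-API probe: open a DRM render node and ask which driver,
       if any, answers for it. Build with -lva -lva-drm. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR); /* first render node */
        if (fd < 0) { perror("open"); return 1; }

        VADisplay dpy = vaGetDisplayDRM(fd);
        int major, minor;
        if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
            fprintf(stderr, "no usable VA-API driver\n");
            close(fd);
            return 1;
        }
        printf("VA-API %d.%d: %s\n", major, minor, vaQueryVendorString(dpy));

        vaTerminate(dpy);
        close(fd);
        return 0;
    }

If this prints a vendor string, applications built with VA-API support – like the adjusted Firefox port – should be able to hand video work off to the GPU.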

1989 networking: NetWare 386

NetWare 386, or 3.0, was a very limited release, with very few copies sold before it was superseded by newer versions. As such, it was considered lost to time, since it was only sold to large corporations – for a massive price tag of almost 8000 dollars – who obviously didn’t care about software preservation. There are no original disks left, but a recent “warez” release has made the software available once again. As always, pirates save the day.

Managing Classic Mac OS resources in ResEdit

The Macintosh was intended to be different in many ways. One of them was its file system, which was designed for each file to consist of two forks, one a regular data fork as in normal file systems, the other a structured database of resources, the resource fork. Resources came to be used to store a lot of standard structured data, such as the specifications for and contents of alerts and dialogs, menus, collections of text strings, keyboard definitions and layouts, icons, windows, fonts, and chunks of code to be used by apps. You could extend the types of resource supported by means of a template, itself stored as a resource, so developers could define new resource types appropriate to their own apps. ↫ Howard Oakley

And using ResEdit, a tool developed by Apple, you could manipulate the various resources to your heart’s content. I never used the classic Mac OS when it was current, and only play with it as a retro platform every now and then, so I never got to use ResEdit when it was the cool thing to do. Looking back, though, and learning more about it, it seems like just another awesome capability that Apple lost along the way towards modern Apple. Perhaps I should fire up one of my old Macs and see with my own eyes what I can do with ResEdit.
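For a taste of how applications actually consumed these resources, here’s a rough sketch using the classic Toolbox Resource Manager (classic Mac OS C; the 'STR ' type is real, but the resource ID and its use here are made-up examples):

    /* Classic Mac OS Toolbox sketch: load a string resource by type and ID.
       A resource like 'STR ' 128 is exactly the kind of thing you would
       create and tweak in ResEdit without touching the app's code. */
    #include <Resources.h>
    #include <Memory.h>   /* MacMemory.h in later Universal Interfaces */

    void LoadGreeting(void)
    {
        Handle h = GetResource('STR ', 128); /* searches open resource forks */
        if (h != NULL) {
            HLock(h);                    /* pin the handle before using it   */
            /* A 'STR ' resource is a Pascal string: length byte, then text. */
            /* ... read the string out of *h here ... */
            HUnlock(h);
            ReleaseResource(h);          /* hand the memory back             */
        }
    }

Because the data lived in the resource fork rather than in compiled code, anyone with ResEdit could retheme dialogs, rename menus, or translate an app without recompiling anything – which is exactly what made the tool so beloved.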