Limine is an advanced, portable, multiprotocol bootloader that supports Linux, multiboot1 and 2, the native Limine boot protocol, and more. Limine is lightweight, elegant, fast, and the reference implementation of the Limine boot protocol. The Limine boot protocol’s main target audience is operating system and kernel developers that want to use a boot protocol which supports modern features in an elegant manner, that GRUB’s aging multiboot protocols do not (or do not properly). ↫ Limine website I wish trying out different bootloaders was an easier thing to do. Personally, since my systems only run Fedora Linux, I’d love to just move them all over to systemd-boot and not deal with GRUB at all anymore, but since it’s not supported by Fedora I’m worried updates might break the boot process at some point. On systems where only one operating system is installed, as a user I should really be given the choice to opt for the simplest, most basic boot sequence, even if it can’t boot any other operating systems or if it’s more limited than GRUB.
Following our recent work with Ubuntu 24.04 LTS where we enabled frame pointers by default to improve debugging and profiling, we’re continuing our performance engineering efforts by evaluating the impact of O3 optimization in Ubuntu. O3 is a GCC optimization level that applies more aggressive code transformations compared to the default O2 level. These include advanced function inlining and the use of sophisticated algorithms aimed at enhancing execution speed. While O3 can increase binary size and compilation time, it has the potential to improve runtime performance. ↫ Ubuntu Discourse If these optimisations deliver performance improvements, and the only downside is larger binaries and longer compilation times, it seems like a bit of a no-brainer to enable these, assuming those mentioned downsides are within reason. Are there any downsides they’re not mentioning? Browsing around and doing some minor research, it seems that -O3 optimisations may break some packages, and can even lead to performance degradation, defeating the purpose altogether. Looking at a set of benchmarks from Phoronix from a few years ago, in which the Linux kernel was compiled with either O2 or O3 and their performance compared, the results were effectively tied, making it seem not worth it at all. However, during these benchmarks, only the kernel was tested; everything else was compiled normally in both cases. Perhaps compiling the entire system with O3 will yield improvements in other parts of the system that do add up. For now, you can download unsupported Ubuntu ISOs compiled with O3 optimisations enabled to test them out.
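If you want to see for yourself what the higher optimisation level actually changes, a toy function is enough: compile it at both levels and diff the generated assembly. A minimal sketch – the file and function names here are mine, and what GCC emits depends heavily on its version and the target architecture:

```c
/* sum.c – a toy kernel for comparing optimisation levels.
 * Compare the generated assembly with:
 *   gcc -O2 -S -o sum_o2.s sum.c
 *   gcc -O3 -S -o sum_o3.s sum.c
 * At -O3, GCC applies more aggressive loop vectorisation and
 * unrolling; the exact output varies by GCC version and target. */
int sum(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}
```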
Another month, another chunk of progress for the Servo rendering engine. The biggest addition is enabling table rendering to be spread across CPU cores. Parallel table layout is now enabled, spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo’s layout engine. ↫ Servo blog On top of this, there are tons of improvements to the flexbox layout engine, support for generic font families like ‘sans-serif’ and ‘monospace’ has been added, and Servo now supports OpenHarmony, the operating system developed by Huawei. This month also saw a lot of work on the development tools.
Most applications on GNU/Linux by convention delegate to xdg-open when they need to open a file or a URL. This ensures consistent behavior between applications and desktop environments: URLs are always opened in our preferred browser, images are always opened in the same preferred viewer. However, there are situations when this consistent behavior is not desired: for example, if we need to override the default browser just for one application and only temporarily. This is where xdg-override helps: it replaces xdg-open with itself to alter the behavior without changing system settings. ↫ xdg-override GitHub page I’ve loved this project ever since I came across it a few days ago. Not because I need it – I really don’t – but because of the story behind its creation. The author of the tool, Dmytro Kostiuchenko, wanted Slack, which he only uses for work, to only open his work browser – which is a different browser from his default browser. For example, imagine you normally use Firefox for everything, but for all your work-related things, you use Chrome. So, when you open a link sent to you in Slack by a colleague, you want that specific link to open in Chrome. Well, this is not easily achieved in Linux. Applications on Linux tend to use freedesktop.org’s xdg-open for this, which looks at the file mimeapps.list to learn which application opens which file type or URL. To solve Kostiuchenko’s issue, changing the variable $XDG_CONFIG_HOME just for Slack to point xdg-open to a different configuration file doesn’t work, because the setting will be inherited by everything else spawned from Slack itself. Changing mimeapps.list doesn’t work either, of course, since that would affect all other applications, too. So, what’s the actual solution? We’d like also not to change xdg-open implementation globally in our system: ideally, the change should only affect Slack, not all other apps. But foremost, diverging from upstream is very unpractical. However, in the spirit of this solution, we can introduce a proxy implementation of xdg-open, which we’ll “inject” into Slack by adding it to PATH. ↫ Dmytro Kostiuchenko xdg-override takes this idea and runs with it: It is based on the idea described above, but the script won’t generate proxy implementation. Instead, xdg-override will copy itself to /tmp/xdg-override-$USER/xdg-open and will set a few $XDG_OVERRIDE_* variables and the $PATH. When xdg-override is invoked from this new location as xdg-open, it’ll operate in a different mode, parsing $XDG_OVERRIDE_MATCH and dispatching the call appropriately. I tested this script briefly, but automated tests are missing, so expect some rough edges and bugs. ↫ Dmytro Kostiuchenko I don’t fully understand how it works, but I get the overall gist of what it’s doing. I think it’s quite clever, and solves a very specific issue in a non-destructive way. While it’s not something most people will ever need, it feels like something that, if you do need it, will quickly become a default part of your toolbox or workflow.
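To make the proxy trick more concrete, here’s a minimal sketch of the dispatch idea in C. To be clear: this is not how xdg-override is actually implemented (it’s a shell script), and the XDG_OVERRIDE_MATCH format used here (“substring=command”) is invented purely for illustration:

```c
/* Sketch of a proxy xdg-open, placed first in PATH so it intercepts
 * an application's xdg-open calls. NOT xdg-override's actual code;
 * the "substring=command" rule format is invented for illustration,
 * e.g. XDG_OVERRIDE_MATCH="slack.com=google-chrome". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: xdg-open <file-or-url>\n");
        return 1;
    }
    char *rule = getenv("XDG_OVERRIDE_MATCH");
    if (rule) {
        char *copy = strdup(rule);
        char *sep = strchr(copy, '=');
        if (sep) {
            *sep = '\0';
            if (strstr(argv[1], copy)) {
                /* The URL matches: hand it to the override browser. */
                execlp(sep + 1, sep + 1, argv[1], (char *)NULL);
                perror("execlp");
                return 1;
            }
        }
        free(copy);
    }
    /* No match: fall through to the real xdg-open, assumed to live at
     * its usual path outside our injected PATH entry. */
    execl("/usr/bin/xdg-open", "xdg-open", argv[1], (char *)NULL);
    perror("execl");
    return 1;
}
```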
Today, every Unix-like system can trace their ancestry back to the original Unix. That includes Linux, which uses the GNU tools – and the GNU tools are based on the Unix tools. Linux in 2024 is removed from the original Unix design, and for good reason – Linux supports architectures and tools not dreamt of during the original Unix era. But the core command line experience in Linux is still very similar to the Unix command line of the 1970s. The next time you use ls to list the files in a directory, remember that you’re using a command line that’s been with us for more than fifty years. ↫ Jim Hall An excellent overview of some of the more ancient UNIX commands that are still with us today. One thing I always appreciate when I dive into an operating system closer to “real” UNIX, like OpenBSD, or an actual UNIX, like HP-UX, is just how much more logical sense they make under the hood than a Linux system does. This is not a dunk on modern Linux – it has to cater to far more modern needs than something ancient and dead like HP-UX – but what I learn while using these systems closer to the original UNIX has made me appreciate proper UNIX more than I used to. In what surely sounds like utter lunacy to system administrators who actually had to seriously administer HP-UX systems back in the day, I genuinely love using HP-UX, setting it up, configuring it, messing around with it, because it just makes so much more logical sense than the systems we use today. The knowledge gained from using BSD, HP-UX, and others, while not always directly applicable to Linux, does aid me in understanding certain Linux things better than I did before. What I’m trying to say is – go and load up an old UNIX, or at least a modern BSD. Aside from being great operating systems in their own right, they’re much easier to grasp than a modern Linux system, and you’ll learn a lot from the experience.
Android 14 introduced the ability for application stores to claim ownership over application updates, to ensure other installation sources won’t accidentally update applications they shouldn’t. What is still lacking, however, is a way for users to easily change the update ownership of applications. In other words, say you install an application by downloading an APK from GitHub, and later the application makes its way to F-Droid; you’ll get warning popups when F-Droid tries to update that application. That’s about to change, it seems, as Android Authority discovered that the Play Store application seems to be getting a new feature where it can take ownership of an application’s updates. A new flag spotted in the latest Google Play Store release suggests that users may see the option to install updates for apps downloaded from a different source. As you can see in the attached screenshots, the Play Store will show available updates for apps downloaded from different sources. On the app listing, you’ll also see a new “Update from Play” button that will switch the update ownership from the original source to the Play Store. ↫ Pranob Mehrotra at Android Authority Assuming this functionality is just an API other application stores can also tap into, this will be a great addition to Android for power users who use multiple application stores and want to properly manage which store updates which applications. It’s not something most people will ever really use or need, but if you’re the kind of person who does need it – it’ll become indispensable.
This is my second book written with Sphinx, after the new Learn TLA+. Sphinx uses a peculiar markup called reStructured Text (rST), which has a steeper learning curve than markdown. I only switched to it after writing a couple of books in markdown and deciding I needed something better. So I want to talk about why rst was that something. ↫ Hillel Wayne I’ve never liked Markdown – I find it quite arbitrary and unpleasant to look at, and the fact there’s countless variants that all differ a tiny bit doesn’t help – so even though I don’t actually use Markdown for anything, I always have a passing interest in possible alternatives, if only to see what other, different, and unique ideas are out there when it comes to relatively simple markup languages. Now, I’m quite sure reStructured Text isn’t for me either, since I feel like it’s far more powerful than Markdown, and serves a different, more complex purpose. That being said, I figured I’d highlight it here since it seems it may be interesting to some of you who work on documentation for your software projects or similar endeavours.
Serpent OS, a new Linux distribution with a completely custom package management system written in Rust, has released its very, very rough pre-alpha. They’ve been working on this for four years, and they’re making some interesting choices regarding packaging that I really like, at least on paper. This will of course appear to be a very rough (crap) prealpha ISO. Underneath the surface it is using the moss package manager, our very own package management solution written in Rust. Quite simply, every single transaction in moss generates a new filesystem tree (/usr) in a staging area as a full, stateless, OS transaction. When the package installs succeed, any transaction triggers are run in a private namespace (container) before finally activating the new /usr tree. Through our enforced stateless design, usr-merge, etc, we can atomically update the running OS with a single renameat2 call. As a neat aside, all OS content is deduplicated, meaning your last system transaction is still available on disk allowing offline rollbacks. ↫ Ikey Doherty Since this is only a very rough pre-alpha release, I don’t have much more to say at this point, but I do think it’s interesting enough to let y’all know about it. Even if you’re not the kind of person to dive into pre-alphas, I think you should keep an eye on Serpent OS, because I have a feeling they’re on to something valuable here.
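The atomic-switch trick in that quote is worth dwelling on: on Linux, renameat2() with the RENAME_EXCHANGE flag swaps two paths in a single atomic operation, so there is never a moment where /usr is missing or half-populated. A minimal sketch of the idea – not moss’s actual code (moss is written in Rust), and the staging path is made up:

```c
/* Atomically exchange the live /usr with a fully-staged replacement.
 * NOT moss's actual code; the staging path is illustrative. The
 * renameat2() wrapper needs glibc 2.28 or newer. */
#define _GNU_SOURCE
#include <fcntl.h>   /* AT_FDCWD */
#include <stdio.h>   /* renameat2(), RENAME_EXCHANGE, perror() */

int main(void) {
    /* After this call the old tree sits at the staging path, which
     * is what makes offline rollbacks possible. */
    if (renameat2(AT_FDCWD, "/.moss/staging/usr",
                  AT_FDCWD, "/usr", RENAME_EXCHANGE) == -1) {
        perror("renameat2");
        return 1;
    }
    return 0;
}
```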
Yesterday I highlighted a study that found that AI and ML, and the expectations around them, are actually causing people to need to work harder and more, instead of less. Today, I have another study for you, this time focusing on a more long-term issue: when you use something like ChatGPT to troubleshoot and fix a bug, are you actually learning anything? A professor at MIT divided a group of students into three, and gave them a programming task in a language they did not know (FORTRAN). One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components. Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade. Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed. ↫ Esther Shein at ACM I find this an interesting result, but at the same time, not a very surprising one. It reminds me a lot of high school, where I was part of the first generation whose math and algebra courses were built around using a graphing calculator. Despite being able to solve and graph complex equations with ease thanks to our TI-83, we were, of course, still told to include our “work”, the steps taken to get from the question to the answer, instead of only writing down the answer itself. Since I was quite good “at computers”, and even managed to do some very limited programming on the TI-83, it was an absolute breeze for me to hit some buttons and get the right answers – but since I knew, and know, absolutely nothing about math, I couldn’t for the life of me explain how I got to the answers. Using ChatGPT to fix your programming problem feels like a very similar thing. Sure, ChatGPT can spit out a workable solution for you, but since you aren’t aware of the steps between problem and solution, you aren’t actually learning anything. By using ChatGPT, you’re not actually learning how to program or how to improve your skills – you’re just hitting the right buttons on a graphing calculator and writing down what’s on the screen, without understanding why or how. I can totally see how using ChatGPT for boring boilerplate code you’ve written a million times over, or to point you in the right direction while still coming up with your own solution to a problem, can be a good and helpful thing. I’m just worried about a degradation in skill level and code quality, and how society will, at some point, pay the price for that.
Is machine learning, also known as “artificial intelligence”, really aiding workers and increasing productivity? A study by Upwork – which, as Baldur Bjarnason so helpfully points out, sells AI solutions and hence did not promote this study on its blog as it does with its other studies – reveals that this might not actually be the case. Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way. For example, survey respondents reported that they’re spending more time reviewing or moderating AI-generated content (39%), invest more time learning to use these tools (23%), and are now being asked to do more work (21%). Forty percent of employees feel their company is asking too much of them when it comes to AI. ↫ Upwork research This shouldn’t come as a surprise. We’re in a massive hype cycle when it comes to machine learning, and we’re being told it’s going to revolutionise work and lead to massive productivity gains. In practice, however, it seems these tools just can’t measure up to the hyped promises, and are in fact making people do less and work slower. There are countless stories of managers being told by upper management to shove machine learning into everything, from products to employee workflows, whether it makes any sense to do so or not. I know from experience as a translator that machine learning can greatly improve my productivity, but the fact that certain types of tasks benefit from ML doesn’t mean every job suddenly thrives with it. I’m definitely starting to see some cracks in the hype cycle, and this study highlights a major one. I hope we can all come down to earth again, and really take a careful look at where ML makes sense and where it does not, instead of giving every worker a ChatGPT account and blanket demanding massive productivity gains that in no way match the reality on the office floor. And of course, despite demanding massive productivity increases, it’s not like workers are getting an equivalent increase in salary. We’ve seen massive productivity increases for decades now, while paychecks have not followed suit at all, and many people can actually buy less with their salary today than their parents could decades ago. The demands managers impose by introducing AI are only going to make this discrepancy even worse.
Logitech CEO Hanneke Faber talked about something called the “forever mouse”, which would be, as the name implies, a mouse that customers could use for a very long time. While you may think this would mean an incredibly well-built mouse, or one that can be easily repaired, which Logitech already makes somewhat possible through a partnership with iFixit, another option the company is thinking about is a subscription model. Yes. Faber said subscription software updates would mean that people wouldn’t need to worry about their mouse. The business model is similar to what Logitech already does with video conferencing services (Logitech’s B2B business includes Logitech Select, a subscription service offering things like apps, 24/7 support, and advanced RMA). Having to pay a regular fee for full use of a peripheral could deter customers, though. HP is trying a similar idea with rentable printers that require a monthly fee. The printers differ from the idea of the forever mouse in that the HP hardware belongs to HP, not the user. However, concerns around tracking and the addition of ongoing expenses are similar. ↫ Scharon Harding at Ars Technica Now, buying a mouse whose terrible software requires a subscription would still be a choice you can avoid, but my mind immediately conjured up a far darker scenario. PC makers have a long history of adding crapware to their machines in return for payments from the producers of said crapware. I can totally see what’s going to happen next. You buy a brand new laptop, unbox it at home, and turn it on. Before you know it, a dialog pops up right after the crappy Windows out-of-box experience, asking you to subscribe to your laptop’s touchpad software in order to unlock its more advanced features like gestures. But why stop there? The keyboard of that new laptop has RGB backlighting, but if you want to change its settings, you’re going to have to pay for another subscription. Your laptop’s display has additional features and modes for specific types of content and more settings sliders, but you’ll have to pay up to unlock them. And so on. I’m not saying this will happen, but I’m also not saying it won’t. I’m sorry for birthing this idea into the world.
Microsoft has published a post-mortem of the CrowdStrike incident, and goes into great depth to describe where, exactly, the error lies, and how it could lead to such massive problems. I can’t say anything insightful about the technical details and code they show to illustrate all of this – I’ll leave that discussion up to you – but Microsoft also spends a considerable amount of time explaining why security vendors choose to use kernel-mode drivers. Microsoft lists three major reasons why security vendors opt for using kernel modules, and none of them will come as a great surprise to OSNews readers: kernel drivers provide more visibility into the system than a userspace tool would, there are performance benefits, and they’re more resistant to tampering. The downsides are legion, too, of course, as any crash or similar issue in kernel mode has far-reaching consequences. The goal, then, according to Microsoft, is to balance the need for greater insight, performance, and tamper resistance with stability. And while the company doesn’t say it directly, this is clearly where CrowdStrike failed – and failed hard. While you would want a security tool like CrowdStrike to do as little as possible in kernelspace, and conversely as much as possible in userspace, that’s not what CrowdStrike did. They are running a lot of stuff in kernelspace that really shouldn’t be there, such as the update mechanism and related tools. In total, CrowdStrike loads four kernel drivers, and much of their functionality can be run in userspace instead. It is possible today for security tools to balance security and reliability. For example, security vendors can use minimal sensors that run in kernel mode for data collection and enforcement limiting exposure to availability issues. The remainder of the key product functionality includes managing updates, parsing content, and other operations can occur isolated within user mode where recoverability is possible. This demonstrates the best practice of minimizing kernel usage while still maintaining a robust security posture and strong visibility. Windows provides several user mode protection approaches for anti-tampering, like Virtualization-based security (VBS) Enclaves and Protected Processes that vendors can use to protect their key security processes. Windows also provides ETW events and user-mode interfaces like Antimalware Scan Interface for event visibility. These robust mechanisms can be used to reduce the amount of kernel code needed to create a security solution, which balances security and robustness. ↫ David Weston, Vice President, Enterprise and OS Security at Microsoft In what is surely an unprecedented event, I agree with the CrowdStrike criticism bubbling under the surface of this post-mortem by Microsoft. Everything seems to point towards CrowdStrike stuffing way more things into kernelspace than necessary, and as such creating a far larger surface for things to go catastrophically wrong. While Microsoft obviously isn’t going to openly and publicly throw CrowdStrike under the bus, it’s very clear what they’re hinting at here, and this is about as close to a public flogging as we’re going to get.
Microsoft’s post-mortem further details a ton of work Microsoft has recently done, is doing, and will soon be doing to further strengthen Windows’ security, to lessen the need for kernelspace security drivers even more, including adding support for Rust to the Windows kernel, which should also aid in mitigating some common problems present in other, older programming languages (while not being a silver bullet either, of course).
Blue screens of death are not exactly in short supply on Windows machines lately, but what if you really want to cause your own kernel panic or complete system crash, just because you love that shade of crashy blue? Well, there’s a tool for that called NotMyFault, developed by Mark Russinovich as part of Sysinternals. NotMyFault is a tool that you can use to crash, hang, and cause kernel memory leaks on your Windows system. It’s useful for learning how to identify and diagnose device driver and hardware problems, and you can also use it to generate blue screen dump files on misbehaving systems. The download file includes 32-bit and 64-bit versions, as well as a command-line version that works on Nano Server. Chapter 7 in Windows Internals uses NotMyFault to demonstrate pool leak troubleshooting and Chapter 14 uses it for crash analysis examples. ↫ Mark Russinovich Using this tool, you can select exactly what kind of crash you want to cause, and after clicking the Crash button, your Windows computer will do exactly as it’s told and crash with a lovely blue screen of death. It comes in both a GUI and a CLI version, and the latter also works on minimal Windows installations that don’t have the Windows shell installed. A tool like this may seem odd, but it can be particularly useful when you’re trying to troubleshoot an issue, or when learning how to properly diagnose crashes. Or, you know, you can use it to create a panic at your workplace.
Ah, another microkernel-based hobby operating system. The more, the merrier – and I mean this, without a hint of sarcasm. There’s definitely been a small resurgence in activity lately when it comes to small hobby and teaching operating systems, some of which are exploring some truly new ideas, and I’m definitely here for it. Today we have managarm. Some notable properties of managarm are: (i) managarm is based on a microkernel while common Desktop operating systems like Linux and Windows use monolithic kernels, (ii) managarm uses a completely asynchronous API for I/O and (iii) despite those internal differences, managarm provides good compatibility with Linux at the user space level. ↫ managarm GitHub page It’s a 64-bit operating system with SMP support, an ACPI implementation, networking, USB3 support, and, as the quoted blurb details, a lot of support for Linux and POSIX. It can already run Weston, kmscon, and other things like Bash, the GNU Coreutils, and more. While not explicitly mentioned, I assume the best way to run managarm is most likely inside a virtual machine, and there’s a detailed handbook to help you along while building and using this new operating system.
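managarm’s native API is its own thing, of course, but if you want a feel for what a completely asynchronous I/O model means in practice, Linux’s io_uring follows a similar submit/complete pattern and makes for a rough analogue. A minimal sketch, assuming liburing is installed (build with -luring):

```c
/* A taste of fully asynchronous I/O – not managarm's API, but Linux's
 * io_uring, which shares the submit/complete model: queue a read,
 * go do other work, reap the completion later. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    struct io_uring ring;
    char buf[4096];

    if (io_uring_queue_init(8, &ring, 0) < 0)
        return 1;
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0)
        return 1;

    /* Queue an asynchronous read; this call does not block. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf) - 1, 0);
    io_uring_submit(&ring);

    /* ...the program is free to do unrelated work here... */

    /* Reap the completion whenever we are ready. */
    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    if (cqe->res > 0) {
        buf[cqe->res] = '\0';
        printf("read %d bytes: %s", cqe->res, buf);
    }
    io_uring_cqe_seen(&ring, cqe);
    io_uring_queue_exit(&ring);
    return 0;
}
```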
In January of 2021 I was exploring the corpus of Skins I collected for the Winamp Skin Museum and found some that seemed corrupted, so I decided to explore them. Winamp skins are actually just zip files with a different file extension, so I tried extracting their files to see what I could find. This ended up leading me down a series of wild rabbit holes. ↫ Jordan Eldredge I’m not going to spoil any of this.
This blog post is a guide explaining how to setup a full-featured email server on OpenBSD 7.5. It was commissioned by a customer of my consultancy who wanted it to be published on my blog. Setting up a modern email stack that does not appear as a spam platform to the world can be a daunting task, the guide will cover what you need for a secure, functional and low maintenance email system. ↫ Solène Rapenne If you ever wanted to set up and run your own email server, this is a great way to do it. Solène, an OpenBSD developer, will walk you through setting up IMAP, POP, and webmail, an SMTP server with server-to-server encryption and hidden personal information, every possible measure to make sure your server is regarded as legitimate, and all the usual firewall and anti-spam stuff you are definitely going to need. Taking back email from Google – or even Proton, which is now doing both machine learning and Bitcoin, of all things – is probably one of the most daunting tasks for anyone willing to cut ties with as much of big tech as possible. Not only is there the technical barrier, there’s also the fact that the major email providers, like Gmail or whatever Microsoft offers these days, are trying their darndest to make self-hosting email as cumbersome as possible by labelling everything you send as spam or downright malicious. It’s definitely not an easy task, but at least with guides like this, there’s a set of easy steps to follow to get there.
Normally I’m not that interested in reporting on news coming from OpenAI, but today is a little different – the company launched SearchGPT, a search engine that’s supposed to rival Google, but at the same time, they’re also kind of not launching a search engine that’s supposed to rival Google. What? We’re testing SearchGPT, a prototype of new search features designed to combine the strength of our AI models with information from the web to give you fast and timely answers with clear and relevant sources. We’re launching to a small group of users and publishers to get feedback. While this prototype is temporary, we plan to integrate the best of these features directly into ChatGPT in the future. If you’re interested in trying the prototype, sign up for the waitlist. ↫ OpenAI website Basically, before adding a more traditional web search-like feature set to ChatGPT, the company is first breaking it out into a separate, temporary product that users can test, before parts of it are integrated into OpenAI’s main ChatGPT product. It’s an interesting approach, and with just how stupidly popular and hyped ChatGPT is, I’m sure they won’t have any issues assembling a large enough pool of testers. OpenAI claims SearchGPT will be different from, say, Google or AltaVista, by employing a conversation-style interface with real-time results from the web. Sources for search results will be clearly marked – good – and additional sources will be presented in a sidebar. True to the ChatGPT-style user interface, you can keep “talking” after hitting a result to refine your search further. I may perhaps betray my still relatively modest age, but do people really want to “talk” to a machine to search the web? Any time I’ve ever used one of these chatbot-style user interfaces – including ChatGPT – I find them cumbersome and frustrating, like they’re just adding an obtuse layer between me and the computer, when I’d rather just be instructing the computer directly. Why try and verbally massage a stupid autocomplete into finding a link to an article I remember from a few days ago, instead of just typing in a few quick keywords? I am more than willing to concede I’m just out of touch with what people really want, so maybe this really is the future of search. I hope I can always disable nonsense like this and just throw keywords at the problem.
Simultaneous multithreading (SMT) is a feature that lets a processor handle instructions from two different threads at the same time. But have you ever wondered how this actually works? How does the processor keep track of two threads and manage its resources between them? In this article, we’re going to break it all down. Understanding the nuts and bolts of SMT will help you decide if it’s a good fit for your production servers. Sometimes, SMT can turbocharge your system’s performance, but in other cases, it might actually slow things down. Knowing the details will help you make the best choice. ↫ Abhinav Upadhyay Some light reading for the (almost) weekend.
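As a practical aside: if you want to see how SMT pairs up the logical CPUs on your own Linux machine before digging into the article, the kernel exposes the mapping through sysfs. A quick sketch that just walks the standard topology files:

```c
/* Print which logical CPUs share a physical core, by reading Linux's
 * sysfs topology files. On a machine without SMT, every list contains
 * just a single CPU. */
#include <stdio.h>

int main(void) {
    char path[128], line[64];
    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;  /* no more CPUs */
        if (fgets(line, sizeof(line), f))
            printf("cpu%d shares a core with: %s", cpu, line);
        fclose(f);
    }
    return 0;
}
```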
What started last year as a handful of reports about instability with Intel’s Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability impediment in decades, Intel has been under increasing pressure to figure out the root cause of the issue and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like Intel’s latest saga is about to reach its end, as today the company has announced that they’ve found the cause of the issue, and will be rolling out a microcode fix next month to resolve it. ↫ Ryan Smith at AnandTech It turns out the root cause of the problem is “elevated operating voltages”, caused by a buggy algorithm in Intel’s own microcode. As such, it’s at least fixable through a microcode update, which Intel says it will ship sometime mid-August. AnandTech, my one true source for proper reporting on things like this, is not entirely satisfied, though, as they state microcode is often used to just cover up the real root cause that’s located much deeper inside the processor, and as such, Intel’s explanation doesn’t actually tell us very much at all. Quite coincidentally, Intel also experienced a manufacturing flaw with a small batch of very early Raptor Lake processors. An “oxidation manufacturing flaw” found its way into a small number of early Raptor Lake processors, but the company claims it was caught early and shouldn’t be an issue any more. Of course, for anyone experiencing issues with their expensive Intel processors, this will linger in the back of their minds, too. Not exactly a flawless launch for Intel, but it seems its main competitor, AMD, is also experiencing issues, as the company has delayed the launch of its new Ryzen 9000 chips due to quality issues. I’m not at all qualified to make any relevant statements about this, but with the recent launch of the Snapdragon X Elite and X Plus chips, these issues couldn’t come at a worse time for Intel and AMD.
Choosing an operating system for new technology can be crucial for the success of any project. Years down the road, this decision will continue to inform the speed and efficiency of development. But should you build the infrastructure yourself or rely on a proven system? When faced with this decision, many companies have chosen, and continue to choose, FreeBSD. Few operating systems offer the immediate high performance and security of FreeBSD, areas where new technologies typically struggle. Having a stable and secure development platform reduces upfront costs and development time. The combination of stability, security, and high performance has led to the adoption of FreeBSD in a wide range of applications and industries. This is true for new startups and larger established companies such as Sony, Netflix, and Nintendo. FreeBSD continues to be a dependable ecosystem and an industry-leading platform. ↫ FreeBSD Foundation A FreeBSD marketing document highlighting FreeBSD’s strengths is, of course, hardly a surprise, but considering it’s fighting what you could generously call an uphill battle against the dominance of Linux, it’s still interesting to see what, exactly, FreeBSD highlights as its strengths. It should come as no surprise that its licensing model – the simple BSD license – is mentioned first and foremost, since it’s a less cumbersome license to deal with than something like the GPL. It’s a philosophical debate we won’t be concluding any time soon, but the point still stands. FreeBSD also highlights that it’s apparently quite easy to upstream changes to FreeBSD, making sure that changes benefit everyone who uses FreeBSD. While I can’t vouch for this, it does seem reasonable to assume that it’s easier to deal with the integrated, one-stop shop that is FreeBSD, compared to the hodge-podge of hundreds and thousands of groups whose software together makes up a Linux system. Like I said, this is a marketing document, so do keep that in mind, but I still found it interesting.