Monthly Archive: August 2024
Payment security is changing due to blockchain’s growing use in safe online transactions across industries. This technology guarantees transaction security, transparency, and immutability.

Understanding Blockchain Technology
Blockchain is a decentralized ledger that securely logs transactions across numerous computers, making it nearly impossible to tamper with or alter data. Every block in the chain contains verified transactions that are immutable once locked in, guaranteeing transparency and confidence. This approach improves transparency and security by eliminating centralized control. Thanks to blockchain’s transparency, anyone may take part in transaction validation, as each completed transaction is documented on a public ledger for simple tracking. Users can manage their assets with digital wallets, which offer a quick and safe way to conduct online transactions. All things considered, blockchain’s structure protects data integrity, making it a reliable option for safe online operations.

Blockchain’s Impact on the Gaming Industry
Blockchain technology is transforming the gaming sector by increasing online transaction security and transparency. By guaranteeing safe and effective payment processing, blockchain becomes relevant across genres, including US casino gaming platforms, esports, and NFT-based games. It lowers the possibility of fraud and helps guarantee equitable gameplay. By decentralizing transaction records, blockchain creates a tamper-proof system that benefits operators and players alike. Additionally, because they offer quick, private, and affordable options, cryptocurrencies like Ethereum and Bitcoin are quickly gaining traction as payment methods in online gaming. Blockchain is becoming an essential part of the future of online gaming businesses as more of them use it to streamline their operations and establish confidence. This expanding role emphasizes how blockchain might change online transactions in several industries, including gaming.

Enhancing Payment Security with Blockchain
Blockchain uses cryptography to prevent fraud and theft, which greatly improves the security of internet transactions. Because of its immutability, data cannot be changed once it is recorded, preventing unauthorized alterations. Blockchain-enhanced identity verification lowers identity theft in financial transactions, while tokenization further protects data by substituting unique identifiers for sensitive payment details. The seamless conversion between fiat money and cryptocurrencies made possible by blockchain’s integration with conventional banking institutions speeds up safe payment procedures. Biometric technologies, such as fingerprint scanning and facial recognition, are increasingly integrated into mobile wallets to enhance security. Furthermore, dynamic QR codes have increased the security and effectiveness of QR-based payments by providing real-time updates and fraud protection.

Financial Institutions and Blockchain
Financial institutions are increasingly using blockchain to improve payment security and reduce data breaches. Because blockchain technology is decentralized, hackers cannot exploit a single point of weakness. Smart contract implementation speeds up the response to questionable activity and automates fraud detection. Blockchain’s real-time transaction monitoring makes it possible to quickly identify transactions that might be fraudulent.
Central Bank Digital Currencies (CBDCs) may also draw on blockchain’s advantages to increase the effectiveness and security of payments. Because of blockchain’s immutability, data cannot be changed once it is recorded, maintaining the reliability and correctness of transaction logs and assisting anti-money laundering initiatives by enhancing identity verification.

Securing E-commerce Transactions with Blockchain
Because blockchain technology offers a distributed ledger that safeguards online transactions, it is revolutionizing e-commerce. Its unchangeable record-keeping lowers fraud and increases customer confidence. Blockchain simplifies payment processing by enabling safe, direct transfers between parties without the need for middlemen. Tokenization is a crucial component of this security: it replaces critical payment data with a unique code that cannot be reversed to reveal original details like credit card numbers, so intercepted data is useless to an attacker. Blockchain also facilitates blockchain-oriented payment gateways and safe payment methods like stablecoins and cryptocurrencies. These advances increase the general effectiveness of e-commerce payment procedures while shielding customers from fraudulent transactions.

Blockchain in Supply Chain Management
Blockchain technology is revolutionizing supply chain management. It offers a distributed ledger that improves transparency and enables real-time product tracking. By making it easier to authenticate products and confirm their origins, this technology enhances consumer trust and encourages ethical business practices. Pharmaceutical companies stand to gain a great deal from blockchain’s capacity to manage drug recalls and guarantee the legitimacy of medications. Blockchain lowers costs, increases transactional efficiency, and streamlines supply chain procedures by doing away with the need for middlemen. By automating payments between parties, smart contracts improve operations even further. Platforms such as MediLedger demonstrate how smart contracts can automate contract execution and product verification while guaranteeing compliance with intricate rules.

Real-World Applications of Blockchain in Various Industries
Blockchain technology is changing how online transactions are secured in a number of different sectors. Solutions like Medicalchain let users restrict access to their records, improving data security in the healthcare industry and giving individuals control over their personal records. Through programs like Embleema, blockchain is also expediting virtual trials while guaranteeing the safe management of patient data. Companies like Propy are using blockchain in real estate to build safe, transparent systems for tracking property ownership, which lowers fraud. These systems show how blockchain can improve security, streamline procedures, and foster trust across many industries. Because its decentralized, unchangeable ledger guarantees data accuracy and longevity, both the businesses and the customers involved in digital markets or services stand to benefit.

Advanced Security Measures with Blockchain
Preventing security breaches in blockchain systems requires sophisticated security mechanisms. Multi-factor authentication (MFA), which requires several verification methods for access and transaction confirmation, is one of the essential elements.
Asymmetric encryption relies on a matched pair of keys, one public and one private, to increase security, whereas symmetric encryption uses the same key for both encryption and decryption. Payment data is encrypted while in transit, so intercepted information is useless without the decryption key. Every blockchain transaction carries a digital signature, which helps to further prevent manipulation. Furthermore, the consensus mechanism ensures that all network nodes validate transactions before they are added to the ledger, protecting private information and preventing fraud and unauthorized access to online transactions.

Future Trends in Blockchain and Online Payment Security
Blockchain technology advancements portend a promising future for online payment security. Future innovations are expected
Speaking of an operating system for toddlers: Apple is eliminating the option to Control-click to open Mac software that is not correctly signed or notarized in macOS Sequoia. To install apps that Gatekeeper blocks, users will need to open up System Settings and go to the Privacy and Security section to “review security information” before being able to run the software. ↫ Juli Clover at MacRumors On a related note, I’ve got an exclusive photo of the next MacBook Pro.
With macOS Sequoia this fall, using apps that need access to screen recording permissions will become a little bit more tedious. Apple is rolling out a change that will require you to give explicit permission on a weekly basis to these types of apps, and every time you reboot your Mac. If you’ve been using the macOS Sequoia beta this summer in conjunction with a third-party screenshot or screen recording app, you’ve likely been prompted multiple times to continue allowing that app access to your screen. While many speculated this could be a bug, that’s not the case. ↫ Chance Miller Everybody is making comparisons to Windows Vista, but I don’t think that’s fair at all. Windows Vista suffered from an avalanche of permission dialogs because the wider Windows application, driver, and peripheral ecosystem was not at all used to the security boundaries present in Windows NT actually being enforced. Vista was the first consumer-focused version of Windows that started doing this, and after a difficult transition period, the flood of dialogs settled down; for a long time now you can blame Windows for a lot of things, but it’s definitely not throwing up more permission dialogs than, say, an average desktop-focused Linux distribution. In other words, Vista’s UAC dialogs were a desperately necessary evil, an adjustment period the Windows ecosystem simply had to go through, and Windows as a whole is better off for it today. This, however, is different. This is Apple having such a low opinion of its users, and such a deep disregard for basic usability and computer ownership, that it feels entirely okay with bothering its users with weekly – or more, if you tend to reboot – nag dialogs for applications the user has already properly given permission to. I don’t have any real issues with a reminder or permission dialog upon first launching a newly installed screen recording application – or when an existing application gains this functionality in an update – but nagging users weekly is just beyond insanity. More and more it feels like macOS is becoming an operating system for toddlers – or at least, that’s how Apple seems to view its users.
When you’re shopping online, you’ll likely find yourself jumping between multiple tabs to read reviews and research prices. It can be cumbersome doing all that back and forth tab switching, and online comparison is something we hear users want help with. In the next few weeks, starting in the U.S., Chrome will introduce Tab compare, a new feature that presents an AI-generated overview of products from across multiple tabs, all in one place. Imagine you’re looking for a new Bluetooth portable speaker for an upcoming trip, but the product details and reviews are spread across different pages and websites. Soon, Chrome will offer to generate a comparison table by showing a suggestion next to your tabs. By bringing all the essential details — product specs, features, price, ratings — into one tab, you’ll be able to easily compare and make an informed decision without the endless tab switching. ↫ Parisa Tabriz Is this really what people want from their browser, or am I just completely out of touch? I’m not at all convinced the latter isn’t the case, but this just seems like a filler feature. Is this really what all the AI hype is about? Is this kind of nonsense the end game we’re killing the planet even harder for?
It seems to be bootloader season, because we’ve got another one – this time, a research project with very limited application for most people. SentinelBoot is a cryptographically secure bootloader aimed at enhancing boot flow safety of RISC-V through memory-safe principles, predominantly leveraging the Rust programming language with its ownership, borrowing, and lifetime constraints. Additionally, SentinelBoot employs public-key cryptography to verify the integrity of a booted kernel (digital signature), by the use of the RISC-V Vector Cryptography extension, establishing secure boot functionality. SentinelBoot achieves these objectives with a 20.1% hashing overhead (approximately 0.27s additional runtime) when compared to an example U-Boot binary (mainline at time of development), and produces a resulting binary one-tenth the size of an example U-Boot binary with half the memory footprint. ↫ Lawrence Hunter SentinelBoot is a project undertaken at the University of Manchester, and its goal is probably clear from the description: to develop a more secure bootloader for RISC-V devices. An additional element is that they looked specifically at devices that receive updates over-the-air, like smartphones. In addition, scenarios where an attacker has physical access to the device in question were not considered, for obvious reasons – in such cases, the attacker can just replace the bootloader altogether anyway, and no amount of fancy Rust code is going to save you there. The details of the implementation as described in the article are definitely a little bit over my head, but the gist seems to be that the project’s been able to achieve a much more secure boot process without giving up much in performance. This being a research project with an intentionally limited scope does mean it’s not something that’ll immediately benefit all of us, but it’s these kinds of projects that can really push the state of the art and try out the viability of new ideas.
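For those curious what signature-verified boot looks like in Rust terms, here is a minimal, illustrative sketch of the general idea: check a detached signature over the kernel image against a known public key before handing over control. To be clear, this is not SentinelBoot’s actual code – the project uses the RISC-V Vector Cryptography extension, while this sketch simply leans on the ed25519-dalek crate, and the file names and functions are made up for the example.

```rust
// Minimal sketch of signature-checked boot; NOT SentinelBoot's actual code.
// Assumes the ed25519-dalek 2.x crate; kernel image, detached signature, and
// public key are read from placeholder paths purely for illustration.
use ed25519_dalek::{Signature, Verifier, VerifyingKey};
use std::fs;

fn verify_kernel(kernel: &[u8], sig_bytes: &[u8; 64], key_bytes: &[u8; 32]) -> bool {
    let key = match VerifyingKey::from_bytes(key_bytes) {
        Ok(k) => k,
        Err(_) => return false,
    };
    let sig = Signature::from_bytes(sig_bytes);
    // Only boot the kernel if the signature over the whole image checks out.
    key.verify(kernel, &sig).is_ok()
}

fn main() {
    let kernel = fs::read("kernel.img").expect("read kernel image");
    let sig: [u8; 64] = fs::read("kernel.sig")
        .expect("read signature")
        .try_into()
        .expect("signature must be 64 bytes");
    let key: [u8; 32] = fs::read("pubkey.bin")
        .expect("read public key")
        .try_into()
        .expect("public key must be 32 bytes");

    if verify_kernel(&kernel, &sig, &key) {
        println!("signature valid, jumping to kernel");
        // In a real bootloader, the hand-off to the verified kernel happens here.
    } else {
        eprintln!("signature check failed, refusing to boot");
    }
}
```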
As you all know, I continue to use WordStar for DOS 7.0 as my word-processing program. It was last updated in December 1992, and the company that made it has been defunct for decades; the program is abandonware. There was no proper archive of WordStar for DOS 7.0 available online, so I decided to create one. I’ve put weeks of work into this. Included are not only full installs of the program (as well as images of the installation disks), but also plug-and-play solutions for running WordStar for DOS 7.0 under Windows, and also complete full-text-searchable PDF versions of all seven manuals that came with WordStar — over a thousand pages of documentation. ↫ Robert J. Sawyer WordStar for DOS is definitely a bit of a known entity in our circles for still being used by a number of world-famous authors. WordStar 4.0 is still being used by George R. R. Martin – assuming he’s still even working on The Winds of Winter – and there must be some sort of reason as to why it’s still so oddly popular. Thanks to this work by author Robert J. Sawyer, accessing and using version 7 of WordStar for DOS is now easier than ever. One of the reasons Sawyer set out to do this was making sure that if he passes away, the people responsible for his estate and works will have an easy way to access his writings. It’s refreshing to see an author think ahead this far, and it will surely help a ton of other people too, since there’s quite a few documents lingering around using the WordStar format.
That sure is a big news drop for a random Tuesday. A federal judge ruled that Google violated US antitrust law by maintaining a monopoly in the search and advertising markets. “After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly,” according to the court’s ruling, which you can read in full at the bottom of this story. “It has violated Section 2 of the Sherman Act.” ↫ Lauren Feiner at The Verge Among many other things, the judge mentions Google’s own admissions that the company can do pretty much whatever it wants with Google Search and its advertisement business, without having to worry about users opting to go elsewhere or ad buyers leaving the Google platform. Studies from inside Google itself made it very clear that Google could systematically make Search worse without it affecting user and/or usage numbers in any way, shape, or form – because users have nowhere else to realistically go. While the ability to raise prices at will without fear of losing customers is a sure sign of being a monopoly, so is being able to make a product worse without fear of losing customers, the judge argues. Google plans to appeal, obviously, and this ruling has nothing yet to say about potential remedies, so what, exactly, is going to change is as of yet unknown. Potential remedies will be handled during the next phase of the proceedings, with the wildest and most aggressive remedy being a potential break-up of Google, Alphabet, or whatever it’s called today. My sights are definitely set on a break-up – hopefully followed by Apple, Amazon, Facebook, and Microsoft – to create some much-needed breathing room in the technology market, and pave the way for a massive number of newcomers to compete on much fairer terms. Of note is that the judge also put yet another nail in the coffin of Google’s various exclusivity deals, most notably with Apple and, for our interests, with Mozilla. Google pays Apple well over 20 billion dollars a year to be the default search engine on iOS, and it pays about 80% of Mozilla’s revenue to be the default search engine in Firefox. According to the judge, such deals are anticompetitive. Mehta rejected Google’s arguments that its contracts with phone and browser makers like Apple were not exclusionary and therefore shouldn’t qualify it for liability under the Sherman Act. “The prospect of losing tens of billions in guaranteed revenue from Google — which presently come at little to no cost to Apple — disincentivizes Apple from launching its own search engine when it otherwise has built the capacity to do so,” he wrote. ↫ Lauren Feiner at The Verge If the end of these deals becomes part of the package of remedies, it will be a massive financial blow to Apple – 20 billion dollars a year is about 15% of Apple’s total annual operating profits, and I’m also pretty sure those Google billions are counted as part of Tim Cook’s much-vaunted services revenue, so losing it would definitely impact Apple directly where it hurts. Sure, it’s not like it’ll make Apple any less of a dangerous behemoth, but it will definitely have some explaining to do to investors. Much more worrisome, however, is the similar deal Google has with Mozilla. About 80% of Mozilla’s total revenue comes from a search deal with Google, and if that deal were to be dissolved, the consequences for Mozilla, and thus for Firefox, would be absolutely immense.
This is something I’ve been warning about for years now, and the end of this deal would be yet another worry that I’ve voiced repeatedly becoming reality, right after Mozilla becoming an advertising company and making Firefox worse in the name of quick profits. One by one, every single concern I’ve voiced about the future of Firefox is becoming reality. Canonical, Fedora, KDE, GNOME, and many other stakeholders – ignore these developments at your own peril.
After a number of very big security incidents involving Microsoft’s software, the company promised it would take steps to put security at the top of its list of priorities. Today we got another glimpse of the steps it’s taking, since the company is going to take security into account during performance reviews. Kathleen Hogan, Microsoft’s chief people officer, has outlined what the company expects of employees in an internal memo obtained by The Verge. “Everyone at Microsoft will have security as a Core Priority,” says Hogan. “When faced with a tradeoff, the answer is clear and simple: security above all else.” A lack of security focus for Microsoft employees could impact promotions, merit-based salary increases, and bonuses. “Delivering impact for the Security Core Priority will be a key input for managers in determining impact and recommending rewards,” Microsoft is telling employees in an internal Microsoft FAQ on its new policy. ↫ Tom Warren at The Verge Now, I’ve never worked in a corporate environment or something even remotely close to it, but something about this feels off to me. Often, it seems that individual, lower-level employees know all too well they’re cutting corners, but they’re effectively forced to because management expects almost inhuman results from its workers. So, in the case of a technology company like Microsoft, this means workers are pushed to write as much code as possible, or to implement as many features as possible, and the only way to achieve the goals set by management is to take shortcuts – like not caring as much about code quality or security. In other words, I don’t see how Microsoft employees are supposed to make security their top priority, while also still having to achieve any unrealistic goals set by management and other higher-ups. What I’m missing from this memo and associated reporting is Microsoft telling its employees that if unrealistic targets, crunch, low pay, and other factors that contribute to cutting corners get in the way of putting security first, they have the freedom to choose security. If employees are not given such freedom, demanding even more from them without anything in return seems like a recipe for disaster to me, making this whole memo quite moot. We’ll have to see what this will amount to in practice, but with how horribly employees are treated in most industries these days, especially in countries with terrible union coverage and laughable labour protection laws like the US, I don’t have high hopes for this.
CP/M is turning 50 this year. The ancient Control Program for Microcomputers, or CP/M for short, has been enjoying a modest renaissance in recent years. By 21st century standards, it’s unimaginably tiny and simple. The whole OS fits into under 200 kB, and the resident bit of the kernel is only about 3 kB. Today, in the era of end-user OSes in the tens-of-gigabytes size range, this exerts a fascination to a certain kind of hobbyist. Back when it was new, though, this wasn’t minimalist – it was all that early hardware could support. ↫ Liam Proven I’m a little too young to have experienced CP/M as anything other than a retro platform – I’m from 1984, and we got our first computer in 1990 or so – but its importance and influence cannot be overstated. Many of the conventions set by CP/M made their way to the various DOS variants, and in turn, we still see some of those conventions in Windows today. Had Digital Research, the company CP/M creator Gary Kildall set up to sell CP/M, accepted the deal with IBM to make CP/M the default operating system for the then newly-created IBM PC, we’d be living in a very different world today. Digital Research would also create several other popular and/or influential software products beyond CP/M, such as DR DOS and GEM, as well as various other DOS variants and CP/M versions with DOS compatibility. It would eventually be acquired by Novell, where it faded into obscurity.
Not too long ago I linked to a blog post by long-time OSNews reader (and silver Patreon) and friend of mine Morgan, about how to set up OpenBSD as a workstation operating system – and in fact, I personally used that guide in my own OpenBSD journey. Well, Morgan’s back with another, similar article, this time covering FreeBSD. After going through the basic steps needed to make FreeBSD a bit more amenable to desktop use, Morgan notes about performance: Now let’s compare FreeBSD. Well, quite frankly, there is no comparison! FreeBSD just feels snappier and more responsive on the desktop; at the same 170Hz refresh it actually feels like 170Hz. Void Linux always felt fast enough and I thought it had no lag at all at that refresh rate, but comparing them side by side (FreeBSD installed on the NVMe drive, Void running from a USB 4 SSD with similar performance), FreeBSD is smooth as glass and I started noticing just the slightest lag/stutter on Void. The same holds true for Firefox; I use smooth scrolling and on FreeBSD it really is perfectly smooth. Similarly, Youtube performance is unreal, with no dropped frames at any resolution all the way up to 4Kp60, and the videos look so much smoother! ↫ Morgan/kaidenshi This is especially relevant for me personally, since the prime reason I switched my workstation back to Fedora KDE was OpenBSD’s performance issues. While those performance issues were entirely expected and the result of the operating system’s focus on security and hardening, it did mean it’s just not suitable for me as a workstation operating system, even if I like the internals and find it a joy to use, even under the hood. If FreeBSD delivers more solid desktop and workstation performance, it might be time I set up a FreeBSD KDE installation and see if it can handle my workstation’s 270Hz 4K display. As I keep reiterating – the BSD world has a lot to offer those wishing to run a UNIX-like workstation operating system, and it’s articles like these that help people get started. A lot of the steps taken may seem elementary to many of us, but for people coming from Linux or even Windows, they may be unfamiliar and daunting, so having it all laid out in a straightforward manner is quite helpful.
As uBlock Origin lead developer and maintainer Raymond Hill explained on Friday, this is the result of Google deprecating support for the Manifest v2 (MV2) extensions platform in favor of Manifest v3 (MV3). “uBO is a Manifest v2 extension, hence the warning in your Google Chrome browser. There is no Manifest v3 version of uBO, hence the browser will suggest alternative extensions as a replacement for uBO,” Hill explained. ↫ Sergiu Gatlan at Bleeping Computer If you’re still using Chrome, or any of the Chrome skins that have not committed to keeping Manifest v2 extensions enabled, it’s really high time to start thinking about jumping ship if ad blocking matters to you. Of course, we don’t know for how long Firefox will remain able to properly block ads either, but for now, it’s obviously the better choice for those of us who care about a better browsing experience. And just to reiterate: I fully support anyone’s right to block ads, even on OSNews. Your computer, your rules. There are a variety of other, better means to support OSNews – our Patreon, individual donations through Ko-Fi, or buying our merch – that are far better for us than ads will ever be.
Limine is an advanced, portable, multiprotocol bootloader that supports Linux, multiboot1 and 2, the native Limine boot protocol, and more. Limine is lightweight, elegant, fast, and the reference implementation of the Limine boot protocol. The Limine boot protocol’s main target audience is operating system and kernel developers that want to use a boot protocol which supports modern features in an elegant manner, that GRUB’s aging multiboot protocols do not (or do not properly). ↫ Limine website I wish trying out different bootloaders was an easier thing to do. Personally, since my systems only run Fedora Linux, I’d love to just move them all over to systemd-boot and not deal with GRUB at all anymore, but since it’s not supported by Fedora I’m worried updates might break the boot process at some point. On systems where only one operating system is installed, as a user I should really be given the choice to opt for the simplest, most basic boot sequence, even if it can’t boot any other operating systems or if it’s more limited than GRUB.
Following our recent work with Ubuntu 24.04 LTS where we enabled frame pointers by default to improve debugging and profiling, we’re continuing our performance engineering efforts by evaluating the impact of O3 optimization in Ubuntu. O3 is a GCC optimization level that applies more aggressive code transformations compared to the default O2 level. These include advanced function inlining and the use of sophisticated algorithms aimed at enhancing execution speed. While O3 can increase binary size and compilation time, it has the potential to improve runtime performance. ↫ Ubuntu Discourse If these optimisations deliver performance improvements, and the only downside is larger binaries and longer compilation times, it seems like a bit of a no-brainer to enable these, assuming those mentioned downsides are within reason. Are there any downsides they’re not mentioning? Browsing around and doing some minor research, it seems that -O3 optimisations may break some packages, and can even lead to performance degradation, defeating the purpose altogether. Looking at a set of benchmarks from Phoronix from a few years ago, in which the Linux kernel was compiled with either O2 or O3 and their performance compared, the results were effectively tied, making it seem not worth it at all. However, during these benchmarks, only the kernel was tested; everything else was compiled normally in both cases. Perhaps compiling the entire system with O3 will yield improvements in other parts of the system that do add up. For now, you can download unsupported Ubuntu ISOs compiled with O3 optimisations enabled to test them out.
Another month, another chunk of progress for the Servo rendering engine. The biggest addition is enabling table rendering to be spread across CPU cores. Parallel table layout is now enabled, spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo’s layout engine. ↫ Servo blog On top of this, there are tons of improvements to the flexbox layout engine, support for generic font families like ‘sans-serif’ and ‘monospace’ has been added, and Servo now supports OpenHarmony, the operating system developed by Huawei. This month also saw a lot of work on the development tools.
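To get a feel for the pattern involved, this is roughly what “opportunistic parallelism” with Rayon looks like: a set of independent per-row layout jobs handed to a work-stealing thread pool. The TableRow type and measure_row function below are invented for the sake of the example – only the Rayon calls are real – so treat it as a sketch of the idea rather than anything resembling Servo’s actual layout code.

```rust
// Illustrative only: made-up "row layout" work spread across CPU cores with
// Rayon, in the spirit of Servo's parallel table layout (not Servo's code).
use rayon::prelude::*;

struct TableRow {
    cell_widths: Vec<f32>, // hypothetical input: preferred width of each cell
}

struct RowLayout {
    height: f32,
    total_width: f32,
}

// Pretend per-row layout work that is independent of every other row.
fn measure_row(row: &TableRow) -> RowLayout {
    RowLayout {
        height: 24.0,
        total_width: row.cell_widths.iter().sum(),
    }
}

fn layout_rows(rows: &[TableRow]) -> Vec<RowLayout> {
    // par_iter() lets Rayon split the rows over all available cores;
    // the results still come back in input order.
    rows.par_iter().map(measure_row).collect()
}

fn main() {
    let rows: Vec<TableRow> = (0..1000)
        .map(|i| TableRow { cell_widths: vec![10.0 + i as f32, 20.0, 30.0] })
        .collect();
    let layouts = layout_rows(&rows);
    println!("laid out {} rows", layouts.len());
}
```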
Most application on GNU/Linux by convention delegate to xdg-open when they need to open a file or a URL. This ensures consistent behavior between applications and desktop environments: URLs are always opened in our preferred browser, images are always opened in the same preferred viewer. However, there are situations when this consistent behavior is not desired: for example, if we need to override default browser just for one application and only temporarily. This is where xdg-override helps: it replaces xdg-open with itself to alter the behavior without changing system settings. ↫ xdg-override GitHub page I’ve loved this project ever since I came across it a few days ago. Not because I need it – I really don’t – but because of the story behind its creation. The author of the tool, Dmytro Kostiuchenko, wanted Slack, which he only uses for work, to only open his work browser – which is a different browser from his default browser. For example, imagine you normally use Firefox for everything, but for all your work-related things, you use Chrome. So, when you open a link sent to you in Slack by a colleague, you want that specific link to open in Chrome. Well, this is not easily achieved in Linux. Applications on Linux tend to use freedesktop.org’s xdg-open for this, which looks at the file mimeapps.list to learn which application opens which file type or URL. To solve Kostiuchenko’s issue, changing the variable $XDG_CONFIG_HOME just for Slack to point xdg-open to a different configuration file doesn’t work, because the setting will be inherited by everything else spawned from Slack itself. Changing mimeapps.list doesn’t work either, of course, since that would affect all other applications, too. So, what’s the actual solution? We’d like also not to change xdg-open implementation globally in our system: ideally, the change should only affect Slack, not all other apps. But foremost, diverging from upstream is very unpractical. However, in the spirit of this solution, we can introduce a proxy implementation of xdg-open, which we’ll “inject” into Slack by adding it to PATH. ↫ Dmytro Kostiuchenko xdg-override takes this idea and runs with it: It is based on the idea described above, but the script won’t generate proxy implementation. Instead, xdg-override will copy itself to /tmp/xdg-override-$USER/xdg-open and will set a few $XDG_OVERRIDE_* variables and the $PATH. When xdg-override is invoked from this new location as xdg-open, it’ll operate in a different mode, parsing $XDG_OVERRIDE_MATCH and dispatching the call appropriately. I tested this script briefly, but automated tests are missing, so expect some rough edges and bugs. ↫ Dmytro Kostiuchenko I don’t fully understand how it works, but I get the overall gist of what it’s doing. I think it’s quite clever, and solves a very specific issue in a non-destructive way. While it’s not something most people will ever need, it feels like something that if you do need it, it will quickly become a default part of your toolbox or workflow.
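The core trick is easier to see in stripped-down form: put a fake xdg-open in a directory of your own, prepend that directory to PATH, and launch only the application you want to affect, so that it (and whatever it spawns) picks up the shim while the rest of the system is untouched. The sketch below illustrates just that launch step in Rust; the shim directory and the slack binary name are placeholders, and none of this is xdg-override’s actual implementation, which is a shell script.

```rust
// Illustration of the PATH-shim idea behind xdg-override, not its real code.
// Assumes a fake xdg-open already exists in the shim directory and that
// "slack" is on the regular PATH; both are stand-ins for the example.
use std::env;
use std::process::Command;

fn main() {
    let shim_dir = "/tmp/xdg-override-demo"; // directory holding our fake xdg-open
    let old_path = env::var("PATH").unwrap_or_default();

    // Prepend the shim directory so Slack (and anything it spawns) resolves
    // xdg-open to our proxy first, while no global setting is changed.
    let new_path = format!("{shim_dir}:{old_path}");

    let status = Command::new("slack")
        .env("PATH", new_path)
        .status()
        .expect("failed to launch slack");
    println!("slack exited with {status}");
}
```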
Today, every Unix-like system can trace their ancestry back to the original Unix. That includes Linux, which uses the GNU tools – and the GNU tools are based on the Unix tools. Linux in 2024 is removed from the original Unix design, and for good reason – Linux supports architectures and tools not dreamt of during the original Unix era. But the core command line experience in Linux is still very similar to the Unix command line of the 1970s. The next time you use ls to list the files in a directory, remember that you’re using a command line that’s been with us for more than fifty years. ↫ Jim Hall An excellent overview of some of the more ancient UNIX commands that are still with us today. One thing I always appreciate when I dive into an operating system closer to “real” UNIX, like OpenBSD, or an actual UNIX, like HP-UX, is just how much more logical sense they make under the hood than a Linux system does. This is not a dunk on modern Linux – it has to cater to endless more modern needs than something ancient and dead like HP-UX – but what I learn while using these systems closer to real UNIX has made me appreciate proper UNIX more than I used to in the past. In what surely sounds like utter lunacy to system administrators who actually had to seriously administer HP-UX systems back in the day, I genuinely love using HP-UX, setting it up, configuring it, messing around with it, because it just makes so much more logical sense than the systems we use today. The knowledge gained from using BSD, HP-UX, and others, while not always directly applicable to Linux, does aid me in understanding certain Linux things better than I did before. What I’m trying to say is – go and load up an old UNIX, or at least a modern BSD. Aside from being great operating systems in their own right, they’re much easier to grasp than a modern Linux system, and you’ll learn a lot from the experience.
Android 14 introduced the ability for application stores to claim ownership over application updates, to ensure other installation sources won’t accidentally update applications they shouldn’t. What is still lacking, however, is a way for users to easily change the update ownership for applications. In other words, say you install an application by downloading an APK from GitHub, and the application later makes its way to F-Droid: you’ll get warning popups when F-Droid tries to update that application. That’s about to change, it seems, as Android Authority discovered that the Play Store application seems to be getting a new feature where it can take ownership of an application’s updates. A new flag spotted in the latest Google Play Store release suggests that users may see the option to install updates for apps downloaded from a different source. As you can see in the attached screenshots, the Play Store will show available updates for apps downloaded from different sources. On the app listing, you’ll also see a new “Update from Play” button that will switch the update ownership from the original source to the Play Store. ↫ Pranob Mehrotra at Android Authority Assuming this functionality is just an API other application stores can also tap into, this will be a great addition to Android for power users who use multiple application stores and want to properly manage which store updates which applications. It’s not something most people will ever really use or need, but if you’re the kind of person who does need it – it’ll become indispensable.
This is my second book written with Sphinx, after the new Learn TLA+. Sphinx uses a peculiar markup called reStructured Text (rST), which has a steeper learning curve than markdown. I only switched to it after writing a couple of books in markdown and deciding I needed something better. So I want to talk about why rst was that something. ↫ Hillel Wayne I’ve never liked Markdown – I find it quite arbitrary and unpleasant to look at, and the fact there’s countless variants that all differ a tiny bit doesn’t help – so even though I don’t actually use Markdown for anything, I always have a passing interest in possible alternatives, if only to see what other, different, and unique ideas are out there when it comes to relatively simple markup languages. Now, I’m quite sure reStructured Text isn’t for me either, since I feel like it’s far more powerful than Markdown, and serves a different, more complex purpose. That being said, I figured I’d highlight it here since it seems it may be interesting to some of you who work on documentation for your software projects or similar endeavours.
Serpent OS, a new Linux distribution with a completely custom package management system written in Rust, has released its very very rough pre-alpha release. They’ve been working on this for four years, and they’re making some interesting choices regarding packaging that I really like, at least on paper. This will of course appear to be a very rough (crap) prealpha ISO. Underneath the surface it is using the moss package manager, our very own package management solution written in Rust. Quite simply, every single transaction in moss generates a new filesystem tree (/usr) in a staging area as a full, stateless, OS transaction. When the package installs succeed, any transaction triggers are run in a private namespace (container) before finally activating the new /usr tree. Through our enforced stateless design, usr-merge, etc, we can atomically update the running OS with a single renameat2 call. As a neat aside, all OS content is deduplicated, meaning your last system transaction is still available on disk allowing offline rollbacks. ↫ Ikey Doherty Since this is only a very rough pre-alpha release, I don’t have much more to say at this point, but I do think it’s interesting enough to let y’all know about it. Even if you’re not the kind of person to dive into pre-alphas, I think you should keep an eye on Serpent OS, because I have a feeling they’re on to something valuable here.
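To illustrate what “a single renameat2 call” buys you, here is a minimal sketch of atomically swapping a freshly staged /usr tree with the live one using RENAME_EXCHANGE, via the libc crate. This is not moss’s actual code – the staging path is a placeholder and all the surrounding bookkeeping is omitted – but it shows why the running system never sees a half-updated tree: the two directory entries trade places in one atomic operation.

```rust
// Sketch of an atomic /usr swap with renameat2(RENAME_EXCHANGE), in the spirit
// of moss's stateless update model. Not moss's actual code; Linux-only, the
// staging path is a placeholder, and it requires the libc crate.
use std::ffi::CString;
use std::io;

fn exchange(live: &str, staged: &str) -> io::Result<()> {
    let live_c = CString::new(live).unwrap();
    let staged_c = CString::new(staged).unwrap();
    // RENAME_EXCHANGE swaps the two directory entries atomically: at no point
    // does either path stop existing, so the running system never observes a
    // half-applied update.
    let rc = unsafe {
        libc::renameat2(
            libc::AT_FDCWD,
            live_c.as_ptr(),
            libc::AT_FDCWD,
            staged_c.as_ptr(),
            libc::RENAME_EXCHANGE,
        )
    };
    if rc == 0 { Ok(()) } else { Err(io::Error::last_os_error()) }
}

fn main() -> io::Result<()> {
    // After this call the staged tree is live, and the previous tree sits in
    // the staging directory, where it can be kept for offline rollbacks.
    exchange("/usr", "/.moss/staging/usr")?;
    Ok(())
}
```

Because the old tree ends up back in the staging area, keeping it around for offline rollbacks comes essentially for free, which is exactly the property the Serpent OS announcement is highlighting.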
Yesterday I highlighted a study that found that AI and ML, and the expectations around them, are actually causing people to need to work harder and more, instead of less. Today, I have another study for you, this time focusing on a more long-term issue: when you use something like ChatGPT to troubleshoot and fix a bug, are you actually learning anything? A professor at MIT divided a group of students into three, and gave them a programming task in a language they did not know (FORTRAN). One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components. Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade. Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed. ↫ Esther Shein at ACM I find this an interesting result, but at the same time, not a very surprising one. It reminds me a lot of when I went to high school: I was part of the first generation whose math and algebra courses were built around using a graphing calculator. Despite being able to solve and graph complex equations with ease thanks to our TI-83, we were, of course, still told to include our “work”, the steps taken to get from the question to the answer, instead of only writing down the answer itself. Since I was quite good “at computers”, and even managed to do some very limited programming on the TI-83, it was an absolute breeze for me to hit some buttons and get the right answers – but since I knew, and know, absolutely nothing about math, I couldn’t for the life of me explain how I got to the answers. Using ChatGPT to fix your programming problem feels like a very similar thing. Sure, ChatGPT can spit out a workable solution for you, but since you aren’t aware of the steps between problem and solution, you aren’t actually learning anything. By using ChatGPT, you’re not actually learning how to program or how to improve your skills – you’re just hitting the right buttons on a graphing calculator and writing down what’s on the screen, without understanding why or how. I can totally see how using ChatGPT for boring boilerplate code you’ve written a million times over, or to point you in the right direction while still coming up with your own solution to a problem, can be a good and helpful thing. I’m just worried about a degradation in skill level and code quality, and how society will, at some point, pay the price for that.