Internet Archive

You can use newline characters in URLs

I had no idea, but apparently, you can just use newline characters and tabs in URLs without any issues. Notice how it reports an error if there is a tab or newline character, but continues anyway? The specification says that “a validation error does not mean that the parser terminates”, and it encourages systems to report errors somewhere. Effectively, the error is ignored although it might be logged. Thus our HTML is fine in practice. ↫ Daniel Lemire This reminds me of the “Email is easy” quiz.

“Never buy a .online domain”

I’ve been a .com purist for over two decades of building. Once, I broke that rule and bought a .online TLD for a small project. This is the story of how it went up in flames. ↫ Tony S. An absolute horror story about Google’s dominance over the web, in places nobody really talks about. Scary.

The age-verification trap: verifying users’ ages undermines everyone’s data protection

Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16. In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law. This is the age-verification trap. Strong enforcement of age rules undermines data privacy. ↫ Waydell D. Carvalho The answer to the dangers of social media is not to ban social media use among minors, for a whole variety of reasons. There’s data privacy, as the linked article goes into, but there’s also the fact that for a lot of people, including minors, who live in regressive, backwards environments and/or are victims of abuse, social media is their only support network. Cut them off from social media, and you cut them off from the very people who can save them from further abuse. The problem isn’t social media in and of itself – it’s profit-seeking social media. Companies like Facebook and TikTok spend billions to hyper-optimise and hyper-target vulnerable people, much like how tobacco companies and drug dealers do, to feed and worsen their addiction because keeping people addicted is how they maximise profits. The solution to the dangers of corporate social media is to strictly regulate their behaviour, something we already do with countless dangerous products and services. 
I’m obviously not qualified to come up with specific measures that would need to be taken, but I think we can all agree that whatever corporate social media have been and are doing is dangerous and unethical, and should be stopped.

A brief history of barbed wire fence telephone networks

If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada! ↫ Lori Emerson I had no idea this used to be a thing, but it obviously makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most dividing inventions to communicate, and thus bring people closer together.

Has the Browser Become the Real Operating System Kernel?

For decades, the operating system kernel handled process scheduling, memory isolation, hardware abstraction, and resource allocation. Applications sat neatly on top, consuming services but rarely redefining them. That boundary is no longer as clear. Today’s browsers ship with their own process managers, JIT compilers, sandboxing layers, GPU pipelines, and even virtualized runtimes through WebAssembly. For many users, the browser is the main application environment. The question is no longer rhetorical: are browsers starting to behave like miniature operating systems?

Abstraction Layers and the Rise of WebAssembly

WebAssembly changed the browser from a document renderer into a portable execution environment. It allows near-native code to run inside a sandbox, abstracting away the underlying hardware and much of the host OS. In practice, this means the browser mediates between application logic and CPU, memory, and graphics resources. That mediation is increasingly optimized for specific platforms. Oasis and Safari leverage platform-specific OS hooks to use 60% less RAM than Chrome when running on their native operating systems. Those gains do not come from web standards alone; they depend on tight integration with kernel services, graphics drivers, and memory subsystems. As a result, the browser engine has become a portability layer comparable to what POSIX once offered. Developers target Chromium or WebKit, and the browser translates that intent into system calls, GPU queues, and thread pools. The abstraction is deep enough that many applications no longer need to care which OS sits beneath.

Latency Management in Real-Time Web Applications

Real-time collaboration tools, cloud IDEs, and browser-based games have pushed latency management into the browser core. Task scheduling, priority hints, and background throttling now resemble lightweight kernel schedulers. When dozens of tabs compete for CPU time, the browser arbitrates.
Performance differences show how much this layer matters. Brave browser loads web pages 21% faster, uses 9% less CPU, and consumes 4% less battery on average compared to main competitors due to native ad and tracker blocking. Those savings are essentially policy decisions about resource allocation, implemented above the kernel but below the application. The same infrastructure powers high-demand workloads. Streaming platforms, complex dashboards, and even high-traffic environments such as online gambling platforms depend on predictable frame timing and low input latency inside the browser sandbox. For instance, a player placing a live bet or spinning a slot at online casinos cannot experience input delays or dropped frames without it affecting the interaction itself. The browser must process animations, user clicks, network responses, and security checks almost simultaneously, ensuring results appear instantly and consistently across thousands of simultaneous sessions. This means the browser’s event loop and rendering pipeline function like a specialized runtime scheduler. They coordinate animation frames, WebSocket traffic, and UI updates so that gameplay remains smooth even when the page is performing constant background communication with remote servers. For OS enthusiasts, this is notable. The kernel still schedules threads, but the browser increasingly decides which threads exist, when they wake, and how aggressively they consume resources.

Memory Isolation Differences Between Tabs and Processes

Security concerns accelerated this architectural shift. Chromium’s Site Isolation model assigns separate OS processes to different sites, reducing cross-origin attacks. That approach mirrors traditional multi-process isolation strategies in Unix-like systems. There is a cost. Chrome’s “Site Isolation” feature increases memory usage by an estimated 10–20% to enhance security through dedicated OS processes per website.
The browser chooses stronger isolation boundaries and accepts higher RAM pressure, effectively trading kernel-level efficiency for application-level containment. Tab isolation also obscures responsibility. The kernel sees multiple browser processes, but it is the browser that defines their lifecycles, privileges, and communication channels. Shared memory, IPC mechanisms, and sandbox rules are orchestrated by the engine, not directly by the OS administrator. For developers used to thinking in terms of system daemons and user processes, this inversion is striking. The browser becomes a supervisor, while the kernel enforces boundaries defined elsewhere.

The Diminishing Role of the Underlying Host OS

None of this means the host operating system is irrelevant. Linux still manages cgroups and namespaces. Windows enforces kernel patch protection and virtualization-based security. macOS controls entitlements and code signing. Yet from the application’s perspective, the browser often feels like the real platform. Benchmarks in 2026 highlight how OS-specific optimizations now flow through browser engines rather than standalone apps. Safari’s tight macOS integration and Chrome’s GPU tuning on Windows show that performance differences emerge from how deeply browsers hook into kernel services. The OS provides primitives; the browser assembles the policy. For administrators and hobbyists, this shifts where meaningful experimentation happens. Tuning the scheduler or switching filesystems still matters, but adjusting browser flags, sandbox modes, and rendering backends can produce equally visible results. Today’s browser has not replaced the kernel, yet it increasingly defines how users experience it, acting as a policy engine layered directly atop system calls.

The Dillo appreciation post

About a year ago I mentioned that I had rediscovered the Dillo Web Browser. Unlike some of my other hobbies, endeavours, and interests, my appreciation for Dillo has not wavered. I only have a moment to gush today, so I’ll cut right to it. Dillo has been plugging along nicely (see the Git forge) and adding little features. Features that even I, a guy with a blog, can put to use. Here are a few of my favourites. ↫ Bobby Hiltz If you’re looking for a more minimalist, less distracting browser experience that gives you a ton of interesting UNIXy control, you should really consider giving Dillo a try.

Just the Browser: scripts to remove all the crap from your browser

Are you a normal person and thus sick of all the nonsensical, non-browser stuff browser makers keep adding to your browser, but for whatever reason you don’t want to or cannot switch to one of the forks of your browser of choice? Just the Browser helps you remove AI features, telemetry data reporting, sponsored content, product integrations, and other annoyances from desktop web browsers. The goal is to give you “just the browser” and nothing else, using hidden settings in web browsers intended for companies and other organizations. This project includes configuration files for popular web browsers, documentation for installing and modifying them, and easy installation scripts. Everything is open-source on GitHub. ↫ Just The Browser’s website It comes in the form of scripts for Windows, Linux, or macOS, and can be used for Google Chrome, Microsoft Edge, and Mozilla Firefox. It’s all open source so you can check the scripts for yourself, but there are also manual guides for each browser if you’re not too keen on running an unknown script. The changes won’t be erased by updates, unless the specific settings and configuration flags used are actually removed or altered by the browser makers. That’s all there is to it – a very straightforward tool.
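Under the hood, tools like this lean on the enterprise policy mechanisms browsers already ship for organisations. For Firefox, that means a policies.json file placed in the installation’s distribution directory. An illustrative fragment (these four policy names are real Firefox enterprise policies, but the exact set Just the Browser applies is larger and differs per browser):

```json
{
  "policies": {
    "DisableTelemetry": true,
    "DisableFirefoxStudies": true,
    "DisablePocket": true,
    "NoDefaultBookmarks": true
  }
}
```

Because these are the same knobs IT departments use, they survive browser updates, which is exactly why the project’s changes persist.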

Modern HTML features on text-based web browsers

They’re easily overlooked between all the Chrome and Safari violence, but there are still text-based web browsers, and people still use them. How do they handle the latest HTML features? While CSS is the star of the show when it comes to new features, HTML ain’t stale either. If we put the long-awaited styleable selects and Apple’s take on toggle switches aside, there’s a lot readily available cross-browser. But here’s the thing: Whenever we say cross-browser, we usually look at the big ones, never at text-based browsers. So in this article I wanna shed some light on how they handle the following recent additions. ↫ Matthias Zöchling Text-based web browsers work best with regular HTML, as things like CSS and JavaScript won’t work. Despite the new features highlighted in the article being HTML, however, text-based browsers have a hard time dealing with them, and it’s likely that as more and more modern features get added to HTML, text-based browsers are going to have an increasingly harder time dealing with the web. At least OSNews seems to render decently usable on text-based web browsers, but ideal it is not. I don’t really have the skills to fix any issues on that front, but I can note that I’m working on an extremely basic, HTML-only version of OSNews generated from our RSS feed, hosted on some very unique retro hardware. I can’t guarantee it’ll become available – I’m wary of hosting something from home using unique hardware and outdated software – but if it does, y’all will know about it, of course.

You are not required to close your <p>, <li>, <img>, or <br> tags in HTML

Are you an author writing HTML? Just so we’re clear: Not XHTML. HTML. Without the X. If you are, repeat after me, because apparently this bears repeating (after the title): You are not required to close your <p>, <li>, <img>, or <br> tags in HTML. ↫ Daniel Tan Back when I still had to write OSNews’ stories in plain HTML – yes, that’s what we did for a very long time – I always properly closed my tags. I did so because I thought you had to, but also because I think it looks nicer, adds a ton of clarity, and makes it easier to go back later and make any possible changes or fix errors. It definitely added to the workload, which was especially annoying when dealing with really long, detailed articles, but the end result was worth it. I haven’t had to write in plain HTML for ages now, since OSNews switched to WordPress and thus uses a proper WYSIWYG editor, so I haven’t thought about closing HTML tags in a long time – until I stumbled upon this article. I vaguely remember I would “fix” other people’s HTML in our backend by adding closing tags, and now I feel a little bit silly for doing so since apparently it wasn’t technically necessary at all. Luckily, it’s also not wrong to close your tags, and I stick by my readability arguments. Sometimes it’s easy to forget just how old HTML is, and how mangled it’s become over the years.
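For illustration, here is what the parser’s implied-end-tag rules mean in practice; the snippet is a made-up example, not taken from Tan’s article. Both lists parse to identical DOM trees, because each li is closed automatically when the next one starts:

```html
<!-- Fully closed and unclosed forms produce the same DOM. -->
<ul>
  <li>First item</li>
  <li>Second item</li>
</ul>

<ul>
  <li>First item
  <li>Second item
</ul>

<p>A new paragraph implicitly closes the previous one.
<p>And void elements such as <br> never had closing tags to begin with.
```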

The HTML elements time forgot

We’re all familiar with things like marquee and blink, relics of HTML of the past, but there are far more weird and obscure HTML tags you may not be aware of. Luckily, Declan Chidlow at HTMLHell details a few of them so we can all shake our heads in disbelief. But there are far more obscure tags which are perhaps less visually dazzling but equally or even more interesting. If you’re younger, this might very well be your introduction to them. If you’re older, this still might be an introduction, but also possibly a trip down memory lane or a flashback to the horrors of the first browser war. It depends. ↫ Declan Chidlow at HTMLHell I think my favourite is the dir tag, intended to be used to display lists of files and directories. We’re supposed to use list tags now to achieve the same result, but I do kind of like the idea of having a dedicated tag to indicate files, and perhaps have browsers render these lists in the same way the file manager of the platform it’s running on does. I don’t know if that was possible, but it seems like the logical continuation of a hypothetical dir tag. Anyway, should we implement bgsound on OSNews?

Migrating Dillo away from GitHub

What do you do if you develop a lightweight browser that doesn’t support JavaScript, but you once chose GitHub as the home for your code? You’re now in the unenviable position that your own browser can no longer access your own online source repository because it requires JavaScript, which is both annoying and, well, a little awkward. The solution is, of course, obvious: you move somewhere else. That’s exactly what the Dillo browser did. They set up a small VPS, opted for cgit as the git frontend for its performance and small size, and for the bug tracker, they created a brand new, very simple bug tracker. To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug. All bugs are stored in a git repository and a git hook regenerates the bug pages and the index on each new commit. As it is simply plain text, I can edit the bugs locally and only push them to the remote when I have Internet back, so it works nicely offline. Also, as the output is just a static HTML site, I don’t need to worry about having any vulnerabilities in my code, as it will only run at build time. ↫ Rodrigo Arias Mallo There are more considerations detailed in the article about Dillo’s migration, and it can serve as inspiration for anyone else running a small open source project who wishes to leave GitHub behind. With GitHub continuing to add more and more complexity and “AI” features that separate open source code from its licensing terms, we may see more and more projects giving GitHub the finger.

The Challenges of Building High-Traffic Real-Time Web Platforms Today

Real-time web platforms used to be a niche engineering flex. Now they’re the default expectation. Users want dashboards that update instantly, chats that feel live, sports scores that tick without refresh, and collaborative tools where changes appear the moment someone types. The modern web has effectively trained everyone to expect “now,” and at scale, “now” is expensive. What makes today’s challenge interesting is that the hard parts are no longer limited to raw throughput. The real struggle is building systems that are fast, correct, observable, secure, and cost-controlled, all while operating in a world of flaky networks, mobile clients, and sudden traffic spikes driven by social and news cycles.

Real-time is a product promise, not just a protocol

Teams often treat real-time as a technical decision: WebSockets, Server-Sent Events, long polling, or a third-party pub/sub service. In practice, real-time is a product promise with strict implications. If an app claims “live updates” but misses messages, duplicates events, or lags under load, trust collapses quickly. This is especially visible in high-stakes environments: trading, logistics, gaming, and live entertainment. Even outside of finance, a live platform is judged based on responsiveness and consistency. That means engineering must optimize for perceived speed and correctness, not just raw latency numbers.

Scaling the fan-out problem without going broke

The toughest math in real-time systems is fan-out: one event may need to reach thousands or millions of clients. A single “state change” can become a storm of outbound messages, and the common pains follow from that: duplicated work, broker back-pressure, and bandwidth costs that scale with audience size. Approaches vary: some platforms partition users by region, shard by topic, or push more aggregation to the edge. Others shift from “push everything” to “push deltas” or “push only what a client is actively viewing.” At scale, relevance filtering is not a luxury. It’s survival.
Consistency, ordering, and the ugly truth about distributed systems

Real-time systems are very rarely a single system. They’re a distributed mesh: load balancers, gateways, message brokers, caches, databases, analytics pipelines, and third-party services. Keeping events ordered and consistent across this mesh can be brutal. Teams must answer questions that sound simple until production arrives: can an event be delivered twice? Can it arrive before the event that caused it? Many systems accept “eventual consistency” for performance, but the user still expects things to be logically consistent. A notification arriving before the action it refers to is a small bug that feels like a broken product.

Infrastructure is only half the story: the client is the battlefield

Most of the attention goes to server-side architecture, while clients are usually where real-world experiences fail. The challenges are familiar: mobile networks are unstable, and browsers differ in resource behaviour. A real-time platform that works perfectly on desktop fibre degrades badly on a mid-range phone with intermittent connectivity. This makes replay windows, acknowledgements, and rate limits necessary parts of the protocol design, protecting both the server and the client.

Observability and incident response: real-time breaks loudly

Traditional web apps fail in slower ways. Real-time platforms fail like a fire alarm: connection drops spike, message queues backlog, and every user feels it simultaneously. Teams need strong observability into connection churn, queue depth, and end-to-end delivery latency; without it, incidents become guesswork. Worse, many problems look the same from the outside: “it’s laggy.” The only way to diagnose quickly is to measure everything with precision. It’s not uncommon to see real-time workloads compared to live “always-on” consumer services, including sports and gaming platforms where millions follow events at once.
Even unrelated brands and communities, sometimes labelled in shorthand like casino777, highlight how traffic can surge around timetables and live moments, forcing systems to handle sudden concurrency without warning.

Security and abuse: real-time is a magnet for abusers

Real-time systems increase the attack surface. Persistent connections are attractive for bots and abuse because they are resource-hungry by nature. The hazards follow directly: bots holding connections open at scale, clients flooding or replaying messages, and abuse that is hard to tell apart from legitimate load. Modern platforms must enforce robust authentication, per-user quotas, and behavioural detection. Rate limiting is not optional. Neither is isolating noisy clients and dropping connections aggressively when abuse patterns appear.

Takeaway: the hard part is balancing speed, correctness, and cost

Building high-traffic real-time web platforms today is difficult because every requirement fights another: speed fights correctness, fan-out fights cost, and openness fights abuse control. The teams that succeed treat real-time as a full-stack discipline. They design protocols for unreliable networks, build scalable fan-out with relevance, prove correctness through idempotency and ordering rules, and invest heavily in monitoring and abuse controls. In 2025, “real-time” is not a special feature. It’s an expectation. Meeting it at scale means engineering for the messy, human reality of the web, not the clean diagrams in architecture slides.

Servo GTK: a widget to embed Servo in GTK4

Servo, the Rust-based browsing engine spun off from Mozilla, keeps making progress every month, and this made Ignacio Casal Quinteiro wonder: what if we make a GTK widget so we can test Servo and compare it to WebKitGTK? As part of my job at Amazon I started working on a GTK widget which will allow embedding a Servo Webview inside a GTK application. This was mostly a research project just to understand the current state of Servo and whether it was at a good enough state to migrate from WebkitGTK to it. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. Servo, while it is not yet ready for production – or at least not for what we need in our product – was simple to embed, and I got something running in just a few days. The community is also amazing: I had some problems along the way and they were providing good suggestions to get me unblocked in no time. ↫ Ignacio Casal Quinteiro The code is now out there, and while not yet ready for widespread use, this will make it easier for GTK developers to periodically assess the state of Servo, hopefully some day concluding it can serve as a replacement for WebKitGTK.

China is selling its Great Firewall censorship tools to countries around the world

We’re all aware of the Chinese Great Firewall, the tool the Chinese government uses for mass censorship and for safeguarding and strengthening its totalitarian control over the country and its population. It turns out that through a Chinese shell company called Geedge Networks, China is also selling the Great Firewall to other totalitarian regimes around the world. Thanks to a massive leak of 500 GB of source code, work logs, and internal communication records, we now have more insight into how the Great Firewall works than ever before, leading to in-depth reports like this one from InterSecLab. The findings are chilling, but not surprising. First and foremost, Geedge is selling the Great Firewall to a variety of totalitarian regimes around the world, namely Kazakhstan, Ethiopia, Pakistan, Myanmar, and another unidentified country. These governments can then ask Geedge to make specific changes and ask them to focus on specific capabilities to further enhance the functionality of the Great Firewall, but what it can already do today is bad enough. The suite of products offered by Geedge Networks allow a client government unprecedented access to internet user data and enables governments to use this data to police national and regional networks. 
These capabilities include deep packet inspection for advanced classification, interception, and manipulation of application and user traffic; monitoring the geographic location of mobile subscribers in real time; analyzing aggregated network traffic in specific areas, such as during a protest or event; flagging unusual traffic patterns as suspicious; creating tailored blocking rules to obstruct access to a website or application (such as a VPN (Virtual Private Network) or circumvention tool); throttling traffic to specific services; identifying individual internet users for accessing websites or using circumvention tools or VPNs; assigning individual internet users reputation scores based on their online activities; and infecting users with malware through in-path injection. ↫ The Internet Coup: A Technical Analysis on How a Chinese Company is Exporting The Great Firewall to Autocratic Regimes Internet service providers participate in the implementation of the suite of tools, either freely or by force, and since the tools are platform-agnostic it doesn’t matter which platforms people are using in any given country, making international sanctions effectively useless. It also won’t surprise you that Geedge steals both proprietary and open source code, without regard for licensing terms. Furthermore, China is allowing provinces and regions within its borders to tailor and adapt the Great Firewall to their own local needs, providing a blueprint for how to export the suite of tools to other countries. With quite a few countries sliding ever further towards authoritarianism, I’m sure even places not traditionally thought of as totalitarian are lustfully looking at the Chinese Great Firewall, wishing they had something similar in their own countries.

Even the birthplace of the world wide web wants you to use adblockers

I recently removed all advertising from OSNews, and one of the reasons to do so is that online ads have become a serious avenue for malware and other security problems. Advertising on the web has become such a massive security risk that even the very birthplace of the world wide web, CERN, now strongly advises its staff to use adblockers. If you value your privacy and, also important, if you value the security of your computer, consider installing an ad blocker. While there is a plethora of them out there, the Computer Security Office’s members use, e.g. uBlock origin (Firefox) or Origin Lite (Chrome), AdblockPlus, Ghostery and Privacy Badger of the US-based Electronic Frontier Foundation. They all come in free (as in “free beer”) versions for all major browsers and also offer more sophisticated features if you are willing to pay. Once enabled, and depending on your desired level of protection, they can provide another thorough layer of protection to your device – and subsequently to CERN. ↫ CERN’s Computer Security Office I think it’s high time lawmakers take a long, hard look at the state of online advertising, and consider taking strong measures like banning online tracking and targeted advertising. Even the above-board online advertising industry is built atop dubious practices and borderline criminal behaviour, and things only get worse from there. Malicious actors even manage to infiltrate Google’s own search engine with dangerous ads, and that’s absolutely insane when you think about it. I’ve reached the point where I consider any website with advertising to be disrespectful and putting its visitors at risk, willingly and knowingly. Adblockers are not just a nice-to-have, but an absolute necessity for a pleasant and safe browsing experience, and that should be an indicator that we need to really stop and think what we’re doing here.

In-application browsers: the worst erosion of user choice you haven’t heard of

A long, long time ago, Android treated browser tabs in a very unique way. Individual tabs were seen as ‘applications’, and would appear interspersed with the recent applications list as if they were, indeed, applications. This used to be one of my favourite Android features, as it made websites feel very well integrated into the overall user experience, and gave them a sense of place within your workflows. Eventually, though, Google decided to remove this unique approach, as we can’t have nice things and everything must be bland, boring, and the same, and now finding a website you have open requires going to your browser and finding the correct tab. More approachable to most people, I’d wager, but a reduction in usability, for me. I still mourn this loss. Similarly, we’ve seen a huge increase in the use of in-application browsers, a feature designed to trap users inside applications, instead of letting them freely explore the web the moment they click on a link inside an application. Application developers don’t want you leaving their application, so almost all of them, by default, will now open a webview inside the application when you click on an outbound link. For advertising companies, like Google and Facebook, this has the additional benefit of circumventing any and all privacy protections you may have set up in your browser, since those won’t apply to the webview the application opens. This sucks. I hate in-application browsers with a passion. Decades of internet use have taught me that clicking on a link means I’m opening a website in my browser. That’s what I want, that’s what I expect, and that’s how it should be. In-application webviews entirely break this normal chain of events; not because it improves the user experience, but because it benefits the bottom line of others. It’s also a massive security risk. Worst of all, this switch grants these apps the ability to spy and manipulate third-party websites.
Popular apps like Instagram, Facebook Messenger and Facebook have all been caught injecting JavaScript via their in-app browsers into third party websites. TikTok was running commands that were essentially a keylogger. While we have no proof that this data was used or exfiltrated from the device, the mere presence of JavaScript code collecting this data combined with no plausible explanation is extremely concerning. ↫ Open Web Advocacy Open Web Advocacy has submitted a detailed and expansive report to the European Commission detailing the various issues with these in-application browsers, and suggests a number of remedies to strengthen security, improve privacy, and preserve browser choice. I hope this gets picked up, because in-application browsers are just another way in which we’re losing control over our devices.

Google is killing the open web

Google is managing to achieve what Microsoft couldn’t: killing the open web. The efforts of tech giants to gain control of and enclose the commons for extractive purposes have been clear to anyone who has been following the history of the Internet for at least the last decade, and the adopted strategies are varied in technique as they are in success, from Embrace, Extend, Extinguish (EEE) to monopolization and lock-in. What I want to talk about in this article is the war Google has been waging on XML for over a decade, why it matters that they’ve finally encroached themselves enough to get what they want, and what we can do to fight this. ↫ Oblomov (I can’t discern the author’s preferred name) Google’s quest to destroy the open web – or at the very least, aggressively contain it – is not new, and we’re all aware of it. Since Google makes most of its money from online advertising, what the company really wants is a sanitised, corporate web that is deeply centralised around as few big players as possible. The smaller the number of players that have an actual influence on the web, the better – it’s much easier for Google to find common ground with other megacorps like Apple or Microsoft than with smaller players, open source projects, and individuals who actually care about the openness of the web. One of Google’s primary points of attack is XML and everything related to it, like RSS, XSLT, and so on. If you use RSS, you’re not loading web pages and thus not seeing Google’s ads. If you use XSLT to transform an RSS feed into a browsable website, you’re again not seeing any ads. Effectively, anything that we can use to avoid online advertising is a direct threat to Google’s bottom line, and thus you can be certain Google will try to remove, break, or otherwise cripple it in some way. The most recent example is yet another attempt by Google to kill XSLT. 
XSLT, or Extensible Stylesheet Language Transformations, is a language which allows you to transform any XML document – like an RSS feed – into other formats, like HTML, plaintext, and tons more. Google has been trying to kill XSLT for over a decade, but it’s such an unpopular move that they had to back down the first time they proposed its removal. They’re proposing it again, and the feedback has been just as negative. And we finally get to these days. Just as RSS feeds are making a comeback and users are starting to grow skeptic of the corporate silos, Google makes another run to kill XSLT, this time using the WHATWG as a sock puppet. Particularly of note, the corresponding Chromium issue was created before the WHATWG Github issue. It is thus to no one’s surprise that the overwhelmingly negative reactions to the issue, the detailed explanations about why XSLT is important, how instead of removing it browsers should move to more recent versions of the standard, and even the indications of existing better and more secure libraries to base such new implementations on, every counterpoint to the removal have gone completely ignored. In the end, the WHATWG was forced to close down comments to the Github issue to stop the flood of negative feedback, so that the Googler could move on to the next step: commencing the process of formalizing the dismissal of XSLT. ↫ Oblomov (I can’t discern the author’s preferred name) At this point in time, there really are no more web standards as we idealise them in our heads. It’s effectively just Google, and perhaps Apple, deciding what is a web “standard” and what isn’t, their decisions guided not by what’s best for a healthy and thriving open web, but by what’s best for their respective bottom lines. The reason the web looks and feels like ass now is not because we wanted it to be so, but because Google and the other technology giants made it so. 
Everyone is just playing by their rules because otherwise, you won’t show up in Google Search or your site won’t work properly in mobile Safari. This very detailed article and the recent renewed interest in XSLT – thanks for letting everyone know, Google! – have me wondering if OSNews should use XSLT to create a pretty version of our RSS feed that will render nicely even in browsers without any RSS support. It doesn’t seem too difficult, so I’ll see if I can find someone to figure this out (I lack the skills, obviously). We’ve already removed our ads, and our RSS feed is full-article anyway, so why not have a minimal version of OSNews you could browse to in your browser that’s based on our RSS feed and XSLT?
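For what it’s worth, the basic idea really is small. A minimal sketch of what such a stylesheet could look like, assuming a standard RSS 2.0 feed (the `channel` and `item` element names come from the RSS spec; the HTML layout here is invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- feed.xsl: a minimal XSLT 1.0 stylesheet that renders an RSS 2.0 feed
     as an HTML page. The feed references it with a processing instruction:
     <?xml-stylesheet type="text/xsl" href="feed.xsl"?> -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/rss/channel">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <!-- One article per feed item, in feed order -->
        <xsl:for-each select="item">
          <article>
            <h2><a href="{link}"><xsl:value-of select="title"/></a></h2>
            <xsl:value-of select="description"
                          disable-output-escaping="yes"/>
          </article>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

A browser with XSLT support that loads the feed URL then shows a readable page instead of raw XML; no server-side rendering, no JavaScript, and no ads involved. Which, of course, is rather the point of the preceding story.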

AOL announces it’s ending its dial-up internet service

AOL routinely evaluates its products and services and has decided to discontinue Dial-up Internet. This service will no longer be available in AOL plans. As a result, on September 30, 2025 this service and the associated software, the AOL Dialer software and AOL Shield browser, which are optimized for older operating systems and dial-up internet connections, will be discontinued. ↫ AOL support document I’ve seen a few publications writing derisively about this, surprised dial-up internet is still a thing, but I think that’s misguided and definitely a bit elitist. In a country as large as the United States, there’s bound to be quite a few very remote and isolated places where dial-up might be the best or even only option to get online. On top of that, I’m sure there are people out there who use the internet so sparingly that dial-up may suit their needs just fine. I genuinely hope this move by AOL doesn’t cut a bunch of people off the internet without any recourse, especially if it involves, say, isolated and lonely seniors for whom such changes may be too difficult to handle. Access to the internet is quite crucial in the modern world, and we shouldn’t be ridiculing people just because they don’t have access to super high-speed broadband.

“AWS deleted my 10-year account and all data without warning”

AWS: Not even once. This prominent Ruby developer lost his entire test environment – which, ironically, was pivotal to AWS’ own infrastructure – because of a rogue team within AWS itself that apparently answered to no one and worked hard to cover up a dumb mistake. On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation. This is the story of a catastrophic internal mistake at AWS MENA, a 20-day support nightmare where I couldn’t get a straight answer to “Does my data still exist?”, and what it reveals about trusting cloud providers with your data. ↫ Abdelkader Boudih “Nightmare scenario” doesn’t even begin to describe what happened here.