Dillo is an amazing web browser for those of us who want their web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and its latest release, 3.3.0, underlines that. A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set. ↫ Dillo 3.3.0 release notes You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page’s contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I’m sure some of you who live and die in the terminal are already thinking of all the possibilities here. You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and a fix specifically to make OAuth work properly.
Have you ever tried clicking the back button in your browser, only to realise the website you’re on somehow doesn’t allow that? Out of all the millions of annoyances on the web, Google has decided to finally address this one: they’re going to punish the search rankings of websites that engage in back button hijacking. Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site’s performance in Google Search results. To give site owners time to make any needed changes, we’re publishing this policy two months in advance of enforcement on June 15, 2026. ↫ Google Search Central It’s always uncomfortable when Google unilaterally takes actions such as these, since rarely do Google’s interests align with our own as users. This is one such rare case, though, and I can’t wait to see this insipid practice relegated to the dustbin of history.
I don’t like to cover “current events” very much, but the American government just revealed a truly bewildering policy effectively banning the import of new consumer router models. This is ridiculous for many reasons, but if this does indeed come to pass it may be beneficial to learn how to “homebrew” a router. Fortunately, you can make a router out of basically anything resembling a computer. ↫ Noah Bailey I genuinely can’t believe making your own router with Linux or BSD might become a much more widespread thing in the US. I’m not saying it’s a bad thing – it’ll teach some people something new – but it just feels so absurd.
I had no idea, but apparently, you can just use newline characters and tabs in URLs without any issues. Notice how it reports an error if there is a tab or newline character, but continues anyway? The specification says that A validation error does not mean that the parser terminates and it encourages systems to report errors somewhere. Effectively, the error is ignored although it might be logged. Thus our HTML is fine in practice. ↫ Daniel Lemire This reminds me of the “Email is easy” quiz.
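Lemire’s observation is easy to reproduce with Python’s standard library, which in recent CPython releases (3.9.5 and later) mirrors the WHATWG behaviour of silently stripping ASCII tabs and newlines before parsing. A minimal sketch, using a made-up example URL:

```python
from urllib.parse import urlsplit

# WHATWG-style URL parsers remove ASCII tab and newline characters
# before parsing, so a URL littered with them resolves identically
# to its clean counterpart. The "validation error" is simply ignored.
messy = "https://exa\nmple.com/some\t/path"
clean = "https://example.com/some/path"

assert urlsplit(messy) == urlsplit(clean)
print(urlsplit(messy).hostname)  # example.com
```

The same stripping is mandated for the `href` of an `<a>` element in the browser, which is why such URLs work “fine in practice” even though they are technically validation errors.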
I’ve been a .com purist for over two decades of building. Once, I broke that rule and bought a .online TLD for a small project. This is the story of how it went up in flames. ↫ Tony S. An absolute horror story about Google’s dominance over the web, in places nobody really talks about. Scary.
Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16. In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law. This is the age-verification trap. Strong enforcement of age rules undermines data privacy. ↫ Waydell D. Carvalho The answer to the dangers of social media is not to ban social media use among minors, for a whole variety of reasons. There’s data privacy, as the linked article goes into, but there’s also the fact that for a lot of people, including minors, who live in regressive, backwards environments and/or are victims of abuse, social media is their only support network. Cut them off from social media, and you cut them off from the very people who can save them from further abuse. The problem isn’t social media in and of itself – it’s profit-seeking social media. Companies like Facebook and TikTok spend billions to hyper-optimise and hyper-target vulnerable people, much like how tobacco companies and drug dealers do, to feed and worsen their addiction because keeping people addicted is how they maximise profits. The solution to the dangers of corporate social media is to strictly regulate their behaviour, something we already do with countless dangerous products and services. 
I’m obviously not qualified to come up with specific measures that would need to be taken, but I think we can all agree that whatever corporate social media have been and are doing is dangerous and unethical, and should be stopped.
If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada! ↫ Lori Emerson I had no idea this used to be a thing, but it obviously makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most dividing inventions to communicate, and thus bring people closer together.
For decades, the operating system kernel handled process scheduling, memory isolation, hardware abstraction, and resource allocation. Applications sat neatly on top, consuming services but rarely redefining them. That boundary is no longer as clear. Today’s browsers ship with their own process managers, JIT compilers, sandboxing layers, GPU pipelines, and even virtualized runtimes through WebAssembly. For many users, the browser is the main application environment. The question is no longer rhetorical: are browsers starting to behave like miniature operating systems?

Abstraction Layers And The Rise Of WebAssembly

WebAssembly changed the browser from a document renderer into a portable execution environment. It allows near-native code to run inside a sandbox, abstracting away the underlying hardware and much of the host OS. In practice, this means the browser mediates between application logic and CPU, memory, and graphics resources.

That mediation is increasingly optimized for specific platforms. Oasis and Safari leverage platform-specific OS hooks to use 60% less RAM than Chrome when running on their native operating systems. Those gains do not come from web standards alone; they depend on tight integration with kernel services, graphics drivers, and memory subsystems.

As a result, the browser engine has become a portability layer comparable to what POSIX once offered. Developers target Chromium or WebKit, and the browser translates that intent into system calls, GPU queues, and thread pools. The abstraction is deep enough that many applications no longer need to care which OS sits beneath.

Latency Management In Real-Time Web Applications

Real-time collaboration tools, cloud IDEs, and browser-based games have pushed latency management into the browser core. Task scheduling, priority hints, and background throttling now resemble lightweight kernel schedulers. When dozens of tabs compete for CPU time, the browser arbitrates.
Performance differences show how much this layer matters. Brave loads web pages 21% faster, uses 9% less CPU, and consumes 4% less battery on average than its main competitors due to native ad and tracker blocking. Those savings are essentially policy decisions about resource allocation, implemented above the kernel but below the application.

The same infrastructure powers high-demand workloads. Streaming platforms, complex dashboards, and even high-traffic environments such as online gambling platforms depend on predictable frame timing and low input latency inside the browser sandbox. For instance, a player placing a live bet or spinning a slot at online casinos cannot experience input delays or dropped frames without it affecting the interaction itself. The browser must process animations, user clicks, network responses, and security checks almost simultaneously, ensuring results appear instantly and consistently across thousands of simultaneous sessions.

This means the browser’s event loop and rendering pipeline function like a specialized runtime scheduler. They coordinate animation frames, WebSocket traffic, and UI updates so that gameplay remains smooth even when the page is performing constant background communication with remote servers. For OS enthusiasts, this is notable. The kernel still schedules threads, but the browser increasingly decides which threads exist, when they wake, and how aggressively they consume resources.

Memory Isolation Differences Between Tabs And Processes

Security concerns accelerated this architectural shift. Chromium’s Site Isolation model assigns separate OS processes to different sites, reducing cross-origin attacks. That approach mirrors traditional multi-process isolation strategies in Unix-like systems. There is a cost: Chrome’s Site Isolation feature increases memory usage by an estimated 10–20% to enhance security through dedicated OS processes per website.
The browser chooses stronger isolation boundaries and accepts higher RAM pressure, effectively trading kernel-level efficiency for application-level containment. Tab isolation also obscures responsibility. The kernel sees multiple browser processes, but it is the browser that defines their lifecycles, privileges, and communication channels. Shared memory, IPC mechanisms, and sandbox rules are orchestrated by the engine, not directly by the OS administrator.

For developers used to thinking in terms of system daemons and user processes, this inversion is striking. The browser becomes a supervisor, while the kernel enforces boundaries defined elsewhere.

The Diminishing Role Of The Underlying Host OS

None of this means the host operating system is irrelevant. Linux still manages cgroups and namespaces. Windows enforces kernel patch protection and virtualization-based security. macOS controls entitlements and code signing. Yet from the application’s perspective, the browser often feels like the real platform. Benchmarks in 2026 highlight how OS-specific optimizations now flow through browser engines rather than standalone apps. Safari’s tight macOS integration and Chrome’s GPU tuning on Windows show that performance differences emerge from how deeply browsers hook into kernel services. The OS provides primitives; the browser assembles the policy.

For administrators and hobbyists, this shifts where meaningful experimentation happens. Tuning the scheduler or switching filesystems still matters, but adjusting browser flags, sandbox modes, and rendering backends can produce equally visible results. Today’s browser has not replaced the kernel, yet it increasingly defines how users experience it, acting as a policy engine layered directly atop system calls.
About a year ago I mentioned that I had rediscovered the Dillo Web Browser. Unlike some of my other hobbies, endeavours, and interests, my appreciation for Dillo has not wavered. I only have a moment to gush today, so I’ll cut right to it. Dillo has been plugging along nicely (see the Git forge) and adding little features. Features that even I, a guy with a blog, can put to use. Here are a few of my favourites. ↫ Bobby Hiltz If you’re looking for a more minimalist, less distracting browser experience that gives you a ton of interesting UNIXy control, you should really consider giving Dillo a try.
If only it were that simple – cue the rollercoaster ride. What an absolutely garish state of affairs lies behind this simple radio button on a website. I’m also well aware OSNews has a certain amount of complexity it might not need, and while I can’t fix that, I am at least working on a potential solution.
Are you a normal person and thus sick of all the nonsensical, non-browser stuff browser makers keep adding to your browser, but for whatever reason you don’t want to or cannot switch to one of the forks of your browser of choice? Just the Browser helps you remove AI features, telemetry data reporting, sponsored content, product integrations, and other annoyances from desktop web browsers. The goal is to give you “just the browser” and nothing else, using hidden settings in web browsers intended for companies and other organizations. This project includes configuration files for popular web browsers, documentation for installing and modifying them, and easy installation scripts. Everything is open-source on GitHub. ↫ Just The Browser’s website It comes in the form of scripts for Windows, Linux, or macOS, and can be used for Google Chrome, Microsoft Edge, and Mozilla Firefox. It’s all open source so you can check the scripts for yourself, but there are also manual guides for each browser if you’re not too keen on running an unknown script. The changes won’t be erased by updates, unless the specific settings and configuration flags used are actually removed or altered by the browser makers. That’s all there is to it – a very straightforward tool.
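To sketch the mechanism such tools rely on: Firefox, for instance, reads enterprise policies from a policies.json file shipped alongside the browser. The snippet below is a hypothetical illustration of generating such a file, not Just the Browser’s actual script, and the three policy keys shown are only a small subset of what Mozilla’s policy templates support.

```python
import json

# A small illustrative subset of Firefox enterprise policies that
# disable non-browser features. The file would be placed in the
# browser's "distribution" directory as policies.json.
policies = {
    "policies": {
        "DisableTelemetry": True,
        "DisableFirefoxStudies": True,
        "DisablePocket": True,
    }
}

def render_policies(p: dict) -> str:
    """Serialize a policy dict the way it would be written to disk."""
    return json.dumps(p, indent=2)

print(render_policies(policies))
```

Because these are administrative policies rather than user preferences, the browser treats them as locked settings, which is why updates don’t silently revert them.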
They’re easily overlooked between all the Chrome and Safari violence, but there are still text-based web browsers, and people still use them. How do they handle the latest HTML features? While CSS is the star of the show when it comes to new features, HTML ain’t stale either. If we put the long-awaited styleable selects and Apple’s take on toggle switches aside, there’s a lot readily available cross-browser. But here’s the thing: Whenever we say cross-browser, we usually look at the big ones, never at text-based browsers. So in this article I wanna shed some light on how they handle the following recent additions. ↫ Matthias Zöchling Text-based web browsers work best with regular HTML, as things like CSS and JavaScript won’t work. Despite the new features highlighted in the article being HTML, however, text-based browsers have a hard time dealing with them, and it’s likely that as more and more modern features get added to HTML, text-based browsers are going to have an increasingly hard time dealing with the web. At least OSNews seems to be decently usable in text-based web browsers, but ideal it is not. I don’t really have the skills to fix any issues on that front, but I can note that I’m working on an extremely basic, HTML-only version of OSNews generated from our RSS feed, hosted on some very unique retro hardware. I can’t guarantee it’ll become available – I’m wary about hosting something from home using unique hardware and outdated software – but if it does, y’all will know about it, of course.
Are you an author writing HTML? Just so we’re clear: Not XHTML. HTML. Without the X. If you are, repeat after me, because apparently this bears repeating (after the title): You are not required to close your <p>, <li>, <img>, or <br> tags in HTML. ↫ Daniel Tan Back when I still had to write OSNews’ stories in plain HTML – yes, that’s what we did for a very long time – I always properly closed my tags. I did so because I thought you had to, but also because I think it looks nicer, adds a ton of clarity, and makes it easier to go back later and make any possible changes or fix errors. It definitely added to the workload, which was especially annoying when dealing with really long, detailed articles, but the end result was worth it. I haven’t had to write in plain HTML for ages now, since OSNews switched to WordPress and thus uses a proper WYSIWYG editor, so I haven’t thought about closing HTML tags in a long time – until I stumbled upon this article. I vaguely remember I would “fix” other people’s HTML in our backend by adding closing tags, and now I feel a little bit silly for doing so since apparently it wasn’t technically necessary at all. Luckily, it’s also not wrong to close your tags, and I stick by my readability arguments. Sometimes it’s easy to forget just how old HTML has become, and how mangled it’s gotten over the years.
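The point is easy to verify with any lenient HTML parser. This sketch uses Python’s stdlib html.parser as a stand-in for a browser’s forgiving parsing; both list items come through intact even though no closing </li> tags are present:

```python
from html.parser import HTMLParser

# Like a browser, html.parser happily accepts unclosed <li> tags:
# the start of the next <li> (or the closing </ul>) implies the close.
class ListItemCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.items = 0

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.items += 1

parser = ListItemCounter()
parser.feed("<ul><li>First item<li>Second item</ul>")
print(parser.items)  # 2 -- both items parsed despite no </li>
```

This is exactly the “optional end tag” behaviour the HTML standard specifies for elements like p and li, which is why omitting them is valid HTML rather than merely tolerated sloppiness.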
We’re all familiar with things like marquee and blink, relics of HTML of the past, but there are far more weird and obscure HTML tags you may not be aware of. Luckily, Declan Chidlow at HTMLHell details a few of them so we can all shake our heads in disbelief. But there are far more obscure tags which are perhaps less visually dazzling but equally or even more interesting. If you’re younger, this might very well be your introduction to them. If you’re older, this still might be an introduction, but also possibly a trip down memory lane or a flashback to the horrors of the first browser war. It depends. ↫ Declan Chidlow at HTMLHell I think my favourite is the dir tag, intended to be used to display lists of files and directories. We’re supposed to use list tags now to achieve the same result, but I do kind of like the idea of having a dedicated tag to indicate files, and perhaps have browsers render these lists in the same way the file manager of the platform it’s running on does. I don’t know if that would have been possible, but it seems like the logical continuation of a hypothetical dir tag. Anyway, should we implement bgsound on OSNews?
What do you do if you develop a lightweight browser that doesn’t support JavaScript, but you once chose GitHub as the home for your code? You’re now in the unenviable position that your own browser can no longer access your own online source repository because it requires JavaScript, which is both annoying and, well, a little awkward. The solution is, of course, obvious: you move somewhere else. That’s exactly what the Dillo browser did. They set up a small VPS, opted for cgit as the git frontend for its performance and small size, and for the bug tracker, they created a brand new, very simple bug tracker. To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug. All bugs are stored in a git repository and a git hook regenerates the bug pages and the index on each new commit. As it is simply plain text, I can edit the bugs locally and only push them to the remote when I have Internet back, so it works nice offline. Also, as the output is just an static HTML site, I don’t need to worry about having any vulnerabilities in my code, as it will only run at build time. ↫ Rodrigo Arias Mallo There are more considerations detailed in the article about Dillo’s migration, and it can serve as inspiration for anyone else running a small open source project who wishes to leave GitHub behind. With GitHub continuing to pile on complexity and “AI” features that separate open source code from its licensing terms, we may see more and more projects giving GitHub the finger.
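buggy itself is a small C tool, but the core idea (one Markdown file in, one static HTML page out, no JavaScript anywhere) is simple enough to sketch. The function below is a hypothetical Python illustration of that pipeline, not buggy’s actual code:

```python
import html

def bug_to_html(markdown_text: str) -> str:
    """Render a Markdown bug report as one standalone HTML page.

    Only the first '#' heading is treated as the bug title; every
    other line is escaped and kept verbatim. A git hook would call
    this for each bug file and also regenerate an index page.
    """
    title = "untitled bug"
    body_lines = []
    for line in markdown_text.splitlines():
        if line.startswith("# ") and title == "untitled bug":
            title = html.escape(line[2:])          # first heading = bug title
        else:
            body_lines.append(html.escape(line))   # everything else, escaped
    body = "<br>\n".join(body_lines)
    return (f"<!DOCTYPE html><html><head><title>{title}</title></head>"
            f"<body><h1>{title}</h1><p>{body}</p></body></html>")

page = bug_to_html("# Crash on resize\nSteps:\n1. Open a page\n2. Resize")
print(page)
```

Because all of this runs at build time and ships only static HTML, there is no server-side code to exploit, which is exactly the property the Dillo developers were after.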
Real-time web platforms used to be a niche engineering flex. Now they’re the default expectation. Users want dashboards that update instantly, chats that feel live, sports scores that tick without refresh, and collaborative tools where changes appear the moment someone types. The modern web has effectively trained everyone to expect “now,” and at scale, “now” is expensive. What makes today’s challenge interesting is that the hard parts are no longer limited to raw throughput. The real struggle is building systems that are fast, correct, observable, secure, and cost-controlled, all while operating in a world of flaky networks, mobile clients, and sudden traffic spikes driven by social and news cycles.

Real-time is a product promise, not just a protocol

Teams often treat real-time as a technical decision: WebSockets, Server-Sent Events, long polling, or a third-party pub/sub service. In practice, real-time is a product promise with strict implications. If an app claims “live updates” but misses messages, duplicates events, or lags under load, trust collapses quickly. This is especially visible in high-stakes environments: trading, logistics, gaming, and live entertainment. Even outside of finance, a live platform is judged on responsiveness and consistency. That means engineering must optimize for perceived speed and correctness, not just raw latency numbers.

Scaling the fan-out problem without going broke

The toughest math in real-time systems is fan-out: one event may need to reach thousands or millions of clients. A single “state change” can become a storm of outbound messages. Approaches vary: some platforms partition users by region, shard by topic, or push more aggregation to the edge. Others shift from “push everything” to “push deltas” or “push only what a client is actively viewing.” At scale, relevance filtering is not a luxury. It’s survival.
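The “push deltas” idea can be sketched in a few lines: compute only the keys that changed between two state snapshots, so each fan-out message carries the minimum payload. A hypothetical Python illustration, not tied to any particular platform:

```python
def state_delta(old: dict, new: dict) -> dict:
    """Return only what changed between two state snapshots.

    Keys that appeared or changed map to their new value; keys that
    disappeared map to None as a deletion marker. Unchanged keys are
    omitted, so the fan-out payload stays small.
    """
    delta = {k: v for k, v in new.items() if old.get(k) != v}
    delta.update({k: None for k in old.keys() - new.keys()})
    return delta

# A live-score example: only the fields that moved get pushed.
old = {"score": 10, "clock": "12:30", "possession": "home"}
new = {"score": 12, "clock": "12:31", "possession": "home"}
print(state_delta(old, new))  # {'score': 12, 'clock': '12:31'}
```

Broadcasting this delta instead of the full state cuts each outbound message roughly in proportion to how little actually changed, which is where most of the fan-out savings come from.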
Consistency, ordering, and the ugly truth about distributed systems

Real-time systems are very rarely a single system. They’re a distributed mesh: load balancers, gateways, message brokers, caches, databases, analytics pipelines, and third-party services. Keeping events ordered and consistent across this mesh can be brutal. Teams must answer questions that sound simple until production arrives. Many systems accept “eventual consistency” for performance, but the user still expects things to be logically consistent. A notification arriving before the action it refers to is a small bug that feels like a broken product.

Infrastructure is only half the story: the client is the battlefield

Most of the attention goes to server-side architecture, while clients are usually where real-world experiences fail. Mobile networks are unstable, and browsers differ in resource behaviour. A real-time platform that works perfectly on desktop fibre degrades badly on a mid-range phone with intermittent connectivity. This makes protocol design that includes replay windows, acknowledgements, and rate limits necessary to protect both the server and the client.

Observability and incident response: real-time breaks loudly

Traditional web apps fail in slower ways. Real-time platforms fail like a fire alarm: connection drops spike, message queues backlog, and every user feels it simultaneously. Teams need strong observability; without it, incidents become guesswork. Worse, many problems look the same from the outside: “it’s laggy.” The only way to diagnose quickly is to measure everything with precision. It’s not uncommon to see real-time workloads compared to live “always-on” consumer services, including sports and gaming platforms where millions follow events at once.
Even unrelated brands and communities, sometimes labelled in shorthand like casino777, highlight how traffic can surge around timetables and live moments, forcing systems to handle sudden concurrency without warning.

Security and abuse: real-time is a magnet for abusers

Real-time systems increase the attack surface. Persistent connections are attractive to bots and abuse because they are resource-hungry by nature. Modern platforms must enforce robust authentication, per-user quotas, and behavioural detection. Rate limiting is not optional. Neither is isolating noisy clients and dropping connections aggressively when abuse patterns appear.

Takeaway: the hard part is balancing speed, correctness, and cost

Building high-traffic real-time web platforms today is difficult because every requirement fights another. The teams that succeed treat real-time as a full-stack discipline. They design protocols for unreliable networks, build scalable fan-out with relevance filtering, prove correctness through idempotency and ordering rules, and invest heavily in monitoring and abuse controls. In 2025, “real-time” is not a special feature. It’s an expectation. Meeting it at scale means engineering for the messy, human reality of the web, not the clean diagrams in architecture slides.
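The idempotency and ordering rules the article keeps returning to can be sketched as a small consumer that drops duplicates by sequence number and buffers out-of-order arrivals until the gap before them fills in. A hypothetical Python illustration:

```python
class OrderedConsumer:
    """Apply events exactly once and in sequence order.

    Duplicates (same seq) are dropped; out-of-order arrivals are
    buffered until every earlier sequence number has been applied.
    """
    def __init__(self):
        self.next_seq = 0   # next sequence number we expect to apply
        self.buffer = {}    # out-of-order events waiting for a gap to fill
        self.applied = []   # events applied, in order, exactly once

    def receive(self, seq: int, payload: str) -> None:
        if seq < self.next_seq or seq in self.buffer:
            return  # duplicate: already applied or already buffered
        self.buffer[seq] = payload
        while self.next_seq in self.buffer:        # drain the contiguous run
            self.applied.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1

# Events arrive out of order and with a duplicate; output stays clean.
c = OrderedConsumer()
for seq, msg in [(1, "b"), (0, "a"), (1, "b"), (2, "c")]:
    c.receive(seq, msg)
print(c.applied)  # ['a', 'b', 'c']
```

Real brokers add persistence, acknowledgements, and replay windows on top of this, but the core contract (dedupe plus gap-filling) is what turns “eventual” delivery into something that feels logically consistent to the user.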
Servo, the Rust-based browser engine spun off from Mozilla, keeps making progress every month, and this made Ignacio Casal Quinteiro wonder: what if we make a GTK widget so we can test Servo and compare it to WebKitGTK? As part of my job at Amazon I started working in a GTK widget which will allow embedding a Servo Webview inside a GTK application. This was mostly a research project just to understand the current state of Servo and whether it was at a good enough state to migrate from WebkitGTK to it. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. Instead, Servo while it is not yet ready for production, or at least not for what we need in our product, it was simple to embed and to get something running in just a few days. The community is also amazing, I had some problems along the way and they were providing good suggestions to get me unblocked in no time. ↫ Ignacio Casal Quinteiro The code is now out there, and while not yet ready for widespread use, this will make it easier for GTK developers to periodically assess the state of Servo, hopefully some day concluding it can serve as a replacement for WebKitGTK.
We’re all aware of the Chinese Great Firewall, the tool the Chinese government uses for mass censorship and for safeguarding and strengthening its totalitarian control over the country and its population. It turns out that through a Chinese shell company called Geedge Networks, China is also selling the Great Firewall to other totalitarian regimes around the world. Thanks to a massive leak of 500 GB of source code, work logs, and internal communication records, we now have more insight into how the Great Firewall works than ever before, leading to in-depth reports like this one from InterSecLab. The findings are chilling, but not surprising. First and foremost, Geedge is selling the Great Firewall to a variety of totalitarian regimes around the world, namely Kazakhstan, Ethiopia, Pakistan, Myanmar, and another unidentified country. These governments can then ask Geedge to make specific changes and ask them to focus on specific capabilities to further enhance the functionality of the Great Firewall, but what it can already do today is bad enough. The suite of products offered by Geedge Networks allow a client government unprecedented access to internet user data and enables governments to use this data to police national and regional networks. 
These capabilities include deep packet inspection for advanced classification, interception, and manipulation of application and user traffic; monitoring the geographic location of mobile subscribers in real time; analyzing aggregated network traffic in specific areas, such as during a protest or event; flagging unusual traffic patterns as suspicious; creating tailored blocking rules to obstruct access to a website or application (such as a VPN (Virtual Private Network) or circumvention tool); throttling traffic to specific services; identifying individual internet users for accessing websites or using circumvention tools or VPNs; assigning individual internet users reputation scores based on their online activities; and infecting users with malware through in-path injection. ↫ The Internet Coup: A Technical Analysis on How a Chinese Company is Exporting The Great Firewall to Autocratic Regimes Internet service providers participate in the implementation of the suite of tools, either freely or by force, and since the tools are platform-agnostic it doesn’t matter which platforms people are using in any given country, making international sanctions effectively useless. It also won’t surprise you that Geedge steals both proprietary and open source code, without regard for licensing terms. Furthermore, China is allowing provinces and regions within its borders to tailor and adapt the Great Firewall to their own local needs, providing a blueprint for how to export the suite of tools to other countries. With quite a few countries sliding ever further towards authoritarianism, I’m sure even places not traditionally thought of as totalitarian are lustfully looking at the Chinese Great Firewall, wishing they had something similar in their own countries.
I recently removed all advertising from OSNews, and one of the reasons to do so is that online ads have become a serious avenue for malware and other security problems. Advertising on the web has become such a massive security risk that even the very birthplace of the world wide web, CERN, now strongly advises its staff to use adblockers. If you value your privacy and, also important, if you value the security of your computer, consider installing an ad blocker. While there is a plethora of them out there, the Computer Security Office’s members use, e.g. uBlock origin (Firefox) or Origin Lite (Chrome), AdblockPlus, Ghostery and Privacy Badger of the US-based Electronic Frontier Foundation. They all come in free (as in “free beer”) versions for all major browsers and also offer more sophisticated features if you are willing to pay. Once enabled, and depending on your desired level of protection, they can provide another thorough layer of protection to your device – and subsequently to CERN. ↫ CERN’s Computer Security Office I think it’s high time lawmakers take a long, hard look at the state of online advertising, and consider taking strong measures like banning online tracking and targeted advertising. Even the above-board online advertising industry is built atop dubious practices and borderline criminal behaviour, and things only get worse from there. Malicious actors even manage to infiltrate Google’s own search engine with dangerous ads, and that’s absolutely insane when you think about it. I’ve reached the point where I consider any website with advertising to be disrespectful and putting its visitors at risk, willingly and knowingly. Adblockers are not just a nice-to-have, but an absolute necessity for a pleasant and safe browsing experience, and that should be an indicator that we need to really stop and think what we’re doing here.
A long, long time ago, Android treated browser tabs in a very unique way. Individual tabs were seen as ‘applications’, and would appear interspersed with the recent applications list as if they were, indeed, applications. This used to be one of my favourite Android features, as it made websites feel very well integrated into the overall user experience, and gave them a sense of place within your workflows. Eventually, though, Google decided to remove this unique approach, as we can’t have nice things and everything must be bland, boring, and the same, and now finding a website you have open requires going to your browser and finding the correct tab. More approachable to most people, I’d wager, but a reduction in usability, for me. I still mourn this loss. Similarly, we’ve seen a huge increase in the use of in-application browsers, a feature designed to trap users inside applications, instead of letting them freely explore the web the moment they click on a link inside an application. Application developers don’t want you leaving their application, so almost all of them, by default, will now open a webview inside the application when you click on an outbound link. For advertising companies, like Google and Facebook, this has the additional benefit of circumventing any and all privacy protections you may have set up in your browser, since those won’t apply to the webview the application opens. This sucks. I hate in-application browsers with a passion. Decades of internet use have taught me that clicking on a link means I’m opening a website in my browser. That’s what I want, that’s what I expect, and that’s how it should be. In-application webviews entirely break this normal chain of events; not because it improves the user experience, but because it benefits the bottom line of others. It’s also a massive security risk. Worst of all, this switch grants these apps the ability to spy on and manipulate third-party websites.
Popular apps like Instagram, Facebook Messenger and Facebook have all been caught injecting JavaScript via their in-app browsers into third party websites. TikTok was running commands that were essentially a keylogger. While we have no proof that this data was used or exfiltrated from the device, the mere presence of JavaScript code collecting this data combined with no plausible explanation is extremely concerning. ↫ Open Web Advocacy Open Web Advocacy has submitted a detailed and expansive report to the European Commission detailing the various issues with these in-application browsers, and suggests a number of remedies to strengthen security, improve privacy, and preserve browser choice. I hope this gets picked up, because in-application browsers are just another way in which we’re losing control over our devices.