Just the Browser: scripts to remove all the crap from your browser

Are you a normal person and thus sick of all the nonsensical, non-browser stuff browser makers keep adding to your browser, but for whatever reason you don’t want to or cannot switch to one of the forks of your browser of choice? Just the Browser helps you remove AI features, telemetry data reporting, sponsored content, product integrations, and other annoyances from desktop web browsers. The goal is to give you “just the browser” and nothing else, using hidden settings in web browsers intended for companies and other organizations. This project includes configuration files for popular web browsers, documentation for installing and modifying them, and easy installation scripts. Everything is open-source on GitHub. ↫ Just The Browser’s website It comes in the form of scripts for Windows, Linux, or macOS, and can be used for Google Chrome, Microsoft Edge, and Mozilla Firefox. It’s all open source, so you can check the scripts for yourself, but there are also manual guides for each browser if you’re not too keen on running an unknown script. The changes won’t be erased by updates, unless the specific settings and configuration flags used are actually removed or altered by the browser makers. That’s all there is to it – a very straightforward tool.
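To give a sense of the mechanism without vouching for the project’s exact files: the “hidden settings” in question are the enterprise policies browsers expose for organisations. In Firefox, for instance, these live in a policies.json file in the installation’s distribution directory. A minimal sketch using a few real policy keys – the actual Just the Browser configuration files are far more extensive – looks like this:

    {
      "policies": {
        "DisableTelemetry": true,
        "DisableFirefoxStudies": true,
        "DisablePocket": true
      }
    }

Chrome and Edge work on the same principle, reading managed policies from the Windows registry or from JSON files in a managed policies directory on Linux and macOS, which is why the changes survive ordinary browser updates.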

Modern HTML features on text-based web browsers

They’re easily overlooked amid all the Chrome and Safari violence, but there are still text-based web browsers, and people still use them. How do they handle the latest HTML features? While CSS is the star of the show when it comes to new features, HTML ain’t stale either. If we put the long-awaited styleable selects and Apple’s take on toggle switches aside, there’s a lot readily available cross-browser. But here’s the thing: Whenever we say cross-browser, we usually look at the big ones, never at text-based browsers. So in this article I wanna shed some light on how they handle the following recent additions. ↫ Matthias Zöchling Text-based web browsers work best with regular HTML, as things like CSS and JavaScript won’t work. Even though the new features highlighted in the article are plain HTML, text-based browsers have a hard time dealing with them, and it’s likely that as more and more modern features get added to HTML, text-based browsers are going to have an increasingly hard time dealing with the web. At least OSNews seems to render decently on text-based web browsers, but ideal it is not. I don’t really have the skills to fix any issues on that front, but I can note that I’m working on an extremely basic, HTML-only version of OSNews generated from our RSS feed, hosted on some very unique retro hardware. I can’t guarantee it’ll become available – I’m wary of hosting something from home using unique hardware and outdated software – but if it does, y’all will know about it, of course.

You are not required to close your <p>, <li>, <img>, or <br> tags in HTML

Are you an author writing HTML? Just so we’re clear: Not XHTML. HTML. Without the X. If you are, repeat after me, because apparently this bears repeating (after the title): You are not required to close your <p>, <li>, <img>, or <br> tags in HTML. ↫ Daniel Tan Back when I still had to write OSNews’ stories in plain HTML – yes, that’s what we did for a very long time – I always properly closed my tags. I did so because I thought you had to, but also because I think it looks nicer, adds a ton of clarity, and makes it easier to go back later to make changes or fix errors. It definitely added to the workload, which was especially annoying when dealing with really long, detailed articles, but the end result was worth it. I haven’t had to write in plain HTML for ages now, since OSNews switched to WordPress and thus uses a proper WYSIWYG editor, so I hadn’t thought about closing HTML tags in a long time – until I stumbled upon this article. I vaguely remember “fixing” other people’s HTML in our backend by adding closing tags, and now I feel a little bit silly for doing so, since apparently it wasn’t technically necessary at all. Luckily, it’s also not wrong to close your tags, and I stick by my readability arguments. Sometimes it’s easy to forget just how old HTML is, and how mangled it’s become over the years.
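To make that concrete, the fragment below is perfectly valid HTML: <li> and <p> end tags are optional because the parser closes those elements implicitly, while <img> and <br> are void elements that never take an end tag at all (the file name is just an example):

    <ul>
      <li>First item
      <li>Second item
    </ul>
    <p>A paragraph simply ends when the next block element starts.
    <p>Another paragraph, with a line break<br>
    and an image: <img src="logo.png" alt="logo">

Paste it into a full document and run it through a validator if you don’t believe it.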

The HTML elements time forgot

We’re all familiar with things like marquee and blink, relics of HTML of the past, but there are far more weird and obscure HTML tags you may not be aware of. Luckily, Declan Chidlow at HTMLHell details a few of them so we can all shake our heads in disbelief. But there are far more obscure tags which are perhaps less visually dazzling but equally or even more interesting. If you’re younger, this might very well be your introduction to them. If you’re older, this still might be an introduction, but also possibly a trip down memory lane or a flashback to the horrors of the first browser war. It depends. ↫ Declan Chidlow at HTMLHell I think my favourite is the dir tag, intended to be used to display lists of files and directories. We’re supposed to use list tags now to achieve the same result, but I do kind of like the idea of having a dedicated tag to indicate files, and perhaps having browsers render these lists the same way the file manager of the platform they’re running on does. I don’t know if that would have been possible, but it seems like the logical continuation of a hypothetical dir tag. Anyway, should we implement bgsound on OSNews?
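For reference, here is what dir looked like next to the unordered list that replaced it – a reconstruction from the old specs, since modern browsers simply render dir as a plain list, when they support it at all:

    <!-- Then: a dedicated element for directory listings -->
    <dir>
      <li>readme.txt</li>
      <li>kernel.img</li>
    </dir>

    <!-- Now: just an unordered list -->
    <ul>
      <li>readme.txt</li>
      <li>kernel.img</li>
    </ul>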

Migrating Dillo away from GitHub

What do you do if you develop a lightweight browser that doesn’t support JavaScript, but you once chose GitHub as the home for your code? You’re now in the unenviable position that your own browser can no longer access your own online source repository because it requires JavaScript, which is both annoying and, well, a little awkward. The solution is, of course, obvious: you move somewhere else. That’s exactly what the Dillo browser did. They set up a small VPS, opted for cgit as the git frontend for its performance and small size, and for bug tracking, they created a brand new, very simple tool. To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug. All bugs are stored in a git repository and a git hook regenerates the bug pages and the index on each new commit. As it is simply plain text, I can edit the bugs locally and only push them to the remote when I have Internet back, so it works nice offline. Also, as the output is just an static HTML site, I don’t need to worry about having any vulnerabilities in my code, as it will only run at build time. ↫ Rodrigo Arias Mallo There are more considerations detailed in the article about Dillo’s migration, and it can serve as inspiration for anyone else running a small open source project who wishes to leave GitHub behind. With GitHub continuing to add more and more complexity and “AI” features that separate open source code from its licensing terms, we may see more and more projects giving GitHub the finger.

The Challenges of Building High-Traffic Real-Time Web Platforms Today

Real-time web platforms used to be a niche engineering flex. Now they’re the default expectation. Users want dashboards that update instantly, chats that feel live, sports scores that tick without refresh, and collaborative tools where changes appear the moment someone types. The modern web has effectively trained everyone to expect “now”, and at scale, “now” is expensive. What makes today’s challenge interesting is that the hard parts are no longer limited to raw throughput. The real struggle is building systems that are fast, correct, observable, secure, and cost-controlled, all while operating in a world of flaky networks, mobile clients, and sudden traffic spikes driven by social and news cycles.

Real-time is a product promise, not just a protocol

Teams often treat real-time as a technical decision: WebSockets, Server-Sent Events, long polling, or a third-party pub/sub service. In practice, real-time is a product promise with strict implications. If an app claims “live updates” but misses messages, duplicates events, or lags under load, trust collapses quickly. This is especially visible in high-stakes environments: trading, logistics, gaming, and live entertainment. Even outside of finance, a live platform is judged on responsiveness and consistency. That means engineering must optimize for perceived speed and correctness, not just raw latency numbers.

Scaling the fan-out problem without going broke

The toughest math in real-time systems is fan-out: one event may need to reach thousands or millions of clients, so a single “state change” can become a storm of outbound messages. The common pains vary from platform to platform, and so do the approaches: some platforms partition users by region, shard by topic, or push more aggregation to the edge. Others shift from “push everything” to “push deltas”, or “push only what a client is actively viewing”. At scale, relevance filtering is not a luxury. It’s survival.
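The “push only what a client is actively viewing” idea is easy to sketch with standard web platform pieces. Below is a minimal, hedged browser-side example using the EventSource API (Server-Sent Events); the /events endpoint, the topic parameter, and the “delta” event name are hypothetical, but the automatic reconnection and Last-Event-ID replay behaviour is built into the protocol itself:

    <script>
      // Subscribe only to the topic this client is actively viewing,
      // rather than a global firehose of every state change.
      const topic = "match-42"; // hypothetical topic identifier
      const source = new EventSource("/events?topic=" + encodeURIComponent(topic));

      // The (hypothetical) server sends named "delta" events containing
      // only what changed, never full snapshots.
      source.addEventListener("delta", (event) => {
        const change = JSON.parse(event.data);
        document.querySelector("#score").textContent = change.score;
      });

      // On disconnect the browser reconnects on its own and sends a
      // Last-Event-ID header, giving the server a replay window to
      // resend whatever the client missed.
      source.onerror = () => console.warn("stream interrupted, reconnecting");
    </script>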
Consistency, ordering, and the ugly truth about distributed systems

Real-time systems are very rarely a single system. They’re a distributed mesh: load balancers, gateways, message brokers, caches, databases, analytics pipelines, and third-party services. Keeping events ordered and consistent across this mesh can be brutal, and it forces teams to answer questions that sound simple until production arrives. Many systems accept “eventual consistency” for performance, but the user still expects things to be logically consistent. A notification arriving before the action it refers to is a small bug that feels like a broken product.

Infrastructure is only half the story: the client is the battlefield

Most of the attention goes to server-side architecture, while clients are usually where real-world experiences fail. Mobile networks are unstable, and browsers differ in resource behaviour. A real-time platform that works perfectly on desktop fibre degrades badly on a mid-range phone with intermittent connectivity. This makes protocol design with replay windows, acknowledgements, and rate limits necessary to protect both the server and the client.

Observability and incident response: real-time breaks loudly

Traditional web apps fail in slower ways. Real-time platforms fail like a fire alarm: connection drops spike, message queues backlog, and every user feels it simultaneously. Teams need strong observability; without it, incidents become guesswork. Worse, many problems look the same from the outside: “it’s laggy.” The only way to diagnose quickly is to measure everything with precision. It’s not uncommon to see real-time workloads compared to live “always-on” consumer services, including sports and gaming platforms where millions follow events at once. Even unrelated brands and communities, sometimes labelled in shorthand like casino777, highlight how traffic can surge around timetables and live moments, forcing systems to handle sudden concurrency without warning.

Security and abuse: real-time is a magnet for abusers

Real-time systems increase the attack surface. Persistent connections are attractive for bots and abuse because they are resource-hungry by nature. To counter the common hazards, modern platforms must enforce robust authentication, per-user quotas, and behavioural detection. Rate limiting is not optional. Neither is isolating noisy clients and dropping connections aggressively when abuse patterns appear.

Takeaway: the hard part is balancing speed, correctness, and cost

Building high-traffic real-time web platforms today is difficult because every requirement fights another. The teams that succeed treat real-time as a full-stack discipline. They design protocols for unreliable networks, build scalable fan-out with relevance, prove correctness through idempotency and ordering rules, and invest heavily in monitoring and abuse controls. In 2025, “real-time” is not a special feature. It’s an expectation. Meeting it at scale means engineering for the messy, human reality of the web, not the clean diagrams in architecture slides.

Servo GTK: a widget to embed Servo in GTK4

Servo, the Rust-based browser engine spun off from Mozilla, keeps making progress every month, and this made Ignacio Casal Quinteiro wonder: what if we make a GTK widget so we can test Servo and compare it to WebKitGTK? As part of my job at Amazon I started working in a GTK widget which will allow embedding a Servo Webview inside a GTK application. This was mostly a research project just to understand the current state of Servo and whether it was at a good enough state to migrate from WebkitGTK to it. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. Instead, Servo while it is not yet ready for production, or at least not for what we need in our product, it was simple to embed and to get something running in just a few days. The community is also amazing, I had some problems along the way and they were providing good suggestions to get me unblocked in no time. ↫ Ignacio Casal Quinteiro The code is now out there, and while not yet ready for widespread use, this will make it easier for GTK developers to periodically assess the state of Servo, hopefully one day concluding it can serve as a replacement for WebKitGTK.

China is selling its Great Firewall censorship tools to countries around the world

We’re all aware of the Chinese Great Firewall, the tool the Chinese government uses for mass censorship and for safeguarding and strengthening its totalitarian control over the country and its population. It turns out that through a Chinese shell company called Geedge Networks, China is also selling the Great Firewall to other totalitarian regimes around the world. Thanks to a massive leak of 500 GB of source code, work logs, and internal communication records, we now have more insight into how the Great Firewall works than ever before, leading to in-depth reports like this one from InterSecLab. The findings are chilling, but not surprising. First and foremost, Geedge is selling the Great Firewall to a variety of totalitarian regimes around the world, namely Kazakhstan, Ethiopia, Pakistan, Myanmar, and another unidentified country. These governments can then ask Geedge to make specific changes and to focus on specific capabilities to further enhance the functionality of the Great Firewall, but what it can already do today is bad enough. The suite of products offered by Geedge Networks allow a client government unprecedented access to internet user data and enables governments to use this data to police national and regional networks. These capabilities include deep packet inspection for advanced classification, interception, and manipulation of application and user traffic; monitoring the geographic location of mobile subscribers in real time; analyzing aggregated network traffic in specific areas, such as during a protest or event; flagging unusual traffic patterns as suspicious; creating tailored blocking rules to obstruct access to a website or application (such as a VPN (Virtual Private Network) or circumvention tool); throttling traffic to specific services; identifying individual internet users for accessing websites or using circumvention tools or VPNs; assigning individual internet users reputation scores based on their online activities; and infecting users with malware through in-path injection. ↫ The Internet Coup: A Technical Analysis on How a Chinese Company is Exporting The Great Firewall to Autocratic Regimes Internet service providers participate in the implementation of the suite of tools, either freely or by force, and since the tools are platform-agnostic it doesn’t matter which platforms people are using in any given country, making international sanctions effectively useless. It also won’t surprise you that Geedge steals both proprietary and open source code, without regard for licensing terms. Furthermore, China is allowing provinces and regions within its borders to tailor and adapt the Great Firewall to their own local needs, providing a blueprint for how to export the suite of tools to other countries. With quite a few countries sliding ever further towards authoritarianism, I’m sure even places not traditionally thought of as totalitarian are lustfully looking at the Chinese Great Firewall, wishing they had something similar in their own countries.

Even the birthplace of the world wide web wants you to use adblockers

I recently removed all advertising from OSNews, and one of the reasons to do so is that online ads have become a serious avenue for malware and other security problems. Advertising on the web has become such a massive security risk that even the very birthplace of the world wide web, CERN, now strongly advises its staff to use adblockers. If you value your privacy and, also important, if you value the security of your computer, consider installing an ad blocker. While there is a plethora of them out there, the Computer Security Office’s members use, e.g. uBlock Origin (Firefox) or uBlock Origin Lite (Chrome), AdblockPlus, Ghostery and Privacy Badger of the US-based Electronic Frontier Foundation. They all come in free (as in “free beer”) versions for all major browsers and also offer more sophisticated features if you are willing to pay. Once enabled, and depending on your desired level of protection, they can provide another thorough layer of protection to your device – and subsequently to CERN. ↫ CERN’s Computer Security Office I think it’s high time lawmakers take a long, hard look at the state of online advertising, and consider taking strong measures like banning online tracking and targeted advertising. Even the above-board online advertising industry is built atop dubious practices and borderline criminal behaviour, and things only get worse from there. Malicious actors even manage to infiltrate Google’s own search engine with dangerous ads, and that’s absolutely insane when you think about it. I’ve reached the point where I consider any website with advertising to be disrespectful and putting its visitors at risk, willingly and knowingly. Adblockers are not just a nice-to-have, but an absolute necessity for a pleasant and safe browsing experience, and that should be an indicator that we need to really stop and think about what we’re doing here.

In-application browsers: the worst erosion of user choice you haven’t heard of

A long, long time ago, Android treated browser tabs in a very unique way. Individual tabs were seen as ‘applications’, and would appear interspersed with the recent applications list as if they were, indeed, applications. This used to be one of my favourite Android features, as it made websites feel very well integrated into the overall user experience, and gave them a sense of place within your workflows. Eventually, though, Google decided to remove this unique approach, as we can’t have nice things and everything must be bland, boring, and the same, and now finding a website you have open requires going to your browser and finding the correct tab. More approachable to most people, I’d wager, but a reduction in usability, for me. I still mourn this loss. Similarly, we’ve seen a huge increase in the use of in-application browsers, a feature designed to trap users inside applications, instead of letting them freely explore the web the moment they click on a link inside an application. Application developers don’t want you leaving their application, so almost all of them, by default, will now open a webview inside the application when you click on an outbound link. For advertising companies, like Google and Facebook, this has the additional benefit of circumventing any and all privacy protections you may have set up in your browser, since those won’t apply to the webview the application opens. This sucks. I hate in-application browsers with a passion. Decades of internet use have taught me that clicking on a link means I’m opening a website in my browser. That’s what I want, that’s what I expect, and that’s how it should be. In-application webviews entirely break this normal chain of events; not because it improves the user experience, but because it benefits the bottom line of others. It’s also a massive security risk. Worst of all, this switch grants these apps the ability to spy and manipulate third-party websites. Popular apps like Instagram, Facebook Messenger and Facebook have all been caught injecting JavaScript via their in-app browsers into third party websites. TikTok was running commands that were essentially a keylogger. While we have no proof that this data was used or exfiltrated from the device, the mere presence of JavaScript code collecting this data combined with no plausible explanation is extremely concerning. ↫ Open Web Advocacy Open Web Advocacy has submitted a detailed and expansive report to the European Commission detailing the various issues with these in-application browsers, and suggests a number of remedies to strengthen security, improve privacy, and preserve browser choice. I hope this gets picked up, because in-application browsers are just another way in which we’re losing control over our devices.

Google is killing the open web

Google is managing to achieve what Microsoft couldn’t: killing the open web. The efforts of tech giants to gain control of and enclose the commons for extractive purposes have been clear to anyone who has been following the history of the Internet for at least the last decade, and the adopted strategies are varied in technique as they are in success, from Embrace, Extend, Extinguish (EEE) to monopolization and lock-in. What I want to talk about in this article is the war Google has been waging on XML for over a decade, why it matters that they’ve finally encroached themselves enough to get what they want, and what we can do to fight this. ↫ Oblomov (I can’t discern the author’s preferred name) Google’s quest to destroy the open web – or at the very least, aggressively contain it – is not new, and we’re all aware of it. Since Google makes most of its money from online advertising, what the company really wants is a sanitised, corporate web that is deeply centralised around as few big players as possible. The smaller the number of players that have an actual influence on the web, the better – it’s much easier for Google to find common ground with other megacorps like Apple or Microsoft than with smaller players, open source projects, and individuals who actually care about the openness of the web. One of Google’s primary points of attack is XML and everything related to it, like RSS, XSLT, and so on. If you use RSS, you’re not loading web pages and thus not seeing Google’s ads. If you use XSLT to transform an RSS feed into a browsable website, you’re again not seeing any ads. Effectively, anything that we can use to avoid online advertising is a direct threat to Google’s bottom line, and thus you can be certain Google will try to remove, break, or otherwise cripple it in some way. The most recent example is yet another attempt by Google to kill XSLT. XSLT, or Extensible Stylesheet Language Transformations, is a language which allows you to transform any XML document – like an RSS feed – into other formats, like HTML, plaintext, and tons more. Google has been trying to kill XSLT for over a decade, but it’s such an unpopular move that they had to back down the first time they proposed its removal. They’re proposing it again, and the feedback has been just as negative. And we finally get to these days. Just as RSS feeds are making a comeback and users are starting to grow skeptic of the corporate silos, Google makes another run to kill XSLT, this time using the WHATWG as a sock puppet. Particularly of note, the corresponding Chromium issue was created before the WHATWG Github issue. It is thus to no one’s surprise that the overwhelmingly negative reactions to the issue, the detailed explanations about why XSLT is important, how instead of removing it browsers should move to more recent versions of the standard, and even the indications of existing better and more secure libraries to base such new implementations on, every counterpoint to the removal have gone completely ignored. In the end, the WHATWG was forced to close down comments to the Github issue to stop the flood of negative feedback, so that the Googler could move on to the next step: commencing the process of formalizing the dismissal of XSLT. ↫ Oblomov (I can’t discern the author’s preferred name) At this point in time, there are really no more web standards as we idealise them in our heads.
It’s effectively just Google, and perhaps Apple, deciding what is a web “standard” and what isn’t, their decisions guided not by what’s best for a healthy and thriving open web, but by what’s best for their respective bottom lines. The reason the web looks and feels like ass now is not because we wanted it to be so, but because Google and the other technology giants made it so. Everyone is just playing by their rules because otherwise, you won’t show up in Google Search or your site won’t work properly in mobile Safari. This very detailed article and the recent renewed interest in XSLT – thanks for letting everyone know, Google! – have me wondering if OSNews should use XSLT to create a pretty version of our RSS feed that will render nicely even in browsers without any RSS support. It doesn’t seem too difficult, so I’ll see if I can find someone to figure this out (I lack the skills, obviously). We’ve already removed our ads, and our RSS feed is full-article anyway, so why not have a minimal version of OSNews you could browse to in your browser, based on our RSS feed and XSLT?
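For the curious, the moving parts really are small. A hedged sketch, assuming a standard RSS 2.0 feed (and untested against the actual OSNews feed): the feed references a stylesheet with one processing instruction at the top of the XML file,

    <?xml-stylesheet type="text/xsl" href="feed.xsl"?>

and a stylesheet along these lines turns the feed into a plain HTML page right in the browser:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <!-- Render the channel as a simple HTML page -->
      <xsl:template match="/rss/channel">
        <html>
          <head><title><xsl:value-of select="title"/></title></head>
          <body>
            <h1><xsl:value-of select="title"/></h1>
            <!-- One article per feed item, in feed order -->
            <xsl:for-each select="item">
              <article>
                <h2><a href="{link}"><xsl:value-of select="title"/></a></h2>
                <p><xsl:value-of select="description"/></p>
              </article>
            </xsl:for-each>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>

No JavaScript, no server-side rendering – which is, of course, exactly why it earns Google nothing.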

AOL announces it’s ending its dial-up internet service

AOL routinely evaluates its products and services and has decided to discontinue Dial-up Internet. This service will no longer be available in AOL plans. As a result, on September 30, 2025 this service and the associated software, the AOL Dialer software and AOL Shield browser, which are optimized for older operating systems and dial-up internet connections, will be discontinued. ↫ AOL support document I’ve seen a few publications writing derisively about this, surprised dial-up internet is still a thing, but I think that’s misguided and definitely a bit elitist. In a country as large as the United States, there are bound to be quite a few very remote and isolated places where dial-up might be the best or even only option to get online. On top of that, I’m sure there are people out there who use the internet so sparingly that dial-up suits their needs just fine. I genuinely hope this move by AOL doesn’t cut a bunch of people off from the internet without any recourse, especially if it involves, say, isolated and lonely seniors for whom such changes may be too difficult to handle. Access to the internet is quite crucial in the modern world, and we shouldn’t ridicule people just because they don’t have access to super high-speed broadband.

“AWS deleted my 10-year account and all data without warning”

AWS: Not even once. This prominent Ruby developer lost his entire test environment – which, ironically, was pivotal to AWS’ own infrastructure – because of a rogue team within AWS itself that apparently answers to no one and worked hard to cover up a dumb mistake. On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation. This is the story of a catastrophic internal mistake at AWS MENA, a 20-day support nightmare where I couldn’t get a straight answer to “Does my data still exist?”, and what it reveals about trusting cloud providers with your data. ↫ Abdelkader Boudih Nightmare scenario doesn’t even begin to describe what happened here.

“I tried Servo, the undercover web browser engine made with Rust”

Servo is unique for a few other reasons, too. It’s managed by the Linux Foundation Europe with decisions made by a technical steering committee, not a big tech company. One of the main goals is to be an “embeddable web rendering engine,” meaning it’s not just for browsers—it could be a replacement for Electron or the Android WebView. Servo is also the first completely new browser engine in decades, so it’s taking lessons learned from mainstream browsers while building a new foundation. ↫ Corbin Davenport At the moment, as Davenport notes, Servo is far from ready to be a daily driver browser engine. Tons of websites’ rendering is broken and some crash the browser altogether, and performance is nowhere near that of the other browser engines. This makes perfect sense, as Servo is still in heavy development, and there’s no massive corporation with endless money (and ulterior motives) backing it. Still, out of all the various attempts at wresting control away from Blink and WebKit, I feel like Servo’s the one with the most promise in the long term.

Anubis, tool to stop “AI” crawler abuse, gains non-JavaScript option

In recent weeks and months, you may have noticed that when accessing some websites, you see a little progress bar and a character performing some sort of check. You’ve most likely encountered Anubis, a tool to distinguish real human browser users from “AI” content crawlers that are causing real damage and harm. It turns out Anubis is quite effective at what it does, but it did come with a limitation: it required JavaScript to be enabled. Well, no more. One of the first issues in Anubis before it was moved to the TecharoHQ org was a request to support challenging browsers without using JavaScript. This is a pretty challenging thing to do without rethinking how Anubis works from a fundamentally low level, and with v1.20.0, Anubis finally has support for running without client-side JavaScript thanks to the Meta Refresh challenge. ↫ Xe Iaso Before this new non-JS challenge, users who disable client-side JavaScript, or browsers that don’t support JavaScript at all, were straight-up blocked from passing Anubis’ test, meaning they couldn’t access the website Anubis was protecting from “AI” scraper abuse. This is now no longer the case.
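The building block behind a meta-refresh challenge is the venerable http-equiv refresh tag. A minimal sketch of the general pattern – not Anubis’ actual challenge logic, and the URL and token here are hypothetical:

    <!-- Served to non-JavaScript clients: any client that genuinely
         parses HTML and follows refreshes will request the answer URL
         after the delay, which filters out the laziest scrapers. -->
    <meta http-equiv="refresh" content="5;url=/challenge/answer?token=abc123">
    <p>Checking your browser – you will be redirected shortly.</p>

The server then only grants access to clients that actually came back via the challenge URL within the expected window.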

Cappa Magne: Solemn Occasions and Processions

What is a Cappa Magna?

The Cappa Magna is a wide ecclesiastical cloak with a long train, traditionally worn by canons, bishops, and cardinals of the Catholic Church on solemn occasions and processions. Its use dates back to the Middle Ages and symbolizes the dignity and authority of its wearer.

Origins and History of the Cappa Magna

The origins of the Cappa Magna are lost in the history of the Church. It is believed to derive from similar cloaks worn by Roman officials. Over the centuries, the Cappa Magna has evolved, becoming a distinctive symbol of the high-ranking clergy. Initially, it was a practical garment to protect from the cold, but gradually it took on a ceremonial meaning.

Symbolic Meaning of the Cappa Magna

The Cappa Magna is not only a garment but a symbol rich in meaning. Its wide train represents the extension of the authority of its wearer and his responsibility towards the Church and the faithful. The color of the Cappa Magna varies depending on the rank of its wearer and the liturgical period. Red is traditionally reserved for cardinals, while purple is worn by bishops.

When is the Cappa Magna Worn?

The Cappa Magna is reserved for solemn occasions and processions; in general, it is worn when it is desired to emphasize the solemnity and importance of the event.

The Different Types of Cappa Magna

There are different variations of the Cappa Magna, depending on the rank of its wearer and the occasion. The main differences concern the color, the fabric, and the presence or absence of fur. For example, the cardinalitial Cappa Magna is traditionally in red wool with silk lining, edged with ermine in winter.

The Cappa Magna Today

The use of the Cappa Magna has decreased in recent decades, especially after the liturgical reforms of the Second Vatican Council. However, it remains a significant garment for many members of the clergy and is still worn on special occasions. Some see it as a symbol of tradition and continuity, while others consider it a legacy of the past. Regardless of the different opinions, the Cappa Magna continues to fascinate and arouse interest.

Where to Buy a Cappa Magna

If you are interested in purchasing a Cappa Magna, you can find one in stores specializing in religious items or online. At HaftinaUSA.com, we offer a wide selection of high-quality Cappa Magne, made with the best materials and in accordance with tradition. Whether you are a member of the clergy or passionate about history and liturgy, you will surely find the perfect Cappa Magna for your needs.

The Cappa Magna and Ecclesiastical Fashion

The Cappa Magna, despite being a traditional garment, has influenced ecclesiastical fashion over the centuries. Its elegant design and imposing appearance have inspired the creation of other liturgical and ceremonial garments. The Cappa Magna is an example of how tradition and innovation can coexist in ecclesiastical fashion.

How to Wear the Cappa Magna Correctly

Wearing the Cappa Magna correctly requires some knowledge of ecclesiastical protocol. It is important to make sure that the cloak is well draped and that the train is arranged in an orderly manner. Furthermore, it is essential to combine the Cappa Magna with the other liturgical garments appropriate for the occasion.

The Cappa Magna: A Treasure of the Catholic Tradition

In conclusion, the Cappa Magna is a treasure of the Catholic tradition, a symbol of dignity, authority, and continuity. Its use on solemn occasions and in processions emphasizes the importance and sanctity of these events. If you are interested in the history of the Church and the liturgy, the Cappa Magna is a fascinating topic to explore. Visit HaftinaUSA.com to discover our collection of Cappa Magne and other high-quality religious items.

HaftinaUSA.com: Your Partner for Liturgical Clothing

At HaftinaUSA.com, we are proud to offer a wide range of high-quality liturgical clothing, including Cappa Magne, sacred vestments, and accessories. We are committed to providing our customers with products that respect tradition and are made with the best materials. Explore our site today and discover the difference that quality can make!

Choosing the Perfect Cappa Magna for Every Occasion

The choice of the ideal Cappa Magna depends on several factors, including the rank of its wearer, the occasion, and the liturgical period. At HaftinaUSA, our team of experts is available to help you choose the perfect Cappa Magna for every occasion. Contact us today for a personalized consultation!

The flip phone web: browsing with the original Opera Mini

Opera Mini was first released in 2005 as a web browser for mobile phones, with the ability to load full websites by sending most of the work to an external server. It was a massive hit, but it started to fade out of relevance once smartphones entered mainstream use. Opera Mini still exists today as a web browser for iPhone and Android—it’s now just a tweaked version of the regular Opera mobile browser, and you shouldn’t use Opera browsers. However, the original Java ME-based version is still functional, and you can even use it on modern computers. ↫ Corbin Davenport I remember using Opera Mini back in the day on my PocketPC and Palm devices. It wasn’t my main browser on those devices, but if some site I really needed was acting up, Opera Mini could be a lifesaver. As we all remember, the mobile web before the arrival of the iPhone was a trashfire. Interestingly enough, we’ve circled back to the mobile web being a trashfire, but at least we can block ads now to make it bearable. Since Opera Mini is just a Java application, the client part of the equation will probably remain executable for a long time, but once Opera decides to shut down the server side of things, it will stop being useful. Perhaps one day someone will reverse-engineer the protocol and APIs, paving the way for a custom server we can all run as part of the retrocomputing hobby. There’s always someone crazy and dedicated enough.

E-COM: the $40 million USPS project to send email on paper

How do you get email to the folks without computers? What if the Post Office printed out email, stamped it, dropped it in folks’ mailboxes along with the rest of their mail, and saved the USPS once and for all? And so in 1982 E-COM was born—and, inadvertently, helped coin the term “e-mail.” ↫ Justin Duke The implementation of E-COM was awesome. You’d enter the messages on your computer and send them, using a TTY or an IBM 2780/3780 terminal, to Sperry Rand Univac 1108 computer systems at one of 25 post offices. Postal staff would print the messages and send them through the regular postal system to their recipients. The USPS actually tried to get a legal monopoly on this concept, but the FCC fought them in court and won out. E-COM wasn’t the breakout success the USPS had hoped for, but it did catch on in one, unpleasant way: spam. The official-looking E-COM envelopes from the USPS were very attractive to junk mail companies, and it was estimated that about six companies made up 70% of the total E-COM volume of 15 million messages in its second year of operation. The entire article is definitely recommended reading, as it contains a ton more information about E-COM and some of the other attempts by USPS to ride the coattails of the computer and internet revolution, including the idea to give every US resident an @.us e-mail address. Wild.