
Chromium’s influence on Chromium alternatives

I don’t think most people realize how Firefox and Safari depend on Google for more than “just” revenue from default search engine deals and prototyping new web platform features. Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it’s a hard image format to implement!), much of Safari’s cryptography (from BoringSSL), Firefox’s 2D renderer (Skia)…the list goes on. Much of Firefox’s security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with ControlFlowIntegrity) is directly inspired by Chromium’s architecture. ↫ Rohan “Seirdy” Kumar Definitely an interesting angle on the browser debate I hadn’t really stopped to think about before. The argument is that while Chromium’s dominance is not exactly great, the other side of the coin is that non-Chromium browsers also make use of a lot of Chromium code all of us benefit from, and without Google doing that work, Mozilla would have to do it by themselves, and let’s face it, it’s not like they’re in a great position to do so. I’m not saying I buy the argument, but it’s an argument nonetheless. I honestly wouldn’t mind a slower development pace for the web, since I feel a lot of energy and development goes into things making the web worse, not better. Redirecting some of that development into things users of the web would benefit from seems like a win to me, and with the dominant web engine Chromium being run by an advertising company, we all know where their focus lies, and it ain’t on us as users. I’m still firmly on the side of less Chromium, please.

Google’s ad-blocking crackdown underway

Google has gotten a bad reputation as of late for being a bit overzealous when it comes to fighting ad blockers. Most recently, it’s been spotted automatically turning off popular ad blocking extension uBlock Origin for some Google Chrome users. To a degree, that makes sense—Google makes its money off ads. But with malicious ads and data trackers all over the internet these days, users have legitimate reasons to want to block them. The uBlock Origin controversy is just one facet of a debate that goes back years, and it’s not isolated: your favorite ad blocker will likely be affected next. Here are the best ways to keep blocking ads now that Google is cracking down on ad blockers. ↫ Michelle Ehrhardt at Lifehacker Here’s the cold and harsh reality: ad blocking will become ever more difficult as time goes on. Not only is Google obviously fighting it, other browser makers will most likely follow suit. Microsoft is an advertising company, so Edge will also drop Manifest v2 support. Apple is an advertising company, and will do whatever they can to make at least their own ads appear. Mozilla is an advertising company, too, now, and will continue to erode their users’ trust in favour of nebulous nonsense like privacy-respecting advertising in cooperation with Facebook. The best way to block ads is to move to blocking at the network level. Get a cheap computer or Raspberry Pi, set up Pi-Hole, and enjoy some of the best ad blocking you’re ever going to get. It’s definitely more involved than just installing a browser extension, but it also happens to be much harder for advertising companies to combat. If you’re feeling generous, set up Pi-Holes for your parents, friends, and relatives. It’s worth it to make their browsing experience faster, safer, and more pleasant. And once again I’d like to reiterate that I have zero issues with anyone blocking the ads on OSNews. Your computer, your rules.
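For the curious, the mechanism behind Pi-Hole is conceptually simple: it acts as your network’s DNS server and answers queries for known ad and tracker domains with an unroutable address, so the ads are never even fetched. A minimal sketch of the idea in Python (the blocklist entries and addresses here are made up for illustration):

```python
# Minimal sketch of how a DNS sinkhole like Pi-Hole blocks ads: any query
# whose domain (or parent domain) is on a blocklist is answered with an
# unroutable address instead of the real one.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries
SINKHOLE = "0.0.0.0"

def resolve(domain: str, upstream: dict) -> str:
    """Return the sinkhole address for blocked domains, else the upstream answer."""
    # Check the domain and every parent domain against the blocklist,
    # so "banner.ads.example.com" is caught by "ads.example.com".
    labels = domain.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE
    return upstream.get(domain, "NXDOMAIN")

upstream = {"osnews.com": "203.0.113.10", "banner.ads.example.com": "198.51.100.5"}
print(resolve("osnews.com", upstream))             # normal resolution
print(resolve("banner.ads.example.com", upstream)) # sinkholed
```

The real thing obviously does much more – caching, upstream forwarding, per-client rules – but this is the core of why it works for every device and app on the network rather than just one browser.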
It’s not like display ads are particularly profitable anyway, so I’d much rather you support us through Patreon or a one-time donation through Ko-Fi, which is a more direct way of ensuring OSNews continues to exist. Also note that the OSNews Matrix room – think IRC, but more modern, and fully end-to-end encrypted – is now up and running and accessible to all OSNews Patreons as well.

Internet Archive hacked and victim of DDoS attacks

Internet Archive’s “The Wayback Machine” has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached. “Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!,” reads a JavaScript alert shown on the compromised archive.org site. ↫ Lawrence Abrams at Bleeping Computer To make matters worse, the Internet Archive was also suffering from waves of distributed denial-of-service attacks, forcing the IA to take the site down while it shored everything up. It seems the attackers have no real motivation other than the fact that they can, but it’s interesting, shall we say, that the Internet Archive has also been under legal assault by big publishers for years now. I highly doubt the two are related in any way, but it’s an interesting note nonetheless. I’m still catching up on all the various tech news stories, but this one was hard to miss. A lot of people are rightfully angry and dismayed about this, since attacking the Internet Archive like this feels like throwing Molotov cocktails at a local library – there’s literally not a single reason to do so, and the only people you’re going to hurt are underpaid librarians and chill people who just want to read some books. Whoever is behind this, they’re just assholes, no ifs or buts about it.

A Comprehensive Guide to Choosing the Right DevOps Managed Services

The world of software development is rapidly changing. More and more companies are adopting DevOps practices to improve collaboration, increase deployment frequency, and deliver higher-quality software. However, implementing DevOps can be challenging without the right people, processes, and tools. This is where DevOps managed services providers can help. Choosing the right DevOps partner is crucial to maximizing the benefits of DevOps at your organization. This comprehensive guide covers everything you need to know about selecting the best DevOps managed services provider for your needs.

What are DevOps Managed Services?

DevOps managed services provide ongoing management, support, and expertise to help organizations implement DevOps practices. A managed services provider (MSP) becomes an extension of your team, handling day-to-day DevOps tasks on your behalf. This removes the burden of building in-house DevOps competency and lets your engineers focus on delivering business value instead of struggling with new tools and processes.

Benefits of Using DevOps Managed Services

Here are some of the main reasons to leverage an MSP to assist your DevOps transformation:

Accelerate Time-to-Market – A mature MSP has developed accelerators and blueprints based on years of project experience. This allows them to rapidly stand up CI/CD pipelines, infrastructure, and other solutions, so you’ll be able to deploy code faster.

Increase Efficiency – MSPs scale across clients, allowing them to create reusable frameworks, scripts, and integrations (for data warehouse services, for example). By leveraging this pooled knowledge, you avoid reinventing the wheel and your team gets more done.

Augment Internal Capabilities – Most IT teams struggle to hire DevOps talent. Engaging an MSP gives you instant access to specialized skills like site reliability engineering (SRE), security hardening, and compliance automation.

Gain Expertise – Most companies are still learning DevOps. An MSP provides advisory services based on what works well across its broad client base, helping you adopt best practices instead of repeating common mistakes.

Reduce Cost – While the exact savings will vary, research shows DevOps and managed services can reduce costs through fewer defects, improved efficiency, and optimized infrastructure usage.

Key Factors to Consider

Choosing the right MSP gives you the greatest chance of success. However, evaluating providers can seem overwhelming given the diversity of services available. Here are the five criteria to focus on:

1. DevOps Experience and Maturity – Confirm that the provider has real-world expertise specifically in DevOps engagements; ask about past projects and outcomes so you can be confident they can guide your organization on the DevOps journey. Also examine their internal DevOps maturity: an MSP that “walks the talk” by using DevOps practices in its own operations is better positioned to help instill those disciplines in your teams.

2. People, Process, and Tools – A quality MSP considers all three pillars of DevOps success. People: they have strong technical talent in place and provide training to address any skill gaps, and cultural change is considered part of any engagement. Process: they enforce proven frameworks for infrastructure management, CI/CD, metrics gathering, and so on, but customize them to your environment rather than taking a one-size-fits-all approach. Tools: they have preferred platforms and toolchains based on experience, but integrate well with your existing investments rather than demanding wholesale changes. Aligning an MSP across people, processes, and tools ensures a smooth partnership.

3. Delivery Model and Location – Understand how the MSP prefers to deliver its services. If you have on-site personnel, also consider geographic proximity: an MSP with a delivery center nearby can rotate staff more easily. Most MSPs are flexible and will align with what works best for a client, but be clear on communication and availability expectations upfront.

4. Security and Compliance Expertise – Today, DevOps and security should go hand in hand, so evaluate how much security knowledge the provider brings to the table. Not all clients require advanced security skills, but given increasing regulatory demands, an MSP with broader security experience can provide long-term value.

5. Cloud vs. On-Premises Support – Many DevOps initiatives, particularly when starting out, focus on the public cloud, given cloud platforms’ automation capabilities. However, most enterprises take a hybrid approach, leveraging both on-premises infrastructure and public cloud. The required mix of cloud vs. on-premises support should factor into provider selection.

Engagement Models for DevOps Managed Services

MSPs offer varying ways clients can procure their DevOps expertise:

Staff Augmentation – Add skilled DevOps consultants to your team for a fixed time period (typically 3-6 months). This works well to fill immediate talent gaps.

Project Based – Engage an MSP for a specific initiative, such as building a CI/CD pipeline for a business-critical application, with a clear scope and deliverables.

Ongoing Managed Services – Retain an MSP to provide ongoing DevOps support under a longer-term (1+ year) contract. These are more strategic partnerships where MSP metrics and incentives align with client goals.

Hybrid Approaches – Blend staff augmentation, project work, and managed services. This provides flexibility to get quick wins while building long-term capabilities.

Evaluate which model (or combination) suits your requirements and budget.

Overview of Top Managed Service Providers

The market for DevOps managed services features a wide range of global systems integrators, niche specialists, regional firms, and digital transformation agencies. Here is a sampling of leading options across various categories: Langate, Accenture, Cognizant, Wipro, EPAM, Advanced Technology Consulting, and ClearScale. This sampling shows the diversity of options and demonstrates key commonalities, such as automation skills, CI/CD expertise, and experience driving cultural change. As you evaluate providers, develop a shortlist of 2-3 options that seem best aligned, then validate further through detailed discovery conversations and proposal walkthroughs.

A Framework for Comparing Providers

With so many aspects to examine, it helps to use a scorecard to track your assessment as you engage potential DevOps MSPs:

Criteria (weight): Years of Experience (10%), Client References/Case Studies (15%), Delivery Locations (10%), Cultural Change Methodology (15%), Security and Compliance Capabilities (10%), Public Cloud Skills (15%), On-Premises Infrastructure Expertise (15%), Budget Fit (10%) – for a total of 100%, scored separately for each shortlisted provider.

Customize the categories and weighting based on your priorities. Scoring forces clearer decisions compared to general impressions, and sharing the framework with stakeholders helps build consensus on the final choice.
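The scorecard math itself is just a weighted sum; a quick sketch of how the totals work (the weights come from the scorecard criteria above, while the provider scores are hypothetical):

```python
# Weighted scorecard for comparing MSPs: each criterion gets a weight
# (summing to 100%) and each provider a score, e.g. 1-5; the total is the
# weight-adjusted sum, which makes trade-offs between providers explicit.

weights = {
    "Years of Experience": 0.10,
    "Client References/Case Studies": 0.15,
    "Delivery Locations": 0.10,
    "Cultural Change Methodology": 0.15,
    "Security and Compliance Capabilities": 0.10,
    "Public Cloud Skills": 0.15,
    "On-Premises Infrastructure Expertise": 0.15,
    "Budget Fit": 0.10,
}

def total_score(scores: dict) -> float:
    """Weight-adjusted total, on the same 1-5 scale as the raw scores."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

provider_1 = {c: 4 for c in weights}  # hypothetical: solid across the board
provider_2 = {**{c: 3 for c in weights}, "Public Cloud Skills": 5, "Budget Fit": 5}
print(round(total_score(provider_1), 2))  # uniform 4s average out to 4.0
print(round(total_score(provider_2), 2))  # strong cloud/budget lifts 3s to 3.5
```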

Servo gets tabbed browsing, Windows improvements, and more

If you’re reading this, you did a good job surviving another month, and that means we’ve got another monthly update from the Servo project, the Rust-based browser engine originally started by Mozilla. The major new feature this month is tabbed browsing in the Servo example browser, as well as extensive improvements for Servo on Windows. Servo-the-browser now has a redesigned toolbar and tabbed browsing! This includes a slick new tab page, taking advantage of a new API that lets Servo embedders register custom protocol handlers. ↫ Servo’s blog Servo now runs better on Windows, with keyboard navigation now fixed, --output to PNG also fixed, and fixes for some font- and GPU-related bugs, which were causing misaligned glyphs with incorrect colors on servo.org and duckduckgo.com, and corrupted images on wikipedia.org. Of course, that’s not all, as there’s also the usual massive list of improved standards support, new APIs, improvements to some of the developer tools (including massive improvements in Windows build times), and a huge number of fixed bugs.

Can you convert a video to pure CSS?

He regularly shares cool examples of fancy css animations. At the time of writing his focus has been on css scroll animations. I guess there are some new properties that allow playing a css animation based on the scroll position. Apple has been using this on their marketing pages, or so they say. The property seems pretty powerful. But how powerful? This got me thinking… Could it play a video as pure css? ↫ David Gerrells The answer is yes. This is so cursed, I love it – and he even turned it into an app so anyone can convert a video into CSS.
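The post doesn’t give away its exact trick here, but one well-known way to render an image in “pure CSS” is to turn every pixel into a 1×1 box-shadow on a single element, and then animate between per-frame shadow lists. A toy sketch of that assumed approach (not necessarily the one Gerrells used):

```python
# Toy sketch of one way to "encode" an image frame as pure CSS: turn each
# pixel into a 1px box-shadow offset to its (x, y) position. A video is
# then a CSS animation stepping through one shadow list per frame.

def frame_to_box_shadow(frame: list[list[str]], px: int = 1) -> str:
    """frame is a 2D grid of CSS colors; returns a box-shadow value string."""
    shadows = []
    for y, row in enumerate(frame):
        for x, color in enumerate(row):
            shadows.append(f"{x * px}px {y * px}px 0 {color}")
    return ", ".join(shadows)

frame = [["#000", "#fff"],
         ["#fff", "#000"]]  # a tiny 2x2 checkerboard "frame"
print(frame_to_box_shadow(frame))
# One keyframe of the "video" would then be something like:
#   @keyframes movie { 0% { box-shadow: <frame 0 shadows>; } 50% { ... } }
```

It scales hilariously badly – a 100×100 frame is already 10,000 shadows per keyframe – which is presumably a big part of why the result is so gloriously cursed.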

The journey of an internet packet: exploring networks with traceroute

The internet is a complex network of routers, switches, and computers, and when we try to connect to a server, our packets go through many routers before reaching the destination. If one of these routers is misconfigured or down, the packet can be dropped, and we can’t reach the destination. In this post, we will see how traceroute works, and how it can help us diagnose network problems. ↫ Sebastian Marines I’m sure most of us have used traceroute at some point in our lives, but I never once wondered how, exactly, it works. The internet – and networking in general – always feels like arcane magic to me, not truly understandable by mere mortals without years of dedicated study and practice. Even something as simple as managing a home router can be a confusing nightmare of abbreviations, terminology, and backwards compatibility hacks, so you can imagine how complex it gets when you leave your home network and start sending packets out into the wider world. This post does a great job of explaining exactly how traceroute works without overloading you with stuff you don’t need to know.
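The short version of how traceroute works: it sends probes with the IP TTL set to 1, then 2, then 3, and so on; each router along the path decrements the TTL, and whichever router decrements it to zero drops the probe and sends back an ICMP Time Exceeded message, revealing its address. Real probes need raw sockets and root privileges, so here is a sketch that just simulates the mechanism over a made-up path of routers:

```python
# The core trick behind traceroute: send probes with TTL = 1, 2, 3, ...
# Each router decrements the TTL; the router where it reaches 0 drops the
# packet and answers with an ICMP "Time Exceeded", revealing its address.
# Real probes need raw sockets and root, so this simulates a fixed path.

PATH = ["192.0.2.1", "198.51.100.7", "203.0.113.42"]  # made-up router hops

def send_probe(ttl: int, path: list[str], dest: str) -> str:
    """Simulate one probe: return the address that answers for this TTL."""
    for router in path:
        ttl -= 1                  # each router decrements the TTL
        if ttl == 0:
            return router         # ICMP Time Exceeded from this router
    return dest                   # TTL survived the whole path: destination replies

def traceroute(dest: str, path: list[str], max_hops: int = 30) -> list[str]:
    hops = []
    for ttl in range(1, max_hops + 1):
        replier = send_probe(ttl, path, dest)
        hops.append(replier)
        if replier == dest:       # destination reached, stop probing
            break
    return hops

print(traceroute("203.0.113.99", PATH))  # one line per discovered hop
```

The real tool also sends several probes per TTL, measures round-trip times, and prints `* * *` for routers that refuse to answer, but the TTL trick above is the whole magic.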

Ethernet history deepdive: why do we have different frame types?

The history of Ethernet is fascinating. The reason why we have three different frame types is that DIX used the Ethernet II frame that is prevalent today, while IEEE intended to use a different frame format that could be used for different MAC layers, such as token bus, token ring, FDDI, and so on. The IEEE were also inspired by HDLC, and modeled their frame header more in alignment with the OSI reference model that had the concept of SAPs. When they discovered that the number of available SAPs weren’t enough, they made an addition to the 802 standard to support SNAP frames. In networks today, Ethernet II is dominant, but some control protocols may use LLC and/or SNAP frames. ↫ Daniel Dib I just smiled and nodded.
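For reference, the way a receiver tells these frame types apart comes down to the 16-bit field right after the source MAC address: values of 0x0600 (1536) and up are an EtherType (Ethernet II), values up to 0x05DC (1500) are an 802.3 length field, and in the latter case DSAP/SSAP bytes of 0xAA in the LLC header that follows mark a SNAP frame. A sketch of that dispatch logic (the frame bytes are made up for the demo):

```python
# What distinguishes the three frame types is the 16-bit field after the
# source MAC: values >= 0x0600 are an EtherType (Ethernet II), values
# <= 0x05DC (1500) are an 802.3 length, and in that case the LLC header's
# DSAP/SSAP bytes both being 0xAA mark a SNAP frame.

def classify_frame(frame: bytes) -> str:
    type_or_len = int.from_bytes(frame[12:14], "big")  # after dst+src MACs
    if type_or_len >= 0x0600:
        return "Ethernet II"
    dsap, ssap = frame[14], frame[15]                  # LLC header follows
    if dsap == 0xAA and ssap == 0xAA:
        return "802.3 with LLC/SNAP"
    return "802.3 with LLC"

macs = bytes(12)  # zeroed destination + source MACs, just for the demo
print(classify_frame(macs + bytes.fromhex("0800")))                     # IPv4 EtherType
print(classify_frame(macs + bytes.fromhex("05dc") + b"\x42\x42\x03"))   # plain LLC
print(classify_frame(macs + bytes.fromhex("05dc") + b"\xaa\xaa\x03"))   # SNAP
```

The SNAP addition makes sense in this light: with only one byte each for DSAP and SSAP, the LLC header simply couldn’t address enough protocols, so SNAP bolts a full EtherType back on after the LLC header.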

Chrome iOS browser on Blink

Earlier this year, under pressure from the European Union, Apple was finally forced to open up iOS and allow alternative browser engines, at least in the EU. Up until then, Apple only allowed its own WebKit engine to run on iOS, meaning that even what seemed like third-party browsers – Chrome, Firefox, and so on – were all just Safari skins, running Apple’s WebKit underneath (with additional restrictions to make them perform worse than Safari). Even with other browser engines now being allowed on iOS in the EU, there are still hurdles, as Apple requires browser makers to maintain two different browsers, one for the EU, and another one for the rest of the world. It seems the Chromium community is already working on bringing the Chromium Blink browser engine to iOS, but there’s still a lot of work to be done. A blog post by the open source consultancy company Igalia digs into the details, since they are contributing to the effort. While they’ve got the basics covered, it’s far from completed or ready for release. We’ve briefly looked at the current status of the project so far, but many functionalities still need to be supported. For example, regarding UI features, functionalities such as printing preview, download, text selection, request desktop site, zoom text, translate, find in page, and touch events are not yet implemented or are not functioning correctly. Moreover, there are numerous failing or skipped tests in unit tests, browser tests, and web tests. Ensuring that these tests are enabled and passing the test should also be a key focus moving forward. ↫ Gyuyoung Weblog I don’t use iOS, nor do I intend to any time soon, but the coming availability of browser engines that compete with WebKit is going to be great for the web. I’ve heard from so many web developers that Safari on iOS is a bit of a nightmare to support, since without any competition on iOS it often stagnates and lags behind in supporting features other browsers already implemented.
With WebKit on iOS facing competition, that might change. Now, there’s a line of thought that all this will do is make Chrome even more dominant, but I don’t think that’s going to be an issue. Safari is still the default for most people, and changing defaults is not something most people will do, especially not the average iOS user. On top of that, this is only available in the EU, so I honestly don’t think we have to worry about this any time soon, but obviously, we do have to remain vigilant.

Verso: a browser using Servo

I regularly report on the progress made by the Servo project, the Rust-based browser engine that was spun out of Mozilla into its own project. Servo has its own reference browser implementation, too, but did you know there are already other browsers using Servo? Sure, it’s clearly a work-in-progress thing, and it’s missing just about every feature we’ve come to expect from a browser, but it’s cool nonetheless. Verso is a web browser built on top of the Servo web engine. It’s still under development. We don’t accept any feature requests at the moment. But if you are interested, feel free to help test it. ↫ Verso GitHub page It runs on Linux, Windows, and macOS.

Servo enables parallel table layout

Another month, another chunk of progress for the Servo rendering engine. The biggest addition is enabling table rendering to be spread across CPU cores. Parallel table layout is now enabled, spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo’s layout engine. ↫ Servo blog On top of this, there are tons of improvements to the flexbox layout engine, support for generic font families like ‘sans-serif’ and ‘monospace’ has been added, and Servo now supports OpenHarmony, the operating system developed by Huawei. This month also saw a lot of work on the development tools.

OpenAI beta tests SearchGPT search engine

Normally I’m not that interested in reporting on news coming from OpenAI, but today is a little different – the company launched SearchGPT, a search engine that’s supposed to rival Google, but at the same time, they’re also kind of not launching a search engine that’s supposed to rival Google. What? We’re testing SearchGPT, a prototype of new search features designed to combine the strength of our AI models with information from the web to give you fast and timely answers with clear and relevant sources. We’re launching to a small group of users and publishers to get feedback. While this prototype is temporary, we plan to integrate the best of these features directly into ChatGPT in the future. If you’re interested in trying the prototype, sign up for the waitlist. ↫ OpenAI website Basically, before adding a more traditional web-search-like feature set to ChatGPT, the company is first breaking it out into a separate, temporary product that users can test, before parts of it are integrated into OpenAI’s main ChatGPT product. It’s an interesting approach, and with just how stupidly popular and hyped ChatGPT is, I’m sure they won’t have any issues assembling a large enough pool of testers. OpenAI claims SearchGPT will be different from, say, Google or AltaVista, by employing a conversation-style interface with real-time results from the web. Sources for search results will be clearly marked – good – and additional sources will be presented in a sidebar. True to the ChatGPT-style user interface, you can keep “talking” after hitting a result to refine your search further. I may perhaps betray my still relatively modest age, but do people really want to “talk” to a machine to search the web? Any time I’ve ever used one of these chatbot-style user interfaces – including ChatGPT – I find them cumbersome and frustrating, like they’re just adding an obtuse layer between me and the computer, and that I’d rather just be instructing the computer directly.
Why try and verbally massage a stupid autocomplete into finding a link to an article I remember from a few days ago, instead of just typing in a few quick keywords? I am more than willing to concede I’m just out of touch with what people really want, so maybe this really is the future of search. I hope I can just always disable nonsense like this and just throw keywords at the problem.

“Majority of websites and mobile apps use dark patterns”

A global internet sweep that examined the websites and mobile apps of 642 traders has found that 75,7% of them employed at least one dark pattern, and 66,8% of them employed two or more dark patterns. Dark patterns are defined as practices commonly found in online user interfaces and that steer, deceive, coerce, or manipulate consumers into making choices that often are not in their best interests. ↫ International Consumer Protection and Enforcement Network Dark patterns are everywhere, and it’s virtually impossible to browse the web, use certain types of services, or install mobile applications without having to dodge and roll just to avoid all kinds of nonsense being thrown at you. It’s often not even the ads themselves that make the web unusable – it’s all the dark patterns tricking you into viewing ads, entering into a subscription, enabling notifications, or sharing your email address that are the real problem. This is why one of the absolute primary demands I have for the next version of OSNews is zero dark patterns. I don’t want any dialogs begging you to enable ads, no modal windows demanding you sign up for a newsletter, no popups asking you to enable notifications, and so on – none of that stuff. My golden standard is “your computer, your rules”, and that includes your right to use ad blockers or anything else to change the appearance or functioning of our website on your computer. It’d be great if dark patterns became illegal somehow, but it would be incredibly difficult to write any legislation that would properly cover these practices.

Cloudflare lets customers block AI bots, scrapers and crawlers with a single click

It seems the dislike for machine learning runs deep. In a blog post, Cloudflare has announced that blocking machine learning scrapers is so popular, they decided to just add a feature to the Cloudflare dashboard that will block all machine learning scrapers with a single click. We hear clearly that customers don’t want AI bots visiting their websites, and especially those that do so dishonestly. To help, we’ve added a brand new one-click to block all AI bots. It’s available for all customers, including those on the free tier. To enable it, simply navigate to the Security > Bots section of the Cloudflare dashboard, and click the toggle labeled AI Scrapers and Crawlers. ↫ Cloudflare blog According to Cloudflare, 85% of their customers block machine learning scrapers from taking content from their websites, and that number definitely does not surprise me. People clearly understand that multibillion dollar megacorporations freely scraping every piece of content on the web for their own further obscene enrichment while giving nothing back – in fact, while charging us for it – is inherently wrong, and as such, they choose to block them from doing so. Of course, it makes sense for Cloudflare to try and combat junk traffic, so this is one of those cases where the corporate interests of Cloudflare actually line up with the personal interests of its customers, so making blocking machine learning scrapers as easy as possible benefits both parties. I think OSNews, too, makes use of Cloudflare, so I’m definitely going to ask OSNews’ owner to hit that button. Cloudflare further details that a lot of people are blocking crawlers run by companies like Amazon, Google, and OpenAI, but completely miss far more active crawlers like those run by the Chinese company ByteDance, probably because those companies don’t dominate the “AI” news cycle. 
Then there’s the massive number of machine learning crawlers that just straight-up lie about their intentions, trying to hide the fact that they’re machine learning bots. We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection. We will continue to keep watch and add more bot blocks to our AI Scrapers and Crawlers rule and evolve our machine learning models to help keep the Internet a place where content creators can thrive and keep full control over which models their content is used to train or run inference on. ↫ Cloudflare blog I find this particularly funny because what’s happening here is machine learning models being used to block… machine learning models. Give it a few more years down the trajectory we’re currently on, and the internet will just be bots reading content posted by other bots.
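Cloudflare’s toggle is a managed rule backed by their bot detection, but the simplest self-hosted version of the same idea is matching the User-Agent header against the names honest crawlers publish – which only catches the honest ones, exactly the limitation Cloudflare describes. A sketch (the bot names are real published crawler identifiers; the matching logic itself is just illustrative):

```python
# User-agent matching only stops crawlers that identify themselves honestly;
# dishonest bots fake their User-Agent, which is why Cloudflare also leans
# on behavioral, ML-based bot detection on top of simple rules like this.

AI_CRAWLERS = ["GPTBot", "CCBot", "Bytespider", "Google-Extended", "ClaudeBot"]

def is_ai_crawler(user_agent: str) -> bool:
    """True if the request's User-Agent matches a known AI crawler name."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in AI_CRAWLERS)

print(is_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))
print(is_ai_crawler("Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0"))
```

A web server or middleware would run this check per request and return a 403 on a match; robots.txt entries for the same bot names are the even politer version, with the same honesty caveat.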

Vivaldi takes firm stance against AI, will not include it in its browser

The web browser Vivaldi is taking a firm stance against adding machine learning tools to its browser. So, as we have seen, LLMs are essentially confident-sounding lying machines with a penchant to occasionally disclose private data or plagiarise existing work. While they do this, they also use vast amounts of energy and are happy using all the GPUs you can throw at them, which is a problem we’ve seen before in the field of cryptocurrencies. As such, it does not feel right to bundle any such solution into Vivaldi. There is enough misinformation going around to risk adding more to the pile. We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you until more rigorous ways to do those things are available. ↫ Julien Picalausa on the Vivaldi blog I’m not a particular fan of Vivaldi personally – it doesn’t integrate well visually with KDE, and its old-fashioned-Opera approach of throwing everything but the kitchen sink at itself is just too cluttered for me – but props to the Vivaldi team for taking such a clear and firm stance. There’s a ton of pressure from big money interests to add machine learning to everything from your operating system to your nail scissors, and popular tech publishers are certainly going to publish articles decrying Vivaldi’s choice, so they’re not doing this without risk. With even Firefox adding machine learning tools to the browser, there are very few browsers left – if any, other than Vivaldi, it seems – that will be free of these tools. I can only hope we’re going to see a popular Firefox fork without this nonsense take off, and I’m definitely keeping my eye on the various options that already exist today.

Andreas Kling steps down from SerenityOS to focus entirely on the Ladybird browser

We’ve got some possibly sad, possibly great news. Today, Andreas Kling, the amazing developer who started SerenityOS as a way to regain a sense of normalcy after completing his drug rehab program, has announced he’s stepping down as the ‘benevolent dictator for life’ of the SerenityOS project, handing leadership over to the maintainer group. The other half of the coin, however, is that Kling will officially fork Ladybird, the cross-platform web browser that originated as part of SerenityOS, turning it into a proper, separate project. Personally, for the past two years, I’ve been almost entirely focused on Ladybird, a new web browser that started as a simple HTML viewer for SerenityOS. When Ladybird became a cross-platform project in 2022, I switched all my attention to the Linux version, as testing on Linux was much easier and didn’t require booting into SerenityOS. Time flew by, and now I can’t remember the last time I worked on something in SerenityOS that wasn’t related to Ladybird. ↫ Andreas Kling If you know a little bit about Kling’s career, it’s not entirely surprising that his heart lies with working on a browser engine. He originally worked at Nokia, and then at Apple in San Francisco on WebKit, and there’s most likely some code that he’s written in the browser you’re using right now (except, perhaps, for us Firefox users). As such, it makes sense that once Ladybird grew into something more than just a simple HTML viewer, he’d be focusing on it a lot. As part of the fork, Ladybird will focus entirely on Linux and macOS, and drop SerenityOS as a target. This may seem weird at first, but it is an entirely amicable and planned step, as it allows Ladybird to adopt, use, and integrate third-party code, something SerenityOS does not allow. In addition, many of these open source projects couldn’t have been used by Ladybird anyway, because they simply didn’t exist for SerenityOS in the first place. This decision creates a lot of breathing room and flexibility for both projects.
Ladybird was getting a lot of attention from outside of SerenityOS circles, from large donations to code contributions. I’m not entirely surprised by this step, and I really hope it’s going to be the beginning of something great. We really need new and competitive browser engines to push the web forward, and alongside Servo, it now seems Ladybird has also picked up the baton. What this will mean for SerenityOS remains to be seen. As Kling said, he hasn’t really been involved with SerenityOS outside of Ladybird work for two years now, so it seems the rest of the contributors were already doing a lot of the heavy lifting. I hope this doesn’t mean the project will peter out, since it has a certain flair few other operating systems have.

This message does not exist

The act of discarding a message that does not exist must therefore do one of two things. It may cause the message contents to also cease to exist. Alternately, it might not affect the existence but only the accessibility of message contents. Perhaps they continue to exist, but discarding the message (which already did not exist) causes the copy operation to cease being invokable on the message contents (even though they do continue to exist). The story of existence has many mysteries. ↫ Mark J. Nelson The one question that can really break my brain in a way that feels like it’s physically hurting – which it can’t, because, fun fact, there are no pain receptors in the brain – is the question of what exists outside the universe. Any answer you can come up with just leads to more questions, which in turn lead to even more questions, in an infinite loop of possible answers and questions the human mind is not equipped to grasp. Anyway, it turns out using Outlook can lead to the same existential crisis.

Servo sees another month full of improvements

Servo, the Rust-based browser engine originally started by Mozilla but since spun off into an entity under the umbrella of the Linux Foundation, has published another monthly update. As in almost every month, there’s been a lot of progress on rendering tech I don’t quite understand, along with further improved support for various standards. Another major focus is the ongoing font system rework, which is not only yielding vastly improved support for font rendering options, but also reducing memory load. The example browser included with Servo is making progress as well, from reducing the number of errors on Windows, to implementing support for using extra mouse buttons to go back and forward, to showing the link destination when hovering the mouse over a link.

Bing went down, and lots of people discovered alternative search engines are whitelabel versions of Bing

It turns out far fewer people than I thought knew that search engines like DuckDuckGo are essentially whitelabel versions of Microsoft Bing. Today, in most of Europe and Asia, search engines like DuckDuckGo, Ecosia, Qwant, other alternative search engines, ChatGPT internet search, and even Windows Copilot were all down. The culprit turned out to be Microsoft Bing, and when Bing goes down, everyone who relies on it goes down too. Alternative search engines often try to be vague about their whitelabel status, or even outright hide it altogether. Bing is a popular search engine for whitelabeling, so when Bing goes down, almost the entire house of cards of alternative search engines comes tumbling down as well. DuckDuckGo, for instance, places a lot of emphasis on using specialised search engines like TripAdvisor and direct sources like Sportradar or Wikipedia, as well as its own crawler and other indexes. However, as we saw today, as soon as Bing goes down, DuckDuckGo just stops working entirely. DDG happens to be my main search engine – a case of less shit than everyone else – so all throughout the day I was met with the error message “There was an error displaying the search results. Please try again.” I don’t begrudge DDG or other search engines for repackaging Bing search results – building and running a truly new search engine is incredibly hard and costly, and you’ll always be lagging behind – but I was surprised by how many people didn’t know just how common this practice really is. My Fediverse feeds were filled with people surprised to learn they’d been using Bing all along, just wrapped in a nicer user interface and with some additional features.

Slack users horrified to discover messages used for “AI” training

After launching Slack AI in February, Slack appears to be digging its heels in, defending its vague policy that by default sucks up customers’ data—including messages, content, and files—to train Slack’s global AI models. ↫ Ashley Belanger at Ars Technica I’ve never used Slack and don’t intend to ever start, but the outcry about this reached far beyond Slack and its own communities. It’s been all over various forums and social media, and I’m glad Ars dove into it to collect all the various conflicting statements, policies, and blog posts Slack has made about its “AI” policies. However, even after reading Ars’ article and the various articles about this at other outlets, I still have no idea what, exactly, Slack is or is not using to train its “AI” models. I know a lot of people here think I am by definition against all forms of what companies are currently calling “AI”, but this is really not the case. I think there are countless areas where these technologies can make meaningful contributions, and a great example I encountered recently is the 4X strategy game Stellaris, one of my favourite games. The game recently got a big update called The Machine Age, which focuses on changing and improving the gameplay when you opt to play as cybernetically enhanced or outright robotic races. As per Steam’s new rules regarding the use of AI in games, the Steam page included the following clarification about the use of “AI”: We employ generative AI technologies during the creation of some assets. Typically this involves the ideation of content and visual reference material. These elements represent a minor component of the overall development. AI has been used to generate voices for an AI antagonist and a player advisor.
↫ The Machine Age Steam page The game’s director explained that during the very early ideation phase, when someone like him, who isn’t a creative person, gets an idea, they might generate a piece of “AI” art and put it up on an ideation wall with tons of other assets just to get the point across, after which several rounds of artists and developers mould and shape some of those ideas into a final product. None of the early “AI” content makes it into the game. Similarly, while the game includes the voice for an AI antagonist and a player advisor, the voice actors whose work was willingly used to generate the lines in the game are receiving royalties for each of those lines. I have no issues whatsoever with this, because here it’s clear everyone involved is doing so in an informed manner and entirely willingly. Everything is above board, consent is freely given, and everybody knows what’s going on. This is a great example of ethical “AI” use: tools that help people make a product more easily, without stealing other people’s work or violating various licenses in the process. What Slack is doing here – and what Copilot, OpenAI, and the various other tools do – is the exact opposite of this. Consent is only sought when the parties involved are big and powerful enough to cause problems, and even though they claim “AI” is not ripping anyone off, they also claim “AI” can’t work without taking other people’s work. Instead of being open and transparent about what they do, they hide behind magical algorithms and shroud the origins of their “AI” training data in mystery. If you’re using Slack – and odds are you are – I would strongly suggest urging your boss to opt your organisation out of Slack’s “AI” data theft operation. You have no idea how much private information and corporate data is being exposed by these Salesforce clowns.