Kian Bradley was downloading something using BitTorrent, and noticed that quite a few trackers were dead. Most of the trackers were totally dead. Either the hosts were down or the domains weren’t being used. That got me thinking. What if I picked up one of these dead domains? How many clients would try to connect? ↫ Kian Bradley It turns out the answer is a lot.
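For those curious about the mechanics: a BitTorrent tracker is little more than an HTTP (or UDP) endpoint that clients hammer with announce requests, which is what makes this experiment so cheap to run. Here’s a rough sketch of the kind of thing you could put behind a reclaimed tracker domain just to count the stragglers – my own illustration, not Kian’s actual setup, and it only handles HTTP announces, not the UDP tracker protocol:

```python
# Minimal announce logger: count the clients still hitting a dead tracker's
# domain and send back an empty, well-formed bencoded reply so they back off.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class AnnounceLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/announce":
            params = parse_qs(url.query)
            print(self.client_address[0], params.get("info_hash", ["?"])[0][:8])
        # "no peers, come back in 1800 seconds", encoded in bencode
        body = b"d8:completei0e10:incompletei0e8:intervali1800e5:peerslee"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port 80 needs root or a port redirect; point the reclaimed domain here.
    HTTPServer(("0.0.0.0", 80), AnnounceLogger).serve_forever()
```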
What is a Cappa Magna? The Cappa Magna is a wide ecclesiastical cloak with a long train, traditionally worn by canons, bishops, and cardinals of the Catholic Church on solemn occasions and processions. Its use dates back to the Middle Ages and symbolizes the dignity and authority of its wearer. Origins and History of the Cappa Magna The origins of the Cappa Magna are lost in the early history of the Church. It is believed to derive from similar cloaks worn by Roman officials. Over the centuries, the Cappa Magna has evolved, becoming a distinctive symbol of the high-ranking clergy. Initially, it was a practical garment to protect against the cold, but it gradually took on a ceremonial meaning. Symbolic Meaning of the Cappa Magna The Cappa Magna is not only a garment but a symbol rich in meaning. Its wide train represents the extension of the authority of its wearer and his responsibility towards the Church and the faithful. The color of the Cappa Magna varies depending on the rank of its wearer and the liturgical period. Red is traditionally reserved for cardinals, while purple is worn by bishops. When is the Cappa Magna Worn? The Cappa Magna is reserved for solemn occasions and processions; in general, it is worn whenever the solemnity and importance of an event are to be emphasized. The Different Types of Cappa Magna There are different variations of the Cappa Magna, depending on the rank of its wearer and the occasion. The main differences concern the color, the fabric, and the presence or absence of fur. For example, the cardinal’s Cappa Magna is traditionally made of red wool with a silk lining, edged with ermine in winter. The Cappa Magna Today The use of the Cappa Magna has decreased in recent decades, especially after the liturgical reforms of the Second Vatican Council. However, it remains a significant garment for many members of the clergy and is still worn on special occasions. Some see it as a symbol of tradition and continuity, while others consider it a legacy of the past. Regardless of the differing opinions, the Cappa Magna continues to fascinate and arouse interest. Where to Buy a Cappa Magna If you are interested in purchasing a Cappa Magna, you can find one in stores specializing in religious items or online. On HaftinaUSA.com, we offer a wide selection of high-quality Cappa Magnas, made with the best materials and in accordance with tradition. Whether you are a member of the clergy or passionate about history and liturgy, you will surely find the perfect Cappa Magna for your needs. The Cappa Magna and Ecclesiastical Fashion The Cappa Magna, despite being a traditional garment, has influenced ecclesiastical fashion over the centuries. Its elegant design and imposing appearance have inspired the creation of other liturgical and ceremonial garments. The Cappa Magna is an example of how tradition and innovation can coexist in ecclesiastical fashion. How to Wear the Cappa Magna Correctly Wearing the Cappa Magna correctly requires some knowledge of ecclesiastical protocol. It is important to make sure that the cloak is well draped and that the train is arranged in an orderly manner. Furthermore, it is essential to combine the Cappa Magna with the other liturgical garments appropriate for the occasion. The Cappa Magna: A Treasure of the Catholic Tradition In conclusion, the Cappa Magna is a treasure of the Catholic tradition, a symbol of dignity, authority, and continuity.
Its use on solemn occasions and processions emphasizes the importance and sanctity of these events. If you are interested in the history of the Church and the liturgy, the Cappa Magna is a fascinating topic to explore. Visit HaftinaUSA.com to discover our collection of Cappa Magnas and other high-quality religious items. HaftinaUSA.com: Your Partner for Liturgical Clothing At HaftinaUSA.com, we are proud to offer a wide range of high-quality liturgical clothing, including Cappa Magnas, sacred vestments, and accessories. We are committed to providing our customers with products that respect tradition and are made with the best materials. Explore our site today and discover the difference that quality can make! Choosing the Perfect Cappa Magna for Every Occasion The choice of the ideal Cappa Magna depends on several factors, including the rank of its wearer, the occasion, and the liturgical period. At HaftinaUSA, our team of experts is available to help you choose the perfect Cappa Magna for every occasion. Contact us today for a personalized consultation!
Opera Mini was first released in 2005 as a web browser for mobile phones, with the ability to load full websites by sending most of the work to an external server. It was a massive hit, but it started to fade out of relevance once smartphones entered mainstream use. Opera Mini still exists today as a web browser for iPhone and Android—it’s now just a tweaked version of the regular Opera mobile browser, and you shouldn’t use Opera browsers. However, the original Java ME-based version is still functional, and you can even use it on modern computers. ↫ Corbin Davenport I remember using Opera Mini back in the day on my PocketPC and Palm devices. It wasn’t my main browser on those devices, but if some site I really needed was acting up, Opera Mini could be a lifesaver. As we all remember, though, the mobile web before the arrival of the iPhone was a trashfire. Interestingly enough, we’ve circled back to the mobile web being a trashfire, but at least we can block ads now to make it bearable. Since Opera Mini is just a Java application, the client part of the equation will probably remain executable for a long time, but once Opera decides to close the server side of things, it will stop being useful. Perhaps one day someone will reverse-engineer the protocol and APIs, paving the way for a custom server we can all run as part of the retrocomputing hobby. There’s always someone crazy and dedicated enough.
How do you get email to the folks without computers? What if the Post Office printed out email, stamped it, dropped it in folks’ mailboxes along with the rest of their mail, and saved the USPS once and for all? And so in 1982 E-COM was born—and, inadvertently, helped coin the term “e-mail.” ↫ Justin Duke The implementation of E-COM was awesome. You’d enter the messages on your computer and send them to the post office using a TTY or IBM 2780/3780 terminal, which transmitted them to Sperry Rand UNIVAC 1108 computer systems at one of 25 post offices. Postal staff would print the messages and send them through the regular postal system to their recipients. The USPS actually tried to get a legal monopoly on this concept, but the FCC fought them in court and won out. E-COM wasn’t the breakout success the USPS had hoped for, but it did catch on in one unpleasant way: spam. The official-looking E-COM envelopes from the USPS were very attractive to junk mail companies, and it was estimated that about six companies made up 70% of the total E-COM volume of 15 million messages in its second year of operation. The entire article is definitely recommended reading, as it contains a ton more information about E-COM and some of the other attempts by the USPS to ride the coattails of the computer and internet revolution, including the idea to give every US resident an @.us e-mail address. Wild.
Notifications in Chrome are a useful feature to keep up with updates from your favorite sites. However, we know that some notifications may be spammy or even deceptive. We’ve received reports of notifications diverting you to download suspicious software, tricking you into sharing personal information or asking you to make purchases on potentially fraudulent online store fronts. To defend against these threats, Chrome is launching warnings of unwanted notifications on Android. This new feature uses on-device machine learning to detect and warn you about potentially deceptive or spammy notifications, giving you an extra level of control over the information displayed on your device. ↫ Hannah Buonomo and Sarah Krakowiak Criel on the Chromium Blog So first web browser makers introduce notifications, a feature nobody asked for and everybody hates, and now they’re using “AI” to combat the spam they themselves enabled and forced onto everyone? Don’t we have a name for a business model where you purport to protect your clients from threats you yourself pose? Turning off notifications is one of the first things I do after installing a browser. I do not ever want any website sending me a notification, nor do I want any of them to ask me for permission to do so. They’re such an obvious annoyance and massive security threat, and it’s absolutely mind-boggling to me that we just accept them as a feature we have to live with. I genuinely wish browsers like Firefox, which claim to protect your privacy, would just have the guts to be opinionated and rip shit features like this straight out of their browser. Using “AI” to combat spam notifications instead of just turning notifications off is peak techbro.
The majority of the traffic on the web is from bots. For the most part, these bots are used to discover new content. These are RSS Feed readers, search engines crawling your content, or nowadays AI bots crawling content to power LLMs. But then there are the malicious bots. These are from spammers, content scrapers or hackers. At my old employer, a bot discovered a WordPress vulnerability and inserted a malicious script into our server. It then turned the machine into a botnet used for DDoS. One of my first websites was yanked off of Google search entirely due to bots generating spam. At some point, I had to find a way to protect myself from these bots. That’s when I started using zip bombs. ↫ Ibrahim Diallo I mean, when malicious bots harm your website, isn’t combating them with something like zip bombs simply self-defense?
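The core trick, for the curious, is pre-compressing an enormous run of zeros and serving the tiny result as an ordinary gzip-encoded page, so the bot exhausts its own memory inflating it. A minimal sketch of the general idea – my illustration, not Ibrahim’s exact code, and the bot-detection part is entirely up to you:

```python
import zlib

def build_gzip_bomb(size_mib: int = 10 * 1024) -> bytes:
    """Compress `size_mib` MiB of zeros into a comparatively tiny gzip blob."""
    comp = zlib.compressobj(9, zlib.DEFLATED, 16 + zlib.MAX_WBITS)  # gzip container
    chunk = b"\x00" * (1024 * 1024)  # 1 MiB of zeros
    out = bytearray()
    for _ in range(size_mib):       # takes a little while; build it once and cache it
        out += comp.compress(chunk)
    out += comp.flush()
    return bytes(out)

bomb = build_gzip_bomb()  # ~10 GiB of zeros, only a few MiB on the wire
# Serve it only to requests you've already classified as malicious, with
# headers like Content-Type: text/html and Content-Encoding: gzip.
```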
What if you want to use a web browser like Dillo, which lacks JavaScript support and can’t play audio or video inside the browser? Dillo doesn’t have the capability to play audio or video directly from the browser, however it can easily offload this task to other programs. This page collects some examples of how to watch videos and listen to audio tracks or podcasts by using an external player program. In particular we will cover mpv with yt-dlp which supports YouTube and Bandcamp among many other sites. ↫ Dillo website The way Dillo handles this feels very UNIX-y, in that it will call an external program – mpv and yt-dlp, for instance – to play a YouTube video from an “Open in mpv” option in the right-click menu for a link. It’s nothing earth-shattering or revolutionary, of course, but I very much appreciate that Dillo bakes this functionality right in, allowing you to define any such actions and add them to the context menu.
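Under the hood it’s the oldest trick in the UNIX book: don’t decode the media yourself, just hand the URL to something that can. A tiny sketch of that offload pattern – my own illustration, not Dillo’s actual code; mpv calls out to yt-dlp on its own for sites like YouTube and Bandcamp:

```python
# The "browser as dispatcher" idea in a few lines: the browser never touches
# the media, it just spawns an external player with the link URL.
import subprocess

def open_in_mpv(url: str) -> None:
    # mpv invokes yt-dlp automatically for sites it can't stream directly
    subprocess.Popen(["mpv", url])

open_in_mpv("https://example.com/some-video-page")  # placeholder URL
```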
Clearly, online search isn’t bad enough yet, so Google is intensifying its efforts to continue speedrunning the downfall of Google Search. They’ve announced they’re going to show even more “AI”-generated answers in Search results, to more people. Today, we’re sharing that we’ve launched Gemini 2.0 for AI Overviews in the U.S. to help with harder questions, starting with coding, advanced math and multimodal queries, with more on the way. With Gemini 2.0’s advanced capabilities, we provide faster and higher quality responses and show AI Overviews more often for these types of queries. Plus, we’re rolling out to more people: teens can now use AI Overviews, and you’ll no longer need to sign in to get access. ↫ Robby Stein On top of this, Google is also testing a new search mode where “AI” takes over the entire search experience. Instead of seeing the usual list of links, the entire page of “results” will be generated by “AI”. This feature, called “AI Mode”, is opt-in for now. You can opt in via Labs, but you do need to be a paying Google One AI Premium subscriber. I guess it’s only a matter of time before this “AI Mode” becomes the default on Google Search, because it allows Google to keep its users on Google.com, and this makes it easier to show them ads and block out competitors. We all know where this is going. But, I hear you say, I use DuckDuckGo! I don’t have to deal with any of this! Well, I’ve got some bad news for you, because DuckDuckGo, too, is greatly expanding its use of “AI” in search. DDG will provide free, anonymous access to various “AI” chatbots, deliver more “AI”-generated search results based on more sources (but still English-only), and more – all without needing to have an account. A few of these features were already available in beta, and are now becoming generally available. Props to DuckDuckGo for providing a ton of options to turn all of this stuff off, though. They give users quite a bit of control over how often these “AI”-generated search results appear, and you can even turn them off completely. All the “AI” chatbot stuff is delegated to a separate website, and any link to it from the normal search results can be disabled, too. It’s entirely possible to have DuckDuckGo just show a list of regular search results, exactly as it should be. Let’s hope DDG can keep these values going, because if they, too, start pushing this “AI” nonsense without options to turn it off, I honestly have no idea where else to go.
At some point, I wondered—what if I sent a packet using a transport protocol that didn’t exist? Not TCP, not UDP, not even ICMP—something completely made up. Would the OS let it through? Would it get stopped before it even left my machine? Would routers ignore it, or would some middlebox kill it on sight? Could it actually move faster by slipping past common firewall rules? No idea. So I had to try. ↫ Hawzen Okay so the end result is that it’s technically possible to send a packet across the internet that isn’t TCP/UDP/ICMP, but you have to take that literally: one packet.
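If you want to poke at this yourself, the OS will happily build such a packet for you through a raw socket, provided you’re root. A minimal sketch of the experiment – my own illustration, assuming Linux; 253 is one of the IP protocol numbers IANA reserves for experimentation per RFC 3692:

```python
import socket

EXPERIMENTAL_PROTO = 253  # reserved for experimentation and testing (RFC 3692)

# Raw socket: the kernel builds the IP header but stamps it with our made-up
# transport protocol number. Requires root/CAP_NET_RAW.
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, EXPERIMENTAL_PROTO)
sock.sendto(b"hello from a transport protocol that does not exist",
            ("198.51.100.7", 0))  # documentation address; point at your own test host
sock.close()
```

Whether anything on the far end ever sees it is, of course, the whole point of the article: most middleboxes and firewalls drop anything that isn’t TCP, UDP, or ICMP on sight.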
Let’s start with some context: the project consists of implementing, shipping and maintaining native Wayland support in the Chromium project. Our team at Igalia has been leading the effort since it was first merged upstream back in 2016. For more historical context, there are a few blog posts and this amazing talk, by my colleagues Antonio Gomes and Max Ihlenfeldt, presented at last year’s Web Engines Hackfest. Especially due to the Lacros project, progress on Linux desktop has been slower over the last few years. Fortunately, the scenario has changed since last year, when a new sponsor came up and made it possible to address most of the outstanding missing features and issues required to move Ozone Wayland to the finish line. ↫ Nick Yamane There’s still quite a bit of work left to do, but a lot of progress has been made. As usual, Nvidia setups are problematic, which is a recurring theme for pretty much anything Wayland-related. Aside from the usual Nvidia problems, a lot of work has been done on improving and fixing fractional scaling, adding support for the text-input-v3 protocol, reimplementing tab dragging using the proper Wayland protocol, and a lot more. They’re also working on session management, which is very welcome for Chrome/Chromium users as it will allow the browser to remember window positions properly between restarts. Work is also being done to get Chromium’s interactive UI tests infrastructure and code working with Wayland compositors, with a focus on GNOME/Mutter – no word on KDE’s KWin, though. I hope they get the last wrinkles worked out quickly. The most popular browser needs to support Wayland out of the box.
We’ve got a new Dillo release for you this weekend! We added SVG support for math formulas and other simple SVG images by patching the nanosvg library. This is especially relevant for Wikipedia math articles. We also added optional support for WebP images via libwebp. You can use the new option ignore_image_formats to ignore image formats that you may not trust (libwebp had some CVEs recently). ↫ Dillo website This release also comes with some UI tweaks, like the ability to move the scrollbar to the left, use the scrollbar to go back and forward exactly one page, the ability to define custom link actions in the context menu, and more – including the usual bug fixes, of course. Once the pkgsrc bug I discovered and reported on HP-UX is fixed, Dillo is one of the first slightly more complex packages I intend to try and build on HP-UX 11.11.
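If you want to try the new option, it lives in your dillorc. The option name comes straight from the release notes above, but the value syntax below is my assumption, so check the dillorc shipped with the release for the real format:

```
# dillorc sketch: option name from the release notes, value format assumed
ignore_image_formats=webp
```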
If you don’t want OpenAI’s, Apple’s, Google’s, or other companies’ crawlers sucking up the content on your website, there isn’t much you can do. They generally don’t care about the venerable robots.txt, and while people like Aaron Swartz were legally bullied into suicide for downloading scientific articles using a guest account, corporations are free to take whatever they want, permission or no. If corporations don’t respect us, why should we respect them? There are ways to fight back against these scrapers, and the latest is especially nasty in all the right ways. This is a tarpit intended to catch web crawlers. Specifically, it’s targeting crawlers that scrape data for LLM’s – but really, like the plants it is named after, it’ll eat just about anything that finds its way inside. It works by generating an endless sequence of pages, each with dozens of links, that simply go back into the tarpit. Pages are randomly generated, but in a deterministic way, causing them to appear to be flat files that never change. Intentional delay is added to prevent crawlers from bogging down your server, in addition to wasting their time. Lastly, optional Markov-babble can be added to the pages, to give the crawlers something to scrape up and train their LLMs on, hopefully accelerating model collapse. ↫ ZADZMO.org You really have to know what you’re doing when you set up this tool. It is intentionally designed to cause harm to LLM web crawlers, but it makes no distinction between LLM crawlers and, say, search engine crawlers, so it will definitely get you removed from search results. On top of that, because Nepenthes is designed to feed LLM crawlers what they’re looking for, they’re going to love your servers and thus spike your CPU load constantly. I can’t reiterate enough that you should not be using this if you don’t know what you’re doing. Setting it all up is fairly straightforward, but of note is that if you want to use the Markov generation feature, you’ll need to provide your own corpus for it to feed from. None is included to make sure every installation of Nepenthes will be different and unique, because users will choose their own corpus to set up. You can use whatever texts you want, like Wikipedia articles, royalty-free books, open research corpuses, and so on. Nepenthes will also provide you with statistics to see what cats you’ve dragged in. You can use Nepenthes defensively to prevent LLM crawlers from reaching your real content, while also collecting the IP ranges of the crawlers so you can start blocking them. If you’ve got enough bandwidth and horsepower, you can also opt to use Nepenthes offensively, and you can have some real fun with this. Let’s say you’ve got horsepower and bandwidth to burn, and just want to see these AI models burn. Nepenthes has what you need: Don’t make any attempt to block crawlers with the IP stats. Put the delay times as low as you are comfortable with. Train a big Markov corpus and leave the Markov module enabled, set the maximum babble size to something big. In short, let them suck down as much bullshit as they have diskspace for and choke on it. ↫ ZADZMO.org In a world where we can’t fight back against LLM crawlers in a sensible and respectful way, tools like these are exactly what we need. After all, the imbalance of power between us normal people and corporations is growing so insanely out of any and all proportions, that we don’t have much choice but to attempt to burn it all down with more… destructive methods.
I doubt this will do much to stop LLM crawlers from taking whatever they want without consent – as I’ve repeatedly said, Silicon Valley does not understand consent – but at least it’s joyfully cathartic.
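To make the mechanism a bit more concrete, here’s a tiny sketch in the same spirit as Nepenthes – my own illustration, far simpler than the real thing: every page is generated deterministically from its own URL, contains nothing but links deeper into the maze, and is served slowly.

```python
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        base = self.path.rstrip("/")
        # Seed the RNG with the path so the "same" page always looks identical,
        # as if it were a static file that never changes.
        rng = random.Random(hashlib.sha256(self.path.encode()).digest())
        links = "".join(
            '<li><a href="%s/%08x">onward</a></li>' % (base, rng.getrandbits(32))
            for _ in range(20)
        )
        time.sleep(2)  # waste the crawler's time, not your CPU
        body = ("<html><body><ul>%s</ul></body></html>" % links).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Tarpit).serve_forever()
```

A real deployment would pair this with robots.txt rules and the IP statistics mentioned above, so you don’t accidentally tarpit the crawlers you actually want visiting.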
Mastodon, the only remaining social network that isn’t a fascist hellhole like Twitter or Facebook, is changing its legal and operational foundation to a proper European non-profit. Simply, we are going to transfer ownership of key Mastodon ecosystem and platform components (including name and copyrights, among other assets) to a new non-profit organization, affirming the intent that Mastodon should not be owned or controlled by a single individual. It also means a different role for Eugen, Mastodon’s current CEO. Handing off the overall Mastodon management will free him up to focus on product strategy where his original passion lies and he gains the most satisfaction. ↫ Official Mastodon blog Eugen Rochko has always been clear and steadfast about Mastodon not being for sale and not accepting any outside investments despite countless offers, and after eight years of both creating and running Mastodon, it makes perfect sense to move the network and its assets to a proper European non-profit. Mastodon’s control over the entire federated ActivityPub network – the Fediverse – is actually limited, so it’s not like the network is dependent on Mastodon, but there’s no denying it’s the most well-known part of the Fediverse. The Fediverse is the only social network on which OSNews is actively present (and I am, too, for that matter). By “actively present” I only mean I’m keeping an eye on any possible replies; the feed itself consists exclusively of links to our stories as soon as they’re published, and that’s it. Everything else you might encounter on social media is either legacy cruft we haven’t deleted yet, or something a third party set up that we don’t control. RSS means it’s easy for people to set up third-party, unaffiliated accounts on any social medium posting links to our stories, and that’s entirely fine, of course. However, corporate social media controlled by the irrational whims of delusional billionaires with totalitarian tendencies is not something we want to be a part of, so aside from visiting OSNews.com and using our RSS feeds, the only other official way to follow OSNews is on Mastodon.
I don’t think most people realize how Firefox and Safari depend on Google for more than “just” revenue from default search engine deals and prototyping new web platform features. Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it’s a hard image format to implement!), much of Safari’s cryptography (from BoringSSL), Firefox’s 2D renderer (Skia)…the list goes on. Much of Firefox’s security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with ControlFlowIntegrity) is directly inspired by Chromium’s architecture. ↫ Rohan “Seirdy” Kumar Definitely an interesting angle on the browser debate I hadn’t really stopped to think about before. The argument is that while Chromium’s dominance is not exactly great, the other side of the coin is that non-Chromium browsers also make use of a lot of Chromium code all of us benefit from, and without Google doing that work, Mozilla would have to do it by themselves, and let’s face it, it’s not like they’re in a great position to do so. I’m not saying I buy the argument, but it’s an argument nonetheless. I honestly wouldn’t mind a slower development pace for the web, since I feel a lot of energy and development goes into things making the web worse, not better. Redirecting some of that development into things users of the web would benefit from seems like a win to me, and with the dominant web engine Chromium being run by an advertising company, we all know where their focus lies, and it ain’t on us as users. I’m still firmly on the side of less Chromium, please.
Google has gotten a bad reputation as of late for being a bit overzealous when it comes to fighting ad blockers. Most recently, it’s been spotted automatically turning off popular ad blocking extension uBlock Origin for some Google Chrome users. To a degree, that makes sense—Google makes its money off ads. But with malicious ads and data trackers all over the internet these days, users have legitimate reasons to want to block them. The uBlock Origin controversy is just one facet of a debate that goes back years, and it’s not isolated: your favorite ad blocker will likely be affected next. Here are the best ways to keep blocking ads now that Google is cracking down on ad blockers. ↫ Michelle Ehrhardt at LifeHacker Here’s the cold and harsh reality: ad blocking will become ever more difficult as time goes on. Not only is Google obviously fighting it, other browser makers will most likely follow suit. Microsoft is an advertising company, so Edge will follow suit in dropping Manifest v2 support. Apple is an advertising company, and will do whatever they can to make at least their own ads appear. Mozilla is an advertising company, too, now, and will continue to erode their users’ trust in favour of nebulous nonsense like privacy-respecting advertising in cooperation with Facebook. The best way to block ads is to move to blocking at the network level. Get a cheap computer or Raspberry Pi, set up Pi-Hole, and enjoy some of the best adblocking you’re ever going to get. It’s definitely more involved than just installing a browser extension, but it also happens to be much harder for advertising companies to combat. If you’re feeling generous, set up Pi-Holes for your parents, friends, and relatives. It’s worth it to make their browsing experience faster, safer, and more pleasant. And once again I’d like to reiterate that I have zero issues with anyone blocking the ads on OSNews. Your computer, your rules. It’s not like display ads are particularly profitable anyway, so I’d much rather you support us through Patreon or a one-time donation through Ko-Fi, which is a more direct way of ensuring OSNews continues to exist. Also note that the OSNews Matrix room – think IRC, but more modern, and fully end-to-end encrypted – is now up and running and accessible to all OSNews Patreons as well.
Internet Archive’s “The Wayback Machine” has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached. “Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!,” reads a JavaScript alert shown on the compromised archive.org site. ↫ Lawrence Abrams at Bleeping Computer To make matters worse, the Internet Archive was also suffering from waves of distributed denial-of-service attacks, forcing the IA to take down the site while strengthening everything up. It seems the attackers have no real motivation other than the fact that they can, but it’s interesting, shall we say, that the Internet Archive has been under legal assault by big publishers for years now, too. I highly doubt the two are related in any way, but it’s an interesting note nonetheless. I’m still catching up on all the various tech news stories, but this one was hard to miss. A lot of people are rightfully angry and dismayed about this, since attacking the Internet Archive like this kind of feels like throwing Molotov cocktails at a local library – there’s literally not a single reason to do so, and the only people you’re going to hurt are underpaid librarians and chill people who just want to read some books. Whoever is behind this, they’re just assholes, no ifs and buts about it.
The world of software development is rapidly changing. More and more companies are adopting DevOps practices to improve collaboration, increase deployment frequency, and deliver higher-quality software. However, implementing DevOps can be challenging without the right people, processes, and tools. This is where DevOps managed services providers can help. Choosing the right DevOps partner is crucial to maximizing the benefits of DevOps at your organization. This comprehensive guide covers everything you need to know about selecting the best DevOps managed services provider for your needs.

What are DevOps Managed Services?
DevOps managed services provide ongoing management, support, and expertise to help organizations implement DevOps practices. A managed services provider (MSP) becomes an extension of your team, handling day-to-day DevOps tasks. This removes the burden of building in-house DevOps competency and lets your engineers focus on delivering business value instead of struggling with new tools and processes.

Benefits of Using DevOps Managed Services
Here are some of the main reasons to leverage an MSP to assist your DevOps transformation. Accelerate time-to-market: a mature MSP has developed accelerators and blueprints based on years of project experience, allowing them to rapidly stand up CI/CD pipelines, infrastructure, and other solutions so you can deploy code faster. Increase efficiency: MSPs scale across clients, allowing them to create reusable frameworks, scripts, and integrations – for data warehouse services, for example. By leveraging this pooled knowledge, you avoid “reinventing the wheel,” which gets your team more done. Augment internal capabilities: most IT teams struggle to hire DevOps talent, and engaging an MSP gives you instant access to specialized skills like site reliability engineering (SRE), security hardening, and compliance automation. Gain expertise: most companies are still learning DevOps, and an MSP provides advisory services based on what works well across its broad client base, helping you adopt best practices instead of making mistakes. Reduce cost: while the exact savings will vary, research shows DevOps and managed services can reduce costs through fewer defects, improved efficiency, and optimized infrastructure usage.

Key Factors to Consider
Choosing the right MSP gives you the greatest chance of success. However, evaluating providers can seem overwhelming given the diversity of services available. Here are the five criteria to focus on.

1. DevOps experience and maturity. Confirm that the provider has real-world expertise specifically in DevOps engagements; ask about their past engagements, since you want confidence that they can guide your organization on the DevOps journey. Also examine their internal DevOps maturity: an MSP that “walks the talk” by using DevOps practices in its own operations is better positioned to help instill those disciplines in your teams.

2. People, process, and tools. A quality MSP considers all three pillars of DevOps success. People – they have strong technical talent in place, provide training to address any skill gaps, and treat cultural change as part of any engagement. Process – they enforce proven frameworks for infrastructure management, CI/CD, metrics gathering, and so on, but customize them to your environment rather than taking a one-size-fits-all approach. Tools – they have preferred platforms and toolchains based on experience, but integrate well with your existing investments rather than demanding wholesale changes. Aligning an MSP across people, processes, and tools ensures a smooth partnership.

3. Delivery model and location. Understand how the MSP prefers to deliver services. If you have on-site personnel, also consider geographic proximity: an MSP with a delivery center nearby can rotate staff more easily. Most MSPs are flexible enough to align with what works best for a client; be clear on communication and availability expectations upfront.

4. Security and compliance expertise. Today, DevOps and security should go hand in hand, so evaluate how much security knowledge the provider brings to the table – capabilities such as security hardening and compliance automation are relevant here. Not all clients require advanced security skills, but given increasing regulatory demands, an MSP with broader experience can provide long-term value.

5. Cloud vs. on-premises support. Many DevOps initiatives – particularly when starting out – focus on the public cloud, given cloud platforms’ automation capabilities. However, most enterprises take a hybrid approach, leveraging both on-premises infrastructure and public cloud. Be clear about whether you need an MSP able to support public cloud, on-premises environments, or both; the required mix should factor into provider selection.

Engagement Models for DevOps Managed Services
MSPs offer varying ways clients can procure their DevOps expertise. Staff augmentation: add skilled DevOps consultants to your team for a fixed time period (typically 3-6 months); this works well to fill immediate talent gaps. Project-based: engage an MSP for a specific initiative, such as building a CI/CD pipeline for a business-critical application, with a clearly defined scope and deliverables. Ongoing managed services: retain an MSP to provide ongoing DevOps support under a longer-term (1+ year) contract; these are more strategic partnerships where MSP metrics and incentives align with client goals. Hybrid approaches: blend staff augmentation, project work, and managed services, providing the flexibility to get quick wins while building long-term capabilities. Evaluate which model (or combination) suits your requirements and budget.

Overview of Top Managed Service Providers
The market for DevOps managed services features a wide range of global systems integrators, niche specialists, regional firms, and digital transformation agencies. A sampling of leading options across various categories includes Langate, Accenture, Cognizant, Wipro, EPAM, Advanced Technology Consulting, and ClearScale. This sampling shows the diversity of options and demonstrates key commonalities, such as automation skills, CI/CD expertise, and experience driving cultural change. As you evaluate providers, develop a shortlist of 2-3 options that seem best aligned, then validate further through detailed discovery conversations and proposal walkthroughs.

A Framework for Comparing Providers
With so many aspects to examine, it helps to use a scorecard to track your assessment as you engage potential DevOps MSPs:

Criteria                                Weight   Provider 1   Provider 2   Provider 3
Years of Experience                     10%
Client References/Case Studies          15%
Delivery Locations                      10%
Cultural Change Methodology             15%
Security and Compliance Capabilities    10%
Public Cloud Skills                     15%
On-Premises Infrastructure Expertise    15%
Budget Fit                              10%
Total Score                             100%

Customize the categories and weighting based on your priorities. Scoring forces clearer decisions compared to general impressions. Share the framework with stakeholders to build consensus on the final selection.
If you’re reading this, you did a good job surviving another month, and that means we’ve got another monthly update from the Servo project, the Rust-based browser engine originally started by Mozilla. The major new feature this month is tabbed browsing in the Servo example browser, as well as extensive improvements for Servo on Windows. Servo-the-browser now has a redesigned toolbar and tabbed browsing! This includes a slick new tab page, taking advantage of a new API that lets Servo embedders register custom protocol handlers. ↫ Servo’s blog Servo now runs better on Windows, with keyboard navigation now fixed, --output to PNG also fixed, and fixes for some font- and GPU-related bugs, which were causing misaligned glyphs with incorrect colors on servo.org and duckduckgo.com, and corrupted images on wikipedia.org. Of course, that’s not all, as there’s also the usual massive list of improved standards support, new APIs, improvements to some of the developer tools (including massive improvements in Windows build times), and a huge number of fixed bugs.
He regularly shares cool examples of fancy css animations. At the time of writing his focus has been on css scroll animations. I guess there are some new properties that allow playing a css animation based on the scroll position. Apple has been using this on their marketing pages, or so Jhey says. The property seems pretty powerful. But how powerful? This got me thinking… Could it play a video as pure css? ↫ David Gerrells The answer is yes. This is so cursed, I love it – and he even turned it into an app so anyone can convert a video into CSS.
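For reference, the “new properties” in question are CSS scroll-driven animations. A minimal sketch of the primitive (not David’s actual frame-packing trick) looks roughly like this – the scroll position, rather than the clock, drives the animation’s progress:

```css
/* Scroll position, not time, drives the keyframes; steps() makes the
   progress advance in discrete frames, which is the hook that makes
   video-like playback in pure CSS conceivable at all. */
.film-strip {
  animation: play auto steps(120) both;
  animation-timeline: scroll(root block);
}

@keyframes play {
  to { background-position-y: calc(-120 * 100vh); }
}
```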
The internet is a complex network of routers, switches, and computers, and when we try to connect to a server, our packets go through many routers before reaching the destination. If one of these routers is misconfigured or down, the packet can be dropped, and we can’t reach the destination. In this post, we will see how traceroute works, and how it can help us diagnose network problems. ↫ Sebastian Marines I’m sure most of us have used traceroute at some point in our lives, but I never once wondered how, exactly, it works. The internet – and networking in general – always feels like arcane magic to me, not truly understandable by mere mortals without years of dedicated study and practice. Even something as simple as managing a home router can be a confusing nightmare of abbreviations, terminology, and backwards compatibility hacks, so you can imagine how complex it gets when you leave your home network and start sending packets out into the wider world. This post does a great job of explaining exactly how traceroute works without overloading you with stuff you don’t need to know.
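The short version: traceroute sends probes with an increasing IP TTL, and every router that decrements the TTL to zero betrays its existence by sending back an ICMP “time exceeded” message. A stripped-down sketch of that loop (assuming Linux and root for the raw ICMP socket; real traceroute is considerably more careful about timeouts, ports, and multiple probes per hop):

```python
import socket

def traceroute(dest: str, max_hops: int = 30, port: int = 33434) -> None:
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recv.settimeout(2.0)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send.sendto(b"", (dest_addr, port))  # UDP probe to an unlikely-to-be-open port
        try:
            _, addr = recv.recvfrom(512)     # ICMP time-exceeded (or port-unreachable)
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:                 # the destination itself answered: done
            break

if __name__ == "__main__":
    traceroute("example.com")
```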