Monthly Archive: September 2024

Arch Linux and Valve deepen ties with direct collaboration

When Valve took its second major crack at making Steam Machines happen, in the form of the Steam Deck, one of the big surprises was the company’s choice to base the Steam Deck’s Linux operating system on Arch Linux, instead of the Debian base it was using before. It seems this choice is benefiting not only Valve, but also Arch.

We are excited to announce that Arch Linux is entering into a direct collaboration with Valve. Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers.
↫ Levente Polyak

This is great news for Arch, but of course, also for Linux in general. The work distributions do to improve their user experience tends to be picked up by other distributions, and it’s clear that Valve’s contributions have been vast. With these collaborations, Valve is also showing it’s in it for the long term, not just interested in taking from the community, but also in giving, which is good news for the large number of people now using Linux for gaming. The Arch team highlights that these projects will follow the distribution’s regular administrative and decision-making processes, so we’re not looking at parallel efforts forced upon everyone else without a say.

California’s new law forces digital stores to admit you’re just licensing content, not buying it

California Governor Gavin Newsom has signed a law (AB 2426) to combat “disappearing” purchases of digital games, movies, music, and ebooks. The legislation will force digital storefronts to tell customers they’re just getting a license to use the digital media, rather than suggesting they actually own it. When the law comes into effect next year, it will ban digital storefronts from using terms like “buy” or “purchase,” unless they inform customers that they’re not getting unrestricted access to whatever they’re buying. Storefronts will have to tell customers they’re getting a license that can be revoked, as well as provide a list of all the restrictions that come along with it. Companies that break the rule could be fined for false advertising.
↫ Emma Roth at The Verge

A step in the right direction, but a lot more is definitely needed. This law in particular seems to leave a lot of wiggle room for companies to keep using the “purchase” term while hiding the disclosure somewhere in the very, very small fine print. I would much rather a law like this just straight up banned the use of the term “purchase” and similar terms when all you’re getting is a license. Why allow them to keep lying about the nature of the transaction in exchange for some fine print somewhere? The software industry in particular has been enjoying a free ride when it comes to consumer protection laws, and the kind of malpractice, lack of accountability, and laughable quality control on display would have any other industry shut down in weeks for severe negligence. We’re taking baby steps, but it seems we’re finally arriving at a point where basic consumer protection laws and rights are being applied to software, too. Several decades too late, but at least it’s something.

The Critical Role of Cart and Payment in App and Brand Design

A cart and payment process is a critical yet often overlooked part of the user journey that can make or break an ecommerce app. From the outset, cart design, user experience, and flexible payment options should be at the top of the agenda for digital brands wanting to drive conversions.

Understanding User Intent

When users add items to their carts, they have shown a clear intent to purchase. To complete that transaction, the checkout process that follows needs to be as easy and seamless for them as possible. Higher abandonment rates occur due to unnecessary friction caused by a confusing interface, complicated payment flows, and the absence of preferred payment methods, among other things. Studies show that 76% of online shopping carts are eventually abandoned, and clunky checkout design is a big part of this. The cart and payment section is the last step in persuading users to buy. An optimized experience directly correlates with higher conversion rates and more revenue.

Key Aspects to Optimize

There are three key aspects of the cart and checkout process that need to be optimized for conversion-focused brands:

1. Cart Design and User Experience

The cart should provide a simple, visual summary of items added for purchase along with the quantity selected and total order value. Allowing users to easily edit item properties, apply discounts, and estimate shipping simplifies what can be an anxiety-inducing process, especially on mobile. Advanced features like saved carts for returning users further facilitate purchases. Offering guest checkout alongside account creation streamlines the process for first-time customers.

2. Flexible Payment Options

Research shows that cart abandonment is reduced on sites that offer preferred payment methods. The more payment modes you enable, the higher the chances that users will find an option they trust and feel comfortable with. Major credit cards, mobile wallets, Buy Now Pay Later schemes, and bank transfers are must-have options; if you are selling across geographies, add locally popular payment methods like Sofort and iDeal. PCI-compliant integration with payment gateways such as Stripe and PayPal unlocks multiple payment methods while additionally ensuring the security of transactions and sensitive user data (see the sketch at the end of this article). Discover how to add a payment gateway in-app to enhance the payment capabilities of your mobile app.

3. Testing and Optimization

No cart experience is perfect out of the box. Running A/B tests by tweaking design elements, flows, payment options, etc., provides data-backed insights into what users respond best to. Tools like Hotjar record user sessions directly in your live cart, which surfaces pain points that can then be fixed. Analytics dashboards reveal drop-off rates at each step, average order values, and other trends that indicate scope for improvement when benchmarked periodically.

Examples of Brands with Great Cart Experiences

Some standout examples of brands that ace the cart and payment process:

1. Made.com

Made.com offers a clean, distraction-free cart that focuses only on relevant details like items added, shipping estimates, order total, and discounts and gift cards applied. Guest checkout allows purchasing without account creation, with the option to save details for faster repeat orders. At checkout, multiple payment methods are clearly presented, along with clear messaging around security and the returns policy – both essential to gain user trust for a furniture brand.

2. Bolt.com

Bolt presents users with a single-page visual cart that provides details of services (food, rides, etc.) along with associated quantities, pricing, and taxes. Pre-added tips can be edited before seamlessly checking out via integrated payment partner Stripe. Discounts and promo codes can also be applied directly on this page. The cart is optimized for speed, which is in line with Bolt’s brand promise of efficient deliveries and payments.

3. Amazon

Amazon offers the gold standard for guided cart experiences, with persistent visibility into items added for purchase, alerts on discounts, and delivery estimates. Their patented one-click buying option removes friction, allowing power users to skip checkout. Multiple payment methods, including COD and EMI schemes, make it accessible for first-time buyers, too. The entire purchase process is geared towards user convenience, distilled through decades of testing and user data.

Designing a Cart Experience from Scratch

Creating an effective cart experience requires understanding user psychology, buyer journeys, and an iterative design approach. Here is a step-by-step process to follow:

1. Define Goals and Outcomes

First, define what a successful cart and checkout flow needs to achieve from a business point of view. Tie these goals to overall revenue and growth targets so that design choices align with business impact.

2. Map the Existing User Journey

Analyze data around existing user behavior across the checkout process. This can be gleaned from tools such as Google Analytics and Hotjar.

3. Competitor Benchmarking

Study how competitor brands within your industry handle cart experiences, and find out which flows or features appeal most to users. For example, mobility apps might offer guest checkout or Apple/Google Pay, while D2C brands might offer BNPL options. The right cart design combines learnings from data and real-world behavior.

4. Create and Test Hypotheses

Using what has been researched so far, work out which cart element and flow changes could positively impact your goals. Once you’ve tested these hypotheses with real users through interviews or prototypes, roll them out globally. Tools like UserTesting.com can be used to run quick user studies and gather feedback.

Choosing the Right Payment Partner

To offer flexible payment options, you must integrate with a payment service provider. Here are key aspects to evaluate when choosing a payment partner: The partner should support consistent checkout integration across several platforms, including web, mobile apps, POS systems, etc. For businesses selling cross-border, the partner must offer payment acceptance using local methods in 100+ markets, multi-currency processing, and DCC. Payment partners guard transactions using 3D Secure, risk-based analysis, artificial intelligence, and more; you should choose a partner with advanced competence in this field. The partner should support integration with credit cards, the most popular mobile wallets, UPI, BNPL schemes and
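The sketch referenced above: a minimal, purely illustrative server-side payment flow using Stripe’s official Python library. The API key, amount, and currency are placeholders, and a production checkout needs a client-side confirmation step and error handling that this sketch omits:

```python
# Illustrative sketch of a server-side Stripe PaymentIntent. The key, amount,
# and currency are placeholders; this is not a complete production checkout.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test-mode secret key

# A PaymentIntent tracks a single customer payment through its lifecycle.
intent = stripe.PaymentIntent.create(
    amount=4999,      # smallest currency unit, i.e. $49.99
    currency="usd",
    # Let Stripe surface whatever methods are enabled for the account
    # (cards, wallets, BNPL, bank transfers) without hard-coding them.
    automatic_payment_methods={"enabled": True},
)

# The client_secret is handed to the mobile/web client, which confirms the
# payment directly with Stripe, so raw card data never touches your server.
print(intent.client_secret)
```

Keeping card entry on the gateway’s side is what makes the PCI-compliant integration mentioned above practical for small teams.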

COSMIC alpha 2 released

System76, the premier Linux computer manufacturer and creator of the COSMIC desktop environment, has updated COSMIC’s alpha release to alpha 2.

The latest release includes more Settings pages, the bulk of functionality for COSMIC Files, highly requested window management features, and considerable infrastructure work for screen reader support, as well as some notable bug fixes.
↫ system76’s blog

The pace of development for COSMIC remains solid, even after the first alpha release. This second alpha keeps adding a lot of things considered basic for any desktop environment, such as settings panels for power and battery, sounds, displays, and many more. It also brings window management support for focus follows cursor and cursor follows focus, which will surely please the very specific, small slice of people who swear by those. Also, you can now disable the super key. A major new feature that I’m personally very happy about is the “adjust density” feature. COSMIC will allow you to adjust the spacing between the various user interface elements so you can choose to squeeze more information onto your screen, which addresses one of the major complaints I have about modern UI design in macOS, Windows, and GNOME. Being able to adjust this to your liking is incredibly welcome, especially combined with COSMIC’s ability to change from ‘rounded’ UI elements to ‘square’ UI elements. The file manager has also been vastly, vastly improved, tons of bugs were fixed, and much, much more. It seems COSMIC is on the right path, and I can’t wait to try out the first stable release once it lands.

Tcl/Tk 9.0 released

Tcl 9.0 and Tk 9.0 – usually lumped together as Tcl/Tk – have been released. Tcl 9.0 brings 64-bit compatibility (so it can address data values larger than 2 GB), better Unicode support, support for mounting ZIP files as file systems, and much, much more. Tk 9.0 gains support for scalable vector graphics, much better platform integration with things like system trays and gestures, and much more.

A Comprehensive Guide to Choosing the Right DevOps Managed Services

The world of software development is rapidly changing. More and more companies are adopting DevOps practices to improve collaboration, increase deployment frequency, and deliver higher-quality software. However, implementing DevOps can be challenging without the right people, processes, and tools. This is where DevOps managed services providers can help. Choosing the right DevOps partner is crucial to maximizing DevOps’s benefits at your organization. This comprehensive guide covers everything you need to know about selecting the best DevOps managed services provider for your needs.

What are DevOps Managed Services?

DevOps managed services provide ongoing management, support, and expertise to help organizations implement DevOps practices. A managed services provider (MSP) becomes an extension of your team, handling day-to-day DevOps tasks. This removes the burden of building in-house DevOps competency. It lets your engineers focus on delivering business value instead of struggling with new tools and processes.

Benefits of Using DevOps Managed Services

Here are some of the main reasons to leverage an MSP to assist your DevOps transformation:

Accelerate Time-to-Market

A mature MSP has developed accelerators and blueprints based on years of project experience. This allows them to rapidly stand up CI/CD pipelines, infrastructure, and other solutions. You’ll be able to deploy code faster.

Increase Efficiency

MSPs scale across clients, allowing them to create reusable frameworks, scripts, and integrations for data warehouse services, for example. By leveraging this pooled knowledge, you avoid “reinventing the wheel,” which gets your team more done.

Augment Internal Capabilities

Most IT teams struggle to hire DevOps talent. Engaging an MSP gives you instant access to specialized skills like site reliability engineering (SRE), security hardening, and compliance automation.

Gain Expertise

Most companies are still learning DevOps. An MSP provides advisory services based on what works well across its broad client base, helping you adopt best practices instead of making mistakes.

Reduce Cost

While the exact savings will vary, research shows DevOps and managed services can reduce costs through fewer defects, improved efficiency, and optimized infrastructure usage.

Key Factors to Consider

Choosing the right MSP gives you the greatest chance of success. However, evaluating providers can seem overwhelming, given the diversity of services available. Here are the five criteria to focus on:

1. DevOps Experience and Maturity

Confirm that the provider has real-world expertise, specifically in DevOps engagements. Ask questions about their past engagements if you want confidence that they can guide your organization on the DevOps journey. Also, examine their internal DevOps maturity. An MSP that “walks the talk” by using DevOps practices in its own operations is better positioned to help instill those disciplines in your teams.

2. People, Process, and Tools

A quality MSP considers all three pillars of DevOps success:

People – They have strong technical talent in place and provide training to address any skill gaps. Cultural change is considered part of any engagement.

Process – They enforce proven frameworks for infrastructure management, CI/CD, metrics gathering, etc., but also customize them to your environment instead of taking a one-size-fits-all approach.

Tools – They have preferred platforms and toolchains based on experience, but integrate well with your existing investments instead of demanding wholesale changes.

Aligning with an MSP across people, processes, and tools ensures a smooth partnership.

3. Delivery Model and Location

Understand how the MSP prefers to deliver services. If you require on-site personnel, also consider geographic proximity: an MSP with a delivery center nearby can rotate staff more easily. Most MSPs are flexible enough to align with what works best for a client. Be clear on communication and availability expectations upfront.

4. Security and Compliance Expertise

Today, DevOps and security should go hand-in-hand. Evaluate how much security knowledge the provider brings to the table. Not all clients require advanced security skills. However, given increasing regulatory demands, an MSP that offers broader experience can provide long-term value.

5. Cloud vs On-Premises Support

Many DevOps initiatives – particularly when starting – focus on the public cloud, given cloud platforms’ automation capabilities. However, most enterprises take a hybrid approach, leveraging both on-premises and public cloud. Be clear whether you need an MSP able to support public cloud, on-premises infrastructure, or both; the required mix should factor into provider selection.

Engagement Models for DevOps Managed Services

MSPs offer varying ways clients can procure their DevOps expertise:

Staff Augmentation

Add skilled DevOps consultants to your team for a fixed time period (typically 3-6 months). This works well to fill immediate talent gaps.

Project Based

Engage an MSP for a specific initiative, such as building a CI/CD pipeline for a business-critical application, with clearly defined scope and deliverables.

Ongoing Managed Services

Retain an MSP to provide ongoing DevOps support under a longer-term (1+ year) contract. These are more strategic partnerships, where MSP metrics and incentives align with client goals.

Hybrid Approaches

Blend staff augmentation, project work, and managed services. This provides flexibility to get quick wins while building long-term capabilities. Evaluate which model (or combination) suits your requirements and budget.

Overview of Top Managed Service Providers

The market for DevOps managed services features a wide range of global systems integrators, niche specialists, regional firms, and digital transformation agencies. Here is a sampling of leading options across various categories: Langate, Accenture, Cognizant, Wipro, EPAM, Advanced Technology Consulting, and ClearScale. This sampling shows the diversity of options and demonstrates key commonalities, such as automation skills, CI/CD expertise, and experience driving cultural change. As you evaluate providers, develop a shortlist of 2-3 options that seem best aligned, then validate further through detailed discovery conversations and proposal walkthroughs.

A Framework for Comparing Providers

With so many aspects to examine, it helps to use a scorecard to track your assessment as you engage potential DevOps MSPs:

Criteria                               Weight   Provider 1   Provider 2   Provider 3
Years of Experience                    10%
Client References/Case Studies         15%
Delivery Locations                     10%
Cultural Change Methodology            15%
Security and Compliance Capabilities   10%
Public Cloud Skills                    15%
On-Premises Infrastructure Expertise   15%
Budget Fit                             10%
Total Score                            100%

Customize categories and weighting based on your priorities. Scoring forces clearer decisions compared to general impressions (a minimal sketch of the arithmetic appears below). Share the framework with stakeholders to build consensus on the
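To make the scorecard arithmetic concrete, here is a minimal sketch in Python. The weights mirror the table above; the 1-5 rating scale and the example ratings are assumptions for illustration:

```python
# Minimal sketch of the weighted scorecard above. Weights mirror the table;
# the 1-5 rating scale and example ratings are assumptions for illustration.
WEIGHTS = {
    "Years of Experience": 0.10,
    "Client References/Case Studies": 0.15,
    "Delivery Locations": 0.10,
    "Cultural Change Methodology": 0.15,
    "Security and Compliance Capabilities": 0.10,
    "Public Cloud Skills": 0.15,
    "On-Premises Infrastructure Expertise": 0.15,
    "Budget Fit": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%

def total_score(ratings):
    """Weighted total from per-criterion ratings on a 1-5 scale."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

# Hypothetical ratings for one provider; score the others the same way.
provider_1 = {
    "Years of Experience": 4,
    "Client References/Case Studies": 3,
    "Delivery Locations": 5,
    "Cultural Change Methodology": 3,
    "Security and Compliance Capabilities": 4,
    "Public Cloud Skills": 5,
    "On-Premises Infrastructure Expertise": 2,
    "Budget Fit": 4,
}
print(f"Provider 1: {total_score(provider_1):.2f} / 5")
```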

Notice

Just want to let y’all know that my family and I have been hit hard with bronchitis these past two weeks, and my recovery in particular is going quite slowly (our kids are healthy again, and my wife is recovering quite well!). As such, I haven’t been able to do much OSNews work. I hope things will finally clear up a bit over the weekend so that I can resume normal service come Monday. Enjoy your weekend, y’all!

Eliminating memory safety vulnerabilities at the source

The push towards memory-safe programming languages is strong, and for good reason. However, especially for bigger projects with a lot of code that potentially needs to be rewritten or replaced, you might question if all the effort is even worth it, particularly if all the main contributors would also need to be retrained. Well, it turns out that merely focusing on writing new code in a memory-safe language will drastically reduce the number of memory safety issues in a project as a whole.

Memory safety vulnerabilities remain a pervasive threat to software security. At Google, we believe the path to eliminating this class of vulnerabilities at scale and building high-assurance software lies in Safe Coding, a secure-by-design approach that prioritizes transitioning to memory-safe languages. This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective.
↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog

In this blog post, Google highlights that even if you only write new code in a memory-safe language, while only applying bug fixes to old code, the number of memory safety issues will decrease rapidly, even when the total amount of code written in unsafe languages increases. This is because vulnerabilities decay exponentially – in other words, the older the code, the fewer vulnerabilities it’ll have. In Android, for instance, using this approach, the percentage of memory safety vulnerabilities dropped from 76% to 24% over six years, which is a great result and something quite tangible.

Despite the majority of code still being unsafe (but, crucially, getting progressively older), we’re seeing a large and continued decline in memory safety vulnerabilities. The results align with what we simulated above, and are even better, potentially as a result of our parallel efforts to improve the safety of our memory unsafe code. We first reported this decline in 2022, and we continue to see the total number of memory safety vulnerabilities dropping.
↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog

What this shows is that a large project, like, say, the Linux kernel, for no particular reason whatsoever, doesn’t need to replace all of its code with, say, Rust, again, for no particular reason whatsoever, to reap the benefits of a modern, memory-safe language. Even by focusing on memory-safe languages only for new code, you will still exponentially reduce the number of memory safety vulnerabilities. This is not a new discovery, as it’s something observed and confirmed many times before, and it makes intuitive sense, too; older code has had more time to mature.
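To make the dynamic concrete, here is a minimal simulation sketch of what the post describes: a codebase that only ever adds memory-safe code while its existing unsafe code ages. The two-year half-life and the code volumes are assumptions for illustration, not Google’s actual numbers:

```python
# Minimal sketch of exponential vulnerability decay: all new code is memory-safe,
# old unsafe code is untouched except for bug fixes. The half-life and code
# volumes are illustrative assumptions, not Google's actual numbers.
HALF_LIFE_YEARS = 2.0         # assumed half-life of vulnerability density
NEW_SAFE_CODE_PER_YEAR = 1.0  # units of new memory-safe code added each year
UNSAFE_CODE = 10.0            # existing unsafe code, which never shrinks here
INITIAL_DENSITY = 1.0         # memory-safety vulns per unit of fresh unsafe code

decay_per_year = 0.5 ** (1.0 / HALF_LIFE_YEARS)

density = INITIAL_DENSITY
total_code = UNSAFE_CODE
for year in range(7):
    vulns = UNSAFE_CODE * density
    unsafe_share = UNSAFE_CODE / total_code
    print(f"year {year}: unsafe code share {unsafe_share:5.1%}, "
          f"memory-safety vulns {vulns:5.2f}")
    density *= decay_per_year             # old code matures: bugs found and fixed
    total_code += NEW_SAFE_CODE_PER_YEAR  # growth happens only in safe code

# Vulnerabilities halve every two years even though the amount of unsafe code
# never decreases - matching the counterintuitive claim above.
```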

What happened to the Japanese PC platforms?

The other day a friend asked me a pretty interesting question: what happened to all those companies who made those Japanese computer platforms that were never released outside Japan? I thought it’d be worth expanding that answer into a full-size post.
↫ Misty De Meo

Japan had a number of computer makers that sold platforms that looked and felt like western PCs, but were actually quite different hardware-wise, and incompatible with the IBM PC. None of these exist anymore today, and the reason is simple: Windows 95. The Japanese platforms compatible enough with the IBM PC to get a Windows 95 port turned into commodities with little to distinguish them from regular IBM PCs, and the odd platform that didn’t use an x86 chip at all – like the X68000 – didn’t get a Windows port and thus just died off. The one platform mentioned in this article that I had never heard of was the FM Towns, made by Fujitsu, which had its own graphical operating system called Towns OS. The FM Towns machines and Towns OS were notable and unique at the time in that Towns OS was the first operating system to boot from CD-ROM, and it just so happens that Joe Groff published an article earlier this year detailing this boot process, including a custom bootable image he made. Here in the west we mostly tend to remember the PC-98 and X68000 platforms for their gaming catalogs and stunning designs, but that’s like only remembering the IBM PC for its own gaming catalog. These machines weren’t just glorified game consoles – they were full-fledged desktop computers used for the same boring work stuff we used the IBM PC for, and it truly makes me sad I can’t read a single character of Japanese, so a unique operating system like Towns OS will always remain a curiosity for me.

Microsoft deprecates Windows Server Update Services, suggests cloud services instead

As part of our vision for simplified Windows management from the cloud, Microsoft has announced deprecation of Windows Server Update Services (WSUS). Specifically, this means that we are no longer investing in new capabilities, nor are we accepting new feature requests for WSUS. However, we are preserving current functionality and will continue to publish updates through the WSUS channel. We will also support any content already published through the WSUS channel.
↫ Nir Froimovici

What an odd feature to deprecate. Anyone with a large enough fleet of machines probably makes use of Windows Server Update Services, as it adds some much-needed centralised control to the downloading and deployment of Windows updates, so you can do localised partial rollouts for testing, which, as the CrowdStrike debacle showed us once more, is quite important. WSUS also happens to be a local tool, set up and run on your own infrastructure instead of in the cloud, and that’s where we get to the real reason WSUS is being deprecated. Microsoft is advising IT managers who use WSUS to switch to Microsoft’s alternatives, like Windows Autopatch, Microsoft Intune, and Azure Update Manager. These all happen to run in the cloud, giving up the control WSUS provided by running locally, and they’re not free either – they’re subscription services, of course. I mean, technically WSUS isn’t free either, as it’s part of Windows Server, but these cloud services come on top of the cost of Windows Server itself. Nobody escapes the relentless march of subscription costs.

Disable Sequoia’s monthly screen recording permission prompt

The widely-reported “foo is requesting to bypass the system private window picker and directly access your screen and audio” prompt in Sequoia (which Apple has moved from daily to weekly to now monthly) can be disabled by quitting the app, setting the system date far into the future, opening and using the affected app to trigger the nag, clicking “Allow For One Month”, then restoring the correct date.
↫ tinyapps.org blog

Or, and this is a bit of a radical idea, you could use an operating system that doesn’t infantilise its users.

Qualcomm wants to buy Intel

On Friday afternoon, The Wall Street Journal reported Intel had been approached by fellow chip giant Qualcomm about a possible takeover. While any deal is described as “far from certain,” according to the paper’s unnamed sources, it would represent a tremendous fall for a company that had been the most valuable chip company in the world, based largely on its x86 processor technology that for years had triumphed over Qualcomm’s Arm chips outside of the phone space.
↫ Richard Lawler and Sean Hollister at The Verge

Either Qualcomm is only interested in buying certain parts of Intel’s business, or we’re dealing with someone trying to mess with stock prices for personal gain. The idea of Qualcomm acquiring Intel seems entirely outlandish to me, and that’s not even taking into account that regulators will probably have a thing or two to say about this. The one thing such a crazy deal would have going for it is that it would create a pretty strong and powerful all-American chip giant, which is a PR avenue the companies might explore if this is really serious. One of the most valuable assets Intel has is the x86 architecture and the associated patents and licensing deals, and the immense market power that comes with those. Perhaps Qualcomm is interested in designing x86 chips, or, more likely, perhaps it’s interested in all that sweet, sweet licensing money it could extract by allowing more companies to design and sell x86 processors. The x86 market currently consists almost exclusively of Intel and AMD, a situation which may be leaving a lot of licensing money on the table. Pondering aside, I highly doubt this is anything other than an overblown, misinterpreted story.

Slowly booting full Linux on the Intel 4004 for fun, art, and absolutely no profit

Can you run Linux on the Intel 4004, the first commercially produced microprocessor, released to the world in 1971? Well, Dmitry Grinberg, the genius engineer who got Linux to run on all kinds of incredibly underpowered hardware, sought to answer this very important question. In short, yes, you can run Linux on the 4004, but much as with other extremely limited and barebones chips, you have to get… Creative. Very creative.

Of course, Linux cannot and will not boot on a 4004 directly. There is no C compiler targeting the 4004, nor could one be created due to the limitations of the architecture. The amount of ROM and RAM that is addressable is also simply too low. So, same as before, I would have to resort to emulation. My initial goal was to fit into 4KB of code, as that is what an unmodified unassisted 4004 can address. 4KB of code is not much at all to emulate a complete system. After studying the options, it became clear that MIPS R3000 would be the winner here. Every other architecture I considered would be harder to emulate in some way. Some architectures had arbitrarily-shifted operands all the time (ARM), some have shitty addressing modes necessitating that they would be slow (RISCV), some would need more than 4KB to even decode instructions (x86), and some were just too complex to emulate in so little space (PPC). … so … MIPS again… OK!
↫ Dmitry Grinberg

This is just one very small aspect of this massive undertaking, and the article and videos accompanying his success are incredibly detailed and definitely not for the faint of heart. The amount of skill, knowledge, creativity, and persistence on display here is stunning, and many of us can only dream of being able to do stuff like this. I absolutely love it. Of course, the Linux kernel had to be slimmed down considerably, as a lot of stuff currently in the kernel is of absolutely no use on such an old system. Boot time is still measured in days, but it helped a lot. Grinberg also turned the whole setup into what is effectively an art piece you can hang on the wall, where you can have it run and, well, do things – not much, of course, but he did include a small program that draws the Mandelbrot set on the VFD and serial port, which is a neat trick. He plans on offering the whole thing as a kit, but a lot of it depends on getting enough of the old chips to offer a complete, ready-to-assemble kit in the first place.
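Part of why the MIPS R3000 is comparatively cheap to emulate is its rigid encoding: every instruction is a single 32-bit word with fixed bit fields, so the decoder stays tiny. As a purely illustrative sketch (in Python, not the 4KB of 4004 code Grinberg actually wrote), decoding and executing two MIPS instructions can look like this:

```python
# Illustrative sketch of MIPS R3000-style instruction decoding, showing why its
# fixed 32-bit format keeps an emulator small. This is not Grinberg's emulator.
regs = [0] * 32  # 32 general-purpose registers; $0 is hard-wired to zero

def step(instr):
    opcode = (instr >> 26) & 0x3F
    rs = (instr >> 21) & 0x1F
    rt = (instr >> 16) & 0x1F
    rd = (instr >> 11) & 0x1F
    imm = instr & 0xFFFF
    if imm & 0x8000:          # sign-extend the 16-bit immediate
        imm -= 0x10000

    if opcode == 0x00:        # R-type: the funct field picks the operation
        funct = instr & 0x3F
        if funct == 0x21:     # addu rd, rs, rt
            regs[rd] = (regs[rs] + regs[rt]) & 0xFFFFFFFF
    elif opcode == 0x09:      # addiu rt, rs, imm
        regs[rt] = (regs[rs] + imm) & 0xFFFFFFFF
    regs[0] = 0               # $zero always reads as zero

step(0x24010005)  # addiu $1, $0, 5
step(0x00211821)  # addu  $3, $1, $1
print(regs[1], regs[3])  # -> 5 10
```

A real emulator adds loads, stores, branches, and exceptions the same way: a few masks and shifts per instruction, which is exactly what makes fitting one into a few kilobytes plausible.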

Why Apple uses JPEG XL in the iPhone 16 and what it means for your photos

The iPhone 16 family has arrived and includes many new features, some of which Apple has played very close to its vest. One such improvement is the inclusion of JPEG XL file types, which promise improved image quality compared to standard JPEG files while delivering relatively smaller file sizes. Overall, JPEG XL addresses many of JPEG’s shortcomings. The 30-year-old format is not very efficient, only offers eight-bit color depth, doesn’t support HDR, doesn’t do alpha transparency, doesn’t support animations, doesn’t support multiple layers, includes compression artifacts, and exhibits banding and visual noise. JPEG XL tackles these issues, and unlike the WebP and AVIF formats, which each have some noteworthy benefits too, JPEG XL has been built from the ground up with still images in mind.
↫ Jeremy Gray at PetaPixel

Excellent news, and it will hopefully mean others will follow – something that tends to happen when Apple finally supports the new thing.

Biggest Esports Tournaments of 2024

Esports competitions are the pinnacle of competitive gaming, bringing together the top players and teams worldwide to participate in thrilling events. These events, frequently staged in enormous venues and broadcast to millions of people online, include many popular titles such as League of Legends, Dota 2, and Counter-Strike. Aside from the spectacular events, esports tournaments are well-known for the ardent fanbases that bet on who will win. Frequently updated about Dota 2 and LoL odds, these enthusiasts propel the tournaments forward, attracting larger audiences, higher stakes, and increased media coverage. These events will be greater than ever in 2024, providing unforgettable moments that will impact the future of gaming.

League of Legends World Championship

The 2024 League of Legends World Championship will make history as the first to be held in London, in the O2 Arena, in front of 15,000 fans. This November, twenty elite teams will compete for the coveted Summoner’s Cup and a sizable prize pool, which last year totaled $2.25 million. The competition begins with the Play-In Stage, where teams such as the MAD Lions and PSG Talon compete for a berth in the main event. The double-elimination system leaves no room for error, paving the way for thrilling comebacks and high-stakes matchups. Following the Play-In Stage, the Swiss Stage takes over, with 16 elite teams from various areas competing in a competitive five-round format. This phase guarantees exciting matchups and second opportunities, as only the most adaptable teams progress to the final rounds. The Swiss format stresses perseverance, giving fans unexpected victories and strategic games. The Worlds Anthem, “Heavy Is The Crown”, performed this year by Linkin Park, is a trademark of the competition. The song reflects the tournament’s attitude of triumph over adversity, which resonates with contestants and supporters. The anthem’s dramatic music video captures the togetherness and passion that define the League of Legends community, raising Worlds to more than just a tournament.

Dota 2: The International

The International is synonymous with Dota 2 and continues to be one of the world’s most anticipated esports competitions. It is well-known for its large prize pools and record-breaking viewership, and it defines competitive gaming. This year, Valve is upgrading the experience with up to ten LAN events, giving fans additional options to participate in the competition at all stages. Beyond the significant rewards, The International represents strong competition and the pinnacle of Dota 2, with suspenseful and dramatic encounters. The tournament’s heritage is founded on passionate rivalries and amazing moments, making it a must-see for fans year after year. The initial prize pool for 2024, financed by Valve, begins at $1.6 million and is expected to grow to more than $2.6 million through community contributions. Historically, the winning team takes more than 40% of the overall prize money, with significant rewards for other players, adding to the event’s appeal. This year’s leading contenders include Team Spirit, which is looking to make history by winning its third championship, and the Gaimin Gladiators, who finished second last year and are looking for their first TI victory. Teams such as Team Liquid, Xtreme Gaming, and newcomers The Falcons heighten the competitive excitement, promising an exciting tournament full of legacy-defining moments.

Valorant Champions Tour (VCT)

The Valorant Champions Tour (VCT) has established itself as the leading esports championship, attracting top talent and enthusiasts worldwide. The VCT Masters will be held in Shanghai in 2024, reflecting the city’s booming esports culture. The event builds on the success of Valorant Champions 2023, which attracted a peak audience of 1.29 million. Riot Games has implemented new player trade and loan mechanisms, which add strategic depth to this year’s event. The competition starts with regional qualifiers, in which teams from various places compete for a berth in the main event, assuring diversified participation. The knockout stage employs a double-elimination structure, which provides second opportunities and heightens the suspense. Fans can expect intense matches and spectacular moments as teams compete for the ultimate championship.

Fortnite Champion Series (FNCS)

The Fortnite Champion Series (FNCS) is a highlight of the esports season, known for its big prize pools and high viewership. FNCS 2024 anticipates tough competition, building on last year’s success of 6 million hours watched. When it begins in early 2024, the tournament, which has a $4 million prize pool, will draw top-tier participants from all over the world. The event includes several crucial tournaments and qualifying rounds to determine the top-performing teams. Each tournament highlights regional diversity, bringing distinct strategies and evolving gameplay to light. These events also emphasize Fortnite’s dynamic meta, which requires teams to adapt and excel. Renowned organizations such as FaZe Clan and TSM are anticipated to lead the tournament, while Dignitas and Team Falcons will also want to make their mark. The FNCS 2024 not only provides entertaining confrontations but also influences worldwide rankings, making each match crucial for future qualifying. Fans can expect an electrifying event as elite players compete for supremacy in one of the most prominent esports events.

Apex Legends Global Series

The Apex Legends Global Series is becoming increasingly popular, capturing a diverse and enthusiastic audience. The series features a huge prize pool, with $2 million committed to the World Championship finals alone. Each section of the series allows competitors to win from a $500,000 prize pool, ensuring that competition stays high at all levels. In 2023, viewership for the Apex Legends Global Series increased by 13%, totaling 47.9 million hours watched. This spike can be attributed largely to the playoffs, which attract a lot of attention since they feature elite teams from diverse locations fighting in high-energy clashes that represent the game’s intense essence. This gaming event exemplifies Apex Legends’ long-term appeal to players and deeply ingrained competitive spirit.

Counter-Strike 2 Major Championships

The PGL Major Copenhagen 2024 promises to be a watershed moment in the Counter-Strike 2 series, with 16 elite teams vying for a $1.25 million prize pool. It will take place at Denmark’s Royal Arena from March 21 to March 31. It will showcase some of the

Nintendo and The Pokémon Company file patent lawsuit against maker of hit game Palworld

Nintendo, together with The Pokémon Company, filed a patent infringement lawsuit in the Tokyo District Court against Pocketpair, Inc. on September 18, 2024. This lawsuit seeks an injunction against infringement and compensation for damages on the grounds that Palworld, a game developed and released by the Defendant, infringes multiple patent rights.
↫ Nintendo press release

Since the release of Palworld, which bears a striking resemblance to the Pokémon franchise, everybody’s been kind of expecting a reaction from both Nintendo and The Pokémon Company, and here it is. What’s odd is that it’s not a trademark, trade dress, or copyright lawsuit, but a patent one, which is not what you’d expect when looking at how similar the Palworld creatures look to Pokémon, to the point where some people even suggest the 3D models were simply lifted wholesale from the latest Nintendo Switch Pokémon games. There’s no mention of which patents Pocketpair supposedly infringes upon, and in a statement, the company claims it, too, has no idea which patents are supposedly in play. I have to admit I never even stopped to think game patents were a thing at all, but now that I’ve spent more than two seconds pondering this concept, of course they exist. This lawsuit will be quite interesting to follow, because the games industry is one of the few technology sectors out there where copying each other’s ideas, concepts, mechanics, and styles is not only normal, it’s entirely expected and encouraged. New ideas spread through the games industry like wildfire, and if some new mechanic is a hit with players, it’ll be integrated into other games within a few months, and games coming out a year later are expected to have the hit new mechanics from last year. It’s a great example of how beneficial it is to have ideas spread freely, and how awesome it is to see great games take existing mechanics and apply interesting twists, or use them in entirely different genres than the ones they originated from. Demon’s Souls and the Dark Souls series are a great example of games that not only established a whole new genre other games quickly capitalised on, but also introduced the gaming world to a whole slew of new and unique mechanics that are now being applied in all kinds of new and interesting ways. Lawsuits like this one definitely pose a threat to all this, so I hope that either this fails spectacularly in court, or that the patents in question are so weirdly specific as to be utterly without merit in going after any other game.

DirectX adopting SPIR-V as the interchange format of the future

As we look to the future, maintaining a proprietary IR format (even one based on an open-source project) is counter to our commitments to open technologies, so Shader Model 7.0 will adopt SPIR-V as its interchange format. Over the next few years, we will be working to define a SPIR-V environment for Direct3D, and a set of SPIR-V extensions to support all of Direct3D’s current and future shader programming features through SPIR-V. This will allow developers to take better advantage of existing tools and unify the ecosystem around investing in one IR.
↫ Chris Bieneman and Cassie Hoef at the DirectX Developer Blog

SPIR-V, developed by the Khronos Group, is an “intermediate language for parallel computing and graphics”. I don’t know what any of this means, but any adoption of Khronos technologies is a good thing, especially by a heavyweight like Microsoft.

European Commission to order Apple to take interoperability measures after company refuses to comply with DMA

The European Commission has taken the next step in forcing Apple to comply with the Digital Markets Act. The EC has started two so-called specification proceedings, in which it can more or less spell out exactly what Apple needs to do to comply with the DMA – in this case covering the interoperability obligation set out in Article 6(7) of the DMA. The two proceedings entail the following:

The first proceeding focuses on several iOS connectivity features and functionalities, predominantly used for and by connected devices. Connected devices are a varied, large and commercially important group of products, including smartwatches, headphones and virtual reality headsets. Companies offering these products depend on effective interoperability with smartphones and their operating systems, such as iOS. The Commission intends to specify how Apple will provide effective interoperability with functionalities such as notifications, device pairing and connectivity.

The second proceeding focuses on the process Apple has set up to address interoperability requests submitted by developers and third parties for iOS and iPadOS. It is crucial that the request process is transparent, timely, and fair so that all developers have an effective and predictable path to interoperability and are enabled to innovate.
↫ European Commission press release

It seems the European Commission is running out of patience, and instead of waiting for Apple to comply with the DMA on its own, is going to tell Apple exactly what it must do to comply with the interoperability obligation. This means that, once again, Apple’s childish, whiny approach to DMA compliance is backfiring spectacularly, with the company no longer having the opportunity to influence and control its own interoperability measures – the EC is simply going to tell them what they must do. The EC will complete these proceedings within six months, and will provide Apple with its preliminary findings, which will explain what is expected of Apple. These findings will also be made public to invite comments from third parties. The proceedings are unrelated to any fines for non-compliance, which are separate.

GNOME 47 released with accent colours and completely new open/save file dialogs

The GNOME project has released its newest major version, GNOME 47, and while it’s not the most groundbreaking release, there’s still a ton of good stuff in here. Two features really stand out, with the first one being the addition of accent colours. Instead of being locked into the default GNOME blue accent colour, you can now choose between a variety of colours, which is a very welcome addition. I use the accent colour feature on all my computers, and since I run KDE, I also have this nifty KDE feature where it’ll select an accent colour automatically based on your wallpaper. No, this isn’t a groundbreaking feature, but considering GNOME’s tendency towards not allowing any customisation, this is simply very welcome. A much more substantial feature comes in the form of brand new open/save file dialogs, and I’m sure even the GNOME developers themselves are collectively sighing in relief about this one. GNOME’s open/save dialogs were so bad they became a meme, and now they’re finally well and truly fixed, thanks to effectively removing the old ones and adding new ones based on the GNOME Files file manager.

GNOME 47 comes with brand new file open and save file dialogs. The new dialogs are a major upgrade compared with the previous versions, and are based on the existing Files app rather than being a separate codebase. This results in the new dialogs having a much more complete set of features compared with the old open and save dialogs. With the new dialogs you can zoom the view, change the sort order in the icon view, rename files and folders, preview files, and more.
↫ GNOME 47 release notes

And yes, this includes thumbnails. There’s tons more in GNOME 47, like a new design for dialog windows that look and feel more like they belong on a mobile UI, tons of improvements to Files, the Settings application, GNOME Online Accounts, Web, and more. GNOME 47 will make its way to your distribution of choice soon enough, but of course, you can always build and install it yourself if you’re so inclined.