Legal Archive

California passes strongest right-to-repair bill yet, requiring 7 years of parts

California, home to many of tech’s biggest companies and the nation’s most populous state, is pushing ahead with a right-to-repair bill for consumer electronics and appliances. After unanimous votes in the state Assembly and Senate, the bill passed yesterday is expected to move through a concurrence vote and be signed by Governor Gavin Newsom. Excellent news from California, and I’d like to congratulate everyone involved in the effort to get this passed. Much like consumer protection laws from the EU, such laws from California have a tendency to benefit consumers far beyond the borders of the original jurisdiction.

How big tech got so damn big

Enter the trustbusters, led by Senator John Sherman, author of the 1890 Sherman Act, America’s first antitrust law. In arguing for his bill, Sherman said to the Senate: “If we will not endure a King as a political power we should not endure a King over the production, transportation, and sale of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.” In other words, when a company gained too much power, it became the same kind of kingly authority that the colonists overthrew in 1776. Government “of the people, by the people, and for the people” was incompatible with concentrated corporate power from companies so large that they were able to determine how people lived their lives, earned their incomes, and structured their cities and towns. Break up big tech. Apple, Google, Amazon, Microsoft, Facebook – they need to be chopped up into smaller parts that have to compete with one another. The amount of life this will breathe into the economy, as well as the burst of innovation it will cause, will do more for people’s lives than a trillion nonsense trickle-down policies that favour the rich and powerful.

UK has not backed down in tech encryption row, minister says

Over the past few days, there have been a lot of reports in the media that the UK government was backing down from its requirement that every end-to-end encrypted messenger application inside the country give the government backdoor access. However, after reading the actual words from the UK’s junior minister Stephen Parkinson, it seemed like all he did was give a “pinky promise” not to enforce this requirement. The law itself did not change, is not changing, and will not change, and the requirement is still in there. Today, the UK’s technology minister Michelle Donelan made that even clearer than it already was. Donelan denied on Thursday that the bill had been watered down in the final stages before it becomes law. “We haven’t changed the bill at all,” she told Times Radio. “If there was a situation where the mitigations that the social media providers are taking are not enough, and if after further work with the regulator they still can’t demonstrate that they can meet the requirements within the bill, then the conversation about technology around encryption takes place,” she said. This raises an interesting question – why was everyone so keen on pushing the narrative yesterday that the “technology sector” had won, and that the UK government had backed down? Well, Facebook and Apple have kind of talked themselves into a corner in response to the UK’s requirement for backdoor access to WhatsApp and iMessage. The two companies threatened to pull these services out of the UK if the government didn’t remove this requirement. When it became clear that the UK government wasn’t going to back down, Facebook and Apple were going to lose a lot of face if they didn’t actually pull WhatsApp and iMessage out of the UK in response. They needed something to get them out of this. This vague pinky promise is all they needed.
Now they can shit all over their supposed morals and values once again, completely abandon their grandstanding and promises about protecting end-to-end encryption in messaging, and continue to operate in the UK as if nothing has changed, despite being legally obligated to break end-to-end encryption if the UK government asks them to – which it can now do whenever it pleases. And entirely unsurprisingly, the general tech media, ever looking to please the corporations they are supposed to do the journalism stuff about, fell for it hook, line, and sinker. The narrative that the UK backed down and Facebook and Apple won is out there now, and that’s all the tech sector needed.

Digital Markets Act: Commission designates six gatekeepers

The European Commission has today designated, for the first time, six gatekeepers – Alphabet, Amazon, Apple, ByteDance, Meta, Microsoft – under the Digital Markets Act (DMA). In total, 22 core platform services provided by gatekeepers have been designated. Following their designation, gatekeepers now have six months to comply with the full list of do’s and don’ts under the DMA, offering more choice and more freedom to end users and business users of the gatekeepers’ services. However, some of the obligations will start applying as of designation, for example, the obligation to inform the Commission of any intended concentration. It is for the designated companies to ensure and demonstrate effective compliance. To this end, they have six months to submit a detailed compliance report in which they outline how they comply with each of the obligations of the DMA. The EC also notes that due to submissions from Apple and Microsoft arguing that iMessage, and Bing, Edge, and Microsoft Advertising, respectively, do not qualify to be subject to the DMA, the EC has opened four market investigations into these four services to further assess the situation. On top of that, for Gmail and the Samsung Internet Browser, the EC has concluded that their owners have successfully argued they should not fall under the DMA. This is one of the biggest pieces of legislation to hit powerful corporations in a long time – especially in tech, which has basically been a wild west free-for-all regulation-wise – and it’s going to have some massive consequences for all of us.

Apple and Microsoft fight Brussels over ‘gatekeeper’ label for iMessage and Bing

Apple and Microsoft have argued with Brussels that some of their services are insufficiently popular to be designated as “gatekeepers” under new landmark EU legislation designed to curb the power of Big Tech. Brussels’ battle with the two US companies over Apple’s iMessage chat app and Microsoft’s Bing search engine comes ahead of Wednesday’s publication of the first list of services to be regulated by the Digital Markets Act. Microsoft’s argument seems to make sense. Microsoft was unlikely to dispute the designation of its Windows operating system, which dominates the PC industry, as a gatekeeper, these people said. But it has argued that Bing has a market share of just 3 per cent and further legal scrutiny would put it at a greater disadvantage. I guess the validity of Microsoft’s argument hinges on whether that 3% clears the user-number thresholds set by the European Union, but I guess we’ll find out tomorrow. Apple’s argument, though, seems more precarious. Separately, Apple argued that iMessage did not meet the threshold of user numbers at which the rules applied and therefore should not comply with obligations that include opening the service to rival apps such as Meta’s WhatsApp, said the two people. Analysts have estimated that iMessage, which is built into every iPhone, iPad and Mac, has as many as 1bn users globally, but Apple has not disclosed any figures for several years. The decision is likely to hinge on how Apple and the EU define the market in which iMessage operates. One billion users worldwide most definitely means it exceeds the minimums set by the DMA. Apple, you’re going to have to open up iMessage, and allow competitors and newcomers to interoperate with it. Using messaging services as lock-in is outdated, anti-consumer, and harmful to competition. And if you don’t like it – as they say on the Isle of Man, a boat leaves in the morning.

Five changes EU consumers will notice due to the DSA

The EU Digital Services Act went into effect last Friday, and since there’s an insane amount of misinformation from big tech astroturfers about what the DSA means, it’s time to list what the DSA really does for people in the EU. People in the 27-nation European Union can alter some of what shows up when they search, scroll and share on the biggest social media platforms like TikTok, Instagram and Facebook and other tech giants like Google and Amazon. That’s because Big Tech companies, most headquartered in the U.S., are now subject to a pioneering new set of EU digital regulations. The Digital Services Act aims to protect European users when it comes to privacy, transparency and removal of harmful or illegal content. Here are five things that will change when you sign on. All of these are excellent improvements and give us as consumers more sticks to fight with. The EU is far from perfect – just like any other government – but as far as consumer protection goes, they’re leading the charge. Never forget who would not want consumers to have more protections.

YouTube may face billions in fines if FTC confirms child privacy violations

Four nonprofit groups seeking to protect kids’ privacy online asked the Federal Trade Commission (FTC) to investigate YouTube today, after back-to-back reports allegedly showed that YouTube is still targeting personalized ads on videos “made for kids”. Now it has become urgent that the FTC probe YouTube’s data and advertising practices, the groups’ letter said, and potentially intervene. Otherwise, it’s possible that YouTube could continue to allegedly harvest data on millions of kids, seemingly in violation of the Children’s Online Privacy Protection Act (COPPA) and the FTC Act. Targeted online advertising already oozes sleaziness, but targeting children is on a whole different level. There’s a reason you should keep a close eye on what your kids are watching on YouTube, and the various content rabbit holes YouTube’s algorithm can trap people in aren’t the only reason to do so. I’m not one of those extremists that believes YouTube is universally bad for kids – it all depends on what you watch, not that you watch – but that doesn’t mean I’m about to hand the remote control to my kids and leave the room.

The Kids Online Safety Act isn’t all right, critics say

Debate continues to rage over the federal Kids Online Safety Act (KOSA), which seeks to hold platforms liable for feeding harmful content to minors. KOSA is lawmakers’ answer to whistleblower Frances Haugen’s shocking revelations to Congress. In 2021, Haugen leaked documents and provided testimony alleging that Facebook knew that its platform was addictive and was harming teens—but blinded by its pursuit of profits, it chose to ignore the harms. But when Senator Richard Blumenthal introduced KOSA last year, the bill faced immediate and massive blowback from more than 90 organizations—including tech groups, digital rights advocates, legal experts, child safety organizations, and civil rights groups. These critics warned lawmakers of KOSA’s many flaws, but they were most concerned that the bill imposed a vague “duty of care” on platforms that was “effectively an instruction to employ broad content filtering to limit minors’ access to certain online content.” The fear was that the duty of care provision would likely lead platforms to over-moderate and imprecisely filter content deemed controversial—things like information on LGBTQ+ issues, drug addiction, eating disorders, mental health issues, or escape from abusive situations. Since then, Ars Technica reports in this detailed article, the bill does seem to have been amended in positive, constructive ways – but not nearly far enough to make it workable and not prone to massive abuse. Sadly, it seems the bill is poised to pass, so we’ll have to see what the eventual, final version will look like.

France’s browser-based website blocking proposal will set a disastrous precedent for the open internet

In a well-intentioned yet dangerous move to fight online fraud, France is on the verge of forcing browsers to create a dystopian technical capability. Article 6 (para II and III) of the SREN Bill would force browser providers to create the means to mandatorily block websites present on a government provided list. Such a move will overturn decades of established content moderation norms and provide a playbook for authoritarian governments that will easily negate the existence of censorship circumvention tools. France wants to outdo everyone else for the worst tech policy ideas in history.

What happened to Dolphin on Steam?

The Dolphin project has broken its silence regarding its legal tussle with Nintendo and Valve, giving a far more detailed elaboration of what, exactly, happened. First things first – Nintendo did not send Valve or Dolphin a Digital Millennium Copyright Act (DMCA) section 512(c) notice (commonly known as a DMCA Takedown Notice) against our Steam page. Nintendo has not taken any legal action against Dolphin Emulator or Valve. What actually happened was that Valve’s legal department contacted Nintendo to inquire about the announced release of Dolphin Emulator on Steam. In reply to this, a lawyer representing Nintendo of America requested Valve prevent Dolphin from releasing on the Steam store, citing the DMCA as justification. Valve then forwarded us the statement from Nintendo’s lawyers, and told us that we had to come to an agreement with Nintendo in order to release on Steam. Considering the strong legal wording at the start of the document and the citation of DMCA law, we took the letter very seriously. We wanted to take some time and formulate a response, however after being flooded with questions, we wrote a fairly frantic statement on the situation as we understood it at the time, which turned out to only fuel the fires of speculation. So, after a long stay of silence, we have a difficult announcement to make. We are abandoning our efforts to release Dolphin on Steam. Valve ultimately runs the store and can set any condition they wish for software to appear on it. But given Nintendo’s long-held stance on emulation, we find Valve’s requirement for us to get approval from Nintendo for a Steam release to be impossible. Unfortunately, that’s that. The post also goes into greater detail about the Wii Common Key that’s been part of Dolphin’s codebase for 15 years. This key was originally extracted from the Wii hardware itself, and a lot of people online claimed that Dolphin should just remove this key and all would be well.
After consulting with their lawyers, Dolphin has come to the conclusion that including the key poses no legal risk for the project, and even if it somehow did, the various other parts of the Dolphin codebase that make emulation of original games possible would pose a much bigger legal threat anyway. So, the team will keep on including the key, and the only outcome here is that Dolphin will not be available on Steam.

FTC rewrites rules on Big Tech mergers with aim to ease monopoly-busting

Ars Technica: Antitrust enforcers released a draft update outlining new rules today that officials say will make it easier to crack down on mergers and acquisitions that could substantially lessen competition in the US. Now the public has 60 days to review the draft guidelines and submit comments to the Federal Trade Commission (FTC) and the Department of Justice (DOJ) before the agencies’ September 18 deadline. A fierce debate has already started between those in support and those who oppose the draft guidelines. Any corporation should be serving the democratically elected government of a country – not the other way around. If a merger or acquisition is deemed harmful to the competitive landscape, and thus to consumers, a government should be able to simply stop it. The same applies to corporations that grow too large, too rich, too powerful – if a company’s actions start to dictate significant parts of the market or even the economy, it is a threat to the stability and functioning of the society it claims to be a part of, and as such, it should be possible to split it up or otherwise remedy its actions to protect society. In other words, any steps the US FTC and DOJ take to rein in runaway corporations are positive.

No cyber resilience without open source sustainability

Together with the open source software community, GitHub has been working to support EU policymakers in crafting the Cyber Resilience Act (CRA). The CRA seeks to improve the cybersecurity of digital products (including the 96 percent that contain open source) in the EU by imposing strict requirements for vendors supplying products in the single market, backed by fines of up to €15 million or 2.5% of global revenue. This goal is welcome: security is too often an afterthought when shipping a product. But as written it threatens open source without bolstering resilience. Even though the CRA, as part of a long-standing line of EU ‘open’ strategy, has an exemption for open source software developed or supplied outside the course of a commercial activity, challenges in defining the scope have been the focus of considerable community activity. Three serious problems remain with the Parliament text set for the industry (‘ITRE’) committee vote on July 19. These three problems are set out below. Absent dissent, this may become the final position without further deliberation or a full Parliament plenary vote. We encourage you to share your thoughts with your elected officials today. The problems are substantial for open source projects. First, if an open source project receives donations and/or has corporate developers working on it, it would be regulated by the CRA and thus face a huge amount of new administrative rules and regulations – no doubt far too big a burden, especially for smaller projects or individual developers. On top of that, the CRA, as it currently stands, also intends to mess with the disclosure process for vulnerabilities in a way that doesn’t seem to actually help. These problems are big, and could have far-reaching consequences for open source.

Online advertising giant: people who want to rein in online ads are “extremists”

The Interactive Advertising Bureau, one of the biggest names in online advertising, held some sort of corporate event or whatever in January of this year, and the IAB CEO, David Cohen, gave a speech there to rally the troops. Apparently, those of us who are fighting back against the online advertising industry? We’re “extremists”. Extremists are winning the battle for hearts and minds in Washington D.C. and beyond. We cannot let that happen. These extremists are political opportunists who’ve made it their mission to cripple the advertising industry and eliminate it from the American economy and culture. This guy, who uses double spaces after a period and hence is already on my shitlist, just gave us an amazing creed.

The shady world of Brave selling copyrighted data for AI training

As you may have noticed, I used the word copyrighted for the title of this story. And it’s not without reason. I think this story could have been fairly decent even without the copyright part, so before we get to the nitty gritty stuff – I can 100% confirm that Brave lets you ingest copyrighted material through their Brave Search API, to which they also assign you “rights”. Time and time again, Brave gets caught doing slimy things. Just don’t use Brave. There are far, far better and more ethical alternatives.

European Commission blesses new user data transfer agreement between EU and US

Today, the European Commission adopted its adequacy decision for the EU-U.S. Data Privacy Framework. The decision concludes that the United States ensures an adequate level of protection – comparable to that of the European Union – for personal data transferred from the EU to US companies under the new framework. On the basis of the new adequacy decision, personal data can flow safely from the EU to US companies participating in the Framework, without having to put in place additional data protection safeguards. In 2020, European Union courts struck down the previous agreement between the EU and the US, the Privacy Shield, as the court stated it did not sufficiently protect EU user data from US government surveillance. This was obviously a big problem for companies like Facebook and Google, and ever since, the two blocs have been trying to come up with a replacement that would allow these companies to continue to operate relatively unscathed. In the meantime, though, several European countries handed out large fines to Amazon and Facebook for not taking proper care of EU user data. So, what makes this new agreement stricter than the previous one? The EU-U.S. Data Privacy Framework introduces new binding safeguards to address all the concerns raised by the European Court of Justice, including limiting access to EU data by US intelligence services to what is necessary and proportionate, and establishing a Data Protection Review Court (DPRC), to which EU individuals will have access. The new framework introduces significant improvements compared to the mechanism that existed under the Privacy Shield. For example, if the DPRC finds that data was collected in violation of the new safeguards, it will be able to order the deletion of the data. The new safeguards in the area of government access to data will complement the obligations that US companies importing data from the EU will have to subscribe to.
I’m obviously no legal expert so take this with a grain of salt, but this kind of feels like yes, there are additional protections and safeguards, but if (let’s be real here: when) companies like Facebook violate these, don’t worry, EU citizen! You can undertake costly, complex, and long legal proceedings in misty business courts so Facebook or whatever can get fined for an amount that Zuckerberg spends on his interior decorator every week. The courts struck down the Safe Harbor agreement in 2015, and the aforementioned Privacy Shield in 2020, so we’ll see if this new agreement stands the test of the courts.

The solid legal theory behind Nintendo’s new emulator takedown effort

Ars Technica: This weekend saw an exception to that rule, though, as Nintendo’s lawyers formally asked Valve to cut off the planned Steam release of Wii and GameCube emulator Dolphin. In a letter addressed to the Valve Legal Department (a copy of which was provided to Ars by the Dolphin Team), an attorney representing Nintendo of America requests that Valve take down Dolphin’s “coming soon” Steam store page (which originally went up in March) and “ensure the emulator does not release on the Steam store moving forward.” The letter exerts the company’s “rights under the Digital Millennium Copyright Act (DMCA)’s Anti-Circumvention and Anti-Trafficking provisions,” even though it doesn’t take the form of a formal DMCA takedown request. In fighting a decision like this, an emulator maker would usually be able to point to some robust legal precedents that protect emulation software as a general concept. But legal experts who spoke to Ars said that Nintendo’s argument here might actually get around those precedents and present some legitimate legal problems for the Dolphin Team. This silly cat-and-mouse game between Nintendo and emulators is childish. The only people getting rich off this are lawyers.

US federal judge makes history in holding that border searches of cell phones require a warrant

With United States v. Smith (S.D.N.Y. May 11, 2023), a district court judge in New York made history by being the first court to rule that a warrant is required for a cell phone search at the border, “absent exigent circumstances” (although other district courts have wanted to do so). EFF is thrilled about this decision, given that we have been advocating for a warrant for border searches of electronic devices in the courts and Congress for nearly a decade. If the case is appealed to the Second Circuit, we urge the appellate court to affirm this landmark decision. Of course, a decision like this can go through quite a few more courts, but it’s a good precedent.

Apple fails to fully reboot iOS simulator copyright case

Apple Inc. failed to fully revive a long-running copyright lawsuit against cybersecurity firm Corellium Inc. over its software that simulates the iPhone’s iOS operating system, letting security researchers identify flaws in the software. The US Court of Appeals for the Eleventh Circuit on Monday ruled that Corellium’s CORSEC simulator is protected by copyright law’s fair use doctrine, which allows the duplication of copyrighted work under certain circumstances. CORSEC “furthers scientific progress by allowing security research into important operating systems,” a three-judge panel for the appeals court said, adding that iOS “is functional operating software that falls outside copyright’s core.” Good.

Microsoft’s GitHub Copilot is massive copyright infringement

Before you read this article – note that Codeium offers a competitor to GitHub Copilot. This means they have something to sell, and something to gain by making Copilot look bad. That being said – their findings are things we already kind of knew, and further illustrate that Copilot is quite possibly one of the largest, if not the largest, GPL violations in history. To prove that GitHub Copilot trains on non-permissively licensed code, we just disable any post-generation filters and see what GPL code we can generate with minimal context. We can very quickly generate the GPL license for a popular GPL-protected repo, such as ffmpeg, from a couple of lines of a header comment. Codeium claims it does not use GPL code for its training data, but the fact it uses code licensed more permissively still raises questions. While the BSD and MIT-like licenses are more permissive and lack copyleft, they still require the inclusion of the terms of the license and a copyright notice whenever the covered code is used. I’m not entirely sure that using only permissively licensed code as training data is any better, since unless you’re adding the licensing terms and copyright notice with every autocompleted piece of code, you’re still violating the license. If Microsoft or whoever else wants to train a coding “AI” or whatever, they should either use code they own the copyright to, get explicit permission from the rightsholders for “AI” training use (difficult for code from larger projects), or properly comply with the terms of the licenses and automatically add the terms and copyright notices during autocomplete and/or properly apply copyleft to the newly generated code. Anything else is a massive copyright violation and a direct assault on open source. Let me put it this way – the code to various versions of Windows has leaked numerous times. What if we train an “AI” on that leaked code and let everyone use it?
Do you honestly think Microsoft would not sue you into the stone age?
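The test Codeium describes – prompting the model and checking whether it reproduces known license text verbatim – is easy to picture in code. Here is a minimal, hypothetical sketch of the detection side (not Codeium’s or GitHub’s actual tooling; `detect_license_text` and the truncated license excerpts are illustrative), flagging generated output that echoes a canonical license text:

```python
# Hypothetical sketch: flag model output that reproduces a known license
# text near-verbatim, as a post-generation filter might.
import difflib

# Illustrative excerpts only; a real filter would load full canonical
# texts, e.g. from the SPDX license list.
KNOWN_LICENSES = {
    "GPL-2.0": (
        "This program is free software; you can redistribute it and/or "
        "modify it under the terms of the GNU General Public License as "
        "published by the Free Software Foundation"
    ),
    "MIT": (
        "Permission is hereby granted, free of charge, to any person "
        "obtaining a copy of this software and associated documentation files"
    ),
}

def detect_license_text(generated: str, threshold: float = 0.8) -> list[str]:
    """Return IDs of licenses whose canonical text the snippet echoes."""
    hits = []
    # Normalize whitespace and case so formatting differences don't matter.
    norm = " ".join(generated.lower().split())
    for license_id, text in KNOWN_LICENSES.items():
        ref = " ".join(text.lower().split())
        ratio = difflib.SequenceMatcher(None, norm, ref).ratio()
        # Substring check catches license text embedded in longer output.
        if ratio >= threshold or ref in norm:
            hits.append(license_id)
    return hits
```

Codeium’s point is precisely that Copilot can be made to skip this kind of check: disable the filter, feed it a couple of header-comment lines, and the GPL text comes straight back out of the model.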

Italy cuts off ChatGPT due to privacy concerns

While ChatGPT has become what seems like a household name, the AI model’s method of data collection is somewhat concerning and has some clear negative connotations. With that being the case, Italy is moving forward with legal action to stop ChatGPT from operating for the time being. Good. These corporate, for-pay tools are built upon the backs of untold numbers of writers and other artists who have not been asked if they want their works to be used. Microsoft, for instance, will stomp any misuse of its code or trademarks into the ground, but at the same time, it’s building entire profit streams on the backs of others. This is wrong.