Privacy, Security Archive

Corporate greed from Apple and Google has destroyed the passkey future

William Brown, developer of webauthn-rs, has written a scathing blog post detailing how corporate interests – namely, Apple and Google – have completely and utterly destroyed the concept of passkeys. The basic gist is that Apple and Google were more interested in control and locking in users than in providing a user-friendly passwordless future, and in doing so have made passkeys effectively a worse user experience than just using passwords in a password manager.

Since then, Passkeys are now seen as a way to capture users and audiences into a platform. What better way to encourage long term entrapment of users than by locking all their credentials into your platform, and even better, credentials that can’t be extracted or exported in any capacity. Both Chrome and Safari will try to force you into using hybrid (caBLE), where you scan a QR code with your phone to authenticate – you have to click through menus to use a security key. caBLE is not even a good experience, taking more than 60 seconds to work in most cases. The UI is beyond obnoxious at this point. Sometimes I think the password game has a better UX. The more egregious offender is Android, which won’t even activate your security key if the website sends the set of options that are needed for Passkeys. This means the IDP gets to choose what device you enroll without your input. And of course, all the developer examples only show you the options to activate “Google Passkeys stored in Google Password Manager”. After all, why would you want to use anything else? ↫ William Brown

The whole post is a sobering read of how a dream of passwordless, and even usernameless, authentication was right within our grasp, usable by everyone, until Apple and Google got involved and enshittified the standards and tools to promote lock-in and their own interests above the user experience. If even someone as knowledgeable about this subject as Brown, who writes actual software to make these things work, is advising against using passkeys, you know something’s gone horribly wrong. I also looked into possibly using passkeys, including with hardware like a YubiKey, but the process seemed so complex and unpleasant that I, too, concluded that just sticking to Bitwarden and my favourite open source two-factor authentication application was a far superior user experience.
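To make the “set of options” Brown mentions concrete, here is a minimal sketch, written as a Python dict standing in for the JSON a site hands to navigator.credentials.create(). The field names come from the WebAuthn spec; the site, user, and concrete values are hypothetical examples, not taken from Brown’s post.

```python
# Illustrative WebAuthn registration options, expressed as a plain dict.
# Field names are from the WebAuthn spec; the concrete values are made up.
creation_options = {
    "rp": {"id": "example.com", "name": "Example"},            # hypothetical site
    "user": {"id": "dXNlci0xMjM=", "name": "alice", "displayName": "Alice"},
    "challenge": "c2VydmVyLWdlbmVyYXRlZC1ub25jZQ==",            # random, server-side
    "pubKeyCredParams": [{"type": "public-key", "alg": -7}],    # ES256
    # This block is the lever Brown describes: the site, not the user,
    # decides what kind of authenticator is acceptable.
    "authenticatorSelection": {
        "authenticatorAttachment": "platform",   # excludes roaming security keys
        "residentKey": "required",               # forces a discoverable passkey
        "userVerification": "required",
    },
}

# A site that wanted to leave the choice to the user could instead omit
# "authenticatorAttachment" and relax "residentKey" to "preferred".
print(creation_options["authenticatorSelection"])
```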

Backdoor in upstream xz/liblzma leading to SSH server compromise

After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer: The upstream xz repository and the xz tarballs have been backdoored. At first I thought this was a compromise of debian’s package, but it turns out to be upstream. ↫ Andres Freund

I don’t normally report on security issues, but this is a big one, not just because of the severity of the issue itself, but also because of its origins: it was created by and added to upstream xz/liblzma by a regular contributor of said project, and makes it possible to bypass SSH authentication. It was discovered more or less by accident by Andres Freund.

I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access. Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution. ↫ Andres Freund

The exploit was only added to the release tarballs, and is not present when taking the code off GitHub manually. Luckily for all of us, the exploit has only made its way to the bloodiest of bleeding-edge distributions, such as Fedora Rawhide and Debian testing, unstable, and experimental, and as such has not spread widely just yet. Nobody seems to know quite yet what the ultimate intent of the exploit is. Of note: the person who added the compromising code was recently added as a Linux kernel maintainer.
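If you want to check whether a machine is running one of the affected releases, a minimal sketch along these lines works. The known-bad version numbers (5.6.0 and 5.6.1) are from the public disclosure; everything else here is an illustrative assumption, and distribution packages may ship patched builds under the same version number.

```python
# Minimal sketch: flag the xz releases known to carry the backdoor (5.6.0, 5.6.1).
# Assumes `xz` is on PATH; a match means "investigate", not proof of compromise.
import re
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}

out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
versions = set(re.findall(r"\b(\d+\.\d+\.\d+)\b", out))

if versions & BACKDOORED:
    print(f"xz/liblzma {sorted(versions & BACKDOORED)} detected - see the oss-security advisory")
else:
    print(f"xz reports {sorted(versions) or 'unknown'}; not one of the known-bad releases")
```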

The Apple curl security incident 12604

When this command line option [--cacert] is used with curl on macOS, the version shipped by Apple, it seems to fall back and check the system CA store in case the provided set of CA certs fails the verification. A secondary check that was not asked for, is not documented and, quite frankly, comes completely as a surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server! This is a security problem because now suddenly certificate checks pass that should not pass. ↫ Daniel Stenberg

Absolutely wild that Apple does not consider this a security issue.
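A rough way to reproduce what Stenberg describes, assuming a hypothetical host and CA bundle: upstream curl fails verification with exit code 60 when the supplied bundle cannot validate the server, while the behaviour described in the post would make Apple’s /usr/bin/curl succeed anyway by silently consulting the system CA store.

```python
# Sketch: point curl at a CA bundle that cannot possibly verify the server.
# Upstream curl should exit with 60 (CURLE_PEER_FAILED_VERIFICATION); per the
# post, Apple's build may still exit 0. Host and bundle path are placeholders.
import subprocess

result = subprocess.run(
    ["curl", "--silent", "--output", "/dev/null",
     "--cacert", "./unrelated-ca.pem",        # trimmed, dedicated CA file
     "https://example.com/"],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("verification passed - curl trusted something outside the given bundle")
elif result.returncode == 60:
    print("verification failed as expected with the dedicated CA bundle")
else:
    print(f"curl exited with {result.returncode}: {result.stderr.strip()}")
```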

Meet ‘Link History,’ Facebook’s new way to track the websites you visit

Facebook recently rolled out a new “Link History” setting that creates a special repository of all the links you click on in the Facebook mobile app. You can opt out if you’re proactive, but the company is pushing Link History on users, and the data is used for targeted ads. As lawmakers introduce tech regulations and Apple and Google beef up privacy restrictions, Meta is doubling down and searching for new ways to preserve its data harvesting empire. The company pitches Link History as a useful tool for consumers “with your browsing activity saved in one place,” rather than another way to keep tabs on your behavior. With the new setting you’ll “never lose a link again,” Facebook says in a pop-up encouraging users to consent to the new tracking method. The company goes on to mention that “When you allow link history, we may use your information to improve your ads across Meta technologies.” The app keeps the toggle switched on in the pop-up, steering users towards accepting Link History unless they take the time to look carefully. ↫ Thomas Germain at Gizmodo

As more and more people in the technology press who used to be against Facebook have changed their tune since the launch of Facebook’s Threads – the tech press needs eyeballs in one place for ad revenue, and with Twitter effectively dead, Threads is its replacement – it’s easy to forget just what a sleazy, slimy, and disgusting company Facebook really is.

Federal government is using data from push notifications to track contacts

Government investigators in the United States have used push notification data to pursue people of interest, Sen. Ron Wyden (D-Ore.) said in a letter Wednesday to the Justice Department, revealing for the first time a way in which Americans can be tracked through a basic service provided by their smartphones. Wyden’s letter said the Justice Department had prohibited Apple and Google from discussing the technique and asked it to change the rule, noting that his office had received a tip that foreign governments had also begun requesting the push-notification data. ↫ Drew Harwell for The Washington Post

Not surprising, of course. The one nugget of good news here is that while Apple’s policy is to hand over this data after a mere subpoena (“privacy is a fundamental human right“, everybody), Google requires an actual court order, meaning federal officials must convince a judge of the validity of the request. Assuming this is not a nebulous secret backroom deal but a proper judicial process, I’m actually okay with that – law enforcement does need the ability to investigate potential criminals, and as long as this happens within the boundaries of the law and is properly overseen and approved by the judiciary every step along the way, I can support it.

Hacking the Canon imageCLASS MF742Cdw/MF743Cdw (again)

There has been quite a bit of documentation about exploiting the CANON printer firmware in the past. For some more background information I suggest reading these posts by SYNACKTIV, doar-e and DEVCORE. I highly recommend reading all of it if you want to learn more about hacking (CANON) printers. The TL;DR is: we’re dealing with a custom RTOS called DRYOS, engineered by CANON, that doesn’t ship with any modern mitigations like W^X or ASLR. That means that after getting a bit acquainted with this alien RTOS it is relatively easy to write (reliable) exploits for it.

Having a custom operating system doesn’t mean it’s more secure than popular solutions.

ANSI Terminal security in 2023 and finding 10 CVEs

This paper reflects work done in late 2022 and 2023 to audit for vulnerabilities in terminal emulators, with a focus on open source software. The results of this work were 10 CVEs against terminal emulators that could result in Remote Code Execution (RCE); in addition, various other bugs and hardening opportunities were found. The exact context and severity of these vulnerabilities varied, but some form of code execution was found to be possible on several common terminal emulators across the main client platforms of today. Additionally, several new ways to exploit these kinds of vulnerabilities were found. This is the full technical write-up that assumes some familiarity with the subject matter; for a more gentle introduction see my post on the G-Research site.

Some light reading for the weekend.
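Most of the bugs in write-ups like this are triggered by untrusted bytes – escape sequences – reaching the terminal, so a common mitigation on the application side is to strip control sequences before echoing untrusted data. A minimal, non-exhaustive sketch, assuming you only want to keep tabs and newlines:

```python
# Strip C0 control characters and common ANSI escape sequences from untrusted
# text before writing it to a terminal. The regex is a pragmatic sketch, not a
# full terminal-sequence parser.
import re

ESCAPE_SEQ = re.compile(
    r"\x1b\[[0-?]*[ -/]*[@-~]"            # CSI sequences (colours, cursor moves)
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC sequences (e.g. title changes)
    r"|[\x00-\x08\x0b-\x1f\x7f]"           # other control bytes (keep \t and \n)
)

def sanitize(untrusted: str) -> str:
    return ESCAPE_SEQ.sub("", untrusted)

print(sanitize("normal text \x1b]0;evil title\x07 more text"))
```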

Clever malvertising attack uses Punycode to look like KeePass’s official website

Threat actors are known for impersonating popular brands in order to trick users. In a recent malvertising campaign, we observed a malicious Google ad for KeePass, the open-source password manager, that was extremely deceiving. We previously reported on how brand impersonations are a common occurrence these days due to a feature known as tracking templates, but this attack used an additional layer of deception. The malicious actors registered a copycat internationalized domain name that uses Punycode, a special character encoding, to masquerade as the real KeePass site. The difference between the two sites is visually so subtle it will undoubtedly fool many people. We have reported this incident to Google, but would like to warn users that the ad is still currently running.

Ad blockers are security tools. This proves it once again.
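For the curious, this is roughly how you can see what an “xn--” (Punycode) hostname actually displays as, using Python’s built-in IDNA codec. The hostnames below are illustrative examples, not the actual domain from this campaign.

```python
# Render the Unicode form of a Punycode-encoded hostname so a look-alike
# domain can be compared against the brand it imitates. Example domains only.

def unicode_form(hostname: str) -> str:
    return hostname.encode("ascii").decode("idna")

for host in ["keepass.info", "xn--mnchen-3ya.example"]:
    shown = unicode_form(host)
    flag = " (internationalized - check carefully)" if "xn--" in host else ""
    print(f"{host} -> {shown}{flag}")
```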

I tested an HDMI adapter that demands your location, browsing data, photos, and spams you with ads

I recently got my hands on an ordinary-looking iPhone-to-HDMI adapter that mimics Apple’s branding and, when plugged in, runs a program that implores you to “Scan QR code for use.” That QR code takes you to an ad-riddled website that asks you to download an app that asks for your location data, access to your photos and videos, runs a bizarre web browser, installs tracking cookies, takes “sensor data,” and uses that data to target you with ads. The adapter’s app also kindly informed me that it’s sending all of my data to China.

Just imagine what kind of stuff is happening that isn’t perpetrated by crude idiots, but by competent state-sponsored actors. I don’t believe for a second that none of the products from Apple, Dell, HP, and so on, manufactured in Chinese state-owned factories, are compromised. The temptation is too high, and even if, say, Apple found something inside one of their devices rolling off the factory line – what are they going to do? Publicly blame the Chinese government, whom they depend on for virtually all their manufacturing? You may trust HP, but do you trust the entire chain of people and entities controlling their supply chain?

Cars are the worst product category we have ever reviewed for privacy

Car makers have been bragging about their cars being “computers on wheels” for years to promote their advanced features. However, the conversation about what driving a computer means for its occupants’ privacy hasn’t really caught up. While we worried that our doorbells and watches that connect to the internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines. Machines that, because of all those brag-worthy bells and whistles, have an unmatched power to watch, listen, and collect information about what you do and where you go in your car. All 25 car brands we researched earned our *Privacy Not Included warning label — making cars the official worst category of products for privacy that we have ever reviewed.

Much to the surprise of nobody.

Why is .US being used to phish so many of us?

Domain names ending in “.US” — the top-level domain for the United States — are among the most prevalent in phishing scams, new research shows. This is noteworthy because .US is overseen by the U.S. government, which is frequently the target of phishing domains ending in .US. Also, .US domains are only supposed to be available to U.S. citizens and to those who can demonstrate that they have a physical presence in the United States.

The answer is GoDaddy.

Not everything is secret in encrypted apps like iMessage and WhatsApp

The mess I’m describing — end-to-end encryption but with certain exceptions — may be a healthy balance of your privacy and our safety. The problem is it’s confusing to know what is encrypted and secret in communications apps, what is not, and why it might matter to you. To illuminate the nuances, I broke down five questions about end-to-end encryption for five communications apps.

This is a straightforward and good overview of what, exactly, is end-to-end encrypted in the various chat and IM applications we use today. There are a lot of ifs and buts here.

Bypassing Bitlocker using a cheap logic analyzer on a Lenovo laptop

The BitLocker partition is encrypted using the Full Volume Encryption Key (FVEK). The FVEK itself is encrypted using the Volume Master Key (VMK) and stored on the disk, next to the encrypted data. This permits key rotations without re-encrypting the whole disk. The VMK is stored in the TPM. Thus the disk can only be decrypted when booted from this computer (there is a recovery mechanism in Active Directory, though). In order to decrypt the disk, the CPU will ask the TPM to send the VMK over the SPI bus. The vulnerability should be obvious: at some point in the boot process, the VMK transits unencrypted between the TPM and the CPU. This means that it can be captured and used to decrypt the disk.

This seems like such an obvious design flaw, and yet, that’s exactly how it works – and yes, as this article notes, you can indeed capture the VMK in transit and decrypt the disk.
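Conceptually, the sniffing step boils down to carving a 256-bit key out of a decoded SPI capture. Below is a heavily simplified sketch; the marker bytes and the capture filename are placeholders to be taken from the linked write-up or your own capture, not authoritative constants.

```python
# Conceptual sketch only: once the SPI traffic between CPU and TPM has been
# decoded into a byte stream, the VMK travels in the clear and can be carved
# out of the capture. VMK_MARKER is a placeholder header pattern - verify the
# exact bytes against the article or your own tooling before relying on it.

VMK_MARKER = bytes.fromhex("2c000000010000000320")  # placeholder, not authoritative
VMK_LENGTH = 32                                      # 256-bit volume master key

def carve_vmk(capture: bytes) -> bytes | None:
    i = capture.find(VMK_MARKER)
    if i == -1:
        return None
    start = i + len(VMK_MARKER)
    return capture[start:start + VMK_LENGTH]

with open("tpm_spi_dump.bin", "rb") as f:   # hypothetical decoded capture file
    vmk = carve_vmk(f.read())
print(vmk.hex() if vmk else "no VMK candidate found in this capture")
```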

Cyber-attack on UK’s electoral registers revealed

The UK’s elections watchdog has revealed it has been the victim of a “complex cyber-attack” potentially affecting millions of voters. The Electoral Commission said unspecified “hostile actors” had managed to gain access to copies of the electoral registers, from August 2021. Hackers also broke into its emails and “control systems” but the attack was not discovered until October last year. The watchdog has warned people to watch out for unauthorised use of their data.

That seems like a state-level attack, and such data could easily be used for online influence campaigns during elections, something that is happening all over the western world right now. I wonder just how bad the hack actually was? “Millions of voters” sounds bad, but…

The commission says it is difficult to predict exactly how many people could be affected, but it estimates the register for each year contains the details of around 40 million people.

Holy cow.

Compromising Garmin’s sport watches: a deep dive into GarminOS and its MonkeyC virtual machine

I reversed the firmware of my Garmin Forerunner 245 Music back in 2022 and found a dozen or so vulnerabilities in their support for Connect IQ applications. They can be exploited to bypass permissions and compromise the watch. I have published various scripts and proof-of-concept apps to a GitHub repository. Disclosure was coordinated with Garmin; some of the vulnerabilities have been around since 2015 and affect over a hundred models, including fitness watches, outdoor handhelds, and GPS units for bikes.

Raise your hands if you’re surprised. Any time someone takes even a cursory glance at internet-of-things devices or connected anything that isn’t a well-studied platform from the likes of Apple, Google, or Microsoft, they find boatloads of security issues, dangerous bugs, stupid design decisions, and so much more.

Meta wants EU users to apply for permission to opt out of data collection

Ars Technica reports:

Meta announced that starting next Wednesday, some Facebook and Instagram users in the European Union will for the first time be able to opt out of sharing first-party data used to serve highly personalized ads, The Wall Street Journal reported. The move marks a big change from Meta’s current business model, where every video and piece of content clicked on its platforms provides a data point for its online advertisers. People “familiar with the matter” told the Journal that Facebook and Instagram users will soon be able to access a form that can be submitted to Meta to object to sweeping data collection. If those requests are approved, those users will only allow Meta to target ads based on broader categories of data collection, like age range or general location.

This immediately feels like something that shouldn’t be legal. Why on earth do I have to convince Facebook to respect my privacy? I should not have to provide any justification to them whatsoever – if I want them to respect my privacy, they should just damn do so, no questions asked. It seems I’m not alone:

Other privacy activists have criticized Meta’s plan to provide an objection form to end sweeping data collection. Fight for the Future Director Evan Greer told Ars that Meta’s plan provides “privacy in name only” because users who might opt out if given a “yes/no” option may be less likely to fill out the objection form that requires them to justify their decision. “No one should have to provide a justification for why they don’t want to be surveilled and manipulated,” Greer told Ars.

Exactly.

Secure Boot: this is not the protection we are looking for

So there you have it: idly recommending Secure Boot for all systems requiring an intermediate security level accomplishes nothing, except maybe giving more work to system administrators who are recompiling their kernel, while offering exactly no measurable security against many threats if the UEFI administrative password and MOK Manager passwords are not set. This is especially true for laptop systems, where physical access cannot be prevented for obvious reasons. For servers in colocation, the risk of physical access is not zero. And finally, for many servers, the risk of a rogue employee somewhere in the supply chain, or the maintenance chain, cannot be easily ruled out.

The author makes a compelling case, but my knowledge of this topic is too limited to confidently present this article as a good one. I’ll leave it to those among us with more experience on this subject to shoot holes in the article, or to affirm it.
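As a small aside, on Linux you can at least see what the firmware itself reports by reading the SecureBoot EFI variable. A sketch, assuming the standard efivarfs path; note that this tells you Secure Boot is on or off, but says nothing about whether the UEFI administrative or MOK Manager passwords the author is concerned with are actually set.

```python
# Read the SecureBoot EFI variable: 4 bytes of attributes followed by a
# 1-byte value (1 = enabled). Requires an EFI-booted Linux system with efivarfs.
from pathlib import Path

VAR = Path("/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

if not VAR.exists():
    print("No SecureBoot variable - legacy BIOS boot, or EFI variables not exposed")
else:
    enabled = VAR.read_bytes()[4] == 1   # skip the 4 attribute bytes
    print("Secure Boot is", "enabled" if enabled else "disabled")
```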

Apple, Google, and Microsoft will soon implement passwordless sign-in on all major platforms

In a joint effort, tech giants Apple, Google, and Microsoft announced Thursday morning that they have committed to building support for passwordless sign-in across all of the mobile, desktop, and browser platforms that they control in the coming year. Effectively, this means that passwordless authentication will come to all major device platforms in the not too distant future: Android and iOS mobile operating systems; Chrome, Edge, and Safari browsers; and the Windows and macOS desktop environments. A passwordless login process will let users choose their phones as the main authentication device for apps, websites, and other digital services, as Google detailed in a blog post published Thursday. Unlocking the phone with whatever is set as the default action — entering a PIN, drawing a pattern, or using fingerprint unlock — will then be enough to sign in to web services without the need to ever enter a password, made possible through the use of a unique cryptographic token called a passkey that is shared between the phone and the website.

Passwords are a terrible security practice, and while password managers make the whole ordeal slightly less frustrating, using my phone’s fingerprint reader to log into stuff seems like a very welcome improvement.
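Mechanically, “unlocking the phone is enough” because the phone signs a server-provided challenge with the passkey’s private key after local user verification, and the site checks that signature against the public key it stored at enrollment. A minimal sketch of that server-side step for an ES256 passkey follows, using the third-party cryptography package; a real WebAuthn verifier also checks the challenge, origin, RP ID hash, and signature counter, none of which is shown here.

```python
# Core signature check behind a passkey login: the browser returns a signature
# over (authenticator_data || SHA-256(client_data_json)), verified with the
# public key stored at enrollment. Requires the `cryptography` package.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_assertion(public_key: ec.EllipticCurvePublicKey,
                     authenticator_data: bytes,
                     client_data_json: bytes,
                     signature: bytes) -> bool:
    signed_payload = authenticator_data + hashlib.sha256(client_data_json).digest()
    try:
        public_key.verify(signature, signed_payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```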

How a decades-old database became a hugely profitable dossier on the health of 270 million Americans

To most Americans, the name MarketScan means nothing. But most Americans mean everything to MarketScan. As a repository of sensitive patient information, the company’s databases churn silently behind the scenes of their medical care, scooping up their most guarded secrets: the diseases they have, the drugs they’re taking, the places their bodies are broken that they haven’t told anyone but their doctor. The family of databases that make up MarketScan now include the records of a stunning 270 million Americans, or 82% of the population. The vast reach of MarketScan, and its immense value, is unmistakable. Last month, a private equity firm announced that it would pay $1 billion to buy the databases from IBM. It was by far the most valuable asset left for IBM as the technology behemoth cast off its foundering Watson Health business.

Imagine how easy it would be for companies to hire only people in tip-top health, and disregard anyone with even the smallest of preexisting conditions. This data is hugely valuable to just about anyone.