Microsoft’s Recall feature, which takes screenshots of the contents of your screen every few seconds, saves them, and then runs text and image recognition on them to extract information, has had a rocky start. Even now that it’s out there and Microsoft deems it ready for everyone to use, it has huge security and privacy gaps, and one of them is that applications containing sensitive information, such as the Signal desktop application for Windows, cannot ‘opt out’ of having their contents scraped.
Signal was rather unhappy with this massive privacy risk, and decided to do something about it. The result is called screen security, and it’s Windows-only because it’s specifically designed to counter Windows Recall.
If you attempt to take a screenshot of Signal Desktop when screen security is enabled, nothing will appear. This limitation can be frustrating, but it might look familiar to you if you’ve ever had the audacity to try and take a screenshot of a movie or TV show on Windows. According to Microsoft’s official developer documentation, setting the correct Digital Rights Management (DRM) flag on the application window will ensure that “content won’t show up in Recall or any other screenshot application.” So that’s exactly what Signal Desktop is now doing on Windows 11 by default.
↫ Joshua Lund on the Signal blog
Microsoft cares more about enforcing the rights of massive corporations than it does about respecting the privacy of its users. As such, everything is in place in Windows to ensure neither you nor Recall can take screenshots of, I don’t know, the Bee Movie, but nothing has been put in place to protect your private and sensitive messages in a service like Signal. This really tells you all you need to know about who Microsoft truly cares about, and it sure as hell isn’t you, the user.
What Signal is doing is absolutely brilliant. By turning Windows’ digital rights management features against Recall to protect the privacy of Signal users, Signal has made it impossible – or at least very hard – for Microsoft to address this. Of course, this also means that taking screenshots of the Signal application on Windows for legitimate purposes is more cumbersome now, but since you can temporarily turn screen security off to take a screenshot, it’s not impossible.
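For the technically curious, the mechanism behind this is Windows’ window display affinity setting. Here’s a minimal sketch of the underlying Win32 call, assuming you have a native window handle – note that Signal Desktop is an Electron application, so in practice it presumably goes through Electron’s setContentProtection wrapper rather than making this call directly:

```c
#include <windows.h>

/* Exclude a window from screenshots, screen recordings, and Recall.
 * WDA_EXCLUDEFROMCAPTURE (Windows 10 2004 and later) blanks the window
 * out of any captured output; WDA_NONE restores normal capture, which
 * is presumably what Signal's temporary "screen security off" toggle
 * maps to. */
BOOL set_screen_security(HWND hwnd, BOOL enabled)
{
    return SetWindowDisplayAffinity(
        hwnd, enabled ? WDA_EXCLUDEFROMCAPTURE : WDA_NONE);
}
```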
I almost want other Windows developers to employ this same trick, just to make Recall less valuable, but that’s probably not a great idea considering how much it would annoy users just trying to take legitimate screenshots. My uneducated guess is that this is exactly why Microsoft isn’t providing developers with the kind of fine-grained controls that would let Recall know what it can and cannot take screenshots of: Microsoft must know Recall is a feature for shareholders, not for users, and that users would ask developers to opt out of any Recall snooping if such APIs were officially available.
Microsoft wants to make it as hard as possible for applications to opt out of being sucked into the privacy black hole that is Recall, but in doing so, it might be pushing developers to use DRM to achieve the same goal. Just delicious.
Signal also signed off with a scathing indictment of “AI” as a whole.
“Take a screenshot every few seconds” legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like “How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?” — but more sophisticated threats are on the horizon.
The integration of AI agents with pervasive permissions, questionable security hygiene, and an insatiable hunger for data has the potential to break the blood-brain barrier between applications and operating systems. This poses a significant threat to Signal, and to every privacy-preserving application in general.
↫ Joshua Lund on the Signal blog
Heed this warning.
I was wholeheartedly on board for the whole post, agreeing and nodding to everything being said, but then you had to randomly call AI “AI” for no reason.
There is a huge difference between the AI technology itself and the disgusting companies trying to put profit over their users’ interests.
It’s so weird how many people have tied their own identity up with “AI” – do you think it’s like, if you don’t do that, people will think less of you?
Wasn’t it you who lectured me the other day on ad hominem attacks?
Even though you ask the question in bad faith, I’ll still give you an honest answer. I don’t actually give a shit about AI, but I am well informed on it. I actually read the papers and have talked to various models extensively in order to have that informed opinion.
What I care about is intellectual honesty. I do not think Thom has done his homework and I demand it if you are going to shove the quotes down my throat with every freaking post.
The depth of his analysis seems to be: corporations-evil, AI-fake-theft-autocomplete. I want more. A simple start would be to define AI, no quotes.
It’s not bad faith. I really do think you have wrapped your identity up in the success and failure of AI, at least as far as it goes in public perception. And I really do find that weird, but you are not alone at least.
On the substance of whether or not “ai” is in quotes: I don’t think it matters. The term “artificial intelligence” has meant many things over the decades since the phrase was coined. For much of that time it meant something more like “the appearance of intelligence, without being intelligent” – like ghosts algorithmically chasing Pac-Man – they aren’t intelligent, but they have the appearance of it. Artificial Intelligence as in, the intelligence itself is artificial. This does apply remarkably well to the prediction engine that underlies LLMs. This, to me, differs from real Machine Intelligence, and it’s why I have preferred terms like ML, etc. Even AGI is a hacky solution to this pseudo-academic term’s definition problem.
Today the term “ai” is undoubtedly just a pseudonym for “LLM” or “generative ai.” There are those in the industry who are playing with a wider range of different technologies, much of which is more like Machine Intelligence, and that’s the stuff that will really change the world.
To me it could not matter less whether “AI” is in quotes or not, since the term has been so debased. Putting it in quotes does serve a purpose, which is to communicate skepticism about what it actually is, or maybe what it refers to. When it’s in quotes, to me it’s clear we are talking about the “hype thing.” The truth is, LLMs can be useful for non-deterministic tasks, or pattern-matching type tasks, but they’re not the magic that many of their advocates think they are (especially for creative works, and yes, even programming – though that thought would take longer to explain to anyone who’s used cline, which does seem like magic). And it does smack of a lot of the same problems that the other solution looking for a problem used to have – I’m writing, of course, of cryptocurrency and blockchains.
CaptainN-,
How can it be undoubtedly when I doubt it so much 🙂 AI can include LLMs or “generative ai”, but it is so many other things too, as you yourself explicitly stated. These are not synonymous or interchangeable. That ordinary people conflate them is problematic, but at least here we should all know better.
I do see people who consider it useful, but I’m not seeing many people pushing it as some kind of magic. As far as I can tell, the LLM advocates on osnews acknowledge that it’s not a panacea and don’t consider it magic. Do you have anyone specific in mind?
I agree there’s certainly been a gold rush mentality around LLMs, although I do think there may be more demand than you’d be comfortable admitting. If you’re creating music, for example, and don’t have a moral opposition to generative AI, then you may well see generated content as an opportunity to save money on commissioning music/art while getting pretty darn good results. Clearly there’s some selection bias at play (i.e. the content may not be representative), but nevertheless I am finding it harder to identify human works versus AI works. This is a bad sign for anyone arguing that generative AI has no merit.
Edit:
By any chance were you trying to quote Thom Holwerda specifically here? If so, then that didn’t come through because you didn’t quote his quotes…
If this is what you meant, then you are probably right that Thom does mean ‘”AI”‘ as a pseudonym for “LLM” or “generative AI” rather than AI as a computer science term more broadly.
Side bar: I don’t know the proper way to quote somebody who is using quotes… where’s a language expert when you need one?
BTW I asked chatgpt for maximum irony:
No one cares. Find a new hobby horse.
There’s no difference at all between AI and the disgusting companies. Every AI model on the planet is trained on stolen data.
pikaczex,
I think you went way over the line of reason. AI is very broad and has advanced computer science for decades. AI regularly beats humans at chess and Jeopardy, finds novel solutions to Super Mario, gets robots walking, etc. I guess I’ll try to interpret your post as though you wrote LLM instead of AI, but please don’t assume these are the same.
As for “stealing data”, there’s tons of nuance. Everyone can agree that if an LLM is caught outputting copyrighted works verbatim, that’s objective copyright infringement – not because it’s an LLM, but because it would be infringement if a human did the same thing, and calling it infringement is 100% consistent. However, when LLMs generate new expressions from existing work, this is not traditionally copyright infringement, and humans do it all the time. Copyright law allows us to take information from different sources to create our own new expressions, and generalized LLMs are fantastic at doing this. Why would it be OK for humans to do this but not machines? People have strong opinions about it, but I think there’s a lot of philosophy that needs to be hashed out.
Alfman,
For sure there is a lot of philosophy to be hashed out, no arguments there. That said, I think where you and I disagree is in the fairness of humans doing something on the one hand and machines doing that same thing on the other.
I absolutely consider it fair to have different rules for humans and machines, considering the different economies of scale inherent in the industrial LLM hoovering approach. That’s saying nothing about how enforceable they would be of course.
The reason I consider it fair is that it is much easier for machines to disenfranchise real human activity, with all the negative externalities that result – particularly in the creative spheres, with LLMs being just one example.
PhilPotter,
It’s interesting that you use the word “fairness”, because I’m really not so sure whether it’s fair for humans to compete against machines or not. Fair or not, the displacement of humans by machines is well established throughout history. There are tons of examples to illustrate this, like the classic horse and buggy example, but I’m going to use another: the Linotype. This machine serves not only as an example of mechanization replacing countless human jobs, but also as an example of skilled Linotype operators themselves being replaced by newer machines.
“Linotype | How One Machine Shaped History”
https://youtu.be/HUtJO59eKJ8
It’s not necessarily a “fair” fight between an expert Linotype operator and a low-end desktop user, but capitalistic society typically promotes productivity over fairness. This isn’t really my attempt to morally defend generative AI; however, its existence is consistent with the role of disruptive innovation throughout history. We can try to make the case that it’s unfair now, but why would/should this argument succeed today when we’ve mostly ignored it throughout history? I predict this will be decided by corporate interests acting as they always do to protect their bottom lines.
I don’t disagree with this. A theme in many of my posts is that AI and LLMs are going to be far more disruptive to humans than society realizes. IMHO many of the AI naysayers are still detrimentally focused on the “AI can’t compete with humans” narrative, which I feel under-represents the real threat that AI poses.
You continue to troll on this tired point, but I’ll bite. AI is short for Artificial Intelligence. What we are calling AI today, LLMs, are not intelligent, though they are artificial in the sense that they are designed by humans and not found in nature. LLMs, or Large Language Models, are language processors that are trained on existing human-generated text and images and can only output a transformation of what they take in. Therefore, they are “garbage in, garbage out”. They don’t reason, they don’t think, they don’t have a sense of self, they are not intelligent, self-aware, or alive. They are not “artificial intelligence”. If you type 2+2 into a calculator and press the = key, you get a calculated answer, 4. LLMs do the same thing but on a much larger scale, and with all of the incorrect and inaccurate source information out there, they often spit out incorrect and inaccurate responses because they don’t have the human ability to reason and separate fact from fiction.
I’m not saying LLMs are inherently bad, I’m saying that your dogged insistence on calling LLMs “AI” is incorrect and inaccurate, much like the technology itself can be. One day we may achieve true artificial intelligence that can reason and think like humans (or even non-human animals – I’d settle for that as a milestone), but until then calling it “AI” is a misnomer at best, and in your case it’s a basis for trolling.
// they often spit out incorrect and inaccurate responses because they don’t have the human ability to reason and separate fact from fiction.
humans are not really any different, that’s how propaganda works.
bert64,
That’s a good point, humans are notoriously vulnerable to misinformation.
It’s not just LLMs; even people with brains fail at classifying fact and fiction. People filter out inconvenient facts and ingest misinformation that aligns with their prejudices. Social media bears this out on a massive scale. Like Morgan, I don’t have faith in an LLM’s ability to solve the GIGO problem, but the uncomfortable truth may be that humans are vulnerable to it as well…
This would be an interesting topic to cover in a large-scale, double-blind experiment.
Are you guys still using that Windows thing? There’s never been a better time to switch to Linux or really, anything else… Even HDR is supported now! And Bazzite is regularly outperforming Windows in games at this point (if that’s one of the things holding you back).
Oh yeah, I hear 2025 is the year of the Linux desktop!
I don’t know if it really is (one can hope) – I just find it weird that so many people on a site about alternative operating systems still continue to abuse themselves with Windows.
MS makes both the DRM and Recall. What’s to stop them making an exception for Recall, assuming they haven’t already?
Absolutely nothing. And they will for sure do it as soon as this technique spreads.
The problem with being a hostage of a proprietary OS is that your abductors can do whatever they want with you. They make the rules, and hard-code them into their system.
Just find a way to use it against government/law enforcement, and watch Microsoft *suddenly* find a solution to those loopholes. See also: Google turning a blind eye to fraudulent listings in Google Maps/Business… until someone demonstrated that you could set up a listing impersonating the US Secret Service. Or see also: Apple turning a blind eye to privacy/stalking issues with AirTags… until people started sticking them to cop cars.
I don’t see how in this case. They are clever: they keep small loopholes to allow these anti-features to be disabled for companies and governments. Usually group policies or hidden, cumbersome registry keys (with a tendency to simply disappear or revert to default values on updates of “home editions” of their products) – one such key is sketched below.
You know, just to make the wall high enough that the average home user is unable to escape from these disgraces, while power users can pretend that everything is fine and well because “it can be disabled” (by them, who are 0.2% of the population).
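(For reference, the loophole alluded to above appears to be the documented “turn off saving snapshots” policy. A hedged sketch in C, assuming the registry path and value name Microsoft currently documents for it – which, per the point above, may well move or stop working on “home editions” after an update:)

```c
#include <windows.h>

/* Sketch: set the "turn off saving snapshots" policy,
 * DisableAIDataAnalysis = 1, under the per-user policy hive. The path
 * and value name are taken from Microsoft's Recall policy
 * documentation; whether a given edition honors it, and keeps honoring
 * it after updates, is exactly the complaint above. */
LONG disable_recall_snapshots(void)
{
    HKEY key;
    DWORD one = 1;
    LONG rc = RegCreateKeyExW(HKEY_CURRENT_USER,
        L"Software\\Policies\\Microsoft\\Windows\\WindowsAI",
        0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS)
        return rc;
    rc = RegSetValueExW(key, L"DisableAIDataAnalysis", 0, REG_DWORD,
                        (const BYTE *)&one, sizeof(one));
    RegCloseKey(key);
    return rc;
}
```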