Microsoft has just announced a whole slew of new “AI” features for Windows, and this time, they’ll be living in your taskbar.
Microsoft is trying to transform Windows into a “canvas for AI,” with new AI agents integrated into the Windows 11 taskbar. These new taskbar capabilities are designed to make AI agents feel like an assistant in Windows that can go off and control your PC and do tasks for you at the click of a button. It’s part of a broader overhaul of Windows to turn the operating system into an “agentic OS.”
[…]Microsoft is integrating a variety of AI agents directly into the Windows 11 taskbar, including its own Microsoft 365 Copilot and third-party options. “This integration isn’t just about adding agents; it’s about making them part of the OS experience,” says Windows chief Pavan Davuluri.
↫ Tom Warren at The Verge
These “AI” agents will control your computer, applications, and files for you, which may make some of you a little apprehensive, and for good reason. “AI” tools don’t have a great track record when it comes to privacy – Windows Recall comes to mind – and as such, Microsoft claims this time, it’ll be different. These new “AI” agents will run in what are essentially dedicated Windows accounts acting as sandboxes, to ensure they can only access certain resources.
While I find the addition of these “AI” tools to Windows insufferable and dumb, I’m at least glad Microsoft is taking privacy and security seriously this time, and I doubt Microsoft would repeat the same mistakes they made with the entirely botched rollout of Windows Recall. in addition, after the Cloudstrike fiasco, Microsoft made clear commitments to improve its security practices, which further adds to the confidence we should all have these new “AI” tools are safe, secure, and private.
But wait, what’s this?
Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.
↫ Microsoft support document about the new “AI” features
Microsoft’s new “AI” features can go out and install malware without your consent, because these features possess the access and privileges to do so. The mere idea that some application – which is essentially what these “AI” features really are – can go out onto the web and download and install whatever it wants, including malware, “on your behalf”, in the background, is so utterly dystopian to me I just can’t imagine any serious developer looking at this and thinking “yeah, ship it”.
I’m living in an insane asylum.

> which further adds to the confidence we should all have these new “AI” tools are safe, secure, and private.
Private as in: your data only getting harvested by microsoft. Can’t allow access to competitors, after all.
They claim:
> As we begin to build agentic capabilities into Windows, our commitment is to include robust security and privacy controls that empower customers to explore their potential confidently with the support of clear guidance and appropriate guardrails driven by these goals.
>
> Non-repudiation : All actions of an agent are observable and distinguishable from those taken by a user.
> Confidentiality: Agents that collect, aggregate or otherwise utilize protected data of users meet or exceed the security and privacy standards of the data which they consume.
> Authorization : Users approve all queries for user data as well as actions taken.
At least they’ll provide some way of requiring human intervention before running a command and will be visually distinguishable from the user themselves. But this is still ripe for fraud.
> At least they’ll provide some way of requiring human intervention before running a command
Um, who actually reads the windows dialogs before approving them in a reflex, even among those who know better?
Mote,
True.
Even with full screen modal warnings in UAC, people would just click next. And the websites would even give bright instructions with large arrows to do the process.
This time it is “press ENTER to run this very complicated command you could not possibly understans”
At least the say they will provide this.
And MS always does what they say, right?
Microsoft considered harmful.
Microsoft being Microsoft again. Remember Active X and how a massive security hole that was?
They will never learn. Junk programmers adding features because they can with zero thoughts at how they may be used for evil.
“Microsoft made clear commitments to improve its security practices, which further adds to the confidence we should all have these new “AI” tools are safe, secure, and private.”
Hahahaha, for anyone who believes that one I have bridge in Brooklyn going dirt cheap, PM me for details. For the last four decades or so they have shown they have not one single clue how to do security properly, that they could start now is beyond a joke….
Well… it does not matter how much you “sandbox” AI, as long as it has access to your documents and the Internet. It can make catastrophic mistakes.
Why?
Agents are just running code on your behalf. It could be internal functions, web search, python scripts, or in case of Windows, PowerShell.
In order to be useful, they will have to do “real” work, and any real work comes with risks.
Please encrypt my data, so that others cannot access it. I have heard it is important for privavy.
Yes, please
Wait, how am I going to access this data later on?
Congratulations you have installed a ransomware on your system… willingly.
The only way to avoid this is… to be perfectly honest… not letting AI do actual work. Let it search maybe, but you have to do the actions, and vet all the software that is downloaded and run, along with the scripts generated.
If I worked for the Windows core team, I’d be so frustrated right now.
Dave Cutler & Co are tough cookies and Windows, deep inside, is nowadays very secure. It is quite impressive actually that you can apply a fully distributed security model (Active Directory) to hundreds of thousands of files, processes and other objects with high performance. Other features are also good, such as the modern virtualization-based core security features.
And then, all this trouble, and the beam counters will get you to create a local user account for a chatbot with high privileges…
ugh.
Obviously all Dave have an opinion about Windows : https://www.youtube.com/watch?v=oTpA5jt1g60
What Windows core team? I don’t believe MS has one of those any more.
AI is a cult and we’re all being forced to take part in this grand experiment they are forcing on the world. This is all because tech has become a series of monopolies in various domains. All of them trying to work around each other as best they can without infringing too much.
Bolting on user like / level AI agents to automate a legacy OS is like bolting on motors to the legs of a horse so you can keep selling your medieval carriage. We need a new hierarchical servant construct with the smartest and most trusted agent at the top, not a bunch of desperados swarming a clunky Jurassic contraption. Trusted Agency is the thing MIcrosoft should be working on instead of deploying tech they half understand with security warnings. They never actually got rid of Sidney.