Microsoft has just announced a whole slew of new “AI” features for Windows, and this time, they’ll be living in your taskbar.
Microsoft is trying to transform Windows into a “canvas for AI,” with new AI agents integrated into the Windows 11 taskbar. These new taskbar capabilities are designed to make AI agents feel like an assistant in Windows that can go off and control your PC and do tasks for you at the click of a button. It’s part of a broader overhaul of Windows to turn the operating system into an “agentic OS.”
[…]Microsoft is integrating a variety of AI agents directly into the Windows 11 taskbar, including its own Microsoft 365 Copilot and third-party options. “This integration isn’t just about adding agents; it’s about making them part of the OS experience,” says Windows chief Pavan Davuluri.
↫ Tom Warren at The Verge
These “AI” agents will control your computer, applications, and files for you, which may make some of you a little apprehensive, and for good reason. “AI” tools don’t have a great track record when it comes to privacy – Windows Recall comes to mind – and as such, Microsoft claims that this time it’ll be different. These new “AI” agents will run in what are essentially dedicated Windows accounts acting as sandboxes, to ensure they can only access certain resources.
While I find the addition of these “AI” tools to Windows insufferable and dumb, I’m at least glad Microsoft is taking privacy and security seriously this time, and I doubt Microsoft would repeat the same mistakes it made with the entirely botched rollout of Windows Recall. In addition, after the CrowdStrike fiasco, Microsoft made clear commitments to improve its security practices, which further adds to the confidence we should all have that these new “AI” tools are safe, secure, and private.
But wait, what’s this?
Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.
↫ Microsoft support document about the new “AI” features
Microsoft’s new “AI” features can go out and install malware without your consent, because these features possess the access and privileges to do so. The mere idea that some application – which is essentially what these “AI” features really are – can go out onto the web and download and install whatever it wants, including malware, “on your behalf”, in the background, is so utterly dystopian to me I just can’t imagine any serious developer looking at this and thinking “yeah, ship it”.
I’m living in an insane asylum.

> which further adds to the confidence we should all have these new “AI” tools are safe, secure, and private.
Private as in: your data only gets harvested by Microsoft. Can’t allow access to competitors, after all.
They claim:
> As we begin to build agentic capabilities into Windows, our commitment is to include robust security and privacy controls that empower customers to explore their potential confidently with the support of clear guidance and appropriate guardrails driven by these goals.
>
> Non-repudiation: All actions of an agent are observable and distinguishable from those taken by a user.
> Confidentiality: Agents that collect, aggregate or otherwise utilize protected data of users meet or exceed the security and privacy standards of the data which they consume.
> Authorization: Users approve all queries for user data as well as actions taken.
At least they’ll provide some way of requiring human intervention before a command runs, and agent actions will be visually distinguishable from those of the user. But this is still ripe for fraud.
Microsoft considered harmful.