Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.
↫ Benj Edwards at Ars Technica
I’m sure this is fine and not a sign of anything at all.

On a semi-unrelated note, I volunteered to be interviewed by Claude about what I thought the future of AI should be. It was good.
I had unbelievable interactions with AI. No matter what some people think of it, or feel threatened by it, it seems to be a very efficient way to pack knowledge into a small-ish format. It seems to be very good at creatively mixing concepts together, as far as I’ve tested…
The problem is that it often hallucinates, so you need to check everything it tells you if you want to use the information…
This leads to a strange yet not unexpected phenomenon: once the dust settles, the people who are most skeptical about LLMs today will learn to use them and benefit from them, while the most enthusiastic endorsers, who believe LLMs will work FOR them rather than WITH them, will be duped… This happens with every new tool ever invented; LLMs are no exception, just the most unreliable tool invented to date. Google search before them had exactly the same effect.
I agree, but that does not make it a “product” for most people.
First: it gives random results.
Second: it has limits. Working in social services, my users have to process case files dealing with child rape or murder, so only on-premises AI can be used on our files.
Third: it has limits. Trying to deal with someone named “M. Bomb” leads to very strange results.
Fourth: MS’s commercial and technical people are totally incompetent. They try to FUD with statements like “AI requests increased by 400%”. Yes, but I am a user: I know that 95% of the requests “I” make on an office computer go unused. AI queries are fired off every time I search in Edge, start Excel, or read my mail… and I almost NEVER use the result.
Moreover, I sat through 3 demos by MS. None of them worked. I even witnessed a technician blaming the Azure people for the stability of their platform. Way to present your product, man!
Fifth: every piece of AI-enhanced software has failed miserably for me. The worst is Power Automate: it has to be queried in English, no problem for me, but when I asked something simple in English, it answered in French that it didn’t understand the query.
Microsoft is dying. It seems to be just a zombie company.
They will not slow down despite that. MS is sitting on a massive pile of cash they have been hoarding for more than a decade and never had an excuse to spend.
Of all companies tossing money at AI, MS is the least likely to just crash and burn.
It will be very telling if they do.
“many salespeople missed their quotas”
How reasonable/feasible/reachable were these “quotas”?
AI as it exists currently cannot be profitable. Microsoft alone is spending tens of billions of dollars on hardware and training costs EVERY MONTH. They are throwing AI at everything and everyone, trying to get it to stick. AI will become an increasing part of our lives, but current LLMs don’t do anything well enough to pay for the huge costs currently required.
Neurons are quite analog; they don’t really operate on a binary on/off basis. They’re more like amplifiers than switches.
Trying to emulate a system built out of analog circuitry with a digital one is quite wasteful and not very performant. The best solution for AI would be to build analog devices for the task. Digital computers are incredibly precise, but wetware is incredibly smudgy. Trying to nail that relative wishy-washiness with precise digital points isn’t the right way to go about AI, IMHO.
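For what it’s worth, here’s a minimal sketch of that switch-versus-amplifier distinction (plain NumPy, purely illustrative; the sigmoid gain is an arbitrary stand-in for the “amplifier”): a hard threshold discards input strength, while a graded response preserves it.

```python
import numpy as np

# Sweep an input drive across a range and compare two unit types.
x = np.linspace(-2.0, 2.0, 9)

# Digital switch: hard on/off threshold, input strength is discarded.
switch = (x > 0).astype(float)

# Analog amplifier: graded response that tracks input strength
# (a sigmoid with a gain of 4, chosen only for illustration).
amplifier = 1.0 / (1.0 + np.exp(-4.0 * x))

for xi, s, a in zip(x, switch, amplifier):
    print(f"input {xi:+.2f} -> switch {s:.0f}, amplifier {a:.3f}")
```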
The problem with analog circuits is that they can’t be software-defined, though a hybrid analog-digital circuit can exist, I guess.
In fact there are many of them, especially for tasks such as signal processing. But “mixed-signal integrated circuits” are quite costly to design at the moment, and they have inherent power problems, since the analog parts usually have different needs than the digital parts…
Maybe traditional computer software isn’t what we need for this. A sufficiently engineered hardware design should be more than capable. Nature did it; we just need to figure that bit out.
The123king,
I agree. I mentioned this in another post: I think analog could be the way to go. Even in the digital space we’ve proven that LLMs can still work effectively under quantization (i.e., fewer bits/less precision). Analog better reflects the physical world too, so things should be more efficient, but I’m not sure how easy it would be to repurpose fabs that have been tuned for binary transistor circuits back to the analog domain. Circuits like latches would have to be completely rethought for analog. Presumably the costs could come down with economies of scale, but at least today ADCs and DACs are relatively expensive components compared to digital ones.
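On the quantization point, a toy sketch (simple symmetric int8 rounding, not any particular production scheme; the tensor size and distribution are arbitrary) shows how little is lost at 8 bits:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy weight tensor

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize and measure how much precision was actually lost.
w_hat = q.astype(np.float32) * scale
rel_err = np.abs(w - w_hat).mean() / np.abs(w).mean()
print(f"mean relative error at 8 bits: {rel_err:.4f}")  # on the order of 1e-2
```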
Sure, an analog version of an LLM could be more efficient from a component perspective. However, it loses the modularity of the von Neumann model, where code runs on a CPU that accesses separate memory devices, and instead requires integrating the entire model into a single device. For a 2 trillion parameter model, you would need at least 2 trillion transistors, or more likely a small integer multiple of that, which is beyond physical feasibility given atomic-scale transistor sizes. Additionally, converting analog LLM outputs back into discrete symbols for human-readable language demands an analog encoding mechanism for symbolic human language, which remains foundational research rather than an off-the-shelf technology ready for deployment. Your analog LLM is likely to output ‘42’, and then you’d need another digital LLM with 2 trillion parameters to interpret what it means. Maybe a slight exaggeration. However, if you remove the human from the loop and the LLM is inferring for itself, it wouldn’t need human-readable output; it could use the raw analog results directly for its own internal processing, like the internal non-human-readable representations Google’s translation models use. So analog LLMs are an interesting segue for robots, but not chatbots, IMO.
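To put a rough number on the “2 trillion transistors” point, here is a back-of-envelope sketch using ballpark public figures (roughly H100-class transistor density and the standard lithography reticle limit; the transistors-per-parameter multiplier is an assumption):

```python
# Back-of-envelope scale check; all figures are ballpark.
params = 2e12                # model parameters
transistors_per_param = 4    # assumed small multiple (storage plus compute)
density = 1e8                # transistors per mm^2, roughly H100-class
reticle_mm2 = 858            # approximate single-die reticle limit

area_mm2 = params * transistors_per_param / density
print(f"die area needed: {area_mm2:,.0f} mm^2")
print(f"that is roughly {area_mm2 / reticle_mm2:,.0f}x the reticle limit")
```

Under those assumptions the model would need on the order of a hundred reticle-limited dies, which illustrates the single-device integration problem described above.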