Microsoft is reorganising the Windows teams. Again.
For those unaware, the Windows organization has essentially been split in two since 2018. Teams that work on the core of Windows were moved under Azure, and the rest of the Windows team (those that focused on top level features and user experiences) remained under the Windows org.
That is finally changing, with Davuluri saying that the Windows client and server teams are now going to operate under the same roof once again. “This change unifies Windows engineering work under a single organization … Moving the teams working on Windows client and server together into one organization brings focus to delivering against our priorities.”
↫ Zac Bowden at Windows Central
I mean, it’s obviously far too simplistic to attribute Windows’ many user-facing problems and failures to something as simple as this particular organisational split, but it sure does feel like it could be a contributing factor. It seems like the core of Windows is mostly fine and working pretty well, while the user experience is the area that has suffered greatly in recent years, pressured as the Windows team seems to have been to add advertising, monetisation, tons of sometimes dangerous dark patterns, and more.
I hope that bringing these two teams back together will eventually lead to an overall improvement of the Windows user experience, and not a deterioration of the core of the platform. In other words, that the core team lifts up the user experience team, instead of the user experience team dragging the core team down. A Windows that takes its users seriously and respects them could be a fine operating system to use, but reorganisations like this take a long time to have any measurable effect.
Of course, it could also just have no effect at all, or perhaps the rot has simply spread too far and wide. In a few years, depressing as it may seem, Windows 11 might be regarded as a highlight.

All software developers (not just Windows) should be forced to write their code on standard HDDs with a maximum transfer rate of 100MB/s, 2GB of RAM, and a dual-core CPU capped at 2GHz. They should be required to ensure that the software they write is functional, performs well, and provides a good user experience under those constraints. Only after that’s confirmed can they do testing on more powerful hardware.
There is too much bloat everywhere. It’s not just Windows. Linux, MacOS, even the web. Hardware from 40 years ago did amazing things because of well written software. Imagine what modern hardware could do today if given the same treatment?
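(For what it’s worth, you can get most of the way to this kind of constraint on Linux without hunting down old hardware. A rough sketch using systemd-run’s resource controls, with the limits as placeholder values mirroring the suggestion above; some of these properties need cgroup v2 controller delegation, or a system-level unit run as root:)

    # Start a shell in a transient scope with roughly the suggested limits:
    # a 2 GB RAM ceiling, two CPU cores, and at most two cores' worth of CPU time.
    systemd-run --user --scope \
      -p MemoryMax=2G \
      -p AllowedCPUs=0-1 \
      -p CPUQuota=200% \
      bash

(Disk throughput can be capped in a similar way with IOReadBandwidthMax, though that generally needs a system-level unit.)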
@tuaris
Since I am writing this on a 2009 MacBook Pro (close to the earliest 64-bit Intel systems), I obviously agree with your overall sentiment. It is a 2.5 GHz Intel Core 2 Duo with 8 GB of RAM. According to hdparm, I am rocking 64 MB/s on the original spinning rust. A bit more RAM than you suggest, but struggling to meet your specs otherwise. Without exaggeration, I found this machine sitting at my local recycling center when I dropped off some empty bottles.
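(For reference, that throughput figure comes from something like the following, with /dev/sda standing in for whatever the actual disk device is:)

    # Measure sequential read throughput from the disk
    # (buffered disk reads, no prior caching).
    sudo hdparm -t /dev/sda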
My first comment is that I do not find all software as bloated as we all say. I am only using 2 GB of RAM running kernel 6.16, Wayland, Niri, and Firefox 143.0.1 with several tabs open. Obviously it does not take much heavy lifting to pin my two cores, but for everyday use this system is smooth and responsive for things like browsing, office apps, and basic dev work. So much is in the cloud or on the other side of a browser window these days that it is amazing how much of my normal workflow I can run from this machine (including Podman and Terraform). I am not running any local LLMs, but I can still use them remotely. I absolutely do more with this machine now than it did when it was new. I used to transcode a lot of video on similar hardware back in the day, and I feel like doing it on this machine now takes substantially less time and compresses better. Codecs are, if anything, quite a bit faster than they used to be. You cannot really say that this machine does less than it used to because of the software.
My second comment is that it would be ridiculous for me to expect devs to target this hardware. They need to target the hardware that real people have. I can totally agree with you if we say that things should work perfectly well on machines that regular people bought new 5–8 years ago. That would be closer to an 8th gen i5 (4 cores), 4–8 GB of RAM, and an SSD providing 400 MB/s or so.
We completely agree that resource optimization should be a goal. Perhaps we disagree about how far we need to take that. And we may have different experiences with how bad things are. Frankly, I am constantly amazed at how well things work on older kit. I can absolutely video conference on this hardware (including with the quite bloated Microsoft Teams). The only thing stopping me is that the camera sucks. And no amount of resource efficiency is going to fix that.
And finally, while I find software “in general” to still perform well, there is certainly A LOT of software that runs poorly even on modern hardware. Don’t think I have missed that. A few programs with Electron or Java UI come to mind.
[By the way, the reason I am on this machine is that my son stole my laptop to play a game. This is the other machine in the same room as me. It works well enough that I am using it instead of walking upstairs to get a better one. In my view, that tells you how good the experience is.]
LeFantome,
Obviously it depends on specific projects, but I recognize the trends that tuaris is talking about. A recent osnews article covered this:
https://www.osnews.com/story/143180/the-size-of-adobe-reader-installers-through-the-years/
I’ve kept copies of the (Windows) tax software from 2008 through 2024. This is just the setup program, without downloadable updates.
2008: 15.3MB
2024: 143MB
While the software may have gotten marginally more complex, it has not become an order of magnitude more complex. The software contains barely any media/graphics to speak of.
Another example is a bluetooth multimeter I own with a 60MB android APK (84MB extracted).
https://www.holdpeak.com/product/320.html
Yes those screenshots show all the functionality of the application!
I didn’t cherry-pick the example, it’s just normal now. Consumers are expected to buy new hardware to satisfy software requirements. Kids today probably think “meh, 84MB is nothing”. Historically, 84MB is 131X larger than the conventional memory available to DOS applications, and those DOS applications often did much more than what this trivial app is doing.
Regardless, the industry has already spoken: bloat doesn’t matter and they’re not interested in optimizing software to use fewer resources.
I’m a bit confused by this. Codecs are faster than they used to be? I actually had to set my security cameras to use h264 instead of h265 because, despite the better compression, it was way too demanding for my computers at the time to play back. Of course, if you’ve got hardware acceleration, then the codec is not handled in software at all. Services like YouTube notably pushed WebM instead over patent issues. Usually better compression requires more processing power. WebM had decent performance but traded off quality at a comparable bitrate.
I agree with you. It doesn’t seem like a very realistic solution, even though I think tuaris is right about bloat. There is a widespread mantra in software development that developers should not bother with “premature optimization” in order to save costs. When we don’t take opportunities to optimize, it exacerbates software bloat. This can make sense to a project manager whose goal is to save dev costs for the company, but it does end up externalizing other, possibly much larger, costs onto customers. On the macro scale, a few more days or weeks of developer work may well be worth it compared to millions of customers needing to upgrade hardware. Companies rarely consider this their problem though.
@Alfman
> Codecs are faster than they used to be?
I am not saying that H.265 is less demanding than H.264. My point is that H.264 encoders and decoders are faster than they used to be (as are H.265 etc).
Back in 2008 or so I spent a lot of time in Handbrake converting video into H.264 on a Macbook Pro. If I do that today, on the same hardware, it will take less time because x264 or whatever is being used under the hood is faster than it used to be. It will also be higher quality at the same bitrate. This means lower bitrate at the same quality. This often means that video encoded on newer versions is easier for older hardware to play back than the same quality video encoded in the past.
Hardware that could not play back H.265 in 2013 (when x265 was released) may be able to handle it now with newer software. But maybe not. That was not really my point.
Codecs use more and more computationally intensive techniques as you say but it is complicated. Some techniques use a lot of compute but offer relatively minor gains. For example, I use AV1 to encode my video but I choose ‘fast decode’ in the options. This costs a percent or two of compression efficiency but makes it dramatically easier for older hardware to decode. The resulting files are far smaller than H.264.
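(To make that concrete, here is roughly what that option looks like when encoding through ffmpeg’s SVT-AV1 wrapper; this assumes a reasonably recent ffmpeg built with libsvtav1, and the preset, CRF, and filenames are just illustrative values:)

    # Encode to AV1, trading a sliver of compression efficiency for a
    # stream that is cheaper for older hardware to decode.
    ffmpeg -i input.mkv \
      -c:v libsvtav1 -preset 6 -crf 30 \
      -svtav1-params fast-decode=1 \
      -c:a copy output.mkv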
I have not done my own benchmarks but the svt-av1 guys claim that they encode AV1 as fast as you can encode H.264. This is in software. Every new release is faster. Sometimes a lot faster.
https://aomedia.org/blog%20posts/SVT-AV1-Proven-Performance-Promising-Horizons/
LeFantome,
Ah, then I misread your statement completely.
This is something that we could benchmark empirically; the archives go back to 2009, and I’m curious what the numbers would look like.
https://handbrake.fr/old.php
Alas, I don’t have a macOS-based computer or even any other computer from that era to test on. The best I could do is test using newer hardware, but it might use a very different code path than in 2009 due to hardware acceleration.
Yeah, codecs are notoriously full of toggles. I’ve done dozens of experiments in the past to try and optimize performance, quality, and compression. It’s trivial to optimize one metric if you ignore the others, but in practice they’re all important.
I see you got that from the following paragraph…
Unfortunately they haven’t linked to citations, benchmark data, or video quality comparisons for A/B testing. Not saying aomedia guys are wrong, but I’m a stickler for hard data and rigorous analysis, which I didn’t immediately find on their site.
@Alfman
> they haven’t linked to citations, benchmark data, or video quality comparisons for A/B testing
I fully support your skepticism and desire for hard facts. Respect.
The release notes for each version detail the speedups, again without evidence, though I seem to remember seeing real data along the way.
> For example, in version 3.0….
> Improved mid- and high-quality presets and quality vs. speed tradeoffs for fast-decode 2 mode:
> ~15-25% speedup for M3-M10 at the same quality levels
> ~1% BD-rate improvement for presets M0-M2
> Repositioning of the fast-decode 1 mode to produce ~10% decoder cycle reduction vs fast-decode 0 while reducing the BD-rate loss to ~1%
https://aomedia.org/blog%20posts/SVT-AV1-V3_0-is-Now-Available/
That said, as a semi-heavy user of AV1, I can confirm their claims experientially (not data I know). A 25% speed-up when you are encoding on decade old hardware is certainly something you notice. 🙂
@Alfman
> I recognize the trends that tuaris is talking about
Do not get me wrong. So do I. I am not claiming that software is not getting bigger.
It is not universal though. Ironically, the first app I went to use as an example (also not cherry-picking; I expected it to be bigger) was LibreOffice. I discovered that it has shrunk in size by 50% from version 5 to version 25: TIL.
https://downloadarchive.documentfoundation.org/libreoffice/old/5.1.6.2/deb/x86_64/
https://downloadarchive.documentfoundation.org/libreoffice/old/25.2.6.2/deb/x86_64/
A better example might be Damn Small Linux. The old version used to fit in 50 MB. The new version is proud that it fits on a single 700 MB CD.
And bigger software uses more RAM, which concerns me far more. A “lightweight” Linux desktop environment that was 150–200 MB 10 years ago wants 3–4 times that RAM now. And the JavaScript in web pages makes using a browser with 8 GB of RAM or less a lot harder.
We prioritize developer productivity over resource efficiency. Pulling in a GUI library can double the size of an application over the previous release. As can adding a crypto library that only gets used for one tiny task. Abstraction adds functionality and reduces programmer effort with the side-effect of making everything larger (and a bit slower).
So yes, programs are getting bigger. No question.
> DOS applications often did much more than what this trivial app is doing
I cannot agree with this though. Those DOS applications were objectively doing WAY less. Perhaps you liked where they spent their resource budget but they did less.
How big were the graphical, touch-sensitive, network-aware, Unicode-compatible, multi-tasking, Bluetooth multimeters that you were using on DOS? How secure were they? How much did they cost? I guess they would have been more secure because they did not support networking or Bluetooth. If they had existed, they would have cost a lot more (inflation adjusted).
Neovim on my system is half the size of your multimeter app. It is the latest release (most bloated). I would say that it does a lot more. But also a lot less. We can still use text apps if we want to. I just found this tool called VisiData. It is 5 MB on my system and it does a lot (latest version).
https://www.visidata.org/
Microsoft Excel 5.0 required 15 MB of space in 1993. Gnumeric requires 48 MB in 2025. In 1993, most people had hard drives smaller than 100 MB. So, Excel may have been 15–20 percent of your drive space back then, and many people did not have enough RAM to run it. Measured that way, LibreOffice Calc and even modern Excel could be considered less resource-intensive than past versions.
Anyway, it sounds like I am arguing again and I do not want to.
My original point was simply that performance has not degraded as much as we may think and you can run a lot more than you may realize on old hardware. In fact, probably more than you could when that hardware was new if you stick to like for like. But you have to pay attention to what you run. Some software has gotten smaller and faster. But A LOT of new software certainly expects lots of storage and RAM. Absolutely.
Mostly, we just run a lot more software than we used to and a lot that is doing stuff that we may or may not value.
Compilers may produce faster code today than 10 years ago. Windows 11 may actually have a more efficient kernel than Windows 7. The Windows 11 kernel would run just fine on 2009 hardware.
But yes, Windows 11 the operating system is massively bloated, with dozens of processes running on it that Windows 7 did not have. These take up huge amounts of RAM and processor time. So, Windows is slower. And what we want to run may be applications like IntelliJ IDEA (Java GUI), Microsoft Teams (crappy Electron app), or buckets of JavaScript in a browser.
They were absolutely doing much less. I don’t have to limit how many apps I have running anymore. I remember spending hours trying out combinations of applications to figure out which ones I could run at once. For the most part, I can run whatever I want.
An app getting the full resources of a regular desktop today is wild to think about. A very basic, very thin OS and the one running application having free rein to do whatever.
LeFantome,
I do appreciate that there are exceptions; however, I’m going to nitpick this specific example. Look at the Windows and Mac versions…
This doesn’t explain the Linux tarfiles you linked to, which makes me curious what’s going on. I extracted them for comparison. Bear in mind I found a lot of differences, but I’m focusing on the biggest difference, which was this deb file by far…
Some more digging reveals that the v5 version has two libraries that are not present in the v25 version:
libicudata.so is a Unicode data library, and LibreOffice still depends on it; however, it now seems to share the one distributed by Debian, which incidentally is now 34M, 36% larger than the older version.
I couldn’t find as much information about libfbembed.so; it looks to be part of the Firebird SQL engine. I don’t know why this has been removed, but it doesn’t seem to be a dependency any more, and opening up LibreOffice Base on my machine doesn’t give me the option to use it. I guess it’s been replaced by HSQLDB?
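(For anyone who wants to repeat this kind of digging, it is roughly the following; the .deb filenames here are placeholders for whatever you grab from the archive links above:)

    # Unpack each .deb without installing it, then compare on-disk size
    # and the bundled shared libraries.
    dpkg-deb -x libreoffice5.1-core_amd64.deb old/
    dpkg-deb -x libreoffice25.2-core_amd64.deb new/
    du -sh old/ new/
    find old/ new/ -name '*.so*' | sort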
While I can’t go through every file, I wouldn’t be surprised if there is more of the same in other deb files. It’s quite normal for Linux software to depend on platform libraries that aren’t distributed with the software, much more so than on Windows, where the common practice is to distribute everything the software needs as one giant installer.
In any case, maybe we can agree that looking at the complete Windows/Mac bundles is a better gauge of software growth than Linux packages, due to the dependency situation?
Yes, I agree with what you’re saying.
I’m not claiming DOS software did more than modern software as a generalization. But I provided a very specific example of an app that most definitely does much less than DOS software did. It displays the values from the multimeter and plots them. I could create the same features on a 32KB microcontroller connected to an LCD.
A 1–2MB APK might be expected at most, but 84MB is really a prime example of bloat.
Well, I think “smaller and faster” software is the exception to the trend these days. Not because it’s impossible, but because it’s not widely practiced any more. People like me who care about this are becoming rarer.
Indeed, that’s a big part of it.
I’ve run into this. Video calls will cause the fan to spin up.
Was this Linux? I’ve run into codec libraries not being hardware-enabled, and they were definitely harder on the CPU than the hardware-accelerated versions.
Flatland_Spider,
This was years ago, when I bought the cameras in 2018, and it was a laptop that struggled with h265 playback. I don’t remember if I tried it on Windows or just Linux. I don’t have it any more, and I don’t know whether it used hardware acceleration or not. Maybe I could enable h265 now to see if all my hardware today runs it ok.
…
OK, so I just did that, and it plays fine from my desktop, but the SBC I am using to record the streams bugs out. It does NOT do any transcoding, but uses ffmpeg to capture the stream from the camera using RTSP. I don’t even know why it would care about the codec, but obviously it’s made a difference. A newer ffmpeg might fix this, but the SBC is an ARM box, and because of that I don’t feel like it’s worth messing with right now. I’ll set it back to h264. My current feeling is “if it works, don’t fix it”.
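(For context, the capture side is presumably just a stream copy along these lines, which is why it is surprising that the codec matters at all; the camera URL and output name are placeholders:)

    # Pull the camera's RTSP stream and write it to disk without transcoding;
    # -c copy simply remuxes whatever codec the camera sends.
    ffmpeg -rtsp_transport tcp \
      -i rtsp://camera.local:554/stream1 \
      -c copy recording.mkv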
Nowadays, one of my big gripes is that most software degrades extremely ungracefully when network connectivity is spotty, bandwidth is limited, or latency is high.
And, given the huge movement to “cut the cord”, that means a lot of people, a lot of the time. Not just geeks interested in retro computing.
I would add to your requirements that devs never get a cabled internet connection, but rather one which randomly degrades at least once a week.
I fully agree. I run into crappy wireless networks all the time. Basically, whenever I leave the house there is a 95% chance of running into a bad network.
It’s amazingly hard to simulate a bad network connection. I would think someone would have written a daemon to simulate bad networks, but there’s nothing out of the box.
Flatland_Spider,
It’s pretty easy to simulate latency and packet loss on Linux with tc/netem (and iptables can randomly drop packets too); there are probably similar tools on the BSDs. However, wifi is a whole other layer of crazy! Not only is there a whole new layer on top of (or underneath?) the Ethernet frames, but wireless conditions are highly dynamic. There’s crosstalk, interference, and any given node can’t see the whole picture. Both in-network and out-of-network nodes are only partially visible to the rest of the network.
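(A minimal sketch of the wired case, assuming an interface named eth0; the numbers are arbitrary:)

    # Add 200ms (+/- 50ms) of delay and 5% random packet loss to eth0.
    sudo tc qdisc add dev eth0 root netem delay 200ms 50ms loss 5%
    # Remove the impairment again.
    sudo tc qdisc del dev eth0 root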
Over the summer I was fighting wifi issues that seemed to come out of nowhere, and only later did I make the connection between these wifi issues and a rock64 SBC connected to the network from the car (for a CAN bus project). Connectivity worked fine for things like ssh/pings, but transferring files would result in what I’ll call “wifi Kessler syndrome”. Only when the SBC was far away (which it was, because it was in the car) and transferring larger payloads would the entire network come to its knees. I’m guessing that being far away caused high-strength retransmissions to get out of hand and ruin the network for everyone else. I have no idea how common this is, but I bought a completely different wifi antenna and it exhibited the same problems at longer ranges.
I can move/add more base stations, but I don’t know how else to keep faint nodes from disproportionately harming others. Maybe newer access points have features to address this? Does anyone know?
Enjoy your ride: https://www.youtube.com/watch?v=SVGUtt7YtxI
Windows as an agentic OS… More and more AI surveillance, in plain words.