One of the main reasons some people avoid updating their PCs is that “it makes it slower”, especially with Windows 10’s software-as-a-service approach, where it gets so-called “feature updates” twice a year. But is that actually true?
Today we’re gonna find out how much Windows 10’s performance has changed over time, by benchmarking 10 elements of the OS experience.
As much as I dislike Windows, performance really was never an issue for me. It’s been responsive and snappy ever since Windows 7, but it’s still interesting to see the changes in performance over Windows 10’s lifetime.
I don’t know about those benchmarks, but I still have a “Disable AppX.reg” file on my USB flash drive as an “if the system slows down after an update, double-click this file on your desktop” file from the last time I helped someone with their Windows PC.

On HDD systems, Windows slows down after an update because the update process fragments the system files. Back in the dark days of spinning hard disks, I would update the system and a whole 2 minutes would be added to the boot time (clean boot, after the post-update reboot). This was more pronounced on systems with nearly full C: drives, because the new system files (or parts of them) ended up on the inner (slower) tracks of the platters. Defragging the disk made performance return to normal. Keep in mind I don’t know whether Windows’ defrag tool fully defrags the system files, so I can’t tell whether performance fully returned to normal, but from a user’s point of view it felt the same as before.
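If you want to sanity-check that on your own machine, here’s a minimal sketch (assuming the stock defrag.exe that ships with modern Windows and an elevated prompt); it only asks for an analysis pass, so it reports the fragmentation level without changing anything:

```python
# Minimal sketch: ask the built-in defrag.exe to *analyse* a volume (no changes
# are made) and print its fragmentation report. Needs an elevated prompt.
import subprocess

def analyse_volume(volume="C:"):
    # /A = analysis only, /V = verbose output (fragmentation %, file stats, ...)
    result = subprocess.run(["defrag", volume, "/A", "/V"],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)

analyse_volume("C:")
```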
Most people back then underestimated how much a fragmented disk killed performance, especially on laptops, which were stuck with 5400 RPM HDDs and sluggish seek times until they were eventually replaced with SSDs. Of course, Windows got significantly fatter in the meantime.
Various people on campus would marvel at how fast my HP NX9420 ran Windows 7, despite it being a junker by that point and HP using the slowest HDDs of any major OEM (Acer used the fastest). The answer? I defragged after each update (plus some startup hygiene and using Security Essentials instead of other AVs).
BTW, I never understood why Windows doesn’t reserve a partition at the start of the disk and use fixed allocation. I don’t know whether desktop Linux does this with the /bin and /sbin partitions; maybe Thom will enlighten us.
It still happens on HDD-based systems to this day; I can tell you a lot of industrial systems still prefer HDDs over SSDs. The good thing is that most of these systems run 24×7, and Windows 10 is far better at sorting itself out in the background in this regard. I find impatient users who “help by manually adjusting a few things” often make it all worse!
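For what it’s worth, that background housekeeping is driven by a stock scheduled task; a rough sketch like the one below (the task path is the Windows 10 default, adjust if your image differs) shows whether it’s enabled and when it last ran:

```python
# Rough sketch: query the default "Optimize Drives" scheduled task on Windows 10
# to see its state and last run time. Read-only; assumes the stock task path.
import subprocess

TASK = r"\Microsoft\Windows\Defrag\ScheduledDefrag"  # default path on Windows 10
result = subprocess.run(["schtasks", "/Query", "/TN", TASK, "/V", "/FO", "LIST"],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```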
Windows is a capable OS underneath, with a userland and file system arrangement tailored towards ease of use. Because of this it works well out of the box for a broader spectrum of users than Linux et al.
File system fragmentation is a topic of its own. The Windows defrag tool is fairly dumb. Other applications defrag a bit better, e.g. handling the MFT, and placing files by file order or Windows optimisation order, etcetera.
I used to defrag my Windows OS/application partition after updates or new application installs. I can’t remember the last time I did this; after checking, fragmentation is 1%, and data disc fragmentation is 0%. Browsers are the worst offenders for causing fragmentation and drive wear. My browser scratch files are on a 1 GB memory disc which is abandoned at every shutdown.
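For the curious, a sketch of that kind of setup, assuming a Chromium-based browser and a RAM disc already mounted at R: (both the browser path and the cache directory below are just placeholders):

```python
# Sketch: launch a Chromium-based browser with its disk cache redirected to a
# RAM disc so scratch files never touch the SSD/HDD. Paths are placeholders.
import subprocess

BROWSER = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # placeholder
RAM_DISC_CACHE = r"R:\browser-cache"                                # placeholder

subprocess.Popen([BROWSER, f"--disk-cache-dir={RAM_DISC_CACHE}"])
```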
I haven’t set foot in Control Panel/Settings for years without a specific purpose, like changing fresh-install defaults, checking network settings after a change of network device, or checking updates.
I’m not interested in meddling, overclocking, or go-faster-stripes “optimising”. I don’t know Windows well enough, and the Microsoft people who do are paid to take care of that.
I think you know most of this, but stating it for completeness:
Defrag is a creature from a different era. In the late 80s/early 90s, you have a DOS-based system (no MMU), so running a program typically means reading entire files. You have a 20/40 MB hard drive, so low thousands of files total. Keeping files contiguous means a substantial reduction in seeking to load a program.
In the early 2000s, it takes hundreds of files to boot a machine. Keeping each file together is good and all, but there are going to be tons of seeks across all the files. Keeping boot files on the outer tracks is about two things: faster transfers (a fixed rotation speed means more disk area is moving under the head per second), and keeping files close together, so that when a seek occurs it isn’t as far and is hence faster.
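A back-of-the-envelope illustration of the transfer-rate point (the radii below are made-up but plausible figures for a 3.5″ platter, not measurements): at a fixed RPM and roughly constant areal density, the data streaming under the head per second scales with track radius.

```python
# Back-of-the-envelope: at constant RPM and roughly constant areal density,
# sequential throughput is proportional to track radius.
# The radii are illustrative values for a 3.5" platter, not measurements.
outer_radius_mm = 46.0  # near the platter edge
inner_radius_mm = 21.0  # near the spindle

ratio = outer_radius_mm / inner_radius_mm
print(f"Outer tracks stream roughly {ratio:.1f}x faster than inner tracks")
# ~2.2x, which is why boot files were pushed towards the outer tracks.
```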
In the early 2010s, it takes thousands of files to boot a machine. Rather than face all those seeks, things like hybrid boot allow all of the boot contents to be stored in a single file. At this point defragmenting system files isn’t that important: all you need is that single file to be defragmented. Once running, the system is demand paging anyway, and doing so in response to non-system-file activity, so the head is going to be jumping from application data to some subset of a system file, and defrag won’t help that much at all.
I’ve seen systems “fall off” the boot performance path often enough too. One other factor is the boot prefetcher, which tries to remember what content is required to boot the machine. This needs to re-train based on the current, updated files; back in the day it took 8 boots to fully retrain. Note that (IMHO) the application prefetcher wasn’t great in the hard drive era because background prefetching could easily interfere with a foreground task. With SSDs, which want parallelism anyway, prefetching is a lot less impactful (although arguably also less beneficial in absolute terms).
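If you want to see where a given Windows 10 box stands on those two mechanisms, something like this read-only sketch pulls the usual registry values behind hybrid boot (fast startup) and the prefetcher; the key paths below are the standard ones, but treat the output as informational only:

```python
# Read-only sketch: inspect the registry values behind hybrid boot (fast
# startup) and the prefetcher on Windows 10.
import winreg

def read_dword(path, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None  # key or value not present on this build

hiberboot = read_dword(
    r"SYSTEM\CurrentControlSet\Control\Session Manager\Power",
    "HiberbootEnabled")   # 1 = fast startup (hybrid boot) enabled
prefetch = read_dword(
    r"SYSTEM\CurrentControlSet\Control\Session Manager"
    r"\Memory Management\PrefetchParameters",
    "EnablePrefetcher")   # 0 = off, 1 = apps, 2 = boot, 3 = both
print(f"HiberbootEnabled={hiberboot}, EnablePrefetcher={prefetch}")
```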
Windows has gone to great lengths to avoid a reserved partition because it eliminates storage fungibility. If the system grows or the user data grows, space should be reallocated. Note that Windows uses a page file rather than swap partition for exactly this reason too. Apple have been heading down a path of having a single storage pool that hosts multiple partitions to address this, which creates fungibility, but means those partitions don’t occupy specific disk regions so it wouldn’t be that helpful here. Frankly, optimizing for boot from hard drives at this point seems very backward looking, which is presumably underpinning Apple’s direction.
As for desktop Linux, most distros are moving in the other direction. /bin and /usr/bin are often symlinked together, and if not, individual files are, since most users have them on the same partition. A lot of distros no longer support splitting those partitions, since it imposes constraints on the boot sequence. I’ve often used a separate /boot (an old habit from when BIOSes couldn’t address the whole disk), which enables the kernel to load efficiently, but there’s still tons of random IO to get to a functioning system after the kernel is running.
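A trivial way to see that usr merge on any given distro is just to check whether /bin and /sbin are symlinks into /usr, e.g.:

```python
# Sketch: check whether this Linux install has the "usr merge", i.e. whether
# /bin and /sbin are just symlinks into /usr.
import os

for path in ("/bin", "/sbin"):
    if os.path.islink(path):
        print(f"{path} -> {os.readlink(path)}")  # typically "usr/bin" / "usr/sbin"
    else:
        print(f"{path} is a real directory (no usr merge here)")
```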
I’ve not noticed any slowdowns in terms of booting/rebooting and shutting down.
It would be great if everything were retested on modern hardware with a super-fast SSD to eliminate any disk access speed issues, as opposed to some weird VM configuration (did he use an HDD? An SSD? Something else entirely?).