Want to keep your Windows file systems at peak performance? Start by following this guide to tweaking cluster size, picking the best naming structure, and more.
In practice, using more than one hard drive would likely beat them all.
Disk space is cheap. If you want to eke out every last bit of performance from your NTFS file system, benchmark each cluster size and look for what works best for your setup. Try to align your cluster size with what your I/O subsystem uses. If you have a RAID controller (or software RAID) that uses 64K block sizes, then you will most likely get the best throughput with 64K clusters. IOMeter is a good tool for determining what works best for you.
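For example (the drive letter here is just a placeholder), the cluster size is set at format time:

format D: /FS:NTFS /A:64K

The /A switch sets the allocation unit (cluster) size. One caveat worth knowing: NTFS compression is only available on volumes with clusters of 4K or smaller, so picking 64K clusters also settles the compression question for that volume.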
Again, disk space is cheap; forget the compression. As for the 8.3 filenames, it is easy to determine if that is necessary in your situation or not.
Just a quick thanks for posting that article! Very interesting.
Although the gentleman from dsl.lsan03.pacbell writes that using more than one hard drive would likely beat them all, I do have to say that "likely" isn't "always"! The "too many files updating in the same directory" problem is very familiar to me. Highly concurrent access to a directory with 25,000+ files under heavy I/O load would cause processes to deadlock in Solaris 2.6 and 7 with UFS due to inode contention. Adding more disks to the stripeset did not help much; changing the applications so that instead of /var/dir/filename they used /var/dir/f/i/filename prevented a lynch mob from storming our datacenter.
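For anyone who hasn't seen that trick before, here is a rough sketch in Python (made-up names, not our actual code) of how an application can fan files out into subdirectories keyed on the leading characters of the filename:

import os

def fanout_path(base, filename, depth=2):
    # Map /var/dir + "filename" to /var/dir/f/i/filename
    subdirs = list(filename[:depth])
    target = os.path.join(base, *subdirs)
    os.makedirs(target, exist_ok=True)  # create the intermediate directories if missing
    return os.path.join(target, filename)

print(fanout_path("/var/dir", "filename"))  # prints /var/dir/f/i/filename

Each directory then holds only a small fraction of the files, which keeps lookups and locking on any single directory manageable.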
Speaking of directories, I had a fun experience today on one of my servers: a mod_perl-based application locked up after accumulating 65,535 files in its server directory. This app ran on Apache/Linux/ext2fs; any guesses as to which subsystem bit the dust, or was it just the app itself?
Yours truly,
Jeffrey Boulier
While I can appreciate the journaling, my impression is that NTFS is a dog. On two nearly identical systems (66 MHz CPU difference), the 2K/NTFS box just can't handle moderate to heavy disk I/O for very long before degrading. The Linux/ReiserFS box seems fresh after months of use, with no noticeable degradation. It's not too noticeable when the 2K disk is freshly defragged, but it only takes a few days of disk abuse before 2K starts grinding.

My wife and I do web/graphic design for about twenty clients at the moment. Both systems share several directories and thousands of files via Samba. The wife is a long way from giving up PSP and her favorite programs, but she envies the long-term "performance" stability (her system does not crash) she sees on the Linux box. The overall performance difference is difficult to quantify by the seat of the pants, but I can clearly notice it. In addition to the same ole, same ole usage on both systems, the Linux box also runs the GW/NAT/FW, Squid, BIND and NTP for two additional systems. I would post all sorts of impressive uptimes and stuff, but there is little point. Anyone beating on a couple of systems day in and day out already knows what they are going to show anyway.
I didn't say always – "likely" was part of the original comment.
Compression might actually help, considering HDD throughput is in the range of 20 MB to 130 MB per second, while memory throughput is at least ten times that on a decent system. It is just that compression is out of fashion.
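For anyone who wants to test that on their own data, NTFS compression can be toggled per directory tree from the command line (the path here is just an example):

compact /c /s:D:\projects
compact /u /s:D:\projects

The first command compresses everything under the directory, the second uncompresses it, so it is easy to benchmark a workload both ways.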
As impressive as Linux is today, users sometimes still stick with Windows for application reasons – which reflects the current state of OSS software usability from a different angle.

My preferred way of computing now is to let Linux do the dirty work while using the pretty Windows face.

If you remove the wallpapers from those Linux screenshots posted on the web, what's left looks b*tt ugly: a full screen of inconsistency, plus a dirty foot or a greasy gear.
We had to turn off compression on our NTFS servers here because of performance issues.
I like NAS best of all.
I loved the "MSFT developed a defragmentation program" line; take a close look and you'll see it's licensed from Executive Software and is a limited version of Diskeeper.
I loved the "you can turn off the 8.3 compatibility... just make sure your applications don't need it" bit. Most older programs and some installers require it! Turning it off will speed up the system and also keep you from making mistakes: if you do a "del *.1", Windows is nice enough to scan for both matching long file names and short file names (yes, the system generates the short names and then doesn't show them when you do a "dir", so you could delete unexpected files!). On a side note, you don't need the Resource Kit to turn off 8.3 compatibility; run "regedit", look in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem, and set NtfsDisable8dot3NameCreation = 1.
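If you'd rather not click through regedit by hand, the same setting can be applied by merging a small .reg file (existing short names aren't removed; the change only affects files created afterwards):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001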
This has been a pain in the neck for me for several years.
It never takes very long, a few weeks to a few months, for my Windows systems to start lagging due to fragmentation.
Things improved SLIGHTLY when I moved pagefile.sys to its own partition, but overall I still have to schedule periodic defragging, which, after my experience with *nix-based filesystems that don't seem to fragment to any appreciable degree, is just an utter waste of time.
NT 4.0 on an Alpha is even worse. You’re completely out of luck with NTFS fragmentation. There simply aren’t any utilities out there to fix it.
Your only option seems to be to back everything up to tape, delete the files, and restore from tape, with all the risks that method entails. What a PITA!
Thanks in part to discussion here, I’ve updated the article at ZDNet UK.
1. It now makes it clear that the defrag tool is a bundled version of Diskeeper.
2. It points out that you can set cluster size independently of partition size — though Microsoft recommends that you use the default for a given partition size.
Glad it raised your interest.
Peter Judge
Editor, Tech Update
ZDNet UK