Linked by Andrew Hudson on Mon 29th Nov 2010 21:50 UTC
NTFS is the file system used by Windows. It is a powerful and complicated file system: few file systems provide as many features, and covering them all in full would require a book. In fact there is a book detailing NTFS, and it's already out of date. The purpose of this article is not to cover every NTFS feature, nor to cover the features it does mention exhaustively. Instead we will cover its basic structure and then describe some of its more advanced features, providing usage examples where possible. We will focus more on what it does than on how it does it. Walking the line between informative and detailed is difficult, so this article contains a lot of references for readers who hunger for more detail.
Thread beginning with comment 451707
RE[2]: Nitpicks
by malxau on Tue 30th Nov 2010 20:39 UTC in reply to "RE: Nitpicks"

"The other instance I can think of, where compression can be beneficial, is to minimize the number of writes to the underlying storage medium. In the old days of sub-200MHz CPUs and non-DMA drive controllers, the time saved on disk I/O was a win over the extra CPU work of compression and decompression."


It really depends on the meaning of "I/O". In terms of bytes transferred, compression should be a win (assuming compression does anything at all). The issue with NTFS (or perhaps filesystem-level) compression is that it must support jumping into the middle of a compressed stream and changing a few bytes without rewriting the whole file. So NTFS breaks the file up into pieces (compression units). The implication is that when a single large application read or write occurs, the filesystem must break it up into several smaller reads or writes, so the number of operations sent to the device increases.
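As a rough illustration of that splitting effect (a toy model, not NTFS's actual code), here is a short Python sketch. The 64 KiB unit size is an assumption based on the common default of 16 clusters of 4 KiB each; a real volume may differ.

```python
# Toy model: how chunked (compression-unit) storage turns one large
# application I/O into several smaller device operations.
# Assumed geometry (not queried from a real volume):
#   cluster size      = 4 KiB
#   compression unit  = 16 clusters (64 KiB)

CLUSTER = 4 * 1024
UNIT = 16 * CLUSTER  # one compression unit

def device_ops_for_read(offset: int, length: int) -> int:
    """Number of separate unit-sized reads needed to satisfy one request."""
    if length <= 0:
        return 0
    first_unit = offset // UNIT
    last_unit = (offset + length - 1) // UNIT
    return last_unit - first_unit + 1

# A single 1 MiB sequential read of an uncompressed file could be one DMA
# transfer; over 64 KiB compression units it becomes 16 smaller operations.
print(device_ops_for_read(0, 1024 * 1024))        # -> 16
print(device_ops_for_read(32 * 1024, 64 * 1024))  # -> 2 (straddles a unit boundary)
```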

Depending on the device and workload, this will have different effects. It may be true that flash handles this situation better than a spinning disk, for both reads (which don't carry a seek penalty) and writes (which will be serialized anyway). But issuing more I/Os increases the load on the host compared to a single DMA operation.

"(Then again, one must question the wisdom of using any journaling FS on flash...)"


Although flash devices typically serialize writes, this alone is insufficient to provide the guarantees that filesystems need. At a minimum we'd need to identify some form of 'transaction' so that if we're halfway through creating a file we can back up to a consistent state. And then the device needs to ensure it still has the 'old' data lying around somewhere until the transaction gets committed, etc.

There's definitely scope for efficiency gains here, but we're not as close to converging on one "consistency manager" as many people seem to think.
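To make the 'transaction' idea above concrete, here is a minimal Python sketch of write-ahead journaling under stated assumptions: log the intent, make it durable, apply the change, then record the commit. This is illustrative only; real filesystems batch, checksum, and replay journal records, and none of these names come from NTFS or any real filesystem.

```python
# Minimal write-ahead journaling sketch: BEGIN record -> apply -> COMMIT record.
import os

JOURNAL = "journal.log"  # hypothetical journal file for this toy example

def journaled_write(path: str, data: bytes) -> None:
    with open(JOURNAL, "ab") as j:
        # 1. Log what we are about to do and force it to stable storage.
        j.write(f"BEGIN {path} {len(data)}\n".encode())
        j.flush()
        os.fsync(j.fileno())

        # 2. Apply the change to the real location.
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())

        # 3. Only now mark the transaction committed. If we crash before this
        #    point, recovery can roll back (or redo) using the BEGIN record.
        j.write(b"COMMIT\n")
        j.flush()
        os.fsync(j.fileno())

journaled_write("example.txt", b"hello")
```

Note that every update costs at least two journal writes on top of the data write, which is exactly the double-write cost raised in the reply below.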

Edited 2010-11-30 20:51 UTC

Reply Parent Score: 1

RE[3]: Nitpicks
by gus3 on Wed 1st Dec 2010 00:11 in reply to "RE[2]: Nitpicks"

The bigger problem with journaling on flash is that so many FS operations need two writes: one to the journal, and then one to commit. That's not good for flash longevity. In fact, early flash storage without wear-leveling got destroyed fairly quickly by journaling: constant writing to the journal wore out the underlying flash cells in a matter of days.
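A back-of-envelope calculation shows how quickly that can happen. The numbers below are illustrative assumptions, not measurements: flash cells of that era were commonly rated for on the order of 10,000 to 100,000 program/erase cycles.

```python
# Rough longevity estimate for a journal hammering one physical block
# on a device without wear-leveling. All figures are assumed, not measured.
ERASE_CYCLES = 100_000        # assumed endurance of one erase block
JOURNAL_WRITES_PER_SEC = 2    # assumed commit rate hitting the same block

seconds_to_wear_out = ERASE_CYCLES / JOURNAL_WRITES_PER_SEC
print(f"{seconds_to_wear_out / 86_400:.1f} days")  # ~0.6 days
```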

Reply Parent Score: 1