Linked by Thom Holwerda on Mon 2nd Nov 2009 23:20 UTC
ZFS, the file system used in Sun's Solaris and OpenSolaris, has received built-in deduplication. "Deduplication is the process of eliminating duplicate copies of data. Dedup is generally either file-level, block-level, or byte-level. Chunks of data - files, blocks, or byte ranges - are checksummed using some hash function that uniquely identifies data with very high probability. Chunks of data are remembered in a table of some sort that maps the data's checksum to its storage location and reference count. When you store another copy of existing data, instead of allocating new space on disk, the dedup code just increments the reference count on the existing data. When data is highly replicated, which is typical of backup servers, virtual machine images, and source code repositories, deduplication can reduce space consumption not just by percentages, but by multiples."
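For the curious, the checksum-table idea the quote describes is easy to sketch in shell. The snippet below is only a toy illustration of file-level duplicate detection, not how ZFS itself works (ZFS dedups at the block level and keeps reference counts in an on-disk dedup table); sha256sum stands in for the hash function and a bash associative array (bash 4+) stands in for the table:

declare -A seen
for f in *; do
    [ -f "$f" ] || continue
    # Hash the file's contents; a strong hash identifies the data
    # with very high probability, as the quote says.
    sum=$(sha256sum "$f" | awk '{print $1}')
    if [ -n "${seen[$sum]}" ]; then
        # Duplicate data: a real dedup layer would bump a reference
        # count here instead of allocating new space.
        echo "$f duplicates ${seen[$sum]}"
    else
        seen[$sum]=$f
    fi
done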
RE[4]: I skimmed the article...
by Tuxie on Tue 3rd Nov 2009 14:35 UTC in reply to "RE[3]: I skimmed the article..."

Well, why don't you just try it for yourself?

diff <(head -c 100000 file1.avi) <(head -c 100000 file2.avi)

This will compare the first 100000 bytes of file1.avi and file2.avi.
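Note that on binary data such as AVI files, GNU diff will usually just report that the two byte ranges differ rather than show where. If you also want the offset of the first mismatching byte, cmp accepts the same process substitutions:

cmp <(head -c 100000 file1.avi) <(head -c 100000 file2.avi)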
