Linked by Thom Holwerda on Wed 7th Mar 2007 17:53 UTC, submitted by SReilly
Google is developing a program to help academics around the world exchange huge amounts of data. The firm's open source team is working on ways to physically transfer data sets up to 120 terabytes in size. "We have started collecting these data sets and shipping them out to other scientists who want them," said Google's Chris DiBona. Google sends scientists a hard drive system, then copies the data before passing it on to other researchers.
Thread beginning with comment 219362
RE: ZFS?
by zbrimhall on Wed 7th Mar 2007 20:45 UTC in reply to "ZFS?"

Unrelated problems.

ZFS, being a filesystem, is unrelated to data transfer. I'm sure it could be useful to the project in other ways, though: ext3 filesystems, for example, have a size limit in the neighborhood of 8-16 TB. You could probably use some kind of logical volume manager to concatenate a bunch of filesystems together, but why do that if ZFS can manage such datasets without breaking a metaphorical sweat?
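
(Where that 8-16 TB figure comes from, incidentally: ext3 uses 32-bit block numbers, so the cap is 2^32 blocks times the block size. A quick sanity check in Python, just to show the arithmetic:)

```python
# ext3 addresses blocks with 32-bit block numbers, so the filesystem
# size cap is roughly 2**32 blocks times the block size. The 2 KiB and
# 4 KiB block sizes give the 8-16 TB neighborhood mentioned above.
for block_size in (2048, 4096):
    max_bytes = 2**32 * block_size
    print(f"{block_size // 1024} KiB blocks -> {max_bytes // 2**40} TiB cap")
```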

Of course, the article says nothing about the actual technology Google is using for these "hard drive systems," and I can't recall off the top of my head what the state of ZFS on Linux is (is it working now through FUSE?).

As for the problem at hand--transfers of enormous datasets--it's really just Google's implementation of the old proverb "never underestimate the bandwidth of a station wagon full of backup tapes speeding down the highway."
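
To put rough numbers on it (the courier time and link speed below are my own illustrative guesses; only the 120 TB figure comes from the article):

```python
# Back-of-envelope comparison for the proverb above. Assumes ~2 days
# door to door for the courier and a dedicated 1 Gbit/s link; both
# figures are made-up illustrative assumptions.
DATASET_BITS = 120e12 * 8      # 120 TB expressed in bits
COURIER_SECS = 48 * 3600       # assumed courier time: ~2 days
LINK_BPS = 1e9                 # assumed dedicated gigabit link

print(f"courier: {DATASET_BITS / COURIER_SECS / 1e9:.1f} Gbit/s effective")
print(f"gigabit link: {DATASET_BITS / LINK_BPS / 86400:.1f} days of continuous transfer")
```

The station wagon wins by a comfortable margin, which is presumably why Google ships drives.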


RE[2]: ZFS?
by Mathman on Fri 9th Mar 2007 04:33 in reply to "RE: ZFS?"

Correction: you'd use LVM to join a bunch of physical volumes into a volume group, then carve a logical volume out of it. You'd still have to put a filesystem on the logical volume.
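
In shell terms that's pvcreate, vgcreate, lvcreate, then mkfs; here's a rough sketch driving those tools from Python (the device names and volume names are placeholders, and it needs root plus the lvm2 tools installed):

```python
# Sketch of the PV -> VG -> LV -> filesystem workflow described above.
# /dev/sdb, /dev/sdc, "bigvg" and "biglv" are placeholder names.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))          # echo each command before running it
    subprocess.run(cmd, check=True)    # stop on the first failure

disks = ["/dev/sdb", "/dev/sdc"]

for disk in disks:
    run(["pvcreate", disk])            # label each disk as a physical volume
run(["vgcreate", "bigvg", *disks])     # pool the PVs into one volume group
run(["lvcreate", "-l", "100%FREE", "-n", "biglv", "bigvg"])  # one big LV
run(["mkfs.ext3", "/dev/bigvg/biglv"]) # the LV still needs a filesystem on top
```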
