Linked by Thom Holwerda on Wed 7th Mar 2007 17:53 UTC, submitted by SReilly
Google is developing a program to help academics around the world exchange huge amounts of data. The firm's open source team is working on ways to physically transfer huge data sets up to 120 terabytes in size. "We have started collecting these data sets and shipping them out to other scientists who want them," said Google's Chris DiBona. Google sends scientists a hard drive system and then copies it before passing it on to other researchers.
by zbrimhall on Wed 7th Mar 2007 20:45 UTC in reply to "ZFS?"

Unrelated problems.

ZFS, being a filesystem, is unrelated to data transfer. I'm sure it could be useful to the project in other ways, though: ext3 filesystems, for example, have a size limit in the neighborhood of 8-16 TB. You could probably use some kind of logical volume manager to concatenate a bunch of filesystems together, but why do that if ZFS can manage such datasets without breaking a metaphorical sweat?
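A quick back-of-the-envelope check on that ext3 ceiling (a sketch, assuming classic ext3 with 32-bit block numbers and the usual 2 KiB / 4 KiB block size options; exact limits also depend on kernel and architecture):

```python
# ext3 addresses blocks with 32-bit numbers, so a filesystem tops out
# around 2^32 blocks times the block size.
MAX_BLOCKS = 2**32

def max_fs_bytes(block_size: int) -> int:
    """Approximate upper bound on an ext3 filesystem for a given block size."""
    return MAX_BLOCKS * block_size

for block_size in (2048, 4096):
    tib = max_fs_bytes(block_size) / 2**40
    print(f"{block_size}-byte blocks -> ~{tib:.0f} TiB max filesystem")
```

Which lands exactly on the 8-16 TB range: ~8 TiB with 2 KiB blocks, ~16 TiB with 4 KiB blocks, both far short of a 120 TB dataset on a single filesystem.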

Of course, the article says nothing about the actual technology Google is using for these "hard drive systems," and I can't recall off the top of my head what the state of ZFS on Linux is (is it working now through FUSE?).

As for the problem at hand (transferring enormous datasets), it's really just Google's take on the old proverb: "never underestimate the bandwidth of a station wagon full of backup tapes speeding down the highway."
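The proverb holds up to arithmetic. A sketch with assumed, illustrative numbers (a 24-hour courier turnaround for a full 120 TB drive system, versus a hypothetical 100 Mbit/s research link; neither figure is from the article beyond the 120 TB):

```python
DATASET_BYTES = 120e12          # 120 TB, the figure from the article
SHIPPING_SECONDS = 24 * 3600    # assumption: overnight courier delivery
LINK_BPS = 100e6                # assumption: a 100 Mbit/s network link

# Effective bandwidth of the shipped drives, in Gbit/s
sneakernet_gbps = DATASET_BYTES * 8 / SHIPPING_SECONDS / 1e9
print(f"shipped drives: ~{sneakernet_gbps:.1f} Gbit/s effective")

# Time to push the same dataset over the network, in days
network_days = DATASET_BYTES * 8 / LINK_BPS / 86400
print(f"100 Mbit/s link: ~{network_days:.0f} days")
```

Roughly 11 Gbit/s of effective bandwidth from the courier, versus over a hundred days on the wire. Even a much faster link doesn't close that gap for datasets this size.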
