Linked by Rahul on Mon 13th Oct 2008 21:19 UTC
The Linux Foundation is organizing an end user collaboration summit this week. A major topic will be a presentation on the upcoming filesystems, Ext4 and Btrfs. Ted Ts'o, a Linux kernel filesystem developer on sabbatical from IBM and working for the Linux Foundation for a year, has talked about the two-pronged approach for the Linux kernel: taking an incremental approach with Ext4 while simultaneously working on the next-generation filesystem, Btrfs. Read more for details.
Permalink for comment 333998
Arun
Member since:
2005-07-07

"Here is a long series of articles... He goes through all the alternatives and chooses ZFS.

Ok. So from the articles, we can say that ZFS scales all the way down to 4GB of memory and an AMD Athlon X2 BE-2350 dual core processor running in 64 bit mode.
"

No, people are running ZFS on 512 MB boxes. People are also running it in VMware VMs. ZFS-on-FUSE is used on Linux; there are numerous articles if you search.

People put in a boatload of memory because ZFS performs best when it can cache a lot. I fail to see how you can misconstrue that to mean it needs a lot of memory; less memory just won't give you the best throughput.
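For anyone who wants to bound that cache rather than let it grow, the ARC maximum can be capped. A minimal sketch for OpenSolaris of that era; the 512 MB value (0x20000000 bytes) is just an illustrative choice, not a recommendation:

```shell
# Cap the ZFS ARC at 512 MB (the value is illustrative; pick one for your box).
# On OpenSolaris this line goes in /etc/system and takes effect after a reboot.
echo 'set zfs:zfs_arc_max = 0x20000000' >> /etc/system
```

With the cap in place ZFS still caches, it just stops competing with applications for the rest of RAM.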

http://www.opensolaris.org/jive/thread.jspa?threadID=73990&tstart=-...

There are posts there where people have been using ZFS with 512 MB.
"My home ZFS server runs with only 1 GB of RAM. It achieves 430 MB/s sequential
reads and 220 MB/s writes. This is very good, given that its primary task is
to serve large files over NFS."


"speed" is another vague term. Do you mean throughput, or latency. Local I/O,
or over NFS. Etc. FYI a small amount of RAM usually impacts random I/O
workloads when they would otherwise fit in memory, but does not reduce the
throughput of sequential I/O because prefetching algorithms work just fine as
all they need is a few tens of MB of memory.

..........

As a matter of fact, until March 2008 I had been running snv_55b for over a year with only 512 MB to serve a 2.0-TB pool over NFS. Again, the performance (throughput) was very acceptable. If that's what the OP needs, then 512 MB would be *technically* sufficient, even if given the current prices 1 GB would make more sense.

-marc
"
