Linked by Rahul on Sat 18th Oct 2008 11:29 UTC
Linux: While Ext4 was originally merged in 2.6.19, it was marked as a development filesystem. It has been a long time coming, but as planned, Ext4dev has been renamed to Ext4 in 2.6.28, indicating its level of maturity and paving the way for production-level deployments. Ext4 filesystem developer Ted Ts'o has also endorsed Btrfs as a multi-vendor, next-generation filesystem, and with interest from Andrew Morton as well, Btrfs is planned to be merged before 2.6.29 is released. It will follow a similar development process to Ext4 and initially be marked as development only.
Thread beginning with comment 334205
RE[2]: relevant?
by sbergman27 on Sat 18th Oct 2008 20:30 UTC in reply to "RE: relevant?"
sbergman27 Member since:
2005-07-24

"Of course it is. Believe it or not, such technical challenges are OS-neutral ;)"

Incorrect. Fragmentation and its consequences are filesystem (and workload) specific. Unix/Linux filesystems historically, with the notable exception of Reiser4, have been quite resistant to fragmentation.

For example, I just spot-checked my busiest server, formatted with ext3, whose workload consists of:

- 60 concurrent Gnome desktop sessions via XDMCP and NX (Web/Mail/Wordprocessing/Spreadsheet, etc.)

- 100 concurrent sessions of a point of sale package

- Intranet web/application server

- Database server

- Internal dhcp server

- Internal name server

- Samba file and print server for legacy Windows stations

- Other stuff

It has been operating for a little under a year and currently exhibits only 6.6% fragmentation.
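(If anyone wants to run the same kind of spot check: the figure comes from the "% non-contiguous" number in e2fsck's summary line. A rough Python sketch, assuming e2fsprogs is installed; /dev/sda1 is purely a placeholder, and the check should be run against an unmounted or read-only volume:)

    # Sketch only: pull the "% non-contiguous" figure from e2fsck's summary line.
    # /dev/sda1 below is a placeholder, not a recommendation.
    import re
    import subprocess

    def fragmentation_percent(device: str) -> float:
        # -f forces a full check, -n answers "no" to every prompt so the
        # filesystem is never modified
        out = subprocess.run(["e2fsck", "-f", "-n", device],
                             capture_output=True, text=True, check=False).stdout
        match = re.search(r"\(([\d.]+)% non-contiguous\)", out)
        if match is None:
            raise RuntimeError("no non-contiguous figure in e2fsck output")
        return float(match.group(1))

    print(fragmentation_percent("/dev/sda1"))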

That said, there may be workloads that result in more fragmentation. But low to mid single-digit percentages are what I typically see. In fact, in my 20+ years of administering various Unix/Linux systems, I have never once been in a situation where I felt any need for a defragmenter. But as a friend of mine was fond of saying, "it's better to have it and not need it than to need it and not have it".

Unfortunately, considering the number of new Linux users coming from a Windows background, I expect to see lots of senseless recommendations to "defrag the hard drive" in the not-too-distant future, for "performance" reasons and even as an attempt to fix problems. Remember that Linspire was forced, by popular user request, to add a virus checker to their distro, because "everyone knows" that it's dangerous to run a computer without one; it might get a "computer virus".

Edited 2008-10-18 20:33 UTC

Reply Parent Score: 4

RE[3]: relevant?
by Morph on Sat 18th Oct 2008 21:29 in reply to "RE[2]: relevant?"
Morph Member since:
2007-08-20

"Unix/Linux filesystems historically, with the notable exception of Reiser4, have been quite resistant to fragmentation."

Why is that? Someone more knowledgeable than I am could probably point to specific aspects of Unix filesystem design that reduce fragmentation. But fragmentation was an issue the designers had to consider when those filesystems were created, and it is still an issue for people working on new filesystems today. How well it is handled by particular operating systems or particular filesystems is a separate question. (FAT certainly was notoriously bad.)

Edited 2008-10-18 21:30 UTC

Reply Parent Score: 0

RE[4]: relevant?
by sbergman27 on Sun 19th Oct 2008 00:45 in reply to "RE[3]: relevant?"
sbergman27 Member since:
2005-07-24

"Why is that?"

I think it likely has to do with their respective histories. Unix started out on the server and evolved onto the desktop; DOS/Windows started out on the desktop and evolved onto the server. Unix filesystems were designed in an environment where the machine was expected to run, run, run. Downtime was expensive and to a great extent unacceptable. Defragmenting the filesystem would have meant downtime, and was thus unacceptable. Current community culture reflects that tradition.

Windows culture tends more toward resigning oneself to fragmentation (and viruses, for that matter) and then running a tool (defragger, antivirus) to "fix" the problem. When NTFS was designed, Windows users were already used to the routine of regular defrags and would likely do it whether the filesystem required it or not. So why make fragmentation avoidance a high priority?

Edited 2008-10-19 00:46 UTC

Reply Parent Score: 1

RE[4]: relevant?
by _txf_ on Sun 19th Oct 2008 14:24 in reply to "RE[3]: relevant?"
_txf_ Member since:
2008-03-17

It has to do with the fact that Unix filesystems tend to spread files out across the volume rather than allocating them immediately one after the other.

This means there is free space left after each file for edits, so appended data gets allocated contiguously with the file rather than as a fragment in the next empty space.
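The effect of that slack space is easy to see with a toy model. This is not how any real allocator is implemented, just an illustration of packing files back to back versus leaving a gap after each one:

    # Toy model only: compare packing files back to back with leaving slack
    # after each file, then append to every file and count fragmented ones.
    def simulate(gap_blocks, n_files=100, size=8, growth=4):
        next_free = 0
        files = []                           # each file: list of (start, length) extents
        for _ in range(n_files):
            files.append([(next_free, size)])
            next_free += size + gap_blocks   # optionally leave slack after the file
        for extents in files:                # now every file grows a little
            start, length = extents[-1]
            if growth <= gap_blocks:
                extents[-1] = (start, length + growth)   # fits in the slack: contiguous
            else:
                extents.append((next_free, growth))      # jumps elsewhere: a new fragment
                next_free += growth
        return sum(1 for extents in files if len(extents) > 1)

    print("fragmented files, packed tightly:", simulate(gap_blocks=0))   # all 100
    print("fragmented files, slack left:    ", simulate(gap_blocks=8))   # none

With those figures, the tightly packed layout fragments every file after a small append, while the layout with slack fragments none of them.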

Either way, this article explains it far better than I can:

http://geekblog.oneandoneis2.org/index.php/2006/08/17/why_doesn_t_l...

Reply Parent Score: 3

RE[3]: relevant?
by lemur2 on Sun 19th Oct 2008 09:51 in reply to "RE[2]: relevant?"
lemur2 Member since:
2007-02-17

- 60 concurrent Gnome desktop sessions via XDMCP and NX (Web/Mail/Wordprocessing/Spreadsheet, etc.)

- 100 concurrent sessions of a point of sale package

- Intranet web/application server

- Database server

- Internal dhcp server

- Internal name server

- Samba file and print server for legacy Windows stations

- Other stuff


Nice. Very nice.

What distro do you use, and how much RAM does all this take?

Have you thought of splitting the load between a small number of lesser servers?

Reply Parent Score: 2

RE[4]: relevant?
by hollovoid on Sun 19th Oct 2008 13:39 in reply to "RE[3]: relevant?"
hollovoid Member since:
2005-09-21

I'm kinda interested in that too. I consider my system kinda beastly, and I'm positive it couldn't even come close to handling all of that.

Reply Parent Score: 2

RE[4]: relevant?
by sbergman27 on Sun 19th Oct 2008 14:03 in reply to "RE[3]: relevant?"
sbergman27 Member since:
2005-07-24

"What distro do you use, and how much RAM does all this take?"

Currently F8 x86_64, though if I had it all to do over I would have stuck with CentOS. Fedora was pretty rough for the first couple of months we ran it, but things have stabilized nicely.

8 GB of memory. I target about 128 MB per desktop user. 64-bit costs some memory up front but has saner memory management. I was running something like 50 desktops on 4 GB with x86_32 CentOS 5, but sometimes zone_normal was touch and go; I had to reserve a lot of memory for it, which cut into the buffer and page caches a bit. (Linux does a wonderful job with shared memory. Single-user desktop admins don't get to see all the wonders it can perform.)
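(Rough arithmetic with the figures above; the 128 MB per user is just the target mentioned, and page sharing keeps the real resident total well below this:)

    # Back-of-the-envelope check using the figures quoted above.
    users = 60
    per_user_mib = 128                        # target per desktop session
    total_gib = 8
    desktop_gib = users * per_user_mib / 1024
    print(f"desktop sessions: ~{desktop_gib:.1f} GiB of {total_gib} GiB")
    print(f"nominally left for everything else: ~{total_gib - desktop_gib:.1f} GiB")

It only works out because much of each 128 MB is shared libraries and page cache counted once, which is the shared-memory point above.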

BTW, it's a dual Xeon 3.2 GHz box, and the processor usage is only moderate. (That's why I chuckle a bit when I hear people talk as if they think multicore is likely to benefit the average user. My 60 desktop users don't even keep two cores overly busy!)

With x86_64, no, I don't feel any great need for more servers. I don't have the luxury, for one thing, and more servers means more administrative overhead. That's one reason virtualization is such a buzzword today.

Edited 2008-10-19 14:14 UTC

Reply Parent Score: 2

RE[3]: relevant?
by Buck on Mon 20th Oct 2008 08:22 in reply to "RE[2]: relevant?"
Buck Member since:
2005-06-29

"For example, I just spot-checked my busiest server..."

Talk about putting all your eggs in one basket.

Reply Parent Score: 2