Linked by Eugenia Loli on Mon 28th Apr 2003 15:48 UTC
Original OSNews Interviews Today we feature an in-depth interview with three members of FreeBSD's Core (Wes Peters, Greg Lehey and M. Warner Losh) and also a major FreeBSD developer (Scott Long). It is a long read, but we touch on a number of hot issues, from the Java port to corporate backing, the Linux competition, the 5.x branch and how it stacks up against the other Unices, UFS2, the possible XFree86 fork, SCO and its Unix IP situation, even re-unification of the BSDs. If you are into (any) Unix, this interview is a must read.
Re: samb
by Bascule on Tue 29th Apr 2003 03:07 UTC

Scott Long: [regarding SMPng] However, as more subsystems and drivers are converted to use it, we feel that the result will be faster and more scalable than what is available from Linux. There are also two related projects that will provide vastly improved threading support to applications, and will hopefully be another reason for people to look at FreeBSD.

samb: Are there any benchmarks to support this 'feel'? If you skim the Linux kernel mailing list, you will see that a number of different benchmarks are regularly posted, and the regressions and optimizations of the development kernel are being closely monitored. I see nothing like that in the FreeBSD camp. Not on the public mailing lists, anyway.

The only benchmarks I've encountered regarding things like the performance of the I/O scheduler, VMM, and process manager were my own dbench tests, in which FreeBSD 5.0 held a commanding lead over Linux 2.4 in terms of total throughput.

Of course, despite my own caveats, these results were contested by many, to the point of being dismissed as meaningless.

Nevertheless, I have yet to see benchmarks to the contrary.

Care to post some, samb? I'd be especially interested in system throughput benchmarks of Linux 2.6 versus FreeBSD 5.0.

Scott Long: While a lot more development money may be going into Linux right now, FreeBSD is helped by the 20+ years of development and maturity that the BSD base brings.

samb: And how exactly does that help with regard to the development of 'new' technologies like SMP, threading, NUMA, and so forth?

I don't think anyone is arguing anything along the lines of FreeBSD having better NUMA support than Linux.

As far as threading goes, Linux and FreeBSD took two very different paths initially, and both were terrible implementations. Linux suffered awful context switching penalties with its _clone() based implementation, and FreeBSD's threads couldn't scale across processors, and didn't provide support for multiple concurrent system calls from within different threads due to its userspace implementation.

Only now has Linux solved its threading woes with NPTL. FreeBSD is trailing behind in adding KSE support to its userland libraries and finishing KSE support in the kernel.

As far as SMP goes, Linux and FreeBSD took a virtually identical approach. Both added initial support for SMP via a global lock on the entire kernel, the Big Kernel Lock (BKL) in Linux and the Big Giant Lock (BGL) in FreeBSD.

The main difference is in the move to a more modern and scalable SMP implementation. Linux has been slowly and progressively increasing its locking granularity. FreeBSD did virtually nothing until the 5.x series to move from the BGL. However, FreeBSD 5.0's locking granularity is much finer than in Linux. Furthermore, the inclusion of scheduler entities will provide a very nice tradeoff between the advantages of both kernel and userland threads implementations.

But back to the issue at hand. I think, as everyone knows, Scott Long's point stands out most with respect to FreeBSD's VM subsystem, which is, at this point, a very well tuned and mature implementation. Linux has a very eclectic VM implementation, especially in 2.6, relying almost exclusively on new, untested, and untuned technologies. This lack of testing and maturity is what led to the VM switch in the 2.4 series.

I think that primarily because of that very incident many people are wary about the code quality of the mainline Linux kernel. Certainly 2.4 has grown to be quite mature, but will we see a similar incident with 2.6? One can't really know...

The bottom line is that FreeBSD's legacy code base does not result in a development path that is markedly different from that of Linux. The simple fact remains that Linux has more zealots, therefore more mindshare, and consequently more developers and corporate support.

Scott Long: FreeBSD, OpenBSD, and NetBSD have a much lower barrier for entry for developers in that the official source trees are publicly available via CVS, and there are many more developers with CVS commit access that can funnel in changes.

samb: And despite this supposed lower barrier of entry, there seems to be a lot more happening on the Linux side of things.

Well obviously, Linux has considerably more mindshare, and consequently it receives the bulk of corporate backing.

I think many FreeBSD users are sore that Linux began receiving this corporate backing not for its technical merits but simply for the sheer amount of zealotry surrounding it.

Of course, the result of such corporate backing and its significantly larger mindshare is that Linux has eclipsed FreeBSD in most areas.

samb: I mean, projects like KSE and SMPng, which are supposed to be major new features in the 5.x branch, aren't even done yet, and even when they finally get there, no one really seems to know what kind of performance boosts they will bring to the table. In practice, that is.

No, no one does. The proof is in the code, and the code isn't there yet. However, the following is known: Sun switched from an M:N threads implementation to a 1:1 implementation in Solaris. While Solaris's M:N implementation didn't support scheduler activations and faced many of the same I/O starvation issues that FreeBSD 4.x was experiencing, the overall opinion seems to be that the complexity required for an M:N implementation leads to deficient overall performance.

Of course, as I said earlier, the proof is in the code. For the time being, Linux has FreeBSD beaten with NPTL.

And another thing is for certain: the KSE threads implementation will be a significant improvement over the userland implementation in FreeBSD 4.x. Whether or not it will outperform Linux and NPTL remains to be seen.

So which OS is "better" in terms of purely technical merit? Well, I think it's important to keep the following in mind:

The majority of the systems running FreeBSD or Linux are going to be uniprocessor.

Most of these systems aren't going to be under considerable load, therefore stability becomes paramount.

Can one really say one stable kernel line is "better" than the other? I know many people who experienced lockups, or systems entering virtually unusable states under intense VM load, with both Rik van Riel's VM and Andrea Arcangeli's VM (in fact, some people I know recommended avoiding 2.4.13 as "unlucky number 13").

As things have stabilized later in the 2.4 series, is there anything now worth mentioning?

One of my biggest complaints with Linux remains the OOM killer. The OOM killer uses somewhat arbitrarily constructed heuristics to determine which processes to kill in a low memory situation. This approach caters to the surge of low-quality code, churned out primarily on the Linux platform, that doesn't properly handle low memory situations in userland applications.

Personally, I think the OOM killer is a horrible decision on the part of the Linux kernel designers. Because of it, on systems which don't properly set ulimits, or in other conditions where a memory exhaustion attack may be carried out on some service, the kernel will arbitrarily kill another process, oftentimes a mission-critical one.

The OOM killer seems to be an outgrowth of a very common practice in Linux kernel development, which is the desire to work around problems in userland applications via kernel hacks.

Personally, in a low memory situation I would rather see poorly written applications die in favor of more robust ones, or even better, have well-written non-mission critical applications exit gracefully.