Linked by Drumhellar on Wed 25th Sep 2013 22:02 UTC

I've been a big fan of FreeBSD since I first acquired 4.4 on 4 CDs. By that point, I had already spent a lot of time in Linux, but I was always put off by its instability and inconsistency. Once I had FreeBSD installed, it felt like a dream. Everything worked the way it was supposed to, and the consistency of its design meant even older documentation would be mostly applicable without having to figure out how my system was different. There is a reason why in the early days of the Internet, a huge portion of servers ran FreeBSD.

But, that was a while ago. Since then, Linux has matured greatly and has garnered a lot of momentum, becoming the dominant Unix platform. FreeBSD certainly hasn't stood still, however. The FreeBSD team has kept current with hardware support, new features, and a modern, performant design.

Thread beginning with comment 573298
Switching between Linux and *BSD
by Alfman on Thu 26th Sep 2013 02:22 UTC

I'm already deeply invested in Linux, but I'm considering trying out BSD for my server hosting environments. There are several features I want that Linux doesn't give me directly:

ZFS - obviously.

Union mounts - Sometimes they're exactly what's needed, and it's so annoying that they're not available in mainline because the Linux head honchos have refused to incorporate any overlay filesystem into the kernel (e.g. aufs or unionfs).

Secure root jails - While I'm very pleased to have Linux containers in mainline (i.e. the lxc-* tools), the virtualization has been incomplete for a long time. Not that the missing pieces matter for functionality, but the lack of secure containment kills it for me (/proc/sysrq-trigger, dmesg, hard-coded uid == 0, etc.).
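For what it's worth, both union mounts and jails are only a couple of commands on FreeBSD. A rough sketch — the paths, jail name, and IP address below are made-up placeholders, not anything from a real setup:

```shell
# Union-mount a writable overlay directory on top of /usr/ports
# (hypothetical paths; see mount_unionfs(8) for caveats)
mount -t unionfs /var/overlay /usr/ports

# Create and start a minimal jail rooted at a prepared tree
# (name, path, hostname, and address are placeholders)
jail -c name=web1 path=/jails/web1 ip4.addr=192.168.1.10 \
    host.hostname=web1.example.com command=/bin/sh
```

Worth noting that mount_unionfs(8) has historically carried a stability warning in its man page, so it deserves testing under load before relying on it.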

One thing that's holding me back with FreeBSD is the lack of full virtualization (a KVM equivalent), although I hear that may be coming in v10.

I've found that Linux generally has excellent hardware support on servers, but I don't know where BSD stands (i.e. network drivers and RAID controllers like the LSI MegaRAID used in Dell PowerEdge servers).

Reply Score: 3

Drumhellar:

I'm pretty sure the controllers Dell uses in their PowerEdge servers are supported - Dell's PERC H700 and H800 series use the LSI SAS2108, which is supported by the mps(4) driver.

Network drivers should be supported, as well.
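If you have access to the box, a quick way to check for yourself — assuming you can boot a FreeBSD live/install image on it — is to list the PCI devices and see whether a driver attached:

```shell
# List PCI devices with vendor/device names; a device with an
# attached driver shows up as e.g. "mps0" rather than "none0"
# at the start of its line
pciconf -lv

# Then read the driver's man page for the list of supported
# controllers (mps is just the example relevant here)
man mps
```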


The full HCL can be seen here:

Reply Parent Score: 2

davidone:

Well, GNU/Linux has a lot of hardware drivers, but are all of these drivers quality drivers? And do you really need that exotic controller on your mission-critical server? ;)

Reply Parent Score: 2

Kebabbert:

"Well, GNU/Linux has a lot of hardware drivers, but are all of these drivers quality drivers? And do you really need that exotic controller on your mission-critical server? ;)"

Yes, Linux has some 150,000+ drivers, with a couple of hundred more released every week. There are only so many Linux developers, so they will never be able to update all of them when Torvalds changes the API in the kernel. That is one of the reasons Linux is unstable unless you are very restrictive about what software you install. If you are on a long-term-support distro, like an LTS release, and you want to install some software that uses new libraries, you need to upgrade your libraries too, which forces you to upgrade other software on your system so it can use the new libraries, and so on. This triggers a chain reaction until you have upgraded your entire system. Ergo, LTS does not work: you can only use LTS if you install old software, or if you hack the software so it uses your old libraries.

So, no, most Linux drivers do not work. When Torvalds upgrades the kernel and changes the API, drivers stop working. So of these 150,000+ drivers, I wonder how many are up to date? Maybe 5%? Do 95% of the drivers not work?

"...You have 150,000+ drivers for Linux, with a couple of hundred new devices released every week. How many Linux kernel devs are there again? Even if you pumped them full of speed and made them work 24/7/365 the numbers won't add up, the devs simply cannot keep up... which is of course one of the reasons to HAVE a stable ABI in the first place, so that the kernel devs can work on the kernel while the OEMs can concentrate on drivers..."

Reply Parent Score: 0

Alfman:


"Well, GNU/Linux has a lot of hardware drivers but all these drivers are quality drivers?"

Mileage varies. The drivers that have been problematic (in my experience) are for the desktop (video/sound), or for consumer devices added to the machine without researching Linux compatibility first (I was given a Doxie scanner as a gift, but the company behind it said they would never support Linux or release the specs; they've since pulled the statement due to criticism, but their policy is effectively unchanged).

On the other hand, for servers I think you'd be hard pressed to find one that Linux doesn't work with, even if you buy it at random. Some of the userspace components may be trickier to get without manufacturer support (e.g. RAID monitoring/management tools).
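For the LSI MegaRAID case specifically, the vendor does ship a Linux CLI, so the monitoring side is doable even without distro packaging. A sketch of the usual health checks — the binary name and install path vary by package (MegaCli, MegaCli64, or megacli):

```shell
# Show all logical drives and their state (Optimal/Degraded)
MegaCli64 -LDInfo -Lall -aALL

# List physical disks; useful for spotting a failed member
MegaCli64 -PDList -aALL

# Battery backup unit status (governs write-back caching)
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
```

In practice these get wrapped in a cron job or a Nagios-style check that greps for anything other than "Optimal".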

"And do you really need that exotic controller on you mission critical server?"

RAID controllers are not really exotic on performance servers, and yes, we do need them if we don't want to give up features like battery-backed write-back caching, hot-swap, and RAID offloading. With these you can commit transactions at tremendously high speeds, even with RAID 6. The backplanes in servers are often hardwired to the RAID controller, so you cannot really bypass it.

Don't confuse this with the "soft" RAID that comes with low/mid-range machines.

Reply Parent Score: 5