Linked by Howard Fosdick on Mon 22nd Oct 2012 04:51 UTC
Linux Here's a topic guaranteed to start controversy. Which Linux distribution is best? It all depends on your criteria for judging. Even then the topic is highly subjective. Here are a few nominees for "best distro" in specific categories.
Depends...
by rhavenn on Tue 23rd Oct 2012 06:50 UTC
rhavenn
Member since:
2006-05-12

For a new user: openSUSE / Fedora / Ubuntu
Power user: Arch or Gentoo or Slackware

Servers: Debian or FreeBSD or Arch or Gentoo

Gonna rant for a bit on the Red Hat / CentOS on servers thing. I can't stand the fact that they always say "we backport security patches and keep shit updated, but stable." No, what you're saying is you don't trust the developers who wrote the actual code to fix their own shit, and you want to charge people for your patches. a) You're introducing new code just by making your changes, so your patches can be just as unstable as any other patch, and b) tracking security releases for something like Apache from Apache itself makes a lot more sense to me than tracking Red Hat's Apache patches.
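For what it's worth, you can at least audit what Red Hat backports: on RHEL/CentOS the CVE fixes show up in the package changelog. A minimal sketch of that check, using a canned changelog excerpt in place of the real `rpm -q --changelog httpd` output (the exact entries vary by release):

```shell
# On a real RHEL/CentOS box you would run:
#   rpm -q --changelog httpd | grep CVE
# Here an illustrative excerpt stands in for the rpm output.
changelog='* Tue Feb 14 2012 <maintainer> - 2.2.15-16
- add security fix for CVE-2012-0053
- add security fix for CVE-2011-3639'
printf '%s\n' "$changelog" | grep -o 'CVE-[0-9]\{4\}-[0-9]*'
```

That lists every CVE the vendor claims to have patched into the frozen version, which you can then compare against upstream's advisories.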

Now, in a perfect world, FreeBSD would have better hardware support and a lot more money behind it. It's the best of all worlds: you can run a fully binary distro with binary patches, or go compile mode and track the ports tree for the latest releases, or write your own ports. It's got a stable core, and applications get a rolling release.

Now if only a Linux distro would do that for servers. Arch with slower-moving "server-core" and "server-community" repos would be awesome: a kernel built for servers and a rolling release of applications. Sure, you don't do major version changes without changing the actual package, but tracking apache22 or mysql55 should mean tracking the security patches for that major release.
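The "pin the major release, roll the patches" policy being described here is simple to state: stay on the 2.2 branch, but always take the newest 2.2.x. A minimal Python sketch of that selection rule (the package versions are made up for illustration):

```python
def latest_in_branch(available, branch):
    """From all available versions, pick the newest one on the pinned
    major.minor branch -- i.e. take patch/security releases, never a
    major version bump."""
    prefix = branch + "."
    candidates = [v for v in available if v.startswith(prefix)]
    # Compare numerically, component by component, not as strings,
    # so "2.2.10" sorts after "2.2.9".
    return max(candidates, key=lambda v: tuple(int(x) for x in v.split(".")))

# Hypothetical repo contents for an "apache22"-style package:
available = ["2.2.21", "2.2.22", "2.2.23", "2.4.1", "2.4.3"]
print(latest_in_branch(available, "2.2"))  # -> 2.2.23, never jumps to 2.4.x
```

This is essentially what FreeBSD's versioned ports (apache22, mysql55) give you, and what the hypothetical "server-core" repo would have to implement.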

Like someone else in this thread said, an internal package repo (aka WSUS), cfengine / Puppet (aka AD group policy), LDAP (aka AD), ZFS with NFS and iSCSI (aka DFS), and some testing infrastructure would let a pretty small team manage a very large Linux farm without much trouble.
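As a sketch of the Puppet half of that stack, a manifest along these lines is roughly the group-policy equivalent: point every node at the internal repo and enforce a vetted package set. (The repo URL, version pin, and class name are hypothetical.)

```puppet
# Hypothetical setup: every node gets the internal repo
# and a tested, pinned package set.
class internal_repo {
  yumrepo { 'internal':
    baseurl => 'http://repo.example.com/el6/$basearch',
    enabled => 1,
  }
}

node default {
  include internal_repo
  package { 'httpd': ensure => '2.2.15-16' }   # the version your QA vetted
  service { 'httpd': ensure => running, enable => true }
}
```

Promote packages from a staging repo to the internal repo only after they pass testing, and the whole farm converges on the vetted set on the next Puppet run.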

Edited 2012-10-23 06:51 UTC

Reply Score: 2

RE: Depends...
by ze_jerkface on Wed 24th Oct 2012 12:23 in reply to "Depends..."
ze_jerkface Member since:
2012-06-22

The other problem is that the RHEL/CentOS approach encourages the frozen-install problem, whereby security patches are trickled in when a major version upgrade would provide the best security. It would be like a third party backporting security patches to IE6 to keep the rest of their software from breaking.

Third party applications need to be decoupled from the main system. Everyone should run the latest Apache, the latest PHP, etc.

Reply Parent Score: 2

RE: Depends...
by Flatland_Spider on Wed 24th Oct 2012 19:41 in reply to "Depends..."
Flatland_Spider Member since:
2006-09-01

"No, what you're saying is you don't trust the developers who wrote the actual code to fix their shit and you want to charge people for your patches."


Of course you don't trust the developers, and you don't trust Red Hat either. Trust only the results you get when testing the software and patches in your own test environment.

Who deploys something without testing and vetting it first? Seriously. Who does that in a production environment that isn't someone's bedroom?

I will gladly pay RH to QA the software before I QA the software.

Reply Parent Score: 1