Linked by Thom Holwerda on Fri 9th Oct 2015 18:43 UTC
Debian and its clones

The Linux Standard Base (LSB) is a specification that purports to define the services and application-level ABIs that a Linux distribution will provide for use by third-party programs. But some in the Debian project are questioning the value of maintaining LSB compliance - it has become, they say, a considerable amount of work for little measurable benefit.

It's too much work for little benefit, and nobody wants to do it, so what's the point - just drop it. At least, that seems to be the reasoning.

But Debian's not throwing all of the LSB overboard: we're still firmly standing behind the FHS (version 2.3 through Debian Policy; although 3.0 was released in August this year) and our SysV init scripts mostly conform to LSB VIII.22.{2-8}. But don't get me wrong, this src:lsb upload is an explicit move away from the LSB.

That's too bad - the FHS is an abomination, a useless, needlessly complex relic from a time when we were still using punch cards, and it has no place in any modern computing platform. All operating systems have absolutely horrible and disastrous directory layouts, but the FHS is one of the absolute worst in history.

Sad...but inevitable...
by TemporalBeing on Fri 9th Oct 2015 19:07 UTC
TemporalBeing
Member since:
2007-08-22

Well, personally I like the FHS, and there's utility in it, especially when you deal with embedded and low disk space systems. Admittedly that's not desktop/laptops any longer, but that doesn't mean it isn't useful to continue using it.

But it doesn't surprise me, as systemd is breaking a lot of that stuff - with the systemd guys pushing for everything in /usr and under /run... completely ignoring the embedded folks, many other aspects of the distributions, and the history of why it's there.

Why keep it? To make it easier for people to push applications against Linux - especially those who have packages for Debian that are not included in the official Debian repositories for whatever reason.

And if they really do go down that road... well, then maybe it's time to leave Debian behind.

Reply Score: 6

RE: Sad...but inevitable...
by phoenix on Fri 9th Oct 2015 19:21 UTC in reply to "Sad...but inevitable..."
phoenix Member since:
2005-07-11

The problem with the FHS is that pretty much every Linux distro out there right now is FHS-compliant ... and no two of them use the same directory layout! There are so many options and alternatives in the spec (at least, the last time I read it a few years back) that anyone can be compliant without actually being the same as anyone else.

When Debian and RedHat are both considered FHS-compliant, there's something wrong with the spec.

Reply Score: 7

RE[2]: Sad...but inevitable...
by dinosaur on Sat 10th Oct 2015 12:06 UTC in reply to "RE: Sad...but inevitable..."
dinosaur Member since:
2015-05-10

The differences aren't that great. You know the config files are in /etc/; the differences are minor - apache2 instead of httpd, for example. You also know logs are in /var and binaries are in /bin, /usr/bin, etc. It's easy to adapt.
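The point above can even be mechanised. Here is a minimal sketch (not from the thread; the helper name and the candidate list are my own) that probes the distro-specific names for a service's config directory under /etc, with the tree root parameterised so it can be exercised outside /:

```shell
#!/bin/sh
# Probe an FHS-style tree for a service's config directory, trying each
# distro-specific name in turn (Debian uses /etc/apache2, Red Hat /etc/httpd).
find_conf_dir() {
    root="$1"; shift
    for name in "$@"; do
        if [ -d "$root/etc/$name" ]; then
            printf '%s\n' "$root/etc/$name"
            return 0
        fi
    done
    return 1    # no candidate matched
}

# Usage on a live system: find_conf_dir "" apache2 httpd
```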

Reply Score: 7

RE[3]: Sad...but inevitable...
by snegtul on Tue 13th Oct 2015 13:36 UTC in reply to "RE[2]: Sad...but inevitable..."
snegtul Member since:
2009-08-16

Yeah, I think the people who are complaining about the FHS are novice/home-user types. As someone who's been a sysadmin for over 15 years and has experience with commercial Unixes as well as Linux distros, I can tell you that in MY OPINION the FHS is pretty groovy.

Reply Score: 1

RE[4]: Sad...but inevitable...
by phoenix on Wed 14th Oct 2015 15:17 UTC in reply to "RE[3]: Sad...but inevitable..."
phoenix Member since:
2005-07-11

You know what they say about ASSumptions... ;)

I've been a sysadmin for just shy of 15 years, using FreeBSD and Linux for that entire time (starting with RedHat Linux, then moving to Debian, with some dalliances with SuSE and Arch). And I still hate the FHS compared to hier(7) on FreeBSD.

If your only experience with a filesystem layout is the FHS, then it seems like a decent layout. But, once you are exposed to alternatives, it quickly becomes apparent how messed up the FHS really is.

Edited 2015-10-14 15:18 UTC

Reply Score: 2

RE[5]: Sad...but inevitable...
by Alfman on Wed 14th Oct 2015 19:20 UTC in reply to "RE[4]: Sad...but inevitable..."
Alfman Member since:
2011-01-28

phoenix,

You know what they say about ASSumptions...


Hey now...

I respect snegtul's view because it's HIS OPINION, which he emphasized. He didn't take the "better than thou" approach that seems to be so prevalent in reading the comments.

If your only experience with a filesystem layout is the FHS, then it seems like a decent layout. But, once you are exposed to alternatives, it quickly becomes apparent how messed up the FHS really is.


I agree. While there are some who like the FHS, I wouldn't think that very many would want to switch to it if it were just invented today. As with many legacy systems, its popularity stems from a historically dominant role.

Reply Score: 2

RE[6]: Sad...but inevitable...
by acobar on Thu 15th Oct 2015 00:09 UTC in reply to "RE[5]: Sad...but inevitable..."
acobar Member since:
2005-11-15

Alfman,

Judging by your comments and how you address others' posts, it is easy to see that you are a knowledgeable and nice guy, but ..

I respect snegtul's view because it's HIS OPINION

is something we perhaps agree on, but express in different words.

I reserve the word "respect" for people; opinions are always debatable to me, ranging from total agreement to total disagreement, though they mostly land somewhere in the middle of the range.

I think you are right that we must strive to treat each other with maximum respect, but I also find it alarming that criticism of others' opinions is seen as a personal attack. This attitude has been growing in our society, and I can't see how it improves the dialogue; on the contrary, I see lots of people dodging direct answers for fear of hurting someone else's feelings. I really think people should separate their opinions about random subjects a bit more from what distinguishes their character. Sorry for the almost irrelevant digression.

.. I wouldn't think that very many would want to switch to FHS if it were just invented today..

Agree, totally. ;)

Reply Score: 2

abraxas
Member since:
2005-07-07

I don't see what the big problem is, or why people get so upset over it. The FHS isn't bad at all: it's logical and not hard to implement. What's the fuss?

Reply Score: 4

dinosaur Member since:
2015-05-10

Yes, I like it too - mainly because I'm used to it, and the separation of different types of files is indeed logical.

Too many programs will break if they change it now. Leave it be.

Edited 2015-10-10 12:07 UTC

Reply Score: 2

Debian is a mess, everybody knows it.
by _QJ_ on Fri 9th Oct 2015 20:16 UTC
dreamlax Member since:
2007-01-04

A poor carpenter blames his tools.

Reply Score: 2

_QJ_ Member since:
2009-03-12

Yes, and a professional uses professional tools.

A professional carpenter arranges his tools so he can work efficiently.

A professional carpenter picks up the right tool in a snap, almost blindly.

A professional carpenter knows, from expertise, which tool is the best one for the job.

And a professional carpenter will not be afraid to pay for tool support, if he thinks it is best for him.

So, compare the professional support options for Debian with those for other distros, and even other OSes.

Just do it based on facts.

Now, if you still want to play with Debian at work, you are free to - it is your responsibility. You'll have to bear it...

Reply Score: 0

grat Member since:
2006-02-02

What you are describing isn't a failure of Debian.

It's a failure to hire competent sysadmins who can work together.

At my org, we manage both Debian and Red Hat systems, and they both have good points and bad points.

Honestly, though, once I've got them connected to the puppet server, I stop caring.

Reply Score: 3

_QJ_ Member since:
2009-03-12

-"What you are describing isn't a failure of Debian.

It's a failure to hire competent sysadmins who can work together.


Yes: This is EXACTLY the point.

Why is it almost impossible to hire competent devs and sysadmins for Debian when you can find them for other distros?

The answer is mostly in the question...

Reply Score: 1

TemporalBeing Member since:
2007-08-22

-"What you are describing isn't a failure of Debian.

It's a failure to hire competent sysadmins who can work together.


Yes: This is EXACTLY the point.

Why is it almost impossible to hire competent devs and sysadmins for Debian when you can find them for other distros?

The answer is mostly in the question...


Or the other way around?

Honestly, Debian admins are typically pretty competent too. The difference between them is rooted in the same reasons the Debian and Red Hat distros exist separately in the first place, and sadly both sides make religious wars out of it.

I use Debian because the tools are wonderful. Between apt, schroot/sbuild, etc., it's just a great distro platform, and I have yet to see any good equivalents on the Red Hat side. (Yes, I've used yum and dnf, but there are aspects of them that simply don't compare, or at least arrived a lot later.)

It's not about whether the admins are competent. It's about how professionally both sides act and whether they can set aside their differences enough to work together. Sadly, that tends to be lacking in the software-dev community as a whole.

Reply Score: 2

grat Member since:
2006-02-02

Yes: This is EXACTLY the point.

Why is it almost impossible to hire competent devs and sysadmins for Debian when you can find them for other distros?


My boss did (IMHO, of course!). And while I consider myself a little bit better than "competent", by no means am I unique.

It may be that supervisors are hiring people who have run Debian / Ubuntu desktops, rather than experienced administrators who also know Debian.

For Red Hat, you can always ask if they're RH certified (I'm not. oops.). That doesn't work as well for Debian, as there isn't an official "Enterprise" structure behind it.

There is a gulf between a Linux desktop user and a Linux server admin similar to the one between a Windows user and a Windows server admin.

Reply Score: 2

Comment by tylerdurden
by tylerdurden on Fri 9th Oct 2015 20:17 UTC
tylerdurden
Member since:
2009-03-17

That's too bad - the FHS is an abomination, a useless, needlessly complex relic from a time when we were still using punch cards, and it has no place in any modern computing platform. All operating systems have absolutely horrible and disastrous directory layouts, but the FHS is one of the absolute worst in history


Why do so many bloggers, in the tech area, feel so comfortable spouting such strong opinions about matters they really have little actual clue about?

Reply Score: 10

RE: Comment by tylerdurden
by WorknMan on Fri 9th Oct 2015 20:27 UTC in reply to "Comment by tylerdurden"
WorknMan Member since:
2005-11-13

Why do so many bloggers, in the tech area, feel so comfortable spouting such strong opinions about matters they really have little actual clue about?


Maybe because they look at it, see that critical system files are in a directory named 'etc', and rightly conclude that the whole thing is a train wreck?

I'm sure, like the rest of *nix, that directory name has historical significance and is probably quite elegant once you understand it all, but from the outside looking in, it is very user-UNfriendly and nonsensical. And yeah, all you fanboys go ahead and mod me down for speaking the truth, as you always do ;)

Edited 2015-10-09 20:30 UTC

Reply Score: 6

RE[2]: Comment by tylerdurden
by tidux on Sat 10th Oct 2015 02:50 UTC in reply to "RE: Comment by tylerdurden"
tidux Member since:
2011-08-13

Nobody who is incapable of understanding the reason should be editing files in /etc anyways.

Edited 2015-10-10 02:50 UTC

Reply Score: 8

RE[3]: Comment by tylerdurden
by WorknMan on Sat 10th Oct 2015 17:03 UTC in reply to "RE[2]: Comment by tylerdurden"
WorknMan Member since:
2005-11-13

Nobody who is incapable of understanding the reason should be editing files in /etc anyways.


Likewise, anyone who actually wants to learn how to edit these files should not have to be subjected to such a cryptic and arcane nomenclature either. The 'it's not for average users' argument is no justification for a shitty naming scheme.

Reply Score: 4

RE[4]: Comment by tylerdurden
by Alfman on Sat 10th Oct 2015 17:36 UTC in reply to "RE[3]: Comment by tylerdurden"
Alfman Member since:
2011-01-28

WorknMan,

Likewise, anyone who actually wants to learn how to edit these files should not have to be subjected to such a cryptic and arcane nomenclature either. The 'it's not for average users' argument is no justification for a shitty naming scheme.


GoboLinux has done a great job of building a clean hierarchy. It's just my opinion, but for my needs I prefer it to the FHS, although GoboLinux still has to support the FHS under the hood, because the FHS is so pervasive in software - it's the greatest common divisor. It's just one of many legacy things that isn't going away in the foreseeable future, at least not without the support of major players.
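As a rough, hypothetical miniature of that approach (directory names modelled on GoboLinux's documented layout, heavily abridged): each program gets its own versioned tree, and the FHS-style paths survive only as compatibility symlinks.

```shell
#!/bin/sh
# Build a toy GoboLinux-style tree under $1: the program lives in one
# versioned directory, and the legacy /usr/bin path is just a symlink,
# so software expecting FHS locations keeps working.
make_gobo_tree() {
    root="$1"
    mkdir -p "$root/Programs/Nginx/1.8.0/bin" "$root/System/Index/bin" "$root/usr"
    touch "$root/Programs/Nginx/1.8.0/bin/nginx"
    ln -s "$root/Programs/Nginx/1.8.0/bin/nginx" "$root/System/Index/bin/nginx"
    ln -s "$root/System/Index/bin" "$root/usr/bin"
}
```

Uninstalling then means deleting one directory and a couple of links, rather than chasing files across /etc, /usr and /var.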

Edited 2015-10-10 17:44 UTC

Reply Score: 2

RE[5]: Comment by tylerdurden
by Bobthearch on Sat 10th Oct 2015 18:59 UTC in reply to "RE[4]: Comment by tylerdurden"
Bobthearch Member since:
2006-01-27

I recently downloaded the liveCD version of the latest GoboLinux and it wouldn't even boot. I'm afraid that distro is in need of some real investment and development. ;)

Reply Score: 2

RE[4]: Comment by tylerdurden
by Drumhellar on Sat 10th Oct 2015 19:58 UTC in reply to "RE[3]: Comment by tylerdurden"
Drumhellar Member since:
2005-07-12

If remembering "config files go in /etc" is cryptic and arcane, you have no business editing config files, because you are not nearly ready enough for the black magic involved.

Reply Score: 3

RE[5]: Comment by tylerdurden
by WorknMan on Sun 11th Oct 2015 00:17 UTC in reply to "RE[4]: Comment by tylerdurden"
WorknMan Member since:
2005-11-13

If remembering "config files go in /etc


Way to miss the point, genius. If I don't know the file system well and I'm looking for config files, I imagine the LAST directory I would look in is one called 'etc'. I would expect them to be in a directory called 'system', or similar.

It's just a small example of how ass-backwards the entire OS is.

Reply Score: 2

RE[6]: Comment by tylerdurden
by Drumhellar on Sun 11th Oct 2015 00:59 UTC in reply to "RE[5]: Comment by tylerdurden"
Drumhellar Member since:
2005-07-12

Way to miss my point.

If you can't be bothered to learn about something as trivial as where configuration files are located, you have no business attempting to edit them - that's how things break.

Manually editing configuration files is an advanced topic, requiring knowledge far beyond "where they're located".

Edited 2015-10-11 01:01 UTC

Reply Score: 5

RE[7]: Comment by tylerdurden
by Morgan on Mon 12th Oct 2015 02:04 UTC in reply to "RE[6]: Comment by tylerdurden"
Morgan Member since:
2005-06-29

Believe it or not, you're both correct. If you can't or don't want to learn where the system files are, you don't need to mess with system files. However, historical reasons notwithstanding, "etc" is a poor name for a directory storing some of the most critical files for an OS. "etc" evokes the sense that it's optional or extraneous, not critical or required.

Reply Score: 3

RE[5]: Comment by tylerdurden
by matthekc on Mon 12th Oct 2015 07:35 UTC in reply to "RE[4]: Comment by tylerdurden"
matthekc Member since:
2006-10-28

Editing config files should not be "black magic". I hate this attitude so much... We were all new to our chosen systems at one point, and so we memorized arcane details and became "wizards". However, that doesn't mean that is how it should be. Config files should have good documentation in the damn file, where you are when you need the documentation most.

Reply Score: 2

RE[6]: Comment by tylerdurden
by Drumhellar on Mon 12th Oct 2015 15:00 UTC in reply to "RE[5]: Comment by tylerdurden"
Drumhellar Member since:
2005-07-12

That, of course, is not what I suggested things should be like, and I certainly provided enough context where that should be clear.

Config files are not (usually) black magic, but if you know little enough to get tripped up by them being in a directory named "/etc", as opposed to some other directory name, you aren't nearly ready for editing config files.

Reply Score: 2

RE[4]: Comment by tylerdurden
by tidux on Tue 13th Oct 2015 16:52 UTC in reply to "RE[3]: Comment by tylerdurden"
tidux Member since:
2011-08-13

Oh right, because /etc/hosts is totally harder and less arcane than C:\Windows\System32\drivers\etc\hosts for the same functionality?

Reply Score: 2

RE[2]: Comment by tylerdurden
by tylerdurden on Sat 10th Oct 2015 19:28 UTC in reply to "RE: Comment by tylerdurden"
tylerdurden Member since:
2009-03-17

Wait, aren't you the guy who loves to drone on and on about "poweruser" this and "poweruser" that?

Reply Score: 3

RE[2]: Comment by tylerdurden
by grat on Mon 12th Oct 2015 14:14 UTC in reply to "RE: Comment by tylerdurden"
grat Member since:
2006-02-02

I'm sure, like the rest of *nix, that directory name has historical significance and is probably quite elegant once you understand it all, but from the outside looking in, it is very user-UNfriendly and nonsensical. And yeah, all you fanboys go ahead and mod me down for speaking the truth, as you always do ;)


I think you nailed the issue with "user un-friendly". /etc is not a directory for a user, it's a directory for an admin.

The name is irrelevant. The directory could be "fred", or (more accurately) "cfg", and it wouldn't matter. It's where (mostly) static system-wide configuration files go, and the fact that it's a consistent location is the important part.

A program written for Linux in 1996 that looks in /etc/ for its configuration file will work today, because that's still a valid location. Is that important? Hard to say. It's still very convenient.

That same program will know that it can write logs to /var/log, store data in /var/lib, and store runtime info in /var/run.
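Sketched in shell, the convention described above maps a daemon name to its standard locations (purely illustrative; the `fhs_paths` helper is mine, and modern systems increasingly use /run instead of /var/run):

```shell
#!/bin/sh
# Print the conventional FHS locations for a daemon: static config,
# logs, persistent state, and runtime data.
fhs_paths() {
    d="$1"
    printf 'config:  /etc/%s\n'      "$d"
    printf 'logs:    /var/log/%s\n'  "$d"
    printf 'state:   /var/lib/%s\n'  "$d"
    printf 'runtime: /var/run/%s\n'  "$d"   # /run on newer systems
}

fhs_paths nginx
```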

The problem is, most people who dismiss the FHS as "a useless, needlessly complex relic from a time when we were still using punch cards" don't actually administer Unix or Linux servers.

They've almost certainly never logged into a UNIX system and gone looking in /var/log for logs, only to discover that the system logs are in /var/adm or /usr/adm or some other half-baked location.

I *like* the fact that I can log into FreeBSD or Linux (Debian, RHEL, SUSE, Arch, Gentoo) and have a pretty good idea of where to look for various types of files.

Then again, Thom seems to be in favor of change for the sake of change, so I'm not surprised that consistency is one of his hobgoblins.

Reply Score: 3

RE[3]: Comment by tylerdurden
by Alfman on Mon 12th Oct 2015 18:39 UTC in reply to "RE[2]: Comment by tylerdurden"
Alfman Member since:
2011-01-28

grat,

The problem is, most people who dismiss the FHS as "a useless, needlesly complex relic from a time we were still using punch cards" don't actually administrate Unix or Linux servers.

They've almost certainly never logged into a UNIX system, and went looking in /var/log for logs, only to discover that the system logs are /var/adm in /usr/adm or some other half-baked location.


For my degree, I learned on Unix. I started playing with Linux about 17 years ago. I administer Linux servers professionally. I've written and maintained my own Linux distribution since around 2006/7. Based on my own personal experience and needs, I'm not that fond of the FHS, even though I understand how it came about.

The name is irrelevant. The directory could be "fred", or (more accurately) "cfg", and it wouldn't matter. It's where (mostly) static system-wide configuration files go, and the fact that it's a consistent location is the important part.


I actually agree with this; I don't really care much about names: /etc is fine, and /settings, /config, /cfg would be too - it's really quite arbitrary. For me the bigger criticism is the organization of the hierarchy rather than the individual names.

The thing is, most people who are fortunate enough to be able to rely on a package manager for everything don't fully experience what it's like to work with the hierarchy directly. On the one hand, this is a benefit of using a package manager, which simplifies administration a great deal. But on the other hand, the very existence of package managers masks the problems, so they never get worked on.

Outside a repository, it's non-trivial to identify the components of an application that was installed via a script or "make install". Under Linux, it's non-trivial to reliably back up an application so that it can be restored elsewhere later with confidence.
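For dpkg-based systems there is at least a partial answer to the first problem: the package manager's own manifests record every path a package installed. Here is a sketch of what `dpkg -S` does conceptually (the `owner_of` helper is mine, and the manifest directory is parameterised for demonstration; on a real Debian system it is /var/lib/dpkg/info):

```shell
#!/bin/sh
# Answer "which package owns this file?" by scanning dpkg-style .list
# manifests, one per package, each listing the paths the package installed.
# Files dropped by a bare "make install" appear in no manifest at all,
# which is exactly the discoverability gap described above.
owner_of() {
    infodir="$1"; path="$2"
    for f in "$infodir"/*.list; do
        [ -f "$f" ] || continue
        if grep -qxF "$path" "$f"; then
            basename "$f" .list
            return 0
        fi
    done
    return 1    # unowned: nothing in the manifests claims this path
}
```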

As a real-world example: when a production system was upgraded using the Debian stable repos, the nginx daemon broke. I tried to fix the new install as best I could, but it was very late, and being a production system, it was urgent that it come back up quickly. I couldn't roll back the changes for nginx using apt. Of course I had full system backups, and those contained everything needed to get the server back up and running. However, because the package's resources were scattered throughout the FS hierarchy, manually doing anything with them was error-prone and tedious, to say nothing of becoming out of sync with apt's records. So I gave in and restored the entire server from the backup instead.

Obviously there are a lot of directions one could attack these problems from. And I've learned that even stable updates should be tested first. But I maintain that a simpler hierarchy would have made it much easier to just copy a working instance onto the current system.

I *like* the fact that I can log into FreeBSD, Linux (debian, rhel, suse, arch, gentoo) and have a pretty good idea of where to look for various types of files.


To an extent, this is true. The fact that there's any standard at all helps with consistency. However, that says nothing about the quality of the standard as it applies to people's specific needs, and there will obviously be some for whom different solutions would be better.


Then again, Thom seems to be in favor of change for the sake of change, so I'm not surprised that consistency is one of his hobgoblins.


You may disagree with me or with Thom on the merits; I know many people do. But the fact that some of us think the FHS is too complex does not necessarily mean we don't understand it, as some people are claiming. We need to cull these ad hominem arguments from the debate; it's disingenuous to attack the person.

Edited 2015-10-12 18:58 UTC

Reply Score: 2

RE[3]: Comment by tylerdurden
by matthekc on Tue 13th Oct 2015 06:34 UTC in reply to "RE[2]: Comment by tylerdurden"
matthekc Member since:
2006-10-28

In my opinion, for Linux to gain more home users, the most common configuration tasks will need to be achievable from GUI tools. Most Linux distros are now mostly there.
It would also be nice to see all distros offer a good set of recovery tools, preferably on the drive and available at boot - or at least on the install disk or a separate recovery disk. Only in unusual corner cases should you have to actually edit the configuration files. No regular user should ever have to manually recover their system - and they won't!

At no time should a normal user need to learn systemd, how to edit config files, or even the directory structure... it's not going to happen.

Edited 2015-10-13 06:37 UTC

Reply Score: 2

RE: Comment by tylerdurden
by Thom_Holwerda on Fri 9th Oct 2015 20:40 UTC in reply to "Comment by tylerdurden"
Thom_Holwerda Member since:
2005-06-29

Why do so many bloggers, in the tech area, feel so comfortable spouting such strong opinions about matters they really have little actual clue about?


I have articulated my dislike for the FHS - and those of others - often enough. I DO know what I'm talking about when it comes to this stuff, because I've been reading and writing about this specific issue for well over 10 years.

Reply Score: 8

RE[2]: Comment by tylerdurden
by tylerdurden on Sat 10th Oct 2015 19:20 UTC in reply to "RE: Comment by tylerdurden"
tylerdurden Member since:
2009-03-17

I DO know what I'm talking about when it comes to this stuff, because I've been reading and writing about this specific issue for well over 10 years.


So what if you've been reading and writing about this for 10 years? It could simply mean that your lack of understanding of a specific operating system - Unix, in this case - goes way back.


Your response perfectly highlights the problem with the tech blogosphere; most of you have no actual clue what you're talking about, but don't let that deter you from voicing strong opinions. In the end it is not about informing, but about click-baiting...

Edited 2015-10-10 19:21 UTC

Reply Score: 2

RE[3]: Comment by tylerdurden
by Thom_Holwerda on Sat 10th Oct 2015 21:16 UTC in reply to "RE[2]: Comment by tylerdurden"
Thom_Holwerda Member since:
2005-06-29

Your response perfectly highlights the problem with the tech blogosphere


And your comments on this post perfectly illustrate the problem with internet comments.

Reply Score: 0

RE[4]: Comment by tylerdurden
by galvanash on Sat 10th Oct 2015 21:31 UTC in reply to "RE[3]: Comment by tylerdurden"
galvanash Member since:
2006-01-25

"Your response perfectly highlights the problem with the tech blogosphere


And your comments on this post perfectly illustrate the problem with internet comments.
"

Touché

Reply Score: 2

RE[4]: Comment by tylerdurden
by acobar on Sat 10th Oct 2015 23:28 UTC in reply to "RE[3]: Comment by tylerdurden"
acobar Member since:
2005-11-15

Thom,

Can you please describe what you see as bad in the FHS?

I will briefly list the reasons I like it:

- What we call Linux (and perhaps should call LiGnuX) is a large aggregate of system parts developed all around the world without central management (for the whole, not the parts). I see no way this model could succeed unless some form of standardization is agreed upon;

- There was a prior effort to achieve standardization on Unix, because of the problems that the lack of even a minimal one was creating for independent developers in the eighties and nineties;

- When bad things happen, and they do, you know where to look for clues;

- When you need to change things related to the whole system, you have a good guess at where to look. This was very important when there were only crude management plumbing facilities on Linux;

- The hierarchy in the FHS helps the effort to create isolation layers on Linux and, with that, improves security and lowers maintenance costs.

It is not perfect, and it does need some adjustments, but far fewer than its critics vent about.

Most of the complaints I see come from guys who would like to install some software from the Internet and get mad when they can't, because the developer did not make a version available for their system. The guy lacks knowledge about what a multiuser system means and what compromises it encompasses; all he sees is that he needed version xx of libzz while his system has version yy of it, and he gets grumpy and gnashes his teeth.

Well, there are many cases where the developer can generate statically linked apps (if he wants to - unless he needs to tap into some internal kernel characteristic, as is the case for some device drivers). The problem is, many are so used to the "convenient" Windows way that they just blame the wrong things. Linux is not Windows: the way things are organized is different, the presuppositions are different, and the methods of the basic underlying system are very different.

There are "workarounds", like static linking, patchelf, statifier, ermine and docker. None will fit all cases but will solve most of them.

As I said, there is a compromise in the "Linux way", and I sincerely prefer a system with a central repository where things are upgraded/updated when needed.

Also, I especially like openSUSE's efforts on complementary repositories.

This whole topic is in itself far more complex than the shallow criticisms we see all over the Internet would make us believe. There are some proposals to fix it (one from Lennart Poettering that I don't sympathize with that much; let's see how it develops). As soon as I see a good article about it, I will post a link here.

Edited 2015-10-10 23:31 UTC

Reply Score: 3

RE[5]: Comment by tylerdurden
by Thom_Holwerda on Sat 10th Oct 2015 23:42 UTC in reply to "RE[4]: Comment by tylerdurden"
Thom_Holwerda Member since:
2005-06-29

Can you please describe what you see as bad on FHS ?


http://www.osnews.com/story/20195/GoboLinux_and_Replacing_the_FHS
http://www.osnews.com/story/21579/Why_Do_We_Hold_on_to_the_FHS_
http://www.osnews.com/story/19711/The_Utopia_of_Program_Management

In short, it's so needlessly obtuse, chaotic, unclear, and open to interpretation that literally no two systems that adhere to the FHS actually have the same directory layout. In other words, each and every system that adheres to the FHS has a different directory layout. F--k man, not even two *Linux* distributions can agree on how to interpret the FHS.

The FHS is a standard so vague everybody can just do whatever the f--k they want and still claim to "adhere" to the "standard".

In other words, it is a bad standard, and it needs to be modernised - or preferably replaced - with something that wasn't drawn up by people who thought punch cards were a little too hoity-toity.

Edited 2015-10-10 23:44 UTC

Reply Score: 2

RE[6]: Comment by tylerdurden
by acobar on Sun 11th Oct 2015 01:11 UTC in reply to "RE[5]: Comment by tylerdurden"
acobar Member since:
2005-11-15

Well, we agree that it needs updating, and stricter adherence by all involved, but there are many points on which I fail to agree.

OK, some:

- It is just too complex. It is not; it is easy enough that any developer who cares can understand and fulfill its basic rules. If you want to see what is really complex, take a look inside the OS X system directory, or the Windows one (I must, because I also work with system maintenance/administration, which I regret);

- There are too many exceptions. Like every complex system that grows slowly, it needs adjustments (and adherence, of course);

- Symlinks are bad. No, they aren't; they are actually a good solution for lots of problems related to easy directory navigation and access;

- About app discovery, take a look at /usr/share/applications. Yeah, I know it is not in the FHS, but it is the solution adopted to fix this particular problem, discovery. Take into account also that any Linux system has a huge number of programs created with piping in mind (one of the Unix ways of doing things). Most users will really never care about them, even though some of the applications they use will. Virtually all apps with a GUI have an entry in /usr/share/applications now;

- The security model in place is probably the sanest we could ever devise, taking into account that it was created for a multiuser system. For complex sharing cases and stricter security requirements there are ACLs and SELinux (the latter is a bit too complex for the home desktop case);

- Personal settings are stored in the user's home directory, and there has been some standardization of where they should go. Again, it takes time until things settle and most developers follow the rules;

- Installing multiple versions of libraries and apps, and isolating their settings, are things people are playing with (like Lennart).
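For illustration, the /usr/share/applications mechanism mentioned above works through small INI-style "desktop entry" files, one per GUI app. A minimal, hypothetical example (the app name, command, and icon are made up; the keys are from the freedesktop.org Desktop Entry format):

```ini
[Desktop Entry]
Type=Application
Name=Example Editor
Comment=Hypothetical text editor, used only to illustrate the format
Exec=example-editor %F
Icon=example-editor
Terminal=false
Categories=Utility;TextEditor;
```

Desktop environments scan these files to build their menus, which is what makes app discovery work regardless of where the binary itself lives in the hierarchy.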

Anyway, xkcd describes the problem at full extent:
https://xkcd.com/927/

Reply Score: 4

RE[5]: Comment by tylerdurden
by dpJudas on Sun 11th Oct 2015 01:17 UTC in reply to "RE[4]: Comment by tylerdurden"
dpJudas Member since:
2009-12-10

Most of the complaints I see come from guys who would like to install some software from the Internet and get mad when they can't, because the developer did not make a version available for their system. The guy lacks knowledge of what a multiuser system means and what compromises it encompasses; all he sees is that he needed version xx of libzz while his system has version yy of it, and he gets grumpy and gnashes his teeth.

Yes, it is always easiest to blame the user. It is also almost always the wrong answer. The directory structure of the original Unix was not handed down by God as the one divine way of splitting things up. It has some significant disadvantages for things not maintained by the distro itself.

YOU may not care much for such needs, but many do, and there ARE more elegant solutions. OS X application and framework bundles are examples of alternative strategies.

Well, there are many cases where the developer can generate static linked apps

Yes, applying a gigantic hack that avoids the entire file system design is one approach. Not really convinced it is a GOOD approach, especially considering the LGPL has some nasty requirements that make static linking not an option in many cases.

As I said, there is a compromise on "linux way" and I sincerely prefer a system with a central repository where things are upgraded/updated when needed.

Unfortunately this approach only works well with large popular open source projects. Once you reach small projects with a couple of developers without the resources to maintain packages for all Linux distributions things get a lot less rosy. For closed source it gets very hard to get right.

This whole topic is in itself way more complex than the shallow criticisms we see all over the Internet make it out to be. There are some proposals to fix it (one from Lennart Poettering that I don't sympathize with that much; let's see how it develops). As soon as I see a good article about it I will post a link here.

Yes, please do. It is always interesting to know about what various groups are doing to improve their distro. ;)

Reply Score: 2

RE[4]: Comment by tylerdurden
by tylerdurden on Sun 11th Oct 2015 06:01 UTC in reply to "RE[3]: Comment by tylerdurden"
tylerdurden Member since:
2009-03-17


And your comments on this post perfectly illustrate the problem with internet comments.


Absolutely, I'm simply responding in kind; your shitty post and my shitty attitude go hand in hand.

Sometimes, someone not understanding a technological item is not necessarily an indictment of the technical qualities of that item. Sometimes, it simply means that the universe is trying to point out that you're out of your element. Too many tech bloggers, however, misinterpret their own ignorance as somehow being authoritative.

It takes a couple of minutes to comprehend the whys and hows of the Unix hierarchy of system files. And one of the reasons for its longevity is that it makes a hell of a lot of sense, if you understand what Unix is and what it does.

Can it be improved? Absolutely. Does it need to be updated? Yes. But to claim that it is somehow one of the worst things in computing ever, that's just a ridiculous and uninformed opinion.

Edited 2015-10-11 06:05 UTC

Reply Score: 1

RE[2]: Comment by tylerdurden
by martijn on Sun 11th Oct 2015 11:58 UTC in reply to "RE: Comment by tylerdurden"
martijn Member since:
2010-11-06

Now you sound like Droogstoppel from Max Havelaar, who claims he knows what is going on in the world because he has had the same place at the coffee market for 10 years.

Reply Score: 1

RE[2]: Comment by tylerdurden
by Soulbender on Mon 12th Oct 2015 06:38 UTC in reply to "RE: Comment by tylerdurden"
Soulbender Member since:
2005-08-18

Just because you have an established opinion doesn't mean you know what you're talking about. While the FHS certainly has its warts, it's not the unmitigated disaster you make it out to be.

Reply Score: 4

RE[2]: Comment by tylerdurden
by grat on Mon 12th Oct 2015 14:43 UTC in reply to "RE: Comment by tylerdurden"
grat Member since:
2006-02-02

I have articulated my dislike for the FHS - and that of others - often enough. I DO know what I'm talking about when it comes to this stuff, because I've been reading and writing about this specific issue for well over 10 years.


Yes, but have you been administering multi-user Unix and Linux servers for 27 years?

I have. The FHS may seem archaic and outdated-- but it's a significant step forward from what we *actually* had in the punch card era. Anyone who's administered HP-UX or IRIX knows that the FHS is a fresh breeze of sanity and consistency.

Some of it may not make sense to the average user-- but that doesn't mean there wasn't a perfectly valid reason for the decisions behind the FHS.

Now, some of those things-- like /bin and /usr/bin-- are only really important if you're booting from small disks, or NFS, or PXE (/bin and /sbin should contain just enough binaries for the system to boot). Those are edge-cases these days, but not unheard of.

For instance, I don't think you could boot RHEL7 over PXE, or off an NFS share, any longer-- systemd is so large and clunky that it would be a really bad idea.

Now, some of this could be resolved by using variables, a la Windows with its %TMP% and %USERPROFILE% style indirection, but that adds a layer of complexity that isn't really needed, as long as everyone agrees on the standard.
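
As a rough sketch of that indirection idea (the fallback chain and names here are illustrative, not any OS's actual rules): resolve a location through the environment instead of hardcoding it.

```python
import os

def temp_dir(environ=os.environ):
    """Resolve the temp directory via environment indirection,
    falling back to a conventional default (sketch only)."""
    for var in ("TMP", "TMPDIR", "TEMP"):
        path = environ.get(var)
        if path:
            return path
    return "/tmp"
```

An application that always goes through such a lookup keeps working when the actual location moves; one that hardcodes the path breaks.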

Reply Score: 2

RE[3]: Comment by tylerdurden
by Thom_Holwerda on Mon 12th Oct 2015 16:05 UTC in reply to "RE[2]: Comment by tylerdurden"
Thom_Holwerda Member since:
2005-06-29

As long as everyone agrees on the standard.


But nobody does, and that's the problem. A standard that's so vague everybody can do whatever they want is a bad standard. Even then, smearing stuff all over the place, forcing the use of fragile package managers and the like to keep the system running, is just bad design.

It may have made sense in simpler times, and it may still make sense on servers and other specialised hardware, but once you arrive at laptops, desktops, phones - it's nothing but complexity that breeds even more complexity.

The FHS is complex and obtuse, and a common argument is that it doesn't matter because users don't see it anyway - but this argument is invalid. Just look at all the layers operating systems are draping over the directory structure just to make the system usable - endless layers of complexity ripe for breakage. Complexity travels upwards - and this, users DO suffer from.

Many operating systems today - i.e., all of them - could benefit immensely from redesigning their building blocks - including the FHS and whatever the Windows equivalent is called, if it even has a name (it doesn't). However, doing such plumbing is not as sexy as much of the other work, and of course, especially UNIX people see UNIX as some sort of bible, the One Truth, immovable, irrefutable, and refuse to even entertain the possibility that whatever was designed for time-sharing systems with punch cards might potentially not be a good fit for a modern laptop or smartphone.

Edited 2015-10-12 16:07 UTC

Reply Score: 1

RE[4]: Comment by tylerdurden
by grat on Tue 13th Oct 2015 00:19 UTC in reply to "RE[3]: Comment by tylerdurden"
grat Member since:
2006-02-02

Many operating systems today - i.e., all of them - could benefit immensely from redesigning their building blocks - including the FHS and whatever the Windows equivalent is called, if it even has a name (it doesn't).


Actually, it does... or used to. It used to be part of what was called the "win32 standard" or some such, and the fact that XP applications generally ignored it while Vista enforced it was part of the reason Vista was so maligned.

Every generation of Windows has changed file locations. Remember "Documents and Settings"? Windows 7 had symlinks to it. %SYSTEMROOT%\Profiles was particularly evil.

Amusingly (to me, I have a warped sense of humor), one of the most static locations in windows is:

c:\windows\system32\drivers\etc

It was there in Windows 95, and it's still there in Windows 8. It contains hosts, lmhosts, and a couple of other files that look like they were ripped straight from Unix (probably because they were).
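
That file format really is the same on both systems, which is why the same trivial parser works for /etc/hosts and for the Windows copy. A hedged sketch (the sample content is made up):

```python
# The hosts file format shared by /etc/hosts and
# c:\windows\system32\drivers\etc\hosts: an address, then one or
# more names, with '#' starting a comment.
SAMPLE = """\
# local names
127.0.0.1   localhost
::1         localhost ip6-localhost
"""

def parse_hosts(text):
    """Return a list of (address, [hostnames]) tuples."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        addr, *names = line.split()
        entries.append((addr, names))
    return entries
```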

Thing is, restructuring all of this effectively means a new operating system. That's why OS X (which has a lot of symlink hell to make a *BSD hierarchy look "normal") isn't really BSD compatible, even though it ought to be.

I agree that poorly behaved applications install in all kinds of weird places-- except really, those are usually installed by packages, and are easy enough to clean up.

I'd actually like for there to be a meta-package installer that respects not just the FHS, but the individual distro's version of the FHS. Converting between .deb and .rpm is an exercise in insanity (but if you're in that hell, look up fpm. The effin' package manager. It ROCKS).

You exist in a world where you want a stable, reliable desktop that doesn't make you see the stuff in the background. That's fine. As an admin of Red Hat and Debian servers, I *have* to see the background.

For you, the FHS is an abomination. For me, it's one of the few things that keeps me from going absolutely stark staring mad as a linux admin (I may be making an unwarranted assumption here).

I don't want a GUI. I don't need a GUI. I can't administer all of my servers with a GUI. I need a command line, preferably one with bash or tcsh (although I do wonder what PowerShell on Linux would be like), because GUIs don't scale.

Reply Score: 2

RE[5]: Comment by tylerdurden
by dpJudas on Tue 13th Oct 2015 01:40 UTC in reply to "RE[4]: Comment by tylerdurden"
dpJudas Member since:
2009-12-10

Every generation of Windows has changed file locations. Remember "Documents and Settings"? Windows 7 had symlinks to it. %SYSTEMROOT%\Profiles was particularly evil.

Applications aren't supposed to hardcode those paths. They have been localizable since Windows 95, and that is one of the reasons Microsoft can change them without breaking any major applications. If an application relied on the folder being called "Documents and Settings", it would only work on the English locale anyway. The paths are retrieved with SHGetSpecialFolderPath.
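
The same query-don't-hardcode idea, sketched in Python rather than the Win32 API (the environment-variable fallback chain is an assumption for illustration, not how SHGetSpecialFolderPath itself works):

```python
import os

def profile_dir(environ=os.environ):
    """Return the user's profile directory by asking the system,
    never by hardcoding a localized path like 'Documents and Settings'."""
    # USERPROFILE on Windows, HOME on Unix-likes; order is illustrative.
    for var in ("USERPROFILE", "HOME"):
        path = environ.get(var)
        if path:
            return path
    return os.path.expanduser("~")
```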

Thing is, restructuring all of this means effectively, a new operating system. That's why OSX (which has a lot of symlink hell to make a *BSD hierarchy look "normal) isn't really BSD compatible, even though it ought to be.

I see OS X as a good example of how to do such a file system migration. They keep the BSD folders there for compatibility reasons and to ease the pain of porting mostly command-line tools.

I don't really care much that there's a /usr directory on my Mac. If anything it's a bit of an advantage, because it allows me to use package management systems like Homebrew to grab Linux/BSD command line stuff when I need it.

You exist in a world where you want a stable, reliable desktop that doesn't make you see the stuff in the background. That's fine. As an admin of Red Hat and Debian servers, I *have* to see the background.

Seeing the background is okay. The thing is that I only want to *have* to see it if something broke in a fundamental way. Kind of like how I want the engine of my car hidden unless I have to repair it.

For you, the FHS is an abomination.

Personally, I wouldn't go *that* far. I think the FHS has limitations that make it a poor fit for desktops and laptops. The only reason the FHS works as well as it does is because 90% of current Linux software can be 'apt-get installed' from the distro repository.

Unfortunately repositories have the same problems as App Stores do: centralized control and the politics that follow.

Reply Score: 2

RE[2]: Comment by tylerdurden
by Drumhellar on Mon 12th Oct 2015 16:20 UTC in reply to "RE: Comment by tylerdurden"
Drumhellar Member since:
2005-07-12

It seems your problem with the FHS is that it would be a horrible design if it were designed today, with the full benefit of hindsight.

The thing is, the FHS wasn't designed (beyond a small number of initial decisions). It grew organically, on different systems, for different reasons, and is merely an attempt to formalize convention. Any FS layout will experience this given enough time. Do we change now, and then change again, and then change again, and again?

Or, just deal with it as-is, and not waste the time and cost needed to change (since we'll have to do it again for the exact same reasons)?

Reply Score: 3

RE[3]: Comment by tylerdurden
by Alfman on Mon 12th Oct 2015 19:54 UTC in reply to "RE[2]: Comment by tylerdurden"
Alfman Member since:
2011-01-28

Drumhellar,

It seems your problem with the FHS is that it would be a horrible design if it were designed today, with the full benefit of hindsight.

The thing is, the FHS wasn't designed (beyond a small number of initial decisions). It grew organically, on different systems, for different reasons, and is merely an attempt to formalize convention. Any FS layout will experience this given enough time. Do we change now, and then change again, and then change again, and again?

Or, just deal with it as-is, and not waste the time and cost needed to change (since we'll have to do it again for the exact same reasons)?


I think this is insightful. Although I don't strictly agree with your conclusions, ironically I agree with you that those forces will largely be responsible for the FHS not going away.

Some people have tried to fix these things, including myself using my own distro with some success. But the maintenance burden of supporting dozens, hundreds, or even thousands of packages is formidable. It wasn't sustainable given my resources and I had to give up and re-roll the distro to be more FHS compatible.

This is why I don't envision FHS changing, at least not without support from a major player.

Reply Score: 2

RE: Comment by tylerdurden
by dpJudas on Fri 9th Oct 2015 21:07 UTC in reply to "Comment by tylerdurden"
dpJudas Member since:
2009-12-10

Why do so many bloggers, in the tech area, feel so comfortable spouting such strong opinions about matters they really have little actual clue about?

What makes you think it is any different in other areas? ;)

Reply Score: 4

RE[2]: Comment by tylerdurden
by acobar on Sat 10th Oct 2015 14:28 UTC in reply to "RE: Comment by tylerdurden"
acobar Member since:
2005-11-15

What makes you think it is any different in other areas?

Maybe because in other areas there is hard knowledge about things that should be respected, with standards bodies and committees analyzing best practices and pushing their members to follow rules or face penalties when things go wrong.

Somehow, in certain areas of computing, and I don't know why, people just push hard for "my way" and, as they don't bear the consequences (even though developers in a long chain of dependencies do) and get away unscathed, they go their "way". Perhaps this explains why things are frequently rewritten and old rules ignored for no other reason than "I read a bit about it and did not like what I saw" or "it is rubbish!", without even developing a deeper knowledge of the current choices.

I am not saying this happens in all areas of computing, but it does in a sufficiently large number to cause a painful headache for most of us.

Reply Score: 2

RE[3]: Comment by tylerdurden
by Alfman on Sat 10th Oct 2015 16:28 UTC in reply to "RE[2]: Comment by tylerdurden"
Alfman Member since:
2011-01-28

acobar,

May be, because on other areas there is a hard knowledge about things that should be respected, with body standards and committees analyzing best practices and pushing their members to follow some rules or face penalties when things go wrong.



Should we conclude that this aspect is unique to our field, or that we just happen to care more because it's closer to home? To someone else who works in law, accounting, in medicine, as a teacher, plumber, etc, they may have their rules, but those will vary by jurisdiction based on the opinions of those in charge. Hopefully those in charge do a good job, but it's natural for people to debate because they want different things. The closer we are to a field, the more we'll learn about the intricacies and conflicts going on inside it, just like FHS for us.

We can't even agree on the most important standards of all: fundamental units of measurement. I say the US should just stamp out English units for the benefit of a universal standard, but even here on OSNews there were dissenting opinions.

Reply Score: 3

RE[4]: Comment by tylerdurden
by acobar on Sat 10th Oct 2015 18:24 UTC in reply to "RE[3]: Comment by tylerdurden"
acobar Member since:
2005-11-15

Should we conclude that this aspect is unique to our field ..

I think part of the problem is that the software side of computing is a "thinking" field; it is impalpable, and so the costs associated with choices, or with developing new ones, are largely ignored or deemed worth the hassle.

You cannot do that easily in medicine, where you may be risking lives, or in other engineering fields, where you may be risking tangible things.

We can't even agree on the most important standards of all: fundamental units of measurement.

True, but we are not going to, or probably will not, see someone deviate slightly from what is used just because he thinks an "inch" is a stupid unit of measurement, and so roll out his own and order tools, bolts and nuts from manufacturers just to "respect" his "feelings".

So, in some respects, yes, I think software engineering is unique, not only because of what it allows its professionals to do but also because of the habits ingrained in many of its practitioners.

Reply Score: 2

Comment by galvanash
by galvanash on Fri 9th Oct 2015 20:30 UTC
galvanash
Member since:
2006-01-25

That's too bad - the FHS is an abomination, a useless, needlesly complex relic from a time we were still using punch cards, and it has no place in any modern computing platform.


I'm fine with simplification. It can be, and has been, done on various Unixes, Linux distros, and OS X (not strictly FHS of course, but more or less the same). Symlinks can go a long way... either by using them to create friendly views or by doing the inverse (creating FHS-compliant views into a friendlier structure). Either way things are still prone to breakage - but if done carefully and pervasively it can work.
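
A minimal sketch of the "friendly view" idea (the paths and names are made up): a symlink exposes an FHS location under a nicer name without moving anything.

```python
import os
import tempfile

# Build a throwaway tree standing in for an FHS layout (paths made up).
root = tempfile.mkdtemp()
target = os.path.join(root, "usr", "share", "applications")
os.makedirs(target)

# Expose it under a friendlier name, OS X "Applications"-style.
friendly = os.path.join(root, "Applications")
os.symlink(target, friendly)
```

Both names now resolve to the same directory, which is exactly why the approach is both attractive and fragile: anything that compares paths literally sees two different locations.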

But it never really catches on... Fundamentally reworking the Unix directory structure is simply not worth the pain it causes, and no one is really happy with symlink magic either. Fact is, most users spend 99% of their time in /home/whatever - venturing out of it is mostly an admin/developer thing. Of course most home users of Linux are in effect admins themselves, so they have to learn at least the basics, but once you have a system configured you're back to living in /home most of the time (if you do things right, anyway). I just don't see what the big deal is. I'd like it simpler too, but inertia is a bitch...

Throwing away 40 years of learned behavior and breaking nearly every program in existence for the sake of a subjective improvement (and it is purely subjective) doesn't make sense. It's been tried, and every time a consensus is reached - which is more or less "leave it alone".

I'm not disagreeing with you completely; it is complicated. I don't think it is needlessly complicated though - there is a valid rationale for almost all of it. The fact is pretty much every single OS in common use, with the exception of Windows, uses something pretty close to the FHS - even BeOS did, in a manner of speaking. It isn't going away. Ever.

Reply Score: 10

RE: Comment by galvanash
by laffer1 on Fri 9th Oct 2015 21:47 UTC in reply to "Comment by galvanash"
laffer1 Member since:
2007-11-09

Linux folks don't care anymore. They say we're clinging to the past. systemd is the future. Changing everything from ifconfig to init to directory layouts makes things better they say.

Linux is not a unix clone anymore. We need to all accept it and move on.

Reply Score: 1

RE[2]: Comment by galvanash
by bassbeast on Sat 10th Oct 2015 01:38 UTC in reply to "RE: Comment by galvanash"
bassbeast Member since:
2007-11-11

What is sad is it's NOT "Linux folks"; it's the simple fact that pretty much all of Linux has been hijacked by Red Hat and its cronies, so they WILL push the systemd party line, even ignoring the "user first" original mission statement of Debian.

Say what you will about Windows and the current version phoning home, but I think the current situation perfectly illustrates the value of "voting with your wallet" and why "free as in beer" was never a viable long-term strategy. Windows users looked at Windows "Hey, I'm a supersized smartphone now!" 8, said "We don't want that", and refused to buy; sales went down the shitter; MSFT was forced to change the UI to something the users WOULD accept, and Windows 10 got more users in a month than Windows 8 did in something like a year.

Now compare this to Linux. Red Hat pushes systemd, users say "We don't want that", but because Linux users have no power of the wallet, RH is able to simply ignore them, since corporate customers are its focus, and go straight to the devs (seriously, look how many heads at places like Debian and Canonical are former RH or tightly connected to RH), and the devs give the users the bird. Sure you can fork, but so what? How long can a fork last with 1/10000 of the budget and with everything being tied to systemd? It's only a matter of time before too many critical systems are completely hooked into systemd for any fork to function without writing its own OS from scratch!

This is why the power of the wallet MATTERS, as voting with your wallet is the only way you can affect the direction of a company. Since Linux users on average don't pay the bills of the large distros like Debian, Ubuntu, and Red Hat, there is no reason to listen to you; you can take it or hit the bricks. It's sad, but money talks, and no money? No voice.

Reply Score: 6

RE[3]: Comment by galvanash
by dpJudas on Sat 10th Oct 2015 04:07 UTC in reply to "RE[2]: Comment by galvanash"
dpJudas Member since:
2009-12-10

Windows users look at Windows "Hey I'm a supersized smartphone now!" 8 and say "We don't want that", refuse to buy, sales go down the shitter, MSFT is forced to change the UI to something the users WILL take and Windows 10 gets more users in a month than Windows 8 did in something like a year.

You are conveniently leaving out the part where we had to wait 5 years for MS to give in. Even now they are pushing their Universal App agenda, which is just Windows 8 renamed. The power of each individual customer is virtually nil.

Now compare this to Linux, Red Hat pushes systemd, users say "We don't want that" but because Linux users have no power of the wallet?

You're making the mistake of assuming the majority of Linux users share your opinion. What evidence do you have of that?

This is why the power of the wallet MATTERS, as voting with your wallet is the only way you can affect the direction of a company. Since Linux users on average don't pay the bills of the large distros like Debian, Ubuntu, Red Hat? There is no reason to listen to you, you can take it or hit the bricks. Its sad but money talks and no money? No voice.

You really think Microsoft gives a damn about what I think? I'm not a big enough customer for that. I have no voice.

Reply Score: 8

RE[4]: Comment by galvanash
by jb.1234abcd on Sat 10th Oct 2015 05:14 UTC in reply to "RE[3]: Comment by galvanash"
jb.1234abcd Member since:
2014-12-03

"Now compare this to Linux, Red Hat pushes systemd, users say "We don't want that" but because Linux users have no power of the wallet?"

"You're making the mistake of assuming the majority of Linux users share your opinion. What evidence do you have of that?"

Well, here it is.
http://distrowatch.com/polls.php?poll=8

Systemd poll results (Distrowatch.com, Jul 2015):

I use systemd and like it: 787 (30%)
I use systemd and dislike it: 318 (12%)
I am not using systemd and plan to use it: 111 (4%)
I am not using systemd and plan to avoid it: 1170 (44%)
Other: 260 (10%)

Well, that's 12% + 44% = 56% of statistically random Linux users against systemd.

This, after 5 years since the introduction of systemd into the Linux ecosystem!
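
The quoted percentages do check out against the raw counts; a quick worked computation:

```python
# Raw vote counts from the DistroWatch poll quoted above.
votes = {
    "use and like": 787,
    "use and dislike": 318,
    "not using, plan to use": 111,
    "not using, plan to avoid": 1170,
    "other": 260,
}
total = sum(votes.values())
against = votes["use and dislike"] + votes["not using, plan to avoid"]
against_pct = round(100 * against / total)  # about 56
```

Whether that sample says anything about Linux users in general is, of course, a separate question.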

Reply Score: 4

RE[5]: Comment by galvanash
by General_Edmund_Duke on Sat 10th Oct 2015 10:35 UTC in reply to "RE[4]: Comment by galvanash"
General_Edmund_Duke Member since:
2014-05-17

Most home users don't care whether it's systemd or not. People happy with their OSes mostly don't go to DistroWatch; what for? It's the frustrated ones who seek it out, and the frustrated hate everything, so they will always be against, no matter what you ask about.
Yes, Ubuntu is very often used by unskilled, don't-care-as-long-as-it-works people. I know some of them. I used to be a sysadmin (well, not a low-level one building my own Google File System; I just needed a working solution: install, configure, forget, but update sometimes), and I wouldn't care whether it's systemd or not even a little more than I do now.

Reply Score: 2

RE[5]: Comment by galvanash
by dpJudas on Sat 10th Oct 2015 10:46 UTC in reply to "RE[4]: Comment by galvanash"
dpJudas Member since:
2009-12-10

Well, that's 12% + 44% = 56% of statistically random Linux users against systemd.

Please never write any encryption software!!

Using polls as statistics is very problematic because you don't get a random sample set. In this case you get answers from people that frequently visit distrowatch. The average Linux user does not visit that site. It is like doing a poll on OSNews about Windows 7 vs Windows 10. You could get some fun entertaining stats out of that, but it would in no way represent what the average Windows user out there thinks.

Reply Score: 8

RE[6]: Comment by galvanash
by bassbeast on Sat 10th Oct 2015 22:45 UTC in reply to "RE[5]: Comment by galvanash"
bassbeast Member since:
2007-11-11

Well, go to places that Linux users DO go, like Slashdot and SoylentNews, and ask THEM about systemd. BTW, be prepared to be called some VERY ugly names as they tell you what a steaming POS it is (along with more than enough actual examples and screencaps to show it has indeed got serious game-breaking issues), along with telling you in no uncertain terms what you can do with it.

And I'm sorry, but go to the user forums of your favorite distro and post the same question... that is, until they censor you, ban you, and wipe all evidence that it was ever there. BTW, that is apparently SOP at all the forums controlled by the big three, Debian, RH, and Canonical, even though it expressly goes against Debian's founding mission statement. If THAT doesn't tell you something is rotten in Denmark, then frankly I don't know what will.

Oh, and just an observation from an outsider, but you know what the way the distros are pushing systemd reminds me of the most? The way MSFT shilled Windows 8. You even get the exact.same.talking.points, like "you are a luddite" (ad hominem), "embrace the innovation" (again attacking those who don't fall in line, without any concrete reason why they should choose an unproven system over one that worked), and even outright personal attacks by devs and mods. It seriously sounds just like what we heard from the Win 8 shilling; in fact I bet if I changed only a couple of words per paragraph they would be interchangeable.

For a bunch that once prided themselves on giving technical explanations so detail-dense they sounded like technobabble, and who debated and got the users involved in EVERYTHING, to suddenly close ranks and start shutting down debate? Yeah, I want to know who's cashing the checks.

Reply Score: 3

RE[7]: Comment by galvanash
by WereCatf on Sat 10th Oct 2015 23:08 UTC in reply to "RE[6]: Comment by galvanash"
WereCatf Member since:
2006-02-15

Well go to places that Linux users DO go like Slashdot and SoylentNews and ask THEM about systemd. BTW be prepared to be called some VERY ugly names as they tell you what a steaming POS it is (along with more than enough actual examples and screencaps to show it is indeed got serious game breaking issues) along with telling you in no uncertain terms what you can do with it.


I frequent Slashdot on a daily basis and I can't recall anyone, not a single comment, having any actual real-world examples of issues. The comments mostly revolve around not liking change and therefore bashing systemd, or ignorance, like claiming that systemd spies on you and reports to the NSA/GCHQ/whatever, or just plain trolling.

Also, it's the people who are not happy about something that are the most vocal; people who are content generally don't make themselves heard. You can't really use that to gauge your argument's credibility.

Reply Score: 3

RE[3]: Comment by galvanash
by ddc_ on Sat 10th Oct 2015 05:42 UTC in reply to "RE[2]: Comment by galvanash"
ddc_ Member since:
2006-12-05

Red Hat pushes systemd, users say "We don't want that" but because Linux users have no power of the wallet? RH is able to simply ignore the users [...]

In free software, people vote with their feet. There is Gentoo, which never replaced OpenRC with systemd. There is Void Linux, which was built specifically systemd-free. There is Devuan, which pushes a systemd-free Debian clone. If people were really all that concerned about systemd, these distros would be booming right now. They are not. Do you know why? Because most Linux users either are happy with systemd or don't care enough. Sure, people running away from systemd are easy to come across in BSD communities, but the numbers aren't big enough to make the shift visible in stats.

Compare that to the GNOME 3 drama. When there was indeed a sufficient number of people disliking it, MATE and Cinnamon happened, and they are still there and popular enough. XFCE saw a huge increase in its user base. Unlike systemd, that really concerned a lot of people, and the consequences are easily visible.

Reply Score: 8

RE[4]: Comment by galvanash
by bassbeast on Sat 10th Oct 2015 22:48 UTC in reply to "RE[3]: Comment by galvanash"
bassbeast Member since:
2007-11-11

The same argument can be made for the spying added to Ubuntu and Windows 10. The majority didn't vote with their feet, so this is perfectly acceptable behavior, yes?

Or it could simply be that once trapped in an ecosystem it costs real $$$ to move, both in time and in getting everything back up and running, so it becomes very difficult to just switch. Isn't that one of the arguments for why Windows users don't switch no matter what MSFT does?

Reply Score: 2

RE[5]: Comment by galvanash
by ddc_ on Sun 11th Oct 2015 04:39 UTC in reply to "RE[4]: Comment by galvanash"
ddc_ Member since:
2006-12-05

Or it could simply be that once trapped in an ecosystem it costs real $$$ to move, both in time and in getting everything back up and running, so that is becomes very difficult to just switch.

It never was very expensive in any resource. Actually, if you are not in a hurry, it is fairly cheap and trivial. The problem is that most people never cared about spying, crapware, and systemd enough to waste any time on switching.

Reply Score: 2

RE[6]: Comment by galvanash
by WereCatf on Sun 11th Oct 2015 05:43 UTC in reply to "RE[5]: Comment by galvanash"
WereCatf Member since:
2006-02-15

It never was very expensive on any resource. Actually, if you are not in a hurry, it is fairly cheap and trivial.


Only if you do pretty much nothing other than use a web browser. If you use any software that isn't available on the target platform, or can't get it running stably in Wine or whatever, then it's neither cheap nor trivial.

Reply Score: 2

RE[7]: Comment by galvanash
by ddc_ on Sun 11th Oct 2015 05:54 UTC in reply to "RE[6]: Comment by galvanash"
ddc_ Member since:
2006-12-05

If you use any software that isn't available on the target platform, or that doesn't run stably in Wine or whatever, then it's neither cheap nor trivial.

We were talking about switching Linux distros because of systemd. What does Wine have to do with that?

Reply Score: 3

RE[7]: Comment by galvanash
by bassbeast on Mon 12th Oct 2015 02:20 UTC in reply to "RE[6]: Comment by galvanash"
bassbeast Member since:
2007-11-11

I'll get hate for pointing this out but screw it, I'm too ancient to care.

What this is is the classic "all you need is a browser, GIMP and LO" argument of the hardcore Linux zealot, and you know what? I've been building and selling computers since the Shat sold the VIC-20, and I have never once met this mythical person. I don't care if they are 15-year-old kids or 75-year-old retirees; they all have some software they require, because if they didn't, what the hell would they actually need a PC for?

And Linux is NOT magical: just because a driver or a piece of software works in, say, Ubuntu does NOT guarantee that the same will be true on Red Hat. Every minute you spend having to find workarounds, fixes, and alternatives? That is MONEY, unless your time is literally worthless, and even after all these years I've never met anybody who thinks their time isn't worth anything or who finds the tediousness of the above tasks enjoyable or "fun".

There is a reason there are sayings like "if it ain't broke, don't fix it" and why people will keep an OS long past its EOL: the effort to switch is usually painful and unpleasant. Anybody who uses the above "all you need" argument is either being disingenuous at best, or at worst telling falsehoods they know to be unrealistic in order to sell "their" brand. Because in real life? If those users do exist, they are as rare as hen's teeth and do NOT in any way, shape, or form represent the typical PC user in 2015.

Reply Score: 2

RE[3]: Comment by galvanash
by shotsman on Sat 10th Oct 2015 07:13 UTC in reply to "RE[2]: Comment by galvanash"
shotsman Member since:
2005-07-22

Eh? There are a good number of clear cases where Canonical has given RH the finger and done their own thing.

IMHO systemd was released far too early. It was made the default in several distros with a good number of bugs present.
The whole thing seemed rushed, for some reason I cannot fathom.
Now? It seems to be pretty stable.
The future? In a few years we may well wonder what all the fuss was about.

If you don't like systemd, then go and fork your own distro and keep init scripts. It is all FOSS, so there really is nothing to stop you, now is there?

For me, my days of hacking kernels are long gone. I spent a good few years writing and supporting device drivers for VMS. I'll get to grips with systemd in time, but at the moment none of what I do with Linux touches it. I suspect that most users are in the same position, i.e., systemd? Meh.

Reply Score: 2

RE[3]: Comment by galvanash
by gilboa on Sat 10th Oct 2015 08:10 UTC in reply to "RE[2]: Comment by galvanash"
gilboa Member since:
2005-07-06

Let me try to follow your logic:

RH's sales and stock price doubled (800M USD to 1.6B USD, and ~40 to ~80, respectively) in the last 4 years.
RH is by far the largest enterprise Linux distributor (especially given that CentOS is now part of RH, and that Unbreakable Linux is nearly an exact copy of RHEL).

Beyond that, *all* the other enterprise Linux distributions (you know, the ones that have paying customers) have either switched or are in the process of switching to systemd.

... So, following *your* logic, users *are* voting with their wallets *in favor* of systemd.

- Gilboa

Edited 2015-10-10 08:12 UTC

Reply Score: 3

RE[3]: Comment by galvanash
by dinosaur on Sat 10th Oct 2015 12:10 UTC in reply to "RE[2]: Comment by galvanash"
dinosaur Member since:
2015-05-10

Systemd speeds up the boot up and shutdown processes. There's no doubt about it. I do worry about how deeply it's embedded in the system. But hopefully with enough time and testing it'll become rock solid and secure.

Reply Score: 1

RE[3]: Comment by galvanash
by Bill Shooter of Bul on Sun 11th Oct 2015 04:04 UTC in reply to "RE[2]: Comment by galvanash"
Bill Shooter of Bul Member since:
2006-07-14

While I completely disagree with your assessment of systemd, and find your paranoia too 1999 for my tastes, there is another, bigger issue with your theory: how to influence free software.

You *can* vote with your wallet; RHEL will sell to anyone. Or you *can* vote with code contributions to a different distro, like Debian. All of the devs can vote, and they did (in both the technical steering committee elections and the general resolutions). If you want a vote in free software, those are your choices: money or code. It's your choice. But you can't legitimately complain if you do neither. You don't have the right to tell me how or what to code, or how I should spend my free time and resources.

Reply Score: 3

RE[2]: Comment by galvanash
by crhylove on Tue 13th Oct 2015 04:17 UTC in reply to "RE: Comment by galvanash"
crhylove Member since:
2010-04-10

"They" say. I don't know any sysadmins personally who invite this change, and I know a few sysadmins.

Reply Score: 1

I get it now
by abraxas on Fri 9th Oct 2015 22:39 UTC
abraxas
Member since:
2005-07-07

I've been using Linux for almost 20 years now, but in the past few years I have been working in a job that is primarily Windows. I can see why the Windows engineers are so turned off by Linux now. People argue about init systems and file system layouts as if they matter so much more than they do. Windows has some really bad kludges that have been carried along for years now, but despite the general dislike of these designs it doesn't break out into an all out flamewar among users and developers. There are so many interesting things being done in computing, and we are bitching about f--king file system layouts. Jesus Christ. Don't even get me started on systemd.

Edited 2015-10-09 22:39 UTC

Reply Score: 3

RE: I get it now
by ilovebeer on Sat 10th Oct 2015 00:38 UTC in reply to "I get it now"
ilovebeer Member since:
2011-08-08

With Windows you know what you're getting, and it's going to be something that is highly likely to work. Unfortunately Linux can't say the same. On top of the inconsistency between distros, you can have inconsistencies from one distro version to the next. Depending on your distro of choice, updating can easily be a roll of the dice with risk of all kinds of breakage.

Anyone who is subscribed to the Linux dev mailing lists knows devs are in constant disagreement and conflict over what direction <something> should head in. It feels like Linux and most of its subsystems are in a perpetual state of identity crisis.

Control is a sensitive thing. People want it and when they have too little, they complain to no end. But when they have too much, they always make a big heaping mess of everything. People who love Linux tend to love it for what it is - not Windows. But the reverse is true too. A lot of people choose Windows because it's not the mess that Linux often is. Like I said, they know what they're getting and it's highly likely going to work as expected.

Reply Score: 5

RE[2]: I get it now
by tylerdurden on Sat 10th Oct 2015 19:36 UTC in reply to "RE: I get it now"
tylerdurden Member since:
2009-03-17

That's some stale FUD you got there. You realize the late 90s were over a decade and a half ago, right?

Reply Score: 2

RE[3]: I get it now
by ilovebeer on Sat 10th Oct 2015 19:51 UTC in reply to "RE[2]: I get it now"
ilovebeer Member since:
2011-08-08

So basically what you're saying is that you pay no attention at all to Linux development. You don't read the Linux dev mailing lists. And you ignore all of the bug and breakage reports, and the regressions, that Linux gets on a daily basis. That's the worst kind of Linux user: the kind that can't acknowledge its downfalls. You can stick your head in the sand all you like, but the problems I've mentioned are going to continue to exist.

Linux is simply not the tropical paradise people like you would have others believe. Does it make you feel better when I say I love Linux when it comes to my personal servers and HTPCs?

Reply Score: 3

RE[2]: I get it now
by abraxas on Sun 11th Oct 2015 16:10 UTC in reply to "RE: I get it now"
abraxas Member since:
2005-07-07

Windows has its share of problems; they are just different problems. I have been able to maintain rolling release distributions for years with Linux. Sure, there have been hiccups, but Windows doesn't avoid those either. I have had more than one Windows update bug cause serious problems in recent years.

Reply Score: 3

RE[2]: I get it now
by cfgr on Sun 11th Oct 2015 20:39 UTC in reply to "RE: I get it now"
cfgr Member since:
2009-07-18

With Windows you know what you're getting, and it's going to be something that is highly likely to work.

That must be the reason why a 10-year-old Windows version still has a higher market share than the last two versions of Windows.

When upgrading your Windows the only thing you know you're getting is trouble, and if you're lucky, your machine is not totally bricked (think Windows 8.1 upgrade disaster). I'm not sure about you, but for most people I know a Windows upgrade typically involves getting a new PC.

Edited 2015-10-11 20:42 UTC

Reply Score: 4

RE[3]: I get it now
by ilovebeer on Sun 11th Oct 2015 21:15 UTC in reply to "RE[2]: I get it now"
ilovebeer Member since:
2011-08-08

With Windows you know what you're getting, and it's going to be something that is highly likely to work.
That must be the reason why a 10-year-old Windows version still has a higher market share than the last two versions of Windows.

What exactly are you referring to? Windows 7 isn't 10 years old, so it can't be Windows 7 vs. Windows 8 and 10. If you're talking about XP, you do realize that XP is widely used in systems such as point-of-sale terminals, ATMs, etc., which aren't updated at the frequency at which the regular home user updates or buys a new PC with the newest OS pre-installed.

I personally still use Windows 7 on all my own Windows machines, and I have no intention of changing any time soon because they're all rock solid; I simply have no reason to. I never had any interest in Windows 8/8.1 because I didn't like what I read/heard about it. I've heard far more positive than negative about Windows 10. I do take issue with all the data mining it does by default, but you can turn that nonsense off, so it's no more than a setup inconvenience from what I've read.

When upgrading your Windows the only thing you know you're getting is trouble, and if you're lucky, your machine is not totally bricked (think Windows 8.1 upgrade disaster). I'm not sure about you, but for most people I know a Windows upgrade typically involves getting a new PC.

In all the Windows upgrades I've been through both at home and at work, I have yet to come across a single system that got bricked. I don't recall a time when there was actually any problem at all, for that matter. I/we only do clean installs however so maybe the problem you describe is limited to those who try upgrading on top of a previous install.

No os is immune from hiccups and sometimes even disasters. But if you're generally suggesting that Windows is not a stable and solid os for most people then you must not have much experience with Windows and/or people who use it.

Reply Score: 3

RE[4]: I get it now
by cfgr on Mon 12th Oct 2015 08:56 UTC in reply to "RE[3]: I get it now"
cfgr Member since:
2009-07-18

I was referring to Windows XP compared to both Win 8 and Win 10. If Windows was as predictable as you said, it wouldn't have taken us years to get rid of XP.

In all the Windows upgrades I've been through both at home and at work, I have yet to come across a single system that got bricked.

You perhaps, but tell that to those HP users after the 8.1 upgrade.

And before that, absolutely not a single normal user ever upgraded their Windows, they just bought a new PC with the next Windows.

I/we only do clean installs however so maybe the problem you describe is limited to those who try upgrading on top of a previous install.

My point. A clean install on a new machine is easy.

You're measuring with a double standard here. In your first post you were complaining about upgrading Linux distros but you don't actually do that for Windows either.

And yes, some distros such as Arch can break stuff when doing regular updates. Why would you use that distro then if you're not willing to deal with it? Pick the right tool for the job. And if that's Windows for you, fine, but don't pretend it's a rose garden for everyone else.

I recently upgraded my Debian server to version 8. Done in half an hour, I can't recall a single problem, perhaps tweaking a setting somewhere but it wasn't noticeable enough to even remember it.

No os is immune from hiccups and sometimes even disasters. But if you're generally suggesting that Windows is not a stable and solid os for most people then you must not have much experience with Windows and/or people who use it.

No, I'm suggesting your 'you know what you're getting' argument against Linux can be completely turned around and applied to Windows as well. I'm suggesting you're using a double standard when comparing both systems.

Edited 2015-10-12 09:07 UTC

Reply Score: 3

RE[5]: I get it now
by ilovebeer on Mon 12th Oct 2015 16:51 UTC in reply to "RE[4]: I get it now"
ilovebeer Member since:
2011-08-08

I was referring to Windows XP compared to both Win 8 and Win 10. If Windows was as predictable as you said, it wouldn't have taken us years to get rid of XP.

As I pointed out, XP held significant market share for so long because it had multiple points of penetration, unlike Windows 8/10. Additionally, you're comparing a decade's worth of time against a few years. Let's see where Windows 10 is in another 10 years, so the comparison can be more realistic.

In all the Windows upgrades I've been through both at home and at work, I have yet to come across a single system that got bricked.
You perhaps, but tell that to those HP users after the 8.1 upgrade.

You can say that about just about any OS at any version. A more telling metric is what most users experience most of the time.

And before that, absolutely not a single normal user ever upgraded their Windows, they just bought a new PC with the next Windows.

Come on now.

I/we only do clean installs however so maybe the problem you describe is limited to those who try upgrading on top of a previous install.
My point. A clean install on a new machine is easy.

You're measuring with a double standard here. In your first post you were complaining about upgrading Linux distros but you don't actually do that for Windows either.

Referring to my personal machines - I should have made that clear. But that doesn't account for the machines at work that weren't clean-installed, so I'm not using a double standard. The point is that upgrade issues seem to be isolated to those upgrading over existing installs, and then only a certain group of those people. It hasn't been a problem I've experienced with either clean installs or upgrades over an existing install.

And yes, some distros such as Arch can break stuff when doing regular updates. Why would you use that distro then if you're not willing to deal with it? Pick the right tool for the job. And if that's Windows for you, fine, but don't pretend it's a rose garden for everyone else.

I've always been a promoter of using the right tool for the job and whatever works best for your needs. I don't try to `sell` any OS to anyone. I myself am a Linux and Windows user at home, and you can add OS X to the list when considering work as well. All 3 of them are great in some areas and trash in others. But I'm not saying anything new here; I've said all this before in past threads.

I recently upgraded my Debian server to version 8. Done in half an hour, I can't recall a single problem, perhaps tweaking a setting somewhere but it wasn't noticeable enough to even remember it.

You should be glad to be part of that group rather than one of the people who had to file bug reports because the upgrade failed in some way. I know someone whose laptop was completely hosed by that upgrade. He wound up doing a clean install of Debian Testing rather than Debian 8; that went smoothly.

No os is immune from hiccups and sometimes even disasters. But if you're generally suggesting that Windows is not a stable and solid os for most people then you must not have much experience with Windows and/or people who use it.
No, I'm suggesting your 'you know what you're getting' argument against Linux can be completely turned around and applied to Windows as well. I'm suggesting you're using a double standard when comparing both systems.

I haven't made any argument against Linux; I've simply shared my own personal experience, what I read (daily) on multiple Linux dev mailing lists, and various forums. I'm neither pro-<some os> nor anti-<some other os>. I use the 3 main ones: 2 by choice, the 3rd only because I have to at work. It just so happens that I hear a lot more complaints about Linux desktops breaking because something was updated than I do about Windows desktops. It is what it is.

Reply Score: 2

RE[6]: I get it now
by cfgr on Tue 13th Oct 2015 10:16 UTC in reply to "RE[5]: I get it now"
cfgr Member since:
2009-07-18

I've simply shared my own personal experience, what I read (daily) on multiple Linux dev mailing lists, and various forums. I'm neither pro-<some os> nor anti-<some other os>.

Fair enough, my apologies as I mistook your post for applying double standards.

My experience is that I've always run into trouble when upgrading* desktops, no matter the OS. As for regular updates: on Ubuntu they tend to break something more often (though there are simply a lot more updates, for all software), but it's usually smaller stuff, which is still very annoying. On Windows (7) I've had some severe issues with hanging updates and update loops, and I definitely didn't know what I was getting there.

(*) If I didn't make it clear enough: with 'upgrading' I meant a major new distro/windows version, while 'updating' is the usual patching.

Edited 2015-10-13 10:18 UTC

Reply Score: 2

RE: I get it now
by dinosaur on Sat 10th Oct 2015 12:03 UTC in reply to "I get it now"
dinosaur Member since:
2015-05-10

now but despite the general dislike of these designs it doesn't break out into an all out flamewar among users and developers.


That's because Windows users have no say in what goes into the operating system; they have to take whatever is prescribed by Microsoft. It also shows a lack of enthusiasm from users, because Windows improvements mainly benefit Microsoft, the for-profit enterprise behind the software. It's not a community project like open-source ones, which benefit the whole community; if you want something done, you have to speak up or roll up your sleeves and contribute.

Reply Score: 3

RE[2]: I get it now
by ilovebeer on Sat 10th Oct 2015 14:49 UTC in reply to "RE: I get it now"
ilovebeer Member since:
2011-08-08

What makes you think Joe User wants a say in how the OS is designed? Lack of enthusiasm? Sure, that's one way to describe it; lack of interest probably describes it better. Most people are neither programmers nor designers. They don't want to worry about it, or about contributing, and will gladly let someone else handle that. Windows users will make their voices heard if something gets too far off track. Notice how the Start button was removed, but then put back? That was a direct result of Windows user "enthusiasm".

Also, community-driven open-source projects like Linux may sound good on paper, but I can't agree that all the problems they bring into the picture are a benefit to the whole community. Progress is either forced by one person's or one dev group's vision, or it's stalled by stubbornness, posturing, and lack of true `community` effort. In reality, Linux development is fractured and fragmented to hell. If you think that's any more ideal or beneficial to users, you've got some screws loose.

Reply Score: 2

RE[2]: I get it now
by abraxas on Sun 11th Oct 2015 16:12 UTC in reply to "RE: I get it now"
abraxas Member since:
2005-07-07

"now but despite the general dislike of these designs it doesn't break out into an all out flamewar among users and developers.


That's because Windows users have no say in what goes into the operating system; they have to take whatever is prescribed by Microsoft. It also shows a lack of enthusiasm from users, because Windows improvements mainly benefit Microsoft, the for-profit enterprise behind the software. It's not a community project like open-source ones, which benefit the whole community; if you want something done, you have to speak up or roll up your sleeves and contribute.
"

Except most of the people who bitch are end users who will never contribute and barely know what they are talking about.

Reply Score: 3

brion
Member since:
2010-11-04

There seems to be a trend toward isolating applications from the OS a bit more, both in the server world (containers and such) and in end-user apps (e.g. xdg-app sandboxing).

If the sandbox environment can provide the right subset of libraries/tools to run your app, perhaps in the future it will matter less what the base OS file layout etc. looks like.

Reply Score: 3

Informative link missing
by pepa on Sat 10th Oct 2015 14:37 UTC
pepa
Member since:
2005-07-08

Informative link missing:
https://lwn.net/Articles/658809/

Reply Score: 2

Comment by kaiwai
by kaiwai on Sun 11th Oct 2015 09:05 UTC
kaiwai
Member since:
2005-07-06

That's too bad - the FHS is an abomination, a useless, needlesly complex relic from a time we were still using punch cards, and it has no place in any modern computing platform. All operating systems have absolutely horrible and disastrous directory layouts, but the FHS is one of the absolute worst in history.


As opposed to:

C:\Windows

Which amounts to random shit being thrown into random locations with no reasoning other than "we're too lazy to actually clean up the mess and have a logical directory layout"? Compared to the abomination that is Windows, the *NIX layout makes a hell of a lot more sense, especially coming from an OS X user who would sooner deal with what we have today than with the mess of the Windows world.

Reply Score: 4

RE: Comment by kaiwai
by Thom_Holwerda on Sun 11th Oct 2015 10:04 UTC in reply to "Comment by kaiwai"
Thom_Holwerda Member since:
2005-06-29

I think you should scroll up a bit and read the blurb again before trying to falsely imply that I like the Windows one.

That being said, pointing fingers and saying "but but but that one sucks too!" is not a solid argument for keeping smelly crap lingering around.

There's a reason we don't use punch cards anymore.

Reply Score: 1

Any wonder why so many think Linux is a joke
by MacMan on Sun 11th Oct 2015 20:08 UTC
MacMan
Member since:
2006-11-19

We develop a cross-platform computational biology package, and most of our users are not very technically literate, so it would be hopeless to expect them to compile their own packages. So, we provide binaries.

We build binaries on OS X 10.6, and they work perfectly on all versions 10.6 and above; zero problems. On Windows, we build on 64-bit WinXP, and again, zero problems at all.

Now, Linux: here the nightmare begins. Hundreds of freaking completely incompatible distributions, nothing is the same in any distro, so basically we have to keep around 40 different virtual machines to build the Linux binaries. And of course, nothing built in one is binary compatible with anything else. What a complete and utter joke.

Red Hat and, to an extent, SuSE are literally the only ones in the Linux scene that take binary compatibility seriously, along with system stability and compatibility. RHEL is the only Linux where we don't have problems.

Try to build something on Ubuntu 12.04: it won't work on any newer Ubuntu. Then of course there is the "rolling release" crap like Arch, where you build something one week and it stops working a week later because they snuck in some incompatible binary change.

I've been writing Unix software for a long time; I started with AIX about 25 years ago, and I've never had the kinds of incompatibility problems seen in Linux.

Reply Score: 3

sergio Member since:
2005-07-06

We develop a cross-platform computational biology package, and most of our users are not very technically literate, so it would be hopeless to expect them to compile their own packages. So, we provide binaries.

We build binaries on OS X 10.6, and they work perfectly on all versions 10.6 and above; zero problems. On Windows, we build on 64-bit WinXP, and again, zero problems at all.

Now, Linux: here the nightmare begins. Hundreds of freaking completely incompatible distributions, nothing is the same in any distro, so basically we have to keep around 40 different virtual machines to build the Linux binaries. And of course, nothing built in one is binary compatible with anything else. What a complete and utter joke.

Red Hat and, to an extent, SuSE are literally the only ones in the Linux scene that take binary compatibility seriously, along with system stability and compatibility. RHEL is the only Linux where we don't have problems.

Try to build something on Ubuntu 12.04: it won't work on any newer Ubuntu. Then of course there is the "rolling release" crap like Arch, where you build something one week and it stops working a week later because they snuck in some incompatible binary change.

I've been writing Unix software for a long time; I started with AIX about 25 years ago, and I've never had the kinds of incompatibility problems seen in Linux.


What you say is so true I'm gonna cry. I've been ranting about this Linux mess for at least 15 years... nobody cares!! It's incredible.

Exactly the same problem you describe with apps also happens with closed-source drivers/modules, and it's much more problematic because you usually end up with an unbootable system.

I was tired of solving module issues on critical Linux HA clusters running Veritas Volume Manager/Cluster. Every time the Linux kernel was patched/updated by the security team, the Veritas modules stopped loading, so you had to recompile the module... incredible but real. The same happens with FC HBA drivers and any other closed-source third-party module; you always have some kind of issue.

I think the main problem with Linux (kernel and distros) is that a lot of technical decisions are taken by people who use the OS for fun or research, but not for business-critical servers... as you said, the only enterprise-ready distro is RHEL, and it's miles behind Solaris or AIX in stability. They're supposed to be rock solid... but they are not; they change things all the time, they break things, and they don't care. It's your problem.

That Linux "amateurism" is the reason why I refuse to recommend/run Linux in baremetal systems anymore.

If you want to run Linux, great: use ESX and run RHEL on a VM (and keep a snapshot ready every time you update or install something). Linux distros are not serious enough to take care of driver and/or software updates; they fuck things up 50% of the time. Sad but true.

Reply Score: 1

MacMan Member since:
2006-11-19

I think RHEL is the only one out there that takes kernel binary compatibility seriously. They make sure kernel updates don't break any ABIs, and I don't think I've ever heard of a RHEL update breaking a binary driver.

The other ones, forget it.

Reply Score: 2

bfr99 Member since:
2007-03-15

You are opening yourself up to the standard criticisms, like: you don't really know Linux well, all it takes is a little conditional compilation for cross-release compatibility, and the catch-all "just read the sources".

Reply Score: 1

Soulbender Member since:
2005-08-18

.

Edited 2015-10-12 06:40 UTC

Reply Score: 2

acobar Member since:
2005-11-15

Did you try static linking, patchelf, Statifier, or Ermine?

What you are doing looks kind of crazy to my eyes. Granted, for kernel drivers things are a little more complicated; for the other cases, the tools above can handle most of them.

You may also take a look at what Mozilla and LibreOffice do, as they distribute huge binary GUI apps, even though most of us stick with the distro-compiled versions.

Reply Score: 2

WereCatf Member since:
2006-02-15

Did you try static linking


Static linking is an ugly, stupid workaround that should not be needed in a properly designed OS. Dynamically linked libraries can be updated independently of the application, and keeping them up to date is the job of the OS. With statically linked libraries, the supplier of the application also has to constantly keep an eye on any and all statically linked libraries the app uses for security patches, then recompile the thing and distribute yet another patch. It's a lot of unnecessary extra work that eats away at time that could instead be used to actually improve the app itself.

Reply Score: 2

acobar Member since:
2005-11-15

Static linking is an ugly, stupid workaround ...

Static linking is not a stupid workaround; it is a very valid solution for cases where your application depends on old libraries (like qt3, gtk2, etc.) and where the cost of migrating is not worth all the trouble involved. It is also a valid solution for libraries that are not usually packaged by distros. Granted, for parts of the code that have to deal with network interaction, or where the library used is known to be lousy, it is worrisome to keep relying on it, and any developer worth their salt should update to current versions or patch its flaws.

Also, static linking is not "all or nothing": you can use it for the cases cited above and the preferred dynamic linking for all the others.

The main problem static linking brings to developers is usually the need to generate static libraries, which most distros don't provide. For those cases you can pick one of the other tools I mentioned.

Reply Score: 2

dpJudas Member since:
2009-12-10

Also, static linking is not "all or nothing": you can use it for the cases cited above and the preferred dynamic linking for all the others.

Yes, we could require every single developer wanting to target Linux to deal with all that complexity, those workarounds and hacks.

Or we could fix the cause of the problems in a central place once and for all.

Reply Score: 2

acobar Member since:
2005-11-15

Or we could fix the cause of the problems in a central place once and for all.

Would you please enlighten us all as to how this can be accomplished, taking into account that we have many distros and each of them has many versions?

Dynamic linking on Linux (and the BSDs) encompasses a dependency chain of libraries. Even though there are proposals to handle it better within a distro version, one solution for all distros/versions seems very unlikely.

Please, don't say "Windows fixed it", as it is more like the RHEL case, and the cost of keeping the whole ABI stable is far from cheap (or enticing from the developer's POV). Also, you must know that big apps do distribute old versions of DLLs with them (and store them outside the system directory in that case).

Also, again, take a look at the apps mentioned; it will give you a good idea of where the problem really is and which mitigation methods are available.

As I said, Mozilla, LibreOffice and many other projects that distribute binaries handle it acceptably.

Reply Score: 3

dpJudas Member since:
2009-12-10

Dynamic linking on Linux (and the BSDs) encompasses a dependency chain of libraries. Even though there are proposals to handle it better within a distro version, one solution for all distros/versions seems very unlikely.

The fault lies in splitting applications and libraries into /usr/include, /usr/bin, /usr/lib and /usr/share. The original thought behind this was that the dynamic linker would only have to search /usr/lib, the compiler /usr/include, the shell /usr/bin, and the app would find its data in /usr/share.

The problem with this strategy is that libraries come in different, incompatible versions at the API level (oops, we broke /usr/include), have different compile-time defines generating different feature sets even within the same library version (oops, we broke /usr/lib), and some applications may even rely on a specific version of another executable (bye bye /usr/bin).

So what's the solution? Invent application and framework bundles. Basically, rearrange the file structure in such a way that the dynamic linker only looks for libraries originally built for that program. While you could say this is somewhat similar to static linking, the key difference is that the OS itself can still patch them for security updates, save disk space and so on. How? By detecting that a library is identical to another one and symbolically linking them.
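That deduplication step is easy to sketch in shell. Everything here is illustrative: the function name, the bundle layout, and the choice of sha256sum as the identity check are assumptions, not part of any existing bundle format:

```shell
# Replace byte-identical shared libraries across app bundles with
# symlinks to a single copy. Hypothetical sketch.
dedupe_bundles() {
    root="$1"
    seen=$(mktemp -d)   # maps content hash -> path of first copy seen
    find "$root" -type f -name '*.so*' | while read -r lib; do
        hash=$(sha256sum "$lib" | cut -d' ' -f1)
        if [ -e "$seen/$hash" ]; then
            # Same bytes already stored once elsewhere: link to it.
            ln -sf "$(cat "$seen/$hash")" "$lib"
        else
            printf '%s' "$lib" > "$seen/$hash"
        fi
    done
    rm -rf "$seen"
}
```

A real implementation would likely use hard links or a content-addressed store, and re-run after security updates so that patching the single stored copy patches every bundle that links to it.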

Would you please enlighten us all on how this can be accomplished, taking into account that we have many distros and each of them has many versions?

If you mean, how do we convince all distros to switch to a new file structure? I don't think that will happen. What could happen is that some distros introduce app bundles, and developers are then basically left with two choices:

1) Go through the hell of building for the old directory structure.
2) Create an app bundle for the new system

If enough of the big distros adopt the new system and it works as intended, you may very quickly see developers applying political pressure on the remaining group by simply not supporting a solution for #1.

Have a look at OS X's file system structure for an example of how it might end up eventually.

Please, don't say "Windows fixed it",

You seem rather obsessed about Windows.

Also, again, take a look at the apps I mentioned; it will give you a deep clue about where the problem really is and which mitigation methods are available.

I've been compiling my own software for Linux for almost 20 years. I think I have a pretty solid idea of what the problems with Linux are. ;)

Reply Score: 2

Alfman Member since:
2011-01-28

dpJudas,

I agree. I think installing packages onto the system should be as simple as extracting a package, without the need to mix everything in the huge "FHS blender". The OS could offer facilities for automatically building and maintaining system indexes for package resources. So instead of executable files being moved into /bin, /sbin, /usr/bin, etc., they could remain associated with the package they belong to. This change alone solves many of my gripes.

Instead of:
-rwxr-xr-x 1 root root 5764 Sep 26 2014 /bin/I-dont-know-what-this-goes-to
we could get something like this:
-rwxr-xr-x 1 root root 5764 Sep 26 2014 /bin/I-dont-know-what-this-goes-to -> /pkg/oh-i-remember-1.2/I-dont-know-what-this-goes-to

This would also permit us to easily install multiple versions of packages if we need to.

Installing a package would be trivial:
cd /pkg/ && tar -xf ~/nifty.tgz && update-indexes

Removing a package would be trivial:
rm -fr /pkg/nifty && update-indexes

The "need" for historical quirks such as separating /bin, /usr/bin, /usr/share/bin, etc for different OS modes and/or different partitions and/or different methods of installation goes away completely. Using symlink indexes actually provides administrators with even better control of where things can be installed.
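The update-indexes command used above doesn't exist; as a rough sketch of what it might do, assuming the invented /pkg layout and an index directory (both illustrative), it could simply rebuild a directory of symlinks:

```shell
# Hypothetical index rebuilder for the /pkg layout sketched above.
# Arguments: package root (e.g. /pkg), index dir (e.g. /usr/local/bin).
update_indexes() {
    pkg_root="$1"; index_dir="$2"
    # Sweep stale links pointing into the package root (removed packages).
    for link in "$index_dir"/*; do
        [ -L "$link" ] || continue
        case "$(readlink "$link")" in
            "$pkg_root"/*) rm -f "$link" ;;
        esac
    done
    # Re-link every executable each installed package ships in its bin/.
    for exe in "$pkg_root"/*/bin/*; do
        [ -x "$exe" ] && ln -sf "$exe" "$index_dir/$(basename "$exe")"
    done
    return 0
}
```

Because stale links are swept on every run, the `rm -fr` style of package removal needs no extra bookkeeping.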

There was a time when DOS and Windows apps worked this way (although no longer, due to the registry and DLL hell). This particular aspect was tremendously appealing: installing, backing up and restoring these applications was absolutely trivial, in ways that the FHS simply can't match.


Obviously, modern software is plagued by the dependency issues responsible for "DLL hell". But I think novel solutions on Linux could solve these problems in more straightforward ways, without being forced to depend on a repository to mask them.

Edited 2015-10-12 21:27 UTC

Reply Score: 2

MacMan Member since:
2006-11-19

This app does not depend on old libs; it uses Qt 4.x and some other modern libs.

The problem is, every distro, and even every version of each distro, builds Qt differently and builds all the other libs differently; it is an unmitigated nightmare. Basically, the only solution is to have a VM for each and every possible distro and version out there.

That's just utterly insane.

It is beyond belief how stupid this madness is. With this kind of unprofessional hacker attitude of "I don't care about maintaining any compatibility", Linux will continue to be a joke, with the exception of, say, RHEL, CentOS or SuSE.

So, we are actually looking at no longer supporting all these amateur-hour distros, and only providing binaries for RHEL, since our main build guy is leaving and he spent most of his time doing builds for all these distros.

Reply Score: 2

juzzlin Member since:
2011-05-06

The problem is, every distro, and even every version of each distro, builds Qt differently and builds all the other libs differently; it is an unmitigated nightmare. Basically, the only solution is to have a VM for each and every possible distro and version out there.


So why don't you just bundle the libraries, or link them statically, like on Windows/Android/iOS? An RPM for RHEL and a generic installer for "all" other distros. I'm not sure why this is a big problem on Linux but OK on Windows. I mean, nobody forces you to use those differently built libraries from the package archives.

However, I do agree that packaging for Linux sucks. Canonical is trying to solve some problems with Click/Snappy packages, which is great.

Edited 2015-10-13 12:53 UTC

Reply Score: 1

acobar Member since:
2005-11-15

That is why I suggested you take a look at the options I listed.

In my case, the problem was not how Qt 4 was built (and this also happened with other toolkits) but the math libraries I was integrating. No distro had the same libraries installed with the same dependency chain (tree, actually) that I needed, and that was the main problem. I ended up using a mixture of static and dynamic linking, plus a shell wrapper with LD_LIBRARY_PATH set to point at where the bundled libraries were stored (as well as the main runtime). You will also need to modify the RPATH inside the dynamic libraries you bundle (take a look at chrpath and patchelf; they are usually not installed by default, even when you install the developer tools).

Of course, you will have to track the libraries' dependency chain and decide what can be problematic. That is a critical point (one that may need trial and error). Anyway, you may opt to bundle almost all dynamic modules (see the last paragraph).
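The wrapper acobar describes boils down to a few lines. Here it is as a function for illustration; in practice it would be the tiny launch script installed in place of the real binary, and the directory layout is an assumption:

```shell
# Run a command with an app's bundled lib/ directory first on the
# loader's search path, so bundled copies win over the distro's.
# Assumed (illustrative) layout: appdir/bin/myapp, appdir/lib/*.so
with_bundled_libs() {
    appdir="$1"; shift
    LD_LIBRARY_PATH="$appdir/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" "$@"
}
```

For the bundled libraries to resolve each other without the wrapper, something like `patchelf --set-rpath '$ORIGIN' libfoo.so` (or `chrpath -r`) rewrites the embedded RPATH, which is the modification mentioned above.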

No problems (or almost none) anymore with libs or multiple versions.

As you can probably guess, I ended up using more memory and more disk space, but that is not really a big problem nowadays, is it?

Edited 2015-10-13 13:27 UTC

Reply Score: 2

juzzlin Member since:
2011-05-06


As you can probably guess, I ended up using more memory and more disk space, but that is not really a big problem nowadays, is it?


I've had problems with distributions shipping versions of Qt that are too old, even when targeting only the Ubuntu world. So bundling Qt with the application is usually the only option anyway. In my opinion, .debs and .rpms and the whole dependency-aware packaging paradigm just don't work for application developers. For the base system itself they are great.

Reply Score: 1

Common system Core
by MadRat on Tue 13th Oct 2015 01:32 UTC
MadRat
Member since:
2006-02-17

I'm no expert on Unix or Linux, but it seems the experience with either one really needs a target standard, say around 2020. Moving towards commonality, with all binary executables, libraries, and user-dependent settings located under the installation folder of the individual package, would help admins out significantly. Let application developers create their own creative monster, however they deem suitable, under the application's install folder. Storage limitations for the installation should be pretty much nonexistent in this day and age. Data should be in the home folder under the appropriate user, a share folder, or an application-specific folder. It would make recovery so much simpler than the scattered hell we face today.

Reply Score: 2

RE: Common system Core
by Alfman on Tue 13th Oct 2015 02:52 UTC in reply to "Common system Core"
Alfman Member since:
2011-01-28

MadRat,

Some distros have quite a large head start in that direction. If you haven't already, take a look at GoboLinux...

http://gobolinux.org/index.php?page=documentation

If a major distro were to pick it up, adoption would move rather quickly. But I don't find it very likely.

Reply Score: 2

RE[2]: Common system Core
by acobar on Tue 13th Oct 2015 13:43 UTC in reply to "RE: Common system Core"
acobar Member since:
2005-11-15

Alfman,

Take a look at the comment at http://www.osnews.com/permalink?619160.

The main problem was never really where things are located, but the chain (tree, actually) of dependencies and the breakage of ABI in newer library versions (unless I got things completely upside down; a possibility, of course).

I also pointed out what I (and many others) use as a workaround. A smarter loader (with, perhaps, a better ELF format) would help improve the situation, but I don't know whether it would fix it completely, and the cost might end up not being rewarding, I think. I will take a closer look at it anyway.

Reply Score: 2

RE[3]: Common system Core
by Alfman on Tue 13th Oct 2015 14:37 UTC in reply to "RE[2]: Common system Core"
Alfman Member since:
2011-01-28

acobar,

The main problem was never really where things are located, but the chain (tree, actually) of dependencies and the breakage of ABI in newer library versions (unless I got things completely upside down; a possibility, of course).


Well, that's a completely different problem from hierarchy, so I don't think it really applies to this particular thread. I can't vouch for QT, but if they botched things up as badly as MacMan claims, I have to take his word for it.

Technically, though, there's no reason a library can't be ABI compatible everywhere, and in fact most libraries are. There's absolutely nothing special about Qt in this regard that implies frequent breakages. If they do break, I'm tempted to say it's either a bad/fragile implementation or improper usage, but it's difficult to say without looking at specifics.


I also pointed out what I (and many others) use as a workaround. A smarter loader (with, perhaps, a better ELF format) would help improve the situation, but I don't know whether it would fix it completely, and the cost might end up not being rewarding, I think. I will take a closer look at it anyway.


If it's an ABI breakage, a smarter loader won't help. If the loader can't find libraries, that's a significant problem that should get fixed. There may be problems internal to the library? I don't really know.

Edited 2015-10-13 14:54 UTC

Reply Score: 2

RE[4]: Common system Core
by MacMan on Tue 13th Oct 2015 15:22 UTC in reply to "RE[3]: Common system Core"
MacMan Member since:
2006-11-19


Well, that's a completely different problem from hierarchy, so I don't think it really applies to this particular thread. I can't vouch for QT, but if they botched things up as badly as MacMan claims, I have to take his word for it.


It's not really a Qt problem; it's more that there are so many different options for building Qt, and every distro builds it differently, with a completely different dependency tree.

The real problem is that on Windows, it is just agreed that the Win32 API and the .NET API are standard and present, so you just use them.

Same with Cocoa on OSX.

But Qt/GTK are sort of, but not really, system libraries. These libs have so many different config/build options that each distro builds them differently.

What's even worse is the apocalyptically idiotic Xerces parser, where you can define macros to build it with or without namespaces; some distros build with, some without, so it's impossible to use as a shared lib. That's why we stopped using it and now just use plain old libxml, as it just works and has a standard API.

Nobody in Linux land can seem to agree on what exactly constitutes a system library.

RHEL/CentOS/SuSE are the only ones who seem to take this seriously, and I can see how it costs them a HUGE amount of money to constantly test all this stuff to make sure nothing from this herd of cats of Linux devs breaks anything. And I can certainly see why corporations have no problem paying them to maintain compatibility.

Reply Score: 2

RE[5]: Common system Core
by acobar on Tue 13th Oct 2015 15:53 UTC in reply to "RE[4]: Common system Core"
acobar Member since:
2005-11-15

Spot on. That is what I was trying to tell people.

Even the ABI is not the kind of problem people sometimes think it is. You can have the same ABI from a Qt library, but it may depend on third-party libraries on distro A that were not used on distro B (because their use is optional), so you end up with a missing symbol.

I did not see people discussing it here, but the whole point of the LSB was really to guarantee a common ABI. To achieve it, a lot of things in the build process (linker and compiler flags, library dependencies and so on) had to be agreed upon. After a lot of discussion it never really took off enough to be effective. It also means a lot of work for maintainers. Because of both things, I guess, Debian is going to abandon it.

Edited 2015-10-13 15:54 UTC

Reply Score: 2

RE[5]: Common system Core
by Alfman on Tue 13th Oct 2015 18:12 UTC in reply to "RE[4]: Common system Core"
Alfman Member since:
2011-01-28

MacMan,

It's not really a Qt problem; it's more that there are so many different options for building Qt, and every distro builds it differently, with a completely different dependency tree.


This problem has very little to do with FHS/FHS alternatives, so it didn't seem relevant to this thread.

Anyway, if there are incompatible ways to officially build Qt, then arguably that is a Qt problem that needs to be fixed within Qt. If the distros are actually making unsupported changes to Qt, then that's a distro problem. Or maybe the developer is using features specific to newer libraries, which will not work if the distro hasn't updated. I don't really know which it is; it's impossible to say without talking specifics.

But Qt/GTK are sort of, but not really system libraries. These libs have so many different config/build options that each distro builds them differently.

RHEL/Centos/SuSE are the only ones who seem to take this seriously, and I can see how it costs them a HUGE amount of money constantly testing all this stuff to make sure nothing from this herd of cats linux devs breaks anything. And, I can certainly see why corporations have no problem paying them to maintain


Maybe you are right, but these are very abstract assertions. Can you give an example of something that works on, say, Red Hat but not Ubuntu? At least that way we can be on the same page.

Edited 2015-10-13 18:24 UTC

Reply Score: 2

Comment by Luminair
by Luminair on Tue 13th Oct 2015 08:37 UTC
Luminair
Member since:
2007-03-30

A lot of IT types are getting on Thom's back about the file system directories. While the current system works for IT, I think it's a good idea to aspire to regular users being able to understand and maintain their systems.

Surely we can admit the possibility of something better than /bin/, etc.

Reply Score: 2

RE: Comment by Luminair
by MadRat on Tue 13th Oct 2015 11:59 UTC in reply to "Comment by Luminair"
MadRat Member since:
2006-02-17

I'd settle for the mean or mode of users being able to understand it. Windows initially had some sanity; now it looks just as screwy.

Reply Score: 2

Modern Package re-Compiler
by MadRat on Wed 14th Oct 2015 05:50 UTC
MadRat
Member since:
2006-02-17

You'd think that someone could write a package-recompiling program to search out dependencies within source code and simply reconstruct packages, with new binaries compatible with whichever distribution matrix is defined by the system. Every distribution would benefit.

Reply Score: 2

RE: Modern Package re-Compiler
by Alfman on Wed 14th Oct 2015 13:47 UTC in reply to "Modern Package re-Compiler"
Alfman Member since:
2011-01-28

MadRat,

You'd think that someone could write a package-recompiling program to search out dependencies within source code and simply reconstruct packages, with new binaries compatible with whichever distribution matrix is defined by the system. Every distribution would benefit.


I researched something like this for my distro, but there are numerous complications.

1. Most software does not come with any dependency metadata from its author (because there's no standard for doing so); it shows symptoms like missing .h files when we try to compile it. If we're lucky, the dependencies are listed in a readme file. If not, we may have to search the internet and guess where a .h file may have come from.

2. It's the developer's responsibility to keep the ABI backwards compatible, or at least to rename the shared library when it breaks. If he does not, the linker will happily link incompatible ABIs, because it has no idea the developer did this.

3. Not all software uses C/.h/ELF files; many useful packages use something else even if we don't realize it (e.g. lm-sensors). That something else (e.g. Perl) will have its own dependencies. A generic solution needs to be able to handle this.

4. Some software contains multiple versions of code, so a naive "scanner" might raise false positives for dependencies that are not needed. For example, it might need PulseAudio OR ALSA, but not both.

5. Binary headers show static dependencies; my own distro checks all of them automatically. But some software uses dlopen for technical reasons. That doesn't mean there's no dependency.

6. Version incompatibilities can happen if two projects don't use the same update cycles. The latest version of one package may not work with the latest version of another, which makes dependency resolution all the more difficult.

7. It might be useful to grab the dependency metadata from an existing repo, but that's frequently a couple of years out of date (between stable distros and fast-changing software like GNU Radio). Unless we want to be pegged to the same versions as used in the repos, distro metadata is too old.
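The static side of that scanning (reading the binary headers) is the mechanical part; as a sketch, the direct dependencies recorded in an ELF binary's dynamic section can be pulled out of readelf output, though a real tool would parse the DT_NEEDED entries directly:

```shell
# Extract the DT_NEEDED entries (direct shared-library dependencies)
# from `readelf -d` output. Lines look like:
#   0x0000000000000001 (NEEDED)  Shared library: [libc.so.6]
needed_from_dynamic() {
    sed -n 's/.*(NEEDED).*\[\(.*\)\].*/\1/p'
}
# Typical use: readelf -d ./myapp | needed_from_dynamic
```

This only ever sees static dependencies; libraries pulled in via dlopen never appear here, which is exactly the gap described above.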


I've said it before: I think solving this effectively would make Linux development and distro management so much better. Because of the complexity of solving this generically without metadata, I think it comes down to all package authors adding standardized metadata to their software, telling users (or rather our automated tools) what is needed outside the package to make it work.
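No such standard exists, but a metadata file of the kind proposed here might look something like this; the file name, keys and syntax are all invented for illustration:

```
# deps.meta — hypothetical per-package dependency manifest
build-requires: libpng >= 1.6, zlib
runtime-requires: perl >= 5.20
dlopen-optional: pulseaudio | alsa
provides: nifty 1.2
```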

It would make a huge difference for anyone who has been faced with manually resolving dependencies outside of their repo tree. And not only that: it would help repo builders themselves out a great deal!

In fact, once this is in place, new tools could be built to generate the metadata automatically at compile time (thereby eliminating most human error). If a developer updates his system with new libraries, his project's metadata could reflect the new dependencies automatically, without devs needing to remember to update it.

Edited 2015-10-14 13:58 UTC

Reply Score: 2