Linked by Thom Holwerda on Sat 23rd Aug 2008 15:37 UTC
Editorial Earlier this week, we ran a story on GoboLinux, and the distribution's effort to replace the Filesystem Hierarchy Standard with a more pleasant, human-readable, and logical design. A lot of people liked the idea of modernising/replacing the FHS, but just as many people were against doing so. Valid arguments were presented both ways, but in this article, I would like to focus on a common sentiment that came forward in that discussion: normal users shouldn't see the FHS, and advanced users are smart enough to figure out how the FHS works.
Permalink for comment 327852
RE[6]: what is wrong with FHS?
by Doc Pain on Sun 24th Aug 2008 20:52 UTC in reply to "RE[5]: what is wrong with FHS?"

I meant to say that *even in BSD* it's hard to be sure, as a user, whether something came with the base or was part of an add-on. "User" here is both system admins and regular users.


A means to determine it is by looking at the path of an installed program:

% which lpq
/usr/bin/lpq

Ah, this one belongs to the OS.

% which lpstat
/usr/local/bin/lpstat

This one was installed afterwards. (You can then use the tools provided by the package management system to find out which package installed it.)
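The path check above can be wrapped in a small helper. A sketch, assuming the conventional FreeBSD split between /usr/local (ports/packages) and the base system directories; the helper name `classify` is mine:

```shell
# classify: report whether a command lives in the base system or
# under /usr/local (ports/packages). The path convention is assumed,
# as on FreeBSD; the function name is hypothetical.
classify() {
    p=$(command -v "$1") || { echo "$1: not found"; return 1; }
    case "$p" in
        /usr/local/*) echo "$p: installed afterwards (ports/packages)" ;;
        /bin/*|/sbin/*|/usr/bin/*|/usr/sbin/*) echo "$p: base system" ;;
        *) echo "$p: somewhere else" ;;
    esac
}

classify sh
```

On FreeBSD, "pkg which /usr/local/bin/lpstat" would then name the package that installed the file.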

Furthermore, you can read from the creators of the BSDs which things they provide with their OS and which they don't. For example, the default installations of FreeBSD and OpenBSD differ in what belongs to the base system.

Other questions coming into mind could be: Why is a name server part of the base system? Why is a DHCP server not part of the base system? I'm sure you can imagine similar considerations.

OK, so how does the author of a system service know how to answer the questions about what structure /var/ has?


Usually from "man hier" or the respective documentation - if any is available. If not, well, I think the author starts guessing and ends up implementing something on his own.

I'd like to believe that no violations exist [in BSD], but I just don't. Nobody is that perfect.


I won't claim there are no exceptions. Often I find applications using the share/ and lib/ directories in similar ways (e. g. to put icons there). There are recommendations, but not everyone follows them. So violations are entirely possible in the BSDs, just as they are in Linux.


All *nix systems clean /tmp on start.


No. For example, if you set clear_tmp_enable="NO" in /etc/rc.conf in FreeBSD, the content of /tmp will be kept between reboots.
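For reference, the knob lives in /etc/rc.conf; a minimal fragment as a sketch, with values as documented in FreeBSD's rc.conf(5):

```shell
# /etc/rc.conf -- FreeBSD boot-time configuration (fragment)
clear_tmp_enable="YES"   # clear /tmp at every boot (default: "NO")
clear_tmp_X="YES"        # also clear X11-related sockets and locks in /tmp
```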

I don't know about you but I very rarely reboot my computers, except to patch the kernel and upgrade hardware.


At home, my computers run just as long as I use them or as long as they've got something to do. At work... well, who reboots servers? :-)

Can we really rely on boot-time cleaning?


It's a system setting, so the maintainer of the system should know. And starting with an empty /tmp is the standard behaviour, as far as I know.

Secondly, even if we're not worried about crufty junk accumulating it seems to me that it would be useful to provide more clarity. Don't tell me "just don't ever look in /tmp" because sometimes you have to... and sometimes you're writing a program that has to work with temporary files. Isn't it better to have a clear place to put things?


Definitely. Maybe you know the term "file disposition" from IBM mainframe OS architectures / JCL: you can define how a file is handled during a job, e. g. it is deleted after the job has finished (an often-welcome solution), or it is kept for further use (sometimes useful, mostly for diagnostics).

But I still think the term "temporary" indicates that something is not very useful to the user, but maybe to other programs.
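For programs that have to work with temporary files, the usual portable idiom is to let mktemp(1) pick an unpredictable name and to clean up on exit. A sketch, assuming a POSIX shell and honouring $TMPDIR; the name "myapp" is just a placeholder:

```shell
#!/bin/sh
# Create a private scratch file; mktemp replaces the Xs and creates
# the file atomically, avoiding name collisions in the shared /tmp.
tmpfile=$(mktemp "${TMPDIR:-/tmp}/myapp.XXXXXX") || exit 1

# Make sure the "temporary" file really is temporary, however the
# script exits.
trap 'rm -f "$tmpfile"' EXIT

echo "scratch data" > "$tmpfile"
wc -c < "$tmpfile"    # the file exists only for the life of the script
```

This sidesteps the ambiguity discussed above: the file has a clear place ($TMPDIR or /tmp), a collision-free name, and a defined lifetime.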

This is a perfect example of the problem: The correct behavior is not known, so a developer makes something up. I'd like to avoid this kind of thing.


Exactly. But when we suggest a "correct behaviour", it should be documented in an understandable way. I'm not sure who would be responsible for this, maybe the creators or maintainers of an OS? But then, what about cross-platform applications? And when we're talking about Linux, who should develop a common standard there? And would the different distributors follow it?

There are two problems with that analogy: (1) Laws are enforced, standards aren't. (2) When you have a rule no one obeys you have a bad rule, not bad people.


An interesting view of the nature of rules, but an understandable one.

And each application doing its own thing is a problem because then there's no consistency. This is why I say the FHS has problems.
[...]
This once again goes back to my point: The FHS has problems, mostly that it doesn't answer questions it should and partly that it's terribly, arbitrarily inconsistent. People who want to radically overhaul it are usually misguided, but their frustration springs from very real issues.


Other arguments could be "never touch a running system" or "don't ask why it works, it just works". Sooner or later, this can lead to real problems. I see the problems simply by following your questions: Many of them cannot be answered completely, and the answers sometimes lead to the inconsistencies you mentioned. Concepts that lead to such answers are far from being a mandatory standard.

BTW, your reply looks truncated. Was it?


Maybe I exceeded the char[8000] limit, but the preview was complete. "Just a general consensus helps here." should be the last line; it's possible that I didn't press the keys hard enough. :-)
