It may be arcane knowledge to most users of UNIX-like systems today, but there is supposed to be a difference between /usr/bin and /usr/sbin; the latter is supposed to be for “system binaries”, not needed by most normal users. The Filesystem Hierarchy Standard states that sbin directories are intended to contain “utilities used for system administration (and other root-only commands)”, which is quite vague when you think about it. This has led to UNIX-like systems basically just winging it, making the distinction almost entirely arbitrary.
For a long time, there has been no strong organizing principle to /usr/sbin that would draw a hard line and create a situation where people could safely leave it out of their $PATH. We could have had a principle of, for example, “programs that don’t work unless run by root”, but no such principle was ever followed for very long (if at all). Instead programs were more or less shoved in /usr/sbin if developers thought they were relatively unlikely to be used by normal people. But ‘relatively unlikely’ is not ‘never’, and shortly after people got told to ‘run traceroute’ and got ‘command not found’ when they tried, /usr/sbin (probably) started appearing in $PATH.
↫ Chris Siebenmann
As such, Fedora 42 unifies /usr/bin and /usr/sbin, which is kind of a follow-up to the /usr merge, and serves as a further simplification and clean-up of the file system layout by removing divisions and directories that used to make sense, but no longer really do. Decisions like these have a tendency to upset a small but very vocal group of people, people who often do not even use the distribution implementing the decisions in question in the first place. My suggestion to those people would be to stick to distributions that more closely resemble classic UNIX.
Or use a real UNIX.
Anyway, these are good moves, and I’m glad most prominent Linux distributions are not married to decisions made in the ’70s, especially not when they can be undone without users really noticing anything.


This is what I “love” (i.e. hate) about Desktop Linux:
– Can’t decide which binaries are core and non-core? Eliminate the separation and put them all in /usr
– Can’t agree to put the non-root binaries in bin and the root binaries in sbin? Eliminate the separation and put everything in bin.
And while I am willing to give them the benefit of the doubt for the first one (deciding what is “core” and “non-core” can be tough), the second one is just idiotic. A tool either needs root to work at all or it doesn’t.
This is what you get when you have too many neckbeards and graybeards with opinions™ spread across multiple organizations/distros and no boss to enforce a single decision.
> A tool either needs root to work at all or it doesn’t.
That’s not always true, though. Docker requires root in its default configuration – but that can be changed so that non-root users can run docker without sudo.
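For reference, a minimal sketch of both routes, assuming Docker was installed from the usual packages (the rootless helper script is shipped separately by some distros, so the exact name may vary):

```sh
# Route 1: let a non-root user talk to the root daemon by joining the "docker" group.
# (Note: membership in the docker group is effectively root-equivalent.)
sudo usermod -aG docker "$USER"
newgrp docker                  # or log out and back in to pick up the new group
docker run --rm hello-world    # no sudo needed now

# Route 2: run a per-user rootless daemon instead (per the upstream rootless docs).
dockerd-rootless-setuptool.sh install
systemctl --user start docker
```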
I was going to say this too; I don’t think the “root” designation is particularly logical. Some admin tools like “ip” will still output useful data in read-only mode even though some functionality obviously won’t work without root.
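For example (the interface name is just an illustration):

```sh
ip -br addr show                  # works fine as an unprivileged user
ip route show                     # also read-only, no privileges needed
sudo ip link set dev eth0 down    # changing state is what needs CAP_NET_ADMIN / root
```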
The partition manager “parted” (which is in sbin on Debian) can and should be used by normal users working with disk images (i.e. virtual machine disk image files). It’s not the tool itself that determines whether root access is needed, but the file or block device that gets passed in. This is the case for other tools in sbin too. Therefore, dividing programs between “bin” and “sbin” based on “root” is not that useful IMHO.
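A rough illustration of that point (file names made up; nothing here touches anything the user doesn’t already own, so no root is needed until the last line):

```sh
truncate -s 1G vm-disk.img                              # sparse image file owned by the user
parted --script vm-disk.img mklabel gpt
parted --script vm-disk.img mkpart primary ext4 1MiB 100%
parted --script vm-disk.img print

sudo parted /dev/sda print                              # the real block device is where root comes in
```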
My own distro symlinks /bin to /sbin and that’s just as good.
A tangential point: I dislike the Unix practice of stuffing thousands of binaries into global directories. It’s bad for organization. It might have made sense at first when there just wasn’t that much software, but it set a bad precedent for the future. Software files should be kept together in local directories. Gobolinux handles this better, or (dare I say it…) even Windows’ “Program Files”.
While I don’t condone many of Microsoft’s ugly hacks (like Program Files (x86)), the basic idea that software be organized within its own directory structure is a good one. I like the idea of installation and uninstallation being contained to a directory without spraying resources across the file system into system-wide garbage heaps.
Gobolinux! I forgot about that. I gave it a try years ago but it had issues. Is it still going?
Some programs require sudo no matter what (under standard file permissions), for example dpkg. Only those go to sbin. How the Unix graybeards and Linux neckbeards managed to complicate something so simple is beyond me. This is what you get when you have people with opinions™ (with each person trying to drag things to their own point of view) and no boss to say “only binaries that require sudo no matter what under standard file permissions go to sbin, the rest goes to bin, period”.
kurkosdr,
Not for nothing, but even your example requires a non-universal judgement call. Dpkg can be used by non-root users and there’s nothing even weird about it. You can use it with a local install path, or even just to view and extract package files. You might want to do that on shared hosting where you don’t have root, for instance.
On the one hand it doesn’t really matter if it goes in bin or sbin because non-root users can call binaries in sbin anyway. But based on your own rule, I can’t make sense of placing dpkg into sbin. A dev might use it to build deb files without root/sudo and I expect some do. Even a utility like mkfs.ext4, which has “root” written all over it, lets ordinary users create file system images. Obviously I know it has applications for root users, but we cannot claim that it can’t be used by non-root users.
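For what it’s worth, both examples can be demonstrated as a plain user (package and file names below are just placeholders):

```sh
dpkg-deb --contents some-package.deb          # list what's inside a .deb, no dpkg database, no sudo
dpkg-deb --extract some-package.deb ./rootfs  # unpack it into a local directory

truncate -s 256M fs.img                       # and mkfs.ext4 on a plain file works too
mkfs.ext4 fs.img                              # (older e2fsprogs versions may ask for confirmation)
```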
I don’t really see the point in sbin at all. Your simple/obvious rules are subjective and lead to inconsistent interpretations anyway. Why does it matter at all? It just seems like a pointless subjective endeavor to determine whether a binary belongs in bin or sbin, because whether we get it right or wrong there’s not much benefit to having done it.
*nod* I’ve used `dpkg -S $(which cmdname)` or `dpkg -L pkgname` at least 20 times as much as I’ve used the side of it requiring sudo.
(-S tells you which package provides a path and -L lists the paths provided by a package. The former is useful for packages which provide multiple commands with no obvious connection to their package name, and the latter is useful for things like “What the heck did this package name its binary?” or “Where are the system config files?”)
I constantly use `dpkg -L XXXX` to list the files contained in a package. Or `dpkg -S /aaa/bbb/ccc` to find which package installed the file `/aaa/bbb/ccc`. And neither needs sudo.
It’s important to remember that the /usr directory itself is used “wrongly”. Originally it was supposed to serve the same purpose /home serves now, i.e. store users’ personal files, with binaries going to either /bin or /sbin (or /lib if we’re talking libraries).
I like sbin, and in T2/Linux we even still maintain the split between system core things in / and non-vital things in /usr. If I were to remove and join anything, it would be getting rid of /usr entirely and only using /bin, /sbin, …
I agree with this.
Ideally “/usr” would be everything not in the core OS. “/usr” would be for user-installed extras, and “/usr/local” would be for stuff locally compiled or 3rd party like it should be (basically what “/opt” is used for). 🙂
I would still get rid of “/sbin” though. 🙂 Most interactive systems are single seat, so it’s not the best distinction.
Short answer — /usr/sbin (and /usr/local/sbin) should be where to place commands that require you to “sudo”.
That would make sense. I’m not sure anything in “/usr/local” should need privilege elevation, but that’s a different conversation.
It used to be /bin for binaries, and /usr for “user directories”.
/bin (and /sbin) were local, and /usr was mounted over NFS.
Over time software started installing itself in /usr, and it was large and needed to be shared across labs or even the whole campus.
That slowly pushed users’ home directories … to /home (and /usr became “Unix System Resources”)
And now we are stuck with that decades later. There is no more NFS for most systems. Local HDDs are large enough to store all the software we need, even for tiny devices like the Raspberry Pi. And we don’t even need it during boot (there used to be a concern about /usr not being mounted), as we have a “disposable” ram disk during the boot process for this anyway.
A “good” desktop Linux could easily do away with /usr and just have /bin, /sbin (and /lib). And /local? It can go into /opt (ah, yes, I completely forgot about it; /opt is where third-party software places itself outside of the packaging system, things like “mamba” that manage their own setups).
@Thom, please enable tt and/or pre tags on the site.
Then you can’t have all the system static files in a separate partition, so mounting a remote /usr or immutable systems are out.
I always liked the separation of function, and it made sense for system management tools like fsck to be in sbin and userland binaries like the shell to be in bin. Lumping everything together just feels a step towards the “dump everything in system32” approach taken by windows, which always looked extremely messy.
bert64,
Lumping everything together is extremely messy (as is system32). But the bin/sbin split is just a poor organizational fix for that mess, because both bin and sbin feel like a system32 garbage pile even though you have two of them. I liked that Gobolinux was trying something different. My vote would be for mainstream distros to clean it up too with a deep reorganization, but I realize that changing established precedent is really hard regardless of the merits.
To further confuse newbies, you must use sudo to install files into /usr/bin. As a result, many package managers install binaries into the user’s home directory, typically under ~/.local/bin, so they are not available to other users.
Iapx432,
It’s just awfully inefficient to have software that installs itself under ~/.local or similar. Games in particular can use many gigabytes, and you don’t want every user to install their own copy. I keep wondering if there’s a more elegant way to solve this without requiring users to have root access.
Perhaps a file system can dedup identical files across user accounts. But even if that works I still don’t think it’s great that every user needs to install/update the software within their user directory.
It seems like the OS could have a new type of trust domain whereby the software itself gets its own special user account, in which the software is installed and updated using signatures validated through HTTPS. Multiple users trying to run the same software would be able to use the already-installed copy, and thanks to the HTTPS validation they would know other (non-root) users haven’t tampered with it. Users would have the ability to run these programs from their own accounts normally. For the sake of security, the software should be sandboxed from accessing/interfering with the user’s files unless specifically allowed.
I don’t think I’ve ever seen a multiuser OS that works this way. It’s an off-the-cuff idea, but it could be an interesting alternative to the problem of requiring root or having users install copies under their profiles.
In the past I’ve solved this by giving the “users” group ownership of “/usr/local” and setting the sticky bit.
There is also sudo magic which could work to limit the scope of the command. This would require setting the target group and command, I think. I’d have to try this to be 100% sure.
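For what it’s worth, a sketch of both approaches (the group name, paths, and the sudoers line are assumptions, not recommendations):

```sh
# Group-writable /usr/local: members of "users" can install without sudo.
sudo chgrp -R users /usr/local
sudo chmod -R g+w /usr/local
sudo find /usr/local -type d -exec chmod g+s {} +   # new files inherit the group
# (the sticky-bit variant mentioned above would be "chmod +t" on the directories,
#  which keeps users from deleting each other's files)

# Sudoers route: grant one specific command to a group (edit with visudo), e.g.:
# %users ALL=(root) NOPASSWD: /usr/bin/make install
```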
CoW filesystems can do this, and it’s supposed to be one of the star features. However, it doesn’t work that well, and it breaks encryption.
I’m perfectly fine with people installing software into their homedir. I’d rather they do that than install it on the system. I’m also amazed package managers don’t allow this by default. “dnf --user install tmux” should install tmux into my homedir.
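Outside the distro package managers, per-user installs are already common in language ecosystems and with the classic autotools route; a few illustrative examples (package names are arbitrary):

```sh
pip install --user httpie            # lands in ~/.local/bin
cargo install ripgrep                # lands in ~/.cargo/bin

# classic source-build route for something like tmux:
./configure --prefix="$HOME/.local" && make && make install
```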
Are you quoting the flatpak docs?
Flatland_Spider,
Yeah, I am trying to think of a way for it to work between untrusted users though. Not that the goal is to have malicious users on the system, but as a matter of principle users should be able to efficiently install the same software without trusting each other.
For such a setup to work effectively and be secure, I think the following requirements would need to be met:
1) There needs to be a shared software collection that all the users can publish to
2) Each title published into the software collection needs to be downloaded/installed/verified from an upstream source such as an RPM, DEB, Flatpak, zip, tar.xz, etc. The OS would perform the extraction and authenticate that the local installation is genuine, and this fact can be communicated to all the users of the system. Naturally the OS can’t vouch for what the software does (hence the reason to run it in a sandbox), but it should be able to precisely vouch for the URL it was downloaded from. Users who trust that URL should trust the software in the collection. (A rough sketch of this step follows the list.)
3) Users should have a nice tool to browse/install/customize all software from the shared collection into their own profile. This could mean symlinks, COW FS, bind mounts, or whatever. But the user could not change the software collection by themselves.
4) The system could periodically garbage collect unreferenced software that all users have uninstalled in their profiles.
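To make item 2 a bit more concrete, here is the rough sketch mentioned above. Every path, URL, and file name is invented for illustration, and the digest handling is just one possible approach:

```sh
COLLECTION=/srv/software-collection
URL="https://example.org/pkgs/foo-1.2.tar.xz"
EXPECTED_SHA256="...digest published by the upstream project..."

tmp=$(mktemp -d)
curl -fsSL "$URL" -o "$tmp/pkg.tar.xz"
echo "$EXPECTED_SHA256  $tmp/pkg.tar.xz" | sha256sum -c - || exit 1   # refuse anything unverified

dest="$COLLECTION/$(basename "$URL" .tar.xz)"
mkdir -p "$dest"
tar -xJf "$tmp/pkg.tar.xz" -C "$dest"
printf '%s\n' "$URL" > "$dest/.source-url"   # record provenance so other users can decide to trust it
```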
It seems like it should work in principle, but I imagine there may be gotchas if I actually tried to implement it. One issue is that a lot of software doesn’t come in a ready-to-install package format from upstream. This matters because the install needs to be automated. If the install requires manual intervention, then root could still do it, but that kind of defeats the purpose of the idea, which is to install software without root.
I see what you mean about breaking encryption within one file system. This is well outside the scope of anything I had in mind, but for the sake of discussion… I’ve used overlay file systems to overlay read-only squashfs images with tmpfs and it would probably work here too. An overlayfs with COW semantics would let you overlay an encrypted user directory on top of the unencrypted software directory. Then you’d be free to modify the software and have those changes written into your encrypted profile.
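Roughly along these lines (paths are invented; doing this unprivileged would need a user namespace or fuse-overlayfs):

```sh
# lowerdir = shared, read-only software; upperdir/workdir = per-user, inside the encrypted home
mkdir -p /home/alice/.apps/foo/{upper,work} /home/alice/apps/foo
mount -t overlay overlay \
  -o lowerdir=/srv/apps/foo,upperdir=/home/alice/.apps/foo/upper,workdir=/home/alice/.apps/foo/work \
  /home/alice/apps/foo
# Writes land in the (encrypted) upperdir under the user's home; the shared lowerdir stays read-only.
```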
The reason I don’t like this is that it really adds up. Say both my kids install the Hogwarts game, which they both play, into their profiles; that’s 85GB per install (not to mention I installed it too). Granted, I cherry-picked this game because it’s the largest one we have, but it’s not uncommon for titles to be ~5GB, which is still a lot to duplicate. I don’t consider the duplication of such large resources to be reasonable. Some of the LLMs I’ve been experimenting with are also in the ~70GB range. Even if we want to say “just buy larger disks to hold everyone’s files, duplicates included”, I would argue that Linux should offer a better solution. Of course I technically have root on my systems, so I can manually accomplish the sharing I need, but I was thinking of ways we might make shared software work better out of the box without root.
Haha. I can see the similarity in scope. Does flatpak solve the multiuser installation problem? If it does, then I did not know about that and I’d be very interested in learning more about it!
I thought that /sbin was for critical system binaries that were statically linked.
jgfenix
I think there’s some truth to that, however keeping static binaries in their own directory was not really a goal in and of itself. It was just a consequence of the very specific technical purpose these directories had: being mount points for different physical disks. Multiple disks were needed due to size constraints. System software located in sbin would need to boot the rest of the system and could not depend on libraries on a different disk that had not been mounted yet.
None of this is true anymore and we’ve long outgrown the need to boot across disks. But rather than reevaluating the directories, we’ve kept them as a convention, with the split becoming more arbitrary over the years. Some people try to retroactively justify their importance, but if not for hardware constraints mandating that Unix have multiple disk mount points, this division would probably never have come to exist.
https://lists.busybox.net/pipermail/busybox/2010-December/074114.html
Alfman,
One other concern is backwards compatibility.
I, too, would like a “clean slate”.
However, things like UNIX compatibility and the Linux Standard Base make it extremely difficult to remove them. (At best we would have them symlink to /usr/bin and so on.)
sukru,
I tried in vain to create a new hierarchy in my Linux distro. I still feel that we can do much better. I wanted unzipping to equal installation, and uninstallation to be removing the directory. It’s such a simple & elegant idea. Old-school DOS/Windows and macOS software worked this way and frankly it was awesome! I think people are longing for simpler software installation. But the big con with my attempts came down to compatibility. So many packages that I wanted to install needed to be repackaged that it became non-viable for me to maintain, and I threw up my hands. Mainstream distros that have more resources to do it aren’t as likely to be innovative here.
Aside: Nowadays I wonder if it would be possible to employ an AI agent to automate the necessary changes to modify software packages for the new scheme?
For me, the biggest deal-breaker is that doing JUST that (i.e. without also adding something akin to what classic Mac OS does under the hood to maintain the Desktop DB) will not integrate it into $PATH and `man` and file-type associations and the like.
You need some kind of filesystem-watching daemon to add and remove system integrations.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I think that’s a good idea. I don’t think it would be that hard to do. As a side benefit, a proper index could be even more efficient than $PATH, $LD_LIBRARY_PATH, etc.
Ideally the file system & kernel would be able to automatically index files with robust integrity like databases can, but we don’t have this today and it’s probably not likely to change anytime soon.
Still, there’s a lot that can be done just in userspace, and we’ve seen it in other contexts like music software that scans mp3 tags. The Linux inotify API is useful to monitor changes in the file system without having to be told to rescan the software. This could be very useful to efficiently maintain a software index. This would work for launching software, but also for other assets such as man pages, like you said.
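A minimal userspace sketch of that idea, assuming inotify-tools is installed (the watched directories and index file are arbitrary choices):

```sh
touch ~/.cache/bin-index
inotifywait -m -e create -e delete -e moved_to -e moved_from /usr/bin /usr/sbin |
while read -r dir event name; do
    case "$event" in
        CREATE|MOVED_TO)
            printf '%s\n' "$dir$name" >> ~/.cache/bin-index ;;      # new binary appeared
        DELETE|MOVED_FROM)
            grep -vxF "$dir$name" ~/.cache/bin-index > ~/.cache/bin-index.tmp
            mv ~/.cache/bin-index.tmp ~/.cache/bin-index ;;         # binary went away
    esac
done
```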
*nod* Classic Mac OS had its flaws, but it was probably the best example ever created of how one can reconcile being human-friendly with putting the filesystem front and centre as a generic concept for organizing things.
(Sure, it had some weirdness from originating on a single-tasking machine with no permanent storage and no MMU, but you can’t beat an OS where making a bootable disk is as simple as dragging and dropping the System folder, every file and folder can have a custom icon (and application developers made use of that as an aid to comprehension), file associations don’t care how you organize your programs (bugs aside), installing and removing applications without system extensions is as simple as copying and deleting them, and the Resource Fork lets applications keep their internal structure without making it too easy for novices to mess things up. Installing a System extension was just a matter of drag-and-drop onto the System folder, removing a problematic extension was doable via drag-and-drop without breaking things, and the equivalent of Windows Safe Mode was simply “hold Shift while booting”.)
Heck, when I’m using it as a retro-hobby thing, the main idea I think didn’t age well is associating files with default applications on a file-by-file basis (HFS creator codes) rather than the format-by-format basis modern OSes use… something I work around with FinderPop’s context menu for quickly re-typing files.
Ugh. And I didn’t have time to apply proofreading fixes to that last paragraph I edited in. 🙁
I should also have pointed to how much it helped that Apple created world-class HIGs, with the 1992 one for System 7 still being, in my opinion, the best example of “embody what you teach” in my entire collection of UI/UX design books and still beneficial as a broad-spectrum crash course in how to think about UI design today.
(Assuming you see it in print. The layout doesn’t have the same kick in the PDF off the Wayback Machine archive of Apple’s developer site.)
NFS mounts were also in play. Booting many systems with lots of shared parts also shaped the *nix filesystem hierarchy.
The filesystem flexibility was an accident, but it’s a nice feature.
Flatland_Spider,
I prefer the idea that file system abstractions primarily exist to serve us, the humans. The way the OS works behind the scenes should not unnecessarily leak into the abstractions that humans see.
While the OS needs to solve these problems, I see it as a failure that operating systems pollute human abstractions for technical reasons that human operators shouldn’t need to be bothered with.
So under this philosophy the OS should be reshaped to fit human needs rather than the other way around.
Even if that’s the case, there is some merit to it. Putting the basic system utilities in one place, on a small partition just like Android’s system partition, can help when there is data loss or when reinstalling the operating system. Although now we have immutable distributions …
It makes more sense in interactive multiuser systems. Most people aren’t going to need fdisk or ping, but they would need vim.
A better interpretation is “sbin” is supposed to be for stuff which the system needs when it’s booted into single user mode. It’s an early example of a rescue OS.
Most *nix systems are single user, or the only people logging in are admins anyway, so the split doesn’t make sense, which is why Fedora is getting rid of it.
Anyone who gets upset about this probably isn’t running RH stuff because RH stuff has included “sbin” in the path by default for a long while. Debian has left it out, but I add it back into $PATH. 🙂
There’s not a lot of protection on “sbin”, so anyone can run commands anyway. On the few interactive multiuser systems left, anyone clever enough can use the full path of those commands anyway.
Speaking of “real UNIX”… Merging everything in “/usr” was dumb, and the reason I’ve heard is that some “real UNIX” did it that way and software expects that. It’s probably better to remember Linux is not Unix, and “until something better comes along” is part of the Unix philosophy.
Eventually everything will be “/” only. Like C:. Why? To do otherwise would require us to think. And thinking is hard.
chriscox,
Stuffing all programs into a massive directory is exactly what DOS v1 did. This was bad, as I suspect most will agree.
Adding directories meant we could organize files into logical structures:
… whatever. Software could be organized and managed as the user saw fit (the 8-character limit was insidious, but that’s a different topic).
Linux has subdirectories, which ought to be good, but we don’t use them effectively. I don’t mind a directory for basic system commands like /sbin or c:\dos. But beyond this, the *nix practice of stuffing all binaries into huge directories is highly reminiscent of DOS v1. It boggles my mind that we’ve gone so long without proper organization.
While I criticize having /sbin /bin /usr/bin /usr/sbin /usr/local/bin, etc. because their purposes are obsolete, I think people are wrongly taking this criticism to mean binaries shouldn’t be organized at all. Quite the contrary, organization is important to me, but these confusing and arbitrary Linux directories are NOT creating a useful hierarchy. Creating more directories to act as massive software piles still resembles the DOS v1 situation, only now with more instances of it.
Think of a dentist’s office where all patient files just get stuffed into a cabinet without organization. Someone wants to fix it and buys more filing cabinets, distributing the files between them in a quasi-arbitrary fashion but failing to fundamentally organize anything inside. This doesn’t fix the problem, and it is what Linux binary directories are like today.
I concede it’s really hard to change legacy conventions without getting people into an uproar. But are we really that committed to keeping these unorganized DOS v1-esque software piles around? This has been an ugly part of *nix I’ve wanted to fix for so long, but it’s apparent that it will require buy-in from the major distros to get anything done.