Linked by jessesmith on Wed 5th Nov 2014 10:39 UTC
Linux

Over the past year I've been reading a lot of opinions on the new init technology, systemd. Some people think systemd is wonderful, the bee's knees. Others claim that systemd is broken by design. Some see systemd as a unifying force, a way to unite the majority of the Linux distributions. Others see systemd as a growing blob that is slowly becoming an overly large portion of the operating system. One thing that has surprised me a little is just how much people care about systemd, whether their opinion of the technology is good or bad. People in favour faithfully (and sometimes falsely) make wonderful claims about what systemd is and what it can supposedly do. Opponents claim systemd will divide the Linux community and drive many technical users to other operating systems. There is a lot of hype and surprisingly few people presenting facts.
Thread beginning with comment 599219

[q]I note you do not address any concerns regarding bug fixes, or administration transparency. The latter is a big, big deal in server land. Workarounds are a symptom of bad design.[/q]

I did, actually.
Transparency is provided by every single unit file using the same default configuration.
Once you know the defaults and behaviour for one unit, you know them all, unlike having to read a script for every single service.
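To illustrate the point: a hypothetical minimal unit file might look like this (the service name and binary path are made up, but every directive has the same documented meaning and defaults in every unit on the system):

```ini
# Hypothetical example service unit.
# Unlike an init script, there is no custom logic to read here;
# each directive's semantics and defaults are shared system-wide.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon --no-fork
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Compare that with a sysvinit script, where start/stop/status behaviour is reimplemented in shell for each service and has to be read individually.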

As for bugfixes... C is orders of magnitude easier to understand than bash, so those are made easier. Also, you get the advantage of helping everyone, instead of just you/your distro.

[q]I'm sure you know full well that is not at all straightforward (by design), and has been the source of some pretty big headaches for shim projects. The API most definitely is not stable, by LP's own words.[/q]

It's not straightforward, but it *is* all documented.
One of the advantages of having the entire core OS under one umbrella is that changes can be made faster than before, because the test matrices will show all regressions in functionality.
So it's exactly the same as before, in that you have to keep track of the software that yours interacts with; except that development is now more agile, so you have to pick up the pace too.

[q]If portability isn't important to you (or anyone else), that's your prerogative. I've no problem with that, or proprietary software and practices. There's room for everything. However, UNIX philosophy is not an 'argument to tradition', it is a set of tried and tested design ideas that have been proven to work, and have contributed strongly to the considerable progress of both Linux and the *BSDs.[/q]

Linux does not, and should not, limit itself to Unix design methodologies and techniques; it should make its own way.
It's all well and good to strap together binaries and utilities to give greater functionality for end users and admins; piping shell commands into each other is ridiculously powerful.
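As a sketch of what that pipe-based power looks like in practice (the input data here is invented for the example), small single-purpose tools can be glued together to answer a question no one of them answers alone:

```shell
# Hypothetical example: find the most common item in a stream
# by chaining four small tools, each doing one job.
printf 'red\nblue\nred\nred\nblue\ngreen\n' \
  | sort          \
  | uniq -c       \
  | sort -rn      \
  | head -n 1
# Prints the winner with its count: "3 red"
```

Each stage is independently testable and replaceable, which is exactly the flexibility being described; the argument above is that this property does not scale cleanly from ad-hoc admin tasks to the design of the whole system.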
However, when the whole system is designed like that, you end up with slower development due to increased complexity of interactions.
It's no secret that Plan 9 hasn't taken off, and that's more Unixy than Unix.
I'd say it's for the same reason Linux won out over HURD: many independent moving parts are far more flexible, but far more difficult, and maintaining those parts while increasing functionality and performance without breaking them only gets harder as time goes on.

Reply Parent Score: 3