Linked by jessesmith on Wed 5th Nov 2014 10:39 UTC
Over the past year I've been reading a lot of opinions on the new init technology, systemd. Some people think systemd is wonderful, the bee's knees. Others claim that systemd is broken by design. Some see systemd as a unifying force, a way to unite the majority of the Linux distributions. Others see systemd as a growing blob that is slowly becoming an overly large portion of the operating system. One thing that has surprised me a little is just how much people care about systemd, whether their opinion of the technology is good or bad. People in favour faithfully (and sometimes falsely) make wonderful claims about what systemd is and what it can supposedly do. Opponents claim systemd will divide the Linux community and drive many technical users to other operating systems. There is a lot of hype and surprisingly few people presenting facts.
Permalink for comment 599102

It might be "...rigorously tested", but systemd has bugs, and the attitude towards those bugs from the developers (from Poettering down) is cavalier. The bug linked here is one example of a fundamental task that does not work as it should. There are others. That bug is a year old.

All software has bugs. This one has a workaround posted on that very page.

Also, you can use mount units instead of fstab, you know?
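As a sketch of what that looks like (the device path and mount point here are hypothetical examples), an fstab line such as `/dev/sdb1 /data ext4 defaults 0 2` can instead be expressed as a mount unit; systemd derives the unit name from the mount path, so this would live at `/etc/systemd/system/data.mount`:

```ini
# data.mount — equivalent of a single fstab entry for /data
# (device and mount point are illustrative, not from the thread)
[Unit]
Description=Data volume

[Mount]
What=/dev/sdb1
Where=/data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
```

In practice most people keep using /etc/fstab anyway — systemd generates equivalent mount units from it at boot — so the two approaches coexist.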

There's a very good argument for diversity in init systems, not least the fact that administrators know how the existing ones work, can troubleshoot them, and can customise them without knowing a black-box language like C. What, specifically, are OpenRC and SysV init not capable of? Choice is a cornerstone of the Linux ecosystem.
Linux is not about choice.

Systemd is very well documented, with all of the defaults explained at length in the man pages.
Those defaults are much more predictable than a pile of shell code that differs from distro to distro and version to version.
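To illustrate the declarative style being described (the service name and binary path are hypothetical), a complete unit can be as short as this — everything not stated falls back to the documented defaults (Type=simple, Restart=no, and so on):

```ini
# example.service — a minimal unit; unstated settings use man-page defaults
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon

[Install]
WantedBy=multi-user.target
```

Compare that to a typical SysV init script of a hundred-plus lines of shell handling start/stop/status/restart by hand.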

The primary advantage of systemd, apart from the increase in predictability, reliability and ubiquity, is systemd the project, as opposed to systemd the daemon.
The programs that fall under the project umbrella are all continuously tested against each other, giving a solid base platform that can be relied upon, instead of each distro needing to assemble different binaries for different purposes.

Upstream developer laziness is not the fault of users; and reinventing the wheel is an ironic argument to raise in the systemd discussion, since that is one of the prime reasons people dislike it. There is little that systemd does that is not already a feature of existing systems. It was not the first to use cgroups, parallelisation, or socket activation. All of these are already available.

It is monolithic (even Poettering's argument against this is essentially semantic), tightly coupled, enforces dependencies for no better reason than to grow itself, and will be extraordinarily difficult to replace if it continues to supersede the GNU core utilities.

You should find this worrying if you're a supporter of Linux; it is walking down a road one cannot easily come back from.

How is it lazy for developers to not want to have to support code that is not actually related to the point of their project?
Why should they have to write init scripts for their software, when all they want is for it to be reliably started and stopped?
Why should they have to maintain their own helper libraries for functionality that exists in most modern Linux systems out of the box?
Time spent on those helper libraries could be better spent improving their actual projects.

Systemd may not have been the first to have those features, but it was the first to make them simple to take advantage of from everywhere.

The kernel is monolithic; what's your point?
The APIs of all of their binaries are well documented. Want to use a different getty? Go ahead. You'll just have to take care of making sure it works with the other binaries in the userland... which is what you already had to do anyway.

Projects are brought under the umbrella so as to remove uncertainty in OS design and state. If the whole of the core OS is in one tree, like with the BSDs, you can more easily ensure it all plays nicely together, and releases are in a state where everything works with everything else.

The only argument I've heard that isn't an appeal to tradition ("It's not following the Unix philosophy"... GNU's Not Unix) is that the current maintainers may lead the project in a direction that is bad for others.
In that case: it's all non-CLA'd and GPL'd. Fork the lot from the last good state.
