Linked by jessesmith on Wed 5th Nov 2014 10:39 UTC
Over the past year I've been reading a lot of opinions on the new init technology, systemd. Some people think systemd is wonderful, the bee's knees. Others claim that systemd is broken by design. Some see systemd as a unifying force, a way to unite the majority of the Linux distributions. Others see systemd as a growing blob that is slowly becoming an overly large portion of the operating system. One thing that has surprised me a little is just how much people care about systemd, whether their opinion of the technology is good or bad. People in favour faithfully (and sometimes falsely) make wonderful claims about what systemd is and what it can supposedly do. Opponents claim systemd will divide the Linux community and drive many technical users to other operating systems. There is a lot of hype and surprisingly few people presenting facts.
Thread beginning with comment 599029
Complexity
by tbullock on Wed 5th Nov 2014 18:53 UTC
tbullock
Member since:
2012-01-30

I've been running OpenBSD using cwm (for when I need graphical stuff) and just a serial console or ssh to the shell otherwise. My operating systems don't restart more than once or twice a year if I can help it.

Obviously systemd won't ever become a part of OpenBSD (or any other BSD), so I've largely been ignoring the yak-yak around the program.

On the whole, the systemd software appears complex and full of magic; moreover, it doesn't seem to be in keeping with the traditional philosophy of keeping things small, simple and independent of each other.

I just took a minute to download the latest systemd tarball and look at the source. Some things jumped out at me:

- 37.5 MB of uncompressed stuff
- 12.8 MB of C source (this is huge!?)
- Dependencies: dbus, udev, cap, attr, selinux, pam, libaudit, others?
- Dependencies of dependencies: a rabbit hole, but there are some in here, like X11.
- Presumably these are all statically linked into the binary so that emergency booting is possible (like if I cannot mount /usr or /var or something else important).

All that to turn some software on and off.

My opinion is that this is way too complex for an init system. For me to use something like this, I'd want to see fewer than, say, 6 (arbitrary!) C source files, each with less than 2k lines of code, and probably a dependency on libevent and imsg for a small privsep state machine. Then you'd at least know what you're getting into.
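
For illustration, a rough sketch of such a small privsep state machine might look like this, assuming OpenBSD's imsg(3) API and the base libevent. The message types and the "start ntpd" request below are made up, and this is not code from any real init system.

/*
 * Hypothetical sketch, not real init code: a privileged parent and an
 * unprivileged child joined by a socketpair, exchanging imsg(3) messages,
 * with the parent driven by a libevent loop.
 * On OpenBSD this should build with something like: cc privsep.c -levent -lutil
 */
#include <sys/types.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/uio.h>

#include <err.h>
#include <event.h>
#include <imsg.h>
#include <stdio.h>
#include <unistd.h>

enum imsg_type { IMSG_START_DAEMON, IMSG_DAEMON_STARTED };  /* invented types */

static struct imsgbuf ibuf;             /* parent's end of the pipe */

/* parent: called by libevent when the child has something to say */
static void
parent_dispatch(int fd, short event, void *arg)
{
    struct imsg imsg;
    ssize_t n;

    if (imsg_read(&ibuf) <= 0)
        errx(1, "child went away");
    while ((n = imsg_get(&ibuf, &imsg)) > 0) {
        if (imsg.hdr.type == IMSG_DAEMON_STARTED)
            printf("parent: child says \"%s\" is running\n",
                (char *)imsg.data);
        imsg_free(&imsg);
    }
    if (n == -1)
        err(1, "imsg_get");
}

/* child: unprivileged half; a real one would fork/exec the daemons */
static void
child_main(int fd)
{
    struct imsgbuf cbuf;
    struct imsg imsg;

    imsg_init(&cbuf, fd);
    while (imsg_read(&cbuf) > 0) {
        while (imsg_get(&cbuf, &imsg) > 0) {
            if (imsg.hdr.type == IMSG_START_DAEMON) {
                /* pretend we started it and report back */
                imsg_compose(&cbuf, IMSG_DAEMON_STARTED, 0, 0, -1,
                    imsg.data, imsg.hdr.len - IMSG_HEADER_SIZE);
                imsg_flush(&cbuf);
            }
            imsg_free(&imsg);
        }
    }
    _exit(0);
}

int
main(void)
{
    struct event ev;
    int sp[2];

    if (socketpair(AF_UNIX, SOCK_STREAM, PF_UNSPEC, sp) == -1)
        err(1, "socketpair");

    switch (fork()) {
    case -1:
        err(1, "fork");
    case 0:                     /* child */
        close(sp[0]);
        child_main(sp[1]);      /* never returns */
    default:                    /* parent */
        close(sp[1]);
    }

    event_init();
    imsg_init(&ibuf, sp[0]);
    event_set(&ev, sp[0], EV_READ | EV_PERSIST, parent_dispatch, NULL);
    event_add(&ev, NULL);

    /* made-up request: ask the child to start one daemon */
    imsg_compose(&ibuf, IMSG_START_DAEMON, 0, 0, -1, "ntpd", 5);
    imsg_flush(&ibuf);

    event_dispatch();
    return 0;
}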

Even then, the small collection of sh (not bash) scripts that starts my handful of daemons is just immensely preferable for me.

Reply Score: 3

RE: Complexity
by Sully on Wed 5th Nov 2014 20:04 in reply to "Complexity"
Sully Member since:
2014-11-05

"My opinion is that this is way too complex for an init system. For me to use something like this, I'd want to see fewer than, say, 6 (arbitrary!) C source files, each with less than 2k lines of code, and probably a dependency on libevent and imsg for a small privsep state machine. Then you'd at least know what you're getting into.

Even then, the small collection of sh (not bash) scripts that starts my handful of daemons is just immensely preferable for me."

It's often the BSD people that get it.

Edited 2014-11-05 20:04 UTC

Reply Parent Score: 2

RE: Complexity
by CapEnt on Wed 5th Nov 2014 20:47 in reply to "Complexity"
CapEnt Member since:
2005-12-18

That's the problem: the "traditional philosophy" stuff is being dropped on Linux.

Linux has been here, trying to gain desktop market share, for the last 20 years, and it has never gone above 1%. Some of the people working on distributions and desktop environments are growing increasingly tired and frustrated with this. They want to try new, bold strategies.

It is clear that it is nearly impossible to build a desktop operating system that "just works" using the traditional UNIX development philosophy. On the desktop, strong coupling of the kernel, the underlying components and the desktop environment is crucial.

Having a desktop made of disjointed pieces patched together by scripts and lots of manual configuration can be fun for a geek, but for average users and corporate users it is simply not an option.

I remember 15 years ago when my desktop was a heavily customized Window Maker setup, a hand-compiled kernel tailored exactly to my PC, a customized init sequence, and a really buggy ALSA manually configured for my 4.0 audio setup, and I had to burn DVDs using command-line applications. Why did I use such a thing? Because back then I found it fun, and the desktop environments (KDE and GNOME) were also far more primitive, so a desktop environment did not have such importance.

But that's the thing: back then you could not even eject a CD from your drive without the command line. Your system could not detect that you had plugged headphones into your laptop. It could not properly regulate power levels. It could not set the screen brightness. Getting a webcam to work was an epic journey into the guts of your system. In some distros, even getting two applications to use the sound card at the same time could require manual intervention (do you remember artsd? esd? dmix? The OSS vs. ALSA battle? The buggy ALSA OSS emulation layer?). Something as trivial as replacing a hard drive could become an unbelievable mess.

Now the Linux kernel and the entire userland are far more evolved. We are at a point where we have the manpower, the technical foundations and the corporate backing to make a desktop that "just works".

So the philosophical question here is: do we want desktop Linux to finally become popular, or do we want it to be an eternal niche project?

Reply Parent Score: 5

RE: Complexity
by Bill Shooter of Bul on Wed 5th Nov 2014 22:19 in reply to "Complexity"
Bill Shooter of Bul Member since:
2006-07-14

systemd is more of a meta-package that contains an init system as well as some optional services developed by the same people in the same repository. It's kind of like how KDE is a window manager, a desktop environment and assorted utilities and applications: you can't look at just the size of all of KDE and make a gross value judgment about it. If you approach it with an open mind, at least you'll be able to appreciate the motivation behind it.

The various parts communicate via D-Bus, and systemd uses cgroups to keep track of everything. The operating philosophy is to improve system management on Linux, so it has lots of optional tools for doing that, including things like hostnamed, logind, journald, etc.
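
The cgroup side of that is easy to see for yourself: on a systemd machine every process is placed in a control group named after the unit that started it, so reading /proc/<pid>/cgroup tells you which service, scope or slice a process belongs to. Here is a tiny stand-alone illustration in plain C (nothing to do with systemd's own code):

/*
 * Print the cgroup membership of a process. On a systemd system the
 * output typically includes a line like
 * "1:name=systemd:/system.slice/sshd.service".
 */
#include <stdio.h>

int
main(int argc, char *argv[])
{
    char path[64], line[512];
    FILE *f;

    /* default to our own process; pass another pid as argv[1] */
    snprintf(path, sizeof(path), "/proc/%s/cgroup",
        argc > 1 ? argv[1] : "self");

    if ((f = fopen(path, "r")) == NULL) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), f) != NULL)
        fputs(line, stdout);
    fclose(f);
    return 0;
}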


Take a look, it's pretty cool!

http://www.freedesktop.org/wiki/Software/systemd/

Reply Parent Score: 3

RE[2]: Complexity
by tbullock on Thu 6th Nov 2014 08:14 in reply to "RE: Complexity"
tbullock Member since:
2012-01-30

"Take a look, it's pretty cool!"

I have looked; the code idioms are preposterous. I would fire any of my employees who wrote software like that.

Reply Parent Score: 1

RE[2]: Complexity
by hobgoblin on Thu 6th Nov 2014 18:41 in reply to "RE: Complexity"
hobgoblin Member since:
2005-07-06

"some optional services"

For how long?

Reply Parent Score: 2

RE: Complexity
by gilboa on Wed 5th Nov 2014 22:24 in reply to "Complexity"
gilboa Member since:
2005-07-06

If you think systemd is complex and large (and mind you, systemd is Linux base-system infrastructure and *not* merely an init system), you should take a look at the Linux kernel.

- Gilboa

Reply Parent Score: 3

RE[2]: Complexity
by tbullock on Wed 5th Nov 2014 23:24 in reply to "RE: Complexity"
tbullock Member since:
2012-01-30

Yes, the Linux kernel is very complex. I've dilly-dallied with it in the past.

Edited 2014-11-05 23:26 UTC

Reply Parent Score: 1

RE: Complexity
by dbolgheroni on Thu 6th Nov 2014 00:17 in reply to "Complexity"
dbolgheroni Member since:
2007-01-18

And considering that sys.tar.gz from OpenBSD's FTP is only 20 MB, you can see how insane they got.

As almost all software gets bloated, I'm always looking for well-designed, less bloated alternatives. It's not uncommon today to see software with orders-of-magnitude differences in size but basically the same functionality.

Again, insane.

Reply Parent Score: 2

RE: Complexity
by grat on Thu 6th Nov 2014 05:32 in reply to "Complexity"
grat Member since:
2006-02-02

"Presumably these are all statically linked into the binary so that emergency booting is possible (like if I cannot mount /usr or /var or something else important)."

Under Red Hat Enterprise Linux 7, the combination of systemd and Fedora's "improved file layout" (i.e., moving /bin into /usr/bin) means that a system with /usr on a separate partition cannot be upgraded to RHEL 7 and may or may not work at all, since systemd relies on /usr being available when it starts.
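
For what it's worth, the split-/usr situation itself is easy to detect: if /usr sits on a different filesystem than /, it is a separate mount that has to be brought up (for instance by the initramfs) before anything living under /usr can run. A trivial check follows, purely for illustration, not anything RHEL or systemd actually ships:

/*
 * Compare the device IDs of / and /usr; if they differ, /usr is a
 * separate mount and must be made available before early userspace
 * code under /usr can run.
 */
#include <stdio.h>
#include <sys/stat.h>

int
main(void)
{
    struct stat root, usr;

    if (stat("/", &root) == -1 || stat("/usr", &usr) == -1) {
        perror("stat");
        return 1;
    }
    if (root.st_dev == usr.st_dev)
        printf("/usr lives on the root filesystem\n");
    else
        printf("/usr is a separate mount (split-/usr setup)\n");
    return 0;
}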

It's a minor issue in the grand scheme of things, but it means my group can't upgrade any of our servers in place to RHEL 7.

Does make you wonder about the size of the initramfs, though.

Reply Parent Score: 2

RE[2]: Complexity
by hobgoblin on Thu 6th Nov 2014 18:47 in reply to "RE: Complexity"
hobgoblin Member since:
2005-07-06

There are ways around the /usr thing, but they depend on using an initramfs that does the initial boot and mounts /usr.

I think their plan is to use this for cloud VMs, so that a shared /usr can sit on a SAN and be mounted by any number of minimal VMs as they are spun up by the load balancer.

Frankly, everything coming out of systemd and GNOME these days seems oriented toward two things:

1. cloud computing.
2. multi-seat government/military installations.

For either of these, the feature set of systemd is pure gravy, but for most people outside of that it is straight overkill.

Reply Parent Score: 4