This week one of the more interesting WSL mentions is proof-of-concept work on using systemd within Windows Subsystem for Linux. Well-known Ubuntu developers Didier Roche and Jean-Baptiste Lallement of Canonical’s desktop team mentioned that among their recent WSL work was a “PoC of systemd on WSL at startup of an instance.”
I’m sure nobody will be unhappy with systemd making its way to WSL.
There was, and I may still have one in the basement, a “Pentium Overdrive” chip that would work just fine on those old boards.
systemd remains an absolute abortion. I have one machine where, for the life of me, I cannot figure out what happens when it boots up after a power failure. Networking? ssh? Controlled by systemd? Something else? Try to figure out how systemd’s arcane, cryptic commands work – not easy! What used to be simple before systemd is now not simple. Startup scripts that worked for 20 years no longer work. Simple shell scripts that guarded against buffer overflows have been replaced with C programs with no buffer overflow protection. Whahhh???
For the past decade the majority of the Linux world has moved to systemd and is content. So I suppose the problem is with you, Steve, not with systemd.
systemd does have issues, and WONTFIX doesn’t solve them. But perhaps WONTLISTEN is the root of all its problems.
Well… if most of what they heard was things like the comment from Steve, sure they WONTLISTEN 🙂 For my part, I haven’t had too many problems with systemd. It’s not perfect, clearly, and it’s not the solution for every kind of system setup, but it’s still progress over the 40-year-old sysvinit.
I’ve been watching a few Linux videos. The balance of those has been towards pushing for more usability and user friendliness and sorting out long-standing architectural issues.
This video on systemd explains a lot. It makes more sense than the nonsense I caught from the religious wars, which most people will have heard of if they remember them at all. One point made deep into the presentation is that systemd is about more than system initialisation. The presenter goes on to make the point that you need a holistic design view and cannot necessarily do everything in one project.
https://www.youtube.com/watch?v=o_AIw9bGogo
I also watched some videos by a Microsoft developer who explained some points about portability that I’ve mentioned before and that not many developers seem to get.
I’ve been long enough on the internet that I suspect a lot of these comments are just a really silly game of telephone.
Someone read somewhere sometime ago that systemd is this awful thing, and here we are 10+ years later with the same narrative almost verbatim. Just like how people are still using the same memes from Windows 95 about Windows 11….
SystemD being overengineered, fragile and buggy is a fact. This article is an example of it. Since when exactly does an init system need porting?
I’ve been burned with SystemD several times, and it is several times too many for an init system. Examples:
– System not booting after an upgrade due to systemd errors. Fixed with a reinstall.
– No network after an upgrade. Fixed after an hour of googling on a phone.
– Installed a server daemon shipped without SystemD units – half a day spent learning how to set it up just to have it running at startup.
– Only one LXC container with SystemD can run at a time. I added an extra container, and another unrelated, more important container stopped booting.
– I tried using cgroups to set application limits, only to find this mechanism is now under the sole control of SystemD, who knows what for.
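For reference, what that third case boils down to is a unit file of only a handful of lines – the hard part is finding that out. Here is a sketch (the daemon name and paths are made up):

```ini
# /etc/systemd/system/mydaemon.service  (hypothetical name and paths)
[Unit]
Description=Example server daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/mydaemon --config /etc/mydaemon.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After that it’s `systemctl daemon-reload` and `systemctl enable --now mydaemon`. Simple once you know it, which is rather the point: none of it is obvious up front.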
The boot process is trivial and in most cases unimportant – I reboot my laptop once a month, usually when I forget to plug it in. The only thing I want from the init system is for it not to fail, and to be easy to fix when it does. By this metric, SystemD is the worst tool for the job.
ndrw,
I’ve been burned by those things as well. There were many awesome tools, including LXC, employing cgroups before systemd. Like it or not, though, Linux kernel control groups moved to version 2 and put systemd in the privileged position of managing the root cgroups exclusively. Containers can no longer directly provision their own cgroups. You can still have controllers, but now they have to go through systemd’s privileged APIs and can’t call the kernel directly.
http://systemd.io/CGROUP_DELEGATION/
https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/
https://www.linuxfoundation.org/blog/all-about-the-linux-kernel-cgroups-redesign/
In theory you still don’t have to use systemd or its tightly coupled API. But in practice, if you don’t, then your software is incompatible with distros running systemd.
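To make that concrete: under cgroup v2 the sanctioned route is to ask systemd for a delegated subtree rather than creating cgroups under /sys/fs/cgroup yourself. A sketch (`mydaemon` is a placeholder):

```shell
# Launch a transient scope whose cgroup subtree is delegated to it,
# with the resource limit set via systemd instead of the kernel directly:
systemd-run --scope -p Delegate=yes -p MemoryMax=512M -- mydaemon
```

Anything that still tries to mkdir its own cgroups under the root hierarchy now risks fighting systemd over ownership.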
There was a power outage here due to high winds just the other day. Every one of my Linux boxes booted right back up like it never happened. Btw, if you’re still using scripts from 20 years ago, you’re computering wrong.
I haven’t had all the issues you have with systemd, but I definitely don’t think it’s production ready yet and likely never really will be as long as Poettering and Co. are in charge of it with their DONTCARE/WONTFIX attitude.
I’ve avoided it by using distros and OSes that don’t use it or replace it with something more mature and stable. Void, Slackware, Alpine, PostMarketOS, MX Linux, AntiX, OpenBSD, PCLinuxOS, and KISS Linux are all great systemd-free distros/OSes; you should be able to find something among them to handle whatever tasks you’re trying to accomplish, from servers and containers to workstations, Linux gaming, embedded, portable, etc.
With all of that said, some distros out there have managed to tame systemd and make it mostly stable enough for daily use; in particular Pop!_OS has been pretty solid on one of my laptops that refuses to work 100% with anything not Ubuntu based.
I’m reading Steve’s comments as reactive FUD. I have no idea how good or bad systemd is as a piece of code. (Citations please!) I would say don’t blame systemd for vendors’ implementations or other components causing issues. Linux Mint is based on Ubuntu and runs pretty rock solid, to the point where out of the box it’s an extremely dull experience. In watching YouTube chatter I haven’t come across a single person screaming about systemd on any system. I’m curious why you picked the language “tame it” and “mostly stable”, like systemd is an out-of-control monster. Again, I’ll just ask for citations and explanations for any documented problems, including vendor error… Dull, I know, but this is material I can work with. I can’t work with opaque rumour and anecdote.
I’m not sure how you got “out of control monster” from my “mostly stable” description; to me “mostly stable” means “occasionally unruly”, and I apologize if that wasn’t clear enough. That’s been my experience: systemd is occasionally buggy and does what it wants instead of what I want or expect it to. I’ve had it hang at boot, hang at shutdown, scramble error logs, prevent X from starting with no error message or log output, and a few other nasty things. But since you don’t want to hear my anecdotes, here are some links to other sources:
https://ewontfix.com/14/
https://www.agwa.name/blog/post/how_to_crash_systemd_in_one_tweet
Linus Torvalds lamenting that because the systemd developers refuse to fix show-stopping bugs, the kernel devs are forced to patch around it in the kernel, which should never, ever be the case (userland should never dictate what happens in the kernel):
https://lkml.org/lkml/2014/4/2/420
This attitude of “we don’t care if it’s broken, we won’t fix it, deal with it, fuck the users” from the systemd developers is my biggest issue with the project as a whole. How can anyone trust core system software that the devs refuse to fix when it breaks? Why would I want to use it on a mission critical server knowing that the devs don’t care if one of their bugs takes down my whole system? Distro maintainers have been forced to work around its bugs and massage it into allowing basic things like not hanging the system randomly or preventing simple tasks like creating and deleting users, things that were never an issue before systemd took over. It’s a great idea, a great concept, trying to bring the various distros under one umbrella of system management, but the implementation is what sucks. If you’re going to do something that grand and all-consuming, at least put in the effort to get it right.
And in fairness, here is Lennart’s reasoning behind systemd:
http://0pointer.de/blog/projects/why.html
@Morgan
Thanks for the links. They make some good points, but I think their shoutiness and conflation of issues get in the way. More on this in my closing paragraph.
The point about a clean architectural design is good, as are code quality, interface design, and scope. Personally I’d rather they began with that and explained themselves better. I also think demanding the use of Rust, when the Linux kernel itself has only just started using it, is a bit much. Other than this, I agree a more industrial approach is better than “write once and throw away” coding practices.
Those posts are now a few years old so it would be interesting to see where things are today.
Having watched a good video on it the other week, I have a slight clue. The consensus was that something needed to be done to modernise the guts of Linux, and systemd solved many but not all of those problems. The video is only a couple of years old, so newer than those posts, and the issues of communication and progress were brought up. The presenter said a lot of that boils down to technical people not always being the best communicators, and that making some allowances for each other and appreciating different strengths and weaknesses would help things go smoother.
@HollyB:
I agree with this completely, and I think a lot of the friction Lennart and co. faced from the beginning was due to lack of proper communication combined with their “move fast, break things” approach to development. That approach might work for a web based startup or some small side project, but something as important and deep-rooted as the system supervisor in the OS that is the foundation of everything we do with computers requires a more mature and conservative approach. Imagine if Linus Torvalds said “hey, let’s completely change some low-level kernel stuff without testing first, and ignore bug reports from major companies and regular users who depend on it for their daily business!” There would be a revolt, and at least on the business side perhaps a huge migration to FreeBSD or (shudder) Windows.
Morgan,
That’s been my issue with it as well. I’ve disliked sysvinit for a long time and had no qualms about replacing it with something better. Of course systemd is always defended as being better than sysv, which it is, but frankly that’s a low bar. I think its design falls short with binary logs and ugly inter-dependencies.
The problem with Lennart in particular is that he doesn’t work well with others, even to the point of his projects suffering for it. His my-way-or-the-highway approach, while sitting in a position of influence at Red Hat, has created tons of friction, problems, and division. Sysvinit was dated and on the way out one way or another, but the way Lennart does things is a bit jarring for the Linux community. It is what it is, though.
@Morgan @Alfman
I have no dog in this fight. I only went 100% Linux last week, and as a general point, if it’s flaky and the “community” is any bother I’m simply not going to engage with it. If I were being paid to worry about this (which I’m not), my outline view would be:
What is the architectural map?
What are the principles/reasons behind the design intent?
What is the external structure and external interface?
What is the internal structure and interface?
What are the key specific issues for design/coding/implementation?
That’s what I want to hear about. Really, what I want is a clear description of the problem and the proposed solution and why.
My personal approach is that I prefer industrial-quality principles but am happy to be revolutionary when I have to be. The reason is you have to balance quality with change. Sometimes the quick and dirty approach is required, otherwise nothing would get done and we’d still be sitting here today with nothing but vapourware. (And that’s what the presentation actually said.) I get that, but I think embracing full conservatism goes too far in the other direction. Another thing you get in Linux is people being mini-Hitlers, or the opposite problem: when everyone is in charge, nobody is in charge. It’s not unique to Linux. I’ve seen this many times over the years in organisations of all sizes, across the state, private, and third sectors.
I have two problems to sort out this week. One is why my OS crashes when I boot with a card in the Expresscard slot. The other is a wholly unrelated problem of navigating a supplier because of problems caused elsewhere in the system by pen pushers and politicians poking their nose in. That last one is causing me stress and losing me time and money for no reason.
HollyB,
Truthfully you’ll probably be fine given your general user use case. But there are many of us who take issue with systemd’s tight coupling. Things that used to be completely independent are now under systemd’s control, like binary log files & cgroups. Consequently the transition to systemd forced some of us to change our preferred tooling, for example:
https://unix.stackexchange.com/questions/170998/how-to-create-user-cgroups-with-systemd
I’m not a fan of monolithic systems, but fine, whatever; most of the fallout is behind us now and it’s usually not a problem any more. However, there are still consequences of this design, like policy decisions that logically should be up to system administrators now being in Poettering’s hands. He’s notorious for not giving a crap about what he breaks, and he’s not willing to work with the community to solve problems caused by his policies. Here’s an example: a simple request to add retries for mount points, because networking is flaky. The request was denied and still causes problems for people today.
https://github.com/systemd/systemd/issues/4468
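For what it’s worth, the workaround most people land on is to stop the mount from blocking boot at all and let an automount with a timeout absorb the flakiness – something like this in /etc/fstab (server and paths are placeholders):

```
# Mount on first access instead of at boot; don't fail the boot if the
# NFS server isn't up yet, and give up on a hung mount after 30 seconds:
server:/export  /mnt/data  nfs  _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=30  0  0
```

It papers over the symptom, but it isn’t the retry behaviour that issue was actually asking for.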
I think this recurring theme you’ll hear about Poettering is a fair assessment: He wants the whole world to rotate around his own needs and doesn’t care about others.
Wow, I haven’t seen an expresscard in ages! I hope things work out, keep us posted on your progress 🙂
@Alfman
Personally I’d suggest looking into policy when it comes to the project management. This needs to be hashed out with input from people with project management and related expertise then documented. The reason is this makes decisions accountable and subject to rational examination. Then there’s the issues of funding. Attaching funding to compliance can encourage co-operation.
Monolithic systems can be overwhelming I agree. That’s where being smart with architectural abstractions can help. It means you can more readily deal with things at the big picture level, component relationship level, and the more bottom up nitty gritty. With that out of the way it makes fiefdoms more transparent and less able to use control of one part or obstruction behind other parts to cement unaccountable power.
One problem caused by unaccountable power within a larger framework is people can develop egotism and the idea they are right about everything all the time, and any compromise should always be in their direction.
I’ve taken a top-to-bottom view in discussing this, but if you look around organisations you will see similar patterns. The key solution is policy. To sort this out you need to know who to contact at the top, or an influential group. Lastly, it also helps to have the general public on your side. Most of them won’t have the expertise or energy or interest, but it can help set a tone.
Poettering is German. Germans can be snobby about following rules, and this may be causing a culture clash. I’m not going to personalise this, but he may not have figured out yet that people are not nice, logical, and predictable systems, and that the shape and tone of the high-level politics does matter.
I don’t have a Thunderbolt 2/3 port (or USB 3 port) on my laptops, so the Expresscard slot provides useful additional functionality. It’s mostly for rare use of an eGPU, but a USB 3 card is a nice bonus if I need one. The Expresscard slot works fine under Windows but gives me a hard crash during boot on Linux. I haven’t yet figured out where the logs are and how to interpret them, and it’s a low priority. I used a (cheap) USB 3 card to test the slot. From what I can glean it may be the card chipset, as another one is reported to work fine.
On Windows I could use the driver reporting tool, which would send a message to Microsoft’s driver team. They had a reputation for being fast. A new driver for a tablet I had issues with popped up on Windows Update the next week and the problem was solved. On Linux? For the Expresscard issue I have to go digging around a mountain of stuff, figure out whether it’s the vendor, someone earlier in the chain, or the driver maintainer (whoever they are), and rely on their grace and favour. It would take me a week just to get that far!
It may be a PCI Express power management issue. I’ll need to check the BIOS and settings. I think the USB 3 card has an NEC chipset, which is problematic, but I will need to look this up. I’ve got some other Expresscards and the eGPU I can try, so it looks potentially resolvable.
There’s a project on youtube to convert a USB 3 socket to a USB C socket which is interesting but a problem for another day!
@Alfman
I’ve taken a look at your links. I don’t know enough about the technicals or how components work together in practice to have a definitive opinion but there’s a valid case to say people either aren’t using what is there correctly, or the problem lies with the third party component. That’s going to happen as things roll over from one system to the next. Once a problem is identified it can whiff a little around how it is resolved and who does the work. I’m not reading anything overtly power trippy or problematic at first glance.
Complaining of quick and dirty development then wanting a patch layer to sort out a problem with a third party component? Which one is it?
Microsoft have their problems with all this too and it’s usually hidden behind the branding and marketing. That or forced obsolescence so everyone has to go out and buy new stuff because they cannot be bothered to fix the old stuff.
To a degree this is politics. Sometimes the art of getting something done is knowing what to ignore. Managing the balance of this can be tricky.
HollyB,
Well, he’s got a history of power trips against the community. But anyway, the problem in this case is twofold. First, Poettering has no business defining policy for system administrators; it’s not his place to tell admins what to do, and it shouldn’t be systemd’s concern. Unfortunately, systemd’s monolithic design has roped more and more into it, such that it’s become its own bottleneck for repairs. Fixing things on the system can require patching upstream systemd binaries, which they may or may not be willing to do. Secondly, systemd is a userspace helper built to support the Linux kernel; the Linux kernel is not built to support systemd. Sure, Poettering has an opinion about how kernel development should go, but at the end of the day it is systemd’s responsibility to support the Linux kernel and its drivers, not the other way around.
I can go hundreds of days with no problems with Linux mounting network file systems, but every now and then there’s a temporary problem that causes mounts to fail, like booting computers in the wrong order after a power failure, and that’s when systemd’s limitations are more evident. The only reason these haven’t been fixed in all these years is Poettering’s insistence that systemd not be able to do things like retry network mounts, despite the fact that it would be useful for users. This was one example, but the problem extends to other areas under the systemd umbrella.
Ultimately most users will never have a reason to look at systemd. I find it better than sysvinit, I still don’t like userspace tools being coupled with the init system and prefer the simplicity and unobtrusiveness of runit, but that’s just my opinion.
HollyB,
I’ve wanted to give eGPUs a try myself, but PCs with thunderbolt ports are relatively rare and I don’t have any on my computer. I’m curious how well the eGPUs work on linux.
Just a stab in the dark, but you might try booting with “noapic” kernel parameter.
Also, I make a habit of removing “quiet” and graphics modes on boot. The graphical splash screen can be “nice”, but it hides the console, which can reveal what the system was doing at boot before it crashed.
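On a typical GRUB setup, both suggestions come down to editing /etc/default/grub and regenerating the config (the exact regeneration command varies by distro):

```shell
# /etc/default/grub
# Drop "quiet splash" so the console stays visible during boot,
# and add "noapic" as a test:
GRUB_CMDLINE_LINUX_DEFAULT="noapic"

# Then regenerate, e.g. on Debian/Ubuntu:
sudo update-grub
```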
@Alfman
I think they can work, but it’s one of those YMMV things. I’ve got a GDC Beast. (Call me cheap.) You can get eGPU adapters with different leads: Expresscard, Mini PCI-E, and NGFF. How well you can bodge them into your laptop case is another matter. Some laptops have a clean design and you can run the lead out through a DVD bay. With others you may need to cut a slot.
They usually only use 1x or 2x PCI-E lanes. I’ve seen benchmarks and tested it myself on Windows, and it’s no big deal. If you use the HDMI out you’re not contesting traffic via loopback to your laptop display. The only slow spot is transferring data over to the graphics card, but once it’s there it’s there, and traffic overhead is minimal. Depending on the card, performance can be 70-90% of a full-size x16 bus.
You’ll need to knock up a case and either use a Dell power supply (I forget which), which will limit the wattage of the card – iirc 150W, but you’ll need to look it up – or use an ATX power supply and power the card directly, which removes the power limit.
There are YouTube videos on this if you want to see for yourself. There’s also https://egpu.io/, which has useful data.
Worth a try. Noted. Thanks.
HollyB,
In my particular case I am more interested in using the GPUs for CUDA. PCI lanes are not a bottleneck for offloading jobs that don’t need to interact that frequently with the rest of the system. However, my understanding is that nvidia introduced a restriction to block its newest cards from working with m2 and minipci risers, and even PCIE x4 slots.
https://www.nvidia.com/en-us/geforce/forums/geforce-graphics-cards/5/299995/rtxgtx-cards-in-a-pcie-4x-slot/
I’ve had trouble finding concrete information about it, and I only learned of it because I discovered the x4 restriction on my own system. I’ve seen dedicated eGPU products like this one, which presumably don’t have this restriction, but that’s not a standard GPU inside.
https://www.newegg.com/gigabyte-geforce-rtx-3090-gv-n3090ixeb-24gd/p/N82E16814932385
So I’m not sure whether the generic eGPU cases you can buy separately are going to be incompatible with stock nvidia GPUs going forward.
@Alfman
That is naughty isn’t it?
My main curiosity is using an eGPU for OpenCL so not too different although that’s only to run Davinci Resolve and then only rarely if at all.
I don’t know what your coding tasks or requirements are, but it may be worth thinking about abstracting so you can port higher-level code across APIs. There’s some scope in there for skill pooling… Actually, Davinci Resolve is one of the few big applications which works across all three APIs. As long as your atomic tasks (tasks which cannot be subdivided further) are fine and map to the APIs well, you should be free of the tyranny of a walled-garden API.
I have no idea about the state of play of code converters, although I do know Nvidia have always been dodgy at the driver level, with benchmarking, and with running hardware “hot”, not to mention all their lock-in tricks with games-developer support programmes.
FWIW, I’ve run hundreds of systems (maybe low thousands; it’s tough to keep track of the VMs) using systemd over the last bunch of years, and I struggle to remember a time when systemd caused me trouble that wasn’t down to me not knowing how it worked. The vast majority of these were in the RHEL/Fedora ecosystem, with some on Ubuntu. It’s not as ugly as it’s made out to be, although I don’t love pulling so much into one project.
I have worked at multi-billion-dollar organizations with literally thousands of Linux instances from distros using SystemD.
I’d say it’s pretty much “production ready.”
There are always these silly arguments about the personality of one of the developers, as if that had any bearing on the actual functionality of the project.
When I hear all these horror stories, where minor hiccups or inconveniences are made into mountains of hyperbole, it just makes me wonder. I have encountered my share of technical issues every now and then, but once I solve them I tend to forget about them; I certainly don’t carry some kind of grudge burned into my memory. Maybe a more balanced life with more non-computing activities may help?
I would not waste my time with PulseAudio. PipeWire is a compatible, better-architected replacement.