Linked by Thom Holwerda on Sat 11th May 2013 21:41 UTC
Windows "Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening." That's one way to start an insider explanation of why Windows' performance isn't up to snuff. Written by someone who actually contributes code to the Windows NT kernel, the comment on Hacker News, later deleted but reposted with permission on Marc Bevand's blog, paints a very dreary picture of the state of Windows development. The root issue? Think of how Linux is developed, and you'll know the answer.
RE[3]: Too funny
by Lunitik on Mon 13th May 2013 09:39 UTC in reply to "RE[2]: Too funny"

"See, this is what I *don't* like about systemd; it does too much. We already have cron and at for scheduling, we have udev for hotplugging, and there are already many good solutions for managing logging."


udev no longer exists as a separate project; it is part of the systemd tree, which is a good thing, because the system management daemon should be managing the system. Why is it good to keep at and cron around, as well as xinetd, when all they are doing is managing services? The problem is that each of them uses a different mechanism, and they are utterly incompatible with one another, so we increase the learning curve for no benefit at all.
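
To show what I mean by one mechanism instead of three, here is roughly what a nightly cron job looks like once it becomes a systemd timer. The backup script path is an invented example, but the unit syntax is the real thing:

  # The old crontab entry: run a backup at 02:30 every night
  30 2 * * * /usr/local/bin/backup.sh

  # backup.service - what to run
  [Unit]
  Description=Nightly backup

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/backup.sh

  # backup.timer - when to run it
  [Timer]
  OnCalendar=*-*-* 02:30:00

  [Install]
  WantedBy=timers.target

The schedule now uses exactly the same unit format as every service on the system and logs to the journal like everything else, and xinetd-style launching is handled the same way with a .socket unit next to an ordinary .service.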

Sure, systemd has a learning curve of its own, but once you get used to it, it just makes sense.
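
The day-to-day commands are a big part of why it clicks once you are over the hump; these are real systemctl invocations, only the service name is a placeholder:

  systemctl start mydaemon.service    # what "service mydaemon start" used to do
  systemctl enable mydaemon.service   # what "chkconfig mydaemon on" used to do
  systemctl status mydaemon.service   # state, main PID, cgroup and recent log lines in one place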

"Personally, I much prefer the Upstart approach of focusing on a small set of problems that need solving."


Upstart is crap; there is no real benefit to it over SysV init at all. The main problem is that it still relies on shell scripting for almost everything, so bringing up the system still takes something like 3,000 filesystem calls. Compared to the roughly 700 of systemd, that is simply too much. Systemd needs to come down some more too, but that figure covers everything up to a GNOME desktop; once gnome-session starts using systemd user sessions, it will drop drastically.

As others have said, filesystem access on Linux is not great, so the fewer times we hit the disk, the better, and the faster the system will come up.
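
If you want to measure this rather than argue about it, systemd already ships the tooling; these are real commands, though the numbers will obviously differ from machine to machine:

  systemd-analyze                   # time spent in kernel, initrd and userspace
  systemd-analyze blame             # units sorted by how long they took to start
  systemd-analyze critical-chain    # the chain of units the boot actually waits on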

Honestly, I still don't really understand Upstart; its job language is just broken. I keep trying to look into it, but the more I do, the less I like it. Systemd uses the filesystem in a logical way to bring up the system, and exposes many of the system's features under /sys. Upstart gets us nowhere, because fundamentally it is only a redesign of init; it is not something substantially new.
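
To show what I mean about the job language, compare a typical Upstart job with the equivalent unit file; the daemon name is made up, the syntax is the point:

  # Upstart: /etc/init/mydaemon.conf
  description "example daemon"
  start on runlevel [2345]
  stop on runlevel [016]
  respawn
  script
      . /etc/default/mydaemon
      exec /usr/sbin/mydaemon $DAEMON_OPTS
  end script

  # systemd: mydaemon.service
  [Unit]
  Description=Example daemon
  After=network.target

  [Service]
  EnvironmentFile=-/etc/default/mydaemon
  ExecStart=/usr/sbin/mydaemon $DAEMON_OPTS
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

The Upstart job still drags in a shell and its own event grammar; the unit file is plain key=value that timers, sockets and device units all reuse.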

You seem to think this is a good thing, but systemd simply results in a cleaner, easier-to-understand system. By replacing things like at, cron, syslog and logrotate, all programs that do not communicate with each other, we end up with a more integrated base system. For me, that is a good thing; it is a miracle all these projects have managed to coexist for as long as they have.
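
As one concrete example of what that integration buys you, the journal collapses the syslog-plus-grep-plus-logrotate routine into a single query tool; the unit name below is only a placeholder:

  journalctl -u mydaemon.service --since today   # one service, today only
  journalctl -p err -b                           # everything at error priority since this boot
  journalctl -u mydaemon.service -f              # follow live, like tail -f

Rotation is handled by journald itself (SystemMaxUse= in journald.conf), so there is no separate logrotate configuration to keep in sync with it.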

By moving all of these into one project (with a separate binary for each job, because modularity matters for parallel booting), we get a consistent interface for every event on the system, whether it is hardware appearing or software crashing; it all becomes predictable. Everything is handled the same way system-wide, and there are no more obscure configuration formats to learn depending on exactly what you are trying to achieve. This is a huge benefit for Linux; other operating systems moved to similar approaches a long time ago.
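
To make the "hardware and software handled the same way" point concrete: a udev rule can tag a device so that systemd pulls in an ordinary service when it appears. The UUID and service name here are invented, but the TAG+="systemd" plus SYSTEMD_WANTS mechanism is the real one:

  # /etc/udev/rules.d/99-backup-disk.rules
  ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", \
      TAG+="systemd", ENV{SYSTEMD_WANTS}="usb-backup.service"

The disk shows up as a device unit, usb-backup.service is a perfectly normal unit file, and the same systemctl and journalctl commands apply to it as to everything else. Try wiring that up with xinetd and a hotplug shell script.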

Couple this with the fact that Upstart requires a CLA (contributor license agreement), and systemd becomes the only intelligent choice. Canonical does not have the open source community's best interests at heart, so their projects will not be touched by anyone outside Canonical. You essentially fall into the same trap as with any proprietary company: you become utterly dependent on a single vendor for every issue that might arise.

Canonical will be replacing udev in UnityNext, and they have not been updating it in their repos for a couple of releases now. They are discussing replacements for NetworkManager, and they want their own Telepathy stack. They will be competing head on not just with Red Hat but with the likes of Intel and IBM, companies that are heavily invested in Free Software. Canonical are fooling themselves if they think they can compete by doing everything themselves; they simply lack the competence.

To date, the only things Ubuntu have actually built themselves are a Compiz plugin that was quite broken for most people, an init system whose initial developer has left, and a few bloated APT front ends. Below that, they have depended utterly on Novell and Red Hat for everything. Now they want to replace all of it and control everything themselves; everything good about open source is simply lost on Canonical.

For some reason, they are praised because everything "just works", but it works because of work done by others. It honestly makes me sad that the praise is so misplaced. In fact, Ubuntu are mostly responsible for making things not work, for breaking others' work, because they do not actually understand the software they are shipping. To this day, Lennart gets a bad rap because Ubuntu developers did not understand PulseAudio and shipped a broken configuration.

Of course, Ubuntu is heavily used, so a broken configuration in their shipped release gave people a bad opinion of PulseAudio. Another example is binary drivers constantly breaking on upgrades; it makes Linux look bad because users are not made aware of the issues before something breaks. Canonical just make horrible decisions throughout the stack, and this harms open source development, because people simply accept poorly maintained proprietary software instead of pressuring those companies to play nicer with open source.

This is my real problem with Ubuntu: they simply do not care about Free Software or open source, they just want to benefit from it. They think they are being slandered in upstream projects because their code is rejected, but the code is rejected because it is bad code. Now they want to rewrite everything and have the entire stack simply comply with their tests; that cripples developers, because poor code becomes acceptable as long as it meets the testing requirements. Open source is innovative because of its dynamic nature, and Canonical are ridding themselves of that benefit.

Edited 2013-05-13 09:57 UTC
