“Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening.” That’s one way to start an insider explanation of why Windows’ performance isn’t up to snuff. Written by someone who actually contributes code to the Windows NT kernel, the comment on Hacker News, later deleted but reposted with permission on Marc Bevand’s blog, paints a very dreary picture of the state of Windows development. The root issue? Think of how Linux is developed, and you’ll know the answer.
The comment was originally posted at Hacker News, and was verified to be from an anonymous developer at Microsoft who works on the Windows NT kernel. He later deleted the comment, but allowed Bevand to repost it on his blog – but with the proof, “the SHA1 hash of revision #102”, removed. It paints a grim picture of the state of Windows.
He claims Windows is indeed slower at the lower levels than Linux, and that the root cause of the problem is “social”. While Linux’s open nature attracts developers working for glory and recognition, this is not the case within Windows – in fact, we run into something that has long been a problem at Microsoft: fragmentation. Windows development is managed by many different teams, and these teams do not work together at all.
Component owners are generally openly hostile to outside patches: if you’re a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify it in shiproom as an unplanned design change), makes test angry (because test is on the hook for making sure the change doesn’t break anything, and you just made work for them), and makes PM angry (due to the schedule implications of code churn). There’s just no incentive to accept changes from outside your own team. You can always find a reason to say “no”, and you have very little incentive to say “yes”.
This means there’s no incentive to work on small, incremental improvements, according to the Windows developer, because only huge improvements might get you credit – small improvements “just annoy people and are, at best, neutral for your career”. This is the exact opposite of, say, the Linux kernel, where there is a continuous stream of small improvements and experimentation.
There are external issues as well – such as a talent drain to other companies, like Google. This means Microsoft has to rely more and more on people straight out of college, who have no knowledge of why things work the way they do, and who are afraid to change things that already work. They tend to recreate existing features instead of improving old ones.
All in all, it paints a not-so-pretty picture of Windows development. Since the blog post went public the anonymous developer has had a few more words to say – the usual stuff in this scenario, trying to make it all a bit less harsh. Still, even with the harshness reduced, it ain’t pretty.
It’ll take a very strong manager to break down the walls and change all this, but you’d think upper management is aware of these issues and working on them.
This makes a lot of sense. It’s difficult to pay for the manpower to match a group who does something out of love of the art. Microsoft, at least at the top levels, has known this for a long time. It’s the reason for their unending war on Linux, even when they may publicly applaud open source. It’s hard to compete with free.
The vast majority of devs working on the Linux kernel are paid and yet still do their work out of love of the art. The one doesn’t exclude the other. At least not on Linux.
http://www.computerweekly.com/blogs/open-source-insider/2012/04/lin…
If you regularly do good work on the Linux kernel, you can pretty much be guaranteed to get job offers from companies.
You can also see that the share of “companies” working on the code is still rising:
http://lwn.net/Articles/547073/
I mentioned them as “companies”, because they pretty much have no say in what the kernel developers they hire do. They mostly just keep working on what they’ve always worked on.
Companies hire the developers that work on things they care about. Not so much to influence it, but because they want an in-house expert.
No it doesn’t, not really, for several reasons. One, nobody is gonna care about “faster” when the average $300 PC is several TIMES more powerful than its owner will ever need.
Two, nobody is gonna care about speed and innovation if an update just broke $30k worth of office software and left an entire company’s products broken; the risk isn’t worth it in many cases.
Third, Linux pays for this “speed and innovation” with one of the worst driver models in the history of computing, a driver model so messed up that to this very day a company can’t just put a penguin on the box and a driver on the disc, because thanks to Torvalds’ laughably bad driver model there is a good chance the driver WILL be broken before the device even reaches the shelves.
Let me end with an illustration, from my own life, of why this entire argument is pointless and moot when it comes to OSes… Next week, after nine years of faithful service, I’ll be putting my old Sempron office box in the cheap hardware pile. That is NINE years, TWO service packs, and probably 3000-plus patches… and not a single broken driver, NOT ONE.
For the vast majority of people on this planet Linux isn’t free as in beer nor free as in freedom, it’s free as in worthless. I’m sorry if that makes some people upset but it’s true: you can give me your OS for free, but if my wireless is toast on first update and my sound goes soon after? Well, into the trash it goes.
Faster kernels mean jack squat if they don’t lead to a better product, and when Googling for fixes, forum hunts, and drivers that work in Foo but are broken in Foo+1 are the order of the day, they don’t. I mean, MSFT has put out a product more hated than Vista, ME, and Bob put together and Linux is STILL flatlining… doesn’t that tell you something?
On the other hand, once a driver lands in the kernel, it becomes the same moving target as the whole thing. Most of the webcams released for Windows XP no longer work on Windows Vista/7/8, while on Linux you can use many of those webcams on all kinds of hardware, from laughably old x86 to boards such as the Raspberry Pi. Plug it in and it just works! Thank you Linus, this is the reason why I love Linux. It might be beneficial in a few cases to have a stable ABI, but see the bigger picture and imagine today’s stuff that just works in 20 years’ time – unlike Windows. I think having a stable ABI would prevent that.
Your mileage may vary, but that is NOT my experience with webcams, especially since most webcams manufactured in the last several years support UVC ( http://en.wikipedia.org/wiki/USB_video_device_class ).
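(For the curious, it’s easy to check from a shell whether a webcam is a standard UVC device – a rough sketch, assuming the v4l-utils package is installed for v4l2-ctl:)

lsusb -v 2>/dev/null | grep -i "bInterfaceClass.*14 Video"   # USB interfaces claiming the Video class
dmesg | grep -i uvcvideo                                     # did the generic uvcvideo module bind to it?
v4l2-ctl --list-devices                                      # video devices the kernel exposes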
Webcams are a telling example, but not necessarily in the way you think… I’m sort of a collector of old webcams (I even have the very first dedicated webcam, the Connectix QuickCam for ADB Macintoshes) and the situation under Linux is not so rosy.
Model/family-specific Linux drivers for oldish webcams are often quite half-baked, not exposing all the functionality or the full capability of the camera. Just because a driver is in the kernel doesn’t mean it can’t be neglected…
MS pushed through the USB video class (as a Windows logo requirement), which also improved the situation on Linux – and still the driver is somewhat half-baked (since it’s within v4l2, it doesn’t support stills).
Plus, “Most of the webcams released for Windows XP no longer work on Windows Vista/7/8” is probably not true – the big names especially often did release Vista-and-up drivers.
People do care about faster, perhaps not those who run a single system on a modern desktop, but think of other areas where Linux happens to be strong.
Embedded devices – much slower hardware, better performance counts (and can also improve battery life).
Supercomputers – slight performance improvement, multiplied by thousands of nodes.
Virtualization – slight performance improvement, multiplied by large numbers of virtual machines.
As for drivers, I have never had a driver that was in the kernel break, and that is where drivers should be so that they can be improved and debugged alongside the rest of the kernel. The idea of third parties creating binary drivers is extremely damaging, and is one of the reasons why Windows has never been successful on non-x86 systems. I very much like the fact that virtually all the USB peripherals I use on my x86 Linux boxes will also work on my ARM-based Linux systems. Closed source binary drivers will never have this level of flexibility.
Also, even MS were forced to break the driver interface with Vista because the old driver interface was holding them back. Linus doesn’t want to get stuck in a situation like that, where progress is impeded unnecessarily. Linux typically undergoes regular, small changes instead of sudden large ones, and when the interface is changed the in-kernel drivers are typically changed at the same time, so nothing will ever be in a state of not working unless you’re running bleeding edge development kernels (an option that closed source systems typically don’t even provide at all).
Linux is not flatlining; it’s huge in pretty much every market except desktops, and its lack of success on desktops is down to marketing and lock-in more than any lack of technical merit.
Now that Linux is mainstream that driver model is doing fine.
These days lots of drivers are developed in the Linux kernel before the product ships. It takes a lot of time to get a product to ship; there are kernel drivers that were already done and released before the product came to market. Yes, it also takes some time to get into distributions, true, but a lot of popular distributions are now on a 6-month release cycle, which helps a lot.
I don’t know if this is still true, but you could even get someone to develop a driver for your device for free: http://www.kroah.com/log/linux/linux_driver_project_kickoff.html even if you require an NDA.
So lots of things have changed.
Obviously some companies are still not participating all that well. And I don’t even mean Nvidia: they can’t develop a driver for their desktop GPU in the open because they don’t own the license on their own drivers – they hired another company to develop it.
The mobile GPU drivers from Nvidia, on the other hand, are actually in the kernel and were submitted by Nvidia.
How would Linux the kernel suddenly break office software? The userland API/ABI is extremely stable; add to that the fact that no company would be running their business on a bleeding edge kernel to begin with – they will most likely use a tried and tested LTS kernel version.
Your old tired bullshit again. First off, Linux (while just being a kernel) supports more hardware devices than any other operating system out there, and it does it straight out of the box.
A company wanting to ‘put a Linux sticker’ on the box doesn’t need to put a driver on a disc; they can have the driver be part of the actual kernel tree, shipped with all Linux distros and maintained against ABI changes by the kernel devs.
ABI changes don’t break these drivers inside the kernel (if the kernel devs change the source interface they update the drivers); they only break those very few external proprietary drivers that exist for Linux. And no, it’s not as if those drivers have to be rewritten; typically they have to be slightly modified and re-built against the new ABI.
And those few proprietary driver vendors do continuously update their drivers against the new ABI versions, so it’s not even a practical problem when it comes to proprietary drivers; you just get the new driver from your package manager.
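(To illustrate: this rebuilding is largely automated nowadays. A minimal sketch using DKMS, with “foo/1.0” standing in for a hypothetical vendor module registered with it:)

dkms status                          # list modules DKMS knows about and their build state
dkms build foo/1.0 -k $(uname -r)    # compile the module against the running kernel
dkms install foo/1.0 -k $(uname -r)  # install the freshly built module

Distro packaging typically hooks this into kernel updates, so the rebuild happens without the user noticing.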
By comparison, when Microsoft updates its driver ABI, as with Windows Vista, tons of perfectly functional hardware is made obsolete because the hardware companies won’t release new drivers for older hardware (they want you to buy new hardware).
And it’s not as if a Windows ABI version is in reality stable during its own lifetime, as shown by the Vista SP1 driver malfunction debacle.
Again, unless you run a bleeding edge distro (and even if you do), any problem such as this is extremely unlikely to be caused by the Linux kernel, ABI change or not.
The only area in which Linux ‘flatlines’ is on the end user desktop, an area where no one has been able to challenge Microsoft, not even OSX riding the ‘trendy/hipster’ wave with massive marketing. In every other area of computing Linux either dominates or is doing extremely well. And guess what, those areas ALSO USE DRIVERS. How can this be? According to you they should just keep breaking due to your imagined ABI problem.
A non-stable ABI means no cruft: no need to support crappy deprecated interfaces and functionality just because some driver out there may still use them (hello Windows). Instead you modify and re-compile the drivers to work against the new, improved ABI, and this is something the kernel devs do for you if you keep your driver in the kernel.
Again, those few proprietary driver holdouts need to do this themselves, and they do. The result is that all Linux drivers improve as they make use of new and enhanced functionality provided by the kernel.
Well, this bullshit comes up time and time again because stuff breaks. How many times has someone’s video gone tits up after a kernel upgrade? And don’t give me this “it should be in the kernel” bullshit… when you are stuck at a command prompt having to use another machine to Google the solution, it isn’t a lot of fun.
Stable APIs/ABIs are good engineering, like it or not.
No, this shit comes up time and time again from guys like you and bassbeast who don’t even run Linux.
Yes, tell me, how many times? I’ve used a bleeding edge distro for 5 years; my video drivers (I use NVidia cards) have never caused breakage. The only time a kernel update forced me to downgrade was an instability caused by my network driver during a major network driver rewrite.
And this is because I run a bleeding edge distro, Arch (kernel 3.9.2 went into core today); had I been using a stable distro I would not have been bitten by that bug either.
What bullshit is that? It works for a goddamn gazillion hardware drivers, right out of the goddamn box. And unlike Windows, which relies on third party drivers, this means that Linux can support all this hardware on ALL the numerous architectures it runs on, which is of course a huge advantage of Linux.
Beats a blue screen of death, see I can play this game of BS too.
Yes, in a perfect world. In reality there’s always a cost, like that of poor choices you have to live with in order to ensure backwards compatibility. The Linux devs went with a middle way: anything inside the kernel can be changed at any time, hence you either put your code in the kernel where it will be maintained against changes, or you do the labour yourself.
Meanwhile, breaking kernel-to-user-space interfaces is a big NO.
Except I actually do run Linux. I don’t pretend it is perfect and I don’t make out that poor decisions are good when they blatantly aren’t.
Then tell us what distro you are using and what these kernel breakages you keep suffering are.
As I am running a bleeding edge distro I am fully aware of, and even expect, that things may break; yet it’s been amazingly stable (granted, I don’t enable testing; I’m very thankful for those who do and report bugs, which are then fixed before I upgrade).
Neither do I. As I said, I had to downgrade once due to a network driver instability; still, that’s one showstopper in a five year period on a bleeding edge distro. Vista didn’t make it past its first service pack before it started to break drivers; so much for a ‘stable’ ABI.
How are they ‘blatantly’ poor decisions? You offer no arguments; please explain.
Now let’s see: more hardware device support out of the box than any other system, ported to just about every architecture known to man, used in everything from giant computer clusters to embedded devices, servers, HPC, 3D/SFX, supercomputers, mobile phones, tablets, fridges, etc.
But yeah, according to you and bassbeast these areas don’t need driver stability; they obviously can’t have it, since according to you guys Linux drivers ‘just keep breaking’: update the kernel and whooosh, there goes the stability.
I’m not going to explain anything anymore, because your counter argument is “it works for me” so far.
From the looks of it, it works for a lot of purposes, not just for him.
Huh? From what I can tell you haven’t explained anything; you made a blanket statement that ‘Stable API/ABIs are good engineering like it or not’, which totally disregards reality: stable APIs/ABIs are NOT automatically ‘good engineering’ if they turn out to suck and then you are stuck with them.
I doubt there is a single long term ‘stable’ API/ABI today that would look the way it does (in many cases probably not even remotely) had the original developers been able to go back in time and benefit from what they know now.
So it’s a balance: will you keep code from ‘yesterday’ which ‘today’ turned out to be a crappy solution in order to maintain compatibility, or will you allow yourself to break it and force improved changes upon those interfacing with your solution?
The kernel devs chose the balance of being able to break anything inside the kernel, while keeping userland interfaces intact.
This is not a perfect solution, because there simply is no perfect solution, but it means that their changes have practically zero impact on user space (which certainly limits the improvements they can make) while allowing them free rein to improve within the kernel space.
And since the kernel is monolithic, this means there’s a ton of in-kernel functionality which can be enhanced without breaking compatibility; the exceptions in practice are those few proprietary drivers residing outside of the kernel, which need to be maintained by their vendors against kernel ABI changes.
Oh, I think I’ve fleshed out my argument a lot more than that; meanwhile yours still seems to be little beyond ‘it doesn’t work for me’.
At the end of the day, I think a simple google for “my wireless is not working anymore in <distro x>” proves the point.
While it is better than it was, it still sucks. YET Linux advocates make up all sorts of excuses for why it is okay. They could have, I dunno, supported a legacy interface until kernel version X, which wouldn’t have been much effort if the internal structures are as good as you say they are. This would have given hardware companies and OEMs a roadmap that they could follow.
There have always been problems because they made a bad choice for desktop users and OEMs that may have wanted to support them. It might be a good choice for the progression of the kernel itself, but it is a shit choice for desktop users, which is one of the many reasons that Linux is and will always be a failure on the desktop.
The fact of the matter is that while you get up-votes on here, changing interfaces and APIs tends to piss third party developers off. If there hadn’t been the legal problems with the BSDs at the time, Linux wouldn’t have even got off the ground, because it wouldn’t have been picked up as a “free” *nix-like alternative.
Sadly, friend, that is ALL you will get, because ultimately the broken driver model has become a religious element, a way to “prove the faithful” by how much they will get behind an obviously and demonstrably bad design.
Quick: how many OSes OTHER than Linux use Torvalds’ driver model? NONE. How many use stable ABIs? BSD, Solaris, OSX, iOS, Android, Windows – even OS/2 has a stable driver ABI.
I’m a retailer; I have access to more hardware than most, and I can tell you the Linux driver model is BROKEN. I can take ANY mainstream distro, download the version from 5 years ago and update to current (thus simulating exactly HALF the lifetime of a Windows OS), and the drivers that worked in the beginning will NOT work at the end.
And before anybody says “use LTS”: that argument doesn’t hold water, because thanks to the (again broken) design by Torvalds, most software in Linux is tied to the kernel, so if you want more than a browser and OpenOffice you WILL be forced to upgrade because “this software requires kernel x.xx”, or be left behind with older, unsupported software. With Windows, with the exception of games that require a newer version of DirectX (which is rare; most have a DX9 mode for this very reason), you can install the latest and greatest on that 10 year old XP machine and it JUST WORKS.
Again, let me end with the simple fact that after NINE YEARS I’m retiring the shop netbox. That is TWO service packs and at LEAST 3000 patches and not a single broken driver, NOT ONE. If Linux wants to compete then it actually HAS to compete, not give us excuses which frankly the math proves don’t work. Look at the “let the kernel devs handle drivers” excuse: you have 150,000+ drivers for Linux, with a couple of hundred new devices released WEEKLY… how many Linux kernel devs are there again? If you pumped them full of speed and made them work 24/7/365 the numbers won’t add up; the devs simply cannot keep up. Which is of course one of the reasons to HAVE a stable ABI in the first place: so that the kernel devs can work on the kernel while the OEMs concentrate on drivers.
Sorry for the length, but this one really irks me. If you like running an OS that is rough because of reasons? Go right ahead, I wish you nothing but luck. But when you compare that broken mess to either OSX or Windows I gotta throw down the red flag and call bullshit; it’s not even in the same league. Oh, and do NOT bring up embedded or servers, as that is “moving the goalposts”, and honestly I don’t care how cool your OS is at webserving; I’m not selling webservers and that isn’t the subject at hand. Linux is broken ON THE DESKTOP and that is what we are discussing, so try to stay on topic.
To my knowledge he runs Ubuntu but that’s irrelevant; this discussion is about NT kernel vs Linux kernel performance, not a dick measuring contest.
I can see where you’re both coming from because I live in both stable LTS and bleeding edge testing worlds at the same time. On my workstation it’s Windows 7 Pro and Ubuntu LTS because I like to actually be able to get work done without wasting time fiddling around with broken crap. On my laptop it’s “anything goes”, as that device is my testbed for all the shiny new distro releases. Any given week I might have a Debian based distro, an Arch based one, a Slackware derivative or even Haiku booting on it. I greatly enjoy having both the stability of my work machine and the minefield that is my portable.
I feel like I say this every week lately, but if you’re running a production machine for anything other than OS development, you should stick to a stable (preferably LTS) release of your favorite OS. If you’re a kernel hacker, a distro contributor or you’re just batshit insane, go ahead and use a bleeding edge distro as your production machine, and enjoy all the extra work that goes into maintaining it.
First off, I haven’t even glanced at my dick during this entire argument (I need to keep my eyes on the damn keyboard).
And while the original discussion was about Linux vs NT performance, this offshoot is about bassbeast’s claims that the lack of a stable driver ABI is causing user space office software to crash and holding Linux back on the desktop.
He was painting this picture of a ‘company’ which must either run a bleeding edge distro or be downloading and installing new kernel versions off git, and then goes:
-‘shit! this new untested kernel release broke the proprietary driver and as such cost us $30k of products! we’re screwed! oh, man if only this could have been avoided, like with one of them stable driver ABI’s like I hear windows has, bummer!’
And then he used this ‘very likely’ scenario as the reason why Linux has not made it big on the end user desktop.
Same here, stable for work, batshit crazy bleeding edge for leisure.
Everyone knows LTS isn’t actually stable.
On my second attempt at googling ….
http://blogs.operationaldynamics.com/paul/opensource/not-unified-re…
http://www.techdrivein.com/2011/06/fixwifi-driver-breaks-after-upda…
I am sure I could find more. I don’t run Ubuntu, I run Debian and Fedora.
Sorry about that, I guess I was thinking of someone else.
Ubuntu 12.04 LTS has been pretty good for me, at least on my workstation. On my laptop it’s another story; it’s a Sony Vaio I got secondhand from my sister so of course there are hardware compatibility issues. It seems like Sony can’t make an OSS friendly device no matter what. So far Slackware and derivatives are the best supported and most stable on it. I’m about to load up Crunchbang Linux, which has been very stable in the past for such a “bleeding edge” distro. Also it’s Debian based and not Ubuntu based, so in my opinion it’s a cleaner OS overall.
I think everyone is turning back to Debian for a stable-ish Linux.
Ok, there is more to this thread of arguments than I would like to discuss but, to put things in perspective, I will write up some possibly inaccurate and anecdotal specific cases that apply to me.
First, X drivers are NOT part of the Linux kernel but, frankly, does it matter where they fit in the system stack if you just want to boot to a nice GUI? Yeah, I thought so.
Some disclosures:
d.1) I have openSUSE 12.3 installed on all the computers I really use (so, discounting the very old ones I just play with out of pure nostalgia). There are four of them.
d.2) Three of them also have Windows systems (one XP, one Vista and one 7);
d.3) One is a laptop, the others are all desktops;
d.4) I prefer nVidia graphics cards because of their drivers on Linux. Two of them have such cards.
Now, this is what I can tell:
1) I compile some programs on my machines to add features that are left out by the regular repos due to some restrictions (mainly because they are not “totally” free additions) or because I want to use specific patches that are not applied by default. If you use scientific packages you will find this is a common case;
2) I don’t use the openSUSE nVidia drivers repo because of a very simple problem: when I compile programs that depend on X devel libraries and generate the appropriate rpms, they will declare a dependency on those nVidia drivers, which I try to avoid because not all my machines (or the ones I supply the packages to) have an nVidia card;
3) Because of ‘1’ and ‘2’, when a kernel upgrade happens (only once since 12.3 or the 12.3 RCs – I am really not sure when it happened), I am thrown to the shell prompt (which I enjoy, but that is a completely different subject). On MS Windows this does not happen, but I experienced BSODs from time to time (rare on XP, very rare on newer versions) and they were never related to video driver issues. Both systems developed their own ways to cope with driver problems: rollback on MS products, and shell + update/revert/blacklist on the Linux side (see the blacklisting sketch after this list). I prefer the latter solution, but I am very aware that the former, at least to me, seems like a more foolproof solution for inexperienced users; and yes, I always install the “recovery prompt” and things alike on Windows “just in case” (not only on my machines but on any I may have to support);
4) Things “break” in Linux hardware support. With openSUSE 11.4, I was able to have an old nVidia FX 5200 PCI card happily coexist with an integrated Intel video device through Xinerama. It all stopped working on 12.3 no matter how hard I tried editing xorg.conf (after making sure it was properly picked up); I had to bite the bullet and buy a new card. The same thing worked flawlessly on XP, only after hacking on Vista, and it was “no way” with Windows 7. As explained in the second paragraph, I am aware that X is not part of the Linux kernel;
5) The common argument that if something is supported in the Linux kernel then it is going to keep working properly on newer versions is bollocks. For network devices, storage devices and other very important server subsystems it may be true, but I had lots of problems with video capture devices, first and foremost because the kernel drivers are only part of the “integrated” solution: you also need a proper interface working in user space, and some of them just stop working because gtk or KDE got updated and the old application is not compatible with them. To be fair, this also happens on Windows but, for this specific case, I do find there is better support there, not only in driver options but in applications as well, even though they (the hardware/software manufacturers) push the user to buy a new kit when a new MS Windows version rolls out;
6) Some of the “pro” arguments towards the use of MS Office and Adobe Creative Suite are also ridiculous. The former is only really needed in quite few cases, but if you ask, almost everyone will say they “need!” it; press them to tell you what specific feature they would miss and watch things get funny. Photoshop and Illustrator can also be very successfully replaced by Gimp and Inkscape for web development in most cases. Problems start to pop up if you need to work in the press industry, though. As a side note, I like Scribus more than InDesign for small projects that require only PDF generation (probably because of years of PageMaker use).
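(The blacklisting mentioned in point 3 is roughly a one-liner – “nouveau” here is only an example module name:)

echo "blacklist nouveau" | sudo tee /etc/modprobe.d/50-blacklist.conf
# Then regenerate the initramfs so the blacklist also applies at early boot;
# the exact command varies by distro (update-initramfs -u on Debian/Ubuntu,
# mkinitrd or dracut on openSUSE).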
Where I think Linux really shines is in the automation of processes. It is really hard to beat when you properly assemble a string of operations to be performed by scripts (bash, make and all). Perhaps something equivalent could be accomplished in the MS camp with PowerShell, but I don’t see “scriptable” imprinted in the DNA of the MS app stack, which limits what can be accomplished. So, for people like myself who just love to make most things “set and forget” (or almost, of course), there is probably a leaning towards Linux, I guess.
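To give a concrete (made-up) example of the kind of “set and forget” job I mean – a nightly backup that could be dropped into cron; the paths are hypothetical:

#!/bin/bash
# Archive a work directory with a datestamp, then prune all but the 14 newest.
set -euo pipefail
SRC="$HOME/projects"                        # hypothetical source directory
DEST="/backup/projects-$(date +%F).tar.gz"  # dated archive name
tar czf "$DEST" "$SRC"
# ls -1t sorts newest first; tail -n +15 selects everything past the 14th
ls -1t /backup/projects-*.tar.gz | tail -n +15 | xargs -r rm --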
Just use the open source drivers … so use a system with Intel or AMD graphics. “Problem” solved.
Umm what?
a) Shelling out for a new computer because of driver regressions is wasteful and stupid. Most people don’t have the time, the money, or the desire to do that.
b) The open source drivers routinely suck on lots of hardware, and are also subject to horrible regressions.
c) Again, the performance of open source drivers (particularly 2D performance) tends to be pathetic.
Firstly, let me point out that the open source Linux drivers for Intel graphics are the only Linux drivers for Intel graphics.
The other major vendor with a vendor-supported open source graphics effort is AMD:
a) Wrong. The open source graphics drivers for AMD provide better legacy support.
b) Wrong. The open source drivers available with current kernels admirably cover considerably more AMD graphics chips than fglrx does.
c) Wrong. In some areas pertaining to 2D acceleration, the fglrx closed source Linux driver is marginally better than the open source driver. In other areas of 2D acceleration, however, the closed source fglrx driver is five times slower than the open source driver. The open source graphics drivers in most areas of 3D performance achieve about 80% of closed source fglrx performance, in a few areas they are further behind at about 50%, and in a few other areas they are actually 5% to 10% faster.
Your comment is, however, correct for nvidia graphics hardware. Accordingly, I do not recommend nvidia graphics hardware for use with Linux.
I think that when you install a proprietary driver, the kernel, Xorg, and that driver should only upgrade in sync, when the driver allows.
More specifically, what I’m trying to say is: if the package for the driver recommends a certain kernel or Xorg, that is where those packages stay until an upgraded driver that supports the newer kernel and Xorg comes out.
This is a package management issue that could have been fixed ages ago, and in fact you can manually lock down these packages and achieve stability.
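(For example, on Debian-family distros that lock-down is a one-liner; the package names here are only illustrative:)

sudo apt-mark hold linux-image-generic xserver-xorg-core   # hold kernel and X at current versions
# On Arch, the equivalent is an IgnorePkg line in /etc/pacman.conf:
#   IgnorePkg = linux xorg-server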
Seems that, in practice, this can also mean the driver often languishes; especially when it’s for some older piece of hardware (for example http://www.osnews.com/permalink?562009 )
> Third, Linux pays for this “speed and innovation” with one of the worst driver models in the history of computing, a driver model so messed up that to this very day a company can’t just put a penguin on the box and a driver on the disc, because thanks to Torvalds’ laughably bad driver model there is a good chance the driver WILL be broken before the device even reaches the shelves.
WAAAAAAAAAAH my proprietary code designed to link against a GPLv2 kernel that explicitly doesn’t have a static driver ABI doesn’t work with a different kernel version! If your driver means that much to you, submit a patch.
You want proof your entire premise doesn’t work? Do the math:
You have a MINIMUM of 150,000 drivers for Linux, yes? And we have several thousand NEW devices released weekly… how many Linux kernel devs are there again? 500? 1000? If you kept them working 24/7/365 on NOTHING but drivers the math still wouldn’t work; all it would take is Torvalds changing a pointer (which, considering I can wallpaper this page with “update foo broke my driver” posts, appears to be Torvalds’ SOP) and it would take 3 to 4 YEARS just for them to give 5 minutes to each driver.
So I’m sorry, but you can bang your Linux bible all day long; what you are selling is about as believable as Adam riding a dinosaur. When every single OS on the planet OTHER than Linux has a stable ABI, are you REALLY gonna sit here and argue that Torvalds is smarter than every single OS designer on the entire planet? Really? If his driver model was good others would adopt it; they haven’t, and the reason why is obvious: it’s not good.
Oh, and finally, your claiming it’s about the GPL makes this a religious argument, you know this, yes? You are arguing that it is okay to have a busted model as long as it promotes the “purity” that is the GPL… except every single forum tells you to use Nvidia because, SURPRISE!, the FOSS drivers don’t work worth a damn across the board. The math doesn’t work; you can spin it all you want, but you can’t change the fact that the current driver model is really terrible.
Hmmm. Bad hair day? Here’s a real-world user case. I still own and use a 1999 Dell Dimension 4100. It’s used mostly for testing purposes. When I boot it from the hard drive, it runs Debian7 (wheezy). It’s been running it for about a year now, from beta status to (just recently) stable release. Many updates in between. The kernel is a third-party 3.8-10.dmz.1-liquorix-686, the latest liquorix version. It has gone through many updates, as well.
The machine uses an 800MHz Pentium III CPU and 512MB of RAM. I’d add to the RAM count, but it’s all the motherboard will register, even with more installed. The motherboard came with no ethernet port. It still doesn’t have one. I use a Linksys WUSB54G card, connected to a USB port. The Linksys is quite a few years old, but not as old as the Dell.
The wireless driver for the chipset is in the kernel. It is registered with the OS, “out of the box”. Through all of the “bleeding edge” liquorix updates, as well as the Debian testing updates, the setup never failed to acquire a wireless signal or connection ability.
Regressions? You clearly do not know what the hell you are talking about.
To be fair, this is Debian.
If you said Ubuntu I wouldn’t believe you.
Unless you’re developing resource hungry algorithms and applications that those not-caring people are actually using. You can’t base your development on thinking only about the consumers. True, they are the larger crowd, but without the developed apps and content, they won’t have anything to consume. You might say you don’t care about the developers and only concentrate on serving the consumers, but then at least state that plainly and don’t behave like you care, frustrating devs from time to time with seemingly ignorant decisions.
What you are doing is called “moving the goalposts” and is one of the reasons nobody can discuss Linux anywhere. We are talking about desktops NOT workstations, routers, phones, your toaster, or a webserver, okay?
So if your argument is that Linux works on routers (where they are never updated and run a custom build) or on workstations (where companies like HP spend millions to constantly rebuild drivers after Torvalds breaks them) or on some other device? Then sure, no argument. But we aren’t talking about any of those; we are talking about x86 desktops and laptops, which Linux clearly does NOT work on, or else I wouldn’t be able to wallpaper this page with “update foo broke my drivers” posts.
I’ll leave you with this, if one of the largest OEMs on the entire planet can’t get Linux to work without running their own fork, what chance does the rest of us have?
http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell…
For most of the last 30 years Windows has emphasized backward compatibility, so code written X years ago will still run today. The consequence of this is more bloat and less optimal code supporting mysterious, forgotten corner cases.
Linux, being purely engineer-led, will more proactively and aggressively root out unneeded code and optimize at the expense of compatibility.
In my personal experience, Linux is faster at many things as a _user_ (ls!), but surprisingly the difference isn’t that great! And as a developer, there really isn’t any worthwhile difference at all.
Linux has extremely good source-level backwards compatibility; lots of software written for very old Unixes will still compile and run just fine on modern day Linux systems.
That’s the difference between writing a simple, sensible and extensible system vs writing something unnecessarily complex.
Yes, backward compatibility has a cost; this is even true in Linux:
https://www.youtube.com/watch?v=Nbv9L-WIu0s
Most Linux kernel developers are actually paid for what they are doing. That, I agree with; however, it does not contradict the fact that they probably love what they’re doing.
Adrian
I think you missed the point – it’s not about “free” (many Linux devs are employed anyway), it’s about “in the open”.
He mentions there are still good hackers at Microsoft, and I believe they’re the ones who still keep the company afloat. The founders (Bill Gates) were real hackers in the sense we understand today (even though they charged for their work, they really enjoyed coding). But when that culture was replaced by marketers, Microsoft stopped being competitive and became just another company.
Just like Apple was before the comeback of Steve Jobs. Now that he has passed away they are living on their assets and attacking any company that tries to compete with their products.
Looks to me like this is getting overblown. Personally, after reading the title, I was expecting a technical response to a technical claim. Instead, what I got was a pseudo-socio-political response. Reading it through, it is clear to me that whoever this is wrote it as an offhand rant.
What it comes down to, though, is that MS development is conservatively managed whereas the Linux kernel is not. This can be an advantage: stability for Microsoft, or the many incremental improvements for Linux. When your userbase is in the billions, it makes sense to be conservative. You don’t see major rewrites of significant subsystems in Windows because of this. How many times has the Linux driver ABI changed? How many IPCs are there going to be? SystemD anyone? Etc, etc.
In any case, the development methodologies of both Linux and Windows have their merits. Linux, obviously, is better suited for the high churn and just-recompile-it environment that comes with being an open source project. In fact, this goes hand-in-hand with the just-throw-it-away mentality of the cell phone market. Clearly that market has proven conducive to the Linux model.
On the other hand, Windows allows for many leaf projects (such as commercial games) to succeed that would generally require far too much maintenance under the former. Steam is trying to mitigate this by shipping software outside the main dependency-resolution packaging world. The same is true for a vast majority of new peripherals and their drivers.
In any case, they both have merits.
I was about to make a very similar post, but you saved me the effort. Yes, Microsoft has to compete with open source, and open source has some very tangible advantages… But it’s not black and white – having to answer to a board, working within a schedule, and prioritizing work based on things other than technical merit create a certain level of discipline and efficiency of effort. It’s not always pretty, but it’s not all bad either. Some good things come out of that approach too.
True, but to be fair, the original author of the “rant” was talking from a purely technical standpoint. He was a developer, not a salesperson or corporate exec. And in the product space of software, the technical aspects are what matter the most. You could be the most efficient hacker in the world, but if your software sucks, who is going to buy it?
Having to work within a schedule also encourages (and in some cases requires) corner cutting.
You’re absolutely correct. I’d also like to add that the person who wrote the original post is exaggerating as well. I know quite a few people (across nearly every division they have) who work for Microsoft, and the stories I hear from them paint a somewhat different picture. While Windows development is indeed compartmentalized, there isn’t nearly the lack of communication the OP is trying to claim. There are a lot of great programmers there, including younger ones.
BTW, everyone I know who works there loves it. The only real complaint I hear is from the ones doing contract work who would prefer they just be hired on directly.
1. Metro API vs Win32: the insight explains why they build new things rather than fixing and extending existing ones.
2. Android. Conservative, billions of users, Linux.
You don’t see them in the Linux kernel either. Major rewrites weren’t the point at all. It’s about incremental improvements.
The public API/ABI, like for userland, has not changed in a decade. The Linux kernel has very strict rules against changing those parts. Private APIs change.
dbus and systemd are not kernel but userland. Userland changes, on Windows too.
>> You don’t see major rewrites of significant subsystems in Windows because of this
> You don’t see them in the Linux kernel either. Major rewrites weren’t the point at all. It’s about incremental improvements.
Hmmm… You should know that Linux gets major parts rewritten all the time. The code is always in beta stage; the new bugs are never ironed out before the code is rewritten again. Linus Torvalds said that “Linux has no design; instead it evolves, like nature. Constant evolution created humans, so constant reiteration and rewriting is superior to design.” Just google this and you will see it is true. Big parts are rewritten all the time. I am surprised you missed this. Apparently you don’t read what Linux devs say about Linux.
http://kerneltrap.org/Linux/Active_Merge_Windows
“the tree breaks every day, and it’s becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their […] changes!”
http://www.tomshardware.com/news/Linux-Linus-Torvalds-kernel-too-co…
Torvalds recently stated that Linux has become “too complex” and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is “afraid of the day” when there will be an error that “cannot be evaluated anymore.”
Don’t confuse the lack of a roadmap and rigor mortis with being conservative, because what is occurring right now (and hurting Microsoft) is 10+ years of bumping around in the wilderness with no long term development roadmap. If I as a third party look on, I want to know where Microsoft is heading: what is the role of WinRT? Is there a future for traditional desktop applications, and if so, has development of Win32 ceased in favour of WinRT? Are Microsoft supporting C++11, and if so, what is the roadmap for the implementation of those features so that developers can plan for features emerging? Then there is longevity: are we going to see Microsoft once again bumping around, flinging crap against the wall in the form of pushing out APIs only to kill them off later on when the programmers at Microsoft either get bored or realise that they really didn’t fully think things through?
Then there are views such as those of the former lead of Windows, Jim Allchin, who labelled legacy code as an asset whilst ignoring that it can quickly become a drag on future development; backwards compatibility is good, and when a better way to provide that compatibility exists, such as virtualisation, it should be taken advantage of. Here we are in 2013 and Windows is still suffering from the laundry list of issues I’ve raised in previous posts. One might have forgiven them in the past due to technical limitations, but these days, with virtualisation, there is no reason why development should merely be putting a new coat of paint on a rickety tugboat that has seen better days.
That is complete crap. Microsoft has had no qualms about rapidly deprecating large parts of DirectX; just look at the development of DirectX 1-7 to see that. Game development on Linux is as easy as or easier than on other platforms. The main reason for the lack of titles is developer perception of the userbase and installbase. Loki Games showed over 10 years ago that Linux games could be made easily, but they also couldn’t prove the business case. Only just now is the business side becoming viable in terms of installed numbers etc. The sales figures from Loki alone illustrate this: sell 3000 copies of a game and you’re not going to make a profit on your ports.
It was about a year ago, or was it two, that someone explained Microsoft’s version of decimation – the Roman practice where, under penalty, every group of 10 soldiers would have 9 physically beat the 10th to death.
http://minimsft.blogspot.com/2011/04/microsofts-new-review-and-comp…
The other problem is that they are more concerned with turning it into a DRM system, or meeting some other political-marketing goal, than with something that is engineered well, so even “security” means hard to copy, not hard to exploit. “The browser is part of the OS”: so it is mixed in with DLLs and the kernel. Change the driver model – I don’t know why, just do it so each new rev needs vendors to redo everything.
There is the GMP – the Greatest Management Principle – you get more of what you reward. If you reward the firemen who fix bad-code emergencies instead of the refactorers who ensure there won’t be any emergencies in their work, you will get firemen. If you create performance review games, you will get the best players of those games.
There is no reason they could not continually beat Linux and be better, other than that they don’t want to – they reward other things. Look at how long IE languished until Firefox and the WebKit browsers started gaining enough share. Outlook and Hotmail? Now, since “cloud” is the new buzzword, they are trying to pantomime Google and others. (Apple isn’t doing that well either, but their model broke – they need to have something completely new every year, and some things won’t work, like Ping and Apple TV, and others will take off – but while Google has Glass, Apple…)
In the W8/WP8 death knell, I note they have no W8 Zune – a very inexpensive entry unit that a developer could use for everything not involving a phone. Xbox is too far away from the rest of the ecosystem (hardware, XBLive, etc.) to cross-pollinate. And even with their attempts to lock down Windows 8 with UEFI, and to prevent going retro instead of Metro, they damaged the PC market – it takes rare talent to kill your cash cow. Ballmer is considered the top CEO who should have been fired much earlier.
Just to contrast: Blackberry’s new offerings, from reports, have double the battery life, a browser better than anything on Android, run most Android apps but also have other dev systems, and have really good PC support, security, and a QWERTY model – the predictions of their impending death were exaggerated. It’s not hard, but it requires realizing what you must do when the world is changing around you and your monopoly is being eroded daily.
Then there’s the comic attempt at Office – the “XML” version and their political manipulations to get that piece-of-trash spec, which no version of Office actually supports, adopted as an “open standard”. ODF might be ugly and have gaps, but there’s open source again: KDE, OpenOffice, and LibreOffice manage to make it work.
The checkbook doesn’t work with innovation; Apple tried it with their Maps. I’m not sure what they were thinking with the whole Windows 8 set of things – they look to be trying for ecosystem lock-in like Apple, but without the fanbois, either consumers or developers. They are trying hardware again – so there are lots of calls of “neat”, then people go out and buy something else.
Hopefully they will right themselves before crashing. Apple is already in decline (suing Samsung over silly patents, but Microsoft is doing that to Motorola).
Perhaps they have enough patents and lawyers to freeze technology for a decade here while they live off licensing fees – Xerox did that with the photocopiers for a while.
Blackberry may not be a good example, at all. They are still losing market share, and that is not a good thing in a market which is growing overall very rapidly.
This was an interesting look into the development process within Microsoft. I’m not sure if the person who wrote it was involved in the recent MinWin effort as that was an effort to streamline things. Still, however, it was a break from their normal process which we’ve seen described in similar fashion before.
All that being said, while MS clearly has some flaws in its project management, one should not infer from this rant that the larger *nix world is not susceptible to the same flaws. Accelerated graphics is a perfect example of that, what with Compiz, Beryl, Emerald, Mutter, and KWin all reinventing the wheel the first time around, and then Wayland and Mir (what about X12?) looking to create anew rather than refine existing work. While MS does not fix every bug, because its programmers are not merely trying to scratch their itches, they are forced to do more QA testing and fixing. One can debate Win8, but aside from that, MS does not release half-finished products the way you sometimes see in OSS-land.
I say all this as someone who has a Windows 8 laplet (laptop/tablet), a Suse laptop, and a dual Win7/Ubuntu desktop.
What this guy describes seems hardly isolated to Microsoft. At the company where I work, I’ve seen a lot of talented developers leave because the higher-ups didn’t want to pay them what they were worth, and so they went to work for somebody else who would. In fact, if you’re already in the organization and get promoted, you would make less money in your new position than somebody being hired off the street. I moved from customer support to engineering a few years ago, but I still have a CS title, because apparently it takes an act of congress just to get a real promotion. No, I don’t understand it either.
For this reason, you have the old guard leaving and new recruits coming in. And since very little has been documented over the years, the only real way to know the system well is to have been working there for several years in order to understand why things were set up the way they were. Hell, on some servers, there are apps set up on cron to be restarted once or twice a day and nobody knows why. So they turn off the restarts, find out that apps run out of memory and/or crash midway through the day, so they turn the restarts back on. And then they repeat this process every 2-3 years.
WorknMan,
I think these experiences are universal; heck, no programmer’s manifesto would be complete without addressing them. I’ve only worked with small/medium businesses, but the same kind of motivation problems definitely occur there too. The effort to make software better often goes unrewarded. One of my former bosses said improvements and fixes are a waste of company money because the company only gets paid for new contracts, not for fixing things out of the kindness of our hearts. It can make a good programmer feel very unappreciated, but it’s really just business. Of course, in public, companies won’t admit any of this; employees are at risk even talking about it.
…why are you staying with the company?
I was thinking the same. I know a few guys that are fantastic programmers but work for my old place.
I am not one of the developers, and for my position they actually pay more than they probably should. Plus, it’s a rather large company that’s pretty stable, unlike some of the fly-by-night startups that people are leaving for. I’ve seen some folks head out for greener pastures, only to return 6 months to a year later.
Seems there is and was a lack of documentation.
For clarity, I work at Microsoft, on some of the components the OP is referring to. Prior to Microsoft, I contributed to open source projects. And right from the start, note that these open source projects can’t really be described collectively, since each project has its own culture, process and style.
Have there been things at Microsoft that I’d like to have been able to do and couldn’t? Sure. I don’t think it’s nearly as bad as the poster describes – I’ve been well rewarded for getting things done, and I do things I’m not asked to, including refactoring, all the time. Since each team owns its own code, modifying another team’s code is a social problem, and that doesn’t always go the way I want. But in open source, it’s still not all my code, and the social problem is still there.
Back in the day, I contributed to Gaim (now Pidgin). My experience working with them was fantastic, but it’s since been forked because people can’t agree on the behavior of the size of the input box (!).
I wrote a patch to the linux kernel for my own benefit, and submitted it to lkml for glory. That list is huge, and the patch attracts plenty of review, including valid and constructive feedback for how to improve it. But since it’s my first patch, the feedback requires learning a lot more of the system and building a much bigger feature. This is not a bad process – it results in good code – but it’s not encouraging for new contributors. My patch never merged.
I wrote a patch for Mozilla. This one saddens me more than anything. Specifically, I took an abandoned patch, revived it against current code, polished it, finished it, and submitted it. It got reviewed and rejected; unrelated flaws were found because people were testing the code; it languished, and some part of it got merged. The UI part was against the SeaMonkey UI, and Firefox has never had UI support for it (about:config only). The bug I addressed is still active due to unfinished work, people still work on related bugs, and the most frequent outcome is more incomplete, abandoned patches just like the one I started from. I still get bugzilla emails from complaining users over things there, and am no longer in a position to just step in and fix them. If I were to compare this to Microsoft: at Microsoft you need to convince a team of the need to do something, but at Mozilla you have to do it yourself, including every potential tangent, and do it perfectly. Again, this is not necessarily a criticism – it’s always good to have features that are complete rather than “it works in case X but not case Y” – but it’s not welcoming.
I agree with the OP that fixing an existing technology is often better than writing a new one to add to the side. And Microsoft does that. But so do open source projects – if X can’t be fixed, have Wayland. If UFS can’t be fixed, have ZFS. If NFSv3 can’t be fixed, have NFSv4 (which shares little except a name.) Again, this is not a criticism – whether Microsoft or Linux, this is done because it ensures great compatibility with existing deployments, who don’t need to embrace a disruptive change immediately; the two can coexist while migration occurs. The unfortunate part is it means the installed base, running the older thing, will not be as good as it could be. Open source has an advantage in the nature of its users and deployments which allow the older thing to go away faster than in the Microsoft world, but even there, my systems still have gtk 1.2 installed.
It’s great to hear the OP care so passionately about Microsoft. We do face valid challenges, and I’ll always be open to anyone who is trying to improve the state of my area, but it’s important to note that the engineering issues are shared by everybody. If the OP has great ideas for how to improve performance of filesystems and storage, come talk to me.
Minor side note: ZFS wasn’t written because UFS couldn’t be “fixed”. It was written because fundamentally the classical storage stacks simply did not scale. The project’s scope was much larger than just writing a new filesystem.
You’re being too abstract.
It’s about “Get the f***k away from my code” when the person who wants changes is from outside.
That increases communication costs, increases bug-finding/fixing costs, and increases the division between programmers, managers, and testers…
Bad, bad, bad.
In Linux development things are organized better. You will only hear a statement like that when you do not know what you are doing (and most probably think otherwise).
Well, I’ve never heard that statement or anything remotely resembling it at Microsoft. Most conversations focus on tradeoffs, priorities, consequences of a change, and resource constraints. These are not always obvious to the person proposing a change, who is only concerned with one specific thing.
…Which is another way of saying, it’s not unlike my experience with Linux or Mozilla. All have a similar discourse with people sincerely focused on building the best product possible.
Personally when presented with a good change on one of my components, I’ll gladly just take it – easier that than reinventing it myself.
What’s your take on stack-ranking management and how it aligns with motivation, “out of order” innovation, and improvements?
The “proof” is unverifiable to anyone except those with access to the Windows kernel source code.
Too bad for Microsoft. You had to know that Microsoft is reaching the end. Linux development shows the way to get things done. No commercial entity can compete with that.
Go Linux, go!
A lot of Linux development is done by commercial entities too.
“A lot” might actually be understating it. Commercial entities are involved in practically every aspect of Linux, and that fact is such common knowledge it’s hard to believe anyone would think otherwise – unless they have no clue what they’re talking about.
If that is true, why does the Windows 8 desktop feel faster to me than any Linux distro? Or is this exclusive to servers?
The post is bullshit; the proof is unverifiable outside of Microsoft.
The fact that memory usage has been reduced from Vista onwards pretty much confirms this is bullshit.
The Windows XP/Vista/7/8 desktop is no doubt faster than any X/KDE/GNOME combination.
But if you are talking about Linux as in the kernel, it depends on the measurement.
I haven’t seen benchmarks recently, but Linux will probably beat Windows for raw file I/O. How much of the difference matters in the real world is another question.
The situation is similar to 3D cards. It’s not always about which is faster, it also depends on what you want to run.
http://www.zdnet.com/valve-linux-runs-our-games-faster-than-windows…
Uhhh… I don’t know if I’d want to brag about that, friend. That is like saying “Linux runs DOS better than Windows 7/8”, because Valve frankly has one of the more piss-poor game engines in gaming – it’s not even DirectX 9c yet, and that was released… what, 2006 or so?
Valve hasn’t been a gaming house in quite a while; all their R&D goes into the Steam service. I have no doubt its games run faster on Linux, as both its OpenGL and DirectX paths are way behind the times, and while Linux supports ancient versions of OpenGL, MSFT really doesn’t care about OpenGL, nor about DirectX before 9c – it’s just too old.
Yeah, only that was an irrelevant scenario – a few hundred fps (the Windows DirectX team probably thought, rightfully, that it’s pointless to optimize for such a scenario in DX9). OTOH, Valve wants bad publicity for Windows now that MS, with their app store, is a direct competitor to Steam.
this made my day
(Besides: you guys have systemd, which if I’m going to treat it the same way I treated NTFS, is an all-devouring octopus monster about to crawl out of the sea, eat Tokyo, and spit it out as a giant binary logfile.)
It is simply ignorant.
Systemd is SMALLER than init or any other such program.
It is not devouring anything, it is simply creating replacements for many prior projects. These are all separate binaries, and can be used or left out.
The logging is far superior to anything we’ve had before, and the very fact that it is binary adds security. Before, it was relatively easy for a hacker to simply edit the logs, and the admin wouldn’t know he had been there. Now there are mechanisms in place to verify that a log entry really comes from where it claims to, and it is much harder to tamper with that information.
It is certainly vastly different, but having used it for a while, I would never use another init mechanism.
Imagine: a single tool to initialize everything on the system – not one for bringing up the system, another for timed events, another for scheduling events, another for dealing with events on demand, another for acting on new hardware – with everything logged and tracked in a uniform way, all managed in a uniform way. The sheer number of in-kernel mechanisms it makes easily available to admins is staggering. Systemd actually lets us take full advantage of the platform, rather than remaining confined to 30-year-old mechanisms.
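(To make the “one tool for timed events” point concrete, here is roughly what a cron-style job looks like as systemd units. The unit names and script path are invented for illustration, and directive availability depends on your systemd version:)

# cleanup.service (hypothetical unit)
[Unit]
Description=Clean up temporary files

[Service]
Type=oneshot
ExecStart=/usr/local/bin/cleanup.sh

# cleanup.timer (hypothetical unit)
[Unit]
Description=Run cleanup daily

[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target

Enable it with “systemctl enable cleanup.timer”, and its runs show up in the journal like any other unit’s.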
People dislike change; people like using shell scripts to do things – cool. Systemd can execute any type of script or binary, so it is actually more powerful than simple shell scripting in this regard too. As for change, there is always Slackware – the state of the art in the early ’90s – or any of the BSDs. Everyone else except Debian is moving on, but they’ve never complied with standards anyway.
True, that. And there’s the journalctl tool to display all the system logs, in one place, in human-readable format.
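(A few examples of the filtering that buys you – flag availability depends on your systemd version, so treat these as illustrative:)

journalctl -b # messages from the current boot only
journalctl -u ssh.service # messages from one service
journalctl -p err # only messages at priority err or worse
journalctl --since "2013-05-12" # time-window filtering
journalctl --verify # check the integrity seals on the journal files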
Never complied with standards? In what way? What supporting evidence can you show?
Runlevels are a defined standard, and Debian simply ignored them. Many will say you can create similar functionality – but you can also create similar functionality via targets in systemd, and that doesn’t mean it follows the standard.
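(For reference, systemd itself ships compatibility aliases for the old runlevels, so the mapping is at least explicit; roughly:)

runlevel1.target -> rescue.target
runlevel3.target -> multi-user.target
runlevel5.target -> graphical.target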
There is also the issue of using dash for /bin/sh which results in some oddities.
And good on them for moving ahead and, just like Slackware, getting rid of the obsolete runlevels system.
And actually that’s the de facto standard FOSS way of making progress: a new idea gets implemented and let loose, and if it proves better, it propagates into the other distros as well. If not, it eventually dies out. Maybe not the best way, but still, it works.
They don’t really “ignore” them.
http://wiki.debian.org/RunLevel
http://www.debian.org/doc/manuals/debian-reference/ch03.en.html#_st…
That’s a good point. I had forgotten they did that after Lenny. bash is the default “interactive” shell.
This is bullshit. You can just as easily generate a checksum from text and be just as secure. There’s no reason not to have plain-text logs, especially considering that binary logs require special tools to read and filter.
If you’re creating a checksum from a text file that is already compromised, it doesn’t get you very far.
The systemd journal comes with such a tool, and others can easily be created. Another added bonus is that logs from the entire network can be handled in one place, remotely and in a secure manner, while still retaining the information about where each entry came from. People have done something similar with syslog, but it is a mess; this comes by default with systemd, which makes sense considering most Linux deployments are cluster-based.
I do not understand what the problem is with binary log files. With journalctl they are just as accessible as with cat, but in a more logical way: each cgroup keeps its logs together in an orderly fashion, rather than each process randomly farting info into a file. Further, all the logging streams around Linux are read by journald, instead of what you need possibly being spread across any of 10 files.
If you really want, though, there is no problem with using the journal and syslog on the same system if you absolutely need logs in text files. The journal is not a good enough reason to avoid systemd; you gain far more, in a far cleaner way, by utilizing systemd.
How would the text be compromised? You generate the checksum at the same time as the text. By the time a hacker could even see the text the checksum already exists.
My entire argument is this: Binary formats buy you nothing and only force you to do more work (creating journalctl to do the tasks that could formerly be done using standard tools.) There is no security advantage that can’t be easily gained by adding checksums, which has the additional advantage of not requiring a brand new tool.
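(To make the checksum idea concrete, a minimal Go sketch, assuming SHA-256 and one log line per record. Each checksum covers the previous one, so a casual edit anywhere breaks every later line. Note this is only tamper-evidence: an attacker who can rewrite the whole file can recompute the chain, unless the latest hash is stored elsewhere or a key is mixed in.)

package main

import (
    "bufio"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "os"
)

func main() {
    // Chain seed: all zeroes for the first record.
    prev := make([]byte, sha256.Size)
    out := bufio.NewWriter(os.Stdout)
    defer out.Flush()
    in := bufio.NewScanner(os.Stdin)
    for in.Scan() {
        h := sha256.New()
        h.Write(prev)       // mix in the previous checksum
        h.Write(in.Bytes()) // and the current log line
        prev = h.Sum(nil)
        // The log stays plain text: checksum, space, original line.
        fmt.Fprintf(out, "%s %s\n", hex.EncodeToString(prev), in.Text())
    }
}

Verification is just a second pass that recomputes the chain, and the file stays fully greppable.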
The issue of what can be done with syslog is different. If you say syslog can’t do checksums (easily) then I’ll believe you. However, the solution should not involve switching to binary (for the above reasons and a few additional that are specific to me; I’d be happy to share, but they’re really only important to my use case.)
On an entirely different note: systemd doesn’t give me any benefits. The old init system that Arch Linux used worked just fine for me. I had zero issues with it (in point of fact, I used Arch specifically for its init system.) However, I don’t care enough to avoid it. In that regard it’s exactly like pulseaudio. It gives me no benefit, but it more or less works; so whatever.
satsujinka,
Text versus binary logging depends on what you want to do and what tools you have. Alas you are right that many binary formats don’t provide good tools, and it’s often necessary to pipe textual output out of them so they can be manipulated by text utilities. This obviously has no benefit over having used a text format in the first place.
Assuming that the “binary format” is actually a real database, then I *strongly* prefer querying data from a database over using textual tools to parse semi structured text records. I’ve worked on several projects where we logged activity into the database instead of using text files, and we’ve never found a reason to regret it. We get aggregates, indexing, calculations, sophisticated searching, easy reporting & integration. In certain cases it’s necessary to populate the database by parsing text logs, and it makes me wish that all tools could log into database tables directly.
It’s often trivial to export databases back into text format if you wanted to, but there’s hardly ever a reason to do it since database facilities are so much better.
The database is named filesystem. Binary dumps for logs? That is as stupid as a binary config registry.
On the contrary, putting logs into searchable binary storage like ElasticSearch is great. Grep doesn’t really scale.
Binary is not a good format for the default system logs though.
cdude,
“The database is named filesystem. Binary dumps for logs? That’s more stupid than a binary config registry.”
I’m finding it peculiar that you’d bring up a “named filesystem” database given that it doesn’t apply to logfiles.
With a database, each record exists and is manipulated independently from all other records. You cannot use file system level operators (cd, ls, find, etc) to query log files or manipulate individual records. In order to get the same level of granularity that a database gives us, you’d have to store each “record” or log event in a separate file. Another major difference is that the database records can be indexed such that queries will only read in the records that match the criteria. A text log file on the other hand has no indexes and needs to be fully scanned.
Text processing programs like sed/grep/cut/sort/etc are great tools, but SQL is far more powerful for advanced analytics.
Edit: Also, the windows registry sucks, no disagreement there. But it’s not right to put all databases in the same boat as regedit. The registry has a huge gap in analytical power and structure compared to any real database.
While cdude was being ostentatious, he does have a point, in that, technically, a file system is a graph database…
Skipping over that and, more importantly: there’s no reason why you can’t implement a database on top of text files. There might be some performance penalty due to the difference between a human-readable word and a machine word, but most other issues (e.g. indexing) are just a matter of translating between the binary and what those bytes actually mean.
Of course, with semi-structured text that has little embedded metadata (e.g. syslog’s logfiles), getting adequate performance would be hard. However, I was already suggesting adding checksum metadata, so it’s not really a stretch to imagine that I’m okay with adding whatever other metadata is necessary.
satsujinka,
“In that, technically, a file system is a graph database…Skipping over that and more importantly, there’s no reason why you can’t implement a database on top of text files. Perhaps, there might be some performance penalty due to the size of a human word and a machine word. But most other issues (i.e. indexing) are just a matter of translating from binary to what that byte actually meant.”
I realize all of this; a file system *is* a type of database, and anything with a simple key-value mapping would fit naturally. Moreover, you could re-implement just about any other type of advanced data structure on top of it; however, you’d be reinventing the wheel and would probably end up with something slower, less flexible, and less accessible than SQL.
For SQL users, the actual data format is mostly irrelevant other than performance and integrity reasons. Mysql has a text database engine, but it isn’t as good as the other engines and lacks indexing.
http://dev.mysql.com/doc/refman/5.1/en/csv-storage-engine.html
Generally speaking once you’ve got the data in a structured database you’ll never want to revert to the text processing tools again (sed/grep/cut/etc). The main reason to convert back to text form is for data interchange with others, not for querying or manipulation.
“Of course, with semi-structured text that has little embedded meta-data (i.e. syslog’s logfiles,) getting adequate performance would be hard. However, I was already suggesting adding checksum meta-data; so it’s not really a stretch to imagine that I’m okay with adding whatever other necessary meta-data.”
I’m not sure how much security is gained by checksumming, since if an attacker gained sufficient access to manipulate the logs, it seems they could also have sufficient access to manipulate the checksums as well. This would be true whether in binary or text.
How would implementing an SQL database on top of plain text be less flexible and less accessible than SQL? That is plainly a contradiction.
A CSV variant (i.e. DSV) is already understood by the standard tools. So considering MySQL uses CSV, there’s no reason why we couldn’t implement a query engine that can co-exist with the standard tools. And why not provide that compatibility if we can? After all, for simple searches grep will be easier to use than SQL (simply due to having less syntax.) Is this important? No. I live with systemd. But there’s no reason to isolate our logs from the tools we use with the rest of the system.
Performance issues are a different matter. For log files, there probably won’t be any problem… however, as I’ve said already; you can do indexing on plain text. You just have to add the appropriate semantics to your text format.
As you point out yourself, if a hacker has already compromised your system (such that they can manipulate the logs) there really isn’t much that can be done. It is always possible for them to modify the files; regardless of whether or not they’re plain text, have checksums, or are completely binary. However, consider locks on doors. A lock doesn’t prevent a burglar from getting in. It’s trivial to go through a window instead. However, a lock does keep people from just randomly wandering into your home. Checksums or binary provide this type of security; in that someone who doesn’t know what they’re doing can’t easily remove traces of what they did.
satsujinka,
“How would implementing an SQL database on top of plain text be less flexible and less accessible than SQL? That is plainly a contradiction.”
That’s not what I said. I said if you were to build your own custom database over top of file system primitives, it’s unlikely to be as flexible or accessible as an SQL database. The applications I know of that do use a file system database are quite limited and not even remotely close to being SQL-complete (for example, the postfix mail queue). Anyway, given that all text logging systems to my knowledge use flat files and not a file system database, I’d like us to move past this particular issue.
“A CSV variant (i.e. DSV) is already understood by the standard tools. So considering MySQL uses CSV, there’s no reason why we couldn’t implement a query engine that can co-exist with the standard tools. And why not provide that compatibility if we can?”
The thing is, once you have data in a database, you wouldn’t ever have a need to use the standard text tools to access the data since they’re largely inferior to SQL (unless of course you didn’t know SQL).
I don’t object to your choice of using a text database engine if you want to. CSV is often a least-common-denominator format, which is simultaneously a strength (because it’s pervasive) and a weakness (because it lacks a lot of the more advanced features a database can normally provide). But the choice is yours to make.
“Performance issues are a different matter. For log files, there probably won’t be any problem… however, as I’ve said already; you can do indexing on plain text. You just have to add the appropriate semantics to your text format.”
How do you index a plain text file using standard tools and then go on to query your records via that index? Wouldn’t you need to write customized scripts to build and query the index? It seems to me that you’d need to rebuild custom tools every time you want to do something that SQL has built in.
Your argument seems very confused to me, but maybe I’m misunderstanding you.
I’m going to drop the indexing discussion after this because I’m not sufficiently studied on the topic to explain how a database does indexing. However, if we take file=table and line=row; then I would imagine we can cache rows and mark them with their table (inside the cache.) But as I said, I don’t know what databases do; so this is just my guess. Also I’m not convinced that a log database would have performance issues (as there’s really only 1 record type and logs don’t cross reference each other too much.)
Moving back to the top:
The bold part is what you’re missing, and is why you’re contradicting yourself. You are literally saying that an SQL database is less flexible and accessible than an SQL database. The backend is totally unimportant for non-performance considerations.
See, but there are reasons why you might not want to use a query engine. You list a trivial one (which at least a professional system admin should try to overcome – but not everyone is a professional system admin). Here are some more reasons:
* Because I want to verify that the query engine is returning the correct results. (Query engines have bugs too!)
* Because writing out a full query is more work than grepping for some keyword. (I’m lazy.)
* Because log files shouldn’t exist in some magical land separate from all my other files (e.g. off in SQL land while all of my other files are in CLI land; this can also be read as “CLI is what I reach for first”.)
* Because I don’t want to have to hunt down a database driver just to pick some things out of my logs from within my program.
* Or from the other side of the fence, because I don’t want to have to hunt down a database driver to write some logs for my program.
* Because I want to pipe my results out to some other program (this is more a comment on most SQL query engines than a real limitation.)
And what “advanced” features would apply to a log? There’s only 1 record type. CSV provides sufficient capabilities to handle that.
Consider wikipedia’s CSV page:
Does this not sound exactly like what an entry in a log file is?
satsujinka,
“if you were to build your own custom SQL database over top of file system primitives, it’s unlikely to be as flexible or accessible as an SQL database”
…
“The bold part is what you’re missing, and is why you’re contradicting yourself. You are literally saying that an SQL database is less flexible and accessible than an SQL database. The backend is totally unimportant for non-performance considerations.”
Oh, I see – you modified my quote in order to create a contradiction (please don’t do that again!). This is a bit out of context for log files, which is what we were discussing, but since you’re quite adamant that text files are just as good for implementing SQL databases, I’ll address why I think you are wrong.
You *could* build an SQL database on top of any format you chose. I won’t discourage you from trying it, but unless you create ODBC / JDBC / native data connectors for it, you’d end up with a rather inaccessible SQL implementation. Still, you *could* build all the SQL connectors and have a fully usable SQL database.
Now, conceptually you’re happy, but the implementation details are where problems begin cropping up. Almost any change to records (changing data values or columns) means re-sequencing the whole text file, which is not only inefficient by itself (particularly for large databases), but means rebuilding all the indexes as well. Also, many years of research have gone into the so-called ACID features you’ll find in SQL databases. Consider atomic transactions, foreign key integrity, etc. SQL implementations are designed to keep the data consistent even in the event of a power loss; think about what that means for flat-file storage.
Another issue is that a flat text file makes efficient concurrency difficult: any change one program is making would have to block other readers/writers to avoid data corruption. I think you’ll agree that the entire text file needs one large mutex to guarantee the textual data is in a consistent state. Although Linux has advisory file locking, I don’t think standard tools like grep use it. After all your work to make your “custom SQL database” use a text format, you still cannot safely use standard tools like grep on it without first taking the database offline.
So I ask you now, what is the advantage of having a text SQL engine over being able to export text from a binary SQL database?
The only criticism I can give merit to as a real fundamental problem is if you don’t trust the database implementation to produce reliable results (for fun: I challenge you to find an instance of a popular database having produced unreliable query results on working hardware). For everything else you could export a text copy, and even then I have to reiterate my honest belief that hardly anyone proficient with SQL would want to use the text tools by choice. SQL really is superior, even for simple ad-hoc queries.
“And what ‘advanced’ features would apply to a log? There’s only 1 record type. CSV provides sufficient capabilities to handle that.”
For example, on one production system I import apache logs into a database, index them, and compute per-user aggregate statistics. The database correlates these hits with user accounts and groups them by date to display monthly statistics for our users. These web statistics also get joined to the sales email statistics.
I won’t pretend most log files need this amount of analytical power; they don’t. But it’s still nice to have this ability without first having to write programs to parse the text files and manually compute aggregates. I can write (and have written) perl & php scripts to run similar computations by hand, but the database is the clear winner IMHO. Even if I’m just browsing the data and not manipulating it, I’d rather have a tabular spreadsheet interface than a flat-file one.
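(A sketch of what that buys you, in Go with database/sql; the table and column names are invented for illustration, and the driver choice is just an example:)

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql" // an example driver, not a requirement
)

func main() {
    db, err := sql.Open("mysql", "user:pass@/weblogs")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Monthly hits per user in one declarative query, instead of a
    // hand-written perl/php script that parses and aggregates text logs.
    rows, err := db.Query(`SELECT user_id, DATE_FORMAT(ts, '%Y-%m') AS month,
            COUNT(*) AS hits
            FROM access_log GROUP BY user_id, month ORDER BY month`)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        var user, month string
        var hits int
        if err := rows.Scan(&user, &month, &hits); err != nil {
            log.Fatal(err)
        }
        fmt.Println(user, month, hits)
    }
}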
I do appreciate how cleverly the text tools can be used in a unix shell, but the more I think about it the more I like the database approach. Maybe we need to stop thinking about it as “binary” versus “text”, and think about it as different degrees of data abstraction.
No, I modified your quote to show you where you were misinterpreting me. You were under the impression I wasn’t talking about a text-backed SQL query engine, so I modified your quote because apparently just saying it wasn’t enough.
This is the same issue that journald faces. Of course, I think it’s reasonable to infer that you feel that journald should use an existing database (say MariaDB or SQLite or …)
I felt it was quite reasonable to assume that I was talking about what journald should have done (if it was going to go ahead and reinvent a database anyways.)
All databases have to solve the same challenges. Nothing mentioned here is particularly different no matter your database backend. In short, implementing a database is hard.
Of minor note, no matter the format; changing a column value would not require re-sequencing the file. The rest of the file hasn’t changed so the representation currently cached would still be valid; no need to restore it.
Again, no issues that other formats don’t face. However, you are mistaken about something. You don’t have to lock a file for reading; ever. grep/sed/awk will always work even if a database is currently using a file. The only time you need to lock files is when you’re writing (and that only blocks writers.) So since you shouldn’t ever be modifying your logs… neither the database nor grep/sed/awk needs to lock the files.
It’s easier and more reliable to get to the data, with all that that implies.
I’ve had issues where MS SQL Server doesn’t delete rows from a table, but they also don’t show up in queries… (they did appear in Management Studio though.)
Anyways, I disagree, SQL is not superior for simple queries. I would much rather use grep. We probably aren’t going to agree here.
Libre Office Calc can open CSV files as a spread sheet…
I don’t really think that that’s the issue. I really do think that there’s just some misunderstanding going on. I mean, I haven’t been saying “don’t use SQL” or “the relational model sucks!” I’ve been saying “I want to be able to read my log files with any old tool, but adding in a query engine would be cool too.” The only issue that could arise with what I want is performance. And not only do I think log files aren’t likely to have this issue; but I think that it can be solved without abandoning text.
satsujinka,
“Again, no issues that other formats don’t face. However, you are mistaken about something. You don’t have to lock a file for reading; ever. grep/sed/awk will always work even if a database is currently using a file. The only time you need to lock files is when you’re writing (and that only blocks writers.)”
The difference is that databases normally aren’t designed to have their datastores read/written by external processes while they’re in use, so the problem doesn’t really come up at all. Nevertheless, I do want to point out that even for textual databases, readers do need to be blocked and/or intercepted in order to prevent incomplete writes & incomplete transactions from being seen by the reader.
If you don’t have a database background, you might not realize that transactions can involve many non-contiguous records such that without locking you’d end up with a race condition between the reader / writer starting and completing their work in the wrong order.
“I’ve had issues where MS SQL Server doesn’t delete rows from a table, but they also don’t show up in queries… (they did appear in Management Studio though.)”
It’s not a bug, it’s a feature
In the absence of a confirmed bug report, my gut feeling is that the most likely cause of your problem was an uncommitted transaction. Maybe you deleted and queried the result *inside* one transaction, while from another program you still saw the original records. You wouldn’t see the deletions until the first transaction was committed. This can surprise you if you aren’t expecting it, but it makes sense once you think about it.
When you program in pl/sql, for example, by default all the changes you make (across many datatables and even schemas) remain uncommitted. You can execute one or more pl/sql programs & update statements from your IDE and then query the results, but until you hit the commit button, no one else can see your changes. The semantics of SQL guarantee that all those changes are committed atomically. It’s understandable that awareness of these SQL concepts is low outside the realm of SQL practitioners, given that they don’t exist in conventional file systems or programming languages.
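(The same point as a Go sketch with database/sql; the table name and driver are assumptions. Until Commit is called, a second connection still sees the old rows:)

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // assuming PostgreSQL; any ACID engine behaves the same way
)

func main() {
    db, err := sql.Open("postgres", "dbname=demo sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    tx, err := db.Begin()
    if err != nil {
        log.Fatal(err)
    }
    if _, err := tx.Exec("DELETE FROM events WHERE id = $1", 42); err != nil {
        log.Fatal(err)
    }
    // Right here, every other connection still sees row 42:
    // the delete is invisible outside this transaction.
    if err := tx.Commit(); err != nil { // only now does it become visible
        log.Fatal(err)
    }
}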
“I mean, I haven’t been saying ‘don’t use SQL’ or ‘the relational model sucks!’ I’ve been saying ‘I want to be able to read my log files with any old tool, but adding in a query engine would be cool too.’ The only issue that could arise with what I want is performance. And not only do I think log files aren’t likely to have this issue; but I think that it can be solved without abandoning text.”
The thing is, if your software supports logging into a database interface, you could have a simple NULL database engine that does absolutely nothing except output a text record into a text file. That’s essentially a freebie for you.
The inverse is not true: processes developed to output text files need additional external scripts/jobs for parsing and inserting relational records into the database. There’s also a very real risk that the logging program will not log enough information to maintain full relational integrity, because it wasn’t designed with that use case in mind. Our after-the-fact import scripts are sometimes left fuzzy-matching records based on timestamps or whatnot. If the standard logging conventions had dictated that programs use a structured database record format from the get-go, such ambiguities would never have arisen.
It’s probably wishful thinking that all programs could shift to more structured logging at this point since we’re already entrenched in the current way of doing things. But if we had it all to do over, it would make a lot of sense to give modern database concepts a more prominent primary role in our everyday operating systems.
I guess, but in the case of a log this isn’t really going to be an issue. And this also depends on whether or not reading incomplete transactions is an issue.
Another thing I should point out. I was rather purposefully using “query engine” before. Logs shouldn’t be written to (by anything other than the logger,) so there won’t be any writing being done by the database in the first place. It’s just another read client (like the standard tools would be.)
Again, only if the reader cares what the writer is writing. In the case of a log, in which every record is only written once, this shouldn’t be an issue.
I wasn’t doing the manipulations directly; however, I’m fairly certain the delete was committed (since it was a single transaction spawned by a REST command). I will admit that it may not have been entirely MS SQL Server’s fault, but it was irritating enough that I’d really just rather have direct access to the data.
I may not work with databases all the time, but I do have some experience, so I can pretty much guarantee that I don’t have any issues with the commit/rollback functionality of databases (in point of fact, I’ve been trying to get approval to modify my employer’s web server to not just commit everything it does right away; instead, someone decided that implementing a custom history was a good idea…)
This would be fine. The logs are in text and you can use SQL. That fits my requirements just fine.
This is actually what I’ve been saying. The log should be structured in record format. CSV is a record format so that was the example I’ve been using (it has the added bonus of working with existing tools, but so long as the format can be read by humans I don’t care.) The only additional requirement I have is that the record format should also be human readable.
Hell, the log could be an .sql file for all I care.
I don’t disagree with you. I do like the relational model, even if I’m not fond of SQL.
satsujinka,
“I guess, but in the case of a log this isn’t really going to be an issue. And this also depends on whether or not reading incomplete transactions is an issue.”
We seem to keep floating between two separate topics here 1) logging, and 2) implementing a generic sql database using a text engine. I think I’ve made clear why implementing a full SQL database over text is more trouble than it’s worth. I think you’ve made clear that it could nevertheless work for logging since it’s append only. Personally I wouldn’t see the point in using a special kind of database just for logs, but I’m not denying that it could be done.
Regarding NULL DB engines:
“This would be fine. The logs are in text and you can use SQL. That fits my requirements just fine.”
This would be my preferred solution.
“This is actually what I’ve been saying. The log should be structured in record format. CSV is a record format so that was the example I’ve been using (it has the added bonus of working with existing tools, but so long as the format can be read by humans I don’t care.) The only additional requirement I have is that the record format should also be human readable.”
This is not what I meant. For one thing, CSV’s data escaping rules are non-trivial and require a state machine to generate & parse CSV character by character. Very often I’ve seen CSV data feeds output by trivial means that don’t even escape the data fields at all. Sometimes this problem is not noticed until someone enters a comma on a production machine, causing fields to become misaligned. More importantly though, CSV would be a poor choice because records don’t contain field metadata; the reader has to be programmed with some template to just know what each column means. This ambiguity is unacceptable when we try to insert records into the database. So XML would technically be better, but this isn’t what I meant either.
I think all programs should be using a structured library interface directly without bothering with the text conversion at all. It could be a library call similar to printf, but it would have to be capable of maintaining field metadata. This call would not output text (optionally it could for interactive debugging), instead it would transmit the log record to the system logger.
You, as the administrator, could setup the system logger to take in the *structured data* and do whatever you please with it. You could output text files (whether csv, xml, yaml, json, or whatever suits your fancy), you could record the records in the database of your choosing, you could filter/throw them out without logging, you could even have special triggers to perform various actions as things are happening. This could be highly efficient as there isn’t a need to convert integers to text and back again or scan text values of unknown length as is necessary with a text format.
As a programmer trying to integrate subsystems, I find this far more appealing than having daemons write out text logs and then programming monitoring scripts to parse those logs back into a structural form before being able to do something useful with them. The goal would be for all programs to build on top of interfaces which enable a higher degree of data abstraction. The lowest common denominator would be raised to a data tuple instead of a line of text, as it stands today.
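(A sketch of what such an interface could look like, in Go; all names here are invented. The point is that the program hands over fields, and the sink – text file, database, network forwarder, or a filter – is the administrator’s choice:)

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// Record is a structured log event: named fields instead of a
// pre-rendered line of text.
type Record map[string]interface{}

// Sink is anything that can consume records.
type Sink interface {
    Log(r Record) error
}

// TextSink renders records as JSON lines, purely for human consumption.
type TextSink struct{}

func (TextSink) Log(r Record) error {
    b, err := json.Marshal(r)
    if err != nil {
        return err
    }
    _, err = fmt.Fprintln(os.Stdout, string(b))
    return err
}

func main() {
    var sink Sink = TextSink{} // swap in a database sink without touching callers
    sink.Log(Record{"unit": "httpd", "event": "restart", "pid": 1234})
}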
This is getting long winded, but since it’s relevant: I was actually discussing this topic with Neolander. The vision was not just for logging, but actually to replace all sorts of text I/O streams with data tuples. When I do “route -n” or “iptables -L”, the programs could open a datatuple output stream instead of (or in addition to) a text output stream. Bash could be modified to support these structured data output streams and work with them.
Some examples:
iptables -L # dump human output to console.
iptables -L | spreadsheet # open tuples in spreadsheet
iptables -L | gencsv > file.csv # save tuples as csv
iptables -L | genxml > file.xml # save tuples as xml
iptables -L | genjson > file.json # …
parsexml file.xml | genjson > file.json
iptables -L | mysqlinsert database.datatable # insert tuples into database
Note that in these examples, iptables doesn’t care how the structured data gets used, and the receiving tools don’t care what is producing the data. Unlike today, no parsing would be needed. This would all be possible if we could get away from text as the least common denominator and transition to data tuple based IO. (This is why I said it’s best not to think in terms of “text” versus “binary”, but in terms of data abstractions)
I find these ideas very inspirational and extremely compelling, but I’m not sure if there’s any chance of convincing existing mainstream operating systems to change their way of doing things. If I were still working on my own OS I would certainly do it this way.
Okay, so moving out of the logging topic to databases in general as an organizational system for an operating system.
About CSV vs. XML: considering Golang’s csv and xml packages, the 2 Go files for csv have a combined size smaller than 3 of the 4 Go files for xml (the 4th is approximately the same size as csv’s reader.go). To me this implies that CSV doesn’t have any escaping issues that are particularly harder to solve than XML or JSON (JSON actually has the most code dedicated to it.)
Of course, part of this is that csv provides the smallest feature set. However, comparing similar functionality leads to the same conclusion.
As for metadata: you have to provide a schema no matter what data format you choose. XML isn’t better in this regard; usually you match tag to column name. CSV has a similar rule: match on column position. I know in relational theory the columns are unordered, but in practice the columns are created/displayed with an order; just use that. Optionally, you can write a schema to do the matching. This is actually a better situation than XML, which requires a schema all the time (what do we do with nesting? I can think of 3 reasonable behaviors off the top of my head.)
—
I’m not opposed to this in principle. However, I fear figuring out what this “printf”‘s interface should be will not be so simple. Does it expect meta-data co-mingled with data? Does it take a schema as a parameter? Isn’t “%s:%d” a schema already (one whose fields are anonymous, but paired with scanf, you can write and retrieve records with it)? Also, what should we use for a schema format? Or should we just choose to support as many as possible?
What would these data tuples look like? You’ll need some data to mark where these tuples begin and end, their fields, and their relation. End can double as begin, so only 3 symbols are necessary (but the smallest binary that can hold 3 is 2 bits so you may as well use 4.) If you omit table separators, then you need to include a relation field.
With 4 symbols:
Bin Txt Description
00 – ( – tuple start
01 – , – field break
10 – ) – tuple end
11 – ; – table end
Ex.
(Id,First Name,Last Name)(001,John,Doe)(002,Mary,Jane);
With 3 symbols:
00 – , – field break
01 – \n – tuple end
10 – \d – table end
Ex.
Id,First Name,Last Name
001,John,Doe
002,Mary,Jane
\d
… Hey, wait a minute that’s CSV!
With 2 symbols:
0 – , – field break
1 – ; – tuple break
Ex.
Person,Id,First Name,Last Name;
Person,001,John,Doe;
Person,002,Mary,Jane;
Just to make it clear: I too want a standard format that everything uses. It’s just that saying “use tuples” ignores the fact that we still have to parse information out of our inputs in order to do anything. You do go on to say “redesign bash to handle this”. I assume you also mean “provide a library that has multiplexed stdin/stdout” as you also have to write and read from an arbitrary number of stdin/stdouts (as corresponds to the number of fields.) Alternately, you could shift to use a byte code powered shell (so that all programs use the same representations as the shell and can simply copy their memory structures to the shell.)
satsujinka,
“About CSV vs. XML: considering Golang’s csv and xml packages, the 2 Go files for csv have a combined size smaller than 3 of the 4 Go files for xml (the 4th is approximately the same size as csv’s reader.go). To me this implies that CSV doesn’t have any escaping issues that are particularly harder to solve than XML or JSON (JSON actually has the most code dedicated to it.)”
This methodology really isn’t sound, but I don’t really want to get into it.
“As for metadata: you have to provide a schema no matter what data format you choose. XML isn’t better in this regard; usually you match tag to column name. CSV has a similar rule: match on column position.”
XML provides self-defining metadata (the names and possibly other attributes), where as CSV does not. It’s illogical to me for you to disagree, but let’s just move on.
“However, I fear figuring out what this ‘printf’s interface should be will not be so simple.”
It doesn’t really matter for the purpose of this discussion, whatever makes it easiest in the context of the language the library is being written for.
“What would these data tuples look like? You’ll need some data to mark where these tuples begin and end, their fields, and their relation. End can double as begin, so only 3 symbols are necessary (but the smallest binary that can hold 3 is 2 bits so you may as well use 4.) If you omit table separators, then you need to include a relation field.”
You’re still thinking in terms of text with delimiters, but the whole idea behind the tuples is to use a higher-level abstraction. Think about how a class implements an interface: you don’t have to know how the class is implemented to use the interface.
“It’s just that saying ‘use tuples’ ignores the fact that we still have to parse information out of our inputs in order to do anything.”
No, you as a programmer would be using the higher-level abstraction of the tuple without caring about the mechanics used to implement it. You keep thinking in terms of programs parsing text streams, but with the tuple abstraction you can skip the intermediary text conversions entirely. You only need code to convert the tuple to text at the point where text is the desired form of output, like in the shell or a logfile. I’m not sure you’re understanding this point.
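(To make the interface/implementation split concrete, a Go sketch; the interface name and the CSV backing are both just illustrations:)

package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "log"
    "strings"
)

// TupleReader is what programs would code against; how tuples are
// framed on the wire is the implementation's business.
type TupleReader interface {
    ReadTuple() ([]string, error) // one record's fields, or io.EOF
}

// csvTuples is one possible implementation, backed by plain text CSV.
type csvTuples struct{ r *csv.Reader }

func (c csvTuples) ReadTuple() ([]string, error) { return c.r.Read() }

func main() {
    var in TupleReader = csvTuples{csv.NewReader(strings.NewReader(
        "001,John,Doe\n002,Mary,Jane\n"))}
    for {
        t, err := in.ReadTuple()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(t) // the consumer never sees commas or quoting rules
    }
}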
Going back to your interface/class metaphor. I’m not the guy using the interface. I’m the guy writing the interpreter/compiler/VM that dispatches to the correct class on a particular method call from an interface. In which case, I do care how things are implemented, because I want to implement them!
As it stands, there is no hardware notion of a tuple. We just have data streams. So we either have to delimit the data or we have to multiplex the stream. If there are other options then please let me know, but “use higher abstraction” is not a means to implement a system.
If you’re not interested in discussing how to implement a relational I/O stream abstraction (which we already both agree would be nice,) I guess there’s really nothing else to talk about.
Moving back, my methodology is perfectly sound. I was not trying to show CSV is easier to parse. I was disproving your claim that CSV is particularly hard to parse. The fact that 2 common formats (1 of which you recommended) take more work to parse than CSV, in any general-purpose language, is a sound disproof of your claim.
Next:
XML does not and cannot provide self-defining metadata. Consider your example of XML providing names. What metadata do those names provide? What metadata does <contains>…</contains> provide? That something is contained? What if it’s a type declaration? (x contains this type.) In order to answer the question of what metadata is present, we need some context, and if we need a context, then by definition our metadata is not self-defined. This is true of all data formats. Within a context it does make sense to say that some data has a meaning, but outside of it? No.
So back to what I originally said: XML is matched on name and CSV is matched on position. This is how we determine meaning for these two formats. Metadata follows this rule too. In XML we specify metadata with a known name (contains?) and in CSV we specify metadata with a known position (2nd?)
satsujinka,
“If you’re not interested in discussing how to implement a relational I/O stream abstraction (which we already both agree would be nice,) I guess there’s really nothing else to talk about.”
I’ll play along, but I must emphasize that any implementation can be swapped out and all programs using the record-I/O library would continue to work, oblivious to how the library is implemented. If you’re wondering why I’m being such a stickler here, it’s because I don’t want to hear you criticizing an implementation because it happens to be binary. That’s like criticizing C for doing calculations in binary under the hood when you want your numbers to be decimal. Binary is used under the hood because it’s more efficient; it doesn’t affect your program’s I/O.
“As it stands, there is no hardware notion of a tuple. We just have data streams. So we either have to delimit the data or we have to multiplex the stream. If there are other options then please let me know, but ‘use higher abstraction’ is not a means to implement a system.”
We actually have much more than data streams; we have a wealth of data structures as well, which can be transferred via shared memory pages to create much more powerful IPC than simple data streams. These aren’t used often because of 1) the lack of portability between platforms, 2) the lack of network transparency, and 3) the fact that many developers never learned about them. Nevertheless, I just wanted to bring it up to point out that IPC isn’t limited to streams.
“Moving back, my methodology is perfectly sound. I was not trying to show CSV is easier to parse. I was disproving your claim that CSV is particularly hard to parse.”
Regarding CSV files in particular (as documented at https://en.wikipedia.org/wiki/Comma-separated_values): they have several issues which your examples completely omitted. One would like to implement a trivial CSV parser this way:
– Read one line to fetch one record
– split on commas
– add fields to a name value collection (using external knowledge about field names)
This is easy, but it breaks on legitimate CSV input files.
1. Quoting characters may or may not be present based on value heuristics.
2. The significance of whitespace is controversial & ambiguous between implementations (i.e. users often add whitespace to make the columns align, but it shouldn’t be considered part of the data).
3. There are data escaping issues caused by special characters that show up inside the field (quotes, new lines, commas). These need to be quoted and escaped.
This is especially troubling because the main record-fetching logic has no idea whether a particular newline indicates the end of a record or is a data byte, without knowing the quoting context in which it showed up – which is even more complicated once you consider that quote characters themselves have unique rules.
Ideally you could identify record boundaries without any knowledge of field level quotation, which is a very serious deficiency for CSV IMHO.
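(The failure mode is easy to demonstrate in Go: splitting on commas is the trivial parser, encoding/csv is the stateful one:)

package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func main() {
    // One record, two fields; the second field legitimately contains a comma.
    line := `alice,"hello, world"`

    // Naive parser: split on commas. Wrong – it yields three fields.
    fmt.Println(len(strings.Split(line, ","))) // 3

    // Real CSV parser: tracks quoting state. Right – two fields.
    rec, err := csv.NewReader(strings.NewReader(line)).Read()
    if err != nil {
        panic(err)
    }
    fmt.Println(len(rec), rec[1]) // 2 hello, world
}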
It turns out that XML is somewhat easier to parse without a fancy state machine, because the characters used in specifying the records/fields are never present inside text fields. I’m not saying XML is the right choice – its support for rich multidimensional structures makes it complicated for other reasons. But for the sake of argument, just consider this subset:
<record a="a" b="b&quot;" c="b&lt;"/>
When the parser reaches a “<” the parser can ALWAYS find the end of that node by searching for “>”.
When the parser reaches a quote, it can ALWAYS find the end of the string by finding the next quote. It’s trivial because special characters NEVER show up in the data. This is much easier than with CSV.
As far as escaping an unescaping values, here’s a trivial implementation of that:
// escape a string (ampersand first, so later escapes aren't double-escaped)
str = replace(str, "&", "&amp;");
str = replace(str, "<", "&lt;");
str = replace(str, ">", "&gt;");
str = replace(str, "\"", "&quot;");
I hope this example helps clarify why CSV is awkward to implement for arbitrary data.
“You do go on to say ‘redesign bash to handle this’. I assume you also mean ‘provide a library that has multiplexed stdin/stdout’ as you also have to write and read from an arbitrary number of stdin/stdouts (as corresponds to the number of fields.)”
Hmm, I’m not sure quite what you’re saying, but I was saying that since all applications would be communicating via data records, a shell like bash could receive these records and then output them as text. When piped into a logger, the logger would take the records and save them using whatever format it uses. The under-the-hood format, and even the specific IPC mechanism used by this new record-I/O library, could be left unspecified, so that each OS could use the mechanism that works best for it.
Now, the moment you’ve been waiting for… My implementation would probably use a binary format with length prefixes (aka Pascal strings) so that one wouldn’t need to scan through text data AT ALL (this eliminates 100% of quoting & escaping issues). Moreover, the parser would know the length of the text right away, without having to scan through its bytes first. This way it could allocate new string variables of the perfect size. Without the length prefix, you’d have two options: 1) use one pass to determine the length first, and then another to copy the data, or 2) guess the size of the string needed, then dynamically grow it when it overflows.
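(A sketch of that framing in Go; the 4-byte big-endian length per field is my assumption, not a spec:)

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
    "log"
)

// writeField emits one length-prefixed field: 4-byte length, then the bytes.
func writeField(w io.Writer, field []byte) error {
    if err := binary.Write(w, binary.BigEndian, uint32(len(field))); err != nil {
        return err
    }
    _, err := w.Write(field)
    return err
}

// readField does the inverse: no scanning, no escaping, and the buffer
// can be allocated at exactly the right size up front.
func readField(r io.Reader) ([]byte, error) {
    var n uint32
    if err := binary.Read(r, binary.BigEndian, &n); err != nil {
        return nil, err
    }
    buf := make([]byte, n)
    _, err := io.ReadFull(r, buf)
    return buf, err
}

func main() {
    var b bytes.Buffer
    for _, f := range []string{"001", "John", "Doe"} {
        if err := writeField(&b, []byte(f)); err != nil {
            log.Fatal(err)
        }
    }
    for i := 0; i < 3; i++ {
        f, err := readField(&b)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s ", f)
    }
}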
I hope this post is clear because I don’t think we have much more time to discuss it.
There’s always time to discuss, even if the article gets buried. Just go to your comments and access the thread from there.
So would you have 3 length prefixes? Table, record, field? Or would you have table be a field of the record (allowing arbitrary records in 1 stream).
Ex w/ 3 length markers; numbers are byte lengths.
2 2 3 (3 bytes of data for field 1) 5 (5 bytes for field 2) 6 (record 2 field 1) 2 (record 2 field 2) (end of stream)
Ex w/ 2 markers. Same data, note that the number of fields is required for each record now. This adds (tableIdbytes + 1) * numRecords bytes to the stream.
3 5 (rec.1 table) 3 (rec.1 field1) 5 (rec.1 field2) 3 5 (rec.2 table) 6 (rec.2 field1) 2 (rec.2 field2)
Interestingly, the data could be binary or text here. While the prefixes have to be read as binary, everything else doesn’t (and in most languages while(character-- != 0) makes sense…)
These are issues with arbitrary CSV-like formats. Given the opportunity to standardize (which we have), it’s straightforward to mandate escapes (just as XML does) and declare leading whitespace meaningless.
While XML only has to escape quotes within quotes, commas can be ignored while in quotes, and newlines can be escaped with "" (doubled quotes are usually the quote escape, but I’m swapping that with newline, because there’s another perfectly good quote escape, "\"", and this way I preserve record = line.)
You keep saying that a “fancy state machine” is necessary, but XML requires 1 too. XML has quotes that need escaping so you still need a state machine to parse XML.
Now having said that, I wouldn’t use CSV in the first place. I’d use a DSV with ASCII 31 (unit separator) as my delimiter: it’s a control character, it has no business being in a field, so it can simply be banned. Newlines can be banned too, as they’re a printing issue (appropriate escapes can be passed as a flag to whatever is printing.) Which leaves us with no state machine necessary. The current field always ends at a unit separator, and the current record always ends at a newline. Current tools are also able to manipulate this format (if they have a delimiter flag.)
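(Concretely, in Go: 0x1F is the ASCII unit separator, and encoding/csv happily takes any rune as its delimiter:)

package main

import (
    "encoding/csv"
    "fmt"
    "os"
    "strings"
)

func main() {
    const us = '\x1f' // ASCII 31, unit separator

    // Writing: one record per line, fields split on US instead of commas.
    w := csv.NewWriter(os.Stdout)
    w.Comma = us
    w.Write([]string{"001", "John", "Doe"})
    w.Write([]string{"002", "Mary", "Jane"})
    w.Flush()

    // Reading back: if US is banned from field data, there is no
    // quoting state to track at all.
    r := csv.NewReader(strings.NewReader("001\x1fJohn\x1fDoe\n"))
    r.Comma = us
    rec, _ := r.Read()
    fmt.Println(rec)
}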
Back to multiplexing:
Basically, I was saying that one option is to pass tuples over n stdouts, where n is the number of fields in the tuple. These stdouts have to be synchronized, but this gives you your record’s fields without having to do any escaping (as stdout n is exactly the contents of field n). So say you have
RELATION ( SOMETHING, SOMETHING_ELSE );
When printed this gets broken in half and sent to 2 stdout.
SOMETHING -> stdout 1
SOMETHING_ELSE -> stdout 2
The reverse is true for reading. Basically, we’re using multiple streams (multiple files) to do a single transaction (with different stream = different field.)
satsujinka,
“There’s always time to discuss, even if the article gets buried. Just go to your comments and access the thread from there.”
If I’m not mistaken, the article will become locked in a few hours.
“So would you have 3 length prefixes? Table, record, field? Or would you have table be a field of the record (allowing arbitrary records in 1 stream).”
Just for records and fields.
“These are issues with arbitrary CSV-like formats.”
There’s no way to avoid the quoting problem with CSV, though, without using proprietary conventions. For example, I’ve seen data feeds that substituted “[NEWLINE]” for newline characters just to simplify the record parsing issues. I wonder what that developer intended to be used to convey text actually containing “[NEWLINE]”? Haha.
This is why the XML character escaping is better: special characters like < and > don’t contain themselves when they’re escaped (&lt;, &gt;). This gives the high-level XML parser the freedom to parse the XML structure without regard to false positives of these symbols showing up in the data, which simplifies it tremendously.
“You keep saying that a ‘fancy state machine’ is necessary, but XML requires 1 too. XML has quotes that need escaping so you still need a state machine to parse XML.”
Take another look at the example I gave and see that you can trivially find the matching “<” and “>” without any regard to the text contained within, BECAUSE “<” and “>” are always escaped and NEVER show up in the data. EVERY SINGLE occurrence of these symbols in XML is structural. Once you find “<”, you can automatically do indexof(“>”) to get the matching close – no exceptions to the rule. You cannot reliably do this with CSV, because it depends on context.
“I’d use a DSV with ASCII 31 (unit separator) as my delimiter: it’s a control character, it has no business being in a field, so it can simply be banned. Newlines can be banned too, as they’re a printing issue (appropriate escapes can be passed as a flag to whatever is printing.) Which leaves us with no state machine necessary.”
This is a bit shortsighted, though. You cannot just tell programmers their variables cannot contain certain bytes, like newlines. That shifts the escaping problem onto them, based on implementation details they shouldn’t have to worry about. This isn’t very relevant to what I wanted to discuss in the first place, and I have to get going, so if you’re still not convinced, I guess we may have to leave it there.
Oh yeah, security through obscurity. That ALWAYS works out great. syslog can already log to a remote host, and for your other log files you should already be using something like logstash or graylog2 if security and manageability are a concern.
See, this is what I *don’t* like about systemd: it does too much. We already have cron and at for scheduling, we have udev for hotplugging, and there are already many good solutions for managing logging. systemd should stay the hell out of these areas and focus on the one area that does need solving: service management.
Personally I much prefer the Upstart approach of focusing on a small set of problems that need solving.
Security through obscurity? Just because some firm does not base its operations on the principles of the communist manifesto does not mean that the products it produces are insecure. Put another way: just because they choose to sell their product and you don’t get to see their source code, it does not mean the product relies merely on obscurity for its security. Logic is your friend. Closed-source software relies on good design and engineering for security as much as anything. They also receive bug reports, so when a bug is found, they can fix it. You need not see the source code.
What the heck are you talking about?
We are talking about systemd’s binary logging and the (lack of) security it provides, not about closed-source software.
There is no more udev; it is part of systemd – which is a good thing, because the system management daemon should be managing the system. Why is it good to have at and cron around, as well as xinetd, when all they’re doing is managing services? The problem is that these each use different methods and are utterly incompatible, so we are increasing the learning curve for no benefit at all.
Sure, systemd has a learning curve at first, but as you get used to it, it just makes sense.
Upstart is crap; there is no real benefit to it over sysv at all. The main problem is that it still basically uses scripting for everything, so there are still something like 3000 fs calls when bringing up the system. Compared to systemd’s roughly 700, that is simply too much – systemd needs to come down some more too, but this figure includes everything up to a GNOME desktop; when gnome-session starts to use systemd user sessions, it will come down drastically.
As others have said, fs access on Linux is not great, so the fewer times we access the disk, the better, and the faster the system will come up.
Honestly, I still don’t even understand Upstart; its language is just broken. I keep trying to look into it, but the more I do, the less I like it. Systemd uses the file system in a logical way for booting the system, and gives us more access in the /sys dir to many of the features of the system. Upstart gets us nowhere because fundamentally it is only a redesign of init; it is not something substantially new.
You seem to think this is a good thing, but systemd simply results in a cleaner and easier to understand system. By removing many things like at and cron and syslog and logrotate and all these programs that do not communicate with each other, we end up with a more integrated base system. For me, this is a good thing, it is a miracle all these projects have managed to coexist for so long at all.
By moving all these into one project (with many binaries for each thing, because modularity is important for parallel booting) we now get a consistent API for all events on the system, whether hardware or software crashes or whatever, it all becomes predictable. Everything is handled in the same way system wide, and there are no more obscure config settings to learn depending on exactly what you’re trying to achieve. This is a huge benefit to Linux, others moved to similar approaches a long time ago.
Couple this with the fact Upstart has a CLA, and systemd becomes the only intelligent choice. Canonical does not have the Open Source community’s best interests at heart, so their projects will not be touched by anyone outside Canonical. You essentially fall into the same trap as with any proprietary company: you become utterly dependent on a single company for all issues that might arise.
Canonical will be replacing udev in UnityNext, and haven’t been upgrading it in their repos for a couple of releases now. They are discussing replacements for NetworkManager; they want their own telepathy stack. They will be competing head on with not just Red Hat, but people like Intel and IBM – all these companies that are heavily invested in Free Software. Canonical are fooling themselves if they think they can compete by doing everything themselves. They simply lack the competency.
To date, the only things Ubuntu have actually done themselves are a Compiz plugin that was quite broken for most people, an init system whose initial developer left, and a few bloated APT frontends. Below this, they utterly depended on Novell and Red Hat for everything. Now they want to replace all this; they want to control everything themselves. Everything good about Open Source is simply lost on Canonical.
For some reason, they are praised because everything “just works”, but it works because of work done by others. It honestly makes me sad that the praise is so misplaced. In fact, Ubuntu are mostly responsible for making things not work, for breaking others’ work, because they don’t actually understand the software they are shipping. To this day, Lennart gets a bad rap because Ubuntu devs didn’t understand PulseAudio and so shipped a broken config.
Of course, Ubuntu is heavily used, so a broken config in their shipped release gave people a bad opinion of PulseAudio. Another example is the binary drivers constantly breaking on upgrades; it makes Linux look bad because users aren’t really made aware of the issues before something breaks. Canonical just make horrible decisions throughout the stack, and this harms Open Source development, because people just accept proprietary, poorly maintained software instead of pressuring these companies to play nicer with Open Source.
This is my real problem with Ubuntu: they simply don’t care about Free Software, they do not care about Open Source, they just want to benefit from it. They think they are being slandered in upstream projects because their code is rejected, but the code is rejected because it is bad code. Now they want to rewrite everything; they want the entire stack to simply comply with their tests. They cripple developers; they make it OK to ship poor code provided it meets the testing requirements. Open Source is innovative because of its dynamic nature, and Canonical are ridding themselves of this benefit.
Wow, how horrible. I’m guessing it’s not only Ubuntu that will look at alternatives to systemd and udev then. Not everyone is happy about being at the mercy of RH and Lennart.
These programs have no need to communicate with each other. at/cron perform completely different functions from syslog and logrotate. Putting all these disparate functions into the same daemon is pointless and doesn’t solve any real-world problem.
It’s pretty obvious that you never worked with Upstart. It solves all of the problems with sysv (PID files, self-backgrounding daemons, no automatic restarts, etc.) and it does so without completely taking over your system.
You know, like the old Unix philosophy: do one thing and do it well.
Uh, no it doesn’t. It CAN use scripting if needed, but most Upstart config files are just configuration statements and one exec statement for the binary.
I don’t see what’s so hard about it; it’s a simple config file format. Here, I’ll show you a very simple Upstart config for an imaginary service:
description "myservice"
start on runlevel [2345]
stop on runlevel [016]
respawn
console log
exec /usr/bin/myservice_executable
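(For reference, the rough systemd equivalent is about as terse – a sketch mirroring the imaginary service above; the Restart= and WantedBy= values are typical choices, not anything from this thread:)

[Unit]
Description=myservice

[Service]
ExecStart=/usr/bin/myservice_executable
Restart=always

[Install]
WantedBy=multi-user.target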
Right, because Red Hat, IBM, Novell etc. have the community’s best interests at heart…
Considering that they all do different things and at no point need to communicate with each other, it’s really no miracle at all.
Well, I for one welcome competition and diversity.
The main reason distributions are adopting systemd is that it means they don’t have to develop their own initscripts.
Most contributors to systemd are from outside Red Hat.
Udev was already a Red Hat project.
Basically every distro that matters except Ubuntu is likely to move to systemd. Debian are looking at it quite heavily, and the only competent developer at Canonical is even a heavy contributor to systemd, so perhaps they will finally switch too at some stage.
What is clear is that systemd is the way forward, whether you like it or not. I think if you actually use it, you will see why people like me find arguments against it to be absurd.
You don’t think that it is absurd to have 4 binaries essentially doing the same thing in different ways, rather than a single uniform method? It is just duplication of effort, and what is hilarious is each is about the size of systemd. There is no necessity when a single binary can accomplish the task each is trying to fulfil in a cleaner way.
Its “PID files” are not nearly as good as cgroups. All daemons are backgrounded; that is sorta the point. Have you actually worked with Upstart, though? It seems to me you haven’t; you just know that your Ubuntu desktop uses it and seems to do fine. Try to configure one of its event files and you will understand why people dislike it.
Systemd does do one thing, and it does it well. It manages services, but it is no longer just a replacement for init. I don’t think you are grasping the fact that systemd does not consist of only one binary. There are a few to do different things, and these are actually more Unix-like than the prior alternatives. Things like hostnamectl or timedatectl are tiny binaries that do exactly what you think they do.
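For example (both are real systemd utilities; the values passed are illustrative):

$ hostnamectl set-hostname buildbox
$ timedatectl set-ntp true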
The fact that you want to limit that one thing to just bringing up the userland is where we will not agree. A modern system is more dynamic than this, the central process on the system should be dynamic too. Upstart tried to address this, but its fundamental design is stupid. Instead of focusing on the end result, it just runs every daemon that could depend on something lower down. Want to start NetworkManager? Cool, you’re going to get bluez and avahi and whatever else could possibly use dbus that is installed because NetworkManager brought that up. This is broken and simply retarded.
Try looking at the actual event files; those hooks work with the event file, which is still just a series of scripts. Browse around /etc/events.d or whatever; you will notice they are scripts doing the same sorts of things as old init files. The fact that this is abstracted behind a simple config format is beside the point.
Yes, they do, because these companies understand that without a strong community they would limit their own capabilities. The whole point of the Intel and IBM investment is that they don’t want a single software vendor to control their destiny. They each understand the benefit of Open Source deeply; they each understand they need each other, that their future depends on collaboration.
Canonical seem to think quite the opposite, that just throwing code over a wall is fine, that community is irrelevant. This is not how Open Source projects succeed, you need a strong community around the software, innovation comes by many contributors pulling the code in their own directions. Canonical seem to have gone out of their way to ensure developers are not interested in contributing, thus making it safe to throw the code over the wall.
Couple this with the inclusions of things like the Amazon scope, and you see that Canonical are only interested in money. Red Hat must make money, they are a publicly traded company, but they are adamant about Free Software at the same time. This is why people trust them, as much as Canonical is harping on about being the leading guest in the cloud, those clouds are running Red Hat as the host OS for a reason.
People use Ubuntu because it is free, companies use Red Hat because they are competent.
They don’t, though – they all manage services. If you don’t think logrotate should communicate with rsyslog, if you don’t think rsyslog needs to communicate with each daemon to ensure correct information, you are fooling yourself. We have tried to standardize these things so they can better talk to each other, but it has just been a failure. If you had any real experience maintaining a Linux network, you would understand the frustration of the unpredictability of logs – everything seems to decide arbitrarily what method it will use to log things.
This all goes away with the journal: it supports all these methods and keeps them sane, then offers a new API which is far cleaner and better defined. All of this with the intent of at least offering something as good as what Solaris and Apple have had for over a decade. Instead we are basically in the same situation we were in in the 80’s on many Linux distros.
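For instance, once everything lands in the journal, queries that used to mean grepping several differently formatted files become one-liners (real journalctl invocations; the unit name is illustrative):

$ journalctl -u sshd --since today    # everything one service logged today
$ journalctl -b -p err                # all errors since the current boot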
There are not enough developers in the Open Source ecosystem to justify such lack of teamwork. The app story on Linux is bad enough, but now Canonical are defining their own developer environment.
Personally, I am glad everyone else is just moving their apps to the web; the native Linux story for client stuff is embarrassing. Please do not bring up how Canonical are standardizing on Qt/QML either – while KDE/RIM/Sailfish are technically each using this technology, the very nature of QML is causing a lot of problems. Each developer environment will be fundamentally different; APIs will not translate. It is not standardization, it is even more proliferation.
Qt is another project that uses a CLA, so that is not an option for a company that cares about Free Software and about user advocacy. Perhaps it would be better to drop GTK and have the big guns all move to EFL (Intel and Samsung are both heavily invested there); at least it seems to be gaining momentum with a clean stack. Then we have to rewrite a lot of apps all over again, though. It is really quite sad what is happening today in Linux clients.
There are a lot of Linux developers, but those are all working on web platforms, cloud platforms, everyone can clearly see the client platform outside of Android is a joke and it is getting far worse. I blame Canonical for this, they gave people permission to fork Gnome Shell by creating Unity – the most ironic name ever for a software project. If they hadn’t done this, Cinnamon and the other forks wouldn’t have happened, there would be less proliferation. Instead everything is a mess, and it is getting worse.
They don’t do the same thing. Really. They do 4 different things, well 3 if you combine cron and at.
cron/at and syslog have NOTHING in common. They do NOTHING that is the same. Really, I find it odd that you’d think they do.
What’s absurd is combining these 4 (or 3) apps that do completely different things into one.
There’s no events.d that is part of upstart.
If you think they do you’re seriously deluded. They’re looking out for their own interests, nothing else.
Uh, no they don’t. NONE of them manages services. Cron schedules *jobs*, syslog writes log messages, logrotate rotates log files. They do not manage services.
That’s what APIs are for, and there already is one for syslog.
I do, and logging is a problem to which there are many good existing solutions, most of them better than the proposed systemd solution. Which you would have known if you had “real experience managing a Linux network”.
What is the purpose of syslog without something to monitor? What is the purpose of at, cron, init, or xinetd without something to start and stop?
As opposed to having 4 utterly different codebases, and all the duplication of effort that implies?
Now you just look ignorant.
$ cd /etc/events.d
Look at the files in there.
It is in their best interests to work with the community, this is what you’re missing.
What is a job if not a service? You seem to have a very strange definition of what a service is.
If you don’t think logging is a part of service management, I don’t even know what to tell you. If I cannot keep track of services, management itself is simply impossible.
You’d think so, right?
You’d be mistaken, there are attempts to standardize the format but there is no API definition. Essentially, things are just farting text out of std[out,err] and syslog is throwing that raw into a file. It simply doesn’t care what that info is, how it is formatted, nothing.
Please show me a solution which is as seamless as journald over the network. They are all hacks which try to address the shortcomings of syslog.
syslog doesn’t “monitor” anything. It receives log messages from other processes and either writes them to a log file, or sends them across a network to another syslog.
at/cron runs processes at specific times.
xinetd listens on sockets and forks processes as needed.
Not even close to the same thing.
Kubuntu 12.10:
$ ls /etc/events.d
ls: cannot access /etc/events.d: No such file or directory
Huh, so if Canonical is using upstart, and events.d doesn’t exist on a Canonical system, wouldn’t that imply that events.d is not part of upstart?
Inaccurate, nothing ever sends anything to syslog, thus to say it receives something is erroneous. The correct word is monitor, it monitors stdout and stderr and throws that into a text file when appropriate.
This is the fundamental problem with syslog and init/upstart: there is no defined API for this; nothing is happening on purpose. The purpose of syslog, more than anything, is to try to decide what actually needs to go to a text file.
at() starts processes after a certain amount of time.
cron() starts processes based on a specific schedule.
Neither even attempts to monitor the processes after they are executed.
It does not fork at all, it starts processes based on sockets being requested and stops the process when it becomes idle.
This is a good example of something the current mechanisms fall short at. Why aren’t at/cron/init also monitoring those things they’ve executed? Why are they simply left to their own devices once they are executed? Why are they executing anything which isn’t actually necessary at this time?
xinetd() is great, it should never have been kept outside init itself.
How so?
Again, they are starting and stopping programs, is it valid to have a whole other mechanism because you want execution based on a schedule, or periodically, or simply when they’re needed? Should these not all be part of the init system since they are fundamentally just initializing programs? What exactly is so different about them that they merit being separate?
Meh, I guess it is /etc/init now.
I am not even sure what point you think you’re making here. Sorry for my error; the point stands that the event files are scripts, which create huge amounts of I/O. It is why the boot and shutdown processes are so slow on Ubuntu; it is irrelevant what improvements they’ve made, because they still depend on filesystem speed.
By using declarative unit files and compiled binaries, as systemd does, this is no longer an issue. Instead we depend only on the speed of RAM.
That’s not how syslog works. Syslog listens on a socket, unix or inet, for log messages. Anything written to those sockets that is in the correct format will be handled by syslog. Syslog does not monitor any processes.
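For example, any process can hand syslog a message explicitly via the standard logger(1) utility, which writes to that socket (the tag and text here are illustrative):

$ logger -t myservice -p daemon.info "service started"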
Yes, that’s what he said. (x)inetd forks processes to handle network requests.
The API and message format of syslog is defined in an RFC (can’t remember which one, it’s late…).
Cron monitors the processes it runs and will send a notification if anything is written on stdout or stderr. Granted, standard cron can certainly be improved in this respect (you can use cronic for better control, for example) but merging it into the systemd process is not the best solution.
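For instance, a stock crontab already mails anything a job writes to stdout/stderr to the MAILTO address, or to the crontab’s owner by default (the address and path here are illustrative):

MAILTO=ops@example.com
*/5 * * * * /usr/local/bin/healthcheck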
Note that I don’t have a very strong opinion on this, integrating cron in systemd makes some kind of sense.
syslog and logrotate don’t, though.
xinetd doesn’t scale. It’s usable for very light work but the fork-model suffers when you scale up.
/etc/init contains the config files in the format I already showed an example of. They’re not scripts but they can, if the situation requires, use simple scripting to launch the process.
No, they’re still not scripts.
Wait a minute. If systemd is so damn fast, why do Fedora 18 and OpenSUSE 12.3 take twice as long to boot as Debian Wheezy, no matter how many services and crap I disable? And why does my computer’s hard drive grind throughout the boot process on distros using systemd, but not those with sysvinit?
Mind, I don’t have anything in particular against systemd. However, I keep hearing “it enables OMG fast boots” when distros that use it in fact seem remarkably slow to get off the ground. If systemd makes logs easier to search and is more maintainable, good… But let’s not be overoptimistic about the effect on boot speed.
Lunitik,
Personally I’ve always found sysV init scripts clumsy and I’m kind of glad they’re being phased out. They lack any sort of automatic dependency analysis, there’s no restart monitor for dead processes (like init+inittab), init scripts cannot be scheduled and cron jobs are inflexible, etc.
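(For reference, the inittab respawn facility mentioned above is a single line per service, of the form below – the identifier and daemon are illustrative; init re-runs the command whenever the process dies:)

md1:2345:respawn:/usr/sbin/mydaemon --foreground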
So I think we’re in agreement here, but I also agree with the author that systemd is a bit of an octopus. I don’t think it’s a bad thing though, I think it’s good to have consolidation around system task management and it makes sense to consolidate all the components under a unified system like systemd.
Debian has support for it too; you just have to tick a box and it installs.
As much as I dislike many aspects of systemd, it is still an immense improvement over the retarded abomination that is SysV init.
PID files? Yeah, sure. You could use an incredibly inaccurate and error-prone way of tracking the process OR you could just do it right from the start and run it in the foreground managed by a supervising process.
~5 billion symlinks with magic names in /etc in order to control what service starts when? Oh yeah, great idea. *Much* easier to manage than just a config file….
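For anyone who hasn’t seen it, the scheme looks like this (the package names here are made up, but the S##/K## ordering convention is real):

$ ls /etc/rc2.d
K01nginx  S01rsyslog  S05cron  S20apache2  S99rc.local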
Also, runlevels need to die. It’s a concept that is pointless in the majority of use cases. In the general case you either just need to boot normally or in rescue mode. Anything else is a corner case.
Disk performance in Linux is horrifying. I say this from experience. The same hardware running Windows 2003 did not have this problem. Oh, and it’s not an issue with drivers. The disk controller drivers have been certified for both Windows and Linux use.
Disk performance hugely depends on filesystem used. You can’t just say “disk performance on Linux is horrifying” without going into details.
I could believe that Windows has all kinds of kernel space issues, but for desktop performance this doesn’t really matter – because as good as the Linux kernel might be, the desktop environments basically suck. The desktops are bloated, the graphics drivers are incomplete, and now the desktops rely heavily on the graphics drivers; guess what happens?
Sure, Linux with Openbox or whatever is faster than Windows 7 on well supported hardware. The problem is that maybe 5% of new Linux users can be bothered to configure a standalone window manager. The rest will install *buntu because they’re not interested in making work for themselves, and will more likely than not recoil in horror at the bad performance and go straight back to dear old Windows.
TL;DR: a Smart Fortwo can outrace a stock car, if the stock car has bad tires and is towing several tons of lard.
You make a good point which is that it depends on which stack you are looking at.
Linux has a better selection of file systems, but the video stack is quite inferior. Just ask any RHEL engineer if he would run X on a critical server. Most Windows Servers in the wild (including headless) are running a GUI, and the gains from going barebones are minimal.
It also depends on what type of software you run. The MySQL developers have long maintained that their software is not optimized for Windows. Will it run poorly? No, but if you want to squeeze out every last cycle then Linux is a better choice. Is hardware a significant factor in project costs? No, it’s the admin costs that matter. CPUs and RAM are dirt cheap; the average enterprise spends more annually on toilet paper. Linux CPU savings mattered a lot more a decade ago.
So it’s a more complex situation than faster or slower.
But I will say that Windows Server 2012 is retarded for having forced-Metro. It’s insulting really.
Are you talking about Windows?
Pardon?
Whatever else you can say about Windows, it usually has very good graphics drivers available. And the Windows desktop does not bork on those occasions when hardware acceleration is not available; I have run Win8 in Virtualbox without graphics acceleration. (Had to actually because the Virtualbox drivers for 8 were broken at the time.)
Linux OTOH is ridiculous about this stuff. All the FOSS drivers are terrible for both 2D and 3D performance, Gnome 3 requires hardware acceleration (unless you want continuous 50% CPU usage from llvmpipe), and Unity is a freaking overgrown Compiz plugin. KDE 4 also assumes good hardware acceleration for rendering widgets and stuff using Qt’s native backend. The result is godawful performance.
Xfce of course actually works. But who the hell uses Xfce by default?
Pardon?
The Linux open source drivers for ATI and Intel cards are beginning to approach, and in some cases exceed, the gaming performance of closed proprietary drivers.
http://www.phoronix.com/scan.php?page=article&item=amd_linux_april2…
For plain straightforward desktop rendering, such as with KDE/Qt, the open source FOSS drivers have been perfectly fine for a long while now. A number of years at least.
http://www.muktware.com/5194/kde-410-review-time-switch-kde
KDE 4.10: The Fastest And Most Polished KDE Ever
http://www.youtube.com/watch?feature=player_embedded&v=Fqe5ZcXJUHI
KDE 4.10 (with FOSS graphics drivers which work out-of-the-box for ATI & Intel) is arguably the fastest, slickest, most powerful, most capable and most configurable desktop available today.
You are horrendously out-of-date and misinformed if you believe that contemporary KDE/Qt has “godawful performance”. “Awesome performance” would be more accurate.
Seriously, get a clue.
I really have to call BS on this. I’ve used the nouveau driver; 2D performance was visibly worse than with the nVidia blob, and it was very unstable to boot.
I’ve also used KDE 4.10. Things like window resizing and alt-tab lag like crazy, and login can take 30 seconds even on very fast computers.
Now sure, if you have a quad-core Intel i5 desktop with 8 GB of RAM, a high-end nVidia card, and a nice big SATA hard drive, everything will be snappy. But that’s not KDE or nouveau being overwhelmingly fast, that’s your computer being overwhelmingly fast. I really don’t think one should need an overwhelmingly fast computer just for the default desktop to perform well.
Actually, an addendum: current Linux distros with standalone window managers still suck moose on very low-end machines. ATM I’m using Fluxbox on a netbook, and the performance is disgusting – simple things like menu highlights lag absurdly, and you can see each window fill in as it opens.
I wish GTK+ would just die already.
Edit: unfortunately software is the only damn thing in the whole world for which you need a new version every week just to be secure vs. petty crooks. If cars were like that, nobody would drive.
Microsoft certainly has cultural and leadership problems. Ballmer is an asshead that needs to go, he has been given plenty of chances. The Windows division under Sinofsky went into crazyland, only rabid MS fanboys and employees still defend his Metro-down-your-throat plan.
But there is a major problem with his argument because it makes a common false assumption that Linux code is developed for non-economic reasons.
Most Linux code is developed by corporations:
http://apcmag.com/linux-now-75-corporate.htm
Furthermore if “passion” creates good code then Linux should have a much better game selection.
The truth is likely somewhere in the middle. Red Hat probably has a better culture for developers but the actual development has nothing to do with glory or recognition. Most Red Hat developers have families and just want to pay the bills.
Indeed. And it doesn’t look like they really care to fix that.
I can speak from experience: I once submitted code for inclusion in what was then known as the Windows Resource Kit. Without naming the utility, what I developed was a tool that had two threads – a reader and a writer. The reader thread read data from the filesystem and stored it in a linked list, and the writer thread took data from the linked list and wrote it out to the filesystem. All very elegant, tested to death, and CS 101.
It was rejected because I did not use a named pipe between the threads. I pointed out that, at the time, calls to named pipes caused context switches and adversely impacted performance. The feedback I received was “what is a context switch?”. I explained that switching from user mode to kernel mode in a thread had a performance impact. No-one cared. They had their way of doing things and refused to even look at others. I also learned that some of the review team did not understand how linked lists worked and could not be assured there were no bugs in the code, because they could not understand it.
The utility that eventually shipped used a named pipe, and was significantly slower than the one I originally developed. It is now part of Windows. Needless to say, I use my own version (still).
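A minimal sketch of the pattern described (Python; names are mine – this is the shape of the design, not the original tool). A bounded in-process queue hands buffers from the reader thread to the writer thread without routing every buffer through a kernel object, which is the extra cost a named pipe would add:

import threading
import queue

buf = queue.Queue(maxsize=64)      # in-process hand-off between the two threads

def reader(chunks):
    for chunk in chunks:           # stand-in for blocks read from the filesystem
        buf.put(chunk)
    buf.put(None)                  # sentinel: no more data

def writer(out):
    while True:
        chunk = buf.get()
        if chunk is None:
            break
        out.append(chunk)          # stand-in for writing back to the filesystem

out = []
t = threading.Thread(target=writer, args=(out,))
t.start()
reader([b"alpha", b"beta", b"gamma"])
t.join()
print(out)                         # [b'alpha', b'beta', b'gamma']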
This is total propaganda and I am sick of it. Windows is BY FAR the fastest OS. Linux is absolute junk. While the Linux kernel may be OK… and I stress may be OK… let us not accept such things on blind faith… Linux as a complete operating system is total garbage. I could easily (as could any honest person with a little experience) write endlessly about all the problems with Linux… let me focus on speed. Linux fast?
TEST ONE: Find an older computer. Let’s say an Athlon XP 2500+ with 1GB RAM (that is a LOT of RAM!) and an ATI Radeon 8500, and install Ubuntu 12/13, or Mint 13/14, or Fedora 18. It won’t even run, and if it does it will be so slow that the OS won’t be usable. Any GUI operation will take like a minute. Now load Windows XP on the same computer and it will turn into a speed demon. Please stop with the propaganda.
TEST TWO: Find a system such as a Core 2 Duo with 4GB RAM and a mid-range graphics card like an NVIDIA 8400. Now set up an Ubuntu 12/13 or Mint 13/14 VM, and a Windows 7 VM. The Linux VM will be unusable because it is a resource hog. The Windows 7 VM will be lightning fast.
CONCLUSION: It is a fact that Windows is faster and lighter as a complete operating system than Linux or Mac (yes, you can set up a Mac VM also). A Linux freak may argue that you can strip Linux down to the kernel and no X server and it will be faster. Or use some primitive bare-minimum GUI system. Yeah? So what? That is like comparing a non-functional OS to a functional one and saying that the non-functional (or barely functional) one is superior because it is faster. No… no. Let us compare apples to apples and oranges to oranges. In other words, real-world tests of modern COMPLETE Linux distros and modern complete Microsoft OSs… as I have above. That is what matters.
The truth is, and I know this will hurt… Linux isn’t even up to Windows 95 level when it comes to functionality. And Mac is not much better than Linux. No, I’m not a Microsoft fanboy, just sick of this BS. I use Linux at work (software engineer), a Mac i7, and a Windows 7 gaming PC at home.
The irony of this statement in relation to your post is epic.
Oh really? Maybe you are too brain washed to think for yourself. You heard Linux is fast and you actually believe it. Tell you what… let’s stick to facts. Run TEST 1 and TEST 2 in my post and get back to me with your results. Facts mate… facts. The truth will set you free.
Already with the personal insults?
No, I know it’s fast and so is Windows. Which one is “faster”? I don’t know and I don’t care. On any reasonably modern computer both are fast enough.
You didn’t really provide any but here’s my personal experience:
I have run Ubuntu 12.10 on an old laptop with a crummy Intel GM855 video card and only 768MB RAM, and it was perfectly usable. GUI operations certainly did not take minutes. I also have it on a 2GHz dual-core with Intel HD graphics and 4GB RAM, and it’s just as fast as Windows. In fact, Ubuntu is a bit snappier, but since the Windows install is factory Lenovo the difference might not be Windows’ fault.
I didn’t provide any facts… eh? Deny reality much? The post is in front of you, full of facts. Facts are facts even when you don’t like them. The methodology is in front of you. Run the tests. I have… Ubuntu 12 is certainly too slow for the systems I described. Don’t tell me it was “OK for you”. BS. It is absolutely unusable. In fact, I’ll add to my previous post another fact for you to ignore.
I have a relatively modern Phenom system. Guess what? I can’t use Ubuntu 13 or Mint 14 because… I need to buy new hardware. My Radeon 4850 is no longer supported. Yes, I can use the open source driver, but then I can’t play games and don’t have full acceleration. Or I can downgrade the X server to some obsolete version. I tell you… this Linux thing is awesome. In Windows land, updates don’t make hardware obsolete. In Linux land, you play the driver lotto, hoping that your hardware works, and when it does you pray that something doesn’t break the next update. Absolute garbage of an OS. Why do you think something FREE and so awesome can’t get past 1% of market share? Because it is so much better than Windows, right? I tell you, Linux today isn’t even on the level of Windows 95 with respect to functionality. I would love it if Linux was great… but reality matters to me.
I see. Your personal experiences are facts while mine are bullshit and lies. How convenient.
You just continue digging that grave for yourself, it’s rather entertaining.
Yes, that is exactly it.
What I said are facts because I KNOW them to be. I know what you said is not true. I know what I know. This is how the world works: people state their observations. If we want to take it to the level of science… that is how science works also. I have stated my claim. I have stated my methodology. It is up to those that care to repeat the experiments and find out who is telling the truth.
What *I* said are facts because I KNOW them to be. I know what you said is not true.
Actually, what you said might well be true but your anecdotal evidence does not scale to the Linux Desktop in general.
Your methodology has already been shown insufficient by others so I see no need to repeat your experiments.
Well, he almost has a point with that Radeon 8500 …Linux support for it is a bit half-baked ( http://www.osnews.com/permalink?561942 ).
Well, if the article was anti-MS propaganda, then your posts have more than overcompensated with a large helping of anti-linux propaganda.
Just one point though, there is a difference between “facts” and “anecdotal evidence”. You could conduct systematic benchmarks and post them to provide a much better context for a serious & interesting discussion. But as is, this thread is just an angry rant.
🙂
The thread title is “Windows slower than other operating systems”. It is not “Windows kernel is slower than Linux kernel” nor is it “Windows i/o is slower than Linux i/o”.
It is most definitely Anti-Microsoft propaganda that flies in the face of reality.
If it said “Microsoft s****** up by removing the start menu” I wouldn’t criticize. But slower than Linux and getting worse? Give me a break. Total BS.
On top of that, Linux is barely usable to begin with. People kind of look past that because the bar is set so low in the land of Linux. Any BS that can make the alternative look bad is welcome propaganda. Can you ever see a soccer mom recompiling a Linux kernel so her kid can play Minecraft or some other game because they just bought a new gfx card? When soccer moms start doing things like that en masse, Linux will go from 1% market share to being the dominant desktop system.
triangle,
I didn’t say there was no anti-MS propaganda; nevertheless, your posts are chock-full of anti-Linux propaganda. You even took the opportunity to reply to my post with even more anti-Linux propaganda, backed by nothing more than literally made-up anecdotal evidence.
Linux is good for some people, not for others, and that’s fine. But it appears that you’ve got an axe to grind. If you are not going to bring anything insightful to this discussion, and you haven’t yet, then I don’t think there’s enough substance to continue it.
You and I should probably stop feeding the troll.
Soulbender,
“You and I should probably stop feeding the troll.”
You are right, it’s weird how we rarely encounter this sort of thing in real life, but online it’s almost an everyday occurrence.
@Alfman
Ahh, an attempt at a high horse bail out. I just love it.
First, although my posts are full of anti-Linux sentiments (and are quite provocative), I actually used no propaganda, whatever you claim. What exactly did you find to be propaganda? I spoke honestly and truthfully. Not used to it? Just because you didn’t like the message, that does not make it propaganda. Criticism doesn’t necessarily == propaganda.
Second, although you (and most Linux users here) don’t appreciate harsh criticism of Linux, it is well deserved. I didn’t post anti-Linux sentiments on a pro-Linux site for appreciation. 10 years ago, I honestly hoped Linux would become a great alternative to MS. I’m SURE most MS users felt the same. Today, I must say in all honesty that Linux is absolute garbage. Totally unusable. It was much better 5 years ago. The Linux community has spent 10 years circle-jerking each other about how great Linux is, all the while still not catching up to Windows 95. Why do I say things so harshly? Because it is f’n true. Wake the hell up and stop distorting reality with BS propaganda. Linux faster than MS? Please.
Hell… I’m not saying my post, or posts like it, will make a difference. I know there is no hope for Linux. You guys are too smart and too right. So smart that no one else is smart enough to appreciate what you have made. Are you going to now correct me that this is not in fact the de facto attitude in the Linux community?
Well, why then does Linux have a 1% market share? Is it because it is so awesome or because ordinary people are too stupid to see your genius?
triangle,
“First, although my posts are full of anti-linux sentiments (and are quite provocative)… just as you claim, I actually used no propaganda. What exactly did you find to be propaganda?”
I don’t think you are hearing yourself. You’d never accept such a mindless rant targeted against windows, why would you expect anybody else to take your mindless rant against linux seriously?
“Second, although you don’t (and most Linux users here) don’t appreciate harsh criticism about Linux, it is well deserved. I didn’t post anti-Linux sentiments in a pro-linux site for appreciation.”
Is that what you think? I don’t mind linux criticism in the least. I’m just disappointed in the level of intelligence these rants are reducing the comments to. It makes the discussion uninteresting and boring.
“Because it is f’n true. Wake the hell up and stop distorting reality with BS propaganda. Linux faster than MS? Please.” …
You responded to me, but who the heck are you talking to? You are just shouting that everything is BS and propaganda without being able to put your finger on it or talk intelligently about it. If you want to talk intelligently then I ask you again to at least cite some of the specific benchmarks of what you are talking about. I have no problem accepting that some benchmarks might put linux behind, but you need to accept that even if linux were behind on a bootup benchmark, that’s only one aspect of an operating system’s performance and that doesn’t make it “garbage”.
“I know there is no hope for Linux. You guys are too smart and too right. So smart that no one else is smart enough to appreciate what you have made. Are you going to now correct me that this is not in fact the defacto attitude in the Linux community?”
Just because you’ve just chosen to focus on the egotistical subset of the linux community doesn’t mean that we’re all so obnoxious.
I am completely open to have an intelligent discussion on the subject. This link shows many different performance metrics for ubuntu + windows both 32bit and 64bit including bootup, shutdown, I/O performance. I link it specifically because it has many interesting data points to talk about.
http://www.tuxradar.com/content/benchmarked-ubuntu-vs-vista-vs-wind…
The bootup benchmarks are surprisingly close (in my opinion they’re all bad, but that’s a different discussion). In some benchmarks you’ll see Windows is quite the slug. Nevertheless, it’s not a basis for calling Windows “garbage”. It’s more likely that users choose an OS for other reasons and performance is just an incidental consequence.
I honestly don’t even know where to start with this…
Test 1… Yep, sounds fair. Those versions of Linux are probably six years newer than that hardware, while XP is six years older and was designed for much weaker hardware. Would comparing with Windows 8 not be a more fair comparison?
As for test 2, what exactly do you think you’re measuring? Most VM hosts support Windows better than anything else, and provide fast (enough) video acceleration on Windows guests but generally not Linux guests. Surely a better comparison would be to run them on real hardware, but there really won’t be much of a difference. Anything remotely recent is many times faster than either OS requires. Windows might be slightly faster as a desktop OS, but Linux is hardly slow or bloated.
Fair enough.
Windows 8 would not be a better choice than XP in my opinion. In terms of date, your point is well taken, but I would argue that “functionality” is the key concept: for a given functionality, what overhead is there? Today’s Ubuntu and Mint (some of the most popular distros, and my favourites too) are not yet on the same level of functionality as Windows XP. XP is far superior, imo. If it came down to debate, it would not be hard to defend this point… even though I know it sounds provocative. At the same time, Windows 7/8 is not much slower than XP, if at all… so we could do as you say… but Win 7/8 require more RAM than 1GB. I suppose it comes down to ideology: it is easy for us to set the date as the defining point, while functionality is a debatable sticking point.
The major thing in my mind is that XP is still modern in the sense that I can get XP drivers even for modern computers (say, if I wanted to build one). On the other hand, if I built a computer today, I would have to use bleeding-edge Linux just to have a chance of running Linux. So in this respect, I think XP vs. current Linux is fair game. Also, because of the centralised software scheme in Linux land, one is forced to use a relatively new distro. It is not like I can use an 8-year-old Linux distro on the Athlon and be productive and secure.
As far as the second test goes, I have run those OSs on bare metal on those systems. Linux is noticeably slower. I mentioned virtualization because there the additional overhead makes the difference even more obvious. As far as VM bias towards Windows goes… VirtualBox does not bias towards Microsoft products.
I guess I’ll feed the troll….
Seriously, what functionality does Windows XP have “Out of the Box” that any of the mint/Ubuntu Linux distributions don’t have?
The only things XP has going for it out of the box are Paint, Notepad, Wordpad, IE and Outlook Express.
Oh, maybe I’m missing solitaire and minesweeper….
Oh, you can search for files, and regular file management stuff. But what else?
Any Linux distribution (yes, even from the days when XP came out) had more applications and a full office suite with the default desktop install.
Here’s some anecdotal evidence to support the opposite of what you’re saying.
This was a few years back, but my sister’s friend had a crappy eMachine whose motherboard went bad. I tried to find a replacement, but unfortunately ordered her one that used PC133 memory, of which I only had a 128MB stick. Ubuntu (I think it was 8.04) ran on it, but slowly. OpenOffice loaded in about 5 minutes! On the exact same hardware, dual-booting, Windows XP took about 10 minutes to load OpenOffice!
So you can berate Linux all you want, but the truth of the matter is that it still has leaps and bounds better memory management than Windows does. Ever look at your memory usage on Windows? I can pretty much guarantee that it’s using the page file even though it still has free physical memory. I NEVER see this on Linux. We buy RAM for a reason, Microsoft… fix your damn memory management!
What does XP have out of the box that Linux does not?
For one, it works. That’s a pretty big one. Let me elaborate.
Yes, Linux sometimes works. XP always works, and works fully. That is a huge difference. The function of an OS is to be an OS… not to supply you with a huge amount of free software. By the way, there is a huge amount of free open source software for Windows also. Look up Softpedia.
So let us compare specifically how the OSes compare out of the box. I’ll state three scenarios. All three will consider what the experience would be if we were to install an OS today.
SCENARIO I – 10 year old computer.
———————————
CPU: Athlon XP 2500+
RAM: 1 GB
Mobo: NF7 Series
GFX: Ati Radeon 8500
SND: Creative sound blaster 5.1
Current Ubuntu, Mint, etc. will not work on this system. Score F-.
A legacy Linux distro will work partially on this system. The video card will barely work and the sound card only partially. However, and this is vital… because of the flawed centralized software repo scheme, no new software could be used. This effectively means no security and very limited functionality/productivity. So in effect, installing a legacy distro is not an option. Score F.
Windows XP:
-Works perfectly. Everything fully functional and optimal.
-Super fast and light. Like computer was brand new!
-No junk comes with OS
-Can install any new software.
Score A.
SCENARIO II – 5-6 year old system
———————————-
CPU: Core 2 Duo 2.X GHZ
RAM: 3 GB
GFX: Ati 2 or 3000 series or NVIDIA 8400
SND: Creative sound blaster 5.1
New Ubuntu or Mint: with ATI, shit out of luck. F-.
With NVIDIA it will work. It will be slow, but speed-wise usable. Of course, new GUI systems like Unity and GNOME 3 are total crap and that should be taken into account; let’s pretend classic is used. In this case the score depends highly on hardware – you are playing the hardware lotto. If run virtualized, the score is an F, because the hardware can’t handle the hog that Linux is with enough speed.
Legacy Linux: can’t be run, for the same reasons as above. No access to new packages/software. Score F-.
Windows XP:
-Works perfectly. Everything fully functional and optimal.
-Super fast and light. Like computer was brand new!
-No junk comes with OS
-Can install any new software.
-No virtualization problems.
Score A+.
SCENARIO III – 2 Year OLD system
———————————
CPU – AMD Phenom
RAM – 4GB
GFX – ATI 48XX
SOUND – Creative 6.1 or 7.1
Current Ubuntu or Mint works, but not fully. Minor issues with the sound card and major issues with the video card. Gaming is out of the question unless you downgrade the X server, blah blah. Score C-.
Legacy Ubuntu or Mint: not an option, for the above reasons, and also pointless.
Windows XP:
-Brilliant. Everything fully functional and optimal.
-Super fast and light. Absolute speed demon.
-No junk comes with OS
-Can install any new software.
-No virtualization problems.
Score A+.
CONCLUSION
————
One of the key functions of an OS is that it should work. Sometimes Linux does work, but often it does NOT! You must play the hardware lotto. Yet it is the goal of an OS to make hardware work, so it fails at this very basic level.
Also, Linux does not age well. It is not usable with older hardware because that forces users to give up on new packages/software and security. In fact, even 2-year-old hardware becomes obsolete real fast, as with the Radeon 4XXX series forcing people to buy new hardware.
For hardware to work you need proper driver flexibility. Linux does not have this. Yes, this is a business issue, not just an engineering issue, but alas, the result is the same. You are lucky to even get your hardware working. In Windows land you always have full hardware support.
In Windows you can keep your software. It is not obsoleted by package updates.
Also, in the land of Windows… you can run up-to-date new software on old hardware.
In short, Windows as an OS fulfills all the needs that an OS is supposed to fulfill. It is flexible and has long-term hardware support. XP is 13 years old and is still marvelous.
I should also add that software developed for Windows still works, for the most part, in newer OSes, because of the amazing backwards compatibility MS always achieves.
In short Windows has none of the problems Linux has.
With two exceptions of course, it is not free and not open source.
I’m not arguing that XP is the best OS ever. I think that for the most part Windows 7 is better than XP; obviously it supports modern hardware fully. The question is whether current Linux distros are yet on the level of 13-year-old XP. I say no. The above considerations are elementary considerations for an OS – essentials. Linux fails on these and Windows scores extremely high (better than any OS). I am only considering these elementary considerations here. If we were to consider usage elements, then I would argue that Linux isn’t even yet on the level of Windows 95. Windows 95 was much easier to use/admin than Linux is today (or ever has been). Sad but true.
I should also correct my post for scenario 3: XP should score a B or a C and not an A, since it does not support 64-bit and more than 4GB RAM… hence it does not fully support the hardware after all. But at the same time Microsoft has Win 7, which would score an A in that scenario.
1) You are comparing to Ubuntu/Mint, etc. Try comparing to something solid like openSUSE or Fedora.
2) Yes, Linux installations can break, but Linux gives you tools to easily fix them. I gave up on Windows XP in 2004 after a horrible experience with Microsoft support, where an idiot eventually suggested I format. I had stumbled onto an NTFS bug; the Microsoft support person denied the bug, but it was eventually fixed in Vista.
Luckily I didn’t format, and I used a Linux rescue CD with NTFS write support to fix my Windows installation.
I call BS on your argument. Score F.
I have an old (~7 years) system which ranks somewhere between your 10-year-old and 7-year-old examples, and I don’t have to downgrade to unsupported legacy Linux installations to use it.
The argument of a no-junk Windows XP installation with everything working out of the box can be thrown right out the window – I mean, there’s an outdated IE, and you are missing most of the drivers for your hardware. And the whole system is rather vulnerable out of the box, unless you roll custom installation media streamlined with all of the hotfixes so far.
Also, you silently assumed running the default desktops a given distro comes with. I can use GNOME 3/KDE fairly comfortably, although I prefer simpler setups; in such cases older systems might struggle, but not every distribution comes with a beefy desktop option by default.
You also forgot to notice that Windows XP is going to become unsupported soon.
> Also, Linux does not age well. It is not usable with older hardware because that forces users to give up on new packages/software and security.
BS. You can run recent releases of distributions on fairly old hardware. I run Arch Linux on a laptop from 2004, and I keep it up to date.
Do not make the assumption that recent Linux versions are only for top-of-the-shelf hardware. Do not assume that GNOME 3/KDE and other resource-hogging desktops are the only options for Linux. Do not assume that old hardware only works with Linux distributions released more than a few years ago – there are Linux distributions that will work well on older hardware while containing up-to-date software.
> In Windows you can keep your software. It is not obsoleted by package updates.
Try installing some stuff written for XP on Windows 7 or 8. Or try installing something that requires the most recent DirectX on Windows XP. And then tell me it’s not being obsoleted.
There are more and more apps getting left behind. And that includes drivers for older hardware – some hardware from the WinXP era is not getting drivers for modern Windows releases. At some point there will be hardware with no XP drivers available. I doubt it will happen anytime soon, but it’s a matter of time.
On Linux you at least get the comfort of having your hardware supported for as long as there are people using it, and that works out realistically. Even if support is dropped, you are free to fork the kernel or form a group of interest to restore a given feature.
You can fire up a fairly recent Linux system and it will work with your hardware, recent or ancient. The exceptions are hardware that’s truly obsolete (like a 386, with its RAM restrictions), hardware nobody uses anymore (ancient modems, really old and obscure graphics cards), or hardware nobody has written drivers for yet.
“The argument of a no-junk Windows XP installation with everything working out of the box can be thrown right out the window – I mean, there’s an outdated IE, and you are missing most of the drivers for your hardware. And the whole system is rather vulnerable out of the box, unless you roll custom installation media streamlined with all of the hotfixes so far.”
Not true. You click the update button; then you are up to date. Too complex for you? Second, yes, one should install drivers. In the land of Windows this is not necessary most of the time, but it is always an option, and you can always get the correct drivers from the manufacturer’s website. You just double-click the exe file and boom, all is perfect in the world. In the land of Linux… good luck with driver issues. But yes, I see your point; I know you like to roll your own drivers and custom-compile the kernel. As do all normal people…
“Also, you silently assumed running the default desktops a given distro comes with. I can use GNOME 3/KDE fairly comfortably, although I prefer simpler setups; in such cases older systems might struggle, but not every distribution comes with a beefy desktop option by default.”
Yes, I talk about how Linux comes out of the box vs. how Windows comes out of the box. How rude and flawed of me. Good point.
“You also forgot to notice that Windows XP is going to become unsupported soon.”
Oh my god… after 13+ years? How dare Microsoft do such a thing!?! Let’s start the MS hate bandwagon right now! Just one question, dude: what is the longest support a mainstream Linux distro gets? LOL. Also, you may have noticed that MS has an OS called Windows 7. Spend a few bucks; it will be supported for quite a bit longer than any Linux distro will.
“there are Linux distributions that will work well on older hardware while containing up-to-date software.”
This is true. You can run Lubuntu… maybe… if you win the Linux driver lotto. See, the thing is that Ubuntu is garbage enough as far as usability is concerned. Why would I want to run something even more stripped down just to be able to run an OS? No thanks; you lose too much functionality/usability. We need to keep this apples to apples, and since the best of Linux isn’t up to the Windows 95 level in terms of ease of use/administration and functionality, why would I want to run something even more stripped down?
“Try installing some stuff written for XP on Windows 7 or 8.”
Funny you say that. For the most part this is no problem at all! I have yet to find something written for XP that doesn’t work on Win7 (drivers excluded, of course). Not only that, but you can even run 32-bit apps on 64-bit Windows… unlike on Linux, where they still can’t get this right. Maybe in 10 years they will finally get multiarch working properly… fingers crossed. Yes, if some dynamic lib is one version number off, the application won’t run on Linux. On Windows, apps are smart enough to package what they need with the app unless it is a system lib, and those are guaranteed not to break apps in the future (unlike on Linux, where breaking older apps is the norm).
“Or try installing something that requires the most recent DirectX on Windows XP. And then tell me it’s not being obsoleted.”
This is true. Yet you can fully enjoy any game on XP just fine. MS is applying some pressure here, for sure. I think it is fine for MS to ask users to spend $100 after a decade and a half.
“There are more and more apps getting left behind. And that includes drivers for older hardware – some hardware from the WinXP era is not getting drivers for modern Windows releases. At some point there will be hardware with no XP drivers available. I doubt it will happen anytime soon, but it’s a matter of time.”
True. But again… what is the point? That XP will one day become obsolete? Granted. Spend $100 after a decade and a half. Give me a break. Programmers need to eat too.
“On Linux you at least get the comfort of having your hardware supported for as long as there are people using it, and that works out realistically. Even if support is dropped, you are free to fork the kernel or form a group of interest to restore a given feature.”
What a joke. Good luck even getting basic functionality out of hardware on Linux. Any hardware that is not mainstream has no chance at all. You can roll your own drivers, eh? What a flaky argument. You can do the same on Windows, mate. There is nothing stopping you.
Microsoft wanted Windows XP dead years ago, but they couldn’t kill it off because of Linux and that pesky netbook fad several years ago. Never mind the atrocity that was Vista, which wouldn’t even run on the damn things. If Microsoft had their way, everyone would have jumped off XP like fleas leaving a dead animal, but practically no one wanted to run the Vista dud, to the point where Microsoft had to introduce downgrade rights (a great way to cheat the OS market share statistics). The Linux distribution family with the longest support is probably Red Hat Enterprise Linux and its derivatives. Enterprise Linux 5 and 6 have a support lifecycle that spans a full decade of production phase, followed by three years of extended life. That’s 13 years. Try again.
And that’s all I am going to waste my time responding to. Your comments are so absurd, I have to wonder if you are going for the “Internet’s Biggest Troll” award. Your posts are sea after sea of pure, 100% bullshit. I have a hard time believing that even you believe what you’re saying, and I can’t believe people are wasting so much energy getting into such heated arguments with you. Seems like a lost cause to me.
It is a huge difference, but you have it the wrong way around.
Example:
I’ll call your bluff:
Athlon XP 2500+ CPU is fine.
RAM: 1 GB is fine.
Mobo: NF7 Series: I can’t say for sure but there shouldn’t be an issue: http://www.linuxcompatible.org/compatdb/details/abit_nf7_s_linux.ht…
GFX: Ati Radeon 8500 : supported by the radeon open source driver from Xorg. Works flawlessly out of the box.
SND: Creative Sound Blaster 5.1: http://phoronix.com/forums/showthread.php?2216-Linux-Compatibility-… : An old sound card, based on the emu10k1 sound chip, that works perfectly in Windows and Linux, with hardware mixing.
So this system should work flawlessly out of the box with the latest Linux distributions.
Here is the list of ATI cards supported by the radeon open source driver as reported by the command “grep ATI /var/log/Xorg.0.log” under the latest Kubuntu 13.04 distribution: http://justpaste.it/2m4m
In other words, you are full of it.
You’ll call my bluff with BS? Why do you assume I’m bluffing? That assumption is simply flawed. The examples I stated were concrete, specific examples, because I know those things to be true.
You say “it should work”. I agree. It should. Unfortunately, in reality there is only how things are, not how they should be. You can’t use modern distros with the open source driver for that card. If you think that is a usable system… try it and enlighten yourself.
The open source ati driver is garbage. The fact that there exists an attempt to make a working driver does not mean it is successful.
This is like trying to dispute scientific evidence by quoting the Bible.
lemur2 at least gave you some links outlining the compatibility status of your hardware.
So, you didn’t get it to work and by extension it’s absolutely impossible? Not much of a reference, I’m afraid.
Nonsense.
My last ATI was in a Lenovo T410 (IIRC.. it was in 2006) and I got a lot to work with the open source driver just fine whereas the official driver blew chunks (dual screen using both video-out ports of the docking station, etc).
I called your bluff BS because it is BS.
Linux works with nearly every system out there. It is absolutely certain to work if you get a Linux system in the same way that you get a Windows system … by buying a system from the vendor with Linux pre-installed.
And it will work faster, better and with more stability than Windows.
And Windows 8 will also work flawlessly with such a system, because that is exactly the sort of system I own.
There’s more to it than such a list. I happen to have an old PC with this GFX card, the Radeon 8500 – and while the open source driver is the best choice, it really isn’t very good: unmaintained (like many Linux drivers for older stuff); works poorly with WMs that use 3D compositing; multi-monitor setups are really bad, very glitchy (especially when trying to use it with another GFX card, a Matrox G450 PCI, also with OSS drivers; no such problems under XP…)
Softpedia is filled with “DOWNLOAD NOW” links that go to different software than what you’re looking for, not to mention other sites that bundle open source software with their own toolbars and other malware/spyware. See below about Linux working…
Use Debian. Wheezy just came out and I bet you it’d run flawlessly on that setup. Ubuntu and Mint go for the newest hardware, and generally don’t try too hard to support older hardware. I had Debian Wheezy running fast on a Pentium 2 with 512MB of RAM a while back. Only issue I had on that system (at the time, it should be fixed now that Wheezy is stable) is that I couldn’t use full Gnome-Shell because the video card I had only worked with the legacy nVidia drivers, which wouldn’t work with the newer kernel (hardly Linux’s fault, that’s all on nVidia.)
Radeon 8500 is hardly a new card. Blame ATI for not supporting their video cards worth a crap. I have an AMD3200HD in a laptop and it’s not all that old (from 2009?) and they’ve already dropped support for it. Grabbed the fglrx-legacy drivers from experimental and haven’t had issues since. Again, as far as the repositories go, use a distribution that doesn’t suck. Debian is the best for long term support, while the actual support isn’t as long, the upgrade paths are more or less flawless. I went from sarge, to etch, to lenny to squeeze on my server. Only reason I finally re-installed is because the hardware got old and I upgraded to a 64-bit system. There were methods for converting from 32-bit to 64-bit Debian without re-installing, but it seemed like a royal pain in the butt, so I just reinstalled after backing everything up. Can’t even do that on any Windows system.
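For what it’s worth, the release upgrade itself is only a handful of commands. A rough sketch, assuming stock /etc/apt/sources.list entries and the squeeze-to-wheezy jump (adjust release names to taste):
# point apt at the new release
sed -i 's/squeeze/wheezy/g' /etc/apt/sources.list
apt-get update
# two-stage upgrade, as the Debian release notes recommend
apt-get upgrade
apt-get dist-upgrade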
Of course it’s quick and fully(?) functional when there isn’t any software installed. Once you start installing software it gets fat and unresponsive. Thank the registry for that. And it is rapidly getting to the point where you can’t install any software. You’re permanently stuck on DirectX 9 because Microsoft is forcing you to upgrade. I know, because I would probably have stuck with XP if it weren’t for newer games all supporting DirectX 10+.
Use KVM with QXL, or are you talking about Linux being the guest and Windows being the host? In which case it’s Windows’ fault that Linux is slow.
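(For the curious: giving a KVM guest QXL video is a one-flag affair with qemu. A minimal sketch – guest.img is just a placeholder disk image, and the guest still needs the QXL/SPICE guest drivers installed to actually benefit:
qemu-system-x86_64 -enable-kvm -m 1024 \
  -vga qxl \
  -drive file=guest.img,format=qcow2
)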
Tell that to my friend who supports legacy systems using RHEL 3 and 4, and ancient FreeBSD systems that customers refuse to upgrade. Sure it’s a pain, but it’s possible.
Installing guest drivers on XP can be annoying. But sure, if you install yourself and then (as above) start installing a bunch of stuff, it gets cluttered real quick. Basically to be usable as an ‘operating system’ it has to have applications installed, and installing those applications is what slows XP down. It’s been a known issue for as long as Windows 95 has been around.
You keep mentioning sound card issues. I never had any issues with my Audigy cards, except when they finally started to make weird popping noises (which started in Windows). I’ve had MORE issues getting them working in new versions of Windows than I ever have in Linux. Granted, I didn’t buy the X-Fi ’til much, much later, and didn’t have to wait for Creative to open source their drivers, but once I had one, I didn’t have any issues with it.
I have a lot of games that work better in Wine than on Windows 7. I have also been able to get any old binary-only software working on Linux with the libstdc++5 libraries installed. I think all of your issues with ‘Linux’ are really issues with Ubuntu. Try a more stable distribution like CentOS or Debian. You’ll find it far more pleasant to use, and it works far better on older hardware. Ubuntu is for the New Kids, who like the shiny. Sorry, had to snip some of the quote due to the character limit.
Well, I’ll give you props for being specific.
In short, you admit that things don’t work so well in Linux and that it can be a pain in the butt. Your friend makes a living doing that. I’m sure you are correct; I wouldn’t argue against the idea that it is possible to support legacy hardware on Linux. My point is that in the land of Windows, people don’t make a consulting career out of it, because it is painless. Things just work perfectly.
Essentially what you are saying is that you are willing to make do with the hassles and shortfalls of Linux. Good for you.
I have used and do use Debian every day.
Blame ATI? As far as I’m concerned, it is not about who is at fault; it is about the state of things. However, if we want to place blame for the state of things, then I would say that the problem is the GPL.
As far as games running better in Wine… I don’t really have to comment on that. You should have left that one out, because it reeks of bias and reality distortion. Even if there were a game or two like that… most games don’t even run in Wine (of course you don’t mention that).
Why would you install a legacy distro instead of just a lightweight distro? A machine with 1 GB of RAM is not bad at all; I run virtual machines with considerably less.
Also, XP lacks drivers for much of this hardware by default, which means you have to either use a modified install disc or install the drivers all manually, which can be extremely painful, especially if you’re not sure what hardware is present. In some cases you might actually need a floppy disk (!) to install XP on a machine where the SATA controller is not supported by default.
Amusing that you call a centralised repository flawed, when the Windows approach (download arbitrary binaries from random sites) is far more flawed, and requires considerably more user knowledge in order to avoid malware infestations. Also, the “flaw” as you call it seems to be that the repository is outdated, but there is no real reason to run an old Linux distro when the new ones are available for free.
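To illustrate: on a repository-based distro, installing something like VLC is a single command against a signed, checksummed archive –
apt-get update && apt-get install vlc
– whereas the Windows route means finding the right site, dodging the fake download buttons, and trusting an arbitrary exe.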
I’m also curious as to how you had problems with older Radeon cards, since I have had no problems using older cards with modern Linux distros (7000, 9200, X1600, X1900), and they work out of the box with the open source drivers. Conversely, these cards won’t work at all with Windows 7/8, and in some cases only run on XP if you’re willing to accept drivers with known security holes.
XP is not “up to date software”, and applications are only still being made compatible with it because many users have not upgraded to newer versions. Linux doesn’t have the problems Windows has which cause people to avoid upgrading – problems like lock-in, lack of drivers in newer versions, increased hardware requirements, cost, etc. There is very little reason to ever run an old version of Linux, even on old hardware.
The biggest problem with Linux that you highlight is that many hardware manufacturers are still stuck in the Windows mentality of releasing closed source drivers… Open drivers work far better, and are the reason why modern Linux supports all manner of hardware which is abandoned by modern Windows, especially the 64-bit variants. While 64-bit Linux supports almost all the hardware the 32-bit version did, simply by recompiling the drivers, once 64-bit Windows came out only new hardware ever got drivers, and there is all manner of older stuff which is unusable.
And then there are other processors: Linux on ARM inherits the majority of the drivers from x86 Linux, so if you have an ARM (or MIPS or PPC, etc.) based machine with PCI or USB slots, you can plug all manner of hardware in and have it work. If you are running Windows CE or Windows RT on such hardware, you have extremely limited driver support – because the manufacturers only ever made x86 binary drivers.
I want to add that after using my computer for a few hours yesterday, I was able to turn off the page file without having to reboot, meaning it wasn’t in use (it also reported 0 MB used in the page file). That is the first time that’s ever happened to me in Windows.
That reminds me: I have to turn the page file back on, now that I’m no longer defragging prior to shrinking my Windows partition.
I’m running Windows 8 with 8 GB of RAM. Not a small amount of RAM, but not a lot, either.
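(In case anyone wants to script that instead of clicking through System Properties, something like the following from an elevated prompt should do it. This is a hedged sketch using the stock wmic aliases; a reboot is normally needed before the file itself disappears:
rem stop Windows from managing the page file automatically
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
rem remove the existing page file setting(s)
wmic pagefileset delete
)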
I wouldn’t agree with either of those two points.
If you exclude “runs Windows applications” as a feature (if that’s what you want to do, just use Windows), I can’t really think of anything Windows XP has that current (or even older) Linux distributions don’t. Certainly nothing that I use.
I don’t really consider Windows XP to be viable anymore, unless you’re running on very old hardware that can’t run Windows 7 well (or at all).
When using a Windows XP machine, there are plenty of things I miss from Linux. And Windows 7. And Mac OS X, for that matter. It just feels like an antique to me at this point.
From my experience, that’s usually a problem with the graphics drivers (nVidia’s drivers, in the case I’m thinking of), but I’ve had no such problems in years.
It’s hard to know what you mean by “slow” anyway. Unless there’s a graphics card issue (dropping back to software rendering because you’re in a VM will pretty much do that), I can’t think of any way that Linux feels slower than Windows. Not on anything I’ve used in a very long time.
You lost me on the VMs bit. 1 GB Ubuntu VMs on a powerful machine like what you describe will work just as well as Windows. Hell, on a Core 2 Duo you can run Unity 3D with no hardware acceleration, on one core. That’s how damn powerful modern CPUs are.
ATM I’m running Linux with the Mate desktop on a 1 GB Intel Atom netbook with trashy Intel graphics, less powerful than the obsolete configuration you mention. It’s about as fast as Windows XP – i.e. not very, but usable.
OTOH, Gnome 3 makes this netbook cry. Thus, I reiterate: the problem is the desktop stack.
Why does Ubuntu run like crap on VirtualBox then, while Windows 8 runs nice and smoothly?
That’s your argument, seriously? LOL
Is that your argument?
It’s all anecdotal evidence, unless someone produces some real numbers.
I was just confused as to what admitting that you had no idea what you were doing, in a specific matter, had to do with the general scheme of things.
Pathetic as always.
We are talking about performance; I stuck in my 2 cents for what it is worth.
I know it isn’t a good measure and I don’t pretend it is.
I was more interested in the fact that Windows 8 works pretty well in a VM and it seems to have some compositing, whereas Gnome 3 runs like dirt with the same allocated video memory.
You’re being too hard on yourself. I wouldn’t go as far as calling your reliance on your own lack of technical competence in a matter “pathetic” per se… more like an “odd” choice for the basis of an argument.
The guy made a valid observation. The same one I made in a previous post (among others). It is a cheap tactic, and a rather sleazy one at that, to try to invalidate the observation based on his lack of omniscience.
“Oh… you don’t know what causes the slowness. I see. Then your observation that it is slow is not valid.”
I can observe the effects of gravity without having a PhD in physics or having spent the last 40 years researching gravity.
In the same way, he can observe the slowness of some software without having been an integral developer of the software.
If you’re so stringent, where is the hard data to back up the notion that Windows is slower than Linux? Give me hard data. Show me the numbers and methodology. On top of that… just to follow your standards… the person who publishes the data had better have been an integral developer of both Linux and Windows! /sarcasm
This article presents no evidence whatsoever that Windows is slower, yet any observations to the contrary submitted by users in this discussion thread are somehow “anecdotal”. Hypocrisy at the very least.
Huh? Who are you talking to?
I think we both know I was referring to you. I wasn’t arguing, I was asking – thus a question.
Twisting my words and saying I am incompetent is pretty damn weak when I was making an observation… after a bit of research, it turns out to be down to how VirtualBox handles 3D.
…your ability to parse humor is only surpassed by your legendary technical prowess in the field of computing.
The thing is, it wasn’t funny, cute, or witty… calling it humour is a stretch at best.
Also, these little digs at my competence make me wonder whether you are projecting your own doubts about your ability, because I tend not to agree with you.
I don’t know, given how your favorite argumentative approach seems to be telling those who disagree with you how little they know about “Software Engineering.” And how the slightest pull of your leg on my part managed to get your panties in such a twist. I gotta say, the bit about “projecting” a certain lack of security made the lack of self-awareness all the more hilarious.
Alas, all good things must come to an end. So have a wonderful day…
tylerdurden,
“I don’t know, given how your favorite argumentative approach seems to tell those who disagree with you how little they know about ‘Software Engineering.'”
Are you referring to this by any chance? Yeah, that was lousy. Note the voting patterns in that thread; there’s clearly a Windows fanboy somewhere using duplicate voting accounts. There’s no way I can expect this post not to be downvoted.
http://www.osnews.com/thread?560949
LOL, I wasn’t thinking of that particular exchange (I don’t remember reading it). But it certainly solidifies the pattern…
With Ubuntu it was actually under KVM, not Virtualbox. 1 GB RAM, no graphics acceleration at all. It still worked fine, mostly because the computer in question was a beast; on my netbook it sucks terribly.
Not sure how well it would work in Virtualbox though. If your hardware is similar to my late Core 2 Duo desktop, the problem might be with Virtualbox, not Ubuntu.
As far as Windows 8 is concerned, I found it quite fast on Virtualbox (even without GPU acceleration) but quite slow on an older laptop. I would guess it’s heavily optimized for newer machines.
I am talking about my 8 processor Xeon machine at work.
I honestly don’t know whether it is VirtualBox or the distro. But Windows 8 works pretty damn well, and anything with lots of 3D compositing (Gnome 3, Unity) seems to run very badly.
What virtual hardware are you using? Emulated devices like IDE/SATA/e1000, or virtio drivers? If you aren’t using virtio, performance will be horrible, regardless of which OS you use.
And no, “defaults suck! I don’t want to tune! Wah!” is not an answer to why one OS works well and another doesn’t.
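To make that concrete, here is roughly what the two configurations look like on a KVM host. These are sketch qemu command lines – guest.img is a placeholder, and the guest needs virtio drivers installed for the second variant (Linux has had them in-kernel for years; Windows guests need them added separately):
# fully emulated devices – every disk/network access is trapped and emulated
qemu-system-x86_64 -enable-kvm -m 1024 -hda guest.img -net nic,model=e1000 -net user
# paravirtual virtio devices – the guest cooperates with the hypervisor
qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=guest.img,if=virtio \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0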
Thanks!!
I appreciate your posts!
Usually we have Vim vs Emacs, Gnome vs KDE, Ubuntu vs Debian, Google vs Apple… everyone here is usually a little bit of a fanboy, but within reason.
But your posts are highly original! I must congratulate you!
You are the most delusional troll I can remember on OSNews! You just talk bullshit like you know everything; everyone else are liars and only you know the truth!
It’s so stupid that I just suggest everyone laugh and forget what you said! (Never mind Linux servers all over the world, the embedded market, Linux usually needing less RAM and CPU than Windows, even NVIDIA and Steam developers reporting FPS increases on Linux compared to Windows!)
Other people must really like you!!!
Thank you for a good laugh!!! You are the King Idiot!
You are most welcome. I should also add that the market really likes Linux. That is why, despite being free, Linux has a 1% market share. Because it is so awesome.
Or wait…
Does Linux have a 1% market share because it is so awesome, or is it because people like myself are too stupid to see the awesome that is Linux? Isn’t the prevailing view of Linux users that ordinary people are just too dumb to understand how amazing Linux is?
Yes, you are too smart and too clever for the rest of us. Isn’t it wonderful up on that high horse, mate?
“Yes, you are too smart and too clever for the rest of us. Isn’t it wonderful up on that high horse, mate?”
Don’t know… you tell me, Mr. Know It All, Keeper of Statistics and Benchmarks… the two truths!!
That’s desktop market share. Server market share is way higher than 1%, because Linux is awesome – on servers.
And on mobile phones, embedded systems… anything that has ARM at its core, in-flight entertainment systems, as of late the ISS, etc.
Linux has only 1% market share on desktops and laptops because Microsoft coerces manufacturers into installing Windows.
However, in markets where MS has little or no leverage – servers, supercomputers, embedded devices, phones and tablets – Linux has been a spectacular success [and MS a spectacular failure].
In fact, sometime in the next 2-4 years Linux (Android) will be the best-selling OS, as portable devices begin to replace most desktops and laptops.
Indeed, Android uses a Linux kernel, and it is easily the best selling OS on mobile phones. It has also recently just about caught up with iOS on tablets.
http://www.androidcentral.com/survey-says-android-claims-48-us-tabl…
Linux is the most popular OS for supercomputers; it is embedded in devices such as TVs, DVD players, PVRs, e-readers, etc., even in cars; and it runs in network infrastructure such as routers, in servers, in render farms, and in server farms such as those used by Google and Facebook.
Even for desktops, Linux ships currently on about 5% of machines sold.
http://www.phoronix.com/scan.php?page=news_item&px=MTA5ODM
Given all this coverage, over the whole of the computing market Linux is actually the majority OS.
Oh, and given that many ordinary people in western countries own a car, a phone or tablet or both, a TV, a DVD player, a PVR, perhaps a separate GPS, maybe an e-reader, a wifi router and ADSL modem, and also a desktop or laptop … then such an ordinary person is likely to be running about five times as many copies of Linux as copies of Windows.
Android != Desktop Linux.
It is like me including Windows CE PDAs and POS machines in Windows market share.
While Android is Linux based, the underlying kernel could have been anything. Linux is one component of what makes Android.
Agreed. However, Android does include Linux. The Android kernel is part of the main Linux source tree.
http://www.pcworld.com/article/252137/linux_unites_with_android_add…
Hardly. Windows CE does not use the same source tree. I don’t know what codebase Windows POS devices utilise.
Agreed entirely.
Therefore, if we wish to assess how many of the world’s CPUs utilise a Linux kernel, we must include Android devices.
Nope. “Windows” is merely a brand used by Microsoft. It is not a particular technology. Windows CE and POS machines aren’t part of the NT-based ecosystem.
Linux is by definition a kernel. So anything that uses a Linux kernel is ‘Linux’. Conversely, any desktop system that looks and behaves like a Linux distro but uses a different kernel isn’t Linux.
So by definition Android is Linux and MintBSD is non-Linux.
That is likely largely meaningless… (replies in http://www.osnews.com/thread?561343 sub-thread)
http://www.telegraph.co.uk/technology/news/10049444/International-S…
http://www.pcworld.com/article/238068/how_linux_mastered_wall_stree…
http://www.smartcompany.com.au/information-technology/050276-ibm-in…
The ISS is using hardened machines built for outer space, not modern machines.
Supercomputers != desktop operating systems.
I could go on. With Windows, we are talking primarily about desktop operating systems… and none of these examples prove Linux is a desktop operating system. They prove that it works well in those circumstances.
Lemur2, the guy who doesn’t understand what a use case is or why it really matters when evaluating software.
So? The ISS machines, modern or otherwise, for reasons of stability and supportability, are still going to be running Linux rather than Windows XP in the future.
True. Having said that, supercomputers do utilise operating systems, and many of the supercomputers on the top 500 list do utilise Intel CPUs. Furthermore, for reasons of scalability and speed, 462 of the top 500 supercomputers do use a Linux kernel, and there is, after all, only one Linux kernel source tree.
No we aren’t, we are talking about operating systems. This site is called “OSNews”, which is short for “Operating System News”. It doesn’t say “Desktop Operating System News”.
Lucas_maximus, the guy who doesn’t know what we are talking about under the topic of “Operating System News”.
Hm, DOSnews would have a funny ring to it
Actually, there are dozens of fairly normal laptops (Thinkpads, including quite recent T400) on the ISS. The only major modification is cooling (because convection doesn’t work in microgravity).
One can argue that HFT is not a good thing…
I worked at DEC and had the privilege of working on VMS. OS work was most definitely split into subgroups; I worked in security. But security had the benefit of being interwoven throughout the various components of the operating system, so I worked in loginout, memory management, backup, the command line interpreter, etc.
I had to touch other people’s code and, in fact, was even assigned investigations FROM the other groups. I never had a bad experience where a developer was over-protective of their code or didn’t want someone else’s fingers in their pie.
Also, peer code review was prevalent. If you had a way to improve performance or a recommendation that would benefit the operating system in any way, you presented it to a group of your peers (and more senior developers) and if your recommendation was valid, you’d be given the go ahead to make the change.
There was no inter-group rivalry, it was one very large team and we all understood the concept of helping each other out since we were all working toward the same goal of releasing a bug-free, superior product.
(Note: the VP who drove NT was an ex-DEC VP of VMS, so I would think the same philosophies would carry over.)
I don’t know if DEC is a good example. They are long gone by now, whereas Microsoft is not only still around but thriving as well.
The two companies had very different definitions of what “product” is/was. So the corporate cultures may have been almost uncorrelated.
DEC is no longer around because of poor management decisions; its products were excellent (both software and hardware).
It was a weird situation, as most employees could see that the vast spending, the uncontrolled growth in non-product-related areas, and the decisions regarding the future of personal computing and how it should affect the company’s business model going forward would cause the company to stumble and fall. It fell hard and fast.
Despite the fact the company is gone, VMS and other divisions of DEC STILL live on, absorbed by other companies.
That’s a pretty common dissonance among engineering teams: it’s always management issues, never technical or engineering ones. Even though management is part of the engineering process ;-).
E.g., those excellent technical approaches by DEC, in the end, led to products which were either underperforming, or priced themselves out of the market, or arrived too late to matter. Whereas Microsoft’s supposedly “sloppier” MO ended up being consistently right on the money, literally. Or at least it was, until other even sloppier and hungrier companies started popping up trying to eat their lunch. It’s the circle of life, I guess…
I believe NT started off as a very high quality VMS-inspired OS. However, it has generally deteriorated with each iteration since it became a consumer desktop OS.
https://twitter.com/BrandonLive/status/333770561526317056
This I believe.
OK, let’s make this short.
First, I don’t have any experience with Windows or Linux kernel development and patch submission.
Peer review is the norm on medium/large private code development projects, and has been for years. No one wants to look dumb to their “pals” (and even less to their “rivals”). The Windows kernel is definitely not a small project, so I guess the “pride” differential argument really does not apply at this scale. It may happen on small (very few developers) open source projects, though, as visibility of bad code may be, for some, kind of embarrassing.
New challenges are a great stimulus to attract talented developers, but so is money. MS used to be famous for rewarding its developers. Unless things have changed a lot in recent years, I would not use this card as a differentiator either.
Getting back to “regular” coding: I do, however, have some experience submitting “fix snippets” to small projects through independent patch proposals. Some got accepted, some did not.
Some maintainers are nice guys who ask you a few questions, suggest improvements, and give some well-founded arguments on why things are the way they are and what the consequences associated with changes could be; some just refuse or apply the patches; some are jerks who sit on the code as if what is there is the only reasonable way to do whatever the code needs to do, and shoot down any attempt to alter it with crap excuses.
It happens in both worlds.
I don’t want to point fingers, so just google for jerk open source maintainers if you want.
Exactly. (Well, the first part of your response; I don’t know about the rest, as my experience there is limited.)
Nice to see the Linux vs. Windows penis measuring contest is still alive and well. Personally I use both operating systems so I suppose I win no matter what.
Yeah… I use both too.
But my penis is larger when using Linux (such a crazy conversation this is).
This whole conversation and topic is just too silly, and the main point from the article is completely missed!
This isn’t about “benchmarks and performance” (the kernel may be slower, optimized graphics may make up for it, etc.… there is a whole stack of software layers that shapes the final perceived performance of a computer).
This is actually about companies like Microsoft and, in fact, any large company with employees and divisions/teams that defend their jobs and status within the company more than the product and the company itself.
Again, it’s not that this doesn’t exist in open source development (with open source developers and their NIH syndrome)… but it’s different, because collaboration and patches are extremely welcome, and there is always the freedom to take the code, fork it, and do it properly.
Having worked at both and experienced the huge difference between small open source companies and big Java enterprises… I can relate to the topic completely.
Does that mean you have two penises?
http://www.youtube.com/watch?v=mIUk08iYZKE
Detachable…
What else is new?