Linked by Thom Holwerda on Sat 11th May 2013 21:41 UTC
Windows "Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening." That's one way to start an insider explanation of why Windows' performance isn't up to snuff. Written by someone who actually contributes code to the Windows NT kernel, the comment on Hacker News, later deleted but reposted with permission on Marc Bevand's blog, paints a very dreary picture of the state of Windows development. The root issue? Think of how Linux is developed, and you'll know the answer.
corrections
by TechGeek on Sat 11th May 2013 21:57 UTC
TechGeek
Member since:
2006-01-14

publicity -> publicly

ware -> aware

Reply Score: 2

makes sense
by TechGeek on Sat 11th May 2013 22:01 UTC
TechGeek
Member since:
2006-01-14

This makes a lot of sense. It's difficult to pay for the manpower to match a group who do something out of love of the art. Microsoft, at least at the top levels, has known this for a long time. It's the reason for their unending war on Linux, even when they may publicly applaud open source. It's hard to compete with free.

Reply Score: 11

RE: makes sense
by cdude on Sun 12th May 2013 06:23 UTC in reply to "makes sense"
cdude Member since:
2008-09-21

It's difficult to pay for the manpower to match a group who do something out of love of the art

The vast majority of devs working on the Linux kernel are paid and yet still do their work out of love of the art. The one doesn't exclude the other. At least not on Linux.

http://www.computerweekly.com/blogs/open-source-insider/2012/04/lin...

Edited 2013-05-12 06:26 UTC

Reply Score: 12

RE[2]: makes sense
by Lennie on Sun 12th May 2013 13:11 UTC in reply to "RE: makes sense"
Lennie Member since:
2007-09-22

If you regularly do good work on the Linux kernel, you can pretty much be guaranteed to get job offers from companies.

You also see that the share of "companies" working on the code is still rising:

http://lwn.net/Articles/547073/

I mentioned them as "companies" because they pretty much have no say in what the kernel developers they hire do. They mostly just keep working on what they've always worked on.

Companies hire the developers that work on things they care about. Not so much to influence it, but because they want an in-house expert.

Reply Score: 7

RE: makes sense
by bassbeast on Sun 12th May 2013 07:35 UTC in reply to "makes sense"
bassbeast Member since:
2007-11-11

No it doesn't, not really, for several reasons. One, nobody is gonna care about "faster" when the average $300 PC is several TIMES more powerful than its owner will need.

Two, nobody is gonna care about speed and innovation if you just broke $30k worth of office software and left an entire company's products broken; the risk isn't worth it in many cases.

Third, Linux pays for this "speed and innovation" with one of the worst driver models in the history of computing, a driver model so messed up that to this very day a company can't just put a penguin on the box and a driver on the disc, because thanks to Torvalds' laughably bad driver model there is a good chance the driver WILL be broken before the device even reaches the shelves.

Let me end with an illustration, from my own life, of why this entire argument is pointless and moot when it comes to OSes... Next week, after nine years of faithful service, I'll be putting my old Sempron office box in the cheap hardware pile. Now that is NINE years, TWO service packs, and probably 3000-plus patches... and not a single broken driver, NOT ONE.

For the vast majority of people on this planet Linux isn't free as in beer nor free as in freedom, it's free as in worthless. I'm sorry if that makes some people upset but it's true: you can give me your OS for free, but if my wireless is toast on the first update and my sound goes soon after? Well, into the trash it goes.

Faster kernels mean jack squat if they don't lead to a better product, and when Googling for fixes, forum hunts, and drivers that work in Foo but break in Foo+1 are the order of the day, they don't. I mean, MSFT has put out a product more hated than Vista, ME, and Bob put together and Linux is STILL flatlined... doesn't that tell you something?

Reply Score: 6

RE[2]: makes sense
by RshPL on Sun 12th May 2013 09:52 UTC in reply to "RE: makes sense"
RshPL Member since:
2009-03-13

On the other hand, once a driver lands in the kernel, it becomes the same moving target as the whole thing. Most of the webcams released for Windows XP no longer work on Windows Vista/7/8, while on Linux you can use many of those webcams on all kinds of hardware, from laughably old x86 to boards such as the Raspberry Pi. Plug it in and it just works! Thank you Linus, this is the reason why I love Linux. It might be beneficial in a few cases to have a stable ABI, but see the bigger picture and imagine today's stuff still just working in 20 years' time - unlike Windows. I think a stable ABI would prevent that.

Reply Score: 12

RE[3]: makes sense
by jockm on Sun 12th May 2013 15:38 UTC in reply to "RE[2]: makes sense"
jockm Member since:
2012-12-22

Your mileage may vary, but that is NOT my experience with webcams, especially since most webcams manufactured in the last several years support UVC ( http://en.wikipedia.org/wiki/USB_video_device_class ).
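
For what it's worth, a UVC camera is exposed through the generic V4L2 interface regardless of vendor. A minimal sketch (my own illustration, assuming a camera node at /dev/video0) that simply asks which driver bound to the device:

/* uvc_check.c - query a webcam through the generic V4L2 interface.
   Illustrative only; assumes /dev/video0 exists. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {  /* standard V4L2 capability query */
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }
    /* For a UVC camera the driver field typically reads "uvcvideo",
       whichever vendor made the hardware. */
    printf("driver: %s\ncard:   %s\n", (const char *)cap.driver, (const char *)cap.card);
    close(fd);
    return 0;
}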

Reply Score: 3

RE[3]: makes sense
by zima on Sat 18th May 2013 22:27 UTC in reply to "RE[2]: makes sense"
zima Member since:
2005-07-06

Webcams are a telling example, but not necessarily in the way you think... I'm sort of a collector of old webcams (I even have the very first dedicated webcam, the Connectix QuickCam for ADB Macintoshes) and things are not so rosy under Linux.

Model/family-specific Linux drivers for oldish webcams are often quite half-baked, not exposing all the functionality or the full capability of the camera. Just because a driver is in the kernel doesn't mean it can't be neglected...

MS pushed through the USB video class (as a Windows logo requirement), which also improved the situation on Linux - and still the driver is somewhat half-baked (since it's within v4l2, it doesn't support stills).

Plus, "Most of the webcams released for Windows XP no longer work on Windows Vista/7/8" is probably not true - especially big names often do released Vista & up drivers.

Reply Score: 2

RE[2]: makes sense
by bert64 on Sun 12th May 2013 12:47 UTC in reply to "RE: makes sense"
bert64 Member since:
2007-04-23

People do care about faster; perhaps not those who run a single system on a modern desktop, but think of other areas where Linux happens to be strong.

Embedded devices - much slower hardware, better performance counts (and can also improve battery life).
Supercomputers - slight performance improvement, multiplied by thousands of nodes.
Virtualization - slight performance improvement, multiplied by large numbers of virtual machines.

As for drivers, I have never had a driver that was in the kernel break, and that is where drivers should be, so that they can be improved and debugged alongside the rest of the kernel. The idea of third parties creating binary drivers is extremely damaging, and is one of the reasons why Windows has never been successful on non-x86 systems. I very much like the fact that virtually all the USB peripherals I use on my x86 Linux boxes will also work on my ARM-based Linux systems. Closed-source binary drivers will never have this level of flexibility.
Also, even MS were forced to break the driver interface with Vista because the old driver interface was holding them back. Linus doesn't want to get stuck in a situation like that, where progress is impeded unnecessarily. Linux typically undergoes regular, small changes instead of sudden large ones, and when the interface is changed the in-kernel drivers are typically changed at the same time, so nothing will ever be in a state of not working unless you're running bleeding-edge development kernels (an option that closed-source systems typically don't even provide at all).

Linux is not flatlining; it's huge in pretty much every market except desktops, and its lack of success on desktops is down to marketing and lock-in more than any lack of technical merit.

Reply Score: 9

RE[2]: makes sense
by Lennie on Sun 12th May 2013 13:26 UTC in reply to "RE: makes sense"
Lennie Member since:
2007-09-22

Now that Linux is mainstream that driver model is doing fine.

These days lots of drivers are developed in the Linux kernel before the product ships. It takes a lot of time to get a product to ship; there are kernel drivers that were already done and released before the product came to market. Yes, it also takes some time to get into distributions. This is true, but a lot of popular distributions are now on a 6-month release cycle. This helps a lot.

I don't know if this is still true, but you could even get someone to develop a driver for your device for free: http://www.kroah.com/log/linux/linux_driver_project_kickoff.html even if you require an NDA.

So lots of things have changed.

Obviously some companies are still not participating all that well. And I don't even mean Nvidia. They can't develop the driver for their desktop GPUs in the open because they don't own the license on their own drivers; they hired another company to develop it.

The mobile GPU drivers from Nvidia, however, are actually in the kernel and were submitted by Nvidia.

Reply Score: 4

RE[2]: makes sense
by Valhalla on Sun 12th May 2013 14:55 UTC in reply to "RE: makes sense"
Valhalla Member since:
2006-01-24


Two, nobody is gonna care about speed and innovation if you just broke $30k worth of office software and left an entire company's products broken; the risk isn't worth it in many cases.

How would Linux the kernel suddenly break office software? The userland API/ABI is extremely stable; add to that the fact that no company would be running their business on bleeding-edge kernels to begin with - they will most likely use a tried and tested LTS kernel version.


Third, Linux pays for this "speed and innovation" with one of the worst driver models in the history of computing, a driver model so messed up that to this very day a company can't just put a penguin on the box and a driver on the disc, because thanks to Torvalds' laughably bad driver model there is a good chance the driver WILL be broken before the device even reaches the shelves.

Your old tired bullshit again. First off, Linux (while being just a kernel) supports more hardware devices than any other operating system out there, and it does so straight out of the box.

A company wanting to 'put a Linux sticker' on the box doesn't need to put a driver on a disc; they can have the driver be part of the actual kernel tree, shipped with all Linux distros and maintained against ABI changes by the kernel devs.

ABI changes don't break these drivers inside the kernel (if the kernel devs change an internal interface they update the drivers with it); they only break those very few external proprietary drivers that exist for Linux, and no, it's not as if those drivers have to be rewritten - typically they just have to be slightly modified and re-built against the new ABI.

And those few proprietary driver vendors do continuously update their drivers against the new ABI versions, so it's not even a practical problem when it comes to proprietary drivers; you just get the new driver from your package manager.
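
To make that concrete, here is a minimal out-of-tree module (purely my own illustrative sketch, not anyone's real driver). It is compiled against the headers of one specific kernel version, which is exactly why a rebuild - and occasionally a small source tweak - is needed whenever the in-kernel interfaces move, while in-tree drivers get that work done for them:

/* hello_mod.c - illustrative out-of-tree module.
 * Typically built against the running kernel's headers with:
 *   make -C /lib/modules/`uname -r`/build M=$PWD modules
 * so the resulting .ko is tied to that kernel version. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello: built against one specific kernel's interfaces\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative out-of-tree module");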

By comparison, when Microsoft updates its driver ABI, as with Windows Vista, tons of perfectly functional hardware is made obsolete because the hardware companies won't release new drivers for older hardware (they want you to buy new hardware).

And it's not as if a Windows ABI version is in reality stable during its own lifetime, as shown by the Vista SP1 driver malfunction debacle.


you can give me your OS for free, but if my wireless is toast on the first update and my sound goes soon after? Well, into the trash it goes.

Again, unless you run a bleeding-edge distro (and even if you do), any problem such as this is extremely unlikely to be caused by the Linux kernel, ABI change or not.

and Linux is STILL flatlined... doesn't that tell you something?

The only area in which Linux 'flatlines' is the end-user desktop, an area where no one has been able to challenge Microsoft, not even OS X riding the 'trendy/hipster' wave with massive marketing. In every other area of computing Linux either dominates or is doing extremely well. And guess what, those areas ALSO USE DRIVERS - how can this be? According to you they should just keep crashing due to your imagined ABI problem.

A non-stable ABI means no cruft, no need to support crappy deprecated interfaces and functionality because some driver out there may still use them (hello Windows). Instead you modify and re-compile the drivers to work against the new, improved ABI, and this is something the kernel devs do for you if you keep your driver in the kernel.

Again, for those few proprietary driver holdouts, yes, they need to do this themselves, and they do. The result is that all Linux drivers improve as they make use of new and enhanced functionality provided by the kernel.

Reply Score: 15

RE[3]: makes sense
by lucas_maximus on Sun 12th May 2013 17:35 UTC in reply to "RE[2]: makes sense"
lucas_maximus Member since:
2009-08-18

Well, this bullshit comes up time and time again because stuff breaks. How many times has someone's video gone tits up after a kernel upgrade? ... and don't give me this "it should be in the kernel" bullshit ... when you are stuck at a command prompt having to use another machine to Google the solution, it isn't a lot of fun.

Stable APIs/ABIs are good engineering, like it or not.

Edited 2013-05-12 17:36 UTC

Reply Score: 1

RE[4]: makes sense
by Valhalla on Sun 12th May 2013 18:31 UTC in reply to "RE[3]: makes sense"
Valhalla Member since:
2006-01-24

Well, this bullshit comes up time and time again because stuff breaks.

No, this shit comes up time and time again from guys like you and bassbeast who don't even run Linux.

How many times has someone's video gone tits up after a kernel upgrade

Yes, tell me, how many times? I've used a bleeding-edge distro for 5 years and my video drivers (I use NVidia cards) have never caused breakage; the only time I had a problem with a kernel update that forced me to downgrade was an instability caused by my network driver during a major network driver rewrite.

And that is because I run a bleeding-edge distro, Arch (kernel 3.9.2 went into core today); had I been using a stable distro I would not have been bitten by that bug either.

... and don't give me this "it should be in the kernel" bullshit

What bullshit is that? It works for a goddamn gazillion hardware drivers, right out of the goddamn box. And unlike Windows, which relies on third-party drivers, this means that Linux can support all this hardware on ALL the numerous architectures it runs on, which is of course a huge advantage of Linux.

... when you are stuck at a command prompt having to use another machine to Google the solution it isn't a lot of fun.

Beats a blue screen of death; see, I can play this game of BS too.

Stable APIs/ABIs are good engineering, like it or not.

Yes, in a perfect world. In reality there's always a cost, like that of poor choices you have to live with in order to ensure backwards compatibility. The Linux devs went with a middle way: anything inside the kernel can be changed at any time, hence you either put your code in the kernel, where it will be maintained against changes, or you do the labour yourself.

Meanwhile, breaking kernel-to-userspace interfaces is a big NO.

Reply Score: 4

RE[5]: makes sense
by lucas_maximus on Sun 12th May 2013 18:45 UTC in reply to "RE[4]: makes sense"
lucas_maximus Member since:
2009-08-18

Except I actually do run Linux. I don't pretend it is perfect and I don't make out that poor decisions are good when they blatantly aren't.

Reply Score: 2

RE[6]: makes sense
by Valhalla on Sun 12th May 2013 19:29 UTC in reply to "RE[5]: makes sense"
Valhalla Member since:
2006-01-24

Except I actually do run Linux.

Then tell us what distro you are using and what these kernel breakages you keep suffering are.

As I am running a bleeding-edge distro I am fully aware of, and even expect, that things may break, yet it's been amazingly stable (granted, I don't enable testing; I'm very thankful for those who do and report bugs, which are then fixed before I upgrade).

I don't pretend it is perfect

Neither do I; as I said, I had to downgrade due to a network driver instability. Still, that's one showstopper in a five-year period on a bleeding-edge distro. Vista didn't make it past its first service pack before it started to break drivers - so much for a 'stable' ABI.

and I don't make out that poor decisions are good when they blatantly aren't.

How are they 'blatantly' poor decisions? You offer no arguments; please explain.

Now let's see: more hardware device support out of the box than any other system, ported to just about every architecture known to man, used in everything from giant computer clusters, embedded devices, servers, HPC, 3D/SFX, supercomputers, mobile phones and tablets to fridges, etc.

But yeah, according to you and bassbeast these areas don't need driver stability; they obviously can't have it, since according to you guys Linux drivers 'just keep breaking' - update the kernel and whoosh, there goes the stability.

Reply Score: 3

RE[7]: makes sense
by lucas_maximus on Mon 13th May 2013 00:34 UTC in reply to "RE[6]: makes sense"
lucas_maximus Member since:
2009-08-18

I'm not going to explain anything any more, because so far your counter-argument has been "it works for me".

Reply Score: 0

RE[7]: makes sense
by Morgan on Mon 13th May 2013 10:45 UTC in reply to "RE[6]: makes sense"
Morgan Member since:
2005-06-29

Then tell us what distro you are using and what these kernel breakages you keep suffering are.


To my knowledge he runs Ubuntu but that's irrelevant; this discussion is about NT kernel vs Linux kernel performance, not a dick measuring contest.

I can see where you're both coming from because I live in both stable LTS and bleeding edge testing worlds at the same time. On my workstation it's Windows 7 Pro and Ubuntu LTS because I like to actually be able to get work done without wasting time fiddling around with broken crap. On my laptop it's "anything goes", as that device is my testbed for all the shiny new distro releases. Any given week I might have a Debian based distro, an Arch based one, a Slackware derivative or even Haiku booting on it. I greatly enjoy having both the stability of my work machine and the minefield that is my portable.

I feel like I say this every week lately, but if you're running a production machine for anything other than OS development, you should stick to a stable (preferably LTS) release of your favorite OS. If you're a kernel hacker, a distro contributor or you're just batshit insane, go ahead and use a bleeding edge distro as your production machine, and enjoy all the extra work that goes into maintaining it.

Reply Score: 3

RE[7]: makes sense
by acobar on Mon 13th May 2013 15:10 UTC in reply to "RE[6]: makes sense"
acobar Member since:
2005-11-15

OK, there is more to this thread of arguments than I would like to get into but, to put things in perspective, I will write up some possibly inaccurate and anecdotal specific cases that apply to me.

First, X drivers are NOT part of the Linux kernel but, frankly, does it matter where they fit in the system stack if you just want to boot to a nice GUI? Yeah, I thought so.

Some disclosures:

d.1) I have openSUSE 12.3 installed on all the computers I really use (so I am discounting the very old ones I just play with out of pure nostalgia). There are four of them.

d.2) Three of them also have Windows systems (one XP, one Vista and one 7);

d.3) One is a laptop, the others are all desktops;

d.4) I prefer nVidia graphics cards because of their drivers on Linux. Two of the machines have such cards.

Now, this is what I can tell:

1) I compile some programs on my machines to add features that are left out of the regular repos due to some restrictions (mainly because they are not "totally" free additions), or because I want to use specific patches that are not applied by default. If you use scientific packages you will find this is a common case;

2) I don't use the openSUSE nVidia driver repo because of a very simple problem: when I compile programs that depend on X devel libraries and generate the appropriate RPMs, they end up with a dependency on those nVidia drivers, which I try to avoid because not all my machines have an nVidia card (nor do the ones I supply the packages to);

3) Because of '1' and '2', when a kernel upgrade happens (only once since 12.3 or the 12.3 RCs - I am really not sure when it happened), I am thrown to the shell prompt (which I enjoy, but that is a completely different subject). On MS Windows this does not happen, but I experienced BSODs from time to time (rare on XP, very rare on newer versions) and they were never related to video driver issues. Both systems developed their own ways to cope with driver problems: rollback on MS products, and shell + update/revert/blacklist on the Linux side. I prefer the latter solution, but I am very aware that the former, at least to me, seems more foolproof for the "inexperienced user", and yes, I always install the "recovery prompt" and the like on Windows "just in case" (not only on my machines but on any I may have to support);

4) Things "break" on linux hardware support. With openSUSE 11.4, I was able to use an old nVidia FX 5200 PCI card to happily coexist with an integrated intel video driver through xinerama. It all stopped to work on 12.3 no matter how hard I tried by editing xorg.conf (after be sure it was properly picked), had to bite the bullet and buy a new card. The same thing worked flawlessly on xp, only after hacking with vista; it was "no way" with windows 7. As explained on second paragraph, I am aware that X is not part of linux kernel;

5) The common argument that if something is supported in the Linux kernel then it is going to work properly on newer versions is bollocks. For network devices, storage devices and other very important server subsystems it may be true, but I had lots of problems with video capture devices, first and foremost because the kernel drivers are only part of the "integrated" solution: you also need a proper interface working in user space, and some of them just stop working because GTK or KDE got updated and the old application is not compatible with them. To be fair, this also happens on Windows but, in this specific case, I do find there is better support there, not only in driver options but in applications as well, even though they (the hardware/software manufacturers) push the user to buy a new kit when a new MS Windows version rolls out.

6) Some of the "pro" arguments towards use of MS Office and Adobe Creative Suite are also ridiculous. The former is only needed for some quite few cases but if you ask, almost all say that they "need!" it, press them to tell what specific feature they may miss and watch things go funny. Photoshop and Illustrator can be very successfully replaced by gimp and inkscape on web development on most cases also. Problems start to pop if you need to work on press industry though. As a side note, I like more Scribus than InDesign for small projects that requires only PDF generation (probably because of years of Pagemaker use).

Where I think Linux really shines is in the automation of processes. It is really hard to beat when you properly assemble a string of operations to be performed by scripts (bash, make and all). Perhaps something equivalent could be accomplished in the MS camp with PowerShell, but I don't see "scriptable" imprinted in the DNA of the MS-world application stack, which limits what can be accomplished. So, for people like myself who just love to make most things "set and forget" (or almost, of course), there is probably a kind of leaning towards Linux, I guess.

Reply Score: 6

RE[4]: makes sense
by lemur2 on Tue 14th May 2013 11:35 UTC in reply to "RE[3]: makes sense"
lemur2 Member since:
2007-02-17

Well, this bullshit comes up time and time again because stuff breaks. How many times has someone's video gone tits up after a kernel upgrade? ... and don't give me this "it should be in the kernel" bullshit ... when you are stuck at a command prompt having to use another machine to Google the solution, it isn't a lot of fun.

Stable APIs/ABIs are good engineering, like it or not.


Just use the open source drivers ... so use a system with Intel or AMD graphics. "Problem" solved.

Reply Score: 0

RE[5]: makes sense
by Gullible Jones on Wed 15th May 2013 02:59 UTC in reply to "RE[4]: makes sense"
Gullible Jones Member since:
2006-05-23

Umm what?

a) Shelling out for a new computer because of driver regressions is wasteful and stupid. Most people don't have the time, the money, or the desire to do that.

b) The open source drivers routinely suck on lots of hardware, and are also subject to horrible regressions.

c) Again, the performance of open source drivers (particularly 2D performance) tends to be pathetic.

Reply Score: 2

RE[6]: makes sense
by lemur2 on Wed 15th May 2013 03:31 UTC in reply to "RE[5]: makes sense"
lemur2 Member since:
2007-02-17

Umm what?

a) Shelling out for a new computer because of driver regressions is wasteful and stupid. Most people don't have the time, the money, or the desire to do that.

b) The open source drivers routinely suck on lots of hardware, and are also subject to horrible regressions.

c) Again, the performance of open source drivers (particularly 2D performance) tends to be pathetic.


Firstly, let me point out that the open source Linux drivers for Intel graphics are the only Linux drivers for Intel graphics.

The other major vendor with a vendor-supported open source graphics effort is AMD:

a) Wrong. The open source graphics drivers for AMD provide better legacy support.

b) Wrong. The open source drivers available with current kernels admirably cover considerably more AMD graphics chips than fglrx does.

c) Wrong. In some areas pertaining to 2D acceleration, the fglrx closed source Linux driver is marginally better than the open source driver. In other areas of 2D acceleration, however, the closed source fglrx driver is five times slower than the open source driver. The open source graphics drivers in most areas of 3D performance achieve about 80% of closed source fglrx performance, in a few areas they are further behind at about 50%, and in a few other areas they are actually 5% to 10% faster.

Your comment is, however, correct for nvidia graphics hardware. Accordingly, I do not recommend nvidia graphics hardware for use with Linux.

Edited 2013-05-15 03:34 UTC

Reply Score: 2

RE[4]: makes sense
by matthekc on Thu 16th May 2013 15:02 UTC in reply to "RE[3]: makes sense"
matthekc Member since:
2006-10-28

I think that when you install a proprietary driver, the kernel, Xorg, and that driver should only be upgraded in sync, when the driver allows it.
More specifically, what I'm trying to say is that if the package for the driver recommends a certain kernel or Xorg, that is where those packages stay until an upgraded driver that supports the newer kernel and Xorg comes out.
This is a package management issue that could have been fixed ages ago, and in fact you can manually lock down these packages and achieve stability.

Reply Score: 1

RE[3]: makes sense
by zima on Sat 18th May 2013 22:55 UTC in reply to "RE[2]: makes sense"
zima Member since:
2005-07-06

A company wanting to 'put a Linux sticker' on the box doesn't need to put a driver on a disc, they can have the driver be part of the actual kernel tree and be shipped with all Linux distros and maintained against ABI changes by the kernel devs.
[...]
you modify and re-compile the drivers to work against the new improved ABI, and this is something the kernel devs do for you if you keep your driver in the kernel.

It seems that, in practice, this can also mean the driver often languishes, especially when it's for some older piece of hardware (for example http://www.osnews.com/permalink?562009 ).

Reply Score: 2

RE[2]: makes sense
by tidux on Sun 12th May 2013 18:47 UTC in reply to "RE: makes sense"
tidux Member since:
2011-08-13

> Third, Linux pays for this "speed and innovation" with one of the worst driver models in the history of computing, a driver model so messed up that to this very day a company can't just put a penguin on the box and a driver on the disc, because thanks to Torvalds' laughably bad driver model there is a good chance the driver WILL be broken before the device even reaches the shelves.

WAAAAAAAAAAH my proprietary code designed to link against a GPLv2 kernel that explicitly doesn't have a static driver ABI doesn't work with a different kernel version! If your driver means that much to you, submit a patch.

Edited 2013-05-12 18:47 UTC

Reply Score: 3

RE[3]: makes sense
by bassbeast on Thu 16th May 2013 18:38 UTC in reply to "RE[2]: makes sense"
bassbeast Member since:
2007-11-11

You want proof your entire premise doesn't work? do the math:

You have a MINIMUM of 150,000 drivers for Linux, yes? And we have several thousand NEW devices released weekly... how many Linux kernel devs are there again? 500? 1000? If you kept them working 24/7/365 on NOTHING but drivers the math still wouldn't work; all it would take is Torvalds changing a pointer (which, considering I can wallpaper this page with "update foo broke my driver" posts, appears to be Torvalds' SOP) and it would take 3 to 4 YEARS just for them to give 5 minutes to each driver.

So I'm sorry, but you can bang your Linux bible all day long; what you are selling is about as believable as Adam riding a dinosaur. When every single OS on the planet OTHER than Linux has a stable ABI, are you REALLY gonna sit here and argue that Torvalds is smarter than every single OS designer on the entire planet? Really? If his driver model was good others would adopt it; they haven't, and the reason why is obvious: it's not good.

Oh, and finally, your claiming it's about the GPL makes this a religious argument, you know this, yes? You are arguing that it is okay to have a busted model as long as it promotes the "purity" that is the GPL... except every single forum tells you to use Nvidia because SURPRISE! the FOSS drivers don't work worth a damn across the board. The math doesn't work; you can spin it all you want, but you can't change the fact that the current driver model? It is really terrible.

Reply Score: 2

RE[2]: makes sense
by djohnston on Sun 12th May 2013 19:02 UTC in reply to "RE: makes sense"
djohnston Member since:
2006-04-11

Let me end with an illustration, from my own life, of why this entire argument is pointless and moot when it comes to OSes... Next week, after nine years of faithful service, I'll be putting my old Sempron office box in the cheap hardware pile. Now that is NINE years, TWO service packs, and probably 3000-plus patches... and not a single broken driver, NOT ONE.

For the vast majority of people on this planet Linux isn't free as in beer nor free as in freedom, it's free as in worthless. I'm sorry if that makes some people upset but it's true: you can give me your OS for free, but if my wireless is toast on the first update and my sound goes soon after? Well, into the trash it goes.


Hmmm. Bad hair day? Here's a real-world use case. I still own and use a 1999 Dell Dimension 4100. It's used mostly for testing purposes. When I boot it from the hard drive, it runs Debian 7 (wheezy). It's been running it for about a year now, from beta status to (just recently) stable release, with many updates in between. The kernel is a third-party 3.8-10.dmz.1-liquorix-686, the latest Liquorix version. It has gone through many updates as well.

The machine uses an 800MHz Pentium III CPU and 512MB of RAM. I'd add to the RAM count, but it's all the motherboard will register, even with more installed. The motherboard came with no ethernet port. It still doesn't have one. I use a Linksys WUSB54G card, connected to a USB port. The Linksys is quite a few years old, but not as old as the Dell.

The wireless driver for the chipset is in the kernel. It is registered with the OS, "out of the box". Through all of the "bleeding edge" liquorix updates, as well as the Debian testing updates, the setup never failed to acquire a wireless signal or connection ability.

Regressions? You clearly do not know what the hell you are talking about.

Reply Score: 5

RE[3]: makes sense
by lucas_maximus on Mon 13th May 2013 07:07 UTC in reply to "RE[2]: makes sense"
lucas_maximus Member since:
2009-08-18

To be fair, this is Debian.

If you said Ubuntu I wouldn't believe you.

Reply Score: 2

RE[2]: makes sense
by l3v1 on Mon 13th May 2013 11:30 UTC in reply to "RE: makes sense"
l3v1 Member since:
2005-07-06

No it doesn't, not really, for several reasons. One, nobody is gonna care about "faster" when the average $300 PC is several TIMES more powerful than its owner will need.


Unless you're developing resource-hungry algorithms and applications that those not-caring people are actually using. You can't base your development on thinking only about the consumers. True, they are the larger crowd, but without the developed apps and content they won't have anything to consume. You might say you don't care about the developers and only concentrate on serving the consumers, but then at least state that plainly and don't behave like you care, frustrating devs from time to time with seemingly ignorant decisions.

Reply Score: 4

RE[3]: makes sense
by bassbeast on Thu 16th May 2013 18:47 UTC in reply to "RE[2]: makes sense"
bassbeast Member since:
2007-11-11

What you are doing is called "moving the goalposts" and is one of the reasons nobody can discuss Linux anywhere. We are talking about desktops, NOT workstations, routers, phones, your toaster, or a web server, okay?

So if your argument is that Linux works on routers (where they are never updated and run a custom build) or on workstations (where companies like HP spend millions to constantly rebuild drivers after Torvalds breaks them) or on some other device? Then sure, no argument. But we aren't talking about any of those; we are talking about x86 desktops and laptops, which Linux clearly does NOT work on, or else I wouldn't be able to wallpaper this page with "update foo broke my drivers" posts.

I'll leave you with this: if one of the largest OEMs on the entire planet can't get Linux to work without running their own fork, what chance do the rest of us have?

http://www.theinquirer.net/inquirer/news/1530558/ubuntu-broken-dell...

Reply Score: 2

RE: makes sense
by Deviate_X on Sun 12th May 2013 10:38 UTC in reply to "makes sense"
Deviate_X Member since:
2005-07-11

For most of the last 30 years Windows has emphasized backward compatibility, so code written X years ago will still run today. The consequence of this is more bloat and less optimal code supporting mysterious, forgotten corner cases.

Linux, being purely engineer-led, will more proactively and aggressively root out unneeded code and optimize at the expense of compatibility.

In my personal experience, Linux is faster at many things as a _user_ (ls!), but surprisingly the difference isn't that great! And as a developer, there really isn't any worthwhile difference at all.

Reply Score: 3

RE[2]: makes sense
by bert64 on Sun 12th May 2013 12:49 UTC in reply to "RE: makes sense"
bert64 Member since:
2007-04-23

Linux has extremely good source-level backwards compatibility; lots of software written for very old Unixes will still compile and run just fine on modern-day Linux systems.

That's the difference between writing a simple, sensible and extensible system vs writing something unnecessarily complex.
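
As a trivial illustration (my own sketch, not from the original comment): a plain read/write filter written against the earliest POSIX interfaces still compiles and behaves the same with a modern toolchain and glibc:

/* cat_lite.c - copies stdin to stdout using only decades-old POSIX calls. */
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    /* read(2) and write(2) have kept their signatures and semantics,
       so this builds unchanged on a current Linux system. */
    while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}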

Reply Score: 1

RE[3]: makes sense
by Lennie on Sun 12th May 2013 13:31 UTC in reply to "RE[2]: makes sense"
Lennie Member since:
2007-09-22

Yes, backwards compatibility has a cost; this is even true in Linux:

https://www.youtube.com/watch?v=Nbv9L-WIu0s

Reply Score: 3

RE[2]: makes sense
by Lennie on Sun 12th May 2013 13:28 UTC in reply to "RE: makes sense"
Lennie Member since:
2007-09-22

[replied to the wrong comment]

Edited 2013-05-12 13:31 UTC

Reply Score: 2

RE: makes sense
by toast88 on Sun 12th May 2013 19:37 UTC in reply to "makes sense"
toast88 Member since:
2009-09-23

This makes a lot of sense. It's difficult to pay for the manpower to match a group who do something out of love of the art.


Most Linux kernel developers are actually paid for what they are doing. That, I agree, does not however contradict the fact that they probably love what they're doing.

Adrian

Reply Score: 3

RE: makes sense
by zima on Sat 18th May 2013 22:57 UTC in reply to "makes sense"
zima Member since:
2005-07-06

I think you missed the point - it's not about "free" (many Linux devs are employed anyway), it's about "in the open".

Reply Score: 2

change of culture
by sukru on Sat 11th May 2013 22:44 UTC
sukru
Member since:
2006-11-19

He mentions there are still good hackers at Microsoft, and I believe they're the ones who still keep the company afloat. The founders (Bill Gates) were really hackers in the sense we understand today (even though they charged for their work, they really enjoyed coding). But when that culture was replaced by marketers, Microsoft stopped being competitive and became just another company.

Reply Score: 9

RE: change of culture
by lord_rob on Sun 12th May 2013 22:22 UTC in reply to "change of culture"
lord_rob Member since:
2005-08-06

Just like Apple was before the comeback of Steve Jobs. Now that he has passed away, they are living off their assets and attacking any company that tries to compete with their products.

Reply Score: 1

Comment by ansidotsys
by ansidotsys on Sat 11th May 2013 23:07 UTC
ansidotsys
Member since:
2008-08-15

Looks to me like this is getting overblown. Personally, after reading the title, I was expecting a technical response to a technical claim. Instead, what I got was a pseudo-socio-political response. Reading it through, it is clear to me that whoever this is wrote it as an offhand rant.

What it comes down to, though, is that MS development is conservatively managed whereas the Linux kernel's is not. This can be an advantage: stability for Microsoft, or the many incremental improvements for Linux. When your userbase is in the billions, it makes sense to be conservative. You don't see major rewrites of significant subsystems in Windows because of this. How many times does the Linux driver ABI change? How many IPC mechanisms are there going to be? systemd, anyone? Etc, etc.

In any case, the development methodologies of both Linux and Windows have their merits. Linux, obviously, is better suited to the high-churn, just-recompile-it environment that comes with being an open source project. In fact, this goes hand in hand with the just-throw-it-away mentality of the cell phone market. Clearly that market has proven conducive to the Linux model.

On the other hand, Windows allows many leaf projects (such as commercial games) to succeed that would generally require far too much maintenance under the former. Steam is trying to mitigate this by shipping software outside the main dependency-resolution packaging world. The same is true for the vast majority of new peripherals and their drivers.

In any case, they both have merits.

Edited 2013-05-11 23:11 UTC

Reply Score: 6

RE: Comment by ansidotsys
by galvanash on Sat 11th May 2013 23:24 UTC in reply to "Comment by ansidotsys"
galvanash Member since:
2006-01-25

I was about to make a very similar post, but you saved me the effort. Yes, Microsoft has to compete with open source, and open source has some very tangible advantages... but it's not black and white - having to answer to a board, working within a schedule, and prioritizing work based on things other than technical merit create a certain level of discipline and efficiency of effort. It's not always pretty, but it's not all bad either. Some good things come out of that approach too.

Reply Score: 4

RE[2]: Comment by ansidotsys
by TechGeek on Sun 12th May 2013 01:44 UTC in reply to "RE: Comment by ansidotsys"
TechGeek Member since:
2006-01-14

True, but to be fair, the original author of the "rant" was talking from a purely technical standpoint. He was a developer, not a salesperson or corporate exec. And in the product space of software, the technical aspects are what matter most. You could be the most efficient hacker in the world, but if your software sucks, who is going to buy it?

Reply Score: 4

RE[2]: Comment by ansidotsys
by bert64 on Sun 12th May 2013 12:53 UTC in reply to "RE: Comment by ansidotsys"
bert64 Member since:
2007-04-23

Having to work within a schedule also encourages (and in some cases requires) corner cutting.

Reply Score: 3

RE: Comment by ansidotsys
by ilovebeer on Sun 12th May 2013 05:42 UTC in reply to "Comment by ansidotsys"
ilovebeer Member since:
2011-08-08

You're absolutely correct. I'd also like to add that the person who wrote the original posting is exaggerating as well. I know quite a few people (across nearly every division they have) who work for Microsoft, and the stories I hear from them paint a slightly different picture. While Windows development is indeed compartmentalized, there isn't nearly the lack of communication the OP is trying to claim. There are a lot of great programmers there, including younger ones.

BTW, everyone I know who works there loves it. The only real complaint I hear is from the ones doing contract work who would prefer they just be hired on directly.

Reply Score: 5

RE: Comment by ansidotsys
by cdude on Sun 12th May 2013 06:13 UTC in reply to "Comment by ansidotsys"
cdude Member since:
2008-09-21

When your userbase is in the billions, it makes sense to be conservative

1. The Metro API vs Win32. The insider account explains why they build new things rather than fixing and extending existing ones.
2. Android. Conservative, billions of users, Linux.

You don't see major rewrites of significant subsystems in Windows because of this

You don't see them in the Linux kernel either. Major rewrites weren't the point at all. It's about incremental improvements.

How many times does the Linux driver ABI change?

The public API/ABI, as exposed to userland, has not changed in a decade. The Linux kernel has very strict rules against changing those parts. The private API does change.
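
To illustrate that boundary (a sketch of my own, not from the original comment): a program that talks to the kernel through a raw system call keeps working unchanged across kernel upgrades, because the syscall interface is the part Linux goes to great lengths not to break:

/* raw_getpid.c - call the kernel directly, bypassing libc's wrapper. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* SYS_getpid has kept the same number and semantics for decades;
       binaries built years ago still make this exact call today. */
    long pid = syscall(SYS_getpid);
    printf("pid via raw syscall: %ld\n", pid);
    return 0;
}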

How many IPCs are there going to be? SystemD

dbus and systemd are not kernel but userland. Userland changes, on Windows too.

Edited 2013-05-12 06:32 UTC

Reply Score: 14

RE[2]: Comment by ansidotsys
by Kebabbert on Wed 15th May 2013 16:51 UTC in reply to "RE: Comment by ansidotsys"
Kebabbert Member since:
2007-07-27

>> You don't see major rewrites of significant subsystems in Windows because of this

> You don't see them in the Linux kernel either. Major rewrites weren't the point at all. It's about incremental improvements.

Hmmm... You should know that Linux gets major parts rewritten all the time. The code is always in a beta stage; the new bugs are never ironed out before the code is rewritten again. Linus Torvalds said that "Linux has no design, instead it evolves like nature. Constant evolution created humans, so constant reiteration and rewriting is superior to design." Just Google this and you will see it is true. Big parts are rewritten all the time. I am surprised you missed this. Apparently you don't read what Linux devs say about Linux.

http://kerneltrap.org/Linux/Active_Merge_Windows
"the tree breaks every day, and it's becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their [...] changes!"

http://www.tomshardware.com/news/Linux-Linus-Torvalds-kernel-too-co...
Torvalds recently stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore."

Reply Score: 2

RE: Comment by ansidotsys
by kaiwai on Sun 12th May 2013 13:46 UTC in reply to "Comment by ansidotsys"
kaiwai Member since:
2005-07-06

Don't confuse the lack of a road map and rigor mortis with being conservative, because what is occurring right now (and hurting Microsoft) is 10+ years of bumping around in the wilderness with no long-term development road map. If I as a third party look on, I want to know where Microsoft is heading. What is the role of WinRT? Is there a future for traditional desktop applications, and if so, has development of Win32 ceased in favour of WinRT? Is Microsoft supporting C++11, and if so, what is the road map for implementing those features so that developers can plan for them? Then there is longevity - are we going to see Microsoft once again bumping around, flinging crap against the wall in the form of pushing out APIs only to kill them off later when the programmers at Microsoft either get bored or realise that they really didn't fully think things through?

Then there are views such as those of the former lead of Windows, Jim Allchin, who labelled legacy code an asset whilst ignoring that it can quickly become a drag on future development; backwards compatibility is good, but when a better way to provide that compatibility exists, such as virtualisation, it should be taken advantage of. Here we are in 2013 and Windows is still suffering from the laundry list of issues I've raised in previous posts - one might have forgiven them in the past due to technical limitations, but these days, with virtualisation, there is no reason why development should merely be putting a new coat of paint on a rickety tug boat that has seen better days.

Reply Score: 5

RE: Comment by ansidotsys
by Darkmage on Thu 16th May 2013 01:32 UTC in reply to "Comment by ansidotsys"
Darkmage Member since:
2006-10-20

That is complete crap. Microsoft has had no qualms about rapidly deprecating large parts of DirectX; just look at the development of DirectX 1-7 to see that. Game development on Linux is as easy as, or easier than, it is on other platforms. The main reason for the lack of titles is developer perception of the userbase and install base. Loki Games showed over 10 years ago that Linux games could be made easily, but they also couldn't prove the business case. Only just now is the business side becoming viable in terms of installed numbers, etc. The sales figures from Loki alone illustrate this: sell 3,000 copies of a game and you're not going to make a profit on your ports.

Reply Score: 2

One can be aware, but culture runs deep
by tomz on Sat 11th May 2013 23:17 UTC
tomz
Member since:
2010-05-06

It was about a year ago, or was it two, that someone explained Microsoft's version of decimation - the Roman practice whereby, as a punishment, in every group of 10 soldiers 9 would be made to physically beat the 10th to death.

http://minimsft.blogspot.com/2011/04/microsofts-new-review-and-comp...

The other problem is that they are more concerned with turning it into a DRM system, or meeting some other political-marketing goal, than with something that is engineered well, so even "security" means hard to copy, not hard to exploit. "The browser is part of the OS", so it is mixed in with DLLs and the kernel. Change the driver model - I don't know why, just do it - so each new rev needs vendors to redo everything.

There is GMP - the Greatest Management Principle - you get more of what you reward. If you reward firemen who fix bad-code emergencies instead of refactorers who ensure there won't be any emergencies in their work, you will get firemen. If you create performance-review games, you will get people who are the best at playing those games.

There is no reason they could not continually beat Linux and be better, other than that they don't want to be - they reward other things. Look at how long IE languished until Firefox and the WebKit browsers started gaining enough share. Outlook and Hotmail? Now, since "cloud" is the new buzzword, they are trying to mimic Google and others (Apple isn't doing that well either, but their model broke - they need to have something completely new every year, and some things won't work, like Ping and Apple TV, while others will take off - but while Google has Glass, Apple...).

In the W8/WP8 death knell, I note they have no W8 Zune - a very inexpensive entry unit that a developer could use for everything not involving a phone. The Xbox is too far away from the rest of the ecosystem (hardware, XBLive, etc.) to cross-pollinate. And even with their attempts to lock down Windows 8 with UEFI, and prevent going retro instead of Metro, they damaged the PC market - it takes rare talent to kill your cash cow. Ballmer is considered the top CEO who should have been fired much earlier.

Just to contrast, BlackBerry's new offerings reportedly have double the battery life, a browser better than anything from Android, run most Android apps but also offer other dev systems, and have really good PC support, security, and a QWERTY model - the predictions of their impending death were exaggerated. It's not hard, but it requires realizing what you must do when the world is changing around you and your monopoly is being eroded daily.

Then there's the comic attempt at Office - the "XML" version and the political manipulations to get that piece-of-trash spec, which no version of Office actually supports, adopted as an "open standard". ODF might be ugly and have gaps, but there's open source again: KDE, OpenOffice, and LibreOffice manage to make it work.

The checkbook doesn't buy innovation. Apple tried it with their Maps. I'm not sure what they were thinking with the whole Windows 8 set of things - they look to be trying Apple-style ecosystem lock-in without the fanbois, either consumers or developers. They are trying hardware again - so there are lots of calls of "neat", and then people go out and buy something else.

Hopefully they will right themselves before crashing. Apple is already in decline (suing Samsung over silly patents, though Microsoft is doing the same to Motorola).

Perhaps they have enough patents and lawyers to freeze technology for a decade while they live off licensing fees - Xerox did that with photocopiers for a while.

Reply Score: 11

tylerdurden Member since:
2009-03-17

BlackBerry may not be a good example at all. They are still losing market share, and that is not a good thing in a market which, overall, is growing very rapidly.

Edited 2013-05-13 21:09 UTC

Reply Score: 3

Each model has its advantages
by matthew-sheffield on Sun 12th May 2013 04:39 UTC
matthew-sheffield
Member since:
2013-04-30

This was an interesting look into the development process within Microsoft. I'm not sure if the person who wrote it was involved in the recent MinWin effort, as that was an attempt to streamline things. Still, it was a break from their normal process, which we've seen described in similar fashion before.

All that being said, while MS clearly has some flaws in its project management, one should not infer from this rant that the larger *nix world is not susceptible to the same flaws. Accelerated graphics is a perfect example of that, what with Compiz, Beryl, Emerald, Mutter, and KWin all reinventing the wheel the first time around, and then Wayland and Mir (what about X12?) looking to create anew rather than refine existing work. While MS does not fix every bug, because its programmers are not merely trying to scratch their own itches they are forced to do more QA testing and fixing. One can debate Win8, but aside from that MS does not release half-finished products the way you sometimes see in OSS-land.

I say all this as someone who has a Windows 8 laplet (laptop/tablet), a Suse laptop, and a dual Win7/Ubuntu desktop.

Reply Score: 5

Not surprising
by WorknMan on Sun 12th May 2013 06:37 UTC
WorknMan
Member since:
2005-11-13

What this guy describes is hardly isolated to Microsoft. At the company where I work, I've seen a lot of talented developers leave because the higher-ups didn't want to pay them what they were worth, and so they went to work for somebody else who would. In fact, if you're already in the organization and get promoted, you will make less money in your new position than somebody hired off the street. I moved from customer support to engineering a few years ago, but I still have a CS title, because apparently it takes an act of Congress just to get a real promotion. No, I don't understand it either.

For this reason, you have the old guard leaving and new recruits coming in. And since very little has been documented over the years, the only real way to know the system well is to have worked there for several years and understand why things were set up the way they were. Hell, on some servers there are apps set up on cron to be restarted once or twice a day and nobody knows why. So they turn off the restarts, find out that the apps run out of memory and/or crash midway through the day, and turn the restarts back on. And then they repeat this process every 2-3 years.

Reply Score: 7

RE: Not surprising
by Alfman on Sun 12th May 2013 07:03 UTC in reply to "Not surprising"
Alfman Member since:
2011-01-28

WorknMan,

I think these experiences are universal. Heck, no programmer's manifesto would be complete without addressing them. I've only worked with small/medium businesses, but the same kind of motivation problems definitely occur there too. The effort to make software better often goes unrewarded. One of my former bosses said improvements and fixes were a waste of company money because the company only gets paid for new contracts, not for fixing things out of the kindness of our hearts. It can make a good programmer feel very unappreciated and take little pride in the work, but it's really just business. Of course, in public, companies won't admit any of this; employees are at risk even talking about it.

Reply Score: 6

RE: Not surprising
by Soulbender on Sun 12th May 2013 07:25 UTC in reply to "Not surprising"
Soulbender Member since:
2005-08-18

...why are you staying with the company?

Reply Score: 2

RE[2]: Not surprising
by lucas_maximus on Sun 12th May 2013 16:07 UTC in reply to "RE: Not surprising"
lucas_maximus Member since:
2009-08-18

I was thinking the same. I know a few guys that are fantastic programmers but work for my old place.

Reply Score: 2

RE[2]: Not surprising
by WorknMan on Sun 12th May 2013 19:04 UTC in reply to "RE: Not surprising"
WorknMan Member since:
2005-11-13

...why are you staying with the company?


I am not one of the developers and for my position, they actually pay more than they probably should ;) Plus, it's a rather large company that's pretty stable, unlike some of the fly-by-night startups that people are leaving for. I've seen some folks head out for greener pastures, only to return 6 months to a year later.

Reply Score: 3

RE: Not surprising
by netpython on Sun 12th May 2013 16:41 UTC in reply to "Not surprising"
netpython Member since:
2005-07-06

Seems there is and was a lack of documentation.

Reply Score: 2

Is it really Microsoft specific?
by malxau on Sun 12th May 2013 10:00 UTC
malxau
Member since:
2005-12-04

For clarity, I work at Microsoft, on some of the components the OP is referring to. Prior to Microsoft, I contributed to open source projects. And right from the start, note that these open source projects can't really be described collectively, since each project has its own culture, process and style.

Have there been things at Microsoft that I'd like to have been able to do and couldn't? Sure. I don't think it's nearly as bad as the poster describes - I've been well rewarded for getting things done, and I do things I'm not asked to, including refactoring, all the time. Since each team owns its own code, modifying another team's code is a social problem, and that doesn't always go the way I want. But in open source, it's still not all my code, and the social problem is still there.

Back in the day, I contributed to Gaim (now Pidgin). My experience working with them was fantastic, but it's since been forked because people can't agree on the behavior of the size of the input box (!).

I wrote a patch for the Linux kernel for my own benefit, and submitted it to LKML for glory. That list is huge, and the patch attracts plenty of review, including valid and constructive feedback on how to improve it. But since it's my first patch, the feedback requires learning a lot more of the system and building a much bigger feature. This is not a bad process - it results in good code - but it's not encouraging for new contributors. My patch never merged.

I wrote a patch for Mozilla. This one saddens me more than anything. Specifically, I took an abandoned patch, revived it against current code, polished it, finished it, and submitted it. It gets reviewed, rejected, unrelated flaws are found because people are testing the code, it languishes, and some part of it gets merged. The UI part was against the SeaMonkey UI, and Firefox has never had UI support for it (about:config only). The bug I addressed is still active due to unfinished work, people still work on related bugs, and the most frequent outcome is more incomplete, abandoned patches just like the one I started from. I still get Bugzilla emails from complaining users about things there, and am no longer in a position to just step in and fix them. If I were to compare this to Microsoft: at Microsoft you need to convince a team of the need to do something, but at Mozilla you have to do it yourself, including every potential tangent to it, and do it perfectly. Again, this is not necessarily a criticism - it's always good to have features that are complete rather than "it works in case X but not case Y" - but it's not welcoming.

I agree with the OP that fixing an existing technology is often better than writing a new one to add to the side. And Microsoft does that. But so do open source projects - if X can't be fixed, have Wayland. If UFS can't be fixed, have ZFS. If NFSv3 can't be fixed, have NFSv4 (which shares little except a name.) Again, this is not a criticism - whether Microsoft or Linux, this is done because it ensures great compatibility with existing deployments, who don't need to embrace a disruptive change immediately; the two can coexist while migration occurs. The unfortunate part is it means the installed base, running the older thing, will not be as good as it could be. Open source has an advantage in the nature of its users and deployments which allow the older thing to go away faster than in the Microsoft world, but even there, my systems still have gtk 1.2 installed.

It's great to hear the OP care so passionately about Microsoft. We do face valid challenges, and I'll always be open to anyone who is trying to improve the state of my area, but it's important to note that the engineering issues are shared by everybody. If the OP has great ideas for how to improve performance of filesystems and storage, come talk to me.

Reply Score: 11

Why ZFS was actually written
by saso on Sun 12th May 2013 10:47 UTC in reply to "Is it really Microsoft specific?"
saso Member since:
2007-04-18

If UFS can't be fixed, have ZFS.

Minor side note: ZFS wasn't written because UFS couldn't be "fixed". It was written because fundamentally the classical storage stacks simply did not scale. The project's scope was much larger than just writing a new filesystem.

Reply Score: 6

malxau Member since:
2005-12-04

Its about "Get The F***K away from my code", when person who want changes is from oustide.


Well, I've never heard that statement or anything remotely resembling it at Microsoft. Most conversations focus on tradeoffs, priorities, consequences of a change, and resource constraints. These are not always obvious to the person proposing a change, who is only concerned with one specific thing.

...Which is another way of saying, it's not unlike my experience with Linux or Mozilla. All have a similar discourse with people sincerely focused on building the best product possible.

Personally when presented with a good change on one of my components, I'll gladly just take it - easier that than reinventing it myself.

Reply Score: 5

cdude Member since:
2008-09-21

What's your take on stack-ranking management and how it aligns with motivation, "out of order" innovation, and improvements?

Edited 2013-05-13 05:21 UTC

Reply Score: 2

Comment by lucas_maximus
by lucas_maximus on Sun 12th May 2013 12:58 UTC
lucas_maximus
Member since:
2009-08-18

The "proof" is un-verifiable to anyone except those that have access to the Windows Kernel Source Code.

Reply Score: 5

v old stuff
by distrodude on Sun 12th May 2013 13:19 UTC
RE: old stuff
by hussam on Sun 12th May 2013 15:41 UTC in reply to "old stuff"
hussam Member since:
2006-08-17

Too bad for Microsoft. You had to know that Microsoft is reaching the end. Linux development shows the way to get things done.. No commercial entity can compete with that.
Go Linux, go!

A lot of Linux development is done by commercial entities too.

Reply Score: 5

RE[2]: old stuff
by ilovebeer on Tue 14th May 2013 01:00 UTC in reply to "RE: old stuff"
ilovebeer Member since:
2011-08-08

"Too bad for Microsoft. You had to know that Microsoft is reaching the end. Linux development shows the way to get things done.. No commercial entity can compete with that.
Go Linux, go!

A lot of Linux development is done by commercial entities too.
"
"A lot" might actually be understating it. Commercial entities are involved in practically every aspect of Linux, and that fact is such common knowledge it's hard to believe anyone would think otherwise -- unless they have no clue what they're talking about.

Reply Score: 3

...
by Hiev on Sun 12th May 2013 15:44 UTC
Hiev
Member since:
2005-09-27

If that is true, why does the Windows 8 desktop feel faster to me than any Linux distro? Or is this exclusive to servers?

Edited 2013-05-12 15:53 UTC

Reply Score: 4

RE: ...
by lucas_maximus on Sun 12th May 2013 17:40 UTC in reply to "..."
lucas_maximus Member since:
2009-08-18

The post is bullshit; the proof is unverifiable outside of Microsoft.

The fact that memory usage has gone down from Vista onwards pretty much confirms this is bullshit.

Reply Score: 4

RE: ...
by ze_jerkface on Mon 13th May 2013 01:29 UTC in reply to "..."
ze_jerkface Member since:
2012-06-22

The Windows XP/Vista/7/8 desktop is no doubt faster than any X/KDE/GNOME combination.

But if you are talking about Linux as in Linux the kernel it depends on the measurement.

I haven't seen benchmarks recently but Linux will probably beat Windows for raw file i/o. How much of the difference matters in the real world is another question.

The situation is similar to 3D cards. It's not always about which is faster, it also depends on what you want to run.

Reply Score: 3

RE[2]: ...
by cdude on Mon 13th May 2013 05:27 UTC in reply to "RE: ..."
cdude Member since:
2008-09-21
RE[3]: ...
by bassbeast on Thu 16th May 2013 19:08 UTC in reply to "RE[2]: ..."
bassbeast Member since:
2007-11-11

Uhhh... I don't know if I'd want to brag about that, friend. That is like saying "Linux runs DOS better than Windows 7/8", because Valve frankly has one of the more piss-poor game engines in gaming; it's not even DirectX 9c yet, and that was released... what? 2006 or so?

Valve hasn't been a gaming house in quite a while; all their R&D goes to the Steam service. I have no doubt its games run faster on Linux, as both its OpenGL and DirectX use are waaay behind the times, and while Linux still supports ancient versions of OpenGL, MSFT really doesn't care about OpenGL, nor about DirectX before 9c - it's just too old.

Reply Score: 2

RE[3]: ...
by zima on Sat 18th May 2013 17:45 UTC in reply to "RE[2]: ..."
zima Member since:
2005-07-06

Yeah, only that was an irrelevant scenario, a few hundred fps (the Windows DirectX team probably rightfully thought it pointless to optimise for such a scenario in DX9). OTOH, Valve wants bad publicity for Windows - now that MS, with their app store, is a direct competitor to Steam.

Reply Score: 2

Too funny
by hussam on Sun 12th May 2013 16:11 UTC
hussam
Member since:
2006-08-17

this made my day

(Besides: you guys have systemd, which if I'm going to treat it the same way I treated NTFS, is an all-devouring octopus monster about to crawl out of the sea and eat Tokyo and spit it out as a giant binary logfile.)

Reply Score: 2

RE: Too funny
by Lunitik on Sun 12th May 2013 16:41 UTC in reply to "Too funny"
Lunitik Member since:
2005-08-07

this made my day

(Besides: you guys have systemd, which if I'm going to treat it the same way I treated NTFS, is an all-devouring octopus monster about to crawl out of the sea and eat Tokyo and spit it out as a giant binary logfile.)


It is simply ignorant.

Systemd is SMALLER than init or any other such program.

It is not devouring anything, it is simply creating replacements for many prior projects. These are all separate binaries, and can be used or left out.

The logging is far superior to anything we've had before, the very fact that it is binary ensures more security. Before, it was relatively easy for a hacker to just edit the logs and the admin wouldn't know he was there. Now there are mechanisms in place to ensure the log really comes from where it is intended, and it is much harder to change that information.

It is certainly vastly different, but having used it for a while, I would never use another init mechanism.

Imagine, a single tool to initialize everything on the system: not one for bringing up the system, another for timed events, another for scheduling events, another for dealing with events on demand, another for acting on new hardware - everything logged and tracked in a uniform way, everything managed in a uniform way. The sheer number of in-kernel mechanisms it makes easily available to admins is staggering. Systemd actually lets us take full advantage of the platform, rather than remaining confined to 30-year-old mechanisms.
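
As one small illustration of that uniformity (the unit and script names below are made up, just a sketch), a nightly cron entry becomes two tiny unit files handled by the same manager as every other service:

backup.service:
[Service]
ExecStart=/usr/local/bin/backup.sh

backup.timer:
[Timer]
OnCalendar=daily
[Install]
WantedBy=timers.target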

People dislike change, people like using shell scripts to do things, cool. Systemd can execute any type of script or binary, so actually it is more powerful than simple shell scripting in this regard too. As for change, there is always Slackware - the state of the art in the early 90's - or any of the BSD's. Everyone else except Debian is moving on, but they've never complied with standards anyway.

Edited 2013-05-12 16:49 UTC

Reply Score: 5

RE[2]: Too funny
by djohnston on Sun 12th May 2013 19:09 UTC in reply to "RE: Too funny"
djohnston Member since:
2006-04-11

The logging is far superior to anything we've had before, the very fact that it is binary ensures more security. Before, it was relatively easy for a hacker to just edit the logs and the admin wouldn't know he was there. Now there are mechanisms in place to ensure the log really comes from where it is intended, and it is much harder to change that information.

True, that. And there's the journalctl tool to display all the system logs, in one place, in human-readable format.

Everyone else except Debian is moving on, but they've never complied with standards anyway.

Never complied with standards? In what way? What supporting evidence can you show?

Reply Score: 1

RE[3]: Too funny
by Lunitik on Sun 12th May 2013 22:13 UTC in reply to "RE[2]: Too funny"
Lunitik Member since:
2005-08-07

Runlevels are a defined standard; Debian simply ignored them. Many will say you can create similar functionality, but you can also create something similar via targets in systemd - that doesn't mean it follows the standard.
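
(For reference, systemd itself ships compatibility aliases for the old runlevels - if I recall the mapping correctly: runlevel1.target -> rescue.target, runlevel3.target -> multi-user.target, runlevel5.target -> graphical.target - so the old convention can still be honoured without actually following it.)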

There is also the issue of using dash for /bin/sh which results in some oddities.

Reply Score: 2

RE[4]: Too funny
by Soulbender on Mon 13th May 2013 02:35 UTC in reply to "RE[3]: Too funny"
Soulbender Member since:
2005-08-18

Run levels are a defined standard,Debian simply ignored them


And good on them for moving ahead and, just like Slackware, getting rid of the obsolete runlevels system.

Reply Score: 3

RE[5]: Too funny
by l3v1 on Mon 13th May 2013 11:49 UTC in reply to "RE[4]: Too funny"
l3v1 Member since:
2005-07-06

"Run levels are a defined standard,Debian simply ignored them


And good on them for moving ahead and, just like Slackware, getting rid of the obsolete runlevels system.
"

And actually that's the de facto standard FOSS way of progress: a new idea is implemented and allowed to live, and if it proves better it will propagate into other distros as well. If not, it will eventually die out. Maybe ( ;) ) not the best way, but still, it works.

Reply Score: 3

RE[4]: Too funny
by djohnston on Mon 13th May 2013 15:48 UTC in reply to "RE[3]: Too funny"
djohnston Member since:
2006-04-11

Run levels are a defined standard, Debian simply ignored them - many will say you can create similar functionality, but you can create similar via targets in systemd, this doesn't mean it follows the standard.

They don't really "ignore" them.
http://wiki.debian.org/RunLevel
http://www.debian.org/doc/manuals/debian-reference/ch03.en.html#_st...

There is also the issue of using dash for /bin/sh which results in some oddities.

That's a good point. I had forgotten they did that after Lenny. bash is the default "interactive" shell.

Reply Score: 2

RE[2]: Too funny
by satsujinka on Sun 12th May 2013 19:34 UTC in reply to "RE: Too funny"
satsujinka Member since:
2010-03-11

The logging is far superior to anything we've had before, the very fact that it is binary ensures more security. Before, it was relatively easy for a hacker to just edit the logs and the admin wouldn't know he was there. Now there are mechanisms in place to ensure the log really comes from where it is intended, and it is much harder to change that information.


This is bullshit. You can just as easily generate a checksum from text and be just as secure. There's no reason not to have plain-text logs, especially considering that binary logs require special tools to read and filter.

Reply Score: 5

RE[3]: Too funny
by Lunitik on Sun 12th May 2013 22:22 UTC in reply to "RE[2]: Too funny"
Lunitik Member since:
2005-08-07

This is bullshit. You can just as easily generate a checksum from text and be just as secure. There's no reason to not have plain text logs; especially considering that binary logs require special tools to read and filter.


If you're creating a checksum from a text file that is already compromised, it doesn't get you very far.

The systemd journal comes with such a tool, and others can easily be created. Another added bonus is that logs from the entire network can be handled in one place, remotely and in a secure manner, and the information about where each entry came from is still retained. People have done something similar with syslog, but it is a mess; this comes by default with systemd, which makes sense considering most Linux deployments are cluster-based.

I do not understand what the problem is with binary log files. With journalctl they are just as accessible as with cat, but more logically organized: each cgroup keeps its logs together in an orderly way, rather than each process randomly farting info into the file. Further, all the logging systems used around Linux are read by journald, rather than leaving the possibility that what you need is in any of 10 files.
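
For what it's worth, reading the journal is just a handful of journalctl invocations (the unit name here is only an example):

journalctl -u sshd.service        # everything logged by one service
journalctl -b -p err              # errors and worse from the current boot
journalctl -f                     # follow, like tail -f
journalctl -o json                # the same records, exported as JSON for other tools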

If you really want though, there is no problem with using journal and syslog on the same system if you absolutely need logs in text files. The journal is not a good enough reason to not use systemd. You gain far more in a far cleaner way by utilizing systemd.

Reply Score: 4

RE[4]: Too funny
by satsujinka on Mon 13th May 2013 01:33 UTC in reply to "RE[3]: Too funny"
satsujinka Member since:
2010-03-11

How would the text be compromised? You generate the checksum at the same time as the text. By the time a hacker could even see the text the checksum already exists.

My entire argument is this: Binary formats buy you nothing and only force you to do more work (creating journalctl to do the tasks that could formerly be done using standard tools.) There is no security advantage that can't be easily gained by adding checksums, which has the additional advantage of not requiring a brand new tool.
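
As a rough sketch of that approach - a keyed checksum chained onto each plain-text line as it is written (the key handling and file name are hypothetical, not any particular tool):

import hmac, hashlib

KEY = b"log-signing-key"  # hypothetical key; in practice kept off the host being logged

def append_signed(path, message, prev_mac=b""):
    # Chain each line's MAC to the previous one so any edit or deletion breaks the chain.
    mac = hmac.new(KEY, prev_mac + message.encode(), hashlib.sha256).hexdigest()
    with open(path, "a") as f:
        f.write(message + "\t" + mac + "\n")
    return mac.encode()  # feed this back in as prev_mac for the next line

The log stays greppable; only the verification step needs anything beyond standard text tools.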

The issue of what can be done with syslog is different. If you say syslog can't do checksums (easily) then I'll believe you. However, the solution should not involve switching to binary (for the above reasons and a few additional that are specific to me; I'd be happy to share, but they're really only important to my use case.)

On an entirely different note: systemd doesn't give me any benefits. The old init system that Arch Linux used worked just fine for me. I had zero issues with it (in point of fact, I used Arch specifically for its init system.) However, I don't care enough to avoid it. In that regard it's exactly like pulseaudio. It gives me no benefit, but it more or less works; so whatever.

Reply Score: 2

RE[5]: Too funny
by Alfman on Mon 13th May 2013 03:21 UTC in reply to "RE[4]: Too funny"
Alfman Member since:
2011-01-28

satsujinka,

Text versus binary logging depends on what you want to do and what tools you have. Alas you are right that many binary formats don't provide good tools, and it's often necessary to pipe textual output out of them so they can be manipulated by text utilities. This obviously has no benefit over having used a text format in the first place.

Assuming that the "binary format" is actually a real database, then I *strongly* prefer querying data from a database over using textual tools to parse semi structured text records. I've worked on several projects where we logged activity into the database instead of using text files, and we've never found a reason to regret it. We get aggregates, indexing, calculations, sophisticated searching, easy reporting & integration. In certain cases it's necessary to populate the database by parsing text logs, and it makes me wish that all tools could log into database tables directly.

It's often trivial to export databases back into text format if you wanted to, but there's hardly ever a reason to do it since database facilities are so much better.

Reply Score: 4

RE[6]: Too funny
by cdude on Mon 13th May 2013 05:35 UTC in reply to "RE[5]: Too funny"
cdude Member since:
2008-09-21

The database is named filesystem. Binary dumps for logs? That is as stupid as a binary config registry.

Edited 2013-05-13 05:37 UTC

Reply Score: 1

RE[7]: Too funny
by Soulbender on Mon 13th May 2013 06:14 UTC in reply to "RE[6]: Too funny"
Soulbender Member since:
2005-08-18

Binary dumps for logs? That is stupid like a binary config registry.


On the contrary, putting logs into searchable binary storage like ElasticSearch is great. Grep doesn't really scale.
Binary is not a good format for the default system logs though.

Reply Score: 4

RE[7]: Too funny
by Alfman on Mon 13th May 2013 06:21 UTC in reply to "RE[6]: Too funny"
Alfman Member since:
2011-01-28

cdude,

"The database is named filesystem. Binary dumps for logs? That's more stupid then a binary config registry."

I'm finding it peculiar that you'd bring up a "named filesystem" database given that it doesn't apply to logfiles.


With a database, each record exists and is manipulated independently from all other records. You cannot use file system level operators (cd, ls, find, etc) to query log files or manipulate individual records. In order to get the same level of granularity that a database gives us, you'd have to store each "record" or log event in a separate file. Another major difference is that the database records can be indexed such that queries will only read in the records that match the criteria. A text log file on the other hand has no indexes and needs to be fully scanned.


Text processing programs like sed/grep/cut/sort/etc are great tools, but SQL is far more powerful for advanced analytics.
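
To make the comparison concrete with a hypothetical log (a text file on one side, a table with level and host columns on the other), "errors per host" looks like:

grep ' ERROR ' app.log | awk '{print $2}' | sort | uniq -c | sort -rn

SELECT host, COUNT(*) FROM log WHERE level = 'ERROR' GROUP BY host ORDER BY COUNT(*) DESC;

and the SQL version can use an index on (level, host) instead of rescanning the whole file.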

Edit: Also, the windows registry sucks, no disagreement there. But it's not right to put all databases in the same boat as regedit. The registry has a huge gap in analytical power and structure compared to any real database.

Edited 2013-05-13 06:37 UTC

Reply Score: 4

RE[2]: Too funny
by Soulbender on Mon 13th May 2013 02:27 UTC in reply to "RE: Too funny"
Soulbender Member since:
2005-08-18

The logging is far superior to anything we've had before, the very fact that it is binary ensures more security.


Oh yeah, security through obscurity. That ALWAYS works out great. syslog can already log to a remote host, and for your other log files you should already be using something like Logstash or Graylog2 if security and manageability are a concern.


Imagine, a single tool to initialize everything on the system, not one for bringing up the system, another for timed events, another for scheduling events, another for dealing with events on demand, another for acting on new hardware, all logs and tracked in a uniformed way, all managed in a uniform way.


See, this is what I *don't* like about systemd; it does too much. We already have cron and at for scheduling, we have udev for hotplugging, and there are already many good solutions for managing logging. systemd should stay the hell out of these areas and focus on the one area that does need solving: service management.

Personally I much prefer the Upstart approach of focusing on a small set of problems that need solving.

Reply Score: 5

v RE[3]: Too funny
by triangle on Mon 13th May 2013 02:49 UTC in reply to "RE[2]: Too funny"
RE[4]: Too funny
by Soulbender on Mon 13th May 2013 03:11 UTC in reply to "RE[3]: Too funny"
Soulbender Member since:
2005-08-18

What the heck are you talking about?
We are talking about systemd's binary logging and the (lack of) security it provides, not about closed-source software.

Reply Score: 3

RE[3]: Too funny
by Lunitik on Mon 13th May 2013 09:39 UTC in reply to "RE[2]: Too funny"
Lunitik Member since:
2005-08-07

See, this is what I *don't* like about systemd; it does too much. We already have cron and at for scheduling, we have udev for hotplugging and there are already many good solutions for managing logging.


There is no more udev; it is part of systemd - which is a good thing, because the system management daemon should be managing the system. Why is it good to have at and cron around, as well as xinetd, when all they're doing is managing services? The problem is that these each use different methods and are utterly incompatible, so we are increasing the learning curve for no benefit at all.

Sure, systemd has a learning curve at first, but as you get used to it it just makes sense.

Personally I much prefer the Upstart approach of focusing on a small set of problems that need solving.


Upstart is crap; there is no real benefit to it over sysv at all. The main problem is that it is still basically using scripting for everything, so there are still something like 3000 fs calls when bringing up the system. Compared to the roughly 700 of systemd, that is simply too much - systemd needs to come down some more too, but that figure includes everything up to a GNOME desktop, and when gnome-session starts to use systemd user sessions, it will come down drastically.

As others have said, fs access on Linux is not great, so the fewer times we access the disk, the better, and the faster the system will come up.

Honestly, I still don't even understand upstart, its language is just broken. I keep trying to look into it, but the more I do, the less I like it. Systemd uses the file system in a logical way for booting the system, and gives us more access in the /sys dir to many of the features of the system. Upstart gets us nowhere because fundamentally it is only a redesign of the init, it is not something substantially new.

You seem to think this is a good thing, but systemd simply results in a cleaner and easier to understand system. By removing many things like at and cron and syslog and logrotate and all these programs that do not communicate with each other, we end up with a more integrated base system. For me, this is a good thing, it is a miracle all these projects have managed to coexist for so long at all.

By moving all these into one project (with many binaries for each thing, because modularity is important for parallel booting) we now get a consistent API for all events on the system, whether hardware or software crashes or whatever, it all becomes predictable. Everything is handled in the same way system wide, and there are no more obscure config settings to learn depending on exactly what you're trying to achieve. This is a huge benefit to Linux, others moved to similar approaches a long time ago.

Couple this with the fact that upstart has a CLA, and systemd becomes the only intelligent choice. Canonical does not have the Open Source community's best interests at heart, so their projects will not be touched by anyone outside Canonical. You essentially fall into the same trap as with any proprietary company: you become utterly dependent on a single company for all issues that might arise.

Canonical will be replacing udev in UnityNext, and haven't been upgrading it in their repos for a couple of releases now. They are discussing replacements for NetworkManager, and they want their own telepathy stack. They will be competing head-on with not just Red Hat, but people like Intel and IBM, all companies that are heavily invested in Free Software. Canonical are fooling themselves if they think they can compete by doing everything themselves. They simply lack the competency.

To date, the only things Ubuntu have actually done themselves are a Compiz plugin that was quite broken for most people, an init system whose initial developer left, and a few bloated APT frontends. Below that, they utterly depended on Novell and Red Hat for everything. Now they want to replace all of this and control everything themselves; everything good about Open Source is simply lost on Canonical.

For some reason, they are praised because everything "just works", but it works because of work done by others. It honestly makes me sad that the praise is so misplaced. In fact, Ubuntu are mostly responsible for making things not work, for breaking others' work, because they don't actually understand the software they are shipping. To this day, Lennart gets a bad rap because Ubuntu devs didn't understand PulseAudio and so shipped a broken config.

Of course, Ubuntu is heavily used, so a broken config in their shipped release gave people a bad opinion of pulseaudio. Another example is the binary drivers constantly breaking on upgrades, it makes Linux look bad because users aren't really made aware of the issues before something breaks. Canonical just makes horrible decisions throughout the stack and this harms Open Source development because people just accept proprietary poorly maintained software instead of pressuring these companies to play nicer with Open Source.

This is my real problem with Ubuntu, they simply don't care about Free Software, they do not care about Open Source, they just want to benefit from it. They think they are being slandered in upstream projects because their code is rejected, but the code is rejected because it is bad code. Now they want to rewrite everything, they want the entire stack to simply comply with their tests, they cripple developers, they make it ok for poor code provided it meets the testing requirements. Open Source is innovative because of its dynamic nature, Canonical are ridding themselves of this benefit.

Edited 2013-05-13 09:57 UTC

Reply Score: 3

RE[4]: Too funny
by Soulbender on Mon 13th May 2013 10:01 UTC in reply to "RE[3]: Too funny"
Soulbender Member since:
2005-08-18

There is no more udev, it is part of systemd


Wow, how horrible. I'm guessing it's not only Ubuntu that will look at alternatives to systemd and udev then. Not everyone is happy about being at the mercy of RH and Lennart.

By removing many things like at and cron and syslog and logrotate and all these programs that do not communicate with each other


These programs have no need to communicate with each other. at/cron perform completely different functions from syslog and logrotate. Putting all these disparate functions into the same daemon is pointless and doesn't solve any real-world problem.

Upstart is crap, there is no real benefit to it over sysv at all.


It's pretty obvious that you never worked with upstart. It solves all of the problems with sysv (PID files, backgrounding daemons, no automatic restarts etc) and it does so without completely taking over your system.
You know, like the old Unix philosophy: do one thing and do it well.

The main problem is it is still basically using scripting for everything


Uh, no it doesn't. It CAN use scripting if needed but most upstart config files are just configuration statements and one exec statement for the binary.

Honestly, I still don't even understand upstart, its language is just broken


I don't see what's so hard about it. It's a simple config file format. Here, I'll show you a very simple Upstart config for an imaginary service:

description "myservice"
start on runlevel [2345]
stop on runlevel [016]
respawn
console log
exec /usr/bin/myservice_executable
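
For comparison, a roughly equivalent systemd unit for the same imaginary service (a sketch, not taken from any real package) is about as terse:

[Unit]
Description=myservice

[Service]
ExecStart=/usr/bin/myservice_executable
Restart=on-failure

[Install]
WantedBy=multi-user.target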

Canonical does not have the Open Source communities best interests at heart


Right, because Red Hat, IBM, Novell etc. have the community's best interests at heart...

By removing many things like at and cron and syslog and logrotate and all these programs that do not communicate with each other, we end up with a more integrated base system. For me, this is a good thing, it is a miracle all these projects have managed to coexist for so long at all.


Considering that they all do different things and at no point need to communicate with each other, it's really no miracle at all.

Canonical are fooling themselves if they think they can compete by doing everything themselves. They simply lack the competency.


Well, I for one welcome competition and diversity.

Reply Score: 3

RE[5]: Too funny
by hussam on Mon 13th May 2013 11:32 UTC in reply to "RE[4]: Too funny"
hussam Member since:
2006-08-17

"There is no more udev, it is part of systemd


Wow, how horrible. I'm guessing it's not only Ubuntu that will look at alternatives to systemd and udev then. Not everyone is happy about being at the mercy of RH and Lennart.
"
The main reason distributions are adopting systemd is that they no longer have to develop their own init scripts.

Reply Score: 4

RE[5]: Too funny
by Lunitik on Mon 13th May 2013 14:12 UTC in reply to "RE[4]: Too funny"
Lunitik Member since:
2005-08-07

Wow, how horrible. I'm guessing it's not only Ubuntu that will look at alternatives to systemd and udev then. Not everyone is happy about being at the mercy of RH and Lennart.


Most contributors to systemd are from outside Red Hat.

Udev was already a Red Hat project.

Basically every distro that matters except Ubuntu is likely to move to systemd. Debian are looking at it quite heavily, and the only competent developer at Canonical is even a heavy contributor to systemd, so perhaps they will finally switch too at some stage.

What is clear is that systemd is the way forward, whether you like it or not. I think if you actually use it, you will see why people like me find arguments against it to be absurd.

These programs have no need to communicate with eachother. at/cron performs completely different functions from syslog and logrotate. Putting all these disparate functions into the same daemon is pointless and doesn't solve any real-world problem.


You don't think that it is absurd to have 4 binaries essentially doing the same thing in different ways, rather than a single uniform method? It is just duplication of effort, and what is hilarious is each is about the size of systemd. There is no necessity when a single binary can accomplish the task each is trying to fulfil in a cleaner way.

It's pretty obvious that you never worked with upstart. It solves all of the problems with sysv (PID files, backgrounding daemons, no automatic restarts etc) and it does so without completely taking over your system.
You know, like the old Unix philosophy: do one thing and do it well.


Its "PID files" is not nearly as good as cgroups. All daemons are backgrounded, that is sorta the point. Have you actually worked with upstart though? It seems to me you haven't, you just know that your Ubuntu desktop uses it and seems to do fine. Try to configure one of its event files, you will understand why people dislike it.

Systemd does do one thing, and it does to it will. It manages services, but it is no longer just a replacement for init. I don't think you are grasping the fact that systemd does not only consist of one binary. There are a few to do different things, and these are actually more Unix like than the prior alternatives. Things like hostnamectl or datetimectl are tiny binaries that do exactly what you think they do.

The fact that you want to limit that one thing to just bringing up the userland is where we will not agree. A modern system is more dynamic than this, the central process on the system should be dynamic too. Upstart tried to address this, but its fundamental design is stupid. Instead of focusing on the end result, it just runs every daemon that could depend on something lower down. Want to start NetworkManager? Cool, you're going to get bluez and avahi and whatever else could possibly use dbus that is installed because NetworkManager brought that up. This is broken and simply retarded.

Uh, no it doesn't. It CAN use scripting if needed but most upstart config files are just configuration statements and one exec statement for the binary.


Try looking at the actual event files; those hooks work with the event file, which is still just a series of scripts. Browse around /etc/events.d or whatever and you will notice they are scripts doing the same sorts of things as old init files. The fact that this is abstracted into a simple config format is beside the point.

Right, because Redhat, IBM, Novell etc has the community's best at heart...


Yes, they do, because without a strong community these companies understand they would limit their capabilities. The whole point of the Intel and IBM investment is that they don't want a single software vendor to control their destiny. They each understand the benefit of Open Source deeply; they each understand that they need each other, and that their future depends on collaboration.

Canonical seem to think quite the opposite, that just throwing code over a wall is fine, that community is irrelevant. This is not how Open Source projects succeed, you need a strong community around the software, innovation comes by many contributors pulling the code in their own directions. Canonical seem to have gone out of their way to ensure developers are not interested in contributing, thus making it safe to throw the code over the wall.

Couple this with the inclusion of things like the Amazon scope, and you see that Canonical are only interested in money. Red Hat must make money - they are a publicly traded company - but they are adamant about Free Software at the same time. This is why people trust them. As much as Canonical harp on about being the leading guest in the cloud, those clouds are running Red Hat as the host OS for a reason.

People use Ubuntu because it is free, companies use Red Hat because they are competent.

Considering that they all do different things and at no point need to communicate with eachother it's really no miracle at all.


They don't though, they all manage services. If you don't think logrotate should communicate with rsyslog, if you don't think rsyslog needs to communicate with each daemon to ensure correct information, you are fooling yourself. We have tried to standardize these things so they can better talk to each other, but it has just been a failure. If you had any real experience maintaining a Linux network, you would understand the frustration of the unpredictability of logs - everything seems to decide arbitrarily what method they will use to log things. This all goes away with the journal, it supports all these methods and keeps them sane, then offers a new API which is far cleaner and better defined. All of this with the intent of at least offering something as good as Solaris and Apple have had for over a decade. Instead we are basically in the same situation we were in the 80's on many Linux distros.

Well, I for one welcomes competition and diversity.


There are not enough developers in the Open Source ecosystem to justify such lack of teamwork. The app story on Linux is bad enough, but now Canonical are defining their own developer environment.

Personally, I am glad everyone else is just moving their apps to the web, the native Linux story for client stuff is embarrassing. Please do not bring up how Canonical are standardizing on Qt/QML either, while KDE/RIM/Sailfish are technically each using this technology, the very nature of QML is causing a lot of problems. Each developer environment will be fundamentally different, API's will not translate. It is not standardization, it is even more proliferation.

Qt is another project that uses a CLA, so that is not an option for a company that cares about Free Software, about user advocacy. Perhaps it would be better to drop GTK and have the big guns all move to EFL(Intel and Samsung are both heavily invested there), at least it seems to be gaining momentum with a clean stack. Then we have to rewrite a lot of apps all over again though. It is really quite sad what is happening today in Linux clients.

There are a lot of Linux developers, but those are all working on web platforms, cloud platforms, everyone can clearly see the client platform outside of Android is a joke and it is getting far worse. I blame Canonical for this, they gave people permission to fork Gnome Shell by creating Unity - the most ironic name ever for a software project. If they hadn't done this, Cinnamon and the other forks wouldn't have happened, there would be less proliferation. Instead everything is a mess, and it is getting worse.

Edited 2013-05-13 14:27 UTC

Reply Score: 2

RE[6]: Too funny
by Soulbender on Tue 14th May 2013 08:53 UTC in reply to "RE[5]: Too funny"
Soulbender Member since:
2005-08-18

You don't think that it is absurd to have 4 binaries essentially doing the same thing in different ways, rather than a single uniform method?


They don't do the same thing. Really. They do 4 different things, well 3 if you combine cron and at.
cron/at and syslog have NOTHING in common. They do NOTHING that is the same. Really, I find it odd that you'd think they do.
What's absurd is combining these 4 (or 3) apps that do completely different things into one.

Browse around /etc/events.d or whatever, you will notice they are scripts doing the same sorts of things as old init files.


There's no events.d that is part of upstart.

Yes, they do, because without a strong community, these companies understand they would limit their capabilities.


If you think they do you're seriously deluded. They're looking out for their own interests, nothing else.

They don't though, they all manage services.


Uh, no they don't. NONE of them is managing services. Cron schedules *jobs*, syslog writes log messages, logrotate rotates log files. They do not manage services.

if you don't think rsyslog needs to communicate with each daemon to ensure correct information, you are fooling yourself.


That's what APIs are for, and there already is one for syslog.

If you had any real experience maintaining a Linux network, you would understand the frustration of the unpredictability of logs


I do, and logs are a problem to which there are many good existing solutions, most of them better than the proposed systemd solution. Which you would have known if you had "real experience managing a Linux network".

Reply Score: 3

RE[7]: Too funny
by Lunitik on Tue 14th May 2013 14:56 UTC in reply to "RE[6]: Too funny"
Lunitik Member since:
2005-08-07

They don't do the same thing. Really. They do 4 different things, well 3 if you combine cron and at.
corn/at and syslog has NOTHING in common. They do NOTHING that is the same. Really, I find it odd that you'd think they do.


What is the purpose of syslog without something to monitor? What is the purpose of at, cron, init, or xinetd without something to start and stop?

What's absurd is combining these 4 (or 3) apps that does completely different things into one.


As opposed to having 4 utterly different codebases, and all the duplication of effort that implies?

There's no events.d that is part of upstart.


Now you just look ignorant.

$ cd /etc/events.d

Look at the files in there.

If you think they do you're seriously deluded. They're looking out for their own interests, nothing else.


It is in their best interests to work with the community, this is what you're missing.

uh, no they don't NONE of them is managing services. Cron schedules *jobs*, syslog writes logs messages, logrotate rotates log files. They do not manage services.


What is a job if not a service? You seem to have a very strange definition of what a service is.

If you don't think logging is a part of service management, I don't even know what to tell you. If I cannot keep track of services, management itself is simply impossible.

That's what API's are for and there already is one for syslog.


You'd think so, right?

You'd be mistaken, there are attempts to standardize the format but there is no API definition. Essentially, things are just farting text out of std[out,err] and syslog is throwing that raw into a file. It simply doesn't care what that info is, how it is formatted, nothing.

I do and logs is a problem to which there are many good existing solutions, most of them better than the proposed systemd solution. Which you would have known if you had "real experience managing a Linux network".


Please show me a solution which is as seamless as journald over the network. They are all hacks which try to address the shortcomings of syslog.

Reply Score: 3

RE[2]: Too funny
by Alfman on Mon 13th May 2013 03:05 UTC in reply to "RE: Too funny"
Alfman Member since:
2011-01-28

Lunitik,

Personally I've always found sysV init scripts clumsy and I'm kind of glad they're being phased out. They lack any sort of automatic dependency analysis, there's no restart monitor for dead processes (like init+inittab), init scripts cannot be scheduled and cron jobs are inflexible, etc.

So I think we're in agreement here, but I also agree with the author that systemd is a bit of an octopus. I don't think it's a bad thing though, I think it's good to have consolidation around system task management and it makes sense to consolidate all the components under a unified system like systemd.

Reply Score: 4

RE[2]: Too funny
by Darkmage on Thu 16th May 2013 01:43 UTC in reply to "RE: Too funny"
Darkmage Member since:
2006-10-20

Debian has support for it too; you just have to tick a box and it installs.

Reply Score: 2

RE: Too funny
by Soulbender on Mon 13th May 2013 02:19 UTC in reply to "Too funny"
Soulbender Member since:
2005-08-18

As much as I dislike many aspects of systemd, it is still an immense improvement over the retarded abomination that is SysV init.
PID files? Yeah, sure. You could use an incredibly inaccurate and error-prone way of tracking the process, OR you could just do it right from the start and run it in the foreground, managed by a supervising process.
~5 billion symlinks with magic names in /etc in order to control what service starts when? Oh yeah, great idea. *Much* easier to manage than just a config file....
Also, runlevels need to die. It's a concept that is pointless in the majority of use cases. In the general case you either just need to boot normally or in rescue mode. Anything else is a corner case.
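
For anyone who has not had the pleasure, the scheme being mocked looks roughly like this (service names are only illustrative):

/etc/rc2.d/S10rsyslog    -> ../init.d/rsyslog
/etc/rc2.d/S20myservice  -> ../init.d/myservice
/etc/rc0.d/K80myservice  -> ../init.d/myservice

where the S/K prefix and the two-digit number encode start-versus-kill and ordering for each runlevel.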

Reply Score: 4

v I doubt it.
by tuaris on Sun 12th May 2013 20:52 UTC
RE: I doubt it.
by shmerl on Sun 12th May 2013 21:19 UTC in reply to "I doubt it."
shmerl Member since:
2010-06-08

Disk performance hugely depends on filesystem used. You can't just say "disk performance on Linux is horrifying" without going into details.

Reply Score: 5

Not in my experience it ain't
by Gullible Jones on Sun 12th May 2013 22:17 UTC
Gullible Jones
Member since:
2006-05-23

I could believe that Windows has all kinds of kernel space issues, but for desktop performance this doesn't really matter - because as good as the Linux kernel might be, the desktop environments basically suck. The desktops are bloated, the graphics drivers are incomplete, and now the desktops rely heavily on the graphics drivers; guess what happens?

Sure, Linux with Openbox or whatever is faster than Windows 7 on well supported hardware. The problem is that maybe 5% of new Linux users can be bothered to configure a standalone window manager. The rest will install *buntu because they're not interested in making work for themselves, and will more likely than not recoil in horror at the bad performance and go straight back to dear old Windows.

TL;DR: a Smart Fortwo can outrace a stock car, if the stock car has bad tires and is towing several tons of lard.

Edited 2013-05-12 22:19 UTC

Reply Score: 1

RE: Not in my experience it ain't
by ze_jerkface on Sun 12th May 2013 23:09 UTC in reply to "Not in my experience it ain't"
ze_jerkface Member since:
2012-06-22

You make a good point which is that it depends on which stack you are looking at.

Linux has a better selection of file systems, but the video stack is quite inferior. Just ask any RHEL engineer if he would run X on a critical server. Most Windows Servers in the wild (including headless) are running a GUI, and the gains from going barebones are minimal.

It also depends on what type of software you run. The MySQL developers have long maintained that their software is not optimized for Windows. Will it run poorly? No, but if you want to squeeze out every last cycle then Linux is a better choice. Is hardware a significant factor in project costs? No, it's the admin costs that matter. CPUs and RAM are dirt cheap; the average enterprise spends more annually on toilet paper. Linux CPU savings mattered a lot more a decade ago.

So it's a more complex situation than faster or slower.

But I will say that Windows Server 2012 is retarded for having forced-Metro. It's insulting really.

Reply Score: 1

RE: Not in my experience it ain't
by Soulbender on Mon 13th May 2013 06:16 UTC in reply to "Not in my experience it ain't"
Soulbender Member since:
2005-08-18

The desktops are bloated, the graphics drivers are incomplete, and now the desktops rely heavily on the graphics drivers; guess what happens?


Are you talking about Windows?

Reply Score: 1

Gullible Jones Member since:
2006-05-23

Pardon?

Whatever else you can say about Windows, it usually has very good graphics drivers available. And the Windows desktop does not bork on those occasions when hardware acceleration is not available; I have run Win8 in VirtualBox without graphics acceleration. (Had to, actually, because the VirtualBox drivers for 8 were broken at the time.)

Linux OTOH is ridiculous about this stuff. All the FOSS drivers are terrible for both 2D and 3D performance, Gnome 3 requires hardware acceleration (unless you want continuous 50% CPU usage from llvmpipe), and Unity is a freaking overgrown Compiz plugin. KDE 4 also assumes good hardware acceleration for rendering widgets and stuff using Qt's native backend. The result is godawful performance.

Xfce of course actually works. But who the hell uses Xfce by default?

Reply Score: 5

Gullible Jones Member since:
2006-05-23

I really have to call BS on this. I've used the nouveau driver; 2D performance was visibly worse than with the nVidia blob, and it was very unstable to boot.

I've also used KDE 4.10. Things like window resizing and alt-tab lag like crazy, and login can take 30 seconds even on very fast computers.

Now sure, if you have a quad-core Intel i5 desktop with 8 GB of RAM, a high-end nVidia card, and a nice big SATA hard drive, everything will be snappy. But that's not KDE or nouveau being overwhelmingly fast, that's your computer being overwhelmingly fast. I really don't think one should need an overwhelmingly fast computer just for the default desktop to perform well.

Reply Score: 4

Gullible Jones Member since:
2006-05-23

Actually, an addendum: current Linux distros with standalone window managers still suck moose on very low-end machines. ATM I'm using Fluxbox on a netbook, and the performance is disgusting - simple things like menu highlights lag absurdly, and you can see each window fill in as it opens.

I wish GTK+ would just die already.

Edit: unfortunately software is the only damn thing in the whole world for which you need a new version every week just to be secure vs. petty crooks. If cars were like that, nobody would drive.

Edited 2013-05-15 16:57 UTC

Reply Score: 2

The facts belie the central thesis
by ze_jerkface on Sun 12th May 2013 22:57 UTC
ze_jerkface
Member since:
2012-06-22

Microsoft certainly has cultural and leadership problems. Ballmer is an asshead that needs to go, he has been given plenty of chances. The Windows division under Sinofsky went into crazyland, only rabid MS fanboys and employees still defend his Metro-down-your-throat plan.

But there is a major problem with his argument because it makes a common false assumption that Linux code is developed for non-economic reasons.

Most Linux code is developed by corporations:
http://apcmag.com/linux-now-75-corporate.htm

Furthermore if "passion" creates good code then Linux should have a much better game selection.

The truth is likely somewhere in the middle. Red Hat probably has a better culture for developers but the actual development has nothing to do with glory or recognition. Most Red Hat developers have families and just want to pay the bills.

Reply Score: 0

shmerl Member since:
2010-06-08

Microsoft certainly has cultural and leadership problems.


Indeed. And it doesn't look like they really care to fix that.

Reply Score: 1

Problems in userland
by jhowie on Sun 12th May 2013 22:57 UTC
jhowie
Member since:
2013-05-12

I can speak from experience: I once submitted code for inclusion in what was then known as the Windows Resource Kit. Without naming the utility, what I developed was a tool that had two threads - a reader and a writer. The reader thread read data from the filesystem and stored it in a linked list, and the writer thread would take data from the linked list and write it out to the filesystem. All very elegant, tested to death, and CS 101 - yet it was rejected because I did not use a named pipe between the threads. I pointed out that, at the time, calls to named pipes caused context switches and adversely impacted performance. The feedback I received was "what is a context switch?". I explained that switching from user mode to kernel mode in a thread had a performance impact. No-one cared. They had their way of doing things and refused to even look at others'. I also learned that some of the review team did not understand how linked lists worked and could not be assured there were no bugs in the code because they could not understand it. The utility that eventually shipped used a named pipe, and was significantly slower than the one I originally developed. It is now part of Windows. Needless to say, I use my own version (still).
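
For readers who have not written one, the design he describes is the classic two-thread producer/consumer pattern; a minimal sketch (the names and copy logic are made up, not the actual Resource Kit tool):

import threading, queue

buf = queue.Queue(maxsize=64)          # bounded in-process buffer instead of a named pipe

def reader(paths):
    for p in paths:
        with open(p, "rb") as f:
            buf.put((p, f.read()))     # hand data to the writer without leaving the process
    buf.put(None)                      # sentinel: no more work

def writer(dest):
    while True:
        item = buf.get()
        if item is None:
            break
        name, data = item
        with open(dest + "/" + name.split("/")[-1], "wb") as f:
            f.write(data)

# usage (hypothetical paths):
# threading.Thread(target=reader, args=(["/tmp/in/a.txt"],)).start()
# threading.Thread(target=writer, args=("/tmp/out",)).start()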

Reply Score: 3

v This is anti-MS propaganda
by triangle on Mon 13th May 2013 02:30 UTC
RE: This is anti-MS propaganda
by Soulbender on Mon 13th May 2013 02:50 UTC in reply to "This is anti-MS propaganda"
Soulbender Member since:
2005-08-18

This is total propaganda and I am sick of it. Windows is BY FAR the fastest OS. Linux is absolute junk.


The irony of this statement in relation to your post is epic.

Reply Score: 4

v RE[2]: This is anti-MS propaganda
by triangle on Mon 13th May 2013 02:53 UTC in reply to "RE: This is anti-MS propaganda"
Soulbender Member since:
2005-08-18

Oh really? Maybe you are too brain washed to think for yourself.


Already with the personal insults?

You heard Linux is fast and you actually believe it.


No, I know it's fast and so is Windows. Which one is "faster"? I don't know and I don't care. On any reasonably modern computer both are fast enough.

Facts mate... facts. The truth will set you free.


You didn't really provide any but here's my personal experience:
I have run Ubuntu 12.10 on an old laptop with a crummy Intel GM855 video card and only 768MB RAM, and it was perfectly usable. GUI operations certainly did not take minutes. I also have it on a 2GHz dual-core with Intel HD graphics and 4GB RAM, and it's just as fast as Windows. In fact, Ubuntu is a bit snappier, but since the Windows install is a factory Lenovo image, the difference might not be Windows' fault.

Edited 2013-05-13 03:06 UTC

Reply Score: 3

Soulbender Member since:
2005-08-18

I see. Your personal experiences are facts while mine are bullshit and lies. How convenient.

You just continue digging that grave for yourself, it's rather entertaining.

Reply Score: 3

Soulbender Member since:
2005-08-18

What *I* said are facts because I KNOW them to be. I know what you said is not true.

Actually, what you said might well be true but your anecdotal evidence does not scale to the Linux Desktop in general.

I have stated my methodology.

Your methodology has already been shown insufficient by others so I see no need to repeat your experiments.

Reply Score: 3

RE[5]: This is anti-MS propaganda
by Alfman on Mon 13th May 2013 03:56 UTC in reply to "RE[4]: This is anti-MS propaganda"
Alfman Member since:
2011-01-28

Well, if the article was anti-MS propaganda, then your posts have more than overcompensated with a large helping of anti-linux propaganda. ;)


Just one point though, there is a difference between "facts" and "anecdotal evidence". You could conduct systematic benchmarks and post them to provide a much better context for a serious & interesting discussion. But as is, this thread is just an angry rant.

Reply Score: 3

triangle Member since:
2013-05-13

:)

The thread title is "Windows slower than other operating systems". It is not "Windows kernel is slower than Linux kernel" nor is it "Windows i/o is slower than Linux i/o".

It is most definitely anti-Microsoft propaganda that flies in the face of reality.

If it said "Microsoft s****** up by removing the start menu" I wouldn't criticize. But slower than Linux and getting worse? Give me a break. Total BS.

On top of that, Linux is barely usable to begin with. People kind of look past that because the bar is set so low in the land of Linux. Any BS that can make the alternative look bad is welcome propaganda. Can you ever see a soccer mom recompiling a Linux kernel so her kid can play Minecraft or some other game because they just bought a new gfx card? When soccer moms start doing things like that en masse, Linux will go from 1% market share to being the dominant desktop system.

Reply Score: 0

RE[7]: This is anti-MS propaganda
by Alfman on Mon 13th May 2013 05:25 UTC in reply to "RE[6]: This is anti-MS propaganda"
Alfman Member since:
2011-01-28

triangle,

I didn't say there was no anti-MS propaganda; nevertheless, your posts are chock-full of anti-Linux propaganda. You even took the opportunity to reply to my post with even more anti-Linux propaganda backed by nothing more than literally made-up anecdotal evidence.

Linux is good for some people, not for others, and that's fine. But it appears that you've got an axe to grind. If you are not going to bring anything insightful to this discussion, and you haven't yet, then I don't think there's enough substance to continue it.

Reply Score: 3

RE: This is anti-MS propaganda
by ba1l on Mon 13th May 2013 03:08 UTC in reply to "This is anti-MS propaganda"
ba1l Member since:
2007-09-08

I honestly don't even know where to start with this...

Test 1... Yep, sounds fair. Those versions of Linux are probably six years newer than that hardware, while XP is six years older and was designed for much weaker hardware. Would comparing with Windows 8 not be a more fair comparison?

As for test 2, what exactly do you think you're measuring? Most VM hosts support Windows better than anything else, and provide fast (enough) video acceleration on Windows guests but generally not Linux guests. Surely a better comparison would be to run them on real hardware, but there really won't be much of a difference. Anything remotely recent is many times faster than either OS requires. Windows might be slightly faster as a desktop OS, but Linux is hardly slow or bloated.

Reply Score: 3

RE[2]: This is anti-MS propaganda
by triangle on Mon 13th May 2013 03:39 UTC in reply to "RE: This is anti-MS propaganda"
triangle Member since:
2013-05-13

Fair enough.

Windows 8 would not be a better choice than XP in my opinion. In terms of date, your point is well taken, but I would argue that "functionality" is the key concept. For a given functionality, what overhead is there? Today's Ubuntu and Mint (some of the most popular distros, and my favourites also) are not yet on the same level of functionality as Windows XP. XP is far superior imo. If it came down to debate, it would not be hard to defend this point... even though I know it sounds provocative. At the same time, Windows 7/8 is not much slower than XP, if at all... so we could do as you say... but Win 7/8 require more RAM than 1GB.

I suppose it comes down to ideology. It is easy for us to set the date as the defining point; functionality is a debatable sticking point. The major thing in my mind is that XP is still modern in the sense that I can get XP drivers even for modern computers (say, if I wanted to build one). On the other hand, if I built a computer today, I would have to use bleeding-edge Linux just to have a chance of running Linux at all. So in this respect, I think XP vs. current Linux is fair game. Also, because of the centralised software scheme in Linux land, one is forced to use a relatively new distro. It is not like I can use an 8-year-old Linux distro on the Athlon and be productive and secure.

As far as the second test goes, I have run those OSes on bare metal on those systems. Linux is noticeably slower. I mentioned virtualization because there the additional overhead makes the difference even more obvious. As far as VM bias towards Windows goes... VirtualBox does not favour Microsoft products.

Edited 2013-05-13 03:54 UTC

Reply Score: 0

RE[3]: This is anti-MS propaganda
by leech on Mon 13th May 2013 06:25 UTC in reply to "RE[2]: This is anti-MS propaganda"
leech Member since:
2006-01-10

I guess I'll feed the troll....

Seriously, what functionality does Windows XP have "out of the box" that any of the Mint/Ubuntu Linux distributions don't have?

The only things XP has going for it out of the box are Paint, Notepad, WordPad, IE and Outlook Express.
Oh, maybe I'm missing Solitaire and Minesweeper...
Oh, and you can search for files and do the regular file management stuff. But what else?

Any Linux distribution (yes, even from the days when XP came out) had more applications and a full office suite with the default desktop install.

Here's some anecdotal evidence to support the opposite of what you're saying.

This was a few years back, but my sister's friend had a crappy eMachine whose motherboard went bad. I tried to find a replacement, but unfortunately ordered her one that used PC133 memory, of which I only had a 128 MB stick. Ubuntu (I think it was 8.04) ran on it, but slowly. OpenOffice loaded in about 5 minutes! On the exact same hardware, dual-booting, Windows XP took about 10 minutes to load OpenOffice!

So you can berate Linux all you want, but the truth of the matter is that it still has leaps-and-bounds better memory management than Windows does. Ever look at your memory usage on Windows? I can pretty much guarantee that it's using the page file, even though it still has free physical memory. I NEVER see this on Linux. We buy RAM for a reason, Microsoft... fix your damn memory management!
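
Just as a rough sketch of what I mean (standard tools on pretty much any current distro; nothing exotic), this is all it takes to see whether Linux is actually dipping into swap while it still has free RAM:

# total / used / free physical memory and swap, in megabytes
$ free -m
# per-device swap usage
$ swapon -s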

Reply Score: 5

triangle Member since:
2013-05-13

What does XP have out of the box that Linux does not?

For one, it works. That's a pretty big one. Let me elaborate.

Yes, Linux sometimes works. XP always works, and works fully. That is a huge difference. The function of an OS is to be an OS... not to supply you with a huge amount of free software. By the way, there is a huge amount of free open-source software for Windows as well. Look up Softpedia.

So let us compare specifically how the OSes compare out of the box. I'll state a few scenarios. All of them consider what the experience would be if we were to install an OS today.

SCENARIO I - 10 year old computer.
---------------------------------
CPU: Athlon XP 2500+
RAM: 1 GB
Mobo: NF7 Series
GFX: Ati Radeon 8500
SND: Creative sound blaster 5.1

Current Ubuntu, Mint, etc. will not work on this system. Score F-.

A legacy Linux distro will work partially on this system. The video card will barely work and the sound card only partially. However, and this is vital... because of the flawed centralised software repo scheme, no new software could be used. This effectively means no security and very limited functionality/productivity. So in effect, installing a legacy distro is not an option. Score F.

Windows XP:
-Works perfectly. Everything fully functional and optimal.
-Super fast and light. Like the computer was brand new!
-No junk comes with the OS.
-Can install any new software.
Score A.

SCENARIO II - 5-6 year old system
----------------------------------
CPU: Core 2 Duo 2.X GHZ
RAM: 3 GB
GFX: Ati 2 or 3000 series or NVIDIA 8400
SND: Creative sound blaster 5.1

New Ubuntu or Mint: with ATI you are shit out of luck. F-.
With NVIDIA it will work. It will be slow, but speed-wise usable. Of course, new GUI systems like Unity and GNOME 3 are total crap and that should be taken into account. Let's pretend classic is used; in this case the score depends highly on the hardware. You are playing the hardware lotto. If run virtualized, the score is an F because the hardware can't handle the hog that Linux is with enough speed.

Legacy Linux: can't be run, for the same reasons as above. No access to new packages/software. Score F-.

Windows XP:
-Works perfectly. Everything fully functional and optimal.
-Super fast and light. Like the computer was brand new!
-No junk comes with the OS.
-Can install any new software.
-No virtualization problems.
Score A+.

SCENARIO III - 2 Year OLD system
---------------------------------
CPU - AMD Phenom
RAM - 4GB
GFX - ATI 48XX
SOUND - Creative 6.1 or 7.1

Current Ubuntu or Mint: works, but not fully. Minor issues with the sound card and major issues with the video card. Gaming is out of the question unless you downgrade the X server, blah blah. Score C-.

Legacy Ubuntu or Mint: not an option, for the above reasons, and also pointless.

Windows XP:
-Brilliant. Everything fully functional and optimal.
-Super fast and light. An absolute speed demon.
-No junk comes with the OS.
-Can install any new software.
-No virtualization problems.
Score A+.

CONCLUSION
------------
One of the key functions of an OS is that it should work. Sometimes Linux does work, but often it does NOT! You must play the hardware lotto. Yet it is the goal of an OS to make hardware work. So it fails at this very basic level.

Also, Linux does not age well. It is not usable with older hardware, because that forces users to give up on new packages/software and security. In fact, even 2-year-old hardware becomes obsolete really fast, as with the Radeon 4xxx series forcing people to buy new hardware.

For hardware to work you need proper driver flexibility. Linux does not have this. Yes, this is a business issue and not just an engineering issue, but alas the result is the same. You are lucky to even get your hardware working. In Windows land you always have full hardware support.

In Windows you can keep your software. It is not obsoleted by package updates.

Also, in the land of Windows... you can run up-to-date new software on old hardware.

In short, Windows as an OS fulfills all the needs that an OS is supposed to fulfill. It is flexible and has long-term hardware support. XP is 13 years old and is still marvelous.

I should also add that software developed for Windows still works, for the most part, in newer OSes because of the amazing backwards compatibility MS always achieves.

In short, Windows has none of the problems Linux has.
With two exceptions of course: it is not free and not open source.

Edited 2013-05-13 07:37 UTC

Reply Score: 1

triangle Member since:
2013-05-13

I'm not arguing that XP is the best OS ever. I think that for the most part Windows 7 is better than XP. Obviously it supports modern hardware fully. The question is whether current Linux distros are yet on the same level as 13-year-old XP. I say no. The above considerations are elementary considerations for an OS. Essentials. Linux fails on these and Windows scores extremely high (better than any OS). I am only considering these elementary considerations here. If we were to consider usage elements, then I would argue that Linux isn't even on the level of Windows 95 yet. Windows 95 was much easier to use/admin than Linux is today (or ever has been). Sad but true.

I should also correct my post for scenario 3. XP should score a B or a C and not an A, since it does not support 64-bit and more than 4 GB of RAM... hence it does not fully support the hardware after all. But at the same time Microsoft has Win 7, which would score an A in that scenario.

Edited 2013-05-13 07:58 UTC

Reply Score: 1

RE[5]: This is anti-MS propaganda
by hussam on Mon 13th May 2013 09:14 UTC in reply to "RE[4]: This is anti-MS propaganda"
hussam Member since:
2006-08-17

1) You are comparing to Ubuntu/Mint, etc. Try comparing to something solid like openSUSE or Fedora.
2) Yes, Linux installations can break, but Linux gives you the tools to easily fix them. I gave up on Windows XP in 2004 after a horrible experience with Microsoft support where an idiot eventually suggested I format. I had stumbled onto an NTFS bug; the Microsoft support person denied the bug, but it was eventually fixed in Vista.
Luckily I didn't format, and I used a Linux rescue CD with NTFS write support to fix my Windows installation.

Reply Score: 3

yoshi314@gmail.com Member since:
2009-12-14

I call BS on your argument. Score F.

I have an old (~7 years) system which ranks somewhere between your 10-year-old and 7-year-old examples, and I don't have to downgrade to an unsupported legacy Linux installation to use it.

The argument of a no-junk Windows XP installation with everything working out of the box can be thrown right out the window - I mean, there's an outdated IE, you are missing most of the drivers for your hardware, and the whole system is rather vulnerable out of the box, unless you roll custom installation media streamlined with all of the hotfixes so far.

Also, you silently assumed running the default desktops a given distro comes with. I can use GNOME 3/KDE fairly comfortably, although I prefer simpler setups; in such a case older systems might struggle. But not every distribution comes with a beefy desktop option by default.

You also forgot to mention that Windows XP is going to become unsupported soon.

> Also, Linux does not age well. It is not usable with older hardware because that forces users to give up on new packages/software and security.

BS. You can run recent releases of distributions on fairly old hardware. I run Arch Linux on a laptop from 2004, and I keep it up to date.

Do not make the assumption that recent Linux versions are only for top-of-the-shelf hardware. Do not make the assumption that there is only GNOME 3/KDE or some other resource-hogging desktop for Linux. Do not assume that old hardware only works with Linux distributions released more than a few years ago - there are Linux distributions that will work well on older hardware while containing up-to-date software.

> In windows you can keep your software. They are not obsoleted by package updates.

Try installing some stuff written for XP on Windows 7 or 8. Or try installing something that requires the most recent DirectX on Windows XP. Then tell me it's not being obsoleted.

There are more and more apps getting left behind. And that includes drivers for older hardware - some hardware from the Windows XP era is not getting drivers for modern Windows releases. At some point there will be hardware with no XP drivers available. I doubt it will happen anytime soon, but it's a matter of time.

On Linux you at least get the comfort of having your hardware supported for as long as there are people using it, and that works realistically. Even if support is dropped, you are free to fork the kernel or form a group of interest to restore a given feature.

You can fire up a fairly recent Linux system and it will work with your hardware, recent or ancient. The exceptions are hardware that's truly obsolete (like a 386, with its RAM restrictions), hardware nobody uses anymore (ancient modems, really old and obscure graphics cards), or hardware nobody has written drivers for yet.

Reply Score: 3

triangle Member since:
2013-05-13

"The argument of no-junk windows xp installation and everything working out of the box can be thrown just out the window - i mean, there's outdated IE and you are missing most of the drivers for your hardware. and the whole system is rather vulnerable out of the box, unless you roll a custom installation media streamlined with all of the hotfixes so far."

Not true. You click the update button. Then you are up to date. Too complex for you? Second, yes, one should install drivers. In the land of Windows this is not necessary most of the time, but it is always an option, and you can always get the correct drivers from the manufacturer's website. You just double-click the .exe file and boom, all is perfect in the world. In the land of Linux... good luck with driver issues. But yes, I see your point; I know you like to roll your own driver and custom-compile the kernel. As do all normal people...

"Also, you silently assumed to run default desktops given distro comes with. I can use gnome3/kde fairly comfortably, although i prefer simpler setups more. in such case older systems might struggle. but not every distribution comes with beefy desktop option by default."

Yes, I talk about how Linux comes out of the box vs. how Windows comes out of the box. How rude and flawed of me. Good point.

"You also forgot to notice that windows xp is going to become unsupported soon."

Oh my god... after 13+ years? How dare Microsoft do such a thing!?! Let's start the MS hate bandwagon right now! Just one question, dude: what is the longest support a mainstream Linux distro gets? LOL. Also, you may have noticed that MS has an OS called Windows 7. Spend a few bucks; it will be supported for quite a bit longer than any Linux distro will.


"there are linux distributions that will work well on older hardware while containg up to date software."

This is true. You can run Lubuntu... maybe... if you win the Linux driver lotto. See... the thing is that Ubuntu is garbage enough as far as usability is concerned. Why would I want to run something even more stripped down just to be able to run an OS? No thanks. You lose too much functionality/usability. We need to keep this apples to apples, and since the best of Linux isn't up to Windows 95 level in terms of ease of use/admin and functionality, why would I want to run something even more stripped down?

"Try installing some stuff written for xp on windows7 or 8. "

Funny you say that. This, for the most part, is no problem at all! I have yet to find something written for XP that doesn't work on Win 7 (drivers excluded, of course). You see, not only that, but you can even run 32-bit apps on 64-bit Windows... unlike in Linux, where they still can't get this right. Maybe in 10 years they will finally get multiarch working properly... fingers crossed. Yes, if some dynamic lib is one version number off, the application won't run in Linux. In Windows, apps are smart enough to package what they need with the app unless it is a system lib, and those are guaranteed not to break apps in the future (unlike in Linux, where breaking older apps is the norm).


"Or try installing something that requires most recent directx on windows xp. and then tell me it's not being obsoleted."

This is true. Yet you can still fully enjoy any game on XP just fine. MS is using some pressure here, for sure. I think it is fine for MS to ask users to spend $100 after a decade and a half.

"There are more and more apps getting left behind. And that includes drivers for older hardware - some hardware from winxp era is not getting drivers for modern windows releases. at some point there will be hardware with no xp drivers available. i doubt it will happen anytime soon, but it's a matter of time."

True. But again... what is the point? That XP will one day become obsolete? Granted. Spend $100 after a decade and a half. Give me a break. Programmers need to eat too.

"on linux you at least get the comfort of having your hardware supported as long as there are people using it, and it works realistically. even if it's discarded, you are free to fork the kernel or make a group of interest to restore given feature."

What a joke. Good luck even getting basic functionality out of hardware on Linux. Any hardware that is not mainstream has no chance at all. You can roll your own drivers, eh? What a flaky argument. You can do the same on Windows, mate. There is nothing stopping you.

Edited 2013-05-13 14:39 UTC

Reply Score: 0

UltraZelda64 Member since:
2006-12-05

"You also forgot to notice that windows xp is going to become unsupported soon."

Oh my god... after 13+ years? How dare Microsoft do such a thing!?! Let's start the MS hate bandwagon right now! Just one question dude, what is the longest support a main-stream Linux distro gets? LOL Also, you may have noticed that MS has an OS called Windows 7. Spend a few bucks, it will be supported for quite a bit longer than any Linux distro will.

Microsoft wanted Windows XP dead years ago, but they couldn't kill it off because of Linux and that pesky netbook fad several years ago. Never mind the atrocity that was Vista, which wouldn't even run on the damn things. If Microsoft had their way, everyone would have jumped off XP like fleas leaving a dead animal, but practically no one wanted to run that dud, to the point where Microsoft had to introduce downgrade rights (a great way to cheat the OS market share statistics). The Linux distribution family with the longest support is probably Red Hat Enterprise Linux and its derivatives. Enterprise Linux 5 and 6 have a support lifecycle that spans a full decade of production phase, followed by three years of extended life. That's 13 years. Try again.

And that's all I am going to waste my time responding to. Your comments are so absurd, I have to wonder if you are going for the "Internet's Biggest Troll" award. Your posts are sea after sea of pure, 100% bullshit. I have a hard time believing that even you believe what you're saying, and I can't believe people are wasting so much energy getting into such heated arguments with you. Seems like a lost cause to me.

Edited 2013-05-15 06:38 UTC

Reply Score: 2

RE[5]: This is anti-MS propaganda
by lemur2 on Mon 13th May 2013 10:25 UTC in reply to "RE[4]: This is anti-MS propaganda"
lemur2 Member since:
2007-02-17

Yes, Linux sometimes works. XP allways works and works fully. That is a huge difference.


It is a huge difference, but you have it the wrong way around.

Example:

SCENARIO I - 10 year old computer.
---------------------------------
CPU: Athlon XP 2500+
RAM: 1 GB
Mobo: NF7 Series
GFX: Ati Radeon 8500
SND: Creative sound blaster 5.1

A legacy Linux distro will work partially on this system. Videocard will barely work and sound card will partially. However, and this is vital... because of the flawed centralized software repo scheme no new software could be used.


I'll call your bluff:

Athlon XP 2500+ CPU is fine.
RAM: 1 GB is fine.
Mobo: NF7 Series: I can't say for sure but there shouldn't be an issue: http://www.linuxcompatible.org/compatdb/details/abit_nf7_s_linux.ht...
GFX: Ati Radeon 8500 : supported by the radeon open source driver from Xorg. Works flawlessly out of the box.
SND: Creative Sound Blaster 5.1: http://phoronix.com/forums/showthread.php?2216-Linux-Compatibility-... : An old sound card, based on the emu10k1 sound chip, that works perfectly in Windows and Linux, with hardware mixing.

So this system should work flawlessly out of the box with the latest Linux distributions.
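
For anyone who would rather check than take either of our words for it, here's a rough sketch (assuming a recent live CD of any major distro):

# which kernel driver has claimed each PCI device (GPU, sound chip, etc.)
$ lspci -k
# confirm ALSA picked up the emu10k1-based Sound Blaster
$ aplay -l
# confirm which driver X actually loaded for the Radeon
$ grep -i "driver" /var/log/Xorg.0.log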

SCENARIO II - 5-6 year old system
----------------------------------
CPU: Core 2 Duo 2.X GHZ
RAM: 3 GB
GFX: Ati 2 or 3000 series or NVIDIA 8400
SND: Creative sound blaster 5.1

New Ubuntu or Mint. With Ati shit out of luck.


Here is the list of ATI cards supported by the radeon open source driver as reported by the command "grep ATI /var/log/Xorg.0.log" under the latest Kubuntu 13.04 distribution: http://justpaste.it/2m4m

In other words, you are full of it.

Reply Score: 2

triangle Member since:
2013-05-13

You'll call my bluff with BS? Why do you assume I'm bluffing? Your assumption that I'm bluffing is simply flawed. The examples I've stated were concrete, specific examples, because I know those things to be true.

You say "it should work". I agree. It should. Unfortunately, in reality there is only how things are, not how they should be. You can't use modern distros with the open source driver for that card. If you think that is a usable system... try it and enlighten yourself.

The open source ATI driver is garbage. The fact that there exists an attempt to make a working driver does not mean it is successful.

Edited 2013-05-13 14:51 UTC

Reply Score: 2

RE[7]: This is anti-MS propaganda
by Riddic on Mon 13th May 2013 20:46 UTC in reply to "RE[6]: This is anti-MS propaganda"
Riddic Member since:
2005-10-05

You'll call my bluff with BS? Why do you assume I'm bluffing and your assumption that I'm bluffing is simply flawed. The examples I've stated were concrete specific examples because I know those things to be true.


This is like trying to dispute scientific evidence by quoting the Bible.
lemur2 at least gave you some links outlining the compatibility status of your hardware.


You say "it should work". I agree. It should. Unfortunately in reality there is only how things are and not how they should be. You can't use modern distros with open source driver for that card. If u think that is a usable system... try it and enlighten yourself.


So, you didn't get it to work and by extension it's absolutely impossible? Not much of a reference, I'm afraid.


The open source ati driver is garbage. The fact that there exists an attempt to make a working driver does not mean it is successful.


Nonsense.
My last ATI card was in a Lenovo T410 (IIRC... it was in 2006) and I got a lot to work with the open source driver just fine, whereas the official driver blew chunks (dual-screen using both video-out ports of the docking station, etc.).

Reply Score: 3

RE[7]: This is anti-MS propaganda
by lemur2 on Tue 14th May 2013 04:32 UTC in reply to "RE[6]: This is anti-MS propaganda"
lemur2 Member since:
2007-02-17

You'll call my bluff with BS? Why do you assume I'm bluffing and your assumption that I'm bluffing is simply flawed. The examples I've stated were concrete specific examples because I know those things to be true.

You say "it should work". I agree. It should. Unfortunately in reality there is only how things are and not how they should be. You can't use modern distros with open source driver for that card. If u think that is a usable system... try it and enlighten yourself.

The open source ati driver is garbage. The fact that there exists an attempt to make a working driver does not mean it is successful.


I called your bluff BS because it is BS.

Linux works with nearly every system out there. It is absolutely certain to work if you get a Linux system in the same way that you get a Windows system ... by buying a system from the vendor with Linux pre-installed.

And it will work faster, better and with more stability than Windows.

Reply Score: 2

lucas_maximus Member since:
2009-08-18

And Windows 8 will also work flawlessly with such a system, because that is exactly the sort of system I own.

Reply Score: 2

RE[6]: This is anti-MS propaganda
by zima on Fri 17th May 2013 19:53 UTC in reply to "RE[5]: This is anti-MS propaganda"
zima Member since:
2005-07-06

GFX: Ati Radeon 8500 : supported by the radeon open source driver from Xorg. Works flawlessly out of the box.
[...]
Here is the list of ATI cards supported by the radeon open source driver as reported by the command "grep ATI /var/log/Xorg.0.log"

There's more to it than such a list. I happen to have an old PC with this GFX card, a Radeon 8500 - and while the open source driver is the best choice, it really isn't very good: it's unmaintained (like many Linux drivers for older stuff), works poorly with 3D-compositing WMs, and multi-monitor setups are really bad and very glitchy (especially when trying to use it alongside another GFX card, a Matrox G450 PCI, also with OSS drivers; no such problems under XP...).
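
For context, the setup I mean is nothing exotic - roughly the following (just a sketch; the output names vary by card and driver):

# list the outputs the driver detected
$ xrandr
# extend the desktop across two monitors
$ xrandr --output DVI-0 --auto --output VGA-0 --auto --right-of DVI-0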

Edited 2013-05-17 19:55 UTC

Reply Score: 2

RE[5]: This is anti-MS propaganda
by leech on Mon 13th May 2013 14:08 UTC in reply to "RE[4]: This is anti-MS propaganda"
leech Member since:
2006-01-10

For one, it works. That's a pretty big one. Let me elaborate.

Yes, Linux sometimes works. XP allways works and works fully. That is a huge difference. The function of an OS is to be an OS... not to supply you with a huge amount of free software. By the way, there is huge amount of free open source software for win also. Look up softpedia.

Softpedia is filled with "DOWNLOAD NOW" links that go to different software than what you're looking for, not to mention other sites that bundle open source software with their own toolbars and other malware/spyware. See below about Linux working...
So let us compare specifically how the OS compares out of the box. I'll state 4 scenarios. All 4 scenarios will consider what the experience would if we were to install an OS today.

SCENARIO I - 10 year old computer.
---------------------------------
CPU: Athlon XP 2500+
RAM: 1 GB
Mobo: NF7 Series
GFX: Ati Radeon 8500
SND: Creative sound blaster 5.1

Current Ubuntu, Mint, etc will not work on this system. Score F-.

Use Debian. Wheezy just came out and I bet you it'd run flawlessly on that setup. Ubuntu and Mint go for the newest hardware and generally don't try too hard to support older hardware. I had Debian Wheezy running fast on a Pentium II with 512 MB of RAM a while back. The only issue I had on that system (at the time; it should be fixed now that Wheezy is stable) is that I couldn't use the full GNOME Shell, because the video card I had only worked with the legacy nVidia drivers, which wouldn't work with the newer kernel (hardly Linux's fault - that's all on nVidia).
A legacy Linux distro will work partially on this system. Videocard will barely work and sound card will partially. However, and this is vital... because of the flawed centralized software repo scheme no new software could be used. This effectively means no security and very limited functionality/productivity. So in effect, installing a legacy distro is not an option. Score F.

The Radeon 8500 is hardly a new card. Blame ATI for not supporting their video cards worth a crap. I have an AMD HD 3200 in a laptop and it's not all that old (from 2009?) and they've already dropped support for it. I grabbed the fglrx-legacy drivers from experimental and haven't had issues since. Again, as far as the repositories go, use a distribution that doesn't suck. Debian is the best for long-term support; while the actual support period isn't as long, the upgrade paths are more or less flawless. I went from sarge, to etch, to lenny, to squeeze on my server. The only reason I finally reinstalled is that the hardware got old and I upgraded to a 64-bit system. There were methods for converting from 32-bit to 64-bit Debian without reinstalling, but it seemed like a royal pain in the butt, so I just reinstalled after backing everything up. You can't even do that on any Windows system.
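
For anyone curious, a release-to-release Debian upgrade is roughly the following (a sketch; read the release notes first, and the release names here are just an example):

# point apt at the new release, e.g. squeeze -> wheezy
$ sudo sed -i 's/squeeze/wheezy/g' /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get upgrade        # safe upgrades first
$ sudo apt-get dist-upgrade   # then the full release upgrade
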
Windows XP:
-Works perfectly. Everything fully functional and optimal.
-Can install any new software.
Score A.

Of course it's quick and fully(?) functional when there isn't any software installed. Once you start installing software it gets fat and unresponsive. Thank the registry for that. And it is rapidly becoming the case that you can't install just any software. You're permanently stuck on DirectX 9 because Microsoft is forcing you to upgrade. I know, because I would probably have stuck with XP if it weren't for newer games all supporting DirectX 10+.
SCENARIO II - 5-6 year old system
----------------------------------
CPU: Core 2 Duo 2.X GHZ
RAM: 3 GB
GFX: Ati 2 or 3000 series or NVIDIA 8400
SND: Creative sound blaster 5.1

New Ubuntu or Mint. With Ati shit out of luck. F-
With NVIDIA will work. Will be slow but but speed wise usable. Of course new GUI systems like unity and gnome 3 are total crap and that should be taken into account. Let's pretend classic is used. in this case the score depends highly on hardware. You are playing the hardware lotto. If run virtualized, the score is an F because the harware can't handle with enough speed the hog that Linux is.

Use KVM with QXL, or are you talking about Linux being the guest and Windows being the host? In which case it's Windows' fault that Linux is slow.
Legacy Linux. Can't be run for sane reasons as above. No axs to new packages/software. Score F-.

Tell that to my friend who supports legacy systems using RHEL 3 and 4, and ancient FreeBSD systems that customers refuse to upgrade. Sure it's a pain, but it's possible.
Windows XP:
-Works perfectly. Everything fully functional and optimal.
Score A+.

Installing guest drivers on XP can be annoying. But sure, if you install it yourself and then (as above) start installing a bunch of stuff, it gets cluttered real quick. Basically, to be usable as an 'operating system' it has to have applications installed, and installing those applications is what slows XP down. It's been a known issue for as long as Windows 95 has been around.
SCENARIO III - 2 Year OLD system
---------------------------------
CPU - AMD Phenom
RAM - 4GB
GFX - ATI 48XX
SOUND - Creative 6.1 or 7.1

Current Ubuntu or Mint, works but not fully. Minor isues with sound card and major issues with video card. Gaming out of the question unless you downgrade x server blah blah. Score C-.

Legacy Ubuntu or Mint. Not an option for above reasons and also pointless.

You keep mentioning sound card issues. I never had any issues with my Audigy cards, except when they finally started to make weird popping noises (which started in Windows). I've had MORE issues getting them working in new versions of Windows than I ever have in Linux. Granted, I didn't buy the X-Fi 'til much, much later, and so didn't have to wait for Creative to open-source their drivers, but once I had one, I didn't have any issues with it.

CONCLUSION
------------
One of the key functions of an OS is that is should work. Sometimes Linux does works but often it does NOT! You must play the hardware lotto. Yet it is the goal of an OS to make hardware work. So it fails at this very basic level.

Also, Linux does not age well. It is not usable with older hardware because that forces users to give up on new packages/software and security. In fact, even 2 year old hardware becomes obsolete real fast as with the Radeon 4XXX series forcing people to buy new hardware.

For hardware to work you need proper driver flexibility. Linux does not have this. Yes, this is a business issue not just an engineering issue but alas the result is the same. You are lucky to even get your hardware working. In Windows land you always have full hardware support.

In windows you can keep your software. They are not obsoleted by package updates.

Also, in the land of Windows... you can run up to date new software on old hardware.

In short Windows as an OS fulfills all the needs that an OS is supposed to fulfill. It is flexible and has long term hardware support. XP is 13 years old and is still marvelous.

I should also add that software developed for Microsoft still works for the most part in newer OS's because of the amazing backwards compatibility MS always achieves.

In short Windows has none of the problems Linux has.
With two exceptions of course, it is not free and not open source.


I have a lot of games that work better in Wine than on Windows 7. I have also been able to get old binary-only software working on Linux with the libstdc++5 libraries installed. I think all of your issues with 'Linux' are really issues with Ubuntu. Try a more stable distribution like CentOS or Debian. You'll find it far more pleasant to use, and it will work far better on older hardware. Ubuntu is for the New Kids, who like the shiny. Sorry, I had to snip some of the quote due to the character limit.
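
To be concrete about the old-binary trick (just a sketch; the binary name is a placeholder, and the package name is from Debian/Ubuntu of that era):

# see which shared libraries the old closed-source binary can't find
$ ldd ./some-old-binary | grep "not found"
# the old C++ runtime such binaries usually want is still packaged
$ sudo apt-get install libstdc++5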

Reply Score: 3

triangle Member since:
2013-05-13

Well, I give you props for being specific.

In short, you admit that things don't work so well in Linux and that it can be a pain in the butt. Your friend makes a living doing that. I'm sure you are correct; I wouldn't argue against the idea that it is possible to support legacy hardware on Linux. My point is that in the land of Windows people don't make a consulting career out of doing so, because it is painless. Things just work perfectly.

Essentially what you are saying is that you are willing to make do with the hassles and shortfalls of Linux. Good for you.

I have used and do use Debian every day.

Blame ATI? As far as I'm concerned, it is not about who is at fault; it is about the state of things. However, if we want to place blame for the state of things, then I would say that the problem is the GPL.

As far as games running better in Wine... I don't really have to comment on that. You should have left that one out, because it reeks of bias and reality distortion. Even if there were a game or two like that... most games don't even run in Wine (of course you don't mention that).

Edited 2013-05-13 15:09 UTC

Reply Score: 0

RE[5]: This is anti-MS propaganda
by bert64 on Tue 14th May 2013 07:38 UTC in reply to "RE[4]: This is anti-MS propaganda"
bert64 Member since:
2007-04-23

Why would you install a legacy distro instead of just a lightweight distro? A machine with 1 GB of RAM is not bad at all; I run virtual machines with considerably less.

Also, XP lacks drivers for much of this hardware by default, which means you have to either use a modified install disc or install the drivers manually, which can be extremely painful, especially if you're not sure what hardware is present. In some cases you might actually need a floppy disk (!) to install XP on a machine where the SATA controller is not supported by default.

It's amusing that you call a centralised repository flawed, when the Windows approach (download arbitrary binaries from random sites) is far more flawed and requires considerably more user knowledge in order to avoid malware infestations. Also, the "flaw" as you call it seems to be that the repository is outdated, but there is no real reason to run an old Linux distro when the new ones are available for free.
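
To make the comparison concrete, the whole "flawed" centralised workflow amounts to this (a sketch, in Debian/Ubuntu syntax):

# everything installed on the system is updated from signed repositories in one step
$ sudo apt-get update && sudo apt-get upgrade
# installing something new is one command - no hunting for download links
$ sudo apt-get install vlc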

I'm also curious as to how you had problems with older Radeon cards, since I have had no problems using older cards with modern Linux distros (7000, 9200, X1600, X1900) and they work out of the box with the open source drivers. Conversely, these cards won't work at all with Windows 7/8, and in some cases only run on XP if you're willing to accept drivers with known security holes.

XP is not "up-to-date software", and applications are only still being made compatible with it because many users have not upgraded to newer versions. Linux doesn't have the problems Windows has which cause people to avoid upgrading - problems like lock-in, lack of drivers in newer versions, increased hardware requirements, cost, etc. There is very little reason to ever run an old version of Linux, even on old hardware.

The biggest problem with Linux that you highlight is that many hardware manufacturers are still stuck with the Windows mentality of releasing closed-source drivers... Open drivers work far better, and are the reason why modern Linux supports all manner of hardware which has been abandoned by modern Windows, especially the 64-bit variants. While 64-bit Linux supports almost all the hardware the 32-bit version did, simply by recompiling the drivers, once 64-bit Windows came out only new hardware ever got drivers, and there is all manner of older stuff which is unusable.
And then there are other processors: Linux on ARM inherits the majority of its drivers from x86 Linux, so if you have an ARM (or MIPS or PPC, etc.) based machine with PCI or USB slots, you can plug all manner of hardware in and have it work. If you are running Windows CE or Windows RT on such hardware, you have extremely limited driver support - because the manufacturers only ever made x86 binary drivers.

Reply Score: 3

Drumhellar Member since:
2005-07-12

Ever look at your memory usage on Windows? I can pretty much guarantee that it's using Page file, even though it still has physical memory. I NEVER see this on Linux. We buy RAM for a reason, Microsoft... fix your damn memory management!


I want to add that after using my computer for a few hours yesterday, I was able to turn off the page file without having to reboot, meaning it wasn't in use (it also reported 0 MB used in the page file). That is the first time that's ever happened to me in Windows.
That reminds me: I have to turn the page file back on, now that I'm no longer defragging prior to shrinking my Windows partition.

I'm running Windows 8 with 8 GB of RAM. Not a small amount of RAM, but not a lot, either.

Reply Score: 2

RE[3]: This is anti-MS propaganda
by ba1l on Mon 13th May 2013 07:15 UTC in reply to "RE[2]: This is anti-MS propaganda"
ba1l Member since:
2007-09-08

Today's Ubuntu and Mint (some of the most pop distros and my fav also) are not yet on the same level of functionality as Windows XP. XP is far superior imo.


I wouldn't agree with either of those two points.

If you exclude "runs Windows applications" as a feature (if that's what you want to do, just use Windows), I can't really think of anything Windows XP has that current (or even older) Linux distributions don't. Certainly nothing that I use.

I don't really consider Windows XP to be viable anymore, unless you're running on very old hardware that can't run Windows 7 well (or at all).

When using a Windows XP machine, there are plenty of things I miss from Linux. And Windows 7. And Mac OS X, for that matter. It just feels like an antique to me at this point.

As far as the second test goes, I have ran those OS's on bare metal on those systems. Linux is noticeably slower. I mentioned virtualization because there the additional overhead makes the difference even more obvious.


From my experience, that's usually a problem with the graphics drivers (nVidia's drivers, in the case I'm thinking of), but I've had no such problems in years.

It's hard to know what you mean by "slow" anyway. Unless there's a graphics card issue (dropping back to software rendering because you're in a VM will pretty much do that), I can't think of any way that Linux feels slower than Windows. Not on anything I've used in a very long time.

Reply Score: 4

RE: This is anti-MS propaganda
by Gullible Jones on Mon 13th May 2013 12:58 UTC in reply to "This is anti-MS propaganda"
Gullible Jones Member since:
2006-05-23

You lost me on the VMs bit. 1 GB Ubuntu VMs on a powerful machine like what you describe will work just as well as Windows. Hell, on a Core 2 Duo you can run Unity 3D with no hardware acceleration, on one core. That's how damn powerful modern CPUs are.

ATM I'm running Linux with the Mate desktop on a 1 GB Intel Atom netbook with trashy Intel graphics, less powerful than the obsolete configuration you mention. It's about as fast as Windows XP - i.e. not very, but usable.

OTOH, Gnome 3 makes this netbook cry. Thus, I reiterate: the problem is the desktop stack.

Edited 2013-05-13 12:58 UTC

Reply Score: 3

lucas_maximus Member since:
2009-08-18

Why does Ubuntu run like crap on VirtualBox then, while Windows 8 runs nice and smoothly?

Reply Score: 3

tylerdurden Member since:
2009-03-17

That's your argument, seriously? LOL

Edited 2013-05-13 18:40 UTC

Reply Score: 1

lucas_maximus Member since:
2009-08-18

LOL


Is that your argument?

It's all anecdotal evidence, unless someone produces some real numbers.

Reply Score: 2

tylerdurden Member since:
2009-03-17

I was just confused as to what your admission that you had no idea what you were doing, in a specific matter, had to do with the general scheme of things.

Reply Score: 1

lucas_maximus Member since:
2009-08-18

Pathetic as always.

We are talking about performance; I stuck in my 2 cents for what it's worth.

I know it isn't a good measure and I don't pretend it is.

I was more interested in the fact that Windows 8 works pretty well in a VM and seems to have some compositing, whereas GNOME 3 runs like dirt with the same allocated video memory.

Edited 2013-05-13 22:06 UTC

Reply Score: 2

tylerdurden Member since:
2009-03-17

You're being too hard on yourself. I wouldn't go as far as referring to you using your lack of technical competence in a matter as "pathetic" per se... more like an "odd" choice for the basis of an argument.

Reply Score: 1

Gullible Jones Member since:
2006-05-23

With Ubuntu it was actually under KVM, not VirtualBox. 1 GB of RAM, no graphics acceleration at all. It still worked fine, mostly because the computer in question was a beast; on my netbook it sucks terribly.

Not sure how well it would work in VirtualBox, though. If your hardware is similar to my late Core 2 Duo desktop, the problem might be with VirtualBox, not Ubuntu.

As far as Windows 8 is concerned, I found it quite fast in VirtualBox (even without GPU acceleration) but quite slow on an older laptop. I would guess it's heavily optimized for newer machines.

Reply Score: 3

lucas_maximus Member since:
2009-08-18

I am talking about my 8-processor Xeon machine at work.

I honestly don't know whether it is VirtualBox or the distro. But Windows 8 works pretty damn well, and anything with lots of 3D compositing (GNOME 3, Unity) seems to do very badly.

Reply Score: 2

RE[3]: This is anti-MS propaganda
by phoenix on Tue 14th May 2013 17:31 UTC in reply to "RE[2]: This is anti-MS propaganda"
phoenix Member since:
2005-07-11

Why does Ubuntu run like crap on virtual box then, while Windows 8 runs nice and smoothly?


What virtual hardware are you using? Emulated drivers like IDE/SATA/e1000? Or virtio drivers? If you aren't using virtio, performance will be horrible, regardless of which OS you use.

And no "defaults suck! I don't want to tune! Wah!" is not an answer to why one OS works well and another doesn't.
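
As a sketch of the difference I mean (QEMU/KVM syntax; the disk image name is just a placeholder, and VirtualBox can likewise be switched from the emulated e1000 NIC to a virtio one in the VM settings):

# paravirtualised disk and NIC instead of emulated IDE/e1000
$ qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=guest.img,if=virtio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0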

Reply Score: 2

RE: This is anti-MS propaganda
by Yagami on Mon 13th May 2013 15:16 UTC in reply to "This is anti-MS propaganda"
Yagami Member since:
2006-07-15

Thanks!!

I appreciate your posts!

Usually we have Vim vs. Emacs, GNOME vs. KDE, Ubuntu vs. Debian, Google vs. Apple... everyone here is usually a little bit of a fanboy, but within reason.

But your posts are highly original! I must congratulate you!

You are the most delusional troll I can remember on OSNews! You just talk bullshit, like you know everything, everyone else are liars, and only you know the truth!

It's so stupid that I just suggest everyone laugh and forget what you said! (Not worth mentioning the Linux servers around the world, the embedded market, that Linux usually needs less RAM and CPU than Windows, or even NVIDIA and Steam developers reporting FPS increases on Linux compared to Windows!)

Other people must really like you!!! ;)

Thank you for a good laugh!!! You are the King Idiot!

Reply Score: 3

RE[2]: This is anti-MS propaganda
by triangle on Mon 13th May 2013 15:26 UTC in reply to "RE: This is anti-MS propaganda"
triangle Member since:
2013-05-13

You are most welcome. I should also add that the market really likes Linux. That is why, despite being free, Linux has a 1% market share. Because it is so awesome.

Or wait...

Does Linux have a 1% market share because it is so awesome, or is it because people like myself are too stupid to see the awesome that is Linux? Isn't the prevailing view of Linux users that ordinary people are just too dumb to understand how amazing Linux is?

Yes, you are too smart and too clever for the rest of us. Isn't it wonderful up on that high horse, mate?

Edited 2013-05-13 15:28 UTC

Reply Score: 0

RE[3]: This is anti-MS propaganda
by Yagami on Mon 13th May 2013 15:38 UTC in reply to "RE[2]: This is anti-MS propaganda"
Yagami Member since:
2006-07-15

[quote] Yes, you are too smart and too clever for the rest of us. Isn't it wonderful up on that high horse mate? [/quote]

Don't know... you tell me, Mr. Know-It-All, Keeper of statistics and benchmarks... of the two truths!!

Reply Score: 3

Gullible Jones Member since:
2006-05-23

That's desktop market share. Server market share is way higher than 1%, because Linux is awesome - on servers.

Edited 2013-05-13 16:20 UTC

Reply Score: 2

RE[4]: This is anti-MS propaganda
by Riddic on Mon 13th May 2013 20:50 UTC in reply to "RE[3]: This is anti-MS propaganda"
Riddic Member since:
2005-10-05

And on mobile phones, embedded systems... anything that has ARM at its core, in-flight entertainment systems, and as of late the ISS, etc.

Edited 2013-05-13 20:55 UTC

Reply Score: 2

unclefester Member since:
2007-01-13

You are most welcome. I should also add that the market really likes Linux. That is why despite being free Linux has a 1% market share. Because it is so awesome.

Or wait...

Does Linux have a 1% market share because it is so awesome or is it because people like myself are too stupid too see the awesome that is Linux? Isn't the prevailing view of Linux users that ordinary people are just too dumb too understand how amazing Linux is?

Yes, you are too smart and too cleaver for the rest of us. Isn't it wonderful up on that high horse mate?



Linux has only 1% market share on desktops and laptops because Microsoft coerces manufacturers into installing Windows.

However, in markets where MS has little or no leverage - servers, supercomputers, embedded devices, phones and tablets - Linux has been a spectacular success [and MS a spectacular failure].

In fact, sometime in the next 2-4 years Linux (Android) will become the best-selling OS, as portable devices begin to replace most desktops and laptops.

Reply Score: 3

RE[4]: This is anti-MS propaganda
by lemur2 on Tue 14th May 2013 08:11 UTC in reply to "RE[3]: This is anti-MS propaganda"
lemur2 Member since:
2007-02-17

"You are most welcome. I should also add that the market really likes Linux. That is why despite being free Linux has a 1% market share. Because it is so awesome.

Or wait...

Does Linux have a 1% market share because it is so awesome or is it because people like myself are too stupid too see the awesome that is Linux? Isn't the prevailing view of Linux users that ordinary people are just too dumb too understand how amazing Linux is?

Yes, you are too smart and too cleaver for the rest of us. Isn't it wonderful up on that high horse mate?



Linux has only 1% marketshare on desktops and laptops because Microsoft coerces manufacturers into installing Windows.

However in markets where MS has little or no leverage - servers, supercomputers, embedded devices, phones and tablets Linux has been a spectacular success [and MS a spectacular failure].

In fact sometime in the next 2-4 years Linux (Android) will be the best selling OS as portable devices begin to replace most desktops and laptops.
"

Indeed, Android uses a Linux kernel, and it is easily the best-selling OS on mobile phones. It has also recently just about caught up with iOS on tablets.

http://www.androidcentral.com/survey-says-android-claims-48-us-tabl...

Linux is the most popular OS for supercomputers, embedded in devices such as TVs, DVDs, PVRs, e-readers etc, even in cars, in network infrastructure such as routers, in servers, in render farms, and in server farms such as used by Google and Facebook.

Even for desktops, Linux ships currently on about 5% of machines sold.

http://www.phoronix.com/scan.php?page=news_item&px=MTA5ODM

Given all this coverage, over the whole of the computing market Linux is actually the majority OS of the computing world.

Oh, and given that many ordinary people in western countries own a car, a phone or tablet or both, a TV, a DVD player, a PVR, perhaps a separate GPS, maybe an e-reader, a wifi router and ADSL modem, and also a desktop or laptop ... then such an ordinary person is likely to be running about five times as many copies of Linux as copies of Windows.

Edited 2013-05-14 08:21 UTC

Reply Score: 2

lucas_maximus Member since:
2009-08-18

Android != Desktop Linux.

It is like me including Windows CE PDAs and POS machines in Windows market share.

While Android is Linux based, the underlying kernel could have been anything. Linux is one component of what makes Android.

Reply Score: 3

RE[6]: This is anti-MS propaganda
by lemur2 on Wed 15th May 2013 05:28 UTC in reply to "RE[5]: This is anti-MS propaganda"
lemur2 Member since:
2007-02-17

Android != Desktop Linux.


Agreed. However, Android does include Linux. The Android kernel is part of the main Linux source tree.

http://www.pcworld.com/article/252137/linux_unites_with_android_add...

It is like me including Windows CE PDAs and POS machines in Windows market share.


Hardly. Windows CE does not use the same source tree. I don't know what codebase Windows POS devices utilise.

While Android is Linux based, the underlying kernel could have been anything. Linux is one component of what makes Android.


Agreed entirely.

Therefore, if we wish to assess how many of the world's CPUs utilise a Linux kernel, we must include Android devices.

Edited 2013-05-15 05:30 UTC

Reply Score: 2

unclefester Member since:
2007-01-13


It is like me including Windows CE PDAs and POS machines in Windows market share.


Nope. "Windows" is merely a brand used by Microsoft. It is not a particular technology. Windows CE and POS machines aren't part of the NT-based ecosystem.


While Android is Linux based, the underlying kernel could have been anything. Linux is one component of what makes Android.


Linux is by definition a kernel. So anything that uses a Linux kernel is 'Linux'. Conversely, any desktop system that looks and behaves like a Linux distro but uses a different kernel isn't Linux.

So by definition Android is Linux and MintBSD is non-Linux.

Reply Score: 2

RE[2]: This is anti-MS propaganda
by zima on Sat 18th May 2013 18:00 UTC in reply to "RE: This is anti-MS propaganda"
zima Member since:
2005-07-06

even nvidia and steam developers reporting FPS increase on linux comparing to windows ) !

That is likely largely meaningless... (replies in http://www.osnews.com/thread?561343 sub-thread)

Reply Score: 2

RE: This is anti-MS propaganda
by lemur2 on Tue 14th May 2013 11:43 UTC in reply to "This is anti-MS propaganda"
lemur2 Member since:
2007-02-17

This is total propaganda and I am sick of it. Windows is BY FAR the fastest OS. Linux is absolute junk. While the Linux kernel may be OK...and I stress may be OK... let us not accept such things on blind faith... Linux as a complete operating system is total garbage.


http://www.telegraph.co.uk/technology/news/10049444/International-S...

International Space Station to boldly go with Linux over Windows

Computers aboard the International Space Station are to be switched from Windows XP to the Linux operating system in an attempt to improve stability and reliability.

Dozens of laptops on the ISS's 'opsLAN' network - which provides the ship's crew with vital capabilities for day-to-day operations, from telling the astronauts where they are to interfacing with onboard cameras - will be switched, removing Windows entirely from the ISS.

“We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable – one that would give us in-house control. So if we needed to patch, adjust or adapt, we could," said Keith Chuvala of the United Space Alliance, which runs opsLAN for NASA.

Astronauts using the system were trained on specific courses tailored by the non-profit Linux Foundation.

Linux is already used to run various systems aboard the ISS, including the world's first 'Robonaut', sent to the Space Station in 2011. 'R2' can be manipulated by astronauts as well as ground controllers and is designed to carry out tasks "too dangerous or mundane" for astronauts in microgravity, according to the Linux Foundation.

Tailored versions of Linux are widely used in scientific projects, including CERN’s Large Hadron Collider.


http://www.pcworld.com/article/238068/how_linux_mastered_wall_stree...

How Linux Mastered Wall Street

When it comes to the fast-moving business of trading stocks, bonds and derivatives, the world's financial exchanges are finding an ally in Linux, at least according to one Linux kernel developer working in that industry.

This week, at the annual LinuxCon conference in Vancouver, Linux kernel contributor Christoph Lameter will discuss how Linux became widely adopted by financial exchanges, those high-speed computerized trading posts for stocks, bonds, derivatives and other financial instruments.

As an alternative to traditional Unix, Linux has become a dominant player in finance, thanks to the operating-system kernel's ability to pass messages very quickly, Lameter said in an interview with IDG. In fact, the emerging field of high-frequency trading (HFT) would not be possible without the open-source operating system, he argued. Lameter himself was hired as a consultant by one exchange -- he won't say which one -- based on his work in assembling large-scale Linux clusters.


http://www.smartcompany.com.au/information-technology/050276-ibm-in...

IBM, Intel and Linux dominate Top 500 supercomputer market

In terms of operating systems, 462 out of the 500 supercomputers on the list use Linux, 25 run on Unix, and just 13 are based on Windows.


Edited 2013-05-14 11:56 UTC

Reply Score: 1

lucas_maximus Member since:
2009-08-18

The ISS is using hardened machines for outer space, not modern machines.

Supercomputers != desktop operating systems.

I could go on. We are primarily talking about desktop operating systems when we talk about Windows... and none of the examples prove it is a good desktop operating system. They prove that it works well in those circumstances.

Lemur2, the guy that doesn't understand what a use case is or why it really matters when evaluating software.

Edited 2013-05-14 18:19 UTC

Reply Score: 2

RE[3]: This is anti-MS propaganda
by lemur2 on Wed 15th May 2013 05:57 UTC in reply to "RE[2]: This is anti-MS propaganda"
lemur2 Member since:
2007-02-17

ISS are using hardened machines for outerspace, not modern machines.


So? The ISS machines, modern or otherwise, for reasons of stability and supportability, are still going to be running Linux rather than Windows XP in the future.

Super computers != Desktop operating systems


True. Having said that, supercomputers do utilise operating systems, and many of the supercomputers on the top 500 list do utilise Intel CPUs. Furthermore, for reasons of scalability and speed, 462 of the top 500 supercomputers do use a Linux kernel, and there is, after all, only one Linux kernel source tree.

I could go on. We are talking about primarily desktop operating systems with Windows ... and none of the examples prove it is a desktop operating system.


No we aren't, we are talking about operating systems. This site is called "OSNews", which is short for "Operating System News". It doesn't say "Desktop Operating System News".

Lemur2, they guy that doesn't understand what a use case is or why it really matters when evaluating software.


Lucas_maximus, the guy that doesn't know what we are talking about under the topic of "Operating System News".

Edited 2013-05-15 06:01 UTC

Reply Score: 2

RE[4]: This is anti-MS propaganda
by zima on Fri 17th May 2013 19:39 UTC in reply to "RE[3]: This is anti-MS propaganda"
zima Member since:
2005-07-06

It doesn't say "Desktop Operating System News".

Hm, DOSnews would have a funny ring to it ;)

Reply Score: 2

RE[3]: This is anti-MS propaganda
by zima on Wed 15th May 2013 13:46 UTC in reply to "RE[2]: This is anti-MS propaganda"
zima Member since:
2005-07-06

ISS are using hardened machines for outerspace, not modern machines.

Actually, there are dozens of fairly normal laptops (Thinkpads, including quite recent T400) on the ISS. The only major modification is cooling (because convection doesn't work in microgravity).

Reply Score: 2

RE[2]: This is anti-MS propaganda
by zima on Fri 17th May 2013 19:42 UTC in reply to "RE: This is anti-MS propaganda"
zima Member since:
2005-07-06

One can argue that HFT is not a good thing...

Reply Score: 2

I don't buy it.
by Tuishimi on Mon 13th May 2013 16:27 UTC
Tuishimi
Member since:
2005-07-06

I worked at DEC and had the privilege of working on VMS. OS work was most definitely split into subgroups; I worked in security. But security had the benefit of being interwoven throughout the various components of the operating system, so I worked in LOGINOUT, memory management, backup, the command line interpreter, etc.

I had to touch other people's code and, in fact, was even assigned investigations FROM the other groups. I never had a bad experience where a developer was over-protective of their code or didn't want someone else's fingers in their pie.

Also, peer code review was prevalent. If you had a way to improve performance or a recommendation that would benefit the operating system in any way, you presented it to a group of your peers (and more senior developers), and if your recommendation was valid, you'd be given the go-ahead to make the change.

There was no inter-group rivalry; it was one very large team, and we all understood the concept of helping each other out, since we were all working toward the same goal of releasing a bug-free, superior product.

(Note: the VP who drove NT was an ex-DEC VP of VMS, so I would think the same philosophies would carry over.)

Reply Score: 5

RE: I don't buy it.
by tylerdurden on Mon 13th May 2013 20:46 UTC in reply to "I don't buy it."
tylerdurden Member since:
2009-03-17

I don't know if DEC is a good example. They are long gone by now, whereas Microsoft is not only still around but thriving as well.

The two companies had very different definitions of what a "product" is/was, so their corporate cultures may have been almost uncorrelated.

Reply Score: 2

RE[2]: I don't buy it.
by Tuishimi on Wed 15th May 2013 15:12 UTC in reply to "RE: I don't buy it."
Tuishimi Member since:
2005-07-06

DEC is no longer around because of poor management decisions; its products were excellent (both software and hardware).

It was a weird situation: most employees could see that the vast spending, the uncontrolled growth in non-product-related areas, and the decisions about the future of personal computing and how it should shape the company's business model going forward would cause the company to stumble and fall. It fell hard and fast.

Despite the fact that the company is gone, VMS and other divisions of DEC STILL live on, absorbed by other companies.

Reply Score: 2

RE[3]: I don't buy it.
by tylerdurden on Wed 15th May 2013 18:41 UTC in reply to "RE[2]: I don't buy it."
tylerdurden Member since:
2009-03-17

That's a pretty common dissonance among engineering teams; it's always management issues, never technical or engineering ones. Even though management is part of the engineering process ;-).

E.g., those excellent technical approaches by DEC, in the end, led to products which either underperformed, priced themselves out of the market, or arrived too late to matter. Whereas Microsoft's supposedly "sloppier" MO ended up being consistently right on the money, literally. Or at least it was, until other, even sloppier and hungrier, companies started to pop up trying to eat their lunch. It's the circle of life, I guess...

Reply Score: 2

RE: I don't buy it.
by unclefester on Wed 15th May 2013 09:33 UTC in reply to "I don't buy it."
unclefester Member since:
2007-01-13

I believe NT started off as a very high-quality, VMS-inspired OS. However, it has generally deteriorated with each iteration since it became a consumer desktop OS.

Reply Score: 2

Tuishimi Member since:
2005-07-06

This I believe.

Reply Score: 2

Problems on both sides of fence.
by acobar on Tue 14th May 2013 01:27 UTC
acobar
Member since:
2005-11-15

OK, let's make things short.

First, I don't have any experience with Windows or Linux kernel development or patch submission.

Peer review has been the norm on medium/large private code-development projects for years. No one wants to look dumb to their "pals" (and even less to their "rivals"). The Windows kernel is definitively not a small project, so, I guess, the "pride" differential argument really does not apply at this scale. It may happen on small (very-few-developer) open source projects, though, as bad code visibility may be, for some, kind of embarrassing.

New challenges are a great stimulus to attract talented developers, but so is money. MS used to be famous for rewarding its developers. Unless things have changed a lot in recent years, I would not use this card as a differential either.

Getting back to "regular" coding, I do, however, have some experience submitting "fix snippets" to small projects through independent patch proposals. Some got accepted, some did not.

Some maintainers are nice guys who ask you a few questions, suggest improvements, and give some well-founded arguments on why things are the way they are and what the consequences of changes could be; some just refuse or apply the patches; and some are jerks who sit on the code as if what's there is the only reasonable way to do whatever the code needs to do, and shoot down any attempt to alter it with crap excuses.

It happens on both worlds.

I don't want to point fingers, so just google "jerk open source maintainers" if you want.

Edited 2013-05-14 01:30 UTC

Reply Score: 3

Tuishimi Member since:
2005-07-06

Exactly. (Well, first part of your response, don't know about the other as my experience is limited there).

Reply Score: 2

Comment by ilovebeer
by ilovebeer on Tue 14th May 2013 02:23 UTC
ilovebeer
Member since:
2011-08-08

Nice to see the Linux vs. Windows penis measuring contest is still alive and well. Personally I use both operating systems so I suppose I win no matter what.

Reply Score: 3

RE: Comment by ilovebeer
by Yagami on Tue 14th May 2013 09:52 UTC in reply to "Comment by ilovebeer"
Yagami Member since:
2006-07-15

yeah .... so do I use both.

But my penis is larger when using Linux ;) (such a crazy conversation, this is) ;)

This whole conversation and topic is just toooo silly, and the main point from the article is completely missed!

This isn't about "benchmarks and performance" (because the kernel may be slower but optimized graphics make up for it, etc. ... there is a whole stack of software layers that alters the final perceived performance of a computer).

This is actually about companies like Microsoft and, really, any large company with employees and divisions/teams that defend their jobs and status within the company more than the product and the company itself.

Again, this is not something that doesn't exist in open source development (with open source developers and their NIH syndrome)... but it's different, because collaboration and patches are extremely welcome and there is always the freedom to take the code, fork it, and do it properly.

Having worked at and experienced the huge difference between small open source companies and big Java enterprises... I can relate to the topic completely.

Reply Score: 3

RE: Comment by ilovebeer
by Drumhellar on Tue 14th May 2013 16:54 UTC in reply to "Comment by ilovebeer"
Drumhellar Member since:
2005-07-12

Nice to see the Linux vs. Windows penis measuring contest is still alive and well. Personally I use both operating systems so I suppose I win no matter what.


Does that mean you have two penises?

Reply Score: 2

RE[2]: Comment by ilovebeer
by Tuishimi on Wed 15th May 2013 15:15 UTC in reply to "RE: Comment by ilovebeer"
Tuishimi Member since:
2005-07-06

Hmm
by kalcytriol on Wed 15th May 2013 21:32 UTC
kalcytriol
Member since:
2011-04-23

What else is new?

Reply Score: 1