We all know them. We all hate them. They are generally overdone, completely biased, or so vague they border on the edge of pointlessness (or toppled over said edge). Yes, I’m talking about those “Is Linux ready for the desktop” articles. Still, this one is different.
Instead of some vague exposition of why Linux on the desktop sucks (which almost always comes down to: “it does things differently from Windows”), this article presents a very simple and clear list of things that are currently lacking or underperforming in the desktop Linux world. No vague idealistic nonsense, just a simple, to-the-point list of what’s wrong with desktop Linux, and what needs fixing.
Written between April 30 and May 18 2009, the document “discusses Linux deficiencies”, however, “everyone should keep in mind that there are areas where Linux has excelled other OSs”. The author also adds that “a primary target of this comparison is Windows.”
While most of the items on the list are fairly accurate and reasonable, a few of them seem debatable in my eyes. For instance, the note about Gtk+ and Qt being unstable is not something I’ve personally experienced – to me, it appears that some applications are simply unstable, without that having anything to do with the toolkits. I’m also not sure if bringing up Win32 as an example of a good API is such a wise idea.
The codec complaint is also an interesting one. The author states that there is a “questionable patents and legality status” on Linux (when it comes to some codecs, that is). The document goes on to say that “US Linux users cannot play many popular audio and video formats until they purchase appropriate codecs.” I live in The Netherlands, so the DMCA can bugger right off into an abyss – I will install whatever codecs I need on Linux, “clean” or otherwise. No need for me to pay for anything, and I doubt many American Linux users care all that much about the DMCA either.
The list is filled with other interesting items, and I’m sure many Linux users here will be able to counter other points as well. So, use this opportunity to discuss the current state of Linux on the desktop (eh…), and of course to maybe introduce some projects or initiatives that might address some of the concerns on the list.
I posted this response at Slashdot, but it got buried quickly:
I’ve been thinking that maybe it’s time to let Linux be what it is, and start fresh with the goal of making an open source desktop OS that doesn’t get in the way. The world is different now than it was when Linux was started in 1991. It might actually be a good idea to rewrite the OS every fifteen years or so.
I’ve got some ideas for a new OS.
It consists of a kernel, a UI, a browser and some basic applications. This part of the OS is open source. Then there’s an app store or something similar where the user can buy applications, games, professionally designed typefaces, proprietary codecs, etc.
It will be the first OS that is resolution independent. We wouldn’t even need anti-aliased fonts on the screen if monitor vendors started to increase the DPI. So that means that there will be vector graphics from the first pixel drawn after the boot loader. The default settings should favor what’s intuitive and non-intrusive for the average Joe.
I can imagine a UI with blues and grays, light gradients, a mostly flat look and some light shadows. Caching, timing and redrawing will get a lot of attention, to keep the feel of the UI rock solid. Hot spots should be made the most of.
I think history has shown that there’s nothing wrong with the start-like menu, task list, clock and a desktop in the background. OS X and Linux have got the tray right for the most part. It’s for wireless networks, volume changes and similar. The user can install a program by clicking a link on a website or by using the app store-like program. The app store (or whatever it might be called) should keep track of the updates. Distribution of packages could get done by BitTorrent or similar technology. This means the death of mirrors and package maintainers.
I think the most important part is that you shouldn’t have to think about the fact that you’re using an OS. Microsoft has experimented with integrating the browser into the desktop. Most people didn’t like it. KDE 4 placed the desktop icons in a box. Most people didn’t like it. Let’s draw from operating system experience since the beginning, and use what worked. Throw away what didn’t.
And then, let’s talk about how it should be organized. One centralized website with a unified look. It should be easy for people to suggest ideas and comment on them. The ones making the decisions will have to favor public opinion over their own. Feedback from users should be taken very seriously. A release schedule will be set up that best benefits the whole system. The Scrum process might be used as a model for development in general.
The challenge is to convince people that this is a good idea. I will donate my time for free and lead the project, if there’s any interest. I’ve waited for a usable Linux desktop for over a decade, and I’m done waiting. That’s not to say that I don’t respect the work that has been done in Linux-land. A new OS would benefit from a lot of the Linux kernel source that has been written, and from all that the world has learned about Linux. And Linux won’t die. This will just be another experiment, but with a different organization and goals.
Edited 2009-05-18 19:31 UTC
If you want to save yourself an insane amount of time, just make QNX 6.4 more desktop friendly. POSIX compliant, a super advanced and scalable kernel, a nice API, a nice GUI (Photon; though it’s not as “pretty” as Enlightenment/KDE4, I like it), etc… It’s an OS that was designed to have virtually limitless adaptations and incarnations made to it and from it, and is top notch. Start with QNX as a base and save yourself a few thousand man-hours of reprogramming the wheel. “Wheel 2.1b, now slightly rounder!”
Thanks for the advice!
I’m a bit worried about the licence, but I’ll look into it.
It might not be exactly what you’re looking for, but there’s an open source project called Whitix (http://www.whitix.org) that already offers some POSIX compliance, a GUI and more. It’s a very interesting project; very immature, but it looks like a lot of fun to hack on.
Someday, someday I shall have spare time…
Nay – thank you very much. Not another insanely closed source operating system with very limited hardware support. What’s the use? It’s just like BeOS was: a terribly nice design study, but pretty worthless for everyday work. And why pay an insanely high sum of money for an OS that looks like Linux did ten years ago?
So many questions – but hey, THAT OS is definitely not the answer. Sorry.
Are you joking or what? QNX is a well-established OS, widely used on resource-limited devices.
It is mostly open source, with the remaining closed source components going OSS continuously.
Actually I *do* use BeOS (or rather, its derivative Zeta) for day to day work. However, I do agree that it is not recommended for everybody, as it requires some tech-savviness (and being okay with a heavily outdated office suite, though thankfully there’s Google Docs).
The above proposal to really “start something new” could be applied to the already running Haiku project (formerly OpenBeOS), I believe the larger body of contributors could speed development up quite a bit. But I don’t think this is the best solution.
Linux has a stable and versatile kernel. I don’t think it would be wise to abandon it. Perhaps some of the architecture around it could be modified, but this would require a forum of Linux experts to agree on a roadmap to structural change. This would take many years. I do believe it would be wise to rethink the architecture, but let this be a process running in the background.
Trying to categorise the article (see URL in original article), I see four main categories:
– stability & speed
– GUI / usability
– missing features
– organizational defects
In fact, it seems that slowness in boot, UI, software loading, shutdown, etc. is the main issue for Mr. Tashkinov (the author). He further mentions a number of usability issues (easy installation, configuration via a GUI, audio settings, server setup). The missing features (mainly hardware support) in my opinion merely stem from a lack of support from hardware vendors and from other priorities among the developers. Organizational defects should be addressed in the forum I mentioned earlier.
As a cognitive ergonomist, I would like to say just a little about the GUI configuration issue. By now, usability designers have come to the conclusion that not everything can be done through a GUI. That would make most interfaces completely clogged up with buttons, switches and whatnot. For advanced users, command line input should still be available. Perhaps Mr. Tashkinov does mean that some applications should also be provided with a (simple) GUI, but let’s make sure we don’t overdo it.
A couple of critiques:
1) Linux isn’t a monolithic system, just a monolithic kernel. If there are parts that you feel are holding the Linux-based desktop back, feel free to refactor/replace those parts. You will have too difficult a job if you throw out the kernel. Learn the lesson of SkyOS: don’t underestimate the amount of work a full operating system takes. Minimize and prioritize your work.
2) If you really want to do it, do it. If you are going to be project lead, you have to lead the project. That means producing a lot of code on your own before you can convince others to join. There are plenty of write-ups like yours in SourceForge projects that never went beyond the project goals.
Thanks for the advice!
Judging by the statistics, there’s less than a 0.001% chance that I will succeed. There’s a theoretical possibility though, as every OS has had its start.
I said as a joke to my university teacher a couple of weeks ago that I’d write something similar to what Linus Torvalds wrote in that early newsgroup post, only as a 140 character tweet. That might bring some luck.
This isn’t a new idea. Android did it, and Android is just an overhyped special-purpose Linux distro. Others like Linspire and Xandros did exactly the same thing years before Android.
I’ve got a great idea. I’m going to write a technical solution for a market problem, which I will fix by writing another kernel, app setup, and toolkit. It will have its own nice theme. I’ll see how this floats on Slashdot.
From my original post: A new OS would benefit from a lot of the Linux kernel source that has been written.
No, not even remotely accurate. Writing a new operating system from scratch is a monumental task that no company in its right mind would take on these days.
There are many OSes out there; to start a new one assuming you will be able to reuse lots of Linux is just a childish dream, unless you make it Linux compatible… And if it is Linux compatible and Linux is free, why bother?
Starting an OS from scratch is very expensive and difficult. Check the history of BeOS, for example. And if you like what BeOS wanted to do (which is remarkably similar to what you describe), start helping Haiku.
And, a GUI is not an OS.
We’ve been pretty successful using plenty of code from Linux device drivers for Syllable. It can be done.
“app store”
You do know the current package repositories are the app store, don’t you?
I used the word app store to indicate that it would be possible to buy proprietary software.
As far as I know, that’s not possible right now in Linux-land, and as someone has already pointed out, it would be against the wishes of many in the Linux community to promote proprietary software.
I’m sorry if I’ve been unclear.
So you never heard of http://cnr.com/home then? Yes, it’s possible and already done. And that is not the only website available, either.
Good luck, but I’m not holding my breath. The average Linux distro is the product of millions of developer-hours worth of effort – think you can improve on that?
Distros just need to get better.
Don’t worry, it will happen; the only difference is that by that time both the men and the women who use GNU/Linux will be greybeards. Yes, the women will have beards too, and will be too old to care about their looks.
The main problem with your suggestion is that this could be built on top of Linux and the app store was kind of done by Lindows/Linspire.
The biggest problem I have with Linux (as a desktop) is that it is designed by committee, and too many people have entirely different ideas for the operating system, so there is never a clear direction to move in.
You have tons of people who have their own idea of what the system should be to succeed on the desktop and most of them are wrong.
His list was decent and I have a few of my own.
People have this idea that the Linux user never has to go deeper than the UI, and this is just wrong.
When you do go to the shell to do something, and use --help or ‘man xxx’ to see the syntax, you get a page of flags, when almost every single person checking --help needs exactly the same couple of flags.
It would take a new user (i.e. someone who forgot the command) several minutes to figure out “tar -xzvf file.tar.gz” with man pages or --help.
I have read thousands of pages of Linux-related documentation and I am Linux certified, but I only use it every several months, and I have to Google simple command line tasks like extracting a compressed file because the help files read like an instruction manual.
I’m a grown up now. I don’t have as much time to play with toys as I used to.
Oh for goodness sake. If you don’t have the time or inclination to use the command line, then don’t.
For your example of “tar -xzvf file.tar.gz”, just use one of these:
http://en.wikipedia.org/wiki/Ark_(computing)
http://en.wikipedia.org/wiki/PeaZip
http://en.wikipedia.org/wiki/File_Roller
and be done with it, and stop trying to come up with made-up complaints about Linux.
I think it’d be smarter not to throw some random tools into the room and say “there you go”, but to define a nice alias in the profile; let’s say… untar for “tar xvzf”. At least that works on the command line as well…
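A minimal sketch of that suggestion (the name `untar` is made up; pick whatever you like). A small shell function in `~/.bashrc` does the same job and, unlike an alias, is also usable from scripts:

```shell
# Drop this in ~/.bashrc (the function name "untar" is just an example).
# Flags: -x extract, -z decompress gzip, -v list files, -f read the named archive.
untar() {
    tar -xzvf "$1"
}
```

A function is arguably the better form of the same idea, since aliases are not expanded in non-interactive shells by default, while a function works anywhere the profile has been sourced.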
All this GUI and whatnot… just a passing trend, I say!
I think that Haiku covers most of the aspects you’ve mentioned before. No need for a new OS.
Linux is shitty because it lacks a central development team and leadership. Linux is chaos.
We don’t need thousands of GUI toolkits, thousands of desktops, thousands of window managers, thousands of package managers. We need a single app for each job, but an app that does its job well.
Edited 2009-05-19 08:42 UTC
All of what you said can be achieved using Linux. Linux is just the kernel; there is no law that states that you must use X11, that you must use GTK, that you must use Qt, or any number of other things. There is nothing stopping you from taking GNU/Linux and, from there, building a cutting edge graphical user interface engine (replace X11 with something better; look at how Windows and Mac OS X did what they did) based on OpenGL, whose focus is first and foremost a top notch desktop experience.
The problem is that almost every company that has come out with a ‘Linux’ distribution has basically just hacked up a version from some distributor and rebranded it. What Linux needs is a new approach to delivering the user-visible stack.
Edited 2009-05-19 12:32 UTC
Exactly. A new approach that makes Linux stand on its own, and not a me-too product.
What you are describing is something I’ve been wanting to do for many years now, though I probably never will due to work and family taking up all my free time: an open-source alternative to the Mac OS. My own view, though, is to do as Apple did and go with a BSD core instead of a Linux core. Besides the added stability, you have a more compatible license model for adding proprietary software in userspace. This is one of the things implied in the parent article that I agree with: Linux is burdened by the GPL in many ways. I’m neither a GPL fanboy nor a hater; I just think it’s not a good fit for a mainstream desktop OS.
So, as a good free alternative to Windows and OS X let’s have an OS with a BSD core, a non-X based, OpenGL accelerated GUI and a unified, easy to use toolkit for third party application development. I realize this will not be possible in whole until both Nvidia and AMD/ATI do as Intel has and open up their video processor specs to the OSS world, not to mention the wireless networking manufacturers.
I’m willing to wait though, and until then I’ll continue to use OS X, Linux, BSD and yes, even Windows 7 to meet all my needs.
Thanks for the feedback!
What I find cool about OS X is that Apple aren’t afraid of making important decisions – and they make wise decisions more often than not. There’s a ton of stuff that I disagree with, but the overall impression is a positive one.
Apple is stubborn when it comes to certain stuff. You can’t cut a file in a folder and paste it in another if you want to move it, the dock is a waste of space on widescreen monitors, and the UI is horrible if you want to use your keyboard. And it’s closed software on closed hardware.
I love how the windows are managed in, well, Windows. The task bar with its grouping (I haven’t made up my mind about Windows 7’s solution yet), the three buttons in the top right. Windows has an overall “everyday life” vibe that I like.
The idea that every application in the system is taken care of by a package manager is what I like most about Linux. I think it could be implemented in a much smoother way than what I’ve seen, but it’s awesome that you don’t need 15 apps screaming for an update in the tray.
“Both GTK and Qt are very unstable and often break backwards compatibility.
Many GUI operations are not accelerated.”
Stop trolling! So Windows doesn’t break backward compatibility?? It is number one in that department!
Troll troll troll …
The problem with Windows is they try to maintain backward compatibility for far too long a period.
As a developer I agree, but as a user I find backwards compatibility the single most important thing in Windows.
As a developer I love this. How is Linux better by not doing this? The very reason for Windows’ success is that it keeps supporting software… you know… what an OS is supposed to do.
This article is very accurate from a developer’s point of view (and I am one).
I am not going to waste my time porting my software to Linux when not only are the users hostile to me feeding my children (i.e. charging for it), but the binary compatibility and library landscape is also so inconsistent.
There is nothing wrong with selling software on Linux. Is this really a hostile point with users?
If your software does something that I couldn’t get from a free app, why wouldn’t I buy it? (assuming I needed it).
The feeling I get from others in the Linux community is that, because so much of the base (the kernel, libraries, some GUI kits) is free, any attempt by me to charge for an application (which represents fewer man-hours than the above list) is somehow morally wrong.
I do wonder, though, exactly how much of the community is hostile to paying for software. Yes, there are a lot of vocal zealots who don’t think they should ever have to pay for anything, but I wonder how many others are perfectly happy to do so and simply keep silent. You can count me in among that latter group, by the way; I have absolutely no problem paying for an app if it has the functionality I need at a reasonable cost and is better for my purposes than any alternatives.
It seems there are a lot of vocal zealots that give the Linux community as a whole a bad reputation, not unlike some of the more fanatical Mac or Windows fanboys/fangirls. As with all things, the people who are making a fuss are the ones that will be noticed, but their numbers are usually far less than you would think. They’re very loud, and the rest of us don’t make them see sense for the simple reason that they don’t want to.
I think a lot of Windows users don’t pay for their software either, so the whole point is actually pretty moot. Especially with companies that have software firms create custom software for them; that’s still 80% of the software market.
I don’t see anyone complaining because of Maya being a commercial application. Or Smoke. Or Houdini. Or Doom3, Quake4, Unreal Tournament, Darwinia, ET:QW and Prey.
What I do see on the other hand is people that wouldn’t buy applications like NeroLinux, just because they offer nothing over other apps readily available from the repositories.
The “Linux users don’t pay for software” is a lie. Someone who wants to run Maya will be as likely or unlikely to buy or pirate it no matter what OS he runs.
Because you surely didn’t mean to say that Windows users buy all their apps, did you? Like, say, Photoshop?
The issue with Linux users is that we have a lot of software available for free in the repositories. Some of it is great, some is crap and the rest is somewhere in the middle, but if you want us to buy your software you’ll have to offer something better than what we already have.
Because they are Linux users and they do not know those things exist… But you can get Quake 3 on Linux. Secret Maryo Chronicles and Atari games.
No, it is not a lie, most Linux users do not understand the difference between open source and free.
Adobe and FileMaker, for example, once made Linux apps, and gave up because people assumed they were free.
Don’t assume everyone is a thief. Not only is it impolite, it is extremely inaccurate. Many people make a living programming computers.
See, for example the AppStore for the iPhone.
Most Linux software is old software. Not old by date, but old in technology. I have not found a single piece of software on Linux that is not available in better form on other platforms. For example, GIMP is a joke if you compare it with Adobe Photoshop CS. GIMP is like Photoshop 4.
Even the same software, when you see it on Mac OS X or Windows, looks better; for example Adium (vs. Pidgin) or OpenOffice.
You are indeed a funny troll. You know that this list is composed of programs/games that all run happily under Linux, right?
Well, it is true. Linux users only care about their precious servers. Those are the only things they know exist; common apps that desktop users use, like games and so on, they do not even bother with.
Why won’t you just save face and backpedal in silence?
Because they are Linux users and they do not know those things exist… But you can get Quake 3 on Linux. Secret Maryo Chronicles and Atari games.
Autodesk Maya:
http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=7639522
Notice how it mentions Linux in the supported platforms? And Maya sure is a professional app if anything.
http://www.sidefx.com/index.php?option=com_content&task=view&id=415…
Same for Houdini.
http://www.idsoftware.com/games/quake/quake4/index.php?game_section…
See those Linux-native patches?
And so on. Do you wish to make yourself look even worse now?
Adobe and FileMaker, for example, once made Linux apps, and gave up because people assumed they were free.
Adobe does not and has not EVER released their software for Linux, except Acrobat Reader. Stop making up lies to justify yourself.
Even the same software, when you see it on Mac OS X or Windows, looks better; for example Adium (vs. Pidgin) or OpenOffice.
This one is just a plain strawman argument. I personally like the looks of my Linux desktop more than my Mac one, including OpenOffice and Pidgin. Just because you like some looks better than others doesn’t mean they actually are better for everyone.
Did you notice that Windows and Mac are also supported?
Did you see all the apps and games mentioned have Mac and Windows versions?
Tell me, what is the benefit of using Linux on the desktop? Something that is better than on the Mac or Windows?
Games are always better on Windows. Media apps are always better on the Mac.
Don’t tell me it is getting there, because I have been hearing that for the past 12 years.
Adobe had a bunch of products that ran on UNIX (SGI and Sun, 1994-96); I saw them working (Illustrator and Photoshop), and the big plan was to port them to Linux later. Linux was just starting, but you know, Linux would be the future… In the end, they did not do it because of how Linux users were perceiving the whole free movement. Some people even asked for an old version, but Adobe said no. There was no guarantee people would buy the stuff.
And FileMaker did the same thing, only they went a step further, releasing FileMaker Server for Linux and then killing it, presumably for the same reason.
Look at the technology behind it. Double buffering like Mac OS X or Vista? No. Alpha channels? No. Good fonts like the Mac? Nope. You can even see the windows tearing when you drag them, because there is no double buffering.
It looks nice, yes it does; it looks like a computer of the 90s. Very BeOS.
But since they run on Linux, why would you want to use a Mac or Windows?
Have you ever heard of DBE, the X Double Buffer Extension?
Tell me, what is the benefit of using Linux on the desktop? Something that is better than on the Mac or Windows?
Games are always better on Windows. Media apps are always better on the Mac.
Don’t tell me it is getting there, because I have been hearing that for the past 12 years.
You were the one claiming that there’s no proprietary software for Linux and that no one would be willing to buy it anyway. Well, I just proved you wrong. And I find no difference between games that are released natively for both Windows and Linux. I have tried several, and they run just as well on both OSes.
Look at the technology behind it. Double buffering like Mac OS X or Vista? No. Alpha channels? No. Good fonts like the Mac? Nope. You can even see the windows tearing when you drag them, because there is no double buffering.
There are alpha channels, and I personally like the font rendering under Linux better than OS X’s; OS X fonts look too fuzzy. Yes, windows do tear when you move them around unless you are using a compositing manager, but I already complained about that myself.
OK… Explain GNOME’s antialiasing configuration to me. Why RGB and not BGR?
And imagine I am a normal user, not an OSNews reader.
Do you realize you can easily switch RGB to BGR in GNOME through Theme Preferences/Fonts?
Yes, you can… Now try to explain to a normal user what that is.
Now explain to a normal user the difference between the three types of antialiasing and the three types of hinting, and at the end explain to them what hinting is.
By the time you finish, the person is looking at a Windows machine or a Mac. Should it be done differently? Of course. People are people, not computer enthusiasts; they do not care which algorithm is used or which mode is better for their particular brand of monitor. They just want the thing to work.
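For what it’s worth, the knob behind that GNOME dialog is ultimately a fontconfig setting; a sketch of a per-user `~/.fonts.conf` that pins the subpixel order (valid values for `rgba` are `rgb`, `bgr`, `vrgb`, `vbgr` and `none`):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Force BGR subpixel order for antialiased font rendering -->
  <match target="font">
    <edit name="rgba" mode="assign">
      <const>bgr</const>
    </edit>
  </match>
</fontconfig>
```

Which rather proves the point above: a normal user should never have to know this file exists, which is exactly why the desktop ships a GUI for it.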
Edited 2009-05-19 18:16 UTC
Does a normal user care about antialiasing to begin with?
Obviously a problem with monitor setup; nothing to do with Mac, nor Windows, nor any Linux distribution.
And there you go talking about the impoliteness of assuming things about people.
What apps? What did they offer that made them worth purchasing?
Read again what you quoted: I’m saying that whether you pirate or not depends on the person and not the OS you run.
There was a statement about Linux users not paying for software, and I just gave the example of Photoshop: a Windows program which happens to be probably the most pirated application.
Again I give an example: NeroLinux.
K3b offers the same functionality and then some, so it’s obvious that no one will pay for Nero.
Regarding Photoshop, the joke is buying expensive professional software to resize and remove the red eyes from your weekend photos… but then again, as you know, most people (home users, that is) don’t pay for it; they pirate it.
Photoshop is not an app for home users, gimp is.
How much you like an application’s looks on whatever OS is irrelevant; I’m talking about the reason why no one would pay for an application that offers nothing over the apps available from the repositories.
Which apps have you purchased lately?
NeroLinux… a clone of Nero… Do you really think that copying something is innovative? Tell me a desktop app that is better on Linux than on any other OS.
Nero appeared on Windows like 10 years ago.
So now GIMP is for home users… Photoshop Elements is also for home users, but it is usable; GIMP is terrible. GIMP is not for home users.
WTF? NeroLinux is created by Nero themselves: a Linux port of their own Nero software! Somehow you turned this into a slam against Linux, and you’re surprised that people call you a troll?
Tell me what NeroLinux has that Nero doesn’t. It’s just a clone of Nero.
Why does that even matter? You seem to want to use NeroLinux’s lack of features compared to the Windows version as some kind of proof that Linux sucks. Strawman.
I’ll give you something that K3b has over NeroLinux *and* Nero on Windows: it’s not over 10 MB in size, and it does everything I need.
Lol, you are contradicting yourself. Professionals also use GIMP (the Blender 3D team, for instance), especially for its lower RAM requirements and texture work. As for the interface, clearly you have never used Photoshop in an OS X environment, hence the similarity to GIMP.
Valgrind. Find me a non-Linux alternative that’s at least as good.
Yes, that’s bullshit.
I’m happy to pay for software on Linux, I got all the id software games, and I also got a PS3 now because I can’t stand running Windows for games.
I also prefer to use FOSS when I get the chance, but for things like games, etc, I’m happy to pay for them.
There are those who refuse to pay for anything, but they exist on ALL OSes and platforms. As others have pointed out, Windows is the platform most used by pirates.
Then there are users who don’t like proprietary software. Well, there’s nothing you can do about it. But I actually think they are in the minority.
Then there are those who actively support proprietary application developers. Linux lacks games, for example, so there’s quite a bunch of people who buy proprietary Linux-supported games to show that there is interest in them.
I belong to the group that supports proprietary software as long as it offers something the free alternatives don’t: better/more features, easier use, more stability, etc. Take your pick. But no, of course you won’t get your software sold if it doesn’t offer enough value for its price. I’ve noticed that many, if not most, Linux users are more critical of the applications and features they use and need, so you won’t get by as easily as you can with Windows users.
That is true, and thanks to that attitude, Linux is always going to be left behind. When a technology is old and everyone knows how to do it, it will make its way to Linux. But that could be 10 years later.
Mac OS X had double-buffered windows in 2000. Where are Linux’s double-buffered windows, 9 years later?
Double buffering and alpha blending are provided by any method of “indirect rendering”, which has been in Linux since at least 2005, judging by my screenshot archive; all drawing operations in X have been hardware accelerated using OpenGL since the introduction of AIGLX into Xorg a couple of years back.
It’s true that X11 lacked the 2D hardware acceleration that Windows GDI used for many years, but even the biggest 2D acceleration cheerleader, Microsoft, has now moved beyond GDI’s 2D acceleration paradigm to a more generic strategy using DirectX.
Windows Vista (and 7), Mac OS X and Linux are pretty much on the same page when it comes to desktop graphics these days.
You make it hard to swallow your arguments about how far behind Linux is when you obviously haven’t looked in years.
Stop confusing us with actual facts. Real men argue using unsubstantiated opinions thinly veiled as matters of fact.
Oh sorry,
What I meant to say was…
Windows is crap because:
* it doesn’t even have a real desktop, just “Program Manager”
* the 32-bit ABI is an add-on
* it still runs on top of DOS
* windows disappear when you minimize them and you have to un-maximize program manager to see them
* there’s no 3D support except 3DFX
* you have to use trumpet winsock to connect to the internet
* the latest browser you can get is Netscape 4.
* doesn’t support recently released hardware
I look at Linux every day. If you believe that a Linux desktop with AIGLX is on par with Windows or Mac, you must be blind.
Besides, it is not a problem of hardware acceleration. Mac OS X works with double buffering by default even on hardware that does not support 3D acceleration. It has supported double buffering since the public beta in 2000.
Double buffering has nothing to do with hardware acceleration. That’s another thing.
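To illustrate the point being made here: double buffering just means drawing a complete frame into an off-screen buffer and then copying it to the screen in one step, and nothing about that requires a GPU. A toy sketch in Python (the “screen” is just a grid of characters; all names are illustrative):

```python
def render_frame(width, height, caption):
    """Draw a complete frame into an off-screen buffer."""
    buf = [[" "] * width for _ in range(height)]
    for i, ch in enumerate(caption[:width]):  # "draw" some text mid-frame
        buf[height // 2][i] = ch
    return buf

def flip(screen, buf):
    """Copy the finished buffer to the screen in one step, so the
    viewer never sees a half-drawn frame. Pure software, no GPU."""
    screen[:] = [row[:] for row in buf]

screen = [[" "] * 20 for _ in range(3)]
flip(screen, render_frame(20, 3, "hello"))
print("".join(screen[1]).rstrip())  # → hello
```

A real toolkit does the same thing with pixel buffers; hardware acceleration only makes the drawing and the copy faster.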
I’m not sure what kind of special third-eye allows you to see differences in rendering infrastructure, but I suspect it might be the one in your arse.
I’m prepared to accept on face value your notion that Linux looks like shit. But that is not the fault of the rendering system.
I find it bizarre that you should be so focused on double-buffering on the desktop, as it’s something that only came to Windows with Aero, and Windows is the only operating system which requires hardware acceleration for double-buffering.
If you’re trying to claim Windows XP is unsuitable for the desktop, I think about 70% of the world would disagree.
Around 2005, “indirect rendering” provided double buffering and the ability to perform transforms in the offscreen buffer, allowing for “compositing managers” and alpha blending – the rough equivalent of Mac OS X’s Quartz (10.0) rendering, and slightly ahead of Windows GDI (2000/XP) technologically.
More recently, XGL and AIGLX have come along, offering accelerated transforms on windows between the offscreen buffer and the screen, allowing for distractions like wobbly windows and more practical applications like hardware-accelerated live zooming – the rough equivalent of Windows Aero (Vista/7) and Mac OS X’s Quartz Extreme (10.2).
Are you familiar with the “GNU” part in “GNU/Linux”? That’s why so many users (me included) are, let’s say, somewhat reluctant to use proprietary software. It’s not just that there happens to be a lot of “free” (of charge) material; the whole point of the GNU project is building a free (as in freedom) operating system.
That said, I don’t think programmers are particularly to blame. Nearly all the economic sectors where intellectual production is involved are devastated by the obnoxious “intellectual property” concept and legislation. The biggest problem in IT is software patents, which could eventually bring the industry to a standstill.
Exactly. I purchased MoneyDance because I liked it better than any of the alternatives. The people who would refuse to purchase software for Linux are probably the same ones who do not purchase software for Windows either.
I don’t think that’s entirely fair.
Windows is plagued with pirated software and freeloaders.
Whereas OSS charges for support rather than for the software – so at least you’re in control of your product.
Now I’m not saying OSS is better than proprietary. Just that many a company has successfully made a business out of free software – so it can be done.
If companies manage to earn money by offering support instead of charging for the software, it is because the software is so complex that support is required. Most applications do not require more support (if any) than a help file or a quick question on a forum can provide. In other words, for the majority of software it is not a viable business model.
Ever heard of SAP?
Yes, SAP… And SAP deliberately makes things impossible to figure out. Terrible user interface and experience, especially designed to charge for support. You cannot kill the business.
And they charge you for almost everything. It is very hard to argue they only charge for support.
At the same time, though, it is a huge pain to build custom packages for the different versions of different distros, and that is before you account for people who have upgraded/downgraded some of the libraries you depend on to non-default versions for their OS.
It is a crazy amount of work, so it is often better just to open-source your application and let the distro, well, distribute it in binary form.
Like it or not, this is the Linux distribution model and it does not work nearly as well for commercial software.
As a developer by trade myself, writing commercial and open source software for Linux and Windows platforms…also as a Linux user, that article is far from accurate.
Many of the things mentioned there are a simple point-and-click action to solve. I’m using Ubuntu 8.04.2 and I’ve tested Ubuntu 9.04, which makes many things even easier.
I hate to agree with the whiner, but if it’s so easy to fix those things, why are they left broken or in a stupid configuration in the first place?
It does seem like in many cases Linux distributors are like bakers who aren’t really able to guarantee the right taste because they don’t control the raw ingredients in the recipe.
I guess it’s an unfortunate side effect of a decentralized development model, but it seems that Linux distributors are vulnerable to being taken for a ride by misguided developers, whereas someone like Microsoft or Apple has the ability to force their developers to conform to a lofty vision or strict usability guidelines. (Not that they always do.)
Edited 2009-05-20 05:08 UTC
I don’t think there is a hostile atmosphere outside the hardcore “it must be all free” zealots. I think the biggest problem is exactly what you put your finger on – the lack of compatibility, not only between versions but also between compilers (C++, the magical moving target) and in binary compatibility (library A compiled by foobah 1.0 versus the same library compiled by foobah 2.0 – it’s a toss-up whether it’ll work reliably on the latter). The problems I faced were distribution-compatibility related more than anything else – I guess the only saviour would maybe be Ubuntu becoming pretty much the de facto standard for Linux desktops, which vendors could orientate their middleware around.
I stopped reading here
2.1 No good stable standardized API for developing GUI applications (like Win32 API). Both GTK and Qt are very unstable and often break backwards compatibility.
Seriously, GTK 2.x has been API compatible since the damn 2.0 days.
That’s why they’re trying to remove all the deprecated stuff in 3.0 (and remember, only deprecated stuff, it’s not like it’s going to be a full change in the API).
That’s why they’re trying to remove all the deprecated stuff in 3.0 (and remember, only deprecated stuff, it’s not like it’s going to be a full change in the API).
See, that’s the backward compatibility being thrown out. Just because stuff is deprecated does not mean that it is not used. There are tons of applications that depend on it, and it will be a royal PITA to convert them to use non-deprecated stuff instead (only to have that deprecated in a year or two).
To be backward compatible means keeping the old stuff (perhaps re-implementing it using the new) and hoping that people will use the new and better APIs for new applications.
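The “keep the old entry point, re-implement it on top of the new one” approach described above fits in a few lines; the function names here are made up for illustration, not taken from any real toolkit:

```python
import warnings

def set_label2(widget, text, style=0):
    """The new, preferred API: version 2 added a style argument."""
    widget["label"], widget["style"] = text, style

def set_label(widget, text):
    """The old entry point, kept alive as a thin shim over the new one
    so existing applications keep working, while a deprecation warning
    nudges developers toward the replacement."""
    warnings.warn("set_label is deprecated; use set_label2",
                  DeprecationWarning, stacklevel=2)
    set_label2(widget, text, style=0)

widget = {}
set_label(widget, "File")  # an old, not-yet-migrated caller still works
print(widget)              # → {'label': 'File', 'style': 0}
```

The cost of this approach is exactly what the reply below complains about: the library carries the shim forever, or at least until the next major break.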
So, instead of making the app developers apply the API changes to their apps, your way the API developers – who always need to keep a clean design – have to jump through hoops.
There’s no way to avoid this: either the API developers endure a lot of hard-to-maintain cruft, or the app developers endure the pain of API migration.
As a developer let me try and explain. Linux uses a VERY different design philosophy to Windows. Linux by design uses lots of small and very specific tools that can do a single task and do it well. Combine a few tools together and you have a very flexible system.
As for the backward compatibility… Linux apps are mostly open source, so there is no good reason to keep backward compatibility and bog down the APIs with old and unused stuff. Backward compatibility is one of the main causes of Windows’ bugginess and vulnerability. Under Windows, Microsoft has to apply one hack on top of another in each version to keep backward compatibility and at the same time try to improve Windows. This turns the code into an absolute mess in no time. Hence their struggle to get stable code.
The Linux design, however, is as follows… You have the code, so update your software to use the newer, more secure and developer-friendly APIs. Linux will NOT compromise code design and clarity for backward compatibility with old (and mostly unused) software. Hence the code stays efficient and is easy to maintain and bugfix.
In both cases it was a design choice, each with its pros and cons.
Why do you say “a year or two”? Gtk+ has been API and ABI stable on the 2.x series for about seven years now, it’ll be eight by the time 3.x comes out next year…
Plus the fact that I believe GTK2 and 3 will be parallel installable (ie – you can have both APIs active on the system)
Plus the fact that I believe GTK2 and 3 will be parallel installable (ie – you can have both APIs active on the system)
Indeed. Even GTK 1 is still available in all distros and will continue to be once GTK 3.0 is out.
GTK 3.0 will also not be totally incompatible with 2.x. Apps for GTK 2.x are easy to migrate, because most stuff that will be deprecated in 3.0 already had a better alternative in later 2.x versions. 3.0 will keep the better alternatives.
Qt is similar. Qt 4.x offers a compatibility layer for Qt 3.x (and 3.x is only an enhanced version of 2.x, so compatibility reaches even further into the past).
Nokia also has no plans to ditch Qt 4.x in the next 5 years.
So what? If you want to use the old stuff you just use the old library, what’s so hard about it?
Much wailing and gnashing of teeth over ignoring what was said, and ignorance of how compatibility is provided in *NIX land. There is nothing stopping you from installing GTK 3.x and GTK 2.x side by side – heck, for almost 18 months after GNOME 2.0 was released, many applications had not been migrated to GTK 2.x, and thus you had to have side-by-side installations of GTK 1.x and 2.x.
As a developer there is nothing stopping you from gradually migrating off GTK 2.x to GTK 3.x and thus requiring both to be installed, which can be handled by the packaging system. There is backwards compatibility in *NIX, but you don’t need a single big fat ugly library with shims, hacks, cracks and other ugly workarounds. Want backwards compatibility? Install the old library and you’ll find that the old and new can live side by side in peace and harmony.
Edited 2009-05-19 12:49 UTC
This seems, to me, to be a very rushed list with little research behind it and even some of the commands referenced were wrong. A couple of points:
1. About hardware setup: Agreed on some levels. I’m surprised, however, that scanners weren’t mentioned. Just try and find a current (not used or discontinued) Linux-compatible scanner that has all its capabilities supported and is not a huge enterprise SCSI device. Some areas of hardware certainly need to be worked on.
2. Codecs/Blu-ray… Well, I don’t know about the rest of the Linux users in the U.S., but I couldn’t care a rat’s buttock about software patents. I’ll install whatever codec I damn well please; I have every right to play any of my legally purchased media. Ditto for DVDs; I have libdvdcss installed and use it, and I say so proudly. Further, with OSes like Ubuntu, this is a very easy thing to do – just follow the prompts. How many average Joes read the legalese anyway? As for Blu-ray, his point is not true. It is certainly possible to play Blu-ray movies in Linux; however, it’s a royal pain in the arse to get it working properly. It certainly needs to be more streamlined, so I’ll grant him a half-truth on that one… but a half-truth is still false.
3. Win32 is a good API? I beg to differ… very strongly.
There were other little things. To me, this read more like an outline for a research project than an informative list of what needs to be improved. Clearly, the author hasn’t quite reached the research phase yet: there are certainly a lot of valid points in there, but very little detail and, of course, some things that are definitely false.
3. Win32 is a good API? I beg to differ… very strongly.
I’m quite sure that he meant that Win32 is a good example of a stable API. Whether it is a good API or not is neither here nor there in this context.
Yes, the Win32 API is a stable API. But that’s about the only good thing I can say about it. As a developer I absolutely loathe the Win32 API.
I also don’t buy Blu-ray – why would I want to? That DRM-packed stuff, I just don’t need or want it.
You’re speaking for yourself. Unfortunately, when a consumer goes into the store to purchase a desktop computer with a Blu-ray drive because they want the option to also play movies, the argument of “well, DRM and Blu-ray are evil” is not going to fly. You have to accept there is a difference between what consumers want and what tech geeks want.
“Unfortunately, when a consumer goes into the store to purchase a desktop computer with a Blu-ray drive”
Only recently, it seems – I haven’t seen any ‘in the wild’ yet. Most machines sold until recently were laptops and netbooks that didn’t have them.
I’m not sure about the users you hang around with, but I don’t see many end users watching Blu-ray movies on their laptops; that’s not to say that Blu-ray is a waste of time or money, but I don’t think it is the so-called ‘deal breaker’ some people claim. For me, I am waiting for Blu-ray to come out with a low-cost writer, because right now the drives are incredibly expensive, along with the media; until that arrives I’d sooner not have Blu-ray bundled, as I’d rather have a reader/writer than just a reader.
I take it, then, that you don’t buy DVDs either? There’s DRM on them too, you know.
I don’t buy Blu-ray either; the price doesn’t justify itself to me and besides, I don’t need HD video anyway. The audio is no better on Blu-ray than on a standard DVD in most cases.
I was just pointing out that his point, about being absolutely unable to play Blu-ray discs on Linux, was incorrect.
And when we come right down to it, whether you buy Blu-ray or not is irrelevant. If Joe User wants to buy Blu-ray, he should be able to do so. Free choice, and all that.
“I take it, then, that you don’t buy DVDs either?”
This is true, I don’t. Well, I did buy one drive for my computer, because there just wasn’t a plain CD burner available when I needed one. So I bought a DVD drive which can burn CDs.
I think the drive has actually had a DVD in it once.
I would actually want to buy a PC with a processor without TPM and CPUID as well, but it’s not so easy to find.
I tried to buy an LCD TV which didn’t have an HDMI connection and ticked all the other boxes, but it didn’t exist.
Edited 2009-05-18 22:27 UTC
With respect to your second point, you’ve completely missed what the author of the article was getting at. Yes, it is possible to get Blu-ray movies to play under a Linux system, but the context of the article was Linux on the _DESKTOP_, which in this case means _AVERAGE_ computer users. The process of getting this to work is by your own admission, “a royal pain in the arse”, and as far as your average computer user is concerned, this is effectively equivalent to “impossible”.
It’s less of a problem for DVD, but still a problem. The steps that need to be taken to get a DVD to play on a Linux system aren’t always simple and don’t always go smoothly.
To further expand the original point: irrespective of the technical aspects of getting this stuff to work, there is also the legal perspective. Personally, I’m a staunch opponent of current copyright laws and very vocal in my disgust of the major record and movie labels. But, regardless of your views on the matter, Linux users should NOT have to break the law in order to watch a DVD/Blu-ray movie. You can’t realistically expect an operating system to gain mainstream acceptance while at the same time only being able to perform relatively basic functions by breaking the law. I’m aware of fair-use legislation and so forth, but in many countries you will end up breaking laws.
You’re approaching the point from the viewpoint of both an experienced Linux user and someone with an understanding of the copyright situation with respect to digital media, while the article is trying to make a point from the view of a user at the opposite end of the spectrum.
I’m not in the mood to respond to all those complaints, but let’s keep one thing in mind. It’s completely useless to respond to any criticism of one’s favourite OS by saying it’s all crap, trolling, etc.
Embracing critics and listening to them, finding out where they’re right, and then doing something about those things if possible/desirable will only make things better.
Then again, it might be better to just start contributing. 🙂
I did find the argumentation rather poor. For example, how do you install keylogger malware on, say, a Fedora box? Just show us, because perhaps we don’t know.
You’re right about handling criticism, but the complaints are not about the mere existence of criticism. It has been shown that some parts of the criticism are factually wrong. One can’t help but fear that this misinformation will spread among people who don’t know any better.
Of course this doesn’t mean that the criticisms that are correct should be ignored. However, it does seem that people complain about Linux more often than actually working on it, which doesn’t help.
I must say that I mostly skimmed through the list, but it does seem really accurate overall. Yes, LDAP is a PITA to set up; yes, font configuration is too; yes, at least GTK+ does a whole lot of stuff in software, and thus the desktop is a helluva lot slower than it should be, etc. The last one annoys me very, very much: without a compositing manager your windows move around jerkily, the GTK+ devs don’t think to accelerate everything they could, and so on, so the general feeling is that the Linux GUI is slower than, f.ex., the Windows GUI.
Oh, and I didn’t see any mention of power management in the list. Everyone probably knows it, but power management in Linux is still lacking quite a bit. I’ve seen dozens of people saying that no matter what they do, their laptops run out of battery faster under Linux than under Windows. When I still had a laptop the same applied to me, too, even though I tried every single trick I could find.
And a stable kernel API for developing drivers is one thing that is lacking, and that’s one reason why certain parties won’t develop drivers for Linux.
Don’t get me wrong, I love Linux; I love the idea of a completely open system, with no archaic restrictions everywhere, and a license allowing me to change any part of it if I so wish. But I still can’t deny its shortcomings, and I too have been wondering if someone should just take up the task and code a new OS with a focus on modern desktop needs.
Yeah, I think rewriting the OS based on specific use cases would lead to a much better OS than twisting and bending Linux into doing this stuff decently enough.
An OS should IMHO set up a comfortable work area for the user, where he/she can be productive with tasks like Internet, music, video, photos, etc. I don’t think those tasks were goals for Linux when it was first written.
Hello. Hi. Hi…guys? Guys? Hello? Is this thing on? *tap* *tap* *tap* Hi? Anyone?
Oh there you are.
We’ve been working on this with AtheOS and Syllable for over a decade now. Very few people appear to care, or at least care enough to put in the effort to actually help do such a thing.
We’ve been working on this with AtheOS and Syllable for over a decade now. Very few people appear to care, or at least care enough to put in the effort to actually help do such a thing.
Does either of those address the issues mentioned? Like, f.ex., the GUI using non-accelerated graphics, LDAP being easy to set up, or a stable kernel API for driver development? I am not familiar with either of them, though I did check a few screenshots of Syllable, but at least those screenshots were rather… well, non-enticing, to put it nicely.
As for helping: I am unfortunately still a complete newbie when it comes to coding anything. I can do some basic GUI app with GTK+, or some simple console apps, that’s all. Sure, I can help with translation into Finnish, but that’s most likely not too important at the moment, seeing how raw Syllable still is.
My point is that someone saying “Well, maybe we should just start writing another OS” belies the amount of work actually required to even come close to a usable, working OS that addresses those issues.
As for the specific points you raise:
Yes, the kernel ABI is stable; yes, we do have plans to make configuring single sign-on/centralised user management easy (they involve Syllable Server too). Accelerated graphics? Yes, because we keep things simple, and much of the “acceleration” people think they need, they don’t actually need. For example, the article mentions hardware-accelerated font anti-aliasing: who cares? Software AA is fast, simple and Just Works.
That’s simply because “making it look pretty” comes way, way, way down the list, behind “making it work on Bob’s Core2Duo with funky ACPI” and “supporting the latest video card from nVidia”, amongst lots of other things.
Actually, we’d love to have some help with translation. Swedish is pretty well supported; it would be nice to have Finnish too.
Accelerated graphics? Yes, because we keep things simple and much of the “acceleration” people think they need, they don’t. For example the article mentions hardware accelerated font anti-aliasing: who cares? Software AA is fast, simple and Just Works.
Well, we could argue that just because something is fast on your machine, it might not be on everyone else’s. Especially in a case where you have lots of text on a high-def screen (f.ex. coding or writing an essay), you will be doing an awful lot of work in software. And high-def screens are becoming more and more common nowadays. I still say that if it’s possible to do in hardware, then do it and don’t come up with excuses.
Actually, we’d love to have some help with translation. Swedish is pretty well supported; it would be nice to have Finnish too.
Will think about it. At least translating things is easy, even if a bit boring.
That’s the thing: it isn’t always possible to do it in hardware. You have to start mucking about putting glyph tables in off-screen video memory and all the overhead that entails. It’s a lot of effort for little benefit. You still have to have software AA as a fallback for hardware which doesn’t support hardware AA rendering anyway, which complicates matters even more.
When I say “Software AA is fast” I mean just that: it’s a tiny amount of CPU time to perform AA on a single glyph. Your OS probably spends more time in the scheduler in an hour than it ever will spend doing something as trivial as AA calculations in an entire week.
Edited 2009-05-18 22:21 UTC
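For a sense of scale on the “software AA is fast” claim: anti-aliasing a glyph is just blending each pixel’s coverage value with the background, one multiply-add per pixel. A toy benchmark in Python (a real renderer written in C would be orders of magnitude faster still; the mask here is a stand-in, not real glyph data):

```python
import time

def blend(fg, bg, cov):
    """Blend one grey channel: cov is the glyph coverage, 0..255."""
    return (fg * cov + bg * (255 - cov)) // 255

# A 16x16 coverage mask standing in for one rasterised glyph.
mask = [[255 if (x ^ y) & 1 else 0 for x in range(16)] for y in range(16)]

t0 = time.perf_counter()
for _ in range(1_000):  # roughly one screenful of text
    out = [[blend(0, 255, c) for c in row] for row in mask]  # black on white
elapsed = time.perf_counter() - t0
print(f"anti-aliased 1,000 glyphs in {elapsed:.3f}s of pure Python")
```

Even interpreted Python chews through a screenful of glyphs in a fraction of a second, which supports the point that per-glyph AA cost is negligible next to everything else a desktop does.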
We’ve been working on this with AtheOS and Syllable for over a decade now.
Sorry, NIH.
No, they were not. The Linux kernel had only one goal when it was designed: to make UNIX usable on a personal computer, particularly the IBM PC.
GNU, had a different goal, provide software without commercial barriers.
If you look at it closely, neither of those projects was designed or destined to create a better OS. What we today know as GNU/Linux, or just Linux, was more a “political” movement than a technological one.
It was very different when, for example, the Macintosh or BeOS were created. They tried to do things differently. To provide a new way of doing things.
The Macintosh set its goal on bringing the GUI to the people (at least if you could afford it), and BeOS tried to create something new from scratch, with very nice technology that disrupted legacy technology.
However, as you can see, disrupting the past is not always the best commercial strategy, as Microsoft has proved.
And Linux, well, was never meant to be better, at least not technologically. It was meant to be free.
It is frustrating, though, how that freedom is also its biggest shortcoming. Commercial firms do not want to develop software for Linux because Linux users believe that all Linux software should be, or already is, free – which it is not.
And today’s Linux promotion is maintained and done by paid professionals who work for server companies like IBM, Sun, Novell, Oracle, etc. It is just a myth that a teenager in a garage works at night developing kernel extensions. While there must be some geniuses out there, the truth is that Linux is a server operating system, maintained to prevent total Microsoft domination of the server arena.
Linux on the desktop is a joke. At least 15 years behind what Microsoft or Apple provides.
“Linux on the desktop is a joke. At least 15 years behind what Microsoft or Apple provides.”
Look at WinXPSP2. Look at a recent Fedora or Ubuntu.
They are in most respects better than XP ever was. I would say Desktop Linux is as good as or better than Desktop Windows, having some things where it really shines (virtual desktops, stability, being virus-free, staying usable under 100% load), and some things where it is behind (apps, scanners, moving of windows).
Give them a few years and you will see more and more of the criticisms fade.
All any computer user now has to do is to ask himself these two questions:
1 – Can I do my work with Linux?
2 – Do I want to invest the time for learning how to administer Linux?
If both get a yes, go for it, if not, stay with the more expensive option.
Power management? As with everything, it depends.
I’ve seen video cards that are now starting to get proper power-management support in Linux. But I also see Windows servers sucking up much more power than Linux servers. So it all depends on the situation.
All of this is completely addressed in KDE4.
Installing KDE 4.2.1 (or later) and running that instead of GNOME/GTK is apparently one trick you didn’t try.
All of this is completely addressed in KDE4.
Installing KDE 4.2.1 (or later) and running that instead of GNOME/GTK is apparently one trick you didn’t try.
No, PM issues are not and cannot be fixed from KDE4 when the issues lie in the kernel and drivers. So no, KDE4 does not address that.
Secondly, yes, I did try KDE4 just recently. I did not like it, it was cluttered and awkward. Though yes, it felt faster than GNOME.
I don’t think so.
http://www.kde.org/announcements/4.2/desktop.php
This says that the desktop and window manager activate the kernel and drivers (“wake them up”), and not the other way around. Reducing the frequency of these wake ups reduces power consumption.
So yes, KDE4 does actually address that.
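The wakeup argument can be made concrete: every distinct instant at which some timer fires drags the CPU out of an idle state, so two components polling on unsynchronised periods cause more wakeups than the same work coalesced onto a shared tick. A toy count in Python (illustrative only, not KDE’s actual mechanism):

```python
def wakeups(periods_ms, span_ms=1000):
    """Count the distinct instants (over one second) at which at
    least one periodic timer fires; each one is a CPU wakeup."""
    instants = set()
    for p in periods_ms:
        instants.update(range(p, span_ms + 1, p))
    return len(instants)

print(wakeups([100, 150]))  # two unsynchronised timers → 13 wakeups/s
print(wakeups([100, 100]))  # same work sharing one tick → 10 wakeups/s
```

Fewer wakeups mean the CPU spends longer stretches in deep sleep states, which is where the power saving actually comes from.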
“Cluttered and awkward” how so?
Try running the Lancelot Menu, and selecting the “show categories in panel” option within that. That will give you a tri-menu (applications, places, and settings) which is very similar to GNOME. Use the desktop settings to change the activity type to “Folder view”, away from the default, which is “Desktop view”. This will give you a traditional “icons on the desktop” arrangement, similar to GNOME. You can of course put as many or as few icons and widgets on the desktop as you please, including none at all, for a totally uncluttered look.
You can also adjust the sizes of icons and widgets on your desktop to your taste. Tuck them all away as small icons lined up across the top or bottom or side of your desktop if you want a number of icons on the desktop but still have a relatively uncluttered look.
As for the interface elements themselves … a more common complaint of KDE4 is that “there is too much whitespace” … which is just the opposite of “it is too cluttered” … so you just cannot please everybody I suppose.
Edited 2009-05-19 02:27 UTC
I agree with most of it.
Just a note: when they talk about Qt or Gtk instabilities, I think they are referring to the API rather than the implementation. So, basically, it all works well, but you never know when compatibility will break and force you to update the applications relying on the libraries.
Of course, it’s not that dramatic. Qt4 did break compatibility with Qt3, but with a very good reason: it became much better. It’s always a battle between progress and convenience.
Which is a *really* odd thing to say, given that Gtk+ at least has been maintaining a stable API and ABI since about 2002.
I didn’t know that, because I find both Gtk and Gnome ugly (both the code and the looks), so I don’t follow their development. I like Qt. (Note that I don’t want to start a flame war here. I know that there are a lot of people who like Gtk and Gnome, and I respect that.)
Thanks for the info.
… use “Linux” as a Desktop.
My wife started using “Linux” in 2001, writing her diploma thesis using LaTeX, xfig, Wingz, xv, … and OpenOffice (to be compatible with Microsoft formats) on a Pentium II based laptop running Red Hat Linux. She chose that combination after seeing other students going nuts using Microsoft products 🙂
Her current focus, besides web browsing and emailing, is graphics work (Gimp & InkScape) and office work (OpenOffice).
I started using “Linux” in 1994. My focus was and is software development.
System stability never was an issue.
pica
No, this article is not different. Its point may be slightly different or worded differently, but the fact remains that Linux is already on the desktop. It works well for professionals, intermediates and complete beginners.
I would say I am an intermediate. But the PCs I administer are for complete newbs. My in-laws were Windows-familiar and my friend was a Luddite. Both parties are blissfully happy with their computers now.
I’ll address a few:
0. Premise: As for the masses, no one cares. I never even tried to explain it to my in-laws/friends. I just told them it was legally free.
1. No reliable sound system: Really? I have never personally experienced this, so I did not know it.
2. X system: Let’s not bring the Windows API into this. Not exactly the model of stability. And slow? Again, did not know this.
3. Problems stemming from the vast number of Linux distributives: I am not touching this other than to say that is a strength, not a weakness.
5.1 Few software titles: My jaw literally dropped.
5.1.1 No equivalent of some hardcore Windows software: so stick to Windows. No skin off my nose.
5.2 No games: Again, a jaw-dropper. What the hell have I been doing with Secret Maryo Chronicles, Alien Arena, Wesnoth… and wasn’t Doom also on Linux?
5.3 Incomplete or unstable drivers for some hardware: lol, yeah, Windows is the model to mimic here.
5.3.1 A lot of WinPrinters do not have any Linux support… no shit, Sherlock. In other news: water still wet.
7. A galore of software bugs across all applications: see my answer on 5.3.
Honestly, I just stopped here. If Linux works for you, it’s ready for the desktop. It already does everything better FOR ME than my Windows installations ever did. Plus, I am having fun again with computers. No more antivirus, adware, malware, registry cleaners and everything else I had to keep up to date just to use my computer.
I use Windows in SRP mode (Software Restriction Policies) and log into the system as a non-admin. I use none of the software you list above. Send me a virus – really, it won’t even execute.
Edited 2009-05-18 20:00 UTC
Some WinPrinters are not supported on Windows Vista/7 either; the same goes for other hardware.
You really have low standards.
Legally free, while it is nice, it does not make it, technology advanced, useful or competitive.
Linux is terrible managing sound. Any audio experte can tell you that. The problem is Linux is not a preemtive kernel by default, so real time audio relies on fast hardware an sometimes luck.
Windows API, while not ideal or super modern is highly maintained.
It is a terrible weakness, especially for a developer who tries to sell their software.
Ok… what about software titles that people want to use?
All of that is available for Mac OS X and Windows too. All of it. Besides, they are all old games.
Very low expectations.
Say what? The Linux kernel is not pre-emptive? On what planet?
http://en.wikipedia.org/wiki/Preemption_(computing)
The Linux kernel, aside from supporting preemption since version 2.6, can also be made real-time if required:
http://www.windriver.com/news/press/pr.html?ID=4261
http://www.theregister.co.uk/2009/02/10/redhat_mrg_fedora11/
http://www.novell.com/products/realtime/
Why is there so much misinformation talked about Linux? These facts are not hard to check.
http://en.wikipedia.org/wiki/Phonon_(KDE)
http://en.wikipedia.org/wiki/JACK_Audio_Connection_Kit
http://en.wikipedia.org/wiki/PulseAudio
Edited 2009-05-19 00:01 UTC
Say what? The Linux kernel is not pre-emptive? On what planet?
In this planet. Preemptive kernel, not preemptive scheduling. Two different concepts. And I said BY DEFAULT.
Taken from Understanding the Linux Kernel, 1st edition, by Daniel Bovet and Marco Cesati.
However, this changed in version 2.6 of the kernel, somewhat:
Taken from the same book, but third edition, 2006.
Linux 2.6 is not compiled as a preemptive kernel BY DEFAULT. However, Linux has always been a preemptive scheduling Operating System.
With this sentence you just make it clear that you don’t know what a preemptive kernel is.
But don’t worry, see the same paragraph you posted:
Linux has had the preemptive kernel possibility since 2.6… Windows NT was a preemptive kernel… how many years before? Fifteen?
Now, do you really expect that in the two or three years Linux has had the possibility of recompiling the kernel as preemptive, which is OPTIONAL on most distros, it would magically catch up and get the same audio applications that have been on Mac OS X or Windows for years? Get real.
It is hard if you do not know what a preemptive kernel is, and how the kernel is compiled BY DEFAULT. Let me say it again: BY DEFAULT.
And you just assume that because the latest version of the kernel can be compiled in preemptive kernel mode, every distribution has a preemptive kernel, which is a falsehood.
And before you start telling me that you can compile the kernel yourself… well, that’s exactly the problem with Linux: no desktop user should be compiling anything to use the OS.
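(An aside on the "by default" argument: whether a given installed kernel was built with preemption can be checked without compiling anything, just by reading the build config the distro ships. A minimal POSIX-shell sketch; the config path and the `preempt_model` helper name are illustrative, and the `CONFIG_PREEMPT_RT` option assumes the -rt patch set is in use:)

```shell
#!/bin/sh
# Classify the preemption model recorded in a kernel build config.
# On most distros the running kernel's config is installed at
# /boot/config-$(uname -r); some expose /proc/config.gz instead.
# CONFIG_PREEMPT_RT assumes the out-of-tree -rt patch set.
preempt_model() {
    case "$1" in
        *CONFIG_PREEMPT_RT=y*)        echo "real-time (fully preemptible)" ;;
        *CONFIG_PREEMPT=y*)           echo "preemptible kernel" ;;
        *CONFIG_PREEMPT_VOLUNTARY=y*) echo "voluntary preemption only" ;;
        *)                            echo "no kernel preemption" ;;
    esac
}

# Inline fragment for illustration; on a real system pass the file contents:
#   preempt_model "$(cat /boot/config-$(uname -r))"
preempt_model "CONFIG_PREEMPT_VOLUNTARY=y"   # prints: voluntary preemption only
```

The RT check comes first because an -rt config also sets `CONFIG_PREEMPT=y`, so the more specific pattern has to win.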
It was a problem back in 2002. With kernel 2.4. Fixed since a long time now.
http://www.linuxjournal.com/article/5600
Yes I do. Preemption is required for “hard real time”. “Hard real time” is possible for Linux kernels, and there is some discussion about whether “real time” should become the default.
http://www.linuxdevices.com/news/NS9566944929.html
Can’t do that without a preemptible kernel.
Both preemption and real time extensions for the kernel are of assistance in producing low latency non-stuttering audio.
I gave you three links to low-latency audio layers in modern Linux distributions, which utilise these features of the kernel.
Can’t you read, or something?
So you have gone from “Linux doesn’t have preemption” to “Linux has only had it for 6 years”.
http://en.wikipedia.org/wiki/Linux_kernel#Versions
Six years is a long, long time in the arena of rapidly-improving Linux.
Long enough so that audio latency issues that were once a problem have long ago been solved. As I said, I gave you three links to mature low-latency audio layers in current use on Linux.
You are seriously out of date with your attempted criticism.
Edited 2009-05-19 03:00 UTC
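(For reference, the "low latency" being argued about here is simple arithmetic: a period-based audio layer such as JACK buffers latency_ms = frames per period × number of periods ÷ sample rate × 1000. A small shell sketch; `latency_ms` is a made-up helper name, not anything JACK itself provides:)

```shell
#!/bin/sh
# Buffering latency of a period-based audio layer (e.g. JACK):
#   latency_ms = frames_per_period * periods / sample_rate * 1000
latency_ms() {
    # POSIX shell arithmetic is integer-only, so delegate to awk.
    awk -v f="$1" -v p="$2" -v r="$3" \
        'BEGIN { printf "%.1f", f * p / r * 1000 }'
}

latency_ms 64 2 48000    # aggressive pro-audio settings -> prints 2.7
echo ""
latency_ms 1024 2 44100  # typical consumer settings -> prints 46.4
```

Hitting the small-buffer numbers without dropouts is exactly where a preemptible or -rt kernel earns its keep: the smaller the period, the less time the kernel has to get audio threads scheduled.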
No, it is not completely fixed. Not all distributions compile the kernel as preemptive, because it is optional.
I repeat the text from “Understanding the Linux Kernel”:
Not all distributions compile with preemptive kernel flag.
Can you read: preemptive kernel OPTION.
Of course I can read… Can you read whole sentences, not just the first three words?
Do you understand the problem with Linux? Not all distributions are required to have a preemptive kernel.
I never said that… My first post I wrote:
Do you speak English? Do you understand what preemptive kernel BY DEFAULT means?
It means it is optional; it is not required; you can bypass it.
You just have to read the whole sentence. Not just the first part and start bashing people.
Yeah, and you think that audio apps are going to start appearing the moment the kernel is fixed… magically, because thousands of developers were just waiting for the kernel to be fixed… especially a “fix” that is not fundamental for Linux to work, because it is optional.
Getting critical applications takes years. Recently some audio products have appeared on Linux, while better products have been on Windows and Mac for more than a decade.
If you do not believe me, ask Adobe how many years it is projected to take for Photoshop to become 64-bit on Mac OS X. Or how many years Microsoft needed to move Office from PowerPC to Intel, still with no VB support. And they have the money and the developers to do it.
Developing software is a very complex and difficult task. Applications do not appear like magic.
Name one that doesn’t.
Name one that does.
Name a desktop distribution that ships with a non-preemptive kernel.
The aforementioned low-latency audio layers are the default on Linux desktop distributions. Ergo, the default (for Linux desktops) is to have … a pre-emptive kernel.
Pfft.
http://www.linuxpromagazine.com/online/news/fast_forward_vlc_1_0_0_…
Best media player on any desktop platform, bar none.
http://createdigitalmusic.com/2009/05/05/the-mobile-music-netbook-l…
Cannot be beaten for portability, functionality and value-for-money-price.
http://en.wikipedia.org/wiki/Amarok_(software)
Best music collection software on any desktop platform, bar none.
Agreed. It is a good job that you are at least six years behind the times about desktop Linux, then, isn’t it?
Edited 2009-05-19 05:50 UTC
By default? Ubuntu doesn’t. You need to install linux-rt. Same with Mandriva.
Uh, yeah, is that too hard to do? sudo apt-get install linux-2.6.xx-xx-rt?
I mean, it doesn’t mean Linux can’t do it. The same way Arch doesn’t come with X11 by default, but that doesn’t mean you can’t install it. And you get to keep your old kernel to boot, just in case you need it.
Edited 2009-05-20 00:15 UTC
Yeah, that’s not difficult for me, for example… But I am a computer geek. Don’t kid yourself; people do not care about complicated things…
Most people do not even know what the problem is, or care. There are alternatives that work out of the box.
But, that’s good: keep telling yourself that compiling a kernel is a user-friendly activity. Sure, Linux is gonna skyrocket on the desktop. Especially since you need to start telling users what a kernel is, what compiling is, why the thing does not come compiled by default, and what the hell the commands are that they need to type… All that trouble, and the sad part is, the same or better applications are ready for Mac and Windows.
Umm, you have Ubuntu Studio, which comes packaged with the -rt kernel. Is that enough out-of-the-box for you?
WHAT? When did I mention compiling?! Installing the -rt kernel doesn’t compile shit, it just downloads the precompiled kernel, makes a GRUB entry, and that’s it. You just reboot, pick your new kernel and you’re done.
Yeah, it’s not like I don’t have a choice whether to use stock Windows or a “Windows Studio” geared specifically towards multimedia apps. Oh wait…
By the way, normal users don’t install or configure Windows, OS X, or Linux. In a perfect world, an OEM vendor would offer users who use their PCs for audio a Linux distro with an -rt kernel by default. You have the freedom to do it, and they do, too.
Edited 2009-05-20 01:17 UTC
And that’s the solution… Bravo… People just need to install a different Linux distribution; it is so easy, there are only like 300 of them.
Sorry, my mistake… But it does not take away the fact that a normal user does not give a “thing”. To them, that’s geek work. Downloading and installing a kernel or compiling a kernel: it is the same thing to them… They do not know what a kernel is, don’t kid yourself.
You have another choice too… Don’t use Linux.
Wait a minute… Is that not Apple’s business you are describing?
And yeah…… right. People on the desktop use Linux a lot… I wonder how in this world Apple and Microsoft report billions of dollars in income while we are in a melted-down economy.
And if you actually believe non-geek people are gonna start searching for which Linux distribution is the right fit, well, there is nothing I can do to open your eyes. Keep telling yourself that Linux is so good and so perfect… But nobody cares, unless the whole thing changes.
Magic: ====>
http://www.renoise.com/
http://createdigitalmusic.com/2009/02/20/energyxt-25-is-here-is-awe…
http://www.creativepost.co.uk/
http://www.rosegardenmusic.com/
http://en.wikipedia.org/wiki/Ardour_(audio_processor)
http://en.wikipedia.org/wiki/Pure_Data
http://en.wikipedia.org/wiki/SuperCollider
http://en.wikipedia.org/wiki/Csound
http://en.wikipedia.org/wiki/ChucK
http://en.wikipedia.org/wiki/Linux_audio_software
<===== Enjoy.
Oh my.
Well, since you insist, let’s just take a look at the apps you mention.
Renoise: Mod Tracker. Welcome to 1994!
EnergyXT: Did you actually read the user comments on the link you posted?
Although that is not to say that it is not useful, EnergyXT is not capable of being the heart of a pro studio. If you read the comments, you mightn’t want to use it at all – assuming, in the first place, that its design philosophy lends itself to the kind of music you want to make.
ArdourXchange: Good luck! Even DigiDesign can’t (or, possibly, won’t) make AAF’s interoperable between their own apps! Unless the maintainers want to constantly be chasing a moving target, ArdourXchange is of questionable utility, even if only because its only purpose is to import AAF’s into Ardour. Many DAW’s have integrated AAF/OMF import and export as standard components or optional add-ons. ArdourXchange’s inability to export Ardour sessions in AAF format is a very serious shortcoming.
As for Rosegarden and Ardour, all I can say is, you have no idea. These are completely unacceptable for real studios. There is a reason why professionals stay away from these apps in droves, although, to be fair, this could be a combination of the apps themselves and the fact that they run on Linux. Yet, on the other hand, it is also necessary to take into account the extreme competitiveness of the world of professional recording and music production: if these apps offered any conceivable advantage over the more common Windows and OSX offerings, they would find a significant userbase. They haven’t. Only if, for example, you have never engineered a recording session for paying clients is it possible to understand why you would think that these are anything more than amateur apps. In fact, this applies to your whole post: it could only have been drawn up by someone completely unfamiliar with the subject at hand.
Acceptable DAW’s are: Nuendo, Pro Tools, Cubase, Sonar, Digital Performer, Logic. Tracktion, Ableton Live, Sony Acid, and FL Studio are also extremely powerful and sophisticated music production environments, as is Propellerheads Reason (although Reason is far too insular, in my view). The one and only acceptable DAW available for Linux is the ported-from-Windows Reaper, and some people simply find it to be unacceptable (although it answers the needs of other people quite well).
However, regardless of the suitability of Reaper as a DAW, the adoption of Linux as a professional recording platform will always be as limited as the availability of professional-grade audio interfaces for it. (And such availability is to be understood as a necessary but not a sufficient condition for broader adoption of Linux for professional music production.)
There seem to be some useful, if feature-limited, wave editors available for Linux.
As for Pure Data, SuperCollider, ChucK, and Csound: most musicians and sound engineers are not, and do not want to become, computer programmers. These highly complex and non-intuitive programming environments – which are also available for Windows and Mac – have, in spite of some of them being around since the 1990s, found only a very, very limited userbase. There could be no more damning indictment of Linux’s suitability – or lack thereof – for broad adoption by recording professionals and musicians than the fact that you have listed them among the “highlights” of Linux audio software. But then again, since you seem to be more or less unacquainted with the needs of musicians and recording professionals, perhaps we should not take that too seriously.
Edited 2009-05-20 08:16 UTC
So your argument is flawed. With Linux you can have the kernel the way you want; with Windows you are stuck with one kernel, which can’t be good for every use. Do you agree?
“DEFAULT with most distributions” … but not all, right? How about http://www.jacklab.org ?
You are completely out of the discussion. The main problem with Linux is its heavily fragmented user base. Everyone is pulling their own way.
Do you honestly believe that a normal user who just wants to edit audio is gonna start asking: Which distro has a preemptive kernel? Where is my compiler to make the stuff work? There are no good audio apps, so I am gonna make my own!
Get real. You were saying Linux is wonderful for audio; I told you what the problem with audio was. You said it is fixed. No, it is not fixed. Other people in the thread have named distros that do not come with a preemptive kernel by default. And then you start telling me I am spreading FUD… I am not spreading FUD, I am telling you the problem with Linux on audio.
Now you expect a normal user to be hunting for a distro, or compiling the code themselves. If you really believe that’s the strategy for taking control of the desktop… keep dreaming. Linux is not cool if you cannot do your work with it, or if you have to become a computer geek to make it work, because there are other computers that just work.
What do most audio people use? Mac OS X, and some Windows.
Sigh!
Once again, yes it is fixed, and it has been so for many years.
“apt-get install ….” is not compiling anything. It is just installing a piece of software. In this case, that software is the real-time low-latency version of the kernel.
Here is an example for you: on Windows, if you want a gaming rig, then apart from the base OS install you are going to have to install DirectX from Microsoft and a better video driver from the manufacturer. You need to do this because the one OS cannot by default be all things to all users, and so full gaming support is not the default.
Likewise, on Linux, if you are going to be doing professional audio, you need to install the pre-emptive real-time version of the kernel, because your requirements are a bit specialised.
The kernel version you need is already compiled for you; it is waiting in the distribution’s repositories (just like DirectX is waiting for you, already compiled, on Microsoft’s website), but it is not the default (just as DirectX is not the default on Windows).
Edited 2009-05-20 05:59 UTC
No matter how you put it, it is not user friendly. Most users will just tell you: on my Mac I don’t have to do that, and it works flawlessly. Don’t kid yourself; most people do not know what a kernel is, or what a preemptive kernel is. If you want to believe that, well, that’s your idea of the world, but you cannot expect a normal user to understand how Linux works just to get a tool they can get elsewhere easily.
And no, it is not fixed… Why not just make all distributions ship with a preemptive kernel by default? Well, the preemptive kernel is too new to make mainstream. And yes, 2-4 years is too new in kernel land.
Sigh!
No matter how you put it, Linux has no issues with audio. It has been fixed for years.
Linux desktop distributions are optimised for being a desktop, not for being an audio lab. The server distributions are optimised for server workloads. There are Linux distributions optimised for security forensics, and others for system recovery, and still others are designed simply to be firewalls.
So if you want to do audio on Linux, then just get a Linux distribution that is optimised for that particular task:
http://ubuntustudio.org/
http://en.wikipedia.org/wiki/Ubuntu_Studio
http://www.boingboing.net/2007/01/21/ubuntu-studio-linux-.html
http://www.64studio.com/
http://en.wikipedia.org/wiki/64_Studio
http://dynebolic.org/
http://www.musix.org.ar/en/index.html
http://www.apo33.org/apodio/doku.php
http://wolvix.org/
… and be done with it.
For heaven’s sake it isn’t hard. They aren’t at all difficult to find:
http://en.wikipedia.org/wiki/List_of_Linux_audio_software#Distribut…
Edited 2009-05-20 14:06 UTC
AhHa! I think we have discovered the problem here. You have a Mac.
Macs are good and all that … but I’m sorry that you thought you had to spend all that money when you could have had that functionality on far cheaper hardware and the software all for free.
Your posts are clearly just you trying to justify your expensive purchase in your own mind.
PS: Ubuntu Studio, or 64 Studio, are the Linux distributions that cover functionality that you are apparently after. Right out of the box they both work flawlessly, too.
Edited 2009-05-20 14:33 UTC
Gawd what an arrogant gasbag.
As an actual musician, I find posturing asshats like you vastly amusing.
Cut’n’pasting loops in one of your Fisher Price “wave editors” does not make you either an engineer or a musician.
Can you tune a piano? How about a drum kit? Know anything about microphones? How about arranging? Can you read music?
If the “heart” of your “professional” studio is a Windows box with Pro Tools or whatever other glorified video games you’re so thrilled with on it, then it isn’t a professional studio.
Is your precious “DAW” the only computer in your studio?
Here is a 6 year old article wherein an actual studio switches many of their machines to Linux
http://www.desktoplinux.com/articles/AT5847717353.html
And here is another actual studio who switched to Linux 6 years ago
http://www.soundonsound.com/sos/feb04/articles/mirrorimage.htm
What is the name of your “professional” studio? What artists have you recorded? What? You’re talking out of your ass?
I suppose *price* isn’t a concern for a wildly successful “professional” like yourself. Otherwise, you might be trying to figure out how to reduce costs such as software licenses, etc. Then again, with a major talent like you must be, I am sure that your clients don’t mind paying top dollar.
*whistle*
I don’t know if you are referring to me. But this argument, does not help the discussion.
Of course, if you fine-tune something, or if you have a special team to do it for you who know what to tweak, what to install, what not to install, what to do and what not to do… you can get away with Linux. I got paid for doing so.
But it does not help the platform to become mainstream, and that’s what is discussed here: Desktop Linux.
Because Linux is a heavily fragmented platform… I keep saying that in all my posts. Linux needs a united front, and then, if necessary, some special distributions… But not the mess we live in today.
Every time I say something, there is someone who says distro XYZ has that feature… That’s not the point. Linux is never getting mainstream until it simplifies its approach to the end user.
And good apps are influenced by market share too, because many developers need to eat, and money talks. Market share is not the only important thing, but it helps a lot.
From what I can see, you got a custom-made recording studio and they use Linux as a platform, which is fine, which is good… But it is not mainstream… Your studio does not qualify as desktop Linux, just as a Fortune 500 company, a Maya rendering farm, or a supercomputing cluster does not either.
And no, I am not a musician.
Actually, my response was meant for this post
http://www.osnews.com/thread?364508
specifically.
Otherwise, your “argument” is pointless.
I have used Linux “desktops” for years with less hassle and bother than the Windows “desktops” I have also used.
How is finding a distro that has what you want any different from choosing one of the many flavors of Windows or choosing which software to run? Answer: it isn’t.
Well, if it isn’t, then answer the big question:
http://marketshare.hitslink.com/operating-system-market-share.aspx?…
I have given my opinion. I know it is sometimes technical, and it is based on fragmentation. And thousands of people in the thread think I am a troll.
Give your opinion, but don’t say Linux is perfect, because numbers don’t lie.
I agree with that. Although the point of your statement is not clear.
Firstly, I no longer have to run a studio. However, when I did have to, I had someone to tune my piano when needed. I know drummers who tuned my drums. I know quite a bit about microphones. Not only do I know about arranging, I actually know how to arrange. And of course I also read music.
In that case, there are very few professional studios in the world, and “real” engineers and musicians such as yourself must be raking in a fortune. Or would be, if it weren’t for the inconvenient fact that very few “paying clients” agree with your bizarre point of view. (And in a moment, we will see that your primary example of a Linux-based studio also uses these “glorified video games”!)
In my case, yes, it is the only standalone computer completely dedicated to music and audio production. I am sure that you will be able to divine some meaning in that fact, although that “meaning” will be apparent to no one but yourself.
Let’s look at your examples a bit more closely.
Your first example is an internal studio of an unidentified radio station in an unnamed city, where Linux was deployed to replace… Win98! Since the radio station is not mentioned, we have no idea if they are still running Linux, or if the radio station is still in business, or if the person responsible for the Linux deployment there is still working there. If your intent is to say that studios should switch from Win98 to Linux, then perhaps you have a point. But what most other people would find far more relevant, is an example and some reasons why they should switch from rather more modern OS’s and their apps, such as OSX and XP.
Now, let us look at the studio cited in your second article, by visiting the studio’s webpage, shall we? It is right here: http://multitrack.us/ . Not a word about Linux there. Okay, let’s look at the “Equipment” page. Ah, here is what we want: It says “The Tascam DM-24 is our primary digital console, which interfaces to our computers with TDIF and ADAT. We also have a 32-channel SoundTrac Solo analogue desk and a pair of Yamaha AW4416 DAWs.
The Mac in the studio is a G4 machine running OS 9.2, Digital Performer, Logic and BIAS Peak. Our main Linux machine is a dual AMD Athlon 2600+ with 1GB of RAM, plus an RME Hammerfall 9652 card with twenty four ADAT and two SPDIF ports. Linux software includes the JACK low-latency audio server, the Ardour DAW and the Rosegarden MIDI/audio sequencer.”
Look, look! Your posterboy Linux studio also runs those “glorified video games” – the very same that you claim automatically preclude a studio from being truly “professional”.
Because I did not say that no professional musicians, engineers, or studios use Linux, you have not disproved my point. You have only managed to disprove a statement that I did not make.
I am sure that you will forgive me for withholding my personal details. I am sure that you can imagine that I would not want to be seen in public talking to someone like you. (Of course most people would know that asking someone “who they are” on the net is not just a complete waste of time, but pretty damn foolish as well. It is a pastime for the gullible.)
A professional-grade DAW can be bought for hundreds, not necessarily for thousands or tens of thousands. And the truth is, that in the total cost of equipping a real studio (or even a decent home studio, really), taking into account the costs of monitors (both the “loudspeaker” and “video” kind), perhaps those drums and the piano you mentioned, microphones, acoustic treatment, outboard consoles if needed, along with other audio hardware such as DI boxes, acquisition of and cost of maintaining the actual physical premises where the studio is located, salaries for and expenses attendant on having employees, insurance, and many many other expenses either one-time or recurring, the cost of the software licenses is a very very small item. A single room in a post-production house, for example, can generate enough income to pay for a new license for Sequoia, Pro Tools, or Nuendo, in a single day. Even for a home studio, the cost of a good DAW and a machine on which to run it, is in many cases less than the price of a decent instrument.
Eventually you might find that clients are indeed willing to pay top dollar for talent.
Judging by your knowledge of the studio business and its economics, it is abundantly clear that you are an amateur.
Take comfort in the fact that “possession of musical talent” and “being intelligent” are quite independent; so it is possible that you do have some musical ability.
Why is it a problem? Because you have to do some work yourself? Either you do (in this case, search and learn) or you pay, as simple as that. The final result may be comparable.
Who is a normal user, really? Is he real, or did you just imagine him, or worse, calculate him from averages?
There are just people around; each one with different goals in mind and different pocket depth.
How about the right tool for the right job (within the price limit you can afford)? Linux is not wonderful – it’s just another OS – another tool.
You mean a normal user as a hobbyist, a rich professional, a not-so-rich pro, a rich hobbyist, and maybe a hobbyist-thief?
Just think for a second about who will be reaching for open source. Doesn’t he deserve to be called a normal user too?
Bottom line is that (believe it or not) people are different and there is no spoon eee… normal user.
Because we are talking desktop Linux.
Do not get philosophical. It is so easy to know what a casual/normal user is that if you are asking that, you must be in trouble. If you believe that a normal user compiles kernels for fun, or hunts distros, you must be out of your mind.
That’s like saying normal chemistry users are tactical bomb specialists, when most chemistry use is personal-care related. And no, I am not upraising you.
Do you realize we are discussing Desktop Linux?
First, open source and Linux are TWO different things. Mac OS X is open source underneath, if you care to know. Not all of it, but a lot of it.
Second, Windows and Mac, while pricier than Linux, are not impossible to buy. Stop with the price thing, because there are other OSes which are also free, and people are moving to them… Search and see: even Linux people are leaving Linux and going to OpenSolaris, BSD, and many others. Open source is not a Linux invention.
And most of the apps for Linux are available on other platforms. So stop the “Linux is better because it is free and open source”. Let me tell you: casual and normal users prefer to pay 100 bucks and get the computer fixed by a technician rather than do it themselves, even if it is just adding a RAM module.
And no, many of them are not rich, and no, they do not like to be ripped off, they are not morons either. They just don’t care about how a computer works.
The thing is, casual users are 99% of desktop computer users.
You really have low standards.
Not really. If I did, I am sure you and I would be friends by now.
Legally free, while nice, does not make it technologically advanced, useful, or competitive.
Now see, this just screams to me, “I want to pick a fight.”
Ok… what about software titles that people want to use?
I find this amusing since I switched to Linux because of the applications.
Very low expectations.
You know, why reply to a posting if all you want is a fight? Next time, just find somebody online and do a yo mamma joke on them.
Edited 2009-05-19 11:05 UTC
Oh I see. Anyone who doesn’t use your chosen set of applications has low standards. How convenient.
Hehe, the game comment. It’s like saying my calculator has Tetris, so it can play games too. But these are not top-of-the-line 3D games.
Regardless, those that love Linux play games like World of Warcraft through cedega. There’s a significant following for this.
My family and I have used Windows since 95. We changed to Debian and Mac last year because of Vista issues. The learning curve is not high.
Being different is not a problem. Get the OS and hardware that do the job.
We really do not miss anything in Windows.
By the way, I still do not know which region to set for my DVD-ROM drive. VLC never asks such a silly question.
I read a few comments and a lot of people would like a new open source OS (like Linux) that can be modified and customized as the user sees fit. Yes a clean API would be wonderful as well.
What everybody seems to miss is the fact that there should be FEWER operating systems. The OS I am waiting for is ReactOS. It is the most promising contender, as it is compatible with Windows. A lot of drivers are available for Windows, as are applications. People that are dependent on a tool like Photoshop don’t want to learn a new graphics program; they just want Photoshop. Yes, you could run Wine, but it is not perfect yet. ReactOS would solve this.
My dad would love to switch to Linux, but he uses Photoshop, QuarkXPress, CorelDraw, etc. He hates Macs (mostly because of their fanboys) and agrees with a lot of my opinions. He can’t switch. He says he is too old to learn something new (can’t blame him; he’s 65). I did get him to use Firefox and Thunderbird, and he is loving them (yes, he thinks they’re better than their Microsoft counterparts).
People that are not dependent on any special application, like me, can switch.
ReactOS is so dead.
It has been stalled for years.
Besides, it does not provide anything new; it is just a Windows clone.
That’s news to those of us who actually pay attention to these things. ReactOS makes regular releases. The last release was less than a month ago, and Thom posted it right on the front page here on OSNews.
Edited 2009-05-19 10:12 UTC
Let me explain something… By the time ReactOS reaches complete compatibility with Windows, Windows would be completely different.
But again, that is not the problem. Copying an OS and making it free is not the solution, because you are not making new and better technologies. You are just copying.
Want better technology? Look somewhere else. ReactOS does not provide anything new… It is technologically stalled at what Microsoft did 10 years ago. I don’t look backward.
I am really happy with the linux OSs I have tried, much better than windoze, so simple and easy. Keep up the good work all you clever peeps, your reward is in the joy you create for simple souls like me, who just want something that works.
peace and love to you all
mrs doyle.
That article is one giant troll.
A few examples:
Win32 is called a good and stable API. Win32 is neither good, nor does Microsoft endorse further use of it. Win32 is a deprecated legacy API. All fancy new stuff is developed by MS for .NET.
Buying codecs is somehow made into something unique to Linux. Whenever you buy a Windows license, license fees for the codecs are part of the price. If you want to live carefree without post-installation codec downloads, buy Mandriva Powerpack. That’s still cheaper than a Windows license. BTW: Out of the box, Windows’ codec support is really bad. Downloading VLC is just as legal (or illegal, depending on the jurisdiction you live in) as under Linux.
“No games. Full stop.”
WTF? All PC games I care about run under Linux. All games by id work under Linux natively. All Unreal games except the last one work under Linux natively. EA made a deal with Apple to publish many games on Mac OS X as well; EA uses Cedega to achieve this, hence they also work with Cedega under Linux. Blizzard does not officially support Linux, but it internally tests its games on Linux (through WINE) nevertheless, and they work.
BTW almost nobody still cares about PC games. The industry (including Microsoft) moved to gaming consoles. MS itself wants you to buy games for Xbox and not Windows, because MS gets license fees on Xbox games, but not Windows games.
Mentioning Bugzilla is silly as well. 1.) Bugzilla is also used for feature requests. 2.) Microsoft’s internal bug database is not open for inspection, so you can’t even compare.
“A very bad backwards and forward compatibility.”
Last time I checked, all apps programmed for GTK / GNOME 2.0 are still 100% compatible with 2.26.
KDE also guarantees binary compatibility for kdelibs.
I also don’t understand the often repeated “incompatible hardware” claim. Neither my desktop PC from HP, nor my laptop from Samsung (featuring a web cam) were built for Linux and yet they work better with Linux out of the box than Windows. Windows XP can’t even handle my HP’s LAN device. Get this: I had to download all Windows drivers under Linux. The only driver I ever had to install on Linux afterwards was the one from NVidia for my GPU, but modern distros handle that automatically via the integrated software update mechanisms.
IMHO that alone makes Linux easier to use than Windows.
Even the three mainstream DEs for Linux are IMHO each easier to use than Windows. Overall, Linux is the 3rd easiest to use OS I’ve ever used, and I’ve used a lot over the years. IMHO Mac OS X is the easiest to use, BeOS came second, Linux is 3rd tied with PC-BSD, DOS is 4th, and Windows is only 5th.
Linux is ready for the desktop. I see its office use every day. Linux is not some hobby OS that can’t be used for real work.
“Overall, Linux is the 3rd easiest to use OS I’ve ever used and I used a lot over the years. IMHO Mac OS X is the easiest to use, BeOS came second, Linux is 3rd tied with PC-BSD, DOS is 4th, and Windows is only 5th.”
…DOS? DOS beat out Windows in an “easy to use” contest? Really? If I gave someone a computer that booted to a DOS prompt and walked away, I would get a call within 2 minutes saying the computer I just sold them was broken (this actually happened when a guy didn’t want to pay for a Windows license and I had Dell pre-install FreeDOS. I felt like being an ass, as this particular customer was being intentionally difficult). Needless to say, a blinking command prompt is the 2nd least “easy to use” thing for a desktop-oriented user. (The first being basic Etch A Sketch controls. See early netbook interface concepts, version 0.1b.)
Yes, DOS. Unlike you, I took into account the time when an OS was current. DOS, unlike Windows, at least did what you told it to do. Windows, OTOH, tells you what to do and thinks that it knows better.
A CLI-based OS is not more difficult to use than a GUI-based one. It’s just different. A DOS user back in the 1980s had as much trouble adjusting to a Mac (or Amiga) GUI as a GUI user from today would have adjusting to a CLI.
Maybe adjusting to a CLI would even be easier, because it’s still used in many cases. There are even totally new developments like Microsoft’s PowerShell. Today all PC OSes also come with a CLI; back in the 1980s only a fraction came with a GUI.
“A CLI-based OS is not more difficult to use than a GUI-based one. It’s just different.”
I too took time into account. When moving from any command-line OS to a GUI, such as with Amiga, GEOS, GEM, Win 3.1, OS/2, or even the Mac OS desktop, users’ productivity skyrocketed! It was GUIs that made the computer a selling point. If you were alive and had a TV at the time, you might remember a lot of commercials highlighting the graphic capabilities of these operating systems (the ones I listed being full-blown OSes and not GUI shells) and their applications.
While I could fill this post with links to various studies proving that a graphical user interface not only improved user productivity but was also one of the most distinctive factors when using an operating system, I won’t waste the readers’ time showing proof of something everyone already knows.
“A DOS user back in the 1980s had as much trouble adjusting to a Mac (or Amiga) GUI as a GUI user from today would have to adjust to a CLI.
Maybe, adjusting to a CLI would even be easier, because it’s still used for many cases.”
Moving back to a CLI style of use is one of the reasons normal users were afraid of Linux. Just 2 years ago, your average Linux desktop user would see a command-line interface fairly commonly, usually daily. But your average user in general sees a command-line interface as a scary, terrible monster and avoids it. I can’t tell you how many end users call me when a command prompt shows up, because it has scared them into calling IT, even if it just showed up briefly as part of a software installation, or if I push an app from the server in the background.
For what it is worth, I like the CLI, I just don’t prefer it. I do love Windows PowerShell as a sysadmin and am no stranger to doing things without the ease of a GUI. But I would be foolish not to say that in almost all cases the GUI makes the end user’s life easier.
I think you confused some things here. Productivity did not increase because users clicked on icons to start apps. It increased because of GUI applications, especially spreadsheets.
Apps with GUIs often had better features, like different fonts, but the productivity of the OS itself did not increase.
That came years later with true multitasking (by true I mean in the interface as well as the technology).
Only if you used a crappy distro. SUSE and Mandriva pretty much always shipped all-round GUI config tools (YaST and DrakConf).
There are certain distributors out there that simply remove config tools because they think that they are too complicated. In such cases the user has to use the CLI, but that is only the distributor’s fault, not Linux’s.
Misconceptions about CLIs don’t make CLIs hard to use.
“All fancy new stuff is developed by MS for .NET.”
Name me one app from Microsoft that actually uses .NET.
I installed Windows 7 in a VM to try it, and there is nothing in there that uses .NET. It’s only 3rd-party stuff that uses .NET. Just to prove it, I uninstalled the .NET Framework 3.5 that came preinstalled with Windows 7. Everything still works, as it did before.
It’s all written in C++, or maybe even HTML/JScript/ActiveX (like the add/remove software dialog starting from Windows 95). Although I didn’t see anything like that anymore in recent versions.
Silverlight
Which I’ve never ever used or needed.
You asked for one application by MS that uses .NET. I gave you one. I could’ve also written PowerShell, MS’s whole server stack, etc., but since you only asked for one app, don’t bitch.
I do not know, but if Silverlight were .NET, why does it not run with Mono, and why is there Moonlight?
Even Novell’s Mono apps like Beagle and F-Spot are not really .NET, because last time I checked, they do not run on Windows/.NET.
.NET apps can call native libraries. Many .NET apps on Windows call Windows libraries. Those are not present in Mono.
That’s basically the same thing, just the other way around. Default .NET does not ship the GTK# DLLs. These are shipped by Mono. Beagle, OTOH, probably uses Linux file access calls (I don’t know for sure, though).
No it is not.
It is old: Yes. It is deprecated: Yes. Is it stable: Yes.
Still, it is better and more mature than what GNOME and KDE are doing. Both projects are just mimicking the Windows API and they have not finished it yet.
What are you talking about? A program that works on Linux through an emulator or virtualizer is a program for another platform. Name games made for Linux, not emulated ones.
No, Linux is ready for the server… For the common user, sadly, no. It needs a cleaner focus.
Have you actually used the Windows API? It is such a nightmare to work with. I doubt very many people directly program to it anymore. Everybody uses wrappers like MFC or wxWidgets, or an independent toolkit that avoids the Windows API as much as possible.
Contrast that to Qt or even plain GTK+; both are much more modern toolkits and very usable directly. They both take care of all of the little details that the Windows API leaves to the programmer.
API stability is great, in that many programs made for Windows 3.1 (win32s at least) are still usable today, but don’t pretend that it is a superior API to more modern ones.
Learn to read, troll! I wrote (direct quote): “All games by id work under Linux natively. All Unreal games except the last one work under Linux natively.”
More native Linux games: http://en.wikipedia.org/wiki/Category:Linux_games
I also mentioned EA’s Apple deal because the article’s author mentioned Cedega and claimed that it only works in a very limited way (which is wrong).
Ok… Now you tell me those games were designed for Linux. Because Linux has something that makes it special.
The truth is no. Those games were designed with Windows in mind or consoles in mind. Linux is a me-too.
I won’t even bother to tell you how many game titles are available for Mac, and Mac is probably the worst platform for games, unless you consider Linux.
But you must be right… Linux is free and people do not use it because people must be stupid. Or better, a conspiracy against Linux, when the truth is so simple: Linux is not ready for the desktop because it has a terrible user experience. Linux is for servers.
Who cares whether it’s “designed for Linux”? Games are meant to be fun, not meant to give the creators of Linux an orgasm by letting them know that the games are specifically designed for Linux.
How many games are OS X-only and don’t run on Windows? I don’t know any – pretty much every serious game I know runs on at least Windows, and maybe OS X too. Do I care that Starcraft is not “designed for OS X”? Of course not, I just care whether it’s enjoyable. Starcraft doesn’t run natively on Linux, but I can play it through Wine. It works, it’s enjoyable, and that’s all I care about.
This entire argument of yours is trollish at best.
Everyone knows Mac is a terrible platform for games, and it is still better than Linux. Most of the games are ports, or emulated somehow (the technique does not matter). If you think a troll is just someone telling the truth… Well, that’s sad, but there it is.
I dispute this claim, which you do not even back up with data.
Right now I’m sitting behind a Mac. Starcraft Map Editor doesn’t run on OS X Leopard but does run on Linux via Wine.
Your point being?
Whoa… why not chop up the whole answer while you’re at it.
Mac OS X Leopard does not include Wine…. Linux does not include Wine either. You added Wine to Linux and magically Linux is great for games, just because it is running WINDOWS software.
Why not just run Windows?
See the point?
No, I don’t see the point. Windows lacks a lot of software that I need. I can’t run Windows. Put me behind Windows and I can’t get any work done. Here’s a list of software that I need:
– Ruby and Ruby on Rails. Kinda runs on Windows, but suboptimal and slow.
– A sane terminal for general stuff. cmd.exe is not sane.
– A simple compiler toolchain without the need to tie myself to an IDE. Visual Studio is bloated, slow, eats up my screen real-estate and generally does not behave like I want to.
– Git. This is hands down the best revision control software I’ve ever used. I use it not only to manage my source code but also personal documents, for recording history, backup, and easy synchronization between computers. The Windows port is suboptimal and slow.
– No POSIX support on Windows, which I need for developing an entire class of server software. Cygwin works but is suboptimal.
– Valgrind. It doesn’t work on anything but Linux. Absolutely critical for debugging many C and C++ problems.
– No virtual desktops or OS X Spaces equivalent. The Nvidia driver has a sort-of imitation but I don’t have an Nvidia card.
– No mplayer, as in http://mplayerhq.hu. This is the best media player for me, and it does not work very well on Windows. Windows Media Player, VLC and other players are too bloated and slow for my taste.
The question should be: why should I use Windows if OS X and Linux already do everything I need?
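The Git-for-personal-documents point above is easy to make concrete. Below is a minimal sketch, with hypothetical paths under `/tmp/docs-demo`, of the workflow described: an ordinary repository holding documents, plus a bare clone standing in for a second computer, a USB drive, or a server.

```shell
# A minimal sketch of using Git for personal documents: history, backup, sync.
# All paths are hypothetical; any directory or remote server works the same way.
rm -rf /tmp/docs-demo && mkdir -p /tmp/docs-demo && cd /tmp/docs-demo

git init -q notes && cd notes
git config user.email "demo@example.com"   # commit identity for a clean env
git config user.name  "Demo User"
echo "first note" > todo.txt
git add todo.txt && git commit -q -m "Add todo list"

# A bare clone stands in for the second machine / backup drive:
cd .. && git clone -q --bare notes notes-backup.git
cd notes && git remote add backup ../notes-backup.git

# After further edits, push the full history to the backup...
echo "second note" >> todo.txt
git commit -q -am "Update todo list"
git push -q backup HEAD

# ...and any other computer can clone (or later pull) the same history:
cd .. && git clone -q notes-backup.git notes-on-laptop
cat notes-on-laptop/todo.txt   # both notes, with full history behind them
```

Nothing here is specific to source code: the same commit/push/pull cycle covers the backup and synchronization mentioned above, and `git log` provides the recorded history for free.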
I agree with you on some of those concerns. But man, the only thing in that list that a casual user does or might need is the media player. And MPlayer is available for Mac OS X.
And yes, many computer-oriented people run Linux, but the reality is, we are not the whole world. In fact, we are less than 1 percent of the whole computer business.
And when we discuss why desktop Linux is not ready, it is implied that the desktop is what casual users perceive as a computer. Not what an enterprise sees as a computer, and certainly not what insiders (people who live on computers, like you and I) see.
I know that, but that has never been my point in the first place. You seemed to make it sound like Linux has absolutely no advantage to anyone whatsoever.
But as I said, if you want to unify desktop Linux, I believe the best way to do this is by promoting Ubuntu and attracting the people who care, not by trying to convince the people who fundamentally disagree with you.
If I didn’t believe Linux has advantages over other OSes, I would not be answering and posting comments on OSNews.
Sorry, but no… Ubuntu started fine; now it is not, and they do not want to, or don’t care about, making things right. Ubuntu is just trying to please everybody again, and when you do that, you lose track. These days Ubuntu seems only to care about grabbing Linux users. No matter how.
These days, I look more favorably on projects like PC-BSD. If you dare to look at the philosophy behind it, you can see they are taking fundamental decisions that benefit desktop users, for example: self-contained apps, a package manager, only one user interface, binary drivers, a small footprint, and so on. It is a very young system, but it is not afraid to compromise. And if you ever want more, you can always have the other BSDs, which are different products.
I don’t expect PC-BSD to gain more momentum than Ubuntu. As I said in another response, the state of hardware support is a big mess in all operating systems, including Windows and OS X. To get decent hardware support without the manufacturer’s help you need a lot of manpower. Linux has more manpower behind it than any of the BSDs.
PC-BSD does not have any desktop commercial software, so there’s no advantage over Linux here.
I also believe that coming pre-installed on computers is more important than the points in the article. I don’t see any reason why manufacturers would pre-install PC-BSD over Ubuntu. PC-BSD can improve their package manager and driver framework and stuff like that all they want, but ultimately I think the factors that really matter for desktop acceptance are totally different ones.
Maybe not… But at least, they are trying to fix the problem. They are trying to give perspective to their OS. It does not guarantee success, but at least they have a common front in what they know: Operating Systems.
They are not bashing vendors, manufacturers, and everyone else over Linux market share. They are not telling everyone this is perfect and wonderful, or shouting the criticism “you see problems because you are a troll”.
They are trying to get their act together. That is more than what Linux is doing for the desktop. Same with all the other open source operating systems.
But a strategy and a common front are needed in Linux… And trying to please everyone is the worst possible strategy there is, because it is no strategy. Linux is like a teenager with no personality.
And OSes and software are not the only industry that has gone through this. If you look at history, industrial design went the standardization route, and no, you cannot find every part you might want, but it works. The same happened with car manufacturing: people used to assemble their own cars and make them work. But to make it mainstream you needed to compromise, and maybe a Ford Model T was not as good, but it went mainstream.
But Linux libertinage is taking the platform nowhere. At least not on the desktop.
http://marketshare.hitslink.com/operating-system-market-share.aspx?…
I don’t think anybody here is bashing you along the lines of “you see problems therefore you must be a troll”. Everybody here’s only bashing you for the things that are factually incorrect or the things that they fundamentally disagree with.
Actually no, I keep talking about ideas, and they start talking technical details and keep asking: tell me a distro by name and version that does that, or a user that did that, or a study that shows that.
Just stop thinking as OS insiders and start thinking as casual users. If you do, you will see the problems with the OS. So far I have been saying the same things over and over:
1. Fragmentation and loss of focus in the community.
2. Trying desperately to please everybody.
3. Many technical details exposed to the user that only insiders should care about.
4. Blaming everything on the other companies, the market, enterprises, and lack of drivers.
And so on… Read all the threads. But so far, nobody has discussed philosophical details or design decisions with me; they have only been saying really stupid things about how Linux works… I know how it works, but nobody actually discusses why it works that way.
The only thing that is missing is someone correcting my spelling.
In this thread I have also discovered that OS enthusiasts do not understand the fundamental differences between preferences/settings/configuration. Nor the difference between a package manager and an installation process. They don’t see the difference between the niche market, business market, server market, and desktop market. They do not know, or do not care, to argue the philosophical details of why Linux does it that way; they only defend the thing even if the way it shows itself to the casual user is completely absurd. They do not see that Linux and open source are not the same thing.
And so on…
Now I understand how Apple and Microsoft have it so easy. Even the open source community is developing for Apple and Microsoft and open source developers are not even aware of it.
So far, the only ones that have gained something from open source are Apple and Microsoft. Everything available for Linux is available for them, and they get the benefits.
Reason #1:
GNU+Linux+Wine is Free.
Why should I pay for Windows if it is not absolutely necessary?
Microsoft is a criminal corporation; not a charitable cause.
Reason #2:
Scores of useful Linux software is not available for Windows.
Grsync is a perfect example of this. While it can be compiled in Cygwin, why would I want to do this when Linux readily meets all of my needs without me having to compile anything?
Fact #1: Windows is not free and is limited to only running Windows software.
Fact #2: Linux is free and runs all Linux software plus a lot of Windows software, especially the more popular Windows software.
What is the point? -OR- a better question:
What logical conclusions can be reached from the 2 undisputed facts above?
Learn the difference between civil and criminal law, then maybe you can be taken seriously.
There is no dispute that Linux is free, but you and so many others utterly fail at comprehension. The logical conclusion should be to look inwards. When product A is free and loses to product B that costs money, you have a problem; any junior business, marketing, or economics student can tell you that. Doesn’t it bother you that many people prefer to actually purchase over the free option? In any other sector that did not have raving fandom, this would be shooting the largest red flag up the pole.
I don’t believe there is an issue with the technology, but when you consider what Apple has done with OS X, one certainly can question whether the development model and community are at fault. Now you can jump up and down all you want, blame Microsoft for every evil act committed since the dawn of time, and lay whatever baseless claims you can think of; it still is not going to change the facts on the ground.
I think some really need to get outside some more to see how the world really is, instead of living in a shell. What makes a good operating system is numerous things, but what does not make a good OS is simply being the anti-Microsoft.
I concur. Specifically with the comment about the development model and community — the two things that are taken for granted. Ironically, the latter is unable to raise any kind of self-criticism over the former.
No, seriously, Microsoft is a criminal corporation.
Maybe the USA has only been able to get a civil conviction,
but the facts are still the facts.
I stopped reading your nonsense here.
What grants you the right to declare winners and losers?
The sell-through of Linux PCs, as well as their installed base and overall market share, continues a steady rise year over year. The governments of Cuba, Vietnam, Korea, Russia, and Brazil, and the French National Police, are all switching to Linux for their desktops.
Microsoft continues to incur hefty fines from the EU while letting go some of its employees.
Red Hat continues strong economic stability and profitability even during a time of international economic turmoil. Red Hat hired 6000 additional employees last year and plans to continue that trend.
You, my friend, should peer deeply inwards before feeling the need to point fingers at honest and responsible corporate citizens and call them “losers.”
So is Intel, and many would say that it is an even bigger criminal. And yet it just released a new Linux distribution for the netbooks it powers. And you people cheer.
Strange bedfellows, hypocrisy, money and Linux advocacy.
While Windows is not free, it is not impossible to buy… And so far, many people buy it.
And Linux is free and is not mainstream. (But it should be. Don’t bash me. I believe in open source.)
Right… Grsync is so necessary for the casual user. Keep the discussion focused… Not everyone is a system administrator.
That would be a wonderful argument if most available apps were not for Windows. Even open source apps like OO.org are also available for Windows.
You are overestimating free… As you can see in my previous posts, people prefer stealing Windows to using Linux.
I do not know, I do not care. I just care about the fact that Linux, despite Windows’ problems, and despite being FREE, is really a no-one-uses-it product. It is a geek OS, and no one except us, who are a minority by the way, uses it.
And big companies like Apple, are taking the benefits of open source and making big money.
No, I didn’t tell that, troll. Learn to read.
The original argument was that there are absolutely no games for Linux. I debunked that claim.
Stop putting words into my mouth.
Whatever….
Repeat after me and everyone else: Wine Is Not An Emulator! WINE is a compatibility API.
My personal favorite is Wolfenstein: Enemy Territory, the game that made it possible for me not to dual boot several times per day. And guess what? It is not emulated, not virtualized, and not run through a compatibility layer.
(BTW: Most games that are properly designed and intended for release on PCs and game consoles are run through compatibility layers, as in WINE equivalents.)
Why not the other way around… Games for Linux and then compatibility for Windows?
If you want to believe that games are designed for Linux, let me tell you, you are living in Wonderland.
And stop bashing me; the truth is easy to see. I am aware of Linux’s situation and I am looking for ways to make it better. Believing that Linux is fine the way it is, is completely absurd:
More than 12 years later, a free product, and no one uses it on the desktop. Big companies like IBM and Oracle are driving Linux to the server and the enterprise; changes made in the kernel reflect that.
But Linux was supposed to be for personal computers, not for servers, not for enterprises.
But if you really believe Linux is fine and wonderful… then why does nobody use it?
He never claimed that. Nor is that his point.
It doesn’t matter whether games are “designed” for Linux. The end user only cares whether games run on Linux. Compatibility layers are entirely acceptable as long as they give the user what he wants.
Some games are capable of running on Linux… some of them using clever solutions… But why on Earth do that when you can have the real thing?
And yes, the end user only cares about the game… Why bother with Linux?
Open source is not a Linux feature. There is other open source software out there if you are into open source. Because it is free? There are other free OSes out there, and Windows is not that expensive either.
Because now you can have Windows *and* Linux apps as opposed to only Windows apps. I’m not going to explain to you why one would want to run Linux in the first place, but it’s for the same reasons why one would want to use OS X instead of Windows, or BeOS instead of Windows, or whatever.
The fact is, I’m already using Linux. I don’t want to reboot to Windows just to play games. If Wine allows me to do that, then more power to it. I don’t know why you care whether the game in question is specifically designed for Linux, but any reasons for caring are entirely artificial. As far as end users are concerned, they just care that something works.
You can ask the same question about OS X. Why bother with OS X? I can’t answer for everyone but obviously there are plenty of people using OS X *and* Linux, and they have good reasons to do so.
I can even ask you this: why bother with Windows?
Don’t care. I still don’t want to use Windows. Give me one reason why I should use Windows instead of Linux.
Mac OS X is a completely different thing, and it is a phenomenon the Linux community should be very, very aware of.
Underneath, Mac OS X and Linux are very similar… And I mean very. In fact, Apple has taken lots of open source, and I mean lots, and given the platform only one direction. Add some proprietary stuff and voilà: cash is raining on app stores.
They have provided very nice applications that people love to use, and they are only available on Mac. It does not matter if YOU particularly use them. But many people use them and love them.
The result: from 2% to 10% market penetration in less than 10 years. And they charge for their stuff.
Linux should have been in that position. But wait, it can’t be, because everyone is going their own way, because the important thing in Linux is freedom; it does not matter if we go nowhere.
And we keep fighting each other over every new distro… That distro is not good, but this one is better, and that one…
Ok… You don’t want to use Windows, but you are using Windows. Because the game you are running in Wine is for Windows…
To me, at the end of the day, you are using Windows. But OK… You don’t want Windows, and I cannot blame you for that… I don’t like it either. But you are missing the whole point here: even though Windows can be a bad product in some things, it is a good product in other things. If you cannot recognize that, there is nothing to be done, but it is so.
I don’t like Windows, and I can say, for example, that applications in Windows XP launch faster than in any other operating system. How they do it, whether it is a trick, or whatever, does not matter… That is a fact, and even non-computer people notice it.
People are not going to stop using Windows and go to Linux because they hate Bill Gates and Steve Ballmer… People are practical; they will switch only if they perceive the other product (Linux) as better. So far it has been a dead end. Only enterprise and niche markets take the Linux plunge.
And the small 1% client penetration of Linux is very much tied to netbooks.
Sure, I understand this. But what do you want to do about it? Do you think there’s a way this can be changed unless you hold a gun in front of everybody’s head and force them to unify everything? Suppose that you actually have the power to do this, is it worth sacrificing Linux identity for this?
I’ve given up on wanting to unify desktop Linux a long time ago. I believe the only practical way to “unify” desktop Linux is by improving and promoting Ubuntu until almost everybody who cares about desktop Linux would voluntarily choose to use Ubuntu. Everybody who disagrees can simply use something else, but Ubuntu will be for the new users and people who believe in a unified desktop Linux.
But I’m running them in a non-Windows user environment, on top of a non-Windows kernel, which is all I care about. I’m completely fine with the fact that the app was originally written in Windows as long as it works fine on Linux.
I still don’t know what you’re getting at. I dispute this notion of yours that Linux is a failure unless any and all Windows-related bits are completely absent.
I’ve got years of experience with developing Windows software. Launching stuff is only fast on a clean system. After several months things slow down significantly.
Ironically OS X and Linux suffer from the same problem. This Mac was blazingly fast at launching apps the first time I bought it. Now, not so much.
Don’t disagree with you here, but don’t see how that’s relevant to this discussion either.
That’s true… and no. I do not know how to do it, but if it never happens, Linux will never be good, or as good as it can be. We will have driver problems ’til the end of the world, and in the end, Linux will be another disaster.
And well, if you think Linux’s identity is “disaster”, well, do not expect people like me to recommend Linux to casual users. And no, these days Linux is not about freedom, but libertinage.
I believed the same thing, but Ubuntu has lost it too. First, Ubuntu Client and Ubuntu Server, as if there were no Linux server distros already. Now, *buntus.
And I still believe something can be done to unify this thing, before it becomes too unmanageable.
No, what I am saying here is easy to see: the fact that you can run some Windows apps on Linux distros, whichever mechanism is used, does not make Linux more accepted on the market; in fact, quite the opposite.
Well, there might be many things to do in that department. But with a fresh install, Windows XP launches faster.
“Don’t disagree with you here, but don’t see how that’s relevant to this discussion either.”
Of course it is not relevant to the discussion. But it was an answer to some other comment. Sorry if I mixed things up.
Completely agree. I wanted to install Linux on my PS3. YDL is the de facto standard, but I saw Ubuntu had a version too. I went to download it, and there are 3 *ubuntus for the PS3. There are two websites for PS3 Ubuntu: one from ubuntu.org and another with the same color scheme that looks like an official Ubuntu website. Each has a different version for the latest release.
I can spend time and figure things out, but it is utterly ridiculous to expect the average user to even understand it.
I have seen this same issue for 11 years (I used Linux from 1998, gave it up in 2003, and went to OS X for my desktop). It has gotten worse.
Agree. I gave my wife the option of running XP in a VM on my Macbook pro since she needed it for work. She wouldn’t hear of it. Too complicated. Had to buy her a netbook with XP.
VMs and Wine are not for average users no matter what base OS is used. Most people I talk to have no clue what virtualization even means. Explaining it to them just gets blank stares.
Well, thanks for backing me up.
I have to say too that I keep using OS X more and more these days on the desktop. I have also given up promoting Linux for the desktop.
I wish something could be done, but if I look at how this thread has evolved, I have to concur: there is nothing to be done.
What a lot of these points boil down to is that Linux seems to have very, very few advocates for the regular Joe user who just wants to do stuff on his PC without delving under the bonnet or going geek. And if too many developers go hardcore on you, it’s go geek or go elsewhere, because you end up investing too much time in administering your PC as distinct from doing stuff with it.
On the other hand, Microsoft went entirely over to “Marketing Rulez” and the result was Vista. Neither extreme works, so one needs a balance. Clearly, a very hard thing to achieve, but I hope the Linux community tries harder to achieve it. Distros shipping PulseAudio before it was remotely ready was an example of getting the balance wrong, imho.
This article also makes me wonder whether open source (free in both senses) may not eventually turn out to be a dead end. I very much hope not, but pendulums can swing violently and unexpectedly for reasons that may never be understood. It doesn’t take that many people to decide “It may be free but it is just not good enough and my time means more to me than this” for there to be trouble on the desktop. Linux on the desktop has a small enough share for small swings in popularity to make a big difference.
But back to Microsoft. There are rumours that they may be pricing Windows 7 high. And if Microsoft really is taking aim at both its feet in this way, then I guess Linux can breathe easier.
Sorry, I don’t buy that anymore. My computer runs Linux and my girlfriend’s runs WinXP SP3. Want to bet which one needs constant maintenance? Sure, sure, pros can run Windows without any kind of AV software, but isn’t that “going geek” too?
Interesting points. I’m a multi-OS user. With my Linux hat on, I don’t care much for these arguments, because I use it anyway for my needs. I enjoy the freedom it offers. On the server side, Windows is very expensive, to the point of being cost-prohibitive. I always thought MS should keep a basic OS and charge for option packs; that way, if people want the avalanche of crapola, like a search puppy that does tricks, they can pay for it. Instead, they use their monopoly to force users to pay the upgrade tax. Under Linux and other open source, there’s none of that nonsense. Good open source projects get used and grow in popularity and support; others dwindle and die in their own right, at the whim of the community rather than a company squeezing people for money for things they don’t want.
Linux has been on my desktop since 2004, and now you tell me it isn’t ready for the desktop? Thanks, Thom, for bringing us this great and not overdone article. My eyes have been opened for the first time: I have been using something not ready for the desktop for more than 5 years, GOSH.
I should switch now, but where should I go? Well, that leaves me with FreeBSD as the next best OS for the desktop.
Yeah, right… FreeBSD is a server operating system… You just sound like a Linux geek; they just do not know what a desktop operating system is or needs to do.
Maybe that’s why Linux is free, and it is only used on servers. Desktop, less than 1% market share… Oops, sorry, this year Linux got 1%, spread across 300 distros.
I would argue that, between the two, FreeBSD is a better desktop OS than Linux: a unified code base with a specific set of core programs that leaves a functional system. Different from the Linux philosophy, where Linux is the kernel ONLY and each distro slaps stuff on top of it in a different fashion. The thing Linux has going for it is commercial support, which is rather sad. It would be far, far easier for commercial companies to target FreeBSD: there is only one target to track, and the license is far more permissive. The real problem was that in the early 90’s a lawsuit scared off many potential commercial vendors, and the ball started rolling toward Linux. Unfortunately, it snowballed, and Linux is still rolling.
FreeBSD has better sound support, with virtual hardware mixing built into the kernel. However, it does lack some device drivers. X, the GUI apps, etc. are all identical to Linux, so it really isn’t better or worse there. In my experience, BSD performs far better than Linux under system load when running X. Linux tends to “freeze” up when under load, even when trying to log in to an SSH terminal. FreeBSD allows for better logins and end-user performance when under heavy load.
All in all, if Adobe released Flash for BSD I would use it as a primary system in a heartbeat. nVidia driver support for x64 should be coming in the fall 8.0 release as well, so that’s one less reason to run Linux.
As someone stated above, Linux / BSD let you do what you want to do with a computer. Windows lets you do what it thinks you want to do. Of course, it’s also very easy to totally destroy your system in Linux / BSD due to this, but in the end it’s a better way of doing things and I would pick Linux / BSD over Windows as a primary desktop any day.
Well, I have to agree with your arguments. FreeBSD is less of a mess than the Linux distros. But FreeBSD is aimed at the server, not the desktop… PC-BSD would be better for the desktop, in my opinion.
Believe as you will, even if they are blatantly paid-for “statistics”.
In the real world, Linux’s DESKTOP Marketshare steadily rises
and will continue to do so.
http://www.w3schools.com/browsers/browsers_os.asp
http://news.zdnet.com/2100-3513_22-140314.html
^BOTH Independent sources agree that Linux DESKTOP[/laptop] Marketshare was ~3% BEFORE the netbook explosion.
Thank God every single brain dead anti-Linux post on OSNews shows up in Google News as a separate “article” or I would have missed the parent!
This site should be nuked from orbit.
Yes, it’s not ready.
On the other hand, I have been using Slackware Linux on my desktop all the time since 2002 (with FreeBSD on a spare machine for a while). Before 2002 we ran the whole office on Red Hat Linux; some of us were using Debian as well. So, it depends what you’re looking for. I must say that my Slackware box feels like a blast compared to the same machine with Vista on board. This was the main reason I removed Vista from my notebook and… installed Slackware. To be honest, this notebook is shared with my wife. She also prefers Slackware: everything works fine, she can use Firefox and Thunderbird, she has Flash support for modern sites and YouTube, and she can write documents in OO.org Writer and track the home budget and expenses in OO.org Calc. I have GCC, Java, a lot of tools, and the great VirtualBox for development and tests.
So, is it ready? For me (I think us) – yes.
You are not a typical computer user but an advanced one, compared to most folks. You need to think your way into the mindset of someone who won’t mess with configuration files or a console, doesn’t really know what a firewall is, isn’t prepared to learn the Linux file system conventions like /etc or /var and wants to know why Linux will not run their favourite Microsoft-platform games and does not come pre-installed on their shop-bought PC. That is where the broad mass of the market lies. One of the difficulties Linux has is that its advocates constantly misread the market. This returns Linux to where it has always been on the desktop and where it is likely to stay, imho: among those who want to run it and are therefore prepared to make the effort. Most folks aren’t interested in either.
OK… Thanks to your reply, I’ve just realized some additional aspects I totally forgot: computer games, and all the “plug and play”, “drag and drop”, “fire and forget” technologies of our times. Indeed, typical users really don’t want to go into details. And the problem is that GNU/Linux advocates are seeing everything around this operating system from the wrong perspective. In fact, I made a similar mistake. GNU/Linux covers all my needs, and it is in fact suitable for all of my wife’s needs too, but… she is quite educated by now; living in the same house with a geek for some years will do that. Moreover, we are not playing games on the computer at all. For that we have a console, which is in use from time to time. This is a difference in thinking: our point of view is that the computer is a tool, suitable for performing your work and helping you in daily activities. If you want games, you should buy a console (no installation troubles, dependencies, etc.) 🙂 Anyway, thank you, you helped me realize what the needs of a typical user are.
I have been using it since 1994… Is it ready? No. Unless you assume users are geeks.
Besides, Firefox, OO.org, Thunderbird, Flash, Java… are not Linux. All are available for Windows and Mac OS X as well. OO.org on the Mac, for example, sets the bar a whole lot higher than on Linux.
OO.org is so easy to install and remove on Mac OS X (drag and drop to the hard drive) that Linux should be ashamed of its package manager. On Mac OS X it just does not need installing. Linux and its package manager are a nice solution for the way Linux installs things, copying files into different folders all over the hard drive, but it should not have taken that path to begin with.
Why does Linux not use self-contained applications? Because it is not a desktop OS designed for the mass market.
OMG. Where to start?
OK … if you want to “drag and drop” to install an application on the Mac … where exactly are you dragging it from? Answer … from where you downloaded it to (ie. you are moving things in different folders all over the hard drive). How did you drag and drop it? With the Mac desktop manager / file manager. How did you download it? With a web browser, presumably. How did you find it, in order to download it? With a web search site (Google perhaps), apparently.
The one program on Linux, called the package manager, does all that for you … it allows you to search for a program, download it and install it in one operation. You don’t need a file manager nor a web browser.
OO.org is so easy to install on Linux. You go to the “Add/remove software” (package manager) application (just as you do to install anything else), you use the package manager search to find the OpenOffice package, you select it for installation, and you click “Apply”. That is it. All downloaded, unpacked and installed, including any dependencies, and entries placed on the menus for you, ready to go. Done.
On the Mac, you need to do all of this in a web browser, where there is no one place to get everything from, so you have to do quite a bit of searching before you can get to a correct download link. And once you have clicked the link, the application is only downloaded; it is still not installed yet.
By understanding that Package manager and installation process are two different things.
Nice with the package manager, but it is not what I am discussing.
I know that, that is not what I am talking about.
No, it is not easy to install. The install process is really horrible. Open the package manager window and tell it to show the process verbosely.
You are completely confusing things… Completely. You are confusing a Package Manager repository with the installation process per se.
Package manager is one thing and installation process is another thing. Package manager was designed to help you install things, because in Linux, you need to install Applications. Everything needs to be installed.
What does “install” mean? It means placing these parts of the application in this folder and those parts in that folder, placing this symlink in another folder, running tasks in the shell to create symlinks and link symlinks, downloading needed libraries, and so on… The whole process is terrible, and uninstalling an application by hand is a nightmare; it is like a LEGO set that fell into a garden.
The package manager makes it easy for the user to get things over the internet and automates the process of installing, but this is a workaround for a procedure that should not be there to begin with.
Applications do not need to be installed on Mac OS X. Mac applications are self-contained, which gets rid of the installation/uninstallation process. How you get the file you want is another matter: you can bring it from the web, or on a CD, or a pen drive… It does not matter.
And while having a common repository of packages is useful sometimes, it limits the distribution of software on media that are not in the distribution’s default repository. It works for a server, but not for retailing software… And users like to buy things in stores. Putting installers on CDs is another problem.
In case you still do not see it: in Linux, the package manager does not get rid of the installation process, it just hides it. When the package manager fails mid-installation and you have to resolve a conflict manually… welcome to my world.
Yes, it is that easy to install. You check it off and click Apply. If you wish to see the details of what’s happening behind the scenes, you can, but you don’t have to.
You do not understand me. I am talking about the install process itself, not whether you have to push a button or two because there is an app that tries to automate things. A process as complex as the one Linux uses to install even the most trivial apps is terrible. If something goes wrong it is a mess, and it does go wrong… you just need a faulty Internet connection, for example.
At the same time, the developer has to work a lot to make the installer do what it has to do. The developer has to deal with installers.
Besides, there are a lot of dependencies that have to be resolved. Other OSes, like Mac OS X, do not have that. If something goes wrong when copying a new app to the hard disk, a normal user can repair the thing: just trash the faulty app and replace it with a new one. Self-contained apps.
It is not the same in Linux. The whole install idea is a terrible approach for casual users. Even for me, an experienced Linux administrator, when the package manager fails (faulty connection, servers busy, servers updating), it is a pain to get the system back on its feet. I cannot imagine what a Windows or Mac user would do.
And the thing gets worse when you try to install something that is not in the package manager. You need to authenticate, grant execution permission, and start an installer, and it starts asking question after question. Do you want feature x, y, z, c? Do you have a C compiler? Where is it? Do you want to start experimental feature wuo? Do you want smooth fonts? Do you want support for hyug.lib?
Desktop users… Never with that approach.
FUD. When you use a package manager, it checks the integrity of everything it downloads. Any error on download … no installation happens.
FUD pure and simple. Disinformation.
The developer does NOT have to do a lot of work to utilise the repositories. Firstly, that task is normally done by the repository maintainers, not the code developers. Secondly, it normally involves just answering a few questions about the package, and then compiling it.
http://www.gnu.org/software/autoconf/
http://www.openismus.com/documents/linux/automake/automake.shtml
If you are going to pointlessly TRY to criticise Linux, then please at least try to keep to the actual truth.
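To make the integrity point concrete, here is a minimal sketch (plain POSIX shell plus coreutils’ `sha256sum`; the file names are made up for illustration) of the kind of check a package manager runs on every download before anything gets installed:

```shell
#!/bin/sh
# Sketch of a download integrity check, the same idea package managers
# apply to every fetched package. File names here are hypothetical.
set -e
work=$(mktemp -d)
cd "$work"

# The repository publishes a checksum alongside each package.
echo "pretend package contents" > hello_1.0.deb
sha256sum hello_1.0.deb > hello_1.0.deb.sha256

# After "downloading", verify before installing: a corrupted or
# truncated file fails this check and never reaches the install step.
if sha256sum -c hello_1.0.deb.sha256 >/dev/null 2>&1; then
    echo "checksum OK: safe to install"
else
    echo "checksum FAILED: aborting"
fi
```

Append a single byte to the `.deb` and the same check takes the failure branch instead, which is why a flaky connection produces a refused install rather than a half-installed package.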
Have you ever seen the application copy process on Mac OS X… with no installation? Then you will understand how terrible the installation process on Linux is. It is completely unnatural.
No wonder Linux is not used by anyone, even though it is FREE.
Imagine if you had to pay for it…
But he was referring precisely to what happens behind the scenes, because what happens behind the scenes is what a user would have to do by hand if there weren’t a package manager available, and really did do by hand before the advent of apt-get and the like.
And the fact that a task that on Windows and OS X can still be done by hand by the end user (unzip the program files, create a shortcut and place it on the desktop, right-click -> “merge key” in the rare case of a needed .reg(istry) file, and you’re ready to go) needs a frontend on Linux (a CLI one, but a package manager is nevertheless a frontend) to cover its inherent complexity and tediousness, is the result of the OS’s somewhat anachronistic and not very user-centric design.
After all, the Unix filesystem was made case sensitive to avoid putting case checking and conversion code in the filesystem implementation; analogously, the bin / lib / usr / etc (etc??) structure allowed the system’s program loader to get by with a less complex design and implementation than if libraries could reside in arbitrary paths. In both cases, the priority was ease for the system and its developers over ease for the end user, who is supposed to put in some effort to understand and conform to his tool, instead of vice versa.
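As a rough illustration of that by-hand process, here is a sketch (POSIX shell; all names and paths are a made-up example) of the minimum bookkeeping an FHS-style scattered install forces on you: a manifest recording where every file went, so that uninstalling is just replaying the list. This record-keeping is essentially what tools like dpkg and rpm automate:

```shell
#!/bin/sh
# Sketch: manifest-based install/uninstall, i.e. the bookkeeping a
# package manager automates. All names and paths are hypothetical.
set -e
prefix=$(mktemp -d)              # stand-in for /usr
manifest="$prefix/hello.files"

# "Install": scatter the app FHS-style and record every file written.
mkdir -p "$prefix/bin" "$prefix/lib" "$prefix/share/doc/hello"
for f in bin/hello lib/libhello.so share/doc/hello/README; do
    echo "demo content" > "$prefix/$f"
    echo "$prefix/$f" >> "$manifest"
done

# "Uninstall": remove exactly the files the manifest lists.
while read -r path; do
    rm -f "$path"
done < "$manifest"
rm -f "$manifest"
```

Without the manifest, uninstalling by hand means hunting through bin, lib and share for stray files, which is the part of the design that makes a frontend feel mandatory.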
And copy dll files to %windows%\system32.
And add the app to “add/remove programs”.
And create an entry in the start menu.
Yeah, easy as pie for any average user.
Have you never, ever, installed a self-contained program on Linux through its own graphical installer? Or just extracted a compressed file into a directory?
It can be done. It’s being done already with commercial software.
There’s nothing stopping anyone from doing so, packages just happen to be the preferred way.
When installing a user-level application (say, an image viewer), …/system or …/system32 shouldn’t be touched at all.
And I don’t mean they occasionally aren’t; I mean there’s no valid reason for an application to fiddle with the system platform (especially since the libraries that get installed alongside applications are often things like gdiplus.dll, which everyone needs, so they end up either overwritten or replicated in a hundred places).
The entry in Add/Remove Programs is just a link to the uninstaller (in its turn installed during setup).
But since I was talking about the installation process (which an installer or package manager “wraps”) as if the installer didn’t exist, and since the uninstallation process comprises the reverse steps (removing files, removing registry keys and shortcuts, etc.), the uninstaller, and therefore the Add/Remove entry, is not relevant here.
That’s an optional step (mind you, I don’t even use the Start menu at all; I have all my application programs unzipped directly under their own directories on one of the HDD’s dozen partitions, one for each task: graphics, video, development), but I don’t think it’s difficult at all to do.
it actually is 😉
Of course, because doing otherwise would require integrating with distributions’ repositories and package systems (which in turn would require either assisting distribution packagers with the source to the application, OR directly packaging, then testing and supporting, for all distributions), something that for commercial software is not always feasible or convenient.
Of course, because in the current model that distributions have created, packages are the lesser evil: they allow installing software that hopefully works, while avoiding needless on-disk code duplication and with relative convenience for the user (I find the “app store” list-type interface model quite inconvenient; you may think otherwise, so it’s a subjective matter).
With other means, the application:
– either would not be guaranteed to work, unless on a certified distribution (having a specified runtime configuration);
– or would have to bundle all its dependencies, with all the drawbacks that entails;
– but, as things currently stand, it cannot run on ALL distributions from a SINGLE deployed image AND avoid supplying its own copies of third-party and system libraries at the same time.
And removing the registry key is done… how? Are you providing a script, or would the user be supposed to launch regedit and mess with that?
Ask an average user to do that.
Well, no.
Since you are providing either a) a binary installer or b) the whole application contained in a compressed file, distribution packagers only have to put that in a package along with a script that a) launches the installer or b) extracts the app files.
Actually, if you were to install a commercial app from a CD or DVD, a metapackage would be enough.
“Hopefully works”? Come on.
Who did ever say that commercial applications had to support every distro out there?
Take HP Openview: agents are supported on redhat and suse (afaik, although I’ve got them to run on other distros without problems). It’s up to you if you want to run an agent on a different distribution (dependencies are listed in the documentation).
Why would they have to (or want to) support say, gobolinux?
The Windows registry is not the path to follow either; the Windows registry is a mess too. Do not copy wrong ideas; copy good ideas and improve them, or create better ideas.
And stop with the meta-package stuff; it is just a fancy name for an installer.
Ask yourself: why do I need to “install” every little app on Linux?
Because it is so difficult to do by hand that installers are needed.
And no, running the installer by hand is still running an installer. The package manager finds the stuff and starts the installer.
Sigh!
To actually perform an installation of a package on a Linux system, getting the contents out of a package and putting them where they are supposed to go is no more complicated than extracting a zip file. In fact, some package managers, such as pacman, actually use plain archive files as the packages.
http://en.wikipedia.org/wiki/Pacman_(package_manager)
A bunch of files get put in a few different directories. That is it.
It is the OTHER bits of package managers that are the clever bits. These involve:
(1) keeping track of what packages are already installed,
(2) downloading new packages and checking their integrity after receipt,
(3) determining what that package depends on, and comparing to what is already installed,
(4) fetching other packages to fulfill dependencies
(5) unpacking all the packages,
(6) keeping track of what file went where, and what package it belongs to, and
(7) updating the list of what packages are installed.
Only (5) is actually installing software. All of the rest is simply management, record keeping, and data reduction (ie. if more than one package uses a given library, the system does not need more than one copy of that library installed).
One doesn’t actually have to use the package manager. One can always download a stand-alone package and then install it afterwards, à la the methods more commonly used on Mac and Windows.
Download a package from a site like getdeb:
http://www.getdeb.net/
… then use either dpkg (command line) or gdebi (GUI) to install it.
http://en.wikipedia.org/wiki/Gdebi
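The “a package is basically an archive” point is easy to demonstrate. Here is a minimal sketch (POSIX shell and tar; the package name and contents are made up) that builds a toy package and then “installs” it by unpacking into a destination root, which is all that step (5) above amounts to:

```shell
#!/bin/sh
# Sketch: a package as a plain archive (the pacman model). "Installing"
# is just unpacking it into a root directory. All names are made up.
set -e
work=$(mktemp -d)
cd "$work"

# Build a toy package with an FHS-style layout.
mkdir -p pkgroot/usr/bin pkgroot/usr/share/doc/hello
printf '#!/bin/sh\necho hello\n' > pkgroot/usr/bin/hello
chmod +x pkgroot/usr/bin/hello
echo "toy docs" > pkgroot/usr/share/doc/hello/README
tar -czf hello-1.0.pkg.tar.gz -C pkgroot usr

# "Install" into a destination root (a stand-in for /).
mkdir destroot
tar -xzf hello-1.0.pkg.tar.gz -C destroot
destroot/usr/bin/hello    # runs the freshly "installed" program
```

Everything else a real package manager adds (dependency resolution, the installed-files database, integrity checks) is the management layer listed above, not the install itself.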
Have you seen how it is done on Mac OS X?
You are just describing how to run an installer… Get rid of the installer. Why are people so stubborn about admitting that some other OS does it better?
The package manager is one thing; the install process is a mess in Linux. Try it without an installer. Not running the installer by hand: try to do it without an installer at all.
Drag and Drop to Install is just pure laziness – not good design or good framework. It leads to a file permissions clusterbang and a wasteland of Apple certified “technicians” who have no clue how to fix it.
`chmod -R +rwx` is again pure laziness that hides the real problem.
Need to open Firefox in safe mode or the Firefox Profile manager in Linux?
No problem – either in the Run Dialog or any terminal emulator of your choice – run `firefox -safe-mode` or `firefox -P` respectively.
On a Mac? GOOD LUCK from within that B0RK’D Environment:
`/Applications/Firefox.app/Contents/MacOS/firefox -safe-mode`
Holy Crap – always typing the full PATH is so much nicer,
computing hasn’t been that much fun since the 70’s!
But drag and drop to install is well worth that price, right?
except that it’s not the case for all Apps, is it?
especially Apps straight from Apple – they have *.pkg installers with a workflow similar to *.exe installers on Win98, don’t they?
Gotta stick that EULA in there somehow.
2(or more) different ways to install software?
That doesn’t sound very “Desktop Ready” to me.
Let’s get some more _realism_ here –
Steps to get OO.org on a Mac:
1. pray for Intel hardware
2. go the site
3. begin the download
…blah, blah, blah
– How 90’s and what a waste of time.
Steps to get OO.org on Ubuntu:
1. you’re done already – “just works” out of the box.
What about a Desktop Ready, user-friendly sync frontend?
Mac: IDK, even if you can find a decent one –
Will it be available for PPC-based Macs, or just the newer Intels?
Will it be available for Panther, Tiger and/or Leopard?
Lions and 10.4 and Bears – Oh My!!
Will it be “resource fork” aware? – whatever the hell that means.
Ubuntu: `sudo apt-get install grsync`
wow, a whopping 68 KB *automagic* download –
you have to wait all of 400 milliseconds –
no big fuss, no obnoxious PR – “just works”
and you have powerful *GUI* App that supports
multiple profiles, local file sync, remote file sync,
*secure* sync; *secure*, there’s another thing that
Macs couldn’t understand even if it dropped a house on them.
Anyone out there need to use Java 6 on Mac 10.3? SOL and JWF!
Maybe it would be easy for Linux to follow Microsoft and Apple’s lead to the vaporous goal of “Desktop Ready” – all I can see that it takes is funny[admittedly] TV spots and utter contempt for users.
That’s a different matter. Apple coding could be buggy or wrong… What we are discussing here is user friendliness.
And yeah, all users run Firefox in safe mode. Do not confuse an idea with its implementation.
Not all apps can be installed by drag and drop. Most of the apps that are installed with installers are ported applications from other operating systems.
iTunes, for example, is ported from Macintosh to Mac OS X (Two completely different OSes). Same with many other legacy apps, even ported from Linux, UNIX or whatever.
And of course, a system update needs to mess with the configuration, so it needs an installer… But a normal app? No. It should not need an installer, unless you take a terrible approach like Linux’s, where even the simplest app is spread all over your file system.
You are again confusing things… Getting the app is not the same as installing the app. Why is it so difficult to understand?
While the package manager is nice for finding apps, the install process is terrible.
That is not true. Not all Linux distributions have OpenOffice installed… Xubuntu, for example.
My gosh, you are confusing so many things that it is almost pointless to answer these comments. OSNews is becoming a fan site, not a technical one. And linking articles on Slashdot brings fans, and fans do not think for themselves, no matter whether you are a Linux fan, a Mac zealot, a Windows troll, whatever.
What does Java have to do with the install process? What does PPC have to do with the install process? How does having Java make the install process better?
Sorry, you’ve obviously missed the boat somewhere.
People who actually have to USE computers for serious tasks
NEED executables grouped together for easy launching,
documentation grouped together for easy reading,
and shared libraries where they can be – DUH – shared.
For the most part, so called “Self Contained” binary software packages are just an Abortion of good software development.
Yeah, right… 99% of the world does not use Linux; they use Mac or Windows… But no, Linux users are the only real computer users.
You must be out of your mind. Get real!
And it is not just me… Look at PC-BSD. They are trying to address Linux’s shortcomings, but if you keep telling yourself that you are the cool guy on the block, sorry, I cannot compete with that.
Quite a few of his points are due to the use of C/C++.
Unfortunately many Linux developers are still using these languages for development and many problems result from this.
My advice (as a long-time Linux developer of applications for Linux) is to drop the C/C++ for the majority of applications and use Java instead. Mono is also an option but Java is ubiquitous and very, very fast these days (especially Java 1.6.13+ and things like JOGL).
Ok, so maybe 10% of apps aren’t suitable for building in Java (missing features, or platform-specific stuff), but the majority are. Plus you get the added bonus of being able to deploy (and even *sell*) on Windows.
The remaining Linux issues mentioned (sound!) wouldn’t be solved by the use of newer programming languages. The only thing that will fix that is a consistent vision and a lot of hard work.
No thank you! I don’t want Java garbage anywhere near my system.
Well, that’s hardly a reasoned response. I’m guessing you heard something from some other uninformed person several years ago and have never questioned your assumptions since.
When was the last time you developed a large application yourself (at least 50000 lines), or as part of a large team on even bigger projects? I’m guessing you are either not a developer or don’t work on large projects that are meant to be used (and maintained) by other less-skilled people. How about writing multi-threaded distributed applications to utilise multiple cores? If you haven’t then your unreasoned prejudice against Java is severely misplaced. Even Richard Stallman now approves of its use.
OK, you might think I chose Java because I’m not l33t enough to use C++. In fact, I’ve been cutting code since learning BASIC in the 80’s. I have nearly 20 years of cross-platform C and C++, from enterprise apps down to device drivers for scientific instruments and embedded systems. I got thoroughly tired of dealing with the platform differences and the stupid incompatible changes between C++ libraries (and even between minor revisions of the same library). Hell, several years ago Microsoft stated they would never truly support Standard C++, and .NET C++ is an unloved child that really wants to be C#. I’ve done Fortran and scripting and all sorts of stuff in between.
In the end I settled on Java since it is productive enough for my purposes and can be reused in many application domains, from small single-board systems to the large banks I’ve worked with who have distributed systems. These days it is also Free Software.
So, if you have reasoned arguments as to why Linux developers couldn’t solve some of the issues raised in the original article using Java then I’m happy to hear them. Otherwise, perhaps it might be productive for you to examine why your knowledge is so poor on the subject of Java on Linux.
I’m not sure… Java is definitely an option, although for some reason I would rather prefer Mono. Still, it would be great to have alternative environments.
And you have a bunch of other tools in GNU/Linux (easily available): Fortran77, Ada (GNAT). This is what I like about GNU/Linux.
To be more precise – we have FreePascal with Lazarus, which is very impressive and interesting couple of tools. In my honest opinion, for many small projects it would suit better than Java.
Oh, yes… And we have all these scripting languages, with state-of-the-art Python (try it; it’s even possible to use it with Java -> Jython ;-), and Perl – we can create miracles when it comes to regular expressions and text processing, and with Perl DBI we also have advanced tools for processing database reports, or at least for data preparation.
There are a lot of animals in this Linux world… And you might be right: we need standards, and we should concentrate on some subset of technologies as far as “desktop development” is concerned. But it wouldn’t be so easy, as the power of Linux resides in its differences and its huge number of options, tools, alternatives, etc. – flexibility given to the end user. I like it. I can choose what I want.
I know why you are talking about Java: because it’s an easy way to standardize this zoo of different technologies. And the idea is good…
… But we will never resign from C/C++. These languages are the basic ones. GNU/Linux as well as all the Unixes were born from C. C++ is a reasonable continuation for many C developers. And it’s not so bad for end-user as long as the application is well written. To be honest, there is no problem to compile C or C++ source by hand if the sources are properly distributed with the proper configuration script. Moreover, in the majority of cases this way is the best one to get the software installed correctly – but yes, it means this is not an option for standard user.
In my opinion, GNU/Linux is for people who have enough computer literacy to make smart decisions and smart choices. The rest should work with Windows – they will never be happy with Linux. Even if we had 90% of software written in Java or C# (on Mono), standardized across most of the distros (if that’s possible), there would still be a huge group of people who prefer the standard “configure/make/make install” approach. They would be creating alternatives all the time.
GNU/Linux is flexible, yet inconsistent. That’s why I like it 🙂 You can do a lot, but you cannot think that 90% of work would be done by a wizard.
Imagine how many distros we have – we cannot make “Linux” ready for the desktop. “Linux” is too general a term. Almost as general as “operating system” (and in fact “Linux” means only a kernel, but I don’t want to go into those details). The question is badly posed, and we will not receive a proper answer.
Maybe, the correct question is:
“Do we have at least one GNU/Linux distribution ready for the desktop, to the same degree that we have plenty of distros ready for servers – like CentOS, an exact copy of RHEL, an industry standard?”
Let me think: Ubuntu? Which *buntu? 😉
Alright, for me, the distro of choice is Slackware. OK, it’s not ready for all desktops, but at least it suits my desktop (or “workshop”) very well.
Edited 2009-05-18 22:42 UTC
Good reply. I agree with you that choice is indeed good.
However, the use of C and C++ for the bulk of applications is part of the problem not the solution. It is just too hard to maintain these applications across different flavours of Linux and libc versions. Why not let Sun/Oracle do the bulk of the work for you?
For example, people whinge about Gtk+ and Qt not cutting it, yet they have no experience with Swing, which is perfectly good for most basic application uses (although it does miss some things, and admittedly doesn’t have perfect desktop integration). It is pretty hard to beat Java2D for cross-platform fonts and compositing graphics (particularly now that it is implemented as hardware shaders using Direct3D and OpenGL).
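As a concrete illustration of the Java2D point, here is a minimal, hedged sketch (the class name `Java2DDemo` and the scene are mine, not from the thread): it renders an antialiased shape off-screen through the standard `java.awt` API, and works even on a headless machine.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class Java2DDemo {
    // Render an antialiased scene into an off-screen buffer.
    // BufferedImage rendering works even without a display (headless).
    static BufferedImage render() {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        g.setColor(Color.WHITE);   // white background
        g.fillRect(0, 0, 100, 100);
        g.setColor(Color.BLUE);    // antialiased blue disc
        g.fillOval(10, 10, 80, 80);
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = render();
        // The centre pixel lies inside the disc, a corner pixel outside it.
        System.out.println(Integer.toHexString(img.getRGB(50, 50) & 0xFFFFFF));
    }
}
```

Whether this actually goes through the Direct3D or OpenGL shader pipelines mentioned above depends on the JVM release and flags (for example `-Dsun.java2d.opengl=true`); the drawing code is identical either way.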
You are right. With small projects plenty of tools are adequate. For large projects with long durations and large teams, the simplicity of Java is actually a benefit: everyone can understand what has been written (although you can indeed “write bad FORTRAN in any language”, including Java).
I used to use Slackware back in the 90’s. After many years I just wanted to get things done without any hassle so I’ve been using Ubuntu for a while. I guess time-constrained users that want “low hassle, just get things done” is what the article is talking about, not developers like ourselves.
Uh, all the Java GUI apps I’ve ever used and coded for look hideous (and I’ve coded a lot of Java for Oracle-related development *in* freaking Oracle JDeveloper, which is an abortion of an interface and slow as hell). You have as many problems with libc versions as with SDK versions in Java (which vendor? Which point release? Which point release update?).
Java is fantastic for server-side backends. A lot of quirks aside that put me off (its verbosity, library mess, ugly use of nested exceptions, NullPointerException, and boilerplate code), it’s probably the best choice when it comes to rapid enterprise development. But I don’t want it anywhere near my desktop, at least not yet. Despite all its progress in speed in recent years, the JVM is no replacement for a well-compiled piece of C/C++ code (and that is setting aside that in C/C++ you have a lot more control over what your program actually does with the memory it allocates, something which is impossible in Java – using the stack instead of allocating everything but the primitive types on the heap, to give a very basic example).
By the way, have you ever coded with the Qt framework? It’s done wonders for C++ development. I feel a lot more comfortable writing in Qt C++ than in Java. Too bad I can’t use it at work.
Well, I can make good-looking Swing interfaces (ever seen a “Filthy Rich Client”? That’s what is possible with Swing) and good-looking GWT interfaces. So the problem might be with the developer’s skill level rather than the library, eh? You are right, the Oracle Java stuff is damn ugly, as is the IBM stuff and the old Sun stuff. That doesn’t mean Java itself is borked; the average corporate developer just often doesn’t have enough time to make things nice.
I have heard of problems with Java library versions. I don’t usually find them in the JDK so much as in things like log4j and the GUIs etc. Still, the consistency of the Java JDK classes and that of the standard C++ libraries is not worth comparing; they are simply megaparsecs apart.
Ummm, you must be using an ancient JVM. Recent versions get to choose where to put things, and they are both very, very fast and much better at deciding where to put things than most humans are (or than compile-time-limited C++).
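The “recent JVMs decide where to put things” claim refers to HotSpot’s escape analysis. A minimal sketch, with illustrative names of my own (`EscapeDemo`, `Point`): since no `Point` outlives its loop iteration, a 2009-era HotSpot 6 run with `-XX:+DoEscapeAnalysis` (the default in later releases) may replace the heap allocation with plain scalars in registers.

```java
public class EscapeDemo {
    // A small value-like class; instances never escape sum().
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Each Point stays local to one loop iteration, so the JIT's escape
    // analysis may elide the allocation entirely (scalar replacement).
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, 2 * i);
            total += p.x + p.y;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(1000)); // same result with or without the optimization
    }
}
```

The point is not that this particular loop is fast, but that the stack-versus-heap decision is made by the runtime from observed behaviour, rather than being fixed at compile time as in C++.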
You are clearly out of date with the capabilities of Java. It is a shame so many developers have stagnated on C++ since Java was admittedly slow until a few years ago. For example: here is an old (2005) example discussing Java 1.4.2 (which was glacial by the standards of 1.6.13+).
http://www.ibm.com/developerworks/java/library/j-jtp09275.html
Today Java is even more efficient (those Sun guys are *good*) – and you don’t have to recompile your code to benefit from it.
One nice thing with Java is you can compile it to pure native code with statically-bound libraries just like your portable assembly, er … C/C++. GCJ has this capability. You get the flexibility to use the speedy Sun and IBM JVMs or GCJ (slower, but better for embedded stuff).
If you are worrying about your memory management then you aren’t busy getting the application functionality going. This is the real curse of C/C++ etc., why so many bugs come out in them, and why they are so slow to develop for. OK, you hate the code style. Understood, but the simplicity is there so other people can understand what you wrote. Have you read much C++ from other folks, or decided it’s not worth it and rewritten it yourself?
I must confess I haven’t used Qt for quite some time. Its meta-object macro rubbish made me reflect on the 90s and the progress in object-oriented systems since then. Borland did a much better job in those days with the Object Windows Library (OWL), which looked like you were manipulating C++ objects. Qt reminds me of the hideous outdated style of MFC and its macro mess (despite the similarity, Qt is much superior).
Sorry, IMHO GUI constructs should look like first class objects, and *only* objects, not some hideous meta-compiled macro mash. I don’t consider C++ and macros to be l33t. Been there, done that, moved on years ago. That style of programming is quaintly old-fashioned and won’t solve Linux’s acceptance problems and the productivity requirements of developers (especially if you are doing big multi-threaded apps).
I’m glad you are happy with C++. Sometime you might want to have another peek at Java (in your own time) and see what it can offer. It takes a while to shake off the C++ mindset and really get into it (ignoring all the Java code out there that still looks like it was written by MFC hacks).
Also, get a decent IDE and ditch JDeveloper. Eclipse and (especially) NetBeans are both excellent and Free Software. IntelliJ is fantastic (and the cost is worth it).
Happy hacking.
I am a Java programmer, but, let’s be honest, the JVM WILL ALWAYS BE WRITTEN IN C++… a lot of commercial code will always be C or C++ based, and all the operating systems in production today are written in C or C++.
You can argue a lot of things, a lot of them reasonable and true, but the truth of truths is that C and C++ will always be there, being the base of the things we think are better.
True. But C and C++ require assembler portions for some stuff; does that mean we should write our applications in assembler because it underpins other stuff?
The discussion topic was the problems with Linux. One point mentioned was that they haven’t been solved with C/C++. I don’t think they ever will be. Anyway, I was suggesting that good developers using Java (like yourself) could make a dent in this.
Edited 2009-05-20 02:08 UTC
I’m not saying it’s not possible to do so. I’m just saying that my experience with GUI Java applications has been less than favorable. As far as I can recall, the only Java desktop application I’ve used on my desktop was JDownloader, and it was terrible in a lot of ways, from desktop integration to outright ugliness. But you might be right – I really can’t know, since I spend most of my Java time developing backend stuff.
Hmm, really? The only things not really standard in the “standard” C++ libraries are Boost and the STL. Java has a mess of different standard libraries depending on the version you work with – and if you really haven’t yet found a client who runs IBM Java 1.4 on their old AIX server, be glad.
Code compiled with the right standard g++ flags (-O2 -fomit-frame-pointer -finline-functions) is, for now, still much faster than Java for most processing. The only things Java does faster than C/C++ at the moment are memory allocation and I/O, which is a fair point, but the rest of the number crunching still goes hands-down to C/C++.
The IBM JVM sucks: http://www.stefankrause.net/wp/?p=9
Sun’s JVM, when compared to C/C++, is not much better, but it’s definitely a better choice than IBM’s. Judging by that article I linked to, the JET JVM seems pretty fast, though; but I haven’t used it, so I really can’t see the difference hands-on.
I never said C++ was faster to develop for. I just said it was better for desktop applications. Sure, some things certainly slow down development time (even though they bring in a lot of benefits), but we’re talking about open source desktop applications, where there isn’t a deadline for delivering a product to a hasty enterprise customer.
Of course, and that’s why Java is better for enterprise-level software, where code maintainability through easy reading and easy development is key. On the desktop, there are other priorities besides those.
I can assure you that, though many of the Qt tricks are based on preprocessor magic, you won’t see any of it except the most useful bits, like QVector&lt;T&gt;. And most of the time you won’t even notice it’s doing that, except when adding the required Q_OBJECT macro to a QObject-derived class. Not too hard, is it?
99.9% is exactly as you stated it: You’re simply manipulating C++ objects.
Of course! I’m not saying Java sucks. I actually kind of enjoy it over pure C++ at work, because it allows me to get work done easier and have less headaches with impossible deadlines. But not on the desktop. The desktop app I develop in my free time is a Qt-based C++ app, and I wouldn’t have it any other way. At least for now.
Oh, how I wish I could do that, but it’s a company standard when interfacing with Oracle applications. It has a lot of good capabilities for that particular niche, but it sucks in everything else.
You too
Edited 2009-05-20 02:50 UTC
You all forgot about… C# 😉
Ok. Happy hacking to you all. Languages… It’s good we have so many, so we can select the proper one for our task.
I’m not familiar with C# at all, since we don’t use it at work and I run Linux at home. But from what I’ve heard, it doesn’t sound like a bad language at all.
As I said before, happy hacking
I couldn’t disagree more.
Quite the contrary: all of the good parts should be rewritten in C or C++ (the former, in my personal opinion). Things like Python and Java are good for prototyping, but their use is already starting to show in the increasing demand for resources. Modern desktop Linux distributions are not really that far from Vista. The days when people could say that (mainstream) Linux runs on old computers are only a memory.
But then again, one serious reason why Linux is not ready for the desktop is that developers keep rewriting everything every year or so, so perhaps it is better indeed to stay with Java, for example. Good C and C++ applications last for decades, which clearly contradicts the current Linux paradigm of “innovation”.
Always be honest to your roots — in Linux’s case to UNIX and C.
Apart from flawed assumptions, the article suffers from being factually incorrect.
To make it short:
0) False
1) False
1.1) False
1.2) False
1.3) False
2.1) False
2.2) False
2.3) False
2.4) False
2.4.1) False indeed!
2.4.2) Cleartype in windows cannot handle bitmapped fonts
2.4.3) False. Default fonts (often) look really good. WTF!? How do you measure taste in an objective manner!?
2.4.3.1) What does that mean? You cannot disable advanced font antialiasing unless you ship the distribution without FreeType2.
2.4.3.2) Many distributions come with really good fonts. They just tend not to be default. But then Tahoma sucks too, and the fonts in Vista are no better. And what does “Windows compatible fonts” mean?
Aaargghh… Why bother? The entire “article” is based on unsubstantiated claims, flawed assumptions and is by and large factually incorrect.
I’m amazed. This has to be the most ridiculous article ever posted on OSnews… it has:
– *highly* subjective opinions
– technical errors
– unfair comparisons
– “I want this to be as in Windows” nonsense
– general problems which affect Windows just the same
– many things that just don’t *matter* for the desktop experience
I could go into details, but I would need to quote, which would probably be against the copyright note…
Thom, I really don’t see why you think this article is “different”. It’s just silly FUD.
Linux is a victim of divide and conquer. I am sure that if it were all unified, there would be no MS.
Linux is not a victim, and division is not bad when it makes sense.
That’s the nature of Open Source, and the reason why we have choice in the first place.
The sad thing in my eyes is that it’s all there. Nobody is really interested in putting it together or developing it into something real, but basically it’s there.
Take a look at Etoile. They are on their way to developing a complete object/document-based modular desktop environment that would fit the open source development model perfectly. You only have to look at how far they have come with 3-4 people.
Then there’s PC-BSD’s PBI system, which isn’t the perfect solution but is, in my eyes, way superior to what we have now.
Then there’s the idea of GoboLinux, with their simplified file system hierarchy and hiding/linking directories like /usr (though I don’t like their packaging).
And finally, the most incomplete of these is, I think, MINIX 3. Sadly so, but with good roots.
All the pieces are there, but I’m not able to put a distro together… :/
Greetings
Good point.
One has to wonder why all these guys can’t just get together and make something great. And I’m one of those that are about to start yet another OS project.
I can only speak for myself when I say that although a huge part of the OS is technical, I think aesthetics plays an important role. If you don’t like the name or the look of an OS, then it isn’t tempting to work with for the next fifteen years.
I value good typography, timing, redrawing, and a self-describing API. Although Apple doesn’t do everything right, they’ve made me very picky and aware of details in the UI.
But most of all I just want to forget about it all and use my computer for what I want to do; for instance, to record a song that I have on my mind. If I get the impression that the guys behind an OS are nerds only interested in useless benchmarks that don’t translate to real usage – then I’m out.
These are just some examples, but I think there’s a psychological side to it.
Edited 2009-05-18 23:14 UTC
The fundamental problem with Linux is that, since most developers aren’t paid, there’s nothing forcing them to pay attention to the “little things”, spit and polish. People scratch their itch but don’t take the time to tie up loose ends and file down the edges. (Too many metaphors?)
Another problem is a lack of knowledge of or interest in cognitive engineering. Rather than adapting software to the user, Linux developers seem to have the attitude that users should adapt to them. Even the most basic, well-established tenets of HCI and UI design are completely disregarded. I’ve had it happen time and time again where I point out some UI element that is hard to use or flawed in obvious ways, just to be blown off by developers who JUST DON’T CARE. It’s not that they have a counter argument. They’re just not interested.
So when it comes down to it, Linux is not going to dominate the desktop because the developers don’t want it to. See, it’s not the developers who complain about Linux not dominating the desktop. (I develop open hardware, BTW.) It’s the users who do all the complaining. Every now and then, some users become developers, and some small proportion of the time, those users have enough insight into psychology to design something usable. But that isn’t the norm.
In Linux, just about everything user-oriented is an afterthought. And the unfriendliness isn’t limited to the GUIs. Even the command-line stuff is hostile. Every app has its own unique syntax for its configuration file, which makes it impossible to do automatic merges between an upgrade’s default config and your changes to the earlier version. Every app is spread across the file system, with things in /bin, /etc, and /usr. This requires complex (and often buggy) package managers that wouldn’t be necessary if apps were mostly self-contained (config files and system libraries notwithstanding).
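The merge complaint is concrete: with one uniform key-value syntax, merging an upgrade’s new defaults with a user’s surviving overrides would be mechanical. A minimal sketch in Java (the class name `ConfigMerge` and the keys are mine, purely illustrative), using `java.util.Properties` as a stand-in for such a uniform format:

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigMerge {
    // Merge an upgrade's shipped defaults with the user's overrides:
    // Properties supports a fallback table, so lookups prefer the
    // user's values and fall back to the new defaults.
    static Properties merge(String newDefaults, String userOverrides) throws Exception {
        Properties defaults = new Properties();
        defaults.load(new StringReader(newDefaults));
        Properties merged = new Properties(defaults); // fallback layer
        merged.load(new StringReader(userOverrides)); // user values win
        return merged;
    }

    public static void main(String[] args) throws Exception {
        Properties cfg = merge("port=80\ntimeout=30\n", // shipped by the upgrade
                               "port=8080\n");          // kept from the user
        System.out.println(cfg.getProperty("port"));    // the user's override
        System.out.println(cfg.getProperty("timeout")); // the new default
    }
}
```

Because every app in this complaint invents its own syntax, no such generic merge is possible across the system; that is the commenter’s point.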
Inertia is a problem. Developers, even the big ones like Fedora and Ubuntu, just don’t have the cojones to change things that are fundamentally broken. But even with the cojones, that wouldn’t help much. If someone patched Apache with the ability to read XML config files, the upstream developers would probably reject it.
Everyone’s in their own little world, with little interest in how their stuff might interact with someone else’s or how things might fit together into a coherent working distribution.
And there are plenty of developers who are actively hostile. You’ll recall the Red Hat employee who maintains glibc, and how his dereliction of duty recently came to light. Or how about the maintainer of KDE’s clipboard manager, Klipper, which locks the clipboard and breaks so many apps? Another non-team-player.
People talk about the FOSS “community”. Sometimes, I think that’s a joke. Everyone’s out for themselves. Everyone wants something for free. Few people are interested in contributing of their own time, and those that do want to do it in their own quirky way. Richard Stallman may be passionate about Free Software and the rights of users to modify the software on their own computers. But most of the users just don’t want to spend money.
Ok, there’s a fair amount of hyperbole in this comment, but you get the point. It’s like 5% or less of people in the “community” are contributors, and a small fraction of them actually see what they’re doing as a contribution to the growth of the community. A community requires compromise, cooperation, consideration of others, and vision. And there just isn’t enough of any of that.
Exactly… (Sorry, I cannot vote since I have no votes left.) Someone with common sense. Very insightful.
I would like to add: the main problem with Linux is fragmentation… Too many liberties, everyone pulling in their own direction, no common goal.
Yes that is it. There is no common goal.
Edited 2009-05-19 01:28 UTC
Well, I do not completely agree, while a lot of it is true.
Linux is maturing, and more and more apps are aiming at usability. As for the file system, I think there are good reasons to do it that way. One should not care where the application is located if the package manager does all the work.
As for the developer comment, yes, a good portion of developers do whatever the f–k they want, and, well, let’s face it, in most cases… they have the right to do so!!
There are good and bad developers everywhere anyway, be it proprietary or not. I should also mention that a lot of developers on Linux ARE PAID, and paid WELL. Software being open source doesn’t mean that money is not involved.
theosib – Very good post, a lot of what you wrote is very dead on.
While I’m no Mac fanboy, I think the NeXTSTEP frameworks and Objective-C are by far the most thought-through things out there. Things on Linux often don’t seem very well thought through. A lot of those “easy” things come from this good framework. Either take this framework, or invent something similarly good and then keep it.
My view of these problems:
1. Branding
Maybe not everyone will like it, but I think desktop Linux has to stop calling itself Linux.
Think of it like Android or Palm webOS. Brand the new distro the way those guys do. Build hype and market it so that the majority of the ecosystem will support the platform.
2. Appstore (again)
I agree with the first comment. The OS should have an app store. I know Synaptic and CNR are there, but again, focus on the user. The user should only see the main apps, not all those crazy dependency libraries.
And if they don’t have an always-on internet connection (or have a slow connection, or are completely offline), they should be able to copy the installer package from friends or a store. Make sure it works, again, without dependencies.
Hardware drivers should also be available here. All of them, no matter whether closed or open. Don’t point the user at adding a new restricted repository, or at using another program (like Envy) to download them.
Everything should be here, and portable (downloadable and redistributable offline).
3. Slower releases
For me, I still think the six-month release schedule used by major Linux distros is too fast. Users have to do the installation and upgrade too often. It’s confusing and time-consuming. I know they can choose not to, but as for me, I can’t help it.
I think the Mac OS X model is better. Give a major release every 1 to 1.5 years. Between them, release only minor updates that focus on bugfixes and optimization. And don’t break dependencies.
One last word: most of what I said is already done, especially by Ubuntu. They only have to change and improve a little bit to achieve this. When they do, I think a Linux-based OS will be ready for the desktop.
That’s a very good idea, IMHO.
Agree, it is terrible. The whole dependency tree is ridiculous for any app.
Agree completely. The whole update process, every single day, gives the user the impression that the OS is not polished but is just patch after patch.
*buntu lost track… High hopes, and they screwed it all up along the way.
In fact, I had high hopes for Ubuntu; I thought that in a few years people would just call it the Ubuntu OS, and that’s it.
But now Ubuntu has exactly the same problems as the other distros… They have Kubuntu, Ubuntu, Xubuntu, Edubuntu. Client and server… A mess.
Applications and drivers are very, very, very limited.
Probably 20 applications and 20 drivers.
Linux distros suffer from the low quality of the drivers and applications around Linux.
What makes me use one OS over another is not its beauty or robustness but mainly its applications and drivers, which make the hardware fly.
That’s the only reason why Linux will still not be ready until judgement day.
That’s why Linux is not ready for the desktop AT ALL! Linux caters to the extremely advanced user who’s geeky enough to get under the hood and tweak the heck out of the OS via command line, or the extremely basic user who’s just happy with web browsing, emailing, chatting and doing word processing.
All other computer users, and we are the majority, are better served by Windows or OS X. Here are just a few examples of why I consider Linux a big flop and wouldn’t recommend it to anyone I know personally.
1. Program Installation – Dependency Hell has taken on a new meaning. So you want to try out program X, and you go to Synaptic, YaST, or whatever package manager your distro uses. You choose to install program X and find out it requires dependencies A, B, C, D and E. You proceed with the install and, once done, you try program X. You don’t like it, so you go back and uninstall program X. Program X is removed, but dependencies A, B, C, D and E, which you definitely didn’t need before you installed program X, are left behind. Well, basic users will probably not even realize that extra junk was left on their systems, so they won’t care. Uber-geeky users know their way around the Linux filesystem well enough to search the package manager logs and remove files and configuration file entries manually. Everybody else in between is left with the bad taste of a crappy uninstall routine.
2. Hardware support sucks; maybe less than before, but it still sucks nonetheless. You go ahead and install Linux (or run it from a live disc), and you see you have sound, video and networking. Cool! That is, until you realize how crappy the support from those drivers is. Remember those out-of-box Windows 98 drivers that were so crappy they sent you running to the manufacturer’s website so you could download and install the REAL driver? Well, that’s the kind of support you get when you want to use your video, sound, webcam, TV tuners, scanners, printers, etc. with Linux… if they work at all. Again, the basic user will be fine with this support. The uber-geek will probably recompile his drivers to get them optimized for his kernel of choice. Everybody else, we’re just screwed.
3. Linux development is a huge MESS! You have to put up with getting editors, compilers, debuggers, the whole nine yards, set up. And documentation? Forget about it! Linux development tools are all over the place and are the biggest proof of how disorganized Linux is. And if you are new to programming and want to learn? Seriously, do yourself a favor: get Windows, get the free Visual Studio Express, and learn to program in C# or VB, because the whole mess that is Linux development really doesn’t lend itself to a productive and enjoyable learning experience.
4. Multimedia setup is a nightmare. Unless you get a distro that doesn’t care about patents and copyrights, getting your system to play back MP3, AVI, MOV and WMV files can be a total nightmare, and don’t get me started on DVD playback. Also, I thought setting up Windows Media Center was quite a hassle, until I tried to set up MythTV and TVTime. I don’t think even their developers can understand these programs’ configuration UIs! And what about video editing? There’s nothing out there as usable as iMovie, Windows Movie Maker, Sony Vegas Studio, etc. Kino and Avidemux are as pathetic as that Windows Live Movie Maker beta, or even worse. Nobody cares about Theora – it is not playable on almost any consumer equipment out there.
So you see, until Linux gets to cater to the majority of computer users, not just those at both ends of the computer-literacy scale, Linux is NOT ready for the desktop. Period.
This ought to be good.
Only true if the new meaning of “dependency hell” in Linux is “what’s that?”.
It is obviously a loooooooong time since you have actually used a Linux package manager. That rant above is simply not true.
Package managers these days are literally: “browse or search for what you want, select it, click apply”. Done.
Linux supports more hardware than any available version of any other OS. Period. Supported out of the box, and supported better … no “dropped support for legacy cards”.
It is a huge mess of people … an estimated 1.5 million involved.
It is a good job, then, that package managers are so easy and so capable, isn’t it? These days, most package managers support “groups” or “collections” of applications. This operation is then just a matter of selecting the appropriate group of applications in the package manager, in this case the “development” group, and clicking apply. One-click install.
Most distributions will give you a link to a repository. The one for Ubuntu is here:
http://www.medibuntu.org
Add it to the list of repositories in use, and the problem (which was created by the US legal system, not by Linux) is easily solved.
All solved by adding the one extra repository. No drama.
You didn’t look too hard.
http://en.wikipedia.org/wiki/Kdenlive
Theora is not Linux.
No one expects YOU to use it if you don’t want to … use whatever you want to … but that is no reason for you to spread disinformation about Linux to try to put others off.
Edited 2009-05-19 03:23 UTC
Very good. That’s exactly how Windows and OS X users see the whole thing. After years and years promoting Linux to almost everybody, I realized the whole thing is a mess.
And you have not talked about the fonts in GNOME… Do you want RGB, BGR, antialiasing (heavy, soft, medium), hinting (soft, medium, easy)… What combination? Try to explain that to a normal user… Sometimes not even die-hard Linux users know. And it is right there in the user interface.
People just tell you: I just want it to look nice, like in Windows or on a Mac. And then not all applications show the same fonts; sometimes they look bigger, or smaller, or just ignore the preferences pane.
No matter how hard I tried, desktop users say Mac or Win.
Eh? The choice of RGB/BGR/etc. and the hinting levels are only shown in the ‘Advanced’ dialog of the font customization panel. The main panel just gives you a simple choice of “not antialiased”, “smooth antialiasing”, “best contrast antialiasing” and “LCD antialiasing” (not sure I got the exact wording right, I’m not on GNOME right now). Or are you suggesting removing the ‘Advanced’ button altogether and pissing off the people who do want to customize that stuff?
Your argument sounds a lot like “Windows isn’t ready for the desktop because I launched regedit.exe and everything is confusing!”
Again… Not all window managers do it the same way, and not all distributions do it the same way… Can you see a pattern here? The big mess.
Xubuntu, for example, takes you directly to it. And Xubuntu is aimed at cheap hardware, at the machines that GNOME cannot run on.
Actually, no.
The font customization dialog you’re talking about is part of GNOME, which is a desktop environment, not a window manager. There are only two major desktop environments, GNOME and KDE. Xfce has too little market share to matter – people who use Xfce are already tech savvy and don’t have to be considered in this discussion, which only focuses on people who might find Linux confusing and will only use GNOME or KDE.
I’ll give you the benefit of the doubt and assume that you actually mean “desktop environment” when you say “window manager”.
As far as I know there are no distributions that change the look of the GNOME font customization panel or the KDE font customization panel. This only leaves 2 possible font customization user interfaces, GNOME’s and KDE’s, on all Linux distributions.
I’ve already said that GNOME’s hides the details in an Advanced button. Is KDE’s any more complex? http://www.novell.com/documentation/nld/userguide_kde/graphics/cc_f…
Nope, doesn’t seem so.
No Linux newbie would or should use Xubuntu (see Xfce argument). If you recommend a newbie to use Xubuntu then that’s asking for problems. If the user’s a newbie, point him to GNOME or KDE. Nothing else.
Blaming Linux for Xfce’s usability problems is like blaming Windows for being unusable by putting a Windows newbie behind a customized Windows installation that runs LiteStep (a shell which replaces explorer.exe). Just because the choice of LiteStep exists doesn’t mean that its existence makes Windows less user friendly.
Edited 2009-05-20 14:45 UTC
I believe this is a dead end with you guys. You just cannot see what is in front of your eyes.
I know that Linux is only a kernel. All the other parts of the system are something else. Linux, as a kernel, is not the problem itself.
However, as for Linux, or what is called Linux these days – everyone on this list knows what it means: a Linux distribution. And window manager versus desktop environment is not the point. You are just doing what lawyers do, leaning on little details to discredit a valid concern.
It is not this term or that term… Who cares? This is exactly what I am telling you.
And somehow, knowingly or not, you are restating the same problem I have been pointing out over and over throughout this long thread. What is called Linux is a fragmented base. You say that to a new user you have to recommend this or that…
Well, the thing is, new users do not know where to start, because all the distros are a mess. Why is it so difficult to make a simple desktop Linux distribution that works for most users?
Ubuntu couldn’t, so they developed the *buntus. You say the good one is Ubuntu, but the Xfce people say theirs is the good one, and of course the Kubuntu people say theirs is the cool one… and so on. Then you say GNOME and KDE are the good ones because everyone uses them, which might be true or not; but if whatever everyone uses is the good one, then Linux is out of the picture.
I knew my concerns would rise many criticism, but this is not criticism. It is not import which distro I recommend, because I believe there are too many distros to choose. And everyone is pulling its own way…
It might be good for specific purposes like a super cluster, or a audio studio, but we are talking here about Desktop users.
Desktop users were the first target for Linux when it was created, and today it is the forgotten child.
And by the way, I stopped recommending Linux a long time ago to casual users. Only enterprises if they want to save money. When problems arise, and they do, (because casual users manage to do the impossible) fixing stuff is very time consuming. I had never worked so much for so little, because of my big mouth.
And now I try to point Linux troubles and everyone is bashing me out. It is clear why nobody uses Linux on the desktop and the thing is FREE.
I can. There are some truths to your article, but that doesn’t mean I agree with everything you say. Some things are just factually wrong in my eyes and there’s no way I can agree with them.
I find it interesting that you did not respond to any of the things I said about the font customization panel.
No, the *buntus were not started because Ubuntu did a bad job; the *buntus were started because not all people agree with the choices that Ubuntu made. GNOME is a fine desktop for newbies, but certain classes of power users don’t like GNOME and want KDE or Xfce instead.
It’s not because Ubuntu did a bad job that people created alternatives. It’s because people CAN make alternatives, and so they will.
You are assuming that there is one true way which can please everybody. There isn’t. If there were, there would be no need for alternatives. The alternatives exist because they fulfill a need.
What I don’t understand is why you don’t just recommend Ubuntu to newbies. Kubuntu and Xubuntu exist but you don’t have to care about them. Ubuntu is by far the most popular one, and if you believe that there should be One True Desktop Linux Distro then why not just recommend Ubuntu and leave all the other distros to the tech savvy people who know what they’re doing?
I am not bashing you for everything. I am only bashing you for the things that I disagree with. There are faults in desktop Linux, but the font customization panel is a non-issue.
Edited 2009-05-20 23:40 UTC
It is not my article… I just pointed out some things from the article that I agreed with, because I really believe that somewhere down the road Linux has lost track.
Linux, or again, what today we know as Linux, which is a Linux distro, has become so complex and so fragmented that most casual users just do not care.
Sorry, I cannot respond to all the comments people made about my comments. Even though I am trying… this is getting frustrating. I am discussing ideas and people are discussing implementation details, and keep bringing up counter-example details from one distribution or another… But here is the thing no one wants to admit: every distribution has a problem, a shortcoming. All of them are intrinsically flawed in some way. And casual users do not care about fixing Linux.
If you are assuming that, then it means Xfce supporters are just stupid: they want to use Xfce instead of GNOME because they want to, and are taking lots of trouble just to be different. But that’s not what Xfce people say… They just say GNOME lost track and has become more and more complex and slow. And GNOME people said the same thing about KDE. And I have been using Linux since before there were GUI installers.
No, I am not saying that. Not everyone can be pleased, and there will always be a need for niche markets. But again, don’t lose focus: we are talking here about desktop Linux. Evidently the needs of a scientist are not the needs of a casual, normal user. But this is not what is happening. Linux is becoming a niche OS, made, used and maintained only by IT insiders, big corporations, specific purposes, and people who are at least somehow related to all of this… (And then comes another person bashing this post, saying that compiling the kernel is not so difficult because he or she does it every day when he needs to fix blah blah, and he does not know a thing about computers… Yeah, right, because everyone knows what a kernel is, or cares.)
That’s a good question. There is a fundamental problem in how Linux is perceived. I have been in dozens of forums, mostly at universities… One of the main arguments about Linux, repeated over and over, is that Linux can take that old laptop or computer you had and bring it back to life.
Many people then decide to give Linux a try, but they have a brand new shiny computer at home, so they bring the older stuff and say… can you help this machine do something with Linux? And Ubuntu does not work well on it, because GNOME is heavily bloated. People tell you: but Windows used to work fine, or better than this thing works.
Other people say: I went to the Ubuntu site, followed the link “Which Ubuntu is right for you?” and ended up with Edubuntu. Why? My opinion: the Linux ecosystem is a complete disaster.
Because, again, that’s not what the *buntus say on their pages, and because even Ubuntu is somewhat flawed.
In fact, I thought Ubuntu would be the best thing, until I realised it has severe limitations, and NO, I am not going to list them here again, because I tried, and there is always going to be a person who says: but it is so easy to fix that, just download xhyruoo, apply -uioprn, install that and that and reboot the kernel and that’s it. Or better, recompile hyukku with flags -989. Like audio. And at the end they say it is so easy my grandma can do it. Yeah, right. Your grandma can type it, but she cannot figure it out (and then another says that he repaired the registry in Windows once and Linux is better, and his/her grandma could not repair the registry either).
Well, let me tell you, I know that, but do not expect casual users to do that, or know that, or spend 6 or 7 hours asking people how to do it. Or to pay people like me to fix their computers, when you have, for example, App Stores.
Do not expect people to jump from distribution to distribution when they see Ubuntu cannot do something that some other distro can.
Yes, it is an issue. Because graphic designers, for example, have tried to move to Linux and found the thing horrible. And many graphic designers are “broke” most of the time, cannot buy Photoshop, and are normal users. Same thing with journalists.
And no one has dared to explain to me all the settings there are in the Advanced panel. But let me tell you… those advanced settings are not about fonts, those advanced settings are about “rasterization of fonts on screens”, and even the best graphic designers I know know nothing about rasterization of computer fonts. Graphic designers, for example, love the Mac because rasterization is done right (and another one is going to tell me he does not like fonts on the Mac because he sees them fuzzy on the $99 monitor he bought when Circuit City closed). So when you give the user control over rasterization, just because you can – and Linux can, because it is not about freedom but about doing whatever you want – the user usually ends up with a disaster, especially if he/she prints something that looks nothing like what is on screen, and then goes to the Advanced tab and makes things worse and worse until she says: this thing does not work… And no, printing is not a niche market; printing is done by everyone, except maybe us, IT insiders, who almost never print, and maybe that’s why Linux does not get scanner or printer drivers right.
And yes, even Windows gets things onto paper better than Linux does (and another one tells me that’s not true, because if you install huipscript with options -9857 the thing gets fixed for an HP but gets screwed up on a Xerox, but no one uses Xerox, so it’s better).
The thing is, casual users do not care how the OS ticks. They want to do their work, that’s it, but because most of you like computers and hate Microsoft/Apple, you expect people to embrace Linux, just because.
They are not stupid, they just don’t agree. If I drive a Toyota instead of a Volkswagen, does that mean that Volkswagen makes bad cars? Of course not; it just means that I don’t like Volkswagens.
This is the natural state of things. It’s not Linux that’s a mess, it’s the entire computing industry that’s a mess. Linux does its best to cope with the mess but obviously it’s outside human ability to handle everything.
Look at Windows. Out of the box it does not support a lot of hardware. For most hardware support you have to install drivers from the manufacturer, and even then things still b0rk from time to time. Linux does not get any help from most hardware manufacturers, but how exactly is this Linux’s fault?
Look at OS X. They control their entire hardware market and even then things still b0rk from time to time. The last OS X update bricked a lot of people’s Macs.
If things on Windows and even OS X are so messy, then I can only conclude that it’s not Linux’s fault that not all hardware is well-supported. Even with 10 times more manpower put into hardware support, there would still be people complaining about hardware support.
You’re basically claiming that Linux developers don’t try hard enough to please end users and that the current state is their fault. I’m saying that the current state isn’t (entirely) their fault and that they still couldn’t please most people even with 10 times more effort. 1000 times, maybe. But where do you want to get that kind of manpower from?
I know a graphics designer. He has a Mac and he hates it (though less so than Windows). He’s seen me using Linux and wants to migrate to it, but the only thing stopping him is the lack of Photoshop. The lack of Photoshop is Adobe’s fault, not Linux’s fault.
Why do you think they hide that stuff behind an Advanced button? It’s like complaining that a user visited C:\Windows, clicked on ‘Show contents of this system folder’, saw a bunch of files he doesn’t understand, and concludes that Windows is not user friendly.
What do you suggest otherwise? Getting rid of the button entirely? And face the wrath of thousands of people who complain that the interface has been “dumbed down” and “crippled”?
I’m typing this on a Macbook Pro and I do not like the font sizes. I want to increase the font sizes but apparently it’s impossible. So not offering me the option to change the font is somehow more user friendly?
If you want to convince people, stop coming up with nonsensical anecdotes. You make it sound like a hypothetical Linux newbie wants to print something, so he opens the printer configuration panel, but for some reason feels the need to click on the ‘Advanced’ button, sees a bunch of things he does not understand, somehow feels the need to mess with these advanced settings, thereby breaking something, and therefore Linux on the desktop sucks?
I’m sorry, I don’t follow this reasoning at all. This sounds like a typical Windows scenario to me: someone opens the Windows configuration panel, opens an advanced settings dialog, changes something he doesn’t understand, and “broke the computer”. It happens all the time. How is the existence of an Advanced button in a Linux GUI suddenly proof that Linux UIs suck even though Windows has had it for decades?
Of course not… But now that you bring up car analogies… A Volkswagen is a different product from a Chevy. By no means does Volkswagen, or Chevy, intend to give every user every possible thing a user wants.
They offer fully finished products… They are not expecting a common user to customize every single part of the car.
No, Linux is not doing its best to cope with the mess… Linux is just trying to please everyone.
I am not discussing hardware manufacturer support, because it is very difficult for programmers to tell companies what to do. But be real: companies see what a big mess Linux is and they just don’t care.
Yes, no OS is perfect… But if you think Apple has not done a good thing with OS X, then you must be out of this world. Look at the figures: in less than 10 years, Apple has become the number one UNIX seller by sales numbers. And it is selling UNIX to non-technical people.
Some things cannot be solved alone. Linux cannot solve all problems related to the platform. Drivers are a big issue, but not the only one. And there are some things Linux can solve, and those things are big problems, and Linux people refuse to see them… Well, if we cannot get our act together, casual users will care even less.
No, I am saying exactly the opposite. The main problem with Linux is fragmentation, and fragmentation in Linux is caused by (at least) two things:
1. No clear goal. No focus on where you want to go.
2. Trying to please everybody.
I am not here to blame anyone. I am just pointing out things that can be fixed to make desktop Linux happen.
Well, I guess he is really, really into compiling his own stuff, messing with hardware drivers and the kernel, configuring everything by hand, and so on. Mac OS X underneath is very similar to Linux; just teach him to use the terminal. Or tell him to use Photoshop in Wine, like many people suggest in this thread.
Really? Do you think that is the same? Do you actually believe that a system preference that says “Advanced” fonts – just in GNOME, by the way – is the same as going to the System folder in Windows, which clearly shows a message that says: there is nothing here for you?
We are talking about the Preferences.
You must be in another dimension. Or better yet… millions and millions of desktop Linux users in the world who somehow manage not to be seen by anyone.
Windows does not have those controls, Mac OS X does not have those controls… Linux server users do not care about those controls… 99% of desktop users do not care about those controls.
Well, not everything is wonderful on the Mac… But don’t worry: in Linux you set a font size in the desktop environment (or window manager) and most applications do not bother to follow it. And no, I am not talking about Java apps, but about many other apps.
No, I am not trying to convince anyone here. But I have been using Linux for more than 12 years and every single year it is the same “thing”: this is the year of desktop Linux. Yeah, for the last 12 years.
Ubuntu does not expect you to click on the Advanced button in the font customization panel. It works fine even if you never look in there. So what was the problem again?
Actually no. I’ve never compiled a kernel since 2006. The only Linux driver I’ve had to install since 2006 was a binary blob for my scanner. I just downloaded a file, put it into a directory as per the instructions, and it just worked. Sound worked out of the box. Video worked out of the box. Network worked out of the box. OpenGL worked out of the box. Printer worked after I added the model in the printer configuration dialog, with no need to manually install any drivers. The only software I had to compile by hand was relatively obscure developer/sysadmin stuff; 99% of the software I needed, I installed graphically with ‘Applications -> Add/Remove’, including software that traditionally had to be installed via the commandline such as Flash and Java. And I did all this with a graphical user interface. Yes, on Linux.
I haven’t messed with the kernel or with system config files for years now, nor do I want to. And I still use Linux*. Guess what this means.
He’s never seen me messing with hardware because everything I use just works. Ironically, I have a USB stick here which works perfectly on Linux and out of the box, but isn’t being recognized by OS X.
* Not at this moment because my laptop’s harddisk is broken and I’m waiting for a new one.
Why is there a difference? Settings are settings. Show me a usability study which found that hiding confusing stuff behind an Advanced button makes the entire system unusable.
Right-click on My Computer and choose Properties. Navigate to the hardware management console. You can seriously mess things up there by changing values or deleting devices. You don’t get any “Get out of here!” warnings for this either, but that doesn’t make Windows unfriendly.
Another dimension huh? You must have missed the GNOME 1.x -> 2.x days. They simplified all the configuration dialogs A LOT. 75% of the configuration settings were removed. People cried foul left and right, and even today are still complaining how GNOME’s interface is dumbed down and/or crippled. Try reading Slashdot and Reddit comments about GNOME sometimes.
I’ve never supported the “this is the year of Linux” stuff. But I don’t think the problem can be solved by blaming the wrong parts.
I don’t expect OS X to ever overtake Windows, for that matter. I actually believe that nobody can overtake Windows, no matter how good they are. Not on the desktop. There’s too much inertia.
Edited 2009-05-21 20:34 UTC
I almost forgot… the problem:
http://marketshare.hitslink.com/operating-system-market-share.aspx?…
And I am not going to repeat what I have been saying throughout the thread. If you want, read it, but read carefully.
May I remind you that you are not a casual user… Do you actually believe YOU are going to have trouble using any OS? (Oh, sorry, maybe OS X; you know, it is the easiest OS out there according to many casual users, but somehow you can’t make it work.)
Well, it means that Ubuntu is perfect, a particularly wonderful thing…
But casual users, people who actually buy computers, DON’T SWALLOW it.
And I almost believe you… Keep it up. Sadly, I have used the thing myself and it is not what you describe.
Yes… what a problem. I told you OS X was gonna bite you. You can drive stick, but somehow you just can’t drive an automatic.
That argument sounds very flawed.
And we come to the same thing over and over….
Because Windows sucks somewhere, Linux has the right to suck too?
You know Windows is what 90% of the world uses; they have the market, they do whatever they want. They do not need to convince people to use their system because the world already uses it. Microsoft’s problem is not market share; Microsoft’s problem is how to make people buy the new versions.
Get real… The only way for Linux to succeed is to make it better. People need to perceive a better product. Otherwise they will never leave Microsoft… well, maybe for the impossible-to-use OS X.
And no, I am not going to show you any study. If you want to learn, study and do the research yourself, as I have been doing for the last 12 years. And no, I am not the only one who thinks Linux is flawed; see what other OSes are doing. No one else is taking up the Linux libertinage.
Another dimension huh? You must have missed the GNOME 1.x -> 2.x days. They simplified all the configuration dialogs A LOT. 75% of the configuration settings were removed. People cried foul left and right, and even today are still complaining how GNOME’s interface is dumbed down and/or crippled. Try reading Slashdot and Reddit comments about GNOME sometimes.
Sorry, I don’t read Slashdot anymore, nor The Register, nor other sensationalist sites that do not check facts before reporting them.
And yes, you must be right: when GNOME came out, 99.5% of the computer universe complained about GNOME. Well, let me tell you something: when GNOME came out, no one cared about GNOME, not even the Linux community; everyone was so focused on KDE that no one really thought something else would get traction.
And the people who complained about GNOME were not casual users; they were geeks.
Well, maybe that’s true. Maybe Windows will never be defeated, but not because it is impossible.
And I am not implying Mac OS X will take over Windows, because its business model is not a universal business model. Although that model could change.
But it does not matter, and it is not about who takes over whom. Personally, I believe a market driven by only one force is terrible, no matter whether it is Microsoft, Apple or Linux.
But man, Linux on the desktop is depressing. If you believe that what I say is not right, well, that’s your opinion, but at least I dare to say something, and I know many people here believe I am just a troll. And I might be wrong…
But unless people realize what’s wrong with the system, they cannot provide a better product… And the OS is what we can fix; we cannot fix the business model, only the community and the OS. Application developers go where the users go; they do not make OSes and don’t care about them.
I read the article and found a number of discrepancies. It sounds like a diatribe by a Windows user against Linux. The article falls apart in almost every criticism:
1. Re: “You may argue eternally, but complicated software like games, 3D applications, databases, CADs(Computer-aided Design), etc. which cost millions of dollars and years of man-hours to develop will never be open sourced.”
Postgres is a very sophisticated database and is open source. It is higher quality than both DB2 and MS SQL Server.
2. “Both GTK and Qt are very unstable and often break backwards compatibility.”
Don’t know about Qt, but GTK has been very stable in my experience.
3. “Default fonts (often) look ugly.”
Not on Ubuntu – perhaps on SuSE.
4. “No double buffering.”
Java Swing implements this – I can’t verify whether GTK does, but the claim seems far-fetched.
5. “No unified configuration system for computer settings, devices and system services.”
That’s no different between the various Windows offerings: Windows Server 2003 vs. XP vs. XP Pro vs. Vista.
6. “It should be possible to configure everything via GUI which is still not a case for too many situations and operations.”
I very rarely have to do anything via the command line (in fact I’ve never configured anything via the command line since Ubuntu 8.10).
7. “Few software titles, inability to run familiar Windows software.”
It’s not the goal of Linux to run Windows titles. But there are, arguably, fewer titles for Linux.
8. “A lot of WinPrinters do not have any Linux support (e.g. Lexmark models).”
Most of these are now supported out of the box. Try setting up a network printer on Windows and you’ll find that Ubuntu has a much quicker and intuitive setup. Most USB printers need no configuration at all – you plug them in and a message pops up saying your printer is ready.
9. “US Linux users cannot play many popular audio and video formats until they purchase appropriate codecs. ”
That’s not a problem with the OS, that’s a problem with the patent system.
10. “A galore of software bugs across all applications. Just look into KDE or Gnome bugzilla’s – some bugs are now ten years old with over several dozens of duplicates and no one is working on them. ”
This one kills me. You won’t find a comparison to Windows there because they don’t publish their bugs.
11. “Most distros don’t allow you to easily set up a server with e.g. such a configuration: Samba, SMTP/POP3, Apache HTTP Auth and FTP where all users are virtual.”
Windows doesn’t offer this unless you dish out serious money for the server edition and licenses for every client. Ubuntu offers all this in a single package for free.
12. “General slowness: just compare load times between e.g. OpenOffice and Microsoft Office.”
Windows preloads Office in memory. Since OpenOffice doesn’t own Windows, it can’t grant itself the same privilege. Once you’ve loaded an OpenOffice module, it comes up just as quickly.
13. “Bad security model: there’s zero protection against keyboard keyloggers and against running malicious software (Linux is viruses free only due to its extremely low popularity).”
I love this argument. You admit that it is virus free, but only because it is not popular. Linux has a completely different kernel and userland structure than Windows. You can’t affect system files from userland in Linux. You could gain some security in Windows by using a non-admin login, but then you can’t do much. Windows has come in last in every published hacker competition. It’s even hackable without physical access.
14. “Old applications rarely work in new Linux distros”
My applications have all worked on every upgrade I’ve ever done.
15. “No software policies.”
The Linux community has well-defined and published policies.
16. “No standard way of software distribution.”
If you are looking at a distribution tree, such as the debian tree, the software distribution is standardized. Linux is a kernel and different vendors have different distribution standards, just like with other proprietary software.
17. “No SMB/AD level replacement/equivalent.”
First of all, all Linux distributions support SMB, so that’s not an argument. Besides, Linux had NFS long before Windows had SMB.
Exactly so. Below I have added some of my own observations to yours.
Firebird is arguably even more sophisticated.
http://www.firebirdsql.org/
http://www.firebirdsql.org/index.php?id=about-firebird&nosb=1
Qt and GTK both, I believe, break compatibility at major version breaks. That is, the transition from GNOME 1 to GNOME 2 broke applications, as did the transitions of KDE from 2 to 3 and from 3 to 4. That amounts to perhaps only one or two such breakages over their history. Hardly “often”. Even then, binary backwards compatibility is not an issue for Linux.
Indeed. “It IS possible to configure everything via GUI”.
Having said that, it should be pointed out that over 200,000 titles for Linux should be enough for anybody.
True. Perhaps it should be noted that it is a lot easier to get Linux to play “many popular audio and video formats” without having to buy anything, and that doing this is perfectly OK in most countries.
Since the distribution controls what is available on Linux, in order to compare OpenOffice on Linux with MS Office on Windows, one should provide Linux with a pre-loader equivalent to the SuperFetch pre-loading used on Windows. The program to use on Linux is called “preload”. With that installed, loading times for OpenOffice on Linux compare quite well with loading times for MS Office on Windows. To install it: open the package manager, search for “preload”, select it, click apply.
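For those who prefer the command line, the same install is a one-liner on Debian/Ubuntu-style systems (package name as given above; it needs no configuration for the default behaviour):

```shell
# Install the 'preload' adaptive readahead daemon.
sudo apt-get install preload
# The daemon starts automatically, watches which binaries you launch
# often (e.g. OpenOffice), and keeps their pages warm in memory.
```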
What you said is all true. It still doesn’t hurt to say, just for emphasis, that keyloggers and malicious software are necessarily closed-source programs. To avoid them on Linux, simply install only open-source programs. Not that there actually ARE any keyloggers or malicious software programs for Linux that one must avoid anyway.
Agreed. Old applications rarely fail in new Linux distros.
Yes, indeed. For Debian, you can read them here:
http://www.debian.org/doc/debian-policy/
It is a bit long and boring, but it definitely does exist.
Then, there are also products like this:
http://www.likewise.com/
Sadly… no, it is not. It seems like it, but once a problem strikes… terminal, most of the time. Forums all over the net are full of “type this or that in the terminal” to fix this or that problem.
It is better, but even today, in 2009, many problems have to be solved by typing.
I don’t agree.
If you read Linux forums, they will often give instructions there for the command line. That is because it is easier for both the person typing the instructions on the forum, and the people using said instructions on their systems, to use the command line.
All that is needed is to copy the text from a browser as you read the forum, and paste that text into a terminal window. Foolproof. Accurate. Easy peasy. Takes a few seconds. No typing (just copy and paste). On Linux, you can even do it with one-click-per-command-line (select the line of text in the browser, then middle-click in the terminal window).
That is a great deal easier to do (both to describe, and to follow) than trying to:
Start -> Control Panel -> System -> Hardware settings
Select the middle tab
Click on the so-and-so checkbox
blah blah blah.
The fact that the solution to the vast majority of problems described on Linux user help forums are easier to document and perform via command-line instructions does NOT mean that you HAVE to use the command line.
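To make that concrete, a typical forum answer of the kind described is a single paste-able command. This one is only an illustrative example (a real Ubuntu metapackage of the time, but any one-line fix works the same way):

```shell
# Select this line in the browser, middle-click in a terminal, press Enter:
sudo apt-get install ubuntu-restricted-extras
```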
Edited 2009-05-19 05:14 UTC
You are free to agree or disagree, but some things cannot be fixed from the user interface, because the GUI in Linux is optional by design. Even though no one would want to use desktop Linux without a GUI, the truth is many servers do not have GUIs.
I could give many examples, but that is not the point here. A particular example is not going to make you change your mind.
So I guess you will find out, sooner or later, that the GUI does not provide all the functionality in Linux, as I have realized.
How can you set up your network’s routes in Windows without using the CLI?
I’m not trolling here, I’m actually curious because that’s something I always had to do the “old-fashioned” CLI way. Not that I complain, but I’d like people to be consistent if you state that everything on Windows can be configured from a GUI.
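For comparison, this is roughly what the “old-fashioned” way looks like in a Windows command prompt (the addresses are made up for the example):

```shell
:: cmd.exe: add a static route; -p makes it persist across reboots.
route -p ADD 192.168.2.0 MASK 255.255.255.0 192.168.1.1
:: Inspect the current routing table.
route PRINT
```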
I hear this argument WHEREVER I go. It’s amazing; someone needs to put this one in the f’ing grave where it belongs.
OK, so you mean to tell me that, with the CIA, bankers, governments, the military, industrial complexes, scientific research projects, and all the top supercomputers using GNU/Linux for mission-critical tasks, there is NO reason for crackers to spend their time and energy finding ways of exploiting the operating system?? ARE YOU OUT OF YOUR MIND?!?! Seriously: oh well, grandma, grandpa and Aunt Mertle all use Windows, so that’s the reason why people find ways of exploiting it. OK. It wouldn’t have anything to do with the fact that it’s fifty times easier?
Yes, they break on major versions. But GTK 2.0 was released, what, 7 years ago? When GTK 3.0 is released next year, GTK 2 will be 8 years old. Is 8 years of guaranteed binary *and* API compatibility not good enough for you?
You can install GTK 1, 2 and 3 in parallel. If you want to use the old stuff, you can. These versions are not mutually exclusive.
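A quick sketch of how the parallel installs play out at build time, assuming the development packages for both major versions are installed: each major GTK release is a separate library with its own headers, and an application asks for the one it was written against.

```shell
# GTK 2.x registers itself with pkg-config under a versioned name:
pkg-config --cflags --libs gtk+-2.0
# GTK 1.2 predates widespread pkg-config use and shipped its own script:
gtk-config --cflags --libs
```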
This list is ridiculous. Let’s go through the points one by one:
0. WTF does this premise have to do with Linux being ready for the desktop?! And software patents???!!!
1. I can’t speak about professional audio, because I have no experience with it, but those are not the majority of desktop users. As for the other subpoints: difficult to set up volume levels? How is sliding a mixer up and down difficult? Confusing PCM/Line In/Mic/Output mixer settings (especially coming from someone who in his first statement said you can’t use Linux for professional audio)? As for distros, all the recent distros I used had sound working out of the box; actually the only time I was confused by sound settings was with my GF’s HP netbook. If you plug headphones into the combined mic/headphone jack, the internal mic stops working, and I found no way to turn it back on.
2.1 No stabilized API. That’s rubbish: GTK 2.0 has been stable for what, 7 years? Qt just broke backwards compatibility, but if you need the old API just install it in parallel. Also, the prime example that this does not hinder adoption is Apple: they became popular just after they broke backwards compatibility.
2.2 Slow GUI, without compiz enabled I’ve never had problems of things being too slow, with compiz it’s another thing (bad intel drivers).
2.3 I actually don’t know, because I never noticed. I do know that raster (of Enlightenment fame) at some point ran some benchmarks and found that software rendering is most of the time faster than hardware rendering on X (which brings us back to bad graphics drivers).
2.4 ??? I don’t see how some of those points are related (fontconfig is high-level?). And “can’t be changed on the fly” (I must be doing something wrong here, then). “Compatible Windows fonts” (what does that even mean?).
3.1 How does it drive most users mad that different distros configure things differently (which is only true in a limited sense)? Normal users don’t constantly switch distributions.
3.2 You can distribute your software as static tarballs
3.3 You can’t install all software on Windows either without paying money or compiling it yourself
3.4 Use static libraries
As a side note: I have yet to hear from any major software house that finds this to be an actual problem keeping them from developing for Linux.
4 You can’t configure everything through a GUI on Windows or OSX either. That’s sometimes on purpose: they only want advanced users to change some things. Also, on Windows and OSX some things can’t be configured at all (the theme on OSX).
5. A lot of the programs average users use have equivalents in Linux.
5.1 All the people I know who use these programs use a pirated version and could just as well use the OSS equivalent. But then they wouldn’t have bragging rights
5.2 I agree that Linux is lacking in games
5.3 Yes driver support for some desktop hardware could be better
5.4 And how many average users have a Blu-ray drive in their computers?!
5.5 How’s that different from Windows?
6. I actually agree to some degree. Hardware sometimes does stop working with kernel upgrades.
7 How does that prove anything?
8 examples?
8.1 If you don’t know what you’re doing here you should probably not be doing it
9. He must have been comparing load times with the OpenOffice.org/MS Office preloader active. I can’t confirm this at all
9.2 If we compare boot times (does Windows even have parallel boot?), most Linux distributions run circles around Windows in my experience. I’ve gotten the boot time on my laptop down to 15 seconds simply by enabling concurrent boot
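For reference, the concurrent-boot tweak mentioned here was, on Debian-style sysvinit systems of that era, a one-line configuration change. Treat the exact file and value as an assumption, since it varied by release:

```
# excerpt from /etc/init.d/rc (Debian/Ubuntu sysvinit; hypothetical)
CONCURRENCY=shell   # default was "none"; "shell" starts init scripts in parallel
```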
9.3 ??? Again, a lot faster than what I’ve seen on Windows.
10. Well, some applications show certain errors only when started from the CLI, e.g. Firefox. But then they would not show these errors at all on other systems, or they would only show up in logs.
11. Documentation could always be better.
12. Now that’s just wacko. What security model is he suggesting? Sudo requires the CLI??? What about gksudo?
13.1 Correct, if they were dynamically linked. But then you just get the new version of the program.
13.2 If you link against a newer library you expect it to work with an older one?
13.3 The link actually points to a way to distribute software linked against one specific version of a library, overcoming most of these problems. No bugs, no regressions.
14.1 ??? I don’t get it.
14.2 Actually, I’d argue there’s a more standard way of software distribution in any Linux distro than in other systems. In a Linux distribution the standard way is to use the package manager; on other systems it’s selling CDs in shops, downloads, sending out free CDs (AOL, anyone?), magazine cover discs…
14.3 Why does Samba not count? Also NFS, dhcp?
This list was “not even wrong”, to quote Pauli.
This is where the cluelessness shines and this is where I should have stopped reading. Unfortunately I didn’t and found an article that was just as incorrect as expected. Granted, there are a few good points but most is just nonsense and saying that anything is discussed is an overstatement. Most items are of the type “this is how it is because I say so” and lack any kind of explanation or discussion.
It’s funny to see that many of these points could just as easily be attributed to Windows or OSX. I guess they’re not Desktop ready either.
I’m not saying Linux does not have any challenges on the desktop but this is just another mostly irrelevant opinion piece.
I never actually saw the “no databases” part. Considering the arguably best database engine in the market, Oracle, runs perfectly on Linux…
I don’t know what’s scarier: the fact that this article came up, or the number of Windows users who seem almost mad about Linux. If you don’t like Linux, that’s OK. But don’t spread lies about it. The simple truth is that companies are using it for almost every type of work out there, from high-end database systems to some of the top 500 supercomputers to professional 3D visual effects at production studios. In fact, the most sophisticated production software out there runs on Linux. And the kernel seems to work just fine for them. You people criticizing Linux need to read up a bit more. You will be surprised at all the people using Linux just fine EVERY day.
How does the Linux kernel on super computers have any place in discussion about the desktop?
Every time people talk about Linux on the desktop there is some guy defending it and referencing success in totally unrelated areas.
Please don’t be a FI.
Priest: I was merely making an observation that the kernel is indeed a very robust piece of software. Someone was making the argument above that Linux was worthless because it was developed in part by volunteers.
It is true, but you are missing the point. We are discussing Linux on the desktop, not in the enterprise. Most enterprises have teams of experts that can help users do what they want, and most enterprises have computers locked down so users cannot do whatever they want.
Scientists, engineers, researchers, 3D animators: those are not normal or casual users. They usually know what they are doing with the computer.
We are talking here about common users, people who otherwise buy Macs or Windows PCs for their own use, because they want to surf the web, do e-mail, make movies, retouch pictures, play games.
Those people do not know what a compiler is, or what C is, or what a kernel is. When they have a problem they call the neighbour or go to the Apple Store. Those casual users are the majority of PC consumers.
The common user can’t run Windows any better than they can run Linux. My sister’s entire business is cleaning all the crap off people’s Windows computers. It doesn’t matter how good they think they are, they always end up with viruses and spyware on their systems. Your theory is also contradicted by the sales figures of netbooks, which are 30% Linux. Why do you think Dell is now offering Linux as an option on all its systems? Yeah, NOBODY wants Linux. You just keep telling yourself that and someday maybe it will come true.
Yeah, everyone wants Linux on the desktop, sorry my mistake:
http://marketshare.hitslink.com/operating-system-market-share.aspx?…
those are damned lies and you know it:
http://boycottnovell.com/2009/02/06/net-applications-lies/
http://boycottnovell.com/2009/02/04/microsoft-apple-net-application…
All I had to do was hover over your link and see that it is an ASPX page. DUH!
The 40+ MILLION people that had to _CHOOSE_ desktop Linux say a lot more than the 900+ million that had Windows forced upon them.
I think what he meant was that the toolkits aren’t as mature or robust as Win32, GDI, etc. bundle, or others like Cocoa, Quartz, etc. There may be some validity in certain aspects.
For the backwards compatibility: open source programs are typically recompiled, whereas closed source ones are not.
I think he needs exact specifics to make the argument stick, as blanket statements weaken his point.
I do know there are outstanding problems, some addressed with commercial X system solutions.
I would not say that the article is different from other similar ones. It has more opinion than facts, and some of the “facts” are not correct for either Linux or Windows. No, this is not a fair comparison.
How about the OpenSolaris option!
Part of Linux’s problems may stem from the fact that it grew from someone’s bedroom rather than from the demands of a commercial, “real world” environment.
The open source OpenSolaris stems from the latter scenario. This operating system is powerful/wonderful and can be positioned as a mainstream OS if it keeps getting polished.
There are many technologies in (Open)Solaris that Linux is now starting to clone or can only dream about (e.g. ZFS, DTrace, predictive self healing, programming tools (SunStudio) that leverage Dtrace/etc. technologies, etc.) and stem from the commercial-based realities that affected the design of Solaris from the point of view of Sun Microsystems.
Think of Linux as having served its purpose in showing that a sufficiently open-sourced operating system has advantages.
Now that this purpose has caused Solaris to be open-sourced by a major company (i.e. Sun), then drop Linux and embrace/support OpenSolaris.
Yes, Linux developers, that means YOU !
Port your Linux apps to OpenSolaris and get your customers onto OpenSolaris.
That’s all folks !
Because the average user doesn’t care about ZFS and DTrace.
I’ve just had to work with a system running OpenSolaris, and frankly, the packaging* system (especially compared with Debian’s) and the outdated command-line utilities make it hell to use compared to Linux.
The analogy I’ve used to describe the experience to co-workers is of getting a new powerful all wheel drive truck, and discovering that there’s no knob on the gear lever, with just the bare screw sticking up, or that the fuses for all the electrics, including the lights, are not installed.
It’s not that it’s broken exactly, but that you have to spend a massive amount of time polishing the system to make it usable. Can I get a sane default bash prompt and $PATH? How about a working sudo installation? The lack of this sort of polish is what makes it hell to use, and more expensive than Linux to run, since everything takes much, much longer.
* Blastwave doesn’t count. It is a packaging system, and it does work, but it’s a second packaging system to look after next to the official Sun one, and neither is anywhere near as good as dpkg/apt.
Greetings Murrel.
A user not being interested in ZFS/DTrace/etc. is fine and can be expected.
An administrator/developer not being interested in ZFS/DTrace/etc. is foolish and ignorant. Powerful tools/subsystems are important weapons for the admin/developer.
If the developer’s life is made easier, or if the developer is made more powerful (in a technological sense), then this is a bonus for the user of the software created by that developer (better quality software).
The points you raise are related to the “polishing” I mentioned that OpenSolaris needs.
However, OpenSolaris is not a Linux clone (it is a real UNIX) and may not necessarily have Linux-centric stuff.
That said, while “sudo” is used in Linux, “pfexec” is used in OpenSolaris. The snv_111 build of OpenSolaris has “sudo”, and if this is not adequate there is no reason why it will not get further polished.
The community recognizes that the IPS packaging system can be better (it is still early days), and it will improve as time goes on. Remember, Solaris began life essentially as a server OS and has many nice technologies useful for both workstation and server environments. Maintaining and developing more technologies may mean that some take longer to mature. Then again, people can contribute to the OpenSolaris development community.
My point with OpenSolaris/Solaris is that:
* A lot of hard work has been done, has been shown to be fruitful, and has led to a mature, commercially proven operating system. This hard work will continue, allowing further enhancements to Solaris.
* Extra attention is now required for the “simpler” “polishing” of OpenSolaris.
Then you have not used OpenSolaris. IPS, aka pkg, works much like APT/dpkg. In fact, it works better.
pkg image-update pulls the latest full image to update to. It clones the root FS, since it is on ZFS. In case the update breaks anything, you can just reboot into the previous image and your system is exactly as it was before the update. This whole clone operation takes seconds.
There is a table on OpenSolaris.org that lists the pkg commands and their APT/dpkg counterparts.
Maybe you can explain why you think pkg is deficient compared to APT.
Also, as I illustrated above, ZFS is great for average users too. I just swapped two mirrored disks for two larger disks, and it took four commands (it could have been done with two) and one reboot after the copy. ZFS automatically grew to fit the new disks. Try that with any other LVM.
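A sketch of what such a swap looks like, assuming a mirrored root pool (pool and disk names invented for illustration):

```
zpool replace rpool c1t0d0 c1t2d0   # resilver one half of the mirror onto a new disk
zpool replace rpool c1t1d0 c1t3d0   # then the other half
zpool status rpool                  # wait for resilvering to finish
# one reboot later, the pool has grown to the new disks' size
```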
Don’t worry. Many people have migrated from Linux already.
If you look a little deeper into this thread, you find plenty of reasons why.
Keep up the good work. Good progress is being made with OpenSolaris to provide an alternative open source operating system.
Yes, Linux has problems but there is no need to take this thread off topic.
Do you mean this same thread that has 324 comments?
I am sorry, but the thread went off-topic long before I got here ;-).
“I live in The Netherlands, so the DMCA can bugger right off into an abyss – I will install whatever codecs I need on Linux, “clean” or otherwise.”
The EUCD, on the other hand, has you bang to rights:
http://eucd.wizards-of-os.org/index.php/Netherlands
“No need for me to pay for anything, and I doubt any American Linux users care all that much about the DMCA either.”
Of course they don’t, but that’s not the problem. The problem is that as long as the DMCA / EUCD and the American software patent system exist, distributions with any kind of realistic legal exposure cannot ship important bits like dvdcss and libfaad, nor can they legally even instruct you on how to install them. Sure, savvy users can figure it out themselves, but that’s not really enough.
…As bad as the article suggests?
I’ve heard this comment levelled at Open Office many times (slow start times) and it’s something I’ve never really noticed on Windows (or Linux.) The difference would have to be really small on any semi-modern hardware capable of running Windows XP properly.
On the other hand, Open Office is a much smaller distribution, seems to be a more straight-forward install (and uninstall) with less unwanted (silent) “system integrations” and I’ve found it to be generally a far better behaving application suite that does everything I need.
Of course, I have supported Microsoft Office in the corporate environment for a long time (from its 2.0 days), and I’ve had more headaches from this application suite than I care to recall. The list of problems over the years is, quite frankly, endless, as any honest and experienced support specialist will tell you.
How many people can really find significant differences in capabilities between Word and Writer, or Excel and Calc?
Couple that with Microsoft’s perpetual upgrade cycle, where you are asked to pay top dollar for software that intentionally breaks backwards compatibility, where changes are made for change’s sake, and where existing bugs are ignored, and I’m not looking back at MS Office with any sentimental feelings, that’s for sure!
I have similar thoughts about other points made in this “shortcomings” article, so I have to wonder how “constructive” this author is really trying to be.
OpenOffice is an interesting example. I use OO for couple of reasons: it does what I need, it comes with Linux distribution, etc.
If I were a Windows user I might think differently. Perhaps I would say: “It looks almost like MS Office, but it is not MS Office. It does most of the things MS Office does, but still not all of them. Why should I use it when I can use the real thing instead?” MS Office is not that expensive, after all, and it can be pirated if someone really wants that.
Here is an analogy. Some Windows people told me that I could have a UNIX shell and many UNIX programs on Windows if I installed SFU or Cygwin or something else. Just like the hypothetical guy above, I told them: “Why should I do that, when I can have the real thing?”
Linux (and other OS) apps should use different models and paradigms to solve problems through software. It was Microsoft who successfully hijacked earlier concepts, like spreadsheet application, graphics environment that looks like working desk, etc. It is not likely that someone else can hijack those ideas again and become successful.
MySQL is a good example, perhaps the first DB engine one could use hassle-free. MS SQL required a Windows server; Oracle had many requirements and was highly sensitive to OS version and other OS and hardware issues; installing Sybase ASE is a high art. MySQL was easy, a DB server for everyone. A new concept at the time. It was not heavily optimized, and that made it less dependent on OS and hardware, easier to install and maintain. Other DB vendors then started to release “developer versions” of their servers, which one could run on any PC. Linux apps should be like MySQL: a new concept.
I am a hardcore spreadsheet user and frankly I love Open Office more than Excel.
Most people don’t realize it, but OpenOffice Calc is a SUPERSET of Excel, and actually has features that Excel does not. OpenOffice Calc is 100% compatible with Excel, plus extra features.
OpenOffice has nothing to do with Linux. Nothing. It is available on almost all platforms, and was even before Linux was one of them.
i’ve been using various linux distros on my desktop for a long time, and i have to say i hate them all.
the only reason i use linux is because i really dislike windows, and especially the abomination called vista. after apple made the switch to intel, i made the switch myself, and i never looked back.
i think the only way linux will ever be “ready” for the desktop is to copy osx functionality. not the looks, the functionality. ok, osx has its flaws (no cut/paste, wtf?!), but overall it’s the easiest to use of them all.
i know my way around the cli, i know how to compile stuff, but i really don’t have the time to do all that. i have work to do, and my computer should help me do it, not be in my way.
i’m writing this from my laptop, which has the latest ubuntu installed (the first thing i did after buying it was to remove the vista install that came with it). thanks to toshiba’s crippled bios, i can’t run osx on it, but i can run linux. i chose ubuntu because it just had a new release, and it’s based on debian, which is the best linux distro imo.
well, surprise surprise, it worked out of the box. everything works so far. haven’t tried the webcam, i have no need for it, but i bet it works.
so, in this case, linux is ready for my desktop. i can do all my work on it, i can send email, write documents, etc.
now to the hate part:
i tried to use my laptop as a media player. sure, it works, and i have a choice between totem with gstreamer and vlc, or mplayer, and maybe others i don’t know about. it can play everything. well, sort of.
the laptop i use has a shared connection jack, which can be used as audio out or spdif. well, it does not work. of course, the gnome sound settings panel does not help, and listing a bunch of sound systems i can use is pointless because none of them works.
i said to myself this is probably another toshiba trick to make people use windows, and probably the jack only works as spdif on windows. so i plugged in a usb sound card which has an spdif out port and works perfectly on osx and windows. well, it does not work on jaunty. no sound coming out. i tried to play with the mplayer config file, setting afm=hwac3, then removing it, tried to switch the audio server, tried vlc, no joy. funny thing was that totem did manage to get some sound out, but after quitting the app and starting it again there was no sound.
remember, the sound works on the laptop speakers, just not over the spdif.
another “nice” thing is the gnome network app. why doesn’t it use the /etc/network/interfaces file? and why is it that if i choose to manually edit that file, the gnome network app says the network is disconnected? also, why does it only work sometimes when i want to change the ip address (using the gnome network app, not the interfaces file)?
i tried to connect to my wireless router, and after entering the passkey a few times with no success, i tried a reboot (old windows habit) and then it worked.
with some wireless connections it works, with others it does not.
so, yes, ubuntu will detect your hardware, and it will work out of the box, but as soon as you’re trying to do some “crazy” stuff like playing ac3 streams, it will crap its pants.
so, this is why i think it’s not fully ready yet for the desktop. i listed just two problems, but there are many, and i only have 8000 characters available
ubuntu devs should stop trying to make it look nicer (a task at which they fail anyway; good thing you can always change the defaults), and start trying to make it work better. seriously, stop copying osx’s looks, and start copying its functionality.
how come on osx i can have 3 sound cards (one onboard, a pci audigy card and a usb card), and it manages just fine to get sound out of any of them; i just have to select which one to use?
if i find some free time to watch a movie, i really don’t want to restart the alsa sound system to make it produce sound, or to play with the sound setting until i’m no longer in the mood for a movie.
i really hope i live to see the day linux works as it should.
Copying OSX? Well, I don’t really see how OSX has a better user experience than GNOME (given the fact that I would use roughly the same apps on OSX anyway). Sure GNOME is not perfect in any way but it’s very usable for me.
The only real advantage of OSX is that Apple controls everything, hardware and software. Of course Linux will never be able to get to this point. But wait… what was Linux about anyway? Yeah, freedom.
Most of your problems seem to be related to sound or networking hardware. In both cases, Linux has great desktop integration (NetworkManager and PulseAudio), the only problem being buggy drivers. This, in the end, is a vendor problem, not a Linux problem. For every kind of hardware you can find one that works perfectly out of the box on Linux; if you choose to buy another, well, you can’t blame Linux.
Finally I have to say that Linux has an *amazing* out-of-the-box hardware support.
simply saying that linux is not to blame for anything will never solve any problems. and if you really believe that the gnome experience is on par with the osx experience, then i really don’t have anything else to say.
me as a user, i have the freedom to choose whatever os i want. i like the idea behind linux, i really do, but i really don’t consider it ready yet.
and it’s people like you that drive users away from linux. even though i clearly stated that i would only list 2 of the many problems i found (sound and network), you took it out of context and ran with it. also, i have to point out that the drivers used for both network and sound are not from the manufacturer, but from linux itself.
what you “fanboys” are missing (besides sex) is that for the end user it really doesn’t matter what causes the problem, all it matters is that they have a problem.
my problems with network and sound should never happen, this is why i say that it’s not ready yet.
and yes, linux has very good out of the box driver support, the only problem is that it “sort of” works sometimes.
Now, I did read to page 8, but I want to add something about package management:
Try to install, say, a new OpenOffice.org on a slightly older distro (by old I mean, for example, one year). Or say you want new codecs for GStreamer. And those are the easy cases, things that don’t sit deep in the system.
Same with drivers. If you need a newer driver that is not included in the kernel, well, then you’re lost.
What you say is correct, of course, but at the same time it’s a non-issue because you kind of miss the point: if you need the latest driver or application version you have three choices on Linux:
a) Upgrade to the latest distribution release which has the new driver/app nicely integrated. No real problem as most distributions release twice a year.
b) Use a rolling distribution.
c) Compile yourself.
One of those options should fit your needs. One has to accept that the distribution model Linux uses is just fundamentally different from the one Windows uses, like it or not.
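Option c) usually amounts to the classic source-build routine. A generic sketch (tarball name invented, and real projects differ in their build steps):

```
tar xzf someapp-1.0.tar.gz          # unpack the source
cd someapp-1.0
./configure --prefix=$HOME/.local   # install outside the package manager's territory
make
make install
```

Installing under $HOME keeps the hand-built version from colliding with files the distro’s package manager owns.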
That said, a) and b) are not valid options for the usual user or for companies. A rolling distro means bugs can appear that not everyone can resolve. And do you really want the average Joe to upgrade a whole distro because he bought himself a USB TV stick? What if the new distro isn’t out yet, or doesn’t include those drivers? What if the environment is nice and I like it, and I also like the programs I have?
And there is no need to even argue the point for companies.
And c)? Well, maybe I can do that, but that’s it. I think for 99% of people none of those three options fits.
Not installing individual driver/app packages is *especially* the way to go for companies. Who would risk breaking a working but slightly outdated system for a new software?
In the case of “I want support for my new tv card but keep my old system”. Heh… well, say, I would love to run my new tv card on Windows 98, same problem. Drivers often need a new kernel, xorg, mesa etc release which are all *not* components you can just replace easily.
There’s a trick, however: inform yourself before buying new hardware! If you want to use it on Win98 (Ubuntu 7.x) but it only offers support for WinXP+ (Ubuntu 9.x), don’t buy it, or live with the fact that you need to upgrade.
While there are some inaccuracies in there, namely his whole slant on virus susceptibility amongst other things, he’s right with things like software availability and how to get it as that’s all users care about. It only serves to highlight just how much work needs to be done and how far we are away from Linux desktops starting to replace Windows ones and staying there.
Unfortunately, there are those in the Linux desktop world who think it’ll all come right if they try and make something look like a Mac. Sad. Just sad.
Don’t you think it’s funny to define some OS as “desktop ready” by the availability of some special-interest software?
Well some would consider MS Office 2007 as totally essential. Some would consider iTunes to be essential. I guess everyone has some preferred apps, mostly due to being used to them for years. Add up all those preferred apps and you will end up with giant list. But every single user will maybe only miss one or two of those. I don’t know a single category where Linux “on the desktop” is missing some free alternative for some best-of-breed Windows app. Same for Linux “on the server”. You can always construct a case of weird special-interest stuff where Linux does not have anything to offer but those cases really don’t matter in this case.
Being able to install software on a desktop for a wide variety of fields, and giving developers the means to write and distribute such software, is not ‘special interest’, however you choose to define that term.
You miss the point, as many do. Talking about specific software is pointless. It’s the ability to write, distribute and install software that is at issue. If you build it they will come.
Then you need to get out an awful lot more. Beyond some basic e-mail and office tasks there is a huge gap in required functionality, and even e-mail and office functionality can be lacking.
It’s funny. People always try to wave away the desktop by talking about Linux on the server or by dismissing the unavailability of software many people require by labelling said software as ‘specialist’. The software that people install on your average Windows desktop is not ‘specialist’ to the many who rely on it and I’m afraid applying the 80/20 rule doesn’t work either – the 80% of people use a different 20% of features each time.
Besides, the proof of the pudding is in the eating and Linux desktop share has not appreciably increased by many metrics despite being completely free to get and install. You know people are running out of ideas when Mark Shuttleworth talks about making Ubuntu as ‘beautiful’ as a Mac. It’s not going to help you.
Well, but if you look more closely: how many years are there between XP and Vista? And between Vista and 7 the same drivers work. And Linux? Maybe 3 months or half a year, until the next version of the bigger distros comes out. Great! But of course this is supposed to be the best way to handle it. No, it’s not a stable driver API or binary drivers (letting people have binary stuff is also part of freedom); it’s upgrading a whole system every half year.
You mentioned the point: windows has longer release cycles, which is fine for commercial software. For free software this makes no sense however.
Heh… sure you are “free” to use binary-only software and drivers, but assuming Linux or any free software component has any obligation to make sure this binary keeps working over the years is just very, very wrong.
I know a lot of people assume this has to be the case, but if you think about it, it’s a deeply flawed assumption. Therefore, don’t consider a binary driver as “supported by Linux”: it is supported in one specific version of Linux, for as long as the vendor likes to support¹ it.
[1] Companies are not really interested in doing this. Sure, it costs some money. But most of all, the company wants to sell new hardware (!)
If you treat the article as “Weak spots of GNU/Linux”, then some of the points are good – for instance, old programs trying to use old sound systems.
But the article is headed “Why Linux is not yet ready for the desktop”, and contains points such as this:
2.4.1 fontconfig fonts antialiasing settings cannot be applied on-the-fly. (why is this a Linux-desktop-blocker?)
5.4 It’s impossible to watch Blue-Ray movies. (plain false)
5.2 No games. Full stop. (also ridiculously false)
5.3.1 A lot of WinPrinters do not have any Linux support (e.g. Lexmark models). An argument that user should buy a Linux compatible printer is silly since that way Linux won’t ever gain even a traction of popularity. Why should I install an OS where my printer doesn’t work? (Yeah? Anyone ever heard of the Mac OS?)
9.2 (Being resolved) No parallel boot of system services. Questionable services for Desktop installations (Fedora, Suse, Mandriva, Ubuntu). No delayed loading of system services. (BEING resolved? What distro doesn’t do parallel boot?)
2.3 Many GUI operations are not accelerated. No analogue of GDI or GDI+. (how is this a problem to the average user?)
1.3 By default many distros do not set volume levels properly (no audio output/no sound recording). (Correct, if you define “many” as “the 50 most obscure distributions”)
This is so good to read. This is so true! And we all know nobody will ever be able to do anything about most of the items! Because Linux is designed that way…
I truly can’t tell if you’re being sarcastic or not.
I’m glad the guy had the guts to not only mention the horrible slowness, but dedicate several points to it. Yes, backward compatibility is a problem, package management and other things are problematic, that’s all true, but for me, the only real, annoying problem is the hopeless GUI slowness. The massive overhead of all the inefficient GUI layers. It’s only getting worse every year. The performance is vastly inferior to the commercial offerings. This is an area where Microsoft shines in an unparalleled way.
I’m going to have to take another look at my Linux and Windows boxes, because I can’t recall much in the way of GUI slowness, especially after installing updated drivers.
What kind of hardware are you running?
Yep, this list is definitely different. It’s about the worst list, full of the most crap, that I’ve seen in a long time. It’s not even worthy of being listed on OSNews.
The vast majority of items in the list have nothing to do with an OS being ready for the desktop. Listing specific applications that a tiny fraction of the populace uses; comparing the speed of a system based on one application versus one other, without even running the former in a native environment; the blatant lies about server configuration, security, and lack of documentation; the complaint about not using fonts illegally, then turning around and knocking distros for not providing “illegal” codecs; and so on. It all says one thing about this poor list: FUD.
One aspect that definitely needs to change is the emotional attachment that exists. There is always an immediate defensive posture taken whenever any critique is given, which unfortunately does nothing to improve things. Of course this is not unique to Linux, software, or just about anything these days. The problem is that improvements come through critique.
One thing that is lacking in OSS is the financial ability to do real research into how users actually view the software, use it, etc. Whether one finds OSX or Windows to be their flavor is beside the point; both Apple and Microsoft have spent considerable sums over the years doing such research, so the end product is tailored to their user base.
It is beside the point whether one person is already using Linux. Hell, there are probably some that use ReactOS who could claim it to be desktop ready. But what we are talking about is having an OS on systems alongside OSX and Windows in the stores available to consumers, not tech geeks who either enjoy the problems, flaws, etc., or simply overlook them.
God knows I will be flamed for saying this, and certainly modded down to hell, but I will say it nevertheless. I am sitting here at the moment in KDE 4.2 using OpenSUSE 11.1. While this is certainly a usable desktop OS, to put the complete package on par with either OSX or Windows 7 is almost insane. Nitpicking certain minor aspects of any OS simply distracts from the whole product. While far from being an OSX user, I simply cannot compare or put Linux on the same level. The same goes for Windows. I cannot stress enough that what consumers as a whole want or see is 180 degrees from what geeks want and see.
Let me put it this way. There does exist a good number of people that do know of Linux, have tried Linux, have seen it in action, and have chosen either OSX or Windows to be their desktop. These are voices that should be listened to, not shouted down, mocked, flamed, or modded away. But this is what happens.
On a personal level, while I may use Linux extensively, I actually somewhat despise the community for its immaturity on such topics. Too often voices are raised that pretty much give credence to the notion that Linux’s only good aspect is being the anti-Microsoft. Well, is that all Linux can be? Just the anti-Microsoft OS? I don’t want to use a product that is merely an anti to anything; I want to use a product that I can enjoy, whether that comes from Apple, Microsoft, or Linux.
So what I would strongly argue is that going forward there needs to be a significant change in the culture. In both the attitudes and development model. I would specifically argue that there needs to be a much tighter coordination process in developing the complete OS. I want to see Samba, Openldap, KDE/Gnome, Apache, MySQL, etc.. work together to produce a product, not individually, nor with limited sharing or communication. I would even argue that OpenLdap and Samba need to be merged in order to provide a true directory domain/workgroup solution. But this is getting beyond my point.
While you may not agree, I hope some will at least think about what I have written.
I have to agree… I have also been defending myself for two days from people who do not see reality. Linux is not ready for the desktop. PC-BSD is a much younger product and it seems more willing to fix stuff than Linux developers are.
I am not a zealot, but after more than 12 years of hearing “this is the year of the desktop for Linux” and it not happening, you see it is not gonna happen: you cannot expect different results if you keep doing the same things over and over.
Please, before praising the Win32 API, do a little research. And while the Win32 API is still there (for backward compatibility), it is not the API you should use if you want the Windows Compatible logo, or the new features/look, in your application. And that certification is what serious Windows developers aim at.
The Win32 API was bad, MFC was bad. The only API nicely done was .NET.
The only real fact is that still today you have to be careful when buying hardware. There are a lot of cameras, printers, even motherboards that have issues with Linux. Multimedia and gaming keyboards or mice: say goodbye to the extra keys or custom key programming.
And for applications, there is a truth in previous posts. If a program doesn’t offer anything better by a good margin over an open source alternative, don’t bother trying to sell it, because it won’t sell.
The LinEAK project has been supporting Internet and Easy Access keyboards for over 5 years.
http://lineak.sourceforge.net/index.php?nav=keyboards
The linux-gamers.net site is a good place to start for game-related stuff. The truth is, there’s a lot of support for all kinds of devices however it does take more elbow-grease than Windows.
That doesn’t mean it can’t be done.
The author does not understand linux, obviously. No different from the others…
Old applications opening sound directly? WTF? Ever tried running old DOS games on windows? Old applications are what they are… OLD!
The sound system works perfectly for me. I use ALSA, no PulseAudio, and that one is pretty old already.
API break? WTF? This is GPL stuff! I still have GTK1 libraries on my computer and GTK1 apps still run on it. What is the problem if there are new APIs? .NET is new and is not compatible with Win32, so what? Win32 is still there. GTK1 is still there and you can install as many freaking versions of GTK as you please. So WTF is the problem with compatibility? I can still run software that was compiled in 1980 if I want to.
Application development a PITA? Why are so many developers working in their free time on Linux? Because it’s fun, maybe?
LDAP? WTF? Ever heard of NYS?
Linux is not windows, linux is not windows, linux is not windows…
So you say developers don’t care about users. Actually, if they didn’t, they would not make software. Software is developed for the users; that is obvious. The fact is that they are developers and you are a user. They have many users to support and they know how to do it. If you knew better, you would do it, but you just don’t. You don’t want software developed by people who are not developers. So you think their software sucks and you whine. Think a little more, and when you have an idea about what is wrong with the software PRECISELY, post a feature request or a bug report. Then, and only then, if the developer thinks it is a good idea, will he modify the software. But remember: while you may think something is a good idea, only the developer gets to decide whether it really is a good idea for all the users (not just you), and that is a good thing, because he is the one who knows how to develop software.
I don’t agree. Windows is just as crappy as Linux but enjoys a huge marketing machine… Just give Linux a company that applies aggressive marketing, vendor lock-in, 10-minute commercials on TV, and millions of dollars to bribe hardware vendors and politicians, and it will succeed. The cause of Linux’s failure isn’t a technical one; it’s a social one. Linux is powered by nerds; Windows and Mac are powered by businessmen.
… a product. This “reason” — IMHO, in most cases — is located between the ears.
I used several UNIX systems (AIX 3+, SunOS 3+, Solaris 1+, IRIX 4.3+, *BSD, Linux), VMS, DOS (since MSDOS 3.2), Windows (3.1, 95, 98, NT4, 2000, XP, Vista, 7RC1), MacOS 7, MacOS X, … All these systems are usable, yes there are issues, but all those beasts are overall usable.
In my opinion the humans using a computer are more limiting than the computers themselves.
Look.
Back in 1992(?) a film like Jurassic Park was produced on SGI Crimson and SGI Indigo machines.
High-end DTP has been done on MacOS 7, which even lacked preemptive multitasking.
Today 90%+ of computer users use computers with at least 10x the performance of these machines, and software which is much more sophisticated. And what happens? These people explain why it is unusable …
They tell you why they cannot cut music using programs much more advanced than the SGI Multitrack used to produce Jurassic Park.
They tell you why they cannot do simple text processing with programs much more advanced than Apple Write or even Quark 3.
Yeah, a fool with a tool is still a fool.
pica
PS: Sorry, I couldn’t resist.
My, the MS fanboys are out in strength on this thread, desperately trying to come up with things that are supposedly “wrong” with Linux.
Actually, I suppose that is fair enough in a way, because in the area of “things wrong with it” Linux is clearly way, way behind Microsoft’s offerings. Linux has very little to offer with regard to “things wrong with it”.
In comparison, what has Microsoft been up to in order to delight its fans:
http://yro.slashdot.org/article.pl?sid=09/05/19/1931249&art_pos=11
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d…
Linux is NEVER going to be able to keep up with that kind of powerhouse Microsoft technology, let me tell you!
Oh wait …
Don’t kid yourself. MS has many problems.. Windows has many problems. Security is very bad in Windows.
But Windows has things better than Linux.
And Linux fans refuse to see what is in front of their eyes. I am not a salesman; I can just point out what is bad technically, or at least how Linux is perceived by real people outside of OS enthusiast circles.
If those things are not fixed, Linux is never going to go mainstream. No matter how wonderful the kernel might be.
Don’t kid yourself, Windows is DESIGNED to be crippled and to rip people off.
http://yro.slashdot.org/article.pl?sid=09/05/19/1931249&art_pos=11
Goodness me! An absolutely perfect illustration that Windows is NOT written in the best interests of the people who are expected to buy it and use it.
Microsoft Patents the Crippling of Operating Systems
I mean, if that doesn’t tell you everything you need to know about it, you have been seriously hoodwinked.
Pssssst. Wanna buy a bridge?
What is always funny to me when I read most of these arguments about why linux is not ready for the desktop is that they ignore this truth.
If ‘these’ things were so important, then why were Windows 3, Windows 95, Windows ME, Windows 98, and NT 4 considered “ready for the desktop”?
None of those OSes can meet these requirements either… yet we seem to totally ignore the past, which PROVES that many of these issues are not really important at all when it comes to real-world user marketshare.
The only thing that really counts is obvious, given the historical example of crappy previous versions of Windows having spectacular sales and marketshare: the only thing that counts for the ‘success’ of an operating system is being PRE-INSTALLED BY OEMS.
You are correct: many technical details that are discussed here are not relevant to the user. Many technical details are not at all important to have a desktop ready OS.
However, the world has changed a lot, and to convince OEMs to pre-install an OS you need to give them good arguments. They can install only one distribution… Telling them Linux has 300 distros and full opportunities to customize everything is not appealing to them, because casual users do not do that.
And being pre-installed by OEMs is not the only problem. In order for Linux to succeed, it needs to show the casual user that Linux is better. For the last decade Linux has failed miserably at it.
Here is an example. In the last Microsoft presentation, Ballmer showed that the number one competitor to Windows was pirated Windows. Not Mac, not Linux.
http://www.osnews.com/story/21035/Ballmer_Linux_Bigger_Competitor_t…
http://blogs.eweek.com/applewatch/content/macbook/microsoft_ceo_sco…
Don’t just read the news; look at the presentation and see it for yourself. According to the statistics presented, people who assemble their own computers apparently prefer to steal Windows rather than go the Linux way. Something is wrong with Linux.
The only reason I, and most people, have with using Linux exclusively on the desktop is the inability to change the desktop wallpaper from Richard Stallman’s headshot.
I mean, my home folder is right in his beard…I can’t click that.
I don’t understand the ‘premise’ part. Of course Adobe aren’t going to open up the source for Photoshop. But that doesn’t stop them selling a Linux version. ATI drivers are closed, as is Enemy Territory Quake Wars, yet I use both on Ubuntu.
I agree that audio is bad. PulseAudio is the best hope, but it’s not particularly easy to configure.
Is a single unified API necessary? Use the API you like and mark it as a dependency so the package manager can handle it.
My personal experience of fonts in Linux is mixed. There are no decent font configuration tools, but I have managed to get a decent setup that I see no problems with. But I’m not in pre-press or graphic design, so..
“Many distros’ repositories do not contain all available open source software. User should never be bothered with using ./configure && make && make installer” …ok, here I must say ‘WTF?’. Where in Windows can you set up any repositories? For the most part, Ubuntu lets you point and click to install software without having to worry about dependencies. And sometimes you want to ./configure && make. I just recompiled Amarok 1.4.10 on Jaunty, enabling support for MP4 tag editing while reducing the binary size by omitting things I don’t want, like visualizations, aRts and iPod support. Can iTunes do that?
“It should be possible to configure everything via GUI which is still not a case for too many situations and operations.” Honestly, who hasn’t had to go into the windows registry at some point? Or manually register a DLL? Point taken though – but Linux is really getting better at this.
“Few software titles” …I completely agree. This is not the fault of Linux, but the likes of AutoDesk, Adobe, Apple etc. The cost of supporting another platform is high, and wouldn’t pay off with a small user base. When the linux market share increases they’ll just have to change with the times.
“..inability to run familiar Windows software.” Can Windows run Mac software? What this argument boils down to is “It’s not Windows.”
“A lot of WinPrinters do not have any Linux support”. So far I haven’t had problems printing to any printer from Linux. At all. As with software, it costs manufacturers to support additional systems.
“No games. Full stop.” …bollocks. Every Quake game from 2 to Enemy Territory Quake Wars, Doom 3, several versions of Unreal Tournament, Neverwinter Nights, EVE. It’s by no means a long list, but they prove that it’s possible. Again, blame the developers.
“Incomplete or unstable drivers for some hardware” But let’s take the Creative Zen for example. It has HORRIBLE drivers/software on Windows, whereas on Linux I added 2 lines to a file and installed libmtp8, and I can use it in Amarok. The Logitech MX Desktop is a similar story – horrible on Windows, simple on Linux.
“It’s impossible to watch Blue-Ray movies”. No it isn’t. The studios make it difficult (but not impossible) to get ‘fair use’ out of your blu-ray.
OpenOffice 3.1 (with a few intelligent setting changes) starts in about 6 seconds in Jaunty. (Bog standard hardware). And by enabling preload you can shave that down further.
“Questionable services for Desktop installations” applies to Windows too, which uses a kitchen sink approach for services in order to dumb things down for the user.
“Social engineering” is the main cause of security holes on any operating system.
“Enterprise level problems”, “No software policies” and “No standard way of software distribution.” My job is to manage thousands of Linux desktops, pretty much by myself. A substantial team handles a similar sized Windows PC estate. I can wake all Linux PCs up at night, deploy updates and then shut them down using a 5 year old management system that we wrote inhouse using free open-source technology. The Windows guys constantly hog the network up during the day because their SMS system can’t do that, even though Microsoft have been promising these features for years. “That’s in the next version. Give us a few hundred grand please.”
For SOME desktops, Linux isn’t just ‘an alternative’, but it’s ‘an ideal’. For other situations Linux is dreadful. My wife is a graphic designer, and it’s a complete no-no for her. I play music (hobby) and it took me a week to collect the know-how to get my M-Audio Keystation 61 to work at all. Horses for courses.
Here is what it means: If you don’t accept the premise, you are a childish nerd. But if you do accept the premise, Linux has to provide certain things to succeed, e.g.
– a unified, easy method for installing proprietary 3rd party software coming on CDs/DVDs
– a much more stable API
Heck! Even OSS that is not in a repo is in many cases impossible to install, not to mention drivers that are not in the Kernel or are only available in a newer Kernel.
There is no such thing as “the package manager”! There are quite a few. And just telling the package manager to resolve and download certain frameworks does not mean they CAN be resolved. You are still in testing hell as a third-party developer. And what do you write on your package as system requirements?
Does it matter? If a user does not find a software in the repositories, he will be lost. Note my comments above about the premise part.
You keep ignoring the hard fact that Linux still, in 2009, lacks some fundamental abilities for installing third-party software. Man, even the guys who do support Linux, like Ryan C. Gordon (e.g. UT 2004), are constantly saying that something has to change. Read for example:
http://icculus.org/cgi-bin/finger/finger.pl?user=icculus&date=2006-…
http://icculus.org/cgi-bin/finger/finger.pl?user=icculus&date=2005-…
Though old, they are still true! Which should make you think about how seriously some people take the importance of this.
Same story: Blame the Linux developers for not providing the necessary infrastructure.
Just look at Apple’s App Store: though it initially targeted a relatively small and new platform, developers are going crazy over it. The infrastructure is just great.
OMG… You call adding “2 lines to a file and installing libmtp8” simple? You must be a comedian 😉
Question: Why didn’t the distribution maker make those “few intelligent setting changes”???
Isn’t waking up and installing possible with things like Intel’s vPro, too? And tell us about the “standard way of software distribution” in Linux. I can’t find one; it’s absolutely distribution-specific.
See? And for the desktops where Linux is ‘an ideal’ in your opinion, aren’t there maybe even better solutions like Sun Ray thin client <-> Server architecture?
Linux is usable as a desktop right now. I use it everyday.
I do have windows on a VM, but rarely use it.
The KDE 4.2.x environment really works well, and even newbies have no problem using it.
Linux suffers from the fact that it gives way too much freedom to its developers. Instead of being confined to a certain domain of the system, all applications are equal citizens. Any given app may make use of a variety of low-level system calls, use one of several GUI toolkits, employ one of many compiled or interpreted programming languages, etc. It’s a terrible ‘free-for-all’, and that has led to a real headache for system maintainers and end users alike. Then we have distributors, like Ubuntu, who are essentially repackagers of existing technologies. They try their best to coax all these separate systems to play well with each other. Meanwhile, unsuspecting users find themselves as perpetual beta testers for audio subsystems (PulseAudio) and graphics subsystems (X11), which may break their system with each passing release cycle. Feature creep and bloat are constant forces working against the system, making it heavier on system resources, less straightforward, and more buggy. To ‘top’ it all off, we tack on a nice ‘chrome’ in the form of a Desktop Environment, in the hopes of sugarcoating the end-user experience as much as possible to somehow make up for all these previous sins.
Instead of all that, what we need is a system with good abstraction, clean interfaces, good integration, and a presentation layer that can be easily modified and tailored to individual tastes (that in itself would resolve nearly half of all the brainstorm.ubuntu.com ideas). We especially need a system where applications have to go through proper channels to do what they need to do (much as things happen now in Android).
Picture a three-tiered system:
The ‘downstairs’ will be a hardware abstraction layer (yes, I have heard of HAL). This layer will vary from platform to platform. It will present a unified interface to the layers above it. This layer will communicate with device drivers, and the ‘bare metal’. This layer will be closed to application developers.
The ‘middle floors’ will be system-wide processes: security, memory management, inter-process communication, etc. This layer will provide the necessary system-wide services (graphics, sound, networking, etc.) to support the rest of the system. It will handle the proper containment of each application and process on the system, and it will contain all the system-wide libraries. It will handle all the real ‘computation’ of the system and its applications, and it will be the ‘controller’ (to borrow an MVC term) for all applications on the system. Application developers will have limited access to this layer through a strictly-enforced API.
The ‘top floor’ will be the presentation layer, ‘chrome’, or end-user interface. It will be cleanly abstracted from the floors below it and relatively easy to modify. It will allow easy internationalization of the entire interface, including right-to-left languages and Unicode character sets, and it will support themes and scalable graphics. Limited access to this layer will be provided to both application developers and end users. All applications speak directly to (and only to) their controller in the middle layer, which in turn may pass calls to this presentation layer.
Applications will be confined to a specified API, which will make calls to processes on the middle floor (which acts as Controller — in a similar way, though not identical to MVC) only. The middle floor will then make the necessary calls to the downstairs and top floors, as necessary.
Better yet, we can do away with applications. Why should functionality be tucked away in an application when that functionality can be shared openly? We don’t need 4 applications, each with a separate spellcheck function. We need one centralized spellcheck function, which can be accessed from any part of the system. Why bury the HTML engine in a web browser? What if the desktop wants to render an HTML widget? What if there are help pages written in HTML? Aza Raskin gave a nice talk on Google Video about this (“Don’t make me click”), where he talks about letting functionality come out from behind the containment of applications. Instead of his ‘gnome-do-ish’ way as a tacked-on feature, this new ‘command line interface’ should be a foundational part of the entire system.
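The “one shared function instead of four embedded copies” idea can be sketched in a few lines of shell. To be clear, the `spellcheck` script and its tiny word list below are invented stand-ins for a real system-wide service, not anything that exists:

```shell
#!/bin/sh
# Hypothetical sketch: a single shared spellcheck service that any
# front-end on the system could call, instead of each application
# embedding its own private checker.
set -e
cat > spellcheck <<'EOF'
#!/bin/sh
# The one system-wide spellcheck function (tiny stand-in word list).
words="hello world linux desktop"
for w in "$@"; do
    case " $words " in
        *" $w "*) echo "$w: ok" ;;
        *)        echo "$w: unknown" ;;
    esac
done
EOF
chmod +x spellcheck

# Two different "applications" reuse the same service:
./spellcheck hello desktop    # say, the word processor
./spellcheck linux wrold      # say, the mail client flagging a typo
```

Real systems would approximate this with a shared library or IPC rather than a script, but the division of labour is the point: the function lives in one place, and the applications only present it.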
Also, get rid of the terminal and all those crazy commands ($ sed $^{} | awk 4 | grep ‘hi’). When Cortez arrived at the New World, he burned down his ships. As a result, his men were well motivated to survive there. There was no going back. Similarly, if the system has a terminal, developers will find occasion to do things that can only be done on the terminal. This will lead to scenarios which require end users to do things that can only be done on the terminal. This will displease, discourage, overwhelm and confuse many users. DO AWAY WITH THE TERMINAL. Just don’t port one to the system. Then everything that has to be done on the system will have a proper GUI way of doing so. Now you have just saved millions of users from having to hear the dreaded words “now just fire up a terminal”, or such ugly words as “grep”, “wget”, or “apt-get”.
This is what we need: proper containment for applications. Proper abstraction of critical system components. Clean hardware abstraction. Standardized APIs. A single presentation layer that may be easily modified, so that users can have some choice in how their applications behave and appear.
Actually, this is the kind of comment that is worth looking at, and the kind of comment I expected to see after this news item. I have to agree with this analysis of the situation. System philosophy, not implementation details.
I proposed self-contained applications to solve this situation. Apple has done something similar with the Cocoa framework, and what you describe might work for some applications, but there are applications that need “extensions”, bypassing the layer structure and going down closer to the metal. How do we solve that with this design?
This is exactly how Mac OS X, NeXT, and OpenStep provide functionality, with a little bit of Cyberdog and OLE. I like it.
I do not know if this is possible. Getting rid of the terminal is very difficult, since Linux depends heavily on it and UNIX compatibility is needed. Crafting everything from the beginning would be too difficult and time-consuming. But it could be fun just to try. Maybe hide it from the user, only available for root?
I agree with that. It is important to have contained applications, not just to uninstall better, but to provide quality applications. It is very difficult for a developer to provide quality if, after each update, all libraries get new versions that he has not tested his apps with. And the line between apps and system has to be drawn better.
http://digg.com/linux_unix/GNU_Linux_Desktop_Market_Share_is_4_Gart…
http://www.w3schools.com/browsers/browsers_os.asp
http://www.computerworld.com/action/article.do?command=viewArticleB…
Is there no bright corner of the Internet left that is free of FUD Trolls?
http://artistx.org/site2/
ArtistX 0.7 is an Ubuntu-based live DVD that turns a common computer into a full multimedia production studio. There is no need to install the software.
Let’s comment from the beginning:
One problem is that some of the problems he mentions are distro specific and not general, but anyway.
1.*:
Yes, audio on Linux has been a mixed bag, but has this ever been a problem for users who don’t need to do audio work? I mean, my Linux distribution has a nice volume control in the KDE panel at the bottom. That is all I have ever needed as a user (and the exact same thing I used on Windows).
2.1
While Win32 may be a stable API, calling it high quality is insane. (And yes, I have developed a C++ application against Win32. Not fun or effective.)
I can’t mention a single large application that only uses Win32 for the GUI toolkit. (Remember, Win32 is so bad that it doesn’t even include a split pane.)
But here is a list of applications that do include a non-standard toolkit:
Anything Adobe has ever made (Photoshop and so on).
Microsoft Office. Oracle. Firefox. All the games.
And I don’t think Qt often breaks backwards compatibility. The last time they really did that was from Qt3 to Qt4, and you can install Qt3 and Qt4 together, so this should not be any problem for the user. An application written for Qt 4.0 should not have any problem running with a Qt 4.4 library.
2.2: Very slow GUI, unless you run the binary-only drivers for your graphics card. (Which is exactly the same as on Windows: Windows is also very slow if you don’t install the binary-only drivers.) If you install the drivers, the GUI is as fast as on Windows (only tested on nVidia, don’t know about ATI/AMD).
2.3: Which important GUI operations are not accelerated? Text antialiasing may not be, but who cares, as long as my 5-year-old CPU can do it so fast that no slowdown is noticed?
2.5: What??? Qt does double buffering. (And yes, it is supported by the X server too.)
3.2: You can just skip the package installer and ship the software as a ready-to-install tar.gz archive. PostgreSQL (yes, the open source DB), Firefox, OpenOffice and a lot of other applications do this.
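A rough illustration of that tar.gz route (the application name and layout here are made up for the sketch): the whole trick is a relocatable directory tree plus a launcher that resolves its own location, which is essentially what the Firefox and OpenOffice tarballs do.

```shell
#!/bin/sh
# Hypothetical sketch: ship an app as a self-contained tarball that
# unpacks and runs anywhere, with no package manager involved.
set -e
mkdir -p myapp-1.0/bin
cat > myapp-1.0/bin/myapp <<'EOF'
#!/bin/sh
# Resolve our own location so the whole tree is relocatable.
here=$(dirname "$(readlink -f "$0")")
echo "running from $here"
EOF
chmod +x myapp-1.0/bin/myapp
tar czf myapp-1.0.tar.gz myapp-1.0

# The user just unpacks wherever they like and runs it:
tar xzf myapp-1.0.tar.gz -C /tmp
/tmp/myapp-1.0/bin/myapp
```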
3.3: This one is unfair. You were talking about points where Windows is better than Linux, but Windows doesn’t have all open source software either. (Just a funny note: Synthesia is a ‘Guitar Hero look-alike’ for keyboards. Its binary is Windows-only, but getting the software to work with a software MIDI player (TiMidity) is easier on Linux with Wine than it is on Windows. And this is for a native Windows application.)
3.4 (compilers): How does this differ from Windows, where you have 4 different compilers and don’t even know how Microsoft and whatever third-party library you use are compiled? And why do you care? We do have a fixed C ABI, which really has not changed in many years.
3.4 (different library versions): How does this differ from Windows? If you depend on a special non-standard library you can bundle it yourself (or do a static link), just like on Windows. Just for fun, try to see how many mfc*.dll files there are on a Windows box. (Last time I used Windows, the answer was 11.)
4: Examples, please? My Linux distribution may have some things that can’t be configured with a GUI, but I have not found them yet. Anything I need to configure, such as network setup, screen resolution and so on, has been configurable from the GUI.
8.1: And you can do this easily from Windows?
Remember, the premise is (or should be) that Windows is ready for the desktop.
9.1: While the linker is bad, I don’t think that is the problem here. Remember that Wine is mostly a wrapper around native Linux calls/libraries, so running OpenOffice with Wine should not require less linking than running OpenOffice natively. The problem is most likely that the Windows version of OpenOffice has been much more optimized, similar to, for example, Firefox, which runs faster on Windows than on Linux because they used profile-guided optimization on Windows but not on Linux.
12: Is this different from Windows, or are you trying to prove that Windows is not ready for the desktop either?
13.1: The only old applications I have ever had to run were SAS and Java 1.1. Both worked perfectly on a distribution made 10 years after the application.
This might be a problem for other applications, but one that is by definition impossible to solve now, because the old applications are already written and we can’t change them. But you could say that this has been solved going forward, because binaries released today are likely to run 15 years from now.
13.2: That should not happen. Got any examples of libc versions where this is a problem?
What most likely happens here is that the application is not source-code compatible but is using autoconf or similar to rewrite the source code to match the installed libc, and the rewritten code which gets compiled is not source-code compatible with the old libc. So you might change this point to:
Using autoconf to manage the differences between Linux distributions is bad.