Antrix Angler has written an editorial for tuxreports.com in which he takes a look at some of Linux's shortcomings for desktop usage: “What I intend to highlight in this piece are some of the major shortfalls (in my opinion) in the current state of Desktop Linux. The intent is not to belittle or ridicule the efforts that have been made to get Linux on the desktop. Rather, it is in the hope that it will spur development to overcome these shortcomings; because this is the way Linux has developed.”
Honestly, how many bloody commentaries on the “state of desktop linux” do we need to read? Instead of people writing their opinions, why not just get into the effort and try to fix the problems?
This article isn't very good. The author obviously hasn't taken the time to look at the various distributions – for example, SuSE has integrated their YaST tool into the KDE control center for over a year, which is exactly what he suggests. His comments about graphics card support simply suggest what the distros already do. He also complains that Open Office is slow to load, which is hardly news to anyone who's used it. I could continue in the same vein, but I won't.
Rich.
A little off topic, but the article mentions menu editing and I have to agree with him about GNOME's menu editing capabilities (KDE is OK for me). Using Nautilus to edit the menu requires restarting GNOME to see changes (but doesn't work reliably anyway). Right-clicking the menu allows me to add entries, but submenus don't work. In general it is a mess: sometimes it works, but most of the time it doesn't. Changes in the entry dialog are also lost most of the time. And yes, it is GNOME 2.2 compiled from sources. To be fair, I have to admit that this (and the file selection dialog) is the only real trouble I have with GNOME.
We are beating a very, very dead horse here. I agree with the first poster. We all know Linux is no M$, so let's move on and fix the problems.
Anyhoo, I do not want Linux to be another M$. It is odd, but I find myself thinking: so many distros, so many desktops… I like that variety and choice; doesn't everyone?
I would be very put off if Linux was as boring as M$.
I'm perfectly happy with Linux being for power users. If it never becomes a major desktop player, so be it. As for the hardware, I've never had a major hardware problem, except for winmodems. I know that if I use decent hardware, my system will run Linux. I trust that my Abit motherboard, nVidia graphics card, Creative sound card, USR hardware modem, 3Com NIC, Western Digital HD, and Acer CD will work. I buy stuff I know will work with Linux; it's one of my main research points.
As for the software, I'm happy with the offerings; they suit my tasks. Even if they aren't the best, as long as I can do the work in a reasonable time, I'm happy. The only thing I want a little more of is games, but hey, they aren't necessary.
I'm not trying to troll, but as long as Linux serves my needs, I'm not worried if 95 percent of people are using Windows. I can customize Linux to do what I want, and I'm happy.
All these ‘Linux on the desktop’ articles can be reduced to three main requirements.
1) Easy installation of binary packages.
2) Easy installation of new hardware.
3) Integration of system configuration utilities with the GUI.
All valid points. All very hard to accomplish.
To accomplish all three will require the end of the Linux desktop ‘distribution’.
Why? Because once they are achieved, there will be no difference between the separate desktop distros.
If any distribution sets up hardware properly, allows you to install any software, and provides an easy means to configure it, why choose one distribution over another?
It's the differences between the distros that cause all the problems. Each has its own package manager, slightly different file system, keeps its config files in weird places, separate hardware detection and config methods, etc. To enable ANY package or hardware to install cleanly on ANY distro, these differences would have to be nullified. LSB compatibility does not go far enough to achieve this, as it does not include hardware configuration or a set of common GUI config tools.
There will still be room for specialist distros such as embedded, hardened, or supertweak source-based (like Gentoo), but for the desktop user who has not the slightest interest in what goes on behind the GUI, there can be only one.
I know many of these articles are repetitive and can get tiring. The sheer number of them, however, shows there is great interest.
Honestly, how many bloody commentaries on the “state of desktop linux” do we need to read?
Well, to quote the article: Those in the know will remember how, just a couple of years ago, people were saying, “What good is Linux? It doesn’t even have a journalised file system!”. Today, thanks (in part) to the rants of those very people, we have a plethora of JFS options to choose from. “The squeaky wheel gets the grease” and all that jazz.
Instead of people writing their opinions, why not just get into the effort and try to fix the problems?
Again to quote the article: I have a minimal knowledge of programming… Or, as in my case personally, I have minimal time for programming… The review is written quite respectfully, I think, and gives credit where credit is due.
In any case, whining about a problem is a way of getting into the effort and trying to fix it. Open Office's “Hey, look at me! Don't look at other windows! I'm all you should be able to look at right now!” attitude is such a severe usability problem that it ought to be whined about. (Although on my machine OO.o only takes a few seconds to load, so I wouldn't ordinarily whine about it.)
Geekzaus, wouldn't an open source and free MS be a good thing? I am not saying Linux has the power to do such a thing anyway (unless something is done to make a single *nix standard, which has been discussed before and there is no need to address it anymore). If Linux is to become a contending OS, then it must have a ‘simplified’ desktop for average users to understand.
I am sick of people complaining about this too, though. So I wish someone would just help developers fix it.
4) Ease of migration for an existing Linux system from old hardware to new hardware. Try swapping out your motherboard from a P3 to an AMD Athlon and see what happens. To fix it in Windows XP, you have to boot off the CD and negotiate the menus till you hit “Repair Current Installation.” All your desktop settings remain, and your bookmarks. Try that with a Linux distro. I haven't been able to get this working yet.
I wish the various distributions would make their first priority a package management and installation system that was consistent and simple, like the author described. I wish they could agree on a consistent desktop interface. If they could manage those two things, Linux would be a real alternative for average users. Accomplishing that shouldn't in any way make Linux less attractive for the power user. Linux's great strengths are stability and customizability; those qualities will always be there and provide ample opportunity for different distros, user interfaces, desktop environments, and installation routines. It seems like the starting point should be usability, and products can express themselves from that point. Products shouldn't differentiate themselves by their different Rube Goldberg workarounds for basic, everyday tasks.
Gawain,
Good point. I personally use Linux for the simple fact that it is challenging on a level that keeps me interested. I was being a bit sarcastic but serious at the same time. :)
I use Linux for 2 reasons:
1: It isn’t M$. (I am not a M$ hater, I just choose to seek alternatives rather than complain about something I have no control over.)
2: It is the road less taken. (This appeals to my geek sense.)
I believe that if it gets too successful, this may change. I do not want Linux to be another M$, I just want it to be Linux. On the other hand, I do not mind paying for a distro if it is a quality product.
As for helping companies find alternatives to the M$ licensing woes: it really doesn't concern me, due to the fact that I am not in that arena. This seems to be one of the main pushes behind getting Linux to the desktop. As for me, I already use it and think it's fine. Yes, it has quirks, but that's half the fun. :)
As for being a contending desktop: it already is, it just needs to mature a bit. That will come in time. I just hope it matures and doesn't grow into a monster. As for Joe User being able to install and use it… that will take time, but it is becoming more possible every day.
The problems discussed here go far beyond the desktop…
IMHO the biggest problem with free software is that you (mostly) don't program against interfaces; you use the implementations directly (but this also applies to proprietary software systems). That is, you use GTK+ or QT or whatever, you don't use THE WIDGET TOOLKIT. Vendors of proprietary software can dictate which implementation to use; developers of free software can't (there is something like evolution: good software lives, bad software dies). That's why proprietary systems are more integrated, and why free software takes longer to integrate seamlessly. Both pay a price: security (keyword: mono-culture) and/or trickier maintainability.
I think this has something to do with developer laziness (designing an interface AND an implementation means double the effort) and with the lack of good aspect-oriented programming and component technologies, which would enable evolutionary software design (in contrast to top-down or bottom-up design), the separation of implementation and interface, and loose component bindings.
Addressing that would simplify package distribution, UI, software maintainability, and more.
I understand completely what you are saying.
In Linux, the way around this goes something like this:
Before (notice I say BEFORE) you upgrade the motherboard, grab your install CDs. Go to the one with the kernel packages on it (usually CD 1 on any distro) and install the kernel package for the type of CPU you are going to upgrade to.
rpm -ivh kernel-<such and such version AMD or whatever>.rpm
This will install the new kernel alongside the old one in case you have to revert back to the old motherboard.
Install the source rpm for the proper CPU the same way (the kernel source is sometimes on another CD).
What does all this give you? It gives you a machine with two kernels on it: one for your old CPU and one for the CPU you are about to upgrade to.
Shut the machine down and replace the motherboard.
Boot up the box with the new motherboard, and LILO or GRUB will list two different lines for Linux (one for the old kernel and one for the new AMD kernel). Choose the one for the new kernel and it will boot up using the proper kernel for the proper CPU.
When you are sure you don't want to revert back to the old motherboard or the old kernel, you delete the old kernel.
rpm -qa | grep kernel
(this lists all the kernel RPMs on your system so you can copy and paste the old one into the next command I list)
rpm -e kernel-<old one for the P3> kernel-source-<old one for the P3>
Be sure to pick the right one, because you do not want to delete the working kernel, just the old one you no longer want. (Note that rpm -e takes the installed package name, without the .rpm extension.)
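One quick sanity check before the rpm -e:

uname -r

Whatever version that prints is the kernel you are booted on right now, so leave that one alone.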
I use command line references since I do not know what distro you have. I believe that SuSE calls their kernel package k_deflt.
Yes, it would be cool if you could just pop in a CD and fix the kernel to the motherboard you are currently using.
On one hand, the strength of Linux is that there are endless options and the possibility to modify GPL-code to best suit your needs. On the other hand, this increases complexity and makes it more difficult to manage a Linux system.
For corporations this is not really an issue. Many of them have admins that take care of the systems, and the ordinary corporate user will not need to bother. There is no reason for corporations not to use Linux on grounds of “difficult” system management.
The complexity might be scary for home users though, but they are probably better off using Windows anyway (if they do not appreciate the power of Linux).
Of course it would be nice if the things in the article were realized, but they are not critical for Linux's survival or anything like that.
OK, it's not going to happen. Linux is a mess. I don't know how many distributions, window managers, environments…
I have been using Linux for years and it has always been the same thing. If someone creates something new for the OS, then someone else has to do it better, or under a different license.
Come on, commercial OSes will always have the advantage. They can mandate that everyone use a standard decided on by the company. This is not a bad thing; this is the way that work gets done. I mean, if you have a hundred different libraries, where the hell do you start?
I guess you could think of it this way: “only the strong will survive.” But evolution takes a long time, and it's going to take about that long to clean Linux up.
BTW, post something new please…
I agree that standards make work more efficient for corporations. However, it is possible for corporations to standardize on Linux instead of Windows if they want to. Corporations do not have to use 43 window managers. They can standardize on one. No problem, make the decision, learn the system and go for it. Linux is about choice.
>I have been using Linux for years and it has always been the same thing. If someone creates something new for the OS, then someone else has to do it better, or under a different license.
This only applies partially; free software is in “persistent flux”: some software merges, some “dies” (as I said before). But apart from that, having multiple implementations(!) of the same thing is a GOOD thing (freedom, security, personal preference, …). Multiple interfaces (for similar implementations) should only be used when necessary.
>Come on, commercial OSes will always have the advantage.
This would be untrue if people used interfaces instead of specific implementations.
Will require:
1) Easy installation of new upholstery and seating.
2) Easy installation of engine upgrades and mods.
3) Integration of diagnostic tools with the steering wheel.
All valid points. All very hard to accomplish.
To accomplish all three will require the end of the car ‘manufacturer.’
Why? Because once they are achieved, there will be no difference between the separate car makers.
>>>>>
I hope you realize how silly you sound. First of all, for a large percentage of users (office users, who have tech support staff), stuff like RedCarpet and Kudzu makes your three points quite obsolete. Now, if you're talking home users, then only point #3 is valid. Most distros figured out packaging and hardware detection long ago. On my Inspiron 8200 (running Gentoo, no less!) all I had to do was download and install nVidia's binary drivers (which is a good idea even with Windows), a process that, I might add, was completely automated. Everything else (from my NIC to my sound card) was autodetected by the kernel.

As for software installation, I can't even beat up the RPM-based distros about that anymore. If you're still downloading RPMs off the web manually, then it's your own damn fault. Use up2date, urpmi, or RedCarpet. They're even built in these days! If you're using Debian or Gentoo, you've had package management tools that outstrip anything Windows can do for as long as you've used your distro (and yes, there's even a GUI for both!)

The “no more distros” comment is totally off base. Let's see: RedHat will probably still be there as a GNOME-oriented business distribution. SuSE will be the business distro counterpart for KDE and in Europe. Mandrake might take the home user market. Slackware will still be there running servers with admins that know what they're doing. Debian will still be there for servers that are mission-critical enough to require rock-solid, time-tested configurations. Connectiva will still cater to its Latin American market, and RedFlag to its Chinese one. And you'll have to drag Gentoo out of the cold, dead hands of power users who want the ultimate in bleeding-edge, 0-day software. Each distro has its individual strengths.

Microsoft has deluded everyone into believing that “there can be only One!” They talk about unification and unilateral changes as if they're good things. Rather, in a healthy capitalist market, things are quite the opposite. Just take a look at GM, Ford, Chrysler, Toyota, Nissan, Honda, Fiat, et al. They all make cars, don't they? Should they all get together and make “one really great car”? Of course not! Thus, the number of competing Linux distros is a natural symptom of a competitive market. Just as people are free to choose any car, because they all run on the same roads, people are free to choose any distribution, because they all run the same software. Without the proprietary products, vendor lock-in, and locked software base that make Microsoft a monopoly (and yes, a judge ruled that MS was a monopoly, they just didn't get punished), the number of Linux distributions is simply a symptom of a natural competitive market.
“Using Nautilus to edit the menu requires restarting GNOME to see changes (but doesn’t work reliable anyway).”
Works fine for me, has for months.
“4) Ease of migration for an existing Linux system from old hardware to new hardware. … I haven't been able to get this working yet.”
I do this all the time; it's never been a problem. Let Kudzu do its magic, then it's back to work as usual.
I've heard Xandros has done something along these lines, but I haven't got the money to try it out :)
But now, thanks to OO.o, no more reboots! (Except for the occasional fragging in Quake3 :) )
Who the hell uses these when writing an article?
This piece was terribly written and poorly researched.
Honestly, how many bloody commentaries on the “state of desktop linux” do we need to read? Instead of people writing their opinions, why not just get into the effort and try to fix the problems?
Well, it must take more than that; I have not seen some of those features yet.
&ndash
Huh? What is all that about?
Easier Software Installation: This one is the biggest deterrent to most new users. No, (./configure; make; make install) is not an acceptable way to install software. And neither is RPM; most users would reboot to Windows the first time a dependency issue crops up. I haven't tried apt-get; I don't have a reliable 24-hour high-speed net connection. But from what I've heard, it still sounds like something of a mid-way solution. What I want is a simple GUI setup for all software. A setup which asks if I wish to install this software for all users (as root) or for myself (home directory). And does the install. Period.
I don’t know how the technical details of conflicting libraries, version mismatches, compiling and whatever else will be sorted out. But I trust all you code-warriors out there CAN do it. And all I can say is please, do it!
Did any of you hear that? I am for this.
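For what it's worth, since the author says he hasn't tried apt-get: on a Debian box the whole dance he asks for is already down to two commands. A rough sketch (gnumeric is just an example package name):

apt-get update
apt-get install gnumeric

apt-get chases down and fetches any missing libraries itself, and asks a single yes/no question before doing the install.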
System Configuration: I've not seen a single distribution to date which makes this task simple and hassle-free. The idea of hunting around in menus searching for the right config tool is illogical. So is the idea of having one control panel for desktop settings and another for system settings (Mandrake, are you listening?)
Well, SuSE does have this.
Easier menu customisation: You have to give kudos to Microsoft for this. There’s nothing easier than modifying the Start Menu (except, of course, not modifying the Start Menu!). A simple right-click on any entry offers all the options I need. And the menu loads real fast!
Well, SuSE got this almost right (you have to go to the control center), but a little more effort would be good.
And don't say “go back to Windows”; if I wanted Windows, I would use it.
Hmmm…
The part of the above quote that bothered me is the fact that I have been running Q3 on my Linux box for fragging for a while now, along with Castle Wolfenstein.
Waste of a reboot. Now, if he had to reboot to edit a Visio network diagram or something, then I could understand.
I thought the article summed up a lot of issues people here have been talking about for a while. The article had issues, but nit-picking it does not address the central concerns still out there about using Linux on the desktop.
The issues with libraries and dependencies are not new ones, either with Linux or with commercial Unixes for that matter.
There were a few times when I installed some little free-for-download app for Solaris where it would bomb immediately, asking for a library, or a version of a library, that was not there.
Back in the day, these apps would come in binary format in a tarball from some little out-there site. Most of the companies who bothered with commercial Unix support at all had a way around this. It is revolutionary and a bit insane, but it is called a statically built binary.
What the heck is that? Go to Opera's site and start to download an RPM of Opera. There is one little choice that should stick out like a sore thumb: a static build is offered to the general public that has no dependencies, because all the lib stuff is compiled in. It makes for a larger download, assuredly.
However, it resolves a lot of issues with dependencies and such.
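You can even see the difference for yourself with ldd (the path here is just wherever your Opera binary ends up, so adjust it):

ldd /usr/bin/opera

On the shared build that prints the long list of libraries it needs; on the static build ldd just reports “not a dynamic executable,” because there is nothing left to resolve.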
If you try to upgrade Evolution, another product made by a commercial company, you had better be ready to download at least 12 different packages and install every single one, or you will get at least one conflict or dependency issue.
BTW, if you rpm -Uvh every single one of the packages provided for your distro by Ximian, you will be fine, but God help you if you don't.
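In other words, something like this, assuming you dumped all the Ximian downloads into one directory (the directory name here is made up):

cd ~/ximian-update
rpm -Uvh *.rpm

Upgrading them all in one shot like that lets rpm work out the ordering between the packages itself.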
If Ximian provided a statically built Evolution, sure, it would be huge! Still, it would be a lot easier to upgrade and use the thing.
Is the problem an rpm one? No; at least rpm checks for the dependencies and such, and reports back what is missing. As long as a package management system installs, upgrades, and removes packages and reports back conflicts correctly, it is fine IMO.
The problem is with the way that packages and package groups are put together, from the development level outside of the distro (there are tons of libs that should be in gnome-libs because they are a dependency for 90% of the GNOME apps out there) to the distro itself (do we really need separate gnome-print, gnome-printui and gnome-print-lib packages?).
Part of this has to do with the fact that different groups of developers develop different libs on different schedules from other libs that are also used by the same group of programs.
Another problem is the fact that, as a community, the open-source groups are getting further and further away from the days when most applications (text editors, email packages, those sorts of things) could be downloaded in a static build format. I remember a time when the majority of programs could be downloaded as statically built binaries. No muss, no fuss.
This would be untrue if people used interfaces instead of specific implementations.
A lot of Linux software already does this. For example, I don't think people have realized that they can stop making interfaces for sox now.
If you know of some universal interface, please let me know. I hope it's not QT or GTK.
So I think you will all agree with me if I say it is the distribution maker's choice to code-freeze the desktop environment of their choice.
Many people seldom update as frequently as we power users do.
Another thing that bothers me is this GNOME menu editing complaint; it isn't that hard, and yes, it does update as soon as you change the properties of the launcher in question. And I firmly believe that it is up to the distribution makers to work with the menu system; putting a “nautilus applications:///” shortcut in the menu isn't that hard either, for the distro maker. This is something the distro maker should do.
And KDE 3.1 has its own issues, such as redraw bugs and a total bog-down of the system when doing anything other than editing text in Kate.
Most of the problems explained in these “reviews” are being addressed.
This is still OSS development, not an antimatter warp drive.
And I'd like to second Rayiner's great explanation.
I said ” To accomplish all three will require the end of the Linux ->desktop<- ‘distribution’.” There will always be a place for specialist distros, but here I was more concerned with the whole ‘Linux on the Desktop’ farrago.
I use Gentoo, and find it easy to use and reliable, but I don’t see most people making the effort to understand it, even though there are many benefits if you do.
What I like about Gentoo is that when you install KDE, you get KDE, not KDE with some irritating tweaks and pointless modifications.
KDE is very nice to start with, and messing with it makes it less coherent and stable. If there are bug fixes to be made, these should be merged back into the main source tree, not randomly scattered across the different distributions, some of which will have different bug fixes than others.
My point is that if it is made easy to install any software you like, without being required to configure it beyond a few user preferences, then what are the benefits of choosing a particular distribution?
If I want a server ‘distro’, I will install Apache, MySQL, PHP, etc.; if I want a desktop ‘distro’, then I choose KDE, X, ALSA, MPlayer. What is a distro other than a collection of software?
I have been running Linux for some years now, and the longer I do, the more similar the distributions appear. What differences remain are sources of incompatibility rather than advantages.
To use the car analogy… The way each distribution tweaks a desktop like KDE differently is like buying the same make of car, but each garage sells you a version that is slightly different, with different flaws and fixes, and the indicator stalk on a random side of the steering wheel.
If you just want that particular make of car, you want it how the manufacturer intended, not with unpredictable tweaks and modifications.
Perhaps it is just that when you find the distribution that is perfect for you, it is hard to understand why anyone would rather use anything else. But, I still hold to the fact that the software is important, not the distribution. There is enough variation in window managers alone to keep anyone happy if they like a particular look and feel.
Having a medley of hardware detection, system configuration and packaging methods is at best irritating, and at worst the main barrier to widespread acceptance of Linux as the future of desktop computing.
This would be untrue if people used interfaces instead of specific implementations.
A lot of Linux software already does this.
Which one?
If you know of some universal interface, please let me know. I hope it's not QT or GTK.
QT and GTK are surely not universal interfaces; if I program for QT, I get a QT app, and the same applies to GTK. The ideal situation would be if a program used a software unit which addresses a specific concern, in this case a widget toolkit (I know QT can also do more), and the user chose his preferred toolkit. This flexibility isn't possible BETWEEN applications; you can only do it within OO languages through the use of an interface and dynamic binding. If this were possible across different programs and different languages, in a concern-related manner, it would solve many, many problems (mentioned in my other posts).
As far as installation packages go, here's a question for the people who say ‘Well, just use RPM/apt/Red Carpet/whatever’…
What happens when the app you want to install isn't available for the above? Worse yet, what happens when the app is available, but it's several versions out of date? In my experience, ./configure; make; make install works about half the time.
It's because of things like this that new versions of apps are considered ‘bleeding edge’, even if they are FINAL releases. WTF is up with that?
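(And when configure does fail, you get to play detective: the only real clue is usually buried at the bottom of config.log, e.g.

tail -40 config.log

which at least names the test, header or library the configure script choked on.)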
Which one?
grecorder (sox), gnome-ppp (pppd), kppp (pppd), that X CD roaster software, I can't remember its name (mkisofs, …). There are a lot of other ones.
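And every one of those frontends boils down to assembling a command line for the backend tool. A recorder frontend, for instance, ultimately does little more than run something like:

rec recording.wav

(rec ships as part of sox and records from the default audio device; the GUI just wraps calls like that in buttons.)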
The ideal situation would be if a program would use a software unit which addresses a specific concern, in this case a Widget Toolkit
In my experience, all good software units address a specific concern. Are you talking about a GUI or some other interface?
This flexibility isn't possible BETWEEN applications; you can only do it within OO languages through the use of an interface and dynamic binding.
Why couldn't I use a dynamic language like Guile or Scheme?
I've been following all the discussions about Win vs. Linux only because I'm forced to use Linux; some of the statistical software runs only on Linux machines. What bothers me is that lin-people keep saying how “things are hard.” OK. I tried to install RH 8.0 on my laptop: no power management. The FAQ says that the implementation of ACPI in my Compaq machine is “buggy.” I wasn't lazy, so I installed Win 98, and guess what, the power management worked. The point is: money buys good, efficient software, and the ones based on enthusiasm keep living in the dark. People just have to accept this fact.
P.S. I agree with the guy who wrote the article on all the points he made.
I have yet to find a compelling enough reason to use Linux. It's just so damn archaic. It looks functional enough… up until you actually try and do “anything.” And it's one of the most featureless OSes on the face of the earth.
At least AmigaOS was groundbreaking, M$ OSes are functional, and Unix, well, it's just Unix. Isn't MacOS more than enough as far as alternative OSes go for this day and age? Sheesh! Kill it already!
Quote
>A little off topic, but the article mentions menu editing and I have to agree with him about GNOME's menu editing capabilities (KDE is OK for me). Using Nautilus to edit the menu requires restarting GNOME to see changes (but doesn't work reliably anyway). Right-clicking the menu allows me to add entries, but submenus don't work. In general it is a mess: sometimes it works, but most of the time it doesn't. Changes in the entry dialog are also lost most of the time. And yes, it is GNOME 2.2 compiled from sources. To be fair, I have to admit that this (and the file selection dialog) is the only real trouble I have with GNOME.
end quote
I am sorry – but this is just wrong
This perfectly describes GNOME as of 2.0.1 (RH8, Mandrake 9, SuSE 8.1).
It has not been relevant since 2.0.2, and DEFINITELY not in anything in the 2.1/2.2 series.
(And I am certainly biased here – I made myself seriously unpopular on DDL over this issue.)
But give me (and GNOME) a break – it's fixed.
When a piece of hardware, for example ACPI in laptops, doesn't work in Linux, it's branded as “buggy”… yet in Windows it works flawlessly.
The Linux devs can't afford all the time it takes to work around hardware bugs. There is a whole lot of crappy hardware out there. There are ways to get it to work, but doing so requires more intimate knowledge of the hardware (which the hardware manufacturer, writing the Windows drivers, has) or a whole lot of time to waste getting things to work. You can't possibly think that hacking up the OS to work on broken hardware is actually a strength?
To “Linux Powered Rice Crispies:” Exactly what feature are you looking for that Linux doesn't have? These days, I can do pretty much everything in Linux, including 3D modeling. The only things really missing are good audio editing tools like Cakewalk, and low-end to mid-range video/graphics editors like Photoshop and Premiere.
What makes the distros different? Aside from software config, there are a lot of differences:
1) Localization. The Linux developers are not numerous enough to fully internationalize their software. Distributions like Connectiva and RedFlag are absolutely critical.
2) Target Market: Windows isn't easy for total newbies. You people just think it's easy because you're mainly power users who are used to it. If you want easy, you look at something like BeOS, which is completely stripped down and straightforward. On the server, you need something with more configuration flexibility than Windows. I can easily see Mandrake aiming at the former, and RedHat at the latter. Similarly, RedHat and SuSE easily fit in, one offering GNOME, the other KDE. Business users need the interface to be consistent, for support purposes. If a corporation needs one or the other, it can buy into one distro or the other.
3) Software selection. In the end, a distro ships with a certain set of software. RedHat tends to be kinda middle of the road (except with respect to GCC and GLIBC). Debian is time-tested and stable. Gentoo is bleeding edge. Each distro can optimize for a particular selection of software, and offer the best experience for that selection.
4) Specialization. Linux distributions (like Lineo) can specialize in tailoring Linux to work in particular target markets. This is also where stuff like SGI Linux or Sun Linux fits in.
If you think about it, there are different “distros” of Windows too (Home, Professional, Server, CE, Terminal Server, Embedded NT). There are even separate points of sale and support departments for each. Microsoft is big enough to be several companies in one. Since no Linux company will probably ever grow that big (thanks to competition!), there is a need for several companies to serve particular markets. Lastly, there need to be many, if only for the sake of competition. What's the difference between a Nissan and a Toyota? They kinda look different, and have different power plants, warranties, and price points. But are they really all that different? Not at all. The market can easily handle many car manufacturers because there is enough diversity and variety within the market to absorb all their product lines. I hold that the computing market is probably just as diverse, if not more so, given the large number of things computers can do.
PS> Anybody who complains about Linux software installation, and then admits not only to never having used apt-get but to still downloading RPMs manually, is just plain stupid. If you're on RedHat, check out RedCarpet and RedHat Network. On Mandrake, check out urpmi (or the GUI, gurpmi). If you're using Debian or Gentoo, you already know the “One True Way” to install software. I still can't understand how anyone can think that any software installation method that requires more than one instance of user interaction, much less actually hunting down an installation file *manually*, can be any good!
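To spell it out, the one-command version of “install Gnumeric” on each of those, with gnumeric just as an example package (RedCarpet does the equivalent through its GUI):

urpmi gnumeric
up2date gnumeric
emerge gnumeric

Each one chases down and installs the dependencies on its own, which is the whole point.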
That is easy, Darius. The latest release is not always a stable release at all.
I am running Gnumeric 1.1.15, but it is not the stable release; I got it from a devel apt source, but that is a totally different story.
If configure, make, and then make install only works for you half the time, then you probably do not have a number of devel packages installed. But then again, if it ain't in RPM format, quite honestly you do not need it. Why do I say something this harsh?
Because only programmers and hard-core tweakers should be building packages from source. There are too many different sources for very up-to-date RPMs to bother with compiling yourself.
I have built only three programs straight from source on my RH8 box. I wanted Mozilla with GTK2 and XFT, compiled against gcc 2.96 so all the plugins would work. I could find GTK2 RPMs, but they did not have XFT; I found XFT RPMs without GTK2; and I found GTK2+XFT Mozilla RPMs, but compiled against gcc 3.2. So I made my own, and compiled Galeon against those. The only other thing I compiled from source was MPlayer, for the QuickTime Win32 DLL support for Sorenson 3 playback.
Honestly, I think that all open-source projects should stop distributing binary RPMs unless they also create static binary RPM versions. This will take care of the dependency issues right off the bat. Yeah, the programs will be huge, but please, that is sooo worth it to stop the newbs from freaking out so damn much.
They should NOT distribute source code at all. If people want the source, they should have to email the developers an affidavit swearing that they will actually file a Bugzilla report before cursing out half the developers in public on the web and on the devel list, and read the FAQ and the HOWTO off the frickin' website before telling the programmers (none of whom are getting paid for this crap) that their code sucks, because the damn user can't be bothered to read the aforementioned HOWTO, which lists what other programs they need to make the program work.
But to answer your question: if the latest version of a program is not listed in apt yet, wait. It will be very soon. If a version of the program you want still is not available, notice the version number. I play around with Abiword 1.3.99. No, it is not stable. I just like the interface of Abiword and want to see the progress.
Same way with Anjuta2 version 0.3.6. Notice the version of the program you are lusting after, and if it has not even reached 1.0, or it is some odd-numbered version that has the feature you want, then you know why there is no RPM yet. It is not a stable release, and when it does become a stable release, apt will update pretty darn quick.
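(For anyone wondering what a “devel apt source” is: with apt for RPM it is just one extra line in /etc/apt/sources.list. Schematically it looks something like the line below, though the server, path and component names depend entirely on the repository you use:

rpm http://server/path redhat/8.0/i386 os updates

After that it is apt-get update and apt-get install as usual.)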
I'm confused, or maybe it's easier to blame hardware for something that Linux cannot accomplish 4 years after MS. My laptop suffered Win 98, Win Me, Win 2k and Win XP, and the ACPI support was working right after the installation, no additional drivers. No such luck, as a matter of fact no luck at all, with SuSE 8.0 or RH 8.0, and these are the products of year 2002, mmm, 4 years after '98. Or take the fact that in RH 8.0 (out-of-the-box installation) X consumes 30% of RAM (of 320 MB) just with Mozilla, Evolution and Nautilus open. I'm starting to bash Linux; all things fair, for specific needs it's an excellent OS, but for mainstream and business it sucks big time. Nevertheless I'll keep using it, because there's no one substitute for everything.
A project to try and address some of the desktop deficiencies in Linux was covered here only recently:
http://www.osnews.com/story.php?news_id=2712
Milos, Mark, they say the *implementation* of the ACPI code for *your* chipset was buggy. The chipset itself may not be buggy; the implementation code is.
If RedHat had billions of dollars too, then Linux would support your chipset flawlessly. Heck, if *any* company had billions of dollars, they could make an OS that works flawlessly on nearly all hardware. But that isn't realistic: they only have limited resources, so they can't support every single piece of hardware out there.
On the other hand, the Linux ACPI implementation for *my* chipset *does* work flawlessly. No additional drivers, no configuration: it just worked.
Don’t blame Linux, or any other operating system except Windows, for not supporting every single piece of hardware out there until it’s the biggest OS. What Linux already has is excellent support for most popular hardware.
Johnathan Bailes:
“Honestly, I think that all open-source projects should stop distributing binary RPMs unless they also create static binary RPM versions. This will take care of the dependency issues right off the bat. Yeah, the programs will be huge, but please, that is sooo worth it to stop the newbs from freaking out so damn much.”
Repeat after me:
Static linking is not the solution!
Static linking is not the solution!
Static linking is not the solution!
The ONLY thing it solves is dependencies, but it introduces lots of other issues, which cause different complaints:
1. Memory usage. Statically linked apps don’t share any memory at all. Can you hear the “OMFG LINUX IS BLOATED!!!!”-complaints already?
2. Size. Prepare to be flamed down by “I HAVE A 56k MODEM DAMMIT!! WHY IS EVERYTHING SO BLOATED??? LINUX SUCKS!!!”-messages.
Or in one word: efficiency.
Static linking is technically wrong. What you actually want is to not have dependency problems. There are other ways to achieve that. Why choose the technically wrong way?
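The size half of this is easy to demonstrate on any box with gcc installed; a throwaway test, where hello.c is any trivial C file:

gcc hello.c -o hello-shared
gcc -static hello.c -o hello-static
ls -l hello-shared hello-static

The static binary comes out many times larger, because it carries its own private copy of the C library, and that private copy is exactly what kills the memory sharing too.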
Check out http://autopackage.org/
“They should NOT distribute source code at all.”
Not possible legally. Unless you use your own license, you *must* publish the source code if you distribute binaries too.
OK, static linking is not the only solution.
Yes, in previous threads I have mentioned autopackage myself, but packaging alone does not solve dependency issues. Even if the dependencies are handled better, they still exist. Gnumeric still needs libs from gnome-print, gal2, gtk2, glib, possibly Guppi, and a dozen or so other libs and apps.
I was being over-the-top and overstating the point to be sure.
Many apps, especially those with out-there, ahead-of-the-curve dependencies, should have a statically linked version alongside a downloadable version that is shared, much like the Opera example I used in a much earlier post.
To summarize: if you include an RPM for your project, include two – one that is shared and one that is static, for those people who try to use the shared version and cannot because of dependency issues. Is this a pain in the ass? A little bit, but if I can build it shared then I can build it static, and the time saved on the lists walking folks through dependency troubles is gold.
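For a typical autoconf-based project, the two builds often differ only in how you invoke them. Very roughly, and with the caveat that the exact flag spelling varies from project to project (check ./configure --help first):

./configure && make
LDFLAGS=-static ./configure && make

The first is the usual shared build; the second asks the linker to pull the libraries in statically, where the project and its libraries support it.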
On the distributing source comment.
Once again, I was being over-the-top, and I said that projects should not allow source code downloads UNLESS the user signs an affidavit swearing… etc., etc., from my previous post.
Not quite the same. It was a joke. Signed affidavit? Come on. I am just tired of seeing guys who plug away on this stuff in their spare time getting reamed in a public forum for the sin of actually caring enough to put their software out there and make it publicly available.
“Yes, in previous threads I have mentioned autopackage myself, but packaging alone does not solve dependency issues. Even if the dependencies are handled better, they still exist. Gnumeric still needs libs from gnome-print, gal2, gtk2, glib, possibly Guppi, and a dozen or so other libs and apps.”
The goal is to make autopackage's dependency resolution good enough to solve those issues. It already checks for dependencies (the actual files, not a DB entry). If a dependency is not installed, then it should first look in local sources (the hard disk or CD-ROM), and if it isn't found there, it should download the dependency from the Internet. No matter how many dependencies there are, it will just install all of them until the dependencies are satisfied. This is all done automatically.
What is wrong with this approach? It does the same thing as static linking (resolving dependency problems), but is a hell of a lot more efficient, because it only downloads what you need.
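The file check itself is nothing exotic, either. A crude, purely illustrative sketch of the idea in shell (the library name and the fetch step are made up):

if ! ldconfig -p | grep -q libgnomeprint; then
echo "libgnomeprint missing, fetching it"
# download and install the dependency package here
fi

The point is that it tests for the library actually present on the system, rather than trusting whatever a package database claims is installed.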
There is nothing wrong with the autopackage approach. Like I said before, I have mentioned autopackage myself in earlier posts. However, it is not ready for prime time yet, as even the autopackage website itself mentions.
If autopackage can find all the libs involved, and if projects use it, then it is fine. If autopackage can safely install newer dependencies without breaking old ones, as it states, then cool. However, over half the issues with rpm and deb have more to do with the packagers and the packages than with the package management system itself.
Autopackage depends a lot on a DNS approach, where the maintainers of the packages build the autopackage stuff themselves and post it. Once this becomes the norm, and after autopackage goes 1.0, then it is fine.
However, I am talking about a solution for many projects that can be implemented now.
Personally, with apt, I have only hit one dependency hell situation with RH8, and that was solely of my own making.
It is not like someone running a project can use autopackage today, but they can enable a static build and post a static RPM, especially if the project has decided to go out on a limb and use some way-ahead-of-the-curve lib dependency. I used a version of Mozilla that had XFT support statically linked in, way back in the day, for example.
Until distros use some sort of complete apt backend for their package management systems, and until autopackage gets out of beta and into wide use, statically linked RPMs have a lot of potential to help new users who want certain apps whose dependency trees look like large forests.
I used to see this all the time with .tar.gz Solaris packages of end-user applications back in the day. For end-user programs with complicated dependencies using bleeding-edge libs, it is not a bad solution. It is not the best technological solution, but it is a decent one for the end user who has difficulty tracking down the dependencies.
>>>redtux:
“I am sorry – but this is just wrong
This perfectly describes GNOME as of 2.0.1 (RH8, Mandrake 9, SuSE 8.1).
It has not been relevant since 2.0.2, and DEFINITELY not in anything in the 2.1/2.2 series.”
You should not question me without good reason. If I state the version is 2.2, you can trust me that I know it is 2.2. GNOME 2.0.2 did indeed do better with menu editing than 2.0.1, and on my machine even better than 2.2. Since I use Gentoo, it is still possible that it is Gentoo's fault, but so far I have no evidence for that.
You can't possibly think that hacking up the OS to work on broken hardware is actually a strength?
Good point
A static build is offered to the general public that has no dependencies because all the lib stuff is compiled in.
That would be good
Isn’t MacOS more than enough as far as alternative Oses go for this day and age, sheesh! Kill it already!
I would buy a $40 distro before I would go out and get a $500+ Mac.
About drivers: why do we even need them? They make things too much of a pain sometimes. My old C64 could print on my old dot matrix just fine; I don't remember needing drivers at all. Yeah, stuff has more features now, and the OS needs to know about them, but is there not an easier way? Like the board talking to the OS, any OS, from a chip on the board.
I admit that I have no experience in coding or programming; I just speak from the user standpoint. Somehow my discussion is turning into Linux dealing with ACPI; Intel's site gave me an idea of how non-trivial this thing is. Linux has an army of people who spend their spare time improving it, and there are also “big” guys who are in it for the profit. I'd be pissing mad if I had paid $100+ for the RH Professional box; fortunately my lab got it for us, and you can download it from the net for free. We shelled out big bucks for Win XP Professional and expected it to do all the things damn well, which it did and still does. To sum up: in the early 60s America put men into space, and by the end of that decade on the Moon (unless you believe in conspiracy theories; after all, this is a free country), and now China is getting ready for their first manned space mission. By now the Chinese are boldly going where others have gone before. Let's make MS Win the USA and Linux China; so where's my ACPI :) Linux is good, there is one better player, and then there is one declining OS; it rhymes with hack :)