Linux news is getting more and more exciting, and somehow, managing to get less and less interesting. Why? Because the speed of development is getting so rapid that it's hard to get excited for each upcoming release, keep your system completely up to date, or even remember what the current versions of your favorite distributions are. This breakneck pace of development means good and bad things, but I have a few ideas about how I expect it to turn out.
The opinions in this piece are those of the author and not necessarily those of osnews.com
There are literally hundreds, if not thousands of distributions out there. In fact, with Knoppix, almost anyone can make his own. Each season, it seems we watch some distributions fold and others form. It’s getting harder and harder to tell them apart. Think you’re an expert? Answer these questions quickly:
According to a recent post on Distrowatch.com, “It is time to face the facts: the number of Linux distributions is growing at an alarming rate. On average, around 3 – 4 new distributions are submitted to this site every week, a fact that makes maintaining the individual pages and monitoring new releases increasingly time consuming. The DistroWatch database now lists a total of 205 Linux distributions (of which 24 have been officially discontinued) with 75 more on the waiting list. It is no longer easy to keep up.” Distributions change often, as does the popularity of each. Keeping up is almost impossible. Many Linux users install new distributions every few days, weeks, or months. Sadly, many of these folks keep a Windows installation – not because they prefer Windows, but because it’s a “safe haven” for their data which can’t find a permanent home on any given Linux distribution. Can this pace continue? I say no.
Predicting the future is always risky for an author, especially one who contributes to internet sites, where your words are often instantly accessible to the curious. But I’m going to put my money on the table and take some guesses about the future of Linux. Here, in no particular order, are six theories that I believe are inevitabilities. Keep in mind that although I’ve been liberal in tone, nearly everything in this piece is speculation or opinion and is subject to debate. Not all of these theories are necessarily entirely original thought, but all arguments are.
1) Major Linux distributions will collapse into a small, powerful group.
"Major players" in the Linux market, until recently, included Red Hat, SuSE, Mandrake, Debian, and Slackware. Some would argue for more or fewer, but now a number of popular distros are making inroads into the community: Xandros, LindowsOS, and Gentoo, to name a few. Another fringe, including Yoper, ELX, and TurboLinux, is making plays for corporate desktops. I'm coining a new term for this era of Linux computing: distribution bloat. We have hundreds of groups offering us what are essentially minor tweaks and optimizations of a very similar base. This cannot continue at this pace. There will, from this point on, be a growing number of Linux installation packages as people become more skilled, but there will be fewer distributions on a mass scale as commercial Linux stabilizes.
I think we’ll see the commercial Linux market boil down to two or three players, and this has already begun. I expect it to be a Ximian-ized Novell/SUSE distribution, Red Hat, and some sort of Debian offshoot – whether it’s User Linux or not remains to be seen. Sun’s Linux offering, Java Desktop System, will be deployed in Solaris committed companies and not much more.
2) Neither KDE nor Gnome will “win;” a third desktop environment will emerge.
The KDE/Gnome debate is a troll's dream come true. People are often passionate about their desktop environment. I believe they both have strengths and weaknesses. However, a third DE, with a clean and usable base, will emerge in time, its sole mission to unify the Linux GUI. Only when there is true consistency in the look and feel of the desktop, or close to it, will Linux become a viable home OS for the average user. Currently, we see this consistency forged by common Qt and GTK themes, and by offerings like Ximian Desktop, which attempt to mask the different nature of each application. This is not about lack of choice – it is, however, about not allowing choice to supersede usability of the whole product.
Features that a desktop must include are obvious by now: cut & paste must work the same way throughout the OS, menus must be the same in all file manager windows, the same shortcut keys must apply in all applications, and all applications must have the same window borders. Many of these seemingly basic tasks haven't entirely matured, or in some cases, haven't been accomplished at all yet.
In any event, the DE's importance will lessen once greater platform neutrality exists. This will doubtlessly cause many to argue that I am wrong; admittedly, it's a tall order, especially with Gnome and KDE becoming established and accomplishing so much. I maintain that unless there is some sort of merging, not a set of standards like freedesktop.org but rather a common base for development, there will be a fragmented feel to Linux that simply doesn't exist in Windows today.
3) Distribution optimization will become more prevalent
Most distributions today can be used for anything: a desktop system, a web server, a file server, a firewall, a DNS server, etc. I am of the firm belief that Windows' greatest downfall on the server is that it has been a glorified desktop for too long. File extensions are still hidden by default, you're forced to run a GUI, and you can still run all your applications on the system. I predict that we'll start to see flavors within distributions tweaked at the source level for optimization. Systems made to run as a desktop will have many different default options from their server-optimized counterparts.
4) Integration will force the ultimate “killer app”
I predict an open, central authentication system will take the Linux world by storm. There still isn't a Linux counterpart to NDS/eDirectory or Active Directory that makes user management across the network as simple as either of those two. While eDirectory will run on Linux, there is no open standard with a GUI management tool that automates this mass management. An authentication service whose job is only to watch resources (users, devices, applications, and files) doesn't exist today and can't be built without serious Linux know-how. This service, which I'll casually refer to as LCAS (Linux Central Authentication System) for lack of a better term, will be as easy to establish as a new Microsoft domain's Active Directory.
LCAS will operate using completely open standards (X.500/LDAP) and will be easily ported to the BSDs and to commercial Unixes. Unlike Active Directory, LCAS services will be portable and stored in a variety of databases, including PostgreSQL, MySQL, and even Oracle and DB2. LCAS, like Linux, will be pluggable, so that as it matures, management of other objects (routers and switches, your firewall, even workstations and PDAs, and eventually general network and local policies) will be controllable from your network LCAS installation. Perhaps, in time, it will also manage objects on the internet and how they can act within your network. I envision the ability to block, say, a particularly annoying application's HTTP traffic, to allow only certain users to use specified protocols, or to install internet printers via LCAS.
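To make the idea concrete, here is a minimal sketch of how an LCAS-style, X.500-flavored directory lookup might behave. Everything here is hypothetical and invented for illustration – `LCASDirectory`, the entry names, and the attributes are not a real API; a real implementation would sit on an actual LDAP server.

```python
# Hypothetical sketch of an LCAS-style directory: an in-memory tree of
# X.500-style entries, each keyed by a distinguished name (DN) and
# holding a set of attributes, as a real LDAP-backed service would.

class LCASDirectory:
    """Toy stand-in for an LDAP/X.500 directory service."""

    def __init__(self):
        self.entries = {}  # DN string -> attribute dict

    def add(self, dn, **attrs):
        self.entries[dn.lower()] = attrs

    def search(self, base_dn, **filters):
        """Return DNs under base_dn whose attributes match all filters,
        roughly like an LDAP subtree search with equality filters."""
        base = base_dn.lower()
        hits = []
        for dn, attrs in self.entries.items():
            if not dn.endswith(base):
                continue
            if all(attrs.get(k) == v for k, v in filters.items()):
                hits.append(dn)
        return sorted(hits)

directory = LCASDirectory()
directory.add("uid=alice,ou=users,dc=example,dc=org",
              objectClass="user", shell="/bin/bash")
directory.add("uid=bob,ou=users,dc=example,dc=org",
              objectClass="user", shell="/bin/tcsh")
directory.add("cn=lp0,ou=printers,dc=example,dc=org",
              objectClass="printer")

# Find every user under the organization, like an (objectClass=user) filter.
users = directory.search("dc=example,dc=org", objectClass="user")
```

The point of the sketch is the data model: users, printers, and other resources all live in one hierarchical namespace, so one management tool (or GUI) can reach everything.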
5) Releases will become less frequent, and updates more common
There is a competition for versioning in the Linux world, as though higher version numbers are somehow "better." Version inflation is commonplace, with companies incrementing the major version for minor overall updates, and going from X.1 to (X+1) after a few application updates and a minor kernel increase. There is also a trend in software whereby, when the version number gets too high, it is abandoned in favor of less harsh-sounding names. No one would upgrade Windows every six months, so why upgrade Linux every six months? Because the software gets better too quickly! And the best way to get new software that works is to upgrade the whole distro! This is backward. The software should be incidental to the distro, not the reason for its version stamp.
Gentoo Linux just changed their release engineering guide specs to include a year number with subsequent point releases. This, I think, is the right idea. I predict that we'll start to see releases like DistroX 2004 and DistroX 2005. As a counterpart, we'll begin to see downloadable updates like service packs, such as DistroX 2004 Update2. These updates will be easily installable and will update and patch not only the OS, but all components that came with the distro.
It is not unlikely that we'll see a front end installer that launches, checks your system and the server, asks which pieces you want upgraded, and then processes it. There are systems like this in place today; however, they deliver a constant stream of updates. Too often, people don't patch or update, they just reinstall. We're going to see only security updates for each distro, and approximately quarterly, we'll see an official Update. Updates distributed in this fashion are much more likely to be applied by a common user than the slew of updates issued on an almost daily basis. Updates like this allow users to run a modern system much longer between releases – for years in some cases. Unless OpenCarpet catches on, I see a service pack mentality prevailing for all commercial distributions.
6) Linux-approved hardware will become common
Part of the fight for usable Linux is with device drivers and hardware. Getting your video card to work properly, even with a binary driver available, is still way too hard. While this isn't always the fault of the hardware, we will see, in time, Linux-approved hardware. The hardware will include Linux drivers on the accompanying disk. There will be a certification process that tests hardware against a certain set of standards. Soon, a Tux badge on a PC case will be as commonplace as the "Built for Windows XX" stickers on most cases today.
I don't claim to be visionary by any means. I also don't want to forcefully bring spirituality into the mix, but I believe all things exist in waves, with highs and lows. Linux started small; it's gained an audience; and as it swells to a large point, we, the community, should anticipate the eventual refolding of things. The downswing shouldn't be an implosion, but rather an opportunity to organize and streamline the existence of free software. It doesn't have to be a reduction in use; it can be simple cooperation, a reduction of market saturation, and convergence towards standards.
Within the next two years, we’ll likely see Linux kernel 2.8, Gnome 3, and KDE 4. We’ll see exciting new projects. We’ll see many new Linux distributions and many existing ones disappear. We’ll see the pre-Microsoft Longhorn media blitz. And I bet, not too much longer than that, we’ll see some of the above start to become a reality as well.
Adam Scheinberg is a regular contributor to osnews.
very interesting, I hope many of the things mentioned will come true. The rise of a third DE is the most interesting to me as a user. Exciting times ahead!
Gnome and KDE will somehow combine. Hopefully I will be on the other side of the planet when this happens.
Perhaps rather than a third DE (which would create yet another mess of incompatibility; as it improved and changed, so would KDE/Gnome…) I think there should be a third format for widget themes that, when imported via its configuration tool, would automatically export to all widget sets (qt, gtk, wxwindows, etc…)
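The idea above, one neutral theme description exported to each toolkit's native format, could be sketched roughly like this. All of the names here (`to_gtkrc`, `to_qt_style`, the key names) are invented for illustration, and the output formats are only loose approximations of real gtkrc and Qt configuration syntax:

```python
# Sketch of a toolkit-neutral widget theme that a config tool could
# translate into each toolkit's own format. Keys and output syntax are
# illustrative approximations, not exact gtkrc/Qt grammar.

theme = {
    "bg_color": "#d6d6d6",
    "fg_color": "#000000",
    "font": "Sans 10",
}

def to_gtkrc(t):
    """Render the neutral theme as a gtkrc-like style fragment."""
    return (
        'style "default" {\n'
        f'  bg[NORMAL] = "{t["bg_color"]}"\n'
        f'  fg[NORMAL] = "{t["fg_color"]}"\n'
        f'  font_name = "{t["font"]}"\n'
        "}\n"
    )

def to_qt_style(t):
    """Render the same theme as a Qt-ish key/value palette fragment."""
    return "\n".join([
        f"Palette/background={t['bg_color']}",
        f"Palette/foreground={t['fg_color']}",
        f"General/font={t['font']}",
    ])

gtk_fragment = to_gtkrc(theme)
qt_fragment = to_qt_style(theme)
```

Each additional widget set (wxWindows, Motif, …) would just be one more export function over the same neutral dictionary, which is what makes the single-format idea attractive.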
Yes in the near future all I can assume is that things that are common between the DE will be shared in the backend, which will make it easier for each DE to be able to focus on the front end.
I also think that after a user uses Linux for a while they will choose a DE and stick with it.
I have chosen GNOME, but I like to use some KDE apps. Of course, I find that the DEs are increasingly beginning to have equivalent apps, so eventually I think I will just stick with GNOME.
I agree; no third DE will emerge, but rather better integration between all X11 DEs.
If I had the resources and time I would fork KDE and would try to get my patches to Trolltech for Qt (or fork Qt too if Trolltech wouldn’t accept them). KDE has the best under the hood technology of the two DEs today, but its usability is pretty bad mostly because of the UI bloat. What I want is a good looking and *simple* desktop a-la Gnome (as opposed to the ugly toolbars, badly designed menus and 2000 configurations of KDE), but with the KDE/Qt technology and APIs. And of course it would be compatible to freedesktop.org’s standards and would also load a session of the real KDE and Gnome so their apps would load faster under this hypothetical DE too (yeah, RAM is cheap these days).
GNOME and KDE will continue being the major desktop environments in Unix for the foreseeable future. Applications today are designed specifically for these two desktop environments.
Unification of these environments will kill Linux. Linux is what it is today because of healthy competition. Do you think GNOME or KDE would be close to anything they are today if they didn't compete?
Standards and interoperability are welcome. But their philosophies will hardly change. Take a look at any of the unified desktop environments today. How many of them have advanced as fast as KDE and GNOME? None!
Do you know how bored developers will get when their newly hacked algorithm that is 5.67% faster has no competition? Competition keeps OSS moving and advancing. Unification is dangerous to OSS and is characteristic of proprietary.
Diversity, competition and choice is OSS’ oxygen. And as with everything, it has its bad and good sides.
I don’t think a third DE has any chance. Could be wrong though.
The DE that will become the dominant commercial/consumer DE will likely be driven less by technology and more by circumstance. The factors that may lead to dominance are:
1. A major application like Photoshop or Macromedia Studio is made for Linux using GTK+ or QT. If that happens the competition will respond with offerings using the same technology.
2. The choice of DE by distribution(s) in markets where consumers pay for additional software. Currently it seems like KDE is winning here. Of course Gnome is probably winning the corporate desktop (and numbers in the corporate desktop are likely to explode before they do in the retail category).
I would be surprised if we saw a third DE displacing KDE/Gnome in the next 3-5 years. The issue is twofold. First, the corporate backers would rather improve/support a known API than write one from scratch and try to convince the open source community and other corporations to back it. Second, the developers work on Gnome/KDE because they want to; I haven't seen any inkling of anyone coming forth with enough whuffie to cause the developers from the two teams to abandon their efforts and work on something else.
I can see “specialized” distros gaining in popularity though. (I’ll stick to Slack for all my Linux needs for now.)
Also for your LCAS, why not LDAP with a predefined configuration? Add a GUI to make it easier for the “masses” and be done with it.
Sun’s Linux offering, Java Desktop System, will be deployed in Solaris committed companies and not much more.
Like Walmart and Office Depot.
So much for in depth analysis.
Does that imply KDE and Gnome cannot at this time compete with other DEs on alternative platforms?
OSS competitions are more like a dog-eat-dog fight.
Like Walmart and Office Depot.
Last I checked, neither deal was finalized. See http://www.eweek.com/article2/0,4149,1406463,00.asp.
Not to mention I have yet to read a very positive review of JDS.
Also for your LCAS, why not LDAP with a predefined configuration? Add a GUI to make it easier for the “masses” and be done with it.
If it worked out of the box with Windows clients as well, then this would certainly be a killer app.
This is not a great article. The amount of resources necessary to develop another DE is staggering. There are a number of smaller DE's as well, but the big boys with wide adoption are KDE and GNOME. Both will keep on improving, and there is zero chance of another COMPETITIVE BROAD-BASED DE emerging. There isn't enough time or resources, KDE and GNOME are not standing still… and furthermore, with time there will be less NEED for another DE as the major two keep evolving to satisfy BROAD USER needs.
I also disagree that there will be only a few distros left standing. There will be, just as today, 3 major ones and a rash of small and tiny ones… because it is cheap to do small optimizations and put up a site to address a tiny niche – the very essence of how Linux works.
Like Walmart and Office Depot.
Just because they stock something doesn’t mean people will flock to buy it – and these stores are so massive they can afford a few poorly performing product lines.
I agree that a third DE is probably out of the question. What I believe will happen is more of a split between KDE and Gnome: Gnome will go even simpler and easier to use, and KDE will become a Super DE that has everything but the kitchen sink.
I have always loved the idea of a distro for a router (smoothwall,IPCop,etc), a distro for server (Redhat,debian,etc.), and a desktop distro (libranet,mandrake,etc.). There’s really no need for X on a server unless it’s a terminal server.
Number 5 could be implemented without having "service packs" or whatever. Think apt-get or portage. Updates for broadband users could be seamless if it were scripted. Of course the system would need to know what kind of bandwidth it could use (don't download KDE on dial-up, for example).
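A bandwidth-aware updater along those lines might look something like the sketch below. The package names, sizes, and thresholds are made up purely for illustration; real tools like apt or portage work quite differently:

```python
# Illustrative sketch of a scripted updater that decides which pending
# updates to fetch based on the measured link speed. Package names,
# sizes, and budget numbers are invented for this example.

DIALUP_BUDGET_MB = 10     # don't pull huge desktop updates over dial-up
BROADBAND_BUDGET_MB = 700

pending = [
    ("openssl-security-fix", 2),   # (name, download size in MB)
    ("kernel-errata", 25),
    ("kde-desktop", 480),
]

def plan_updates(link_kbps, updates):
    """Keep updates, smallest first, while they fit the bandwidth budget."""
    budget = DIALUP_BUDGET_MB if link_kbps < 256 else BROADBAND_BUDGET_MB
    chosen, used = [], 0
    for name, size in sorted(updates, key=lambda u: u[1]):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen

dialup_plan = plan_updates(56, pending)       # only the small security fix
broadband_plan = plan_updates(8000, pending)  # everything fits
```

Prioritizing the smallest items first means security fixes (usually tiny) always go through, while a dial-up user is never surprised by a 480 MB desktop download.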
Sure, LINUX has more choices than ever, perhaps too many. But it is growing, becoming more common, more powerful, and more POPULAR, so this is natural; it is how innovation happens.
But don’t worry, over time, only 3-5 linux companies will hold 95% of the marketshare, and there will also be fewer used applications for each task because the distributions will push only certain applications. Right now it is LINUX darwinism, survival of the fittest, many programs, few survivors.
This article is really just too much speculation about things that nobody can know yet. I see nothing but positive things coming out of Linux, and spending 30 min a day keeps me up to date on everything and even allows me to read some commentary such as this. I am more excited about Linux than ever before, because now I actually see that there is real potential for Linux just about everywhere and it is actually being taken seriously and used by large companies. Now Linux is slowly but surely hatching, breaking out of its shell; by 2004 it will be loose and you will see what it can do. Linux is reaching critical mass; now, more than ever, is a time of great excitement for this humble operating system.
The underlying technologies for Enlightenment 0.17 are REALLY interesting. I seriously think this environment could become the number one choice for many power users, though it'll probably never be a "Windows replacement".
OSNews interview with the author can be found here:
These are all nice ideas, except for one (I’ll get to), but they’ve all been stated OVER and OVER again.
The two big DE's are trying to cooperate (freedesktop.org), and slowly it's happening. "Predicting" that integration and hardware support will happen doesn't make you much of a visionary.
This stuff can’t happen overnight, and many people saying “I want it” over and over again won’t make it go faster.
Now, you want more "service packs" and fewer releases; I don't like this. I hate having to install Windows and then instantly having to install 3 service packs BEFORE I CAN PUT IT ON THE INTERNET.
Now, if the packager also releases updated CDs with the service packs already applied, I wouldn’t care. This would be more like Debian’s release system.
KDE and GNOME will never unify into the one codebase. Applications between the two will interoperate better by use of freedesktop.org standards.
As a previous poster noted, competition is what keeps OSS alive and unification is a proprietary model that stifles innovation.
That is assuming it will ever actually be released, right?
>KDE and GNOME will never unify into the one codebase
Nobody is talking about codebases. You can’t possibly unify two so different toolkits. What we are talking about is having common standards for the under-the-hood actions, e.g. MIME system or VFS.
The KDE/Gnome debate is a troll’s dream come true.
Sadly, it is (that’s like soccer talk).
About the hardware: today there are plenty of PC parts that already mention Linux support on the box.
The problem is for the vendor to include a driver! How can they predict what distribution packaging system the final user will have installed? rpm? deb?
Or could they choose the NVidia way, with a universal distribution installer?
But then again, NVidia's life was easier (comparatively); they only had to deal with XFree, which is pretty standard and common on all distributions.
A service pack mentality prevailing for all commercial distributions.
Service packs would be great. But I think this mentality will arrive sooner for non-commercial distributions like Debian. Maybe I'm wrong, but commercial distributions would prefer that you buy a new version and new support.
A universal (all distributions *must* use it) Linux package installer (even if only available on command line but, still, universal) would be of great strategic importance but no effort on that direction has been seen (that I know of…).
And there remain only two years before Longhorn, as the author correctly pointed out.
The third DE idea is intriguing. Adoption would not be difficult. If I released a fully integrated desktop that had the sexy appearance of OS X and the ease of use of Windows, people would jump on it right away.
However, given the amount of work that such a project would require, I doubt this will happen. Gnome has made huge gains in the last year especially with their 2.4 release.
They have had the balls to have 1 app for a given task. This is the right choice. They really have their act together in terms of what direction they should take, and where they need to improve.
The few areas where KDE has an edge on Gnome are pretty trivial. Nothing is stopping anyone from taking over the Linux desktop; people will use whatever is best, and I think 6 months to a year from now the obvious choice will be Gnome. The highly anticipated FreeDesktop X server will add the next-generation featureset Linux users have been looking for.
You forgot the ton of manpower needed to make the desktop itself. You also forgot the ton of manpower required to make supporting applications. It has taken KDE, what, 8 years (?) to get where it is now.
While I don't think that an entirely new DE will be developed from scratch, I'd say that it is very likely that a few smaller "mini-DE's" that use Gnome/KDE APIs will emerge to satisfy lower-end computer owners who want more than Fluxbox can offer. In fact, XFCE is already headed in this direction.
>and the ease of use of windows
The current DEs are pretty easy to use. What makes the experience sour is the fact that these DEs are not integrated with the underlying system. The solution is not to create a new, more user-friendly file manager or panel. The solution to the problem you are referring to is proper integration with the OS it runs on. The user should not know what X is. It should just work. There are no Windows users discussing how to configure the "GDI", are there? That's where the real problem is: system integration.
Mozilla was developed for years. Suddenly, Firebird comes along and takes the code and makes a huge dent in the browser market. People who never would’ve used Mozilla proper have now moved to Firebird.
Who says a third DE won’t be heavily based on one of the existing ones? Who says it won’t be “Gnome Express” or “KDE lite”? Remember, a third DE doesn’t necessarily have to be built from scratch – it’s free software! They have the benefit of everything that already exists.
The biggest parts are already released (evas, imlib2 etc)… the WM itself is still in alpha state though, but the WM is just a very small piece of the code (like kwin in KDE)
“Suddenly, Firebird comes along and takes the code and makes a huge dent in the browser market.”
I’m not sure what your definition of “huge dent” is, 1%?, 2%?
I’m not sure what your definition of “huge dent” is, 1%?, 2%?
How about “more than Mozilla Seamonkey ever had”?
Why are 200 distros, including a dozen commercial ones, an unsustainable amount? That’s someone from the Windows monoculture side of computing talking.
As the computing industry matures (it hasn’t yet), I don’t see why there isn’t room for 20-30 commercial distributions with a few million paying customers each, in addition to nonpaying customers and non-commercial distros. It’s not hard to imagine a dozen or so major distros like SuSE, Red Hat, Xandros and Lindows; regional distros like Red Flag and TurboLinux; and a bunch more distros that concentrate only on enterprise or personal or multimedia computing.
It’s a big planet, and with the potential for hundreds of millions of desktops around the world, why can’t there be that many distros?
And when it’s open source, with each distro borrowing the best ideas of the others, innovation and quality can maintain a rapid pace.
Yeah, integration is the real issue. Not only integration with X, but also standards-compliance so that “little” things like copy and paste, drag and drop, etc, work between apps with different toolkits.
However, the author loses sight of the fact that there are two dominant toolkits that are now replacing the numerous ones from the old UNIX world.
GTK+ will be a C developer's tool and Qt will be geared towards the C++ crowd, who are generally from the Win32 world and find Qt a lot more familiar. GTKMM unfortunately won't be feature complete and comparable to GTK+ until GTK 2.6.
With that being said, if the hand of god suddenly strikes down the CEO of Adobe, and an "enlightened" manager comes forward and decides not only to port the applications to Linux but also to work with the GTK+/GTKMM community to push development forward, then maybe we'll see a stronger shift to GTKMM by the C++ programmers; until then, however, the line is clearly drawn in the sand.
As for the desktops, what is required is a unified configuration system. Why are there two competing menu formats? Why does each distro insist on locating them in different places? It's getting as sad as the debate over whether user applications should be installed in /usr/local or /opt. Make a decision and bloody well stick to it.
It is time for both the KDE and GNOME programmers to put aside their dogma and start developing a solution that brings both parts together. Moaning, “oh, it is too [desktop] like” isn’t going to build a better desktop, actually developing the desktop with interoperability will.
Add a start menu and taskbar to Fluxbox and you would have a light and fast DE that is still not nearly as slow and buggy as KDE.
The collapse of several Linux distros into only a few is just not possible under the GPL. I can freely rebrand and recompile any GPL code I like; if you are selling it for $50, I can sell the same code under my brand for a fraction of that by not selling support with it. If a small group of people wants to do something slightly different, they can build their own distro the way they want it. The GPL simply will not allow only 3 or 4 market players.
The only way to make money is with a hybrid of proprietary code or by selling support, and face it, the only reason people need support is because Linux is lacking in several areas. If Linux is ever “fixed” the market for support (and thus the 3 surviving Linux companies) will go right out the window.
There are no easy answers. This article would require huge changes and a rewrite of nearly the entire platform.
If you want an answer, look to OS X. Put the entire thing on Fiasco or L4, then BSD (non-GPL) layers, a new display server, and a new window manager, and dump all the backwards-compatibility crap that keeps getting lugged around. The honest truth is that this system would not be that hard to build, and a great deal of the code is already there.
Having 20-30 distros is a bit different from having 200. The real issue is that people are wasting time and effort constantly reinventing the wheel.
Generally speaking, a distro is made up of base packages + config tools + installer. Some distros play around with the configuration of existing ones, but many, many distros insist on building their own packages (with custom changes), installers, and config tools. If everybody worked together, or at least concentrated on 20-30 "major" distributions, Linux would be much better overall now, rather than constantly building new tools to do the same things.
To make this happen, distributions need to be more open to work from the ‘outside’; Fedora and Debian are both good models in this regard.
Once we have reduced the number of distributions, we can hopefully get people working on more important, fundamental problems, such as integration issues, hardware support, etc. It's not as glamorous, but it is what's needed.
“Mozilla was developed for years. Suddenly, Firebird comes along and takes the code and makes a huge dent in the browser market. People who never would’ve used Mozilla proper have now moved to Firebird.
Who says a third DE won’t be heavily based on one of the existing ones? Who says it won’t be “Gnome Express” or “KDE lite”? Remember, a third DE doesn’t necessarily have to be built from scratch – it’s free software! They have the benefit of everything that already exists.”
The analogy is flawed: desktop environments are not browsers; they are far more complex and require much more maintenance and development.
You're also forgetting one BIG thing: Firebird is going to become Mozilla, which cancels out your whole example. By this logic there would be, let's say, a forked GNOME; people would like it and use it, the GNOME developers would realize this, and they would merge the two branches. Thus, there would still be 2 major DEs. Anyway, there is a GNOME-lite already; it's called XFCE.
I don't think so… there are too many other ways to get around this. For example, the Fluxbox menu generator: I used it yesterday. Now wmaker, flux, and blackbox all have the exact same popup menus – complete consistency among three different wms. Why go to the trouble and $$$ to develop a 3rd DE?
According to Linus there will not be a 2.8. The next stable release will be 3.0.
Ever since the release of KDE 3.1 I have been more than happy with it, at least as far as navigation and setting up associations and the like go. They provide some very nifty front ends to machine and OS maintenance. I have used GNOME, and I think its aesthetics are a bit more out of place and harder to hunt down. Now, granted, there are some things in KDE that I don't see a need to make a secondary app for (something that could be wrapped up with something else), but overall I think both have come a long way. I also have a bit of insight when it comes to GNOME: I was talking to an ex-dev'er and he pointed out a lot of small design flaws in the way apps interact with one another. Not saying KDE doesn't do this, but from what he showed me it seemed that GNOME at the time was a bit worse than KDE on that front.
But in the end there is Fluxbox, and I'm darn happy with that 😀
In theory, if standards are followed, then any number of DEs could come about. This is not a bad thing.
Okay, I can see that there might be one too many "commercial" distros, and I agree that in time you'll see a thinning of the herd of these for-profit distros. Yet when it comes to free OSS distros based on GNU/Linux, you will not, IMHO, see this effect. Frankly, free distros like Gentoo, Debian, Slackware, etc. will always be around and will grow in numbers, IMHO. Why? Well, it's because anyone can grab the source code, recompile it, and make their own distro if they please, and distribute it to anyone who has the time to download and install it. The very nature of OSS and the Linux movement allows for free distros to be maintained as long as there is someone willing enough to do the work and put out a new release every so often.
I agree that there will be light versions to come; right now I am using one of them, XFce4 on Mandrake 9.1. It loads quickly and runs KDE apps and GNOME apps without any problems.
Just my 2 cents
Since we already had our Gnome/KDE flamefest with the KDE 3.2 beta article and since there was already a gtk+ article I thought I would bring something up.
Why the hell is GTK 2.x so damn slow? There is some serious brain-damage there. Believe it or not, today was the first day that I had tried a KDE 3.x desktop, and I can't believe how much faster it is than GNOME/GTK+. I used to think it was just my po-dunk machine, but apparently it's not; others have mentioned this too (including Eugenia).
Don't get me wrong, Trolltech has every right to make money off their toolkit, but if Qt had been released under the LGPL (at least for Linux) back in '96 or so, we would have one dominant desktop today which would probably blow away anything that Mac OS X or Windows has to offer. Call me bitter.
Eugenia, I don't think the whole "clutter" issue with KDE is the issue for the corporate types; that can be easily rectified by a distro maker or even a corporate admin. I think it's the Qt license that is the problem.
Would it be a difficult task to have a single API that programmers can base their apps on?
This API would then be translated to whichever environment people wish to use.
Difficult? Yes... but not impossible.
The concept of “healthy competition” is incredibly widespread, most especially in the USA. Sadly, that doesn’t keep it from being a myth. By and large, more progress, and happier people, result from cooperation rather than competition.
Examples abound. For starters, Free/Open Source software progresses because of the *cooperation* of the thousands of programmers who create portions of it. The people who do the most “competing” are usually the ones who enjoy a good fight more than they truly care about the progress of the software. The real alpha hackers usually don’t waste their time and energy getting involved in squabbles over whose, ahem, software is bigger.
For another, the world of scientific research has mostly been based on the same basic ideas as Free/Open Source software, i.e. the contributions of one person or group are freely available to all others (via research papers, etc). While in recent years patent-happy lawyers have managed to throw some monkey-wrenches in the works, as Newton himself said, “If I have seen farther it is because I stand on the shoulders of giants”, i.e. he had all the previous scientific work of other scientists available to him. Using this approach of mostly cooperation, science has managed to take us from wood fires and 30-year average lifespans to antibiotics, space travel, and PDA’s.
For a third, in America we had (still have?) various competing formats for cellphones, while Europe and other countries which understand the value of cooperation have put their heads together and come up with joint, open standards. Results: better cellphones and wider adoption, because all the phones work with all the carriers.
More examples? The Japanese approach to car building (more team-based), which had the highly competitive and individualistic American auto industry on the ropes for a while.
For a fourth, consider something as simple as an ant nest. If not for the cooperation of the individual ants, the whole nest would die. There's a valuable insight in there for those of us willing to question some of the more damaging cultural assumptions we grow up with.
There are 189 countries in the UN. 200 distros is not that much.
Distros and diversity are a good thing. Linux should organically adapt to the different needs of different users throughout the world. There should be different distros specializing in each group out there. People should not have to speak English and understand Western concepts to use a computer.
The call for a unified this and a unified that seems to be part of human nature. We seem to fear and hate the fact that other people choose different solutions to the same problems.
It would do a lot of people some good to take a deep breath and realize that people are different and that there is nothing short of genocide that is going to change that. We are different. We have different ideas of how software should work. We have different needs from software. Many of these ideas are mutually exclusive. There is not one true solution. Use what is best for you. Accept that other people are going to use something else. Share what you can, but do not use it to force yourself onto others.
I think the desktop will follow in the footsteps of the kernel. Big companies will become dependent on Linux and will want a 'controlled' product. The way to do that is to hire those in control; for the kernel that is Torvalds, Morton and Tosatti. I think the same will happen with the desktop. Novell is the one most in the spotlight here because of their recent purchase of Ximian and SuSE. Can they organize this task without alienating everybody in the open source community? I mean, which variation of Linux will people choose: the one that you pull from a web site, or the boxed version with documentation underwritten by a company with a name you have heard of? And Novell will probably prefer supporting only one desktop, as Red Hat did with Bluecurve. The article did not mention the X server. High-bandwidth items such as games and movies may drive it towards direct rendering, unless hardware catches up and makes the issue obsolete.
I agree that KDE is fast. I use it sometimes for that reason. But I keep coming back to GNOME, not because I think it’s technically superior, but because to my eyes KDE is just ugly.
DISCLAIMER: I know this is *subjective* and I’m not trying to start a GNOME/KDE flame war. I have a lot of respect for KDE, and I use it on and off.
That said: Roy, I agree with you that given enough development KDE could blow a whole lot of people out of the water, because they’ve accomplished a hell of a lot and have a lot of potential. But in my opinion they need some really good graphic designers. Their look needs an overhaul. I just don’t think it looks professional. It hurts my eyes. If it looked like GNOME I’d use it all the time and never look back.
In fact, if I were using old hardware, I’d probably use it all the time now. But GNOME 2.4 isn’t all that slow on a moderately fast machine (Athlon 2000+, 512 RAM); and since I mostly use the command line to move files around, I don’t have to suffer through Nautilus very often (and, to be fair, Nautilus has improved some).
I just wish there were a DE with KDE’s speed and GNOME’s looks. I think it’d do wonders for desktop Linux.
>I just wish there were a DE with KDE’s speed and GNOME’s looks.
Very well said. Qt is just faster than GTK+ (or so it gives that impression regarding responsiveness), but GNOME 2 apps look much better because of the way the default GTK UI was designed. More on that in 2 days in my KDE 3.2b2 preview.
Nature is full of diversity, sure. And yeah…the world is big enough for yada yada yada…
What does this have to do with what is needed for Linux to succeed in the commercial world — that is, standards and standardization?
Also, could you post the stats you’ve compiled about the number of anti-Linux articles posted on tech news sites? That’d be real interesting, thnx.
Oh, and…if you click on the author’s name at the top of the article, you’ll find a photo, brief bio, and contact info. Sure, you don’t get a book, but that’s not quite the same as ‘anonymous.’
So why is GTK+ slow, any technical reasons?
based on Mono..
Ximian will let go of Gnome and create a Mono based DE that has API/ABI stability.
3rd DE = Mono + Wine + X
If I were to develop a 3rd DE, that would be my choice of combination.
I am not sure I agree. Linux today is about freedom and choice.
Most of the predictions in this article are about making choices for people. Integration is nice, but it is not the be-all and end-all of computing. If it were, then Win32 would already be the winner and would likely remain so.
Diversity in computing systems is vital because it presents a nice check on the power inherent in centralized control systems.
Instead of figuring out ways to make choices that people will be happy with, why not facilitate the making of choices in a way that people will be happy with?
I left Win32 computing because I found that integration sharply reduced my ability to make full use of the computer. Sure, some features are nice, but the tradeoff is harsh; namely, I must work the way others feel I should be working.
It is going to take a long time for people to really grasp the idea that Linux does not have to evolve (or devolve, depending on your point of view) into one 'great' system for everyone. Folks, that is Microsoft today. Look at where that gets us; it's not pretty in the long-term sense.
There is one simple truth nobody wants to admit, but that needs to be considered in this discussion:
We don't know how computing is best done today. I would argue that the ideal computing platform does not exist because we, as a race, are too different. Building one perfect system for everyone to use is the 'holy grail': something we all want, but never to be found. Put another way: one size fits all means nobody is perfectly happy.
The future of Linux is about building ways for people to make their computing choices in a way that makes sense, not forcing people to accept choices simply because a majority of folks favor a particular choice.
KDE / GNOME: Eventually these two will work together nicely. Perhaps people will find ways to ‘skin’ the toolkits in ways that create similarity and perhaps others will like that, but that does not mean everyone will eventually use those features or that those features are the ‘right’ way to do things.
Number of distros: I agree with this. We will see a small number of distros that attract the attention of the commercial developers. This will be a good thing, but choice will remain because the plumbing is OSS. Running an application on your own distribution is a choice you will always be able to make; just don't expect to call the hotline and get support. (Of course, you could just lie and say Red Hat, then translate, because they won't know and should not care…)
Task-specific distros make sense as well. Someday, vendors of complex products (PLM, ERP) are going to wake up and find they can build just the environment they want and save a lot of installation/configuration hassles. People can choose a server, boot the environment and configure it according to their needs. Simple things like games and firewalls are a no-brainer. Somebody should be thinking about software evaluation also: boot the CD, work with the software, save your data to some filesystem and restart when you are done. (These could call home to get licenses and such if proprietary.)
As for the service pack mentality, no thanks. Remember Linux is about choice, so you upgrade when you want to. I don’t upgrade my Linux every six months, but I do upgrade software and systems as I see fit. Again, this is simply mapping the win32 mindset onto a platform that is not win32. Providing this service is a choice people would be happy to take advantage of, but this does not mean it is right or the future of Linux. It only means that some folks are going to want to update as they see fit.
As for the hardware, the sooner the better
Remember, there is not going to be a one size fits all Linux. That is just not how it works. It is going to take years to break from the upgrade treadmill Microsoft has promoted. As it happens, the market will choose winners and losers. This in turn, will do a lot to focus and bring forward the best in breed packages and distributions, but this will not reduce the choice and activity seen in Linux today; otherwise, we lose the very innovation and creativity that creates the potential choices of the future.
There are two types of computer users. Those that want to be in control of the machine and those that would rather the machine be in control.
Today many folks are used to the machine being in control because they are used to working on systems where all the core decisions are made in one place. (This will change)
This first group is looking for feature rich software that they can run cheap. Why? Because the more standard features, the better chance the ones they need will be in the package. Problem is the number of features required to make everyone happy gets larger as the number of potential users increases. (Bloat, hard to use, confusing…)
The rest of us grok the OSS thing! We are in charge of our environment. We understand we can get the features we want, working the way we want them to. Today this is tough, because you have to know some things in order to make sense of it, and because many bits of the system are still being written with the first group in mind. Tomorrow, this will be less of an issue because:
1. more of Linux will become common knowledge to more people, (application design will begin to reflect this)
2. the people building Linux today will become more adept at presenting environments to potential users that both promote choice and are understandable.
Consider this different point of view: instead of trying to build the one unified desktop environment, we should be building things in a way that can easily be integrated! Some will be followers and choose to pay for somebody else's integration services in the form of software subscriptions or update services; others will choose to integrate things themselves because they see some advantage in it.
This way of doing things allows people to build (or choose) an environment that works as they do instead of matching how they work to the environment.
There will always be people who want to just have it work for them, but they are not the people we should be building Linux for. Let a specialized distro provide this service. I hate to say it, but these people will not be competing on the same level as the rest of us will. Bad? Depends. Maybe your core business is not sensitive to your computing environment. Maybe it is though.
Should those that need powerful systems be forced to make do with stuff targeted at the masses when there is no reason for it?
Sorry for two posts, I just did not have all the ideas at once….
GTK is not slow. It is double buffered; they are trying to work around limitations in X. I am not a specialist on GTK, but it is not slow: it feels slow on redraw performance and so on, but it is not slow. I suppose perception counts more than anything else, though.
The issue is XFree86 most of the time. Once that is fixed, GTK should not feel slow anymore. Try some GTK apps on Windows: they should not have these issues, but then again, I could be mistaken.
If it is not slow, is it possible for me to get a source RPM for my Fedora install and recompile it without double buffering?
“(…) to my eyes KDE is just ugly.”
Well, that is true regarding the default theme. But if you just take the time to add ThinKeramik + Knifty + a good icon set + anti-aliasing, it will look very nice! I wish the KDE guys would dump those ugly Crystal icons; they don't look professional to me!
LCAS is already possible with LDAP + Samba + PAM, I think. A dog to configure and set up, though, I expect.
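For the curious, the usual way to wire that up is to put the LDAP module into the PAM auth stack; a minimal, illustrative fragment (the file name, module availability and options vary by distro, so treat this as a sketch rather than a working config):

```
# /etc/pam.d/login -- illustrative fragment only
auth    sufficient    pam_ldap.so
auth    required      pam_unix.so try_first_pass
```

Samba would then be pointed at the same LDAP directory for its account backend, which is where most of the configuration pain lives.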
No, it is not possible to disable double buffering.
Part of the problem is that people judge “speed” by a few very specific factors – for instance, opaque resize and move performance.
In Fedora, there are hacks to try and improve that, and they work a bit but the real solution lies in the X server (kdrive in particular).
I suggest you fire up gtk-demo at some point, then try moving the splitters around. This causes the window to be re-rendered but it does *not* involve resizing X windows (at least, not much). I think you’ll be surprised at how smooth it feels, I know I was.
I think GTK+ is slightly slower at drawing than Qt but that’s primarily because GTK+ double buffers everything (makes some things faster and smoother, ie there is no flicker, but other things a bit slower), and GTK+ has more advanced widget containment and text rendering subsystems which eat more CPU time.
I don’t find the speed of GTK+ a problem, and besides, in future double buffered X windows will make the toolkit speed less of an issue. Things like the quality of the widgets, the beauty of the stock artwork (of which Qt has little to none) and so on make or break a widget toolkit for me. I find GTK+ better in these regards as a user.
Oh, finally, I dunno where ChocolateCheeseCake got the idea that GTKmm is less featureful than the GTK+ C bindings. IIRC GTKmm wraps 2.2 just fine, and there is even a version that does 2.4 – personally I think people bitching about the GTK+ API or “technology” are typically back seat coders who don’t actually do that much. At least, I see lots of GTK apps out there, so if it really was as bad as some make out, why do we see so many?
To me it seems that GNOME and KDE are going in completely different directions and satisfying different markets: GNOME the corporate desktop and KDE the power user. Personally I see no need for one standard environment; instead, I think distros should be aimed at one market or another and not try to be everything at once. This means including one consistent desktop environment and ditching the other. This is already starting to happen.
Lest we forget, in our desire for clear vision: Linux does not have a single, uniform user base. The underlying philosophical debate which permeates all discussion about the future of Linux and the integration of its various subsystems is the question of the identity of Linux. Linux does not have an identity. Normally this debate is framed by mutually opposing ideas, i.e. that Linux is about choice, or that it is deficient by virtue of a lack of integration, a lack of uniformity, and in essence chaotic due to the superabundance of alternatives.
I seriously doubt most developers write their programs to maximize choice. Choice is usually understood as "the freedom to". Those who criticize the apparent integrative failings of Linux are often looking at choice from the opposite perspective, i.e. "the freedom from". Overloaded with alternatives, each having advantages and disadvantages, many seek freedom from having to choose, and from having to deal with the inherent consequences of such choice. Thus one longs for an integrated Linux. Developers are usually more interested in utility than in the heavily laden ideological question about choice.
The profusion of various distributions is not a sign of the immaturity of Linux; it is, on the contrary, a sign of its maturity. The domain of utility for computers and their usage has diversified, a tell-tale sign of system complexity. Linux has evolved along with the evolving diversification of this domain of utility. Many of the problems that are associated with Linux are in fact "problems" of computers themselves. The computer is unlike any other item ever invented by mankind; no other tool has so many diverse uses and ways of being employed. Perhaps one could compare computers to synthetic plastics, which can be formed and used for just about anything. Computers have become a part of virtually all modern technology, yet one rarely thinks of their microwave as a computer, let alone their VCR.
Linux, having been primarily a development platform, has exposed (revealed) the computer relative to its origin, i.e. Linux opens up more possibilities for uses of computers than any other operating system ever has. The success of Microsoft and Apple was due to their successful delimitation of what a computer should be. Both of these companies took computer technology, which can be used in a nearly limitless number of applications, and chose, for us, which applications a computer should be best suited for. Linux does the opposite: Linux has undone this choice, opening up a pandora's box of potential utility. Linux is not as good as Windows or Macintosh at those things for which Windows and Macintosh were created. But Linux contains a fundamental superset of all of the technologies present in Windows and Macintosh.
The choice about which application a computer is best suited for is now the choice of the distributors. Is the computer to be used as a web server, or as a floppy-only firewall router? Should it be a document processing device, or a multimedia platform? Should it be a print server, or the basis of a cluster? More and more we see Linux being used in hand-held devices, whether we are talking about mp3 players, cell phones or personal organizers. In so far as Linux maintains and furthers standards, the questions of unification and integration become moot.
Most of those things people complain about, including myself, are due to the lack of sufficient standardization and inadequate implementation thereof, whether one is talking about MIME types, support for document types, or hardware. In our industrial consumer world we are led to believe that "one size fits all". Linux proves the opposite, paradoxically. Linux, taken as a family of associated operating systems running on devices which contain certain supported technology, is a "one size fits all" operating system; yet if we compare Debian and Red Hat, we know that we are talking about two very different things.
The identity of Linux is almost as fragmented as the identity of computers themselves, and this is because of the originary relationship of the operating system to the computer technology itself. The distributions which are most flexible and allow the highest degree of configurability are the most salient examples of this relationship. Red Hat, SuSE and Mandrake have all taken the road of providing a "one size fits all" distribution. Distros like Debian and Gentoo enable the user to configure an operating system precisely for a particular set of uses. This same debate is also at the center of KDE versus GNOME versus all of the other WMs and DEs.
The tool which best fits the needs and values of its user is the appropriate tool for those uses. Without reference to values, Windows is best suited to the majority of desktop uses of most users. But the open source community is proof that values are of increasing importance. The real value of Linux is priceless; no amount of money could buy the devotion of so many developers. The price we pay as users of Linux for this pricelessness is actually recognizing and evaluating, for our purposes, which parts of the whole we wish to use together, and investing enough time and energy to find the unique solution for the tasks at hand. The commercial distributors of Linux express this investment of time and energy in terms of dollars and "free" us from this burden, yet Linux remains free. The burden of choice is our obligation to the developers, for the developers have given us more than we have asked for.
There may come a day when something based on Linux has become sufficiently commercialized that it becomes a brand name like Windows. But there will never come a day when there are only two or three Linux distros, except in the case where the technology has been so superseded that operating systems as such no longer exist. Freedesktop.org will continue to work on creating standards. WMs and DEs will over time draw on the common benefit which such standards offer; those which do not will wither on the vine. Linux has now come full circle with distros like Gentoo: Linux has returned to (the) source.
The bane of Linux has been its package management, and package management alone is the single largest factor in the fragmentation of Linux. Due to the fundamental relationship between how binary packages are managed and the ensuing structure of the operating system, no real headway can be made towards creating a unified Linux. But other options exist.
Firstly, if the Linux distro is constructed to work from source, package management becomes simply a question of compilation time. Given time, source-based distros put an end to the problems of binary package management. Secondly, things like zero-install may, in the broadband future, completely surpass all existing package management solutions. Imagine turning on a computer built with an open BIOS which automatically boots with an internet connection; a browser appears with an assortment of software categorized by utility; the user clicks on the software they want to use and voila, the application appears, and is cached to be used again without access to the internet. This is the idea of zero-install, and it may change what we associate with computers, when and if broadband becomes ubiquitous.
Binary package management problems are easily solved: either avoid binaries, or avoid management. A hybrid of these two variants may well be a future of Linux: meta-distros (distros which return to the source) plus zero-install. Short of this, we will end up with 200-300 distros based on binary package management, each having a targeted user base, each radically delimited in scope of usage, i.e. we will have more of that which we already have.
The fact that Sun's distro is discounted out of hand by the allegedly knowledgeable reviewer is interesting.
The entire Linux market fits in $200 a year. After you add Sun’s sales I bet it is quite a bit bigger. IBM is giving Linux lip service to blunt MS. Sun is actually selling their JDS to all comers.
The fact that Sun is in negotiations with Walmart and Office Depot proves that discounting it out of hand is not very, well, analytical.
If you ask me, Sun and Novell will be the big Linux players. RH will likely disappear. The other distros will be bit part players and the hobby guys will download debian/fedora.
But then what do I know.
Mike Hearn writes…
I suggest you fire up gtk-demo at some point, then try moving the splitters around.
How do I do this?
>> The entire Linux market fits in $200 a year.
Really, I never realised it was *that* small.
That GNOME will go even simpler and easier to use, and KDE will become a super-DE that has everything but the kitchen sink.
It's pretty clear even after 5 minutes' usage that KDE is the most sophisticated, powerful, and fully featured desktop environment out there. I only hope it doesn't become too complex/"integrated" at the expense of being flexibly incorporated into other "systems". It would be sad if KDE were overlooked in an increasing number of areas due to its all-encompassing nature; a more streamlined KDE "lite" would be nice…
I was thinking the same thing. If a third DE was to emerge I would think enlightenment could be the mysterious DE.
Well said Mike Hearn.
>> Like Walmart and Office Depot.
Yeah, but it may just be their online site. Walmart and Sam's Club already offer Linux computers, but unless they're in the normal Walmart brick-and-mortar stores we won't see a big increase in Linux sales.
Overall, a good article if one wants to hear about Linux for the corporation. But *nix is more than that! It's about having control of your computer again, something we haven't had since the days of the C-64. He has completely ignored the whole "free"DOM concept of Linux. This is why there are so many of the Linux distros he's complaining about: they are mostly people's hobby OSes, not necessarily serious competitive "distros".
Excellent words Karl. Computers are complex devices and are used in many different ways. But some people think it will become a toaster like device. The computer can be used for a single purpose but it can not be a single purpose device.
I read a post that mentioned Darwin. In some respects you can think of Linux as living out natural selection. Each distribution has some survival features that people want: one distribution might have a great package management system and another might have a great installer. As time moves forward, other distributions will combine some of these great features. The distributions that have the features people want will survive, while the distributions that offer less won't. There will always be a variety of Linux distributions, though. As people's wants change, so will the features in Linux.
Linux is still young yet in its life cycle. Trying to force Linux into a small niche market will probably hinder its development. Linux is not just a desktop operating system. It is an operating system for a variety of devices.
The features of the future Linux are dependent on the areas it will live in. Phones, PDAs, game consoles, TV appliances, other embedded systems, desktop computers, and future devices.
Personally, with as much Mac OS X envy I’ve seen floating around these days, I’m surprised developers haven’t flocked to GNUstep. That could be the “OS X on x86” people have been whining about.
That aside, it’s pretty clear that Linux on the Desktop is really just a pipe dream unless something cohesive can be produced by the community as a whole. And no, it isn’t about limiting choice, it’s about establishing standards and practices. People who want to go off and do their own thing are still free to do so, they’ll just be departing from the mainstream.
People argue both sides in the debate over combining GNOME and KDE: how people don't want to run two complete environments, yet the C vs C++ debate keeps them separate, and how combining them "limits choice".
I submit that they already are “combined” at a certain level. They both make X11 calls to an X11 server. I don’t hear any raging debate about X11 being the standard Windowing interface on *NIX. Maybe some calls to improve it. But, they both have some commonality at that level.
Why can’t we just push the commonality forward a bit? Leave the API for both, but consolidate the duplicate code into a single library, like Xlib? Is there any reason why each toolkit needs to re-implement widget drawing, theming, text services, etc? This scheme works fairly well for Apple: Carbon (C based) and Cocoa (Obj-C based) co-exist rather nicely, to the point that, unless you know what to look for, the average user doesn’t know or care what kind of app they are running. Both Carbon and Cocoa hook into Quartz for drawing to the screen, but it doesn’t mean the Carbon programmers have to give up C and start learning Obj-C or vice versa. Can’t the Linux community do the same? Re-factor the common bits into a beefed up XLib-like library? This would come with some nice benefits; when things like anti-aliased type are added to the common library, any toolkits using it will benefit automatically.
I don’t really understand why it has to come down to a war between a “consistent look and feel” and “limiting choice”. When it really comes down to it, end users don’t really care what the program they’re using is written in, as long as it looks and acts in a consistent manner.
“I don’t really understand why it has to come down to a war between a “consistent look and feel” and “limiting choice”.”
Well said! I would even say: Give me a choice to have consistent look and feel 🙂
Of course, I know that I can have all-GTK or all-QT desktop. But today this alternative is a bit limiting…
By the way: to make it easier to achieve that goal, how feasible would it be to create a cross-toolkit development environment and theme builder? I imagine it would be cool if I could automatically build two identical frontends to the same program using Qt and GTK+.
I've settled on using Mandrake since 9.0 as my full-time OS. Certainly the rate of upgrades is difficult: when a whole new point release of the distro comes out I have to reinstall (upgrades are just too likely to break a system that's been altered from the default).
I had to put off upgrading to 9.2 for several weeks before I could find the time in between work to spend the hours needed to get it up and running in a usable state.
I don’t really see the current state of the Linux distro market changing much in the future. It’s the very nature of Linux.
A common desktop interface with a common development API is important for a Linux/BSD OS only if:
– End users want to see powerful applications. They could even come over from the Mac/Windows world, but only if manufacturers (for drivers) and software companies like Macromedia, Adobe, etc. are interested in taking the leap. The first thing needed to appeal to software producers is a unified and powerful API/GUI.
Is that what you want to see on a Linux platform?
– New and fresh developers are lost. They don’t even know which language and toolkit to use, because there are too many!
If there were a de facto API/GUI on Linux, it would be easier to make a decision, and it would also flatten the learning curve. (Look at Windows: little by little, developers are making the move to .NET, which is becoming the new MS standard, with the possibility of being cross-platform if MS wants it to be. Microsoft did it right… and it attracts developers.)
Is that what you want to see on a Linux platform, too?
– Network and server administrators also need common tools to configure and manage databases, mail servers, etc., with a look and feel tied to the desktop. There are so many different distros with their own tools that nobody can claim to be a Linux expert. You can be an expert on Red Hat, not on SuSE. Consulting companies would embrace Linux if a common platform with common tools existed. Personally, I’m bored of seeing seven different tools to configure MySQL or PostgreSQL, and seven more for Apache – one based on Qt, another on GTK+, and so on…
Is that what you want to see on a Linux platform?
There should be more DEs – many more, in fact. Since there are not thousands, Linux has failed, because not enough people are able to leverage open source knowledge. What the community does and what vendors do are different things, and although they can work together, the community should not be limited by any vendor; the idea of a monopoly is weak.
I maintain that unless there is some sort of merging – not a set of standards like freedesktop.org, but rather a common base for development – there will be a fragmented feel to Linux that simply doesn’t exist in Windows today.
I fully agree, but I also think/hope that freedesktop.org, as it matures, will go deeper toward greater interoperability between the two environments.
That, along with shared-development models like UserLinux, will help.
> – End users want to see powerful applications. They could even come over from the Mac/Windows world, but only if manufacturers (for drivers) and software companies like Macromedia, Adobe, etc. are interested in taking the leap. The first thing needed to appeal to software producers is a unified and powerful API/GUI.
Yeah, so Windows has pretty much several “standard” APIs – i.e., the Win32 API, MFC, Visual Basic, .NET.
Yet why do vendors like Adobe and Macromedia not use any of them on Windows, but rather make their own internal ones?
The reason Adobe and Macromedia will not be coming to Linux anytime soon is not that it doesn’t have one standard API, but rather that they would likely sell very few copies of their software – the Linux market is still mostly server-based.
> The reason Adobe and Macromedia will not be coming to Linux anytime soon is not that it doesn’t have one standard API, but rather that they would likely sell very few copies of their software – the Linux market is still mostly server-based.
True… but I think they will come to Linux faster than we think if new companies with great ideas start to develop high-quality products and run their business on Linux. The “common platform/API” based on Linux is just the kind of spark that could attract software companies… don’t you think?
It has already emerged, and it is XFce4.
Man do I LOVE it!
The real answer is here: < http://www.xfree86.org >. This is where the work has to be done in order to expose the desktop, so that we can learn to develop successful projects involving general widget libraries, from which vendors can base a product line and from which we can improve the technology. The original concept has to be dug out and exposed – brought into the light, so to speak.
The victory the community can achieve from open source technology comes from exposing the system’s knowledge and generalizing opportunities for vendors, who in turn use flexible architecture (such as OOP, frameworks, virtual machines, and so on) to base their product lines on. There is no victory without freedom, so your third desktop is the vendor offering: a specialization of the community’s general progress at learning. If you generalize any desktop, then you will find the common code base. It’s that base we need to expose, because that is the constraint that affects all of the specialization and prevents us from moving forward. If you want to view that as the “one” desktop, then it is true today.
The first thing needed to appeal to software producers is a unified and powerful API/GUI.
Yep… it would definitely help – the current situation adds extra complications. However, there’s room to focus on two things at once: some “centrality” and “standards” to aid the porting of certain software, etc., and on the flip side, a plethora of DEs/window managers growing and evolving on the periphery as usual. There’s room for both.
If we did the work, then unification would be easy for vendors to accomplish; however, if the work is not the focus, then you are going to give too much control to the vendor.
Hewlett-Packard is wrong. They are not good for Linux. They don’t understand what it’s all about; they need IBM. HP and IBM should merge, lol. That makes sense.
Sun is the leader in terms of a unified solution: their solution is Java, and their product line is based on Linux.
…in order to have the best, though, we need to do the work.
Or rather, HP and Novell should merge, but Novell is unproven.
Under Linux, the ‘standard’ is FREEDOM!
I cannot report abuse for Eugenia because the link is missing. Are you immune?
But keep at it – the idea is not bad, and it’s really, really old…
In fact, if I were using old hardware, I’d probably use it all the time now. But GNOME 2.4 isn’t all that slow on a moderately fast machine (Athlon 2000+, 512 RAM); and since I mostly use the command line to move files around, I don’t have to suffer through Nautilus very often (and, to be fair, Nautilus has improved some).
I use really old hardware: a P2 266 MHz laptop with 160 MB RAM and a 4 GB hard drive. No burner. Fedora runs NICE. GNOME 2.4 is quick and responsive (once up and running), and I couldn’t be happier to be away from ugly KDE for a little while longer. I honestly doubt I will pay much attention to KDE until the slicKer project is finished.
In posing possible futures for Linux, Adam’s excellent article describes various things that are wrong with it now.
Notwithstanding the fundamentalist attitudes people evince toward such trifles as “desktops” – if your identity gets tied up in such things, you’ve got problems – I hope Linux can “unify” itself and transcend its chief flaws, which many have noted to be packages, hardware/drivers, and too many chaotic – that is, self-cancelling – “choices”.
The main thing is: what are you Producing via your OS–words, music, graphics, games, networks?–and what application will you use for it? Linux applications for the “arts” are years behind those for Mac and Windows. (Yes, I’ve tried the best, and, repeat, they are years behind). They cannot “evolve” until Linux does.
Mac and Windows are vendors, and Linux is a platform. Linux is not a vendor!
So get after the vendors like Sun, RH, and SUSE for unification, etc.
I say that the problem is going to be solved by generalizing the technology – by learning from and relying on the nature of open source and following its direction. That will create opportunities for vendors basing a product line on Linux.
You are the fundamentalist.
Until you realize that it is not SUSE’s, RH’s, or SUN’s fault. It’s our fault.
In order to find the problem, generalize – and all desktops on Linux probably lead to one God damn library!
I have written sloppyadm (AND WOULD LIKE SOME BETA TESTERS), which uses LDAP, Samba, and CUPS. It includes a setup utility to get it working (which has a Kommander (Quanta) front-end; sorry, none yet for the tool itself).
It’s written in bash, so it’s easily modifiable. Another person and I plan to add the ability to use it locally, and a usable GUI (hopefully) by February.
One more thing: use CVS.
Mac and Windows are vendors, and Linux is a platform. Linux is not a vendor!
Actually, you couldn’t be more wrong. Mac and Windows are both products! Apple and Microsoft are vendors. Linux is a platform EXACTLY as Windows is a platform. Mac is both hardware and, arguably, an OS.
“Linux news is getting more and more exciting, and somehow, managing to get less and less interesting. Why? ”
The real reason is because Linux is maturing. Now that it has become a real product, it is seeming less like the second coming of Christ and more like a tool you use. I’m not insulting the product, it is very fine indeed, but it is only an operating system. I wouldn’t even be surprised to find Microsoft winning the OS war due to complete apathy.
There are so many different distros with their own tools that nobody can claim to be a Linux expert. You can be an expert on Red Hat, not on SuSE.
For one, the GUI server-admin tools on any distro are there for the small domain/server admins – the ones where some poor sap who knows which way to insert an IDE cable got roped into being the network admin for the entire domain.
Real administrators (or, more accurately, those who deal with more complicated setups in a non-hobby area) use the command line and are quite happy to edit text files, which don’t tend to differ across distributions.
“Real administrators (or, more accurately, those who deal with more complicated setups in a non-hobby area) use the command line and are quite happy to edit text files, which don’t tend to differ across distributions.”
Let’s play the “Guess Which Distribution I am Using” game.
The correct answer was: “yes, in fact, even the simplest file locations and logic differ greatly among distributions, but they do not change much between releases of a given distribution.”
The truth is that “real administrators” will use one of the larger Linux distributions and understand the underlying concepts of Unix/system administration. Once that is done, switching distributions, or even operating systems, is a trivial matter. Remember, Linux is only a kernel.
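To illustrate the point about file locations: the same basic setting really does live in different places on different distributions, even though the underlying concept is identical. A small comparison, using the primary network-interface configuration as the example – these paths are as commonly documented for distributions of this era, so treat them as illustrative rather than exhaustive:

```python
# Where the primary network-interface configuration commonly lived on a
# few distributions -- same concept, different location on disk.
NET_CONFIG = {
    "Red Hat":   "/etc/sysconfig/network-scripts/ifcfg-eth0",
    "SuSE":      "/etc/sysconfig/network/",
    "Debian":    "/etc/network/interfaces",
    "Slackware": "/etc/rc.d/rc.inet1",
}

def where(distro):
    """Return the network config path for a distro, or None if unknown."""
    return NET_CONFIG.get(distro)

for distro, path in sorted(NET_CONFIG.items()):
    print(f"{distro:10} {path}")
```

An admin who understands what an interface config *does* can find it with `find /etc -name '*eth0*'` in a minute, which is the commenter’s point: the concepts transfer, only the paths change.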
Okay, we get it! Give it a rest, will ya?
I’m pro-Linux, pro-FSF and OSI, and against MS’s monopoly, yet I submitted two of your posts for moderation. If you can’t keep a certain level of decorum in your posts then you’re no better than the anti-Linux trolls – unless you’re really one, posting like this to make Linux users look bad.
Either way, stop. We get it.
Just my 2 cents, but the entire history of Microsoft shows that you don’t need to have the best technology. What you need is technology that’s only “good enough” allied to a killer app or three. In Microsoft’s case, this meant spreadsheets and WP for corporate accounts departments. Everything else spread outwards from there.
In this light, it’s hard to foresee any outcome other than Gnome eventually pushing KDE onto the sidelines. KDE may be technically superior, but GTK and Gnome are where all the action is in terms of daily use by large numbers of people. And being mostly corporates, those are the people with the influence and the money that pull in the big software houses. It’s not rocket science.
As for distributions, a heck of a lot could be done if some of the existing commercial distros took a much tougher and more focused line. Instead of spraying a user’s hard disk with 25 different apps that all do the same thing, the distro houses could major on a much smaller number of best-of-breed ones. And if necessary, they should get in there and rewrite chunks of the apps and/or underlying code to ensure that the whole thing hangs together. No doubt doing so would provoke howls of outrage from the “community”, but until someone has the balls to formulate a vision for a single, unified way of doing things on Linux and push it through come what may, the present mess will just continue. At present the best candidate for this is Novell-SuSE, since they also have access to IBM’s piggy-bank.
As a Red Hatter said during the original Bluecurve debate, “They’ll be talking choice all the way to the bargain bin at CompUSA.” I’d simply like something that works. Hint: if something doesn’t work, no one uses it.
You reveal pretty serious shortcomings in your grasp of the forces which drive open source development, linux development, new distro development, and the desktop on linux issue.
I don’t really have time to deal with all of the shortcomings of your statements, but I will instead just use your absurd comments on the Mozilla –> Firebird evolution to illustrate your lack of understanding. Firebird did not “Suddenly … come along and takes the code and makes a huge dent in the browser market.” The Firebird project was initiated by a group of people who were already close to the main Mozilla project, and was the expression of a longstanding desire of many Mozilla developers (and users – like myself) to move away from the multifunction (browser, mail, HTML editor, etc.) paradigm inherited from Netscape. Firebird was not the first project to tackle this unmet need, but because it was much more closely related to the main Mozilla project (in fact, the componentized model of which Firebird is the browser core is planned to become the mainline product), it met with immediate acceptance from existing Mozilla users, as there is a high level of confidence that Firebird will provide exactly the same browser capabilities as Mozilla. The existence of the “parallel” Thunderbird email project enhances this effect. Contrary to what you imply, the Firebird developers worked (work) much more closely with the main Mozilla development effort than any other effort to develop a standalone browser – to the point where it could be called “in parallel”.
Now, on to the phenomenon of “People who never would’ve used Mozilla proper have now moved to Firebird”. This phenomenon is nothing more than a validation of the strategy of deconstructing the capabilities now in Mozilla into separate components, in particular, MAKING EMAIL A SEPARATE FUNCTION. The barrier to adoption presented by the reluctance of potential personal, but especially corporate, users, to introduce a program incorporating an email program into an environment in which there was an existing email infrastructure (and the investment implied) has long been recognized, and was in fact one of main motivations for adopting the new “componentized” model, the other being complexity of managing development under the single-product model. The new model makes it possible for corporations to painlessly experiment with adoption of Firebird, with confidence that page rendering will be exactly the same as Mozilla/Netscape.
I suppose I can’t resist commenting on your third desktop prediction. Yeah, it could happen, but you give no empirical or deductive argument why it should be so, other than what appears to be a personal yearning. I don’t necessarily believe that space aliens are going to land on Ellis Island tomorrow, but I’m not totally ruling that out either. While I understand there is an argument to be made for the market imperative of a single desktop paradigm, people like you need to deal with the fact that such an outcome only expresses itself when there is a single dominant actor, which thankfully is not true for Linux (and, once again thankfully, is particularly hard to achieve). Given the effort involved in first building and then growing mindshare for such a third (DOMINANT) desktop, such a product could only come from a large corporation or group of corporations: possibilities include Microsoft or a consortium of IBM, Novell, and others. Somehow, I doubt either of those possibilities would light an adoption fire in existing users of Linux. I’m also confused by your backhanded swipe at KDE. I use it all day long, and it feels pretty “unified” to me. Here’s my prediction for the desktop: (1) Both KDE and Gnome will continue to improve. (2) A high proportion of Linux users will continue to choose one of those two desktops. (3) An investment will be made (by corporations) to enable a core group of key applications (Firebird, OO?) to function natively on either platform, where licensing concerns can be dealt with. Other than that, people will use the version of the app/utility designed for the desktop they are using – kedit vs. gedit, photo viewing apps, etc., etc., etc. – after all, that’s part of deciding which desktop to use.
(4) Some people will continue to use, on either KDE or Gnome, apps not tailored specifically for either platform, or one for the other, foregoing the fact that “dialog buttons are not consistent”, blah, blah, blah, because they found some other, more compelling reason to use said program. This will go on pretty much forever, AND THAT WILL BE OK, Adam, because if that’s what some people want to do, that’s what they want to do.
your absurd comments on the Mozilla –> Firebird evolution
I love how people who seem to have intelligent things to say virtually always ruin it by inciting negativity with their tone. If you have something to say, I would expect it said without you being rude in the process.
In the meantime, I still don’t buy what you’re saying. Do I give a crap if the Firebird developers used to work on Mozilla, still work on Mozilla, forked Mozilla, or even have sex with hookers while shouting “Mozilla”? No. All I know is that Mozilla has been around a long time, and it was not used by many I knew outside of Linux. Conversely, Firebird, which is still a relatively young project in its own right (aside from the age of the code), is now deployed in many companies – I know about 20% of our desktop users have it both here and at home. The argument was only to illustrate that if a third DE rises, it could be heavily based on all of, or a subset of, KDE or Gnome code.
While it seems like you know what you’re saying, it doesn’t seem all that different from what I was saying.