One of the major advantages Linux has to offer is cross-platform functionality. Far from being a PC operating system that has been extended for other uses, it runs on cell phones, mainframes and everything in between. This offers IT departments the possibility of using Linux to consolidate resources into a single skill set, or at least a single OS. The danger, however, is that since developers are free to conduct extensive customization, it may fork into a number of incompatible versions, says EarthWeb.
But so far it hasn’t.
Maybe it never will.
Why worry?
Actually, it’s already somewhat begun. For a while now Linux distributors have been distributing massively patched versions of different libraries and even their kernel. This makes it nearly impossible to take a *binary* commercial application compiled for one Linux distribution and run it on another without *any* issues. Even applications that stick to the standard libraries (libc, libstdc++, etc.) do not always run correctly.
Source compatibility is usually pretty good, but binary compatibility is awful.
“Source compatibility is usually pretty good, but binary compatibility is awful.”
That means that binary distribution is obsolete…
What is it? 7…8 years since I heard this the first time?
The point is that we’re seeing desktop computers that are plenty quick enough to compile an occasional application, and it’s really not that hard for the average administrator (even the more complex compile procedures that expect you to pass a few configure arguments).
Even most users could easily run ./configure, make, make install. And I would imagine autopackage has, or will build in, source installations?
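For reference, that whole dance is usually just this (a minimal sketch; “foo-1.0” is a placeholder tarball and the prefix is up to you):

# Classic source install ("foo-1.0" is a placeholder tarball name).
tar xzf foo-1.0.tar.gz
cd foo-1.0
./configure --prefix=/usr/local
make
make install        # as root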
Anyway, the kernel maintainers have partially forced heavy patching with the 2.6 policies. Essentially, kernel reliability is in the hands of distributors; just running a vanilla kernel could be risky business these days if you don’t test it first. It might break something, or suddenly become incompatible with your video drivers!
But in the end, there aren’t that many major distributions; there are a lot of small and highly localized ones, but most of these simply make a few small changes to each release of a major one (Knoppix, for example, has seemingly limitless knockoffs; even its knockoffs have knockoffs).
Major projects seem to fork less, and just have everyone move to another project based on the same code…
Source would be fine with me, but a standard way of compiling it needs to be provided for Joe User.
Perhaps the “installer” could do the compile also.
The problem is that it would require a mentality shift for developers: they would need to write far fewer monster apps than they do most of the time, since those take ages to compile. Any app over 2 MB would be a borefest to compile.
Another problem is that you need to keep the source around, as you might not be able to find it later.
“That means that binary distribution is obsolete…”
So you’re saying that anyone who doesn’t compile all the apps he wants to install is using an obsolete distro? Seems we have come full circle.
“Ian Murdoch [sic], Debian’s founding father, does not believe Ubuntu’s popularity bodes well for Debian-based distros. ‘If anything, Ubuntu’s popularity is a net negative for Debian,’ Murdoch told internetnews.com. ‘It’s diverged so far from Sarge that packages built for Ubuntu often don’t work on Sarge. And given the momentum behind Ubuntu, more and more packages are being built like this. The result is a potential compatibility nightmare.'”
Read more on his web site:
http://ianmurdock.com/archives/000244.html
Come on now, source isn’t going to be an option for most users, simply because they want to download and install a package and be able to use it right away. I’m an experienced user, and hell if I want to wait for a bunch of programs to compile every time I shift a system’s uses. That goes double on older systems.
I think the solution is going to be distributors patching minimally downstream, instead submitting patches upstream and waiting for them to trickle down. Slackware does this, so does Arch, but few others at this point. It’s going to become increasingly important, though, as the community grows and attracts more closed development.
There is a difference between a distribution and a kernel.
“Actually, it’s already somewhat begun. For a while now Linux distributors have been distributing massively patched versions of different libraries and even their kernel.”
Name one. Are you talking about backporting 2.6 features into the 2.4 kernel for Red Hat? That’s not a fork at all, since everything is upstream already. Massive forking has never happened at the kernel level. This is mere FUD.
Try to build a Linux from scratch… with an X server and some stuff like Apache / MySQL / PHP5 / Fluxbox or Xfce / Firefox, it’ll take you about a week of compilation and configuration before it reaches an “acceptable” level! And afterwards you’ll never upgrade your distro except for user-level programs and maybe the kernel…
And for what enhancement? 10% more speed at most…
What should be done is a kind of halfway house between ./configure; make; make install and the rpm -ivh / apt-get stuff.
It should be the distribution maintainers and some other expert guys who create specific packages for a lot of architectures (not only i386… real x86_64 maybe…), with optionally the ability to compile it yourself if you want to!
I’m now on Arch Linux and you can do that! You choose a binary package or you compile it from source! (Only i686-optimized, but incredibly fast and STABLE with an up-to-date kernel / libs / apps!)
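Roughly what that looks like on Arch (a sketch; the ABS tree location and the package name “foo” are assumptions):

# Build from source with ABS/makepkg...
cd /var/abs/extra/foo       # location within the ABS tree is an assumption
makepkg                     # compiles and produces a .pkg.tar.gz
pacman -U foo-*.pkg.tar.gz  # install the package you just built
# ...or just grab the prebuilt binary instead:
pacman -S foo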
It’s just that it is not going to be the problem all these people make it out to be. Would you expect the same app that runs on a mainframe to run on your mobile phone? Probably not.
And before everyone goes off into some stupid discussion about binary compatibility vs. source, remember that you can always get binary compatibility if you want. Remember iBCS, anyone? We’ve been down that road before and gotten through. Not to mention that binary compatibility has a difficult future because Linux does not just run on IA86 anymore; in fact, IA86 is dying. Developers are not stupid; they know these things and can take them into account.
Is this where GNU/Hurd enters the playing field? After all, it is a question of which servers to turn on or off to fit a certain need. The OS’s functionality may vary only slightly with the selection of servers but, along with a definition of norms, the core functionality can stay pretty much the same.
That should be IA32 not IA86
Heh, that’s why I don’t run binary-based distros, apart from the fact that they are unstable.
There are as many differences among Linux distributions as there are among Solaris/AIX/HPUX/IRIX. Whether or not the kernel forks isn’t all that significant compared to the huge variety in the user space.
“Heh, that’s why I don’t run binary-based distros, apart from the fact that they are unstable.”
Total crap, man. Everything you actually run is binary. Just because you compile from the source code doesn’t make unstable code stable, or vice versa.
But it’s mainly a userland problem. Once OSDL certifies GTK+ as the standard toolkit, a lot of these problems will go away.
”The different OS types and versions make it very time consuming to verify that all the pieces (libraries, compilers, file systems, etc.) work together as expected.”
Maybe there should just be one standardised package manager for the mainstream distros (hint: the Smart package manager). When the source is available, what’s the problem then? It’s the task of the package maintainers to make the packages, and of the devs to modify the code in order to correct the deviations that could cause unnecessary inconvenience.
”A researcher should be able to use any of the systems without having to know the paths to all of the necessary tools, the methods to submit jobs for execution or the paths for the storage systems on the systems,” he adds.
Depends; if the researcher’s major is computer science (I rather prefer the term information technology), then there’s something not quite right.
There are more than 380 different Linux distributions, after all, and developers need to make sure their products function well on at least all the major ones in order to make their efforts profitable.
It’s very simple to code an environment-variable deviation database. “We think you use Mandriva, press 1 if that’s right, in order for the program to find… bla bla.” “We think you use Gentoo…” It’s trivial, and boring. How many mainstream distros are there? 4? Maybe 5 (Gentoo, Debian, RHEL, SuSE, Mandriva).
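A hedged sketch of the detection half, using the conventional per-distro release files (the file names below are the usual ones, not guaranteed on every system):

#!/bin/sh
# Crude distro detection via the customary release files.
if   [ -f /etc/redhat-release ];   then echo "Red Hat / RHEL / Fedora"
elif [ -f /etc/SuSE-release ];     then echo "SuSE"
elif [ -f /etc/mandrake-release ]; then echo "Mandrake / Mandriva"
elif [ -f /etc/gentoo-release ];   then echo "Gentoo"
elif [ -f /etc/debian_version ];   then echo "Debian or a derivative"
else echo "Unknown distribution"
fi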
A very exaggerated article, written with a lack of experience.
Hurd is pretty much dead! It’s still being worked on, but not at the scale necessary to make something out of it. The Linux kernel simply made the Hurd effort obsolete. It will still remain, but only for people interested in kernel development.
“One of the major advantages Linux has to offer is cross-platform functionality.”
So why are we complaining about “binary compatibility”?
When is the last time you could run that same binary file on a Cray, a Mac, a PC, and a cellphone?
Does anyone actually notice that this article isn’t saying anything worthwhile and tends to sow confusion where there isn’t any reason for it?
Rahul @ Red Hat: “Total crap, man. Everything you actually run is binary. Just because you compile from the source code doesn’t make unstable code stable, or vice versa.”
It does. My toolchain is consistent, optimized and verifiable. I can’t say the same for a third party binary, processed by an anonymous toolchain.
Most of the problems I have had with binary-based distros have been bugs associated with incompatible toolchains. That is, the binary was compiled in an environment that conflicts with mine.
This is one problem area almost eliminated by source-based distros.
“It does. My toolchain is consistent, optimized and verifiable. I can’t say the same for a third party binary, processed by an anonymous toolchain.”
Be specific. There is no anonymous toolchain used by a distribution.
“This is one problem area almost eliminated by source-based distros.”
Unless you are specific, there is nothing meaningful in this.
Just because Eugenia is posting an idea that is not yours, she is with the “evil Microsoft”… ridiculous.
Every distribution has a set of core libraries, applications and environment variables used to compile a binary. For almost all distributions, packages like glibc, GCC, binutils, debianutils, autotools, linux-headers, the distro’s initialization tools and many more comprise the toolchain.
It turns out that the problem of binary incompatibility among binary-based distros exists because almost every Linux distro, at any given point, does not have the same consistent and verifiable toolchain. Distributor X compiled a binary with environment variables X, glibc X, GCC X and binutils X, on architecture X.
Running this same binary in a Linux environment with a slightly different toolchain and environment, on a slightly different architecture, can result in anything from mild bugs to segmentation faults.
Source-based distros eliminate all these problems by default because all your packages, libraries and applications are linked against and compiled with a consistent and verifiable toolchain: yours. Unless the source code itself is buggy, bugs related to incompatible toolchains are completely eliminated.
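A small illustration of where those mismatches show up (“someapp” is a placeholder path): you can list exactly which versioned glibc symbols and shared libraries a prebuilt binary demands.

# Which versioned glibc symbols does this binary require?
objdump -T /usr/bin/someapp | grep GLIBC | sort -u
# And which shared libraries does it expect to find?
ldd /usr/bin/someapp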
The Linux kernel has become more tolerant of this problem. In the past, if you compiled a new kernel on the same box, you had to recompile all the drivers and modules too. Today that isn’t necessary, well, except for binary-distributed drivers. The proprietary Nvidia driver, for instance, has to be recompiled when you upgrade your kernel. The same is not true for the open-source nv driver.
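For what it’s worth, rebuilding an out-of-tree module against the running kernel is usually just the standard kbuild invocation (a sketch, assuming the matching kernel headers are installed and the module source sits in the current directory):

# Rebuild and install an external module against the running kernel.
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install   # as root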
Also, with binary-based distros, if you update or upgrade any low-level toolchain components like glibc or binutils, chances are you’ll have to recompile all your binaries again, or they might just start segfaulting for no apparent reason.
What further compounds the problem is that most binary-based distros (Debian excepted) have small repositories, so users are sometimes forced to hunt for third-party binary repositories that aren’t as thoroughly tested as the distros’ own repositories.
So you have people distributing binaries that they compiled in one environment, on one architecture, and sometimes with a different toolchain than the users have. I consider the fact that these binaries actually work on other machines a miracle, or just pure fluke. The reality is that the majority of them don’t work properly, that is, if they work at all.
Last week I installed Ubuntu on a friend’s machine, and nothing would launch. I mean nothing. We installed it on another laptop and everything worked fine. We then decided to install Vida on the problematic box and this time everything worked well. I’m willing to bet that Ubuntu was compiled for an arch that wasn’t supported by my friend’s box. That problem would not exist if I installed a source-based distro on it.
I could go on and on about individual problems I have experienced with binary-based distros especially with multimedia packages, but that’ll just bore you. Therefore, I reached the conclusion that source-based distros are more robust, stable and manageable in the long run.
Also, Linux isn’t like OS X or Windows, where the toolchain is pretty much set in stone for years. On those OSes, binary distribution makes sense, because the distributors tell you explicitly which version of Windows or OS X their product will run on. On Linux everything is constantly changing; packages become obsolete on average within 3–6 months.
A binary package that would have run on binutils-X will probably need to be recompiled for binutils-Y. So if the developer compiled the source code on binutils-X and I have binutils-Y, will the package run without problems on my box? Will it run at all? Your guess is as good as mine.
I don’t have time for all that, so for me it makes a lot more sense to distribute source code than binaries on Linux, just because its development model has been honed for source-based distribution.
In place of loads of distros, I would like to see the following:
a) GnuLinux for Servers Group
– 3 Major Companies involved here to avoid monopoly
b) GnuLinux for Business Users Group
– 3 here too
c) GnuLinux for Home Users Group
– 3 here too
d) GnuLinux for Older PCs
– 2 here
That makes it 11 distros… And yeah, the distros within a group should at least be standardized with each other. 🙂
…Wake up, Ankit!!! This doesn’t seem possible.
Making a binary that runs on different distributions requires work, but is not impossible. In the past few years, we at the autopackage project (http://autopackage.org/) have been working on documentation and tools to make binary compatibility easier. We already have users successfully using binaries that are compiled on totally different distributions. Please take a look at chapter 7 of the autopackage developer guide, apbuild, relaytool, and the shared library howto.
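As a rough sketch of how that looks in practice (apgcc/apg++ are apbuild’s compiler wrappers; treat the exact invocation as an assumption and check the guide):

# Build with apbuild's wrappers so the binary avoids depending on
# newer glibc/gcc symbols than it actually needs.
export CC=apgcc CXX=apg++
./configure --prefix=/usr
make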
The danger with Linux is not at the kernel level. Although there are many independent developers contributing their labor, what gets released publicly is firmly under the control of Linus Torvalds.
They are not talking about kernel forks.
Love the predictable shouts of fud, fud from the zealots though. Thou shalt not say anything bad about my favourite OS or I’ll huff and I’ll puff and I’ll blow your house down.
That’s right! Linux has been forking since day one!
FORKING AWESOME!! ;-p
“They are not talking about kernel forks. ”
Only the kernel is called Linux.
From the article:
“A researcher should be able to use any of the systems without having to know the paths to all of the necessary tools, the methods to submit jobs for execution or the paths for the storage systems on the systems,” he adds. “This would appear to be simple but it becomes very complex when this same type of goal is applied to multiple sites, such as those within the TeraGrid.”
What does Linux care about this? Linux is a kernel and its job is pretty clear cut: working on as many platforms as possible and having drivers for as many components as possible.
The myriad of things between the kernel and the user are not Linux’s problem. Is this a proposal to regulate and unify user interfaces? We have the GNU tools, and we are getting standards from FDO. You can’t impose these things; the community has to feel the need for them. The creative community too, not just the users.
> By Anonymous (IP: 68.189.137.—) “Exactly.”
Another AC in agreement 100% with Hobbit. Good post…
You’ve got the Linux fanboys on one side, and the rest of the world on the other, LOL.
I think the guy has valid points. AFAIK companies DID TRY to unite Linux (SuSE and a few others made UnitedLinux 1.0), but it died out as far as I know…
Also, the assertion of fanboys that “it’s not that difficult to compile”: well, it might not be, but do you want your grandma or dad to do this, or do you want them to double-click on install and be done with it? I personally want to install binaries and run them; I don’t want to sit and compile, EVEN IF I don’t have to alter any code.
It’s nearly impossible to take a *binary* commercial application compiled for one Linux distribution and run it on another without *any* issues. … Source compatibility is usually pretty good, but binary compatibility is awful.
OK, so don’t do that.
If you want to distribute a *binary* commercial application, then make four or five versions of the binary for the different main distributions.
but do you want your grandma or dad to do this, or do you want them to double-click on install and be done with it?
The difference between the skill sets needed to administer different versions of *nix (even from FreeBSD to Linux) is less than the difference between the skill sets needed to administer different versions of Windows.
All major versions of Windows have a grossly inconsistent directory layout. I can go from NetBSD to Ubuntu to Slackware and always know where /home is. Can’t say the same for Windows.
“Making a binary that runs on different distributions requires work, but is not impossible.”
Why bother?
Just take your source and compile it under four or five different distributions – making four or five different binaries, label each package as belonging to this or that distribution – and you are done.
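In practice that tends to mean one build per target inside a chroot or VM; a rough sketch (the distro names, paths and the packaging step here are all illustrative, not anyone’s actual build system):

# Illustrative only: build the same source once per target distribution.
for target in rhel4 sles9 debian-sarge mandriva; do
    chroot /srv/buildroots/$target /bin/sh -c '
        cd /build/myapp-1.0 &&
        ./configure --prefix=/usr &&
        make &&
        make install DESTDIR=/build/dest
    '
    tar czf myapp-1.0-$target.tar.gz -C /srv/buildroots/$target/build/dest .
done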
but do you want your grandma or dad to do this, or do you want them to double-click on install and be done with it?
No, they just have to press shortcuts; I installed and customised Gentoo on all their PCs. 🙂
Clearly you’ve not been here long. This is not simply a matter of Eugenia posting something I don’t agree with. This is about Eugenia being whiny, self-righteous and sharing the mindset that I described in my previous post.
She carries this idea that Linux/GNOME devs owe her something, and often disguises this behind the excuse that they owe their users something, and that she is somehow representative of that user base. None of this is true. They don’t owe her, they don’t owe their users, and she’s not representative of that user base anyway.
In the story I referenced, and in several before, she implies that if the GNOME developers do not listen to her and do what she says, their project will suck.
This is exactly what a lot of these magazine articles and words from the talking heads at Sun, MS, etc. amount to lately. They are trying to bully OSS devs into doing what they want, when they want and how they want.
This is the unfortunate part about Linux gaining popularity. All of the management types come out and want to dictate direction and voice opinions without doing any of the work, contributing any money, or even understanding what it is that they are talking about.
If you don’t like it, get the source and get cracking.
If you can’t do that, find some devs that can and pony up the cash.
I know this isn’t an answer anyone likes to hear at this point. I know: “waaah… this mindset is what’s hurting Linux’s popularity.” I don’t care. Linux dev and GNOME dev are both doing fine without jumping at every whim that some asshole with a magazine or website decides is important.
“but do you want your grandma or dad to do this, or do you want them to double-click on install and be done with it?”
Two points:
1. From where is grandma going to get install.exe to double-click?
2. In Linux, you click on a Package Manager (such as Synaptic or YAST) – not install.exe – to install binaries.
“It’s nearly impossible to take a *binary* commercial application compiled for one Linux distribution and run it on another without *any* issues. … Source compatibility is usually pretty good, but binary compatibility is awful.”
Remember, both Mozilla and OpenOffice.org distribute distribution-independent binaries. The same goes for Java and RealPlayer. Also, a real commercial application, SoftMaker’s TextMaker, is released as a Qt-based statically compiled binary. I have installed all of the above under various versions of Mandrake (8.2 to 10.1) and had no problems. Oh yes, I have just installed the distribution-independent statically compiled binary of Inkscape 0.41 and it works a treat:
http://prdownloads.sourceforge.net/inkscape/inkscape-0.41-1.static….
So it is just not true; binary compatibility is perfectly satisfactory for static builds.
Hey thanks for the link to that inkscape static build. I’ve been having problems getting that going in Gentoo lately, so maybe this will be a short-term fix.
-Mark
So one distro’s packages aren’t compatible with another’s… I guess you go with the distro you like that provides everything you need! So what’s the big deal…
Would I like to see fewer ‘small’ projects/distros and instead more help on the ‘larger’ projects/distros? Sure I would, but I certainly wouldn’t dictate that at all…
Anything else?
So it is just not true; binary compatibility is perfectly satisfactory for static builds.
Wrong. All the examples you cited are “trivial” commercial applications. Much better examples would be things like the Oracle database, or SAP. Those are real, heavy duty applications. Another example would be commercial Linux games. Ask anybody who used to work for Loki Games about the hell they had to go through to get their binaries to work on multiple distributions; go ahead, I dare you. The point is, while it may be *possible* *sometimes* to get a binary that works across multiple distributions, the hell a developer has to go through to get there is inexcusable.
Not only that, static linking is quickly becoming a non-option. I’ve already had arguments with Red Hat in the past about this. I felt it was perfectly reasonable to statically link against only a certain set of libraries, and then dynamically link against the rest. They told me in no uncertain terms that static linking was frowned upon and basically unsupported when mixing dynamic and static linking. Additionally, static linking against libc or libstdc++ was completely unsupported.
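For reference, the mixed linking being described looks roughly like this (a sketch; libfoo/libbar are placeholder names, and whether a given distro’s toolchain blesses it is exactly the dispute above):

# Statically pull in a couple of private libraries, link the rest
# (X11, pthread, libm, libc) dynamically.
gcc -o myapp main.o \
    -Wl,-Bstatic -lfoo -lbar \
    -Wl,-Bdynamic -lX11 -lpthread -lm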
You obviously have no experience in distributing large commercial applications or games for that matter…
Sorry, not my quote. I left you a URL where you can read the entire article (by the head of the Debian Linux team) and reply directly to him.
But it does seem funny that you don’t think there’s a problem, while the head of the Debian development team does. Ummm, whose opinion carries more weight, do you think?
“So it is just not true; binary compatibility is perfectly satisfactory for static builds.”
“Wrong. All the examples you cited are “trivial” commercial applications. Much better examples would be things like the Oracle database, or SAP. Those are real, heavy duty applications. Another example would be commercial Linux games.”
They are not “trivial”; they are (with the exception of Sun Java) just desktop applications, including OOo, which is about the largest, most complicated desktop application for Linux. I don’t dispute that games present special problems. But I am sure I have read about people running Oracle on unsupported distributions. So how about it, anyone out there doing it? How about giving us some details? Or is it just apocryphal?
Oh, and BTW, binary compatibility is not so important for the big enterprise applications, since the vendor can specify supported distributions and the corporate purchaser will happily just use it on a dedicated server. On the other hand, if an ISV is developing a closed binary desktop app for the home, SOHO and/or SMB market, the purchaser will probably already be using a specific distro and won’t be prepared to change for one application.
“I think the guy has valid points. AFAIK companies DID TRY to unite Linux (SuSE and a few others made UnitedLinux 1.0), but it died out as far as I know…”
The problem has nothing to do with Linux. It is related to SCO issues with their infamous “lawsuits”.
Just stop this silly notion of calling every OSS project under the sun Linux!
Gnome != Linux.
RedHat != Linux.
sourceforge.net != Linux.
Linux(tm) is owned by Linus Torvalds, but the only thing he cares about is that BadGuys(tm) don’t misuse it.
I wouldn’t expect binary packages for Win98 to work on BeOS, so why on earth should binaries for Red Hat work on Debian?!?
What those industry folks are afraid of isn’t whether a program will install or not. They just realized that the trademark they invested so much money in isn’t worth anything in reality.
You can’t sell Linux software, because there is no Linux.
As has been pointed out previously in this thread, this doesn’t mean a thing for OSS development. Just ignore them and keep coding.
For users it is kind of bad though; in the long run this will mean that all existing and yet-to-come “Linux versions” of software will be replaced by “Red Hat versions” and “Novell versions”, in effect creating a kind of vendor lock-in.
What does this fuzzy term ‘Linux’ really mean, by the way?
My take
It MUST have a kernel based on Linux.
It SHOULD implement a FHS-like filesystem.
It SHOULD have some UNIXish userland which MAY be other than GNU-based.
IFF it has a GUI it SHOULD be X11 compatible.
This is as far as I dare to go, anything beyond this is just not true…
You can’t sell Linux software, because there is no Linux.
Even though this is a little bit generalized, it’s still the most insightful comment I’ve ever seen on OSNews. Good point!
Wow. You make an on-topic negative comment about Eugenia and your post gets deleted. Good job.
What a great site.
“Wow. You make an on-topic negative comment about Eugenia and your post gets deleted. Good job.”
There’s a difference between making a negative comment and outright flaming. Calling someone a “tool” definitely belongs in the second category, hence the deletion of your post.
The different distributions have become more like different operating systems (like the UNIX family): all closely related, but each a tiny bit different…
But source distribution isn’t always a flawless solution either; think about the jump from g++ 3.3.x to 3.4.x, which broke some apps at the source level (with the stricter template implementation).
Those apps were not totally standards-conformant, but it is not pretty when you need to fix the code yourself just to compile it with your compiler.
And I point it out quite often (and sometimes even say GNU/Linux), but a lot of people in this thread are just arguing semantics to avoid the point, subconsciously or not. The article might not have been well written, and might not even offer valid supporting arguments. But claiming that an issue doesn’t exist because one article is poorly written, and because they misused the name Linux… you’re just sidestepping the issue.
But then, having incompatible dependency trees does not a code incompatibility make. The Debian vs Ubuntu thing isn’t a valid argument here. That’s just Debian complaining (rightfully, I think) that Ubuntu is calling themselves Debian when they’re really not, and the misconception that fosters is fragmenting a previously very strong community.
That’s what makes Portage so good: it’s easy to use and it compiles from source!
Probably a mixture of Portage and Autopackage would do the job quite well and serve everybody’s needs…
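For anyone who hasn’t used it, the Portage workflow really is that short (“foo” is a placeholder package; global USE flags live in /etc/make.conf):

emerge --sync      # refresh the ebuild tree
emerge -pv foo     # preview versions, USE flags and dependencies
emerge foo         # fetch, compile and install from source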
rgrds
Uh… did you people never hear of LSB? We already have all of this working today.
*SIGH*
Read: http://www.linuxbase.org/
Write your software according to the LSB standards and it’s guaranteed to install and run on any LSB-compliant version of Linux. They even have a set of certification and testing suites you can use to certify your application as LSB-compliant. Most popular commercial distros are LSB-compliant: Red Hat, Mandrake, SuSE, Progeny, Novell, Sun Java Desktop… and even Debian Sarge.
The goal of the LSB is to develop and promote a set of binary standards that will increase compatibility among Linux systems (and other similar systems), and enable software applications to run on any conforming system. In addition, the LSB will help coordinate efforts to recruit software vendors to port and write products for such systems.
IBM has even written/published a book how to write your applications: http://www.phptr.com/bookstore/product.asp?isbn=0131456954&rl=1
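A sketch of that workflow with the tools the LSB project ships, lsbcc (a compiler wrapper) and lsbappchk (the conformance checker); exact options may vary, so treat this as illustrative:

# Compile against the LSB environment, then check for symbols or
# libraries that fall outside the spec.
CC=lsbcc ./configure --prefix=/opt/myapp
make
lsbappchk ./myapp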
But unlike, for instance, the BSD or proprietary licenses, any fork of GPL code (which has always been encouraged for competition) must be licensed under the GPL exactly, nothing more and nothing less.
So parallel development can exist, but work and innovation from one fork can always be transferred to the other fork without any legal nightmares.
– Unlike proprietary licenses, which always require close analysis and legal discussion when merging code under differing licenses! Aaagggh! Often too difficult and costly!!!
If writing your own software from scratch, you are the copyright owner and you can release it under any license you choose. It is common to release under multiple licenses, such as the GPL and something else. The GPL version can also be combined freely (liberally) and easily with other GPL’d software.
It’s so simple with the GPL…
Someone still has to merge it, though, and not everything gets merged in every case.