Installing software on Linux. In the world of online minefields, this is the big one. Back in the day, you installed software on Linux by compiling it manually. Time-consuming, but assuming you had a decent knowledge of gcc, make, and maintaining library files, this could actually work. Later on came the package management systems that were supposed to make installing software on Linux a breeze: rpm, dpkg, and so on, and so forth. Since human beings have the innate tendency to assume that everyone else is wrong and only they are right, we are now stuck with 3453495 different Linux package managers. Denis Washington, a Fedora developer, is taking steps to resolve this issue.

“Some time ago,” Washington writes to the Linux Foundation desktop mailing list, “it was discussed on an LSB face-to-face meeting that an API should be developed that allows ISVs to install software packages which integrate into the package manager.” The idea, the Berlin Packaging API, fizzled out, apart from a Wiki page with a rudimentary proposal. Washington decided to take matters into his own hands and has designed and implemented a prototype of this packaging API. He imaginatively calls it the LSB Package API; it uses a simple D-Bus interface and an XML-based package description format.
The implementation currently supports integration into RPM and dpkg; due to its modular nature, support for more package managers could be added later on. I hope this implementation will act as a starting point for resurrecting the Berlin API process. Let us overcome the “Third-party software installation on Linux sucks” problem and strive toward a brave new world of easily distributable Linux software! 😉
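To make this concrete, here is a minimal sketch of what calling such a package API over D-Bus could look like from Python. The bus name, object path, interface, and method below are hypothetical stand-ins, not the actual names from Washington’s prototype; only the dbus-python calls themselves are real:

    # Hypothetical client of an LSB-Package-API-style D-Bus service.
    import dbus

    bus = dbus.SystemBus()
    # Ask the (invented) packaging service for its installer object.
    proxy = bus.get_object('org.example.PackageAPI',
                           '/org/example/PackageAPI/Installer')
    installer = dbus.Interface(
        proxy, dbus_interface='org.example.PackageAPI.Installer')

    # Hand over an XML package description plus the payload location;
    # the back-end (RPM, dpkg, ...) records the result in its own database.
    with open('myapp.xml') as f:
        description = f.read()
    installer.InstallPackage(description, '/tmp/myapp-1.0')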
Some may think this sounds a lot like PackageKit, but Washington is quick to point out this isn’t the case. PackageKit provides an abstraction layer on top of the existing package management systems, giving users a front-end for managing repository-based package systems. “However, it does not address the problem of software distribution itself,” Washington explains; “the repositories and package files are still specific to the packaging system.”
The Berlin API, on the other hand, deals exclusively with providing a package-manager-neutral software distribution method. So the Berlin API is not a replacement for PackageKit, but a complement. In fact, since software installed with the Berlin API is added to the package system’s database, it can be managed (e.g. uninstalled) with PackageKit afterwards – a dream team! 😉
Personally, I have my doubts about these so-called solutions, and not only because they are hard to implement (have fun getting all those noses to point in the same direction). These solutions also do nothing to fix the actual underlying problem that most package managers and software installation systems (Linux, Windows, OS X) struggle with: the complicated nature of installing software, and the inability to give users control over, and insight into, how they manage their own systems.
I wrote down my own preferred way of installing and managing software in an article earlier this year, and I still believe that pretty hefty choices need to be made in order to make the process of managing software as flexible, easy, and powerful as possible – all without limiting the power user, or overwhelming the novice user.
I’m still open to suggestions.
The problem isn’t that good third-party packaging systems don’t exist; it’s that distributions don’t enable them by default. For example, (my own) Zero Install is “in” most distribution repositories (Ubuntu, Debian, Fedora, etc.), but you have to enter a command to install it first.
Anyway, here’s the thread on the LSB packaging list:
https://lists.linux-foundation.org/pipermail/packaging/2008-June/000…
And some more links for things that already exist (* = written by me):
XML formats for describing packages:
http://trac.usefulinc.com/doap
http://klik.atekon.de/wiki/index.php/Klik2
http://0install.net/interface-spec.html (*)
http://luau.sourceforge.net/intro.shtml
Packaging systems that work on top of a distribution’s installer:
http://klik.atekon.de
http://autopackage.org/
http://java.sun.com/products/javawebstart/
http://0install.net/ (*)
Other:
http://osnews.com/story/16956/Decentralised-Installation-Systems (*)
http://www.gnu.org/software/stow/
http://checkinstall.izto.org/
In the form of Autopackage:
http://en.wikipedia.org/wiki/Autopackage
http://www.autopackage.org/
There are also others (see the other posts), but it’s still the same kind of thing that is needed. (Autopackage is really only complicated by the fact that it tries to work around the libc API too.)
Seriously. The distributors need to realize that commercial software vendors (freeware, shareware, companies, etc.) want control over how they package their software, and there needs to be something that accommodates this.
Under Windows you have MSI – which InstallShield, Wise Installer, and everything else work with. That is the same kind of thing the F/OSS world needs too. If a distro wants to use RPM, dpkg, PKG, or ebuilds – fine.
Of course, this all boils down to the question of: who is going to generate the package? F/OSS developers don’t seem to want to do so for their own stuff – while everyone else wants to. So we need a method for both.
It’ll make life a lot easier for companies too.
Companies want their lives easier? Then stop trying to control EVERYTHING; that just doesn’t work in this world.
They need to release their stuff properly. That way, every distribution can package it as they please, plus users can just use a generic release if their distribution doesn’t carry the package.
There’s really no great big mystic problem. It’s just that most people don’t understand a flying f–k what they are talking about, and make lame excuses because their software stinks.
That’s exactly the problem with requiring every company to release everything so that each individual distribution can carry it in their own format, on their own terms.
Please take your own advice here.
Fact is – it is far simpler to have a standard API that someone can write an installer to, and for software developers to provide an installation package for (e.g. MSI, InstallShield, Wise Installer, etc.), than it is to get everyone to do things the way *you* want them done, and for each individual application to be turned over to the great distributors to send out to the users.
Fact is – non-F/OSS applications will never get picked up that way. Yet there is a great demand for them. (Yes, I prefer F/OSS when available.)
Fact is – not every distribution is going to package every piece of software under the sun. They can’t. They don’t have the time to.
Fact is – every distribution wants to do things slightly differently. So standardizing on a single package manager isn’t going to work, and telling a company to produce packages for 10 different package managers is just insane.
So please, get past the bull and realize that for a major desktop, supporting commercial applications is a must, and that to support them the package systems (package managers, etc.) have to get beyond the idea of controlling everything down to the package format. All they need to control is what is installed, and that does not have to be tied to the package format.
To use MSI as another example – I could use Microsoft’s MSI packaging implementations (WiX, VS Installer, etc.), go out and get other solutions (InstallShield, Wise Installer, NSIS, etc.), or just write myself an EXE that uses the MSI API to perform the back-end tasks. MSI itself doesn’t care.
RPM, PKG, DPKG, etc. can all learn a lot from MS’s MSI system. MS could learn a lot from them too; but MS does get what the commercial interests are – and not everyone wants to open-source their code base. (Not everyone can, even if they wanted to!)
One solution, for example: take an MSI-type installer, make the interface tell the system explicitly what files are installed and where, so they can be tracked in the package system’s database; then provide a compliance program that tests to make sure that no other files are installed elsewhere. It’ll achieve the same result, and allow for a far better system overall.
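As a rough illustration of that proposal, here is a minimal sketch of a “register what you installed, where” interface. The schema and function name are invented for illustration, with sqlite3 standing in for the real package database:

    # Sketch: an installer back-end records installed files in a package
    # database so any front-end can later list or remove them.
    import sqlite3

    def register_package(db_path, name, version, files):
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS installed_files
                        (package TEXT, version TEXT, path TEXT)""")
        conn.executemany("INSERT INTO installed_files VALUES (?, ?, ?)",
                         [(name, version, path) for path in files])
        conn.commit()
        conn.close()

    # An MSI-style installer would call this after copying its payload,
    # telling the system exactly what went where (example paths):
    register_package("/var/lib/pkgdb.sqlite", "someapp", "1.0",
                     ["/opt/someapp/bin/someapp",
                      "/opt/someapp/lib/libprivate.so"])

A compliance checker would then simply diff the filesystem against this table.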
The Winblows installer way is _STUPID_. Developers of applications must be responsible only for _THEIR_ little piece of software; they shouldn’t have any say whatsoever in how it’s installed.
Solution:
Provide 2 types of packages (hell, only 1 if they are morons).
1: source
2: self-contained directory containing everything.
This is really easy. With type number 1, the user and/or distribution can simply install/package however the f*** they please.
With type 2, if the provider is a moron and makes crappily closed crapware, the distribution can still just repackage it if they please; or, if the user has a distribution which doesn’t package this application, just download the damned tarball, extract it, and RUN the software. No crappy MSI, no weird third-party installers doing weird, weird stuff – just run.
How hard can it f–king be? It’s not hard at all; it’s just bozos not understanding that there IS NO ISSUE.
Actually, that only works for a small subset of programs. Consider the impact of shared libraries: if the program is not set up to pick up libraries from wherever the distribution wants to have them (and there are programs out there like that – like it or not), then your whole solution breaks down for commercial applications – the biggest offenders in this respect.
A lot of companies do not want others using their libraries. They write the libraries for their own use. They don’t provide headers for those libraries. Those libraries do not belong in the /usr/lib or /usr/local/lib directories as a result. They belong with the application; yet a distributor designs things to be put in common areas (/usr) – they don’t want things in other places (e.g. /usr/local/someapp/).
Get over it already.
You may think it’s broken; but it’s a fact that has to be dealt with for commercial applications, whether it’s an Adobe application or some random piece of shareware. That’s the reality, and not one that distributions are going to fix either.
As another example – distributions break things too. See Aaron Seigo’s recent blog (http://tinyurl.com/54upvd) about distributions packaging software in a way that does not help developers. If the developers controlled it, it would be packaged as needed. (As stated before, F/OSS developers typically don’t want to package anything.)
Honestly, there are two different communities being talked about here: F/OSS and commercial. They both have different requirements and expectations for how to package software. If we (the F/OSS community) want commercial applications, then we have to provide some way for them to package software their way too, and your “solution” does not do that – it’s a ‘my way or the highway’ approach, which is not going to win over commercial software vendors.
Like it or not, that’s the reality.
You did not understand what I wrote.
How is MSI stupid? On the Windows platform you can script an installer for any package, and deploy it across any number of machines, customized according to Group Policy.
Deploying to a home desktop and deploying across a big network are wildly different things, and aren’t really comparable.
To create more advanced package management tools, you need a standard format to work against. The lack of a standard format means the inability to build robust tools for large scale deployments.
There are distro-specific tools that are up to the task, but that means you are stuck with the package selection the distro provides you (or take on the burden of packaging yourself). It would be a far better solution if the project itself could take care of packaging (since they know better than anyone else what is supposed to be going on), and have that seamlessly integrate into whatever package manager a distro happens to be using.
MSI is just another package manager that adds to the confusion. I don’t see Microsoft doing anything to make MSI compatible with Autopackage, dpkg, rpm, or any other package manager. If you package your software with MSI, it will only install on Windows. That’s no better than making an rpm that will install only on Mandriva, or a deb that will only install on Debian.
Of course, there are more people running Windows than Mandriva and Debian put together, so the problem doesn’t show on Windows, and it looks like a good way to distribute software – but really it’s just the package manager of Windows, and it’s no better than rpm or deb. I can’t think of anything that can be done with MSI that can’t be done with rpm, but I can think of a lot of stuff that can be done with rpm but not with MSI, and even more with urpmi, apt-get, and yum.
I mean, the problem is not with rpm or deb. The problem is that 1000 people have MSI installed where 1 person has rpm installed and 1 person has dpkg installed. Add another API on top of rpm and dpkg, and maybe 1 person will have that installed.
So, if you are a software vendor, what will you use? The new API (1 user), rpm (1 user), deb (1 user), or MSI (1000 users)?
Anyway, MSI is available on many distros (including Mandriva and Debian) via Wine. Wine implements MSI.
MSI itself is not really a package manager by any means. Rather, it is an API that software can use to tell the system that it is installed, where, and how to uninstall it. It is certainly a key component of a package manager. MSI does provide a “package format”, but you need not use that format to use the MSI API, though most do. MSI has certainly brought Windows closer to providing a package manager – that is certain – as nearly every installer program for Windows has switched to using it.
However, even some Microsoft tools still do not use MSI as the primary source of their installer (so far as I can tell) and use a mix of MSI packages and non-MSI packages; yet the whole thing is registered through MSI with Windows as being installed.
The difference between MSI and a package manager is that MSI is focused on how one piece of software installs into the system. There is nothing in it for dependencies or anything else of that nature. There is no repository behind MSI from which to grab all kinds of stuff. And perhaps that is what you are thinking can be done with RPM/DEB/etc. and not with MSI – but that is exactly because MSI is not a package manager; it is a package installer, nothing more – and the two are quite different.
Perhaps that’s why you’re so confused – and think there is so much confusion – with respect to Windows.
Also – you don’t see the RPM/DEB/etc guys porting their stuff to Windows either. You do see F/OSS software ported to Windows, and then the projects usually end up providing a nice MSI installer for it, which works very well.
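Incidentally, the “MSI is an API, not a package manager” point is easy to demonstrate: Windows exposes the installer database through plain C functions in msi.dll that any program can call. A Windows-only sketch using Python’s ctypes – MsiEnumProductsW and MsiGetProductInfoW are real msi.dll entry points, though error handling is pared down here:

    # List products registered with Windows Installer via the MSI API.
    import ctypes

    msi = ctypes.windll.msi
    ERROR_SUCCESS = 0

    index = 0
    guid = ctypes.create_unicode_buffer(39)  # product codes are GUIDs
    while msi.MsiEnumProductsW(index, guid) == ERROR_SUCCESS:
        size = ctypes.c_ulong(256)
        name = ctypes.create_unicode_buffer(size.value)
        msi.MsiGetProductInfoW(guid, u"ProductName", name,
                               ctypes.byref(size))
        print(guid.value, name.value)
        index += 1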
The point is that MSI is just another way to install software. A layer on top of the package manager is what PackageKit or CNR are.
It may seem like a good idea at first: let’s unify all package managers under one API that everybody can use. It is not actually such a good idea, because you just create another library that needs to be installed everywhere to replace everything – but only SOME people will ever use it, never all, and your API will only add to the confusion. What you need is not to create another API, but to collaborate with those that already exist. rpm is fine as such, MSI is fine as such, and dpkg is fine as such. The problem is that they all exist separately.
The confusion is not MSI. The confusion is that you have one way to package software on Windows, one way on Mandriva, and one way on Debian – but a package created with MSI won’t work if you don’t have the libraries installed, a package created for rpm won’t work if rpm is not installed, and a deb package won’t work without dpkg.
By default, Mandriva has neither MSI nor dpkg, Windows has neither rpm nor dpkg, and Debian doesn’t have MSI. Fortunately, there is alien to convert packages between rpm and deb, and Wine to use MSI on Debian and Mandriva. This is the way to go: instead of adding just another layer on top of the package manager (a so-called package installer), which just adds to the confusion, we need to reduce the number of package managers and build bridges between them.
In other words, creating something like MSI, but not MSI, on Mandriva or Debian or SUSE just means that software using the MSI-like API will only install where the MSI-like API is available – that is to say, nowhere, or maybe on one or two distros. On the other hand, rpm, dpkg, and MSI are already available on many machines. Porting rpm to Windows would be a far better idea. There are already too many package managers and too much confusion.
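For the record, the alien bridge mentioned above is real and trivial to automate: ‘alien -r’ converts a .deb into an .rpm. A small sketch (the file name is an example, and alien may want root for correct file ownership):

    # Convert a .deb to an .rpm using the real 'alien' tool.
    import subprocess

    def deb_to_rpm(deb_path):
        # 'alien -r' (alias '--to-rpm') writes an .rpm to the current dir.
        subprocess.check_call(["alien", "-r", deb_path])

    deb_to_rpm("someapp_1.0-1_i386.deb")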
This is one of the weaknesses of massively distributed systems like open source: sometimes you need someone to say “do it that way and don’t argue with me”, and you can’t really do that, because there is no one really in charge.
My money is on Autopackage as the thing that will save us all too. I had honestly hoped that by now it would have gained more traction than it has, though.
Autopackage is out of the question because it enforces libc usage from a C-like program (for compiled apps). It doesn’t work without certain “libc path” hacks, which are impossible if your binary doesn’t use libc at all (say it does syscalls via another interface).
When will they understand that the solution is not in the package format but in binary compatibility?
Stop ignoring the real problem already.
Since when is there a problem with binary compatibility? The LSB pretty much solved that. Where it failed was on package management, mandating RPM, which could be problematic (though not fatal) for dpkg-based systems.
I still say the solution for third-party software is to simply treat /opt like “C:\Program Files”.
Since… forever?
Try compiling a C++ app on one Linux distro and running it on another. It doesn’t work, unless the distros in question track the same core Debian release or something.
Try running an old Linux app – say, an old commercial game from Loki like SimCity 3000 or Civ: CTP.
It doesn’t work.
Binary compatibility just doesn’t exist in the Linux world.
For all the blather about open standards, the Linux community doesn’t give a crap about them when it comes to its own core systems, e.g. the kernel and binary APIs.
Ha. Try compiling a C++ app on Windows and deploying it to another machine. It doesn’t work – unless, of course, you bundle all the runtime libraries for the compiler you’re using into your installer. Guess what: the same thing works on Linux. Include all your dependencies with the app and binary compatibility is no problem.
The reason binary compatibility is harder on Linux is because Linux apps actually try to share libraries, instead of each app bundling everything they need and then loading X copies of it in memory. Bundling everything is easier, but not very efficient.
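In practice, “include all your dependencies” usually means shipping your private libraries next to the binary and starting the app through a small launcher. A minimal sketch of such a launcher – the directory layout and binary name are made up for illustration:

    # Launcher: point the dynamic linker at the app's bundled libraries,
    # then replace this process with the real binary.
    import os
    import sys

    app_dir = os.path.dirname(os.path.abspath(__file__))
    lib_dir = os.path.join(app_dir, "lib")

    # Prepend the bundled library directory so it wins over system copies.
    old = os.environ.get("LD_LIBRARY_PATH", "")
    os.environ["LD_LIBRARY_PATH"] = lib_dir + (":" + old if old else "")

    os.execv(os.path.join(app_dir, "bin", "someapp"),
             ["someapp"] + sys.argv[1:])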
No, no it doesn’t.
If you compile on a newer distro and put it on an older distro, even with all libraries, it won’t work because of [gay]libc.
The problem is that you can’t distribute libc itself (because the kernel API and ABI change), but your apps/libs depend on a specific libc ABI too (“GLIBC_2.XX required” errors).
So… no, you don’t really know what you’re talking about. If you want a nice eyesore of a compatibility hack, have a look at the statfs.h file in Linux. See that OFFSET define? Know what that can cause? I do.
Forward compatibility is different from backward compatibility. The low-level interfaces (glibc and especially the kernel) do not change very often at all, and distros generally ship older versions for a long time. You can’t expect to build an app on a new distro and distribute it to something years out of date. However, if you build it on the old distro, it will likely run on the new one as well. That’s pretty much standard practice for software development.
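Those “GLIBC_2.XX required” failures can at least be checked for before shipping: the version tags a binary needs are recorded in its ELF dynamic symbols, and the running glibc reports its own version. A rough pre-flight check – grepping objdump output is a heuristic, not a robust ELF parser:

    # Compare the glibc symbol versions a binary requires against the
    # glibc version actually present on this system.
    import ctypes
    import re
    import subprocess
    import sys

    def required_glibc(binary):
        out = subprocess.check_output(["objdump", "-T", binary]).decode()
        versions = re.findall(r"GLIBC_([0-9.]+)", out)
        key = lambda v: [int(x) for x in v.strip(".").split(".")]
        return max(versions, key=key) if versions else None

    def running_glibc():
        libc = ctypes.CDLL("libc.so.6")  # gnu_get_libc_version() is real
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        return libc.gnu_get_libc_version().decode()

    print("binary needs glibc %s; system has glibc %s"
          % (required_glibc(sys.argv[1]), running_glibc()))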
>> [gay]libc
Yeah real mature.
Well, Mozilla Firefox, Adobe Acrobat, Macromedia Flash, Unreal Tournament 2004, Doom 3, Matlab, and many more manage to be distributed as single binaries that run anywhere.
I don’t know what you’re doing but as long as you’re developing for at least the 2.4 kernel and a reasonably recent glibc (older versions should be available in your distro’s repository) you shouldn’t be having trouble.
Well, it does; however, vendors seem to prefer using distribution- and version-specific ABIs over the standard one.
Unfortunate, but it’s their decision. If they prefer rebuilding and repackaging, e.g. for very tight integration into the system, then that’s what they are going to do.
Since not everyone runs an x86 32-bit processor.
For instance, I had a lab where I needed to target Itanium, 64-bit x86, 32-bit x86, Sparc, and more. Simple binary compatibility would not work to target those systems.
Nice thing is that the LSB actually specifies the interfaces for a couple of architectures:
LSB 3.2 (IA32)
LSB 3.2 (IA64)
LSB 3.2 (PPC32)
LSB 3.2 (PPC64)
LSB 3.2 (S390)
LSB 3.2 (S390X)
LSB 3.2 (AMD64)
http://www.linuxfoundation.org/en/Specifications
Notice how that is version 3.2, which was only released recently (in the last year). It still doesn’t solve the problem, though, as distros actually have to support all of those architectures too, not just a few of them as is the case now.
Only it’s not their decision to make. Honestly, if I wanted third party developers to treat my computer like it was their own back yard, I’d use Windows.
“Since human beings have the innate tendency to assume that everyone else is wrong and only they are right, we are now stuck with 3453495 different Linux package managers.”
Is it the week of sensationalist titles and summaries? There are two package formats used by virtually all major distributions: RPM and Debian’s .deb. Sure, a Fedora RPM often cannot be used on SUSE, but that’s primarily a system layout/library issue.
Actually, now that the vast majority seems to use Debian/Ubuntu, Fedora/RHEL/CentOS, openSUSE/SLED/SLES, and maybe Mandriva, it’s not really that bad. I think the primary possible improvement is to work on better unifying source packages/specfiles and associated files.
“hyperbole (hī-pûr′bə-lē)
n.
A figure of speech in which exaggeration is used for emphasis or effect, as in I could sleep for a year or This book weighs a ton.”
Sensationalism
Making a bigger deal about something (exaggerating) than is needed, in order to increase the audience’s interest.
Get over it, dude. I use hyperbole on OSNews all the time, and if you can’t handle it, that’s your problem, not mine. Now stop trying to hijack this thread because you don’t understand basic rhetorical instruments.
Knowing a “rhetorical instrument” and knowing when and how to employ it are two very different things. While I don’t doubt your knowledge of the former, I have to agree with the original poster in questioning your skill in the latter. Why not simply accept that the text in question wasn’t as well presented as it could have been, and that the criticism may have some validity?
This thread bores me to death.
“Back in the day, you installed software on Linux by compiling it manually.”
Assuming “make install” is more manual than “apt-get install”, then yes.
“…assuming you had a decent knowledge of gcc, make…this could actually work.”
Let’s bend the words around and journalistic style be damned. While building from source isn’t a one-click effort, it’s not that cumbersome, and it demands zero understanding of gcc, make, or libraries. In fact, it demands nothing more than being able to read and follow basic instructions – the *NIX philosophy being “don’t treat your users like apes”.
“…package management systems that were supposed to make installing software on Linux a breeze”.
I believe “did” should be the actual verb rather than “were supposed”, as anyone who has used Synaptic/Aptitude/Yum/etc. would testify. In a world where a user can change his desktop environment from KDE to GNOME and back while installing Apache/MySQL in no more than a few mouse clicks, how much “breezier” should software installation get?
“rpm, dpkg, and so on, and so forth.”
Actually, just those.
While not an exact measurement, if you check Distrowatch.com’s top 10 list, you’ll find that all of them use either RPM or dpkg. The other two major formats are Slackware’s “tgz” and source packages (e.g. Gentoo). The writer could have mentioned that the big issue is that the implementation of the formats is different in each distro, or that the packages are compiled differently, or any of the other issues that exist, rather than “so on…” – but that doesn’t sound as “big”. This is similar to saying “there is so much to know about light switches! Turn it on, and off, and so on and so forth!” If you don’t know, don’t exaggerate.
“Since human beings have the innate tendency to assume that everyone else is wrong and only they are right”
Those pesky humans and their innovations! Jonathan Swift would’ve been proud of you.
“we are now stuck with 3453495 different Linux package managers.” Exaggerate much?
Here’s the deal. There are several hundred GNU/Linux distros in the world. Most users (I’d wager about 90%), however, use about 10 of those. Those 10 distros, as I mentioned, have about 3-4 different package management systems, with varying levels of compatibility between them.
Those levels can go anywhere from “mostly compatible” (say, Knoppix and Debian), to “somewhat compatible” (Ubuntu/Debian), to “mostly not” (SUSE/Red Hat), to “not compatible” (RPM/DEB). Despite that, since all of these are basically compiled versions of the same source code, there are tools that can convert between the different formats. Even this situation is orders of magnitude better than, say, Mac and Windows, or other systems.
However, as each distro represents its founders’ idea of how they believe a GNU/Linux OS should function, incompatibilities exist and, I’m tempted to say, will always exist. To be compatible, everyone must first abide by the same rules of system hierarchy – where each file should go (/usr/bin, /usr/local, etc.) – use the same versions of each compiler/library, and agree on whether dependencies should be immediately resolved, suggested, recommended, or else. If your distro creator wants to use the latest Qt/GTK/other libraries, wants to use a different (older/newer) compiler, or wants to use a source packaging system like Gentoo does, that will break compatibility and won’t “play nice” with what “the others” want.
Free Software, which is the foundation on which GNU/Linux is built, is based on a simple idea: you are given the code, and can do whatever you want with it, to satisfy your needs and demands. If you are not satisfied with others’ solutions, you are free to create your own. Same with package management. Creating APIs, XML formats, and any other version of “ground rules” will only work as long as developers want to use them, and “working nicely with others” just isn’t what most distro creators have in mind – rather, creating the best solution they can, according to what *they* think the best solution is. And “playing nice” just isn’t compatible with that.
I just wanted to get that straight: contrary to the information in the article, I am _not_ a Fedora developer. In fact, I use and develop on Ubuntu. Anyway, that’s all I wanted to say here.
If you are interested in the discussion and have something to say about it, please make your voice heard on the LSB packaging list (but don’t forget to look at the archive first; your points may have already been mentioned). Constructive criticism is more than welcome!
I usually don’t want to be the one sending out stop energy, but TFA is so ridiculously crack-inspired that one has to wonder if those involved are on the same planet as we are.
This seriously reminds me of Douglas Adams and The Hitchhiker’s Guide to the Galaxy: a planet (a giant computer run by mice) was built to find the answer to Life, the Universe, and Everything. After eons of calculation an answer was found, but no one could remember what the question was, so another planet was built to find the question that matched the answer found by the first planet. Unfortunately, immediately prior to the successful completion of this project, that planet was destroyed to make way for a hyperspace bypass.
Just a couple of notes in passing:
1) Differences in package management, repo structures, and the tools involved in installing and removing software are the primary differentiators of Linux distributions. If this “problem” were solved, there would be remarkably little left to differentiate the distributions. And the only people interested in having such a “solution” are the ones who already think that the LSB has provided the answer to Life, the Universe, and Everything.
2) On a superficial level there are ample opportunities for distros to reach practical consensus. XDG has been achieving this and making some nice progress for some time. These solutions do not try to force distros to neuter themselves, but rather to work together where there is practical benefit for all involved.
3) There will never be a single unitary package management system for Linux. Each of the various package management systems has advantages and disadvantages, each has its own problems, and each enables certain things which other systems do not. This is not merely chaos or mayhem – there is method to this madness: the organizational, social, and economic structures of the various distributions are mirrored in their particular package management systems – in a sense, and a very important sense really, these factors pre-condition each other. In other words, the package management system *is* the distribution, ontologically speaking. The reasons for these differences reach into the raison d’être of the distributions themselves, which ultimately devolves into questions of the viability of the distribution itself, whether socially or economically.
4) Never use force to achieve what people are inclined to do anyway. With each version change in the fundamental libraries which make up the Linux ecosystem, new functionality is exposed and forms the basis for advances in the system itself. The individual distributions strive to take advantage of this infrastructure and utilize it for their own benefit. This means that despite the rather large differences in the package management systems of the various distros, there is a rather high degree of homogeneity in the version numbers of the fundamental core libraries which constitute Linux across the vast majority of distros. The differences in which actual versions are used are primarily a function of time: if the window of comparison is a one-year time frame, this homogeneity is *almost* universal, whereas if one takes six months as the basis, there is a minor degree of diversity. Massive breaks in ABI are seldom, and are surrounded by years where this is not an issue – far less often than Microsoft brings out new versions of its own OS.
5) The moment that any third-party software mandates that my system conform in some non-superficial way to some standard of which the distribution I am using is not part, is the moment that that software dictates which distribution I must use in order to have the privilege of using that software. This is already the case for software like Oracle: I must use specific distributions which are endorsed by Oracle if I want to use Oracle software. Changing the distro I use for something which costs me many thousands of dollars is probably not unrealistic – but for me and the vast majority of Linux users there is no demand for extremely pricey proprietary software which would justify such a change. Back here, in reality, third-party proprietary apps have to play ball according to our game. This does not mean that a lot cannot be done to make it easier for third-party proprietary apps to integrate well with Linux – and I am all for such steps – but the LSB does not represent Linux; it represents Red Hat and SUSE, and to a lesser extent Debian, and that’s it.
If you, as a third-party proprietary app developer, want wide-scale adoption of your software on Linux, then hire people who will get your software into the repos of the major distributions, and provide tarballs for the rest of us. Make use of the XDG tools available, target a reasonably current core of main libraries (i.e. not gcc 2.95), and target the infrastructure behind KDE or GNOME. If your software is enterprisy, target Red Hat and SUSE (SLED), and perhaps Mandriva if you are feeling liberal – forget about the rest. If your software is for mere mortal beings, target Fedora, openSUSE, and Debian/Ubuntu. If you make your software FOSS, the community will do the work for you – if your software is good enough to convince the community to support it.
If I have failed to address what the author is looking to solve, please help me to see the problem, because I cannot find the problem which is being addressed by this solution.