* Distro compatibility fixes – bugs that showed up on SuSE and Slackware in particular
* Support for optional/recommended dependencies
* A new QuickStart guide for packagers
* Packaging of autopackage itself has been improved, with better support for upgrades and a developer tarball
* Upgraded to a new luau which brings a better XML repository schema
* Supports root only installs now
* A new Qt based frontend, contributed by David Sansome
* Many bugfixes and lots of polishing. In the run up to 1.0, this is the bulk of the work.
And now autopackage is on a feature freeze until 1.0 is released. Although I am a Gentoo user, this project is still very interesting and I hope it catches on.
Central repositories are a strength, not a weakness. Take a look at Debian’s central repositories. There is a giant QA team that makes sure all the packages work nicely together, something the haphazard “each developer packages his app” model can never achieve. Debian’s APT mechanism also allows them to integrate each piece of software into the whole system, so a centralized configuration mechanism (DebConf), can be used to configure any package.
Another important thing to note is that the centralized repository model exists in part for the same reason commercial companies have people whose only job is packaging and installation. Very few OSS projects can keep a dedicated packager around to do Q&A on a dozen platforms. A centralized repository allows a division of labor, with developers specializing in writing software, and packagers specializing in distribution of it.
Yep, and after the recent news about IBM proposing an XML based solution for simplifying software installations across OSes I think there are some nice things in store for us in the future. Just imagine trying to explain to grandchildren how messed up software installations used to be… that would be nice.
Precisely what is messed up about software installation today? It’s just like the internet — a smart middle-man (APT) presents a unified interface to a giant distributed database (repositories). The only thing messed up is that most OSs still haven’t caught on to how to handle software installation properly.
I like the Debian system too. But there are also some well-known problems. Debian tries to include everything and the kitchen sink in their repos, and Debian tries to support more than 10 platforms too. The aim is for the software on all of those platforms to be equally stable when released into the stable and testing repositories. Because of those high goals, new packages tend to become available in the stable and even testing repositories relatively slowly. If you’re ready to wrestle with the potential dependency problems (and others) of the unstable Debian distribution, it is not so big a problem, though.
The whole release cycle of Debian has been very slow because of the Debian goals mentioned above. I hope that the promised new time-based release cycle (one release per year?) will solve that.
Anyway, I do see that this new Autopackage thing could help Debian people too, especially for installing commercial Linux software not natively available in Debian repositories. Autopackage might be a good way in the future for commercial software makers (from games to office software) to avoid trying to support dozens of distros, or only providing packages for one major distro (i.e. Red Hat).
Don’t get me wrong, I do think that there is a place for autopackage (exactly the sort of situations you mentioned). Heck, I don’t even see autopackage needing to get away from the repository model (it seems to have the technical capability of doing something like that). However, I think when the packages you want are available, it’s preferable to have a consistent, integrated repository rather than a hodge-podge of separately-compiled packages.
Sorry, couldn’t let this slide. Disclaimer: I am the autopackage lead developer and also a Wine developer working for CodeWeavers. You’ll see why this is relevant in a moment.
I have a theory, and it’s a theory founded in practical reality working on the front line of Linux software development (on Wine). My theory is that the traditional orthodoxy that packagers know best how to package and developers know best how to develop is crap. I think developers know best how their software should be installed – after all, they wrote it. I think the system of massively duplicating packaging work gives users a subpar experience, and I’m going to explain why I think that.
In July 2004, installing Wine is not difficult. In fact all you have to do is ./configure, make, make install – what could be simpler? Yet, the number of ways packagers screw it up is phenomenal. Let me give you a few examples – there are many, many more:
* Gentoo’s packages are totally broken. They place files in the wrong directories, mess around with the user’s prelink settings months after that stopped being necessary, place binaries in /usr/lib and then don’t add it to the path (with the result that Wine can’t be found), and use wrapper scripts which disable all debug output, making tech support difficult. Browsing their Bugzilla for bugs on Wine is just flat-out depressing – almost every bug is based on misinformation and ignorance.
* Debian’s Wine packages break things arbitrarily (and wrongly) into several packages, with one, “wine-utils”, marked as “optional” despite there being no optional components in Wine – it’s shipped as a single tarball for a reason. Things that may appear optional to the untrained eye – like notepad and regedit – really aren’t optional at all, as Windows programs and third-party scripts depend on them being there.
* The FreeBSD ports simply comment out code that doesn’t work on that platform instead of fixing it.
* Random 3rd party RPMs have contained a litany of mistakes in the past, from placing critical files in totally the wrong place meaning Wine won’t even start, to containing crude hacks long after they became unnecessary.
* Many, many packages contain 3rd party patches – some of which were submitted upstream and rejected as being wrong and damaging but are still applied downstream anyway.
Half of the time when people come to us in #winehq with a problem, I end up saying “Just install from the source”, either because they’re using old builds and their problem was fixed in the latest release, or because the problem they are seeing is caused by bad or broken packaging. The source install, while slow, is at least done by the developers.
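For anyone who hasn’t done the source install being recommended, it boils down to ./configure, make, make install: build the software, then copy it under a prefix. Here is a tiny runnable stand-in (all file names and paths are invented for illustration) showing what the final install step amounts to:

```shell
set -e
# Toy project: one script plus a Makefile with an install target,
# mimicking the copy-under-a-prefix step of "make install".
mkdir -p /tmp/toyproj && cd /tmp/toyproj
printf '#!/bin/sh\necho "hello from toyproj"\n' > hello.sh
printf 'install:\n\tinstall -D -m 755 hello.sh $(DESTDIR)$(PREFIX)/bin/toyhello\n' > Makefile

# Install into a scratch prefix instead of the real system:
make install PREFIX=/tmp/toyprefix
/tmp/toyprefix/bin/toyhello
```

The point of the prefix is exactly the one made above: when the developers’ own install rules are used, files land where the developers intended.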
It’s not just Wine that has these problems – the MonoDevelop lead developer has told me that the Mono packages for many major distributions are similarly broken.
You say “There is a giant QA team that makes sure all the packages work nicely together, something the haphazard ‘each developer packages his app’ model can never achieve”, but my experience flat-out contradicts that. If there is such a giant QA team, how do they ship packages that are so often out of date and that sometimes produce binaries that don’t even start? What are these QA people doing?!
Instead of being a “strength”, the centralized packaging simply creates a support nightmare for the troops on the ground trying to help people use the software that they got from god knows where.
CrossOver is a massively popular product, despite being based almost entirely on free software that is made available on our website in source form. People buy it anyway – what is its secret sauce? A lot of it boils down to smart packaging. Unlike random distro packager X, we know the software inside out, know how to make it integrate with the right software, and do testing both internally and with widespread betas to ensure it works correctly on many different distros.
This is something that any open source project can do – smart testing and betas are not the preserve of the commercial world; many successful volunteer-based projects do the same.
Now I’m not saying all packaging is broken! Of course there are some people who really do understand the software they are packaging and do an excellent job of it, but my experience is this is sadly not the majority of packagers.
Developers do not subcontract usability, or documentation, or artwork to distributions. Why should they do so for packaging? It’s fundamentally illogical.
Not all software should be packaged by its developers. 3rd party binary packages of the kernel, or GNOME, or glibc do not make sense. Distributor packaging does have a place in linking core OS components, but for programs like Inkscape, Wine, Mono, Frozen Bubble and SuperTux it’s just more likely to lead to an infinite rainbow of stupid bugs.
Instead, binary packaging should be done by those who know the software the best – the developers. That is what autopackage makes possible.
wow… it’s actually great to hear a developer speak out against the current packaging system(s).
i have to admit that i love slackware and dread working with debian because i hate debian’s packaging system. there should be an “End APT’s tyranny!” campaign to stop the sorts of things Mike was pointing out. I hate trying to compile something from source on a Debian system (because they don’t have everything) because 1) APT treats me like a terrorist and 2) more often than not the source’s configure script won’t be able to find what it needs even when the right Debian package is installed…
at any rate, i’ll be checking out autopackage monday!
Standards are good. it’s so odd that everyone is happy with the current state of affairs in the install field (“it’s choice”), while with the web it’s war (“crappy internet exploderklfdjdkfj”)… ;/
sorry, by standards I mean a good way to install stuff. But it would be nice if distros put things in the same location – there’s no reason not to, is there?
2) “Alternative” mechanism for tools like “ilasm” so that DotGNU can be installed side by side.
3) Consistent location of .NET assemblies under /usr/share/dotnet, so that DotGNU can use it too.
4) Generation of the GAC (Global Assembly Cache) at library package installation time.
5) Automatic .NET assembly dependency analysis, and a tool to get dependencies right — analogous to Debian’s shared library dependency tool, dpkg-shlibdeps. (The non-Debian-specific part of this is included upstream.)
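For readers unfamiliar with shlibdeps: the information it automates is the list of shared libraries recorded in each ELF binary, which `ldd` prints directly – a .NET analogue would read assembly references instead. A quick look (output varies by system, but a core utility will always pull in the C library):

```shell
# List the shared libraries a binary was linked against; tools like
# dpkg-shlibdeps map each of these back to the package providing it.
ldd /bin/ls
```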
Sure, packaging can cause problems. But let me defend Debian a little more. I cannot speak for other distributions, though.
First, my understanding is that Debian maintainers don’t upload packages that “won’t start”, as Mike said above.
Next, bugs are reported against Debian maintainers through the Debian BTS (Bug Tracking System), not to upstream. Maintainers read them, ask for more information, add information themselves, and, if appropriate (i.e. it’s not a packaging bug), send them upstream. This way upstream gets “better quality” bug reports.
Debian does auto-building on 11 architectures. Take a look at the portability patches for XFree86 and OpenOffice.org that originated from Debian. They are numerous. I don’t think every open source project has the resources to do autobuilding. Sure, some projects can do it, like Mozilla with Tinderbox. But not all.
You’re right; almost every word you say is true. Every packaging system has broken packages, but the broken ones are the minority, NOT the majority, and it is really difficult to maintain thousands of packages without making a mistake. I think that projects like FreeBSD, Gentoo, etc. are doing a good job; maybe it could be better, maybe not. All I want to see (no offence) is Autopackage running without broken packages – I think that is a very hard task.
Maybe I misunderstand, but the point of Autopackage isn’t to argue the merits of Slackware packages vs Debian packages, any more than the point is to argue APT vs RPM. Linux needs an InstallShield clone for software installation. Slackware uses tgz packages, while Debian uses APT. Red Hat, Mandrake & SuSE use RPM, but good luck trying to install a Red Hat RPM on a Mandrake or SuSE system. The reverse is true as well. Why should gaim-0.79 have to be packaged 150 different ways for every single distro that exists? Would it not save us all time if gaim could be packaged once and once only, then installed on ANY distro? This is why Linux needs an InstallShield clone and, unless I misunderstand, this is the problem Autopackage wishes to address.
I currently have to install 2 RPMs with the same functions but different version numbers on my system, because for some reason 2 packages each decided they wanted a specific version. Why the packager did not think to just statically link in that case is beyond me, because there is nothing worse than depending on a specific version of a library, in my books. The library in question is gtkhtml.
I have also managed to install inkscape from autopackage.com on my pc. No problems. It actually downloads the dependencies if it needs them. I think huge packages are ok for apps, which do not necessarily depend on other apps, but that’s just me.
RPMs are good for system files. Autopackage for everything else.
-You are most likely installing an rpm not made for your system.
-statically linking is a huge waste
-statically linking usually requires work, work that is easier for the author to do. Many packagers are not serious programmers.
-when packaging rpms, you can either set a dep or not set one. Packagers try to be as accurate as possible when setting deps, usually depending on the configure script’s requirements.
-Libraries change over time. Requiring a specific version usually means that the program version and the lib version work well together.
-In order for programs like apt or urpmi to work, you need specific deps
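To make the versioned-dependency point concrete, here is what a pinned dependency looks like in a hypothetical RPM .spec fragment (the package name and versions are invented for illustration):

```
Name:     myapp
Version:  1.0
Release:  1
# A versioned dependency, typically mirroring what the configure
# script checked for at build time; apt/urpmi-style resolvers rely
# on exactly this kind of declaration to work.
Requires: gtkhtml >= 1.0, gtkhtml < 2.0
```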
I’m a user, not a programmer. I followed Eugenia here from BeNews. I’ve been waiting for Autopackage since the first day I read about it. Today I downloaded Inkscape, followed the instructions on the Autopackage site, and voila! I have Inkscape on my Xandros system and in the menu. It works! And that’s the purpose – to make it easy for users to install applications they need on Linux. That’s the only way we’re going to get people to switch. Messing with dependencies, broken packages, debs, rpms, config files, etc., is not what users want to do. . .we just want to use!
I agree with Rich Lewine. The Inkscape install went sweet for my brother, who I convinced to try Suse. Unfortunately he is going back to XP. He only browses, listens to music and aim/yahoo IMs, but it was a chore for him to get the software he needed installed and going, even when using the Suse software install (not to mention problems with the software itself). He just wants to download, install and go, and I don’t blame him. I’m a tech, and getting some of this stuff installed, placed in the menus and going was, in some cases, plain ridiculous. He doesn’t give a crap, or want to give a crap, about the shell or having to keep typing in the root password to install stuff, and again I don’t blame him.
Since then I’ve been looking for somewhere to contribute to help with this. AP looks very interesting. I was looking into ZeroInstall as well. I figure if I dedicate some time to creating installers for some packages, like firefox/media players/IM progs, then it would encourage the devs of those packages to continue to use it. And, hopefully, catch on like a California wildfire.
I guess others have actually made my return argument for me but just to reply to your comment “The only thing messed up is that most OSs still haven’t caught on to how to handle software installation properly.”
In this comparison it’s important to realize that, yes, you usually get your internet through one router connected to the rest of the internet; however, that doesn’t mean the rest of the internet, such as the websites, is maintained by the same group that owns and runs that router.
My second argument is that even if the only problem were that most OSes or distributions don’t handle software installation properly, as you suggested, that seems like a large enough problem to warrant calling software installation “messed up”, don’t you agree?
Never used the LQ site before, so who is Mike, so I can look for his “insights”? I went through that thread and could only get to page 5 before my brain melted from the zealot posts. tcaptain kinda ticks me off with his “must use your brain” comments.
… how much time will pass before we see the first ‘Autopackage Only’ distro?
Um, never. Autopackage isn’t meant to replace package managers. RPM, for example, does an excellent job at managing packages; the frustration comes from installing them.
There are plans to integrate Autopackage into RPM, APT, etc. in the post-1.0 releases.
“I think developers know best how their software should be installed – after all, they wrote it.”
This is very often not the case. For a very large number of projects, the developers do not have the time nor the desire to spend large amounts of time fine-tuning a particular installation. Take, for example, something like apache. Do the Apache folks really want to take the time to integrate Apache into DebConf? Probably not. Do users miss out on a lot when they can’t use DebConf to manage their Apache installation? Yes! Debian has a policy of releasing packages that work out of the box. In many cases, this goes way beyond just getting the package installed. This means putting files in global paths, setting up daemons, setting up cron-jobs, etc. This sort of TLC is something developers really don’t have time for, but Debian’s packaging team can focus on.
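As a sketch of the kind of DebConf integration being described – hedged, since the details vary by package, and the package name and question here are invented – a maintainer ships a templates file plus a config script that speaks the debconf protocol:

```
# debian/mypkg.templates
Template: mypkg/port
Type: string
Default: 8080
Description: Port the mypkg daemon should listen on

# debian/mypkg.config – a shell script the packaging system runs,
# sourcing the debconf confmodule to ask the question:
#!/bin/sh
set -e
. /usr/share/debconf/confmodule
db_input medium mypkg/port || true
db_go
```

The answer is stored in the debconf database and reused by the postinst, which is what lets one tool configure any package that opts in.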
“In July 2004, installing Wine is not difficult. In fact all you have to do is ./configure, make, make install – what could be simpler? Yet, the number of ways packagers screw it up is phenomenal.”
I’ve used Debian for a long time, and I must say that WINE is an exception rather than a rule. As a counter-point, I’ll point out that if you do Lisp development on *NIX, Debian’s packages are the gold-standard. The post-install configuration makes life much easier than on a “separate-packaging” platform like Windows, where much manual setup has to be done to get the pieces to play nice. For me, it’s been extremely rare to come across a broken Debian package, while the benefits of packaging/OS integration reveal themselves every time I install software.
“…but my experience flat out contradicts that. If there is such a giant QA team how do they ship packages that are so often out of date and sometimes produce binaries that don’t even start? What are these QA people doing?!”
Again, Debian has an extremely good reputation for quality. In general, I’d say their packages are better than what most developers package themselves (especially in Windows — the land of broken installers). I don’t think you’ll find many people who’ll agree with the statement that Debian is lax on their QA.
“A lot of it boils down to smart packaging. Unlike random distro packager X, we know the software inside out, know how to make it integrate with the right software, and do testing both internally and with widespread betas to ensure it works correctly on many different distros.”
Yes, CrossOver’s installer is very good. On the other hand, Debian’s packages for XFree are also very good. Furthermore, packaging XFree is a lot harder than packaging WINE. Last time I checked, CrossOver ran on a small fraction of the platforms Debian’s XFree packages run on. I’m not saying that developers cannot create good packages for their software (clearly CrossOver is a good example of one); I’m saying that there are lots of other data points that don’t fit your argument.
“This is something that any open source project can do – smart testing and betas are not the preserve of the commercial world, many succesful volunteer based projects do the same.”
While that may be the case, I think you’ll find that a lot of OSS projects lack the resources to *integrate* their software into each environment the user may run in. Sure, you can do one-size-fits-all packages, but then you lose a lot of system-specific features, which is a net loss for the end-user.
“Developers do not subcontract usability, or documentation, or artwork to distributions. Why should they do so for packaging? It’s fundamentally illogical.”
It’s actually quite logical. Developers do not have the full information they need to create packages. It’s the distributors that know their platform best, and it’s the distributors that can best tailor pieces of software to their particular environment. Integration is key, and it’s just too difficult for developers to integrate with a wide variety of external software. In practice, systems that make it a policy for developers to package their software (specifically, Windows), have very little integration within their software system. This “subcontracting” is hardly unheard of in the commercial world. Hardware companies do this sort of thing all the time.
I agree with you in a lot of respects. However, I have experienced broken packages in Mandrake (urpmi) quite a number of times. When it works, it is fantastic, when it doesn’t, what do you do? What if I want the latest version of something because the current version is broken? Do I wait for my distributor to package it, or do I install from source?
Now we can install from autopackage. For apps that don’t place their tentacles everywhere and don’t need lots of OS integration (e.g. GIMP, Kopete, gaim, Inkscape, and other such apps), I think autopackage is a great thing. It can get you out of a hole when your distributor has left you with an old and buggy version of a program, and your only options are to compile from source or wait.
When autopackage integrates into rpm and deb, it’ll be even better. That way add/remove type programs that read the rpm/deb database, will be able to remove autopackage apps as well.
I think everyone here agrees that autopackage fills a void. I’m looking forward to 1.0.
Someone asked about the “safety” of running a binary installer file. First of all, the installer (.package) is a shell script with the tarball within, so it’s not a binary in the full meaning of the word.
But it all comes down to trust. If you can trust a (rpm|deb|tgz|…), why wouldn’t you trust a .package, even if it had a built-in installer? What I’m trying to say is that security doesn’t have anything to do with the package format, but rather with the source of the package.
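The “shell script with the tarball within” design can be demonstrated with a toy self-extracting installer. This only illustrates the general technique, not autopackage’s actual stub; every name and path below is invented for the demo:

```shell
set -e
# Build a toy payload to ship.
mkdir -p /tmp/sfx-demo/payload
echo "hello from the payload" > /tmp/sfx-demo/payload/README
cd /tmp/sfx-demo
tar czf payload.tar.gz payload

# The stub: everything after the __ARCHIVE__ marker is a gzipped tarball.
cat > installer.package <<'EOF'
#!/bin/sh
set -e
line=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$line" "$0" | tar xzf -
echo "installed."
exit 0
__ARCHIVE__
EOF
cat payload.tar.gz >> installer.package
chmod +x installer.package

# "Install" into a scratch directory, as a user would by running the file.
mkdir -p /tmp/sfx-demo/target
cp installer.package /tmp/sfx-demo/target/
cd /tmp/sfx-demo/target && ./installer.package
cat payload/README
```

The `exit 0` before the marker is what keeps the shell from ever trying to parse the appended binary data – which is also why such a file is inspectable with a text editor up to the marker.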
autopackage is part of the future. Distros need to have their own packaging system, and imho apt and urpmi work rather well but can’t do everything.
Autopackage answers the need of developers who wish to provide packages themselves, without having to learn each distro’s system. It’ll be a powerful help for budding projects that don’t yet get support from the main distros. If distros provide better packages, then they’ll be used. But autopackage provides a credible alternative for whenever the distro’s repository is not the answer.
Now, does using autopackage break APT and RPM databases? I suppose it doesn’t, but does someone know for sure?
Sure, like I said, some packagers do a great job, but nobody should be forced to depend on the packager for their distro doing it right. Besides, this is totally missing the point – even if Debian has perfect QA (it doesn’t), why should people have to use Debian to get non-broken software installs? What about all those people who want, e.g., 6-month release cycles, graphical installers and all the other things Debian doesn’t provide?
Even if Debian had no broken packages at all, other distros do and many people want to use other distros.
You give examples of Apache, Mono, etc., but none of these things are exclusive to distro-specific packages. The binfmt_misc interface is the same on all distros, as it’s part of the Linux kernel, not Debian or Red Hat. The EXE/PE switchers for Mono/DotGNU/Wine etc. should be handled in an upstream project, not by the distro. Setting up cron jobs and other things should be done upstream, and if upstream doesn’t care or won’t do it, then somebody else can always step up to fill the void and fork it (or at least fork its upstream packaging).
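For the curious, the binfmt_misc registration referred to above is essentially one line of kernel configuration. This is the commonly documented recipe for handing Windows PE executables to Wine (it requires root, and the Wine path may differ on your system):

```
# Match the "MZ" magic at the start of Windows executables and hand
# them to Wine, via the kernel's binfmt_misc interface (run as root):
echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register
```

Because /proc/sys/fs/binfmt_misc is a kernel interface, the same line works on any distro – which is the point being made.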
IMHO none of these arguments are for distributor packaging; they are instead for people building high-quality packages generically. There’s no reason this has to be done over and over again by people affiliated with distributors – this can all be done upstream. Even the DebConf example is IMHO not a good argument – why does Apache not ship with its *own* configuration tools? Too old skool? Pity, but that doesn’t mean the wheel has to be re-invented time and time again.
Anyway, the Apache example is somewhat academic; autopackage isn’t really targeted at servers, so I’m sure Debian will be maintaining mini-forks of Apache for some time. This is really targeted at the sort of programs we already package – games, 3rd party programs, etc.
No, the Firefox installer is custom. It’d be possible to make a Firefox autopackage, but as you have noticed, they already have a custom GTK2-based installer generated from the same data as the Windows installer, so I think it makes sense for them to keep it for now. In future, if we add RPM/DPKG integration and such, it may make sense, as it’d add features you don’t get with their custom thing, but for now it wouldn’t add much (except maybe better menu compat).
Re: Jeff
I think I only posted a couple of times to that monster thread, I certainly haven’t been arguing ceaselessly there. I do read LQ though and sometimes post as “mhearn”.
On “autopackage only distros” – not going to happen. It’s not meant for that.
When will I be able to download PackageFoo.file, double-click it, have it mount on the desktop, then drag the bundle to /Applications and have it Just Work? I LOVE Linux – this isn’t a troll post; it runs all my servers. But why can’t I just get a simple Mac-like system for installing desktop apps? Everyone tries to invent the new snazzy perfect package manager when all I want is one package manager. EVEN if it sucks. I just want one!
The bundle system takes it a step further with system libraries too (Frameworks in OS X), where you can install frameworks either in ~/Library/Frameworks, or in /Library/Frameworks for global installs. It really is a logical, clean system. It pretty much has all the benefits of Unix without all the poor management.
Open source even has the advantage! They could make a nice GUI front-end to ALL the software, and I wouldn’t even have to go on the web to find the package I want. Yet they are still behind in usability, with package fragmentation.
“when will i be able to download PackageFoo.file, double click it, have it mount on the desktop, then drag the bundle to /Applications and have it Just Work. I LOVE Linux. this isn’t a troll post. It runs all my servers. But why can’t i just get a simple mac like system for installing desktop apps? Everyone tries to invent the new snazzy perfect package manager when all i want is one package manager. EVEN if it sucks. I just want one!”
This kind of application-folders approach works well in generally monolithic systems, to an extent. After that, it falls apart. Just take a look at the number of OS X packages that DON’T do the application-folders type of installation. In Linux this simply won’t work.
It explains where we’re going long term, which is basically a MacOS X style UI but implemented better. Downside is it takes more effort to implement and because we’re only like 3 people working on this, it’ll be a fair old while. Upside is that unlike OS X, it’ll actually work consistently.
Of course, I’m not saying that autopackage doesn’t have a place. I’m saying that high-quality native, well-integrated packages are preferable to high-quality generic ones. And I have to point out that Apache’s configuration tools and DebConf really don’t overlap. DebConf handles configuration that is distro-specific, or configuration that relates more to how the software integrates into the OS rather than how it functions. To get the same effect, Apache would have to have different configuration back-ends for all of their supported platforms. Further, how many application developers are going to write configuration tools that have the same powerful features (e.g. a C API and XML scripting) as DebConf? Apache might, but most smaller software projects won’t. There is also an advantage to the user in being able to configure all your software in one place, instead of having to use a dozen different application-specific interfaces.
PS> Wrt graphical installers: They’re a waste of time. The only thing the installer needs is a GUI front-end to select packages from a repository. The package itself, unless it needs to prompt the user for something (which DebConf has a GUI for anyway), should install totally silently.
OK, I agree with you that, given a choice between equally high-quality native packages and generic packages, you prolly want the native packages. But I stand by my belief that such a choice is unfortunately rare. For Debian users maybe it is more common, but autopackage wasn’t really designed for them anyway.
On graphical installers: yes, autopackages are not interactive. However, Synaptic-style UIs are fundamentally broken from a usability perspective – no UI that contains a listbox with thousands of items can be good. What we want is a MacOS X-style drag’n’drop UI; whether you consider that graphical or not is arguable. I’d say it is.
This is really THE tool Linux desperately needs! Pity that only three devs are working on this great program. Why not join them (speaking for myself, I’m not a programmer or Linux geek, just an ordinary user…)?
I’m going to test the autopackages available on the website ASAP.
I did not find my question answered in the FAQ, so I’ll ask it here: are you going to support proprietary Unices and the BSDs? I did see your criticism of the FreeBSD Ports, but did not read whether that platform is supported as well. What if the person who made the packages for the Linux distribution in question didn’t want to support a specific distribution (various examples available!), or what if the developers of an application with AP support do not want to, or cannot, support a given OS or platform? Does it come down to: “you have the other choice then, and that choice is a good thing too”?
While I think of it: it costs the distributor less time to package stuff up, which means, for the commercial ones, more time for other jobs – but it also means distributions will become more like each other, because basically all the software will (in the end) work on all the distributions.
Anyway, will proprietary Unices be supported, or is that 100% up to the developers of the software? They _do_ get more work because of this, right? What happens if there’s only one developer, and he or she disappears or stops supporting the software? Who’s gonna host the package information then?
Only x86 Linux is supported. Currently there are no plans to support other platforms.
The installation problem is pretty unique to Linux. It would be pointless to support FreeBSD (for example) too since FreeBSD users are already happy with their Ports system.
“What if the person who made the packages for the Linux distribution in question didn’t want to support a specific distribution (various examples available!)”
Then he should not use autopackage. The whole point of autopackage is to make packages that can be easily installed on any distribution.
Why would anybody want to blacklist/ban a specific distribution? And frankly, what’s the point in making packages that refuse to install on a specific distribution? People can still install that app from source if they really want to.
In future we intend to support multiple CPU architectures much better than we do now. Long term plan is to integrate cross compilers so the maintainers can produce x86, PPC, AMD64 packages etc.
I personally prefer Portage, but Autopackage looks like it could be the answer to some of the software installation troubles that plague most distros.
We’ll see though.
* Distro compatibility fixes – bugs that showed up on SuSE and Slackware in particular
* Support for optional/recommended dependencies
* A new QuickStart guide for packagers
* Packaging of autopackage itself has been improved, with better support for upgrades and a developer tarball
* Upgraded to a new luau which brings a better XML repository schema
* Supports root only installs now
* A new Qt based frontend, contributed by David Sansome
* Many bugfixes and lots of polishing. In the run up to 1.0, this is the bulk of the work.
And now autopackage is on a feature freeze until 1.0 is released. Although I am a Gentoo user, this project is still very interesting and I hope it catches on.
This hopefully could solve the central repository problem of most of the current package management systems.
Central repositories are a strength, not a weakness. Take a look at Debian’s central repositories. There is a giant QA team that makes sure all the packages work nicely together, something the haphazard “each developer packages his app” model can never achieve. Debian’s APT mechanism also allows them to integrate each piece of software into the whole system, so a centralized configuration mechanism (DebConf), can be used to configure any package.
Another important thing to note is that the centralized repository model exists in part for the same reason commercial companies have people whose only job is packaging and installation. Very few OSS projects can keep a dedicated packager around to do QA on a dozen platforms. A centralized repository allows a division of labor, with developers specializing in writing software, and packagers specializing in distribution of it.
Yep, and after the recent news about IBM proposing an XML based solution for simplifying software installations across OSes I think there are some nice things in store for us in the future. Just imagine trying to explain to grandchildren how messed up software installations used to be… that would be nice.
will you create a GNUstep based frontend too?
Precisely what is messed up about software installation today? It’s just like the internet — a smart middle-man (APT) presents a unified interface to a giant distributed database (repositories). The only thing messed up is that most OSs still haven’t caught on to how to handle software installation properly.
I like the Debian system too. But there are also some well known problems. Debian tries to include everything and the kitchen sink in their repos, and Debian tries to support more than 10 platforms too. The aim is that the software for all those platforms is equally stable when released into the stable and testing repositories. Because of those high goals, new packages tend to become available in the stable and even testing repositories relatively slowly. If you’re ready to wrestle with the potential dependency and other problems of the unstable Debian distribution, it is not so big a problem, though.
The whole release cycle of Debian has been very slow because of the Debian goals mentioned above. I hope that the promised new time-based release cycle (1 release/year?) will solve that?
Anyway, I do see that this new Autopackage thing could help Debian people too, especially to install commercial Linux software not natively available in Debian repositories. Autopackage might be a good way in the future for commercial software makers (from games to office software) to avoid trying to support dozens of distros, or only providing packages for one major distro (=Red Hat).
Don’t get me wrong, I do think that there is a place for autopackage (exactly the sort of situations you mentioned). Heck, I don’t even see autopackage needing to get away from the repository model (it seems to have the technical capability of doing something like that). However, I think when the packages you want are available, it’s preferable to have a consistent, integrated repository than a hodge-podge of separately-compiled packages.
Sorry, couldn’t let this slide. Disclaimer: I am the autopackage lead developer and also a Wine developer working for CodeWeavers. You’ll see why this is relevant in a moment.
I have a theory, and it’s a theory founded in practical reality working on the front line of Linux software development (on Wine). My theory is that the traditional orthodoxy that packagers know best how to package and developers know best how to develop is crap. I think developers know best how their software should be installed – after all, they wrote it. I think the system of massively duplicating packaging work gives users a subpar experience, and I’m going to explain why I think that.
In July 2004, installing Wine is not difficult. In fact all you have to do is ./configure, make, make install – what could be simpler? Yet, the number of ways packagers screw it up is phenomenal. Let me give you a few examples – there are many, many more:
* Gentoo’s packages are totally broken. They place files in the wrong directories, mess around with the user’s prelink settings months after that became unnecessary, place binaries in /usr/lib and then don’t add it to the path (with the result that Wine can’t be found), and use wrapper scripts which disable all debug output, making tech support difficult. Browsing their bugzilla for bugs on Wine is just flat out depressing – almost every bug is based on misinformation and ignorance.
* Debian’s Wine packages break things arbitrarily (and wrongly) into several packages, with one, “wine-utils”, marked as “optional” despite there being no optional components in Wine – it’s shipped as a single tarball for a reason. Things that may appear optional to the untrained eye – like notepad and regedit – really aren’t optional at all, as Windows programs and third party scripts depend on them being there.
* The FreeBSD ports simply comment out code that doesn’t work on that platform instead of fixing it.
* Random 3rd party RPMs have contained a litany of mistakes in the past, from placing critical files in totally the wrong place meaning Wine won’t even start, to containing crude hacks long after they became unnecessary.
Many, many packages contain 3rd party patches – some of which were submitted upstream and rejected as being wrong and damaging but are still applied downstream anyway.
Half of the time when people come to us in #winehq with a problem, I end up saying “Just install from the source”, either because they’re using old builds and their problem was fixed in the latest release, or because the problem they are seeing is caused by bad or broken packaging. The source install, while slow, is at least done by the developers.
It’s not just Wine that has these problems – the MonoDevelop lead developer has told me that the Mono packages for many major distributions are similarly broken.
You say “There is a giant QA team that makes sure all the packages work nicely together, something the haphazard ‘each developer packages his app’ model can never achieve”, but my experience flat out contradicts that. If there is such a giant QA team, how do they ship packages that are so often out of date and sometimes produce binaries that don’t even start? What are these QA people doing?!
Instead of being a “strength”, the centralized packaging simply creates a support nightmare for the troops on the ground trying to help people use the software that they got from god knows where.
CrossOver is a massively popular product, despite being based almost entirely on free software that is made available on our website in source form. People buy it anyway – what is its secret sauce? A lot of it boils down to smart packaging. Unlike random distro packager X, we know the software inside out, know how to make it integrate with the right software, and do testing both internally and with widespread betas to ensure it works correctly on many different distros.
This is something that any open source project can do – smart testing and betas are not the preserve of the commercial world; many successful volunteer-based projects do the same.
Now I’m not saying all packaging is broken! Of course there are some people who really do understand the software they are packaging and do an excellent job of it, but my experience is this is sadly not the majority of packagers.
Developers do not subcontract usability, or documentation, or artwork to distributions. Why should they do so for packaging? It’s fundamentally illogical.
Not all software should be packaged by its developers. 3rd party binary packages of the kernel, or GNOME, or glibc do not make sense. Distributor packaging does have a place in linking core OS components, but for programs like Inkscape, Wine, Mono, Frozen Bubble and SuperTux it’s just more likely to lead to an infinite rainbow of stupid bugs.
Instead, binary packaging should be done by those who know the software the best – the developers. That is what autopackage makes possible.
Autopackage rocks and I’m looking forward to the day most Linux apps are distributed as Autopackages.
Great job!
wow… it’s actually great to hear a developer speak out against the current packaging system(s).
i have to admit that i love slackware and dread working with debian because i hate debian’s packaging system. there should be an “End APT’s tyranny!” campaign to stop the sorts of things Mike was pointing out. I hate trying to compile something from source on a Debian system (because they don’t have everything) because 1) APT treats me like a terrorist and 2) more often than not the source’s configure script won’t be able to find what it needs even when the right Debian package is installed…
at any rate, i’ll be checking out autopackage monday!
Standards are good. It’s so odd that everyone is happy with the current state of affairs in the install field (“it’s choice”), yet with the web it’s war (“crappy internet exploderklfdjdkfj”)… ;/
Sorry, by standards I mean a good way to install stuff. But it would be nice if distros put things in the same location; there’s no reason not to, is there?
“I hate trying to compile something from source on a Debian system (because they don’t have everything)…”
And the notion that slackware has “everything” gives me a good laugh.
“1) APT treats me like a terrorist”
uh huh..
“and 2) More or often than not the source’s configure script won’t be able to find what it needs even when the right Debian package is installed…”
-dev packages
-pkg-config
-/etc/ld.so.conf
No doubt you had to deal w/ the last two issues on slackware.
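For what it’s worth, the -dev and pkg-config points above are exactly what a configure script checks for. A minimal sketch of that check, using glib-2.0 purely as an example module name:

```shell
# A configure script "finding what it needs" boils down to queries like
# this; they only succeed once the matching -dev package (which ships the
# headers and the .pc metadata file) is installed.
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists glib-2.0 2>/dev/null; then
    msg="glib-2.0 found: cflags=$(pkg-config --cflags glib-2.0)"
else
    msg="glib-2.0 development files missing: install the -dev package"
fi
echo "$msg"
```

Installing the runtime library package alone is not enough – without the -dev package, the `--exists` query fails and configure bails out.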
I cannot speak for Wine, but I think I can comment on Debian Mono packaging. It adds (or plans to add):
1) Kernel BINFMT_MISC support, generic .NET executable invocation wrapper.
2) “Alternative” mechanism for tools like “ilasm” so that DotGNU can be installed side by side.
3) Consistent location of .NET assemblies under /usr/share/dotnet, so that DotGNU can use it too.
4) Generation of the GAC (Global Assembly Cache) at library package installation time.
5) Automatic .NET assembly dependency analysis, and tool to get dependency right — analogous to Debian’s shared library dependency tool, shlibdeps. (Non-Debian specific part of this is included upstream.)
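Item 1 in the list above can be sketched roughly as follows; the wrapper path and the registration line are illustrative only (actually registering needs root and a mounted binfmt_misc filesystem, and the real Debian packaging may differ), so this sketch just builds the two strings:

```shell
# Hypothetical generic wrapper of the kind item 1 describes: hand any
# .NET executable straight to the Mono runtime.
wrapper='#!/bin/sh
exec /usr/bin/mono "$@"'

# binfmt_misc registration string: entry name, magic-match type (M),
# the "MZ" magic bytes from the PE header, and the interpreter the
# kernel should invoke for matching files.
binfmt=':cli:M::MZ::/usr/bin/cli-wrapper:'

printf '%s\n' "$wrapper"
echo "register with: echo '$binfmt' > /proc/sys/fs/binfmt_misc/register"
```

Once registered, a user can run `./Hello.exe` directly and the kernel hands it to the wrapper, which is the “generic .NET executable invocation” the packaging provides.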
Sure, packaging can cause problems. But let me defend Debian a little more. I cannot speak for other distributions, though.
First, my understanding is that Debian maintainers don’t upload packages that “won’t start”, as Mike said above.
Next, bugs are reported against Debian maintainers through the Debian BTS (Bug Tracking System), not to upstream. Maintainers read them, ask for more information, add information themselves, and if appropriate, i.e. if it is not a packaging bug, send it upstream. This way upstream gets a “better quality” bug report.
Debian does auto-building on 11 architectures. Take a look at the portability patches for XFree86 and OpenOffice.org that originated from Debian. They are numerous. I don’t think every open source project has the resources to do autobuilding. Sure, some projects can, like Mozilla’s Tinderbox. But not all.
You’re right, almost every word you say is true. Every packaging system has broken packages, but the broken ones are the minority, NOT the majority, and it is really difficult to maintain thousands of packages without making a mistake. I think that projects like FreeBSD, Gentoo, etc., are doing a good job; maybe it could be better, maybe not. All I want to see (NO offence) is Autopackage running without broken packages, and I think that is a very hard task.
Maybe I misunderstand, but the point of Autopackage isn’t to argue the merits of Slackware packages vs Debian packages, any more than the point is to argue apt vs rpm. Linux needs an InstallShield clone for software installation. Slack uses tgz, while Debian uses apt. Red Hat, Mandrake, & SuSE use rpm, but good luck trying to install a Red Hat rpm on a Mandrake or SuSE system. The reverse is true as well. Why should gaim-0.79 have to be packaged 150 different ways for every single distro that exists? Would it not save us all time if gaim could be packaged once and only once, then installed on ANY distro? This is why Linux needs an InstallShield clone, and unless I misunderstand, this is the problem Autopackage wishes to address.
I currently have to install 2 rpms with the same functions but different version numbers on my system, because for some reason 2 packages each decided they wanted a specific version. Why the packager did not just statically link in that case is beyond me, because there is nothing worse than depending on a specific version of a library, in my books. The library in question is gtkhtml.
I have also managed to install inkscape from autopackage.com on my pc. No problems. It actually downloads the dependencies if it needs them. I think huge packages are ok for apps, which do not necessarily depend on other apps, but that’s just me.
RPMs are good for system files. Autopackage for everything else.
-You are most likely installing an rpm not made for your system.
-Statically linking is a huge waste.
-Statically linking usually requires work, work that is easier for the author to do. Many packagers are not serious programmers.
-When packaging rpms, you can either set a dep or not set one. Packagers try to be as accurate as possible when setting deps, usually going by the configure script’s requirements.
-Libraries change over time. Requiring a specific version usually means that the program version and the lib version are known to work well together.
-In order for programs like apt or urpmi to work, you need specific deps.
-Any decent distro has dep resolution.
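For context, a versioned dep of the kind being discussed is a one-liner in the package’s spec file; the package name and version range here are made up for illustration:

```
Requires: gtkhtml >= 1.0, gtkhtml < 1.2
```

Two packages each shipping a line like this, with incompatible ranges, is exactly what forces two versions of the same library onto one system.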
I’m a user, not a programmer. I followed Eugenia here from BeNews. I’ve been waiting for Autopackage since the first day I read about it. Today I downloaded Inkscape, followed the instructions on the Autopackage site, and voila! I have Inkscape on my Xandros system and in the menu. It works! And that’s the purpose – to make it easy for users to install applications they need on Linux. That’s the only way we’re going to get people to switch. Messing with dependencies, broken packages, debs, rpms, config files, etc., is not what users want to do… we just want to use!
… how much time will pass before we see the first ‘Autopackage Only’ distro?
I agree with Rich Lewine. The Inkscape install went sweet for my brother, whom I convinced to try SuSE. Unfortunately he is going back to XP. He only browses, listens to music and uses AIM/Yahoo IM, but it was a chore for him to get the software he needed installed and going, even when using the SuSE software installer (not to mention problems with the software itself). He just wants to download, install and go, and I don’t blame him. I’m a tech, and getting some of this stuff installed, placed in the menus and going was, in some cases, plain ridiculous. He doesn’t give a crap, or want to give a crap, about the shell or having to keep typing in the root password to install stuff, and again I don’t blame him.
Since then I’ve been looking for somewhere to contribute to help with this. AP looks very interesting. I was looking into ZeroInstall as well. I figure if I dedicate some time to creating installers for some packages, like Firefox/media players/IM programs, then it would encourage the devs of those packages to continue to use it. And, hopefully, it catches on like a California fire.
I guess others have actually made my return argument for me, but just to reply to your comment “The only thing messed up is that most OSs still haven’t caught on to how to handle software installation properly.”
In this comparison it’s important to realize that, yes, you usually get your internet through one router connected to the rest of the internet; however, that doesn’t mean the rest of the internet, such as websites, is maintained by the same group that owns and runs that router.
My second argument is that even if the only problem was that most OSes or distributions don’t handle software installation properly, as you suggested, that seems like a large enough problem to warrant calling these software installations “messed up”, don’t you agree?
Actually I think Mike Hearn is my new hero after arguing endlessly in favor of autopackage and other such software installation issues on LinuxQuestions.org at http://www.linuxquestions.org/questions/showthread.php?s=&threadid=…
Looking at http://www.autopackage.org/docs/howto-install/index.html it seems that I have to run a binary I got from the net to install the software. This sounds very suspicious to me.
Second, does every user have to install autopackages by him/herself? When I install a rpm the software is available for all users.
Never used the LQ site before, so who is Mike, so I can look for his “insights”? I went through that thread and could only get to page 5 before my brain melted from the zealot posts. tcaptain kinda ticks me off with his “must use your brain” comments.
… how much time will pass before we see the first ‘Autopackage Only’ distro?
Um, never. Autopackage isn’t meant to replace package managers. RPM, for example, does an excellent job at managing packages; the frustration comes from installing them.
There are plans to integrate Autopackage with RPM, APT, etc. in the post-1.0 releases.
“Um, never. Autopackage isn’t meant to replace package managers.”
That’s sad. I would love to see a distro where every package were of this type.
I think developers know best how their software should be installed – after all, they wrote it.
This is very often not the case. For a very large number of projects, the developers do not have the time nor the desire to spend large amounts of time fine-tuning a particular installation. Take, for example, something like apache. Do the Apache folks really want to take the time to integrate Apache into DebConf? Probably not. Do users miss out on a lot when they can’t use DebConf to manage their Apache installation? Yes! Debian has a policy of releasing packages that work out of the box. In many cases, this goes way beyond just getting the package installed. This means putting files in global paths, setting up daemons, setting up cron-jobs, etc. This sort of TLC is something developers really don’t have time for, but Debian’s packaging team can focus on.
In July 2004, installing Wine is not difficult. In fact all you have to do is ./configure, make, make install – what could be simpler? Yet, the number of ways packagers screw it up is phenomenal.
I’ve used Debian for a long time, and I must say that WINE is an exception rather than a rule. As a counter-point, I’ll point out that if you do Lisp development on *NIX, Debian’s packages are the gold-standard. The post-install configuration makes life much easier than on a “separate-packaging” platform like Windows, where much manual setup has to be done to get the pieces to play nice. For me, it’s been extremely rare to come across a broken Debian package, while the benefits of packaging/OS integration reveal themselves every time I install software.
but my experience flat out contradicts that. If there is such a giant QA team how do they ship packages that are so often out of date and sometimes produce binaries that don’t even start. What are these QA people doing?!
Again, Debian has an extremely good reputation for quality. In general, I’d say their packages are better than what most developers package themselves (especially in Windows — the land of broken installers). I don’t think you’ll find many people who’ll agree with the statement that Debian is lax on their QA.
A lot of it boils down to smart packaging. Unlike random distro packager X, we know the software inside out, know how to make it integrate with the right software, and do testing both internally and with widespread betas to ensure it works correctly on many different distros.
Yes, CrossOver’s installer is very good. On the other hand, Debian’s packages for XFree are also very good. Furthermore, packaging XFree is a lot harder than packaging WINE. Last time I checked, CrossOver ran on a small fraction of the platforms Debian’s XFree packages run on. I’m not saying that developers cannot create good packages for their software (clearly CrossOver is a good example of one); I’m saying that there are lots of other data points that don’t fit your argument.
This is something that any open source project can do – smart testing and betas are not the preserve of the commercial world, many succesful volunteer based projects do the same.
While that may be the case, I think you’ll find that a lot of OSS projects lack the resources to *integrate* their software into each environment the user may run in. Sure, you can do one-size fits-all packages, but then you lose a lot of system-specific features, which is a net loss for the end-user.
Developers do not subcontract usability, or documentation, or artwork to distributions. Why should they do so for packaging? It’s fundamentally illogical.
It’s actually quite logical. Developers do not have the full information they need to create packages. It’s the distributors that know their platform best, and it’s the distributors that can best tailor pieces of software to their particular environment. Integration is key, and it’s just too difficult for developers to integrate with a wide variety of external software. In practice, systems that make it a policy for developers to package their software (specifically, Windows), have very little integration within their software system. This “subcontracting” is hardly unheard of in the commercial world. Hardware companies do this sort of thing all the time.
Does the Firefox linux installer use Autopackage?
It is really slick. I know Firefox is about the easiest thing to install (no deps). But still it feels very professional.
I agree with you in a lot of respects. However, I have experienced broken packages in Mandrake (urpmi) quite a number of times. When it works, it is fantastic; when it doesn’t, what do you do? What if I want the latest version of something because the current version is broken? Do I wait for my distributor to package it, or do I install from source?
Now we can install from autopackage. For apps that don’t place their tentacles everywhere or need lots of OS integration (e.g. GIMP, Kopete, gaim, Inkscape, and others), I think autopackage is a great thing. It can get you out of a hole when your distributor has left you with an old and buggy version of a program, and your only option is to compile from source, or wait.
When autopackage integrates into rpm and deb, it’ll be even better. That way add/remove type programs that read the rpm/deb database, will be able to remove autopackage apps as well.
I think everyone here agrees that autopackage fills a void. I’m looking forward to 1.0.
… how much time will pass before we see the first ‘Autopackage Only’ distro?
I was going to ask why have a distro when you could have a whole OS http://slashdot.org/comments.pl?sid=114399&cid=9692692 but it appears that I was mistaking AutoPkg for OpenPkg. Does anyone know the difference?
Someone asked about the “safety” of running a binary installer file. First of all, the installer (.package) is a shell script with the tarball within, so it’s not a binary in the full sense of the word.
But it all comes down to trust. If you can trust a (rpm|deb|tgz|…), why wouldn’t you trust a .package, even if it had a built-in installer? What I’m trying to say is that security doesn’t have anything to do with the package format, but rather with the source of the package.
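For the curious, a toy version of that script-plus-tarball layout looks something like this; the real .package format is more elaborate, this only shows the principle:

```shell
# Toy self-extracting installer: a shell script header with a tarball
# appended after a marker line. Names and layout are made up for the demo.
set -e
workdir=$(mktemp -d)

# Build a tiny payload tarball for the demo.
mkdir -p "$workdir/payload"
echo "hello from the payload" > "$workdir/payload/README"
tar -C "$workdir" -cf "$workdir/payload.tar" payload

# Write the script half. Everything after the __PAYLOAD__ marker line
# is raw tar data, which the script streams into tar at install time.
cat > "$workdir/demo.package" <<'EOF'
#!/bin/sh
# Find the line after the marker, then pipe the rest of this file to tar.
start=$(awk '/^__PAYLOAD__$/ {print NR + 1; exit}' "$0")
dest=$(mktemp -d)
tail -n +"$start" "$0" | tar -xf - -C "$dest"
echo "installed to $dest"
exit 0
__PAYLOAD__
EOF

# Append the tarball and make the result executable.
cat "$workdir/payload.tar" >> "$workdir/demo.package"
chmod +x "$workdir/demo.package"
```

Running `"$workdir/demo.package"` extracts the payload into a fresh temp directory and prints where it went – which is essentially what “a shell script with the tarball within” means, and why inspecting the script half with a pager before running it is always possible.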
but under Xandros, the provided programs didn’t install properly (yet) – e.g. supertux:
cat: /var/lib/dpkg/alternatives/x-window-manager: No such file or directory
QFile::writeBlock: File not open
rm: cannot remove `/home/xyz/tmp46875190/payload/@supertux.sourceforge.net/supertux:0.1.1/share/supertux’: Permission denied
rm: cannot remove `/home/xyz/tmp46875190/payload/@supertux.sourceforge.net/supertux:0.1.1/share/applications’: Permission denied
rm: cannot remove `/home/xyz/tmp46875190/payload/@supertux.sourceforge.net/supertux:0.1.1/share/pixmaps’: Permission denied
also, i can’t find any menu entries in KDE, neither for autopackage nor for supertux.
back to the lab guys…;-)
autopackage is part of the future. Distros need to have their own packaging systems, and imho apt and urpmi work rather well, but they can’t do everything.
Autopackage answers the need of developers who wish to provide packages themselves, without having to learn each distro’s system. It’ll be a powerful help for budding projects that don’t yet get support from the main distros. If distros provide better packages, then they’ll be used. But autopackage provides a credible alternative for whenever the distro’s repository is not the answer.
Now, does using autopackage break apt and rpm databases? I suppose it doesn’t, but does someone know for sure?
Can you come over to #autopackage at irc.freenode.net? We need more information than that to solve your problem.
“Now, does using autopackage break apt and rpm databases? I suppose it doesn’t, but does someone know for sure?”
It doesn’t even touch your RPM database. If it did, it wouldn’t have been released in the first place.
Is there a directory of applications packaged like that somewhere?
Sure, like I said, some packagers do a great job, but nobody should be forced to depend on the packager for their distro doing it right. Besides, this is totally missing the point – even if Debian had perfect QA (it doesn’t), why should people have to use Debian to get non-broken software installs? What about all those people who want, e.g., 6-month release cycles, graphical installers and all the other things Debian doesn’t provide?
Even if Debian had no broken packages at all, other distros do and many people want to use other distros.
You give examples of Apache, Mono etc, but none of these things are exclusive to distro-specific packages. The binfmt_misc interface is the same on all distros as it’s a part of the Linux kernel, not Debian or Red Hat. The EXE/PE switchers for Mono/dotGNU/Wine etc should be handled in an upstream project not by the distro. Setting up cron jobs and other things should be done upstream, and if upstream don’t care or won’t do it, then somebody else can always step up to fill the void and fork it (or at least, fork its upstream packaging).
IMHO none of these arguments are for distributor packaging; they are instead for people building high quality packages generically. There’s no reason this has to be done over and over again by people affiliated with distributors; this can all be done upstream. Even the DebConf example is imho not a good argument – why does Apache not ship with its *own* configuration tools? Too old skool? Pity, but that doesn’t mean the wheel has to be re-invented time and time again.
Anyway, the Apache example is somewhat academic; autopackage isn’t really targeted at servers, so I’m sure Debian will be maintaining mini-forks of Apache for some time. This is really targeted at the sort of programs we already package – games, 3rd party programs etc.
Re: AndrewG
No, the Firefox installer is custom. It’d be possible to make a Firefox autopackage, but as you have noticed, they already have a custom GTK2-based installer generated from the same data as the Windows installer, so I think it makes sense for them to keep it for now. In future, if we add RPM/DPKG integration and such, it may make sense, as it’d add features you don’t get with their custom thing, but for now it wouldn’t add much (except maybe better menu compat).
Re: Jeff
I think I only posted a couple of times to that monster thread, I certainly haven’t been arguing ceaselessly there. I do read LQ though and sometimes post as “mhearn”.
On “autopackage only distros” – not going to happen. It’s not meant for that.
*sigh*
when will i be able to download PackageFoo.file, double click it, have it mount on the desktop, then drag the bundle to /Applications and have it Just Work. I LOVE Linux. this isn’t a troll post. It runs all my servers. But why can’t i just get a simple mac like system for installing desktop apps? Everyone tries to invent the new snazzy perfect package manager when all i want is one package manager. EVEN if it sucks. I just want one!
the bundle system takes it a step further with system libraries too (Frameworks in OS X), where you can install frameworks either in ~/Library/Frameworks or /Library/Frameworks for global installs. it really is a logical, clean system. it pretty much has all the benefits of unix without all the poor management.
open source even has the advantage! they could make a nice gui frontend to ALL the software, and i wouldn’t even have to go on the web to find the package i want. yet they are still behind in usability with package fragmentation
>> when will i be able to download PackageFoo.file, double
>> click it, have it mount on the desktop, then drag the
>> bundle to /Applications and have it Just Work.
Autopackage _does_ Just Work.
And all that with a lot less work than what you’re describing there.
Remember, we’re talking about Linux (or xBSD) here, not Mac OS.
“when will i be able to download PackageFoo.file, double click it, have it mount on the desktop, then drag the bundle to /Applications and have it Just Work. I LOVE Linux. this isn’t a troll post. It runs all my servers. But why can’t i just get a simple mac like system for installing desktop apps? Everyone tries to invent the new snazzy perfect package manager when all i want is one package manager. EVEN if it sucks. I just want one!”
this kind of application-folders approach works well in generally monolithic systems, to an extent. after that it falls apart. just take a look at the number of OS X packages that DON’T do the application-folders type of installation. in linux this simply won’t work.
read the autopackage faq
Read this: http://www.autopackage.org/ui-vision.html
It explains where we’re going long term, which is basically a MacOS X style UI but implemented better. Downside is it takes more effort to implement and because we’re only like 3 people working on this, it’ll be a fair old while. Upside is that unlike OS X, it’ll actually work consistently.
Of course, I’m not saying that autopackage doesn’t have a place. I’m saying that high-quality native, well-integrated, packages are preferable to high-quality generic ones. And I have to point out that Apache’s configuration tools and DebConf really don’t overlap. DebConf handles configuration that is distro-specific or configuration that relates more to how the software integrates into the OS rather than how it functions. To get the same effect, Apache would have to have different configuration back-ends for all of their supported platforms. Further, how many application developers are going to write configuration tools that have the same powerful features (eg: C API and XML scripting) as DebConf? Apache might, but most smaller software projects won’t. There is also an advantage to the user to be able to configure all your software in one place, instead of having to use a dozen different application-specific interfaces.
PS> Wrt graphical installers: They’re a waste of time. The only thing the installer needs is a GUI front-end to select packages from a repository. The package itself, unless it needs to prompt the user for something (which DebConf has a GUI for anyway), should install totally silently.
OK, I agree with you that given a choice between equally high-quality native packages and generic packages, you probably want the native packages. But I stand by my belief that such a choice is unfortunately rare. For Debian users maybe it is more common, but autopackage wasn’t really designed for them anyway.
On graphical installers: yes, autopackages are not interactive. However, Synaptic-style UIs are fundamentally broken from a usability perspective – no UI that contains a listbox with thousands of items can be good. What we want is a MacOS X style drag’n’drop UI; whether you consider that graphical or not is arguable. I’d say it is.
This is really THE tool linux desperately needs! Pity that only three devs are working on this great program. Why not join them (speaking for myself, I’m not a programmer or linuxgeek, just an ordinary user…)?
I’m going to test the autopackages available on the website ASAP.
I wish Mike Hearn and his team success!
Very interesting discussion.
I did not find my question answered in the FAQ, so I’ll ask it here: are you going to support proprietary Unices and the BSDs? I did see you state criticism of FreeBSD Ports, but did not read whether that platform is supported as well. What if the person who made the packages for the Linux distribution in question didn’t want to support a specific distribution (various examples available!), or what if the developers of an application with autopackage support do not want to, or cannot, support an OS or platform? Does it come down to: “you have the other choice then, and that choice is a good thing too”?
While I think of it: it costs the distributor less time to package stuff up. For the commercial ones that means more time for other jobs, but it also means distributions will become more like each other, because, basically, nearly all the software will (in the end) work on all the distributions.
Anyway, will proprietary Unices be supported, or is that 100% up to the developers of the software? They _do_ get more work because of this, right? What happens if there’s only one developer and he has disappeared or no longer supports his/her software? Who’s going to host the package information then?
Only x86 Linux is supported. Currently there are no plans to support other platforms.
The installation problem is pretty unique to Linux. It would be pointless to support FreeBSD (for example) too since FreeBSD users are already happy with their Ports system.
“What if the person who made the packages for the Linux distribution in question didn’t want to support a specific distribution (various examples available!)”
Then he should not use autopackage. The whole point of autopackage is to make packages that can be easily installed on any distribution.
Why would anybody want to blacklist/ban a specific distribution? And frankly, what’s the point in making packages that refuse to install on a specific distribution? People can still install the app from source if they really want to.
In the future we intend to support multiple CPU architectures much better than we do now. The long-term plan is to integrate cross compilers so that maintainers can produce x86, PPC, AMD64 packages, etc.
“Why not join them (speaking for myself, I’m not a programmer or linuxgeek, just an ordinary user…)?
”
You can contribute by filing bugs, writing docs, and so on. Just ask them on IRC.
“Read this: http://www.autopackage.org/ui-vision.html”
That’s some pretty damn neat stuff.