Canonical is announcing the availability of PPA: a free, Launchpad-integrated service which gives anyone 1 GB of space to upload whatever software they want. Launchpad will compile it automatically and set up an apt repository serving your packages to anyone who wants to use them. Additionally, PPAs offer bug reporting and translation services.
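For users, consuming a PPA should amount to nothing more than one extra apt source; roughly the following, where the PPA owner “someuser” and package “somepackage” are placeholders for illustration:

deb http://ppa.launchpad.net/someuser/ubuntu gutsy main

added to /etc/apt/sources.list, followed by:

sudo apt-get update
sudo apt-get install somepackage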
Packages from PPAs enable users to get the software they want in a reliable manner; a really great idea, especially as the uptake of Ubuntu in particular grows.
Also, before anyone mentions it, a quote from the linked press release (yes, PPAs have been around for a while now):
“Canonical today announced the general availability of the Launchpad Personal Package Archive (PPA) service”
Seems like a good idea. It’s unfortunate that something like this is required. I would have hoped that by now the Linux distros and GCC developers would have worked together so that binaries would work across the various system configurations.
As it stands, Windows remains the undisputed king of backwards (and perhaps more importantly, forwards) compatibility. Until this is fixed, developers will continue to offer only source code for Linux.
I assume that openSUSE’s build service ( http://en.opensuse.org/Build_Service ) helps/will help with this.
I think, if I understand you correctly, that you’re missing the point. The free software ecosystem is fundamentally incompatible with the binary “build once run anywhere forever” mentality. The various distribution projects have different goals, release cycles, and administration tools. The lingua franca of the community is the source tarball.
The trend to which PPA belongs (including a similar service from openSUSE, not sure about Fedora) is the automation of building and packaging processes for various free software platforms. By uploading source tarballs and patches to these services, developers are able to ensure that users of these platforms have access to binary packages automatically prepared for easy installation on their systems.
It’s a “push” process that requires developers to take the initiative to make their projects available on a selection of platforms, but it makes this task as easy as possible. Furthermore, if a package becomes popular on a certain platform, its community can take over the initiative in starting to “pull” from upstream. Think of this as a way for developers to get their foot in the door of a major distribution project and gain the exposure needed to become a part of the official repositories.
Don’t hold your breath waiting for unified binary compatibility across the free software ecosystem. Especially as free software is aggressively moving onto post-PC hardware platforms and pursuing more appliance-like turnkey solutions, the flexibility of source code will remain a crucially important competitive advantage.
Source code got us where we are today, and it will take us wherever we need to go tomorrow. Part of the free software vision is to demonstrate not only that proprietary software is repressive for users, but also for distributors. PPA highlights the fact that free software makes practical sense for distributors, and the more users demand diversity, the more freedom will triumph over secrecy.
Hear hear!
Thank you, butters, for so eloquently voicing what I have been unable to for years!
Every so often, I run smack bang into the assumption that for some reason Linux needs to support binary-only code across distros. I have tried to explain that the whole Linux ecosystem needs to be viewed from a completely different perspective, but I have either failed miserably to express myself properly, been talking to someone unable to grasp the concept or, more often than not, a bit of both.
Invariably, I end up stating that Linux would not be where it is if cross-distro binary compatibility were a real issue. To this, most people’s answer is that unless the situation changes, Linux will not evolve beyond its current point. As I have had this argument for several years running, it’s quite obvious that the latter statement has proven to be untrue.
I can see the point of cross-distro binary compatibility and do agree that it would be a great means of getting more desktop ISVs on board, but only in the short term, and at the expense of software vendors opening up their code (or, failing that, starting to offer solutions based on FLOSS). I feel, and I think many others will agree with me on this, that by showing the world just how far FLOSS has come in the very short space of time it has been around, we can get more companies to start adopting FLOSS as their preferred development method.
I’m not going to start preaching to the converted, so I won’t go into too much depth, but I do think that Linus’s analogy sums up what I am trying to say: basically, closed source software is akin to alchemy, while FLOSS is closer to science, i.e. open to peer review.
As I’m sure you have noticed, when even FLOSS’s most ardent opponent starts creating FLOSS licenses and opening up code, there has got to be at least some truth to what Linus says.
Unified binaries are needed for Linux to grow beyond the hobbyist/enthusiast market. And it won’t do that unless you create a comfortable environment for proprietary software, for reasons I’m too lazy to discuss tonight. Maybe it doesn’t need such growth, but that is another question.
Lack of unified binaries creates duplication of packager work (go ask VMWare) and user frustration. They shouldn’t and needn’t be the primary means of packaging, but every distro that calls itself a desktop – whatever its native packaging system is – should support an alternative, optional unified packaging system. So that I could conveniently install Photoshop for Linux (hehe) bought and downloaded via CNR (hehe) in any desktop distro I choose.
Proprietary software isn’t a growth market, it’s a hold-over from a bygone era that’s slowly fading away. The new software products being adopted by businesses and consumers are overwhelmingly free software. It’s not a market worth chasing. I’m content to let the proprietary software vendors chase us as the major free software platforms continue to gain marketshare.
There will be Adobe Photoshop for Linux. The only reason it’s taken this long is because Adobe is hard at work overhauling their entire content creation portfolio to support their AIR framework, and while it will be proprietary software, I’m fairly confident that they will ship Linux packages.
My Dad’s been using Ubuntu for almost 2 years now on his home computer. Not that he knows or cares, he only recently conversationally asked what his computer was running. He nodded when I told him it’s Ubuntu Linux and I bet he forgot about it right away.
I’m pretty sure he won’t be coming to me anytime soon and say “Son, I think that Linux really needs unified binaries”. ‘Cause I’d be coughing some of that great pie that Mom makes around the room if he did.
In other words, “Linux needs unified binaries” is only something that a geek/hobbyist would think to say. Regular people, who outnumber geeks greatly, don’t know, don’t care. They use computers like a VCR and that’s all.
I think you have the key point. This would build against Ubuntu’s included packages as much as possible because it’s their servers. Like you say, this gets the “foot in the door” so that all the packages can be represented in some fashion. A small developer can post to sourceforge and some of these PPAs and cover 90% of their base with minimal testing and such.
This is a good balance between raw independence and Ubuntu having to do everything.
Shuttleworth’s goal, I think, is to have package maintainers work directly in Ubuntu’s system, as well as in others. Forget the “ivory tower” of Debian, where a select few “approve” stable releases and hold everybody back. Also forget the corporate Red Hat that absorbs and “tweaks” projects to its own specs, so far out of the mainstream that it’s useless to give back to the community without lots of work. (The issue is that RPMs don’t “just work” on RH, SUSE, Mandriva… it’s all just a bit different.)

Ubuntu is hitting a wall with doing the maintenance on their own. Shuttleworth is smart to fix the problem, not the symptom, and make distro management and package building more acceptable to programmers directly, rather than rooting around the internet trying to pull stuff in and fix it up. Ubuntu is in the packaging and supporting enterprise business… directing traffic, not getting dragged down into splitting hairs over code. It’s very similar to how Linus works: letting others build on their own, then pulling in the best options as “official”.
Yep, Windows is the king of backwards compatibility.
But in a free software world, backwards compatibility is much less important. If things change, developers just update their software to work with the new changes.
This might sound like extra work, but it is hardly a problem for an active project. It also means less broken legacy code and fewer legacy APIs that need to be supported.
Microsoft refuses to send out patches for some security issues because they would break compatibility; this is very bad.
The “Windows” view is that software is a shiny disc. You sell it, you support it if you have to, you sit in a tower and put out a new version when you want people to pay you more money. Software is a “done” thing… dead once it’s out the door.
Free software is alive, in use, being modified for users RIGHT NOW, not in the next version in 6 months… fixing little things from time to time is much easier than trying to fix ALL the bugs in one shot so the company can sit back for another 9 months or a year.
Windows is Windows. The kernel changes, but the libraries are all there back to the dawn of DOS. It’s one repeatedly rebranded OS from a single controlling company.
This is where the confusion comes in; Linux is not an OS but the kernel at the core of many different and distinct POSIX-inspired OSes made from commodity parts assembled differently by each distribution.
Mandriva != Debian != Red Hat != Ubuntu != Mint. These are all separate but similar OSes. They are all very similar, and they all use the same kernel at their core, but they are all slightly different depending on the goals of the distribution.
Now here’s the kicker: Linux-based OSes prize choice over end-user lock-in. A standard would be nice, but then we’d all have one distribution to choose from, like y’all stuck in the smaller but more popular Windows world. One size does not fit all needs, though, so different “standards” are applied to different needs for the best solution in each specific case. Don’t like a standard? Try a different distribution, or learn the simplicity of tarball -> make -> make install.
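For anyone who hasn’t done it, that dance is shorter than it sounds; for a typical autotools-style project (the tarball name here is just a placeholder) it’s roughly:

tar xzf foo-1.0.tar.gz
cd foo-1.0
./configure
make
sudo make install

with the caveat that individual projects document their own variations in README or INSTALL.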
Comparing Windows to Linux-based OSes just doesn’t result in a valid analysis. Yes, we all do it, but if you remove the religion, remove the purely emotional basis and compare purely on technical attributes, they become simply incomparable. Direct comparisons are not truly possible.
Good for Windows, supporting MS programming bugs all the way back to DOS. The primary goal of the Windows product line is to retain complete compatibility (maintain barriers to change for users).
I’m personally glad that there are hundreds of different Linux-based OSes to choose from. Windows runs my games, but freedom of choice and a fully configurable system run everything outside of video games very nicely for me.
Thus endeth my off-topic rant.
Binaries *are* compatible. Packages are a different matter, but binary compatibility is already there.
Strangely enough, often they do. My Debian responds to ‘rpm -qa’ and lists winetools and cedega. It’s not uncommon for me to install debs meant for Ubuntu.
The main difference is the package format. There are of course considerations to be given to distro integration, which is why I won’t insist on installing “foreign” packages too often. But down at binary level a package will work on many distributions if they match fairly current releases of the libraries it was linked against.
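A quick way to check that for a given program is to look at what it was actually linked against, e.g. (the binary name is just a placeholder):

ldd /usr/bin/someapp

which lists the shared libraries and versions it expects to find; if the target distro ships reasonably current builds of those, the binary will generally load and run.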
Is it compatible with the distro repos? I know that stuff like Automatix has been said to break your Ubuntu installation during an OS upgrade. And Trevino’s compiz packages have wreaked a lot of havoc on users upgrading to Gutsy. If it is compatible, then I think it’s a wonderful idea. One thing a user can get bashed for in the Ubuntu Forums is using repos that aren’t standard; when something breaks, they are told not to use 3rd-party repositories. I know, because sadly I’ve done that many times.
AIUI, PPA builds for multiple Ubuntu versions (assuming the package requirements fit, etc). So, a developer could theoretically set up a PPA for package foo that contains packages for Dapper through Hardy.
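So, assuming the PPA in question actually built for each of those series, the only thing that changes on the user side is the series name in the apt source line, e.g. with a hypothetical PPA owner:

deb http://ppa.launchpad.net/someuser/ubuntu dapper main
deb http://ppa.launchpad.net/someuser/ubuntu gutsy main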
http://www.happyassassin.net/2007/10/24/mistakes/
While I agree with most of what you’re saying there, there are a couple issues.
1. Why not just say it here? We’re reading comments and posting, not running all over the internet chasing down each other’s blogs.
2. It seems a bit hypocritical coming from someone who works for Mandriva, a distro that just recently (2007.1 Spring) failed to rebuild all Python-dependent RPMs after upgrading the core Python install to 2.5. I believe the official answer I got on this was “Well, you don’t expect us to rebuild every package in the distro against each other for every release, do you?” Why, ummm, yeah, I do.
1) because it’s rather long.
2) yes, that does suck. however, the fact that some things suck about MDV shouldn’t prevent me from pointing out things that suck about Ubuntu, especially when the knowledge that the thing sucks springs from first-hand experience of the same thing previously sucking in MDV.
For 2008, we (well, it was about 90% me…) rebuilt the entirety of /main, which had never been done before. For 2008.1 I’m hoping to get /contrib done too. But it would be nice to have an automated rebuild of everything, and there’s an ongoing discussion of whether that is feasible in the 2008.1 timeframe.
“1) because it’s rather long.”
Fair enough. It is a bit long. It’s been a long day and I’m a tad on the edgy side. My bad.
“2) yes, that does suck. however, the fact that some things suck about MDV shouldn’t prevent me from pointing out things that suck about Ubuntu, especially when the knowledge that the thing sucks springs from first-hand experience of the same thing previously sucking in MDV.”
Good answer.
“For 2008, we (well, it was about 90% me…) rebuilt the entirety of /main, which had never been done before. For 2008.1 I’m hoping to get /contrib done too. But it would be nice to have an automated rebuild of everything, and there’s an ongoing discussion of whether that is feasible in the 2008.1 timeframe.”
I’m sure the MDV users appreciate the work put in to rebuild all those packages. Just as a suggestion, you might want to check with the cAos developers. I spent some time working with them, and they have a sweet custom CVS tool for RPMs, as well as an automated build system for just this sort of thing.
Why not use something like SETI@home to rebuild the distro…
…of course you’ll have to avoid security issues, but it could be worth giving it a try…
regards,
glyj
That’s a good point! Could distributed computing be used in this way? And if not, why not? This is a genuine question; I’m interested in the answer.
It would likely not be efficient in most cases. Most packages only take a minute or two to build on a decent build system. Only the beefiest stuff, like OpenOffice.org, takes a long time (that takes nearly a day to build on the MDV build cluster…)
Then use my CPU for that minute or two…
If such a project exists, I’ll happily contribute.
Users run the risk of breaking their system any time non-core repositories are added, virtually regardless of the distro base. That will never change.
openSUSE has had their build-service running for some time now, and it runs very well. Within it are any number of conflicting repositories that could easily break a user’s system if they chose to randomly start adding repos. Yet this generally doesn’t happen.
There are “mission-specific” repositories for things like current and developmental builds of the kernel, Xorg, KDE, OOo2, compiz, updated kernel drivers etc. These are built against various versions of openSUSE (and in some cases, other distros) and allow the user to “pick their poison”. Want the latest OOo2? Add the repo and update. Want to play with daily builds of KDE4? Add the repo and update. Latest version of KDE3? Add the repo and update. Don’t like it? Remove the repo, and re-update via the base repo.

All of these repos are designed to cleanly co-exist with the base system, and as such to co-exist with any other fellow repos the user chooses to add. It all works very well, and ensures that users have granular access to the latest popular software packages of their choosing, regardless of their distro version and without having to mix and match developmental or backport repos that can inadvertently update or break components. In fact, the build-service combined with the custom-iso builder is the magic behind the frequently updated KDE4 LiveCDs.
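As an aside, on 10.3 the “add the repo and update” part is only a couple of zypper commands; roughly the following, where the repository URL and alias are placeholders (the same can be done through YaST or a 1-click install file):

zypper addrepo http://repo.example.org/KDE/openSUSE_10.3/ kde-backports
zypper refresh
zypper update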
Then there are the developer /home repos, which I think are probably more in line with what you’re getting at. These repos are available to developers (not exclusive to Novell/openSUSE, anyone can apply) and this is where the pet projects and often more experimental things are placed. This is where you can cause havoc with your system. None of these packages are built against each other, and often there are conflicting versions depending upon what the developer is testing or trying to provide. In some cases, these are simply bleeding-edge packages not for general consumption (e.g. the compiz-fusion packager maintains “release” packages for compiz in the main repo, but daily git builds in his home directory that may or may not break). In other cases, it’s an experimental patch that the dev may make to a core library (e.g. patches to hal) that should under no circumstances be pushed out to the general public.
Despite the availability of these various repos, havoc has not ensued. You have to have a bit of faith that users won’t simply enable dozens of repos on a whim without understanding what is in them, and for most of the /home repos, the only way users find out about them is when the devs post a message or blog to that effect.
Will the same hold true for Ubuntu users? Who knows. Certainly the risk for damage is there, but the risk for damage is just as high for users that use third-party repos or worse, packages downloaded directly from well-meaning community members.
I will say that the advantages of the build-service in openSUSE far, far, far outweighed the potential disadvantages you’re pointing out, and if problems arise, then the community just needs to be better educated rather than lose out on a valuable resource.
And FWIW, the openSUSE build-service is fully GPL’d and available to anyone who wants to use it. I’m not sure if it supports Mandriva environments right now, but since it supports Fedora/RHEL and Debian/*buntu, I don’t think it would be that difficult.
You never know, you guys might like it…
“Users run the risk of breaking their system any time non-core repositories are added, virtually regardless of the distro base. That will never change.”
Thank you for excellently summarizing my objection to this project. I consider “run[ning] the risk of breaking their system[s]” to be a fairly definitive pointer towards any project’s being a bad idea.
“well. Within it are any number of conflicting repositories that could easily break a user’s system if they chose to randomly start adding repos. Yet this generally doesn’t happen.”
What’s your evidence for the assertion that “this generally doesn’t happen”? Do you have enough involvement with and experience of support for OpenSUSE users to be sure that statement is correct?
“It all works very well, and ensures that users have granular access to the latest popular software packages of their choosing, regardless of their distro version and without having to mix and match developmental or backport repos”
from your description, these basically *are* developmental and backport repos, just tiny single-purpose ones which take no account of each other.
“Despite the availability of these various repos, havoc has not ensued. You have to have a bit of faith in the users that they won’t simply enabling dozens of repos at a whim without understanding what is in them, and for most of the /home repos, the only way the users find out about them is when the devs post a message or blog to that effect.”
Again, I’d like to see some support for this. My experience is that there is a substantial core of people who will enable just about any repository they come across if they think it contains Some Random Package that they have decided they want. The SUSE build service has also not been around very long: give it some time, and especially see what happens when it comes to upgrade time.
“And FWIW, the openSUSE build-service is fully GPL’d and available to anyone that wants to use it. I’m not sure if it supports mandriva environments right now, but since it supports fedora/RHEL and Debian/*buntu, I don’t think it would be that difficult.
You never know, you guys might like it… ”
Thanks, but no thanks – we already have a buildsystem that does everything useful that the openSUSE system does. It, too, is fully open source, BTW, and has been around for a long time (most of it came to us with Conectiva). It could easily be (ab)used to produce a constellation of tiny repositories that were not compatible, but we don’t feel this would be a sensible use of it.
** Thank you for excellently summarizing my objection to this project. I consider “run[ning] the risk of breaking their system[s]” to be a fairly definitive pointer towards any project’s being a bad idea. **
Wow. Nice selective interpretation. Here’s another one for you: the best way to protect against internet threats is to disable network connectivity, so you may as well go and rip out the network stack as well. The poor, feeble users certainly cannot be trusted to use their systems intelligently otherwise.
I’ll admit that I’m not familiar with Mandriva, but are you seriously telling me that you force users into a static environment with no ability to update packages after release beyond either being forced to third-party repos or having to compile their own packages and subsequently lose the ability to leverage their built-in package management system? Or should they open up access to testing repos and risk stability to their entire system for the sake of updating KDE or OOo2? Or are you simply implying that Mandriva somehow manages post-release package updates better than any other distribution has figured out?
** What’s your evidence for the assertion that “this generally doesn’t happen”? Do you have enough involvement with and experience of support for OpenSUSE users to be sure that statement is correct? **
Yes, in fact I do. openSUSE is certainly not without issues, some versions more so than others, but the build-service has not created any significant problems during the time it’s been active over the last two releases. If you question that, feel free to surf through the general user forums or mailing lists. I’m not going to pretend that users haven’t run into *any* issues at all, but they represent isolated instances rather than recurring problems, and the frequency is certainly no more than occurred prior to the build-service being available.
I would take it a step further and even argue that the build service has reduced the incidence of some problems seen in the past, such as having to download standalone update packages from any of SUSE’s various FTP or affiliate sites. From an upgrading POV, the build service tracks changes in dependent packages and automatically triggers rebuilds when necessary, which means that even through the development cycle, packages built against the factory build are automatically kept up to date. Once the development version is finalized and released, the “additional” packages that may not be part of the core release are already packaged and available.

Speaking from experience, this greatly reduced the “dependency hell” normally inherent in version upgrades. It also gives the external package maintainers the opportunity to accommodate changes during the development cycle and adjust for them prior to release, so there is a greatly reduced length of time between the release of the distribution and the availability of popular add-on packages. This type of thing would simply be far too resource-intensive to do manually, which is why in the past there was no support for developmental versions and why most users ran into those upgrade issues.
I’m not saying it’s bulletproof, just pointing out that it is hardly the doomsday scenario you are trying to portray.
I’m curious to know what the empirical basis is for your assertion that managed external repositories, with a proper automated build structure to reduce the risk of packaging errors, are somehow more inherently flawed than forcing users to seek out their own custom packages, with whatever inherent risks that carries. Or is it simply hopeful speculation?
** from your description, these basically *are* developmental and backport repos, just tiny single-purpose ones which take no account of each other. **
They don’t need to take account of each other; they are built against the base distribution. There are guidelines for writing the spec files to ensure that there will not be core-library breakage. In the rare case that a package depends on a library package that also has an update available, the devs generally create an additional repo to accommodate this and use the dependencies to prevent overlap. The only specific example I can think of was previous builds of compiz/beryl, where newer versions required elements of a newer version of Xorg than was available on earlier openSUSE versions. A re-packaged version of Xorg was made available for the various versions, and the compiz repo was broken into one for older Xorg versions and one for newer. It’s more work for the developers, but still much easier for them than in the past.
I think it’s time we drop this antediluvian concept of distros as a solid and unmalleable collection of interdependent packages. Haphazardly updating the kernel, or udev, or gcc et al. can certainly impact system stability, but why should a user be prevented from easily updating to the latest version of Amarok, when none of the more critical underlying systems depend on it? That is, unless you reject the concept of using automated tools to greatly simplify the process, and prefer to point to the lack of developer resources as a reason for locking users into a static model.
Again, I’m not advocating allowing users to rip apart their distributions at will and expect the developers/community to still support that self-inflicted damage, but I think we can give the users a little more freedom and leeway when it comes to making their own decisions.
** Again, I’d like to see some support for this. My experience is that there is a substantial core of people who will enable just about any repository they come across if they think it contains Some Random Package that they have decided they want. The SUSE build service has also not been around very long: give it some time, and especially see what happens when it comes to upgrade time. **
It may be your experience that there is a substantial core of people, but my experience is that it is more of a fringe. At least, within the openSUSE community, I won’t pretend to speak for all of them. Quite frankly, most new users are afraid to diverge from stock configs until they’re more comfortable, and once they are, I find that more often than not, they solicit advice or ask questions in the appropriate forums before blindly leaping in.
That’s also the reason for the delineation between the main sub-repositories you find in the build-service for things like KDE or OOo2, and the more risky and experimental developer repositories under /home. Combined with the new one-click install capability, users can add packages by clicking appropriately in the openSUSE wiki, using packages maintained by openSUSE developers and intentionally packaged for public consumption against release versions. If users are going to blindly enable a repository just because they came across it in a browser, chances are they’re going to find a way to bork up their system regardless.
Even Gnome gives users the option of using gconf to modify otherwise unavailable config options. Lighten up a bit and give your user base some credit.
** Thanks, but no thanks – we already have a buildsystem that does everything useful that the openSUSE system does. It, too, is fully open source, BTW, and has been around for a long time (most of it came to us with Conectiva). It could easily be (ab)used to produce a constellation of tiny repositories that were not compatible, but we don’t feel this would be a sensible use of it. **
Well, time will be the true judge of that either way. I assume from your attitude that Mandriva will remain one of the few mainstream distros not moving in this direction, so you may very well be able to post back in a couple of years and say, “See? Told you so!”.
But I suspect not.
At any rate, it’s clear that we’re going to have to agree to disagree, there’s no point belaboring it.
“I’ll admit that I’m not familiar with Mandriva, but are you seriously telling me that you force users into a static environment with no ability to update packages after release beyond either being forced to third-party repos or having to compile their own packages and subsequently lose the ability to leverage their built-in package management system? Or should they open up access to testing repos and risk stability to their entire system for the sake of updating KDE or OOo2? Or are you simply implying that Mandriva somehow manages post-release package updates better than any other distribution has figured out?”
You obviously didn’t read my blog post. Mandriva has a centralized /backports repository (well, one for each section – main, contrib and non-free). If a maintainer wants to update a package in a way that’s not suitable for an official update – the kinds of package we’re talking about here – they can upload it to /backports. /backports packages are built on the official buildsystem, same as any other Mandriva package, in a clean chroot for the release in question. As it’s a single repository, the packages are built against each other where appropriate, so /backports is exactly as internally consistent as the conventional release and updates repositories are.
No, I get that. But what if you have both Gnome and KDE in a single backports repo, and I only want to update KDE? Can I do a system upgrade without updating all the packages available in /backports? Or am I expected to update all available packages for the sake of consistency? I’m asking this as a legitimate question, because as I said I’m unfamiliar with your repo topology.
At any rate, I get your point, I simply don’t agree with it. Dependency management need only be as fragile as the thought and planning that goes into it. We’re in agreement that system stability is dependent upon supplementary packages being built against a consistent base. We’re in disagreement as to how deeply entrenched those dependencies really need to be. If upgrading to a newer version of Amarok runs the risk of borking Gnome, then there’s something very wrong with the package layout to begin with.
And obviously, if a newer version of Amarok requires a newer dependent package that could break the standard install of Gnome, then that is an entirely different case and in such a case that version shouldn’t be presented as an upgrade possibility. That’s why there exist packaging guidelines and standards.
A poorly managed build service in any sort of implementation will lead to all sorts of problems, but then so will sloppy package maintainers in a non-automated system. Nobody will argue that. But you still seem to be presenting theoretical arguments against something that is already beginning to prove its value and advantages. You’re arguing worst-case scenario factors without giving due credit to the developers to manage the process correctly, the users to show common sense, and the community itself to make it successful. If the openSUSE devs and user community have found value in the build service, then it’s likely that the Ubuntu community will as well. Particularly in comparison to the present system, which seems to consist of users self-packaging apps and hosting them on private ftp servers or posting them online.
Human error can naturally cause problems, but it’s not like human error doesn’t occur with manually packaged repositories either.
“No, I get that. But what if you have both Gnome and KDE in a single backports repo, and I only want to update KDE? Can I do a system upgrade without updating all the packages available in /backports? Or am I expected to update all available packages for the sake of consistency? I’m asking this as a legitimate question, because as I said I’m unfamiliar with your repo topology.”
GNOME and KDE are disallowed as backports by policy, as being entirely too likely to cause problems. But I see your point. It’s easy to enable and disable repos in MDV, and the recommended way to use /backports is to enable the repository when you want to install a specific app from it, install the app, and then disable it again. We don’t recommend using it as a general-purpose repo that’s enabled all the time.
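In concrete terms that cycle is only a few urpmi commands; roughly the following, where the mirror URL is a placeholder and adding/removing the medium stands in for enabling/disabling it (which can also be done from the graphical media manager):

urpmi.addmedia backports http://mirror.example.org/2008.0/i586/media/main/backports
urpmi someapp
urpmi.removemedia backports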
There’s a filter in rpmdrake (the software manager) that is supposed to allow you to install apps from /backports manually even when the repo is disabled, but I’ve seen a couple of reports that this isn’t actually working as intended in 2008. That mechanism is intended to cover exactly the scenario you outlined.
btw, I did a quick troll through SUSEForums to see if anyone was having trouble with the build service, and couldn’t actually find anyone using it. I may have been looking in the wrong place, though.
**GNOME and KDE are disallowed as backports by policy, as being entirely too likely to cause problems. But I see your point. It’s easy to enable and disable repos in MDV, and the recommended way to use /backports is to enable the repository when you want to install a specific app from it, install the app, and then disable it again. We don’t recommend using it as a general-purpose repo that’s enabled all the time.***
Ok, now we’re getting somewhere. So under your system, you can’t provide “major” updates to core packages like KDE and GNOME, due to the understandable potential for system disruption, despite the fact that pretty much every other distro can. And your system of letting users temporarily enable a backports repo to install a package, while then preventing them from receiving updated versions with security/functionality patches, is somehow better for system stability and upgradeability than a system of automated “micro” repositories that give the user granular control over selecting newer versions of packages while maintaining core package integrity *and* automatic update capability?
Dude, I’ve seen many of your posts here over time and you’re generally very rational and well-spoken in your opinions, even if I don’t always agree with them. But I have to admit I’m really struggling to see why you’re being so stubborn on this one. You’ve yet to provide a compelling reason as to why this is a bad move for Ubuntu, particularly in light of the fact that it has worked well for openSUSE. If Mandriva is shunning automated build systems for granular package management, that’s absolutely your choice, and it’s up to your users to decide if they’re better served that way.
Like I said previously, give it some time. Maybe you will have the last laugh. But I still don’t think you will.
***btw, I did a quick troll through SUSEForums to see if anyone was having trouble with the build service, and couldn’t actually find anyone using it. I may have been looking in the wrong place, though.***
Possibly the wrong place, but it’s scattered all over. There are posts from users regarding the KDE4 repos, or compiz, probably two of the most popular right now. Plus many users of 10.2/10.3 have already upgraded to current versions of KDE and Gnome through the build-service. The latest version of OOo2 is another popular update, and for added measure the build-service also provides snapshot builds of OOo2-unstable for testing. There are some users with Xorg 7.3, also available through the build-service. You could also stroll through the openSUSE wiki and see the various 1-click install links, almost all of which rely on build-service repos as well. Power users also get automatic updates of the KOTD (Kernel of the Day) through the repos, whereas it was previously provided through FTP download.
It’s not black magic or blind luck, it’s simply smart design on behalf of the developers that it works as well as it does. *buntu may have some bumps along the way initially, but I have no doubt it will become as powerful for them as it has for the openSUSE community.
Embrace the future.
“btw, I did a quick troll through SUSEForums to see if anyone was having trouble with the build service, and couldn’t actually find anyone using it. I may have been looking in the wrong place, though.”
Anyone referencing “1-click install” or instructions from the openSuSE wiki is almost certainly using the build service. Technically, anyone who adds the official repositories is also using the build service.
There are still some external repos, but out of the 11 I have configured, 2 of them aren’t on the build service (Packman and NVidia).
The fact that I can safely upgrade to the latest KDE and/or Gnome simply by adding the appropriate repo and running an update is a HUGE benefit of running openSuSE.
I still think that this might be a good idea, had it been personal…
If people were able to easily customize and update .debs for use with a specific implementation and host them on a server available from the internet, without having to allow others to use them (which doesn’t always make sense), it might actually serve a purpose…
E.g. – if one needs a Debian-based setup with an MIT Kerberos KDC, you don’t get the opportunity to deploy a properly GSSAPI-enabled Netatalk AFP file server, since Netatalk is built against Heimdal’s GSSAPI library.
Similarly, when deploying a Heimdal Kerberos KDC you don’t get a properly GSSAPI-enabled OpenSSH, since it’s been built against the MIT GSSAPI library…
In such cases you’d have to rebuild the .debs from source, making updates more time-consuming and potentially troublesome.
My point is that in such cases, it might be a good idea to be able to easily build and update packages with the dependencies being right for your specific setup, and get them deployed via apt-get, without having to actually host the repository (or PPA) yourself.
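For clarity, the rebuild I mean is the standard source-package one; roughly the following, reusing the Netatalk example from above (the exact build-dependency edit obviously depends on the package):

apt-get source netatalk
sudo apt-get build-dep netatalk
cd netatalk-*
# edit debian/control (and possibly debian/rules) to build against the MIT GSSAPI libraries instead
dpkg-buildpackage -us -uc

…and then repeating that by hand for every upstream or security update, which is exactly the tedium a buildable personal archive would remove.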
Your fatal mistake is underestimating apt.
PPAs will clearly show why apt > * where package installation is concerned.
First off, while apt is a decent system, it’s nowhere near the holy grail of package management people make it out to be. It’s quite usable, but so are the various alternatives.
Going back to AdamW’s point, according to him, he should be able to sit back and watch the meltdown of openSuSE, since it adopted the build system with 10.1, and has continued it into 10.2 and 10.3. Problem is, there hasn’t been a meltdown.
Personally (openSuSE), I’ve only had issues with 32-bit-only packages compiled for 64-bit (NX springs to mind), because it’s not easy to request specific architecture for a specific package. Once the package maintainer eliminated the 64 bit version, that problem went away.
I can see instances where a sloppy developer could create a repo that requires specific versions of libraries that aren’t compatible with the base, but at that point the package management engine should fail to resolve all dependencies, AND DOESN’T CHANGE ANYTHING. Solution? Remove the repository, and look for an alternative source.
Now, if someone wants to get creative, and attempt to force an installation, ignoring unmet (or impossible) dependencies, then they’re welcome to whatever mess they make of their system. If you get a warning light that says “Don’t do that”, and you do that… well.. you were warned, weren’t you?
as I wrote, in practical terms it is possible to have a situation where a package will *install* without complaints but won’t *work* properly. I have seen this happen on more than one occasion where people use third party repositories.
“as I wrote, in practical terms it is possible to have a situation where a package will *install* without complaints but won’t *work* properly. I have seen this happen on more than one occasion where people use third party repositories.”
Can you provide an example? It doesn’t have to be real-world, but something other than anecdotal evidence would be nice. I’ve seen people abuse the RPM system and completely hose their system, but they had to work at it (usually by overriding warnings).
The openSuSE system, as I understand it, builds packages based on dependencies already in the system. The only example I could come up with would be if I, as a private maintainer (for example) created a library called “gtk-1.2.10-993.x86_64.rpm” that wasn’t actually gtk-1.2.10-993 for x86. Since the build system is, in theory (I haven’t packaged anything with it) doing the building, it should see “gtk-1.2.10 required”, and grab the appropriate package to build against.
To get that to break would almost have to be deliberate, and malicious.
While your package and my package may not be built against each other, both packages should be built against dependencies already in the base system, minimizing the chances of an explosion.
Anything after that is going be the result of incorrect dependency resolution or package management failures.
You still have to prepare a proper debian source package to have it compiled and packaged.
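That means, at minimum, a debian/ directory (control, changelog, rules and friends) and a signed source-only build. As I understand the PPA workflow, the upload itself then comes down to something like the following, assuming a dput target configured for ppa.launchpad.net and with the package name purely illustrative:

debuild -S -sa
dput my-ppa foo_1.0-1_source.changes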
Personally, I think that as long as most “core” packages are kept in the main repos (both Debian and Ubuntu have massive collections), problems should be kept to a minimum.
There is, of course, a chance that this might cause breakage if people publish, and compile against, non-standard libraries of their own, overriding the default core packages from the official repos, as adamw pointed out to us…
Only time will tell, I guess.
True, but Canonical’s initiative is simply meant to address the needs of devs who already do this, but can’t be bothered to jump through the hoops to make their package an official Debian, Ubuntu, Fedora etc. package.
These people often set up a couple of directories on a webserver, tell people to add the URL to their apt sources and that’s all.
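The minimal version of that is just a directory of .debs plus a generated index; roughly (the URL is only an example):

dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

served from the web root, with users adding a line like

deb http://www.example.org/debs ./

to their sources.list.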
Canonical gives them free hosting space and a bug tracker. That’s all an individual developer or small team needs to get going: easy publication and organized feedback. It’s a brilliant idea.
If I ever complete any of the minor projects I’ve been hacking on in my spare time, I’ll definitely use this service to offer Ubuntu and Debian packages. I wouldn’t be surprised if all kinds of projects flocked to this offer.
From the article it sounded like an OK service, but then most of you guys relate it to Linux… I know Canonical makes Ubuntu, and probably the majority of people using the service will in fact be using Linux, but… is it suitable for non-Linux? (i.e. for cross-development)
Since the apps are going to be built into .debs, which AFAIK are not really used outside of Linux, no.
I guess it would make sense for Debian GNU/Hurd or Debian GNU/kFreeBSD too, but that’s about it.
I know this is not what you meant, but I can’t help speculating. I’ve always wondered whether a Linux-distro-like repository idea would work for Windows.
Probably not. You can’t have the source in most cases, so you lose a lot of the things that make Linux repositories attractive: the guarantee of fitting in with the system, no malware, and so on. The publishers would most likely not all agree to grant distribution rights, so you wouldn’t be able to carry some popular pieces of software. You’d have to fight a huge wave of spam, and you’d never know whether a package was a trojan or a keylogger or whatever.
And still… it’s a very attractive idea. With very tight screening of the accepted software it might be doable. There’s quite a bit of decent honest software out there whose publishers can be trusted (based on past experience) to not bundle crap. And you can always kick them out if they do. If this was kept small scale and high quality it might work.
so, it’s not so useful in making my Z80 based project (cross assembled from a Linux or Windows host) available to all?
Come on, now. Canonical deals with Linux. You can’t expect them to cater to every OS out there. They won’t even cater to every platform, AFAIK Ubuntu is only available for a handful of architectures (x86, AMD64 and UltraSPARC T1 for the server edition; PC, 64-bit and Mac for the desktop).
There is a bug in Galeon on Gutsy where it segfaults on trying to open the printer dialog:
https://bugs.launchpad.net/ubuntu/+source/galeon/+bug/136479
According to this report it has been fixed upstream in the Galeon SVN. It would be great if a Galeon developer would take advantage of this service to provide Gutsy users with a current working binary package of Galeon.
“It would be great if a Galeon developer would take advantage of this service to provide Gutsy users with a current working binary package of Galeon.”
Wow, there’s a drawback I didn’t even think of before: enabling people to do end-runs around proper update procedure that are useful in the short term but damaging in the long term.
Given that most Ubuntu updates are security updates, not bugfixes, it seems to me to be a good way to get a bugfix out to users. The update would presumably work its way through the procedure and come through for Hardy, as should happen normally.
It is a damn sight easier than the other way for users like me to get the fix, which means turning my system into a dev system and compiling from SVN myself.
“Wow, there’s a drawback I didn’t even think of before: enabling people to do end-runs around proper update procedure that are useful in the short term but damaging in the long term.”
PPA is already being used for this. I had already installed the latest version of gchempaint and the gnome-chemistry-utils after adding:
deb http://ppa.launchpad.net/laserjock/ubuntu gutsy main
to my sources.list
This enables me to use the latest version of this chemistry software, which has specific new functionality needed in my work as a research chemist – now.
I think the combination of a defined, distribution-supported build environment together with the strengths of apt will ensure that users’ systems aren’t broken. Only time will tell. By the way, on an anecdotal note: while I was using Feisty I was one of those users who added to my sources.list file any random third-party repository that had a specific program I liked and couldn’t easily get a binary for. Nothing broke.
Lack of a central repository with policies designed to keep it intact led to everyone in the Java world effectively statically bundling all their binary dependencies into their applications. That led to a decoupling of the development of dependencies from their users, since users could always simply bundle the binary JARs of the previous version while the developers were off on another API refactoring trip. That in turn made deployment of multiple applications very hard, unless one kept each and every released version of their dependencies in the system’s repository, as subtle changes in the exposed interfaces broke code dependent on them, and no one really cared much about cross-project integration issues.
That horrible state of affairs is something the open source Java world is trying to get away from, grudgingly, after it took about 10 years to figure out that ease of deployment actually matters, and means something different from ‘whack a bunch of semi-arbitrary library binaries together, make it show a green bar in JUnit, and ship it!’.
The only reason I could see something like this making sense for Ubuntu is that they have trouble scaling the free as in beer volunteer resources to cope with the demand from users and developers to see their desired software packages show up in their preferred distribution.
The solution to that is not to decentralize distribution repositories.
It happens whether you want it or not. A package repository is just a bunch of files and directories on a FTP or web server. Anybody can start one in 30 seconds if you have a properly composed .deb or .rpm and a meta-description file.
A lot of developers don’t want to push their project into the official repositories. It’s too much work, too bothersome. So they set up a repo of their own and tell people to add an apt source. Job done.
Canonical is taking this state of affairs for what it is. If it’s gonna happen anyway, they might as well help. Free hosting, automated compilation and a bug tracker is a very good service. It saves people the time and money of setting up their own repo, website and tracker. Google Code or SourceForge do something similar. Canonical’s service is simpler and aimed at Ubuntu, which is to be expected.
Ubuntu has done a lot to encourage unofficial (“community”) contributions. This has some obvious advantages. Canonical (Shuttleworth) saves money when volunteers do a lot of work in Ubuntu. Also, making contributions easy fits well together with the FOSS spirit and participation makes the community stronger and more motivated.
However, I agree with AdamW’s critique. Unofficial, unsupported packages can easily create problems because of their lack of proper quality assurance — and apparently such problems already exist in Ubuntu. The “universe” package repository is only maintained by the community, which means: no official quality control, no official bug-fixes, and no official security updates. Leaving the “universe” repo without official support is the most likely cause for the many upgrade problems that users have experienced in Ubuntu:
http://ubuntuforums.org/showthread.php?t=285446
http://ubuntuforums.org/showthread.php?t=414935
http://ubuntuforums.org/showthread.php?t=580852
I’m not sure if saving money in support is really the wisest thing to do in the long run. Ubuntu hasn’t even proven yet that they can provide a safe and trouble-free upgrade path from their Long Term Support edition to the next LTS release. If their LTS releases turn out to have upgrade problems, that will be seen as a huge disadvantage from the point of view of potential enterprise users — and that’s where the big money is. Making even more unsupported packages available sounds to me like a bad business plan.