InternetNews.com states: “Microsoft (or a really smart ISV) should build a full application manager for Windows, similar to what most Linux distributions do today.” Most Windows applications come with their own distinctive updating mechanism (much like Mac OS X), instead of having a centralised updating location like most Linux distributions offer. While it certainly wouldn’t be harmful for Windows to gain such a feature – the question remains: isn’t it time we rethink program installation and management altogether?
If we limit this discussion to the three major operating systems – Linux, Mac OS X, and Windows – then all have their distinctive set of advantages and disadvantages when it comes to the methods they use to install software. To start with Windows – while installers are generally easy to use, each application seems to use a different type, which can be quite confusing. And even though most Windows applications can be removed from the system using the Programs module of the control panel, they can’t be updated from there. The end result is that you get these nagging dialogs each time you start your application, asking if you want to update it, which seriously interrupts your workflow. It’s annoying, and highly inefficient.
Mac users claim to be much better off, but in fact, that’s nonsense. Mac applications mostly lack the annoying installers, but in return they leave a trail of files all over the system that you can’t remove easily without third-party tools. To make matters worse, Mac OS X also allows for installers, but these installers almost always lack an uninstall option – most of Apple’s own software included. In addition, the Mac still suffers from the lack of a central updating tool.
Linux users claim to have the holy grail of application management, but they’re also wrong. Yes, they have this elegant central updating and management utility, but in return, you are limited by how up-to-date your distributor is keeping its repositories – or how much stuff they put in there. It’s quite annoying to know that a new version of Pidgin is out, but your distributor hasn’t packaged it yet. On top of that, these central updating mechanisms in Linux are – still – notorious for making a mess out of things during more complicated update sets.
The thing is, all of the different mechanisms have their strengths and weaknesses, and claiming one is better than the other is like arguing over which method of suicide you’d prefer – you’re going to die anyway, so who gives a rat’s bum. What we need is to take application management to the next level, to combine all of the strengths of the various mechanisms into one. I made a hypothetical proposal once (be sure to read the comments, very insightful stuff in there), and I still believe that the system I proposed then is far, far superior to anything current operating systems have to offer.
For this to happen, Microsoft would have to let go of some of its control. When Code Red was running rampant on our university campus, we were not allowed by Microsoft to distribute the patch ourselves. We had to have the students all go through Windows update. You can imagine how much bandwidth that took on a campus with 20,000+ computers.
Plus, you would have to unhinge WSUS from AD. Not all machines are in AD, but you would still want them to use this service.
Microsoft could be the package maintainer, and only tested and scanned submitted apps would be allowed onto the repo, making the threat to customers a lot smaller than the current situation, where people download apps from all over the place.
I think it would be too much of a threat to MS and its shareholders. They would have to host competing software alongside their own products. While I’d love to see the Windows Update site offering Firefox, Opera and Safari alongside IE… something about winged bacon comes to mind.
Centralized package management would do a lot of good for the end user, but it’s not about them, it’s about the shareholders’ payoff.
but that’s the point, they want you to control them so they can be licensed. And they charge programmers extra to use the good .msi packaging tools that only work under the AD + SUS combo.
on top of that, ISVs have their own “tried and true” methods of installing – think Oracle, AutoCAD, Adobe, etc. – and they won’t bend to Microsoft’s way; they WANT to make installing painful.
The answer is NO. It’s not broken.
Look, the problem with Linux has been the 10,000 distros, and the 10,000 ways to do a single action.
The last thing we need is 10,000 variations of the control panel. I don’t want apt-get, pkg_add, yum, yast, … The Linux way is complete chaos.
Frankly, it isn’t broken, it works just fine, thank you very much.
I don’t know what you are talking about.
Searching: pacman -Ss <keyword>
Installing: pacman -S <package>
Upgrading: pacman -Syu
Removing: pacman -R <package>
That’s all I ever need. Sometimes I put an ‘f’ in there.
You moan about all these 1000 different ways. What you seem to forget is that every single user only needs his very one single way for his distro.
Talking about 1000 different ways, that’s exactly the problem of Windows software installation and upgrading. Every piece of software you find somewhere, you have to run through a semi-coherent installer program, and in the end you have a semi-coherent way of hopefully removing it again. And then you have to actively download new versions, etc., if it doesn’t come with some sometimes-crappy self-update option.
No central system means that everybody is re-inventing the wheel, some with better success than others.
On the other hand, I don’t see package management like in Linux coming to a commercial platform soon. There is much more behind it than just a convenient way of keeping your software up to date. Think about the tight interplay of all these different libraries. In a commercial world there is no place for this; package managers would never gain the control they really need to provide a high-quality software system.
You moan about all these 1000 different ways. What you seem to forget is that every single user only needs his very one single way for his distro.
You seem to forget that it’s a pain to distribute an application for each possible way that each distro can manage packages.
Windows doesn’t have that problem. Each different Linux distro is like a completely different platform because of the community’s inability to decide on anything.
Or because there are SEVERAL communities.
Windows is just yet another system with yet another way of installing software. It has the very same problem as other distros.
“THE linux community” has nothing to do with this.
Has the author (of this post) ever used Linux?
You don’t need to care about every damn distro, only the top ones, which leaves you with basically RPM or .deb. Wow, two systems. The difference between distros that are using the same package system (rpm or deb) is so minor that usually it doesn’t even matter.
Maybe this is a bit harder for the developer, but here’s a hint: as a consumer I don’t give a damn. I care about what makes things easier for ME. Cry me a river.
It’s still a valid point. It’s not like there is one central repository for all Linux programs; there are hundreds, all with different content, different levels of quality, and different levels of compatibility with each other.
It’s not hard to get into a lot of trouble as soon as you venture out of whatever distro-specific repository you should be using for everything.
No.
Are you really suggesting there is one for all Windows and Apple programs?
You’re talking about GNU/Linux, not Windows and Apple.
One distribution usually has one official repository and one official package manager.
I wonder how you survive with a Windows system; it’s worse.
There’s a reason there are so many different repositories. Every major Linux distribution keeps its own repositories, which only contain software and libraries that are tested and known to work and be compatible with each other. If you install something from a default Ubuntu repository, you can be certain all of its dependencies will also be there, and be the right version. Adding more repositories in Ubuntu is as trivial as adding a URL to the end of a list. One line, nothing complicated. In the past three years, I have never had a serious problem with package management, no matter how many third-party repositories I’ve added.
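To give a sense of how little is involved (the archive URL and release name below are placeholders, not a real repository), the “one line” looks like this – simulated here against a scratch copy of the file so nothing on the system is touched:

```shell
# Simulate editing /etc/apt/sources.list against a scratch copy;
# the "deb" line uses a placeholder archive, not a real repository.
cp /dev/null sources.list.test
echo "deb http://example.com/ubuntu hardy main" >> sources.list.test

# Count the repository lines now present (prints 1)
grep -c "^deb " sources.list.test
```

On a real system you would append the line to /etc/apt/sources.list itself and run apt-get update to fetch the new repository’s package index.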
If having the latest version of a program is more important than having the tested version in the repositories, often you’ll find the developers have already packaged it and it just hasn’t been accepted into the official repositories yet. I normally add the repository of a program I like to keep up to date (like WINE), and then keeping it up to date is exactly the same as keeping any other program up to date. The same update tool is used, it shows up in your list of updates right next to everything else. It’s virtually seamless.
Package management in Linux isn’t perfect, I’ll admit, but in my opinion it is better overall than OS X’s and Windows’ software installation.
There is a third option that you seem to have missed that uses the package management system, guarantees it will work if you choose the correct source, and does not require you to add extra repositories to your repository list.
A good example of this other “repository-less” method of installing packages for Linux can be found here:
http://www.getdeb.net/
I use Getdeb when I’m feeling lazy, and only if the version on Getdeb is the latest version of the program I want. Otherwise, I compile from source; but that’s a rare occurrence for me too.
I’d dispute that. Package managers these days have very good checks for dependencies, and very good algorithms for resolving them.
If there is no solution to install a given package, then the package manager won’t install it.
Where is the path that would “get you into a lot of trouble”?
In all the years I have used Linux, it has happened three times with Fedora, once with Ubuntu, and once with Debian that the package manager ended up getting borked to the point where I couldn’t add or remove packages.
With Fedora it was because I kept adding third-party repos that were compatible with Fedora, but not compatible with each other, and had overlapping packages. With Ubuntu and Debian it was because I was installing debs pulled from somewhere other than the Debian repos (once from a friend, once off a website).
It’s not like this is a pandemic, at least not since the rise of Ubuntu (the Debian repos IMO are top notch and fairly comprehensive), but it does happen every once in a while, and when it does it is a real problem, especially for a newbie Linux user.
It’s rare that one would need to venture outside the distribution’s repositories in the first place. Mandriva includes everything I need for a desktop, with bleeding-edge program versions. Debian’s repositories are an even larger software library to choose from. The only thing I need outside of those two sources is VMware Server, which is completely painless to install from tarball with its own installer; server software requiring some typing during installation is not an issue for me, though.
Maybe this is a bit harder for the developer, but here’s a hint: as a consumer I don’t give a damn. I care about what makes things easier for ME. Cry me a river.
Yes, I have used Linux, and your attitude is exactly the reason why people don’t support Linux. Which, frankly, is fine by me. I have not tried a single Linux distro that I felt was worth my time to continue using.
Do you seriously think Windows consumers care if something that makes things easier for them makes things a bit harder for the developer? Hint: No.
Do you seriously think mainstream developers want to develop for a set of pain in the ass platforms? Hint: No.
The most pain-in-the-ass platform to develop for in the long run was always Windows (although MS has really good developer tools). So your answer is wrong, it is obviously “yes”.
And btw, I developed applications for GNU/Linux. I provided the source online, nothing more. Others packaged it. Result: it is part of the official Debian and Ubuntu repositories. It is part of official and unofficial ArchLinux repositories. There are Gentoo packages. There are some “generic” packages to be found on the internet as well… In the end, almost every Linux user can use my software, and all I did was provide the source code in a standard way (autotools).
I don’t even have to compile it for them or build an installer. Still every one of my happy users can install and uninstall it easily.
So whoever invented that crap about “Linux is bad for developers”: talk to the hand. It is crap. If you stick to the rules, it is the best platform to develop for. Lots of free libraries to use, and you don’t have to ship them yourself, and so on. Make it run on your system and (stick to the rules!) it will work on every system, even _future_ ones. (People at Ubuntu once introduced a patch to my program to make it compile with their new GCC version – I had to do nothing for that.)
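For a concrete sense of what “the standard way” amounts to, here is a minimal autotools skeleton for a hypothetical program called hello (the names are illustrative, not from the poster’s actual project):

```
# configure.ac — minimal sketch for a hypothetical "hello" program
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am — one binary, one source file
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

With that, autoreconf -i && ./configure && make builds it from a pristine checkout, and distro packagers have everything they need to wrap it as a deb or rpm.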
As soon as you don’t want to give away the source code it gets harder – but still (imho, ymmv) not harder than Windows.
Why are they developing for Windows, then? Win32 isn’t exactly a smooth ride.
Certainly there need to be incentives, but package management has very little to do with whether someone is going to write an app for Linux or not.
Don’t exaggerate, will you?
There are in general three kinds of files the program writers have to supply:
rpm – Red Hat package manager files.
deb – Debian package files.
tgz – tarball files.
The rpm and deb files are in essence just the program files with a list of standard locations and dependencies that have to be resolved by the distro’s package manager. Nothing more, nothing less. The program makers don’t have to supply their program for all variations and tastes of distributions; they only have to sum up what the dependencies are.
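That “list of dependencies” is literally a few lines of metadata. A hypothetical package foo might declare them like this (the package names and versions are made up for illustration):

```
# debian/control — Debian/Ubuntu dependency declaration
Package: foo
Depends: libgtk2.0-0 (>= 2.12), libc6

# foo.spec — the same idea in RPM syntax
Requires: gtk2 >= 2.12, glibc
```

The package manager on the user’s machine resolves these names against the distro’s own repositories at install time.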
It’s the task of the distribution makers to make sure the application runs on their specific distribution. If the standard package won’t run, those distro makers have to compile and package it themselves.
So – please stop spreading the stupid myth that program writers have to supply their program for all tastes of Linux distributions. This is simply not the case…
Okay?
I agree with that, and while they are many different distributions, only a few are worth a try for beginners :
Ubuntu or one of the many flavors of it (Then Debian if you want to go further)
OpenSuse
Mandriva
Fedora is a bit more technical, and I think the rest are outliers; by this I mean they wouldn’t be as fun for beginners.
If anybody can think of any other popular one, let me know.
Spread knowledge, not misinformation.
Which is why, e.g., the AbiWord developers say that it is easier to deploy on Windows than on Linux, and users constantly complain because they are using the old version.
So you are hurting developers and users.
Yes, they do if they want to make it easy for their users. Just look at Skype, Flash, Pidgin, etc., and tell me they have a single version.
Pidgin:
Windows
Fedora
CentOS / RHEL
MacOSX
Notice the two different Linux versions.
Skype:
11 (!) different Linux versions.
Only Linux users would say this is an advantage!
Stop spreading the myth that you can always get the latest version from your distro and that developers don’t have to do a thing.
As far as software developers having to release different binaries for different flavors of Linux, I really think that is a moot point. Here’s why:
They can release the source (.tar.gz) package and possibly one or two of the popular binary packages (rpm, deb) IF THEY WANT. In the case that they don’t release anything besides a source package, the developers/maintainers of each independent distribution can create the packages themselves, which is often what happens. I mean, who better than the developers of the distribution to package something for their own distribution?
So, while the task of packaging has to be done by someone, at least that task does not have to be placed on the shoulders of the application/whatever developer(s).
Sigh…
This is NO proof. If the developers want to make packages for several distros, that’s fine…
However…..
They don’t HAVE to do it. They are not forced to and they don’t need to. You see – all they have to do is give the tarball with the source code. That’s the reason there are distros and distro builders: they do the job of fitting the application into the distro…
Yeah – and people complaining. There are always people complaining. I’ll tell you a big secret: even developers of Windows applications get complaints that their software is not working. And that’s only one platform, so there should not be a problem at all (using your logic). Amazing, huh?
Keeping the application version up to date is also a job of the distro builders. Most times they will test the new version before bringing it to their platform. And that is the right way to do it. Why would I want the latest version right now if I can get a tested and packaged version a few days later? What’s the hurry?
Now when it comes to closed source code, it is a bit of a different story. Most times the developers just give a standard package with pre-compiled pieces inside. No source code here. Maya for Linux is a good example of this. Problem? … No! You see – there is a common set of minimal parameters every Linux distro has. By building Maya against these common parameters, the developers can be sure it works on most distros. Maya works on Red Hat, CentOS, Mandriva, SUSE and a lot of other distros. All use the same standard rpm package. No problem at all…
I agree a package manager would not work on a Windows platform, but it is doing a great job on a Linux platform.
For FOSS code this is not a problem. Each distribution will take your source code, compile it with options compatible with that distribution, package it and include it in their repositories. As the author you don’t have to do anything at all … others will help here.
For commercial closed-source code, you are fighting an uphill battle to get that application accepted on Linux anyway. People will nearly always choose an open source equivalent. But even then, if you really want to have a go at a closed-source, single-supplier-only application for Linux, then if you follow the LSB API for interfacing with the desktop you need only really compile your code once and package it in two ways – as a .deb package and also as an .rpm package – and you have most Linux systems covered.
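As a rough sketch of that “compile once, package twice” flow, assuming the stock Debian tooling plus the alien conversion utility are available (the package name, version, and staging directory are placeholders):

```shell
# Build a .deb from a prepared staging tree
# (the tree contains the binaries plus a DEBIAN/control file)
dpkg-deb --build foo-1.0/ foo_1.0_amd64.deb

# Convert the same payload to .rpm with alien
# (or build it natively from a spec file with rpmbuild)
alien --to-rpm --scripts foo_1.0_amd64.deb
```

The point is that the binaries themselves are built only once; the two package formats are just different wrappers around the same files.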
It is not that hard. Plenty of projects do this.
http://download.openoffice.org/other.html#en-US
Funny, the distribution maintainers seem to do okay managing additions. It’s not like Mozilla has to provide a package for every distribution, just the tarball – the basic universal package. Each distribution then chooses to include it through their own manager… or not.
But, I suspect we’re already way over your head on this one.
I hear you… Some have tried and have thrown in the towel, for lack of time but also for obvious lack of interest from the community (http://winpackman.org).
On the other hand, PC-BSD has approached it the other way around and wondered: “What if we used a strong Unix system as a base for our OS and integrated an intuitive, Windows-like software management system, something anyone could use right away?”
The result is here: http://bsdstats.org – The basic difference between PC-BSD and DesktopBSD is the way they handle software. Over the same period of time, PC-BSD has gained more than ten times as many users, and PC-BSD itself accounts for three quarters of all BSD users.
The Linux way of managing packages is great, has undeniable advantages, but it is not what regular users want, sadly (or fortunately).