Klik is a system which creates self-contained packages of programmes installable over the web with a single click. In this article Kurt Pfeifle discusses the potential uses of this technology for helping the non-coding contributors to KDE. He also looks at how the system works and the obvious security issues involved.
Klik: OSX Style Application Install on Linux
118 Comments
-
2005-09-17 2:24 pm by elmimmo
The rest of the folders are free to allow silently downloading to the user’s home folder. Should the OS prevent that?
Oops. That was me, and I meant “The rest of the browsers…”
Another package manager!
Honestly, I don’t know whether to open a bottle of wine…or a bottle of tylenol.
Really. This sounds interesting, but there are so many package management systems out there. Can’t we just agree on *something* that will seamlessly work for *most* systems. I’m tired of distro-hopping.
-
2005-09-16 10:52 pm by Anonymous
“Another package manager!
Honestly, I don’t know whether to open a bottle of wine…or a bottle of tylenol.
Really. This sounds interesting, but there are so many package management systems out there. Can’t we just agree on *something* that will seamlessly work for *most* systems. I’m tired of distro-hopping.”
Synaptic works on all rpm-based distros, but of course people seem to think there is none. Besides, different package managers do work the same: just click the rpm and it takes care of the rest; synaptic just sorts the deps out better.
-
2005-09-18 9:10 pm by aesiamun
Isn’t synaptic only a frontend to apt or the like?
It doesn’t sort out anything, it parses the output of apt…if it exists for rpm distros it’s because of apt for rpm…
-
2005-09-16 9:49 pmAnonymous
Because of the ability to simply download and run the program without the need for installation of any kind. The application is entirely self contained.
-
2005-09-16 10:16 pm by nathan_c
The similarity is that on Mac OS X the standard way of distributing an application is an application bundle compressed into a disk image (OS X uses HFS+ to preserve all file system attributes). Klik takes the application directory and compresses it into a cramfs or similar file system disk image – also preserving the file system attributes. Some could argue that tgz is just as valid (and you can actually distribute runnable applications for OS X that way).
The neat thing about OSX-style applications (which klik misses I think, unfortunately) is not the one-file distribution (tgz, zip, and even msi do that) but rather the standardization of the application file hierarchy within the file (that Apple calls an Application Bundle). With OSX, an application that you click on in the Finder is really only a specially-crafted directory with a .app extension (the Finder knows where to find the executables inside the directory). You can browse into it, look at the files inside, and all that. You can even add multiple executable files, resources, and all that. If you do it right, you can write the application prefs and settings back into the bundle, so that if the user moves it, the settings go with the bundle – even if they move it to another machine. … neat.
It’s a standard (you can look on the apple site to find the documentation) and it provides a really simple way to build really complex applications with many dependencies, resources, and executable files, and be able to manually alter applications as needed during development.
BTW, the bundle also holds a file (.plist) that contains a bunch of information about version number, associated file types, mime types, APPL codes, a description, etc so that an installer doesn’t have to insert that information into a launch menu somewhere – it just goes with the bundle.
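The bundle layout described above can be sketched with Python’s stdlib plistlib. This is only an illustration: the bundle name, keys beyond CFBundleExecutable, and the “launcher” function are made up for the example, while the directory convention (Contents/Info.plist, Contents/MacOS/&lt;executable&gt;) follows Apple’s documented bundle structure.

```python
# Minimal sketch of a Mac OS X-style application bundle: a plain
# directory tree plus an Info.plist describing what is inside.
import plistlib
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
bundle = root / "Example.app"                 # the ".app" directory
macos = bundle / "Contents" / "MacOS"
macos.mkdir(parents=True)
(macos / "example").write_text("#!/bin/sh\necho hello\n")

# Info.plist holds the metadata an installer would otherwise have to
# register somewhere else (version, executable name, identifier, ...).
info = {
    "CFBundleExecutable": "example",
    "CFBundleShortVersionString": "1.0",
    "CFBundleIdentifier": "org.example.demo",
}
with open(bundle / "Contents" / "Info.plist", "wb") as fp:
    plistlib.dump(info, fp)

def find_executable(app_dir: Path) -> Path:
    """Roughly what the Finder does: read the plist, resolve the binary."""
    with open(app_dir / "Contents" / "Info.plist", "rb") as fp:
        meta = plistlib.load(fp)
    return app_dir / "Contents" / "MacOS" / meta["CFBundleExecutable"]

print(find_executable(bundle).name)   # -> example
```

Because all the metadata travels inside the directory, moving the bundle moves everything with it – which is the whole point of the scheme.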
IMHO, of all the things that the linux community clones and tries to re-build, this is the one they should really focus on. Just download the Apple spec and implement it for linux – on every desktop. Keep the rpms, debs, and pkgs for system updates (and core system libraries), but I would strongly suggest that the linux community adopt something very similar to the application bundle for all client-side apps.
That’s my 0.02
-
2005-09-17 4:46 am by JLF65
With OSX, an application that you click on in the Finder is really only a specially-crafted directory with a .app extension (the Finder knows where to find the executables inside the directory). You can browse into it, look at the files inside, and all that. You can even add multiple executable files, resources, and all that.
IMHO, of all the things that the linux community clones and tries to re-build, this is the one they should really focus on. Just download the Apple spec and implement it for linux – on every desktop.
Already done – GNUStep.
It is exactly the same as OSX since they’re both derived from OpenStep. The only difference is a few Apple specific interfaces like CoreAudio. In fact, GNUStep points you to Apple documentation for how to get started programming in GNUStep.
-
2005-09-17 1:42 pmAnonymous
Just an offtopic rant…
> If you do it right, you can write the application prefs and settings back into the bundle, so that if the user moves it, the settings go with the bundle – even if they move it to another machine. … neat.
Might be “neat”, but just DON’T do that, OK? Writing preferences into the application folder has been one of the main faults of Windows, because it requires everyone to have rw access to the folder containing the app. A user might place his app in his own Home folder in OS X, but he might also use /Applications instead, which should never have rw permissions for all users – and the app would choke if it expects to have them.
However neat you think that is, I think it is still much nicer to keep the original bundled app pristine on one side, and the user preferences in another clear and obvious place, which you can take to another computer if you want (and what place more obvious than ~/Library/Preferences could there be?)
And this would be OSX style because of????
You can just copy one single file onto your system, wherever you possess “rw” privileges (no need to be root), and this file represents a complex, runnable application. Rather similar to how you do it on OS X. Try actually reading the article sometime:-)
As I understand it, an OS X application is the application and all its needed files contained in one single executable file.
-
2005-09-16 11:21 pm by BlackJack75
It is actually a folder with a name ending with .app. You can easily browse/modify the contents from the finder, and you can double-click them so the app inside launches.
This *folder-looks-like-a-file* technique is the reason why, to install an app, you just drag the folder onto the Applications folder, and to uninstall it you just drag the same unique file to the trash.
-
2005-09-17 2:56 am by Anonymous
Very close.
OSX apps are a folder with sub-folders containing all the needed files.
( and on a slight tangent , the .download files that Safari creates from in-progress downloads are similarly folders containing the partial file and meta-data )
Does anyone know of a link discussing security considerations for klik or Zero Install or other application-bundle style install systems?
-
2005-09-16 10:38 pm by Varg Vikernes
Is there even any need for a tight security regarding this? An executable can pretty much do whatever it wants once it’s run… Or am I missing the point?
-
2005-09-16 10:48 pm by cm__
> An executable can pretty much do whatever it wants once it’s run…
And that’s exactly why you’ll want to be picky about the sources for your klik files. Lists of trusted archives and digitally signed packages come to mind.
-
2005-09-17 9:25 am by Anonymous
Does anyone know of a link discussing security considerations for klik or Zero Install or other application-bundle style install systems?
For the older version of Zero Install (the one that needed a kernel module): http://0install.net/security.html
For the new version of Zero Install (0launch): http://0install.net/injector-security.html
Some security features of Zero Install (0launch version):
– All interfaces are GPG-signed, and you must confirm that you trust the key before it will accept the program.
– Nothing is run as root when software is installed.
The design allows downloads to be shared safely between users: users independently download the (small) interface files, which give a cryptographic signature of each implementation. The implementations themselves can be shared, since they are named by their digest.
So, if one user puts a modified version into the shared cache it doesn’t matter, because it will have a different signature and other users won’t find it.
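The digest-named cache just described can be sketched in a few lines. This is only a model of the idea, not Zero Install’s actual code: Zero Install computes manifest digests over whole directory trees, whereas here a flat sha256 over the payload is enough to show why a tampered copy is harmless, and the function names are made up.

```python
# Sketch: implementations stored under the hash of their contents.
# A modified copy lands under a different name, so a signed interface
# file that pins the genuine digest can never resolve to it.
import hashlib

def cache_name(contents: bytes) -> str:
    return "sha256=" + hashlib.sha256(contents).hexdigest()

cache = {}

def publish(contents: bytes) -> str:
    name = cache_name(contents)
    cache[name] = contents
    return name

genuine = publish(b"real program bytes")
tampered = publish(b"real program bytes, plus a trojan")

assert genuine != tampered                      # different name entirely
assert cache[genuine] == b"real program bytes"  # pinned lookup is safe
```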
Here is an example interface file (for ROX-Filer):
http://rox.sourceforge.net/2005/interfaces/ROX-Filer
There’s a walk-through showing how it works here:
http://0install.net/injector-using.html
This looks interesting, but from what I can tell, it doesn’t check for dependencies, there’s no way to update installed applications, and it’s only available for Linux. Then again, it’s still a work in progress, and anything that helps open source software gain momentum in the eyes of the average computer user is a good thing.
I’m perfectly happy with the ports system on FreeBSD. Maybe Klik will help ease the transition for less computer savvy users though.
-
2005-09-16 10:18 pm by Captain N.
There’s no way to check for updates on installed applications on Windows or Mac either, unless they have that option.
To me this is exactly what is needed on Linux for commercial apps. Synaptic and the GUI wrappers for it are fine for the OSS stuff. There’s no reason they can’t co-exist.
Still, it would be nice if there was some standard way to run a Klik app (or something like it) that adds itself to a list (a la Windows’s add/remove list), that has remove and update buttons for each item (and of course, if the app is deleted manually, it gets removed from the list manually). Something like that could co-exist perfectly with a Synaptic front end (except it would only list actual applications, like Firefox, or OpenOffice, and leave the libraries to another list program).
-
2005-09-16 10:21 pm by Captain N.
* There’s no way to check for updates on installed applications on Windows or Mac either, unless that option is built into the application.
Apple has finally installed a check for applications before completing a download.
As we all know just clicking any link or visiting a site can start a download. The resulting file could look just like a generic hard drive icon on the desktop, appearing behind a window for instance.
“oh what’s this? another drive? wonder what’s inside….”
Mac users are smarter on average, but we do have a lot of newbies too.
-
2005-09-17 12:00 am by BlackJack75
I agree with that “first run check” thing. It’s funny that you are warned only when downloading an app from the net.
I think the first time ever you launch an app, wherever it comes from (cd-rom, whatever), the user should be warned with a message like: “Hey, it’s the first time you launch this application, are you sure you want to allow running it?” Ideally you could test-drive the app in a safe temporary user account, just like you can do with dashboard.
-
2005-09-17 2:05 pm by Anonymous
In Tiger, you do not get an alert the first time you run an app by double-clicking it. I guess it is assumed that the user explicitly wanted to do that, although I understand how some apps might disguise themselves behind a folder or drive icon.
However, Tiger does present an alert the first time you launch an app by double-clicking a document associated with it, just in case a rogue app changed the default association. Which is neat IMHO.
Mac users are smarter on average, but we do have a lot of newbies too.
Wow, who would expect the above level of modesty from a mac user?
-
2005-09-16 11:23 pm by BlackJack75
As a mac user myself I happen to be smarter. But this dates back to well before I used my first mac 🙂
More seriously, I’d rather say things are so easy and intuitive on a mac that mac users would look pretty stupid when faced with another operating system (if all they knew was osx).
-
2005-09-16 11:29 pm
I don’t want to be a spoilsport, but Rox has had this for a long time: the app directory and the one-click installer.
-
2005-09-16 11:18 pm by cm__
> I don’t want to be a spoilsport but Rox had this for a long time.
How does that fact spoil anything? I guess you could install rox that way but nothing else?
-
2005-09-16 11:51 pm by cm__
> I guess you could install rox that way but nothing else?
I retract that statement.
But still, what does the fact that rox had AppDirs for a long time spoil about the ideas laid out in the article?
-
2005-09-17 6:35 pm by cm__
> But still, what does the fact that rox had AppDirs
> for a long time spoil about the ideas laid out in
> the article?
To the person who modded me down: Why don’t you answer my question instead?
It’s not a question of who had AppDirs first.
I use (and like) Ubuntu.
I use (and love) OS X.
Let’s install Opera on both systems.
==UBUNTU==
dpkg -i nameoffile
*nix gobbledygook
No message about where the program has been installed.
No message asking me where I want to install.
Assume Opera will be in the Applications menu because, y’know, it’s an Application and it’s logical that it would be installed with the other applications, right?
Open applications menu. Where the hell’s Opera?
Hit ye olde command line. Can’t find it.
Log in as Sys Admin and hit ye olde command line, find Opera living in a folder called .bin, along with every other program mankind has ever coded. (Okay, so that’s a bit of hyperbole.)
Link Opera to Applications.
Logout.
==OS X==
Double click File on desktop.
(a) File unzips into a DMG which I click on and end up with an icon that I drag to my Applications folder and bingo, it’s installed.
(b) File unzips into an icon which I drag to my Applications folder and bingo, it’s installed.
Occasionally after a double click I’m asked if the application has permission to go find additional updates it will need to install.
-
2005-09-16 11:28 pm by BlackJack75
Not to mention you don’t even have to drag the application anywhere… installing it is just a matter of choosing where you want to have it. You could as well launch the app directly from the disk image.
And once you have copied it to your app folder (or elsewhere), if a friend tells you he’d like to try that (free) app you can just send the app to another machine using iChat or whatever.
The complete app is really in that .app file (folder).
Of course some apps need an installer, if they are drivers or modify the system in any way.
I’ll come back to linux as soon as I can quickly try the latest alpha release of *any* program by just downloading and double-clicking.
-
2005-09-16 11:57 pm by kadymae
(meep. That was me as Anonymous.)
Not to mention you don’t even have to drag the application anywhere.. installing it is just a matter of chosing where you want to have it. You could as well launch the app directly from the disk image.
But the key is … you know where the damn program is because you put it there.
What I don’t like about the current Linux installer/package manager system is that you don’t know where the frelling program’s going to end up. *Most* of the time (but not always) it ends up in /bin, but if there were just a step forcing the user to choose a location for install, or a line saying “installed in /NameOfFolder”, I would like Linux package managers so much more.
So to Klik, I say ‘BOUT FRIGGIN TIME!
-
2005-09-17 5:39 am by archiesteel
Actually, most apps are in /usr/bin, not /bin.
The big question, however, is why should you care where the app is? You don’t need to know where it is to run it… package managers will put the app in your menu structures (on good distros they do, at least), and you don’t need to specify the path to start it from the command line, as long as it’s included in the $PATH environment variable (which it should be on a good, modern distro such as Mandriva or Kubuntu).
If you really want to find it, there is the powerful “locate” command.
That said, Klik is a good idea for commercial apps. Personally, I think package managers are fantastic for OSS software.
-
2005-09-17 12:25 pm by Anonymous
“What I don’t like about the current Linux installer/package manager system is you don’t know where the frelling program’s going to end up. *Most* of the time (but not always) it ends up in /bin, but if there would just be a step forcing the user to choose a location for install, or a line saying “installed in /NameOfFolder” I would like Linux package managers so much more.”
And that’s because humans are lousy at tracking what’s installed and what needs to go where. Computers are very good at this and good package managers do this for you automatically.
Additionally, if you took the time to understand the rpm or deb format, you would understand that you can query the database (e.g. “rpm -ql packagename” or “dpkg -L packagename”) and it will tell you where every single file associated with your package has been installed.
Don’t let your ignorance of an issue keep you from screaming to the heavens above about it though.
-
2005-09-17 1:00 am by Anonymous
Not completely true… some programs (including Firefox when I did this last time) run into an endless loop when run from the disk image itself
-
2005-09-17 6:49 am
-
2005-09-17 2:58 pm by Anonymous
There is definitely a flaw in the error check routine. It is arguable whether at the level of the application or OS. Of course, you can call Firefox a garbage hack and I can call Mac OS the same, I guess.
-
2005-09-17 4:56 pm by Anonymous
==UBUNTU==
dpkg -i nameoffile
*nix gobbledygook
ctrl+f2
“opera”
enter
EASY!
Gnome gives you a tab-completing run dialog when you press ctrl+f2; even if you don’t see something in the menu, it’ll work.
(KDE also offers such a run dialog, too bad Mac and Windows are far behind such a simple advancement)
Using menus to launch apps is so yesterday. It’s good for browsing what’s available, but that’s about it.
since it’s not well integrated into KDE.
klik makes use of AppDirs (which ROX users should know) coupled with .cmg files (which basically are cramfs images, sort of like OS X .dmg files), so the great revolution would be making konqueror (and also nautilus and maybe thunar) understand them: give the user the ability to download a .cmg file (for example firefox.cmg), mount it, drag the AppDir onto the desktop, and run it by double-clicking on the AppDir.
Add the ability for kmenu to scan the system for AppDirs and add them to a kmenu entry (like klik does) and you have good distro-independent application management that couples very well with commercial (and big binary) applications.
AppDirs as a standard could have a future; klik is actually a good start, but it adds another layer of complication for the user, since it doesn’t give the user the ability to manage downloaded apps
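The kmenu-scanning idea above is essentially a directory walk. The sketch below assumes the ROX convention that an AppDir is any directory containing an executable AppRun file; the directory names and tree are invented for the example.

```python
# Sketch: find ROX-style AppDirs (directories with an executable
# "AppRun") so a menu could list them as installed applications.
import os
import stat
import tempfile
from pathlib import Path

def find_appdirs(root: Path):
    """Yield directories under root that look like ROX AppDirs."""
    for dirpath, dirnames, filenames in os.walk(root):
        if "AppRun" in filenames:
            run = Path(dirpath) / "AppRun"
            if run.stat().st_mode & stat.S_IXUSR:
                yield Path(dirpath)
                dirnames.clear()   # don't descend into the app itself

# Build a fake tree: ~/Apps/Firefox is an AppDir, ~/Apps/notes is not.
home = Path(tempfile.mkdtemp())
ff = home / "Apps" / "Firefox"
ff.mkdir(parents=True)
(ff / "AppRun").write_text("#!/bin/sh\nexec ./firefox\n")
(ff / "AppRun").chmod(0o755)
(home / "Apps" / "notes").mkdir()

menu = [d.name for d in find_appdirs(home)]
print(menu)   # -> ['Firefox']
```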
-
2005-09-17 1:02 am by Anonymous
well, you might be right except… that klik is NOT distro-independent at all! it states that it works on Debians (and a single SuSE, maybe)
-
2005-09-18 1:21 pm by Anonymous
That’s not necessarily true, actually. You could create a CMG from a dir containing an autopackage. As the autopackage is distro-independent, the CMG will be also. See? Unfortunately this method hasn’t been tried – yet.
-
2005-09-17 2:48 am by Anonymous
give the ability to the user to download a .cmg file (for example firefox.cmg) mount it, and drag the AppDir on the desktop, and be able to run it by double clicking on the AppDir
Erm, you run it by clicking on the .cmg; I fail to see your problem. Actually it’s doing exactly what you want it to do, but using a file instead of a directory.
-
2005-09-17 6:23 am by Anonymous
It’s a little different: on OSX you download the dmg, mount it, take the app bundle inside and move it where you want.
Here klik:// manages downloading the file and uncompressing it in /tmp, which is maybe not the best place to store apps; also konqueror doesn’t understand what an AppDir is, so you cannot simply drag an AppDir and double-click on it
I love how people forget about the original things…
NEXTSTEP had the *.app folders that osx uses. Other OSes of olde probably used similar stuff, but I cannot think of any offhand.
-
2005-09-16 11:56 pm by BlackJack75
Indeed. Yet we know Jobs founded NeXT and didn’t exactly come there from McDonald’s. You could already drag’n’drop apps on a classic mac a long long time ago.
Of course at that time the application was really just one file, not a folder appearing as a file. But then again I am pretty sure someone can come up with a video from 1962 showing the same process.
-
2005-09-17 5:30 pm by Anonymous
Yes, but how many people ever had the opportunity to use a NeXT? I LOVED the ones that the university I attended had for us to use! When I purchased my Apple system it was déjà vu… except it was in color. 😎
This is less of an “omg not another package management system” issue because it’s not specific to any one distro. If anything it rides on KDE, which will achieve more portability than any (decent) package manager has accomplished, and it doesn’t replace anything.
Dear Kristian Herkild,
it is a pity that people like you start commenting on articles they clearly have not read.
But I assume it was people like you whom Kurt Pfeifle had in mind when he backed up his proposal to the KDE community – about offering .cmg klik files to the non-techie part of their contributors – by writing an extra-simple blog entry:
http://www.kdedevelopers.org/node/1456
Kurt however did not consider that people like you would not read this either. Maybe he relies on people like me who are then going to spell it out for you nevertheless?
-
2005-09-17 7:20 am by Anonymous
Dear anonymous.
You ought to come out with who you are – you’re not going to be killed. And please use my nick instead; the other way looks so formal to me.
I did read the article, and I also read the second one (thank you for that link). It doesn’t change my point of view, and if you think it makes me ignorant, then fine.
I still consider it useless, because there are several other ways to do it (and they are superior in my mind).
But you have another opinion. And that’s perfectly fine. I could say nasty stuff about you, but I’ll leave the nasty stuff for you to play with.
Kind regards,
dylansmrjones
kristian AT herkild DOT dk
-
2005-09-17 11:09 am by cm__
> I did read the article, and I also read the second
> one (thank you for that link). It doesn’t change my
> point of view, and if you think it makes me
> ignorant, then fine.
Not only reading but also understanding 😉
…the article and the suggested target audience (i.e. *not* end users).
> I still consider it useless, because there are
> several other ways to do it (and they are superior
> in my mind).
Would you please let us know these superior ways?
All your arguments up to now in this thread were about
a) stable, released software that
b) can be found in the distros’ repositories
and about end users installing them using smart package managers. Which is fine for them, but you’re completely off topic.
Not a single word on why klik is useless for the purpose the article and the blog entry are about (making life easier for non-coding contributors to OSS who have to see apps before they’re released), or on superior ways to achieve that goal other than that you think they exist.
Note: I’m not that anon person you were replying to.
I see many of you argue against this approach due to the lack of dependency checking/installing. This is very true, but it leads to an interesting and elegant solution that’s even better than the original. Packaging dependencies in with the executable is much nicer than having a sprawling, inter-dependent mess of applications. Most windows programs do this anyway, but still use a dated installer method.
The second benefit to application folders is that it greatly improves package management – instead of complex package managers where you can never be sure what they’re doing, you can use an already familiar interface: file management. Because you control everything manually, you always know exactly what’s going on.
Finally, I’d like to point out that application menus are a huge hack, and really aren’t too nice to use. They’re basically just a work-around for the crappy application management methods currently available.
(I’ve mentioned this before in a previous post of mine in less detail. I’ve thought about it quite a lot, because I’m planning on including it in my current project, which, as some of you might already know, is an OS.)
-bytecoder
-
2005-09-17 6:08 am by butters
Read the parent post for the mostly opposite argument. The OS envisioned by bytecoder seems to be much like that envisioned by KDE 4.0 (or maybe SymphonyOS) with Klik.
In the endgame, application management dominance should theoretically be determined by a basic formula:
P(t) = Po + So*t + E*t^2, where
P(t) is the number of packages available at a given time, t
Po is how many packages there are right now
So is the current size of the contributing community
E is the ease of package contribution
As t->infinity, the only term that matters is E. Just something to ponder…
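Plugging some numbers into that formula makes the point concrete: a small community with easy contribution (high E) eventually overtakes a large one with a high barrier, because the quadratic term dominates. All figures below are made up for illustration.

```python
# The parent post's formula: P(t) = Po + So*t + E*t^2
def packages(t, Po, So, E):
    return Po + So * t + E * t * t

big_hard   = lambda t: packages(t, Po=10000, So=500, E=1)   # big head start, hard to contribute
small_easy = lambda t: packages(t, Po=100,   So=50,  E=20)  # tiny start, easy to contribute

print(big_hard(10)  > small_easy(10))    # early on, the head start wins -> True
print(big_hard(100) < small_easy(100))   # eventually E wins -> True
```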
-
2005-09-17 6:53 pm by Anonymous
Read the parent post for the mostly opposite argument. The OS envisioned by bytecoder seems to be much like that envisioned by KDE 4.0 (or maybe SymphonyOS) with Klik.
Far from it, actually. If anything, it’s more similar to a souped-up BeOS/Mac OS Classic hybrid in the sense that the overall feel will be the same (the interface will actually be quite different).
-bytecoder
It’s about time Linux catches up.
I don’t give a crap about updating all 10,000 applications on my PC (most of which I didn’t choose to install, but was forced to by the package manager.)
I just want to get apps that I want installed easily, and easily removed if I want.
I want a simple GUI app with drag and drop to be able to compile any sourcecode into a single appdirectory file and dropped conveniently on my desktop when ready.
1) Make real LSB-Desktop standard
2) Use APP Folders like OS X, compile all non-LSB stuff as static
3) Users will be VERY happy.
It really isn’t that hard. Just Do It and we could all drop this stupid subject. Apple already showed us what’s The Perfect way to install software, now let’s just follow it.
-
2005-09-17 6:42 am by Anonymous
1) Make real LSB-Desktop standard
That’s where your list falls apart. At the beginning. The only way a Desktop Linux standard will exist is if one desktop distro gets way more popular than the others and forces them to comply.
-
2005-09-17 9:50 pm by jziegler
2) Use APP Folders like OS X, compile all non-LSB stuff as static
Yes, because I want to have a copy of libfoo.so in my RAM for every application that uses it, instead of one copy. I also want to download it again and again with every application instead of downloading it only once.
Please wake up and smell the roses. It’s 2005. It is not necessary to install applications “by hand” anymore. You can tell your computer “please go out, download firefox and thunderbird and install them for me”. On my computer, it translates to “aptitude install mozilla-firefox mozilla-thunderbird”.
-
2005-09-17 9:51 pm
-
2005-09-17 10:06 pm by jziegler
Read above what? Where? Which comment?
Also, it’s always a good idea to think before you speak.
I hope you want to have a civilized discussion, I’m not in the mood for a flame. If you disagree with me, be specific and provide counter arguments. If you have written them in a different thread – sorry, I was responding to this one.
-
2005-09-17 10:13 pm by Anonymous
Read above what? Where? Which comment?
I was referring to my reply to the other post, which was very similar to yours:
This isn’t very relevant if you include commonly used libraries in the base system. Non-commonly used libraries would be rarely needed, and thus wouldn’t be much of a disk-space waster.
-bytecoder
I hope you want to have a civilized discussion, I’m not in the mood for a flame. If you disagree with me, be specific and provide counter arguments. If you have written them in a different thread – sorry, I was responding to this one.
I’m always civilized. Whether other people think that or not is their problem, not mine
-bytecoder
Let me introduce the application paradigm. In this model we designate a common root for applications to install to, for example /Applications or C:\Program Files or /opt. Each application comes with everything it needs, even if it is already provided by another application, and sets up a distinct hierarchy within the application root. Where’s the binary? It’s in there somewhere; hopefully there’s some standard path for the main binary within the application hierarchy, or maybe an included config file says where it is.
Now consider the system paradigm. The applications are made subordinate to the idea of a cohesive system. Items commonly packaged with applications are installed in system-wide directories with similar items from other applications. Binaries, libraries, data, and configurations are grouped together. Where’s the binary? It’s in your PATH, unless the application is installed in the application paradigm instead (i.e. /opt). In this case you rely on the distributor to properly update all user PATHs.
The key to application management, really, is to allow the user to run an installed application without knowing where it is actually installed. With the application paradigm, this can only be accomplished if each application specifically makes sure to update the user PATHs and add the binary link to an appropriate menu. In the system paradigm, the PATH does not need to be changed to accommodate a new application, and the menu can be autogenerated or selectively updated from the binaries in the PATH.
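The menu autogeneration just described can be sketched as a scan over PATH directories for executables. The directories below are temporary fakes standing in for /usr/bin and /usr/local/bin; a real launcher would of course read the actual $PATH.

```python
# Sketch: in the system paradigm a launcher menu can simply be the
# union of executables found in the PATH directories.
import os
import tempfile
from pathlib import Path

def autogenerate_menu(path_dirs):
    entries = set()
    for d in path_dirs:
        d = Path(d)
        if not d.is_dir():
            continue
        for f in d.iterdir():
            if f.is_file() and os.access(f, os.X_OK):
                entries.add(f.name)
    return sorted(entries)

# Fake PATH directories with a few executable stubs:
a = Path(tempfile.mkdtemp())
b = Path(tempfile.mkdtemp())
for d, name in [(a, "opera"), (a, "vim"), (b, "firefox")]:
    p = d / name
    p.write_text("#!/bin/sh\n")
    p.chmod(0o755)

print(autogenerate_menu([a, b]))   # -> ['firefox', 'opera', 'vim']
```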
Other issues in application management involve adding, removing, and updating applications. There are two paradigms for this as well: discovery and repository. In the discovery paradigm, the user is expected to discover new software on his own, via the Internet or word of mouth. Installed software is updated either through a repeat of the discovery process or through web-enabled reminder within the application. In the repository paradigm, the user is made aware of every available application supported by the system, and he can select from these applications when needed. Upgrades can be provided as soon as they are available.
Both application and system paradigms are possible to implement on discovery and repository systems. The tendency is toward application/discovery and system/repository, although there are very notable exceptions (Linspire and RedHat are good examples). The system paradigm usually employs a package manager, which some perceive to be a complex and overbearing idea. The repository system imposes a barrier to application availability. Theoretically, if the complexity and visibility of the package manager, and the barriers to adding packages to the repository, were to become negligible, then the system/repository combination would be the most elegant solution. The application paradigm creates dependency redundancy and puts more burden on the package maintainer. The discovery system makes finding and updating applications more burdensome for the user. Theoretically, if dependency commonality amongst applications, and the similarity of application demand amongst users, were to be negligible, then the application/discovery combination would be the most elegant solution.
Now let’s apply this framework in the context of free software. The goal of free software is to provide the vast majority of users with a usable, productive, and enjoyable computing experience. With regard to application development, free software is inherently based on the principle that the community will work towards the development of a set of applications that accomplishes this goal, and it will do so by leveraging the collaborative effort of the community. Therefore, we are working under the assumption that the conditions that lead to the application/discovery system becoming more elegant are false!! With respect to management tools, free software is inherently based on the principle that the community will work towards the development of simple and effective tools, and that these tools will allow the community to share their work. Therefore, we are working towards creating the conditions that lead to the system/repository system becoming more elegant!!
These are ideals that the free software community should continue to pursue, rather than shun them in favor of easier, but less effective application management ideas. IMHO, the system/repository distribution that best embodies these ideals is Arch Linux, and I look forward to future developments in this style of package management.
-
2005-09-17 8:48 amgrrr
I like your analysis of application vs. system and discovery vs. repository; very clever. But I do not think our ideals of freedom are at all compatible.
“The applications are made subordinate to the idea of a cohesive system”
“on the principle that the community will work towards”
” leveraging the collaborative effort of the community”
“distribution that best embodies these ideals”
I think an application/discovery approach is clearly best for a heterogeneous free software community.
-
2005-09-17 10:59 amMysterMask
Neat theory roundup but you drew the wrong conclusions.
Now consider the system paradigm. The applications are made subordinate to the idea of a cohesive system.
A quick look at a Linux/Unix filesystem (as an example of the system-paradigm species) easily proves that a cohesive system exists only as an idea. Reality looks different, because to get a 'cohesive system' you need at least a fixed standard. However, OSes are works in progress and constantly changing: new ideas make old assumptions of the cohesive system obsolete, undisciplined or uninformed developers will do things in non-standard ways, etc. Therefore, you end up with a mess.
With the application paradigm, this can only be accomplished if each application specifically makes sure to update the user PATHs and add the binary link to an appropriate menu.
You assume that a system needs a PATH to find the binary. However, PATH is a concept which applies only to certain systems. E.g. OSX is aware of (new) apps and memorizes the place where an app is located (likewise, the system knows when you move the app to another location or make a copy). No need for user interaction or special treatment within the app if the OS is "clever enough" (Mac OS did that long before OSX).
As for the menu: in the application paradigm, you don't need a menu, since you can locate apps like documents with the filesystem browser. A menu holding links to apps is handy as a shortcut, but it is not a must. For example, "viewer" apps (apps that do not create documents themselves) can be started by the system as part of opening a document of the type associated with the viewer app, hence no need to have a menu entry for it; just open the document.
The application paradigm creates dependency redundancy and puts more burden on the package maintainer.
* dependency: The contrary is true, since the distribution of an app does not depend on the availability of other apps, libs, etc.
* redundancy: Yes, some code might be redundant on your system (e.g. libs). However, if certain code gets very common, it will normally find its way into the OS over time.
* more burden on the package maintainer: Hardly, since he can distribute a single piece of software with all dependencies the way it was developed, knowing that it will run as expected because the user runs exactly the same code (e.g. installers like InstallAnywhere will bundle everything into a package, including the Java VM, and will run the Java code with the bundled VM no matter whether there is already a VM on the target system; this way, the app maintainer can exclude bugs stemming from using a different VM).
As somebody else pointed out, the system paradigm adds an additional burden to the user, since he needs a special app for software management. He becomes dependent on the software management, has to know how to handle it, and gets a single point of failure. What if your software management system insists on installing the app under /bin but there is not enough space on the drive? Ever had to clean the Windows registry by hand, or had to do similar "emergency routines" on Unix/Linux when an installation went wrong? Horrible!
With the application paradigm, the user can handle apps like any other document on the system. Easy, predictable and in a controlled manner.
Not enough space on the main hard drive? Copy it to the external drive instead and move it back later, when enough space is available.
The discovery system makes finding and updating applications more burdensome for the user.
Yeah, but it adds additional freedom. The repository system works only if you can make sure that software finds its way into centralized repositories. However, centralized repositories create new dependencies. E.g. if Microsoft were to adopt a single repository system: how do you think an MS user would feel having to get all applications through Microsoft?
Now let's apply this framework in the context of free software. The goal of free software is to provide the vast majority of users with a usable, productive, and enjoyable computing experience.
But is there a unified opinion on what a "usable, productive, and enjoyable computing experience" looks like? Sharing a common goal does not mean that everybody agrees on how to accomplish it. There is no such thing as the "one true way" of problem solving, hence you end up with different solutions: different software distribution mechanisms, different repository formats, different UIs, etc. Furthermore, your idea of "software darwinism" and centralized software repositories contradicts the idea of free software (as in "freedom"). Your ideas somehow remind me of communism…
-
2005-09-17 9:57 pmjziegler
* redundancy: Yes, some code might be redundant on your system (e.g. libs). However, if certain code gets very common, it will normally find its way into the OS over time.
That makes sense for OS X or Windows, maybe. On Linux distros a set of basic _packages_ makes up the OS. So in order to have a basic OS, you already make use of packages.
E.g. if Microsoft were to adopt a single repository system: how do you think an MS user would feel having to get all applications through Microsoft?
There are repositories for .deb packages maintained by the Debian project and repositories maintained by other organizations and individuals. Users can utilize both of them. Same for .rpm repositories for Mandriva or Fedora Core. Hence I deduce it would be technically possible to create 3rd party package repositories for Windows as well. No freedom lost.
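For illustration, here is what such a third-party source looks like on a Debian-style system; the repository URL below is a placeholder, not a real archive. This is only a sketch of the mechanism described above:

```
# Hypothetical extra line for /etc/apt/sources.list; after adding it,
# `apt-get update` fetches that repository's package index as well.
deb http://example.org/debian stable main
```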
-
2005-09-18 11:46 amMysterMask
On Linux distros a set of basic _packages_ makes up the OS.
The discussion is not about OS installation but application management. And it’s not forbidden for Linux to install additional libraries as part of the OS core, if the library is widely used.
BTW: An OSX system installation consists of a set of packages, too (e. g. the base system, the BSD subsystem, the X11 package, etc.).
Hence I deduce it would be technically possible to create 3rd party package repositories for Windows as well. No freedom lost.
Yes. But that contradicts the repository paradigm. How does your software management system know where to get software if not from a central repository? You have to configure your management app to get software from elsewhere. And that means "discovering" it first, and voilà: you're back at the good old discovery paradigm.
The MS example was only to illustrate that people probably would not accept such a thing from a commercial OS vendor, so I fail to see why this would be a positive goal for free software. Centralization might make software management easier, but that has a price: not only does it create dependency, it also creates centralized points of failure and insecurity (e.g. if your repository gets hacked).
-
2005-09-18 1:09 pmjziegler
The discussion is not about OS installation but application management. And it’s not forbidden for Linux to install additional libraries as part of the OS core, if the library is widely used.
Probably misunderstood you on this one. Nevermind.
The MS example was only to illustrate that people probably would not accept such a thing from a commercial OS vendor, so I fail to see why this would be a positive goal for free software.
Because it works? It works satisfyingly well for me (and probably for other Debian users as well). OK, it comes with a price: dependency, a single point of failure.
Dependency is a moot point; you are always dependent on your OS vendor at some point.
Single point of failure – not completely true, there are mirrors. Though they will mirror any insecurity created in the original, they will provide the required data even when the original source experiences failure.
I gladly pay this price compared to manually downloading each package, or going back to statically linked apps, or having a library installed numerous times for "self-contained" apps.
I do not claim that the packaging systems used by Linux distributions are the perfect way to distribute software. However, I have not experienced anything better, nor did I like anything presented in the discussion under this article. Everything seems less powerful than what we already have.
Still digging into klik. The more I look into it, the more oddities I find. It manages .cmg files in a way that seems bad to me: why does it have to modify my fstab like this?
/tmp/app/1/image /tmp/app/1 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
/tmp/app/2/image /tmp/app/2 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
/tmp/app/3/image /tmp/app/3 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
/tmp/app/4/image /tmp/app/4 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
/tmp/app/5/image /tmp/app/5 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
/tmp/app/6/image /tmp/app/6 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
/tmp/app/7/image /tmp/app/7 cramfs,iso9660 user,noauto,ro,loop,exec 0 0
Isn't it simpler to let the user open the .cmg (the distributable single file) and then run the app by clicking on the AppDir (instead of clicking on the .cmg, which gets mounted in /tmp/app and launches the app)?
Also, reading the fstab, does this mean I cannot open more than 7 apps?
It’s still a good start btw
When we in the linux community stop copying other OS’s and make a mark, we’ll make news. Until then, who cares?
From
http://klik.atekon.de/docs/?page=Compressed%20Application%2…
Limitation
Since .cmg uses loop to mount the compressed images, the number of applications you can run simultaneously from .cmg files is limited by the number of loop devices. The default in the kernel is 8, but you can increase this by using the max_loop=64 cheatcode and by creating nodes with mknod -m660 /dev/loopXXX (also change AppRun and fstab accordingly).
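As a dry-run sketch of the node-creation step (the 8–63 range is an assumption matching the max_loop=64 example), this only prints the mknod commands, with the block-device major/minor numbers spelled out, so they can be reviewed before running them as root:

```shell
# Loop devices are block devices with major number 7 and minor = index.
# Print the commands for /dev/loop8 .. /dev/loop63; run the output as root.
for i in $(seq 8 63); do
    echo "mknod -m660 /dev/loop$i b 7 $i"
done
```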
-
2005-09-17 9:34 amAnonymous
OK, but don't you believe that this is the wrong thing to be managed? Shouldn't klik (or Konqueror) manage only AppDirs and let the user mount the .cmg images, instead of playing with fstab?
-
2005-09-17 10:35 amcm__
But the end user is not allowed to mount the image unless root adapts /etc/fstab accordingly (*). The klik installation instructions just automate this modification.
(*) I think FUSE will remove this requirement in future Linux kernels if enabled.
-
2005-09-17 1:17 pmAnonymous
I knew this; without FUSE or something similar it's a problem for the user to mount an image. But klik still focuses on the wrong bundle 😀 because it's based on AppDir yet doesn't support it directly: it introduces the (good!) idea of .cmg files (similar to the Mac OS ones) but hides the position of the AppDir, which for the system and the DE is nothing more than a normal directory that contains all of an application's files.
First, the .cmg remains mounted on /tmp while the app runs.
Second, I cannot see and manipulate the AppDir as on Mac OS, where you get the app out of the .dmg image and place it wherever you want.
Third, it's still distro-dependent (I've seen .deb packages while downloading an app), while the pure AppDir implementation seen in ROX supports compilation: the application contained in the AppDir can be automagically compiled by double-clicking on the AppDir for the first time.
I hope no one gets offended by my words. It's a feature that I like very much, but IMHO it's badly implemented because it hides the real technology behind the magic.
For me, AppDir must be directly supported by KDE and GNOME; ROX was born with it, and there is also a patch for bash to support it 🙂
With Klik and HotNewStuff, KDE security has gone down the drain (it is already the most insecure DE on Linux). There's minimal security in these improvised technologies; I smell ActiveX-like problems in KDE in the next few years.
The sad part is that they have been warned about this, and they are just too excited, or too worried about GNOME, to care, with the mentality of "I'll fix it later", just like Microsoft.
-
2005-09-17 4:51 pmcm__
> With Klik and HotNewStuff, KDE security has gone
> down the drain
– HotNewStuff is mostly used to download wallpapers and other non-executable content. Security is discussed, not ignored. GetHotNewStuff is comparable to Firefox's central plugin download offer, except that there the content is always of an executable nature. Is Firefox too insecure for you, too?
– Klik doesn’t necessarily have anything to do with KDE. There are images for GNOME apps, too. It’s not even part of KDE as of now.
But more importantly, if you had read the article before starting to troll you would have known that its suggested use is during development, not for software distribution to the end user.
> (it is already the most insecure
> DE on Linux)
Care to back that up with facts, troll?
> there's minimal security in these
> improvised technologies; I smell ActiveX-like
> problems in KDE in the next few years.
Your post smells of troll dung, as usual.
Security *is* important, but you’re only on your typical anti-KDE trolling spree.
1) The installation of applications is the main reason why I am not using Linux. I never succeed in installing things correctly.
2) How do I use yum, apt-get and so on if I have no internet connection?
3) xxx-setup.exe, click, click, and that's done. And please do not enter into the "dll hell or so" discussion; I never had a problem. The last application I installed (yesterday) was Sokoban YASC; it was done in ~20 seconds, even on a machine without an internet connection.
Having said this, I wonder if the *nix folk are not taking the right direction; see for example PCBSD.
-
2005-09-17 5:59 pmAnonymous
How do I use yum, apt-get and so on if I have no internet connection?
Use it with the distribution's CDs. That is why there are so many: the CDs are full of software that you can choose to install if you wish.
xxx-setup.exe, click, click and that’s done.
Where did you get the xxx-setup.exe file from?
And please do not enter into the "dll hell or so" discussion; I never had a problem.
And I have never had a problem installing software on Linux properly. I could equally say "please do not enter into the 'installation of applications is the main reason why I am not using Linux' discussion", as I have never had a problem.
You cannot have it both ways.
-
2005-09-17 6:37 pmAnonymous
How do I use yum, apt-get and so on if I have no internet connection?
Use it with the distribution's CDs. That is why there are so many: the CDs are full of software that you can choose to install if you wish.
xxx-setup.exe, click, click and that’s done.
Where did you get the xxx-setup.exe file from?
Very simple. I buy a "PC magazine" with a CD containing something like "the 100 best free applications". When I look at the "Linux" magazines, I only find things like Firefox for distro A, or app B for distro C, and so on. I conclude from that that every distro has its own applications. Maybe I am wrong?
-
2005-09-17 6:47 pmAnonymous
My experience is that most software for Linux distributions is available either from the distribution's CDs or by downloading. So if magazine CDs are your main source of software, you will likely not have as much software available to you with Linux as with Windows.
However, since you are posting here, it is clear that you do have internet access. I am perplexed as to why downloading software is a roadblock for you.
-
2005-09-17 6:54 pmMorty
Very simple. I buy a "PC magazine" with a CD containing something like "the 100 best free applications". When I look at the "Linux" magazines, I only find things like Firefox for distro A, or app B for distro C, and so on. I conclude from that that every distro has its own applications. Maybe I am wrong?
You are a little bit right, as you should always use the binaries of applications built for the distribution you are using. So basically every distro has its own variant of the applications, in exactly the same way that you should use the Windows 95 variant for 95 and the XP variant for XP. Sometimes there is only one version, or it only works on some Windows versions, but it's the same principle.
Where you are making a mistake is with the number of applications on the magazine CD. If those applications had been included on the Windows install CD, there would be little point in including them on a magazine CD. That is how it is in the Linux world: most likely the application is already included on your distribution's install CD, making it pointless to have it on the mag's CD, as the users already have it.
-
2005-09-17 7:22 pmAnonymous
To be honest, I will reply. That's true, I now have an ADSL connection. But two months ago, with a dial-up connection, it was a great pain. (There are still a lot of people with only this type of connection.)
I should say, some "Win CD magazines" are real jewels, e.g. the OpenCD. From the magazines I discovered Python, Ruby, Haskell…
Nevertheless, I prefer to spend my limited spare time playing with the above languages rather than trying to install them on a Linux platform. For the record, even on my Windows platform I attempted to play with Lua. Unfortunately, it was available only as source code, no prebuilt binaries for Windows, so I had to find my own way.
-
2005-09-17 8:25 pmAnonymous
I should say, some "Win CD magazines" are real jewels, e.g. the OpenCD. From the magazines I discovered Python, Ruby, Haskell…
Similarly, this is what, for me, is great about many Linux distributions: they come packed with so much software. With the CDs, one can discover so much software, just like the titles you mention, without having to download anything, including many programming languages like Python (my favourite), Ruby, PHP, Perl and many, many more.
The usual .cmg file generated by klik is a cramfs image. Files in cramfs are limited to 16 MB, and seven loop-mount slots are provided for downloaded apps. Zisofs is also available, according to the developer.
The klik system will permit Windows apps on Linux as well. One item of note is that the downloads are stored in compressed format (around 50%), providing quick access to the program material in the download file.
The user can create his own cramfs apps, should he have the ability and the cramfs system tools.
The Klik system allows the user to utilize an application at will and then remove it from his system, and call it back later.
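A rough sketch of that do-it-yourself route; the AppDir layout and names here are invented, mkfs.cramfs comes from the cramfs tools, and the mount step needs root (or fstab entries like the ones quoted above):

```shell
# Build a toy AppDir with the AppRun entry point that klik-style bundles use.
mkdir -p MyApp.AppDir/bin
printf '#!/bin/sh\nexec "$(dirname "$0")/bin/myapp" "$@"\n' > MyApp.AppDir/AppRun
chmod +x MyApp.AppDir/AppRun

# Compress the directory into a single .cmg-style cramfs image, then
# loop-mount it to run the app (both steps commented out: they need the
# cramfs tools and root privileges respectively):
#   mkfs.cramfs MyApp.AppDir myapp.cmg
#   mount -o loop -t cramfs myapp.cmg /tmp/app/1
```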
My arguments are valid and Im not trolling, if you like a KDE developer got offended for something like this to bad for you, we know who you are already and how propense you are of calling Trolls to averyone who dissagree with something related to KDE, I my self won’t lose my sleep about it , the time will tell and prove my points.
-
2005-09-17 5:42 pmcm__
> I my self won’t lose my sleep about it , the time will tell and prove my points.
It may be news to you but it’s up to you to back up your points in a discussion if you want anyone to take you seriously. Just making unfounded claims doesn’t do.
And no, saying “my arguments are valid” doesn’t cut it when in fact you don’t have any. You are very funny.
I don’t know what “propense” is supposed to mean but I’m only calling *you* a troll because you exhibit that trollish behaviour of jumping each and every remotely KDE-related thread and spreading FUD.
That’s hardly “averyone [sic!] who disagree[s] with something related to KDE”. There are many other posts in this forum I happen to disagree with or that disagree with something KDE-related but I haven’t called anyone else here a troll. Not in months. It’s just that you are so obnoxious.
Or would you rather like me to call you anti-KDE zealot? Would maybe be a better term… yes, I think so.
I bestow upon thee the title of “Chief anti-KDE zealot”. How does that sound?
-
2005-09-17 5:48 pmYuske
And again, I will not lose my sleep about, I know what KDE developer you are and that makes it easier to not listen to you.
-
2005-09-17 6:18 pmcm__
> And again, I will not lose my sleep about, I know
> what KDE developer you are and that makes it easier
> to not listen to you.
And again you refuse to address any of the arguments I brought up against your FUD.
But it’s ok, I didn’t expect anything else from you. Sleep on. Good night.
BTW: I’d be proud to be called a KDE developer but I don’t claim to be one.
-
2005-09-17 6:20 pmYuske
And again, I will not lose my sleep about it, I know
what KDE developer you are and that makes it easier
to not listen to you
-
2005-09-17 6:07 pmYuske
And again, I will not lose my sleep about, I know what KDE developer you are and that makes it easier to not listen to you.,
“HotNewStuff is mostly used to download wallpapers and other non-executable content.”
This simply shows how uninformed you are: with HotNewStuff, the data to download depends on the application that needs it. If Amarok needs scripts, then it will ask for scripts (not only CD covers), and scripts may be executable or malicious code. This is only a small example, but again it is not my problem; warnings have already been given, I won't go deeply into this.
-
2005-09-17 5:46 pmcm__
> This simply shows how uninformed you are: with
> HotNewStuff, the data to download depends on the
> application that needs it. If Amarok needs scripts,
> then it will ask for scripts
– You should really work on your reading comprehension. That’s exactly the reason why I said “mostly”.
– These scripts come from an amarok host, not from some arbitrary web site like ActiveX
– You haven’t explained yet why this is different from downloading firefox extensions.
I think this is cool for new Linux users. I know I'll definitely be using it. Most people I know (I only know Windows users, btw) usually download, say, the Firefox installer and keep it in a downloads folder or something similar. So if they ever format, or want to use it on another computer, etc., there is no need to redownload it and waste bandwidth.
For me, I always choose apps I don't need to install. For example, I can format my Windows drive, and as soon as I've reinstalled I can use Firefox, Trillian, etc. without downloading or installing; I just run them.
I think klik would be similar, not to mention it adds some tidiness to the Linux folder structure, if all your apps can be found in one folder instead of you hunting for them.
Well, it looks cool; the site is easy to navigate and everything. I don't see how it is in the style of OSX, though.
I don't see how it is in the style of OSX, though.
As in drag-and-drop style installing. The application is statically linked: all the libraries it depends on are compiled into the application, so it doesn't depend on system-installed libraries. This is to cure the so-called "dependency hell".
The downside is that statically compiled applications are much bigger in file size. Nowadays this shouldn't be a problem, but it also makes updating more cumbersome: instead of updating a single dynamic library (zlib, for example, used in many C apps), you have to download updates for every application, if you want to keep up with security updates, that is.
I use OSX and really like the drag & drop but i still prefer the debian way.
Not only are they bigger on disk (no one cares) and on the network (some might care, plus compression helps); statically linked apps are bigger in RAM as well, and that still hurts.
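One way to see the dynamic side of this trade-off, assuming a Linux system with the ldd tool (/bin/sh is just a convenient dynamically linked binary to inspect):

```shell
# List the shared objects /bin/sh maps at runtime. The read-only pages of
# each listed .so are shared by every process that uses it; a statically
# linked build would carry its own private copy of that code in RAM.
ldd /bin/sh
```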
I too prefer using a system which knows which files to download to searching for and downloading the files one by one.
On the other hand, some of the proposed uses are good: installing without having root, and easy installation for non-technical types (translators, UI designers). For these reasons, I wish them luck and success with this project, though I probably won't use it.
Not only are they bigger on disk (no one cares) and on the network (some might care, plus compression helps); statically linked apps are bigger in RAM as well, and that still hurts.
This isn’t very relevant if you include commonly used libraries in the base system. Non-commonly used libraries would be rarely needed, and thus wouldn’t be much of a disk-space waster.
-bytecoder
First, I’m not talking about disk-space waste.
Second, how do you define “commonly used”? Do for example both Gnome and KDE libraries belong there? What about a distro like Ubuntu, where they only care about Gnome? KDE libs would not be included in the “core”, hence statically linked to each KDE app – I hope you can see how much that would suck. OK, so you include KDE libs in the “core”. What else do you include? What do you exclude? Who decides that? Based on which information?
Too many question for something, that has been already solved by package managers, dependencies and dynamic linking.
IMHO, static linking would be not one, but five steps backwards.
First, I’m not talking about disk-space waste.
size in general
Second, how do you define “commonly used”? Do for example both Gnome and KDE libraries belong there? What about a distro like Ubuntu, where they only care about Gnome? KDE libs would not be included in the “core”, hence statically linked to each KDE app – I hope you can see how much that would suck. OK, so you include KDE libs in the “core”. What else do you include? What do you exclude? Who decides that? Based on which information?
I’ve seen this thinking a lot, lately. I’m talking abstractly; if the current linux model doesn’t fit, then it’s not very good. Doing anything else would just plaster over the real problem.
Too many question for something, that has been already solved by package managers, dependencies and dynamic linking.
You keep thinking that…
-bytecoder
I’ve seen this thinking a lot, lately. I’m talking abstractly; if the current linux model doesn’t fit, then it’s not very good. Doing anything else would just plaster over the real problem.
I’m used to the Linux model. I used it for a long time. Maybe that makes me biased, but I believe it works rather well for me. Better than the Windows model I have experienced before.
I’m curious about your model.
You keep thinking that…
Nice to see some arguments
I’m used to the Linux model. I used it for a long time. Maybe that makes me biased, but I believe it works rather well for me. Better than the Windows model I have experienced before.
I’m curious about your model.
Again, I'm talking abstractly. For example, the use of application menus is basically a band-aid that covers up the lack of a decent fs layout.
Nice to see some arguments
Heh. You’re a funny guy, that actually made me laugh a bit.
-bytecoder
For example, the use of application menus is basically a band-aid that covers up the lack of a decent fs layout.
You are getting it backwards; it is an illusion that changing the filesystem layout will make any difference. The most you can hope to achieve is making it mimic the layout of the application menu, and that's rather pointless as you already have the menu.
The core of the matter is, users don't care where the applications are located; they only want a quick and easy way to start them. The other things users care about are speed and efficient use of disk and memory space, or as they see it, room to save their stuff and the ability to do lots simultaneously. The applications menu gives the user the first; the fs layout helps deliver the others.
You are getting it backwards; it is an illusion that changing the filesystem layout will make any difference. The most you can hope to achieve is making it mimic the layout of the application menu, and that's rather pointless as you already have the menu.
Well, let’s take a look at this for a second. First, let’s start out by listing what the user could possibly want to do with an application:
1) install
2) execute
3) move/copy
4) uninstall
Based on these characteristics, it's obvious that an application manager has to have these features to be complete. Because all of these features can be provided by the filesystem, it would make sense to just use that instead. Now, if we take a look at an application menu, we see that it only implements #2, which is most definitely not satisfactory. Having to use separate interfaces for these intrinsically related operations would be horribly inelegant and unintuitive.
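The four operations can be sketched with a toy AppDir (all names here are hypothetical); each one is an ordinary file operation:

```shell
# 1) "install": the app is a directory, so installing it is a copy.
mkdir -p Demo.AppDir
printf '#!/bin/sh\necho "hello from Demo"\n' > Demo.AppDir/AppRun
chmod +x Demo.AppDir/AppRun
cp -r Demo.AppDir /tmp/Demo.AppDir

# 2) "execute": run the directory's entry point.
/tmp/Demo.AppDir/AppRun        # prints "hello from Demo"

# 3) "move/copy": a plain mv; the app's files travel together.
mv /tmp/Demo.AppDir /tmp/Demo-moved.AppDir

# 4) "uninstall": deleting the directory removes the whole app.
rm -rf /tmp/Demo-moved.AppDir Demo.AppDir
```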
The core of the matter is, users don't care where the applications are located; they only want a quick and easy way to start them. The other things users care about are speed and efficient use of disk and memory space, or as they see it, room to save their stuff and the ability to do lots simultaneously. The applications menu gives the user the first; the fs layout helps deliver the others.
Nah. Users don’t usually care where it’s located, but they do if, for example, they want to copy it to another computer along with all their settings.
-bytecoder
You're not thinking about this the right way. Only small, infrequently used libraries should also come in the appdir. Things like Qt or GTK should be optional add-on components to the base system.
The "Base" system would contain a subset of packages that are common to pretty much everything. On top of that you can add applications that depend only on the base system. Additional components, such as GTK, Qt, SDL, CUPS, or Perl, could be added if desired, and would allow you to run applications that depend on them. The idea would be to keep the number of installable components necessary to support any given app small.
For instance, let's say I'm running a distribution that is KDE-centric. The components installed should be Base, Qt, and KDE, so apps written for KDE should only depend on those components. If an app requires SomeUncommonLibFoo, then it should be included in the appdir. And of course, if you want to have some GTK apps, like The GIMP, you can install the GTK component. Likewise, if you're using a GNOME desktop you could install the Qt component to be able to run Qt-dependent apps.
At least, that's my idea of how it should work.
You're not thinking about this the right way. Only small, infrequently used libraries should also come in the appdir. Things like Qt or GTK should be optional add-on components to the base system.
The "Base" system would contain a subset of packages that are common to pretty much everything. On top of that you can add applications that depend only on the base system. Additional components, such as GTK, Qt, SDL, CUPS, or Perl, could be added if desired, and would allow you to run applications that depend on them. The idea would be to keep the number of installable components necessary to support any given app small.
For instance, let's say I'm running a distribution that is KDE-centric. The components installed should be Base, Qt, and KDE, so apps written for KDE should only depend on those components. If an app requires SomeUncommonLibFoo, then it should be included in the appdir. And of course, if you want to have some GTK apps, like The GIMP, you can install the GTK component. Likewise, if you're using a GNOME desktop you could install the Qt component to be able to run Qt-dependent apps.
At least, that's my idea of how it should work.
Well, the problem with that is you introduce the dependency problem again. The simplest solution, by far, would be to just include the dependencies with the application. Including commonly used libraries with the base system would cut down 99% of the cases that would require outside dependencies.
-bytecoder
I always said that installing applications on Linux is way too complex. A person new to Linux (who doesn't know the filesystem) thinks: I just installed this package, but now how do I launch the app?
(Happened to me years ago)
OS X-style applications on Linux would be perfect.
apt-get install foo
yum install foo
emerge foo
Installing programs on Linux is neither hard nor complex; it’s different, which is hardly the same thing.
> apt-get install foo
> yum install foo
> emerge foo
> Installing programs on Linux is neither hard nor complex
$ apt-get install foo-snapshot2005-09-17
Reading package lists… Done
Building dependency tree… Done
E: Couldn’t find package foo-snapshot2005-09-17
Or:
$ apt-get install cool-commercial-third-party-app
Reading package lists… Done
Building dependency tree… Done
E: Couldn’t find package cool-commercial-third-party-app
And now?
(But you missed the point, and so did the original poster. The suggested main use of klik is providing packages for multiple distros *before* they enter the respective distros’ repositories or general availability, especially during development. And all this without the software developers having to cope with each distro’s peculiarities. This is *not* supposed to replace the distro’s package manager.)
$ apt-get install foo-snapshot2005-09-17
Reading package lists… Done
Building dependency tree… Done
E: Couldn’t find package foo-snapshot2005-09-17
I’ll take this as meaning you haven’t heard of Synaptic or SmartPM.
You generally don’t use the CLI to install applications via yum or apt-get. You use a GUI for that.
They handle situations with missing dependencies; SmartPM in particular is good at this.
The suggested main use of klik is providing packages for multiple distros *before* they enter the respective distros’ repositories or general availability, especially during development.
Why make it easy to install an application which is clearly a development version? You’re not supposed to install such software unless you know exactly what you’re doing, and in that case you know how to compile and install it from CVS.
End users shouldn’t use it until it’s marked as stable and has entered the repository. Therefore, klik is not useful.
dylansmrjones
kristian AT herkild DOT dk
> I’ll take this as meaning you haven’t heard of Synaptic or SmartPM.
How does any of this software help you if the package does not exist at all?
> Why make it easy to install an application which is clearly a development version? You’re not supposed to install such software unless you know exactly what you’re doing.
I *really* suggest you read the article before commenting. It states very clearly that there are many non-software-developers involved in the making of a program: artists, translators, usability people, bug reporters (“can you try the current SVN version to verify your problem is fixed?”). Many of them had a hard time installing the CVS or SVN versions of the software they were contributing to.
I’ve always said that installing applications on Linux is way too complex. A person new to Linux (who doesn’t know the filesystem) thinks, “I just installed this package, but now how do I launch the app?”
(Happened to me years ago.)
It’s not complex at all. You’re copying some files from one place to another, just like on Windows and every other OS. Only the FHS is different.
Anyway, GNU/Linux has had one-click installation for years, and the result of the installation can be found in the main menu (the equivalent of the Start menu in Windows).
Besides that, klik and the Mac OS X system are hampered by serious drawbacks.
What those are can be read here: http://autopackage.org/ui-vision.html
The most powerful installation system is self-extracting archives, possibly combined with installation scripts (where necessary). No more, no less.
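For the curious, a self-extracting archive is internally just an extractor stub with a tar stream appended after a marker; tools like makeself automate exactly this. A minimal sketch (all file names here are invented for illustration):

```shell
#!/bin/sh
# Sketch of a self-extracting archive (the makeself pattern;
# file names are invented for illustration).
mkdir -p /tmp/payload
echo "hello" > /tmp/payload/readme.txt

# Build step: write a tiny extractor stub, then append a tar stream.
cat > /tmp/installer.sh <<'EOF'
#!/bin/sh
DEST="${1:-/tmp/extracted}"
mkdir -p "$DEST"
# Everything after the __ARCHIVE__ line is tar data; unpack it.
start=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$start" "$0" | tar -x -C "$DEST"
exit 0
__ARCHIVE__
EOF
tar -c -C /tmp payload >> /tmp/installer.sh
chmod +x /tmp/installer.sh

# For the user it is one file to download and one command to run:
/tmp/installer.sh /tmp/extracted
```

The `exit 0` before the marker is what keeps the shell from ever trying to interpret the binary archive data as script text.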
dylansmrjones
kristian AT herkild DOT dk
DISCLAIMER: I’m a geek.
Interesting page. It’s nice to see that people really are establishing working theories for getting packages to install flawlessly on Linux. I look forward to seeing this become more widespread.
However, I think one of the biggest problems with Linux (Unix) is the file structure. It’s way too complex. Once an app has been installed, you can only pray that your installation log/db doesn’t get corrupted, or you’ll never be able to roll back. How about a simple folder called “applications” where you could drag your end-user apps and they would just work? No, it has to be a package that puts the binary in one folder, the docs elsewhere, some additional libs in another place, the icons in /usr/share/icons/cryptic2lettername and of course the title image in /usr/share/pixmaps/what/is/this/already.
Yes, I know: shared libraries, shared icons. Every app can access everything. But the problem is that this is not “everything”… it’s just “anything”. From an end-user perspective the correct term is “a mess”. It’s all organized in a way that suits perfectly computers and the people who think they think like them.
The OSX approach may have “serious drawbacks” but it just happens to work. I never had to care about “installing” OSX apps. On Linux this has always represented a good amount of the time I spend when I want to try a new app. Just trying to get it installed. Finding the right package, converting from one of the dozen package formats out there. And then crying because you also have to convert the associated libs but it will fail and so on…
On Windows, OS X, BeOS, Amiga OS, or whatever OS I’ve tried, if you wish to test something quick and fast you just download it, uncompress and run it and that’s it.
Linux… half the time configuring drivers, the other half trying to get simple applications to just freaking get installed. I remember trying to find the best IM client on Linux… “nightmare” is too small a word to describe what it takes to test six programs that do the same thing (that was on Mandrake 8).
Again, it’s fine if you stick to what’s available to you. If you run Debian stable, you know what you can get, and when you see an app mentioned somewhere that looks nice, you just wait patiently 20 years for it to make it to the stable tree.
(end of rant)
On Windows, OS X, BeOS, Amiga OS, or whatever OS I’ve tried, if you wish to test something quick and fast you just download it, uncompress and run it and that’s it.
Linux… half the time configuring drivers, the other half trying to get simple applications to just freaking get installed. I remember trying to find the best IM client on Linux… “nightmare” is too small a word to describe what it takes to test six programs that do the same thing (that was on Mandrake 8).
I find this comparison misleading (perhaps unintentionally). It is a comparison of installing software that is repeated so often, but it is in essence false in that it presents (again, perhaps unintentionally) a universal statement about installing software on the various operating systems. In fact, on Windows, for example, not ALL software is installed by “just download it, uncompress and run it and that’s it”. Maybe most, but not all. Similarly on Linux, not ALL software is installed by “half the time configuring drivers, the other half trying to get simple applications to just freaking get installed”. In fact, on most Linux distributions this is by far the exception. Most software is installed using a package manager, and most software ends up installed “flawlessly”. Even downloading, for example, an RPM of Adobe Acrobat or Opera simply requires “just download it, uncompress and run it and that’s it” on Mandriva, and I suspect it is similar on many other distributions.
This is not to suggest that there is no difference in installing software on Linux compared to other operating systems, just that it is so often stated that on Linux one has to execute so many steps to install software. As a general statement, that is simply not true.
However, I think one of the biggest problems with Linux (Unix) is the file structure. It’s way too complex. Once an app has been installed, you can only pray that your installation log/db doesn’t get corrupted, or you’ll never be able to roll back. How about a simple folder called “applications” where you could drag your end-user apps and they would just work? No, it has to be a package that puts the binary in one folder, the docs elsewhere, some additional libs in another place, the icons in /usr/share/icons/cryptic2lettername and of course the title image in /usr/share/pixmaps/what/is/this/already.
You are right about praying for the install DB to stay correct. That’s what backups are for. In the four years I’ve been running Debian (currently on four machines), I have never had the package DB go FUBAR.
Let me offer you some other perspective on the Unix file structure.
* short search path for executables
* short search path for libraries
* short search path for manual pages (yes, people still use them)
* easy separation of what needs to be backed up and what not (/etc, /var, /home) – other stuff can be re-installed. If apps kept their settings in their own subdirectories, backing up configuration files would be much harder.
* similarly for mounting read-only. You can have (and it was often done in the past, when disks were expensive) the whole /usr tree mounted read-only from an NFS server.
* For me, the technical reasons are more important than having a “nice” /apps/ directory. I don’t need to know off the top of my head which files belong to which application. The application itself knows, and so does the package DB. I rarely need this information, and when I do, I ask the package DB.
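Asking the package DB takes one command each way. A small sketch, using dpkg since the poster runs Debian (`rpm -ql` / `rpm -qf` are the RPM-side equivalents); the guard just skips it quietly on non-dpkg systems:

```shell
#!/bin/sh
# Two typical package-DB lookups: which files does a package own,
# and which package owns a given file? (dpkg shown; rpm -ql / rpm -qf
# are the equivalents on RPM-based distros.)
if command -v dpkg >/dev/null 2>&1; then
    dpkg -L dpkg | head -n 3      # list files installed by the "dpkg" package
    dpkg -S /usr/bin/dpkg         # reverse lookup: which package owns this file?
fi
```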
The OSX approach may have “serious drawbacks” but it just happens to work. I never had to care about “installing” OSX apps. On Linux this has always represented a good amount of the time I spend when I want to try a new app. Just trying to get it installed. Finding the right package, converting from one of the dozen package formats out there. And then crying because you also have to convert the associated libs but it will fail and so on…
I prefer typing “aptitude install mozilla-firefox mozilla-thunderbird gaim xvncviewer gvim ssh” to “go to the Mozilla site, download firefox.exe, save it somewhere, run it, click through 5 screens of installer, download thunderbird.exe, …”. Drag & drop on the Mac only saves part of these mundane tasks – you still have to save each bundle somewhere and drag & drop it, one by one (at least the finding on the web and the saving).
Even when compiling from source – if you read the documentation, it usually lists which libs you require. My rule of thumb is to install as many libs as possible from my distribution’s package repository and compile only the bare minimum.
Linux… half the time configuring drivers, the other half trying to get simple applications to just freaking get installed. I remember trying to find the best IM client on linux… nightmare is a small world to describe what it requires to test 6 programs that to the same thing (that was on a mandrake 8).
Not true. I only deal with drivers when I get new hardware. And mostly, if I use a distribution kernel, there’s nothing to do but use the hardware.
What difference is there in trying out six programs on Windows and on Linux? From my (limited) experience with Windows, at least one of the programs would leave stuff around even after uninstallation.
If you run Debian stable, you know what you can get, and when you see an app mentionned somewhere that looks nice, you just wait patiently 20 years for it to make it to the stable tree.
You can still use distributions that have more up-to-date packages (Debian/Sid, Ubuntu, Mandriva, Fedora Core, …). OTOH, I would not put an “ever-changing” distro on a server or on a desktop I don’t wish to spend much time taking care of.
* easy separation of what needs to be backed up and what not (/etc, /var, /home) – other stuff can be re-installed. If apps kept their settings in their own subdirectories, backing up configuration files would be much harder.
* similarly for mounting read-only. You can have (and it was often done in the past, when disks were expensive) the whole /usr tree mounted read-only from an NFS server.
* For me, the technical reasons are more important than having a “nice” /apps/ directory. I don’t need to know off the top of my head which files belong to which application. The application itself knows, and so does the package DB. I rarely need this information, and when I do, I ask the package DB.
http://www.gobolinux.org/index.php?lang=en_US&page=doc/articles/clu…
Read the “There is a reason why things are the way they are” section. Basically, everything you listed as a technical superiority is moot, and quite a bit of a hack as well.
I prefer typing “aptitude install mozilla-firefox mozilla-thunderbird gaim xvncviewer gvim ssh” to “go to the Mozilla site, download firefox.exe, save it somewhere, run it, click through 5 screens of installer, download thunderbird.exe, …”. Drag & drop on the Mac only saves part of these mundane tasks – you still have to save each bundle somewhere and drag & drop it, one by one (at least the finding on the web and the saving).
Installing applications isn’t something that usually occurs frequently. Most of the time, when someone wants to install something, they also want to know more about it, which means the trip to the website is necessary anyway. Upgrading applications is usually only done when the user actually cares about the new features; otherwise there’s no point in doing it. I’d say most people don’t care about 90% of the apps on their hard drive, maybe more.
What difference is there in trying out 6 programs on Windows and on Linux? From my experience (limited) with Windows, at least one of the programs would leave stuff around even after uninstallation.
I don’t see how this is pertinent to the discussion at all.
-bytecoder
Installing applications isn’t something that usually occurs frequently. Most of the time, when someone wants to install something, they also want to know more about it, which means the trip to the website is necessary anyway. Upgrading applications is usually only done when the user actually cares about the new features; otherwise there’s no point in doing it.
I might be installing a second or third computer. Or my friend’s computer. Sure, I do NOT need to read about Firefox the third time. I upgrade either for new features or for security updates. The latter happens quite often, unfortunately.
The point I tried to make is that typing “aptitude install app1 app2 … app10” scales better with the number of apps to be installed than “download 1, save, install, download 2, save, install, … download 10, save, install”. With upgrading it is even better – “aptitude update ; aptitude upgrade” is constant in the number of apps to be upgraded.
Why should I remember where the Firefox installer lives on the web when my computer can do it for me?
I’d say most people don’t care about 90% of the apps on their hard drive, maybe more.
Well, then I’m in the 10%. Maybe I use my computer differently from the 90% of people you had in mind. How _I_ use it forms my opinions on what is good and what is bad.
I might be installing a second or third computer. Or my friend’s computer. Sure, I do NOT need to read about Firefox the third time. I upgrade either for new features or for security updates. The latter happens quite often, unfortunately.
Well, most people like to read about what they’re installing or upgrading before they do it. Even if you don’t, it’s still only a two-step process: retrieve the app and drag it to the app folder. Sure, you need to repeat it for every package you install, but that usually doesn’t matter, because different apps usually have different release schedules.
The point I tried to make is that typing “aptitude install app1 app2 … app10” scales better with the number of apps to be installed than “download 1, save, install, download 2, save, install, … download 10, save, install”. With upgrading it is even better – “aptitude update ; aptitude upgrade” is constant in the number of apps to be upgraded.
I can’t disagree; it does scale much better. For the very few instances when this is actually needed, it wouldn’t be too hard to write a simple script to update the programs that you want. This is another advantage of doing it manually: you know how everything works and, if something breaks, you know exactly what to do. This type of system presents a much more familiar, and thus easier and more powerful, abstraction to the user compared to programs that “automate” the process.
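Such a hand-rolled updater could be as small as the sketch below. The URLs are placeholders (not real download locations), and the actual download/unpack steps are left commented out; the point is only the shape of the loop:

```shell
#!/bin/sh
# Sketch of an "upgrade only the apps I care about" script.
# URLs are placeholders; the download/unpack steps are commented out.
APPS_DIR="${HOME:-/tmp}/apps"
mkdir -p "$APPS_DIR"
fetched=""
for url in \
    "http://example.org/downloads/firefox.tar.gz" \
    "http://example.org/downloads/gaim.tar.gz"
do
    name=$(basename "$url" .tar.gz)
    echo "fetching $name ..."
    # wget -q -O "/tmp/$name.tar.gz" "$url"         # download the app
    # tar -xzf "/tmp/$name.tar.gz" -C "$APPS_DIR"   # unpack into ~/apps
    fetched="$fetched $name"
done
echo "done:$fetched"
```

Editing the one list of URLs is then the whole maintenance burden, which is the trade-off the poster describes: less automation, but you know exactly what the script does when something breaks.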
Well, then I’m in the 10%. Maybe I use my computer differently than 90% of the people you had in mind. How _I_ use it forms my opinions on what is good and what is bad.
That’s unlikely. When was the last time you cared about the version of cat? How about zip, or bison? The applications that people want to upgrade are almost always the ones they use and care about, which isn’t very many when you think about it.
-bytecoder
Besides that, klik and the Mac OS X system are hampered by serious drawbacks. What those are can be read here: