Have you ever wondered why installing software on other operating systems such as Windows, MacOS or even BeOS is so easy compared to Linux? On those OSes you can simply download and decompress a file, or run an installer that walks you through the process.
This doesn’t happen in Linux, where there are only two standard ways to install software: compiling from source and installing packages. Both methods can be inconsistent and complicated for new users, but I am not going to write about them, as that has been done in countless previous articles. Instead I am going to focus on why it is difficult for developers to provide a simpler way.
So, why can’t we install and distribute programs on Linux with the same ease as on other operating systems? The answer lies in the Unix filesystem layout, which Linux distros follow strictly for the sake of compatibility. This layout was always aimed at multi-user environments, where resources are shared and distributed evenly across the system (or even across a LAN). But with today’s technology and the arrival of desktop computers, many of these ideas no longer make much sense in that context.
There are four fundamental aspects that, I think, make distributing binaries on Linux so hard. I am not a native English speaker, so I apologize for possible mistakes.
1-Distribution by physical place
2-“Global installs”, or “Dependency Hell vs Dll hell”
3-Current DIR is not in PATH.
4-No file metadata.
1-Distribution by physical place
In a Unix filesystem, directories often contain the following subdirectories:
lib/ – containing shared libraries
bin/ – containing binary/scripted executables
sbin/ – containing executables only meant for the superuser
If you search around the filesystem, you will find several places where this pattern repeats, for example:
/
/usr
/usr/local
/usr/X11R6
You might wonder why files are distributed like this. It is mainly for historical reasons: “/” lived on a startup disk or ROM; “/usr” was a mount point for global extras, originally loaded from tape, a shared disk or even the network; /usr/local was for locally installed software. I don’t know about X11R6, but it probably has its own directory because it’s too big.
It should be noted that until very recently, Unixes were deployed for very specific tasks and were never meant to be loaded with as many programs as a desktop computer is. This is why we don’t see directories organized by usage, as we do in other Unix-like OSes (mainly BeOS and OSX); instead we see them organized by physical place (something desktop computers no longer care about, since nearly all of them are self-contained).
Many years ago, big Unix vendors such as SGI and Sun decided to address this problem by creating the /opt directory. The /opt directory was supposed to contain the actual programs with their data, while shared data (such as libs or binaries) was exported to the root filesystem (in /usr) by creating symlinks.
This also made the task of removing a program easier, since you simply had to remove the program dir and then run a script to remove the invalid symlinks. This approach never became popular enough in Linux distributions, and it still doesn’t address the problem of bundled libraries.
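To make that concrete, here is a rough sketch of how an /opt-style install and removal could look (the program name and paths are made up for illustration, and the broken-symlink cleanup relies on GNU find):

# install: keep the program self-contained under /opt, export its entry points via symlinks
cp -r myapp-1.0 /opt/myapp
ln -s /opt/myapp/bin/myapp /usr/bin/myapp
ln -s /opt/myapp/lib/libmyapp.so.1 /usr/lib/libmyapp.so.1
# removal: delete the program dir, then clean up the now-dangling symlinks
rm -rf /opt/myapp
find /usr/bin /usr/lib -xtype l -delete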
Because the /opt approach never caught on, all installs need to be global, which takes us to the next issue.
2-“Global installs”, or “Dependency Hell vs Dll hell”
Because of the previous issue, all popular distribution methods (both binary packages and source) force users to install software globally on the system, available to all accounts. With this approach, all binaries go to common places (/usr/bin, /usr/lib, etc). At first this may look reasonable, even the right approach, with advantages such as maximized use of shared libraries and simplicity of organization. But then we hit its limits: all programs are forced to use the same exact set of libraries.
Because of this it also becomes impossible for developers to simply bundle needed libraries with a binary release, so we are forced to ask users to install the missing libraries themselves. This is called dependency hell, and it happens when a user downloads a program (source, package or shared binary) and is told that more libraries are needed for the program to run.
Although the shared library system in Linux is even more complete than the Windows one (multiple library versions are supported, libraries are pre-cached on load, and files are not locked while in use), the filesystem layout does not let us distribute binaries bundled with the libraries we developed against, libraries the user probably won’t have.
A dirty trick is to bundle the libraries inside the executable itself (this is called “static linking”), but this approach has several drawbacks, such as increased memory usage per program instance, more complex error tracing, and in many cases license limitations, so this method is usually discouraged.
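For the record, the difference is just a linker choice; a minimal sketch (the library and file names here are placeholders, not anything from the article):

gcc -o myapp main.o -lfoo                 # dynamic: myapp will look for libfoo.so.1 at run time
gcc -o myapp main.o /usr/lib/libfoo.a     # static: libfoo’s code is copied into myapp itself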
To conclude this item: it is hard for developers to ship binary bundles with specific versions of a library. Remember that not all libraries need to be bundled, only the rare ones that a user is not expected to have. Most widely used libraries such as libc, libz or even GTK or Qt can remain system-wide.
Many would point out that this approach leads to the so-called DLL hell that is so common on Windows. But DLL hell actually happened because programs bundling core system-wide Windows libraries overwrote the installed ones with older versions. This happened partly because Windows not only doesn’t support multiple versions of a library in the way Unix does, but also because at boot time the kernel can only load libraries with 8.3 file names (you can’t really have one called libgtk-1.2.so.0.9.1). As a side note, and because of that, since Windows 2000 Microsoft keeps a directory with copies of the newest available versions of the libraries, in case a program overwrites them. In short, DLL hell can simply be attributed to the lack of a proper library versioning system.
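For contrast, this is roughly how the Unix versioning scheme keeps several versions side by side: a binary records a versioned soname rather than a bare file name, and the dynamic loader resolves it through per-version symlinks (the binary name and version numbers below are invented for illustration):

readelf -d myapp | grep NEEDED      # prints e.g.  Shared library: [libgtk-1.2.so.0]
ls -l /usr/lib/libgtk-1.2.so.0      # -> libgtk-1.2.so.0.9.1, while an older libgtk.so.1.* can sit next to it untouched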
3-Current DIR is not in PATH
This is quite simple, but it has to be said. By default on Unixes, the current directory is not recognized as a library or binary path. Because of this, you can’t just unzip a program and run the binary inside. Most shared binaries that are distributed resort to a dirty trick: a shell script containing the following.
#!/bin/sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
./mybinary
This could simply be solved by adding “.” to the library and binary paths, but no distro does it, because it’s not standard in Unixes. Of course, from inside a program it is perfectly normal to access data through relative paths, so you can still have subdirs with data.
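A slightly less fragile variant of that wrapper (still a workaround, not a proposal from this article) resolves its own location first, so it keeps working no matter which directory it is launched from:

#!/bin/sh
# find the directory this script lives in, then run the bundled binary with its bundled libs
APPDIR=$(cd "$(dirname "$0")" && pwd)
LD_LIBRARY_PATH="$APPDIR/lib:$LD_LIBRARY_PATH" exec "$APPDIR/mybinary" "$@"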
4-No file metadata
Ever wondered why Windows binaries have their own icons while on Linux binaries all look the same? This is because there is no standard way to attach metadata to files, which means we can’t bundle a small pixmap inside the file. Because of this we can’t easily hint the user about the proper binary, or even file, to run. I can’t say this is an ELF limitation, since the format lets you add your own sections to a binary; it is more the lack of a standard defining how to do it.
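ELF does allow extra sections, so in principle an icon could be stashed inside a binary today with nothing more than GNU binutils; what is missing is an agreed-upon section name that file managers would look for. A hypothetical example (the section name .pixmap is invented here, not any standard):

objcopy --add-section .pixmap=icon.xpm mybinary              # embed the image as an extra ELF section, in place
objcopy -O binary --only-section=.pixmap mybinary icon.xpm   # what a file manager could do to pull it back out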
Proposed solutions
In short, I think Linux needs to be less standard and more tolerant in the previous aspects if it aims to achieve the same level of user-friendliness as the ruling desktop operating systems. Otherwise not only users but also developers become frustrated.
For the most important issue, libraries, I’d like to propose the following as a spinoff for desktop distros that remains compatible with Unix.
Desktop distros should add “./” to the PATH and LIBRARY_PATH by default. This would make it easier to bundle “not so common”, or simply modified, libraries with a program, and save us the task of writing scripts called “runme”. This way we could get closer to simple “in a directory” installs. I know alternatives exist, but this has been proven to be simple and it works.
Linux’s library versioning system is already great, so why should installing the binaries of a library be complicated? A “library installer”’s job would be to take some libraries, copy them to the library dir, and then update the lib symlink to the newer one.
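A minimal sketch of what such a library installer would do, leaning on the soname/symlink convention Linux already uses (the file names are placeholders):

install -m 644 libfoo.so.1.2.3 /usr/lib/
ln -sf libfoo.so.1.2.3 /usr/lib/libfoo.so.1   # runtime soname link, now pointing at the newer copy
ln -sf libfoo.so.1 /usr/lib/libfoo.so         # development link used at build time
ldconfig                                      # refresh the dynamic loader’s cache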
Agree on a standard way of adding file metadata to ELF binaries. This way, distributed binaries can be more descriptive to the user. I know I am leaving script-based programs out, but those could even add something like a “magic string”.
And most importantly, understand that these changes are meant to make Linux not only more user-friendly but also more popular. There are still a lot of Linux users and developers who think the OS is only meant to be a server, many who consider aiming at the desktop too dreamy or too “Microsoft”, and many who think Linux should remain “true as a Unix”. Because of this, the focus should be on letting these ideas coexist, so everyone gets what they want.
About the Author:
For some background, I have been programming Linux applications for many years now, and my specialty is Linux audio. Every few days (and a lot more around each release) I receive emails from troubled users with problems related to missing libraries, or to distro-specific or even compiler-specific issues. This kind of thing constantly makes me wonder about easier ways to distribute software to users.
It’s pretty simple to do this anyway; any provider not using standard libraries can end up using “runme”-style scripts, it’s not a problem.
Using standard libraries, though, is always the best bet…what we need is a standard installer other than the “./configure && make && make install” way…users don’t want to compile if they don’t have to.
I say an installer that knows how each major distro is set up and copies the files to the right place based on that.
Maybe…
Compilers on Linux seem to be A MUST! if you are planning to install software. _Why_ do you need to compile it in the first place? Far more people use packages than use compile-only distros (like Gentoo or LFS). Compilers shouldn’t be needed unless you are a developer. Also remember that global installs are much less secure than user installs, so this “promote everything global” attitude tends to increase insecurity.
Also remember, there are a lot of people against these kinds of ideas because they think they kill open source.
First, the reason why everything is so easy to install and works on Windows is because no one really shares libraries (actually in Windows XP this is a feature?!?! it will keep multiple DLLs for each app). Every application installs its own DLLs and uses its own DLLs even when these libraries could be shared by every other app (example: MFC). The exception to this rule is of course the Microsoft DLLs that ship with Windows. But Microsoft does the smart thing and emulates old versions of the standard DLLs’ behavior (usually you have to request a certain version).
Contrast this to Linux, where many shared libraries (the DLL equivalent) are used by many different apps, and you run into dependency hell (dependency hell is just what the user sees; in reality it’s different apps wanting different versions of a library).
So, what needs to happen in Linux? Right now, we have every distribution shipping their own packages compiled against their own libraries. This works if you only install binaries from your distribution. To effectively allow linux users to download a single binary for any distribution we need the library programmers to provide versioning within the library. This of course is a big task.
Finally, the rest of the problems (where to put stuff, etc.) are relatively easy to solve.
if you don’t also offer the source in .tbz2 format or some other form of distribution method.
I agree that not everyone should have to compile a binary. I don’t believe that global installs are any less secure using binaries than compiling source. If the source is available, who cares?
Not everyone goes through the source, nor should they have to. Include it in the install package if you want…make a directory called ‘src/’ and allow people to compile it if they want.
The installer is possible because major distributions don’t usually change their filesystem layout at all, that would confuse 3rd party developers more.
A friggin’ PERIOD!!!!
My god, that is so damn simple, why doesn’t the LSB define it!!!!
Then a program can bundle the libs that are not in the LSB in its own dir… for god’s sake!!!!
And no, I do not see this as making more problems. As a desktop user, I don’t care if I have 5 instances of a library open in memory. Sure, if you have a server set up, then by all means worry about it, but for a desktop setup, function before form holds true, as with any consumer product… if it does not work well, no matter how cool it looks, it sucks.
LSB….please please please FIX THIS
Nice to see an insight into Linux issues without Whingefest 2002 being launched, and every Windows/Linux fruitcake coming out to throw their uneducated rant into the ring.
As for the issue with packages and so forth. In a perfect world, everyone would simply base their distro on Debian 3.0r1, and life would be easier, however, this isn’t a perfect world, and there are zealots in various distro camps claiming their way is the best way, and should be the only way.
Personally, I prefer the ports system that FreeBSD uses. Change to the directory, type make, then enter; sources are retrieved and patched; once compiled, make install, voila, no problems.
i agree, that’s a simple fix for standard desktops, where computers are known for having at least 128M of memory and no services running (apache, dns, etc).
What are the security implications of doing it this way, though?
> First, the reason why everything is so easy to install and
> works on Windows is because no one really shares libraries.
Yeah, that’s kinda it. Why is sharing so many weird and rare libraries worth the dependency mess? The core ones can still be installed system-wide and that’s pretty much all of it. My Debian install is nearly 4 years old and I have an enormous amount of software installed; even so, check this:
red@server:~$ du -ch /usr/lib/*.so* | grep total
226M total
That’s not likely to bother my 80 gig HD even if programs shipped these libraries many times over, and most of that space IS probably taken by core libs anyway.
That is what I said years ago that Windows should have done. If these vendors want to haul DLLs around with them, sure, by all means, but don’t ram them into /windows and then complain because you have broken the user’s system.
As for developers who use weird and wonderful libraries they KNOW won’t/don’t exist in a standard distro, they should either include the library in the directory or statically link against it. IIRC, that is what TheKompany does to get around this problem: all their applications are statically linked against Qt.
I also think that the LSB doesn’t cover enough. It needs to specify exactly where EVERYTHING is located and which versions of which libraries should be in a certain place. For example, LSB Supreme could state that gtk2, glib2 and qt3 must be included and placed in /usr/lib. Then state that the library directory /usr/local/lib is set up so that those who do compile and find their application moved into the “/usr/local ZONE!” can run their applications without the application moaning because a library doesn’t exist.
Why? There are hundreds of VERY popular open source Windows applications where regular users just get the binary and the developers work on the source. To name a few: zsnes, VirtualDub, Dev-C++, PuTTY or ScummVM. As for the people using Red Hat, they mostly don’t compile, and that hasn’t killed open source.
This kind of thing constantly makes me wonder about easier ways to distribute software to users.
How about static linking instead of dynamic? Have you considered that as an option? I haven’t done much programming on Linux so I don’t know if this is possible there; it probably is. I am aware that static linking will make the executable slower than when linked dynamically, but I am running a P4 2.0 with 256MB DDR and I don’t see any difference in speed in my own programs when I statically link them, probably because I have a fast machine. Still, perhaps you could distribute your files both ways: one statically linked so all the user needs to do is run the file, and one not linked so the user must compile it. The last resort is to make a GUI install program 🙂 but there isn’t such a thing as “InstallShield” on Linux, which just supports the theory I stated in my previous posts.
It’s only opensource if I can get the source code and compile it myself. Proprietary software is when a user gets a binary and ONLY the developers get the source code.
If you want to release binaries, go ahead, but the GPL demands that if the end user gets the binary, he can also request the source code…if he doesn’t get it, it’s not GPL software.
If I want to distribute under BSD, then that might be different. But in order for it to be open source, the code still has to be available.
>i agree, that’s a simple fix for standard desktops, where >computers are known for having at least 128M of memory and no >services running (apache, dns, etc).
>What are the security implications of doing it this way, >though?
Not many. A very old exploit was to make a fake lib that a suid root program needs, change the path, and make the suid program link against it. But this was solved long ago (although I really don’t know how); if you try to do it, it will not work. Besides that, I can’t see any security issues.
That’s your opinion, and I respect it, and I also agree in part. But I guess the term open source (and especially the GPL) is very well defined already, and it differs.
It’s not an opinion… it’s cold hard fact that open source means that the source is open, available.
Whether the user wants it or not doesn’t matter; what matters is whether it’s available for those who want to improve it.
Many companies try to pass their software off as opensource, like Sun did with their solaris source code. While it was available, and open (yes you could use it to build solaris), it wasn’t free software.
Opensource projects will allow their users access to the source code. While I don’t run windows, and have no knowledge of zsnes and the other projects you listed, can I download the sourcecode to contribute? If not, then there’s no fine line, it’s closed source.
Linux is based on the GPL. If you want an easy to install version of a program that statically links against GPL libraries, then you have to provide the source.
It’s not a war on licenses, if you want to provide a closed source package, go for it. Just don’t call it opensource if it’s not.
I can get the putty source code from their cvs server, it is GPL’d, and has been since day one.
I don’t know about the others though, just because the package doesn’t come with source code doesn’t make it not opensource…they distribute it from their website.
You and I are on the same page, I just think we got mixed up a little…Sorry for that.
What’s really funny is that the source to most BSD projects is much more readily available than a lot of linux projects ‘protected’ by the GPL.
Look at some of these new desktop Linux distros, how easy is it to get the source for them? Oh yeah, the source is available, but you’re not going to get it in a terribly useful form. What good is source if it comes to you on 842 5.25″ floppy disks? (Yes, this is a satirical exaggeration.)
Umm… let’s see:
Open box… look for source CD.
Hmm, that was simple.
Which distros are you having problems getting source from? Red Hat offers it all on CDs when you buy it and from their FTP site; SuSE does the same; Debian, you can get it with apt-src; Gentoo? Well, just run emerge.
The bsd source is distributed in tar archives just like linux…that’s not the point though.
The point is, the source is available.
If you develop a program and want to make it opensource, you can just put the source on your website, or make it available to order through the mail.
It doesn’t have to go into the package. It just has to be available.
Zsnes and all the other packages that were mentioned are SourceForge projects; it’s usually not all that hard to get the sources to those, eheh.
One of the few good ideas Mac OS X had was bundling. In essence, instead of installing applications, you drag a single icon to the appropriate folder. The icon was really a folder, but with metadata instructing the OS to treat it as a single file except when using special programs.
Linux should take a hint. Scrap RPM packages, and in their place use metadata-based bundles. To install an app, the user drags it into their /usr/bin folder (or /bin, because /usr shouldn’t exist… go gnu hurd! ). If an app requires library support, the user drags the appropriate library bundle into their /lib folder. If an application requires pre/post-install/deinstall scripts, then well, it shouldn’t. Use Office-style “First Run” scripts if absolutely necessary, but there’s no reason why a bundle shouldn’t come preconfigured.
A very simple and elegant solution, with the added plus that there is no central database to maintain. And technically you don’t even need metadata, but rather just a smart bash and/or filemanager.
I didn’t say the source wasn’t available. I said it wasn’t available in a terribly useful form.
How many of you can ftp the entire source for your distro and build it from scratch? I’m running windows on this machine, now how easy is it going to be for me to get the source for Xandros and somehow magically turn this machine into a Xandros box?
I don’t want to start a licensing war here, but think about it. The GPL doesn’t really protect any better than the BSDL. There are a million ways an evil company (like Microsoft) could legally make acquiring the source a huge pain in the ass. Where’s your GPL-protected freedom then?
> There are a million ways an evil company (like microsoft)
> could legally make acquiring the source a huge pain in the
> ass. Where’s your GPL protected freedom then?
It’s true that the GPL will have little effect on an evil company in Seattle (or Cupertino for that matter) using GPLed code illegally. What the GPL does is promote a certain culture. Companies that use the product are more likely to be hacker-friendly companies. Apple and Microsoft both use BSD-licensed code, while RedHat uses GPL. Compare the culture at the companies. I know where I’d rather work, and it’s on the East Coast…
Additionally, it can sometimes be used as a legal threat against companies that proclaim to be using the product. Take Lindows, for example. Obviously it is a Linux distro, but it initially did not release its modified source. Pressure and threats of a lawsuit forced it to do so. Had Linux been BSD-licensed, Lindows would have stayed closed source forever.
“1-Distribution by physical place”
This has always intrigued me… what’s the difference between /usr/share, /usr/local/, and /usr/local/share? What are the rules for where programs go? It’s pretty confusing, even for someone with some *UNIX/*BSD experience… it’s always fun to play the “where did the binary get installed” game 🙂
“2-“Global installs”, or “Dependency Hell vs Dll hell””
I think “install by dir” would work, as long as you separate the user’s home directory from the user’s installation directory. One of the nicest things about the UNIX .files is that they allow _very_ easy backups. In most cases, tar.gz up your home directory, backup, install/upgrade/move to new computer, and untar away and you’re done (given that the config file format didn’t change between versions, if applicable). Perhaps a /home/{username} and a /app/{username} would be a good idea, as it would allow both per directory installs _and_ simple backups of personal data. I really like the *NIX custom of separating user data and programs…
“3-Current DIR is not in PATH.”
I believe the reason this is not done now is for security reasons. Case in point, say a rogue .tar.gz file you downloaded decided to install a new ‘ls’ in your path when you ran the program. This would not only affect you, but any user (including root) that happened to cd into your directory. Since root ran your ‘ls’, it’s now operating at an elevated level… `rm -rf` anyone? Now, this shouldn’t happen as long as you have the correct order in your path, but you can never be too careful.
However, to use the /app/{user}/ scheme above, if for each install you had, a shell script was placed in /app/{user} then all you would have to do is put /app/{user} in your path and you’re done. Installation/removal would be as simple as removing the directory and shell launcher script, and simple GUI tools could very very easily be created. If your personal data is separate, there’d be the extra step of having to manually make sure to delete the appropriate .directory.
“4-No file metadata.”
I wonder if a standard section could be added to ELF binaries that all file managers, etc. would look at for specific info (icons, etc.). This might allow backwards compatibility, although my knowledge of the ELF format is almost nonexistent 😉
Most of these problems could be handled by appropriate recommendations/standards set by LSB (or even Redhat). Something tells me that that won’t happen, or if it does then nobody will follow it.
Why do people have to troll about this? It’s getting pretty lame already. Why do you pick on Linux? If you don’t like it, don’t use it.
Mod me down if you will, but Linux is great because of its diversity. You can use it practically everywhere, there are no “standards” other than simple POSIX ones, and usually each distro has its own great packaging format.
As someone once said, Linux has been going for 12 years and it’s a diverse place. Linux does not need to appeal to Windows/BeOS/MacOS users. If they wanna switch, they WILL do it no matter what and they WILL learn it no matter what. It’s that simple.
So basically, don’t be a troll about it – there are several operating systems for people like you…
The license does not control the way the source is distributed. If you want the source, you CAN get it no matter what license it uses. And the GPL is more mainstream than BSD, so companies like Red Hat, SuSE, etc. use GPLed software. And they don’t really want people changing much (it’s a feature, not a bug, since who the hell recompiles their Red Hat system from source anyway?)
If you want it, it’s always there. But Red Hat, SuSE, Xandros, *insert major Linux distro here* does not want you to recompile your entire OS, nor does it expect you to. You’ve got Gentoo or LFS for that. I know, I use Gentoo mainly.
perhaps they can move off this horrid Linux package crap and implement a simple app install.
How about static linking instead of dynamic? Have you considered that as an option?
It could be a solution, but IMHO not the best one:
if you link statically, the running programs won’t share the common code of the library in RAM, because it’s “buried”, so to say, in the application itself.
In this case the library size adds up on the disk AND in RAM.
But if you distribute the dynamic libraries with the application, the size adds up only on the disk, while the OS is able to share the memory for those libraries (given that they are the same version).
This, I think, is much preferable to the current approach
because applications WILL ALWAYS RUN no matter how old and non-standard your distribution is (well, within limits :-)).
But of course this requires a directory layout where it’s easy to bundle applications with their own libraries, a layout that I would call “application-centric”
(AtheOS has a great example of the concept).
I am aware that static linking will make the executable slower than when linked dynamically, but I am running a P4 2.0 with 256MB DDR and I don’t see any difference in speed in my own programs when I statically link them.
AFAIK, statically linked libraries are slightly faster, but the difference is probably not noticeable in real-world conditions.
So basically, don’t be a troll about it – there are several operating systems for people like you…
Yeah, I can’t wait until eComStation gets released. That thing is good! You’ve got the ease and look of Windows and the stability of Linux! hehe. OK, come on, flame me… I know you will.
Run-time, statically linked binaries are slightly faster, but perhaps not noticeably so (unless they use lots of libraries all over the place). However, loading times of statically linked binaries are much much smaller than those of dynamically linked binaries.
Why do people have to troll about this? It’s getting pretty lame already. Why do you pick on Linux? If you don’t like it, don’t use it.
It’s not that we don’t like it, we love it and want to make it better. Making a few standards so that installing binaries is painless won’t kill Linux or reduce your choice or freedom. Reasonable standards will only increase choice.
So basically, don’t be a troll about it – there are several operating systems for people like you…
Elver, “several” is debatable.
Today, if you don’t count really alternative OSes or OSes running on proprietary hardware (sorry, I used to love MacOS but closed HW platforms are no longer on my list),
the choice boils down to Linux and Windows.
And Incognito writes:
It’s not that we don’t like it, we love it and want to make it better. Making a few standards so that installing binaries is painless won’t kill Linux or reduce your choice or freedom. Reasonable standards will only increase choice.
Exactly!
And about improvements, do you think an overhaul of the directory system towards a more “application-centric” one is possible… something borrowing from the app-in-a-directory concept shown in AtheOS (I repeat myself today!) and ROX Desktop?
I think it’s possible, and can be done while still keeping compatibility with Unix, but I would like to hear other opinions…
Also support for file metadata, or maybe even better, file attributes, would do great for a desktop OS:
I have seen the power of having arbitrary attributes on files, and I am a believer now 🙂
Well, you could say that my views are influenced too much by BeOS, which I’ve used a lot in the past, but in my mind it’s still the reference model of an agile desktop,
and it would be great to see Linux [for the desktop]
evolve in that direction.
Ah, this reminds me that I haven’t looked for my flame-suit…
mmm, let’s see, I thought it was here 🙂
Programs that come with the distro really aren’t a problem; it’s mostly a matter of writing a user-friendly frontend to the package management system, apt or urpmi (or whatever it’s called).
If the user wants to install, say, SuperShinyApp 3.0, which isn’t in the distro, he should be able to download supershinyapp-3.0.bin from supershinyapp.org, right-click on it in Konqueror to make it executable, then double-click it. It should then install to /home/bni/supershinyapp or something like that.
Take a look at the installer for Quake 3, it does a lot of things right, it even has a GTK GUI!
There is really no reason why, for example, Mozilla couldn’t hand out a file like this, with the obscure libs it uses statically linked. So what if it takes more memory, it is only a temporary solution until it becomes part of the distro.
I realise that installing in the user’s home dir opens the door to viruses, because of writeable executables, but I think it’s less of a risk than Windows anyhow, partly because you explicitly have to set a file as executable.
try that with . in your PATH:
as a user:
echo "rm -rf /*" > /tmp/ls
as root:
cd /tmp
ls
aaaargh !!!!
you also have to do chmod +x /tmp/ls as a normal user. Else it won’t “work”.
Don’t try this at home kids. Just in case you didn’t know, that can wipe out your hard drive 🙂
What are you saying?
I use Linux for lots of things, but one of them is the easy installation of software compared to Windows.
Windows seems to be easy: click click click, wait… done.
Now uninstall this software: click click click.
“Do you want to remove RE998R8E.dll? It may be used by another program”… Uh? Yes. BOOOOOMM!!! You lose. You just crashed your Windows; you’ll see after rebooting.
Other stories: installing an image viewer requires rebooting the system. If you don’t reboot, a BSOD will help you.
Now linux version :
rpm -U pipo.rpm
rpm -e pipo
What’s difficult ? Just use package made for your distribution.
And now it is getting easier and easier:
urpmi pipo
or
apt-get install pipo
I don’t know of anything as easy in the Windows or Mac world.
Here’s my proposition:
Developers bundle the binary and all the (uncommon) necessary libs in a tgz archive. The tgz file should contain a “bin” directory and a “lib” directory. The archive should also contain a “meta.xml” file with info like the icon file, the entry in the menu, etc…
The tgz file should be called, for example, myapp.pak.
Then, we create an installer that will handle those “.pak” archives. It should de-tar the archive in /tmp and look for the meta.xml file.
Then it should create a “myapp” folder in ~/, add an icon on the desktop, add an entry in the menu and (the most important part), add the path to the “bin” and to the “lib” directories in the user’s path (in a permanent way).
The app would be installed for the user. It could then be de-installed by removing the ~/myapp folder.
When you re-start your GNOME/KDE/whatever session, it should look for broken links in the menu/desktop and remove them so that the removed apps totally disappear from the user’s system.
Is it technically feasible without modifying UNIX bases ? If yes, I’d be glad to help in such a project.
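Just to sketch what such a handler could look like (everything here, from the file layout to the idea of appending to ~/.profile as the “permanent way”, follows the proposal above and is only a rough guess at an implementation):

#!/bin/sh
# usage: installpak myapp.pak -- unpack the bundle into the user's home and register it
PAK="$1"
APP=$(basename "$PAK" .pak)
DEST="$HOME/$APP"
mkdir -p "$DEST" && tar xzf "$PAK" -C "$DEST"   # the .pak is just a tgz with bin/, lib/ and meta.xml
# make the bundled binaries and libraries visible to the user's shells
echo "export PATH=\"$DEST/bin:\$PATH\"" >> "$HOME/.profile"
echo "export LD_LIBRARY_PATH=\"$DEST/lib:\$LD_LIBRARY_PATH\"" >> "$HOME/.profile"
# the menu and desktop entries would be generated from meta.xml here (left out of this sketch)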
All your 4 points were irrelevant regarding ease of installation and running installed applications.
Important things for end users are:
– know where to find precompiled binaries (including related binaries)
– know how to use rpm, deb, installation executables (this is easy)
– know how to launch them
Mono applications run anywhere you put them, and you can use any version of a library because Mono checks the same folder as the .exe for them. You only need to install Mono, and all Mono-based apps just run.
“First, the reason why everything is so easy to install and works on Windows is because no one really
shares libraries”
But they do on Amiga, where installation is even easier than on
Windows. One reason is that a program cannot specify a single version
of a shared library, but only the oldest acceptable version. So only
the newest version of each library has to be installed, not multiple
versions.
1. Just putting “.” in your LD_LIBRARY_PATH is not a good idea. It assumes that the current working directory is the binary’s path, which is not the case when you start an app from the command line. Nor do you want the app to change the cwd, because you may give a relative path as an argument to the app. The dynamic linker needs to be enhanced so it can find libraries at a location relative to the binary.
2. One problem with distributed packages that has to be solved is the PATH. Many old-school Unix users want to start their apps on the command line, and people will not accept a solution that doesn’t allow this.
3. The Mac OS bundles have a few problems that make it more difficult to apply them to the Linux world. Some programs *need* to run pre-install scripts: basically all apps that need to install themselves as /etc/init.d scripts. Unless, of course, you want to change the whole boot process (which is certainly not a good idea for compatibility reasons, and the boot process as defined by the LSB is nice and clean).
I install RPM files on my Red Hat, and they install themselves.
And using the package manager, it is also easy to remove them from my operating system.
There is already a standard for menu entries, icons etc that is used by both KDE and Gnome: .desktop files.
1. I agree. Maybe the best way is to add each app’s folder to the LD_LIBRARY_PATH, but that may slow the system down, no?
2. Here again, adding each app’s folder to the PATH could do it, but it might slow the system down if you install 1000 apps, for example.
3. I think that apps that need to install themselves as /etc/init.d scripts are not end-user apps. The problem is only with very end-user apps, because people who want to install servers or non-GUI apps should be able to install them easily anyway. Moreover, with such “dangerous” apps, they should always use the packages provided with their distro to avoid security problems.
Yes, you’re right, the meta.xml file should be dropped and replaced by a myapp.desktop file. In fact, the only remaining problem is: how to add the app folder to the LD_LIBRARY_PATH and to the PATH in a clean way.
“pkg_add -rv <packagefile>” will fetch the binary, md5 checksum it, and install it. what can be easier? if you want to compile from scratch “cd /usr/ports/…; make; make install” all dependencies are downloaded, compiled and installed. want to remove something? “pkg_delete” and “make uninstall”.
just use FreeBSD.
Basically I want to look into what it takes to put this kind of metadata into the journaling filesystems like ext3 and reiser…
I fell in love with BeOS for one reason…the query system. If linux had that ability, it would be amazing and it would open up the system a lot for ways data is handled.
But that also opens up the ability to store metadata like MacOS does with icons, etc…
Anyone think it might be possible?
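For what it’s worth, there is already an extended-attribute interface that points in this direction, although whether a particular kernel and filesystem build supports it is another matter. A hedged sketch using the getfattr/setfattr tools from the attr package (the attribute name user.icon is made up for illustration):

setfattr -n user.icon -v "myapp.png" ./mybinary   # attach a small piece of metadata to the file
getfattr -n user.icon ./mybinary                  # read it back; a file manager could do the same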
pkg_add -r prg
and 5 seconds later prg is installed and working.
No problems with dependencies, where it dl’d from
or where it installed. “It just works.”
But then, it’s BSD.
apt-get install <name of package>
Boy, that’s tough. Or for those who can’t do that (maybe their machine doesn’t have a keyboard??), use Synaptic, and click on the programs you want. I don’t know, maybe you should have a CS degree before you attempt that.
I can’t see how it can get much easier. It isn’t a matter of difficulty, it is a matter of familiarity. It is really, really, simple to install Linux software if you get pointed in the right direction. Even my wife was astonished when I showed her how easy it was to download and install stuff from the Debian archives.
apt-get will only install software from the debian repositories which certainly won’t help when you download some shiny beta sourcecode from a distant website.
The point here surely is that you can run into all sorts of problems when you want to wander off the beaten track. SuSE’s package manager and SuSE YOU will get me over 5000 packages, but what can I do if I want to install the latest version of KDE? I have to compile it myself, and that’s when I run into dependency issues.
It would be nice to sort out such issues. Projects like autopackage (http://www.autopackage.org), which was recently discussed here at OSnews, seem like good solutions. They expand upon the dependency resolution features of apt-get but do not rely on a single repository.
IMO it would be sacrificing some of the good things about Linux if everything became statically linked or hundreds of redundant libraries scattered the filescape for the sake of brain-free installation. We should instead work on creating a decent method of dependency resolution so the user doesn’t need to be concerned.
So as said, Linux is fine for dependency troubles as long as you stick with your distribution’s methods of software installation. If you want to move beyond that, for the time being at least, you must have some patience; a solution will emerge. Personally I just really hope it isn’t of the “bundle a graphical installer” variety (I’ve nothing against graphical installers, provided they are part of the system, not bundled). As said, people without the patience should probably switch OS.
Yes, that’s perfect for “normal” packages, I mean those that are on your Debian CD. But if you want to install “foreign” packages, you also have to edit your “sources” file. That’s where it begins to be tough. Of course it’s quite easy once you have understood the concept and done it once or twice, but for a newbie it’s yet another thing to learn. Moreover, apt-get isn’t installed by default on every distribution, and packages are still not compatible between apt for deb, apt for rpm for Red Hat, apt for rpm for SuSE, etc…
So, yes, in theory your solution is ideal, but that’s just theory. The fact remains that if you want to make a package of your software, you have to create an RPM for Red Hat 8, one for RH 7.3, one for Mdk 8.x, one for 9.x, one for Cooker, one for SuSE 7.3, one for SuSE 8.0, one for Debian, one for Slackware, etc…
And, like most developers, you end up making NO packages at all and providing source only, because it’s such a mess to create packages. And users go to your web site, find a tgz file, maybe they even manage to install it, but then their whole packaging system is corrupted… Finally, they end up complaining to whoever they think is able to solve their problem, articles like this one appear on the internet… and people think “wow, Linux is so complicated”…
I was once looking for really alternative and tiny WMs for Xfree. I found ROX. It’s a bit like BeOS, I like it a lot anyway. Try it!
http://rox.sourceforge.net/
Matthew,
While I use Debian and I think its package management is very nice… there are still other issues. I for one still like to use the occasional commercial app, JBuilder to name one. Well, after tinkering a bit, I made a symlink from one libstdc++ lib to a newer one and got it working… then I installed a newer version of Mozilla, which in turn installed a new libc, which in turn broke JBuilder. Library management on Linux is not nice. Same on Windows… I think the idea of distributing apps in their own folder is a whole lot easier: no package management needed. Just unarchive, and go. It should be that simple.
By Aleksandr (IP: —.ne.client2.attbi.com) – Posted on 2002-12-16 05:47:44:
One of the few good ideas Mac OS X had was bundling. In essence, instead of installing applications, you drag a single icon to the appropriate folder. The icon was really a folder, but with metadata instructing the OS to treat it as a single file except when using special programs.
Scrap RPM packages, and in their place use metadata-based bundles.
A very simple and elegant solution, with the added plus that there is no central database to maintain.
I was going to propose this.
Really, the idea of an executable archive is too good. The key is the desktop hook. The desktop needs to understand how to work it, and the shell only needs editing tools.
A standard installation program can be used, such as Loki’s setup.sh. Loki’s setup allowed you to choose your installation target. The installation tool should be provided by the desktop environment, but one should be includable with the archive. Reasonable dependencies should be included for installation if they are not provided for.
The user should have the ability to “Install Globally” or keep the application itself sitting on his desktop if he desires.
I think dependencies should be defined by keywords. The metadata itself isn’t all that important, as long as the installation programs are smart enough and consistent, and the metadata format is standard.
I also think the /opt hierarchy should be used, starting inside the /opt/ directory. A simple global install could drop the archive in the /opt/ directory, or extract it there.
I think it’s a reasonable idea. It has already been proven to kick ass on MacOS X.
Solutions for linking are:
– allow the ELF binaries to specify (relative) paths for libraries and let the dynamic linker use them (see the sketch after these lists). Probably the cleanest solution, but it requires many changes in the linker and binutils
– allow paths relative to the binary in ld.so.conf and LD_LIBRARY_PATH. This would either re-define how relative paths are evaluated (both the binary’s path and the cwd) or require some special syntax. Both can have backward-compatibility problems
Solutions for PATH:
– whatever unpacks the package must create symlinks from the package’s binaries to some common directory. This directory must be in PATH, and could either be /usr/bin or some new dir, like /Applications/bin
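On the first linking option: the GNU dynamic linker understands an $ORIGIN token in a binary’s rpath, which gets close to “paths relative to the binary” without new syntax. A build-time sketch (the library and directory names are placeholders):

gcc -o myapp main.o -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
# at run time ld.so replaces $ORIGIN with the directory containing myapp,
# so shipping  myapp  plus  lib/libfoo.so.1  together in one directory just works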
I used Debian (Knoppix hd-install) and did “apt-get install licq”, but nothing happened: no such package, something about it being obsolete and dependencies that aren’t there. I don’t know anything about this stuff, but simple??? It didn’t work, for some reason. I guess the solution would be simple, but you have to know it. I don’t, so NO, it wasn’t simple. NO LICQ!
The way I want to install a program is by going to the product website, downloading, clicking and following instructions, and it works. How this is done I don’t really care, because I’m not interested in programming; I just want to install and use a nice (non-programming: chat, photo-editing, multimedia or whatever) program that I see somewhere on the internet or (when Linux becomes more popular on the desktop) that comes on a CD in a computer magazine.
> whatever unpacks the package must create symlinks from the > package’s binaries to some common directory. This directory > must be in PATH, and could either be /usr/bin or some new > dir, like /Applications/bin
The problem with linking to /usr/bin is that you need root access to write into it. I think that’s the same problem for any directory that’s in the PATH. A good solution might be to simply put the absolute path to the binary in the .desktop file. For example, if you install “LimeWire” in ~/Apps/LimeWire, the .desktop file should point to ~/Apps/Limewire/bin/limewire.
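A minimal .desktop entry along those lines might look like this (the paths follow the hypothetical LimeWire example above, with “joe” as a placeholder username; note that Exec wants an absolute path, not “~”):

[Desktop Entry]
Type=Application
Name=LimeWire
Exec=/home/joe/Apps/LimeWire/bin/limewire
Icon=/home/joe/Apps/LimeWire/limewire.png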
RPM works fine if you have the correct version built specifically for your system. Otherwise you end up having to install many libraries to cure a web of dependencies.
apt-get and BSD ports works fine to sort out this web of dependencies automatically, if you have a fast internet connection. But many people only have a phone line, or maybe not even a connection. So upgrading many Mb of libraries to cure a dependency problem is not necessarily convenient.
Compiling from source reduces many dependency problems, but means that the package manager does not know that you have the software installed. Uninstalling such software can be a nightmare, as it is difficult to know where all the resources were installed.
The best solution I have encountered is the ROX application directory (app-dir) method (borrowed from RISC OS).
Similarly to the Mac OS X application bundle (borrowed from NeXT) it bundles required resources into one directory and the desktop visually renders the directory with an application icon rather than a folder. Double-clicking the directory runs the application inside, rather than opening the folder.
Installing the application is just copying the directory (i.e. the application icon) to the desired volume. Uninstalling it requires deleting the directory (ie the application icon). Relocating an application is just dragging the icon to another location.
Shared system libraries can be bundled in their own library app-dirs. If an application requires its own specific local version of a library, it can be included within the app-dir.
Calling these applications from the command line can be accomplished with a very simple BASH patch. The only requirement is that the application directories are stored somewhere within the system path (but note the path does not need to include the application directory).
Applications can be delivered as pre-compiled binaries or as source code. When the application is run, if a binary exists for the current OS and processor, it is run, else the application compiles (if you have access rights) and runs.
Both system-wide and user-local installations are supported easily by this method.
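As a very rough sketch of how an app-dir’s entry point can work (ROX runs an executable called AppRun inside the directory; the platform-detection and compile-on-first-run details below are simplified guesses, not ROX’s actual script):

#!/bin/sh
# MyApp/AppRun -- run a platform-specific binary, building it first if it doesn't exist yet
APPDIR=$(cd "$(dirname "$0")" && pwd)
PLATFORM=$(uname -s)-$(uname -m)              # e.g. Linux-i686
BIN="$APPDIR/$PLATFORM/myapp"
if [ ! -x "$BIN" ]; then
    mkdir -p "$APPDIR/$PLATFORM"
    make -C "$APPDIR/src" BINDIR="$APPDIR/$PLATFORM" || exit 1   # assumes the bundled Makefile honours BINDIR
fi
LD_LIBRARY_PATH="$APPDIR/lib:$LD_LIBRARY_PATH" exec "$BIN" "$@"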
There is a project underway to build a Linux distro around ROX app-dirs. It is in the very early stages, but urgently needs contributors. Its (initial) web site is at
http://lemmit.kaplinski.com/home/green/Linux/
ROX is at
http://rox.sourceforge.net
The introduction of the article is way off. Installing can actually be, and most of the time is, easier than with any other OS (with great tools like apt-get). Linux does have trouble with install paths and the like, but that is not what the article suggested in the intro.
And second, suggesting ./ to your path is just plain evil.
in BeOS, the LD_LIBRARY_PATH includes %A/lib, where %A is mapped to the executable’s location.
So /beos/my-app/my-app can have custom libraries stored in /beos/my-app/my-app/lib/my-app.so etc.
The LD_LIBRARY_PATH also includes the standard library directories as well as ~/config/lib (which is more or less /usr/local/lib, since BeOS was single user)
Also, look at GNUstep. I’m not entirely familiar with the directory layout, but applications are stored in directory bundles with resources, property lists, images, and other goodies, rather than just being a binary file.
Is it possible to add %A to Linux’ LD_LIBRARY_PATH or is it BeOS-specific ?
I doubt that Redhat and others will do this but…
Include apt and synaptic or even better tie the package management tool for Redhat 8 for example to trusted apt sources.
No command line!
You just get the latest list of applications packaged for your distribution.
In synaptic you click on the app (let’s choose one with dependencies from hell) like totem or rhythmbox and click install.
It just happens. All the dependencies are worked out and everything is installed on your box. If you want the latest greatest bleeding edge garbage that you probably should not install yet anyway then include some other sources for your apt stuff (I include Nyquist’s RH8 rpms for the latest XFT/GTk2 choose your opensource acronym here rpms).
What if I want to use such-and-such software that just went beta two seconds ago? Well, like any freeware/shareware/GPL software for any OS you find on the net you better be ready to compile it yourself.
Gosh, if no one has ever built a package for the software you end up using the source. Yeah, that is the case for BSD, Mac OS X, linux and Windows. I have grabbed beta software for Windows from this or that Opensource project and guess what I had to compile it myself because it was not ready to be packaged yet.
Honestly I think part of the problem are old-school guys on the mailing lists and boards who when asked how to install something either spout out rpm -Uvh or configure, make, make install impulsively.
Half the time if I see some tiny utility on the net in rpm format I want to install and there is no apt repository for I can still install by downloading the thing and double-clicking on it. I then ask the author nicely in a email to submit the rpm to freshrpms or something.
just thought I’d point out that microsoft does use gpl’d code. they use it in their unix compatibility tools or something. and yes the source is available.
One problem still remains even with your solution:
Sometimes (very often in my case), you just don’t want/can’t connect to internet. There should be a way to install a package like – for example – rhythmbox from a CD-ROM without needing to download additional packages.
I think the best way to handle that would be to have a rhythmbox package providing gstreamer, monkey-media and whatever else in ONE package. Of course that’s not clean, but not everybody wants to be connected to the internet in order to install software packages.
I don’t know how it is elsewhere, but in France most people have a PPP connection and it’s very hard to convince your girlfriend that you need the phone in order to install your favourite video game.
Hi,
If you wanna see a really SIMPLE way of handling shared libraries, have a look at AmigaOS!!
And the same goes for the icons system…
Regards,
Nogfx.
Hi, I’m the autopackage guy, and you’ll find a lot of these arguments already discussed a few weeks ago. In short:
1) Appfolders/Bundles are not a good idea. They are not simple or elegant IMHO, not even on MacOS. The idea that you should have to manually manage dependencies is flawed. OS X apps simply don’t have dependencies, which puts Apple in total control, as they dictate what apps can and cannot do, and how. Linux isn’t like that (obviously).
ROX has AppFolder-style GUI elements, it’s true, but those appfolders are usually just wrappers around the binaries. And the problem of dependencies has not been solved with that either.
AppFolders have other issues, for instance menu sharing (users may want access to apps but not have them in the app launcher UI), and corporate deployment scenarios.
Finally, considering that the hierarchical directory system itself is probably going to be dumped in favour of database-style filing systems with live queries (see Storage+/Reiser5), appfolders tie software management to an obsolete concept.
2) There are a whole host of reasons why installing Linux software is difficult. File paths are one, but they are the easiest to fix. The biggest problems are at the binary level, for instance:
– C++ ABI breakage
– Hardcoded prefixes (with automake macros)
– Library symbol clashes.
The last one we discovered a few days ago at autopackage HQ, and it looks like the only real solution is to rewrite part of the dynamic linker.
Forgive me if somebody has already mentioned this. I think the easiest solution on Linux for shared library dependency problems is to do what Opera does with their Linux version. Offer the binary in a dynamic library version and a static library version. If you try to install the dynamic library version and it doesn’t work because you have conflicting libraries, then install the static library version. Or, if you don’t want to mess with it, install the static library version from the outset.
O.T: I just want to congratulate Matthew Gardiner on his adroit use of the word whinge. I love that word. It says it all.
There are great sides to everything: BeOS, Amiga OS, RISC OS, Mac OS, Linux, BSD and even Windows has it’s good sides. For me personally, the best solution seems to be a Linux distro built on top of ROX desktop system.
ROX has full drag and drop right now. It would need to get this DnD support out to the world. Then the next thing, that’s needed is a file system with live queries and built in mime-types. Something like BeFS would be nice. Probably built on top of ReiserFS or Ext3.
Once these two are down, the focus can shift back to getting more apps to work well with ROX. And to getting some prettier graphics on ROX.
Hmm. I’m gonna build a Gentoo-based ROX-only system on my spare partition tomorrow. Might be a nice adventure trying to get the thing user friendly and fully integrated with ROX.
That is why the caveat is there where I said for most things.
1. You are right it assumes you have an internet connection.
2. It also assumes Redhat and others get the clue that people will want to install packages not specifically created by them so they would either tie their own package management in or include synaptic and apt with distro. Ain’t gonna happen even if they do link to freshrpms on their site.
I do not mind libraries that are going to be used by multiple packages being in their own package.
What I do mind is the fact that in linux most of these packages are bundled exactly as the original developers created them. What is the problem with that?
The deal is that with the gtk+ package you do not get gtkmm, GtkPerl, gtkhtml and half a dozen other packages that eventually some GTK app will want.
Also, there are the gnome-libs, which do not include, say, libgsf, libgnomecanvas, libgnomeprint, or libgnome<what the hell ever> that some GNOME app will need.
I do not like the idea of bundling shared libs with an app, or you end up in the old-school Windows situation of one app overwriting DLLs that another app needs (AOL used to do this all the time).
A better solution is to bundle all libs of certain categories together. If you are going to create a package based on a much newer version of a lib connected to these packages, then you make a statically linked version of the package.
Opera does this as Iconoclast said above.
I think that the way distros package their base system libraries and the application libraries can seriously improve.
Still, applications like apt and synaptic help out a great deal for those who have a reliable internet connection that does not cost outrageous amounts of money to keep up.
If the current directory was placed before library directories in the path, this could allow a potential attack.
This type of attack could come in the form of malicious code with the name of a common library being placed in the current directory. This ‘malicious library’ could then be executed with the use of a trusted app or command.
This is a disgusting attempt to ride the “FUD bandwagon” with respect to fragmentation that is being addressed by the LSB. This author is apparently clueless about why *nix is so fundamentally secure against viruses compared to the Windows environment. The entire article is voided by the following statement (which works not only on Debian but also on any RPM-based distro thanks to apt4rpm):
apt-get update && apt-get install <package>
done. Get over it
Don,
Actually, on the Amiga, you can program your application to look for a specific version. Typically, this is not done, but it can be. Typically, libraries on the Amiga did not break old apps when a new library came out. If it was that different, it became a different library.
I guess I should answer this kind of question as my project has a similar design –
basically, there is no good way around the fact that sometimes, to install a program, you need to download a lot of stuff, other than perhaps buying SuSE, which comes on a DVD.
Bear in mind that if apps start shipping libs within themselves (not a good idea!!) then you’d end up downloading more in the long run anyway; those libs have to get onto your system somehow, so it’s best to only download what’s needed.
This is a very interesting and well written article; there are parts of it that I don’t understand, but it gives me greater insight into the matter (and makes clearer where I need to do some more reading.)
OS X is a Unix. What have the folks at Apple done that makes it so easy to install programs? What are the pros and cons of their choices?
When I install a program and it asks what folder it should live in: if I want my (computerphobic) husband to have access to it, I install it in the main Applications folder. If I do not want him to have access to it, I simply open up my personal folder and install it in the Applications folder there; anybody not logged in as me can’t see it or use it. Fast and easy.
How has Apple done this? What are the pros and cons?
I also hear that I *should* be able to drag an application out of the main applications folder into my private directory’s applications folder and that all the appropriate links & paths will auto update (and vice versa), but I haven’t tried this yet.
If this is true, what has Apple done? Pros and cons?
(Finally, 12/25/02 will mark the start of my 2nd year as an OS X user. I have made a grand total of ZERO trips to the command line to get anything installed or configured.)
I can’t see the point of this article…
I don’t have dependency problems if I install my distro’s packages. They are easily installed under /usr.
If I install other kinds of software it usually goes to /usr/local, with the binaries in /usr/local/bin, libraries in /usr/local/lib, etc.
If the program wants to ship with its own libraries it can do so statically linked, or it just installs into /opt with its own /opt/*/bin, /opt/*/lib, etc., and then a symbolic link to the binary in /usr/bin or /usr/local/bin.
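For example, a self-contained /opt install of the kind described above works out to something like this (the application name is invented for illustration):

  # the app lives entirely in its own tree
  /opt/someapp/bin/someapp
  /opt/someapp/lib/libsomeapp-private.so

  # expose the binary system-wide with a single symlink
  ln -s /opt/someapp/bin/someapp /usr/local/bin/someapp

  # removing it is just deleting the tree and the now-dangling link
  rm -rf /opt/someapp
  rm /usr/local/bin/someapp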
If you want a decent installation program, just go with Loki-installer. It’s easy, GPLed, and runs on console and GTK.
I can’t see any problem with Linux. The problem is with the people packaging the programs, and the kind of packaging they choose.
OS X is a Unix. What have the folks at Apple done that makes it so easy to install programs? What are the pros and cons of their choices?
You’ve run into problems right away with your first sentence. Is OS X a UNIX? It’s arguable either way, really. It may be (partially) based on UNIX, but then so is a TiVo; does that make a TiVo UNIX? I’d say no, being based on a technology is not the same as actually being that technology. Something else to chew on is that for quite some time (dunno if it’s still the case) Windows NT and up were more POSIX compliant than OS X – in that case, is Windows a UNIX?
Well, either way I don’t really care. OS X uses a different software management method (in theory) to every other form of UNIX, which makes it different enough that we have to draw distinctions between them.
Pros of the OS X approach:
– Simple, both for developer and user
Cons:
– Limiting for the user/admin/developer
– Requires central control
– Makes updating the OS in chunks much harder
– Inefficient
Well, Apple don’t really care about limiting the user – OS X already limits what users can do in many other ways (for instance, if you don’t like the apple gui/theme, you can’t change it). It does care about keeping things simple. Well, appfolders are an easy way to do that, but at the expense of many other things.
Requiring central control could be seen as a goal of appfolders if you’re a conspiracy theorist. I’m not, but there’s no denying that it means to get OS upgrades you must buy a whole new copy of the OS. For instance on Windows MS released separate installers for old versions of the OS, so DirectX was available separately before it became integrated, ditto for IE, ditto for MSI, ditto for many other smaller things that you will never have noticed.
Because there’s no way for an installation to verify that you have the required pieces, however, that’s much harder on the Mac. Yes, you can put it off until the first run of the program, but that’s not very helpful, is it? The app doesn’t get any chance to selectively install stuff without storing it inside the app itself, in which case you’re now just wasting space by having two copies of the same thing.
There are ways to hack around this, but what it basically means is Apple get another chance to dump all over their customers. Without dependency tracking, apps will end up saying stuff like “OSX 10.3 minimum required”, even if it’s really just a few frameworks that have bumped up their version numbers. Hence you pay again to upgrade. Hence Apple get more money. Which is, for them, the aim of the game.
Another problem is corporate deployment/menu virtualisation. If you have 10 apps, but only 6 are actually shared between all users, it makes sense to hide the other 4 from the users that don’t need it to avoid cluttering the GUI. You still want a way for those users to run those apps though just in case. When I lived at home, we did this all the time, for instance my brother used a selection of useful audio apps which I occasionally would use too, but mostly I didn’t so there was no need to put even more stuff in my already overloaded menus.
With appfolders, because the filing system enforces certain rules, it’s harder to do stuff like that. I guess you could do it with link farms, but it’s harder.
Appfolders are inefficient. This is less of a concern on an OS without dependency management, because apps will rarely, if ever, make use of functionality not provided as part of the base OS. If they do, the frameworks will be bundled with every app, so you could end up with, say, 5 apps all having their own copies of the same lib. If a problem/bug/security flaw is discovered in that framework, you have to manually alter the contents of every app, or upgrade every app manually. With dependency management, the weekly upgrade will deal with it automatically.
Folders are going the way of the dodo soon on both Windows and Linux (if storage+ and reiser5 pan out). Mac has no such plans as far as I’m aware, but it’s a remote possibility.
Finally, you have to actually go onto the web and download appfolders. I find that more hassle than just typing the name.
In short, appfolders have (imo) several major disadvantages for users and developers, with their only redeeming feature being that they take almost no effort to implement (important when you write an OS from scratch, I guess).
Juan.
In case you are interested… With regards to points 1 and 3, the Linux distro I am using (SuSE) makes heavy use of ‘/opt/’ and includes the current directory in the path.
I am actually very happy with the way that software is distributed under Linux. I find installation is generally simpler and uninstallation is 100 times better than in MS-Windows (I couldn’t imagine going back to a system which has no equivalent to ‘rpm -ql’ or ‘rpm -qf’). However, I feel it would be nice if RPM had some way of installing programs into your user area, to enable better security.
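For readers who haven’t met them, the two queries mentioned above look roughly like this (the package and file names are just placeholders):

  # list every file a given package installed
  rpm -ql somepackage

  # find out which package owns a given file
  rpm -qf /usr/bin/somecommand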
The truth is that Linux development is MUCH faster than proprietary OSes. There are two or three versions of a Linux distribution per year (and many software updates within one version), while there is one little upgrade (e.g. Win95 -> Win98 -> WinME) every three years!
For dummies, there are simple tools to install and upgrade software (and even the entire distribution), like synaptic, aptitude, apt-get, Red Carpet, etc.
If you aren’t a nerd or geek who compiles from sources, use only the update/install tool of your distribution.
The OpenOffice/StarOffice installer is an example of an easy (a la Winblows) graphical installer, and it is identical to the Windows version. What is the problem with Linux?
I guess that the first thing to do is start with a dependency standard.
Even if you replicate libraries into application folders, I would imagine that you would want to get that application’s dependencies for automation purposes.
Which standard should be the standard?
Cool people use Debian and never have package troubles again. lUsers work with other distros.
Sorry to be so flame-inspiring but if you don’t know about Debian and apt, it’s time to learn.
I’ve heard people say that Portage is even superior to apt-get. Can someone explain why? Does Portage really prevent “dependency hell” or are there still problems? Can you use Portage for binaries? Do you lose the advantages of Portage by not compiling from source? Any information would be appreciated
Actually, that exploit was dealt with a LOOOOOOONG time ago. It is no longer an issue.
/usr/local and /usr/share come from the days when much more of the filesystem was spread across a network. /usr/share is for system/architecture independent things, often stuff like documentation etc. whereas /usr/local was binaries and the like since they were system/architecture dependent and installed locally.
Mounting the stuff anyone can use on a network share means that you save disk space on the local machines. Not such an issue these days but once upon a time people worried about these things.
One additional problem is that Unix/Linux doesn’t even get the concept of folders! Nearly all apps get installed in places like /usr/bin, so even if you install an app you can’t find it, because you have to go through a list of hundreds of files. Which libs belong to which application is probably also unknown; there are hundreds of libs in some other folders.
That’s even worse than the Windows concept of windows\system32! At least on Windows that directory is now protected, so no installer can overwrite the libs with older versions, and applications can use their own versions of the same DLL. Yes, we don’t talk about Windows 95 anymore! We should compare current concepts as implemented in Windows 2000 and XP. And if you compare installing Linux software and Windows software… I can’t really find this “DLL hell” any more in Windows! Tell me one application which still causes such problems on Windows. On Linux it could be nearly every application: most apps I installed on Linux gave me a Linux library hell.
The real problem is that Linux fans really like this fact. They like it that no normal user can really use this OS, because it is so “advanced” that only really experienced users can use it 😉 I am amused by this.
I think it is a good idea. Why do some of you guys act like Superman next to kryptonite when you hear Linux needs to get more user-friendly? It won’t kill Linux, OK; it will be a new beginning for it. If you don’t like easy stuff, go hunt for your food, skin it, gut it, debone it, bleed it, then cook it over a fire outside. That is how the Native Americans did it.
The one BIG limiting factor to all that is… the end user can not go and get a new package he/she finds on the web. That is the problem here. Sure, apt is great until I download a binary package in deb format of a new app… does apt resolve the dependencies for me when I double-click on the package, if anything happens at all? Hell no… it throws up a bunch of errors about dependencies.
Perhaps a GPLd net installer could be created that would make building an installer simple for developers. Then the dev team can point the user to the correct locations of the needed libs.
That is sort of what Mike is doing with autopackage, except it uses the dependency resolution network rather than a predefined location like Mozilla or OO/SO use.
The only problem with the Mozilla installer is that to run the install script you need to call it from the command line, and it is named the same thing as the binary file.
Just call the script setup.sh, then allow for double-click installs and metadata so that the icon can LOOK like an installer.
Putting “.” into the PATH or LD_LIBRARY_PATH by default is less than entirely wise. As others have pointed out, this leads to security issues. Besides, it isn’t hard to type “./somefile”, and applications can always exec() themselves after modifying their environment, if necessary.
Many have railed against the Mac OS X style of app-folders. Personally, if there’s no shortage of disk space, I think that this is a great idea. However, if you’re running short of disk, then it’s obviously not so great — but then, why would you need icons bundled with the application if you’re running short of disk?
The idea that app-folders require centralized control is, well, astonishing. I’m afraid the rants against app folders are, despite being well written, incomprehensible. If you want centralized control, you could do it that way, but you always have the option of “installing” an application locally without any centralization whatsoever.
The debian geeks going on about apt-get, well, yeah, you’re very clever. But if the application in question has not been debianized, you’re right back to where you started.
As for meta-data, well, I am a fan of the Amiga .info files. (Although I suppose the Info.plist approach isn’t so bad.)
And, I suppose, I’m not convinced that Linux “needs” to “win the desktop war”. It’s not a race. It’s not like Linux can lose — unless it chases after new users so much that it becomes less appealing to experienced users. Personally, I want stability, maintainability, and sensibility — change for the sake of change is NOT progress.
The key distinction, one that I haven’t seen brought up yet, is between multi-tasking single-user systems and multi-tasking multi-user systems. Frequently, the objections to Linux (and UNIX) relate to how things are done on a multi-user system that seem inappropriate for a single-user (or single-owner) system.
A single-owner system only ever has one person logging in to the system — perhaps “at a time” for family-owned systems — and no concurrent users. There’s no need to run a bunch of services on the machine, there’s no need for an administrator to manage the users and resources. Applications may be installed willy-nilly, and removed just as quickly — at the whim of the owner.
A multi-user system has to deal with multiple, concurrent, possibly hostile or incompetent users. Resources must be carefully managed [which is where the approach used (but not invented by) StarOffice is useful — install centrally, and then store local data in the user’s home directory] and not needlessly duplicated.
A single user can afford to waste tens of megabytes of disk. A multi-user system might not be able to afford to have each user waste tens of megabytes of disk.
It all comes down to how you’re using your system.
First, most web sites that offer software for Linux to download list the dependencies, especially if they provide RPMs.
If you don’t read the website before you download the app, you are going to have trouble, sure. Most places also link directly to their dependencies, and many even include install instructions. Could it be easier? Sure.
I download an rpm from a site and double-click the thing, and it either installs or tells me which dependencies I am lacking — RH8, BTW.
And it looks like an installer too.
I like the idea, and I think someone said that Mandrake is wired into urpmi, which is another apt-like system for RPM, so maybe they resolve the dependencies when you click on the rpm. That would be great. I like this idea a lot.
Like I said before if RH got a clue and tied their package management into the apt system then a lot of these issues would go away. Like deb-man said, the package management system could follow the dependencies and download them for you.
It has become habit for me to check synaptic for any new software I am thinking of installing in order to avoid such issues. gnome-crontab may not give me dependency issues (download to desktop and double-click to install) but the latest gnumeric certainly would.
Good points deb-man.
The idea that app-folders require centralized control is, well, astonishing. I’m afraid the rants against app folders are, despite being well written, incomprehensible. If you want centralized control, you could do it that way, but you always have the option of “installing” an application locally without any centralization whatsoever.
The central control stems from the fact that you can only assume that the stuff Apple ships with the OS is present, as there is no dependency management. That means that it’s not possible to introduce competing frameworks of any complexity into the system without horribly bloating the app – unless you use installers, of course.
On Linux, on the other hand, you cannot assume anything, because there is so much variety and it’s decentralised. So you need dependency management.
… as you risk being listened to. Unix has been around for a long time, and has seen many of the “improvements” suggested in the article fail. Mostly, developers need to unlearn poor habits learned on Windows. Libraries are not shared to save disk space, and they never were; they are shared to reduce the complexity of the system and minimize runtime interactions and security risks. Consider the recent zlib security flaw, the fixing of which proved highly cumbersome because a number of people thought it would be clever to include zlib code instead of linking to it (the kernel was the only sane exception). Others have noted that “.” in $PATH, or even worse in $LD_LIBRARY_PATH, is inappropriate, and I will not add to that. It’s just that you have to unlearn the Windows “application” concept and learn the Unix “tool” concept: each program cooperates with the system and with many others it knows only by interface, meaning the developer has to learn said interface and follow it. If in doing so you have to give up some feature, rest assured it won’t be missed. One Windows was enough, thank you.
There’s an easy way to end that stupid problem with the shared libraries:
Use version numbers inside the library, and make newer versions of a library backward compatible with the old ones, so the only thing the user has to do is always install the latest version of the lib.
Yes I know it sounds stupid, just think about it.
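That is, roughly, what ELF sonames already try to do. A typical on-disk layout (the library name and version numbers are invented for the example) looks like:

  /usr/lib/libfoo.so.1.2.3                 # the real file, full version
  /usr/lib/libfoo.so.1 -> libfoo.so.1.2.3  # soname link; this is what programs load at run time
  /usr/lib/libfoo.so -> libfoo.so.1        # development link, used only when building

  # as long as libfoo.so.1.2.4 stays backward compatible, dropping it in and
  # repointing libfoo.so.1 at it upgrades every program that uses the library

The scheme only works when maintainers actually keep the backward-compatibility promise, which is exactly where it tends to break down.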
—–8<—–
Anyway, this has no solution, nobody agrees about how to solve things like this in the Linux world, everybody is growing his ego looking at his belly button.
Look at the state of the desktop and the app integration between different distributions…
Nah.
Cons:
– Requires central control
FUD! Define “central control”. All an admin need do to install, say, Microsoft Office for all users is drag it to the /Applications folder. If I want to install it locally, I drag it to the /Users/mynamehere/Applications folder. What is “central”?
– Makes updating the OS in chunks much harder
Total BS. The Software Update utility updates the OS in chunks. I installed 10.2 and have since updated to 10.2.1 and 10.2.2, and am expecting 10.2.3 shortly. If you mean that you can’t update a certain specific lib? Sure you can, just like on any other Unix; you just take responsibility for fscking your system up. That’s what releases are for: they are known to work, rather than getting something that is partially incomplete.
– Inefficient
(I don’t really agree with this line of thinking but…) Disk space is cheap and executable code is small. Content/data is what really eats up disk space.
Another problem is corporate deployment/menu virtualisation. If you have 10 apps, but only 6 are actually shared between all users, it makes sense to hide the other 4 from the users that don’t need it to avoid cluttering the GUI.
So, as the admin, put the apps in separate folders and symlink them into /Users/mynamehere/Applications.
use of functionality not provided as a part of the base OS. If they do, the frameworks will be bundled with every app, so you could end up with say 5 apps all having their own copies of the same lib.
An interesting feature of frameworks in .app bundles is that they export their version of the library to be used by everyone. However, if that exported version is incompatible (or missing, because the app providing it was removed), then the app will revert to using its own copy. You get all the benefits of using the latest version and none of the hassles. An app will always work.
If a problem/bug/security flaw is discovered in that framework, you have to manually alter the contents of every app, or upgrade every app manually. With dependency management, the weekly upgrade will deal with it automatically.
Bzzzt! Wrong! See above.
Finally, you have to actually go onto the web and download appfolders. I find that more hassle than just typing the name.
You know not everything is available as an rpm or deb. It pisses me off that all Linux people seem to think it is. Some of us do use commercial packages.
How about this:
1) Application developers make it a point to statically link all but the absolutely most common libraries into their apps. Bandwidth is (usually) cheap and disk space is (very much so) cheap.
2) Enforce versioning across systems. Make it known that if you want your app to work, it had better identify to the loader what versions it can work with. Make promises to developers regarding major/minor library and compiler versions and ABIs/APIs, and keep those promises.
3) (and here’s the magic part) Make the loader a hell of a lot smarter than it currently is. It should work like this:
if (the executable's required libraries are installed system-wide and compatible)
{
    load the process image from the app + the system SOs
}
elsif (the executable's required libraries aren't installed, but it has them statically linked internally)
{
    load the process image from the app + the app's own libs
}
elsif (the executable's required libraries aren't installed and the app doesn't have them either)
{
    bomb and tell the user to go get the proper libraries
}
1. Notice that all this is a matter of convention between developers about how to share libraries. If you don’t share anything, you can install with a simple “unzip app.zip” or “tar zxf app.tgz”. If your style of sharing is much different from mine (as an example), it will cause me many problems in using your software. Yours will be the program I erase at the first appropriate occasion.
2. I think a “one common proposed” solution is a bad thing, so your newly installed software should interfere with mine as little as possible. It should not be “the one big stone on the road”. And it should be flexible enough to be customized for more integration.
3. The simplest way to achieve the goals of point 2 is to install each application in a separate subdirectory.
Advantages: 1) installation is “tar zxf app.tgz”;
2) removal is “rm -rf /path/to/this/app”;
3) “/I/hate/all/these/long/paths/that/are/hard/to/type”.
Drawbacks: 1) integration; 2) sharing libraries.
4. Solution. Men, you are the happy ones: you have hard and symbolic links there in all your unices! I have none of them. (I emulate a kind of them with batch files under DOS.)
Exposing your main binaries to the user and sharing libraries can be achieved with:
1) link with (BASE-APP-DIR)/lib/link-to-that-lib.so.X.X.XX;
2) link/load libraries listed in (BASE-APP-DIR)/usedlibs.lst;
3) third way (invent your own).
Advantages: 1) you don’t need all the additional binaries (the ones that should be in $PATH) in “/bin”, “/usr/bin” et al.;
2) all listed in p.3.
The only thing you need is a way for the application to get its own base directory name (“where am I?”).
But all this can be done with scripts! Men, you have long command lines there in your unices! Men, you have a good shell there!
I don’t know exactly, but you can create a separate environment there!
Almost all of your “/usr/bin” could be scripts that set environment variables and then carry on.
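As a rough sketch of that idea (the application name and paths are invented for the example), such a launcher script in /usr/bin might look like:

  #!/bin/sh
  # /usr/bin/someapp -- thin launcher; the real program lives in its own tree
  APPDIR=/opt/someapp
  LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
  export LD_LIBRARY_PATH
  exec "$APPDIR/bin/someapp" "$@"

If the script is shipped inside the app tree and only symlinked into /usr/bin, it can even find its own location with something like APPDIR=$(dirname "$(readlink -f "$0")")/.. instead of a hard-coded path.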
Evil developers of course won’t do such a thing as providing a linking list and dependency mapping, but you can use links, and if a newly installed application uses an existing name with a different meaning, then patch it; keep your hex/bin editor handy.
You’re lucky there in your unices: your apps’ code is hardly ever self-modifying.
There is no problem *at all* with Linux binaries; they work fine. The problem is *how* you distribute them. This is how I would fix it. It is terribly simple to do, and the first distro that does this could win the Linux market easily. No, this is not a stupid post; read the whole thing.
– Developers release sources in tar.gz or tar.bz2 format, and this file happens to be linked from freshmeat and sourceforge, for example.
– Distro X makes a little program that “greps” freshmeat and sourceforge for new packages.
– When you buy Distro X, you get Y votes; that is, the serial number found in your Distro X box can be used Y times to vote. You pay for the essential programs of your distribution, and the rest is used to pay for the votes you will use.
– The result of the “grep” is put on the Distro X web page, and users can vote for a certain package that they want to be built. Of course, voting uses up some of the Y votes you got from the Distro X box.
– After there are Z votes on one package, someone at Distro X builds the package and puts it in whatever format they want (for example .rpm) for free download (they already got their money from the votes they “sold”).
– Installing, for example, an .rpm package is then a joke: click on the rpm file and, if Distro X properly configured their stuff, it will automagically install the software, provided you give the root password (security first, always!).
– The user can buy more votes by sending Distro X some money.
Wow, that was hard, wasn’t it? I bet this could take one month to set up, and if you want overkill and simplicity, write a client for it. With this system you’ve got:
– The full spirit of free software, you’ve got the sources if you want to mess with it.
– If you’re cheap, build it yourself, or wait until people vote for the software you need.
– Unlimited flexibility, each distro can use their own package management and put files where they want.
– Even easier than Windows: every program installs the same way, and there’s no stupid “Where do you want me to install this?” or “Please insert CD key” every time you want to install something.
– This will give Distro X a lot of money: if, for example, they wait until $1000 in votes has been spent on a package, and it takes someone at Distro X one hour at $20 to build it, that’s $980 straight in. WOW!
If some good distribution doesn’t put this stuff up, Lindows will (they already have a similar system, Click and Run; they only have to implement the vote system and it’s done!) and win the Linux market. This method is simply the best and is dead easy to set up (a few Perl scripts, and maybe write a client to make it even easier). Now, send this post to all the commercial distros and watch the dog fight…
Distribution by physical place:
FALSE. The FHS defines the paths that way so you can install programs on a single computer and share them among 250 more, which you can’t do the Windows way (you need to install the same program on each computer).
X11R6 is an anomaly, stemming from people who thought like the article’s author.
Installs need to be global:
FALSE. I myself have several software packages installed only in my home dir. Any UNIX user will tell you that’s the case. What I agree with is that RPM/DEB package installs are system-wide.
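The classic way to do that with a source package, for anyone who hasn’t tried it (the prefix path here is just an example):

  # build and install entirely under your own home directory
  ./configure --prefix="$HOME/apps/someapp"
  make
  make install

  # then put its bin directory on your PATH, e.g. in ~/.bashrc
  export PATH="$HOME/apps/someapp/bin:$PATH"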
Several versions of a program cannot be installed:
FALSE. You can install several versions of the same program, using different libraries (you know, libraries have version numbers in their file names). Just name your programs differently.
What software release managers ought to be doing is testing their software packages with different distributions and providing links to the libraries required (or bundling them in a zip file and making an install script, which most big name software companies do).
Current dir is not in path:
Thank God. That’s one of the reasons you don’t have trojaned programs on your computer.
No file metadata:
FALSE. Linux binaries DO have metadata (in the crooked definition the author is using for the word). There is no icon defined in the binary, and that’s another problem. Icons are installed in standard places (freedesktop.org).
Linux package systems take care of so much more than installation. They help in audit and deinstallation tasks.
This is a great article. When the Linux people realize that features have to be popular as well as usable, then they might start to contend with Microsoft.
I heard a quote once, but don’t quote me on this: “The best thing about Microsoft is that they take other people’s products and make them their own.” They take products, find out what is popular about those products and emphasize those qualities, and hey presto… you end up with the Killer Application.
The little things such as ./ in PATH count, they really do.
Davide, I agree with -everything- you said. Unfortunately, Linux is not Unix, as the LINUX acronym indicates. The development of UNIX (pretty much any UNIX) has been slow and careful. Development has been conservative, with one eye on code/version stability and the other on manageability.
In Linux, every dork thinks he needs to update a certain library or toolkit, because that would be so cool. The only thing dorkier than that is the app developer who thinks it’d be so cool to utilize some cutting-edge feature, available only in the newest library or toolkit, and to heck with backwards compatibility – or ANY compatibility, for that matter. The kernel development has been just like that, too.
Read the article more carefully; I’m not saying all these are problems in unixish OSes, because they are not. All these suggestions are meant for new desktop users who don’t really want to learn how to be sysadmins, not for the expert leet unix hax0r.
> Unix has been around for a long time, and has seen many of the
>”improvements” suggested in the article fail.
Like, when? I don’t recall any major distro or unix vendor doing it; these suggestions are not really meant for server usage anyway.
>Libraries are not shared to save disk space, they never
>were; the are shared to reduce the complexity of the system
>and minimize runtime interactions and security risks.
>Consider the late zlib security flaw, fixing which proved
>highly cumbersome because a number of people thought it
>would be clever to include zlib code instead of linking
>(the kernel was the only sane exception).
So what? Desktop users install programs all the time; that is way more insecure. The bundled libraries are meant to be the rarer ones, not the ones you can install system-wide (like zlib). I feel you are taking the argument to the opposite extreme.
> it’s just that you have to unlearn the Windows
>”application” concept and learn the Unix “tool” concept
No, the UNIX tool concept works perfectly for the developer, or even the smart user, but it is definitely NOT something a regular Windows user is willing to learn when switching to Linux. They want applications and a graphical interface (which can interact with the unix tools anyway) where they have everything within reach of their hand; they don’t want to RTFM or read through dozens of manpages to find the command-line switch that does the work… which is even harder to memorize. So get over it: unix tools are not for the regular user. Apple and Microsoft have shown this for a long time, and it’s stupid to ignore it and pretend that users can “dewindoize” themselves and use a computer the way a sysadmin does.