Have you ever wondered why installing software in other operating systems such as Windows, MacOS or even BeOS is so easy compared to Linux? In those OSes you can simply download and decompress a file, or run an installer that walks you through the process.
This doesn’t happen in Linux, as there are only two standard ways to install software: compiling from source and installing packages. Both methods can be inconsistent and complicated for new users, but I am not going to write about that, as it has been done in countless previous articles. Instead I am going to focus on why it is so difficult for developers to provide a simpler way.
So, why can’t we install and distribute programs on Linux as easily as we do on other operating systems? The answer lies in the Unix filesystem layout, which Linux distros follow strictly for the sake of compatibility. This layout has always been aimed at multi-user environments and at distributing resources evenly across the system (or even sharing them across a LAN). But with today’s technology and the arrival of desktop computers, many of these ideas no longer make much sense in that context.
There are four fundamental aspects that, I think, make distributing binaries on Linux so hard. I am not a native English speaker, so I apologize for any mistakes.
1-Distribution by physical place
2-“Global installs”, or “Dependency Hell vs Dll hell”
3-Current DIR is not in PATH.
4-No file metadata.
1-Distribution by physical place
Often, directories contain the following subdirectories:
lib/ – containing shared libraries
bin/ – containing binary/scripted executables
sbin/ – containing executables only meant for the superuser
If you search around the filesystem, you will find several places where this pattern repeats, for example:
/
/usr
/usr/local
/usr/X11R6
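To see the repetition for yourself, a quick shell loop like this one (just an illustration; not every prefix exists on every distro) lists whichever of these directories are present:
# List the repeated bin/lib/sbin pattern under each common prefix;
# directories that do not exist on this system are silently skipped.
for prefix in / /usr /usr/local /usr/X11R6; do
    ls -d "$prefix"/bin "$prefix"/lib "$prefix"/sbin 2>/dev/null
done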
You might wonder why files are distributed like this. The reasons are mainly historical: “/” lived on a startup disk or ROM, “/usr” was a mount point for the global extras, originally loaded from tape, a shared disk or even the network, and “/usr/local” was for locally installed software. I don’t know about X11R6, but it probably got its own directory because it is so big.
It should be noted that until very recently, Unixes were deployed for very specific tasks and were never meant to be loaded with as many programs as a desktop computer is. This is why we don’t see directories organized by usage, as we do in other Unix-like OSes (mainly BeOS and OSX); instead we see them organized by physical place (something desktop computers no longer care about, since nearly all of them are self-contained).
Many years ago, big unix vendors such as SGI and Sun decided to address this problem by creating the /opt directory. The opt directory was supposed to contain the actual programs with their data, and shared data (such as libs or binaries) were exported to the root filesystem (in /usr) by creating symlinks.
This also made removing a program easier, since you simply removed the program directory and then ran a script to clean up the now-invalid symlinks. This approach never became popular enough in Linux distributions,
and it still doesn’t address the problem of bundled libraries.
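For illustration, here is roughly what that scheme looks like for a hypothetical package “foo-1.0” (the paths and names are just examples, and the commands would be run as root):
# The program lives in its own directory; only thin symlinks are exported.
cp -r foo-1.0 /opt/foo-1.0
ln -s /opt/foo-1.0/bin/foo /usr/local/bin/foo
ln -s /opt/foo-1.0/lib/libfoo.so.1 /usr/local/lib/libfoo.so.1
# Removal is just as simple: delete the directory, then the stale symlinks.
rm -rf /opt/foo-1.0
rm -f /usr/local/bin/foo /usr/local/lib/libfoo.so.1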
Because of this, all installs need to be global, which takes us to the next issue.
2-“Global installs”, or “Dependency Hell vs Dll hell”
Because of the previous issue, all popular distribution methods (both binary packages and source) force users to install software globally on the system, available to all accounts. With this approach, all binaries go to common places (/usr/bin, /usr/lib, etc.). At first this looks reasonable, and it does have advantages, such as maximizing the use of shared libraries and keeping the organization simple. But then we run into its limits: every program is forced to use the exact same set of libraries.
It also becomes impossible for developers to simply bundle the needed libraries with a binary release, so we are forced to ask users to install the missing libraries themselves. This is called dependency hell, and it happens whenever a user downloads a program (source, package or shared binary) and is told that more libraries are needed before it will run.
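This is what it typically looks like from the user’s side; “myapp” and “libfoo” are hypothetical names here:
#!/bin/sh
# Ask the dynamic linker which libraries of a downloaded binary it cannot find.
ldd ./myapp | grep 'not found'
# Typical loader error when the user runs it anyway:
#   ./myapp: error while loading shared libraries: libfoo.so.2:
#   cannot open shared object file: No such file or directory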
Although the shared library system in Linux is even more complete than the Windows one (with multiple library versions supported, pre-caching on load, and binaries left unprotected while they run), the filesystem layout does not let us distribute a binary together with the bundled libraries we used to develop it, libraries the user probably won’t have.
A dirty trick is to bundle the libraries inside the executable — this is called “static linking” — but this approach has several drawbacks, such as increased memory usage per program instance, more complex error tracing, and even license limitations in many cases, so this method is usually not encouraged.
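As a rough sketch of that trade-off (the program and library names here are hypothetical, and the static build needs the libraries’ .a archives installed):
gcc -o myapp main.c -lfoo                  # dynamic: needs libfoo.so on the user's system
gcc -static -o myapp-static main.c -lfoo   # static: self-contained, but a bigger binary
ls -lh myapp myapp-static                  # compare the sizes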
To wrap up this point: it is simply hard for developers to ship binary bundles with specific versions of a library. Remember that not all libraries need to be bundled, only the rare ones that a user is not expected to have. The most widely used libraries, such as libc, libz or even GTK or Qt, can remain system-wide.
Many would point out that this approach leads to the so-called DLL hell, very common on Windows. But DLL hell actually happened because programs that bundled core system-wide Windows libraries overwrote the installed ones with older versions. This happened in part because Windows not only doesn’t support multiple versions of a library the way Unix does, but also because at boot time the kernel can only load libraries with 8.3 file names (you can’t really have one called libgtk-1.2.so.0.9.1). As a side note, and because of that, since Windows 2000 Microsoft keeps a directory with copies of the newest available versions of the libraries, in case a program overwrites them. In short, DLL hell can simply be attributed to the lack of a proper library versioning system.
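For comparison, this is roughly what that Unix versioning scheme looks like on disk (libz is only an example, and the exact version numbers will differ per system):
ls -l /usr/lib/libz.so*
#   libz.so       -> libz.so.1.1.4   (development symlink)
#   libz.so.1     -> libz.so.1.1.4   (soname symlink the runtime loader follows)
#   libz.so.1.1.4                    (the real file)
# An incompatible libz.so.2.* could sit right next to these without overwriting anything.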
3-Current DIR is not in PATH
This is quite simple, but it has to be said. By default in Unixes, the current directory is not recognized as a library or binary path. Because of this, you can’t just unpack a program and run the binary inside. Most binary distributions resort to a dirty trick and ship a wrapper shell script containing the following.
#!/bin/sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
./mybinary
This could simply be solved by adding “.” to the library and binary paths, but no distro does it, because it is not standard in Unixes. Of course, from inside a program it is perfectly normal to access data through relative paths, so you can still keep data in subdirectories.
4-No file metadata
Ever wondered why Windows binaries have their own icons while on Linux all binaries look the same? This is because there is no standard way to attach metadata to files, which means we can’t bundle a small pixmap inside the file, and so we can’t easily hint to the user which binary (or file) should be run. I can’t say this is an ELF limitation, since the format lets you add your own sections to a binary; it is more the lack of a standard defining how to do it.
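As an illustration of what such a standard could build on (the section name “.icon” is purely hypothetical; only the objcopy mechanism itself is real):
# Embed a pixmap in a spare ELF section...
objcopy --add-section .icon=icon.png mybinary mybinary.iconified
# ...and anything could extract it again:
objcopy -O binary --only-section=.icon mybinary.iconified extracted-icon.png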
Proposed solutions
In short, I think Linux needs to be less strictly standard and more tolerant in the previous aspects if it aims to achieve the same level of user-friendliness as the ruling desktop operating systems. Otherwise, not only users but also developers end up frustrated by this.
For the most important issue, the libraries, I’d like to propose the following for desktop distros, as a spinoff that remains compatible with Unix.
Desktop distros should add “./” to the PATH and LIBRARY_PATH by default. This would make it easier to bundle certain “not so common”, or simply modified, libraries with a program, and save us the task of writing scripts called “runme”. This way we could get closer to simple “in a directory” installs. I know alternatives exist, but this has been proven to be simple, and it works.
Linux’s library versioning system is already great, so why should installing the binaries of a library be complicated? A “library installer”’s job would be to take some libraries, copy them to the library directory, and then update the lib symlink to point at the newer one.
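A minimal sketch of what such an installer would do, assuming a hypothetical libfoo.so.1.2.4 and a root shell:
#!/bin/sh
cp libfoo.so.1.2.4 /usr/lib/
ln -sf libfoo.so.1.2.4 /usr/lib/libfoo.so.1   # point the soname symlink at the newer copy
ldconfig                                      # refresh the runtime linker's cache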
Agree on a standard way of adding file metadata to ELF binaries. That way, distributed binaries could be more descriptive to the user. I know this leaves script-based programs out, but those could add something like a “magic string”.
And the most important thing: understand that these changes are meant to make Linux not only more user-friendly, but also more popular. There are still a lot of Linux users and developers who think the OS is only meant to be a server, many who consider aiming at the desktop too dreamy or too “Microsoft”, and many who think Linux should remain “true to Unix”. Because of this, the focus should be on letting these ideas coexist, so everyone gets what they want.
About the Author:
For some background, I have been programming Linux applications for many years now, and my specialty is Linux audio. Every few days (and a lot more around each release) I receive emails from troubled users with problems related to missing libraries, or to distro-specific or even compiler-specific issues. This kind of thing constantly makes me wonder about easier ways to distribute software to users.
This is fun: you say the article is full of inaccuracies, and then you prove me right on every point!
> FALSE. The FHS defines the paths that way to be able to
> install programs in a single computer and share them among
> 250 more, which you can’t do the Windows way (you need to
> install the same program in each computer).
> X11R6 is an anomaly, stemming from people who thought like
> the article’s author
We agree! Now, since when do desktop users share programs between 250 computers? Hard disks nowadays can hold everything with lots of space left over, so setting up a network of 250 computers, even for an office, can actually be slower, more hassle and more expensive!
>Installs need to be global:
>FALSE. I myself have several software packages installed
>only on my home dir. Any UNIX user will tell you that’s the
>case. What I agree with is that RPM/DEB package installs
>are system-wide.
Uhm. download favorite.tar.gz
tar zxvf favorite.tar.gz
cd favorite
edit INSTALL -> it says do:
./configure
make
make install
where did everything go? Probably to /usr/local. Local install? Where? You can hack it up if you want, even create ~/bin and ~/lib and throw stuff in there, or just configure with --prefix=~, but you still need to hack up what your distro comes with to achieve it, and it’s definitely not standard. Not to mention you _still_ can’t have different versions of a program installed.
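For the record, the per-user hack usually looks something like this (the prefix is just an example, and nothing on a stock distro sets these paths up for you):
./configure --prefix="$HOME/local"
make
make install
# ...and then you still have to wire it into your environment yourself:
export PATH="$HOME/local/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/local/lib:$LD_LIBRARY_PATH"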
>Several versions of a program cannot be installed:
>FALSE. You can install several versions of the same
>program, using different libraries (you know, libraries
>have version numbers in their file names). Just name your
>programs differently.
Yeah, that’s what I said! But libraries are not programs, you know. And how are you supposed to name your program differently when stuff like GNOME expects to do so much IPC (CORBA/Bonobo) with known binaries? You really can’t do that.
>What software release managers ought to be doing is testing
>their software packages with different distributions and
>providing links to the libraries required (or bundling them
>in a zip file and making an install script, which most big
>name software companies do).
Yeah, but those methods don’t work well, especially with closed software (where they have to provide binaries only), and they only provide support for ONE distro (usually Red Hat). Don’t believe me? Check IBM DB2, Oracle or even Alias Maya! They only give you support if you install known Red Hat versions.
>Current dir is not in path:
>Thank God. That’s one of the reasons you don’t have
>trojaned programs on your computer.
Uhm... if someone manages to put a trojan on your computer, or even gets access to it through any of your users, installing a trojan is an easy task on a desktop machine. You can’t really expect those users to be as security-aware as a sysadmin, and besides, they are at far more risk
from doing the usual ./configure ; make ; make install anyway,
since you need to be root for that last step.
>No file metadata:
>FALSE. Linux binaries DO have metadata (in the crooked
>definition the author is using for the word). There is no
>icon defined in the binary, and that’s another problem.
>Icons are installed in standard places (freedesktop.org
Read the frigging article!! Yes, you can place an icon in a .icon section if you want, so what? That’s far from being a nice solution. What if you want to place more than icons in there, like search criteria for programs or something else? That’s why a “standard way of doing metadata” would be nice!
And what you suggest is the same old thing: to install an icon in a “standard place” you need to be root, so where’s the security then? And what if you want different versions of the program with different icons? No more standard place there, I guess!
Seriously, think for a moment as a Windows user. We might dislike Microsoft in a lot of aspects, but they really did things right in many others, so it’s not bad to learn from them, and that’s what people like Miguel de Icaza see too. And what does Miguel get for it? A lot of bashing and “pro-Microsoft” adjectives. Grow up, people!
If you look back to the formation of the POSIX standard and why it was done, you will see that a similar standard for the end user (EUPOSIX??) would solve all of the problems in this binary loving (why?) world we live in.
For example:
Core Binaries -> /usr/bin (grep, gcc, su, tar, vi)
Extended Binaries -> /usr/local/bin (pine, emacs, gmc)
X11 Binaries (GNOME, KDE, others…) -> /usr/X11R6/bin
no /var/… B.S.
Core (base) Libraries -> /usr/lib (glibc, glib, gtk, qt)
Extended libraries -> /usr/local/lib
Core Docs -> /usr/local/doc
Core Config Files -> /etc
Resources -> /usr/share
Or something of the sort
The problem is that packagers of all kinds have their own ideas about where things should be kept. Clearly, this needs to be solved with a STANDARD along with some kind of versioning scheme. Or should we just recompile *BSD packages for you Linux guys and be done with it, along with some kind of gtk_pkgadd to keep the newbies happy.
There’s a good reason why the current directory isn’t in your path by default, and I wouldn’t use any distribution that did that. All you that are whining about it, PLEASE take a moment and think of the security implications, if you have that capacity. It’s people that try to dumb down good operating systems so they can be used by “average users” that’s made most Linux distros the bloated, gui-oriented crap that they are today.
The author claims that Linux has got an advanced library versioning system. Some commentators, me included, say it hasn’t. Having several incompatible libraries with their version number in the filename is not an advanced versioning system. It’s a crude versioning system.
Look at AmigaOS. All libraries (save those which are private to a single program and aren’t meant to be shared) are put in LIBS:, and they’re backwards compatible. When a new version appears, you get a new entry point for the new version, while the old entry point still allows you to use the library with the older interface. And they have their version number integrated into the binary. Type “Version diskfont.library”, and it will tell you its version. Thus, installers won’t overwrite libraries, either, since they’ll do a version check to see if the library is newer than the bundled one.
No need to have several versions of the same library: a working versioning system and a uniform place to put them. Not bad for a stone-age system, or what?
NOTE: NOT A REPLY. DON’T GO LOOKING FOR “All you Linux code monkies”
—————–
It is great that all you “anything against Linux is FUD” people are so happy with your OS, even though the method you use to install programs and update your system software is CRAP. I can’t believe that Linux lost its edge when it came to software development. I remember the times when “If this works better, we’ll do it” was the theme in the OSS community’s head... not “Well, this sucks, but we’re stuck with it.”
Give me a great, easy to use PROPRIETARY system any day instead of the hard to use, legacy oriented open source solutions we are stuck with.
This is the last time I’ll deal with these issues. I’ve answered questions on appfolders time and time again, and the only responses I get are either agreement or people simply repeating how it works. I know how Appfolders work thanks, that doesn’t make my points any less valid. To be honest, I can’t be bothered anymore.
FUD! Define “central control”. All an admin need do to install say Microsoft Office for all users is drag it to the /Applications folder. If I want to install it locally I drag it to the /Users/mynamehere/Applications folder. What is “central”?
Without dependency management, Apple are the only people who can feasibly update the OS and provide new features to it. See the $120 upgrade price to 10.2? There will be more of that in the future. See how PGP8 requires 10.2? Ditto.
Linux is a free market of components – it’s quite feasible for an app to pick and choose multimedia frameworks for instance. That isn’t the case on OS X, because it’s controlled centrally – ie apple are the only ones who can change it. I’d have thought this was obvious.
Total BS. The software update utility updates the OS in chunks.
Yes, and who runs software update? Oh, that’d be Apple.
(I don’t really agree with this line of thinking but…) Disk space is cheap and executable code is small. Content/data is what really eats up disk space.
I find that attitude tiresome. Saying “well X is cheap, so it doesn’t matter” is what poor engineers have said since the invention of the wheel. I don’t have acres of disk space, and if I did I’d need it for more important things than 10 copies of the same framework.
An interesting feature of frameworks in .app bundles is that they export their version of the library to be used by everyone. However, if that version is incompatible (or missing because the app providing it was removed), then the app will revert to using its own copy. You get all the benefits of using the latest version and none of the hassles. An app will always work.
No offence, but that’s a dumb way of doing it. So now if there’s a bug in a framework, you install a new app and it’s magically fixed. Wizard. Now you get tired of that app and trash it, and all your apps silently break again. Or of course the user can start downloading and updating frameworks manually (or wait for Apple to ship them on software update). Great.
You know not everything is available as an rpm or deb. It pisses me off that all Linux people seem to think it is. Some of us do use commercial packages.
Yes and the reasons for that are well documented, and being fixed. I for one do not have time for an OS that requires Apple to be in charge as a design feature. I’ll say it again – for the last time on these forums, considering we went through all this a few weeks ago – appfolders are nice in theory, but have so many practical disadvantages that when the last remaining issues with linux apt style software distribution are solved, it’ll wipe the floor with them in convenience, ease of use, security and efficiency. It’s more effort but so what? We don’t have deadlines to meet.
In GNU/Hurd (http://hurd.gnu.org), “/usr” is a symlink to “/”. As recently seen in its mailing lists, many Hurd developers think that “/sbin” should be a symlink to “/bin” as well. So the filesystem becomes cleaner.
And with the Hurd, the user has more privileges than on traditional Unix-like systems (such as Linux or *BSD), so installing third-party programs can be easier.
OTOH, for software that isn’t natively packaged (as in Debian GNU/Hurd), “GNU Stow” is a practical tool for keeping files cleanly organized. But this is an admin tool, not one for the end user.
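For those who haven’t seen it, the Stow workflow goes roughly like this (the package name and stow directory are just the usual convention, not anything the comment above specifies):
./configure --prefix=/usr/local/stow/foo-1.0 && make && make install
cd /usr/local/stow && stow foo-1.0      # symlink the package into /usr/local
cd /usr/local/stow && stow -D foo-1.0   # ...and later remove those symlinks cleanly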
“Look at AmigaOS. All libraries (save those which are private to a single program and aren’t meant to be
shared) are put in LIBS:, and they’re backwards compatible. When a new version appears, you get a new
entry point for the new version, while the old entry point still allows you to use the library with the older
interface. And they have their version number integrated into the binary. Type “Version diskfont.library”, and
it will tell you its version. Thus, installers won’t overwrite libraries, either, since they’ll do a version check to
see if the library is newer than the bundled one. ”
Including the version number string in the binary would be an
essential first step for Linux programmers, and could be done without
breaking anything.
AmigaOS uses a string of the form “$VER:” as in “$VER:12.34”
This is easily searched for.
If everybody started to include that string in their shared libraries,
then in a year or two further improvements could be made, heading
toward the ease of installation found in AmigaOS.
(Of course even in AmigaOS there are sometimes bugs in install
scripts, but generally the system works well.)
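Applied to a hypothetical Linux shared library, the convention would be as easy to adopt as it is to check for:
strings libfoo.so.1.2.3 | grep -F '$VER:'
#   $VER: libfoo 1.2.3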
Statically linked libraries are IMO part of the source code for the
program and as such are no concern of the user.
Better package management, with auto-compiling and auto-download of missing packages, could solve the problem.
anyone who thinks software installation on Windows is easy needs to go work in an IT support department
it is not
dll hell
visual basic hell
odbc driver hell
windows hell
The reason that PATH and LIBRARY_PATH don’t point to “.” is security. And there is a simple fix – add “.” as an rpath to your binary!
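In practice that looks something like this; the library name is hypothetical, and the $ORIGIN form (which the dynamic loader expands to the binary’s own directory) is a somewhat more robust variant of the same idea:
gcc -o myapp main.c -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
# ./myapp now finds ./lib/libfoo.so wherever the directory is unpacked,
# with no wrapper script and no LD_LIBRARY_PATH games.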
By the way, static linking is not the horror people make it out to be. Static linking is a perfectly fine solution to the problem – one that many people use successfully. In fact, Mathematica runs on every version of Linux because it is statically linked.
I think the real problem is that we just need a good packaging tool in Linux for the package _creators_. One that autofinds dependencies through ldd (and perhaps even running the program), and adds all necessary libraries into the package, and then on depackaging figures out which ones it needs to install on a particular system. If you think that’s overkill – that’s precisely how Windows applications do it.
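A rough sketch of the ldd half of that idea, for a hypothetical binary ./myapp (a real tool would also filter out core system libraries like libc):
mkdir -p bundle/lib
# Copy every library the dynamic linker resolves to an absolute path.
ldd ./myapp | awk '/=>/ && $3 ~ /^\// { print $3 }' | while read -r lib; do
    cp "$lib" bundle/lib/
done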
Alternatively, we can come up with a new file format that includes a virtual filesystem that overlays the current one for the process when executed, although the HURD would probably be a better OS for that than Linux.
It never ceases to amaze me how people fail to realize that something that is good for effectively single-user systems needn’t be bad for multi-user systems.
Currently, you can share an installation across a network simply by sharing the /usr directory (or its subdirectories).
Now let’s imagine a system where the files belonging to one package (or application) aren’t scattered throughout the filesystem. Instead, all files belonging to an application/package reside within a single directory. Let’s assume we’ve got a good implementation of file attributes and querying, so it doesn’t matter where this directory is actually located. In other words, an application “foo” could be installed in directory /apps/foo/, /apps/scientific/foo/ or ~/apps/foo/ and it wouldn’t matter (although in the last case the application would only be visible to a single user).
You basically said that sharing an installation across a network isn’t possible in this system. But which part of this system prevents you from doing that? You could simply mount /apps as a remote share, just like you can mount /usr as a remote share.
In fact, the app-directory system would be more powerful than what we currently have. For example, you could get application exports from more than one server, resulting in e.g. /apps-server1/ and /apps-server2/. In addition to this, you can still have locally installed applications.
So tell me again – why is the app-directory system not suited for a multi-user network?
This isn’t the first time I’ve had this discussion. Often, people wonder how configuration and volatile data (the stuff you find in /var) fits into this scheme.
Obviously, both configuration and volatile data needs to reside *outside* the application directory. The files in the app directory should never be modified at all (except for an update of the application, obviously). In fact, I’d also argue that installing an application should *not* install any configuration or volatile files.
If an application cannot find its configuration file, it should either continue using a default configuration or fail to start up, depending on the type of application: Basically all user-oriented/GUI apps should launch using a default configuration or bring up a configuration wizard. Security sensitive things like daemons should usually refuse to start up without prior configuration by the user.
Of course, all this doesn’t solve the dependency problems, but it would result in a much more robust system – especially if the idea is extended to *all* packages including frameworks (every framework like Gtk or Qt is in its own directory like /usr/lib/qt/, /usr/lib/gtk/). For example, mixing binary packages with custom compiled packages would Just Work instead of being a major PITA. Additionally, it would actually be possible to update & uninstall applications manually, and so on.
It’s people that try to dumb down good operating systems so they can be used by “average users” that’s made most Linux distros the bloated, gui-oriented crap that they are today. >>>
I take the opposite view. OS X, a usable *nix with a learning curve that didn’t have me howling in frustration in under 45 minutes is a smartened up OS. It’s all waiting for me when I’m ready to dig in and live at the command line (if I decide I want to go there), but in the meantime, it’s not getting in my damn way when I need to get something productive done.
Your comments remind me of my grandma’s rant against the electronic fuel injection and automatic choke in my car. In her view it was “simpler and better” when all you did to warm up your car on a cold winter’s morning was build a fire under the engine compartment.
And finally, any study of social history shows that it is the true elites in a society who realize that society as a whole steps forward when the best things are packaged in a form that the masses can enjoy and use. Hmmn... huge innovations in technology and a drastic drop in youth mortality right around the time that education becomes compulsory. Could there be a link?
An easy to use desktop linux with across the board standard packaging will lead to the wider adoption of *nixes which will very likely lead to some true innovation in software and break the stranglehold that Mordorsoft has on things.
Linux is a free market of components
Basically what you’re saying is that you prefer the Linux solution because it’s free (as in speech and/or beer) and you don’t like one company controlling the future of the OS. Fine, that’s understandable.
I use whatever works better for me. I have used Linux on the desktop for extended periods of time and run a server box or two. What you call proprietary I call polished. I used to have endless hassles with installs in Linux. I have had none in MacOS X. Copy the app over. Done. Appbundles work well for 99% of the people out there. The 1% that would use MacOSX Server might need something more industrial strength.
I find that attitude tiresome. Saying “well X is cheap, so it doesn’t matter” is what poor engineers have said since the invention of the wheel.
Except that sometimes when you’re engineering something you have to make tradeoffs. If the tradeoffs are small and the gains big… Think trading off memory footprint for runtime complexity or vice versa.
appfolders are nice in theory, but have so many practical disadvantages that when the last remaining issues with linux apt style software distribution are solved, it’ll wipe the floor with them in convenience, ease of use, security and efficiency. It’s more effort but so what? We don’t have deadlines to meet.
And I’ll be sure to look in on Linux on the desktop at that time, till then I’ll use whats less of a hassle.
The article almost mentions the good reason for having both /bin and /sbin: /sbin is a secure directory while /bin is not.
But the reason for having /usr is that you can install a program once and run it on hundreds of machines.
If I ask the administrator to install, say, Word for Windows, he has already installed it perhaps a hundred times, yet he still walks over to my machine, kicks me out of my unfinished work for 15 minutes and fiddles with my computer.
And you say that Windows is easy? If it were installed once in a shared /usr/bin, it would be available to everyone.
/usr/X11/bin was meant for those with a graphical terminal, but nowadays these programs usually live in /usr/bin, since a graphical terminal is taken for granted.
Petrus
I have to stop reading articles on this site. Every time I
follow a link from some other portal and end up here reading
a Linux article from a “contributing editor” it turns out
to be an ignorant rant. Suggesting putting “.” in your path
shows a complete lack of understanding about why Linux is
by default generally more secure than other OSes. And this
same old “it’s hard to install stuff on Linux” thing over
and over again is a straw man. This is what package
managers are for, believe me, APT is smarter than you are.
I agree that day to day some aspects of Windows are more
user-friendly than Linux, but handling the installation/uninstallation of
software in a robust, secure, and reliable fashion is an area
that Linux shines in. Whoever runs this site, please screen
the stories you are publishing, credibility is a key aspect
of journalism.
OSX is a desktop, consumer-oriented operating system. Linux and
UNIX derivatives are workstation/server OSes. I’m quite sick of
the increasing dependence on GUI tools and “user-friendly” hacks,
ignoring the wants and needs of competent system administrators.
The reason much of this has happened is because of the insistence
of a few that Linux try to be a friendly, desktop operating system.
This can’t happen without things that compromise the compatibility,
ease of administration, and simplicity of UNIX. OSX is a fine
desktop OS. It’s also nowhere near as stable or usable from the
command line as a server OS (and please, don’t bring up the half-assed
Xserve boxes).
People need to accept that there are different tools for different jobs,
instead of trying to mash Linux into something it’s not, destroying it
in the process. You cannot make one size fit all.
Actually, the s in “sbin” originally stood for “static”, not “secure”
or “superuser”. Note /sbin/sh on Solaris, you’ll see a statically
compiled shell (not so in Linux – /sbin/sh -> /bin/sh -> /bin/bash.
An abomination, yet another example of the dumbing down of that OS).
I’m sure we can have many other path wars, but just a bit of trivia.
Once I bought a software package I needed for $1,000, and
I had to throw it away 3 months later because I upgraded
the system and the binary would not run anymore even with
a compatibility library.
Binaries can be very quick to install. For this sole
reason I also distribute Open Source code in a binary
form. The program does not depend on anything but the
C library (strcpy, memcpy, printf) and a basic windowing
library. Thanks to this lucky simplicity, there is
only one exe file for Windows and it installs on
Windows 95, Windows 98, Windows 2000, Windows NT, Windows ME and
Windows XP. In the case of Linux, one version is not enough.
There should be a version for each vendor per release.
That would be more than about 50 different binaries. Still,
that won’t run on future versions without an extra
compatibility library.
In my case, this is caused by different versions
of the C library. Note that I just use strcpy, memcpy
and printf! So I stick to the binary I can make on
my system and redirect all help-seekers on an older/newer
distro to the source code.
It is hard to imagine why there cannot be a strong
major-number C library with minor fixes, and once in a
while a C library with incompatibilities and a different
major number. This would reduce the compatibility-library
installation hell (as opposed to DLL hell in Windows) a great deal.
So Linux does not like programs distributed in binary form.
Personally I see nothing wrong with having some companies
who want to make programs for Linux with a non-Open-Source licence.
I think disabling binary distribution programmatically
is not the way Open Source should show its superiority.
A matching product that is better than the closed-source
binary one should be created instead.
“.” is the current directory, which will not necessarily be the directory containing the libraries, and it changes over time.
Far better to put in a wrapper script:
#!/bin/sh
# Resolve the directory the script itself lives in, so the libraries
# are found no matter where the program is launched from.
dir=$(dirname "$0")
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$dir
exec "$dir/actualprogram"
This has the added advantage that you can put the libraries in a subdirectory such as:
$dir/lib
I think the only real problem with binary packages in Linux is simply that packages aren’t compatible. They aren’t compatible because they have dependencies on other packages whose names may differ from the ones you actually have installed. For example, maybe “gimp.rpm” needs “libgtk2.rpm” while you have “gtk2lib.rpm”. It really is as stupid as that.
That’s why you should all go to http://autopackage.org and see what they are working on.
If they succeed, we should all simply use autopackage’s packages and the installation nightmare will be over.
Just go to the site, read it well and come back here if you still have comments.
Yes, the autopackage project is promising, but until it produces a final solution you have to use the ones that are available now.
The ones today are Debian’s apt package system and Red Hat’s RPM system.
Then you have the combination of the two: Mandrake’s urpmi system,
which uses RPMs managed by a version of Debian’s apt system.
This means you always have a database that keeps track of what you have installed and what is available to install.
It also takes care of the dependency problem.
If you install an RPM simply by clicking on it in a GUI browser, it will automatically ask for the needed CDs or download the dependencies from wherever.
This is also useful when you try to uninstall something: it will tell you if other RPMs depend on that one.
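From the command line, that boils down to one-liners like these (the package name is only an example; what is available depends on the distro’s repositories):
apt-get install gimp    # Debian: fetches gimp plus any missing libraries it needs
urpmi gimp              # Mandrake: the same idea on top of RPM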
There is also the combination of apt and synaptic which gives you a gui version of apt.
/thac
OK, so I’m a normal user and I write a program called ‘ls’ and stick it in my home dir. Root comes along and does a directory listing in my home dir for whatever reason. Because ./ is in root’s path, he executes my ls program instead of /bin/ls. Mine is actually a program that changes my user account’s uid to 0. Now my user has root privileges.
OK, you say you just put /bin and /usr/bin in the path before ./; well, then I simply change the name of my program from ls to sl and wait for root to make a typo.
Linux is still a multi-user OS, not just a desktop wannabe.
Hum!
I belong to the group of people who think computers should exist to make things easier for the user.
Now, some of the people here are “desktop users” and some seem to be “server administrators” and some are just “die hard ‘lite konservative *NIX hackers”.
We need to understand the different needs of these different people.
I can understand that a (for example) web server administrator doesn’t think it’s hard to install binaries/packages, because he is happy just seeing the latest versions of Apache, PHP and Perl running, fiddles with Emacs, doesn’t care about graphical user interfaces or sound, and probably doesn’t even have a mouse attached.
On the other hand we have “Joe the home computer fiddler”, who always downloads the latest helper apps, game demos, themes and whatnot from the net, installing and deinstalling an app 2-3 times a day.
On the third hand (and here is where I want Linux to grow) there is the “working person” who only uses a computer at work: emailing, working with the appropriate app for his company, and maybe listening to a CD. He doesn’t like ’puters much, but he needs one to get his work done. Here is where the challenge of the word “user-friendly” lies.
I think every die hard *NIX hacker should be forced to make an app for this “working person” just to wipe the ego out of his soul.
Believe me, some people even find Windows and Mac OS way beyond comprehension.
Maybe Linux should be split into a server and a home/workstation version so as not to compromise security on the server end, I don’t know.
Why not create a “Linux expert group” that decides which libs should always be shared by default and what the directory tree should look like? I’m for version numbering in the binary, though; this freakin’ softlinking is nuts, and we need some sort of reasonable backwards compatibility.
Don’t bother listening to this article. I have extensive experience with many Unix systems, and NOT ONCE have I come across any problem that these measures could have had any effect on.
Here are the big problems in the Unix world:
Packages:
Different package managers.
Insufficient capabilities in the package managers.
Package makers’ varying ideas about where (in the filesystem) the files should go.
Source Compilation:
Dependence on libraries that change significantly.
Lack of documentation about which library version was used.
Non-portable operations/features (eg. OSS v. Alsa)
Nothing mentioned can solve any one of these problems.
In addition, this solution (to a problem that is easily worked around) would only work in the rare situation that the libs are in the same directory as the executable. That is not to mention that EVERY Unix program distributed with its own libs (that I have seen) DOES NOT store them in the current directory.
So, not only would you have to include “.”, but also “./lib”, “./libs”, and every other variation of “lib”, or any other directory someone might use. All these extra directories being included only make the lib system less dependable, and all JUST to try to make it so that application developers don’t need to launch the program with a shell script (e.g. Mozilla, OpenOffice)… of course, most programs that complex already need a shell script for other initial actions, so the benefit of including “.” is absolutely nothing in most cases, and very, very little in the rest.
I thought of this after reading this article and came up with some ideas.
Here’s a snippet of me talking to myself in #linux:
11:28 <Nos[afk]> Something interesting; the quandary that is installing software for the new user.
11:29 <Nos[afk]> Why is it that we don’t have a graphical installer that can compile source for you?
11:29 <Nos[afk]> It wouldn’t be terribly difficult.
11:29 <Nos[afk]> If it was standard, that is.
11:30 <Nos[afk]> Possible scenarios: the installer asks if you want a global or local install.
11:30 <Nos[afk]> If local, then install to ~/bin.
11:30 <Nos[afk]> Support for RPM’s and .Deb’s.
11:30 <Nos[afk]> Instead of bailing out on a stupid error, make the installer intelligent to download other packages for you
11:30 <Nos[afk]> So.
11:31 <Nos[afk]> If I’m not talking to myself — and I am — then the question becomes:
_Why not?_