LinuxWorld takes a closer look at the technological considerations behind one of the biggest issues in software installation: shared libraries. Another interesting article, a guide to how and why to make incompatible versions of libraries and development tools install in parallel, can be found here.
Incompatible versions of the *same* library should not be an issue in the first place. A library is a library and should be backwards compatible with older versions. Not only does it make life easier, it makes things simpler too.
I agree. If backwards compatibility is going to be lost the lib needs to be renamed so both can be on the same computer…
> Incompatible versions of the *same* library should not be an
> issue in the first place. A library is a library and should be
> backwards compatible with older versions. Not only does it
> make life easier, it makes things simpler too.
If only it were that simple. Linux distro vendors often put these libraries in different places (/usr/lib, /lib, /opt/lib, /opt/xxx/lib/, /usr/local/lib… etc.) and compile them with different optimisation settings, thus producing further incompatibilities. To make matters worse, legacy ones are often kept around for compatibility under obscure naming schemes. Thus, when a new one comes in (qt3, for example), libqt.so, which is actually qt2, is overwritten by qt3, making all old qt apps unusable unless libqt.so is renamed to libqt_old.so, libqt2.so or something of the like. This is where RPM fails.
Although some think otherwise, I am a firm believer that BSD ports/apt-get/portage is the answer. GUI frontends for some of these already exist, and they allow you to upgrade to new versions of your OS without purchasing more CDs and waiting for the next version.
The reason that Red Hat/SuSE/Mandrake, etc. don’t support this out of the box is that it removes any reason for people to buy new versions of their products.
After reading this article, I finally understand how the libs work in Linux. Glad I read it, but I must still agree with the above comments. To anyone who doesn’t have a clue how to solve the lib problems in Linux, I strongly recommend this article/tutorial; it will make life on Linux much easier. Honestly, I believe I would now know what to do when I run into a lib-dependency problem: when I find the lib.so.lib.2.1 file on the net, I need to put it in one of the paths specified in /etc/ld.so.conf, or create a symbolic link, and after doing that the program should run. I haven’t tried it yet, but I still support the above comments.
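For example, a rough sketch of what that looks like in practice (the library name is made up, and /usr/local/lib is assumed to be listed in /etc/ld.so.conf):

cp libfoo.so.2.1 /usr/local/lib/
ldconfig
# if an app still asks for an older soname you don't have:
ln -s /usr/local/lib/libfoo.so.2.1 /usr/local/lib/libfoo.so.2
ldconfig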
Very nice article indeed!
…urpmi in Mandrake. With that apt-like tool you can actually upgrade, install, remove anything… So you can upgrade to Cooker, or simply, when the next version comes out, install it with urpmi. Mandrake is not the *greedy* type of distro; it just tries to get the best features, often coming to a bloated result. Red Hat is a commercial company and tries to deliver a distribution for the enterprise; each release is a sort of *standard*. Debian is a cool distro and has good package management. Gentoo is also cool… while SuSE simply sux. And BTW, with MDK 9.1, Mandrake has shown that they can do an all-purpose distro; at least it works for me… and when I’m bored I’ll just urpmi cooker…
Which is why people need to come to agreement on some common standard. Standards are *good* things. Standards can only make Linux better.
…is the answer! Not EVERYTHING of course, but there really should be more of it these days. It’s not like there isn’t plenty of disk space, RAM, and bus speed around. If static linking were more common, we would certainly see fewer complaints about dependency hell.
No, you can’t use static linking, for the same reason you can’t have parallel installs of libraries on Linux: it wastes a few megs of hard drive space, when it’s more fun to waste a few hours of a user’s time.
Actually, I like this idea. It’s more efficient than OS X’s static linking, since you don’t have 90 copies of the same Foo4 library everywhere, while still letting programs use Foo 5, Foo 4 or even Foo 3 if they really wanted to. However, I guess you’d have to hope that no ISVs decided to name their libraries the same thing with the same version number. That might get a bit hairy. Otherwise, if App X installs its own custom Poo v.4 library for sorting, and later App Q installs its own custom Poo v.4 library for encoding MPEG video in the same place, I could end up having App X try to encode my data into MPEG by accident. For core frameworks, however, this is a good idea.
The common-sense approach of parallel library installations is too simple for the *NIX world to grasp; if you take away the security blanket of broken installation routines, many Linux “users” might wake up, start wondering why other things in the OS are so deficient by design, and desert the world’s largest free OS project. Questions concerning the device namespace, filesystem layout, kernel module design and especially Xfree86 itself will be looked at anew, and the resulting collective exclamation “What the heck have we been doing for the last twelve years??!” will crush many fragile developers.
Imagine a world without developers, and you will see my point: this proposed “solution” in the article must not stand.
“If only it were that simple. Linux distro vendors often put these libraries in different places (/usr/lib, /lib, /opt/lib, /opt/xxx/lib/, /usr/local/lib… etc.) and compile them with different optimisation settings, thus producing further incompatibilities. To make matters worse, legacy ones are often kept around for compatibility under obscure naming schemes. Thus, when a new one comes in (qt3, for example), libqt.so, which is actually qt2, is overwritten by qt3, making all old qt apps unusable unless libqt.so is renamed to libqt_old.so, libqt2.so or something of the like. This is where RPM fails.”
I don’t think this kind of mess is good enough if the Linux community expects to be taken seriously.
If Microsoft had not set such a low standard, a mess like this would never have been accepted for a moment.
I realize that there are several half-baked solutions to dependency hell under Linux (portage, anyone?), and that there is one decent solution (Debian package management) but the fact remains that Linux is a shifting mud puddle for users. Config files are seemingly placed at random throughout any given system, in directories (or even partitions, for the overzealous) that, again, vary from distro to distro. One must necessarily devote a goodly portion of one’s time to the study of these little traps, simply to avoid being tied to a single distro. Another nice little gotcha I’ve stumbled across is binary incompatibility depending entirely upon the version of GCC used; as stupid as it sounds, this is a routine hassle for many would-be converts who would like the ability to update their chosen distro. The LSB doesn’t properly address these concerns. The LSB is a half-assed measure, and GPL developers probably don’t care about it anyways.
Nuff said.
I realize that there are several half-baked solutions to dependency hell under Linux (portage, anyone?), and that there is one decent solution (Debian package management)
Honest question: what makes portage half-baked and Debian decent? (If it’s just the fact that Portage gives you half-baked packages for you to fully bake, those half-baked bread rolls you can buy and finish baking yourself are a lot nicer than pre-baked or self-made ones. IME, the same applies to portage.)
Thus, when a new one comes in (qt3, for example), libqt.so, which is actually qt2, is overwritten by qt3, making all old qt apps unusable unless libqt.so is renamed to libqt_old.so, libqt2.so or something of the like
Actually, the app shouldn’t try to link against libqt.so; it should link against libqt.so.3 or libqt.so.2. Some programmers don’t take into account the fact that the library might change.
As for parallel installs of libraries: not only is that possible in Linux, it has been done for years. For example:
libpng.a -> libpng12.a
libpng.so -> libpng.so.3*
libpng.so.2 -> libpng.so.2.1.0.12*
libpng.so.2.1.0.12*
libpng.so.3 -> libpng.so.3.1.2.5*
libpng.so.3.1.2.1*
libpng.so.3.1.2.5*
Two major versions (sonames 2 and 3) are installed side by side here. If the app links correctly, it can use the correct library.
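And to see which sonames a given binary actually asks for (a quick sketch; the app name and the output shown are just illustrative):

objdump -p ./someapp | grep NEEDED
  NEEDED      libqt.so.2
  NEEDED      libpng.so.3

Running ldd ./someapp then shows which file on disk each of those resolves to.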
1 word
Apt-Get
The above is a decidedly humorous comment. Can somebody mod it back up, please? If anything here is meaningful, APT is certainly germane to this discussion.
As an aside, the only real problems I have with APT are the generally lackadaisical attitude devels have toward maintaining current .deb packages, a problem that (I believe) stems from the pathetic state of Debian in general. Debian-stable is old hat, and -unstable is a distro for the 0.1 percenters. I’m no developer and I don’t want to fiddle to make new .debs that work. I also don’t want to endure the debian installer, and I can’t afford to waste money on something that may or may not work as I expect it to – thereby ruling out Xandros (nearly as expensive as Windows) and Libranet (another risky proposition). I’ve paid exorbitant sums in the past for distros, only to find out (repeatedly) that Linux is still just a server OS.
On another note, one would imagine that, given the demand, InstallShield or something similar could be ported to Linux. Why not? It seems that Red Hat has been working on the foul RPM for years with minimal success; why doesn’t a commercial enterprise step up to the plate, if Linux really is the Next Big Thing?
Interesting case of Pointing Out The Blatantly Obvious…
However, installing two ABI-incompatible versions of the same library isn’t enough IMO. It should be possible to install two different ABI-compatible versions as well. Here’s why…
Something that often annoys me with binary distributions is that applications almost always depend on the very latest versions of all the libraries they need. So when I want to install a new application, I’m sometimes forced to install a new version of a library as well, even though the change is only from version 2.5.1-2 to 2.5.1-3. That’s just pathetic.
If there were a simple way to link and run an application against an arbitrary version of a given library, application developers and packagers could easily figure out the lowest version requirements that the application can live with while retaining full functionality.
Now I’m aware that new versions are usually not just released for fun, but because there are bugs to be fixed. That’s fine with me, but sometimes keeping a bug that doesn’t bite you is easier/better than upgrading everything (e.g. when you’re on a slow internet connection). It would be sensible to release binary packages that DEPEND on version X but RECOMMEND version X+n.
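In Debian control-file terms the idea would look roughly like this (package names and version numbers are hypothetical):

Package: someapp
Depends: libfoo1 (>= 2.5.1-2)
Recommends: libfoo1 (>= 2.5.1-3)

That way the older library still satisfies the hard dependency, and the bug-fix release is only suggested.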
Because InstallShield is Not The Answer(tm).
The reason you think InstallShield is the solution is because you only look at the user-visible stuff (doubleclick, Next, Next, works). RPM/DEB are not InstallShield, will never be InstallShield, and were not meant to be like InstallShield.
InstallShield is an installer. That’s it. RPM/DEB are package managers. Completely different things.
InstallShield does not scale. If you install lots and lots of apps, InstallShield is inefficient:
– Although not directly an InstallShield problem, lots of Windows apps ship their dependencies with their install app. While this almost solves the dependency problem (they do not always include *all* dependencies!), it also wastes disk space like mad. And if done incorrectly, it will cause DLL Hell.
– InstallShield is an installer. It’s graphical. All you can do is click buttons and nothing more. You cannot automate installations.
“Foul RPM with minimal success”? RPM does exactly what it is supposed to do and does it well: manage packages. It checks for dependencies and conflicts and makes sure you won’t hose your system (unless you use --force --nodeps), something InstallShield does not do.
The only reason why you think RPM is “unsuccessful” is that it doesn’t have automatic dependency handling by default. If it has good automatic dependency handling, then BAM, all your complaints are gone. Try apt4rpm.
Most people blame the package format when they really just don’t like the dependency resolution. Know what you’re talking about; don’t go randomly blaming the wrong things just because you don’t know better!
“why doesn’t a commercial enterprise step up to the plate, if Linux really is the Next Big Thing?”
Most likely because 99% of the Linux market is on the server, and people looking after servers are (most of the time) educated enough to deal with the tools at hand.
I use Linux on the desktop at home and at work, but sometimes I just wonder if I’m mad or something. I guess I deal with it mostly because of principles (anti-Microsoft?) and because I like to tinker with my system.
So far, I have been burnt once: trying to install a Linux distro on a laptop for a colleague, I realised along the way I was doing so much tweaking and updating to make it behave reasonably that I just let the guy go back to W2K… Although the tweaking/updates I was making seemed natural to me (after a few years of tinkering again and again…), it was frightening to him!
I think the equation is very simple on the desktop side of things:
– if you are ready to tinker a lot, give Linux a try (or FreeBSD)
– if you’re not, get Windows 2000/XP or Mac OS X
Note: when I say tinker a lot, I mean hours of research on the web (forums, mailing lists, IRC, blogs, …), hours of trying and very often failing (although when success comes you feel so good, except that you realize it would have taken 10 minutes under Windows to get this new Bluetooth USB dongle to work instead of 3 freaking days…), and getting annoyed because it’s still not working after some freaking hours spent on your computer and your partner gives you shit because you spent more time staring at your screen than looking at her/him.
I do not mean to troll. As I said, I am running LM9.1 now and it works pretty well, but I’m keeping a real close eye on the market this year, and provided Apple sorts out the (lack of?) CPU power with the upcoming PPC970(?), I may well be a Mac OS X convert next year, for the simple reason that I would get a nice consistent package and keep the ability to tinker on a true Unix system.
If it has good automatic dependency handling, then BAM, all your complaints are gone.
But it doesn’t
apt4rpm
Is that the solution? Then what’s the problem?
I haven’t actually read the article yet, but I’d like to pose a question that’s been bugging me before I do.
Why do we need shared libraries? I realise that ‘back in the day’ the efficiency and space that was saved by sharing libraries was a real consideration, but surely now with storage being so much larger/cheaper it no longer matters…? I’m just thinking here of all the .dll problems I had using Windows throughout the 90s, and how much simpler things are since moving to OS X 1.5 years ago. Can’t something OS X-like be done in the Linux world?
Having played with Linux on and off over the last 6 months, it seemed most of the apps we downloaded came in the form of packages anyway. Just change it so that the package itself is executable, but for those that want to play with the innards they can always right click and show package contents.
It seems to me that this is a pretty crucial issue for Linux, at a pretty crucial time too. Word is that M$ will be doing away with .dll’s with Longhorn, expected around 2005. Surely if the Linux world can sort out these problems (installation ease and library conflicts) within the next few months, that will give them 1-2 years to really push Linux and show people that it’s even easier than Windows (which is the main reason people I know don’t even consider Linux – far too complicated).
Maybe reading the article now will shed some light on this..;-)
L.
We might not like to admit it, but .NET has improved in this area, so perhaps Linux could use a similar approach.
Essentially, .NET guarantees that a program will get the exact version of libraries that it was compiled against. For each dependency in an application, it records a unique key for the version of the library. Only that specific library can satisfy the dependency.
Applications can be completely self-contained with their libraries, but not statically linked. The benefit of not statically linking is that the application libraries can still be upgraded individually in the application directory and redirected to the updated libraries via policy files in the application directory.
Further, to allow sharing of libraries, .NET has a global cache where libraries can be stored using their unique key (and, I imagine, reference counted). Any application can look into the global cache to get its libraries, and it will only get a library if it matches the unique key of the library it was compiled against. And like above, policy files can be used to redirect linking for bug fixes, upgrades, etc.
This solution isn’t perfect, but it is a lot better than what Linux has.
Let’s suppose that there is a buffer overflow bug in, for instance, libpng. If you linked it statically into all the programs that use it, you have to update all those programs. If you linked it dynamically, you just update libpng once. If I understood Richard correctly, the .NET approach would require updates to all the software too (since the fixed library would have a different key).
The problem is human, not computer. By matching the library against the app in the .NET way, you solve the problem with a computer solution. However, the correct answer is that minor changes to a library should NEVER break an application. As soon as you start breaking applications, you need to increase the major revision number. There are problems with this though: how does a library developer know that the changes break an app?
As to automating the linking process so that the application links against version X-n or greater: this can be done in the configure scripts, such as an autoconf script. The problem here is that even though your app may currently use version 3.2.4, you have no idea if it will work with 3.2.5. What if the lib developer breaks something? You cannot predict the future. So the solution here is to make sure that minor revision changes do not break applications.
We might not like to admit it, but .NET has improved in this area, so perhaps Linux could use a similar approach.
Instead of using bloated Microsoftisms, look at how it is done in AmigaOS: every library must be backwards-compatible, and an application can only request a minimum version (so a newer library will always do). This means no conflicts, no disk/RAM waste and only one version of each library.
About Debian and unstable… unstable is really usable. I’ve been using it on my desktop for two months and I’ve never had a problem.
But the difficult point is the install, I agree. I really recommend using Morphix (as I did) to install a clean Debian unstable, but with a preconfigured XFree86 and KDE or GNOME.
Morphix is a Knoppix-based LiveCD and it’s really great in my opinion. Check http://morphix.org . It comes in several flavours: HeavyGUI (GNOME), KDE, Light (IceWM), Games, etc.
After the install you just have to put the correct sources in /etc/apt/sources.list; here are mine:
deb http://ftp.fr.debian.org/debian/ unstable main non-free contrib
deb-src http://ftp.fr.debian.org/debian/ unstable main non-free contrib
deb http://non-us.debian.org/debian-non-US unstable/non-US main contrib non-free
deb-src http://non-us.debian.org/debian-non-US unstable/non-US main contrib non-free
deb http://security.debian.org/ stable/updates main contrib non-free
# mplayer & co
deb http://marillat.free.fr/ unstable main
# xfree 4.3
deb http://penguinppc.org/~daniels/sid/i386 ./
XFree86 4.3 will soon be in Debian, so you won’t need the last line.
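Once the sources are in place, the usual sequence is:

apt-get update
apt-get dist-upgrade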
Bye
“Why do we need shared libraries? I realise that ‘back in the day’ the efficiency and space that was saved by sharing libraries was a real consideration, but surely now with storage being so much larger/cheaper it no longer matters…?”
The point is not to save disk space so much as to save debugging time. A library is a collection of ready-made, fully debugged (we hope) routines, which saves the author of a program from having to code them himself. It avoids re-inventing the wheel with a new set of bugs every time.
Shared libraries are an essential part of an OS and it is important that they are done properly.
“Why do we need shared libraries? I realise that ‘back in the day’ the efficiency and space that was saved by sharing libraries was a real consideration, but surely now with storage being so much larger/cheaper it no longer matters…?”
Yeah we may have a lot of disk space, but what about bandwidth and memory usage?
Every software package would be at least 10 MB. Ouch. Think about the 56k users! Think about the huge bill my ISP will give me!
If each and every app on my computer is statically linked, it will fill up my 384 MB RAM in no time.
“and how much simpler things are since moving to OS X 1.5 years ago. Can’t something OS X-like be done in the Linux world?”
OS X doesn’t “solve” the problem. AppFolders just bundle all their dependencies and that’s it. No advanced dependency management. It still wastes space (and bandwidth!)
> If it has good automatic dependency handling, then BAM, all your complaints are gone.
> But it doesn’t
>> apt4rpm
> Is that the solution? Then what’s the problem?
The article was talking about parallel installation of libraries, but the people here somehow turned the discussion into dependency handling (confusion? ignorance?).
I was talking about dependency handling, not parallel installation.
Why do we need shared libraries? I realise that ‘back in the day’ the efficiency and space that was saved by sharing libraries was a real consideration, but surely now with storage being so much larger/cheaper it no longer matters…?
I’m tired of reading things like this. Run the numbers. You’ll find that if software were at the same level of sophistication as it was ten years ago, maybe, just maybe, this argument would have some merit. Thankfully, it is not: we have things like advanced GUIs these days, and don’t blink an eye when our media players are capable of playing practically every format and codec ever invented. If everything was statically linked, we’d need over a gig of RAM to even get a basic desktop running, and it’d still hit the swap.
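If you want a rough feel for the numbers yourself (illustrative only; substitute any desktop app you happen to have installed):

ldd $(which konqueror) | wc -l

The count is typically in the dozens for a single application, and with dynamic linking those pages are shared between every process that uses them; statically linked, each app carries and loads its own private copy.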
Can’t something OS X-like be done in the Linux world?
No. MacOS apps are typified by lack of code sharing, whereas Linux apps are not. Linux is fundamentally different to MacOS, it’s developed in a fundamentally different way, and attempting to impose the broken NeXT appfolders implementation upon Linux would lead to disaster.
Word is that M$ will be doing away with .dll’s with Longhorn, expected around 2005.
I think you are confused – Microsoft will be introducing Linux style versioning systems to try and eliminate DLL hell, but they still have no way to automatically download OS updates and components on demand like Linux does.
On .NET:
“This solution isn’t perfect, but it is a lot better than what Linux has.”
Binding to an exact version is less than ideal, if anything it seems an overreaction to the problems of DLL hell. Libtool-style versioning or symbol versioning are far more advanced ways to do the same thing. I’d suggest you investigate the techniques Linux has to ensure proper management of shared libraries before you claim .NET is more advanced. Newer != better.
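For the curious, here is a minimal sketch of what symbol versioning looks like on the Linux side (the library and symbol names are made up). The version script, say foo.map, contains:

LIBFOO_1.0 {
    global: foo_init; foo_read;
    local: *;
};

and the library is linked with:

gcc -shared -Wl,--version-script=foo.map -o libfoo.so.1 foo.o

Libtool’s -version-info current:revision:age flag handles the coarser, per-library case.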
What I would really like to see is a decent packaging system for Windows. Application writers are forced to use the ancient “lug everything you need with the app” approach. The system is horribly broken, with different apps using different installers (as opposed to a single one on my Gentoo system) and installer wizards showing several useless screens when all that is needed from them is dumping the program into a directory in Program Files. This inferior application management system has no central database for all the applications available, so if I need to install an app, I have to surf the damn Intraweb and manually download the program. It can’t even check if new versions of my applications are available and update them – I have to deal with different incompatible new version notifiers in various apps (and most don’t have them at all), and for the upgrade itself I need to do a manual download for each app!
Compare this to Gentoo, which has a unified app database that can be searched using regexps and a single simple and easy way to install, upgrade or remove any applications. It can check which applications in my system have a newer version, and upgrading requires typing in a single command.
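For instance, the everyday Portage commands (the package name is just an example):

emerge --search mozilla     # search the package database
emerge mozilla              # install
emerge --update world       # upgrade whatever has a newer version
emerge --unmerge mozilla    # remove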
Application writers are forced to use the ancient “lug everything you need with the app” approach.
What do you mean? Are you against complete packages? Do you want the programmers to omit some things you need?
The system is horribly broken, with different apps using different installers (as opposed to a single one on my Gentoo system) and installer wizards showing several useless screens when all that is needed from them is dumping the program into a directory in Program Files.
Those two things do not make a system “horribly broken”. I think you realise that too. Further, if MS forced every developer to use Windows Installer, then Slashdot would run a nice little article about it and the crowd would cry monopoly or something equally stupid.
About those “useless screens”: those are not useless. Most users need to be told that the program installed successfully. I know UNIX doesn’t like feedback when something succeeds, but that may be important too (I am no usability expert). Those “useless screens” are hardly a problem.
This inferior application management system has no central database for all the applications available, so if I need to install an app, I have to surf the damn Intraweb and manually download the program.
So you want a centralised database of ALL AVAILABLE APPLICATIONS? That’s a very nice idea…at least before you think it through. In short: it’s not possible.
It can’t even check if new versions of my applications are available and update them – I have to deal with different incompatible new version notifiers in various apps (and most don’t have them at all), and for the upgrade itself I need to do a manual download for each app!
I actually agree with that. It would be very nice to have an update system for all software, not only for MS applications (Windows Update). Of course, only MS could design such a system for Windows and make an API available to programmers. But they can’t force all developers to actually use it.
Compare this to Gentoo, which has a unified app database that can be searched using regexps and a single simple and easy way to install, upgrade or remove any applications.
That is possible because Gentoo’s world is microscopic compared to the Windows world.
Regexps, huh? If searching is done using regular expressions, then no non-UNIX-nerd would use it. I suspect most people don’t even use the Windows search function, and even fewer use it with simple matching like *.
(Please excuse my reply if your post was meant as a troll.)
What do you mean? Are you against complete packages? Do you want the programmers to omit some things you need?
I am against having a zillion copies of one library on my system.
About those “useless screens”: those are not useless. Most users need to be told that the program installed successfully. I know UNIX doesn’t like feedback when something succeeds, but that may be important too (I am no usability expert). Those “useless screens” are hardly a problem.
First, I get the screen that says that program Blah would be installed, then there is a screen that lets me pick the directory (which should be my Program Files dir unless I override it), then there is a choice of group in the Start menu, and then the “final” screen. If I am forced to do all those clicks through a dumbed-down wizard that features no more than two controls per step, and I am not able to just install it with the default settings, it is probably broken, don’t you think?
So you want a centralised database of ALL AVAILABLE APPLICATIONS? That’s a very nice idea…at least before you think it through. In short: it’s not possible.
Could you please explain why?
That is possible because Gentoo’s world is microscopic compared to the Windows world.
Gentoo would have to face some redesign of their package management system if the number of packages increased a thousandfold. Maybe they would go from having a full package tree on each machine to creating package-list servers. However, it is possible and would not require a major redesign.
Kobold: First, I get the screen that says that program Blah would be installed
90% of all installation programs are called setup.exe. I don’t know about you, but I’d like to know that I’m installing the right program.
Kobold: then there is a screen that lets me pick the directory (which should be my Program Files dir unless I override it)
I like to keep things organized, and that means not having 500 programs installed in the same directory (even if they have their own directories).
Kobold: then there is a choice of group in the Start menu
I like to organize the menu into graphics programs, games, programming, etc. If a program doesn’t give me this choice, I actually get a bit annoyed. I could always move it to another sub-menu later, but then it won’t be removed when I uninstall it (and most programs that do this are generally not worth having anyway).
There are enough comments about doing things “The OSX Way” in this article that I am not even going to pick a specific one to respond to. If you think the OSX way is better, this response applies to you.
Now, at one point, I thought the same way. Why not static link everything? On the surface it sounds like a good idea, but after giving it some thought (and seeing the results of using static-linked programs) I have changed my mind… here’s why.
First of all, static linking certain toolkits causes ridiculous inconsistencies on the desktop. Now, I know some of you already think the desktop is inconsistent in Linux, but mine is absolutely not. I use qt apps when under KDE, and GTK+ apps under GNOME. I pretty much think of KDE and GNOME as two separate OS’s. This means I have great desktop consistency; however, when using a statically linked qt app, my desktop settings for KDE are ignored. Thus, text highlights in yellow instead of cyan, as I have selected, etc. This clashes horribly with the rest of my desktop. I don’t want that for one app, certainly not all of them.
Then there is the problem of Linux being pushed as something that can work on slower machines. That is an advantage that is lost by statically linking everything. People always say things like “RAM is cheap, hard drives are cheap, why not statically link?”. Well, that isn’t the point. Not everybody has plenty of RAM or hard drive space. Why should we unnecessarily make the OS less efficient than it has to be?
What we really need are smarter installers: something that checks for dependencies and installs necessary ones. That is what would be most helpful. You hear far fewer complaints about these issues from people with apt-get/BSD ports/portage.
The common-sense approach of parallel library installations is too simple for the *NIX world to grasp; if you take away the security blanket of broken installation routines, many Linux “users” might wake up, start wondering why other things in the OS are so deficient by design, and desert the world’s largest free OS project.
This comment is ignorant at best, a bald-faced lie at worst. You make it sound like parallel library installation is common in the OS world. Windows XP doesn’t have it. Microsoft is just now getting around to addressing the fact that installation programs install DLLs over each other, and it is a disaster when too many programs are installed on Windows. Is it time we address this in Linux? Yeah, it is. Parallel library installation needs to work, and work well. That doesn’t mean Linux is behind. Nobody else (except the hackish OS X) has things working right yet.
What if shared libraries were like objects in software development? The developer would create an interface to the library that would not change, but the implementation could be changed to fix bugs or provide quicker routines. If libraries were treated as objects, you could have a base library that could be extended through inheritance. A program needing a particular extension could first access the base library, then work its way down the chain of extensions. Of course, you would have a mess with all the extensions of the base library.
Libraries are nothing but packages of data structures and functions. Objects are packages of data structures and functions. I think it would be hard to place such an organization onto libraries, but objects do have some great qualities, like built-in self-inspection. Self-inspection in a library would give the installer program more information about the data structures and functions the library provides. The application program could then decide if it can work with the already installed library and extensions, or if it needs new extensions.
The application could provide its own extensions to the base library that is installed on the system. The extensions could be installed in the same directory as the program, so there would be no overwriting of other extensions with the same name.
Namespaces could also be used to differentiate a function in extension A from a function in extension B.
This is just a thought experiment. I am not a professional programmer, so I do not have all the right information. Hopefully, some professional programmers could comment on the good or bad points of this idea.
It wouldn’t eliminate all the problems with Linux package management. It would go a long way towards it, though. Plus, the shortcomings, in addition to being much smaller, would diminish with greater resources than Debian alone can muster.
The LSB should just adopt debs and apt-get and call it lpm, for “linux package management”. Red Hat, through pride and NIH, has done Linux a great disservice by saddling users with the hair-pulling rpm system. Hair replacement clinics are happy, though.
So you want a centralised database of ALL AVAILABLE APPLICATIONS? That’s a very nice idea…at least before you think it through. In short: it’s not possible.
Could you please explain why?
Sure. The first thing is: who would own that database? The UN? No, it would be Microsoft because it’s their OS. That would be a problem (and a story) for the MS bashers. Then, how is it possible to include all existing Windows applications in that database? It’s impossible because the number is way too huge. MS wouldn’t assign a lot of people to track down every Windows application because it’s not feasible or profitable. It would be the developers’ responsibility to submit their information to MS, and only some of them would do that. Again, MS can’t force anyone to do that.
Gentoo would have to face some redesign of their package management system if the number of packages increased a thousandfold. Maybe they would go from having a full package tree on each machine to creating package-list servers. However, it is possible and would not require a major redesign.
Of course it’s technically possible, but who is going to pay for all those resources? Do you really think Gentoo would gladly do that with nothing in return? Even if the Gentoo/Linux community could work something out, it’s different for a company. It has to be profitable.
Who would own that database? The UN? No, it would be Microsoft because it’s their OS. That would be a problem (and a story) for the MS bashers.
If MS behaved well, there would be no problem. If it pulled its old tricks… well, that’s a problem with MS, not with the database.
Then, how is it possible to include all existing Windows applications in that database? It’s impossible because the number is way too huge.
Authors can just submit their software if they want to see it in the database. And there isn’t that much software. Even if there are 1,000,000 different programs, the master server would only have to store information about them and the actual download URLs. 10KB per app seems realistic.
Of course it’s technically possible, but who is going to pay for all those resources? Do you really think Gentoo would gladly do that with nothing in return? Even if the Gentoo/Linux community could work something out, it’s different for a company. It has to be profitable.
Volunteers are already providing the rsync servers with nothing in return. Why do you think it should be different in future?
Here’s an implementation suggestion for fixing library hell:
Fix ld
When I link my executable on my build machine, library references are resolved (well, excepting dlopen()ed .so’s).
Record some info, right in the executable, about those libraries: minimally the full .so name, or even better some “strings-like” description of the library (“libqt compiled with super-plex debugging symbols and kudzu options”). Maybe I could even store in the exe a list of known validated libraries.
Because there are a reasonably small number of permutations of each library, upon install on your system the installer could examine the executable and search your system (a la ldd) for libraries that match exactly.
If an exactly matching library is on your system, great, patch the exe to use it. If not, try to find the next best match. The installer could also give the advanced user a shot at patching up which library to use.
As a user, in most cases I just want the application to work. I will happily provide the exact libraries that the application wants, if I know what they are and where I can find them.
The hard part is just that: figuring out which permutation of libraries (and the libs they depend on…) the application wants, and then finding those libraries.
In a few cases I will still want to drop in newer/better versions of libraries that have capabilities beyond the originals that I need. I would suggest that in general this is better handled via “plug-ins”, but when this drop-in newer library stuff works, it’s great.
My solution requires these changes:
– a new ld on user systems
– new development tools (cc/CC/g++, which launch ld…) to store this extra info in the executables and libraries
– a new installer & patch tools to help with finding and patching of applications
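Some of this can already be approximated with rpath and the existing ELF tools (a rough sketch; the paths and names are made up): the vendor links the app so it prefers its own blessed copies of the libraries, and anyone can later inspect exactly what was recorded.

gcc -o myapp myapp.o -L/opt/myapp/lib -lqt -Wl,-rpath,/opt/myapp/lib
readelf -d myapp | grep -E 'NEEDED|RPATH'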
When was the ELF object file format born? 1990? Perhaps in another 15 years we will get something better…
Compare this to Gentoo, which has a unified app database that can be searched using regexps and a single simple and easy way to install, upgrade or remove any applications.
don’t whine then, operate! Gentoo didn’t just happen – someone cared enough to start and maintain it
>I also don’t want to endure the debian installer, and I can’t afford to waste money on something that may or may not work as I expect it to – thereby ruling out Xandros (nearly as expensive as Windows)
My copy of Xandros (sans Crossover) was $40. And I’m not sure what you mean by “may or may not work as I expect it to”; what exactly are you expecting?
Xandros is a rock-solid distro that in most cases “just works”. After trying Red Hat (5.2 through 8.1 beta), Mandrake (6.x – 8.0), SuSE (several versions), Knoppix, Libranet, Lycoris, Yoper, Gentoo, Debian (plain), Slackware (old versions), Peanut, Stormix, Turbolinux, Caldera, and Corel (probably forgot a couple), I find Xandros to be the best by far, even if it doesn’t pack the latest versions.
To be clear, I am no lover of M$ or .NET, but they have at least made a decent attempt at solving this problem. It is definitely not perfect.
In its simplest form, it guarantees that you get the exact version that the application was compiled against (whether the application is self-contained or getting its libs from the shared global cache)… and this is a good thing.
But you are not stuck with this simplest policy. .NET can use three different policy files to re-direct library version binding. You can have an application redirection policy file, a library provider redirection policy file, and a system-wide redirection policy file.
So, for example, if an application updates itself, it doesn’t need to include new versions of everything, it only needs to update its policy file to redirect the version bindings.
Likewise, a library provider can offer a policy to redirect binding in case of a compatible bug fix, for example.
Lastly, the system-wide policy can redirect binding for all instances of a library, which might be done by an administrator.
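For reference, the application-level policy is just a bit of XML in the app’s .config file, roughly like this (the assembly name, public key token and version numbers are invented):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="FooLib" publicKeyToken="32ab4ba45e0a69a1" />
        <bindingRedirect oldVersion="1.0.0.0-1.0.5.0" newVersion="1.1.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>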
I admit to being neither a Linux nor a .NET guru (I have been running Linux as my main desktop for over two years and have never used .NET). But in truth, I shouldn’t have to be a super-guru to understand this stuff. At first blush, yes, .NET does seem to make it a little simpler than Linux… and I have looked into both.
Does this mean that I will be switching to Windows? No. But this doesn’t mean that I should be blind to it.