Linked by Thom Holwerda on Wed 9th Nov 2011 21:26 UTC, submitted by edwin
General Unix: Way back in 2002, MIT decided it needed to start teaching a course in operating system engineering. As part of this course, students would write an exokernel on x86, using Sixth Edition Unix (V6) and John Lions' commentary as course material. This, however, posed problems.
RE[2]: binary for windows....
by jabjoe on Fri 11th Nov 2011 11:32 UTC in reply to "RE: binary for windows.... "
jabjoe Member since:
2009-05-06

Shared objects aren't really about saving space any more (much of Windows' bloat is a massive sea of common DLLs that might be needed, in multiple versions, for both x86 and AMD64). They're about abstraction and updates. You get the benefits of shared code from static libs, but taking advantage of new abstractions or updates with static libs requires rebuilding. That's a lot of rebuilding. Check out the dependency graph of some apps you use some time; they are often massive. Keeping those apps up to date would require constant rebuilding, and the update system would then have to work in deltas on binaries, or you would be pulling down much, much more with each update. With shared objects, updated shared code costs nothing but the shared object being rebuilt. Easy deltas for free. Having to rebuild everything would have a massive impact on security.

On a closed platform this is even worse, because the vendor of each package has to decide it's worth their while updating. Often it's worse still because each vendor has their own update system, which may or may not be working. Worse again, on closed platforms you already end up with things built against many versions of a lib, often needing separate shared object files, which defeats part of the purpose of shared objects (the Windows manifest scheme is crazy with its "exact" version matching). Static libs would make all of this worse.

With shared objects you get not only simple updates but abstraction: completely different implementations can be swapped in. Plugins are often exactly that, one interface to all the plugin shared objects, with each one adding new behaviour. Also, put something in a shared object with a standard C interface and many languages can use it.
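That last point is easy to demonstrate. A minimal sketch in Python, using ctypes to call into the system math library's C interface (assuming a Unix-like system where libm can be found):

    # Minimal sketch: calling a C-ABI shared object from another language.
    # Assumes a Unix-like system where the standard math library is present.
    import ctypes
    import ctypes.util

    # Look the shared object up through the platform's normal rules.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C signature of cos(double) so arguments marshal correctly.
    libm.cos.restype = ctypes.c_double
    libm.cos.argtypes = [ctypes.c_double]

    print(libm.cos(0.0))  # prints 1.0

No recompiling anything involved; the same .so serves C, Python, and whatever else can speak the C ABI.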

With an open platform and a single update system, shared objects rock. You can build everything against a single version of each shared object; you update that single version and everything is updated (fixed/secured). You can sensibly manage the dependencies: shared objects are removed when nothing uses them, and only added when something requires them. This can be, and is, all automated. It does save space too; I would be surprised if building everything statically didn't make the install quite a lot bigger, unless you have some magic compressing filesystem which sees the duplicate code/data and stores only one copy anyway. But space saving isn't the main reason to do it.

Any platform that moves more towards static libs is going in the wrong direction. For Windows, it may well save space to build everything statically, because of the mess of having so many DLLs around that aren't actually required. But it would make the reliability and security of the platform worse (unless the platform already uses an exact-version scheme, in which case it's already as bad as it can be).

In short, you can take shared objects only from my cold dead hands!

Reply Parent Score: 3

bogomipz Member since:
2005-07-11

"In short, you can take shared objects only from my cold dead hands!"

Haha, nice ;)

I agree with those who say the technical problems introduced by shared libraries and their versioning have been solved by now. And I agree that the modularity is nice. Still, the complexity this introduces is far from trivial.

What if the same benefits could have been achieved without adding dynamic linking? Imagine a package manager that downloads a program along with any libraries it requires, in static form, then runs the linker to produce a runnable binary. When an update to a static library is installed, it runs the linker again for every program depending on that library. This is similar to what dynamic linking does every time you run the program. Wouldn't this have worked too, and isn't it the natural solution if the challenge was defined as "how to avoid manually rebuilding every program when updating a library"?
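As a rough sketch of the idea (the paths, the reverse-dependency table, and the link command here are all invented for illustration):

    # Hypothetical package manager step: re-link dependents after a
    # static library update. All names and paths are illustrative only.
    import subprocess

    # A real package manager would keep this in its database.
    REVERSE_DEPS = {
        "libz.a": ["gzip", "rsync"],
    }

    def on_library_updated(lib):
        """Re-run the final link step for every program using `lib`."""
        for prog in REVERSE_DEPS.get(lib, []):
            subprocess.run(
                ["cc", "-o", f"/usr/bin/{prog}",
                 f"/var/lib/pkg/objects/{prog}.o", f"/usr/lib/{lib}"],
                check=True,
            )

    on_library_updated("libz.a")

The linker runs once per update instead of once per program launch, which is the whole trade.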

Reply Parent Score: 2

Vanders Member since:
2005-07-06

What you're describing is basically prelinking (or prebinding). It's worth mentioning that Apple dropped prebinding and replaced it with a simple shared library cache, because the cache offered better performance.

Reply Parent Score: 2

jabjoe Member since:
2009-05-06

I don't see how your system of doing the linking at update time is really any different from doing it at run time.

Dynamic linking is plenty fast, so you don't gain speed. (Actually, dynamic linking could be faster on Windows; the loader has this painful habit of checking the local working directory before scanning through each folder in the PATH environment variable. On Linux, the dynamic linker just checks /etc/ld.so.cache for what to use. But anyway, dynamic linking isn't really slow, even on Windows.)
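You can see the Linux side for yourself; ldconfig -p prints what's in that cache. A quick sketch (ldconfig may need its full /sbin path on some distros):

    # Sketch: list entries from the dynamic linker cache that the Linux
    # loader consults. ldconfig -p prints the cached library list.
    import subprocess

    out = subprocess.run(["ldconfig", "-p"], capture_output=True, text=True)
    for line in out.stdout.splitlines():
        if "libm.so" in line:  # filter to one library for brevity
            print(line.strip())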

You have to compile things differently from normal static linking, to keep the libs separate so they can be updated. In effect, the file is just a tar of the executable and the DLLs it needs, a bit like the way resources are tagged on the end now. Plus you then need some record of which lib versions it was last tar'ed up against, so you know whether it needs updating or not.
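To make that bookkeeping concrete, here's a sketch of the version record such a bundle would need (all names and versions invented for illustration):

    # Hypothetical metadata a static bundle would carry so the updater
    # knows when a re-link is due. Names and versions are made up.
    manifest = {
        "executable": "myapp",
        "linked_libs": {"libssl": "1.0.0e", "libz": "1.2.5"},
    }

    def needs_relink(manifest, installed):
        """True if any bundled lib differs from what's now installed."""
        return any(
            installed.get(name) != version
            for name, version in manifest["linked_libs"].items()
        )

    print(needs_relink(manifest, {"libssl": "1.0.0g", "libz": "1.2.5"}))  # True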

What you're really searching for is application folders: http://en.wikipedia.org/wiki/Application_Directory
That saves joining files up into blobs; there is already a file grouping system, folders.

The system you might want to look at is 0install: http://0install.net/
There was even an article about it on OSNews:
http://www.osnews.com/story/16956/Decentralised-Installation-System...

Nothing really new under the sun.

I grew up on RiscOS with its application folders, and I won't go back to them.
Accept dependencies, but manage them to keep them simple: one copy of each file, and fewer files, with clear dependencies searchable in both directions (forwards and backwards).
Oh, and build dependencies (apt-get build-dep <package>); I >love< build dependencies.

Debian has a new multi-arch scheme, so you can install packages for different architectures alongside each other. The same filesystem will be usable on multiple architectures, and cross-compiling becomes a breeze.

Reply Parent Score: 2