Linked by Thom Holwerda on Wed 9th Nov 2011 21:26 UTC, submitted by edwin
Way back in 2002, MIT decided it needed to start teaching a course in operating system engineering. As part of this course, students would write an exokernel on x86, using Sixth Edition Unix (V6) and John Lions' commentary as course material. This, however, posed problems.
RE[4]: binary for windows....
by jabjoe on Fri 11th Nov 2011 17:39 UTC in reply to "RE[3]: binary for windows.... "
jabjoe
Member since:
2009-05-06

I don't see how your system of doing the linking at update time is really any different than doing it at run time.

Dynamic linking is plenty fast enough, so you don't gain speed. (Actually, dynamic linking could be faster on Windows; it has this painful habit of checking the local working directory before scanning through each folder in the PATH environment variable, whereas on Linux the loader just checks /etc/ld.so.cache for what to use. But anyway, dynamic linking isn't really slow even on Windows.)
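Quick illustration of the Linux side (a rough sketch; assumes a typical glibc system, and libc and /bin/ls are just convenient examples):

    ldconfig -p | grep 'libc.so'   # where the linker cache says libc lives
    ldd /bin/ls                    # which shared objects /bin/ls resolves to, via that cache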

You have to compile things differently from normal static linking to keep the libs separate so they can be updated. In effect, the file is just a tar of executable and the DLLs it needs. Bit like the way resources are tagged on the end now. Plus then you will need some kind of information about which libs it was last tar'ed up against, so you know whether to update it or not.

What you are really searching for is application folders: http://en.wikipedia.org/wiki/Application_Directory
That saves joining files up into blobs; there is already a file-grouping system: folders.

The system you might want to look at is: http://0install.net/
There was even an article about it on OSNews:
http://www.osnews.com/story/16956/Decentralised-Installation-System...

Nothing really new under the sun.

I grew up on RiscOS with application folders and I won't go back to them.
Accept dependencies, but manage them to keep them simple. One copy of each file. Fewer files, with clear, searchable (forwards and backwards) dependencies.
Oh and build dependencies (apt-get build-dep <package>), I >love< build dependencies.

Debian has a new multi-arch scheme so you can install packages for different architectures alongside each other. The same filesystem can then be used on multiple architectures, and cross compiling becomes a breeze.
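For example, something along these lines (a sketch; armhf is just an example foreign architecture and libfoo-dev is a made-up package name):

    sudo dpkg --add-architecture armhf       # enable a second architecture
    sudo apt-get update
    sudo apt-get install libfoo-dev:armhf    # hypothetical library package, armhf build, side by side with the native one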


bogomipz Member since:
2005-07-11

I don't see how your system of doing the linking at update time is really any different than doing it at run time.

The difference is that the kernel is kept simple. The complexity is handled by a package manager or similar instead. No dynamic linker to exploit or carefully harden.

If you don't see any difference, it means both models should work equally well, so there's no reason for all the complexity.

You have to compile things differently from normal static linking to keep the libs separate so they can be updated.

What do you mean by this? I'm talking about using normal static libraries, as they existed before dynamic linking, and still exist to this day. Some distros even include static libs together with shared objects in the same package (or together with headers in a -dev package).

In effect, the file is just a tar of executable and the DLLs it needs. Bit like the way resources are tagged on the end now.

I may have done a poor job of explaining. What I meant was that the program is delivered in a package with an object file that is not yet ready to run. This package depends on library packages, just like today, but those packages contain static rather than shared libraries. The install process then links the program.
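Roughly, the install step would just run the normal final link, something like this (a sketch with made-up names; myapp.o would ship in the program package, libfoo.a in the library package):

    # performed by the package manager at install/upgrade time, not at run time
    cc -o /usr/bin/myapp /usr/lib/myapp/myapp.o /usr/lib/libfoo.a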

Plus then you will need some kind of information about which libs it was last tar'ed up against, so you know whether to update it or not.

No, just the normal package manager dependency resolution.

What you really are searching for is application folders.

No, to the contrary! App folders use dynamic linking for libraries included with the application. I'm talking about using static libraries even when delivering them separately.
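The difference in practice (a sketch with made-up names, GNU toolchain assumed): an app folder bundles shared objects and points the run-time linker at them, e.g.

    # app-folder style: still dynamic linking, just against a bundled copy
    cc -o MyApp/myapp main.o -LMyApp/lib -lfoo -Wl,-rpath,'$ORIGIN/lib'

whereas what I'm describing finishes the link at install time, so nothing is resolved at run time.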

The system you might want to look at is: http://0install.net/

Zero-install is an alternative to package managers. My proposal could be implemented by either.


jabjoe Member since:
2009-05-06

The difference is that the kernel is kept simple. The complexity is handled by a package manager or similar instead. No dynamic linker to exploit or carefully harden.


Not really a kernel problem as the dynamic linker isn't really in the kernel.
http://en.wikipedia.org/wiki/Dynamic_linker#ELF-based_Unix-like_sys...

What do you mean by this?


When something is statically linked, the library is dissolved into the executable; whatever isn't used, the dead stripper should remove. Your system is not like static linking. It's like baking dynamic linking.
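I.e. the classic behaviour (a sketch, GNU toolchain assumed, made-up names): the linker only pulls in the archive members that are actually referenced, and section GC strips the rest:

    cc -ffunction-sections -fdata-sections -c main.c
    cc -o app main.o libfoo.a -Wl,--gc-sections   # unreferenced code/data sections get dropped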

This package depends on library packages, just like today, but those packages contain static rather than shared libraries. The install process then links the program.


Then you kind of lose some of the gains. You either have to have dependencies sitting around waiting in case they are needed, or you have a repository to pull them down from....

No, just the normal package manager dependency resolution.


That was my point.

No, to the contrary! App folders use dynamic linking for libraries included with the application.


Yes.

I'm talking about using static libraries even when delivering them separately.


As I said before, it's not really static, it's baked dynamic. Also, if you keep dependencies separate, you either have loads kicking about in case they are needed (Windows), or you have package management. If you have package management, all you get out of this is baked dynamic linking, for no gain I can see.....

Zero-install is an alternative to package managers.

It's quite different, as it's decentralized and uses these application folders. Application folders are often put forward by some as a solution to dependencies.
