Linked by Thom Holwerda on Wed 9th Nov 2011 21:26 UTC, submitted by edwin
General Unix Way back in 2002, MIT decided it needed to start teaching a course in operating system engineering. As part of this course, students would write an exokernel on x86, using Sixth Edition Unix (V6) and John Lions' commentary as course material. This, however, posed problems.
Thread beginning with comment 496999
RE[6]: binary for windows....
by jabjoe on Fri 11th Nov 2011 21:39 UTC in reply to "RE[5]: binary for windows.... "
jabjoe Member since:
2009-05-06

The difference is that the kernel is kept simple. The complexity is handled by a package manager or similar instead. No dynamic linker to exploit or carefully harden.


Not really a kernel problem as the dynamic linker isn't really in the kernel.
http://en.wikipedia.org/wiki/Dynamic_linker#ELF-based_Unix-like_sys...

What do you mean by this?


When something is statically linked, the library is dissolved into the binary, and the dead stripper should remove what is not used. Your system is not like static linking. It's like baking dynamic linking.

This package depends on library packages, just like today, but those packages contain static rather than shared libraries. The install process then links the program.


Then you kind of lose some of the gains. You have to have dependencies sitting around waiting in case they are needed. Or you have a repository to pull them down from....

No, just the normal package manager dependency resolution.


That was my point.

No, on the contrary! App folders use dynamic linking for libraries included with the application.


Yes.

I'm talking about using static libraries even when delivering them separately.


As I said before, it's not really static, it's baked dynamic. Also, if you keep dependencies separate, you either have loads kicking about in case they are needed (Windows) or you have package management. If you have package management, all you get out of this is baking dynamic linking. For no gain I can see.....

Zero-install is an alternative to package managers.

It's quite different, as it's decentralized using these application folders. Application folders are often put forward by some as a solution to dependencies.

Reply Parent Score: 2

bogomipz Member since:
2005-07-11

Not really a kernel problem as the dynamic linker isn't really in the kernel.

Sorry, I should have said that the process of loading the binary is kept simple.

When something is statically linked, the library is dissolved into the binary, and the dead stripper should remove what is not used.

Yes, this is why dynamic linking does not necessarily result in lower memory usage.

Your system is not like static linking. It's like baking dynamic linking.

This is where I do not know what you are talking about.

Creating a static library results in a library archive. When linking a program, the necessary parts are copied from the archive into the final binary. My idea was simply to postpone this last link step until install time, so that the version of the static library that the package manager has made available on the system is the one being used.

This way, the modularity advantage of dynamic linking could have been implemented without introducing the load time complexity we have today.
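A minimal sketch of what such an install-time link step might look like (all package names, paths, and the dependency table here are hypothetical, assuming a C toolchain): the package manager resolves the program's static-library dependencies as usual, and only then produces the final binary.

```python
# Sketch of a package manager that defers the final link to install time.
# Package names, paths, and the dependency table are hypothetical.

STATIC_LIB_DIR = "/usr/lib/static"   # where library packages drop their .a files

# Dependency metadata, as a normal package manager would record it.
DEPENDS = {
    "myapp": ["libfoo", "libbar"],
    "libbar": ["libfoo"],
}

def resolve(pkg, seen=None):
    """Depth-first resolution of static-library dependencies, deduplicated."""
    if seen is None:
        seen = []
    for dep in DEPENDS.get(pkg, []):
        resolve(dep, seen)
        if dep not in seen:
            seen.append(dep)
    return seen

def link_command(pkg, objects):
    """Build the final link step the installer would run."""
    libs = resolve(pkg)
    cmd = ["cc", "-o", f"/usr/bin/{pkg}", *objects]
    cmd += ["-L", STATIC_LIB_DIR]
    cmd += [f"-l{lib[3:]}" for lib in libs]   # strip the 'lib' prefix
    return cmd

print(" ".join(link_command("myapp", ["myapp.o"])))
```

Upgrading a library package would then mean re-running this link step for every installed program that depends on it.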

Reply Parent Score: 2

moondevil Member since:
2005-07-08

Still, you lose the benefits of plugins, unless you adopt some form of IPC mechanism, like sandboxing in Lion.

Reply Parent Score: 2

jabjoe Member since:
2009-05-06

Sorry, I should have said that the process of loading the binary is kept simple.


At the cost of making updating or substituting libraries more complicated, and making run-time substitution impossible or more complicated.

Yes, this is why dynamic linking does not necessarily result in lower memory usage.


Only if you have one thing using the lib, or everything is barely using the lib and the dead stripper removes most of it. It would be a very rare case where it uses less disk or less RAM.

Creating a static library results in a library archive. When linking a program, the necessary parts are copied from the archive into the final binary.


Linking isn't just libs, it's all the object files. A static lib is just a collection of object data. Linking is putting all of this into an executable. With static linking it doesn't have to care about keeping stuff separate, so you can optimize however you like. Your system would mean that you need to keep each lib distinct. Think about when a function is called: you can't just address the function directly, because the lib sizes will change, thus the file layout, thus the address layout. So you call through a jmp, which you change when you bake the libs. You either do that, or you have to store every reference to every function in every lib and update them all at baking time, which isn't what you would do, as it's insane. You would do the jmp redirection and keep the libs distinct. Your system is more like dynamic linking than static linking.
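The jmp-redirection point above can be sketched with a toy model (Python stands in for the jmp stubs; everything here is hypothetical illustration): the program never addresses the library function itself, it calls a fixed slot that the "baking" step rebinds when the library changes size or layout.

```python
# Toy model of calling through a rebindable jump table, standing in for the
# jmp stubs described above. The program addresses a fixed slot, not the
# function, so the library can move without the caller changing.

def strlen_v1(s):          # the function as laid out in version 1 of the lib
    return len(s)

def strlen_v2(s):          # same function after the lib grew and moved
    return len(s)          # (different "address", same behaviour)

jump_table = {"strlen": strlen_v1}   # the stubs the program calls through

def program(s):
    # Compiled code: always calls via the slot, never the function directly.
    return jump_table["strlen"](s)

print(program("hello"))              # calls via v1
jump_table["strlen"] = strlen_v2     # "re-bake" against the new library
print(program("hello"))              # calls via v2; `program` is untouched
```

This indirection is exactly what a dynamic linker's PLT-style stubs provide, which is why the scheme ends up looking like dynamic linking rather than static.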

Right, I've done a quick test using lsof and python, totalling up the file sizes of the shared object files, both in total and counting each unique file once. This gives a rough idea of how much memory the current system uses and how much yours would use.

Current system: 199 meg
Your system: 1261 meg

Disk will be worse because it will be everything, not just what is running. Might still not be much by disk standards, but by SD standards....
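The kind of tally described above can be reconstructed roughly like this (the process names and sizes below are made-up placeholders, not the actual lsof data): count each mapped shared object once for the current system, and once per process for the install-time-static scheme.

```python
# Rough reconstruction of the tally described above, with hypothetical data:
# for each process, the shared objects it has mapped and their sizes in bytes
# (as one might collect from lsof output).
MAPPED = {
    "firefox": {"/lib/libc.so": 2_000_000, "/lib/libgtk.so": 7_000_000},
    "gedit":   {"/lib/libc.so": 2_000_000, "/lib/libgtk.so": 7_000_000},
    "sshd":    {"/lib/libc.so": 2_000_000},
}

def shared_total(mapped):
    """Dynamic linking: each unique library is in memory once."""
    sizes = {}
    for libs in mapped.values():
        sizes.update(libs)
    return sum(sizes.values())

def per_process_total(mapped):
    """Install-time static linking: every process carries its own copy."""
    return sum(size for libs in mapped.values() for size in libs.values())

print(shared_total(MAPPED))       # 9000000
print(per_process_total(MAPPED))  # 20000000
```

Even with only three processes and two libraries, the per-process figure is more than double the shared one; with a full desktop's worth of processes the gap widens, which matches the 199 meg vs 1261 meg numbers above.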

So although I still don't think the biggest thing is space used (it's management and simplicity), it certainly shouldn't be discounted.

Very smart people have been working on this kind of thing for many decades and there is a reason things are like they are.

Reply Parent Score: 2