Linked by Thom Holwerda on Wed 9th Nov 2011 21:26 UTC, submitted by edwin
Way back in 2002, MIT decided it needed to start teaching a course in operating system engineering. As part of this course, students would write an exokernel on x86, using Sixth Edition Unix (V6) and John Lions' commentary as course material. This, however, posed problems.
Thread beginning with comment 497047
RE[7]: binary for windows....
by bogomipz on Sat 12th Nov 2011 16:35 UTC in reply to "RE[6]: binary for windows.... "
bogomipz Member since:
2005-07-11

"Not really a kernel problem, as the dynamic linker isn't really in the kernel."

Sorry, I should have said that the process of loading the binary is kept simple.

"When something is statically linked, the library is dissolved; what is not used, the dead stripper should remove."

Yes, this is why dynamic linking does not necessarily result in lower memory usage.

"Your system is not like static linking. It's like baking dynamic linking."

This is where I do not know what you are talking about.

Creating a static library results in a library archive. When linking a program, the necessary parts are copied from the archive into the final binary. My idea was simply to postpone this last linking step until install time, so that the version of the static library that the package manager has made available on the system is the one being used.

This way, the modularity advantage of dynamic linking could have been achieved without introducing the load-time complexity we have today.

Reply Parent Score: 2

moondevil Member since:
2005-07-08

Still, you lose the benefits of plugins, unless you adopt some form of IPC mechanism, like the sandboxing in Lion.

Reply Parent Score: 2

bogomipz Member since:
2005-07-11

Yes, you are right. dlopen() and friends are implemented on top of the dynamic linking loader.

Although sub-processes and IPC are more in line with the Unix philosophy, plugins are definitely useful.

Reply Parent Score: 2

jabjoe Member since:
2009-05-06

"Sorry, I should have said that the process of loading the binary is kept simple."


At the cost of making updating or substituting a library more complicated, and making run-time substitution impossible or more complicated.

"Yes, this is why dynamic linking does not necessarily result in lower memory usage."


Only if you have one thing using the lib, or everything is barely using the lib and the dead stripper removes most of it. It would be a very, very rare case where it uses less disk or less RAM.

"Creating a static library results in a library archive. When linking a program, the necessary parts are copied from the archive into the final binary."


Linking isn't just about libs; it's all the object files. A static lib is just a collection of object data, and linking puts all of this into an executable. With static linking, the linker doesn't have to keep anything separate, so it can optimize however it likes. Your system would need to keep each lib distinct. Think about what happens when a function is called: you can't just address the function directly, because the lib sizes will change, and with them the file layout and the address layout. So instead, each call goes to a jmp stub that you patch when you bake the libs in. Either you do that, or you store every reference to every function in every lib and update them all at bake time, which isn't what you would do, as it's insane. You would do the jmp redirection and keep the libs distinct. That makes your system more like dynamic linking than static linking.
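The jmp redirection described above can be sketched in Python, with a mutable slot standing in for the jump stub. Everything here (foo_v1, foo_v2, the slots table) is a made-up illustration, not anything from a real linker: the point is that every call site goes through one slot per library function, so "re-baking" a new library version only patches the slot, never the call sites.

```python
def foo_v1():
    return "library version 1"

def foo_v2():
    return "library version 2"

# One mutable slot per exported library function -- the analogue of a
# jmp stub that the install-time "baking" step would patch.
slots = {"foo": foo_v1}

def application_code():
    # Every call site indirects through the slot rather than
    # addressing the function directly.
    return slots["foo"]()

print(application_code())   # library version 1
slots["foo"] = foo_v2       # "re-baking": patch the one slot
print(application_code())   # library version 2
```

The cost of this design is exactly the extra indirection on every call, which is why it ends up resembling dynamic linking rather than a monolithic statically linked binary.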

Right, I've done a quick test using lsof and python, totalling up the file sizes of the shared object files in two ways: counting every mapping, and counting each unique file once. This gives a rough idea of how much memory the current system uses and how much yours would use.

Current system: 199 meg
Your system: 1261 meg
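The accounting behind those two numbers can be sketched in pure Python. The process table below is a made-up example; the real test would gather each process's shared objects from lsof. "Current system" counts each unique library once (it is shared), while the static-linking estimate counts a copy per process.

```python
def shared_vs_static(process_libs):
    """process_libs: {pid: {lib_path: size_in_bytes}}.

    Returns (shared_total, static_total): memory if each unique
    library is mapped once, versus a copy baked into every process.
    """
    static_total = sum(size
                       for libs in process_libs.values()
                       for size in libs.values())
    unique = {}
    for libs in process_libs.values():
        unique.update(libs)  # same path -> counted once
    shared_total = sum(unique.values())
    return shared_total, static_total

# Hypothetical example: two processes both mapping libc.
procs = {
    1234: {"/lib/libc.so.6": 2_000_000, "/lib/libm.so.6": 900_000},
    5678: {"/lib/libc.so.6": 2_000_000},
}
print(shared_vs_static(procs))  # (2900000, 4900000)
```

As Alfman notes below the measurement, this is an upper bound: a real static link would dead-strip unused parts of each library, so the per-process copies would be smaller than the full .so sizes summed here.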

Disk will be worse because it will be everything, not just what is running. Might still not be much by disk standards, but by SD standards....

So although I still don't think the biggest thing is space used (it's management and simplicity), it certainly shouldn't be discounted.

Very smart people have been working on this kind of thing for many decades and there is a reason things are like they are.

Reply Parent Score: 2

Alfman Member since:
2011-01-28

jabjoe,

"This gives a rough idea how much memory the current system uses and how much your would use.

Current system: 199 meg
Your system: 1261 meg

Disk will be worse because it will be everything, not just what is running. Might still not be much by disk standards, but by SD standards.... "

Could you clarify specifically what you are measuring?
It's not really fair just to multiply the size of shared libraries into every running binary which has a dependency on them, if that's what you are doing. This is certainly not what I'd consider an optimized static binary.

"Very smart people have been working on this kind of thing for many decades and there is a reason things are like they are."

Possibly, but on the other hand our tools may have failed to evolve because we've focused too much on shared libraries. They are the metaphorical hammer for our nails. Not that I view shared libraries as necessarily evil, but assuming they did not exist, we would have undoubtedly invested in other potentially much better solutions to many problems.

Reply Parent Score: 2