Linked by Thom Holwerda on Wed 9th Nov 2011 21:26 UTC, submitted by edwin
Way back in 2002, MIT decided it needed to start teaching a course in operating system engineering. As part of this course, students would write an exokernel on x86, using Sixth Edition Unix (V6) and John Lions' commentary as course material. This, however, posed problems.
Thread beginning with comment 497053
RE[8]: binary for windows....
by jabjoe on Sat 12th Nov 2011 19:00 UTC in reply to "RE[7]: binary for windows.... "

"Sorry, I should have said that the process of loading the binary is kept simple."


At the cost of making updating or substituting libraries more complicated, and making runtime substitution impossible or more complicated.

"Yes, this is why dynamic linking does not necessarily result in lower memory usage."


Only if you have one thing using the lib, or everything barely uses the lib and dead stripping removes most of it. It would be a very, very rare case where static linking uses less disk or less RAM.

"Creating a static library results in a library archive. When linking a program, the necessary parts are copied from the archive into the final binary."


Linking isn't just about libs, it's about all the object files. A static lib is just a collection of object data, and linking puts all of it into an executable. With static linking the linker doesn't have to keep anything separate, so it can optimize however it likes. Your system would require keeping each lib distinct. Think about what happens when a function is called: you can't just address the function directly, because the lib sizes will change, and with them the file layout and the address layout. So instead you call to a jmp which you patch when you bake the libs. Either you do that, or you store every reference to every function in every lib and update them all at baking time, which isn't what you would do, as it's insane. You would do the jmp redirection and keep the libs distinct. Your system is more like dynamic linking than static linking.
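
To make the indirection concrete, here is a minimal Python analogy (entirely made up for illustration, not code from either system): call sites never hold a function's real address; they go through one patchable slot per function, so baking a lib that has grown or shrunk means rewriting only the slots, never the call sites.

# Hypothetical sketch of the jmp-slot idea (all names invented).
# Call sites indirect through one slot per exported function;
# "baking" a changed library rewrites only the slots.

def _sqrt_old_layout(x):
    # The function as placed in the old library image.
    return x ** 0.5

def _sqrt_new_layout(x):
    # The "same" function after the library changed size and moved.
    return x ** 0.5

# The analogue of the jmp table: one patchable slot per function.
slots = {"sqrt": _sqrt_old_layout}

def call(name, *args):
    # Every call site goes through the slot, never a fixed address.
    return slots[name](*args)

print(call("sqrt", 2.0))            # call site laid down once...
slots["sqrt"] = _sqrt_new_layout    # ...baking patches only the slot
print(call("sqrt", 2.0))            # same call site, new target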

Right, I've done a quick test using lsof and Python, totalling up the file sizes of the open shared object files, once counting every occurrence and once counting each file uniquely. This gives a rough idea of how much memory the current system uses and how much yours would use.

Current system: 199 meg
Your system: 1261 meg

Disk will be worse, because that covers everything installed, not just what is running. It might still not be much by disk standards, but by SD standards....

So although I still don't think the biggest thing is space used (it's management and simplicity), it certainly shouldn't be discounted.

Very smart people have been working on this kind of thing for many decades and there is a reason things are like they are.


Alfman Member since:
2011-01-28

jabjoe,

"This gives a rough idea how much memory the current system uses and how much your would use.

Current system: 199 meg
Your system: 1261 meg

Disk will be worse because it will be everything, not just what is running. Might still not be much by disk standards, but by SD standards.... "

Could you clarify specifically what you are measuring?
It's not really fair just to multiply the size of shared libraries into every running binary which has a dependency on them, if that's what you are doing. This is certainly not what I'd consider an optimized static binary.

"Very smart people have been working on this kind of thing for many decades and there is a reason things are like they are."

Possibly, but on the other hand our tools may have failed to evolve because we've focused too much on shared libraries. They are the metaphorical hammer for our nails. Not that I view shared libraries as necessarily evil, but assuming they did not exist, we would have undoubtedly invested in other potentially much better solutions to many problems.


jabjoe Member since:
2009-05-06

"Could you clarify specifically what you are measuring?"


I'm using lsof to list all the open files and noting those that are .so (shared object) files. To guess your system's count I add a library in every time it appears; for the current system's count I count each library just once.
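
The exact script wasn't posted, so this is only a sketch of the kind of thing described, assuming lsof's -F n field output (one 'n'-prefixed name line per open file); everything else about it is an illustrative guess.

import os
import subprocess

# List open files; with -F n each file name comes on its own line
# prefixed with 'n'.
out = subprocess.run(["lsof", "-Fn"],
                     capture_output=True, text=True).stdout

total = 0    # every occurrence counted: the static-linking estimate
unique = 0   # each library counted once: the current shared system
seen = set()

for line in out.splitlines():
    if not line.startswith("n"):
        continue
    path = line[1:]
    if ".so" not in os.path.basename(path):
        continue  # keep only shared objects (libc.so.6, libfoo.so, ...)
    try:
        size = os.path.getsize(path)
    except OSError:
        continue  # mapping of a deleted or inaccessible file
    total += size
    if path not in seen:
        seen.add(path)
        unique += size

print("current system: %d meg" % (unique // 2**20))
print("your system: %d meg" % (total // 2**20))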

"It's not really fair just to multiply the size of shared libraries into every running binary which has a dependency on them, if that's what you are doing. This is certainly not what I'd consider an optimized static binary."


I said 'rough idea', and that's what it is. I've tried to explain a few times why your system would not allow much more optimization than the existing dynamic lib system: you need to keep the libs distinct. The only extra optimization your system might get is dead stripping what is not used when baking the binary. That will make a difference, but nowhere near enough.

"Possibly, but on the other hand our tools may have failed to evolve because we've focused too much on shared libraries."


We evolved towards shared libs. At one point daemons were used just for sharing code.

"They are the metaphorical hammer for our nails."


It does happen, but this is not one of those cases. The only problem with shared libs is dependencies (resolving them and distributing them), and that is solved with system-wide package management and open source.


"Not that I view shared libraries as necessarily evil, but assuming they did not exist, we would have undoubtedly invested in other potentially much better solutions to many problems."


You are wrong. The history is longer and more diverse than you seem to think. The people involved were/are smarter than you and I.
