Memory management is a large, complex, and time-consuming set of tasks, one that is difficult to master because crafting a model of how systems behave in real-world, multi-programmed environments is a tough job. Components like scheduling, paging behavior, and multiple-process interactions present a considerable challenge. This article will help you build the basic knowledge required to engage the challenge of Linux memory management, giving you a start.
This model looks very similar to Unix, but with minor differences. Linux memory management is slower than that of Windows, but Windows memory protection is far weaker than Linux/Unix’s, leading to effectively similar performance. So for me the Linux/Unix model is more appropriate than that of Windows. It’s strange that the whole world didn’t come up with different models than the two that existed in VMS and Unix and their descendants, both of which are relatively old (dating from the 1970s). So we are now computing on two memory models from two companies: Digital and AT&T.
> It’s strange that the whole world didn’t come up with different models than the two that existed in VMS and Unix and their descendants
How is that strange?
Maybe quantum computing will produce an unforeseen… leap.
This model looks very similar to Unix, but with minor differences
That should have been expected: they took the best ideas.
Linux memory management is slower than that of Windows, but Windows memory protection is far weaker than Linux/Unix’s, leading to effectively similar performance
That does not mean anything to me. What is “slower memory management”?!
Where are the benchmarks? Same for memory protection.
Similar performance? The kernel does a lot of things, but in every area I can think of (networking, FS, launching processes, swap, caching), I don’t see similar performance in Linux at all, but better performance in every case. Are you talking about memory allocation, and saying that the rough spot of memory allocation is in the kernel? Because if that’s the case, I can tell you the Linux kernel uses one of the most efficient (if not the most efficient) memory allocators available in kernels.
It’s strange that the whole world didn’t come up with different models than the two that existed in VMS and Unix and their descendants, both of which are relatively old (dating from the 1970s). So we are now computing on two memory models from two companies: Digital and AT&T
Isn’t this tied to the hardware architecture?
That would just show that they were well designed, and that we still haven’t changed architectures much. PCs still use the BIOS too, after all (once again, perhaps Apple will change the tide with EFI).
Linux just proved that these Unix-derived models work (sometimes very well) on every architecture.
Anyway, the article seems bound to the x86 architecture.
Linux memory management is slower than that of Windows, but Windows memory protection is far weaker than Linux/Unix’s, leading to effectively similar performance
This is a very generic and misleading blanket statement. Much like threads, processes, mutexes, etc., the real answer is “it depends.” Both architectures are very different, and it’s not something you can simply abstract away and expect them to behave the same, nor simply apply the same kind of benchmarks, especially simple generic benchmarks like process-launching speed.
It’s strange that the whole world didn’t come up with different models than the two that existed in VMS and Unix and their descendants, both of which are relatively old (dating from the 1970s).
Keep in mind that VMS and UNIX are only two of the more well known solutions, and that the world of computing was largely composed of various dissimilar proprietary architectures for its first couple of decades.
In other words, there *have* been a number of different memory management models created over the years, with some of them still being used in production environments, but many (most) of those simply haven’t been translated to common desktop computing architectures so they aren’t visible to you unless you actually interact with systems using those architectures.
Two cases in point that I’ve worked with personally:
The Unisys 2200 mainframe (once known as the UNIVAC 1100-series, now known as the Clearpath Dorado) is a 36-bit word-oriented (not byte-oriented) machine which is still heavily used in some industries (airlines in particular), and the OS 2200 memory paging model is somewhat different from UNIX or VMS. I’m not an EXEC guru, but you can obtain more information about its workings on comp.sys.unisys on USENET. Some bits of information are here:
http://en.wikipedia.org/wiki/UNIVAC_1100/2200_series
http://people.cs.und.edu/~rmarsh/CLASS/CS451/HANDOUTS/os-unisys.pdf
http://www.bitsavers.org/pdf/univac/1100/
The Unisys A-series mainframe (once the primary mainframe architecture developed by Burroughs, now known as the Clearpath Libra and LX) is a stack-based machine which runs MCP and which is architecturally different from almost anything else ever developed.
http://en.wikipedia.org/wiki/B5000
http://en.wikipedia.org/wiki/Master_Control_Program
http://www.bitsavers.org/pdf/burroughs/A-Series/
Correct me if I’m wrong here, but didn’t the Wine benchmarks from a few days ago show Wine beating XP on every single one of the memory benchmarks? And bear in mind this is Wine we’re talking about. I’d imagine a native application would fare even better.
Linux memory management is slower than that of Windows
Have you got any evidence for that?
but Windows memory protection is far weaker than Linux/Unix’s, leading to effectively similar performance
Only true for Windows 9x. The memory protection in Windows NT and its successors including XP is pretty much the same as in Linux/Unix.
The memory protection in Windows NT and its successors including XP is pretty much the same as in Linux/Unix.
Have you got any evidence for that?
The memory protection in Windows NT and its successors including XP is pretty much the same as in Linux/Unix.
Of course I didn’t mean that they’re the same in actual implementation, but they’re the same in principle.
Both the Linux and the NT kernel run in protection ring 0, while applications run in ring 3. The kernels have access to everything, while applications can neither read nor write other applications’ or the kernel’s memory.
Of course, bugs that allow memory protection to be circumvented appear on both systems (not necessarily in the kernel), but they usually require deliberate exploits rather than just the kind of buggy apps that could so easily bring down Windows 9x.
I’ve yet to see any x86 OS (besides OS/2?) that uses more than two rings of protection (ring 0 – kernel, ring 3 – user land).
AFAIR Xen is the only x86 software to actually use more than these two rings (Xen – ring 0, guest OS – ring 1, user land – ring 3)… though I think this is about to change once Xen goes x86_64…
Considering both Windows NT/2K/XP and Linux use the same basic model, I doubt that, barring minor implementation differences, you’ll see major performance and/or protection differences.
G.
I think the following is true:
OS/2 kernel -> ring 0
OS/2 some PM stuff -> ring 2
OS/2 Applications -> ring 3
Source:
http://rover.wiesbaden.netsurf.de/~meile/warpstock_2000/PDA_en/5.ht…
Heh… I remembered something about OS/2 using three rings; I just couldn’t remember why.
Thanks for the link.
G.
Now don’t flame me, I’m just curious. How would the makers of Linux know for a fact that Microsoft wasn’t stealing code if Microsoft has a closed-source policy?
I’m not saying they are, just wanted to know how they would know if they were stealing.
I’m not saying they are, just wanted to know how they would know if they were stealing.
It’s difficult and time-consuming to do, but it is possible to find similarities by looking at disassembled code and investigating its behaviour in a debugger.
It’s easier if the stolen code has not been obfuscated and program symbols appear in the executable. The PearPC rip-off a while back was found out that way.
Now don’t flame me, I’m just curious. How would the makers of Linux know for a fact that Microsoft wasn’t stealing code if Microsoft has a closed-source policy?
Many organisations seem to have access to the Windows source code these days…
http://www.microsoft.com/resources/sharedsource/licensing/windows.m… at least one of them could have blown the whistle by now…??
Many organisations seem to have access to the Windows source code these days…
They seem to, but they don’t. At least not in the same sense that you have access to Linux and FOSS source code.
The best I have seen on Windows is that some OEMs have small parts of some components of Windows, under NDA and with lots of restrictions.
http://www.microsoft.com/resources/sharedsource/licensing/windows.m….. at least one of them could have blown the whistle by now…??
You’re taking MS marketing as fact?! You really believe these people have access to the actual, current source code of Windows, that it compiles, and that they can modify it? Are you insane?