Jeff Atwood explains why Vista uses so much memory. “You have to stop thinking of system memory as a resource and start thinking of it as a cache. Just like the level 1 and level 2 cache on your CPU, system memory is yet another type of high-speed cache that sits between your computer and the disk drive. And the most important rule of cache design is that empty cache memory is wasted cache memory. Empty cache isn’t doing you any good. It’s expensive, high-speed memory sucking down power for zero benefit. The primary mission in the life of every cache is to populate itself as quickly as possible with the data that’s most likely to be needed – and to consistently deliver a high ‘hit rate’ of needed data retrieved from the cache.”
I thought they were already doing this since like NT4, they just hid the cache use from the user reports (which isn’t a horrible idea).
I thought they were already doing this since like NT4
They were, but now it’s improved/more intelligent. For example, AFAIK, it ‘remembers’ even after you restart the computer.
Finally ex-windows users won’t ask why linux ‘uses so much memory’ :)
Looks like Vista did catch up (or more) in the ‘unused memory is wasted’ department.
Looks like Vista did catch up
No, no catching up, innovation! Well, relatively speaking.
Yeah, well, that’s what they will call it… Though some of the things they seem to be doing ARE interesting and probably ‘new’, I guess…
ARE interesting and probably ‘new’
Have to be, to some extent, since nobody would give a rat’s ass if they weren’t even interesting. ‘New’ is good, even if it’s just new compared to their last version, since that can sometimes mean improvement. And even if they just implement stuff that’s been around elsewhere and proven to be good, that’s also fine since at least it shows they recognized they have to improve and tried to do so. They should just drop the innovation speech already.
What many of us forget is that Microsoft Research sponsors and works with many of the universities that are working on these advanced technologies that go into Linux. Linux isn’t just made up of hobbyist hackers any more; much of its code base comes from companies like IBM and Novell and from research universities, many of which are supported by companies like Novell, Oracle, IBM, Microsoft, and Apple.
So just because Microsoft couldn’t logically fit it into Windows XP doesn’t mean they shouldn’t put it in their newest OS. And I don’t even think Linux had it when Windows XP first came out, because they were still dealing with the 2.4 kernel.
So don’t act like Linux is doing all the innovation, because the innovation is actually coming from the Universities and Linux is a volatile OS because of its openness so it is the logical place to introduce new technologies.
I don’t even think Linux had [a disk cache] when Windows XP first came out because they were still dealing with the 2.4 kernel.
That’s the funniest thing I’ve read so far today. Linux and every other major UNIX-like system back to at least SVR4 and beyond have had a disk cache that consumes any otherwise unused physically-resident virtual memory allocated to the kernel. Now, to be fair, Linux 2.4 implemented disk caching through the buffer cache, which uses disk blocks as the fundamental unit of storage. In Linux 2.6, the disk cache is implemented mostly through the page cache, which uses larger and better-behaved (with respect to alignment) virtual memory pages as the fundamental unit of storage. The buffer cache is still used for caching inherently block-sized storage objects such as inodes and other filesystem metadata objects.
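For anyone who wants to see the two caches side by side on a Linux box, they show up as separate fields in /proc/meminfo (a minimal illustration; the numbers are just examples, and the exact field set varies by kernel version):

$ grep -E '^(Buffers|Cached)' /proc/meminfo
Buffers: 34636 kB
Cached: 277744 kB

The same two pools appear as the “buffers” and “cached” columns in the output of free.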
The kernel does have a hard limit on virtual address space which comes into play on 32-bit systems (or 64-bit systems running a 32-bit OS). This limit must be set no later than boot time (usually it is set at install time) and depends on how much physical memory you have installed on your system. Basically, if you have 4GB of memory installed and you’re running a 32-bit OS, the kernel can only allocate 1-3GB of this memory no matter how small the memory requirements of userspace might be. If it’s set to allocate 3GB, you run the risk of starving your userspace applications of virtual memory, causing them to crash. This was a bug that some users hit while running the 32-bit Vista installer on systems with 3GB+ of memory.
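On the Linux side, there’s a rough way to see how that split works out on a given 32-bit machine (a sketch assuming a 2.6-era kernel built with highmem support; the config file path and field names may differ on other setups):

$ grep -E '^(LowTotal|HighTotal)' /proc/meminfo                  # directly-mapped vs. highmem RAM
$ grep -E 'CONFIG_(HIGHMEM|VMSPLIT)' /boot/config-$(uname -r)    # split chosen at build time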
These techniques are not new. There are classic OS textbooks that describe these mechanisms in great detail. In fact, I’m sure that any Windows OS that had support for NTFS must have also featured a disk cache, since the basic design consideration for NTFS is that reads should usually be satisfied from the disk cache. When the disk cache hit-rate drops, read performance on NTFS becomes unacceptable, nearly as bad as FAT32.
The fact of the matter is that physical memory is a resource. It’s obviously not just a cache. Even on systems with pervasive support for pageable kernelspace (i.e. AIX), some portions of physical memory may never be paged to disk. But more importantly, the system must make the best use of its resources on all levels of the storage hierarchy. For example, local disk storage should be used as a cache for network resources such as NFS volumes. If the NFS volume is large enough, the cachefs volume should expand to fill your unused and unreserved space. Obviously some space will need to remain reserved for paging, dumping the kernel (if so configured), and for emergency maintenance by the superuser. But any other unused space is wasted if there’s uncached data on a more distant level of the storage hierarchy.
Wow, did you totally miss the point. I never said Linux didn’t have a disk cache; in fact, if you actually read the article, it was talking about the SuperFetch technology that tries to minimize disk cache.
And that is what I was referring to: the actual process that minimizes the use of the disk cache. But in order to understand the comments I made on the article, you actually have to read the article.
And the very fact that you had to insert the words “disk cache” to go off on a tangent about a sentence you took out of context shows me you had no intention of actually arguing the point of the article.
First, I can’t possibly figure out how that “it” I replaced could refer to SuperFetch or any similar technology, because Linux doesn’t have an anticipatory page-in mechanism. Not in 2.4, not now, and probably not in the near future. In addition, memory is not a cache for paging space, so any attempt to associate SuperFetch with using memory as a disk cache is simply wrong.
SuperFetch has nothing to do with the disk cache, since it doesn’t have anything to do with the filesystem. On the contrary, it is a virtual memory mechanism. It performs page-ins from the paging space before the virtual pages are referenced based on previously observed trends, replacing least-recently-used pages in memory with hopefully soon-to-be-used pages from paging space. SuperFetch doesn’t change the fact that memory should always be full, nor does it create a magic way to page something in without paging something out. It doesn’t minimize paging, either. In the best case scenario, it performs paging earlier than it would have otherwise occurred. In anything but the best case, it generates more paging operations than necessary.
I don’t understand why you take offense to my comment. Anyone can argue the point of the article, but not that many people actually have the knowledge to decide for themselves whether the whole premise of the article is full of crap. I assumed that the article must be about a disk cache, since that’s the only way the teaser makes sense. Memory is not a cache for paging space! When a page gets copied into memory, its blocks are freed on the disk’s paging space. When filesystem data gets cached, it persists on the backing store. The whole premise of the article is wrong. So why should I argue it?
This article is not about disk caching. It is about the benefits of SuperFetch. I don’t know where you got the idea it was about disk caching, but you know what they say about people who assume.
Read the article; my comments were in context with the article. This is not a social forum. The comments are supposed to be focused on the article above. Try it sometime before jumping to major conclusions and you won’t look like a troll.
I read the article after you replied to my first comment. You didn’t mention SuperFetch in your first comment, so maybe I was thrown off by you being off-topic :) BTW, I don’t break into discussions of virtual memory and disk caching in social settings. I’ve found that these discussions are more appropriate for… OSNews.
A troll has his mind made up on how the world works and offers conclusions without justification. It’s simply too early to take sides on how effective SuperFetch will be in a wide variety of workloads, and for this reason I didn’t offer a conclusion on this. I did offer lots of detailed information that is correct to the best of my knowledge, and I justified my assertion that the article’s premise is baloney.
You, on the other hand, look like a troll. I don’t know exactly what you’re for or against, besides being adamantly opposed to my comments. But you’re telling me what to do, and you’re telling me that you make sense and I don’t without telling me why. So you’re trolling.
It’s not that you didn’t make sense, but like you said, you hadn’t read the article when you replied to my comment. If I had been talking about disk caching your topic would have totally made sense.
But my reply was based on the article, which dealt with SuperFetch. I don’t have to mention SuperFetch, because it is in the context of the article. And my original comment was in reply to somebody saying Linux got there first and Linux innovates and Microsoft doesn’t.
I was disagreeing with that post, because Microsoft pours money by the fistful into research organizations in order to keep computer science advancing and to add value to their product. You may disagree with how they add value to their product, but they do it by funding research organizations and by buying organizations that are at the top of their field where their software is lacking. The reason Linux gets to the goal first is the volatile nature of the kernel, where anybody can patch it and post it to the internet. Without getting into the merits of closed source vs. open source, I am leaving it at that.
From the article:
And the most important rule of cache design is that empty cache memory is wasted cache memory.
It’s interesting – I have heard of some other operating system that has followed this practice for a long time. Hmm, drawing a blankux.
Never mind. Probably nothing.
Doesn’t KDE act sort of like this? It eats most of my RAM, but doesn’t really feel like it while I’m using it.
No, linux memory management does this.
KDE is actually really using all that memory, nothing to do with caching.
Soo… suppose I have 16GB of memory. Will Vista eat all of it (just for itself)? If yes, why is that good? I believe there’s a natural limit on what to fill that cache with!
Because the cache is thrown away instantly when you need it for a process. It’s as cheap to give a process cache memory as it is to give it free and unused memory (applications always expect garbage in newly allocated memory).
Cache is free, really. It means a small amount of CPU use to track the cache and a small amount of memory (which would shrink as you have less cache) and you gain faster disk accesses. It is win win, and I’ve never heard of a limit to its usefulness (beyond having more memory than disk space, but that’s very unlikely).
This is part of why Unix systems improve their performance the longer they’ve been running.
I was, however, under the impression that XP already did this but didn’t report on the cache use.
> Because the cache is thrown away instantly when you need
> it for a process.
I don’t know about Vista, but I’ve seen linux start swapping instead of throwing away part of the 500+ MB it was using as cache. I tried adjusting swappiness and FS cache aggressiveness, but I couldn’t make linux not lock up for short whiles (annoying e.g. when listening to music). In fact, I got so annoyed with these anti-desktop-oriented VMM algorithms (or whatever, I’m not a linux kernel expert) that I now have 3 GiB of RAM and no swap at all. Although there are still short lock-ups when some program suddenly starts using a lot of memory, they are at least an order of magnitude shorter than before.
In fact, I got so annoyed with these anti-desktop-oriented VMM algorithms (or whatever, I’m not a linux kernel expert) that I now have 3 GiB of RAM and no swap at all.
That’s not very good. You need swap, no matter how much RAM you have: http://kerneltrap.org/node/3202
It doesn’t have to be a swap partition. You can make a swap file in one of the regular partitions, or you can set some RAM space aside for swap, which is a really neat trick if you have the RAM to spare.
You won’t need much, I’m using something like 68k right now. But you need it, because the RAM allocation system needs a place to put dirty pages, and that place needs to be outside the normal allocation space.
Plus, you can tweak the swappiness setting, which means that the swap can potentially end up being used exclusively for dirty pages and never for swapping out idle applications:
http://kerneltrap.org/node/3000
I think a few megs of swap in RAM and swappiness set to minimum will give you the ideal combination, NOT getting rid of swap.
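For reference, creating a swap file goes roughly like this (a sketch; the size and path are just examples, run as root):

# dd if=/dev/zero of=/swapfile bs=1M count=256   # 256MB file of zeros
# chmod 600 /swapfile                            # keep other users out of it
# mkswap /swapfile                               # write the swap signature
# swapon /swapfile                               # start using it

To keep it across reboots, add a line like “/swapfile none swap sw 0 0” to /etc/fstab.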
> > In fact, I got so annoyed with these anti-desktop-oriented
> > VMM algorithms (or whatever, I’m not a linux kernel
> > expert) that I now have 3 GiB of RAM and no swap at all.
>
> That’s not very good. You need swap, no matter how much
> RAM you have: http://kerneltrap.org/node/3202
There’s a whole lotta cluelessness in that thread. Most pro-swap comments think that absolute throughput is the only thing that counts. I don’t. On the desktop I care about latencies, about responsiveness. I don’t want it to take 10 seconds for my 2000 MHz desktop to redraw a window. Not under any circumstances. I don’t want my audio or mouse pointer to freeze for 200+ ms even if it would make my concurrent file copy operation take 40 seconds instead of 55. (Actually, since I switched from 2.4 to 2.6 commands like mv and cp make everything else reeeaaalllyyy sssllooooww and unr__es_pon__s_ive.)
> you need [swap], because the RAM allocation system needs a
> place to put dirty pages, and that place needs to be outside
> the normal allocation space
I haven’t heard about that before. I guess I’ll have to try it out. Thanks.
> you can tweak the swappiness setting
“echo 0 >/proc/sys/vm/swappiness” was the first thing I tried. It helped, but very little; not nearly enough.
“echo 10000 >/proc/sys/vm/vfs_cache_pressure” also helped a bit, I think, but also way too little.
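(For completeness: if those values do help someone, they can be made persistent instead of being echoed into /proc at every boot. A sketch, assuming a standard sysctl setup:

vm.swappiness = 0
vm.vfs_cache_pressure = 10000

in /etc/sysctl.conf, then “sysctl -p” to apply without rebooting.)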
>
>There’s a whole lotta cluelessness in that thread. Most
>pro-swap comments think that absolute throughput is the
>only thing that counts. I don’t. On the desktop I care
>about latencies, about responsiveness. I don’t want it
>to take 10 seconds for my 2000 MHz desktop to redraw a
>window. Not under any circumstances. I don’t want my
>audio or mouse pointer to freeze for 200+ ms even if it
>would make my concurrent file copy operation take 40
>seconds instead of 55. (Actually, since I switched from
>2.4 to 2.6 commands like mv and cp make everything else
> reeeaaalllyyy sssllooooww and unr__es_pon__s_ive.)
>
If lower latency is your goal, reduce the niceness of X and your window manager. I renice X to -19 and my window manager to -15 to improve desktop responsiveness. If you compile your kernel, you can also set the timer frequency to 1000 Hz. There are other options in the kernel (2.6 or better) for a low-latency desktop that you can configure.
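For example, something along these lines (a sketch: process names vary between setups, the X server may be X or Xorg, the window manager could be metacity, kwin, etc., and lowering niceness below 0 needs root):

# renice -19 -p $(pidof X)          # give the X server higher priority
# renice -15 -p $(pidof metacity)   # and the window manager slightly less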
You have some configuration option incorrect, because you should NOT be having skipping and unresponsiveness doing a cp. I have run Linux on systems as slow as 500 MHz with 256 MB of RAM and an ancient IDE drive and I never had behavior like you described. In fact, I was impressed that I could do things like three emerges at a time (Gentoo) while browsing the web and listening to music with no noticeable loss of performance.
> You have some configuration option incorrect because you should
> NOT be having skipping and unresponsiveness doing a cp.
In that case all millions of Ubuntu desktop users running with default settings have their configuration options incorrect, too. And don’t try to blame my h/w either, since I’ve tried quite a lot of different boxes and configurations.
A single mv/cp/md5sum won’t make the audio skip, but running a large compile job at the same time might. I’m usually doing fairly much at any given time, the load mostly hovering at 5-7, rarely sinking below 3-4. Still, no matter how much I do it really shouldn’t make my pointer freeze for any noticeable amount of time ever.
“On the desktop I care about latencies, about responsiveness. I don’t want it to take 10 seconds for my 2000 MHz desktop to redraw a window. Not under any circumstances.”
msundman try the -ck linux kernel. It is designed very specifically to help interactivity and gui responsiveness.
> try the -ck linux kernel. It is designed very specifically to
> help interactivity and gui responsiveness
Sounds good. I think I’ve heard that Kolivas has made some patch(es?) that moves program data pages from disk back into ram even before they are requested. That’d be sweet.
Now if I could only find an Ubuntu package with both Ubuntu’s and Kolivas’s patches (and of course compatible with all standard Ubuntu packages, such as the vmware and nvidia drivers)…
As other replies have said, cp and mv shouldn’t make your computer unresponsive. I had a similar problem too and it turned out DMA was disabled. Check your DMA settings with hdparm and if necessary enable DMA for your drives.
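Something along these lines, assuming an IDE/PATA drive (the device name is just an example; run as root):

# hdparm -d /dev/hda    # prints "using_dma = 0 (off)" or "1 (on)"
# hdparm -d1 /dev/hda   # switch DMA on if it was off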
> cp and mv shouldn’t make your computer unresponsive.
I agree, but they do. Maybe it’s because the other processes are waiting for pages from the same disk to swap back into ram and it takes a looong time to swap because cp is keeping the HD busy. I don’t know.
> I had a similar problem too and it turned out DMA was
> disabled. Check your DMA settings with hdparm and if
> necessary enable DMA for your drives.
Thanks, but I’m not completely incompetent. I would immediately recognize the symptoms of having a HD in PIO mode. Besides, I think SATA drives always use DMA.
# time hdparm -t /dev/sdc1
/dev/sdc1:
Timing buffered disk reads: 132 MB in 3.00 seconds = 44.00 MB/sec
real 0m6.322s
user 0m0.012s
sys 0m0.272s
#
And that’s with maximum AAM (=128), it’s >99.8 % full and the system load is ~5. My 3 other disks perform similarly.
One big reason I gave up using Linux as my desktop was that things slow down too much when doing I/O. Also, it couldn’t handle high CPU load very well compared to Windows. And yes, I tried various experimental pre-alpha patches on various distributions and there was nothing wrong with my hardware or DMA settings.
This isn’t normal behavior… I’ve had several machines running Linux and at times I’ve run them very heavily, but I rarely experienced slowdown: there were only a rare few kernels where my music would ever stop playing (unless I did something to bring the system to its knees, but you can do this to almost any desktop OS).
So while you say you had things straight, to those of us who’ve successfully run Linux for years we know something was either wrong or your work load is very “unique” for your hardware.
I tried adjusting swappiness and FS cache aggressiveness, but I couldn’t make linux not lock up for short whiles (annoying e.g. when listening to music).
This isn’t ‘proven’ in any way, but I had issues of this nature on Windows and Linux with integrated sound cards.
Eventually, the drivers got better, and the problem went away. I still notice a distinct lag between hitting the play/pause key and the music stopping and starting. This problem doesn’t seem to occur with Emu10K1 based cards (SB Lives), so my guess is that it’s either a problem of a low quality sound card, or badly written drivers.
> > I tried adjusting swappiness and FS cache aggressiveness,
> > but I couldn’t make linux not lock up for short whiles
> > (annoying e.g. when listening to music).
>
> This isn’t ‘proven’ in any way, but I had issues of this
> nature on Windows and Linux with intergrated sound cards.
Unless the sound card driver also causes the on-screen pointer to freeze at the same time then that’s not it.
Anyway, let’s not delve too much into this discussion here unless it has something to do with Vista and/or memory cache management.
Unless the sound card driver also causes the on-screen pointer to freeze at the same time then that’s not it.
I had similar freezes – changing the I/O scheduler to CFQ resolved the problem.
I’ve seen linux start swapping instead of throwing away part of the 500+ MB it was using as cache.
This reasoning is a little simplistic. It doesn’t make sense to give memory pages precedence over filesystem pages under physical memory contention. They should both be evicted from physical memory on the basis of how recently they were last referenced. If you keep referencing filesystem pages in the disk cache, then when your memory requirements increase, you’re likely to page-out memory pages instead of freeing from the page cache. You want to keep whatever pages are most likely to be referenced in physical memory, regardless of whether they’re memory or filesystem pages.
These lockups you’re experiencing sound much more like process or I/O scheduling issues than VMM issues. I don’t want to get too off-topic (keeping the discussion to general OS memory management stuff), but I recommend switching your I/O scheduler and seeing if this makes a difference for you.
> It doesn’t make sense to give memory pages precedence
> over filesystem pages under physical memory contention.
> […]
> You want to keep whatever pages are most likely to be
> referenced in physical memory, regardless of whether they’re
> memory or filesystem pages.
This comment shows quite clearly that you think throughput is above everything else (as do Torvalds and Morton). I already wrote that I disagree (on the desktop, that is; OTOH, on the server I agree that throughput comes first). On my desktop box I want pages needed for foreground tasks (conceptually speaking, not necessarily “threads” or “processes”) to stay in physical memory even if it means that a concurrent mv gets only 10 MiB instead of 1000 MiB of FS cache or whatever.
> These lockups you’re experiencing sound much more like
> process or I/O scheduling issues that VMM issues.
Well, they went away almost completely when I disabled my swap, so I’m (mostly) satisfied now.
I think I’ve read that 2.6 has something that boosts the priority of i/o bound processes, and that the reason for this is to make interactive programs more responsive. Maybe it’s that thing that’s backfiring and causing mv, cp, md5sum etc. to hog all my resources if I don’t nice them (or de-nice everything else).
> I recommend switching your I/O scheduler and seeing if
> this makes a difference for you.
I’m using the desktop version of Ubuntu, so it’s Canonical’s job to choose schedulers (or whatnot) suitable for a desktop. I do like that I can fiddle with a lot in linux, but I hate that I have to.
@Mystilleef
I can’t have non-critical tasks running at -19 or -15 or I’ll risk starving something that is critical. Unpredictable process starvation is one reason why I stopped using Windows.
This comment shows quite clearly that you think throughput is above everything else (as do Torvalds and Morton)
Well, I believe you’re mistaken. One of the primary design considerations for the 2.6 kernel was to be suitable for low-latency desktop environments. Some server admins still use 2.4 because they see throughput regressions in 2.6 for their workloads. I believe you are experiencing an I/O scheduling problem that’s allowing bursts of I/O to impact multimedia applications such as your audio playback. Thankfully, selecting a new I/O scheduler is as easy as editing a single file and rebooting. No need to recompile the kernel. That’s the synopsis; here’s more:
If data is on disk, whether because it’s in paging space or on the filesystem, then you will incur a latency in accessing this data. It will be the same latency regardless of whether it’s memory or files. Whatever it is you’re accessing, you want it to be in memory. That’s why Linux uses a least-recently-used algorithm to evict the “stalest” pages in memory. It only pages-in (and pages-out) when necessary, because doing anticipatory paging generates excess I/O. Desktop systems are largely I/O bound, so we want to keep I/O to a minimum, and we want to schedule I/O efficiently.
Linux offers an I/O scheduler that attempts to let sudden bursts of I/O complete with minimal interruption from the usual traffic (anticipatory, best for servers and traditional desktops), one that attempts to minimize the average time that processes spend waiting for I/O (CFQ, best for multimedia desktops), and one which attempts to make sure I/O is scheduled within a certain amount of time (deadline, mostly for embedded). All of these should be configured as modules in your Ubuntu kernel. You can select an I/O scheduler at boot time by adding “elevator=cfq” (for example) to your “kernel” line in the /boot/grub/menu.lst file. No kernel recompilation necessary. I think you can change I/O schedulers at runtime, but I’ve never done it and I can’t recommend it.
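If you do want to experiment, on 2.6 kernels the active scheduler can also be inspected, and switched per device, through sysfs (a sketch, not a recommendation; the device name is an example and the list of schedulers depends on your kernel config):

$ cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]      # example output; the bracketed entry is the one in use
# echo cfq > /sys/block/sda/queue/scheduler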
Linux doesn’t try to give preference to “foreground” processes, and from reading your comments, I’m not sure this is really what you want to have happen anyway. You’re reporting that certain foreground tasks are causing your background tasks (like audio playback) to suffer. Windows likes to decide which applications are the most important, i.e. its brilliant new “Media Player gets 80% of each timeslice” strategy, but Linux prefers to base its scheduling on how quickly a process yields the CPU of its own accord. In other words, it prefers processes that get on the CPU, do what they need to do, and get off. It will make sure these processes do their quick stuff before letting a CPU hog crunch numbers for a millisecond. After that hog is done, interactive tasks will be scheduled to make sure they get to do stuff they’ve been waiting to do while the hog was running. Once again, to reduce latency and increase responsiveness.
Of the processes you listed, the only interactive one would be cp. It issues a read request from the disk and goes to sleep. When the buffer is filled, it wakes up, and it needs to have timeslice left in order to run. Hence it is given a long timeslice so that it is runnable immediately when woken up. Then it issues a write request and goes to sleep again. It should use very little CPU resources, but it will consume I/O resources.
The mv command should be nearly instantaneous. It only needs to read a single block from the disk (an inode) and update its contents. It doesn’t even need to write the inode back to disk immediately. I can’t believe this is causing any problems on your system.
The md5sum command is a bit trickier. It starts off I/O bound while waiting for the file to be read into memory, then becomes very CPU bound when calculating the checksum. The anticipatory I/O scheduler might let this process hog the I/O because it is burst-like. CFQ would be better in this situation. The time spent waiting for I/O will result in slightly longer timeslices when the process suddenly becomes a CPU hog, but shortly afterwards the process scheduler will adapt and cut its future timeslices down a bit.
The default process scheduler is pretty conservative with increasing and decreasing the timeslices, because it doesn’t want to overreact. It should fare pretty well in this situation, and I wouldn’t recommend any of the out-of-tree process schedulers for your needs.
I hope this points you in the right direction. I think you’re correct in being reluctant to nice various processes in order to fix your problems. In my experience, if you have the right scheduler configuration for your workload, the 2.6 kernel usually does the right thing. But you should really have swap space. It’s really bad to rely on having enough physical memory to fit everything. You’ll end up having processes die on startup and other weird problems.
> Linux doesn’t try to give preference to “foreground” processes,
Well, in each and every instance of 2.6 I’ve seen heavy I/O has slowed down everything else considerably, and if that I/O involves the same drive as the swap partition is on then there’s virtually no chance anything will be swapped in anytime soon, causing all apps waiting for pages from there to stall A LOT. At least that’s what it has looked like to me.
> and from reading your comments, I’m not sure this is really
> what you want to have happen anyway. You’re reporting
> that certain foreground tasks are causing your background
> tasks (like audio playback) to suffer.
You’ve misunderstood me. Audio playback is not a background task, since, even though I don’t communicate with it, it communicates with me.
File transfers, md5sum and compilations, OTOH, are background tasks, since I just start them and then ignore them while they do their thing.
The difference is whether the I/O is with me or internal.
> The mv command should be nearly instantaneous.
No, I have 4 disks and 4 mount points.
(And dear lord how I hate that unix-like OSes think “rename” is the same as “move”! Not only does it have unpredictable results (e.g., you can’t know what “mv foo bar” does unless you run an “ls -d bar” first), it also makes it cumbersome to rename stuff in other places than the current directory.)
> CFQ would be better in this situation.
# grep -e sched.*def /var/log/dmesg
[17179571.508000] io scheduler cfq registered (default)
#
> you should really have swap space. It’s really bad to rely
> on having enough physical memory to fit everything.
Huh?! Certainly 2 GiB of RAM + 1 GiB of swap (which I used to have) can’t fit more than 3 GiB of RAM and no swap (which I now have).
I do like that I can fiddle with a lot in linux, but I hate that I have to.
++
Unpredictable process starvation is one reason why I stopped using Windows.
You ain’t kidding, I’ve recently been noticing some really pathetic behaviour from XP in that regard. The other day I had two Filezilla windows open, both downloading websites, at the same time as encoding some DVD video; unless I upped the priority of the Filezilla processes (or decreased the priority of the DVD encoder), the FTP transfers would drop to 10-15kb/sec.
I never had this problem… Have you considered that the application you’re using might be the problem?
Linux does swap instead of giving up cache sometimes, although I believe it’s supposed to be moving memory cache into swap beforehand because apparently it works out that swapping out some cache is more efficient (I don’t understand how, but it was a big huge argument).
> I never had this problem…
Maybe the load on your system isn’t 4+ normally…
> Have you considered that the application you’re using
> might be the problem?
Now which application would that be? Opera? Gaim? Gxine? Kaffeine? Avidemux? Eclipse? Claws Mail? Bash?
Certainly you must mean a combination of applications, because it should be impossible for one app to freeze an advanced OS like linux, right?
Well, I must admit that it happens less frequently when I’m not running XP in a vmplayer…
> I believe it’s supposed to be moving memory cache into
> swap before hand
I think you’re interested in /proc/sys/vm/swappiness.
> it was a big huge argument
No doubt Morton tried to convince everyone how everything will be so much faster when everything is paged out to disk, while completely ignoring responsiveness where it counts from an interacting user’s point of view.
No, it’s not hard for one application to freeze a typical desktop OS setup. I’ve accidentally written many programs which make it very difficult to even kill them by the time you realize something’s gone wrong.
One method is called fork bombing: while (1) fork();
If you set the right parameters you can protect against this, but most people don’t.
Another method is creating a linked list and adding elements to it as fast as you can without end: The effect is to suck up RAM so fast that by the time you start swapping you’ll have a hard time selecting a terminal window to type kill (you’re able to do it, you just have to be patient).
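The “right parameters” mentioned above are per-user resource limits. A rough sketch of what guarding against both cases looks like (the values are arbitrary examples):

$ ulimit -u 200        # max processes per user, which blunts the fork bomb
$ ulimit -v 1048576    # max address space per process in KB, which blunts the runaway allocation

The same limits can be set system-wide as nproc and as entries in /etc/security/limits.conf.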
If your desktop load is 4+ it might be time to look into a faster system or maybe a second one to offload some of your video tasks onto…
> it’s not hard for one application to freeze a typical
> desktop OS setup
> [wah wah fork bomb wah wah malloc wah wah]
I see no reason what so ever for a desktop OS to not be protected against fork bombs and malloc spikes by default. Normal users shouldn’t have to edit (or even know about) pam limits/configurations.
Anyway, all you people, stop saying that there’s something wrong with my system. It’s quite fine now that I’m not using any swap, which was my point. I don’t understand why you first say that it’ll work better with a swap and then say that if it doesn’t work with a swap then I should get another system instead of disabling the swap. I’ve been running without swap for some time now and my system has been performing much better than it ever did when I used swap.
You’re escalating people trying to give you advice into arguments, fights, and insults.
A great example is how I mentioned fork bombs and allocation spikes as ways to bring a desktop OS to a halt. You then went on to accuse me of complaining about it? I don’t know where you got that from, I was just pointing out how easy it is.
The fact that your system doesn’t function correctly while almost everyone else’s does tells us that:
1. You use your computer in a vastly different way than we do.
or
2. Your computer is not functioning correctly.
If there’s a chance at helping you number 2 will be the problem. I don’t think anyone here but you is going to make your work flow easier. Swap should not make your desktop unresponsive, but maybe you do have a workflow that doesn’t work well with Linux’ swap methods: You’re the first case I’ve heard of in my little world.
Calm down.
> You’re escalating people trying to give you advice into
> arguments, fights, and insults.
Wait a minute there. If I’m parsing that sentence correctly you’re accusing me of making other people argue, fight and insult. Maybe you should take that up with those people instead.
Anyway, show/quote me where I’ve escalated people into anything. I just read all my comments in this thread and I found no such thing.
> A great example is how I mentioned fork bombs and allocation
> spikes as ways to bring a desktop OS to a halt. You then
> went on to accuse me of complaining about it?
Um.. if that’s supposed to be a question then I guess the answer is “no”. Show/quote me where I’ve accused you of complaining about “it” (I can’t even tell what “it” means there, maybe freezing an OS, but I certainly don’t think you’ve been complaining about that).
> I was just pointing out how easy it is.
I know it’s easy.
My main point was that I’m using the same applications as everybody else, whose systems are running fine according to you(?).
Then I tried to make another point, namely that it shouldn’t be possible for one app to freeze a system of linux’ caliber.
Then I implied that I had in fact considered if it’s an application by admitting that there is one particular application that’s, although not causing the problem, at least making it worse.
I’m sorry if I didn’t make myself clear.
> 1. You use your computer in a vastly different way than we do.
I don’t know who “we” are, but the statement is most likely true. You certainly won’t have a load of 4+ all the time if you’re just surfing the web and reading your mails.
> [or] 2. Your computer is not functioning correctly.
>
> If there’s a chance at helping you number 2 will be the problem.
Incorrect (assuming “computer” means “system”-(OS+apps), because if “computer”=”system” then you’d be saying A=>B|A, which would be obvious and silly). Even if number 2 isn’t the problem there’s still a chance at helping me. In fact, I already told you a solution, namely to turn off the swap.
Besides, I wasn’t even looking for help. I only tried to tell people of my experiences and how I fixed my problems.
Now, I don’t mind receiving help even when I’m not asking for it, but I do think I have the right to tell helpers when their help isn’t helping. If you think I’ve been rude or something, which wasn’t my intention, then just show me exactly what I did wrong and if you’re correct then I’ll apologize and try my best not to do the same mistake again.
> Swap should not make your desktop unresponsive
You’re right, it shouldn’t, but it does. And my system is not the only one I’ve seen this happen on. I’ve seen the same problems all over the place, although on a smaller scale since most other people I know don’t push their systems as much as I push mine. The main difference is that many of my friends and workmates don’t even notice when there’s a 100 ms gap in the audio, or if the pointer freezes for 300 ms, or if the audio is 100-200 ms out of sync with the video, or other similar small imperfections which I find quite annoying. If you don’t mind those kind of problems then good for you. I wish I didn’t.
> Calm down.
What could possibly make you think I’m not calm?
Wait a minute there. If I’m parsing that sentence correctly you’re accusing me of making other people argue, fight and insult. Maybe you should take that up with those people instead.
Anyway, show/quote me where I’ve escalated people into anything. I just read all my comments in this thread and I found no such thing.
> it’s not hard for one application to freeze a typical
> desktop OS setup
> [wah wah fork bomb wah wah malloc wah wah]
That’s called “putting words” into someone’s mouth. Or, I suppose more accurately, putting motives behind one’s words. You accused me of complaining when I was clearly using an informative tone. You thereby added emotion where there wasn’t any: a good tactic to “get someone’s goat” or create tension.
I don’t know who “we” are, but the statement is most likely true. You certainly won’t have a load of 4+ all the time if you’re just surfing the web and reading your mails.
I don’t have a load of 4+ editing large images.
I don’t have a load of 4+ doing minor video edits.
I don’t have a load of 4+ building a system library while editing code and occasionally building and running it.
A load of “4+” means that for the last hour you’ve been running 4 threads which are all rarely in wait mode. That’s 4 threads that by themselves would set your CPU monitor to 100%…
More likely though, because you’re using a desktop, you have about 8 threads that are doing a bit less but are still doing heavy tasks. So, unless you’re running multiple folding (or something similar) threads I fail to see why you are running with a load average of 4+: This is not only not normal, it’s extremely rare.
Now, I don’t mind receiving help even when I’m not asking for it, but I do think I have the right to tell helpers when their help isn’t helping. If you think I’ve been rude or something, which wasn’t my intention, then just show me exactly what I did wrong and if you’re correct then I’ll apologize and try my best not to do the same mistake again.
What you said that pissed me off:
> [wah wah fork bomb wah wah malloc wah wah]
English has a system for excluding areas of text in a quote: It’s called an ellipsis. I suggest you make use of it instead of inflammatory things like “blah” and “wah” which attack the quality of the words they replace.
Seriously, what are you doing to get a load average of 4+ all the time…
> > Anyway, show/quote me where I’ve escalated people into
> > anything. I just read all my comments in this thread and I
> > found no such thing.
> > > it’s not hard for one application to freeze a typical
> > > desktop OS setup
> > > [wah wah fork bomb wah wah malloc wah wah]
>
> That’s called “putting words” into someones mouth.
No, it’s not, or at least it’s quite debatable. I hadn’t even considered that “wah” might be something negative, and I don’t think I’ve ever seen “wah” used as complaining. Anyway, I meant “[something unimportant] fork bomb [something unimportant] malloc [something unimportant]“.
And c’mon, “wah” doesn’t have a good definition, and dictionary.com doesn’t even recognize it. You can’t go around making accusations based solely on your untold interpretation of something that isn’t even properly defined. Especially not in an international place like this, where people have vastly different cultural backgrounds and language skills. (English is not my first language, and not even my second.)
So, I don’t think it was wrong of me (read: adding emotion) to use “wah” instead of “…” (although I agree that the latter would have been better), and I hope you now agree, or at least understand that I meant no harm.
> what are you doing to get a load average of 4+ all the time
Usually 1-2 avidemux(es) processing some video, a kaffeine recording and/or showing dvb, a vmplayer running a fairly memory- and occasionally i/o-intensive (communicating with an smbd) application, an eclipse, a java vm running a test instance of a hotel chain booking/admin system, an opera (with 200-230 tabs open), a claws mail, a tomcat, an apache, gaim, a bunch of terminals and often gxine, rhythmbox or xmms.
Cache is free, really. […] I’ve never heard of a limit to its usefulness
Until recently, you were 100% correct. But now we have ReadyBoost, a disk cache in flash memory, and this comes perilously close to destroying our theory that caching is always good.
Disks are moderately fast for sequential access and horrendously slow for random access. Modern DRAM is a couple orders of magnitude faster than a disk in sequential access and doesn’t get much slower in random access, blowing the disk away by several orders of magnitude. The more disk storage you can cache in DRAM, the better. Win-win, just as you said.
Flash memory, on the other hand, is not that much faster than a disk at sequential access. With a high-end disk, or an array of disks, it might actually be slower than the disk! So implementing a disk cache in flash memory is tricky. On some systems, you’ll want to bypass the cache and go straight to disk if the request is sequential enough. On others, you don’t get a significant performance improvement by reading sequential data from the cache. But for random access, the flash cache still blows the disk away. So you want to try to keep pages that are likely to be read randomly in the cache and evict pages that have been read sequentially to make room for pages that tend to be read more randomly.
Caching works under the assumption that higher-capacity devices will always be slower than smaller high-performance devices. But flash is so slow for its capacity, and disks are remarkably quick for their capacity at sequential reads, that it becomes a problem. There’s just not enough of a performance delta between flash memory and disks in many situations.
I’m not a big fan of the flash memory for disk caching either, it seems like a good way to burn out a flash chip for tiny gains if any.
I would be a fan of a flash chip to write your memory out to and read back from on boot up (from hibernate). But if what you’re saying is true they wouldn’t be very good for this because that would be heavy sequential reading.
As it is, coming back from hibernate takes longer than booting a fresh OS. It’s still worth it to get my memory state back, but it’s still quite slow on a 5200 RPM disk with 2GB of RAM!
I think the question is even more complicated than you mention. What about a notebook on battery power? Surely a flash read is much cheaper than spinning up a disk and reading. You’d likely factor that in, as a speed and power consumption issue.
Suppose you have 16GB of memory and your apps only need 1GB; then any intelligent operating system would use the remaining 15GB as a cache, and make it available to your applications when really needed.
It will load up your entire hard drive into RAM. Twice if necessary.
Just yesterday the codeproject daily email called my attention to the same website – it’s worth a glance to laugh at the incompetence of many programmer wannabes…
http://www.codinghorror.com/blog/archives/000781.html
Summary – ask your interviewee to write a Simple Program in front of your face, an “Oral Exam” – many will apparently fail !
Summary – ask your interviewee to write a Simple Program in front of your face, an “Oral Exam” – many will apparently fail !
Pretty interesting reads – and that was much more interesting than this article! I think this should have been submitted to be honest.
Personally, I would prefer to ask people to write some pseudo code to solve a problem rather than asking for a particular language, and they can feel free to describe as they go so you’ve got an idea of their thinking (they can communicate!) and they haven’t just seen this somewhere before.
This also gets around the problem of the candidate who is joined at the hip to a particular language, and would be horrified if you asked them to solve a real problem with anything else, be it Ruby, Python, Bash etc. They must be able to take real problems and solutions raw in their mind, and apply them to a multitude of languages and environments – even if that means learning and getting their reference books out. I know I don’t keep exact syntax floating around in my head all the time, and I wouldn’t much want to work for someone who was picky about it. This is hinted at here:
http://tickletux.wordpress.com/2007/01/24/using-fizzbuzz-to-find-de…
On occasion you meet a developer who seems like a solid programmer. They know their theory, they know their language. But once it comes down to actually producing code they just don’t seem to be able to do it well.
So they know their language………but they can’t think about the problem to be solved, and how to solve it. That’s what’s being said there.
It was a great read. The paper was a good read as well, I enjoyed it.
Some already do just what you describe though. And it’s probably fairly effective too. And I believe the solution to finding language psychos is asking this question: What’s your favorite programming language, and why? Then you work them into ranting about it or admitting to problems in the language.
It’s the ones who can’t identify problems in their favorite language who would scare me. Those and the ones who swear it’s the 42.
What I’ve never heard of is some way of detecting the ability to see around the corner, to be able to discern which designs lead to a morass of corner cases, a combinatorial ka-Boom.
The underlying principle is “Transitive Closure” – to choose at the start, a design you’ll be able to FINISH !
To envision designs with small closures is of course math-like, this agrees with the #2 “Lionel Barret” reply in the article you cited, the math filter is a good filter.
As to what’s going on “Under the Hood”, if you know of a better university-level-layman’s book than Marcus’s “The Birth of the Mind”, please enlighten me.
http://www.amazon.com/Birth-Mind-Creates-Complexities-Thought/dp/04…
Even the two-star-reviewer Suvro Ghosh seems to not have a better suggestion, at this level.
That is a great article. The sentence on recursion reminds me of a programmer that worked for me a few years ago.
He was supposed to be a senior level programmer from Novell (probably why they laid him off). The task at hand was to grab all of the XML files out of a given directory and all its subdirectories, parse them for certain text elements, and store those elements into a database.
His solution was to place a manifest file in the top level directory that contained a list of all the files that should be parsed (dunce). So, what happens when I add new files? Who is going to update that manifest file? How can you assure that the manifest file is correct each time?
If he knew anything about recursion, he could have simply recursively read the contents of each directory in the tree, grabbed the appropriate files, parsed them, and stored the data regardless of where the files were.
I booted him off the project and wrote the thing myself.
Interesting read but I’d take it with a grain of salt.
People applying for a job are under a lot of pressure and some guys can’t do _anything_ under pressure. They’re really smart (probably a lot smarter than me) but their minds just turn blank in all sorts of exams. Really bad disposition this…
Thanks to the local “Discovery Channel Store” for putting this on their adventure shelf – I think it is an undervalued book at $5.75 used-very-good:
http://www.amazon.com/Deep-Survival-Who-Lives-Dies/dp/0393326152
Author Gonzales details, by riveting case histories, that indeed some folks are able to control their emotions under pressure, some not. We know NASA isn’t always successful in selecting for this. IBM has taken a position against any genetic discrimination whatsoever.
The context being, rather than taking an inability to perform under pressure with a grain of salt, one ought to meditate on what-if this happens on a Real Mission.
I cannot manage to find a link to it, but there was a PBS thing from the BBC called something like “Brain Sex” in which they monitored testosterone in real-time during a go-kart race. The winner was Jamie, a successful investment banker, with a remarkably even hormone level compared to the ups-and-downs of a rival – not having the emotional right stuff can’t be dismissed with a grain of salt.
The cache-table is nice, and for hauling in a BIG program or DLL it’s accurate to summarize it in terms of sustained throughput. However, in the case that you get a cache miss on a SMALL dll or other piece of information, your total time is mainly due to latency, which as the table shows is about 30ms for the HDD and about 30ns for the DDR – that’s a factor of a MILLION, not merely 37, even more of a diff for the inner caches !
Because Vista is a bloated OS. Period.
To be fair to RafaelRR’s comment, the article is misleading by its very nature. Vista uses excess memory as cache.
I don’t think it’s deliberately misleading, but it does not answer the question “why does Vista use so much memory?”
I don’t think it’s deliberately misleading, but it does not answer the question “why does Vista use so much memory?”
“Vista is trying its darndest to pre-emptively populate every byte of system memory with what it thinks I might need next.”
That’s about as clear an answer as you can ask for.
“Vista uses excess memory as cache.”
And what’s new about that? Linux has a very aggressive cache subsystem.
Well, everything that Microsoft implements MUST BE “innovative”, isn’t it?
You have to stop thinking of system memory as a resource and start thinking of it as a cache. Just like the level 1 and level 2 cache on your CPU, system memory is yet another type of high-speed cache that sits between your computer and the disk drive.
You don’t just cache everything like Vista apparently does because that’s inefficient and leads to excessive swapping on low-memory systems. That’s exactly the complaint I have heard most often among Vista users that I know.
Oh and unix arguably does what the author describes. The difference being that memory is used for applications, not to cache everything to hide the os’ speed deficiencies. Who wants to have their OS thrashing when they launch an application because all memory is used as a cache for UI eyecandy ?
Unix (well Linux at least) doesn’t keep track of which pages are used for applications versus those used for data or X or whatever. Thus, it is just as vulnerable to having recently closed apps being removed from the cache as Windows is.
I remember reading the same thing during the betas, and so far I’m in two minds regarding Vista. I use Mac OS X and it does the same thing: it uses all memory for cache and hands memory back to applications when required.
In Vista this seems to be sort of the same thing. For example, one of my Vista machines is a 3GHz PC with 1GB of RAM and a 128MB PCIe graphics card. The RAM usage is usually around the 60-80% mark, and I haven’t really been stressing it yet, and already there is a lot of disk thrashing. I say I’m in two minds because I know there are a lot of background tasks running, such as indexing, but so far 60-80% memory usage hasn’t made Vista any faster; it is in fact visibly slower than XP with 20% memory usage.
The same goes for the new network stack: on a gigabit network sometimes the files zip across, and other times they will barely copy faster than 3-4MB/sec. Again, XP is fine, and this is using Vista’s own Microsoft drivers for the NICs etc.
I think it will take Microsoft a few more good years to get Vista out of this beta stage. Remember when OS X came out? Now Tiger is running great on Intel and PPC. I’m waiting for a while before I upgrade my PC to Vista. Maybe a year, because XP runs great.
vista on my laptop is not using anything like full ram (1GB)
look at this screenshot
http://www.windows-noob.com/forums/uploads/monthly_02_2007/post-1-1…
Yes, you can clearly see Firefox (multiple tabs), IE (2 windows), XChat, Task Manager, Windows Live Messenger and the sidebar running, yet only 572MB of 1GB is used.
56% ram usage.
am I missing something ?
edit: i see someone else posted the same ‘topic’ as me, but i can’t change the topic via edit. Oh well.
cheers
anyweb
True enough, but then you ARE swapping 1.1GB. Sheesh.
and on my linux box hosting the screenshot (512mb ram) i have this
[anyweb@www ~]$ free
total used free shared buffers cached
Mem: 514856 509308 5548 0 34636 277744
-/+ buffers/cache: 196928 317928
Swap: 1048568 65772 982796
so how does that compare to Windows Vista memory wise ?
Sheesh.
You’re really not going to compare 1.1GB vs. 65MB are you?
I think sound skipping problems have more to do with the kernel scheduler. Ever tried running the same distro with the 2.6.xx.mm (multimedia) kernel and the regular one that came with your distro? It makes all the difference in the world… And the delay when you press the stop (or whatever) button in the GUI music player has more to do with the amount of buffering the app is doing.
I don’t care WHAT people say. Any OS that eats up 512MB for *ANY* reason, is a steaming pile of Brie. MacOS X doesn’t do this. Why should Vista?
Remember the days when an OS could fit into ROM? You know, like the Atari ST? Ah, those were the days… when people coded OS’s and applications EFFICIENTLY! When “TIGHT” and “FAST” were the norm. Now that we have systems with gigs of RAM and 100’s of gigs of HD space, people just code any ol’ way they like and simply don’t care anymore.
The only thing that should have increased exponentially should have been processor speed. HD and RAM size should only have increased when there was absolutely NO way to make certain games/apps/filetypes fit effectively.
I dare anyone to write an even fairly modern OS (basic functionalities expected today) that can fit in 192K (or even up to 512K) of ROM nowadays. Bet people would burst a main artery in their brain, even trying!
Closest I’ve seen, recently, is RISC OS (it’s in ROM at least. Maybe not 192K, but…). I think SkyOS or Minix 3 could possibly come in 2nd. Haiku in 3rd, MacOS X in 4th and… Vista…
AHAHAHAHAHAHAHA!!!!
Don’t know if my calculator has that many digits for position… :)
Any idea why they call Microsoft, Microsloth? Because it’s true… they’re lazy and sluggish and lumbering! They couldn’t write an efficient, secure version of Windows, if Bill Gates’ or Steve Ballmer’s life depended on it!
They copy after MacOS X and they *STILL* can’t do it!
Sad.
And those OSes had about 1% or less of the functionality that modern OSes have. The bloat is the cost of functionality, flexibility and stability. I think most people will take the bloat to get those things.
The bloat is the cost of functionality, flexibility and stability.
I was with you until you said the third one.
You can’t seriously compare the old OSes with no memory protection, cooperative multitasking and no security with the new OSes which have all that and more?
I think people have a rosy view of the past. In much the same way that people see the 50s as some magical time when everyone lived in Republican happiness. The reality was quite different.
You have a good point. I remember Windows 3.1. Every few minutes a blue screen would come up. Sometimes you could continue, sometimes you couldn’t.
I was thinking a little further back, like my TI-99/4A and the Commodore. They seemed very stable. Probably weren’t!
Your attacks don’t merit answers, but on your technical comments…
I don’t care WHAT people say. Any OS that eats up 512MB for *ANY* reason, is a steaming pile of Brie.
Windows checks to see what memory it can use. It uses what’s available to it, but in such a way that it can be vacated immediately for any user-initiated process. What’s wrong with that?
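To make the “immediately vacated” idea concrete, here is a toy sketch of that kind of cache: an LRU that borrows otherwise-idle memory but hands it back the moment anything else asks. This is only the general principle, not Vista’s actual implementation; the class name and the budget are invented for illustration.

# Toy sketch: a cache that opportunistically fills idle memory but can be
# told to vacate any amount of it immediately (not real Vista/SuperFetch code).
from collections import OrderedDict

class VacatableCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes           # idle memory we are allowed to borrow
        self.used = 0
        self.entries = OrderedDict()         # key -> (data, size), oldest first

    def put(self, key, data):
        size = len(data)
        if size > self.budget:
            return                           # never worth caching
        if key in self.entries:              # replacing: drop the old copy first
            self.used -= self.entries.pop(key)[1]
        self.entries[key] = (data, size)
        self.used += size
        self._shrink_to(self.budget)         # stay inside the borrowed budget

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)    # mark as recently used
            return self.entries[key][0]
        return None                          # miss: caller goes to the disk

    def vacate(self, bytes_needed):
        # A real application needs memory *now*: drop least-recently-used
        # entries until at least bytes_needed has been handed back.
        self._shrink_to(max(0, self.used - bytes_needed))

    def _shrink_to(self, limit):
        while self.used > limit and self.entries:
            _, (_, size) = self.entries.popitem(last=False)
            self.used -= size

The point being: the cache only ever occupies memory nobody else has claimed, and a real allocation can always push it out, so “all my RAM shows as in use” is not the same thing as “I have no RAM left”.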
Remember the days when an OS could fit into ROM? You know, like the Atari ST?
Back when we only needed to run one task at a time?
When “TIGHT” and “FAST” were the norm.
I kind of miss the days when I could type in LOAD “*”,8,1, then go make lunch and finish it before my program had loaded.
I dare anyone to write an even fairly modern OS (basic functionalities expected today) that can fit in 192K (or even up to 512K) of ROM nowadays.
See MINIX, Wheels, Windows CE (if you take out any unneeded options in your compile) and others. I’d mention open-source projects, but the size of open-source projects increases exponentially with the number of people who think they are the most important person working on it.
Closest I’ve seen, recently, is RISC OS (it’s in ROM at least. Maybe not 192K, but…). I think SkyOS or Minix 3 could possibly come in 2nd.
As awesome as SkyOS is, I don’t think it’s quite a mini-OS. SkyOS’s kernel includes several complex, code-heavy options, and it’s a 32-bit operating system, requiring the attendant code size and memory.
“Windows checks to see what it can use. It uses what it has available to it, but uses it in such a way that it can be immediately vacated for any user-initiated processes. What’s wrong with that?”
If you have a system with 512MB of RAM and Vista takes all of it (which it seems as though it would)… what’s wrong with that? Sheesh… to think we even NEED to have THAT much memory in computers these days! Ah, the days when *512K* was sufficient…
“Back when we only needed to run one task at a time?”
It was never ‘we only NEEDED to run one task at a time’. It’s that TOS/GEM was only capable of that because of the way it was written. After all, do recall that the Amiga was a *multitasking* computer, so the ’80s were not just ‘the single-tasking era’. It was all down to the OS at the time.
“I kind of miss the days when I could type in LOAD “*”,8,1, then go make lunch and finish it before my program had loaded.”
Was that with or without the Epyx FastLoad cartridge? :) But seriously, as slow as computers were back then (the Apple IIs totally blew the C64 away in disk load speed… we had both, in classes at Pasadena High School), they were more fun to use and technology was EXCITING! From 1MHz to 4.77MHz, from 8MHz to 16MHz, etc. When demos would push hardware to levels you never imagined, when voice synthesis was amazing on a C64, etc.
How things have changed… and gotten slower and sloppier, all the while.
An OSNews comments post about Windows that’s grounded in reality instead of popular mythology? There’s something you don’t see every day.
I’ll waste my own memory, thank you very much.
It’s a feature.
And the most important rule of cache design is that empty cache memory is wasted cache memory.
Sounds more like an excuse for bloated code to me.
This guy thinks we are all idiots.
The raw performance of Vista is hardly at the level of XP. Deliberately drowning it in RAM for caching purposes makes it look and feel better, but it’s just a kind of compensation for sluggish performance.
In fact, the best of both worlds would be to give customers XP performance with Vista’s caching mode. I don’t think it would be technically difficult to enable, but it probably won’t happen, for marketing reasons.
I find it rather bizarre that people are treating this as evidence of bloat or poor design – especially since it appears to be specifically designed to address problems with virtual memory in pre-Vista versions of Windows. And the underlying principles aren’t really that controversial – I remember reading an almost-identical description of the behaviour of BeOS’s VMM nearly a decade ago (in the BeOS Bible, I believe).
Here’s an example of the problem that’s quite easy to reproduce on XP. Open Firefox, load a half dozen or so pages in tabs, then minimize it, walk away for half an hour, then come back and restore the Firefox window. In most cases, I get at least 20-30 seconds of disk-thrashing (and unresponsiveness from FF) while Windows swaps it back into memory.* And that’s on systems where the amount of RAM is significantly more than enough for Firefox to have been left entirely in memory.
The XP VMM system (so far as I understand it) operates under the assumption that users always want as much memory free as possible, so it interprets a minimized app as less-important and swaps it out to disk to free up memory (even in situations where the amount of RAM makes it unnecessary). That probably was sensible a decade ago, when most people had much less RAM and it was quite easy to have more programs running than could fit in memory at once. But today, that strategy just causes more problems from the excessive swapping.
In contrast, the main reason I initially noticed that behaviour of XP was that I run XP side-by-side with a BeOS machine (using a KVM). Even with half as much RAM in the BeOS box, I’ve never seen the same sort of behaviour with the BeOS Firefox port, because, as I understand it, BeOS doesn’t swap application memory out to disk until the physical RAM is actually needed for something.
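Here is a toy model of the two policies being contrasted, just to make the difference explicit. This is my reading of the behaviour described above, not the real code of either VMM, and the page counts are invented:

# Toy model: "eager" trims a minimized app right away; "lazy" only pushes
# pages out once physical RAM is actually exhausted.
def pages_swapped_out(policy, ram_pages, app_pages, app_minimized, other_demand):
    """How many of the app's pages end up on disk under each policy."""
    if policy == 'eager' and app_minimized:
        return app_pages                      # minimized == unimportant, trim it now
    shortfall = (app_pages + other_demand) - ram_pages
    return max(0, min(app_pages, shortfall))  # only evict under real pressure

# A Firefox-sized app, minimized, on a machine with RAM to spare:
print(pages_swapped_out('eager', ram_pages=256000, app_pages=30000,
                        app_minimized=True, other_demand=100000))   # 30000
print(pages_swapped_out('lazy',  ram_pages=256000, app_pages=30000,
                        app_minimized=True, other_demand=100000))   # 0

Under the eager policy the whole app is sitting on disk by the time you restore it, which matches the 20-30 seconds of thrashing described above, even though physical RAM was never actually short.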
There was a great thread in the Ars Technica forums back in ’05 that discusses the topic in much more technical detail (and, probably, more accurately) than I’m capable of:
http://episteme.arstechnica.com/eve/forums/a/tpc/f/48409524/m/64300…
Somewhat OT/anecdotal: before I had any understanding of the “why,” I had noticed that I seemed to get longer battery life on an old ThinkPad when running BeOS than when running Windows. I thought I was going crazy or seeing some placebo effect, until the “Ah-HA!” moment of reading that thread on Ars. I had never mentally connected the two factors before, but it made sense afterwards that XP’s VMM behaviour would necessarily result in the hard drive being accessed more, and would therefore drain a laptop battery faster than an OS with a more sensible VMM.
Of course, I’m guessing that the other ways in which Vista is significantly heavier than XP will likely negate the improvements in the VMM, but that’s another topic.
*It’s possible this has been fixed in recent Firefox builds; I’ve made a habit of never minimizing it in Windows, so I haven’t noticed one way or the other.
Edited 2007-03-01 11:46
Of course! It’s a FEATURE not a BUG!
By using all the system’s resources, they are more likely to push the system into relying on virtual memory, which is less reliable. It’s a foolish thing to do.
XP has so much more going for it than Vista.
I’m still using Windows 2000. Except for ClearType and native, official 48-bit addressing, I’m doing okay. I slipstreamed an install disc, so I have Windows 2000 booting from a 400GB drive. I can use GDI++ to help with some of the font-rendering issues.
I cannot buy Vista, ever; I am constantly changing hardware. Windows is not an operating system that can be installed once and then repaired easily, or well. They hide too much from the user to repair it efficiently. This is the cost of a DRM operating system.
A Windows installation is like a newborn baby that never gets changed!