Linked by Thom Holwerda on Tue 14th Aug 2007 17:55 UTC, submitted by tudyparghel
Windows "As we saw in part 1 of this series, large applications and games under Windows are getting incredibly close to hitting the 2GB barrier, the amount of virtual address space a traditional Win32 (32-bit) application can access. Once applications begin to hit these barriers, many of them will start acting up and/or crashing in unpredictable ways which makes resolving the problem even harder. Furthermore, as we saw in part 2, games are consuming greater amounts of address space under Windows Vista than Windows XP. This makes Vista less suitable for use with games when at the same time it will be the version of Windows that will see the computing industry through the transition to 64-bit operating systems becoming the new standard. Microsoft knew about the problem, but up until now we were unable to get further details on what was going on and why. As of today that has changed."
Thread beginning with comment 263514
no point comparing memory
by Yamin on Tue 14th Aug 2007 22:32 UTC
Yamin Member since:
2006-01-10

There is little point in saying 'Win2k used 256 MB' or 'XP used 512 MB'...

Free memory is wasted memory. Who wants waste? I'd rather the OS/applications make use of my free memory through caching or preallocating than waste it. Of course, it should deallocate it if a new application needs it ;)

This is why judging memory usage by what you see in the Windows task manager is not the most useful thing to do. It can serve as a guide, but it is easily misinterpreted.

Rather, the effectiveness of memory usage should be judged by how many applications you can run, how snappy the system feels, and how large a dataset you can load...

Reply Score: 1

RE: no point comparing memory
by Cass on Tue 14th Aug 2007 23:33 in reply to "no point comparing memory"
Cass Member since:
2006-03-17

Free memory is wasted memory. Who wants waste? I'd rather the OS/applications make use of my free memory through caching or preallocating than waste it. Of course, it should deallocate it if a new application needs it ;)


I wouldn't; I'd always like some free memory to be available to the system for when it's required, without it having to scan for stuff to free up. That scanning results in poorer performance. All systems I have seen with high scan rates, freeing pages like there's no tomorrow, run like dogs. Free memory is good.

Reply Parent Score: 2

PlatformAgnostic Member since:
2006-01-02

That's not how virtual memory works... at least not on Windows. The SuperFetch pages are backed by read-only data, so there's no cost to "freeing" them: they are simply filled with the new on-disk data, and the old data vanishes just as the zeroes would have if the page were unused. And you don't have to scan for a page to free: the SuperFetch pages are kept on linked lists, just like free pages, and they are treated exactly like their free counterparts.

The only place where there could be an issue is in servicing a user application's demand to materialize a new page. The OS must hand out a zeroed page to prevent information disclosure between different processes, but there may not be a large number of pre-zeroed pages on hand. On the other hand, the Windows Memory Manager auto-tunes the number of zeroed pages it leaves around, so this isn't really a problem in practice.
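A toy model may make this concrete. The names and structures below are invented for illustration (this is nowhere near the real Windows Memory Manager code), but the shape of the argument is the same: standby/SuperFetch pages sit on a linked list exactly like free pages, and repurposing one costs no more than zeroing a free page would:

/* Toy model of zeroed/free/standby page lists. Names are invented;
 * this is not Windows source. */
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

typedef struct Page {
    struct Page *next;
    unsigned char data[PAGE_SIZE];
} Page;

static Page *zeroed_list;   /* pre-zeroed, ready for demand-zero faults */
static Page *free_list;     /* contents stale, must be zeroed first */
static Page *standby_list;  /* cached read-only file data (SuperFetch) */

static Page *pop(Page **list)
{
    Page *p = *list;
    if (p)
        *list = p->next;
    return p;
}

/* Service an application's demand for a new page. Taking a standby page
 * is no dearer than taking a free one: its cached contents are simply
 * overwritten, exactly as stale garbage on a free page would be. */
Page *alloc_zeroed_page(void)
{
    Page *p = pop(&zeroed_list);        /* cheapest: already zeroed */
    if (p)
        return p;
    p = pop(&free_list);
    if (!p)
        p = pop(&standby_list);         /* cached contents just vanish */
    if (p)                              /* zero to prevent information */
        memset(p->data, 0, PAGE_SIZE);  /* disclosure between processes */
    return p;                           /* NULL: genuinely out of pages */
}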

Reply Parent Score: 3

RE: no point comparing memory
by Steven on Wed 15th Aug 2007 00:13 in reply to "no point comparing memory"
Steven Member since:
2005-07-20

This is why judging memory usage by what you see in the Windows task manager is not the most useful thing to do. It can serve as a guide, but it is easily misinterpreted.

There's nothing wrong with using the task manager to determine how much memory is in use. You just have to be smarter than the screen you are looking at so that you have some idea what it means.

If you check under "Physical Memory", the place where it lists your physical memory size... you know, where you would think you are supposed to look... that usage number means nothing. It's a waste of space.

Commit Charge, however, tells you exactly what is going on.

XP says "Total Commit Charge 412MB"... that means 412MB of memory has been committed by "stuff" (programs and their data), not caches.

Meanwhile, my stupid "Physical Memory" thing, the one on the top right, says "System Cache usage 728MB"... which means there's obviously a lot of caching going on for something. But the cached files don't show up in the commit charge; only memory that's actually doing something shows up there.
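For what it's worth, the same counters Task Manager draws from can be read programmatically; a small sketch using the documented GetPerformanceInfo API (psapi.h, link with psapi.lib) prints the commit charge and system cache directly:

/* Print the counters Task Manager shows: commit charge, system cache,
 * and physical memory. Link against psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    double page_mb;

    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        fprintf(stderr, "GetPerformanceInfo failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }

    /* All counters are reported in pages; convert to megabytes. */
    page_mb = (double)pi.PageSize / (1024.0 * 1024.0);

    printf("Commit charge:   %.0f MB (limit %.0f MB)\n",
           pi.CommitTotal * page_mb, pi.CommitLimit * page_mb);
    printf("System cache:    %.0f MB\n", pi.SystemCache * page_mb);
    printf("Physical memory: %.0f MB free of %.0f MB\n",
           pi.PhysicalAvailable * page_mb, pi.PhysicalTotal * page_mb);
    return 0;
}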

Now, yes, arguing over "active memory usage" is pointless, since there are so many possible caching methods; Linux, BSD, and Windows all do it completely differently, resulting in wildly different usage patterns for exactly the same workload.

Arguing over the total commit charge in various versions of Windows is justifiable, however. If that number increases, it is not caching; it is plain and simple program bloat.

Yes, free memory may be wasted memory from a caching perspective, but program bloat is waste of a far worse kind: you can't ever get it back.

Reply Parent Score: 3