Many people have complained about Firefox’s memory use. Federico Mena-Quintero has a proposal for reducing the amount of memory used to store images, which, in his proof-of-concept code, “reduced the cumulative memory usage […] by a factor of 5.5.”
He is doing a great job in finding ways of optimizing, reducing memory use, and speeding up parts of GNOME as well.
I read his blog on http://planet.gnome.org/ about this and it was interesting. Great work Federico.
Yeah. It’s a shame that among thousands of open source developers, only one or two seem interested in performance or elegant code! Perhaps that’s because most hackers are running 3 GHz, 1GB RAM boxes and don’t notice, but the vast majority of PCs in the world are around the 1 GHz range (see the tens of millions in businesses).
It’s hard to promote Linux and open source software such as Firefox and OpenOffice.org when they’re consistently slower and heavier than the alternatives. Fingers crossed that other developers will start to clean up rather than over-engineer! (Parsing a 300k XML file just to set the volume in GNOME? Blimey!)
Most of these folks have day jobs; maybe you should just avoid software in general.
Very little slow code will cause a noticeable loss in speed. There’s an old adage: early optimization is the devil, or something along those lines. It’s because 90% of your program’s time will probably be spent in 1% of your code.
It does surprise me that Firefox stores all of these as pixmaps on the X server. My guess is that it is in fact for performance… I’d imagine the people who worked on that said “well, RAM is cheaper than clock cycles.” What they need to do, IMO, is set a max X server memory usage so that you can view pages with 50MB of JPEGs without swapping (because swapping is slower than decompressing). Say, 100MB of X memory, and then get smart about what you don’t store on the X server. That’s a lot more work, but I think it’s worth it for a lot of users.
Where I work, we have thin-client servers that run Firefox over the network. Storing them in the client-side X server, if that’s what you’re commenting on, probably makes Firefox a whole lot faster than the alternative.
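The “byte budget plus smart eviction” idea two comments up can be sketched roughly like this (a minimal sketch only; the names, the structure, and the 100MB cap are illustrative assumptions, not anything from Firefox’s actual source):

    /* Keep decompressed pixmaps under a fixed byte budget and evict the
       least recently used ones first.  An evicted image would simply be
       re-decompressed from its source data the next time it is drawn. */
    #include <stdlib.h>

    #define PIXMAP_BUDGET (100u * 1024 * 1024)  /* hypothetical 100MB cap */

    struct pixmap {
        struct pixmap *prev, *next;  /* LRU list links; head = most recent */
        size_t bytes;                /* size of the decompressed data */
        unsigned char *data;         /* decompressed pixels, or NULL */
    };

    static struct pixmap *head, *tail;
    static size_t cache_bytes;

    static void detach(struct pixmap *p)
    {
        if (p->prev) p->prev->next = p->next; else head = p->next;
        if (p->next) p->next->prev = p->prev; else tail = p->prev;
        p->prev = p->next = NULL;
    }

    static void push_front(struct pixmap *p)
    {
        p->next = head;
        if (head) head->prev = p; else tail = p;
        head = p;
    }

    /* Call whenever a pixmap is drawn: mark it most recently used. */
    void pixmap_touch(struct pixmap *p)
    {
        detach(p);
        push_front(p);
    }

    /* Add a freshly decompressed pixmap, evicting old ones to stay
       under the budget. */
    void pixmap_add(struct pixmap *p)
    {
        push_front(p);
        cache_bytes += p->bytes;
        while (cache_bytes > PIXMAP_BUDGET && tail && tail != p) {
            struct pixmap *victim = tail;
            detach(victim);
            cache_bytes -= victim->bytes;
            free(victim->data);  /* drop the pixels, keep the metadata */
            victim->data = NULL;
        }
    }

The real work the commenter alludes to is in deciding which images are cheap enough to re-decompress; an LRU list is just the simplest reasonable policy.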
There’s an old adage: early optimization is the devil, or something along those lines.
It’s “premature optimization is the root of all evil.”
It’s because 90% of your program’s time will probably be spent in 1% of your code.
It’s 90% in 10%, not 90% in 1%. Obviously, though, it’s the general idea that matters.
-bytecoder
In my experience, it’s 90% in 1% :-p.
But it certainly is the idea, and it definitely varies.
AMEN!
Well, one of the big things in Linux is thin clients. So you’re going to cache all images in the client and upload them to the X server when they need to be drawn.
Yes, that is going to cause *a lot* of network traffic.
Keeping compressed images in memory is nice. But do it in the X server, please!!
Exactly what I was thinking!
I just read the proposal and wondered how large the growth in bandwidth requirements would be if client-side buffers are used and Firefox is running on another machine. It’s seldom that someone wants to use Firefox over ssh or networked X, but client-side buffering will result in growth of bandwidth use and so reduce the usability of Firefox (and other applications using client-side gfx buffers) over networked X.
If only other operating systems supported mmap. I guess that Firefox could do a special version for Linux. Then it would just be a matter of decompressing on the fly from mmapped images (stored in the cache folder), as suggested in this article.
OSX supports mmap too.
mmap is a standard syscall on Unix. It’s part of POSIX. You can even emulate it on Windows – http://www.genesys-e.de/jwalter/mix4win.htm
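On any POSIX system, the mmap idea above looks roughly like this (a sketch; decode_jpeg and the cache path are hypothetical stand-ins for a real decoder and Firefox’s cache layout):

    /* Map a compressed image file from the disk cache and decode it on
       demand, so the kernel pages the compressed bytes in and out
       instead of the application holding a decompressed copy. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    unsigned char *map_cached_image(const char *path, size_t *len)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return NULL;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }

        /* Read-only, file-backed mapping: under memory pressure the
           kernel can simply drop these pages and re-read them later. */
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);  /* the mapping stays valid after close */
        if (p == MAP_FAILED)
            return NULL;

        *len = (size_t)st.st_size;
        return (unsigned char *)p;
    }

    /* Hypothetical usage: decode straight from the mapping each time
       the image must be drawn, trading CPU for resident memory.
       size_t n;
       unsigned char *jpg = map_cached_image(cache_path, &n);
       if (jpg) { decode_jpeg(jpg, n); munmap(jpg, n); } */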
One of the problems with Firefox is that each time you open a new window or tab with images and HTML in it and then close the window, not all the memory is released. So if you open complex pages with lots of images and keep opening and closing them, Firefox can easily swell to 200MB on Linux. The Mozilla developers need to do more leak testing on Gecko. I wish they would stop adding new UI features for one release and just hunker down and analyze these memory problems. I used to do that at my old company: every three major releases we had a retooling phase where no new features were added, the code was reviewed, and we would decide on areas that needed cleaning up. Perhaps they need to adopt a model like this too.
One of the reasons why companies don’t do this is that they find it hard to justify developers sitting around refactoring code without adding any shiny new features to show the marketing/sales folks. One would think this would be easier in an open source project, but perhaps they have become too marketing-bound.
The best way to prove it is browsing image-rich sites, like photo galleries.
Firefox 1.0.7 on my XP is now using 460MB.
I have a lot of tabs open, but if I close them all I will still get around 300MB of memory usage.
If I close Firefox and let SessionSaver restore all the tabs again it will use no more than 100MB.
I haven’t tested this on 1.5 but I sure hope someone has taken a look at this issue.
If you actually read what FMQ has written, you’d have seen that although Firefox keeps some part of the images in memory (FMQ did not look into the reason for this), it’s not leaking them, since if you load them again it doesn’t eat any more memory than the first time. FMQ seems to think it’s some sort of cache. If the developers have done this, there might be a good reason. I repeat: it’s not a leak (so probably not an error), but a cache.
I’m looking at about:config and browser.cache.memory.capacity is set at 65536
Either that one has a leak or there is another cache involved.
I’m okay with it not being a leak… you’re saying it’s the cache. I understand.
But the cache is growing out of control; even though I set a max size, Firefox is using far too much memory.
I also just cleared it, and Firefox hasn’t given back any of the memory it’s using.
-Bill
What is elegant code? OpenBSD developers do not optimize more than they need to. The reason? It gives them more readable, simpler code with fewer problems.
Optimization has nothing to do with writing unreadable code. I optimize my code after my first draft, and it’s actually more readable and more maintainable afterwards.
Has anyone seen/heard of a way to optimise Firefox on Windows?
Maybe I didn’t understand this, but does this modification rely on the X server? What about Firefox on Windows and Mac?
It seems that scrolling will be a little jerky because of the decompressing on the fly. There seems to be a trade-off between speed (or at least smoothness) and memory usage. Since RAM is so cheap and plentiful, I’d rather Firefox be snappier and take up double the memory…
(Not that any of this affects me ’cause I use Safari!)
I implemented a similar system to this a while back for NetSurf, an open-source web browser for RISC OS. There are a few key differences though:
1) I wrote a custom LZW based compression system rather than retaining the native format
2) We also allow images to drop out to disk (whether currently compressed or not) to work around the lack of virtual memory support
The main advantage of the LZW compression system is speed. Lots and lots of speed. Decompressing an image takes roughly the same amount of time as performing a simple byte copy, and compression is about 1/4 of this speed (all figures off the top of my head.)
We also do things like remembering whether images are opaque or not, and drop out the alpha channel prior to compression to instantly reduce the data by 25%.
With all the optimisations we actually end up with a very good compression ratio, and the speed is so fast that you won’t notice it happening on a 200MHz ARM-based machine.
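The alpha-dropping trick is simple enough to sketch (assuming 8-bit RGBA pixels in memory; this illustrates the idea rather than reproducing NetSurf’s actual code):

    #include <stdlib.h>

    /* An image is opaque if every alpha byte is 0xFF; remember this
       flag once at decode time instead of rescanning on each pass. */
    int is_opaque(const unsigned char *rgba, size_t npixels)
    {
        for (size_t i = 0; i < npixels; i++)
            if (rgba[i * 4 + 3] != 0xFF)
                return 0;
        return 1;
    }

    /* Repack RGBA as RGB before compression: 25% of the data is gone
       before the compressor even runs.  Returns a newly allocated
       3-bytes-per-pixel buffer, or NULL on failure. */
    unsigned char *drop_alpha(const unsigned char *rgba, size_t npixels)
    {
        unsigned char *rgb = malloc(npixels * 3);
        if (!rgb)
            return NULL;
        for (size_t i = 0; i < npixels; i++) {
            rgb[i * 3 + 0] = rgba[i * 4 + 0];
            rgb[i * 3 + 1] = rgba[i * 4 + 1];
            rgb[i * 3 + 2] = rgba[i * 4 + 2];
            /* the alpha byte is 0xFF everywhere, so nothing is lost */
        }
        return rgb;
    }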
I’m trying Opera at the moment. Reminds me that browsers can still be fairly fast and light. I guess Firefox is in danger of going the full way round the wheel. It started as a fast ‘n’ light rewrite of the heavyweight Mozilla and is now threatening to become rather a Mozilla itself, especially if you run a lot of extensions.
FMQ is a real hero. He’s come up with lots of ideas for optimizing Gnome/gtk over the last few months if you follow his blog.
I’m trying Opera at the moment.
Well, that’s probably the best way to reduce Firefox’s memory use.
My FF and Opera seem to eat up about the same space; in fact, FF needs less…
Mozilla Firefox has always been slower and more memory-intensive than Mozilla Suite. Early in the Phoenix days that wasn’t true, but Phoenix was featureless and buggy.
If you want a light Mozilla/Gecko-based browser, K-Meleon’s been around for a while. It’s almost as light and fast as Opera (though it has nowhere near as much customization and features as Firefox, let alone Opera).
Slower than Mozilla? Are you kidding? On my Mac Mini I use – I admit – homemade builds of both: the development version known as SeaMonkey 1.5a1 (Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.9a1) Gecko/20051124 SeaMonkey/1.5a) and Firefox 1.6a1 (Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.9a1) Gecko/20051124 Firefox/1.6a1).
K-Meleon is nothing but a shell embedding Gecko, using the MFC toolkit, just as Camino does with Cocoa on Mac OS X and Epiphany does under GNOME.
So stop saying “nonsense” and get some info.
I’m using the new 1.5 Beta, and after I open and download a couple score large images, my Windows XP system becomes seriously sluggish, until finally Windows says the virtual memory limit is too low. I have to at least close and reopen Firefox, or even reboot, to recover the system.
Firefox needs to fix this problem. I was told most of the leaks would be fixed by 1.5, but that doesn’t appear to be the case. The problem is less severe than it was in the earlier versions, but it has not been resolved by any means.
C’mon, guys! How long have memory leaks been known? Why are people still coding them? Start paying attention to code quality before features, or OSS will start looking and acting like Microsoft products! Bloat, instability, and insecurity will follow!
C’mon, guys! How long have memory leaks been known? Why are people still coding them?
That’s like saying “How long have car crashes been known? Why are people still having them?”
When working on a project that requires a large team, no matter how hard you try, there will be occasions where a person doesn’t realize the implications of what they’re doing on what other people are doing. You can’t avoid problems; you can only try to minimize how often they happen.
I would agree when it comes to bugs, but NOT any sort of memory leak. Memory leaks are usually pretty easy to detect, trace, and track down using third-party tools. Some bugs can’t be tested for; memory leaks can be. There is no excuse.
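For what it’s worth, the kind of check being described is easy to run with a tool like valgrind (a sketch; the program below is a deliberately trivial leaker, not anything from Firefox):

    /* leak.c - a deliberate 16-byte leak for valgrind to find.
       Build and check with:
           gcc -g leak.c -o leak
           valgrind --leak-check=full ./leak
       valgrind reports the 16 bytes as "definitely lost" together
       with the stack trace of the allocation. */
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(16);
        (void)p;  /* never freed: a leak */
        return 0;
    }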
It’s probably not a memory leak, but rather memory FRAGMENTATION, which is much, much harder to detect and fix.
No, there were some memory leaks in Firefox. I think they were fixed in 1.5, but not in the 1.0.x series.
Though there could be a leak (see my other comment, which I’ll write in a minute), it CANNOT be the issue Federico is dealing with, as Windows XP does NOT have an X server.
My issue isn’t so much the loading of images; I can wait… I’ve used so many systems over the years that I can adjust to it. The idea that Firefox has memory leaks is of more concern. I’m running it right now with 5 tabs, including Yahoo!, two small web pages, the “This Week in Tech” homepage, and obviously this posting page, and according to my system Firefox is using 132MB of virtual memory.
I enjoy using Firefox because I believe it is superior to many other browsers and supports most web sites. But the memory issue is starting to bother me, and I did get a free copy of Opera a few weeks ago on the anniversary, so it’s likely only a matter of time before I drop Firefox.
I don’t understand why Mozilla doesn’t implement separate threads for different tabs. Think of the possibilities. Right now, if I have a Flash or graphics-intensive page in one tab, all the other pages also creep to a halt. Why not suspend all other tabs except the active one?
Multithreading will not change anything: sooner or later all your threads need to write data to video memory, and they will need to hold a lock during that operation, so until one thread finishes, the others will just wait. Anybody who has developed a multithreaded application for Linux/Windows knows that the actual drawing is done only in the main thread.
At least in BeOS multithreading helps a lot.
And “actual drawing” isn’t done in the “main” thread – at least not the app’s main thread – but via several app_server threads.
Unfortunately, BeOS multithreading techniques cannot be used in Mozilla, as Mozilla is coded according to Win/Linux rules, where everything must actually happen in that poor single “main” thread.
Can anyone shed light on how this affects platforms without an X server? WinXP, possibly OS X. I have no idea how the OS X version works. Does the Win32 API provide a similar pixmap cache?
There probably are memory leaks in Firefox. I can neither prove nor deny it.
However, another reason for memory bloat could be the way the heap works. You allocate 3kB, 3kB, 3kB, then release the _middle_ 3kB, then allocate another 8kB. The heap is now 17kB, though it should only need 14kB, because the 8kB allocation cannot use the small hole in the middle. See here: http://live.gnome.org/MemoryReduction (search for “heap”). Now take this to real numbers (more holes, megabytes) and you get bloat which is not so easy to fix.
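A toy program makes the hole visible (a sketch; the exact addresses and how much the heap grows depend on the allocator):

    /* Three 3kB blocks, free the middle one, then ask for 8kB.  The
       8kB block cannot fit in the 3kB hole, so the allocator must take
       fresh space; the hole stays wasted until a small-enough
       allocation comes along. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *a = malloc(3 * 1024);
        char *b = malloc(3 * 1024);
        char *c = malloc(3 * 1024);

        free(b);                     /* leaves a 3kB hole between a and c */
        char *d = malloc(8 * 1024);  /* too big for the hole */

        /* On a typical allocator, d lands beyond c rather than at b's
           old address: the heap spans ~17kB while only ~14kB is live. */
        printf("a=%p c=%p d=%p\n", (void *)a, (void *)c, (void *)d);

        free(a); free(c); free(d);
        return 0;
    }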
True, it’s easy for leaks to occur: a few kB here, a few kB there, and before you know it you’ve reached several megs. The sad part is that instead of correcting those issues, too many developers would prefer blaming the end user’s machine rather than saying, “well, this end user could have raised a valid issue… let’s investigate it”.
Nothing new. After running Mozilla (Suite) for some days, its working set grows to about 300 MB on my Windows machine. It does not decrease very much after closing everything but one window.
A long-time issue with Mozilla… at least it’s quite stable nowadays – nothing compared with the 0.x versions.
At work I have 2GB of RAM and 2GB of swap. I run Mozilla and Firefox, and after one week I often have Mozilla at ~1.8GB and Firefox at ~700-800MB, plus ~1-1.2GB of X.org memory used.
But FF is not alone; Safari is the same. Right now I have 5 windows with 17 tabs altogether, and it uses ~1.5GB.
I think all the browsers I have used (FF, Mozilla, Safari) have a huge memory problem.
I agree 100%. I’ve been complaining to Apple about issues with Safari for a while; it seems like it’s not going away quickly enough. I’ve got 4GB of RAM, and when total system utilization gets around 2.6GB and Safari gets around 250MB real and 1.1-1.5GB virtual, everything starts acting a little crazy: dmgs start not wanting to mount properly, or files do not copy from mounted dmgs correctly… Very strange, and I can’t seem to get to the bottom of it. I tried raising the max number of processes via launchd and that helps; however, other odd things begin happening after that. <Sigh>
My net speed is very slow, and I am used to disabling images (the ImgLikeOpera plugin) to get good browsing speed. I never actually had any memory problems in Firefox. But after reading this, I tested and found that I had missed a big problem without knowing it, because I actually have 256MB of RAM and my FF memory usage kept growing like anything.
1. Doesn’t the memory on the X server side include graphics memory?
2. Is it possible to implement image decompression as a shader?
Interactiveness and speed do. If you have 1GB of RAM, all your applications and buffers should _always_ use all of it! Otherwise your system isn’t optimized.
So what we need is a better way to classify how important memory is, and when it is going to be needed next.
Also, if we implement a layered structure that treats the whole system – swap/filesystem/memory/network, etc. – as one big system, it would be possible to do optimizations like this:
An image in a hidden Firefox window:
best case: need image in 0.1 sec, medium priority,
storage: no need to store the image,
location: http://somewhere.com/image, plus the exact time the image was downloaded. (Remember you can always uniquely identify an item if you know both the position and the time.)
All the information about the image, like its size, would be stored with it.
The job of the layer would then be to satisfy the demands of Firefox as well as possible. It could look up in a table how long this 800×600 image would take to decompress. If it is less than 0.1 seconds, it just compresses it and is done with it.
If it is more than 1 second, it might try to compress only parts of it. If the whole computer is running out of memory (the fastest ‘swap’ in the system), it just tries to store the image in the next-fastest swap. If it is faster to store it compressed on the hard drive (it looks at the bandwidth of the hard drive), it does that. If it is faster to store it uncompressed, it does that. It could even do both, and store it on the network.
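As a rough sketch of that decision rule (every name, threshold, and cost figure here is invented for illustration, not part of any existing system):

    /* Given a deadline and measured costs, pick the cheapest place to
       keep an image.  Under memory pressure, prefer whichever option
       frees the most memory while still meeting the deadline. */
    enum placement { KEEP_UNCOMPRESSED, KEEP_COMPRESSED, SPILL_TO_DISK };

    struct costs {
        double decompress_sec;  /* re-create pixels from a compressed RAM copy */
        double disk_load_sec;   /* read and decode the image from disk */
    };

    enum placement choose_placement(struct costs c, double deadline_sec,
                                    int memory_pressure)
    {
        if (!memory_pressure)
            return KEEP_UNCOMPRESSED;  /* RAM is plentiful: stay fast */
        if (c.disk_load_sec <= deadline_sec)
            return SPILL_TO_DISK;      /* frees the most memory */
        if (c.decompress_sec <= deadline_sec)
            return KEEP_COMPRESSED;    /* the middle ground */
        return KEEP_UNCOMPRESSED;      /* nothing slower meets the deadline */
    }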
This is of course not just a memory model for images, but for every object in a program. There would then also be a 100% correspondence between the things you save on your disk (files) and the memory of your program.
If your system thinks your video editing takes up too much memory, it just saves it in a compressed format. If you are going to transfer it over the network, it converts it on the fly to maximize the bandwidth/CPU/memory equation for both computers.
This also means that you are never going to ‘save’ a file the usual way; you just say that it should be available for a certain amount of time (e.g., unlimited) and in locations you have specified – e.g., either your local disk or your network storage. You could also say it should be both on the network storage in some way and on your local disk: an automatic backup feature, except it is not the exact same bits that are stored on each machine – the network is much slower, so it is more heavily compressed.
All this could be done through a structured filesystem layer – it could be done as an extension to LUFS. This layered structure could also handle security issues, so your data never leaves any storage that you specify.
This would mean that a program can read, write, and do everything it would like to do with your data – but the only way it could send it to other computers would be through the filesystem, which would be managed.
So maybe we should aim for more effective computer subsystems, instead of reinventing the wheel in every program?
For reasons of speed. I hate it when apps can’t keep up with the scrollbar; it’s annoying and makes for a sluggish experience. Perhaps optimizations could be done in the “keep images for _all_ tabs in the X server” area instead. Also, Firefox should free memory more aggressively when closing tabs and windows.
I think it’s sad indeed that Firefox uses so much memory! I only have 64MB; I can still run Firefox, but quite slowly (still OK).
I have the Opera, Firefox, Dillo, Links2, and ELinks browsers installed. Links2 -g is quite good (JavaScript support) and very fast. Opera is faster than Firefox and has more good features as well; I bet that’s because there’s an Opera version for PDAs and mobile phones, so the developers are optimizing it very well.
Firefox developers, please try to stop for a while to get the memory leaks out and optimize the code more. Firefox might be called fast, but on my machine it’s just as slow as THE Mozilla. :/
PS: Of course, a big thanks to every programmer who puts his time into free open-source software.
400MB… 200MB… sheesh, what do you all browse?
I’d be happy if it’d just stop crashing.