Thanks to Google Summer of Code student Zhao Shuai, Haiku now has support for a swap file. “As of revision 27233 it is enabled by default, using a swap file twice the size of the accessible RAM. The swap file size can be changed (or swap support disabled) via the VirtualMemory preferences. Swap support finally allows building Haiku in Haiku on a box with less than about 800 MB RAM, as long as the swap file is large enough. [Ingo Weinhold] tested this on a Core 2 Duo 2.2 GHz with 256 MB RAM (artificially limited) and a 1.5 GB swap file. Building a standard Haiku image with two jam jobs (jam -j2) took about 34 minutes. This isn’t particularly fast, but Haiku is not well optimized yet.” The swap implementation borrows heavily from that of FreeBSD.
Great job. I can’t really say much else other than I simply can’t wait for an alpha/beta version.
Good job guys. 7 years and still going. I can’t wait for R1. Still in love with BeOS after all these years. I really admire the work you’re doing.
it is enabled by default, using a swap file twice the size of the accessible RAM
I thought that was an ancient rule of thumb with no bearing on modern memory allocation algorithms? Isn’t it from the times when you needed at least as much swap as you had RAM?
Yes, I don’t understand why people still stick to it. I have 2 GB of RAM and I have no swap, and everything works fine. It would be insane to allocate 4 GB of swap for my system.
It depends on what happens if the system crashes. Some OSes still dump the whole of system memory to swap before dying, since you don’t want it saved as a regular file on the filesystem.
When rebooting, the dump in swap is then saved into a regular file.
Basically, swap = 2x RAM is a bad idea, but there are sometimes reasons why its space is needed. Don’t know how Haiku does this, though.
It’s important to explain why “some OSes”, in the event of a crash, dump the whole system memory to the paging file: because it’s pre-allocated and, at that point, reliable (i.e. safe to write to).
Nothing unexplained or uncontrolled should ever happen in kernel space. If something odd did happen there, the kernel can only be trusted to a minimum, so trusting it to allocate another file could be disastrous and render the whole system corrupt or unusable; using a pre-allocated file is the safest design.
Back to the matter at hand, the fixed value of twice the RAM is fairly arbitrary, but you have to start somewhere, so it’s understandable.
I would choose no hard disk paging, ever, for a few reasons: it’s simpler (and simpler is always good) and you get the “how much RAM you have is exactly how much you can expect to load on your system” approach (a lot easier for everyone: users, developers, admins, …).
I have an idea: use disk paging only after the system has generated a baseline of how a particular system behaves. I think that’s the smartest way to do it.
Not only may the system dump its memory to swap for whatever reason; most modern Linux distros also depend on the swap file for resuming suspend-to-disk, or “hibernate” if you will. So this rule of thumb (2x RAM) isn’t that bad after all!
There’s no rule of thumb, it always depends on what you’re running. However, Haiku needs a default. For a work-in-progress operating system, I consider that default more than appropriate.
And it’s not like you can’t change it easily. It’s a swap file, not a swap partition.
Also, I believe a big chunk of the Haiku community uses Haiku in more constrained environments. Swap space makes almost no sense for regular use when you have 2 GB of RAM, but you might require twice as much swap space if you have less than 512 MB of RAM.
And what’s all the fuss about this? If you have 2G of RAM in your machine, you most likely have enough disk space for a big swap file.
That’s always been my thought on the subject. I have 2 gigs of RAM and my Linux install has a 4 gig swap partition on a 500 gig disk drive. If I ever start pining for those 4 gigs, it would probably be time to upgrade to a 20 terabyte disk drive. I still wish more Linux installers would have a ‘use swap file’ option instead, to make things more dynamic.
Face it, modern OSes need to use lots of RAM, esp. for developers who are rebuilding things like GCC, OpenOffice, etc. Not saying that’s typical or ideal, but it can be necessary for some things.
However, does anyone here know if using 2 GB RAM + 4 GB swap is even possible? I mean, would that even work (on Haiku or any other 32-bit OS)? Wouldn’t most of it be unavailable at the same time anyway, only letting you use approx. 4 GB (or less) at once?
Why “face it”? My last system had 1 GB of RAM and it worked fine. My RAM is now doubled, yet my apps haven’t become more bloated. If anything, GNOME and KDE have only become less bloated over the years as they kept optimizing things. I compiled GCC 4 years ago on an Athlon 1.4 GHz with only 380 MB RAM, so why wouldn’t I be able to do that now with a system that has even more RAM? GCC didn’t become *that* much bigger.
If my last system with less resources worked fine, and it only had 1 GB of RAM, then why would I suddenly need to have 4 GB of swap now that I have 2 GB of RAM? Couldn’t I just think of the extra GB of RAM that I’ve gained as the swap?
It seems totally illogical to me. One of the points of upgrading one’s RAM is to make sure that the system doesn’t need as much swap, so why would one need to upgrade the swap as well after having upgraded the RAM?
In the days when physical RAM was a tiny fraction of the 32-bit address space, it made perfect sense to use a swap space that could make virtual memory look much bigger than physical and get closer to the address limit. It allowed apps to run that absolutely needed much more storage than physical RAM provided; I’m thinking of SPARC stations running chip design software, for example.
Today it makes far less sense. If the tasks you are doing are light, it is best to turn it off or minimize it, and that also adds more privacy: nothing is written to disk but whatever the app itself writes. Since the old BeOS already had a severe memory limit, most BeOS apps were already light enough.
Still, for large jobs that have undefined limits, it makes sense to turn it on. Any power app that wants to use the entire RAM makes it hard for the rest of the system to remain stable, so the VM lets the other stuff keep running even if squeezed to the wall.
Anyway, I’m glad that it will be available and that it has an on/off switch!
Yes, it’s an outdated rule but since a) it’s the initial implementation and b) it’s just the default, this is not a big deal.
Years ago I remember some nimrod saying the haiku project would never include open-source into its code base. I wonder what street in Redmond, California the bum is laying in now.
Redmond is not in California.
Since Haiku has always been open-source, I can’t fathom what you’re on about.
The person is probably confused and referring to early questions about whether it was a good idea to include GNU software with Haiku. Haiku is under the MIT license. As I understand it, you can use MIT-licensed code in a GPL project freely, but using GPL code in an MIT-licensed project would cause a license conflict. Some people were concerned about having GPL code in the codebase at all. This is my recollection, from a long time ago.
Yeah! Big news!
You can choose any swap size.
You can read in the news item that Ingo tested with 256 MB of RAM and 1.5 GB of swap.
I hope we get something usable sooner rather than later, since I absolutely refuse to use any activated Windows. W2K and all my software and hardware are getting pretty long in the tooth, and the various Linuxes just don’t do it for me.
I’d like to see some comments from those that use the current versions about how stable it is and what hardware works really well and what doesn’t. I don’t need anything fancy like wifi, just a basic BeOS replacement for a current Core Duo plus as many gigs of RAM as the board allows (some allow 8 or 16 GB) and good support for SATA, FireWire, USB 2.0, etc.
With that I am prepped for a nice review posted by an OSNews reader. I know, I know, I should do it myself…
People seem to have persistent misconceptions about the purpose and use of paging space. (‘Swap’ is an inaccurate term on modern systems, since memory is only ever sent out to disk in page-sized chunks; the earliest Unixes actually moved the entire program memory image into and out of RAM when programs were switched, hence the name ‘swap’.)
The pagefile is mostly orthogonal to virtual memory and the fact that your virtual memory space for a program can potentially look bigger than the physical address space of the system is mostly a tangential benefit. Even if you have 4 GB of RAM, there’s some value to having a pagefile.
For one, on a 32-bit system, you can still have more than 4 GB of program memory (if you open 5 instances of Photoshop and edit 1 GB of data in each, you’ve got 5 GB of virtual memory, 1 GB of which needs to be backed somewhere other than RAM).
Secondly, not having a pagefile hampers the memory manager’s ability to make a complete set of choices about what to do with infrequently-used pages. If a program touches a data page, that physical page is tied up no matter how rarely the program uses it again: RAM that could have held a more valuable cache page or executable binary page is lost. Modern OSes page even without a paging file, since all of them have memory mapping and some use memory mapping for executable code. By eliminating the pagefile, you’re just increasing the amount of executable-code paging by reducing the possibility of data paging.
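The file-backed paging described above can be observed from user space. Here is a minimal Python sketch (not Haiku-specific; it just assumes a POSIX-style mmap) of a file-backed mapping: since the pages are backed by the file itself, the kernel can discard clean pages under memory pressure and re-read them later, with no swap space involved — which is exactly how executable code gets paged on pagefile-less systems.

```python
import mmap
import os
import tempfile

# Create a file to act as the backing store (playing the role of an
# executable image or data file on disk).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"A" * 4096)  # one typical 4 KB page of data

# Map the file into this process's address space. Reads fault pages in
# from the file on demand; under memory pressure the kernel may drop a
# clean page and re-fetch it from the file later -- no pagefile needed.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        print(m[0:4])  # touching the mapping faults the first page in

os.remove(path)
```

Anonymous memory (regular heap allocations) has no such backing file, which is why it can only be evicted when swap space exists.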
Windows also happens to use the pagefile as the target for a crash dump, so if you want to be able to write a full memory dump you need to have a pagefile that’s a few MB bigger than the size of your RAM.
Very well said. While this doesn’t really apply to Haiku at this point, I would also like to remind people that not all “desktop” machines run just one user’s desktop. My Linux servers run many simultaneous X desktops. As many as 60 concurrent desktops, in fact, on one server. That server has 8GB of ram and performance is quite good. But the swap is essential for that good performance. We don’t page in much during the work day. But the OS is able to page out a few GB of stuff, freeing the memory up for disk cache and other *useful* things.
Oh this is fantastic! I can’t wait to see more stable versions. I think haiku is one of the most exciting projects of our time.
I hope they will continue their great work!
The swap file is one of the most useless features in a modern OS. It slows down the system far too much to be reading the disk for running processes; completely unacceptable on a desktop PC!
If your RAM isn’t enough for all the running apps, just buy more RAM! Here I always have more than 1 GB left for file cache, and if I accidentally hit something too big for my RAM, I’d prefer to have it killed with an “out of memory” error rather than have it screw up other programs and the whole system.
And what if you were working on some very important document when you ran out of memory and the app got killed?
If it runs out of physical memory, the speed would be unbearable anyway – and I’d kill it or reboot if killing cannot be done fast.
No, you’re doing it wrong.
I thought a partition was a better idea: since the swap is written to so often, keeping it on its own partition means it won’t fragment the filesystem.
If the swap file is created as a contiguous file and doesn’t resize every now and then, then there’s not really much of a difference.
I haven’t checked the recent Haiku build to see if it allows the swap file to live somewhere other than the root partition, but if it does, as I suspect, there’s absolutely nothing stopping you from creating a partition dedicated to a swap file. This gives you the best of both worlds: you can tell it where to stick the swap file (a dedicated partition on the fastest part of the disk), the size can be static, and it isn’t fighting for I/O latency if it is on a separate physical disk from all your data and/or applications. With hard drives having such ridiculously large storage capacities these days, why not do that? Of course, the minor issue is that a lot of people these days may run single-drive machines, but at least you have options.
My new PC has 8GB of RAM. I have disabled swap in Windows and Linux, and plan to never use it again!
I really enjoy Haiku :o)