Learn about the Conquest File System

Submitted by teller, 2003-04-21, filed under General Development.

Conquest is a new filesystem that stores all metadata, small files (currently selected by a size threshold), executables, and shared libraries in memory, leaving only the contents of large files on disk.

About The Author: Eugenia Loli. Ex-programmer, ex-editor in chief at OSNews.com, now a visual artist/filmmaker.

4 Comments

2003-04-21 6:05 pm - Anonymous

Novell's "traditional" file system stored all of its directory tables and file allocation tables in RAM. I think their newer NSS filesystem can also do this, but it is not the default configuration. While keeping all that information in RAM makes for blazing performance, you tend to run out of RAM quickly given the 4 GB limit of 32-bit Intel systems, which is why NSS defaults to not keeping metadata in RAM: the filesystems were getting so large that it was no longer practical. Novell recently announced that it will soon be providing some of its services (file and print, in addition to directory) on Linux. Perhaps they will make these available for 64-bit Linux systems too!

2003-04-21 7:22 pm - Anonymous

First, I'd like to say that this kind of thing should put to rest the question "Why would someone need 64-bit addressing?" Simple in-place execution, yum...

I would like to comment on the reliability issue, though. Conquest currently keeps memory persistent by putting the machine on a UPS (according to the papers; not the final solution, I'm sure). I think it wouldn't be difficult to make this more reliable by mirroring the in-memory portion of the file system to a contiguous file on disk. The mirroring process wouldn't have to be synchronous like a write-through cache; it could be done through a journaling-like process. Then, all they would have to do is make sure that the UPS monitoring software tells the mirror process to finish any leftover incremental updates as part of the shutdown event. Making the mirroring an incremental background process would avoid having to dump 4 GB or more of memory to disk in emergency time, an iffy proposition. Please tell me if this is a stupid idea, at least in comparison to how the system works already...

2003-04-21 10:19 pm - Anonymous

Isn't this what intelligent disk caching is supposed to do? The only difference is that this never writes back to disk. I wonder how this would handle some sort of hardware crash?

2003-04-21 10:51 pm - Anonymous

What is the real difference between this and a conventional filesystem modified not to force writeout of dirty data blocks when they are in persistent memory?
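The incremental-mirroring scheme described in the second comment could be sketched roughly as follows. This is a hypothetical user-space illustration, not anything from Conquest itself (which is a kernel filesystem); all class and method names here are invented for the example. Writes go to memory and record which blocks became dirty; a background step flushes a few dirty blocks at a time to the mirror file; a UPS shutdown hook finishes the leftover updates.

```python
# Hypothetical sketch of the commenter's idea: mirror an in-memory store
# to a disk file incrementally, flushing only dirty blocks in the
# background, with a final flush triggered by the UPS shutdown event.
import os

BLOCK_SIZE = 4096

class MirroredMemoryStore:
    def __init__(self, mirror_path, size):
        self.mem = bytearray(size)   # stands in for the in-memory filesystem
        self.dirty = set()           # block numbers awaiting writeout
        mode = "r+b" if os.path.exists(mirror_path) else "w+b"
        self.mirror = open(mirror_path, mode)
        self.mirror.truncate(size)   # contiguous on-disk mirror of the memory

    def write(self, offset, data):
        """Write into memory and journal which blocks became dirty."""
        self.mem[offset:offset + len(data)] = data
        first = offset // BLOCK_SIZE
        last = (offset + len(data) - 1) // BLOCK_SIZE
        self.dirty.update(range(first, last + 1))

    def flush_some(self, max_blocks=8):
        """Background step: write a handful of dirty blocks to the mirror."""
        for blk in sorted(self.dirty)[:max_blocks]:
            self.mirror.seek(blk * BLOCK_SIZE)
            self.mirror.write(self.mem[blk * BLOCK_SIZE:(blk + 1) * BLOCK_SIZE])
            self.dirty.discard(blk)
        self.mirror.flush()

    def shutdown(self):
        """UPS low-battery hook: finish all leftover incremental updates."""
        while self.dirty:
            self.flush_some()
        os.fsync(self.mirror.fileno())
        self.mirror.close()
```

Because only blocks dirtied since the last flush are written, the emergency shutdown only has to drain a small backlog rather than dump gigabytes of memory, which is the point the commenter makes about not writing everything "in emergency time".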