posted by sbergman27 on Sat 9th Feb 2008 21:33
I believe I read that OSNews Conversations were for tech discussion, and I have an issue which I've Googled and Googled, but the answer is still not clear to me.

I know that NFS does some client-side caching, and have read that ensuring complete cache consistency between clients is expensive, so NFS implements a lesser guarantee which is good enough for "everyday file sharing".

I have a server that runs many sessions of a multiuser Cobol accounting application. I need to run one session of the app on another box, with the Cobol C/ISAM data files mounted via NFS. The application, obviously, will be reading and writing individual records within the files, and proper locking is employed so that clients running on the server do not step on each other's changes to the files. But can I trust that NFS is going to handle this properly and not cause corruption?
RE[3]: How about this...
by sbergman27 on Sun 10th Feb 2008 16:59 UTC

Thank you, Greg. One encouraging thing I see: if I copy a large file over the NFSv4 mount and then copy it again, client-side caching brings the second copy's time down from network-transfer time to local-cache time. But when I actually run a report through the C/ISAM<->SQL interface, it takes exactly the same amount of time each time I run it. It reads everything over the network every time, and this is read-only access. So I guess U/SQL is being particularly careful about accesses to the data.

Also, I do not have multiple NFS clients accessing the data. I have multiple users locally on the NFS server and only one user on a single NFS client, which is also running Linux, so I'm not mixing client implementations, or clients and servers.

What I *really* don't understand is this. If there were even a *hint* that a local disk filesystem, like ext3, *might* possibly corrupt someone's data under certain unusual circumstances, it would be considered a major and embarrassing bug and word would be put out for everyone to apply the patch *immediately*. But with network filesystems, the attitude seems to be, gee, if we do this the right way it won't be as fast, so why don't we do it this other way instead? It'll only corrupt people's data a small fraction of the time and will be much faster! Yeah, that'll be OK.
