posted by sbergman27 on Sat 9th Feb 2008 21:33
I believe I read that OSNews Conversations were for tech discussion, and I have an issue which I've Googled and Googled, but the answer is still not clear to me.

I know that NFS does some client-side caching, and I have read that ensuring complete cache consistency between clients is expensive, so NFS implements a weaker guarantee which is good enough for "everyday file sharing".

I have a server that runs many sessions of a multiuser Cobol accounting application. I need to run one session of the app on another box, with the Cobol C/ISAM data files mounted via NFS. The application, obviously, will be reading and writing individual records within those files, and proper record locking is employed so that clients running on the server do not step on each other's changes. But can I trust that NFS will handle this properly and not cause corruption?
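For what it's worth, here is a rough Python sketch of the kind of per-record locking the application does (the file name, record size, and layout are made up; the real app is Cobol/C-ISAM, not Python). My understanding is that byte-range locks over NFS go through the NLM/lockd protocol, and that on Linux taking the lock is supposed to revalidate the client's cached data for the file, but that is exactly the part I'm unsure about:

import fcntl
import os

RECORD_SIZE = 512                        # hypothetical fixed-length record size
DATA_FILE = "/mnt/acct/customers.dat"    # hypothetical NFS-mounted data file

def update_record(recno, new_bytes):
    """Read-modify-write one record under a POSIX byte-range lock."""
    fd = os.open(DATA_FILE, os.O_RDWR)
    try:
        offset = recno * RECORD_SIZE
        # Lock just this record's byte range before touching it.
        fcntl.lockf(fd, fcntl.LOCK_EX, RECORD_SIZE, offset, os.SEEK_SET)
        try:
            os.lseek(fd, offset, os.SEEK_SET)
            current = os.read(fd, RECORD_SIZE)
            # ... apply the change to 'current' here ...
            os.lseek(fd, offset, os.SEEK_SET)
            os.write(fd, new_bytes)
            os.fsync(fd)   # push the write to the server before releasing the lock
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN, RECORD_SIZE, offset, os.SEEK_SET)
    finally:
        os.close(fd)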
How about this...
by Adam S on Sat 9th Feb 2008 22:01 UTC

When a client starts working on a "record," it reads the record's last-modification timestamp along with the data. If another user updates the record in the meantime, that timestamp changes, so when user 1 submits his changes the stored timestamp no longer matches the one he read. The system then warns the user - not by deleting or discarding his information, but via a more subtle method.
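In rough Python-ish pseudocode it looks something like this (the data store and field names are just placeholders for whatever your backend is):

import time

def save_record(db, record_id, new_values, expected_mtime):
    """Optimistic update: only write if nobody changed the record since we read it."""
    current = db[record_id]                  # placeholder lookup in some data store
    if current["mtime"] != expected_mtime:
        # Someone else saved first; hand back their version and warn the user
        # instead of silently overwriting it.
        return False, current
    new_values["mtime"] = time.time()        # stamp the new version
    db[record_id] = new_values
    return True, new_values

The caller reads the record and remembers its mtime, lets the user edit, then calls save_record with that remembered value; a False result means "someone else got there first, re-read and merge."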

I use this on our internal helpdesk system so that two people can't both edit a record at once.

If you use some sort of method like this, you don't have to worry about NFS or any networked-filesystem gotchas.
