posted by sbergman27 on Sat 9th Feb 2008 21:33
I believe I read that OSNews Conversations were for tech discussion, and I have an issue which I've Googled and Googled but the answer is still not clear to me.

I know that NFS does some client-side caching, and I have read that ensuring complete cache consistency between clients is expensive, so NFS implements a lesser guarantee which is good enough for "everyday file sharing".

I have a server that runs many sessions of a multiuser Cobol accounting application. I have a need to run one session of the app on another box, with the Cobol C/ISAM data files mounted via NFS. The application, obviously, will be reading and writing individual records within the files, and proper locking is employed such that clients running on the server do not step on each others' changes to the files. But can I trust that NFS is going to handle this properly and not cause corruption?
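(For concreteness, here is a minimal sketch, in Python, of the kind of POSIX byte-range record locking I assume the app relies on; over NFS these fcntl locks are forwarded to the server by the NLM side protocol on NFSv3, or carried in the protocol itself on NFSv4. The record size, file path, and record layout below are hypothetical; the real C/ISAM files are opaque to me.)

import fcntl
import os

RECORD_SIZE = 128                          # hypothetical fixed record length
DATA_FILE = "/mnt/acct/inventory.dat"      # hypothetical NFS-mounted data file

def update_record(recno, new_bytes):
    """Read-modify-write one record under an exclusive byte-range lock."""
    assert len(new_bytes) == RECORD_SIZE
    offset = recno * RECORD_SIZE
    fd = os.open(DATA_FILE, os.O_RDWR)
    try:
        # Lock only this record's byte range so other sessions can keep
        # working on other records at the same time.
        fcntl.lockf(fd, fcntl.LOCK_EX, RECORD_SIZE, offset)
        try:
            current = os.pread(fd, RECORD_SIZE, offset)  # existing contents, if needed
            os.pwrite(fd, new_bytes, offset)
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN, RECORD_SIZE, offset)
    finally:
        os.close(fd)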
Comments:
How about this...
by Adam S on Sat 9th Feb 2008 22:01 UTC
Adam S
Member since:
2005-04-01

When a client is working on a "record," it gets stamped with a last-modification timestamp. If another user updates that record remotely, the stored timestamp changes, so when user 1 submits his changes back, the timestamp he read no longer matches the one on the record, and the system warns him - not by deleting or discarding his info, but via a more subtle method.

I use this on our internal helpdesk system so that two people can't both edit a record at once.

If you use some sort of method like this, you don't have to worry about NFS or any networked-filesystem gotchas.
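(A rough sketch of that timestamp check, for anyone curious; the in-memory store and record shape are just stand-ins for whatever the helpdesk system actually uses.)

import time
from dataclasses import dataclass

@dataclass
class Record:
    data: str
    modified: float      # last-modification timestamp stored with the record

class StaleRecordError(Exception):
    """Someone else changed the record after we read it."""

def save(store, key, new_data, timestamp_when_read):
    current = store[key]
    if current.modified != timestamp_when_read:
        # The record changed underneath us: warn the user instead of
        # silently overwriting the other person's edit.
        raise StaleRecordError(key + " was modified by another user")
    store[key] = Record(new_data, time.time())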

Reply Score: 1

RE: How about this...
by sbergman27 on Sat 9th Feb 2008 22:15 in reply to "How about this..."
sbergman27 Member since:
2005-07-24

Thanks for the reply. Unfortunately, this is a proprietary, closed-source accounting system. I didn't include all the detail in my description due to space limitations. But what I am having to run on the NFS client box is actually a clunky old C/ISAM<->SQL gateway called U/SQL, which I talk to from an intranet web app to generate ad hoc reports, and eventually to allow adding and editing inventory records. It is designed to work with the Cobol app and to do locking sufficient to avoid corruption when running on the local server. Unfortunately, it is compiled against an ancient version of glibc, and some of the symbols it needs have been deprecated... and now dropped. So it won't run on our shiny new Fedora 8 installation. Thus I need to run it on CentOS 4.6 via NFS.

i.e. the whole problem is that I have no control over the way the app works. :-(

Reply Score: 2

RE[2]: How about this...
by gregthecanuck on Sun 10th Feb 2008 11:11 in reply to "RE: How about this..."
gregthecanuck Member since:
2006-05-30

I wouldn't trust NFS to get record locking 100% correct. There are too many ways this can go south.

If you are talking about read-only data, you _could_ periodically snapshot the data and sync it over to another PC. It depends how real-time your requirements turn out to be.
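(Something along these lines, say; rsync, the paths, and the interval are all my own assumptions, not anything from the setup described above.)

import subprocess
import time

SOURCE = "/var/acct/data/"                # hypothetical data directory on the server
DEST = "reportbox:/var/acct/snapshot/"    # hypothetical reporting machine
INTERVAL = 15 * 60                        # seconds between snapshots

while True:
    # -a preserves permissions and timestamps; --delete removes files that
    # disappeared upstream so the snapshot stays a faithful copy.
    subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)
    time.sleep(INTERVAL)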

If you need read/write access (which you hinted at) then that is a different kettle-o-fish.

Anyone who has tried running an MS-Access database with multiple users can tell you all kinds of horror stories about corrupted databases.

Reply Score: 1

RE[3]: How about this...
by sbergman27 on Sun 10th Feb 2008 16:59 in reply to "RE[2]: How about this..."
sbergman27 Member since:
2005-07-24

Thank you, Greg. One encouraging thing I see: if I copy a large file over the NFSv4 mount and then copy it again, client-side caching brings the copy time down from the time it takes to transfer over the network to the time it takes to pull from local cache. But when I actually run a report through the C/ISAM<->SQL interface, it takes exactly the same amount of time each run; it reads everything over the network every time, and that's with read-only access. So I guess U/SQL is being particularly careful about its accesses to the data.
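(If it helps anyone reproduce this, here's the kind of quick-and-dirty timing test I mean; the file path is made up. My understanding, per the coherence notes in nfs(5), is that taking a POSIX lock makes the client flush and revalidate its cache for that file, which would explain why a properly locking application never benefits from the cached copy.)

import fcntl
import os
import time

PATH = "/mnt/acct/bigfile.dat"   # hypothetical large file on the NFS mount

def read_all(fd):
    total = 0
    while True:
        chunk = os.read(fd, 1 << 20)
        if not chunk:
            return total
        total += len(chunk)

def timed_read(lock_first):
    fd = os.open(PATH, os.O_RDONLY)
    try:
        if lock_first:
            # A shared fcntl lock; on a Linux NFS client this should force
            # revalidation instead of serving possibly stale cached pages.
            fcntl.lockf(fd, fcntl.LOCK_SH)
        start = time.monotonic()
        nbytes = read_all(fd)
        elapsed = time.monotonic() - start
        if lock_first:
            fcntl.lockf(fd, fcntl.LOCK_UN)
        return nbytes, elapsed
    finally:
        os.close(fd)

print("plain re-read:  ", timed_read(False))
print("locked re-read: ", timed_read(True))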

Also, I do not have multiple NFS clients accessing the data. I have multiple users working locally on the NFS server and only one user on a single NFS client, which is also running Linux, so I'm not mixing different client implementations, or clients and servers from different platforms.

What I *really* don't understand is this. If there were even a *hint* that a local disk filesystem, like ext3, *might* possibly corrupt someone's data under certain unusual circumstances, it would be considered a major and embarrassing bug and word would be put out for everyone to apply the patch *immediately*. But with network filesystems, the attitude seems to be, gee, if we do this the right way it won't be as fast, so why don't we do it this other way instead? It'll only corrupt people's data a small fraction of the time and will be much faster! Yeah, that'll be OK.

Reply Score: 2

RE: How about this...
by gregthecanuck on Sun 10th Feb 2008 11:06 in reply to "How about this..."
gregthecanuck Member since:
2006-05-30

Adam - that solution is fine as long as the client doesn't cache the data; otherwise two users can still collide. Your suggestion is basically the optimistic-locking model.

Reply Score: 1