posted by sbergman27 on Sat 9th Feb 2008 21:33
Conversations I believe I read that OSNews Conversations are for tech discussion, and I have an issue which I've Googled and Googled, but the answer is still not clear to me.

I know that NFS does some client-side caching, and I have read that ensuring complete cache consistency between clients is expensive, so NFS implements a weaker guarantee (close-to-open consistency) which is good enough for "everyday file sharing".
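If the attribute cache turns out to be the concern, the Linux nfs(5) man page documents mount options that tighten or disable it, at a performance cost. A sketch, assuming a Linux client; "server:/export" and /mnt/acct are placeholder names, not anything from my setup:

```shell
# Disable attribute caching entirely (noac) and directory-entry
# caching (lookupcache=none), so every access revalidates with
# the server before using cached data.
mount -t nfs -o noac,lookupcache=none server:/export /mnt/acct

# Milder alternative: keep caching but cap its lifetime at 1 second.
# mount -t nfs -o actimeo=1 server:/export /mnt/acct
```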

I have a server that runs many sessions of a multiuser Cobol accounting application. I need to run one session of the app on another box, with the Cobol C/ISAM data files mounted via NFS. The application will, of course, be reading and writing individual records within the files, and proper locking is employed so that clients running on the server do not step on each other's changes to the files. But can I trust NFS to handle this properly and not cause corruption?
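For what it's worth, the kind of record locking a C/ISAM runtime typically does is POSIX byte-range locking, which NFS forwards to the server (via NLM, or natively in NFSv4), and which also flushes/invalidates the client's cached data for the file on lock and unlock. A minimal sketch in Python; the file path and record layout are made up for illustration, not taken from the real application:

```python
import fcntl
import os

# Hypothetical data file with two fixed-width 8-byte "records";
# the real C/ISAM files would live under the NFS mount.
path = "/tmp/demo.dat"

with open(path, "w+b") as f:
    f.write(b"record-0record-1")
    f.flush()

    # Exclusively lock bytes 0..7 (the first record).  Over NFS this
    # lock is held at the server, so it is visible to other clients.
    fcntl.lockf(f, fcntl.LOCK_EX, 8, 0, os.SEEK_SET)

    # Read-modify-write the record while the lock is held.
    f.seek(0)
    f.write(b"RECORD-0")
    f.flush()

    # Release the lock.  On NFS, acquiring/releasing an fcntl lock
    # also synchronizes cached file data with the server, which is
    # what makes lock-protected access safe between clients.
    fcntl.lockf(f, fcntl.LOCK_UN, 8, 0, os.SEEK_SET)

print(open(path, "rb").read())
```

The practical upshot: if every writer takes an fcntl lock around its record updates, the NFS cache-consistency caveats largely don't apply to the locked regions.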
RE: How about this...
by sbergman27 on Sat 9th Feb 2008 22:15 UTC

Thanks for the reply. Unfortunately, this is a proprietary, closed-source accounting system. I didn't include all the detail in my description due to space limitations. What I actually have to run on the NFS client box is a clunky old C/ISAM<->SQL gateway called U/SQL, which I talk to from an intranet web app to generate ad hoc reports, and, eventually, to allow adding and editing inventory records. It is designed to work with the Cobol app and to do locking sufficient to avoid corruption when running on the local server. Unfortunately, it is compiled against an ancient version of glibc, and some of the symbols it needs have been deprecated... and now dropped. So it won't run on our shiny new Fedora 8 installation. Thus I need to run it on CentOS 4.6 via NFS.

i.e. the whole problem is that I have no control over the way the app works. :-(
