Setting up a High-Availability NFS Server
Submitted by Falko Timme 2006-03-26
“In this tutorial I will describe how to set up a highly available NFS server that can be used as a storage solution for other high-availability services, such as a cluster of web servers that are being load-balanced. If you have a web server cluster with two or more nodes that serve the same web site(s), then these nodes must access the same pool of data so that every node serves the same data, no matter whether the load balancer directs the user to node 1 or node n. This can be achieved with an NFS share on an NFS server that all web server nodes (the NFS clients) can access.”
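As a sketch of the client side described above, each web server node would mount the same share at the same path. The server name and paths here are hypothetical placeholders, not taken from the article:

```shell
# On each web server node (NFS client); "nfsserver" and both
# paths are placeholders for your own setup.
mkdir -p /var/www/shared
mount -t nfs nfsserver:/data/www /var/www/shared

# Or persistently, via a line in /etc/fstab:
# nfsserver:/data/www  /var/www/shared  nfs  rw,hard,intr  0  0
```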
There are a few more subtleties to implementing a highly available NFS cluster than the article mentions. For example, non-idempotent operations such as rename interact badly with the XID (duplicate request) cache, which is not replicated on disk. Failing even to mention that such issues exist can lead users who follow this advice into serious problems, and IMO that is a bit irresponsible.
If you want to access an NFS server from an OS X machine, you have to export the folder with the “insecure” option.
It is not very nice, but otherwise you won’t be able to access it; Linux will reject the connection.
Btw, from OS X: Cmd+K in Finder, then nfs://myserver/my/share.
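A minimal sketch of such an export, assuming a hypothetical share path and client subnet: the “insecure” option allows requests from non-privileged source ports (1024 and up), which the OS X NFS client uses by default.

```shell
# /etc/exports on the Linux NFS server (path and subnet are examples)
# "insecure" permits client source ports >= 1024, needed by OS X
/my/share  192.168.0.0/24(rw,sync,insecure)

# Re-export after editing:
# exportfs -ra
```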
For a complete no-brainer NAS (network-attached storage) install, take a look at OpenFiler: http://www.openfiler.com/
It’s a really nice distro that I use for a home media server, and it’s based on CentOS. Very easy to configure and manage even if you aren’t much of a Linux buff.
It’s nice to see a mostly complete article on high-availability NFS. An alternative method would be to use Solaris 11 with the ZFS file system to create highly redundant disk pools (i.e., software RAIDs).
This would add another layer of failure-proofing.
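For illustration, a redundant ZFS pool like the commenter suggests could be built roughly as follows; the pool name and device names are hypothetical:

```shell
# Create a mirrored pool from two disks, then a filesystem to share.
# "tank" and the device names are placeholders for your hardware.
zpool create tank mirror c1t0d0 c1t1d0
zfs create tank/nfs
zfs set sharenfs=on tank/nfs   # share the filesystem over NFS
```

Because the mirror survives the loss of either disk, the NFS data stays available through a single-drive failure, which is the extra layer of protection mentioned above.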