Linux export considerations
Linux® does not allow a file system to be NFS V4 exported unless it supports POSIX ACLs. For more information, see Linux ACLs and extended attributes.
For Linux nodes only, issue the exportfs -ra command to initiate a reread of the /etc/exports file.
Each exported directory requires an fsid value in the /etc/exports file, for example:

/gpfs/dir1 cluster1(rw,fsid=745)

Observe the following rules for fsid values:
- The values must be unique for each file system.
- The values must not change after reboots. The file system should be unexported before any change is made to an already assigned fsid.
- Entries in the /etc/exports file are not necessarily file system roots. You can export multiple directories within a file system. If different directories of the same file system are exported, their fsids must be different. For example, in the GPFS file system /gpfs, if two directories are exported (dir1 and dir2), the entries might look like this:

/gpfs/dir1 cluster1(rw,fsid=745)
/gpfs/dir2 cluster1(rw,fsid=746)
- If a GPFS file system is exported from multiple nodes, the fsids should be the same on all nodes.
- Define the root of the overall exported file system (also referred to as the pseudo root file system) and the pseudo file system tree. For example, to define /export as the pseudo root and export /gpfs/dir1 and /gpfs/dir2, which are not below /export, run:

mkdir -m 777 /export /export/dir1 /export/dir2
mount --bind /gpfs/dir1 /export/dir1
mount --bind /gpfs/dir2 /export/dir2

In this example, /gpfs/dir1 and /gpfs/dir2 are bound to a new name under the pseudo root using the bind option of the mount command. These bind mount points should be explicitly unmounted after GPFS is stopped and bind-mounted again after GPFS is started. To unmount, use the umount command. In the preceding example, run:

umount /export/dir1; umount /export/dir2
- Edit the /etc/exports file. There must be one line for the pseudo root with fsid=0. For the preceding example, the two exported directories (with their newly bound paths) are entered into the /etc/exports file:

/export cluster1(rw,fsid=0)
/export/dir1 cluster1(rw,fsid=745)
/export/dir2 cluster1(rw,fsid=746)
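The bind-mount lifecycle described above (unmount before GPFS is stopped, re-bind after it is started) can be sketched as a pair of shell helper functions. This is a sketch only: the paths follow the /export example, and wiring the functions into your GPFS stop and start procedures is left to the administrator.

```shell
# Sketch only: helpers for the bind-mount lifecycle described above.
# Paths follow the /export example; adapt them to your own exports.

stop_exports() {
    # Unmount the bind mounts before GPFS is stopped on this node.
    umount /export/dir1
    umount /export/dir2
}

start_exports() {
    # Re-create the bind mounts after GPFS is started and /gpfs is mounted.
    mount --bind /gpfs/dir1 /export/dir1
    mount --bind /gpfs/dir2 /export/dir2
}
```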
Large installations with hundreds of compute nodes and a few login nodes or NFS-exporting nodes require tuning of the GPFS parameters maxFilesToCache and maxStatCache with the mmchconfig command.
This tuning is required because of the GPFS token manager (file locking), which can handle approximately 1,000,000 files in memory. By default, each node holds 5000 tokens, so the token manager must track a total of 5000 * number of nodes tokens; on large configurations, this total can exceed the memory limit of the token manager.
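As a back-of-the-envelope illustration of why this tuning matters, the total token count scales linearly with the number of nodes. The sketch below uses only the default figures cited above (5000 tokens per node, a limit of approximately 1,000,000).

```shell
# Rough sizing sketch (figures from the text above): each node holds
# 5000 tokens by default, and the token manager can handle roughly
# 1,000,000 files in memory.
tokens_per_node=5000
token_manager_limit=1000000

for nodes in 100 500; do
    total=$((tokens_per_node * nodes))
    if [ "$total" -gt "$token_manager_limit" ]; then
        echo "$nodes nodes: $total tokens - exceeds the limit; tune maxFilesToCache and maxStatCache"
    else
        echo "$nodes nodes: $total tokens - within the limit"
    fi
done
```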
For information about the default values of maxFilesToCache and maxStatCache, see the description of the maxStatCache attribute in the topic mmchconfig command.
In versions of IBM Storage Scale earlier than 5.0.2, the stat cache is not effective on the Linux platform unless the Local Read-Only Cache (LROC) is configured. For more information, see the description of the maxStatCache parameter in the topic mmchconfig command.
If you are running SLES 9 SP 1, the kernel defines the sysctl variable fs.nfs.use_underlying_lock_ops, which determines whether the NFS lockd consults the file system when granting advisory byte-range locks. For distributed file systems like GPFS, this variable must be set to true (the default is false).

To view the current setting, issue:

sysctl fs.nfs.use_underlying_lock_ops

To reload the settings from /etc/sysctl.conf (for example, after adding the variable there), issue:

sysctl -p
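To keep the setting across reboots on a kernel that defines this variable, it could be recorded in /etc/sysctl.conf and reloaded with sysctl -p. The fragment below is a sketch; the value 1 corresponds to true.

```
# /etc/sysctl.conf fragment (sketch; assumes a SLES 9 SP 1 kernel
# that defines this variable; 1 = true)
fs.nfs.use_underlying_lock_ops = 1
```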
Because the fs.nfs.use_underlying_lock_ops variable is currently not available in SLES 9 SP 2 or later, when NFS-exporting a GPFS file system, ensure that your NFS server nodes are at the SP 1 level (unless this variable is made available in later service packs).
For additional considerations when NFS exporting your GPFS file system, refer to File system creation considerations.