NFS server on z/OS

Read this information to learn about various aspects of installing an NFS server on z/OS. It is recommended that you run a single dedicated NFS server, as this greatly simplifies the setup and future diagnostics.

It is highly recommended that you read the topic NFS Server on z/OS in the Planning Guide for SAP on IBM® Db2® for z/OS to learn what is necessary to set up a standard NFS server under z/OS. This Planning Guide is also available from the SAP on Db2 for z/OS Community at https://www.sap.com/community/topic/db2-for-zos.html. Then, follow this path: Summary of SAP Information → Release dependent Information → Planning Guide.

A highly available z/OS NFS server setup has additional requirements. For example, the mount handle database must be accessible from all LPARs on which the NFS server can run. The start procedure with the MODDVIPA statement that activates the NFS server VIPA must also be accessible on all those LPARs.

It must be possible to move the z/OS NFS server used by SAP high availability NFS clients between z/OS LPARs within the same SYSPLEX. The associated dynamic VIPA can only be moved within the same TCPIP subplex.

It is recommended to start the associated dynamic VIPA within the start procedure of the NFS server using the MODDVIPA utility. Add the following as a first step to your NFS start procedure, using your NFS dynamic VIPA IP:
//TCPDVP EXEC PGM=MODDVIPA,REGION=0K,TIME=1440,
// PARM='POSIX(ON) ALL31(ON)/-p TCPIP -c <dynamic_VIPA>'

PARMLIB member BPXPRMxx must contain the keyword SYSPLEX(YES) for the SAP z/OS UNIX System Services file systems to be accessible from multiple z/OS LPARs within a SYSPLEX.
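For example, the relevant specification in the BPXPRMxx member looks like the following minimal fragment (the member contains many other installation-specific parameters):

```
SYSPLEX(YES)
```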

For the NFS client running on the SAP application server, the movement of the NFS server is handled transparently via a z/OS dynamic VIPA, dynamic routing (usually OSPF), and the client's automatic reconnect capability.

NFS server security model

It is highly recommended that you use security(exports). This allows NFS clients to transparently connect or reconnect to NFS file systems even across NFS server failures and restarts on different z/OS hosts.

Transparent connect is possible because security(exports) does not require any RACF® definitions or the use of the mvslogin command. It is the default UNIX NFS server mode of operation.
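In the NFS server site attributes file, this security mode is selected with the following attribute (shown here as an isolated fragment; your attributes file contains many other site attributes):

```
security(exports)
```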

The export list for the movable NFS server is limited to global SAP directories, which do not contain sensitive data. Access is also limited to specific client IP addresses only.

For further information about setting up NFS, see z/OS Network File System Guide and Reference, chapter Customization and Operations.

Mount handle databases and the remount site attribute

To allow transparent failover of the NFS server, the mount handle databases must be shared between the z/OS hosts.

If your NFS clients use protocol version 3 to connect to the z/OS NFS Server, you must use the remount attribute. In an NFS client/server setup where protocol version 4 only is used, it is not necessary to set the remount remount because most NFS version 4 clients handle volatile file handles automatically.
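For a version 3 setup, the corresponding entry in the NFS server site attributes file is simply:

```
remount
```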

For information about how to upgrade from NFS Version 3 to NFS Version 4, see NFSv4 migration hints and tips.

The nlm site attribute

If you have NFS version 3, you must make sure that the NFS Lock Manager (NLM) is started on the z/OS NFS server. Add the nlm attribute to the NFS server attributes file.
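As with the other site attributes, this is a single keyword in the NFS server attributes file:

```
nlm
```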

NLM creates TCP connections from z/OS to the NFS clients. Because you must use a dynamic VIPA for the high availability NFS server, this dynamic VIPA must be a source VIPA. This can be configured in the z/OS TCP PROFILE(s) by using the SRCIP statement. See z/OS VIPA usage for the high availability solution for SAP. Read the section about NLM and z/OS NFS server and set up your environment accordingly.
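As a hedged sketch, the source VIPA for the NFS server's outbound NLM connections can be configured with an SRCIP block in the TCPIP profile. The job name MVSNFS of the NFS server start procedure and the VIPA address are examples only; substitute your own values:

```
SRCIP
   JOBNAME  MVSNFS  <dynamic_VIPA>
ENDSRCIP
```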

Note: If you have NFS version 4 clients only, it is not necessary to set the nlm site attribute or to use source VIPA because NFS version 4 protocol handles locking automatically.

The restimeout site attribute

restimeout(n,m) specifies a retention period and a clock time for the removal of mount points and control blocks that have been inactive longer than the specified retention period.

If n is set to 0, the z/OS NFS server does not remove any mount points or associated resources.

It is recommended that you specify restimeout(720,0) (30 days) or restimeout(0,0), which means no timeout.
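For example, to retain inactive mount points for 720 hours (30 days), with the cleanup run scheduled at midnight, specify the following in the site attributes file:

```
restimeout(720,0)
```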

Moving the ownership of z/OS NFS server exported z/OS UNIX file systems to a specific LPAR

For best performance of the NFS server exported file systems, these file systems must be defined as sysplex-aware zFS. If the NFS server is restarted for whatever reason on another LPAR by SA z/OS, zFS automatically moves the ownership of a zFS file system to the system that has the most read/write activity. When all exported zFS file systems are defined as sysplex-aware, no further action is required to move the file system ownership when the NFS server moves to another LPAR.

The configuration of a sysplex-aware zFS is described in z/OS Distributed File Service zFS Administration.
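As a hedged sketch under the assumption of a current zFS level, sysplex awareness for read/write file systems is enabled in the IOEFSPRM configuration and can be requested per file system at mount time via the RWSHARE parameter. The data set name OMVS.SAPMNT.ZFS and mount point /sapmnt are examples only:

```
/* IOEFSPRM: make read/write zFS file systems sysplex-aware */
sysplex=filesys

/* BPXPRMxx MOUNT statement requesting sysplex awareness */
MOUNT FILESYSTEM('OMVS.SAPMNT.ZFS')
      TYPE(ZFS) MODE(RDWR)
      MOUNTPOINT('/sapmnt')
      PARM('RWSHARE')
```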

NFS client root access

The SAP installation on a remote (AIX, Linux®, or Windows) application server requires root (uid=0) access to all NFS-mounted SAP file systems. The z/OS NFS server exports file must specify the NFS clients (by name or IP) and also use the suffix option <root>.

For example, to allow NFS clients using VIPAs of 10.101.4.214, 10.101.4.215, and 10.101.4.216 to have read and write root access to the SAP profile subdirectory, you enter:

/hfs/sapmnt/HA1/profile  -access=10.101.4.214<root>|\
                                 10.101.4.215<root>|\
                                 10.101.4.216<root>
Note: z/OS UNIX System Services file system path names for a mount path directory must be prefixed. The prefix /hfs in the exports file is the z/OS system default prefix. It does not indicate that the exported directory must reside in an HFS file system. In fact, it is recommended to use zFS file systems for all SAP related file systems in z/OS UNIX. If you want to change the default prefix, you can do so by setting the NFS server attribute HFSPREFIX (for details see z/OS Network File System Guide and Reference).
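For example, the prefix is set with the following site attribute in the NFS server attributes file (the value shown is the system default):

```
hfsprefix(/hfs)
```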