
HP TCP/IP Services for OpenVMS
Management



22.14.1 File Locking Service Startup and Shutdown

The file locking services can be shut down and started independently of TCP/IP Services. This is useful when you change parameters or logical names that require the service to be restarted.

The following files are provided:

  • SYS$STARTUP:TCPIP$LOCKD_STARTUP.COM allows you to start up the LOCKD component independently.
  • SYS$STARTUP:TCPIP$STATD_STARTUP.COM allows you to start up the STATD component independently.
  • SYS$STARTUP:TCPIP$LOCKD_SHUTDOWN.COM allows you to shut down the LOCKD component independently.
  • SYS$STARTUP:TCPIP$STATD_SHUTDOWN.COM allows you to shut down the STATD component independently.
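
For example, after changing a parameter or logical name that affects the LOCKD component, you might restart it by running the shutdown and startup procedures in sequence (a minimal sketch):


$ @SYS$STARTUP:TCPIP$LOCKD_SHUTDOWN.COM
$ @SYS$STARTUP:TCPIP$LOCKD_STARTUP.COM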

To preserve site-specific parameter settings and commands, create the following files. These files are not overwritten when you reinstall TCP/IP Services:

  • SYS$STARTUP:TCPIP$LOCKD_SYSTARTUP.COM can be used as a repository for site-specific definitions and parameters to be invoked when the LOCKD component is started.
  • SYS$STARTUP:TCPIP$LOCKD_SYSHUTDOWN.COM can be used as a repository for site-specific definitions and parameters to be invoked when the LOCKD component is shut down.

22.15 Improving NFS Server Performance

This section provides information to help you identify and resolve problems and tune system performance.

22.15.1 Displaying NFS Server Performance Information

The SHOW NFS_SERVER command displays information about the running NFS server. You can use the information to tune NFS server performance.

You can enter SHOW NFS_SERVER for a specific client or host if it is listed in the proxy database. The counter information can be especially useful in determining the load on your system.
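
For example, you can display the server information and counters from the TCP/IP management prompt. This is a minimal sketch; the available qualifiers and display fields are described in the command reference cited below:


$ TCPIP
TCPIP> SHOW NFS_SERVER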

For more information about the SHOW NFS_SERVER command, refer to the HP TCP/IP Services for OpenVMS Management Command Reference.

22.15.2 Increasing the Number of Active Threads

The NFS server is an asynchronous, multithreaded process, which means that multiple NFS requests can be processed concurrently; each request is handled as a separate thread. With increased server activity, client users may experience timeout conditions. Assuming the server host has the available resources (CPU, memory, and disk speed), you can improve server response by increasing the number of active threads. You do this by changing the values of the appropriate NFS server attributes, as described in Section 22.12.

The NFS server supports both TCP and UDP connections. You can control the maximum number of concurrent threads for each type of connection.

  • To set the maximum number of TCP threads, set the tcp_threads attribute.
  • To set the maximum number of UDP threads, set the udp_threads attribute.
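
The following sketch shows one way to raise both limits at run time with the sysconfig utility. It assumes that sysconfig has been defined as a foreign command pointing at the TCP/IP Services sysconfig image (the image location shown is an assumption; see Section 22.12 for the supported procedure), and the thread counts are illustrative only:


$ SYSCONFIG :== $SYS$SYSTEM:TCPIP$SYSCONFIG.EXE
$ sysconfig -r nfs_server tcp_threads=50
$ sysconfig -r nfs_server udp_threads=50

To make such changes persist across restarts, the corresponding attribute entries are typically added to the SYSCONFIGTAB.DAT database, as described in Section 22.12.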

Do not set the maximum number of UDP threads to zero. If you set the attribute to zero, the UDP protocol is disabled.

If you increase the number of active threads, you should also consider increasing the timeout period on UNIX clients. You do this with the /TIMEOUT option to the TCP/IP Services MOUNT command.

If your clients still experience timeout conditions after you increase the number of active threads and the timeout period on the client, you may need to upgrade your hardware.

22.15.3 Managing the File Name Cache

The NFS server caches the contents of directory files in addition to the content of other files. The server must access the directory files to cache them.

You can manage the performance of the NFS server using the following logical names:

  • TCPIP$CFS_NAME_CACHE_SIZE
    This logical name establishes the size of the file name cache. The cache size is represented as the number of 128-byte entries. File names up to 88 bytes long are stored in each 128-byte name cache entry. The cache is maintained in least recently used (LRU) order, so that the most recently referenced entries are retained. Certain directory modification operations remove all entries for the directory from the cache.
    The file name cache reduces the number of QIO operations required by the NFS server to look up files by name. The cache increases the virtual memory requirements of the NFS server by 128 bytes times the number of entries configured by the logical name.

  • TCPIP$CFS_ODS_CACHE_SIZE
    This logical name establishes the size of the ODS cache, which retains information about sequential files that will require record format conversion.
    The cache size is expressed as the number of 64-byte entries. Entries are maintained in LRU order, so that the most recently referenced entries are retained.
    In addition to the 64 bytes per entry, the record conversion information created when the file is first accessed is also retained. This increases the virtual memory required by the NFS server but greatly improves performance for files that require format conversion. The ODS cache is used on internal file access and deaccess operations and when attribute information is read.
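
For example, the following commands define both cache sizes system-wide; the values are illustrative only, and the logical names must be defined before the NFS server starts (for instance, from a site-specific startup procedure):


$ DEFINE /SYSTEM TCPIP$CFS_NAME_CACHE_SIZE 2048
$ DEFINE /SYSTEM TCPIP$CFS_ODS_CACHE_SIZE 1024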

You can also use the NFS sysconfig attribute ovms_xqp_plus_enabled to modify the behavior of the NFS server to take advantage of the directory and name caches. This attribute is specified as a bit mask. The default value is 0 (OFF).

The following list describes the mask values:

  • 1 (open directory on LOOKUP)
    When an NFS LOOKUP operation is performed on a directory, the directory is accessed. This allows subsequent operations to use the directory cache. If the name cache is enabled, entries will be posted to it.
  • 2 (open directory on READDIR)
    When an NFS READDIR operation is received, the directory is accessed. This allows subsequent operations to use the directory cache.
  • 4 (open file on GETATTR)
    When the attributes of a file subject to record format conversion are read and the MODUS_OPERANDI mask value 512 is enabled, the file's true (converted) size must be returned. If this option is enabled, the file access used to perform the conversion is cached for up to the number of seconds specified by the subsystem attribute vnode_age. If the ODS cache is also enabled, the size and conversion information is retained in the ODS cache until the file is deleted or the entry is replaced by another, subject to the LRU behavior.

Obtain a combination of options by adding the desired values. For example, enter 7 to enable all three.
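
For example, to enable all three behaviors at run time, you might set the attribute to 7; this sketch assumes the same sysconfig foreign-command definition shown in Section 22.15.2:


$ sysconfig -r nfs_server ovms_xqp_plus_enabled=7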

When directory caching is enabled, the system must be configured to handle the increased directory cache requirements. The following SYSGEN parameters may need to be increased, depending on the maximum number of files that the NFS server may access at any given time. This maximum is determined by the FILLM quota of the TCPIP$NFS account and the SYSGEN parameter CHANNELCNT.

Use the MODPARAMS.DAT file and AUTOGEN to make the changes. Define the following parameters:

  • ACP_DINDXCACHE
    Increase this value by the number of NFS server channels.
  • ACP_DIRCACHE
    Increase this value by four times ACP_DINDXCACHE.

To calculate the amount by which to increase PAGEDYN, add the increases to these parameters and multiply the sum by 512.
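
For example, if you expect the NFS server to have roughly 200 files open at a time, you might add entries like the following to MODPARAMS.DAT and then run AUTOGEN. All figures are illustrative; the PAGEDYN increase is (200 + 800) * 512 = 512000 bytes:


! Illustrative MODPARAMS.DAT additions for NFS directory caching
ADD_ACP_DINDXCACHE = 200     ! approximately the number of NFS server channels
ADD_ACP_DIRCACHE = 800       ! four times the ACP_DINDXCACHE increase
ADD_PAGEDYN = 512000         ! (200 + 800) * 512 bytes

$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS FEEDBACK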

22.15.4 OpenVMS SYSGEN Parameters That Affect Performance

The following OpenVMS SYSGEN parameters affect NFS server performance (a sample tuning sketch follows the list):

  • CHANNELCNT
    The CHANNELCNT parameter sets the maximum number of channels that a process can use. Ensure that CHANNELCNT is set large enough to handle the total number of files accessed by all clients.

    Note

    The NFS server process is also limited by the FILLM of the TCPIP$NFS account's SYSUAF record. The effective value is the lower of the FILLM and CHANNELCNT values.
  • ACP parameters
    The NFS server issues a large number of ACP QIO calls through CFS. Altering certain ACP parameters could yield better performance. Directory searching and file attribute management constitute the majority of the ACP operations. Therefore, HP recommends that you monitor and adjust the following parameters as necessary:
    • ACP_HDRCACHE
    • ACP_MAPCACHE
    • ACP_DIRCACHE
    • ACP_FIDCACHE
    • ACP_DATACACHE

    To monitor these parameters, use the MONITOR utility (for example, MONITOR FILE_SYSTEM_CACHE) and the AUTOGEN FEEDBACK command. For more information, refer to the HP OpenVMS System Management Utilities Reference Manual: M-Z.
  • LOCK parameters
    The various lock manager parameters may need some alteration because CFS uses the lock manager extensively. A lock is created for each file system, each referenced file, and each data buffer that is loaded into the CFS cache.

  • VIRTUALPAGECNT
    Maximum virtual size of a process in pages. The NFS server requires larger-than-normal amounts of virtual address space to accommodate structures and buffer space.
  • WSMAX
    Maximum physical size of a process in pages. The larger the working set, the more pages of virtual memory that can remain resident. Larger values reduce page faults and increase the server's performance.
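
The following sketch illustrates the kind of tuning described in this list. All values are illustrative and must be sized for your client load; the account name is the one cited in the CHANNELCNT note above:


! Illustrative MODPARAMS.DAT entries; run AUTOGEN after editing
MIN_CHANNELCNT = 1024
MIN_VIRTUALPAGECNT = 400000
MIN_WSMAX = 65536

$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY TCPIP$NFS /FILLM=1024    ! keep FILLM consistent with CHANNELCNT
UAF> EXIT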


Chapter 23
Configuring and Managing the NFS Client

The Network File System (NFS) client software enables client users to access file systems made available by an NFS server. These files and directories physically reside on the remote (server) host but appear to the client as if they were on the local system. For example, any files accessed by an OpenVMS client --- even a UNIX file --- appear to be OpenVMS files and have typical OpenVMS file names.

This chapter reviews key concepts and describes how to configure and manage the NFS client.

For information about the NFS server, see Chapter 22.

23.1 Key Concepts

Because the NFS software was originally developed on and used for UNIX machines, NFS implementations use UNIX file system conventions and characteristics. This means that the rules and conventions that apply to UNIX file types, file names, file ownership, and user identification also apply to NFS.

Because the TCP/IP Services NFS client runs on OpenVMS, the client must accommodate the differences between the two file systems, for example, by converting file names and mapping file ownership information. You must understand these differences to configure NFS properly and to successfully mount file systems from an NFS server.

The following sections serve as a review only. If you are not familiar with these topics, see the Compaq TCP/IP Services for OpenVMS Concepts and Planning guide for a more detailed discussion of the NFS implementation available with the TCP/IP Services software.

23.1.1 NFS Clients and Servers

NFS is a client/server environment that allows computers to share disk space and users to work with their files from multiple computers without copying them to the local system. Computers that make files available to remote users are NFS servers. Computers with local users accessing and creating remote files are NFS clients. A computer can be an NFS server or an NFS client, or both a server and a client.

Attaching a remote directory to the local file system is called mounting a directory. A directory cannot be mounted unless it is first exported by an NFS server. The NFS client identifies each file system by the name of its mount point on the server. The mount point is the name of the device or directory at the top of the file system hierarchy. An NFS device is always named DNFSn.

All files below the mount point are available to client users as if they reside on the local system. The NFS client requests file operations by contacting a remote NFS server. The server then performs the requested operation. The NFS client automatically converts all mounted directories and file structures, contents, and names to the format required by OpenVMS. For example, a UNIX file named /usr/webster/.login would appear to an OpenVMS client as DNFS1:[USR.WEBSTER].LOGIN;1.
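
For example, a command like the following (the host name and exported path are illustrative) mounts a file system exported by a remote server onto the local DNFS1: device:


$ TCPIP
TCPIP> MOUNT DNFS1: /HOST="webster-server" /PATH="/usr/webster"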

For more information on how NFS converts file names, see Appendix C.

23.1.2 Storing File Attributes

The OpenVMS operating system supports multiple file types and record formats. In contrast, NFS and UNIX systems support only byte-stream files, seen to the OpenVMS client as sequential STREAM_LF files.

This means the client must use special record handling to store and access non-STREAM_LF files. The OpenVMS NFS client accomplishes this with attribute description files (ADFs). These are special companion files the client uses to hold the attribute information that would otherwise be lost in the translation to STREAM_LF format. For example, a SET FILE/NOBACKUP command causes the client to create an ADF, because NFS has no concept of this OpenVMS attribute.

23.1.2.1 Using Default ADFs

The client provides default ADFs for files with the following extensions: .EXE, .HLB, .MLB, .OBJ, .OLB, .STB, and .TLB. (The client does not provide ADFs for files with the .TXT and .C extensions, because these are STREAM_LF.) The client maintains these ADFs on the server.

For example, SYS$SYSTEM:TCPIP$EXE.ADF is the default ADF for all .EXE type files. When you create .EXE files (or if they exist on the server), they are defined with the record attributes from the single default ADF file. The client refers only to the record attributes and file characteristics fields in the default ADF.

23.1.2.2 How the Client Uses ADFs

By default, the client uses ADFs if they exist on the server. The client updates existing ADFs or creates them as needed for new files. If you create a non-STREAM_LF OpenVMS file or a file with access control lists (ACLs) associated with it on the NFS server, the NFS client checks to see whether a default ADF can be applied. If not, the client creates a companion ADF to hold the attributes.

The client hides these companion files from the user's view. If a user renames or deletes the original file, the client automatically renames or deletes the companion file. However, if a user renames or deletes a file on the server side, the user must also rename the companion file; otherwise, file attributes are lost.

You can modify this behavior with the /NOADF qualifier to the MOUNT command. The /NOADF qualifier tells the client to handle all files as STREAM_LF unless a default ADF matches. This mode is only appropriate for read-only file systems because the client cannot adequately handle application-created files when /NOADF is operational.
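
For example, a read-only file system might be mounted with ADF handling disabled as follows; the host name and path are illustrative:


TCPIP> MOUNT DNFS3: /HOST="FOO.BAR.EREWHON" /PATH="/usr/doc" /NOADF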

23.1.2.3 Creating Customized Default ADFs

You can create customized default ADFs for special applications. To do so:

  1. On the client, create a special application file that results in creating an ADF on the server. Suppose that application file is called TEST.GAF.
  2. On the server, check the listing for the newly created file. For example:


    > ls -a
    .
    ..
    .$ADF$test.gaf;1
    test.gaf
    
    

    Note that the ADF (.$ADF$test.gaf;1) was created with the data file (TEST.GAF).
  3. On the server, copy the ADF file to a newly created default ADF file on the client. For example:


    > cp .\$ADF\$test.gaf\;1 gaf.adf
    
    

    Note that the backslashes (\) are required to escape the dollar sign ($) and semicolon (;) characters, which are nonstandard in UNIX file names.
  4. On the client, copy the new default ADF file to the SYS$SYSTEM directory. For example:


    $ COPY GAF.ADF SYS$COMMON:[SYSEXE]TCPIP$GAF.ADF
    
  5. Dismount all the NFS volumes and mount them again. This starts another NFS ancillary control process (ACP) so that the newly copied default ADF file can take effect.

23.1.3 NFS Client Support for Extended File Specifications

The NFS client supports the extended character set available in the OpenVMS operating system. The NFS client does not support NUL (ASCII 0). The length of a file name is limited to 232 characters, including the file name, dot, file extension, semicolon, and version number.

If you do not include the /STRUCTURE qualifier on the MOUNT command, the NFS client assumes that the file system structure being accessed is an ODS-2 volume. You can change this default by defining the following logical name:


TCPIP$NFS_CLIENT_MOUNT_DEFAULT_STRUCTURE_LEVEL

You can use this logical name to ensure that all NFS disks on the system have ODS-5 support enabled. Set the value of the logical to 2 for ODS-2 (the default), or 5 for ODS-5. To override this logical, include the /STRUCTURE qualifier to the TCP/IP management command MOUNT.
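
For example, to make ODS-5 the default structure level for all NFS client mounts on the system, you might define the logical name system-wide (a minimal sketch):


$ DEFINE /SYSTEM TCPIP$NFS_CLIENT_MOUNT_DEFAULT_STRUCTURE_LEVEL 5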

Extended file specifications are provided by the ODS-5 file system. To mount an ODS-5 volume, add the /STRUCTURE=5 qualifier to the TCP/IP management command MOUNT. For example:


$ TCPIP
TCPIP> MOUNT DNFS0: BOOK1 BEATRICE -
_TCPIP> /PATH="/INFERNO" /HOST="FOO.BAR.EREWHON" -
_TCPIP> /STRUCTURE=5 /SYSTEM

The /STRUCTURE qualifier accepts the following values:

  • 5 to indicate ODS-5
  • 2 to indicate ODS-2 (the default)

For more information about the MOUNT/STRUCTURE command, display the online help by entering the following command:


TCPIP> HELP MOUNT/STRUCTURE

Note

When you display device information using the DCL command SHOW DEVICE/FULL, the NFS disk is incorrectly shown as being accessed by DFS. For example:


$ SHOW DEVICE/FULL
...
Disk DNFS1:, device type Foreign disk type 7, is online, mounted,
file-oriented device, shareable, accessed via DFS
...

23.1.4 How the NFS Client Authenticates Users

Both the NFS server and NFS client use the proxy database to authenticate users. The proxy database is a collection of entries used to register user identities. To access file systems on the remote server, local users must have valid accounts on the remote server system.

The proxy entries map each user's OpenVMS identity to a corresponding NFS identity on the server host. When a user initiates a file access request, NFS checks the proxy database before granting or denying access to the file.

The proxy database is an index file called TCPIP$PROXY.DAT. If you use the configuration procedure to configure NFS, this empty file is created for you. You populate this file by adding entries for each NFS user. See Section 23.3 for instructions on how to add entries to the proxy database.
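
For example, a command like the following adds a proxy entry that maps the OpenVMS account SMITH to UID 1111 and GID 22 on the server host FOO.BAR.EREWHON; the user name, identifiers, and host name are illustrative:


$ TCPIP
TCPIP> ADD PROXY SMITH /NFS=(OUTGOING,INCOMING) /UID=1111 /GID=22 /HOST="FOO.BAR.EREWHON"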

Note

The configuration procedure for the NFS server creates a nonprivileged account with the user name TCPIP$NOBODY. You might want to add a proxy record for the default user (-2/-2) that maps to the TCPIP$NOBODY account.

23.1.5 How the NFS Client Maps User Identities

Both OpenVMS and UNIX based systems use identification codes as a general method of resource protection and access control. Just as OpenVMS employs user names and UICs for identification, UNIX identifies users with a user name and a user identifier (UID) and group identifier (GID) pair. Both UIDs and GIDs are used to identify a user on a system.

The proxy database contains entries for each user wanting to access files on a server host. Each entry contains the user's local OpenVMS account name, the UID/GID pair that identifies the user's account on the server system, and the name of the server host. This file is loaded into dynamic memory when the NFS client starts. Whenever you modify the UID/GID to UIC mapping, you must restart the NFS client software by dismounting and remounting all the client devices. (Proxy mapping always occurs even when operating in OpenVMS to OpenVMS mode.)

The only permission required by the UNIX file system for deleting a file is write access to the last directory in the path specification.

You can print a file that is located on a DNFSn: device. However, the print symbiont, which runs as user SYSTEM, opens the file only if it is world readable or if there is an entry in the proxy database that allows read access to user SYSTEM.

23.1.6 NFS Client Default User

You can associate a client device with a user by designating the user with the /UID and /GID qualifiers to the MOUNT command. If you do not specify a user with the /UID and /GID qualifiers, NFS uses the default user -2/-2. If the local user or the NFS client has no proxy for the host serving a DNFS device, all operations performed by that user on that device are seen as coming from the default user (-2/-2).
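
For example, the following command (the device, host, path, and identifiers are illustrative) mounts a file system so that operations performed without a matching proxy are attributed to UID 1004 and GID 100 rather than to the default user:


TCPIP> MOUNT DNFS2: /HOST="FOO.BAR.EREWHON" /PATH="/export/data" /UID=1004 /GID=100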

To provide universal access to world-readable files, you can use the default UID instead of creating a proxy entry for every NFS client user.

HP strongly recommends that, for any other purposes, you provide a proxy with a unique UID for every client user. Otherwise, client users may see unpredictable and confusing results when they try to create files.

23.1.7 How the NFS Client Maps UNIX Permissions to OpenVMS Protections

Both OpenVMS and UNIX based systems use a protection mask that defines categories assigned to a file and the type of access granted to each category. The NFS server file protection categories, like those on UNIX systems, are user, group, and other, each having read (r), write (w), or execute (x) access. The OpenVMS categories are SYSTEM, OWNER, GROUP, and WORLD. Each category can have up to four types of access: read (R), write (W), execute (E), and delete (D). The NFS client handles file protection mapping from server to client.

OpenVMS delete access does not directly translate to a UNIX protection category. A UNIX user can delete a file as long as he or she has write access to the parent directory. The user can see whether or not he or she has permissions to delete a file by looking at the protections on the parent directory. This design corresponds to OpenVMS where the absence of write access to the parent directory prevents users from deleting files, even when protections on the file itself appear to allow delete access. For this reason, the NFS client always displays the protection mask of remote UNIX files as permitting delete access for all categories of users.

Because a UNIX file system does not have a SYSTEM protection mask (the superuser has all permissions for all files), the NFS client displays the SYSTEM protection mask as identical to the OWNER mask.

