
HP OpenVMS Cluster Systems



10.12 Cluster Performance

Resolving performance issues often requires monitoring and tuning both individual applications and the system as a whole. Tuning involves collecting and analyzing data on system and network activity to improve performance. A number of tools can help you collect information about an active system and its applications.

10.12.1 Using the SHOW Commands

The following table briefly describes the SHOW DEVICE commands and qualifiers available with the OpenVMS operating system.

Command Purpose
SHOW DEVICE/FULL Shows the complete status of a device, including:
  • Whether the disk is available to the cluster
  • Whether the disk is MSCP served or dual ported
  • The name and type of the primary and secondary hosts
  • Whether the disk is mounted on the system where you enter the command
  • The systems in the cluster on which the disk is mounted
SHOW DEVICE/FILES Displays the names of all files open on a volume, along with the associated process name and process identifier (PID). Note that:
  • The command lists only files opened on the node where you enter it.
  • To find all open files on a disk, enter the SHOW DEVICE/FILES command on each node that has the disk mounted, or use SYSMAN to run the command on those nodes.
SHOW DEVICE/SERVED Displays information about disks served by the MSCP server on the node where you enter the command. Use the following qualifiers to customize the information:
  • /HOST displays the names of processors that have devices online through the local MSCP server, and the number of devices.
  • /RESOURCE displays the resources available to the MSCP server, total amount of nonpaged dynamic memory available for I/O buffers, and number of I/O request packets.
  • /COUNT displays the number of each size and type of I/O operation the MSCP server has performed since it was started.
  • /ALL displays all of the information listed for the SHOW DEVICE/SERVED command.
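
For example, the following commands display the full status of a device, the files open on a volume, and the disks served by the local MSCP server. The device name $1$DGA100 is a hypothetical example:

$ SHOW DEVICE/FULL $1$DGA100:    ! complete device status
$ SHOW DEVICE/FILES $1$DGA100:   ! open files with process names and PIDs
$ SHOW DEVICE/SERVED/ALL         ! all MSCP serving information on this node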

The SHOW CLUSTER command displays a variety of information about the OpenVMS Cluster system. The display output provides a view of the cluster as seen from a single node, rather than a complete view of the cluster.

Reference: The HP OpenVMS System Management Utilities Reference Manual contains complete information about all the SHOW commands and the Show Cluster utility.

10.12.2 Using the Monitor Utility

The following table describes how to use the OpenVMS Monitor utility to locate disk I/O bottlenecks. I/O bottlenecks can cause the OpenVMS Cluster system to appear to hang.
Step Action
1 To determine which clusterwide disks may be problem disks:
  1. Create a node-by-node summary of disk I/O using the MONITOR DISK command with the /NODE qualifier (see the example after this table).
  2. Adjust the "row sum" column for MSCP served disks, noting that:
    • The I/O rate on the serving node includes local requests and all requests from other nodes.
    • The I/O rate on each other node includes only the requests generated from that node.
    • As a result, requests from remote nodes are counted twice in the row sum column.
  3. Note disks whose row sum exceeds 8 I/Os per second.
  4. Eliminate from the list of cluster problem disks the disks that are:
  4. Eliminate from the list of cluster problem disks the disks that are:
    • Not shared
    • Dedicated to an application
    • In the process of being backed up
2 For each node, determine the impact of potential problem disks:
  • If a disproportionate amount of a disk's I/O comes from a particular node, the problem is most likely specific to the node.
  • If a disk's I/O is spread evenly over the cluster, the problem may be clusterwide overuse.
  • If the average queue length for a disk on a given node is less than 0.2, then the disk is having little impact on the node.
3 For each problem disk, determine whether:
  • Page and swap files from any node are on the disk.
  • Commonly used programs or data files are on the disk (use the SHOW DEVICE/FILES command).
  • Users with default directories on the disk are causing the problem.
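
A minimal sketch, assuming hypothetical node names NODE1 and NODE2: MONITOR DISK reports the I/O operation rate by default, and the /ITEM=QUEUE_LENGTH qualifier reports the average request queue length used in step 2:

$ MONITOR DISK/NODE=(NODE1,NODE2)/SUMMARY=DISKIO.SUM  ! node-by-node I/O rates
$ MONITOR DISK/ITEM=QUEUE_LENGTH/NODE=(NODE1,NODE2)   ! average disk queue lengths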

10.12.3 Using HP Availability Manager

HP Availability Manager is a real-time monitoring, diagnostic, and correction tool used by system managers to improve the availability and throughput of a system. Availability Manager runs on OpenVMS Integrity server and Alpha systems and on Windows nodes.

These products, which are included with the operating system, help system managers correct system resource utilization problems for CPU usage, low memory, lock contention, hung or runaway processes, I/O, disks, page files, and swap files.

Availability Manager enables you to monitor one or more OpenVMS nodes on an extended LAN from either an OpenVMS Alpha or a Windows node. Availability Manager collects system and process data from multiple OpenVMS nodes simultaneously. It analyzes the data and displays the output using a native Java GUI.

DECamds collects and analyzes data from multiple nodes (VAX and Alpha) simultaneously, directing all output to a centralized DECwindows display. DECamds helps you observe and troubleshoot availability problems, as follows:

  • Alerts users to resource availability problems, suggests paths for further investigation, and recommends actions to improve availability.
  • Centralizes management of remote nodes within an extended LAN.
  • Allows real-time intervention, including adjustment of node and process parameters, even when remote nodes are hung.
  • Adjusts to site-specific requirements through a wide range of customization options.

Reference: For more information about Availability Manager, see the HP OpenVMS Availability Manager User's Guide, which is available at:

http://h71000.www7.hp.com/openvms/products/availman/index.html

For more information about DECamds, see the DECamds User's Guide.

10.12.4 Monitoring LAN Activity

It is important to monitor LAN activity on a regular basis. Using SCACP, you can monitor LAN activity as well as set and show default ports, start and stop LAN devices, and assign priority values to channels.

Reference: For more information about SCACP, see the HP OpenVMS System Management Utilities Reference Manual: A--L.
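
As a sketch of an interactive session (the exact command set and qualifiers are described in the SCACP documentation), you might check cluster LAN activity as follows:

$ RUN SYS$SYSTEM:SCACP
SCACP> SHOW LAN_DEVICE       ! LAN devices used for cluster communication
SCACP> SHOW CHANNEL          ! channel state between this node and other nodes
SCACP> EXIT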

Using NCP commands like the following, you can set up a convenient monitoring procedure to report activity for each 12-hour period. Note that DECnet event logging for event 0.2 (automatic line counters) must be enabled.

Reference: For detailed information on DECnet for OpenVMS event logging, refer to the DECnet for OpenVMS Network Management Utilities manual.

In these sample commands, BNA-0 is the line ID of the Ethernet line.


NCP> DEFINE LINE BNA-0 COUNTER TIMER 43200
NCP> SET LINE BNA-0 COUNTER TIMER 43200

At every timer interval (in this case, 12 hours), DECnet will create an event that sends counter data to the DECnet event log. If you experience a performance degradation in your cluster, check the event log for increases in counter values that exceed normal variations for your cluster. If all computers show the same increase, there may be a general problem with your Ethernet configuration. If, on the other hand, only one computer shows a deviation from usual values, there is probably a problem with that computer or with its Ethernet interface device.
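
Between logged events, you can also sample the line counters on demand; for example:

NCP> SHOW LINE BNA-0 COUNTERS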

The following layered products can be used in conjunction with one of HP's LAN bridges to monitor the LAN traffic levels: RBMS, DECelms, DECmcc, and LAN Traffic Monitor (LTM).

Note that some of these products are no longer supported by HP.

10.12.5 LAN or PEDRIVER Fast Path Settings

Keep the LAN device and PEDRIVER, on which SCS communication is enabled, on the same CPU by executing the following commands:


$ SET DEVICE EWA/PREFERRED_CPUS=1
$ SET DEVICE PEA0/PREFERRED_CPUS=1

If a node uses IP as the interconnect for cluster communication, ensure that the LAN, BG, and PE devices are assigned to the same CPU. If that CPU is saturated, off-load the devices onto a different CPU.


Appendix A
Cluster System Parameters

For systems to boot properly into a cluster, certain system parameters must be set on each cluster computer. Table A-1 lists system parameters used in cluster configurations.

A.1 Values

Some system parameters are in units of pagelets, whereas others are in pages. AUTOGEN determines the hardware page size and records it in the PARAMS.DAT file.

Caution: When reviewing AUTOGEN recommended values or when setting system parameters with SYSGEN, note carefully which units are required for each parameter.

Table A-1 describes system parameters specific to OpenVMS Cluster configurations that may require adjustment in certain cases. Table A-2 describes OpenVMS Cluster specific system parameters that are reserved for OpenVMS use.

Reference: System parameters, including cluster and volume shadowing system parameters, are documented in the HP OpenVMS System Management Utilities Reference Manual.
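
As a sketch of the mechanics (the parameter name and value here are illustrative only), you can adjust a cluster parameter either by adding it to MODPARAMS.DAT and running AUTOGEN, which is the recommended method, or interactively with SYSGEN:

$ ! Recommended: add a line such as  EXPECTED_VOTES = 3  to
$ ! SYS$SYSTEM:MODPARAMS.DAT, then run AUTOGEN:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS FEEDBACK
$
$ ! Interactive alternative (takes effect at the next boot):
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET EXPECTED_VOTES 3
SYSGEN> WRITE CURRENT
SYSGEN> EXIT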

Table A-1 Adjustable Cluster System Parameters
Parameter Description
ALLOCLASS Specifies a numeric value from 0 to 255 to be assigned as the disk allocation class for the computer. The default value is 0.
CHECK_CLUSTER Serves as a VAXCLUSTER parameter sanity check. When CHECK_CLUSTER is set to 1, SYSBOOT outputs a warning message and forces a conversational boot if it detects the VAXCLUSTER parameter is set to 0.
CLUSTER_CREDITS Specifies the number of per-connection buffers a node allocates to receiving VMS$VAXcluster communications.

If the SHOW CLUSTER command displays a high number of credit waits for the VMS$VAXcluster connection, you might consider increasing the value of CLUSTER_CREDITS on the other node. However, in large cluster configurations, setting this value unnecessarily high will consume a large quantity of nonpaged pool. Each receive buffer is at least SCSMAXMSG bytes in size but might be substantially larger depending on the underlying transport.

It is not required that all nodes in the cluster have the same value for CLUSTER_CREDITS. For small or memory-constrained systems, the default value of CLUSTER_CREDITS should be adequate.

CWCREPRC_ENABLE Controls whether an unprivileged user can create a process on another OpenVMS Cluster node. The default value of 1 allows an unprivileged user to create a detached process with the same UIC on another node. A value of 0 requires that a user have DETACH or CMKRNL privilege to create a process on another node.
DISK_QUORUM The physical device name, in ASCII, of an optional quorum disk. ASCII spaces indicate that no quorum disk is being used. DISK_QUORUM must be defined on one or more cluster computers capable of having a direct (not MSCP served) connection to the disk. These computers are called quorum disk watchers. The remaining computers (computers with a blank value for DISK_QUORUM) recognize the name defined by the first watcher computer with which they communicate.
DR_UNIT_BASE Specifies the base value from which unit numbers for DR devices (StorageWorks RAID Array 200 Family logical RAID drives) are counted. DR_UNIT_BASE provides a way for unique RAID device numbers to be generated. DR devices are numbered starting with the value of DR_UNIT_BASE and then counting from there. For example, setting DR_UNIT_BASE to 10 will produce device names such as $1$DRA10, $1$DRA11, and so on. Setting DR_UNIT_BASE to appropriate, nonoverlapping values on all cluster members that share the same (nonzero) allocation class will ensure that no two RAID devices are given the same name.
EXPECTED_VOTES Specifies a setting that is used to derive the initial quorum value. This setting is the sum of all VOTES held by potential cluster members.

By default, the value is 1. The connection manager sets a quorum value to a number that will prevent cluster partitioning (see Section 2.3). To calculate quorum, the system uses the following formula:

estimated quorum = (EXPECTED_VOTES + 2)/2, rounded down

For example, if EXPECTED_VOTES is 3, the initial quorum is (3 + 2)/2 = 2.

LAN_FLAGS (Integrity servers and Alpha) LAN_FLAGS is a bit mask used to enable features in the local area network port drivers and support code. The default value for LAN_FLAGS is 0.

The bit definitions are as follows:

Bit Description
0 The default value of zero indicates that ATM devices run in the SONET mode. If set to 1, this bit indicates that ATM devices run in the SDH mode.
1 If set, this bit enables a subset of the ATM trace and debug messages in the LAN port drivers and support code.
2 If set, this bit enables all ATM trace and debug messages in the LAN port drivers and support code.
3 If set, this bit runs UNI 3.0 over all ATM adapters. Auto-sensing of the ATM UNI version is enabled if both bit 3 and bit 4 are off (0).
4 If set, this bit runs UNI 3.1 over all ATM adapters. Auto-sensing of the ATM UNI version is enabled if both bit 3 and bit 4 are off (0).
5 If set, this bit disables auto-negotiation over all Gigabit Ethernet Adapters.
6 If set, this bit enables the use of jumbo frames over all Gigabit Ethernet Adapters.
7 Reserved.
8 If set, this bit disables the use of flow control over all LAN adapters that support flow control.
9 Reserved.
10 Reserved.
11 If set, this bit disables the logging of error log entries by LAN drivers.
12 If set, this bit enables a fast timeout on transmit requests, usually between 1 and 1.2 seconds instead of 3 to 4 seconds for most LAN drivers.
13 If set, transmits that are given to the LAN device and never completed by the device (a transmit timeout condition) are completed with error status (SS$_ABORT) rather than success status (SS$_NORMAL).
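
For example, to enable jumbo frames on all Gigabit Ethernet adapters (bit 6), you could add the following line to SYS$SYSTEM:MODPARAMS.DAT and run AUTOGEN; 64 is the decimal value of bit 6:

LAN_FLAGS = 64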
LOCKDIRWT Lock manager directory system weight. Determines the portion of lock manager directory to be handled by this system. The default value is adequate for most systems.
LOCKRMWT Lock manager remaster weight. This parameter, in conjunction with the lock remaster weight from a remote node, determines the level of activity necessary for remastering a lock tree.
MC_SERVICES_P0 (dynamic) Controls whether other MEMORY CHANNEL nodes in the cluster continue to run if this node bugchecks or shuts down.

A value of 1 causes other nodes in the MEMORY CHANNEL cluster to fail with bugcheck code MC_FORCED_CRASH if this node bugchecks or shuts down.

The default value is 0. A setting of 1 is intended only for debugging purposes; the parameter should otherwise be left at its default state.

MC_SERVICES_P2 (static) Specifies whether to load the PMDRIVER (PMA0) MEMORY CHANNEL cluster port driver. PMDRIVER is a new driver that serves as the MEMORY CHANNEL cluster port driver. It works together with MCDRIVER (the MEMORY CHANNEL device driver and device interface) to provide MEMORY CHANNEL clustering. If PMDRIVER is not loaded, cluster connections will not be made over the MEMORY CHANNEL interconnect.

The default for MC_SERVICES_P2 is 1. This default value causes PMDRIVER to be loaded when you boot the system.

HP recommends that this value not be changed. This parameter value must be the same on all nodes connected by MEMORY CHANNEL.

MC_SERVICES_P3 (dynamic) Specifies the maximum number of tags supported. The maximum value is 2048 and the minimum value is 100.

The default value is 800. HP recommends that this value not be changed.

This parameter value must be the same on all nodes connected by MEMORY CHANNEL.

MC_SERVICES_P4 (static) Specifies the maximum number of regions supported. The maximum value is 4096 and the minimum value is 100.

The default value is 200. HP recommends that this value not be changed.

This parameter value must be the same on all nodes connected by MEMORY CHANNEL.

MC_SERVICES_P6 (static) Specifies the MEMORY CHANNEL message size, that is, the size of the body of an entry in a free queue or a work queue. The maximum value is 65536 and the minimum value is 544. The default value is 992, which is suitable in all cases except systems with highly constrained memory.

For such systems, you can reduce the memory consumption of MEMORY CHANNEL by slightly reducing the default value of 992. This value must always be equal to or greater than the result of the following calculation:

  1. Select the larger of SCS_MAXMSG and SCS_MAXDG.
  2. Round that value up to the next quadword boundary.

This parameter value must be the same on all nodes connected by MEMORY CHANNEL.

MC_SERVICES_P7 (dynamic) Specifies whether to suppress or display messages about cluster activities on this node. Can be set to a value of 0, 1, or 2. The meanings of these values are:
Value Meaning
0 Nonverbose mode---no informational messages will appear on the console or in the error log.
1 Verbose mode---informational messages from both MCDRIVER and PMDRIVER will appear on the console and in the error log.
2 Same as verbose mode plus PMDRIVER stalling and recovery messages.

The default value is 0. HP recommends that this value not be changed except for debugging MEMORY CHANNEL problems or adjusting the MC_SERVICES_P9 parameter.

MC_SERVICES_P9 (static) Specifies the number of initial entries in a single channel's free queue. The maximum value is 2048 and the minimum value is 10.

Note that MC_SERVICES_P9 is not a dynamic parameter; you must reboot the system after each change in order for the change to take effect.

The default value is 150. HP recommends that this value not be changed.

This parameter value must be the same on all nodes connected by MEMORY CHANNEL.

MPDEV_AFB_INTVL (disks only) Specifies the automatic failback interval in seconds. The automatic failback interval is the minimum number of seconds that must elapse before the system will attempt another failback from an MSCP path to a direct path on the same device.

MPDEV_POLLER must be set to ON to enable automatic failback. You can disable automatic failback without disabling the poller by setting MPDEV_AFB_INTVL to 0. The default is 300 seconds.

MPDEV_D1 (disks only) Reserved for use by the operating system.
MPDEV_D2 (disks only) Reserved for use by the operating system.
MPDEV_D3 (disks only) Reserved for use by the operating system.
MPDEV_D4 (disks only) Reserved for use by the operating system.
MPDEV_ENABLE Enables the formation of multipath sets when set to ON (1). When set to OFF (0), the formation of additional multipath sets and the addition of new paths to existing multipath sets is disabled. However, existing multipath sets remain in effect. The default is ON.

MPDEV_REMOTE and MPDEV_AFB_INTVL have no effect when MPDEV_ENABLE is set to OFF.

MPDEV_LCRETRIES (disks only) Controls the number of times the system retries the direct paths to the controller that the logical unit is online to, before moving on to direct paths to the other controller, or to an MSCP served path to the device. The valid range for retries is 1 through 256. The default is 1.
MPDEV_POLLER Enables polling of the paths to multipath set members when set to ON (1). Polling allows early detection of errors on inactive paths. If a path becomes unavailable or returns to service, the system manager is notified with an OPCOM message. When set to OFF (0), multipath polling is disabled. The default is ON. Note that this parameter must be set to ON to use the automatic failback feature.
MPDEV_REMOTE (disks only) Enables MSCP served disks to become members of a multipath set when set to ON (1). When set to OFF (0), only local paths to a SCSI or Fibre Channel device are used in the formation of additional multipath sets. MPDEV_REMOTE is enabled by default. However, setting this parameter to OFF has no effect on existing multipath sets that have remote paths.

To use multipath failover to a served path, MPDEV_REMOTE must be enabled on all systems that have direct access to shared SCSI/Fibre Channel devices. The first release to provide this feature is OpenVMS Alpha Version 7.3-1. Therefore, all nodes on which MPDEV_REMOTE is enabled must be running OpenVMS Alpha Version 7.3-1 (or later).

If MPDEV_ENABLE is set to OFF (0), the setting of MPDEV_REMOTE has no effect because the addition of all new paths to multipath sets is disabled. The default is ON.

MSCP_BUFFER Specifies the space allocated to the MSCP server's local buffer area, which the server uses to transfer data between client systems and local disks.

On VAX systems, MSCP_BUFFER specifies the number of pages to be allocated to the buffer area.

On Alpha and Integrity server systems, MSCP_BUFFER specifies the number of pagelets to be allocated to the buffer area.

MSCP_CMD_TMO Specifies the time in seconds that the OpenVMS MSCP server uses to detect MSCP command timeouts. The MSCP server must complete the command within a built-in time of approximately 40 seconds plus the value of the MSCP_CMD_TMO parameter.

An MSCP_CMD_TMO value of 0 is normally adequate. A value of 0 provides the same behavior as in previous releases of OpenVMS (which did not have an MSCP_CMD_TMO system parameter). A nonzero setting increases the amount of time before an MSCP command times out.

If command timeout errors are being logged on client nodes, setting the parameter to a nonzero value on OpenVMS servers reduces the number of errors logged. Increasing the value of this parameter reduces the number of client MSCP command timeouts but increases the time it takes to detect faulty devices.

If you need to decrease the number of command timeout errors, set an initial value of 60. If timeout errors continue to be logged, you can increase this value in increments of 20 seconds.

MSCP_CREDITS Specifies the number of outstanding I/O requests that can be active from one client system.
MSCP_LOAD Controls whether the MSCP server is loaded. Specify 1 to load the server, and use the default CPU load rating. A value greater than 1 loads the server and uses this value as a constant load rating. By default, the value is set to 0 and the server is not loaded.
MSCP_SERVE_ALL Controls the serving of disks. The settings take effect when the system boots. You cannot change the settings when the system is running.

Starting with OpenVMS Version 7.2, the serving types are implemented as a bit mask. To specify the type of serving your system will perform, locate the type you want in the following table and specify its value. For some systems, you may want to specify two serving types, such as serving the system disk and serving locally attached disks. To specify such a combination, add the values of each type and specify the sum; for example, to serve both locally attached disks (bit 1, value 2) and the system disk (bit 2, value 4), specify 6.

In a mixed-version cluster that includes any systems running OpenVMS Version 7.1-x or earlier, serving all available disks is restricted to serving all disks except those whose allocation class does not match the system's node allocation class (pre-Version 7.2 meaning). To specify this type of serving, use the value 9 (which sets bit 0 and bit 3).

The following table describes the serving type controlled by each bit and its decimal value.

Bit and Value When Set Description
Bit 0 (1) Serve all available disks (locally attached and those connected to HSx and DSSI controllers). Disks with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are also served if bit 3 is not set.
Bit 1 (2) Serve locally attached (non-HSx and non-DSSI) disks.
Bit 2 (4) Serve the system disk. This is the default setting. This setting is important when other nodes in the cluster rely on this system being able to serve its system disk. This setting prevents obscure contention problems that can occur when a system attempts to complete I/O to a remote system disk whose system has failed.
Bit 3 (8) Restrict the serving specified by bit 0. All disks except those with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are served.

This is pre-Version 7.2 behavior. If your cluster includes systems running OpenVMS Version 7.1-x or earlier, and you want to serve all available disks, you must specify 9, the result of setting this bit and bit 0.

Bit 4 (16) By default, bit 4 is not set, and DUDRIVER accepts devices with unit numbers greater than 9999. On the client side, if bit 4 is set (10000 binary) in the MSCP_SERVE_ALL parameter, the client rejects devices with unit numbers greater than 9999, retaining the earlier behavior.

Although the serving types are now implemented as a bit mask, the values of 0, 1, and 2, specified by bit 0 and bit 1, retain their original meanings:

  • 0 --- Do not serve any disks (the default for earlier versions of OpenVMS).
  • 1 --- Serve all available disks.
  • 2 --- Serve only locally attached (non-HSx and non-DSSI) disks.

If the MSCP_LOAD system parameter is 0, MSCP_SERVE_ALL is ignored. For more information about this system parameter, see Section 6.3.1.

NISCS_CONV_BOOT During booting as an OpenVMS Cluster satellite, specifies whether conversational bootstraps are enabled on the computer. The default value of 0 specifies that conversational bootstraps are disabled. A value of 1 enables conversational bootstraps.
NISCS_LAN_OVRHD Starting with OpenVMS Version 7.3, this parameter is obsolete. This parameter was formerly provided to reserve space in a LAN packet for encryption fields applied by external encryption devices. PEDRIVER now automatically determines the maximum packet size a LAN path can deliver, including any packet-size reductions required by external encryption devices.
NISCS_LOAD_PEA0 Specifies whether the port driver (PEDRIVER) must be loaded to enable cluster communications over the local area network (LAN) or IP. The default value of 0 specifies that the driver is not loaded.

Caution: If the NISCS_LOAD_PEA0 parameter is set to 1, the VAXCLUSTER system parameter must be set to 2. This ensures coordinated access to shared resources in the OpenVMS Cluster and prevents accidental data corruption.

NISCS_MAX_PKTSZ Specifies an upper limit, in bytes, on the size of the user data area in the largest packet sent by NISCA on any LAN network.

NISCS_MAX_PKTSZ allows the system manager to change the packet size used for cluster communications on network communication paths. PEDRIVER automatically allocates memory to support the largest packet size that is usable by any virtual circuit connected to the system up to the limit set by this parameter. Its default values are different for OpenVMS Integrity servers and Alpha.

On Integrity servers and Alpha, to optimize performance, the default value is the largest packet size currently supported by OpenVMS.

PEDRIVER uses NISCS_MAX_PKTSZ to compute the maximum amount of data to transmit in any LAN or IP packet:

LAN packet size <= LAN header (padded Ethernet format)
                   + NISCS_MAX_PKTSZ
                   + NISCS checksum (only if data checking is enabled)
                   + LAN CRC or FCS

The actual packet size automatically used by PEDRIVER might be smaller than the NISCS_MAX_PKTSZ limit for any of the following reasons:

  • On a per-LAN-path basis, if PEDRIVER determines that the LAN path between two nodes, including the local and remote LAN adapters and intervening LAN equipment, can convey only a lesser size.

    In other words, only nodes with large-packet LAN adapters connected end-to-end by large-packet LAN equipment can use large packets. Nodes connected to large-packet LANs but having an end-to-end path that involves an Ethernet segment restrict packet size to that of an Ethernet packet (1498 bytes).

  • For performance reasons, PEDRIVER might further limit the upper bound on packet size so that the packets can be allocated from a lookaside list in the nonpaged pool.

The actual memory allocation includes the required data structure overhead used by PEDRIVER and the LAN drivers, in addition to the actual LAN packet size.

The following table shows the minimum NISCS_MAX_PKTSZ value required to use the maximum packet size supported by specified LAN types.

Type of LAN Minimum Value for NISCS_MAX_PKTSZ
Ethernet 1498
Gigabit Ethernet 8192
10 Gigabit Ethernet 8192
  Note that the maximum packet size for some Gigabit Ethernet adapters is larger than the maximum value of NISCS_MAX_PKTSZ (8192 bytes). For information on how to enable jumbo frames on Gigabit Ethernet (packet sizes larger than those noted for Ethernet), see the LAN_FLAGS parameter.

OpenVMS Alpha Version 7.3-2 or later supports the DEGXA Gigabit Ethernet adapter, which is a Broadcom BCM5703 chip (TIGON3) network interface card (NIC). The introduction of the DEGXA Gigabit Ethernet adapter continues the existing Gigabit Ethernet support as both a LAN device as well as a cluster interconnect device.

Note that starting with OpenVMS Version 8.4, OpenVMS can use HP TCP/IP Services for cluster communications using the UDP protocol. NISCS_MAX_PKTSZ affects only the LAN channel payload size. To affect the IP channel payload size, see the NISCS_UDP_PKTSZ system parameter.

NISCS_PORT_SERV NISCS_PORT_SERV provides flag bits for PEDRIVER port services. Setting bits 0 and 1 (decimal value 3) enables data checking. The remaining bits are reserved for future use. Starting with OpenVMS Version 7.3-1, you can use the SCACP command SET VC/CHECKSUMMING to specify data checking on the VCs to certain nodes. You can do this on a running system. (Refer to the SCACP documentation in the HP OpenVMS System Management Utilities Reference Manual for more information.)

On the other hand, changing the setting of NISCS_PORT_SERV requires a reboot. Furthermore, this parameter applies to all virtual circuits between the node on which it is set and other nodes in the cluster.

NISCS_PORT_SERV has the AUTOGEN attribute.

NISCS_PORT_SERV can be used for enabling PEdriver data compression. The SCACP SET VC command now includes a /COMPRESSION (or /NOCOMPRESSION) qualifier, which enables or disables sending compressed data by the specified PEdriver VCs. The default is /NOCOMPRESSION.

You can also enable a VC's use of compression by setting bit 2 of the NISCS_PORT_SERV system parameter. The /NOCOMPRESSION qualifier does not override compression enabled by setting bit 2 of NISCS_PORT_SERV. For more information, see the SCACP utility chapter and the description of NISCS_PORT_SERV in the HP OpenVMS System Management Utilities Reference Manual and the HP OpenVMS Availability Manager User's Guide.

NISCS_UDP_PKTSZ This parameter specifies an upper limit on the size, in bytes, of the user data area in the largest packet sent by NISCA on any IP network.

NISCS_UDP_PKTSZ allows the system manager to change the packet size used for cluster communications over IP on network communication paths.

PEDRIVER uses NISCS_UDP_PKTSZ to compute the maximum amount of data to transmit in any packet.

Currently, the maximum payload over an IP channel is defined by one of the following three values. The smallest of the three values is in effect.

  • NISCS_UDP_PKTSZ
  • 1500 bytes
  • The IP_MTU of the interface, as supported by the TCP/IP stack
Note that this parameter only affects the IP channel payload and not the LAN channel payload. The LAN channel payload is controlled by NISCS_MAX_PKTSZ.
NISCS_USE_UDP If NISCS_USE_UDP is set to 1, PEdriver uses IP in addition to the LAN driver for cluster communication. Setting this parameter to 1 causes the IPCI configuration information to be loaded from the configuration files during the boot sequence. SYS$SYSTEM:PE$IP_CONFIG.DAT and SYS$SYSTEM:TCPIP$CLUSTER.DAT are the two configuration files used for the IP cluster interconnect.
PASTDGBUF Specifies the number of datagram receive buffers to queue initially for the cluster port driver's configuration poller. The initial value is expanded during system operation, if needed.

MEMORY CHANNEL devices ignore this parameter.

QDSKINTERVAL Specifies, in seconds, the disk quorum polling interval. The maximum is 32767, the minimum is 1, and the default is 3. Lower values trade increased overhead cost for greater responsiveness.

This parameter should be set to the same value on each cluster computer.

QDSKVOTES Specifies the number of votes contributed to the cluster votes total by a quorum disk. The maximum is 127, the minimum is 0, and the default is 1. This parameter is used only when DISK_QUORUM is defined.
RECNXINTERVAL Specifies, in seconds, the interval during which the connection manager attempts to reconnect a broken connection to another computer. If a new connection cannot be established during this period, the connection is declared irrevocably broken, and either this computer or the other must leave the cluster. This parameter trades faster response to certain types of system failures for the ability to survive transient faults of increasing duration.

This parameter should be set to the same value on each cluster computer. This parameter also affects the tolerance of the OpenVMS Cluster system for LAN bridge failures (see Section 3.2.10).

SCSBUFFCNT On Integrity server and Alpha systems, SCS buffers are allocated as needed, and SCSBUFFCNT is reserved for OpenVMS use only.
SCSCONNCNT The initial number of SCS connections that are configured for use by all system applications, including the one used by Directory Service Listen. The initial number will be expanded by the system if needed.

If no SCS ports are configured on your system, this parameter is ignored. The default value is adequate for all SCS hardware combinations.

Note: AUTOGEN provides feedback for this parameter on VAX systems only.

SCSNODE 1 Specifies the name of the computer. This parameter is not dynamic.

Specify SCSNODE as a string of up to six characters. Enclose the string in quotation marks.

If the computer is in an OpenVMS Cluster, specify a value that is unique within the cluster. Do not specify the null string.

If the computer is running DECnet for OpenVMS, the value must be the same as the DECnet node name.

SCSRESPCNT SCSRESPCNT is the total number of response descriptor table entries (RDTEs) configured for use by all system applications.

If no SCS or DSA port is configured on your system, this parameter is ignored.

SCSSYSTEMID 1 Specifies a number that identifies the computer. This parameter is not dynamic. SCSSYSTEMID is the low-order 32 bits of the 48-bit system identification number.

If the computer is in an OpenVMS Cluster, specify a value that is unique within the cluster.

If the computer is running DECnet for OpenVMS, calculate the value from the DECnet address using the following formula:

SCSSYSTEMID = (DECnet-area-number * 1024) + DECnet-node-number

Example: If the DECnet address is 2.211, calculate the value as follows:

SCSSYSTEMID = (2 * 1024) + 211 = 2259

SCSSYSTEMIDH Specifies the high-order 16 bits of the 48-bit system identification number. This parameter must be set to 0. It is reserved by OpenVMS for future use.
TAPE_ALLOCLASS Specifies a numeric value from 0 to 255 to be assigned as the tape allocation class for tape devices connected to the computer. The default value is 0.
TIMVCFAIL Specifies the time required for a virtual circuit failure to be detected. HP recommends that you use the default value. HP further recommends that you decrease this value only in OpenVMS Cluster systems of three or fewer CPUs, use the same value on each computer in the cluster, and use dedicated LAN segments for cluster I/O.
TMSCP_LOAD Controls whether the TMSCP server is loaded. Specify a value of 1 to load the server and serve all available TMSCP tapes. By default, the value is set to 0, and the server is not loaded.
TMSCP_SERVE_ALL Controls the serving of tapes. The settings take effect when the system boots. You cannot change the settings when the system is running.

Starting with OpenVMS Version 7.2, the serving types are implemented as a bit mask. To specify the type of serving your system will perform, locate the type you want in the following table and specify its value. For some systems, you may want to specify two serving types, such as serving all tapes except those whose allocation class does not match. To specify such a combination, add the values of each type, and specify the sum.

In a mixed-version cluster that includes any systems running OpenVMS Version 7.1-x or earlier, serving all available tapes is restricted to serving all tapes except those whose allocation class does not match the system's allocation class (pre-Version 7.2 meaning). To specify this type of serving, use the value 9, which sets bit 0 and bit 3.

The following table describes the serving type controlled by each bit and its decimal value.

Bit Value When Set Description
Bit 0 1 Serve all available tapes (locally attached and those connected to HSx and DSSI controllers). Tapes with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are also served if bit 3 is not set.
Bit 1 2 Serve locally attached (non-HSx and non-DSSI) tapes.
Bit 2 n/a Reserved.
Bit 3 8 Restrict the serving specified by bit 0. All tapes except those with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are served.

This is pre-Version 7.2 behavior. If your cluster includes systems running OpenVMS Version 7.1-x or earlier, and you want to serve all available tapes, you must specify 9, the result of setting this bit and bit 0.

Bit 4 16 By default, bit 4 is not set, and TUDRIVER accepts devices with unit numbers greater than 9999. On the client side, if bit 4 is set (10000 binary) in the TMSCP_SERVE_ALL parameter, the client rejects devices with unit numbers greater than 9999, retaining the earlier behavior.

Although the serving types are now implemented as a bit mask, the values of 0, 1, and 2, specified by bit 0 and bit 1, retain their original meanings:

  • 0 --- Do not serve any tapes (the default for earlier versions of OpenVMS).
  • 1 --- Serve all available tapes.
  • 2 --- Serve only locally attached (non-HSx and non-DSSI) tapes.

If the TMSCP_LOAD system parameter is 0, TMSCP_SERVE_ALL is ignored.

VAXCLUSTER Controls whether the computer should join or form a cluster. This parameter accepts the following three values:
  • 0 --- Specifies that the computer will not participate in a cluster.
  • 1 --- Specifies that the computer should participate in a cluster if hardware supporting SCS (CI or DSSI) is present or if NISCS_LOAD_PEA0 is set to 1, indicating that cluster communications is enabled over the local area network (LAN) or IP.
  • 2 --- Specifies that the computer should participate in a cluster.

You should always set this parameter to 2 on computers intended to run in a cluster, to 0 on computers that boot from a UDA disk controller and are not intended to be part of a cluster, and to 1 (the default) otherwise.

Caution: If the NISCS_LOAD_PEA0 system parameter is set to 1, the VAXCLUSTER parameter must be set to 2. This ensures coordinated access to shared resources in the OpenVMS Cluster system and prevents accidental data corruption. Data corruption may occur on shared resources if the NISCS_LOAD_PEA0 parameter is set to 1 and the VAXCLUSTER parameter is set to 0.
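
For example, on a node that communicates with its cluster over the LAN, the two parameters are typically set together in SYS$SYSTEM:MODPARAMS.DAT (a sketch, following the caution above):

NISCS_LOAD_PEA0 = 1
VAXCLUSTER = 2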

VOTES Specifies the number of votes toward a quorum to be contributed by the computer. The default is 1.

1 Once a computer has been recognized by another computer in the cluster, you cannot change the SCSSYSTEMID or SCSNODE parameter without either changing both or rebooting the entire cluster.

