
HP OpenVMS Cluster Systems



E.5.2 Error Messages

SYS$LAVC_DEFINE_NET_PATH can return the error condition codes shown in the following table.

Condition Code Description
SS$_ACCVIO This status value can be returned under the following conditions:
  • No access to the descriptor or the network component ID value buffer
  • No access to the argument list
  • No write access to the used_for_analysis_status address
  • No write access to the bad_component_id address
SS$_DEVACTIVE Analysis already running. You must stop the analysis by calling the SYS$LAVC_DISABLE_ANALYSIS function before defining the network components and the network component lists.
SS$_INSFARG Not enough arguments supplied.
SS$_INVCOMPID Invalid network component ID specified in the buffer. The bad_component_id address contains the failed component ID.
SS$_INVCOMPLIST This status value can be returned under the following conditions:
  • Fewer than two nodes were specified in the node list.
  • More than two nodes were specified in the list.
  • The first network component ID was not a COMP$C_NODE type.
  • The last network component ID was not a COMP$C_NODE type.
  • Fewer than two adapters were specified in the list.
  • More than two adapters were specified in the list.
SS$_IVBUFLEN Length of the network component ID buffer is less than 16, is not a multiple of 4, or is greater than 508.
SS$_RMTPATH Network path is not associated with the local node. This status is returned only to indicate whether this path was needed for network failure analysis on the local node.
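
The SS$_IVBUFLEN constraints lend themselves to a simple pre-call check. The following C sketch is illustrative only: the lavc_buffer_length_ok helper is hypothetical and not part of any OpenVMS interface; it merely encodes the length rules stated in the table above.

#include <stdio.h>

/* Illustrative check of the buffer-length rules behind SS$_IVBUFLEN:
 * the network component ID buffer must be at least 16 bytes long,
 * no longer than 508 bytes, and a multiple of 4 bytes.
 * This helper is hypothetical; it is not part of the LAVC$ interface. */
static int lavc_buffer_length_ok(unsigned int length_in_bytes)
{
    if (length_in_bytes < 16)      return 0;   /* too short                */
    if (length_in_bytes > 508)     return 0;   /* too long                 */
    if (length_in_bytes % 4 != 0)  return 0;   /* not a longword multiple  */
    return 1;
}

int main(void)
{
    printf("20  -> %d\n", lavc_buffer_length_ok(20));    /* 1: valid          */
    printf("18  -> %d\n", lavc_buffer_length_ok(18));    /* 0: not a multiple */
    printf("512 -> %d\n", lavc_buffer_length_ok(512));   /* 0: exceeds 508    */
    return 0;
}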

E.6 Starting Network Component Failure Analysis

The SYS$LAVC_ENABLE_ANALYSIS subroutine starts the network component failure analysis.

Example: The following is an example of using the SYS$LAVC_ENABLE_ANALYSIS subroutine:


STATUS = SYS$LAVC_ENABLE_ANALYSIS ( ) 

E.6.1 Status

This subroutine attempts to enable the network component failure analysis code. The attempt will succeed if at least one component list is defined.

SYS$LAVC_ENABLE_ANALYSIS returns a status in register R0.

E.6.2 Error Messages

SYS$LAVC_ENABLE_ANALYSIS can return the error condition codes shown in the following table.

Condition Code Description
SS$_DEVOFFLINE PEDRIVER is not properly initialized. ROOT or PORT block is not available.
SS$_NOCOMPLSTS No network connection lists exist. Network analysis is not possible.
SS$_WASSET Network component analysis is already running.
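
For callers written in C, the returned status can be tested directly against these condition values. The following is a minimal sketch, not a definitive coding example: it assumes a hand-written extern declaration for the routine (no prototype is taken from starlet.h) and assumes the SS$_ symbols used here are available from ssdef.h.

#include <stdio.h>
#include <ssdef.h>          /* SS$_ condition codes (OpenVMS) */

/* Assumed declaration: the routine takes no arguments and returns
 * its status in R0, as described in Section E.6.1. */
extern unsigned int sys$lavc_enable_analysis(void);

int main(void)
{
    unsigned int status = sys$lavc_enable_analysis();

    if (status == SS$_WASSET)
        printf("Network component analysis is already running.\n");
    else if (status == SS$_NOCOMPLSTS)
        printf("No network component lists exist; define them first.\n");
    else if (status & 1)                     /* odd status = success */
        printf("Network component failure analysis enabled.\n");
    else
        printf("SYS$LAVC_ENABLE_ANALYSIS failed, status = %%X%08X\n", status);

    return status;                           /* condition code to DCL */
}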

E.7 Stopping Network Component Failure Analysis

The SYS$LAVC_DISABLE_ANALYSIS subroutine stops the network component failure analysis.

Example: The following is an example of using SYS$LAVC_DISABLE_ANALYSIS:


STATUS = SYS$LAVC_DISABLE_ANALYSIS ( ) 

This subroutine disables the network component failure analysis code and, if analysis was enabled, deletes all the network component definitions and network component list data structures from nonpaged pool.

E.7.1 Status

SYS$LAVC_DISABLE_ANALYSIS returns a status in register R0.

E.7.2 Error Messages

SYS$LAVC_DISABLE_ANALYSIS can return the error condition codes shown in the following table.

Condition Code Description
SS$_DEVOFFLINE PEDRIVER is not properly initialized. ROOT or PORT block is not available.
SS$_WASCLR Network component analysis already stopped.
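
A corresponding C sketch for stopping the analysis, written under the same assumptions as the example in Section E.6.2 (a hand-written extern declaration and the SS$_ symbols from ssdef.h):

#include <stdio.h>
#include <ssdef.h>          /* SS$_ condition codes (OpenVMS) */

extern unsigned int sys$lavc_disable_analysis(void);

int main(void)
{
    unsigned int status = sys$lavc_disable_analysis();

    if (status == SS$_WASCLR)
        printf("Network component analysis was already stopped.\n");
    else if (status & 1)                     /* odd status = success */
        printf("Analysis stopped; component definitions deleted.\n");
    else
        printf("SYS$LAVC_DISABLE_ANALYSIS failed, status = %%X%08X\n", status);

    return status;
}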


Appendix F
Troubleshooting the NISCA Protocol

NISCA is the transport protocol responsible for carrying messages, such as disk I/Os and lock messages, across Ethernet LANs to other nodes in the cluster. The acronym NISCA refers to the protocol that implements an Ethernet network interconnect (NI) according to the System Communications Architecture (SCA).

Using the NISCA protocol, an OpenVMS software interface emulates the CI port interface; that is, the software interface is identical to that of the CI bus, except that data is transferred over a LAN or IP network. The NISCA protocol allows OpenVMS Cluster communication over the LAN or IP network without the need for any special hardware.

This appendix describes the NISCA transport protocol and provides troubleshooting strategies to help a network manager pinpoint network-related problems. Because troubleshooting hard component failures in the LAN is best accomplished using a LAN analyzer, this appendix also describes the features and setup of a LAN analysis tool.

Note

Additional troubleshooting information specific to the revised PEDRIVER is planned for the next revision of this manual.

F.1 How NISCA Fits into the SCA

The NISCA protocol is an implementation of the Port-to-Port Driver (PPD) protocol of the SCA.

F.1.1 SCA Protocols

As described in Chapter 2, the SCA is a software architecture that provides efficient communication services to low-level distributed applications (for example, device drivers, file services, network managers).

The SCA specifies a number of protocols for OpenVMS Cluster systems, including System Applications (SYSAP), System Communications Services (SCS), the Port-to-Port Driver (PPD), and the Physical Interconnect (PI) of the device driver and LAN adapter. Figure F-1 shows these protocols as the interdependent levels that make up the SCA architecture, with the NISCA protocol as a particular implementation of the PPD layer.

Figure F-1 Protocols in the SCA Architecture


Table F-1 describes the levels of the SCA protocol shown in Figure F-1.

Table F-1 SCA Protocol Layers
Protocol Description
SYSAP Represents clusterwide system applications that execute on each node. These system applications share communication paths in order to send messages between nodes. Examples of system applications are disk class drivers (such as DUDRIVER), the MSCP server, and the connection manager.
SCS Manages connections around the OpenVMS Cluster and multiplexes messages between system applications over a common transport called a virtual circuit (see Section F.1.2). The SCS layer also notifies individual system applications when a connection fails so that they can respond appropriately. For example, an SCS notification might trigger DUDRIVER to fail over a disk, trigger a cluster state transition, or notify the connection manager to start timing reconnect (RECNXINTERVAL) intervals.
PPD Provides a message delivery service to other nodes in the OpenVMS Cluster system.
PPD Level Description
Port-to-Port Driver (PPD) Establishes virtual circuits and handles errors.
Port-to-Port Communication (PPC) Provides port-to-port communication, datagrams, sequenced messages, and block transfers. "Segmentation" also occurs at the PPC level (a short worked example follows this table). Segmentation of large blocks of data is done differently on a LAN than on a CI or a DSSI bus. LAN data packets are fragmented according to the size allowed by the particular LAN communications path, as follows:
Port-to-Port Communications Packet Size Allowed
Ethernet-to-Ethernet 1498 bytes
Gb Ethernet-to-Gb Ethernet up to 8192 bytes
Gb Ethernet-to-10Gb Ethernet up to 8192 bytes
10Gb Ethernet-to-10Gb Ethernet up to 8192 bytes
Note: The default value is 1498 bytes for both Ethernet and FDDI.
Transport (TR) Provides an error-free path, called a virtual circuit (see Section F.1.2), between nodes. The PPC level uses a virtual circuit for transporting sequenced messages and datagrams between two nodes in the cluster.
Channel Control (CC) Manages network paths, called channels, between nodes in an OpenVMS Cluster. The CC level maintains channels by sending HELLO datagram messages between nodes. A node sends HELLO datagram messages to indicate that it is still functioning. The TR level uses channels to carry virtual circuit traffic.
Datagram Exchange (DX) Interfaces to the LAN driver.
PI Provides connections to LAN devices. PI represents LAN drivers and adapters over which packets are sent and received.
PI Component Description
LAN drivers Multiplex NISCA and many other clients (such as DECnet, TCP/IP, LAT, LAD/LAST) and provide them with datagram services on Ethernet and FDDI network interfaces.
LAN adapters Consist of the LAN network driver and adapter hardware.
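
The effect of the PPC-level packet sizes on segmentation can be shown with a little arithmetic. The following C sketch is illustrative only; the 65536-byte block size is an arbitrary example value, and the calculation is generic, not PEDRIVER's internal segmentation logic.

#include <stdio.h>

int main(void)
{
    /* Arbitrary example: segment a 64 KB block transfer at the two
     * packet sizes listed for the PPC level in Table F-1. */
    const unsigned int block_bytes = 65536;
    const unsigned int packet_sizes[] = { 1498, 8192 };
    int i;

    for (i = 0; i < 2; i++) {
        unsigned int segments =
            (block_bytes + packet_sizes[i] - 1) / packet_sizes[i];
        printf("%4u-byte packets: %2u segments for a %u-byte block\n",
               packet_sizes[i], segments, block_bytes);
    }
    return 0;
}

At 1498 bytes the block is split into 44 packets; at 8192 bytes it needs only 8, which is why the larger packet sizes matter on Gb and 10Gb Ethernet paths.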

Figure F-2 shows how the NISCA protocol is layered over the TCP/IP stack in a Cluster over IP configuration.

Figure F-2 Protocols in the SCA Architecture for Cluster over IP


Table F-2 describes the levels of the SCA protocol shown in Figure F-2.

Table F-2 SCA Protocol Layers for Cluster over IP
Protocol Description
SYSAP Represents clusterwide system applications that execute on each node. These system applications share communication paths in order to send messages between nodes. Examples of system applications are disk class drivers (such as DUDRIVER), the MSCP server, and the connection manager.
SCS Manages connections around the OpenVMS Cluster and multiplexes messages between system applications over a common transport called a virtual circuit (see Section F.1.2). The SCS layer also notifies individual system applications when a connection fails so that they can respond appropriately. For example, an SCS notification might trigger DUDRIVER to fail over a disk, trigger a cluster state transition, or notify the connection manager to start timing reconnect (RECNXINTERVAL) intervals.
PPD Provides a message delivery service to other nodes in the OpenVMS Cluster system.
PPD Level Description
Port-to-Port Driver (PPD) Establishes virtual circuits and handles errors.
Port-to-Port Communication (PPC) Provides port-to-port communication, datagrams, sequenced messages, and block transfers. "Segmentation" also occurs at the PPC level. Segmentation of large blocks of data is done differently on a LAN than on a CI or a DSSI bus. LAN data packets are fragmented according to the size allowed by the particular LAN communications path, as follows:
Port-to-Port Communications Packet Size Allowed
Ethernet-to-Ethernet 1498 bytes
Gb Ethernet-to-Gb Ethernet up to 8192 bytes
Gb Ethernet-to-10Gb Ethernet up to 8192 bytes
10Gb Ethernet-to-10Gb Ethernet up to 8192 bytes
Note: The default value is 1498 bytes for both Ethernet and FDDI.
Transport (TR) Provides an error-free path, called a virtual circuit (see Section F.1.2), between nodes. The PPC level uses a virtual circuit for transporting sequenced messages and datagrams between two nodes in the cluster.
Channel Control (CC) Manages network paths, called channels, between nodes in an OpenVMS Cluster. The CC level maintains channels by sending HELLO datagram messages between nodes. A node sends HELLO datagram messages to indicate that it is still functioning. The TR level uses channels to carry virtual circuit traffic.
IP header exchange Interfaces to the TCP/IP stack.
TCP/IP Cluster over IP uses UDP for cluster communication.
PI Provides connections to LAN devices. PI represents LAN drivers and adapters over which packets are sent and received.
PI Component Description
LAN drivers Multiplex NISCA and many other clients (such as DECnet, TCP/IP, LAT, LAD/LAST) and provide them with datagram services on Ethernet and FDDI network interfaces.
LAN adapters Consist of the LAN network driver and adapter hardware.

F.1.2 Paths Used for Communication

The NISCA protocol controls communications over the paths described in Table F-3.

Table F-3 Communication Paths
Path Description
Virtual circuit A common transport that provides reliable port-to-port communication between OpenVMS Cluster nodes in order to:
  • Ensure the delivery of messages without duplication or loss. To do this, each port maintains a virtual circuit with every other remote port.
  • Ensure the sequential ordering of messages. To do this, virtual circuit sequence numbers are used on the individual packets; each transmit message carries a sequence number, and duplicates are discarded.

The virtual circuit descriptor table in each port indicates the status of its port-to-port circuits. After a virtual circuit is formed between two ports, communication can be established between SYSAPs in the nodes.

Channel A logical communication path between two LAN adapters located on different nodes. Channels between nodes are determined by the pairs of adapters and the connecting network. For example, two nodes, each having two adapters, could establish four channels. The messages carried by a particular virtual circuit can be sent over any of the channels connecting the two nodes.

Note: The difference between a channel and a virtual circuit is that channels provide a path for datagram service. Virtual circuits, layered on channels, provide an error-free path between nodes. Multiple channels can exist between nodes in an OpenVMS Cluster but only one virtual circuit can exist between any two nodes at a time.
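
The adapter-pair relationship described for channels can be illustrated with a small enumeration. The node and adapter names in the following C sketch are made up for illustration, and the code is not a representation of PEDRIVER's internal channel data structures.

#include <stdio.h>

int main(void)
{
    /* Hypothetical example: NODEA and NODEB each have two LAN adapters.
     * A channel is a (local adapter, remote adapter) pair plus the
     * connecting network, so 2 x 2 = 4 channels are possible, all of
     * which can carry the single virtual circuit between the nodes. */
    const char *node_a_adapters[] = { "EWA", "EWB" };
    const char *node_b_adapters[] = { "EWA", "EWC" };
    int i, j, channels = 0;

    for (i = 0; i < 2; i++)
        for (j = 0; j < 2; j++)
            printf("channel %d: NODEA/%s <--> NODEB/%s\n",
                   ++channels, node_a_adapters[i], node_b_adapters[j]);

    printf("%d channels, 1 virtual circuit between NODEA and NODEB\n",
           channels);
    return 0;
}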

F.1.3 PEDRIVER

The port emulator driver, PEDRIVER, implements the NISCA protocol and establishes and controls channels for communication between local and remote LAN ports.

PEDRIVER implements a packet delivery service (at the TR level of the NISCA protocol) that guarantees the sequential delivery of messages. The messages carried by a particular virtual circuit can be sent over any of the channels connecting two nodes. The choice of channel is determined by the sender (PEDRIVER) of the message. Because a node sending a message can choose any channel, PEDRIVER, as a receiver, must be prepared to receive messages over any channel.

At any point in time, the TR level uses a single "preferred channel" to carry the traffic for a particular virtual circuit.

Starting with OpenVMS Version 8.3, the PEDRIVER also supports the following features:

  • Data compression
  • Multi-gigabit line speed and long distance performance scaling

Data compression can be used to reduce the time to transfer data between two OpenVMS nodes when the LAN speed between them limits the data transfer rate and idle CPU capacity is available. For example, it can be used to reduce shadow copy times or to improve MSCP serving performance between disaster-tolerant cluster sites connected by relatively low-speed links, such as E3 or DS3, FDDI, or 100 Mb Ethernet. PEdriver data compression can be enabled by using SCACP, Availability Manager, or the NISCS_PORT_SERV system parameter.

The number of packets in flight between nodes needs to increase in proportion to both the speed of the LAN links and the internode distance. Historically, PEdriver had fixed transmit and receive windows (buffering capacity) of 31 outstanding packets. Beginning with OpenVMS Version 8.3, PEdriver automatically selects transmit and receive window sizes (sometimes called pipe quotas by other network protocols) based on the speed of the current set of local and remote LAN adapters being used for cluster communications between nodes. Additionally, SCACP and Availability Manager now provide a management override of the automatically selected window sizes.
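
The need for larger windows on faster or longer links is the usual bandwidth-delay product argument. The following C sketch computes an illustrative window size from assumed link parameters; the numbers are examples and the formula is generic networking arithmetic, not PEdriver's actual window-selection algorithm.

#include <stdio.h>

int main(void)
{
    /* Assumed example values, not measured PEdriver figures. */
    double link_speed_bps   = 1.0e9;    /* 1 Gb/s LAN link                  */
    double round_trip_sec   = 2.0e-3;   /* 2 ms RTT, roughly a 200 km path  */
    double packet_bytes     = 8192.0;   /* large-packet size from Table F-1 */

    /* Bandwidth-delay product: bytes that must be in flight to keep the
     * link busy, and the corresponding number of outstanding packets. */
    double bytes_in_flight   = (link_speed_bps / 8.0) * round_trip_sec;
    double packets_in_flight = bytes_in_flight / packet_bytes;

    printf("bytes in flight  : %.0f\n", bytes_in_flight);
    printf("packets in flight: %.1f (the historic fixed window was 31)\n",
           packets_in_flight);
    return 0;
}

With these example numbers the link needs about 30 packets in flight, close to the old fixed window of 31; a faster link or a longer distance pushes the requirement well beyond that, which is what the automatic window sizing addresses.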

For more information, see the SCACP utility chapter and the NISCS_PORT_SERV description in the HP OpenVMS System Management Utilities Reference Manual, and the HP OpenVMS Availability Manager User's Guide.

Reference: See Appendix G for more information about how transmit channels are selected.

