
HP OpenVMS Systems Documentation


HP OpenVMS Cluster Systems



2.10.1 Controlling Queues

To control queues, you use one or several queue managers to maintain a clusterwide queue database that stores information about queues and jobs.
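As a brief, hedged illustration (the disk, directory, and node names below are placeholders), a clusterwide queue manager is typically created and started with commands along the following lines:

    $ ! Create the queue database on a disk visible to all nodes and start the
    $ ! queue manager, allowing it to fail over among the listed nodes.
    $ START/QUEUE/MANAGER/NEW_VERSION/ON=(NODEA,NODEB,*) $1$DGA100:[QUEUES]
    $ ! Verify that the queue manager is running and where it currently resides.
    $ SHOW QUEUE/MANAGER/FULL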

Reference: For detailed information about setting up OpenVMS Cluster queues, see Chapter 7.


Chapter 3
OpenVMS Cluster Interconnect Configurations

This chapter provides an overview of various types of OpenVMS Cluster configurations and the ways they are interconnected.

For definitive information about supported OpenVMS Cluster configurations, see:

  • OpenVMS Cluster Software Software Product Description (SPD 29.78.xx)
  • Guidelines for OpenVMS Cluster Configurations

3.1 Overview

Every node in an OpenVMS Cluster must have direct connections to all other nodes. Sites can choose to use one or more of the following interconnects:

  • LANs
    • Ethernet (Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet)
  • Internet Protocol (IP)
    • Ethernet (Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet)
  • MEMORY CHANNEL (Alpha only)
  • SMCI (Shared memory CI) (Alpha only) in OpenVMS Galaxy configurations, as described in the HP OpenVMS Alpha Partitioning and Galaxy Guide
  • SCSI (supported only as a node-to-storage interconnect, requires a second interconnect for node-to-node (SCS) communications for limited configurations)
  • Fibre Channel (supported only as a node-to-storage interconnect, requires a second interconnect for node-to-node (SCS) communications)
  • SAS (supported only as a node-to-storage interconnect, requires a second interconnect for node-to-node (SCS) communications for limited configurations) (Integrity servers only)

Processing needs and available hardware resources determine how individual OpenVMS Cluster systems are configured. The configuration discussions in this chapter are based on these physical interconnects.

You can use bridges or switches to connect the OpenVMS Integrity server nodes' Fast Ethernet or Gigabit Ethernet NICs to any intersite interconnect the WAN supplier provides, such as [D]WDM, Gigabit Ethernet, Fibre Channel, or others.

Note

Multihost shared storage on a SCSI interconnect, commonly known as SCSI clusters, is not supported on OpenVMS Integrity server systems. It is also not supported on OpenVMS Alpha systems for newer SCSI adapters. However, multihost shared storage on industry-standard Fibre Channel is supported.

Locally attached storage, on both OpenVMS Alpha systems (FC or SCSI storage) and OpenVMS Integrity server systems (Fibre Channel, SAS, or SCSI storage), can be served to any other member of the cluster.

3.2 OpenVMS Cluster Systems Interconnected by LANs

All Ethernet interconnects are industry-standard local area networks that are generally shared by a wide variety of network consumers. When OpenVMS Cluster systems are based on LAN, cluster communications are carried out by a port driver (PEDRIVER) that emulates CI port functions.

3.2.1 Design

The OpenVMS Cluster software is designed to use the Ethernet interconnects simultaneously with the DECnet, TCP/IP, and SCS protocols. This is accomplished by allowing the LAN data link software to control the hardware port. This software provides a multiplexing function so that the cluster protocols are simply another user of a shared hardware resource. See Figure 2-1 for an illustration of this concept.

3.2.1.1 PEDRIVER Fast Path Support

PEdriver, the software that enables OpenVMS Cluster communications over a LAN, also provides Fast Path support. This PEdriver feature provides the following benefits:

  • Improves SMP performance scalability.
  • Reduces the contention for the SCS/IOLOCK8 spinlock. PEdriver uses a private port mainline spinlock to synchronize its internal operation.
  • Allows PEdriver to perform cluster communications processing on a secondary CPU, thus offloading the primary CPU.
  • Allows PEdriver to process cluster communications using a single CPU.
  • Reduces CPU cost by providing a Fast Path streamlined code path for DSA and served blocked data operations.

For more detailed information, see the HP OpenVMS I/O User's Reference Manual, the HP OpenVMS System Manager's Manual, and the HP OpenVMS System Management Utilities Reference Manual.
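As a hedged sketch (the device name PEA0 and the CPU number are examples; verify the qualifiers against the manuals cited above), you can inspect and change the CPU that handles PEdriver's Fast Path port with DCL commands such as:

    $ ! Display details for the cluster port device (Fast Path information
    $ ! appears here on supporting versions).
    $ SHOW DEVICE/FULL PEA0:
    $ ! Move the PEdriver Fast Path port to a secondary CPU (CPU 2 is only an
    $ ! example) to offload the primary CPU.
    $ SET DEVICE PEA0: /PREFERRED_CPUS=2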

3.2.2 Cluster Group Numbers and Cluster Passwords

A single LAN can support multiple LAN-based OpenVMS Cluster systems. Each OpenVMS Cluster is identified and secured by a unique cluster group number and a cluster password. Chapter 2 describes cluster group numbers and cluster passwords in detail.
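For example (the group number and password shown are placeholders; see Chapter 2 for the authoritative procedure), the cluster group number and password are normally recorded in the cluster authorization file through the SYSMAN utility:

    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/GROUP_NUMBER=65100 -
    _SYSMAN> /PASSWORD=XYZ12345
    SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
    SYSMAN> EXIT

The same group number and password must be used by every member of that OpenVMS Cluster.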

3.2.3 Servers

OpenVMS Cluster computers interconnected by a LAN are generally configured as either servers or satellites. The following table describes servers.

Server Type Description
MOP servers Downline load the OpenVMS boot driver to satellites by means of the Maintenance Operations Protocol (MOP).
Disk servers Use MSCP server software to make their locally connected disks available to satellites over the LAN.
Tape servers Use TMSCP server software to make their locally connected tapes available to satellite nodes over the LAN.
Boot servers A combination of a MOP server and a disk server that serves one or more Alpha system disks. Boot and disk servers make user and application data disks available across the cluster. These servers must be the most powerful computers in the OpenVMS Cluster and must use the highest-bandwidth LAN adapters in the cluster. Boot servers must always run the MSCP server software.
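As a hedged example of how disk and tape serving is typically enabled on such servers (the values are illustrative; run AUTOGEN after editing so the changes take effect), the relevant entries in SYS$SYSTEM:MODPARAMS.DAT look like this:

    ! MODPARAMS.DAT excerpt for a node acting as a disk, tape, or boot server
    MSCP_LOAD = 1         ! Load the MSCP disk server at boot
    MSCP_SERVE_ALL = 1    ! Serve all locally available disks to the cluster
    TMSCP_LOAD = 1        ! Load the TMSCP tape server at boot
    TMSCP_SERVE_ALL = 1   ! Serve all locally available tapes to the cluster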

3.2.4 Satellites

Satellites are computers without a local system disk. Generally, satellites are consumers of cluster resources, although they can also provide facilities for disk serving, tape serving, and batch processing. If satellites are equipped with local disks, they can enhance performance by using such local disks for paging and swapping.

Satellites are booted remotely from a boot server (or from a MOP server and a disk server) serving the system disk. Section 3.2.5 describes MOP and disk server functions during satellite booting.

3.2.5 Satellite Booting (Alpha)

When a satellite requests an operating system load, a MOP server for the appropriate OpenVMS Alpha operating system sends a bootstrap image to the satellite that allows the satellite to load the rest of the operating system from a disk server and join the cluster. The sequence of actions during booting is described in Table 3-1.

Table 3-1 Satellite Booting Process
Step Action Comments
1 Satellite requests MOP service. This is the original boot request that a satellite sends out across the network. Any node in the OpenVMS Cluster that has MOP service enabled and has the LAN address of the particular satellite node in its database can become the MOP server for the satellite.
2 MOP server loads the Alpha system. The MOP server responds to an Alpha satellite boot request by downline loading the SYS$SYSTEM:APB.EXE program along with the required parameters.

For Alpha computers, some of these parameters include:

  • System disk name
  • Root number of the satellite
3 Satellite finds additional parameters located on the system disk and root. The satellite finds OpenVMS Cluster system parameters, such as SCSSYSTEMID, SCSNODE, and NISCS_CONV_BOOT. The satellite also finds the cluster group code and password.
4 Satellite executes the load program. The program establishes an SCS connection to a disk server for the satellite system disk and loads the SYSBOOT.EXE program.

Configuring and starting a satellite booting service for Alpha computers is described in detail in Section 4.5.
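The sketch below illustrates the kind of LANCP definitions involved. The node name, LAN address, system disk, and root are hypothetical, and the qualifier names should be verified against Section 4.5 and LANCP help; in practice, CLUSTER_CONFIG_LAN.COM issues the equivalent definitions for you when you add a satellite.

    $ RUN SYS$SYSTEM:LANCP
    LANCP> DEFINE DEVICE EWA0:/MOPDLL=ENABLE
    LANCP> DEFINE NODE SATLT1/ADDRESS=08-00-2B-12-34-56/ROOT=$1$DGA100:<SYS10.>/BOOT_TYPE=ALPHA_SATELLITE
    LANCP> EXIT

The first command enables MOP downline-load service on the LAN adapter used for booting; the second registers the satellite so that this node can answer its boot request.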

3.2.6 Satellite Booting (Integrity servers)

Configuring and starting a satellite booting service for Integrity server systems is described in detail in Section 4.5.

3.2.7 Configuring Multiple LAN Adapters

LAN support for multiple adapters allows PEDRIVER (the port driver for the LAN) to establish more than one channel between the local and remote cluster nodes. A channel is a network path between two nodes that is represented by a pair of LAN adapters.
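For example, you can display the channels, and the virtual circuits built on top of them, with the SCACP utility (output formats vary by OpenVMS version):

    $ MC SCACP SHOW CHANNELS     ! one entry per LAN-adapter pair to each remote node
    $ MC SCACP SHOW VC           ! the virtual circuits carried over those channels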

3.2.7.1 System Characteristics

OpenVMS Cluster systems with multiple LAN adapters have the following characteristics:

  • At boot time, all Ethernet adapters are automatically configured for local area OpenVMS Cluster use.
  • PEDRIVER automatically detects and creates a new channel between the local node and each remote cluster node for each unique pair of LAN adapters.
  • Channel viability is monitored continuously.
  • In many cases, channel failure does not interfere with node-to-node (virtual circuit) communications as long as there is at least one remaining functioning channel between the nodes.

3.2.7.2 System Requirements

Configurations for OpenVMS Cluster systems with multiple LAN adapters must meet the following requirements:

  • The MOP server and the system disk server for a given satellite must be connected to the same extended LAN segment. (LANs can be extended using bridges that manage traffic between two or more local LANs.)
  • All nodes must have a direct path to all other nodes. A direct path can be a bridged or a nonbridged LAN segment.

Rule: For each node, DECnet for OpenVMS (Phase IV) and MOP serving (Alpha or VAX, as appropriate) can be performed by only one adapter per extended LAN to prevent LAN address duplication.

3.2.7.3 Guidelines

The following guidelines are for configuring OpenVMS Cluster systems with multiple LAN adapters. If you configure these systems according to the guidelines, server nodes (nodes serving disks, tape, and lock traffic) can typically use some of the additional bandwidth provided by the added LAN adapters and increase the overall performance of the cluster. However, the performance increase depends on the configuration of your cluster and the applications it supports.

Configurations with multiple LAN adapters should follow these guidelines:

  • Connect each LAN adapter to a separate LAN segment. A LAN segment can be bridged or nonbridged. Doing this can help provide higher performance and availability in the cluster. The LAN segments can be Ethernet segments.
  • Distribute satellites equally among the LAN segments. Doing this can help to distribute the cluster load more equally across all of the LAN segments.
  • Systems providing MOP service should be distributed among the LAN segments to ensure that LAN failures do not prevent satellite booting. Systems should be bridged to multiple LAN segments for performance and availability.
  • For the number of LAN adapters supported per node, refer to the OpenVMS Cluster Software SPD.
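Before applying these guidelines, you can inventory the LAN adapters on each node, for example with LANCP (the device name EWA0 is only an example):

    $ MC LANCP SHOW CONFIGURATION                  ! list the LAN devices on this node
    $ MC LANCP SHOW DEVICE/CHARACTERISTICS EWA0    ! details for one adapter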

3.2.8 LAN Examples

Figure 3-1 shows an OpenVMS Cluster system based on a LAN interconnect with a single Alpha server node and a single Alpha system disk.

Figure 3-1 LAN OpenVMS Cluster System with Single Server Node and System Disk


In Figure 3-1, the server node (and its system disk) is a single point of failure. If the server node fails, the satellite nodes cannot access any of the shared disks including the system disk. Note that some of the satellite nodes have locally connected disks. If you convert one or more of these into system disks, satellite nodes can boot from their own local system disk.

3.2.9 Fast Path for LAN Devices

With OpenVMS Version 7.3-2, further enhancements have been made to Fast Path for LAN devices, which will continue to help streamline I/O processing and improve symmetric-multiprocessing (SMP) performance scalability on newer AlphaServer systems. Enhancements include:

  • Reduced contention for the SCS/IOLOCK8 spinlock. The LAN drivers now synchronize using a LAN port-specific spinlock where possible.
  • Offload of the primary CPU. The LAN drivers may be assigned to a secondary CPU so that I/O processing can be initiated and completed on the secondary CPU. This offloads the primary CPU and reduces cache contention between processors.

These features enhance the Fast Path functionality that already exists in the LAN drivers. The enhanced functionality includes additional optimizations, preallocation of resources, and an optimized code path for mainline code.

For more information, see the HP OpenVMS I/O User's Reference Manual.
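As a hedged example of using these enhancements (the device name and CPU number are illustrative, and the FAST_PATH system parameter is assumed to be enabled), a LAN device's Fast Path port can be assigned to a secondary CPU in the same way as PEA0 in Section 3.2.1.1:

    $ ! Show details for the LAN device, including Fast Path information
    $ ! where applicable.
    $ SHOW DEVICE/FULL EWA0:
    $ ! Assign the device's Fast Path port to a secondary CPU to offload
    $ ! the primary CPU.
    $ SET DEVICE EWA0: /PREFERRED_CPUS=3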

3.2.10 LAN Bridge Failover Process

The following table describes how the bridge parameter settings can affect the failover process.

Option Comments
Decreasing the LISTEN_TIME value allows the bridge to detect topology changes more quickly. If you reduce the LISTEN_TIME parameter value, you should also decrease the value for the HELLO_INTERVAL bridge parameter according to the bridge-specific guidelines. However, note that decreasing the value for the HELLO_INTERVAL parameter causes an increase in network traffic.
Decreasing the FORWARDING_DELAY value can cause the bridge to forward packets unnecessarily to the other LAN segment. Unnecessary forwarding can temporarily cause more traffic on both LAN segments until the bridge software determines which LAN address is on each side of the bridge.

Note: If you change a parameter on one LAN bridge, you should change that parameter on all bridges to ensure that selection of a new root bridge does not change the value of the parameter. The actual parameter value the bridge uses is the value specified by the root bridge.

3.2.11 Virtual LAN Support in OpenVMS

Virtual LAN (VLAN) is a mechanism for segmenting a LAN broadcast domain into smaller sections. The IEEE 802.1Q specification defines the operation and behavior of a VLAN. The OpenVMS implementation adds IEEE 802.1Q support to selected OpenVMS LAN drivers so that OpenVMS can now route VLAN tagged packets to LAN applications using a single LAN adapter.

You can use VLAN to do the following:

  • Segment specific LAN traffic on a network for the purposes of network security or traffic containment, or both.
  • Use VLAN isolated networks to simplify address management.

3.2.11.1 VLAN Design

In OpenVMS, VLAN presents a virtual LAN device to LAN applications. The virtual LAN device associates a single IEEE 802.1Q tag with communications over a physical LAN device. The virtual device provides the ability to run any LAN application (for example, SCA, DECnet, TCP/IP, or LAT) over a physical LAN device, allowing host-to-host communications as shown in Figure 3-2.

Note

DECnet-Plus and DECnet Phase IV can be configured to run over a VLAN device.

Figure 3-2 Virtual LAN


OpenVMS VLAN has been implemented through a new driver, SYS$VLANDRIVER.EXE, which provides the virtual LAN devices. Also, existing LAN drivers have been updated to handle VLAN tags. LANCP.EXE and LANACP.EXE have been updated with the ability to create and deactivate VLAN devices and to display status and configuration information.
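As a hedged illustration only (the device names and tag value are placeholders, and the qualifier names /VLAN_DEVICE and /TAG are written from memory and should be checked against LANCP help), creating a VLAN device VLA0 with tag 20 over the physical device EWA0 looks roughly like this:

    $ MC LANCP SET DEVICE VLA0/VLAN_DEVICE=EWA0/TAG=20
    $ MC LANCP SHOW DEVICE VLA0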

The OpenVMS VLAN subsystem was designed with particular attention to performance. Thus, the performance cost of using VLAN support is negligible.

When configuring VLAN devices, remember that VLAN devices share the same locking mechanism as the physical LAN device. For example, running the OpenVMS Cluster protocol on a VLAN device along with the underlying physical LAN device does not provide any additional benefit and might, in fact, hinder performance.

3.2.11.2 VLAN Support Details

All supported Gigabit and 10 Gigabit (Integrity servers only) LAN devices are capable of handling VLAN traffic on Alpha and Integrity server systems.

The following list describes additional details of VLAN-related support:

  • Switch support
    For VLAN configuration, the only requirement of a switch is conformance to the IEEE 802.1Q specification. The VLAN user interface to the switch is not standard; therefore, you must pay special attention when you configure a switch and especially when you configure VLANs across different switches.
  • LAN Failover support
    Figure 3-3 illustrates LAN Failover support.

    Figure 3-3 LAN Failover Support



    You can create VLAN devices using a LAN Failover set as a source if all members of the set are VLAN-capable devices. However, you cannot build a Failover set using VLAN devices.
  • Supported capabilities
    VLAN devices inherit the capability of the underlying physical LAN device, including fast path, auto-negotiation, and jumbo frame setting. If a capability needs to be modified, you must modify the underlying physical LAN device.
  • Restrictions
    No support exists for satellite booting over a VLAN device. The OpenVMS LAN boot drivers do not include VLAN support; therefore, you cannot use a
    VLAN device to boot an OpenVMS system. Currently, no support exists in OpenVMS for automatic configuration of VLAN devices. You must create VLAN devices explicitly using LANCP commands.

3.3 Cluster over IP

OpenVMS Version 8.4 has been enhanced with the Cluster over IP (Internet Protocol) feature. Cluster over IP provides the ability to form clusters beyond a single LAN or VLAN segment using the industry-standard Internet Protocol. This feature provides improved disaster-tolerant capability.

System managers can also manage or monitor an OpenVMS Cluster that uses IP for cluster communication by using the SCACP management utility.

Cluster protocol (SCS, also known as SCA) over LAN is provided by the port emulator driver (PEDRIVER). PEDRIVER uses the User Datagram Protocol (UDP) and IP, in addition to interfacing directly with the LAN using 802.3, for cluster communication as shown in Figure 1-0. The datagram characteristics of UDP, combined with PEDRIVER's built-in reliable delivery mechanism, are used to transport the cluster messages that SYSAPs (system-level applications) use to communicate between two cluster nodes.

Cluster over IP is an optional feature that can be enabled in addition to the traditional LAN-based communication. However, if both LAN and IP modes of communication exist between nodes in a cluster, PEDRIVER prefers LAN communication over IP.
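As a hedged sketch of how the feature is enabled (the value is illustrative, and the IP interface and address details used by PEDRIVER are normally configured through CLUSTER_CONFIG_LAN.COM rather than by hand), the relevant MODPARAMS.DAT entry is:

    ! MODPARAMS.DAT excerpt on each node that will use IP for cluster communication
    NISCS_USE_UDP = 1     ! Load PEDRIVER with UDP/IP support at boot

After editing MODPARAMS.DAT, run AUTOGEN and reboot the node for the change to take effect.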

Note

The terms OpenVMS Cluster over IP and IP Cluster Interconnect (IPCI) are used interchangeably in this document and refer to the use of the TCP/IP stack for cluster communication.

3.3.1 Design

The Cluster over IP solution is an integration of the following:

  • PEDRIVER support for UDP protocol
  • TCP/IP Services boot time loading and initialization

Figure 3-4 shows the cluster over IP architecture.

Figure 3-4 Cluster Communication Design Using IP


3.3.1.1 PEDRIVER Support for UDP

This consists of enhancing PEdriver to use the IP UDP protocol. Some of the features of this solution include:

  • The IP UDP service has the same packet delivery characteristics as 802 LANs. PEDRIVER implements the NISCA transport layer, which provides built-in delay probing, reliable delivery of sequenced messages (retransmission), a datagram service, and variable buffer sizes for block transfers, making it suitable for cluster I/O traffic.
  • The kernel VCI (KVCI) is a kernel-mode, highly efficient interface to the HP OpenVMS TCP/IP Services stack. It is a variant of the VCI interface that PEdriver uses to communicate with the OpenVMS LAN drivers. PEDRIVER interfaces to UDP in the same way that it interfaces to a LAN device.
  • Only the lowest layer of PEDRIVER is extended to support UDP. The PEDRIVER changes are transparent to PEDRIVER's upper layers.
  • A management interface provides the ability to control and configure the IP interfaces used by PEDRIVER.

3.3.1.2 TCP/IP Services Boot Time Loading and Initialization

To ensure that cluster communication is available in an IP-only network environment, it is essential to have the TCP/IP stack loaded when cluster formation starts. This also retains the existing cluster formation functionality of OpenVMS Clusters. The normal booting sequence includes loading of the LAN drivers followed by PEDRIVER. TCP/IP drivers are loaded when TCP/IP services are started. If cluster over IP is enabled, the LAN drivers, TCP/IP execlets, and PEDRIVER are loaded sequentially. Once the system comes up, TCP/IP services can be started to use other TCP/IP components, such as TELNET, FTP, and so on.

Note

Ensure that the TCP/IP software is configured before configuring cluster over IP. To ensure that the network and TCP/IP are configured properly, use the PING utility and ping the node from outside the subnet.
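For example (the node name is a placeholder):

    $ TCPIP SHOW INTERFACE            ! confirm the IP interfaces and addresses
    $ TCPIP PING NODEB.EXAMPLE.COM    ! verify reachability of the remote node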

