
OpenVMS Cluster Systems



2.5 OpenVMS Cluster Membership

OpenVMS Cluster systems based on a LAN use a cluster group number and a cluster password to allow multiple independent OpenVMS Cluster systems to coexist on the same extended LAN and to prevent accidental access to a cluster by unauthorized computers.

2.5.1 Cluster Group Number

The cluster group number uniquely identifies each OpenVMS Cluster system on a LAN. This number must be from 1 to 4095 or from 61440 to 65535.

Rule: If you plan to have more than one OpenVMS Cluster system on a LAN, you must coordinate the assignment of cluster group numbers among system managers.

Note: OpenVMS Cluster systems operating on CI and DSSI do not use cluster group numbers and passwords.

2.5.2 Cluster Password

The cluster password prevents an unauthorized computer that is using the cluster group number from joining the cluster. The password must be from 1 to 31 alphanumeric characters in length and can include dollar signs ($) and underscores (_).

2.5.3 Location

The cluster group number and cluster password are maintained in the cluster authorization file, SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT. This file is created during installation of the operating system if you indicate that you want to set up a cluster that utilizes the LAN. The installation procedure then prompts you for the cluster group number and password.

Note: If you convert an OpenVMS Cluster that uses only the CI or DSSI interconnect to one that includes a LAN interconnect, the SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT file is created when you execute the CLUSTER_CONFIG.COM command procedure, as described in Chapter 8.

Reference: For information about OpenVMS Cluster group data in the CLUSTER_AUTHORIZE.DAT file, see Sections 8.4 and 10.9.
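
If you need to examine or change the cluster group number or password after installation, you can do so with the SYSMAN utility, which updates CLUSTER_AUTHORIZE.DAT. The following is a minimal sketch; the group number and password shown are placeholders, and Sections 8.4 and 10.9 remain the definitive reference for the full procedure and its restrictions.

  $ RUN SYS$SYSTEM:SYSMAN
  SYSMAN> SET ENVIRONMENT/CLUSTER                ! apply the change on all nodes
  SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/GROUP_NUMBER=100/PASSWORD=XYZZY
  SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION  ! displays the group number
  SYSMAN> EXIT
  $ ! The change typically takes effect at the next reboot; see Section 10.9.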

2.5.4 Example

If the nodes in an OpenVMS Cluster do not all have the same cluster password, an error report similar to the following is logged in the error log file.


 V A X / V M S        SYSTEM ERROR REPORT         COMPILED 30-JAN-1994 15:38:03
                                                                      PAGE  19.

 ******************************* ENTRY     161. *******************************
 ERROR SEQUENCE 24.                              LOGGED ON:        SID 12000003
 DATE/TIME 30-JAN-1994 15:35:47.94                            SYS_TYPE 04010002
 SYSTEM UPTIME: 5 DAYS 03:46:21
 SCS NODE: DAISIE                                              VAX/VMS V6.0

 DEVICE ATTENTION  KA46  CPU FW REV# 3.  CONSOLE FW REV# 0.1

 NI-SCS SUB-SYSTEM, DAISIE$PEA0:

       INVALID CLUSTER PASSWORD RECEIVED

       STATUS          00000000
                       00000000
       DATALINK UNIT       0001
       DATALINK NAME   41534503
                       00000000
                       00000000
                       00000000
                                       DATALINK NAME = ESA1:
       REMOTE NODE     554C4306
                       00203132
                       00000000
                       00000000
                                       REMOTE NODE = CLU21
       REMOTE ADDR     000400AA
                           FC15
                                       ETHERNET ADDR = AA-00-04-00-15-FC
       LOCAL ADDR      000400AA
                           4D34
                                       ETHERNET ADDR = AA-00-04-00-34-4D
       ERROR CNT           0001
                                       1. ERROR OCCURRENCES THIS ENTRY
       UCB$W_ERRCNT        0003
                                       3. ERRORS THIS UNIT

2.6 Synchronizing Cluster Functions by the Distributed Lock Manager

The distributed lock manager is an OpenVMS feature for synchronizing functions required by the distributed file system, the distributed job controller, device allocation, user-written OpenVMS Cluster applications, and other OpenVMS products and software components.

The distributed lock manager uses the connection manager and SCS to communicate information between OpenVMS Cluster computers.

2.6.1 Distributed Lock Manager Functions

The functions of the distributed lock manager include the following:

  • Synchronizes access to shared clusterwide resources, including:
    • Devices
    • Files
    • Records in files
    • Any user-defined resources, such as databases and memory

    Each resource is managed clusterwide by an OpenVMS Cluster computer.
  • Implements the $ENQ and $DEQ system services to provide clusterwide synchronization of access to resources by allowing the locking and unlocking of resource names.
    Reference: For detailed information about system services, refer to the OpenVMS System Services Reference Manual.
  • Queues process requests for access to a locked resource. This queuing mechanism allows processes to be put into a wait state until a particular resource is available. As a result, cooperating processes can synchronize their access to shared objects, such as files and records.
  • Releases all locks that an OpenVMS Cluster computer holds if the computer fails. This mechanism allows processing to continue on the remaining computers.
  • Supports clusterwide deadlock detection.
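
Although applications invoke the lock manager through the $ENQ and $DEQ system services, you can observe its clusterwide activity from DCL with the MONITOR utility. This is an observational aid only, not the lock manager interface itself; a brief sketch:

  $ MONITOR DLOCK                ! distributed (clusterwide) lock manager statistics
  $ MONITOR LOCK/INTERVAL=5      ! local lock manager activity, sampled every 5 seconds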

2.6.2 System Management of the Lock Manager

The lock manager is fully automated and usually requires no explicit system management. However, the LOCKDIRWT system parameter can be used to adjust how control of lock resource trees is distributed across the cluster.

The node that controls a lock resource tree is called the resource master. Each resource tree may be mastered by a different node.

For most configurations, large computers and boot nodes perform optimally when LOCKDIRWT is set to 1 and satellite nodes have LOCKDIRWT set to 0. These values are set automatically by the CLUSTER_CONFIG.COM procedure.

In some circumstances, you may want to change the values of the LOCKDIRWT system parameter across the cluster to control which nodes master resource trees. The following list describes how the value of the LOCKDIRWT system parameter affects resource tree mastership:

  • If multiple nodes have locks on a resource tree, the tree is mastered by the node with the highest value for LOCKDIRWT, regardless of actual locking rates.
  • If multiple nodes with the same LOCKDIRWT value have locks on a resource, the tree is mastered by the node with the highest locking rate on that tree.
  • Note that if only one node has locks on a resource tree, it becomes the master of the tree, regardless of the LOCKDIRWT value.

Thus, using varying values for the LOCKDIRWT system parameter, you can implement a resource tree mastering policy that is priority based. Using equal values for the LOCKDIRWT system parameter, you can implement a resource tree mastering policy that is activity based. If necessary, a combination of priority-based and activity-based remastering can be used.
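
LOCKDIRWT is a system parameter, so the conventional way to change it permanently is to add an entry to MODPARAMS.DAT on each node whose value you want to change and then run AUTOGEN. The following entries are a minimal sketch of the priority-based policy described above; the values are illustrative, and CLUSTER_CONFIG.COM normally sets appropriate defaults.

  $ ! In SYS$SYSTEM:MODPARAMS.DAT on a large server or boot node:
  LOCKDIRWT = 1          ! favored to master resource trees
  $ ! In SYS$SYSTEM:MODPARAMS.DAT on a satellite node:
  LOCKDIRWT = 0          ! defers resource tree mastership to the servers
  $ ! Apply the new value with AUTOGEN, then reboot the node:
  $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS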

2.6.3 Large-Scale Locking Applications

The Enqueue process limit (ENQLM), which is set in the SYSUAF.DAT file and which controls the number of locks that a process can own, can be adjusted to meet the demands of large-scale databases and other server applications.

Prior to OpenVMS Version 7.1, the limit was 32767. This limit was removed to enable the efficient operation of large-scale databases and other server applications. A process can now own up to 16,776,959 locks, the architectural maximum. Setting ENQLM in SYSUAF.DAT to 32767 (using the Authorize utility) automatically extends the lock limit to the maximum of 16,776,959 locks. $CREPRC can pass large quotas to the target process if it is initialized from a process with a SYSUAF ENQLM quota of 32767.
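
For example, you might raise the ENQLM quota of a database server account with the Authorize utility; on current versions, 32767 is sufficient because that value is extended automatically as described above. The account name below is hypothetical.

  $ SET DEFAULT SYS$SYSTEM
  $ RUN AUTHORIZE
  UAF> MODIFY DB_SERVER/ENQLM=32767     ! hypothetical server account
  UAF> SHOW DB_SERVER                   ! verify the new quota
  UAF> EXIT
  $ ! The new quota takes effect the next time a process is created for the account.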

Reference: See the OpenVMS Programming Concepts Manual for additional information about the distributed lock manager and resource trees. See the OpenVMS System Manager's Manual for more information about Enqueue Quota.

2.7 Resource Sharing

Resource sharing in an OpenVMS Cluster system is enabled by the distributed file system, RMS, and the distributed lock manager.

2.7.1 Distributed File System

The OpenVMS Cluster distributed file system allows all computers to share mass storage and files. The distributed file system provides the same access to disks, tapes, and files across the OpenVMS Cluster that is provided on a standalone computer.

2.7.2 RMS and Distributed Lock Manager

The distributed file system and OpenVMS Record Management Services (RMS) use the distributed lock manager to coordinate clusterwide file access. RMS files can be shared to the record level.

Any disk or tape can be made available to the entire OpenVMS Cluster system. The storage devices can be:

  • Connected to an HSC, HSJ, HSD, HSG, HSZ, DSSI, or SCSI subsystem
  • A local device that is served to the OpenVMS Cluster

All cluster-accessible devices appear as if they are connected to every computer.
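
For example, a disk volume can be made available to every node currently in the cluster with the MOUNT/CLUSTER command. This is a minimal sketch; the device name, volume label, and logical name are placeholders.

  $ ! Mount the volume clusterwide
  $ MOUNT/CLUSTER $1$DUA20: USER_DATA USER_DISK
  $ ! Later, dismount it on all nodes
  $ DISMOUNT/CLUSTER $1$DUA20: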

2.8 Disk Availability

Locally connected disks can be served across an OpenVMS Cluster by the MSCP server.

2.8.1 MSCP Server

The MSCP server makes locally connected disks, including the following, available across the cluster:

  • DSA disks local to OpenVMS Cluster members using SDI
  • HSC and HSJ disks in an OpenVMS Cluster using mixed interconnects
  • ISE and HSD disks in an OpenVMS Cluster using mixed interconnects
  • SCSI and HSZ disks
  • FC and HSG disks
  • Disks on boot servers and disk servers located anywhere in the OpenVMS Cluster

In conjunction with the disk class driver (DUDRIVER), the MSCP server implements the storage server portion of the MSCP protocol on a computer, allowing the computer to function as a storage controller. The MSCP protocol defines conventions for the format and timing of messages sent and received for certain families of mass storage controllers and devices designed by Compaq. The MSCP server decodes and services MSCP I/O requests sent by remote cluster nodes.

Note: The MSCP server is not used by a computer to access files on locally connected disks.

2.8.2 Device Serving

Once a device is set up to be served:

  • Any cluster member can submit I/O requests to it.
  • The local computer can decode and service MSCP I/O requests sent by remote OpenVMS Cluster computers.
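
You can display which devices the local node is serving, and which served devices are visible to it, with the SHOW DEVICES command. A brief sketch (the output depends on your configuration):

  $ SHOW DEVICES/SERVED          ! devices served by the MSCP server on this node
  $ SHOW DEVICES DU              ! all DU-class (MSCP-served) disks visible from this node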

2.8.3 Enabling the MSCP Server

The MSCP server is controlled by the MSCP_LOAD and MSCP_SERVE_ALL system parameters. The values of these parameters are set initially by answers to questions asked during the OpenVMS installation procedure (described in Section 8.4), or during the CLUSTER_CONFIG.COM procedure (described in Chapter 8).

The default values for these parameters are as follows:

  • The MSCP server is not loaded on satellites.
  • The MSCP server is loaded on boot server and disk server nodes.

Reference: See Section 6.3 for more information about setting system parameters for MSCP serving.
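
If you change these parameters outside of CLUSTER_CONFIG.COM, the conventional method is to add entries to MODPARAMS.DAT and run AUTOGEN. The following sketch is for a disk server; the MSCP_SERVE_ALL value shown (serve all available disks) is one of several documented settings, so check Section 6.3 for the value appropriate to your configuration.

  $ ! In SYS$SYSTEM:MODPARAMS.DAT on a boot or disk server:
  MSCP_LOAD = 1          ! load the MSCP server at boot time
  MSCP_SERVE_ALL = 1     ! serve all available disks (see Section 6.3 for other values)
  $ ! Apply with AUTOGEN and reboot the node:
  $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS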

2.9 Tape Availability

Locally connected tapes can be served across an OpenVMS Cluster by the TMSCP server.

2.9.1 TMSCP Server

The TMSCP server makes locally connected tapes, including the following, available across the cluster:

  • HSC and HSJ tapes
  • ISE and HSD tapes
  • SCSI tapes

The TMSCP server implements the TMSCP protocol, which is used to communicate with a controller for TMSCP tapes. In conjunction with the tape class driver (TUDRIVER), the TMSCP server implements this protocol on a processor, allowing the processor to function as a storage controller.

The processor submits I/O requests to locally accessed tapes, and accepts the I/O requests from any node in the cluster. In this way, the TMSCP server makes locally connected tapes available to all nodes in the cluster. The TMSCP server can also make HSC tapes and DSSI ISE tapes accessible to OpenVMS Cluster satellites.

2.9.2 Enabling the TMSCP Server

The TMSCP server is controlled by the TMSCP_LOAD system parameter. The value of this parameter is set initially by answers to questions asked during the OpenVMS installation procedure (described in Section 4.2.3) or during the CLUSTER_CONFIG.COM procedure (described in Section 8.4). By default, the setting of the TMSCP_LOAD parameter does not load the TMSCP server and does not serve any tapes.
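
To enable tape serving on a node after installation, you can set TMSCP_LOAD in MODPARAMS.DAT (or through CLUSTER_CONFIG.COM) and run AUTOGEN. A minimal sketch follows; as with the MSCP parameters, verify the appropriate value for your configuration.

  $ ! In SYS$SYSTEM:MODPARAMS.DAT on the node with locally connected tapes:
  TMSCP_LOAD = 1         ! load the TMSCP server and serve locally connected tapes
  $ ! Apply with AUTOGEN and reboot the node:
  $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS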

2.10 Queue Availability

The distributed job controller makes queues available across the cluster in order to achieve the following:

  • Permit users on any OpenVMS Cluster computer to submit batch and print jobs to queues that execute on any computer in the OpenVMS Cluster. Users can submit jobs to any queue in the cluster, provided that the necessary mass storage volumes and peripheral devices are accessible to the computer on which the job executes.
  • Distribute the batch and print processing work load over OpenVMS Cluster nodes. System managers can set up generic batch and print queues that distribute processing work loads among computers. The distributed job controller directs batch and print jobs either to the execution queue with the lowest ratio of jobs-to-queue limit or to the next available printer.

The job controller uses the distributed lock manager to signal other computers in the OpenVMS Cluster to examine the batch and print queue jobs to be processed.
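
For example, a system manager might create a batch execution queue on each node and a clusterwide generic queue that feeds them. This is a sketch with hypothetical node and queue names; Chapter 7 describes queue setup in detail.

  $ ! Execution queues, one per node
  $ INITIALIZE/QUEUE/BATCH/ON=NODEA:: NODEA_BATCH
  $ INITIALIZE/QUEUE/BATCH/ON=NODEB:: NODEB_BATCH
  $ START/QUEUE NODEA_BATCH
  $ START/QUEUE NODEB_BATCH
  $ ! Generic queue that distributes jobs across the execution queues
  $ INITIALIZE/QUEUE/BATCH/GENERIC=(NODEA_BATCH,NODEB_BATCH) CLUSTER_BATCH
  $ START/QUEUE CLUSTER_BATCH
  $ ! Users anywhere in the cluster can then submit to the generic queue
  $ SUBMIT/QUEUE=CLUSTER_BATCH MYJOB.COM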

2.10.1 Controlling Queues

To control queues, you use one or several queue managers to maintain a clusterwide queue database that stores information about queues and jobs.

Reference: For detailed information about setting up OpenVMS Cluster queues, see Chapter 7.


Chapter 3
OpenVMS Cluster Interconnect Configurations

This chapter provides an overview of various types of OpenVMS Cluster configurations and the ways they are interconnected.

References: For definitive information about supported OpenVMS Cluster configurations, refer to:

  • OpenVMS Cluster Software Software Product Description (SPD 29.78.xx)
  • Guidelines for OpenVMS Cluster Configurations

3.1 Overview

All Alpha and VAX nodes in any type of OpenVMS Cluster must have direct connections to all other nodes. Sites can choose to use one or more of the following interconnects:

  • LANs
    • ATM
    • Ethernet (10/100 and Gigabit Ethernet)
    • FDDI
  • CI
  • DSSI
  • MEMORY CHANNEL
  • SCSI (requires a second interconnect for node-to-node [SCS] communications)
  • Fibre Channel (requires a second interconnect for node-to-node [SCS] communications)

Processing needs and available hardware resources determine how individual OpenVMS Cluster systems are configured. The configuration discussions in this chapter are based on these physical interconnects.

3.2 OpenVMS Cluster Systems Interconnected by CI

The CI was the first interconnect used for OpenVMS Cluster communications. The CI supports the exchange of information among VAX and Alpha nodes, and HSC and HSJ nodes at the rate of 70 megabits per second on two paths.

3.2.1 Design

The CI is designed for access to storage and for reliable host-to-host communication. CI is a high-performance, highly available way to connect Alpha and VAX nodes to disk and tape storage devices and to each other. An OpenVMS Cluster system based on the CI for cluster communications uses star couplers as common connection points for computers, and HSC and HSJ subsystems.

3.2.2 Example

Figure 3-1 shows how the CI components are typically configured.

Figure 3-1 OpenVMS Cluster Configuration Based on CI


Note: If you want to add workstations to a CI OpenVMS Cluster system, you must utilize an additional type of interconnect, such as Ethernet or FDDI, in the configuration. Workstations are typically configured as satellites in an OpenVMS Cluster system (see Section 3.4.4).

Reference: For instructions on adding satellites to an existing CI OpenVMS Cluster system, refer to Section 8.2.

3.2.3 Star Couplers

What appears to be a single point of failure in the CI configuration in Figure 3-1 is the star coupler that connects all the CI lines. In reality, the star coupler is not a single point of failure because there are actually two star couplers in every cabinet.

Star couplers are also immune to power failures because they contain no powered components but are constructed as sets of high-frequency pulse transformers. Because they do no processing or buffering, star couplers also are not I/O throughput bottlenecks; they operate at the full-rated speed of the CI cables. However, under very heavy I/O loads that exceed the bandwidth of a single CI, multiple star couplers may be required.

3.3 OpenVMS Cluster Systems Interconnected by DSSI

The DIGITAL Storage Systems Interconnect (DSSI) is a medium-bandwidth interconnect that Alpha and VAX nodes can use to access disk and tape peripherals. Each peripheral is an integrated storage element (ISE) that contains its own controller and its own MSCP server that works in parallel with the other ISEs on the DSSI.

3.3.1 Design

Although the DSSI is designed primarily to access disk and tape storage, it has proven an excellent way to connect small numbers of nodes using the OpenVMS Cluster protocols. Each DSSI port connects to a single DSSI bus. As in the case of the CI, several DSSI ports can be connected to a node. However, unlike the CI, a DSSI bus does not provide redundant paths.

3.3.2 Availability

OpenVMS Cluster configurations using ISE devices and the DSSI bus offer high availability, flexibility, growth potential, and ease of system management.

DSSI nodes in an OpenVMS Cluster configuration can access a common system disk and all data disks directly on a DSSI bus and serve them to satellites. Satellites (and users connected through terminal servers) can access any disk through any node designated as a boot server. If one of the boot servers fails, applications on satellites continue to run because disk access fails over to the other server. Although applications running on nonintelligent devices, such as terminal servers, are interrupted, users of terminals can log in again and restart their jobs.

3.3.3 Guidelines

Generic configuration guidelines for DSSI OpenVMS Cluster systems are as follows:

  • Currently, a total of four Alpha and/or VAX nodes can be connected to a common DSSI bus.
  • Multiple DSSI buses can operate in an OpenVMS Cluster configuration, thus dramatically increasing the amount of storage that can be configured into the system.

References: Some restrictions apply to the type of CPUs and DSSI I/O adapters that can reside on the same DSSI bus. Consult your service representative or see the OpenVMS Cluster Software Software Product Description (SPD) for complete and up-to-date configuration details about DSSI OpenVMS Cluster systems.

3.3.4 Example

Figure 3-2 shows a typical DSSI configuration.

Figure 3-2 DSSI OpenVMS Cluster Configuration


3.4 OpenVMS Cluster Systems Interconnected by LANs

The Ethernet (10/100 and Gigabit), FDDI, and ATM interconnects are industry-standard local area networks (LANs) that are generally shared by a wide variety of network consumers. When OpenVMS Cluster systems are based on a LAN, cluster communications are carried out by a port driver (PEDRIVER) that emulates CI port functions.

3.4.1 Design

The OpenVMS Cluster software is designed to use the Ethernet, ATM, and FDDI ports and interconnects simultaneously with the DECnet, TCP/IP, and SCS protocols. This is accomplished by allowing LAN data link software to control the hardware port. This software provides a multiplexing function so that the cluster protocols are simply another user of a shared hardware resource. See Figure 2-1 for an illustration of this concept.
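
The PEDRIVER port appears on each LAN-connected cluster member as the PEA0 device, the same device named in the error log example in Section 2.5.4. You can confirm that the cluster port is present and examine its counters with a standard device display; a brief sketch:

  $ SHOW DEVICE PEA0:            ! SCS port device created by PEDRIVER
  $ SHOW DEVICE/FULL PEA0:       ! includes operation and error counts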

3.4.2 Cluster Group Numbers and Cluster Passwords

A single LAN can support multiple LAN-based OpenVMS Cluster systems. Each OpenVMS Cluster is identified and secured by a unique cluster group number and a cluster password. Chapter 2 describes cluster group numbers and cluster passwords in detail.

3.4.3 Servers

OpenVMS Cluster computers interconnected by a LAN are generally configured as either servers or satellites. The following list describes the types of servers:

  • MOP servers: Downline load the OpenVMS boot driver to satellites by means of the Maintenance Operations Protocol (MOP).
  • Disk servers: Use MSCP server software to make their locally connected disks and any CI or DSSI connected disks available to satellites over the LAN.
  • Tape servers: Use TMSCP server software to make their locally connected tapes and any CI or DSSI connected tapes available to satellite nodes over the LAN.
  • Boot servers: A combination of a MOP server and a disk server that serves one or more Alpha or VAX system disks. Boot and disk servers make user and application data disks available across the cluster. These servers should be the most powerful computers in the OpenVMS Cluster and should use the highest-bandwidth LAN adapters in the cluster. Boot servers must always run the MSCP server software.

