
Guidelines for OpenVMS Cluster Configurations



C.3 Configuration Support and Restrictions

The CIPCA adapter is supported by AlphaServers with PCI buses, by CI-connected VAX host systems, by storage controllers, and by the CI star coupler expander.

C.3.1 AlphaServer Support

Table C-2 describes CIPCA support on AlphaServer systems with PCI buses, including the maximum number of CIPCAs supported on each system.

Table C-2 AlphaServer Support for CIPCAs

System                             Maximum CIPCAs   Comments
AlphaServer 8400                   26               Can use a combination of CIPCA and
                                                    CIXCD adapters, not to exceed 26.
                                                    Prior to OpenVMS Version 7.1, the
                                                    maximum is 10.
AlphaServer 8200                   26               Prior to OpenVMS Version 7.1, the
                                                    maximum is 10.
AlphaServer 4000, 4100             3                When using three CIPCAs, one must be
                                                    a CIPCA-AA and two must be CIPCA-BA.
AlphaServer 4000 plus I/O          6                When using six CIPCAs, only three
expansion module                                    can be CIPCA-AA.
AlphaServer 1200                   2                First supported in OpenVMS Version
                                                    7.1-1H1.
AlphaServer 2100A                  3
AlphaServer 2000, 2100             2                Only one can be a CIPCA-BA.

C.3.2 CI-Connected Host System Compatibility

For CI-connected host systems, CIPCA is supported by any OpenVMS VAX host using CIXCD or CIBCA-B as well as by any OpenVMS Alpha server host using CIPCA or CIXCD. This means that an Alpha server using the CIPCA adapter can coexist on a CI bus with VAX systems using CIXCD and CIBCA-B CI adapters.

The maximum number of systems supported in an OpenVMS Cluster system, 96, is not affected by the use of one or more CIPCAs, although the maximum number of CI nodes is limited to 16 (see Section C.3.4).

C.3.3 Storage Controller Support

The CIPCA adapter can coexist on a CI bus with all variants of the HSC/HSJ controllers except the HSC50. Certain controllers require specific firmware and hardware, as shown in Table C-3.

Table C-3 Controller Requirements for Supporting CIPCA

Controller      Requirement
HSJ30, HSJ40    HSOF Version 2.5 (or higher) firmware
HSC40, HSC70    Revision F (or higher) L109 module
HSJ80           ACS V8.5J (or higher) firmware

C.3.4 Star Coupler Expander Support

A CI star coupler expander (CISCE) can be added to any star coupler to increase its connection capacity to 32 ports. The maximum number of CPUs that can be connected to a star coupler is 16, regardless of the number of ports.

C.3.5 Configuration Restrictions

Note the following configuration restrictions:

CIPCA-AA with EISA-Slot Link Module Rev. A01

For the CIPCA-AA adapter with the EISA-slot link module Rev. A01, use the DIP switch settings described here to prevent arbitration timeout errors. Under heavy CI loads, arbitration timeout errors can cause CI path errors and CI virtual circuit closures.

The DIP switch settings on the CIPCA-AA link module are used to specify cluster size and the node address. Follow these instructions when setting the DIP switches for link module Rev. A01 only:

  • If the cluster size is set to 16, do not set a CI adapter to node address 15 on that star coupler.
  • If the cluster size is set to 32, do not set a CI adapter to node address 31 on that star coupler. In addition, if any CIPCA is set to node address 0, do not set any CI adapter to node address 16 on that star coupler.

These restrictions do not apply to the EISA slot link module Rev. B01 and higher or to the PCI-slot link module of the CIPCA-BA.

HSJ50 Firmware Requirement for Use of 4K CI Packets

Do not attempt to enable the use of 4K CI packets by the HSJ50 controller unless the HSJ50 firmware is Version 5.0J-3 or higher. If the HSJ50 firmware version is lower than Version 5.0J-3 and 4K CI packets are enabled, data can become corrupted. If your HSJ50 firmware does not meet this requirement, contact your HP support representative.

C.4 Installation Requirements

When installing CIPCA adapters in your cluster, observe the following version-specific requirements.

C.4.1 Managing Bus Addressable Pool (BAP) Size

The CIPCA, CIXCD, and KFMSB adapters use bus-addressable pool (BAP). Starting with OpenVMS Version 7.1, AUTOGEN controls the allocation of BAP. After installing or upgrading the operating system, you must run AUTOGEN with the FEEDBACK qualifier. When you run AUTOGEN in this way, the following four system parameters are set:

  • NPAG_BAP_MIN
  • NPAG_BAP_MAX
  • NPAG_BAP_MIN_PA
  • NPAG_BAP_MAX_PA
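
The following is a minimal sketch of one way to do this; the AUTOGEN phase range shown (SAVPARAMS through SETPARAMS) and the use of SYSGEN to display the resulting values are examples only, and your site procedures may differ:

$ ! Run AUTOGEN in feedback mode through the SETPARAMS phase
$ @SYS$UPDATE:AUTOGEN SAVPARAMS SETPARAMS FEEDBACK
$ ! Display the resulting BAP-related parameter values
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW NPAG_BAP_MIN
SYSGEN> SHOW NPAG_BAP_MAX
SYSGEN> SHOW NPAG_BAP_MIN_PA
SYSGEN> SHOW NPAG_BAP_MAX_PA
SYSGEN> EXIT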

The BAP allocation amount depends on the adapter type, the number of adapters, and the version of the operating system. The size of physical memory determines whether the BAP remains separate or is merged with normal, nonpaged dynamic memory (NPAGEDYN), as shown in Table C-4.

Table C-4 BAP Allocation by Adapter Type and OpenVMS Version

Adapter   Version 7.1   Version 7.2   Separate BAP or Merged
CIPCA     4 MB          2 MB          Separate if physical memory > 1 GB; otherwise merged
CIXCD     4 MB          2 MB          Separate if physical memory > 4 GB; otherwise merged
KFMSB     8 MB          4 MB          Separate if physical memory > 4 GB; otherwise merged

For systems whose BAP is merged with nonpaged pool, the initial amount and maximum amount of nonpaged pool (as displayed by the DCL command SHOW MEMORY/POOL/FULL) do not match the value of the SYSGEN parameters NPAGEDYN and NPAGEVIR. Instead, the value of SYSGEN parameter NPAG_BAP_MIN is added to NPAGEDYN to determine the initial size, and the value of NPAG_BAP_MAX is added to NPAGEVIR to determine the maximum size.

Your OpenVMS system may not require as much merged pool as the sum of these SYSGEN parameters. After your system has been running a few days, use AUTOGEN with the FEEDBACK qualifier to fine-tune the amount of memory allocated for the merged, nonpaged pool.
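
For example, the following sketch illustrates this check on a system with merged pool; the parameter values in the comments are hypothetical and are shown only to illustrate the arithmetic:

$ ! Display nonpaged pool sizes. On a merged-pool system, the initial
$ ! and maximum sizes reflect NPAGEDYN + NPAG_BAP_MIN and
$ ! NPAGEVIR + NPAG_BAP_MAX, respectively.
$ SHOW MEMORY/POOL/FULL
$ !
$ ! Hypothetical arithmetic: NPAGEDYN = 6291456 bytes and
$ ! NPAG_BAP_MIN = 2097152 bytes give an initial merged pool of
$ ! 8388608 bytes.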

C.4.2 AUTOCONFIGURE Restriction for OpenVMS Version 6.2-1H2 and OpenVMS Version 6.2-1H3

When you perform a normal installation boot, AUTOCONFIGURE runs automatically. AUTOCONFIGURE is run from SYS$STARTUP:VMS$DEVICE_STARTUP.COM (called from SYS$SYSTEM:STARTUP.COM), unless disabled by SYSMAN. If you are running OpenVMS Version 6.2-1H2 or OpenVMS Version 6.2-1H3 and you have customized your booting sequence, make sure that AUTOCONFIGURE runs or that you explicitly configure all CIPCA devices before SYSTARTUP_VMS.COM exits.
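
For example, if the normal AUTOCONFIGURE pass has been disabled, one way to configure devices explicitly before SYSTARTUP_VMS.COM exits is a SYSMAN invocation such as the following sketch; adapt it to your own startup procedures:

$ ! Explicitly autoconfigure devices (including CIPCA adapters)
$ ! before SYSTARTUP_VMS.COM exits.
$ MCR SYSMAN IO AUTOCONFIGURE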

C.5 DECevent for Analyzing CIPCA Errors

To analyze error log files for CIPCA errors, use DECevent. The DCL command ANALYZE/ERROR_LOG has not been updated to support CIPCA and other new devices; using that command will result in improperly formatted error log entries.

Install the DECevent kit supplied on the OpenVMS Alpha CD-ROM. Then use the following DCL commands to invoke DECevent to analyze error log files:

  • DIAGNOSE --- Analyzes the current system error log file
  • DIAGNOSE filename --- Analyzes the error log file named filename.sys
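
For example (the file specification shown is hypothetical):

$ DIAGNOSE                          ! Analyze the current error log file
$ DIAGNOSE SYS$ERRORLOG:ERRLOG_OLD  ! Analyze ERRLOG_OLD.SYS (hypothetical file)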

For more information about using DECevent, enter the DCL command HELP DIAGNOSE.

C.6 Performance Recommendations

To enhance performance, follow the recommendations that pertain to your configuration.

C.6.1 Synchronous Arbitration

CIPCA uses a new, more efficient CI arbitration algorithm called synchronous arbitration in place of the older asynchronous arbitration algorithm. The two algorithms are completely compatible. Under CI saturation conditions, the old and new algorithms are equivalent, providing equitable round-robin access to all nodes. With less traffic, however, the new algorithm provides the following benefits:

  • Reduced packet transmission latency due to reduced average CI arbitration time.
  • Increased node-to-node throughput.
  • Complete elimination of CI collisions that waste bandwidth and increase latency in configurations containing only synchronous arbitration nodes.
  • Reduced CI collision rate in configurations with mixed synchronous and asynchronous arbitration CI nodes. The reduction is roughly proportional to the fraction of CI packets being sent by the synchronous arbitration CI nodes.

Support for synchronous arbitration is latent in the HSJ controller family. In configurations containing both CIPCAs and HSJ controllers, enabling the HSJs to use synchronous arbitration is recommended.

The HSJ CLI command to do this is:


CLI> SET THIS CI_ARB = SYNC

This command will take effect upon the next reboot of the HSJ.

C.6.2 Maximizing CIPCA Performance With an HSJ50 and an HSJ80

To maximize the performance of the CIPCA adapter with an HSJ50 or HSJ80 controller, it is advisable to enable the use of 4K CI packets by the HSJ50 or HSJ80. For the HSJ50, the firmware revision level must be at Version 5.0J-3 or higher. For the HSJ80, the firmware revision level must be at ACS V8.5J or higher.
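
You can check the installed firmware revision from the controller console before enabling 4K packets; a minimal check, assuming the standard HSOF console CLI, is:

CLI> SHOW THIS_CONTROLLER

The resulting display includes the controller firmware version.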

Caution

Do not attempt to use 4K CI packets if your HSJ50 firmware revision level is not Version 5.0J-3 or higher, because data can become corrupted.

To enable the use of 4K CI packets, specify the following command at the HSJ50 or HSJ80 console prompt:


CLI> SET THIS_CONTROLLER CI_4K_PACKET_CAPABILITY

This command takes effect when the HSJ50 or HSJ80 controller is rebooted.


Appendix D
Multiple-Site OpenVMS Clusters

This appendix describes multiple-site OpenVMS Cluster configurations, in which multiple nodes are located at sites separated by relatively long distances, from approximately 25 to 125 miles depending on the technology used. This type of configuration was introduced in OpenVMS Version 6.2. The appendix provides general configuration guidelines, discusses the three technologies for connecting multiple sites, cites the benefits of multiple-site clusters, and provides pointers to additional documentation.

The information in this appendix supersedes the Multiple-Site VMScluster Systems addendum manual.

D.1 What is a Multiple-Site OpenVMS Cluster System?

A multiple-site OpenVMS Cluster system is an OpenVMS Cluster system in which the member nodes are located in geographically separate sites. Depending on the technology used, the distances can be as great as 150 miles.

When an organization has geographically dispersed sites, a multiple-site OpenVMS Cluster system allows the organization to realize the benefits of OpenVMS Cluster systems (for example, sharing data among sites while managing data center operations at a single, centralized location).

Figure D-1 illustrates the concept of a multiple-site OpenVMS Cluster system for a company with a manufacturing site located in Washington, D.C., and corporate headquarters in Philadelphia. This configuration spans a geographical distance of approximately 130 miles (210 km).

Figure D-1 Site-to-Site Link Between Philadelphia and Washington


D.1.1 ATM, DS3, and FDDI Intersite Links

The following link technologies between sites are approved for OpenVMS VAX and OpenVMS Alpha systems:

  • Asynchronous transfer mode (ATM)
  • DS3
  • FDDI

High-performance local area network (LAN) technology combined with the ATM, DS3, and FDDI interconnects allows you to use wide area network (WAN) communication services in your OpenVMS Cluster configuration. OpenVMS Cluster systems configured with the GIGAswitch crossbar switch and ATM, DS3, or FDDI interconnects allow the use of nodes located miles apart. (The actual distance between any two sites is determined by the physical intersite cable-route distance, not by the straight-line distance between the sites.) Section D.3 describes OpenVMS Cluster systems and the WAN communications services in more detail.

Note

To gain the benefits of disaster tolerance across a multiple-site OpenVMS Cluster, use Disaster Tolerant Cluster Services for OpenVMS, a system management and software package from HP.

Consult your HP Services representative for more information.

D.1.2 Benefits of Multiple-Site OpenVMS Cluster Systems

Some of the benefits you can realize with a multiple-site OpenVMS Cluster system include the following:

Remote satellites and nodes
    A few systems can be remotely located at a secondary site and can benefit from centralized system management and other resources at the primary site, as shown in Figure D-2. For example, a main office data center could be linked to a warehouse or a small manufacturing site that could have a few local nodes with directly attached site-specific devices. Alternatively, some engineering workstations could be installed in an office park across the city from the primary business site.

Data center management consolidation
    A single management team can manage nodes located in data centers at multiple sites.

Physical resource sharing
    Multiple sites can readily share devices such as high-capacity computers, tape libraries, disk archives, or phototypesetters.

Remote archiving
    Backups can be made to archival media at any site in the cluster. A common example would be to use disk or tape at a single site to back up the data for all sites in the multiple-site OpenVMS Cluster. Backups of data from remote sites can be made transparently (that is, without any intervention required at the remote site).

Increased availability
    In general, a multiple-site OpenVMS Cluster provides all of the availability advantages of a LAN OpenVMS Cluster. Additionally, by connecting multiple, geographically separate sites, multiple-site OpenVMS Cluster configurations can increase the availability of a system or elements of a system in a variety of ways:

      • Logical volume/data availability---Volume shadowing or redundant arrays of independent disks (RAID) can be used to create logical volumes with members at both sites. If one of the sites becomes unavailable, data can remain available at the other site.
      • Site failover---By adjusting the VOTES system parameter, you can select a preferred site to continue automatically if the other site fails or if communications with the other site are lost.
      • Disaster tolerance---When combined with the software, services, and management procedures provided by Disaster Tolerant Cluster Services for OpenVMS, you can achieve a high level of disaster tolerance. Consult your HP Services representative for further information.

Figure D-2 shows an OpenVMS Cluster system with satellites accessible from a remote site.

Figure D-2 Multiple-Site OpenVMS Cluster Configuration with Remote Satellites


D.1.3 General Configuration Guidelines

The same configuration rules that apply to OpenVMS Cluster systems on a LAN also apply to a multiple-site OpenVMS Cluster configuration that includes ATM, DS3, or FDDI intersite interconnect. General LAN configuration rules are stated in the following documentation:

  • OpenVMS Cluster Software Software Product Description (SPD 29.78.xx)
  • Chapter 8 of this manual

Some configuration guidelines are unique to multiple-site OpenVMS Clusters; these guidelines are described in Section D.3.4.

D.2 Using FDDI to Configure Multiple-Site OpenVMS Cluster Systems

Since VMS Version 5.4-3, FDDI has been the most common method for connecting two distant OpenVMS Cluster sites. Using high-speed FDDI fiber-optic cables, you can connect sites with an intersite cable-route distance of up to 25 miles (40 km).

You can connect sites using these FDDI methods:

  • To obtain maximum intersite bandwidth, use a full-duplex FDDI link operating at 100 Mb/s in each direction between GIGAswitch/FDDI bridges at each site.
  • To obtain maximum link availability, use a dual FDDI ring at 100 Mb/s between the dual attachment station (DAS) ports of wiring concentrators or GIGAswitch/FDDI bridges.
  • For maximum performance and availability, use two disjoint FDDI LANs, each with dedicated host adapters and full-duplex FDDI intersite links connected to GIGAswitch/FDDI bridges at each site.

Additional OpenVMS Cluster configuration guidelines and system management information can be found in this manual and in HP OpenVMS Cluster Systems. See the HP OpenVMS Version 8.2 Release Notes for information about ordering the current version of these manuals.

The inherent flexibility of OpenVMS Cluster systems and improved OpenVMS Cluster LAN protocols also allow you to connect multiple OpenVMS Cluster sites using the ATM and DS3 communications services, individually or together.

D.3 Using WAN Services to Configure Multiple-Site OpenVMS Cluster Systems

This section provides an overview of the ATM and DS3 wide area network (WAN) services, describes how you can bridge an FDDI interconnect to the ATM and DS3 communications services (individually or together), and provides guidelines for using these services to configure multiple-site OpenVMS Cluster systems.

The ATM and DS3 services provide long-distance, point-to-point communications that you can configure into your OpenVMS Cluster system to gain WAN connectivity. The ATM and DS3 services are available from most common telephone service carriers and other sources.

Note

DS3 is not available in Europe and some other locations. Also, ATM is a new and evolving standard, and ATM services might not be available in all localities.

ATM and DS3 services are approved for use with the following OpenVMS versions:

Service   Approved Versions of OpenVMS
ATM       OpenVMS Version 6.2 or later
DS3       OpenVMS Version 6.1 or later

The following sections describe the ATM and DS3 communication services and how to configure these services into multiple-site OpenVMS Cluster systems.

D.3.1 The ATM Communications Service

The ATM communications service that uses the SONET physical layer (ATM/SONET) provides full-duplex communications (that is, the bit rate is available simultaneously in both directions as shown in Figure D-3). ATM/SONET is compatible with multiple standard bit rates. The SONET OC-3 service at 155 Mb/s full-duplex rate is the best match to FDDI's 100 Mb/s bit rate. ATM/SONET OC-3 is a standard service available in most parts of the world. In Europe, ATM/SONET is a high-performance alternative to the older E3 standard.

Figure D-3 ATM/SONET OC-3 Service


To transmit data, the ATM service breaks frames (packets) into cells. Each cell has 53 bytes, of which 5 bytes are reserved for header information and 48 bytes are available for data. At the destination, the cells are reassembled into frames. The use of cells permits ATM suppliers to multiplex and demultiplex multiple data streams efficiently at differing bit rates. The conversion of frames into cells and back is transparent to higher layers.
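
For illustration, and ignoring adaptation-layer overhead: the 5-byte header accounts for 5/53, or roughly 9.4 percent, of each cell, and a 4096-byte frame occupies ceiling(4096/48) = 86 cells, or 4558 bytes on the wire, with the last cell padded.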

D.3.2 The DS3 Communications Service (T3 Communications Service)

The DS3 communications service provides full-duplex communications as shown in Figure D-4. DS3 (also known as T3) provides the T3 standard bit rate of 45 Mb/s. T3 is the standard service available in North America and many other parts of the world.

Figure D-4 DS3 Service


D.3.3 FDDI-to-WAN Bridges

You can use FDDI-to-WAN bridges (for example, FDDI-to-ATM or FDDI-to-DS3 bridges, or both) to configure an OpenVMS Cluster with nodes in geographically separate sites, such as the configuration shown in Figure D-5. In this figure, the OpenVMS Cluster nodes at each site communicate as though the two sites were connected by FDDI. The FDDI-to-WAN bridges make the existence of ATM and DS3 transparent to the OpenVMS Cluster software.

Figure D-5 Multiple-Site OpenVMS Cluster Configuration Connected by DS3


In Figure D-5, the FDDI-to-DS3 bridges and DS3 operate as follows:

  1. The local FDDI-to-DS3 bridge receives FDDI packets addressed to nodes at the other site.
  2. The bridge converts the FDDI packets into DS3 packets and sends the packets to the other site via the DS3 link.
  3. The receiving FDDI-to-DS3 bridge converts the DS3 packets into FDDI packets and transmits them on an FDDI ring at that site.

HP recommends using the GIGAswitch/FDDI system to construct FDDI-to-WAN bridges. The GIGAswitch/FDDI, combined with the DEFGT WAN T3/SONET option card, was used during qualification testing of the ATM and DS3 communications services in multiple-site OpenVMS Cluster systems.

