
HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations



11.6 State Transition Strategies

OpenVMS Cluster state transitions occur when a system joins or leaves an OpenVMS Cluster system and when the OpenVMS Cluster recognizes a quorum-disk state change. The connection manager handles these events to ensure the preservation of data integrity throughout the OpenVMS Cluster.

State transitions should be a concern only if systems are joining or leaving an OpenVMS Cluster system frequently enough to cause disruption.

A state transition's duration and its effect on users and applications are determined by the reason for the transition, the configuration, and the applications in use. By managing transitions effectively, system managers can control:

  • Detection of failures and how long the transition takes
  • Side effects of the transition, such as volume shadowing copy and merge operations

11.6.1 Dealing with State Transitions

The following guidelines describe effective ways of dealing with transitions so that you can minimize the actual transition time as well as the side effects after the transition.

  • Be proactive in preventing nodes from leaving an OpenVMS Cluster by:
    • Providing interconnect redundancy between all systems.
    • Preventing resource exhaustion of disks and memory as well as saturation of interconnects, processors, and adapters.
    • Using an uninterruptible power supply (UPS).
    • Informing users that shutting off a workstation in a large OpenVMS Cluster disrupts the operation of all systems in the cluster.
  • Do not use a quorum disk unless your OpenVMS Cluster has only two nodes.
  • Where possible, ensure that shadow set members reside on shared buses to increase availability.
  • The time to detect the failure of nodes, disks, adapters, interconnects, and virtual circuits is controlled by system polling parameters. Reducing polling time makes the cluster react quickly to changes, but it also results in lower tolerance to temporary outages. When setting timers, try to strike a balance between rapid recovery from significant failures and "nervousness" resulting from temporary failures.
    Table 11-5 describes the OpenVMS Cluster polling parameters that you can adjust for quicker detection time; a sketch of one way to examine and set them follows this list. HP recommends that these parameters be set to the same value on every OpenVMS Cluster member.

    Table 11-5 OpenVMS Cluster Polling Parameters
    Parameter Description
    QDSKINTERVAL Specifies the quorum disk polling interval.
    RECNXINTERVL Specifies the interval during which the connection manager attempts to restore communication to another system.
    TIMVCFAIL Specifies the time required for detection of a virtual circuit failure.
  • Include application recovery in your plans. When you assess the effect of a state transition on application users, consider that the application recovery phase includes activities such as replaying a journal file, cleaning up recovery units, and users logging in again.
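
As a sketch of one way to examine and align the polling parameters in Table 11-5 (using the standard SYSMAN and AUTOGEN utilities; the values shown are placeholders rather than recommendations), you might display the current settings clusterwide and then record consistent values in MODPARAMS.DAT on each member:

    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> PARAMETERS USE ACTIVE
    SYSMAN> PARAMETERS SHOW QDSKINTERVAL
    SYSMAN> PARAMETERS SHOW RECNXINTERVL
    SYSMAN> PARAMETERS SHOW TIMVCFAIL
    SYSMAN> EXIT
    $ ! Add consistent entries (placeholder values) to SYS$SYSTEM:MODPARAMS.DAT
    $ ! on each member, for example:
    $ !     RECNXINTERVL = 20
    $ !     TIMVCFAIL = 1600
    $ ! Then run AUTOGEN so the values are applied at the next reboot:
    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK

Consult HP OpenVMS Cluster Systems for the units and interactions of these parameters before choosing values.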

Reference: For more detailed information about OpenVMS Cluster transitions and their phases, system parameters, and quorum management, see HP OpenVMS Cluster Systems.

11.7 Migration and Warranted Support for Multiple Versions

HP provides two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems.

Warranted support means that HP has fully qualified different versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations. The warranted configurations for this release include:

  • Alpha Version 8.2 and I64 Version 8.2
  • Alpha Version 7.3-2 and I64 Version 8.2
  • Alpha Version 7.3-2 and Alpha Version 8.2
  • Alpha Version 7.3-2, Alpha Version 8.2, and I64 Version 8.2
  • Alpha Version 7.3-2 and VAX Version 7.3
  • VAX Version 7.3 and Alpha Version 8.2

Migration support helps customers move to warranted OpenVMS Cluster configurations with minimal impact on their cluster environments. Migration support means that HP has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by HP. However, in exceptional cases, HP might request that you move to a warranted configuration as part of the solution.

This release of OpenVMS includes no configurations specific to migration support.

In a mixed-version cluster, you must install remedial kits on earlier versions of OpenVMS. For a complete list of required remedial kits, see the HP OpenVMS Version 8.2 Release Notes.

11.8 Alpha, VAX, and I64 Systems in the Same OpenVMS Cluster

A combination of OpenVMS Alpha and OpenVMS VAX systems or a combination of OpenVMS Alpha and OpenVMS I64 systems can work together in the same OpenVMS Cluster to provide both flexibility and migration capability. In addition, using different platforms enables you to use applications that are system specific or hardware specific.

11.8.1 OpenVMS Cluster Satellite Booting Across Architectures

OpenVMS Alpha Version 7.1 (and higher) and OpenVMS VAX Version 7.1 (and higher) enable VAX boot nodes to provide boot service to Alpha satellites and Alpha boot nodes to provide boot service to VAX satellites. This support, called cross-architecture booting, increases configuration flexibility and provides higher availability of boot servers for satellites.

Two configuration scenarios make cross-architecture booting desirable:

  • You want the Alpha system disk configured in the same highly available and high-performance area as your VAX system disk.
  • Your Alpha boot server shares CI or DSSI storage with the VAX boot server. If your only Alpha boot server fails, you want to be able to reboot an Alpha satellite before the Alpha boot server reboots.

11.8.2 Restrictions

You cannot perform OpenVMS operating system and layered product installations and upgrades across architectures. For example, you must install and upgrade OpenVMS Alpha software using an Alpha system. When you configure OpenVMS Cluster systems that take advantage of cross-architecture booting, ensure that at least one system from each architecture is configured with a disk that can be used for installations and upgrades.

System disks can contain only a single version of the OpenVMS operating system and are architecture-specific. For example, OpenVMS VAX Version 7.3 cannot coexist on a system disk with OpenVMS Alpha Version 7.3.


Appendix A
SCSI as an OpenVMS Cluster Interconnect

One of the benefits of OpenVMS Cluster systems is that multiple computers can simultaneously access storage devices connected to an OpenVMS Cluster storage interconnect. Together, these systems provide high performance and highly available access to storage.

This appendix describes how OpenVMS Cluster systems support the Small Computer Systems Interface (SCSI) as a storage interconnect. Multiple Alpha computers, also referred to as hosts or nodes, can simultaneously access SCSI disks over a SCSI interconnect. Such a configuration is called a SCSI multihost OpenVMS Cluster. A SCSI interconnect, also called a SCSI bus, is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components.

The discussions in this chapter assume that you already understand the concept of sharing storage resources in an OpenVMS Cluster environment. OpenVMS Cluster concepts and configuration requirements are also described in the following OpenVMS Cluster documentation:

  • HP OpenVMS Cluster Systems
  • OpenVMS Cluster Software Software Product Description (SPD 29.78.xx)

This appendix includes two primary parts:

  • Section A.1 through Section A.6.6 describe the fundamental procedures and concepts that you need to plan and implement a SCSI multihost OpenVMS Cluster system.
  • Section A.7 and its subsections provide additional technical detail and concepts.

A.1 Conventions Used in This Appendix

Certain conventions are used throughout this appendix to identify the applicable ANSI standards and the elements used in figures.

A.1.1 SCSI ANSI Standard

OpenVMS Cluster systems configured with the SCSI interconnect must use standard SCSI-2 or SCSI-3 components. The SCSI-2 components must be compliant with the architecture defined in the American National Standards Institute (ANSI) Standard SCSI-2, X3T9.2, Rev. 10L. The SCSI-3 components must be compliant with approved versions of the SCSI-3 Architecture and Command standards. For ease of discussion, this appendix uses the term SCSI to refer to both SCSI-2 and SCSI-3.

A.1.2 Symbols Used in Figures

Figure A-1 is a key to the symbols used in figures throughout this appendix.

Figure A-1 Key to Symbols Used in Figures


A.2 Accessing SCSI Storage

In OpenVMS Cluster configurations, multiple VAX and Alpha hosts can directly access SCSI devices in any of the following ways:

  • CI interconnect with HSJ or HSC controllers
  • Digital Storage Systems Interconnect (DSSI) with HSD controller
  • SCSI adapters directly connected to VAX or Alpha systems

You can also access SCSI devices indirectly using the OpenVMS MSCP server.
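
For example, serving local SCSI disks to other cluster members through the MSCP server is controlled by system parameters. The following MODPARAMS.DAT excerpt is a minimal sketch (the values are illustrative; see HP OpenVMS Cluster Systems for the full set of serving options):

    ! Excerpt from SYS$SYSTEM:MODPARAMS.DAT on the serving node
    MSCP_LOAD = 1          ! Load the MSCP server when the node boots
    MSCP_SERVE_ALL = 2     ! Serve the disks attached locally to this node

After editing MODPARAMS.DAT, run AUTOGEN (for example, @SYS$UPDATE:AUTOGEN GETDATA REBOOT) so the new values take effect at the next boot.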

The following sections describe single-host and multihost access to SCSI storage devices.

A.2.1 Single-Host SCSI Access in OpenVMS Cluster Systems

Prior to OpenVMS Version 6.2, OpenVMS Cluster systems provided support for SCSI storage devices connected to a single host using an embedded SCSI adapter, an optional external SCSI adapter, or a special-purpose RAID (redundant arrays of independent disks) controller. Only one host could be connected to a SCSI bus.

A.2.2 Multihost SCSI Access in OpenVMS Cluster Systems

Beginning with OpenVMS Alpha Version 6.2, multiple Alpha hosts in an OpenVMS Cluster system can be connected to a single SCSI bus to share access to SCSI storage devices directly. This capability allows you to build highly available servers using shared access to SCSI storage.

Figure A-2 shows an OpenVMS Cluster configuration that uses a SCSI interconnect for shared access to SCSI devices. Note that another interconnect (for example, a local area network [LAN]) is required for host-to-host OpenVMS Cluster (System Communications Architecture [SCA]) communications.

Figure A-2 Highly Available Servers for Shared SCSI Access


You can build a three-node OpenVMS Cluster system using the shared SCSI bus as the storage interconnect, or you can include shared SCSI buses within a larger OpenVMS Cluster configuration. A quorum disk can be used on the SCSI bus to improve the availability of two- or three-node configurations. Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared SCSI storage devices.
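
As a minimal sketch of designating a quorum disk on the shared SCSI bus, the MODPARAMS.DAT entries below assume a two-node configuration and a quorum disk named $1$DKA100; the device name and vote counts are placeholders, and you should work through the quorum calculations in HP OpenVMS Cluster Systems before adopting values:

    ! Example MODPARAMS.DAT entries on each of the two nodes
    DISK_QUORUM = "$1$DKA100"   ! Shared SCSI disk acting as the quorum disk
    QDSKVOTES = 1               ! Votes contributed by the quorum disk
    VOTES = 1                   ! Votes contributed by this node
    EXPECTED_VOTES = 3          ! Two nodes plus the quorum disk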

A.3 Configuration Requirements and Hardware Support

This section lists the configuration requirements and supported hardware for multihost SCSI OpenVMS Cluster systems.

A.3.1 Configuration Requirements

Table A-1 shows the requirements and capabilities of the basic software and hardware components you can configure in a SCSI OpenVMS Cluster system.

Table A-1 Requirements for SCSI Multihost OpenVMS Cluster Configurations
Requirement Description
Software All Alpha hosts sharing access to storage on a SCSI interconnect must be running:
  • OpenVMS Alpha Version 6.2 or later
  • OpenVMS Cluster Software for OpenVMS Alpha Version 6.2 or later
Hardware Table A-2 lists the supported hardware components for SCSI OpenVMS Cluster systems. See also Section A.7.7 for information about other hardware devices that might be used in a SCSI OpenVMS Cluster configuration.
SCSI tape, floppies, and CD-ROM drives You cannot configure SCSI tape drives, floppy drives, or CD-ROM drives on multihost SCSI interconnects. If your configuration requires SCSI tape, floppy, or CD-ROM drives, configure them on single-host SCSI interconnects. Note that SCSI tape, floppy, or CD-ROM drives may be MSCP or TMSCP served to other hosts in the OpenVMS Cluster configuration.
Maximum hosts on a SCSI bus You can connect up to three hosts on a multihost SCSI bus. You can configure any mix of the hosts listed in Table A-2 on the same shared SCSI interconnect.
Maximum SCSI buses per host You can connect each host to a maximum of six multihost SCSI buses. The number of nonshared (single-host) SCSI buses that can be configured is limited only by the number of available slots on the host bus.
Host-to-host communication All members of the cluster must be connected by an interconnect that can be used for host-to-host (SCA) communication; for example, DSSI, CI, Ethernet, FDDI, or MEMORY CHANNEL.
Host-based RAID (including host-based shadowing) Supported in SCSI OpenVMS Cluster configurations.
SCSI device naming The name of each SCSI device must be unique throughout the OpenVMS Cluster system. When configuring devices on systems that include a multihost SCSI bus, adhere to the following requirements:
  • A host can have, at most, one adapter attached to a particular SCSI interconnect.
  • All host adapters attached to a given SCSI interconnect must have the same OpenVMS device name (for example, PKA0), unless port allocation classes are used (see HP OpenVMS Cluster Systems).
  • Each system attached to a SCSI interconnect must have a nonzero node disk allocation class value. These node disk allocation class values may differ as long as either of the following conditions is true:
    • The SCSI interconnect has a nonzero port allocation class.
    • The only devices attached to the SCSI interconnect are accessed through HSZ70 or HSZ80 controllers that have a nonzero controller allocation class.

    If you have multiple SCSI interconnects, you must consider all of them to determine whether you can choose a different node disk allocation class value for each system. Note also that adding a SCSI device to an existing SCSI interconnect requires a reevaluation of whether the node disk allocation classes can still differ. Therefore, HP recommends that you use the same node disk allocation class value for all systems attached to the same SCSI interconnect; a brief example follows this table. For more information about allocation classes, see HP OpenVMS Cluster Systems.
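
As a brief example of the node disk allocation class recommendation above, the following MODPARAMS.DAT line (the class value 4 is a placeholder; any consistent nonzero value that satisfies the rules in the table will do) would appear on each host attached to the shared SCSI bus:

    ! SYS$SYSTEM:MODPARAMS.DAT on every host sharing the SCSI interconnect
    ALLOCLASS = 4               ! Node disk allocation class (placeholder value)

With this setting, a disk at SCSI ID 3 on adapter PKA0 is named $4$DKA300 from every host, so the device name is the same, and unique, throughout the OpenVMS Cluster system.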

A.3.2 Hardware Support

Table A-2 shows the supported hardware components for SCSI OpenVMS Cluster systems; it also lists the minimum required revision for these hardware components. That is, for any component, you must use either the version listed in Table A-2 or a subsequent version. For host support information, go to the HP servers web site:


http://www.hp.com/country/us/eng/prodserv/servers.html

There, you will find documentation for your AlphaServer or AlphaStation system.

For disk support information, refer to the HP storage web site:


http://www.hp.com/country/us/eng/prodserv/storage.html

The SCSI interconnect configuration and all devices on the SCSI interconnect must meet the requirements defined in the ANSI Standard SCSI-2 document, or the SCSI-3 Architecture and Command standards, and the requirements described in this appendix. See also Section A.7.7 for information about other hardware devices that might be used in a SCSI OpenVMS Cluster configuration.

Table A-2 Supported Hardware for SCSI OpenVMS Cluster Systems
Component Supported Item Minimum Firmware (FW) Version1
Controller HSZ40-B 2.5 (FW)
  HSZ50  
  HSZ70  
  HSZ80 8.3 (FW)
Adapters 2 Embedded (NCR-810 based)  
  KZPAA (PCI to SCSI)  
  KZPSA (PCI to SCSI) A11 (FW)
  KZPBA-CB (PCI to SCSI) 5.53 (FW)
  KZTSA (TURBOchannel to SCSI) A10-1 (FW)

1Unless stated in this column, the minimum firmware version for a device is the same as required for the operating system version you are running. There are no additional firmware requirements for a SCSI multihost OpenVMS Cluster configuration.
2You can configure other types of SCSI adapters in a system for single-host access to local storage.

A.4 SCSI Interconnect Concepts

The SCSI standard defines a set of rules governing the interactions between initiators (typically, host systems) and SCSI targets (typically, peripheral devices). This standard allows the host to communicate with SCSI devices (such as disk drives, tape drives, printers, and optical media devices) without having to manage the device-specific characteristics.

The following sections describe the SCSI standard and the default modes of operation. The discussions also describe some optional mechanisms you can implement to enhance the default SCSI capabilities in areas such as capacity, performance, availability, and distance.

A.4.1 Number of Devices

The SCSI bus is an I/O interconnect that can support up to 16 devices. A narrow SCSI bus supports up to 8 devices; a wide SCSI bus supports up to 16 devices. The devices can include host adapters, peripheral controllers, and discrete peripheral devices such as disk or tape drives. The devices are addressed by a unique ID number from 0 through 15. You assign the device IDs by entering console commands, by setting jumpers or switches, or by selecting a slot on a StorageWorks enclosure.

Note

In order to connect 16 devices to a wide SCSI bus, the devices themselves must also support wide addressing. Narrow devices cannot communicate with hosts at IDs above 7. Presently, the HSZ40 does not support addresses above 7. Host adapters that support wide addressing are the KZTSA, the KZPSA, and the QLogic wide adapters (KZPBA, KZPDA, ITIOP, P1SE, and P2SE). Only the KZPBA-CB is supported in a multihost SCSI OpenVMS Cluster configuration.

When configuring more devices than the previous limit of eight, make sure that you observe the bus length requirements (see Table A-4).

To configure wide IDs on a BA356 box, refer to the BA356 manual StorageWorks Solutions BA356-SB 16-Bit Shelf User's Guide (order number EK-BA356-UG). Do not configure a narrow device in a BA356 box that has a starting address of 8.

To increase the number of devices on the SCSI interconnect, some devices implement a second level of device addressing using logical unit numbers (LUNs). For each device ID, up to eight LUNs (0-7) can be used to address a single SCSI device as multiple units.

Note

When connecting devices to a SCSI interconnect, each device on the interconnect must have a unique device ID. You may need to change a device's default device ID to make it unique. For information about setting a single device's ID, refer to the owner's guide for the device.
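
As an illustration of assigning IDs with console commands, the following AlphaServer (SRM) console sketch displays and then changes a host adapter's SCSI ID. The environment variable name pka0_host_id and the value 6 are examples only; the exact variable name depends on the adapter and system, so consult your system's console documentation:

    >>> SHOW pka0_host_id
    >>> SET pka0_host_id 6
    >>> INIT

In a multihost configuration, give each host adapter on the shared bus a different ID (for example, 6 and 7) and leave the remaining IDs for storage devices.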

A.4.2 Performance

The default mode of operation for all SCSI devices is 8-bit asynchronous mode. This mode, sometimes referred to as narrow mode, transfers 8 bits of data from one device to another. Each data transfer is acknowledged by the device receiving the data. Because the performance of the default mode is limited, the SCSI standard defines optional mechanisms to enhance performance. The following list describes two optional methods for achieving higher performance:

  • Increase the amount of data that is transferred in parallel on the interconnect. The 16-bit and 32-bit wide options allow a doubling or quadrupling of the data rate, respectively. Because the 32-bit option is seldom implemented, this appendix discusses only 16-bit operation and refers to it as wide.
  • Use synchronous data transfer. In synchronous mode, multiple data transfers can occur in succession, followed by an acknowledgment from the device receiving the data. The standard defines a slow mode (also called standard mode) and a fast mode for synchronous data transfers:
    • In standard mode, the interconnect achieves up to 5 million transfers per second.
    • In fast mode, the interconnect achieves up to 10 million transfers per second.
    • In ultra mode, the interconnect achieves up to 20 million transfers per second.

Because all communications on a SCSI interconnect occur between two devices at a time, each pair of devices must negotiate to determine which of the optional features they will use. Most, if not all, SCSI devices implement one or more of these options.

Table A-3 shows data rates when using 8- and 16-bit transfers with standard, fast, and ultra synchronous modes.

Table A-3 Maximum Data Transfer Rates (MB/s)
Mode Narrow (8-bit) Wide (16-bit)
Standard 5 10
Fast 10 20
Ultra 20 40
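
The entries in Table A-3 follow from multiplying the transfer rate by the number of bytes moved per transfer; for example (a simple check, not text from the SCSI standard):

    data rate = transfers per second x bytes per transfer
    fast wide:  10,000,000 transfers/s x 2 bytes = 20 MB/s
    ultra wide: 20,000,000 transfers/s x 2 bytes = 40 MB/s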

