Guidelines for OpenVMS Cluster Configurations

A.7.6.3 Procedures for Ensuring That a Device or Segment is Inactive

Use the following procedures to ensure that a device or a segment is inactive:

  • To ensure that a disk is inactive:
    1. Dismount the disk on all members of the OpenVMS Cluster system.
    2. Ensure that any I/O that can still occur to a dismounted disk is stopped (a command sketch follows this list), for example:
      • Disable the disk as a quorum disk.
      • Allocate the disk (using the DCL command ALLOCATE) to block further mount or initialization attempts.
      • Disable console polling by all halted hosts on the logical SCSI bus (by setting the console variable SCSI_POLL to OFF and entering the INIT command).
      • Ensure that no host on the logical SCSI bus is executing power-up or initialization self-tests, booting, or configuring the SCSI bus (using SYSMAN IO commands).
  • To ensure that an HSZxx controller is inactive:
    1. Dismount all of the HSZxx virtual disks on all members of the OpenVMS Cluster system.
    2. Shut down the controller, following the procedures in the HS Family of Array Controllers User's Guide.
    3. Power down the HSZxx (optional).
  • To ensure that a host adapter is inactive:
    1. Halt the system.
    2. Power down the system, or set the console variable SCSI_POLL to OFF and then enter the INIT command on the halted system. This ensures that the system will not poll or respond to polls.
  • To ensure that a segment is inactive, follow the preceding procedures for every device on the segment.
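
For example, the following DCL and console commands illustrate one way to quiesce a disk before hot plugging it or its segment. This is a sketch only: the device name $1$DKB300 is a placeholder, and the commands do not cover every item in the preceding list (such as disabling a quorum disk).

  $ DISMOUNT/CLUSTER $1$DKB300:   ! Dismount the disk on all cluster members
  $ ALLOCATE $1$DKB300:           ! Block further MOUNT or INITIALIZE attempts

On each halted host that shares the logical SCSI bus, disable console polling:

  >>> SET SCSI_POLL OFF
  >>> INIT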

A.7.6.4 Procedure for Hot Plugging StorageWorks SBB Disks

To remove an SBB (storage building block) disk from an active SCSI bus, use the following procedure:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance.
  2. Follow the procedure in Section A.7.6.3 to make the disk inactive.
  3. Squeeze the clips on the side of the SBB, and slide the disk out of the StorageWorks shelf.

To plug an SBB disk into an active SCSI bus, use the following procedure:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance.
  2. Ensure that the SCSI ID associated with the device (either by jumpers or by the slot in the StorageWorks shelf) conforms to the following:
    • The SCSI ID is unique for the logical SCSI bus.
    • The SCSI ID is already configured as a DK device on all of the following:
      • Any member of the OpenVMS Cluster system that already has that ID configured
      • Any OpenVMS processor on the same SCSI bus that is running the MSCP server
  3. Slide the SBB into the StorageWorks shelf.
  4. Configure the disk on OpenVMS Cluster members, if required, using SYSMAN IO commands, as sketched below.
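
For example, the following commands sketch one way to configure the new disk and verify that it is visible. The SYSMAN environment you select (clusterwide or a list of nodes) depends on which members are attached to the logical SCSI bus or MSCP serve the disk.

  $ RUN SYS$SYSTEM:SYSMAN
  SYSMAN> SET ENVIRONMENT/CLUSTER
  SYSMAN> IO AUTOCONFIGURE        ! Configure newly attached devices
  SYSMAN> EXIT
  $ SHOW DEVICES DK               ! Verify that the new DK device is present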

A.7.6.5 Procedure for Hot Plugging HSZxx

To remove an HSZxx controller from an active SCSI bus:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance.
  2. Follow the procedure in Section A.7.6.3 to make the HSZxx inactive.
  3. The HSZxx can be powered down, but it must remain plugged in to the power distribution system to maintain grounding.
  4. Unscrew and remove the differential triconnector from the HSZxx.
  5. Protect all exposed connector pins from ESD and from contacting any electrical conductor while they are disconnected.

To plug an HSZxx controller into an active SCSI bus:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance. Also, ensure that the ground offset voltages between the HSZxx and all components that will be attached to it are within the limits specified in Section A.7.8.
  2. Protect all exposed connector pins from ESD and from contacting any electrical conductor while they are disconnected.
  3. Power up the HSZxx and ensure that the disk units associated with the HSZxx conform to the following (see the checks sketched after this list):
    • The disk units are unique for the logical SCSI bus.
    • The disk units are already configured as DK devices on the following:
      • Any member of the OpenVMS Cluster system that already has that ID configured
      • Any OpenVMS processor on the same SCSI bus that is running the MSCP server
  4. Ensure that the HSZxx will make a legal stubbing connection to the active segment. (The connection is legal when the triconnector is attached directly to the HSZxx controller module, with no intervening cable.)
  5. Attach the differential triconnector to the HSZxx, using care to ensure that it is properly aligned. Tighten the screws.
  6. Configure the HSZxx virtual disks on OpenVMS Cluster members, as required, using SYSMAN IO commands.
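
For example, to support the checks in step 3, you can verify on each OpenVMS host attached to the logical SCSI bus which DK devices are already configured and which of them are MSCP served. This is a sketch only; it does not replace the HSZxx CLI procedures in the controller documentation.

  $ SHOW DEVICES DK               ! DK devices already configured on this host
  $ SHOW DEVICES/SERVED           ! Devices this host serves through the MSCP server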

A.7.6.6 Procedure for Hot Plugging Host Adapters

To remove a host adapter from an active SCSI bus:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance.
  2. Verify that the connection to be broken is a stubbing connection. If it is not, then do not perform the hot plugging procedure.
  3. Follow the procedure in Section A.7.6.3 to make the host adapter inactive.
  4. The system can be powered down, but it must remain plugged in to the power distribution system to maintain grounding.
  5. Remove the "Y" cable from the host adapter's single-ended connector.
  6. Protect all exposed connector pins from ESD and from contacting any electrical conductor while they are disconnected.
  7. Do not unplug the adapter from the host's internal bus while the host remains powered up.
    At this point, the adapter has disconnected from the SCSI bus. To remove the adapter from the host, first power down the host, then remove the adapter from the host's internal bus.

To plug a host adapter into an active SCSI bus:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance. Also, ensure that the ground offset voltages between the host and all components that will be attached to it are within the limits specified in Section A.7.8.
  2. Protect all exposed connector pins from ESD and from contacting any electrical conductor while they are disconnected.
  3. Ensure that the host adapter will make a legal stubbing connection to the active segment (the stub length must be within allowed limits, and the host adapter must not provide termination to the active segment).
  4. Plug the adapter into the host (if it is unplugged).
  5. Plug the system into the power distribution system to ensure proper grounding. Power up, if desired.
  6. Attach the "Y" cable to the host adapter, using care to ensure that it is properly aligned.
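
For example, once the "Y" cable is attached, you can check the adapter from the halted system's console before booting. This sketch assumes an Alpha SRM console that supports SHOW for the SCSI_POLL environment variable; the output varies by platform and adapter.

  >>> SHOW SCSI_POLL              ! Confirm the console polling setting
  >>> SHOW DEVICE                 ! List the devices the console sees on its buses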

A.7.6.7 Procedure for Hot Plugging DWZZx Controllers

Use the following procedure to remove a DWZZx from an active SCSI bus:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance.
  2. Verify that the connection to be broken is a stubbing connection. If it is not, then do not perform the hot plugging procedure.
  3. Do not power down the DWZZx; doing so can disrupt the operation of the attached SCSI bus segments.
  4. Determine which SCSI bus segment will remain active after the disconnection. Follow the procedure in Section A.7.6.3 to make the other segment inactive.
    When the DWZZx is removed from the active segment, the inactive segment must remain inactive until the DWZZx is also removed from the inactive segment, or until proper termination is restored to the DWZZx port that was disconnected from the active segment.
  5. The next step depends on the type of DWZZx and on which segment will remain active, as follows:
    • SBB DWZZx, single-ended segment remains active: Squeeze the clips on the side of the SBB, and slide the DWZZx out of the StorageWorks shelf.
    • SBB DWZZx, differential segment remains active: Unscrew and remove the differential triconnector from the DWZZx.
    • Table-top DWZZx, single-ended segment remains active: Remove the "Y" cable from the DWZZx's single-ended connector.
    • Table-top DWZZx, differential segment remains active: Unscrew and remove the differential triconnector from the DWZZx.
    (SBB is the StorageWorks abbreviation for storage building block.)

  6. Protect all exposed connector pins from ESD and from contacting any electrical conductor while they are disconnected.

To plug a DWZZx into an active SCSI bus:

  1. Use an ESD grounding strap that is attached either to a grounding stud or to unpainted metal on one of the cabinets in the system. Refer to the system installation procedures for guidance. Also, ensure that the ground offset voltages between the DWZZx and all components that will be attached to it are within the limits specified in Section A.7.8.
  2. Protect all exposed connector pins from ESD and from contacting any electrical conductor while they are disconnected.
  3. Ensure that the DWZZx will make a legal stubbing connection to the active segment (the stub length must be within allowed limits, and the DWZZx must not provide termination to the active segment).
  4. The DWZZx must be powered up. The SCSI segment that is being added must be attached and properly terminated. All devices on this segment must be inactive.
  5. The next step depends on the type of DWZZx and on which segment is being hot plugged, as follows:
    • SBB DWZZx, single-ended segment is being hot plugged: Slide the DWZZx into the StorageWorks shelf.
    • SBB DWZZx, differential segment is being hot plugged: Attach the differential triconnector to the DWZZx, using care to ensure that it is properly aligned. Tighten the screws.
    • Table-top DWZZx, single-ended segment is being hot plugged: Attach the "Y" cable to the DWZZx, using care to ensure that it is properly aligned.
    • Table-top DWZZx, differential segment is being hot plugged: Attach the differential triconnector to the DWZZx, using care to ensure that it is properly aligned. Tighten the screws.
    (SBB is the StorageWorks abbreviation for storage building block.)

  6. If the newly attached segment has storage devices on it, then configure them on OpenVMS Cluster members, if required, using SYSMAN IO commands.

A.7.7 OpenVMS Requirements for Devices Used on Multihost SCSI OpenVMS Cluster Systems

At this time, the only devices approved for use on multihost SCSI OpenVMS Cluster systems are those listed in Table A-2. While not specifically approved for use, other disk devices might be used in a multihost OpenVMS Cluster system when they conform to the following requirements:

  • Support for concurrent multi-initiator I/O.
  • Proper management of the following states and conditions on a per-initiator basis:
    • Synchronous negotiated state and speed
    • Width negotiated state
    • Contingent Allegiance and Unit Attention conditions
  • Tagged command queuing. This is needed to provide an ordering guarantee used in OpenVMS Cluster systems to ensure that I/O has been flushed. The drive must implement queuing that complies with Section 7.8.2 of the SCSI-2 standard, which says (in part):
    "...All commands received with a simple queue tag message prior to a command received with an ordered queue tag message, regardless of initiator, shall be executed before that command with the ordered queue tag message." (Emphasis added.)
  • Support for command disconnect.
  • A reselection timeout procedure compliant with Option b of Section 6.1.4.2 of the SCSI-2 standard. Furthermore, the device shall implement a reselection retry algorithm that limits the amount of bus time spent attempting to reselect a nonresponsive initiator.
  • Automatic read reallocation enabled (ARRE) and automatic write reallocation enabled (AWRE) (that is, drive-based bad block revectoring) to prevent multiple hosts from unnecessarily revectoring the same block. To avoid data corruption, it is essential that the drive comply with Section 9.3.3.6 of the SCSI-2 Standard, which says (in part):
    "...The automatic reallocation shall then be performed only if the target successfully recovers the data." (Emphasis added.)
  • Storage devices should not supply TERMPWR. If they do, then it is necessary to apply configuration rules to ensure that there are no more than four sources of TERMPWR on a segment.

Finally, if the device or any other device on the same segment will be hot plugged, then the device must meet the electrical requirements described in Section A.7.6.2.

A.7.8 Grounding Requirements

This section describes the grounding requirements for electrical systems in a SCSI OpenVMS Cluster system.

Improper grounding can result in voltage differentials, called ground offset voltages, between the enclosures in the configuration. Even small ground offset voltages across the SCSI interconnect (as shown in step 3 of Table A-8) can disrupt the configuration and cause system performance degradation or data corruption.

Table A-8 describes important considerations to ensure proper grounding.

Table A-8 Steps for Ensuring Proper Grounding

Step 1: Ensure that site power distribution meets all local electrical codes.

Step 2: Inspect the entire site power distribution system to ensure that:
  • All outlets have power ground connections.
  • A grounding prong is present on all computer equipment power cables.
  • Power-outlet neutral connections are not actual ground connections.
  • All grounds for the power outlets are connected to the same power distribution panel.
  • All devices that are connected to the same circuit breaker as the computer equipment are UL® or IEC approved.

Step 3: If you have difficulty verifying these conditions, you can use a hand-held multimeter to measure the ground offset voltage between any two cabinets. To measure the voltage, connect the multimeter leads to unpainted metal on each enclosure. Then determine whether the voltage exceeds the following allowable ground offset limits:
  • Single-ended signaling: 50 millivolts (maximum allowable offset)
  • Differential signaling: 800 millivolts (maximum allowable offset)

The multimeter method provides data only for the moment the voltage is measured. The ground offset values can change over time as additional devices are activated or plugged into the same power source. To ensure that the ground offsets remain within acceptable limits over time, HP recommends that you have a power survey performed by a qualified electrician.

Step 4: If you are uncertain about the grounding situation, or if the measured offset exceeds the allowed limit, HP recommends that a qualified electrician correct the problem. It may be necessary to install grounding cables between enclosures to reduce the measured offset.

Step 5: If an unacceptable offset voltage was measured and a ground cable was installed, measure the voltage again to verify that it is below the allowed limit. If it is not, an electrician must determine the source of the ground offset voltage and reduce or eliminate it.


Appendix B
MEMORY CHANNEL Technical Summary

This appendix contains information about MEMORY CHANNEL, a high-performance cluster interconnect technology. MEMORY CHANNEL, which was introduced in OpenVMS Alpha Version 7.1, supports several configurations.

This appendix contains the following sections:

Section Content
Product Overview High-level introduction to the MEMORY CHANNEL product and its benefits, hardware components, and configurations.
Technical Overview More in-depth technical information about how MEMORY CHANNEL works.

B.1 Product Overview

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of an OpenVMS Cluster to work as a single, virtual system.

MEMORY CHANNEL offloads internode cluster traffic (such as lock management communication) from existing interconnects (CI, DSSI, FDDI, and Ethernet) so that those interconnects can process storage and network traffic more effectively. MEMORY CHANNEL significantly increases throughput and decreases the latency associated with traditional I/O processing.

Any application that must move large amounts of data among nodes will benefit from MEMORY CHANNEL. It is an optimal solution for applications that need to pass data quickly, such as real-time and transaction processing. MEMORY CHANNEL also improves throughput in high-performance databases and other applications that generate heavy OpenVMS Lock Manager traffic.

B.1.1 MEMORY CHANNEL Features

MEMORY CHANNEL technology provides the following features:

  • Offers excellent price/performance.
    With several times the CI bandwidth, MEMORY CHANNEL provides a 100 MB/s interconnect with minimal latency. MEMORY CHANNEL architecture is designed for the industry-standard PCI bus.

  • Requires no change to existing applications.
    MEMORY CHANNEL works seamlessly with existing cluster software, so that no change is necessary for existing applications. The new MEMORY CHANNEL drivers, PMDRIVER and MCDRIVER, integrate with the Systems Communication Services layer of OpenVMS Clusters in the same way as existing port drivers (see the sketch after this list). Higher layers of cluster software are unaffected.
  • Offloads CI, DSSI, and the LAN in SCSI clusters.
    You cannot connect storage directly to MEMORY CHANNEL.
    While MEMORY CHANNEL is not a replacement for CI and DSSI, when used in combination with those interconnects, it offloads their node-to-node traffic. This enables them to be dedicated to storage traffic, optimizing communications in the entire cluster.
    When used in a cluster with SCSI and LAN interconnects, MEMORY CHANNEL offloads node-to-node traffic from the LAN, enabling it to handle more TCP/IP or DECnet traffic.
  • Provides fail-separately behavior.
    When a system failure occurs, MEMORY CHANNEL nodes behave like any failed node in an OpenVMS Cluster. The rest of the cluster continues to operate until the failed node can rejoin the cluster.
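
For example, after the MEMORY CHANNEL hardware and drivers are installed, you can confirm that the cluster is using the MEMORY CHANNEL port with commands such as the following. This is a sketch only; it assumes that the port created by PMDRIVER appears as PMA0, and the display classes available in SHOW CLUSTER can vary by OpenVMS version.

  $ SHOW DEVICE PMA0              ! MEMORY CHANNEL port device (name assumed)
  $ SHOW CLUSTER/CONTINUOUS
  Command> ADD CIRCUITS           ! Add the circuits class to display SCS circuits
  Command> EXIT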

B.1.2 MEMORY CHANNEL Version 2.0 Features

When first introduced in OpenVMS Version 7.1, MEMORY CHANNEL supported a maximum of four nodes in a 10-foot radial topology. Communication occurred between one sender-receiver pair at a time. MEMORY CHANNEL Version 1.5 introduced support for eight nodes, a new adapter (CCMAA-BA), time stamps on all messages, and more robust performance.

MEMORY CHANNEL Version 2.0 provides the following new capabilities:

  • Support for a new adapter (CCMAB-AA) and new hubs (CCMHB-AA and CCMHB-BA)
  • Support for simultaneous communication between four sender-receiver pairs
  • Support for longer cables for a radial topology up to 3 km

B.1.3 Hardware Components

A MEMORY CHANNEL cluster is joined together by a hub, a desktop-PC-sized unit that provides a connection among systems. The hub is connected to each system's PCI adapter by a link cable. Figure B-1 shows all three hardware components required by a node to support MEMORY CHANNEL:

  • A PCI-to-MEMORY CHANNEL adapter
  • A link cable
  • A port in a MEMORY CHANNEL hub (not required in a two-node configuration, in which the link cable connects the two PCI adapters directly)

Figure B-1 MEMORY CHANNEL Hardware Components


The PCI adapter pictured in Figure B-1 has memory mapping logic that enables each system to communicate with the others in the MEMORY CHANNEL cluster.

Figure B-2 shows an example of a four-node MEMORY CHANNEL cluster with a hub at its center.

Figure B-2 Four-Node MEMORY CHANNEL Cluster


A MEMORY CHANNEL hub is not required in clusters that contain only two nodes. In a two-node configuration like the one shown in Figure B-3, the same adapters and cable are used, and one of the PCI adapters serves as a virtual hub. You can continue to use the adapters and cable if you expand to a larger configuration later.

Figure B-3 Virtual Hub MEMORY CHANNEL Cluster


