
HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations



A.4.3 Distance

The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and by the data transfer rate. There are two types of electrical signaling for SCSI interconnects:

  • Single-ended signaling
    The single-ended method is the most common and the least expensive. The distance spanned is generally modest.
  • Differential signaling
    This method provides higher signal integrity, thereby allowing a SCSI bus to span longer distances.

Table A-4 summarizes how the type of signaling method affects SCSI interconnect distances.

Table A-4 Maximum SCSI Interconnect Distances
Signaling Technique    Rate of Data Transfer    Maximum Cable Length
Single-ended           Standard                 6 m (1)
Single-ended           Fast                     3 m
Single-ended           Ultra                    20.5 m (2)
Differential           Standard or fast         25 m
Differential           Ultra                    25.5 m (2)

(1) The SCSI standard specifies a maximum length of 6 m for this type of interconnect. However, where possible, it is advisable to limit the cable length to 4 m to ensure the highest level of data integrity.
(2) For more information, refer to the StorageWorks UltraSCSI Configuration Guidelines, order number EK-ULTRA-CG.
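
The limits in Table A-4 lend themselves to a simple configuration check. The following Python sketch is purely illustrative (the table of limits and the helper are not part of any OpenVMS tool); it looks up the maximum terminator-to-terminator length for a planned bus:

# Maximum cable lengths in metres, keyed by (signaling, data transfer rate),
# taken from Table A-4.
MAX_CABLE_LENGTH_M = {
    ("single-ended", "standard"): 6.0,   # 4 m recommended for best data integrity
    ("single-ended", "fast"):     3.0,
    ("single-ended", "ultra"):    20.5,  # see the StorageWorks UltraSCSI guidelines
    ("differential", "standard"): 25.0,
    ("differential", "fast"):     25.0,
    ("differential", "ultra"):    25.5,  # see the StorageWorks UltraSCSI guidelines
}

def cable_length_ok(signaling, rate, planned_length_m):
    """Return True if the planned terminator-to-terminator length is within Table A-4."""
    return planned_length_m <= MAX_CABLE_LENGTH_M[(signaling.lower(), rate.lower())]

print(cable_length_ok("single-ended", "fast", 2.9))   # True
print(cable_length_ok("single-ended", "fast", 3.5))   # False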

The DWZZA, DWZZB, and DWZZC converters are single-ended to differential converters that you can use to connect single-ended and differential SCSI interconnect segments. The DWZZA is for narrow (8-bit) SCSI buses, the DWZZB is for wide (16-bit) SCSI buses, and the DWZZC is for wide Ultra SCSI buses.

The differential segments are useful for the following:

  • Overcoming the distance limitations of the single-ended interconnect
  • Allowing communication between single-ended and differential devices

Because the DWZZA, the DWZZB, and the DWZZC are strictly signal converters, you cannot assign a SCSI device ID to them. You can configure a maximum of two DWZZA or two DWZZB converters in the path between any two SCSI devices. Refer to the StorageWorks UltraSCSI Configuration Guidelines for information on configuring the DWZZC.
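
The converter rule in the preceding paragraph can be expressed as a simple check. The sketch below is illustrative only (the component labels are assumptions, not OpenVMS names); it counts the DWZZA or DWZZB converters found in the path between two SCSI devices:

def converter_count_ok(path_components):
    """path_components: ordered list of components between two SCSI devices,
    for example ['cable', 'DWZZB', 'cable', 'DWZZB', 'cable']."""
    converters = sum(1 for c in path_components if c.upper().startswith("DWZZ"))
    return converters <= 2

print(converter_count_ok(["cable", "DWZZB", "cable", "DWZZB", "cable"]))   # True
print(converter_count_ok(["DWZZA", "cable", "DWZZA", "cable", "DWZZA"]))   # False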

A.4.4 Cabling and Termination

Each single-ended and differential SCSI interconnect must have two terminators, one at each end. The specified maximum interconnect lengths are measured from terminator to terminator.

The interconnect terminators are powered from the SCSI interconnect line called TERMPWR. Each StorageWorks host adapter and enclosure supplies the TERMPWR interconnect line, so that as long as one host or enclosure is powered on, the interconnect remains terminated.

Devices attach to the interconnect by short cables (or etch) called stubs. Stubs must be short in order to maintain the signal integrity of the interconnect. The maximum stub lengths allowed are determined by the type of signaling used by the interconnect, as follows:

  • For single-ended interconnects, the maximum stub length is 0.1 m.
  • For differential interconnects, the maximum stub length is 0.2 m.

Additionally, the minimum distance between stubs on a single-ended interconnect is 0.3 m. Refer to Figure A-3 for an example of this configuration.
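
A minimal sketch of these stub rules, again purely illustrative (the data layout is an assumption), might look like this:

MAX_STUB_M = {"single-ended": 0.1, "differential": 0.2}
MIN_STUB_SPACING_M = {"single-ended": 0.3}   # spacing rule applies to single-ended buses

def stubs_ok(signaling, stubs):
    """stubs: list of (position_on_bus_m, stub_length_m) tuples, ordered along the bus."""
    ok = all(length <= MAX_STUB_M[signaling] for _, length in stubs)
    spacing = MIN_STUB_SPACING_M.get(signaling)
    if spacing is not None:
        positions = [pos for pos, _ in stubs]
        ok = ok and all(b - a >= spacing for a, b in zip(positions, positions[1:]))
    return ok

# Three devices on a single-ended bus, 0.5 m apart, each on a 0.1 m stub.
print(stubs_ok("single-ended", [(0.0, 0.1), (0.5, 0.1), (1.0, 0.1)]))   # True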

Note

Terminate single-ended and differential buses individually, even when using DWZZx converters.

When you are extending the SCSI bus beyond an existing terminator, it is necessary to disable or remove that terminator.

Figure A-3 Maximum Stub Lengths


A.5 SCSI OpenVMS Cluster Hardware Configurations

The hardware configuration that you choose depends on a combination of factors:

  • Your computing needs---for example, continuous availability or the ability to disconnect or remove a system from your SCSI OpenVMS Cluster system
  • Your environment---for example, the physical attributes of your computing facility
  • Your resources---for example, your capital equipment or the available PCI slots

Refer to the OpenVMS Cluster Software Software Product Description (SPD 29.78.xx) for configuration limits.

The following sections provide guidelines for building SCSI configurations and describe potential configurations that might be suitable for various sites.

A.5.1 Systems Using Add-On SCSI Adapters

Shared SCSI bus configurations typically use optional add-on KZPAA, KZPSA, KZPBA, and KZTSA adapters. These adapters are generally easier to configure than internal adapters because they do not consume any SCSI cable length. Additionally, when you configure systems using add-on adapters for the shared SCSI bus, the internal adapter is available for connecting devices that cannot be shared (for example, SCSI tape, floppy, and CD-ROM drives).

When using add-on adapters, storage is configured using BA350, BA353, or HSZxx StorageWorks enclosures. These enclosures are suitable for all data disks, and for shared OpenVMS Cluster system and quorum disks. By using StorageWorks enclosures, it is possible to shut down individual systems without losing access to the disks.

The following sections describe some SCSI OpenVMS Cluster configurations that take advantage of add-on adapters.

A.5.1.1 Building a Basic System Using Add-On SCSI Adapters

Figure A-4 shows a logical representation of a basic configuration using SCSI adapters and a StorageWorks enclosure. This configuration has the advantage of being relatively simple, while still allowing the use of tapes, floppies, CD-ROMs, and disks with nonshared files (for example, page files and swap files) on internal buses. Figure A-5 shows this type of configuration using AlphaServer 1000 systems and a BA350 enclosure.

The BA350 enclosure uses 0.9 m of SCSI cabling, and this configuration typically uses two 1-m SCSI cables. (A BA353 enclosure also uses 0.9 m, with the same total cable length.) The resulting total cable length of 2.9 m allows fast SCSI mode operation.
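
A quick arithmetic check of this cable budget, using the lengths quoted above and the fast-mode limit from Table A-4 (the snippet itself is illustrative, not part of the documentation):

ENCLOSURE_INTERNAL_M = 0.9        # BA350 (or BA353) internal SCSI cabling
HOST_CABLE_M = 1.0                # one 1-m cable to each host adapter
FAST_SINGLE_ENDED_LIMIT_M = 3.0   # from Table A-4

total = ENCLOSURE_INTERNAL_M + 2 * HOST_CABLE_M
print(total, total <= FAST_SINGLE_ENDED_LIMIT_M)   # 2.9 True, so fast mode is allowed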

Although the shared BA350 storage enclosure is theoretically a single point of failure, this basic system is a very reliable SCSI OpenVMS Cluster configuration. When the quorum disk is located in the BA350, you can shut down either of the AlphaServer systems independently while retaining access to the OpenVMS Cluster system. However, you cannot physically remove an AlphaServer system, because that would leave an unterminated SCSI bus.

If you need the ability to remove a system while your OpenVMS Cluster system remains operational, build your system using DWZZx converters, as described in Section A.5.1.2. If you need continuous access to data when a SCSI interconnect fails, do both of the following:

  • Add a redundant SCSI interconnect with another BA350 shelf.
  • Shadow the data.

In Figure A-4 and the other logical configuration diagrams in this appendix, the required network interconnect is not shown.

Figure A-4 Conceptual View: Basic SCSI System


Figure A-5 Sample Configuration: Basic SCSI System Using AlphaServer 1000, KZPAA Adapter, and BA350 Enclosure


A.5.1.2 Building a System with More Enclosures or Greater Separation or with HSZ Controllers

If you need additional enclosures, or if the needs of your site require a greater physical separation between systems, or if you plan to use HSZ controllers, you can use a configuration in which DWZZx converters are placed between systems with single-ended signaling and a differential-cabled SCSI bus.

DWZZx converters provide additional SCSI bus length capabilities, because the DWZZx allows you to connect a single-ended device to a bus that uses differential signaling. As described in Section A.4.3, SCSI bus configurations that use differential signaling may span distances up to 25 m, whereas single-ended configurations can span only 3 m when fast-mode data transfer is used.

DWZZx converters are available as standalone desktop components or as StorageWorks-compatible building blocks. DWZZx converters can be used with the internal SCSI adapter or the optional KZPAA adapters.

The HSZ40 is a high-performance differential SCSI controller that can be connected to a differential SCSI bus, and supports up to 72 SCSI devices. An HSZ40 can be configured on a shared SCSI bus that includes DWZZx single-ended to differential converters. Disk devices configured on HSZ40 controllers can be combined into RAID sets to further enhance performance and provide high availability.

Figure A-6 shows a logical view of a configuration that uses additional DWZZAs to increase the potential physical separation (or to allow for additional enclosures and HSZ40s), and Figure A-7 shows a sample representation of this configuration.

Figure A-6 Conceptual View: Using DWZZAs to Allow for Increased Separation or More Enclosures


Figure A-7 Sample Configuration: Using DWZZAs to Allow for Increased Separation or More Enclosures


Figure A-8 shows how a three-host SCSI OpenVMS Cluster system might be configured.

Figure A-8 Sample Configuration: Three Hosts on a SCSI Bus


A.5.1.3 Building a System That Uses Differential Host Adapters

Figure A-9 is a sample configuration with two KZPSA adapters on the same SCSI bus. In this configuration, the SCSI termination has been removed from the KZPSA, and external terminators have been installed on "Y" cables. This allows you to remove the KZPSA adapter from the SCSI bus without rendering the SCSI bus inoperative. The capability of removing an individual system from your SCSI OpenVMS Cluster configuration (for maintenance or repair) while the other systems in the cluster remain active gives you an especially high level of availability.

Please note the following about Figure A-9:

  • Termination is removed from the host adapter.
  • Termination for the single-ended bus inside the BA356 is provided by the DWZZB in slot 0 and by the automatic terminator on the personality module. (No external cables or terminators are attached to the personality module.)
  • The DWZZB's differential termination is removed.

Figure A-9 Sample Configuration: SCSI System Using Differential Host Adapters (KZPSA)


The differential SCSI bus in the configuration shown in Figure A-9 is chained from enclosure to enclosure and is limited to 25 m in length. (The BA356 does not add to the differential SCSI bus length. The differential bus consists only of the BN21W-0B "Y" cables and the BN21K/BN21L cables.) In configurations where this cabling scheme is inconvenient or where it does not provide adequate distance, an alternative radial scheme can be used.

The radial SCSI cabling alternative is based on a SCSI hub. Figure A-10 shows a logical view of the SCSI hub configuration, and Figure A-11 shows a sample representation of this configuration.

Figure A-10 Conceptual View: SCSI System Using a SCSI Hub



Figure A-11 Sample Configuration: SCSI System with SCSI Hub Configuration


A.6 Installation

This section describes the steps required to set up and install the hardware in a SCSI OpenVMS Cluster system. The assumption in this section is that a new OpenVMS Cluster system, based on a shared SCSI bus, is being created. If, on the other hand, you are adding a shared SCSI bus to an existing OpenVMS Cluster configuration, then you should integrate the procedures in this section with those described in HP OpenVMS Cluster Systems to formulate your overall installation plan.

Table A-5 lists the steps required to set up and install the hardware in a SCSI OpenVMS Cluster system.

Table A-5 Steps for Installing a SCSI OpenVMS Cluster System
Step  Description                                   Reference
1     Ensure proper grounding between enclosures.   Section A.6.1 and Section A.7.8
2     Configure SCSI host IDs.                      Section A.6.2
3     Power up the system and verify devices.       Section A.6.3
4     Set SCSI console parameters.                  Section A.6.4
5     Install the OpenVMS operating system.         Section A.6.5
6     Configure additional systems.                 Section A.6.6

A.6.1 Step 1: Meet SCSI Grounding Requirements

You must ensure that your electrical power distribution systems meet local requirements (for example, electrical codes) prior to installing your OpenVMS Cluster system. If your configuration consists of two or more enclosures connected by a common SCSI interconnect, you must also ensure that the enclosures are properly grounded. Proper grounding is important for safety reasons and to ensure the proper functioning of the SCSI interconnect.

Electrical work should be done by a qualified professional. Section A.7.8 includes details of the grounding requirements for SCSI systems.

A.6.2 Step 2: Configure SCSI Node IDs

This section describes how to configure SCSI node and device IDs. SCSI IDs must be assigned separately for multihost SCSI buses and single-host SCSI buses.

Figure A-12 shows two hosts; each one is configured with a single-host SCSI bus and shares a multihost SCSI bus. (See Figure A-1 for the key to the symbols used in this figure.)

Figure A-12 Setting Allocation Classes for SCSI Access


The following sections describe how IDs are assigned in this type of multihost SCSI configuration. For more information about this topic, see HP OpenVMS Cluster Systems.

A.6.2.1 Configuring Device IDs on Multihost SCSI Buses

When configuring multihost SCSI buses, adhere to the following rules (a short sketch of the resulting ID assignments follows this list):

  • Set each host adapter on the multihost bus to a different ID. Start by assigning ID 7, then ID 6, and so on, using decreasing ID numbers.
    If a host has two multihost SCSI buses, allocate an ID to each SCSI adapter separately. There is no requirement that you set the adapters to the same ID, although using the same ID may simplify configuration management. (Section A.6.4 describes how to set host IDs for the internal adapter using SCSI console parameters.)
  • When assigning IDs to devices and storage controllers connected to multihost SCSI buses, start at ID 0 (zero), assigning the highest ID numbers to the disks that require the fastest I/O response time.
  • Devices connected to a multihost SCSI bus must have the same name as viewed from each host. To achieve this, you must do one of the following:
    • Ensure that all hosts connected to a multihost SCSI bus are set to the same node allocation class, and all host adapters connected to a multihost SCSI bus have the same controller letter, as shown in Figure A-12.
    • Use port allocation classes (see HP OpenVMS Cluster Systems) or HSZ allocation classes (see Section 6.5.3).
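
The following Python sketch illustrates the ID-assignment rules above (the helper and the example names are hypothetical): host adapters receive IDs counting down from 7, and devices receive IDs counting up from 0, so listing the disks slowest first leaves the highest device IDs for the disks that need the fastest response:

def assign_multihost_ids(host_adapters, disks_slowest_first):
    ids = {}
    for n, adapter in enumerate(host_adapters):
        ids[adapter] = 7 - n                 # adapters: 7, 6, ...
    for n, disk in enumerate(disks_slowest_first):
        ids[disk] = n                        # devices: 0, 1, ...
    assert len(set(ids.values())) == len(ids), "every ID on the bus must be unique"
    return ids

print(assign_multihost_ids(["host_A_PKB0", "host_B_PKB0"],
                           ["archive_disk", "page_disk"]))
# {'host_A_PKB0': 7, 'host_B_PKB0': 6, 'archive_disk': 0, 'page_disk': 1}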

A.6.2.2 Configuring Device IDs on Single-Host SCSI Buses

The device ID selection depends on whether you are using a node allocation class or a port allocation class. The following discussion applies to node allocation classes. Refer to HP OpenVMS Cluster Systems for a discussion of port allocation classes.

In multihost SCSI configurations, device names generated by OpenVMS use the format $allocation_class$DKA300. You set the allocation class using the ALLOCLASS system parameter. OpenVMS generates the controller letter (for example, A, B, C, and so forth) at boot time by allocating a letter to each controller. The unit number (for example, 0, 100, 200, 300, and so forth) is derived from the SCSI device ID.

When configuring devices on single-host SCSI buses that are part of a multihost SCSI configuration, take care to ensure that the disks connected to the single-host SCSI buses have unique device names. Do this by assigning different IDs to devices connected to single-host SCSI buses with the same controller letter on systems that use the same allocation class. Note that the device names must be different, even though the bus is not shared.

For example, in Figure A-12, the two disks at the bottom of the picture are located on SCSI bus A of two systems that use the same allocation class. Therefore, they have been allocated different device IDs (in this case, 2 and 3).

For a given allocation class, SCSI device type, and controller letter (in this example, $4$DKA), there can be up to eight devices in the cluster, one for each SCSI bus ID. To use all eight IDs, it is necessary to configure a disk on one SCSI bus at the same ID as a processor on another bus. See Section A.7.5 for a discussion of the possible performance impact this can have.

SCSI bus IDs can be effectively "doubled up" by configuring different SCSI device types at the same SCSI ID on different SCSI buses. For example, device types DK and MK could produce $4$DKA100 and $4$MKA100.
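
The device-naming scheme described in this section can be illustrated with a short Python sketch (the helper is hypothetical; it assumes, as in the examples above, that the unit number is the SCSI ID multiplied by 100 plus the LUN):

def device_name(alloc_class, dev_type, controller, scsi_id, lun=0):
    """Build an OpenVMS-style device name such as $4$DKA300."""
    unit = scsi_id * 100 + lun
    return f"${alloc_class}${dev_type}{controller}{unit}"

devices = [
    (4, "DK", "A", 2, 0),   # disk on one host's single-host bus A (ID 2)
    (4, "DK", "A", 3, 0),   # disk on the other host's bus A: different ID, unique name
    (4, "DK", "A", 1, 0),   # $4$DKA100, a disk at ID 1 on another bus
    (4, "MK", "A", 1, 0),   # $4$MKA100: a tape at the same ID, distinguished by type
]

names = [device_name(*d) for d in devices]
assert len(set(names)) == len(names), "device names must be unique cluster-wide"
print(names)   # ['$4$DKA200', '$4$DKA300', '$4$DKA100', '$4$MKA100']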

A.6.3 Step 3: Power Up and Verify SCSI Devices

After connecting the SCSI cables, power up the system. Enter a console SHOW DEVICE command to verify that all devices are visible on the SCSI interconnect.

If there is a SCSI ID conflict, the display may omit devices that are present, or it may include nonexistent devices. If the display is incorrect, then check the SCSI ID jumpers on devices, the automatic ID assignments provided by the StorageWorks shelves, and the console settings for host adapter and HSZxx controller IDs. If changes are made, type INIT, then SHOW DEVICE again. If problems persist, check the SCSI cable lengths and termination.

Example A-1 is a sample output from a console SHOW DEVICE command. This system has one host SCSI adapter on a private SCSI bus (PKA0), and two additional SCSI adapters (PKB0 and PKC0), each on separate, shared SCSI buses.

Example A-1 SHOW DEVICE Command Sample Output

>>>SHOW DEVICE
dka0.0.0.6.0               DKA0                          RZ26L  442D
dka400.4.0.6.0             DKA400                        RRD43  2893
dkb100.1.0.11.0            DKB100                        RZ26  392A
dkb200.2.0.11.0            DKB200                        RZ26L  442D
dkc400.4.0.12.0            DKC400                        HSZ40   V25
dkc401.4.0.12.0            DKC401                        HSZ40   V25
dkc500.5.0.12.0            DKC500                        HSZ40   V25
dkc501.5.0.12.0            DKC501                        HSZ40   V25
dkc506.5.0.12.0            DKC506                        HSZ40   V25
dva0.0.0.0.1               DVA0
jkb700.7.0.11.0            JKB700                        OpenVMS  V62
jkc700.7.0.12.0            JKC700                        OpenVMS  V62
mka300.3.0.6.0             MKA300                        TLZ06  0389
era0.0.0.2.1               ERA0                          08-00-2B-3F-3A-B9
pka0.7.0.6.0               PKA0                          SCSI Bus ID 7
pkb0.6.0.11.0              PKB0                          SCSI Bus ID 6
pkc0.6.0.12.0              PKC0                          SCSI Bus ID 6

The following list describes the device names in the preceding example:

  • DK devices represent SCSI disks. Disks connected to the SCSI bus controlled by adapter PKA are given device names starting with the letters DKA. Disks on additional buses are named according to the host adapter name in a similar manner (DKB devices on adapter PKB, and so forth).
    The next character in the device name represents the device's SCSI ID. Make sure that the SCSI ID for each device is unique for the SCSI bus to which it is connected.
  • The last digit in the DK device name represents the LUN number. The HSZ40 virtual DK device in this example is at SCSI ID 4, LUN 1. Note that some systems do not display devices that have nonzero LUNs.
  • JK devices represent devices on the SCSI interconnect that are neither disks nor tapes. In this example, JK devices represent other processors on the SCSI interconnect that are running the OpenVMS operating system. If the other system is not running, these JK devices do not appear in the display. In this example, the other processor's adapters are at SCSI ID 7.
  • MK devices represent SCSI tapes. The A in device MKA300 indicates that it is attached to adapter PKA0, the private SCSI bus.
  • PK devices represent the local SCSI adapters. The SCSI IDs for these adapters are displayed in the rightmost column. Make sure that each adapter's ID is different from the IDs used by other devices and host adapters on its bus.
    The third character in the device name (in this example, a) is assigned by the system so that each adapter has a unique name on that system. The fourth character is always zero.

