HP OpenVMS Systems Documentation

Guidelines for OpenVMS Cluster Configurations



As Table 4-3 shows, OpenVMS Clusters support a wide range of interconnects. The most important factor to consider is how much I/O you need, as explained in Chapter 2.

In most cases, the I/O requirements will be less than the capabilities of any one OpenVMS Cluster interconnect. Ensure that you have a reasonable surplus I/O capacity, then choose your interconnects based on other needed features.
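
For example, the following Python fragment is an illustrative sketch only (Python is not part of OpenVMS, and the 50 percent headroom figure is an arbitrary planning assumption). It compares a projected aggregate I/O load against the nominal peak ratings quoted in this chapter and reports which interconnects leave a comfortable surplus:

    # Illustrative sketch: nominal peak ratings quoted in this chapter, in MB/s.
    # These are bus ratings, not measured throughput for any configuration.
    NOMINAL_MB_PER_SEC = {
        "Fibre Channel (per direction)": 100,   # Section 4.6
        "MEMORY CHANNEL": 100,                  # Section 4.7
        "SCSI (Ultra, wide)": 40,               # Table 4-4
        "DSSI": 32 / 8,                         # 32 Mb/s (Section 4.10.3), converted to MB/s
    }

    def interconnects_with_surplus(required_mb_per_sec, headroom=0.5):
        """Return interconnects whose rating exceeds the projected load
        by at least the requested headroom (50 percent by default)."""
        return [name for name, rated in NOMINAL_MB_PER_SEC.items()
                if rated >= required_mb_per_sec * (1 + headroom)]

    print(interconnects_with_surplus(25))   # for a 25 MB/s aggregate workload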

Reference: For detailed information about the interconnects and adapters supported on each AlphaServer system, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Then select the AlphaServer system of interest and its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.6 Fibre Channel Interconnect (Alpha Only)

Fibre Channel is a high-performance ANSI standard network and storage interconnect for PCI-based Alpha systems. It is a full-duplex serial interconnect and can simultaneously transmit and receive 100 megabytes per second. Fibre Channel supports simultaneous access of SCSI storage by multiple nodes connected to a Fibre Channel switch. A second type of interconnect is needed for node-to-node communications.

For multihost access to Fibre Channel storage, the following components are required:

  • Fibre Channel host adapter (KGPSA-BC, KGPSA-CA)
  • Multimode fiber-optic cable (BNGBX-nn), where nn represents distance in meters
  • Fibre Channel switch (DSGGA, DSGGB)
  • Storage devices that are supported in a multihost configuration (HSG60, HSG80, HSV, Modular Data Router [MDR])

4.6.1 Advantages

The Fibre Channel interconnect offers the following advantages:

  • High-speed transmission, 2 Gb/s
  • Scalable configurations, from department to enterprise
  • Long-distance interconnects
    Fibre Channel supports multimode fiber at 500 meters per link. Fibre Channel supports longer-distance interswitch links (ISLs)---up to 100 kilometers per link using single-mode fiber, and up to 600 kilometers per link with FC/ATM links. (The sketch following this list shows these per-link limits as a simple lookup.)
    In addition, SANworks Data Replication Manager (DRM) configurations provide long-distance ISLs through the use of the Open Systems Gateway and Wave Division Multiplexors.
  • High availability
    Multipath support is available to provide configurations with no single point of failure.
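
The per-link distance limits above can be captured in a small lookup for configuration planning. The following Python fragment is an illustrative sketch only; the link types and limits are those quoted in the preceding list:

    # Fibre Channel per-link distance limits quoted above, in meters.
    FC_MAX_LINK_METERS = {
        "multimode fiber": 500,
        "single-mode fiber ISL": 100_000,
        "FC/ATM ISL": 600_000,
    }

    def fc_link_within_limit(link_type, planned_meters):
        """Return True if a planned link length is within the quoted limit."""
        return planned_meters <= FC_MAX_LINK_METERS[link_type]

    print(fc_link_within_limit("single-mode fiber ISL", 40_000))   # True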

4.6.2 Throughput

The Fibre Channel interconnect transmits up to 2 Gb/s. It is a full-duplex serial interconnect that can simultaneously transmit and receive 100 MB/s.

4.6.3 Supported Adapter

The Fibre Channel adapter, the KGPSA, connects to the PCI bus.

Reference: For information about the Fibre Channel adapters supported on each AlphaServer system, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Then select the AlphaServer system of interest and its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.7 MEMORY CHANNEL Interconnect (Alpha Only)

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of OpenVMS Clusters to work as a single, virtual system.

Three hardware components are required by a node to support a MEMORY CHANNEL connection:

  • A PCI-to-MEMORY CHANNEL adapter
  • A link cable, 3 m (10 ft) long
  • A port in a MEMORY CHANNEL hub (except for a two-node configuration in which the cable connects just two PCI adapters)

A MEMORY CHANNEL hub is a PC-sized unit that provides a connection among systems. MEMORY CHANNEL can support up to four Alpha nodes per hub. You can configure systems with two MEMORY CHANNEL adapters to provide failover if an adapter fails. Each adapter must be connected to a different hub.

A MEMORY CHANNEL hub is not required in clusters that comprise only two nodes. In a two-node configuration, one PCI adapter is configured, using module jumpers, as a virtual hub.
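
The following Python fragment is an illustrative sketch only (it is not an OpenVMS tool, and the function name is hypothetical). It restates the rules above as simple checks: at most four Alpha nodes, a hub required for more than two nodes, and each adapter on a node connected to a different hub:

    # Sketch of the MEMORY CHANNEL configuration rules described above.
    def check_memory_channel_plan(node_count, hub_count, adapters_per_node=1):
        """Return a list of rule violations; an empty list means the plan
        is consistent with the rules stated in this section."""
        problems = []
        if node_count > 4:
            problems.append("MEMORY CHANNEL supports up to four Alpha nodes per hub")
        if node_count > 2 and hub_count == 0:
            problems.append("more than two nodes requires a MEMORY CHANNEL hub")
        if adapters_per_node > 1 and hub_count < adapters_per_node:
            problems.append("each adapter on a node must connect to a different hub")
        return problems

    print(check_memory_channel_plan(node_count=2, hub_count=0))                       # [] -- virtual hub
    print(check_memory_channel_plan(node_count=4, hub_count=2, adapters_per_node=2))  # [] -- dual adapters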

4.7.1 Advantages

MEMORY CHANNEL technology provides the following features:

  • Offers excellent price/performance.
    With several times the CI bandwidth, MEMORY CHANNEL provides a 100 MB/s interconnect with minimal latency. MEMORY CHANNEL architecture is designed for the industry-standard PCI bus.
  • Requires no change to existing applications.
    MEMORY CHANNEL works seamlessly with existing cluster software, so that no change is necessary for existing applications. The new MEMORY CHANNEL drivers, PMDRIVER and MCDRIVER, integrate with the System Communications Services layer of OpenVMS Clusters in the same way that existing port drivers do. Higher layers of cluster software are unaffected.
  • Offloads CI, DSSI, and the LAN in SCSI clusters.
    You cannot connect storage directly to MEMORY CHANNEL, but you can use it to make maximum use of each interconnect's strength.
    While MEMORY CHANNEL is not a replacement for CI and DSSI, when used in combination with those interconnects, it offloads their node-to-node traffic. This enables them to be dedicated to storage traffic, optimizing communications in the entire cluster.
    When used in a cluster with SCSI and LAN interconnects, MEMORY CHANNEL offloads node-to-node traffic from the LAN, enabling it to handle more TCP/IP or DECnet traffic.
  • Provides fail-separately behavior.
    When a system failure occurs, MEMORY CHANNEL nodes behave like any failed node in an OpenVMS Cluster. The rest of the cluster continues to perform until the failed node can rejoin the cluster.

4.7.2 Throughput

The MEMORY CHANNEL interconnect has a very high maximum throughput of 100 MB/s. If a single MEMORY CHANNEL is not sufficient, up to two interconnects (and two MEMORY CHANNEL hubs) can share throughput.

4.7.3 Supported Adapter

The MEMORY CHANNEL adapter connects to the PCI bus. The current adapter, CCMAA-BA, provides improved performance over the earlier adapter.

Reference: For information about the CCMAA-BA adapter support on AlphaServer systems, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, select the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.8 SCSI Interconnect (Alpha Only)

The SCSI interconnect is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components. SCSI is a single-path, daisy-chained, multidrop bus. It is a single 8-bit or 16-bit data path with byte parity for error detection. Both single-ended signaling, which is inexpensive, and differential signaling, which supports longer distances, are available.

In an OpenVMS Cluster, multiple Alpha computers on a single SCSI interconnect can simultaneously access SCSI disks. This type of configuration is called multihost SCSI connectivity. A second type of interconnect is required for node-to-node communication. For multihost access to SCSI storage, the following components are required:

  • SCSI host adapter that is supported in a multihost configuration (see Table 4-6)
  • SCSI interconnect
  • Terminators, one for each end of the SCSI interconnect
  • Storage devices that are supported in a multihost configuration (RZnn; refer to the OpenVMS Cluster SPD [29.78.nn])

For larger configurations, the following components are available:

  • Storage controllers (HSZnn)
  • Bus isolators (DWZZA, DWZZB, or DWZZC) to convert single-ended to differential signaling and to effectively double the SCSI interconnect length

Note

This support is restricted to Alpha systems and further restricted to certain adapters. OpenVMS does not provide this support for the newest SCSI adapters, including the Ultra SCSI adapters KZPEA, KZPDC, A6828A, A6829A, and A7173A.

Reference: For a detailed description of how to connect SCSI configurations, see Appendix A.

4.8.1 Advantages

The SCSI interconnect offers the following advantages:

  • Lowest cost, shared direct access to storage
    Because SCSI is an industry standard and is used extensively throughout the industry, it is available from many manufacturers at competitive prices.
  • Scalable configuration to achieve high performance at a moderate price
    You can choose:
    • Width of SCSI interconnect
      Narrow (8 bits) or wide (16 bits).
    • Transmission mode
      Single-ended signaling, the most common and least expensive, or differential signaling, which provides higher signal integrity and allows a longer SCSI interconnect.
    • Signal speed (standard, fast, or ultra mode)
    • Number of nodes sharing the SCSI bus (two or three)
    • Number of shared SCSI buses to which a node can connect (maximum of six)
    • Storage type and size (RZnn or HSZnn)
    • Computer type and size (AlphaStation or AlphaServer)

4.8.2 Throughput

Table 4-4 shows throughput for the SCSI interconnect.

Table 4-4 Maximum Data Transfer Rates in Megabytes per Second

  Mode       Narrow (8-Bit)   Wide (16-Bit)
  Standard   5                10
  Fast       10               20
  Ultra      20               40
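
The entries in Table 4-4 follow from the transfer clock and the bus width: each mode doubles the clock of the one before it, and a wide bus moves two bytes per transfer instead of one. The following Python fragment is an illustrative sketch (the clock figures are the nominal SCSI rates and are an assumption, not taken from this table) that reproduces the table:

    # Data rate (MB/s) = transfers per second x bytes per transfer.
    CLOCK_MHZ = {"Standard": 5, "Fast": 10, "Ultra": 20}      # assumed nominal SCSI clocks
    WIDTH_BYTES = {"Narrow (8-bit)": 1, "Wide (16-bit)": 2}

    for mode, mhz in CLOCK_MHZ.items():
        for width, nbytes in WIDTH_BYTES.items():
            print(f"{mode:10} {width:15} {mhz * nbytes:3d} MB/s")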

4.8.3 SCSI Interconnect Distances

The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and, for single-ended signaling, by the data transfer rate.

There are two types of electrical signaling for SCSI interconnects: single ended and differential. Both types can operate in standard mode, fast mode, or ultra mode. For differential signaling, the maximum SCSI cable length possible is the same for standard mode and fast mode.

Table 4-5 summarizes how the type of signaling method affects SCSI interconnect distances.

Table 4-5 Maximum SCSI Interconnect Distances

  Signaling Technique   Rate of Data Transfer   Maximum Cable Length
  Single-ended          Standard                6 m (1)
  Single-ended          Fast                    3 m
  Single-ended          Ultra                   20.5 m (2)
  Differential          Standard or Fast        25 m
  Differential          Ultra                   25.5 m (3)

(1) The SCSI standard specifies a maximum length of 6 m for this interconnect. However, it is advisable, where possible, to limit the cable length to 4 m to ensure the highest level of data integrity.
(2) This length is attainable if devices are attached only at each end. If devices are spaced along the interconnect, they must be at least 1 m apart, and the interconnect cannot exceed 4 m.
(3) More than two devices can be supported.
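
For planning purposes, the limits in Table 4-5 can be expressed as a simple lookup. The following Python fragment is an illustrative sketch only; the footnote qualifications above still apply and appear here only as comments:

    # Maximum SCSI cable lengths from Table 4-5, in meters.
    # Footnotes: keep single-ended standard buses to 4 m where possible;
    # single-ended ultra allows 20.5 m only with devices at the two ends.
    MAX_SCSI_METERS = {
        ("single-ended", "standard"): 6,
        ("single-ended", "fast"): 3,
        ("single-ended", "ultra"): 20.5,
        ("differential", "standard"): 25,
        ("differential", "fast"): 25,
        ("differential", "ultra"): 25.5,
    }

    def scsi_length_ok(signaling, mode, planned_meters):
        """Return True if the planned bus length is within the Table 4-5 limit."""
        return planned_meters <= MAX_SCSI_METERS[(signaling.lower(), mode.lower())]

    print(scsi_length_ok("differential", "fast", 20))   # True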

4.8.4 Supported Adapters, Bus Types, and Computers

Table 4-6 shows SCSI adapters with the internal buses and computers they support.

Table 4-6 SCSI Adapters

  Adapter                              Internal Bus   Supported Computers
  Embedded (NCR-810 based)/KZPAA (1)   PCI            See the options specifications for your system.
  KZPSA (2)                            PCI            Supported on all Alpha computers that support KZPSA in single-host configurations. (3)
  KZTSA (2)                            TURBOchannel   DEC 3000
  KZPBA-CB (4)                         PCI            Supported on all Alpha computers that support KZPBA in single-host configurations. (3)

(1) Single-ended.
(2) Fast-wide differential (FWD).
(3) See the system-specific hardware manual.
(4) Ultra differential. The ultra single-ended adapter (KZPBA-CA) does not support multihost systems.

Reference: For information about the SCSI adapters supported on each AlphaServer system, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, choose the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.9 CI Interconnect (Alpha and VAX Only)

The CI interconnect is a radial bus through which OpenVMS Cluster systems communicate. It comprises the following components:

  • CI host adapter.
  • Star coupler---A passive device that serves as a common connection point for signals between OpenVMS nodes and HSC or HSJ controllers that are connected by the CI.
  • Optional star coupler expander (CISCE)---Consists of two amplifiers, one for each of its two paths.
  • CI cable.

4.9.1 Advantages

The CI interconnect offers the following advantages:

  • High speed
    Suitable for larger processors and I/O-intensive applications.
  • Efficient access to large amounts of storage
    HSC and HSJ controllers can connect large numbers of disk and tape drives to the OpenVMS Cluster system, with direct access from all OpenVMS nodes on the CI.
  • Minimal CPU overhead for communication
    CI adapters are intelligent interfaces that perform much of the work required for communication among OpenVMS nodes and storage. The CI topology allows all nodes attached to a CI bus to communicate directly with the HSC and HSJ controllers on the same CI bus.
  • High availability through redundant, independent data paths
    Each CI adapter is connected with two pairs of CI cables. If a single CI cable connection fails, failover automatically occurs.
  • Multiple access paths to disks and tapes
    Dual HSC and HSJ controllers and dual-ported devices create alternative paths to storage devices.

4.9.2 Throughput

The CI interconnect has a high maximum throughput. CI adapters use high-performance microprocessors that perform many of the processing activities usually performed by the CPU. As a result, they consume minimal CPU processing power.

Because the effective throughput of the CI bus is high, a single CI interconnect is not likely to be a bottleneck in a large OpenVMS Cluster configuration. If a single CI is not sufficient, multiple CI interconnects can increase throughput.

4.9.3 Supported Adapters and Bus Types

The following are CI adapters and internal buses that each supports:

  • CIPCA (PCI/EISA)
  • CIXCD (XMI)
  • CIBCA-B (VAXBI)

Reference: For detailed information about the proprietary CI adapters supported on each AlphaServer system, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, select the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.9.4 Multiple CI Adapters

You can configure multiple CI adapters on some OpenVMS nodes. Multiple star couplers can be used in the same OpenVMS Cluster.

With multiple CI adapters on a node, adapters can share the traffic load. This reduces I/O bottlenecks and increases the total system I/O throughput.

For the maximum number of CI adapters supported on your system, check the options list for your system in your hardware manual or on the AlphaServer web pages.

4.9.5 Configuration Guidelines for CI Clusters

Use the following guidelines when configuring systems in a CI cluster:

  • The maximum number of nodes that you can connect to a star coupler is 32. Up to 16 of these nodes can be OpenVMS systems, and the remainder can be HSJ and HSC storage controllers. (The sketch following this list shows these limits as a simple check.)
  • The number of star couplers is limited by the number of CI adapters configured on a system.
  • Dual porting of devices between HSJ and HSC controllers is supported as long as they are connected to the same or separate star couplers. Dual porting of devices between HSJ and HSC controllers and local controllers is not supported.
  • With the exception of the CIPCA and CIXCD, different types of CI adapters cannot be combined in the same system.
  • You can use multiple CI adapters for redundancy and throughput. You can increase throughput by connecting additional CI adapters to separate star couplers; throughput does not increase substantially when you connect a second CI adapter to the same star coupler.
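
The node-count guideline above can be checked mechanically. The following Python fragment is an illustrative sketch only (the function name is hypothetical); it applies the limits of 32 nodes per star coupler, of which at most 16 may be OpenVMS systems:

    # Sketch of the star coupler limits stated in the first guideline above.
    def check_star_coupler(openvms_systems, storage_controllers):
        """Return a list of violations; an empty list means the population
        is within the stated limits."""
        problems = []
        if openvms_systems > 16:
            problems.append("more than 16 OpenVMS systems on one star coupler")
        if openvms_systems + storage_controllers > 32:
            problems.append("more than 32 nodes on one star coupler")
        return problems

    print(check_star_coupler(openvms_systems=16, storage_controllers=16))   # []
    print(check_star_coupler(openvms_systems=14, storage_controllers=20))   # over 32 nodes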

4.10 Digital Storage Systems Interconnect (DSSI) (Alpha and VAX Only)

DSSI is a single-path, daisy-chained, multidrop bus. It provides a single, 8-bit parallel data path with both byte parity and packet checksum for error detection.

4.10.1 Advantages

DSSI offers the following advantages:

  • High reliability
  • Shared, direct access to storage at a lower cost than CI
  • Direct communication between systems and storage
  • High-performance, intelligent storage controllers with embedded caches

4.10.2 Maintenance Consideration

DSSI storage often resides in the same cabinet as the CPUs. For these configurations, the whole system may need to be shut down for service, unlike configurations and interconnects with separately housed systems and storage devices.

4.10.3 Throughput

The maximum throughput is 32 Mb/s.

DSSI has highly intelligent adapters that require minimal CPU processing overhead.

4.10.4 DSSI Adapter Types

There are two types of DSSI adapters:

  • Embedded adapter, which is part of the system.
  • Optional adapter, which you can purchase separately and add to the system.

4.10.5 Supported Adapters and Bus Types

The following are the DSSI adapters and the internal bus that each supports:

  • KFESA (EISA)
  • KFESB (EISA)
  • KFPSA (PCI)
  • KFMSA (XMI)---VAX only
  • KFMSB (XMI)---Alpha only
  • KFQSA (Q-bus)
  • N710 (embedded)
  • SHAC (embedded)
  • EDA640 (embedded)

Reference: For detailed information about the DSSI adapters supported on each AlphaServer system, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, select the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.10.6 DSSI-Connected Storage

DSSI configurations use HSD intelligent controllers to connect disk drives to an OpenVMS Cluster. HSD controllers serve the same purpose with DSSI as HSJ controllers serve with CI: they enable you to configure more storage.

Alternatively, DSSI configurations use integrated storage elements (ISEs) connected directly to the DSSI bus. Each ISE contains either a disk and disk controller or a tape and tape controller.

4.10.7 Multiple DSSI Adapters

Multiple DSSI adapters are supported for some systems, enabling higher throughput than with a single DSSI bus.

For the maximum number of DSSI adapters supported on a system, check the options list for the system of interest on the AlphaServer web pages.

4.10.8 Configuration Guidelines for DSSI Clusters

The following configuration guidelines apply to all DSSI clusters:

  • Each DSSI interconnect can have up to eight nodes attached; four can be systems and the rest can be storage devices; the sketch following this list shows these limits as a simple check. Each of the following counts as a DSSI node:
    • DSSI adapter
    • Any member of the HSDxx family of DSSI and SCSI controllers
    • Any RF, TF, or EF integrated storage element (ISE)

    In some cases, physical cabling and termination limitations may restrict the number of systems that can be connected to a DSSI interconnect to two or three. For example:
    • Some variants of the DSSI adapter terminate the bus; for example, the N710. For this reason, only two DEC 4000 systems can be configured on a DSSI interconnect.
    • The size of a DEC or VAX 10000 system generally limits the number of systems that can be connected to a DSSI interconnect.
  • Each DSSI adapter in a single system must be connected to a different DSSI bus.
  • Configure VAX 6000, VAX 7000, and VAX 10000 systems with KFMSA adapters.
  • Configure DEC 7000 and DEC 10000 systems with KFMSB adapters.
  • Configure PCI-based AlphaServer systems with KFPSA adapters. EISA adapters (KFESA/KFESB) can also be configured on most AlphaServer systems, but use of the KFPSA is recommended whenever possible.
  • Dual porting of devices between HSD controllers is supported as long as they are connected to the same or separate DSSI interconnects. Dual porting of devices between HSD controllers and local controllers is not supported.
  • All systems connected to the same DSSI bus must have a common power or ground.
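
As with the CI guidelines, the DSSI node-count rule lends itself to a mechanical check. The following Python fragment is an illustrative sketch only; it counts adapters, HSDxx controllers, and ISEs as DSSI nodes and applies the eight-node and four-system limits, but it does not model the cabling and termination restrictions noted above:

    # Sketch of the DSSI node-count rule: adapters, HSDxx controllers, and
    # RF/TF/EF ISEs each count as one node; at most eight nodes per DSSI
    # interconnect, of which at most four may be systems (adapters).
    def check_dssi_bus(adapters, hsd_controllers, ises):
        """Return a list of violations; an empty list means the bus
        population is within the stated limits."""
        problems = []
        if adapters > 4:
            problems.append("more than four systems on one DSSI interconnect")
        if adapters + hsd_controllers + ises > 8:
            problems.append("more than eight nodes on one DSSI interconnect")
        return problems

    print(check_dssi_bus(adapters=3, hsd_controllers=1, ises=4))   # []
    print(check_dssi_bus(adapters=4, hsd_controllers=2, ises=3))   # nine nodes -- too many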

