Guidelines for OpenVMS Cluster Configurations
Chapter 3 Choosing OpenVMS Cluster Systems
This chapter provides information to help you select systems for your
OpenVMS Cluster to satisfy your business and application requirements.
An OpenVMS Cluster can include systems running OpenVMS Integrity servers,
or a combination of systems running OpenVMS Integrity servers and OpenVMS
Alpha. See the OpenVMS Software Product Description for a list of the
currently supported models.
- OpenVMS Integrity servers operating system
Based on the Intel Itanium architecture, OpenVMS Integrity servers provide
the price/performance, reliability, and scalability benefits of OpenVMS on
industry-standard HP Integrity server systems.
- OpenVMS Alpha operating system
Based on a 64-bit RISC (reduced instruction set computing) architecture,
OpenVMS Alpha provides industry-leading price/performance benefits with
standard I/O subsystems for flexibility and expansion.
3.2 Types of Systems
HP Integrity server systems span a range of computing environments,
including:
- Entry
- Standalone servers
- Scalable blades
- Integrity VM guest systems
3.3 Choosing Systems
Your choice of systems depends on your business, your application
needs, and your budget. With a high-level understanding of systems and
their characteristics, you can make better choices. See the Software
Product Description or visit
http://www.hp.com/go/openvms
for the complete list of supported Integrity server systems.
An OpenVMS Cluster system is a highly integrated environment in which
multiple systems share access to resources. This resource sharing
increases the availability of services and data. OpenVMS Cluster
systems also offer failover mechanisms that are transparent and
automatic, and require little intervention by the system manager or the
user.
Reference: See Chapter 8 for more information about
these failover mechanisms and about availability.
The HP web site provides ordering and configuring information for
workstations and servers. It also contains detailed information about
storage devices, printers, and network application support.
To access the HP web site, visit:
http://www.hp.com/
Chapter 4 Choosing OpenVMS Cluster Interconnects
An interconnect is a physical path that connects computers to other
computers, and to storage subsystems. OpenVMS Cluster systems support a
variety of interconnects (also referred to as buses) so that members
can communicate with each other and with storage, using the most
appropriate and effective method available.
The software that enables OpenVMS Cluster systems to communicate over
an interconnect is the System Communications Services (SCS). An
interconnect that supports node-to-node SCS communications is called a
cluster interconnect. An interconnect that provides
node-to-storage connectivity within a cluster is called a
shared-storage interconnect.
OpenVMS supports the following types of interconnects:
- Cluster interconnects (node-to-node only)
- Ethernet
- Fast Ethernet
- Gigabit Ethernet
- 10 Gigabit Ethernet (Integrity servers only)
- Shared-storage interconnects (node-to-storage only)
- Fibre Channel
- Small Computer Systems Interface (SCSI) (Integrity servers and Alpha;
Integrity servers are limited to specific configurations)
- Serial Attached SCSI (SAS) (Integrity servers only)
- Both node-to-node and node-to-storage interconnects
- Ethernet, Fast Ethernet, Gigabit Ethernet
- 10 Gigabit Ethernet (Integrity servers only)
Note
Cluster over IP is supported on Ethernet, Fast Ethernet, Gigabit
Ethernet, and 10 Gigabit Ethernet.
Note
The CI, DSSI, and FDDI interconnects are supported on Alpha and VAX
systems. Memory Channel and ATM interconnects are supported only on
Alpha systems. For documentation related to these interconnects, see
the previous version of the manual.
4.1 Characteristics
The interconnects described in this chapter share some general
characteristics. Table 4-1 describes these characteristics.
Table 4-1 Interconnect Characteristics

Throughput
    The quantity of data transferred across the interconnect. Some
    interconnects require more processor overhead than others. For example,
    Ethernet and FDDI interconnects require more processor overhead than do
    CI or DSSI. Larger packet sizes allow higher data-transfer rates
    (throughput) than do smaller packet sizes.

Cable length
    Interconnects range in length from 3 m to 40 km.

Maximum number of nodes
    The number of nodes that can connect to an interconnect varies among
    interconnect types. Be sure to consider this when configuring your
    OpenVMS Cluster system.

Supported systems and storage
    Each OpenVMS Cluster node and storage subsystem requires an adapter to
    connect the internal system bus to the interconnect. First consider the
    storage and processor I/O performance, then the adapter performance,
    when choosing an interconnect type.
4.2 Comparison of Interconnect Types
Table 4-2 shows key statistics for a variety of interconnects.
Table 4-2 Comparison of Cluster Interconnect Types

General-purpose interconnects:

Ethernet (Fast, Gigabit, 10 Gigabit)
    Maximum throughput (Mb/s): 10/100/1000
    Hardware-assisted data link (1): No
    Storage connection: MSCP served
    Topology: Linear or radial to a hub or switch
    Maximum nodes per cluster: 96 (2)
    Maximum length: 100 m (4) / 100 m (4) / 550 m (3)

Shared-storage-only interconnects:

Fibre Channel
    Maximum throughput (Mb/s): 1000
    Hardware-assisted data link (1): No
    Storage connection: Direct (5)
    Topology: Radial to a switch
    Maximum nodes per cluster: 96 (2)
    Maximum length: 10 km (6) / 100 km (7)

SCSI
    Maximum throughput (Mb/s): 160
    Hardware-assisted data link (1): No
    Storage connection: Direct (5)
    Topology: Bus or radial to a hub
    Maximum nodes per cluster: 8-12 (8)
    Maximum length: 25 m

SAS
    Maximum throughput (Mb/s): 6000
    Hardware-assisted data link (1): No
    Storage connection: Direct
    Topology: Point to point, or radial to a switch
    Maximum nodes per cluster: 96 (2)
    Maximum length: 6 m

Table notes:
(1) A hardware-assisted data link reduces the processor overhead.
(2) OpenVMS Cluster computers.
(3) Based on multimode fiber (MMF). Longer distances can be achieved by
bridging between this interconnect and WAN interswitch links using common
carriers such as [D]WDM.
(4) Based on unshielded twisted-pair wiring (UTP). Longer distances can be
achieved by bridging between this interconnect and WAN interswitch links
(ISLs), using common carriers such as [D]WDM.
(5) Direct-attached SCSI and Fibre Channel storage can be MSCP served over
any of the general-purpose cluster interconnects.
(6) Based on a single-mode fiber, point-to-point link.
(7) Support for longer distances (up to 100 km) is based on inter-switch
links (ISLs) using single-mode fiber. In addition, DRM configurations
provide longer-distance ISLs using the Open Systems Gateway and Wave
Division Multiplexors.
(8) Up to 3 OpenVMS Cluster computers, or up to 4 with the DWZZH-05 and
fair arbitration; up to 15 storage devices.
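For readers who want to reason about these trade-offs programmatically, the
figures in Table 4-2 can be captured as a small data structure. The following
Python sketch is purely illustrative: the values are transcribed from
Table 4-2 (using the footnote 8 host limit for SCSI), and the dictionary and
helper function are invented for this example, not part of OpenVMS or any
HP tool.

    # Illustrative only: values transcribed from Table 4-2; the selection
    # helper is a hypothetical aid, not an OpenVMS or HP utility.
    INTERCONNECTS = {
        "Gigabit Ethernet": {"max_throughput_mbps": 1000, "storage": "MSCP served",
                             "max_nodes": 96, "max_length_m": 550},
        "Fibre Channel":    {"max_throughput_mbps": 1000, "storage": "Direct",
                             "max_nodes": 96, "max_length_m": 10_000},
        "SCSI":             {"max_throughput_mbps": 160,  "storage": "Direct",
                             "max_nodes": 3,  "max_length_m": 25},   # note (8)
        "SAS":              {"max_throughput_mbps": 6000, "storage": "Direct",
                             "max_nodes": 96, "max_length_m": 6},
    }

    def shortlist(nodes_needed, length_needed_m, direct_storage=False):
        """Return interconnects from Table 4-2 that satisfy the given limits."""
        return [name for name, spec in INTERCONNECTS.items()
                if spec["max_nodes"] >= nodes_needed
                and spec["max_length_m"] >= length_needed_m
                and (not direct_storage or spec["storage"] == "Direct")]

    # Example: a 4-node cluster with 200 m cable runs and direct storage access.
    print(shortlist(4, 200, direct_storage=True))   # ['Fibre Channel']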
4.3 Multiple Interconnects
You can use multiple interconnects to achieve the following benefits:
- Failover
If one interconnect or adapter fails, node communications automatically
move to another interconnect (a conceptual sketch of this path selection
follows this list).
- MSCP server load balancing
In a multiple MSCP server configuration, an OpenVMS Cluster
performs load balancing to automatically choose the best path. This
reduces the chances that a single adapter could cause an I/O
bottleneck. Depending on your configuration, multiple paths from one
node to another node may transfer more information than would a single
path. Reference: See Section 9.3.3 for an example
of dynamic MSCP load balancing.
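The following Python fragment is only a conceptual illustration of the idea
described above: preferring a healthy, faster path and moving traffic when a
path goes down. OpenVMS performs interconnect failover and MSCP load
balancing automatically in its own software layers; none of the names below
correspond to actual OpenVMS interfaces.

    # Conceptual sketch only: not the OpenVMS algorithm or API.
    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        throughput_mbps: int
        healthy: bool = True

    def select_path(paths):
        """Pick the highest-throughput healthy path; raise if none remain."""
        candidates = [p for p in paths if p.healthy]
        if not candidates:
            raise RuntimeError("no usable interconnect path to remote node")
        return max(candidates, key=lambda p: p.throughput_mbps)

    paths = [Path("10 Gigabit Ethernet", 10_000), Path("Gigabit Ethernet", 1_000)]
    print(select_path(paths).name)   # 10 Gigabit Ethernet
    paths[0].healthy = False         # simulate an adapter failure
    print(select_path(paths).name)   # traffic "fails over" to Gigabit Ethernet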
4.4 Mixed Interconnects
You can use two or more different types of interconnects in an OpenVMS
Cluster system. You can use different types of interconnects to combine
the advantages of each type and to expand your OpenVMS Cluster system.
Note
If any one node in a cluster requires IP for cluster communication, all
the other members in the cluster must be enabled for IP cluster
communication.
4.5 Interconnect Support
For the latest information on supported interconnects, see the most
recent OpenVMS Cluster Systems SPD.
Reference: For detailed information about the
interconnects and adapters supported on each Integrity server system
and AlphaServer system, visit the OpenVMS web page at:
http://www.hp.com/go/openvms
Select HP Integrity servers (from the left navigation panel under
related links). Then select the Integrity system of interest and its
QuickSpecs. The QuickSpecs for each system briefly describe all
options, including the adapters, supported on that system.
Select HP AlphaSystems (from the left navigation panel under related
links). Then select the AlphaServer system of interest and its
QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe
all options, including the adapters, supported on that system.
4.6 Fibre Channel Interconnect
Fibre Channel is a high-performance ANSI-standard network and storage
interconnect for PCI-based Alpha systems. It is a full-duplex serial
interconnect and can simultaneously transmit and receive over 100
megabytes per second. Fibre Channel supports simultaneous access of
SCSI storage by multiple nodes connected to a Fibre Channel switch. A
second type of interconnect is needed for node-to-node communications.
4.6.1 Advantages
The Fibre Channel interconnect offers the following advantages:
- High-speed transmission: 2 Gb/s, 4 Gb/s, or 8 Gb/s, depending on the
adapter
- Scalable configuration to support department-to-enterprise configurations
- Long-distance interconnects
Fibre Channel supports multimode fiber at 500 meters per link and
longer-distance interswitch links (ISLs) of up to 100 kilometers per link
using single-mode fiber, or up to 600 kilometers per link with FC/ATM
links. In addition, SANworks Data Replication Manager (DRM) configurations
provide long-distance ISLs through the use of the Open Systems Gateway and
Wave Division Multiplexors.
- High availability
Multipath support is available to provide
configurations with no single point of failure.
4.6.2 Throughput
The Fibre Channel interconnect transmits at up to 2 Gb/s, 4 Gb/s, or
8 Gb/s, depending on the adapter. It is a full-duplex serial interconnect
that can simultaneously transmit and receive at over 100 MB/s.
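As a rough worked example of how the quoted link rates relate to usable
bandwidth: 1/2/4/8 Gb/s Fibre Channel uses 8b/10b line encoding, so every
data byte occupies ten bits on the wire. The short Python sketch below
applies that general Fibre Channel property using nominal rates; the
function name is invented and the figures are approximations, since actual
signaling rates run slightly higher than the nominal values.

    # Rough illustration: payload bandwidth of an 8b/10b-encoded Fibre Channel
    # link. Actual signaling rates are slightly higher than the nominal
    # figures used here.
    def fc_payload_mb_per_s(nominal_gbps: float) -> float:
        """Approximate one-direction payload rate: 10 line bits per data byte."""
        line_bits_per_s = nominal_gbps * 1_000_000_000
        return line_bits_per_s / 10 / 1_000_000   # bytes/s -> MB/s

    for gbps in (1, 2, 4, 8):
        one_way = fc_payload_mb_per_s(gbps)
        # Full duplex: the link transmits and receives at this rate at once.
        print(f"{gbps} Gb/s FC ~ {one_way:.0f} MB/s each direction")
    # 1 Gb/s FC ~ 100 MB/s each direction, consistent with the figure above.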
4.7 MEMORY CHANNEL Interconnect (Alpha Only)
MEMORY CHANNEL is a high-performance cluster interconnect technology
for PCI-based Alpha systems. With the benefits of very low latency,
high bandwidth, and direct memory access, MEMORY CHANNEL complements
and extends the unique ability of OpenVMS Clusters to work as a single,
virtual system.
Three hardware components are required by a node to support a MEMORY
CHANNEL connection:
- A PCI-to-MEMORY CHANNEL adapter
- A link cable (3 m (10 ft) long)
- A port in a MEMORY CHANNEL hub (except for a two-node configuration
in which the cable connects just two PCI adapters)
A MEMORY CHANNEL hub is a PC-sized unit that provides a connection among
systems. MEMORY CHANNEL can support up to four Alpha nodes per hub. You
can configure systems with two MEMORY CHANNEL adapters in order to
provide failover in case an adapter fails. Each adapter must be
connected to a different hub.
A MEMORY CHANNEL hub is not required in clusters that comprise only two
nodes. In a two-node configuration, one PCI adapter is configured,
using module jumpers, as a virtual hub.
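The hub and adapter rules just described (at most four Alpha nodes per hub,
a virtual-hub mode for exactly two nodes, and redundant adapters cabled to
different hubs) lend themselves to a simple sanity check. The Python sketch
below is a hypothetical aid based only on the rules stated in this section;
it is not an HP or OpenVMS tool, and all names are invented.

    # Hypothetical configuration check for the MEMORY CHANNEL rules above.
    def check_memory_channel(num_nodes, hubs, adapters_per_node):
        """hubs: list of node counts attached to each hub (empty = virtual hub)."""
        errors = []
        if num_nodes == 2 and not hubs:
            pass                               # two-node virtual-hub configuration
        elif not hubs:
            errors.append("a hub is required for more than two nodes")
        for i, attached in enumerate(hubs):
            if attached > 4:
                errors.append(f"hub {i}: more than four Alpha nodes attached")
        if adapters_per_node == 2 and len(hubs) < 2:
            errors.append("dual adapters must connect to different hubs")
        return errors or ["configuration is consistent with the stated rules"]

    print(check_memory_channel(2, [], adapters_per_node=1))
    print(check_memory_channel(4, [4, 4], adapters_per_node=2))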
MEMORY CHANNEL technology provides the following features:
- Offers excellent price/performance.
With several times the CI bandwidth, MEMORY CHANNEL provides a 100 MB/s
interconnect with minimal latency. The MEMORY CHANNEL architecture is
designed for the industry-standard PCI bus.
- Requires no change to existing applications.
MEMORY CHANNEL
works seamlessly with existing cluster software, so that no change is
necessary for existing applications. The new MEMORY CHANNEL drivers,
PMDRIVER and MCDRIVER, integrate with the System Communications
Services layer of OpenVMS Clusters in the same way that existing port
drivers do. Higher layers of cluster software are unaffected.
- Offloads CI, DSSI, and the LAN in SCSI clusters.
You cannot
connect storage directly to MEMORY CHANNEL, but you can use it to make
maximum use of each interconnect's strength. While MEMORY CHANNEL
is not a replacement for CI and DSSI, when used in combination with
those interconnects, it offloads their node-to-node traffic. This
enables them to be dedicated to storage traffic, optimizing
communications in the entire cluster. When used in a cluster with
SCSI and LAN interconnects, MEMORY CHANNEL offloads node-to-node
traffic from the LAN, enabling it to handle more TCP/IP or DECnet
traffic.
- Provides fail-separately behavior.
When a system failure
occurs, MEMORY CHANNEL nodes behave like any failed node in an OpenVMS
Cluster. The rest of the cluster continues to perform until the failed
node can rejoin the cluster.
4.7.2 Throughput
The MEMORY CHANNEL interconnect has a very high maximum throughput of
100 MB/s. If a single MEMORY CHANNEL is not sufficient, up to two
interconnects (and two MEMORY CHANNEL hubs) can share throughput.
The MEMORY CHANNEL adapter connects to the PCI bus. The MEMORY CHANNEL
adapter, CCMAA-BA, provides improved performance over the earlier adapter.
Reference: For information about the CCMAA-BA adapter
support on AlphaServer systems, go to the OpenVMS web page at:
http://www.hp.com/go/openvms
Select AlphaSystems (from the left navigation panel under related
links). Next, select the AlphaServer system of interest and then its
QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe
all options, including the adapters, supported on that system.
4.8 SCSI Interconnect
The SCSI interconnect is an industry-standard interconnect that supports
one or more computers, peripheral devices, and interconnecting components.
SCSI is a single-path, daisy-chained, multidrop bus. It is a single 8-bit
or 16-bit data path with byte parity for error detection. Both inexpensive
single-ended signaling and differential signaling, which supports longer
distances, are available.
In an OpenVMS Cluster, multiple computers on a single SCSI interconnect
can simultaneously access SCSI disks. This type of configuration is
called multihost SCSI connectivity or shared SCSI storage and is
restricted to certain adapters and limited configurations. A second
type of interconnect is required for node-to-node communication.
Shared SCSI storage in an OpenVMS Cluster system enables computers
connected to a single SCSI bus to share access to SCSI storage devices
directly. This capability makes it possible to build highly available
servers using shared access to SCSI storage.
For multihost access to SCSI storage, the following components are
required:
- SCSI host adapter that is supported in a multihost configuration
(see Table 4-5)
- SCSI interconnect
- Terminators, one for each end of the SCSI interconnect
- Storage devices that are supported in a multihost configuration
(RZnn; refer to the OpenVMS Cluster SPD [29.78.nn])
For larger configurations, the following components are available:
- Storage controllers (HSZnn)
- Bus isolators (DWZZA, DWZZB, or DWZZC) to convert single-ended to
differential signaling and to effectively double the SCSI interconnect
length
Note
This support is restricted to certain adapters. OpenVMS does
not provide this support for the newest SCSI adapters,
including the Ultra SCSI adapters KZPEA, KZPDC, A6828A, A6829A, and
A7173A.
Reference: For a detailed description of how to
connect OpenVMS Alpha SCSI configurations, see Appendix A.
Shared SCSI storage support for two-node OpenVMS Integrity servers
Cluster systems was introduced in OpenVMS Version 8.2-1. Prior to this
release, shared SCSI storage was supported on OpenVMS Alpha systems
only, using earlier SCSI host bus adapters (HBAs).
Shared SCSI storage in an OpenVMS Integrity servers Cluster system is
subject to the following restrictions:
- A maximum of two OpenVMS Integrity server systems can be connected
to a single SCSI bus.
- A maximum of four shared-SCSI buses can be connected to each system.
- Systems supported are the rx1600 family, the rx2600 family, and the
rx4640 system.
- The A7173A HBA is the only supported HBA.
- MSA30-MI storage enclosure is the only supported SCSI storage type.
- Ultra320 SCSI disk family is the only supported disk family.
Figure 4-1 illustrates a two-node shared SCSI configuration. Note that
a second interconnect, a LAN, is required for host-to-host OpenVMS
Cluster communications. (OpenVMS Cluster communications are also known
as System Communications Architecture (SCA) communications.)
Note that SCSI IDs 6 and 7 are required in this configuration. One of the
systems must have a SCSI ID of 6 for each A7173A adapter port connected to
a shared SCSI bus, instead of the factory-set default of 7. Use the
U320_SCSI pscsi.efi utility, included on the IPF Offline Diagnostics and
Utilities CD, to change the SCSI ID. The procedure for doing this is
documented in the HP A7173A PCI-X Dual Channel Ultra320 SCSI Host Bus
Adapter Installation Guide at:
http://docs.hp.com/en/netcom.html
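The restrictions in this section (at most two Integrity server systems per
shared bus, at most four shared buses per system, and one adapter port per
shared bus set to SCSI ID 6 while the other keeps the default of 7) can be
expressed as a simple configuration check. The Python sketch below is a
hypothetical illustration of those rules only; it is not an HP utility, and
the function and parameter names are invented.

    # Hypothetical check of the shared-SCSI rules stated in this section.
    def check_shared_scsi_bus(host_ids, buses_per_host):
        """host_ids: SCSI IDs of the host adapter ports on one shared bus."""
        errors = []
        if len(host_ids) > 2:
            errors.append("more than two Integrity server systems on one shared bus")
        if sorted(host_ids) != [6, 7]:
            errors.append("one host port must use SCSI ID 6, the other the default 7")
        for host, count in buses_per_host.items():
            if count > 4:
                errors.append(f"{host}: more than four shared SCSI buses")
        return errors or ["bus configuration matches the stated restrictions"]

    print(check_shared_scsi_bus([6, 7], {"node_a": 2, "node_b": 2}))
    print(check_shared_scsi_bus([7, 7], {"node_a": 1, "node_b": 1}))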
Figure 4-1 Two-Node OpenVMS Integrity servers Cluster
System
4.8.3 Advantages
The SCSI interconnect offers the following advantages:
- Lowest cost, shared direct access to storage
Because SCSI is an
industry standard and is used extensively throughout the industry, it
is available from many manufacturers at competitive prices.
- Scalable configuration to achieve high performance at a moderate
price
You can choose:
- Width of SCSI interconnect
Narrow (8 bits) or wide (16 bits).
- Transmission mode
Single-ended signaling, the most common and
least expensive, or differential signaling, which provides higher
signal integrity and allows a longer SCSI interconnect.
- Signal speed (standard, fast, or ultra mode; see the bandwidth sketch
after this list)
- Number of nodes sharing the SCSI bus (two or three)
- Number of shared SCSI buses to which a node can connect (maximum of
six)
- Storage type and size (RZnn or HSZnn)
- Computer type and size (AlphaStation or AlphaServer)
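As background on how the width and speed choices combine, the peak
bandwidth of a parallel SCSI bus is simply the data-path width multiplied
by the transfer rate. The Python sketch below uses the commonly cited
transfer rates for standard, fast, and ultra modes (5, 10, and 20
mega-transfers per second); treat it as illustrative arithmetic rather than
a statement of what any particular OpenVMS configuration supports.

    # Illustrative arithmetic: peak parallel-SCSI bandwidth = width x rate.
    TRANSFER_RATE_MHZ = {"standard": 5, "fast": 10, "ultra": 20}  # mega-transfers/s
    WIDTH_BYTES = {"narrow": 1, "wide": 2}                        # 8-bit or 16-bit

    def scsi_peak_mb_per_s(mode: str, width: str) -> int:
        """Peak bus bandwidth in MB/s for the given signaling mode and width."""
        return TRANSFER_RATE_MHZ[mode] * WIDTH_BYTES[width]

    print(scsi_peak_mb_per_s("fast", "narrow"))   # 10 MB/s (Fast SCSI)
    print(scsi_peak_mb_per_s("ultra", "wide"))    # 40 MB/s (wide Ultra SCSI)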
4.8.4 Throughput
Table 4-3 shows throughput for the SCSI interconnect.