
HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations




Chapter 7
Configuring Fibre Channel as an OpenVMS Cluster Storage Interconnect

A major benefit of OpenVMS is its support of a wide range of interconnects and protocols for network configurations and for OpenVMS Cluster System configurations. This chapter describes OpenVMS support for Fibre Channel as a storage interconnect for single systems and as a shared storage interconnect for multihost OpenVMS Cluster systems. With few exceptions, as noted, this chapter applies equally to OpenVMS Alpha systems and OpenVMS I64 systems.

The following topics are discussed:

  • Overview of OpenVMS Fibre Channel support (Section 7.1)
  • Fibre Channel configuration support (Section 7.2)
  • Example configurations (Section 7.3)
  • Fibre Channel addresses, WWIDs, and device names (Section 7.4)

For information about multipath support for Fibre Channel configurations, see Chapter 6.

Note

The Fibre Channel interconnect is shown generically in the figures in this chapter. It is represented as a horizontal line to which the node and storage subsystems are connected. Physically, the Fibre Channel interconnect is always radially wired from a switch, as shown in Figure 7-1.

The representation of multiple SCSI disks and SCSI buses in a storage subsystem is also simplified. The multiple disks and SCSI buses, which one or more HSGx controllers present to a host as a logical unit, are shown in the figures as a single logical unit.

For ease of reference, the term HSG, which represents a Fibre Channel hierarchical storage controller, is used throughout this chapter to represent both an HSG60 and an HSG80, except where it is important to note any difference, as in Table 7-2.

7.1 Overview of OpenVMS Fibre Channel Support

Fibre Channel is an ANSI standard network and storage interconnect that offers many advantages over other interconnects. Its most important features and the support offered by OpenVMS for these features are shown in Table 7-1.

Table 7-1 Fibre Channel Features and OpenVMS Support
Feature OpenVMS Support
High-speed transmission OpenVMS supports a 2-Gb/s, full-duplex, serial interconnect (which can simultaneously transmit and receive 200 MB of data per second).
Choice of media OpenVMS supports fiber-optic media.
Long interconnect distances OpenVMS supports multimode fiber-optic media at 500 meters per link and single-mode fiber-optic media (for interswitch links [ISLs]) for distances up to 100 kilometers per link.
Multiple protocols OpenVMS supports SCSI-3. Possible future support for IP.
Numerous topologies OpenVMS supports switched FC (highly scalable, with multiple concurrent communications) and multiple switches (fabric). Support for arbitrated loop is planned for the StorageWorks Modular Storage Array (MSA) 1000 storage system only and will be announced on the OpenVMS home page: http://www.hp.com/go/openvms.

Figure 7-1 shows a logical view of a switched topology. The FC nodes are either Alpha hosts or storage subsystems. Each link from a node to the switch is a dedicated FC connection. The switch provides store-and-forward packet delivery between pairs of nodes. Concurrent communication between disjoint pairs of nodes is supported by the switch.

Figure 7-1 Switched Topology (Logical View)


Figure 7-2 shows a physical view of a Fibre Channel switched topology. The configuration in Figure 7-2 is simplified for clarity. Typical configurations will have multiple Fibre Channel interconnects for high availability, as shown in Section 7.3.4.

Figure 7-2 Switched Topology (Physical View)


Figure 7-3 shows an arbitrated loop topology. Two hosts are connected to a dual-ported StorageWorks MSA 1000 storage system. OpenVMS supports an arbitrated loop topology only on this storage system.

Note

Support for this topology is planned to be available soon and will be announced on the OpenVMS home page: http://www.hp.com/go/openvms.

Figure 7-3 Arbitrated Loop Topology Using MSA 1000


7.2 Fibre Channel Configuration Support

OpenVMS Alpha supports the Fibre Channel devices listed in Table 7-2. For Fibre Channel fabric components supported by OpenVMS, refer to the latest version of the HP StorageWorks SAN Design Reference Guide (order number AA-RMPNM-TE).

Note that Fibre Channel hardware names typically use the letter G to designate hardware that is specific to Fibre Channel. Fibre Channel configurations that include Fibre Channel equipment other than that listed here are not supported. To determine the required minimum versions of the operating system and firmware, see the release notes.

HP recommends that all OpenVMS Fibre Channel configurations use the latest update kit for the OpenVMS version they are running.

The root name of these kits is FIBRE_SCSI, a change from the earlier naming convention of FIBRECHAN. The kits are available from the following web site:


http://welcome.hp.com/country/us/eng/support.html
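
You can verify which kits are already installed with the POLYCENTER Software Installation (PCSI) utility. The following DCL commands are a minimal sketch; the wildcard pattern is illustrative, because the exact kit names vary by OpenVMS version.


$ ! List the installation history of all products and patch kits
$ PRODUCT SHOW HISTORY
$ ! Narrow the listing to Fibre Channel update kits (pattern is illustrative)
$ PRODUCT SHOW HISTORY *FIBRE*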

Table 7-2 Fibre Channel Hardware Components
Component Name Description
AlphaServer 800¹, 1000A², 1200, 4000, 4100, 8200, 8400, DS10, DS20, DS25, DS20E, ES40, ES45, ES47, ES80, GS60, GS60E, GS80, GS140, GS160, GS320, and GS1280 Alpha host.
HP Integrity server rx1600-2, rx2600-2, rx4640-8 HP Integrity host.
Enterprise Virtual Array (EVA) Fibre Channel "virtual" RAID storage with two Fibre Channel host ports and support for 240 physical Fibre Channel drives.
HSV110 Fibre Channel hierarchical storage controller module (for the EVA5000) with two Fibre Channel host ports and support for 6 or 12 SCSI drive enclosures.
HSG80 Fibre Channel hierarchical storage controller module with two Fibre Channel host ports and support for six SCSI drive buses.
HSG60 Fibre Channel hierarchical storage controller module with two Fibre Channel host ports and support for two SCSI buses.
MDR Fibre Channel Modular Data Router, a bridge to a SCSI tape or a SCSI tape library, with a 1-Gb/s Fibre Channel interface. The MDR must be connected to a Fibre Channel switch. It cannot be connected directly to an Alpha system.
NSR Fibre Channel Network Storage Router, a bridge to a SCSI tape or a SCSI tape library, with a 2-Gb/s Fibre Channel interface. The NSR must be connected to a Fibre Channel switch. It cannot be connected directly to an Alpha system.
Fibre Channel fabric components, including host adapters, switches, bridges, Gigabit interface converters (GBICs) for long distance configurations, and cables See the latest version of the HP StorageWorks SAN Design Reference Guide (order number: AA-RMPNM-TE).

¹ On the AlphaServer 800, the integral S3 Trio must be disabled when the KGPSA is installed.
² Console support for FC disks is not available on this model.
³ For the most up-to-date list, refer to the OpenVMS Cluster Software SPD.

OpenVMS supports the Fibre Channel SAN configurations described in the latest HP StorageWorks SAN Design Reference Guide (order number: AA-RMPNM-TE) and the Data Replication Manager (DRM) user documentation. This includes support for:

  • Multiswitch FC fabrics.
  • Up to 500 meters of multimode fiber per link, and up to 100-kilometer interswitch links (ISLs) using single-mode fiber. In addition, DRM configurations provide longer-distance ISLs through the use of the Open Systems Gateway and Wave Division Multiplexors.
  • Sharing of the fabric and the HSG storage with non-OpenVMS systems.

The StorageWorks documentation is available from their web site, which you can access from the OpenVMS web page:


http://www.hp.com/go/openvms

Select HP Storage (from related links in the left navigational bar). Next, locate the storage product. Then you can access the product's documentation.

Within the configurations described in the StorageWorks documentation, OpenVMS provides the following capabilities and restrictions:

  • All OpenVMS disk functions are supported: system disk, dump disks, shadow set member, quorum disk, and MSCP served disk. Each virtual disk must be assigned an identifier that is unique clusterwide.
  • OpenVMS provides support for the number of hosts, switches, and storage controllers specified in the StorageWorks documentation. In general, the number of hosts and storage controllers is limited only by the number of available fabric connections.
  • The number of Fibre Channel host bus adapters per platform depends on the platform type. Currently, the largest platforms support up to 26 adapters (independent of the number of OpenVMS instances running on the platform).
  • OpenVMS requires that the HSG operate in SCSI-3 mode, and if the HSG is in a dual redundant configuration, then the HSG must be in multibus failover mode. The HSG can only be shared with other systems that operate in these modes.
  • The OpenVMS Fibre Channel host bus adapter must be connected directly to the FC switch. The host bus adapter is not supported on a Fibre Channel loop, nor in a point-to-point connection to another Fibre Channel end node.
  • Neither the KGPSA-BC nor the KGPSA-CA can be connected to the same PCI bus as the S3 Trio 64V+ Video Card (PB2GA-JC/JD). On the AlphaServer 800, the integral S3 Trio must be disabled when the KGPSA is installed.
  • Hosts on the fabric can be configured as a single cluster or as multiple clusters and/or nonclustered nodes. It is critical to ensure that each cluster and each nonclustered system has exclusive access to its storage devices. HSG/HSV selective storage presentation, FC switch zoning, or both can be used to ensure that each HSG/HSV storage device is accessible to only one cluster or one nonclustered system.
  • The HSG supports a limited number of connections. A connection is a nonvolatile record of a particular host bus adapter communicating with a particular port on the HSG. (Refer to the HSG CLI command SHOW CONNECTIONS.) The HSG ACS V8.6 supports a maximum of 96 connections, whereas HSG ACS V8.5 allows a maximum of 64 connections, and HSG ACS V8.4 allows a maximum of 32 connections. The connection limit is the same for both single and dual redundant controllers.
    If your FC fabric is large, and the number of active connections exceeds the HSG limit, then you must reconfigure the fabric, or use FC switch zoning to "hide" some of the adapters from some of the HSG ports, in order to reduce the number of connections.
    The HSG does not delete connection information from the connection table when a host bus adapter is disconnected. Instead, you must prevent the table from becoming full by explicitly deleting stale connection information with a CLI command, as sketched in the example after this list.
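
The following HSG CLI session is a minimal sketch of the operations described above: reviewing the connection table, deleting a stale connection, assigning a clusterwide-unique device identifier, and using selective storage presentation to restrict a unit to one cluster's adapters. The connection names, unit number, and identifier are illustrative, and the exact syntax can vary with the ACS version; refer to the HSG CLI documentation for your controller.


HSG80> SHOW CONNECTIONS
HSG80> DELETE !NEWCON05
HSG80> SET D1 IDENTIFIER=77
HSG80> SET D1 DISABLE_ACCESS_PATH=ALL
HSG80> SET D1 ENABLE_ACCESS_PATH=(NODE1_FGA0, NODE2_FGA0)

With an identifier of 77, the unit is named $1$DGA77 on OpenVMS. Disabling all access paths and then enabling only the named connections ensures that adapters outside the cluster cannot access the unit.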

This configuration support is in effect as of the revision date of this document. OpenVMS plans to increase these limits in future releases.

In addition to the configurations already described, OpenVMS also supports the SANworks Data Replication Manager. This is a remote data vaulting solution that enables the use of Fibre Channel over longer distances. For more information, see the HP StorageWorks web site, which you can access from the OpenVMS web page:


http://www.hp.com/go/openvms

Select HP Storage (from related links in the left navigational bar). Next, locate the storage product.

7.2.1 Fibre Channel Remedial Kits

Qualification of new Fibre Channel hardware and larger configurations is ongoing. New hardware and larger configurations may necessitate enhancements to the Fibre Channel support in OpenVMS. Between releases of OpenVMS, enhancements and corrections to Fibre Channel software are made available by means of remedial kits on the HP support web site at:


http://h18007.www1.hp.com/support/files/index.html

The latest version of each kit is the one posted to the HP support web site. HP recommends that you monitor this web site.

HP also recommends that you monitor the Fibre Channel web site at:


http://h71000.www7.hp.com/openvms/fibre/

The Fibre Channel web site is periodically updated with important news and new slide presentations.

7.2.2 Mixed-Version and Mixed-Architecture Cluster Support

Shared Fibre Channel OpenVMS Cluster storage is supported in both mixed-version and mixed-architecture OpenVMS Cluster systems. Mixed-version support is described in Section 11.7. Mixed-architecture support means a combination of OpenVMS Alpha systems with either OpenVMS VAX systems or OpenVMS I64 systems. Certain restrictions apply to the size of a mixed-architecture OpenVMS Alpha and OpenVMS I64 Cluster system, as described in the HP OpenVMS Version 8.2 New Features and Documentation Overview manual.

The following configuration requirements must be observed:

  • All hosts configured for shared access to the same storage devices must be in the same OpenVMS Cluster.
  • All hosts in the cluster require a common cluster communication interconnect, such as a LAN, CI, DSSI, or MEMORY CHANNEL.
  • All hosts with a direct connection to the FC must be running one of the OpenVMS Alpha versions named in Section 11.7.
  • All hosts must have the remedial kits for mixed-version clusters installed, as documented in the Release Notes.
  • If you use DECevent for error tracing, Version 2.9 or later is required. Earlier versions of DECevent do not support Fibre Channel.
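
One way to confirm, from a running member, that all hosts are in the same cluster and to check each node's OpenVMS version is to use the SHOW CLUSTER display and the SYSMAN utility. The following is a minimal sketch; output is omitted.


$ SHOW CLUSTER
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO SHOW SYSTEM/NOPROCESS
SYSMAN> EXIT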

7.2.3 Fibre Channel and OpenVMS Galaxy Configurations

Fibre Channel is supported in all OpenVMS Galaxy configurations. For more information about Galaxy configurations, see the HP OpenVMS Alpha Partitioning and Galaxy Guide.

7.3 Example Configurations

This section presents example Fibre Channel configurations.

Note

These configurations are valid for HSG storage controllers and for HSV storage controllers, except for Section 7.3.1 and Section 7.3.2, which apply only to HSG storage controllers.

The configurations build on each other, starting with the smallest valid configuration and adding redundant components for increasing levels of availability, performance, and scalability.

7.3.1 Single Host with Dual-Ported Storage

Figure 7-4 shows a single system using Fibre Channel as a storage interconnect.

Figure 7-4 Single Host With One Dual-Ported Storage Controller


Note the following about this configuration:

  • Dual ports of the HSG or HSV storage controller increase the availability and performance of the storage subsystem.
  • Extra ports on the switch enable system growth.
  • To maximize performance, logical units can be spread over the two HSG or HSV ports.
  • The switch and the HSG or HSV are single points of failure. To provide higher availability, Volume Shadowing for OpenVMS can be used to replicate the data to another Fibre Channel switch and HSG or HSV controller.
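
As an illustration of the shadowing approach mentioned above, the following DCL command mounts a shadow set whose two members are units presented by different controllers. The virtual unit name, member device names, and volume label are illustrative, and Volume Shadowing for OpenVMS must be licensed and enabled on the host.


$ MOUNT/SYSTEM DSA42: /SHADOW=($1$DGA101:, $1$DGA201:) DATA42

Because every write is delivered to both members, the data remains available if the switch or controller serving one member fails.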

7.3.2 Multiple Hosts With One Dual-Ported Storage Controller

Figure 7-5 shows multiple hosts connected to a dual-ported storage subsystem.

Figure 7-5 Multiple Hosts With One Dual-Ported Storage Controller


Note the following about this configuration:

  • Multiple hosts increase availability of the entire system.
  • Extra ports on the switch enable system growth.
  • The switch and the HSG or HSV are single points of failure. To provide higher availability, Volume Shadowing for OpenVMS can be used to replicate the data to another Fibre Channel switch and HSG or HSV controller.

7.3.3 Multiple Hosts With Storage Controller Redundancy

Figure 7-6 shows multiple hosts connected to two dual-ported storage controllers.

Figure 7-6 Multiple Hosts With Storage Controller Redundancy


This configuration offers the following advantages:

  • Logical units can be spread over the four HSG or HSV ports, offering higher performance.
  • HSGs or HSVs can be configured in multibus failover mode, even though there is just one Fibre Channel "bus."
  • The switch is still a single point of failure. To provide higher availability, Volume Shadowing for OpenVMS can be used to replicate the data to another Fibre Channel switch and HSG or HSV controller.

7.3.4 Multiple Hosts With Multiple Independent Switches

Figure 7-7 shows multiple hosts connected to two switches, each of which is connected to a pair of dual-ported storage controllers.

Figure 7-7 Multiple Hosts With Multiple Independent Switches


This two-switch configuration offers the advantages of the previous configurations plus the following:

  • Higher level of availability afforded by two switches. There is no single point of failure.
  • Better performance because of the additional host bus adapter.
  • Each host has multiple independent paths to a storage subsystem. The two switches are not connected to each other to ensure that the paths are completely independent.

7.3.5 Multiple Hosts With Dual Fabrics

Figure 7-8 shows multiple hosts connected to two fabrics; each fabric consists of two switches.

Figure 7-8 Multiple Hosts With Dual Fabrics


This dual-fabric configuration offers the advantages of the previous configurations plus the following advantages:

  • More ports are available per fabric for connecting to additional hosts and storage subsystems.
  • Each host has four host bus adapters, one for each switch. Only two adapters are required, one per fabric. The additional adapters increase availability and performance.

7.3.6 Multiple Hosts With Larger Fabrics

The configurations shown in this section offer even higher levels of performance and scalability.

Figure 7-9 shows multiple hosts connected to two fabrics. Each fabric has four switches.

Figure 7-9 Multiple Hosts With Larger Dual Fabrics


Figure 7-10 shows multiple hosts connected to four fabrics. Each fabric has four switches.

Figure 7-10 Multiple Hosts With Four Fabrics


7.4 Fibre Channel Addresses, WWIDs, and Device Names

Fibre Channel devices for disk and tape storage come with factory-assigned worldwide IDs (WWIDs). These WWIDs are used by the system for automatic FC address assignment. The FC WWIDs and addresses also provide the means for the system manager to identify and locate devices in the FC configuration. The FC WWIDs and addresses are displayed, for example, by the Alpha console and by the HSG or HSV console. It is necessary, therefore, for the system manager to understand the meaning of these identifiers and how they relate to OpenVMS device names.

7.4.1 Fibre Channel Addresses and WWIDs

In most situations, Fibre Channel devices are configured to have temporary addresses. The device's address is assigned automatically each time the interconnect initializes. The device may receive a new address each time the Fibre Channel interconnect is reconfigured and reinitialized. This is done so that Fibre Channel devices do not require the use of address jumpers. There is one Fibre Channel address per port, as shown in Figure 7-11.

Figure 7-11 Fibre Channel Host and Port Addresses


In order to provide more permanent identification, each port on each device has a WWID, which is assigned at the factory. Every Fibre Channel WWID is unique. Fibre Channel also has node WWIDs to identify multiported devices. WWIDs are used by the system to detect and recover from automatic address changes. They are useful to system managers for identifying and locating physical devices.

Figure 7-12 shows Fibre Channel components with their factory-assigned WWIDs and their Fibre Channel addresses.

Figure 7-12 Fibre Channel Host and Port WWIDs and Addresses


Note the following about this figure:

  • The host adapter's port name and node name are each a 64-bit, factory-assigned WWID.
  • The host adapter's address is a 24-bit, automatically assigned, transient value.
  • Each HSG or HSV storage port has a 64-bit, factory-assigned WWID and a 24-bit transient address that is automatically assigned.
  • An HSG or HSV controller pair shares a node name that is a 64-bit, factory-assigned WWID.

You can display the FC node name and FC port name for a Fibre Channel host bus adapter with the SHOW DEVICE/FULL command. For example:


$ SHOW DEVICE/FULL FGA0

Device FGA0:, device type KGPSA Fibre Channel, is online, shareable, error
    logging is enabled.

    Error count                    0    Operations completed                  0
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot              S:RWPL,O:RWPL,G,W
    Reference count                0    Default buffer size                   0
    FC Port Name 1000-0000-C923-0E48    FC Node Name        2000-0000-C923-0E48
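
To list every Fibre Channel host bus adapter or Fibre Channel disk configured on the system, you can give SHOW DEVICE a device-name prefix. The following sketch omits the output; the devices displayed depend on your configuration.


$ SHOW DEVICE FG    ! All Fibre Channel host bus adapters (FGA0, FGB0, ...)
$ SHOW DEVICE DG    ! All Fibre Channel disks ($1$DGAnnnn)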

