
Guidelines for OpenVMS Cluster Configurations



5.3 Choosing Disk Performance Optimizers

Estimating your anticipated disk workload and analyzing the workload data can help you determine your disk performance requirements.

You can use the Monitor utility and DECamds to help you determine which performance optimizer best meets your application and business needs.
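For example, the following Monitor utility commands display per-disk I/O operation rates and request queue lengths, two common indicators of disk load; the 10-second interval is only an illustrative choice:

    $ MONITOR DISK/ITEM=OPERATION_RATE/INTERVAL=10
    $ MONITOR DISK/ITEM=QUEUE_LENGTH/INTERVAL=10

Disks that show consistently high operation rates or queue lengths are the most likely candidates for the optimizers described in Table 5-3.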

5.3.1 Performance Optimizers

Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.

Table 5-3 Disk Performance Optimizers
DECram for OpenVMS: A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed faster than data on hardware disks. DECram disks can be shadowed with Volume Shadowing for OpenVMS and served with the MSCP server.(1)

Solid-state disks: In many systems, approximately 80% of the I/O requests are for approximately 20% of the data stored online. Solid-state devices can provide the rapid access needed for this subset of the data.

Disk striping: Disk striping (RAID level 0) lets applications access an array of disk drives in parallel for higher throughput. Disk striping works by grouping several disks into a "stripe set" and then dividing the application data into "chunks" that are spread equally across the disks in the stripe set in round-robin fashion.

By reducing access time, disk striping can improve performance, especially if the application:

  • Performs large data transfers in parallel.
  • Requires load balancing across drives.

Two independent types of disk striping are available:

  • Controller-based striping, in which HSJ and HSG controllers combine several disks into a single stripe set. This stripe set is presented to OpenVMS as a single volume. This type of disk striping is hardware based.
  • Host-based striping, using RAID for OpenVMS, which creates stripe sets on an OpenVMS host. The OpenVMS software breaks up an I/O request into several simultaneous requests that it sends to the disks of the stripe set. This type of disk striping is software based.

Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. Both controller-based and host-based stripe sets can be shadowed.

Extended file cache (XFC): OpenVMS Alpha supports host-based caching with the extended file cache (XFC), which can replace, and can coexist with, the virtual I/O cache (VIOC). XFC is a clusterwide, file-system data cache that offers several features not available with VIOC, including read-ahead caching and automatic resizing of the cache to improve performance (a brief SHOW MEMORY example follows this table). OpenVMS I64 also supports XFC but does not support VIOC.

Controllers with disk cache: Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache complete almost immediately, without seek time or rotational latency, eliminating the two largest components of I/O response time. The HSC, HSJ, HSD, HSZ, and HSG controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller.

(1) The MSCP server makes the locally connected disks to which it has direct access available to other systems in the OpenVMS Cluster.

Reference: See Section 10.8 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
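As a quick way to observe the host-based caching described in Table 5-3, you can display file cache statistics with the SHOW MEMORY command; this is a minimal illustration, and the exact output depends on your OpenVMS version and whether XFC or VIOC is active:

    $ SHOW MEMORY/CACHE        ! Summary of file cache usage (XFC or VIOC)
    $ SHOW MEMORY/CACHE/FULL   ! Detailed statistics, including cache hit rates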

5.4 Determining Disk Availability Requirements

For storage subsystems, availability is determined by the availability of the storage device as well as the availability of the path to the device.

5.4.1 Availability Requirements

Some costs are associated with optimizing your storage subsystems for higher availability. Part of analyzing availability costs is weighing the cost of protecting data against the cost of unavailable data during failures. Depending on the nature of your business, the impact of storage subsystem failures may be low, moderate, or high.

Device and data availability options reduce and sometimes negate the impact of storage subsystem failures.

5.4.2 Device and Data Availability Optimizers

Depending on your availability requirements, choose among the availability optimizers described in Table 5-4 for applications and data with the greatest need.

Table 5-4 Storage Availability Optimizers
Redundant access paths: Protect against hardware failures along the path to the device by configuring redundant access paths to the data.

Volume Shadowing for OpenVMS software: Replicates data written to a virtual disk by writing the data to one or more physically identical disks that form a shadow set. With replicated data, users can access data even when one disk becomes unavailable. If one shadow set member fails, the shadowing software removes the drive from the shadow set, and processing continues with the remaining drives. Shadowing is transparent to applications and allows data storage and delivery to continue during media, disk, controller, and interconnect failures. (A sample MOUNT command follows this table.)

A shadow set can contain up to three members, and shadow set members can be anywhere within the storage subsystem of an OpenVMS Cluster system.

Reference: See HP Volume Shadowing for OpenVMS for more information about volume shadowing.

System disk redundancy: Place system files judiciously on disk drives with multiple access paths. OpenVMS Cluster availability increases when you form a shadow set that includes the system disk. You can also configure an OpenVMS Cluster system with multiple system disks.

Reference: For more information, see Section 11.2.

Database redundancy: Keep redundant copies of certain files or partitions of databases that are, for example, updated overnight by batch jobs. Rather than using shadow sets, which maintain a complete copy of the entire disk, it might be sufficient to maintain a backup copy of selected files or databases on another disk or even on a standby tape.

Newer devices: Protect against failure by choosing newer devices. Typically, newer devices provide improved reliability and a longer mean time between failures (MTBF). Newer controllers also improve reliability by employing updated chip technologies.

Comprehensive backup strategies: Frequent and regular backups are the most effective way to ensure the availability of your data.

Reference: For information about Fibre Channel tape support, see Section 7.5. For information about backup strategies and OpenVMS Backup, refer to the HP OpenVMS System Manager's Manual. For information about additional backup software and solutions, visit: http://h18006.www1.hp.com/storage/tapestorage.html and http://h71000.www7.hp.com/openvms/storage.html.
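As a hedged sketch of the Volume Shadowing entry in Table 5-4, the following command mounts a three-member shadow set clusterwide; the virtual unit DSA1:, the member device names, the volume label, and the logical name are illustrative placeholders:

    $ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA10:,$1$DGA20:,$1$DGA30:) SALES_DATA SALES$DISK

Applications access the DSA1: virtual unit; the shadowing software keeps the member disks identical and removes a failed member from the set so that processing can continue on the remaining members.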

5.5 CI-Based Storage

The CI interconnect provides the highest OpenVMS Cluster availability with redundant, independent transmit-and-receive CI cable pairs. The CI offers multiple access paths to disks and tapes by means of dual-ported devices between HSC or HSJ controllers.

5.5.1 Supported Controllers and Devices

The following controllers and devices are supported by the CI interconnect:

  • HSJ storage controllers
    • SCSI devices (RZ, TZ, EZ)
  • HSC storage controllers
    • SDI and STI devices (RA, ESE, TA)
    • K.SCSI devices (RZ, TZ, EZ)

5.6 DSSI Storage

DSSI-based configurations provide shared direct access to storage for systems with moderate storage capacity. The DSSI interconnect provides the lowest-cost shared access to storage in an OpenVMS Cluster.

The storage tables in this section may contain incomplete lists of products.

5.6.1 Supported Devices

DSSI configurations support the following devices:

  • EF-series solid-state disks
  • RF-series disks
  • TF-series tapes
  • DECarray storage arrays
  • HSD storage controller
    • SCSI devices (RZ, TZ, EZ)

Reference: RZ, TZ, and EZ SCSI storage devices are described in Section 5.7.

5.7 SCSI-Based Storage

The Small Computer Systems Interface (SCSI) bus is a storage interconnect based on an ANSI industry standard. Depending on the bus width, you can connect a total of 8 or 16 nodes to the SCSI bus, up to 3 of which can be host systems (CPUs).

5.7.1 Supported Devices

The following devices can connect to a single host or multihost SCSI bus:

  • RZ-series disks
  • HSZ storage controllers

The following devices can connect only to a single host SCSI bus:

  • EZ-series disks
  • RRD-series CD-ROMs
  • TZ-series tapes

5.8 Fibre Channel Based Storage

The Fibre Channel interconnect is a storage interconnect that is based on an ANSI industry standard.

5.8.1 Storage Devices

The HSG and HSV storage controllers can connect to a single host or to a multihost Fibre Channel interconnect. For more information about Fibre Channel hardware support, see Section 7.2.

5.9 Host-Based Storage

Host-based storage devices can be connected locally to OpenVMS Cluster member systems using local adapters. You can make this locally connected storage available to other OpenVMS Cluster members by configuring a node as an MSCP server.

You can use local adapters to connect each disk to two access paths (dual ports). Dual porting allows automatic failover of disks between nodes.
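The following MODPARAMS.DAT fragment is a minimal sketch of how MSCP serving is typically enabled on the serving node; the values shown are one common choice, not a recommendation for every configuration:

    ! In SYS$SYSTEM:MODPARAMS.DAT on the node that serves its local disks
    MSCP_LOAD = 1          ! Load the MSCP server at boot time
    MSCP_SERVE_ALL = 2     ! Serve locally attached disks to the rest of the cluster

After editing MODPARAMS.DAT, run AUTOGEN (for example, @SYS$UPDATE:AUTOGEN GETDATA REBOOT) so that the new parameter values take effect.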

5.9.1 Internal Buses

Locally connected storage devices attach to a system's internal bus.

Alpha systems use the following internal buses:

  • PCI
  • EISA
  • XMI
  • SCSI
  • TURBOchannel
  • Futurebus+

VAX systems use the following internal buses:

  • VAXBI
  • XMI
  • Q-bus
  • SCSI

5.9.2 Local Adapters

Following is a list of local adapters and their bus types:

  • KGPSA (PCI)
  • KZPSM (PCI)
  • KZPDA (PCI)
  • KZPSC (PCI)
  • KZPAC (PCI)
  • KZESC (EISA)
  • KZMSA (XMI)
  • PB2HA (EISA)
  • PMAZB (TURBOchannel)
  • PMAZC (TURBOchannel)
  • KDM70 (XMI)
  • KDB50 (VAXBI)
  • KDA50 (Q-bus)


Chapter 6
Configuring Multiple Paths to SCSI and Fibre Channel Storage

This chapter describes multipath SCSI support, which is available on:

  • OpenVMS Alpha Version 7.2 (and later) for parallel SCSI and Fibre Channel disk devices
  • OpenVMS Alpha Version 7.3-1 (and later) for Fibre Channel tape devices
  • OpenVMS I64 Version 8.2 for Fibre Channel disk and tape devices

This chapter applies to disks and tapes except where noted. The SCSI protocol is used on both the parallel SCSI interconnect and the Fibre Channel interconnect. The term SCSI is used to refer to either parallel SCSI or Fibre Channel (FC) devices throughout the chapter.

Note

OpenVMS Alpha Version 7.3-1 introduced support for failover between local and MSCP served paths to SCSI disk devices. This type of failover does not apply to tape devices. This capability is enabled by setting the MPDEV_REMOTE system parameter to 1, which is the default setting.

This SCSI multipath feature may be incompatible with some third-party disk caching, disk shadowing, or similar products. HP advises that you not use such software on SCSI devices that are configured for multipath failover (for example, SCSI devices that are connected to HSZ70 and HSZ80 controllers in multibus mode) until this feature is supported by the producer of the software.

See Section 6.2 for important requirements and restrictions for using the multipath SCSI function.
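As a quick check, you can display the relevant multipath system parameters with SYSGEN; this sketch only reads the current values (it changes nothing), and the meaning of the settings is summarized after the commands:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SHOW MPDEV_ENABLE
    SYSGEN> SHOW MPDEV_REMOTE
    SYSGEN> EXIT

MPDEV_ENABLE set to 1 (the default) enables multipath support as a whole; MPDEV_REMOTE set to 1 (the default) allows MSCP served paths to be included in multipath sets, as described in the note above.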

Note that the Fibre Channel and parallel SCSI interconnects are shown generically in this chapter. Each is represented as a horizontal line to which the node and storage subsystems are connected. Physically, the Fibre Channel interconnect is always radially wired from a switch, as shown in Figure 7-1. Parallel SCSI can be radially wired to a hub or can be a daisy-chained bus.

The representation of multiple SCSI disks and SCSI buses in a storage subsystem is also simplified. The multiple disks and SCSI buses that one or more HSZx, HSGx, or HSVx controllers present to a host as a logical unit are shown in the figures as a single logical unit.


6.1 Overview of Multipath SCSI Support

A multipath SCSI configuration provides failover from one path to a device to another path to the same device. Multiple paths to the same device increase the availability of that device for I/O operations. Multiple paths also offer higher aggregate performance. Figure 6-1 shows a multipath SCSI configuration. Two paths are configured from a computer to the same virtual storage device.

Multipath SCSI configurations for disk devices can use either parallel SCSI or Fibre Channel as the storage interconnect, as illustrated by Figure 6-1. Multipath SCSI configurations for tape devices can use only Fibre Channel as the storage interconnect.

Two or more paths to a single device are called a multipath set. When the system configures a path to a device, it checks for an existing device with the same name but a different path. If such a device is found, and multipath support is enabled, the system either forms a multipath set or adds the new path to an existing set. If multipath support is not enabled, then no more than one path to a device is configured.

The system presents a multipath set as a single device. The system selects one path to the device as the "current" path, and performs all I/O over this path until there is a failure or the system manager requests that the system switch to another path.
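For example, you can list the paths to a multipath device and manually switch the current path with the following commands; the device name $1$DGA23: and the path name are hypothetical placeholders for your own configuration, and the /MULTIPATH_SET qualifier is available on versions that include multipath support:

    $ SHOW DEVICE/FULL $1$DGA23:       ! Lists all paths and identifies the current path
    $ SHOW DEVICE/MULTIPATH_SET        ! Summarizes the multipath sets on the system
    $ SET DEVICE $1$DGA23: /SWITCH /PATH=PGA0.5000-1FE1-0001-0AA1

The SET DEVICE/SWITCH command requests a manual switch to the named path; the system continues to use that path until another switch is requested or a path failure forces a failover.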

Multipath SCSI support provides the following types of failover:

  • Direct SCSI to direct SCSI
  • Direct SCSI to MSCP served (disks only)
  • MSCP served to direct SCSI (disks only)

Direct SCSI to direct SCSI failover requires the use of multiported SCSI devices. Direct SCSI to MSCP served failover requires multiple hosts per SCSI bus, but does not require multiported SCSI devices. These two failover types can be combined. Each type and the combination of the two are described next.

6.1.1 Direct SCSI to Direct SCSI Failover

Direct SCSI to direct SCSI failover can be used on systems with multiported SCSI devices. The dual HSZ70, the HSZ80, the HSG80, the dual MDR, and the HSV110 are examples of multiported SCSI devices. A multiported SCSI device can be configured with multiple ports on the same physical interconnect so that if one of the ports fails, the host can continue to access the device through another port. This is known as transparent failover mode and has been supported by OpenVMS for disk devices since Version 6.2.

OpenVMS Version 7.2 introduced support for a new failover mode in which the multiported disk device can be configured with its ports on different physical interconnects. This is known as multibus failover mode.

The HSx failover modes are selected by HSx console commands. Transparent and multibus modes are described in more detail in Section 6.3.
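For example, on an HSG80 controller pair the mode change is typically made at the controller console; the following lines are a hedged sketch only (the exact commands and required preconditions vary with the controller model and firmware, so treat them as an assumption and consult your controller documentation):

    HSG80> SET NOFAILOVER
    HSG80> SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER

The first command dissolves any existing failover pairing (for example, transparent mode); the second re-pairs the two controllers in multibus failover mode, copying the configuration from the controller at which the command is entered.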

Figure 6-1 is a generic illustration of a multibus failover configuration.

Note

Configure multiple direct SCSI paths to a disk device only when multipath support is enabled on all connected nodes, and the HSZ/G is in multibus failover mode.

The two logical disk devices shown in Figure 6-1 represent virtual storage units that are presented to the host by the HSx controller modules. Each logical storage unit is "on line" to one of the two HSx controller modules at a time. When there are multiple logical units, they can be on line to different HSx controllers so that both HSx controllers can be active at the same time.

In transparent mode, a logical unit switches from one controller to the other when an HSx controller detects that the other controller is no longer functioning.

In multibus mode, as shown in Figure 6-1, a logical unit switches from one controller to the other when one of the following events occurs:

  • One HSx controller detects that the other controller is no longer functioning.
  • The OpenVMS multipath software detects that the current path has failed and issues a command to cause a switch.
  • The OpenVMS system manager issues a command to cause a switch.

Figure 6-1 Multibus Failover Configuration


Note the following about Figure 6-1:

  • Host has two adapters.
  • Both interconnects can be parallel SCSI (HSZ70 or HSZ80) or both can be Fibre Channel (HSGx or HSVx), but the two types cannot be mixed.
  • Storage cabinet contains two HSx controllers configured for multibus failover mode.

The multibus configuration offers the following advantages over transparent failover:

  • Higher aggregate performance with two host adapters and two HSx controller modules in operation.
  • Higher availability because the storage is still accessible when a host adapter, the interconnect, or the HSx controller module on a path fails.

6.1.2 Direct SCSI to MSCP Served Failover (Disks Only)

OpenVMS provides support for multiple hosts that share a SCSI bus. This is known as a multihost SCSI OpenVMS Cluster system. In this configuration, the SCSI bus is a shared storage interconnect. Cluster communication occurs over a second interconnect (LAN, DSSI, CI, or MEMORY CHANNEL).

Multipath support in a multihost SCSI OpenVMS Cluster system enables failover from directly attached SCSI storage to MSCP served SCSI storage, as shown in Figure 6-2.

Figure 6-2 Direct SCSI to MSCP Served Configuration With One Interconnect


Note the following about this configuration:

  • Two hosts are connected to a shared storage interconnect.
  • Two hosts are connected by a second interconnect (LAN, CI, DSSI, or MEMORY CHANNEL) for cluster communications.
  • The storage devices can have a single port or multiple ports.
  • If node EDGAR's SCSI connection to the storage fails, the SCSI storage is MSCP served by the remaining host over the cluster interconnect.

Multipath support in such a multihost SCSI OpenVMS Cluster system also enables failover from MSCP served SCSI storage to directly attached SCSI storage. For example, the following sequence of events can occur on the configuration shown in Figure 6-2:

  • Node POE is using node EDGAR as an MSCP server to access some storage device on the shared storage interconnect.
  • On node EDGAR, the direct connection to the shared storage fails, or node EDGAR is shut down, or node EDGAR becomes unreachable via the cluster interconnect.
  • Node POE switches to using its direct path to the shared storage.

Note

In this document, the capability to fail over from direct SCSI to MSCP served paths implies the ability to fail over in either direction between direct and served paths.

