
Guidelines for OpenVMS Cluster Configurations




Chapter 5
Choosing OpenVMS Cluster Storage Subsystems

This chapter describes how to design a storage subsystem. The design process involves the following steps:

  1. Understanding storage product choices
  2. Estimating storage capacity requirements
  3. Choosing disk performance optimizers
  4. Determining disk availability requirements
  5. Understanding advantages and tradeoffs for:
    • SAS based storage
    • SCSI based storage
    • Fibre Channel based storage
    • Host-based storage
    • LAN InfoServer

The rest of this chapter contains sections that explain these steps in detail.

5.1 Understanding Storage Product Choices

In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks lets you configure complex storage subsystems by choosing from the following modular elements:

  • Storage devices such as disks, tapes, CD-ROMs, and solid-state disks
  • Array controllers
  • Power supplies
  • Packaging
  • Interconnects
  • Software

5.1.1 Criteria for Choosing Devices

Consider the following criteria when choosing storage devices:

  • Supported interconnects
  • Capacity
  • I/O rate
  • Floor space
  • Purchase, service, and maintenance cost

5.1.2 How Interconnects Affect Storage Choices

One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.

In an OpenVMS Cluster system, the following storage devices and adapters can be connected to OpenVMS Cluster interconnects:

  • LSI 1068 and LSI Logic 1068e (on SAS)
  • HSZ and RZ series (on SCSI)
  • HSG and HSV controllers (on Fibre Channel)
  • Local system adapters

Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.

Table 5-1 Interconnects and Corresponding Storage Devices

Storage Interconnect   Storage Devices
SCSI                   HSZ controllers and SCSI storage
Fibre Channel          HSG and HSV controllers and SCSI storage
SAS                    LSI 1068 and LSI Logic 1068e controllers and SCSI storage
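
If you are unsure which storage devices are already visible on a given interconnect, the SHOW DEVICE command provides a quick inventory. The following lines are only an illustrative sketch; the device name $1$DGA100: is a placeholder, and the name prefixes shown (DK for SCSI disks, DG for Fibre Channel disks) are typical examples.

    $ SHOW DEVICE DK                 ! locally connected SCSI disks
    $ SHOW DEVICE DG                 ! Fibre Channel disks behind HSG/HSV controllers
    $ SHOW DEVICE/FULL $1$DGA100:    ! detailed information for one device, including access paths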

5.1.3 How Floor Space Affects Storage Choices

If the cost of floor space is high and you want to minimize the floor space used for storage devices, consider these options:

  • Choose disk storage arrays for high capacity with small footprint. Several storage devices come in stackable cabinets for labs with higher ceilings.
  • Choose high-capacity disks over high-performance disks.
  • Make it a practice to upgrade regularly to newer storage arrays or disks. As storage technology improves, storage devices are available at higher performance and capacity and reduced physical size.
  • Plan adequate floor space for power and cooling equipment.

5.2 Determining Storage Capacity Requirements

Storage capacity is the amount of space needed on storage devices to hold system, application, and user files. Estimating your storage capacity requirements helps you determine the amount of storage needed for your OpenVMS Cluster configuration.

5.2.1 Estimating Disk Capacity Requirements

To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.

Table 5-2 Estimating Disk Capacity Requirements
Software Component Description
OpenVMS operating system Estimate the number of blocks (see note 1) required by the OpenVMS operating system.

Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information.

Page, swap, and dump files Use AUTOGEN to determine the amount of disk space required for page, swap, and dump files.

Reference: The OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes.

Site-specific utilities and data Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files.
Application programs Estimate the space required for each application to be installed on your OpenVMS Cluster system, using information from the application suppliers.

Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use.

User-written programs Estimate the space required for user-written programs and their associated databases.
Databases Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database.
User data Estimate user disk-space requirements according to these guidelines:
  • Allocate from 10,000 to 100,000 blocks for each occasional user.

    An occasional user reads, writes, and deletes electronic mail; has few, if any, programs; and has little need to keep files for long periods.

  • Allocate from 250,000 to 1,000,000 blocks for each moderate user.

    A moderate user uses the system extensively for electronic communications, keeps information on line, and has a few programs for private use.

  • Allocate 1,000,000 to 3,000,000 blocks for each extensive user.

    An extensive user can require a significant amount of storage space for programs under development and data files, in addition to normal system use for electronic mail. This user may require several hundred thousand blocks of storage, depending on the number of projects and programs being developed and maintained.

Total requirements The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration.

Note 1: Storage capacity is measured in blocks. Each block contains 512 bytes.
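
As a rough worked example (the figures are illustrative, not recommendations), the following DCL fragment converts a user-data estimate from blocks to megabytes, using the 512-byte block size noted above:

    $ users = 20                        ! moderate users
    $ blocks_per_user = 500000          ! midpoint of the 250,000 to 1,000,000 range
    $ total_blocks = users * blocks_per_user
    $ total_mb = total_blocks / 2048    ! 2048 blocks of 512 bytes = 1 MB
    $ WRITE SYS$OUTPUT "User data estimate: ''total_blocks' blocks (''total_mb' MB)"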

5.2.2 Additional Disk Capacity Requirements

Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.

For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.

To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.

Planning for adequate backup storage capacity can make archiving procedures more effective and reduce the capacity requirements for online storage.
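
For example, a periodic archiving job might copy files that have not been modified since a cutoff date to tape and record the backup date in each file header. The following BACKUP command is only a hedged sketch; the disk and tape device names, directory specification, save-set name, and cutoff date are placeholders:

    $ ! Archive user files not modified since the cutoff date to a tape save set
    $ BACKUP/RECORD/LOG DKA200:[USERS...]*.*;* /BEFORE=1-JAN-2010 /MODIFIED -
             MKA500:ARCHIVE_2010.BCK/SAVE_SET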

5.3 Choosing Disk Performance Optimizers

Estimating your anticipated disk performance work load and analyzing the work load data can help you determine your disk performance requirements.

You can use the Monitor utility and DECamds to help you determine which performance optimizer best meets your application and business needs.
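
For example, the Monitor utility can sample per-disk I/O rates and queue lengths over an interval. The following commands are a minimal sketch; the interval and the choice of items are illustrative:

    $ MONITOR DISK/ITEM=OPERATION_RATE/INTERVAL=15    ! I/O operations per second, per disk
    $ MONITOR DISK/ITEM=QUEUE_LENGTH/INTERVAL=15      ! average I/O queue length, per disk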

5.3.1 Performance Optimizers

Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.

Table 5-3 Disk Performance Optimizers
Optimizer Description
DECram for OpenVMS A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed faster than data on hardware disks. DECram disks can be shadowed with Volume Shadowing for OpenVMS and served with the MSCP server (see note 1).
Solid-state disks In many systems, approximately 80% of the I/O requests demand information from approximately 20% of the data stored online. Solid-state devices can yield the rapid access needed for this subset of the data.
Disk striping Disk striping (RAID level 0) lets applications access an array of disk drives in parallel for higher throughput. Disk striping works by grouping several disks into a "stripe set" and then dividing the application data into "chunks" that are spread equally across the disks in the stripe set in a round-robin fashion.

By reducing access time, disk striping can improve performance, especially if the application:

  • Performs large data transfers in parallel.
  • Requires load balancing across drives.

Two independent types of disk striping are available:

  • Controller-based striping, in which HSJ and HSG controllers combine several disks into a single stripe set. This stripe set is presented to OpenVMS as a single volume. This type of disk striping is hardware based.
  • Host-based striping, using RAID for OpenVMS, which creates stripe sets on an OpenVMS host. The OpenVMS software breaks up an I/O request into several simultaneous requests that it sends to the disks of the stripe set. This type of disk striping is software based.

Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. You can shadow controller-based stripe sets, and you can shadow host-based disk stripe sets.

Extended file cache (XFC) OpenVMS Alpha supports host-based caching with extended file cache (XFC), which can replace or coexist with virtual I/O cache (VIOC). XFC is a clusterwide, file-system data cache that offers several features not available with VIOC, including read-ahead caching and automatic resizing of the cache to improve performance. OpenVMS for Integrity servers also supports XFC but does not support VIOC.
Controllers with disk cache Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache can be done almost immediately and without any seek time or rotational latency. For these accesses, the two largest components of the I/O response time are eliminated. The HSZ and HSG controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller.

Note 1: The MSCP server makes locally connected disks to which it has direct access available to other systems in the OpenVMS Cluster.

Reference: See Section 9.5 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
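
For example, on OpenVMS versions that use XFC, you can observe how effectively the cache is being used with the SHOW MEMORY command. The following lines are a hedged illustration:

    $ SHOW MEMORY/CACHE        ! summary of extended file cache size and hit rate
    $ SHOW MEMORY/CACHE/FULL   ! more detailed XFC statistics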

5.4 Determining Disk Availability Requirements

For storage subsystems, availability is determined by the availability of the storage device as well as the availability of the path to the device.

5.4.1 Availability Requirements

Some costs are associated with optimizing your storage subsystems for higher availability. Part of analyzing availability costs is weighing the cost of protecting data against the cost of unavailable data during failures. Depending on the nature of your business, the impact of storage subsystem failures may be low, moderate, or high.

Device and data availability options reduce and sometimes negate the impact of storage subsystem failures.

5.4.2 Device and Data Availability Optimizers

Depending on your availability requirements, choose among the availability optimizers described in Table 5-4 for applications and data with the greatest need.

Table 5-4 Storage Availability Optimizers
Availability Optimizer Description
Redundant access paths Protect against hardware failures along the path to the device by configuring redundant access paths to the data.
Volume Shadowing for OpenVMS software Replicates data written to a virtual disk by writing the data to one or more physically identical disks that form a shadow set. With replicated data, users can access data even when one disk becomes unavailable. If one shadow set member fails, the shadowing software removes the drive from the shadow set, and processing continues with the remaining drives. Shadowing is transparent to applications and allows data storage and delivery during media, disk, controller, and interconnect failure.

A shadow set can contain up to three members, and shadow set members can be anywhere within the storage subsystem of an OpenVMS Cluster system.

Reference: See HP Volume Shadowing for OpenVMS for more information about volume shadowing.

System disk redundancy Place system files judiciously on disk drives with multiple access paths. OpenVMS Cluster availability increases when you form a shadow set that includes the system disk. You can also configure an OpenVMS Cluster system with multiple system disks.

Reference: For more information, see Section 10.2.

Database redundancy Keep redundant copies of certain files or partitions of databases that are, for example, updated overnight by batch jobs. Rather than using shadow sets, which maintain a complete copy of the entire disk, it might be sufficient to maintain a backup copy of selected files or databases on another disk or even on a standby tape.
Newer devices Protect against failure by choosing newer devices. Typically, newer devices provide improved reliability and mean time between failures (MTBF). Newer controllers also improve reliability by employing updated chip technologies.
Comprehensive backup strategies Frequent and regular backups are the most effective way to ensure the availability of your data.

Reference: For information about Fibre Channel tape support, see Section 7.5. For information about backup strategies and OpenVMS Backup, refer to the OpenVMS System Manager's Manual. For information about additional backup software and solutions, visit: http://h18006.www1.hp.com/storage/tapestorage.html and http://h71000.www7.hp.com/openvms/storage.html.
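
As an illustration of the volume shadowing entry above, the following MOUNT command creates (or mounts) a two-member shadow set. It is only a sketch; the virtual unit name (DSA42), the member device names, and the volume label are placeholders:

    $ ! Mount a two-member shadow set as virtual unit DSA42:
    $ MOUNT/SYSTEM DSA42: /SHADOW=($4$DKA100:, $4$DKB200:) DATA_DISK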

5.5 SAS Based Storage

SAS is a point-to-point architecture that transfers data to and from SCSI storage devices by using serial communication (one bit at a time).

5.5.1 Storage Devices

Dual-domain SAS adds a second domain to protect against a path failure in a single SAS domain. The additional domain uses an open port on an HP Smart Array controller that supports dual-domain SAS. The second port on the dual-domain capable Smart Array controller generates a unique identifier and can support its own domain.

The following SAS controllers are supported:

  • LSI 1068
  • LSI Logic 1068e

The following supported Smart Array controllers have a SAS backplane but cannot be considered SAS HBAs:

  • P400i
  • P411
  • P700m
  • P800

No external controllers are supported on SAS; you can connect only JBODs (such as the MSA60/70) and internal disks to a SAS HBA. However, the P700m can be connected to the MSA2000SA (the SAS version of the MSA2000).

5.6 SCSI-Based Storage

The Small Computer Systems Interface (SCSI) bus is a storage interconnect based on an ANSI industry standard. You can connect up to a total of 8 or 16 nodes (3 of which can be CPUs) to the SCSI bus.

5.6.1 Supported Devices

The following devices can connect to a single host or multihost SCSI bus:

  • RZ-series disks
  • HSZ storage controllers

The following devices can connect only to a single host SCSI bus:

  • EZ-series disks
  • RRD-series CD-ROMs
  • TZ-series tapes

5.7 Fibre Channel Based Storage

The Fibre Channel interconnect is a storage interconnect that is based on an ANSI industry standard.

5.7.1 Storage Devices

The HSG and HSV storage controllers can connect to a single host or to a multihost Fibre Channel interconnect. For more information about Fibre Channel hardware support, see Section 7.2.

5.8 Host-Based Storage

Host-based storage devices can be connected locally to OpenVMS Cluster member systems using local adapters. You can make this locally connected storage available to other OpenVMS Cluster members by configuring a node as an MSCP server.

You can use local adapters to connect each disk to two access paths (dual ports). Dual porting allows automatic failover of disks between nodes.
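
For example, MSCP serving is typically enabled through system parameters. The following MODPARAMS.DAT entries and AUTOGEN invocation are a hedged sketch; the parameter values shown are illustrative, not recommendations:

    ! In SYS$SYSTEM:MODPARAMS.DAT:
    MSCP_LOAD = 1           ! load the MSCP server at system startup
    MSCP_SERVE_ALL = 1      ! serve all locally attached disks to the cluster

    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK   ! apply the new parameters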

5.8.1 Internal Buses

Locally connected storage devices attach to one of the following internal buses:

  • PCI
  • PCI-X
  • PCI-Express
  • EISA
  • ISA
  • XMI
  • SCSI
  • TURBOchannel
  • Futurebus+

For more information about the buses supported, see the HP OpenVMS I/O User's Reference Manual.

5.8.2 Local Adapters

Following is a list of local adapters and their bus types:

  • KGPSA (PCI)
  • KZPSM (PCI)
  • KZPDA (PCI)
  • KZPSC (PCI)
  • KZPAC (PCI)
  • KZESC (EISA)
  • KZMSA (XMI)
  • PB2HA (EISA)
  • PMAZB (TURBOchannel)
  • PMAZC (TURBOchannel)
  • KDM70 (XMI)
  • KDB50 (VAXBI)
  • KDA50 (Q-bus)

For the list of supported internal buses and local adapters, see the Software Product Description.

