
HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations



4.11 LAN Interconnects

Ethernet (including Fast Ethernet and Gigabit Ethernet), ATM, and FDDI are LAN-based interconnects. OpenVMS supports LAN emulation over ATM.

These interconnects provide the following features:

  • Single-path connections within an OpenVMS Cluster system and a local area network (LAN)
  • Support for multiple paths using multiple adapters
  • Long-distance interconnect
    In addition to the maximum length specific to each LAN type, as shown in Table 4-2, longer distances can be achieved by bridging between LANs and WAN interswitch links.
  • Extended physical distribution of nodes
  • Support for multiple clusters (up to 96 nodes each) on a single interconnect

The LANs that are supported as OpenVMS Cluster interconnects on each OpenVMS platform (Alpha, VAX, and I64) are shown in Table 4-7.

Table 4-7 LAN Interconnect Support for OpenVMS Clusters
LAN Type Platform
Ethernet VAX, Alpha, I64
Fast Ethernet Alpha, I64
Gigabit Ethernet Alpha, I64
FDDI VAX, Alpha
ATM Alpha

Following the discussion of multiple LAN adapters, information specific to each supported LAN interconnect (Ethernet, ATM, and FDDI) is provided.

4.11.1 Multiple LAN Adapters

Multiple LAN adapters are supported. The adapters can be for different LAN types or for different adapter models for the same LAN type.

Multiple LAN adapters can be used to provide the following:

  • Increased node-to-node throughput by distributing the load across multiple LAN paths.
  • Increased availability of node-to-node LAN communications.

4.11.1.1 Multiple LAN Path Load Distribution

When multiple node-to-node LAN paths are available, the OpenVMS Cluster software chooses the set of paths to use based on the following criteria, which are evaluated in strict precedence order:

  1. Recent history of packet loss on the path
    Paths that have recently been losing packets at a high rate are termed lossy and will be excluded from consideration. Channels that have an acceptable loss history are termed tight and will be further considered for use.
  2. Priority
    Management priority values can be assigned to both individual LAN paths and to local LAN devices. A LAN path's priority value is the sum of these priorities. Only tight LAN paths with a priority value equal to, or one less than, the highest priority value of any tight path will be further considered for use.
  3. Maximum packet size
    Tight channels of equal priority whose maximum packet size matches the largest maximum packet size among those channels will be further considered for use.
  4. Equivalent latency
    LAN paths that meet the preceding criteria will be used if their latencies (computed network delay) are closely matched to that of the fastest such channel. The delay of each LAN path is measured using cluster communications traffic on that path. If a LAN path is excluded from cluster communications use because it does not meet the preceding criteria, its delay will be measured at intervals of a few seconds to determine if its delay, or packet loss rate, has improved enough so that it then meets the preceding criteria.

Packet transmissions are distributed in round-robin fashion across all communication paths between local and remote adapters that meet the preceding criteria.
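
The following Python sketch is not the PEdriver implementation; it simply models the selection rules described above. All names, field values, and the latency tolerance factor are illustrative assumptions.

    # A minimal sketch of the LAN path-selection criteria, in order of precedence.
    from dataclasses import dataclass
    from itertools import cycle

    @dataclass
    class LanPath:
        name: str
        lossy: bool          # recent packet-loss history above the acceptable rate
        priority: int        # sum of path priority and local device priority
        max_packet: int      # maximum packet size usable on this path
        delay_us: float      # measured network delay, in microseconds

    def eligible_paths(paths, delay_tolerance=1.3):
        # 1. Exclude lossy paths; only "tight" paths are considered further.
        tight = [p for p in paths if not p.lossy]
        if not tight:
            return []
        # 2. Keep paths whose priority is equal to, or one less than, the highest.
        top = max(p.priority for p in tight)
        tight = [p for p in tight if p.priority >= top - 1]
        # 3. Keep paths whose maximum packet size equals the largest among them.
        biggest = max(p.max_packet for p in tight)
        tight = [p for p in tight if p.max_packet == biggest]
        # 4. Keep paths whose delay is close to the fastest path's delay
        #    (the tolerance factor here is an assumed placeholder).
        fastest = min(p.delay_us for p in tight)
        return [p for p in tight if p.delay_us <= fastest * delay_tolerance]

    # Transmissions are then distributed round-robin across the selected paths.
    selected = eligible_paths([
        LanPath("EWA0->remote", lossy=False, priority=4, max_packet=7552, delay_us=80.0),
        LanPath("EWB0->remote", lossy=False, priority=4, max_packet=7552, delay_us=90.0),
        LanPath("EWC0->remote", lossy=True,  priority=5, max_packet=1518, delay_us=60.0),
    ])
    next_path = cycle(selected)   # next(next_path) picks the path for each transmission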

4.11.1.2 Increased LAN Path Availability

Because LANs are well suited to spanning long distances, you may want intersite links to provide high availability in addition to adequate throughput. You can do this by configuring critical nodes with multiple LAN adapters, each connected to a different intersite LAN link.

A common cause of intersite link failure is physical damage to the link. You can guard against this with path diversity, that is, by physically separating the routes of the multiple intersite links. Path diversity makes it unlikely that a single incident will disable all intersite links at once.

4.11.2 Configuration Guidelines for LAN-Based Clusters

The following guidelines apply to all LAN-based OpenVMS Cluster systems:

  • OpenVMS Alpha, OpenVMS VAX, and OpenVMS I64 systems can be configured with any mix of LAN adapters supported on those architectures, as shown in Table 4-7.
  • All LAN paths used for OpenVMS Cluster communication must operate with a minimum of 10 Mb/s throughput and low latency. You must use translating bridges or switches when connecting nodes on one type of LAN to nodes on another LAN type. LAN segments can be bridged to form an extended LAN.
  • Multiple, distinct OpenVMS Cluster systems can be configured onto a single, extended LAN. OpenVMS Cluster software performs cluster membership validation to ensure that systems join the correct LAN OpenVMS cluster.

4.11.3 Ethernet (10/100) and Gigabit Ethernet Advantages

The Ethernet (10/100) interconnect is typically the lowest cost of all OpenVMS Cluster interconnects.

Gigabit Ethernet interconnects offer the following advantages in addition to the advantages listed in Section 4.11:

  • Very high throughput (1 Gb/s)
  • Support of jumbo frames (7552 bytes per frame) for cluster communications

4.11.4 Ethernet (10/100) and Gigabit Ethernet Throughput

The Ethernet technology offers a range of baseband transmission speeds:

  • 10 Mb/s for standard Ethernet
  • 100 Mb/s for Fast Ethernet
  • 1 Gb/s for Gigabit Ethernet

Ethernet adapters do not provide hardware assistance for cluster protocol processing, so processor overhead is higher than for CI or DSSI.

Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.
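
As a rough illustration only, the following sketch estimates the bandwidth left for cluster traffic on a shared segment; the link speed and general-traffic figures are assumptions, not measurements from any configuration.

    # Back-of-envelope sizing of a shared Ethernet segment.
    def remaining_cluster_bandwidth(link_mbps, general_traffic_mbps):
        """Bandwidth left for OpenVMS Cluster traffic after general LAN use."""
        return max(link_mbps - general_traffic_mbps, 0)

    # Example: a Fast Ethernet segment carrying an assumed 30 Mb/s of PC and printer traffic.
    print(remaining_cluster_bandwidth(100, 30))   # -> 70 (Mb/s available for cluster use)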

Reference: For LAN configuration guidelines, see Section 4.11.2.

4.11.5 Ethernet Adapters and Buses

The following Ethernet adapters and their internal buses are supported in an OpenVMS Cluster configuration:

  • DEFTA-xx (TURBOchannel)
  • DE2xx (ISA)
  • DE425 (EISA)
  • DE435 (PCI)
  • TULIP (PCI)
  • KZPSM (PCI)
  • DE450 (PCI)
  • DE500-xx (PCI)
  • DE600-xx (PCI)
  • DE602-xx (PCI)
  • DEGPA-xx (PCI)
  • DEGXA (PCI)
  • BCM5703 (PCI, embedded)
  • P2SE+ (PCI)
  • Trifecta (PCI)
  • 3COM (PCMCIA)
  • DEMNA (XMI)
  • TGEC (embedded)
  • COREIO (TURBOchannel)
  • PMAD (TURBOchannel)
  • DE422 (EISA)
  • DEBNI (VAXBI)
  • DEBNA (VAXBI)
  • SGEC (embedded)
  • DESVA (embedded)
  • DESQA (Q-bus)
  • DELQA (Q-bus)

Reference: For detailed information about the Ethernet adapters supported on each AlphaServer system or HP Integrity system, refer to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, select the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

For the Ethernet adapters supported on HP Integrity systems, refer to the following HP Integrity server web page:


http://www.hp.com/products1/servers/integrity/

Next, select the system of interest. Under Product Information, select Supplies and Accessories, which includes the adapters supported on that system.

4.11.6 Ethernet-to-FDDI Bridges and Switches

You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called 10/100 bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.

Reference: See Figure 10-21 for an example of these bridges.

You can use switches to isolate traffic and to aggregate bandwidth, which can result in greater throughput.

4.11.7 Configuration Guidelines for Gigabit Ethernet Clusters (Alpha and I64)

Use the following guidelines when configuring systems in a Gigabit Ethernet cluster:

  • Two-node Gigabit Ethernet clusters do not require a switch. They can be connected point to point, as shown in Figure 4-1.

    Figure 4-1 Point-to-Point Gigabit Ethernet OpenVMS Cluster


  • Most Gigabit Ethernet switches can be configured with Gigabit Ethernet or a combination of Gigabit Ethernet and Fast Ethernet (100 Mb/s).
  • Each node can have a single connection to the switch or can be configured with multiple paths, thereby increasing availability, as shown in Figure 4-2.

    Figure 4-2 Switched Gigabit Ethernet OpenVMS Cluster


  • Support for jumbo frames (7552 bytes each) is available starting with OpenVMS Version 7.3. (Prior to the introduction of jumbo-frame support, the only frame size supported for cluster communications was the standard 1518-byte maximum Ethernet frame size.) A small arithmetic sketch of the effect of frame size follows this list.
  • The DEGPA cannot be used as the boot device, but satellites can be booted over standard 10/100 Ethernet network adapters configured on a Gigabit switch.
  • The DEGXA can be used as a boot device.
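
The following sketch illustrates why jumbo frames help: fewer frames, and therefore fewer per-frame processing events, are needed for the same amount of cluster data. The 18-byte Ethernet header and FCS overhead used here is an assumption for illustration.

    import math

    def frames_needed(payload_bytes, frame_size, overhead=18):
        return math.ceil(payload_bytes / (frame_size - overhead))

    one_megabyte = 1_048_576
    print(frames_needed(one_megabyte, 1518))   # standard frames: about 700
    print(frames_needed(one_megabyte, 7552))   # jumbo frames:    about 140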

4.11.8 ATM Advantages (Alpha Only)

ATM offers the following advantages, in addition to those listed in Section 4.11:

  • High-speed transmission, up to 622 Mb/s
  • OpenVMS support for LAN emulation over ATM allows the following maximum frame sizes: 1516, 4544, and 9234 bytes.
  • LAN emulation over ATM provides the ability to create multiple emulated LANs over one physical ATM adapter. Each emulated LAN appears as a separate network. For more information, see the HP OpenVMS I/O User's Reference Manual.
  • An ATM switch that provides Quality of Service on a per-emulated-LAN basis can be used to favor cluster traffic over other protocols running on different emulated LANs. For more information, see the documentation for your ATM switch.

4.11.9 ATM Throughput

The ATM interconnect transmits up to 622 Mb/s. The adapter that supports this throughput is the DAPCA.

4.11.10 ATM Adapters

ATM adapters supported in an OpenVMS Cluster system and the internal buses on which they are supported are shown in the following list:

  • DAPBA (PCI)
  • DAPCA (PCI)
  • 351 (PCI)

4.12 Fiber Distributed Data Interface (FDDI) (Alpha and VAX)

FDDI is an ANSI standard LAN interconnect that uses fiber-optic or copper cable.

4.12.1 FDDI Advantages

FDDI offers the following advantages in addition to the LAN advantages listed in Section 4.11:

  • Combines high throughput and long distances between nodes
  • Supports a variety of topologies

4.12.2 FDDI Node Types

The FDDI standards define the following two types of nodes:

  • Stations --- The ANSI standard defines the single-attachment station (SAS) and the dual-attachment station (DAS), either of which can connect a node to the FDDI ring. It is advisable to attach stations to wiring concentrators and to attach the wiring concentrators to the dual FDDI ring, which makes the ring more stable.
  • Wiring concentrator --- The wiring concentrator (CON) provides a connection for multiple SASs or CONs to the FDDI ring. A DECconcentrator 500 is an example of this device.

4.12.3 FDDI Distance

FDDI limits the total fiber path to 200 km (125 miles). The maximum distance between adjacent FDDI devices is 40 km with single-mode fiber and 2 km with multimode fiber. In order to control communication delay, however, it is advisable to limit the maximum distance between any two OpenVMS Cluster nodes on an FDDI ring to 40 km.
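
The following sketch checks a proposed ring layout against the distance rules quoted above; the link list and fiber types are hypothetical examples.

    SINGLE_MODE_LIMIT_KM = 40
    MULTI_MODE_LIMIT_KM = 2
    TOTAL_PATH_LIMIT_KM = 200

    def check_ring(links):
        """links: list of (distance_km, fiber_type) for each adjacent-device hop."""
        problems = []
        for distance, fiber in links:
            limit = SINGLE_MODE_LIMIT_KM if fiber == "single-mode" else MULTI_MODE_LIMIT_KM
            if distance > limit:
                problems.append(f"{distance} km exceeds {limit} km for {fiber} fiber")
        if sum(d for d, _ in links) > TOTAL_PATH_LIMIT_KM:
            problems.append("total fiber path exceeds 200 km")
        return problems

    print(check_ring([(35, "single-mode"), (1.5, "multimode"), (3, "multimode")]))
    # -> ['3 km exceeds 2 km for multimode fiber']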

4.12.4 FDDI Throughput

The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times higher than that of Ethernet.

In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only FDDI nodes connected exclusively by FDDI can make use of large packets.

Because FDDI adapters do not provide processing assistance for OpenVMS Cluster protocols, more processing power is required than for CI or DSSI.

4.12.5 FDDI Adapters and Bus Types

Following is a list of supported FDDI adapters and the buses they support:

  • DEFPA (PCI)
  • DEFPZ (integral)
  • DEMFA (XMI)
  • DEFAA (Futurebus+)
  • DEFTA (TURBOchannel)
  • DEFEA (EISA)
  • DEFQA (Q-bus)

Reference: For detailed information about the adapters supported on each AlphaServer system, go to the OpenVMS web page at:


http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, select the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.12.6 Storage Servers for FDDI-Based Clusters

FDDI-based configurations use FDDI for node-to-node communication. The HS1xx and HS2xx families of storage servers provide FDDI-based storage access to OpenVMS Cluster nodes.


Chapter 5
Choosing OpenVMS Cluster Storage Subsystems

This chapter describes how to design a storage subsystem. The design process involves the following steps:

  1. Understanding storage product choices
  2. Estimating storage capacity requirements
  3. Choosing disk performance optimizers
  4. Determining disk availability requirements
  5. Understanding advantages and tradeoffs for:
    • CI-based storage
    • DSSI-based storage
    • SCSI-based storage
    • Fibre Channel-based storage
    • Host-based storage
    • LAN InfoServer

The rest of this chapter contains sections that explain these steps in detail.

5.1 Understanding Storage Product Choices

In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks helps you configure complex storage subsystems by choosing from the following modular elements:

  • Storage devices such as disks, tapes, CD-ROMs, and solid-state disks
  • Array controllers
  • Power supplies
  • Packaging
  • Interconnects
  • Software

5.1.1 Criteria for Choosing Devices

Consider the following criteria when choosing storage devices:

  • Supported interconnects
  • Capacity
  • I/O rate
  • Floor space
  • Purchase, service, and maintenance cost

5.1.2 How Interconnects Affect Storage Choices

One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.

In an OpenVMS Cluster system, the following storage devices and adapters can be connected to OpenVMS Cluster interconnects:

  • HSJ and HSC controllers (on CI)
  • HSD controllers and ISEs (on DSSI)
  • HSZ and RZ series (on SCSI)
  • HSG and HSV controllers (on Fibre Channel)
  • Local system adapters

Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.

Table 5-1 Interconnects and Corresponding Storage Devices
Storage Interconnect Storage Devices
CI HSJ and HSC controllers and SCSI storage
DSSI HSD controllers, ISEs, and SCSI storage
SCSI HSZ controllers and SCSI storage
Fibre Channel HSG and HSV controllers and SCSI storage
FDDI HSxxx controllers and SCSI storage

5.1.3 How Floor Space Affects Storage Choices

If the cost of floor space is high and you want to minimize the floor space used for storage devices, consider these options:
  • Choose disk storage arrays for high capacity with small footprint. Several storage devices come in stackable cabinets for labs with higher ceilings.
  • Choose high-capacity disks over high-performance disks.
  • Make it a practice to upgrade regularly to newer storage arrays or disks. As storage technology improves, storage devices are available at higher performance and capacity and reduced physical size.
  • Plan adequate floor space for power and cooling equipment.

5.2 Determining Storage Capacity Requirements

Storage capacity is the amount of space needed on storage devices to hold system, application, and user files. Estimating this capacity helps you determine the amount of storage needed for your OpenVMS Cluster configuration.

5.2.1 Estimating Disk Capacity Requirements

To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2. A simple arithmetic sketch of this sum follows the table.

Table 5-2 Estimating Disk Capacity Requirements
Software Component --- Description

OpenVMS operating system --- Estimate the number of blocks (see note 1) required by the OpenVMS operating system.

Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information.

Page, swap, and dump files --- Use AUTOGEN to determine the amount of disk space required for page, swap, and dump files.

Reference: The HP OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes.

Site-specific utilities and data --- Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files.

Application programs --- Estimate the space required for each application to be installed on your OpenVMS Cluster system, using information from the application suppliers.

Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use.

User-written programs --- Estimate the space required for user-written programs and their associated databases.

Databases --- Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database.

User data --- Estimate user disk-space requirements according to these guidelines:

  • Allocate from 10,000 to 100,000 blocks for each occasional user.
    An occasional user reads, writes, and deletes electronic mail; has few, if any, programs; and has little need to keep files for long periods.
  • Allocate from 250,000 to 1,000,000 blocks for each moderate user.
    A moderate user uses the system extensively for electronic communications, keeps information on line, and has a few programs for private use.
  • Allocate 1,000,000 to 3,000,000 blocks for each extensive user.
    An extensive user can require a significant amount of storage space for programs under development and data files, in addition to normal system use for electronic mail. This user may require several hundred thousand blocks of storage, depending on the number of projects and programs being developed and maintained.

Total requirements --- The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration.

Note 1: Storage capacity is measured in blocks. Each block contains 512 bytes.
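
The following sketch shows the arithmetic behind Table 5-2. Every block count is a hypothetical placeholder to be replaced with your own estimates; blocks are 512 bytes each.

    requirements = {
        "OpenVMS operating system":      700_000,
        "Page, swap, and dump files":    500_000,
        "Site-specific utilities/data":  150_000,
        "Application programs":          400_000,
        "User-written programs":         100_000,
        "Databases":                   1_200_000,
        # 20 moderate users at an assumed 500,000 blocks each
        "User data":            20 * 500_000,
    }

    total_blocks = sum(requirements.values())
    total_gb = total_blocks * 512 / 1024**3
    print(f"{total_blocks:,} blocks (~{total_gb:.1f} GB)")   # -> 13,050,000 blocks (~6.2 GB)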

5.2.2 Additional Disk Capacity Requirements

Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.

For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.
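
As a hedged illustration, the sketch below projects growth from an assumed current estimate and an assumed file-creation rate; both figures are placeholders for values observed on your own system.

    current_blocks = 13_050_000        # today's estimate, from the Table 5-2 arithmetic
    new_blocks_per_month = 250_000     # assumed rate of new-file creation
    planning_horizon_months = 24

    projected = current_blocks + new_blocks_per_month * planning_horizon_months
    print(f"Plan for roughly {projected:,} blocks of online storage")   # -> 19,050,000 blocks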

To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.

Planning for adequate backup storage capacity can make archiving procedures more effective and reduce the capacity requirements for online storage.

