
HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations



2.2.4 System Management Tools from OpenVMS Partners

OpenVMS Partners offer a wide selection of tools to meet diverse system management needs, as shown in Table 2-5. The types of tools are described in the following list:

  • Enterprise management
    Enables monitoring and management of a heterogeneous environment.
  • Schedule managers
    Enable specific actions to be triggered at determined times, including repetitive and periodic activities, such as nightly backups (see the sketch following this list).
  • Event managers
    Monitor a system and report occurrences and events that may require an action or that may indicate a critical or alarming situation, such as low memory or an attempted security break-in.
  • Console managers
    Enable a remote connection to and emulation of a system console so that system messages can be displayed and commands can be issued.
  • Performance managers
    Monitor system performance by collecting and analyzing data to allow proper tailoring and configuration of system resources. Performance managers may also collect historical data for capacity planning.
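
For reference, the following DCL fragment is a minimal sketch of the kind of repetitive, time-based task that a schedule manager automates: submitting a nightly backup procedure as a batch job. The queue, log file, and procedure name (NIGHTLY_BACKUP.COM) are illustrative assumptions, not part of any partner product.

$ ! Submit an assumed nightly backup procedure to run at 01:00 tomorrow.
$ SUBMIT /QUEUE=SYS$BATCH /AFTER="TOMORROW+01:00:00" /RESTART -
  /LOG_FILE=SYS$MANAGER:NIGHTLY_BACKUP.LOG SYS$MANAGER:NIGHTLY_BACKUP.COM

A schedule manager performs this kind of submission automatically, adds calendar logic, and tracks completion status across the cluster.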

Table 2-5 System Management Products from OpenVMS Partners
Business Partner | Application | Type or Function
Appmind | HP OpenVMS System Management Agent | Enterprise management
BMC | Patrol Perform & Predict | Performance manager
BMC | Patrol for OpenVMS | Event manager
BMC | Control "M" | Scheduling manager
Computer Associates | Unicenter Performance Management for OpenVMS | Performance manager
Computer Associates | Unicenter Console Management for OpenVMS | Console manager
Computer Associates | Unicenter Job Management for OpenVMS | Schedule manager
Computer Associates | Unicenter System Watchdog for OpenVMS | Event manager
Computer Associates | Unicenter TNG | Package of various products
Heroix/Itheon | RoboMon | Event manager
Heroix/Itheon | RoboCentral | Console manager
ISE | Schedule | Schedule manager
LEGATO | NetWorker | Backup solution
MVP System, Inc. | JAMS | Schedule manager
PointSecure | System Detective | AO/AS OpenVMS security
RAXCO | Perfect Cache | Storage performance
RAXCO | Perfect Disk | Storage management
TECsys Development Inc. | ConsoleWorks | Console manager

For current information about OpenVMS Partners and the tools they provide, access the OpenVMS web site:


http://www.hp.com/go/openvms

2.2.5 Other Configuration Aids

In addition to these utilities and partner products, several commands are available that allow the system manager to set parameters on HSC, HSJ, HSD, HSZ, HSG, and RF subsystems to help configure the system. See the appropriate hardware documentation for more information.


Chapter 3
Choosing OpenVMS Cluster Systems

This chapter provides information to help you select systems for your OpenVMS Cluster to satisfy your business and application requirements.

3.1 Alpha, VAX, and HP Integrity Systems

An OpenVMS Cluster can include systems running OpenVMS Alpha, OpenVMS VAX, or OpenVMS I64, or one of the following two-architecture combinations:

  • OpenVMS Alpha and OpenVMS VAX
  • OpenVMS Alpha and OpenVMS I64

HP provides a full range of systems for the Alpha and VAX architectures. HP supports OpenVMS I64 on several HP Integrity server models and will qualify it on additional models in the future.

  • OpenVMS Alpha operating system
    Based on a 64-bit RISC (reduced instruction set computing) architecture, OpenVMS Alpha provides industry-leading price/performance benefits with standard I/O subsystems for flexibility and expansion.
  • OpenVMS VAX operating system
    Based on a 32-bit CISC (complex instruction set computing) architecture, OpenVMS VAX provides high CISC performance and a rich, powerful instruction set. OpenVMS VAX also supports a wide variety of standard I/O subsystems.
  • HP OpenVMS I64 operating system
    Based on the Intel® Itanium® architecture, OpenVMS I64 provides the price/performance, reliability, and scalability benefits of OpenVMS on the industry-standard HP Integrity server systems.
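
When planning or auditing a mixed-architecture cluster, it is often useful to confirm the architecture and hardware model of each member. The following DCL lines are a minimal sketch using the F$GETSYI lexical function on the local node; the symbol names are illustrative only.

$ ! Report the local node's name, hardware model, and architecture.
$ NODE  = F$GETSYI("NODENAME")
$ MODEL = F$GETSYI("HW_NAME")       ! Hardware model string
$ ARCH  = F$GETSYI("ARCH_NAME")     ! "Alpha", "VAX", or "IA64"
$ WRITE SYS$OUTPUT "''NODE': ''MODEL' (''ARCH')"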

3.2 Types of Systems

Alpha, VAX, and HP Integrity systems span a range of computing environments, including:

  • Workstations
  • Low-end systems
  • Midrange systems
  • Enterprise systems

3.3 Choosing Systems

Your choice of systems depends on your business, your application needs, and your budget. With a high-level understanding of systems and their characteristics, you can make better choices.

Table 3-1 is a comparison of recently shipped OpenVMS Cluster systems. While a cluster can provide services to Windows and MS-DOS laptop and personal computers, this topic is not discussed extensively in this manual. For more information about configuring a cluster to serve PCs and laptops, consult your HP representative.

Table 3-1 System Types
Workstations
  Useful for users who require their own systems with high processor performance. Examples include users running mechanical computer-aided design, scientific analysis, and data-reduction and display applications. Workstations offer the following features:
  • Lower cost than midrange and enterprise systems
  • Small footprint
  • Useful for modeling and imaging
  • 2D and 3D graphics capabilities
  Examples: AlphaStation DS10/XP900, AlphaStation DS20E, AlphaStation DS25, AlphaStation ES40

Low-end systems
  Low-end systems offer the following capabilities:
  • High processor and I/O performance
  • Support for a moderate number of users and client PCs
  • Expandability and flexibility
  Examples: AlphaServer DS10, AlphaServer DS10L, AlphaServer DS20E, AlphaServer DS25, HP Integrity rx1600-2 server, HP Integrity rx2600-2 server, HP Integrity rx4640-8 server

Midrange systems
  Useful for midrange office computing. Midrange systems offer the following capabilities:
  • High processor and I/O performance
  • Support for a moderate number of users, client PCs, and workstations
  Examples: AlphaServer ES40, AlphaServer ES45, AlphaServer ES47, AlphaServer ES80

Enterprise systems
  Useful for large-capacity configurations and highly available technical and commercial applications. Enterprise systems have a high degree of expandability and flexibility and offer the following features:
  • Highest CPU and I/O performance
  • Ability to support thousands of terminal users, hundreds of PC clients, and up to 95 workstations
  Examples: AlphaServer GS60, GS80, GS140, GS160, GS320, GS1280

Note

The rx1600-2 and the rx2600-2 systems were formerly named the rx1600 and the rx2600. They are the same systems. The suffix "-2" indicates the maximum number of processors supported in the system.

3.4 Scalability Considerations

When you choose a system based on scalability, consider the following:

  • Maximum processor capacity
  • Maximum memory capacity
  • Maximum storage capacity

The OpenVMS environment offers a wide range of ways to grow and expand the processing capabilities of a data center, including the following:

  • Many Alpha, VAX, and HP Integrity systems can be expanded to include additional memory, processors, or I/O subsystems.
  • You can add systems to your OpenVMS Cluster at any time to support an increased workload. The vast range of systems, from small workstations to high-end multiprocessor systems, can interconnect and be reconfigured easily to meet growing needs.
  • You can add storage to your OpenVMS Cluster system by increasing the quantity and speed of disks, CD-ROM devices, and tapes.
    Reference: For more information about storage devices, see Chapter 5.

Reference: For more information about scalability, see Chapter 10.
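
Before expanding an existing member, you can check its current processor and memory configuration from DCL. The following is a minimal sketch using F$GETSYI item codes on the local node; interactively, the SHOW CPU and SHOW MEMORY commands report similar information.

$ ! Report active CPU count and physical memory on the local node.
$ CPUS      = F$GETSYI("ACTIVECPU_CNT")
$ PAGESIZE  = F$GETSYI("PAGE_SIZE")      ! Bytes per memory page
$ MEM_PAGES = F$GETSYI("MEMSIZE")        ! Physical memory, in pages
$ ! Divide pages by pages-per-megabyte to avoid 32-bit overflow.
$ MEM_MB    = MEM_PAGES / ((1024 * 1024) / PAGESIZE)
$ WRITE SYS$OUTPUT "Active CPUs:     ''CPUS'"
$ WRITE SYS$OUTPUT "Physical memory: ''MEM_MB' MB"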

3.5 Availability Considerations

An OpenVMS Cluster system is a highly integrated environment in which multiple systems share access to resources. This resource sharing increases the availability of services and data. OpenVMS Cluster systems also offer failover mechanisms that are transparent and automatic, and require little intervention by the system manager or the user.

Reference: See Chapter 8 and Chapter 9 for more information about these failover mechanisms and about availability.
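
One way to observe membership changes and failover as they happen is a continuously updating SHOW CLUSTER display. The following is a minimal sketch; VOTES and QUORUM are standard SHOW CLUSTER fields, and the 5-second interval is an arbitrary choice.

$ ! Continuously updated display of cluster members, refreshed every 5 seconds.
$ SHOW CLUSTER /CONTINUOUS /INTERVAL=5
Command> ADD VOTES
Command> ADD QUORUM

If a member fails or is removed, the display reflects the change in membership and any resulting quorum adjustment while the surviving members continue to provide service.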

3.6 Performance Considerations

The following factors affect the performance of systems:

  • Applications and their performance requirements
  • The number of users that the system must support
  • The type of storage subsystem that you require

With these requirements in mind, compare the specifications for processor performance, I/O throughput, memory capacity, and disk capacity in the systems that interest you.
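
The MONITOR utility supplied with OpenVMS gives a first view of these factors on a running system; the partner performance managers listed in Table 2-5 collect and analyze the same kinds of data over longer periods. A minimal sketch, with an arbitrary 10-second sampling interval and an illustrative output file name:

$ ! Sample overall system activity (CPU, memory, I/O).
$ MONITOR SYSTEM /INTERVAL=10
$ ! Sample disk I/O rates to gauge the storage subsystem.
$ MONITOR DISK /INTERVAL=10
$ ! Record samples to a file for later analysis or capacity planning.
$ MONITOR SYSTEM /RECORD=PERF_SAMPLE.DAT /INTERVAL=10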

3.7 System Specifications

The HP web site provides ordering and configuring information for workstations and servers. It also contains detailed information about storage devices, printers, and network application support.

To access the HP web site, use the following URL:


http://www.hp.com/


Chapter 4
Choosing OpenVMS Cluster Interconnects

An interconnect is a physical path that connects computers to other computers, and to storage subsystems. OpenVMS Cluster systems support a variety of interconnects (also referred to as buses) so that members can communicate with each other and with storage, using the most appropriate and effective method available.

The software that enables OpenVMS Cluster systems to communicate over an interconnect is the System Communications Services (SCS). An interconnect that supports node-to-node SCS communications is called a cluster interconnect. An interconnect that provides node-to-storage connectivity within a cluster is called a shared-storage interconnect. Some interconnects, such as CI and DSSI, can serve as both cluster and storage interconnects.

OpenVMS supports the following types of interconnects:

  • Cluster interconnects (node to node only)
    • Ethernet (Alpha, VAX, and I64)
    • Fast Ethernet and Gigabit Ethernet (Alpha and I64)
    • Asynchronous transfer mode (ATM) (Alpha only)
    • FDDI (Fiber Distributed Data Interface) (VAX and Alpha)
    • MEMORY CHANNEL (Alpha only)
    • Shared Memory CI (SMCI) (Galaxy instance to Galaxy instance) (Alpha only)
  • Shared-storage interconnects (node to storage only)
    • Fibre Channel (Alpha only)
    • SCSI (Small Computer Systems Interface) (Alpha only and limited to older adapters)
  • Both node-to-node and node-to-storage interconnects
    • CI (computer interconnect) (VAX and Alpha)
    • DSSI (Digital Storage Systems Interconnect) (VAX and Alpha)

Note

SMCI is unique to OpenVMS Galaxy instances. For more information about SMCI and Galaxy configurations, refer to the HP OpenVMS Alpha Partitioning and Galaxy Guide.

4.1 Characteristics

The interconnects described in this chapter share some general characteristics. Table 4-1 describes these characteristics.

Table 4-1 Interconnect Characteristics
Throughput
  The quantity of data transferred across the interconnect. Some interconnects require more processor overhead than others. For example, Ethernet and FDDI interconnects require more processor overhead than do CI or DSSI. Larger packet sizes allow higher data-transfer rates (throughput) than do smaller packet sizes.

Cable length
  Interconnects range in length from 3 m to 40 km.

Maximum number of nodes
  The number of nodes that can connect to an interconnect varies among interconnect types. Be sure to consider this when configuring your OpenVMS Cluster system.

Supported systems and storage
  Each OpenVMS Cluster node and storage subsystem requires an adapter to connect the internal system bus to the interconnect. First consider the storage and processor I/O performance, then the adapter performance, when choosing an interconnect type.

4.2 Comparison of Interconnect Types

Table 4-2 shows key statistics for a variety of interconnects.

Table 4-2 Comparison of Cluster Interconnect Types
Interconnect | Maximum Throughput (Mb/s) | Hardware-Assisted Data Link (1) | Storage Connection | Topology | Maximum Nodes per Cluster | Maximum Length

General-purpose
ATM | 155/622 | No | MSCP served | Radial to a switch | 96 (2) | 2 km (3)/300 m (3)
Ethernet/Fast/Gigabit | 10/100/1000 | No | MSCP served | Linear or radial to a hub or switch | 96 (2) | 100 m (4)/100 m (4)/550 m (3)
FDDI | 100 | No | MSCP served | Dual ring to a tree, radial to a hub or switch | 96 (2) | 40 km (5)
CI | 140 | Yes | Direct and MSCP served | Radial to a hub | 32 (6) | 45 m
DSSI | 32 | Yes | Direct and MSCP served | Bus | 8 (7) | 6 m (8)

Shared-storage only
Fibre Channel | 1000 | No | Direct (9) | Radial to a switch | 96 (2) | 10 km (10)/100 km (11)
SCSI | 160 | No | Direct (9) | Bus or radial to a hub | 8-16 (12) | 25 m

Node-to-node (SCS traffic only)
MEMORY CHANNEL | 800 | No | MSCP served | Radial | 4 | 3 m

(1) Hardware-assisted data link reduces the processor overhead.
(2) OpenVMS Cluster computers.
(3) Based on multimode fiber (MMF). Longer distances can be achieved by bridging between this interconnect and WAN interswitch links using common carriers such as ATM, DS3, and so on.
(4) Based on unshielded twisted-pair wiring (UTP). Longer distances can be achieved by bridging between this interconnect and WAN interswitch links (ISLs), using common carriers such as ATM, DS3, and so on.
(5) Based on single-mode fiber, point-to-point link. Longer distances can be achieved by bridging between FDDI and WAN interswitch links (ISLs) using common carriers such as ATM, DS3, and so on.
(6) Up to 16 OpenVMS Cluster computers; up to 31 HSJ controllers.
(7) Up to 4 OpenVMS Cluster computers; up to 7 storage devices.
(8) DSSI cabling lengths vary based on cabinet cables.
(9) Direct-attached SCSI and Fibre Channel storage can be MSCP served over any of the general-purpose cluster interconnects.
(10) Based on single-mode fiber, point-to-point link.
(11) Support for longer distances (up to 100 km) based on inter-switch links (ISLs) using single-mode fiber. In addition, DRM configurations provide longer distance ISLs using the Open Systems Gateway and Wave Division Multiplexors.
(12) Up to 3 OpenVMS Cluster computers, up to 4 with the DWZZH-05 and fair arbitration; up to 15 storage devices.
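
Note that throughput in Table 4-2 is expressed in megabits per second (Mb/s), whereas storage throughput is commonly quoted in megabytes per second (MB/s). The following DCL fragment is a minimal sketch of the conversion (divide by 8) for two interconnects from the table; the symbol names are illustrative only.

$ ! Approximate peak throughput in MB/s (values in Mb/s taken from Table 4-2).
$ GBE_MB = 1000 / 8     ! Gigabit Ethernet
$ CI_MB  = 140 / 8      ! CI
$ WRITE SYS$OUTPUT "Gigabit Ethernet: approx. ''GBE_MB' MB/s peak"
$ WRITE SYS$OUTPUT "CI:               approx. ''CI_MB' MB/s peak"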

4.3 Multiple Interconnects

You can use multiple interconnects to achieve the following benefits (a brief monitoring sketch follows this list):

  • Failover
    If one interconnect or adapter fails, the node communications automatically move to another interconnect.
  • MSCP server load balancing
    In a multiple MSCP server configuration, an OpenVMS Cluster performs load balancing to automatically choose the best path. This reduces the chances that a single adapter could cause an I/O bottleneck. Depending on your configuration, multiple paths from one node to another node may transfer more information than would a single path.
    Reference: See Section 10.7.3 for an example of dynamic MSCP load balancing.
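
To see which interconnect paths SCS is using between nodes, and to watch communication continue on a surviving path after a failure, you can add the circuit and connection classes to a SHOW CLUSTER display. A minimal sketch:

$ ! Continuous cluster display with one line per SCS circuit.
$ ! Each circuit represents an SCS path between a local adapter (port) and a remote node.
$ SHOW CLUSTER /CONTINUOUS
Command> ADD CIRCUITS
Command> ADD CONNECTIONS

With multiple interconnects configured, additional circuits (or, for LAN interconnects, additional channels within a circuit) carry the node-to-node traffic, and communication continues on the remaining paths if one fails.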

4.4 Mixed Interconnects

You can use two or more different types of interconnects in an OpenVMS Cluster system. You can use different types of interconnects to combine the advantages of each type and to expand your OpenVMS Cluster system. For example, an Ethernet cluster that requires more storage can expand with the addition of Fibre Channel, SCSI, or CI connections.

4.5 Interconnects Supported by Alpha, VAX, and HP Integrity Systems

Table 4-3 shows the OpenVMS Cluster interconnects supported by Alpha, VAX, and HP Integrity systems.

You can also refer to the most recent OpenVMS Cluster Software Product Description (SPD) for the latest information on supported interconnects.

Table 4-3 System Support for Cluster (Including Shared Storage) Interconnects
Systems CI DSSI FDDI Ethernet ATM MEMORY CHANNEL SCSI Fibre Channel
AlphaServer ES47, ES80, GS1280 X   X X X X X X
AlphaServer GS160, GS320 X   X X X X X X
AlphaServer GS60, GS80, GS140 X   X 1 X X X X X
AlphaServer ES40 X X X X X X X X
AlphaServer ES45 X   X X X X X X
AlphaServer DS25, DS20E, DS10L, DS10 X 2   X X X X X X
AlphaStation ES40 X X X X X X 3 X  
AlphaStation DS25, DS20E X 2   X X X   X  
AlphaStation DS10/XP900   X X X X   X  
AlphaStation XP1000     X X X   X  
AlphaServer 8400, 8200 X X X X 1 X X X X
AlphaServer 4100, 2100, 2000 X X X 1 X 1 X X X X 4
AlphaServer 1000A   X X X X 1 X X X 5
AlphaServer 400   X X X 1     X  
DEC 7000/10000 X X X 1 X        
DEC 4000   X X X 1        
DEC 3000     X 1 X 1     X  
DEC 2000     X X 1        
HP Integrity rx1600-2 server       X        
HP Integrity rx2600-2 server       X        
HP Integrity rx4640-8 server       X        
VAX 6000/7000/10000 X X X X        
VAX 4000, MicroVAX 3100   X X X 1        
VAXstation 4000     X X 1        

1. Able to boot over the interconnect as a satellite node.
2. Support not available for AlphaServer DS25 or AlphaStation DS25.
3. Support for MEMORY CHANNEL Version 2.0 hardware only.
4. Support on AlphaServer 4100 only.
5. Console support not available.

