
HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations



11.2.5 Summary: Single Versus Multiple System Disks

Use the information in Table 11-3 to determine whether you need a system disk for the entire OpenVMS Cluster or multiple system disks.

Table 11-3 Comparison of Single and Multiple System Disks
Single System Disk:

  • A node may have to wait longer for access to a file on the system disk.
  • Contention for a single resource increases.
  • Boot time for satellites increases.
  • Only one system disk to manage.
  • Less complex system management.
  • Lower hardware and software costs.
  • Lower cost of system management, because less time and experience are required to manage a single system disk.

Multiple System Disks:

  • A node does not have to wait for access to the system disk, and processor performance is faster.
  • Contention for a single resource decreases.
  • Boot time for satellites decreases.
  • More than one system disk to manage.
  • More complex system management, such as coordinating system parameters and files clusterwide.
  • Higher hardware and software costs, especially if disks are shadowed.
  • Higher cost of system management, because more time and experience are required to manage multiple system disks.

11.3 OpenVMS Cluster Environment Strategies

Depending on your processing needs, you can prepare either a common environment, in which all environment files are shared clusterwide, or a multiple environment, in which some files are shared clusterwide and others are accessible only by certain OpenVMS Cluster members.

The following are the most frequently used and manipulated OpenVMS Cluster environment files:

SYS$SYSTEM:SYSUAF.DAT
SYS$SYSTEM:NETPROXY.DAT
SYS$SYSTEM:VMSMAIL_PROFILE.DATA
SYS$SYSTEM:NETNODE_REMOTE.DAT
SYS$MANAGER:NETNODE_UPDATE.COM
SYS$SYSTEM:RIGHTSLIST.DAT
SYS$SYSTEM:QMAN$MASTER.DAT

Reference: For more information about managing these files, see OpenVMS Cluster Systems.

11.3.1 Common Environment

A common OpenVMS Cluster environment is an operating environment that is identical on all nodes in the OpenVMS Cluster. A common environment is easier to manage than multiple environments because you use a common version of each system file. The environment is set up so that:

  • All nodes run the same programs, applications, and utilities.
  • All users have the same type of accounts, and the same logical names are defined.
  • All users can have common access to storage devices and queues.
  • All users can log in to any node in the configuration and can work in the same environment as all other users.

The simplest and least expensive environment strategy is to have one system disk for the OpenVMS Cluster with all environment files on the same disk, as shown in Figure 11-1. The benefits of this strategy are:

  • Software products need to be installed only once.
  • All environment files are on the system disk and are easier to locate and manage.
  • Booting dependencies are clear.

11.3.2 Putting Environment Files on a Separate, Common Disk

For an OpenVMS Cluster in which every node shares the same system disk and environment, most common environment files are located in the SYS$SYSTEM directory.

However, you may want to move environment files to a separate disk so that you can improve OpenVMS Cluster performance. Because the environment files typically experience 80% of the system-disk activity, putting them on a separate disk decreases activity on the system disk. Figure 11-3 shows an example of a separate, common disk.

If you move environment files such as SYSUAF.DAT to a separate, common disk, SYSUAF.DAT will not be located in its default location of SYS$SYSTEM:SYSUAF.DAT.

Reference: See OpenVMS Cluster Systems for procedures to ensure that every node in the OpenVMS Cluster can access SYSUAF.DAT in its new location.
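
One common way to make relocated files visible clusterwide is to define system logical names in a startup procedure such as SYS$MANAGER:SYLOGICALS.COM. The following is a minimal sketch only; the disk and directory WORK1:[CLUSTER_COMMON] are placeholders, and OpenVMS Cluster Systems gives the complete, authoritative procedure:

  $! Example definitions in SYS$MANAGER:SYLOGICALS.COM (sketch only).
  $! WORK1:[CLUSTER_COMMON] is a placeholder for your common disk and directory.
  $ DEFINE/SYSTEM/EXEC SYSUAF     WORK1:[CLUSTER_COMMON]SYSUAF.DAT
  $ DEFINE/SYSTEM/EXEC NETPROXY   WORK1:[CLUSTER_COMMON]NETPROXY.DAT
  $ DEFINE/SYSTEM/EXEC RIGHTSLIST WORK1:[CLUSTER_COMMON]RIGHTSLIST.DAT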

11.3.3 Multiple Environments

Multiple environments can vary from node to node. You can set up an individual node or a subset of nodes to:

  • Provide multiple access according to the type of tasks users perform and the resources they use.
  • Share a set of resources that are not available on other nodes.
  • Perform specialized functions using restricted resources while other processors perform general timesharing work.
  • Allow users to work in environments that are specific to the node where they are logged in.

Figure 11-4 shows an example of a multiple environment.

Figure 11-4 Multiple-Environment OpenVMS Cluster


In Figure 11-4, the multiple-environment OpenVMS Cluster consists of two system disks: one for VAX nodes and one for Alpha nodes. The common disk contains environment files for each node or group of nodes. Although many OpenVMS Cluster system managers prefer the simplicity of a single (common) environment, duplicating environment files is necessary for creating multiple environments that do not share resources across every node. Each environment can be tailored to the types of tasks users perform and the resources they use, and the configuration can have many different applications installed.

Each of the four DSSI nodes has its own page and swap disk, offloading the Alpha and VAX system disks on the DSSI interconnect from page and swap activity. All of the disks are shadowed across the DSSI interconnects, which protects the disks if a failure occurs.

11.4 Additional Multiple-Environment Strategies

This section describes additional multiple-environment strategies, such as using multiple SYSUAF.DAT files and multiple queue managers.

11.4.1 Using Multiple SYSUAF.DAT Files

Most OpenVMS Clusters are managed with one user authorization (SYSUAF.DAT) file, but you can use multiple user authorization files to limit access for some users to certain systems. In this scenario, users who need access to all systems also need multiple passwords.

Be careful about security with multiple SYSUAF.DAT files. The OpenVMS VAX and OpenVMS Alpha operating systems do not support multiple security domains.

Reference: See OpenVMS Cluster Systems for the list of fields that need to be the same for a single security domain, including SYSUAF.DAT entries.

Because Alpha systems require higher process quotas, system managers often respond by creating multiple SYSUAF.DAT files. This is not an optimal solution. Multiple SYSUAF.DAT files are intended only to vary environments from node to node, not to increase process quotas. To increase process quotas, HP recommends that you keep a single SYSUAF.DAT file and use system parameters to override the process quotas in SYSUAF.DAT, thereby controlling resources for your Alpha systems.
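
As an illustration of that recommendation, the PQL_M* system parameters establish minimum values for process quotas and override smaller values stored in SYSUAF.DAT. The following is a hedged sketch of MODPARAMS.DAT entries for the Alpha nodes; the parameter choices and values are placeholders, not recommendations, and must be sized for your own workload:

  ! Illustrative additions to MODPARAMS.DAT on the Alpha nodes
  ! (placeholder values; size them for your own workload):
  MIN_PQL_MPGFLQUOTA = 50000     ! minimum page-file quota per process
  MIN_PQL_MWSEXTENT  = 16384     ! minimum working set extent (pagelets)
  MIN_PQL_MBYTLM     = 100000    ! minimum buffered I/O byte count limit

Run AUTOGEN afterward (for example, $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS) so that the new values are validated and written.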

11.4.2 Using Multiple Queue Managers

If the number of batch and print transactions on your OpenVMS Cluster is causing congestion, you can implement multiple queue managers to distribute the batch and print loads between nodes.

Every OpenVMS Cluster has only one QMAN$MASTER.DAT file. Multiple queue managers are defined through multiple *.QMAN$QUEUES and *.QMAN$JOURNAL files; for example, you can create separate queue managers for batch queues and print queues. Place each queue manager's pair of files on a different disk. If the QMAN$MASTER.DAT file itself becomes a point of contention, placing it on a solid-state disk can increase the number of batch and print transactions your OpenVMS Cluster can process.
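
The following is a hedged sketch of how an additional queue manager might be created; the manager name, node names, queue name, and device and directory specifications are placeholders, and the referenced manual gives the authoritative commands. (The location of QMAN$MASTER.DAT itself is controlled by the QMAN$MASTER logical name, normally defined in SYLOGICALS.COM.)

  $! Create a second queue manager whose *.QMAN$QUEUES and *.QMAN$JOURNAL
  $! files reside on DISK2 (placeholder device), failing over among the
  $! listed nodes:
  $ START/QUEUE/MANAGER/NEW_VERSION -
        /NAME_OF_MANAGER=PRINT_MANAGER -
        /ON=(NODEA::,NODEB::,*) DISK2:[QUEUE_FILES]
  $! Create a print queue that is controlled by the new queue manager:
  $ INITIALIZE/QUEUE/START/NAME_OF_MANAGER=PRINT_MANAGER -
        /ON=NODEA::LPA0: NODEA_PRINT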

Reference: See HP OpenVMS System Manager's Manual, Volume 1: Essentials for examples and commands to implement multiple queue managers.

11.5 Quorum Strategies

OpenVMS Cluster systems use a quorum algorithm to ensure synchronized access to storage. The quorum algorithm is a mathematical method for determining whether a majority of OpenVMS Cluster members exists so that they can "vote" on how resources can be shared across an OpenVMS Cluster system. The connection manager, which calculates quorum as a dynamic value, allows processing to occur only if a majority of the OpenVMS Cluster members are functioning.
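
As a rough sketch of the arithmetic (OpenVMS Cluster Systems gives the precise rules), the connection manager derives the quorum value from the EXPECTED_VOTES system parameter approximately as follows:

  quorum = (EXPECTED_VOTES + 2) / 2      (any fraction is truncated)

  For example, with three voting members holding one vote each,
  EXPECTED_VOTES is 3 and quorum is (3 + 2) / 2 = 2, so processing
  continues only while at least two votes are present.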

Quorum votes are contributed by:

  • Systems with the parameter VOTES set to a number greater than zero
  • A designated disk, called a quorum disk

Each OpenVMS Cluster system can include only one quorum disk. The disk cannot be a member of a shadow set, but it can be the system disk.

The connection manager knows about the quorum disk from "quorum disk watchers," which are any systems that have a direct, active connection to the quorum disk.

11.5.1 Quorum Strategy Options

At least two systems should have a direct connection to the quorum disk. This ensures that the quorum disk votes are accessible if one of the systems fails.

When you consider quorum strategies, you must decide under what failure circumstances you want the OpenVMS Cluster to continue. Table 11-4 describes four options from which to choose.

Table 11-4 Quorum Strategies
These strategies are mutually exclusive; choose only one.

  • Continue if the majority of the maximum "expected" nodes still remain.
    Give every node a vote and do not use a quorum disk. This strategy requires three or more nodes.
  • Continue with only one node remaining (of three or more nodes).
    This strategy requires a quorum disk. By increasing the quorum disk's votes to one less than the total votes from all systems (and by increasing the value of the EXPECTED_VOTES system parameter by the same amount), you can boot and run the cluster with only one node as a quorum disk watcher. This avoids having to wait until more than half of the voting systems are operational before you can start using the OpenVMS Cluster system.
  • Continue with only one node remaining (two-node OpenVMS Cluster).
    Give each node and the quorum disk a vote. The two-node OpenVMS Cluster is a special case of the previous alternative. By establishing a quorum disk, you can increase the availability of a two-node OpenVMS Cluster: such a configuration can maintain quorum and continue to operate if either the quorum disk or one node fails. Both nodes must be directly connected to storage (by CI, DSSI, SCSI, or Fibre Channel) so that both can be quorum disk watchers.
  • Continue with only critical nodes in the OpenVMS Cluster.
    Generally, this strategy gives servers votes and gives satellites none. It assumes three or more servers and no quorum disk.
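
For the two-node strategy with a quorum disk, for example, the voting parameters are typically set in MODPARAMS.DAT on each node and applied with AUTOGEN. A minimal sketch, in which the device name is a placeholder:

  ! Illustrative MODPARAMS.DAT entries on each of the two nodes
  ! ($1$DGA17 is a placeholder for your quorum disk):
  VOTES          = 1           ! each node contributes one vote
  EXPECTED_VOTES = 3           ! two nodes plus the quorum disk
  DISK_QUORUM    = "$1$DGA17"  ! device name of the quorum disk
  QDSKVOTES      = 1           ! votes contributed by the quorum disk

With these values, quorum is (3 + 2) / 2 = 2, so the cluster keeps running if either one node or the quorum disk fails.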

Reference: For more information about quorum disk management, see OpenVMS Cluster Systems.

11.6 State Transition Strategies

OpenVMS Cluster state transitions occur when a system joins or leaves an OpenVMS Cluster system and when the OpenVMS Cluster recognizes a quorum-disk state change. The connection manager handles these events to ensure the preservation of data integrity throughout the OpenVMS Cluster.

State transitions should be a concern only if systems are joining or leaving an OpenVMS Cluster system frequently enough to cause disruption.

A state transition's duration and its effect on users and applications are determined by the reason for the transition, the configuration, and the applications in use. By managing transitions effectively, system managers can control:

  • Detection of failures and how long the transition takes
  • Side effects of the transition, such as volume shadowing copy and merge operations
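
One convenient way to observe membership, votes, and quorum before, during, and after a transition is the SHOW CLUSTER utility. A brief sketch follows; the fields you add to the continuous display are up to you (see the SHOW CLUSTER documentation for the available classes and fields):

  $! One-time snapshot of cluster membership:
  $ SHOW CLUSTER
  $! Continuously updated display; the utility's ADD command can add
  $! fields such as votes and quorum information to the display:
  $ SHOW CLUSTER/CONTINUOUS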

11.6.1 Dealing with State Transitions

The following guidelines describe effective ways of dealing with transitions so that you can minimize the actual transition time as well as the side effects after the transition.

  • Be proactive in preventing nodes from leaving an OpenVMS Cluster by:
    • Providing interconnect redundancy between all systems.
    • Preventing resource exhaustion of disks and memory as well as saturation of interconnects, processors, and adapters.
    • Using an uninterruptible power supply (UPS).
    • Informing users that shutting off a workstation in a large OpenVMS Cluster disrupts the operation of all systems in the cluster.
  • Do not use a quorum disk unless your OpenVMS Cluster has only two nodes.
  • Where possible, ensure that shadow set members reside on shared buses to increase availability.
  • The time to detect the failure of nodes, disks, adapters, interconnects, and virtual circuits is controlled by system polling parameters. Reducing polling time makes the cluster react quickly to changes, but it also results in lower tolerance to temporary outages. When setting timers, try to strike a balance between rapid recovery from significant failures and "nervousness" resulting from temporary failures.
    Table 11-5 describes OpenVMS Cluster polling parameters that you can adjust for quicker detection time. HP recommends that these parameters be set to the same value on every OpenVMS Cluster member; a sketch of one way to examine them clusterwide appears after this list.

    Table 11-5 OpenVMS Cluster Polling Parameters

    Parameter       Description
    QDSKINTERVAL    Specifies the quorum disk polling interval.
    RECNXINTERVL    Specifies the interval during which the connection manager
                    attempts to restore communication to another system.
    TIMVCFAIL       Specifies the time required for detection of a virtual
                    circuit failure.
  • Include application recovery in your plans. When you assess the effect of a state transition on application users, consider that the application recovery phase includes activities such as replaying a journal file, cleaning up recovery units, and users logging in again.
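
The following is a sketch of one way to examine the polling parameters on all members at once, using the SYSMAN utility; make persistent changes through your normal MODPARAMS.DAT and AUTOGEN procedure rather than ad hoc edits:

  $! A sketch only: examine the polling parameters on every cluster member.
  $ RUN SYS$SYSTEM:SYSMAN
  SYSMAN> SET ENVIRONMENT/CLUSTER
  SYSMAN> PARAMETERS USE CURRENT
  SYSMAN> PARAMETERS SHOW RECNXINTERVL
  SYSMAN> PARAMETERS SHOW TIMVCFAIL
  SYSMAN> PARAMETERS SHOW QDSKINTERVAL
  SYSMAN> EXIT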

Reference: For more detailed information about OpenVMS Cluster transitions and their phases, system parameters, and quorum management, see OpenVMS Cluster Systems.

11.7 Migration and Warranted Support for Multiple Versions

HP provides two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems.

Warranted support means that HP has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations.

Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments. Migration support means that HP has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by HP. However, in exceptional cases, HP may request that you move to a warranted configuration as part of answering the problem.

Note

HP supports only two versions of OpenVMS running in a cluster at the same time, regardless of architecture. This means that any two versions shown in Table 11-6 are supported, subject to the limitations of migration support already described.

Table 11-6 shows the level of support provided for all possible version pairings.

Table 11-6 OpenVMS Cluster Warranted and Migration Support
                           Alpha V7.3x/VAX V7.3     Alpha V7.2-2/VAX V7.2
  Alpha V7.3x/VAX V7.3     Warranted                Migration
  Alpha V7.2-2/VAX V7.2    Migration                Warranted

Note

Standard support for OpenVMS Alpha Version 7.3 remains in effect until December 31, 2003. Prior version support will not be available for this version. For more information, visit the HP services web site at:


ftp://ftp.compaq.com/pub/services/software/ovms.pdf

In a mixed-version cluster, you must install remedial kits on earlier versions of OpenVMS. For a complete list of required remedial kits, see the OpenVMS Alpha Version 7.3-1 Release Notes.

11.8 Alpha and VAX Systems in the Same OpenVMS Cluster

OpenVMS Alpha and OpenVMS VAX systems can work together in the same OpenVMS Cluster to provide both flexibility and migration capability. You can add Alpha processing power to an existing VAXcluster, enabling you to utilize applications that are system specific or hardware specific.

11.8.1 OpenVMS Cluster Satellite Booting Across Architectures

OpenVMS Alpha Version 7.1 (and higher) and OpenVMS VAX Version 7.1 (and higher) enable VAX boot nodes to provide boot service to Alpha satellites and Alpha boot nodes to provide boot service to VAX satellites. This support, called cross-architecture booting, increases configuration flexibility and provides higher availability of boot servers for satellites.

Two configuration scenarios make cross-architecture booting desirable:

  • You want the Alpha system disk configured in the same highly available and high-performance area as your VAX system disk.
  • Your Alpha boot server shares CI or DSSI storage with the VAX boot server. If your only Alpha boot server fails, you want to be able to reboot an Alpha satellite before the Alpha boot server reboots.

11.8.2 Restrictions

You cannot perform OpenVMS operating system and layered product installations and upgrades across architectures. For example, you must install and upgrade OpenVMS Alpha software using an Alpha system. When you configure OpenVMS Cluster systems that take advantage of cross-architecture booting, ensure that at least one system from each architecture is configured with a disk that can be used for installations and upgrades.

System disks can contain only a single version of the OpenVMS operating system and are architecture-specific. For example, OpenVMS VAX Version 7.3 cannot coexist on a system disk with OpenVMS Alpha Version 7.3.

