
HP OpenVMS Systems Documentation


HP OpenVMS Version 8.2 Release Notes



4.18 OpenVMS Cluster Systems

The release notes in this section pertain to OpenVMS Cluster systems.

4.18.1 OpenVMS I64 Cluster Support

V8.2

With few exceptions, OpenVMS Cluster software provides the same features on OpenVMS I64 systems as it offers on OpenVMS Alpha and OpenVMS VAX systems.

4.18.2 Temporary Exceptions

V8.2

The following exceptions are temporary:

  • The number of systems permitted in a cluster depends on the platform configuration, as follows:
    • Maximum of 8 I64 systems
    • Maximum of 16 Alpha and I64 systems in a mixed-architecture cluster (not to exceed 8 I64 systems)

    Support for more than eight I64 systems in a cluster will be announced during the first half of 2005 (up to the maximum of 96 total nodes).
  • A supported production cluster containing an I64 system cannot include a VAX system. VAX systems can be included in such clusters for development and migration purposes, with the understanding that if problems arise from the presence of the VAX systems, either the VAX or the I64 systems must be removed. See the OpenVMS Cluster Software SPD for more information.
  • Currently, only two architectures are allowed for supported production environments in an OpenVMS Cluster system. Refer to the HP OpenVMS Version 8.2 Upgrade and Installation Manual for a list of supported cluster configurations.
  • Satellite booting of OpenVMS I64 systems is not available in this release; it will be supported in a future release.

4.18.3 Permanent Exceptions

V8.2

OpenVMS Cluster software supports three proprietary interconnects on Alpha systems that are not supported on OpenVMS I64 systems: DSSI (DIGITAL Storage Systems Interconnect), CI (cluster interconnect), and Memory Channel. Although DSSI and CI storage cannot be directly connected to OpenVMS I64 systems, data stored on CI and DSSI disks (connected to Alpha systems) can be served to OpenVMS I64 systems in the same cluster.

Multihost shared storage on a SCSI interconnect, commonly known as a SCSI cluster, is not supported on OpenVMS I64 systems. (It is also not supported on OpenVMS Alpha systems for newer SCSI adapters.) However, multihost shared storage on industry-standard Fibre Channel is supported.

Note

Locally attached storage, on both OpenVMS Alpha systems (CI, DSSI, FC, or SCSI storage) and OpenVMS I64 systems (Fibre Channel or SCSI storage), can be served to any other member of the cluster.

4.18.4 Patch Kits Needed for Cluster Compatibility

V8.2

Before introducing an OpenVMS Version 8.2 system into an existing OpenVMS Cluster system, you must apply certain patch kits (also known as remedial kits) to your systems running earlier versions of OpenVMS. If you are using Fibre Channel, XFC, or Volume Shadowing, additional patch kits are required. Note that these kits are version specific.

The versions listed in Table 4-1 are supported in either a warranted configuration or a migration pair configuration. For more information about these configurations, refer to either HP OpenVMS Cluster Systems or the HP OpenVMS Version 8.2 Upgrade and Installation Manual.

Table 4-1 lists the facilities that require patch kits and the patch ID names. Each patch kit has a corresponding readme file with the same name (file extension is .README).

You can either download the patch kits from the following web site (select the OpenVMS Software Patches option), or contact your HP support representative to receive the patch kits on media appropriate for your system:

http://h18007.www1.hp.com/support/files/index.html
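
The Alpha kit names in Table 4-1 follow the PCSI remedial-kit convention, while the VAX kits are VMSINSTAL (saveset) kits, so the installation commands differ by platform. The following is a minimal sketch of installing one kit of each type; the source device and directory are hypothetical:

    $ ! On an OpenVMS Alpha system (PCSI kit):
    $ PRODUCT INSTALL VMS732_UPDATE /SOURCE=DKA100:[KITS]
    $
    $ ! On an OpenVMS VAX system (VMSINSTAL saveset kit):
    $ @SYS$UPDATE:VMSINSTAL VAXSYS01_073 DKA100:[KITS]

Consult each kit's readme file for kit-specific installation instructions before proceeding.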

Note

Patch kits are periodically updated on an as-needed basis. Always use the most recent patch kit for the facility, as indicated by the version number in the kit's readme file. The most recent version of each kit is the version posted on the web site.

Table 4-1 Patch Kits Required for Cluster Compatibility
Facility Patch ID
OpenVMS Alpha Version 7.3-2
Update kit (contains most patch kits except those also listed in this section) VMS732_UPDATE-V0300
Fibre Channel/SCSI VMS732_FIBRE_SCSI-V0400
System VMS732_SYS-V0600
OpenVMS VAX Version 7.3 (see note 1)
Audit Server VAXAUDS01_073
Cluster VAXSYSL01_073
DECnet-Plus VAX_DNVOSIECO04-V73
DECwindows Motif VAXDWMOTMUP01_073
DTS VAXDTSS01_073
Files-11 VAXF11X02_073
MAIL VAXMAIL01_073
MIME VAXMIME01_073
MOUNT VAXMOUN01_073
RMS VAXRMS01_073
RPC VAXRPC02_073
Volume Shadowing VAXSHAD01_073
System VAXSYS01_073

Note 1: For operating guidelines when using VAX systems in a cluster, refer to Section 4.18.2.

Note that VAX systems cannot be in a cluster with I64 systems. For a complete list of warranted groupings within a cluster, refer to the HP OpenVMS Version 8.2 Upgrade and Installation Manual.

4.18.5 New API Can Correct Incompatibility of Fibre Channel and SCSI Multipath with Some Third-Party Products

V7.3-2

OpenVMS Alpha Version 7.2-1 introduced the multipath feature, which provides support for failover between the multiple paths that can exist between a system and a SCSI or Fibre Channel device. OpenVMS Alpha Version 7.3-1 introduced support for failover between Fibre Channel multipath tape devices.

This multipath feature can be incompatible with some third-party disk-caching, disk-shadowing, or similar products. HP advises that you do not use such software on SCSI or Fibre Channel devices that are configured for multipath failover until this feature is supported by the producer of the software.

Third-party products that rely on altering the Driver Dispatch Table (DDT) of either the OpenVMS Alpha SCSI disk class driver (SYS$DKDRIVER.EXE), the OpenVMS Alpha SCSI tape class driver (SYS$MKDRIVER.EXE), or the SCSI generic class driver (SYS$GKDRIVER) may need to be modified in order to function correctly with the SCSI multipath feature.

Producers of such software can now modify their software using new DDT Intercept Establisher routines introduced in OpenVMS Alpha Version 7.3-2. For more information about these routines, refer to the HP OpenVMS Alpha Version 7.3-2 New Features and Documentation Overview manual.

Note

If you are using a third-party disk-caching product or disk shadowing application, refrain from using it in an OpenVMS SCSI or Fibre Channel multipath configuration until you confirm that the application has been revised using these new routines.

For more information about OpenVMS Alpha SCSI and Fibre Channel multipath features, refer to Guidelines for OpenVMS Cluster Configurations.

4.18.6 CLUSTER_CONFIG.COM and Limits on Root Directory Names

V7.3-2

This note updates Table 8-3 (Data Requested by CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM) in the HP OpenVMS Cluster Systems manual.

The documentation specifies a limit on the number of hexadecimal digits you can use for computers with direct access to the system disk. The limit is correct for VAX computers but not for Alpha computers.

The command procedure prompts for the following information:


Computer's root directory name on cluster system disk:

The documentation currently states:

Press Return to accept the procedure-supplied default, or specify a name in the form SYSx:

  • For computers with direct access to the system disk, x is a hexadecimal digit in the range of 1 through 9 or A through D (for example, SYS1 or SYSA)
  • For satellites, x must be in the range of 10 through FFFF

The limit on the range of hexadecimal values for computers with direct access to the system disk is correct for VAX computers. For Alpha computers with direct access to the system disk, the valid range of hexadecimal values is much larger: it includes both the VAX range (1 through 9 and A through D) and the range 10 through FFFF. Note that SYSE and SYSF are reserved for system use.
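
For example, on an Alpha computer with direct access to the system disk, a root in the extended range is accepted (hypothetical dialog; the default value shown is illustrative):

    Computer's root directory name on cluster system disk [SYS10]: SYS1A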

The HP OpenVMS Cluster Systems manual will include this information in its next revision.

4.18.7 Cluster Performance Reduced with CI-LAN Circuit Switching

V7.3-1

In rare cases, in an OpenVMS Cluster configuration with both CI and multiple LAN circuits (FDDI, 100 Mb/s Ethernet, or Gigabit Ethernet), you might observe SCS connections moving between the CI and LAN circuits at intervals of approximately 1 minute. This frequent circuit switching can reduce cluster performance and may trigger mount verification of shadow set members.

PEdriver can detect and respond to LAN congestion that persists for a few seconds. When it detects a significant delay increase or packet losses on a LAN path, the PEdriver removes the path from use. When it detects that the path has improved, it begins using it again.

Under marginal conditions, the additional load on a LAN path resulting from its use for cluster traffic may cause its delay or packet losses to increase beyond acceptable limits. When the cluster load is removed, the path might appear to be sufficiently improved so that it will again come into use.

If including a marginal LAN path raises the LAN circuit's load class above the CI's load class value of 140, and excluding that path drops the LAN circuit's load class below 140, SCS connections will move back and forth between the CI and LAN circuits.

You can observe connections moving between LAN and CI circuits by using SHOW CLUSTER with the CONNECTION and CIRCUITS classes added.
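
For example, the following commands start a continuously updating display that includes those classes (a minimal sketch; class names can be abbreviated, and the exact prompt varies with the SHOW CLUSTER version):

    $ SHOW CLUSTER/CONTINUOUS
    Command> ADD CONNECTIONS, CIRCUITS

Connections moving between the CI and LAN circuits then become visible as the display updates.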

Workarounds

If excessively frequent connection moves are observed, you can use one of the following workarounds:

  • You can use SCACP or Availability Manager to assign a higher priority to the circuit or port you want to be used, thus overriding automatic connection assignment and movement.
    Examples of SCACP commands are:


    $ MC SCACP
    SCACP> SET PORT PNA0 /PRIORITY=2    ! Causes circuits from the local CI
                                        ! port PNA0 to be chosen over lower
                                        ! priority circuits.
    SCACP> SET PORT PEA0 /PRIORITY=2    ! Causes LAN circuits to be chosen
                                        ! over lower priority circuits.
    
  • You can use the SCACP SHOW CHANNEL commands to determine which channels are being switched into/out of use. Then you can use SCACP to explicitly exclude a specific channel by assigning it a lower priority value than the desired channels. For example:


    SCACP> SET CHANNEL LARRY /LOCAL=EWB/REMOTE=EWB /PRIORITY=-2
    

    Note that CHANNEL and LAN device priority values that differ by only 1 (for example, max and max-1) are considered equivalent; that is, both are treated as having the maximum priority value. A difference of 2 or more in priority values is necessary to exclude a channel or LAN device from use.

4.18.8 Gigabit Ethernet Switch Restriction in an OpenVMS Cluster System

Permanent Restriction

Attempts to add a Gigabit Ethernet node to an OpenVMS Cluster system over a Gigabit Ethernet switch fail if the switch settings are different from the setting of the Gigabit Ethernet adapter with respect to autonegotiation. If the switch is set to autonegotiation, the adapter must be as well, and conversely.

Most Gigabit Ethernet adapters default to having autonegotiation enabled. An exception is the DEGXA on Alpha systems, where the EGn0_MODE console environment variable contains the desired setting, which must match the switch setting.

When an attempt to add a node fails because of the switch and adapter mismatch, the messages that are displayed can be misleading. If you are using CLUSTER_CONFIG.COM to add the node and the option to install a local page and swap disk is selected, the problem might look like a disk-serving problem. The node running CLUSTER_CONFIG.COM displays the message "waiting for node-name to boot," while the booting node displays "waiting to tune system." The list of available disks is never displayed because of a missing network path. The network path is missing because of the autonegotiation mismatch between the Gigabit adapter and the switch.

To avoid this problem, disable autonegotiation on the new node's Gigabit Ethernet adapter, as follows:

  1. Perform a conversational boot when you first boot the node into the cluster.
  2. Set the new node's system parameter LAN_FLAGS to a value of 32 to disable autonegotiation on all Gigabit adapters in the system.

After this initial configuration, the LAN_FLAGS system parameter setting for autonegotiation must be consistent with the switch settings for all Gigabit Ethernet adapters in the system. If autonegotiation should not be disabled on all adapters, set the appropriate run-time settings in the LANCP device database. See the LANCP chapter in the HP OpenVMS System Management Utilities Reference Manual for details.
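
A minimal sketch of these steps on an Alpha console follows; the boot device is hypothetical, and console syntax varies by model:

    >>> BOOT -FLAGS 0,1 DKA0            ! conversational boot
        ...
    SYSBOOT> SET LAN_FLAGS 32           ! disable autonegotiation on all
                                        ! Gigabit adapters in the system
    SYSBOOT> CONTINUE                   ! resume booting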

4.18.9 Multipath Tape Failover Restriction

V7.3-1

While the INITIALIZE command is in progress on a device in a Fibre Channel multipath tape set, multipath failover to another member of the set is not supported. If the current path fails while a multipath tape device is being initialized, retry the INITIALIZE command after the tape device fails over to a functioning path.

This restriction will be removed in a future release.

4.18.10 No Automatic Failover for SCSI Multipath Medium Changers

V7.3-1

Automatic path switching is not implemented in OpenVMS Alpha Version 7.3-1 or higher for SCSI medium changers (tape robots) attached to Fibre Channel using a Fibre-to-SCSI tape bridge. Multiple paths can be configured for such devices, but the only way to switch from one path to another is to use manual path switching with the SET DEVICE/SWITCH command.
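
A minimal sketch of a manual path switch follows; the device name and path identifier are hypothetical, and you can list the actual paths with SHOW DEVICE/FULL:

    $ SHOW DEVICE/FULL $2$GGA3:         ! list the configured paths
    $ SET DEVICE $2$GGA3: /SWITCH /PATH=PGB0.5000-1FE1-0000-0D04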

This restriction will be removed in a future release.

4.19 OpenVMS Galaxy (Alpha Only)

The following sections contain notes pertaining to OpenVMS Galaxy systems.

Note that OpenVMS Galaxy is supported on OpenVMS Alpha systems only.

4.19.1 Galaxy Definitions

V8.2

Because the HP OpenVMS Alpha Partitioning and Galaxy Guide is not being updated for OpenVMS Version 8.2, this note provides improved definitions of the word Galaxy, whose meaning depends on context.

Table 4-2 Galaxy Definitions

Galaxy as a license: Required to create and run multiple instances of OpenVMS in a single computer. Without this license, only one instance of OpenVMS can run in a single computer.

Galaxy as a system parameter: Sets memory sharing. GALAXY set to 1 specifies that OpenVMS instances with the parameter set in a hard partition will share memory between soft partitions within that hard partition. (You can run more than two soft partitions in a hard partition, and you may not want to share memory among all of them.) Note that this parameter specifies only whether a node uses shared memory; it is not needed to run multiple, cooperative instances of OpenVMS, which is achieved by console setup of the desired configuration tree. GALAXY set to 0 (the default) means that memory is not shared.

Galaxy as a soft partition: Provides the capability for several OpenVMS instances to execute cooperatively in a single computer, so that they can migrate CPUs, use APIs, share memory, and so on. Platform partitioning makes possible the separation of resources into multiple soft partitions, each of which can run an OS instance. A soft partition is the subset of resources that the OS instance running in it can see and use.

4.19.2 OpenVMS Graphical Configuration Manager

V8.2

The OpenVMS Graphical Configuration Manager (GCM) is now supported for AlphaServer ES47/ES80/GS1280 Galaxy configurations. Previously, only the Graphical Configuration Utility (GCU) was supported.

4.19.3 Galaxy on ES40: Uncompressed Dump Limitation

Permanent Restriction

On AlphaServer ES40 Galaxy systems, you cannot write a raw (uncompressed) dump from instance 1 if instance 1's memory starts at or above 4 GB (physical). Instead, you must write a compressed dump.
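
On OpenVMS Alpha, dump compression is selected by bit 3 (decimal value 8) of the DUMPSTYLE system parameter. A minimal sketch of enabling it on instance 1, assuming no other DUMPSTYLE bits are required at your site:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET DUMPSTYLE 8             ! bit 3 selects a compressed dump
    SYSGEN> WRITE CURRENT               ! takes effect at the next reboot
    SYSGEN> EXIT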

4.19.4 Galaxy on ES40: Turning Off Fast Path

V7.3-1

When you implement Galaxy on an AlphaServer ES40 system, you must turn off Fast Path on instance 1. Do this by setting the SYSGEN parameter FAST_PATH to 0 on that instance.
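
For example (a minimal sketch; to preserve the setting across AUTOGEN runs, also add FAST_PATH = 0 to SYS$SYSTEM:MODPARAMS.DAT on that instance):

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET FAST_PATH 0             ! turn off Fast Path on instance 1
    SYSGEN> WRITE CURRENT               ! takes effect at the next reboot
    SYSGEN> EXIT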

If you do not turn off Fast Path on instance 1, I/O on instance 1 will hang when instance 0 is rebooted. This hang will continue until the PCI bus is reset and instance 1 rebooted. If there is shared SCSI or Fibre Channel, I/O will hang on the sharing nodes and all paths to those devices will be disabled.

4.20 OpenVMS Management Station

V8.2

Version 3.2D is the recommended version of OpenVMS Management Station for OpenVMS I64 Version 8.2 and OpenVMS Alpha Version 8.2. However, OpenVMS Management Station is backward compatible with OpenVMS Version 6.2 and higher.

The OpenVMS Version 8.2 installation includes OpenVMS Management Station Version 3.2D.

4.21 OpenVMS Registry Can Corrupt Version 2 Format Database

V7.3-2

If you create eight or more volatile subkeys in a key tree and then reboot a standalone system or a cluster, the OpenVMS Registry server can corrupt a Version 2 format Registry database when the server starts up after the reboot.

To avoid this problem, do one of the following:

  • Do not use volatile keys.
  • Use a Version 1 format database.

Note that Advanced Server for OpenVMS and COM for OpenVMS do not create volatile keys.

4.22 Security: Changes to DIRECTORY Command Output

V7.3-2

In OpenVMS Version 7.1 and higher, if you execute the DCL command DIRECTORY/SECURITY or DIRECTORY/FULL for files that contain Advanced Server (PATHWORKS) access control entries (ACEs), the hexadecimal representation for each Advanced Server ACE is no longer displayed. Instead, the total number of Advanced Server ACEs encountered for each file is summarized in the message, "Suppressed n PATHWORKS ACEs."

To display the suppressed ACEs, use the SHOW SECURITY command. You must have the SECURITY privilege to display these ACEs. Note that the command actually displays OpenVMS ACEs, including the %x86 ACE that reveals the Windows NT® security descriptor information, which pertains to the Advanced Server.
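
For example (the file name is hypothetical):

    $ DIRECTORY/SECURITY MYFILE.TXT     ! Advanced Server ACEs are summarized
    $ SHOW SECURITY MYFILE.TXT          ! displays the ACEs themselves
                                        ! (requires the SECURITY privilege)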

4.23 Server Management Process (SMHANDLER)

V7.3-2

The server management process, SMHANDLER, now starts automatically on Alpha systems that support it. System managers should remove references to the obsolete startup file, SYS$STARTUP:SYS$SMHANDLER_STARTUP.COM, from SYSTARTUP_VMS.COM or other site-specific startup files. This reference has been removed from SYSTARTUP_VMS.TEMPLATE.

Background: What is SMHANDLER?

On certain Alpha systems, the server management process is started to assist the system firmware in reporting and responding to imminent hardware failures. Failure conditions vary but typically include over-temperature conditions, fan failures, or power supply failures. SMHANDLER may report warning conditions to OPCOM, and may initiate a shutdown of OpenVMS if system firmware is about to power off a failing system. In most situations, a controlled shutdown of OpenVMS would be less disruptive than abrupt loss of system power.

To ensure the longest possible uptime, system managers can set the POWEROFF system parameter to 0. This prevents SMHANDLER from shutting down OpenVMS on a failing system but does not prevent the system firmware from powering off the system.
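
One way to make this setting, sketched below, is to append it to MODPARAMS.DAT and then run AUTOGEN so the value survives future AUTOGEN runs (your site's preferred AUTOGEN phases may differ):

    $ OPEN/APPEND MP SYS$SYSTEM:MODPARAMS.DAT
    $ WRITE MP "POWEROFF = 0"
    $ CLOSE MP
    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS   ! regenerate and set parameters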

4.24 SYSGEN: Security Auditing Fixed

V7.3-2

Previously, enabling SYSGEN audits or alarms did not produce audits or alarms containing information about the parameters being modified. As of OpenVMS Version 7.3-2, this problem is corrected: audits and alarms now provide a list of the changed parameters along with their old and new values.
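
SYSGEN is a standard audit event class, so this auditing is enabled with the SET AUDIT command, as in the following sketch:

    $ SET AUDIT/AUDIT/ENABLE=SYSGEN     ! record SYSGEN events in the audit log
    $ SET AUDIT/ALARM/ENABLE=SYSGEN     ! also send alarms to security operators
    $ SHOW AUDIT                        ! verify the enabled event classes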

4.25 SYSMAN Utility: DUMP_PRIORITY Command Changes

V8.2

The following changes have been made to the SYSMAN DUMP_PRIORITY command:

  • DUMP_PRIORITY ADD, MODIFY, and REMOVE
    The new qualifier /[NO]INFORMATIONAL has been added to allow users to control the output of informational messages, for example, in command procedures; see the example after this list. The default is /INFORMATIONAL.
  • DUMP_PRIORITY ADD and MODIFY
    If you attempt to add an entry that already exists, or if you use DUMP_PRIORITY MODIFY to modify an existing entry where the modification would result in a duplicate, you will get an "SMI-I-SDPRDUPIGN, duplicate record creation ignored" message. (Previously, you would have gotten an "SMI-F-SDPRNODUP, duplicate records not allowed" message.) In the case of MODIFY, the existing record will not be removed.
  • DUMP_PRIORITY REMOVE
    If you try to remove a nonexistent entry from the System Dump Priority registry, you now get an "SMI-I-SDPRNOTREM, no record removed" message. (Previously, you would have gotten an "SMI-F-SDPRNOTFOUND, system dump priority record not found" message, which is still returned by DUMP_PRIORITY MODIFY when the entry to modify is not found.)
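
The following sketch illustrates the new behavior; the process name and UIC are hypothetical:

    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> DUMP_PRIORITY ADD ERRFMT /UIC=[SYSTEM]
    SYSMAN> DUMP_PRIORITY ADD ERRFMT /UIC=[SYSTEM]    ! duplicate: SMI-I-SDPRDUPIGN
    SYSMAN> DUMP_PRIORITY REMOVE ERRFMT /UIC=[SYSTEM] /NOINFORMATIONAL
    SYSMAN> DUMP_PRIORITY REMOVE ERRFMT /UIC=[SYSTEM] ! nonexistent: SMI-I-SDPRNOTREM
    SYSMAN> EXIT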

