


HP OpenVMS Cluster Systems



Example 8-13 Sample Interactive CLUSTER_CONFIG_LAN.COM Session to Convert a Standalone Computer to a Cluster Boot Server

$ @CLUSTER_CONFIG.COM
IA64 platform support is in procedure CLUSTER_CONFIG_LAN.COM.
    The currently running procedure, CLUSTER_CONFIG.COM, will call 
    it for you.               
                   Cluster/IPCI Configuration Procedure 
                   CLUSTER_CONFIG_LAN Version V2.84 
                     Executing on an IA64 System 
 
    DECnet-Plus is installed on this node. 
    IA64 satellites will use TCP/IP BOOTP and TFTP services for downline loading. 
    TCP/IP is installed and running on this node. 
 
        Enter a "?" for help at any prompt.  If you are familiar with 
        the execution of this procedure, you may want to mute extra notes 
        and explanations by invoking it with "@CLUSTER_CONFIG_LAN BRIEF". 
 
    This IA64 node is not currently a cluster member. 
 
MAIN Menu 
 
   1. ADD MOON to existing cluster, or form a new cluster. 
   2. MAKE a directory structure for a new root on a system disk. 
   3. DELETE a root from a system disk. 
   4. EXIT from this procedure. 
 
Enter choice [4]: 1 
Is the node to be a clustered node with a shared SCSI/FIBRE-CHANNEL bus (Y/N)? N 
 
What is the node's SCS node name? moon 
 
    DECnet is running on this node. Even though you are configuring a LAN- 
    based cluster, the DECnet database will provide some information and 
    may be updated. 
 
Do you want to define a DECnet synonym [Y]? N 
    IA64 node, using LAN for cluster communications.  PEDRIVER will be loaded. 
    No other cluster interconnects are supported for IA64 nodes. 
Enter this cluster's group number: 123 
Enter this cluster's password: 
Re-enter this cluster's password for verification: 
 
Will MOON be a boot server [Y]? [Return]
 
        TCP/IP BOOTP and TFTP services must be enabled on IA64 boot nodes. 
 
        Use SYS$MANAGER:TCPIP$CONFIG.COM on MOON to enable BOOTP and TFTP services 
        after MOON has booted into the cluster. 
 
Enter a value for MOON's ALLOCLASS parameter [0]:[Return]
Does this cluster contain a quorum disk [N]? [Return]
 
    The EXPECTED_VOTES system parameter of members of a cluster indicates the 
    total number of votes present when all cluster members are booted, and is 
    used to determine the minimum number of votes (QUORUM) needed for cluster 
    operation. 
 
EXPECTED_VOTES value for this cluster: 1 
 
Warning:  Setting EXPECTED_VOTES to 1 allows this node to boot without 
          being able to see any other nodes in the cluster.  If there is 
          another instance of the cluster in existence that is unreachable 
          via SCS but shares common drives (such as a Fibrechannel fabric) 
          this may result in severe disk corruption. 
 
Do you wish to re-enter the value of EXPECTED_VOTES [Y]? N 
 
    The use of a quorum disk is recommended for small clusters to maintain 
    cluster quorum if cluster availability with only a single cluster node is 
    a requirement. 
 
    For complete instructions, check the section on configuring a cluster 
    in the "OpenVMS Cluster Systems" manual. 
 
 
  WARNING: MOON will be a voting cluster member. EXPECTED_VOTES for 
           this and every other cluster member should be adjusted at 
           a convenient time before a reboot. For complete instructions, 
           check the section on configuring a cluster in the "OpenVMS 
           Cluster Systems" manual. 
 
    Execute AUTOGEN to compute the SYSGEN parameters for your configuration 
    and reboot MOON with the new parameters. This is necessary before 
    MOON can become a cluster member. 
 
Do you want to run AUTOGEN now [Y]? [Return]
    Running AUTOGEN -- Please wait. 
 
%AUTOGEN-I-BEGIN, GETDATA phase is beginning. 
. 
. 
. 
 
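The session output notes that BOOTP and TFTP must be enabled with SYS$MANAGER:TCPIP$CONFIG.COM after MOON has booted into the cluster. TCPIP$CONFIG.COM is menu driven, and the exact menu choices depend on your version of TCP/IP Services, so the following lines are only a sketch of the sequence:

$ @SYS$MANAGER:TCPIP$CONFIG.COM
$ ! From the main menu, select the Server components option, then
$ ! enable and start the BOOTP and TFTP services.
$ TCPIP SHOW SERVICE BOOTP    ! verify that each service is enabled
$ TCPIP SHOW SERVICE TFTP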

8.5 Creating a Duplicate System Disk

As you continue to add Integrity servers to a common Integrity system disk, or Alpha computers to a common Alpha system disk, you eventually reach the disk's storage or I/O capacity. When that happens, you can add one or more common system disks to handle the increased load.

Reminder: A system disk cannot be shared between two architectures, and you cannot create a system disk for one architecture from a system disk of a different architecture.

8.5.1 Preparation

You can use either CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM to set up additional system disks. After you have coordinated cluster common files as described in Chapter 5, proceed as follows:

  1. Locate an appropriate scratch disk for use as an additional system disk.
  2. Log in as system manager.
  3. Invoke either CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM and select the CREATE option.
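Before you select the CREATE option, you may want to confirm that the scratch disk chosen in step 1 is at least as large as the current system disk and is not mounted for other use. A quick check, using a hypothetical device name, might look like this:

$ SHOW DEVICE/FULL $1$DGA20:    ! device name is illustrative

The display includes the volume label, mount status, and free-block count, which you can compare against the size of the current system disk.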

8.5.2 Example

As shown in Example 8-14, the cluster configuration command procedure:

  1. Prompts for the device names of the current and new system disks.
  2. Backs up the current system disk to the new one.
  3. Deletes all directory roots (except SYS0) from the new disk.
  4. Mounts the new disk clusterwide.

Note: OpenVMS RMS error messages are displayed while the procedure deletes directory files. You can ignore these messages.

Example 8-14 Sample Interactive CLUSTER_CONFIG_LAN.COM CREATE Session

$ @CLUSTER_CONFIG_LAN.COM                 
              Cluster/IPCI Configuration Procedure 
                   CLUSTER_CONFIG_LAN Version V2.84 
                     Executing on an IA64 System 
 
    DECnet-Plus is installed on this node. 
    IA64 satellites will use TCP/IP BOOTP and TFTP services for downline loading. 
    TCP/IP is installed and running on this node. 
 
        Enter a "?" for help at any prompt.  If you are familiar with 
        the execution of this procedure, you may want to mute extra notes 
        and explanations by invoking it with "@CLUSTER_CONFIG_LAN BRIEF". 
 
    BHAGAT is an IA64 system and currently a member of a cluster 
    so the following functions can be performed: 
 
MAIN Menu 
 
   1. ADD an IA64 node to the cluster. 
   2. REMOVE a node from the cluster. 
   3. CHANGE a cluster member's characteristics. 
   4. CREATE a duplicate system disk for BHAGAT. 
   5. MAKE a directory structure for a new root on a system disk. 
   6. DELETE a root from a system disk. 
   7. EXIT from this procedure. 
 
Enter choice [7]: 4 
 
    The CREATE function generates a duplicate system disk. 
 
            o It backs up the current system disk to the new system disk. 
 
            o It then removes from the new system disk all system roots. 
 
  WARNING: Do not proceed unless you have defined appropriate logical names 
           for cluster common files in SYLOGICALS.COM.  For instructions, 
           refer to the "OpenVMS Cluster Systems" manual. 
 
Do you want to continue [N]? Y 
 
    This procedure will now ask you for the device name of the current 
    system disk. The default device name (DISK$BHAGAT_SYS:) is the logical 
    volume name of SYS$SYSDEVICE:. 
 
What is the device name of the current system disk [DISK$BHAGAT_SYS:]? 
What is the device name of the new system disk? 
. 
. 
. 

8.6 Postconfiguration Tasks

Some configuration functions, such as adding or removing a voting member or enabling or disabling a quorum disk, require one or more additional operations.

These operations, listed in Table 8-10, affect the integrity of the entire cluster. After you execute either CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM to make a major configuration change, follow the instructions in the table for that change.

Table 8-10 Actions Required to Reconfigure a Cluster
After running the cluster configuration procedure to... You should...
Add or remove a voting member Update the AUTOGEN parameter files and the current system parameter files for all nodes in the cluster, as described in Section 8.6.1.
Enable a quorum disk Perform the following steps:
  1. Update the AUTOGEN parameter files and the current system parameter files for all quorum watchers in the cluster, as described in Section 8.6.1.
  2. Reboot the nodes that have been enabled as quorum disk watchers (Section 2.3.9).

Reference: See also Section 8.2.4 for more information about adding a quorum disk.

Disable a quorum disk Perform the following steps:

Caution: Do not perform these steps until you are ready to reboot the entire OpenVMS Cluster system. Because you are reducing quorum for the cluster, the votes cast by the quorum disk being removed could cause cluster partitioning.

  1. Update the AUTOGEN parameter files and the current system parameter files for all quorum watchers in the cluster, as described in Section 8.6.1.
  2. Evaluate whether or not quorum will be lost without the quorum disk:
    IF... THEN...
    Quorum will not be lost Perform these steps:
    1. Use the DCL command SET CLUSTER/EXPECTED_VOTES to reduce the value of quorum (see the example following this table).
    2. Reboot the nodes that have been disabled as quorum disk watchers. (Quorum disk watchers are described in Section 2.3.9.)
    Quorum will be lost Shut down and reboot the entire cluster.
    Reference: Cluster shutdown is described in Section 8.6.2.

Reference: See also Section 8.3.2 for more information about removing a quorum disk.

Add a satellite node Perform these steps:
  • Update the volatile network databases on other cluster members (Section 8.6.4).
  • Optionally, alter the satellite's local disk label (Section 8.6.5).
Enable or disable the LAN or IP for cluster communications Update the current system parameter files and reboot the node on which you have enabled or disabled the LAN or IP (Section 8.6.1).
Change allocation class values Update the current system parameter files and shut down and reboot the entire cluster (Sections 8.6.1 and 8.6.2).
Change the cluster group number or password Shut down and reboot the entire cluster (Sections 8.6.2 and 8.6.7).
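The quorum-disk rows in Table 8-10 refer to the DCL command SET CLUSTER/EXPECTED_VOTES. A minimal sketch of reducing quorum after disabling a quorum disk follows; the value shown is illustrative:

$ SET CLUSTER/EXPECTED_VOTES=2    ! total votes expected without the quorum disk

If you omit the value, the system calculates it from the votes currently contributed by cluster members.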

8.6.1 Updating Parameter Files

The cluster configuration command procedures (CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM) modify parameters in the AUTOGEN parameter file for the node on which they are run.

In some cases, such as when you add or remove a voting cluster member, or when you enable or disable a quorum disk, you must update the AUTOGEN files for all the other cluster members.

Use either of the methods described in the following table.

Method Description
Update MODPARAMS.DAT files Edit MODPARAMS.DAT in all cluster members' [SYSx.SYSEXE] directories and adjust the value of the EXPECTED_VOTES system parameter appropriately (a sample entry follows this table).

For example, if you add a voting member or if you enable a quorum disk, you must increment the value by the number of votes assigned to the new member (usually 1). If you add a voting member with one vote and enable a quorum disk with one vote on that computer, you must increment the value by 2.

Update AGEN$ files Update the parameter settings in the appropriate AGEN$ include files:
  • For satellites, edit SYS$MANAGER:AGEN$NEW_SATELLITE_DEFAULTS.DAT.
  • For nonsatellites, edit SYS$MANAGER:AGEN$NEW_NODE_DEFAULTS.DAT.

Reference: These files are described in Section 8.2.2.
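As a sketch of the first method, suppose you add one voting member and enable a one-vote quorum disk on that computer. Each member's MODPARAMS.DAT entry would then be incremented by 2; the values below are illustrative:

! In each member's MODPARAMS.DAT (for example, [SYSx.SYSEXE]MODPARAMS.DAT)
EXPECTED_VOTES = 3    ! previously 1; +1 for the new member, +1 for the quorum disk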

You must also update the current system parameter files (IA64VMSSYS.PAR or ALPHAVMSSYS.PAR, as appropriate) so that the changes take effect on the next reboot.

Use either of the methods described in the following table.

Method Description
SYSMAN utility Perform the following steps:
  1. Log in as system manager.
  2. Run the SYSMAN utility to update the EXPECTED_VOTES system parameter on all nodes in the cluster. For example:
     $ RUN SYS$SYSTEM:SYSMAN
    
    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, current command environment:
    Clusterwide on local cluster
    Username SYSTEM will be used on nonlocal nodes

    SYSMAN> PARAM USE CURRENT
    SYSMAN> PARAM SET EXPECTED_VOTES 2
    SYSMAN> PARAM WRITE CURRENT
    SYSMAN> EXIT
AUTOGEN utility Perform the following steps:
  1. Log in as system manager.
  2. Run the AUTOGEN utility to update the EXPECTED_VOTES system parameter on all nodes in the cluster. For example:
     $ RUN SYS$SYSTEM:SYSMAN
    
    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, current command environment:
    Clusterwide on local cluster
    Username SYSTEM will be used on nonlocal nodes

    SYSMAN> DO @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS
    SYSMAN> EXIT

Do not specify the SHUTDOWN or REBOOT option.

Hint: If your next action is to shut down the node, you can specify SHUTDOWN or REBOOT (in place of SETPARAMS) in the DO @SYS$UPDATE:AUTOGEN GETDATA command.

Both methods propagate the values to the computer's ALPHAVMSSYS.PAR file on Alpha computers or IA64VMSSYS.PAR file on Integrity server systems. For the changes to take effect, continue with the instructions in either Section 8.6.2 to shut down the cluster or Section 8.6.3 to shut down the node.
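Before shutting down, you can confirm that the new value has reached each member's current parameter file. A brief check with SYSMAN might look like this:

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> PARAM USE CURRENT
SYSMAN> PARAM SHOW EXPECTED_VOTES
SYSMAN> EXIT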

8.6.2 Shutting Down the Cluster

Using the SYSMAN utility, you can shut down the entire cluster from a single node in the cluster. Follow these steps to perform an orderly shutdown:

  1. Log in to the system manager's account on any node in the cluster.
  2. Run the SYSMAN utility and specify the SET ENVIRONMENT/CLUSTER command. Be sure to specify the /CLUSTER_SHUTDOWN qualifier to the SHUTDOWN NODE command. For example:


$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
%SYSMAN-I-ENV, current command environment:
  Clusterwide on local cluster
  Username SYSTEM will be used on nonlocal nodes
SYSMAN> SHUTDOWN NODE/CLUSTER_SHUTDOWN/MINUTES_TO_SHUTDOWN=5 -
_SYSMAN> /AUTOMATIC_REBOOT/REASON="Cluster Reconfiguration"
%SYSMAN-I-SHUTDOWN, SHUTDOWN request sent to node 
%SYSMAN-I-SHUTDOWN, SHUTDOWN request sent to node 
SYSMAN> 
 
SHUTDOWN message on BHAGAT from user SYSTEM at BHAGAT Batch   11:02:10 
BHAGAT will shut down in 5 minutes; back up shortly via automatic reboot. 
Please log off node BHAGAT. 
Cluster Reconfiguration 
SHUTDOWN message on PLUTO from user SYSTEM at PLUTO Batch   11:02:10 
PLUTO will shut down in 5 minutes; back up shortly via automatic reboot. 
Please log off node PLUTO. 
Cluster Reconfiguration

For more information, see Section 10.6.

8.6.3 Shutting Down a Single Node

To stop a single node in an OpenVMS Cluster, you can use either the SYSMAN SHUTDOWN NODE command with the appropriate SET ENVIRONMENT command or the SHUTDOWN command procedure. These methods are described in the following table.

Method Description
SYSMAN utility Follow these steps:
  1. Log in to the system manager's account on any node in the OpenVMS Cluster.
  2. Run the SYSMAN utility to shut down the node, as follows:
     $ RUN SYS$SYSTEM:SYSMAN
    
    SYSMAN> SET ENVIRONMENT/NODE=JUPITR
    %SYSMAN-I-ENV, current command environment:
    Individual nodes: JUPITR
    Username SYSTEM will be used on nonlocal nodes

    SYSMAN> SHUTDOWN NODE/REASON="Maintenance" -
    _SYSMAN> /MINUTES_TO_SHUTDOWN=5

    Hint: To shut down a subset of nodes in the cluster, you can enter several node names (separated by commas) on the SET ENVIRONMENT/NODE command. The following command shuts down nodes JUPITR and SATURN:

    SYSMAN> SET ENVIRONMENT/NODE=(JUPITR,SATURN)
    
SHUTDOWN command procedure Follow these steps:
  1. Log in to the system manager's account on the node to be shut down.
  2. Invoke the SHUTDOWN command procedure as follows:
     $ @SYS$SYSTEM:SHUTDOWN
    

For more information, see Section 10.6.

8.6.4 Updating Network Data

Whenever you add a satellite, the cluster configuration command procedure you use (CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM) updates both the permanent and volatile remote node network databases (NETNODE_REMOTE.DAT) on the boot server. However, the volatile databases on other cluster members are not automatically updated.

To share the new data throughout the cluster, you must update the volatile databases on all other cluster members. Log in as system manager, invoke the SYSMAN utility, and enter the following commands at the SYSMAN> prompt:


$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
%SYSMAN-I-ENV, current command environment: 
        Clusterwide on local cluster 
        Username SYSTEM        will be used on nonlocal nodes
SYSMAN> SET PROFILE/PRIVILEGES=(OPER,SYSPRV)
SYSMAN> DO MCR NCP SET KNOWN NODES ALL
%SYSMAN-I-OUTPUT, command execution on node X...
   .
   .
   .
SYSMAN> EXIT
$ 

The file NETNODE_REMOTE.DAT must be located in the directory SYS$COMMON:[SYSEXE].
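You can verify the file's location with a simple directory check:

$ DIRECTORY SYS$COMMON:[SYSEXE]NETNODE_REMOTE.DAT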

8.6.5 Altering Satellite Local Disk Labels

If you want to alter the volume label on a satellite node's local page and swap disk, follow these steps after the satellite has been added to the cluster:

Step Action
1 Log in as system manager and enter a DCL command in the following format: SET VOLUME/LABEL=volume-label device-spec[:] (an example follows these steps)

Note: The SET VOLUME command requires write access (W) to the index file on the volume. If you are not the volume's owner, you must have either a system user identification code (UIC) or the SYSPRV privilege.

2 Update the [SYSn.SYSEXE]SATELLITE_PAGE.COM procedure on the boot server's system disk to reflect the new label.
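For example, relabeling a satellite's local page and swap disk might look like this; the label and device name are illustrative, and the same label must then appear in SATELLITE_PAGE.COM (step 2):

$ SET VOLUME/LABEL=SAT1_PAGESWAP $1$DKA100: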

8.6.6 Changing Allocation Class Values

If you need to change allocation class values on any HSG or HSV subsystem, do so while the entire cluster is shut down.

Reference: To change allocation class values on computer systems, see Section 6.2.2.1.

8.6.7 Rebooting

The following table describes booting actions for satellite and storage subsystems:
For configurations with... You must...
HSG and HSV subsystems Reboot each computer after all HSG and HSV subsystems have been set and rebooted.
Satellite nodes Reboot boot servers before rebooting satellites.

Note that several new messages might appear. For example, if you have used the CLUSTER_CONFIG.COM CHANGE function to enable cluster communications over the LAN, one message reports that the LAN OpenVMS Cluster security database is being loaded.

Reference: See also Section 9.3 for more information about booting satellites.

For every disk-serving computer, a message reports that the MSCP server is being loaded.

To verify that all disks are being served as you designed the configuration, enter the SHOW DEVICE/SERVED command at the system prompt ($) of the node serving the disks. For example, the following display represents a DSSI configuration:


$ SHOW DEVICE/SERVED


Device:  Status  Total Size  Current  Max  Hosts 
$1$DIA0   Avail     1954050        0    0      0 
$1$DIA2   Avail     1800020        0    0      0 

Caution: If you boot a node into an existing OpenVMS Cluster using minimum startup (the system parameter STARTUP_P1 is set to MIN), a number of processes (for example, CACHE_SERVER, CLUSTER_SERVER, and CONFIGURE) are not started. HP recommends that you start these processes manually if you intend to run the node in an OpenVMS Cluster system. Running a node without these processes enabled prevents the cluster from functioning properly.

Reference: Refer to the HP OpenVMS System Manager's Manual for more information about starting these processes manually.
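A quick way to check whether these processes are present on a node that was booted with STARTUP_P1 set to MIN is to filter the output of the SHOW SYSTEM command; for example:

$ PIPE SHOW SYSTEM | SEARCH SYS$PIPE CLUSTER_SERVER, CONFIGURE, CACHE_SERVER

If none of the process names appear in the output, the processes were not started on that node.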

