
HP OpenVMS Systems Documentation


HP OpenVMS Cluster Systems



6.2.2.1 Assigning Node Allocation Class Values on Computers

There are two ways to assign a node allocation class: by using CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM, as described in Section 8.4, or by using AUTOGEN, as shown in the following table.

Step Action
1 Edit the root directory [SYSn.SYSEXE]MODPARAMS.DAT on each node that boots from the system disk. The following example shows a MODPARAMS.DAT file. The entries are hypothetical and should be regarded as examples, not as suggestions for specific parameter settings.
!

! Site-specific AUTOGEN data file. In an OpenVMS Cluster
! where a common system disk is being used, this file
! should reside in SYS$SPECIFIC:[SYSEXE], not a common
! system directory.
!
! Add modifications that you want to make to AUTOGEN's
! hardware configuration data, system parameter
! calculations, and page, swap, and dump file sizes
! to the bottom of this file.
SCSNODE="NODE01"
SCSSYSTEMID=99999
NISCS_LOAD_PEA0=1
VAXCLUSTER=2
MSCP_LOAD=1
MSCP_SERVE_ALL=1
ALLOCLASS=1
TAPE_ALLOCLASS=1
2 Invoke AUTOGEN to set the system parameter values:
$ @SYS$UPDATE:AUTOGEN start-phase end-phase

3 Shut down and reboot the entire cluster in order for the new values to take effect.
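The start-phase and end-phase arguments select which AUTOGEN phases run. As a sketch (the phase names are standard AUTOGEN phases, but the combination shown is only one common choice), an invocation that reads MODPARAMS.DAT, recalculates and sets the parameters, and then reboots the node is:

```
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
```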

6.2.2.2 Node Allocation Class Example With a DSA Disk and Tape

Figure 6-4 shows a DSA disk and tape that are dual pathed between two computers.

Figure 6-4 Disk and Tape Dual Pathed Between Computers


In this configuration:

  • URANUS and NEPTUN access the disk either locally or through the other computer's MSCP server.
  • When satellites ARIEL and OBERON access $1$DGA8, a path is made through either URANUS or NEPTUN.
  • If, for example, the node URANUS has been shut down, the satellites can access the devices through NEPTUN. When URANUS reboots, access is available through either URANUS or NEPTUN.

6.2.2.3 Node Allocation Class Example With Mixed Interconnects

Figure 6-5 shows how device names are typically specified in a mixed-interconnect cluster. This figure also shows how relevant system parameter values are set for each FC computer.

Figure 6-5 Device Names in a Mixed-Interconnect Cluster


In this configuration:

  • A disk and a tape are dual pathed to the HSG or HSV subsystems named VOYGR1 and VOYGR2; these subsystems are connected to JUPITR, SATURN, URANUS, and NEPTUN through the star coupler.
  • The MSCP and TMSCP servers are loaded on JUPITR and NEPTUN (MSCP_LOAD = 1, TMSCP_LOAD = 1) and the ALLOCLASS and TAPE_ALLOCLASS parameters are set to the same value (1) on these computers and on both HSG or HSV subsystems.

Note: For optimal availability, two or more FC connected computers can serve HSG or HSV devices to the cluster.

6.2.2.4 Node Allocation Classes and RAID Array 210 and 230 Devices

If you have RAID devices connected to StorageWorks RAID Array 210 or 230 subsystems, you might experience device-naming problems when running in a cluster environment if nonzero node allocation classes are used. In this case, the RAID devices will be named $n$DRcu, where n is the (nonzero) node allocation class, c is the controller letter, and u is the unit number.

If multiple nodes in the cluster have the same (nonzero) node allocation class and these same nodes have RAID controllers, then RAID devices that are distinct might be given the same name (for example, $1$DRA0). This problem can lead to data corruption.

To prevent such problems, use the DR_UNIT_BASE system parameter, which causes the DR devices to be numbered sequentially, starting with the DR_UNIT_BASE value that you specify. For example, if the node allocation class is 1, the controller letter is A, and you set DR_UNIT_BASE on one cluster member to 10, the first device name generated by the RAID controller will be $1$DRA10, followed by $1$DRA11, $1$DRA12, and so forth.

To ensure unique DR device names, set the DR_UNIT_BASE number on each cluster member so that the resulting device numbers do not overlap. For example, you can set DR_UNIT_BASE on three cluster members to 10, 20, and 30 respectively. As long as each cluster member has 10 or fewer devices, the DR device numbers will be unique.
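For instance, the three-member scheme just described might appear in each member's MODPARAMS.DAT as follows (node names and values are illustrative only):

```
! In NODE01's MODPARAMS.DAT: RAID devices numbered from 10
DR_UNIT_BASE = 10
! In NODE02's MODPARAMS.DAT: RAID devices numbered from 20
DR_UNIT_BASE = 20
! In NODE03's MODPARAMS.DAT: RAID devices numbered from 30
DR_UNIT_BASE = 30
```

After editing MODPARAMS.DAT, run AUTOGEN on each member so that the new values take effect.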

6.2.3 Reasons for Using Port Allocation Classes

When the node allocation class is nonzero, it becomes the device name prefix for all attached devices, whether the devices are on a shared interconnect or not. To ensure unique names within a cluster, it is necessary for the ddcu part of the disk device name (for example, DKB0) to be unique within an allocation class, even if the device is on a private bus.

This constraint is relatively easy to overcome for DIGITAL Storage Architecture (DSA) devices, because a system manager can select from a large unit number space to ensure uniqueness. The constraint is more difficult to manage for other device types, such as SCSI devices whose controller letter and unit number are determined by the hardware configuration.

For example, in the configuration shown in Figure 6-6, each system has a private SCSI bus with adapter letter A. To obtain unique names, the unit numbers must be different. This constrains the configuration to a maximum of 8 devices on the two buses (or 16 if wide addressing can be used on one or more of the buses). This can result in empty StorageWorks drive bays and in a reduction of the system's maximum storage capacity.

Figure 6-6 SCSI Device Names Using a Node Allocation Class


6.2.3.1 Constraint of the SCSI Controller Letter in Device Names

The SCSI device name is determined in part by the SCSI controller through which the device is accessed (for example, B in DKBn). Therefore, to ensure that each node uses the same name for each device, all SCSI controllers attached to a shared SCSI bus must have the same OpenVMS device name. In Figure 6-6, each host is attached to the shared SCSI bus by controller PKB.

This requirement can make configuring a shared SCSI bus difficult, because a system manager has little or no control over the assignment of SCSI controller device names. It is particularly difficult to match controller letters on different system types when one or more of the systems have:

  • Built-in SCSI controllers that are not supported in SCSI clusters
  • Long internal cables that make some controllers inappropriate for SCSI clusters

6.2.3.2 Constraints Removed by Port Allocation Classes

The port allocation class feature has two major benefits:

  • A system manager can specify an allocation class value that is specific to a port rather than nodewide.
  • When a port has a nonzero port allocation class, the controller letter in the device name that is accessed through that port is always the letter A.

Using port allocation classes for naming SCSI, IDE, floppy disk, and PCI RAID controller devices removes the configuration constraints described in Section 6.2.2.4, in Section 6.2.3, and in Section 6.2.3.1. You do not need to use the DR_UNIT_BASE system parameter recommended in Section 6.2.2.4. Furthermore, each bus can be given its own unique allocation class value, so the ddcu part of the disk device name (for example, DKB0) does not need to be unique across buses. Moreover, controllers with different device names can be attached to the same bus, because the disk device names no longer depend on the controller letter.

Figure 6-7 shows the same configuration as Figure 6-6, with two additions: a host named CHUCK and an additional disk attached to the lower left SCSI bus. Port allocation classes are used in the device names in this figure. A port allocation class of 116 is used for the SCSI interconnect that is shared, and port allocation class 0 is used for the SCSI interconnects that are not shared. By using port allocation classes in this configuration, you can do what was not allowed previously:

  • Attach an adapter with a name (PKA) that differs from the name of the other adapters (PKB) attached to the shared SCSI interconnect, as long as that port has the same port allocation class (116 in this example).
  • Use two disks with the same controller name and number (DKA300) because each disk is attached to a SCSI interconnect that is not shared.

Figure 6-7 Device Names Using Port Allocation Classes


6.2.4 Specifying Port Allocation Classes

A port allocation class is a designation for all ports attached to a single interconnect. It replaces the node allocation class in the device name.

The three types of port allocation classes are:

  • Port allocation classes of 1 to 32767 for devices attached to a multihost interconnect or, if desired, a single-host interconnect
  • Port allocation class 0 for devices attached to a single-host interconnect
  • Port allocation class -1 when no port allocation class is in effect

Each type has its own naming rules.

6.2.4.1 Port Allocation Classes for Devices Attached to a Multi-Host Interconnect

The following rules pertain to port allocation classes for devices attached to a multihost interconnect:

  1. The valid range of port allocation classes is 1 through 32767.
  2. When using port allocation classes, the controller letter in the device name is always A, regardless of the actual controller letter. The $GETDVI item code DVI$_DISPLAY_DEVNAM displays the actual port name.
    Note that it is now more important to use fully specified names (for example, $101$DKA100 or ABLE$DKA100) rather than abbreviated names (such as DK100), because a system can have multiple DKA100 disks.
  3. Each port allocation class must be unique within a cluster.
  4. A port allocation class cannot duplicate the value of another node's tape or disk node allocation class.
  5. Each node for which MSCP serves a device should have the same nonzero allocation class value.
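As an illustration of rule 2, the actual port name behind a device's uniform controller letter A can be retrieved from DCL with the F$GETDVI lexical function, which accepts the same item codes as $GETDVI (the device name here is hypothetical, and this assumes the DISPLAY_DEVNAM item code is available to F$GETDVI on your OpenVMS version):

```
$ WRITE SYS$OUTPUT F$GETDVI("$101$DKA100","DISPLAY_DEVNAM")
```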

Examples of device names that use this type of port allocation class are shown in Table 6-2.

Table 6-2 Examples of Device Names with Port Allocation Classes 1-32767
Device Name Description
$101$DKA0 The port allocation class is 101; DK represents the disk device category, A is the controller name, and 0 is the unit number.
$147$DKA0 The port allocation class is 147; DK represents the disk device category, A is the controller name, and 0 is the unit number.

6.2.4.2 Port Allocation Class 0 for Devices Attached to a Single-Host Interconnect

The following rules pertain to port allocation class 0 for devices attached to a single-host interconnect:

  1. Port allocation class 0 does not become part of the device name. Instead, the name of the node to which the device is attached becomes the first part of the device name.
  2. The controller letter in the device name remains the designation of the controller to which the device is attached. (It is not changed to A as it is for port allocation classes greater than zero.)

Examples of device names that use port allocation class 0 are shown in Table 6-3.

Table 6-3 Examples of Device Names With Port Allocation Class 0
Device Name Description
ABLE$DKD100 ABLE is the name of the node to which the device is attached. D is the designation of the controller to which it is attached, not A as it would be for a nonzero port allocation class. The unit number of this device is 100. The port allocation class of 0 is not included in the device name.
BAKER$DKC200 BAKER is the name of the node to which the device is attached, C is the designation of the controller to which it is attached, and 200 is the unit number. The port allocation class of $0$ is not included in the device name.

6.2.4.3 Port Allocation Class -1

The designation of port allocation class -1 means that a port allocation class is not being used. Instead, a node allocation class is used. The controller letter remains its predefined designation. (It is assigned by OpenVMS, based on the system configuration. It is not affected by a node allocation class.)

6.2.4.4 How to Implement Port Allocation Classes

Port allocation classes were introduced in OpenVMS Alpha Version 7.1, with support in OpenVMS VAX: VAX computers can serve disks connected to Alpha systems that use port allocation classes in their names.

To implement port allocation classes, you must do the following:

  • Enable the use of port allocation classes.
  • Assign one or more port allocation classes.
  • At a minimum, reboot the nodes on the shared SCSI bus.

Enabling the Use of Port Allocation Classes

To enable the use of port allocation classes, you must set the SYSGEN parameter DEVICE_NAMING to 1. The default setting for this parameter is 0. In addition, the SCSSYSTEMIDH system parameter must be set to 0; check to make sure that it is.
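One way to set both parameters, consistent with the MODPARAMS.DAT approach shown in Section 6.2.2.1, is to add the following lines to each affected node's MODPARAMS.DAT and then run AUTOGEN (a sketch only; SCSSYSTEMIDH is normally zero already, so verify it before changing anything):

```
! Enable port allocation class device naming
DEVICE_NAMING = 1
! Must be zero for port allocation classes to work
SCSSYSTEMIDH = 0
```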

Assigning Port Allocation Classes

You can assign one or more port allocation classes with the OpenVMS Cluster configuration procedure, CLUSTER_CONFIG.COM (or CLUSTER_CONFIG_LAN.COM).

If it is not possible to use CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM to assign port allocation classes (for example, if you are booting a private system disk into an existing cluster), you can use the new SYSBOOT SET/CLASS command.

The following example shows how to use the SYSBOOT SET/CLASS command to assign port allocation class 152 to port PKB.


SYSBOOT> SET/CLASS PKB 152 

The SYSINIT process ensures that this new name is used in successive boots.

To deassign a port allocation class, enter the port name without a class number. For example:


SYSBOOT> SET/CLASS PKB 

The mapping of ports to allocation classes is stored in SYS$SYSTEM:SYS$DEVICES.DAT, a standard text file. You use the CLUSTER_CONFIG.COM (or CLUSTER_CONFIG_LAN.COM) command procedure or, in special cases, SYSBOOT to change SYS$DEVICES.DAT.
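Entries in SYS$DEVICES.DAT map a port to its allocation class and take roughly the following form (the node and port names are hypothetical; inspect your own file rather than relying on this exact syntax):

```
[Port ABLE$PKB]
Allocation Class = 152
```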

6.2.4.5 Clusterwide Reboot Requirements for SCSI Interconnects

Changing a device's allocation class changes the device name. A clusterwide reboot ensures that all nodes see the device under its new name, which in turn means that the normal device and file locks remain consistent.

Rebooting an entire cluster when a device name changes is not mandatory. You may be able to reboot only the nodes that share the SCSI bus, as described in the following steps. The conditions under which you can do this and the results that follow are also described.

  1. Dismount the devices whose names have changed from all nodes.
    This is not always possible. In particular, you cannot dismount a disk on nodes where it is the system disk. If the disk is not dismounted, a subsequent attempt to mount the same disk using the new device name will fail with the following error:


    %MOUNT-F-VOLALRMNT, another volume of same label already mounted 
    

    Therefore, you must reboot any node that cannot dismount the disk.
  2. Reboot all nodes connected to the SCSI bus.
    Before you reboot any of these nodes, make sure the disks on the SCSI bus are dismounted on the nodes not rebooting.

    Note

    OpenVMS ensures that a node cannot boot if the result would be a SCSI bus whose device naming differs from that of another node already accessing the same bus. (This check is independent of the dismount check in step 1.)

    After the nodes that are connected to the SCSI bus reboot, the device exists with its new name.
  3. Mount the devices systemwide or clusterwide.
    If no other node has the disk mounted under the old name, you can mount the disk systemwide or clusterwide using its new name. The new device name will be seen on all nodes running compatible software, and these nodes can also mount the disk and access it normally.
    Nodes that have not rebooted still see the old device name as well as the new device name. However, the old device name cannot be used; the device, when accessed by the old name, is off line. The old name persists until the node reboots.
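The dismount and remount steps above might look like the following DCL sequence, with hypothetical old ($1$DKB300) and new ($152$DKA300) device names:

```
$ ! Step 1: on each node that can do so, dismount the disk under its old name
$ DISMOUNT/CLUSTER $1$DKB300:
$ ! Step 2: reboot the nodes connected to the shared SCSI bus
$ ! Step 3: mount the disk clusterwide under its new name
$ MOUNT/CLUSTER $152$DKA300: DATA_VOL DATA_VOL
```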

6.3 MSCP and TMSCP Served Disks and Tapes

The MSCP server and the TMSCP server make locally connected disks and tapes available to all cluster members. Locally connected disks and tapes are not automatically cluster accessible. Access to these devices is restricted to the local computer unless you explicitly set them up as cluster accessible using the MSCP server for disks or the TMSCP server for tapes.

6.3.1 Enabling Servers

To make a disk or tape accessible to all OpenVMS Cluster computers, the MSCP or TMSCP server must be:

  • Loaded on the local computer, as described in Table 6-4
  • Made functional by setting the MSCP and TMSCP system parameters, as described in Table 6-5

Table 6-4 MSCP_LOAD and TMSCP_LOAD Parameter Settings
Parameter Value Meaning
MSCP_LOAD 0 Do not load the MSCP server. This is the default.
  1 Load the MSCP server with attributes specified by the MSCP_SERVE_ALL parameter using the default CPU load capacity.
  >1 Load the MSCP server with attributes specified by the MSCP_SERVE_ALL parameter. Use the MSCP_LOAD value as the CPU load capacity.
TMSCP_LOAD 0 Do not load the TMSCP server and do not serve any tapes (default value).
  1 Load the TMSCP server and serve all available tapes, including all local tapes and all multihost tapes with a matching TAPE_ALLOCLASS value.

Table 6-5 summarizes the system parameter values you can specify for MSCP_SERVE_ALL and TMSCP_SERVE_ALL to configure the MSCP and TMSCP servers. Initial values are determined by your responses when you execute the installation or upgrade procedure or when you execute the CLUSTER_CONFIG.COM command procedure described in Chapter 8 to set up your configuration.

Starting with OpenVMS Version 7.2, the serving types are implemented as a bit mask. To specify the type of serving your system will perform, locate the type you want in Table 6-5 and specify its value. For some systems, you may want to specify two serving types, such as serving the system disk and serving locally attached disks. To specify such a combination, add the values of each type, and specify the sum.
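For example, using the bit values in Table 6-5, a system that should serve its system disk (bit 2, value 4) and its other locally attached disks (bit 1, value 2) would specify the sum in MODPARAMS.DAT:

```
! 4 (serve system disk) + 2 (serve locally attached disks) = 6
MSCP_SERVE_ALL = 6
```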

Note

In a mixed-version cluster that includes any systems running OpenVMS Version 7.1-x or earlier, serving all available disks is restricted to serving all disks whose allocation class matches the system's node allocation class (pre-Version 7.2 meaning). To specify this type of serving, use the value 9 (which sets bit 0 and bit 3).

Table 6-5 MSCP_SERVE_ALL and TMSCP_SERVE_ALL Parameter Settings
Parameter Bit Value When Set Meaning
MSCP_SERVE_ALL 0 1 Serve all available disks (locally attached and those connected to HSx and DSSI controllers). Disks with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are also served if bit 3 is not set.
  1 2 Serve locally attached (non-HSx and non-DSSI) disks. The server does not monitor its I/O traffic and does not participate in load balancing.
  2 4 Serve the system disk. This is the default setting. This setting is important when other nodes in the cluster rely on this system being able to serve its system disk. This setting prevents obscure contention problems that can occur when a system attempts to complete I/O to a remote system disk whose system has failed. For more information, see Section 6.3.1.1.
  3 8 Restrict the serving specified by bit 0. All disks except those with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are served.

This is pre-Version 7.2 behavior. If your cluster includes systems running OpenVMS Version 7.1-x or earlier, and you want to serve all available disks, you must specify 9, the result of setting this bit and bit 0.

  4 16 By default, bit 4 is not set, so the DUDRIVER accepts devices with unit numbers greater than 9999. On the client side, if bit 4 is set (10000 binary) in the MSCP_SERVE_ALL parameter, the client rejects devices with unit numbers greater than 9999, retaining the earlier behavior.
TMSCP_SERVE_ALL 0 1 Serve all available tapes (locally attached and those connected to HSx and DSSI controllers). Tapes with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are also served if bit 3 is not set.
  1 2 Serve locally attached (non-HSx and non-DSSI) tapes.
  3 8 Restrict the serving specified by bit 0. Serve all tapes except those with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter).

This is pre-Version 7.2 behavior. If your cluster includes systems running OpenVMS Version 7.1-x or earlier, and you want to serve all available tapes, you must specify 9, the result of setting this bit and bit 0.

  4 16 By default, bit 4 is not set, so the TUDRIVER accepts devices with unit numbers greater than 9999. On the client side, if bit 4 is set (10000 binary) in the TMSCP_SERVE_ALL parameter, the client rejects devices with unit numbers greater than 9999, retaining the earlier behavior.

Although the serving types are now implemented as a bit mask, the values of 0, 1, and 2, specified by bit 0 and bit 1, retain their original meanings. These values are shown in the following table:

Value Description
0 Do not serve any disks (tapes). This is the default.
1 Serve all available disks (tapes).
2 Serve only locally attached (non-HSx and non-DSSI) disks (tapes).

