HP OpenVMS Cluster Systems
2.4.2 Losing a Member

Table 2-3 describes the phases of a transition caused by the failure of a current OpenVMS Cluster member.
2.5 OpenVMS Cluster Membership

OpenVMS Cluster systems based on a LAN or IP network use a cluster group number and a cluster password to allow multiple independent OpenVMS Cluster systems to coexist on the same extended LAN or IP network and to prevent unauthorized computers from accidentally gaining access to a cluster.
2.5.1 Cluster Group Number

The cluster group number uniquely identifies each OpenVMS Cluster system on a LAN or IP network, or communicating through a common memory region (that is, using SMCI). The group number must be in the range 1 to 4095 or 61440 to 65535.

Rule: If you plan to have more than one OpenVMS Cluster system on a LAN or an IP network, you must coordinate the assignment of cluster group numbers among system managers.

2.5.2 Cluster Password

The cluster password prevents an unauthorized computer that has obtained the cluster group number from joining the cluster. The password must be from 1 to 31 characters; valid characters are letters, numbers, the dollar sign ($), and the underscore (_).

2.5.3 Location

The cluster group number and cluster password are maintained in the cluster authorization file, SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT. This file is created during installation of the operating system if you indicate that you want to set up a cluster that uses shared memory or the LAN. The installation procedure then prompts you for the cluster group number and password.
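You can display or change the cluster group number and password with the SYSMAN utility. The following is a minimal sketch; the group number and password shown are illustrative, and a change takes effect only after the entire cluster is rebooted:

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/GROUP_NUMBER=4000/PASSWORD=XYZ123_CLUSTER
SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
SYSMAN> EXIT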
Reference: For information about OpenVMS Cluster group data in the CLUSTER_AUTHORIZE.DAT file, see Sections 8.4 and 10.8.

2.5.4 Example

If all nodes in the OpenVMS Cluster do not have the same cluster password, an error report is logged in the error log file.
2.6 Synchronizing Cluster Functions by the Distributed Lock Manager

The distributed lock manager is an OpenVMS feature for synchronizing functions required by the distributed file system, the distributed job controller, device allocation, user-written OpenVMS Cluster applications, and other OpenVMS products and software components. The distributed lock manager uses the connection manager and SCS to communicate information between OpenVMS Cluster computers.

2.6.1 Distributed Lock Manager Functions

The distributed lock manager synchronizes access to shared clusterwide resources such as devices, files, and records in files, and it provides this synchronization to applications through the $ENQ and $DEQ system services.
2.6.2 System Management of the Lock Manager

The lock manager is fully automated and usually requires no explicit system management. However, the LOCKDIRWT and LOCKRMWT system parameters can be used to adjust the distribution of activity and control of lock resource trees across the cluster.

A lock resource tree is an abstract entity on which locks can be placed. Multiple lock resource trees can exist within a cluster. For every resource tree, there is one node known as the directory node and another node known as the lock resource master node.

A lock resource master node controls a lock resource tree and is aware of all the locks on the lock resource tree. All locking operations on the lock tree must be sent to the resource master. These locks can come from any node in the cluster. All other nodes in the cluster know only about their own locks on the tree. Furthermore, all nodes in the cluster have many locks on many different lock resource trees, which can be mastered on different nodes. When a new lock resource tree is created, the directory node must first be queried to determine whether a resource master already exists.

The LOCKDIRWT parameter determines the likelihood that a node is chosen as the directory node for a lock resource tree: the higher a node's LOCKDIRWT setting, the higher the probability that it will be the directory node for a given lock resource tree. For most configurations, large computers and boot nodes perform optimally when LOCKDIRWT is set to 1, and satellite nodes perform optimally with LOCKDIRWT set to 0. These values are set automatically by the CLUSTER_CONFIG.COM procedure. Nodes with a LOCKDIRWT of 0 will not be the directory node for any resources unless all nodes in the cluster have a LOCKDIRWT of 0. In some circumstances, you may want to change the values of the LOCKDIRWT parameter across the cluster to control the extent to which nodes participate as directory nodes.

LOCKRMWT influences which node is chosen to remaster a lock resource tree. Because a node that masters a lock resource tree has a performance advantage (no communication is required when it performs a locking operation), the lock resource manager supports remastering lock trees to other nodes in the cluster. Remastering a lock resource tree means designating another node in the cluster as the lock resource master for that lock resource tree and moving the lock resource tree to it. A node is eligible to be a lock resource master node if it has locks on that lock resource tree. The selection of the new lock resource master node from the eligible nodes is based on each node's LOCKRMWT system parameter setting and on each node's locking activity. LOCKRMWT can contain a value between 0 and 10; the default is 5. In general, a node with a higher LOCKRMWT value is favored as the resource master, and among nodes with intermediate values, greater locking activity on the tree improves a node's chances of being chosen.
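LOCKRMWT can be examined and adjusted on a running system with SYSGEN, as in the following sketch (the value 7 is purely illustrative):

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE          ! operate on the active (in-memory) parameter set
SYSGEN> SHOW LOCKRMWT       ! display the current setting
SYSGEN> SET LOCKRMWT 7      ! illustrative value favoring this node as a master
SYSGEN> WRITE ACTIVE        ! takes effect immediately for dynamic parameters
SYSGEN> EXIT

To keep such a change across reboots, also add the parameter to SYS$SYSTEM:MODPARAMS.DAT so that AUTOGEN preserves it.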
In most cases, maintaining the default value of 5 for LOCKRMWT is appropriate, but there may be cases where assigning some nodes a higher or lower LOCKRMWT is useful for determining which nodes master a lock tree. The LOCKRMWT parameter is dynamic; hence, it can be adjusted if necessary.

2.6.3 Large-Scale Locking Applications

The Enqueue process limit (ENQLM), which is set in the SYSUAF.DAT file and which controls the number of locks that a process can own, can be adjusted to meet the demands of large-scale databases and other server applications. Prior to OpenVMS Version 7.1, the limit was 32767. This limit was removed to enable the efficient operation of large-scale databases and other server applications. A process can now own up to 16,776,959 locks, the architectural maximum. By setting ENQLM in SYSUAF.DAT to 32767 (using the Authorize utility), the lock limit is automatically extended to the maximum of 16,776,959 locks. $CREPRC can pass large quotas to the target process if it is initialized from a process with the SYSUAF ENQLM quota of 32767.

Reference: See the HP OpenVMS Programming Concepts Manual for additional information about the distributed lock manager and resource trees. See the HP OpenVMS System Manager's Manual for more information about Enqueue Quota.

2.7 Resource Sharing

Resource sharing in an OpenVMS Cluster system is enabled by the distributed file system, RMS, and the distributed lock manager.

2.7.1 Distributed File System

The OpenVMS Cluster distributed file system allows all computers to share mass storage and files. The distributed file system provides the same access to disks, tapes, and files across the OpenVMS Cluster that is provided on a standalone computer.

2.7.2 RMS and the Distributed Lock Manager

The distributed file system and OpenVMS Record Management Services (RMS) use the distributed lock manager to coordinate clusterwide file access. RMS files can be shared to the record level. Almost any disk or tape device, wherever it is connected in the cluster, can be made available to the entire OpenVMS Cluster system.
All cluster-accessible devices appear as if they are connected to every computer.

2.8 Disk Availability

Locally connected disks can be served across an OpenVMS Cluster by the MSCP server.

2.8.1 MSCP Server

The MSCP server makes locally connected disks available across the cluster.
In conjunction with the disk class driver (DUDRIVER), the MSCP server implements the storage server portion of the MSCP protocol on a computer, allowing the computer to function as a storage controller. The MSCP protocol defines conventions for the format and timing of messages sent and received for certain families of mass storage controllers and devices designed by HP. The MSCP server decodes and services MSCP I/O requests sent by remote cluster nodes.

Note: The MSCP server is not used by a computer to access files on its own locally connected disks.

2.8.2 Device Serving

Once a device is set up to be served, it can be accessed by any node in the cluster, just as if it were connected locally.
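As a quick check, the DCL command SHOW DEVICES can confirm what a node is serving and how a served device appears from other nodes. In this sketch, the device name $1$DGA100 is hypothetical:

$ SHOW DEVICES/SERVED            ! list devices the local node is serving
$ SHOW DEVICES/FULL $1$DGA100:   ! examine a served disk from any cluster node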
2.8.3 Enabling the MSCP Server

The MSCP server is controlled by the MSCP_LOAD and MSCP_SERVE_ALL system parameters. The values of these parameters are set initially by answers to questions asked during the OpenVMS installation procedure (described in Section 8.4) or during the CLUSTER_CONFIG.COM procedure (described in Chapter 8).
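For example, to load the MSCP server and have it serve all available disks, you might add entries such as the following to SYS$SYSTEM:MODPARAMS.DAT and then run AUTOGEN. This is a sketch; choose the values appropriate to your configuration, and note that these parameters take effect only after a reboot:

MSCP_LOAD = 1        ! load the MSCP server at boot time
MSCP_SERVE_ALL = 1   ! serve all available disks to the cluster

$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS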
Reference: See Section 6.3 for more information about setting system parameters for MSCP serving.

2.9 Tape Availability

Locally connected tapes can be served across an OpenVMS Cluster by the TMSCP server.

2.9.1 TMSCP Server

The TMSCP server makes locally connected tapes available across the cluster.
The TMSCP server implements the TMSCP protocol, which is used to communicate with a controller for TMSCP tapes. In conjunction with the tape class driver (TUDRIVER), the TMSCP protocol is implemented on a processor, allowing the processor to function as a storage controller. The processor submits I/O requests to locally accessed tapes and accepts I/O requests from any node in the cluster. In this way, the TMSCP server makes locally connected tapes available to all nodes in the cluster. The TMSCP server can also make HSG and HSV tapes accessible to OpenVMS Cluster satellites.

2.9.2 Enabling the TMSCP Server

The TMSCP server is controlled by the TMSCP_LOAD system parameter. The value of this parameter is set initially by answers to questions asked during the OpenVMS installation procedure (described in Section 4.2.3) or during the CLUSTER_CONFIG.COM procedure (described in Section 8.4). By default, the setting of the TMSCP_LOAD parameter neither loads the TMSCP server nor serves any tapes.

2.10 Queue Availability

The distributed queue manager makes queues available across the cluster, so that users on any OpenVMS Cluster computer can submit batch and print jobs to queues that execute on any computer in the cluster and the work load is distributed across cluster nodes.
The distributed queue manager uses the distributed lock manager to signal other computers in the OpenVMS Cluster to examine the batch and print queue jobs to be processed.
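As an illustration, a generic batch queue can feed execution queues on several cluster members, so that jobs submitted from any node run wherever capacity is available. In this sketch, the node names NODEA and NODEB and all queue names are hypothetical, and the clusterwide queue manager is assumed to be already started:

$ INITIALIZE/QUEUE/BATCH/ON=NODEA:: NODEA_BATCH
$ INITIALIZE/QUEUE/BATCH/ON=NODEB:: NODEB_BATCH
$ INITIALIZE/QUEUE/BATCH/GENERIC=(NODEA_BATCH,NODEB_BATCH) CLUSTER_BATCH
$ START/QUEUE NODEA_BATCH
$ START/QUEUE NODEB_BATCH
$ START/QUEUE CLUSTER_BATCH
$ SUBMIT/QUEUE=CLUSTER_BATCH MYJOB.COM   ! may execute on either node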