Guidelines for OpenVMS Cluster Configurations

7.7 Booting on a Fibre Channel Storage Device on OpenVMS Integrity server Systems

This section describes how to boot from a Fibre Channel (FC) storage device on OpenVMS Integrity server systems. FC booting is supported on all storage arrays that are supported on OpenVMS systems.

OpenVMS Integrity servers Version 8.2 supports the HP A6826A, a PCI-X dual-channel, 2-Gb Fibre Channel host bus adapter (HBA), and its variants. The A6826A HBA requires the following software and firmware:

  • EFI driver Version 1.40
  • RISC firmware Version 3.03.001

Fibre Channel device booting supports point-to-point topology only; there is no plan to support FC arbitrated loop topology.

7.7.1 Installing the Bootable Firmware

Before you can boot from an FC device on OpenVMS Integrity server systems, you must update the bootable EFI firmware in the flash memory of the FC HBA.

To flash the memory of the FC HBA, update the firmware of the following components:

  • EFI driver firmware
  • RISC firmware
  • NVRAM resident in the FLASH ROM on the HBA

To update the firmware, use the efiutil.efi utility, which is located on the IPF Offline Diagnostics and Utilities CD.
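For example, assuming the CD maps to fs0: on your system (the file-system name can differ), you can change to the utility's directory from the EFI Shell and display the adapters before updating anything:

    Shell> fs0:
    fs0:\> cd efi\hp\tools\io_cards\fc2p2g
    fs0:\efi\hp\tools\io_cards\fc2p2g> efiutil info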

To perform these firmware updates, complete the following steps:

  1. Insert the IPF Offline Diagnostics and Utilities CD. You can then update the firmware in either of the following ways:
  2. To flash all adapters found on the system in batch mode, select the EFI Shell from the Boot Options list on the EFI Boot Manager menu.
    At the EFI console, enter the following commands (where fs0: represents the bootable partition on the CD-ROM):
    1. fs0:\efi\hp\tools\io_cards\fc2p2g\efiutil all info
      This command displays the current EFI driver and RISC firmware versions on all adapters in the system.
    2. fs0:\efi\hp\tools\io_cards\fc2p2g\efiutil all efi_write
      This command updates the EFI driver.
    3. fs0:\efi\hp\tools\io_cards\fc2p2g\efiutil all risc_fw_write
      This command updates the RISC firmware.
    4. fs0:\efi\hp\tools\io_cards\fc2p2g\efiutil all nvram_write
      This command updates the NVRAM.
    5. fs0:\> reset
      This command resets the system.
  3. Alternatively, you can flash each adapter separately by specifying the adapter ID and firmware file name to write to the ROM, as follows:
    1. Boot the entry that corresponds to the DVD-ROM from the Boot Options list; or specify the CD Media by selecting the "Boot Option Maintenance Menu," then selecting "Boot from a File," then selecting "Removable Media Boot."
    2. From the CD main menu, select "View I/O Cards FW Update and Configuration Utilities, and MCA Menu," then select "2Gb Fibre Channel HBA Utility." This invokes the efiutil CLI utility and displays a list of fibre channel adapters found in the system.
    3. Select the fibre channel adapter by specifying the index number. Update the EFI driver, RISC firmware driver, and the NVRAM. Repeat this step until all adapters have been updated. For example:


      efiutil.efi> adapter 
       Adapter index number [0]? 
      efiutil.efi> efi_write 
      efiutil.efi> risc_fw_write 
      efiutil.efi> nvram_write 
      
    4. Exit the efiutil CLI by typing quit. This returns you to the "I/O Cards Firmware and Configuration Menu." Type q to return to the Main Menu. From the Main Menu, select X to exit and reboot the system.

7.7.2 Checking the Firmware Version

You can check the firmware version in two ways: from the console during system initialization, or by using the efiutil utility:

  • The firmware version is shown in the booting console message that is displayed during system initialization, as shown in the following example:


    HP 2 Port 2Gb Fibre Channel Adapter (driver 1.40, firmware 3.03.001) 
    
  • The firmware version number is also shown in the display of the efiutil info command:


    fs0:\efi\hp\tools\io_cards\fc2p2g\efiutil info 
     
    Fibre Channel Card Efi Utility  1.20  (1/30/2003) 
     
     2 Fibre Channel Adapters found: 
     
     Adapter        Path                  WWN           Driver (Firmware) 
      A0  Acpi(000222F0,200)/Pci(1|0)  50060B00001CF2DC  1.40  (3.03.001) 
      A1  Acpi(000222F0,200)/Pci(1|1)  50060B00001CF2DE  1.40  (3.03.001) 
     
    

7.7.3 Configuring the Boot Device Paths on the FC

To configure booting on a Fibre Channel storage device, HP recommends that you use the OpenVMS Integrity servers Boot Manager utility (BOOT_OPTIONS.COM) after completing the installation of HP OpenVMS. Follow these steps:

  1. From the OpenVMS Installation Menu, choose Option 7 "Execute DCL commands and procedures" to access the DCL prompt.
  2. At the DCL prompt, enter the following command to invoke the OpenVMS Integrity servers Boot Manager utility:


    $$$ @SYS$MANAGER:BOOT_OPTIONS 
    
  3. When the utility is invoked, the main menu is displayed. To add your system disk as a boot option, enter 1 at the prompt, as shown in the following example:


    OpenVMS Integrity server Boot Manager Boot Options List Management Utility 
     
    (1) ADD an entry to the Boot Options list 
    (2) DISPLAY the Boot Options list 
    (3) REMOVE an entry from the Boot Options list 
    (4) MOVE the position of an entry in the Boot Options list 
    (5) VALIDATE boot options and fix them as necessary 
    (6) Modify Boot Options TIMEOUT setting 
     
    (B) Set to operate on the Boot Device Options list 
    (D) Set to operate on the Dump Device Options list 
    (G) Set to operate on the Debug Device Options list 
     
    (E) EXIT from the Boot Manager utility 
     
    You can also enter Ctrl-Y at any time to abort this utility 
     
    Enter your choice: 1
    

    Note

    While using this utility, you can change a response made to an earlier prompt by typing the "^" character as many times as needed. To abort and return to the DCL prompt, enter Ctrl/Y.
  4. The utility prompts you for the device name. Enter the system disk device you are using for this installation and press Return. In the following example, the device is the multipath Fibre Channel device $1$DGA1:


    Enter the device name (enter "?" for a list of devices): $1$DGA1:
    
  5. The utility prompts you for the position you want your entry to take in the EFI boot option list. Enter 1 as in the following example:


    Enter the desired position number (1,2,3,,,) of the entry. 
    To display the Boot Options list, enter "?" and press Return. 
    Position [1]: 1
    
  6. The utility prompts you for OpenVMS boot flags. By default, no flags are set. Enter the OpenVMS flags (for example, 0,1) followed by a Return, or press Return to set no flags as in the following example:


    Enter the value for VMS_FLAGS in the form n,n. 
    VMS_FLAGS [NONE]: 
    
  7. The utility prompts you for a description to include with your boot option entry. By default, the device name is used as the description. You can enter more descriptive information as in the following example.


    Enter a short description (do not include quotation marks). 
    Description ["$1$DGA1"]: $1$DGA1 OpenVMS V8.2 System
     
    efi$bcfg: $1$dga1 (Boot0001) Option successfully added 
     
    efi$bcfg: $1$dga1 (Boot0002) Option successfully added 
     
    efi$bcfg: $1$dga1 (Boot0003) Option successfully added 
    
  8. When you have successfully added your boot option, exit from the utility by entering E at the prompt.


    Enter your choice: E
    
  9. Log out from the DCL prompt and shut down the Integrity server system.
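For example, assuming you reached the DCL prompt through menu Option 7, the LOGOUT command returns you to the OpenVMS Installation Menu, from which you can select the menu's shutdown option:

    $$$ LOGOUT       ! returns control to the OpenVMS Installation Menu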

For more information on this utility, refer to the OpenVMS System Manager's Manual, Volume 1: Essentials.

7.8 Setting Up a Storage Controller for Use with OpenVMS

The following HP storage array controllers are supported for use with OpenVMS. The manuals listed with each controller provide specific configuration information:

  • HSG60/80
    HSG80 ACS Solution Software Version 8.6 for HP OpenVMS Installation and Configuration Guide, order number AA-RH4BD-TE.
    This manual is available at the following location:


    ftp://ftp.compaq.com/pub/products/storageworks/techdoc/raidstorage/AA-RH4BD-TE.pdf 
    
  • Enterprise Virtual Array
    OpenVMS Kit V2.0 for Enterprise Virtual Array Installation and Configuration Guide, order number AA-RR03B-TE.
    This manual is available at the following location:


     
    ftp://ftp.compaq.com/pub/products/storageworks/techdoc/enterprise/AA-RR03B-TE.pdf 
    
  • HP StorageWorks Modular Smart Array 1000
    The documentation for the MSA1000 is available at the following location:


    ftp://ftp.compaq.com/pub/products/storageworks/techdoc/msa1000/ 
    
  • HP StorageWorks XP Disk Array
    Product information about HP StorageWorks XP Arrays is available at the following location:

    http://h18006.www1.hp.com/storage/xparrays.html

7.8.1 Setting Up the Device Identifier for the CCL

OpenVMS does not require a unique device identifier for the Command Console LUN (CCL) of the HSG and HSV, but some management tools may require one. HP suggests that you always define a unique device identifier, because this identifier causes a CCL device to be created that is visible through the SHOW DEVICE command. Although this device is not directly controllable on OpenVMS, you can display the multiple paths to the storage controller by using the SHOW DEVICE/FULL command and diagnose failed paths, as shown in the following example for $1$GGA3, where one of the two paths has failed.


Paver> sh dev gg /mul 
Device                  Device           Error         Current 
 Name                   Status           Count  Paths    path 
$1$GGA1:                Online               0   2/ 2  PGA0.5000-1FE1-0011-AF08 
$1$GGA3:                Online               0   1/ 2  PGA0.5000-1FE1-0011-B158 
$1$GGA4:                Online               0   2/ 2  PGA0.5000-1FE1-0015-2C58 
$1$GGA5:                Online               0   2/ 2  PGA0.5000-1FE1-0015-22A8 
$1$GGA6:                Online               0   2/ 2  PGA0.5000-1FE1-0015-2D18 
$1$GGA7:                Online               0   2/ 2  PGA0.5000-1FE1-0015-2D08 
$1$GGA9:                Online               0   2/ 2  PGA0.5000-1FE1-0007-04E3 
 
Paver> show dev /full $1$gga9: 
 
Device $1$GGA9:, device type Generic SCSI device, is online, shareable, device 
    has multiple I/O paths. 
 
    Error count                    0    Operations completed                  0 
    Owner process                 ""    Owner UIC                      [SYSTEM] 
    Owner process ID        00000000    Dev Prot    S:RWPL,O:RWPL,G:RWPL,W:RWPL 
    Reference count                0    Default buffer size                   0 
    WWID   02000008:5000-1FE1-0007-04E0 
 
  I/O paths to device              2 
  Path PGA0.5000-1FE1-0007-04E3  (PAVER), primary path, current path. 
    Error count                    0    Operations completed                  0 
  Path PGA0.5000-1FE1-0007-04E1  (PAVER). 
    Error count                    0    Operations completed                  0 

7.8.2 Setting Up the Device Identifier for Disk Devices

The device identifier for disks is appended to the string $1$DGA to form the complete device name. All disks must have unique device identifiers within a cluster. Device identifiers can range from 0 to 32767, except that a device identifier of 0 is not valid on the HSV.
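For example, on an HSG80 controller you might assign the identifier from the controller CLI (a sketch; the unit name D101 and the value 101 are illustrative, and the HSG80 ACS documentation cited in Section 7.8 is the authoritative reference):

    HSG80> SET D101 IDENTIFIER = 101

Once the identifier is assigned and the unit is visible to OpenVMS, the disk appears with the device name $1$DGA101.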

7.9 Creating a Cluster with a Shared FC System Disk

To configure nodes in an OpenVMS Cluster system, you must execute the CLUSTER_CONFIG.COM (or CLUSTER_CONFIG_LAN.COM) command procedure. (You can run either the full version, which provides more information about most prompts, or the brief version.)

For the purposes of CLUSTER_CONFIG, a shared Fibre Channel (FC) bus is treated like a shared SCSI bus, except that the allocation class parameters do not apply to FC. The rules for setting node allocation class and port allocation class values remain in effect when parallel SCSI storage devices are present in a configuration that includes FC storage devices.
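For example, before running the procedure you can check a node's current node allocation class with SYSGEN (a minimal sketch; ALLOCLASS is the system parameter that holds the node allocation class):

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> SHOW ALLOCLASS
    SYSGEN> EXIT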

To configure a new OpenVMS Cluster system, you must first enable clustering on a single, or standalone, system. Then you can add additional nodes to the cluster.

Example 7-5 shows how to enable clustering using the brief version of CLUSTER_CONFIG_LAN.COM on a standalone node called FCNOD1. At the end of the procedure, FCNOD1 reboots and forms a one-node cluster.

Example 7-6 shows how to run the brief version of CLUSTER_CONFIG_LAN.COM on FCNOD1 to add a second node, called FCNOD2, forming a two-node cluster. At the end of the procedure, the cluster is configured so that FCNOD2 boots from the same FC system disk as FCNOD1.

The following steps are common to both examples:

  1. Select the default option [1] for ADD.
  2. Answer Yes when CLUSTER_CONFIG_LAN.COM asks whether there will be a shared SCSI bus. SCSI in this context refers to FC as well as to parallel SCSI.
    The allocation class parameters are not affected by the presence of FC.
  3. Answer No when the procedure asks whether the node will be a satellite.

Example 7-5 Enabling Clustering on a Standalone FC Node

$ @CLUSTER_CONFIG_LAN BRIEF 
 
                   Cluster Configuration Procedure 
                    Executing on an Alpha System 
 
    DECnet Phase IV is installed on this node. 
 
    The LAN, not DECnet, will be used for MOP downline loading. 
    This Alpha node is not currently a cluster member 
    
 
MAIN MENU 
 
   1. ADD FCNOD1 to existing cluster, or form a new cluster. 
   2. MAKE a directory structure for a new root on a system disk. 
   3. DELETE a root from a system disk. 
   4. EXIT from this procedure. 
 
Enter choice [1]: 1 
Is the node to be a clustered node with a shared SCSI or Fibre Channel bus (Y/N)? Y 
 
    Note: 
        Every cluster node must have a direct connection to every other 
        node in the cluster.  Since FCNOD1 will be a clustered node with 
        a shared SCSI or FC bus, and Memory Channel, CI, and DSSI are not present, 
        the LAN will be used for cluster communication. 
 
Enter this cluster's group number: 511 
Enter this cluster's password: 
Re-enter this cluster's password for verification: 
 
Will FCNOD1 be a boot server [Y]? Y 
    Verifying LAN adapters in LANACP database... 
    Updating LANACP LAN server process volatile and permanent databases... 
    Note: The LANACP LAN server process will be used by FCNOD1 for boot 
          serving satellites. The following LAN devices have been found: 
    Verifying LAN adapters in LANACP database... 
 
    LAN TYPE    ADAPTER NAME    SERVICE STATUS 
    ========    ============    ============== 
    Ethernet    EWA0            ENABLED 
 
 
  CAUTION: If you do not define port allocation classes later in this 
           procedure for shared SCSI buses, all nodes sharing a SCSI bus 
           must have the same non-zero ALLOCLASS value. If multiple 
           nodes connect to a shared SCSI bus without the same allocation 
           class for the bus, system booting will halt due to the error or 
           IO AUTOCONFIGURE after boot will keep the bus offline. 
 
Enter a value for FCNOD1's ALLOCLASS parameter [0]: 5 
Does this cluster contain a quorum disk [N]? N 
    Each shared SCSI bus must have a positive allocation class value. A shared 
    bus uses a PK adapter. A private bus may use: PK, DR, DV. 
 
    When adding a node with SCSI-based cluster communications, the shared 
    SCSI port allocation classes may be established in SYS$DEVICES.DAT. 
    Otherwise, the system's disk allocation class will apply. 
 
    A private SCSI bus need not have an entry in SYS$DEVICES.DAT. If it has an 
    entry, its entry may assign any legitimate port allocation class value: 
 
       n   where n = a positive integer, 1 to 32767 inclusive 
       0   no port allocation class and disk allocation class does not apply 
      -1   system's disk allocation class applies (system parameter ALLOCLASS) 
 
    When modifying port allocation classes, SYS$DEVICES.DAT must be updated 
    for all affected nodes, and then all affected nodes must be rebooted. 
    The following dialog will update SYS$DEVICES.DAT on FCNOD1. 
 
    There are currently no entries in SYS$DEVICES.DAT for FCNOD1. 
    After the next boot, any SCSI controller on FCNOD1 will use 
    FCNOD1's disk allocation class. 
 
 
Assign port allocation class to which adapter [RETURN for none]: PKA 
Port allocation class for PKA0: 10 
 
        Port Alloclass   10    Adapter FCNOD1$PKA 
 
Assign port allocation class to which adapter [RETURN for none]: PKB 
Port allocation class for PKB0: 20 
 
        Port Alloclass   10    Adapter FCNOD1$PKA 
        Port Alloclass   20    Adapter FCNOD1$PKB 
 
  WARNING: FCNOD1 will be a voting cluster member. EXPECTED_VOTES for 
           this and every other cluster member should be adjusted at 
           a convenient time before a reboot. For complete instructions, 
           check the section on configuring a cluster in the "OpenVMS 
           Cluster Systems" manual. 
 
    Execute AUTOGEN to compute the SYSGEN parameters for your configuration 
    and reboot FCNOD1 with the new parameters. This is necessary before 
    FCNOD1 can become a cluster member. 
 
Do you want to run AUTOGEN now [Y]? Y 
 
    Running AUTOGEN -- Please wait. 
 
The system is shutting down to allow the system to boot with the 
generated site-specific parameters and installed images. 
 
The system will automatically reboot after the shutdown and the 
upgrade will be complete. 

Example 7-6 Adding a Node to a Cluster with a Shared FC System Disk

$ @CLUSTER_CONFIG_LAN BRIEF 
 
                   Cluster Configuration Procedure 
                    Executing on an Alpha System 
 
    DECnet Phase IV is installed on this node. 
 
    The LAN, not DECnet, will be used for MOP downline loading. 
    FCNOD1 is an Alpha system and currently a member of a cluster 
    so the following functions can be performed: 
 
MAIN MENU 
 
   1. ADD an Alpha node to the cluster. 
   2. REMOVE a node from the cluster. 
   3. CHANGE a cluster member's characteristics. 
   4. CREATE a duplicate system disk for FCNOD1. 
   5. MAKE a directory structure for a new root on a system disk. 
   6. DELETE a root from a system disk. 
   7. EXIT from this procedure. 
 
Enter choice [1]: 1 
 
    This ADD function will add a new Alpha node to the cluster. 
 
  WARNING: If the node being added is a voting member, EXPECTED_VOTES for 
           every cluster member must be adjusted.  For complete instructions 
           check the section on configuring a cluster in the "OpenVMS Cluster 
           Systems" manual. 
 
  CAUTION: If this cluster is running with multiple system disks and 
           common system files will be used, please, do not proceed 
           unless appropriate logical names are defined for cluster 
           common files in SYLOGICALS.COM. For instructions, refer to 
           the "OpenVMS Cluster Systems" manual. 
 
Is the node to be a clustered node with a shared SCSI or Fibre Channel bus (Y/N)? Y 
Will the node be a satellite [Y]? N 
What is the node's SCS node name? FCNOD2 
What is the node's SCSSYSTEMID number? 19.111 
    NOTE: 19.111 equates to an SCSSYSTEMID of 19567 
Will FCNOD2 be a boot server [Y]? Y 
What is the device name for FCNOD2's system root 
[default DISK$V72_SSB:]? 
What is the name of FCNOD2's system root [SYS10]? 
    Creating directory tree SYS10 ... 
    System root SYS10 created 
 
  CAUTION: If you do not define port allocation classes later in this 
           procedure for shared SCSI buses, all nodes sharing a SCSI bus 
           must have the same non-zero ALLOCLASS value. If multiple 
           nodes connect to a shared SCSI bus without the same allocation 
           class for the bus, system booting will halt due to the error or 
           IO AUTOCONFIGURE after boot will keep the bus offline. 
 
Enter a value for FCNOD2's ALLOCLASS parameter [5]: 
Does this cluster contain a quorum disk [N]? N 
Size of pagefile for FCNOD2 [RETURN for AUTOGEN sizing]? 
 
    A temporary pagefile will be created until resizing by AUTOGEN. The 
    default size below is arbitrary and may or may not be appropriate. 
 
Size of temporary pagefile [10000]? 
Size of swap file for FCNOD2 [RETURN for AUTOGEN sizing]? 
 
    A temporary swap file will be created until resizing by AUTOGEN. The 
    default size below is arbitrary and may or may not be appropriate. 
 
Size of temporary swap file [8000]? 
    Each shared SCSI bus must have a positive allocation class value. A shared 
    bus uses a PK adapter. A private bus may use: PK, DR, DV. 
 
    When adding a node with SCSI-based cluster communications, the shared 
    SCSI port allocation classes may be established in SYS$DEVICES.DAT. 
    Otherwise, the system's disk allocation class will apply. 
 
    A private SCSI bus need not have an entry in SYS$DEVICES.DAT. If it has an 
    entry, its entry may assign any legitimate port allocation class value: 
 
       n   where n = a positive integer, 1 to 32767 inclusive 
       0   no port allocation class and disk allocation class does not apply 
      -1   system's disk allocation class applies (system parameter ALLOCLASS) 
 
    When modifying port allocation classes, SYS$DEVICES.DAT must be updated 
    for all affected nodes, and then all affected nodes must be rebooted. 
    The following dialog will update SYS$DEVICES.DAT on FCNOD2. 
 
Enter [RETURN] to continue: 
 
    $20$DKA400:<VMS$COMMON.SYSEXE>SYS$DEVICES.DAT;1 contains port 
    allocation classes for FCNOD2. After the next boot, any SCSI 
    controller not assigned in SYS$DEVICES.DAT will use FCNOD2's 
    disk allocation class. 
 
 
Assign port allocation class to which adapter [RETURN for none]: PKA 
Port allocation class for PKA0: 11 
 
        Port Alloclass   11    Adapter FCNOD2$PKA 
 
Assign port allocation class to which adapter [RETURN for none]: PKB 
Port allocation class for PKB0: 20 
 
        Port Alloclass   11    Adapter FCNOD2$PKA 
        Port Alloclass   20    Adapter FCNOD2$PKB 
 
Assign port allocation class to which adapter [RETURN for none]: 
 
  WARNING: FCNOD2 must be rebooted to make port allocation class 
           specifications in SYS$DEVICES.DAT take effect. 
Will a disk local only to FCNOD2 (and not accessible at this time to FCNOD1) 
be used for paging and swapping (Y/N)? N 
 
    If you specify a device other than DISK$V72_SSB: for FCNOD2's 
    page and swap files, this procedure will create PAGEFILE_FCNOD2.SYS 
    and SWAPFILE_FCNOD2.SYS in the [SYSEXE] directory on the device you 
    specify. 
 
What is the device name for the page and swap files [DISK$V72_SSB:]? 
%SYSGEN-I-CREATED, $20$DKA400:[SYS10.SYSEXE]PAGEFILE.SYS;1 created 
%SYSGEN-I-CREATED, $20$DKA400:[SYS10.SYSEXE]SWAPFILE.SYS;1 created 
    The configuration procedure has completed successfully. 
 
    FCNOD2 has been configured to join the cluster. 
 
    The first time FCNOD2 boots, NETCONFIG.COM and 
    AUTOGEN.COM will run automatically. 

