
HP OpenVMS Systems Documentation


Volume Shadowing for OpenVMS



7.10 Managing Write Bitmaps With DCL Commands

The SHOW DEVICE, SHOW CLUSTER, and DELETE commands have been extended for managing write bitmaps.

7.10.1 Determining Write Bitmap Support and Activity

You can find out whether a write bitmap exists for a shadow set by using the DCL command SHOW DEVICE/FULL device-name. If a shadow set supports write bitmaps, "device supports bitmaps" is displayed, along with either "bitmaps active" or "no bitmaps active". If the device does not support write bitmaps, no message pertaining to write bitmaps is displayed.

The following command example shows that no write bitmap is active:



$ SHOW DEVICE/FULL DSA0

Disk DSA0:, device type RAM Disk, is online, mounted, file-oriented device,
    shareable, available to cluster, error logging is enabled, device supports
    bitmaps (no bitmaps active).

    Error count                    0    Operations completed                 47
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot            S:RWPL,O:RWPL,G:R,W
    Reference count                2    Default buffer size                 512
    Total blocks                1000    Sectors per track                    64
    Total cylinders                1    Tracks per cylinder                  32
    Volume label              "TST0"    Relative volume number                0
    Cluster size                   1    Transaction count                     1
    Free blocks                  969    Maximum files allowed               250
    Extend quantity                5    Mount count                           1
    Mount status              System    Cache name      "_$252$DUA721:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache       96
    File ID cache size            64    Blocks currently in extent cache      0
    Quota cache size               0    Maximum buffers in FCP cache        404
    Volume owner UIC        [SYSTEM]    Vol Prot    S:RWCD,O:RWCD,G:RWCD,W:RWCD

  Volume Status:  ODS-2, subject to mount verification, file high-water marking,
      write-back caching enabled.

Disk $252$MDA0:, device type RAM Disk, is online, member of shadow set DSA0:.

    Error count                    0    Shadow member operation count       128
    Allocation class             252


Disk $252$MDA1:, device type RAM Disk, is online, member of shadow set DSA0:.

    Error count                    0    Shadow member operation count       157
    Allocation class             252

7.10.2 Displaying Write Bitmap IDs

You can find out the ID of each write bitmap on a node with the DCL command SHOW DEVICE/BITMAP device-name. The /BITMAP qualifier cannot be combined with other SHOW DEVICE qualifiers except /FULL. The SHOW DEVICE/BITMAP display can be brief or full; brief is the default.

If no bitmap is active, no bitmap ID is displayed; instead, the phrase "no bitmaps active" is displayed.

The following example shows a SHOW DEVICE/BITMAP display:


$ SHOW DEVICE/BITMAP DSA1
Device         BitMap        Size        Percent of
Name           ID            (Bytes)     Full Copy
DSA1:          00010001      652         11%

The following example shows a SHOW DEVICE/BITMAP/FULL display:


$ SHOW DEVICE DSA12/BITMAP/FULL
Device  Bitmap  Size   Percent of  Active Creation             Master  Cluster Local Delete  Bitmap
Name    ID     (bytes) Full Copy          Date/Time            Node    Size    Set   Pending Name

DSA12: 00010001  652    11%        Yes  5-MAY-2000 13:30:25.30 300F2   127     2%    No      SHAD$TEST

7.10.3 Displaying Write Bitmap Status of Cluster Members

You can include bitmap information in the SHOW CLUSTER display by issuing the ADD BITMAPS command, as shown in the following example:


$ SHOW CLUSTER/CONTINUOUS

Command > ADD BITMAPS
Command > ADD CSID

View of Cluster from system ID 57348  node: WPCM1          14-FEB-2000 13:38:53

      SYSTEMS              MEMBERS
  NODE   SOFTWARE    CSID   STATUS    BITMAPS

 CSGF1   VMS X6TF    300F2   MEMBER    MINICOPY

 HSD30Y  HSD YA01    300E6

 HS1CP2  HSD V31D    300F4

 CSGF2   VMS X6TF    300D0   MEMBER    MINICOPY

In this example, MINICOPY means that nodes CSGF1 and CSGF2 are capable of supporting minicopy operations. If a cluster node does not support minicopy, the term UNSUPPORTED replaces MINICOPY in the display, and the minicopy function is disabled in the cluster.

7.10.4 Deleting Write Bitmaps

After a minicopy operation is completed, the corresponding write bitmap is automatically deleted.

There may be times when you would like to delete one or more bitmaps. Reasons for deleting bitmaps include the following:

  • To recover the memory consumed by a write bitmap
  • To stop the recording of writes in the bitmap

You can delete write bitmaps with the DCL command DELETE and the /BITMAP qualifier, specifying the ID of the bitmap you want to delete. For example:


$ DELETE/BITMAP/LOG 00010001

%DELETE-I-DELETED, 00010001 deleted

7.11 Performance Implications of Write Bitmaps

Two aspects of write bitmaps affect performance: the message traffic that occurs between local and master write bitmaps, and the memory required by each bitmap.

The message traffic can be adjusted by changing the message mode. Single message mode is the default. Buffered message mode can improve overall system performance, but recording each process's write in the master write bitmap usually takes longer. These modes are described in detail in Section 7.9.
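For example, you can display and adjust the message mode thresholds with the SYSGEN utility. The following is a minimal sketch that assumes the WBM_MSG_INT, WBM_MSG_UPPER, and WBM_MSG_LOWER dynamic system parameters described in Section 7.9; the values shown are illustrative only:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW WBM_MSG_INT        ! interval at which message traffic is evaluated
SYSGEN> SET WBM_MSG_UPPER 500   ! illustrative threshold for entering buffered mode
SYSGEN> SET WBM_MSG_LOWER 200   ! illustrative threshold for returning to single mode
SYSGEN> WRITE ACTIVE            ! these parameters are dynamic
SYSGEN> EXIT

Because the parameters are dynamic, WRITE ACTIVE changes the running system without a reboot.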

Additional memory is required to support write bitmaps, as described in Section 1.3.1. Depending on your system's current memory usage, you may need to add memory.

7.12 Guidelines for Using a Shadow Set Member for Backup

Volume Shadowing for OpenVMS can be used as an online backup mechanism. With proper application design and proper operating procedures, shadow set members removed from mounted shadow sets constitute a valid backup.

To obtain a copy of a file system or application database for backup purposes using Volume Shadowing for OpenVMS, the standard recommendation has been to determine that the virtual unit is not in a merge state, to dismount the virtual unit, then to remount the virtual unit minus one member. Prior to OpenVMS Version 7.3, there was a documented general restriction on dismounting an individual shadow set member for backup purposes from a virtual unit that is mounted and in active use. This restriction relates to data consistency of the file system, application data, or database located on that virtual unit, at the time the member is removed.
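For a two-member shadow set, that sequence might look like the following minimal DCL sketch; the virtual unit, member, and volume names are hypothetical:


$ SHOW DEVICE DSA42:            ! verify that the shadow set is not in a merge state
$ DISMOUNT/CLUSTER DSA42:       ! dismount the entire virtual unit
$ MOUNT/CLUSTER DSA42:/SHADOW=($1$DGA20:) DATA1             ! remount minus one member
$ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP/NOWRITE $1$DGA21: DATA1  ! removed member, read-only

The removed member, $1$DGA21:, can then be backed up while DSA42: remains in use.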

However, Compaq recognizes that this restriction is unacceptable when true 24x7 application availability is a requirement, and that it is unnecessary if appropriate data-consistency measures can be ensured through a combination of application software and system management practice.

7.12.1 Removing a Shadow Set Member for Backup

With currently supported OpenVMS releases, DISMOUNT can be used to remove members from shadow sets for the purpose of backing up data, provided that the following requirements are met:

  • The shadow set must not be in a merge state. Compaq also recommends that the shadow set not have a copy operation in progress.
  • Adequate redundancy must be maintained after member removal. Compaq recommends that the active shadow set never be reduced to less than two members; alternatively, the shadow sets should employ controller mirroring or RAID 5.

Follow these steps to remove the member; a DCL sketch of steps 2 through 4 follows the list:

  1. Establish data consistency over the virtual units through system management procedures or application software, or both. This is a complex topic and is the subject of most of the rest of this chapter.
  2. Ensure that the requirements regarding merge state and adequate redundancy are met.
  3. Remove the members to be backed up from the virtual units.
  4. Terminate the data consistency measures taken in step 1.
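The following minimal DCL sketch illustrates steps 2 through 4 for a three-member shadow set. The device names, volume label, and save-set destination are hypothetical, and step 1 depends entirely on your applications:


$ SHOW DEVICE DSA42:      ! step 2: confirm no merge or copy operation in progress
$ DISMOUNT $1$DGA22:      ! step 3: remove one member; DSA42: retains two members
$ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP/NOWRITE $1$DGA22: DATA1
$ BACKUP/IMAGE $1$DGA22: $1$DGA99:[BACKUPS]DATA1.BCK/SAVE_SET
$ DISMOUNT $1$DGA22:      ! release the member when the backup completes

Remounting $1$DGA22: into DSA42: afterward starts a full copy operation, or a minicopy if a write bitmap was created when the member was removed.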

7.12.2 Data Consistency Requirements

Removal of a shadow set member results in what is called a crash-consistent copy. That is, the copy of the data on the removed member is of the same level of consistency as what would result if the system had failed at that instant. The ability to recover from a crash-consistent copy is ensured by a combination of application design, system and database design, and operational procedures. The procedures to ensure recoverability depend on application and system design and will be different for each site.

The conditions that might exist at the time of a system failure range from no data having been written, to writes that occurred but were not yet written to disk, to all data having been written. The following sections describe components and actions of the operating system that may be involved if a failure occurs and there are outstanding writes, that is, writes that occurred but were not written to disk. You must consider these issues when establishing procedures to ensure data consistency in your environment.

7.12.3 Application Activity

To achieve data consistency, application activity should be suspended and no operations should be in progress. Operations in progress can result in inconsistencies in the backed-up application data. While many interactive applications tend to become quiet if there is no user activity, reliably suspending application activity requires cooperation from the application itself. Journaling and transaction techniques can be used to address in-progress inconsistencies but must be used with extreme care. In addition to specific applications, miscellaneous interactive use of the system that might affect the data to be backed up must also be suspended.

7.12.4 RMS Considerations

Applications that use RMS file access must be aware of the following issues.

7.12.4.1 Caching and Deferred Writes

RMS can, at the application's option, defer disk writes to some time after it has reported completion of an update to the application. The data on disk will be updated in response to other demands on the RMS buffer cache and to references to the same or nearby data by cooperating processes in a shared file environment.

Writes to sequential files are always buffered in memory and are not written to disk until the buffer is full.

7.12.4.2 End of File

The end-of-file pointer of a sequential file is normally updated only when the file is closed.

7.12.4.3 Index Updates

The update of a single record in an indexed file may result in multiple index updates. Any of these updates can be cached at the application's option. Splitting a shadow set with an incomplete index update will result in inconsistencies between the indexes and data records. If deferred writes are disabled, RMS orders writes so that an incomplete index update may result in a missing update but never in a corrupt index. However, if deferred writes are enabled, the order in which index updates are written is unpredictable.

7.12.4.4 Run-Time Libraries

The I/O libraries of various languages use a variety of RMS buffering and deferred write options. Some languages allow application control over the RMS options.

7.12.4.5 $FLUSH

Applications can use the $FLUSH service to guarantee data consistency. The $FLUSH service guarantees that all updates completed by the application (including end of file for sequential files) have been recorded on the disk.

7.12.4.6 Journaling and Transactions

RMS provides optional roll-forward, roll-back, and recovery unit journals, and supports transaction recovery using the OpenVMS transaction services. These features can be used to back out in-progress updates from a removed shadow set member. Using such techniques requires careful data and application design. It is critical that virtual units containing journals be backed up along with the base data files.

7.12.5 Mapped Files

OpenVMS allows access to files as backing store for virtual memory through the process and global section services. In this mode of access, the virtual address space of the process acts as a cache on the file data. OpenVMS provides the $UPDSEC service to force updates to the backing file.

7.12.6 Database Systems

Database management systems, such as those from Oracle, are well suited to backup by splitting shadow sets, since they have full journaling and transaction recovery built in. Before dismounting shadow set members, an Oracle database should be put into "backup mode" using SQL commands of the following form:


ALTER TABLESPACE tablespace-name BEGIN BACKUP;

This command establishes a recovery point for each component file of the tablespace. The recovery point ensures that the backup copy of the database can subsequently be recovered to a consistent state. Backup mode is terminated with commands of the following form:


ALTER TABLESPACE tablespace-name END BACKUP;

It is critical to back up the database logs and control files as well as the database data files.
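A command procedure can tie the database and shadow set operations together. The following sketch is illustrative only: the SQLPLUS foreign command and the script names are assumptions, and the device names are hypothetical. The essential point is the ordering of the steps:


$ ! Enter backup mode (the script issues ALTER TABLESPACE ... BEGIN BACKUP)
$ SQLPLUS "SYSTEM/password" @BEGIN_BACKUP.SQL
$ ! Remove one member of each shadow set that holds database files
$ DISMOUNT $1$DGA31:
$ DISMOUNT $1$DGA41:
$ ! Leave backup mode (the script issues ALTER TABLESPACE ... END BACKUP)
$ SQLPLUS "SYSTEM/password" @END_BACKUP.SQL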

7.12.7 Base File System

The base OpenVMS file system caches free space. However, all file metadata operations (such as create and delete) are made with a "careful write-through" strategy so that the results are stable on disk before completion is reported to the application. Some free space may be lost, which can be recovered with an ordinary disk rebuild. If file operations are in progress at the instant the shadow member is dismounted, minor inconsistencies may result that can be repaired with ANALYZE/DISK. The careful write ordering ensures that any inconsistencies do not jeopardize file integrity before the disk is repaired.

7.12.8 $QIO File Access and VIOC

OpenVMS maintains a virtual I/O cache (VIOC) to cache file data. However, this cache is write through. OpenVMS Version 7.3 introduces extended file cache (XFC), which is also write through.

File writes using the $QIO service are completed to disk before completion is reported to the caller.

7.12.9 Multiple Shadow Sets

Multiple shadow sets present the biggest challenge to splitting shadow sets for backup. While the removal of a single shadow set member is instantaneous, there is no way to remove members of multiple shadow sets simultaneously. If the data that must be backed up consistently spans multiple shadow sets, application activity must be suspended while all shadow set members are being dismounted. Otherwise, the data will not be crash consistent across the multiple volumes. Command procedures or other automated techniques are recommended to speed the dismount of related shadow sets. If multiple shadow sets contain portions of an Oracle database, putting the database into backup mode ensures recoverability of the database.
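A command procedure such as the following minimal sketch keeps the interval between dismounts as short as possible; the device names are hypothetical:


$ ! SPLIT_SETS.COM -- remove one member from each related shadow set
$ ! back to back to minimize the window of inconsistency between volumes
$ DISMOUNT $1$DGA21:    ! member of DSA1: (application data)
$ DISMOUNT $1$DGA31:    ! member of DSA2: (journals)
$ DISMOUNT $1$DGA41:    ! member of DSA3: (control files)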

7.12.10 Host-Based RAID

The OpenVMS software RAID driver presents a special case for multiple shadow sets. A software RAID set may be constructed of multiple shadow sets, each consisting of multiple members. With the management functions of the software RAID driver, it is possible to dismount one member of each of the constituent shadow sets in an atomic operation. Management of shadow sets used under the RAID software must always be done using the RAID management commands to ensure consistency.

7.12.11 OpenVMS Cluster Operation

All management operations used to attain data consistency must be performed for all members of an OpenVMS Cluster system on which the affected applications are running.

7.12.12 Testing

Testing alone cannot guarantee the correctness of a backup procedure. However, testing is a critical component of designing any backup and recovery process.

7.12.13 Restoring Data

Too often, organizations concentrate on the backup process with little thought to how their data will be restored. Remember that the ultimate goal of any backup strategy is to recover data in the event of a disaster. Restore and recovery procedures must be designed and tested as carefully as the backup procedures.

7.12.14 Revalidation of Data Consistency Methods

The discussion in this section is based on features and behavior of OpenVMS Version 7.3 and applies to prior versions as well. Future versions of OpenVMS may have additional features or different behavior that affect the procedures necessary for data consistency. Sites that upgrade to future versions of OpenVMS must reevaluate their procedures and be prepared to make changes or to employ nonstandard settings in OpenVMS to ensure that their backups remain consistent.


Chapter 8
Performing System Management Tasks on Shadowed Systems

This chapter explains how to accomplish system maintenance tasks on a standalone system or an OpenVMS Cluster system that uses volume shadowing. Refer to Chapter 3 for information about setting up and booting a system to use volume shadowing.

8.1 Upgrading the Operating System on a System Disk Shadow Set

It is important to upgrade the operating system at a time when your system can afford to have its shadowing support disabled. This is because you cannot upgrade to new versions of the OpenVMS operating system on a shadowed system disk. If you attempt to upgrade a system disk while it is an active member of a shadow set, the upgrade procedure will fail.

Procedure for Upgrading Your Operating System

This procedure is divided into four parts. Part 1 describes how to prepare a shadowed system disk for the upgrade. Part 2 describes how to perform the upgrade. Part 3 describes how to enable volume shadowing on the upgraded system. Part 4 shows how to boot other nodes in an OpenVMS Cluster system with and without volume shadowing.

Part 1: Preparing a Shadowed System Disk

  1. On OpenVMS Cluster systems, choose the node on which you want to perform the upgrade.
  2. Create a nonshadowed system disk to do the upgrade using either of these methods:
    • Prepare a copy of the current system disk to use as the target of the upgrade procedure. See Section 8.3.2.
    • Use BACKUP to create a compressed copy of the shadow set on a single scratch disk (a disk with no useful data). See Section 8.3.4 for an example.
  3. Enter the MOUNT/OVERRIDE=SHADOW_MEMBERSHIP command on the upgrade disk to zero the shadowing-specific information on the storage control block (SCB) of the disk. Do not mount the disk for systemwide or clusterwide access; omit the /SYSTEM and /CLUSTER qualifiers on the MOUNT command line.
  4. Use the DCL command SET VOLUME/LABEL=volume-label device-spec[:] to change the label on the upgrade disk. (The SET VOLUME/LABEL command requires write access [W] to the index file on the volume. If you are not the volume owner, you must have either a system UIC or the SYSPRV privilege.) For OpenVMS Cluster systems, ensure that the volume label is a unique name across the cluster.

    Note

    If you need to change the volume label of a disk that is mounted across the cluster, be sure you change the label on all nodes in the OpenVMS Cluster system. For example, you could propagate the volume label change to all nodes in the cluster with one SYSMAN utility command, after you define the environment as the cluster:


    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> DO SET VOLUME/LABEL=new-label disk-device-name:
    
  5. Ensure that the boot command line or file boots from the upgrade disk. The manner in which you store the boot command information depends on the processor on which you are working. For more information about storing boot commands, see the instructions in your hardware installation guide, the upgrade and installation supplement for your VAX computer, or the upgrade and installation manual for your Alpha computer.
    If volume shadowing is enabled on the node, disable it according to the instructions in step 6. Otherwise, continue with Part 2 to upgrade the system.
  6. Prepare to perform the upgrade procedure by disabling system disk shadowing (if it is enabled) on the node to be upgraded.

    Note

    You cannot perform an upgrade on a shadowed system disk. If your system is set up to boot from a shadow set, you must disable shadowing the system disk before performing the upgrade. This requires changing SYSGEN parameter values interactively using the SYSGEN utility.

    Invoke SYSGEN by entering the following command:


    $ RUN SYS$SYSTEM:SYSGEN
    

    On OpenVMS Alpha systems, enter the following:


    SYSGEN> USE upgrade-disk:[SYSn.SYSEXE]ALPHAVMSSYS.PAR
    SYSGEN>

    On OpenVMS VAX systems, enter the following:


    SYSGEN> USE upgrade-disk:[SYSn.SYSEXE]VAXVMSSYS.PAR
    SYSGEN>
    The USE command defines the system parameter file from which data is to be retrieved. You should replace the variable upgrade-disk with the name of the disk to be upgraded. For the variable n in [SYSn.SYSEXE], use the system root directory you want to boot from (this is generally the same root you booted from before you started the upgrade procedure).
    Disable shadowing of the system disk by setting the SYSGEN parameter SHADOW_SYS_DISK to 0, as follows:


    SYSGEN> SET SHADOW_SYS_DISK 0
    

    On OpenVMS Alpha systems, enter:


    SYSGEN> WRITE upgrade-disk:[SYSn.SYSEXE]ALPHAVMSSYS.PAR

    On OpenVMS VAX systems, enter:


    SYSGEN> WRITE upgrade-disk:[SYSn.SYSEXE]VAXVMSSYS.PAR

    Type EXIT or press Ctrl/Z to exit the SYSGEN utility and return to the DCL command level.
    You must also change parameters in the MODPARAMS.DAT file before shutting down the system. Changing parameters before shutdown ensures that the new system parameter values take effect when AUTOGEN reads the MODPARAMS.DAT file and reboots the nodes. Edit upgrade-disk:[SYSn.SYSEXE]MODPARAMS.DAT and set SHADOWING and SHADOW_SYS_DISK to 0.
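    The MODPARAMS.DAT entries for this step might look like the following minimal sketch:


    ! In upgrade-disk:[SYSn.SYSEXE]MODPARAMS.DAT -- disable shadowing for the upgrade
    SHADOWING = 0
    SHADOW_SYS_DISK = 0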

Even if you plan to use the upgraded system disk to upgrade the operating system on other OpenVMS Cluster nodes, you should complete the upgrade on one node before altering parameters for other nodes. Proceed to Part 2.

Part 2: Performing the Upgrade

  1. Boot from and perform the upgrade on the single, nonshadowed disk. Follow the upgrade procedure described in the OpenVMS upgrade and installation manual.
  2. If you are upgrading a system that already has the volume shadowing software installed and licensed, then skip to Part 3.
    Otherwise, you must register the Volume Shadowing for OpenVMS Product Authorization Key (PAK) or keys. PAK registration is described in the release notes and cover letter supplied with your installation kit.

Part 3: Enabling Volume Shadowing on the Upgraded System

Once the upgrade is complete and the upgraded node has finished running AUTOGEN, you can enable shadowing for the upgraded node using the following steps.

  1. Invoke the System Generation utility (SYSGEN) by entering the following command:


    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN>
    

    The USE CURRENT command initializes the SYSGEN work area with the source information from the current system parameter file on disk. (To find out the current value of a system parameter, use the SHOW command, for example, SHOW SHADOWING; the display includes the current value as well as the minimum, maximum, and default values of the parameter.)
    To enable shadowing, set the system parameter SHADOWING to 2. If the system disk is to be a shadow set, set the system parameter SHADOW_SYS_DISK to 1, and set the SHADOW_SYS_UNIT parameter to the unit number of the virtual unit, as follows (assume the system disk virtual unit is DSA54):


    SYSGEN> SET SHADOWING 2
    SYSGEN> SET SHADOW_SYS_DISK 1
    SYSGEN> SET SHADOW_SYS_UNIT 54
    SYSGEN> WRITE CURRENT
    

    Type EXIT or press Ctrl/Z to exit the SYSGEN utility and return to the DCL command level.
  2. To ensure that volume shadowing is enabled each time AUTOGEN executes, edit the SYS$SYSTEM:MODPARAMS.DAT file to set the shadowing parameters; a sketch of typical entries follows these steps. For OpenVMS Cluster systems, set system parameters in MODPARAMS.DAT on each node that uses volume shadowing. See Chapter 3 for more information about editing the MODPARAMS.DAT file.
  3. Shut down the system on which you performed the upgrade, and reboot.
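The MODPARAMS.DAT entries referred to in step 2 might look like the following minimal sketch, which assumes the DSA54 system disk virtual unit from the SYSGEN example above:


! In SYS$SYSTEM:MODPARAMS.DAT -- keep shadowing enabled across AUTOGEN runs
SHADOWING = 2           ! enable host-based volume shadowing
SHADOW_SYS_DISK = 1     ! the system disk is a shadow set
SHADOW_SYS_UNIT = 54    ! unit number of the system disk virtual unit (DSA54)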

Part 4: Booting Other Nodes in the OpenVMS Cluster from the Upgraded Disk

If other nodes boot from the upgraded disk, the OpenVMS upgrade procedure automatically upgrades and runs AUTOGEN on each node when it is booted. The procedure for booting other nodes from the upgraded disk differs based on whether the upgraded disk has been made a shadow set.

  1. If the upgraded disk is not yet a shadow set:
    1. Disable shadowing (if it is enabled) for the system disk on the nodes to be upgraded.
    2. Alter the boot files for those nodes so they boot from the upgraded disk.
    3. Make sure the system parameters in the node-specific SYS$SYSTEM:MODPARAMS.DAT files are correct (as described in Section 3.3.1). When the OpenVMS upgrade procedure invokes AUTOGEN, it will use these parameter settings.
    4. Boot the nodes from the upgraded disk.
  2. If the upgraded disk is already a shadow set member, additional steps are required:
    1. For each node to be booted from the upgraded disk, edit VAXVMSSYS.PAR (on VAX systems) or ALPHAVMSSYS.PAR (on Alpha systems) and MODPARAMS.DAT to enable system disk shadowing. Set SHADOWING to 2, SHADOW_SYS_DISK to 1, and SHADOW_SYS_UNIT to the unit number of the system disk's virtual unit. Remember to modify the files on the upgraded disk, not on the current system disk.
    2. Modify the computer console so that the system boots from the upgraded disk.
      On VAX computers, depending on which model you have, you can alter the boot file on the console media or use a console command to change nonvolatile RAM.
      On Alpha computers, you can use the SET BOOTDEF_DEV console command, as shown in the sketch after this list. For more information, see the hardware information or the upgrade and installation manual for your system.
    3. Boot each node. With shadowing enabled in each node's ALPHAVMSSYS.PAR or VAXVMSSYS.PAR on the upgraded disk, the node will be able to boot from the shadowed (upgraded) system disk.
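For the console step on an Alpha computer, the commands might look like the following minimal sketch; the device name DKA100 is hypothetical:


>>> SET BOOTDEF_DEV DKA100
>>> SHOW BOOTDEF_DEV
>>> BOOT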

