HP OpenVMS Systems Documentation
The OpenVMS Frequently Asked Questions (FAQ)
5.42 Please help me with the OpenVMS BACKUP utility?
5.42.1 Why isn't BACKUP/SINCE=BACKUP working?
If you are seeing more files backed up than previously, you are seeing
the result of a change that was made to ensure BACKUP can perform an
incremental restoration of the files. In particular, if a directory
file's modification date changes, all files underneath it are included
in the BACKUP, in order to permit incremental restoration should a
directory file get renamed.
When a directory is renamed, its modification date changes. A later restoration must restore the directory and its contents under the new name, and must not resurrect the older directory name when a series of incremental BACKUPs is restored. An incremental BACKUP operation therefore needs to pick up all of the affected files.
Consider performing an incremental restoration, to test the procedures.
This testing was how OpenVMS Engineering found out about the problem
that was latent with the old BACKUP selection scheme---the old
incremental BACKUP scheme would have missed restoring any files under a
renamed directory. Hence the change to the selection mechanisms
mentioned in Section 5.42.1.
5.42.2 Can I get the old BACKUP/SINCE=BACKUP behaviour back?
Yes, please see the /NOINCREMENTAL qualifier available on recent
OpenVMS versions (and ECO kits). Use of this qualifier informs BACKUP
that you are aware of the limitations of the old BACKUP behaviour
around incremental disk restorations.
Use the commands documented in the manual for performing incremental BACKUPs, and follow the documented incremental procedures. Don't try to use incremental commands in a non-incremental context.
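As a sketch of the documented full-plus-incremental approach (the device and saveset names here are hypothetical), a full image BACKUP with /RECORD followed by a periodic /RECORD/SINCE=BACKUP incremental looks something like:

```
$ ! Full image backup; /RECORD stores the backup date in each file header
$ BACKUP/IMAGE/RECORD DKA100: MKA500:FULL.BCK/SAVE_SET/REWIND
$ ! Later: an incremental pass, selecting files modified since the recorded date
$ BACKUP/RECORD/SINCE=BACKUP DKA100:[000000...]*.*;* MKA500:INCR.BCK/SAVE_SET
```

Restoration then uses BACKUP/IMAGE for the full saveset followed by BACKUP/INCREMENTAL for each incremental saveset, in the order the BACKUP documentation specifies.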
Also consider understanding and then using /NOALIAS, which will likely be a bigger win than will anything to do with the incremental BACKUPs, particularly on system disks and any other disks with directory aliases.
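On disks with directory aliases (a system disk, where [SYS0.SYSCOMMON] is an alias for [VMS$COMMON], is the common example), /NOALIAS avoids saving the aliased files more than once. A hypothetical example (device and saveset names assumed):

```
$ ! /NOALIAS saves aliased directory trees only once
$ BACKUP/IMAGE/NOALIAS SYS$SYSDEVICE: MKA500:SYSDISK.BCK/SAVE_SET/REWIND
```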
See the OpenVMS documentation for additional details.
Ignoring hardware performance and process quotas, the performance of BACKUP during a disk saveset creation is typically limited by three factors:
5.42.3 Why is BACKUP not working as expected?
First, please take the time to review the BACKUP documentation, and particularly the BACKUP command examples. Then please download and install the most current BACKUP ECO kit. Finally, please set the process quotas per the System Management documentation. These steps tend to resolve most problems seen.
BACKUP has a very complex interface, and there are numerous command examples and extensive user documentation available. For a simpler user interface for BACKUP, please see the documentation for the BACKUP$MANAGER tool.
As for recent BACKUP changes, oddities, bugs, etc:
When working with BACKUP, you will want to:
When working with the BACKUP callable API:
5.42.4 How do I fix a corrupt BACKUP saveset?
BACKUP savesets can be corrupted by FTP file transfers and by tools such as zip (particularly when the zip tool has not been asked to save and restore OpenVMS file attributes, or when it does not support OpenVMS file attributes), as well as by other means of corruption.
If you have problems with the BACKUP savesets after unzipping them or after an FTP file transfer, you can try restoring the appropriate saveset attributes using the tool:
This tool is available on the OpenVMS Freeware (in the [000TOOLS] directory). The Freeware is available at various sites---see the Freeware location listings elsewhere in the FAQ---and other similar tools are also available from various sources.
In various cases, a SET FILE/ATTRIBUTES command can also be used. Because the parameters of this command must be varied to match the attributes of the target BACKUP saveset, this approach is not recommended.
Also see the "SITE VMS", /FDL, and various other file-attributes options available in various FTP tools. (Not all available FTP tools support any or all of these options.)
Browser downloads (via FTP) and incorrect FTP transfer modes (binary versus ascii) are notorious for causing RMS file corruptions, and particularly BACKUP saveset corruptions. You can sometimes encourage the browser to select the correct FTP transfer type code (via RFC 1738).
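Per RFC 1738, the transfer type can be encoded directly in the FTP URL; a hypothetical example forcing binary (image) mode for a saveset download:

```
ftp://ftp.example.com/kits/SAVESET.BCK;type=i
```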
You can also often configure the particular web browser to choose the
appropriate transfer mode by default, based on the particular file
extensions, using a customization menu available in most web browsers.
You can specify that the particular file extensions involved use the FTP
binary transfer mode, which will reduce the number of corruptions seen.
How to do this correctly was described at DECUS long ago. On the OpenVMS host with the tape drive, create the following SAVE-SET.FDL file:
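The original posting's FDL is not reproduced here; a minimal reconstruction of the sort described, assuming the conventional 8192-byte fixed-length-record saveset format, might look like:

```
FILE
        ORGANIZATION    sequential
RECORD
        FORMAT          fixed
        SIZE            8192
```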
Then create BACKUP_SERVER.COM:
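A hypothetical sketch of such a server procedure (the tape device MKA500: and the use of CONVERT are assumptions; the record size must match the FDL and the client's /BLOCK_SIZE):

```
$ ! BACKUP_SERVER.COM -- invoked as a DECnet task object (sketch; names hypothetical)
$ SET NOON
$ ! Write the inbound network data stream to the tape using the saveset attributes
$ CONVERT/FDL=SAVE-SET.FDL SYS$NET MKA500:SAVE-SET.BCK
```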
On the node where you want to do the backup, use the DCL command:
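A hypothetical form of the client-side command (node, account, source directory, and saveset name are assumptions; /BLOCK must match the FDL record size):

```
$ ! Write the saveset through DECnet to the remote task object
$ BACKUP/BLOCK=8192 DKA100:[WORK...]*.*;* -
      REMOTE"username password"::"TASK=BACKUP_SERVER"/SAVE_SET
```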
One area which does not function here is the volume switch: multi-reel or multi-cartridge savesets. Since the tape is being written through DECnet, RMS, and the magtape ACP, BACKUP won't see the media switch and will split an XOR group across the reel boundary. BACKUP might well be willing to read such a multi-reel or multi-cartridge saveset (directly, not over the net), as the XOR blocks are effectively ignored until and unless needed for error recovery operations. BACKUP likely will not be able to perform an XOR-based recovery across reel or cartridge boundaries.
Unfortunately BACKUP can't read tapes over the network because the RMS
file attributes on a network task access look wrong; the attributes
reported include variable length records.
This task is sometimes referred to as disk, tape, or media declassification, as formatting, as pattern erasure, or occasionally by the generic term data remanence. References to the US Department of Defense (DoD) or NCSC "Rainbow Books" documentation are also commonly seen in this context.
While this erasure task might initially appear quite easy, basic characteristics of the storage media and of the device error recovery and bad block handling can make this effort far more difficult than it might initially appear.
Obviously, data security and sensitivity, the costs of exposure, applicable legal or administrative requirements (DoD, HIPAA, or otherwise), and the intrinsic value of the data involved are all central factors in this discussion and in the decision of the appropriate resolution, as is the value of the storage hardware involved.
With data of greater value, or with data exposure (sometimes far) more costly than the residual value of the disk storage involved, the physical destruction of the platters may well be the most expedient, economical, and appropriate approach. The unintended exposure of a bad block containing customer healthcare data or credit card numbers can be quite costly, of course, both in terms of the direct loss and the longer-term and indirect costs of such exposures.
Other potential options include the Freeware RZDISK package, the OpenVMS INITIALIZE/ERASE command (potentially in conjunction with the $erapat system service), and OpenVMS Ask The Wizard (ATW) topics including (841), (3926), (4286), (4598), and (7320). For additional information on sys$erapat, see the OpenVMS Programming Concepts manual and the OpenVMS VAX examples module SYS$EXAMPLES:DOD_ERAPAT.MAR. Some disk controllers and even a few disks---some DSSI disk ISEs, for instance---contain support for data erasure.
For the prevention of casual disk data exposures, a generic INITIALIZE/ERASE operation is probably sufficient. It is not completely reliable, particularly if the data is valuable or if legal, administrative, or contractual restrictions are stringent: there may well be revectored blocks that are not overwritten, or not completely overwritten, by this erasure, as discussed above, and these blocks can obviously contain at least part of almost any data that was stored on the disk. This basic disk overwrite operation is nonetheless likely sufficient to prevent typical information disclosures.
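A minimal sketch of the basic overwrite (the device name and volume label are hypothetical; as noted above, this does not address revectored blocks):

```
$ ! Overwrite the volume with the erase pattern, then write a fresh file structure
$ INITIALIZE/ERASE DKA200: SCRATCH
```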
You will want to consult with your site security officer, your
corporate security or legal office, with HP Services or your preferred
service organization, or with a firm that specializes in erasure or
data declassification tasks. HP Services does traditionally offer a
secure disk declassification service.
In SYSTARTUP_VMS.COM, ensure that a command similar to the following is invoked:
In MODPARAMS.DAT, add the following line or (if already present) mask the specified hexadecimal value into an existing TTY_DEFCHAR2 setting, and perform a subsequent AUTOGEN with an eventual reboot:
On older TCP/IP Services---versions prior to V5.0---you will have to issue the following UCX command:
Volume Shadowing MiniCopy vs MiniMerge?
MiniMerge support has been available for many years with OpenVMS host-based volume shadowing, so long as you had MSCP controllers (e.g., HSC, HSJ, or HSD) which supported the Volume Shadowing Assist known as "Write History Logging".
If you are interested in mini-merge and similar technologies, please see the Fibre Channel webpage and the information available there:
Host-based Mini-Merge (HBMM) is now available for specific OpenVMS releases via a shadowing ECO kit, and is also present in OpenVMS V8.2 and later. HBMM applies to the HSG80 series and---like host-based volume shadowing---to most other (all other?) supported storage devices.
The following sections describe both Mini-Copy and Mini-Merge, and can
provide a basis for discussions.
A Shadowing Full Copy occurs when you add a disk to an existing shadowset using a MOUNT command. The entire contents of the disk are effectively copied to the new member, using an algorithm that works through the disk in 127-block increments: it reads one source member, compares with the target disk, and, if the data differs, writes the data to the target disk and loops back to the read step, until the data matches for that 127-block section. (This is one of the reasons why the traditional recommendation for adding new volumes to a shadowset was to use a BACKUP/PHYSICAL copy of an existing shadowset volume: the reads then usually matched, and thus shadowing usually avoided the need for the writes.)
If you warn OpenVMS ahead of time (at dismount time) that you're
planning to remove a disk from a shadowset but re-add it later, OpenVMS
will keep a bitmap tracking what areas of the disk have been modified
while the disk was out of the shadowset, and when you re-add it later
with a MOUNT command OpenVMS only has to update the areas of the
returned disk that the bit-map indicates are now out-of-date. OpenVMS
does this with a read source / write target algorithm, which is much
faster than the shenanigans the Full Copy does, so even if all of the
disk has changed, a Mini-Copy is faster than a Full Copy.
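The Mini-Copy workflow described above can be requested from DCL; a hypothetical sequence (device names and volume label assumed):

```
$ ! Remove the member, asking shadowing to keep a write bitmap for it
$ DISMOUNT/POLICY=MINICOPY $1$DGA101:
$ ! ... while the member is absent, writes to DSA1: are tracked in the bitmap ...
$ ! Re-add the member; only the areas flagged in the bitmap are copied
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA101:) DATAVOL
```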
A Shadowing Merge is initiated when an OpenVMS node in the cluster (which had a shadowset mounted) crashes or otherwise leaves unexpectedly, without dismounting the shadowset first. In this case, OpenVMS must ensure that the data is identical, since Shadowing guarantees that the data on the disks in a shadowset will be identical. In a regular Merge operation, Shadowing uses an algorithm similar to the Full Copy algorithm (except that it can choose either of the members' contents as the source data, since both are considered equally valid), and scans the entire disk. Also, to make things worse, for any read operations in the area ahead of what has been merged, Shadowing will first merge the area containing the read data, then allow the read to occur.
A Merge can be very time-consuming and very I/O intensive. With
Mini-Merge, if a node crashes, the surviving nodes can query the
write-history log to determine exactly what areas of the disk the
departed node was writing to just before the crash, and thus Shadowing
needs to merge only those few areas. A Mini-Merge therefore tends to
take seconds, as opposed to the many minutes or even hours potentially
required for a regular full Merge.
DELETE/ERASE holds the file lock and also holds a lock on the parent
directory for the duration of the erasure. This locking can obviously
cause an access conflict on either the file or the directory. It might
well pay to rename files into a temporary directory location before
issuing the DELETE/ERASE, particularly for large files, for systems
with multiple overwrite erase patterns in use, or for any system where
the DELETE/ERASE erasure operation will take a while.
Some applications will automatically roll file version numbers over, and some will require manual intervention. Some will continue to operate without the ability to update the version, and some will be unable to continue. Some sites will specifically (attempt to) create a file with a version of ;32767 to prevent the creation of additional files, too.
To monitor and resolve file versions, you can use commands including:
You can also monitor file version numbers, and can report problems with ever-increasing file versions to the organization(s) supporting the application(s) that generate them, asking for details on potential problems and for any recommendations on resetting the version numbers for the particular product or package. If required, of course.
The following pair of DCL commands---though obviously subject to timing windows--- can be used to rename all the versions of a file back down to a contiguous sequence of versions starting at 1:
The key to the success of this RENAME sequence is the specification of (only) the trailing semicolon on the second parameter of each of the RENAME commands.
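The original command pair is not reproduced here; a hypothetical instance of the technique, for a file X.DAT, would be:

```
$ ! All versions of X.DAT are renamed to a temporary name, then renamed back;
$ ! the bare trailing semicolon makes RENAME assign fresh versions from 1 upward
$ RENAME X.DAT;* X.TMP;
$ RENAME X.TMP;* X.DAT;
```

The pair is needed because each wildcard pass inverts the version ordering; the second pass inverts it back, leaving the versions contiguous from 1 and in the original order.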
You may also see the numbers of files reduced with DELETE commands, with multiple directories, or with PURGE commands such as the following examples:
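Hypothetical examples of the sort of PURGE commands referred to (the filename is assumed):

```
$ ! Keep only the highest version of the file
$ PURGE X.DAT
$ ! Keep the newest seven versions
$ PURGE/KEEP=7 X.DAT
```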
If you are creating or supporting an application, selecting temporary or log file filenames from among a set of filenames---selecting filenames based on time, on process id, on the day of week, week number, or month, on the f$unique lexical (V7.3-2 and later), etc---is often useful, as this approach more easily permits on-line adjustments to the highest file versions and easily permits on-line version compression using techniques shown above. With differing filenames, you are less likely to encounter errors resulting from files that are currently locked. You can also detect the impending version number limit within the application, and can clean up older versions and roll the next file version creation to ;1 or such.
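A hypothetical use of the f$unique lexical (OpenVMS V7.3-2 and later) to generate a distinct scratch filename, with the logical names and record contents invented for illustration:

```
$ ! Build a unique scratch-file name; F$UNIQUE returns a unique string
$ file_name = "SCRATCH_" + F$UNIQUE() + ".TMP"
$ OPEN/WRITE scratch 'file_name'
$ WRITE scratch "example record"
$ CLOSE scratch
```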
Host-based Volume Shadowing (HBVS) is Disk Mirroring is RAID Level 1.
HBVS is capable of shadowing devices of different geometries and of different block counts (dissimilar device shadowing, allowing for mixtures of hardware), and---with dynamic volume expansion---of growing volumes on the fly. HBVS is capable of shadowing/mirroring/RAID-1 operations across cluster configurations up to the full span of a cluster; please see the Cluster SPD for the currently supported span, which is presently multiple hundreds of kilometers. HBVS can be layered onto controller (hardware) RAID, as well.
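Creating a two-member shadowset from DCL looks like this (the virtual unit, member devices, and volume label here are hypothetical):

```
$ ! DSA1: is the shadowset virtual unit; the two physical members
$ ! may differ in geometry under dissimilar device shadowing
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA100:,$1$DGA200:) DATAVOL
```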