The following sections describe master adapters and slave formatters, and data check and error recovery capabilities, in greater detail.
Dual-Path HSC Tape Drives
A dual-path HSC tape drive is a drive that connects to two HSCs, both of which have the same nonzero tape allocation class. The operating system recognizes the dual-pathed capability of such a tape drive under the following circumstances: (1) the operating system has access to both HSCs, and (2) the select buttons for both ports on the tape drive are depressed.
If one port fails, the operating system switches access to the operational port automatically, provided that the allocation class information has been defined correctly.
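The failover behavior described above can be sketched as a small model. This is a conceptual illustration only, not VMS driver code; the class and method names are hypothetical:

```python
# Conceptual sketch (not VMS code) of dual-path failover.
# All names here are hypothetical illustrations.

class DualPathDrive:
    def __init__(self, alloc_class_a, alloc_class_b):
        # Failover is valid only when both HSC ports share the same
        # nonzero tape allocation class.
        if alloc_class_a != alloc_class_b or alloc_class_a == 0:
            raise ValueError("ports must share the same nonzero allocation class")
        self.paths = {"A": True, "B": True}   # True = port operational
        self.current = "A"

    def do_io(self):
        if self.paths[self.current]:
            return "I/O completed on port " + self.current
        # Current path failed: switch to the alternate port if it is up.
        alternate = "B" if self.current == "A" else "A"
        if self.paths[alternate]:
            self.current = alternate
            return "failed over; I/O completed on port " + self.current
        raise IOError("no operational path")

drive = DualPathDrive(1, 1)
drive.paths["A"] = False          # simulate a port failure
print(drive.do_io())              # failed over; I/O completed on port B
```

The check in the constructor mirrors the requirement that both ports carry the same nonzero allocation class before the operating system treats the drive as dual-pathed.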
Dynamic Failover and Mount Verification
Dynamic failover occurs on dual-pathed tape drives if mount
verification is unable to recover on the current path and an alternate
path is available. The failover occurs automatically and transparently
and then mount verification proceeds.
A device enters mount verification when an I/O request fails because the device has become inoperative.
When the device comes back on line, either through automatic failover or operator intervention, the operating system validates the volume, restores the tape to the position it held when the I/O failure occurred, and retries the failed request.
Tape Caching
The RV20, TA90,
TK70, and TU81-Plus contain write-back volatile caches.
The host enables write-back volatile caches explicitly, either on
a per-unit basis or on a per-command basis. To enable caching on
a per-unit basis, enter the DCL MOUNT command specifying the qualifier
/CACHE=TAPE_DATA.
The Backup utility enables caching on a per-command basis. At the QIO level, the user can enable caching on a per-command basis by specifying the IO$M_NOWAIT function modifier on commands where it is legal (see Magnetic Tape I/O Functions).
In the unlikely event that cached data is lost, the system returns a fatal error and the device accepts no further I/O requests. Use the IO$M_FLUSH function code to ensure that all write-back-cached data is written out to the specified tape unit. The IO$_PACKACK, IO$_UNLOAD, IO$_REWINDOFF, and IO$_AVAILABLE function codes also flush the cache.
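The write-back cache semantics described above can be modeled in a few lines. The function-code names come from the text; the class and its behavior are a hypothetical sketch, not the actual driver:

```python
# Conceptual model (not VMS code) of write-back cache flushing:
# IO$M_FLUSH, and the IO$_PACKACK, IO$_UNLOAD, IO$_REWINDOFF, and
# IO$_AVAILABLE function codes, all flush cached data to the media.

FLUSHING_FUNCTIONS = {"IO$M_FLUSH", "IO$_PACKACK", "IO$_UNLOAD",
                      "IO$_REWINDOFF", "IO$_AVAILABLE"}

class CachedTapeUnit:
    def __init__(self):
        self.cache = []      # records accepted but not yet on tape
        self.tape = []       # records actually written to the media

    def write(self, record, nowait=False):
        if nowait:
            self.cache.append(record)   # write-back: complete immediately
        else:
            self.tape.append(record)    # write-through to the media

    def request(self, function):
        if function in FLUSHING_FUNCTIONS:
            self.tape.extend(self.cache)
            self.cache.clear()

unit = CachedTapeUnit()
unit.write("rec1", nowait=True)
unit.write("rec2", nowait=True)
unit.request("IO$M_FLUSH")
print(unit.tape)    # ['rec1', 'rec2'] -- cached data is now on the media
```

The point of the model: data written with the no-wait modifier completes before it reaches the media, so a flush (explicit or implied by unload-type functions) is what guarantees it is on tape.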
Master Adapters and Slave Formatters
The
operating system supports the use of many master adapters of the
same type on a system. For example, more than one MASSBUS adapter
(MBA) can be used on the same system. A master adapter is a device controller
capable of performing and synchronizing data transfers between memory
and one or more slave formatters.
The operating system also supports the use of multiple slave formatters per master adapter on a system. For example, more than one TM03 or TM78 magnetic tape formatter per MBA can be used on a system. A slave formatter accepts data and commands from a master adapter and directs the operation of one or more slave drives. The TM03 and the TM78 are slave formatters. The TE16, TU45, TU77, and TU78 magnetic tape drives are slave drives.
Data Check
After successful completion
of an I/O operation, a data check is made to compare the data in
memory with that on the tape. After a write or read (forward) operation,
the tape drive spaces backward and then performs a write-check data
operation. After a read operation in the reverse direction, the
tape drive spaces forward and then performs a write-check data reverse
operation. With the exception of the TS04 and TU80 drives, magnetic tape drivers support data checks at three levels.
Data check is distinguished from a BACKUP/VERIFY operation, which writes an entire save set, rewinds, and then compares the contents of the tape with the original data.
See TK50 Cartridge Tape System (VAX Only) for information on TK50 data check.
Read and write operations with data check can result in very slow performance on streaming tape drives.
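The write-check sequence described above (space backward over the record just transferred, then re-read and compare it with memory) can be sketched as follows. This is a conceptual model of the positioning, not driver code:

```python
# Conceptual sketch (not driver code) of the data-check sequence after a
# forward write or read: space backward one record, re-read it, and
# compare it with the data in memory.

def data_check_forward(tape, position, memory_buffer):
    """Check the record just transferred, tape[position - 1]."""
    position -= 1                    # space backward over the record
    record = tape[position]          # write-check: re-read the record
    position += 1                    # tape is again positioned after it
    return record == memory_buffer, position

tape = ["blk0", "blk1", "blk2"]
ok, pos = data_check_forward(tape, position=3, memory_buffer="blk2")
print(ok, pos)    # True 3
```

A read in the reverse direction is the mirror image: the drive spaces forward and performs the write-check in reverse. Either way, each checked record costs an extra reposition plus an extra pass over the data, which is why data check is slow on streaming drives.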
Error Recovery
Error recovery
is aimed at performing all possible operations that enable an I/O
operation to complete successfully. Magnetic tape error recovery
operations fall into two categories.
The error recovery algorithm uses a combination of these types of error recovery operations to complete an I/O operation.
Power failure recovery consists of repositioning the reel to the position held at the start of the I/O operation in progress at the time of the power failure, and then reexecuting this operation. This repositioning might or might not require operator intervention to reload the drives. When such operator intervention is required, "device not ready" messages are sent to the operator console to solicit reloading of mounted drives. Power failure recovery is not supported on VAXstation 2000 and MicroVAX 2000 systems.
Device timeout is treated as a fatal error, with a loss of tape position. A tape on which a timeout has occurred must be dismounted and rewound before the drive position can be established.
If a nonfatal controller/drive error occurs, the driver (or the controller, depending on the type of drive) attempts to reexecute the I/O operation up to 16 times before returning a fatal error. The driver repositions the tape before each retry.
The inhibit retry function modifier (IO$M_INHRETRY) inhibits all normal (nonspecial conditions) error recovery. If an error occurs, and the request includes that modifier, the operation is terminated immediately and the driver returns a failure status. IO$M_INHRETRY has no effect on power failure and timeout recovery.
The driver can write up to 16 extended interrecord gaps during the error recovery for a write operation. For the TE16, TU45, and TU77 magnetic tape drives, writing these gaps can be suppressed by specifying the inhibit extended interrecord gap function modifier (IO$M_INHEXTGAP). This modifier is ignored for the other magnetic tape drives.
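The retry policy described above (up to 16 attempts with the tape repositioned before each retry, and IO$M_INHRETRY suppressing all normal retries) can be sketched as follows. This is a hypothetical illustration, not the actual driver logic:

```python
# Conceptual sketch (not driver code) of the error recovery retry policy:
# up to 16 attempts, repositioning the tape before each retry;
# IO$M_INHRETRY (modeled as inhibit_retry) suppresses all normal retries.

MAX_RETRIES = 16

def execute_io(operation, reposition, inhibit_retry=False):
    attempts = 1 if inhibit_retry else MAX_RETRIES
    for attempt in range(attempts):
        if attempt > 0:
            reposition()             # back up to the start of the record
        try:
            return operation()
        except IOError:
            continue                 # nonfatal error: try again
    raise IOError("fatal error: retries exhausted")

# Usage: an operation that fails twice, then succeeds.
state = {"fails_left": 2, "repositions": 0}

def flaky_write():
    if state["fails_left"] > 0:
        state["fails_left"] -= 1
        raise IOError("parity error")
    return "ok"

result = execute_io(flaky_write,
                    lambda: state.__setitem__("repositions",
                                              state["repositions"] + 1))
print(result, state["repositions"])   # ok 2
```

With `inhibit_retry=True`, a single failure terminates the request immediately, matching the behavior of IO$M_INHRETRY for normal (nonspecial) error conditions.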
Streaming Tape Systems
Streaming
tape systems, such as the TK50, TK70, TU80, TU81, TU81-Plus, TA81,
and TZ30, use the supply and takeup reel mechanisms to control tape
speed and tension directly, which eliminates the need for more complex
and costly tension and drive components. Streaming tapes have a
very simple tape path, much like an audio reel-to-reel recorder.
If the operating system cannot write to, or read from, a streaming tape drive at a rate that keeps the drive in constant motion (streaming), the drive repositions itself when it runs out of commands to execute. These repositioning steps require approximately one-half second on TU8x tape drives and about 3 seconds on TK50 drives. When the relatively long reposition times exceed the time spent processing data, the result is a condition known as thrashing, which yields lower-than-expected data throughput.
Thrashing is entirely dependent on how fast the system can process data relative to the tape drive speed while streaming. Consequently, the greatest efficiency is obtained when you provide sufficient buffering to ensure continuous tape motion. Some streaming tape drives such as the TU80, TU81, TU81-Plus, and TA81 are dual-speed devices that automatically adjust the tape speed to maximize data throughput and minimize thrashing.
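A back-of-the-envelope model shows why thrashing is so costly. The reposition times (about 0.5 seconds for TU8x drives, about 3 seconds for the TK50) come from the text; the record size and streaming transfer rate below are hypothetical values chosen only for illustration:

```python
# Back-of-the-envelope model of thrashing throughput. Reposition times
# (0.5 s TU8x, 3 s TK50) are from the text; the record size and
# streaming rate are hypothetical illustrations.

def effective_rate(streaming_rate_kb_s, record_kb, reposition_s):
    """Throughput when the drive must reposition after every record."""
    transfer_s = record_kb / streaming_rate_kb_s
    return record_kb / (transfer_s + reposition_s)

# Assume 10 KB records on a drive that streams at 100 KB/s.
streaming = 100.0
print(round(effective_rate(streaming, 10.0, 0.5), 1))  # 16.7 (TU8x-class)
print(round(effective_rate(streaming, 10.0, 3.0), 1))  # 3.2  (TK50-class)
```

Under these assumed numbers, a drive that repositions after every record delivers only a small fraction of its streaming rate, which is why sufficient buffering to keep the tape in continuous motion pays off so heavily.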
The TK50 writes up to seven filler records to keep the tape in motion. These records are ignored when the data is read.