HP OpenVMS Systems Documentation
OpenVMS Programming Concepts Manual
7.6 Dequeuing Locks
When a process no longer needs a lock on a resource, you can dequeue the lock by using the Dequeue Lock Request (SYS$DEQ) system service. Dequeuing locks means that the specified lock request is removed from the queue it is in. Locks are dequeued from any queue: Granted, Waiting, or Conversion (see Section 7.3.6). When the last lock on a resource is dequeued, the lock management services delete the name of the resource from its data structures.
The four arguments to the SYS$DEQ macro (lkid, valblk, acmode, and flags) are optional. The lkid argument allows the process to specify a particular lock to be dequeued, using the lock identification returned in the lock status block.
The valblk argument contains the address of a 16-byte lock value block. If the lock being dequeued is in protected write or exclusive mode, the contents of the lock value block are stored in the lock value block associated with the resource. If the lock being dequeued is in any other mode, the lock value block is not used. The lock value block can be used only if a particular lock is being dequeued.
Three flags are available:
The following is an example of dequeuing locks:
User-mode locks are automatically dequeued when the image exits.
The lock management services provide methods for applications to
perform local buffer caching (also called distributed
buffer management). Local buffer caching allows a number of processes
to maintain copies of data (disk blocks, for example) in buffers local
to each process and to be notified when the buffers contain invalid
data because of modifications by another process. In applications where
modifications are infrequent, substantial I/O can be saved by
maintaining local copies of buffers. You can use either the lock value
block or blocking ASTs (or both) to perform buffer caching.
To support local buffer caching using the lock value block, each process maintaining a cache of buffers maintains a null mode lock on a resource that represents the current contents of each buffer. (For this discussion, assume that the buffers contain disk blocks.) The value block associated with each resource is used to contain a disk block "version number." The first time a lock is obtained on a particular disk block, the current version number of that disk block is returned in the lock value block of the process. If the contents of the buffer are cached, this version number is saved along with the buffer. To reuse the contents of the buffer, the null lock must be converted to protected read mode or exclusive mode, depending on whether the buffer is to be read or written. This conversion returns the latest version number of the disk block. The version number of the disk block is compared with the saved version number. If they are equal, the cached copy is valid. If they are not equal, a fresh copy of the disk block must be read from disk.
Whenever a procedure modifies a buffer, it writes the modified buffer
to disk and then increments the version number before converting the
corresponding lock to null mode. In this way, the next process that
attempts to use its local copy of the same buffer finds a version
number mismatch and must read the latest copy from disk rather than use
its cached (now invalid) buffer.
Blocking ASTs support local buffer caching in two ways. One technique
involves deferred buffer writes; the other technique is an alternative
method of local buffer caching without using value blocks.
When local buffer caching is being performed, a modified buffer must be
written to disk before the exclusive mode lock can be released. If a
large number of modifications are expected (particularly over a short
period of time), you can reduce disk I/O by both maintaining the
exclusive mode lock for the entire time that the modifications are
being made and by writing the buffer once. However, this prevents other
processes from using the same disk block during this interval. This
problem can be avoided if the process holding the exclusive mode lock
has a blocking AST. The AST notifies the process if another process
needs to use the same disk block. The holder of the exclusive mode lock
can then write the buffer to disk and convert its lock to null mode
(thereby allowing the other process to access the disk block). However,
if no other process needs the same disk block, the first process can
modify it many times but write it only once.
To perform local buffer caching using blocking ASTs, processes do not
convert their locks to null mode from protected read or exclusive mode
when finished with the buffer. Instead, they receive blocking ASTs
whenever another process attempts to lock the same resource in an
incompatible mode. With this technique, processes are notified that
their cached buffers are invalid as soon as a writer needs the buffer,
rather than the next time the process tries to use the buffer.
The choice between using version numbers or blocking ASTs to perform local buffer caching depends on the characteristics of the application. An application that uses version numbers performs more lock conversions, whereas one that uses blocking ASTs delivers more ASTs. Note that these techniques are compatible: some processes can use one technique while other processes use the other at the same time. Generally, blocking ASTs are preferable in a low-contention environment, whereas version numbers are preferable in a high-contention environment. You can even invent combined or adaptive strategies.
In a combined strategy, the applications use specific techniques. If a process is expected to reuse the contents of a buffer in a short amount of time, the application uses blocking ASTs; if there is no reason to expect a quick reuse, the application uses version numbers.
In an adaptive strategy, an application makes evaluations based on the rate of blocking ASTs and conversions. If blocking ASTs arrive frequently, the application changes to using version numbers; if many conversions take place and the same cached copy remains valid, the application changes to using blocking ASTs.
For example, suppose one process continually displays the state of a database, while another occasionally updates it. If version numbers are used, the displaying process must always make sure that its copy of the database is valid (by performing a lock conversion); if blocking ASTs are used, the display process is informed every time the database is updated. On the other hand, if updates occur frequently, the use of version numbers is preferable to continually delivering blocking ASTs.
To share a terminal between a parent process and a subprocess, each process requests a null lock on a shared resource name. Then, each time one of the processes wants to perform terminal I/O, it requests an exclusive lock, performs the I/O, and requests a null lock.
Because the lock manager is effective only between cooperating programs, the program that created the subprocess should not exit until the subprocess has exited. To ensure that the parent does not exit before the subprocess, specify an event flag to be set when the subprocess exits (the num argument of LIB$SPAWN). Before exiting from the parent program, use SYS$WAITFR to ensure that the event flag has been set. (You can suppress the logout message from the subprocess by using the SYS$DELPRC system service to delete the subprocess instead of allowing the subprocess to exit.)
After the parent process exits, a created process cannot synchronize access to the terminal and should use the SYS$BRKTHRU system service to write to the terminal.
This part describes the use of asynchronous system traps (ASTs), and
the use of condition-handling routines and services.
| System Service | Task Performed |
| --- | --- |
| SYS$SETAST | Enable or disable reception of AST requests |
The system services that use the AST mechanism accept as an argument the address of an AST service routine, that is, a routine to be given control when the event occurs.
Table 8-2 shows some of the services that use ASTs.
| System Service | Task Performed |
| --- | --- |
| SYS$ENQ | Enqueue Lock Request |
| SYS$GETDVI | Get Device/Volume Information |
| SYS$GETJPI | Get Job/Process Information |
| SYS$GETSYI | Get Systemwide Information |
| SYS$QIO | Queue I/O Request |
| SYS$SETPRA | Set Power Recovery AST |
| SYS$UPDSEC | Update Section File on Disk |
The following sections describe in more detail how ASTs work and how to use them.
8.2 Declaring and Queuing ASTs
Most ASTs occur as the result of the completion of an asynchronous event that is initiated by a system service (for example, a SYS$QIO or SYS$SETIMR request) when the process requests notification by means of an AST.
The Declare AST (SYS$DCLAST) system service can be called to invoke a subroutine as an AST. With this service, a process can declare an AST only for the same or for a less privileged access mode.
The following sections present programming information about declaring
and using ASTs.
8.2.1 Reentrant Code and ASTs
Compiled code that is generated by Compaq compilers is reentrant, and Compaq compilers normally generate AST routine local data that is reentrant. Shared static data, shared external data, Fortran COMMON blocks, and group or system global section data are not inherently reentrant, and usually require explicit synchronization.
Because the queuing mechanism for an AST does not provide for returning a function value or passing more than one argument, you should write an AST routine as a subroutine. This subroutine should use nonvolatile storage that is valid over the life of the AST. To establish nonvolatile storage, you can use the LIB$GET_VM run-time routine. You can also use a high-level language's storage keywords to create permanent nonvolatile storage. For instance, in C you can use the extern or static storage-class keywords, or allocate heap storage with the malloc() routine.
In some cases, a system service that queues an AST (for example, SYS$GETJPI) allows you to specify an argument for the AST routine. If you choose to pass the argument, the AST routine must be written to accept it.
8.2.1.1 The Call Frame
When a routine is active under OpenVMS, it has available to it temporary storage on a stack, in a construct known as a stack frame, or call frame. Each time a subroutine call is made, another call frame is pushed onto the stack and storage is made available to that subroutine. Each time a subroutine returns to its caller, the subroutine's call frame is pulled off the stack, and the storage is made available for reuse by other subroutines. Call frames therefore are nested. Outer call frames remain active longer, and the outermost call frame, the call frame associated with the main routine, is normally always available.
A primary exception to this call frame condition is when an exit handler runs. With an exit handler running, only static data is available. The exit handler effectively has its own call frame. Exit handlers are declared with the SYS$DCLEXH system service.
The use of call frames for storage means that all routine-local data is reentrant; that is, each subroutine has its own storage for the routine-local data.
The allocation of storage that is known to the AST must be in memory
that is not volatile over the possible interval the AST might be
pending. This means you must be familiar with how the compilers
allocate routine-local storage using the stack pointer and the frame
pointer. This storage is valid only while the stack frame is active.
Should the routine that is associated with the stack frame return, the AST cannot write to this storage without risking severe corruption of application data.
8.2.2 Shared Data Access with Readers and Writers
There are two types of shared data access: data shared by one writer and multiple readers, and data shared by multiple writers.
If there is shared data access with multiple readers, your application must be able to tolerate reading a momentarily stale value, looping back and picking up a fresh value from the shared cell when necessary.
With multiple writers, often the AST is the writer, and the mainline code is the reader or updater. That is, the mainline processes all available work until it cannot dequeue any more requests, releasing each work request to the free queue as appropriate, and then hibernates when no more work is available. The AST then activates, pulls free blocks off the free queue, fills entries into the pending work queue, and then wakes the mainline code. In this situation, you should use a scheduled wakeup call for the mainline code in case work gets into the queue and no wakeup is pending.
Having multiple writers is possibly the most difficult to code, because
you cannot always be sure where the mainline code is in its processing
when the AST is activated. A suggestion is to use a work queue and a
free queue at a known shared location, and to use entries in the queue
to pass the work or data between the AST and the mainline code.
Interlocked queue routines, such as LIB$INSQHI and LIB$REMQTI, are
available in the Run-Time Library.
8.2.3 Shared Data Access and AST Synchronization
An AST routine might invoke subroutines that are also invoked by another routine. To prevent conflicts, a program unit can use the SYS$SETAST system service to disable AST interrupts before calling a routine that might also be invoked by an AST. You typically need SYS$SETAST only if there are noninterlocked (nonreentrant) variables, or if the code itself is nonreentrant. Once the shared routine has executed, the program unit can use the same service to reenable AST interrupts. In general, you should avoid SYS$SETAST calls because of their implications for application performance.
Implicit synchronization can be achieved for data that is shared for write by using only AST routines to write the data, since only one AST can be running at any one time. You can also use the SYS$DCLAST system service to call a subroutine in AST mode.
Explicit synchronization can be unnecessary when there are no read-modify-write cells, as in cases where there is one writer with one or more readers. However, if there are multiple writers, you must consider explicit synchronization of access to the data cells. This can be achieved using bitlocks (LIB$BBCCI), hardware interlocked queues (LIB$INSQHI), interlocked add and subtract (LIB$ADAWI) routines, or other techniques. These primitives are available directly in assembler, through language keywords in C and other languages, and through OpenVMS RTL routines from all languages. On Alpha systems, you can also use the load-locked (LDx_L) and store-conditional (STx_C) instructions to manage synchronization.
For details of synchronization, see Chapter 6. Also see processor architecture manuals about the necessary synchronization techniques and for common synchronization considerations.