- The event flag argument is specified in each
SYS$QIO request. Both of these event flags are explicitly declared in
event flag cluster 0. These variables contain the event flag numbers,
and not the event flag masks.
- The I/O status blocks are declared. Ensure
that the storage associated with these structures remains valid for the
lifetime of the asynchronous call. Do not declare these structures
within the local context of a call frame of a function that can exit
before the asynchronous call completes. Declare them with static or
external storage, within the stack frame of a function that remains
active for the duration of the call, or within other nonvolatile
storage.
The use of either LIB$GET_EF or
EFN$C_ENF (defined in efndef.h) is strongly recommended over the static
declaration of local event flags, because consistent use of either
technique avoids the unintended reuse of local event flags in different
parts of the same program, and the intermittent problems that can
ensue. Common event flags are somewhat less susceptible to such
problems because a process must associate with the cluster before use,
but the use and switching of event flag clusters, and the use of event
flags within each cluster, should still be carefully coordinated.
- Set up the event flag mask. Because both of
these event flags are located in the same event flag cluster, you can
use a simple OR to create the bit mask and can then wait for both flags
with a single SYS$WFLAND call.
- After both I/O requests are queued
successfully, the program calls the SYS$WFLAND system service to wait
until both I/O operations complete. In this service call, the
efn argument (here, Efn1) can specify any event flag number within
the event flag cluster containing the flags to be waited for; it serves
only to identify the cluster associated with the mask. The
EFMask argument specifies that the service waits for
flags 1 and 2.
You should specify a unique event flag (or
EFN$C_ENF) and a unique I/O status block for each asynchronous call.
- Note that the SYS$WFLAND system service (and
the other wait system services) waits for the event flag to be set; it
does not wait for the I/O operation to complete. If some other event
were to set the required event flags, the wait would complete
prematurely. Coordinate the use of event flags carefully.
- Use the I/O status blocks to determine whether
each of the two calls has completed. An I/O status block is initialized
to zero by the $QIO call and is set to a nonzero value when the call
completes. An event flag can be set spuriously (typically if there is
unintended sharing or reuse of event flags), so you should also check
the I/O status block. For a mechanism that checks both the event flag
and the IOSB, and thus ensures that the call has completed, see the
SYS$SYNCH system service. A minimal sketch of the overall pattern
described in this list follows.
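The following minimal sketch in C illustrates the pattern described in
this list: a per-request event flag obtained from LIB$GET_EF, a
per-request I/O status block in static storage, a bit mask built from
the flag numbers, and a SYS$WFLAND wait followed by I/O status block
checks. The device names, buffer sizes, and I/O function code are
assumptions made only for illustration.
#include <descrip.h>
#include <iodef.h>
#include <iosbdef.h>
#include <lib$routines.h>
#include <ssdef.h>
#include <starlet.h>
#include <stsdef.h>
// I/O status blocks in static storage, so they remain valid until the I/O completes
static IOSB Iosb1, Iosb2;
static char Buf1[80], Buf2[80];
int main(void)
{
    $DESCRIPTOR(Dev1, "TTA1:");    // assumed device names
    $DESCRIPTOR(Dev2, "TTA2:");
    unsigned short int Chan1, Chan2;
    unsigned int Efn1, Efn2, EFMask, RetStat;
    // Allocate two local event flags (both assumed to come from the same cluster)
    RetStat = lib$get_ef(&Efn1);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    RetStat = lib$get_ef(&Efn2);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    // Assign channels to the two devices
    RetStat = sys$assign(&Dev1, &Chan1, 0, 0);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    RetStat = sys$assign(&Dev2, &Chan2, 0, 0);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    // Queue the two asynchronous reads, each with its own event flag and IOSB
    RetStat = sys$qio(Efn1, Chan1, IO$_READVBLK, &Iosb1, 0, 0,
                      Buf1, sizeof Buf1, 0, 0, 0, 0);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    RetStat = sys$qio(Efn2, Chan2, IO$_READVBLK, &Iosb2, 0, 0,
                      Buf2, sizeof Buf2, 0, 0, 0, 0);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    // Both flags are in the same cluster, so one mask covers both;
    // the bit positions are relative to the base of the cluster
    EFMask = (1L << (Efn1 % 32)) | (1L << (Efn2 % 32));
    RetStat = sys$wfland(Efn1, EFMask);
    if (!$VMS_STATUS_SUCCESS(RetStat)) lib$signal(RetStat);
    // A nonzero IOSB status confirms that the request itself completed
    if (Iosb1.iosb$w_status == 0 || Iosb2.iosb$w_status == 0)
        lib$signal(SS$_ABORT);     // a flag was set by something else
    lib$free_ef(&Efn1);
    lib$free_ef(&Efn2);
    return SS$_NORMAL;
}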
6.6.9 Setting and Clearing Event Flags
System services that use event flags clear the event flag specified in
the system service call before they queue the timer or I/O request.
This ensures that the process knows the state of the event flag. If you
are using event flags in local clusters for other purposes, be sure the
flag's initial value is what you want before you use it.
The Set Event Flag (SYS$SETEF) and Clear Event Flag (SYS$CLREF) system
services set and clear specific event flags. For example, the following
system service call clears event flag 32:
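A minimal C sketch of such a call (the enclosing declarations and
headers are assumed):
// Clear event flag 32, the first flag in local cluster 1
RetStat = sys$clref(32);
if (!$VMS_STATUS_SUCCESS(RetStat))
    lib$signal(RetStat);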
The SYS$SETEF and SYS$CLREF services return successful status codes
that indicate whether the specified flag was set or cleared when the
service was called. The caller can thus determine the previous state of
the flag, if necessary. The codes returned are SS$_WASSET and
SS$_WASCLR.
All event flags in a common event flag cluster are initially clear when
the cluster is created. Section 6.6.10 describes the creation of common
event flag clusters.
6.6.10 Example of Using a Common Event Flag Cluster
The following example shows four cooperating processes that share a
common event flag cluster. The processes are named COLUMBIA, ENDEAVOUR,
ATLANTIS, and DISCOVERY, and are all in the same UIC group.
// **** Common Header File ****   (1)
.
.
.
#define EFC0 0 // EFC 0 (Local)
#define EFC1 32 // EFC 1 (Local)
#define EFC2 64 // EFC 2 (Common)
#define EFC3 96 // EFC 3 (Common)
int Efn0 = 0, Efn1 = 1, Efn2 = 2, Efn3 = 3;
int EFMask;
$DESCRIPTOR(EFCname,"ENTERPRISE");
.
.
.
// **** Process COLUMBIA **** (2)
//
// The image running within process COLUMBIA creates a common
// event flag cluster, associating it with Cluster 2
.
.
.
RetStat = sys$ascefc(EFC2, &EFCname,...); (3)
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
.
.
.
EFMask = 1L<<Efn1 | 1L<<Efn2 | 1L<<Efn3; (4)
// Wait for the specified event flags
RetStat = sys$wfland(EFC2, EFMask); (5)
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
.
.
.
// Disassociate the event flag cluster
RetStat = sys$dacefc(EFC2); (6)
// **** Process ENDEAVOUR ****
//
// The image running within process ENDEAVOUR associates with the
// specified event flag cluster, specifically associating it with
// the common event flag cluster 3.
.
.
.
// Associate the event flag cluster, using Cluster 3
RetStat = sys$ascefc(EFC3,&EFCname,...); (7)
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
// Set event flag 1 (the second flag in the cluster), and check for errors
RetStat = sys$setef(Efn1+EFC3); (8)
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
.
.
.
RetStat = sys$dacefc(EFC3);
// **** Process ATLANTIS ****
//
// The image running within process ATLANTIS associates with the
// specified event flag cluster, specifically associating it with
// the common event flag cluster 2.
// Associate the event flag cluster, using Cluster 2
RetStat = sys$ascefc(EFC2, &EFCname);
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
// Set the event flag, and check for errors
RetStat = sys$setef(Efn2+EFC2);
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
.
.
.
RetStat = sys$dacefc(EFC2);
// **** Process DISCOVERY **** (9)
// The image running within process DISCOVERY associates with the
// specified event flag cluster, specifically associating it with
// the common event flag cluster 3.
RetStat = sys$ascefc(EFC3, &EFCname);
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
// Wait for the flag set by ENDEAVOUR, and check for errors
RetStat = sys$waitfr(Efn1+EFC3);
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
// Set event flag 3, and check for errors
RetStat = sys$setef(Efn3+EFC3);
if (!$VMS_STATUS_SUCCESS(RetStat))
lib$signal(RetStat);
.
.
.
RetStat = sys$dacefc(EFC3);
- Set up some common definitions used by the
various applications, including preprocessor defines for the event flag
clusters, and some variables and values for particular event flags
within the clusters.
- Assume that COLUMBIA is the first process to
issue the SYS$ASCEFC system service and is therefore the creator of the
ENTERPRISE event flag cluster. Because this is a newly created common
event flag cluster, all event flags in it are clear. COLUMBIA then waits
for the specified event flags before continuing; while it waits, the
process is placed in a common event flag (CEF) wait state.
- Use bit-shifts and an OR operation to create
a bit mask from the bit numbers.
- The SYS$ASCEFC call creates the relationship
of the named event flag cluster, the specified range of common event
flags, and the process. It also creates the event flag cluster, if
necessary.
- The SYS$DACEFC call disassociates the
specified event flag cluster from the COLUMBIA process.
- In process ENDEAVOUR, the argument
EFCname in the SYS$ASCEFC system service call is a
pointer to the string descriptor containing the name to be assigned to
the event flag cluster; in this example, the cluster is named
ENTERPRISE and was created by process COLUMBIA. While COLUMBIA mapped
this cluster as cluster 2, this service call associates this name with
cluster 3, event flags 96 through 127. Cooperating processes ENDEAVOUR,
ATLANTIS, and DISCOVERY must use the same character string name to
refer to this cluster.
- The continuation of process COLUMBIA depends
on (unspecified) work done by processes ENDEAVOUR, ATLANTIS, and
DISCOVERY. The SYS$WFLAND system service call specifies a mask
indicating the event flags that must be set before process COLUMBIA can
continue. The mask in this example (binary 1110) indicates that the
second, third, and fourth flags in the cluster must be set. Process
ENDEAVOUR sets the second event flag in the event flag cluster
longword, using the SYS$SETEF system service call.
- Process ATLANTIS also associates with the
cluster; like COLUMBIA, it refers to it as cluster 2 (with event flags
in the range 64 through 95). Thus, when process ATLANTIS sets an event
flag, it must bias the flag number for the particular event flag
cluster longword.
- Process DISCOVERY associates with the
cluster, waits for an event flag set by process ENDEAVOUR, and sets an
event flag itself.
6.6.11 Example of Using Event Flag Routines and Services
This section contains an example of how to use event flag services.
Common event flags are often used for communicating between a parent
process and a created subprocess. In the following example, REPORT.FOR
creates a subprocess to execute REPORTSUB.FOR, which performs a number
of operations.
After REPORTSUB.FOR performs its first operation, the two processes can
perform in parallel. REPORT.FOR and REPORTSUB.FOR use the common event
flag cluster named JESSIER to communicate.
REPORT.FOR associates the cluster name with a common event flag
cluster, creates a subprocess to execute REPORTSUB.FOR and then waits
for REPORTSUB.FOR to set the first event flag in the cluster.
REPORTSUB.FOR performs its first operation, associates the cluster name
JESSIER with a common event flag cluster, and sets the first flag. From
then on, the processes execute concurrently.
REPORT.FOR
.
.
.
! Associate common event flag cluster
STATUS = SYS$ASCEFC (%VAL(64),
2 'JESSIER',,)
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Create subprocess to execute concurrently
MASK = IBSET (MASK,0)
STATUS = LIB$SPAWN ('RUN REPORTSUB', ! Image
2 'INPUT.DAT', ! SYS$INPUT
2 'OUTPUT.DAT', ! SYS$OUTPUT
2 MASK)         ! Flags (NOWAIT)
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Wait for response from subprocess.
STATUS = SYS$WAITFR (%VAL(64))
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
.
.
.
REPORTSUB.FOR
.
.
.
! Do operations necessary for
! continuation of parent process.
.
.
.
! Associate common event flag cluster
STATUS = SYS$ASCEFC (%VAL(64),
2 'JESSIER',,)
IF (.NOT. STATUS)
2 CALL LIB$SIGNAL (%VAL(STATUS))
! Set flag for parent process to resume
STATUS = SYS$SETEF (%VAL(64))
.
.
.
6.7 Synchronizing Operations with System Services
A number of system services can be executed either synchronously or
asynchronously; examples include the following:
- SYS$GETJPI and SYS$GETJPIW
- SYS$QIO and SYS$QIOW
The W at the end of the system service name indicates the synchronous
version of the service.
The asynchronous version of a system service queues a request and
returns control to your program immediately, without waiting for the
request to complete. You can perform other operations while the system
service executes. To avoid data corruption, do not read or write any of
the buffers or item lists referenced by the system service call until
the asynchronous portion of the call has completed. Further, do not use
self-referential or self-modifying item lists.
Typically, you pass an event flag and an I/O status block to an
asynchronous system service. When the system service completes, it sets
the event flag and places the final status of the request in the I/O
status block. Use the SYS$SYNCH system service to ensure that the
system service has completed. You pass to SYS$SYNCH the event flag and
I/O status block that you passed to the asynchronous system service;
SYS$SYNCH waits for the event flag to be set and then examines the I/O
status block to be sure that the system service rather than some other
program set the event flag. If the I/O status block is still 0,
SYS$SYNCH waits until the I/O status block is filled.
The following example shows the use of the SYS$GETJPI system service:
! Data structure for SYS$GETJPI
.
.
.
INTEGER*4 STATUS,
2 FLAG,
2 PID_VALUE
! I/O status block
STRUCTURE /STATUS_BLOCK/
INTEGER*2 JPISTATUS,
2 LEN
INTEGER*4 ZERO /0/
END STRUCTURE
RECORD /STATUS_BLOCK/ IOSTATUS
.
.
.
! Call SYS$GETJPI and wait for information
STATUS = LIB$GET_EF (FLAG)
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
STATUS = SYS$GETJPI (%VAL(FLAG),
2 PID_VALUE,
2 ,
2 NAME_BUF_LEN,
2 IOSTATUS,
2 ,)
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
.
.
.
STATUS = SYS$SYNCH (%VAL(FLAG),
2 IOSTATUS)
IF (.NOT. IOSTATUS.JPISTATUS) THEN
CALL LIB$SIGNAL (%VAL(IOSTATUS.JPISTATUS))
END IF
END
The synchronous version of a system service acts as if you had used the
asynchronous version followed immediately by a call to SYS$SYNCH;
however, it behaves this way only if you specify a status block. If you
omit the I/O status block, the result is as though you called the
asynchronous version followed by a call to SYS$WAITFR. Regardless of
whether you use the synchronous or asynchronous version of a system
service, if you omit the efn argument, the service
uses event flag 0.
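For comparison with the preceding example, the following minimal C
sketch (the item list requests only the process ID; the use of
EFN$C_ENF and the printed format are illustrative choices) shows the
synchronous form of the same service with an explicit I/O status block:
#include <efndef.h>
#include <iosbdef.h>
#include <jpidef.h>
#include <lib$routines.h>
#include <ssdef.h>
#include <starlet.h>
#include <stdio.h>
#include <stsdef.h>
int main(void)
{
    unsigned int Pid = 0;
    unsigned short int RetLen = 0;
    IOSB Iosb;
    unsigned int RetStat;
    struct {
        unsigned short int BufLen, ItmCod;
        void *BufAdr;
        unsigned short int *RetLenAdr;
    } ItmLst[2];
    // One-entry item list requesting the process ID, followed by a terminator
    ItmLst[0].BufLen = sizeof Pid;
    ItmLst[0].ItmCod = JPI$_PID;
    ItmLst[0].BufAdr = &Pid;
    ItmLst[0].RetLenAdr = &RetLen;
    ItmLst[1].BufLen = 0;
    ItmLst[1].ItmCod = 0;
    ItmLst[1].BufAdr = 0;
    ItmLst[1].RetLenAdr = 0;
    // The synchronous form returns only after the request completes;
    // the I/O status block holds the final status of the request itself
    RetStat = sys$getjpiw(EFN$C_ENF, 0, 0, ItmLst, &Iosb, 0, 0);
    if ($VMS_STATUS_SUCCESS(RetStat))
        RetStat = Iosb.iosb$w_status;
    if (!$VMS_STATUS_SUCCESS(RetStat))
        lib$signal(RetStat);
    printf("Process ID is %08X\n", Pid);
    return SS$_NORMAL;
}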
Chapter 7 Synchronizing Access to Resources
This chapter describes the use of the lock manager to synchronize
access to shared resources and contains the following sections:
Section 7.1 describes how the lock manager synchronizes processes to
a specified resource.
Section 7.2 describes how to use the dedicated CPU lock manager to
enhance system performance.
Section 7.3 describes the concepts of resources and locks.
Section 7.4 describes how to use the SYS$ENQ and SYS$ENQW system
services to queue lock requests.
Section 7.5 describes specialized features of locking techniques.
Section 7.6 describes how to use the SYS$DEQ system service to
dequeue the lock.
Section 7.7 describes how applications can perform local buffer
caching.
Section 7.8 presents a code example of how to use lock management
services.
7.1 Synchronizing Operations with the Lock Manager
Cooperating processes can use the lock manager to synchronize access to
a shared resource (for example, a file, program, or device). This
synchronization is accomplished by allowing processes to establish
locks on named resources. All processes that access the shared
resources must use the lock management services; otherwise, the
synchronization is not effective.
Note
The use of the term resource throughout this chapter means
shared resource.
To synchronize access to resources, the lock management services
provide a mechanism that allows processes to wait in a queue until a
particular resource is available.
The lock manager does not ensure proper access to the resource; rather,
the programs must respect the rules for using the lock manager. The
rules required for proper synchronization of access to the resource are
as follows (a minimal sketch follows the list):
- The resource must always be referred to by an agreed-upon name.
- Access to the resource is always accomplished by queuing a lock
request with the SYS$ENQ or SYS$ENQW system service.
- All lock requests that are placed in a wait queue must wait for
access to the resource.
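For example, a minimal C sketch of this pattern (the resource name and
lock mode are illustrative assumptions, and the lock status block
layout is declared inline for brevity) might look like the following:
#include <descrip.h>
#include <efndef.h>
#include <lckdef.h>
#include <lib$routines.h>
#include <ssdef.h>
#include <starlet.h>
#include <stsdef.h>
// Private declaration of the lock status block layout used by this sketch
typedef struct {
    unsigned short int lksb$w_status;
    unsigned short int lksb$w_reserved;
    unsigned int lksb$l_lkid;
    char lksb$b_valblk[16];
} LKSB_T;
int main(void)
{
    $DESCRIPTOR(ResNam, "DEMO_RESOURCE");   // agreed-upon resource name (assumed)
    LKSB_T Lksb;
    unsigned int RetStat;
    // Queue an exclusive-mode lock on the named resource and wait until it is granted
    RetStat = sys$enqw(EFN$C_ENF, LCK$K_EXMODE, (void *) &Lksb, 0,
                       &ResNam, 0, 0, 0, 0, 0, 0, 0);
    if ($VMS_STATUS_SUCCESS(RetStat))
        RetStat = Lksb.lksb$w_status;
    if (!$VMS_STATUS_SUCCESS(RetStat))
        lib$signal(RetStat);
    // ... access the shared resource here ...
    // Release the lock
    RetStat = sys$deq(Lksb.lksb$l_lkid, 0, 0, 0);
    if (!$VMS_STATUS_SUCCESS(RetStat))
        lib$signal(RetStat);
    return SS$_NORMAL;
}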
A process can choose to lock a resource and then create a subprocess to
operate on this resource. In this case, the program that created the
subprocess (the parent program) should not exit until the subprocess
has exited. To ensure that the parent program does not exit before the
subprocess, specify an event flag to be set when the subprocess exits
(use the completion-efn argument of LIB$SPAWN). Before
exiting from the parent program, use SYS$WAITFR to ensure that the
event flag is set. (You can suppress the logout message from the
subprocess by using the SYS$DELPRC system service to delete the
subprocess instead of allowing the subprocess to exit.)
Table 7-1 summarizes the lock manager services.
Table 7-1 Lock Manager Services

Routine         Description
SYS$ENQ(W)      Queues a new lock or lock conversion on a resource
SYS$DEQ         Releases locks and cancels lock requests
SYS$GETLKI(W)   Obtains information about the lock database
7.2 Using the Dedicated CPU Lock Manager (Alpha Only)
The Dedicated CPU Lock Manager is a feature that improves performance
on large SMP systems that have heavy lock manager activity. The feature
dedicates a CPU to performing lock manager operations.
A dedicated CPU has the following advantages for overall system
performance:
- Reduces the amount of MP_SYNCH time
- Provides good CPU cache utilization
7.2.1 Implementing the Dedicated CPU Lock Manager
For the Dedicated CPU Lock Manager to be effective, systems must have a
high CPU count and a high amount of MP_SYNCH due to the lock manager.
The amount of MP_SYNCH can be seen with the MONITOR utility by using
the MONITOR MODE command. If your system has more than 5 CPUs and if
MP_SYNCH is higher than 200 percent, then your system may be able to
take advantage of the Dedicated CPU Lock Manager. Usage of the spinlock
trace feature under SDA can help determine if the lock manager is
contributing to the high amount of MP_SYNCH time.
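For example, the following command displays the time spent in each
processor mode, including MP synchronization; the qualifiers you add
depend on the interval and statistics you want:
$ MONITOR MODES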
The Dedicated CPU Lock Manager is implemented by the LCKMGR_SERVER
process, which runs at priority 63. When the Dedicated CPU Lock Manager
is turned on, this process runs in a compute-bound loop looking for
lock manager work to perform. Because this process polls for work, it
is always computable; with a priority of 63, the process never gives up
the CPU. Thus, a whole CPU is consumed by this process.
When a program calls either the $ENQ or $DEQ system service while the
Dedicated CPU Lock Manager is running, the lock manager request is
placed on a work queue for the Dedicated CPU Lock Manager. While it
waits for the lock request to be processed, the calling process spins
in kernel mode at IPL 2. After the dedicated CPU processes the request,
the status for the system service is returned to the process.
The Dedicated CPU Lock Manager is dynamic and can be turned off if it
provides no benefit, for example, on systems with a small number of
CPUs or a small amount of locking activity. When the Dedicated CPU Lock
Manager is turned off, the LCKMGR_SERVER process is placed in a HIB
(hibernate) state. The process cannot be deleted once it has been
started.
7.2.2 Enabling the Dedicated CPU Lock Manager
To use the Dedicated CPU Lock Manager, perform the following steps in
order:
- Enable the Dedicated CPU Lock Manager by setting the dynamic system
parameter LCKMGR_MODE to an appropriate value (a SYSGEN example appears
at the end of this section). By default, LCKMGR_MODE=0 and the
Dedicated CPU Lock Manager is disabled. When LCKMGR_MODE=1, the system
behaves as if LCKMGR_MODE=2. When LCKMGR_MODE=n (where n is between
2 and 255, inclusive), n denotes the minimum number of active
CPUs required to enable the Dedicated CPU Lock Manager; when there are
at least n active CPUs in the current system, the system
automatically enables the Dedicated CPU Lock Manager.
- Activate the Dedicated CPU Lock Manager by starting the
LCKMGR_SERVER process with the following command:
$ RUN SYS$SYSTEM:LCKMGR_SERVER
This command creates a detached process named LCKMGR_SERVER.
LCKMGR_SERVER runs whenever the Dedicated CPU Lock Manager is enabled,
and while running, it performs the operations of the Dedicated CPU Lock
Manager. If the number of active CPUs drops below n (for
example, because a CPU is stopped with a STOP/CPU command or removed by
a Galaxy CPU reassignment operation), the system automatically disables
the Dedicated CPU Lock Manager, and LCKMGR_SERVER hibernates within one
second. When the number of active CPUs is again at least n, the
system re-enables the Dedicated CPU Lock Manager, and LCKMGR_SERVER
awakens and resumes running.
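If you need to change LCKMGR_MODE on the running system, you can use
SYSGEN; in the following sketch, the value 8 is only an example:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_MODE 8
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT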