HP OpenVMS Systems

ask the wizard

Shared Memory, Threads, Interprocess Communication


The Question is:

 
Using DECthreads and C++ under OpenVMS Alpha 7.1, I need to set up
inter-process communication. The OpenVMS Programming Concepts Manual says in
section 2.2 that Global Sections provide the fastest way to do this.
 
I have allocated a global page file section and used the memory to allocate
hardware queue data structures (supported by the __PAL_INSQ... and
__PAL_REMQ... built-ins in DEC C++). My application needs to perform various
tasks on an on-going basis but process messages as soon as possible, and
here I have encountered difficulties.
 
* I considered using an Event Flag from a Common Event Flag Cluster to
indicate the transition from 0 messages to 1 message, and having the queue
reader wait for that flag if it encounters an empty queue; but the
DECthreads manual says in section B.11.8 that this will suspend the entire
process, not just the thread that calls SYS$WAIT.
 
* I could delay a short time and try again, but this kind of polling loop
drains CPU time from the process's other threads, which have useful work to
do.
 
Given that I must cooperate with code that uses DECthreads, have I chosen
the fastest tools available for inter-process communication? If so, how can
I respond most quickly to the appearance of a message without suspending
other threads? If not, what better combination of VMS Alpha tools would you
recommend?
 


The Answer is:

 
  Shared memory -- including group and system global sections, as
  well as galactic memory -- is the fastest available communications
  mechanism on OpenVMS.
 
  Shared memory does not provide event notification.  Event notification
  can be implemented via AST, OpenVMS locks, event flags, $qio, mailboxes,
  $hiber/$wake, intracluster communications (ICC), DECnet, IP, and
  various other interprocess communications mechanisms available on
  OpenVMS systems.
 
  The usual approach is to monitor the status returned by the insertion,
  and to use a notification call when the message is inserted into an
  empty queue.  The process retrieving messages from the queue continues
  to read messages from the queue until no more are available, then waits
  for a notification.  (Many programmers will further include a periodic
  timer using $setimr or $schdwk, triggering a routine that causes the
  reader to revisit the queue, to avoid problems that could arise from
  errant (lost) notification processing.)
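
  For example, the skeleton below shows the pattern in C; the
  queue_insert, queue_remove, notify_reader, wait_for_work, and
  process_message routines (and the QUEUE_WAS_EMPTY status) are
  hypothetical stand-ins for whatever queue and notification
  primitives the application actually uses:

      /* Producer: pay for a notification only on the transition
         from an empty queue to a non-empty queue, as reported by
         the insertion primitive. */
      if (queue_insert (&work_queue, msg) == QUEUE_WAS_EMPTY)
          notify_reader ();

      /* Consumer: drain the queue until it is empty, then wait.
         A periodic $setimr or $schdwk wakeup can guard against a
         lost notification. */
      for (;;)
      {
          void *msg;

          while ((msg = queue_remove (&work_queue)) != NULL)
              process_message (msg);
          wait_for_work ();
      }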
 
  As for notification in the DECthreads environment, if your application
  enables the kernel support for threads (i.e., if the main executable
  image is linked with /THREADS_ENABLE), then the best approach is to
  have the thread that is responsible for draining the queue wait in a
  call to $hiber when it finds the queue is empty.  (The thread or
  process inserting messages would use $wake.)
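
  In terms of the skeleton above, and assuming the consumer's process
  ID has been stored in reader_pid (an assumption for illustration),
  the wait and the notification reduce to:

      #include <starlet.h>

      static unsigned int reader_pid;   /* consumer's PID, set elsewhere */

      /* Consumer thread, on finding the queue empty: */
      (void) sys$hiber ();              /* sleep until a $wake arrives */

      /* Producer process, after inserting into an empty queue: */
      (void) sys$wake (&reader_pid, 0); /* rouse the consumer */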
 
  If the application does not enable the kernel support, then the
  consumer thread should instead block on a condition variable, which
  suspends only that thread and not the entire process.  To signal the
  condition variable, you can use an AST routine resulting from an
  interprocess request such as a mailbox I/O, an OpenVMS AST that is
  triggered when an OpenVMS lock is granted or is blocking another
  ("doorbell" locks), C signals, or similar -- just make sure you use
  the DECthreads function that is appropriate for use at AST level.
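
  In outline, the consumer's wait might look like the following sketch,
  where work_queue and the queue_empty predicate are hypothetical;
  pthread_cond_wait blocks only the calling thread, not the process:

      #include <pthread.h>

      static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  queue_cv    = PTHREAD_COND_INITIALIZER;

      static void wait_for_work (void)
      {
          pthread_mutex_lock (&queue_mutex);
          while (queue_empty (&work_queue))   /* re-test after wakeup */
              pthread_cond_wait (&queue_cv, &queue_mutex);
          pthread_mutex_unlock (&queue_mutex);
      }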
 
  When working with shared memory, you will want to be familiar with
  read-write ordering and with the necessity of memory barriers.  With
  C, you can use keywords such as volatile (which instructs the compiler
  to avoid caching values in registers), as well as the C asm directives
  and the PALcode memory barrier (__MB) builtin.  The interlocked
  primitives -- the interlocked queue and bitlock PALcode calls --
  include memory barriers in the PALcode executed for the call.
  Regardless of whether or not a particular PALcode call includes a
  memory barrier, your application should explicitly include any
  necessary and appropriate memory barriers.
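
  For instance, a flag published through a global section might be
  handled as follows; the shared_block layout and the map_global_section
  routine are hypothetical:

      #include <builtins.h>

      /* "volatile" forbids the compiler from caching the shared
         fields in registers; __MB() orders the writes as seen from
         other processors. */
      volatile struct shared_block *shr = map_global_section ();

      shr->data = value;    /* write the payload...            */
      __MB ();              /* ...then a memory barrier...     */
      shr->ready = 1;       /* ...and only then raise the flag */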
 
  The discussion of memory barriers assumes the application now runs,
  or will eventually run, on an OpenVMS symmetric multiprocessing (SMP)
  system.  Any assumption of a non-SMP system will fail if/when the
  application is moved to an SMP system -- the OpenVMS Wizard strongly
  recommends including correct, SMP-capable memory barriers from the
  start.
 
  For instance, your producer must issue a "memory barrier" instruction
  after writing the data to shared memory and before inserting it on
  the queue; likewise, your consumer must issue a memory barrier
  instruction after removing an item from the queue and before reading
  from its memory.  Otherwise, you risk seeing stale data, since, while
  the Alpha processor does provide coherent memory, it does not provide
  implicit ordering of reads and writes.  (That is, the write of the
  producer's data might reach memory after the write of the queue link,
  such that the consumer could read the new item from the queue but get
  the previous values from the item's memory.)
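
  In code, the barrier placement looks roughly like this.  The item
  layout is hypothetical (in practice the listhead and the items would
  live in the global section), and you should check builtins.h and the
  architecture manual for the exact prototypes and status encodings of
  the interlocked queue builtins:

      #include <builtins.h>

      /* Hypothetical queue item: the self-relative flink/blink pair
         must come first, and each entry (and the listhead) must be
         quadword-aligned. */
      struct item
      {
          int links[2];                 /* self-relative flink/blink */
          int payload;
      };

      static int queue_header[2];       /* quadword-aligned listhead */

      /* Producer: fill in the payload, barrier, then insert. */
      void produce (struct item *item, int value)
      {
          int status;

          item->payload = value;
          __MB ();                      /* data written before link  */
          status = __PAL_INSQHIL (queue_header, item->links);
          /* status encodes whether the queue was previously empty;
             see the Alpha ARM for the exact values. */
      }

      /* Consumer: remove, barrier, then read the payload. */
      int consume (void)
      {
          struct item *entry;

          if (__PAL_REMQHIL (queue_header, (void **) &entry) >= 1)
          {
              __MB ();                  /* link read before data     */
              return entry->payload;
          }
          return -1;                    /* queue was empty           */
      }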
 
  Of course, these read-write ordering and memory barrier issues do not
  arise in communications between threads in the same process, so long
  as the threads use mutexes to protect the shared data.  Like the
  interlocked PALcode calls, DECthreads mutexes provide memory barriers
  implicitly.  Note, however, that DECthreads does not provide any mutex
  that can synchronize across multiple processes on OpenVMS -- the
  DECthreads mutex support operates only among threads within a single
  OpenVMS process, so shared-memory traffic between processes must use
  the barrier techniques described above.
 
  Of the available interprocess synchronization mechanisms -- examples
  include common event flags, $hiber/$wake, mailbox I/O, and the
  distributed OpenVMS Lock Manager -- $hiber/$wake is among the cheapest.
  However, as you may be aware, using $hiber in a multithreaded process
  without the kernel support enabled can be unreliable, and it will block
  all threads from executing during the time that the calling thread is
  scheduled to "run".  Thus, in that situation, it is important to block
  the thread on a condition variable -- otherwise, the particular
  communications and synchronization mechanism chosen can be irrelevant,
  as they will all have nearly the same performance impact.
 
  The difficulty is, how do you signal the condition variable when an
  item is placed on the queue?  Since there is no way to do this from
  outside the process, the immediate solution is to provide a way to do
  it from inside the process, via an AST.  The two obvious possibilities
  are a completion AST from an asynchronous $qio call such as a mailbox
  read, and a "doorbell" AST set up by using the blocking AST support
  in the $enqw call.  Either of these mechanisms can be triggered by
  the producer when it inserts an item on the queue and the insertion
  status indicates that the queue was previously empty (there is no
  need to expend the cost of the system service if the queue was not
  previously empty, since the insertion that first made it non-empty
  will already have sent the notification).  The AST routine should
  call the appropriate DECthreads function to signal the condition
  variable (note that the usual signal function is not supported at
  AST level and you must use the variant with the "_int" suffix in
  its name, pthread_cond_signal_int_np) and then re-arm the
  notification mechanism (i.e., by calling $qio or $enqw again).
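
  A minimal sketch of the mailbox flavor of this doorbell follows.
  The mailbox name WIZ$DOORBELL, the buffer size, and the routine
  names are assumptions for illustration, and error handling (checking
  the service status values) is elided:

      #include <descrip.h>
      #include <iodef.h>
      #include <pthread.h>
      #include <starlet.h>

      static pthread_cond_t  queue_cv = PTHREAD_COND_INITIALIZER;
      static unsigned short  mbx_chan;
      static char            bell_buf[64];

      static void arm_doorbell (void);

      /* Completion AST: signal the condition variable using the
         AST-safe "_int" variant, then re-arm the mailbox read. */
      static void doorbell_ast (unsigned long astprm)
      {
          pthread_cond_signal_int_np (&queue_cv);
          arm_doorbell ();
      }

      /* Post an asynchronous mailbox read whose completion AST is
         the doorbell. */
      static void arm_doorbell (void)
      {
          (void) sys$qio (0, mbx_chan, IO$_READVBLK, 0,
                          doorbell_ast, 0,
                          bell_buf, sizeof bell_buf, 0, 0, 0, 0);
      }

      /* Once, at startup: create (or connect to) the mailbox and arm
         the first read.  The producer assigns a channel to the same
         mailbox and issues an IO$_WRITEVBLK $qio to it whenever an
         insertion finds the queue previously empty. */
      static void init_doorbell (void)
      {
          $DESCRIPTOR (mbx_name, "WIZ$DOORBELL");

          (void) sys$crembx (0, &mbx_chan, sizeof bell_buf, 0, 0, 0,
                             &mbx_name, 0);
          arm_doorbell ();
      }

  The consumer blocks in the condition-variable wait shown earlier, and
  the periodic timer AST mentioned above can call the same "_int" signal
  routine to guard against a lost doorbell.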
 
  For additional information on memory barriers, shared memory requirements,
  and read-write ordering, please see the Alpha Architecture Reference
  Manual.  The reference is available for download via pointers in the FAQ.
 
  Also please see existing discussions of shared memory and related
  topics, including (4487), (4051), (3791), (3635), (3365), (2486),
  (2637), (2181), (860), and others.
 

answer written or last revised on ( 27-AUG-2001 )
