
HP OpenVMS Systems

ask the wizard

Mixing Threads and ASTs?


The Question is:

 
Mr Wizard,
   Ref: WIZ_5843 (thanks for the reply)
 
   In my previous question a wizard said...
 
>>  The aio POSIX.4 routines are part of the existing POSIX environment,
>> and are also part of the new DII COE work.
Does the "existing POSIX environment" mean it's available NOW, prior to
DII COE, if so where can I get it for OpenVMS V7.2-1 ?
 
  In the same topic, it was also stated that...
>>  The difficulties involved in mixing DECthreads and ASTs are inherent
>> in the internal implementations of DECthreads and of ASTs.
 
Since asynchronous I/O via the AIO_xxx routines and Pthreads can be
mixed, does this imply that ASTs + Pthreads will be able to be mixed
as well with the DII COE implementation?
 
thanks in advance
 
-Fred
 
 
 


The Answer is :

 
  A POSIX kit is (was) available on older OpenVMS releases, but it is
  not available on OpenVMS V7.2-1; the existing POSIX kits are not
  compatible with OpenVMS V7.2-1, and are no longer supported.  DII COE
  (V7.2-6C1) is in beta-test at present; integration of the DII COE work
  back into the OpenVMS mainline releases is not expected prior to OpenVMS
  V7.4 -- at the earliest.
 
  There may be an updated POSIX kit for OpenVMS V7.2-1 or V7.3, but --
  assuming that a decision is made to provide it -- the kit will not be
  made available until after the release of V7.2-6C1.
 
  As for the second question...
 
  The aio_xxx routines should be assumed to be quite different from the
  OpenVMS AST routines.  (These routines are currently implemented using
  $qio, $synch, and AST completions, but any assumptions of the internal
  implementation are perilous at best.)
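 
  For reference, the POSIX.4 asynchronous I/O interface follows the usual
  aiocb pattern -- queue the transfer, then poll or wait for completion.
  A minimal sketch in C (the file name and buffer size are illustrative
  only, and the kit's headers may differ in detail):
 
      #include <aio.h>
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
 
      int main(void)
      {
          static char buf[512];
          struct aiocb cb;
          int fd = open("data.dat", O_RDONLY);     /* hypothetical file */
          if (fd < 0) return 1;
 
          memset(&cb, 0, sizeof cb);
          cb.aio_fildes = fd;
          cb.aio_buf    = buf;
          cb.aio_nbytes = sizeof buf;
          cb.aio_offset = 0;
 
          if (aio_read(&cb) != 0) return 1;        /* queue the read */
 
          while (aio_error(&cb) == EINPROGRESS)    /* poll; aio_suspend() */
              ;                                    /* would also serve    */
 
          printf("read %d bytes\n", (int) aio_return(&cb));
          close(fd);
          return 0;
      }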
 
  Threads (of any sort) and OpenVMS AST routines must be carefully mixed.
  There are stringent restrictions on what an application can do within
  an AST routine executing within a multithreaded process.  If the
  AST routine needs to do more than call (most) system services or
  write longword- or quadword-aligned data cells, that code should
  likely be moved into a thread.  (Many APIs are either AST-safe
  or thread-safe; most are not both AST- and thread-safe, and -- in a
  multithreaded process -- the calls that claim to be thread-safe are
  likely not AST-safe.  The OpenVMS Wizard is aware of one OpenVMS
  system service -- sys$mount -- that is known to be not thread-safe.)
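 
  For illustration, the following sketch shows about as much as an AST
  completion routine should attempt in a multithreaded process; the names
  are hypothetical:
 
      static volatile int io_done = 0;   /* naturally longword-aligned cell */
 
      /* AST completion routine: write an aligned cell and return.  Anything
         heavier -- arbitrary API calls, waits, memory management -- should
         be handed off to a thread instead.  */
      void io_done_ast(void *astprm)
      {
          io_done = 1;
      }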
 
  Upcalls use ASTs, meaning you must have ASTs enabled in order
  to timeslice between threads.  From the perspective of the OpenVMS
  executive, upcalls and thread time-slicing are implemented via ASTs
  -- these ASTs are referred to here as "system ASTs".  Also from the
  perspective of OpenVMS, "application ASTs" are not administered by
  the executive -- rather, they are handed off to the threads library
  via an upcall (which happens to be a "system AST").  The threads
  library then causes the initial thread to begin executing the
  application AST, but the AST environment is not what OpenVMS would
  consider AST mode -- the threads library simulates the AST.  (The
  threads library does not set the USER bit in the ASTSR field of the
  PHD.)  That said, the "application AST" behaves very much like a
  hardware AST within a non-threaded application.
  The threads library emulation underlying the "application AST" extends
  to support of application calls to AST-related routines including
  lib$ast_in_prog, $getjpi, $setast, etc.  (With upcalls enabled, the
  application can freely use and alter the "application AST" state,
  short of making direct PALcode calls.)
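 
  For example, code that needs to know whether it is executing at
  (emulated) AST level can still ask in the usual way; a minimal sketch:
 
      #include <lib$routines.h>            /* lib$ast_in_prog */
 
      /* Returns 1 when called from AST context -- including the
         "application AST" context simulated by the threads library when
         upcalls are enabled -- and 0 when called from ordinary thread
         code.  */
      int in_ast_context(void)
      {
          return lib$ast_in_prog() ? 1 : 0;
      }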
 
  An additional complicating factor involves the two modes for threaded
  applications.  If the application has upcalls disabled, then the threads
  library has no way to participate in AST delivery.  In this scenario,
  time-slicing involves a timer AST and thus if the application disables
  AST delivery, time-slicing will not function.  Nor, for that matter,
  will other aspects of the threads library function when upcalls are
  disabled and when ASTs are disabled.
 
  There is currently no mechanism to target an AST at a particular
  thread, as all ASTs run in the initial thread when upcalls are
  enabled.  (There has been some consideration given to targeting an
  AST back to the thread that generated it, but that support is not
  available.)  With upcalls disabled, the ASTs can run in any thread.
  Specifically, with upcalls disabled the ASTs will run in whichever
  thread was executing when the AST became pending.
 
  Event flag operation itself works the same in all situations -- with
  threading or not, and with upcalls enabled or disabled.  With upcalls
  disabled, however, if a thread blocks on an event flag, it blocks the
  entire
  process until the thread is pre-empted (eg: at the end of its execution
  quantum).  This also implies that threads that are not currently
  executing will not notice an event flag state transition until they
  are scheduled and resume execution.  Furthermore, if the event flag
  of interest is set and then cleared, a ready thread may not notice.
  (Accordingly, use of IOSBs is strongly recommended.)
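 
  A minimal sketch of the recommended pattern -- pair the event flag with
  an IOSB, and trust the IOSB for the completion status; the channel is
  assumed to have been assigned already:
 
      #include <starlet.h>                 /* sys$qiow */
      #include <iodef.h>                   /* IO$_READVBLK */
 
      /* Classic I/O status block layout (a quadword). */
      struct iosb { unsigned short status, count; unsigned int dev_info; };
 
      int read_block(unsigned short chan, void *buf, int buflen)
      {
          struct iosb iosb;
          int status = sys$qiow(1 /* efn */, chan, IO$_READVBLK, &iosb,
                                0, 0, buf, buflen, 0, 0, 0, 0);
          if (!(status & 1))
              return status;               /* the service itself failed */
          return iosb.status;              /* the I/O completion status */
      }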
 
  With upcalls enabled, a thread which blocks on a local event flag
  is immediately removed from scheduling contention -- the result of
  a synchronous upcall -- and another thread is scheduled and run.
  When the local event flag becomes set, the threads library receives
  an upcall via a "system AST" and schedules all threads blocked for
  that local event flag.  If there are IOSBs specified -- as there
  should be -- the threads library will schedule the threads only when
  the IOSB has been written.
 
  With upcalls and kernel threading enabled, more than one thread can
  be active at a time (and can operate across multiple processors in
  an SMP system), but only one AST can be active at a time.  You
  might well see a thread and an AST active in parallel.
 
  Threaded applications freely assume that wait states are permitted
  and do not affect other threads, while AST routines generally do not
  and should not include waits.
 
  AST and interrupt routines cannot access a thread mutex, nor can AST
  or interrupt code call most of the thread routines.
 
  With upcalls, AST routines do not block threads running on other
  processors.  (Conversely, with upcalls disabled, threads are not
  running in parallel on multiple processors.)
 
  With upcalls enabled, any thread can call $setast and can enable or
  disable ASTs across all threads.   If a thread disables AST delivery
  via $setast, then ASTs will not be delivered while that thread is running.
 
  With upcalls enabled, the AST delivery status (enabled or disabled)
  is maintained on a per-thread basis, and ASTs which are not directed
  to a particular thread (eg: all "application ASTs", at present) will
  not be delivered to the process as long as any non-terminated thread
  has AST delivery disabled; with upcalls disabled, the AST delivery
  state is context-switched along with the rest of the thread context.
  (eg: If a thread had AST delivery enabled when it last ran, then AST
  delivery will be enabled when it next runs, regardless of the AST
  delivery state from the previous thread.)
 
  AST re-entrancy is a subset of thread re-entrancy -- a routine that
  is AST re-entrant may not be thread re-entrant.  (Why?  AST routines
  can implicitly assume that only one AST routine is active at a time.)
 
  That a routine is or is not AST-safe does not imply it is or is not
  thread-safe.  Two terms apply here: "thread-safe", and "thread-reentrant".
  A thread-safe routine refers simply to the routine being safe to call
  in a multi-threaded environment -- that is, the function will operate
  correctly, without access violations or memory corruptions, when called
  simultaneously or concurrently across multiple threads.  (How the routine
  might achieve this is unspecified.)  One of the most typical ways to
  make a function thread-safe is to place synchronization within the
  routine to ensure that only one thread can execute the critical code
  within the routine at one time.   The second term, "thread-reentrant",
  refers to the subset of thread-safe functions which operate correctly
  without serializing access to their operation by blocking concurrent
  callers.  For instance, lib$get_vm is re-entrant, because it uses
  atomic operations to manage the look-aside lists and thus multiple
  threads can allocate memory simultaneously.
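 
  To make the distinction concrete, a hedged sketch using POSIX threads;
  the counter and routine names are illustrative, and the atomic builtin
  shown is a GCC-style stand-in for whatever atomic primitive the local
  compiler provides:
 
      #include <pthread.h>
 
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static long counter = 0;
 
      /* Thread-safe: correct because concurrent callers are serialized. */
      long bump_thread_safe(void)
      {
          long value;
          pthread_mutex_lock(&lock);
          value = ++counter;
          pthread_mutex_unlock(&lock);
          return value;
      }
 
      /* Thread-reentrant: no serialization; an atomic primitive lets
         concurrent callers proceed in parallel (cf. lib$get_vm above). */
      long bump_thread_reentrant(void)
      {
          return __sync_add_and_fetch(&counter, 1);
      }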
 
  Similar terminology applies to ASTs:  functions can be "AST-safe" or
  "AST-reentrant".  The typical way to make a function AST-safe is to
  disable ASTs inside the function.  This works from within an AST
  routine -- the $setast call is permitted there, though obviously
  unnecessary -- and it works from the main-line code.  With
  ASTs disabled, the function is serialized.  As with a thread-reentrant
  routine, an AST-reentrant function is one that works properly when
  called by both main-line and AST code -- without blocking ASTs.
  Again, lib$get_vm is AST-reentrant because it uses atomic operations.
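 
  A hedged sketch of an AST-safe (but not AST-reentrant) routine built on
  $setast; the data cell is hypothetical:
 
      #include <starlet.h>                 /* sys$setast */
      #include <ssdef.h>                   /* SS$_WASSET, SS$_WASCLR */
 
      static long shared_cell;             /* hypothetical non-reentrant state */
 
      void ast_safe_update(long value)
      {
          /* Disable AST delivery; harmless (if unnecessary) within an AST.
             sys$setast returns SS$_WASSET or SS$_WASCLR, so the previous
             state can be restored exactly.  */
          unsigned int prev = sys$setast(0);
 
          shared_cell = value;             /* serialized against AST callers */
 
          if (prev == SS$_WASSET)
              sys$setast(1);               /* re-enable only if it was enabled */
      }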
 
  A problem can arise when a function which is not re-entrant wants to
  be both thread-safe and AST-safe.  It is straightforward enough for such
  a function to disable AST delivery, since that can be done for both the
  main-line caller and for ASTs.  However, the function cannot use an
  execution-blocking serialization, such as a mutex, to control access to
  the core of the routine, because once AST execution has begun it cannot
  be blocked without risking a deadlock.  Thus, APIs are forced to choose
  either thread-safety or AST-safety if they cannot be fully re-entrant.  If
  the routine is thread-safe, then it cannot be called in an AST.  A routine
  that is AST-safe is generally either not thread-safe or the routine is
  fully-reentrant.
 
  Simple thread-reentrancy of an API does not imply that the API is also
  AST-reentrant.  One of the techniques frequently employed to make a
  routine thread-reentrant is to use per-thread storage to allocate what
  would otherwise be global or static variables.  Using this technique, no
  call in one thread will conflict with a call in any other thread.  However,
  a call from an AST would be executing in the context of some thread; the
  call would conflict with a concurrent call already in progress in the
  same thread.
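 
  A hedged sketch of the per-thread-storage technique, using POSIX
  thread-specific data; the routine and buffer are illustrative.  Calls
  in different threads cannot collide, but an AST runs in the context of
  some thread and would reuse -- and potentially corrupt -- that thread's
  buffer:
 
      #include <pthread.h>
      #include <stdlib.h>
      #include <string.h>
 
      static pthread_key_t  buf_key;
      static pthread_once_t once = PTHREAD_ONCE_INIT;
 
      static void make_key(void) { pthread_key_create(&buf_key, free); }
 
      /* Thread-reentrant via per-thread storage, but not AST-reentrant. */
      char *format_message(const char *text)
      {
          char *buf;
          pthread_once(&once, make_key);
 
          buf = pthread_getspecific(buf_key);
          if (buf == NULL) {
              buf = malloc(256);
              pthread_setspecific(buf_key, buf);
          }
          strncpy(buf, text, 255);
          buf[255] = '\0';
          return buf;
      }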
 
  If you must use ASTs and threads, you will want to perform the
  absolute minimum processing within the AST routines, passing off
  all processing to threads via application-specific work request
  packets and interlocked queues, or via other re-entrant techniques.
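 
  A hedged sketch of that hand-off, using the interlocked-queue RTL
  routines; the packet layout and names are illustrative, and the exact
  alignment and link-field rules are in the LIB$ run-time library
  reference.  This sketch assumes a zeroed, quadword-aligned self-relative
  queue header and packets whose first two longwords are reserved for the
  queue links:
 
      #include <lib$routines.h>            /* lib$insqti, lib$remqhi */
      #include <libdef.h>                  /* LIB$_QUEWASEMP */
 
      typedef struct work_packet {
          unsigned int flink, blink;       /* reserved for the queue links */
          int          opcode;             /* hypothetical application data */
          void        *context;
      } WORK_PACKET;
 
      static unsigned __int64 work_queue = 0;   /* quadword-aligned header */
 
      /* AST completion routine: do the minimum, then hand the work off. */
      void io_completion_ast(void *astprm)
      {
          WORK_PACKET *pkt = astprm;       /* pre-allocated by the main line */
          pkt->opcode = 1;
          lib$insqti(pkt, &work_queue);    /* interlocked, AST-safe insert */
          /* optionally wake the worker here (eg: set an event flag) */
      }
 
      /* Worker thread: drain the queue and do the real processing. */
      void *worker(void *arg)
      {
          WORK_PACKET *pkt;
          for (;;) {
              if (lib$remqhi(&work_queue, (void **) &pkt) == LIB$_QUEWASEMP)
                  continue;                /* real code would block, not spin */
              /* ... process pkt ... */
          }
          return NULL;
      }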
 
  The OpenVMS Wizard recommends using great care -- carefully avoid the
  causes of deadlocks and of data corruptions, as outlined above -- when
  mixing threads and ASTs within the same application image.
 
  Related topics include (2790), (4647), (6099), and (6984).
 

answer written or last revised on ( 9-JUL-2003 )
