HP OpenVMS Systems

ask the wizard

Malloc, LIB$ RTL VM Memory Management?


The Question is:

When using the LIB$... interlocked queue insert/remove routines on a "work"
 queue of entries that have been allocated using malloc, is there an
 advantage in recycling such entries via a "free" queue, as opposed to
 recycling via the heap? What is the algorithm of malloc? Does it use the
 LIB$... routines?
Thank you very much.

The Answer is :

  It depends...
  Your free queue structure can be very simple, since it need only deal with
  a single block type. A simple singly linked stack will suffice. Your
  allocation algorithm is "take the next entry from the free queue; if there
  is none, allocate a new one". Your deallocation algorithm is "add to the
  head of the free queue". This is fast and simple. On the other hand, malloc
  and free must be general, so their allocation and deallocation algorithms
  are more complex and will be slower (though exactly how much slower is
  difficult to predict in advance).

  The disadvantage of a dedicated free queue is that a large "spike" in the
  distribution of queue lengths can leave the free queue very large, hogging
  memory that might be better deployed elsewhere. In these days of cheap,
  large memories, this may not be an issue.
  From an engineering perspective, it might be worth implementing specific
  allocate/deallocate routines for your work queue entries. That way you
  can modify the mechanisms in a single place if you find your initial choice
  is inappropriate.
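
  Such a pair of dedicated routines might look like the following minimal,
  single-threaded C sketch. The entry layout and routine names are invented
  for illustration; a multithreaded design would protect the free list with
  the interlocked LIB$INSQHI/LIB$REMQHI routines (or equivalent atomics)
  rather than a bare pointer:

```c
#include <stdlib.h>

/* Hypothetical work-queue entry; the payload field is a placeholder. */
struct entry {
    struct entry *next;     /* link field for free list and work queue */
    int payload;
};

static struct entry *free_list = NULL;   /* simple singly linked stack */

/* "Take the next entry from the free queue; if none, allocate a new one." */
struct entry *entry_alloc(void)
{
    struct entry *e = free_list;
    if (e != NULL)
        free_list = e->next;             /* recycle from the free queue */
    else
        e = malloc(sizeof *e);           /* fall back to the heap */
    return e;
}

/* "Add to the head of the free queue." */
void entry_free(struct entry *e)
{
    e->next = free_list;
    free_list = e;
}
```

  Because all callers go through entry_alloc/entry_free, switching the
  implementation later (say, to plain malloc/free or to a LIB$ VM zone) is
  a change in one place.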
  In particular, the OpenVMS Wizard would also recommend consideration of
  the existing OpenVMS memory management calls -- the RTL LIB$ VM "zones"
  can be set up to maintain lookaside lists, and provide far greater
  flexibility than is available via the generic "malloc" and "free"
  routines.  The RTL LIB$ VM calls also provide a callable zone verification
  routine, pattern overwrites on allocation and/or deallocation, and various
  other customizations.  VM zones can also be part of more advanced techniques,
  such as temporary memory allocations -- where multiple temporary memory
  allocations can be easily "flushed" via a single call, for instance.
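
  By way of illustration only, an untested VMS-specific sketch of a
  quick-fit zone with lookaside lists follows; the exact argument lists,
  header names, and constants should be checked against the OpenVMS RTL
  Library (LIB$) Manual before use:

```c
#include <lib$routines.h>    /* LIB$ routine prototypes */

/* Untested sketch: create a zone using the quick-fit algorithm, which
   maintains lookaside lists for small block sizes. Optional trailing
   arguments of lib$create_vm_zone are omitted here. */
unsigned int zone_id, status;
unsigned int alg     = LIB$K_VM_QUICK_FIT;   /* quick fit: lookaside lists */
unsigned int alg_arg = 16;                   /* lookaside list coverage */
unsigned int flags   = LIB$M_VM_GET_FILL0;   /* zero-fill on allocation */

status = lib$create_vm_zone(&zone_id, &alg, &alg_arg, &flags);

/* allocate and free fixed-size blocks from the zone */
unsigned int size = 64;
void *addr;
status = lib$get_vm(&size, &addr, &zone_id);
status = lib$free_vm(&size, &addr, &zone_id);

/* for temporary allocations, a single call flushes the whole zone */
status = lib$reset_vm_zone(&zone_id);
status = lib$delete_vm_zone(&zone_id);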

answer written or last revised on ( 29-NOV-2000 )
