
HP OpenVMS Systems: Ask the Wizard

Deleting spooled files, printing, ANALYZE/DISK?


The Question is:

 
I have a number of clients who run IDX (a hospital system). Many of the
 applications print directly to a device spooled to SYS$SYSDEVICE:.
 
I understand that when you print to a spooled device the file is not entered in
 a directory on the specified disk, and the disk device FDT routine enters
 CLF_SPOOLFILE which causes the RMS create routines to set FCB$V_SPOOL in the
 File Control Block.
 
It appears that TCPIP$TELNETSYM (and its predecessor in UCX 4.2 ECO xx) does
 not always delete the spooled file upon completion or failure, and a monthly
 $ ANALYZE/DISK/REPAIR puts hundreds of these files (there are several hundred
 concurrent users) into SYS$SYSDEVICE:[SYSLOST], where we then delete them.
 (After TYPEing quite a few of them, we determined that they were mainly
 on-demand reports.)
 
Unless you have something special up your sleeve, perhaps the appropriate
 support unit should be made aware of this problem.
 
Thank you.
 
 


The Answer is:

 
  This appears to be a known incompatibility between certain uses of the
  OpenVMS spooled-device capability and the assumptions underlying it:
  some code assumes that DELETE/ENTRY and similar commands can be used
  on these spooled files, while OpenVMS (unfortunately) preserves the
  contents of the spooled file in the file structure.  The OpenVMS
  Wizard has also heard of cases where an incorrectly configured queue
  manager or cluster cannot access the files to delete them, whether
  due to protections (security alarm commands such as SET AUDIT/ALARM
  /ENABLE=ACCESS=(FAILURE:DELETE) can confirm this, as can setting up
  the queue to retain jobs on error) or due to a lack of access to the
  files stored on the intermediate (spooled) disk device (missing
  privileges, a disk not mounted, and the like).  All that said, under
  normal and typical circumstances the temporary file(s) on the
  intermediate device should be automatically deleted.
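  To help pinpoint a protection or retention problem of this sort, the
  following commands may be useful; TELNET_QUEUE is an illustrative
  queue name, not one assumed to exist on your system:

    $ ! Alarm on failed attempts to delete files, to expose
    $ ! protection problems on the spooled (intermediate) files
    $ SET AUDIT/ALARM/ENABLE=ACCESS=(FAILURE:DELETE)
    $ ! Retain failed jobs in the queue so errors remain visible
    $ SET QUEUE/RETAIN=ERROR TELNET_QUEUE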
 
  RMS itself does not set the spooled attribute on the file; that is
  set only by the application, either directly or when a $open
  operation is performed on a spooled device.  The subsequent $close
  operation calls $sndjbc, which then (should) delete the file.  (The
  OpenVMS Wizard has encountered cases where a file is not correctly
  closed, and thus the $sndjbc submission never occurs.)
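  As a small illustration of that sequence (TNA101: here stands in for
  a device already spooled to SYS$SYSDEVICE:, and TELNET_QUEUE for its
  associated queue; both names are placeholders):

    $ ! RMS creates the intermediate file on the spooled disk at $open
    $ COPY REPORT.LIS TNA101:
    $ ! The $close calls $sndjbc; the submitted job should be visible
    $ ! here, and the intermediate file deleted once the job completes
    $ SHOW QUEUE/FULL TELNET_QUEUE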
 
  Please consider an upgrade of your OpenVMS version to a more current
  release, though such an upgrade is unlikely to alter or address this
  file-preserving behaviour of OpenVMS.  Please also consider configuring
  a (scratch) disk as the target for spooled file operations, for easier
  clean-up ($ SET DEVICE /SPOOL=(telnet_queue, ddcu:) TNcu:).  And please
  check with the other software vendor(s) to determine if there are any
  ECOs or workarounds within the (other) software involved here.
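  A sketch of such a scratch-disk configuration follows; DISK$SCRATCH:,
  TNA101:, and TELNET_QUEUE are illustrative names:

    $ ! Respool the device to a dedicated scratch disk, so orphaned
    $ ! intermediate files accumulate away from the system disk
    $ SET DEVICE/SPOOLED=(TELNET_QUEUE,DISK$SCRATCH:) TNA101:
    $ ! Periodic repair then recovers any orphans into [SYSLOST]
    $ ! on the scratch disk, where they can be deleted in bulk
    $ ANALYZE/DISK_STRUCTURE/REPAIR DISK$SCRATCH: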
 
  Alternatively, you could potentially use a null print symbiont -- jobs
  queued will succeed of course, and should then be deleted.  (There are
  null symbionts available from various sources, including on the OpenVMS
  Freeware distributions.)
 
  If you wish to formally report a problem with OpenVMS or need/want a
  formal resolution to a report, please consider direct contact with
  the Compaq customer support center.
 

answer written or last revised on ( 18-FEB-2002 )
