
HP OpenVMS Systems

ask the wizard

RMS indexed file tuning and disk cluster factors


The Question is:

 
We have just completed an upgrade from a VAX cluster to an Alpha one. The
performance of the system is very good, but I have a question concerning
the performance of indexed RMS files.
Previously the discs on which our data resided had cluster sizes of 3 or 4,
and we accordingly kept the bucket sizes in the files to a multiple of this.
The new disc on which our data now lives has a much larger cluster size of
52. Is there any benefit in keeping the bucket size lower than 52, or should
I increase them all to 52?
I understand the effect bucket sizes have on file performance, but I am not
sure about the link with the large cluster size.
A related question: is there anything to make EDIT/FDL more usable with
these high bucket sizes, as the graphs etc. stop well short of these
values?
 
Some more info which may be relevant:
We use RUJ and point the journals to a disc with a cluster size of 9.
We have approximately 500 RMS files, ranging from single-key files of only
10 blocks or less up to files with 4 keys and file sizes just short of
2 million blocks.
The VMS version is 7.1-2.
 
 
 


The Answer is :

 
    Is there any benefit in keeping the bucket size lower than 52?  It is
    the other way around -- if there is a benefit to a bucket size of 52
    now, the same was equally true with the old, smaller cluster sizes,
    and you would have been short-changing yourself by not making that
    change then.  If it was not good for you earlier, then it is likely
    still not good.
 
    The relation between cluster size and bucket size is relatively
    unimportant.  There is some modest potential for space saving due to
    round-up, and some modest performance gain (perhaps 1%) due to
    alignment, when everything else is set up perfectly and is completely
    under control.
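
    To put a rough number on the round-up effect, here is a sketch in
    plain Python (not part of OpenVMS; the file sizes are invented
    examples) of how an allocation rounds up to a whole number of disk
    clusters:

```python
# Illustration only: how a file allocation rounds up to a multiple of
# the disk cluster size.  The file sizes below are invented examples.

def rounded_allocation(blocks_needed: int, cluster_size: int) -> int:
    """Return the allocation after rounding up to whole disk clusters."""
    clusters = -(-blocks_needed // cluster_size)  # ceiling division
    return clusters * cluster_size

# A file needing 2,000,000 blocks on a cluster-size-52 disk loses at
# most cluster_size - 1 blocks to round-up:
print(rounded_allocation(2_000_000, 52))  # 2000024, i.e. 24 blocks over
# A tiny 10-block file, by contrast, occupies a whole 52-block cluster:
print(rounded_allocation(10, 52))  # 52
```

    For large files the round-up loss is negligible; it only matters
    for the many small files mentioned in the question.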
 
    The bucket size itself is extremely important for production system
    performance, whereas the cluster size is really only important during
    allocation.  If your files are created or converted with reasonable
    allocation and extend sizes, then the cluster size can be readily
    ignored.
 
    Larger bucket sizes are very beneficial when sequential reads ($GET
    next by the primary key) form a dominant activity, and/or when the
    index gets deep.  Smaller bucket sizes are beneficial when contention
    plays a (negative) role (eg: many processes going after the same
    buckets) and when frequent updates and insertions occur.  Remember,
    if the application changes just one byte in a record, RMS will write
    out a whole bucket of blocks.  The cluster factor plays NO role in
    either of these.
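
    The "index gets deep" point can be made concrete with some rough
    arithmetic.  A sketch in plain Python (the record counts,
    records-per-bucket and index fan-out figures are invented for
    illustration, not RMS internals; real bucket capacities depend on
    record size, key size and overhead):

```python
import math

# Rough illustration only: estimate how many index levels sit above
# the data buckets for a given bucket capacity.  Figures are invented.

def index_depth(record_count: int, records_per_bucket: int,
                index_fanout: int) -> int:
    """Number of index levels needed so the top level is one bucket."""
    entries = math.ceil(record_count / records_per_bucket)  # data buckets
    depth = 1
    while entries > index_fanout:
        entries = math.ceil(entries / index_fanout)
        depth += 1
    return depth

# Small buckets: 1,000,000 records at 20 records and 100 index entries
# per bucket -> a three-level index:
print(index_depth(1_000_000, 20, 100))   # 3
# Quadruple the bucket capacity (80 records, 400 entries) -> two levels:
print(index_depth(1_000_000, 80, 400))   # 2
```

    A shallower index means fewer bucket reads per random $GET, which
    is where the large-bucket benefit on big files comes from.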
 
    If you use EDIT/FDL/NOINTERACTIVE to automatically tune your FDL
    files, you may want to (have to!) pre-process them and hardcode a
    SMALL cluster size value (such as the 3 or 4 you had previously used)
    in order for EDIT/FDL not to get carried away.  An alternative to
    changing FDL files is to use TWO FDL files as input to EDIT/FDL.
    The main argument should be a true description of the file.  The
    second would be provided via the /ANALYSIS qualifier, and can be
    limited to the following lines (adjusted to your file, of course!):
 
      IDENT "minimal template for EDIT/FDL/NOINTERACTIVE analyze input"
      FILE;
        CLUSTER_SIZE            12
      ANALYSIS_OF_KEY 0;
        MEAN_DATA_LENGTH        123
        DATA_RECORD_COUNT       1234
 
    The mean record length tends to be stable for a given file.
    The data record count can be obtained from the last CONVERT and
    corrected to the expected values for the current convert.
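
    Putting the two files together, the invocation might look along
    these lines (the file names here are placeholders, not from the
    original question):

```
$ EDIT/FDL/NOINTERACTIVE/ANALYSIS=MINIMAL.FDL MYFILE.FDL
```

    where MYFILE.FDL is the true description of the file and MINIMAL.FDL
    contains the minimal analysis section shown above.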
 
    The OpenVMS Wizard does not see a particular relationship between
    RU Journalling and cluster or bucket size. The bucket size is
    important for AI journalling (which writes out the used bytes in
    buckets to its log).
 

answer written or last revised on ( 22-JUN-1999 )
