
Ask the Wizard Questions

System configuration and performance

The Question is:

We have 2 Alpha 2100 4/200's with 128 MB RAM each, single CPU. We want to
configure these for maximum disk performance in a redundant configuration.
They will be running primarily a DEC COBOL MRP-II application with RMS files,
though a client-server version in Uniface/Rdb may be implemented in a few
years. They will also be serving around 200 PC's with Pathworks, and a payroll
in ACUCOBOL/RMS files. We have just placed an order for 2 5/250 CPU's (to go in
one box, both 4/200's in the other), plus a 512 MB board for each. Both will be
upgraded to a pair of Fast-Wide SCSI bays in each with 10 RZ29's in each. They
already have DEFPA PCI/FDDI cards. They will have Volume Shadowing and Disk
Striping, with each disk shadowed, and a 10-way stripe set. Our intention here
is to allow any disk to fail without disruption, or either system to be down
without data loss or disruption. The current plan is for each to be in separate
buildings linked by fibre. Any comments on this configuration? Does it make
sense? How does this system compare with using HSZ40's for the disk
striping/shadowing (IO/sec rate mainly)? Can SCSI-clustering with HSZ40's
improve the performance over FDDI? (PS: DLT tapes for backup are also on
order.) How would Memory Channel affect us if it were available now? All input
greatly appreciated.


The Answer is:


I gather we are talking about the system enclosure variant, as opposed to the
rack-mount or cabinet versions.

>Both will be upgraded to a pair of Fast-Wide SCSI bays in each with 10 RZ29's
>in each. They already have DEFPA PCI/FDDI cards.

Each bay can hold a maximum of eight 3½-inch devices.  Each bay can be
configured as a single SCSI bus or as dual SCSI busses.  To date I have only
heard of 7 devices on a SCSI bus, because the host adapter uses one of the
eight SCSI IDs itself.  So, you would use the bays in single-bus mode with 5
disks in each.  This will require two controllers in each system - most likely
the KZPSA.  They take one PCI slot each.  Since the DEFPA takes one too, it is
a good thing the AlphaServer 2100 can accommodate 3 PCI options.  Technically,
one slot is a shared PCI/EISA slot.
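
For what it's worth, here is a quick back-of-the-envelope check of that
arithmetic in Python; the only inputs are the figures above (8 SCSI IDs per
bus with one taken by the host adapter, 10 disks per system, 3 PCI options):

  # Rough sanity check of the SCSI and PCI arithmetic described above.
  ids_per_bus = 8                              # SCSI-2 IDs 0..7
  usable_ids = ids_per_bus - 1                 # host adapter uses one ID
  disks_per_system = 10
  buses_needed = -(-disks_per_system // usable_ids)   # ceiling -> 2
  kzpsa_slots = buses_needed                   # one KZPSA per SCSI bus
  defpa_slots = 1                              # the existing FDDI adapter
  pci_slots_available = 3                      # one is a shared PCI/EISA slot
  print("SCSI buses needed:", buses_needed, "(5 disks on each)")
  print("PCI slots used:", kzpsa_slots + defpa_slots, "of", pci_slots_available)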

>They will have Volume Shadowing and Disk Striping, with each disk shadowed, and
>a 10-way stripe set.
>Our intention here is to allow any disk to fail without
>disruption, or either system to be down without data loss or disruption.

Ah, I get it.  Via host-based volume shadowing, you'll form 10 2-way shadow
sets.  Each shadow set will have one member volume in each of the two systems.

That then is how you get to a 10-way stripe set.  Neat.  Again, using host-based
RAID software.
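
To make that layout concrete, here is a small sketch in Python; the device
names are hypothetical and only meant to show the shape of it (ten cross-site
2-way shadow sets, then striped ten wide by the host-based RAID software):

  # Purely illustrative layout; the device names are hypothetical.
  # Each 2-way shadow set pairs one disk in each building, and the ten
  # resulting shadow sets are then striped by host-based RAID software.
  site_a = ["$1$DKA%d" % (100 + i) for i in range(10)]   # disks in building A
  site_b = ["$2$DKB%d" % (100 + i) for i in range(10)]   # disks in building B

  shadow_sets = {"DSA%d" % i: pair for i, pair in enumerate(zip(site_a, site_b))}
  stripe_members = list(shadow_sets)           # 10-way stripe of shadow sets

  for unit, (a, b) in shadow_sets.items():
      print(unit, "=", a, "+", b)
  print("stripe set:", ", ".join(stripe_members))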

Do not shadow the system disks between systems.  Rather, shadow the system disk
within the systems.  It'll take a total of four disks to do this, two in each of
the systems.

Watch out for that big stripe set.  That'll be a 40 GB logical volume with a
cluster factor of 80, so lots of little files could waste a great deal of
space.  You can get around the cluster factor problem, though.  The
StorageWorks RAID Software for OpenVMS (which is what you'd be using to do
host-based disk striping on Alpha) supports partitioning; you can divide that
40 GB array up into as many as 64 pieces and use each one as if it were an
independent (and smaller) disk.
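
For a rough sense of the scale, here is a quick Python estimate, assuming
about 4.3 GB per RZ29 and the traditional ODS-2 storage-bitmap limit of 255
blocks of 4096 bits each (both figures are assumptions for illustration):

  # Rough arithmetic behind the cluster factor warning.
  block_bytes = 512
  volume_blocks = 10 * int(4.3e9) // block_bytes       # 10-way RZ29 stripe
  max_clusters = 255 * 4096                            # bitmap can track this many
  cluster_factor = -(-volume_blocks // max_clusters)   # ceiling -> about 80
  print("cluster factor:", cluster_factor)

  # Every file takes at least one cluster (about 40 KB here), so a large
  # population of small files wastes space quickly:
  small_files = 100_000
  wasted_mb = small_files * (cluster_factor * block_bytes // 2) // 2**20
  print("approx. space lost to 100,000 small files:", wasted_mb, "MB")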

>The current plan is for each to be
>in separate buildings linked by fibre.

I recommend a dedicated fiber ring if possible.  Shadow copying 40 GB is going
to swamp even an FDDI; it will take at least an hour of FDDI saturation, and
likely well over that.  It is possible to manage this by deferring copy
operations, but that lengthens the window of vulnerability.
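
To put a number on that, assuming the raw 100 Mb/s FDDI rate and roughly
40 GB to copy (real throughput will be lower once protocol overhead and
other traffic are counted):

  # Back-of-the-envelope timing for a full shadow copy over FDDI.
  data_bytes = 40e9
  fddi_bits_per_second = 100e6
  best_case_minutes = data_bytes * 8 / fddi_bits_per_second / 60
  print("best case at 100%% utilisation: %.0f minutes" % best_case_minutes)
  print("at 55%% effective utilisation: %.1f hours"
        % (best_case_minutes / 0.55 / 60))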

>How does this system compare with using HSZ40's for the disk striping/shadowing
>(IO/sec rate mainly)?

Controller-based RAID does not come into play in a multi-site configuration.
I suppose you could stripe first and then shadow (yuk!): one disk failure and
an entire stripe-set member is down.
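
One way to see why: compare how much data has to be shadow-copied after a
single disk failure and replacement under each ordering (a rough sketch,
assuming about 4.3 GB per RZ29):

  # Data to re-copy after one disk fails and is replaced.
  rz29_gb = 4.3
  disks_per_stripe = 10

  # Shadow first, then stripe (the plan above): only the failed disk's
  # 2-way shadow set needs to be re-copied.
  recopy_shadow_then_stripe = rz29_gb

  # Stripe first, then shadow: one dead disk takes out the whole stripe-set
  # member, so the entire member has to be re-copied.
  recopy_stripe_then_shadow = rz29_gb * disks_per_stripe

  print("re-copy, shadow then stripe: ~%.1f GB" % recopy_shadow_then_stripe)
  print("re-copy, stripe then shadow: ~%.0f GB" % recopy_stripe_then_shadow)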

The two big advantages I see in using HSZ40s are:
  1) they have the option of read and write-back cache
  2) they allow greater fan-out (the ability to connect more disks to the
     system than you could if you connected the disks directly to the CPUs)

Read cache helps performance where there is a high degree of
locality-of-reference in the read I/O stream (which we tend to find at most
customer sites).  Write cache helps write performance (regardless of the degree
of locality-of-reference).

Write-back cache hides the latency of the writes to multiple members in a
controller-based mirrorset or a host-based shadow set, so in a multi-site
configuration, having write-back cache in HSZ40s on the remote side can help
the remote shadow writes complete faster (because the write-back cache hides
the latency of the seek/rotate/transfer operations).
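
A toy model of that effect; all of the latency figures below are assumptions
chosen only to show the shape of the benefit, not measurements:

  # Illustrative model of why write-back cache helps remote shadow writes.
  local_disk_write_ms = 12.0     # seek + rotate + transfer on the local member
  remote_disk_write_ms = 12.0    # same operation on the remote member
  cache_ack_ms = 0.5             # HSZ40 acknowledging into write-back cache
  fddi_round_trip_ms = 1.0       # host-to-host latency across the FDDI link

  # A shadowed write completes only when every member has acknowledged it.
  without_cache = max(local_disk_write_ms,
                      fddi_round_trip_ms + remote_disk_write_ms)
  with_cache = max(local_disk_write_ms,
                   fddi_round_trip_ms + cache_ack_ms)
  print("write completion, remote member uncached: %.1f ms" % without_cache)
  print("write completion, remote member cached:   %.1f ms" % with_cache)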

Note that while the controller-to-host interface in the HSZ40 is Fast Wide
Differential, the connections to the disks are Fast Single-Ended (Narrow),
so having Wide disks doesn't really help when they're behind the HSZ40.

>Can SCSI-clustering with HSZ40's improve the performance over FDDI? (PS DLT
>tapes for backup are also on order).

The limited SCSI cable lengths preclude it from spanning separate buildings,
so it cannot compete with FDDI here.

>How would Memory Channel affect us if it were available now?

Memory Channel cable lengths are also too short for a multi-building
configuration.


You should be able to use VDdriver to partition giant volumes too.
VDdriver supports virtual disks down to at least RX01 size (488 blocks)
and as large as you want, and it can support basically as many of them as
you like.  It is supplied on the sigtapes and the Freeware CD (V2, at any
rate).  You can of course also make volume sets of them.