CIPCA Support for Alpha Servers
Abstract:
This paper describes the CI-to-PCI adapter (CIPCA) that is supported
in OpenVMS Alpha Versions 6.2-1H2, 6.2-1H3, and 7.1 (the CIPCA is not supported in
Version 7.0). CIPCA supports specific Alpha servers and OpenVMS Cluster
configurations.
Table of Contents
» Product Overview
» Technical Specifications
» Configuration Requirements
» Figure 1 - CIPCA in a Mixed-Architecture OpenVMS Cluster
» Figure 2 - CIPCA in an Alpha OpenVMS Cluster
» Table 1 - CIPCA and CIXCD Performance
Product Overview
With the release of OpenVMS Alpha Version 6.2-1H2, Digital introduced
a CI-to-PCI adapter (CIPCA) that was developed in partnership with CMD
Technologies. With this adapter, Alpha servers that contain both PCI and
EISA buses can now connect to the CI.
CIPCA support for Alpha servers provides the following features and
benefits to customers:
- Lower entry cost and more configuration choices. If you require midrange
compute power for your business needs, CIPCA enables you to integrate midrange
Alpha servers into your existing CI OpenVMS Cluster.
- High-end Alpha speed and power. If you require maximum compute power, you
can use the CIPCA with both the AlphaServer 8200 and AlphaServer 8400 systems
that have PCI and EISA I/O subsystems.
- Cost-effective Alpha migration path. If you want to add Alpha servers to
an existing CI VAXcluster, CIPCA provides a cost-effective way to start
migrating to a mixed-architecture OpenVMS Cluster in the price/performance
range that you need.
- Advantages of the CI. The CIPCA connects to the CI, which offers the
following advantages:
- High speed to accommodate larger processors and I/O-intensive applications.
- Efficient, direct access to large amounts of storage.
- Minimal CPU overhead for communication. CI adapters are intelligent
interfaces that perform much of the I/O communication work in OpenVMS Clusters.
- High availability through redundant, independent data paths, because each
CI adapter is connected with two pairs of CI cables.
- Multiple access paths to disks and tapes.
Figure 1 shows an example of a mixed-architecture CI OpenVMS Cluster
that has two servers: an Alpha and a VAX.
Figure 1 - CIPCA in a Mixed-Architecture OpenVMS Cluster
[Image: zk-8484a.gif]
As Figure 1 shows, you can use the CIPCA adapter to connect an Alpha
server to a CI OpenVMS Cluster that contains a VAX server with a CIXCD
(or CIBCA-B) adapter. This enables you to smoothly integrate an Alpha server
into a cluster that previously comprised only high-end VAX systems.
Figure 2 shows another example of a configuration that uses the CIPCA
to connect systems with the CI. In this example, each Alpha has two CIPCA
adapters to increase performance. Also, the Alpha systems are connected
to a high-speed FDDI interconnect, which provides scalable connectivity
to PC clients and OpenVMS satellites.
Figure 2 - CIPCA in an Alpha OpenVMS Cluster
[Image: zk-8485a.gif]
Figures 1 and 2 show that the CIPCA makes the performance, availability,
and large storage access of the CI available to a wide variety of users.
The CI has a high maximum throughput. Both the PCI-based CIPCA and the
XMI-based CIXCD are intelligent, microprocessor-controlled adapters that
impose minimal CPU overhead on the host.
Because the effective throughput of the CI bus is high, the CI interconnect
is not likely to be a bottleneck. In large configurations like the one
shown in Figure 2, multiple adapters and CI connections provide excellent
availability and throughput. Although not shown in Figures 1 and 2, you
can increase availability by placing disks on a SCSI interconnect between
a pair of HSJ controllers and connecting each HSJ to the CI.
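As a rough illustration of the dual-adapter configuration in Figure 2, the
following Python sketch estimates the aggregate bandwidth available to each
Alpha from the per-adapter data rates listed later in Table 1. The assumption
of near-linear scaling across two adapters is ours, not a measured result:

    # Illustrative estimate only: assumes each CIPCA sustains the Table 1
    # read data rate and that two adapters per host scale nearly linearly.
    per_adapter_mb_s = 10.6       # CIPCA read data rate from Table 1 (MB/s)
    adapters_per_host = 2         # as configured in Figure 2
    aggregate_mb_s = per_adapter_mb_s * adapters_per_host
    print("Estimated aggregate read bandwidth: %.1f MB/s" % aggregate_mb_s)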
Technical Specifications
The CIPCA is a two-slot adapter that requires one PCI slot and one EISA
slot on the host computer. The EISA slot supplies only power (not bus signals)
to the CIPCA. The CIPCA's driver, PCAdriver, is supported in OpenVMS Alpha
Versions 6.2-1H2, 6.2-1H3, and 7.1. Table 1 shows the performance of the
CIPCA in relation to the CIXCD adapter; a short calculation comparing the
two columns follows the table.
Table 1 - CIPCA and CIXCD Performance
Performance Metric                     CIPCA   CIXCD
Read request rate (I/Os per second)     4900    5500
Read data rate (MB/s)                   10.6    10.5
Write request rate (I/Os per second)    4900    4500
Write data rate (MB/s)                   9.8     5.8
Mixed request rate (I/Os per second)    4800    5400
Mixed data rate (MB/s)                  10.8     9.2
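As a quick way to read the table, the following sketch computes the
CIPCA-to-CIXCD ratio for each metric; the values are copied directly from
Table 1, and a ratio above 1.0 favors the CIPCA. The write data rate shows
the largest advantage, at roughly 1.7 times the CIXCD figure:

    # CIPCA-to-CIXCD ratios for the Table 1 metrics.
    table_1 = {
        "Read request rate":  (4900, 5500),
        "Read data rate":     (10.6, 10.5),
        "Write request rate": (4900, 4500),
        "Write data rate":    (9.8, 5.8),
        "Mixed request rate": (4800, 5400),
        "Mixed data rate":    (10.8, 9.2),
    }
    for metric, (cipca, cixcd) in table_1.items():
        print("%s: %.2f" % (metric, cipca / cixcd))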
Configuration Requirements
Following are the configuration requirements for the CIPCA:
- Configuration
For OpenVMS Alpha Versions 6.2-1H3 and 7.1, the CIPCA has the following
configuration requirements (a sketch that checks the per-system adapter
limits appears at the end of this section):
- Up to 16 CI host systems per star coupler.
- Up to 3 CIPCA adapters per AlphaServer 2xxx system.
- Up to 4 CIPCA adapters per AlphaServer 4xxx system.
- Up to 10 CIPCA adapters per AlphaServer 8xxx system with Version 6.2-1H3,
and up to 26 CIPCA adapters per AlphaServer 8xxx system with Version 7.1.
- AlphaServer 8400 systems can mix CIXCD and CIPCA adapters in the same system.
- Up to 32 CI nodes per star coupler (16 without a CI star coupler expander, CISCE).
- Up to 96 OpenVMS Cluster member systems.
- Host systems
The CIPCA is supported in the following host systems:
- AlphaServer 8400
- AlphaServer 8200
- AlphaServer 4100
- AlphaServer 4000
- AlphaServer 2100
- AlphaServer 2100A
- AlphaServer 2000
- CI-connected hosts
Any OpenVMS Alpha or VAX host using CIXCD or CIBCA-B.
- Storage controllers
The CIPCA supports the following storage controllers:
- HSJ30 and HSJ40 with HSOF Version 2.5 firmware or higher.
- All HSCs except HSC50. (See Restrictions)
- Restrictions
The following restriction applies to the CIPCA: the HSC50 storage controller
is not supported.
- Recommendation
CIPCA uses a new, more efficient CI arbitration algorithm called Synchronous
Arbitration instead of the older Asynchronous Arbitration algorithm. The two
algorithms are completely compatible with each other. Under CI saturation
conditions, the old and new algorithms are equivalent and provide equitable
round-robin access to all nodes. However, under less-than-saturation
conditions, the new algorithm provides the following benefits:
- Reduced packet transmission latency due to reduced average CI arbitration
time.
- Increased node-to-node throughput.
- Complete elimination of CI collisions that waste bandwidth and increase
latency in configurations containing only Synchronous Arbitration nodes.
- Reduced CI collision rate in configurations with mixed Synchronous and
Asynchronous Arbitration CI nodes. The reduction is roughly proportional to
the fraction of CI packets being sent by the Synchronous Arbitration CI nodes
(see the model sketched after this Recommendation).
Support for Synchronous Arbitration is latent in the HSJ controller
family. In configurations containing both CIPCAs and HSJ controllers, Digital
recommends enabling the HSJs to use Synchronous Arbitration. The HSJ CLI
command to do this is:
CLI> SET THIS CI_ARB = SYNC
This command takes effect upon the next reboot of the HSJ.
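The collision-rate benefit described in the Recommendation above can be
expressed as a one-line model. This is our simplified reading of the
"roughly proportional" claim, not a formula from the paper:

    # Simplified model of the mixed-arbitration claim: if an all-Asynchronous
    # configuration sees collision rate c0, and a fraction f of CI packets
    # now comes from Synchronous Arbitration nodes, the collision rate falls
    # roughly in proportion to f.
    def estimated_collision_rate(c0, sync_fraction):
        return c0 * (1.0 - sync_fraction)

    # Half the packets from Synchronous nodes roughly halves collisions;
    # an all-Synchronous configuration eliminates them entirely.
    print(estimated_collision_rate(100.0, 0.5))  # 50.0
    print(estimated_collision_rate(100.0, 1.0))  # 0.0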
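To make the per-system adapter limits from the Configuration list concrete,
the following sketch checks a proposed configuration against them. The
function name and calling convention are illustrative assumptions, not part
of any OpenVMS utility:

    # Hypothetical checker for the per-system CIPCA limits listed earlier.
    def max_cipca_per_system(family, openvms_version):
        # Limits as stated for OpenVMS Alpha V6.2-1H3 and V7.1.
        if family == "2xxx":
            return 3
        if family == "4xxx":
            return 4
        if family == "8xxx":
            # 10 adapters with V6.2-1H3, 26 with V7.1.
            return 26 if openvms_version == "V7.1" else 10
        raise ValueError("unknown AlphaServer family: %s" % family)

    # Example: 12 CIPCAs in an AlphaServer 8400 are valid only on V7.1.
    print(max_cipca_per_system("8xxx", "V6.2-1H3") >= 12)  # False
    print(max_cipca_per_system("8xxx", "V7.1") >= 12)      # True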
For information about installing and operating the CIPCA, please refer
to the hardware manual that came with your CIPCA adapter: CIPCA PCI-CI
Adapter User's Guide (EK-CIPCA-UG).