Guidelines for OpenVMS Cluster Configurations
D.3.4.2 Recommendations

When configuring the DS3 interconnect, apply the configuration guidelines for OpenVMS Cluster systems interconnected by LAN that are stated in the OpenVMS Cluster Software SPD (SPD 29.78.nn) and in this manual. OpenVMS Cluster members at each site can include any mix of satellites, systems, and other interconnects, such as CI and DSSI. This section provides additional recommendations for configuring a multiple-site OpenVMS Cluster system.

The GIGAswitch with the WAN T3/SONET option card provides a full-duplex, 155 Mb/s ATM/SONET link. The entire bandwidth of the link is dedicated to the WAN option card. However, the GIGAswitch/FDDI's internal design is based on full-duplex extensions to FDDI. Thus, the GIGAswitch/FDDI's design limits the ATM/SONET link's capacity to 100 Mb/s in each direction.

The GIGAswitch with the WAN T3/SONET option card provides several protocol options that can be used over a DS3 link. Use the DS3 link in clear channel mode, which dedicates its entire bandwidth to the WAN option card. The DS3 link capacity varies with the protocol option selected. Protocol options are described in Table D-1.
1 Asynchronous transfer mode
2 ATM Adaptation Layer
3 Physical Layer Convergence Protocol
4 High-Speed Datalink Control

For maximum link capacity, HP recommends configuring the WAN T3/SONET option card to use ATM AAL-5 mode with PLCP disabled.

The intersite bandwidth can limit application locking and I/O performance (including volume shadowing or RAID set copy times) and the performance of the lock manager. To promote reasonable response time, HP recommends that average traffic in either direction over an intersite link not exceed 60% of the link's bandwidth in that direction for any 10-second interval. Otherwise, queuing delays within the FDDI-to-WAN bridges can adversely affect application performance. Remember to account for both OpenVMS Cluster communications (such as locking and I/O) and network communications (such as TCP/IP, LAT, and DECnet) when calculating link utilization.

An intersite link introduces a one-way delay of up to 1 ms per 100 miles of intersite cable route distance, plus the delays through the FDDI-to-WAN bridges at each end. HP recommends that you consider the effects of intersite delays on application response time and throughput. For example, intersite link one-way path delays have the following components:
Calculate the delays for a round trip as follows:

WAN round-trip delay = 2 x (N miles x 0.01 ms per mile + 2 x 0.5 ms per FDDI-to-WAN bridge)

An I/O write operation that is MSCP served requires a minimum of two round-trip packet exchanges:

WAN I/O write delay = 2 x WAN round-trip delay

Thus, an I/O write over a 100-mile WAN link takes at least 8 ms longer than the same I/O write over a short, local FDDI.

Similarly, a lock operation typically requires a round-trip exchange of packets:

WAN lock operation delay = WAN round-trip delay

An I/O operation that requires N lock operations to synchronize it incurs the following additional delay due to the WAN:

WAN locked I/O operation delay = (N x WAN lock operation delay) + WAN I/O write delay

The bit error ratio (BER) parameter is an important measure of how frequently bit errors are likely to occur on the intersite link. Consider the effects of bit errors on application throughput and responsiveness when configuring a multiple-site OpenVMS Cluster. Intersite link bit errors can result in packets being lost and retransmitted, with consequent delays in application I/O response time (see Section D.3.6). You can expect application delays ranging from a few hundred milliseconds to a few seconds each time a bit error causes a packet to be lost.

Interruptions of intersite link service can result in the resources at one or more sites becoming unavailable until connectivity is restored (see Section D.3.5).

Sites with nodes contributing quorum votes should have a local system disk or disks for those nodes.

A large, multiple-site OpenVMS Cluster requires a system management staff trained to support an environment that consists of a large number of diverse systems that are used by many people performing varied tasks.
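The delay formulas and the 60% utilization guideline above are simple enough to capture in a short script. The following Python sketch is not part of the original manual; the function names, example distance, and lock count are illustrative only.

```python
# Rough model of the extra latency an intersite DS3/SONET link adds,
# using the rules of thumb above: about 0.01 ms of one-way delay per mile
# of cable route and about 0.5 ms through each FDDI-to-WAN bridge.

MS_PER_MILE = 0.01      # one-way propagation delay per mile of cable route
MS_PER_BRIDGE = 0.5     # delay through each FDDI-to-WAN bridge

def wan_round_trip_ms(route_miles, bridges_each_way=2):
    """Round-trip delay over the intersite link (out and back)."""
    one_way = route_miles * MS_PER_MILE + bridges_each_way * MS_PER_BRIDGE
    return 2 * one_way

def mscp_write_delay_ms(route_miles):
    """An MSCP-served write needs at least two round-trip packet exchanges."""
    return 2 * wan_round_trip_ms(route_miles)

def locked_io_delay_ms(route_miles, n_locks):
    """An I/O synchronized by N lock operations adds one round trip per lock."""
    return n_locks * wan_round_trip_ms(route_miles) + mscp_write_delay_ms(route_miles)

def utilization_within_guideline(avg_mbps, link_mbps):
    """HP guideline: average traffic in a direction, over any 10-second
    interval, should not exceed 60% of that direction's bandwidth."""
    return avg_mbps <= 0.60 * link_mbps

if __name__ == "__main__":
    miles = 100
    print(f"Round trip over {miles} miles: {wan_round_trip_ms(miles):.1f} ms")    # 4.0 ms
    print(f"MSCP-served write delay:      {mscp_write_delay_ms(miles):.1f} ms")   # 8.0 ms
    print(f"Write guarded by 2 locks:     {locked_io_delay_ms(miles, 2):.1f} ms")
    print("Within 60% guideline on DS3:", utilization_within_guideline(20.0, 44.736))
```

For a 100-mile route the sketch reproduces the figures above: a 4 ms round trip and at least 8 ms of added delay for an MSCP-served write.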
You can provide portions of a DS3 link with microwave radio equipment.
The specifications in Section D.3.6 apply to any DS3 link. The BER and
availability of microwave radio portions of a DS3 link are affected by
local weather and the length of the microwave portion of the link.
If you plan to use microwave radio for portions of a DS3 link, consider
working with a microwave consultant who is familiar with your local
environment.
If the FDDI-to-WAN bridges and the link that connects multiple sites become temporarily unavailable, the following events could occur:
Many communication service carriers offer availability-enhancing
options, such as path diversity and protective switching, that can
significantly increase the intersite link's availability.
This section describes the requirements for successful communications and performance with the WAN communications services. To assist you in communicating your requirements to a WAN service supplier, this section uses WAN specification terminology and definitions commonly used by telecommunications service providers. These requirements and goals are derived from a combination of Bellcore Communications Research specifications and a Digital analysis of error effects on OpenVMS Clusters.

Table D-2 describes terminology that will help you understand the Bellcore and OpenVMS Cluster requirements and goals used in Table D-3.

Use the Bellcore and OpenVMS Cluster requirements for ATM/SONET OC-3 and DS3 service error performance (quality) specified in Table D-3 to help you assess the impact of the service supplier's service quality, availability, down time, and service-interruption frequency goals on the system.
1 Application pauses may occur every hour or so (similar to what is described under OpenVMS Cluster Requirements) because of packet loss caused by bit errors.
2 Pauses are due to a virtual circuit retransmit timeout resulting from a lost packet on one or more NISCA transport virtual circuits. Each pause might last from a few hundred milliseconds to a few seconds.
1 Application requirements might need to be more rigorous than those shown in the OpenVMS Cluster Requirements column.
2 Averaged over many days.
3 Does not include any burst errored seconds occurring in the measurement period.
4 The average number of channel down-time periods occurring during a year. This parameter is useful for specifying how often a channel might become unavailable.

Table Key
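To get a feel for what a given bit error ratio means in practice, the following back-of-the-envelope Python sketch estimates how often a corrupted (and therefore retransmitted) packet can be expected on a DS3 link. It is not from the manual: the BER values, the assumed packet size, and the 60% utilization figure are illustrative assumptions.

```python
# Estimate how often a given bit error ratio (BER) corrupts a packet,
# triggering the NISCA virtual circuit retransmission pauses described above.
# Assumes independent, randomly distributed bit errors.

DS3_LINE_RATE_BPS = 44_736_000          # nominal DS3 line rate

def mean_seconds_between_lost_packets(ber, carried_bps, packet_bits=4000 * 8):
    """Mean time between corrupted packets for traffic carried at carried_bps."""
    packets_per_second = carried_bps / packet_bits
    # Probability that at least one bit in a packet is errored.
    p_packet_errored = 1.0 - (1.0 - ber) ** packet_bits
    lost_per_second = packets_per_second * p_packet_errored
    return float("inf") if lost_per_second == 0 else 1.0 / lost_per_second

if __name__ == "__main__":
    carried = DS3_LINE_RATE_BPS * 0.60   # assume 60% average utilization
    for ber in (1e-9, 1e-10, 1e-12):
        t = mean_seconds_between_lost_packets(ber, carried)
        print(f"BER {ber:g}: roughly one lost packet every {t / 60:.1f} minutes")
```

Each such lost packet translates into an application pause of a few hundred milliseconds to a few seconds, which is why the error-performance figures in Table D-3 matter when you negotiate service quality with a WAN supplier.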
D.4 Managing OpenVMS Cluster Systems Across Multiple Sites

In general, you manage a multiple-site OpenVMS Cluster using the same tools and techniques that you would use for any OpenVMS Cluster interconnected by a LAN. The following sections describe some additional considerations and recommend some system management tools and techniques.

The following table lists system management considerations specific to multiple-site OpenVMS Cluster systems:
D.4.1 Methods and Tools

You can use the following system management methods and tools to manage both remote and local nodes:
D.4.2 Shadowing Data

Volume Shadowing for OpenVMS allows you to shadow data volumes across multiple sites. System disks can be members of a volume shadowing or RAID set within a site; however, use caution when configuring system disk shadow set members across multiple sites, because after a failure it may be necessary to boot from a remote system disk shadow set member. If your system does not support FDDI booting, this is not possible.
See the Software Product Descriptions (SPDs) for complete and
up-to-date details about Volume Shadowing for OpenVMS (SPD
27.29.xx) and StorageWorks RAID for OpenVMS (SPD
46.49.xx).
Monitor performance for multiple-site OpenVMS Cluster systems as follows: