
HP OpenVMS Systems Documentation

Guidelines for OpenVMS Cluster Configurations



Appendix B MEMORY CHANNEL Technical Summary
     B.1     Product Overview
         B.1.1         MEMORY CHANNEL Features
         B.1.2         MEMORY CHANNEL Version 2.0 Features
         B.1.3         Hardware Components
         B.1.4         Backup Interconnect for High-Availability Configurations
         B.1.5         Software Requirements
             B.1.5.1             Memory Requirements
             B.1.5.2             Large-Memory Systems' Use of NPAGEVIR Parameter
         B.1.6         Configurations
             B.1.6.1             Configuration Support
     B.2     Technical Overview
         B.2.1         Comparison With Traditional Networks and SMP
         B.2.2         MEMORY CHANNEL in the OpenVMS Cluster Architecture
         B.2.3         MEMORY CHANNEL Addressing
         B.2.4         MEMORY CHANNEL Implementation
Appendix C Multiple-Site OpenVMS Clusters
     C.1     What is a Multiple-Site OpenVMS Cluster System?
         C.1.1         ATM, DS3, FDDI, and [D]WDM Intersite Links
         C.1.2         Benefits of Multiple-Site OpenVMS Cluster Systems
         C.1.3         General Configuration Guidelines
     C.2     Using Cluster over IP to Configure Multiple-Site OpenVMS Cluster Systems
     C.3     Using FDDI to Configure Multiple-Site OpenVMS Cluster Systems
     C.4     Using WAN Services to Configure Multiple-Site OpenVMS Cluster Systems
         C.4.1         The ATM Communications Service
         C.4.2         The DS3 Communications Service (T3 Communications Service)
         C.4.3         FDDI-to-WAN Bridges
         C.4.4         Guidelines for Configuring ATM and DS3 in an OpenVMS Cluster System
             C.4.4.1             Requirements
             C.4.4.2             Recommendations
         C.4.5         Availability Considerations
         C.4.6         Specifications
     C.5     Managing OpenVMS Cluster Systems Across Multiple Sites
         C.5.1         Methods and Tools
         C.5.2         Monitoring Performance
Index
Examples
6-1 Messages Resulting from Manual Path Switch
6-2 Messages Displayed When Other Nodes Detect a Path Switch
7-1 Using wwidmgr -show wwid
7-2 Using wwidmgr -show wwid -full
7-3 Using wwidmgr -quickset
7-4 Boot Sequence from an FC System Disk
7-5 Enabling Clustering on a Standalone FC Node
7-6 Adding a Node to a Cluster with a Shared FC System Disk
A-1 SHOW DEVICE Command Sample Output
A-2 Adding a Node to a SCSI Cluster
Figures
1 OpenVMS Cluster System Components and Features
1-1 Hardware and Operating System Components
4-1 Two-Node OpenVMS Integrity servers Cluster System
4-2 Point-to-Point 10 Gigabit Ethernet OpenVMS Cluster
4-3 Switched 10 Gigabit Ethernet OpenVMS Cluster
6-1 Multibus Failover Configuration
6-2 Direct SCSI to MSCP Served Configuration With One Interconnect
6-3 Direct SCSI to MSCP Served Configuration With Two Interconnects
6-4 Storage Subsystem in Transparent Mode
6-5 Storage Subsystem in Multibus Mode
6-6 Port Addressing for Parallel SCSI Controllers in Multibus Mode
6-7 Port Addressing for Fibre Channel Controllers in Multibus Mode
6-8 Parallel SCSI Configuration With Transparent Failover
6-9 Parallel SCSI Configuration With Multibus Failover and Multiple Paths
6-10 Multiported Parallel SCSI Configuration With Single Interconnect in Transparent Mode
6-11 Multiported Parallel SCSI Configuration With Multiple Paths in Transparent Mode
6-12 Multiported Parallel SCSI Configuration With Multiple Paths in Multibus Mode
6-13 Devices Named Using a Node Allocation Class
6-14 Devices Named Using a Port Allocation Class
6-15 Devices Named Using an HSZ Allocation Class
6-16 Single Host With Two Dual-Ported Storage Controllers, One Dual-Ported MDR, and Two Buses
6-17 Single Host With Two Dual-Ported Storage Controllers, One Dual-Ported MDR, and Four Buses
6-18 Two Hosts With Two Dual-Ported Storage Controllers, One Dual-Ported MDR, and Four Buses
6-19 Two Hosts With Shared Buses and Shared Storage Controllers
6-20 Two Hosts With Shared, Multiported Storage Controllers
6-21 Invalid Multipath Configuration
6-22 Fibre Channel Path Naming
6-23 Configuration With Multiple Direct Paths
7-1 Switched Topology (Logical View)
7-2 Switched Topology (Physical View)
7-3 Arbitrated Loop Topology Using MSA 1000
7-4 Single Host With One Dual-Ported Storage Controller
7-5 Multiple Hosts With One Dual-Ported Storage Controller
7-6 Multiple Hosts With Storage Controller Redundancy
7-7 Multiple Hosts With Multiple Independent Switches
7-8 Multiple Hosts With Dual Fabrics
7-9 Multiple Hosts With Larger Dual Fabrics
7-10 Multiple Hosts With Four Fabrics
7-11 Fibre Channel Host and Port Addresses
7-12 Fibre Channel Host and Port WWIDs and Addresses
7-13 Fibre Channel Initiator and Target Names
7-14 Fibre Channel Disk Device Naming
8-1 LAN OpenVMS Cluster System
8-2 Two-LAN Segment OpenVMS Cluster Configuration
8-3 Three-LAN Segment OpenVMS Cluster Configuration
8-4 Logical LAN Failover IP OpenVMS Cluster System
8-5 MEMORY CHANNEL Cluster
8-6 OpenVMS Cluster with Satellites
8-7 Multiple-Site OpenVMS Cluster Configuration Connected by WAN Link
9-1 OpenVMS Cluster Growth Dimensions
9-2 Three-Node Fast-Wide SCSI Cluster
9-3 Four-Node Ultra SCSI Hub Configuration
9-4 Six-Satellite LAN OpenVMS Cluster
9-5 Six-Satellite LAN OpenVMS Cluster with Two Boot Nodes
9-6 Twelve-Satellite OpenVMS Cluster with Two LAN Segments
9-7 Forty-Five Satellite OpenVMS Cluster with Intersite Link
9-8 High-Powered Workstation Server Configuration 1995
9-9 High-Powered Workstation Server Configuration 2004
9-10 Multiple-Node IP-Based Cluster System
9-11 Comparison of Direct and MSCP Served Access
9-12 Hot-File Distribution
10-1 Simple LAN OpenVMS Cluster with a Single System Disk
10-2 Multiple System Disks in a Common Environment
10-3 Multiple-Environment OpenVMS Cluster
A-1 Key to Symbols Used in Figures
A-2 Highly Available Servers for Shared SCSI Access
A-3 Maximum Stub Lengths
A-4 Conceptual View: Basic SCSI System
A-5 Sample Configuration: Basic SCSI System Using AlphaServer 1000, KZPAA Adapter, and BA350 Enclosure
A-6 Conceptual View: Using DWZZAs to Allow for Increased Separation or More Enclosures
A-7 Sample Configuration: Using DWZZAs to Allow for Increased Separation or More Enclosures
A-8 Sample Configuration: Three Hosts on a SCSI Bus
A-9 Sample Configuration: SCSI System Using Differential Host Adapters (KZPSA)
A-10 Conceptual View: SCSI System Using a SCSI Hub
A-11 Sample Configuration: SCSI System with SCSI Hub Configuration
A-12 Setting Allocation Classes for SCSI Access
A-13 SCSI Bus Topology
A-14 Hot Plugging a Bus Isolator
B-1 MEMORY CHANNEL Hardware Components
B-2 Four-Node MEMORY CHANNEL Cluster
B-3 Virtual Hub MEMORY CHANNEL Cluster
B-4 MEMORY CHANNEL- and SCSI-Based Cluster
B-5 MEMORY CHANNEL-, CI-, and SCSI-Based Cluster
B-6 MEMORY CHANNEL- and DSSI-Based Cluster
B-7 OpenVMS Cluster Architecture and MEMORY CHANNEL
B-8 Physical Memory and I/O Address Space
B-9 MEMORY CHANNEL Bus Architecture
C-1 Site-to-Site Link Between Philadelphia and Washington
C-2 Multiple-Site OpenVMS Cluster Configuration with Remote Satellites
C-3 Multiple-Site OpenVMS Cluster Configuration with Cluster over IP
C-4 ATM/SONET OC-3 Service
C-5 DS3 Service
C-6 Multiple-Site OpenVMS Cluster Configuration Connected by DS3

