
Ask the Wizard Questions

How to keep LAVC traffic on the FDDI?

The Question is:

I am helping with a 4-node cluster: 2 Alphas (3000/400) and 2 VAXes (4000/90). Each machine has two network interfaces: the built-in Ethernet interface and a TURBOchannel FDDI interface. The four FDDI interfaces are connected through an FDDI concentrator and form an isolated ring (i.e., no connection to any other machines). The four Ethernet ports are connected through a hub, which is also connected to the building's backbone.

DECnet is enabled on both lines (i.e., FDDI and Ethernet).

My problem is this: one of the VAXes (call it sv1) tends to send a large amount of LAVC traffic to one of the Alphas (call it sv4) using the Ethernet interface. Using a packet sniffer, I have observed that the traffic seems to be one-way (i.e., sv4 sends its data to sv1 using the FDDI link, but sv1 seems to send data to sv4 using the Ethernet link). Obviously I would like the nodes to use the FDDI link for LAVC traffic. I have checked the line and circuit counters, and there do not seem to be any high error rates or other indications of why it is using the Ethernet interface.

When I do a SHOW NETWORK on sv1, it indicates that it is using the FDDI interface to talk to sv4, so I do not understand why it is using the Ethernet interface to send packets.

I have also set the cost of the FDDI circuit lower than that of the Ethernet circuit, which should force the traffic onto the FDDI ring.

Do you have any suggestions on what to set up so that sv1 will use FDDI to talk to sv4?


The Answer is:

Modifying the DECnet cost values has no effect, because cluster traffic does not travel over DECnet at all. It runs under a completely different protocol called SCS (System Communications Services), or, in this case, the LAN variant of SCS, which is called NISCS.

By default, the cluster software enables the NISCS protocol on every LAN adapter it finds. Approximately every 3 seconds, a Hello packet is multicast from each LAN adapter. PEDRIVER checks the transit times of these multicast packets and remembers, for each remote node, which path had the lowest latency. It then uses that "best" path for the next roughly 3-second interval as the path on which to transmit packets to that remote node. In your case, the Ethernet path is sometimes measured as having the lower latency, and thus gets used.
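You can confirm which paths PEDRIVER has open by examining the cluster communication ports (including the PE, i.e. NISCS, port) from the System Dump Analyzer on the running system. A minimal sketch; the amount of per-port and per-channel detail that SHOW PORTS displays varies by OpenVMS release:

  $ ANALYZE/SYSTEM
  SDA> SHOW PORTS
  SDA> EXIT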

There are two ways to prevent NISCS traffic from taking the Ethernet path:

  1. Raise the NISCS_MAX_PKTSZ system parameter (LRPSIZE for V5.5-2 and earlier) to a value above what Ethernet can handle. The exact value varies by release (see the VMScluster manual), but instead of the roughly 1500 bytes an Ethernet can carry, it can be raised to nearly 4500 bytes for FDDI. When two nodes open an SCS connection, the maximum transfer size is one of the negotiated parameters, so if you raise the maximum packet size, PEDRIVER will form the virtual circuit using only FDDI paths. Only if the last FDDI path between two nodes goes away will PEDRIVER fail over to an Ethernet path, which it does by closing and then immediately re-opening the virtual circuit with a smaller maximum transfer size. (See the first sketch after this list.)
  2. Use the LAVC$STOP_BUS program from SYS$EXAMPLES: to turn off use of the Ethernet adapters for the NISCS protocol entirely. Unlike option 1 above, this prevents failover to the Ethernet in case of a failure on the FDDI. (See the second sketch after this list.)
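As a concrete illustration of option 1, NISCS_MAX_PKTSZ is normally raised through MODPARAMS.DAT and AUTOGEN on every cluster member, followed by a reboot. This is a minimal sketch only; the value 4474 is an assumption for illustration, so check the VMScluster manual for the correct value for your release:

  $ ! Add this line to SYS$SYSTEM:MODPARAMS.DAT on each node
  $ ! (4474 is illustrative; see the VMScluster manual for your release):
  $ !     NISCS_MAX_PKTSZ = 4474
  $ ! Then regenerate the system parameters and reboot:
  $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK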
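For option 2, LAVC$STOP_BUS is shipped as MACRO-32 source. Here is a sketch of building and running it, assuming the LAVC$BUILD.COM procedure supplied in SYS$EXAMPLES: is present; the exact invocation, the way the target adapter is named, and the privileges required are described in the comments at the top of LAVC$STOP_BUS.MAR:

  $ ! Build the example program from its MACRO-32 source
  $ @SYS$EXAMPLES:LAVC$BUILD.COM LAVC$STOP_BUS.MAR
  $ ! Run it to stop NISCS use of the chosen Ethernet adapter
  $ RUN LAVC$STOP_BUS

Since this takes effect only on the running system, it typically needs to be run again after each reboot, for example from your site-specific startup procedure.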