

HP OpenVMS Version 8.2 New Features and Documentation Overview




Chapter 2
General User Features

This chapter provides information about new features for all users of the HP OpenVMS Alpha and OpenVMS I64 operating systems.

2.1 DCL Commands and Lexical Functions

Table 2-1 and Table 2-2 summarize new and changed DCL commands, qualifiers, and lexical functions for OpenVMS Version 8.2; brief illustrative examples follow each table. For more information, refer to online help or the HP OpenVMS DCL Dictionary.

Table 2-1 Updates to DCL Commands and DCL Documentation
DCL Command Documentation Update
ANALYZE/IMAGE New qualifiers: /FLAG_VALUES, /SECTIONS, /SEGMENTS, /SELECT.
ANALYZE/OBJECT New qualifiers: /FLAG_VALUES, /SECTIONS, /SELECT.
ANALYZE/SSLOG New command. Refer to the HP OpenVMS System Analysis Tools Manual for details.
APPEND New /BLOCK_SIZE qualifier.
ASSIGN New /CLUSTER_SYSTEM qualifier.
CHECKSUM New command.
COPY New /BLOCK_SIZE qualifier.
CREATE/MAILBOX New command.
DEASSIGN New /CLUSTER_SYSTEM qualifier.
DEFINE New /CLUSTER_SYSTEM qualifier.
DELETE 'file' New /GRAND_TOTAL qualifier.
DELETE/BITMAP New command.
DELETE/MAILBOX New command.
DIRECTORY New keyword VERSION for /SELECT.
INITIALIZE New /GPT qualifier, new keywords for /ERASE, changes to /LIMIT, and /CLUSTER_SIZE default raised to 16.
OPEN New /NOSHARE qualifier.
PATCH Former VAX-only command now runs on all three platforms.
PURGE New /GRAND_TOTAL qualifier.
SEARCH New qualifiers: /LIMIT, /SKIP, /WILDCARD_MATCHING.
SET BOOTBLOCK New command (for I64 only).
SET DISPLAY Additional values for /TRANSPORT and /PMTRANSPORT to support Internet Protocol version 6 (IPv6).
SET IMAGE New command.
SET PROCESS New /TOKEN and /SSLOG qualifiers and updated /RESOURCE_WAIT qualifier.
SET SERVER Revised /CONFIGURE description. SET SERVER is now documented as three separate commands, which makes online help easier to use.
SET SHADOW Several additions to support host-based minimerge.
SET TERMINAL New /BACKSPACE qualifier.
SHOW DEVICES Support added to display data in bytes when /FULL is specified.
SHOW FASTPATH New command.
SHOW IMAGE New command.
SHOW LICENSE New /HIERARCHY and /OE qualifiers.
SHOW LOGICAL New /CLUSTER qualifier; expanded display for /FULL.
SHOW PROCESS New /CASE_LOOKUP and /TOKEN qualifiers; revised example for Red Sox fans.
SHOW SERVER SHOW SERVER is now documented as two separate commands, which makes online help easier to use.
SHOW SHADOW Several additions to support host-based minimerge.
SHOW SYSTEM New /IMAGE qualifier.
SHOW TERMINAL New field at end of display.
SHOW WORKING_SETS New capability to display in bytes.
WRITE New /WAIT[/NOWAIT] qualifier.
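
For example, the following DCL fragment exercises two of the new commands. It is an illustrative sketch only: the file name is hypothetical, and the exact qualifier behavior should be confirmed in the HP OpenVMS DCL Dictionary or online help.

    $ CHECKSUM LOGIN.COM                   ! compute a file checksum
    $ SHOW SYMBOL CHECKSUM$CHECKSUM        ! the result is left in this DCL symbol
    $ SEARCH/LIMIT=5 LOGIN.COM "DEFINE"    ! stop after a limited number of matches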

Table 2-2 Updates to DCL Lexicals and Lexicals Documentation
DCL Lexical Documentation Update
F$FID_TO_NAME New lexical function.
F$GETDVI New pathname argument and many new item codes.
F$GETJPI New TOKEN item code.
F$GETSYI List of item codes updated.
F$LICENSE New lexical function.
F$MULTIPATH New lexical function.
F$UNIQUE New lexical function.
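
Similarly, here is a brief sketch of two of the new lexical functions. The license product name is an assumption for illustration; substitute the name under which your license is registered.

    $ unique_id = F$UNIQUE()               ! returns a unique string, handy for temporary file names
    $ WRITE SYS$OUTPUT "Scratch file: SCRATCH_''unique_id'.TMP"
    $ IF F$LICENSE("OPENVMS-I64-EOE") THEN WRITE SYS$OUTPUT "An EOE license is loaded"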

2.2 License Management Facility (LMF) Enhancements

The License Management Facility has been updated to support two new business practices for OpenVMS on I64 systems: operating environments (OEs) and per-processor licensing (PPL).

Operating environments include the operating system and bundled applications in an integrated package. Currently, three different operating environments are available for OpenVMS I64 systems:

  • HP OpenVMS Foundation Operating Environment (FOE)
  • HP OpenVMS Enterprise Operating Environment (EOE)
  • HP OpenVMS Mission Critical Operating Environment (MCOE)

New qualifiers to LMF commands allow you to manage your operating environment licenses. A single license now enables FOE, EOE, or MCOE, depending on which you have purchased. This change reduces the complexity of LMF management and improves operational flexibility.

A new license type, the per-processor license (PPL), is required to run the operating environments. You need a license for each active processor in your I64 system.

Refer to the HP OpenVMS License Management Utility Manual for more information about licensing.
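
For example, the new SHOW LICENSE qualifiers listed in Table 2-1 let you check which operating environment a loaded license enables. This is a sketch only; the product name shown is illustrative, so use the name under which your OE license was registered.

    $ LICENSE LOAD OPENVMS-I64-EOE         ! load the registered OE license (illustrative name)
    $ SHOW LICENSE/OE                      ! display the operating environment the license enables
    $ SHOW LICENSE/HIERARCHY               ! display how the OE licenses relate to one another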

2.3 Monitor Utility Enhancements

The Monitor utility has been ported as a native utility for both OpenVMS Alpha and OpenVMS I64 systems. Several enhancements were made to the utility during the port:

  • Several classes of data (such as DISK) now utilize the entire screen height when displaying information.
  • The SYSTEM class now displays the number of "current" processes in Process States. Previously, current processes were grouped in the "Other" category.
  • The format of the recorded data file has changed in this release to improve the alignment of the recorded data. For details, see the Supplemental MONITOR Information - Record Formats appendix in the HP OpenVMS System Management Utilities Reference Manual. Because of these changes, pre-V8.2 systems cannot read recorded data files in the new format. However, the SYS$EXAMPLES:MONITOR_CONVERT.C utility (an executable is also provided) converts a new-format data file to the pre-V8.2 format. To use the utility, enter the following command (a fuller usage sketch follows this list):


    $ mc sys$examples:monitor_convert input-file output-file
    
  • Various performance improvements were made to the MONITOR data collection routines to reduce system overhead when using the Monitor utility.
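
The following sketch shows the recording and conversion steps mentioned in the list above. File names are illustrative; stop the recording with Ctrl/C when enough data has been collected.

    $ MONITOR ALL_CLASSES /RECORD=NEWDATA.DAT /NODISPLAY /INTERVAL=60   ! record data on the V8.2 system
    $ MC SYS$EXAMPLES:MONITOR_CONVERT NEWDATA.DAT OLDDATA.DAT           ! convert to the pre-V8.2 record format
    $ MONITOR /INPUT=NEWDATA.DAT ALL_CLASSES /SUMMARY                   ! the new-format file still plays back on V8.2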

2.4 OpenVMS I64 Operating Environments (OEs)

OpenVMS Version 8.2 introduces a new way of packaging the OpenVMS operating system and its layered products for OpenVMS I64 systems. Unlike OpenVMS Alpha, the OpenVMS I64 operating system is offered in three packages for Integrity servers:

  • HP OpenVMS Foundation Operating Environment (FOE) is an Internet-ready, feature-rich offering with leading price and performance.
  • HP OpenVMS Enterprise Operating Environment (EOE) delivers enhanced manageability functions, single-system availability, and performance.
  • HP OpenVMS Mission Critical Operating Environment (MCOE) delivers the highest levels of multi-system availability and workload management.

All three operating environments are included on one DVD. Your license agreement determines to which operating environment you have access.

Note

These OpenVMS I64 operating environments are licensed on a per-processor basis (PPL) and not on system capacity. Alpha licensing remains unchanged.

For a comprehensive list of technical specifications, go to the Software Product Description Web site:

http://www.hp.com/info/spd

For more information about HP OpenVMS Operating Environments, contact your HP Sales Representative.


Chapter 3
System Management Features

This chapter provides information about new features, changes, and enhancements for system managers.

3.1 OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) Utility

The OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility is a menu-based utility that allows you to easily manage EFI boot options on an Integrity server running OpenVMS I64. The utility allows you to:

  • Set your Integrity server with a boot option for your system disk, dump device, or debug device
  • Display OpenVMS boot entries
  • Determine the position of the entry in the EFI Boot Manager; for example, you can ensure that your system disk is first on the option list so that it boots automatically when the system is powered on or rebooted
  • Set boot flags for the entry
  • Remove an entry
  • Set or disable the EFI timeout (the time the EFI Boot Manager waits before booting the first or next available entry on the boot list)
  • Validate boot entries
  • Configure boot, dump, and debug devices while OpenVMS is running. To configure boot devices, you do not have to shut down the operating system and enter commands at the console as you do on Alpha systems.

After installing OpenVMS I64, HP recommends using the utility to add your system disk as the first boot option in the EFI Boot Manager list. This utility is required for configuring booting on Fibre Channel storage devices; it is optional for all other devices. Because it is easy to use, HP recommends using this utility rather than the EFI Boot Manager wherever possible. For information on configuring Fibre Channel devices, refer to the Guidelines for OpenVMS Cluster Configurations manual. For more information on the OpenVMS Boot Manager utility, refer to the HP OpenVMS System Manager's Manual.
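
The utility is invoked from DCL while OpenVMS is running and then presents its menu:

    $ @SYS$MANAGER:BOOT_OPTIONS.COM

From the menu you can add, display, validate, or remove boot entries and set the EFI timeout, as described in the list above.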

3.2 Clustering on OpenVMS I64 Systems

With few exceptions, OpenVMS Cluster software provides the same features on OpenVMS I64 systems as it currently offers on OpenVMS Alpha and VAX systems.

Key OpenVMS Cluster features include:

  • Fully shared, multiple-node read/write disk access
  • Clusterwide file system
  • Clusterwide batch/print queue subsystem
  • Distributed lock manager
  • Votes/quorum-based membership management
  • Single security domain
  • Single system management domain
  • Rich, clusterwide API
  • Mixed-architecture clusters
  • Support for rolling upgrades
  • Support for multiple interconnects (see Section 3.2.1)
  • Support for a maximum of sixteen systems in a mixed-architecture cluster, eight of which can be I64 systems
  • Failover and load balancing
  • Cluster network alias
  • Disk and tape serving
  • Disaster-tolerant capabilities with support for distances up to 500 miles (800 kilometers) using Disaster-Tolerant Cluster Services (DTCS)

Satellite booting is not supported in this release. It is planned for a future release.

3.2.1 OpenVMS I64 Cluster Interconnect Support

Ethernet, Fast Ethernet, and Gigabit Ethernet can be used for cluster communications (SCS traffic) on OpenVMS I64 systems. However, FDDI and ATM, which are supported for cluster communications on OpenVMS Alpha systems, are not supported on OpenVMS I64 systems.

While FDDI and ATM adapters are not supported as cluster interconnects on OpenVMS I64 systems, they are supported as inter-site interconnects in a multiple-site cluster. You can use bridges or switches to connect the OpenVMS I64 node's Fast Ethernet or Gigabit Ethernet NICs to any inter-site interconnect the WAN supplier provides, such as T3, E3, SONET, ATM, FDDI, DWDM, or others.

OpenVMS Cluster software supports the following three proprietary cluster interconnects on Alpha systems, but they are not supported on OpenVMS I64 systems: DSSI (DIGITAL Systems Storage Interconnect), CI (Cluster Interconnect), and MEMORY CHANNEL.

Although DSSI and CI are not supported on OpenVMS I64 systems, data stored on DSSI and CI disks connected to Alpha systems can be served to OpenVMS I64 systems in the same cluster.

Fibre Channel is supported as a shared-storage cluster interconnect on OpenVMS I64 systems, but SCSI is not. (SCSI as a shared-storage cluster interconnect is also not supported on OpenVMS Alpha systems with recent SCSI adapters.)

However, data stored on SCSI disks directly attached to either OpenVMS I64 systems or to OpenVMS Alpha systems can be served to any other members of the cluster. This is also true for any locally attached storage in an OpenVMS Cluster system.

3.2.2 Mixed-Architecture Clusters

OpenVMS supports both OpenVMS Alpha and OpenVMS I64 systems in a mixed-architecture cluster. The OpenVMS Alpha version supported in this configuration is OpenVMS Alpha Version 7.3-2. Mixed-version support requires the installation of one or more remedial kits, as described in the HP OpenVMS Version 8.2 Release Notes. See the following web site for the HP OpenVMS Version 8.2 documentation set:


http://www.hp.com/go/openvms/doc

Figure 3-1 shows an OpenVMS Cluster system to which OpenVMS I64 systems have been added.

Figure 3-1 OpenVMS Cluster Systems with Alpha and I64 Systems


A LAN interconnect is used for cluster communications for all systems in the cluster. In this configuration, the same Fibre Channel storage can be accessed by both OpenVMS Alpha and OpenVMS I64 systems at the same time. Note that the I64 systems, although directly connected only to the Fibre Channel disks, can also be served data from the CI disks. In an OpenVMS mixed-architecture cluster, each architecture requires a minimum of one system disk. For this release, up to eight I64 systems are supported in a cluster. In a mixed-architecture cluster, this means you can include up to eight I64 systems together with Alpha systems, provided the total number of systems does not exceed sixteen.

3.2.2.1 Storage in a Mixed-Architecture Cluster

This section describes the rules pertaining to storage, including system disks, in a mixed-architecture cluster consisting of OpenVMS I64 and OpenVMS Alpha systems.

Figure 3-2 is a simplified version of a mixed-architecture cluster of OpenVMS I64 and OpenVMS Alpha systems with locally attached storage and a shared Storage Area Network (SAN).

Figure 3-2 Storage in Mixed-Architecture OpenVMS Cluster


I64 systems in a mixed-architecture OpenVMS Cluster system:

  • Must have an I64 system disk, either a local disk or a shared Fibre Channel disk.
  • Can use served Alpha disks and served Alpha tapes.
  • Can use SAN disks and tapes.
  • Can share the same SAN data disk with Alpha systems.
  • Can serve disks and tapes to other cluster members, both I64 and Alpha systems.

Alpha systems in a mixed-architecture OpenVMS Cluster system:

  • Must have an Alpha system disk, which can be shared with other clustered Alpha systems.
  • Can use locally attached tapes and disks.
  • Can serve disks and tapes to both I64 and Alpha systems.
  • Can use I64 served data disks.
  • Can use SAN disks and tapes.
  • Can share the same SAN data disk with I64 systems.

3.3 EFI for OpenVMS Utilities

The EFI for OpenVMS utilities provide device management capabilities for Integrity servers running OpenVMS I64. These utilities interact with the EFI Shell. The commands that invoke them must be issued from the \efi\vms directory at the EFI Shell> prompt, after the OpenVMS operating system has been shut down:

  • VMS_BCFG: Adds a boot entry to the EFI Boot Manager, allowing you to specify an OpenVMS device name for the entry. (HP recommends using the OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility for this purpose.)
  • VMS_SET: Sets the dump device and the debug device to the specified OpenVMS device name.
  • VMS_SHOW: Displays the equivalent OpenVMS device name for devices mapped by the EFI console.

For more information, refer to the EFI utilities chapter in the HP OpenVMS System Management Utilities Reference Manual.

3.4 HP Performance Data Collector (TDC)

HP Performance Data Collector for OpenVMS (TDC V2.1) is available for use with OpenVMS Version 8.2. The Performance Data Collector (TDC) can be used to collect approximately 1100 system performance metrics from Alpha and I64 systems for analysis by other application software.

Metrics that are provided include the following:

  • Cache and memory utilization and performance
  • Cluster configuration and communications
  • CPU utilization
  • Disk utilization and performance
  • Distributed Lock Manager performance
  • Distributed Transaction Manager performance
  • File system performance
  • Networking hardware and software performance
  • Process metrics
  • Miscellaneous system performance metrics (for example, paging, swapping, and faulting)
  • System parameter settings

A run-time-only variant of the Performance Data Collector (TDC_RT V2.1) is installed with OpenVMS Version 8.2. The run-time variant provides a data collector application and support files. The collector application does not run automatically; however, a suitably privileged user can start and stop it manually.

A downloadable kit provides a Software Developer Kit (SDK) as well as run-time environments for all supported system configurations:

Platform OpenVMS Version
Alpha systems Version 7.3-2 or Version 8.2
I64 systems Version 8.2

The SDK provides a programmer manual that documents the TDC Application Programming Interface (API), as well as C header files and sample code. The API can be used to develop software to integrate TDC with other applications in various ways, including:

  • Extracting data from a TDC data file for analysis
  • Feeding data "live" to another application as the data is collected by TDC, without first storing the data in a file
  • Supplementing the metrics provided by TDC with other metrics of interest, in a fully integrated and supported fashion

Software built using the SDK will work with any runtime environment provided either by the TDC_RT kit, which is distributed and installed with OpenVMS, or by the full TDC kit.

The downloadable full kit and additional documentation are available at the following web site:


      http://h71000.www7.hp.com/openvms/products/tdc/

3.5 Ethernet LAN Drivers: Full-Duplex or Half-Duplex Mode Mismatch

Ethernet LAN drivers operate in either full-duplex or half-duplex mode, as determined by one of the following:

  • On Alpha systems, according to the console environment variable setting
  • On Alpha or I64 systems, according to the LANCP device database setting
  • If autonegotiation is enabled, in negotiation with the switch or link partner

For any of these conditions, if the duplex mode is set incorrectly, a duplex mode mismatch condition occurs.

An example of a duplex mode mismatch condition is the following:

A LAN device is set to operate in full-duplex mode at 100 megabits per second. However, the switch port is set to autonegotiate. The switch port determines the speed correctly, but selects half-duplex mode.

3.5.1 Result of Duplex Mode Mismatch

When a duplex mode mismatch condition occurs, the end of the link in full-duplex mode transmits whenever transmit data is available. It transmits without checking whether an incoming receive packet already occupies the link. This situation results in transmit and receive errors.

Any error represents a lost packet, which requires the application to do one of the following:

  • Perform error recovery to detect and retransmit the packet.
  • Have the link partner retransmit the packet.

Depending on the application, you might observe a significant degradation in performance. Therefore, it is important to detect and correct this condition.

3.5.2 Detection and Correction of Duplex Mode Mismatch

Ethernet LAN drivers for all full-duplex-capable LAN devices have been modified to detect and report this condition so that a system manager can correct it. Each driver checks error counters periodically. If it appears that a duplex mismatch condition exists, the driver displays the following console message:


    %EWA0, Possible duplex mode mismatch condition detected

In addition, the LAN driver makes an error log entry that you can identify by the type 0xDD (if the error log viewer does not decode the entry into English). The LAN drivers make other error log entries for link-up and link-down transitions.

You can decipher error log entries by searching LAN driver error logs for the following error types:

Error Type Description
0xCA Connection available (link up)
0xCD Connection down (link down)
0xDD Dubious duplex (possible duplex mismatch)

Note that each error log entry has the same format, and the type code is in the same location.

The error log entry and console message are repeated every hour until the condition is resolved.

You can use LANCP or ANALYZE/SYSTEM to gather more information from device counters.
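
For example, a system manager might use LANCP as follows to inspect the device counters and characteristics and, if necessary, force the duplex setting to match the switch port. The device name EWA0 is illustrative; confirm the qualifiers against the LANCP documentation before use.

    $ MCR LANCP SHOW DEVICE EWA0/COUNTERS                 ! look for steadily rising transmit and receive errors
    $ MCR LANCP SHOW DEVICE EWA0/CHARACTERISTICS          ! check the current speed and duplex settings
    $ MCR LANCP SET DEVICE EWA0/SPEED=100/FULL_DUPLEX     ! set the running device to match the switch port
    $ MCR LANCP DEFINE DEVICE EWA0/SPEED=100/FULL_DUPLEX  ! record the setting in the permanent device database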

