
HP OpenVMS Version 8.4 Release Notes


Chapter 1
OpenVMS Software Installation and Upgrade Release Notes

This chapter contains prerequisites for installing and upgrading to OpenVMS Version 8.4. Topics of interest to both Alpha and Integrity server users are covered first. Later sections group notes of interest to users of specific platforms.

HP recommends that you read the following manuals before installing or upgrading OpenVMS Version 8.4:

  • HP OpenVMS Version 8.4 Release Notes (this manual)
  • HP OpenVMS Version 8.4 New Features and Documentation Overview
  • HP OpenVMS Version 8.4 Upgrade and Installation Manual

For more information about the associated products, see Chapter 2. For more information about the hardware release notes, see Chapter 6.

1.1 HP Software Technical Support Policy

HP provides software technical support for the OpenVMS operating system software for the latest version and the immediate prior version of the product. Each version is supported for 24 months from its date of release, or until the release of the second subsequent version, whichever is greater. "Version" is defined as a release containing new features and enhancements. Patch kits and maintenance-only releases do not meet the definition of "version" in the context of this support policy.

Current version-level support (Standard Support or SS) and Prior Version Support (PVS) for the OpenVMS operating system software is provided for OpenVMS versions in accordance with these guidelines. The current level of support for recent versions of OpenVMS Integrity servers, OpenVMS Alpha, and OpenVMS VAX is updated at:


The Operating System Support Policy applies to all OpenVMS major releases, new feature releases, and enhancement releases, which are defined as follows:

  • Major Releases for OpenVMS contain substantial new functionality. The version number increases to the next integer (for example, from 7.3-2 to 8.2).
    Application impact: The OpenVMS internal interfaces have changed. Although binary compatibility will be maintained for a majority of applications, independent software vendors (ISVs) must test the new version and may need to release a new application kit. Some application partners may want to release a new application kit to take advantage of the new features in the operating system.
  • New Feature Releases for OpenVMS contain new features, enhancements, and maintenance updates. The version number increases to the next decimal number (for example, from 8.2 to 8.3).
    Application impact: The release has not retired any published application programming interfaces (APIs). However, OpenVMS internal interfaces may have been modified with the addition of significant new functionality or implementation of performance improvements. It is unlikely that a new application kit will be required for 95 percent of all applications that use documented APIs. Device driver and kernel-level applications (that is, those that use nonstandard or undocumented APIs) may need qualification testing.
  • Enhancement Releases for OpenVMS contain enhancements to the existing features and maintenance updates. The version number increases to show a revision by using a dash (for example, OpenVMS Version 8.2-1).
    Application impact: The release may contain new hardware support, software enhancements, and maintenance, but the changes are isolated and have no impact on applications that use published APIs. Independent software vendors (ISVs) do not need to test the new release or produce a new application kit.
  • Hardware Releases provide current version-level support until 12 months after a subsequent release containing that particular hardware support. Hardware releases are shipped with new hardware sales only and are not distributed to existing service customers.

The following OpenVMS core products are supported at the same level (Standard Support or Prior Version Support) and duration as the operating system version on which they ship:

  • HP DECnet (Phase IV)
  • HP DECnet-Plus for OpenVMS
  • HP OpenVMS Cluster Client Software
  • HP OpenVMS Cluster Software for OpenVMS
  • HP RMS Journaling for OpenVMS
  • HP TCP/IP Services for OpenVMS
  • HP Volume Shadowing for OpenVMS
  • HP DECram for OpenVMS

These products require individual support contracts and are not included in the operating system support contract.

1.2 General Application Compatibility Statement

OpenVMS has consistently held the policy that published APIs are supported on all subsequent releases. It is unlikely that applications that use published APIs will need to be updated to support a new release of OpenVMS. APIs may be "retired" and thus removed from the documentation; however, a retired API continues to be available on OpenVMS as an undocumented interface.

1.3 Obtaining Remedial Kits

Remedial kits for HP products are available online at the HP IT Resource Center (ITRC). To access the ITRC patch download site, you need to register and log in. Registration is open to all users and no service contract is required. You can register and log in from the following URL:


You can also use FTP to access patches from the following location:


1.4 Intel Itanium 9300 Based Servers Pre-enablement Information


OpenVMS Version 8.4 contains functionality that allows HP to support new Integrity servers in the future via an update kit.

1.5 HP DECprint Supervisor Installation Restriction


If you attempt to install the HP DECprint Supervisor (DCPS) kit with the current directory set to any location on the CD or DVD media, the installation fails with the following error message:

%DCL-W-INVRANGE, field specification is out of bounds - check sign and size 
%SYSTEM-F-IVLOGNAM, invalid logical name 

The workaround for this problem is to set the working directory to a system disk and then install the kit.

$ PROD INST /SOURCE=VM141$DKA1:[000000.DCPS_I64027.KIT] * 

This will be fixed in a future release of HP DECprint Supervisor (DCPS).
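For example, the complete workaround might look like the following sketch. The device and directory names are placeholders; substitute the locations appropriate to your system disk and media:

$ SET DEFAULT SYS$SYSDEVICE:[000000]     ! move the working directory off the CD/DVD media 
$ PROD INST DCPS /SOURCE=DQA0:[KIT_DIRECTORY] 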

1.6 Networking Options


OpenVMS provides customers with the flexibility to choose the network protocol of their choice. Whether you require DECnet or TCP/IP, OpenVMS allows you to choose the protocol or combination of protocols that works best for your network. OpenVMS can operate with both HP and third-party networking products.

During the main installation procedure for OpenVMS Version 8.4, you have the option of installing the following supported HP networking software:

  • HP TCP/IP Services for OpenVMS Version 5.7
  • Either HP DECnet-Plus Version 8.4 for OpenVMS or HP DECnet Phase IV Version 8.4 for OpenVMS. (Note that the two DECnet products cannot run concurrently on the same system.)
    DECnet-Plus contains all the functionality of the DECnet Phase IV product, as well as the ability to run DECnet over TCP/IP or OSI protocols.

For information about configuring and managing your HP networking software after installation, see the TCP/IP, DECnet-Plus, or DECnet documentation. The manuals are available online on the OpenVMS Documentation website. You must use HP TCP/IP Services for OpenVMS Version 5.7 after upgrading to OpenVMS Version 8.4.

1.7 Disk Incompatibility with Older Versions of OpenVMS


The OpenVMS Version 8.4 installation procedure initializes the target disk with volume expansion (INITIALIZE/LIMIT). This renders the disk incompatible with versions of OpenVMS prior to Version 7.2. In most cases, this does not present a problem. If you intend to mount the new disk on a version of OpenVMS prior to Version 7.2, ensure that the disk is compatible with that operating system version. For more information, see the HP OpenVMS Version 8.3 Upgrade and Installation Manual.

Note that preparing a disk for compatibility with older versions in this way might leave the disk with a relatively large minimum allocation size (as defined by /CLUSTER_SIZE). As a result, small files use more space than required. Therefore, take these steps only for system disks that must be mounted on versions of OpenVMS prior to Version 7.2.


ODS-5 disks are also incompatible with versions of OpenVMS earlier than Version 7.2.

1.8 HP DECwindows Motif for OpenVMS


The following table lists the versions of DECwindows Motif supported on each platform of the OpenVMS Version 8.4 operating system.

Table 1-1 Supported Versions of DECwindows Motif
OpenVMS Version                          DECwindows Motif Version 
OpenVMS Integrity servers Version 8.4    DECwindows Motif for OpenVMS Integrity servers Version 1.7 
OpenVMS Alpha Version 8.4                DECwindows Motif for OpenVMS Alpha Version 1.7 

The DECwindows Motif software relies on specific versions of the OpenVMS server and device driver images. Ensure that you install or upgrade to the version of DECwindows Motif that is appropriate for your operating system environment, as listed in Table 1-1.

For information on support for earlier versions of DECwindows Motif, see the HP DECwindows Motif for OpenVMS Release Notes.

For information about installing the DECwindows Motif software, see the HP DECwindows Motif for OpenVMS Installation Guide.

1.9 Upgrade Paths


You can upgrade directly to OpenVMS Version 8.4 from the following versions of OpenVMS.

For Alpha:

Version 7.3-2 to V8.4
Version 8.2 to V8.4
Version 8.3 to V8.4

For Integrity servers:

Version 8.2-1 to V8.3 (and then to V8.4)
Version 8.3 to V8.4
Version 8.3-1H1 to V8.4

If you are currently running OpenVMS Alpha Version 6.2x through 7.3, inclusive, you must first upgrade to Version 7.3-2, and then to Version 8.4. For more information about the OpenVMS operating system support, see the support chart on the following website:


If you are running other versions of OpenVMS that are no longer supported, you must do multiple upgrades in accordance with the upgrade paths that were documented for earlier versions.

Cluster Concurrent Upgrades

During a concurrent upgrade, you must shut down the entire cluster and upgrade each system disk. The cluster cannot be used until you upgrade and reboot each computer. Once you reboot, each computer will run the upgraded version of the operating system.

Cluster Rolling Upgrades

During a cluster rolling upgrade, upgrade each system disk individually, allowing the old and new versions of the operating system to run together in the same cluster (a mixed-version cluster). There must be more than one system disk. The systems that are not being upgraded remain available.

Only the following OpenVMS Alpha and OpenVMS VAX versions are supported in mixed-version clusters that include OpenVMS Alpha Version 8.4:

Version 7.3-2 (Alpha)
Version 7.3 (VAX)

If you are upgrading in a cluster environment, rolling upgrades are supported from Version 7.3-2 of the OpenVMS Alpha operating system. If you have other versions in a cluster, you cannot perform a rolling upgrade until those versions are upgraded to a supported version.

To use mixed-version support for these versions, you must install one or more remedial kits. For more information, see Section 4.37.1.


HP currently supports only two versions of OpenVMS (regardless of architecture) running in a cluster at the same time. Only two architectures are supported in the same OpenVMS Cluster. Warranted support is available for pairings with the OpenVMS Integrity servers Version 8.4. For more information, see the HP OpenVMS Version 8.4 Upgrade and Installation Manual.

For information about the warranted pairs and migration pairs of the OpenVMS operating systems, for complete instructions to install or upgrade to OpenVMS Alpha Version 8.4, and for instructions on installing OpenVMS Integrity servers Version 8.4, see the HP OpenVMS Version 8.4 Upgrade and Installation Manual.

1.10 OpenVMS Integrity server Users

The following notes are for users of the OpenVMS Integrity servers.

1.10.1 Storage Controllers


For HP Integrity servers such as rx7620, rx8620, and HP Integrity Superdome servers, the HP sx1000 chipset provides the CPU, memory, and I/O subsystem. The cell controller is combined with four CPU chips into the computing cell in the sx1000 chipset architecture. The cell controller chip also provides paths to the I/O devices and off-cell memory.

These servers provide a varying number of sx1000 chipset cells. The rx7620 provides up to 2 cells (8 CPUs), the rx8620 provides up to 4 cells (16 CPUs), and the Superdome provides up to 16 cells (64 CPUs).

OpenVMS Integrity servers Version 8.3 supports two primary storage interconnects:

  • The SCSI storage type is U320, used for core I/O for certain Integrity server systems, as well as for external direct attached storage using the A7173A U320 SCSI adapter. For connection to the external SCSI storage, the supported storage shelves are the DS2100 or the MSA30.
  • The external Fibre Channel storage connection is through the dual-port 2 GB Fibre Channel Universal PCI-X adapter (A6826A). This adapter allows connectivity to any external SAN-based Fibre Channel storage infrastructure supported by OpenVMS.

Support for SAS-based storage is provided by the OpenVMS Integrity servers Version 8.3-1H1 and later.


If you are using earlier versions of OpenVMS, the following are some important considerations:

  • The U160 adapter (A6829A) is not officially supported on OpenVMS Integrity servers Version 8.3 and later; it reached end-of-life in 2005. However, you can continue to use this adapter in existing hardware configurations as long as the system remains as it is currently configured. Adding adapters, or moving to another server environment, requires you to move to the U320 SCSI adapter technology.
  • In the case of Fibre Channel, the AB232A and KGPSA-EA FC adapters are no longer supported on OpenVMS Integrity servers Version 8.3 and later. You must upgrade to the A6826A FC adapter before running production applications on Version 8.2.

1.10.2 U160 SCSI Support for rx7620 and rx8620


HP Integrity servers such as the rx7620 and rx8620 systems have internal U160 SCSI controllers included in the system as core I/O. The internal connections from these SCSI controllers to the racks of internal SCSI disks (which appear on the front of the system box) are supported by OpenVMS. The internal box also has two external ports; HP does not support attaching them (using cables) to an external SCSI rack.

1.10.3 Clearing the System Event Log on Integrity servers


HP Integrity servers maintain a System Event Log (SEL) within the system console storage. The OpenVMS Integrity servers automatically transfer the contents of the SEL to the OpenVMS error log. If you are operating from the console during a successful boot operation, you might see a message indicating that the Baseboard Management Controller (BMC) SEL is full. You can continue the operation by following the prompts; OpenVMS processes the contents of the SEL. To clear the SEL manually, enter the following command at the EFI Shell prompt:

Shell> clearlogs SEL 

The clearlogs SEL command deletes the contents of the SEL. The command is available with the current system firmware versions.

If your Integrity server is configured with a Management Processor (MP) and you see a BMC event log warning while connected to the MP console, you can clear the BMC event log by using MP. Press Ctrl/B to drop to the MP> prompt. At the MP> prompt, enter SL (from the main menu) and use the C option to clear the log.
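The sequence at the MP console might look like the following sketch (the exact menu text varies with the MP firmware version):

Ctrl/B                  ! drop from the console session to the MP> prompt 
MP> SL                  ! select the event log menu from the main menu 
                        ! then use the C option to clear the BMC event log 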

HP recommends that you load and use the most current system firmware. For more information about updating the system firmware, see the HP OpenVMS Version 8.3 Upgrade and Installation Manual.

1.10.4 Firmware for Integrity Servers


OpenVMS Integrity servers Version 8.4 was tested with the latest firmware for each of the supported Integrity servers.

For the entry-class Integrity servers, HP recommends that you use the most current system firmware. For information about updating the system firmware for entry-class Integrity servers, see the HP OpenVMS Version 8.4 Upgrade and Installation Manual. (For rx7620, rx8620, and Superdome servers, call HP Customer Support to update your firmware.)

Table 1-2 lists the recommended firmware versions for entry-class Integrity servers:

Table 1-2 Firmware Versions for Entry-Class Integrity Servers
System     System Firmware   BMC Firmware   MP Firmware   DHPC Firmware
rx1600     4.27              4.01           E.03.30       N/A
rx1620     4.27              4.01           E.03.30       N/A
rx2600     2.31              1.53           E.03.32       N/A
rx2620     4.29              4.04           E.03.32       N/A
rx4640     4.29              4.06           E.03.32       1.10
rx2660*    4.11              5.24           F.02.23       N/A
rx3600*    4.11              5.25           F.02.23       1.23
rx6600*    4.11              5.25           F.02.23       1.23

*If you have Intel Itanium 9100 processors on your rx2660, rx3600, or rx6600, you need firmware that is at least one version greater than the versions listed in Table 1-2.

For cell-based servers, you must access the MP Command Menu and issue the sysrev command to list the MP firmware revision level. The sysrev command is available on all HP Integrity servers that have an MP. Note that the EFI info fw command does not display the Management Processor (MP) firmware version on cell-based Integrity servers.

To check the firmware version information on an entry-class Integrity server that does not have the MP, enter the info fw command at the EFI prompt:

Shell> INFO FW 
   Firmware Revision: 2.13 [4412]          (1)
   PAL_A Revision: 7.31/5.37 
   PAL_B Revision: 5.65 
   HI Revision: 1.02 
   SAL Spec Revision: 3.01 
   SAL_A Revision: 2.00 
   SAL_B Revision: 2.13 
   EFI Spec Revision: 1.10 
   EFI Intel Drop Revision: 14.61 
   EFI Build Revision: 2.10 
   POSSE Revision: 0.10 
   ACPI Revision: 7.00 
   BMC Revision: 2.35                       (2)
   IPMI Revision: 1.00 
   SMBIOS Revision: 2.3.2a 
   Management Processor Revision: E.02.29   (3)
  1. The system firmware revision is 2.13.
  2. The BMC firmware revision is 2.35.
  3. The MP firmware revision is E.02.29.

The HP Integrity rx4640 server contains the Dual Hot Plug Controller (DHPC) hardware with upgradeable firmware. To check the current version of your DHPC firmware, use the EFI command INFO CHIPREV, as shown in the following example. The hot-plug controller version is displayed: a value of 0100 indicates version 1.0; a value of 0110 indicates version 1.1.

   Chip                  Logical     Device       Chip 
   Type                     ID         ID       Revision 
   -------------------   -------     ------     -------- 
   Memory Controller         0       122b         0023 
   Root Bridge               0       1229         0023 
     Host Bridge          0000       122e         0032 
     Host Bridge          0001       122e         0032 
     Host Bridge          0002       122e         0032 
     Host Bridge          0004       122e         0032 
      HotPlug Controller     0          0         0110  
     Host Bridge          0005       122e         0032 
      HotPlug Controller     0          0         0110 
     Host Bridge          0006       122e         0032 
     Other Bridge            0          0         0002 
       Other Bridge          0          0         0008 
         Baseboard MC        0          0         0235 

For information on accessing and using EFI, see the HP OpenVMS Version 8.4 Upgrade and Installation Manual. For more information, see the hardware documentation that is provided with your server.

1.10.5 Booting from the Installation DVD


On Integrity servers that have the minimum supported memory (512 MB), the following messages appear when booting from the installation DVD:

********* XFC-W-MemmgtInit Misconfigure Detected ******** 
XFC-E-MemMisconfigure MPW_HILIM + FREEGOAL > Physical Memory and no reserved memory for XFC 
XFC-I-RECONFIG Setting MPW$GL_HILIM to no more than 25% of physical memory XFC-I-RECONFIG 
Setting FREEGOAL to no more than 10% of physical memory 
********* XFC-W-MemMisconfigure AUTOGEN should be run to correct configuration ******** 
********* XFC-I-MemmgtInit Bootstrap continuing ******** 

These messages indicate that system cache (XFC) initialization has successfully adjusted the SYSGEN parameters MPW_HILIM and FREEGOAL to allow caching to be effective during the installation. You can continue with the installation.

1.10.6 Booting from USB or vMedia Devices


The %SYSTEM-I-MOUNTVER messages and the Universal Serial Bus Configuration Manager message are new in OpenVMS Version 8.4 and appear when you boot an Integrity server from a USB or vMedia device.

These messages are informational and can be ignored.

1.10.7 Small Memory Configurations Error Message


When OpenVMS boots on an Integrity server over the network, or boots as a guest operating system on HP Integrity VM, the loader allocates a memory disk from main memory.

On OpenVMS Version 8.4, the size of this memory disk defaults to 256 MB. However, for some of the older systems with relatively small memory configurations, this size cannot be allocated, and the following error message is displayed:

ERROR: Unable to allocate aligned memory 
%VMS_LOADER-I-Cannot allocate 256Meg for memory disk. Falling back to 64Meg. 
%VMS_LOADER-I-Memorydisk allocated at:0x0000000010000000 

After this message is displayed, OpenVMS adopts a fallback strategy, allocating only 64 MB and excluding some newer drivers from the initial boot. The fallback message indicates that this action was performed. If the fallback message is printed with no further error messages, the initial error message can be ignored.

1.10.8 HP DECwindows Motif Release Notes

The following DECwindows Motif release notes are for users of the OpenVMS Integrity servers.

Keyboard Support


The only keyboard model supported on HP DECwindows Motif for OpenVMS Integrity servers is the LK463 (AB552A for Integrity servers) keyboard. Although other types of keyboards may function in the OpenVMS Integrity servers environment, HP does not currently support them.

1.11 OpenVMS Alpha Users

The following notes are for users of the OpenVMS Alpha systems.

1.11.1 Firmware for OpenVMS Alpha Version 8.4


OpenVMS Alpha Version 8.4 was tested with the platform-specific firmware included on Alpha Systems Firmware CD Version 7.3. For older platforms no longer included on the Firmware CD, OpenVMS Alpha Version 8.4 was tested with the latest released firmware version. HP recommends that you upgrade to the latest firmware before upgrading OpenVMS.

Read the firmware release notes before installing the firmware. For Version 7.3 and the latest firmware information, see:


1.12 Kerberos for OpenVMS


Before configuring or starting Kerberos, check the HP TCP/IP local host database to determine whether your hostname definition is the short name (for example, node1) or the fully qualified domain name (FQDN) (for example, node1.hp.com).

If your host name definition is the short name, run TCPIP$CONFIG to change the definition to the fully qualified name.

The following example shows that the hostname is the short name:

     LOCAL database 
Host address    Host name  node1 

The following log is an example of how to change the host name definition to the FQDN.

                TCP/IP Network Configuration Procedure 
        This procedure helps you define the parameters required 
        to run HP TCP/IP Services for OpenVMS on this system. 
        Checking TCP/IP Services for OpenVMS configuration database files. 
        HP TCP/IP Services for OpenVMS Configuration Menu 
        Configuration options: 
                 1  -  Core environment 
                 2  -  Client components 
                 3  -  Server components 
                 4  -  Optional components 
                 5  -  Shutdown HP TCP/IP Services for OpenVMS 
                 6  -  Startup HP TCP/IP Services for OpenVMS 
                 7  -  Run tests 
                 A  -  Configure options 1 - 4 
                [E] -  Exit configuration procedure 
Enter configuration option: 1 
        HP TCP/IP Services for OpenVMS Core Environment Configuration Menu 
        Configuration options: 
                 1  -  Domain 
                 2  -  Interfaces 
                 3  -  Routing 
                 4  -  BIND Resolver 
                 5  -  Time Zone 
                 A  -  Configure options 1 - 5 
                [E] -  Exit menu 
Enter configuration option: 2 
      HP TCP/IP Services for OpenVMS Interface & Address Configuration Menu 
 Hostname Details: Configured=node1, Active=node1 
 Configuration options: 
   1  -  WE0 Menu (EWA0: TwistedPair 1000mbps) 
   2  -    node1                Configured,Active 
   3  -  IE0 Menu (EIA0: TwistedPair 100mbps) 
   I  -  Information about your configuration 
  [E] -  Exit menu 
Enter configuration option: 2 
      HP TCP/IP Services for OpenVMS Address Configuration Menu 
      WE0 node1 Configured,Active WE0 
 Configuration options: 
         1  - Change address 
         2  - Set "node1" as the default hostname 
         3  - Delete from configuration database 
         4  - Remove from live system 
         5  - Add standby aliases to configuration database (for failSAFE IP) 
        [E] - Exit menu 
Enter configuration option: 1 
    IPv4 Address may be entered with CIDR bits suffix. 
    E.g. For a 16-bit netmask enter 
Enter IPv4 Address []: 
Enter hostname [node1]: node1.hp.com 
Requested configuration: 
      Address  : 
      Netmask  : (CIDR bits: 21) 
      Hostname : node1.hp.com 
* Is this correct [YES]: 
  "node1" is currently associated with address "". 
  Continuing will associate "node1.hp.com" with "". 
* Continue [NO]: YES 
Deleted host node1 from host database 
Added hostname node1.hp.com ( to host database 
* Update the address in the configuration database [NO]: YES 
Updated address WE0: in configuration database 
* Update the active address [NO]: YES 
WE0: delete active inet address node1.hp.com 
Updated active address to be WE0: 

To exit the TCP/IP Services configuration menus and to return to the DCL ($) prompt, type E three times.

To verify the change, enter the following command:

     LOCAL database 
Host address    Host name  node1.hp.com 

If you have not configured an earlier version of Kerberos on your system, or if you changed your TCP/IP hostname definition to the FQDN as shown in the example, run the Kerberos configuration program before you start Kerberos.

To reconfigure Kerberos, enter the following command:


After you have a valid configuration, start Kerberos with the following command:


For more information, see the Kerberos for OpenVMS Installation Guide and Release Notes.

1.13 Modifying SYSTARTUP_VMS.COM


The startup command procedures for Encrypt and SSL are now called from the VMS$LPBEGIN-050_STARTUP.COM procedure. If you are upgrading from a previous version of OpenVMS that had the Encrypt and SSL products installed, edit SYS$MANAGER:SYSTARTUP_VMS.COM to remove the calls to SYS$STARTUP:ENCRYPT_START.COM and SYS$STARTUP:SSL$STARTUP.COM. This prevents these command procedures from executing twice.
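For example, the edited portion of SYS$MANAGER:SYSTARTUP_VMS.COM might look like the following sketch, with the now-redundant calls commented out rather than deleted:

$! Encrypt and SSL now start automatically; these calls are no longer needed: 
$! @SYS$STARTUP:ENCRYPT_START.COM 
$! @SYS$STARTUP:SSL$STARTUP.COM 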

1.14 Encryption for OpenVMS


When you install or upgrade OpenVMS, Encryption for OpenVMS creates its own ENCRYPT and DECRYPT commands. Encryption for OpenVMS starts automatically (after SSL for OpenVMS, which also starts automatically). For more information about Encryption for OpenVMS, see HP OpenVMS Version 8.3 New Features and Documentation Overview.


With Version 8.3 of OpenVMS, the DCL command DECRAM is removed because it conflicts with the new DECRYPT command (DECRYPT overwrites the default definition of DECRAM, which you might have been using to start DECram). You must update any command procedures that use the DECRAM command so that they use the foreign command style of DCL. For example:


This change affects the use of the DCL command only; all other aspects of the DECram product remain the same. If you have older versions of DECram on your OpenVMS Alpha system, remove them before upgrading. See Section 1.15.
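A DCL foreign-command definition has the following general form. The file specification shown is a placeholder, not the actual DECram image name; see the DECram documentation for the correct image file specification:

$ DECRAM :== $device:[directory]image-name.EXE 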

1.15 Upgrading HP DECram V3.n


Starting with OpenVMS Alpha and OpenVMS Integrity servers Version 8.2, DECram ships with the OpenVMS operating system as a System Integrated Product (SIP). If you upgrade an OpenVMS Alpha system from OpenVMS Version 7.3-2, you must remove any old versions of DECram before the upgrade. For details, see the HP OpenVMS Version 8.4 Upgrade and Installation Manual.

More DECram release notes are included in Section 2.14.

1.16 Converting the LANCP Device Database


When you upgrade to OpenVMS Alpha Version 8.3 from OpenVMS Version 7.3-2, you might need to convert the LAN device database to the Version 8.3 format if LANACP does not do this automatically when it first runs after the upgrade.

To convert the database, issue the following LANCP commands to convert the device database and to stop LANACP so that it can be restarted to use the new database:


1.17 DECnet-Plus Requires a New Version


When you install or upgrade to OpenVMS Alpha Version 7.3-2 or later, you must also install a new version of DECnet-Plus. One reason this is necessary is a change in AUTOGEN behavior that was introduced in Version 7.3-2.

Unlike previous versions, DECnet-Plus for OpenVMS Version 7.3-2 and later provides product information in the NEWPARAMS.DAT records, as required by AUTOGEN. AUTOGEN anticipates this change in DECnet-Plus, so it does not print any warnings when it removes "bad" records from CLU$PARAMS.DAT; AUTOGEN presumes these records were made by an older DECnet-Plus kit and will be replaced by the new DECnet-Plus kit. So, under normal conditions, there will not be any striking differences in behavior during an OpenVMS Version 7.3-2 or later installation or upgrade.

However, if other products do not provide product information in the NEWPARAMS.DAT records, as now required by AUTOGEN, AUTOGEN prints warning messages to both the report and the user's SYS$OUTPUT device. The warnings state that AUTOGEN cannot accept the parameter assignment found in NEWPARAMS.DAT (because no product name is attached) and that no records will be added to CLU$PARAMS.DAT. Because no records are added, the expected additions or other alterations to the SYSGEN parameters will not be made, which could lead to resource exhaustion. Developers and testers of software products should be aware of this requirement; it may also be of interest to system managers.

This new behavior is intended to protect both the users and providers of layered products.

A description of NEWPARAMS.DAT and CLU$PARAMS.DAT is included in the AUTOGEN chapter of the HP OpenVMS System Management Utilities Reference Manual.

1.18 Remove TIE Kit Before Upgrade


The Translated Image Environment (TIE) is integrated into OpenVMS Integrity servers Version 8.2-1. For more information, see the HP OpenVMS Systems Migration Software website:


If you have installed any version of the TIE PCSI kit (HP-I64VMS-TIE) on OpenVMS Integrity servers Version 8.2 or Version 8.2-1, you must manually remove the TIE kit before you upgrade to OpenVMS Integrity servers Version 8.3.

Use the following command to remove the TIE product kit:


Do not install the TIE product kit, HP I64VMS TIE V1.0, on OpenVMS Integrity servers Version 8.2-1 or later.

1.19 Installation Failure of Layered Products on Alternate Devices or Directories


By default, the PRODUCT INSTALL command installs a layered product on the system device in the SYS$COMMON directory tree. If you choose to install a layered product on an alternate device or directory using the /DESTINATION=dev:[dir] qualifier (or by defining the logical name PCSI$DESTINATION), the installation might fail with an error message stating that one of the following files cannot be found: [SYSLIB]DCLTABLES.EXE, [SYSHLP]HELPLIB.HLB, or [SYSLIB]STARLET*.*. If this happens, answer YES to the question "Do you want to terminate? [YES]", and then retry the installation using the /NORECOVERY_MODE qualifier.
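For example, the retry might look like the following sketch. The product name and destination are placeholders; the /DESTINATION and /NORECOVERY_MODE qualifiers are as described above:

$ PRODUCT INSTALL product-name /DESTINATION=DKA100:[ALTERNATE] /NORECOVERY_MODE 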
