HP OpenVMS Version 8.4 Release Notes
6.11 DECwindows X11 Display Server (Alpha Only)
This section contains release notes pertaining to the DECwindows X11
display server for OpenVMS Alpha systems.
Permanent Restriction
Alpha computers equipped with S3 Trio32 or Trio64 graphics cards
support single-screen display only. Multihead graphics are not
supported.
6.12 DIGITAL Modular Computing Components (DMCC) (Alpha Only)
This section contains release notes pertaining to the DIGITAL Modular Computing Components (DMCC).
Permanent Restriction
The KZPDA SCSI Controller and the PBXGA Graphics Card cannot be placed
in a slot behind the bridge on the DIGITAL Modular Computing Components
(DMCC) Alpha 5/366 and 5/433 PICMG SBCs.
To update the SRM console on the Alpha 4/233 (21064a), 4/266 (21164a), 5/366, and 5/433 DMCC systems, you must choose either the SRM console or the AlphaBIOS setup. You can store only one console.
If you update both the SRM and the AlphaBIOS consoles, you will enter the AlphaBIOS Setup menu and will not have the option to return to the SRM console. The only way to exit the AlphaBIOS Setup menu and return to the SRM console is to use the Firmware Update Utility located at the following website:
6.13 Digital Personal Workstation: Booting OpenVMS V7.3-1 and Higher
V7.3-1
If you are using the Digital Personal Workstation 433au, 500au, or 600au series systems, you can boot OpenVMS Version 7.3-1 or higher from an IDE CD if the controller chip is a Cypress PCI Peripheral Controller. You cannot boot OpenVMS on a Digital Personal Workstation au series system from an IDE CD if an Intel Saturn I/O (SIO) 82378 chip is in your configuration; in that case, you must use a SCSI CD. To determine which IDE chip is in your configuration, enter the following SRM console command:
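One way to display the configuration from the SRM console is shown below (SHOW CONFIG is given here as an illustration only, not necessarily the exact command the original note listed; its device listing should include the IDE controller chip):

    >>> SHOW CONFIG

Examine the PCI device listing in the output for the IDE controller entry.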
If you see Cypress PCI Peripheral Controller, you can boot OpenVMS.
If you see Intel SIO 82378, you must boot from a SCSI CD.
V7.3-2
A combination of improvements in driver performance and faster systems has uncovered a limit to the amount of I/O that a dual-controller HSGnn storage array configured with a relatively large number of LUNs can handle. When this limit is reached, the array can become so busy processing I/O that it is unable to complete the normal periodic synchronization between controllers, causing a controller hang or failure and a loss of host access to some or all LUNs until a manual controller reset is performed. In the case of such a controller failure, the Last Failure Codes most likely to be reported on the HSG console are 01960186, 01942088, and 018600A0.
Most HSGnn devices will continue to run with no problems. If your site experiences an HSG controller hang or failure when a heavy load is applied and the HSG has more than approximately 24 LUNs configured, you may be able to avoid future hangs or failures by reconfiguring the controller with fewer LUNs or by distributing I/O so that the HSG is not so heavily loaded.
This issue is being investigated by the appropriate HP engineering
groups.
V8.2
The 3D graphics display capability has traditionally been licensed separately from the OpenVMS operating system. Since its initial offering, the Open3D layered product has required a separately orderable license, and when Open3D software began shipping as part of the OpenVMS operating system, the 3D graphics display feature remained a separately licensed capability. For example, an Open3D license was required to support 3D graphics display with the 3X-PBXGG-AA ATI RADEON 7500 PCI 2D/3D graphics adapter.
Starting with OpenVMS Version 8.2, the 3D graphics display feature is licensed with the operating system for both AlphaServers and Integrity servers. Therefore, the Open3D license is not available for Version 8.2 of OpenVMS. Previous versions of OpenVMS still require the Open3D license to be installed on the system for 3D display operation.
HP will continue to support the 3D device drivers shipped with OpenVMS Version 7.3-2 under standard contract or Mature Product Support, depending on your support agreement. Device drivers for the following adapters have shipped with Version 7.3-2 of OpenVMS:
These adapters will continue to run 3D graphics display under OpenVMS Version 8.2 but will no longer require a license. In addition, the following 2D graphics adapters continue to be supported with OpenVMS Version 8.2:
The ATI RADEON 7500 PCI graphics adapter will be supported on OpenVMS Integrity servers Version 8.2 in the near future. Testing is currently in progress. An announcement will be posted on the following website when support for this graphics card is available:
6.16 PowerStorm 300/350 PCI Graphics Support for OpenVMS
V8.2
For release notes on the PowerStorm 300/350 PCI graphics controller support for a Compaq Workstation running OpenVMS Alpha, refer to the PowerStorm 300/350 OpenVMS Graphics Release Notes Version 2.0. The release notes and installation guides are shipped with the graphics cards.
Open3D License No Longer Checked
Starting with OpenVMS Version 8.2, the license to use 3D (OpenGL) support is included as part of the OpenVMS license. See Section 6.15 for details.
Defining the DECW$OPENGL_PROTOCOL Logical
When you run a 3D graphics application and display output to a system with a PowerStorm 350/300 graphics card, you must make sure that the DECW$OPENGL_PROTOCOL logical is defined as follows on the system on which you are running the application:
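The documented value is not reproduced here; the following shows only the DCL syntax, with a hypothetical placeholder value:

    $ DEFINE/SYSTEM DECW$OPENGL_PROTOCOL value   ! "value" is a hypothetical placeholder, not the documented setting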
Previously, the P350 would sometimes fail to reinitialize properly on session exit. This problem has been fixed by two modifications:
6.17 RFnn DSSI Disk Devices and Controller Memory Errors
V6.2
A problem exists with the microcode for earlier versions of the RF31T, RF31T+, RF35, RF35+, RF73, and RF74 DSSI disk devices. The problem can cause data loss and occurs when reading data from one of these devices if the device has had a controller memory error (also known as an error detection and correction (EDC) error). The error could have been induced by a virtual circuit closure or by faulty hardware.
HP advises customers with any of these devices to check their microcode revision levels. If the microcode revision levels are lower than the numbers shown in Table 6-2, HP recommends that you update the microcode. The microcode for all models except the RF31T, RF31T+, and RF35+ is provided on the latest OpenVMS binary distribution CD. The RF_VERS utility, a program that displays the microcode revision level of DSSI disk devices, is also provided on the CD. Instructions both for using the utility and for updating the microcode are provided in this section.
The earliest supportable revision levels of the DSSI disk microcode are listed in Table 6-2.
To display the microcode revision level of your DSSI disk devices, perform the following steps:
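As a hedged sketch of the basic invocation (assuming RF_VERS.EXE has been copied from the distribution CD into your current directory; the exact location of the image is an assumption), define the utility as a foreign command and run it:

    $ RF_VERS :== $SYS$DISK:[]RF_VERS.EXE   ! foreign-command symbol; image location is an assumption
    $ RF_VERS                               ! displays the microcode revision level of the DSSI disks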
The following is an example of the display produced by the RF_VERS utility:
To update the microcode in your device, use the appropriate command for your device and platform from Table 6-3.
6.18 RZnn Disk Drive Considerations
The following notes describe issues related to various RZ disk drives.
V7.1 During the testing of HP supported SCSI disk drives on configurations with DWZZAs and long differential SCSI buses, two drives, RZ25M and RZ26N, were found to have bus phase problems. For this reason, do not use these drives in configurations where the differential bus length connecting DWZZAs equals or exceeds 20 meters.
This recommendation applies only to the RZ25M and RZ26N drives. All
other disk drives that are listed as supported in the OpenVMS SPD can
be used in configurations to the full bus lengths of the SCSI-2
specification.
V6.2-1H3 The minimum firmware revision level recommended for RZ26N and RZ28M disks is Revision 0568.
If the latest firmware revision level is not used with these disks,
multiple problems can occur.
V6.2 If you install RZ26L or RZ28 disks on a multihost SCSI bus in an OpenVMS Cluster, the disk's minimum firmware revision is 442. The following sections describe a procedure that you can use to update the firmware on some RZ26L and RZ28 drives. This procedure can only be used with drives that are directly connected to a SCSI adapter on a host system. Drives that are attached through an intelligent controller (such as an HSZ40 or KZPSC) cannot be updated using this procedure. Refer to the intelligent controller's documentation to determine whether an alternative firmware update procedure exists.
6.18.3.1 Firmware Revision Level 442 Requirements
Only the combinations of disk drives and firmware revision levels listed in Table 6-4 can be upgraded safely to firmware revision level 442. Performing the update procedure on any other combination can permanently damage the disk.
6.18.3.2 Firmware Revision Level 442 Installation Procedure
If you determine that your disk requires revision level 442 firmware and can be upgraded safely, use the following procedure to update the firmware. (See Table 6-4 for the file name of the disk you are upgrading.)
6.19 sx1000 Integrity Superdome
V8.3
The HP Integrity Superdome cannot boot as a satellite over the Core I/O
LAN card. If you specify the LAN card as a boot option to
BOOT_OPTION.COM and then shut down the operating system, the LAN card
does not appear in EFI. The problem will be fixed in a future release
of the firmware.
V8.2 The following families of graphics controller boards are not supported on OpenVMS Version 8.2:
Starting with OpenVMS Version 8.2, only 2D support, using the base 2D capabilities shipped with OpenVMS, is provided for the following families of graphics controller boards. Do not install Open3D to obtain 2D support for the following:
6.21 Recompiling and Relinking OpenVMS Device Drivers
The following sections contain release notes pertaining to recompiling and relinking OpenVMS device drivers.
For related release notes, see Section 5.11.
V7.3-1 All OpenVMS Alpha SCSI device drivers from previous versions of OpenVMS must be recompiled and relinked to run correctly on OpenVMS Version 7.3-1 or higher. If you have an OpenVMS Alpha SCSI driver that you are upgrading from a version prior to OpenVMS Alpha 7.0, see Section 6.21.2.
Note that for OpenVMS Version 7.1, all OpenVMS VAX SCSI device drivers
required recompiling and relinking. OpenVMS VAX device drivers that
were recompiled and relinked to run on OpenVMS Version 7.1 will run
correctly on OpenVMS Version 7.3 and later.
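As a hedged sketch of recompiling and relinking an Alpha driver written in C (the driver name MYDRIVER, the qualifiers shown, and the link procedure name are illustrative; use your driver's own build procedure and options file):

    $ CC/STANDARD=RELAXED_ANSI89/INSTRUCTION_SET=NOFLOATING_POINT -
          /EXTERN_MODEL=STRICT_REFDEF MYDRIVER.C+SYS$LIBRARY:SYS$LIB_C.TLB/LIBRARY
    $ @MYDRIVER_LINK.COM   ! hypothetical name for your driver's existing link procedure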
V7.1 Device drivers that were recompiled and relinked to run on OpenVMS Alpha Version 7.0 do not require source-code changes and do not need to be recompiled and relinked to run on OpenVMS Alpha Version 7.1 and later. (Note that Alpha SCSI drivers, however, must be recompiled and relinked as described in Section 6.21.1.) Device drivers from releases prior to OpenVMS Alpha Version 7.0 that were not recompiled and relinked for OpenVMS Alpha Version 7.0 must be recompiled and relinked to run on OpenVMS Alpha Version 7.1 and later.
OpenVMS Alpha Version 7.0 included significant changes to OpenVMS Alpha
privileged interfaces and data structures. As a result of these
changes, device drivers from releases prior to OpenVMS Alpha Version
7.0 may also require source-code changes to run correctly on OpenVMS
Alpha Version 7.0 and higher. For more details about OpenVMS Alpha
Version 7.0 changes that may require source changes to customer-written
drivers, refer to the HP OpenVMS Guide to Upgrading Privileged-Code Applications.
V7.3
As of OpenVMS Version 7.3, when SYSTEM_CHECK is enabled, device driver
images with names of the form SYS$nnDRIVER_MON.EXE will be
automatically loaded by the system loader. If a corresponding _MON
version does not exist, the system will use the default image name:
SYS$nnDRIVER.EXE.
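To check whether SYSTEM_CHECK is enabled on a running system, you can display the parameter with SYSGEN; a nonzero value indicates that system checking is enabled. For example:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> SHOW SYSTEM_CHECK
    SYSGEN> EXIT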
V7.2
See Section 5.11.7 for information about how per-thread security can affect OpenVMS Alpha device drivers.
V6.2 Alpha hardware platforms that support PCI, EISA, and ISA buses deliver I/O device interrupts at different IPLs, either 20 or 21. The IPL at which device interrupts are delivered can change if you move the device from one platform to another. This is a problem if the driver declares its device IPL to be 20, and then that driver is executed on a machine that delivers I/O device interrupts at IPL 21. The solution to this problem is for the PCI, EISA, and ISA device drivers to use IPL 21. This works correctly on platforms that deliver I/O device interrupts at IPL 20 and on platforms that deliver I/O device interrupts at IPL 21.
The system routines that you can use to manage the Counted Resource Context Block (CRCTX) data structure have been improved. The following routines now set and check the status (CRCTX$V_ITEM_VALID) of the CRCTX data structure:
These routines have changed as follows:
IOC$DEALLOC_CRCTX: If you call IOC$DEALLOC_CRCTX with a valid CRCTX status (CRCTX$V_ITEM_VALID set to 1), the routine returns a bad status. If the SYSBOOT parameter SYSTEM_CHECK is set, the system will fail. This prevents users from deallocating a CRCTX while it still has valid resources that have not been deallocated.
IOC$ALLOC_CNT_RES: You must call IOC$ALLOC_CNT_RES with an invalid CRCTX status (CRCTX$V_ITEM_VALID set to 0). If you call this routine with a valid status, OpenVMS assumes that you will lose the resources mapped by this CRCTX; it does not allocate new resources and returns a bad status. If SYSTEM_CHECK is set, the system will fail. IOC$ALLOC_CNT_RES sets the valid bit before it returns.
IOC$DEALLOC_CNT_RES: IOC$DEALLOC_CNT_RES must be called with a valid CRCTX (CRCTX$V_ITEM_VALID set to 1). If you call it with an invalid CRCTX, OpenVMS assumes that the other parameters are not valid and returns a bad status. If SYSTEM_CHECK is set, the system will fail. IOC$DEALLOC_CNT_RES clears the valid bit before it returns.
IOC$LOAD_MAP: IOC$LOAD_MAP must be called with a valid CRCTX. If it is called with an invalid CRCTX (CRCTX$V_ITEM_VALID set to 0), it assumes that the other parameters are also invalid and returns a bad status. If the SYSBOOT parameter SYSTEM_CHECK is set, the system will fail.
These improvements indicate to device-support and privileged-code application developers whether they still need to deallocate scatter-gather registers, which OpenVMS treats as generic resources. If the CRCTX$V_ITEM_VALID bit is set, IOC$DEALLOC_CNT_RES still needs to be called.
V8.2-1
The following sections provide release notes for adapters supported
with OpenVMS Version 8.2-1.
OpenVMS Version 8.3 for Integrity servers requires the following minimum versions for the HP A6826A 2 GB Fibre Channel host-based adapter and its variants: EFI driver 1.47 and RISC firmware 3.03.154. For the HP AB378A and AB379A 4 GB Fibre Channel host-based adapters, the minimum versions are EFI driver 1.05 and RISC firmware 4.00.70.
To determine the latest, currently supported versions of the RISC firmware and EFI driver, see the README text file provided on the HP IPF Offline Diagnostics and Utilities CD. To locate this file, navigate to the \efi\hp\tools\io_cards\fc2 directory for the 2 GB Fibre Channel HBA or to \efi\hp\tools\io_cards\fc4 for the 4 GB HBA. To update the driver and firmware, execute fcd_update2.nsh or fcd_update4.nsh, depending on your HBA type. Instructions for obtaining the Offline Diagnostics and Utilities CD are included in the HP OpenVMS Version 8.3 Upgrade and Installation Manual.
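A minimal sketch of the update sequence from the EFI Shell, assuming the CD is mapped as fs0: (the mapping on your system may differ), shown here for the 2 GB HBA:

    Shell> fs0:
    fs0:\> cd \efi\hp\tools\io_cards\fc2
    fs0:\efi\hp\tools\io_cards\fc2> fcd_update2.nsh

For the 4 GB HBA, change to the fc4 directory and run fcd_update4.nsh instead.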
On cell-based systems and newer entry-level systems, the first Fibre Channel boot entry in the boot entry list is the only valid boot entry. To boot from a different Fibre Channel system disk, go to the EFI Shell, execute "search all", exit the EFI Shell, and then select the desired boot entry. This is also required when booting a multi-member shadowed system disk.
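For example, from the EFI Shell:

    Shell> search all
    Shell> exit

After the shell exits, select the desired Fibre Channel boot entry from the EFI Boot Manager menu.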