
HP OpenVMS Version 8.4 New Features and Documentation Overview




Chapter 7
Security Features

This chapter describes the new security features of the OpenVMS operating system.

7.1 HP SSL Version 1.4 for OpenVMS Features

Secure Sockets Layer (SSL) is the open standard security protocol for the secure transfer of sensitive information over the Internet. HP SSL Version 1.4 is based on OpenSSL 0.9.8h and also includes the latest security updates from OpenSSL.org. SSL Version 1.4 includes the following features:

  • Support for PKCS-12 files
  • Support for CMS
  • New cipher Camellia
    The new cipher includes the following features:
    • New encryption algorithm
    • Supports block size of 128 bits
    • Supports key lengths of 128, 192, and 256 bits
    • Supports RFCs 3657, 3713, 4051, and 4132
  • Added support for Korean symmetric 128-bit cipher SEED
  • Added support for Datagram Transport Layer Security (DTLS)
  • Added support for RSA Probabilistic Signature Scheme (PSS) encryption
  • Updated Elliptic Curve Cryptography (ECC) support
  • Supports Federal Information Processing Standards (FIPS) 180-2 algorithms SHA224, SHA256, SHA384, and SHA512

SSL Version 1.4 includes the following security patches:

  • CVE-2008-5077 - Incorrect checks for malformed signatures
  • CVE-2009-0590 - ASN1 printing crash
  • CVE-2009-0591 - Incorrect error checking during CMS verification
  • CVE-2009-0789 - Invalid ASN1 clearing check
  • CVE-2009-3245 - bn_wexpand function call does not check for a NULL return value

For more information about these features, see the HP SSL Version 1.4 for OpenVMS Installation Guide and Release Notes.
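
For example, assuming the OPENSSL foreign command has been defined by the HP SSL startup procedures on your system (the symbol name, the quoting of lowercase options, and the availability of individual algorithms in your kit are assumptions that may vary with your installation), some of the new cipher and digest support listed above can be exercised from DCL:


    $ openssl ciphers "-v" "CAMELLIA"        ! list the Camellia cipher suites
    $ openssl dgst "-sha256" LOGIN.COM       ! compute a FIPS 180-2 SHA-256 digest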

7.2 Global and Local Mapping of LDAP Users

The ACME LDAP agent on OpenVMS Version 8.3 and Version 8.3-1H1 supports only one-to-one mapping of users.

In one-to-one mapping, a user logging in to an OpenVMS system from an LDAP server must have a matching username in the SYSUAF.DAT file; that is, the user must log in with the exact username entry stored in the SYSUAF.DAT file. With OpenVMS Version 8.4 or later, the LDAP ACME agent supports global and local mapping.

With global and local mapping:

  • At the login prompt, the user can enter the user name that is common across the domain.
  • During login, the user name is mapped to a different name in the SYSUAF.DAT file.
  • After login, the OpenVMS session uses the name and the privileges in the SYSUAF.DAT file for all purposes.
  • The SET PASSWORD command recognizes that the user is a mapped user and synchronizes any password change to the directory server.

In global mapping, the user's login name is mapped based on attributes stored in the directory server. In local mapping, a text database file stores the LDAP user name (the name of the user in the domain) and the corresponding name in SYSUAF.DAT in .CSV format.

To activate global or local mapping, the following attributes must be added to the LDAP INI file (see SYS$HELP:LDAPACME$README-STD.TXT):

mapping

Specifies whether the mapping is global or local. This directive accepts two values:

  • server
  • local

For example:

mapping=server indicates that global mapping is enabled for the user.
mapping=local indicates that local mapping is enabled for the user.

If the mapping directive is not used, the mapping is one-to-one.

mapping_attribute

This directive is applicable only for global mapping. Set this to the attribute on the directory server that is used for user mapping.

For example, mapping_attribute can reference the description field of the user in the directory server:

mapping_attribute=description

You can also use any newly created attribute on the directory server for mapping. The attribute should be an IA5 multi-valued string.

mapping_target

This directive is applicable only for global mapping. The mapping_target is searched for in the value of the directory server's mapping_attribute field.

For example, consider an LDAP INI file with the following attributes:

mapping_attribute=description

mapping_target=VMSUsers.hp.com

Assume that the "description" attribute in the directory server is populated with: VMSUsers.hp.com/jdoe

The ACME LDAP agent then searches "VMSUsers.hp.com/jdoe" for the prefix "VMSUsers.hp.com/" (the mapping_target followed by a forward slash (/)). The remainder of the value, "jdoe", is treated as the user name present in the SYSUAF.DAT file.

If a multi-valued string attribute is used, "VMSUsers.hp.com/jdoe" must be one of the array elements of the multi-valued string.

mapping_file

This directive is applicable only for local mapping. Set this to the complete path of the text database file to be searched when mapping users.

A template file is provided in SYS$STARTUP:LDAP_LOCALUSER_DATABASE.TXT_TEMPLATE. The database file contains the LDAP username and the VMS username separated by a comma, where the LDAP username is the name of the user in the domain (entered at the username prompt during login).

For information about how to populate and load the contents of the database file, see SYS$STARTUP:LDAP_LOCALUSER_DATABASE.TXT_TEMPLATE.

Examples for global mapping

Two users, John Doe and Joe Hardy, have the following attributes specified in their user profiles in Active Directory:

DN: cn=john doe,...
samaccountname: John Doe
description: VMSUsers.hp.com/jdoe

DN: cn=jhardy,...
samaccountname: jhardy
description: VMSUsers.hp.com/jhardy

In the SYSUAF.DAT file, the usernames are "jdoe" and "jhardy".

To configure global mapping:

  1. Update the following attributes in the SYS$STARTUP:LDAPACME$CONFIG-STD.INI file, along with the other mandatory attributes:


    mapping = server 
    mapping_attribute = description 
    mapping_target = VMSusers.hp.com 
    
  2. Restart the ACME server:


    $ SET SERVER ACME/RESTART 
    
  3. Log in to the host system using the login name "John Doe" for the user John Doe. (At the user name prompt, you must enclose this name in quotes because it contains a space.)
  4. Log in to the host system using the login name jhardy for the other user.

Examples for local mapping

Two users, John Doe and Joe Hardy, have the following attributes specified in their user profiles in Active Directory:

DN: cn=john doe,...
samaccountname: John Doe

DN: cn=jhardy,...
samaccountname: jhardy

  1. Make a copy of SYS$STARTUP:LDAP_LOCALUSER_DATABASE.TXT_TEMPLATE and rename it to, for example, SYS$STARTUP:LDAP_LOCALUSER_DATABASE.TXT on the OpenVMS system.
  2. Update SYS$STARTUP:LDAP_LOCALUSER_DATABASE.TXT with the LDAP username and VMS username separated by a comma. If the LDAP username contains special characters, such as a space, comma, or exclamation mark, enclose it in quotes:
    "JOHN DOE",JDOE
    JHARDY,JHARDY
  3. Update the following attributes in the SYS$STARTUP:LDAPACME$CONFIG-STD.INI file, along with the other mandatory attributes:


    mapping = local 
    mapping_file = SYS$COMMON:[SYS$STARTUP]LDAP_LOCALUSER_DATABASE.TXT 
    
  4. Load the new database file by performing one of the following:
    1. Restart the ACME server:


      $ SET SERVER ACME/RESTART 
      

      OR
    2. Use LDAP_LOAD_LOCALUSER_DATABASE.EXE:


      $ load_localuser_db:=="$SYS$SYSTEM:LDAP_LOAD_LOCALUSER_DATABASE.EXE" 
      $ load_localuser_db SYS$COMMON:[SYS$STARTUP]LDAP_LOCALUSER_DATABASE.TXT 
      
  5. Log in to the host system using the login names "John Doe" and jhardy.

7.2.1 Restrictions

  • SSH login is not supported for mapped users.
  • While performing DECnet operations, such as DECnet COPY, you must use the user name and password that are present in the SYSUAF.DAT file.
  • The "SYSTEM" account is not mapped in the following scenarios:
    • If a user enters "SYSTEM" at the user name prompt, the user is mapped only to the "SYSTEM" account in SYSUAF.DAT.
    • If any user is mapped to SYSTEM (for example, "johnd" is mapped to the "SYSTEM" account in SYSUAF.DAT), the mapping is not performed and the user gets an Operation failure error at the login prompt.


Chapter 8
System Management Features

This chapter provides information about the new features, changes, and enhancements for system management functionality.

8.1 Provisioning Enhancements using HP SIM

Provisioning OpenVMS using HP SIM, Version 4.0 provides the following new features:

  • Deploying OpenVMS Version 8.4
  • Configuring OpenVMS TCP/IP

8.1.1 Deploying OpenVMS Version 8.4

Provisioning has been enhanced to deploy OpenVMS Version 8.4 on selected Integrity servers from HP SIM. Provisioning allows you to install OpenVMS Version 8.4, or upgrade to OpenVMS Version 8.4 from a previous version of OpenVMS by using InfoServer or vMedia. For more information about deployment, see the HP OpenVMS Upgrade and Installation Manual.

8.1.2 Configuring OpenVMS TCP/IP

Provisioning has been enhanced to configure HP TCP/IP Services for OpenVMS on selected Integrity servers from HP SIM. Provisioning allows you to configure the TCP/IP core environment, and client or server components on up to eight OpenVMS Integrity servers simultaneously. An OpenVMS server can be configured with static IP address settings or as a dynamic host configuration protocol (DHCP) client. For more information, see the HP OpenVMS Version 8.4 Upgrade and Installation Manual.

8.2 WBEM Providers for OpenVMS Version 8.4

The WBEM Providers software is supported on the following Integrity server systems with OpenVMS Version 8.4:

  • Integrity Blade servers - BL860c and BL870c
  • Non-cell-based Integrity server systems - rx1620, rx2600, rx3600, rx4640, and rx6600
  • Cell-based Integrity server systems - Superdomes, rx7620, rx8620, and rx8640

8.3 vKVM Capability for OpenVMS (Integrity servers Only)

OpenVMS Version 8.4 adds support for the Integrated Lights Out (iLO) Integrated Remote Console capability provided by Integrity servers and BladeSystems supporting iLO. The enabling functionality that provides this support is referred to as virtual Keyboard, Video, and Mouse or vKVM.

The Integrated Remote Console capability allows the display from the built-in graphics chip to be viewed in a remote web browser connected to the iLO firmware on an Integrity server system. The mouse and keyboard on the remote computer are used to simulate a USB device on the Integrity server. As a result, a remote user can interact with the Integrity server as if using a local keyboard, video, and mouse, transparently to the local system.

Both the text-based VGA console and the DECwindows display are available in the iLO Integrated Remote Console window. The iLO firmware imposes a maximum DECwindows display resolution of 1024x768 (the OpenVMS default).

The iLO Integrated Remote Console firmware transmits compressed images of screen changes over the network to a web browser, which limits performance and responsiveness. Because of this limitation, graphics-intensive use of DECwindows is not recommended.

The local keyboard, video, and mouse and the remote Integrated Remote Console capability can be used simultaneously.

The vKVM enabling software on OpenVMS adds the following capabilities beyond the iLO Integrated Remote Console:

  • DECwindows can start without an attached keyboard and mouse. The fifteen-second countdown that waits for a USB keyboard and mouse to be configured is not required.
  • Keyboards and mice can be dynamically removed and attached at any time, and these devices continue to function with DECwindows and the OpenVMS VGA console.
  • Multiple keyboards and mice can be connected and used simultaneously in a single DECwindows session or in the OpenVMS VGA console. For example, an auxiliary USB keypad can be used with a standard keyboard, or a touchpad with a mouse.

Note

If Motif is installed, DECwindows starts on systems that have a built-in graphics chip (the graphics chip is integrated with the management processor on Integrity servers) even if no keyboard, mouse, or monitor is attached. DECwindows can be disabled by editing SYS$MANAGER:SYSTARTUP_VMS.COM: uncomment the symbol definition of DECW$IGNORE_DECWINDOWS so that it is set to TRUE.
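
A minimal sketch of the corresponding change in SYS$MANAGER:SYSTARTUP_VMS.COM (the surrounding lines in your copy of the procedure may differ):


    $!  Remove the leading comment so the symbol is defined before DECwindows starts
    $   DECW$IGNORE_DECWINDOWS == "TRUE"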

Several pseudo devices, such as IMX0, IKX0, KBX0, and MOX0, are created to enable vKVM.
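
To confirm that these pseudo devices exist after boot, you can list them from DCL (a quick check only; the device names and unit numbers shown on your system may differ):


    $ SHOW DEVICE IMX
    $ SHOW DEVICE IKX
    $ SHOW DEVICE KBX
    $ SHOW DEVICE MOX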

8.4 CPU Component Indictment - Dynamic Processor Resilience

OpenVMS Version 8.4 supports CPU Component Indictment - Dynamic Processor Resilience (DPR) on Integrity servers. This feature was introduced for Alpha processors with OpenVMS Version 7.3-2. The component indictment process works in conjunction with HP Web-Based Enterprise Services (WEBES).

In addition to the existing features available on Alpha, Integrity server systems support the following features:

  • An indicted processor can be deconfigured and will not be restarted when the system is rebooted.
  • If a spare iCAP CPU is available, it is automatically started to replace the indicted CPU.

For information about support and usage of DPR, see the WEBES and iCAP documentation.

8.4.1 Enabling and Disabling the Indictment Server

SYS$MANAGER:SYS$INDICTMENT_POLICY.COM enables the system manager to turn the indictment server and the indictment mechanism on or off. The indictment mechanism is a policy that enables or disables an attempt by the operating system to STOP a CPU that has been indicted. By default, when the indictment server is started, the indictment mechanism is disabled; the system manager must manually modify SYS$INDICTMENT_POLICY.COM to enable this feature, which requires a reboot of the system.

8.4.2 Displaying the Indicted CPU Status

OpenVMS Version 8.4 has added a new CPU state string to reflect the state of the indicted CPU. Executing SHOW CPU/FULL on an indicted CPU shows the new CPU state as "DEALLOCATED".
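
For example, if CPU 2 has been indicted (the CPU number here is purely illustrative):


    $ SHOW CPU/FULL 2        ! the State field of an indicted CPU reads DEALLOCATED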

8.5 Power Management (Integrity servers Only)

OpenVMS has supported power savings on idle since Version 8.2-1 using the SYSGEN parameters CPU_POWER_MGMT and CPU_POWER_THRSH. In OpenVMS Version 8.4, power management also supports processor p-states, available on Intel Itanium processor 9100 series and later CPUs, to reduce power use while a CPU is not idle. OpenVMS also supports additional user and programming interfaces. On some platforms, OpenVMS Version 8.4 supports a power management interface from the iLO console and from the Insight Power Manager (IPM) software. A new system service, $POWER_CONTROL, is added on all Integrity server platforms. For information about the new system service, see the HP OpenVMS System Services Reference Manual.

OpenVMS Version 8.4 power management operates in an upward-compatible manner on all platforms that do not have IPM support. The performance of the idle power saving algorithm is improved. To enable idle power savings by default, the default value of the CPU_POWER_MGMT parameter is changed from 0 to 1.
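
As an illustration, the parameter can be inspected and changed with SYSGEN (a sketch only; whether the change takes effect immediately or requires a reboot depends on the parameter's attributes on your system):


    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SHOW CPU_POWER_MGMT
    SYSGEN> SET CPU_POWER_MGMT 1
    SYSGEN> WRITE CURRENT
    SYSGEN> EXIT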

For platforms that support iLO, the default firmware setting is Dynamic Power Savings, which, on OpenVMS, corresponds to the idle power saving algorithm.

In accordance with the standards used by all operating systems on Integrity servers, if you have an iLO or IPM power interface, that interface takes priority over other operating system interfaces. On OpenVMS systems, CPU_POWER_MGMT and $POWER_CONTROL are overridden by the iLO or IPM interface.

On OpenVMS guest systems, the default is low power mode, and this behavior cannot be changed.

Table 8-1 lists the power saving values that can be set using the iLO or IPM interface.

Table 8-1 iLO or IPM Power Savings
Power Mode: Static high performance
Description: The operating system makes no attempt to save power if there is any compromise in performance.
OpenVMS Implementation: No power savings method is used.

Power Mode: Static low power
Description: The operating system saves power in every way it can, even to the detriment of performance.
OpenVMS Implementation: On CPUs that support static low power, switch to the lowest p-state at all times. Idle power savings is also used on all CPUs.

Power Mode: Dynamic Power Savings
Description: The operating system attempts to use lower power modes dynamically to save power while minimizing loss of performance.
OpenVMS Implementation: Idle power savings is used.

Power Mode: OS Control
Description: The power savings mode is controlled by OS-specific mechanisms.
OpenVMS Implementation: The $POWER_CONTROL system service and the CPU_POWER_MGMT SYSGEN parameter are in effect.

8.6 New System Parameters

The following system parameters have been added in this release (a configuration sketch follows the list):

  • NISCS_UDP_PKTSZ - allows the system manager to change the packet size used for cluster communications over IP on network communication paths.
  • NISCS_USE_UDP - set this parameter to enable the Cluster over IP functionality. When it is set, PEDRIVER uses the UDP protocol in addition to IEEE 802.3 for cluster communication.
  • PAGED_LAL_SIZE - sets the maximum size, in bytes, to use for the page dynamic pool lookaside lists. Use of these lookaside lists can reduce paged dynamic pool variable freelist fragmentation and improve paged pool allocation and deallocation performance.
    By default, PAGED_LAL_SIZE is set to 0, which disables the use of the paged dynamic pool lookaside lists.
    For environments experiencing paged pool variable freelist fragmentation, a modest PAGED_LAL_SIZE, such as 512, has been adequate to improve paged pool performance and reduce fragmentation. If this parameter is set to a large value and later decreased, some paged pool packets can remain unused until the parameter is increased again or the lookaside lists are reclaimed because of a paged pool shortage. The paged dynamic pool lookaside lists never occupy more than three-quarters of the available paged pool.
  • ZERO_LIST_HI - is the maximum number of pages zeroed and provided on the zeroed page list. This list is used as a cache of pages containing all zeros, which improves the performance of allocating such pages.
    On systems with multiple RADs, this parameter is a page count per RAD. ZERO_LIST_HI has the AUTOGEN and DYNAMIC attributes.
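
As a sketch, such parameters are typically made persistent by adding them to SYS$SYSTEM:MODPARAMS.DAT (the values shown are illustrative only and should be chosen for your configuration):


    ! Illustrative additions to SYS$SYSTEM:MODPARAMS.DAT
    NISCS_USE_UDP = 1             ! enable Cluster over IP
    PAGED_LAL_SIZE = 512          ! modest lookaside-list size, per the guidance above

and then running AUTOGEN to apply them, for example:


    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK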

8.7 HP System Analysis Tools Enhancements

The following new features are provided in the System Analysis Tools utilities for OpenVMS Version 8.4.

8.7.1 Support for Partial Dump Copies

The "Partial Dump Copies" feature has been added to SDA. This feature takes advantage of the organization of a selective dump. In most cases, only a small part of the dump is needed to investigate the cause of the system crash. The system manager can save the complete dump locally, but only copy the key sections of the dump over the network to HP support. This can significantly reduce the time taken to copy the dump.

If information is needed from a section of the dump that was not copied, it can be extracted from the saved local copy and submitted separately. The ANALYZE /CRASH_DUMP command now accepts multiple input files from the same crash and treats them as a single dump.

For an explanation of key processes and key global pages, and the organization of a selective system dump, see the System Manager's Manual, Volume 2.

Example

To create an initial partial dump copy and to extract an additional section, complete the following steps:

  1. Save the complete dump:


    $ ANALYZE/CRASH SYS$SYSTEM:SYSDUMP.DMP 
     
    OpenVMS system dump analyzer 
    ...analyzing an I64 compressed selective memory dump... 
     
    Dump taken on 22-SEP-2009 18:17:17.99 using version 8.4 
    SSRVEXCEPT, Unexpected system service exception 
     
    SDA> COPY SSRVEXCEPT.DMP 
    SDA> EXIT
    
  2. Create a partial copy containing only the key sections of the dump:


    $ ANALYZE/CRASH SSRVEXCEPT 
     
    OpenVMS system dump analyzer 
    ...analyzing an I64 compressed selective memory dump... 
     
    Dump taken on 22-SEP-2009 18:17:17.99 using version 8.4 
    SSRVEXCEPT, Unexpected system service exception 
     
    SDA> COPY/PARTIAL=KEY SSRVKEY 
    SDA> EXIT
    
  3. Provide the output of this copy, containing only the key sections, to HP for analysis:


    $ ANALYZE/CRASH SSRVKEY 
     
    OpenVMS system dump analyzer 
    ...analyzing an I64 compressed selective memory dump... 
     
    Dump taken on 22-SEP-2009 18:17:17.99 using version 8.4 
    SSRVEXCEPT, Unexpected system service exception 
     
    SDA> SHOW CRASH 
    SDA> ! etc.
    
  4. HP determines that the CLUSTER_SERVER process, not included in the partial dump copy, is required. Extract the process from the saved complete copy:


    $ ANALYZE/CRASH SSRVEXCEPT 
     
    OpenVMS system dump analyzer 
    ...analyzing an I64 compressed selective memory dump... 
     
    Dump taken on 22-SEP-2009 18:17:17.99 using version 8.4 
    SSRVEXCEPT, Unexpected system service exception 
     
    SDA> COPY/PARTIAL=PROCESS=NAME=CLUSTER_SERVER SSRVCSP 
    SDA> EXIT
    
  5. Provide the output of this copy to HP for analysis:


    $ ANALYZE/CRASH SSRVKEY,SSRVCSP 
     
    OpenVMS system dump analyzer 
    ...analyzing an I64 compressed selective memory dump... 
     
    Dump taken on 22-SEP-2009 18:17:17.99 using version 8.4 
    SSRVEXCEPT, Unexpected system service exception 
     
    SDA> SHOW PROCESS CLUSTER_SERVER 
    SDA> ! etc.
    

Note: In this step, the input files cannot be specified with a wildcard such as SSRV*. If SSRV* is specified, SSRVCSP is opened before SSRVKEY.

