
HP OpenVMS Systems Documentation


Compaq ACMS for OpenVMS
Managing Applications



  1. Task
    The name of the task.
  2. State
    The processing characteristics of the task element---whether it is set to HOLD or NOHOLD.
  3. Appl
    The name of the application containing the task.
  4. Priority
    The priority of the queued task element.
  5. Username
    The enqueuer user name specified explicitly or implicitly with the ACMS$QUEUE_TASK service.
  6. Enq time
    The time the queued task element was created.
  7. Queue name
    The name of the task queue.
  8. Element ID
    The ID for the queued task element.
  9. Error cnt
    The number of times this queued task element has failed.
  10. Last error
    The last error that occurred while attempting to process the queued task element. This field is not displayed when the error count is zero.
  11. Error time
    The time that the last error occurred. This field is not displayed when the error count is zero.

A queued task element within a queue can be in one of two states: HOLD and NOHOLD. Each of these states can be further qualified, yielding a total of four states and substates:

  • NOHOLD---The element is available for processing by the QTI, or available for dequeuing by the ACMS$DEQUEUE_TASK service.
  • NOHOLD (LOCKED)---The element is locked because it is currently being processed by the QTI, or it is being dequeued by the ACMS$DEQUEUE_TASK service.
  • HOLD---The element is not available for processing by the QTI or for dequeuing by the ACMS$DEQUEUE_TASK service. If an element in this state is set to NOHOLD using the ACMSQUEMGR, then the element immediately becomes a candidate for subsequent dequeue operations (either by the QTI or by the ACMS$DEQUEUE_TASK service).
  • HOLD (RETRY PENDING)---The element has been processed by the QTI and the called task failed. The QTI retries the task call at a later time depending on the value of the ACMSGEN QTI_RETRY_TIMER parameter. At the appropriate time, the QTI sets the element to NOHOLD, allowing the element to become a candidate for subsequent dequeue operations by either the QTI or the ACMS$DEQUEUE_TASK service. If an element in this state is set to HOLD using ACMSQUEMGR, then the QTI does not automatically set the element back to NOHOLD. If an element in this state is set to NOHOLD using the ACMSQUEMGR, then the element immediately becomes a candidate for subsequent dequeue operations.

5.5 Backing Up Task Queue Files Online

ACMS task queue files can be backed up without the QTI process being stopped and without terminating programs that call the ACMS$QUEUE_TASK and ACMS$DEQUEUE_TASK services. To perform an online backup of your task queue files:

  • Use the ACMS operator command ACMS/STOP QUEUE to stop each queue that needs to be backed up.
  • Use the ACMSQUEMGR Utility to suspend dequeues and enqueues for each queue that needs to be backed up. This causes ACMS to close the queue files that are open in processes calling the ACMS$QUEUE_TASK and ACMS$DEQUEUE_TASK services. If a queue file is participating in a transaction at the time ACMSQUEMGR suspends dequeues or enqueues, the file cannot be closed immediately; the close is retried 10 seconds later. If the file is still open after 10 retries, it is left open and an error is logged to the software error log (SWL).
  • Back up the queue files that need to be backed up.
  • Use the ACMSQUEMGR Utility to resume dequeues and resume enqueues for each queue that was backed up.
  • Use the ACMS operator command ACMS/START QUEUE for each queue that was backed up.
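The steps above can be sketched as the following DCL and ACMSQUEMGR session. The queue name, file specifications, and the SUSPENDED/AVAILABLE keyword values are illustrative assumptions; see Chapter 20 for the exact SET QUEUE keywords:

    $ ACMS/STOP QUEUE AVERTZ_QUEUE
    $ RUN SYS$SYSTEM:ACMSQUEMGR
    ACMSQUEMGR> SET QUEUE AVERTZ_QUEUE /ENQUEUE=SUSPENDED /DEQUEUE=SUSPENDED
    ACMSQUEMGR> EXIT
    $ BACKUP DISK1:[QUEUES]AVERTZ_QUEUE.QUE DISK2:[SAVE]AVERTZ_QUEUE.BCK/SAVE_SET
    $ RUN SYS$SYSTEM:ACMSQUEMGR
    ACMSQUEMGR> SET QUEUE AVERTZ_QUEUE /ENQUEUE=AVAILABLE /DEQUEUE=AVAILABLE
    ACMSQUEMGR> EXIT
    $ ACMS/START QUEUE AVERTZ_QUEUE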

For programs that call the ACMS$QUEUE_TASK and ACMS$DEQUEUE_TASK services to continue running while a backup of a queue file is taking place, the programs must check for the return statuses ACMS$_QUEDEQSUS and ACMS$_QUEENQSUS. See Compaq ACMS for OpenVMS Writing Applications for details.

5.6 Summary of ACMSQUEMGR Commands and Qualifiers

ACMSQUEMGR commands allow you to create and manage ACMS queues. Table 5-1 lists the ACMSQUEMGR commands and qualifiers and provides a brief description of each command. See Chapter 20 for a complete description of the ACMSQUEMGR commands and qualifiers.

Table 5-1 Summary of ACMSQUEMGR Commands
Commands and Qualifiers Description
CREATE QUEUE
/DEQUEUE=keyword
/ENQUEUE=keyword
/FILE_SPECIFICATION=file-spec
/MAX_WORKSPACES_SIZE=n
Creates a queue for queued task elements.
DELETE ELEMENT
/[NO]CONFIRM
/EXCLUDE=keyword
/SELECT=keyword
Deletes one or more queued task elements.
DELETE QUEUE
/[NO]PURGE
Deletes a queue.
EXIT Exits the ACMSQUEMGR Utility.
HELP
/[NO]PROMPT
Provides information about ACMSQUEMGR commands.
MODIFY QUEUE
/FILE_SPECIFICATION=file-spec
/MAX_WORKSPACES_SIZE=n
Modifies the static characteristics of a queue.
SET ELEMENT
/[NO]CONFIRM
/EXCLUDE=keyword
/PRIORITY=n
/SELECT=keyword
/STATE=[NO]HOLD
Sets the state and/or priority of one or more queued task elements.
SET QUEUE
/DEQUEUE=keyword
/ENQUEUE=keyword
Dynamically sets the queue state.
SHOW ELEMENT
/BRIEF
/EXCLUDE=keyword
/FULL
/OUTPUT[=file-spec]
/SELECT=keyword
/TOTAL_ONLY
Displays one or more queued task elements in a queue.
SHOW QUEUE
/OUTPUT[=file-spec]
Displays the characteristics of a queue.
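As a brief illustration of the commands in Table 5-1, the following hypothetical session creates a queue and then displays its characteristics; the queue name and file specification are assumptions:

    $ RUN SYS$SYSTEM:ACMSQUEMGR
    ACMSQUEMGR> CREATE QUEUE AVERTZ_QUEUE /FILE_SPECIFICATION=DISK1:[QUEUES]AVERTZ_QUEUE.QUE
    ACMSQUEMGR> SHOW QUEUE AVERTZ_QUEUE
    ACMSQUEMGR> EXIT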


Chapter 6
Using Distributed Forms Processing

This chapter describes how to set up applications with distributed forms processing in a transaction processing (TP) system. (Applications with distributed forms processing are sometimes called distributed applications.)

6.1 What Is Distributed Forms Processing?

An ACMS application consists of forms processing and database processing. In a distributed ACMS TP system, one or more nodes, called the back end, handle the database processing and computation, while the forms processing is offloaded onto another node or set of nodes called the front end. The front end is sometimes referred to as the submitter node or nodes, and the back end is sometimes referred to as the application node or nodes.

This distribution of tasks over more than one node improves the speed and reliability of ACMS transactions by allowing you to configure a distributed system in which more powerful machines are dedicated to database processing while smaller machines handle the forms processing. You can configure each node in the distributed system for the processing of specific tasks.

Reliability of a system can be enhanced by installing applications on more than one node of a system and using search lists so that, if a node fails, users are switched to a second node where the same application is running.

Figure 6-1 shows an example of a typical distributed ACMS system.

Figure 6-1 Distributed Forms Processing


On the front end, or submitter node, terminal users use menus to select tasks for applications running on the application node. The applications, in turn, interact with resource managers such as Rdb, DBMS, and RMS. Resource managers are the tools that manipulate your data. A distributed system can be established in an OpenVMS Cluster, a local-area network, or a wide-area network. ACMS uses DECnet to communicate between the front end and the back end of a distributed system.

The front-end system is called the submitter node because it is the node on which tasks are selected. The back-end system is called the application node because it is the node where the application executes and where all the actual processing of an application takes place.

Because many applications can run at a single time on one distributed system, it is important that application specifications in menu definitions on the submitter node point to the correct applications on the application node. Section 6.3 describes how you define application specifications for a distributed TP system. The following section describes what you must do to enable your system for distributed forms processing.

6.2 Preparing Your System for Distributed Forms Processing

Once you have designed your distributed system to the extent of deciding which nodes are to be used as the front end and which nodes are to be used as the back end, you can configure each node in your system for distributed forms processing. This section describes actions the system manager takes to enable processing of applications with distributed forms. This includes actions that must be taken in all environments and some actions that are specific to an OpenVMS Cluster environment, submitter nodes, or application nodes.

6.2.1 Common Setup Tasks for Distributed Forms Processing

The following procedures must be performed on both submitter and application nodes to set up your system for distributed forms processing:

  • You must set the node name parameter in the ACMS parameter file before ACMS can use DECnet. The recommended way to set the node name parameter is to edit the ACMVARINI.DAT file in SYS$MANAGER to include a line that specifies the DECnet node name of the system. For example:


    .
    .
    .
    
    NODE_NAME="MYNODE"
    .
    .
    .
    
  • The node name you specify in the ACMS parameter file must be the same as your DECnet node name. By default, the NODE_NAME parameter is null, and ACMS disables distributed processing. When you assign a node name, ACMS enables distributed processing, provided that DECnet is running.
  • Invoke the ACMSPARAM.COM procedure in SYS$MANAGER to apply the change to the ACMS parameter file.

    Note

    In an OpenVMS Cluster environment, you must ensure that the ACMS parameter file, ACMSPAR.ACM, is stored in the SYS$SPECIFIC directory before invoking the ACMSPARAM.COM command procedure. Because each node has a different DECnet node name, each node must have its own node-specific ACMSPAR.ACM file. Check this on each node of the cluster.
  • Alternatively, you can run the ACMSGEN Utility directly. Run the ACMSGEN Utility as follows:


    $ RUN SYS$SYSTEM:ACMSGEN
    ACMSGEN> SET NODE_NAME DOVE
    ACMSGEN> WRITE CURRENT
    

Note that WRITE CURRENT was used in the preceding example. This ensures that a new ACMSPAR.ACM file is not created where one already exists. It also provides a check that the file exists on each node in an OpenVMS Cluster.

Finally, note that regardless of which method you use to set the node name parameter, you must stop and restart the ACMS system for the change to take effect.
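A minimal stop-and-restart sequence might look like the following; this assumes the standard ACMS startup procedure in SYS$STARTUP:

    $ ACMS/STOP SYSTEM
    $ @SYS$STARTUP:ACMSTART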

6.2.2 Actions Required on Submitter Nodes

After you have completed the common steps listed in Section 6.2.1, the following additional actions are needed to enable distributed processing on a submitter, or front-end, node:

  • Ensure that application specifications in menu definitions point to the application, or back-end, node.
    Logical names provide a way to ensure that application specifications in menu database files (.MDB files) refer to the correct applications on the application node. See Section 6.3 for information about how to use logical names for application specifications.
  • Place DECforms escape routines in a protected directory.
    A DECforms escape routine is an ACMS application program subroutine that is called during the processing of an external DECforms exchange request. See Section 6.6 for a description of managing DECforms escape routines.
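For the first item, a system-level logical name can redirect an application specification in a menu definition to the back-end node. The logical, node, and application names here are hypothetical; see Section 6.3 for the full rules:

    $ DEFINE/SYSTEM VR_APPL BCKEND::VR_APPL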

6.2.3 Actions Required on Application Nodes

Make sure that each step in Section 6.2.1 has been completed. Then, take these additional steps to enable distributed processing by authorizing remote access to ACMS on the application node.

The system manager on an application node can authorize remote task submitter nodes using these methods:

  • By assigning individual ACMS proxy accounts to task submitters
  • By assigning individual OpenVMS proxy accounts to task submitters
  • By assigning a default submitter user name account for those task submitters who do not have individual proxies
  • By creating a single wildcard proxy account for all submitters in the case where the submitter and application nodes are in a single OpenVMS Cluster

These methods are described in the following paragraphs.

When ACMS uses a proxy account or a default submitter user name for a remote task submitter, tasks executed by the remote submitter are executed as if they were selected by a local task submitter using the same account. If a proxy or default submitter user name does not exist for a remote task submitter, ACMS rejects the remote task selection.

6.2.3.1 Assigning Individual Proxy Accounts

The ACMS proxy enables system managers to give ACMS users on remote nodes access to ACMS applications on application nodes without granting access to other files and OpenVMS resources on the application node.

Note

For existing ACMS sites, simply adding a new user proxy to the ACMS proxy file does not make a system more secure. You must also remove the user's proxy from the OpenVMS proxy file to deny users on remote nodes access to other files and OpenVMS resources.
6.2.3.1.1 How ACMS Searches for a User's Proxy

When a user on a remote node first attempts to select a task on the application node, the following occurs:

  1. The ACC on the application node checks that the ACMS proxy file, ACMSPROXY.DAT, exists.
  2. If the ACMSPROXY.DAT file exists, the ACC checks the file contents for the proxy of the user on the remote node.
    If the remote user's proxy is in the file, the user is authorized to select the task.
    If the ACMSPROXY.DAT file does not exist, or if the remote user's proxy is not in the file, the procedure in the next step is followed.
  3. The ACC on the application node checks the OpenVMS proxy file, NETPROXY.DAT, for the remote task submitter's proxy.
  4. If the proxy file exists and the remote user's proxy is in the NETPROXY.DAT file, the user is authorized to select the task.
  5. If the proxy file does not exist or no proxy exists in NETPROXY.DAT, the ACC checks if the USERNAME_DEFAULT parameter is defined in the ACMS parameter file, ACMSPAR.ACM. The USERNAME_DEFAULT parameter defines the default submitter user name account used for remote task submitters that do not have an individual proxy account, or for task submitters from non-ACMS agents.
  6. If the parameter is defined in ACMSPAR.ACM, the remote user is authorized to select tasks.
  7. If the USERNAME_DEFAULT parameter is not defined and there is no OpenVMS proxy, the ACC rejects the remote user.
6.2.3.1.2 Deciding Which Proxy to Use

To decide which type of proxy is appropriate for a user on a remote node, first determine the type of access to the system that the user needs and what level of security is required. Then, decide whether to create an OpenVMS proxy, an ACMS proxy, or both.

Before making changes in the security level of users on remote nodes, consider the needs of the following types of users:

  • OpenVMS only user
    This user needs to access files and other OpenVMS resources on the application node, but does not need to select ACMS tasks on that node.
  • ACMS only user
    This user needs to select ACMS tasks on the application node, but does not require access to OpenVMS.
  • OpenVMS and ACMS user
    This user requires OpenVMS access and also needs to select ACMS tasks on the application node.

Table 6-1 identifies the types of proxies these users require.

Table 6-1 Proxy Choices
User Type               OpenVMS Proxy   ACMS Proxy
OpenVMS Only User       +
ACMS Only User                          +
OpenVMS and ACMS User   +               +

An OpenVMS proxy allows users to select ACMS tasks remotely and users on remote nodes to access other files and OpenVMS resources on the application node.

An ACMS proxy allows users to select ACMS tasks remotely. Unlike an OpenVMS proxy, an ACMS proxy does not grant users on remote nodes access to any other files or OpenVMS resources on the application node, except through an ACMS task.

6.2.3.1.3 Setting Up the ACMS Proxy File

Use the ACMS User Definition Utility (UDU) to create and maintain the ACMS proxy file, ACMSPROXY.DAT. This file contains the mapping of <remote-node>::<remote-user> to <local-user>.

You also use UDU to add, remove, and display the proxy specifications in the ACMS proxy file. The UDU interface, including the command syntax and the use of wildcards, is similar to the proxy command interface in the OpenVMS Authorize Utility.

By default, UDU and run-time ACMS look for the ACMS proxy file, ACMSPROXY.DAT, in two different places:

  • In the current directory for UDU
  • In the file location SYS$SYSTEM:ACMSPROXY.DAT for run-time ACMS

UDU looks for the default ACMS proxy file, ACMSPROXY.DAT, in the current directory. To specify another location for the file, you can define the logical name ACMSPROXY in any logical name table in your process directory table and in any access mode.

The SYS$SYSTEM:ACMSPROXY.DAT file location is the run-time default specification of the proxy file. You can define the system-level executive-mode logical name ACMSPROXY to specify an alternate file location, which the ACMS run-time system uses. For example, issue the following command to create the proxy file FOO.BAR in the SYS$TEST directory:


$ DEFINE/SYSTEM/EXECUTIVE ACMSPROXY SYS$TEST:FOO.BAR

If the ACC encounters any problems the first time it opens the ACMS proxy file, an error message is written to the SWL log file.

If you want the proxy file to be in SYS$SYSTEM and accessible to all nodes in the cluster, you must specify a SYS$COMMON directory, not SYS$SPECIFIC. In order for ACMS to search the file for remote proxies, the ACC process must be able to read the ACMS proxy file.

To allow ACC access to the ACMS proxy file, perform one of the following actions:

  • Set the ACMS proxy file owner UIC to be the same as the UIC of the ACC or the SYSTEM account.
  • Set the ACMS proxy file protection so that the ACC process has read access.
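Either action can be taken with standard DCL; the file location and protection mask shown here are illustrative:

    $ SET FILE/OWNER_UIC=[SYSTEM] SYS$COMMON:[SYSEXE]ACMSPROXY.DAT
    $ SET PROTECTION=(S:RWED,O:RWED,G:R,W) SYS$COMMON:[SYSEXE]ACMSPROXY.DAT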
6.2.3.1.4 Creating ACMS Proxies

To implement the ACMS proxy, perform the following steps:

  1. Create the proxy file and add entries for remote users who need to select tasks in ACMS applications.
  2. Create proxies in the OpenVMS proxy file for remote users who additionally require access to other files and OpenVMS resources.
  3. Remove proxies in the OpenVMS proxy file for existing users on remote nodes who need only to select tasks in ACMS applications. Add these users to the ACMS proxy instead.

Performing these operations increases security and also ensures that use of the ACMS proxy mechanism does not degrade ACMS performance. When the ACC needs to search only one proxy file for a proxy, ACMS performance is the same as in Version 3.2. (Even when ACMS needs to search both the ACMS and OpenVMS proxy files, there is only a slight negative impact on performance.)

The format you choose for creating ACMS proxies has security implications. The following formats are listed from most secure to least secure. The ACC process follows this order when it searches the ACMS proxy file for a match with a remote user proxy. If the ACC finds no match in the search list, it searches the OpenVMS proxy file in the same order for a remote user proxy.

  • An exact match of remote-node::remote-user
    This is the most secure way to set up ACMS proxies. This format allows only a specified user on a specified node to select tasks on the application node.
  • A match of remote-node::*
    Use this format to create ACMS proxies in an OpenVMS Cluster environment, where the submitter and application nodes are in the same cluster.
    You can also use this format outside an OpenVMS Cluster environment, but you need to understand its security implications; this format allows any user on the submitter node to select tasks on the application node.
  • A match of *::remote-user
    This format allows a user with a specified user name on any node in the network to select tasks on the application node.
  • A match of *::*
    This is the least secure format; it allows any user on any node to select tasks on the application node.
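For example, a hypothetical UDU session that adds an exact-match proxy and then displays all proxies might look like the following. The node and user names are assumptions, and the command syntax mirrors the OpenVMS Authorize Utility as noted above; check the UDU reference for the exact commands:

    $ RUN SYS$SYSTEM:ACMSUDU
    UDU> ADD/PROXY COMET::SMITH SMITH
    UDU> SHOW/PROXY *::*
    UDU> EXIT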

The application node ACC checks for a remote task submitter's proxy. In addition, the application node ACC requests that the submitter node ACC validate the task submitter based on a security token and submitter ID that the submitter node ACC assigned to the user. When the task submitter first enters ACMS, the submitter node ACC verifies the following items to make sure that the user is authorized:

  • Submitter terminal is in ACMSDDF.DAT
  • Submitter user name is in SYSUAF.DAT
  • Submitter user name is in ACMSUDF.DAT

