
HP OpenVMS Programming Concepts Manual



29.5.1.2 Providing Support for Extended File Naming

If an application does not handle extended names successfully, examine the application for any of the following:

  • Does the application attempt to parse or assume knowledge of the syntax of a file specification? For example, the application might search for a bracket ([) to locate the beginning of a directory specification, or for a space character to mark the end of a file specification.
    Recommendation: The application should rely on RMS to determine whether a file specification is legal rather than pretesting the name itself. Use the NAM$L_NODE, NAM$L_DEV, NAM$L_DIR, NAM$L_TYPE, and NAM$L_VER fields of the NAM block or SYS$FILESCAN to retrieve this information. (A sketch using SYS$PARSE and the NAM block appears after this list.)
  • Does the application attempt to determine if two file names are the same by doing a string comparison? Because file names are case-insensitive, and because there are several ways to represent some characters, a string compare may fail even though two strings represent the same file.
    Recommendation: See the example program [SYSHLP.EXAMPLES]FILENAME_COMPARE.C for a way to use the system service $CVT_FILENAME to compare file names.
  • Does the application depend on the NAM$V_DIR_LVLS bits in the NAM$L_FNB field to determine how many directory levels there are in the current file specification? Because there are only three bits in this field, it can only specify a maximum of eight levels. Applications seldom use these bits; they are mainly used by RMS when a NAM is specified as a related file specification.
    Recommendation: With OpenVMS Version 7.2 and greater, there is a larger field available in both the NAM and the NAML blocks, NAM$W_LONG_DIR_LEVELS. Use this field to determine the correct number of directory levels.
  • Does the application rely on the NAM$V_WILD_UFD and SFD1 - SFD7 bits to determine where there are wildcard directories? Because there are only eight of these bits, they can only report wildcards in the first eight directory levels. Applications seldom use these bits; they are mainly used by RMS when a NAM is specified as a related file specification.
    Recommendation: With OpenVMS Version 7.2 and greater, there is a field available in both the NAM and the NAML blocks, NAML$W_FIRST_WILD_DIR. Use this field to locate the highest directory level where a wildcard is found.
  • Does the application use the QIO interface to the file system and specify or request a file name from QIO directly? The QIO interface requires that an application specify explicitly that it understands extended file names before it will accept or return such names. In addition, the file name format for extended file names is not identical between RMS and the QIO interface, and some file names may be specified in 2-byte Unicode (UCS-2) characters. Your application must be able to handle characters that span 2 bytes.
    Recommendations: Most applications that use the QIO interface also use RMS to parse file specifications and retrieve the file and directory ID for the file. They then use these ID values to access the file with the QIO interface. This method of access continues to work with extended names. HP recommends changing to this method to fix the problem.
    You can also obtain the name that the QIO system uses from the NAML$L_FILESYS_NAME field of a NAML block, or use the SYS$CVT_FILENAME system service to convert between the RMS and QIO file name formats. In this case, you must also provide an expanded FIB block to the QIO service to specify that your application understands extended names, expand your buffers to the maximum size, and be prepared to handle 2-byte Unicode characters.
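
The following is a minimal sketch of the approach recommended in the first item above: let RMS validate the specification via SYS$PARSE and read the components from the NAM block instead of scanning the string by hand. It is not taken from this manual; it assumes a DEC C environment on OpenVMS, and the file specification shown is illustrative.

#include <rms.h>
#include <ssdef.h>
#include <starlet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct FAB fab = cc$rms_fab;        /* RMS file access block      */
    struct NAM nam = cc$rms_nam;        /* RMS name block             */
    char espec[NAM$C_MAXRSS];           /* expanded-string buffer     */
    char *fname = "SYS$LOGIN:LOGIN.COM";
    unsigned int status;

    fab.fab$l_fna = fname;              /* specification to parse     */
    fab.fab$b_fns = strlen(fname);
    fab.fab$l_nam = &nam;               /* attach the NAM block       */
    nam.nam$l_esa = espec;
    nam.nam$b_ess = sizeof espec;

    status = sys$parse(&fab);           /* let RMS judge legality     */
    if (!(status & 1))
        return status;

    /* RMS returns pointers into the expanded string, plus lengths.  */
    printf("device:    %.*s\n", nam.nam$b_dev,  nam.nam$l_dev);
    printf("directory: %.*s\n", nam.nam$b_dir,  nam.nam$l_dir);
    printf("name:      %.*s\n", nam.nam$b_name, nam.nam$l_name);
    printf("type:      %.*s\n", nam.nam$b_type, nam.nam$l_type);
    printf("version:   %.*s\n", nam.nam$b_ver,  nam.nam$l_ver);
    return SS$_NORMAL;
}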

29.5.2 Upgrading to Full Support

Some OpenVMS applications, such as system or disk management utilities, may require full support for Extended File Specifications. Typically, these are utilities that must be able to view and manipulate all file specifications without DID or FID abbreviation. To upgrade an application so that it fully supports all the features of Extended File Specifications, do the following:

  1. Convert all uses of the RMS NAM block to the NAML block.
  2. Expand the input and output file name buffers used by RMS. To do this, use the NAML long_expanded and long_resultant buffer pointers (NAML$L_LONG_EXPAND and NAML$L_LONG_RESULT) rather than the short buffer pointers (NAML$L_ESA and NAML$L_RSA), and increase the buffer sizes from NAM$C_MAXRSS to NAML$C_MAXRSS. (A sketch of steps 1 through 3 appears after this list.)
  3. If long file names (greater than 255 bytes) are specified in the FAB file name buffer field (FAB$L_FNA), use the NAML long_filename buffer field (NAML$L_LONG_FILENAME) instead. If long file names are specified in the FAB default name buffer field (FAB$L_DNA), use the NAML default name buffer field (NAML$L_LONG_DEFNAME) instead.
  4. If you use the LIB$FIND_FILE, LIB$RENAME or LIB$DELETE routines, set LIB$M_FIL_LONG_NAMES in the flags argument (flags is an argument to the LIB$DELETE routine). Note that you can use the NAML block in place of the NAM block to pass information to LIB$FILE_SCAN without additional changes.
  5. If you use the LIB$FID_TO_NAME routine, the descriptor for the returned file specification may need to be changed to take advantage of the increased maximum length of 4095 (NAML$C_MAXRSS) bytes.
  6. If you use the FDL$CREATE, FDL$GENERATE, FDL$PARSE, or FDL$RELEASE routine, you must set FDL$M_LONG_NAMES in the flags argument.
  7. Examine the source code for any additional assumptions made internally that a file specification is no longer than 255 8-bit bytes.
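
The following sketch illustrates steps 1 through 3 by opening a file whose specification may exceed 255 bytes. It is an example built on several assumptions rather than code from this manual: it assumes DEC C on OpenVMS V7.2 or later, that <rms.h> supplies the NAML structure, the cc$rms_naml initializer, and the FAB$L_NAML field, and that setting FAB$L_FNA to -1 with FAB$B_FNS set to 0 directs RMS to take the file name from NAML$L_LONG_FILENAME. Verify these details against the OpenVMS Record Management Services documentation.

#include <rms.h>
#include <starlet.h>
#include <string.h>

static char expanded[NAML$C_MAXRSS];    /* long buffers, up to 4095 bytes */
static char resultant[NAML$C_MAXRSS];

unsigned int open_long_name(char *fname)
{
    struct FAB  fab  = cc$rms_fab;
    struct NAML naml = cc$rms_naml;
    unsigned int status;

    fab.fab$l_fna = (char *) -1;        /* long name is in the NAML (step 3) */
    fab.fab$b_fns = 0;
    fab.fab$l_naml = &naml;             /* NAML in place of the NAM (step 1) */

    naml.naml$l_long_filename      = fname;
    naml.naml$l_long_filename_size = strlen(fname);

    naml.naml$l_long_expand        = expanded;      /* step 2: long buffers  */
    naml.naml$l_long_expand_alloc  = sizeof expanded;
    naml.naml$l_long_result        = resultant;
    naml.naml$l_long_result_alloc  = sizeof resultant;

    status = sys$open(&fab);
    if (status & 1)
        sys$close(&fab);
    return status;
}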


Chapter 30
Distributed Transaction Manager (DECdtm)

This chapter describes the programming interfaces of the Distributed Transaction Manager (DECdtm). You use these interfaces to implement distributed transactions or to write resource managers that participate in distributed transactions. Examples of single and multiple branch applications are also presented. Additionally, this chapter describes the implementation of the X/Open Distributed Transaction Processing XA interface, which allows DECdtm to coordinate XA-compliant resource managers and allows XA-compliant transaction processing systems to coordinate DECdtm-compliant resource managers.

DECdtm system services are documented in the HP OpenVMS System Services Reference Manual.

This chapter contains the following sections:

Section 30.1 provides an overview of the DECdtm programming interfaces.
Section 30.2 describes single branch applications.
Section 30.3 describes multiple branch applications.
Section 30.4 describes default transactions.
Section 30.5 describes the Resource Manager interface.
Section 30.6 describes the Communication Resource Manager interface.
Section 30.7 describes the XA interface (Alpha only).
Section 30.8 provides program examples that use DECdtm.

30.1 Overview of DECdtm

DECdtm provides a basic infrastructure for a distributed transaction processing system. A transaction is a collection of operations that change the system from one valid state to another. A transaction performs operations on resources. Examples of resources are databases and files.

Specifically, a transaction has the ACID properties:

Atomicity: Either all of the changes for a transaction are made, or none are. If the changes for a transaction cannot be completed, partial changes by the transaction must be undone.
Consistency: A transaction is expected to change the system from one consistent state to another.
Isolation: Intermediate changes by a transaction must not be visible to other transactions.
Durability: The changes made by a transaction should survive computer and media failures.

A transaction often needs to use more than one resource on one or more systems. This type of transaction is called a distributed transaction.

Individual OpenVMS systems within the distributed system are called nodes in this chapter.

The DECdtm model constructs a distributed transaction processing system from three types of components:

  • An Application Program (AP) provides the application-specific code for the system and defines the boundaries between transactions.
    A transaction may be implemented by a single AP running in one node of the distributed system, or it may comprise multiple AP processes, typically running on different nodes of the system.
  • A Resource Manager (RM) provides ACID operations for one or more data resources on a single node of the system. Oracle Rdb and RMS Journaling are examples of resource managers.
    Typically, a distributed transaction involves two or more RMs. This might be dissimilar RMs on a single node of the system (for example, Oracle Rdb and RMS Journaling), or it might be RMs on different nodes.
  • The Transaction Manager (TM) controls the interaction of APs and RMs, ensuring that they maintain a common view of the state of each transaction (in-progress, committed, or aborted).
    DECdtm is a TM. Typically, it is the sole TM in an OpenVMS system, but it also provides services that enable it to interoperate with other TMs.

DECdtm implements a two-phase commit protocol. This is a simple consensus protocol that allows a collection of participants to reach a single conclusion. The two-phase commit protocol makes sure that all of the operations can take effect before the transaction is committed. If any operation cannot take effect, for example if a network link is lost, then the transaction is aborted, and none of the operations take effect. Given a list of participants and a designated coordinator, the protocol proceeds as follows:

Phase 1: The coordinator asks each participant if it can agree to commit. Each participant examines its internal state. If the answer is yes, it does whatever it requires to ensure that it can either commit or abort the transaction, regardless of failures. Typically, this requires logging information to disk. It then votes either yes or no.
Phase 2: The coordinator records the outcome on disk: yes, if all the votes were positive, or no, if any votes were negative or missing.

The coordinator then informs each participant of the final result.

Note that this protocol reaches a single decision even though it allows the coordinator and participants to fail. Any failure during phase 1 causes the transaction to be aborted. If the coordinator fails during phase 2, participants wait for it to recover and read the decision from disk. If a participant fails during phase 2, it can ask the coordinator for the decision when it recovers.
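
The decision logic can be summarized in code. The sketch below is generic pseudocode rendered in C, not DECdtm source; participant, ask_prepare, log_outcome, and tell_outcome are hypothetical names standing in for the votes, the durable log write, and the phase 2 notifications described above.

#include <stdbool.h>
#include <stddef.h>

typedef struct participant participant;

bool ask_prepare(participant *p);                /* phase 1: request a vote */
void log_outcome(bool commit);                   /* durable write to disk   */
void tell_outcome(participant *p, bool commit);  /* phase 2: inform         */

bool two_phase_commit(participant *parts[], size_t n)
{
    bool commit = true;
    size_t i;

    /* Phase 1: any "no" or missing vote forces an abort. */
    for (i = 0; i < n; i++)
        if (!ask_prepare(parts[i]))
            commit = false;

    /* Record the outcome durably before informing anyone, so the
       decision survives a coordinator failure during phase 2. */
    log_outcome(commit);

    /* Phase 2: inform each participant of the final result. */
    for (i = 0; i < n; i++)
        tell_outcome(parts[i], commit);

    return commit;
}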

While DECdtm is not complex in itself, construction of a full-function resource manager requires knowledge of more techniques than can be given in this manual. Transaction Processing: Concepts and Techniques by Jim Gray and Andreas Reuter (Morgan Kaufmann Publishers, 1993) may be helpful.

30.2 Single Branch Application

A sequence of AP operations that occurs within a single transaction is called a branch of the transaction. In the simplest use of DECdtm, a single AP invokes two or more RMs.

The AP uses just three of the DECdtm services: $START_TRANS, $END_TRANS, and $ABORT_TRANS. These services are documented in the HP OpenVMS System Services Reference Manual. They have not changed, but additional information is given in this manual.

$START_TRANS initiates a new transaction and returns a transaction identifier (TID) that is passed to other DECdtm services. $END_TRANS ends a transaction by attempting to commit it, and returns the outcome of the transaction (commit or abort). $ABORT_TRANS ends the transaction by aborting it.

During the transaction, the AP passes the TID to each RM that it uses. The TID may be passed explicitly, or through the default transaction mechanism described in Section 30.4. Internally, each RM calls the DECdtm RM services. It also uses the branch services if parts of the transaction can be executed by different processes or on different nodes.

DECdtm aborts a transaction if the process executing a branch terminates. By default, it also aborts a transaction if the current program image terminates.

30.2.1 Calling DECdtm System Services for a Single Branch Application

An application using the DECdtm system services follows these steps (a C sketch appears after the list):

  1. Calls SYS$START_TRANSW. This starts a new transaction and returns the transaction identifier.
  2. Instructs the resource managers to perform the required operations on their resources.
  3. Ends the transaction in one of two ways:
    • Commit: To attempt to commit the transaction, the application calls SYS$END_TRANSW. This checks whether all participants can commit their operations. If any participant cannot commit an operation, the transaction is aborted.
      When SYS$END_TRANSW returns, the application determines the outcome of the transaction by reading the completion status in the I/O status block.
    • Abort: To abort the transaction, the application calls SYS$ABORT_TRANSW. Typically, an application aborts a transaction if a resource manager returns an error or if the user enters invalid information during the transaction.
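
A minimal C sketch of these steps follows, assuming DEC C on OpenVMS. The routine do_database_work is a hypothetical stand-in for the resource manager calls of step 2, and the trailing 0 passed to SYS$ABORT_TRANSW requests the default abort reason. For the wait-form services used here, the final outcome is delivered in the I/O status block.

#include <iosbdef.h>
#include <ssdef.h>
#include <starlet.h>

extern unsigned int do_database_work(unsigned int tid[4]);  /* hypothetical */

unsigned int run_transaction(void)
{
    unsigned int tid[4];                /* transaction identifier (TID)     */
    IOSB iosb;
    unsigned int status;

    /* Step 1: start a new transaction and obtain its TID. */
    status = sys$start_transw(0, 0, &iosb, 0, 0, tid);
    if (status & 1)
        status = iosb.iosb$w_status;    /* completion status is in the IOSB */
    if (!(status & 1))
        return status;

    /* Step 2: pass the TID to each resource manager operation. */
    if (!(do_database_work(tid) & 1))
    {
        /* Step 3, abort case: roll everything back. */
        sys$abort_transw(0, 0, &iosb, 0, 0, tid, 0);
        return SS$_ABORT;
    }

    /* Step 3, commit case: ask DECdtm to run two-phase commit. */
    status = sys$end_transw(0, 0, &iosb, 0, 0, tid);
    if (status & 1)
        status = iosb.iosb$w_status;    /* commit or abort outcome          */
    return status;
}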

30.2.1.1 Sample Single Branch Transaction

Edward Jessup, an employee of a computer company in Italy, is transferring to a subsidiary of the company in Japan. An application must remove his personal information from an Italian DBMS database and add it to a Japanese Rdb database. Both of these operations must happen; otherwise, Edward's personal information might be lost in cyberspace (the application might remove him from the Italian database but then lose a network link while trying to add him to the Japanese database), or he might end up in both databases at the same time. Either way, the two databases would be out of step.

If the application used DECdtm to execute both operations as an atomic transaction, then this error could never happen; DECdtm would automatically detect the network link failure and abort the transaction. Neither of the databases would be updated, and the application could then try again.

Figure 30-1 shows the participants in the distributed transaction discussed in this sample transaction. The application is on node ITALY.

Figure 30-1 Participants in a Distributed Transaction


30.3 Multiple Branch Application

A transaction may have multiple branches. A separate branch is required for each process that takes part in a transaction, regardless of whether the processes run on the same node or on different nodes of the system.

The top branch of the transaction is created by $START_TRANS. A new branch can be requested in the following ways:

  • By making explicit use of the $ADD_BRANCH and $START_BRANCH services. The application can use any suitable communication technique to pass application calls between the processes and nodes of the system. Such communication is not a function of DECdtm.
  • By calling an RM such as Oracle Rdb that allows resource processing to be requested on another node of the system.
  • By calling a transaction processing framework such as ACMS that allows processing tasks to be requested on other nodes of the system.

Note that in the last two cases, the RM or TP framework makes the necessary branch service calls on behalf of the application. From the viewpoint of DECdtm, there is no difference among the three cases.

The top branch of a transaction is created by calling $START_TRANS. A subordinate branch is authorized when an existing branch calls $ADD_BRANCH. This returns a globally unique branch identifier (BID). The application passes the BID and TID with an application-specific request to another process or node of the system. $START_BRANCH is then called on the target node to add a new branch to the transaction. A subordinate branch of a transaction may in turn create further branches.
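
The following sketch shows the shape of this exchange in C. The services ($ADD_BRANCHW, $START_BRANCHW, $END_BRANCHW) are real DECdtm services, but the argument lists shown merely follow the common DECdtm calling pattern and are assumptions to verify against the HP OpenVMS System Services Reference Manual; the node names and the transport between the branches are illustrative.

#include <descrip.h>
#include <iosbdef.h>
#include <starlet.h>

/* Parent branch: authorize a subordinate branch for node WORKER, then
 * ship the TID and BID to it (the transport is application code). */
unsigned int authorize_branch(unsigned int tid[4], unsigned int bid[4])
{
    IOSB iosb;
    $DESCRIPTOR(worker, "WORKER");      /* target node name (illustrative)  */
    unsigned int status;

    /* Argument order is an assumption -- check the reference manual. */
    status = sys$add_branchw(0, 0, &iosb, 0, 0, tid, &worker, bid);
    if (status & 1)
        status = iosb.iosb$w_status;
    return status;                      /* then send tid and bid over       */
}

/* Worker side: join the transaction, do the work, end the branch. */
unsigned int join_branch(unsigned int tid[4], unsigned int bid[4])
{
    IOSB iosb;
    $DESCRIPTOR(parent, "PARENT");      /* node that called $ADD_BRANCH     */
    unsigned int status;

    status = sys$start_branchw(0, 0, &iosb, 0, 0, tid, &parent, bid);
    if (!(status & 1))
        return status;

    /* ... branch processing on behalf of the transaction ... */

    return sys$end_branchw(0, 0, &iosb, 0, 0, tid, bid);
}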

DECdtm can connect the two parts of the transaction together because $ADD_BRANCH specifies the name of the target node while $START_BRANCH specifies the name of the parent node. Either the two nodes must be in the same OpenVMS Cluster or they must be able to communicate by DECnet. DECdtm operation is more efficient within an OpenVMS Cluster.

Unless DECdtm operation is confined to a single cluster, you must configure each node with the same DECnet node name as its cluster node name.

An application may complete its processing within a branch by calling $END_BRANCH.

On $START_BRANCH, DECdtm checks that the two nodes are able to communicate, but it does not validate that the branch is authorized until $END_BRANCH is called. At that point, an unauthorized branch is aborted without affecting the ability of the authorized branches to commit.

Be careful in situations in which an application attempts to access the same resource from different branches of a transaction. Some RMs can recognize that the branches form part of the same transaction and allow concurrent access to the resource. In that case, just like multiple threads in a process, the application may need to serialize its own operations on the shared resource. Other RMs may lock one branch against another. In that case, the application is likely to deadlock.

Multiple branches in a transaction can serialize their operations on a shared resource within an OpenVMS Cluster using the Lock Manager. Care is needed if two branches outside an OpenVMS Cluster implicitly share a resource, perhaps by each creating a subordinate branch on a third system.

A single process may have multiple branches. For example, a server process may execute parallel operations on behalf of different transactions.

30.3.1 Resource Manager Use of the Branch Services

Strictly defined, an RM provides access to resources in the same process as the AP that has started the transaction or added a branch. However, an RM may perform work for a transaction in a different process from the one that made the original request. In that case, it must use the branch services to join the transaction in the worker process.

Similarly, an RM such as Oracle Rdb may provide an application interface that allows remote resources to be accessed. In that case, the RM uses the branch services to add a branch on the local node and start a branch on the remote node.

30.3.2 Branch Synchronization

Processing in all branches of a transaction must be complete before calling $END_TRANS.

Normally DECdtm is used to ensure branch completion. In this case:

  • The call to $START_BRANCH does not specify the DDTM$M_BRANCH_UNSYNCHED flag.
  • Either $END_BRANCH or $ABORT_TRANS must be called to end the branch.
  • $END_BRANCH and $END_TRANS calls are not completed with a success status until all synchronized subordinate branches of the transaction have initiated calls to $END_BRANCH and the top branch has initiated a call to $END_TRANS.
  • $END_TRANS and $END_BRANCH are not completed with an SS$_ABORT status until all synchronized branches on the local node have initiated calls to $END_TRANS, $END_BRANCH, or $ABORT_TRANS.

In other words, when a transaction completes successfully, all synchronized branches complete together. When a transaction aborts, all synchronized branches on a single node complete together, but branches on different nodes complete at different times. Using synchronized branches does not add extra message overhead, because the synchronization events are implicit in the normal DECdtm commitment protocol.

DECdtm branch synchronization is redundant when branch processing is initiated by a synchronous call to a process or remote node, and that call does not return until processing is complete. For example, remote operations may be requested by Remote Procedure Call (RPC). In this case:

  • The call to $START_BRANCH specifies the DDTM$M_BRANCH_UNSYNCHED flag.
  • The branch must not call $END_BRANCH or $ABORT_TRANS. If the transaction is to be aborted, the branch must return an error status to its superior branch.

See Section 30.4 for a case in which unsynchronized branches are not advised.

30.4 Default Transactions

A default transaction TID is maintained for each process. Some DECdtm services act on the default transaction if no transaction is explicitly specified in the call. The default transaction of a process has two states:

  • Set: The process has a default transaction.
  • Clear: The process does not have a default transaction.

The default transaction is cleared during the processing that occurs when the transaction commits or aborts.

Some operations ($START_TRANS, $START_BRANCH) that set the default transaction of a process fail if the default transaction of the process was not previously clear. Such operations update the default transaction without error if it is still set but commit or abort processing is already in progress.

The default transaction TID is read by the $GET_DEFAULT_TRANS service.
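
For example, a process can read its default TID as follows (a minimal sketch, assuming DEC C; an error status simply indicates that the default transaction is clear):

#include <ssdef.h>
#include <starlet.h>

unsigned int read_default_tid(unsigned int tid[4])
{
    /* Returns the TID of the process default transaction, or an
       error status if the default transaction is clear. */
    return sys$get_default_trans(tid);
}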

Some RMs check if a default transaction has been started by the application. If there is none, the requested operation is performed as a single atomic operation. Do not use unsynchronized branches with such RMs. The problem is that a transaction might be aborted asynchronously (by another branch) before the branch calls the RM in question. The RM would then perform the operation separately instead of joining the transaction and then receiving an abort notification. This problem cannot occur with a synchronized branch because the default transaction TID is not cleared until $END_BRANCH is called.

30.4.1 Multithreaded Applications

Because the default transaction TID is per-process, not per-thread, it is preferable to use explicit TIDs in multithreaded processes.

However, you must use the default transaction with RMs that do not provide an interface that allows the AP to specify the TID. In this case, use the $SET_DEFAULT_TRANS service to set the appropriate TID in each thread. Take care to serialize each sequence of operations that sets and uses the default transaction.
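
The sketch below shows one way to serialize such a sequence, assuming DEC C with POSIX Threads. The argument order shown for SYS$SET_DEFAULT_TRANSW (new TID, then an optional old-TID return argument) follows the usual DECdtm pattern and should be verified against the reference manual; rm_operation is a hypothetical RM call that relies on the default transaction.

#include <iosbdef.h>
#include <pthread.h>
#include <starlet.h>

static pthread_mutex_t default_tid_lock = PTHREAD_MUTEX_INITIALIZER;

extern unsigned int rm_operation(void);   /* hypothetical: uses default TID */

unsigned int call_rm_with_tid(unsigned int tid[4])
{
    IOSB iosb;
    unsigned int status;

    /* Serialize "set the default TID, then use it" across threads. */
    pthread_mutex_lock(&default_tid_lock);
    status = sys$set_default_transw(0, 0, &iosb, 0, 0, tid, 0);
    if (status & 1)
        status = rm_operation();
    pthread_mutex_unlock(&default_tid_lock);
    return status;
}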

30.5 Resource Manager Interface

A resource manager provides transaction operations on one or more resources. The RM must have the following characteristics:

  • It should implement transactions with the ACID properties on the resources it manages. This is not a precondition for using DECdtm; for example, some RMs compromise on isolation for improved performance. Unless this characteristic is observed, however, distributed transactions constructed with DECdtm will not have the ACID properties expected by most applications. Section 30.5.6 describes the case in which volatile (nondurable) resources are used.
  • It must be able to participate in the two-phase commit protocol. This means that it must be able to store the state of a transaction on disk in phase 1 and subsequently commit or roll back the changes as requested in phase 2.
  • It must respond correctly to DECdtm events in the event handler declared by $DECLARE_RM.
  • On recovery from an RM or node failure it must call DECdtm to determine the state of each transaction that was in phase 2 at the time of the failure. It must then commit or roll back the transaction as determined by DECdtm.

DECdtm recognizes two components of an RM:

  • An RM instance (RMI) for each process that makes RM-related calls to DECdtm.
  • An RM participant for each transaction in which an RM instance takes part.

The RMI and its RM participants share a single event handler, but each participant may have a different name and context. The name is used to find relevant transactions on recovery. The context is a handle, opaque to DECdtm, which is passed to the event handler and may be used to address RM-specific data.

An RM uses the following DECdtm services during normal execution of transactions:

$DECLARE_RM: Creates an RM instance in the current process.
$JOIN_RM: Adds an RM participant to a transaction.
$ACK_EVENT: Acknowledges an event reported to an RMI or RM participant.
$FORGET_RM: Deletes an RMI from the current process.

An RM uses the following DECdtm services during recovery from an RM or system failure:

$GETDTI: Gets distributed transaction information. Used to get information about the state of transactions.
$SETDTI: Sets distributed transaction information. Used to remove RM participants from a transaction.

