
Upgrading Privileged-Code Applications on OpenVMS Alpha and OpenVMS I64 Systems



However, as the size of the buffer increases, the run-time overhead to copy the PTEs into the DIOBM becomes noticeable. At some point it becomes less expensive to create a temporary window in S0/S1 space to the process PTEs that map the buffer. The PTE window method has a fixed cost, but the cost is relatively high because it requires PTE allocation and TB invalidates. For this reason, the PTE window method is not used for moderately sized buffers.

The transition point from the PTE copy method with a secondary DIOBM to the PTE window method is determined by a new system data cell, ioc$gl_diobm_ptecnt_max, which contains the maximum desirable PTE count for a secondary DIOBM. The PTE window method is used if the buffer is mapped by more than ioc$gl_diobm_ptecnt_max PTEs.
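
The selection logic amounts to the following sketch in C; ioc$gl_diobm_ptecnt_max is the system data cell named above, but the constant EMBEDDED_DIOBM_PTECNT and the helper names are illustrative assumptions, not OpenVMS symbols:

    extern unsigned int ioc$gl_diobm_ptecnt_max;  /* max PTE count for a secondary DIOBM */

    /* Hedged sketch of the irp$l_svapte method selection. */
    if (pte_count <= EMBEDDED_DIOBM_PTECNT)       /* fits in the DIOBM within the IRP    */
        copy_ptes_to_embedded_diobm ();
    else if (pte_count <= ioc$gl_diobm_ptecnt_max)
        copy_ptes_to_secondary_diobm ();          /* DIOBM allocated from nonpaged pool  */
    else
        create_pte_window ();                     /* map process PTEs into S0/S1 space   */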

When a PTE window is used, irp$l_svapte is set to the S0/S1 virtual address in the allocated PTE window that points to the first PTE that maps the buffer. This S0/S1 address is computed by taking the S0/S1 address that is mapped by the first PTE allocated for the window and adding the byte offset within the page of the first buffer PTE address in page table space. A PTE window created this way is removed during I/O postprocessing.
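
Expressed as a sketch, the computation looks like the following; window_base is the S0/S1 VA mapped by the first SPTE allocated for the window, va_pte is the page-table-space VA of the first PTE that maps the buffer, and an 8 KB page size is assumed. Only irp$l_svapte is an actual IRP cell; the other names are illustrative:

    #define PAGE_SIZE 8192     /* assumed 8 KB page size */

    /* Window base plus the byte-within-page offset of the first buffer PTE. */
    irp->irp$l_svapte = window_base + (unsigned int) (va_pte & (PAGE_SIZE - 1));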

The PTE window method is also used if the attempt to allocate the required secondary DIOBM structure fails due to insufficient contiguous nonpaged pool. With an 8 KB page size, the PTE window requires a set of contiguous system page table entries (SPTEs) equal to the number of 8 MB regions in the buffer plus 1. Failure to create a PTE window as a result of insufficient SPTEs is unusual. However, in the unlikely event of such a failure, if the process has not disabled resource wait mode, the $QIO request is backed out and the requesting process is put into a resource wait state for nonpaged pool (RSN$_NPDYNMEM). When the process is resumed, the I/O request is retried. If the process has disabled resource wait mode, a failure to allocate the PTE window results in the failure of the I/O request.
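
As a worked example of the SPTE requirement, assuming the 8 KB page size given above:

    /* One page of PTEs (8192 bytes / 8 bytes per PTE = 1024 PTEs) maps
       1024 * 8 KB = 8 MB of buffer, so one SPTE is needed per 8 MB region
       of the buffer, plus 1 in case the buffer's PTEs straddle a
       page-table page boundary. */
    sptes_needed = (buffer_size / (8 * 1024 * 1024)) + 1;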

When the PTE window method is used, the level-3 process page table pages that contain the PTEs that map the user buffer are locked into memory as well. However, these level-3 page table pages are not locked when the PTEs are copied into nonpaged pool or when the SPT window is used.

The new IOC_STD$FILL_DIOBM routine is used to set irp$l_svapte by one of the three previously described methods. The OpenVMS-supplied FDT support routines EXE_STD$MODIFYLOCK, EXE_STD$READLOCK, and EXE_STD$WRITELOCK use the IOC_STD$FILL_DIOBM routine in the following way:

  1. The buffer is locked into memory by calling the new MMG_STD$IOLOCK_BUF routine. This routine returns a 64-bit pointer to the PTEs and replaces the obsolete MMG_STD$IOLOCK routine.


    status = mmg_std$iolock_buf (buf_ptr, bufsiz, is_read, pcb,
                                 &irp->irp$pq_vapte,
                                 &irp->irp$ps_fdt_context->fdt_context$q_qio_r1_value);

    For more information about this routine, see Section B.17.
  2. A value for the 32-bit irp$l_svapte cell is derived by calling the new IOC_STD$FILL_DIOBM routine with a pointer to the embedded DIOBM in the IRP, the 64-bit pointer to the PTEs that was returned by MMG_STD$IOLOCK_BUF, and the address of the irp$l_svapte cell.2


    status = ioc_std$fill_diobm (&irp->irp$r_diobm, irp->irp$pq_vapte,
                                 pte_count, DIOBM$M_NORESWAIT,
                                 &irp->irp$l_svapte);

    The DIOBM structure is fully described in Section A.6, and this routine is described in Section B.10. (The two calls are shown together in the sketch after this list.)
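
Taken together, the two calls might appear in an FDT routine as in the following sketch. The calls themselves are as documented above; the status test and the abort path are illustrative assumptions:

    /* Hedged sketch: lock the buffer, then derive irp$l_svapte via the DIOBM. */
    status = mmg_std$iolock_buf (buf_ptr, bufsiz, is_read, pcb,
                                 &irp->irp$pq_vapte,
                                 &irp->irp$ps_fdt_context->fdt_context$q_qio_r1_value);
    if (!(status & 1))                      /* low bit clear: buffer lock failed */
        return (abort_the_io (status));     /* hypothetical error path           */

    status = ioc_std$fill_diobm (&irp->irp$r_diobm, irp->irp$pq_vapte,
                                 pte_count, DIOBM$M_NORESWAIT,
                                 &irp->irp$l_svapte);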

Device drivers that call MMG_STD$IOLOCK directly will need to examine their use of the returned values and might need to call the IOC_STD$FILL_DIOBM routine.

2.2.3 Impact of MMG_STD$SVAPTECHK Changes

Prior to OpenVMS Alpha Version 7.0, the MMG_STD$SVAPTECHK and MMG$SVAPTECHK routines computed a 32-bit svapte for either a process or system space address. As of OpenVMS Alpha Version 7.0, these routines are restricted to an S0/S1 system space address and no longer accept an address in P0/P1 space. The MMG_STD$SVAPTECHK and MMG$SVAPTECHK routines declare a bugcheck for an input address in P0/P1 space. These routines return a 32-bit system virtual address through the SPT window for an input address in S0/S1 space.

The MMG_STD$SVAPTECHK and MMG$SVAPTECHK routines are used by a number of OpenVMS Alpha device drivers and privileged components. In most instances, no source changes are required because the input address is in nonpaged pool.

The 64-bit process-private virtual address of the level 3 PTE that maps a P0/P1 virtual address can be obtained using the new PTE_VA macro. Unfortunately, this macro is not a general solution because it does not address the cross-process PTE access problem. Therefore, the necessary source changes depend on the manner in which the svapte output from MMG_STD$SVAPTECHK is used.

The INIT_CRAM routine uses the MMG$SVAPTECHK routine in its computation of the physical address of the hardware I/O mailbox structure within a CRAM that is in P0/P1 space. If you need to obtain a physical address, use the new IOC_STD$VA_TO_PA routine.

If you call MMG$SVAPTECHK and IOC$SVAPTE_TO_PA, use the new IOC_STD$VA_TO_PA routine instead.

The PTE address in dcb$l_svapte must be expressible using 32 bits and must be valid regardless of process context. Fortunately, the caller's address is within the buffer that was locked down earlier in the CONV_TO_DIO routine via a call to EXE_STD$WRITELOCK, and the EXE_STD$WRITELOCK routine derived a value for the irp$l_svapte cell using the DIOBM in the IRP. Therefore, instead of calling the MMG$SVAPTECHK routine, the BUILD_DCB routine has been changed to call the new routine EXE_STD$SVAPTE_IN_BUF, which computes a value for the dcb$l_svapte cell based on the caller's address, the original buffer address in the irp$l_qio_p1 cell, and the address in the irp$l_svapte cell.
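
The computation performed by EXE_STD$SVAPTE_IN_BUF amounts to the following sketch, assuming an 8 KB page size and 8-byte PTEs; the IRP and DCB cells are as named above, while page_index and caller_va are illustrative:

    /* Whole pages between the start of the locked buffer and the caller's
       address, then scale by the PTE size to index into the locked PTEs. */
    page_index = ((unsigned int) caller_va - irp->irp$l_qio_p1) >> 13;  /* 8 KB pages     */
    dcb->dcb$l_svapte = irp->irp$l_svapte + (page_index * 8);           /* 8-byte PTEs    */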

2.2.4 Impact of PFN Database Entry Changes

There are changes to the use of the PFN database entry cells containing the page reference count and back link pointer.

For more information, see Section 2.3.6.

2.2.5 Impact of IRP Changes

All source code references to the irp$l_ast, irp$l_astprm, and irp$l_iosb cells have been changed. These IRP cells were removed and replaced by new cells.

For more information, see Appendix A.

Note

2 For performance reasons, the common case that can be handled by the PTE vector in the embedded DIOBM may be duplicated in line to avoid the routine call.

2.3 General Memory Management Infrastructure Changes

This section describes OpenVMS Alpha Version 7.0 changes to the memory management subsystem that might affect privileged-code applications.

For complete information about OpenVMS Alpha support for 64-bit addresses, see the OpenVMS Alpha Guide to 64-Bit Addressing and VLM Features.3

2.3.1 Location of Process Page Tables

The process page tables no longer reside in the balance slot. Each process references its own page tables within page table space using 64-bit pointers.

Three macros (located in VMS_MACROS.H in SYS$LIBRARY:SYS$LIB_C.TLB) are available to obtain the address of a PTE in page table space (a usage sketch follows the list):

  • PTE_VA --- Returns the level 3 PTE address of the input VA
  • L2PTE_VA --- Returns the level 2 PTE address of the input VA
  • L1PTE_VA --- Returns the level 1 PTE address of the input VA
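
For example, code running in process context might obtain the level 3 PTE address for a P0 buffer address as in the following sketch; VMS_MACROS.H comes from SYS$LIBRARY:SYS$LIB_C.TLB as noted above, and the declaration of pte_va is illustrative:

    #include <vms_macros.h>    /* PTE_VA, L2PTE_VA, L1PTE_VA */

    /* 64-bit page-table-space address of the level 3 PTE that maps buf_ptr. */
    unsigned __int64 pte_va = (unsigned __int64) PTE_VA (buf_ptr);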

Two macros (located in SYS$LIBRARY:LIB.MLB) are available to map and unmap a PTE in another process's page table space:

  • MAP_PTE --- Returns the address of a PTE through the system space window.
  • UNMAP_PTE --- Clears the mapping of a PTE through the system space window.

Note that MAP_PTE and UNMAP_PTE require the caller to hold the MMG spinlock across the use of these macros, and that UNMAP_PTE must be invoked before another MAP_PTE can be issued.

2.3.2 Interpretation of Global and Process Section Table Index

As of OpenVMS Alpha Version 7.0, the global and process section table indexes, stored primarily in the PHD and PTEs, have different meanings. The section table index is now a positive index into the array of section table entries found in the process section table or in the global section table. The first section table index in both tables is now 1.

To obtain the address of a given section table entry, do the following (a short sketch in C follows the steps):

  1. Add the PHD address to the value in PHD$L_PST_BASE_OFFSET.
  2. Multiply the section table index by SEC$C_LENGTH.
  3. Subtract the result of Step 2 from the result of Step 1.
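
Expressed as a sketch, with the PHD cell and SEC$C_LENGTH constant as given in the steps and the pointer declarations illustrative:

    /* The section table grows downward from the base, so the entry address
       is the base minus the scaled index. */
    char *pst_base = (char *) phd + phd->phd$l_pst_base_offset;     /* step 1        */
    ste = (void *) (pst_base - (section_index * SEC$C_LENGTH));     /* steps 2 and 3 */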

Note

3 This manual has been archived but is available on the OpenVMS Documentation CD-ROM. This information has also been included in the OpenVMS Programming Concepts Manual, Volume I.

