

Upgrading Privileged-Code Applications on OpenVMS Alpha and OpenVMS I64 Systems




Chapter 6
Kernel Threads Process Structure

This chapter describes the components that make up a kernel threads process.

For more information about kernel threads features, see the OpenVMS Alpha Version 7.0 Bookreader version of the OpenVMS Programming Concepts Manual.

6.1 Process Control Blocks (PCBs) and Process Headers (PHDs)

Two primary data structures exist in the OpenVMS executive that describe the context of a process:

  • Software process control block (PCB)
  • Process header (PHD)

The PCB contains fields that identify the process to the system. It holds the context that pertains to quotas and limits, scheduling state, privileges, AST queues, and identifiers. In general, any information that must be resident at all times is in the PCB; therefore, the PCB is allocated from nonpaged pool.

The PHD contains fields that pertain to a process's virtual address space. It includes the working set list and the process section table, as well as the hardware process control block (HWPCB) and a floating-point register save area. The HWPCB contains the hardware execution context of the process. The PHD is allocated as part of a balance set slot, and it can be outswapped.
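The split between resident and swappable process context can be pictured with a pair of simplified C structures. This is only a sketch; the field names, types, and sizes below are illustrative and do not match the real OpenVMS executive definitions (in particular, the 128-byte HWPCB size is an assumption).

    #include <stdint.h>

    /* Illustrative sketch only; not the real OpenVMS structure definitions. */

    /* Software PCB: allocated from nonpaged pool, resident at all times.    */
    typedef struct pcb_sketch {
        uint32_t quota[8];         /* quotas and limits                      */
        uint32_t sched_state;      /* scheduling state                       */
        uint64_t priv_mask;        /* privileges                             */
        void    *astque[4];        /* AST queue listheads, one per mode      */
        uint32_t pid;              /* process identification                 */
    } PCB_SKETCH;

    /* PHD: allocated in a balance set slot; may be outswapped.              */
    typedef struct phd_sketch {
        void    *wsl;              /* working set list                       */
        void    *pst;              /* process section table                  */
        uint8_t  hwpcb[128];       /* hardware PCB (execution context);
                                      the 128-byte size is an assumption     */
        uint8_t  fpr_save[256];    /* floating-point register save area      */
    } PHD_SKETCH;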

6.1.1 Effect of a Multithreaded Process on the PCB and PHD

In a multithreaded process, the multiple threads of execution share the same address space but have some independent software and hardware context. This change affects the PCB and PHD structures and any code that references them.

Before the implementation of kernel threads, the PCB contained much context that was per process. With the introduction of multiple threads of execution, much of that context becomes per thread. To accommodate per-thread context, a new data structure, the kernel thread block (KTB), is created, and the per-thread context is removed from the PCB. The PCB continues to contain context common to all threads, such as quotas and limits. The new per-kernel thread structure contains the scheduling state, priority, and AST queues.

The PHD contains the HWPCB, which gives a process its single execution context. The HWPCB remains in the PHD; it is used by a process when the process is first created. This execution context is also called the initial thread. A single-threaded process has only this one execution context. Because all threads in a process share the same address space, the PHD continues to describe the entire virtual memory layout of the process.

A new structure, the floating-point registers and execution data (FRED) block, contains the hardware context for newly created kernel threads.

6.2 Kernel Thread Blocks (KTBs)

The kernel thread block (KTB) is a new per-kernel thread data structure. The KTB contains all per-thread context moved from the PCB. The KTB is the basic unit of scheduling, a role previously performed by the PCB, and is the data structure placed in the scheduling state queues. Since the KTB is the logical extension of the PCB, the SCHED spinlock synchronizes access to the KTB and the PCB.
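Because the SCHED spinlock protects both the PCB and the KTB, privileged code that reads or modifies scheduling-related KTB fields must hold that spinlock. The following C fragment illustrates only the locking discipline; sched_lock(), sched_unlock(), and the KTB layout shown are hypothetical stand-ins, not real OpenVMS interfaces or definitions.

    /* Hypothetical illustration of the locking rule described above.       */
    typedef struct ktb_sketch {
        int sched_state;           /* per-thread scheduling state           */
        int priority;              /* per-thread scheduling priority        */
    } KTB_SKETCH;

    static void sched_lock(void)   { /* acquire the SCHED spinlock here */ }
    static void sched_unlock(void) { /* release the SCHED spinlock here */ }

    /* Change a kernel thread's priority while holding the SCHED spinlock,
     * which synchronizes access to both the PCB and the KTB.               */
    static void set_thread_priority(KTB_SKETCH *ktb, int new_priority)
    {
        sched_lock();
        ktb->priority = new_priority;
        sched_unlock();
    }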

Typically, the number of KTBs a multithreaded process has matches the number of CPUs on the system. In practice, the number of KTBs is limited by the value of the system parameter MULTITHREAD. If MULTITHREAD is zero, OpenVMS kernel threads support is disabled. With kernel threads disabled, user-level threading is still possible with DECthreads; the environment is identical to the OpenVMS environment prior to the release that implements kernel threads. If MULTITHREAD is nonzero, its value is the maximum number of execution contexts or kernel threads that a process can own, including the initial one.
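A minimal sketch of the MULTITHREAD semantics described above follows. It assumes the parameter's value has already been copied into a local variable; the variable and function names are hypothetical and are not the real system data cells or interfaces.

    #include <stdbool.h>

    /* multithread_param is a hypothetical copy of the MULTITHREAD SYSGEN
     * parameter; it is not the real system data cell.                      */

    /* Kernel threads support is disabled when MULTITHREAD is zero;
     * user-level DECthreads threading remains available.                   */
    static bool kernel_threads_enabled(unsigned int multithread_param)
    {
        return multithread_param != 0;
    }

    /* A nonzero MULTITHREAD value is the maximum number of kernel threads
     * a process may own, including the initial thread.                     */
    static bool can_create_kernel_thread(unsigned int multithread_param,
                                         unsigned int current_kt_count)
    {
        return multithread_param != 0 &&
               current_kt_count < multithread_param;
    }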

In reality, the KTB is not a structure separate from the PCB. Both the PCB and KTB are defined as sparse structures. The fields of the PCB that move to the KTB retain their original PCB offsets in the KTB; in the PCB, these fields are unused. In effect, if the two structures are overlaid, the result is the PCB as it currently exists with the new fields appended at the end. The PCB and KTB for the initial thread occupy the same block of nonpaged pool; therefore, the KTB address for the initial thread is the same as the PCB address.
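The sparse-structure relationship can be illustrated with offsets: a field that moved from the PCB to the KTB keeps its byte offset, and the initial thread's KTB address equals its PCB address. The following C sketch uses hypothetical layouts and offsetof checks to show the idea; it is not the real executive layout.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical layouts; not the real OpenVMS PCB/KTB definitions.      */
    typedef struct pcb_sketch {
        uint64_t quota;           /* per-process: stays in the PCB          */
        uint64_t unused_sched;    /* was scheduling state; now unused       */
        uint64_t ktbvec;          /* new per-process field                  */
    } PCB_SKETCH;

    typedef struct ktb_sketch {
        uint64_t unused_quota;    /* per-process field, unused in the KTB   */
        uint64_t sched_state;     /* per-thread: moved here from the PCB    */
        uint64_t pcb;             /* new per-thread field (KTB$L_PCB)       */
    } KTB_SKETCH;

    int main(void)
    {
        /* A moved field keeps its original PCB offset in the KTB.          */
        assert(offsetof(PCB_SKETCH, unused_sched) ==
               offsetof(KTB_SKETCH, sched_state));

        /* The initial thread's PCB and KTB occupy the same pool block,
         * so the two addresses are identical.                              */
        PCB_SKETCH initial;
        KTB_SKETCH *initial_ktb = (KTB_SKETCH *)&initial;
        assert((void *)initial_ktb == (void *)&initial);
        return 0;
    }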

6.2.1 KTB Vector

When a process becomes multithreaded, a vector similar to the PCB vector is created in pool. This vector contains the list of pool addresses for the kernel thread blocks in use by the process. The KTB vector entries are reused as kernel threads are created and deleted. An unused entry contains a zero. The vector entry number is used as a kernel thread ID. The first entry always contains the address of the KTB for the initial thread, which is by definition kernel thread ID zero. The kernel thread ID is used to build unique PIDs for the individual kernel threads. Section 6.3.1 describes PID changes for kernel threads.

To implement these changes, the following four new fields have been added to the PCB:

  • PCB$L_KTBVEC
  • PCB$L_INITIAL_KTB
  • PCB$L_KT_COUNT
  • PCB$L_KT_HIGH

The PCB$L_INITIAL_KTB field actually overlays the new KTB$L_PCB field. For a single-threaded process, PCB$L_KTBVEC is initialized to contain the address of PCB$L_INITIAL_KTB, and PCB$L_INITIAL_KTB always contains the address of the initial thread's KTB. As a process transitions between single-threaded and multithreaded operation, PCB$L_KTBVEC is updated to point to either the KTB vector in pool or PCB$L_INITIAL_KTB.

The PCB$L_KT_COUNT field counts the valid entries in the KTB vector. The PCB$L_KT_HIGH field gives the highest vector entry number in use.
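The lookup path that these four fields make possible can be sketched in C. For a single-threaded process, PCB$L_KTBVEC points at PCB$L_INITIAL_KTB, so the same code works whether or not a KTB vector exists in pool. The structure layout and helper below are illustrative only; they do not match the real PCB definition.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch; the field layout does not match the real PCB.   */
    typedef struct ktb_sketch KTB_SKETCH;

    typedef struct pcb_sketch {
        KTB_SKETCH **pcb_l_ktbvec;      /* -> KTB vector, or -> initial KTB
                                           field for a single-threaded
                                           process                          */
        KTB_SKETCH  *pcb_l_initial_ktb; /* KTB of the initial thread (ID 0)  */
        uint32_t     pcb_l_kt_count;    /* valid entries in the KTB vector   */
        uint32_t     pcb_l_kt_high;     /* highest vector entry in use       */
    } PCB_SKETCH;

    /* Return the KTB for a given kernel thread ID, or NULL if that ID is
     * not in use.  An unused vector entry contains zero (NULL).            */
    static KTB_SKETCH *ktb_from_id(PCB_SKETCH *pcb, uint32_t kt_id)
    {
        if (kt_id > pcb->pcb_l_kt_high)
            return NULL;
        return pcb->pcb_l_ktbvec[kt_id]; /* entry 0 is the initial thread    */
    }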

6.2.2 Floating-Point Registers and Execution Data Blocks (FREDs)

To allow for multiple execution contexts, not only are additional KTBs required to maintain the software context, but additional HWPCBs must also be created to maintain the hardware context. Each HWPCB is allocated with a 256-byte block for preserving the contents of the floating-point registers across context switches. Another 128 bytes is allocated for per-kernel thread data; presently, only a clone of the PHD$L_FLAGS2 field is defined.

The combined structure that contains the HWPCB, floating-point register save area, and per-kernel thread data is called the floating-point registers and execution data (FRED) block. It is 512 bytes in length. These structures reside in the process's balance set slot. This allows the FREDs to be outswapped with the process header. On the first page allocated for FRED blocks, the first 512 bytes are reserved for the inner-mode semaphore.
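Using the sizes given above (256 bytes of floating-point save area and 128 bytes of per-kernel thread data), and assuming the HWPCB itself occupies 128 bytes, the 512-byte FRED block can be pictured as follows. This is an illustrative layout, not the executive's definition.

    #include <stdint.h>

    /* Illustrative FRED layout; the 128-byte HWPCB size is an assumption.  */
    typedef struct fred_sketch {
        uint8_t hwpcb[128];        /* hardware process control block        */
        uint8_t fpr_save[256];     /* floating-point register save area     */
        uint8_t per_kt_data[128];  /* per-kernel thread data (currently a
                                      clone of PHD$L_FLAGS2)                */
    } FRED_SKETCH;

    /* The combined structure is 512 bytes in length. */
    _Static_assert(sizeof(FRED_SKETCH) == 512, "FRED is 512 bytes");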

6.2.3 Kernel Threads Region

Much process context resides in P1 space in the form of data cells and the process stacks. Some of these data cells need to be per-kernel thread, as do the stacks. By calling the appropriate system service, a kernel thread region in P1 space is initialized to contain the per-kernel thread data cells and stacks. The region begins at the boundary between P0 and P1 space, at address 40000000 (hex), and grows toward higher addresses and the initial thread's user stack. The region is divided into per-kernel thread areas; each area contains pages for the data cells and the four stacks.
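Because the region is divided into fixed per-kernel thread areas starting at the P0/P1 boundary, an area's base address can be computed from a thread's position in the region. The sketch below assumes a hypothetical, fixed per-thread area size; the actual size (data-cell pages plus the four stacks) is not given here.

    #include <stdint.h>

    /* Base of P1 space, where the kernel threads region begins.            */
    #define KT_REGION_BASE   0x40000000u

    /* Hypothetical per-thread area size (data-cell pages plus four stacks);
     * the real size is not specified in the text.                          */
    #define KT_AREA_SIZE     0x20000u

    /* The region grows toward higher addresses, one area per kernel
     * thread, so the Nth area starts N area-sizes above the region base.   */
    static uintptr_t kt_area_base(unsigned int area_index)
    {
        return (uintptr_t)KT_REGION_BASE +
               (uintptr_t)area_index * KT_AREA_SIZE;
    }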

6.2.4 Per-Kernel Thread Stacks

A process is created with four stacks, one for each access mode. All four stacks are located in P1 space. Stack sizes are either fixed, determined by a SYSGEN parameter, or expandable. The KSTACKPAGES parameter controls the size of the kernel stack and continues to control all kernel stack sizes, including those created for new execution contexts. The executive stack is fixed at two pages; with the kernel threads implementation, the executive stack for new execution contexts remains two pages in size. The supervisor stack is fixed at four pages; with the kernel threads implementation, the supervisor stack for new execution contexts is reduced to two pages in size.

The user stack is a more complex case. OpenVMS allocates P1 space from higher to lower addresses, and the user stack is placed after the lowest P1 space address allocated, which allows it to expand on demand toward P0 space. With the introduction of multiple sets of stacks, the locations of these stacks impose a limit on the size of each area in which they can reside. With the implementation of kernel threads, the user stack is no longer boundless. The initial user stack remains semiboundless: it still grows toward P0 space, but its limit is the per-kernel thread region instead of P0 space.
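The stack sizes for a new kernel thread's per-thread area, as described above, can be summarized in a small C sketch. The function and type names are illustrative; kstackpages stands for the value of the KSTACKPAGES SYSGEN parameter named in the text.

    /* Fixed stack sizes, in pages, for a new kernel thread.  The user
     * stack is not listed: it grows within the per-kernel thread area
     * rather than having a fixed page count.                               */
    typedef enum { KERNEL_STACK, EXEC_STACK, SUPER_STACK } stack_kind;

    static unsigned int new_thread_stack_pages(stack_kind kind,
                                               unsigned int kstackpages)
    {
        switch (kind) {
        case KERNEL_STACK: return kstackpages; /* set by KSTACKPAGES        */
        case EXEC_STACK:   return 2;           /* fixed at two pages        */
        case SUPER_STACK:  return 2;           /* reduced from four to two  */
        }
        return 0;                              /* unreachable               */
    }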

