gtpc3m0t | Concepts and Structures |
Tracing the flow of a message is a pragmatic way to introduce conventions and terminology used by the TPF system and to make the abstract control concepts concrete. Figure 17 adds detail to the control structure diagram in Figure 16; in fact, the boxes and solid lines in Figure 17 reproduce that diagram. Additional conventions used in Figure 17 follow:
The principal idea of the TPF system design is to place the system into a state where it is requesting message traffic (in the form of input messages) from the communications facilities; updates to the database and outgoing messages (responses to agents) result from application processing.
The CPU loop inspects the cross, ready, input, and deferred work lists and checks status indicators. Among other things, checking certain status indicators causes input messages to arrive in the TPF system.
An input message causes an Entry to be created and application program segments to be dispatched for processing the input message. The Entry makes requests of the control program for system services such as:
Many of the system services requested by an Entry have other system services implied. For example, if application program segment X calls application program segment Y, and Y is not already in main storage, then the system must:
The TPF system processing shown in Figure 17 consists of the following steps:
In terms of sheer logic, the purpose of the control program is to reach the CPU loop with nothing to do (that is, for the CPU loop to process empty work lists). If this can be accomplished with the end users receiving their responses in a timely fashion, then, relative to the end user, the system is performing as expected. The trade-offs in a performance-oriented system are very sensitive. On one hand, if the system is actually in the CPU loop with nothing to do, then there is excess computing power. However, if the computing power is overutilized, response time increases. The TPF system facility called data collection and reduction is used to tune the system and cope with the sensitive balance between utilization of resources and response to the users.
In any event, the execution of an application program segment (an Entry) results in requests for system services (through SVC interrupts) that can cause a delay. During this delay time, the CPU loop can give control of the I-stream engine to a different Entry.
Random (non-SVC) interrupts can be received that have nothing to do with the Entry currently in control. In the TPF system, the random interruption of an Entry does not cause the control program to switch to the processing of a different Entry. Instead, the control program places the information about any interrupts in a queue and returns control to the Entry that was interrupted. This is a significant attribute of the TPF system.
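The deferral of random interrupts described above can be sketched as a toy model. This is an illustrative sketch, not TPF code; the `InterruptQueue` class and its method names are invented for this example.

```python
from collections import deque

class InterruptQueue:
    """Hypothetical model: random interrupt information is queued rather
    than acted on immediately, so the interrupted Entry keeps control of
    the I-stream engine."""
    def __init__(self):
        self._pending = deque()

    def on_interrupt(self, info, current_entry):
        # Record the interrupt and return control to the same Entry;
        # the control program does NOT switch to a different Entry.
        self._pending.append(info)
        return current_entry

    def drain(self):
        # The queued interrupt information is processed later, for
        # example when the CPU loop next regains control.
        items = list(self._pending)
        self._pending.clear()
        return items

q = InterruptQueue()
entry = "ENTRY-A"
assert q.on_interrupt("I/O complete", entry) is entry  # same Entry resumes
assert q.on_interrupt("timer", entry) is entry
assert q.drain() == ["I/O complete", "timer"]
```

The design point the sketch captures is that an interrupt changes queue state, never the identity of the Entry in control.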
So, a basic premise of the TPF system design is that no processing interval of an Entry is ever assumed to require the instruction execution capability of an I-stream engine for more than a relatively small amount of time. This fundamental design decision eliminates the need for the complex time-slicing algorithms found in many time-sharing systems. It also makes it difficult (but not impossible) to write compute-bound applications to run under the TPF system.
Several processing intervals are usually required to accomplish the processing required by an Entry. A processing interval is measured, for example, from the time an Entry receives control until the Entry gives up control, either to wait for an I/O completion or because it has finished processing. I/O wait time does not count as processing time.
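As a toy illustration of this accounting (the timeline and the millisecond figures are invented, not TPF measurements), the processing time of an Entry is the sum of its processing intervals, while the I/O waits between them do not count:

```python
# Hypothetical timeline for one Entry: (kind, milliseconds).
# Only "cpu" intervals count as processing time; "io_wait" does not.
timeline = [("cpu", 3), ("io_wait", 40), ("cpu", 2), ("io_wait", 25), ("cpu", 1)]

processing_ms = sum(ms for kind, ms in timeline if kind == "cpu")
elapsed_ms = sum(ms for _, ms in timeline)

assert processing_ms == 6   # three short processing intervals
assert elapsed_ms == 71     # elapsed time is dominated by I/O wait
```

The imbalance between the two totals is exactly what lets the CPU loop dispatch other Entries during the waits.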
Figure 17. Normal TPF System Execution Overview
The entire process of initializing the TPF system is indicated by the box in Figure 17 labeled Initializer. Initializing the TPF system includes:
The TPF system uses a system program called the CPU loop to select an Entry for execution by an I-stream engine. Sometimes the term system task dispatcher is used rather than CPU loop. In a central processing complex (CPC) with multiple I-stream engines, the CPU loop (operating within an I-stream engine) continuously inspects its cross list to which another I-stream engine may have added work.
Although the phrase system task dispatcher is sometimes used in TPF documentation, the term CPU loop is pervasive. Task is a term that has the connotation of an application process found in the IBM MVS operating system, and a TPF Entry is structured differently from an MVS task. A better name than system task dispatcher would be system entry dispatcher or simply dispatcher. In the face of this dilemma, the term CPU loop is used in this publication.
The order of processing priority is determined by the sequence in which the CPU loop interrogates queues that identify work items to be dispatched. The term list refers to the CPU loop queues. Unfortunately, this term is also used to refer to tables that are not queues. As a result of the long history of the TPF system, the vernacular phrase CPU loop list is also used in this publication to refer to a queue used to dispatch work to an I-stream engine.
The CPU loop lists, in order of processing priority, are:
Through linkage conventions, these queues point to all the input necessary for an application to initially process or continue processing an input message.
In addition to the four main CPU loop lists there are two secondary lists:
The VCT and suspend lists are checked once every pass through all items on the input list.
In principle, the CPU loop is a set of programs with pointers to unique processing work lists (CPU loop lists) depending on which I-stream engine the CPU loop is running. All the lists, except the cross list, are private to an I-stream engine. The cross list is used to move work between I-stream engines. The CPU loop lists are located by pointers anchored in Page 0 (for each I-stream engine).
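A minimal sketch of this per-engine arrangement follows. The class and method names are hypothetical, and a Python lock stands in for the serialization the real system performs in hardware; in TPF the lists are anchored in Page 0 of each I-stream engine.

```python
import threading
from collections import deque

class IStreamEngine:
    """Toy model: each engine owns private ready/input lists; only the
    cross list can be touched by another engine, so it alone is
    serialized."""
    def __init__(self, name):
        self.name = name
        self.ready = deque()       # private to this engine
        self.input = deque()       # private to this engine
        self.cross = deque()       # other engines may add work here
        self._cross_lock = threading.Lock()

    def send_work(self, item):
        # Called from a DIFFERENT engine to move work to this one.
        with self._cross_lock:
            self.cross.append(item)

    def take_cross_work(self):
        # The CPU loop on this engine inspects its own cross list.
        with self._cross_lock:
            return self.cross.popleft() if self.cross else None

engine2 = IStreamEngine("engine-2")
engine2.send_work("ECB-1")   # conceptually issued by another engine
assert engine2.take_cross_work() == "ECB-1"
assert engine2.take_cross_work() is None
```

Keeping all but one list private is the design choice that lets each CPU loop scan its own lists without interference from other engines.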
An Entry and an item on a CPU loop list, although closely related, are not the same thing. An item on a CPU loop list points to a system service routine that must be invoked before starting or returning to an Entry. This distinction is necessary to describe some important details required to understand the TPF structure. When you get to the detailed system documentation, an entry on a list may not be distinguished from a TPF application process called an Entry.
Control program components associated with message processing place items on the CPU loop lists:
In a uniprocessor environment, Multi-Processor Interconnect Facility (MPIF) input routines also place items on the main I-stream engine input list. In a multiprocessor environment, however, the MPIF input routines run only on I-stream engine 2 (known as the MPIF I-stream engine), and MPIF input items are placed on the input list for this I-stream engine. MPIF I/O is considered to be high priority and, therefore, is handled by an I-stream engine that is different from the one that handles all the other non-DASD-related I/O for the TPF system.
Whenever the ready or input lists are not empty, the CPU loop merely selects the work identified by the first item on one of these lists, giving priority to the ready list. (The deferred, VCT, and suspend lists are not needed to trace a normal message through the system and are not emphasized in this overview.)
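The selection rule just described can be sketched as a simple priority scan. The structure is hypothetical: the real CPU loop also services the cross, deferred, VCT, and suspend lists, which are omitted here.

```python
from collections import deque

ready = deque()
input_list = deque()

def select_next():
    """Pick the first item from the highest-priority non-empty list,
    giving the ready list priority over the input list."""
    for work_list in (ready, input_list):
        if work_list:
            return work_list.popleft()
    return None  # nothing to do: the CPU loop keeps scanning

input_list.append("new message M1")
ready.append("resumed entry E1")
assert select_next() == "resumed entry E1"   # ready list is served first
assert select_next() == "new message M1"
assert select_next() is None
```

Note that work already in progress (the ready list) always preempts new input, which keeps in-flight messages moving toward completion.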
OPZERO refers to a collection of system programs associated with communications control in the TPF system. Essentially, there is one OPZERO program per type of communication facility (where a type is loosely equivalent to a communications protocol).
OPZERO creates an entry control block (ECB) and associates the input message with the ECB. OPZERO then passes control (and the ECB) to COMM SOURCE (Communications Source Program) to continue input message processing.
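The hand-off just described can be modeled as follows. The ECB fields and function names here are invented for illustration; a real entry control block is a fixed-format storage block, not a Python object.

```python
from dataclasses import dataclass, field
from itertools import count

_ecb_ids = count(1)

@dataclass
class ECB:
    """Toy entry control block: the anchor that ties an input message
    to the Entry that will process it."""
    ecb_id: int
    message: bytes
    trace: list = field(default_factory=list)

def comm_source(ecb: ECB) -> ECB:
    # Stand-in for COMM SOURCE, which continues input message processing.
    ecb.trace.append("COMM SOURCE")
    return ecb

def opzero(raw_message: bytes) -> ECB:
    # OPZERO creates an ECB and associates the input message with it...
    ecb = ECB(ecb_id=next(_ecb_ids), message=raw_message)
    ecb.trace.append("OPZERO")
    # ...then passes control (and the ECB) to COMM SOURCE.
    return comm_source(ecb)

ecb = opzero(b"AVAIL JFKLAX 01MAY")
assert ecb.trace == ["OPZERO", "COMM SOURCE"]
assert ecb.message.startswith(b"AVAIL")
```

The point of the sketch is the ordering: the ECB exists before COMM SOURCE runs, and it travels with the message from then on.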
In the TPF system, OPZERO is functionally considered to be part of communications control. There is more detail about OPZERO in Data Communications.
COMM SOURCE refers to a collection of system programs that transform input messages from their individual protocol-dependent formats into a common system format that is recognizable by applications. This relieves the application from any awareness of the various protocols in the TPF system.
COMM SOURCE uses system tables to determine the application program segment to pass control to for the continuation of input message processing.
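This table-driven routing can be sketched like so. The table contents, application names, and segment names are all invented; the real system tables are maintained by system generation and are considerably richer.

```python
# Hypothetical routing table: COMM SOURCE maps an application name
# carried in the (now protocol-independent) message to the program
# segment that continues processing it.
ROUTING_TABLE = {
    "RES0": "segment_reservations",
    "FAR0": "segment_fares",
}

def route(application_name: str) -> str:
    try:
        return ROUTING_TABLE[application_name]
    except KeyError:
        # An unknown destination would be handled by an error path.
        return "segment_unsolicited_message_error"

assert route("RES0") == "segment_reservations"
assert route("XXXX") == "segment_unsolicited_message_error"
```

Because the lookup happens after protocol conversion, the application segment that receives control never needs to know which communication facility delivered the message.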
In the TPF system, COMM SOURCE is considered to be functionally part of communications control. There is more detail about COMM SOURCE in Data Communications.