System Generation

TPF System Loading

Following the assembly of all the system and application programs, and the generation of all the system and application data records, the next step is to load the online files.

Program/Keypoint Loading

The programs and keypoint records are loaded to the online system packs in a two-step process that requires a loader general file and the online system packs. SIP produces JCL to initialize and format the loader general file. The required number of online files must be initialized and formatted before starting to load the online system. The system loading steps are as follows:

  1. The control program, online keypoints, file and main storage resident program segments, and the startup program records are loaded to the loader general file by the offline system loader under control of MVS as shown in Figure 5.
  2. All of the program segments are loaded from the loader general file to the online packs by IPLing the loader general file; this step runs under the control of the TPF system.

Note:
The term startup programs refers to those real-time program segments that are really data records and are normally loaded only once, at initial restart. These records are typically modified by the system during execution and should not be reloaded unless the user requires that they be placed in their initial status. Examples of these types of records are the application program name records of the message router.

There is a separate loader general file for each subsystem, including the BSS, in an MDBF environment.

The loading of programs and the correspondence of program names to system storage are handled by an allocator and an offline loader.

Figure 5. TPF Program/Keypoint Loading


Loader General File

The loader general file is initialized and formatted by the SIP process. A separate job using the system loader offline segment is then executed under MVS control to create a general file that can be IPLed. The loader general file is created through a rudimentary language that represents the input to the offline loader. The statements, or control cards, of this language are called the load deck.

The relationships between the system loader offline segment, the system loader online segment, and the restart areas on both the loader general file and the online modules are shown in Figure 7 and are listed as follows:

  1. The offline segment, using the load deck as input, creates a restart area on the loader general file. This restart area is built from data found in MVS partitioned data sets. The restart procedure may be invoked from the loader general file by mounting the disk pack and pressing the IPL button. The online segment, although ECB-controlled, is not allocated through the TPF system allocator.
  2. The offline segment, using the load deck and the object library as input, creates data records including the data that is to be placed into the restart area of the online modules.

    An existing user must, of course, reconcile the keypoints on the loader general file with those on the existing system. This is done either by not loading keypoints on the loader general file, by modifying the keypoints on the loader general file, or a combination of the two.

  3. The loader general file is mounted and the IPL program invoked. The core image records of the TPF core resident control program and the system loader online segment are placed into main storage by the IPL program. The online segment is ultimately given control and executes as an ECB-controlled program. This is an initial restart.
  4. The online segment loads the online modules with the input records found on the loader general file. This loading includes the keypoint and core image portions of the online prime restart area. (The backup copies of the keypoints are created during normal online execution.)
  5. The online segment also places the file resident programs on the online modules, spreading them across all modules in the record type.
  6. The online segment exits to the restart scheduler. When the procedure is completed, and the key system data is loaded through pilot tapes, the system can be IPLed from the online modules. The IPL from the prime restart area is shown in Figure 7. The online IPL is necessary because the content of the keypoint and core image portions of the online module restart area is not the same as the corresponding portions of the loader general file. The IPL program must be invoked to place the different information into main storage. In fact, different values in CTKX and keypoints V and A make the IPL program react differently during an online restart than during an initial restart. Additionally, the online IPL is necessary to activate the interprocessor communications facility (IPC) of the loosely coupled facility.

In summary, an initial restart may be characterized as a system restart, done from the loader general file, which invokes the system loader online segment. An online restart is done from the IPL prime module and does not invoke the system loader online segment.

Figure 6. Allocation of TPF Online Modules


System Restart

The TPF control program is restarted by using the IPL option on the operator control panel of the hardware console (known as a hard IPL) or by using the ZRIPL command (known as a soft IPL). Whenever TPF is restarted, an initial copy of the control program is loaded from the online system packs to main storage. The keypoint records are loaded from the IPLed device and are used to initialize various fields and tables in main storage. All working storage is initialized, with all core blocks appearing on the uncommitted storage lists. When initialization is complete, a message appears on the operator console.

When the TPF system (through catastrophic error recovery) or the operator starts a soft IPL with the ZRIPL command, the IPL program decides whether or not to reuse the main storage copy of some system tables and records. This decision is based on data in the IPL program's fast recovery table (FRT). (Refer to the FR0RT data macro.) This table contains information on the system records that may be saved across a software IPL, and is used to reduce the amount of data that must be loaded during system recovery to NORM state. The table contains validity information as well as the start and end addresses of the FRT records. Also, on a soft IPL, the VFA area in main storage is reused if the data is still valid.
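
The FRT layout itself is defined by the FR0RT data macro. The following sketch, in Python purely for illustration, models only the reuse decision just described; every field and function name in it is hypothetical and is not taken from FR0RT.

    # Conceptual model of the fast recovery table decision; names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class FastRecoveryEntry:
        name: str         # system record or table covered by this entry
        start_addr: int   # main storage start address of the saved copy
        end_addr: int     # main storage end address of the saved copy
        valid: bool       # validity indicator checked by the IPL program

    def entries_to_reload(frt, soft_ipl):
        """Return the entries whose main storage copies cannot be reused."""
        if not soft_ipl:
            return list(frt)                    # hard IPL: nothing in the FRT is reused
        return [e for e in frt if not e.valid]  # soft IPL: reload only the invalid entries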

When a hard IPL is performed, none of the records in the fast recovery table are reused. If there are no VFA delay file pending buffers, the main storage copy of VFA will not be reused. If there are delay file pending buffers, the main storage copy of VFA will be reused. This is done to ensure VFA data integrity by allowing the delay file data to be filed after the system is restarted.

Note:
If a hard IPL is performed with the CLEAR option, all of storage is cleared and there is no way the system can reuse VFA or the FRT records.

System restart consists of three components: the IPL program, the initializer program, and the restart schedule. The goal of each is as follows:

The IPL program loads the keypoints from the fixed file keypoint area into main storage. Automatic volume recognition, called disk roll-call, is invoked to assign hardware addresses to the file devices that must be premounted, and the module file status table (MFST) is initialized. The core image records are used to load main storage, including the initializer. The IPL program passes control to the initializer.

The initializer program, which was link-edited with the core resident control program by the MVS linkage editor, refers to information in the core resident control program. The initializer program (CCCTIN) performs the following functions:

The restart schedule is found in programs CTKS and CTKO. This is a sequence of ENTERs to ECB-controlled system programs that set up the system tables used for resource management and system execution. The restart schedule also ensures that devices (for example, real-time tapes) that are required for online execution are available. Many of these programs are related to the system services associated with components to be described in the following chapters. Upon completion, a message is sent to the system console.

For more information, see TPF Main Supervisor Reference.

Figure 7. Loading the TPF Restart Area


Database Loading

The concept of a TPF system state is associated with system restart and the switchover procedure. The precise meaning of the various TPF system states can only be described in the context of other system concepts, not all of which have been described. However, the general concept can be described. A system state is related to the resources managed and the functions performed by the online TPF system, such as clock management, file pool management, and communication facilities support.

The various TPF system states are related to the function and resource management that the system is willing (or able) to support. The two states most easily identified are the 1052 state and NORM state.

The computer console (1052) state means that the system accepts commands from a directly attached computer console. Many system services are not available in the 1052 state, including clock management, file pool management, and communication facilities support. NORM state means that all system services are available. The remaining states fall between the 1052 and NORM states; their precise meanings are not necessary for the current description.

See TPF Operations and TPF Main Supervisor Reference for an explanation of the various states and how to change states. What is important to understand now is that system restart has placed the system in 1052 state, that the system is essentially idle (no communications activity), and that the primary operator console is active. In addition, 3270 local terminals that are designated as alternate computer room agent sets (CRAS) may also be active if logged to the system message processor. Before the system can permit communications (application) activity by cycling to a state higher than 1052 state, the following database records must be loaded on the online files: the pilot data records and the pool file directories.

Users of the multiple database function (MDBF) of the HPO feature should interpret system as subsystem in this entire section, including Device-Independent File Addressing and File Address Formats. The term system should be replaced by subsystem because each subsystem's database of both fixed and pool records is unique and separate from that of every other subsystem, including the basic subsystem (BSS). For example, when the preceding paragraph refers to the system being in 1052 state to load the pilot tapes and pool directories, MDBF users should interpret this to mean that each subsystem's pilot tapes and pool directories should be loaded when that subsystem is in 1052 state.

In addition, any programmable communication devices must be loaded before cycling to a state that permits communications activity.

Pilot tapes can be loaded any time the system is in 1052 state. The data loader (ACPD) loads the pilot system data records to the online disk modules. A header contains the record type and record ordinal, which are used by the file address compute program (FACE) to determine the actual file address.

Note:
If you create the pilot tape with an ID of N, you can load the pilot tape when the system is in any state by using the ZSLDR command. See TPF Operations for more information about the ZSLDR command. See TPF Program Development Support Reference for more information about creating the pilot tape.
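
As a rough illustration of the role that the record type and record ordinal in the header play, the following Python sketch models a type-and-ordinal lookup. The table contents, names, and address arithmetic are invented for the example; they do not describe the real FACE table or its algorithm.

    # Illustration only: record types, counts, and addresses are invented.
    FACE_TABLE = {
        "#EXAMPLE1": {"base": 0x10000, "count": 4096},
        "#EXAMPLE2": {"base": 0x20000, "count": 65536},
    }

    def compute_file_address(record_type, ordinal):
        """Map a (record type, ordinal) pair to a file address, as FACE conceptually does."""
        entry = FACE_TABLE[record_type]
        if ordinal >= entry["count"]:
            raise ValueError("ordinal outside the record type")
        return entry["base"] + ordinal

    # The data loader reads the type and ordinal from each header and uses the
    # result to place the record at the proper location on the online modules.
    print(hex(compute_file_address("#EXAMPLE1", 17)))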

Pilot tapes are required for application data records, the processor resource ownership facility (for details see TPF Main Supervisor Reference), non-SNA communication keypoint records, and optionally, SNA communication tables.

A user converting an existing system to a new one may not be loading pilot tapes. In this case, any embedded fixed DASD addresses must be in FARF format. Pool file storage usually comprises a large portion of the application database, and pool files are also required for several system functions (the unsolicited message processor (UMP), long message transmission (LMT), and others). Pool file directories are used by the online system to control the dispensing of pool records. See TPF Database Reference for more information about pool file directories.

Device-Independent File Addressing

Device-independent file addressing in TPF applies only to DASD devices.

The objectives of device-independent file addressing are:

  1. To implement a file address format that is independent of physical device characteristics for DASD devices and future secondary storage devices.
  2. To preserve all current application interfaces to the database.
  3. To extend current data addressing facilities, principally the number of fixed file record types per system and the number of records in each type.

The objectives are met by implementing the following:

  1. A file address reference format, called FARF, is the device-independent addressing scheme used in TPF systems. FARF3 means file address reference format 3, FARF4 is format 4, FARF5 is format 5, and FARF6 is format 6. The file address reference formats are those bit configurations placed into the file address reference words (FARWs). The main distinction between reference formats 3, 4, 5 and 6 is the amount of disk storage that can be addressed by each. With FARF5, a fullword is used for addressing, while in FARF3 and FARF4, less than a fullword is used, implying addressing restrictions. With FARF6, two fullwords are used for addressing.
  2. All existing application interfaces are preserved. Fixed records are accessible through an entry to the FACS program with record type and ordinal number; FACS returns a 4-byte file address in FARF format, and a FIND/FILE macro is then issued with the FARF file address on the appropriate ECB data level. For data event control blocks (DECBs), file records are accessible by using the FAC8C or FACZC macro to obtain an 8-byte file address; a FIND/FILE macro is then issued with the 8-byte file address on the appropriate DECB. For pool records, the get file storage and release file storage (GFS/RFS) macros remain the same. All commands applicable to the database have also been preserved.
  3. For FARF3 only, data addressing has a maximum of 67 108 863 records per pool type and 268 435 456 fixed records per system. In the fixed records, there may be up to 12 252 record types, each of which may contain up to 268 435 456 records.

    FARF4 addressing has a maximum of 1G addresses (1G equals 1 073 741 824) and FARF5 addressing has a maximum of 4G addresses (4G equals 4 294 967 296), with no fixed percentage devoted to fixed records or pools. FARF6 addressing has a maximum of 64 petabytes, or PB (64 PB equals 72 057 594 037 927 936).
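
Each of the capacity figures quoted above is a power of two (or one less than a power of two), as you would expect from addresses built out of a fixed number of bits. The following lines, written in Python only for convenience, check that correspondence; they are not part of any TPF interface.

    # Documented maxima and the powers of two behind them.
    assert 2**26 - 1 == 67_108_863                # FARF3: pool records per pool type
    assert 2**28 == 268_435_456                   # FARF3: fixed records per system
    assert 2**30 == 1_073_741_824                 # FARF4: 1G addresses
    assert 2**32 == 4_294_967_296                 # FARF5: 4G addresses (a fullword)
    assert 2**56 == 72_057_594_037_927_936        # FARF6: 64 PB of addresses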

File Address Formats

TPF file address formats are described in TPF Database Reference.

Implementation

All support required for online FARF addressing is automatically created by offline generation of the FACE table and pools tables. The user's main concern is with the SIP RAMFIL statement because this forms the basis for the TPF database.

In FARF3 formatted addresses, the BAND parameter of the SIP RAMFIL macro allows the assignment of one or more subrecord types (band numbers) to a fixed record type. Users should code a unique band number, from 0 to 4095, for each 64K of fixed records, or part thereof. Note that these band numbers are physically associated with each band of 64K records in the sequence of the user's SIP input.
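
Because one band number covers at most 64K (65 536) ordinals, the number of band numbers that a record type consumes follows directly from its record count. A short worked example follows; the record counts are made up, but the arithmetic is simply the "one per 64K or part thereof" rule stated above.

    import math

    BAND_SIZE = 65_536   # 64K ordinals per band number

    def bands_needed(record_count):
        """Band numbers consumed by one fixed record type (one per 64K or part thereof)."""
        return math.ceil(record_count / BAND_SIZE)

    print(bands_needed(10_000))    # 1: less than a full band
    print(bands_needed(65_536))    # 1: exactly one full band
    print(bands_needed(65_537))    # 2: "or part thereof" forces a second band number

    # 4 096 band numbers (0-4095), each spanning 64K ordinals, account for the
    # FARF3 maximum of 268 435 456 fixed records per system quoted earlier.
    assert 4_096 * BAND_SIZE == 268_435_456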

It is important to understand that although the FARF3 format is device independent, it is band number dependent. It is for this reason that band numbers, by design, are assigned by the user rather than automatically generated by SIP. The intent is for users to maintain the same band number assignments from one TPF system generation to another. This ensures that any embedded FARF3 addresses always refer to the same logical record even though the record may have been physically relocated or even assigned a different record type.

FARF4, FARF5, and FARF6 formatted addresses use a scheme involving a universal format type (UFT) and a format type indicator (FTI). The UFT portion of an address selects a section of DASD address space and the FTI portion selects a record type within that section. FARF4 and FARF5 are UFT/FTI dependent in the same way that FARF3 is band number dependent. UFT/FTI pairs defined for FARF6 are independent from UFT/FTI pairs defined for FARF4 and FARF5.
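
Conceptually, resolving a FARF4, FARF5, or FARF6 address is a two-level selection: the UFT picks a section of the DASD address space, and the FTI picks a record type within that section. The sketch below models only that idea; the UFT and FTI values and the names they map to are invented and do not describe the real address layout.

    # Two-level UFT/FTI selection, conceptual only; all values and names are invented.
    ADDRESS_SPACE = {
        1: {"section": "example fixed-record section", "fti": {3: "#EXAMPLEA"}},
        2: {"section": "example pool section",         "fti": {1: "#EXAMPLEB"}},
    }

    def resolve(uft, fti):
        section = ADDRESS_SPACE[uft]                     # UFT selects a section of DASD space
        return section["section"], section["fti"][fti]   # FTI selects a record type within it

    print(resolve(2, 1))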

Database Reorganization

Database reorganization (DBR) allows the user to reorganize all or selected fixed and pool record types; within a record type, ordinal number ranges may be specified. All of these DBR options are input to the system through commands. See TPF Database Reference for more information.

The output phase of DBR captures the appropriate records to tape. It may be run in 1052 or NORM state.

Before running the DBR input phase, the user should perform whatever procedures are required (for example, reformatting the online files, generating a new FACE table, and reassembling the allocator).

The DBR input phase reloads the database in two steps. Fixed records appear on the capture tape (DBF) and are loaded during the general file IPL. Pool records appear on the capture tape (DBP) and are loaded once the prime module IPL is complete. Both steps are triggered by commands.

Communication Device Loading

The IBM 3705 device, which is no longer in production, is the only communication device that can be loaded by the TPF system.

Use the 3705 communication controller system support package to load the emulation program (EP) load modules to the IBM 3705 device. See TPF Data Communications Services Reference for more information.