TPF V4R1 Migration Guide: 3.1 to 4.1

Areas with Changes or New Functions in the TPF 4.1 System

Table 1 includes changes or new functions from the TPF 3.1 system to the TPF 4.1 system. The information in Table 1 is presented in alphabetic order by the area of change.

Table 1. Areas with Changes or New Functions in the TPF 4.1 System

Area with Changes or New Functions Description of the Changes or New Functions
ACF/SNA Table Generation In the TPF 4.1 system, you can write network definitions to tape or general data sets (GDS). You can load Systems Network Architecture (SNA) resource definitions without deactivating the network, and fall back to previous definitions (even if some central processing units (CPUs) have incorporated the new definitions).

See Generating the TPF 4.1 System for more information about ACF/SNA table generation.

Adjacent Link Station (ALS) Attachment New for the TPF 4.1 system, the ZNETW MOUNT command allows for the dynamic addition of Network Control Programs (NCPs) when the TPF 4.1 system is running as a PU 2.1 host node. Previously, if channel adapters were added, a software IPL was required to make these adapters available for use.
Altering Files In the TPF 4.1 system, support for the 4-byte MCHR address format was removed. Use the 7-byte MCHR address format in its place.
Automatic Tape Mounting The TPF 4.1 system supports automatic tape mounting. Alternate (ALT) tapes can be mounted on write-enabled devices as well as converted to active or standby tapes, all without operator intervention.
BEGIN Macro The BEGIN macro no longer appends a 24-byte (hexadecimal 18) header to assembler programs. Online displays of real-time programs now align with the offline assembler listings. Although this header change does not apply to C language segments, the C compiler OFFSET option adjusts the offsets in the compiled listings to line up with those on the online system.
Block Checking Mode New for the TPF 4.1 system, block checking mode is a diagnostic tool that helps flag application programs that access storage improperly. By allocating a single core block in each 4 KB frame of memory and placing that core block at the end of the frame, the TPF 4.1 system uses the dynamic address translation (DAT) facility to detect improper storage references automatically. This includes application programs that store or reference beyond the end of a core block, as well as programs that reference core blocks that were already released.

See Diagnosing Problems and Debugging for more information about block checking mode.

Branch Trace Facility The TPF 4.1 system supports the IBM ESA/370 branch trace facility.

See Diagnosing Problems and Debugging for more information about the branch trace facility.

Capture and Restore The Capture and Restore utility controls the maximum number of simultaneous captures allowed for each tape control unit, DASD control unit, DASD channel path, and tape channel path based on channel path activity. You can set these maximum values by using a command.

See Learning About the Changes in System Support Services for more information about capture and restore.

Capture and Restore Utility Multipathing Capture was enhanced to use channel path IDs (CHPIDs) rather than symbolic device addresses (SDAs) when starting capture activity. This allows Capture to take advantage of multipathing support.
Changes to <16-MB Globals Virtual addressing is used to map some of the <16-MB global area to real storage above 16 MB, freeing this storage below 16 MB for use by application programs.

See Changes to Application Utilities for more information about globals.

Coexistence Facilitation Multiple TPF images facilitate coexistence by enabling different images of the TPF control program (CP), restart area components, and program bases to run concurrently on the processors in a loosely coupled complex.
Control Program (CP) is Masked With few exceptions, the control program (CP) runs masked for I/O interrupts.
Controlling Dump Content You can control the content of dumps by using the selective memory dump table (SMDT) and the dump override table.
Core Image Restart (CIMR) Area Multiple TPF images allow for the management of up to eight core image restart (CIMR) areas on the online system. These areas, which make up part of the TPF image, can be selected dynamically with a hardware IPL. The CIMR records now reside in 4 KB file records, and restrictions on their location on the DASD were removed in this release.
Core Resident Program Area (CRPA) The TPF 4.1 system provides two core resident program areas (CRPAs): the 24-bit CRPA, which is located below 16 MB, and the 31-bit CRPA, which is located above 16 MB. The 24-bit CRPA can contain both 24-bit programs and 31-bit programs.
CRETC Enhancements The CRETC macro was enhanced to permit the passing of a core block to the newly created entry control block (ECB).
Data Alter and Display The following enhancements were made to Data Alter and Display:
  • ZxPGM now supports display and alter of core image restart (CIMR) area components (such as the control program (CP), IPAT, and so on), as well as the IPL programs.
  • ZxPGM and ZxCOR now support a Disassemble option, which generates the displays in assembler language format.
  • ZxPGM supports the alter, display, and locking of programs by loadset as well as version code.
  • ZxCOR supports the alter and display of the system heap storage.
Database Protection by Isolation of Data In the TPF 4.1 system, the support of the dynamic address translation (DAT) facility and virtual addressing provides for protection of your database by isolating the data used by the application programs. Not only does the TPF 4.1 system map each ECB into its own address space, but the FILE macro and TAPE macro move the data out of the issuing ECB's address space. These design philosophies result in two levels of database protection. Not only are ECBs prevented from corrupting the data of other ECBs, but an ECB is also unable to corrupt its own data once it is filed or written to tape.
Data Collection and Reduction The following enhancements were made to data collection and reduction:
  • To permit the collection of file and program interception data on heavily loaded systems, the TPF 4.1 system introduces the SKIP parameter to the SIP DATACO macro. This parameter specifies the number of interceptions to skip for each interception collected.
  • The TPF 4.1 system implements a new shutdown algorithm that uses the new $TPCLC macro support. When collection shuts down, all ending records are still collected, thereby allowing reduction to process the abbreviated data.

Data collection and reduction no longer references a tape named JCD. Rather, data collection and reduction references a real-time tape named RTC.

Data Set Utility New for the TPF 4.1 system, the Data Set utility allows real-time programs to access data easily and seamlessly on a wide variety of input media. The ZDSMG facility allows you to associate data sets such as a general data set (GDS), general tape, or virtual reader with a data definition name. Real-time programs, through the use of a programmed interface, can then access their input by data definition name rather than using device-specific operations.
Note:
Virtual reader refers to the IBM VM/ESA facility for supplying input to a virtual machine.
Diagnostic Tools Several enhancements were made to the branch trace facility, real-time trace (RTT), online minidump, macro trace, system log trace, Enter/Back trace, I/O trace, register trace, program event recording (PER) facility, path information unit (PIU) trace facility, and SNA I/O trace facility.

See Diagnosing Problems and Debugging for more information.

Dynamic Load Function You are no longer required to perform an initial program load (IPL) to incorporate new SNA resource definitions following a dynamic load. You are also no longer required to perform an IPL after you fall back to previous SNA resource definitions. Simply enter the ZNOPL MERGE command while the network is running and the TPF 4.1 system is in any TPF system state to incorporate the new SNA resource definitions or fall back to the previous SNA resource definitions.
Enter-By-Name The program allocation table (PAT) provides an enter-by-name capability that allows the TPF 4.1 system to determine the address of a program at run time, so ENTxC expansions can be resolved dynamically.
E-Type Loader (OLDR) You can load an unlimited number of E-type programs to the TPF 4.1 system by reading them from general data sets (GDS), tapes, virtual readers, or user-defined input devices. You can group these programs into an unlimited number of loadsets, and each loadset can contain an unlimited number of programs. There are a number of E-type loader functions available that you can perform on loadsets or on individual programs in a loadset. You can now use the E-type loader to load:
  • Unallocated programs
  • New versions of existing IBM C language library functions
  • IBM TPF Database Facility (TPFDF) programs.
Note:
Virtual reader refers to the IBM VM/ESA facility for supplying input to a virtual machine.

See Generating the TPF 4.1 System for more information about the E-type loader (OLDR), and Customizing the Code for more information about activating E-type programs.

E-Type Loader (OLDR) Activation Numbers In the TPF 4.1 system, the E-type loader (OLDR) support allows you to seamlessly introduce new versions of real-time programs without disrupting existing system activity. This is achieved by assigning an activation number to each ECB when it is created. This activation number corresponds to the latest loadset that was activated in the TPF 4.1 system. Enter/Back services use these activation numbers to ensure that, for the life of the ECB, only programs that were active when the ECB was created are run.

See Generating the TPF 4.1 System for more information about the E-type loader, and Customizing the Code for more information about activating E-type programs.

Event Table In the TPF 4.1 system, the event table format was changed to that of a hash table.
FACE Table Generation and the System Initialization Program (SIP) The file address compute program (FACE) table is generated offline by a new FACE table generator. This simplifies the system initialization process by allowing the FACE table to be generated without running a full SIP.

See Generating the TPF 4.1 System and Understanding Database Administration for more information about the FACE table and SIP changes.

FACZC Macro The FACZC macro allows utility programs to access unique records that belong to other I-streams, processors, or subsystem users (SSUs).
File Addressing Capacity The TPF 4.1 system supports two new file addressing formats, which are:
  • File Address Reference Format 4 (FARF4)
  • File Address Reference Format 5 (FARF5).

FARF4 is a migration step between the present File Address Reference Format 3 (FARF3) addressing scheme and FARF5, and increases addressing capacity from 640 million records to as many as 4G records. The previous limit of 64 million pool records is removed.

FARF3 addresses are still supported but cannot coexist with FARF5 addresses.

See Understanding Database Administration for more information about file addressing.

FINIS Macro The FINIS macro was updated to include an LTORG. In addition, if space in the segment permits, the date and time that the segment was assembled is included at the end of the program.
Frames, not Blocks One of the cornerstones of the TPF 4.1 system design is that core blocks are now carved out of 4 KB frames. These frames are attached to the virtual address space of the ECB. This prevents application programs from interfering with each other and allows the TPF control program (CP) to ensure that blocks are not lost if an ECB exits abnormally.
General File Loader (ALDR) Enhancements User productivity and system management are improved by giving you more control over loading programs and by removing system allocation restrictions. In the TPF 4.1 system, the number of programs that can be allocated was increased from fewer than 33,000 to more than 1,000,000.

In addition, the online general file loader (ACPL) keeps track of program versions and assembly data.

See Generating the TPF 4.1 System for more information about the online general file loader (ACPL).

General Real-Time Code Changes All IBM-provided programs were modified to fully support the TPF 4.1 environment. For example, all programs:
  • Now run in 31-bit addressing mode
  • Are reentrant
  • Obtain file addresses of program segments by using the GETPC macro rather than accessing the object code produced by an ENTRC expansion, and use the GETFC macro rather than the GETSC or GETLC macros.
GETCC Macro The GETCC macro supports the request of a common block in addition to standard core blocks. Blocks (common and standard core) can be initialized to a user-specified hexadecimal value.
GETFC Macro The GETFC macro was enhanced to request file addresses based on up to 10 record ID attribute table (RIAT) pool attribute types rather than simply prime and overflow attributes.

The GETFC macro now supports the request of a common block as well as a standard core block. Blocks (common and standard core) can be initialized to a user-specified hexadecimal value.

GETPC Macro New options were added to the GETPC macro to aid migration to the TPF 4.1 system. The GETPC macro now allows you to request the core or file address of programs with options to specify a particular loadset or program base.
GSYSC Macro The GSYSC macro was added for ISO-C file resident support. This macro permits an application to allocate system heap storage.
Heap Storage Heap storage is new for the TPF 4.1 system and is treated as working storage. Heap storage is the total memory pool from which an application program can draw contiguous memory on a dynamic allocation request. The MALOC, CALOC, RALOC, and FREEC macros (and their corresponding IBM C language functions) were introduced to access and manage heap storage.

See Changing Application Programs for Migration for more information about heap storage.

IBM C Language Support In the TPF 4.1 system, IBM C language support is no longer a product feature. Rather, it is incorporated into the base TPF 4.1 product to allow system and application program growth.

See Operating Environment Requirements and Planning Information for more information about IBM C language support and software requirements.

Improved Dump Speed for CTL Dumps The speed at which control dumps are taken was improved by enhancing the ability to control the content of the dumps. The selective memory dump facility (SMDF) allows you to tailor the content of dumps by dump number.

Use the ZIDOT command or the IDOTB macro to tailor the content of a dump. Overrides can be put in place that remove or add regions of storage to be dumped.

See Customizing the Code for more information about controlling the dump content, and see Understanding Operations for more information about modifying dump tags.

Improved System Availability The TPF 4.1 system provides very high system availability. In many cases, the scheduled availability has exceeded 99.9%, which represents fewer than 10 minutes of downtime per week. Even higher availability can be achieved when TPF central processing complexes (CPCs) are run in a loosely coupled configuration.

By using the TPF 4.1 system, less downtime is needed for software maintenance because the customer can dynamically change storage allocation values, add SNA terminals without stopping the network, assign new programs online without reinitializing the system to activate the programs, and change program attributes without reinitializing the system to activate the attributes.

Increased Main Storage for Application Program Use By using the TPF 4.1 system, customer application programs benefit from increased access to storage above and below 16 MB, while maintaining the 24-bit application program interface (API) for existing TPF system software. The concept of virtual storage replaces the concept of real (main) storage.
Interprocessor Communications (IPC) Beginning with the TPF 4.1 system, the only supported path for interprocessor communications (IPC) is through the Multi-Processor Interconnect Facility (MPIF). MPIF is required for the High Performance Option (HPO) feature.

See Operating Environment Requirements and Planning Information for more information about IPC.

Input/Output (I/O) Trace In the TPF 4.1 system, I/O trace can now be recorded by symbolic device address (SDA). This greatly simplifies the debugging of I/O-related problems such as stalled module queues. Calls to common I/O are traced as well as ending status.

See Diagnosing Problems and Debugging for more information about I/O trace.

Keypoints In the TPF 4.1 system, several changes were made to the usage of TPF keypoints:
  • The CTKX keypoint was moved out of the keypoint area and is now called the image pointer record.
  • The CTK5 keypoint was added, taking the place of the former CTKX keypoint. This keypoint is reserved for future use by IBM.
  • The CTK3 keypoint is no longer used by IBM. This processor-unique keypoint is available for your use.
Loading of Core Resident Programs For more efficient memory utilization, core resident programs are packed into core based on their program length rather than the size of their file allocation. In addition, core resident programs are no longer brought into core along with the core image restart (CIMR) area components. Rather, they are brought into core on demand or are identified for preload with a keyword on their program allocator list (PAL) deck entries.
Lost Tape Interrupts The TPF 4.1 system supports detection of lost tape interrupts and stalled tape module queues. The TPF 4.1 system now notifies the application program of a permanent error or begins a tape switch when a lost interrupt is detected.
Low Address Protection The low address protection facility protects the first 512 bytes of storage against any alteration by an application program or the TPF 4.1 system regardless of the storage key used.
Macro Trace Facility The Macro Trace facility was enhanced for the TPF 4.1 system. Macro trace information is maintained for every ECB in a reserved area of the ECB. This allows dump processing to display macro trace history by ECB in addition to a collated, system-wide trace display.

See Customizing the Code and Diagnosing Problems and Debugging for more information about diagnostic tools and the macro trace facility.

Migration Aids The TPF 4.1 system provides the following migration aids:
  • This publication explains the details of the TPF 4.1 system as they affect your system and application programs.
  • File Address Reference Format (FARF) migration path, which provides File Address Reference Format 4 (FARF4) as a transition step between File Address Reference Format 3 (FARF3) and File Address Reference Format 5 (FARF5). See Understanding Database Administration for more information about file addressing capacity.
  • Virtual-equals-real (VEQR) operating mode provides limited virtual function to help you convert from a nonvirtual TPF system.

    VEQR mode allows you to run programs that are unchanged from the TPF 3.1 system even though the programs use data-sharing techniques that are no longer supported. When unsupported storage sharing is found, the TPF 4.1 system logs the incident.

    Running in VEQR mode identifies illegal storage references between address spaces. By using VEQR mode in a test environment, you can test individual programs as you make changes, before modifying your entire application program for the TPF 4.1 system. VEQR mode allows you to migrate your application programs to the TPF 4.1 system gradually.

    See Diagnosing Problems and Debugging for more information about VEQR operating mode.

  • Block-checking mode identifies programs that use block storage management practices that no longer work in the TPF 4.1 system. Block-checking mode marks coding errors such as writing beyond the end of a block, passing blocks chained to other blocks, and using storage that was already released. Block-checking mode can be turned on and off without reinitializing the TPF 4.1 system.

    See Diagnosing Problems and Debugging for more information about block-checking mode.

  • Multiple TPF images serve as a migration aid by enabling the definition of up to eight images of the TPF system on a single processor, giving you greater flexibility in migrating to the TPF 4.1 system. You can maintain images of both the TPF 3.1 system and the TPF 4.1 system, and fall back to a previous program base if necessary.
Multiple Images of the TPF System Multiple TPF images allow you to define up to eight images of the TPF 4.1 system on a single processor. Maintaining multiple, separate TPF images allows you to integrate program changes more easily by:
  • Permitting you to perform loads while the TPF 4.1 system processes messages without destroying the existing program base
  • Providing the ability to fall back immediately to a previous program base without reloading the previous program versions.

See Understanding Operations for more information about multiple TPF images.

Multi-Volume Dumps The TPF 4.1 system supports dumps that span more than one volume. When combined with automatic tape mounting, the dumping of machines with large amounts of real storage is streamlined and greatly simplified.
New CINFC Option The CINFC A option was added to allow you to access the address of an entry in the CINFC table, simplifying initialization of these areas.
Non-SNA Communication Shutdown levels are based on the number of items in the input list rather than on available blocks.

See Understanding Non-SNA Communication for more information about non-SNA communication.

Online Patch Facility The Online Patch facility is new for the TPF 4.1 system. The ZPTCH command creates and manages patch decks, which are a collection of core alterations. These patch decks are applied in a contiguous, nondisruptive fashion.
Path Information Unit (PIU) Trace Facility The TPF 4.1 system provides expanded path information unit (PIU) tracing with additional information.
Note:
Advanced Communications Function/Trace Analysis Program (ACF/TAP) is no longer supported in the TPF 4.1 system.

See Understanding Operations and Diagnosing Problems and Debugging for more information about the PIU trace facility and diagnostic tools.

Performance Monitoring Enhancements Several reports are enhanced to be more usable and present additional information. A new report contains a histogram of storage frame usage.
Program Allocation Table (PAT) The program sharing table (PST) and the online allocator were merged into a single structure called the program allocation table (PAT). The PAT is the control structure for Enter/Back processing. The PAT controls the characteristics of real-time programs such as:
  • 24-bit and 31-bit addressing mode
  • Restricted macro access
  • Program residency.

The ZxPAT command allows these characteristics to be changed dynamically on the online system.

See Generating the TPF 4.1 System for more information about the program allocation process.

Program Event Recording (PER) Trace Facility The TPF 4.1 system supports the functions of the IBM ESA/370 program event recording (PER) trace facility for storage alteration and instruction fetching events. The TPF 4.1 system also supports the functions of the IBM ESA/390 PER trace facility for storage alteration, instruction fetching, and successful branching events. There are many enhancements to the system trace facilities in the TPF 4.1 system.

See Diagnosing Problems and Debugging for more information about the PER trace facility.

Program Nesting Nesting of programs is managed entirely in the address space of the ECB, which greatly enhances system performance. The depth of program nesting is unlimited, or it can be restricted by using the ZCTKA command.
Programs Are 4 KB All real-time programs are now allocated to 4 KB file records.
Program Versions and Assembly Data Information about the versions of the programs currently loaded to the online system is maintained in the following ways:
  • The TPF 4.1 system reports the loadset and version of programs through dumps and online displays.
  • The two-character version code, as well as the assembly date from the program's END card, is maintained in the program version records (PVR). This information is updated whenever programs are loaded by the general file loader (ALDR), auxiliary loader (TLDR), and the E-type loader (OLDR).
  • The FINIS macro includes the date and time of assembly at the end of the program if space permits. This applies only to assembler segments.
Record Uniqueness The TPF 4.1 system introduces the ability to define processor-unique and I-stream unique records. This is in addition to the previously existing ability to define subsystem user-unique records. Records can possess these attributes in any combination.
Real-Time Program Management New for the TPF 4.1 system, the ZAPAT command allows you to dynamically change the allocation attributes of real-time programs. Attributes include:
  • The program's residency
  • 24-bit and 31-bit addressing mode
  • Restricted macro authorization.
Record Hold Table In the TPF 4.1 system, the record hold table format was changed to that of a hash table and a chained table.
Record ID Attribute Table (RIAT) The record ID attribute table (RIAT) was modified to control the Restore attribute of records. The Restore attribute can now be modified dynamically by using the ZRTDM command.
Recoup GROUP Macro The GROUP macro was modified to support specification of I-stream unique records. This macro is used by the Recoup utility to describe the structure of the database.

See Learning About the Changes in System Support Services for more information about the Recoup GROUP macro.

Registers In the TPF 4.1 system, R10 is now available for your use between SVCs. It will be destroyed when an SVC is issued.
Resource Control Resource control can maximize the use of available resources under varying system conditions. Utilities and batch processes can be automatically controlled so they do not deplete system resources during peak periods. By adding the capability to time slice CPU-intensive applications, utilities and batch processes can be forced to relinquish control for specified intervals of time, therefore allowing other transactions to process.
RSYSC Macro The RSYSC macro was added for ISO-C file resident support. This macro permits an application to release system heap storage.
Run-Time Macro Restriction In the TPF 4.1 system, a program's use of restricted macros can now be authorized at run time. Keywords used in the PAL deck set up a program's basic macro authorization. Authorization can be updated dynamically on the online system by using the ZAPAT command.
Simplified System and Program Allocation The system allocation process is simplified in the TPF 4.1 system. The system allocator program (SALO) compiles, link-edits, and runs in one job. Before the TPF 4.1 system, allocation was staged in several jobs.
SIPCC Function The SIPCC function was enhanced to support transmission of 4 KB data areas. You can also specify a target I-stream, a list of I-streams, or all I-streams in addition to a target central processing unit (CPU). Enter-by-name support allows interprocessor communications (IPC) to pass program names rather than Enter expansions.
SNA Communication In the TPF 4.1 system you can install new network definitions without disrupting the network. In addition, you can write network definitions to tape or general data sets (GDSs). The definitions become shared between processors during processing, allowing you to perform either fresh or dynamic loads from any processor in a loosely coupled complex. If problems are found with the network definitions, your installation can fall back to the previous network definitions, regardless of how many CPUs may have incorporated the new resource definitions.

In addition to ACF/SNA table generation, many SNA tables were moved above 16 MB, and generic name support (for generic TPF application program names in session requests) was expanded.

The SNA polling interval has changed; you can define this interval from 10 milliseconds (ms) to 50 ms, in 10-ms increments.

See Understanding Systems Network Architecture (SNA) Communication for more information about SNA communication.

SNA I/O Trace Facility The SNA I/O trace facility is a diagnostic aid used to debug problems with SNA link activation. A 4 KB trace control table is used to record channel contact commands (including read and write XIDS, XID I-fields, and significant steps in XID7 processing for channel-to-channel (CTC) devices). The table is included in a variety of SNA system error dumps.

The XID I-field of the SNA I/O trace table contains new types of entries. Therefore, you must modify any tool that uses this table to handle the new entries.

See Diagnosing Problems and Debugging for more information about diagnostic tools and the SNA I/O trace facility.

Storage Allocation In the TPF 4.1 system you can change storage allocation values with a command and a re-IPL.

See Understanding Operations for more information about storage allocation and modifying storage allocation values online.

Structured Programming Macros (SPMs) The TPF 4.1 system now includes the TPF Database Facility (TPFDF) structured programming macros (SPMs). You can use the TPFDF SPMs in any TPF 4.1 application even if you do not have the TPFDF product installed.

See TPFDF and TPF Structured Programming Macros for more information about the SPMs provided with the TPF 4.1 system.

Supervisor Call (SVC) Instruction Definition An indexed supervisor call instruction (SVC) table structure allows you to define up to 32 767 macros. There are 32 SVCs reserved for your use and odd-numbered SVCs are supported.

See Customizing the Code for more information about SVCs.

Support for >32K Programs The TPF 4.1 system now supports more than 32 000 real-time programs.
Support for Format 1 (FMT1) CCWs TPF device and macro handlers were enhanced to use Format 1 (FMT1) CCWs, which allows access to addresses above 16 MB. If you write file or tape data chains, the TPF 4.1 system's use of a CCW translation utility insulates you from this change, as well as from virtual address conversion.
System Allocator (SALO) The system allocator (SALO) takes input from as many as 15 input decks and produces the system allocator (SAL) table and the new program allocation table (PAT).

In addition, allocation of the SAL table changed for the TPF 4.1 system. See Generating the TPF 4.1 System and TPF System Generation for more information about allocation of the SAL table.

System Allocator (SAL) Tape The system allocator (SAL) tape is no longer supported in the TPF 4.1 system.
System Error Number Prefixes The system error number prefixes are assigned to dump numbers to distinguish between sets of user system error numbers and IBM system error numbers.

See Customizing the Code for more information about prefixes.

System Error Support You can control the content of a TPF dump, which is divided into two sections:
  • The processor status, trace control tables, and virtual memory for message address spaces
  • System storage areas.

In the TPF 4.1 system, system error options are no longer subsystem unique. You can define multiple sets of system error numbers and specify additional main storage areas to be dumped. Dumps can also span multiple tape volumes.

System Initialization System initialization is easier in the TPF 4.1 system. The file address compute program (FACE) table generation is handled by a new offline FACE table generator program rather than as part of the system initialization process (SIP).
System Service Request Enhancements to the macro decoder and supervisor call (SVC) instruction definitions increase the ability of an application program to request system services.

The primary interfaces for application program requests of system services are through macros using the SVC and fast-link macro decoders. Fast-link macros are macros that do not issue SVCs. In the TPF 4.1 system, the SVCs allocated for use are increased from 128 to 255, and the SVCs reserved for your use are increased from 1 to 32. In addition, 2 SVC entries (one reserved for IBM use and one for your use) can be reserved to support a second-level structure. With this secondary or indexed structure, you can define more than 32 000 additional macros.
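The two-level structure described above can be sketched in C. All names here are illustrative, not actual TPF control structures: one reserved "index" SVC selects a secondary table, and a halfword index carried with the request picks one of up to 32 767 entries in it.

```c
#include <assert.h>
#include <stddef.h>

typedef void (*svc_handler)(void);

#define PRIMARY_SVCS  256    /* SVC numbers 0-255                     */
#define INDEXED_SLOTS 32767  /* secondary entries behind the index SVC */

static svc_handler primary_table[PRIMARY_SVCS];
static svc_handler indexed_table[INDEXED_SLOTS];

/* Stand-in for a macro service routine. */
static void sample_service(void) { }

/* Decode an SVC request: a normal SVC number dispatches directly from
   the primary table; the reserved index SVC instead uses the halfword
   index to select a handler from the secondary (indexed) table. */
static svc_handler decode_svc(unsigned char svc_num, unsigned short index,
                              unsigned char index_svc) {
    if (svc_num == index_svc && index < INDEXED_SLOTS)
        return indexed_table[index];
    return primary_table[svc_num];
}
```

The design choice is the usual one for a small, fixed opcode space: the 1-byte SVC number caps the primary table at 256 entries, so one entry is spent to open a much larger halfword-indexed namespace.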

One hundred fast-link macros are reserved for use by your application programs. Fast-link macros are more expedient than other macros because they do not issue SVCs that cause system interrupts.

Certain restricted-use TPF system macros now check the authorization level of the requesting program before providing the system service. If a program requests a service that it is not authorized to obtain, the service is not granted. This authorization level is specified as part of the program allocation information.
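The authorization check can be sketched as follows. The structure and bit names are hypothetical: the idea is simply that each program's allocation record carries authorization bits, and a restricted-use macro tests the bits it requires before granting the service.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative authorization bits carried in the program
   allocation information (names are assumptions, not TPF's). */
#define AUTH_RESTRICT 0x01   /* may issue restricted-use macros */
#define AUTH_KEY0     0x02   /* may request key-0 services      */

struct program_alloc {
    const char *name;        /* program name                    */
    unsigned    auth;        /* authorization bits              */
};

/* Grant the service only if every required bit is present. */
static bool service_authorized(const struct program_alloc *pgm,
                               unsigned required) {
    return (pgm->auth & required) == required;
}
```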

System Trace Control By using the ZSTRC command you can display or change the activity of the system trace facilities. These traces include:
  • Branch
  • Enter/Back
  • Input/Output (I/O)
  • Macro.
Tape Support In the TPF 4.1 system, the following tape support functions can now be performed without operator intervention:
  • Mount alternate (ALT) tapes on write-enabled devices
  • Convert ALT tapes to active tapes for tape macro processing
  • Convert ALT tapes to standby tapes for tape switching.

Automatically mounting ALT tapes improves tape switching during the dump process, enabling you to minimize the number of tape devices used by a multiple volume tape dump.

In addition, the TPF 4.1 system:

  • Detects and reports long or lost tape interrupts and stalled module queue conditions
  • Supports only 3480 and 3490 tape and tape control devices.

See Learning About the Changes in System Support Services for more information about tape support.

TDSPC Enhancements In the TPF 4.1 system, TDSPC can now be used to query the length of the specified tape module queue.
Timer Rate Change The CPU loop timer interval was shortened to 10 ms. This speeds polling of channel-attached communications controllers.
$TCPLC Macro The new $TCPLC macro supports writing to real-time tapes from the control program (CP).
TPF Advanced Program-to-Program Communications (TPF/APPC) The TPF 4.1 system adds base functions to TPF Advanced Program-to-Program Communications (TPF/APPC) support, completing TPF system support of all APPC base functions, including mapped conversations (for the C language interface only) and parallel sessions.

The TPF/APPC mapped interface is based on the communication element of the IBM Systems Application Architecture (SAA) Common Programming Interface (CPI). Although TPF/APPC does not fully conform to CPI communication, standard CPI communication programs can be converted easily, provided the programs do not use the features that the TPF system does not support. See the TPF C/C++ Language Support User's Guide for more information about the C language functions.

The maximum number of LU 6.2 sessions increased to 8 million.

The TPF 4.1 system also adds some of the optional functions defined by the LU 6.2 architecture.

See Understanding Systems Network Architecture (SNA) Communication for more information about TPF/APPC.

TPF Database Facility (TPFDF) In the TPF 4.1 system, the E-type loader fully supports the IBM Transaction Processing Facility Database Facility (TPFDF) product.
User Core Image Restart (CIMR) Areas The TPF 4.1 system introduces 2 new core image restart (CIMR) area components that are available for your use.

Component
Description

 USR1 
A subsystem-shared CIMR component.

 USR2 
A subsystem-unique CIMR component.

These components can be loaded with either the general file loader (ALDR) or the auxiliary loader (TLDR).

User Exits The TPF 4.1 system introduces an array of new user exits in the following areas.

Area
Description

 CLH 
In centralized list handling (CLH), user exits for getting and releasing blocks, including entry control blocks (ECBs), system work blocks (SWBs), and common blocks.

 CTIN 
User exit for initialization and key protection of tables as well as initialization of user CINFC labels.

 Data Set Utility 
User exits for management of a virtual reader and user-defined input devices.

 E-Type Loader (Offline) 
User exit for offline control of programs to be loaded with the E-type loader.

 E-Type Loader (Online) 
User exits for ZOLDR authorization, display interception, programs not entered through the normal enter mechanism, loadset history, program history, and selective program activation.

 ESPM 
Hook in the CPU timer interrupt routine for the System Performance Monitor package.

 FACE Table 
In the file address compute program (FACE) table, a mechanism providing access to the FACE table header, split chain, and split information.

 Indexed SVCs 
User exit for the indexed SVC decoder.

 System Error 
User exit for the dump override table (DOT), modification of program event recording (PER) data, and viewing of dump data.

 OLDF 
Hook in System Error for support of the online mini dump facility.

 WTOPC 
User exit for WTOPC PAGE size selection.
Virtual File Access (VFA) In the TPF 4.1 system, virtual file access (VFA) is always present and active. You can run programs directly from VFA, thereby improving system performance. The online and offline VFA performance monitoring facilities help you tune VFA candidates and VFA resources.

See Understanding Database Administration for more information about VFA.

Working Storage and 16-MB Constraint Relief (Transaction Protection and Data Integrity) The TPF 4.1 system uses the dynamic address translation (DAT) facility of the IBM ESA/370 architecture to view working storage above 16 MB as if it were below 16 MB. The ESA facilities of primary address space and home address space are implemented in the TPF 4.1 system as the ECB virtual memory (EVM) and the system virtual memory (SVM), respectively.

The TPF 4.1 system separates and isolates information into types of address spaces for system processing and message processing. Through the use of the DAT facility and low address protection, the TPF 4.1 system changes how storage is physically and logically used for system programs, application programs, and messages. The introduction of virtual address spaces in the TPF environment has significantly increased the integrity of the data environment in the TPF 4.1 system. The TPF 4.1 system also provides the basic tools needed for additional data integrity and recovery.

See Generating the TPF 4.1 System for more information about working storage.

WTOPC Macro Enhancements The following enhancements were made to the WTOPC macro.
  • The WTOPC macro now provides a centralized management system for long output messages. The WTOPC PAGE facility allows commands that could otherwise display an unlimited amount of data to present that data in smaller page-sized pieces. The new ZPAGE command allows you to request the next page of output. The WTOPC PAGE facility provides page sizes for remote and local consoles as well as a user exit that allows facility-specific tailoring of output presentation. SNA display messages and E-type loader display messages now take advantage of the WTOPC PAGE facility.
  • CHAIN=YES processing was redesigned to significantly reduce the system resources consumed by users of this facility. CHAIN=YES now uses fewer ECBs and core blocks.
  • UNSOL=YES allows command writers that send unsolicited messages to send them using the Unsolicited Message package. This helps to prevent potential data loss when sending to older remote consoles.
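The page-at-a-time behavior described above can be sketched in C. The structure and names are illustrative only: the command's output is held, and each continuation request (ZPAGE in the real system) releases the next page of at most a console-dependent number of lines.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative state for one paged display (not a TPF structure). */
struct paged_output {
    size_t total_lines;  /* lines the command produced          */
    size_t next_line;    /* first line of the next page         */
    size_t page_size;    /* lines per page for this console     */
};

/* Return the number of lines in the next page and advance the
   cursor; 0 means the display is complete. */
static size_t next_page(struct paged_output *p) {
    size_t remaining = p->total_lines - p->next_line;
    size_t n = remaining < p->page_size ? remaining : p->page_size;
    p->next_line += n;
    return n;
}
```

Centralizing this state in one facility is what lets a single operator command (ZPAGE) continue the output of any command that uses it, instead of each command implementing its own "more" logic.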
WTOPC Paging Control New for the TPF 4.1 system, the ZPAGE command is used to pass a continuation request to the WTOPC PAGE facility. The ZPAGE command provides a single, centralized operator interface for commands that use the WTOPC PAGE facility.
ZFMSG Facility New for the TPF 4.1 system, the ZFMSG facility allows you to dynamically define and change various characteristics of the TPF commands. You can create new commands, define their editor segments and describe the functional support consoles (FSCs) that should receive output, as well as change any of these characteristics for existing commands.
3-Byte Resource Identifier (RID) In the TPF 4.1 system, a 3-byte resource identifier (RID) allows you to increase the number of logical units (LUs) that you can use. The maximum number of LU sessions (other than LU 6.2) increased to 8 million.