System Performance and Measurement Reference

System Data Analysis

The reports that follow provide the information needed for system performance analysis. The environment and system summary reports appear first, followed by the reports associated with each collector.

Environment Summary Reports

The Environment Summary Report contains general information about a collection run. In addition to the date, the tape volume serial number, and information identifying the complex, the report displays information that defines the system environment at the time of the run and describes the kind of data collection being performed.

Figure 2. Environment Summary Report


TPF ENVIRONMENT SUMMARY REPORT                        SYSTEM WIDE
DATE COLLECTED - 27 FEB
DATE REDUCED   - 16 SEP 98
DATA COLLECTION TAPE VOLUME SERIAL NUMBER - AL3930
ENVIRONMENT - PR/SM-SH   PARTITION - TPF1       CAP - N   WAIT COMP - N
IBM MOD - 370XA/9672   SERIAL NUMBER - 0C034579   TPF CPUID - B
TPF RELEASE AND VERSION - TPF4.1  ,  PUT LEVEL - PUT07
VIRTUAL MODE - VEQV
 
OPERATOR INPUT REQUEST - ZMEAS I/SMPF/1023/05
MODE OF COLLECTION - SAMPLING
INTERVAL SYSTEM  = 05 SEC   MESSAGE = 05 SEC   PROGRAM = 05 SEC   FILE    = 05 SEC
PERIOD  =  23 SEC
FILE    COLLECTION INTERCEPT SKIP FACTOR =    99
PROGRAM COLLECTION INTERCEPT SKIP FACTOR =    99
START TIME = 14:18:27
END   TIME = 14:26:06
ACTIVE UTILITIES                       START               END
   DATA COLLECTION                       X                  X
 
ACTIVE TRACES                          START               END
   MACRO TRACE                           X                  X
   ENTER/BACK TRACE                      X                  X
   SYSTEM LOG                            X                  X
   IO TRACE                              X                  X
   VEQR MODE LOGGING                     X                  X


OPTIONS CHOSEN
COLLECTOR     OPTION                SUBOPTIONS
_________     ______                __________
 SYSTEM
 SNA
 CONTROL      DUMP                  10
 MESSAGE
 PROGRAM
 PROGRAM      PACKAGE               A          A*
 PROGRAM      PACKAGE               B          B*
 PROGRAM      PACKAGE               C          C*
 PROGRAM      PACKAGE               D          D*
 PROGRAM      PACKAGE               E          E*
 PROGRAM      PACKAGE               F          F*
 PROGRAM      PACKAGE               G          G*
 PROGRAM      PACKAGE               H          H*
 PROGRAM      PACKAGE               I          I*
 PROGRAM      PACKAGE               J          J*
 PROGRAM      PACKAGE               K          K*
 PROGRAM      PACKAGE               L          L*
 PROGRAM      PACKAGE               M          M*
 PROGRAM      PACKAGE               N          N*
 PROGRAM      PACKAGE               O          O*
 PROGRAM      PACKAGE               P          P*
 PROGRAM      PACKAGE               Q          Q*
 PROGRAM      PACKAGE               R          R*
 PROGRAM      PACKAGE               S          S*
 PROGRAM      PACKAGE               T          T*
 PROGRAM      PACKAGE               U          U*
 PROGRAM      PACKAGE               V          V*
 PROGRAM      PACKAGE               W          W*
 PROGRAM      PACKAGE               X          X*
 PROGRAM      PACKAGE               Y          Y*
 PROGRAM      PACKAGE               Z          Z*
 PROGRAM      PACKAGEREPORTS
 FILE
 FILE
 FILE         ACCESSESPERID         ALL
 FILE         CYLINDERANALYSIS      ALL
 FILE         DISTRIBUTION          ALL
 FILE         PLOT                  ALL
 FILE         PATHACTIVITY
 FILE         COMPARISON
 FILE         CACHE
 FILE         CACHE                 SSD
 FILE         CACHE                 SALL
 FILE         CACHE                 CCACHE
 FILE         CACHE                 CACHESUM
 FILE         CACHE                 CACHEALL
 FILE         PLOT                  ALL
 FILE         DIST                  ALL
 REDUCE       SS                    BSS
 ALIAS        S                     BSS        BSS        SUBSYSTE
 ALIAS        U                     HPN        BSS        SSU        ONE
CONFIGURATION SUMMARY                                 SYSTEM WIDE
CPU CONFIGURATION
MACH TYPE     SERIAL NR
370XA/9672    0C034579             B   DATA COLLECTED ON THIS CPU
 
 
 
 
 
MDBF CONFIGURATION
SS NAME    SS NR    SS STATE  SSU NAME   SSU NR
BSS            0       NORM                                   HPN

The options listed in the OPTIONS CHOSEN section of the Environment Summary Report come from a typical run; the list should not be considered exhaustive. See TPF Operations for more information about the available options.

The system collector is primarily concerned with the operation of the CPU itself. The items shown in the system summary (see Figure 5) can be considered in groups: gross input, system utilization, working storage utilization, and the queues or job lists that make up the priority of processing.

Figure 3. Input Messages by Application Report


INPUT MESSAGES BY APPLICATION                            SYSTEM WIDE              27 FEB         14:18:27
 
 
APPLICATION      SSU NAME          MSG / SEC                       MESSAGES                         PERCENT OF TOTAL
 
  AAAA              HPN              0.000                            0                                  0.000
  APPA              HPN              0.000                            0                                  0.000
  APPC              HPN              0.000                            0                                  0.000
  AZAZ              HPN              0.000                            0                                  0.000
  BBBB              HPN              0.000                            0                                  0.000
  B32O              HPN              0.000                            0                                  0.000
  CBM1              HPN              0.000                            0                                  0.000
  CBM2              HPN              0.000                            0                                  0.000
  CBW1              HPN              0.000                            0                                  0.000
   
·
·
·
  TSIM              HPN             88.740                        40732                                 99.948
  WEWE              HPN              0.000                            0                                  0.000
  WRAP              HPN              0.000                            0                                  0.000
  WWWW              HPN              0.000                            0                                  0.000
  XCXC              HPN              0.000                            0                                  0.000
  XXXX              HPN              0.000                            0                                  0.000
  YDYD              HPN              0.000                            0                                  0.000
  YYYY              HPN              0.000                            0                                  0.000
  Y0Y0              HPN              0.000                            0                                  0.000
  ZZZZ              HPN              0.000                            0                                  0.000
 
  TOTAL                             88.786                        40753                                100.000

Figure 4. Pushbutton Application Summary Report


PUSHBUTTON APPLICATION SUMMARY REPORT


 
 
SUBSYSTEM:     BSS
 
APPLICATION:   1
 
PROGRAM ENTERED          MESSAGE ACTION TYPE                   MSG / SEC
 
ACPF                     INPUT MESSAGE                             4.04
INVL                     INVALID INPUT MESSAGE                     0.07
WPA1                     PASSENGER DATA ENTRY                     12.04
WID1                     ALTER TRANSACTION INFORMATION             0.94
NFA1                     FLIGHT INFORMATION                       18.48
PRE1                     DISPLAY RECORD                            2.07
NAE1                     AVAILABILITY OR MISCELLANEOUS             0.00
ETA1                     END TRANSACTION                           8.52
FRD1                     FILE RECORD                               0.09
IGR1                     IGNORE TRANSACTION                        0.07
 
                         SS TOTAL FOR APPLICATION 1               46.32
 
                                      HPN  SUBTOTAL               46.32
 

Pushbutton Application Summary Report

A set of counters in the control program provides information on the number of high-speed messages started by pushbutton applications. A sample report is shown in Figure 4. The message counters used are never reset, so the counts in this report represent total messages for the applications listed over the entire life of the collection.
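Because the counters accumulate continuously, deriving a per-collection figure implies comparing the start-time and end-time snapshots. A minimal sketch of that step, assuming the reduction differences the two snapshots (the function name and this method are illustrative assumptions, not taken from the reduction code):

```python
def msg_per_sec(start_count, end_count, start_time, end_time):
    """Hypothetical reduction step: the message counters are never reset,
    so a per-collection rate is assumed to come from differencing the
    start- and end-time counter snapshots over the elapsed time."""
    return (end_count - start_count) / (end_time - start_time)

# For example, 459 messages over a 459-second collection is 1.0 msg/sec.
rate = msg_per_sec(100, 559, 0.0, 459.0)
```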

The system message counter table (SM) records are used as input to this report. The SM record is a table of as many as 63 program entries that constitute one RES0 application. Each 8-byte program entry consists of a 4-byte program name and a 4-byte counter field (see the SMREC DSECT in segment JCD4 for the SM record layout).

An SM record is written to the RTC tape for each RES0 application, for each subsystem user, for each usable I-stream, and for each subsystem, both at start time and at end time. For example, on a system using 2 subsystems and 2 I-streams, with 2 subsystem users per subsystem, with 5 RES0 applications implemented, and with variable MAXAPPL in segment JCD4 changed from 0 to 5, there would be (2 × 2 × 2 × 5) = 40 SM records written to tape at start time and again at end time. TPF installations implementing RES0 applications must change segment JCD4 to set variable MAXAPPL to the number of RES0 applications implemented; MAXAPPL defaults to 0.

The CROSC macro with entry GLBAC is used to retrieve pointers to the global areas where the RES0 application information can be found. The RES0 application information is defined on the pilot tape and updated by the UII package. Note that although SM records are collected by subsystem user, I-stream, and RES0 application, the program groups are unique only by subsystem and by RES0 application number (1 - 5). For this reason, pushbutton applications are reported by subsystem, RES0 application number, and program name only.
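The record-count arithmetic and the 8-byte program-entry layout described above can be sketched in Python (a sketch for illustration; the function names are invented, and only the field sizes and the 2 × 2 × 2 × 5 example come from the text):

```python
import struct

def sm_records_per_snapshot(subsystems, istreams, ssus_per_ss, res0_apps):
    """One SM record per subsystem x I-stream x subsystem user x RES0
    application; a full set is written at start time and again at end time."""
    return subsystems * istreams * ssus_per_ss * res0_apps

def parse_program_entry(raw8):
    """Each 8-byte program entry: a 4-byte EBCDIC program name followed
    by a 4-byte (big-endian) counter field."""
    name, count = struct.unpack(">4sI", raw8)
    return name.decode("cp500").rstrip(), count

# The example from the text: 2 subsystems, 2 I-streams, 2 subsystem users
# per subsystem, and 5 RES0 applications -> 40 SM records per snapshot.
records = sm_records_per_snapshot(2, 2, 2, 5)

# "ACPF" in EBCDIC is X'C1C3D7C6'; here paired with a counter value of 1234.
name, count = parse_program_entry(bytes([0xC1, 0xC3, 0xD7, 0xC6]) + struct.pack(">I", 1234))
```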

Program names and descriptions for additional applications must be entered into the name table DNMTBLC and the description table DMSGDES in segment JRA3 for those applications to be reported properly. These tables have a one-to-one correspondence with each other and list all programs used by all pushbutton applications. When additional programs are to be supported, their names and descriptions must be inserted into these tables. Because the tables are searched sequentially for each program being reported, the order of programs in the tables is not important as long as each name in DNMTBLC is in the same array position as its description in DMSGDES.
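The parallel-table lookup can be sketched as follows (the table contents here are illustrative, drawn from the sample report in Figure 4, not from the actual DNMTBLC and DMSGDES tables):

```python
# Parallel tables analogous to DNMTBLC (program names) and DMSGDES
# (descriptions); entry i of one table corresponds to entry i of the other.
DNMTBLC = ["ACPF", "INVL", "WPA1", "ETA1"]
DMSGDES = ["INPUT MESSAGE", "INVALID INPUT MESSAGE",
           "PASSENGER DATA ENTRY", "END TRANSACTION"]

def describe(program):
    """Sequential search, as the report generator does; table order does not
    matter as long as the two tables stay in step position-for-position."""
    for i, name in enumerate(DNMTBLC):
        if name == program:
            return DMSGDES[i]
    return None  # program not in the tables, so it cannot be reported properly
```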

The preceding example assumes that 5 RES0 applications have been defined. Only 1 RES0 application is predefined, and you are responsible for applications 2 through 5, if they are used.

System Summary Report

This report provides a summary of the system, the I-streams, storage, and shutdown conditions. The items shown in the I-stream summary (see Figure 5) can be considered in groups: system utilization, activity per I-stream, and the queues or job lists that make up the priority of processing.
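The equations printed at the bottom of each page of the report can be exercised directly. A sketch in Python, using the weights (5 and 0.3) and the mean values shown in the sample report (the function names are illustrative):

```python
def weighted_message_rate(hs_processed, hs_routed, ls_rate, tcpip_weighted,
                          ls_weight=5.0, routed_weight=0.3):
    # WEIGHTED MESSAGE RATE = (HS processed - HS routed) + (LS rate * 5)
    #                         + (HS routed * 0.3) + TCP/IP weighted messages
    return ((hs_processed - hs_routed) + ls_rate * ls_weight
            + hs_routed * routed_weight + tcpip_weighted)

def cpu_utilization(total_clock_time, total_idle_time):
    # TPF CPU UTILIZATION =
    #   ((total clock time) - (sum of all idle levels)) / (total clock time)
    return (total_clock_time - total_idle_time) / total_clock_time

# Mean values from the sample report (2.294 HS processed, 0 HS routed,
# 0 LS, 909.267 TCP/IP) reproduce the printed mean of 911.561.
rate = weighted_message_rate(2.294, 0.000, 0.000, 909.267)

# A mean system wait state of 50.8% corresponds to 49.2% TPF CPU utilization.
util = cpu_utilization(100.0, 50.8)
```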

Figure 5. System Summary Report


SYSTEM SUMMARY                                                                     18 FEB        16:12:12     SYSTEM     PAGE     1
       205 OBSERVATIONS
INPUT MESSAGES PER SECOND  (WORK LOAD)                                           MIN                 MAX                MEAN
     HIGH SPEED MESSAGES (PROCESSED)                                           0.615               5.060               2.294
     LOW  SPEED                                                                0.000               0.000               0.000
     HIGH SPEED MESSAGES (ROUTED)                                              0.000               0.000               0.000
     TCP/IP WEIGHTED MESSAGES                                                  1.800            2715.743             909.267
     CREATED ENTRIES                                                          32.141             146.118              87.000
     SSCP INPUT MESSAGES                                                       0.000               0.000               0.000
     UNIT RECORD LOW  PRIORITY TASK                                            0.000               0.000               0.000
     UNIT RECORD HIGH PRIORITY TASK                                            0.000               0.000               0.000
     SQL REQUESTS PER SECOND                                                   0.000               0.000               0.000
     ACTIVE SQL ECBS                                                           0.000               0.000               0.000
     WEIGHTED MESSAGE RATE                                                     2.415            2720.803             911.561
 
RESOURCE UTILIZATION PER MESSAGE
     MILLISECONDS PER WEIGHTED MESSAGE                                       499.998           12975.598            2144.723
     CORE POOL BYTES  PER  ECB                                             37866.852           66018.835           49758.068
 
PROCESSOR UTILIZATION
     SYSTEM WAIT STATE                                                          20.2%               74.7%               50.8%
 
     TPF AVERAGE CPU UTILIZATION                                                25.3%               79.8%               49.2%
 
     PAUSE COUNT                                                               0.000               1.000               0.102
     MAIN I-STREAM WAIT TIME (MILS)                                            0.000               0.810               0.021
     TIME SPENT WORKING WHILE PAUSED (MILS)                                    0.000               0.010               0.003
     PAUSE TIME -- I-STREAM #2 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #3 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #4 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #5 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #6 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #7 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #8 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #9 (MILS)                                          0.000               0.000               0.000
     PAUSE TIME -- I-STREAM #10 (MILS)                                         0.000               0.000               0.000
     IPTE INSTRUCTION RATE                                                     6.676              24.918              14.838
     PTLB INSTRUCTION RATE                                                     0.985              16.582               6.430
 
COLLECTION INTERVAL IN SECONDS                                                 4.161               7.036               5.202
 
TPF CPU LIST LENGTH (SUMMATION OF ALL I-STREAMS)
     CROSS                                                                     0.000             200.000               4.902
     READY                                                                     0.000             236.000               3.857
     INPUT                                                                     0.000              33.000               2.402
     DEFERRED                                                                  0.000               1.000               0.878
 
     TOTAL SUSPEND LIST ECBS                                                   0.000               1.000               0.009
          SUSPEND LIST LOW-PRIORITY ECBS                                       0.000               0.000               0.000
          SUSPEND LIST TIME-SLICED ECBS                                        0.000               0.000               0.000
     VCT                                                                       0.000               0.000               0.000
 
LOW PRIORITY ECB CLASSIFICATIONS
     PRIORITY CLASS - BATCH                                                    0.000               0.000               0.000
     PRIORITY CLASS - LOBATCH                                                  0.000               0.000               0.000
     PRIORITY CLASS - IBMHI                                                    0.000               0.000               0.000
     PRIORITY CLASS - IBMLO                                                    0.000               0.000               0.000




SYSTEM SUMMARY                                                                     18 FEB        16:12:12     SYSTEM     PAGE     2
       205 OBSERVATIONS
INSTANTANEOUS ACTIVITY
     ACTIVE ECBS                                                             215.000             309.000             244.112
     TIME-SLICEABLE ECBS                                                       0.000               0.000               0.000
     TIME SLICES (PER SECOND)                                                  0.000               0.000               0.000
 
THREADS SUPPORT DATA
     MAXIMUM THREADS PER PROCESS                                               0.000               5.000               4.643
 
     HIGH WATER MARK NUMBER OF THREADS ACTIVE IN A PROCESS:                        5
 
EQUATIONS USED TO MANIPULATE DATA
 
WEIGHTED MESSAGE RATE = (HS MSG.PROCESSED-HS MSG.ROUTED) + (LS MSG.RATE *   5  ) + (HS MSG.ROUTED * 0.3  ) + TCP/IP WEIGHTED MSG
TPF CPU UTILIZATION   = ((TOTAL CLOCK TIME) - (SUM OF ALL IDLE LEVELS))/ (TOTAL CLOCK TIME)
 
**NOTES: 1) IF MAX OR MEAN = INFINITY, THEN THE WEIGHTED MESSAGE RATE = 0
         2) ACTIVE ECBS AND ACTIVE MESSAGE RATE ARE ANALOGOUS TERMS
         3) CPU UTILIZATION IS COMPUTED USING THE SYSTEM WAIT STATE PERCENTAGE


I-STREAM SUMMARY REPORT                                                            18 FEB        16:12:12     SYSTEM     PAGE     3
       205 OBSERVATIONS
THE MAIN I-STREAM IS                              I-STREAM 1
NUMBER  OF  I-STREAMS IN USE                              10
NUMBER OF I-STREAMS CAPABLE OF BEING USED                 10
 
                                                          I-STREAM  1         I-STREAM  2         I-STREAM  3         I-STREAM  4
 
CPU  HARDWARE  ADDRESS                                              0                   1                   2                   3
 
PROCESSOR  UTILIZATION
     IDLE LEVEL  3 - NOT ENOUGH FRMS FOR INPUT LIST               0.0%                0.1%                0.2%                0.1%
     IDLE LEVEL 11 - NO WORK FOR TPF                              0.0%                0.0%               61.9%               62.3%
 
     CPU  UTILIZATION                                           100.0%               99.9%               37.9%               37.6%
 
     SMOOTHED CPU UTILIZATION (MEAN)                             13.8%               11.6%                8.0%                8.1%
     ADJUSTED CPU UTILIZATION (MEAN)                             13.6%               11.1%                8.0%                8.1%
 
INSTANTANEOUS  ACTIVITY
     ACTIVE  ECBS  (MEAN)                                      14.244              31.678              30.976              30.971
 
TPF CPU LIST LENGTH (MEAN)
     CROSS                                                      2.001               0.452               0.338               0.343
     READY                                                      2.150               0.600               0.126               0.149
     INPUT                                                      1.723               0.611               0.008               0.008
     DEFERRED                                                   0.884               0.000               0.000               0.000
 
     TOTAL SUSPEND LIST ECBS (MEAN)                             0.005               0.000               0.000               0.000
          SUSPEND LIST LOW-PRIORITY ECBS (MEAN)                 0.000               0.000               0.000               0.000
          SUSPEND LIST TIME-SLICED ECBS (MEAN)                  0.000               0.000               0.000               0.000
     VCT (MEAN)                                                 0.000               0.000               0.000               0.000
 
INPUT  MESSAGES  PER  SECOND
     CREATED  ENTRIES  (VIA SWISC)                              0.028               0.016               0.102               0.100
     ROUTED  ENTRIES  (VIA SWISC)                               0.067               0.002               0.390               0.374
 
     MILS  PER  ROUTED  ENTRY                                     -            499500.000             971.794            1005.347
 
EQUATIONS  USED  TO  MANIPULATE  DATA
 
MILS PER ROUTED ENTRY = (CPU UTILIZATION) / (ROUTED ENTRIES)
TPF CPU UTILIZATION   = ((TOTAL CLOCK TIME) - (SUM OF ALL IDLE LEVELS))/ (TOTAL CLOCK TIME)
 
**NOTES: 1) SMOOTHED CPU UTILIZATION IS COMPUTED ONCE PER SECOND USING A SMOOTHING ALGORITHM
         2) ADJUSTED CPU UTILIZATION DOES NOT INCLUDE TIME SPENT WHILE EXECUTING TASKS
            THAT ARE EITHER DELAYED OR DEFERRED
         3) MILS PER ROUTED ENTRY NOT MEANINGFUL FOR MAIN I-STREAM BECAUSE MAIN I-STREAM
            DOES WORK ON BEHALF OF APPLICATION I-STREAMS
I-STREAM SUMMARY REPORT                                                            18 FEB        16:12:12     SYSTEM     PAGE     4
       205 OBSERVATIONS
THE MAIN I-STREAM IS                              I-STREAM 1
 
NUMBER  OF  I-STREAMS IN USE                              10
NUMBER OF I-STREAMS CAPABLE OF BEING USED                 10
 
                                                          I-STREAM  5         I-STREAM  6         I-STREAM  7         I-STREAM  8
 
CPU  HARDWARE  ADDRESS                                              4                   5                   6                   7
 
PROCESSOR  UTILIZATION
     IDLE LEVEL  3 - NOT ENOUGH FRMS FOR INPUT LIST               0.1%                0.2%                0.2%                0.2%
     IDLE LEVEL 11 - NO WORK FOR TPF                             61.4%               59.5%               60.6%               60.2%
 
     CPU  UTILIZATION                                            38.5%               40.3%               39.2%               39.6%
 
     SMOOTHED CPU UTILIZATION (MEAN)                              8.1%                8.0%                8.1%                8.2%
     ADJUSTED CPU UTILIZATION (MEAN)                              8.1%                8.0%                8.1%                8.2%
 
INSTANTANEOUS  ACTIVITY
     ACTIVE  ECBS  (MEAN)                                      31.273              31.400              31.946              32.434
 
TPF CPU LIST LENGTH (MEAN)
     CROSS                                                      0.349               0.361               0.375               0.361
     READY                                                      0.138               0.179               0.229               0.163
     INPUT                                                      0.010               0.010               0.018               0.008
     DEFERRED                                                   0.000               0.000               0.000               0.000
 
     TOTAL SUSPEND LIST ECBS (MEAN)                             0.000               0.000               0.000               0.000
          SUSPEND LIST LOW-PRIORITY ECBS (MEAN)                 0.000               0.000               0.000               0.000
          SUSPEND LIST TIME-SLICED ECBS (MEAN)                  0.000               0.000               0.000               0.000
     VCT (MEAN)                                                 0.000               0.000               0.000               0.000
 
INPUT  MESSAGES  PER  SECOND
     CREATED  ENTRIES  (VIA SWISC)                              0.109               0.098               0.109               0.114
     ROUTED  ENTRIES  (VIA SWISC)                               0.367               0.371               0.381               0.383
 
     MILS  PER  ROUTED  ENTRY                                1049.046            1086.253            1028.871            1033.942
 
EQUATIONS  USED  TO  MANIPULATE  DATA
 
MILS PER ROUTED ENTRY = (CPU UTILIZATION) / (ROUTED ENTRIES)
TPF CPU UTILIZATION   = ((TOTAL CLOCK TIME) - (SUM OF ALL IDLE LEVELS))/ (TOTAL CLOCK TIME)
 
**NOTES: 1) SMOOTHED CPU UTILIZATION IS COMPUTED ONCE PER SECOND USING A SMOOTHING ALGORITHM
         2) ADJUSTED CPU UTILIZATION DOES NOT INCLUDE TIME SPENT WHILE EXECUTING TASKS
            THAT ARE EITHER DELAYED OR DEFERRED
         3) MILS PER ROUTED ENTRY NOT MEANINGFUL FOR MAIN I-STREAM BECAUSE MAIN I-STREAM
            DOES WORK ON BEHALF OF APPLICATION I-STREAMS
I-STREAM SUMMARY REPORT                                                            18 FEB        16:12:12     SYSTEM     PAGE     5
       205 OBSERVATIONS
THE MAIN I-STREAM IS                              I-STREAM 1
 
NUMBER  OF  I-STREAMS IN USE                              10
NUMBER OF I-STREAMS CAPABLE OF BEING USED                 10
 
                                                          I-STREAM  9         I-STREAM 10
 
CPU  HARDWARE  ADDRESS                                              8                   9
 
PROCESSOR  UTILIZATION
     IDLE LEVEL  3 - NOT ENOUGH FRMS FOR INPUT LIST               0.2%                0.1%
     IDLE LEVEL 11 - NO WORK FOR TPF                             68.6%               70.5%
 
     CPU  UTILIZATION                                            31.2%               29.4%
 
     SMOOTHED CPU UTILIZATION (MEAN)                              7.7%                7.9%
     ADJUSTED CPU UTILIZATION (MEAN)                              7.7%                7.9%
 
INSTANTANEOUS  ACTIVITY
     ACTIVE  ECBS  (MEAN)                                       4.132               5.059
 
TPF CPU LIST LENGTH (MEAN)
     CROSS                                                      0.164               0.167
     READY                                                      0.065               0.063
     INPUT                                                      0.005               0.004
     DEFERRED                                                   0.000               0.000
 
     TOTAL SUSPEND LIST ECBS (MEAN)                             0.000               0.000
          SUSPEND LIST LOW-PRIORITY ECBS (MEAN)                 0.000               0.000
          SUSPEND LIST TIME-SLICED ECBS (MEAN)                  0.000               0.000
     VCT (MEAN)                                                 0.000               0.000
 
INPUT  MESSAGES  PER  SECOND
     CREATED  ENTRIES  (VIA SWISC)                              0.097               0.129
     ROUTED  ENTRIES  (VIA SWISC)                               0.378               0.365
 
     MILS  PER  ROUTED  ENTRY                                 825.396             805.479
 
EQUATIONS  USED  TO  MANIPULATE  DATA
 
MILS PER ROUTED ENTRY = (CPU UTILIZATION) / (ROUTED ENTRIES)
TPF CPU UTILIZATION   = ((TOTAL CLOCK TIME) - (SUM OF ALL IDLE LEVELS))/ (TOTAL CLOCK TIME)
 
**NOTES: 1) SMOOTHED CPU UTILIZATION IS COMPUTED ONCE PER SECOND USING A SMOOTHING ALGORITHM
         2) ADJUSTED CPU UTILIZATION DOES NOT INCLUDE TIME SPENT WHILE EXECUTING TASKS
            THAT ARE EITHER DELAYED OR DEFERRED
         3) MILS PER ROUTED ENTRY NOT MEANINGFUL FOR MAIN I-STREAM BECAUSE MAIN I-STREAM
            DOES WORK ON BEHALF OF APPLICATION I-STREAMS
SYSTEM SUMMARY REPORT                                                              18 FEB        16:12:12     SYSTEM     PAGE     6
       205 OBSERVATIONS
TPF WORKING STORAGE UTILIZATION
 
BLOCK TYPE                BYTES       BLOCKS      MEAN BLOCKS    MEAN BLOCKS                      UTILIZATION
                        ALLOCATED    ALLOCATED     AVAILABLE       IN USE              MIN            MAX           MEAN
LIOCB                   1142784           4464         4272.317       191.6           3.71%          5.77%          4.29%
LECB                   15974400           1300         1049.405       250.5          16.92%         24.30%         19.27%
LSWB                     897024           1752         1364.937       387.0          13.69%         32.24%         22.09%
LCOMMON                 2523136            616          589.229        26.7           3.57%          6.81%          4.34%
LFRAME                 25395200           6200         3282.351      2917.6          32.54%         75.72%         47.05%
 
TOTAL                  45932544
 
MAXIMUM NUMBER OF FRAMES ON FRAME PENDING LIST:                   70
 
SYSTEM HEAP UTILIZATION
 
   SYSTEM HEAP SIZE:         10485760 BYTES,                   2560 PAGES
**** HIGHWATER MARK:          5906432 BYTES IN USE,            1442 FRAMES IN USE,  56.328 PERCENT IN USE
**** THE HIGHWATER MARK OCCURRED PRIOR TO THIS DATA COLLECTION.
 
OBSERVATION MINIMUM:          2658304 BYTES IN USE,             649 FRAMES IN USE,  25.351 PERCENT IN USE
               MEAN:          2869596                           700                 27.366
            MAXIMUM:          4182016                          1021                 39.882
 
OBSERVATION MINIMUM:               11 SHARED MEMORY FRAMES IN USE,                       1 SHARED MEMORY SEGMENTS IN USE
               MEAN:               11                                                    1
            MAXIMUM:               11                                                    1
 
OBSERVATION MINIMUM:           14.100 PERCENT OF ALL FRAMES IN USE
               MEAN:           24.303
            MAXIMUM:           33.934
 
OBSERVATION MINIMUM:            3.903 SYSTEM HEAP REQUESTS PER SECOND
               MEAN:           14.038
            MAXIMUM:           22.460
 
THERE WERE NOT ANY UNSUCCESSFUL SYSTEM HEAP REQUESTS.
 
 
SYSTEM SUMMARY REPORT                                                              18 FEB        16:12:12     SYSTEM     PAGE     7
       205 OBSERVATIONS
PROGRAM STORAGE UTILIZATION
 
STORAGE RESERVED FOR 24-BIT CRPA                           6200000 BYTES     (   6.2 MEGABYTES)
STORAGE AVAILABLE IN THE 24-BIT CRPA                       5702896 BYTES     (   5.7 MEGABYTES)
24-BIT CRPA FULL, PROGRAM NOT LOADED IN 24-BIT AREA              0 OCCURRENCES
 
STORAGE RESERVED FOR 31-BIT CRPA                           8200000 BYTES     (   8.2 MEGABYTES)
STORAGE AVAILABLE IN THE 31-BIT CRPA                            16 BYTES     (   0.0 MEGABYTES)
31-BIT CRPA FULL, PROGRAM NOT LOADED IN 31-BIT AREA              0 OCCURRENCES
 
STORAGE RESERVED FOR PAT                                    984096 BYTES     (   1.0 MEGABYTES)
PAT SLOTS ALLOCATED                                          10249 SLOTS
STORAGE RESERVED FOR EXTRA PAT                              192024 BYTES     (   0.2 MEGABYTES)
EXTRA PAT SLOTS ALLOCATED FOR E-TYPE LOADER                   2000 SLOTS
 
COMMIT/ROLLBACK DATA
  RECOVERY LOG TRACK BUFFERS ALLOCATED                                10
  RECOVERY LOG RECORDS RESIDE ON SUBSYSTEM                           SSN
  RECOVERY LOG RECORDS ALLOCATED (THIS PROCESSOR)                   2040
  RECOVERY LOG TRACKS ALLOCATED (THIS PROCESSOR)                     170
  RECOVERY LOG TRACKS RESERVED FOR RECOVERY (THIS PROCESSOR)          50
  RECOVERY LOG TRACKS IN USE - MAXIMUM SINCE LAST IPL                  8
  COMMIT SCOPE BUFFERS PER COMMIT SCOPE-USER MAX                       0
  COMMIT SCOPE BUFFERS PER COMMIT SCOPE-MAX SINCE IPL                 19
 
                                                                     MIN                 MAX                MEAN
  WLOGC BLOCKED CONDITIONS, PER SECOND                             0.000               0.000               0.000
  RECOVERY LOG TRACK WRITES, PER SECOND                            0.434               6.250               1.972
  COMMITS, PER SECOND                                              0.579               7.781               3.269
  ROLLBACKS, PER SECOND                                            0.000               0.000               0.000
  381-BYTE COMMIT BUFFERS IN USE IN VFA                            0.000               0.000               0.000
  1055-BYTE COMMIT BUFFERS IN USE IN VFA                           0.000               0.000               0.000
  4095-BYTE COMMIT BUFFERS IN USE IN VFA                           0.000               0.000               0.000

SYSTEM SUMMARY REPORT                                                              18 FEB        16:12:12     SYSTEM     PAGE     8
       205 OBSERVATIONS
 
TPF SHUTDOWN CONDITIONS (TOTAL FOR ALL I-STREAMS)
TASK DESCRIPTION                                SHUTDOWN LEVEL           NUMBER OF SHUTDOWN OCCURRENCES
INPUT LIST                          MORE THAN  975 ACTIVE     ECB BLOCKS         --
INPUT LIST                          LESS THAN 1550 AVAILABLE  FRM BLOCKS          2
INPUT LIST                          LESS THAN   92 AVAILABLE  COM BLOCKS         --
INPUT LIST                          LESS THAN  325 AVAILABLE  ECB BLOCKS         --
INPUT LIST                          LESS THAN  263 AVAILABLE  SWB BLOCKS         --
INPUT LIST                          LESS THAN  446 AVAILABLE  IOB BLOCKS         --
DEFERRED LIST                       MORE THAN 1170 ACTIVE     ECB BLOCKS         --
 
TIME AVAILABLE SUPERVISOR           LESS THAN 3100 AVAILABLE  FRM BLOCKS         --
TIME AVAILABLE SUPERVISOR           LESS THAN  308 AVAILABLE  COM BLOCKS         --
TIME AVAILABLE SUPERVISOR           LESS THAN  650 AVAILABLE  ECB BLOCKS         --
 
CREM MACRO                          LESS THAN  175 AVAILABLE  SWB BLOCKS         --
CREM MACRO                          MORE THAN   50 ACTIVE     RDY BLOCKS         --
CRED MACRO                          LESS THAN  438 AVAILABLE  SWB BLOCKS         --
CREX MACRO                          LESS THAN  438 AVAILABLE  SWB BLOCKS         --
BSC INPUT                           MORE THAN  300 ACTIVE     INP BLOCKS         --
3270 LOCAL INPUT                    MORE THAN  300 ACTIVE     INP BLOCKS         --
AI INPUT                            MORE THAN    0 ACTIVE     INP BLOCKS         --
 
                                                                         NUMBER OF SLOWDOWN OCCURRENCES
SNA NCP SLOWDOWNS                                                                --
SNA CTC INPUT SLOWDOWNS                                                          --
SNA CTC OUTPUT SLOWDOWNS                                                         --

Maximum, minimum, and mean values are shown for each variable. For variables that are continuous counts reduced to a per-second basis, the maximum and minimum values are actually mean values for a single collection interval, and the mean value is the mean of all the intervals, that is, a mean of all the interval means. The maximum and minimum values indicate the degree of variation in system load; extreme values should lead you to the plot reports, where individual interval values can be examined. Note that the maximum and minimum values for different variables do not necessarily occur during the same interval.

For mean values, a long interval has a smoothing effect on the peaks and valleys of a sample set. For example, a peak might show a much higher maximum for a one-second interval than for a 15-second interval. Appropriate run lengths, periods, and intervals must be chosen to provide detailed data for each individual requirement.
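The smoothing effect described above can be illustrated with a minimal sketch. The per-second counts below are hypothetical; TPF data collection would supply the real interval values.

```python
# How interval length smooths peaks in a sampled count (hypothetical data).
def interval_means(samples, interval):
    """Mean of each consecutive group of `interval` samples."""
    return [sum(samples[i:i + interval]) / interval
            for i in range(0, len(samples) - interval + 1, interval)]

samples = [3, 4, 5, 40, 5, 4, 3, 4, 5, 4, 3, 4, 5, 4, 3]  # one-second peak of 40

one_sec = interval_means(samples, 1)   # no smoothing
five_sec = interval_means(samples, 5)  # peak averaged into its interval

print(max(one_sec))    # 40.0 - the raw peak
print(max(five_sec))   # 11.4 - the same peak smoothed over 5 seconds
print(round(sum(five_sec) / len(five_sec), 2))  # 6.4 - mean of the interval means
```

The one-second maximum preserves the peak; the five-second maximum has already absorbed it, which is why interval choice matters for each measurement objective.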

The shutdown levels for system resources are established as absolute numbers of system resources. These levels can be redefined using the ZCTKA command:

Note:
CREMC activity is postponed whenever the ready list of the target I-stream exceeds 50 entries in length. This is a throttling mechanism and should not be considered a problem unless other TPF shutdown conditions are also reported.

The list should be kept current and readily available for the analyst working with the data reduction reports. A record of all changes to these values should be maintained in a history file so that reduction reports for a particular calendar time period can be analyzed with respect to the proper level setting.

Input Messages Per Second

Inputs are shown on a per second basis and consist of:

If utilities such as schedule change or file maintenance (or other non-unit-record tasks) are run during data collection, the results of the reduction will be skewed by the load these utilities supply (one input message can produce an inordinate amount of file activity).

The ratio of one type of message to another is constantly changing. To relate one data collection to another, or one system to another, a common denominator is needed. Therefore, all message types, except TCP/IP native stack input messages, are expressed in terms of high-speed messages. See TPF Transmission Control Protocol/Internet Protocol for more information about counts for TCP/IP native stack messages.

The weighted message rate is calculated using the following factors:

System services messages are not included in the weighted message rate because they are considered network overhead and are reported solely for information.

Created entries are not included in the weighted message rate because the majority are generated by, and considered to be part of, some other, external message that is already counted.

The weighted message rate algorithm is as follows:

WEIGHTED MESSAGE RATE = (HS MSG PROC) - (HS MSG ROUT) + (LS MSG * WT) + (HS MSG ROUT * WT2) + (TCP/IP WEIGHTED MSG)

Weighting factors WT and WT2 are preprocessor statements entered at SIP time and can be adjusted for each system. However, the most commonly used value for WT is 5. The most commonly used value for WT2 is 0.3; therefore, WT2 is coded as 3 in the DATACO macro.

Experience with existing systems indicates that a teletype message requires four to five times the processing required by a high-speed message; therefore, most systems use the default value. Processing required by the average routed message is roughly three-tenths (.3) of that required by the average high-speed message. Again, most systems use the default value.
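The weighted message rate algorithm above can be sketched directly. The per-second rates in the example are hypothetical; WT and WT2 use the common default values described in the text.

```python
# Sketch of the weighted message rate algorithm (hypothetical rates).
WT = 5.0    # low-speed (teletype) weighting factor, the common default
WT2 = 0.3   # routed-message weighting factor (coded as 3 in the DATACO macro)

def weighted_message_rate(hs_proc, hs_rout, ls_msg, tcpip_weighted):
    # (HS MSG PROC) - (HS MSG ROUT) + (LS MSG * WT)
    #   + (HS MSG ROUT * WT2) + (TCP/IP WEIGHTED MSG)
    return hs_proc - hs_rout + ls_msg * WT + hs_rout * WT2 + tcpip_weighted

# e.g. 100 HS messages processed (20 of them routed), 4 LS, 10 TCP/IP weighted:
print(weighted_message_rate(100.0, 20.0, 4.0, 10.0))  # 116.0
```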

Weighting factors can also be established for a given TPF system through the real-time trace facility. Message types can be sized by comparing the number of ENTERs, FINDs, FILEs, MACROs, and so on. Total processing by message type is not easily available without special software or hardware monitors, but the comparison method yields satisfactory approximations for weighting purposes.

Note:
In the I-stream summary report, the counts for SWISC entries also include tpf_fork function calls.

Resource Utilization Per Message

Resource utilization per message involves two parameters: CPU busy time per weighted message as determined previously, and the number of bytes of working storage per active ECB. Frames allocated for system heap are not included in the number of bytes of working storage in use for this calculation. CPU busy time per weighted message is obtained by dividing TPF Processor Utilization by the Weighted Message Rate.
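The CPU-time-per-message calculation described above is simple division; a sketch with hypothetical values:

```python
# CPU busy time per weighted message (hypothetical figures).
def cpu_ms_per_weighted_msg(cpu_utilization_pct, weighted_msg_rate):
    # utilization% of 1000 ms of CPU per second, divided by messages per second
    return cpu_utilization_pct * 10.0 / weighted_msg_rate

# e.g. 58% TPF processor utilization at 116 weighted messages per second:
print(cpu_ms_per_weighted_msg(58.0, 116.0))  # 5.0 ms of CPU per weighted message
```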

Processor Utilization

In the System Summary Report, PAUSE COUNT is the number of times per second that the main I-stream intentionally ran exclusively. For a multiprocessor, this is also the number of times all application I-streams were paused. Pausing has no impact on uniprocessors because there is only a main I-stream; the information is nevertheless provided as a migration aid. MAIN I-STREAM WAIT TIME (MILS) is the amount of time that the main I-stream spent waiting for other I-streams to be paused. TIME SPENT WORKING WHILE PAUSED (MILS) is the amount of time that the main I-stream was actually working while all other I-streams were paused. The CPU utilization stated on the System Summary Report is computed from idle time and the collection interval time.

For the I-Stream Summary Report, data collection shows the processor utilization at each level and state and, in addition, gives TPF processor utilization. Data collection uses the time-of-day (TOD) clock to accumulate elapsed time in 12 idle levels for the TPF system, including the wait state. The idle levels indicate whether available working storage is too low or system activity (the number of active ECBs) is too high. Idle level 11 shows the percentage of time that the system (as far as TPF is concerned) is truly idle; that is, the control program has scanned the entire CPU loop and found no work to be done. In addition, any idle times greater than zero are listed on the I-Stream Summary Report.

The I-Stream Summary Report contains three utilization values. The first is calculated the same way as for the System Summary Report. The second (smoothed) utilization is the mean value of the utilization used by the work scheduler. The third (adjusted) utilization is calculated in the same manner as the first, except that the time spent processing DLAYC or DEFRC macros is subtracted from the utilization. The amount of time subtracted for each macro processed is an estimate based on the DLAYC or DEFRC and any associated processing. The adjusted utilization is reported to show the effort the TPF system is using to process actual work.
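The first and third utilization figures can be sketched as follows. The quantities are hypothetical, and the DLAYC/DEFRC time is an estimate, as the text notes.

```python
# Sketch of the basic and adjusted utilization calculations (assumed values).
def utilization(idle_time, interval):
    # fraction of the collection interval not spent idle
    return (interval - idle_time) / interval

def adjusted_utilization(idle_time, interval, est_dlayc_defrc_time):
    # same, but with estimated DLAYC/DEFRC processing time also removed
    return (interval - idle_time - est_dlayc_defrc_time) / interval

interval = 5.0      # collection interval, seconds
idle = 2.0          # accumulated idle time within the interval
dlayc_defrc = 0.5   # estimated time spent processing DLAYC/DEFRC macros

print(utilization(idle, interval))                        # 0.6
print(adjusted_utilization(idle, interval, dlayc_defrc))  # 0.5
```

The gap between the two values (here 10 percentage points) is effort the system spent on deferral mechanisms rather than actual work.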

The control parameters that trigger the various idle levels are established at system initialization time and must be adjusted for a particular system. The parameter settings are a compromise that allows maximum utilization of resources yet protects the system from catastrophic resource depletion during peak conditions. Any significant idle occupancy on levels 1-12 warrants investigation: it means either that the parameters are too restrictive or that the system is approaching maximum capacity because of working storage depletion.

Finally, when there is no work for the system, the system enters the wait state.

Collection Interval in Seconds

The collection interval is the actual duration of the sample rather than the specified duration. The specified and actual duration should be very nearly the same, but there may be some conditions that will cause slight variations in the actual duration.

TPF CPU List

The various lists appearing in the System Summary represent the job queues that are associated with the CPU loop. The sequence in which they appear also indicates the priority given to each type of job. The maximum, minimum, and mean values represent an actual count of the items taken twice during each data collection interval for sampling mode, and once for continuous mode. To analyze the list data, one must be knowledgeable of the CPU loop and the types of jobs placed on each of the lists by the control program. Because systems vary widely with regard to resources and load, it is difficult to quote numbers that one would expect to see on a particular list. However, guidelines may be established, and any significant variance warrants detailed analysis.

Low-Priority ECB Classifications

The low-priority ECB classification display shows the number of ECBs running for priority classes defined in the system. A priority class is assigned to an ECB when the LODIC macro is issued.

Instantaneous Activity

This is a snapshot of the instantaneous activity in the system. The number of active ECBs is snapped twice during each interval in sampling mode and once per interval in continuous mode. The active ECBs include those used for data collection and those used for unit record tasks; data reduction subtracts both to determine the number of truly active ECBs.

When related to inputs, the ratio of active ECBs to weighted message rate is an indicator of message life in the system, or throughput, excluding line queue and transmission times.

Example:

Active Messages = 10

Weighted Message Rate = 20 per second

Therefore, the average message life in the system is one-half second or 500 milliseconds. Long mean message life might indicate a bottleneck, which could be because of insufficient working storage, improper program allocation, insufficient number of files, large file queues, and so on.
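The message life calculation in the example above is the ratio of active messages to the weighted message rate (a form of Little's law):

```python
# Average message life: active messages divided by the weighted message rate.
def mean_message_life_ms(active_messages, weighted_msg_rate):
    return active_messages / weighted_msg_rate * 1000.0

# The example from the text: 10 active messages at 20 weighted messages/second.
print(mean_message_life_ms(10, 20.0))  # 500.0 milliseconds
```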

The maximum allowable number of active ECBs in a system is a control parameter that is established at initialization time. As with all control parameters, this number must be adjusted for each unique configuration. It must be high enough to allow full utilization of resources, yet low enough to protect the system during overloads. Correlation of active ECBs with message rates, working storage occupancy, idle occupancy, and input list helps eliminate the trial and error method of fine tuning all control parameters. ECBs that can be time-sliced are recorded here, along with the number of time slices that occur per second. ECBs are marked as being available for time slicing by the TMSLC macro.

Threads Support Data

To help ensure that there are enough threads defined to the TPF 4.1 system for any application that uses threads, data is collected from the following fields:

If a thread application, such as remote procedure call (RPC), is using a large number of threads compared to the number defined in keypoint A (CTKA), you can enter the ZCTKA ALTER command to change the maximum number of threads.

Equations Used to Manipulate Data

The possible complexity of the system operation and the amount and type of data collected could lead to questions of interpretation and meaning. For that reason data collection printouts include the equations used to compute:

Working Storage Configuration

The total working storage and the ratio of the number of one size block to another will vary from system to system. These parameters are adjustable and established at system initialization time. The number of blocks assigned to each pool is also printed on the System Summary Report.

In the System Summary Report, block occupancy (utilization) is expressed in two forms. The minimum, maximum, and mean block occupancies are expressed as decimal fractions (blocks in use divided by blocks allocated). Mean available block figures are the average of the actual number of blocks allocated but unused. As with all data collection variables, block occupancy must be related to other variables in order to be meaningful.

For each block size, the mean number in use multiplied by the number of bytes per block equals the mean bytes per pool in use. When the sum of all pools is divided by the number of active messages (as shown in the System Summary Report) the amount of working storage used on a per-message basis should approach system design criteria. Any great variance should be investigated.
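The per-message working storage calculation described above can be sketched with hypothetical pool figures (the block counts and sizes below are illustrative only, not taken from any report in this section):

```python
# Working storage used per active message (hypothetical pool figures).
pools = {            # mean blocks in use, bytes per block
    "ECB": (120, 4096),
    "SWB": (300, 1055),
    "FRM": (800, 4096),
}

mean_bytes_in_use = sum(n * size for n, size in pools.values())
active_messages = 10
bytes_per_message = mean_bytes_in_use / active_messages
print(bytes_per_message)  # 408482.0 bytes of working storage per message
```

This figure is then compared against the system design criteria; a large variance warrants investigation.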

Example

Assume that the ECB Block Occupancy is 10% and Idle Occupancy at Level 1 is quite high. Level 1 indicates that the system is not servicing the input list because of an insufficient number of available ECB blocks, yet the ECB pool is only 10% utilized. This indicates that the CPU loop parameter stating the number of ECB blocks that must be available is set too high, or the system is experiencing very unusual peaking conditions that are not reflected in mean utilization of the working storage pool.

The maximum number of frames on the frames pending list is collected to help determine if additional 4-KB frames are needed. If the maximum number is greater than 10% of the frames in the TPF 4.1 system, a system shutdown can occur because it is low on available frames. Frames that are released by threaded ECBs are placed on the frames pending list until it is safe to reuse them; for example, after a purge of the translation look-aside buffer (PTLB) is performed on all I-streams. If a large number of frames remain on the frames pending list, additional 4-KB frames can be generated in the TPF 4.1 system. See TPF System Generation for more information about the CORREQ macro.
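The 10% guideline above amounts to a simple threshold check; a sketch:

```python
# Flag when the maximum frames-pending count exceeds 10% of the 4-KB frames
# generated in the system (the figures below are hypothetical).
def frames_pending_warning(max_frames_pending, total_frames):
    return max_frames_pending > 0.10 * total_frames

print(frames_pending_warning(5200, 40000))  # True - consider generating more frames
print(frames_pending_warning(1500, 40000))  # False
```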

Commit/Rollback Data

Data is collected on the following activities for TPF transaction services:

TPF Shutdown Conditions

As noted previously, control parameters are used to ensure that maximum utilization of system resources can be realized, while at the same time protecting against irrecoverable conditions caused by inordinate peak loads. For instance, there is little to be gained and much to be lost by activating a message from the input list when system activity (indicated by the ECB count) is already very high or the number of available working storage blocks is extremely low.

The control parameters are user-specified and will vary from system to system. Furthermore, there is no exact or scientific method for establishing the value of the various parameters. The System Summary Report is a valuable tool to assist in this effort because the report lists the user-assigned value of each parameter and calculates the number of data collection samples where shutdown occurred because the limits of the particular parameter were exceeded.

System Pools Summary Report

This report lists all active pool sections for the reduced subsystem. The dispensed addresses are shown for each active section. The number of return hits per second is shown for short-term pool sections only. The totals for all short-term sections are listed at the end of the report. The report also lists the number of times that reorder did not complete before active buffer depletion. The System Pools Summary Report is printed as part of SYSTEM reduction only (see Figure 6).

Figure 6. System Pools Summary Report


SYSTEM POOLS SUMMARY                                     BSS   SUBSYSTEM           18 APR        09:14:33     SYSTEM     PAGE     7
        30 OBSERVATIONS
 
      POOL      |   SET       REORDER     DISPENSED   DISPENSED   STP RETURN
      SECTION   |   SIZE      TIME(MIN)   ADDRS/SEC   ADDRS/MSG   HITS/SEC
     -----------|-----------|-----------|-----------|-----------|-----------|
                |
      DEVA-SST  |       2         0.00        0.60        1.61        0.31
      DEVA-SDP  |       2         0.00        0.40        1.09        N/A
      DEVA-LST  |       2         0.00        0.57        1.52        0.00
      DEVA-LDP  |       1         0.00        0.00        0.00        N/A
      DEVA-4ST  |       2         3.16        0.89        2.40        0.00
      DEVA-4DP  |       1         0.00        1.46        3.92        N/A
      DEVB-4ST  |       1         0.00        0.00        0.00        0.00
      DEVB-4DP  |       1         0.00        0.00        0.00        N/A
      DEVA-4D6  |       2        25.03        1.24        3.31        N/A
      DEVB-4D6  |       2        21.80        1.77        4.75        N/A
 
                SHORT TERM POOL TOTALS (ALL SECTIONS)
               ---------------------------------------
                     TOTAL ADDRESSES DISPENSED -     1250
                     TOTAL ADDRESSES RETURNED  -      180
 
                     REORDER DID NOT COMPLETE BEFORE ACTIVE BUFFER DEPLETION DURING    0 INTERVALS
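The DISPENSED ADDRS/MSG column in the report above appears to be the per-second dispense rate divided by the input message rate; this relationship is an inference from the sample figures, not a documented formula, so treat it as an assumption:

```python
# Assumed relationship: addresses per message = addresses/sec / messages/sec.
def addrs_per_msg(dispensed_per_sec, msg_rate_per_sec):
    return dispensed_per_sec / msg_rate_per_sec

# The DEVA-SST row (0.60 addrs/sec, 1.61 addrs/msg) implies a message rate
# of roughly 0.37/sec during this collection:
print(round(addrs_per_msg(0.60, 0.37), 2))  # ~1.62, close to the reported 1.61
```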
 

TPF Logical Record Cache Summary Report

This report includes the following information about the logical record cache to help you determine the most effective cache size for your installation:

The following shows an example of a TPF Logical Record Cache Summary report.

Figure 7. TPF Logical Record Cache Summary Report


 TPF LOGICAL RECORD CACHE SUMMARY
        11 OBSERVATIONS
 
CACHE NAME    CACHE SIZE   TIMEOUT   READ CALLS   READ MISSES  CASTOUTS    HIT RATIO   UPDATE        READ BUFFER   DUPLICATE
              (ENTRIES)     VALUE    PER SECOND   PER SECOND   PER SECOND              INVALIDATES   INVALIDATES   HASH
                          (SECONDS)                                                    PER SECOND    PER SECOND    REFUSED
 
IDNSHOSTADDR          10       3600        0.00          0.00        0.00      0.00%          0.00          0.00           0
IDNSHOSTNAME          10       3600        0.00          0.00        0.00      0.00%          0.00          0.00           0
MAIL_CACHES           10       3600        0.00          0.00        0.00      0.00%          0.00          0.00           0
TPF_FS_DIR           200         60        0.35          0.24        0.00     30.76%          0.00          0.00           0
TPF_FS_INODE         200         60        0.53          0.01        0.00     98.30%          0.11          0.01           0
CASH                  16          3      683.22          1.20        0.00     99.82%          1.04          1.17           0
MONEY                 16          3      970.67          2.72        0.00     99.71%          2.97          2.65           0
GOLDBARS              16          3      701.56          0.88        0.00     99.87%          1.57          0.86           0
INGOTS                16          3     1201.32          2.50        0.00     99.84%          1.19          0.82           0
DOLLARS               16          3      664.93          1.00        0.00     99.86%          1.29          1.03           0
ANDCENTS              16          3      769.09          1.04        0.00     99.84%          0.92          1.31           0
PENNIES               16          3      850.94          1.33        0.00     99.81%          2.51          2.15           0
CACH2                 16          3     1157.65          2.19        0.00     99.86%          1.47          0.96           0
CACH4                 16          3      734.97          0.97        0.00     99.70%          1.49          1.20           0
CACH6                 16          3      411.13          1.21        0.00     99.84%          1.19          0.82           0
LOCALC                16          3        0.00          0.00        0.00      0.00%          0.00          0.00           0
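Assuming the conventional definition of cache hit ratio, the HIT RATIO column above can be approximated from READ CALLS and READ MISSES; the displayed values are rounded, so recomputed ratios may differ slightly in the last digit:

```python
# Assumed definition: hits = read calls - read misses.
def hit_ratio_pct(read_calls, read_misses):
    if read_calls == 0:
        return 0.0
    return (read_calls - read_misses) / read_calls * 100.0

# The CASH row: 683.22 read calls/sec, 1.20 read misses/sec.
print(round(hit_ratio_pct(683.22, 1.20), 2))  # 99.82, matching the report
```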
 
 

Coupling Facility Data Collection Reports

Data collection for coupling facility (CF) support, CF record lock support, and logical record cache support provides data to effectively manage the performance of a CF, the CF list structures, and the CF cache structures. The following reports are provided:

See Figure 8 for an example of the Coupling Facility Usage Summary report. This report lists all the CFs defined in the processor configuration.

The Coupling Facility Usage Summary report provides:

Figure 8. Coupling Facility Usage Summary Report


COUPLING FACILITY USAGE SUMMARY                                                    14 SEP        14.19.15     SYSTEM     PAGE     8
       148 OBSERVATIONS
 
CF NAME               STORAGE IN USE     CHGD?         UTILIZATION       REPL?
                     MB        PERCENT               MEAN        MAX
 
TESTCF1             5.750       24.46%              2.65%      3.23%
TESTCF2             6.500       27.65%              0.01%      0.33%

See Figure 9 for an example of the Coupling Facility Structure Summary report. This report lists all the CF structures sorted by the CF names on which the CF structures reside. This particular report shows multiple CFs and many CF structures.

The Coupling Facility Structure Summary report provides:

Figure 9. Coupling Facility Structure Summary Report


COUPLING FACILITY STRUCTURE SUMMARY                                                14 SEP        14.19.15
        11 OBSERVATIONS
 
STRUCTURE NAME                STRUCTURE SIZE            REQS/SEC    SERVICE TIME   QUEUE TIME    REPL?
                             MB        PERCENT                           USEC
 
CF: FORTKNOX
 
ANDCENTS                    0.250        1.28%              2.40         9411     1520
CASH                        0.250        1.28%              2.31        14067      991
DOLLARS                     0.250        1.28%              2.25        20184     2251
GOLDBARS                    0.250        1.28%              2.50         9282     6744
INGOTS                      0.250        1.28%              5.54        11443      863
ITPFLK1_TU0001              5.000       25.64%             55.79        10884      459
ITPFLK2_TU0001              1.000        5.12%             67.65         6802      393
MONEY                       0.250        1.28%              5.79        11229      553
PENNIES                     0.250        1.28%              2.33         8262      742
TPF_FS_DIR                  0.250        1.28%              0.10       104418     2196
 
CF: PIGGYBNK
 
CACH2                       0.250        0.48%              4.80        13842     4668
CACH4                       0.250        0.48%              2.51         5637     2558
CACH6                       0.250        0.48%              2.80         8664      689
ITPFLK1_TU0001              9.250       17.96%             23.12         8822     3060
ITPFLK2_TU0001              1.000        1.94%             23.44         8771     1133
TPF_FS_INODE                0.250        0.48%              0.22        10635    15572
 

See Figure 10 for an example of the Coupling Facility Locking Summary report. This report provides the mean and maximum values for each variable.

The Coupling Facility Locking Summary report provides:

Figure 10. Coupling Facility Locking Summary Report


COUPLING FACILITY LOCKING SUMMARY                                                  14 SEP      14:19:15
        11 OBSERVATIONS
 
CF NAME      MODULES  CHGD?       OPERATIONS/REQUEST    LISTS         LIST DEPTH                 LOCKS HELD       LOCKS/LIST   REPL?
                                    MEAN       MAX                 MEAN         MAX           MEAN          MAX      MEAN
 
FORTKNOX          14                1.07        10       4929      0.11           8        1223.27         1666         0.25
PIGGYBNK           9                1.18        11       8427      0.11           4        1003.45         1352         0.12
 
 

See Figure 11 for an example of the Coupling Facility Caching Summary report. This report provides the cache size and castout values for each CF cache.

The Coupling Facility Caching Summary report provides:

Figure 11. Coupling Facility Caching Summary Report


COUPLING FACILITY CACHING SUMMARY                                                  27 JUN        07:03:42     SYSTEM     PAGE    12
        18 OBSERVATIONS
 
CACHE NAME      CACHE SIZE    CASTOUTS
                 (ENTRIES)    PER SECOND
 
CF: CFSEARS
 
TPF_FS_DIR             400          0.00
TPF_FS_INODE           300          0.00
          

TPF Internet Mail Server Summary Report

This report includes the following information about the TPF Internet mail server to help you determine the most effective configuration for your installation:

If the queue length of the active queue is too large for your environment or if the queue length increases over time, there are not enough delivery managers defined for the local or remote active queue. Similarly, if the queue length of the deferred queue is too large or increases, there are not enough delivery managers defined for the deferred queue. The number of delivery managers is defined in the TPF configuration file (/etc/tpf_mail.conf). See TPF Transmission Control Protocol/Internet Protocol for more information about the TPF configuration file.

The following shows an example of a TPF Internet mail server summary report.

Figure 12. TPF Internet Mail Server Summary Report


TPF INTERNET MAIL SERVER SUMMARY                                                   16 FEB        10:24:36     SYSTEM     PAGE    11
       300 OBSERVATIONS
 
                                    IN                       OUT                     BOUNCED
                                                   LOCAL            REMOTE
 
MESSAGES PER SECOND               4.77              0.63              0.00              0.09
 
CHARACTERS PER MESSAGE             602                                   0
 
                                ACTIVE          DEFERRED
 
QUEUE LENGTH (BLOCKS)            80.62            176.12
 
DELIVERY MANAGERS                LOCAL            REMOTE          DEFERRED
 
MEAN NUMBER ACTIVE                9.37              7.62              0.00
MAXIMUM ALLOWED                  10.00             10.00             10.00

TCP/IP Weighted Input Messages by Application Report

This report includes information about the number of TCP/IP weighted input messages for each TCP/IP native stack application for which there is activity. The TCP/IP weighted message by application report provides the following information:

The information in this report is sorted in descending order by activity; that is, the application with the highest activity is shown at the top of the report.

The following shows an example of a TCP/IP weighted input messages by application report.

Figure 13. TCP/IP Weighted Input Messages by Application Report


TCP/IP WEIGHTED MESSAGES BY APPLICATION                                            04 FEB        14:44:57     SYSTEM     PAGE     9
 
APPLICATION    PORT  WEIGHT        WEIGHTED         WEIGHTED        PERCENT     CUMULATIVE
                                   MESSAGES         MSGS/SEC       OF TOTAL        PERCENT
 
TEST-9981      9981     ***          278463           153.36         17.07%         17.07%
TEST-9980      9980     ***          278304           153.27         17.06%         34.14%
TEST-9982      9982     ***          278138           153.18         17.05%         51.20%
TEST-9984      9984     ***          278041           153.12         17.05%         68.25%
TEST-9983      9983     ***          258880           142.57         15.87%         84.13%
TEST-9985      9985     ***          258382           142.30         15.84%         99.97%
OTHER                   ***             169             0.09          0.01%         99.98%
RIP             520      50              65             0.04          0.00%         99.99%
FTP-DATA         20     100              58             0.03          0.00%         99.99%
TFTP             69     100              42             0.02          0.00%        100.00%
 
TOTAL                               1630542           897.98        100.00%        100.00%
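The PERCENT OF TOTAL and CUMULATIVE PERCENT columns above are each application's weighted messages as a share of the grand total, accumulated down the report. Using the counts from the sample report:

```python
# Percent and cumulative percent, from the sample report's weighted counts.
counts = [278463, 278304, 278138, 278041, 258880, 258382, 169, 65, 58, 42]
total = sum(counts)

cumulative = 0.0
rows = []
for n in counts:
    pct = n / total * 100.0
    cumulative += pct
    rows.append((round(pct, 2), round(cumulative, 2)))

print(total)    # 1630542, matching the report's TOTAL line
print(rows[0])  # roughly (17.08, 17.08); the report displays 17.07%
```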

ECB Frame Usage Summary Report

The number of frames held by exiting ECBs is reported in histogram form.

Figure 14. ECB Frame Usage Summary Report


TPF ECB FRAME USAGE SUMMARY REPORT                       SYSTEM  WIDE              27 FEB        14:18:27     SYSTEM     PAGE     7
 
MEAN TOTAL FRAMES USED DURING ECB LIFETIME                 1 FRAMES
 
   CLASS UPPER   FREQUENCY   PERCENT    FREQUENCY DIAGRAM  (SCALE =   968/1)
      LIMIT      OBSERVED    OF TOTAL
 
        0          77362       58.58%   |********************************************************************************
        1            428        0.32%   |*
        2            158        0.12%   |*
        3          19900       15.07%   |*********************
        4          34022       25.76%   |************************************
        5             41        0.03%   |*
        6             40        0.03%   |*
        7             19        0.01%   |*
        8              0        0.00%   |
        9             41        0.03%   |*
       10              0        0.00%   |
       11              0        0.00%   |
       12              0        0.00%   |
       13              0        0.00%   |
       14              0        0.00%   |
       15              0        0.00%   |
       16              0        0.00%   |
       17              0        0.00%   |
       18              0        0.00%   |
       19              0        0.00%   |
       20              0        0.00%   |
       25             20        0.02%   |*
       30             20        0.02%   |*
       35              0        0.00%   |
       40              1        0.00%   |*
       45              0        0.00%   |
       50              0        0.00%   |
       60              0        0.00%   |
       70              8        0.01%   |*
       80              0        0.00%   |
       90              0        0.00%   |
      100              0        0.00%   |
      120              0        0.00%   |
      140              0        0.00%   |
      160              0        0.00%   |
      180              0        0.00%   |
      200              0        0.00%   |
      220              0        0.00%   |
      240              0        0.00%   |
    > 240              0        0.00%   |

ECB Heap Area Usage Summary Report

The number of heap frames held by exiting ECBs is reported in histogram form.

Figure 15. Heap Area Usage Report


TPF ECB HEAP AREA USAGE SUMMARY REPORT                   SYSTEM  WIDE              27 FEB        14:18:27     SYSTEM     PAGE     8
 
MEAN FRAMES USED FOR HEAP STORAGE DURING ECB LIFETIME      0 FRAMES
MAXIMUM FRAMES USED FOR HEAP STORAGE BY AN ECB            58 FRAMES
 
   CLASS UPPER   FREQUENCY   PERCENT    FREQUENCY DIAGRAM  (SCALE =  1650/1)
      LIMIT      OBSERVED    OF TOTAL
 
        0         131970       99.93%   |********************************************************************************
        1             82        0.06%   |*
        2              0        0.00%   |
        3              0        0.00%   |
        4              0        0.00%   |
        5              0        0.00%   |
        6              0        0.00%   |
        7              0        0.00%   |
        8              0        0.00%   |
        9              0        0.00%   |
       10              0        0.00%   |
       11              0        0.00%   |
       12              0        0.00%   |
       13              0        0.00%   |
       14              0        0.00%   |
       15              0        0.00%   |
       16              0        0.00%   |
       17              0        0.00%   |
       18              0        0.00%   |
       19              0        0.00%   |
       20              0        0.00%   |
       25              0        0.00%   |
       30              0        0.00%   |
       35              0        0.00%   |
       40              0        0.00%   |
       45              0        0.00%   |
       50              0        0.00%   |
       60              8        0.01%   |*
       70              0        0.00%   |
       80              0        0.00%   |
       90              0        0.00%   |
      100              0        0.00%   |
      120              0        0.00%   |
      140              0        0.00%   |
      160              0        0.00%   |
      180              0        0.00%   |
      200              0        0.00%   |
      220              0        0.00%   |
      240              0        0.00%   |
    > 240              0        0.00%   |
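The frequency diagrams in the two reports above can be approximated with a short sketch. The abbreviated class upper limits and the scaling rule (SCALE = n/1, one asterisk per n observations, with a minimum of one asterisk for any nonzero class) are assumptions inferred from the sample output, not taken from the collector itself.

```python
# Sketch of building a frequency diagram like the ones above.
# Class limits are abbreviated; the scaling rule is an assumption.
from bisect import bisect_left

LIMITS = [0, 1, 2, 3, 4, 5, 10, 20, 40, 80, 160, 240]

def histogram(samples, limits=LIMITS, width=80):
    counts = [0] * (len(limits) + 1)           # final slot collects values > last limit
    for v in samples:
        counts[bisect_left(limits, v)] += 1    # first class whose upper limit >= value
    total = len(samples)
    scale = max(1, -(-max(counts) // width))   # ceil(peak / width), the "SCALE = n/1"
    labels = [str(l) for l in limits] + ["> %d" % limits[-1]]
    lines = []
    for label, n in zip(labels, counts):
        stars = "*" * -(-n // scale) if n else ""   # nonzero classes get >= 1 star
        lines.append("%8s %10d %7.2f%%   |%s" % (label, n, 100.0 * n / total, stars))
    return lines
```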

MPIF Configuration Report

This report contains static Multi-Processor Interconnect Facility (MPIF) information. It can be used to determine the general MPIF environment specifications. MPIF-related reports are omitted when MPIF is not active.

Figure 16. MPIF Configuration Report


MPIF CONFIGURATION                                       SYSTEM WIDE               27 FEB        14:18:27     SYSTEM     PAGE     9
CRITICAL ACTIVE ECB BLOCKS             :           0
NUMBER OF PROCESSORS                   :          10
NUMBER OF GLOBAL USERS                 :         100
NUMBER OF RESIDENT USERS               :          24
NUMBER OF CONNECTIONS                  :          90
NUMBER OF PATHS                        :          23
NUMBER OF PATH ACTIVATION NOTIFICATION :          90
NUMBER OF DIRECTORY NOTIFICATION       :          90
NUMBER OF CLASSES                      :           3
BUFFER SIZE                            :     2265088
INTERFACE VERSION NUMBER               :           3
NAME OF THE SYSTEM                     :    CPUB.JM
CONNECTION TIMEOUT INTERVAL            :          80
1ST LEVEL PATH TIMEOUT INTERVAL        :          81
2ND LEVEL PATH TIMEOUT INTERVAL        :          82
PDT OUTPUT QUEUE DEPTH                 :          10
NUMBER OF LINKS BETWEEN PROCESSORS     :          10

Interprocessor Communication MPIF Summary Report

This report is generated when MPIF IPC is active. It gives information about path activity between the origin processor and other processors, and includes mean values per second for the following: total SIPCC items sent, total SIPCC items received, total SIPCC returns, and transmit failures with and without return.

Figure 17. Interprocessor Communication MPIF Report


INTERPROCESSOR COMMUNICATION MPIF SUMMARY
ORIGIN PROCESSOR TPF CPU-ID = B
DEST TPF      TOT SIPCC       TOT SIPCC      TOT SIPCC      XMIT FAIL      XMIT FAIL
 CPU-ID       ITEMS SENT      RECEIVED        RETURN        RETURN         NO RETURN
   B              0.00            0.00           0.00           0.00           0.00
   C              0.00            0.00           0.00           0.00           0.00
   D              0.00            0.00           0.00           0.00           0.00
   E              0.00            0.00           0.00           0.00           0.00
   Z              0.00            0.00           0.00           0.00           0.00
   0              0.00            0.00           0.00           0.00           0.00
 
 NOTE: COUNTS ARE FROM ORIGIN CPU VIEW POINT
      IE: SENT TO DESTINATION CPU OR RECEIVED FROM DESTINATION CPU
 NOTE: ALL VALUES SHOWN ARE TOTALS PER SECOND

MPIF Path Activity Report

This report contains performance data for the MPIF paths. Use this data to detect whether a bottleneck exists on a particular path and to help determine its cause. The optional plot reports can help pinpoint periods of heavy utilization.

The report is printed in order of path class. In addition to the class, path, and device names, it contains the message rate (messages per second), the average message size, the reads and writes per second, the average number of requests on the pending queue, and the total number of queue overruns. See Figure 18 for the format of the report.
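The per-second and average columns above can be derived from raw interval counters, roughly as follows. The counter names (msgs, bytes_sent, reads, writes) are illustrative assumptions, not fields taken from the report itself.

```python
# Sketch of deriving MPIF path metrics for one collection interval.
# Counter names are illustrative, not actual collector field names.
def path_metrics(msgs, bytes_sent, reads, writes, interval_secs):
    """Return (message rate/sec, avg message size, reads/sec, writes/sec)."""
    avg_size = bytes_sent / msgs if msgs else 0.0   # avoid divide-by-zero on idle paths
    return (msgs / interval_secs, avg_size,
            reads / interval_secs, writes / interval_secs)
```

An idle path leaves every counter at zero, which is why all the rows in the sample report read 0.00.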

Figure 18. MPIF Path Activity Report


MPIF PATH ACTIVITY REPORT                                                          27 FEB        14:18:27     SYSTEM     PAGE    10
  CTCNAME    PATHNAME   CL   MSGRATE    MSGSIZE   READS   WRITES   QUEUED   OVERRUNS
                             (/SEC)     (AVG)     (/SEC)  (/SEC)   (AVG)    (TOTAL)
             CPUB.A1    A       0.00       0.00    0.00     0.00        0          0
             CPUC.A1    A       0.00       0.00    0.00     0.00        0          0
             CPUD.A1    A       0.00       0.00    0.00     0.00        0          0
             CPUE.A1    A       0.00       0.00    0.00     0.00        0          0
             CPUZ.A1    A       0.00       0.00    0.00     0.00        0          0
             CPU0.A1    A       0.00       0.00    0.00     0.00        0          0
             CPU1.A1    A       0.00       0.00    0.00     0.00        0          0
             CPU2.A1    A       0.00       0.00    0.00     0.00        0          0
             CPUB.B1    B       0.00       0.00    0.00     0.00        0          0
             CPUC.B1    B       0.00       0.00    0.00     0.00        0          0
             CPUD.B1    B       0.00       0.00    0.00     0.00        0          0
             CPUE.B1    B       0.00       0.00    0.00     0.00        0          0
             CPUZ.B1    B       0.00       0.00    0.00     0.00        0          0
             CPU0.B1    B       0.00       0.00    0.00     0.00        0          0
             CPU1.B1    B       0.00       0.00    0.00     0.00        0          0
             CPU2.B1    B       0.00       0.00    0.00     0.00        0          0
             CPUC.C1    C       0.00       0.00    0.00     0.00        0          0
             CPUD.C1    C       0.00       0.00    0.00     0.00        0          0
             CPUE.C1    C       0.00       0.00    0.00     0.00        0          0
             CPUZ.C1    C       0.00       0.00    0.00     0.00        0          0
             CPU0.C1    C       0.00       0.00    0.00     0.00        0          0
             CPU1.C1    C       0.00       0.00    0.00     0.00        0          0
             CPU2.C1    C       0.00       0.00    0.00     0.00        0          0

Frequency Distribution Reports

Frequency distribution reports can be optioned for all the parameters in the system collector (see Figure 19). In fact, distribution reports for almost all collected parameters can be obtained by placing the DISTRIBUTION keyword in the option field of any option card.

Figure 19. Frequency Distribution Report




CREATED ENTRIES PER SECOND                               BSS   SUBSYSTEM
 
 
        40 OBSERVATIONS              MIN =        1.063               VARIANCE
                                     MAX =       16.059               STANDARD DEVIATION
                                     MEAN=        6.007               COEF. OF VARIATION
 
   CLASS        FREQUENCY           PERCENTAGES             MULTIPLE       STD.
UPPER LIMIT     OBSERVED       CLASS    ACCUM   REMAIN      OF MEAN         DEVIATIONS
        1.85        1         2.50          2.5     97.5      0.30        -0.78
        2.64        8        20.00         22.5     77.5      0.43        -0.63
        3.43       14        35.00         57.5     42.5      0.57        -0.48
        4.22        4        10.00         67.5     32.5      0.70        -0.33
        5.01        3         7.50         75.0     25.0      0.83        -0.18
        5.80        0         0.00         75.0     25.0      0.96        -0.03
        6.59        0         0.00         75.0     25.0      1.09         0.11
        7.38        0         0.00         75.0     25.0      1.22         0.26
        8.17        0         0.00         75.0     25.0      1.36         0.40
        8.96        0         0.00         75.0     25.0      1.49         0.55
        9.75        0         0.00         75.0     25.0      1.62         0.70
       10.54        0         0.00         75.0     25.0      1.75         0.85
       11.33        0         0.00         75.0     25.0      1.88         1.00
       12.12        0         0.00         75.0     25.0      2.01         1.15
       12.91        0         0.00         75.0     25.0      2.14         1.30
       13.70        1         2.50         77.5     22.5      2.28         1.45
       14.49        2         5.00         82.5     17.5      2.41         1.60
       15.28        2         5.00         87.5     12.5      2.54         1.75
       16.07        5        12.50        100.0      0.0      2.67         1.90
       16.86        0         0.00        100.0      0.0      2.80         2.05
 
 

These distribution reports are used to investigate unusual data, such as the extreme maximum or minimum values found in the summary. A wide variance indicates that the processing flow through the system is not smooth. Examining the class limits and observed frequencies can verify that the system is experiencing wide swings in either input or the utilization of one or more of its resources. When such a situation occurs, use caution when comparing one variable to another to distinguish cause from effect: a peak in one variable usually affects some other resource later in the processing cycle. Note that these reports show frequency distribution only; no chronology of samples is preserved.
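The derived columns in Figure 19 follow directly from the summary statistics: each class upper limit is expressed both as a multiple of the mean and as a number of standard deviations from the mean. A minimal sketch, assuming a population standard deviation:

```python
# Sketch of the per-class statistics columns in a distribution report.
import statistics

def class_columns(samples, upper_limits):
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)     # population std dev (an assumption)
    rows = [(limit, limit / mean, (limit - mean) / stdev)
            for limit in upper_limits]     # (limit, multiple of mean, std devs)
    return mean, stdev, rows
```

With the mean of 6.007 shown in Figure 19, the first class limit of 1.85 gives 1.85 / 6.007, which prints as 0.30, matching the MULTIPLE OF MEAN column.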

Plot Reports

To distinguish cause from effect, as mentioned previously, a chronological listing is necessary. In addition to the frequency distributions, the plot reports show each variable chronologically by interval over the life of the collection (see Figure 20). A separate page is produced for each 100 intervals of the collection period. The clock time associated with the start of each interval is shown along the abscissa.

Variables can then be compared. For example, a very high working storage utilization might have caused polling to be suspended; a high input rate and a long input list then appear after polling is resumed. Depending on the order of occurrence, these same variables might indicate an entirely different situation: an extreme peak in high-speed messages can result in a long input list and high core utilization. Any regularity in this type of peaking suggests adjusting the system parameters that control polling frequency, so that messages are allowed to queue in the terminal interchange buffers before being polled into the system.

The proper collection mode, period, and interval must be chosen for the type of analysis suggested previously. Fluctuations may be hidden completely by the smoothing effect of long intervals.
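The smoothing effect can be seen in a small sketch: averaging the same samples over longer intervals flattens a short burst. The numbers here are illustrative only.

```python
# Sketch of interval smoothing: the same burst, re-binned at two interval lengths.
def rebin(samples, factor):
    """Average consecutive groups of `factor` samples into one interval.

    Assumes len(samples) is divisible by `factor`.
    """
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

burst = [1, 1, 20, 1, 1, 1, 1, 1]   # one sharp peak
print(max(rebin(burst, 1)))         # 20.0  - peak visible at fine intervals
print(max(rebin(burst, 4)))         # 5.75  - peak hidden by long intervals
```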

The plot reports are available for all the same parameters as the distribution reports.

Figure 20. Plot Report


CREATED ENTRIES PER SECOND                               BSS   SUBSYSTEM
        40 OBSERVATIONS
       26.000 .
              .
              .
              .
              .
       23.500 .
              .
              .
              .
              .
       21.000 .
              .
              .
              .
              .
       18.500 .
              .
              .
 O            .
 B            .
 S     16.000 .                ***
 E            .        *       ***      *
 R            .     *  *       ***      *           *
 V            .     *  *       ***    ***           *
 E            .     *  *       ***    ***           *
 D     13.500 .     *  *      ****    ***           *
              .     *  *      ****    ***           *
 V            .     *  *      ****    ***           *
 A            .     *  *      ****    ***           *
 L            .     *  *      ****    ***           *
 U     11.000 .     *  *      ****    ***           *
 E            .     *  *      ****    ***           *
              .     *  *      ****    ***           *
              .     *  *      ****    ***           *
              .     *  *      ****    ***           *
        8.500 .     *  *      ****    ***           *
              .     *  *      ****    ***           *
              .     *  *      ****    ***           *
              .     *  *      ****    ***           *
              .     *  *      ****    ***           *
        6.000 .     *  *      ****    ***           *
              .     *  *      ****    ***           *
              .     *  *      ****    ***      *    *
              .     *  *      **** *  ***      *    * *
              .     *  *      **** *  ***      *    * *
        3.500 .*   **  *      **** *  ***      * **** *
              .*** *** *    * **** ** *******  ********
              .********** ******** ** *****************
              .********** *****************************
              .********** *****************************
              .****************************************
        1.000 ..........|.........|.........|.........|.........|.........|.....
            54:26      1:56      9:26     16:56     24:26     31:56     39:26
                                               OBSERVATIONS IN CHRONOLOGICAL ORD


Data Reduction Limitations Report

The limitations imposed on the reduction by the PL/I pre-compiler options appear near the end of the reduction report. A sample of this page is shown in Figure 21. These limits are set by you to match the online system or to restrict the types of data to be reduced. For instance, two versions of the data reduction package could be maintained in the system library, one that handles only system collector data, and the other that reduces data from all collectors. The system collector version would be used more often and would run in a smaller memory partition.

Figure 21. Data Reduction Limitations Report


THE PRECOMPILER OPTIONS HAVE GIVEN THE TPF 4.1 HPO SYSTEM       REDUCTION PROGRAM
COLLECTIONS MADE BY COLLECTOR RELEASES PRIOR TO TPF VERSION 4.1 CANNOT BE REDUCED.
THE FOLLOWING COLLECTOR'S OUTPUT CAN BE REDUCED:
                                                          SYSTEM
                                                          FILE
                                                          PROGRAM
                                                          MESSAGE
 
THESE PROGRAMS CAN PRODUCE BOTH SUMMARIES AND DETAIL REPORTS OF THEIR PARAMETERS
IN THE SUMMARIES AND DETAIL REPORTS THE FOLLOWING LIMITATIONS ARE IN EFFECT:
     THE MAXIMUM NUMBER OF COLLECTION INTERVALS ALLOWED IS    300
     THE LOW SPEED MESSAGE WEIGHTING FACTOR IS    5
     THE   ROUTED  MESSAGE WEIGHTING FACTOR IS  0.3
     THE MAXIMUM NUMBER OF RANDOM FILES IN THE SYSTEM  IS     132
     THE MAXIMUM NUMBER OF RECORD ID'S  IN THE SYSTEM  IS     100
     THE MAXIMUM NUMBER OF  TAPE UNITS  IN THE SYSTEM  IS      64
     THE MAXIMUM NUMBER OF   PROGRAMS   IN THE SYSTEM  IS     600
     THE MAXIMUM NUMBER OF     LINES    IN THE SYSTEM  IS     244
     THE MAXIMUM NUMBER OF  SNA LINES   IN THE SYSTEM  IS     255
     THE MAXIMUM NUMBER OF  NODENAMES   IN THE SYSTEM  IS    2750
     THE MAXIMUM NUMBER OF BSC STATIONS IN THE SYSTEM  IS     152
     THE MAXIMUM NUMBER OF INTERCHANGES IN THE SYSTEM  IS     220
     THE MAXIMUM NUMBER OF SUB SYSTEMS  IN THE SYSTEM  IS       4
     THE MAXIMUM NUMBER OF  L/C CPU'S   IN THE SYSTEM  IS       6
     THE MAXIMUM NUMBER OF    CITIES    IN THE SYSTEM  IS      35
     THE MAXIMUM NUMBER OF   TERMINALS  IN ONE CITY    IS     700
     THE MAXIMUM NUMBER OF INTERCHANGES IN ONE CITY    IS      30
     THE MAXIMUM NUMBER OF APPLICATIONS IN THE NETWORK IS     164
 
ALL VALID APPLICATION NAMES ARE SHOWN IN THE APPLICATION SUMMARY REPORT.
 
IF THESE LIMITS ARE EXCEEDED UNDETERMINED ERRORS WILL TAKE PLACE.
THEREFORE CHECK THESE PARAMETERS WITH THE ONLINE SYSTEM AFTER EACH CHANGE.
 
THE FOLLOWING MESSAGES ARE ERRORS FOUND BY THE REDUCTION PROGRAM, THE OPERATING SYSTEM, OR THE SORT PROGRAM.
EXPLANATION OF THESE MESSAGES MAY BE FOUND IN THE SYSTEM PERFORMANCE AND MEASUREMENT REFERENCE,
THE PL/1 PROGRAMMERS GUIDE, THE OS MESSAGE GUIDE, OR THE OS SORT GUIDE.
********************************************************************************