ACF/SNA Data Communications Reference
When designing your application, referred to as a transaction program (TP) in LU 6.2 terminology, a major factor to consider is the volume of traffic that will flow between the TP in the TPF system and the TP in the remote LU. There are 3 classes of LU 6.2 conversations that can be used: traditional, pipeline, and shared.
Each class has advantages and disadvantages in the areas of application design and throughput. To help you understand these advantages and disadvantages, the following information compares the characteristics of each class performing a common task: processing 4 messages, where the TPF TP sends a request (data) to the remote LU and receives a response (data).
The majority of existing LU 6.2 applications send and receive data on the same session. This is known as traditional LU 6.2 conversations. From a TP design point of view, this is the easiest method to use. If the number of transactions to be processed is low, then traditional LU 6.2 conversations are the best choice.
Figure 60 shows how a TP written to use traditional LU 6.2 conversations processes 4 messages. 1 ECB processes an entire message: the request and the response (req1 and rsp1, for example). Each message uses its own conversation, and the request and response flow on the same conversation.
Only 1 conversation can use a given LU 6.2 session at a time. When a conversation ends, another conversation can use that session. The main drawback to using traditional LU 6.2 conversations is that the session is not usable from the time that the TPF system sends out the request until the response is received. In the sample TP, the 4 messages are processed 1 at a time. Even though the TPF system has 4 requests to send out, the next request cannot be sent out until the response to the current request is received.
Using LU 6.2 parallel sessions improves throughput because there are multiple sessions between the TPF system and the remote LU. For example, if you have 20 parallel sessions, you can process 20 messages at a time. However, if the TPF system consistently generates a large number of requests and processing each request in the remote LU takes a long time, it is likely that all parallel sessions will be in use, forcing requests to queue in the TPF system. For high-volume TPs like these, traditional LU 6.2 conversations are not a good choice.
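The throughput effect described above can be sketched with a simple timing model. This is an illustration under assumed conditions (a fixed round-trip time per message and one conversation per session at a time), not a description of TPF internals:

```c
#include <assert.h>

/* Traditional conversations on a single session: the next request
 * cannot be sent until the current response is received, so total
 * time grows linearly with the message count. */
static int single_session_time(int messages, int round_trip)
{
    return messages * round_trip;
}

/* With parallel sessions, up to 'sessions' messages overlap, so the
 * messages are processed in ceiling(messages / sessions) waves. */
static int parallel_session_time(int messages, int sessions, int round_trip)
{
    int waves = (messages + sessions - 1) / sessions;
    return waves * round_trip;
}
```

For example, with 20 parallel sessions and a round trip of 10 time units, 100 messages complete in 5 waves (50 units) instead of the 1000 units a single session would need, but once more than 20 requests are outstanding at once, the extra requests queue in the TPF system exactly as described above.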
Figure 60. Traditional LU 6.2 Conversations
Pipeline LU 6.2 conversations are 1-way conversations. In most cases, 2 conversations are used: 1 for sending data and the other for receiving data. Using pipeline conversations requires more work by the TP because a request is sent over 1 conversation, but the response comes back over a different conversation. It is up to the TP to correlate responses with requests.
Figure 61 shows how a TP written to use pipeline LU 6.2 conversations processes 4 messages. 2 ECBs are needed to process a message: 1 ECB sends out the request and a different ECB is activated to process the response when that response is received. To process a message, 2 conversations are used: 1 for the request and 1 for the response. By ending the conversation as soon as a request is sent out, the session is immediately available and can be used to send out a second request before the response to the first request is received.
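Because the request and the response travel on different conversations, a pipeline TP typically keeps a table of pending requests keyed by a correlation ID carried in the data itself. The following is a minimal C sketch of that bookkeeping; all names here are hypothetical illustrations, not TPF APIs:

```c
#include <stddef.h>

/* Hypothetical pending-request table: the sending ECB records a
 * correlation ID before issuing the request, and the ECB that later
 * receives the response uses the same ID to find the matching entry. */
#define MAX_PENDING 32

struct pending_req {
    int corr_id;   /* correlation ID carried in the request data */
    int in_use;    /* 1 while a response is still outstanding */
};

static struct pending_req pending[MAX_PENDING];

/* Record a request before sending it on the outbound conversation.
 * Returns 0 on success, -1 if too many requests are outstanding. */
static int remember_request(int corr_id)
{
    for (int i = 0; i < MAX_PENDING; i++) {
        if (!pending[i].in_use) {
            pending[i].in_use = 1;
            pending[i].corr_id = corr_id;
            return 0;
        }
    }
    return -1;
}

/* Called by the ECB that receives data on the inbound conversation.
 * Returns the matching entry, or NULL for an unknown response. */
static struct pending_req *match_response(int corr_id)
{
    for (int i = 0; i < MAX_PENDING; i++)
        if (pending[i].in_use && pending[i].corr_id == corr_id)
            return &pending[i];
    return NULL;
}
```

The design point is simply that the correlation key must travel with the data, because nothing in the inbound conversation itself ties a response to the conversation that carried the request.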
Pipeline conversations allow a large number of messages to flow between the TPF system and a remote LU. An example of pipeline conversations can be found in any APPN network. A control point (CP) has special sessions, called CP-CP sessions, with each adjacent CP. These CP-CP sessions are a pair of LU 6.2 sessions used as 1-way pipes.
Figure 61. Pipeline LU 6.2 Conversations
Shared LU 6.2 conversations are similar to pipeline conversations in that a pair of 1-way pipes is used. The difference is that shared conversations allow multiple messages to be processed by 1 conversation, thereby eliminating the overhead of starting and ending the conversations associated with pipeline conversations. Shared conversations are more efficient from a network point of view because the number of path information units (PIUs) that flow is drastically reduced compared to traditional or pipeline LU 6.2 conversations.
When starting a conversation, a TPF TP codes a parameter on the TPPCC ALLOCATE macro (or tppc_allocate C language function) to indicate that the conversation is shared. The shared option on the ALLOCATE verb starts the conversation that the TPF system will use as the 1-way pipe to send data.
Only certain LU 6.2 verbs are allowed for shared conversations because these conversations are 1-way outbound pipes. The valid verbs include SEND_DATA, FLUSH, GET_ATTRIBUTES, and DEALLOCATE (except when TYPE=CONFIRM is specified). Any ECB can issue 1 of these verbs for a shared conversation. For a conversation that is not shared, only 1 ECB (the ECB that creates or owns the conversation) can issue verbs for that conversation.
Multiple requests (from different ECBs) all flow on the same shared conversation. To increase network efficiency, multiple requests are packaged together and sent out in a single PIU. Multiple responses are also returned in a single PIU.
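The packaging idea can be sketched as a small accumulator that flushes a PIU when it fills, or earlier on an explicit FLUSH. This is an illustration of the concept only; the actual packaging is done by the TPF system, not by the TP:

```c
/* Sketch of PIU packaging: SEND_DATA requests from many ECBs
 * accumulate in one outbound buffer and leave the system together
 * as a single PIU. */
struct piu_packer {
    int capacity;   /* requests that fit in one PIU */
    int queued;     /* requests waiting in the current PIU */
    int pius_sent;  /* PIUs flushed to the network so far */
};

/* Queue one request; flush automatically when the PIU is full. */
static void pack_request(struct piu_packer *p)
{
    if (++p->queued == p->capacity) {
        p->pius_sent++;   /* one PIU carries every queued request */
        p->queued = 0;
    }
}

/* Explicit FLUSH: send a partially full PIU immediately. */
static void pack_flush(struct piu_packer *p)
{
    if (p->queued > 0) {
        p->pius_sent++;
        p->queued = 0;
    }
}
```

With a capacity of 4 requests per PIU, the 4 requests in Figure 62 leave in a single PIU, which is where the network savings of shared conversations comes from.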
Figure 62 shows how a TP written to use shared LU 6.2 conversations processes 4 messages. 1 ECB starts the shared conversation, which causes the TP in the remote LU to start the other shared conversation. Next, ECBs in the TPF system representing different TPs issue SEND_DATA verbs. In this example, 4 requests fit in a PIU, so all 4 requests are sent out in a single PIU. However, only 2 responses fit in a PIU. To handle responses, the ECB created when the second conversation was started (ECB 2) issues an ACTIVATE_ON_RECEIPT verb (equivalent to the LU 6.2 RECEIVE verb except that the data received is passed to a new ECB, not to the ECB that issued the ACTIVATE_ON_RECEIPT verb). When the first response reaches the TPF system, a new ECB is created (ECB 7) and the response (rsp1) is passed to ECB 7. ECB 7 immediately issues an ACTIVATE_ON_RECEIPT verb, and then processes the response that arrived. ECB 8 is created right away because the second response (rsp2) has already been received by the TPF system. ECB 8 also issues ACTIVATE_ON_RECEIPT, and then processes the response that it was passed (rsp2).
Figure 62. Shared LU 6.2 Conversations
Table 3 shows the statistics for processing 4 messages using the different LU 6.2 conversation methods. Because 4 messages are too few for a meaningful comparison, the comparisons that follow use the data in Table 4 for processing 100 messages. That table shows the value of using shared conversations to process high-volume messages.
In traditional and pipeline LU 6.2 conversations, 40-50% of the verbs issued by the TP (ALLOCATE and DEALLOCATE) are overhead used to start and end those conversations. For shared LU 6.2 conversations, less than 1% of the verbs are overhead.
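These percentages follow directly from the verb counts in Tables 3 and 4. A short check, using the per-message verb counts from the tables (ALLOCATE and DEALLOCATE are counted as overhead; SEND_DATA and RECEIVE/ACTIVATE_ON_RECEIPT move the data):

```c
/* Overhead verbs as a whole-number percentage of all verbs issued. */
static int pct(int overhead, int total) { return 100 * overhead / total; }

static int traditional_overhead_pct(int n)
{   /* n ALLOCATE, n SEND_DATA, 2n RECEIVE, n DEALLOCATE (Table 4) */
    return pct(n + n, n + n + 2 * n + n);
}

static int pipeline_overhead_pct(int n)
{   /* n ALLOCATE, n SEND_DATA, 2n RECEIVE, 2n DEALLOCATE (Table 4) */
    return pct(n + 2 * n, n + n + 2 * n + 2 * n);
}

static int shared_overhead_pct(int n)
{   /* 1 ALLOCATE, n SEND_DATA, n ACTIVATE_ON_RECEIPT, 0 DEALLOCATE */
    return pct(1, 1 + n + n);
}
```

For 100 messages this yields 40% for traditional, 50% for pipeline, and under 1% (1 of 201 verbs) for shared conversations, matching the figures above.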
An even larger savings is in the number of PIUs. Both traditional and pipeline LU 6.2 conversations require 2 PIUs to process each message. Table 4 assumes realistic values: 25 requests fit in 1 PIU and 10 responses fit in 1 PIU. Using these numbers, only 16 PIUs are needed to process 100 messages with shared LU 6.2 conversations. The other classes of conversations require 200 PIUs; therefore, shared conversations reduce the number of PIUs by over 90% in this example.
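The 16-PIU figure can be reproduced from these assumptions if 1 PIU is also counted to start each of the two 1-way conversations. That startup count of 2 is an inference from the tables rather than something the text states, but it makes both Table 3 (5 PIUs for 4 messages) and Table 4 (16 PIUs for 100 messages) come out exactly:

```c
/* PIU count for shared conversations: 2 conversation-startup PIUs
 * (an inference from Tables 3 and 4) plus the batched request and
 * response PIUs. */
static int ceil_div(int a, int b) { return (a + b - 1) / b; }

static int shared_pius(int messages, int reqs_per_piu, int rsps_per_piu)
{
    int startup = 2;  /* one PIU to start each of the two 1-way pipes */
    return startup
         + ceil_div(messages, reqs_per_piu)    /* batched requests  */
         + ceil_div(messages, rsps_per_piu);   /* batched responses */
}
```

With 100 messages, 25 requests per PIU, and 10 responses per PIU, this gives 2 + 4 + 10 = 16 PIUs; with the Figure 62 values (4 messages, 4 requests per PIU, 2 responses per PIU) it gives 2 + 1 + 2 = 5 PIUs, matching Table 3.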
Table 3. LU 6.2 Statistics for Processing 4 Messages
Statistic | Traditional | Pipeline | Shared |
---|---|---|---|
Number of ALLOCATE verbs | 4 | 4 | 1 |
Number of SEND_DATA verbs | 4 | 4 | 4 |
Number of RECEIVE or ACTIVATE_ON_RECEIPT verbs | 8 | 8 | 4 |
Number of DEALLOCATE verbs | 4 | 8 | 0 |
Number of conversations | 4 | 8 | 2 |
Number of PIUs | 8 | 8 | 5 |
Table 4. LU 6.2 Statistics for Processing 100 Messages
Statistic | Traditional | Pipeline | Shared |
---|---|---|---|
Number of ALLOCATE verbs | 100 | 100 | 1 |
Number of SEND_DATA verbs | 100 | 100 | 100 |
Number of RECEIVE or ACTIVATE_ON_RECEIPT verbs | 200 | 200 | 100 |
Number of DEALLOCATE verbs | 100 | 200 | 0 |
Number of conversations | 100 | 200 | 2 |
Number of PIUs | 200 | 200 | 16 |