Application Programming

TPF MQSeries Local Queue Manager Support

An application connects to the local queue manager running on the TPF system by specifying the name of the local queue manager in the MQCONN function. All subsequent MQI function requests are then serviced by the local TPF queue manager. The set of MQI functions available with the local queue manager is more restrictive than that of most remote servers. See the TPF C/C++ Language Support User's Guide for more information about the MQI functions that are available.

Supported Queue Types

The TPF local queue manager supports the following queue types:

- Local queues, which are defined by using the ZMQSC DEF QL command
- Alias queues, which are defined by using the ZMQSC DEF QA command
- Local definitions of remote queues, which are defined by using the ZMQSC DEF QR command

The following are the two types of local queues:

Normal local queues physically reside in the TPF system and messages on local queues are retrieved by applications using the MQGET function. Applications can add messages to local queues for processing by a TPF application by using the MQPUT function.

Transmission queues contain messages that are destined for a remote system. Transmission queues also physically reside in the TPF system, but applications do not normally put messages to them or get messages from them directly. When an application puts a message to a remote queue, the TPF queue manager determines which transmission queue to put the message on. At some point, the channel associated with that transmission queue takes the messages from the queue and sends them to the remote system.

With alias queues, the system administrator defines an alias queue that is opened by an application. However, unknown to the application, the queue that is actually opened is the target of the alias queue, which is some other local queue or local definition of a remote queue. In this way, the administrator manages the queues that are processed by applications, and the application code never has to change to reflect changes in queue names.

Starting TPF MQSeries Applications Using Triggers

TPF MQSeries provides a facility that allows you to automatically start an application when messages arrive on a queue. This facility is known as triggering. The ZMQSC ALT QL and ZMQSC DEF QL commands support the following trigger types:

First
Trigger first processing sets a trigger when an application attempts to read (MQGET) a message from an empty queue. The next message that arrives on the queue triggers a program in the process object associated with the queue. If no process object is associated with the queue, the TPF MQSeries queue trigger user exit, CUIR, is called.

Every
Trigger every processing occurs every time a message arrives on a queue. Every time a message arrives on the queue, the TPF system creates a new ECB and triggers a program in the process object associated with the queue.

When the TPF MQSeries queue trigger user exit, CUIR, is called, the TPF system passes the message queuing message descriptor (MQMD) and message queuing trigger message (MQTM) structures on data level 0 of the entry control block (ECB) to CUIR. CUIR can interpret this data and pass control to the appropriate application for processing the message. The MQGET, MQPUT, and MQPUT1 C functions define the values that are passed in the MQMD structure. The ZMQSC ALT PROC, ZMQSC DEF PROC, ZMQSC ALT QL, and ZMQSC DEF QL commands define the values that are passed in the MQTM structure. The MQMD structure is as follows:

typedef struct tagMQMD {
    MQCHAR4   StrucId;           /* Structure identifier */
    MQLONG    Version;           /* Structure version number */
    MQLONG    Report;            /* Report options */
    MQLONG    MsgType;           /* Message type */
    MQLONG    Expiry;            /* Expiry time */
    MQLONG    Feedback;          /* Feedback or reason code */
    MQLONG    Encoding;          /* Data encoding */
    MQLONG    CodedCharSetId;    /* Coded character set identifier */
    MQCHAR8   Format;            /* Format name */
    MQLONG    Priority;          /* Message priority */
    MQLONG    Persistence;       /* Message persistence */
    MQBYTE24  MsgId;             /* Message identifier */
    MQBYTE24  CorrelId;          /* Correlation identifier */
    MQLONG    BackoutCount;      /* Backout counter */
    MQCHAR48  ReplyToQ;          /* Name of reply-to queue */
    MQCHAR48  ReplyToQMgr;       /* Name of reply queue manager */
    MQCHAR12  UserIdentifier;    /* User identifier */
    MQBYTE32  AccountingToken;   /* Accounting token */
    MQCHAR32  ApplIdentityData;  /* Application data relating to
                                    identity */
    MQLONG    PutApplType;       /* Type of application that put the
                                    message */
    MQCHAR28  PutApplName;       /* Name of application that put the
                                    message */
    MQCHAR8   PutDate;           /* Date when message was put */
    MQCHAR8   PutTime;           /* Time when message was put */
    MQCHAR4   ApplOriginData;    /* Application data relating to origin */
    MQBYTE24  GroupId;           /* Group identifier */
    MQLONG    MsgSeqNumber;      /* Sequence number of logical message
                                    within group */
    MQLONG    Offset;            /* Offset of data in physical message
                                    from start of logical message */
    MQLONG    MsgFlags;          /* Message flags */
    MQLONG    OriginalLength;    /* Length of original message */
   } MQMD;

The MQTM structure is as follows:

typedef struct tagMQTM {
    MQCHAR4    StrucId;      /* Structure identifier */
    MQLONG     Version;      /* Structure version number */
    MQCHAR48   QName;        /* Name of triggered queue */
    MQCHAR48   ProcessName;  /* Name of process object */
    MQCHAR64   TriggerData;  /* Trigger data */
    MQLONG     ApplType;     /* Application type */
    MQCHAR256  ApplId;       /* Application identifier */
    MQCHAR128  EnvData;      /* Environment data */
    MQCHAR128  UserData;     /* User data */
   } MQTM;

When a process is called, the TPF system passes the MQTMC2 structure to the process. The ZMQSC ALT PROC, ZMQSC DEF PROC, ZMQSC ALT QL, and ZMQSC DEF QL commands define the values that are passed in the MQTMC2 structure. The MQTMC2 structure is as follows:

typedef struct tagMQTMC2 {
  MQCHAR4    StrucId;      /* Structure identifier */
  MQCHAR4    Version;      /* Structure version number */
  MQCHAR48   QName;        /* Name of triggered queue */
  MQCHAR48   ProcessName;  /* Name of process object */
  MQCHAR64   TriggerData;  /* Trigger data */
  MQCHAR4    ApplType;     /* Application type */
  MQCHAR256  ApplId;       /* Application identifier */
  MQCHAR128  EnvData;      /* Environment data */
  MQCHAR128  UserData;     /* User data */
  MQCHAR48   QMgrName;     /* Queue manager name */
} MQTMC2;

See TPF C/C++ Language Support User's Guide for more information about the MQGET, MQPUT, and MQPUT1 C functions. See TPF Operations for more information about the ZMQSC ALT PROC, ZMQSC DEF PROC, ZMQSC ALT QL, and ZMQSC DEF QL commands. See TPF System Installation Support Reference for more information about the TPF MQSeries queue trigger user exit.

Message Routing

The TPF system supports the following methods for resolving queue names.

Local Definition of Remote Queues

To remove the burden of having the application determine the queue manager and queue to receive its message, the MQSeries administrator can define a local definition of a remote queue that specifies the actual destination queue manager and destination queue name. The application opens a local name for the queue, and the TPF queue manager will then substitute the specified queue manager and queue name and put the message on the specified transmission queue. For more information about defining a local definition of a remote queue, see the ZMQSC DEF QR command in TPF Operations.

Queue Manager Aliasing

The TPF system also supports queue manager aliasing. In this case, the name of the remote queue is known, but not the name of the remote queue manager. When the application opens a queue specifying a queue manager name, the TPF system looks up that name and substitutes the queue manager that is specified in the alias definition. Queue manager aliasing is accomplished by leaving the RNAME field blank in the ZMQSC DEF QR command.

Queue Manager Name as Transmission Queue Name

In addition to local definitions of remote queues and queue manager aliasing, the system administrator can send a message to an adjacent queue manager if the name of the queue manager that is opened by the application is the same as the name of a transmission queue.

Middle Hop Routing

Messages that are received by TPF MQSeries local queue manager channels may not be destined for the TPF queue manager. The TPF receiver channel calls the local TPF queue manager to resolve the name of the destination queue manager and queue name for each message it receives. The queue manager and queue name are resolved according to the rules previously stated, and the message is put on the appropriate transmission queue.

Processor Unique Queues versus Processor Shared Queues

Turbo enhancements for TPF support of MQSeries local queue manager provide a performance enhancement that makes processor unique queues (which are defined by specifying NO for the COMMON parameter on the ZMQSC DEF QL command) memory resident. Processor shared queues (which are defined by specifying YES for the COMMON parameter on the ZMQSC DEF QL command) reside in TPF collection support (TPFCS). Before this enhancement, all queues resided in TPFCS. Now, processor unique queues reside in memory and use checkpoint records and the recovery log as a repository for persistent data (such as the messages). Only normal local queues can be defined as processor shared. See TPF Operations for more information about the ZMQSC DEF QL command. See TPF Database Reference for more information about recovery logs.

Monitoring Queue Depth

The TPF system provides a queue depth monitor for processor unique queues. When the queue depth on a transmission queue exceeds the value specified by the administrator on the QDEPTHHI parameter of the ZMQSC DEF QL command, a warning message is sent to the operator console. This could mean that the queue is stalled and may need operator intervention. The warning message is sent to the console every xx seconds until the queue depth falls below the QDEPTHHI value (where xx is the interval that is determined by the administrator by using the QDT parameter on the ZMQSC DEF MQP command). See TPF Operations for more information about the ZMQSC DEF QL command.

Channels

The TPF MQSeries local queue manager supports two channel types that connect to remote MQSeries systems: sender channels and receiver channels.

Receiver channels make use of the TPF Internet daemon. When a remote sender channel first connects to the TPF system over Transmission Control Protocol/Internet Protocol (TCP/IP), it sends a connection request to port 1414, which is the standard MQSeries port. System administrators must set up an Internet daemon listener on that port that, once the connection request is received, passes control to the TPF MQSeries receiver channel session initiation program (CMQL). To set up the Internet daemon listener for MQSeries, you must add an MQSeries server by entering the following command:

ZINET ADD S-MQS P-TCP MODEL-AOR PORT-1414 PGM-CMQL AORL-8

This is required before establishing a connection between remote sender channels and TPF receiver channels. It is possible to change the TCP/IP port that is used for these connections. If you establish an Internet daemon listener on a different port for the MQSeries server, you need to specify the same port in the connection name when defining the sender channel on the remote MQSeries system.

Two channel speeds are supported for both sender and receiver channels: normal and fast.

When sending messages over normal speed sender channels, persistent and nonpersistent messages are included in batches and receipt confirmation is required before the messages are deleted from the transmission queue.

Persistent and nonpersistent messages can be sent over fast sender channels. When sending messages over fast sender channels, only persistent messages are included in batches. Nonpersistent messages are sent outside of the batch and are deleted from the transmission queue without receiving receipt confirmation. Nonpersistent messages are never sent again during channel recovery procedures.

When receiving both persistent and nonpersistent messages over normal receiver channels, the messages are processed as part of a batch, and receipt confirmation is sent for the entire batch of messages. Once the confirmation is sent, the messages appear on local TPF MQSeries queues or are put on transmission queues destined for another queue manager if the TPF queue manager is not the target queue manager. Applications must retrieve messages from the local queues by using the MQGET function.

When receiving nonpersistent messages over a fast receiver channel, the message is not filed and is assumed to be destined for a traditional non-MQSeries application. To achieve high throughput for these messages, they are given directly to TPF applications by using the TPF-unique MQSeries ROUTC bridge function. The messages never appear on a queue. Persistent messages received over fast receiver channels are processed as if they were received over a normal receiver channel.

MQSeries ROUTC Bridge

TPF local queue manager support includes a TPF-unique mechanism for passing nonpersistent messages received over fast channels directly to traditional TPF applications. In this way, TPF customers can take advantage of MQSeries-oriented networks for delivering traditional high-speed TPF-type messages to TPF applications. The MQSeries message is converted into TPF AM0SG format and given to TPF message router program COA4 for routing to an application. A user exit is provided that gives you the opportunity to assign a line number, interchange address, and terminal address (LNIATA) to the message before giving it to the application. In addition, the terminal address table (WGTA) entry for that LNIATA is marked with an MQSeries indicator, so when the application responds to the message using the ROUTC bridge, the message is intercepted and converted back to MQSeries message format. See TPF System Installation Support Reference for more information about user exits.

Transmission Queues: Swinging

The TPF local queue manager provides a unique feature that redirects messages originally destined for a transmission queue to an alternate transmission queue. If, for example, a channel is stalled or the remote receiver channel is down, messages can be moved to a transmission queue that has an active channel. All messages that were previously on the original transmission queue are moved to the new transmission queue, and all new messages put to the original transmission queue are actually added to the new transmission queue. The ZMQSC SWQ command is used to perform this function. See TPF Operations for more information about the ZMQSC SWQ command.

Transaction Manager

With the release of turbo enhancements for TPF support of MQSeries local queue manager, the TPF transaction manager was enabled to control MQSeries API functions. This means that MQSeries MQPUT, MQGET, and MQPUT1 API functions will participate in transaction scopes. Before this, an MQPUT function in a commit scope resulted in the message being immediately put on the queue and the queue was locked until a tx_commit function or tx_rollback function was issued. The transaction manager had no knowledge of the MQSeries APIs. With turbo enhancements for TPF support of MQSeries local queue manager enhancements, MQPUT and MQGET functions become visible to other processes during tx_commit function processing and the queue is only locked during tx_commit function processing. With the transaction scopes in place for MQSeries APIs, the behavior of these APIs will change for those applications that already have transaction scopes surrounding the MQSeries APIs.