------------------------------------------------------------
Commonly Reported SQL/DS Errors
------------------------------------------------------------
By Tsae-Ling Ho
IBM Canada Laboratory Ltd.
Toronto, Canada
February 1995
(c) Copyright IBM Corp 1995

------------------------------------------------------------
Disclaimer
------------------------------------------------------------
Any performance data contained in this presentation was determined in a controlled environment, and, therefore, the results obtained in other operating environments may vary significantly. Users of the information in this presentation should verify the applicable data for their specific environment.

Any reference to an IBM licensed program in this presentation is not intended to state or imply that only IBM's licensed program may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead.

IBM retains the ownership and the right to republish and to distribute copies of this presentation to whomever it chooses.

SQL/DS is a trademark of International Business Machines Corp.

------------------------------------------------------------
Contents
------------------------------------------------------------
1. How To Avoid Calling Service
2. Some Common Problems
3. Common Segment Problems
4. What Can and Cannot Overlap?
5. What Can Be Loaded Above 16M?
6. Problems Related to Migration
7. Some Common Performance Problems
8. Preprocessing with VSE Guest Sharing
9. DROP DBSPACE Problems
10. SYSTEM.SYSOPTIONS changes
11. Some Common RXSQL Problems
12. Where to Go for Help?
13. When You Have to Call Service

------------------------------------------------------------
How To Avoid Calling Service
------------------------------------------------------------
o Prevent problems by:
  * Migrating to current release
  * Applying regular maintenance
  * Testing backup and recovery procedures
o Check manuals for error conditions and suggested actions

Avoid problems by keeping your database at a current release and by regularly applying service. The current release of SQL/DS, 3.4.0, has fixes for many known problems, and by applying regular service to the database, you may be able to avoid some problems entirely. Be sure to test your backup and recovery procedures thoroughly, so you are not in for a surprise the day you need them! The next page contains some specific things to watch out for.

Many conditions (such as limit errors) are explained in the Messages and Codes manual, with suggested actions for recovery. Check the manuals for documented error conditions before calling Service. An example of a limit error is the following:

  ARI0039E SQL/DS limit error occurred - ARICDSP 03

The explanation (from the 'Limit Errors' section of the Messages and Codes manual) is: "All agents waiting for buffers, and SQL is out of page buffers".

o Don't DDR the database while it is up and running
o Take at least one database archive
o Balance the number of log archives and database archives taken

NEVER take a DDR copy of a database while it is online, even if it is 'quiesced' and there are no active users. You MUST issue SQLEND UARCHIVE to shut it down first. While SQL/DS is up and running, the pages and directory blocks that have recently been used are still in storage, and will not be written to DASD until a checkpoint occurs. If you DDR copy the dbextents while the database is up, you will lose the current copy of any pages that have not yet been flushed to DASD.
This could give you an archive that is inconsistent (if, for example, a row was inserted into a table with an index, and the updated index page was written to DASD but the updated data page was still in storage), and you would see strange errors when you try to restore it and use it.

Some users whose data is recoverable (i.e. copied from another database or generated by application programs) believe they do not have to take an SQL/DS archive, since they can recreate their data. What they don't realise is that they can lose the structure of their database - i.e., what tables are in what dbspaces, indexes, authorities... basically, EVERYTHING in the system catalogs - if they suffer a hardware crash. The process for recovering from a DASD failure is to replace the damaged extent with a new one and restore an archive. Without an archive to restore, you will first have to REGEN your database before you can reload the data. It is a good idea to take an archive after any structural changes (e.g. add dbextent or add dbspace) for recovery in case of a hardware failure. An old archive is better than NO archive!

Be careful about taking log archives instead of database archives 'to save time'. While it may take longer to TAKE a database archive than a log archive, it can take a very long time to APPLY many log archives. Aim for a balance.

------------------------------------------------------------
Some Common Problems
------------------------------------------------------------
1. Segment Problems
2. Problems Related to Migration
3. Some Common Performance Problems
4. Preprocessing with VSE Guest Sharing
5. DROP DBSPACE Problems
6. SYSTEM.SYSOPTIONS changes
7. Some Common RXSQL Problems

This is not an exhaustive list of the types of problems that users can see; it is merely a sampling of some of the more commonly reported errors that users could solve themselves.

------------------------------------------------------------
Common Segment Problems
------------------------------------------------------------
* Very often caused by segment overlay
* Sample error messages:
  o Error loading segment - storage may be in use
    * SQLCODE -934
  o SQL/DS segment overlaps (or is too close to) CMS nucleus
    * DMSFRO159E INSUFFICIENT STORAGE AVAILABLE TO SATISFY FREE STORAGE REQUEST FROM VSTOR
    * DMSLIO105S ERROR 25 WRITING LOAD MAP A5 ON DISK
    * DMSLIO380E STORAGE AT ORIGIN VSTOR IN USE. MODULE NOT LOADED
    * ARI653E INCLUDE COMMAND ERROR RETURN CODE 104 from ARISAVES EXEC
  o if USE INST = YES and CMSINST segment overlaps SQL
    * ARI0609E ARISDBBT DSC/DBSS SAVED SEGMENT. RC=4600
    * Can bypass with 'I CMS PARM NOSPROF INSTSEG NO'
  o if shared segment defined within virtual machine (non-ESA only)
    * DMKVMA456W SHARED PAGE ALTERED
  o May also see addressing, operation, protection exceptions, failures during segment create, ...
  o Can also see addressing, operation, protection exceptions, or segment overlaps if segments become too small after applying service

Segment errors can cause a variety of error messages to be displayed. This is a sampling of some of the more common ones. If you run with segments and suspect a segment problem, there are a couple of simple tests you can perform. Most commonly, the problems are caused by an overlap in the segment addresses, although you can also have problems if the application of service has caused the code to increase in size to the point where it no longer fits in the defined segment.

* How to diagnose segment problems?
  o Run in DMSFREE
  o Load segments from CMS (e.g. SEGMENT LOAD)
    * should load segments in same order as SQL/DS does
    * can look at storage to see if code loaded
  o Use operating system-specific commands for segment layout and look for overlaps

If you suspect a segment error, you can try to start up the database without using the segments. This is not difficult to do:
o Update the ARISSEGC MACRO to change the SAVED SEG column to 'NO'
o Run the SQLBOOTS EXEC - this will modify the default bootstrap modules (SQLDBA)
o Edit the SQLDBN file to specify SQLDBA as the DCSSID, so the new default bootstraps are used (for the database machine)
o Re-access the disk and rerun SQLINIT EXEC to pick up the changes (for a user machine)

If the error does not occur, then there is a problem with the segments. Make sure there are no overlaps and move or rebuild the segments as necessary.

You can also try to load the segments from CMS (e.g. via the SEGMENT LOAD command). Be sure to load the segments in the same order as SQL/DS does: for the database machine, DBSS and then RDS; for the user, ISQL then RA. Once the segment is loaded, you can look at the storage to see if the code was in fact loaded successfully (via '#CP D Taddr.50'). To check for overlaps, use the operating system-specific command to display the segment layout (e.g. Q NSS) and look for overlaps in the specified addresses.

------------------------------------------------------------
What Can and Cannot Overlap?
------------------------------------------------------------
o In MUM, RDS, DBSS, and CMS must NOT overlap in the database machine
o In SUM, RA, RDS, DBSS, and CMS must NOT overlap in the database machine
o In the user machine, ISQL, RA, ARIMS001 (the message repository) and CMS must NOT overlap
o Any program products in segments that use SQL/DS (e.g. CSP, QMF) must NOT overlap RA
o Program products used within same terminal session must NOT overlap
o For QMF users: QMF, ISPF, GDDM, RA, and CMS must NOT overlap
o ISQL and program products CAN overlap DBSS and RDS
o Virtual machine size should not be greater than lowest origin address of any loaded segment (non-ESA only)

The message repository (ARIMS001) is loaded in a segment in the user machine only. In the database machine, it is always loaded into DMSFREE. In general, segments that are loaded in the database machine only (e.g. DBSS and RDS) can overlap segments that are loaded in the user machine only (e.g. ISQL and RA). Watch out for single user mode (SUM) - when the database machine is run in SUM, the RA segment is loaded in the machine and must not overlap DBSS or RDS.

In a non-ESA environment, do not define segments within your database virtual machine; for example, if you have a 9 Meg virtual machine, your lowest segment origin address should be greater than or equal to 9M. In an ESA environment, this restriction is lifted; however, we recommend that you use the SEGMENT RESERVE command for any segments that will be loaded within the virtual machine.
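
A minimal sketch of how to check the layout and reserve a segment is shown below. SQLSQLDS is the default DBSS segment name used later in this presentation; the exact QUERY NSS operands and the privilege class required may vary with your VM level:

  Q NSS ALL MAP             (CP) list the saved segments with their begin and end pages;
                            look for ranges that overlap each other or, on a non-ESA system,
                            fall below the virtual machine size
  SEGMENT RESERVE SQLSQLDS  (CMS, ESA only) reserve storage for a segment that will be
                            loaded within the virtual machine
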
------------------------------------------------------------
What Can Be Loaded Above 16M?
------------------------------------------------------------
SQL/DS Saved Segments
o Interactive SQL (ISQL) - SQLISQL MUST be below 16M - does not support AMODE 31
o Resource Adapter or Resource Manager (RA / RM) - SQLRMGR may be above 16M (except for SUM when machine in 370 mode)
o Database Storage System (DBSS) - SQLSQLDS MUST be below 16M - does not support AMODE 31
o Relational Data System (RDS) - SQLXRDS MUST be below 16M - does not support AMODE 31
o Message Repositories
  * ARIMxxxx - SQL/DS message repository, MUST be below 16M
  * ARIUME - added to CMS message repository, MUST be below 16M

This applies only to SQL/DS release 3.3.0 and later. Note that the segment names included here are the defaults; you can specify different names for the segments if you wish. Note also that the RA segment can be above or below the 16M line, but must not cross it (i.e. the segment must be all above 16M or all below).

------------------------------------------------------------
Problems Related to Migration
------------------------------------------------------------
* Error seen during migration while running ARISMEX EXEC:

  ARI0533E Catalog migration error while processing ISYSKEYS2.
  ARI0504I SQLERRP: ARIXOOP SQLERRD1: -480 SQLERRD2: 0
  ARI0505I SQLCODE = -551 SQLSTATE = 42501 ROWCOUNT = 0
  ARI0040E SQL/DS system error occurred - ARISMG3 12.

* Reason: SQLVALID was not run or did not complete successfully
* Solution: run SQLVALID EXEC

This error message is confusing because SQLCODE -551 means the user does not have the authority to access the object (here it is the index ISYSKEYS2), yet migration is performed using an id with DBA authority. The error message appears because SQLVALID was not run, or did not complete successfully. The SQLVALID EXEC not only checks for trailing blanks in indexes and object names, it also prepares the database for migration. One of the actions it performs is defining a new index (ISYSKEYS2) and granting authority on that index to the SQLDBA id. If SQLVALID is not run, SQLDBA does not have the authority to drop and recreate ISYSKEYS2. The SQLVALID EXEC must be run successfully even if there are no trailing blanks in your indexes or object names. It is a mandatory step in the migration process.

* Errors during SQLLOG EXEC or ARISMEX EXEC such as 'database not found' (For users migrating to new release who create a new production disk)
* Reason: new production disk does not contain database definition files (e.g. dbname SQLFDEF and dbname SQLDBN)
* Solution: when creating new production disk, make it an exact copy (e.g. by using DDR) of the old production disk

The Installation manual states that when migrating, you can either define a new production disk (e.g. 395) or use the existing production disk (195). The manual does not state that the new disk MUST contain all the database definition files from the old 195 disk. Without these files, the migration will fail. To avoid this, when creating the new production disk, make it an exact copy of the old production disk. Then the migration step ARISMEX will replace the old code with the new code while leaving the database definition files intact.

* After migration, may see SQLCODE -945 or -946 when packages dynamically re-prepped
* Reason: packages will be bigger in release 3.3.0 and higher
  Dynamic re-prep may cause package to not fit in existing package dbspace
* Solution: add another package dbspace before migrating to avoid problems

In release 3.3.0 and higher, packages will be bigger (if they contain SELECT statements).
When a package is dynamically re-prepped (note: 'dynamic re-prep' includes RELOAD and REBIND), it may not fit in its current package dbspace. In release 3.3.0, the re-prep will fail with SQLCODE -946 because the package does not fit in the dbspace, and you will have to do an explicit prep to move the package to a new dbspace. Note that this may fail with SQLCODE -945 if there are no package dbspaces with room for this package. In release 3.4.0, if a package does not fit in the dbspace, SQL/DS will try to put the package in another package dbspace, and will fail with SQLCODE -945 if no dbspaces have room for the package. The error can be avoided by acquiring another package dbspace before migrating, so the bigger packages have another dbspace available to move to if necessary.

* Some general points:
  o Don't trust your memory! Read the manual
  o You can't go back - once you've accessed a v2 database with v3 code
  o You cannot 'migrate' by unloading data from a v2 database and loading it into a v3 database

There are a number of steps that must be followed, in the proper sequence, when migrating from version 2 to version 3. Don't rely on your memory; if you miss a step, recovery can be painful (see the next point below).

The first time you access a version 2 database with version 3 code, the format of parts of the directory (BDISK) is changed (to add support for the deletion of dbextents). If you access a version 2 database with the version 3 code by mistake, and then attempt to access the database with version 2 code, you will get an ARIYI07 03 abend for every page that is not on the first extent, because the version 2 code does not recognise the changed format. You have to restore a version 2 archive to carry on. Be especially careful when you use saved segments in an environment with a mix of v2 and v3 machines!

The pre-migration utilities (i.e. SQLINDID, SQLOBJCT, and especially SQLVALID) have to be run against the data in the database. You cannot bypass them by unload/reload of your data. Also, if you UNLOAD a DBSPACE or TABLE using v2 code, you cannot RELOAD it using v3 code (because the v2 DBSU code builds DDL with trailing blanks which the v3 DBSU code will not be able to execute).

------------------------------------------------------------
Some Common Performance Problems
------------------------------------------------------------
* Query runs slower after migration
* Reason: query contains a JOIN of two columns with different DECIMAL precision and scale
  Predicate no longer sargable
* Solution: drop one of the tables and recreate it giving the column the same DECIMAL precision and scale as in the other table

Prior to release 3.2.0, a column declared as DECIMAL(6,3) would be converted internally to DECIMAL(7,3). As a result, if it is joined to a column declared as DECIMAL(7,3), both columns would be treated as having the same precision and scale, and the predicate would be sargable. In release 3.2.0 and later, columns declared as DECIMAL are not converted internally. The column declared as DECIMAL(6,3) stays as DECIMAL(6,3). Therefore, the two columns in the join predicate do not have the same precision, and the predicate will be residual. This will affect performance because residual predicates are less efficient than sargable predicates. Basically, a sargable predicate is a WHERE condition that can be applied against the data earlier in the processing than a residual predicate. The earlier a predicate is applied, the fewer rows need to be fetched from the database, resulting in less I/O.
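
As an illustration (the table and column names here are invented), a join like the following is sargable before release 3.2.0 but residual afterwards, because the two AMOUNT columns are declared with different precision:

  CREATE TABLE ORDERS   (ORDNO INTEGER, AMOUNT DECIMAL(6,3))
  CREATE TABLE PAYMENTS (PAYNO INTEGER, AMOUNT DECIMAL(7,3))

  SELECT O.ORDNO, P.PAYNO
    FROM ORDERS O, PAYMENTS P
   WHERE O.AMOUNT = P.AMOUNT

Unloading ORDERS, dropping it, recreating it with AMOUNT as DECIMAL(7,3), and reloading the data makes the two declarations match again, and the join predicate becomes sargable once more.
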
* Query runs slower after migration
* Reason: query contains a JOIN of two columns created with different CCSIDs
  Predicate no longer sargable
* Solution: drop one of the tables and recreate it under the same CCSID as the other table

The concept of CCSIDs was introduced in release 3.3.0. Tables created prior to release 3.3.0 and migrated will have a CCSID based on values in the system catalog. The default CCSID for a migrated table is 37 (English). New tables created will have a CCSID which matches the CHARNAME defined for the database. If the CHARNAME has been redefined (for example, to INTERNATIONAL), the CCSID of the new table's columns (500) will not match the old table's (37). When the columns in a join predicate have different CCSIDs, the predicate becomes residual. This will affect performance because residual predicates are less efficient than sargable predicates.

* Performance degradation during normal processing
  SHOW DBEXTENT shows storage pool near SOS condition (e.g. 89% when SOSLEVEL is 10)
  COUNTER * shows excessive checkpoints
* Reason: update activity causing SOSLEVEL to be reached will trigger a checkpoint; frequent checkpoints caused by SOSLEVEL will decrease performance
* Solution: increase the size of the storage pool (i.e. add a dbextent)

When the number of pages in use (including shadow pages) reaches the SOSLEVEL, a checkpoint is posted to free pages to bring the number of pages in use below SOSLEVEL. Let's take an example: suppose we have 100 pages in a storage pool, the SOSLEVEL is set to 10 (so a short on storage condition is triggered when 90 pages are used), and there are 89 non-shadow pages in use after a checkpoint. An update to a page in the pool will cause a shadow page to be obtained, bringing the number of in-use pages to 90. A checkpoint is taken, bringing the number back to 89. In this situation, a checkpoint will be taken for every update until the number of non-shadow pages in use reaches 90. (Once the number of pages in use after a checkpoint is greater than or equal to SOSLEVEL, a checkpoint will only be taken when the pages in use (including shadow pages) near the total number of pages in the storage pool.) This will severely degrade performance.

* Other performance problems - things to check:
  o UPDATE ALL STATISTICS run?
  o Same environment as when good performance received?
  o Same start up parameters?
  o Compare EXPLAIN output for 'good' and 'bad' query

It is possible that some queries may experience degraded performance, especially after migration. It is important to check the environment the query is running in to be sure it is the same as when the query performed well. Things to check for include the availability of current statistics, indexes and how clustered they are, the values of the start up parameters NPAGBUF and NDIRBUF, the amount of data in the tables, etc. If possible, it would be useful to compare EXPLAIN output of a 'good' and 'bad' run and see what is different.

Running UPDATE ALL STATISTICS will update the statistics in the catalog for each column of each table that it processes. By comparison, UPDATE STATISTICS (without the 'ALL') only updates the statistics of the columns that are the FIRST column of an index. The more accurate the statistics available to the optimizer, the better the estimate of the query cost will be. We recommend that UPDATE ALL STATISTICS be run at least once - if there are no statistics available for a column, defaults will be used, and they may not be a good match for your data.
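
For reference, the two forms of the statement look like this (the table and dbspace names are only illustrations; substitute your own):

  UPDATE STATISTICS FOR TABLE SQLDBA.ORDERS          statistics for the first column of each index only
  UPDATE ALL STATISTICS FOR TABLE SQLDBA.ORDERS      statistics for every column of the table
  UPDATE ALL STATISTICS FOR DBSPACE PUBLIC.SAMPLE1   statistics for every column of every table in the dbspace

After the statistics are refreshed, re-run EXPLAIN on the slow query and compare the output with that of the 'good' run.
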
------------------------------------------------------------
Preprocessing with VSE Guest Sharing
------------------------------------------------------------
* Error seen when preprocessing program on VSE guest:

  ARI2913I Character translation table for CHARNAME = 'unknown' cannot be found, the character translation table for INTERNATIONAL will be used.
  ARI0503E An SQL error has occurred. The SQL statement cannot be executed because the previous error has left the application in an unconnected condition.
  ARI0505I SQLCODE = -900 ROWCOUNT = 0
  ARI0504I SQLERRP: ARIBRM, SQLERRD1: 0, SQLERRD2: 0

* Reason: VSE Guest Sharing installation not completed
* Solution: need to install the release 3.4.0 version of the ARIPDBS package in the VM database
  o Run the job ARIS205D

This error message may be confusing to users. In order to preprocess programs under VSE Guest Sharing, the VSE user code requires the SQLDBA.ARIPDBS package to exist in the VM database. (More specifically, a release 3.4.0 guest requires the release 3.4.0 version of the package to exist in the VM database.) To complete the VSE Guest Sharing installation, the SQLDBA.ARIPDBS package (supplied with your VSE installation) must be loaded into the VM database. This package, ARIPDBS, determines the DBCS option in effect for the prep of the application program and loads the character translation table, used for folding. It is only used for VSE programs. See page 49 of the Installation for VSE version 3 release 4 manual (manual number GH09-8090-02), in the section titled 'Loading SQLDBA.ARIPDBS'.

------------------------------------------------------------
DROP DBSPACE Problems
------------------------------------------------------------
* Error occurred during DROP DBSPACE (e.g. SOS in storage pool)
* Subsequent DROPs not successful
* Entries exist in SYSTEM.SYSDBSPACES with OWNER = '¬¬¬¬¬¬¬¬' (eight NOT characters) and DBSPACENAME equal to DBSPACENO
* Reason: first DROP did not complete successfully due to error
* Solution: fix whatever problem occurred during the first DROP (e.g. add a dbextent to the storage pool), then ACQUIRE and DROP another dbspace

DROP processing is a two-phase process. The first phase removes references to the object from the system catalogs and puts an entry in SYSTEM.SYSDROP. The second phase, which is triggered by a COMMIT WORK, processes the entries in SYSDROP and physically drops the object.

If the second phase of the DROP processing hits an error, such as a short on storage condition in the storage pool, it will roll back, leaving the entry in SYSDROP. Subsequent DROPs will not complete successfully until the error which caused this DROP to fail is addressed. A subsequent DROP of another object will trigger the SYSDROP processing, which will again fail with the original error, leaving TWO entries in SYSDROP. The error condition must be corrected. Once this happens, another DROP will trigger SYSDROP processing, which will complete the DROPs of all the objects in SYSDROP.

------------------------------------------------------------
SYSTEM.SYSOPTIONS changes
------------------------------------------------------------
* Changes made to SYSTEM.SYSOPTIONS (for example, MCCSIDSBCS) did not take effect
* Reason: SYSTEM.SYSOPTIONS is only read during database startup
* Solution: Shut down database and bring it up again to activate changes

During startup, the module ARIXERO reads the entries in SYSTEM.SYSOPTIONS into variables.
Once the database is up, the values of these variables are not changed, even if you update SYSTEM.SYSOPTIONS. You have to bring the database down and up again to activate the changes. A good idea is to think of the table as 'system.sysSTARTUPoptions'.

The values in SYSTEM.SYSOPTIONS that you might want to change include:
o DBCS (whether DBCS support is enabled),
o CHARSUB (default SUBTYPE to be used for character columns),
o MCCSIDSBCS (default CCSID for migrated character columns),
o MCCSIDMIXED (default CCSID for migrated mixed character columns),
o MCCSIDGRAPHIC (default CCSID for migrated DBCS character columns),
o DATE (which date format is to be used; for example, ISO, LOCAL, USA),
o TIME (which time format is to be used),
o LDATELEN (length of local date format),
o LTIMELEN (length of local time format).

Check the SQL Reference for IBM VM Systems and VSE (Version 3 Release 4, publication number SH09-8087-03) for more information on SYSTEM.SYSOPTIONS and what the various values mean.

------------------------------------------------------------
Some Common RXSQL Problems
------------------------------------------------------------
* Program check occurs on next RXSQL statement after a call to SQLRMEND or SQLINIT EXEC
* Cause: SQLRMEND and SQLINIT detach the Resource Adapter, but RXSQL doesn't know this
* Solution: COMMIT or ROLLBACK work, and RELEASE, or NUCXDROP RXSQL, before call to SQLINIT or SQLRMEND

When an EXEC that has been issuing RXSQL commands calls SQLRMEND or SQLINIT, the SQL/DS Resource Adapter becomes detached from RXSQL. RXSQL does not get informed of this, so a program check may occur on the next RXSQL command in the REXX program. Before calling either SQLRMEND or SQLINIT, issue one of the following:
o NUCXDROP RXSQL
o RXSQL COMMIT WORK RELEASE
o RXSQL ROLLBACK WORK RELEASE

The next RXSQL call will reconnect to the Resource Adapter. (A short REXX sketch of this sequence appears at the end of this section.)

* Message ELO0901E with 'WRONG RXSQL LOADLIB' received after migration to VM/ESA from VM/XA
* Cause: RXSQL uses a different loadlib in XA and non-XA environments
* Solution: Relink-edit RXSQL

When RXSQL is installed in an XA environment, and the operating system is migrated to a non-XA environment, a link-edit of RXSQL is required. Conversely, a link-edit is also necessary when moving from a non-XA environment to an XA environment. The error occurs because RXSQL has an environment-dependent module called ELOECXA for XA environments and one called ELOECNXA for non-XA environments. After an operating system migration in either direction, relink-edit RXSQL in the environment it will be running in.
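
Returning to the first problem above (the program check after SQLRMEND or SQLINIT), a minimal REXX sketch of the recommended sequence follows. The database name TESTDB and the surrounding EXEC are placeholders, not part of RXSQL itself:

  /* Finish the unit of work and release the Resource Adapter first */
  'RXSQL COMMIT WORK RELEASE'     /* or: RXSQL ROLLBACK WORK RELEASE, or NUCXDROP RXSQL */
  'EXEC SQLINIT DBNAME(TESTDB)'   /* switch databases; the next RXSQL command reconnects */
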
------------------------------------------------------------
Where to Go for Help?
------------------------------------------------------------
o Performance problems
  * Performance and Tuning Handbook (SH09-8111)
o Client/Server connectivity problems
  * DRDA Connectivity Guide (SC26-4783)
  * VM Connectivity Planning, Administration, and Operation (SC24-5448)
  * Setup and Usage of SQL/DS in a DRDA Environment (redbook GG24-3733)
  * Distributed Relational Database Cross Platform Connectivity and Application (redbook GG24-4311)
  * DRDA Client/Server Application Scenarios for VM and VSE (redbook GG24-4193)
  * Distributed Relational Database: Using OS/2 DRDA Client Support with DB2 (redbook GG24-3771)
  * Distributed Relational Database Remote Unit of Work Implementation DB2-DB2 and DB2-SQL/DS Volume 2 (redbook GG24-3716)
  * Client/Server Computing Application Design Guidelines: A Distributed Relational Data Perspective (redbook GG24-3727)
o General
  * SQL/DS Diagnosis Guide and Reference (LH09-8081)
  * INFO APARs (e.g. II03487 for segment problems)
o Finally ... 1-800-237-5511

------------------------------------------------------------
When You Have to Call Service
------------------------------------------------------------
* Examples of documentation to collect:
  o Operating System (e.g. VM/ESA 1.2.1, VSE/ESA 1.3.2)
  o SQL/DS release level (e.g. 3.4)
  o Service history (e.g. latest PUT applied)
  o The entire SQLCA
    * SQLCODE, SQLERRP, SQLERRD1, SQLERRD2, SQLERRM
  o FULL (not partial!) VM or VSE dump
  o SQL/DS minidump, operator console
  o contents of registers
  o EXPLAIN output (for query performance problems)
  o SQL/DS Level 2 unformatted trace

There are many types of documentation which can be helpful to IBM Service in performing problem diagnosis. Depending on the type of problem encountered, different documentation may be required.

The SQLCA is very useful in identifying the symptom of the problem. It is very important to IBM Service to know the complete SQLCA when a problem is encountered. Some fields of the SQLCA are described below:
1. SQLCODE - summarises the result of the executing statement
2. SQLERRP - the module that detected the failure
3. SQLERRD1 - RDS return code
4. SQLERRD2 - DBSS return code
5. SQLERRD3 - number of rows processed
6. SQLERRD4 - cost estimate for the statement
7. SQLERRD5 - additional delete information for RI
8. SQLWARN - characters that warn of various conditions encountered during the processing of your statement
9. SQLERRM - contains the message tokens

If a dump was produced by the error, you should also provide the minidump that was produced at the time of the error. It is very important that the dump and minidump match.

For a reproducible problem such as a protection exception, it is helpful to know the contents of the registers. To collect this information, do the following (a sketch of one possible command sequence follows the list):
o CP TRACE PROG
o reproduce the error (you will be in CP READ)
o display the registers
o take a dump (e.g. VMDUMP)
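
As a sketch only (the operands shown are one common choice; privilege requirements and exact syntax can vary with the VM level), the CP command sequence might look like this:

  CP TRACE PROG            stop when the next program interrupt occurs
  (rerun the failing request; when the program check occurs you are left in CP READ)
  DISPLAY G                display the general registers
  DISPLAY PSW              display the PSW at the time of the exception
  VMDUMP 0-END             dump the virtual machine storage
  TRACE END                turn the trace off before continuing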