IBM(R) DB2(R) Universal Database
Release Notes
Version 7.1 FixPak 2

(c) Copyright International Business Machines Corporation 2000. All rights reserved.

U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

------------------------------------------------------------------------

Table of Contents

Welcome to DB2 Universal Database Version 7.1!

Special Notes

* 1.1 DB2 Universal Database Business Intelligence Quick Tour
* 1.2 Downloading Installation Packages for All Supported DB2 Clients
* 1.3 Installing DB2 on Windows 2000
  o 1.3.1 Installing DB2 on Windows 95
* 1.4 Notes on Greater Than 8-Character User IDs and Schema Names
* 1.5 National Language Versions of DB2 Version 7.1
  o 1.5.1 Control Center and Documentation Filesets
* 1.6 Accessibility Features of DB2 UDB Version 7.1
  o 1.6.1 Keyboard Input and Navigation
    + 1.6.1.1 Keyboard Input
    + 1.6.1.2 Keyboard Focus
  o 1.6.2 Features for Accessible Display
    + 1.6.2.1 High-Contrast Mode
    + 1.6.2.2 Font Settings
    + 1.6.2.3 Non-dependence on Color
  o 1.6.3 Alternative Alert Cues
  o 1.6.4 Compatibility with Assistive Technologies
  o 1.6.5 Accessible Documentation
* 1.7 DB2 Everywhere is Now DB2 Everyplace
* 1.8 Error Messages when Attempting to Launch Netscape
* 1.9 Mouse Required
* 1.10 Supported Web Browsers on the Windows 2000 Operating System
* 1.11 Opening External Web Links in Netscape Navigator From The Information Center when Netscape is Already Open (UNIX Based Systems)
* 1.12 Problems Starting the Information Center
* 1.13 Configuration Requirement for Adobe Acrobat Reader on UNIX Based Systems
* 1.14 Attempting to Bind from the DB2 Run-time Client Results in a "Bind files not found" Error
* 1.15 Additional Required Solaris Patch Level
* 1.16 Supported CPUs on DB2 Version 7.1 for Solaris
* 1.17 Searching the DB2 Online Information on Solaris
* 1.18 Java Control Center on OS/2
* 1.19 Search Discovery
* 1.20 Problems When Adding Nodes to a Partitioned Database
* 1.21 Errors During Migration
* 1.22 Memory Windows for HP-UX 11
* 1.23 SQL Reference is Provided in One PDF File
* 1.24 Migration Issue Regarding Views Defined with Special Registers
* 1.25 User Action for dlfm client_conf Failure
* 1.26 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop
* 1.27 Chinese Locale Fix on Red Flag Linux
* 1.28 Uninstalling DB2 DFS Client Enabler
* 1.29 DB2 Install May Hang if a Removable Drive is Not Attached
* 1.30 Client Authentication on Windows NT
* 1.31 AutoLoader May Hang During a Fork
* 1.32 DATALINK Restore
* 1.33 Define User ID and Password in IBM Communications Server for Windows NT (CS/NT)
  o 1.33.1 Node Definition
* 1.34 Federated Systems Restrictions
* 1.35 DataJoiner Restriction
* 1.36 IPX/SPX Protocol Support on Windows 2000
* 1.37 Stopping DB2 Processes Before Upgrading a Previous Version of DB2
* 1.38 Run db2iupdt After Installing DB2 If Another DB2 Product is Already Installed
* 1.39 JDK Level on OS/2
* 1.40 Setting up the Linux Environment to Run DB2
* 1.41 Hebrew Information Catalog Manager for Windows NT
* 1.42 Error While Creating an SQL Stored Procedure on the Server
* 1.43 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit) Support
* 1.44 DB2's SNA SPM Fails to Start After Booting Windows
* 1.45 Additional Locale Setting for DB2 for Linux in a Japanese and Simplified Chinese Linux Environment
* 1.46 Locale Setting for the DB2 Administration Server
* 1.47 Java Method Signature in PARAMETER STYLE JAVA Procedures and Functions
* 1.48 Shortcuts Not Working
* 1.49 Service Account Requirements for DB2 on Windows NT and Windows 2000
* 1.50 Lost EXECUTE Privilege for Query Patroller Users Created in Version 6
* 1.51 Query Patroller Restrictions
* 1.52 Need to Commit all User-defined Programs That Will Be Used in the Data Warehouse Center (DWC)
* 1.53 Sub-element Statistics
* 1.54 Control Center Problem on Microsoft Internet Explorer
* 1.55 New Option for Data Warehouse Center Command Line Export
* 1.56 Backup Services APIs (XBSA)
* 1.57 OS/390 Agent
  o 1.57.1 Installation overview
  o 1.57.2 Installation details
  o 1.57.3 Setting up additional agent functions
* 1.58 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition for Linux on S/390
* 1.59 DB2 Universal Database Enterprise - Extended Edition for Linux
* 1.60 JDBC 2.0 Support for Linux, Linux/390 and HP-UX
* 1.61 Client Side Caching on Windows NT
* 1.62 Incompatibility between DB2 and Sybase in the Windows Environment
* 1.63 DB2 UDB Supports the Baltic Rim Code Page (MS-1257) on Windows Platforms
* 1.64 Windows NT DLFS Incompatible with Norton's Utilities
* 1.65 SET CONSTRAINTS Replaced by SET INTEGRITY
* 1.66 Loss of Control Center Function

Administration Guide: Planning

* 2.1 Chapter 8. Physical Database Design
* 2.2 Chapter 9. Designing Distributed Databases
* 2.3 Chapter 13. High Availability in the Windows NT Environment
  o 2.3.1 Need to Reboot the Machine Before Running DB2MSCS Utility
* 2.4 Chapter 14. DB2 and High Availability on Sun Cluster 2.2
* 2.5 Appendix E. National Language Support

Administration Guide: Implementation

* 3.1 Adding or Extending DMS Containers (New Process)
* 3.2 Chapter 4. Altering a Database
  o 3.2.1 Adding a Container to an SMS Table Space on a Partition
  o 3.2.2 Switching the State of a Table Space
* 3.3 Chapter 8. Recovering a Database
  o 3.3.1 How to Use Suspended I/O
* 3.4 Appendix C. User Exit for Database Recovery
* 3.5 Appendix I. High Speed Inter-node Communications
  o 3.5.1 Enabling DB2 to Run Using VI

Administration Guide: Performance

* 4.1 Chapter 5. System Catalog Statistics
  o 4.1.1 Collecting and Using Distribution Statistics
* 4.2 Chapter 6. Understanding the SQL Compiler
  o 4.2.1 Replicated Summary Tables
  o 4.2.2 Data Access Concepts and Optimization
* 4.3 Chapter 13. Configuring DB2
  o 4.3.1 Sort Heap Size (sortheap)
  o 4.3.2 Sort Heap Threshold (sheapthres)
  o 4.3.3 Maximum Percent of Lock List Before Escalation (maxlocks)
  o 4.3.4 Configuring DB2/DB2 Data Links Manager/Data Links Access Token Expiry Interval (dl_expint)
  o 4.3.5 MIN_DEC_DIV_3 Database Configuration Parameter
* 4.4 Appendix A. DB2 Registry and Environment Variables
  o 4.4.1 Table of New and Changed Registry Variables
* 4.5 Appendix C. SQL Explain Tools

Administrative API Reference

* 5.1 db2ConvMonStream
* 5.2 db2DatabasePing (new API)
  o db2DatabasePing - Ping Database
* 5.3 db2XaGetInfo (new API)
  o db2XaGetInfo - Get Information for Resource Manager
* 5.4 db2XaListIndTrans (new API that supersedes sqlxphqr)
  o db2XaListIndTrans - List Indoubt Transactions
* 5.5 sqlaintp - Get Error Message
* 5.6 Documentation Error Regarding AIX Extended Shared Memory Support (EXTSHM)
* 5.7 SQLFUPD Documentation Error

Application Building Guide

* 6.1 Chapter 1. Introduction
  o 6.1.1 Supported Software
  o 6.1.2 Sample Programs
* 6.2 Chapter 3. General Information for Building DB2 Applications
  o 6.2.1 Build Files, Makefiles, and Error-checking Utilities
* 6.3 Chapter 4. Building Java Applets and Applications
  o 6.3.1 Setting the Environment
* 6.4 Chapter 5. Building SQL Procedures
  o 6.4.1 Setting the SQL Procedures Environment
  o 6.4.2 Setting the Compiler Environment Variables
  o 6.4.3 Customizing the Compilation Command
  o 6.4.4 Retaining Intermediate Files
  o 6.4.5 Backup and Restore
* 6.5 Creating SQL Procedures
* 6.6 Calling Stored Procedures
* 6.7 Chapter 7. Building HP-UX Applications
  o 6.7.1 HP-UX C
  o 6.7.2 HP-UX C++
* 6.8 Chapter 10. Building PTX Applications
  o 6.8.1 ptx/C++
* 6.9 Chapter 12. Building Solaris Applications
  o 6.9.1 SPARCompiler C++
* 6.10 VisualAge C++ Version 4.0 on OS/2 and Windows

Application Development Guide

* 7.1 Writing OLE Automation Stored Procedures
* 7.2 Chapter 7. Stored Procedures
  o 7.2.1 DECIMAL Type Fails in Linux Java Routines
* 7.3 Chapter 12. Working with Complex Objects: User-Defined Structured Types
  o 7.3.1 Inserting Structured Type Attributes Into Columns
* 7.4 Chapter 20. Programming in C and C++
  o 7.4.1 C/C++ Types for Stored Procedures, Functions, and Methods
* 7.5 Appendix B. Sample Programs
* 7.6 Activating the IBM DB2 Universal Database Project and Tool Add-ins for Microsoft Visual C++
* 7.7 IBM DB2 OLE DB Provider
* 7.8 Using Cursors in Recursive Stored Procedures
* 7.9 Language Considerations/Programming in Java/Creating Java Applications and Applets/Applet Support in Java

CLI Guide and Reference

* 8.1 CLI Unicode Functions and SQL_C_WCHAR Support on AIX Only
* 8.2 Binding Database Utilities Using the Run-Time Client
* 8.3 Addition to the "Using Compound SQL" Section
* 8.4 Writing a Stored Procedure in CLI
* 8.5 CLI Stored Procedures and Autobinding
* 8.6 Addition to Appendix D "Extended Scalar Functions": DAYOFWEEK_ISO() and WEEK_ISO() Functions
* 8.7 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility
* 8.8 Using Static SQL in CLI Applications
* 8.9 Limitations of JDBC/ODBC/CLI Static Profiling
* 8.10 Parameter Correction for SQLBindFileToParam() CLI Function
* 8.11 SQLNextResult - Associate Next Result Set with Another Statement Handle
  o 8.11.1 Purpose
  o 8.11.2 Syntax
  o 8.11.3 Function Arguments
  o 8.11.4 Usage
  o 8.11.5 Return Codes
  o 8.11.6 Diagnostics
  o 8.11.7 Restrictions
  o 8.11.8 References
* 8.12 ADT Transforms

Command Reference

* 9.1 db2batch - Benchmark Tool
* 9.2 db2cap (new command)
  o db2cap - CLI/ODBC Static Package Binding Tool
* 9.3 db2gncol (new command)
  o db2gncol - Update Generated Column Values
* 9.4 db2inidb - Initialize a Mirrored Database
* 9.5 db2look - DB2 Statistics Extraction Tool
* 9.6 db2updv7 - Update Database to Version 7 Current Fix Level
* 9.7 Migrating from Version 6 of DB2 Query Patroller Using dqpmigrate
* 9.8 New Command Line Processor Option (-x, Suppress printing of column headings)
* 9.9 True Type Font Requirement for DB2 CLP
* 9.10 BIND
* 9.11 CALL
* 9.12 EXPORT
* 9.13 GET DATABASE CONFIGURATION
* 9.14 IMPORT
* 9.15 LOAD
* 9.16 PING (new command)
  o PING

Connectivity Supplement

* 10.1 Setting Up the Application Server in a VM Environment
* 10.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings

Data Links Manager Quick Beginnings

* 11.1 Dlfm start Fails with Message: "Error in getting the afsfid for prefix"
* 11.2 Setting Tivoli Storage Manager Class for Archive Files
* 11.3 Disk Space Requirements for DFS Client Enabler
* 11.4 Monitoring the Data Links File Manager Back-end Processes on AIX
* 11.5 Installing and Configuring DB2 Data Links Manager for AIX: Additional Installation Considerations in DCE-DFS Environments
* 11.6 Failed "dlfm add_prefix" Command
* 11.7 Installing and Configuring DB2 Data Links Manager for AIX: Installing DB2 Data Links Manager on AIX Using the db2setup Utility
* 11.8 Installing and Configuring DB2 Data Links Manager for AIX: DCE-DFS Post-Installation Task
* 11.9 Installing and Configuring DB2 Data Links Manager for AIX: Manually Installing DB2 Data Links Manager Using Smit
* 11.10 Installing and Configuring DB2 Data Links DFS Client Enabler
* 11.11 Installing and Configuring DB2 Data Links Manager for Solaris
* 11.12 Choosing a Backup Method for DB2 Data Links Manager on AIX
* 11.13 Choosing a Backup Method for DB2 Data Links Manager on Windows NT
* 11.14 Backing up a Journalized File System on AIX
* 11.15 Administrator Group Privileges in Data Links on Windows NT
* 11.16 Minimize Logging for DataLinks File System Filter (DLFF) Installation
  o 11.16.1 Logging Messages after Installation
* 11.17 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets

Data Movement Utilities Guide and Reference

* 12.1 Pending States After a Load Operation
* 12.2 Load Restrictions and Limitations
* 12.3 rexecd Required to Run Autoloader When Authentication=yes

Installation and Configuration Supplement

* 13.1 Binding Database Utilities Using the Run-Time Client
* 13.2 UNIX Client Access to DB2 Using ODBC
* 13.3 Switching NetQuestion for OS/2 to Use TCP/IP
* 13.4 Chapter 26. Setting Up a Federated System to Access Oracle Data Sources Documentation Errors

Message Reference

* 14.1 DWC13603E (New Message)
* 14.2 DWC13700E (New Message)
* 14.3 DWC13701E (New Message)
* 14.4 DWC13702E (New Message)
* 14.5 DWC13703E (New Message)
* 14.6 DWC13705E (New Message)
* 14.7 DWC13706E (New Message)
* 14.8 DWC13707E (New Message)
* 14.9 SQL0270N (New Reason Code 40)
* 14.10 SQL0301N (New Explanation Text)
* 14.11 SQL0303N (New Text)
* 14.12 SQL0358N (New User Response 26)
* 14.13 SQL0408N (New Text)
* 14.14 SQL0423N (Revised Text)
* 14.15 SQL0670N (Revised Text)
* 14.16 SQL1179W (Revised Text)
* 14.17 SQL1550N (New SQLCODE)
* 14.18 SQL1551N (New SQLCODE)
* 14.19 SQL1552N (New SQLCODE)
* 14.20 SQL1553N (New SQLCODE)
* 14.21 SQL1704N (New Reason Codes)
* 14.22 SQL2426N (New Message)
* 14.23 SQL2571N (New Message)
* 14.24 SQL2572N (New Message)
* 14.25 SQL4942N (New Text)
* 14.26 SQL20117N (Changed Reason Code 1)
* 14.27 SQL20133N (New Message)
* 14.28 SQL20134N (New Message)
* 14.29 SQL20135N (New Message)
* 14.30 New SQLSTATE values: 428F7, 55045, 55046

Replication Guide and Reference

* 15.1 Replication on Windows 2000
* 15.2 Table and Column Names
* 15.3 DATALINK Replication
* 15.4 LOB Restrictions
* 15.5 Replication and Non-IBM Servers
* 15.6 Update-anywhere Prerequisite
* 15.7 Replication Scenarios
* 15.8 Planning for Replication
* 15.9 Setting Up Your Replication Environment
* 15.10 Problem Determination
* 15.11 Capture and Apply for AS/400
* 15.12 Table Structures
* 15.13 Capture and Apply Messages
* 15.14 Starting the Capture and Apply Programs from Within an Application

SQL Reference

* 16.1 ALTER TABLE
* 16.2 IDENTITY_VAL_LOCAL
* 16.3 OLAP Functions
* 16.4 SQL Procedures/Compound Statement
* 16.5 LCASE and UCASE (Unicode)
* 16.6 WEEK_ISO
* 16.7 Naming Conventions and Implicit Object Name Qualifications
* 16.8 Queries (select-statement/fetch-first-clause)
* 16.9 Libraries Used by the CREATE WRAPPER Statement on Linux
* 16.10 Update of the Partitioning Key Now Supported
  o 16.10.1 Statement: ALTER TABLE
  o 16.10.2 Statement: CREATE TABLE
  o 16.10.3 Statement: DECLARE GLOBAL TEMPORARY TABLE PARTITIONING KEY (column-name,...)
  o 16.10.4 Statement: SET transition-variable
  o 16.10.5 Statement: UPDATE
* 16.11 Enabling the New SQL Built-in Scalar Functions
* 16.12 ABS or ABSVAL
* 16.13 MULTIPLY_ALT
* 16.14 ROUND
  o 16.14.1 Examples

System Monitor Guide and Reference

* 17.1 db2ConvMonStream

Troubleshooting Guide

* 18.1 Starting DB2 on Windows 95 and Windows 98 When the User Is Not Logged On

Using DB2 Universal Database on 64-bit Platforms

* 19.1 Chapter 5. Configuration
* 19.2 Chapter 6. Restrictions

Control Center

* 20.1 Ability to Administer DB2 Server for VSE and VM Servers
* 20.2 Java 1.2 Support for the Control Center
* 20.3 "Invalid shortcut" Error when Using the Online Help on the Windows Operating System
* 20.4 "File access denied" Error when Attempting to View a Completed Job in the Journal on the Windows Operating System
* 20.5 Multisite Update Test Connect
* 20.6 Control Center for DB2 for OS/390
* 20.7 Required Fix for Control Center for OS/390
* 20.8 Change to the Create Spatial Layer Dialog
* 20.9 Troubleshooting Information for the DB2 Control Center
* 20.10 Control Center Troubleshooting on UNIX Based Systems
* 20.11 Possible Infopops Problem on OS/2
* 20.12 Launching More Than One Control Center Applet
* 20.13 Help for the jdk11_path Configuration Parameter
* 20.14 Solaris System Error (SQL10012N) when Using the Script Center or the Journal
* 20.15 Help for the DPREPL.DFT File
* 20.16 Online Help for the Control Center Running as an Applet
* 20.17 Running the Control Center in Applet Mode (Windows 95)
* 20.18 DB2 Control Center for OS/390

Data Warehouse Center

* 21.1 Data Warehouse Center Publications
  o 21.1.1 Data Warehouse Center Application Integration Guide
  o 21.1.2 Data Warehouse Center Administration Guide
  o 21.1.3 Data Warehouse Center Messages
  o 21.1.4 Data Warehouse Center Online Help
  o 21.1.5 Revised Business Intelligence Tutorial
* 21.2 Warehouse Control Database
  o 21.2.1 The default warehouse control database
  o 21.2.2 The Warehouse Control Database Management window
  o 21.2.3 Changing the active warehouse control database
  o 21.2.4 Creating and initializing a warehouse control database
  o 21.2.5 Migrating IBM Visual Warehouse control databases
* 21.3 Setting up and running replication with Data Warehouse Center
* 21.4 Troubleshooting tips
* 21.5 Correction to RUNSTATS and REORGANIZE TABLE Online Help
* 21.6 Notification Page (Warehouse Properties Notebook and Schedule Notebook)
* 21.7 Agent Module Field in the Agent Sites Notebook
* 21.8 Accessing DB2 Version 5 data with the DB2 Version 7.1 warehouse agent
  o 21.8.1 Migrating DB2 Version 5 servers
  o 21.8.2 Changing the agent configuration
    + 21.8.2.1 UNIX warehouse agents
    + 21.8.2.2 Microsoft Windows NT, Windows 2000, and OS/2 warehouse agents
* 21.9 Accessing warehouse control databases
* 21.10 Accessing sources and targets
* 21.11 Accessing DB2 Version 5 information catalogs with the DB2 Version 7.1 Information Catalog Manager
* 21.12 Additions to supported non-IBM database sources
* 21.13 Importing and Exporting Metadata Using the Common Warehouse Metadata Interchange (CWMI)
  o 21.13.1 Introduction
  o 21.13.2 Importing Metadata
  o 21.13.3 Updating Your Metadata After Running the Import Utility
  o 21.13.4 Exporting Metadata
* 21.14 Creating a Data Source Manually in Data Warehouse Center

DB2 Stored Procedure Builder

* 22.1 Java 1.2 Support for the DB2 Stored Procedure Builder
* 22.2 Remote Debugging of DB2 Stored Procedures
* 22.3 Building SQL Procedures on Windows, OS/2 or UNIX Platforms
* 22.4 Using the DB2 Stored Procedure Builder on the Solaris Platform
* 22.5 Known Problems and Limitations
* 22.6 Using DB2 Stored Procedure Builder with Traditional Chinese Locale
* 22.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored Procedure Builder

DB2 Warehouse Manager

* 23.1 "Warehouse Manager" Should Be "DB2 Warehouse Manager"
* 23.2 Information Catalog Manager Initialization Utility
* 23.3 Information Catalog Manager for the Web
* 23.4 DB2 Warehouse Manager Publications
  o 23.4.1 Information Catalog Manager Administration Guide
* 23.5 Information Catalog Manager Programming Guide and Reference
  o 23.5.1 Information Catalog Manager Reason Codes
* 23.6 Information Catalog Manager User's Guide
* 23.7 Information Catalog Manager: Online Messages
* 23.8 Information Catalog Manager: Online Help
* 23.9 Query Patroller Administration Guide
  o 23.9.1 DB2 Query Patroller Client is a Separate Component
  o 23.9.2 Manual Installation of Query Patroller on AIX and Solaris
    + 23.9.2.1 Creating the Query Patroller Schema and Binding the Application Bind Files
    + 23.9.2.2 Manual Installation Commands
  o 23.9.3 Enabling Query Management
  o 23.9.4 Starting Query Administrator
  o 23.9.5 User Administration
  o 23.9.6 Creating a Job Queue
  o 23.9.7 Using the Command Line Interface
  o 23.9.8 Query Enabler Notes

Information Center

* 24.1 "Invalid shortcut" Error on the Windows Operating System

OLAP Starter Kit

* 25.1 OLAP Server Web Site
* 25.2 Completing the DB2 OLAP Starter Kit Setup on AIX and Solaris
* 25.3 Logging in from OLAP Integration Server Desktop
  o 25.3.1 Starter Kit Login Example
* 25.4 Manually creating and configuring the sample databases for OLAP Integration Server
* 25.5 Known problems and limitations
  o 25.5.1 DB2 OLAP Starter Kit
  o 25.5.2 DB2 OLAP Desktop Client
  o 25.5.3 Spreadsheet Clients
  o 25.5.4 DB2 OLAP Integration Server
* 25.6 OLAP Starter Kit Spreadsheet Needs Current Windows svc.pack
* 25.7 OLAP Spreadsheet Add-in EQD Files Missing
* 25.8 Attribute Dimension Support
  o 25.8.1 Updated books for DB2 OLAP Starter Kit

What's New

* 26.1 On Demand Log Archive Support Documentation Error

Unicode Updates

* 27.1 Introduction
  o 27.1.1 DB2 Unicode Databases and Applications
  o 27.1.2 Documentation Updates
* 27.2 SQL Reference
  o 27.2.1 Chapter 3 Language Elements
    + 27.2.1.1 Promotion of Data Types
    + 27.2.1.2 Casting Between Data Types
    + 27.2.1.3 Assignments and Comparisons
    + 27.2.1.4 Rules for Result Data Types
    + 27.2.1.5 Rules for String Conversions
    + 27.2.1.6 Expressions
    + 27.2.1.7 Predicates
  o 27.2.2 Chapter 4 Functions
    + 27.2.2.1 Scalar Functions
* 27.3 CLI Guide and Reference
  o 27.3.1 Chapter 3. Using Advanced Features
    + 27.3.1.1 Writing a DB2 CLI Unicode Application
  o 27.3.2 Appendix C. DB2 CLI and ODBC
    + 27.3.2.1 ODBC Unicode Applications
* 27.4 Data Movement Utilities Guide and Reference
  o 27.4.1 Appendix C. Export/Import/Load Utility File Formats

Wizards

* 28.1 Setting Extent Size in the Create Database Wizard

Additional Information

* 29.1 DB2 Universal Database and DB2 Connect Online Support
* 29.2 DB2 Magazine

Appendix A. Notices

* A.1 Trademarks

------------------------------------------------------------------------

Welcome to DB2 Universal Database Version 7.1!
This file contains information about the following products that was not available when the DB2 manuals were printed:

   IBM DB2 Universal Database Personal Edition, Version 7.1
   IBM DB2 Universal Database Workgroup Edition, Version 7.1
   IBM DB2 Universal Database Enterprise Edition, Version 7.1
   IBM DB2 Data Links Manager, Version 7.1
   IBM DB2 Universal Database Enterprise - Extended Edition, Version 7.1
   IBM DB2 Query Patroller, Version 7.1
   IBM DB2 Personal Developer's Edition, Version 7.1
   IBM DB2 Universal Developer's Edition, Version 7.1
   IBM DB2 Data Warehouse Manager, Version 7.1
   IBM DB2 Relational Connect, Version 7.1

A separate Release Notes file, installed as READCON.TXT, is provided for the following products:

   IBM DB2 Connect Personal Edition, Version 7.1
   IBM DB2 Connect Enterprise Edition, Version 7.1

The What's New book contains both an overview of some of the major DB2 enhancements for Version 7.1 and a detailed description of these new features and enhancements.

------------------------------------------------------------------------

Special Notes

------------------------------------------------------------------------

1.1 DB2 Universal Database Business Intelligence Quick Tour

The Quick Tour is not available on DB2 for Linux or Linux/390.

The Quick Tour is optimized to run with small system fonts. You may have to adjust your Web browser's font size to view the Quick Tour correctly on OS/2. Refer to your Web browser's help for information on adjusting font size. To view the Quick Tour correctly (SBCS only), it is recommended that you use an 8-point Helv font. For Japanese and Korean customers, it is recommended that you use an 8-point Mincho font. When you set font preferences, be sure to select the "Use my default fonts, overriding document-specified fonts" option on the Fonts page of the Preferences window.

In some cases the Quick Tour may launch behind a secondary browser window. To correct this problem, close the Quick Tour and follow the steps in 1.8, Error Messages when Attempting to Launch Netscape.

When launching the Quick Tour, you may receive a JavaScript error similar to the following:

   file:/C/Program Files/SQLLIB/doc/html/db2qt/index4e.htm, line 65:
   Window is not defined.

This JavaScript error prevents the Quick Tour launch page, index4e.htm, from closing automatically after the Quick Tour is launched. You can close the launch page by closing the browser window in which index4e.htm is displayed.

In the "What's New" section, under the Data Management topic, it is stated that "on-demand log archive support" is supported in Version 7.1. This is not the case. It is also stated that:

   The size of the log files has been increased from 4GB to 32GB.

This sentence should read:

   The total active log space has been increased from 4GB to 32GB.

The section describing the DB2 Data Links Manager contains a sentence that reads:

   Also, it now supports the use of the Veritas XBSA interface for backup
   and restore using NetBackup.

This sentence should read:

   Also, it now supports the XBSA interface for file archival and
   restore. Storage managers that support the XBSA interface include
   Legato NetWorker and Veritas NetBackup.
------------------------------------------------------------------------

1.2 Downloading Installation Packages for All Supported DB2 Clients

To download installation packages for all supported DB2 clients, including all pre-Version 7.1 clients, connect to the IBM DB2 Client Application Enabler Pack Web site at http://www.ibm.com/software/data/db2/db2tech/clientpak.html

------------------------------------------------------------------------

1.3 Installing DB2 on Windows 2000

On Windows 2000, when installing over a previous version of DB2 or when reinstalling the current version, ensure that the recovery options for all of the DB2 services are set to "Take No Action".

1.3.1 Installing DB2 on Windows 95

If you are installing DB2 on a non-English Windows 95 machine, you must manually upgrade your WinSock2 before installing DB2. On English-language versions of Windows 95, the DB2 install handles this upgrade; on non-English versions, the upgrade fails if it is not done before DB2 is installed. The problem does not appear to lie with DB2 itself, but with the operating system or the WinSock2 upgrade program.

------------------------------------------------------------------------

1.4 Notes on Greater Than 8-Character User IDs and Schema Names

* DB2 Version 7.1 products on Windows 32-bit platforms support user IDs that are up to 30 characters long. However, because of native support of Windows NT and Windows 2000, the practical limit for user IDs is 20 characters.
* DB2 Version 7.1 supports non-Windows 32-bit clients connecting to Windows NT and Windows 2000 with user IDs longer than 8 characters when the user ID and password are specified explicitly. This excludes connections using Client or DCE authentication.
* DCE authentication on all platforms continues to have the 8-character user ID limit.
* The authid returned in the SQLCA from a successful CONNECT or ATTACH is truncated to 8 characters. The SQLWARN fields contain warnings when truncation occurs. For more information, refer to the description of the CONNECT statement in the SQL Reference.
* The authid returned by the command line processor (CLP) from a successful CONNECT or ATTACH is truncated to 8 characters. An ellipsis (...) is appended to the authid to indicate truncation.
* DB2 Version 7.1 supports schema names up to 30 bytes long, with the following exceptions:
  o Tables with schema names longer than 18 bytes cannot be replicated.
  o User-defined types (UDTs) cannot have schema names longer than 8 bytes.

------------------------------------------------------------------------

1.5 National Language Versions of DB2 Version 7.1

DB2 Version 7.1 is available in English, French, German, Italian, Spanish, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, Traditional Chinese, Danish, Finnish, Norwegian, Swedish, Czech, Dutch, Hungarian, Polish, Turkish, Russian, Bulgarian, and Slovenian.

On UNIX-based platforms, the DB2 product messages and library can be installed in several different languages. The DB2 installation utility lays down the message catalog filesets in the most commonly used locale directory for a given platform, as shown in the following chart.
Locale name and code page by operating system:

   French
      AIX:        fr_FR (819), Fr_FR (850)
      HP-UX:      fr_FR.iso88591 (819), fr_FR.roman8 (1051)
      Solaris:    fr (819)
      Linux:      fr (819)
      Linux/390:  fr (819)
      SGI:        fr (819)
   German
      AIX:        de_DE (819), De_DE (850)
      HP-UX:      de_DE.iso88591 (819), de_DE.roman8 (1051)
      Solaris:    de (819)
      Linux:      de (819)
      Linux/390:  de (819)
      SGI:        de (819)
   Italian
      AIX:        it_IT (819), It_IT (850)
      HP-UX:      it_IT.iso88591 (819), it_IT.roman8 (1051)
      Solaris:    it (819)
      Linux:      it (819)
      Linux/390:  it (819)
   Spanish
      AIX:        es_ES (819), Es_ES (850)
      HP-UX:      es_ES.iso88591 (819), es_ES.roman8 (1051)
      Solaris:    es (819)
      Linux:      es (819)
      Linux/390:  es (819)
      SGI:        es (819)
   Brazilian Portuguese
      AIX:        pt_BR (819)
      HP-UX:      pt_BR (819)
      Solaris:    pt_BR (819)
      Linux:      pt_BR (819)
   Japanese
      AIX:        ja_JP (954), Ja_JP (932)
      HP-UX:      ja_JP.eucJP (954)
      Solaris:    ja (954)
      Linux:      ja_JP.ujis (954)
      Linux/390:  ja_JP.EUC (954)
   Korean
      AIX:        ko_KR (970)
      HP-UX:      ko_KR.eucKR (970)
      Solaris:    ko (970)
      Linux:      ko_KO.euc (970)
   Simplified Chinese
      AIX:        zh_CN (1383), Zh_CN.GBK (1386)
      HP-UX:      zh_CN.hp15CN (1383), zh_CN.GBK (1386)
      Solaris:    zh (1383)
      Linux:      zh (1386)
   Traditional Chinese
      AIX:        zh_TW (964), Zh_TW (950)
      HP-UX:      zh_TW.eucTW (964), zh_TW.big5 (950)
      Solaris:    zh_TW (964), zh_TW.BIG5 (950)
   Danish
      AIX:        da_DK (819), Da_DK (850)
      HP-UX:      da_DK.iso88591 (819), da_DK.roman8 (1051)
      Solaris:    da (819)
   Finnish
      AIX:        fi_FI (819), Fi_FI (850)
      HP-UX:      fi_FI.iso88591 (819), fi_FI.roman8 (1051)
      Solaris:    fi (819)
   Norwegian
      AIX:        no_NO (819), No_NO (850)
      HP-UX:      no_NO.iso88591 (819), no_NO.roman8 (1051)
      Solaris:    no (819)
   Swedish
      AIX:        sv_SE (819), Sv_SE (850)
      HP-UX:      sv_SE.iso88591 (819), sv_SE.roman8 (1051)
      Solaris:    sv (819)
   Czech
      AIX:        cs_CZ (912)
   Hungarian
      AIX:        hu_HU (912)
   Polish
      AIX:        pl_PL (912)
   Dutch
      AIX:        nl_NL (819), Nl_NL (850)
      Solaris:    nl (819)
   Turkish
      AIX:        tr_TR (920)
   Russian
      AIX:        ru_RU (915)
   Bulgarian
      AIX:        bg_BG (915)
      HP-UX:      bg_BG.iso88595 (915)
   Slovenian
      AIX:        sl_SI (912)
      HP-UX:      sl_SI.iso88592 (912)
      Solaris:    sl_SI (912)

If your system uses the same code pages but different locale names than those shown above, you can still see the translated messages by creating a link to the appropriate message directory. For example, if your AIX machine's default locale is ja_JP.IBM-eucJP, and the code page of ja_JP.IBM-eucJP is 954, you can create a link from /usr/lpp/db2_07_01/msg/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/msg/ja_JP by issuing the following command:

   ln -s /usr/lpp/db2_07_01/msg/ja_JP /usr/lpp/db2_07_01/msg/ja_JP.IBM-eucJP

After this command has been executed, all DB2 messages come up in Japanese.

1.5.1 Control Center and Documentation Filesets

The Control Center, Control Center help, and documentation filesets are placed in the following directories on the target workstation:

* DB2 for AIX:
  o /usr/lpp/db2_07_01/cc/%L
  o /usr/lpp/db2_07_01/java/%L
  o /usr/lpp/db2_07_01/doc/%L
  o /usr/lpp/db2_07_01/qp/%L
  o /usr/lpp/db2_07_01/spb/%L
* DB2 for HP-UX:
  o /opt/IBMdb2/V7.1/cc/%L
  o /opt/IBMdb2/V7.1/java/%L
  o /opt/IBMdb2/V7.1/doc/%L
* DB2 for Linux:
  o /usr/IBMdb2/V7.1/cc/%L
  o /usr/IBMdb2/V7.1/java/%L
  o /usr/IBMdb2/V7.1/doc/%L
* DB2 for Solaris:
  o /opt/IBMdb2/V7.1/cc/%L
  o /opt/IBMdb2/V7.1/java/%L
  o /opt/IBMdb2/V7.1/doc/%L

The Control Center filesets are in the Unicode code page. The documentation and Control Center help filesets are in a browser-recognized code set. If your system uses a different locale name than the ones provided, you can still run the translated version of the Control Center and see the translated version of the help by creating links to the appropriate language directories. For example, if your AIX machine's default locale is ja_JP.IBM-eucJP, you can create links from /usr/lpp/db2_07_01/cc/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/cc/ja_JP and from /usr/lpp/db2_07_01/doc/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/doc/ja_JP by issuing the following commands:

   ln -s /usr/lpp/db2_07_01/cc/ja_JP /usr/lpp/db2_07_01/cc/ja_JP.IBM-eucJP
   ln -s /usr/lpp/db2_07_01/doc/ja_JP /usr/lpp/db2_07_01/doc/ja_JP.IBM-eucJP

After these commands have been executed, the Control Center and help text come up in Japanese.
Note: The Web Control Center is not supported on Linux/390.

------------------------------------------------------------------------

1.6 Accessibility Features of DB2 UDB Version 7.1

The DB2 UDB family of products includes a number of features that make the products more accessible for people with disabilities. These features include:

* Features that facilitate keyboard input and navigation
* Features that enhance display properties
* Options for audio and visual alert cues
* Compatibility with assistive technologies
* Compatibility with accessibility features of the operating system
* Accessible documentation formats

1.6.1 Keyboard Input and Navigation

1.6.1.1 Keyboard Input

The DB2 Control Center can be operated using only the keyboard. Menu items and controls provide access keys that allow users to activate a control or select a menu item directly from the keyboard. These keys are self-documenting: the access keys are underlined on the control or menu where they appear.

1.6.1.2 Keyboard Focus

On UNIX-based systems, the position of the keyboard focus is highlighted, indicating which area of the window is active and where the user's keystrokes will have an effect.

1.6.2 Features for Accessible Display

The DB2 Control Center has a number of features that enhance the user interface and improve accessibility for users with low vision. These accessibility enhancements include support for high-contrast settings and customizable font properties.

1.6.2.1 High-Contrast Mode

The Control Center interface supports the high-contrast-mode option provided by the operating system. This feature assists users who require a higher degree of contrast between background and foreground colors.

1.6.2.2 Font Settings

The Control Center interface allows users to select the color, size, and font for the text in menus and dialog windows.

1.6.2.3 Non-dependence on Color

Users do not need to distinguish between colors in order to use any of the functions in this product.

1.6.3 Alternative Alert Cues

The user can opt to receive alerts through audio or visual cues.

1.6.4 Compatibility with Assistive Technologies

The DB2 Control Center interface is compatible with screen reader applications such as ViaVoice. When in application mode, the Control Center interface has the properties required for these accessibility applications to make onscreen information available to blind users.

1.6.5 Accessible Documentation

Documentation for the DB2 family of products is available in HTML format. This allows users to view documentation according to the display preferences set in their browsers. It also allows the use of screen readers and other assistive technologies.

------------------------------------------------------------------------

1.7 DB2 Everywhere is Now DB2 Everyplace

The name of DB2 Everywhere has changed to DB2 Everyplace.

------------------------------------------------------------------------

1.8 Error Messages when Attempting to Launch Netscape

If you encounter the following error messages when attempting to launch Netscape:

   Cannot find file (or one of its components). Check to ensure the path
   and filename are correct and that all required libraries are
   available.

   Unable to open "D:\Program Files\SQLLIB\CC\..\doc\html\db2help\XXXXX.htm"

take the following steps to correct the problem on Windows NT, 95, or 98 (see below for what to do on Windows 2000):

1. From the Start menu, select Programs --> Windows Explorer. Windows Explorer opens.
2. From Windows Explorer, select View --> Options. The Options notebook opens.
3. Click the File types tab. The File types page opens.
4. Highlight Netscape Hypertext Document in the Registered file types field and click Edit. The Edit file type window opens.
5. Highlight "Open" in the Actions field.
6. Click the Edit button. The Editing action for type window opens.
7. Uncheck the Use DDE check box.
8. In the Application used to perform action field, make sure that "%1" appears at the very end of the string (include the quotation marks, and a blank space before the first quotation mark).

If you encounter the messages on Windows 2000, take the following steps:

1. From the Start menu, select Windows Explorer. Windows Explorer opens.
2. From Windows Explorer, select Tools --> Folder Options. The Folder Options notebook opens.
3. Click the File Types tab.
4. On the File Types page, in the Registered file types field, highlight HTM Netscape Hypertext Document and click Advanced. The Edit File Type window opens.
5. Highlight "open" in the Actions field.
6. Click the Edit button. The Editing Action for Type window opens.
7. Uncheck the Use DDE check box.
8. In the Application used to perform action field, make sure that "%1" appears at the very end of the string (include the quotation marks, and a blank space before the first quotation mark).
9. Click OK.
10. Repeat steps 4 through 8 for the HTML Netscape Hypertext Document and SHTML Netscape Hypertext Document file types.

------------------------------------------------------------------------

1.9 Mouse Required

For all platforms except Windows, a mouse is required to use the tools.

------------------------------------------------------------------------

1.10 Supported Web Browsers on the Windows 2000 Operating System

We recommend that you use Microsoft Internet Explorer on Windows 2000. If you use Netscape, be aware of the following:

* DB2 online information searches may take a long time to complete on Windows 2000 using Netscape. Netscape will use all available CPU resources and appear to run indefinitely. While the search results may eventually return, we recommend that you change focus by clicking on another window after submitting the search. The search results will then return in a reasonable amount of time.
* You may notice that when you request help, it is displayed correctly in a Netscape browser window; however, if you leave the browser window open and request help later from a different part of the Control Center, nothing changes in the browser. If you close the browser window and request help again, the correct help comes up. You may be able to fix this problem by following the steps in 1.8, Error Messages when Attempting to Launch Netscape. You can also get around the problem by closing the browser window before requesting help for the Control Center.
* When you request Control Center help, or a topic from the Information Center, you may get an error message. To fix this, follow the steps in 1.8, Error Messages when Attempting to Launch Netscape.

------------------------------------------------------------------------

1.11 Opening External Web Links in Netscape Navigator From The Information Center when Netscape is Already Open (UNIX Based Systems)

If Netscape Navigator is already open and displaying either a local DB2 HTML document or an external Web site, an attempt to open an external Web site from the Information Center will result in a Netscape error. The error will state that "Netscape is unable to find the file or directory named <filename>."

To work around this problem, close the open Netscape browser before opening the external Web site. Netscape will restart and bring up the external Web site. Note that this error does not occur when attempting to open a local DB2 HTML document with Netscape already open.

------------------------------------------------------------------------

1.12 Problems Starting the Information Center

On some systems, the Information Center can be slow to start if you invoke it using the Start menu, First Steps, or the db2ic command. If you experience this problem, start the Control Center, then select Help --> Information Center.

------------------------------------------------------------------------

1.13 Configuration Requirement for Adobe Acrobat Reader on UNIX Based Systems

Acrobat Reader is offered only in English on UNIX based platforms, and errors may be returned when attempting to open PDF files with language locales other than English. These errors suggest font access or extraction problems with the PDF file, but are actually due to the fact that the English Acrobat Reader cannot function correctly within a UNIX non-English language locale.

To view such PDF files, switch to the English locale by performing one of the following steps before launching the English Acrobat Reader:

* Edit the Acrobat Reader's launch script by adding the following line after the #!/bin/sh statement in the launch script file:

     LANG=C;export LANG

  This approach ensures correct behavior when Acrobat Reader is launched by other applications, such as Netscape Navigator, or an application help menu.
* Enter LANG=C at the command prompt to set the Acrobat Reader's application environment to English.

For further information, contact Adobe Systems (http://www.Adobe.com).

------------------------------------------------------------------------

1.14 Attempting to Bind from the DB2 Run-time Client Results in a "Bind files not found" Error

Because the DB2 Run-time Client does not have the full set of bind files, the binding of GUI tools cannot be done from the DB2 Run-time Client; it can only be done from the DB2 Administration Client.

------------------------------------------------------------------------

1.15 Additional Required Solaris Patch Level

DB2 Universal Database Version 7.1 for Solaris Version 2.6 requires patch 106285-02 or higher, in addition to the patches listed in the DB2 for UNIX Quick Beginnings manual.

------------------------------------------------------------------------

1.16 Supported CPUs on DB2 Version 7.1 for Solaris

CPU versions previous to UltraSparc are not supported.

------------------------------------------------------------------------

1.17 Searching the DB2 Online Information on Solaris

If you are having problems searching the DB2 online information on your Solaris system, check your system's kernel parameters in /etc/system. Here are the minimum kernel parameters required by DB2's search system, NetQuestion:

   semsys:seminfo_semmni   256
   semsys:seminfo_semmap   258
   semsys:seminfo_semmns   512
   semsys:seminfo_semmnu   512
   semsys:seminfo_semmsl   50
   shmsys:shminfo_shmmax   6291456
   shmsys:shminfo_shmseg   16
   shmsys:shminfo_shmmni   300

To set a kernel parameter, add a line at the end of /etc/system as follows:

   set <parameter name> = <value>

You must reboot your system for any new or changed values to take effect. (A sample /etc/system fragment appears after section 1.18 below.)

------------------------------------------------------------------------

1.18 Java Control Center on OS/2

The Control Center must be installed on an HPFS-formatted drive.
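For illustration, here is what the minimum NetQuestion settings from section 1.17 would look like as lines appended to /etc/system. This is a sketch using the standard Solaris "set module:parameter = value" syntax; if your system already uses larger values for any of these parameters, keep the larger values:

   set semsys:seminfo_semmni = 256
   set semsys:seminfo_semmap = 258
   set semsys:seminfo_semmns = 512
   set semsys:seminfo_semmnu = 512
   set semsys:seminfo_semmsl = 50
   set shmsys:shminfo_shmmax = 6291456
   set shmsys:shminfo_shmseg = 16
   set shmsys:shminfo_shmmni = 300

Reboot the system after saving the file so that the new values take effect.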
------------------------------------------------------------------------

1.19 Search Discovery

Search discovery is only supported on broadcast media. For example, search discovery will not function through an ATM adapter. However, this restriction does not apply to known discovery.

------------------------------------------------------------------------

1.20 Problems When Adding Nodes to a Partitioned Database

When adding nodes to a partitioned database that has one or more system temporary table spaces with a page size that is different from the default page size (4 KB), you may encounter the error message "SQL6073N Add Node operation failed" and an SQLCODE. This occurs because, when the node is created, only the IBMDEFAULTBP buffer pool, which has a 4 KB page size, exists.

For example, you can use the db2start command to add a node to the current partitioned database:

   DB2START NODENUM 2 ADDNODE HOSTNAME newhost PORT 2

If the partitioned database has system temporary table spaces with the default page size, the following message is returned:

   SQL6075W The Start Database Manager operation successfully added the
   node. The node is not active until all nodes are stopped and started
   again.

However, if the partitioned database has system temporary table spaces that are not the default page size, the returned message is:

   SQL6073N Add Node operation failed. SQLCODE = "<-902>"

In a similar example, you can use the ADD NODE command after manually updating the db2nodes.cfg file with the new node description. After editing the file and running the ADD NODE command with a partitioned database that has system temporary table spaces with the default page size, the following message is returned:

   DB20000I The ADD NODE command completed successfully.

However, if the partitioned database has system temporary table spaces that are not the default page size, the returned message is:

   SQL6073N Add Node operation failed. SQLCODE = "<-902>"

One way to prevent the problems outlined above is to run:

   DB2SET DB2_HIDDENBP=16

before issuing db2start or the ADD NODE command. This registry variable enables DB2 to allocate hidden buffer pools of 16 pages each using a page size different from the default. This enables the ADD NODE operation to complete successfully.

Another way to prevent these problems is to specify the WITHOUT TABLESPACES clause on the ADD NODE or the db2start command. After doing this, you will have to create the buffer pools using the CREATE BUFFERPOOL statement, and associate the system temporary table spaces with the buffer pools using the ALTER TABLESPACE statement.

When adding nodes to an existing nodegroup that has one or more table spaces with a page size that is different from the default page size (4 KB), you may encounter the error message "SQL0647N Bufferpool "<bufferpool name>" is currently not active." This occurs because the non-default page size buffer pools created on the new node have not been activated for the table spaces.

For example, you can use the ALTER NODEGROUP statement to add a node to a nodegroup:

   DB2START
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2)

If the nodegroup has table spaces with the default page size, the following message is returned:

   SQL1759W Redistribute nodegroup is required to change data
   positioning for objects in nodegroup "<nodegroup name>" to include
   some added nodes or exclude some drop nodes.

However, if the nodegroup has table spaces that are not the default page size, the returned message is:

   SQL0647N Bufferpool "<bufferpool name>" is currently not active.

One way to prevent this problem is to create buffer pools for each page size and then to reconnect to the database before issuing the ALTER NODEGROUP statement:

   DB2START
   CONNECT TO mpp1
   CREATE BUFFERPOOL bp1 SIZE 1000 PAGESIZE 8192
   CONNECT RESET
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2)

A second way to prevent the problem is to run:

   DB2SET DB2_HIDDENBP=16

before issuing the db2start command, and the CONNECT and ALTER NODEGROUP statements.

Another problem can occur when the ALTER TABLESPACE statement is used to add a table space to a node. For example:

   DB2START
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2) WITHOUT TABLESPACES
   ALTER TABLESPACE ts1 ADD ('ts1') ON NODE (2)

This series of commands and statements generates the error message SQL0647N (not the expected message SQL1759W). To complete this change correctly, you should reconnect to the database after the ALTER NODEGROUP... WITHOUT TABLESPACES statement:

   DB2START
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2) WITHOUT TABLESPACES
   CONNECT RESET
   CONNECT TO mpp1
   ALTER TABLESPACE ts1 ADD ('ts1') ON NODE (2)

Another way to prevent the problem is to run:

   DB2SET DB2_HIDDENBP=16

before issuing the db2start command, and the CONNECT, ALTER NODEGROUP, and ALTER TABLESPACE statements.

------------------------------------------------------------------------

1.21 Errors During Migration

During migration, error entries in the db2diag.log file (database not migrated) appear even when migration is successful, and can be ignored.

------------------------------------------------------------------------

1.22 Memory Windows for HP-UX 11

Memory windows is for users on large HP 64-bit machines who want to take advantage of more than 1.75 GB of shared memory for 32-bit applications. Memory windows makes available a separate 1 GB of shared memory per process or group of processes. This allows an instance to have its own 1 GB of shared memory, plus the 0.75 GB of global shared memory. If users want to take advantage of this, they can run multiple instances, each in its own window. Following are prerequisites and conditions for using memory windows:

* DB2 EE environment
  o Patches: Extension Software 12/98, and PHKL_17795.
  o The $DB2INSTANCE variable must be set for the instance.
  o There must be an entry in the /etc/services.window file for each DB2 instance that you want to run under memory windows. For example:

       db2instance1 50
       db2instance2 60

    Note: There can only be a single space between the name and the ID.
  o Any DB2 commands that you want to run on the server, and that require more than a single statement, must be run using a TCP/IP loopback method. This is because the shell will terminate when memory windows finishes processing the first statement. DB2 Service knows how to accomplish this.
  o Any DB2 command that you want to run against an instance that is running in memory windows must be prefaced with db2win (located in sqllib/bin). For example:

       db2win db2start
       db2win db2stop

  o Any DB2 command that is run outside of memory windows (while memory windows is running) returns an SQL1042 error. For example:

       db2win db2start     <== OK
       db2 connect to db   <== SQL1042
       db2stop             <== SQL1042
       db2win db2stop      <== OK

* DB2 EEE environment
  o Patches: Extension Software 12/98, and PHKL_17795.
  o The $DB2INSTANCE variable must be set for the instance.
  o The DB2_ENABLE_MEM_WINDOWS registry variable must be set to TRUE.
  o There must be an entry in the /etc/services.window file for each logical node of each instance that you want to run under memory windows. The first field of each entry should be the instance name concatenated with the port number. For example:

       === $HOME/sqllib/db2nodes.cfg for db2instance1 ===
       5 host1 0
       7 host1 1
       9 host2 0

       === $HOME/sqllib/db2nodes.cfg for db2instance2 ===
       1 host1 0
       2 host2 0
       3 host2 1

       === /etc/services.window on host1 ===
       db2instance10 50
       db2instance11 55
       db2instance20 60

       === /etc/services.window on host2 ===
       db2instance10 30
       db2instance20 32
       db2instance21 34

  o You must not preface any DB2 command with db2win, which is to be used in an EE environment only.

------------------------------------------------------------------------

1.23 SQL Reference is Provided in One PDF File

The "Using the DB2 Library" appendix in each book indicates that the SQL Reference is available in PDF format as two separate volumes. This is incorrect. Although the printed book appears in two volumes, and the two corresponding form numbers are correct, there is only one PDF file, and it contains both volumes. The PDF file name is db2s0x70.

------------------------------------------------------------------------

1.24 Migration Issue Regarding Views Defined with Special Registers

Views become unusable after database migration if the special register USER or CURRENT SCHEMA is used to define a view column. For example:

   create view v1 (c1) as values user

In Version 5, USER and CURRENT SCHEMA were of data type CHAR(8), but since Version 6 they have been defined as VARCHAR(128). In this example, the data type for column c1 is CHAR if the view is created in Version 5, and it remains CHAR after database migration. When the view is used after migration, it compiles at run time but then fails because of the data type mismatch.

The solution is to drop and then recreate the view. Before dropping the view, capture the syntax used to create it by querying the SYSCAT.VIEWS catalog view. For example:

   select text from syscat.views where viewname='<view name>'

------------------------------------------------------------------------

1.25 User Action for dlfm client_conf Failure

If, on a DLFM client, dlfm client_conf fails for some reason, "stale" entries in DB2 catalogs may be the reason. The solution is to issue the following commands:

   db2 uncatalog db <dbname>
   db2 uncatalog node <nodename>
   db2 terminate

Then try dlfm client_conf again.

------------------------------------------------------------------------

1.26 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop

In very rare situations, dlfm_copyd (the copy daemon) does not stop when a user issues a dlfm stop, or there is an abnormal shutdown. If this happens, issue a dlfm shutdown before trying to restart dlfm.

------------------------------------------------------------------------

1.27 Chinese Locale Fix on Red Flag Linux

If you are using Simplified Chinese Red Flag Linux Server Version 1.1, contact Red Flag to receive the Simplified Chinese locale fix. Without the Simplified Chinese locale fix for Version 1.1, DB2 does not recognize that the code page of Simplified Chinese is 1381.

------------------------------------------------------------------------

1.28 Uninstalling DB2 DFS Client Enabler

Before the DB2 DFS Client Enabler is uninstalled, root should ensure that no DFS file is in use, and that no user has a shell open in DFS file space. As root, issue the command:

   stop.dfs dfs_cl

Check that /... is no longer mounted:

   mount | grep -i dfs

If this is not done before the DB2 DFS Client Enabler is uninstalled, the machine will need to be rebooted.
------------------------------------------------------------------------

1.29 DB2 Install May Hang if a Removable Drive is Not Attached

During DB2 installation on a computer with a removable drive that is not attached, the install may hang after you select the install type. To solve this problem, run setup with the -a option:

   setup.exe -a

------------------------------------------------------------------------

1.30 Client Authentication on Windows NT

A new DB2 registry variable, DB2DOMAINLIST, is introduced to complement the existing client authentication mechanism in the Windows NT environment. This variable is used on the DB2 for Windows NT server to define one or more Windows NT domains. Only connection or attachment requests from users belonging to the domains defined in this list will be accepted.

This registry variable should only be used under a pure Windows NT domain environment with DB2 servers and clients running at Version 7.1 (or higher). For information about setting this registry variable, refer to the "DB2 Registry and Environment Variables" section in the Administration Guide: Performance.

------------------------------------------------------------------------

1.31 AutoLoader May Hang During a Fork

AIX 4.3.3 contains a fix for a libc problem that could cause the AutoLoader to hang during a fork. The AutoLoader is a multithreaded program, and one of its threads forks off another process. Forking off a child process causes an image of the parent's memory to be created in the child. It is possible that locks used by libc.a to manage multiple threads allocating memory from the heap within the same process are held by a non-forking thread at the moment of the fork. Since the non-forking thread does not exist in the child process, the lock is never released in the child, and this can cause the hang.

------------------------------------------------------------------------

1.32 DATALINK Restore

Restore of any offline backup that was taken after a database restore, with or without rollforward, will not involve fast reconcile processing. In such cases, all tables with DATALINK columns under file link control will be put in datalink reconcile pending (DRP) state.

------------------------------------------------------------------------

1.33 Define User ID and Password in IBM Communications Server for Windows NT (CS/NT)

If you use APPC as the communication protocol for remote DB2 clients to connect to your DB2 server, and if you use CS/NT as the SNA product, make sure that the following keywords are set correctly in the CS/NT configuration file. This file is commonly found in the x:\ibmcs\private directory.

1.33.1 Node Definition

TG_SECURITY_BEHAVIOR
   This parameter allows the user to determine how the node is to handle security information present in the ATTACH if the TP is not configured for security. Its values include:

   IGNORE_IF_NOT_DEFINED
      Security parameters present in the ATTACH are ignored if the TP is not configured for security. If you use IGNORE_IF_NOT_DEFINED, you do not have to define a user ID and password in CS/NT.

   VERIFY_EVEN_IF_NOT_DEFINED
      Security parameters present in the ATTACH are verified even if the TP is not configured for security. This is the default. If you use VERIFY_EVEN_IF_NOT_DEFINED, you must define a user ID and password in CS/NT.

To define the CS/NT user ID and password, perform the following steps:

1. Select Start --> Programs --> IBM Communications Server --> SNA Node Configuration. The Welcome to Communications Server Configuration window opens.
2. Choose the configuration file you want to modify. Click Next. The Choose a Configuration Scenario window opens.
3. Highlight CPI-C, APPC or 5250 Emulation. Click Finish. The Communications Server SNA Node window opens.
4. Click the [+] beside CPI-C and APPC.
5. Click the [+] beside LU6.2 Security.
6. Right-click User Passwords and select Create. The Define a User ID Password window opens.
7. Fill in the user ID and password. Click OK. Click Finish to accept the changes.

------------------------------------------------------------------------

1.34 Federated Systems Restrictions

The following restrictions apply to federated systems:

* The Oracle data types NCHAR, NVARCHAR2, BLOB, CLOB, NCLOB, and BFILE are not supported in queries involving nicknames.
* The Create Server Option, Alter Server Option, and Drop Server Option commands are not supported from the Control Center. To issue any of these commands, you must use the command line processor (CLP).
* For queries involving nicknames, DB2 UDB does not always abide by the DFT_SQLMATHWARN database configuration option. Instead, DB2 UDB returns the arithmetic errors or warnings directly from the remote data source, regardless of the DFT_SQLMATHWARN setting.
* The CREATE SERVER OPTION statement does not allow the COLSEQ server option to be set to 'I' for data sources with case-insensitive collating sequences.
* The ALTER NICKNAME statement returns SQL0901N when an invalid option is specified.
* For Oracle data sources, numeric data types cannot be mapped to DB2's BIGINT data type. By default, Oracle's number(p,s) data type, where 10 <= p <= 18 and s = 0, maps to DB2's DECIMAL data type.

------------------------------------------------------------------------

1.35 DataJoiner Restriction

Distributed requests issued within a federated environment are limited to read-only operations.

------------------------------------------------------------------------

1.36 IPX/SPX Protocol Support on Windows 2000

The published protocol support chart is not completely correct. A Windows 2000 client connected to any OS/2 or UNIX based server using IPX/SPX is not supported. Also, any OS/2 or UNIX based client connected to a Windows 2000 server using IPX/SPX is not supported.

------------------------------------------------------------------------

1.37 Stopping DB2 Processes Before Upgrading a Previous Version of DB2

If you are upgrading a previous version of DB2 that is running on your Windows machine, the installation program provides a warning containing a list of processes that are holding DB2 DLLs in memory. At this point, you have the option of manually stopping the processes that appear in that list, or you can let the installation program shut down these processes automatically. It is recommended that you manually stop all DB2 processes before installing, to avoid loss of data. The best way to ensure that DB2 processes are not running is to view your system's processes through the Windows Services panel. In the Windows Services panel, ensure that there are no DB2 services, OLAP services, or Data Warehouse services running.

Note: You can only have one version of DB2 running on Windows platforms at any one time. For example, you cannot have DB2 Version 7.1 and DB2 Version 6 running on the same Windows machine.
If you install DB2 Version 7.1 on a machine that has DB2 Version 6 installed, the installation program will delete DB2 Version 6 during the installation. Refer to the appropriate Quick Beginnings manual for more information on migrating from previous versions of DB2. ------------------------------------------------------------------------ 1.38 Run db2iupdt After Installing DB2 If Another DB2 Product is Already Installed When installing DB2 UDB Version 7.1 on UNIX based systems where a DB2 product is already installed, you will need to run the db2iupdt command to update those instances with which you intend to use the new features of this product. Some features will not be available until this command is run. ------------------------------------------------------------------------ 1.39 JDK Level on OS/2 Some messages will not display on OS/2 running versions of JDK 1.1.8 released prior to 09/99. Ensure that you have the latest JDK Version 1.1.8. ------------------------------------------------------------------------ 1.40 Setting up the Linux Environment to Run DB2 After leaving the DB2 installer on Linux and returning to the terminal window, type the following commands to set the correct environment to run the DB2 Control Center, where <instance owner ID> is the user ID of the instance owner:
su -l <instance owner ID>
export JAVA_HOME=/usr/jdk118
export DISPLAY=<your machine name or IP address>:0
Then, open another terminal window and type:
su root
xhost +
Close that terminal window and return to the terminal where you are logged in as the instance owner ID, and type the command:
db2cc
to start the Control Center. ------------------------------------------------------------------------ 1.41 Hebrew Information Catalog Manager for Windows NT The Information Catalog Manager component is available in Hebrew and is provided on the DB2 Warehouse Manager for Windows NT CD. The Hebrew translation is provided in a zip file called IL_ICM.ZIP and is located in the DB2\IL directory on the DB2 Warehouse Manager for Windows NT CD. To install the Hebrew translation of Information Catalog Manager, first install the English version of DB2 Warehouse Manager for Windows NT and all prerequisites on a Hebrew Enabled version of Windows NT. After DB2 Warehouse Manager for Windows NT has been installed, unzip the IL_ICM.ZIP file from the DB2\IL directory into the same directory where DB2 Warehouse Manager for Windows NT was installed. Ensure that the correct options are supplied to the unzip program to create the directory structure in the zip file. After the file has been unzipped, the global environment variable LC_ALL must be changed from En_US to Iw_IL. To change the setting: 1. Open the Windows NT Control Panel and double click on the System icon. 2. In the System Properties window, click on the Environment tab, then locate the variable LC_ALL in the System Variables section. 3. Click on the variable to display the value in the Value edit box. Change the value from En_US to Iw_IL. 4. Click on the Set button. 5. Close the System Properties window and the Control Panel. The Hebrew version of Information Catalog Manager should now be installed. ------------------------------------------------------------------------ 1.42 Error While Creating an SQL Stored Procedure on the Server To create SQL stored procedures on the server, the application development client (as well as a compiler) must be installed on the server. Otherwise, the create operation fails with a message indicating that db2udp.dll cannot be loaded.
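For reference, even a minimal SQL procedure such as the following exercises the server-side compiler at create time; the procedure, parameter, and table names here are illustrative only. Creating it from the CLP with a non-default statement terminator (for example, db2 -td@ -vf script.sql) fails with the db2udp.dll message if the application development client and compiler are not installed on the server:
CREATE PROCEDURE RAISE_SALARY (IN p_empno CHAR(6), IN p_pct DOUBLE)
LANGUAGE SQL
BEGIN
  UPDATE EMPLOYEE
     SET SALARY = SALARY * (1 + p_pct / 100)
     WHERE EMPNO = p_empno;
END @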
------------------------------------------------------------------------ 1.43 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit) Support Host and AS/400 applications cannot access DB2 UDB servers using SNA two phase commit when Microsoft SNA Server is the SNA product in use. Any DB2 UDB publications indicating that this is supported are incorrect. IBM Communications Server for Windows NT Version 5.02 or greater is required. Note:Applications accessing host and AS/400 database servers using DB2 UDB for Windows can use SNA two phase commit with Microsoft SNA Server Version 4 Service Pack 3 or greater. ------------------------------------------------------------------------ 1.44 DB2's SNA SPM Fails to Start After Booting Windows If you are using Microsoft SNA Server Version 4 SP3 or later, please verify that DB2's SNA SPM started properly after a reboot. Check the \sqllib\<instance name>\db2diag.log file for entries that are similar to the following:
2000-04-20-13.18.19.958000 Instance:DB2 Node:000
PID:291(db2syscs.exe) TID:316 Appid:none
common_communication sqlccspmconnmgr_APPC_init Probe:19
SPM0453C Sync point manager did not start because Microsoft SNA Server has not been started.

2000-04-20-13.18.23.033000 Instance:DB2 Node:000
PID:291(db2syscs.exe) TID:302 Appid:none
common_communication sqlccsna_start_listen Probe:14
DIA3001E "SNA SPM" protocol support was not successfully started.

2000-04-20-13.18.23.603000 Instance:DB2 Node:000
PID:291(db2syscs.exe) TID:316 Appid:none
common_communication sqlccspmconnmgr_listener Probe:6
DIA3103E Error encountered in APPC protocol support. APPC verb "APPC(DISPLAY 1 BYTE)". Primary rc was "F004". Secondary rc was "00000000".
If such entries exist in your db2diag.log, and the time stamps match your most recent reboot time, you must: 1. Invoke db2stop. 2. Start the SnaServer service (if not already started). 3. Invoke db2start. Check the db2diag.log file again to verify that the entries are no longer appended. ------------------------------------------------------------------------ 1.45 Additional Locale Setting for DB2 for Linux in a Japanese and Simplified Chinese Linux Environment An additional locale setting is required when you want to use the Java GUI tools, such as the Control Center, on a Japanese or Simplified Chinese Linux system. Japanese or Chinese characters cannot be displayed correctly without this setting. Please include the following setting in your user profile, or run it from the command line before every invocation of the Control Center. For a Japanese system:
export LC_ALL=ja_JP
For a Simplified Chinese system:
export LC_ALL=zh_CN
------------------------------------------------------------------------ 1.46 Locale Setting for the DB2 Administration Server Please ensure that the locale of the DB2 Administration Server instance is compatible with the locale of the DB2 instance. Otherwise, the DB2 instance cannot communicate with the DB2 Administration Server. If the LANG environment variable is not set in the user profile of the DB2 Administration Server, the DB2 Administration Server will be started with the default system locale. If the default system locale is not defined, the DB2 Administration Server will be started with code page 819. If the DB2 instance uses one of the DBCS locales, and the DB2 Administration Server is started with code page 819, the instance will not be able to communicate with the DB2 Administration Server. The locale of the DB2 Administration Server and the locale of the DB2 instance must be compatible.
For example, on a Simplified Chinese Linux system, "LANG=zh_CN" should be set in the DB2 Administration Server's user profile. ------------------------------------------------------------------------ 1.47 Java Method Signature in PARAMETER STYLE JAVA Procedures and Functions If specified after the Java method name in the EXTERNAL NAME clause of the CREATE PROCEDURE or CREATE FUNCTION statement, the Java method signature must correspond to the default Java type mapping for the signature specified after the procedure or function name. For example, the default Java mapping of the SQL type INTEGER is "int", not "java.lang.Integer". ------------------------------------------------------------------------ 1.48 Shortcuts Not Working In some languages, for the Control Center on UNIX based systems and on OS/2, some keyboard shortcuts do not work. Please use the mouse to select options. ------------------------------------------------------------------------ 1.49 Service Account Requirements for DB2 on Windows NT and Windows 2000 During the installation of DB2 for Windows NT or Windows 2000, the setup program creates several Windows services and assigns a service account for each service. To run DB2 properly, the setup program grants the following user rights to the service account that is associated with the DB2 service: * Act as part of the operating system * Create a token object * Increase quotas * Log on as a service * Replace a process level token. If you want to use a different service account for the DB2 services, you must grant these user rights to the service account. In addition to these user rights, the service account must also have write access to the directory where the DB2 product is installed. The service account for the DB2 Administration Server service (DB2DAS00 service) must also have the authority to start and stop other DB2 services (that is, the service account must belong to the Power Users group) and have DB2 SYSADM authority against any DB2 instances that it administers. ------------------------------------------------------------------------ 1.50 Lost EXECUTE Privilege for Query Patroller Users Created in Version 6 Because of some new stored procedures (IWM.DQPGROUP, IWM.DQPVALUR, IWM.DQPCALCT, and IWM.DQPINJOB) added in Query Patroller Version 7, existing users created in Query Patroller Version 6 do not hold the EXECUTE privilege on those packages. An application to automatically correct this problem has been added to FixPak 1. When you try to use DQP Query Admin to modify DQP user information, please do not try to remove existing users from the user list. ------------------------------------------------------------------------ 1.51 Query Patroller Restrictions Because of JVM (Java Virtual Machine) platform restrictions, the Query Enabler is not supported on HP-UX and NUMA-Q. In addition, the Query Patroller Tracker is not supported on NUMA-Q. If all of the Query Patroller client tools are required, we recommend the use of a different platform (such as Windows NT) to run these tools against the HP-UX or NUMA-Q server. 
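If you prefer to correct the missing EXECUTE privileges described in 1.50 by hand, rather than through the application provided in FixPak 1, CLP statements of the following form should grant them; the user name QPUSER is illustrative only, and you must first connect to the Query Patroller database with a suitably authorized ID:
db2 GRANT EXECUTE ON PACKAGE IWM.DQPGROUP TO USER QPUSER
db2 GRANT EXECUTE ON PACKAGE IWM.DQPVALUR TO USER QPUSER
db2 GRANT EXECUTE ON PACKAGE IWM.DQPCALCT TO USER QPUSER
db2 GRANT EXECUTE ON PACKAGE IWM.DQPINJOB TO USER QPUSER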
------------------------------------------------------------------------ 1.52 Need to Commit all User-defined Programs That Will Be Used in the Data Warehouse Center (DWC) If you want to use a stored procedure built by the DB2 Stored Procedure Builder as a user-defined program in the Data Warehouse Center (DWC), you must insert the following statement into the stored procedure before the con.close(); statement:
con.commit();
If this statement is not inserted, changes made by the stored procedure will be rolled back when the stored procedure is run from the DWC. For all user-defined programs in the DWC, it is necessary to explicitly commit any included DB2 functions for the changes to take effect in the database; that is, you must add the COMMIT statements to the user-defined programs. ------------------------------------------------------------------------ 1.53 Sub-element Statistics In FixPak 1, an option is provided to collect and use sub-element statistics. These are statistics about the content of data in columns when the data has a structure in the form of a series of sub-fields or sub-elements delimited by blanks. For example, suppose a database contains a table DOCUMENTS in which each row describes a document, and suppose that DOCUMENTS has a column called KEYWORDS containing a list of relevant keywords relating to this document for text retrieval purposes. The values in KEYWORDS might be as follows:
'database simulation analytical business intelligence'
'simulation model fruitfly reproduction temperature'
'forestry spruce soil erosion rainfall'
'forest temperature soil precipitation fire'
In this example, each column value consists of 5 sub-elements, each of which is a word (the keyword), separated from the others by one blank. For queries that specify LIKE predicates on such columns using the % match_all character:
SELECT .... FROM DOCUMENTS WHERE KEYWORDS LIKE '%simulation%'
it is often beneficial for the optimizer to know some basic statistics about the sub-element structure of the column, namely:
SUB_COUNT
   The average number of sub-elements.
SUB_DELIM_LENGTH
   The average length of each delimiter separating each sub-element, where a delimiter, in this context, is one or more consecutive blank characters.
In the KEYWORDS column example, SUB_COUNT is 5, and SUB_DELIM_LENGTH is 1, because each delimiter is a single blank character. In FixPak 1, the system administrator controls the collection and use of these statistics by means of an extension to the DB2_LIKE_VARCHAR registry variable. This registry variable affects how the DB2 UDB optimizer deals with a predicate of the form:
COLUMN LIKE '%xxxxxx'
where xxxxxx is any string of characters; that is, any LIKE predicate whose search value starts with a % character. (It may or may not end with a % character.) These are referred to as "wildcard LIKE predicates" below. For all predicates, the optimizer has to estimate how many rows match the predicate. For wildcard LIKE predicates, the optimizer assumes that the COLUMN being matched has a structure of a series of elements concatenated together to form the entire column, and estimates the length of each element based on the length of the string, excluding leading and trailing % characters. The new syntax is:
db2set DB2_LIKE_VARCHAR=[Y|N|S|num1][,Y|N|num2]
where the first term (preceding the comma) means the following, but only for columns that do not have positive sub-element statistics:
S
   Use the algorithm as used in DB2 Version 2.
N
   Use a fixed-length sub-element algorithm.
Y (default)
   Use a variable-length sub-element algorithm with a default value for the algorithm parameter.
num1
   Use a variable-length sub-element algorithm, and use num1 as the algorithm parameter.
and the second term (following the comma) means:
N (default)
   Do not collect or use sub-element statistics.
Y
   Collect sub-element statistics. Use a variable-length sub-element algorithm that uses those statistics, together with a default value for the algorithm parameter, in the case of columns with positive sub-element statistics.
num2
   Collect sub-element statistics. Use a variable-length sub-element algorithm that uses those statistics, together with num2 as the algorithm parameter, in the case of columns with positive sub-element statistics.
If the value of DB2_LIKE_VARCHAR contains only the first term, no sub-element statistics are collected, and any that have previously been collected are ignored. The value specified affects how the optimizer calculates the selectivity of wildcard LIKE predicates in the same way as before; that is:
* If the value is S, the optimizer uses the same algorithm as was used in DB2 Version 2, which does not presume the sub-element model.
* If the value is N, the optimizer uses an algorithm that presumes the sub-element model, and assumes that the COLUMN is of a fixed length, even if it is defined as variable length.
* If the value is Y (the default) or a floating point constant, the optimizer uses an algorithm that presumes the sub-element model and recognizes that the COLUMN is of variable length, if so defined. It also infers sub-element statistics from the query itself, rather than from the data. This algorithm involves a parameter (the "algorithm parameter") that specifies how much longer the element is than the string enclosed by the % characters. If the value is Y, the optimizer uses a default value of 1.9 for the algorithm parameter. If the value is a floating point constant, the optimizer uses the specified value for the algorithm parameter. This constant must lie within the range of 0 to 6.2.
If the value of DB2_LIKE_VARCHAR contains two terms, and the second is Y or a floating point constant, sub-element statistics on single-byte character set string columns of type CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC are collected during a RUNSTATS operation and used during compilation of queries involving wildcard LIKE predicates. The optimizer uses an algorithm that presumes the sub-element model and uses the SUB_COUNT and SUB_DELIM_LENGTH statistics, as well as an algorithm parameter, to calculate the selectivity of the predicate. The algorithm parameter is specified in the same way as for the inferential algorithm; that is:
* If the value is Y, the optimizer uses a default value of 1.9 for the algorithm parameter.
* If the value is a floating point constant, the optimizer uses the specified value for the algorithm parameter. This constant must lie within the range of 0 to 6.2.
If, during compilation, the optimizer finds that sub-element statistics have not been collected on the column involved in the query, it will use the "inferential" sub-element algorithm; that is, the one used when only the first term of DB2_LIKE_VARCHAR is specified. Thus, in order for the sub-element statistics to be used by the optimizer, the second term of DB2_LIKE_VARCHAR must be set both during RUNSTATS and compilation. The values of the sub-element statistics can be viewed by querying SYSIBM.SYSCOLUMNS.
For example:
select substr(NAME,1,16), SUB_COUNT, SUB_DELIM_LENGTH from sysibm.syscolumns where tbname = 'DOCUMENTS'
The SUB_COUNT and SUB_DELIM_LENGTH columns are not present in the SYSSTAT.COLUMNS statistics view, and therefore cannot be updated. Note:RUNSTATS may take longer if this option is used. For example, RUNSTATS may take between 15% and 40% longer on a table with five character columns, if the DETAILED and DISTRIBUTION options are not used. If either the DETAILED or the DISTRIBUTION option is specified, the percentage overhead is less, even though the absolute amount of overhead is the same. If you are considering using this option, you should assess this overhead against improvements in query performance. ------------------------------------------------------------------------ 1.54 Control Center Problem on Microsoft Internet Explorer There is a problem caused by Internet Explorer (IE) security options settings. The Control Center uses unsigned JARs; therefore, access to system information is disabled by the security manager. To eliminate this problem, reconfigure the IE security options as follows: 1. Select Internet Options on the View menu (IE4) or the Tools menu (IE5). 2. On the Security page, select the Trusted sites zone. 3. Click Add Sites.... 4. Add the Control Center Web server to the trusted sites list. If the Control Center Web server is in the same domain, it may be useful to add only the Web server name (without the domain name). For example:
http://ccWebServer.ccWebServerDomain
http://ccWebServer
5. Click OK. 6. Click on Settings.... 7. Scroll down to Java --> Java Permissions and select Custom. 8. Click Java Custom Settings.... 9. Select the Edit Permissions page. 10. Scroll down to Unsigned Content --> Run Unsigned Content --> Additional Unsigned Permissions --> System Information and select Enable. 11. Click OK on each open window. ------------------------------------------------------------------------ 1.55 New Option for Data Warehouse Center Command Line Export Command line export to tag files has a new option, /B. This option is not available through the Data Warehouse Center interface. The new syntax for the iwh2exp2 command is:
iwh2exp2 filename.INP dbname userid password [PREFIX=table_schema] [/S] [/R] [/B]
where:
- filename.INP is the full path name of the INP file
- dbname is the Data Warehouse Center control database name
- userid is the user ID used to log on to the database
- password is the password used to log on to the database
- optional parameters are:
  - PREFIX=table_schema: the table schema for the control database tables (the default value is IWH)
  - /S: export schedules with selected steps
  - /R: do not export warehouse sources with selected steps
  - /B: do not export contributing steps with selected steps
Note:If /R or /B is specified, the warehouse sources or contributing steps must already exist when the resulting tag file is imported, or an error is returned. ------------------------------------------------------------------------ 1.56 Backup Services APIs (XBSA) Backup Services APIs (XBSA) have been defined by the Open Group in the United Kingdom as an open application programming interface between applications or facilities needing data storage management for backup or archiving purposes. This is documented in "Open Group Technical Standard System Management: Backup Services API (XBSA)", Document Number C425 (ISBN: 1-85912-056-3).
In support of this, two new DB2 registry variables have been created and are currently supported on AIX, Solaris, and Windows NT:
DB2_VENDOR_INI
   Points to a file containing all vendor-specific environment settings. The value is picked up when the database manager starts.
DB2_XBSA_LIBRARY
   Points to the vendor-supplied XBSA library. The setting must include the shared object if it is not named shr.o. For example, to use Legato's NetWorker Business Suite Module for DB2, the registry variable must be set as follows:
   db2set DB2_XBSA_LIBRARY="/usr/lib/libxdb2.a(bsashr10.o)"
The XBSA interface can be invoked through the BACKUP DATABASE or the RESTORE DATABASE command. For example:
db2 backup db sample use XBSA
db2 restore db sample use XBSA
------------------------------------------------------------------------ 1.57 OS/390 Agent IBM's DB2 Universal Database for OS/390 Version 7 now includes an OS/390 agent. You can use the agent to communicate between your DB2 Universal Database for OS/390 and other databases, including DB2 databases on other platforms and non-DB2 databases. It can communicate with any data source that uses an ODBC connection. The agent improves the performance of DB2 Universal Database for OS/390, especially when you are moving data from another data source into an OS/390 DB2 database. The agent runs under OS/390 Unix Systems Services. It requires OS/390 V2R6 or higher, and it is backward compatible with DB2 Universal Database for OS/390 Versions 5 and 6. The OS/390 agent includes the following features: * Copy data from a source DB2 database to a target DB2 database, regardless of the platform on which you are running DB2 * Sample contents from a table or file * Execute user-defined programs * Access non-DB2 databases through IBM's DataJoiner program on Windows NT * Access VSAM or IMS data through Cross Access's Classic Connect product * Run DB2 Universal Database for OS/390 utilities (Load, Reorg, Runstats) * Run IBM's Data Propagator to gather data from any source that has an ODBC interface 1.57.1 Installation overview These steps summarize the installation process. Each step is then explained in the following paragraphs. 1. Installing the OS/390 agent from the DB2 Universal Database for OS/390 tape 2. Updating the environment variables in your .profile file 3. Setting up connections o between the kernel and the agent daemon o between the agent and the databases that it will access 4. Binding CLI locally and to any remote databases 5. Setting up your ODBC initialization file 6. Setting up authorizations so that the user o can execute the agent daemon o has execute authority on plan DSNAOCLI o has read and write authorization to the logging and ODBC trace directories, if needed 7. Starting the agent daemon 1.57.2 Installation details Installing the OS/390 agent The agent is included in the DB2 Universal Database for OS/390 Version 7 tape. See the program directory that accompanies the tape for details on installing the OS/390 agent. Updating the environment variables in your .profile file Environment variables for the agent are listed in Chapter 2, "Setting up your warehouse", of the Data Warehouse Center Administration Guide. They point the agent to various DB2 libraries, output directories, and so on. Here are the contents of a sample .profile file.
This file defines the environment variables, and it belongs in the home directory of the user who starts the agent daemon:
export VWS_LOGGING=/usr/lpp/DWC/logs
export VWP_LOG=/usr/lpp/DWC/vwp.log
export VWS_TEMPLATES=/usr/lpp/DWC/
export DSNAOINI=/usr/lpp/DWC/dsnaoini
export LIBPATH=/usr/lpp/DWC/:$LIBPATH
export PATH=/usr/lpp/DWC/:$PATH
export STEPLIB=DSN710.SDSNEXIT:DSN710.SDSNLOAD
Setting up connections between the kernel and the agent daemon To set up the kernel and daemon connections, add the following lines to your /etc/services or TCPIP.ETC.SERVICES files:
vwkernal 11000/tcp
vwd 11001/tcp
vwlogger 11002/tcp
Setting up connections between the agent and the databases that it will access To set up connections between the OS/390 agent and databases, add any remote databases to your OS/390 communications database. Here are some sample CDB inserts:
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT) VALUES ('NTDB','VWNT704','60002');
INSERT INTO SYSIBM.IPNAMES (LINKNAME, SECURITY_OUT, USERNAMES, IPADDR) VALUES ('VWNT704', 'P', 'O', 'VWNT704.STL.IBM.COM');
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD) VALUES ('O', 'MVSUID', 'VWNT704', 'NTUID', 'NTPW');
For more information, see the "Connecting Distributed Database Systems" chapter in DB2 UDB for OS/390 Installation Guide, GC26-9008-00. Binding CLI Because the agent uses CLI to communicate with DB2, you must bind your CLI plan to all of the remote databases that your agent plans to access. Here are some sample bind package statements for a local MVS DB2 database:
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLICS) ISO(CS)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLINC) ISO(NC)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIRR) ISO(RR)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIRS) ISO(RS)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIUR) ISO(UR)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIMS)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIC1)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIC2)
BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIF4)
Here are some sample bind package statements for a DB2 database running on Windows NT:
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLICS) ISO(CS)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLINC) ISO(NC)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIRR) ISO(RR)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIRS) ISO(RS)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIUR) ISO(UR)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIC1)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIC2)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIQR)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIF4)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIV1)
BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIV2)
Here is a sample bind statement to bind the CLI packages together in a plan:
BIND PLAN(DWC6CLI) PKLIST(*.DWC6CLI.*)
For more information on binding, see DB2 UDB for OS/390 ODBC Guide and Reference, SC26-9005. Setting up your ODBC initialization file A sample ODBC initialization file, inisamp, is included in the /usr/lpp/DWC/ directory. You can edit this file to work with your system, or you can create your own file. To be sure that the file works correctly, verify that it is properly configured: * The DSNAOINI environment variable points to it * The naming convention is dsnaoini.location_name * The file includes CONNECTTYPE=2 and MVSATTACHTYPE=CAF Setting up authorizations The OS/390 agent is a daemon process. You can run the agent daemon with regular Unix security or with OS/390 Unix security.
Because the agent requires daemon authority, these agent executables must be defined to RACF Program Control: * libtls4d.dll * iwhcomnt.dll * vwd To define these executables to RACF Program Control, change directory (cd) to the location of the Data Warehouse Center executables and use the following commands:
extattr +p libtls4d.dll
extattr +p iwhcomnt.dll
extattr +p vwd
To be able to use the extattr command with the +p parameter, you must have at least read access to the BPX.FILEATTR.PROGCTL FACILITY class. The following example shows the RACF commands used to give this permission to userid SMORG:
RDEFINE FACILITY BPX.FILEATTR.PROGCTL UACC(NONE)
PERMIT BPX.FILEATTR.PROGCTL CLASS(FACILITY) ID(SMORG) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH
For more information, see OS/390 Unix System Services Planning, SC28-1890. Starting the agent daemon After you finish configuring your system for the OS/390 warehouse agent, start the agent daemon: 1. Telnet to Unix Systems Services on OS/390 through the OS/390 hostname and USS port. 2. Change to the /usr/lpp/DWC directory. 3. Start the agent daemon: o To start it normally, type vwd on the command line o To start it in the background, type:
vwd > /usr/lpp/DWC/logs/vwd.log 2>&1 &
To verify that the warehouse agent daemon is running, type the following on a Unix shell command line:
ps -e | grep vwd
Or, type D OMVS,a=all on the OS/390 console and search for the string vwd. 1.57.3 Setting up additional agent functions Overview of user-defined programs supported by the OS/390 agent A user-defined program is assigned to one or more steps. When you run the user-defined program, the following actions occur: * The step executes * The agent starts * The agent runs the user-defined program * The user-defined program returns a return code and a feedback file to the agent * The agent returns the results to the kernel The DB2 Warehouse Manager provides the following user-defined programs: * vwpftp: Runs an FTP command file * vwpmvs: Submits a JCL jobstream * vwprcpy: Copies a file using FTP * XTClient: Client trigger program * etidlmvs: ETI, delete a file on MVS * etircmvs: ETI, run FTP on an MVS host * etiexmvs: ETI, run for MVS In addition, customers can define their own programs and stored procedures in the Data Warehouse Center. The OS/390 agent supports any executables that run under Unix Systems Services. The DB2 Warehouse Manager provides transformers for most platforms, but the OS/390 platform does not support them because DB2 for OS/390 has only recently provided support for Java stored procedures. Note that the OS/390 agent can still run the transformers on other platforms. In order to run ETI programs on OS/390, you must first apply FixPak 2 to DB2 Universal Database Version 7.1. For UDPs that submit a JCL job, you can find more information on setting up the JCL's job card in TCP/IP V3R2 for MVS, SC31-7136-03. A job consists of job control language (JCL) and data. If you use a UDP to submit a job using FTP, you must first create the JCL and data that you want to submit. The job name in the JCL must be USERIDx, where x is a 1-character letter or number (example: MYUSERA). The output class for the MSGCLASS and SYSOUT files contained in your JCL must specify a JES held output class. Note:The maximum LRECL for the submitted job is 254 characters. JES scans only the first 72 characters of JCL. Modifying the Data Warehouse Center template for FTP support Data Warehouse Center installs a JCL template for transferring files using FTP.
If you plan to have the OS/390 agent use the FTP commands get or put to transfer files from an OS/390 host to another remote host, you need to modify the account information in the JCL template for your OS/390 system: 1. Log on with an ID that has authority to copy and update files in the /usr/lpp/DWC directory. 2. Locate ftp.jcl and duplicate the file with the new filename systemname.ftp.jcl, where systemname is the name of the MVS system. 3. Create a copy of this file for each OS/390 system on which you plan to run the conversion programs vwpmvs or ETI extract. For example, if you want to run either of these programs on STLMVS1, create a copy of the file called STLMVS1.ftp.jcl. 4. Use a text editor to customize the JCL to meet your site's requirements. Modify the account information to match the standard account information for your MVS system. Do not modify any parameters contained in brackets, such as [USERID] and [FTPFILE]. Note:The brackets are the hexadecimal characters x'AD' and x'BD', respectively. If you do not have your TSO terminal type set to 3278A in SPF option 0, you may see these values as special characters rather than as brackets. This is not a problem as long as you do not modify the x'AD' or the x'BD', or any of the data between the characters. 5. Update the environment variable VWS_TEMPLATES to point to the directory of the copied template file. The Data Warehouse Center ships with this sample JCL template:
//[USERID]A JOB , 'PUT/GET',
// CLASS=A,
// USER=&SYSUID,
// NOTIFY=&SYSUID,
// TIME=(,30),
// MSGCLASS=H
//STEP1 EXEC PGM=FTP,PARM='( EXIT'
//INPUT DD DSN=[FTPFILE],DISP=SHR
//OUTPUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
Sampling contents of a table or file Using the OS/390 agent, you can sample contents of flat files such as Unix Systems Services files and OS/390 native flat files. You can also sample contents of DB2 tables, and of IMS or VSAM files via Classic Connect, using the OS/390 agent. For flat files, the agent looks at the parameters in the properties of the file definition to determine the file format. Accessing databases outside the UDB family To access non-UDB family databases, the OS/390 agent uses DataJoiner. DataJoiner has an interface that lets the agent use a normal DRDA flow to it as if it were a UDB database. If an ODBC request is directed to a non-UDB source, DataJoiner invokes an additional layer of code to access foreign databases. For example, when accessing Microsoft SQL Server, DataJoiner passes the request to the Windows ODBC driver manager, which then sends the request to SQL Server. DataJoiner can access Oracle, Sybase, Informix, Microsoft SQL Server, Teradata, and any other database that has an ODBC driver that runs on the NT, AIX, or Sun platforms. It can also access IMS and VSAM through Classic Connect. The OS/390 agent can access DataJoiner as a source, but not as a target. DataJoiner does not support two-phase commit. Although DataJoiner supports TCP/IP as an application requester in Versions 2.1 and 2.1.1, it does not have an application server. Since the OS/390 agent would require DataJoiner to have an application server to use TCP/IP, use an SNA connection instead to access DataJoiner from OS/390. Accessing IMS and VSAM on OS/390 Note: APAR PQ36586 is needed for Classic Connect support. The OS/390 agent accesses IMS and VSAM through the Classic Connect ODBC driver. Classic Connect allows customers to set up a DB2-like definition of IMS and VSAM data sets, and then to access them using ODBC.
The OS/390 agent has a function that loads the correct ODBC driver based on whether a request is directed to Classic Connect or DB2. If you are accessing a DB2 source, the agent loads the DB2 ODBC driver; if you are accessing a VSAM or IMS source, the agent loads the Classic Connect ODBC driver. The agent's request is then processed. The Classic Connect ODBC driver cannot be used to access a DB2 Universal Database, nor can DB2 Universal Database be used to access the Classic Connect ODBC driver. Setting up the Classic Connect ODBC driver The Classic Connect non-relational data mapper is a Microsoft Windows-based application that automates many of the tasks required to create logical table definitions for non-relational data structures. The objective is to view a single file, or a portion of a file, as one or more relational tables. The mapping must be accomplished while maintaining the structural integrity of the underlying database or file. Classic Connect is purchased and installed separately from the warehouse agent. 1. Install Classic Connect Data Server on OS/390 2. Install Classic Connect Data Mapper on NT (optional) 3. Use the data mapper to create logical table definitions for IMS and VSAM structures, or create the definitions manually Setting up Classic Connect Warehouse access Once you have Classic Connect set up, you can set up access to your warehouse. 1. Create a Classic Connect .ini file. 2. Update the DATASOURCE line. o This line contains a data source name and a protocol address. o The data source name must correspond to a Query Processor name defined on the Classic Connect Data Server, which is located in the QUERY PROCESSOR SERVICE INFO ENTRY in the data server's config file. o The protocol address can be found in the same file in the TCP/IP SERVICE INFO entry. The USERID and USERPASSWORD in this file will be used when defining a Warehouse data source. 3. Export the CXA_CONFIG environment variable to point to your Classic Connect executables, usually the same directory as your .ini file. 4. Update your LIBPATH environment variable to include the path to your Classic Connect executables, which are usually in the same directory as your .ini file. 5. Verify the install with the test program cxasamp (this step is optional). 6. From the directory containing your .ini file, enter cxasamp. The location/uid/pwd is the data source name/userid/userpassword that is defined in your .ini file. 7. Define a data source to the warehouse as you would any DB2 data source. Note:You do not need to update your dsnaoini file, because DB2 for OS/390 does not have a driver manager. The driver manager for Classic Connect is built into the OS/390 agent. A sample Classic Connect application configuration file, cxa.ini, is in the /usr/lpp/DWC/ directory:
* national language for messages
NL = US English
* resource master file
NL CAT = usr/lpp/DWC/v4r1m00/msg/engcat
FETCH BUFFER SIZE = 32000
DEFLOC = CXASAMP
USERID = uid
USERPASSWORD = pwd
DATASOURCE = DJX4DWC tcp/9.112.46.200/1035
MESSAGE POOL SIZE = 1000000
Executing DB2 for OS/390 utilities Note:APAR PQ31845 is needed for DB2 for OS/390 Version 5. APAR PQ31846 is needed for DB2 for OS/390 Version 6. DSNUTILS is a DB2 for OS/390 stored procedure that executes in a WLM and RRS environment. You can use it to run any DB2 utilities that you have installed by using the user-defined stored procedure interface. In addition, there are special user interfaces for the DB2 for OS/390 load, reorg, and runstats utilities.
The Warehouse Manager also provides an interface to DSNUTILS to allow inclusion of DB2 utilities in Warehouse Manager steps. For more information about DSNUTILS, see the DB2 Utilities Reference Guide. To set up DSNUTILS: 1. Execute job DSNTIJSG when installing DB2 to define and bind DSNUTILS. Make sure the definition of DSNUTILS has parameter style general with nulls and linkage = N. 2. Enable the WLM-managed stored procedures. 3. Set up your RRS and WLM environments. 4. Run the sample batch DSNUTILS programs. (This step is recommended but not required.) 5. Bind the DSNUTILS plan with your DSNCLI plan so that CLI can call the stored procedure:
BIND PLAN(DSNAOCLI) PKLIST(*.DSNAOCLI.*, *.DSNUTILS.*)
6. Set up a step using the Warehouse Manager and execute it. The Population type should be APPEND. If it is not, the Warehouse Manager will delete everything in the table before executing the utility. Using the DB2 utilities to move data is typically faster than using SQL. Using the DSNUTILS DB2 for OS/390 Reorg interface, you can specify the UNLOAD EXTERNAL option of the REORG TABLESPACE utility to unload a table to a dataset that can be loaded into another table by the LOAD utility. The other table can be on the same or a different OS/390 system. The UNLOAD EXTERNAL option of the REORG TABLESPACE utility creates two datasets: one with the table data, and one with the utility control statement that the LOAD utility can use. In the control statement, the INTO TABLE table name is the name of the unloaded table. The DSNUTILS DB2 for OS/390 Reorg interface allows a filename in the utility statement field. You can specify the file that was created by the REORG TABLESPACE utility, which contains a valid control statement, and specify a table name to replace the table name in the control statement. If the utility statement starts with the word :FILE: (FILE surrounded by colons), the text following it is an HFS filename or MVS dataset name that contains the utility control statement, and it is read. If the word :TABLE: (TABLE surrounded by colons) is found in the utility statement field, the text following it is a table name that is used to replace the table name in the "INTO TABLE" clause of the control statement. If you want to unload table data to a file, use the REORG TABLESPACE utility and specify the UNLOAD EXTERNAL option. To change the properties, locate the REORG UNLOAD step, and then right-click the step to bring up the Properties notebook. On the Parameters page, you can change the values for the parameters. Here are some examples:
Table 1. Properties for the Reorg Unload Step
UTILITY_ID     REORGULX
RESTART        NO
UTSTMT         REORG TABLESPACE DBVW.USAINENT UNLOAD EXTERNAL
RETCODE
UTILITY_NAME   REORG TABLESPACE
RECDSN         DBVW.DSNURELD.RECDSN
RECDEVT        SYSDA
RECSPACE       50
DISCDSN
DISCDEVT
DISCSPACE
PNCHDSN        DBVW.DSNURELD.PNCHDSN
PNCHDEVT       SYSDA
PNCHSPACE      3
If you want to use the LOAD utility to work with the output from the previous example, here are the values from the Parameters tab of the LOAD properties sheet:
Table 2. LOAD Step Properties
UTILITY_ID     LOADREORG
RESTART        NO
UTSTMT         :FILE:DBVW.DSNURELD.PNCHDSN:TABLE:[DBVW].INVENTORY
RETCODE
UTILITY_NAME   LOAD
RECDSN         DBVW.DSNURELD.RECDSN
RECDEVT        SYSDA
For more detailed information about the DB2 utilities available for the OS/390 platform, see the DB2 for OS/390 Utility Guide and Reference. Replication You can use the OS/390 agent to automate your Data Propagator replication apply steps.
Replication requires a source database, a control database, and a target database. These may be different databases or the same database. A capture job reads the DB2 log to determine which of the rows in the source database have been added, updated, or deleted, and it writes the changes out to a changed-data table. An apply job is then run to apply the changes to a target database. Warehouse Manager can automate the execution of the apply job by creating a replication step. The Warehouse Manager allows you to define the type of apply job to run and when to run it. You need to add your SASNLINK library to your STEPLIB environment variable. Modifying the Data Warehouse Center Template for Replication Support Data Warehouse Center installs a JCL template for replication support. If you plan to use the OS/390 agent to run the apply program, you need to modify the account and dataset information in this template for your OS/390 system. To modify the template: 1. Log on with an ID that has authority to copy and update files in the /usr/lpp/DWC/ directory. 2. Find apply.jcl and copy this file as systemname.apply.jcl, where systemname is the name of the MVS system. For example, if you are on STLMVS1, create a copy of the file called STLMVS1.apply.jcl. 3. Use a text editor to customize the JCL to meet your site's requirements. Modify the account information to match the standard account information, and modify the datasets for STEPLIB DD and MSGS DD for your MVS system. 4. If necessary, change the program name on the EXEC card. For details on changing program names, see the DB2 Replication Guide and Reference. Do not modify any parameters contained in brackets, such as [USERID] and [APPLY_PARMS]. Note:The brackets are the hexadecimal characters x'AD' and x'BD', respectively. If you do not have your TSO terminal type set to 3278A in SPF option 0, you may see these values as special characters rather than as brackets. This is not a problem as long as you do not modify the x'AD' or the x'BD', or any of the data between the characters. 5. Remember to update the environment variable VWS_TEMPLATES to point to the directory of the copied template file. The following example shows the JCL template that is shipped with the Data Warehouse Center:
//[USERID]A JOB ,MSGCLASS=H,MSGLEVEL=(1,1),
// REGION=2M,TIME=1440,NOTIFY=&SYSUID
//* DON'T CHANGE THE FIRST LINE OF THIS TEMPLATE.
//* THE REMAINING JCL SHOULD BE MODIFIED FOR YOUR SITE.
//**********************************************
//* RUN APPLY/MVS ON OS/390 DB2 6.1 *
//**********************************************
//ASNARUN EXEC PGM=ASNAPV66,REGION=10M,
// [APPLY_PARMS]
//STEPLIB DD DISP=SHR,DSN=DPROPR.V6R1M0.SASNLINK
// DD DISP=SHR,DSN=DSN610.SDSNLOAD
//MSGS DD DSN=DPROPR.V2R1M0A.MSGS,DISP=SHR
//ASNASPL DD DSN=&&asnaspl,DISP=(NEW,DELETE,DELETE),
// UNIT=SYSDA,SPACE=(CYL,(10,1)),
// DCB=(RECFM=VB,BLKSIZE=6404)
//SYSTERM DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//
Scheduling warehouse steps with the trigger program (XTClient) The trigger program allows you to schedule warehouse steps from the OS/390 platform. You or an OS/390 job scheduler can submit a job that triggers a step on DB2 Warehouse Manager. If the step is successful, the trigger step in the JCL returns a 0 return code. You must have the Java Development Kit (JDK) 1.1.8 or later installed on your OS/390 Unix Systems Services to use the trigger program. To execute the trigger, first start XT Server on the machine where your warehouse server is running.
This process is described in Chapter 5 of the Data Warehouse Center Administration Guide, in the topic "Starting a step from outside the Data Warehouse Center". After XT Server is started, start the XT Client on OS/390. Here is some sample JCL to execute the trigger:
//DBA1A JOB 1,'XTCLIENT',CLASS=A,MSGCLASS=H,
// MSGLEVEL=(1,1),REGION=4M,NOTIFY=&SYSUID
//******************************************************
//* submit iwhetrig
//******************************************************
//BRADS EXEC PGM=BPXBATCH,
// PARM=('sh cd /usr/lpp/DWC/; java XTClient 9.317.171.133 1100x
// 9 drummond pw bvmvs2nt 1 1 100')
//STDOUT DD PATH='/tmp/xtclient.stdout',
// PATHOPTS=(OWRONLY,OCREAT),
// PATHMODE=SIRWXU
//STDERR DD PATH='/tmp/xtclient.stderr',
// PATHOPTS=(OWRONLY,OCREAT),
// PATHMODE=SIRWXU
//
The first part of the parm (cd /usr/lpp/DWC/;) changes to the directory where the OS/390 agent is installed. The second part executes XTClient and passes the 8 parameters, which are as follows:
* The DWC server host name or IP address
* The DWC server port (normally 11009)
* The DWC user ID
* The DWC password
* The name of the step to execute
* The DWC server command, where: 1 = populate the step, 2 = promote the step to test, 3 = promote the step to production, 4 = demote the step to test, 5 = demote the step to development
* Wait for the BV completion, where: 1 = yes, 0 = no
* The maximum number of rows (use 0 or blank to fetch all rows)
Note:The above JCL shows how to continue the parameters onto a new line. To do so, type the parameters up to column 71, put an 'X' in column 72, and continue in column 16 on the next line. Agent Logging Many DB2 Warehouse Manager components, such as the server, the logger, agents, and some VWPs, write logs to the logging directory, which is specified in the VWS_LOGGING environment variable. The agent trace supports levels 0-4: * Level 1 - entry/exit tracing * Level 2 - level 1 plus debugging trace * Level 3 - level 2 plus data tracing * Level 4 - internal buffer tracing When trace is set higher than level 1, performance will be slower. Turn tracing on only for debugging purposes. The tracing information is stored in the file AGNTxxx.LOG, and environment information is stored in the file AGNTxxx.SET. ------------------------------------------------------------------------ 1.58 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition for Linux on S/390 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition are now available for Linux on S/390. Before installing Linux on an S/390 machine, you should be aware of the software and hardware requirements:
Hardware
   S/390 9672 Generation 5 or higher, Multiprise 3000.
Software
   * SuSE Linux v7.0 for S/390
   * kernel level 2.2.16, with patches for S/390 (see below)
   * glibc 2.1.3
   * libstdc++ 6.1
The following patches are required for Linux on S/390: * No patches are required at this time. For the latest updates, go to the http://www.software.ibm.com/data/db2/linux Web site. Notes: 1. Only 32-bit Intel-based Linux and Linux on S/390 are supported. 2. The following are not available on Linux/390 in DB2 Version 7.1: o DB2 UDB Enterprise - Extended Edition o DB2 Extenders o Data Links Manager o DB2 Administrative Client o Change Password Support o LDAP Support ------------------------------------------------------------------------ 1.59 DB2 Universal Database Enterprise - Extended Edition for Linux DB2 Universal Database Enterprise - Extended Edition is now available for Linux.
Note:For Linux EEE, each physical node in the EEE cluster should have the same kernel, glibc, and libstdc++ levels. ------------------------------------------------------------------------ 1.60 JDBC 2.0 Support for Linux, Linux/390 and HP-UX JDBC 2.0 is now supported on Linux, Linux/390, and HP-UX. For more information, see "Chapter 4. Building Java Applets and Applications" in the Application Building Guide. Note:Running Java stored procedures with JDK 1.2 or JDK 1.3 on these platforms is not supported. ------------------------------------------------------------------------ 1.61 Client Side Caching on Windows NT If a user with a valid token tries to access, through a shared drive, a READ PERM DB file residing on a Windows NT server machine where DB2 Datalinks is installed, the file opens as expected. However, subsequent open requests using the same token do not actually reach the server; they are serviced from the cache on the client. Even after the token expires, the contents of the file continue to be visible to the user, because the entry is still in the cache. However, this problem does not occur if the file resides on a Windows NT workstation. A solution would be to set the registry entry HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lanmanserver\Parameters\EnableOpLocks to zero on the Windows NT server. With this registry setting, whenever a file residing on the server is accessed from a client workstation through a shared drive, the request will always reach the server instead of being serviced from the client cache. Therefore, the token is re-validated for all requests. The negative impact of this solution is that it affects the overall performance of all file access from the server over shared drives. Even with this setting, if the file is accessed through a shared drive mapping on the server itself, as opposed to from a different client machine, it appears that the request is still serviced from the cache, and so the token expiry does not take effect. Note:In all cases, if the file access is a local access and not through a shared drive, token validation and subsequent token expiry will occur as expected. ------------------------------------------------------------------------ 1.62 Incompatibility between DB2 and Sybase in the Windows Environment The installation of DB2 Version 7.1 on the same Windows NT or Windows 2000 machine as Sybase Open Client results in an error, and the Sybase utilities stop working. An error message similar to the following occurs:
Fail to initialize LIBTCL.DLL. Please make sure the SYBASE environment variable is set correctly.
Avoid this scenario by removing the environment parameter LC_ALL from the Windows environment parameters. LC_ALL is a locale category parameter. Locale categories are manifest constants used by the localization routines to specify which portion of the locale information for a program to use. The locale refers to the locality (or country) for which certain aspects of your program can be customized. Locale-dependent areas include, for example, the formatting of dates or the display format for monetary values. LC_ALL affects all locale-specific behavior (all categories).
If you remove the LC_ALL environment parameter so that DB2 can coexist with Sybase on the Windows NT platform, the following DB2 facilities no longer work: * Information Catalog User * Information Catalog Administrator * Information Catalog Manager ------------------------------------------------------------------------ 1.63 DB2 UDB Supports the Baltic Rim Code Page (MS-1257) on Windows Platforms DB2 UDB supports the Baltic Rim code page, MS-1257, on Windows 32-bit operating systems. This code page is used for Latvian, Lithuanian, and Estonian. ------------------------------------------------------------------------ 1.64 Windows NT DLFS Incompatible with Norton Utilities The Windows NT Data Links File System is incompatible with Norton Utilities. When a file is deleted from a drive controlled by DLFS, a kernel exception results: error 0x1E (Kernel Mode Exception Not Handled), the exception being 0xC0000005 (Access Violation). This access violation happens because the Norton Utilities driver is loaded after the DLFS filter driver is loaded. A temporary work-around is to load the DLFSD driver after the Norton Utilities driver is loaded. To do this, change the DLFSD driver startup to manual: click on Start and select Settings --> Control Panel --> Devices --> DLFSD, and set it to manual. A batch file that loads the DLFSD driver and the DLFM service on system startup can then be created and added to the Startup folder. The contents of the batch file are as follows:
net start dlfsd
net start "dlfm service"
Name this batch file start_dlfs.bat, and copy it into the C:\WINNT\Profiles\Administrator\Start Menu\Programs\Startup directory. Only the administrator has the privilege to load the DLFS filter driver and the DLFM service. ------------------------------------------------------------------------ 1.65 SET CONSTRAINTS Replaced by SET INTEGRITY The SET CONSTRAINTS statement has been replaced by the SET INTEGRITY statement. For backwards compatibility, both statements are accepted in DB2 UDB Version 7. ------------------------------------------------------------------------ 1.66 Loss of Control Center Function Beginning with the first interim FixPak following FixPak 2, downlevel clients connecting through the Control Center will experience an almost complete loss of functionality. Downlevel in this case refers to any Version 6 client prior to FixPak 6, and any Version 7 client prior to FixPak 2. Version 5 clients are not affected. The suggested fix is to upgrade any affected clients. Version 6 clients must be upgraded to FixPak 6 or later, and Version 7 clients must be upgraded to FixPak 2 or later. ------------------------------------------------------------------------ Administration Guide: Planning ------------------------------------------------------------------------ 2.1 Chapter 8. Physical Database Design In the "Nodegroup Design Considerations" subsection of the "Designing Nodegroups" section, the following text from the "Partitioning Keys" sub-subsection, stating the points to be considered when defining partitioning keys, should be deleted only if DB2_UPDATE_PART_KEY=ON: Note:If DB2_UPDATE_PART_KEY=OFF (the default), then the restrictions still apply. * You cannot update the partitioning key column value for a row in the table. * You can only delete or insert partitioning key column values. ------------------------------------------------------------------------ 2.2 Chapter 9.
Designing Distributed Databases In the section "Updating Multiple Databases", the list of setup steps has an inaccuracy. Step 4, which now reads as follows: Precompile your application program to specify a type 2 connection (that is, specify CONNECT 2 on the PRECOMPILE PROGRAM command), and one-phase commit (that is, specify SYNCPOINT ONEPHASE on the PRECOMPILE PROGRAM command), as described in the Application Development Guide. should be changed to: Precompile your application program to specify a type 2 connection (that is, specify CONNECT 2 on the PRECOMPILE PROGRAM command), and two-phase commit (that is, specify SYNCPOINT TWOPHASE on the PRECOMPILE PROGRAM command), as described in the Application Development Guide. ------------------------------------------------------------------------ 2.3 Chapter 13. High Availability in the Windows NT Environment 2.3.1 Need to Reboot the Machine Before Running the DB2MSCS Utility The DB2MSCS utility is used to perform the required setup to enable DB2 for fail-over support under the Microsoft Cluster Service environment. For the DB2MSCS utility to run successfully, the Cluster Service must be able to locate the resource DLL, db2wolf.dll, which resides under the %ProgramFiles%\SQLLIB\bin directory. The DB2 UDB Version 7.1 installation program sets the PATH system environment variable to point to the %ProgramFiles%\SQLLIB\bin directory; however, the installation program does not require you to reboot the machine after installation if you are running on the Windows 2000 operating system. If you want to run the DB2MSCS utility, you must reboot the machine so that the updated PATH environment variable is picked up by the Cluster Service. ------------------------------------------------------------------------ 2.4 Chapter 14. DB2 and High Availability on Sun Cluster 2.2 DB2 Connect is supported on Sun Cluster 2.2 if: * The protocol to the host is TCP/IP (not SNA) * Two-phase commit is not used. This restriction is relaxed if the user configures the SPM log to be on a shared disk (this can be done through the spm_log_path database manager configuration parameter), and the failover machine has an identical TCP/IP configuration (the same host name, IP address, and so on). ------------------------------------------------------------------------ 2.5 Appendix E. National Language Support The first paragraph in the section entitled "Deriving Code Page Values" states the following: The application code page is derived from the active environment when the database connection is made. If the DB2CODEPAGE registry variable is set, its value is taken as the application code page. This is not always true for applications coded to use the CLI interface. The CLI code layer will use the locale settings in some cases, even if the user has set the DB2CODEPAGE registry variable. ------------------------------------------------------------------------ Administration Guide: Implementation ------------------------------------------------------------------------ 3.1 Adding or Extending DMS Containers (New Process) DMS containers (both file containers and raw device containers) that are added (during table space creation or afterward) or extended are now created or extended in parallel through the prefetchers. To increase the parallelism of these create and resize container operations, increase the number of prefetchers running in the system. The only process that is not done in parallel is the logging of these actions and, in the case of creating containers, the tagging of the containers.
Note: Parallelism of the CREATE TABLESPACE / ALTER TABLESPACE statement (with respect to adding new containers to an existing table space) will no longer increase once the number of prefetchers equals the number of containers being added.
------------------------------------------------------------------------
3.2 Chapter 4. Altering a Database
Under the section "Altering a Table Space", the following new sections are to be added:
3.2.1 Adding a Container to an SMS Table Space on a Partition
You can add a container to an SMS table space on a partition (or node) that currently has no containers. The contents of the table space are rebalanced across all containers. Access to the table space is not restricted during the rebalancing. If you need to add more than one container, you should add them all at the same time. To add a container to an SMS table space using the command line, enter the following:
ALTER TABLESPACE <name> ADD ('<path>') ON NODE (<partition_number>)
The partition specified by number, and every partition (or node) in the range of partitions, must exist in the nodegroup on which the table space is defined. A partition_number may only appear explicitly or within a range in exactly one on-nodes-clause for the statement. The following example shows how to add a new container to partition number 3 of the nodegroup used by table space "plans" on a UNIX based operating system:
ALTER TABLESPACE plans ADD ('/dev/rhdisk0') ON NODE (3)
3.2.2 Switching the State of a Table Space
The SWITCH ONLINE clause of the ALTER TABLESPACE statement can be used to move a table space in an OFFLINE state to an ONLINE state if the containers associated with that table space have become accessible. The table space is moved to an ONLINE state while the rest of the database is still up and being used. An alternative to the use of this clause is to disconnect all applications from the database and then to have the applications connect to the database again. This moves the table space from an OFFLINE state to an ONLINE state. To switch the table space to an ONLINE state using the command line, enter:
ALTER TABLESPACE <name> SWITCH ONLINE
------------------------------------------------------------------------
3.3 Chapter 8. Recovering a Database
Under the section "Tivoli Storage Manager", subsection "Managing Backups and Log Archives on TSM", in the third paragraph just before "Examples of Using db2adutl:", the last sentence is missing information on the right side of the page. The missing information is:
You can also qualify the command with OLDER [THAN] or DAYS. This will delete backups older than the given date (timestamp) or older than the number of days specified. You can also select a range of logs to be listed instead of seeing all of the logs. A specific backup can be selected for deletion by using the TAKEN AT parameter.
3.3.1 How to Use Suspended I/O
In Chapter 8, "Recovering a Database", the following new section on using the suspended I/O function is to be added:
db2inidb is a new tool shipped with DB2 that can perform crash recovery, put a database in rollforward pending state, and roll the database forward. Suspended I/O supports continuous system availability by providing a full implementation for online split mirror handling, that is, splitting a mirror without shutting down the database. If a customer cannot afford to take offline or online backups of a large database, backups or system copies can be made from a mirror image by using suspended I/O and a split mirror. Depending on how the storage devices are being mirrored, the uses of db2inidb will vary.
The following uses assume that the entire database is mirrored consistently through the storage system.
1. Making a Clone Database
The objective here is to have a clone of the primary database to be used for read-only purposes. The following procedure describes how a clone database may be made (a condensed command sequence is shown after these three uses):
a. Suspend I/O on the primary system by entering the following command:
db2 set write suspend for database
b. Use an operating system level command to split the mirror from the primary database.
c. Resume I/O on the primary system by entering the following command:
db2 set write resume for database
After running the command, the database on the primary system should be back to a normal state.
d. Attach to the mirrored database from another machine.
e. Start the database instance by entering the following command:
db2start
f. Start DB2 crash recovery by entering the following command:
db2inidb database_name AS SNAPSHOT
Note: This command will roll back changes made by transactions that were in flight at the time of the split. Any DB2 backup image taken on the cloned database cannot be used to restore the original database for the purpose of performing rollforward recovery using the log files produced on the original database after the split.
2. Using the Split Mirror as a Standby Database
As the mirrored (standby) database is continually rolling forward through the logs, new logs created by the primary database are constantly fetched from the primary system. The following procedure describes how the split mirror can be used as a standby database:
a. Suspend I/O writes on the primary database.
b. Split the mirror from the primary system.
c. Resume the I/O writes on the primary database so that the primary database goes back to normal processing.
d. Attach the mirrored database to another instance.
e. Copy logs by setting up a user exit program to retrieve log files from the primary system, to ensure that the latest logs will be available for this mirrored database.
f. Place the mirror in rollforward pending state and roll the mirror forward. Run the db2inidb tool (db2inidb database_alias AS STANDBY) to place the mirrored database in a rollforward pending state, remove the suspend write state, and roll the database forward to the end of the logs.
g. Go back to step e and repeat this process until the primary database is down.
3. Using the Split Mirror as a Backup Image
The following procedure describes how to use the mirrored system as a backup image to restore over the primary system:
a. Use operating system commands to copy the mirrored data and logs on top of the primary system.
b. Start the database instance by entering the following command:
db2start
c. Run the following command to place the mirrored database in a rollforward pending state, remove the suspend write state, and roll the database forward to the end of the logs:
db2inidb database_alias AS MIRROR
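As an illustration, the clone procedure in use 1 above condenses to the following command sequence. This is a minimal sketch only: MYDB is a hypothetical database alias, and the actual mirror split command depends entirely on your storage system.
On the primary system:
db2 set write suspend for database
(split the mirror using your storage system's utilities)
db2 set write resume for database
On the machine hosting the split mirror:
db2start
db2inidb mydb AS SNAPSHOT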
------------------------------------------------------------------------
3.4 Appendix C. User Exit for Database Recovery
Under the section "Archive and Retrieve Considerations", the following paragraph is no longer true and should be removed from the list:
A user exit may be interrupted if a remote client loses its connection to the DB2 server. That is, while logs are being archived through a user exit, one of the other SNA-connected clients dies or is powered off, resulting in a signal (SIGUSR1) being sent to the server. The server passes the signal to the user exit, causing an interrupt. The user exit program can be modified to check for an interrupt and then continue.
In the Error Handling section, the contents of Note 3 in the Notes list should be replaced with the following information:
* User exit program requests are suspended for five minutes. During this time, all requests are ignored, including the log file request that caused the return code. Following the five-minute suspension, the next request is processed. If no error occurs with the processing of this request, processing of new user exit program requests continues, and DB2 reissues the archive request for the log files that either failed to archive previously or were suspended. If a return code greater than 8 is generated during the retry, requests are suspended for an additional five minutes. The five-minute suspensions continue until the problem is corrected or the database is stopped and restarted. Once all applications disconnect from the database and the database is reopened, DB2 will issue the archive request for any log file that might not have been successfully archived in the previous use of the database. If the user exit program fails to archive log files, your disk can fill with log files, and performance may be degraded because of the extra work to format these log files. Once the disk becomes full, the database manager will not accept further application requests for database changes. If the user exit program was called to retrieve log files, roll-forward recovery is suspended but not stopped, unless a stop was specified in the ROLLFORWARD DATABASE utility. If a stop was not specified, you can correct the problem and resume recovery.
------------------------------------------------------------------------
3.5 Appendix I. High Speed Inter-node Communications
The following section has been updated:
3.5.1 Enabling DB2 to Run Using VI
Detailed installation information is found in DB2 Enterprise - Extended Edition for Windows Quick Beginnings. After completing the installation of DB2 as documented in DB2 Enterprise - Extended Edition for Windows Quick Beginnings, set the following DB2 registry variables and carry out the following tasks on each database partition server in the instance:
* Set DB2_VI_ENABLE=ON
Use the db2set command to modify the value for the registry variable. Use the db2_all command to run the db2set command on all database partition servers in the instance. You must be logged on with a user account that is a member of the Administrators group to run the db2_all command. In the following example, the ; character is placed inside the double quotation marks to allow the request to run concurrently on all the database partition servers in the instance:
db2_all ";db2set DB2_VI_ENABLE=ON"
For more information about the db2_all command, see "Issuing Commands to Multiple Database Partition Servers" in the Administration Guide: Implementation.
* Set DB2_VI_DEVICE=nic0
For example:
db2_all ";db2set DB2_VI_DEVICE=nic0"
Note: With Synfinity Interconnect, this variable should be set to DB2_VI_DEVICE=VINIC. The device name (VINIC) must be in upper case.
* Set DB2_VI_VIPL=vipl.dll
For example:
db2_all ";db2set DB2_VI_VIPL=vipl.dll"
Note: The value used in the example is the default for the registry variable. For more information on the registry variables, see the Administration Guide: Performance.
* Enter db2start on the MPP instance.
* Review the db2diag.log file. There should be one message for each partition stating that "VI is enabled."
* Fast Communications Manager (FCM) configuration parameters may need to be updated. If you encounter a problem caused by resource constraints involving FCM, raise the values of the FCM configuration parameters. If you are moving from another high speed interconnect environment where you have increased the values for the FCM configuration parameters, you may need to lower these values. Also, on Windows NT, you may be required to set the DB2NTMEMSIZE registry variable to override the DB2 defaults. Refer to the Administration Guide: Performance for more information on the registry variables. An example of updating an FCM parameter follows this list.
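For example, one of the FCM configuration parameters can be raised with the db2 update dbm cfg command. This is a minimal sketch; the value shown is illustrative only and should be tuned for your workload:
db2 update dbm cfg using fcm_num_buffers 4096
Restart the instance with db2stop and db2start for the change to take effect.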
------------------------------------------------------------------------
Administration Guide: Performance
------------------------------------------------------------------------
4.1 Chapter 5. System Catalog Statistics
The following section requires a change:
4.1.1 Collecting and Using Distribution Statistics
In the subsection called "Example of Impact on Equality Predicates", there is a discussion of a predicate C <= 10. The error is stated as being -86%. This is incorrect. The sentence at the end of the paragraph should read:
Assuming a uniform data distribution and using formula (1), the number of rows that satisfy the predicate is estimated as 1, an error of -87.5%.
In the subsection called "Example of Impact on Equality Predicates", there is a discussion of a predicate C > 8.5 AND C <= 10. The estimate of the r_2 value using linear interpolation must be changed to the following:
r_2 *= ((10 - 8.5) / (100 - 8.5)) x (number of rows with value > 8.5 and <= 100.0)
    *= ((10 - 8.5) / (100 - 8.5)) x (10 - 7)
    *= (1.5 / 91.5) x (3)
    *= 0
The paragraph following this new example must also be modified to read as follows:
The final estimate is r_1 + r_2 *= 7, and the error is only -12.5%.
------------------------------------------------------------------------
4.2 Chapter 6. Understanding the SQL Compiler
The following sections require changes:
4.2.1 Replicated Summary Tables
The following information will replace or be added to the existing information already in this section:
Replicated summary tables can be used to assist in the collocation of joins. For example, if you have a star schema in which a large fact table is spread across twenty nodes, the joins between the fact table and the dimension tables are most efficient if these tables are collocated. If all of the tables are placed in the same nodegroup, at most one dimension table can be partitioned correctly for a collocated join. All other dimension tables cannot be used in a collocated join because their join columns on the fact table do not correspond to the fact table's partitioning key.
For example, you could have a table called FACT (C1, C2, C3, ...) partitioned on C1; a table called DIM1 (C1, dim1a, dim1b, ...) partitioned on C1; a table called DIM2 (C2, dim2a, dim2b, ...) partitioned on C2; and so on. In this example, the join between FACT and DIM1 is perfect, because the predicate DIM1.C1 = FACT.C1 is collocated: both of these tables are partitioned on column C1. The join with the predicate DIM2.C2 = FACT.C2, however, cannot be collocated because FACT is partitioned on column C1, not on column C2. In this case, it would be good to replicate DIM2 in the fact table's nodegroup so that the join can be done locally on each partition.
Note: The replicated summary tables discussion here concerns intra-database replication. Inter-database replication involves subscriptions, control tables, and data located in different databases and on different operating systems. If you are interested in inter-database replication, refer to the Replication Guide and Reference for more information.
When creating a replicated summary table, the source table can be in a single-node nodegroup or a multi-node nodegroup. In most cases, the table is small and can be placed in a single-node nodegroup. You may place a limit on the data to be replicated by specifying only a subset of the columns from the table, by limiting the number of rows through the predicates used, or by using both methods when creating the replicated summary table.
Note: The data capture option is not required for replicated summary tables to function.
The replicated summary table can also be created in a multi-node nodegroup, the same nodegroup in which you have placed your large tables. In this case, copies of the source table are created on all of the partitions of the nodegroup. Joins between a large fact table and the dimension tables have a better chance of being done locally in this environment, rather than having to broadcast the source table to all partitions.
Indexes on replicated tables are not created automatically; you can create indexes, and they may be different from those on the source table.
Note: You cannot create unique indexes on (or define any constraints for) replicated tables. This prevents constraint violations that are not present on the source tables. These constraints are disallowed even if the same constraint exists on the source table.
After using the REFRESH statement, you should run RUNSTATS on the replicated table as you would on any other table. The replicated tables can be referenced directly within a query. However, you cannot use the NODENUMBER() predicate with a replicated table to see the table data on a particular partition.
To see whether a created replicated summary table was used (given a query that references the source table), you can use the EXPLAIN facility. First, ensure that the EXPLAIN tables exist. Then, create an explain plan for the SELECT statement you are interested in. Finally, use the db2exfmt utility to format the EXPLAIN output. The access plan chosen by the optimizer may or may not use the replicated summary table, depending on the information that needs to be joined. The optimizer might not use the replicated summary table if it determines that it would be cheaper to broadcast the original source table to the other partitions in the nodegroup.
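Continuing the FACT and DIM2 example, the statements below sketch how a replicated summary table might be created and populated. This is only an outline under the assumptions of the example above (the summary table name R_DIM2, the schema MYSCHEMA, and the table space FACTSPACE are hypothetical); check the SQL Reference for the exact CREATE TABLE options:
CREATE TABLE r_dim2 AS (SELECT c2, dim2a, dim2b FROM dim2)
  DATA INITIALLY DEFERRED REFRESH DEFERRED
  IN factspace REPLICATED
REFRESH TABLE r_dim2
RUNSTATS ON TABLE myschema.r_dim2
After the REFRESH and RUNSTATS, the optimizer can consider the local copy of DIM2 on each partition when planning the join with FACT.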
4.2.2 Data Access Concepts and Optimization
The section "Multiple Index Access" under "Index Scan Concepts" has changed. Add the following information before the note at the end of the section:
To realize the performance benefits of dynamic bitmaps when scanning multiple indexes, it may be necessary to change the value of the sort heap size (sortheap) database configuration parameter and the sort heap threshold (sheapthres) database manager configuration parameter. Additional sort heap space is required when dynamic bitmaps are used in access plans. When sheapthres is set relatively close to sortheap (that is, less than a factor of two or three times per concurrent query), dynamic bitmaps with multiple index access must work with much less memory than the optimizer anticipated. The solution is to increase the value of sheapthres relative to sortheap.
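For example, the two parameters can be adjusted with the following commands. This is a minimal sketch; the database name SAMPLE and the values shown are illustrative assumptions, not recommendations:
db2 update db cfg for sample using sortheap 2048
db2 update dbm cfg using sheapthres 20000
Here sheapthres is roughly ten times sortheap, which leaves dynamic bitmaps considerably more headroom than a factor of two or three would.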
The section "Search Strategies for Star Join" under "Predicate Terminology" has changed. Add the following information at the end of the section: The dynamic bitmaps created and used as part of the Star Join technique uses sort heap memory. See Chapter 13, "Configuring DB2" in the Administration Guide: Performance manual for more information on the Sort Heap Size (sortheap) database configuration parameter. ------------------------------------------------------------------------ 4.3 Chapter 13. Configuring DB2 The following parameters require changes: 4.3.1 Sort Heap Size (sortheap) The "Recommendation" section has changed. The information here should now read: When working with the sort heap, you should consider the following: * Appropriate indexes can minimize the use of the sort heap. * Hash join buffers and dynamic bitmaps (used for index ANDing and Star Joins) use sort heap memory. Increase the size of this parameter when these techniques are used. * Increase the size of this parameter when frequent large sorts are required. * ... (the rest of the items are unchanged) 4.3.2 Sort Heap Threshold (sheapthres) The second last paragraph in the description of this parameter has changed. The paragraph should now read: Examples of those operations that use the sort heap include: sorts, dynamic bitmaps (used for index ANDing and Star Joins), and operations where the table is in memory. The following information is to be added to the description of this parameter: There is no reason to increase the value of this parameter when moving from a single-node to a multi-node environment. Once you have tuned the database and database manager configuration parameters on a single node (in a DB2 EE) environment, the same values will in most cases work well in a multi-node (in a DB2 EEE) environment. The Sort Heap Threshold parameter, as a database manager configuration parameter, applies across the entire DB2 instance. The only way to set this parameter to different values on different nodes or partitions, is to create more than one DB2 instance. This will require managing different DB2 databases over different nodegroups. Such an arrangement defeats the purpose of many of the advantages of a partitioned database environment. 4.3.3 Maximum Percent of Lock List Before Escalation (maxlocks) The following change pertains to the Recommendation section of the "Maximum Percent of Lock List Before Escalation (maxlocks)" database configuration parameter. Recommendation: The following formula allows you to set maxlocks to allow an application to hold twice the average number of locks: maxlocks = 2 * 100 / maxappls Where 2 is used to achieve twice the average and 100 represents the largest percentage value allowed. If you have only a few applications that run concurrently, you could use the following formula as an alternative to the first formula: maxlocks = 2 * 100 / (average number of applications running concurrently) One of the considerations when setting maxlocks is to use it in conjunction with the size of the lock list (locklist). The actual limit of the number of locks held by an application before lock escalation occurs is: maxlocks * locklist * 4096 / (100 * 36) Where 4096 is the number of bytes in a page, 100 is the largest percentage value allowed for maxlocks, and 36 is the number of bytes per lock. 
If you know that one of your applications requires 1000 locks, and you do not want lock escalation to occur, then you should choose values for maxlocks and locklist in this formula so that the result is greater than 1000. (Using 10 for maxlocks and 100 for locklist, the formula gives 10 * 100 * 4096 / (100 * 36), or approximately 1138 locks, which is greater than the 1000 locks needed.) If maxlocks is set too low, lock escalation happens while there is still enough lock space for other concurrent applications. If maxlocks is set too high, a few applications can consume most of the lock space, and other applications will have to perform lock escalation. The need for lock escalation in this case results in poor concurrency. You may use the database system monitor to help you track and tune this configuration parameter.
4.3.4 Configuring DB2/DB2 Data Links Manager/Data Links Access Token Expiry Interval (dl_expint)
Contrary to the documentation, if dl_expint is set to "-1", the access control token expires. The workaround is to set dl_expint to its maximum value, 31536000 (seconds). This corresponds to an expiration time of one year, which should be adequate for all applications.
4.3.5 MIN_DEC_DIV_3 Database Configuration Parameter
The MIN_DEC_DIV_3 database configuration parameter is provided as a quick way to enable a change to the computation of the scale for decimal division in SQL. MIN_DEC_DIV_3 can be set to YES or NO. The default value for MIN_DEC_DIV_3 is NO.
The MIN_DEC_DIV_3 database configuration parameter changes the resulting scale of a decimal arithmetic operation involving division. If the value is NO, the scale is calculated as 31-p+s-s'. Refer to the SQL Reference, Chapter 3, "Decimal Arithmetic in SQL" for more information. If set to YES, the scale is calculated as MAX(3, 31-p+s-s'). This causes the result of decimal division to always have a scale of at least 3. Precision is always 31.
Changing this database configuration parameter may change the behavior of applications for existing databases. This can occur when the resulting scale for decimal division would be affected by changing this database configuration parameter. Listed below are some possible scenarios that may impact applications. These scenarios should be considered before changing MIN_DEC_DIV_3 on a database server with existing databases.
* If the resulting scale of one of the view columns is changed, a view that is defined in an environment with one setting could fail with SQLCODE -344 when referenced after the database configuration parameter is changed. The message SQL0344N refers to recursive common table expressions; however, if the object name (first token) is a view, you will need to drop the view and create it again to avoid this error.
* A static package will not change behavior until the package is rebound, either implicitly or explicitly. For example, after changing the value from NO to YES, the additional scale digits may not be included in the results until a rebind occurs. For any changed static packages, an explicit rebind command can be used to force a rebind.
* A check constraint involving decimal division may restrict some values that were previously accepted. Such rows now violate the constraint, but the violation will not be detected until one of the columns involved in the check constraint is updated or the SET INTEGRITY statement with the IMMEDIATE CHECKED option is processed.
To force checking of such a constraint, perform an ALTER TABLE statement to drop the check constraint, and then perform another ALTER TABLE statement to add the constraint again.
Note: DB2 Version 7 also has the following limitations:
1. The command GET DB CFG FOR DBNAME will not display the MIN_DEC_DIV_3 setting. The best way to determine the current setting is to observe the side-effect of a decimal division result. For example, consider the following statement:
VALUES (DEC(1,31,0)/DEC(1,31,5))
If this statement returns SQLCODE SQL0419N, the database does not have MIN_DEC_DIV_3 support, or it is set to NO. If the statement returns 1.000, MIN_DEC_DIV_3 is set to YES.
2. MIN_DEC_DIV_3 does not appear in the list of configuration keywords when you run the following command:
? UPDATE DB CFG
------------------------------------------------------------------------
4.4 Appendix A. DB2 Registry and Environment Variables
The following registry variables are new or require changes:
4.4.1 Table of New and Changed Registry Variables
Table 3. Registry Variables

DB2MAXFSCRSEARCH
Operating System: All
Values: Default=5; -1, or 1 to 33554
Description: Specifies the number of free space control records to search when adding a record to a table. The default is to search five free space control records. Modifying this value allows you to balance insert speed with space reuse. Use large values to optimize for space reuse. Use small values to optimize for insert speed. Setting the value to -1 forces the database manager to search all free space control records.

DLFM_TSM_MGMTCLASS
Operating System: AIX, Windows NT, Solaris
Values: Default=the default TSM management class; any valid TSM management class
Description: Specifies which TSM management class to use to archive and retrieve linked files. If there is no value set for this variable, the default TSM management class is used.

DB2_CORRELATED_PREDICATES
Operating System: All
Values: Default=ON; ON or OFF
Description: When there are unique indexes on correlated columns in a join, and this registry variable is ON, the optimizer attempts to detect and compensate for correlation of join predicates. It uses the KEYCARD information of unique index statistics to detect cases of correlation, and dynamically adjusts the combined selectivities of the correlated predicates, thus obtaining a more accurate estimate of the join size and cost.

DB2_VI_DEVICE
Operating System: Windows NT
Values: Default=null; nic0 or VINIC
Description: Specifies the symbolic name of the device or Virtual Interface Provider Instance associated with the Network Interface Card (NIC). Independent hardware vendors (IHVs) each produce their own NIC. Only one (1) NIC is allowed per Windows NT machine; multiple logical nodes on the same physical machine share the same NIC. The symbolic device name "VINIC" must be in upper case and can only be used with Synfinity Interconnect. All other currently supported implementations use "nic0" as the symbolic device name.

DB2_SELECTIVITY
Operating System: All
Values: Default=NO; YES or NO
Description: This registry variable controls where the SELECTIVITY clause can be used. See the SQL Reference, Language Elements, Search Conditions for complete details on the SELECTIVITY clause. When this registry variable is set to YES, the SELECTIVITY clause can be specified when the predicate is a basic predicate in which at least one expression contains host variables.
DB2_UPDATE_PART_KEY
Operating System: All
Values: Default=OFF; ON or OFF
Description: Specifies whether or not updating the partitioning key is permitted.
------------------------------------------------------------------------
4.5 Appendix C. SQL Explain Tools
The section titled "Running db2expln and dynexpln" should have the last paragraph replaced with the following:
To run db2expln, you must have SELECT privilege on the system catalog views as well as EXECUTE authority for the db2expln package. To run dynexpln, you must have BINDADD authority for the database, the schema you are using to connect to the database must exist or you must have IMPLICIT_SCHEMA authority for the database, and you must have any privileges needed for the SQL statements being explained. (Note that if you have SYSADM or DBADM authority, you automatically have all these authorization levels.)
------------------------------------------------------------------------
Administrative API Reference
------------------------------------------------------------------------
5.1 db2ConvMonStream
In the Usage Notes, the structure for the snapshot variable datastream type SQLM_ELM_SUBSECTION should be sqlm_subsection.
------------------------------------------------------------------------
5.2 db2DatabasePing (new API)
db2DatabasePing - Ping Database
Tests the network response time of the underlying connectivity between a client and a database server. This API can be used by an application when a host database server is accessed via DB2 Connect (either directly or through a gateway).
Authorization
None
Required Connection
Database
Version
db2ApiDf.h
C API Syntax
/* File: db2ApiDf.h */
/* API: Ping Database */
/* ... */
SQL_API_RC SQL_API_FN
db2DatabasePing (
  db2Uint32 versionNumber,
  void *pParmStruct,       /* Input/output parameters */
  struct sqlca *pSqlca);   /* SQLCA */
/* ... */
typedef SQL_STRUCTURE db2DatabasePingStruct
{
  char iDbAlias[SQL_ALIAS_SZ + 1];  /* Reserved */
  db2Uint16 iNumIterations;         /* Number of iterations */
  db2Uint32 *poElapsedTime;         /* Array of elapsed times (in microseconds) */
}
Generic API Syntax
/* File: db2ApiDf.h */
/* API: Ping Database */
/* ... */
SQL_API_RC SQL_API_FN
db2gDatabasePing (
  db2Uint32 versionNumber,
  void *pParmStruct,       /* Input/output parameters */
  struct sqlca *pSqlca);   /* SQLCA */
/* ... */
typedef SQL_STRUCTURE db2gDatabasePingStruct
{
  db2Uint16 iDbAliasLength;     /* Reserved */
  char iDbAlias[SQL_ALIAS_SZ];  /* Reserved */
  db2Uint16 iNumIterations;     /* Number of iterations */
  db2Uint32 *poElapsedTime;     /* Array of elapsed times (in microseconds) */
}
API Parameters
versionNumber
Input. Version and release of the DB2 Universal Database or DB2 Connect product that the application is using.
Note: Constant db2Version710 or higher should be used for DB2 Version 7.1 or higher.
iDbAliasLength
Input. Length of the database alias name.
Note: This parameter is not currently used. It is reserved for future use.
iDbAlias
Input. Database alias name.
Note: This parameter is not currently used. It is reserved for future use.
iNumIterations
Input. Number of test request iterations. The value must be between 1 and 32767 inclusive.
poElapsedTime
Output. A pointer to an array of 32-bit integers where the number of elements is equal to iNumIterations. Each element in the array will contain the elapsed time in microseconds for one test request iteration.
Note: The application is responsible for allocating the memory for this array prior to calling this API.
pSqlca
Output. A pointer to the sqlca structure. For more information about this structure, see the Administrative API Reference.
Usage Notes
A database connection must exist before invoking this API; otherwise, an error will result.
This function can also be invoked using the PING command. For a description of this command, see the Command Reference.
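The following fragment sketches a call to this API. It is a minimal illustration only: it assumes a database connection has already been established (for example, through embedded SQL or CLI), assumes the structure typedef shown above, and reduces error handling to checking the SQLCODE.
#include <stdio.h>
#include <sqlca.h>
#include <db2ApiDf.h>

/* Ping the currently connected database five times. */
int ping_current_database(void)
{
  struct sqlca sqlca;
  db2DatabasePingStruct ping;
  db2Uint16 i;
  db2Uint32 times[5];          /* one element per iteration */

  ping.iDbAlias[0] = '\0';     /* reserved, not currently used */
  ping.iNumIterations = 5;
  ping.poElapsedTime = times;  /* caller-allocated, per the note above */

  db2DatabasePing(db2Version710, &ping, &sqlca);
  if (sqlca.sqlcode != 0)
    return (int)sqlca.sqlcode;

  for (i = 0; i < ping.iNumIterations; i++)
    printf("Iteration %u: %u microseconds\n", i + 1, (unsigned)times[i]);
  return 0;
}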
------------------------------------------------------------------------
5.3 db2XaGetInfo (new API)
db2XaGetInfo - Get Information for Resource Manager
Extracts information for a particular resource manager once an xa_open call has been made.
Authorization
None
Required Connection
Database
Version
sqlxa.h
C API Syntax
/* File: sqlxa.h */
/* API: Get Information for Resource Manager */
/* ... */
SQL_API_RC SQL_API_FN
db2XaGetInfo (
  db2Uint32 versionNumber,
  void * pParmStruct,
  struct sqlca * pSqlca);
typedef SQL_STRUCTURE db2XaGetInfoStruct
{
  db2int32 iRmid;
  struct sqlca oLastSqlca;
} db2XaGetInfoStruct;
API Parameters
versionNumber
Input. Specifies the version and release level of the structure passed in as the second parameter, pParmStruct.
pParmStruct
Input. A pointer to the db2XaGetInfoStruct structure.
pSqlca
Output. A pointer to the sqlca structure. For more information about this structure, see the Administrative API Reference.
iRmid
Input. Specifies the resource manager for which information is required.
oLastSqlca
Output. Contains the sqlca for the last XA API call.
Note: Only the sqlca that resulted from the last failing XA API can be retrieved.
------------------------------------------------------------------------
5.4 db2XaListIndTrans (new API that supersedes sqlxphqr)
db2XaListIndTrans - List Indoubt Transactions
Provides a list of all indoubt transactions for the currently connected database.
Scope
This API affects only the node on which it is issued.
Authorization
One of the following:
* sysadm
* dbadm
Required Connection
Database
Version
db2ApiDf.h
C API Syntax
/* File: db2ApiDf.h */
/* API: List Indoubt Transactions */
/* ... */
SQL_API_RC SQL_API_FN
db2XaListIndTrans (
  db2Uint32 versionNumber,
  void * pParmStruct,
  struct sqlca * pSqlca);
typedef SQL_STRUCTURE db2XaListIndTransStruct
{
  db2XaRecoverStruct * piIndoubtData;
  db2Uint32 iIndoubtDataLen;
  db2Uint32 oNumIndoubtsReturned;
  db2Uint32 oNumIndoubtsTotal;
  db2Uint32 oReqBufferLen;
} db2XaListIndTransStruct;
typedef SQL_STRUCTURE db2XaRecoverStruct
{
  sqluint32 timestamp;
  SQLXA_XID xid;
  char dbalias[SQLXA_DBNAME_SZ];
  char applid[SQLXA_APPLID_SZ];
  char sequence_no[SQLXA_SEQ_SZ];
  char auth_id[SQL_USERID_SZ];
  char log_full;
  char connected;
  char indoubt_status;
  char originator;
  char reserved[8];
} db2XaRecoverStruct;
API Parameters
versionNumber
Input. Specifies the version and release level of the structure passed in as the second parameter, pParmStruct.
pParmStruct
Input. A pointer to the db2XaListIndTransStruct structure.
pSqlca
Output. A pointer to the sqlca structure. For more information about this structure, see the Administrative API Reference.
piIndoubtData
Input. A pointer to the application-supplied buffer where indoubt data will be returned. The indoubt data is in db2XaRecoverStruct format. The application can traverse the list of indoubt transactions by using the size of the db2XaRecoverStruct structure, starting at the address provided by this parameter. If the value is NULL, DB2 will calculate the size of the buffer required and return this value in oReqBufferLen. oNumIndoubtsTotal will contain the total number of indoubt transactions. The application may allocate the required buffer size and issue the API again.
oNumIndoubtsReturned
Output. The number of indoubt transaction records returned in the buffer specified by piIndoubtData.
oNumIndoubtsTotal
Output. The total number of indoubt transaction records available at the time of API invocation. If the piIndoubtData buffer is too small to contain all the records, oNumIndoubtsTotal will be greater than oNumIndoubtsReturned. The application may reissue the API in order to obtain all records.
Note: This number may change between API invocations as a result of automatic or heuristic indoubt transaction resynchronisation, or as a result of other transactions entering the indoubt state.
oReqBufferLen
Output. The buffer length required to hold all indoubt transaction records at the time of API invocation. The application can use this value to determine the required buffer size by calling the API with piIndoubtData set to NULL. This value can then be used to allocate the required buffer, and the API can be issued again with piIndoubtData set to the address of the allocated buffer.
Note: The required buffer size may change between API invocations as a result of automatic or heuristic indoubt transaction resynchronisation, or as a result of other transactions entering the indoubt state. The application may allocate a larger buffer to account for this.
timestamp
Output. Specifies the time when the transaction entered the indoubt state.
xid
Output. Specifies the XA identifier assigned by the transaction manager to uniquely identify a global transaction.
dbalias
Output. Specifies the alias of the database where the indoubt transaction is found.
applid
Output. Specifies the application identifier assigned by the database manager for this transaction.
sequence_no
Output. Specifies the sequence number assigned by the database manager as an extension to the applid.
auth_id
Output. Specifies the authorization ID of the user who ran the transaction.
log_full
Output. Indicates whether or not this transaction caused a log full condition. Valid values are:
SQLXA_TRUE - This indoubt transaction caused a log full condition.
SQLXA_FALSE - This indoubt transaction did not cause a log full condition.
connected
Output. Indicates whether or not the application is connected. Valid values are:
SQLXA_TRUE - The transaction is undergoing normal syncpoint processing, and is waiting for the second phase of the two-phase commit.
SQLXA_FALSE - The transaction was left indoubt by an earlier failure, and is now waiting for resynchronisation from the transaction manager.
indoubt_status
Output. Indicates the status of this indoubt transaction. Valid values are:
SQLXA_TS_PREP - The transaction is prepared. The connected parameter can be used to determine whether the transaction is waiting for the second phase of normal commit processing or whether an error occurred and resynchronisation with the transaction manager is required.
SQLXA_TS_HCOM - The transaction has been heuristically committed.
SQLXA_TS_HROL - The transaction has been heuristically rolled back.
SQLXA_TS_MACK - The transaction is missing commit acknowledgement from a node in a partitioned database.
SQLXA_TS_END - The transaction has ended at this database. This transaction may be re-activated, committed, or rolled back at a later time. It is also possible that the transaction manager encountered an error and the transaction will not be completed. If this is the case, this transaction requires heuristic actions, because it may be holding locks and preventing other applications from accessing data.
Usage Notes
A typical application will perform the following steps after setting the current connection to the database or to the partitioned database coordinator node:
1. Call db2XaListIndTrans with piIndoubtData set to NULL. This will return values in oReqBufferLen and oNumIndoubtsTotal.
2. Use the returned value in oReqBufferLen to allocate a buffer. This buffer may not be large enough if additional indoubt transactions appear between the initial invocation of this API (used to obtain oReqBufferLen) and the next invocation. The application may provide a buffer larger than oReqBufferLen.
3. Determine whether all indoubt transaction records have been obtained by comparing oNumIndoubtsReturned to oNumIndoubtsTotal. If oNumIndoubtsTotal is greater than oNumIndoubtsReturned, the application can repeat the above steps.
See Also
"sqlxhfrg - Forget Transaction Status", "sqlxphcm - Commit an Indoubt Transaction", and "sqlxphrl - Roll Back an Indoubt Transaction" in the Administrative API Reference.
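The two-call buffer-sizing pattern described in the usage notes above can be sketched in C as follows. This is an illustrative outline only: a database connection is assumed to exist, error handling is abbreviated, and the use of iIndoubtDataLen as the buffer size in bytes is an assumption, since that field is not described above.
#include <stdlib.h>
#include <sqlca.h>
#include <db2ApiDf.h>

/* List indoubt transactions using the two-call pattern. */
static void list_indoubt(void)
{
  struct sqlca sqlca;
  db2XaListIndTransStruct parm;
  db2XaRecoverStruct *buf;

  /* First call: a NULL buffer returns the required length. */
  parm.piIndoubtData = NULL;
  parm.iIndoubtDataLen = 0;
  db2XaListIndTrans(db2Version710, &parm, &sqlca);
  if (sqlca.sqlcode != 0 || parm.oNumIndoubtsTotal == 0)
    return;

  /* Second call: supply a buffer of oReqBufferLen bytes. */
  buf = (db2XaRecoverStruct *)malloc(parm.oReqBufferLen);
  parm.piIndoubtData = buf;
  parm.iIndoubtDataLen = parm.oReqBufferLen;  /* assumed semantics */
  db2XaListIndTrans(db2Version710, &parm, &sqlca);
  /* ... traverse parm.oNumIndoubtsReturned records in buf ... */
  free(buf);
}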
------------------------------------------------------------------------
5.5 sqlaintp - Get Error Message
The following usage note is to be added to the description of this API:
In a multi-threaded application, sqlaintp must be attached to a valid context; otherwise, the message text for SQLCODE -1445 cannot be obtained.
------------------------------------------------------------------------
5.6 Documentation Error Regarding AIX Extended Shared Memory Support (EXTSHM)
In "Appendix E. Threaded Applications with Concurrent Access", Note 2 should now read:
2. By default, AIX does not permit 32-bit applications to attach to more than 11 shared memory segments per process, of which a maximum of 10 can be used for DB2 connections. Although EXTSHM can be used to increase the maximum number of shared memory segments for a process, and can be used for client applications, DB2 does not support EXTSHM, so the maximum of 10 DB2 connections per process is not increased.
------------------------------------------------------------------------
5.7 SQLFUPD Documentation Error
In "Chapter 3. Data Structures", Table 53, Updatable Database Configuration Parameters, incorrectly lists the token value for dbheap as 701. The correct value is 58.
------------------------------------------------------------------------
Application Building Guide
------------------------------------------------------------------------
6.1 Chapter 1. Introduction
6.1.1 Supported Software
AIX
The listed versions for the C and C++ compilers should be the following:
IBM C and C++ Compilers for AIX Version 3.6.6 (Version 3.6.6.3 for 64-bit)
IBM C for AIX 4.4
IBM VisualAge C++ Version 4.0
Note: Please download the latest available FixPaks for these compiler versions from http://www.ibm.com/software/ad/vacpp/service/csd.html
The listed versions for the Micro Focus COBOL compiler should be the following:
AIX 4.2.1
Micro Focus COBOL Version 4.0.20 (PRN 12.03 or later)
Micro Focus COBOL Version 4.1.10 (PRN 13.04 or later)
AIX 4.3
Micro Focus COBOL Server Express Version 1.0
Note: For information on DB2 support for Micro Focus COBOL stored procedures and UDFs on AIX 4.3, see the DB2 Application Development Web page: http://www.ibm.com/software/data/db2/udb/ad
To build 64-bit applications with the IBM XL Fortran for AIX Version 5.1.0 compiler, use the "-q64" option in the compile and link steps. Note that 64-bit applications are not supported on earlier versions of this compiler.
HP-UX
The listed version for the C++ compiler should be the following:
HP aC++, Version A.03.25
Note: HP does not support binary compatibility between objects compiled with the old and new compilers, so this forces a recompile of any C++ application built to access DB2 on HP-UX. C++ applications must also be built to handle exceptions with this new compiler. This is the URL for the aCC transition guide: http://www.hp.com/esy/lang/cpp/tguide. The C++ incompatibility portion is here:
http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.1.2
http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.3.3
The C vs C++ portion is here:
http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.3.3.1
Even though C and aCC objects are compatible, when the two different object types are used together, the object containing "main" must be compiled with aCC, and the final executable must be linked with aCC.
Linux
DB2 for Linux supports the following REXX version:
Object REXX Interpreter for Linux Version 2.1
Linux/390
DB2 for Linux/390 supports only Java, C and C++.
OS/2
The listed versions for the C/C++ compiler should be the following:
IBM VisualAge C++ for OS/2 Version 3.6.5 and Version 4.0
Note: Please download the latest available FixPaks for these compiler versions from http://www.ibm.com/software/ad/vacpp/service/csd.html
Solaris
The listed version for the Micro Focus COBOL compiler should be:
Micro Focus COBOL Server Express Version 1.0
Windows 32-bit Operating Systems
The listed versions for the IBM VisualAge C++ compiler should be the following:
IBM VisualAge C++ for Windows Versions 3.6.5 and 4.0
Note: Please download the latest available FixPaks for these compiler versions from http://www.ibm.com/software/ad/vacpp/service/csd.html
The listed versions for the Micro Focus COBOL compiler should be the following:
Micro Focus COBOL Version 4.0.20
Micro Focus COBOL Net Express Version 3.0
6.1.2 Sample Programs
The following should be added to the "Object Linking and Embedding Samples" section:
salarycltvc - A Visual C++ DB2 CLI sample that calls the Visual Basic stored procedure, salarysrv.
SALSVADO - A sample OLE automation stored procedure (SALSVADO) and client (SALCLADO), implemented in 32-bit Visual Basic and ADO, that calculates the median salary in table staff2.
The following should be added to the "Log Management User Exit Samples" section:
Applications on AIX using the ADSM API Client at level 3.1.6 and higher must be built with the xlc_r or xlC_r compiler invocations, not with xlc or xlC, even if the applications are single-threaded. This ensures that the libraries are thread-safe. This applies to the Log Management User Exit Sample, db2uext2.cadsm. If you have an application that is compiled with a non-thread-safe library, you can apply fixtest IC21925E or contact your application provider. The fixtest is available on the index.storsys.ibm.com anonymous FTP server. Applying it will regress the ADSM API level to 3.1.3.
------------------------------------------------------------------------
6.2 Chapter 3. General Information for Building DB2 Applications
6.2.1 Build Files, Makefiles, and Error-checking Utilities
The entry for bldevm in Table 16 should read:
bldevm - The event monitor sample program, evm (only available on AIX, OS/2, and Windows 32-bit operating systems).
Table 17 should include the entries:
bldmevm - The event monitor sample program, evm, with the Microsoft Visual C++ compiler.
bldvevm - The event monitor sample program, evm, with the VisualAge C++ compiler.
------------------------------------------------------------------------
6.3 Chapter 4. Building Java Applets and Applications
6.3.1 Setting the Environment
If you are using IBM JDK 1.1.8 on supported platforms to build SQLJ programs, a JDK build date of November 24, 1999 (or later) is required. Otherwise, you may get JNI panic errors during compilation. If you are using IBM JDK 1.2.2 on supported platforms to build SQLJ programs, a JDK build date of April 17, 2000 (or later) is required. Otherwise, you may get Invalid Java type errors during compilation.
For the sub-sections AIX and Solaris, replace the information on JDBC 2.0 with the following:
Using the JDBC 2.0 Driver with Java Applications
The JDBC 1.22 driver is still the default driver on all operating systems. To take advantage of the new features of JDBC 2.0, you must install JDK 1.2 support. Before executing an application that takes advantage of the new features of JDBC 2.0, you must set your environment by issuing the usejdbc2 command from the sqllib/java12 directory. If you want your applications to always use the JDBC 2.0 driver, consider adding the following line to your login profile, such as .profile, or your shell initialization script, such as .bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
Ensure that this line is placed after the command that runs db2profile, because usejdbc2 must be run after db2profile.
To switch back to the JDBC 1.22 driver, execute the following command from the sqllib/java12 directory:
. usejdbc1
Using the JDBC 2.0 Driver with Java Stored Procedures and UDFs
To use the JDBC 2.0 driver with Java stored procedures and UDFs, you must set the environment for the fenced user ID for your instance. The default fenced user ID is db2fenc1. To set the environment for the fenced user ID, perform the following steps:
1. Add the following line to the fenced user ID profile, such as .profile, or the fenced user ID shell initialization script, such as .bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=1
To switch back to JDBC 1.22 driver support for Java UDFs and stored procedures, perform the following steps:
1. Remove the following line from the fenced user ID profile, such as .profile, or the fenced user ID shell initialization script, such as .bashrc, .cshrc, or .kshrc:
. sqllib/java12/usejdbc2
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=
HP-UX
Java stored procedures and user-defined functions are not supported on DB2 for HP-UX servers.
Silicon Graphics IRIX
When building SQLJ applications with the -o32 object type, using the Java JIT compiler with JDK 1.2.2, if the SQLJ translator fails with a segmentation fault, try turning off the JIT compiler with this command:
export JAVA_COMPILER=NONE
JDK 1.2.2 is required for building Java SQLJ programs on Silicon Graphics IRIX.
Windows 32-bit Operating Systems
Using the JDBC 2.0 Driver with Java Stored Procedures and UDFs
To use the JDBC 2.0 driver with Java stored procedures and UDFs, you must set the environment by performing the following steps:
1. Issue the following command in the sqllib\java12 directory:
usejdbc2
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=1
To switch back to JDBC 1.22 driver support for Java UDFs and stored procedures, perform the following steps:
1. Issue the following command in the sqllib\java12 directory:
usejdbc1
2. Issue the following command from the CLP:
db2set DB2_USE_JDK12=
------------------------------------------------------------------------
6.4 Chapter 5. Building SQL Procedures
6.4.1 Setting the SQL Procedures Environment
These instructions are in addition to the instructions for setting up the DB2 environment in "Setup". For SQL procedures support, you have to install the Application Development Client and a DB2-supported C or C++ compiler on the server. For information about installing the Application Development Client, refer to the Quick Beginnings book for your platform. For the C and C++ compilers supported by DB2 on your platform, see "Supported Software by Platform".
Note: On an OS/2 FAT file system, you are limited to a schema name for SQL procedures of eight characters or less. You have to use the HPFS file system for schema names longer than eight characters.
The compiler configuration consists of two parts: setting the environment variables for the compiler, and defining the compilation command. The environment variables provide the paths to the compiler's binaries, libraries, and include files. The compilation command is the full command that DB2 will use to compile the C files generated for SQL procedures.
6.4.2 Setting the Compiler Environment Variables
There are different rules for configuring the environment on OS/2, Windows, and UNIX based operating systems, as explained below. In some cases, no configuration is needed; in other cases, the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable must be set to point to an executable script that sets the environment variables appropriately.
On OS/2:
for IBM VisualAge C++ for OS/2 Version 3.6:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcxxo\bin\setenv.cmd"
for IBM VisualAge C++ for OS/2 Version 4:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcpp40\bin\setenv.cmd"
Note: For these commands, it is assumed that the C++ compiler is installed on the c: drive. Change the drive or the path, if necessary, to reflect the location of the C++ compiler on your system.
On Windows 32-bit operating systems, if the environment variables for your compiler are set as SYSTEM variables, no configuration is needed. Otherwise, set the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable as follows:
for Microsoft Visual C++ Version 5.0:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\devstudio\vc\bin\vcvars32.bat"
for Microsoft Visual C++ Version 6.0:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\Micros~1\vc98\bin\vcvars32.bat"
for IBM VisualAge C++ for Windows Version 3.6:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcxxw\bin\setenv.bat"
for IBM VisualAge C++ for Windows Version 4:
db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcppw40\bin\setenv.bat"
Note: For these commands, it is assumed that the C++ compiler is installed on the c: drive. Change the drive or the path, if necessary, to reflect the location of the C++ compiler on your system.
On UNIX based operating systems, DB2 will generate the executable script file $HOME/sqllib/function/routine/sr_cpath (which contains the default values for the compiler environment variables) the first time you compile a stored procedure. You can edit this file if the default values are not appropriate for your compiler.
Alternatively, you can set the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable to contain the full path name of another executable script that specifies the desired settings (see the examples above).
6.4.3 Customizing the Compilation Command
The installation of the Application Development Client provides a default compilation command that works for at least one of the compilers supported on each platform:
AIX: IBM C Set++ for AIX Version 3.6.6
Solaris: SPARCompiler C++ Versions 4.2 and 5.0
HP-UX: HP-UX C++ Version A.12.00
Linux: GNU/Linux g++ Version egcs-2.90.27 980315 (egcs-1.0.2 release)
PTX: ptx/C++ Version 5.2
OS/2: IBM VisualAge C++ for OS/2 Version 3
Windows NT and Windows 2000: Microsoft Visual C++ Versions 5.0 and 6.0
To use other compilers, or to customize the default command, you must set the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable with a command like:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=compilation_command
where compilation_command is the C or C++ compilation command, including the options and parameters required to create stored procedures. In the compilation command, use the keyword SQLROUTINE_FILENAME to replace the filename for the generated SQC, C, PDB, DEF, EXP, message log, and shared library files. For AIX only, use the keyword SQLROUTINE_ENTRY to replace the entry point name. The following are the default values of DB2_SQLROUTINE_COMPILE_COMMAND for the C or C++ compilers on supported server platforms.
AIX
To use IBM C for AIX Version 3.6.6:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=xlc -H512 -T512 \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c -bE:SQLROUTINE_FILENAME.exp \
-e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -lc -ldb2
To use IBM C Set++ for AIX Version 3.6.6:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=xlC -H512 -T512 \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c -bE:SQLROUTINE_FILENAME.exp \
-e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -lc -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
Note: To compile 64-bit SQL procedures on AIX, add the -q64 option to the above commands.
To use IBM VisualAge C++ for AIX Version 4:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld"
If you do not specify a configuration file after the vacbld command, DB2 will create the following default configuration file the first time any SQL procedure is created:
$HOME/sqllib/function/routine/sqlproc.icc
If you want to use your own configuration file, you can specify it when setting the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld %DB2PATH%/function/sqlproc.icc"
HP-UX
To use HP C Compiler Version A.11.00.03:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc +DAportable +ul -Aa +z \
-I$HOME/sqllib/include -c SQLROUTINE_FILENAME.c; \
ld -b -o SQLROUTINE_FILENAME SQLROUTINE_FILENAME.o \
-L$HOME/sqllib/lib -ldb2
To use HP-UX C++ Version A.12.00:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=CC +DAportable +a1 +z -ext \
-I$HOME/sqllib/include -c SQLROUTINE_FILENAME.c; \
ld -b -o SQLROUTINE_FILENAME SQLROUTINE_FILENAME.o \
-L$HOME/sqllib/lib -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
Linux
To use GNU/Linux gcc Version 2.7.2.3:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-shared -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -ldb2
To use GNU/Linux g++ Version egcs-2.90.27 980315 (egcs-1.0.2 release):
db2set DB2_SQLROUTINE_COMPILE_COMMAND=g++ \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-shared -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
PTX
To use ptx/C Version 4.5:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc -KPIC \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME.so -L$HOME/sqllib/lib -ldb2 ; \
cp SQLROUTINE_FILENAME.so SQLROUTINE_FILENAME
To use ptx/C++ Version 5.2:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=c++ -KPIC \
-D_RWSTD_COMPILE_INSTANTIATE=0 -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME.so -L$HOME/sqllib/lib -ldb2 ; \
cp SQLROUTINE_FILENAME.so SQLROUTINE_FILENAME
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
OS/2
To use IBM VisualAge C++ for OS/2 Version 3:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="icc -Ge- -Gm+ -W2 -I%DB2PATH%\include SQLROUTINE_FILENAME.c /B\"/NOFREE /NOI /ST:64000\" SQLROUTINE_FILENAME.def %DB2PATH%\lib\db2api.lib"
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
To use IBM VisualAge C++ for OS/2 Version 4:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld"
If you do not specify a configuration file after the vacbld command, DB2 will create the following default configuration file the first time any SQL procedure is created:
%DB2PATH%\function\routine\sqlproc.icc
If you want to use your own configuration file, you can specify it when setting the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND:
db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld %DB2PATH%\function\sqlproc.icc"
Solaris
To use SPARCompiler C Versions 4.2 and 5.0:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc -xarch=v8plusa -Kpic \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib \
-R$HOME/sqllib/lib -ldb2
To use SPARCompiler C++ Versions 4.2 and 5.0:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=CC -xarch=v8plusa -Kpic \
-I$HOME/sqllib/include SQLROUTINE_FILENAME.c \
-G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib \
-R$HOME/sqllib/lib -ldb2
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
Notes:
1. The compiler option -xarch=v8plusa has been added to the default compiler command. For details on why this option has been added, see 6.9, "Chapter 12. Building Solaris Applications".
2. To compile 64-bit SQL procedures on Solaris, take out the -xarch=v8plusa option and add the -xarch=v9 option to the above commands.
Windows NT and Windows 2000
Note: SQL procedures are not supported on Windows 98 or Windows 95.
To use Microsoft Visual C++ Versions 5.0 and 6.0:
db2set DB2_SQLROUTINE_COMPILE_COMMAND=cl -Od -W2 /TC -D_X86_=1 -I%DB2PATH%\include SQLROUTINE_FILENAME.c /link -dll -def:SQLROUTINE_FILENAME.def /out:SQLROUTINE_FILENAME.dll %DB2PATH%\lib\db2api.lib
This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set.
To use IBM VisualAge C++ for Windows Version 3.6:

   db2set DB2_SQLROUTINE_COMPILE_COMMAND="ilib /GI SQLROUTINE_FILENAME.def & icc -Ti -Ge- -Gm+ -W2 -I%DB2PATH%\include SQLROUTINE_FILENAME.c /B\"/ST:64000 /PM:VIO /DLL\" SQLROUTINE_FILENAME.exp %DB2PATH%\lib\db2api.lib"

To use IBM VisualAge C++ for Windows Version 4:

   db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld"

If you do not specify a configuration file after the vacbld command, DB2 creates the following default configuration file the first time an SQL procedure is created:

   %DB2PATH%\function\routine\sqlproc.icc

If you want to use your own configuration file, specify it when setting the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND:

   db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld %DB2PATH%\function\sqlproc.icc"

To return to the default compiler options, set the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND to null with the following command:

   db2set DB2_SQLROUTINE_COMPILE_COMMAND=

6.4.4 Retaining Intermediate Files

You must manually delete any intermediate files that are left behind when an SQL procedure is not created successfully. These files are in the following directories:

UNIX
   $DB2PATH/function/routine/sqlproc/$DATABASE/$SCHEMA/tmp, where $DB2PATH represents the directory in which the instance was created, $DATABASE represents the database name, and $SCHEMA represents the schema name with which the SQL procedures were created.

OS/2 and Windows
   %DB2PATH%\function\routine\sqlproc\%DATABASE%\%SCHEMA%\tmp, where %DB2PATH% represents the directory in which the instance was created, %DATABASE% represents the database name, and %SCHEMA% represents the schema name with which the SQL procedures were created.

6.4.5 Backup and Restore

When an SQL procedure is created, the generated shared library or DLL is also kept in the catalog table if it is smaller than 2 MB. When the database is backed up and restored, any SQL procedure with a generated shared library or DLL smaller than 2 MB is backed up and restored with the version kept in the catalog table. If you have SQL procedures with a generated shared library or DLL larger than 2 MB, ensure that you also perform a filesystem backup and restore along with the database backup and restore. If you do not, you will have to recreate the shared library or DLL of the SQL procedure manually, using the source in the SYSCAT.PROCEDURES catalog table.

Note: At database recovery time, all the SQL procedure executables on the filesystem belonging to the database being recovered will be removed. If the index creation configuration parameter (indexrec) is set to RESTART, all SQL procedure executables will be extracted from the catalog table and put back on the filesystem at the next connect time. Otherwise, the SQL executables will be extracted on the first execution of the SQL procedures. The executables will be put back in the following directory:

UNIX
   $DB2PATH/function/routine/sqlproc/$DATABASE, where $DB2PATH represents the directory in which the instance was created and $DATABASE represents the database name with which the SQL procedures were created.

OS/2 and Windows
   %DB2PATH%\function\routine\sqlproc\%DATABASE%, where %DB2PATH% represents the directory in which the instance was created and %DATABASE% represents the database name with which the SQL procedures were created.
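For example, on a UNIX system the filesystem copy of the SQL procedure libraries can be archived at the same time as the database backup. The following is a minimal sketch only: the database name SAMPLE, the target directory /backup, and the use of tar are illustrative assumptions, not required commands:

   db2 backup db sample to /backup
   tar -cvf /backup/sqlproc_sample.tar $DB2PATH/function/routine/sqlproc/SAMPLE

At restore time, the matching steps would be a db2 restore of the database followed by extracting the archive back into the same directory.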
------------------------------------------------------------------------

6.5 Creating SQL Procedures

Set the database manager configuration parameter KEEPDARI to 'NO' when developing SQL procedures. If an SQL procedure is kept loaded once it is executed, you may have problems dropping and recreating the stored procedure with the same name, because the library cannot be refreshed and the executables cannot be dropped from the filesystem. You will also have problems when you try to roll back the changes or drop the database, because the executables cannot be deleted. See "Updating the Database Manager Configuration File" in "Chapter 2. Setup" of the Application Building Guide for more information on setting the KEEPDARI parameter.

Note: SQL procedures do not support the following data types for parameters:

* LONG VARGRAPHIC
* Binary Large Object (BLOB)
* Character Large Object (CLOB)
* Double-byte Character Large Object (DBCLOB)

------------------------------------------------------------------------

6.6 Calling Stored Procedures

The first paragraph in "Using the CALL Command" should read:

To use the CALL command, you must enter the stored procedure name plus any IN or INOUT parameters, as well as '?' as a placeholder for each OUT parameter. For details on the syntax of the CALL command, see 9.11, "CALL".

------------------------------------------------------------------------

6.7 Chapter 7. Building HP-UX Applications

6.7.1 HP-UX C

In "Multi-threaded Applications", the bldmt script file has been revised with different compile options. The new version is in the sqllib/samples/c directory.

6.7.2 HP-UX C++

In the build scripts, the C++ compiler variable CC has been replaced by aCC, for the HP aC++ compiler. The revised build scripts are in the sqllib/samples/cpp directory.

The "+u1" compile option should be used to build stored procedures and UDFs with the aCC compiler. This option allows unaligned data access. The sample build scripts shipped with DB2 for HP-UX, bldsrv and bldudf, and the sample makefile have not been updated with this option. They should be revised to add this option before use. Here is the new compile step for the bldsrv and bldudf scripts:

   aCC +DAportable +u1 -Aa +z -ext -I$DB2PATH/include -c $1.C

In "Multi-threaded Applications", the bldmt script file has been revised with different compile options. The new version is in the sqllib/samples/cpp directory.

------------------------------------------------------------------------

6.8 Chapter 10. Building PTX Applications

6.8.1 ptx/C++

Libraries need to be linked with the -shared option to build stored procedures and user-defined functions. In the sqllib/samples directory, the makefile and the build scripts bldsrv and bldudf have been updated to include this option, as in the following link step from bldsrv:

   c++ -shared -G -o $1 $1.o -L$DB2PATH/lib -ldb2

------------------------------------------------------------------------

6.9 Chapter 12. Building Solaris Applications

6.9.1 SPARCompiler C++

Problems with executing C/C++ applications and running SQL procedures on Solaris:

When using the Sun WorkShop C/C++ compiler, if you experience problems with your executable where you receive errors like the following:

   1. syntax error at line 1: `(' unexpected
   2. ksh: application_name: cannot execute

(where application_name is the name of the compiled executable) you may be encountering a known problem in which the compiler does not produce valid executables when linking with libdb2.so.
One suggestion to fix this is to add the -xarch=v8plusa compiler option to your compile and link commands. For example, when compiling the sample application dynamic.sqc:

   embprep dynamic sample
   embprep utilemb sample
   cc -c utilemb.c -xarch=v8plusa -I/export/home/db2inst1/sqllib/include
   cc -o dynamic dynamic.c utilemb.o -xarch=v8plusa -I/export/home/db2inst1/sqllib/include \
   -L/export/home/db2inst1/sqllib/lib -R/export/home/db2inst1/sqllib/lib -ldb2

Notes:

1. If you are using SQL procedures on Solaris, and you are using your own compile string via the DB2_SQLROUTINE_COMPILE_COMMAND profile variable, ensure that you include the compiler option given above. The default compiler command includes this option:

   db2set DB2_SQLROUTINE_COMPILE_COMMAND="cc -# -Kpic -xarch=v8plusa -I$HOME/sqllib/include \
   SQLROUTINE_FILENAME.c -G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -R$HOME/sqllib/lib -ldb2"

2. To compile 64-bit SQL procedures on Solaris, replace the -xarch=v8plusa option with the -xarch=v9 option in the above commands.

------------------------------------------------------------------------

6.10 VisualAge C++ Version 4.0 on OS/2 and Windows

Note: This updates the section "VisualAge C++ Version 4.0" in "Chapter 6. Building AIX Applications". That section contains information common to AIX, OS/2, and Windows 32-bit operating systems.

For OS/2 and Windows, use the set command instead of the export command. For example: set CLI=tbinfo.

In "DB2 CLI Applications", subsection "Building and Running Embedded SQL Applications", for OS/2 and Windows the cliapi.icc file must be used instead of the cli.icc file, because embedded SQL applications need the db2api.lib library linked in by cliapi.icc.

------------------------------------------------------------------------

Application Development Guide

------------------------------------------------------------------------

7.1 Writing OLE Automation Stored Procedures

The last sentence in the following paragraph is missing from the second paragraph under the section "Writing OLE automation Stored Procedures":

After you code an OLE automation object, you must register the methods of the object as stored procedures using the CREATE PROCEDURE statement. To register an OLE automation stored procedure, issue a CREATE PROCEDURE statement with the LANGUAGE OLE clause. The external name consists of the OLE progID identifying the OLE automation object and the method name, separated by ! (exclamation mark). The OLE automation object needs to be implemented as an in-process server (.DLL).

------------------------------------------------------------------------

7.2 Chapter 7. Stored Procedures

7.2.1 DECIMAL Type Fails in Linux Java Routines

This problem occurs because the IBM Developer Kit for Java does not create links for its libraries in the /usr/lib directory. The security model for DB2 routines does not allow them to access libraries outside of the standard system libraries. To enable DECIMAL support in Java routines on Linux, perform the following steps:

1. Create symbolic links from the IBM Developer Kit for Java libraries to /usr/lib/ by issuing the following command with root authority:

   For IBM Developer Kit for Java 1.1.8:

      ln -sf /usr/jdk118/lib/linux/native_threads/* /usr/lib/

   For IBM Developer Kit for Java 1.3:

      ln -sf /opt/IBMJava2-13/jre/bin/*.so /usr/lib/

2. Issue the ldconfig command to update the list of system-wide libraries.

------------------------------------------------------------------------
7.3 Chapter 12. Working with Complex Objects: User-Defined Structured Types

7.3.1 Inserting Structured Type Attributes Into Columns

The following rule applies to embedded static SQL statements: to insert an attribute of a user-defined structured type into a column that is of the same type as the attribute, enclose the host variable that represents the instance of the type in parentheses, and append the double-dot operator and attribute name to the closing parenthesis. For example, consider the following situation:

- PERSON_T is a structured type that includes the attribute NAME of type VARCHAR(30).
- T1 is a table that includes a column C1 of type VARCHAR(30).
- personhv is the host variable declared for type PERSON_T in the programming language.

The proper syntax for inserting the NAME attribute into column C1 is:

   EXEC SQL INSERT INTO T1 (C1) VALUES ((:personhv)..NAME)

------------------------------------------------------------------------

7.4 Chapter 20. Programming in C and C++

The following table supplements the information included in chapter 7, "Stored Procedures", chapter 15, "Writing User-Defined Functions and Methods", and chapter 20, "Programming in C and C++". The table lists the supported mappings between SQL data types and C data types for stored procedures, UDFs, and methods.

7.4.1 C/C++ Types for Stored Procedures, Functions, and Methods

Table 4. SQL Data Types Mapped to C/C++ Declarations

SMALLINT (500 or 501)
   C/C++ data type: sqlint16
   Description: 16-bit signed integer

INTEGER (496 or 497)
   C/C++ data type: sqlint32
   Description: 32-bit signed integer

BIGINT (492 or 493)
   C/C++ data type: sqlint64
   Description: 64-bit signed integer

REAL (480 or 481)
   C/C++ data type: float
   Description: Single-precision floating point

DOUBLE (480 or 481)
   C/C++ data type: double
   Description: Double-precision floating point

DECIMAL(p,s) (484 or 485)
   C/C++ data type: Not supported.
   Description: To pass a decimal value, define the parameter to be of a data type castable from DECIMAL (for example, CHAR or DOUBLE) and explicitly cast the argument to this type.

CHAR(n) (452 or 453), 1<=n<=254
   C/C++ data type: char[n+1], where n is large enough to hold the data
   Description: Fixed-length, null-terminated character string

CHAR(n) FOR BIT DATA (452 or 453), 1<=n<=254
   C/C++ data type: char[n+1], where n is large enough to hold the data
   Description: Fixed-length character string

VARCHAR(n) (448 or 449) (460 or 461), 1<=n<=32 672
   C/C++ data type: char[n+1], where n is large enough to hold the data
   Description: Null-terminated varying-length string

VARCHAR(n) FOR BIT DATA (448 or 449), 1<=n<=32 672
   C/C++ data type: struct { sqluint16 length; char[n] }
   Description: Not null-terminated varying-length character string

LONG VARCHAR (456 or 457), 32 673<=n<=32 700
   C/C++ data type: struct { sqluint16 length; char[n] }
   Description: Not null-terminated varying-length character string

CLOB(n) (408 or 409), 1<=n<=2 147 483 647
   C/C++ data type: struct { sqluint32 length; char data[n]; }
   Description: Non null-terminated varying-length character string with 4-byte string length indicator

BLOB(n) (404 or 405), 1<=n<=2 147 483 647
   C/C++ data type: struct { sqluint32 length; char data[n]; }
   Description: Non null-terminated varying-length binary string with 4-byte string length indicator

DATE (384 or 385)
   C/C++ data type: char[11]
   Description: Null-terminated character form

TIME (388 or 389)
   C/C++ data type: char[9]
   Description: Null-terminated character form

TIMESTAMP (392 or 393)
   C/C++ data type: char[27]
   Description: Null-terminated character form

Note: The following data types are only available in the DBCS or EUC environment when precompiled with the WCHARTYPE NOCONVERT option.

GRAPHIC(n) (468 or 469), 1<=n<=127
   C/C++ data type: sqldbchar[n+1], where n is large enough to hold the data
   Description: Fixed-length, null-terminated double-byte character string

VARGRAPHIC(n) (400 or 401), 1<=n<=16 336
   C/C++ data type: sqldbchar[n+1], where n is large enough to hold the data
   Description: Not null-terminated, variable-length double-byte character string

LONG VARGRAPHIC (472 or 473), 16 337<=n<=16 350
   C/C++ data type: struct { sqluint16 length; sqldbchar[n] }
   Description: Not null-terminated, variable-length double-byte character string

DBCLOB(n) (412 or 413), 1<=n<=1 073 741 823
   C/C++ data type: struct { sqluint32 length; sqldbchar data[n]; }
   Description: Non null-terminated varying-length character string with 4-byte string length indicator
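As an illustration of the preceding note, an embedded SQL application that uses GRAPHIC or VARGRAPHIC host variables in a DBCS environment could be precompiled with the WCHARTYPE NOCONVERT option from the command line. This is a minimal sketch only; the database name SAMPLE and the source file name myapp.sqc are illustrative assumptions:

   db2 connect to sample
   db2 prep myapp.sqc bindfile wchartype noconvert
   db2 terminate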
------------------------------------------------------------------------

7.5 Appendix B. Sample Programs

The following should be added to the "Object Linking and Embedding Samples" section:

salarycltvc
   A Visual C++ DB2 CLI sample that calls the Visual Basic stored procedure, salarysrv.

SALSVADO
   A sample OLE automation stored procedure (SALSVADO) and client (SALCLADO), implemented in 32-bit Visual Basic and ADO, that calculates the median salary in table staff2.

------------------------------------------------------------------------

7.6 Activating the IBM DB2 Universal Database Project and Tool Add-ins for Microsoft Visual C++

Before running the db2vccmd command (step 1), ensure that you have started and stopped Visual C++ at least once with your current login ID. The first time you run Visual C++, a profile is created for your user ID, and that profile is what the db2vccmd command updates. If you have not started Visual C++ at least once and you try to run db2vccmd, you may see errors like the following:

   "Registering DB2 Project add-in ...Failed! (rc = 2)"

------------------------------------------------------------------------

7.7 IBM DB2 OLE DB Provider

Installing IBM DB2 Version 7.1 FixPak 1 corrects the condition that caused DB2 to issue the following error:

   Test connection failed because of an error in initializing provider.
   The IBM OLE DB Provider is not available at this time. Please refer
   to the readme file for more information.

------------------------------------------------------------------------

7.8 Using Cursors in Recursive Stored Procedures

To avoid errors when using SQL procedures or stored procedures written in embedded SQL, close all open cursors before issuing a recursive CALL statement. For example, assume the stored procedure MYPROC contains the following code fragment:

   OPEN c1;
   CALL MYPROC();
   CLOSE c1;

DB2 returns an error when MYPROC is called, because cursor c1 is still open when MYPROC issues a recursive CALL statement. The specific error returned by DB2 depends on the actions MYPROC performs on the cursor. To successfully call MYPROC, rewrite it to close any open cursors before the nested CALL statement, as shown in the following example:

   OPEN c1;
   CLOSE c1;
   CALL MYPROC();

------------------------------------------------------------------------

7.9 Language Considerations/Programming in Java/Creating Java Applications and Applets/Applet Support in Java

It is essential that the db2java.zip file used by the Java applet be at the same FixPak level as the JDBC applet server. Under normal circumstances, db2java.zip is loaded from the Web server where the JDBC applet server is running, as shown in Figure 22 of the book. This ensures a match.
If, however, your configuration has the Java applet loading db2java.zip from a different location, a mismatch can occur. Prior to FixPak 2, this could lead to unexpected failures. As of FixPak 2, matching FixPak levels between the two files is strictly enforced at connection time. If a mismatch is detected, the connection is rejected, and the client receives one of the following exceptions:

* If db2java.zip is at FixPak 2 or later:

   COM.ibm.db2.jdbc.DB2Exception: [IBM][JDBC Driver]
   CLI0621E Unsupported JDBC server configuration.

* If db2java.zip is prior to FixPak 2:

   COM.ibm.db2.jdbc.DB2Exception: [IBM][JDBC Driver]
   CLI0601E Invalid statement handle or statement is closed. SQLSTATE=S1000

If a mismatch occurs, the JDBC applet server logs one of the following messages in the jdbcerr.log file:

* If the JDBC applet server is at FixPak 2 or later:

   jdbcFSQLConnect: JDBC Applet Server and client (db2java.zip)
   versions do not match. Unable to proceed with connection., einfo= -111

* If the JDBC applet server is prior to FixPak 2:

   jdbcServiceConnection(): Invalid Request Received., einfo= 0

------------------------------------------------------------------------

CLI Guide and Reference

------------------------------------------------------------------------

8.1 CLI Unicode Functions and SQL_C_WCHAR Support on AIX Only

CLI Unicode functions accept pointers to character strings or to SQLPOINTER in their arguments. The argument strings are in UCS2 format. These functions are implemented as functions with a W suffix. In Unicode functions that return or take strings, length arguments are passed as a count of characters. For functions that return length information for server data, the display size and precision are described in a number of characters. When a length can refer to string or to non-string data, the length is described in octet lengths. The function prototypes for the Unicode functions can be found in sqlcli1.h.

The following is a list of the Unicode functions:

   SQLColAttributeW        SQLColAttributesW       SQLColumnPrivilegesW
   SQLColumnsW             SQLConnectW             SQLDataSourcesW
   SQLDescribeColW         SQLDriverConnectW       SQLBrowseConnectW
   SQLErrorW               SQLExecDirectW          SQLForeignKeysW
   SQLGetCursorNameW       SQLGetInfoW             SQLNativeSqlW
   SQLPrepareW             SQLPrimaryKeysW         SQLProcedureColumnsW
   SQLProceduresW          SQLSetCursorNameW       SQLSpecialColumnsW
   SQLStatisticsW          SQLTablePrivilegesW     SQLTablesW
   SQLGetDiagFieldW        SQLGetDiagRecW          SQLSetConnectAttrW
   SQLSetStmtAttrW         SQLGetDescFieldW        SQLSetDescFieldW

An application can be written so that it can be compiled as either a Unicode application or an ANSI application. The application is compiled as a Unicode application by turning on the UNICODE define. In this case, character data types can be declared as SQL_C_TCHAR, a macro found in sqlcli1.h. The macro resolves to SQL_C_WCHAR if the application is compiled as a Unicode application, or to SQL_C_CHAR if it is compiled as an ANSI application. Function calls without the W suffix will be mapped to the corresponding function with the W suffix if the application is compiled with the UNICODE define turned on. Unicode and ANSI function calls cannot be mixed.

All SQL data types that can be converted to SQL_C_CHAR can also be converted to SQL_C_WCHAR; the converse is also true.
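For example, on AIX an application coded with SQL_C_TCHAR might be built once as an ANSI application and once as a Unicode application simply by toggling the UNICODE define on the compile command. This is a minimal sketch; the source file name unicli.c is an illustrative assumption, and the xlc options follow the style of the compile commands shown earlier in these notes rather than any mandated invocation:

   xlc -DUNICODE -I$HOME/sqllib/include -c unicli.c
   xlc -o unicli unicli.o -L$HOME/sqllib/lib -ldb2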
The following restrictions apply:

* From an ODBC perspective, CLI is not a Unicode driver. However, because SQLConnectW is not exported, it is mapped to SQLConnectWInt in sqlcli1.h.
* Currently, Unicode functions and SQL_C_WCHAR are only supported on AIX. To use the CLI Unicode functions and SQL_C_WCHAR in applications on AIX, use sqlcli1.h and compile with the UNICODE define. For Windows NT applications requiring Unicode functions and SQL_C_WCHAR, use the ODBC 3.5 driver. The ODBC 3.5 driver manager will treat the DB2 UDB CLI driver as an ANSI driver: it will convert a Unicode function call (with the W suffix) to an ANSI function call and pass it to the CLI driver, and it will also map SQL_C_WCHAR to SQL_C_CHAR.
* Currently, SQL_C_WCHAR support is provided by converting data between UCS2 and the application code page.
* There is no SQL_WCHAR, SQL_WVARCHAR, or SQL_WLONGVARCHAR support.
* WCHARTYPE NOCONVERT is not supported for the Unicode functions or for SQL_C_CHAR.

------------------------------------------------------------------------

8.2 Binding Database Utilities Using the Run-Time Client

The Run-Time Client cannot be used to bind the database utilities (import, export, reorg, the command line processor) and the DB2 CLI bind files to a database; you must use the DB2 Administration Client or the DB2 Application Development Client instead. These database utilities and DB2 CLI bind files must be bound to each database before they can be used with that database.

In a network environment, if you are using multiple clients that run on different operating systems, or are at different versions or service levels of DB2, you must bind the utilities once for each operating system and DB2-version combination.

------------------------------------------------------------------------

8.3 Addition to the "Using Compound SQL" Section

The following note is missing from the book:

Any SQL statement that can be prepared dynamically, other than a query, can be executed as a statement inside a compound statement.

Note: Inside Atomic Compound SQL, the SAVEPOINT, RELEASE SAVEPOINT, and ROLLBACK TO SAVEPOINT SQL statements are also disallowed. Conversely, Atomic Compound SQL is disallowed within a savepoint.

------------------------------------------------------------------------

8.4 Writing a Stored Procedure in CLI

Following is an undocumented limitation on CLI stored procedures: if you are making calls to multiple CLI stored procedures, the application must close the open cursors from one stored procedure before calling the next stored procedure. More specifically, the first set of open cursors must be closed before the next stored procedure tries to open a cursor.

------------------------------------------------------------------------

8.5 CLI Stored Procedures and Autobinding

The CLI/ODBC driver will normally autobind the CLI packages the first time a CLI/ODBC application executes SQL against the database, provided the user has the appropriate privilege or authorization. Autobinding of the CLI packages cannot be performed from within a stored procedure, and therefore will not take place if the very first thing an application does is call a CLI stored procedure. Before running a CLI application that calls a CLI stored procedure against a new DB2 database, you must bind the CLI packages once with this command:

UNIX
   db2 bind $HOME/sqllib/bnd/@db2cli.lst blocking all

Windows and OS/2
   db2 bind "%DB2PATH%\bnd\@db2cli.lst" blocking all

The recommended approach is to always bind these packages at the time the database is created, to avoid an autobind at run time.
Autobinding can fail if the user does not have the necessary privilege, or if another application tries to autobind at the same time.

------------------------------------------------------------------------

8.6 Addition to Appendix D "Extended Scalar Functions": DAYOFWEEK_ISO() and WEEK_ISO() Functions

The following functions are missing from the Date and Time Functions section of Appendix D "Extended Scalar Functions":

DAYOFWEEK_ISO( date_exp )
   Returns the day of the week in date_exp as an integer value in the range 1-7, where 1 represents Monday. Note the difference between this function and the DAYOFWEEK() function, where 1 represents Sunday.

WEEK_ISO( date_exp )
   Returns the week of the year in date_exp as an integer value in the range 1-53. Week 1 is defined as the first week of the year to contain a Thursday. Therefore, Week 1 is equivalent to the first week that contains January 4, since Monday is considered to be the first day of the week. Note that WEEK_ISO() differs from the current definition of WEEK(), which returns a value up to 54. For the WEEK() function, Week 1 is the week containing the first Saturday. This is equivalent to the week containing January 1, even if that week contains only one day.

DAYOFWEEK_ISO() and WEEK_ISO() are automatically available in a database created in Version 7.1. If a database was created prior to Version 7.1, these functions may not be available. To make the DAYOFWEEK_ISO() and WEEK_ISO() functions available in such a database, use the db2updv7 system command. For more information about db2updv7, see the "Command Reference" section in these Release Notes.

------------------------------------------------------------------------

8.7 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility

The sections within this appendix have been updated. See the "Traces" chapter in the Troubleshooting Guide for the most up-to-date information on this trace facility.

------------------------------------------------------------------------

8.8 Using Static SQL in CLI Applications

For more information on using static SQL in CLI applications, see the Web page at:

   http://www.ibm.com/software/data/db2/udb/staticcli/

------------------------------------------------------------------------

8.9 Limitations of JDBC/ODBC/CLI Static Profiling

JDBC/ODBC/CLI static profiling currently targets straightforward applications. It is not meant for complex applications with many functional components and complex program logic during execution.

An SQL statement must have been successfully executed for it to be captured in a profiling session. In a statement matching session, unmatched dynamic statements will continue to execute as dynamic JDBC/ODBC/CLI calls.

An SQL statement must be identical, character by character, to the one that was captured and bound to be a valid candidate for statement matching. Spaces are significant: for example, "COL = 1" is considered different from "COL=1". Use parameter markers in place of literals to improve match hits.

When executing an application with pre-bound static SQL statements, dynamic registers that control the dynamic statement behavior have no effect on the statements that are converted to static.

If an application issues DDL statements for objects that are referenced in subsequent DML statements, you will find all of these statements in the capture file. The JDBC/ODBC/CLI Static Profiling Bind Tool will attempt to bind them. The bind attempt will succeed with DBMSs that support the VALIDATE(RUN) bind option, but it will fail with ones that do not.
In this case, the application should not use static profiling.

The database administrator may edit the capture file to add, change, or remove SQL statements, based on application-specific requirements.

------------------------------------------------------------------------

8.10 Parameter Correction for SQLBindFileToParam() CLI Function

The last parameter - IndicatorValue - in the SQLBindFileToParam() CLI function is currently documented as "output (deferred)". It should be "input (deferred)".

------------------------------------------------------------------------

8.11 SQLNextResult - Associate Next Result Set with Another Statement Handle

The following text should be added to Chapter 5, "DB2 CLI Functions":

8.11.1 Purpose

Specification: DB2 CLI 7.x

8.11.2 Syntax

   SQLRETURN SQLNextResult (SQLHSTMT StatementHandle1,
                            SQLHSTMT StatementHandle2);

8.11.3 Function Arguments

Table 5. SQLNextResult Arguments

   Data Type   Argument           Use     Description
   SQLHSTMT    StatementHandle1   input   Statement handle.
   SQLHSTMT    StatementHandle2   input   Statement handle.

8.11.4 Usage

A stored procedure returns multiple result sets by leaving one or more cursors open after exiting. The first result set is always accessed by using the statement handle that called the stored procedure. If multiple result sets are returned, either SQLMoreResults() or SQLNextResult() can be used to describe and fetch the result set. SQLMoreResults() is used to close the cursor for the first result set and allow the next result set to be processed, whereas SQLNextResult() moves the next result set to StatementHandle2, without closing the cursor on StatementHandle1. Both functions return SQL_NO_DATA_FOUND if there are no result sets to be fetched.

Using SQLNextResult() allows result sets to be processed in any order once they have been transferred to other statement handles. Mixed calls to SQLMoreResults() and SQLNextResult() are allowed until there are no more cursors (open result sets) on StatementHandle1.

When SQLNextResult() returns SQL_SUCCESS, the next result set is no longer associated with StatementHandle1. Instead, the next result set is associated with StatementHandle2, as if a call to SQLExecDirect() had just successfully executed a query on StatementHandle2. The cursor, therefore, can be described using SQLNumResultCols(), SQLDescribeCol(), or SQLColAttribute().

After SQLNextResult() has been called, the result set now associated with StatementHandle2 is removed from the chain of remaining result sets and cannot be used again in either SQLNextResult() or SQLMoreResults(). This means that for 'n' result sets, SQLNextResult() can be called successfully at most 'n-1' times.

If SQLFreeStmt() is called with the SQL_CLOSE option, or SQLFreeHandle() is called with HandleType set to SQL_HANDLE_STMT, all pending result sets on this statement handle are discarded.

SQLNextResult() returns SQL_ERROR if StatementHandle2 has an open cursor, or if StatementHandle1 and StatementHandle2 are not on the same connection. If any errors or warnings are returned, SQLError() must always be called on StatementHandle1.

Note: SQLMoreResults() also works with a parameterized query with an array of input parameter values specified with SQLParamOptions() and SQLBindParameter(). SQLNextResult(), however, does not support this.

8.11.5 Return Codes

* SQL_SUCCESS
* SQL_SUCCESS_WITH_INFO
* SQL_STILL_EXECUTING
* SQL_ERROR
* SQL_INVALID_HANDLE
* SQL_NO_DATA_FOUND

8.11.6 Diagnostics
Table 6. SQLNextResult SQLSTATEs

40003, 08S01 - Communication link failure.
   The communication link between the application and the data source failed before the function completed.

58004 - Unexpected system failure.
   Unrecoverable system error.

HY001 - Memory allocation failure.
   DB2 CLI is unable to allocate the memory required to support execution or completion of the function.

HY010 - Function sequence error.
   The function was called while in a data-at-execute (SQLParamData(), SQLPutData()) operation; StatementHandle2 has an open cursor associated with it; or the function was called while within a BEGIN COMPOUND and END COMPOUND SQL operation.

HY013 - Unexpected memory handling error.
   DB2 CLI was unable to access the memory required to support execution or completion of the function.

HYT00 - Time-out expired.
   The time-out period expired before the data source returned the result set. Time-outs are only supported on non-multitasking systems such as Windows 3.1 and Macintosh System 7. The time-out period can be set using the SQL_ATTR_QUERY_TIMEOUT attribute for SQLSetConnectAttr().

8.11.7 Restrictions

Only SQLMoreResults() can be used for parameterized queries.

8.11.8 References

* "SQLMoreResults - Determine If There Are More Result Sets" on page 535
* "Returning Result Sets from Stored Procedures" on page 120

------------------------------------------------------------------------

8.12 ADT Transforms

The following supersedes the existing information in the book.

* There is a new descriptor type (smallint) SQL_DESC_USER_DEFINED_TYPE_CODE, with values:

      SQL_TYPE_BASE       0 (this is not a USER_DEFINED_TYPE)
      SQL_TYPE_DISTINCT   1
      SQL_TYPE_STRUCTURED 2

  This value can be queried with either SQLColAttribute or SQLGetDescField (IRD only).

  The following attributes are added to obtain the actual type names:

      SQL_DESC_REFERENCE_TYPE
      SQL_DESC_STRUCTURED_TYPE
      SQL_DESC_USER_TYPE

  The above values can be queried using SQLColAttribute or SQLGetDescField (IRD only).

* Add SQL_DESC_BASE_TYPE in case the application needs it. For example, the application may not recognize the structured type, but intends to fetch or insert it, and let other code deal with the details.

* Add a new connection attribute called SQL_ATTR_TRANSFORM_GROUP to allow an application to set the transform group (rather than use the SQL "SET CURRENT DEFAULT TRANSFORM GROUP" statement).

* Add a new statement/connection attribute called SQL_ATTR_RETURN_USER_DEFINED_TYPES that can be set or queried using SQLSetConnectAttr, which causes CLI to return the value SQL_DESC_USER_DEFINED_TYPE_CODE as a valid SQL type. This attribute is required before using any of the transforms.

  o By default, the attribute is off, and causes the base type information to be returned as the SQL type.
  o When enabled, SQL_DESC_USER_DEFINED_TYPE_CODE will be returned as the SQL type. The application is expected to check for SQL_DESC_USER_DEFINED_TYPE_CODE, and then to retrieve the appropriate type name. This will be available to SQLColAttribute, SQLDescribeCol, and SQLGetDescField.

* SQLBindParameter does not give an error when you bind SQL_C_DEFAULT, because there is no code to allow SQLBindParameter to specify the type SQL_USER_DEFINED_TYPE. The standard default C types will be used, based on the base SQL type flowed to the server. For example:

      sqlrc = SQLBindParameter (hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR,
                                SQL_VARCHAR, 30, 0, &c2, 30, NULL);

* SQLDescribeParam and SQLGetDescField for parameter markers do not yet return structured type information. (This support will be added in the first Version 7.1 FixPak.)
------------------------------------------------------------------------

Command Reference

------------------------------------------------------------------------

9.1 db2batch - Benchmark Tool

The last sentence in the description of the PERF_DETAIL parameter should read:

A value greater than 1 is only valid on DB2 Version 2 and DB2 UDB servers, and is not currently supported on host machines.

------------------------------------------------------------------------

9.2 db2cap (new command)

db2cap - CLI/ODBC Static Package Binding Tool

Binds a capture file to generate one or more static packages. A capture file is generated during a static profiling session of a CLI/ODBC/JDBC application, and contains SQL statements that were captured during the application run. This utility processes the capture file so that it can be used by the CLI/ODBC/JDBC driver to execute static SQL for the application. For more information on how to use static SQL in CLI/ODBC/JDBC applications, see the Static Profiling feature in the CLI Guide and Reference.

Authorization

* Access privileges to any database objects referenced by SQL statements recorded in the capture file.
* Sufficient authority to set bind options such as OWNER and QUALIFIER if they are different from the connect ID used to invoke the db2cap command.
* BINDADD authority if the package is being bound for the first time; otherwise, BIND authority is required.

Command Syntax

   >>-db2cap----+----+--bind--capture-file----d--database_alias---->
                +--h-+
                '--?-'

   >-----+--------------------------------+----------------------><
         '--u--userid--+---------------+--'
                       '--p--password--'

Command Parameters

-h/-?
   Displays help text for the command syntax.

bind capture-file
   Binds the statements from the capture file and creates one or more packages.

-d database_alias
   Specifies the database alias for the database that will contain one or more packages.

-u userid
   Specifies the user ID to be used to connect to the data source.
   Note: If a user ID is not specified, a trusted authorization ID is obtained from the system.

-p password
   Specifies the password to be used to connect to the data source.

Usage Notes

The command must be entered in lowercase on UNIX platforms, but can be entered in either lowercase or uppercase on Windows operating systems and OS/2.

This utility supports a number of user-specified bind options that can be found in the capture file. For performance and security reasons, the file can be examined and edited with a text editor to change these options. The SQLERROR(CONTINUE) and VALIDATE(RUN) bind options can be used to create a package.

When using this utility to create a package, static profiling must be disabled.

The number of packages created depends on the isolation levels used for the SQL statements that are recorded in the capture file. The package name consists of up to the first seven characters of the package keyword from the capture file, and one of the following single-character suffixes:

* 0 - Uncommitted Read (UR)
* 1 - Cursor Stability (CS)
* 2 - Read Stability (RS)
* 3 - Repeatable Read (RR)
* 4 - No Commit (NC)

To obtain specific information about packages, the user can:

* Query the appropriate SYSIBM catalog tables using the COLLECTION and PACKAGE keywords found in the capture file.
* View the capture file.
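For example, to bind the statements captured from an application and create packages in a database, an invocation might look like the following sketch (the capture file name pgm1.cpt, the database alias SAMPLE, and the user ID and password are illustrative assumptions):

   db2cap bind pgm1.cpt -d sample -u dbuser -p dbpass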
------------------------------------------------------------------------

9.3 db2gncol (new command)

db2gncol - Update Generated Column Values

Updates generated columns in tables that are in check pending mode and have limited log space. This tool is used to prepare for a SET INTEGRITY statement on a table that has columns which are generated by expressions.

Authorization

One of the following:

* sysadm
* dbadm

Command Syntax

   >>-db2gncol----d--database----s--schema_name----t--table_name--->

   >-----c--commit_count----+---------------------------+---------->
                            '--u--userid---p--password--'

   >-----+-----+--------------------------------------------------><
         '--h--'

Command Parameters

-d database
   Specifies an alias name for the database in which the table is located.

-s schema_name
   Specifies the schema name for the table. The schema name is case sensitive.

-t table_name
   Specifies the table for which new column values generated by expressions are to be computed. The table name is case sensitive.

-c commit_count
   Specifies the number of rows updated between commits. This parameter influences the size of the log space required to generate the column values.

-u userid
   Specifies a user ID with system administrator or database administrator privileges. If this option is omitted, the current user is assumed.

-p password
   Specifies the password for the specified user ID.

-h
   Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.

Usage Notes

Using this tool instead of the FORCE GENERATED option on the SET INTEGRITY statement may be necessary if a table is large and the following conditions exist:

* All column values must be regenerated after altering the generation expression of a generated column.
* An external UDF used in a generated column was changed, causing many column values to change.
* A generated column was added to the table.
* A large load or load append was performed that did not provide values for the generated columns.
* The log space is too small due to long-running concurrent transactions or the size of the table.

This tool will regenerate all column values that were created based on expressions. While the table is being updated, intermittent commits are performed to avoid using up all of the log space. Once db2gncol has been run, the table can be taken out of check pending mode using the SET INTEGRITY statement.

------------------------------------------------------------------------

9.4 db2inidb - Initialize a Mirrored Database

In a split mirror environment, the db2inidb command is used to initialize a mirrored database for different purposes.

Authorization

Must be one of the following:

* sysadm
* sysctrl
* sysmaint

Required Connection

None

Command Syntax

   >>-db2inidb----database_alias----AS----+-SNAPSHOT-+------------><
                                          +-STANDBY--+
                                          '-MIRROR---'

Command Parameters

database_alias
   Specifies the alias of the database to be initialized.

SNAPSHOT
   Use this option to initialize the mirrored database as a clone (or snapshot) of the primary database. This database is read-only.

STANDBY
   This option allows the mirrored database to continually roll forward through the logs. New logs from the primary database can be fetched and applied to this standby database. Therefore, it can be used as a take-over database in case the primary database goes down.

MIRROR
   This option allows the mirrored database to be used as a backup image which can be restored over the primary database.
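For example, to bring a split mirror copy of a database up as a read-only clone of the primary database, an invocation might look like the following sketch (the database alias snapdb is an illustrative assumption):

   db2inidb snapdb as snapshot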
------------------------------------------------------------------------

9.5 db2look - DB2 Statistics Extraction Tool

The syntax diagram should appear as follows:

   >>-db2look---d--DBname----+--------------+---+-----+---+-----+-->
                             '--u--Creator--'   '--s--'   '--g--'

   >-----+-----+---+-----+---+-----+---+-----+---+-----+----------->
         '--a--'   '--h--'   '--r--'   '--c--'   '--p--'

   >-----+------------+---+-------------------+-------------------->
         '--o--Fname--'   '--e--+----------+--'
                                '--t Tname-'

   >-----+-------------------+---+-----+---+-----+------------------>
         '--m--+----------+--'   '--l--'   '--x--'
               '--t Tname-'

   >-----+---------------------------+---+-----+------------------><
         '--i--userid---w--password--'   '--f--'

------------------------------------------------------------------------

9.6 db2updv7 - Update Database to Version 7 Current Fix Level

This command updates the system catalogs in a database to support the current FixPak in the following ways:

* Enables the use of the new built-in functions (ABS, ROUND, and MULTIPLY_ALT).
* Adds or applies corrections to the WEEK_ISO and DAYOFWEEK_ISO functions on Windows and OS/2 databases.
* Applies a correction to table packed descriptors for tables migrated from Version 2 to Version 6.

Authorization

sysadm

Required Connection

Database. This command automatically establishes a connection to the specified database.

Command Syntax

   >>-db2updv7----d---database_name----+-----+--------------------><
                                       '--h--'

Command Parameters

-d database_name
   The name of the database to be updated.

-h
   Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.

Example

After installing the FixPak, update the system catalog in the sample database by issuing the following command:

   db2updv7 -d sample

Usage Notes

This tool can only be used on a database running DB2 Version 7.1 with at least FixPak 2 installed. If the command is issued more than once, no errors are reported, and each of the catalog updates is applied only once. To enable the new built-in functions, all applications must disconnect from the database, and the database must be deactivated if it has been activated.

------------------------------------------------------------------------

9.7 Migrating from Version 6 of DB2 Query Patroller Using dqpmigrate

The dqpmigrate command must be used if the Version 7 Query Patroller Server was installed over the Version 6 Query Patroller Server. For FixPak 2 or later, you do not have to run dqpmigrate manually, as the installation of the FixPak runs this command for you. Without this command, the existing users defined in Version 6 have no EXECUTE privileges on several new stored procedures added in Version 7.

Note: dqpmigrate.bnd is found in the sqllib/bnd directory and dqpmigrate.exe is found in the sqllib/bin directory.

To use dqpmigrate manually to grant the EXECUTE privileges, perform the following steps after installing the FixPak:

1. Bind the /sqllib/bnd/dqpmigrate.bnd package file to the database where the Query Patroller server has been installed by entering the following command:

      db2 bind dqpmigrate.bnd

2. Execute dqpmigrate by entering the following:

      dqpmigrate dbalias userid passwd

------------------------------------------------------------------------

9.8 New Command Line Processor Option (-x, Suppress printing of column headings)

A new option, -x, tells the command line processor to return data without any headers, including column names. The default setting for this command option is OFF.
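For example, the following sketch retrieves data without column headings, which is convenient when the output is passed to a script (the query itself is illustrative only):

   db2 -x "select tabname from syscat.tables"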
------------------------------------------------------------------------

9.9 True Type Font Requirement for DB2 CLP

To display the national characters for single-byte (SBCS) languages correctly from the DB2 command line processor (CLP) window, change the font to True Type.

------------------------------------------------------------------------

9.10 BIND

The command syntax for DB2 should be modified to show the federated parameter as follows:

   FEDERATED--+--NO---+--
              '--YES--'

FEDERATED
   Specifies whether a static SQL statement in a package references a nickname or a federated view. If this option is not specified and a static SQL statement in the package references a nickname or a federated view, a warning is returned and the package is created.

NO
   A nickname or federated view is not referenced in the static SQL statements of the package. If a nickname or federated view is encountered in a static SQL statement during the prepare or bind of this package, an error is returned and the package is not created.

YES
   A nickname or federated view can be referenced in the static SQL statements of the package. If no nicknames or federated views are encountered in static SQL statements during the prepare or bind of the package, no errors or warnings are returned and the package is created.

Note: In Version 7 FixPak 2, an SQL1179W warning message is generated by the server when precompiling a source file or binding a bind file without specifying a value for the FEDERATED option. The same message is generated when the source file or bind file includes static SQL references to a nickname. There are two exceptions:

* For clients that are at an earlier FixPak than Version 7 FixPak 2, or for downlevel clients, the sqlaprep() API does not report this SQL1179W warning in the message file. The Command Line Processor PRECOMPILE command also does not output the warning in this case.
* For clients that are at an earlier FixPak than Version 7 FixPak 2, or for downlevel clients, the sqlabndx API does report this SQL1179W warning in the message file. However, the message file also incorrectly includes an SQL0092N message indicating that no package was created. This is not correct, as the package is indeed created. The Command Line Processor BIND command returns the same erroneous warning.

------------------------------------------------------------------------

9.11 CALL

The syntax for the CALL command should appear as follows:

               .-,---------------.
               V                 |
   >>-CALL--proc-name---(-----+-----------+--+---)-------------><
                              '-argument--'

The description of the argument parameter has been changed to:

Specifies one or more arguments for the stored procedure. All input and output arguments must be specified in the order defined by the procedure. Output arguments are specified using the "?" character. For example, a stored procedure foo with one integer input parameter and one output parameter would be invoked as "call foo (4, ?)".

Notes:

1. When invoking this utility from an operating system prompt, it may be necessary to delimit the command as follows:

      "call DEPT_MEDIAN (51)"

   A single quotation mark (') can also be used.

2. The stored procedure being called must be uniquely named in the database.

3. The stored procedure must be cataloged. If an uncataloged procedure is called, a DB21036 error message is returned.

4. A DB21101E message is returned if not enough parameters are specified on the command line, or if the command line parameters are not in the correct order (input, output) according to the stored procedure definition.
5. There is a maximum of 1023 characters for a result column.

6. LOBs and binary data (FOR BIT DATA, VARBINARY, LONGVARBINARY, GRAPHIC, VARGRAPHIC, or LONGVARGRAPHIC) are not supported.

7. CALL supports result sets.

8. If a stored procedure with an OUTPUT variable of an unsupported type is called, the CALL fails, and message DB21036 is returned.

9. The maximum length for an INPUT parameter to CALL is 1024.

------------------------------------------------------------------------

9.12 EXPORT

In the section "DB2 Data Links Manager Considerations", Step 3 of the procedure to ensure that a consistent copy of the table and the corresponding files referenced by DATALINK columns are copied for export should read:

3. Run the dlfm_export utility at each Data Links server. Input to the dlfm_export utility is the control file name, which is generated by the export utility. This produces a tar (or equivalent) archive of the files listed within the control file. For Distributed File Systems (DFS), the dlfm_export utility will get the DCE network root credentials before archiving the files listed in the control file. dlfm_export does not capture the ACL information of the files that are archived.

In the same section, the bullets following "Successful execution of EXPORT results in the generation of the following files" should be modified as follows:

The second sentence in the first bullet should read:

   A DATALINK column value in this file has the same format as that used by the import and load utilities.

The first sentence in the second bullet should read:

   Control files named server_name, which are generated for each Data Links server. (On the Windows NT operating system, a single control file, ctrlfile.lst, is used by all Data Links servers. For DFS, there is one control file for each cell.)

The following sentence should be added to the paragraph before Table 5:

   For more information about dlfm_export, refer to the "Data Movement Utilities Guide and Reference" under "Using Export to move DB2 Data Links Manager Data".

------------------------------------------------------------------------

9.13 GET DATABASE CONFIGURATION

The description of the DL_TIME_DROP configuration parameter should be changed to the following:

Applies to DB2 Data Links Manager only. This parameter specifies the interval of time (in days) that files will be retained on an archive server (such as a TSM server) after a DROP DATABASE command is issued.

------------------------------------------------------------------------

9.14 IMPORT

In the section "DB2 Data Links Manager Considerations", the following sentence should be added to Step 3:

   For Distributed File Systems (DFS), update the cell name information in the URLs (of the DATALINK columns) from the exported data for the SQL table, if required.

The following sentence should be added to Step 4:

   For DFS, define the cells at the target configuration in the DB2 Data Links Manager configuration file.

The paragraph following Step 4 should read:

   When the import utility runs against the target database, files referred to by DATALINK column data are linked on the appropriate Data Links servers.
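As an illustration of the corrected paragraph above, importing previously exported data into a table with a DATALINK column might look like the following sketch (the input file name and table name are illustrative assumptions):

   db2 import from dept_docs.ixf of ixf insert into dept_docs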
------------------------------------------------------------------------

9.15 LOAD

In the section "DB2 Data Links Manager Considerations", add the following sentence to Step 1 of the procedure that is to be performed before invoking the load utility, if data is being loaded into a table with a DATALINK column that is defined with FILE LINK CONTROL:

   For Distributed File Systems (DFS), ensure that the DB2 Data Links Managers within the target cell are registered.

The following sentence should be added to Step 5:

   For DFS, register the cells at the target configuration referred to by DATALINK data (to be loaded) in the DB2 Data Links Manager configuration file.

In the section "Representation of DATALINK Information in an Input File", the first note following the parameter description for urlname should read:

   Currently "http", "file", "unc", and "dfs" are permitted as scheme names.

The first sentence of the second note should read:

   The prefix (scheme, host, and port) of the URL name is optional. For DFS, the prefix refers to the scheme cellname filespace-junction portion.

In the DATALINK data examples for both the delimited ASCII (DEL) file format and the non-delimited ASCII (ASC) file format, the third example should be removed. The DATALINK data examples in which the load or import specification for the column is assumed to be DL_URL_DEFAULT_PREFIX should be removed and replaced with the following:

Following are DATALINK data examples in which the load or import specification for the column is assumed to be DL_URL_REPLACE_PREFIX ("http://qso"):

* http://www.almaden.ibm.com/mrep/intro.mpeg

  This is stored with the following parts:

  o scheme = http
  o server = qso
  o path = /mrep/intro.mpeg
  o comment = NULL string

* /u/me/myfile.ps

  This is stored with the following parts:

  o scheme = http
  o server = qso
  o path = /u/me/myfile.ps
  o comment = NULL string

------------------------------------------------------------------------

9.16 PING (new command)

PING

Tests the network response time of the underlying connectivity between a client and a database server where DB2 Connect is used to establish the connection.

Authorization

None

Required Connection

Database

Command Syntax

   >>-PING---db_alias----+------------------------------+--------><
                         '-number_of_times--+--------+--'
                                            +-times--+
                                            '-time---'

Command Parameters

db_alias
   Specifies the database alias for the database on a DRDA server that the ping is being sent to.
   Note: This parameter, although mandatory, is not currently used. It is reserved for future use. Any valid database alias name can be specified.

number_of_times
   Specifies the number of iterations for this test. The value must be between 1 and 32767 inclusive. The default is 1. One timing will be returned for each iteration.

Examples

To test the network response time for the connection to the host database server hostdb once:

   db2 ping hostdb 1

or:

   db2 ping hostdb

The command will display output that looks like this:

   Elapsed time: 7221 microseconds

To test the network response time for the connection to the host database server hostdb five times:

   db2 ping hostdb 5

or:

   db2 ping hostdb 5 times

The command will display output that looks like this:

   Elapsed time: 8412 microseconds
   Elapsed time: 11876 microseconds
   Elapsed time: 7789 microseconds
   Elapsed time: 10124 microseconds
   Elapsed time: 10988 microseconds

Usage Notes

A database connection must exist before invoking this command; otherwise an error will result.
The elapsed time returned is for the connection between the client and a DRDA server database via DB2 Connect.

------------------------------------------------------------------------

Connectivity Supplement

------------------------------------------------------------------------

10.1 Setting Up the Application Server in a VM Environment

Add the following sentence after the first (and only) sentence in the section "Provide Network Information", subsection "Defining the Application Server":

   The RDB_NAME is provided on the SQLSTART EXEC as the DBNAME parameter.

------------------------------------------------------------------------

10.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings

The CLI/ODBC/JDBC driver can be configured through the Client Configuration Assistant or the ODBC Driver Manager (if it is installed on the system), or by manually editing the db2cli.ini file. For more details, see either the Installation and Configuration Supplement, or the CLI Guide and Reference.

The DB2 CLI/ODBC driver default behavior can be modified by specifying values for both the PATCH1 and PATCH2 keywords, either through the db2cli.ini file or through the SQLDriverConnect() or SQLBrowseConnect() CLI API.

The PATCH1 keyword is specified by adding together all of the patch values that the user wants to set. For example, if patches 1, 2, and 8 were specified, then PATCH1 would have a value of 11. Following is a description of each keyword value and its effect on the driver:

1 - This makes the driver search for "count(exp)" and replace it with "count(distinct exp)". This is needed because some versions of DB2 support the "count(exp)" syntax, and that syntax is generated by some ODBC applications. Needed by Microsoft applications when the server does not support the "count(exp)" syntax.

2 - Some ODBC applications are trapped when SQL_NULL_DATA is returned in the SQLGetTypeInfo() function for either the LITERAL_PREFIX or LITERAL_SUFFIX column. This forces the driver to return an empty string instead. Needed by Impromptu 2.0.

4 - This forces the driver to treat the input time stamp data as date data if the time and the fraction part of the time stamp are zero. Needed by Microsoft Access.

8 - This forces the driver to treat the input time stamp data as time data if the date part of the time stamp is 1899-12-30. Needed by Microsoft Access.

16 - Not used.

32 - This forces the driver to not return information about SQL_LONGVARCHAR, SQL_LONGVARBINARY, and SQL_LONGVARGRAPHIC columns. To the application it appears as though long fields are not supported. Needed by Lotus 123.

64 - This forces the driver to NULL-terminate graphic output strings. Needed by Microsoft Access in a double-byte environment.

128 - This forces the driver to let the query "SELECT Config, nValue FROM MSysConf" go to the server. Currently the driver returns an error with an associated SQLSTATE value of S0002 (table not found). Needed if the user has created this configuration table in the database and wants the application to access it.

256 - This forces the driver to return the primary key columns first in the SQLStatistics() call. Currently, the driver returns the indexes sorted by index name, which is standard ODBC behavior.

512 - This forces the driver to return FALSE in SQLGetFunctions() for both SQL_API_SQLTABLEPRIVILEGES and SQL_API_SQLCOLUMNPRIVILEGES.

1024 - This forces the driver to return SQL_SUCCESS instead of SQL_NO_DATA_FOUND in SQLExecute() or SQLExecDirect() if the executed UPDATE or DELETE statement affects no rows. Needed by Visual Basic applications.
Needed by Visual Basic applications.

2048 - Not used.

4096 - This forces the driver to not issue a COMMIT after closing a cursor when in autocommit mode.

8192 - This forces the driver to return an extra result set after invoking a stored procedure. This result set is a one-row result set consisting of the output values of the stored procedure. Can be accessed by PowerBuilder applications.

32768 - This forces the driver to make Microsoft Query applications work with DB2 MVS synonyms.

65536 - This forces the driver to manually insert a "G" in front of character literals which are in fact graphic literals. This patch should always be supplied when working in a double byte environment.

131072 - This forces the driver to describe a time stamp column as a CHAR(26) column instead, when it is part of a unique index. Needed by Microsoft applications.

262144 - This forces the driver to use the pseudo-catalog table db2cli.procedures instead of the SYSCAT.PROCEDURES and SYSCAT.PROCPARMS tables.

524288 - This forces the driver to use SYSTEM_TABLE_SCHEMA instead of TABLE_SCHEMA when doing a system table query to a DB2/400 V3.x system. This results in better performance.

1048576 - This forces the driver to treat a zero-length string through SQLPutData() as SQL_NULL_DATA.

The PATCH2 keyword differs from the PATCH1 keyword. In this case, multiple patches are specified using comma separators. For example, if patches 1, 4, and 5 were specified, then PATCH2 would have a value of "1,4,5". A sample db2cli.ini entry combining both keywords follows the end of this list. Following is a description of each keyword value and its effect on the driver:

1 - This forces the driver to convert the name of the stored procedure in a CALL statement to uppercase.

2 - Not used.

3 - This forces the driver to convert all arguments to schema calls to uppercase.

4 - This forces the driver to return the Version 2.1.2-like result set for schema calls (that is, SQLColumns(), SQLProcedureColumns(), and so on), instead of the Version 5-like result set.

5 - This forces the driver to not optimize the processing of input VARCHAR columns, where the pointer to the data and the pointer to the length are consecutive in memory.

6 - This forces the driver to return a message that scrollable cursors are not supported. This is needed by Visual Basic programs if the DB2 client is Version 5 and the server is DB2 UDB Version 5.

7 - This forces the driver to map all GRAPHIC column data types to the CHAR column data type. This is needed in a double byte environment.

8 - This forces the driver to ignore catalog search arguments in schema calls.

9 - Do not commit on early close of a cursor.

10 - Not used.

11 - Report that catalog name is supported (VB stored procedures).

12 - Remove double quotes from schema call arguments (Visual Interdev).

13 - Do not append keywords from db2cli.ini to the output connection string.

14 - Ignore the schema name on SQLProcedures() and SQLProcedureColumns().

15 - Always use a period as the decimal separator in character output.

16 - Force return of describe information for each open.

17 - Do not return column names on describe.

18 - Attempt to replace literals with parameter markers.

19 - Currently, DB2 MVS V4.1 does not support the ODBC syntax in which parentheses are allowed in the ON clause of an outer join clause. Turning on this PATCH2 value will cause the IBM DB2 ODBC driver to strip the parentheses when the outer join clause is in an ODBC escape sequence. This PATCH2 value should only be used when going against DB2 MVS 4.1.
20 - Currently, DB2 on MVS does not support the BETWEEN predicate with parameter markers as both operands (expression ? BETWEEN ?). Turning on this patch will cause the IBM ODBC Driver to rewrite the predicate to (expression >= ? and expression <= ?).

21 - Set all OUTPUT-only parameters for stored procedures to SQL_NULL_DATA.

22 - This PATCH2 value causes the IBM ODBC driver to report OUTER join as not supported. This is for applications that generate SELECT DISTINCT col1 or ORDER BY col1 when using an outer join statement where col1 has a length greater than 254 characters, causing DB2 UDB to return an error (since DB2 UDB does not support columns longer than 254 bytes in this usage).

23 - Do not optimize input for parameters bound with cbColDef=0.

24 - Access workaround for mapping Time values as Characters.

25 - Access workaround for decimal columns - removes trailing zeros in the character representation.

26 - Do not return sqlcode 464 to the application - indicates that result sets are returned.

27 - Force SQLTables to use the TABLETYPE keyword value, even if the application specifies a valid value.

28 - Describe real columns as double columns.

29 - ADO workaround for decimal columns - removes leading zeros for values x, where 1 > x > -1 (only needed for some MDAC versions).

30 - Disable the stored procedure caching optimization.

31 - Report statistics for aliases on the SQLStatistics call.

32 - Override the sqlcode -727 reason code 4 processing.

33 - Return the ISO version of the time stamp when converted to char (as opposed to the ODBC version).

34 - Report CHAR FOR BIT DATA columns as CHAR.

35 - Report an invalid TABLENAME when SQL_DESC_BASE_TABLE_NAME is requested - ADO read-only optimization.

36 - Reserved.

37 - Reserved.
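As a combined illustration of the two keywords, a db2cli.ini entry might look like the following minimal sketch. The data source name SAMPLE and the particular patch values (PATCH1=11 for patches 1, 2, and 8 added together; PATCH2 for patches 1, 4, and 5) are illustrative only, not a recommendation; choose values based on the descriptions above. The same keywords can alternatively be supplied in the connection string of SQLDriverConnect() or SQLBrowseConnect().

   [SAMPLE]
   PATCH1=11
   PATCH2="1,4,5"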
------------------------------------------------------------------------

Data Links Manager Quick Beginnings

------------------------------------------------------------------------

11.1 dlfm start Fails with Message: "Error in getting the afsfid for prefix"

For a Data Links Manager running in the DCE-DFS environment, if dlfm start fails with the error:

   Error in getting the afsfid for prefix

contact IBM Service. The error may occur when a DFS file set that had been registered to the Data Links Manager using "dlfm add_prefix" was subsequently deleted.

------------------------------------------------------------------------

11.2 Setting Tivoli Storage Manager Class for Archive Files

To specify which TSM management class to use for the archive files, set the DLFM_TSM_MGMTCLASS DB2 registry entry to the appropriate management class name.

------------------------------------------------------------------------

11.3 Disk Space Requirements for DFS Client Enabler

The DFS Client Enabler is an optional component that you can select during DB2 Universal Database client or server installation. You cannot install a DFS Client Enabler without installing a DB2 Universal Database client or server product, even though the DFS Client Enabler runs on its own without the need for a DB2 UDB client or server. In addition to the 2 MB of disk space required for the DFS Client Enabler code, you should set aside an additional 40 MB if you are installing the DFS Client Enabler as part of a DB2 Run-Time Client installation. You will need more disk space if you install the DFS Client Enabler as part of a DB2 Administration Client or DB2 server installation. For more information about disk space requirements for DB2 Universal Database products, refer to the DB2 for UNIX Quick Beginnings manual.

------------------------------------------------------------------------

11.4 Monitoring the Data Links File Manager Back-end Processes on AIX

There has been a change to the output of the dlfm see command. When this command is issued to monitor the Data Links File Manager back-end processes on AIX, the output that is returned will be similar to the following:

   PID   PPID  PGID  RUNAME UNAME ETIME DAEMON NAME
   17500 60182 40838 dlfm   root  12:18 dlfm_copyd_(dlfm)
   41228 60182 40838 dlfm   root  12:18 dlfm_chownd_(dlfm)
   49006 60182 40838 dlfm   root  12:18 dlfm_upcalld_(dlfm)
   51972 60182 40838 dlfm   root  12:18 dlfm_gcd_(dlfm)
   66850 60182 40838 dlfm   root  12:18 dlfm_retrieved_(dlfm)
   67216 60182 40838 dlfm   dlfm  12:18 dlfm_delgrpd_(dlfm)
   60182 1     40838 dlfm   dlfm  12:18 dlfmd_(dlfm)

   DLFM SEE request was successful.

The name that is enclosed within the parentheses is the name of the dlfm instance, in this case "dlfm".

------------------------------------------------------------------------

11.5 Installing and Configuring DB2 Data Links Manager for AIX: Additional Installation Considerations in DCE-DFS Environments

In the section called "Installation prerequisites", there is new information that should be added:

   You must also install either an e-fix for DFS 3.1, or PTF set 1 (when
   it becomes available). The e-fix is available from:
   http://www.transarc.com/Support/dfs/datalinks/efix_dfs31_main_page.html

Also:

   The DFS client must be running before you install the Data Links
   Manager. Use db2setup or smitty.

In the section called "Keytab file", there is an error that should be corrected as:

   The keytab file, which contains the principal and password
   information, should be called datalink.ktb and ....

The correct name, datalink.ktb, is used in the example below. The "Keytab file" section should be moved under "DCE-DFS Post-Installation Task", because the creation of this file cannot occur until after the DLMADMIN instance has been created.

In the section called "Data Links File Manager servers and clients", it should be noted that the Data Links Manager server must be installed before any of the Data Links Manager clients.

A new section, "Backup directory", should be added:

   If the backup method is to a local file system, this must be a
   directory in the DFS file system. Ensure that this DFS file set has
   been created by a DFS administrator. This should not be a DMLFS file
   set.

------------------------------------------------------------------------

11.6 Failed "dlfm add_prefix" Command

For a Data Links Manager running in the DCE/DFS environment, the dlfm add_prefix command might fail with a return code of -2061 (backup failed). If this occurs, perform the following steps:

1. Stop the Data Links Manager daemon processes by issuing the dlfm stop command.
2. Stop the DB2 processes by issuing the dlfm stopdbm command.
3. Get DCE root credentials by issuing the dce_login root command.
4. Start the DB2 processes by issuing the dlfm startdbm command.
5. Register the file set with the Data Links Manager by issuing the dlfm add_prefix command.
6. Start the Data Links Manager daemon processes by issuing the dlfm start command.

------------------------------------------------------------------------

11.7 Installing and Configuring DB2 Data Links Manager for AIX: Installing DB2 Data Links Manager on AIX Using the db2setup Utility

In the section "DB2 database DLFM_DB created", note that the DLFM_DB is not created in the DCE-DFS environment. This must be done as a post-installation step.
In the section "DCE-DFS pre-start registration for DMAPP", Step 2 should be changed to the following: 2. Commands are added to /opt/dcelocal/tcl/user_cmd.tcl to ensure that the DMAPP is started when DFS is started. ------------------------------------------------------------------------ 11.8 Installing and Configuring DB2 Data Links Manager for AIX: DCE-DFS Post-Installation Task The following new section, "Complete the Data Links Manager Install", should be added: On the Data Links Manager server, the following steps must be performed to complete the installation: 1. Create the keytab file as outlined under "Keytab file" in the section "Additional Installation Considerations in DCE-DFS Environment", in the chapter "Installing and Configuring DB2 Data Links Manger for AIX". 2. As root, enter the following commands to start the DMAPP: stop.dfs all start.dfs all 3. Run "dlfm setup" using dce root credentials as follows: a. Login as the Data Links Manager administrator, DLMADMIN. b. As root, issue dce_login. c. Enter the command: dlfm setup. On the Data Links Manager client, the following steps must be performed to complete the installation: 1. Create the keytab file as outlined under "Keytab file" in the section "Additional Installation Considerations in DCE-DFS Environment", in the chapter "Installing and Configuring DB2 Data Links Manger for AIX". 2. As root, enter the following commands to start the DMAPP: stop.dfs all start.dfs all ------------------------------------------------------------------------ 11.9 Installing and Configuring DB2 Data Links Manager for AIX: Manually Installing DB2 Data Links Manager Using Smit Under the section, "SMIT Post-installation Tasks", modify step 7 to indicate that the command "dce_login root" must be issued before "dlfm setup". Step 11 is not needed. This step is performed automatically when Step 6 (dlfm server_conf) or Step 8 (dlfm client_conf) is done. Also remove step 12 (dlfm start). To complete the installation, perform the following steps: 1. Create the keytab file as outlined under "Keytab file" in the section "Additional Installation Considerations in DCE-DFS Environment", in the chapter "Installing and Configuring DB2 Data Links Manger for AIX". 2. As root, enter the following commands to start the DMAPP: stop.dfs all start.dfs all ------------------------------------------------------------------------ 11.10 Installing and Configuring DB2 Data Links DFS Client Enabler In the section "Configuring a DFS Client Enabler", add the following to Step 2: Performing the "secval" commands will usually complete the configuration. It may, however, be necessary to reboot the machine as well. If problems are encountered in accessing READ PERMISSION DB files, reboot the machine where the DB2 DFS Client Enabler has just been installed. ------------------------------------------------------------------------ 11.11 Installing and Configuring DB2 Data Links Manager for Solaris The following actions must be performed after installing DB2 Data Links Manager for Solaris: 1. Add the following three lines to the /etc/system file: set dlfsdrv:glob_mod_pri=0x100800 set dlfsdrv:glob_mesg_pri=0xff set dlfsdrv:ConfigDlfsUid=UID where UID represents the user ID of the id dlfm. 2. Reboot the machine to activate the changes. 
------------------------------------------------------------------------

11.12 Choosing a Backup Method for DB2 Data Links Manager on AIX

In addition to Disk Copy and XBSA, you can also use Tivoli Storage Manager (TSM) for backing up files that reside on a Data Links server. To use Tivoli Storage Manager as an archive server:

1. Install Tivoli Storage Manager on the Data Links server. For more information, refer to your Tivoli Storage Manager product documentation.
2. Register the Data Links server client application with the Tivoli Storage Manager server. For more information, refer to your Tivoli Storage Manager product documentation.
3. Add the following environment variables to the Data Links Manager Administrator's db2profile or db2cshrc script files:

   (for Bash, Bourne, or Korn shell)
      export DSMI_DIR=/usr/lpp/tsm/bin
      export DSMI_CONFIG=$HOME/tsm/dsm.opt
      export DSMI_LOG=$HOME/dldump
      export PATH=$PATH:/usr/lpp/tsm/bin

   (for C shell)
      setenv DSMI_DIR /usr/lpp/tsm/bin
      setenv DSMI_CONFIG ${HOME}/tsm/dsm.opt
      setenv DSMI_LOG ${HOME}/dldump
      setenv PATH ${PATH}:/usr/lpp/tsm/bin

4. Ensure that the dsm.sys TSM system options file is located in the /usr/lpp/tsm/bin directory.
5. Ensure that the dsm.opt TSM user options file is located in the INSTHOME/tsm directory, where INSTHOME is the home directory of the Data Links Manager Administrator.
6. Set the PASSWORDACCESS option to generate in the /usr/lpp/tsm/bin/dsm.sys Tivoli Storage Manager system options file.
7. Register the TSM password with the generate option before starting the Data Links File Manager for the first time. This way, you will not need to provide a password when the Data Links File Manager initiates a connection to the TSM server. For more information, refer to your TSM product documentation.
8. Set the DLFM_BACKUP_TARGET registry variable to TSM (a consolidated command example follows these steps). The value of the DLFM_BACKUP_DIR_NAME registry variable will be ignored in this case. This will activate the Tivoli Storage Manager backup option.

   Notes:

   1. If you change the setting of the DLFM_BACKUP_TARGET registry variable between TSM and disk at run time, you should be aware that the archived files are not moved to the newly specified archive location. For example, if you start the Data Links File Manager with the DLFM_BACKUP_TARGET registry value set to TSM, and change the registry value to a disk location, all newly archived files will be stored in the new location on the disk. The files that were previously archived to TSM will not be moved to the new disk location.
   2. To override the default TSM management class, there is a new registry variable called DLFM_TSM_MGMTCLASS. If this registry variable is left unset, the default TSM management class will be used.

9. Stop the Data Links File Manager by entering the dlfm stop command.
10. Start the Data Links File Manager by entering the dlfm start command.
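As a sketch of steps 8 through 10 expressed as commands, the following sequence sets the registry variables and restarts the Data Links File Manager. The management class name mymgmtclass is illustrative, and the DLFM_TSM_MGMTCLASS line is only needed if you want to override the default class (see Note 2 above):

   db2set -g DLFM_BACKUP_TARGET=TSM
   db2set -g DLFM_TSM_MGMTCLASS=mymgmtclass
   dlfm stop
   dlfm start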
------------------------------------------------------------------------

11.13 Choosing a Backup Method for DB2 Data Links Manager on Windows NT

Whenever a DATALINK value is inserted into a table with a DATALINK column that is defined for recovery, the corresponding DATALINK files on the Data Links server are scheduled to be backed up to an archive server. Currently, Disk Copy (the default method) and Tivoli Storage Manager are the two options that are supported for file backup to an archive server. Future releases of DB2 Data Links Manager for Windows NT will support other vendors' backup media and software.

Disk Copy (default method)

When the backup command is entered on the DB2 server, it ensures that the linked files in the database are backed up on the Data Links server to the directory specified by the DLFM_BACKUP_DIR_NAME environment variable. The default value for this variable is c:\dlfmbackup, where c:\ represents the Data Links Manager backup installation drive. To set this variable to c:\dlfmbackup, enter the following command:

   db2set -g DLFM_BACKUP_DIR_NAME=c:\dlfmbackup

The location specified by the DLFM_BACKUP_DIR_NAME environment variable must not be located on a file system that uses a Data Links Filesystem Filter. Ensure that the required space is available in the directory that you specified for the backup files. Also, ensure that the DLFM_BACKUP_TARGET variable is set to LOCAL by entering the following command:

   db2set -g DLFM_BACKUP_TARGET=LOCAL

After setting or changing these variables, stop and restart the Data Links File Manager using the dlfm stop and dlfm start commands.

Tivoli Storage Manager

To use Tivoli Storage Manager as an archive server:

1. Install Tivoli Storage Manager on the Data Links server. For more information, refer to your Tivoli Storage Manager product documentation.
2. Register the Data Links server client application with the Tivoli Storage Manager server. For more information, refer to your Tivoli Storage Manager product documentation.
3. Click on Start and select Settings --> Control Panel --> System. The System Properties window opens. Select the Environment tab and enter the following environment variables and corresponding values:

   Variable      Value
   DSMI_DIR      c:\tsm\baclient
   DSMI_CONFIG   c:\tsm\baclient\dsm.opt
   DSMI_LOG      c:\tsm\dldump

4. Ensure that the dsm.sys TSM system options file is located in the c:\tsm\baclient directory.
5. Ensure that the dsm.opt TSM user options file is located in the c:\tsm\baclient directory.
6. Set the PASSWORDACCESS option to generate in the c:\tsm\baclient\dsm.sys Tivoli Storage Manager system options file.
7. Register the TSM password with the generate option before starting the Data Links File Manager for the first time. This way, you will not need to provide a password when the Data Links File Manager initiates a connection to the TSM server. For more information, refer to your TSM product documentation.
8. Set the DLFM_BACKUP_TARGET environment variable to TSM using the following command (a consolidated example, including how to revert to Disk Copy, follows these steps):

      db2set -g DLFM_BACKUP_TARGET=TSM

   The value of the DLFM_BACKUP_DIR_NAME environment variable will be ignored in this case. This will activate the Tivoli Storage Manager backup option.

   Notes:

   1. If you change the setting of the DLFM_BACKUP_TARGET environment variable between TSM and LOCAL at run time, you should be aware that the archived files are not moved to the newly specified archive location. For example, if you start the Data Links File Manager with the DLFM_BACKUP_TARGET environment variable set to TSM, and change its value to LOCAL, all newly archived files will be stored in the new location on the disk. The files that were previously archived to TSM will not be moved to the new disk location.
   2. To override the default TSM management class, there is a new environment variable called DLFM_TSM_MGMTCLASS. If this variable is left unset, the default TSM management class will be used.

9. Stop the Data Links File Manager by entering the dlfm stop command.
10. Start the Data Links File Manager by entering the dlfm start command.
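To revert from TSM to the default Disk Copy method later, the same commands shown above can be combined as follows; the directory value is the default named earlier in this section:

   db2set -g DLFM_BACKUP_TARGET=LOCAL
   db2set -g DLFM_BACKUP_DIR_NAME=c:\dlfmbackup
   dlfm stop
   dlfm start

Remember from Note 1 above that files already archived to TSM are not moved to the disk location.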
------------------------------------------------------------------------

11.14 Backing up a Journalized File System on AIX

The book states that the Data Links Manager must be stopped, and that an offline backup should be made of the file system. The following approach, which removes the requirement of stopping the Data Links Manager, is suggested for users who require higher availability.

1. Extract the attached (see below) CLI source file quiesce.c and the shell script online.sh.
2. Compile quiesce.c:

      xlC -o quiesce -L$HOME/sqllib/lib -I$HOME/sqllib/include quiesce.c -ldb2

3. Run the script on the node that has the DLFS file system.

The shell script online.sh assumes that you have a catalog entry on the Data Links Manager node for each database that is registered with the Data Links Manager. It also assumes that /etc/filesystems has the complete entry for the DLFS file system. The shell script does the following:

* Quiesces all the tables in databases that are registered with the Data Links Manager. This will stop any new activity.
* Unmounts and remounts the file system as a read-only file system.
* Performs a file system backup.
* Unmounts and remounts the file system as a read-write file system.
* Resets the DB2 tables; that is, brings them out of the quiesce state.

The script must be modified to suit your environment as follows:

1. Select the backup command and put it in the do_backup function of the script.
2. Set the following environment variables within the script:
   o DLFM_INST: set this to the DLFM instance name.
   o PATH_OF_EXEC: set this to the path where the "quiesce" executable resides.

Invoke the script as follows:

   online.sh <dlfs-filesystem-name>

------------------------- start of 'online.sh' script ----------------------
#!/bin/ksh
# Sample script for performing a filesystem backup without bringing it
# offline for most of the duration of the backup.
# Some sections of the script need to be modified by the users to suit their
# specific needs, including replacing some of the parameters with their own.
# Usage: online.sh <dlfs-filesystem-name>
#
# The dlfs filesystem being backed up would remain accessible in read-only
# mode for most of the time that the filesystem backup is going on.
# For a short while in between it may be necessary to have all users off the
# filesystem. This would be required at two points; the first, when switching
# the filesystem to read-only (an unmount followed by re-mount as read-only),
# and the second when switching it back to read-write (unmount again followed
# by re-mount as read-write).

# Environment dependent variables ...
# To be changed according to needs ...
DLFM_INST=sharada
PATH_OF_EXEC=/home/sharada/amit

# Local environment variables.
EXEC=quiesce
DLFM_DB_NAME=dlfm_db

# Function to check if root
check_id()
{
  if [ `id -u` -ne 0 ]
  then
    echo "You need to be root to run this"
    exit 1
  fi
}

#
# Function to quiesce the tables with Datalinks value in databases registered
# with DLFM_DB
#
quiesce_tables()
{
  echo "Starting DB2 ..."
  su - $DLFM_INST "-c db2start | tail -n 1"    # Print just the last line
  su - $DLFM_INST "-c $PATH_OF_EXEC/$EXEC -q $DLFM_DB_NAME"
}

#
# Function to make the dlfs filesystem read-only
#
# [The filesystem should not be in use during this time; no user should even
# have 'cd'-ed into the filesystem]
# - If the filesystem is NFS exported, unexport it
#
unexport_fs()
{
  if exportfs | grep -w $filesystem_name
  then
    echo $filesystem_name " is NFS exported"
    nfs_export_existed=1
    echo "Unexporting " $filesystem_name
    exportfs -u $filesystem_name
    result=$?
    if [ $result -ne 0 ]
    then
      echo "Failed to unexport " $filesystem_name
      reset_tables
      exit 1
    fi
  else
    echo $filesystem_name " is not NFS exported"
  fi
}

#
# Function to unmount the filesystem
#
umount_fs()
{
  echo "Unmounting " $filesystem_name
  umount $filesystem_name
  result=$?
  if [ $result -ne 0 ]
  then
    echo "Unable to unmount " $filesystem_name
    echo "Filesystem " $filesystem_name " may be in use"
    echo "Please make sure that no one is using the filesystem"
    echo "and then press a key"
    read ans
    umount $filesystem_name
    result=$?
  fi
  if [ $result -ne 0 ]
  then
    echo "Unable to unmount " $filesystem_name
    echo "Aborting ..."
    echo "Resetting the quiesced tables ..."
    reset_tables
    exit 1
  fi
  echo "Successfully unmounted " $filesystem_name
}

#
# Function to remount the same filesystem back as read-only or
# read-write depending on the value of the "RO" variable.
#
remount_fs()
{
  if [ $RO -eq 1 ]
  then
    echo "Now re-mounting " $filesystem_name " as read-only"
    mount -v dlfs -r $filesystem_name
  else
    echo "Now re-mounting " $filesystem_name " as read-write"
    mount -v dlfs $filesystem_name
  fi
  result=$?
  if [ $result -ne 0 ]
  then
    echo "Failed to remount " $filesystem_name
    echo "Aborting ..."
    reset_tables
    exit 1
  fi
  echo "Successfully re-mounted " $filesystem_name
}

#
# Function: If this was NFS exported, then export it as read-only now
#
make_fs_ro()
{
  if [ $nfs_export_existed ]
  then
    echo "Re-exporting for NFS as read-only"
    chnfsexp -d $filesystem_name -N -t ro
    result=$?
    if [ $result -ne 0 ]
    then
      echo "Warning: Unable to NFS export " $filesystem_name
      # Not aborting here - continuing with a warning;
      # at least the filesystem is available locally
      ## TBD: Or perhaps it would be better to exit
    else
      echo "Successfully exported " $filesystem_name " as read-only"
    fi
  fi
}

#
# Function to do the backup.
# Update this function with the backup command that you want to use.
#
do_backup()
{
  echo "Initiating backup of " $filesystem_name
  # [ Add lines here to issue your favourite backup command with the right
  #   parameters, or uncomment one of the following ]
  # To invoke backup via smit, uncomment the following line
  # smit fs
  #   Select Backup a Filesystem
  # OR
  # To issue the backup command directly, uncomment and modify the following
  # line with your own options (for example full/incremental) and the
  # appropriate parameters (you might want to replace /dev/rmt0 by the name
  # of your backup device)
  # /usr/sbin/backup -f'/dev/rmt0' -'0' $filesystem_name
  # result=$?
  # if [ $result -ne 0 ]
  # then
  #   echo "Backup failed"
  #   # Do we exit here ? Or cleanup ?
  # else
  #   echo "Successful backup"
  # fi
  # OR
  # Put in your own backup script here
}

#
# Function to remount the filesystem as read-write, and NFS export it, if it
# was NFS exported to start with.
#
export_fs()
{
  if [ $nfs_export_existed ]
  then
    echo "Exporting back for NFS as read-write"
    chnfsexp -d $filesystem_name -N -t rw
    result=$?
    if [ $result -ne 0 ]
    then
      echo "Warning: Unable to NFS export " $filesystem_name
      # Not aborting here - continuing with a warning;
      # at least the filesystem is available locally
      # TBD: Or perhaps it would be better to exit
    else
      echo "Successfully exported " $filesystem_name " as read-write"
    fi
  fi
}

# Function to reset quiesced tables
reset_tables()
{
  su - $DLFM_INST "-c $PATH_OF_EXEC/$EXEC -r $DLFM_DB_NAME"
}

#***************** MAIN PORTION starts here ... *****************

# Check args
if [ $# -lt 1 ]
then
  echo "Usage: " $0 " <dlfs-filesystem-name>"
  exit 1
fi

check_id

# Quiesce tables (after waiting for all transactions to get over ...)
quiesce_tables

# (i) umount and remount the filesystem as read-only
filesystem_name=$1
unexport_fs
umount_fs
RO=1
remount_fs    # READ_ONLY
make_fs_ro

# (ii) Start backup
do_backup

# (iii) unmount and remount the filesystem as read-write
umount_fs
RO=0
remount_fs    # READ_WRITE
export_fs

# Reset all quiesced tables ...
reset_tables

# Now the filesystem is ready for normal operation of Datalinks
echo "Done"
exit 0
------------------------- end of 'online.sh' script ------------------------

------------------------- start of 'quiesce.c' script ------------------------
/**********************************************************************
 *
 * OCO SOURCE MATERIALS
 *
 * COPYRIGHT: P#2 P#1
 * (C) COPYRIGHT IBM CORPORATION Y1, Y2
 *
 * The source code for this program is not published or otherwise divested of
 * its trade secrets, irrespective of what has been deposited with the U.S.
 * Copyright Office.
 *
 * Source File Name = quiesce.c (%W%)
 *
 * Descriptive Name = Quiesce or Reset tables.
 *
 * Function: It quiesces (OR resets) the tables (with datalinks column) of
 * the databases which are registered with DLFM_DB.
 *
 * This program expects the databases registered with DLFM_DB to be
 * cataloged. It also expects that db2 is started.
 *
 * Dependencies:
 *
 * Restrictions:
 *
 ***********************************************************************/
/* The header names below were lost in the original formatting of these
   notes; this is a plausible reconstruction for a DB2 V7 CLI program. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sqlcli1.h>
#include <sqlutil.h>

#define MAX_UID_LENGTH 20
#define MAX_PWD_LENGTH 20
#define MAXCOLS 255

struct sqlca sqlca ;
struct SQLB_TBSPQRY_DATA * sqlb ;

#ifndef max
#define max(a,b) (a > b ? a : b)
#endif

#define CHECK_HANDLE( htype, hndl, RC ) if ( RC != SQL_SUCCESS ) \
{ check_error( htype, hndl, RC, __LINE__, __FILE__ ) ; }

SQLRETURN check_error( SQLSMALLINT, SQLHANDLE, SQLRETURN, int, char * ) ;
SQLRETURN DBconnect( SQLHANDLE, SQLHANDLE * ) ;
SQLRETURN print_error( SQLSMALLINT, SQLHANDLE, SQLRETURN, int, char * ) ;
SQLRETURN prompted_connect( SQLHANDLE, SQLHANDLE * ) ;
SQLRETURN terminate( SQLHANDLE, SQLRETURN ) ;

SQLCHAR server[SQL_MAX_DSN_LENGTH + 1] ;
SQLCHAR uid[MAX_UID_LENGTH + 1] ;
SQLCHAR pwd[MAX_PWD_LENGTH + 1] ;

/* check_error - calls print_error(), checks severity of return code */
SQLRETURN check_error( SQLSMALLINT htype, /* A handle type identifier */
                       SQLHANDLE hndl,    /* A handle */
                       SQLRETURN frc,     /* Return code for the error msg */
                       int line,          /* Used for output message, to    */
                       char * file        /* indicate where the error was   */
                     )                    /* reported from                  */
{
  print_error( htype, hndl, frc, line, file ) ;

  switch ( frc )
  {
    case SQL_SUCCESS:
      break ;
    case SQL_INVALID_HANDLE:
      printf( "\n>------ ERROR Invalid Handle --------------------------\n") ;
      /* fall through to rollback */
    case SQL_ERROR:
      printf( "\n>--- FATAL ERROR, Attempting to rollback transaction --\n") ;
      if ( SQLEndTran( htype, hndl, SQL_ROLLBACK ) != SQL_SUCCESS )
        printf( ">Rollback Failed, Exiting application\n" ) ;
      else
        printf( ">Rollback Successful, Exiting application\n" ) ;
      return( terminate( hndl, frc ) ) ;
    case SQL_SUCCESS_WITH_INFO:
      printf( "\n> ----- Warning Message, application continuing ------- \n") ;
      break ;
    case SQL_NO_DATA_FOUND:
      printf( "\n> ----- No Data Found, application continuing --------- \n") ;
      break ;
    default:
      printf( "\n> ----------- Invalid Return Code --------------------- \n") ;
      printf( "> --------- Attempting to rollback transaction ---------- \n") ;
      if ( SQLEndTran( htype, hndl, SQL_ROLLBACK ) != SQL_SUCCESS )
        printf( ">Rollback Failed, Exiting application\n" ) ;
      else
        printf( ">Rollback Successful, Exiting application\n" ) ;
      return( terminate( hndl, frc ) ) ;
  }
  return ( frc ) ;
}

/* connect without prompt */
SQLRETURN DBconnect( SQLHANDLE henv, SQLHANDLE * hdbc )
{
  /* allocate a connection handle */
  if ( SQLAllocHandle( SQL_HANDLE_DBC, henv, hdbc ) != SQL_SUCCESS )
  {
    printf( ">---ERROR while allocating a connection handle-----\n" ) ;
    return( SQL_ERROR ) ;
  }

  /* Set AUTOCOMMIT OFF */
  if ( SQLSetConnectAttr( * hdbc, SQL_ATTR_AUTOCOMMIT,
                          ( void * ) SQL_AUTOCOMMIT_OFF,
                          SQL_NTS ) != SQL_SUCCESS )
  {
    printf( ">---ERROR while setting AUTOCOMMIT OFF ------------\n" ) ;
    return( SQL_ERROR ) ;
  }

  if ( SQLConnect( * hdbc, server, SQL_NTS, uid, SQL_NTS,
                   pwd, SQL_NTS ) != SQL_SUCCESS )
  {
    printf( ">--- Error while connecting to database: %s -------\n", server ) ;
    SQLDisconnect( * hdbc ) ;
    SQLFreeHandle( SQL_HANDLE_DBC, * hdbc ) ;
    return( SQL_ERROR ) ;
  }
  else /* Print Connection Information */
    printf( "\nConnected to %s\n", server ) ;

  return( SQL_SUCCESS ) ;
}

/*---> SQLL1X32.SCRIPT */
/* print_error - calls SQLGetDiagRec(), displays SQLSTATE and message
** - called by check_error
*/
SQLRETURN print_error( SQLSMALLINT htype, /* A handle type identifier */
                       SQLHANDLE hndl,    /* A handle */
                       SQLRETURN frc,     /* Return code for the error msg */
                       int line,          /* Used for output message, to    */
                       char * file        /* indicate where the error was   */
                     )                    /* reported from                  */
{
  SQLCHAR buffer[SQL_MAX_MESSAGE_LENGTH + 1] ;
  SQLCHAR sqlstate[SQL_SQLSTATE_SIZE + 1] ;
  SQLINTEGER sqlcode ;
  SQLSMALLINT length, i ;

  printf( ">--- ERROR -- RC = %d Reported from %s, line %d ------------\n",
          frc, file, line ) ;
  i = 1 ;
  while ( SQLGetDiagRec( htype, hndl, i, sqlstate, &sqlcode, buffer,
                         SQL_MAX_MESSAGE_LENGTH + 1, &length ) == SQL_SUCCESS )
  {
    printf( " SQLSTATE: %s\n", sqlstate ) ;
    printf( "Native Error Code: %ld\n", (long) sqlcode ) ;
    printf( "%s \n", buffer ) ;
    i++ ;
  }
  printf( ">--------------------------------------------------\n" ) ;
  return( SQL_ERROR ) ;
}
/*<-- */

/* prompted_connect - prompt for connect options and connect */
SQLRETURN prompted_connect( SQLHANDLE henv, SQLHANDLE * hdbc )
{
  /* allocate a connection handle */
  if ( SQLAllocHandle( SQL_HANDLE_DBC, henv, hdbc ) != SQL_SUCCESS )
  {
    printf( ">---ERROR while allocating a connection handle-----\n" ) ;
    return( SQL_ERROR ) ;
  }

  /* Set AUTOCOMMIT OFF */
  if ( SQLSetConnectAttr( * hdbc, SQL_ATTR_AUTOCOMMIT,
                          ( void * ) SQL_AUTOCOMMIT_OFF,
                          SQL_NTS ) != SQL_SUCCESS )
  {
    printf( ">---ERROR while setting AUTOCOMMIT OFF ------------\n" ) ;
    return( SQL_ERROR ) ;
  }

  if ( SQLConnect( * hdbc, server, SQL_NTS, uid, SQL_NTS,
                   pwd, SQL_NTS ) != SQL_SUCCESS )
  {
    printf( ">--- ERROR while connecting to %s -------------\n", server ) ;
    SQLDisconnect( * hdbc ) ;
    SQLFreeHandle( SQL_HANDLE_DBC, * hdbc ) ;
    return( SQL_ERROR ) ;
  }
  else /* Print Connection Information */
    printf( "\nConnected to %s\n", server ) ;

  return( SQL_SUCCESS ) ;
}

/* terminate and free environment handle */
SQLRETURN terminate( SQLHANDLE henv, SQLRETURN rc )
{
  SQLRETURN lrc ;

  printf( ">Terminating ....\n" ) ;
  print_error( SQL_HANDLE_ENV, henv, rc, __LINE__, __FILE__ ) ;

  /* Free environment handle */
  if ( ( lrc = SQLFreeHandle( SQL_HANDLE_ENV, henv ) ) != SQL_SUCCESS )
    print_error( SQL_HANDLE_ENV, henv, lrc, __LINE__, __FILE__ ) ;

  return( rc ) ;
}

void show_progress()
{
  int i ;
  for ( i = 0 ; i < 3 ; i++ )
  {
    printf( "..." ) ;
    /* sleep(1) ; */
  }
  printf( "... DONE.\n" ) ;
}

void wrong_input( char * str )
{
  printf("\n\n\t****************************************************************\n") ;
  printf("\t* usage: %s -q [db-name]  ( to Quiesce tables ..)        *\n", str) ;
  printf("\t*   OR                                                   *\n") ;
  printf("\t* usage: %s -r [db-name]  ( to reset Quiesced tables ..) *\n", str) ;
  printf("\t****************************************************************\n\n\n") ;
  exit( 0 ) ;
}

extern SQLCHAR server[SQL_MAX_DSN_LENGTH + 1] ;
extern SQLCHAR uid[MAX_UID_LENGTH + 1] ;
extern SQLCHAR pwd[MAX_PWD_LENGTH + 1] ;

#define MAX_STMT_LEN 500

int reset = -1 ;

/*******************************************************************
** main
*******************************************************************/
int main( int argc, char * argv[] )
{
  SQLHANDLE henv, hdbc[3], hstmt, hstmt1, hstmt2 ;
  SQLRETURN rc ;
  SQLCHAR * sqlstmt = ( SQLCHAR * )
    "SELECT dbname,dbinst,password from dfm_dbid" ; /* for the primary db */
  SQLCHAR * stmt = ( SQLCHAR * )
    "SELECT COLS.TBCREATOR, COLS.TBNAME FROM SYSIBM.SYSCOLUMNS COLS, "
    " SYSIBM.SYSCOLPROPERTIES PROPS WHERE COLS.TBCREATOR = PROPS.TABSCHEMA AND "
    " COLS.TBNAME = PROPS.TABNAME AND COLS.TYPENAME='DATALINK' AND "
    " SUBSTR(PROPS.DL_FEATURES, 2, 1) = 'F' "
    " GROUP BY COLS.TBCREATOR, COLS.TBNAME" ;  /* test for the secondary db's */
  SQLCHAR * stmt2 = ( SQLCHAR * )
    "SELECT count(*) from dfm_xnstate where xn_state=3" ; /* for the primary db */
  SQLCHAR v_dbname[20] ;
  SQLINTEGER v_xnstate ;
  SQLCHAR v_usernm[20] ;
  SQLCHAR v_passwd[20] ;
  SQLINTEGER nullind ;
  SQLVARCHAR v_tbname[128] ;
  SQLCHAR v_tbcreator[20] ;
  int i, count ;
  char state[6] ;
  char v_tb[160] ;    /* large enough for creator + '.' + table name */
  int flag = 0 ;

  if ( ( argc != 2 && argc != 3 ) || argv[1][0] != '-' || strlen(argv[1]) != 2 )
    wrong_input( argv[0] ) ;

  /*** NOTE: If argc==2 the program asks the user to enter the DB name;
       otherwise it takes the second argument (argv[2]) as the DB name. ***/

  if ( argv[1][1] == 'q' || argv[1][1] == 'Q' )
  {
    reset = 0 ;
  }
  else if ( argv[1][1] == 'r' || argv[1][1] == 'R' )
  {
    /* As published, this test was inverted ("!= 'r' || != 'R'", which is
       always true); corrected here so that only -q/-Q and -r/-R are
       accepted. */
    reset = 1 ;
  }
  else
  {
    wrong_input( argv[0] ) ;
  }
  if ( reset == -1 )
    wrong_input( argv[0] ) ;

  /* allocate an environment handle */
  rc = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv ) ;
  if ( rc != SQL_SUCCESS )
    return( terminate( henv, rc ) ) ;
  /* (The published listing allocated a second environment handle further
     down, leaking this one; the duplicate allocation has been removed.) */

  /* Before allocating any connection handles, set environment-wide connect
     options: Connect Type 2, Syncpoint 1 */
  if ( SQLSetEnvAttr( henv, SQL_CONNECTTYPE,
                      ( SQLPOINTER ) SQL_COORDINATED_TRANS, 0 ) != SQL_SUCCESS )
  {
    printf( ">---ERROR while setting Connect Type 2 -------------\n" ) ;
    return( SQL_ERROR ) ;
  }
  if ( SQLSetEnvAttr( henv, SQL_SYNC_POINT,
                      ( SQLPOINTER ) SQL_ONEPHASE, 0 ) != SQL_SUCCESS )
  {
    printf( ">---ERROR while setting Syncpoint One Phase -------------\n" ) ;
    return( SQL_ERROR ) ;
  }

  if ( argc == 3 )
  {
    strcpy( ( char * ) server, argv[2] ) ;
  }
  else
  {
    printf( ">Enter database Name:\n" ) ;
    /* fgets() replaces the gets() call in the published listing */
    fgets( ( char * ) server, sizeof( server ), stdin ) ;
    server[ strcspn( ( char * ) server, "\n" ) ] = '\0' ;
  }
  /* prompted_connect( henv, &hdbc[0] ) ; */

  /* allocate a connection handle, and connect to the primary database */
  rc = DBconnect( henv, &hdbc[0] ) ;
  if ( rc != SQL_SUCCESS )
    return( terminate( henv, rc ) ) ;

  flag = 1 ;
  if ( reset != 1 )
  {
    printf( "\nWaiting for XNs to get over ..." ) ;
    while ( flag )   /* Outer While */
    {
      rc = SQLAllocHandle( SQL_HANDLE_STMT, hdbc[0], &hstmt2 ) ;
      CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
      rc = SQLExecDirect( hstmt2, stmt2, SQL_NTS ) ;
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt2, rc ) ;
      rc = SQLBindCol( hstmt2, 1, SQL_C_LONG, &v_xnstate, 0, &nullind ) ;
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt2, rc ) ;
      while ( ( rc = SQLFetch( hstmt2 ) ) == SQL_SUCCESS )
      {
        /* printf( "\nCount of XNs Pending : %d \n", v_xnstate ) ; */
        if ( v_xnstate > 0 )
        {
          fflush( stdout ) ;
          printf( "." ) ;
          sleep( 1 ) ;
          break ;
        }
        else
          flag = 0 ;
      } /* Inner While */

      /* Deallocation */
      rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt2 ) ;
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt2, rc ) ;
    } /* Outer While */
  } /* IF */

  if ( !reset )
    printf( "XNs OVER !!\n" ) ;

  rc = SQLAllocHandle( SQL_HANDLE_STMT, hdbc[0], &hstmt ) ;
  CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
  rc = SQLExecDirect( hstmt, sqlstmt, SQL_NTS ) ;
  CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
  rc = SQLBindCol( hstmt, 1, SQL_C_CHAR, v_dbname, sizeof(v_dbname), NULL ) ;
  CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
  rc = SQLBindCol( hstmt, 2, SQL_C_CHAR, v_usernm, sizeof(v_usernm), NULL ) ;
  CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
  v_passwd[0] = '\0' ;
  rc = SQLBindCol( hstmt, 3, SQL_C_CHAR, v_passwd, sizeof(v_passwd), NULL ) ;
  CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;

  /* Counter for number of rows fetched from the primary db */
  count = 1 ;
  for ( i = 1 ; i <= count ; i++ )   /* the FOR LOOP */
  {
    while ( ( rc = SQLFetch( hstmt ) ) == SQL_SUCCESS )
    {
      printf( "\nDatabase Name : %s \n", v_dbname ) ;
      count = count + 1 ;
      /* Depending on the no. of rows fetched from the primary db,
         connect to the secondary db's */
      if ( SQLAllocHandle( SQL_HANDLE_DBC, henv, &hdbc[i] ) != SQL_SUCCESS )
      {
        printf( ">---ERROR while allocating a connection handle-----\n" ) ;
        return( SQL_ERROR ) ;
      }
      /* Set AUTOCOMMIT ON (the published listing set the attribute on
         hdbc[0] here instead of the new handle) */
      if ( SQLSetConnectAttr( hdbc[i], SQL_ATTR_AUTOCOMMIT,
                              ( void * ) SQL_AUTOCOMMIT_ON,
                              SQL_NTS ) != SQL_SUCCESS )
      {
        printf( ">---ERROR while setting AUTOCOMMIT ON -------------\n" ) ;
        return( SQL_ERROR ) ;
      }
      rc = SQLConnect( hdbc[i], v_dbname, SQL_NTS,
                       ( ( v_passwd[0] == '\0' ) ? NULL : v_usernm ), SQL_NTS,
                       v_passwd, SQL_NTS ) ;
      if ( rc != SQL_SUCCESS )
        return( terminate( henv, rc ) ) ;

      /* Trying out for selection from these db's */
      rc = SQLAllocHandle( SQL_HANDLE_STMT, hdbc[i], &hstmt1 ) ;
      CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[i], rc ) ;
      rc = SQLExecDirect( hstmt1, stmt, SQL_NTS ) ;   /* was a hard-coded
                                                         length of 276 */
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
      rc = SQLBindCol( hstmt1, 1, SQL_C_CHAR, v_tbcreator,
                       sizeof(v_tbcreator), NULL ) ;
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
      rc = SQLBindCol( hstmt1, 2, SQL_C_CHAR, v_tbname,
                       sizeof(v_tbname), NULL ) ;
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;

      while ( ( rc = SQLFetch( hstmt1 ) ) == SQL_SUCCESS )
      {
        v_tb[0] = '\0' ;
        strcat( v_tb, ( char * ) v_tbcreator ) ;
        strcat( v_tb, "." ) ;
        strcat( v_tb, ( char * ) v_tbname ) ;
        printf( "\tTABLE : %s ", v_tb ) ;
        sqluvqdp( v_tb, ( reset == 1 ) ? 9 : 2, NULL, &sqlca ) ;
        /* 9 --> to RESET; 2 --> to Quiesce (exclusive) */
        if ( sqlca.sqlcode == 0 )
        {
          if ( reset == 1 )
          {
            /* printf("The quiesced tablespace successfully reset.\n"); */
            show_progress() ;
          }
          else
          {
            /* printf("The tablespace successfully quiesced\n"); */
            show_progress() ;
          }
        }
        else if ( sqlca.sqlcode == -3805 || sqlca.sqlcode == 01004 )
        /* 01004 appears as published; note that the leading zero makes it
           an octal constant */
        {
          if ( reset == 1 )
          {
            /* printf("The quiesced tablespace could not be reset.\n"); */
            show_progress() ;
          }
          else
          {
            /* printf("The tablespace has already been quiesced\n"); */
            show_progress() ;
          }
        }
        else
        {
          if ( reset == 1 )
            printf( "The quiesced tablespace could not be reset.\n" ) ;
          else
            printf( "The tablespace could not be quiesced. \n" ) ;
          printf( "\t\tSQLCODE = %ld\n", (long) sqlca.sqlcode ) ;
          strncpy( state, sqlca.sqlstate, 5 ) ;
          state[5] = '\0' ;
          printf( "\t\tSQLSTATE = %s\n", state ) ;
        }
      }
      rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt1 ) ;
      CHECK_HANDLE( SQL_HANDLE_STMT, hstmt1, rc ) ;
      rc = SQLDisconnect( hdbc[i] ) ;
      CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[i], rc ) ;
      rc = SQLFreeHandle( SQL_HANDLE_DBC, hdbc[i] ) ;
      CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[i], rc ) ;
    }
  }
  printf( "The NO. of DATABASES is %d \n", count - 1 ) ;
  if ( rc != SQL_NO_DATA_FOUND )
    CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;

  /* Commit the changes. */
  rc = SQLEndTran( SQL_HANDLE_DBC, hdbc[0], SQL_COMMIT ) ;
  CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;

  /* Disconnect and free up CLI resources. */
  rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt ) ;
  CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;

  printf( "\n>Disconnecting .....\n" ) ;
  rc = SQLDisconnect( hdbc[0] ) ;
  CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;
  rc = SQLFreeHandle( SQL_HANDLE_DBC, hdbc[0] ) ;
  CHECK_HANDLE( SQL_HANDLE_DBC, hdbc[0], rc ) ;

  rc = SQLFreeHandle( SQL_HANDLE_ENV, henv ) ;
  if ( rc != SQL_SUCCESS )
    return( terminate( henv, rc ) ) ;

  return( SQL_SUCCESS ) ;
} /* end main */
------------------------- end of 'quiesce.c' script ------------------------

------------------------------------------------------------------------

11.15 Administrator Group Privileges in Data Links on Windows NT

On Windows NT, a user belonging to the administrator group has, for most functions, the same privileges with regard to files linked using DataLinks as a root user has on UNIX. The following table compares the two.

   Operation                   UNIX (root)   Windows NT (Administrator)
   Rename                      Yes           Yes
   Access file without token   Yes           Yes
   Delete                      Yes           No (see note below)
   Update                      Yes           No (see note below)

Note: NTFS disallows these operations for a read-only file. The administrator user can make these operations successful by enabling the write permission for the file.

------------------------------------------------------------------------

11.16 Minimize Logging for DataLinks File System Filter (DLFF) Installation

You can minimize logging for the DataLinks File System Filter (DLFF) installation by changing the dlfs_cfg file. The dlfs_cfg file is passed to the strload routine to load the driver and configuration parameters. The file is located in the /usr/lpp/db2_07_01/cfg/ directory. Through a symbolic link, the file can also be found in the /etc directory. The dlfs_cfg file has the following format:

   d 'driver-name' 'vfs number' 'dlfm id' 'global message priority'
     'global module priority' - 0 1

where:

d
   The d parameter specifies that the driver is to be loaded.

driver-name
   The full path of the driver to be loaded. For instance, the full path
   for the DB2 Version 7 driver is /usr/lpp/db2_07_01/bin/dlfsdrv. The
   name of the driver is dlfsdrv.

vfs number
   The vfs entry for DLFS in /etc/vfs.

dlfm id
   The ID of the DataLinks File Manager.

global message priority
   The global message priority, one of the logging priority values
   listed below.

global module priority
   The global module priority.

0 1
   These are the minor numbers for creating non-clone nodes for this
   driver. The node names are created by appending the minor number to
   the cloned driver node name. No more than five minor numbers can be
   given (0-4).

A real-world example might look as follows:

   d $DRIVER_PATH/dlfsdrv 14,208,255,-1 - 0 1

The messages that are logged depend on the settings for the global message priority and the global module priority. To minimize logging, you can change the value of the global message priority.

There are four message priority values you can use:

   #define LOG_EMERGENCY    0x01
   #define LOG_TRACING      0x02
   #define LOG_ERROR        0x04
   #define LOG_TROUBLESHOOT 0x08

Most of the messages in DLFF have LOG_TROUBLESHOOT as the message priority.
Here are a few alternative configuration examples:

If you require emergency messages and error messages, set the global message priority to 5 (1+4) in the dlfs_cfg configuration file:

   d $DRIVER_PATH/dlfsdrv 14,208,5,-1 - 0 1

If only error messages are required, set the global message priority to 4:

   d $DRIVER_PATH/dlfsdrv 14,208,4,-1 - 0 1

If you do not require logging for DLFS, set the global message priority to 0:

   d $DRIVER_PATH/dlfsdrv 14,208,0,-1 - 0 1

11.16.1 Logging Messages after Installation

If you need to log emergency, error, and troubleshooting messages after installation, you must modify the dlfs_cfg file. The dlfs_cfg file is located in the /usr/lpp/db2_07_01/cfg directory. The global message priority must be set to 255 (maximum priority) or to 13 (8+4+1). Setting the priority to 13 (8+4+1) will log emergency, error, and troubleshooting information.

After setting the global message priority, unmount the DLFS filter file system and reload the dlfsdrv driver to have the new priority values set at load time. After reloading the dlfsdrv driver, the DLFS filter file system must be re-mounted.

Note: The settings in dlfs_cfg will remain in effect for any subsequent loading of the dlfsdrv driver until the dlfs_cfg file is changed again.

------------------------------------------------------------------------

11.17 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets

Before uninstalling DB2 (Version 5, 6, or 7) from an AIX machine on which the Data Links Manager is installed, follow these steps:

1. As root, make a copy of /etc/vfs using the command:

      cp -p /etc/vfs /etc/vfs.bak

2. Uninstall DB2.
3. As root, replace /etc/vfs with the backup copy made in step 1:

      cp -p /etc/vfs.bak /etc/vfs

------------------------------------------------------------------------

Data Movement Utilities Guide and Reference

------------------------------------------------------------------------

12.1 Pending States After a Load Operation

The first two sentences in the last paragraph in this section have been changed to the following:

   The fourth possible state associated with the load process (check
   pending state) pertains to referential and check constraints,
   DATALINKS constraints, AST constraints, or generated column
   constraints. For example, if an existing table is a parent table
   containing a primary key referenced by a foreign key in a dependent
   table, replacing data in the parent table places both tables (not the
   table space) in check pending state.

------------------------------------------------------------------------

12.2 Load Restrictions and Limitations

The following restrictions apply to generated columns and the load utility (see the example following this list):

* It is not possible to load a table having a generated column in a unique index unless the generated column is an "include column" of the index or the generatedoverride file type modifier is used. If this modifier is used, it is expected that all values for the column will be supplied in the input data file.
* It is not possible to load a table having a generated column in the partitioning key unless the generatedoverride file type modifier is used. If this modifier is used, it is expected that all values for the column will be supplied in the input data file.
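For instance, a LOAD command using the generatedoverride file type modifier might look like the following sketch; the input file name staff.del and the table name mytable are illustrative:

   db2 load from staff.del of del modified by generatedoverride insert into mytable

With this modifier in effect, every generated column value must be present in the input data file; none are computed by the load utility.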
------------------------------------------------------------------------

12.3 rexecd Required to Run Autoloader When Authentication=yes

When running the autoloader with the authentication option set to yes in the autoloader configuration file, the rexecd service must be enabled. If rexecd is not enabled, the following error message will be produced:

   SQL6554N An error occurred when attempting to remotely execute a
   process.

------------------------------------------------------------------------

Installation and Configuration Supplement

------------------------------------------------------------------------

13.1 Binding Database Utilities Using the Run-Time Client

The Run-Time Client cannot be used to bind the database utilities (import, export, reorg, the command line processor) and DB2 CLI bind files to a database; you must use the DB2 Administration Client or the DB2 Application Development Client instead. These database utilities and DB2 CLI bind files must be bound to each database before they can be used with that database.

In a network environment, if you are using multiple clients that run on different operating systems, or are at different versions or service levels of DB2, you must bind the utilities once for each operating-system and DB2-version combination.

------------------------------------------------------------------------

13.2 UNIX Client Access to DB2 Using ODBC

Chapter 12 ("Running Your Own Applications") states that you need to update odbcinst.ini if you install an ODBC Driver Manager with your ODBC client application or ODBC SDK. This is partially incorrect. You do not need to update odbcinst.ini if you install a Merant ODBC Driver Manager product.

------------------------------------------------------------------------

13.3 Switching NetQuestion for OS/2 to Use TCP/IP

The instructions for switching NetQuestion to use TCP/IP on OS/2 systems are incomplete. The location of the *.cfg files mentioned in those instructions is the data subdirectory of the NetQuestion installation directory. You can determine the NetQuestion installation directory by entering one of the following commands:

   echo %IMNINSTSRV%    //for SBCS installations
   echo %IMQINSTSRV%    //for DBCS installations

------------------------------------------------------------------------

13.4 Chapter 26. Setting Up a Federated System to Access Oracle Data Sources

Documentation Errors

The section "Adding Oracle Data Sources to a Federated System" has the following errors:

* A step is missing in the procedure. The correct steps are:
  1. Install and configure the Oracle client software on the DB2 federated server using the documentation provided by Oracle.
  2. For DB2 federated servers running on UNIX platforms, run the djxlink script to link-edit the Oracle SQL*Net or Net8 libraries to your DB2 federated server. The djxlink script is located in /install_directory/bin. Run this script only after installing Oracle's client software on the DB2 federated server.
  3. Set data source environment variables by modifying the DB2DJ.ini file and issuing the db2set command. The db2set command updates the DB2 profile registry with your settings.
* The documentation indicates to set:

     DB2_DJ_INI = sqllib/cfg/db2dj.ini

  This is incorrect; it should be set to the following (see the example below):

     DB2_DJ_INI = $INSTHOME/sqllib/cfg/db2dj.ini
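For example, the corrected value can be placed in the profile registry with the db2set command mentioned in step 3 above; as in the text, $INSTHOME stands for the instance owner's home directory:

   db2set DB2_DJ_INI=$INSTHOME/sqllib/cfg/db2dj.ini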
------------------------------------------------------------------------

Message Reference

------------------------------------------------------------------------

14.1 DWC13603E (New Message)

DWC13603E The export utility was unable to open the log file.

Explanation: The Data Warehouse Center always attempts to create a log file to capture all the details of the export process. This error indicates that the Data Warehouse Center cannot access or open this log file. If the Data Warehouse Center cannot create the log file, the export process cannot continue. Some typical reasons that a log file could not be opened include:

* The file name is not valid.
* The path name is not valid.
* You do not have write access to the log path.

User Response: Verify that you have write access to the specified log path and that there is an adequate amount of memory and storage available on your system. If the problem persists, contact IBM Software Support.

------------------------------------------------------------------------

14.2 DWC13700E (New Message)

DWC13700E The Data Warehouse Center object "" named "", which is required to import the Data Warehouse Center object "" named "", could not be found.

Explanation: This is an internal error that occurs when the import utility cannot find an object that should already exist in the Data Warehouse Center. If the required object was not created during the import process, the import utility cannot continue.

User Response: Verify that the XML file that you are importing is not damaged. To do this, regenerate the XML file from its original source and then run the import again. If you still receive this error message, contact IBM Software Support or the vendor who provided the file.

------------------------------------------------------------------------

14.3 DWC13701E (New Message)

DWC13701E Unable to import the Data Warehouse Center object "" named "", because no common warehouse metamodel object of type "" was found.

Explanation: Creating an object of this type depends on the Data Warehouse Center finding a necessary common warehouse metamodel object. Without this common warehouse metamodel object, the Data Warehouse Center object is not valid. If you are getting this error message, it is likely that the XML file that you are importing does not contain the necessary common warehouse metamodel object.

User Response: Verify that the XML file that you are importing is not damaged. To do this, regenerate the XML file from its original source and then run the import again. If you still receive this error message, contact IBM Software Support or the vendor who provided the file.

------------------------------------------------------------------------

14.4 DWC13702E (New Message)

DWC13702E A primary key already exists and cannot be updated. The import process cannot continue.

Explanation: Your warehouse control database has a primary key, and the data that you are trying to import contains a different primary key on the same table. To complete the import process, there must either be just one primary key, or two primary keys that match. You cannot have two different primary keys.

User Response: To resolve the unmatched primary keys, take one of the following actions:

* Change your warehouse control database's primary key to match the primary key that is in the data that you want to import.
* Delete the primary key from the data that you want to import and use the primary key that is in your warehouse control database.
* Change the primary key in the data that you want to import to match the primary key that is in your warehouse control database.

------------------------------------------------------------------------

14.5 DWC13703E (New Message)

DWC13703E A foreign key already exists and cannot be updated. The import process cannot continue.
Explanation: Your warehouse control database has a foreign key, and the data that you are trying to import contains a different foreign key on the same table. To complete the import process, there must either be just one foreign key, or two foreign keys that match. You cannot have two different foreign keys.

User Response: To resolve the unmatched foreign keys, take one of the following actions:

* Change your warehouse control database's foreign key to match the foreign key that is in the data that you want to import.
* Delete the foreign key from the data that you want to import and use the foreign key that is in your warehouse control database.
* Change the foreign key that is in the data you want to import to match the foreign key that is in your warehouse control database.

------------------------------------------------------------------------

14.6 DWC13705E (New Message)

DWC13705E The import utility was unable to create a temporary XML file in the EXCHANGE directory. Exception = "".

Explanation: The Data Warehouse Center must be able to create a copy of the XML file in the same directory as the CWM.DTD file. This error message indicates that the Data Warehouse Center cannot create that XML file. If the Data Warehouse Center cannot create this file, the import process cannot continue.

User Response: Verify that you have write access to the specified EXCHANGE path and that there is an adequate amount of memory and storage available on your system. If the problem persists, note the exception code from this error message and contact IBM Software Support.

------------------------------------------------------------------------

14.7 DWC13706E (New Message)

DWC13706E The XML file "" cannot be loaded. Exception = "".

Explanation: This is an internal error that occurs when the Data Warehouse Center is unable to read an XML file during the import process. Typical causes include files that have either been damaged, or that do not contain XML data. If the Data Warehouse Center cannot read the XML file, the import process cannot continue.

User Response: Verify that the XML file that you are importing is not damaged. To do this, regenerate the XML file from its original source and then run the import again. If you still receive this error message, contact IBM Software Support or the vendor who provided the file.

------------------------------------------------------------------------

14.8 DWC13707E (New Message)

DWC13707E The import utility was unable to open the log file.

Explanation: The Data Warehouse Center always attempts to create a log file to capture all the details of the import process. This error indicates that the Data Warehouse Center cannot access or open the log file. If the Data Warehouse Center cannot create the log file, the import process cannot continue.

User Response: Some typical reasons that a log file could not be opened on import include:

* The file name is not valid.
* The path name is not valid.
* You do not have write access to the log path.

Check to see if any of these problems exist and, if so, make the necessary changes, or call IBM Software Support.

------------------------------------------------------------------------

14.9 SQL0270N (New Reason Code 40)

The following reason code has been added to message SQL0270N:

Reason code 40

Under "Explanation": The function IDENTITY_VAL_LOCAL cannot be used in a trigger or SQL function.

Under "User Response": Remove the invocation of the IDENTITY_VAL_LOCAL function from the trigger definition or the SQL function definition.
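To illustrate the restriction, the following sketch (all table and trigger names are hypothetical) shows a trigger definition that would be rejected with SQL0270N, reason code 40, because its body invokes IDENTITY_VAL_LOCAL:

   CREATE TABLE orders (
     id   INTEGER GENERATED ALWAYS AS IDENTITY,
     note VARCHAR(30)
   )

   CREATE TABLE audit_tab (last_id INTEGER)

   -- Fails with SQL0270N, reason code 40:
   CREATE TRIGGER trk_id AFTER INSERT ON orders
   FOR EACH ROW MODE DB2SQL
   BEGIN ATOMIC
     UPDATE audit_tab SET last_id = IDENTITY_VAL_LOCAL();
   END

Moving the IDENTITY_VAL_LOCAL() invocation out of the trigger body, for example into the application code that performs the INSERT, avoids the error.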
------------------------------------------------------------------------
14.10 SQL0301N (New Explanation Text)

The Explanation section for this message has been extended. It now reads as follows:

Explanation: A host variable could not be used as specified in the statement because its data type is incompatible with the intended use of its value. This error can occur as a result of specifying an incorrect host variable or an incorrect SQLTYPE value in an SQLDA on an EXECUTE or OPEN statement. In the case of a user-defined structured type, it may be that the associated built-in type of the host variable or SQLTYPE is not compatible with the parameter of the TO SQL transform function defined in the transform group for the statement. The statement cannot be processed.
------------------------------------------------------------------------
14.11 SQL0303N (New Text)

The Explanation and User Response sections for this message have been extended. They now read as follows:

Explanation: An embedded SELECT or VALUES statement selects into a host variable, but the data type of the variable is not compatible with the data type of the corresponding SELECT-list or VALUES-list element. Both must be numeric, character, or graphic. For a user-defined data type, it is possible that the host variable is defined with an associated built-in data type that is not compatible with the result type of the FROM SQL transform function defined in the transform group for the statement. For example, if the data type of the column is date or time, the data type of the variable must be character with an appropriate minimum length. The statement cannot be processed.

User Response: Verify that the table definitions are current and that the host variable has the correct data type. For a user-defined data type, verify that the associated built-in type of the host variable is compatible with the result type of the FROM SQL transform function defined in the transform group for the statement.
------------------------------------------------------------------------
14.12 SQL0358N (New User Response 26)

Reason code 26

Explanation: The file referenced by the DATALINK value cannot be accessed for linking. It may be a directory, a symbolic link, a file with the set user ID (SUID) or set group ID (SGID) permission bit on, or a file owned by user nobody (uid = -2).

User Response: Linking of directories is not allowed. Use the actual file name, not the symbolic link. If SUID or SGID is on, this file cannot be linked using a DATALINK type. If the file is owned by user nobody (uid = -2), this file cannot be linked using a DATALINK type with the READ PERMISSION DB option.
------------------------------------------------------------------------
14.13 SQL0408N (New Text)

The Explanation and User Response sections for this message have been extended. They now read as follows:

Explanation: The data type of the value to be assigned to the column, parameter, SQL variable, or transition variable by the SQL statement is incompatible with the declared data type of the assignment target. Both must be:
- Numeric
- Character
- Graphic
- Dates or character
- Times or character
- Timestamps or character
- Datalinks
- The same distinct types
- Reference types, where the target type of the value is a subtype of the target type of the column
- The same user-defined structured types; or, the static type of the value must be a subtype of the static type (declared type) of the target.
If a host variable is involved, the associated built-in type of the host variable must be compatible with the parameter of the TO SQL transform function defined in the transform group for the statement. The statement cannot be processed.

User Response: Examine the statement and possibly the target table or view to determine the target data type. Ensure that the variable, expression, or literal value assigned has the proper data type for the assignment target. For a user-defined structured type, also consider the parameter of the TO SQL transform function defined in the transform group for the statement as an assignment target.
------------------------------------------------------------------------
14.14 SQL0423N (Revised Text)

Locator variable "" does not currently represent any value.

Explanation: A locator variable is in error. Either it has not had a LOB value assigned to it, the locator associated with the variable has been freed, or the result set cursor has been closed. If "" is provided, it gives the ordinal position of the variable in error in the set of variables specified. Depending on when the error is detected, the database manager may not be able to determine "". Instead of an ordinal position, "" may have the value "function-name RETURNS", which means that the locator value returned from the user-defined function identified by function-name is in error.

User Response: If this was a LOB locator, correct the program so that the LOB locator variables used in the SQL statement have valid LOB values before the statement is executed. A LOB value can be assigned to a locator variable by means of a SELECT INTO statement, a VALUES INTO statement, or a FETCH statement. If this was a WITH RETURN cursor, you must ensure that the cursor is opened before attempting to allocate it.

sqlcode: -423

sqlstate: 0F001
------------------------------------------------------------------------
14.15 SQL0670N (Revised Text)

Message SQL0670N refers to row length limits for tables defined in a CREATE TABLE or an ALTER TABLE statement, and to the regular table space in which these tables are created. However, SQL0670N also applies to the row lengths of declared temporary tables defined in a DECLARE GLOBAL TEMPORARY TABLE statement, and to the user temporary table spaces in which these declared temporary tables are created. If a DECLARE GLOBAL TEMPORARY TABLE statement fails with SQL0670N, it means that the user temporary table space cannot accommodate the row length defined in the DECLARE GLOBAL TEMPORARY TABLE statement. Following is the revised message text:

The row length of the table exceeded a limit of "" bytes. (Table space "".)

Explanation: The row length of a table in the database manager cannot exceed:
- 4005 bytes in a table space with a 4K page size.
- 8101 bytes in a table space with an 8K page size.
- 16293 bytes in a table space with a 16K page size.
- 32677 bytes in a table space with a 32K page size.
The length is calculated by adding the internal lengths of the columns. Details of internal column lengths can be found under CREATE TABLE in the SQL Reference. One of the following conditions can occur:
- The row length for the table defined in the CREATE TABLE or ALTER TABLE statement exceeds the limit for the page size of the table space. The regular table space name "" identifies the table space from which the page size was used to determine the limit on the row length.
- The row length for the table defined in the DECLARE GLOBAL TEMPORARY TABLE statement exceeds the limit for the page size of the table space. The user temporary table space name "" identifies the table space from which the page size was used to determine the limit on the row length.
The statement cannot be processed.

User Response: Depending on the cause, do one of the following:
- In the case of CREATE TABLE, ALTER TABLE, or DECLARE GLOBAL TEMPORARY TABLE, specify a table space with a larger page size, if possible.
- Otherwise, reduce the row length by eliminating one or more columns, or by reducing the lengths of one or more columns.

sqlcode: -670

sqlstate: 54010
------------------------------------------------------------------------
14.16 SQL1179W (Revised Text)

The Explanation and User Response sections for this message have been extended. They now read as follows:

SQL1179W The "" called "" may require the invoker to have necessary privileges on data source objects.

Explanation: The object identified by "" references an OLE DB table function, or a nickname where the actual data exists at a data source. When the data source data is accessed, the user mapping and authorization checking are based on the user that initiated the operation. If the "" is SUMMARY TABLE, then the operation is refreshing the data for the summary table. The user that invoked the REFRESH TABLE or SET INTEGRITY statement that causes the refresh may be required to have the necessary privileges to access the underlying data source object at the data source. If the "" is PACKAGE, PROCEDURE, or VIEW, then any user of the package, procedure, or view may be required to have the necessary privileges to access the underlying data source object at the data source. In any case, an authorization error may occur when the attempt is made to access the data source object.

User Response: Granting privileges on the view, summary table, package, or procedure may not be sufficient to support operations that access the data from the data source. User access may need to be granted at the data source for the underlying data source objects of the view or summary table.

sqlcode: +1179

sqlstate: 01639
------------------------------------------------------------------------
14.17 SQL1550N (New SQLCODE)

SQL1550N The SET WRITE SUSPEND command failed. Reason code = "".

Explanation: You cannot issue the SET WRITE SUSPEND command until the condition indicated by "" is resolved:
1 Database is not activated.
2 A backup database operation is currently in progress for the target database. You cannot suspend write operations until DB2 completes the backup.
3 A restore database operation is currently in progress for the target database. You cannot suspend write operations for this database until DB2 completes the restore operation.
4 Write operations have already been suspended for this database.

User Response:
1 Activate the database by issuing the ACTIVATE DATABASE command, then re-issue the SET WRITE SUSPEND command.
2 Wait until the BACKUP procedure finishes, then re-issue the SET WRITE SUSPEND command.
3 Wait until the RESTORE procedure finishes, then re-issue the SET WRITE SUSPEND command.
4 The database is already in suspended state. To resume write operations for this database, issue the SET WRITE RESUME command.

sqlcode: -1550
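These messages accompany the new suspended-write support. As a hedged sketch of a typical sequence (the database name SAMPLE is illustrative), from the command line processor:

   db2 connect to sample
   db2 set write suspend for database
      (copy or split the database containers at the operating system level)
   db2 set write resume for database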
------------------------------------------------------------------------
14.18 SQL1551N (New SQLCODE)

SQL1551N The SET WRITE RESUME command failed because the database is not currently in WRITE SUSPEND state.

Explanation: The database is not currently in WRITE SUSPEND state. You can only resume write operations for a database for which write operations have been suspended.

User Response: No action is required, because write operations are enabled for this database. To suspend write operations for the database, issue the SET WRITE SUSPEND command.

sqlcode: -1551
------------------------------------------------------------------------
14.19 SQL1552N (New SQLCODE)

SQL1552N The command failed because the database is currently in WRITE SUSPEND state.

Explanation: This command is not allowed when write operations are suspended for the database. The database is in WRITE SUSPEND state.

User Response: If the command that failed was RESTART DATABASE, re-issue the RESTART DATABASE command using the WRITE RESUME option. If the command that failed was a BACKUP or RESTORE command, issue a SET WRITE RESUME FOR DATABASE command to resume write operations for the database. Then re-issue the BACKUP or RESTORE command.

sqlcode: -1552
------------------------------------------------------------------------
14.20 SQL1553N (New SQLCODE)

SQL1553N DB2 cannot be stopped because one or more databases are in WRITE SUSPEND state.

Explanation: You cannot shut down a database for which write operations are suspended. The database is in WRITE SUSPEND state.

User Response: Issue the SET WRITE RESUME command to resume write operations for the database, then re-issue the db2stop command.

sqlcode: -1553
------------------------------------------------------------------------
14.21 SQL1704N (New Reason Codes)

Reason code 14

Explanation: The table has an invalid primary key or unique constraint.

User Response: The table has an index that was erroneously used for a primary key or unique constraint. Drop the primary key or unique constraint that uses the index. This must be done in the release of the database manager in use prior to the current release. Resubmit the database migration command under the current release, and then recreate the primary key or unique constraint.

Reason code 15

Explanation: The table does not have a unique index on the REF IS column.

User Response: Create a unique index on the REF IS column of the typed table using the release of the database manager in use prior to the current release. Resubmit the database migration command under the current release.

Reason code 16

Explanation: The table is not logged but has a DATALINK column with file link control.

User Response: Drop the table and then create the table without the not logged property. This must be done in the release of the database manager in use prior to the current release. Resubmit the database migration command under the current release.

Reason code 17

Explanation: DB2 failed to allocate a new page in the DMS system catalog table space.

User Response: Restore the database backup onto its previous database manager system. Add more containers to the table space; it is recommended that you allocate 70% free space for database migration. Move back to the current release and migrate the database.
------------------------------------------------------------------------
14.22 SQL2426N (New Message)

SQL2426N Incremental backup is not enabled for this database. Ensure that modification tracking is activated, and perform a full backup of this database.

Explanation: Incremental backups are not enabled until after modification tracking is activated for the database and a full database backup has been performed. The full database backup is required when you attempt to restore any subsequent incremental backups.

User Response: To enable incremental backups for this database, activate modification tracking for this database by issuing the following command:

UPDATE DB CFG FOR database-name USING TRACKMOD ON

Then perform a full database backup.
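As a concrete sketch of the user response above (the database name SAMPLE is illustrative), from the command line processor:

   db2 update db cfg for sample using TRACKMOD ON
   db2 backup db sample

Once the full backup completes, subsequent backups of the database can be taken incrementally.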
------------------------------------------------------------------------
14.23 SQL2571N (New Message)

SQL2571N Automatic restore of the incremental backup image set failed. DB2 is unable to determine the complete chain of backup images from the current database history.

Explanation: The current database history does not contain the complete set of backup event entries required to automatically restore this incremental backup image. During an automatic restore of a set of incremental backup images, DB2 uses the database history to determine which backup images to restore and to determine their locations. To determine the complete set of images to restore, the database history must contain the previous full backup upon which this incremental image is based, along with any other tablespace or delta images that have been created since that full image.

User Response: Perform a manual incremental restore of the set of images, as described in the Command Reference.
------------------------------------------------------------------------
14.24 SQL2572N (New Message)

SQL2572N Attempted an incremental restore of an out-of-order image. The restore of tablespace "" encountered an error because the backup image with timestamp "" must be restored before the image that was just attempted.

Explanation: When restoring images produced with an incremental backup strategy, restore the images in the following order:
1. Restore the final image first to indicate to DB2 the increment to which you want to restore the database.
2. Restore the full database or tablespace image that precedes the set of incremental images.
3. Restore the set of incremental and delta images in the chronological order in which they were produced.
4. Restore the final image a second time.
Each tablespace in the backup image is aware of the backup image that must be restored before the backup image that failed can be successfully restored. You must restore the image with the timestamp reported in this message before you can successfully restore the image that invoked this message. There might be additional images to restore before the indicated image, but this was the first tablespace to encounter an error.

User Response: Ensure that the order of the set of incremental backup images is correct, and continue the incremental restore process.
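A hedged sketch of the restore order that this message describes (the database name SAMPLE and the timestamps <t1> through <t3> are placeholders; <t1> is the full backup, <t2> an intermediate incremental, and <t3> the final incremental image):

   db2 restore db sample incremental taken at <t3>
   db2 restore db sample taken at <t1>
   db2 restore db sample incremental taken at <t2>
   db2 restore db sample incremental taken at <t3>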
------------------------------------------------------------------------
14.25 SQL4942N (New Text)

The Explanation and User Response sections for this message have been extended. They now read as follows:

Explanation: An embedded SELECT statement selects into a host variable "", but the data type of the variable and the corresponding SELECT list element are not compatible. Both must be numeric, character, or graphic. If the data type of the column is date or time, the data type of the variable must be character with an appropriate minimum length. For a user-defined data type, it is possible that the host variable is defined with an associated built-in data type that is not compatible with the result type of the FROM SQL transform function defined in the transform group for the statement. The function cannot be completed.

User Response: Verify that the table definitions are current, and that the host variable has the proper data type. For a user-defined data type, verify that the associated built-in data type of the host variable is compatible with the result type of the FROM SQL transform function defined in the transform group for the statement.
------------------------------------------------------------------------
14.26 SQL20117N (Changed Reason Code 1)

The following reason code has been changed for message SQL20117N:

Reason code 1

Under "Explanation": RANGE or ROWS is specified without an ORDER BY in the window specification.

Under "User Response": Add a window ORDER BY clause to each window specification that specifies RANGE or ROWS.
------------------------------------------------------------------------
14.27 SQL20133N (New Message)

SQL20133N Operation "" cannot be performed on external routine "". The operation can only be performed on SQL routines.

Explanation: You attempted to perform operation "" on external routine "". However, you can only perform that operation on SQL routines. The operation did not complete successfully.

User Response: Ensure that the name you provide identifies an SQL routine.

sqlcode: -20133

sqlstate: 428F7
------------------------------------------------------------------------
14.28 SQL20134N (New Message)

SQL20134N The SQL Archive (SAR) file for routine "" could not be created on the server.

Explanation: The creation of the SQL archive (SAR) for routine "" failed because DB2 could not find either the library or the bind file for the specified routine. Bind files are only available for SQL routines created with DB2 Version 7.1, FixPak 2 or later.

User Response: Recreate the procedure on a server with DB2 Version 7.1, FixPak 2 or later, and try the operation again.

sqlcode: -20134

sqlstate: 55045
------------------------------------------------------------------------
14.29 SQL20135N (New Message)

SQL20135N The specified SQL archive does not match the target environment. Reason code = "".

Explanation: The specified SQL archive does not match the target environment for one of the following reasons:
1 The operating system of the target environment is not the same as the operating system on which the SQL archive was created.
2 The database type and level of the target environment are not the same as the database type and level on which the SQL archive was created.

User Response: Ensure that the environment on which the SQL archive was created matches the target environment, and reissue the command. If the environments do not match, you must manually create the SQL routine in the target environment.

sqlcode: -20135

sqlstate: 55046
------------------------------------------------------------------------
14.30 New SQLSTATE values: 428F7, 55045, 55046

Table 7. New SQLSTATE Values and Text

SQLSTATE Value  Meaning
428F7           An operation that applies only to SQL routines was attempted on an external routine.
55045           The SQL Archive (SAR) file for the routine cannot be created because a necessary component is not available at the server.
55046           The specified SQL archive does not match the target environment.
------------------------------------------------------------------------
Replication Guide and Reference
------------------------------------------------------------------------
15.1 Replication on Windows 2000

DB2 DataPropagator Version 7.1 is compatible with the Windows 2000 operating system.
------------------------------------------------------------------------
15.2 Table and Column Names

Replication does not support blanks in table and column names.
------------------------------------------------------------------------
15.3 DATALINK Replication

DATALINK replication is available on Solaris as part of Version 7.1 FixPak 1. It requires an FTP daemon that runs in the source and target DATALINK file system and supports the MDTM (modtime) command, which displays the last modification time of a given file. If you are using Version 2.6 of the Solaris operating system, or any other version that does not include FTP support for MDTM, you need additional software such as WU-FTPD.

You cannot replicate DATALINK columns between DB2 databases on AS/400 and DB2 databases on other platforms. On the AS/400 platform, there is no support for the replication of the "comment" attribute of DATALINK values.

If you are running AIX 4.2, before you run the default user exit program (ASNDLCOPY) you must install the PTF for APAR IY03101 (AIX 4210-06 RECOMMENDED MAINTENANCE FOR AIX 4.2.1). This PTF contains a Y2K fix for the "modtime/MDTM" command in the FTP daemon. To verify the fix, check the last modification time returned from the "modtime <file>" command, where <file> is a file that was modified after January 1, 2000.

If the target table is an external CCD table, DB2 DataPropagator calls the ASNDLCOPY routine to replicate DATALINK files. For the latest information about how to use the ASNDLCOPY and ASNDLCOPYD programs, see the prologue section of each program's source code. The following restrictions apply:
* Internal CCD tables can contain DATALINK indicators, but not DATALINK values.
* Condensed external CCD tables can contain DATALINK values.
* Noncondensed CCD target tables cannot contain any DATALINK columns.
* When the source and target servers are the same, the subscription set must not contain any members with DATALINK columns.
------------------------------------------------------------------------
15.4 LOB Restrictions

Condensed internal CCD tables cannot contain references to LOB columns or LOB indicators.
------------------------------------------------------------------------
15.5 Replication and Non-IBM Servers

You must use DataJoiner Version 2 or later to replicate data to or from non-IBM servers such as Informix, Microsoft SQL Server, Oracle, Sybase, and Sybase SQL Anywhere. You cannot use the relational connect function for this type of replication because DB2 Relational Connect Version 7.1 does not have update capability. Also, you must use DJRA (DataJoiner Replication Administration) to administer such heterogeneous replication on all platforms (AS/400, OS/2, OS/390, UNIX, and Windows) for all existing versions of DB2 and DataJoiner.
------------------------------------------------------------------------
15.6 Update-anywhere Prerequisite

If you want to set up update-anywhere replication with conflict detection and with more than 150 subscription set members in a subscription set, you must run the following DDL to create the ASN.IBMSNAP_COMPENSATE table on the control server:

CREATE TABLE ASN.IBMSNAP_COMPENSATE (
       APPLY_QUAL char(18) NOT NULL,
       MEMBER SMALLINT,
       INTENTSEQ CHAR(10) FOR BIT DATA,
       OPERATION CHAR(1));
------------------------------------------------------------------------
15.7 Replication Scenarios

See the Library page of the DataPropagator Web site (http://www.ibm.com/software/data/dpropr/) for a new heterogeneous data replication scenario.
Follow the steps in that scenario to copy changes from a replication-source table in an Oracle database on AIX to a target table in a database on DB2 for Windows NT. That scenario uses the DB2 DataJoiner Replication Administration (DJRA) tool, Capture triggers, the Apply program, and DB2 DataJoiner.

On page 44 of the book, the instructions in Step 6 for creating a password file should read as follows:

Step 6: Create a password file

Because the Apply program needs to connect to the source server, you must create a password file for user authentication. Make sure that the user ID that will run the Apply program can read the password file. To create a password file:
1. From a Windows NT command prompt window, change to the C:\scripts directory.
2. Create a new file in this directory called DEPTQUAL.PWD. You can create this file using any text editor, such as Notepad. The naming convention for the password file is applyqual.pwd, where applyqual is a case-sensitive string that must match the case and value of the Apply qualifier used when you created the subscription set. For this scenario, the Apply qualifier is DEPTQUAL.
   Note: The file naming convention from Version 5 of DB2 DataPropagator is also supported.
3. The contents of the password file have the following format:
      SERVER=server USER=userid PWD=password
   Where:
   server
      The name of the source, target, or control server, exactly as it appears in the subscription set table. For this scenario, these names are SAMPLE and COPYDB.
   userid
      The user ID that you plan to use to administer that particular database. This value is case-sensitive for Windows NT and UNIX operating systems.
   password
      The password that is associated with that user ID. This value is case-sensitive for Windows NT and UNIX operating systems.
   Do not put blank lines or comment lines in this file. Add only the server-name, user ID, and password information.
4. The contents of the password file should look similar to:
      SERVER=SAMPLE USER=subina PWD=subpw
      SERVER=COPYDB USER=subina PWD=subpw
For more information about DB2 authentication and security, refer to the IBM DB2 Administration Guide.
------------------------------------------------------------------------
15.8 Planning for Replication

On page 65, "Connectivity" should include the following fact: If the Apply program cannot connect to the control server, the Apply program terminates.

When using data blocking for AS/400, you must ensure that the total amount of data to be replicated during the interval does not exceed "4 million rows", not "4 MB" as stated on page 69 of the book.
------------------------------------------------------------------------
15.9 Setting Up Your Replication Environment

Page 95, "Customizing CD table, index, and tablespace names" states that the DPREPL.DFT file is in either the \sqllib\bin directory or the \sqllib\java directory. This is incorrect; DPREPL.DFT is in the \sqllib\cc directory.
------------------------------------------------------------------------
15.10 Problem Determination

The Replication Analyzer runs on Windows 32-bit systems and AIX. To run the Analyzer on AIX, ensure that the sqllib/bin directory appears before /usr/local/bin in your PATH environment variable to avoid conflicts with /usr/local/bin/analyze.

The Replication Analyzer has two additional optional keywords: CT and AT.

CT=n
   Show only those entries from the Capture trace table that are newer than n days old. This keyword is optional. If you do not specify this keyword, the default is 7 days.
AT=n
   Show only those entries from the Apply trail table that are newer than n days old. This keyword is optional. If you do not specify this keyword, the default is 7 days.

Example:

   analyze mydb1 mydb2 f=mydirectory ct=4 at=2 deepcheck q=applyqual1

For the Replication Analyzer, the following keyword information is updated:

deepcheck
   Specifies that the Analyzer perform a more complete analysis, including the following information: CD and UOW table pruning information, DB2 for OS/390 tablespace-partitioning and compression detail, analysis of target indexes with respect to subscription keys, subscription timelines, and subscription-set SQL-statement errors. The analysis includes all servers. This keyword is optional.

lightcheck
   Specifies that the following information be excluded from the report: all column detail from the ASN.IBMSNAP_SUBS_COLS table; subscription errors, anomalies, or omissions; and incorrect or inefficient indexes. This reduction in information saves resources and produces a smaller HTML output file. This keyword is optional and is mutually exclusive with the deepcheck keyword.

Analyzer tools are available in PTFs for replication on AS/400 platforms. These tools collect information about your replication environment and produce an HTML file that can be sent to your IBM Service Representative to aid in problem determination. To get the AS/400 tools, download the appropriate PTF (for example, for product 5769DP2, you must download PTF SF61798 or its latest replacement).

Add the following problem and solution to the "Troubleshooting" section:

Problem: The Apply program loops without replicating changes; the Apply trail table shows STATUS=2.

The subscription set includes multiple source tables. To improve the handling of hotspots for one source table in the set, an internal CCD table is defined for that source table, but in a different subscription set. Updates are made to the source table, but the Apply process that populates the internal CCD table runs asynchronously (for example, the Apply program might not have been started, or an event might not have been triggered). The Apply program that replicates updates from the source table to the target table loops because it is waiting for the internal CCD table to be updated.

To stop the looping, start the Apply program (or trigger the event that causes replication) for the internal CCD table. The Apply program will populate the internal CCD table and allow the looping Apply program to process changes from all source tables. A similar situation could occur for a subscription set that contains source tables with internal CCD tables that are populated by multiple Apply programs.
------------------------------------------------------------------------
15.11 Capture and Apply for AS/400

On page 178, "A note on work management" should read as follows:

You can alter the default definitions or provide your own definitions. If you create your own subsystem description, you must name the subsystem QZSNDPR and create it in a library other than QDPR. See "OS/400 Work Management V4R3", SC41-5306, for more information about changing these definitions.

Add the following to page 178, "Verifying and customizing your installation of DB2 DataPropagator for AS/400": If you have problems with lock contention due to a high volume of transactions, you can increase the default wait timeout value from 30 to 120.
You can change the job every time the Capture job starts, or you can use the following procedure to change the default wait timeout value for all jobs running in your subsystem:
1. Issue the following command to create a new class object by duplicating QGPL/QBATCH:
      CRTDUPOBJ OBJ(QBATCH) FROMLIB(QGPL) OBJTYPE(*CLS) TOLIB(QDPR) NEWOBJ(QZSNDPR)
2. Change the wait timeout value for the newly created class (for example, to 300 seconds):
      CHGCLS CLS(QDPR/QZSNDPR) DFTWAIT(300)
3. Update the routing entry in subsystem description QDPR/QZSNDPR to use the newly created class:
      CHGRTGE SBSD(QDPR/QZSNDPR) SEQNBR(9999) CLS(QDPR/QZSNDPR)

On page 195, the ADDEXITPGM command parameters should read:

   ADDEXITPGM EXITPNT(QIBM_QJO_DLT_JRNRCV) FORMAT(DRCV0100) PGM(QDPR/QZSNDREP) PGMNBR(*LOW) CRTEXITPNT(*NO) PGMDTA(65535 10 QSYS)
------------------------------------------------------------------------
15.12 Table Structures

On page 339, append the following sentence to the STATUS column description for the value "2":

If you use internal CCD tables and you repeatedly get a value of "2" in the status column of the Apply trail table, go to "Chapter 8: Problem Determination" and refer to "Problem: The Apply program loops without replicating changes; the Apply trail table shows STATUS=2".
------------------------------------------------------------------------
15.13 Capture and Apply Messages

Message ASN1027S should be added:

ASN1027S There are too many large object (LOB) columns specified. The error code is "".

Explanation: Too many large object (BLOB, CLOB, or DBCLOB) columns are specified for a subscription set member. The maximum number of columns allowed is 10.

User response: Remove the excess large object columns from the subscription set member.

Message ASN1048E should read as follows:

ASN1048E The execution of an Apply cycle failed. See the Apply trail table for full details: ""

Explanation: An Apply cycle failed. In the message, "" identifies the "", "", and "".

User response: Check the APPERRM fields in the audit trail table to determine why the Apply cycle failed.
------------------------------------------------------------------------
15.14 Starting the Capture and Apply Programs from Within an Application

On page 399 of the book, a few errors appear in the comments of the sample routine that starts the Capture and Apply programs; however, the code in the sample is correct. The latter part of the sample pertains to the Apply parameters, despite the fact that the comments indicate that it pertains to the Capture parameters.

You can get samples of the Apply and Capture API, and their respective makefiles, in the following directories:
For NT - sqllib\samples\repl
For UNIX - sqllib/samples/repl
------------------------------------------------------------------------
SQL Reference
------------------------------------------------------------------------
16.1 ALTER TABLE

The ALTER TABLE statement is enhanced and can now modify existing tables by altering the IDENTITY column to RESTART the sequence of values assigned to the column.
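For illustration, a minimal sketch (the table and column names are hypothetical), using the syntax described below:

   CREATE TABLE ORDERS
      (ORDER_ID INTEGER GENERATED ALWAYS AS IDENTITY,
       AMOUNT DECIMAL(9,2));

   ALTER TABLE ORDERS ALTER COLUMN ORDER_ID RESTART WITH 1000;

After the ALTER TABLE statement, the next value generated for ORDER_ID is 1000. As the description below notes, the column must already be defined with the IDENTITY attribute.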
The column-alteration syntax diagram segment is replaced with the following:

column-alteration

|--column-name-------------------------------------------------->
>-----+-SET--+-DATA TYPE--+-VARCHAR-----------+---(--integer--)--+-+>
      |      |            +-CHARACTER VARYING-+                  | |
      |      |            '-CHAR VARYING------'                  | |
      |      '-EXPRESSION AS--(--generation-expression--)--------' |
      +-ADD SCOPE--+-typed-table-name-+----------------------------+
      |            '-typed-view-name--'                            |
      '-RESTART WITH--numeric-constant-----------------------------'
>---------------------------------------------------------------|

The following text should be added to the "Description" section before the DROP PRIMARY KEY description:

RESTART WITH numeric-constant
   Resets the state of the sequence associated with the identity column. The numeric-constant value is used as the next value for the column. numeric-constant is an exact numeric constant that can be any positive or negative value that can be assigned to this column (SQLSTATE 42820), as long as there are no nonzero digits to the right of the decimal point (SQLSTATE 42894). The column must already be defined with the IDENTITY attribute (SQLSTATE 42837).
------------------------------------------------------------------------
16.2 IDENTITY_VAL_LOCAL

>>-IDENTITY_VAL_LOCAL--(--)------------------------------------><

The schema is SYSIBM.

The IDENTITY_VAL_LOCAL function is a non-deterministic function that returns the most recently assigned value for an identity column, where the assignment occurred as a result of a single row INSERT statement using a VALUES clause. The function has no input parameters. The result is a DECIMAL(31,0), regardless of the actual data type of the identity column to which the result value corresponds.

The value returned is the value assigned to the identity column of the table identified in the most recent single row INSERT statement with a VALUES clause for a table containing an identity column. Note that the INSERT statement must be issued at the same level (that is, the value is available locally at the level it was assigned, until it is replaced by the next assigned value). The assigned value could be a value supplied by the user (if the identity column is defined as GENERATED BY DEFAULT), or an identity value generated by DB2.

The function returns the null value in the following situations:
* when a single row INSERT statement with a VALUES clause has not been issued for a table containing an identity column at the current processing level
* when a COMMIT or ROLLBACK of a unit of work has occurred since the most recent INSERT statement that assigned a value.

The result of the function is not affected by the following statements:
* a single row INSERT statement with a VALUES clause for a table that does not contain an identity column
* a multiple row INSERT statement with a VALUES clause
* an INSERT statement with a fullselect
* a ROLLBACK TO SAVEPOINT statement.

Notes:
* Expressions in the VALUES clause of an INSERT statement are evaluated prior to the assignments for the target columns of the INSERT statement. Thus, an invocation of an IDENTITY_VAL_LOCAL function in the VALUES clause of an INSERT statement uses the most recently assigned value for an identity column from a previous INSERT statement. The function returns the null value if no previous single row INSERT statement with a VALUES clause for a table containing an identity column has been executed within the same level as the IDENTITY_VAL_LOCAL function.
* The IDENTITY_VAL_LOCAL function cannot be used in a trigger or an SQL function (SQLSTATE 42997).
* The identity column value of the table for which the trigger is defined can be determined within a trigger by referencing the trigger transition variable for the identity column.
* Since the results of the IDENTITY_VAL_LOCAL function are not deterministic, the result of an invocation of the IDENTITY_VAL_LOCAL function within the SELECT statement of a cursor can vary for each FETCH statement.
* The assigned value is the value actually assigned to the identity column (that is, the value that would be returned on a subsequent SELECT statement). This value is not necessarily the value provided in the VALUES clause of the INSERT statement, or a value generated by DB2. The assigned value could be a value specified in a SET transition variable statement within the body of a before insert trigger, for a trigger transition variable associated with the identity column.
* The value returned by the function is unpredictable following a failed single row INSERT with a VALUES clause into a table with an identity column. The value may be the value that would have been returned from the function had it been invoked prior to the failed INSERT, or it may be the value that would have been assigned had the INSERT succeeded. The actual value returned depends on the point of failure and is therefore unpredictable.

Examples:

* Set the variable IVAR to the value assigned to the identity column in the EMPLOYEE table. If this insert is the first into the EMPLOYEE table, then IVAR would have a value of 1.

     CREATE TABLE EMPLOYEE
        (EMPNO INTEGER GENERATED ALWAYS AS IDENTITY,
         NAME CHAR(30),
         SALARY DECIMAL(5,2),
         DEPTNO SMALLINT)

* An IDENTITY_VAL_LOCAL function invoked in an INSERT statement returns the value associated with the previous single row INSERT statement with a VALUES clause for a table with an identity column. Assume for this example that there are two tables, T1 and T2. Both T1 and T2 have an identity column named C1. DB2 generates values in sequence starting with 1 for the C1 column in table T1, and values in sequence starting with 10 for the C1 column in table T2.

     CREATE TABLE T1
        (C1 INTEGER GENERATED ALWAYS AS IDENTITY,
         C2 INTEGER)

     CREATE TABLE T2
        (C1 DECIMAL(15,0) GENERATED BY DEFAULT AS IDENTITY (START WITH 10),
         C2 INTEGER)

     INSERT INTO T1 (C2) VALUES (5)

     INSERT INTO T1 (C2) VALUES (6)

     SELECT * FROM T1

     C1          C2
     ----------- ----------
               1          5
               2          6

     VALUES IDENTITY_VAL_LOCAL() INTO :IVAR

  At this point, the IDENTITY_VAL_LOCAL function would return a value of 2 in IVAR, because that was the value most recently assigned by DB2. The following INSERT statement inserts a single row into T2, where column C2 gets a value of 2 from the IDENTITY_VAL_LOCAL function.

     INSERT INTO T2 (C2) VALUES (IDENTITY_VAL_LOCAL())

     SELECT * FROM T2 WHERE C1 = DECIMAL(IDENTITY_VAL_LOCAL(),15,0)

     C1                C2
     ----------------- ----------
                   10.          2

  Invoking the IDENTITY_VAL_LOCAL function after this insert results in a value of 10, which is the value generated by DB2 for column C1 of T2.
------------------------------------------------------------------------
16.3 OLAP Functions

The following represents a correction to the "OLAP Functions" section under "Expressions" in Chapter 3.
aggregation-function

|--column-function--OVER---(--+------------------------------+-->
                              '-| window-partition-clause |--'
>----+--------------------------------------------------------------------+>
     '-| window-order-clause |--+--------------------------------------+--'
                                '-| window-aggregation-group-clause |--'
>---------------------------------------------------------------|

window-order-clause

              .-,-------------------------------------------.
              V                       .-| asc option |---.  |
|---ORDER BY-----sort-key-expression--+------------------+--+---|
                                      '-| desc option |--'

asc option

         .-NULLS LAST--.
|---ASC--+-------------+----------------------------------------|
         '-NULLS FIRST-'

desc option

          .-NULLS FIRST--.
|---DESC--+--------------+--------------------------------------|
          '-NULLS LAST---'

window-aggregation-group-clause

|---+-ROWS--+---+-| group-start |---+---------------------------|
    '-RANGE-'   +-| group-between |-+
                '-| group-end |-----'

group-end

|---+-UNBOUNDED FOLLOWING-----------+---------------------------|
    '-unsigned-constant--FOLLOWING--'

In the window-order-clause description:

NULLS FIRST
   The window ordering considers null values before all non-null values in the sort order.

NULLS LAST
   The window ordering considers null values after all non-null values in the sort order.

In the window-aggregation-group-clause description:

window-aggregation-group-clause
   The aggregation group of a row R is a set of rows, defined relative to R in the ordering of the rows of R's partition. This clause specifies the aggregation group. If this clause is not specified, the default is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, providing a cumulative aggregation result.

ROWS
   Indicates that the aggregation group is defined by counting rows.

RANGE
   Indicates that the aggregation group is defined by an offset from a sort key.

group-start
   Specifies the starting point for the aggregation group. The aggregation group end is the current row. Specification of the group-start clause is equivalent to a group-between clause of the form "BETWEEN group-start AND CURRENT ROW".

group-between
   Specifies the aggregation group start and end based on either ROWS or RANGE.

group-end
   Specifies the ending point for the aggregation group. The aggregation group start is the current row. Specification of the group-end clause is equivalent to a group-between clause of the form "BETWEEN CURRENT ROW AND group-end".

UNBOUNDED PRECEDING
   Includes the entire partition preceding the current row. This can be specified with either ROWS or RANGE. Also, this can be specified with multiple sort-key-expressions in the window-order-clause.

UNBOUNDED FOLLOWING
   Includes the entire partition following the current row. This can be specified with either ROWS or RANGE. Also, this can be specified with multiple sort-key-expressions in the window-order-clause.

CURRENT ROW
   Specifies the start or end of the aggregation group based on the current row. If ROWS is specified, the current row is the aggregation group boundary. If RANGE is specified, the aggregation group boundary includes the set of rows with the same values for the sort-key-expressions as the current row. This clause cannot be specified in group-bound2 if group-bound1 specifies value FOLLOWING.

value PRECEDING
   Specifies either the range or number of rows preceding the current row. If ROWS is specified, then value is a positive integer indicating a number of rows. If RANGE is specified, then the data type of value must be comparable to the type of the sort-key-expression of the window-order-clause. There can only be one sort-key-expression, and the data type of the sort-key-expression must allow subtraction. This clause cannot be specified in group-bound2 if group-bound1 is CURRENT ROW or value FOLLOWING.

value FOLLOWING
   Specifies either the range or number of rows following the current row. If ROWS is specified, then value is a positive integer indicating a number of rows. If RANGE is specified, then the data type of value must be comparable to the type of the sort-key-expression of the window-order-clause. There can only be one sort-key-expression, and the data type of the sort-key-expression must allow addition.
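To make the clause concrete, a hedged example (the table and column names are hypothetical): a three-row moving average over a date ordering, using ROWS with a group-between clause.

   SELECT SALES_DATE, SALES,
          AVG(SALES) OVER (ORDER BY SALES_DATE
                           ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
             AS MOVING_AVG
   FROM MONTHLY_SALES

Omitting the ROWS clause would yield the default described above, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW; that is, a cumulative average.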
------------------------------------------------------------------------
16.4 SQL Procedures/Compound Statement

Following is a revised syntax diagram for the Compound Statement:

                          .-NOT ATOMIC--.
>>-+---------+--BEGIN----+-------------+------------------------>
   '-label:--'           '-ATOMIC------'
>-----+-----------------------------------------------+--------->
      |  .-----------------------------------------.  |
      |  V                                         |  |
      '-----+-| SQL-variable-declaration |-+---;---+--'
            +-| condition-declaration |----+
            '-| return-codes-declaration |-'
>-----+--------------------------------------+------------------>
      |  .--------------------------------.  |
      |  V                                |  |
      '----| statement-declaration |--;---+--'
>-----+-------------------------------------+------------------->
      |  .-------------------------------.  |
      |  V                               |  |
      '----DECLARE-CURSOR-statement--;---+--'
>-----+------------------------------------+-------------------->
      |  .------------------------------.  |
      |  V                              |  |
      '----| handler-declaration |--;---+--'
   .-------------------------------.
   V                               |
>--------SQL-procedure-statement--;---+---END--+--------+------><
                                                '-label--'

SQL-variable-declaration

              .-,--------------------.
              V                      |
|---DECLARE-------SQL-variable-name---+------------------------->
                   .-DEFAULT NULL-------.
>-----+-data-type----+--------------------+-+-------------------|
      |              '-DEFAULT--constant--' |
      '-RESULT_SET_LOCATOR--VARYING---------'

condition-declaration

|---DECLARE--condition-name--CONDITION--FOR--------------------->
               .-VALUE-.
   .-SQLSTATE--+-------+---.
>----+-----------------------+---string-constant----------------|

statement-declaration

    .-,-----------------.
    V                   |
|---DECLARE-----statement-name---+---STATEMENT------------------|

return-codes-declaration

|---DECLARE----+-SQLSTATE--CHAR (5)--+---+--------------------+-|
               '-SQLCODE--INTEGER----'   '-DEFAULT--constant--'

handler-declaration

|---DECLARE----+-CONTINUE-+---HANDLER--FOR---------------------->
               +-EXIT-----+
               '-UNDO-----'
      .-,-----------------------------------.
      V          .-VALUE-.                  |
>---------+-SQLSTATE--+-------+--string--+--+------------------->
          +-condition-name---------------+
          +-SQLEXCEPTION-----------------+
          +-SQLWARNING-------------------+
          '-NOT FOUND--------------------'
>----SQL-procedure-statement------------------------------------|

A statement-declaration declares a list of one or more names that are local to the compound statement. A statement name cannot be the same as another statement name within the same compound statement.
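As an illustration of the revised diagram (a minimal sketch; the procedure, table, and variable names are hypothetical), the following procedure body combines a label, an SQL-variable-declaration with a DEFAULT clause, a return-codes-declaration, and a handler-declaration:

   CREATE PROCEDURE RAISE_SALARY (IN P_EMPNO INTEGER, IN P_PCT DECIMAL(5,2))
   LANGUAGE SQL
   P1: BEGIN
      DECLARE V_FACTOR DECIMAL(9,4) DEFAULT 1.0;   -- SQL-variable-declaration
      DECLARE SQLSTATE CHAR(5);                    -- return-codes-declaration
      DECLARE CONTINUE HANDLER FOR NOT FOUND       -- handler-declaration
         SET V_FACTOR = 0;
      SET V_FACTOR = 1 + (P_PCT / 100);
      UPDATE EMPLOYEE SET SALARY = SALARY * V_FACTOR
         WHERE EMPNO = P_EMPNO;                    -- no matching row invokes the handler
   END P1

Note that, as the diagram requires, the variable and return-code declarations precede the handler declaration, which in turn precedes the SQL-procedure-statements.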
------------------------------------------------------------------------
16.5 LCASE and UCASE (Unicode)

In a Unicode database, the entire repertoire of Unicode characters is uppercased (or lowercased) based on the Unicode properties of these characters. Double-wide versions of ASCII characters, as well as Roman numerals, now convert to upper or lower case correctly.
------------------------------------------------------------------------
16.6 WEEK_ISO

Change the description of this function to the following:

The schema is SYSFUN.

Returns the week of the year of the argument as an integer value in the range 1-53. The week starts with Monday and always includes 7 days. Week 1 is the first week of the year to contain a Thursday, which is equivalent to the first week containing January 4. It is therefore possible to have up to 3 days at the beginning of a year appear in the last week of the previous year. Conversely, up to 3 days at the end of a year may appear in the first week of the next year.

The argument must be a date, timestamp, or a valid character string representation of a date or timestamp that is neither a CLOB nor a LONG VARCHAR. The result of the function is INTEGER. The result can be null; if the argument is null, the result is the null value.

Example: The following list shows examples of the result of WEEK_ISO and DAYOFWEEK_ISO.

DATE        WEEK_ISO  DAYOFWEEK_ISO
----------  --------  -------------
1997-12-28        52              7
1997-12-31         1              3
1998-01-01         1              4
1999-01-01        53              5
1999-01-04         1              1
1999-12-31        52              5
2000-01-01        52              6
2000-01-03         1              1
------------------------------------------------------------------------
16.7 Naming Conventions and Implicit Object Name Qualifications

Add the following note to this section in Chapter 3:

The following names, when used in the context of SQL Procedures, are restricted to the characters allowed in an ordinary identifier, even if the names are delimited:
- condition-name
- label
- parameter-name
- procedure-name
- SQL-variable-name
- statement-name
------------------------------------------------------------------------
16.8 Queries (select-statement/fetch-first-clause)

The last paragraph in the description of the fetch-first-clause:

"Specification of the fetch-first-clause in a select-statement makes the cursor not deletable (read-only). This clause cannot be specified with the FOR UPDATE clause."

is incorrect and should be removed.
------------------------------------------------------------------------
16.9 Libraries Used by the CREATE WRAPPER Statement on Linux

Linux uses libraries called LIBDRDA.SO and LIBSQLNET.SO, not LIBDRDA.A and LIBSQLNET.A as may have been documented previously.
------------------------------------------------------------------------
16.10 Update of the Partitioning Key Now Supported

Updating the partitioning key is now supported. The following text from various statements in Chapter 6 should be deleted only if DB2_UPDATE_PART_KEY=ON.

Note: If DB2_UPDATE_PART_KEY=OFF (the default), then the restrictions still apply.

16.10.1 Statement: ALTER TABLE

Rules
* A partitioning key column of a table cannot be updated (SQLSTATE 42997).
* A nullable column of a partitioning key cannot be included as a foreign key column when the relationship is defined with ON DELETE SET NULL (SQLSTATE 42997).

16.10.2 Statement: CREATE TABLE

Rules
* A partitioning key column of a table cannot be updated (SQLSTATE 42997).
* A nullable column of a partitioning key cannot be included as a foreign key column when the relationship is defined with ON DELETE SET NULL (SQLSTATE 42997).

16.10.3 Statement: DECLARE GLOBAL TEMPORARY TABLE

PARTITIONING KEY (column-name,...)
   Note: The partitioning key columns cannot be updated (SQLSTATE 42997).
16.10.4 Statement: SET transition-variable

Rules
* If the statement is used in a BEFORE UPDATE trigger, the column-name specified as a transition-variable cannot be a partitioning key column (SQLSTATE 42997).

16.10.5 Statement: UPDATE

Footnotes
* 108 A column of a partitioning key is not updatable (SQLSTATE 42997). The row of data must be deleted and inserted to change columns in a partitioning key.
------------------------------------------------------------------------
16.11 Enabling the New SQL Built-in Scalar Functions

FixPak 2 of Version 7.1 delivers new SQL built-in scalar functions. Refer to the SQL Reference updates for a description of these new functions. The new functions are not automatically enabled on each database when the database server code is upgraded to the new service level. To enable these new functions, the system administrator must issue the command db2updv7, specifying each database at the server. This command makes an entry in the database that ensures that database objects created prior to executing this command use existing function signatures that may match the new function signatures.
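For example, assuming a database named SAMPLE (the database name, and any user ID and password options your configuration requires, are illustrative; see the command description for the exact options):

   db2updv7 -d sample

Run the command once for each database at the server.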
------------------------------------------------------------------------
16.12 ABS or ABSVAL

>>-+-ABS----+--(expression)------------------------------------><
   '-ABSVAL-'

The schema is SYSIBM.

Note: The SYSFUN version of the ABS (or ABSVAL) function continues to be available.

Returns the absolute value of the argument. The argument is an expression that returns a value of any built-in numeric data type. The result of the function has the same data type and length attribute as the argument. If the argument can be null, or the database is configured with DFT_SQLMATHWARN set to yes and the argument type is SMALLINT, INTEGER, or BIGINT, then the result can be null; if the argument is null, the result is the null value.

For example: ABS(-51234) returns an INTEGER with a value of 51234.
------------------------------------------------------------------------
16.13 MULTIPLY_ALT

>>-MULTIPLY_ALT------------------------------------------------->
>----(exact_numeric_expression, exact_numeric_expression)------><

The schema is SYSIBM.

The MULTIPLY_ALT scalar function returns the product of the two arguments as a decimal value. It is provided as an alternative to the multiplication operator, especially when the sum of the precisions of the arguments exceeds 31.

The arguments can be any built-in exact numeric data type (DECIMAL, BIGINT, INTEGER, or SMALLINT).

The result of the function is a DECIMAL. The precision and scale of the result are determined as follows, using the symbols p and s to denote the precision and scale of the first argument, and the symbols p' and s' to denote the precision and scale of the second argument:
* The precision is MIN(31, p + p').
* The scale is:
  o 0 if the scale of both arguments is 0
  o MIN(31, s + s') if p + p' is less than or equal to 31
  o MAX(MIN(3, s + s'), 31 - (p - s + p' - s')) if p + p' is greater than 31.

The result can be null if at least one argument can be null or the database is configured with DFT_SQLMATHWARN set to yes; the result is the null value if one of the arguments is null.

The MULTIPLY_ALT function is a better choice than the multiplication operator when performing decimal arithmetic where a scale of at least 3 is needed and the sum of the precisions exceeds 31. In these cases, the internal computation is performed so that overflows are avoided. The final result is then assigned to the result data type, using truncation where necessary to match the scale. Note that overflow of the final result is still possible when the scale is 3.

The following is a sample comparing the result types using MULTIPLY_ALT and the multiplication operator.

Type of argument 1  Type of argument 2  Result using        Result using
                                        MULTIPLY_ALT        multiplication operator
DECIMAL(31,3)       DECIMAL(15,8)       DECIMAL(31,3)       DECIMAL(31,11)
DECIMAL(26,23)      DECIMAL(10,1)       DECIMAL(31,19)      DECIMAL(31,24)
DECIMAL(18,17)      DECIMAL(20,19)      DECIMAL(31,29)      DECIMAL(31,31)
DECIMAL(16,3)       DECIMAL(17,8)       DECIMAL(31,9)       DECIMAL(31,11)
DECIMAL(26,5)       DECIMAL(11,0)       DECIMAL(31,3)       DECIMAL(31,5)
DECIMAL(21,1)       DECIMAL(15,1)       DECIMAL(31,2)       DECIMAL(31,2)

Example: Multiply two values where the data type of the first argument is DECIMAL(26,3) and the data type of the second argument is DECIMAL(9,8). The data type of the result is DECIMAL(31,7).

   values multiply_alt(98765432109876543210987.654,5.43210987)

   1
   ---------------------------------
   536504678578875294857887.5277415

Note that the complete product of these two numbers is 536504678578875294857887.52774154498, but the last 4 digits were truncated to match the scale of the result data type. Using the multiplication operator with the same values results in an arithmetic overflow, since the result data type is DECIMAL(31,11) and the result value has 24 digits to the left of the decimal point, but the result data type only supports 20 digits.
------------------------------------------------------------------------
16.14 ROUND

>>-ROUND---(expression1, expression2)--------------------------><

The schema is SYSIBM.

Note: The SYSFUN version of the ROUND function continues to be available.

The ROUND function returns expression1 rounded to expression2 places to the right of the decimal point if expression2 is positive, or to the left of the decimal point if expression2 is zero or negative. If expression1 is positive, a digit value of 5 is rounded to the next higher positive number. For example, ROUND(3.5,0) = 4. If expression1 is negative, a digit value of 5 is rounded to the next lower negative number. For example, ROUND(-3.5,0) = -4.

expression1
   An expression that returns a value of any built-in numeric data type.

expression2
   An expression that returns a small or large integer. When the value of expression2 is not negative, it specifies rounding to that number of places to the right of the decimal separator. When the value of expression2 is negative, it specifies rounding to the absolute value of expression2 places to the left of the decimal separator.

If expression2 is not negative, expression1 is rounded to the absolute value of expression2 number of places to the right of the decimal point. If the value of expression2 is greater than the scale of expression1, then the value is unchanged except that the result value has a precision that is larger by 1. For example, ROUND(748.58,5) = 748.58, where the precision is now 6 and the scale remains 2.

If expression2 is negative, expression1 is rounded to the absolute value of expression2+1 number of places to the left of the decimal point. If the absolute value of a negative expression2 is larger than the number of digits to the left of the decimal point, the result is 0. For example, ROUND(748.58,-4) = 0.

The data type and length attribute of the result are the same as the data type and length attribute of the first argument, except that the precision is increased by one if expression1 is DECIMAL or NUMERIC and the precision is less than 31.
For example, an argument with a data type of DECIMAL(5,2) results in DECIMAL(6,2). An argument with a data type of DECIMAL(31,2) results in DECIMAL(31,2). The scale is the same as the scale of the first argument.

If either argument can be null or the database is configured with DFT_SQLMATHWARN set to yes, the result can be null. If either argument is null, the result is the null value.

16.14.1 Examples:

Calculate the number 873.726 rounded to 2, 1, 0, -1, -2, -3, and -4 decimal places respectively.

   VALUES (ROUND(873.726, 2), ROUND(873.726, 1), ROUND(873.726, 0),
           ROUND(873.726,-1), ROUND(873.726,-2), ROUND(873.726,-3),
           ROUND(873.726,-4) )

This example returns:

   1         2         3         4         5         6         7
   --------- --------- --------- --------- --------- --------- ---------
     873.730   873.700   874.000   870.000   900.000  1000.000     0.000

Calculate both positive and negative numbers.

   VALUES (ROUND(3.5, 0), ROUND(3.1, 0), ROUND(-3.1, 0), ROUND(-3.5,0) )

This example returns:

   1    2    3    4
   ---- ---- ---- ----
    4.0  3.0 -3.0 -4.0

------------------------------------------------------------------------

System Monitor Guide and Reference

------------------------------------------------------------------------

17.1 db2ConvMonStream

In the Usage Notes, the structure for the snapshot variable datastream type SQLM_ELM_SUBSECTION should be sqlm_subsection.

------------------------------------------------------------------------

Troubleshooting Guide

------------------------------------------------------------------------

18.1 Starting DB2 on Windows 95 and Windows 98 When the User Is Not Logged On

For a db2start command to be successful in a Windows 95 or a Windows 98 environment, you must either:

* Log on using the Windows logon window or the Microsoft Networking logon window
* Issue the db2logon command (see note (NOTE1) for information about the db2logon command).

In addition, the user ID that is specified either during the logon or for the db2logon command must meet DB2's requirements (see note (NOTE2)).

When the db2start command starts, it first checks to see if a user is logged on. If a user is logged on, the db2start command uses that user's ID. If a user is not logged on, the db2start command checks whether a db2logon command has been run, and, if so, the db2start command uses the user ID that was specified for the db2logon command. If the db2start command cannot find a valid user ID, the command terminates.

During the installation of DB2 Universal Database Version 7 on Windows 95 and Windows 98, the installation software, by default, adds a shortcut to the Startup folder that runs the db2start command when the system is booted (see note (NOTE1) for more information). If the user of the system has neither logged on nor issued the db2logon command, the db2start command will terminate.

If you or your users do not normally log on to Windows or to a network, you can hide the requirement to issue the db2logon command before a db2start command by running the commands from a batch file as follows:

1. Create a batch file that issues the db2logon command followed by the db2start.exe command. For example:

      @echo off
      db2logon db2local /p:password
      db2start
      cls
      exit

2. Name the batch file db2start.bat, and store it in the \bin directory that is under the drive and path where you installed DB2. You store the batch file in this location to ensure that the operating system can find the path to the batch file. The drive and path where DB2 is installed is stored in the DB2 registry variable DB2PATH.
   To find the drive and path where you installed DB2, issue the following command:

      db2set -g db2path

   Assume that the db2set command returns the value c:\sqllib. In this situation, you would store the batch file as follows:

      c:\sqllib\bin\db2start.bat

3. To start DB2 when the system is booted, you should run the batch file from a shortcut in the Startup folder. You have two options:

   o Modify the shortcut that is created by the DB2 installation program to run the batch file instead of db2start.exe. In the preceding example, the shortcut would now run the db2start.bat batch file. The shortcut that is created by the DB2 installation program is called DB2 - DB2.lnk, and is located in c:\WINDOWS\Start Menu\Programs\Startup\DB2 - DB2.lnk on most systems.

   o Add your own shortcut to run the batch file, and delete the shortcut that is added by the DB2 installation program. Use the following command to delete the DB2 shortcut:

        del "C:\WINDOWS\Start Menu\Programs\Startup\DB2 - DB2.lnk"

     If you decide to use your own shortcut, you should set the close on exit attribute for the shortcut. If you do not set this attribute, the DOS command prompt is left in the task bar even after the db2start command has successfully completed. To prevent the DOS window from being opened during the db2start process, you can set this shortcut (and the DOS window it runs in) to run minimized.

Note: As an alternative to starting DB2 during the boot of the system, DB2 can be started prior to the running of any application that uses DB2. See note (NOTE5) for details.

If you use a batch file to issue the db2logon command before the db2start command is run, and your users occasionally log on, the db2start command will continue to work, the only difference being that DB2 will use the user ID of the logged-on user. See note (NOTE1) for additional details.

Notes:

1. The db2logon command simulates a user logon. The format of the db2logon command is:

      db2logon userid /p:password

   The user ID that is specified for the command must meet the DB2 naming requirements (see note (NOTE2) for more information). If the command is issued without a user ID and password, a window opens to prompt the user for the user ID and password. If the only parameter provided is a user ID, the user is not prompted for a password; under certain conditions a password is required, as described below.

   The user ID and password values that are set by the db2logon command are used only if the user did not log on using either the Windows logon window or the Microsoft Networking logon window. If the user has logged on, and a db2logon command has been issued, the user ID from the db2logon command is used for all DB2 actions, but the password specified on the db2logon command is ignored.

   When the user has not logged on using the Windows logon window or the Microsoft Networking logon window, the user ID and password that are provided through the db2logon command are used as follows:

   o The db2start command uses the user ID when it starts, and does not require a password.

   o In the absence of a high-level qualifier for actions like creating a table, the user ID is used as the high-level qualifier. For example:

     1. If you issue the following:

           db2logon db2local

     2. Then issue the following:

           create table tab1

     The table is created as db2local.tab1; that is, db2local is used as the high-level qualifier. You should use a user ID that is equal to the schema name of your tables and other objects.
   o When the system acts as a client to a server, and the user issues a CONNECT statement without a user ID and password (for example, CONNECT TO TEST) and authentication is set to server, the user ID and password from the db2logon command are used to validate the user at the remote server. If the user connects with an explicit user ID and password (for example, CONNECT TO TEST USER userID USING password), the values that are specified for the CONNECT statement are used.

2. In Version 7, the user ID that is either used to log on or specified for the db2logon command must conform to the following DB2 requirements:

   o It cannot be any of the following: USERS, ADMINS, GUESTS, PUBLIC, LOCAL, or any SQL reserved word that is listed in the SQL Reference.
   o It cannot begin with: SQL, SYS, or IBM.
   o Characters can include:
     + A through Z (Windows 95 and Windows 98 support case-sensitive user IDs)
     + 0 through 9
     + @, #, or $

3. You can prevent the creation of the db2start shortcut in the Startup folder during a custom interactive installation, or by specifying the DB2.AUTOSTART=NO option during a response file installation. If you use these options, there is no db2start shortcut in the Startup folder, and you must add your own shortcut to run the db2start.bat file.

4. On Windows 98, an option is available that you can use to specify a user ID that is always logged on when Windows 98 is started. In this situation, the Windows logon window will not appear. If you use this option, a user is logged on and the db2start command will succeed if the user ID meets DB2 requirements (see note (NOTE2) for details). If you do not use this option, the user will always be presented with a logon window. If the user cancels out of this window without logging on, the db2start command will fail unless the db2logon command was previously issued, or invoked from the batch file, as described above.

5. If you do not start DB2 during a system boot, DB2 can be started by an application. You can run the db2start.bat file as part of the initialization of applications that use DB2. Using this method, DB2 will only be started when the application that will use it is started. When the user exits the application, a db2stop command can be issued to stop DB2. Your business applications can start DB2 in this way, if DB2 is not started during the system boot.

   To use the DB2 Synchronizer application or call the synchronization APIs from your application, DB2 must be started if the scripts that are downloaded for execution contain commands that operate either against a local instance or a local database. These commands can be in database scripts, instance scripts, or embedded in operating system (OS) scripts. If an OS script does not contain Command Line Processor commands or DB2 APIs that use an instance or a database, it can be run without DB2 being started. Because it may be difficult to tell in advance what commands will be run from your scripts during the synchronization process, DB2 should normally be started before synchronization begins.

   If you are calling either the db2sync command or the synchronization APIs from your application, you would start DB2 during the initialization of your application. If your users will be using the DB2 Synchronizer shortcut in the DB2 for Windows folder to start synchronization, the DB2 Synchronization shortcut must be modified to run a db2sync.bat file.
   The batch file should contain the following commands to ensure that DB2 is running before synchronization begins:

      @echo off
      db2start.bat
      db2sync.exe
      db2stop.exe
      cls
      exit

   In this example, it is assumed that the db2start.bat file invokes the db2logon and db2start commands as described above.

   If you decide to start DB2 when the application starts, ensure that the installation of DB2 does not add a shortcut to the Startup folder to start DB2. See note (NOTE3) for details.

------------------------------------------------------------------------

Using DB2 Universal Database on 64-bit Platforms

------------------------------------------------------------------------

19.1 Chapter 5. Configuration

DB2 users on the 64-bit Solaris operating system should increase the value of "shmsys:shminfo_shmmax" in /etc/system, as necessary, to be able to allocate a large database shared memory set. The DB2 for UNIX Quick Beginnings book recommends setting that parameter to "90% of the physical RAM in the machine, in bytes". This recommendation is also valid for 64-bit implementations.

However, there is a problem with this recommendation for 32-bit systems with more than 4 GB of RAM (up to 64 GB in total is possible on the Solaris operating system): if a user sets the shmmax value to a number larger than 4 GB, and is using a 32-bit kernel, the kernel only looks at the lower 32 bits of the number, sometimes resulting in a very small value for shmmax.
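For illustration only, on a 64-bit Solaris machine with 4 GB of physical RAM, the /etc/system entry would take roughly the following form. The value 3865470566 (about 90% of 4 GB) is a hypothetical figure; compute your own from the physical RAM of your machine, and reboot after editing /etc/system for the change to take effect:

   set shmsys:shminfo_shmmax=3865470566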
The last sentence in "Chapter 5. Configuration":

   If the DBHEAP value is greater than 64 KB, the cast results in a wrapped value...

should be changed to:

   If the DBHEAP value is greater than 65535, the cast results in a wrapped value...

------------------------------------------------------------------------

19.2 Chapter 6. Restrictions

There is currently no LDAP support on 64-bit operating systems.

32-bit and 64-bit databases cannot be created on the same path. For example, if a 32-bit database exists on <somedir>, then:

   db2 create db <dbname> on <somedir>

if issued from a 64-bit instance, fails with "SQL10004C An I/O error occurred while accessing the database directory."

------------------------------------------------------------------------

Control Center

------------------------------------------------------------------------

20.1 Ability to Administer DB2 Server for VSE and VM Servers

The DB2 Universal Database Version 7.1 Control Center has enhanced its support of DB2 Server for VSE and VM databases. All DB2 Server for VSE and VM database objects can be viewed by the Control Center. There is also support for the CREATE INDEX, REORGANIZE INDEX, and UPDATE STATISTICS statements, and for the REBIND command. REORGANIZE INDEX and REBIND require a stored procedure running on the DB2 Server for VSE and VM hosts. This stored procedure is supplied by the Control Center for VSE and VM feature of DB2 Server for VSE and VM.

The fully integrated Control Center allows the user to manage DB2, regardless of the platform on which the DB2 server runs. DB2 Server for VSE and VM objects are displayed on the Control Center main window, along with DB2 Universal Database objects. The corresponding actions and utilities to manage these objects are invoked by selecting the object. For example, a user can list the indexes of a particular database, select one of the indexes, and reorganize it. The user can also list the tables of a database and run update statistics, or define a table as a replication source.

For information about configuring the Control Center to perform administration tasks on DB2 Server for VSE and VM objects, refer to the DB2 Connect User's Guide, or the Installation and Configuration Supplement.

------------------------------------------------------------------------

20.2 Java 1.2 Support for the Control Center

The Control Center supports bi-directional languages, such as Arabic and Hebrew, using bi-di support in Java 1.2. This support is provided for the Windows NT platform only. Java 1.2 must be installed for the Control Center to recognize and use it:

1. JDK 1.2.2 is available on the DB2 UDB CD under the DB2\bidi\NT directory. ibm-inst-n122p-win32-x86.exe is the installer program, and ibm-jdk-n122p-win32-x86.exe is the JDK distribution. Copy both files to a temporary directory on your hard drive, then run the installer program from there.
2. Install it under <DB2PATH>\java\Java12, where <DB2PATH> is the installation path of DB2.
3. Do not select JDK/JRE as the System VM when prompted by the JDK/JRE installation.

After Java 1.2 is installed successfully, starting the Control Center in the normal manner will use Java 1.2. To stop the use of Java 1.2, you may either uninstall JDK/JRE from <DB2PATH>\java\Java12, or simply rename the <DB2PATH>\java\Java12 sub-directory to something else.

Note: Do not confuse <DB2PATH>\java\Java12 with <DB2PATH>\Java12. <DB2PATH>\Java12 is part of the DB2 installation, and includes JDBC support for Java 1.2.

------------------------------------------------------------------------

20.3 "Invalid shortcut" Error when Using the Online Help on the Windows Operating System

When using the Control Center online help, you may encounter an error like: "Invalid shortcut". If you have recently installed a new Web browser or a new version of a Web browser, ensure that HTML and HTM documents are associated with the correct browser. See the Windows Help topic "To change which program starts when you open a file".

------------------------------------------------------------------------

20.4 "File access denied" Error when Attempting to View a Completed Job in the Journal on the Windows Operating System

On DB2 Universal Database for Windows NT, a "File access denied" error occurs when attempting to open the Journal to view the details of a job created in the Script Center. The job status shows complete. This behavior occurs when a job created in the Script Center contains the START command. To avoid this behavior, use START/WAIT instead of START in both the batch file and in the job itself.

------------------------------------------------------------------------

20.5 Multisite Update Test Connect

Multisite Update Test Connect functionality in the Version 7.1 Control Center is limited by the version of the target instance. The target instance must be at least Version 7.1 for the "remote" test connect functionality to run. To run Multisite Update Test Connect functionality in Version 6, you must bring up the Control Center locally on the target instance and run it from there.

------------------------------------------------------------------------

20.6 Control Center for DB2 for OS/390

The DB2 UDB Control Center for OS/390 allows you to manage the use of your licensed IBM DB2 utilities. Utility functions that are elements of separately orderable features of DB2 UDB for OS/390 must be licensed and installed in your environment before being managed by the DB2 Control Center.

The "CC390" database, defined with the Control Center when you configure a DB2 for OS/390 subsystem, is used for internal support of the Control Center.
Do not modify this database.

Although DB2 for OS/390 Version 7.1 is not mentioned specifically in the Control Center table of contents, or the Information Center Task information, the documentation does support the DB2 for OS/390 Version 7.1 functions. Many of the DB2 for OS/390 Version 6-specific functions also relate to DB2 for OS/390 Version 7.1, and some functions that are DB2 for OS/390 Version 7.1-specific in the table of contents have no version designation. If you have configured a DB2 for OS/390 Version 7.1 subsystem on your Control Center, you have access to all the documentation for that version.

To access and use the Generate DDL function from the Control Center for DB2 for OS/390, you must have the Generate DDL function installed:

* For Version 5, install DB2Admin 2.0 with DB2 for OS/390 Version 5.
* For Version 6, install the small programming enhancement that will be available as a PTF for the DB2 Admin feature of DB2 for OS/390 Version 6.
* For Version 7.1, the Generate DDL function is part of the separately priced DB2 Admin feature of DB2 for OS/390 Version 7.1.

You can access the Stored Procedure Builder from the Control Center, but you must have already installed it by the time you start the DB2 UDB Control Center. It is part of the DB2 Application Development Client.

To catalog a DB2 for OS/390 subsystem directly on the workstation, use the Client Configuration Assistant tool (a command-line sketch of this catalog step appears at the end of this section):

1. On the Source page, select the Manually configure a connection to a database radio button.
2. On the Protocol page, complete the appropriate communications information.
3. On the Database page, specify the subsystem name in the Database name field.
4. On the Node Options page, select the Configure node options (Optional) check box.
5. Select MVS/ESA, OS/390 from the list in the Operating system field.
6. Click Finish to complete the configuration.

To catalog a DB2 for OS/390 subsystem via a gateway machine, follow steps 1-6 above on the gateway machine, and then:

1. On the client machine, start the Control Center.
2. Right-click on the Systems folder and select Add.
3. In the Add System dialog, type the gateway machine name in the System name field.
4. Type DB2DAS00 in the Remote instance field.
5. For the TCP/IP protocol, in the Protocol parameters, specify the gateway machine's host name in the Host name field.
6. Type 523 in the Service name field.
7. Click OK to add the system. You should now see the gateway machine added under the Systems folder.
8. Expand the gateway machine name.
9. Right-click on the Instances folder and select Add.
10. In the Add Instance dialog, click Refresh to list the instances available on the gateway machine. If the gateway machine is a Windows NT system, the DB2 for OS/390 subsystem was probably cataloged under the instance DB2.
11. Select the instance. The protocol parameters are filled in automatically for this instance.
12. Click OK to add the instance.
13. Open the Instances folder to see the instance you just added.
14. Expand the instance.
15. Right-click on the Databases folder and select Add.
16. Click Refresh to display the local databases on the gateway machine. If you are adding a DB2 subsystem in the Add Database dialog, type the subsystem name in the Database name field. Optionally, type a local alias name for the subsystem (or the database).
17. Click OK. You have now successfully added the subsystem in the Control Center. When you open the database, you should see the DB2 for OS/390 subsystem displayed.
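The direct catalog in the first procedure can also be scripted from the DB2 Command Line Processor. The following is a minimal sketch, not taken from the original documentation: the node name mvsnode, the host name mvshost.example.com, and the subsystem location name DB2SUB are hypothetical placeholders, and port 446 is only the conventional DRDA port and may differ on your system:

   db2 catalog tcpip node mvsnode remote mvshost.example.com server 446
   db2 catalog dcs database db2sub as db2sub
   db2 catalog database db2sub as db2sub at node mvsnode authentication dcs

The DCS directory entry is what identifies the database as a host (DRDA) subsystem rather than a workstation database.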
------------------------------------------------------------------------

20.7 Required Fix for Control Center for OS/390

You must apply APAR PQ36382 to the 390 Enablement feature of DB2 for OS/390 Version 5 and DB2 for OS/390 Version 6 to manage these subsystems using the DB2 UDB Control Center for Version 7.1. Without this fix, you cannot use the DB2 UDB Control Center for Version 7.1 to run utilities for those subsystems.

The APAR should be applied to the following FMIDs:

   DB2 for OS/390 Version 5 390 Enablement: FMID JDB551D
   DB2 for OS/390 Version 6 390 Enablement: FMID JDB661D

------------------------------------------------------------------------

20.8 Change to the Create Spatial Layer Dialog

The "<<" and ">>" buttons have been removed from the Create Spatial Layer dialog.

------------------------------------------------------------------------

20.9 Troubleshooting Information for the DB2 Control Center

In the "Control Center Installation and Configuration" chapter in your Quick Beginnings book, the section titled "Troubleshooting Information" tells you to unset your client browser's CLASSPATH from a command window if you are having problems running the Control Center as an applet. This section also tells you to start your browser from the same command window. However, the command for starting your browser is not provided.

To launch Internet Explorer, type start iexplore and press Enter. To launch Netscape, type start netscape and press Enter. These commands assume that your browser is in your PATH. If it is not, add it to your PATH or switch to your browser's installation directory and reissue the start command.

------------------------------------------------------------------------

20.10 Control Center Troubleshooting on UNIX Based Systems

If you are unable to start the Control Center on a UNIX based system, set the JAVA_HOME environment variable to point to your Java distribution. For example, if java is installed under /usr/jdk118, set JAVA_HOME to /usr/jdk118:

* For the sh, ksh, or bash shell: export JAVA_HOME=/usr/jdk118
* For the csh or tcsh shell: setenv JAVA_HOME /usr/jdk118

------------------------------------------------------------------------

20.11 Possible Infopops Problem on OS/2

If you are running the Control Center on OS/2, using screen size 1024x768 with 256 colors, and with Workplace Shell Palette Awareness enabled, infopops that extend beyond the border of the current window may be displayed with black text on a black background. To fix this problem, either change the display setting to more than 256 colors, or disable Workplace Shell Palette Awareness.

------------------------------------------------------------------------

20.12 Launching More Than One Control Center Applet

You cannot launch more than one Control Center applet simultaneously on the same machine. This restriction applies to Control Center applets running in all supported browsers.

------------------------------------------------------------------------

20.13 Help for the jdk11_path Configuration Parameter

In the Control Center help, the description of the Java Development Kit 1.1 Installation Path (jdk11_path) configuration parameter is missing a line under the sub-heading Applies To.
The complete list under Applies To is:

* Database server with local and remote clients
* Client
* Database server with local clients
* Partitioned database server with local and remote clients
* Satellite database server with local clients

------------------------------------------------------------------------

20.14 Solaris System Error (SQL10012N) when Using the Script Center or the Journal

When selecting a Solaris system from the Script Center or the Journal, the following error may be encountered:

   SQL10012N - An unexpected operating system error was received while
   loading the specified library
   "/udbprod/db2as/sqllib/function/unfenced/db2scdar!ScheduleInfoOpenScan".
   SQLSTATE=42724.

This is caused by a bug in the Solaris runtime linker. To correct this problem, apply the following patch: 105490-06 (107733 makes 105490 obsolete) for Solaris 2.6.

------------------------------------------------------------------------

20.15 Help for the DPREPL.DFT File

In the Control Center, in the help for the Replication page of the Tool Settings notebook, step 5d says:

   Save the file into the working directory for the Control Center (for example, SQLLIB\BIN) so that the system can use it as the default file.

Step 5d should say:

   Save the file into the working directory for the Control Center (SQLLIB\CC) so that the system can use it as the default file.

------------------------------------------------------------------------

20.16 Online Help for the Control Center Running as an Applet

When the Control Center is running as an applet, the F1 key only works in windows and notebooks that have infopops. You can press the F1 key to bring up infopops in the following components:

* DB2 Universal Database for OS/390
* The wizards

In the rest of the Control Center components, F1 does not bring up any help. To display help for the other components, please use the Help push button, or the Help pull-down menu.

------------------------------------------------------------------------

20.17 Running the Control Center in Applet Mode (Windows 95)

An attempt to open the Script Center may fail if an invalid user ID and password are specified. Ensure that a valid user ID and password are entered when signing on to the Control Center.

------------------------------------------------------------------------

20.18 DB2 Control Center for OS/390

The first paragraph in the section "Control Center 390" states:

   The DB2 UDB Control Center for OS/390 allows you to manage the use of your licensed IBM DB2 utilities. Utility functions that are elements of separately orderable features of DB2 UDB for OS/390 must be licensed and installed in your environment before being managed by the DB2 Control Center.

This section should now read:

   The DB2 Control Center for OS/390 allows you to manage the use of your licensed IBM DB2 utilities. Utility functions that are elements of separately orderable products must be licensed and installed in your environment in order to be managed by DB2 Control Center.

------------------------------------------------------------------------

Data Warehouse Center

* When creating editioned SQL steps, based on usage, you might want to consider creating a non-unique index on the edition column to speed the deletion of editions. Consider this for large warehouse tables only, since the performance of inserts can be impacted when inserting a small number of rows.

* In the Process Model window, if you change a source or target, the change that you made is automatically saved immediately.
  If you make any other change, such as adding a step, you must explicitly save the change to make the change permanent. To save the change, click Process --> Save.

* You can specify up to 254 characters in the Description field of notebooks in the Data Warehouse Center. This maximum replaces the maximum lengths specified in the online help.

* You cannot successfully run a Sample Contents request that uses the AS/400 agent on a flat file source. Although you can create a flat file source and attempt to use an AS/400 agent to issue a sampleContent request, the request will fail.

* You might receive an error when you run Sample Contents on a warehouse target in the process modeler. This error is related to the availability of a common agent site for the warehouse source, warehouse target, and step in a process. The list of available agent sites for a step is obtained from the intersection of the warehouse source IR agent sites, the warehouse target IR agent sites, and the agent sites available for this particular step (the steps are selected in the last page of the agent sites properties notebook). For example, suppose you want to view the Sample Contents for a process that runs the FTP Put program (VWPRCPY). The step used in the process must be selected for the agent site in the agent site definition. When you run Sample Contents against the target file, the first agent site on the selected list is usually used. However, database maintenance operations might affect the order of the agent sites listed. Sample Contents will fail if the agent site selected does not reside on the same system as the source or target file.

* When you try to edit the Create DDL SQL statement for a target table for a step in development mode, you see the following misleading message: "Any change to the Create DDL SQL statement will not be reflected on the table definition or actual physical table. Do you want to continue?" The change will be reflected in the actual physical table. Ignore the message and continue changing the Create DDL statement. The corrected version of this message for steps in development mode should read as follows: "Any change to the Create DDL SQL statement will not be reflected in the table definition. Do you want to continue?" For steps in test or production mode, the message is correct. The Data Warehouse Center will not change the physical target table that was created when you promoted the step to test mode.

* If you want to migrate Visual Warehouse metadata synchronization business views to the Data Warehouse Center, promote the business views to production status before you migrate the warehouse control database. If the business views are in production status, their schedules are migrated to the Data Warehouse Center. If the business views are not in production status, they will be migrated in test status without their schedules. You cannot promote the migrated steps to production status. You must create the synchronization steps again in the Data Warehouse Center and delete the migrated steps.

* When the Data Warehouse Center generates the target table for a step, it does not generate a primary key for the target table. Some of the transformers, such as Moving Average, use the generated table as a source table and also require that the source table have a primary key. Before you use the generated table with the transformer, define the primary key for the table by right-clicking the table in the DB2 Control Center and clicking Alter. (An equivalent SQL sketch of this step appears after this list of notes.)
* To access Microsoft SQL Server on Windows NT using the Merant ODBC drivers, verify that the system path contains the sqllib\odbc32 directory.

* When you define a warehouse source or warehouse target for an OS/2 database, type the database name in uppercase letters.

* The DB2 Control Center or the Command Line Processor might indicate that the warehouse control database is in an inconsistent state. This state is expected because it indicates that the warehouse server did not commit its initial startup message to the warehouse logger.

* In the data warehousing sample contained in the TBC_MD database, you cannot use SQL Assist to change the SQL in the Select Scenario SQL step, because the SQL was edited after it was generated by SQL Assist.

* To use the FormatDate function, click Build SQL on the SQL Statement page of the Properties notebook for an SQL step. The output of the FormatDate function is of data type varchar(255). You cannot change the data type by selecting Date, Time, or Date/Time from the Category list on the Function Parameters - FormatDate page.

* On AIX and the Solaris Operating Environment, the installation process sets the language that is used to publish to the information catalog and to export to the OLAP Integration Server. If you want to use these functions in a language other than the language set during installation, create the following soft link by entering the following command on one line:

  On AIX:

     /usr/bin/ln -sf /usr/lpp/db2_07_01/msg/locale/flgnxolv.str /usr/lpp/db2_07_01/bin/flgnxolv.str

  On the Solaris Operating Environment:

     /usr/bin/ln -sf /opt/IBMdb2/V7.1/msg/locale/flgnxolv.str /opt/IBMdb2/V7.1/bin/flgnxolv.str

  where locale is the locale name of the language in xx_yy format.

* When you use the Update the value in the key column option of the Generate Key Table transformer, the transformer updates only those rows in the table that do not have key values (that is, the values are null). When additional rows are inserted into the table, the key values are null until you run the transformer again. To avoid this problem, after the initial run of the transformer, use the Replace all values option to create the keys for all the rows again.

* The warehouse server does not maintain connections to local or remote databases when the DB2 server that manages the databases is stopped and restarted. If you stop and restart DB2, then stop and restart the warehouse services as well.

* When you install the DB2 Administration Client and the Data Warehousing Tools to set up a Data Warehouse Center administrative client on a different workstation from the one that contains the warehouse server, you must add the TCP/IP port number at which the warehouse server workstation is listening to the services file for the client workstation. Add an entry into the services file as follows:

     vwkernel 11000/tcp

* When you define a warehouse source for a DB2 for VM database, which is accessed through a DRDA gateway, there are restrictions on the use of CLOB and BLOB data types:

  o You cannot use the Sample Contents function to view data of CLOB and BLOB data types.
  o You cannot use columns of CLOB and BLOB data types with an SQL step.

  This is a known restriction of the DB2 for VM Version 5.2 server, in which LOB objects cannot be transmitted using DRDA to a DB2 Version 7.1 client.

* When you define a DB2 for VM or DB2 for VSE target table in the Data Warehouse Center, do not select the Grant to public check box. The GRANT command syntax that the Data Warehouse Center generates is not supported on DB2 for VM and DB2 for VSE.

* To enable delimited identifier support for Sybase and Microsoft SQL Server on Windows NT, select the Enable Quoted Identifiers check box on the Advanced page of the ODBC Driver Setup notebook. To enable delimited identifier support for Sybase on UNIX, edit the Sybase data source in the .odbc.ini file to include the connect attribute EQI=1.
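As noted in the item above about transformers such as Moving Average, the primary key for a generated target table can also be added with SQL rather than through the Control Center Alter dialog. A minimal sketch, using hypothetical table and column names (the generated table and its identifying column will have different names on your system); note that the key column must already be defined as NOT NULL before a primary key can be added:

   ALTER TABLE IWH.SALES_SUMMARY ADD PRIMARY KEY (RECORD_ID)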
------------------------------------------------------------------------

21.1 Data Warehouse Center Publications

21.1.1 Data Warehouse Center Application Integration Guide

In Chapter 6. Data Warehouse Center metadata, the description of the POSNO column object property should be changed to: an index, starting with 1, of the column or field in the row of the table or file.

In Chapter 8. Information Catalog Manager object types, the directory where you can find the .TYP files, which include the tag language for defining an object type, has been changed to \SQLLIB\DGWIN\TYPES.

21.1.2 Data Warehouse Center Administration Guide

* The Data Warehouse Center troubleshooting information has moved to the DB2 Troubleshooting Guide.

* In "Chapter 5. Defining and running processes", section "Starting a step from outside the Data Warehouse Center", it should be noted that JDK 1.1.8 or later is required on the warehouse server workstation and the agent site if you start a step that has a double-byte name.

* On page 180, section "Defining values for a Submit OS/390 JCL jobstream (VWPMVS) program", step 8 states that you must define a .netrc file in the same directory as the JES file. Instead, the program creates the .netrc file itself: if the file does not exist, the program creates it in the home directory. If a .netrc file already exists in the home directory, the program renames the existing file and creates a new file. When the program finishes processing, it deletes the new .netrc file that it created and renames the original file back to .netrc.

* In the Data warehousing sample appendix, section "Viewing and modifying the sample metadata", the GEOGRAPHIES table should be included in the list of source tables.

* In the Data warehousing sample appendix, section "Promoting the steps", in the procedure for promoting steps to production mode, the following statement is incorrect because the target table was created when you promoted the step to test mode: "The Data Warehouse Center starts to create the target table, and displays a progress window."

* On Microsoft Windows NT and Windows 2000, the Data Warehouse Center logs events to the system event log. The Event ID corresponds to the Data Warehouse Center message number. For information about the Data Warehouse Center messages, refer to the Message Reference.

* The example in Figure 20 on page 315 has an error. The following commands are correct:

     "C:\IS\bin\olapicmd" < "C:\IS\Batch\my_script.script" > "C:\IS\Batch\my_script.log"

  The double quotation marks around "C:\IS\bin\olapicmd" are necessary if the name of a directory in the path contains a blank, such as Program Files.

* In "Appendix F. Using Classic Connect with the Data Warehouse Center", the section "Installing the CROSS ACCESS ODBC driver" on page 388 has been replaced with the following information: Install the CROSS ACCESS ODBC driver by performing a custom install of the DB2 Warehouse Manager Version 7, and selecting the Classic Connect Drivers component. The driver is not installed as part of a typical installation of the DB2 Warehouse Manager.
  The CROSS ACCESS ODBC driver will be installed in the ODBC32 subdirectory of the SQLLIB directory. After the installation is complete, you must manually add the path for the driver (for example, C:\Program Files\SQLLIB\ODBC32) to the PATH system environment variable. If you have another version of the CROSS ACCESS ODBC driver already installed, place the ...\SQLLIB\ODBC32\ path before the path for the other version. The operating system will use the first directory in the path that contains the CROSS ACCESS ODBC driver.

* The following procedure should be added to "Appendix F. Using Classic Connect with the Data Warehouse Center". Installing the Classic Connect ODBC driver:

  1. Insert the Warehouse Manager CD-ROM into your CD-ROM drive. The launchpad opens.
  2. Click Install from the launchpad.
  3. In the Select Products window, ensure that the DB2 Warehouse Manager check box is selected, then click Next.
  4. In the Select Installation Type window, select Custom, then click Next.
  5. In the Select Components window, select Classic Connect Drivers and Warehouse Agent, clear all other check boxes, and then click Next.
  6. In the Start Copying Files window, review your selections. If you want to change any of your selections, click Back to return to the window where you can change the selection. Click Next to begin the installation.

* In "Appendix G. Data Warehouse Center environment structure" on page 401, there is an incorrect entry in the table. C:\Program Files\SQLLIB\ODBC32 is not added to the PATH environment variable. The only update to the PATH environment variable is C:\Program Files\SQLLIB\BIN.

* The book states that the Invert transformer can create a target table based on parameters, but it misses the point that the generated target table will not have the desired output columns; those columns must be created explicitly in the target table.

21.1.3 Data Warehouse Center Messages

Data Warehouse Center message DWC3778E should read as follows: "Cannot delete a Data Warehouse Center default Data Warehouse Center Program Group."

Data Warehouse Center message DWC3806E should read as follows: "Step being created or updated is not associated with either a source resource or Data Warehouse Center program for population."

Data Warehouse Center message DWC6119E should read as follows: "The warehouse client failed to receive a response from the warehouse server."

21.1.4 Data Warehouse Center Online Help

* A table or view must be defined for replication using the DB2 Control Center before it can be used as a replication source in the Data Warehouse Center.

* Before running the Essbase VWPs with the AS/400 agent, ARBORLIB and ARBORPATH need to be set as *sys environment variables. To set these, the user ID must have *jobctl authority. These environment variables need to point to the library where Essbase is installed.

* Publish Data Warehouse Center Metadata window and associated properties window: in step 10 of the task help, an example states that if you specify a limit value of 1 (Limit the levels of objects in the tree) and publish a process, only 1 step from that process is published and displayed. This example is not correct in all situations. In step 8, on the second bulleted item, the first statement is incorrect. It should read "Click at the column level to generate a transformation object between an information catalog source column and a target column."

* Any references in the online help to "foreign keys" should read "warehouse foreign keys."
* Any references in the online help to the "Define Replication notebook" should read "replication step notebook."

* Importing a tag language online help: in the bulleted list showing common import errors, one item in the list is "Importing a tag language file that was not exported properly". This item is not applicable to the list of common import errors.

* In the "Add data" topic of the online help, the links to the "Adding source tables to a process" and "Adding target tables to a process" topics are broken. You can find these topics in the help index.

* The help topics "Importing source tables and views into a warehouse source" and "Importing target tables into a warehouse target" contain incorrect information regarding the wildcard character. The sentence: "For example, XYZ* would return tables and views with schemas that start with these characters." should read: "For example, XYZ% would return tables and views with schemas that start with these characters."
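The % character in the corrected sentence is the standard SQL LIKE wildcard, which is presumably the pattern semantics the import filter applies. The same pattern can be checked directly against the system catalog; for example, assuming schemas that begin with XYZ:

   SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA LIKE 'XYZ%'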
21.1.5 Revised Business Intelligence Tutorial

FixPak 2 includes a revised Business Intelligence Tutorial and Data Warehouse Center Sample database, which correct various problems that exist in Version 7.1. In order to apply the revised Data Warehouse Center Sample database, you must do the following:

If you have not yet installed the sample databases, create new sample databases using the First Steps launch pad. Click Start and select Programs --> IBM DB2 --> First Steps.

If you have previously installed the sample databases, drop the sample databases DWCTBC, TBC_MD, and TBC. If you have added any data that you want to keep to the sample databases, back them up before dropping them. To drop the three sample databases:

1. To open the DB2 Command Window, click Start and select Programs --> IBM DB2 --> Command Window.
2. In the DB2 Command Window, type each of the following three commands, pressing Enter after typing each one:

      db2 drop database dwctbc
      db2 drop database tbc_md
      db2 drop database tbc

3. Close the DB2 Command Window.
4. Create new sample databases using the First Steps launch pad. Click Start and select Programs --> IBM DB2 --> First Steps.

------------------------------------------------------------------------

21.2 Warehouse Control Database

This section covers the following topics related to the management of warehouse control databases:

* The default warehouse control database
* The Warehouse Control Database Management window
* Changing the active warehouse control database
* Creating and initializing a warehouse control database
* Migrating IBM Visual Warehouse control databases for use with the DB2 Version 7.1 Data Warehouse Center

21.2.1 The default warehouse control database

During a typical DB2 installation on Windows NT or Windows 2000, DB2 creates and initializes a default warehouse control database for the Data Warehouse Center if there is no active warehouse control database identified in the Windows NT registry. Initialization is the process in which the Data Warehouse Center creates the control tables that are required to store Data Warehouse Center metadata. The default warehouse control database is named DWCTRLDB. When you log on, the Data Warehouse Center specifies DWCTRLDB as the warehouse control database by default. To see the name of the warehouse control database that will be used, click the Advanced button on the Data Warehouse Center Logon window.

21.2.2 The Warehouse Control Database Management window

The Warehouse Control Database Management window is installed during a typical DB2 installation on Windows NT or Windows 2000. You can use this window to change the active warehouse control database, create and initialize new warehouse control databases, and migrate warehouse control databases that have been used with IBM Visual Warehouse. The following sections discuss each of these activities. Stop the warehouse server before using the Warehouse Control Database Management window.

21.2.3 Changing the active warehouse control database

If you want to use a warehouse control database other than the active warehouse control database, use the Warehouse Control Database Management window to register the database as the active control database. If you specify a name other than the active warehouse control database when you log on to the Data Warehouse Center, you will receive an error that states that the database that you specified does not match the database specified by the warehouse server. To register the database:

1. Click Start --> Programs --> IBM DB2 --> Warehouse Control Database Management.
2. In the New control database field, type the name of the control database that you want to use.
3. In the Schema field, type the name of the schema to use for the database.
4. In the User ID field, type the user ID that is required to access the database.
5. In the Password field, type the password for the user ID.
6. In the Verify Password field, type the password again.
7. Click OK. The window remains open. The Messages field displays messages that indicate the status of the registration process.
8. After the process is complete, close the window.

21.2.4 Creating and initializing a warehouse control database

If you want to create a warehouse control database other than the default, you can create it during the installation process or after installation by using the Warehouse Control Database Management window. You can use the installation process to create a database on the same workstation as the warehouse server or on a different workstation. To change the name of the warehouse control database that is created during installation, you must perform a custom installation and change the name on the Define a Local Warehouse Control Database window. The installation process will create the database with the name that you specify, initialize the database for use with the Data Warehouse Center, and register the database as the active warehouse control database.

To create a warehouse control database during installation on a workstation other than where the warehouse server is installed, select Warehouse Local Control Database during a custom installation. The installation process will create the database. After installation, you must then use the Warehouse Control Database Management window on the warehouse server workstation by following the steps in 21.2.3, Changing the active warehouse control database. Specify the database name that you specified during installation. The database will be initialized for use with the Data Warehouse Center and registered as the active warehouse control database.

To create and initialize a warehouse control database after the installation process, use the Warehouse Control Database Management window on the warehouse server workstation. If the new warehouse control database is not on the warehouse server workstation, you must create the database first and catalog it on the warehouse server workstation (a sketch of the catalog commands follows this section). Then follow the steps in 21.2.3, Changing the active warehouse control database, specifying the name of the database that you created. When you log on to the Data Warehouse Center, click the Advanced button and type the name of the active warehouse control database.
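A minimal sketch of creating and cataloging a remote control database, using hypothetical names (database WHCTRL, node whnode, host ctrlhost.example.com) and assuming the remote instance listens on the common default port 50000:

   (on the remote workstation)
   db2 create database whctrl

   (on the warehouse server workstation)
   db2 catalog tcpip node whnode remote ctrlhost.example.com server 50000
   db2 catalog database whctrl at node whnode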
21.2.5 Migrating IBM Visual Warehouse control databases

DB2 Universal Database Quick Beginnings for Windows provides information about how the active warehouse control database is migrated during a typical install of DB2 Universal Database Version 7.1 on Windows NT and Windows 2000. If you have more than one warehouse control database to be migrated, you must use the Warehouse Control Database Management window to migrate the additional databases. Only one warehouse control database can be active at a time. If the last database that you migrate is not the one that you intend to use when you next log on to the Data Warehouse Center, you must use the Warehouse Control Database Management window to register the database that you intend to use.

------------------------------------------------------------------------

21.3 Setting up and running replication with Data Warehouse Center

1. Setting up and running replication with the Data Warehouse Center requires that the Replication Control tables exist on both the Warehouse Control database and the Warehouse Target databases. The Replication Control tables are found in the ASN schema, and their names all start with IBMSNAP. The Replication Control tables are automatically created for you on a database when you define a Replication Source via the Control Center, if the Control tables do not already exist. Note that the Control tables must also exist on the Target database. To get a set of Control tables created on the target database, you can either create a Replication Source using the Control Center and then remove the Replication Source, leaving just the Control tables in place, or you can use the DJRA (DataJoiner Replication Administration) product to define just the control tables.

2. Installing and Using the DJRA

   If you want or need to use the DJRA to define the control tables, you will need to install it first. The DJRA ships as part of DB2. To install the DJRA, go to the d:\sqllib\djra directory (where your DB2 is installed) and click on the djra.exe package. This will install the DJRA on your system. To access the DJRA after that, on Windows NT, from the start menu, click on the DB2 for Windows NT selection, then select Replication, then select Replication Administration Tools.

   The DJRA interface is a bit different from usual NT applications. For each function that it performs, it creates a set of SQL to be run, but does not execute it. The user must manually save the generated SQL and then select the Execute SQL function to run the SQL.

3. Setup to Run Capture and Apply

   For the system that you are testing on, see the Replication Guide and Reference manual for instructions on configuring your system to run the Capture and Apply programs. You must bind the Capture and Apply programs on each database where they will be used (a sketch follows this step). Note that you do NOT need to create a password file. The Data Warehouse Center will automatically create a password file for the Replication subscription.
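   A sketch of this bind step, assuming a source database named SRCDB and the Capture and Apply bind list files described in the Replication Guide and Reference (check your \sqllib\bnd directory for the exact file names and the manual for the recommended bind options):

      db2 connect to srcdb
      db2 bind @capture.lst isolation ur blocking all
      db2 bind @applycs.lst isolation cs blocking all grant public
      db2 connect reset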
4. Define a Replication Source in the Control Center

   Use the Control Center to define a Replication Source. The Data Warehouse Center supports five types of replication: user copy, point-in-time, base aggregate, change aggregate, and staging tables (CCD tables). The user copy, point-in-time, and condensed staging table types require that the replication source table have a primary key. The other replication types do not. Keep this in mind when choosing an input table to be defined as a Replication Source.

   A Replication Source is actually the definition of the original source table and a created CD (Change Data) table to hold the data changes before they are moved to the target table. When you define a Replication Source in the Control Center, a record is written out to ASN.IBMSNAP_REGISTER to define the source and its CD table. The CD table is created at the same time, but initially it contains no data. When you define a Replication Source, you can choose to include only the after-image columns, or both the before-image and after-image columns. These choices are made via check boxes in the Control Center Replication Source interface. Your selection of before-image and after-image columns is then translated into columns created in the new CD table. In the CD table, after-image columns have the same names as the original source table columns. The before-image columns have an 'X' as the first character in the column name.

5. Import the Replication Source into the Data Warehouse Center

   Once you have created the Replication Source in the Control Center, you can import it into the Data Warehouse Center. When importing the source, be sure to select the check box that says "Tables that can be replicated". This tells the Data Warehouse Center to look at the records in the ASN.IBMSNAP_REGISTER table to see what tables have been defined as Replication Sources.

6. Define a Replication Step in the Data Warehouse Center

   On the process modeler, select one of the five Replication types: base aggregate, change aggregate, point-in-time, staging table, or user copy. If you want to define a base aggregate or change aggregate replication type, see the section below about how to set up a Base Aggregate or Change Aggregate replication in the Data Warehouse Center. Select an appropriate Replication Source for the Replication type. As mentioned above, the user copy, point-in-time, and condensed staging table replication types require that the input source have a primary key. Connect the Replication Source to the Replication Step. Open the properties on the Replication Step. Go to the Parameters tab. Select the desired columns. Select the check box to have a target table created. Select a Warehouse target. Go to the Processing Options and fill in the parameters. Press OK.

7. Start the Capture Program

   In a DOS window, enter:

      ASNCCP source-database COLD PRUNE

   The COLD parameter indicates a COLD start and will delete any existing data in the CD tables. The PRUNE parameter tells the Capture program to maintain the IBMSNAP_PRUNCNTL table. Leave the Capture program running. When it comes time to quit, you can stop it with a Ctrl-Break in its DOS window. Be aware that you need to start the Capture program before you start the Apply program.

8. Replication Step Promote-To-Test

   Back in the Data Warehouse Center, for the defined Replication Step, promote the step to Test mode. This causes the Replication Subscription information to be written out to the Replication Control tables.
   You will see records added to IBMSNAP_SUBS_SET, IBMSNAP_SUBS_MEMBR, IBMSNAP_SUBS_COLS, and IBMSNAP_SUBS_EVENT to support the subscription. The target table will also be created in the target database. If the replication type is user copy, point-in-time, or condensed staging table, a primary key is required on the target table. Go to the Control Center to create the primary key. Note that some replication target tables also require unique indexes on various columns. Code exists in the Data Warehouse Center to create these unique indexes when the table is created, so you do NOT have to create them yourself. Note, though, that if you define a primary key in the Control Center and a unique index already exists for that column, you will get a WARNING message when you create the primary key. Ignore this warning message.

9. Replication Step Promote-To-Production

   No replication subscription changes are made during Promote-to-Production. This is strictly a Data Warehouse Center operation, like any other step.

10. Run a Replication Step

    After a Replication Step has been promoted to Test mode, it can be run. Do an initial run before making any changes to the source table. Go to the Work-in-Progress (WIP) section and select the Replication Step. Run it. When the step is run, the event record in the IBMSNAP_SUBS_EVENT table is updated and the subscription record in IBMSNAP_SUBS_SET is posted to be active. The subscription should run immediately. When the subscription runs, the Apply program is called by the Agent to process the active subscriptions. If you update the original source table after that point, the changed data will be moved into the CD table. If you run the replication step following that, so that the Apply program runs again, the changed data will be moved from the CD table to the target table.

11. Replication Step Demote-To-Test

    No replication subscription changes are made during Demote-to-Test. This is strictly a Data Warehouse Center operation, like any other step.

12. Replication Step Demote-to-Development

    When you demote a Replication Step to development, the subscription information is removed from the Replication Control tables. No records will remain in the Replication Control tables for that particular subscription after the Demote-to-Development finishes. The target table will also be dropped at this point. The CD table remains in place, since it belongs to the definition of the Replication Source.

13. How to set up a Base Aggregate or Change Aggregate Replication in the Data Warehouse Center

    o Input table. Choose an input table that can be used with a GROUP BY statement. For our example, we will use an input table that has these columns: SALES, REGION, DISTRICT.

    o Replication step. Choose Base or Change Aggregate. Open the Step properties.

      + When the Apply program runs, it needs to execute a SELECT statement that looks like: SELECT SUM(SALES), REGION, DISTRICT ... GROUP BY REGION, DISTRICT. Therefore, in the output columns selected, you will need to choose REGION, DISTRICT, and one calculated column of SUM(SALES). Use the Add Calculated Column button. For our example, enter SUM(SALES) in the Expression field. Save it.

      + Where clause. There is a Replication requirement that when you set up a Replication step that only requires a GROUP BY clause, you must also provide a dummy WHERE clause, such as 1=1. Do NOT include the word "WHERE" in the WHERE clause. Therefore, in the Data Warehouse Center GUI for Base Aggregate, there is only a WHERE clause entry field.
11. Replication Step Demote-To-Test
No replication subscription changes are made during Demote-to-Test. This is strictly a Data Warehouse Center operation, like any other step.
12. Replication Step Demote-to-Development
When you demote a Replication Step to development, the subscription information is removed from the Replication Control tables. No records remain in the Replication Control tables for that particular subscription after the Demote-to-Development finishes. The target table is also dropped at this point. The CD table remains in place, since it belongs to the definition of the Replication Source.
13. How to set up a Base Aggregate or Change Aggregate Replication in the Data Warehouse Center
o Input table. Choose an input table that can be used with a GROUP BY statement. For our example we will use an input table that has these columns: SALES, REGION, DISTRICT.
o Replication step. Choose Base or Change Aggregate. Open the step properties.
+ When the Apply program runs, it needs to execute a SELECT statement that looks like: SELECT SUM(SALES), REGION, DISTRICT GROUP BY REGION, DISTRICT. Therefore, in the output columns selected, you will need to choose REGION, DISTRICT, and one calculated column of SUM(SALES). Use the Add Calculated Column button. For our example, enter SUM(SALES) in the Expression field. Save it.
+ Where clause. Replication requires that when you set up a Replication step that only needs a GROUP BY clause, you must also provide a dummy WHERE clause, such as 1=1. Do NOT include the word "WHERE" in the WHERE clause. For Base Aggregate, the Data Warehouse Center GUI therefore has only a WHERE clause entry field. In this field, for our example, enter:
1=1 GROUP BY REGION, DISTRICT
For the Change Aggregate, there is both a WHERE clause and a GROUP BY entry field. In the WHERE clause field enter:
1=1
and in the GROUP BY field enter:
GROUP BY REGION, DISTRICT
+ Set up the rest of the step properties as you would for any other type of replication. Press OK to save the step and create the target table object.
o Open the target table object. You now need to rename the output column for the calculated column expression to a valid column name, and you need to specify a valid data type for the column. Save the target table object.
o Run Promote-to-Test on the Replication step. The target table will be created. It does NOT need a primary key.
o Run the step like any other Replication step.
------------------------------------------------------------------------
21.4 Troubleshooting tips
* To turn on tracing for the Apply program, set the Agent Trace value to 4 in the Warehouse Properties panel. The Agent turns on full tracing for Apply when Agent Trace = 4.
* If you don't see any data in the CD table, then most likely either the Capture program has not been started or you have not updated the original source table to create some changed data.
* The mail server field of the Notification page of the Schedule notebook is missing from the online help.
* The mail server needs to support ESMTP for Data Warehouse Center notification to work.
* In the Open the Work in Progress window help, click Warehouse --> Work in Progress rather than Warehouse Center --> Work in Progress.
------------------------------------------------------------------------
21.5 Correction to RUNSTATS and REORGANIZE TABLE Online Help
The online help for these utilities states that the table that you want to run statistics on, or that is to be reorganized, must be linked as both the source and the target. However, because the step writes to the source, you only need to link from the source to the step.
------------------------------------------------------------------------
21.6 Notification Page (Warehouse Properties Notebook and Schedule Notebook)
On the Notification page of the Warehouse Properties notebook, the statement: The Sender entry field is initialized with the string . should be changed to: The Sender entry field is initialized with the string . On the Notification page of the Schedule notebook, the sender is initialized to what is set in the Warehouse Properties notebook. If nothing is set there, it is initialized to the e-mail address of the current logon user. If there is no e-mail address associated with the logon user, the sender is set to the logon user ID.
------------------------------------------------------------------------
21.7 Agent Module Field in the Agent Sites Notebook
The Agent Module field in the Agent Sites notebook provides the name of the program that is run when the warehouse agent daemon spawns the warehouse agent. Do not change the name in this field unless IBM directs you to do so.
------------------------------------------------------------------------
21.8 Accessing DB2 Version 5 data with the DB2 Version 7.1 warehouse agent
DB2 Version 7.1 warehouse agents, as configured by the DB2 Version 7.1 install process, support access to DB2 Version 6 and DB2 Version 7.1 data. If you need to access DB2 Version 5 data, you must take one of the following two approaches:
* Migrate DB2 Version 5 servers to DB2 Version 6 or DB2 Version 7.1.
* Modify the agent configuration, on the appropriate operating system, to allow access to DB2 Version 5 data.
DB2 Version 7.1 warehouse agents do not support access to data from DB2 Version 2 or any other previous versions.
21.8.1 Migrating DB2 Version 5 servers
For information about migrating DB2 Version 5 servers, see DB2 Universal Database Quick Beginnings for your operating system.
21.8.2 Changing the agent configuration
The following information describes how to change the agent configuration on each operating system. When you migrate the DB2 servers to DB2 Version 6 or later, remove the changes that you made to the configuration.
21.8.2.1 UNIX warehouse agents
To set up a UNIX warehouse agent to access data from DB2 Version 5 with either CLI or ODBC access:
1. Install the DB2 Version 6 run-time client. You can obtain the run-time client by selecting the client download from the following URL: http://www.ibm.com/software/data/db2/udb/support
2. Update the warehouse agent configuration file so that the DB2INSTANCE environment variable points to a DB2 Version 6 instance.
3. Catalog all databases in this DB2 Version 6 instance that the warehouse agent is to access.
4. Stop the agent daemon process by issuing the kill command with the agent daemon process ID. The agent daemon will then restart automatically. You need root authority to kill the process.
21.8.2.2 Microsoft Windows NT, Windows 2000, and OS/2 warehouse agents
To set up a Microsoft Windows NT, Windows 2000, or OS/2 warehouse agent to access data from DB2 Version 5:
1. Install DB2 Connect Enterprise Edition Version 6 on a workstation other than the one where the DB2 Version 7.1 warehouse agent is installed. DB2 Connect Enterprise Edition is included as part of DB2 Universal Database Enterprise Edition and DB2 Universal Database Enterprise - Extended Edition. If Version 6 of either of these DB2 products is installed, you do not need to install DB2 Connect separately. Restriction: You cannot install multiple versions of DB2 on the same Windows NT or OS/2 workstation. You can install DB2 Connect on another Windows NT workstation or on an OS/2 or UNIX workstation.
2. Configure the warehouse agent and DB2 Connect Version 6 for access to the DB2 Version 5 data. For more information, see the DB2 Connect User's Guide. The following steps are an overview of the steps that are required:
a. On the DB2 Version 5 system, use the DB2 Command Line Processor to catalog the Version 5 database that the warehouse agent is to access.
b. On the DB2 Connect system, use the DB2 Command Line Processor to catalog:
+ The TCP/IP node for the DB2 Version 5 system
+ The database for the DB2 Version 5 system
+ The DCS entry for the DB2 Version 5 system
c. On the warehouse agent workstation, use the DB2 Command Line Processor to catalog:
+ The TCP/IP node for the DB2 Connect system
+ The database for the DB2 Connect system
For information about cataloging databases, see the DB2 Universal Database Installation and Configuration Supplement.
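As a sketch of steps 2b and 2c, the catalog commands might look like the following; every node name, database name, host name, and port number here is a hypothetical placeholder. On the DB2 Connect system:
db2 catalog tcpip node v5node remote v5host server 50000
db2 catalog database v5db at node v5node
db2 catalog dcs database v5db as v5db
On the warehouse agent workstation:
db2 catalog tcpip node connode remote conhost server 50000
db2 catalog database v5db at node connode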
3. At the warehouse agent workstation, bind the DB2 CLI package to each database that is to be accessed through DB2 Connect. The following DB2 commands give an example of binding to v5database, a hypothetical DB2 Version 5 database. Use the DB2 Command Line Processor to issue the following commands. db2cli.lst and db2ajgrt are located in the \sqllib\bnd directory.
db2 connect to v5database user userid using password
db2 bind db2ajgrt.bnd
db2 bind @db2cli.lst blocking all grant public
where userid is the user ID for the v5 database and password is the password for the user ID. An error occurs when db2cli.lst is bound to the DB2 Version 5 database. This error occurs because large objects (LOBs) are not supported in this configuration. The error will not affect the warehouse agent's access to the DB2 Version 5 database. FixPak 14 for DB2 Universal Database Version 5, available in June 2000, is required for accessing DB2 Version 5 data through DB2 Connect. Refer to APAR number JR14507 in that FixPak.
------------------------------------------------------------------------
21.9 Accessing warehouse control databases
In a typical installation of DB2 Version 7.1 on Windows NT, a DB2 Version 7.1 warehouse control database is created along with the warehouse server. If you have a Visual Warehouse warehouse control database, you must upgrade the DB2 server containing the warehouse control database to DB2 Version 7.1 before the metadata in the warehouse control database can be migrated for use by the DB2 Version 7.1 Data Warehouse Center. You must migrate any warehouse control databases that you want to continue to use to Version 7.1. The metadata in your active warehouse control database is migrated to Version 7.1 during the DB2 Version 7.1 install process. To migrate the metadata in any additional warehouse control databases, use the Warehouse Control Database Migration utility, which you start on Windows NT by selecting Start --> Programs --> IBM DB2 --> Warehouse Control Database Management. For information about migrating your warehouse control databases, see DB2 Universal Database for Windows Quick Beginnings.
------------------------------------------------------------------------
21.10 Accessing sources and targets
The following tables list the version and release levels of the sources and targets that the Data Warehouse Center supports.
Table 8. Version and release levels of supported IBM warehouse sources
Source                                                    Version/Release
IMS                                                       5.1
DB2 Universal Database for Windows NT                     5.2 - 7.1
DB2 Universal Database Enterprise-Extended Edition        5.2 - 7.1
DB2 Universal Database for OS/2                           5.2 - 7.1
DB2 Universal Database for AS/400                         3.7 - 4.5
DB2 Universal Database for AIX                            5.2 - 7.1
DB2 Universal Database for Solaris Operating Environment  5.2 - 7.1
DB2 Universal Database for OS/390                         4.1 - 5.1.6
DB2 DataJoiner                                            2.1.2
DB2 for VM                                                5.3.4 or later
DB2 for VSE                                               7.1

Source                 Windows NT       AIX
Informix               7.2.2 - 8.2.1    7.2.4 - 9.2.0
Oracle                 7.3.2 - 8.1.5    8.1.5
Microsoft SQL Server   7.0
Microsoft Excel        97
Microsoft Access       97
Sybase                 11.5             11.9.2
Table 9. Version and release levels of supported IBM warehouse targets
Target                                                    Version/Release
DB2 Universal Database for Windows NT                     6 - 7
DB2 Universal Database Enterprise-Extended Edition        6 - 7
DB2 Universal Database for OS/2                           6 - 7
DB2 Universal Database for AS/400                         3.1 - 4.5
DB2 Universal Database for AIX                            6 - 7
DB2 Universal Database for Solaris Operating Environment  6 - 7
DB2 Universal Database for OS/390                         4.1 - 6
DB2 DataJoiner                                            2.1.2
DB2 DataJoiner/Oracle                                     8
DB2 for VM                                                3.4 - 5.3.4
DB2 for VSE                                               3.2, 7.1
CA/400                                                    3.1.2
------------------------------------------------------------------------
21.11 Accessing DB2 Version 5 information catalogs with the DB2 Version 7.1 Information Catalog Manager
The DB2 Version 7.1 Information Catalog Manager subcomponents, as configured by the DB2 Version 7.1 install process, support access to information catalogs stored in DB2 Version 6 and DB2 Version 7.1 databases. You can modify the configuration of the subcomponents to access information catalogs that are stored in DB2 Version 5 databases. The DB2 Version 7.1 Information Catalog Manager subcomponents do not support access to data from DB2 Version 2 or any other previous versions. To set up the Information Catalog Administrator, the Information Catalog User, and the Information Catalog Initialization Utility to access information catalogs that are stored in DB2 Version 5 databases:
1. Install DB2 Connect Enterprise Edition Version 6 on a workstation other than the one where the DB2 Version 7.1 Information Catalog Manager is installed. DB2 Connect Enterprise Edition is included as part of DB2 Universal Database Enterprise Edition and DB2 Universal Database Enterprise - Extended Edition. If Version 6 of either of these DB2 products is installed, you do not need to install DB2 Connect separately. Restriction: You cannot install multiple versions of DB2 on the same Windows NT or OS/2 workstation. You can install DB2 Connect on another Windows NT workstation or on an OS/2 or UNIX workstation.
2. Configure the Information Catalog Manager and DB2 Connect Version 6 for access to the DB2 Version 5 data. For more information, see the DB2 Connect User's Guide. The following steps are an overview of the steps that are required:
a. On the DB2 Version 5 system, use the DB2 Command Line Processor to catalog the Version 5 database that the Information Catalog Manager is to access.
b. On the DB2 Connect system, use the DB2 Command Line Processor to catalog:
+ The TCP/IP node for the DB2 Version 5 system
+ The database for the DB2 Version 5 system
+ The DCS entry for the DB2 Version 5 system
c. On the workstation with the Information Catalog Manager, use the DB2 Command Line Processor to catalog:
+ The TCP/IP node for the DB2 Connect system
+ The database for the DB2 Connect system
For information about cataloging databases, see the DB2 Universal Database Installation and Configuration Supplement.
3. At the workstation with the Information Catalog Manager, bind the DB2 CLI package to each database that is to be accessed through DB2 Connect. The following DB2 commands give an example of binding to v5database, a hypothetical DB2 Version 5 database. Use the DB2 Command Line Processor to issue the following commands. db2cli.lst and db2ajgrt are located in the \sqllib\bnd directory.
db2 connect to v5database user userid using password
db2 bind db2ajgrt.bnd
db2 bind @db2cli.lst blocking all grant public
where userid is the user ID for v5database and password is the password for the user ID.
An error occurs when db2cli.lst is bound to the DB2 Version 5 database. This error occurs because large objects (LOBs) are not supported in this configuration. The error will not affect the Information Catalog Manager's access to the DB2 Version 5 database. FixPak 14 for DB2 Universal Database Version 5, available in June 2000, is required for accessing DB2 Version 5 data through DB2 Connect. Refer to APAR number JR14507 in that FixPak.
------------------------------------------------------------------------
21.12 Additions to supported non-IBM database sources
The following list contains additions to the supported non-IBM database sources, with the database client requirements for each database and operating system:
* Informix on AIX: Informix-Connect and ESQL/C version 9.1.4 or later
* Informix on the Solaris Operating Environment: Informix-Connect and ESQL/C version 9.1.3 or later
* Informix on Windows NT: Informix-Connect for Windows Platforms 2.x or Informix-Client Software Developer's Kit for Windows Platforms 2.x
* Oracle 7 on AIX: Oracle7 SQL*Net and the Oracle7 SQL*Net shared library (built by the genclntsh script)
* Oracle 7 on the Solaris Operating Environment: Oracle7 SQL*Net and the Oracle7 SQL*Net shared library (built by the genclntsh script)
* Oracle 7 on Windows NT: The appropriate DLLs for the current version of SQL*Net, plus OCIW32.DLL. For example, SQL*Net 2.3 requires ORA73.DLL, CORE35.DLL, NLSRTL32.DLL, CORE350.DLL, and OCIW32.DLL.
* Oracle 8 on AIX: Oracle8 Net8 and the Oracle8 SQL*Net shared library (built by the genclntsh8 script)
* Oracle 8 on the Solaris Operating Environment: Oracle8 Net8 and the Oracle8 SQL*Net shared library (built by the genclntsh8 script)
* Oracle 8 on Windows NT: To access remote Oracle8 database servers at a level of version 8.0.3 or later, install Oracle Net8 Client version 7.3.4.x, 8.0.4, or later. On Intel systems, install the appropriate DLLs for the Oracle Net8 Client (such as Ora804.DLL, PLS804.DLL, and OCI.DLL) on your path.
* Sybase on AIX: In a non-DCE environment (ibsyb15 ODBC driver), the libct library; in a DCE environment (ibsyb1115 ODBC driver), the Sybase 11.1 client library libct_r
* Sybase on the Solaris Operating Environment: In a non-DCE environment (ibsyb15 ODBC driver), the libct library; in a DCE environment (ibsyb1115 ODBC driver), the Sybase 11.1 client library libct_r
* Sybase on Windows NT: Sybase Open Client-Library 10.0.4 or later and the appropriate Sybase Net-Library
------------------------------------------------------------------------
21.13 Importing and Exporting Metadata Using the Common Warehouse Metadata Interchange (CWMI)
21.13.1 Introduction
In addition to the existing support for tag language files, the Data Warehouse Center can now import and export metadata to and from XML files that conform to the Common Warehouse Metamodel (CWM) standard. Importing and exporting these CWM-compliant XML files is referred to as the Common Warehouse Metadata Interchange (CWMI). You can import and export metadata for the following Data Warehouse Center objects:
* Warehouse sources
* Warehouse targets
* Subject areas, including processes, sources, targets, and steps
* User-defined programs
The CWMI import and export utility does not currently support certain kinds of metadata, including: schedules, warehouse schemas, shortcut steps, cascade relationships, users, and groups. The Data Warehouse Center creates a log file that contains the results of the import and export processes.
Typically, the log file is created in the x:\program files\sqllib\logging directory (where x: is the drive where you installed DB2), or in the directory that you specified in the VWS_LOGGING environment variable. The log file is plain text; you can view it with any text editor.
21.13.2 Importing Metadata
You can import metadata either from within the Data Warehouse Center or from the command line. New objects that are created through the import process are assigned to the default Data Warehouse Center security group. For more information, see "Updating security after importing" in these Release Notes. If you are importing metadata about a step, multiple files can be associated with the step. Metadata about the step is stored in an XML file, but sometimes a step has associated data stored as BLOBs. The BLOB metadata has the same file name as the XML file, but it is in separate files that have numbered extensions. All of the related step files must be in the same directory when you import.
Updating steps when they are in test or production mode
A step must be in development mode before the Data Warehouse Center can update the step's metadata. If the step is in test or production mode, demote the step to development mode before importing the metadata:
1. Log on to the Data Warehouse Center.
2. Right-click the step that you want to demote, and click Mode.
3. Click Development. The step is now in development mode.
Change the step back to either test or production mode after you import the metadata.
Importing data from the Data Warehouse Center
You can import metadata from within the Data Warehouse Center:
1. Log on to the Data Warehouse Center.
2. In the left pane, click Warehouse.
3. Click Selected --> Import Metadata.
4. In the Import Metadata window, specify the name of the file that contains the metadata that you want to import. You can either type the file name or browse for the file.
o If you know the location, type the fully qualified path and file name that you want to import. Be sure to include the .xml file extension to specify that you want to import metadata in the XML format.
o To browse for your files:
a. Click the ellipsis (...) push button.
b. In the File window, change Files of type to XML.
c. Go to the correct directory and select the file that you want to import. Note: The file must have an .xml extension.
d. Click OK.
5. In the Import Metadata window, click OK to finish. The Progress window is displayed while the Data Warehouse Center imports the file.
Using the command line to import metadata
You can also use the command line to import metadata. Here is the import command syntax:
CWMImport XML_file dwcControlDB dwcUserId dwcPW [PREFIX=DWCtbschema]
XML_file - The fully qualified path and file name (including the drive and directory) of the XML file that you want to import. This parameter is required.
dwcControlDB - The name of the warehouse control database into which you want to import your metadata. This parameter is required.
dwcUserId - The user ID that you use to log on to the warehouse control database. This parameter is required.
dwcPW - The password that you use to log on to the warehouse control database. This parameter is required.
[PREFIX=DWCtbschema] - The database schema name for the Data Warehouse Center system tables, sometimes referred to as the table prefix. If no value for PREFIX= is specified, the default schema name is IWH. This parameter is optional.
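For example, the following hypothetical invocation imports the metadata in c:\temp\tutorial.xml into a warehouse control database named DWCTRLDB, using the default IWH schema (the file name, database name, user ID, and password are all placeholders):
CWMImport c:\temp\tutorial.xml DWCTRLDB db2admin db2pass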
21.13.3 Updating Your Metadata After Running the Import Utility
Updating security after importing
As a security measure, the Data Warehouse Center does not import or export passwords. You need to update the passwords on new objects as needed. For more details on import considerations, see the Data Warehouse Center Administration Guide, Chapter 12, "Exporting and importing Data Warehouse Center metadata." When you import metadata, all of the objects are assigned to the default security group. You can change the groups that have access to an object:
1. Log on to the Data Warehouse Center.
2. Right-click the folder that contains the object that you want to change.
3. Click Properties, and then click the Security tab.
4. Remove groups from the Selected warehouse groups list or add groups from the Available warehouse groups list.
5. Click OK.
21.13.4 Exporting Metadata
You can export metadata either from within the Data Warehouse Center or from the command line. Some steps have metadata that is stored as a BLOB. The BLOB metadata is exported to a separate file that has the same file name as the step's XML file, but with a numbered extension (.1, .2, and so on).
Exporting data from the Data Warehouse Center
You can export metadata from within the Data Warehouse Center:
1. Log on to the Data Warehouse Center.
2. In the left pane, click Warehouse.
3. Click Selected --> Export Metadata --> Interchange file.
4. In the Export Metadata window, specify the name of the file that will contain the exported metadata. You can either enter the file name or browse for the file:
o If you know the fully qualified path and file name that you want to use, type it in the File name entry field. Be sure to include the .xml file extension to specify that you want to export metadata in the XML format.
o To browse for your files:
a. Click the ellipsis (...) push button.
b. In the File window, change Files of type to XML.
c. Go to the correct directory and select the file that you want to contain the exported metadata. Note: Any existing file that you select is overwritten with the exported metadata.
d. Click OK.
5. When the Export Metadata window displays the correct file name, click the object in the Available objects list whose metadata you want to export.
6. Click the > sign to move the selected object from the Available objects list to the Selected objects list. Repeat until all of the objects that you want to export are listed in the Selected objects list.
7. Click OK.
The Data Warehouse Center creates an input file, which contains information about the Data Warehouse Center objects that you selected to export, and then exports the metadata about those objects. The progress window is displayed while the Data Warehouse Center is exporting the metadata. When the export process is complete, you will receive an informational message about the export process. A return code of 0 indicates that the export was successful. You can also view the log file for more detailed information.
Using the command line to export metadata
Before you can export metadata from the command line, you must first create an input file. The input file is a text file with an .INP extension, and it lists, by object type, all of the objects that you want to export. When you export from within the Data Warehouse Center, the input file is created automatically, but to export from the command line you must first create the input file. You can create the input file with any text editor. Type all of the object names as they appear in the Data Warehouse Center.
Make sure you create the file in a read/write directory. When you run the export utility, the Data Warehouse Center writes the XML files to the same directory as the input file. Here's a sample input file:
(processes)
Tutorial Fact Table Process
(information resources)
Tutorial file source
Tutorial target
(user defined programs)
New Program group
In the (processes) section, list all of the processes that you want to export. In the (information resources) section, list all the warehouse sources and targets that you want to export. The Data Warehouse Center automatically includes the tables and columns that are associated with these sources and targets. In the (user defined programs) section, list all the program groups that you want to export. To export metadata, enter the following command at a DOS command prompt:
CWMExport INPcontrol_file dwcControlDB dwcUserID dwcPW [PREFIX=DWCtbschema]
INPcontrol_file - The fully qualified path and file name (including the drive and directory) of the .INP file that contains the objects that you want to export. This parameter is required.
dwcControlDB - The name of the warehouse control database that you want to export from. This parameter is required.
dwcUserID - The user ID that you use to log on to the warehouse control database. This parameter is required.
dwcPW - The password that you use to log on to the warehouse control database. This parameter is required.
[PREFIX=DWCtbschema] - The database schema name for the Data Warehouse Center system tables, sometimes referred to as the table prefix. If no value for PREFIX= is specified, the default value is IWH. This parameter is optional.
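Continuing the sample input file above, an export invocation might look like the following (the file, database, user ID, and password names are hypothetical); the XML output is written to the directory containing the .INP file:
CWMExport c:\temp\tutorial.inp DWCTRLDB db2admin db2pass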
------------------------------------------------------------------------
21.14 Creating a Data Source Manually in the Data Warehouse Center
When a data source is created using Relational Connect and the CREATE NICKNAME statement, the data source will not be available in the functions related to importing tables in the Data Warehouse Center. To use the data source as a source or target table, perform the following steps:
1. Define the source/target without importing any tables.
2. Expand the Warehouse Sources/Targets tree from the main window of the Data Warehouse Center, and right-click "Tables" for the desired source/target.
3. Click Define.
4. Define the data source using the notebook that opens, and ensure that the columns are defined for each data source.
For more information, see "Defining a Warehouse Source Table" or "Defining a Warehouse Target Table" in the Information Center.
------------------------------------------------------------------------
DB2 Stored Procedure Builder
------------------------------------------------------------------------
22.1 Java 1.2 Support for the DB2 Stored Procedure Builder
The DB2 Stored Procedure Builder supports building Java stored procedures using Java 1.2 functionality. In addition, the Stored Procedure Builder supports bi-directional languages, such as Arabic and Hebrew, using the bi-di support in Java 1.2. This support is provided for Windows NT platforms only. In order for the Stored Procedure Builder to recognize and use Java 1.2 functionality, Java 1.2 must be installed. To install Java 1.2:
1. JDK 1.2.2 is available on the DB2 UDB CD under the DB2\bidi\NT directory. ibm-inst-n122p-win32-x86.exe is the installer program, and ibm-jdk-n122p-win32-x86.exe is the JDK distribution. Copy both files to a temporary directory on your hard drive, then run the installer program from there.
2. Install it under <DB2 path>\java\Java12, where <DB2 path> is the installation path of DB2.
3. Do not select JDK/JRE as the System VM when prompted by the JDK/JRE installation.
After Java 1.2 is installed successfully, start the Stored Procedure Builder in the normal manner. To execute Java stored procedures using JDK 1.2 support, set the database server environment variable DB2_USE_JDK12 to TRUE using the following command:
DB2SET DB2_USE_JDK12=TRUE
Also, set your JDK11_PATH to point to the directory where your Java 1.2 support is installed. You set this path by using the following command:
DB2 UPDATE DBM CFG USING JDK11_PATH <DB2 path>\java\Java12
To stop the use of Java 1.2, you can either uninstall the JDK/JRE from <DB2 path>\java\Java12, or simply rename the <DB2 path>\java\Java12 subdirectory. Important: Do not confuse <DB2 path>\java\Java12 with <DB2 path>\Java12. <DB2 path>\Java12 is part of the DB2 installation and includes JDBC support for Java 1.2.
------------------------------------------------------------------------
22.2 Remote Debugging of DB2 Stored Procedures
To use the remote debugging capability for stored procedures on the Intel and UNIX platforms, you need to install the IBM Distributed Debugger. The IBM Distributed Debugger is included on the VisualAge for Java Professional Edition CD. The debugger client runs only on the Windows platform. Supported server platforms include Windows, AIX, and Solaris. At this time, only Java and C stored procedures can be debugged remotely. Support for SQL procedures will be available at a later date. To debug SQL procedures on the OS/390 platform, you must also have the IBM C/C++ Productivity Tools for OS/390 R1 product. For more information on the IBM C/C++ Productivity Tools for OS/390 R1, go to the following Web site: http://www.ibm.com/software/ad/c390/pt/
------------------------------------------------------------------------
22.3 Building SQL Procedures on Windows, OS/2 or UNIX Platforms
Before you can use the Stored Procedure Builder to successfully build SQL procedures on your Windows, OS/2, or UNIX database, you must configure your server for SQL procedures. For information on how to configure your server for SQL procedures, see the IBM DB2 Universal Database Application Building Guide. The database manager configuration parameter KEEPDARI must be set to NO. This can be done using the command db2 update dbm cfg using KEEPDARI NO, or using the Control Center. If KEEPDARI is set to YES, you may get message SQL0454N when attempting to build an SQL stored procedure that was previously built and run.
------------------------------------------------------------------------
22.4 Using the DB2 Stored Procedure Builder on the Solaris Platform
To use the Stored Procedure Builder on the Solaris platform:
1. Download and install JDK 1.1.8. You can download JDK 1.1.8 from the JavaSoft web site.
2. Set the environment variable JAVA_HOME to the location where you installed the JDK.
3. Set your DB2 JDK11_PATH to the directory where you installed the JDK. To set the DB2 JDK11_PATH, use the command: DB2 UPDATE DBM CFG USING JDK11_PATH <JDK installation directory>.
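As a concrete sketch of steps 2 and 3 on Solaris, assuming the JDK were installed in the hypothetical directory /opt/jdk1.1.8 and a Korn shell:
export JAVA_HOME=/opt/jdk1.1.8
db2 update dbm cfg using JDK11_PATH /opt/jdk1.1.8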
------------------------------------------------------------------------
22.5 Known Problems and Limitations
* SQL procedures are not currently supported on Windows 98.
* For Java stored procedures, the JAR ID, class names, and method names cannot contain non-ASCII characters.
* On AS/400, the following PTFs must be applied to OS/400 V4R4:
- SF59674
- SF59878
* Stored procedure parameters with a character subtype of FOR MIXED DATA or FOR SBCS DATA are not shown in the source code in the editor pane when the stored procedure is restored from the database.
* Currently, there is a problem when Java source code is retrieved from a database: at retrieval time, the comments in the code come out collapsed. This affects users of the DB2 Stored Procedure Builder who are working in non-ASCII code pages and whose clients and servers are on different code pages.
------------------------------------------------------------------------
22.6 Using DB2 Stored Procedure Builder with the Traditional Chinese Locale
There is a problem when using the Java Development Kit or Java Runtime 1.1.8 with the Traditional Chinese locale. Graphical aspects of the Stored Procedure Builder program (including menus, editor text, messages, and so on) will not display properly. The solution is to make a change to the file font.properties.zh_TW, which appears in one or both of the following directories:
sqllib/java/jdk/lib
sqllib/java/jre/lib
Change:
monospaced.0=\u7d30\u660e\u9ad4,CHINESEBIG5_CHARSET,NEED_CONVERTED
to:
monospaced.0=Courier New,ANSI_CHARSET
------------------------------------------------------------------------
22.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored Procedure Builder
On Sun Solaris installations, and on AIX if you are using a Java Development Kit or Runtime other than the one installed with UDB, you must set the environment variable JAVA_HOME to the path where Java is installed (that is, to the directory containing the /bin and /lib directories). The Stored Procedure Builder is not supported on Linux, but it can be used on supported platforms to build and run stored procedures on DB2 UDB for Linux systems.
------------------------------------------------------------------------
DB2 Warehouse Manager
------------------------------------------------------------------------
23.1 "Warehouse Manager" Should Be "DB2 Warehouse Manager"
All occurrences of the phrase "Warehouse Manager" in product screens and in product documentation should read "DB2 Warehouse Manager".
------------------------------------------------------------------------
23.2 Information Catalog Manager Initialization Utility
If you get the following message:
FLG0083E: You do not have a valid license for the IBM Information Catalog Manager Initialization utility. Please contact your local software reseller or IBM marketing representative.
you must purchase the DB2 Warehouse Manager or the IBM DB2 OLAP Server and install the Information Catalog Manager component, which includes the Information Catalog Initialization utility. If you installed the DB2 Warehouse Manager or IBM DB2 OLAP Server and then installed another Information Catalog Manager Administrator component (using the DB2 Universal Database CD-ROM) on the same workstation, you might have overwritten the Information Catalog Initialization utility. In that case, in the \sqllib\bin directory, find the files createic.bak and flgnmwcr.bak and rename them to createic.exe and flgnmwcr.exe, respectively. If you install additional Information Catalog Manager components from DB2 Universal Database, the components must be on a separate workstation from the one where you installed the Data Warehouse Manager. For more information, see Chapter 3, "Installing Information Catalog Manager components", in the DB2 Warehouse Manager Installation Guide.
------------------------------------------------------------------------
23.3 Information Catalog Manager for the Web
When using an information catalog that is located on a DB2 UDB for OS/390 system, case-insensitive search is not available.
This is true for both a simple search and an advanced search. The online help does not explain that all searches on a DB2 UDB for OS/390 information catalog are case sensitive. Moreover, all grouping category objects are expandable, even when there are no underlying objects.
------------------------------------------------------------------------
23.4 DB2 Warehouse Manager Publications
23.4.1 Information Catalog Manager Administration Guide
* Step 2 in the first section of Chapter 1, "Setting up an information catalog", says: When you install either the DB2 Warehouse Manager or the DB2 OLAP Server, a default information catalog is created on DB2 Universal Database for Windows NT. The statement is incorrect. You must define a new information catalog. See the "Creating the Information Catalog" section for more information.
* In Chapter 6, "Exchanging metadata with other products", in the section "Identifying OLAP objects to publish", there is a statement in the second paragraph that says: When you publish DB2 OLAP Integration Server metadata, a linked relationship is created between an information catalog "dimensions within a multi-dimensional database" object type and a table object in the OLAP Integration Server. The statement should say: When you publish DB2 OLAP Integration Server metadata, a linked relationship is created between an information catalog "dimensions within a multi-dimensional database" object and a table object. This statement also appears in Appendix C, "Metadata mappings", in the section "Metadata mappings between the Information Catalog Manager and OLAP Server".
* In Chapter 6, "Exchanging Metadata", there is a section entitled "Identifying OLAP objects to publish". At the end of this section there is an example of using the flgnxoln command to publish OLAP server metadata to an information catalog. The example incorrectly shows the directory for the db2olap.ctl and db2olap.ff files as x:\Program Files\sqllib\logging. The directory name should be x:\Program Files\sqllib\exchange, as described on page 87.
* In Chapter 6, "Exchanging metadata with other products", in the section "Converting MDIS-conforming metadata into a tag language file" (page 97): you cannot issue the MDISDGC command from the MS-DOS command prompt. You must issue the MDISDGC command from a DB2 command window. The first sentence of the section "Converting a tag language file into MDIS-conforming metadata" also says you must issue the DGMDISC command from the MS-DOS command prompt. You must issue the DGMDISC command from a DB2 command window.
* Some examples in the Information Catalog Administration Guide show commands that contain the directory name Program Files. When you invoke a program that contains Program Files as part of its path name, you must enclose the program invocation in double quotation marks. For example, Appendix B, "Predefined Information Catalog Manager object types", contains an example in the section called "Initializing your information catalog with the predefined object types". If you use the example in that section as shown, you will receive an error when you run it from the DOS prompt.
The following example is correct:
"X:\Program Files\SQLLIB\SAMPLES\SAMPDATA\DGWDEMO" /T userid password dgname
------------------------------------------------------------------------
23.5 Information Catalog Manager Programming Guide and Reference
23.5.1 Information Catalog Manager Reason Codes
In Appendix D, "Information Catalog Manager reason codes", some text might be truncated at the far right column for the following reason codes: 31014, 32727, 32728, 32729, 32730, 32735, 32736, 32737, 33000, 37507, 37511, and 39206. If the text is truncated, please see the HTML version of the book to view the complete column.
------------------------------------------------------------------------
23.6 Information Catalog Manager User's Guide
In Chapter 2, there is a section called "Registering a server node and remote information catalog." The section lists steps that you can complete from the DB2 Control Center before registering a remote information catalog using the Information Catalog Manager. The last paragraph of the section says that after completing a set of steps from the DB2 Control Center (add a system, add an instance, and add a database), you must shut down the Control Center before opening the Information Catalog Manager. That information is incorrect: it is not necessary to shut down the Control Center before opening the Information Catalog Manager. The same correction also applies to the online help task "Registering a server node and remote information catalog", and to the online help for the Register Server Node and Information Catalog window.
------------------------------------------------------------------------
23.7 Information Catalog Manager: Online Messages
* Message FLG0260E. The second sentence of the message explanation should say: The error caused a rollback of the information catalog, which failed. The information catalog is not in stable condition, but no changes were made.
* Message FLG0051E. The second bullet in the message explanation should say: The information catalog contains too many objects or object types. The administrator response should say: Delete some objects or object types from the current information catalog using the import function.
* Message FLG0003E. The message explanation should say: The information catalog must be registered before you can use it. The information catalog might not have been registered correctly.
* Message FLG0372E. The first sentence of the message explanation should say: The ATTACHMENT-IND value was ignored for an object because that object is an Attachment object.
* Message FLG0615E. The second sentence of the message should say: The Information Catalog Manager has encountered an unexpected database error or cannot find the bind file in the current directory or path.
------------------------------------------------------------------------
23.8 Information Catalog Manager: Online Help
Information Catalog window: The online help for the Selected menu Open item incorrectly says "Opens the selected object". It should say "Opens the Define Search window".
------------------------------------------------------------------------
23.9 Query Patroller Administration Guide
23.9.1 DB2 Query Patroller Client is a Separate Component
The DB2 Query Patroller client is a separate component that is not part of the DB2 Administration Client. This means that it is not installed during the installation of the DB2 Administration Client, as indicated in the Query Patroller Installation Guide.
Instead, the Query Patroller client must be installed separately.
23.9.2 Manual Installation of Query Patroller on AIX and Solaris
To install DB2 Query Patroller using installp or smit, perform the steps listed below. Refer to 23.9.2.2, Manual Installation Commands, for detailed syntax and parameter information.
1. Set up or create a DB2 UDB EEE or EE instance to use with DB2 Query Patroller.
2. Add an entry to the /etc/services file to be used with the DB2 Query Patroller server. For example: dqp1 55000/tcp.
3. Create a user named iwm if one does not already exist.
4. Mount the CD-ROM.
5. Go to the /cdrom/db2 directory.
6. o Agent
a. If you are installing a Query Patroller Agent on AIX, use smit to install the following filesets:
i. db2_07_01.dqp.cln
ii. db2_07_01.mlic
iii. db2_07_01.dqp.agt
Note: These filesets must be installed in the above order. If you are not using smit to install the filesets, ensure that this order is respected. Furthermore, if the filesets db2_07_01.cj and db2_07_01.jdbc were not installed when you set up your DB2 EE or EEE instance, you need to install them prior to starting the installation of the Query Patroller Agent.
b. If you are installing on Solaris, use pkgadd to install the following packages for the DB2 Query Patroller server:
i. db2qpc71
ii. db2mlic71
iii. db2dqpa71
Note: These packages must be installed in the order given. Furthermore, if the packages db2cj71 and db2jdbc71 were not installed when you set up your DB2 UDB EE or EEE instance, you need to install them prior to starting the installation of the Query Patroller Agent.
o Server
a. To install a DB2 Query Patroller Server on AIX, install the fileset db2_07_01.dqp.agt and its prerequisite filesets described above. Then, install the db2_07_01.dqp.srv fileset.
b. To install a DB2 Query Patroller Server on Solaris, install db2dqpa71 and its prerequisite packages described above. Then, install the db2dqps71 package.
If you are performing a migration from Version 6 to Version 7.1, refer to the DB2 Query Patroller Installation Guide. If you have installed the server, set up the license as follows:
1. Add the user iwm to the primary group of the DB2 UDB EE or EEE instance owner. This gives the iwm user SYSADM authority over the instance.
2. Add the following line to the .profile file of the iwm user, where INSTHOME is the home directory of the DB2 Query Patroller server instance:
. INSTHOME/sqllib/db2profile
Note: If a C shell is being used, add source INSTHOME/sqllib/db2cshrc to the .login file instead.
3. Log on as root. The Query Patroller Server must be set up on either the DB2 UDB EE or the DB2 UDB EEE main node where the instance was created. For Query Patroller Server installation:
a. Enter the following command:
dqpcrt -s -p port_name instance_name
The port_name variable is the port name you used in Step 2; instance_name is the name of the DB2 UDB EE or EEE instance. Refer to 23.9.2.2, Manual Installation Commands, for detailed syntax and parameter information. Note: To remove a dqp instance you can run the dqpdrop instance_name command. You can only run this command on the node where the server is set up.
b. Log on as the instance owner and run the following command:
dqpsetup -d database_name -g nodegroup_name -n node_number -t tablespace_name -r result_tablespace_name -l tablespace_path instance_name
Refer to 23.9.2.2, Manual Installation Commands, for detailed syntax and parameter information.
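For example (every name here is a hypothetical placeholder), for an instance db2inst1 with a database TPCD, a nodegroup IWMGRP defined on node 0, and table spaces for the schema and result tables, the command might look like:
dqpsetup -d TPCD -g IWMGRP -n 0 -t IWMTBSP -r IWMRESTBSP -l /home/db2inst1/dqpts db2inst1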
For Query Patroller Agent installation, enter:
dqpcrt -a -p port_name instance_name
The port_name variable is the port name you used in Step 2; instance_name is the name of the DB2 UDB EE or EEE instance. Refer to 23.9.2.2, Manual Installation Commands, for detailed syntax and parameter information.
4. Use the db2licm command to register DB2 Query Patroller. See the DB2 Command Reference for further information.
23.9.2.1 Creating the Query Patroller Schema and Binding the Application Bind Files
To manually create the DB2 Query Patroller schema and bind all the application bind files, perform the following steps:
1. Create the DB2 table space that will be used for the DB2 Query Patroller schema. This table space must be created on one nodegroup.
2. Use the program db2_qp_schema in the DB2 bin directory to create the schema. This program uses the script file iwm_schema.sql as input. db2_qp_schema supports either syntax: db2_qp_schema db2_qp_schema
3. Bind the DB2 Query Patroller server bind files using the bind file list db2qp.lst in the DB2 bnd directory. After connecting to the database, issue the DB2 CLP command:
db2 bind @db2qp.lst blocking all grant public
4. Run the following command:
db2 bind iwmsx001.bnd isolation ur blocking all grant public insert buf datetime iso
5. Bind the DB2 Query Patroller stored procedure bind files using the bind file list db2qp_sp.lst in the DB2 bnd directory. After connecting to the database, issue the DB2 CLP command:
db2 bind @db2qp_sp.lst blocking all
6. Create a table space for the DB2 Query Patroller result tables.
23.9.2.2 Manual Installation Commands
dqpcrt
This command is used to allocate a node on the DB2 UDB EE or DB2 UDB EEE system as a DB2 Query Patroller server. The port name to be used with the DB2 Query Patroller instance, and the name of the DB2 UDB EE or EEE instance designated as the DB2 Query Patroller server, are required parameters. Syntax:
dqpcrt {-s | -a} -p port_name instance_name
dqpcrt -h
Table 10. dqpcrt Command Parameters
-s             Creates a DB2 Query Patroller server on the named DB2 UDB EE or EEE instance.
-a             Creates a DB2 Query Patroller agent on the named DB2 UDB EE or EEE instance.
port_name      Identifies the port name to be used with the DB2 Query Patroller server or agent.
instance_name  Identifies the name of the DB2 UDB EE or EEE instance that is to be designated as a DB2 Query Patroller server instance.
-h             Displays command usage information.
dqpsetup
This command is used to set the parameters for the DB2 Query Patroller server. The size_DMS parameter and the -o flag are optional. The -o flag can be used to remove schema objects from a previously installed version of this product. Syntax:
dqpsetup -d database_name -g nodegroup_name -n node_number -t tablespace_name -r result_tablespace_name -l tablespace_path [-s size_DMS] [-o] instance_name
dqpsetup -h
Table 11. dqpsetup Command Parameters
-d database_name           Name of the database to be used with the DB2 Query Patroller server.
-g nodegroup_name          Name of the nodegroup that contains the table space for the DB2 Query Patroller server.
-n node_number             Node number of a single node where the nodegroup is defined.
-t tablespace_name         Name of the DB2 Query Patroller table space. The default type is an SMS table space.
-r result_tablespace_name  Name of the result table space to be used.
-l tablespace_path         Full path name of the table space.
-s size_DMS                Size of the DMS table space. Use the -s flag to specify the size for the DMS table space. This parameter is optional and is specified only if a DMS table space is to be used. The default is an SMS table space.
-o                         Overwrites any existing DB2 Query Patroller schema objects. This parameter is optional.
instance_name              Name of the DB2 UDB EE or EEE instance that is to be designated as a DB2 Query Patroller server.
-h                         Displays command usage information.
dqplist
This command is used to find the name of the DB2 UDB EE or DB2 UDB EEE instance being used as the DB2 Query Patroller server. It can only be run from the node where the DB2 Query Patroller server was created. Syntax:
dqplist [-h]
The -h flag displays command usage information.
dqpdrop
This command is used to drop an existing DB2 Query Patroller server instance. It can only be run from the node where the DB2 Query Patroller server was created. Syntax:
dqpdrop instance_name
dqpdrop -h
The -h flag provides usage information. The instance_name parameter is the name of the DB2 Query Patroller instance that you want to drop.
23.9.3 Enabling Query Management
In the "Getting Started" chapter, under "Enabling Query Management", the text should read: You must be the owner of the database, or you must have SYSADM, SYSCTRL, or SYSMAINT authority to set database configuration parameters.
23.9.4 Starting Query Administrator
In the "Using QueryAdministrator to Administer DB2 Query Patroller" chapter, instructions are provided for starting QueryAdministrator from the Start menu on Windows. The first step provides the following text: If you are using Windows, you can select DB2 Query Patroller --> QueryAdministrator from the IBM DB2 program group. The text should read: DB2 Query Patroller --> QueryAdmin.
23.9.5 User Administration
In the "User Administration" section of the "Using QueryAdministrator to Administer DB2 Query Patroller" chapter, the definition for the Maximum Elapsed Time parameter indicates that if the value is set to 0 or -1, the query will always run to completion. This parameter cannot be set to a negative value. The text should indicate that if the value is set to 0, the query will always run to completion. The Max Queries parameter specifies the maximum number of jobs that DB2 Query Patroller will run simultaneously. Max Queries must be an integer within the range of 0 to 32767.
23.9.6 Creating a Job Queue
In the "Job Queue Administration" section of the "Using QueryAdministrator to Administer DB2 Query Patroller" chapter, the screen capture in the steps for "Creating a Job Queue" should be displayed after the second step. The Information about new Job Queue window opens once you click New on the Job Queue Administration page of the QueryAdministrator tool. References to the Job Queues page or the Job Queues tab should read Job Queue Administration page and Job Queue Administration tab, respectively.
23.9.7 Using the Command Line Interface
For a user with User authority on the DB2 Query Patroller system to submit a query and have a result table created, the user may require CREATETAB authority on the database. The user does not require CREATETAB authority on the database if the DQP_RES_TBLSPC profile variable is left unset, or if the DQP_RES_TBLSPC profile variable is set to the name of the default table space. The creation of the result tables succeeds in this case because users have the authority to create tables in the default table space.
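If you do set the profile variable, a hypothetical example (the table space name IWMRESTBSP is a placeholder) would be:
db2set DQP_RES_TBLSPC=IWMRESTBSP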
23.9.8 Query Enabler Notes
* When using third-party query tools that use a keyset cursor, queries will not be intercepted. In order for Query Enabler to intercept these queries, you must modify the db2cli.ini file to include:
[common]
DisableKeySetCursor=1
* For AIX clients, please ensure that the environment variable LIBPATH is not set. The library libXext.a, shipped with the JDK, is not compatible with the library in the /usr/lib/X11 subdirectory. This will cause problems with the Query Enabler GUI.
------------------------------------------------------------------------
Information Center
------------------------------------------------------------------------
24.1 "Invalid shortcut" Error on the Windows Operating System
When using the Information Center, you may encounter an error like "Invalid shortcut". If you have recently installed a new Web browser or a new version of a Web browser, ensure that HTML and HTM documents are associated with the correct browser. See the Windows Help topic "To change which program starts when you open a file".
------------------------------------------------------------------------
OLAP Starter Kit
------------------------------------------------------------------------
25.1 OLAP Server Web Site
For the latest installation and usage tips for the DB2 OLAP Starter Kit, check the Library page of the DB2 OLAP Server Web site: http://www.ibm.com/software/data/db2/db2olap/library.html
------------------------------------------------------------------------
25.2 Completing the DB2 OLAP Starter Kit Setup on AIX and Solaris
The DB2 OLAP Starter Kit install follows the basic procedures of the DB2 UDB install for UNIX. The product files are laid down by the installer into a system directory owned by the root user (for AIX: /usr/lpp/db2_07_01; for Solaris: /opt/IBMdb2/V7.1). Then, during the instance creation phase, two DB2 OLAP directories (essbase and is) are created within the instance user's home directory under sqllib. Only one instance of the OLAP server can run on a machine at a time. To complete the setup, the user must manually change the is/bin directory so that it is not a link to the is/bin directory in the system. It should link to a writable directory within the instance's home directory. To complete the setup for Solaris, log on using the instance ID, change to the sqllib/is directory, then enter the following:
rm bin
mkdir bin
cd bin
ln -s /opt/IBMdb2/V7.1/is/bin/ismesg.mdb ismesg.mdb
ln -s /opt/IBMdb2/V7.1/is/bin/olapicmd olapicmd
ln -s /opt/IBMdb2/V7.1/is/bin/olapisvr olapisvr
ln -s /opt/IBMdb2/V7.1/is/bin/essbase.mdb essbase.mdb
ln -s /opt/IBMdb2/V7.1/is/bin/libolapams.so libolapams.so
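On AIX, where the product files are installed under /usr/lpp/db2_07_01 as noted above, the equivalent sequence would presumably substitute that path; this is a sketch, not an official procedure:
rm bin
mkdir bin
cd bin
ln -s /usr/lpp/db2_07_01/is/bin/ismesg.mdb ismesg.mdb
ln -s /usr/lpp/db2_07_01/is/bin/olapicmd olapicmd
ln -s /usr/lpp/db2_07_01/is/bin/olapisvr olapisvr
ln -s /usr/lpp/db2_07_01/is/bin/essbase.mdb essbase.mdb
ln -s /usr/lpp/db2_07_01/is/bin/libolapams.so libolapams.so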
------------------------------------------------------------------------
25.3 Logging in from OLAP Integration Server Desktop
To use DB2 OLAP Integration Server Desktop to create OLAP models and metaoutlines, you must connect the client software to two servers: DB2 OLAP Integration Server and DB2 OLAP Server. The login dialog prompts you for the information that the Desktop needs to connect to these two servers. On the left side of the dialog, enter information about DB2 OLAP Integration Server. On the right side, enter information about DB2 OLAP Server.
To connect to DB2 OLAP Integration Server:
* Server: Enter the host name or IP address of your Integration Server. If you have installed the Integration Server on the same workstation as your desktop, then typical values are "localhost" or "127.0.0.1".
* OLAP Metadata Catalog: When you connect to OLAP Integration Server, you must also specify a Metadata Catalog. OLAP Integration Server stores information about the OLAP models and metaoutlines you create in a relational database known as the Metadata Catalog. This relational database must be registered for ODBC. The catalog database contains a special set of relational tables that OLAP Integration Server recognizes. On the login dialog, you can specify an Integration Server and then expand the pull-down menu for the OLAP Metadata Catalog field to see a list of the ODBC data source names known to the OLAP Integration Server. Choose an ODBC database that contains the metadata catalog tables.
* User Name and Password: OLAP Integration Server will connect to the Metadata Catalog using the user name and password that you specify on this panel. This is a login account that exists on the server (not the client, unless the server and client are running on the same machine). The user name must be the user who created the OLAP Metadata Catalog. Otherwise, OLAP Integration Server will not find the relational tables in the catalog database, because the table schema names will be different.
The DB2 OLAP Server information is optional, so the input fields on the right side of the Login dialog may be left blank. However, some operations in the Desktop and the Administration Manager require that you connect to a DB2 OLAP Server. If you leave these fields blank, then the Desktop will display the Login dialog again if the Integration Server needs to connect to DB2 OLAP Server in order to complete an operation that you requested. It is recommended that you always fill in the DB2 OLAP Server fields on the Login dialog.
To connect to DB2 OLAP Server:
* Server: Enter the host name or IP address of your DB2 OLAP Server. If you are running the OLAP Starter Kit, then your OLAP Server and Integration Server are the same. If the Integration Server and OLAP Server are installed on different hosts, then enter the host name or an IP address that is defined on OLAP Integration Server.
* User Name and Password: OLAP Integration Server will connect to DB2 OLAP Server using the user name and password that you specify on this panel. This user name and password must already be defined to the DB2 OLAP Server. OLAP Server manages its own user names and passwords, separately from the host operating system.
25.3.1 Starter Kit Login Example
The following example assumes that you created the OLAP Sample, and that you selected db2admin as your administrator user ID and password as your administrator password during DB2 UDB 7.1 installation.
* For OLAP Integration Server: Server is localhost, OLAP Metadata Catalog is TBC_MD, User Name is db2admin, Password is password.
* For DB2 OLAP Server: Server is localhost, User Name is db2admin.
------------------------------------------------------------------------
25.4 Manually creating and configuring the sample databases for OLAP Integration Server
The sample databases are created automatically when you install OLAP Integration Server. The following instructions explain how to set up the Catalog and Sample databases manually, if necessary.
1. In Windows, open the DB2 Command Window by clicking Start --> Programs --> DB2 for Windows NT --> Command Window.
2. Create the production catalog database:
   a. Type db2 create db OLAP_CAT
   b. Type db2 connect to OLAP_CAT
3. Create tables in the database:
   a. Navigate to \SQLLIB\IS\ocscript\ocdb2.sql
   b. Type db2 -tf ocdb2.sql
4. Create the sample source database:
   a. Type db2 connect reset
   b. Type db2 create db TBC
   c. Type db2 connect to TBC
5. Create tables in the database:
   a. Navigate to \SQLLIB\IS\samples\
   b. Copy tbcdb2.sql to \SQLLIB\samples\db2sampl\tbc
   c. Copy lddb2.sql to \SQLLIB\samples\db2sampl\tbc
   d. Navigate to \SQLLIB\samples\db2sampl\tbc
   e. Type db2 -tf tbcdb2.sql
   f. Type db2 -vf lddb2.sql to load sample source data into the tables.
6. Create the sample catalog database:
   a. Type db2 connect reset
   b. Type db2 create db TBC_MD
   c. Type db2 connect to TBC_MD
7. Create tables in the database:
   a. Navigate to \SQLLIB\IS\samples\tbc_md
   b. Copy ocdb2.sql to \SQLLIB\samples\db2sampl\tbcmd
   c. Copy lcdb2.sql to \SQLLIB\samples\db2sampl\tbcmd
   d. Navigate to \SQLLIB\samples\db2sampl\tbcmd
   e. Type db2 -tf ocdb2.sql
   f. Type db2 -vf lcdb2.sql to load sample metadata into the tables.
8. Configure ODBC for TBC_MD, TBC, and OLAP_CAT:
   a. Open the NT Control Panel by clicking Start --> Settings --> Control Panel.
   b. Select ODBC (or ODBC Data Sources) from the list.
   c. Select the System DSN tab.
   d. Click Add. The Create New Data Source window opens.
   e. Select IBM DB2 ODBC DRIVER from the list.
   f. Click Finish. The ODBC IBM DB2 Driver - Add window opens.
   g. Type the name of the data source (OLAP_CAT) in the Data source name field.
   h. Type the alias name in the Database alias field, or click the down arrow and select OLAP_CAT from the list.
   i. Click OK.
   j. Repeat these steps for the TBC_MD and the TBC databases.
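As an alternative to the Control Panel steps in step 8, the ODBC registration can also be done from the DB2 Command Window with the CATALOG ODBC DATA SOURCE command. The following is a sketch that assumes each data source name is the same as the database alias created above:

   db2 catalog system odbc data source OLAP_CAT
   db2 catalog system odbc data source TBC_MD
   db2 catalog system odbc data source TBC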
------------------------------------------------------------------------

25.5 Known problems and limitations

This section lists known limitations for the DB2 OLAP Starter Kit, the DB2 OLAP Desktop client, Spreadsheet clients, and DB2 OLAP Integration Server.

25.5.1 DB2 OLAP Starter Kit

* The currently supported platforms for the DB2 OLAP Starter Kit are Windows, AIX, and the Sun Solaris Operating Environment.
* The OLAP Starter Kit includes four sample DB2 OLAP Server applications named Demo, Sampeast, Sample, and Samppart. Each application includes one or more databases. No data is loaded into any of these databases. You must upgrade to the DB2 OLAP Server full product to be able to load data into these databases.
* The incorrect help panels are displayed for:
  o The Query Designer Row-->Measures-->Profit window.
  o The Cascade Options window.
  o The Subset Dialog window.
  In addition, F1 help does not bring up the correct help screen.
* Some words or characters are not translated, or are translated incorrectly, in the following windows:
  o The Output Options window uses the English words "Default" and "Long Names".
  o In combined.mtx messages, the variable %s is resolved in English.
  o In the Query Designer, the NLV translated words for Ascending and Descending do not appear when you expand Data Sorting in the left pane.
  o In Database Object Names, some NLV characters are incorrect.
  o The log files contain some incorrect NLV characters.
* You must install Adobe Acrobat in order to use the tutorial and online help.
* There is no migration path from DB2 OLAP Server to the OLAP Starter Kit. The OLAP Starter Kit is targeted at first-time OLAP users.

25.5.2 DB2 OLAP Desktop Client

* OLAP Integration Server requires that an accounts dimension with at least one measure be defined in the OLAP model. If the OLAP model that you use to create a metaoutline does not contain an accounts dimension with one or more measures, and you use the Database Measures tab to create a single measure in the OLAP metaoutline, you can save the metaoutline without receiving an error, but a subsequent member load will fail.
* DB2 OLAP Integration Server occasionally displays Essbase error numbers without the corresponding error message. WORKAROUND: View the message.txt file located in the ISHOME/esslib directory, or check the OLAP Integration Server log file.
* DB2 OLAP Integration Server Desktop does not support large fonts. If the client computer's resolution is set to 1024 x 768 pixels or less with large fonts, buttons in the OLAP Integration Server Desktop Login and Welcome windows are truncated, and usage is restricted. WORKAROUND: On the Windows desktop, select Start--> Settings--> Control Panel-->Display--> Settings. Select Small Fonts from the Font Size drop-down list.
* Process buttons in the OLAP Integration Server Desktop OLAP Model Assistant and OLAP Metaoutline Assistant tabs do not display properly without a color palette of 65536 colors. WORKAROUND: To set the optimum color palette, on the Windows desktop, select Start--> Settings--> Control Panel-->Display--> Settings. Select 65536 from the Color Palette drop-down list.
* Windows. If there is a date/time transformation and filter on the same member, the DB2 server crashes on Windows NT. This problem does not occur on DB2 servers running on UNIX.
* OLAP Integration Server Desktop only supports table names from 1 to 30 characters in length.
* The dimension renaming function reacts differently depending on the procedure used. You can use names that contain spaces in the Rename window on a dimension table. You cannot use names that contain spaces in the Rename window of Dimension Properties. WORKAROUND: Do not imbed spaces in dimension names.
* In some environments, the Metaoutline Properties window does not open unless you save the metaoutline before trying to open the window.
* If your operating system file system does not allow names that are longer than eight characters, the member load and view sample functions could fail with error number 2001007. The workaround is to start DB2 OLAP Integration Server in a directory that is in a file system that supports longer names.
* An accounts dimension should not be deleted from a metaoutline after the metaoutline has been created. If you need to change the measures in an accounts dimension, delete all existing measures in the metaoutline and then create new measures.
* The View Table Data option in the OLAP Model standard user interface, which enables viewing of relational source table data in the left frame of the OLAP Model main window, has a limit of 1000 rows. Data source rows are displayed in 100-row increments per window by clicking the Next button. The total number of rows retrieved for display cannot exceed the first 1000 rows.

25.5.3 Spreadsheet Clients

* If you use NLV characters in a Lotus 123 spreadsheet name, or use NLV characters in the spreadsheet, you will be unable to connect, retrieve data, or execute any other function from the spreadsheet.
* Rapid double clicking in a cell in the Lotus 123 spreadsheet add-in can cause the following error message:

     Microsoft Visual C++ Runtime error
     Program c:\lotus\123\123w.exe
     abnormal program termination

  WORKAROUND: Click slowly to expand the cells one at a time.
* In Lotus 123, selecting two rows at the same time and then selecting Essbase - Keep only or Remove only causes only one column to be kept or removed.
* In Lotus 123, an error occurs in the Essbase calculation when several rows are selected.
* AIX. The sample applications do not contain data. Although you can use spreadsheet programs to retrieve the database information, no values are displayed other than the dimension and member names; some values are indicated as missing.
* If a down arrow is pressed in the Linked Objects browser, spreadsheets crash with a Dr. Watson trap error. The correct error message cannot be displayed.
* In Lotus 123, if no workbook is open, an error message should be displayed when you click an OLAP Server menu item. Instead, no message is displayed and you receive two warning beeps.
* In Excel, double clicking formulas in the spreadsheet causes formulas to be deleted from the spreadsheet. Specifying Retain on Zooms does not prevent this error.

25.5.4 DB2 OLAP Integration Server

* Currently, only English is supported for scheduling functions. In some non-English environments, using the Tool-->Scheduler function in the OLAP Metaoutline standard user interface on NT client computers causes the OLAP Integration Server to crash. The crash occurs because the NT Scheduler stores scheduling information in language-specific format, and the OLAP Integration Server is unable to parse the information.
* DB2 OLAP Integration Server does not support the GRAPHIC, VARGRAPHIC, and LONG VARGRAPHIC data types during View Table operations in the OLAP Model standard user interface. Relational tables containing data with the GRAPHIC, VARGRAPHIC, or LONG VARGRAPHIC data type appear blank when View Table is selected in the OLAP model. WORKAROUND: Make the following addition to the DB2CLI.INI file:

     [SAMPLE]
     PATCH1=65536   <------------------------------------------(add)
     PATCH2=7       <------------------------------------------(add)
     DBALIAS=SAMPLE

* If OLAP Integration Server encounters a NULL value during a data load, it automatically loads the data into the parent member of the NULL. However, if the NULL is at Generation 2, OLAP Integration Server cannot load the data to the parent member, because the parent member is the dimension level member. In this case, OLAP Integration Server records an error in the log file. WORKAROUND: Do not include NULLs at Generation 2.
* OLAP Integration Server does not support Relational Database Management System (RDBMS) column names that have imbedded blanks. If OLAP Integration Server encounters blanks, it generates invalid SQL statements.
* OLAP Integration Server does not read some double-byte character set (DBCS) characters, such as minus (-) signs, when retrieving relational data source values during a preview operation. If OLAP Integration Server encounters such characters during a preview operation, an "Unexpected Error at Condition" error message is displayed.
* Using the DBCS minus sign (-) character in a column concatenation in an OLAP model generates a syntax error during member loads. WORKAROUND: When performing transformations on columns in an OLAP model, do not use a minus sign, a hyphen, or a dash (-) character in the column name. Do not use relational tables or relational table columns that include a minus sign, a hyphen, or a dash (-) character.
* AIX. When you start the sample application, you will see a message indicating that the server does not have the currency conversion option.
* Windows. Two sample catalog databases are created during installation for use with the DB2 OLAP Integration Server. However, if you try to log in to one of these databases from the DB2 OLAP Integration Server Desktop, you receive a CLI error message 'Invalid connection string attribute'. WORKAROUND: Update the db2cli.ini file located in the sqllib directory, following these steps:
  1. Make a backup copy of db2cli.ini.
  2. Delete 'DATABASE=OLAPCATP' and 'DATABASE=OLAPCATD' from each stanza:

       [OLAPCATP]
       DATABASE=OLAPCATP    <----------------------------------(delete)
       DESCRIPTION=OLAPCATP
       DBALIAS=OLAPCATP

       [OLAPCATD]
       DATABASE=OLAPCATD    <----------------------------------(delete)
       DESCRIPTION=OLAPCATD
       DBALIAS=OLAPCATD

* Hyperion Integration Server does not add a metaoutline filter description in the OLAP Metaoutline Assistant.
* Use the following workaround to avoid a scheduling problem in the AIX English version environment. For all new AIX servers, add the specified user to the /var/adm/cron/cron.allow file to allow scheduling for that user. Then create an empty file named for the specified user, with permissions set to 555, in the /var/spool/crontabs directory. A similar setup is required for other UNIX environments; for details, see the man page for crontab.
* A DBCS space is not recognized as a DBCS space by OLAP Integration Server. The following transformation space settings do not work for source data columns that contain a DBCS space:
  o Dropping leading/trailing spaces
  o Converting spaces to underscores
  o Concatenating
* OLAP Integration Server cannot save a dimension description in the OLAP Model Assistant if the description contains DBCS characters.
* OLAP Integration Server does not support pass-through (database-specific) transformations using SQL functions. Specifying a built-in RDBMS function, such as Substring or Left, causes OLAP Integration Server to generate invalid SQL.
* Creating hierarchies in the OLAP Model Assistant causes OLAP Integration Server to create an empty folder in the Hyperion\IS\Loadinfo folder on the client. The empty folder contains an empty .txt file. Empty folders and files are also created when you access either a View Sample from the Edit Hierarchy dialog box, or the Preview Results dialog box from the Edit OLAP Model dialog box in the standard user interface. To prevent buildup of empty folders and files, you can delete them from the Loadinfo folder at any time.
* OLAP Integration Server does not display a message to confirm that a dimension table has been deleted from an OLAP model. Following is a suggested workaround: if you did not save the OLAP model after you added the dimension table that you want to delete, click Close to close the OLAP model without saving any changes, and then revert to the previous version of the model. Any other changes that you made during the current session will also be lost.
* OLAP Integration Server does not support Essbase ESSCMD scripts. The IS\esscript directory has not been deleted from the OLAP Integration Server directory structure that is created during the installation process. This directory is an empty directory that is not used.

------------------------------------------------------------------------

25.6 OLAP Starter Kit Spreadsheet Needs Current Windows Service Pack

Before installing DB2 OLAP Server on Windows NT, you must apply Microsoft Windows NT 4.0 Service Pack 5.
If problems occur while installing the OLAP Starter Kit spreadsheet add-in on Windows 95 or Windows 98, the cause may be down-level Microsoft system files. Obtain the following files from Microsoft via a Windows 95/98 service pack, or unzip %arborpath%\bin\olapewd.zip and copy the files into the Windows system directory. Make sure you do not replace any files already on your system that have a newer release date. The Windows 9x system files and their required levels are:

* ASYCFILT.DLL 2.20.4118.1
* COMCAT.DLL 4.71.1441.1
* COMPOBJ.DLL 2.10.35.35
* DCOMCNFG.EXE 4.0.1381.4
* DLLHOST.EXE 4.0.1381.4
* IPROP.DLL 4.0.1381.4
* OLE2.DLL 2.10.35.35
* OLEAUT32.DLL 2.20.4118.1
* OLECNV32.DLL 4.0.1381.4
* OLEDLG.DLL 4.0.1381.4
* OLEPRO32.DLL 5.0.4118.1
* OLETHK32.DLL 4.0.1371.1
* RPCLTC1.DLL 4.0.1381.4
* RPCLTCCM.DLL 4.0.1381.4
* RPCLTSCM.DLL 4.0.1381.4
* RPCMQCL.DLL 4.0.1381.4
* RPCMQSVR.DLL 4.0.1381.4
* RPCNS4.DLL 4.0.1371.1
* RPCSS.EXE 4.0.1381.4
* STDOLE2.TLB 2.20.4122.1
* STDOLE32.TLB 2.10.3027.1
* STORAGE.DLL 2.10.35.35

------------------------------------------------------------------------

25.7 OLAP Spreadsheet Add-in EQD Files Missing

In the DB2 OLAP Starter Kit, the Spreadsheet add-in has a component called the Query Designer (EQD). The online help menu for EQD includes a button called Tutorial that does not display anything. The material that should be displayed in the EQD tutorials is a subset of chapter two of the OLAP Spreadsheet Add-in User's Guide for Excel and the OLAP Spreadsheet Add-in User's Guide for 1-2-3. All the information in the EQD tutorial is available in the HTML and PDF versions of these books in the Information Center.

------------------------------------------------------------------------

25.8 Attribute Dimension Support

DB2 OLAP Starter Kit now includes attribute dimension support. Attribute dimensions can now be created in OLAP models and metaoutlines to analyze attribute data in an OLAP database. The following are now available:

* Meaningful summaries of data using attributes through the creation of crosstab reports. Crosstab reports provide a way of displaying summaries of data based on multiple characteristics.
* Access to five consolidations of all attribute data: sums, counts, averages, minimums, and maximums.
* Four attribute types that enable you to selectively view the data comparisons that you want to see: text, numeric, Boolean, and date-based.
* Use of numeric attributes to group and summarize attribute data by ranges of values.

Attribute dimensions and members are Dynamic Calc only, meaning that attribute data is not stored in the OLAP database, resulting in smaller outlines. In addition, at retrieval time users can decide whether or not to view attribute data, giving them greater choice and flexibility in determining their OLAP reporting needs on an as-needed basis.

25.8.1 Updated books for DB2 OLAP Starter Kit

The following books for the DB2 OLAP Starter Kit have been updated:

* OLAP Integration Server Administration Guide
* OLAP Integration Server Model User's Guide
* OLAP Integration Server Metaoutline User's Guide

These books are available on the Web at:

   http://www.ibm.com/software/data/db2/db2olap/library.html

------------------------------------------------------------------------

What's New

------------------------------------------------------------------------

26.1 On Demand Log Archive Support Documentation Error
Data Management Enhancements", there are two subsections pertaining to log archiving: "Closing Log After Backup" and "On Demand Log Archive Support". "On Demand Log Archive Support" is incorrect and should be disregarded. Information contained in the section "Closing Log After Backup" is correct as published. ------------------------------------------------------------------------ Unicode Updates ------------------------------------------------------------------------ 27.1 Introduction The Unicode standard is the universal character encoding scheme for written characters and text. Unicode is multi-byte representation of a character. It defines a consistent way of encoding multilingual text that enables the exchange of text data internationally and creates the foundation for global software. Unicode provides two encoding schemes. The default encoding scheme is UTF-16, which is a 16-bit encoding format. UCS-2 is a subset of UTF-16 which uses two bytes to represent a character. UCS-2 is generally accepted as the universal code page capable of representing all the necessary characters from all existing single and double byte code pages. UCS-2 is registered in IBM as code page 1200. The other Unicode encoding format is UTF-8, which is byte-oriented and has been designed for ease of use with existing ASCII-based systems. UTF-8 uses a varying number of bytes (usually 1-3, sometimes 4) to store each character. The invariant ASCII characters are stored as single bytes. All other characters are stored using multiple bytes. In general, UTF-8 data can be treated as extended ASCII data by code that was not designed for multi-byte code pages. UTF-8 is registered in IBM as code page 1208. It is important that applications take into account the requirements of data as it is converted between the local code page, UCS-2 and UTF-8. For example, 20 characters will require exactly 40 bytes in UCS-2 and somewhere between 20 and 60 bytes in UTF-8, depending on the original code page and the characters used. 27.1.1 DB2 Unicode Databases and Applications A DB2 Universal database for Unix, Windows, or OS/2 created with a UTF-8 codeset can be used to store data in both UCS-2 and UTF-8 formats. Such a database is referred to as a Unicode database. SQL CHAR data is encoded using UTF-8 and SQL GRAPHIC data is encoded using UCS-2. This can be equated to storing Single-Byte (SBCS) and Multi-Byte(MBCS) codesets in CHAR columns and Double-Byte (DBCS) codesets in GRAPHIC columns. The code page of an application may not match the code page that DB2 uses to store data. In a non-Unicode database, when the code pages are not the same, the database manager converts character and graphic (pure DBCS) data that is transferred between client and server. In a Unicode database, the conversion of character data between the client code page and UTF-8 is automatically performed by the database manager, but all graphic (UCS-2) data is passed without any conversion between the client and the server. Figure 1. Code Page Conversions Performed by the Database Manager [Code Page Conversions Performed by the Database Manager] Notes: 1. When connecting to Unicode Databases, if the application sets DB2CODEPAGE=1208, the local code page is UTF-8, so no code page conversion is needed. 2. When connected to a Unicode Database, CLI applications can also receive character data as graphic data, and graphic data as character data. 
It is possible for an application to specify a UTF-8 code page, indicating that it will send and receive all graphic data in UCS-2 and character data in UTF-8. This application code page is only supported for Unicode databases.

Other points to consider when using Unicode:

1. The database code page is determined at the time the database is created, and by default its value is determined from the operating system locale (or code page). The CODESET and TERRITORY keywords can be used to explicitly create a Unicode DB2 database. For example:

      CREATE DATABASE unidb USING CODESET UTF-8 TERRITORY US

2. The application code page also defaults to the local code page, but this can be overridden by UTF-8 in one of two ways:
   o By setting the application code page to UTF-8 (1208) with this command:

        db2set DB2CODEPAGE=1208

   o For CLI/ODBC applications, by calling SQLSetConnectAttr() and setting SQL_ATTR_ANSI_APP to SQL_AA_FALSE. The default setting is SQL_AA_TRUE.
3. Data in GRAPHIC columns takes exactly two bytes for each Unicode character, whereas data in CHAR columns takes from 1 to 3 bytes for each Unicode character. SQL limits in terms of characters for GRAPHIC columns are generally half of those for CHAR columns, but in terms of bytes they are equal. The maximum character length for a CHAR column is 254; the maximum character length for a GRAPHIC column is 127. For more information, see MAX in the "Functions" chapter of the SQL Reference.
4. A graphic literal is differentiated from a character literal by a G prefix. For example:

      SELECT * FROM mytable
         WHERE mychar = 'utf-8 data'
         AND mygraphic = G'ucs-2 data'

   Note: The G prefix is not required for Unicode databases. See "Literals in Unicode Databases" for more information and updated support.
5. Support for CLI/ODBC and JDBC applications differs from the support for embedded applications. For information specific to CLI/ODBC support, see 27.3, "CLI Guide and Reference".
6. The byte ordering of UCS-2 data may differ between platforms. Internally, DB2 uses big-endian format.

27.1.2 Documentation Updates

This document updates the following information on using Unicode with DB2 Version 7.1:

* SQL Reference: Chapter 3 Language Elements and Chapter 4 Functions
* CLI Guide and Reference: Chapter 3. Using Advanced Features and Appendix C. DB2 CLI and ODBC
* Data Movement Utilities Guide and Reference, Appendix C. Export/Import/Load Utility File Formats

For more information on using Unicode with DB2, refer to the Administration Guide, Appendix J. National Language Support (NLS): "Unicode/UCS-2 and UTF-8 Support in DB2 UDB".

------------------------------------------------------------------------

27.2 SQL Reference

27.2.1 Chapter 3 Language Elements

27.2.1.1 Promotion of Data Types

In this section, Table 5 shows the precedence list for each data type. Please note:

1. For a Unicode database, the following are considered to be equivalent data types:
   o CHAR and GRAPHIC
   o VARCHAR and VARGRAPHIC
   o LONG VARCHAR and LONG VARGRAPHIC
   o CLOB and DBCLOB
2. In a Unicode database, it is possible to create functions where the only difference in the function signature is between equivalent CHAR and GRAPHIC data types, for example, foo(CHAR(8)) and foo(GRAPHIC(8)). We strongly recommend that you do not define such duplicate functions, since migration to a future release will require one of them to be dropped before the migration can proceed.

If such duplicate functions do exist, the choice of which one to invoke is determined by a two-pass algorithm. The first pass attempts to find a match using the same algorithm as is used for resolving functions in a non-Unicode database. If no match is found, a second pass is done, taking into account the following promotion precedence for CHAR and GRAPHIC strings:

   GRAPHIC-->CHAR-->VARGRAPHIC-->VARCHAR-->LONG VARGRAPHIC-->LONG VARCHAR-->DBCLOB-->CLOB
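For example, because CHAR and GRAPHIC are equivalent in a Unicode database, a graphic argument can be passed to a function declared with a character parameter. A brief sketch, assuming only the hypothetical foo(CHAR(8)) from the paragraph above exists (and no graphic counterpart):

   VALUES foo(G'12345678')

Here the GRAPHIC(8) literal is promoted to CHAR along the precedence chain shown above, and the invocation resolves to foo(CHAR(8)).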
27.2.1.2 Casting Between Data Types

The following entry has been added to the list introduced as: "The following casts involving distinct types are supported":

* For a Unicode database, cast from a VARCHAR or VARGRAPHIC to a distinct type DT whose source data type is CHAR or GRAPHIC.

The following are updates to Table 6. Supported Casts between Built-in Data Types. Only the affected rows of the table are included.

Table 12. Supported Casts between Built-in Data Types

                                        Target Data Type
                           LONG                          LONG
Source Data Type  CHAR VARCHAR VARCHAR CLOB GRAPHIC VARGRAPHIC VARGRAPHIC DBCLOB
CHAR               Y     Y       Y      Y     Y1       Y1          -         -
VARCHAR            Y     Y       Y      Y     Y1       Y1          -         -
LONG VARCHAR       Y     Y       Y      Y     -        -           Y1        Y1
CLOB               Y     Y       Y      Y     -        -           -         Y1
GRAPHIC            Y1    Y1      -      -     Y        Y           Y         Y
VARGRAPHIC         Y1    Y1      -      -     Y        Y           Y         Y
LONG VARGRAPHIC    -     -       Y1     Y1    Y        Y           Y         Y
DBCLOB             -     -       -      Y1    Y        Y           Y         Y

1 Cast is only supported for Unicode databases.

27.2.1.3 Assignments and Comparisons

Assignments and comparisons involving both character and graphic data are only supported when one of the strings is a literal. For function resolution, graphic literals and character literals will both match character and graphic function parameters.

The following are updates to Table 7. Data Type Compatibility for Assignments and Comparisons. Only the affected rows of the table, and the new footnote 6, are included:

Operands          Binary  Decimal Floating Character Graphic                Time-  Binary
                  Integer Number  Point    String    String  Date Time stamp  String UDT
Character String  No      No      No       Yes       Yes 6   1    1    1      No 3   2
Graphic String    No      No      No       Yes 6     Yes     No   No   No     No     2

6 Only supported for Unicode databases.

String Assignments

Storage Assignment

The last paragraph of this sub-section is modified as follows:

When a string is assigned to a fixed-length column and the length of the string is less than the length attribute of the target, the string is padded to the right with the necessary number of single-byte, double-byte, or UCS-2 blanks (2). The pad character is always a blank, even for columns defined with the FOR BIT DATA attribute.

Retrieval Assignment

The third paragraph of this sub-section is modified as follows:

When a character string is assigned to a fixed-length variable and the length of the string is less than the length attribute of the target, the string is padded to the right with the necessary number of single-byte, double-byte, or UCS-2 blanks (2). The pad character is always a blank, even for strings defined with the FOR BIT DATA attribute.

(2) UCS-2 defines several SPACE characters with different properties. For a Unicode database, the database manager always uses the ASCII SPACE at position x'0020' as the UCS-2 blank. For an EUC database, the IDEOGRAPHIC SPACE at position x'3000' is used for padding GRAPHIC strings.

Conversion Rules for String Assignments

The following paragraph has been added to the end of this sub-section:

For Unicode databases, character strings can be assigned to a graphic column, and graphic strings can be assigned to a character column.
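The following brief sketch illustrates the relaxed rules in a Unicode database; the table and column names are hypothetical:

   CREATE TABLE unitab (c1 VARCHAR(20), g1 VARGRAPHIC(20));

   -- A character string assigned to a graphic column (Unicode databases only):
   INSERT INTO unitab (g1) VALUES ('character data');

   -- A character-to-graphic cast, marked Y1 in Table 12 above:
   SELECT CAST(c1 AS VARGRAPHIC(20)) FROM unitab;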
DBCS Considerations for Graphic String Assignments

The first paragraph of this sub-section has been modified as follows:

Graphic string assignments are processed in a manner analogous to that for character strings. For non-Unicode databases, graphic string data types are compatible only with other graphic string data types, and never with numeric, character string, or datetime data types. For Unicode databases, graphic string data types are compatible with character string data types.

String Comparisons

Conversion Rules for Comparison

This sub-section has been modified as follows:

When two strings are compared, one of the strings is first converted, if necessary, to the encoding scheme and/or code page of the other string. For details, see the "Rules for String Conversions" section of Chapter 3 Language Elements in the SQL Reference.

27.2.1.4 Rules for Result Data Types

Character and Graphic Strings in a Unicode Database

This is a new sub-section inserted after the sub-section "Graphic Strings". In a Unicode database, character strings and graphic strings are compatible.

If one operand is...  And the other operand is...       The data type of the result is...
GRAPHIC(x)            CHAR(y) or GRAPHIC(y)             GRAPHIC(z) where z = max(x,y)
VARGRAPHIC(x)         CHAR(y) or VARCHAR(y)             VARGRAPHIC(z) where z = max(x,y)
VARCHAR(x)            GRAPHIC(y) or VARGRAPHIC(y)       VARGRAPHIC(z) where z = max(x,y)
LONG VARGRAPHIC       CHAR(y) or VARCHAR(y) or          LONG VARGRAPHIC
                      LONG VARCHAR
LONG VARCHAR          GRAPHIC(y) or VARGRAPHIC(y)       LONG VARGRAPHIC
DBCLOB(x)             CHAR(y) or VARCHAR(y) or CLOB(y)  DBCLOB(z) where z = max(x,y)
DBCLOB(x)             LONG VARCHAR                      DBCLOB(z) where z = max(x,16350)
CLOB(x)               GRAPHIC(y) or VARGRAPHIC(y)       DBCLOB(z) where z = max(x,y)
CLOB(x)               LONG VARGRAPHIC                   DBCLOB(z) where z = max(x,16350)

27.2.1.5 Rules for String Conversions

The third point in the following list has been added to this section. For each pair of code pages, the result is determined by the sequential application of the following rules:

* If the code pages are equal, the result is that code page.
* If either code page is BIT DATA (code page 0), the result code page is BIT DATA.
* In a Unicode database, if one code page denotes data in an encoding scheme different from the other code page, the result is UCS-2 over UTF-8 (that is, the graphic data type over the character data type). 1
* Otherwise, the result code page is determined by Table 8 of the "Rules for String Conversions" section of Chapter 3 Language Elements in the SQL Reference. An entry of 'first' in the table means the code page from the first operand is selected, and an entry of 'second' means the code page from the second operand is selected.

1 In a non-Unicode database, conversion between different encoding schemes is not supported.

27.2.1.6 Expressions

The following has been added:

In a Unicode database, an expression that accepts a character or graphic string will accept any string types for which conversion is supported.

With the Concatenation Operator

The following has been added to the end of this sub-section:

In a Unicode database, concatenation involving both character string operands and graphic string operands will first convert the character operands to graphic operands. Note that in a non-Unicode database, concatenation cannot involve both character and graphic operands.
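As an illustration (reusing the hypothetical mychar and mygraphic columns from the earlier example), the following statement is valid in a Unicode database; the character operand is converted to a graphic operand, so by the result rules above the result is a graphic string:

   SELECT mychar CONCAT mygraphic FROM mytable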
27.2.1.7 Predicates

The following entry has been added to the list introduced by the sentence "The following rules apply to all types of predicates":

* In a Unicode database, all predicates that accept a character or graphic string will accept any string types for which conversion is supported.

27.2.2 Chapter 4 Functions

27.2.2.1 Scalar Functions

The following sentence has been added to the end of this section:

In a Unicode database, all scalar functions that accept a character or graphic string will accept any string types for which conversion is supported.

------------------------------------------------------------------------

27.3 CLI Guide and Reference

27.3.1 Chapter 3. Using Advanced Features

The following is a new section for this chapter.

27.3.1.1 Writing a DB2 CLI Unicode Application

There are two main areas of support for DB2 CLI Unicode applications:

1. The addition of a set of functions that can accept Unicode string arguments in place of ANSI string arguments.
2. The addition of new C and SQL data types to describe data as ANSI or Unicode data.

The following sections provide more information for both of these areas. To be considered a Unicode application, the application must set the SQL_ATTR_ANSI_APP connection attribute to SQL_AA_FALSE before a connection is made. This ensures that CLI will connect as a Unicode client, and that all Unicode data will be sent in either UTF-8 for CHAR data or UCS-2 for GRAPHIC data.

Unicode Functions

The following ODBC API functions support both Unicode (W) and ANSI (A) versions (the function name has a W suffix for Unicode):

   SQLBrowseConnect      SQLForeignKeys        SQLPrimaryKeys
   SQLColAttribute       SQLGetConnectAttr     SQLProcedureColumns
   SQLColAttributes      SQLGetConnectOption   SQLProcedures
   SQLColumnPrivileges   SQLGetCursorName      SQLSetConnectAttr
   SQLColumns            SQLGetDescField       SQLSetConnectOption
   SQLConnect            SQLGetDescRec         SQLSetCursorName
   SQLDataSources        SQLGetDiagField       SQLSetDescField
   SQLDescribeCol        SQLGetDiagRec         SQLSetStmtAttr
   SQLDriverConnect      SQLGetInfo            SQLSpecialColumns
   SQLDrivers            SQLGetStmtAttr        SQLStatistics
   SQLError              SQLNativeSQL          SQLTablePrivileges
   SQLExecDirect         SQLPrepare            SQLTables

For the Unicode functions, arguments that are always string lengths are passed as a count of characters. For functions that return length information for server data, the display size and precision are described in a number of characters. When the length (transfer size of the data) could refer to string or non-string data, the length is described in octet lengths. For example, SQLGetInfoW will still take the length as a count of bytes, but SQLExecDirectW will use a count of characters.

CLI will return result sets in either Unicode or ANSI, depending on the application's binding. If an application binds to SQL_C_CHAR, the driver will convert SQL_WCHAR data to SQL_CHAR. The driver manager maps SQL_C_WCHAR to SQL_C_CHAR for ANSI drivers, but does no mapping for Unicode drivers.

New data types and valid conversions

There are two new CLI or ODBC defined data types, SQL_C_WCHAR and SQL_WCHAR. SQL_C_WCHAR indicates that the C buffer contains UCS-2 data. SQL_WCHAR indicates that a particular column or parameter marker contains Unicode data. For DB2 Unicode servers, graphic columns will be described as SQL_WCHAR. Conversion will be allowed between SQL_C_WCHAR and SQL_CHAR, SQL_VARCHAR, SQL_LONGVARCHAR, and SQL_CLOB, as well as between the graphic data types.
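The following is a minimal sketch (not taken from the manuals) of a CLI Unicode application: it sets SQL_ATTR_ANSI_APP to SQL_AA_FALSE before connecting and fetches graphic data into a SQL_C_WCHAR buffer. The database alias UNIDB, the user ID and password, and the table and column names are hypothetical, and error checking is omitted:

   /* A minimal sketch of a DB2 CLI Unicode application; UNIDB and
    * mytable(mygraphic VARGRAPHIC(20)) are hypothetical names. */
   #include <sqlcli1.h>

   int main(void)
   {
       SQLHANDLE  henv, hdbc, hstmt;
       SQLWCHAR   buf[21];      /* UCS-2 buffer for GRAPHIC data */
       SQLINTEGER ind;

       SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
       SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);

       /* Identify this application as a Unicode application before
          connecting, as described above. */
       SQLSetConnectAttr(hdbc, SQL_ATTR_ANSI_APP,
                         (SQLPOINTER) SQL_AA_FALSE, SQL_IS_INTEGER);

       SQLConnect(hdbc, (SQLCHAR *) "UNIDB",    SQL_NTS,
                        (SQLCHAR *) "userid",   SQL_NTS,
                        (SQLCHAR *) "password", SQL_NTS);

       SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
       SQLExecDirect(hstmt,
                     (SQLCHAR *) "SELECT mygraphic FROM mytable", SQL_NTS);

       /* Bind with SQL_C_WCHAR so the UCS-2 graphic data is fetched
          without conversion loss; the buffer length is in octets. */
       SQLBindCol(hstmt, 1, SQL_C_WCHAR, buf, sizeof(buf), &ind);
       while (SQLFetch(hstmt) == SQL_SUCCESS) {
           /* process buf */
       }

       SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
       SQLDisconnect(hdbc);
       SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
       SQLFreeHandle(SQL_HANDLE_ENV, henv);
       return 0;
   }

Because the connection attribute is set before SQLConnect(), CLI connects as a Unicode client, and the UCS-2 data arrives in the buffer without an intermediate local code page conversion.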
Table 13. Supported Data Conversions

Table 13 extends the data conversion table of the CLI Guide and Reference with a SQL_C_WCHAR column, and with separate Unicode and non-Unicode rows for the GRAPHIC, VARGRAPHIC, and LONG VARGRAPHIC SQL data types. The C data types covered are SQL_C_CHAR, SQL_C_WCHAR, SQL_C_LONG, SQL_C_SHORT, SQL_C_TINYINT, SQL_C_FLOAT, SQL_C_DOUBLE, SQL_C_TYPE_DATE, SQL_C_TYPE_TIME, SQL_C_TYPE_TIMESTAMP, SQL_C_BINARY, SQL_C_BIT, SQL_C_DBCHAR, SQL_C_CLOB_LOCATOR, SQL_C_BLOB_LOCATOR, SQL_C_DBCLOB_LOCATOR, SQL_C_BIGINT, and SQL_C_NUMERIC. The SQL data types listed are BLOB, CHAR, CLOB, DATE, DBCLOB, DECIMAL, DOUBLE, FLOAT, GRAPHIC, INTEGER, LONG VARCHAR, LONG VARGRAPHIC, NUMERIC, REAL, SMALLINT, BIGINT, TIME, TIMESTAMP, VARCHAR, and VARGRAPHIC. In the table, D marks a supported conversion that is the default conversion for the SQL data type, X marks a conversion that all IBM DBMSs support, and a blank marks a conversion that no IBM DBMS supports. As described above, the Unicode GRAPHIC, VARGRAPHIC, and LONG VARGRAPHIC rows support many more conversions than their non-Unicode counterparts, including conversion to and from the character C types.

Notes:

* Data is not converted to LOB locator types; rather, locators represent a data value. Refer to Using Large Objects for more information.
* SQL_C_NUMERIC is only available on 32-bit Windows operating systems.

Obsolete Keyword/Patch Value

Before Unicode applications were supported, applications written to work with single-byte character data could be made to work with double-byte graphic data by means of a series of db2cli.ini keywords, such as GRAPHIC=1, 2, or 3, and PATCH2=7. These workarounds presented graphic data as character data, and also affected the reported length of the data. These keywords are no longer required for Unicode applications and should not be used, because they can cause serious side effects. If it is not known whether a particular application is a Unicode application, we suggest you try it without any of the keywords that affect the handling of graphic data.

Literals in Unicode Databases

In non-Unicode databases, data in LONG VARGRAPHIC and LONG VARCHAR columns cannot be compared. Data in GRAPHIC/VARGRAPHIC and CHAR/VARCHAR columns can only be compared, or assigned to each other, using explicit cast functions, since no implicit code page conversion is supported. This includes GRAPHIC/VARGRAPHIC and CHAR/VARCHAR literals, where a GRAPHIC/VARGRAPHIC literal is differentiated from a CHAR/VARCHAR literal by a G prefix.

For Unicode databases, casting between GRAPHIC/VARGRAPHIC and CHAR/VARCHAR literals is not required. Also, a G prefix is not required in front of a GRAPHIC/VARGRAPHIC literal. Provided at least one of the arguments is a literal, implicit conversions occur. This allows literals with or without the G prefix to be used within statements that use either SQLPrepareW() or SQLExecDirect(). Literals for LONG VARGRAPHICs must still have a G prefix. For more information, see "Casting Between Data Types" in Chapter 3 Language Elements of the SQL Reference.
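For example, in a Unicode database the graphic literal from the earlier example (see 27.1.1) can be written without the G prefix, because the literal is implicitly converted (mytable, mychar, and mygraphic are the hypothetical names used earlier):

   SELECT * FROM mytable
      WHERE mychar = 'utf-8 data'
      AND mygraphic = 'ucs-2 data'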
New CLI Configuration Keywords

The following three keywords have been added to avoid any extra overhead when Unicode applications connect to a database.

1. DisableUnicode

   Keyword Description: Disables the underlying support for Unicode.

   db2cli.ini Keyword Syntax: DisableUnicode = 0 | 1

   Default Setting: 0 (false)

   DB2 CLI/ODBC Settings Tab: This keyword cannot be set using the CLI/ODBC Settings notebook. The db2cli.ini file must be modified directly to make use of this keyword.

   Usage Notes: With Unicode support enabled, and when called by a Unicode application, CLI will attempt to connect to the database using the best client code page possible, to ensure there is no unnecessary data loss due to code page conversion. This may increase the connection time as code pages are exchanged, or may cause code page conversions on the client that did not occur before this support was added. Setting this keyword to 1 (true) causes all Unicode data to be converted to the application's local code page first, before the data is sent to the server. This can cause data loss for any data that cannot be represented in the local code page.

2. ConnectCodepage

   Keyword Description: Specifies the code page to use when connecting to the data source, in order to avoid extra connection overhead.

   db2cli.ini Keyword Syntax: ConnectCodepage = 0 | 1

   Default Setting: 0

   DB2 CLI/ODBC Settings Tab: This keyword cannot be set using the CLI/ODBC Settings notebook. The db2cli.ini file must be modified directly to make use of this keyword.

   Usage Notes: Non-Unicode applications always connect to the database using the application's local code page, or the DB2CODEPAGE environment setting. By default, CLI will ensure that Unicode applications connect to Unicode databases using the UTF-8 and UCS-2 code pages, and connect to non-Unicode databases using the database's code page. This ensures there is no unnecessary data loss due to code page conversion. This keyword allows the user to specify the database's code page when connecting to a non-Unicode database, in order to avoid any extra overhead on the connection. Specify a value of 1 to cause SQLDriverConnect() to return the correct value in the output connection string, so that the value can be used on future SQLDriverConnect() calls.

3. UnicodeServer

   Keyword Description: Indicates that the data source is a Unicode server. Equivalent to setting ConnectCodepage=1208.

   db2cli.ini Keyword Syntax: UnicodeServer = 0 | 1

   Default Setting: 0

   DB2 CLI/ODBC Settings Tab: This keyword cannot be set using the CLI/ODBC Settings notebook. The db2cli.ini file must be modified directly to make use of this keyword.

   Usage Notes: This keyword is equivalent to ConnectCodepage=1208, and is added only for convenience. Set this keyword to avoid extra connect overhead when connecting to DB2 for OS/390 Version 7 or higher. There is no need to set this keyword for DB2 for Windows, DB2 for UNIX, or DB2 for OS/2 databases, since no extra processing is required for them.
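For example, to mark a cataloged DB2 for OS/390 Version 7 database as a Unicode server, a stanza like the following could be added to db2cli.ini (HOST390 is a hypothetical database alias):

   [HOST390]
   UnicodeServer=1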
27.3.2 Appendix C. DB2 CLI and ODBC

The following is a new section added to this appendix.

27.3.2.1 ODBC Unicode Applications

A Unicode ODBC application sends and retrieves character data primarily in UCS-2. It does this by calling the Unicode versions of the ODBC functions (those with a 'W' suffix) and by indicating Unicode data types. The application does not explicitly specify a local code page. The application can still call the ANSI functions and pass local code page strings. For example, the application may call SQLConnectW() and pass the DSN, user ID, and password as Unicode arguments. It may then call SQLExecDirectW() and pass in a Unicode SQL statement string, and then bind a combination of ANSI local code page buffers (SQL_C_CHAR) and Unicode buffers (SQL_C_WCHAR).

The database data types may be local code page, or UCS-2 and UTF-8. If a CLI application calls SQLConnectW, or calls SQLSetConnectAttr with SQL_ATTR_ANSI_APP set to SQL_AA_FALSE, the application is considered a Unicode application. This means all CHAR data is sent to and received from the database in UTF-8 format. The application can then fetch CHAR data into SQL_C_CHAR buffers in the local code page (with possible data loss), or into SQL_C_WCHAR buffers in UCS-2 without any data loss.

If the application does not make either of the two calls above, CHAR data is converted to the application's local code page at the server. This means CHAR data fetched into SQL_C_WCHAR buffers may suffer data loss. If the DB2CODEPAGE instance variable is set (using db2set) to code page 1208 (UTF-8), the application will receive all CHAR data in UTF-8, since this is now the local code page. The application must also ensure that all CHAR input data is in UTF-8. ODBC also assumes that all SQL_C_WCHAR data is in the native endian format. CLI will perform any required byte reversal for SQL_C_WCHAR.

ODBC Unicode Versus Non-Unicode Applications

This release of DB2 Universal Database contains the SQLConnectW() API. A Unicode driver must export SQLConnectW in order to be recognized as a Unicode driver by the driver manager. It is important to note that many ODBC applications (such as Microsoft Access and Visual Basic) call SQLConnectW(). In previous releases of DB2 Universal Database, DB2 CLI did not support this API, and thus was not recognized as a Unicode driver by the ODBC driver manager. This caused the ODBC driver manager to convert all Unicode data to the application's local code page. With the added support of the SQLConnectW() function, these applications will now connect as Unicode applications, and DB2 CLI will take care of all required data conversion.

DB2 CLI now accepts Unicode APIs (with a suffix of "W") as well as regular ANSI APIs. ODBC defines a set of functions with a suffix of "A", but the driver manager does not pass ANSI functions with the "A" suffix to the driver. Instead, it converts these functions to ANSI function calls without the suffix, and then passes them to the driver.

An ODBC application that calls the SQLConnectW() API is considered a Unicode application. Since the ODBC driver manager will always call the SQLConnectW() API regardless of which version the application called, ODBC introduced the SQL_ATTR_ANSI_APP connect attribute to notify the driver whether the application should be considered an ANSI or a Unicode application. If SQL_ATTR_ANSI_APP is not set to SQL_AA_FALSE, DB2 CLI converts all Unicode data to the local code page before sending it to the server.

------------------------------------------------------------------------

27.4 Data Movement Utilities Guide and Reference

27.4.1 Appendix C. Export/Import/Load Utility File Formats

The following update has been added to this appendix:

The export, import, and load utilities are not supported when they are used with a Unicode client connected to a non-Unicode database. Unicode client files are only supported when the Unicode client is connected to a Unicode database.
------------------------------------------------------------------------ Wizards ------------------------------------------------------------------------ 28.1 Setting Extent Size in the Create Database Wizard Using the Create Database Wizard, it is possible to set the Extent Size and Prefetch Size parameters for the User Table Space (but not those for the Catalog or Temporary Tables) of the new database. This feature will be enabled only if at least one container is specified for the User Table Space on the "User Tables" page of the Wizard. ------------------------------------------------------------------------ Additional Information ------------------------------------------------------------------------ 29.1 DB2 Universal Database and DB2 Connect Online Support For a complete and up-to-date source of DB2 information, including information about issues discovered after this document was published, use the DB2 Universal Database & DB2 Connect Online Support Web site, located at http://www.ibm.com/software/data/db2/udb/winos2unix/support. ------------------------------------------------------------------------ 29.2 DB2 Magazine For the latest information about the DB2 family of products, obtain a free subscription to "DB2 magazine". The online edition of the magazine is available at http://www.db2mag.com; instructions for requesting a subscription are also posted on this site. ------------------------------------------------------------------------ Appendix A. Notices IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: IBM World Trade Asia Corporation Licensing 2-31 Roppongi 3-chome, Minato-ku Tokyo 106, Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. 
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Canada Limited Office of the Lab Director 1150 Eglinton Ave. East North York, Ontario M3C 1H7 CANADA Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this information and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information may contain examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information may contain sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. 
Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows: (C) (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. (C) Copyright IBM Corp. _enter the year or years_. All rights reserved.

------------------------------------------------------------------------

A.1 Trademarks

The following terms, which may be denoted by an asterisk (*), are trademarks of International Business Machines Corporation in the United States, other countries, or both:

   ACF/VTAM, AISPO, AIX, AIX/6000, AIXwindows, AnyNet, APPN, AS/400,
   BookManager, CICS, C Set++, C/370, DATABASE 2, DataHub, DataJoiner,
   DataPropagator, DataRefresher, DB2, DB2 Connect, DB2 Extenders,
   DB2 OLAP Server, DB2 Universal Database, Distributed Relational
   Database Architecture, DRDA, eNetwork, Extended Services, FFST,
   First Failure Support Technology, IBM, IMS, IMS/ESA, LAN Distance,
   MVS, MVS/ESA, MVS/XA, Net.Data, OS/2, OS/390, OS/400, PowerPC, QBIC,
   QMF, RACF, RISC System/6000, RS/6000, S/370, SP, SQL/DS, SQL/400,
   System/370, System/390, SystemView, VisualAge, VM/ESA, VSE/ESA,
   VTAM, WebExplorer, WIN-OS/2

The following terms are trademarks or registered trademarks of other companies:

Microsoft, Windows, and Windows NT are trademarks or registered trademarks of Microsoft Corporation.

Java and all Java-based trademarks and logos, and Solaris, are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Tivoli and NetView are trademarks of Tivoli Systems Inc. in the United States, other countries, or both.

UNIX is a registered trademark in the United States, other countries, or both, and is licensed exclusively through X/Open Company Limited.

Other company, product, or service names, which may be denoted by a double asterisk (**), may be trademarks or service marks of others.

------------------------------------------------------------------------

Footnotes:

1 A new level is initiated when a trigger, function, or stored procedure is invoked.

2 Interfaces that automatically commit after each statement will return a null value when the function is invoked in separate statements, unless the automatic commit is turned off.