Release Notes

IBM(R) DB2(R) Universal Database
Release Notes
Version 7.2/Version 7.1 FixPak 3

(c) Copyright International Business Machines Corporation 2000, 2001.
All rights reserved.

U.S. Government Users Restricted Rights -- Use, duplication or
disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

------------------------------------------------------------------------
Contents

* Contents
* Welcome to DB2 Universal Database Version 7!

------------------------------------------------------------------------
Special Notes

* Special Notes
  o 1.1 Accessibility Features of DB2 UDB Version 7
    + 1.1.1 Keyboard Input and Navigation
      + 1.1.1.1 Keyboard Input
      + 1.1.1.2 Keyboard Focus
    + 1.1.2 Features for Accessible Display
      + 1.1.2.1 High-Contrast Mode
      + 1.1.2.2 Font Settings
      + 1.1.2.3 Non-dependence on Color
    + 1.1.3 Alternative Alert Cues
    + 1.1.4 Compatibility with Assistive Technologies
    + 1.1.5 Accessible Documentation
  o 1.2 Additional Required Solaris Patch Level
  o 1.3 Supported CPUs on DB2 Version 7 for Solaris
  o 1.4 Problems When Adding Nodes to a Partitioned Database
  o 1.5 Errors During Migration
  o 1.6 Chinese Locale Fix on Red Flag Linux
  o 1.7 DB2 Install May Hang if a Removable Drive is Not Attached
  o 1.8 Additional Locale Setting for DB2 for Linux in a Japanese and Simplified Chinese Linux Environment
  o 1.9 Control Center Problem on Microsoft Internet Explorer
  o 1.10 Incompatibility between Information Catalog Manager and Sybase in the Windows Environment
  o 1.11 Loss of Control Center Function
  o 1.12 Netscape CD not shipped with DB2 UDB
  o 1.13 Error in XML Readme Files
  o 1.14 Possible Data Loss on Linux for S/390
  o 1.15 DB2 UDB on Windows 2000
* Online Documentation (HTML, PDF, and Search)
  o 2.1 Supported Web Browsers on the Windows 2000 Operating System
  o 2.2 Searching the DB2 Online Information on Solaris
  o 2.3 Switching NetQuestion for OS/2 to Use TCP/IP
  o 2.4 Error Messages when Attempting to Launch Netscape
  o 2.5 Configuration Requirement for Adobe Acrobat Reader on UNIX Based Systems
  o 2.6 SQL Reference is Provided in One PDF File

------------------------------------------------------------------------
Installation and Configuration

* General Installation Information
  o 3.1 Downloading Installation Packages for All Supported DB2 Clients
  o 3.2 Installing DB2 on Windows 2000
  o 3.3 Migration Issue Regarding Views Defined with Special Registers
  o 3.4 IPX/SPX Protocol Support on Windows 2000
  o 3.5 Stopping DB2 Processes Before Upgrading a Previous Version of DB2
  o 3.6 Run db2iupdt After Installing DB2 If Another DB2 Product is Already Installed
  o 3.7 Setting up the Linux Environment to Run the DB2 Control Center
  o 3.8 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition for Linux on S/390
  o 3.9 DB2 Universal Database Enterprise - Extended Edition for UNIX Quick Beginnings
  o 3.10 shmseg Kernel Parameter for HP-UX
  o 3.11 Migrating IBM Visual Warehouse Control Databases
  o 3.12 Accessing Warehouse Control Databases
* Data Links Manager Quick Beginnings
  o 4.1 Dlfm start Fails with Message: "Error in getting the afsfid for prefix"
  o 4.2 Setting Tivoli Storage Manager Class for Archive Files
  o 4.3 Disk Space Requirements for DFS Client Enabler
  o 4.4 Monitoring the Data Links File Manager Back-end Processes on AIX
  o 4.5 Installing and Configuring DB2 Data Links Manager for AIX: Additional Installation Considerations in DCE-DFS Environments
  o 4.6 Failed "dlfm add_prefix" Command
  o 4.7 Installing and Configuring DB2 Data Links Manager for AIX: Installing DB2 Data Links Manager on AIX Using the db2setup Utility
  o 4.8 Installing and Configuring DB2 Data Links Manager for AIX: DCE-DFS Post-Installation Task
  o 4.9 Installing and Configuring DB2 Data Links Manager for AIX: Manually Installing DB2 Data Links Manager Using Smit
  o 4.10 Installing and Configuring DB2 Data Links DFS Client Enabler
  o 4.11 Installing and Configuring DB2 Data Links Manager for Solaris
  o 4.12 Choosing a Backup Method for DB2 Data Links Manager on AIX
  o 4.13 Choosing a Backup Method for DB2 Data Links Manager on Solaris Operating Environment
  o 4.14 Choosing a Backup Method for DB2 Data Links Manager on Windows NT
  o 4.15 Backing up a Journalized File System on AIX
  o 4.16 Administrator Group Privileges in Data Links on Windows NT
  o 4.17 Minimize Logging for Data Links File System Filter (DLFF) Installation
    + 4.17.1 Logging Messages after Installation
  o 4.18 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets
  o 4.19 Before You Begin/Determine hostname
  o 4.20 Working with the Data Links File Manager: Cleaning up After Dropping a DB2 Data Links Manager from a DB2 Database
  o 4.21 DLFM1001E (New Error Message)
  o 4.22 DLFM Setup Configuration File Option
  o 4.23 Error when Running Data Links/DFS Script dmapp_prestart on AIX
  o 4.24 Tivoli Space Manager Integration with Data Links
    + 4.24.1 Restrictions and Limitations
  o 4.25 Chapter 4. Installing and Configuring DB2 Data Links Manager for AIX
    + 4.25.1 Common Installation Considerations
      + 4.25.1.1 Migrating from DB2 File Manager Version 5.2 to DB2 Data Links Manager Version 7
* Installation and Configuration Supplement
  o 5.1 Chapter 5. Installing DB2 Clients on UNIX Operating Systems
    + 5.1.1 HP-UX Kernel Configuration Parameters
  o 5.2 Chapter 12. Running Your Own Applications
    + 5.2.1 Binding Database Utilities Using the Run-Time Client
    + 5.2.2 UNIX Client Access to DB2 Using ODBC
  o 5.3 Chapter 24. Setting Up a Federated System to Access Multiple Data Sources
    + 5.3.1 Federated Systems
      + 5.3.1.1 Restriction
    + 5.3.2 Installing DB2 Relational Connect
      + 5.3.2.1 Installing DB2 Relational Connect on Windows NT servers
      + 5.3.2.2 Installing DB2 Relational Connect on AIX, Linux, and Solaris Operating Environment servers
  o 5.4 Chapter 26. Accessing Oracle data sources
    + 5.4.1 Documentation Errors
  o 5.5 Accessing Sybase data sources (new chapter)
    + 5.5.1 Adding Sybase data sources to a federated server
      + 5.5.1.1 Step 1: Set the environment variables and update the profile registry
      + 5.5.1.2 Step 2: Link DB2 to Sybase client software (AIX and Solaris only)
      + 5.5.1.3 Step 3: Recycle the DB2 instance
      + 5.5.1.4 Step 4: Create and set up an interfaces file
      + 5.5.1.5 Step 5: Create the wrapper
      + 5.5.1.6 Step 6: Optional: Set the DB2_DJ_COMM environment variable
      + 5.5.1.7 Step 7: Create the server
      + 5.5.1.8 Optional: Step 8: Set the CONNECTSTRING server option
      + 5.5.1.9 Step 9: Create a user mapping
      + 5.5.1.10 Step 10: Create nicknames for tables and views
    + 5.5.2 Specifying Sybase code pages
  o 5.6 Accessing Microsoft SQL Server data sources using ODBC (new chapter)
    + 5.6.1 Adding Microsoft SQL Server data sources to a federated server
      + 5.6.1.1 Step 1: Set the environment variables (AIX only)
      + 5.6.1.2 Step 2: Run the shell script (AIX only)
      + 5.6.1.3 Step 3: Optional: Set the DB2_DJ_COMM environment variable
      + 5.6.1.4 Step 4: Recycle the DB2 instance (AIX only)
      + 5.6.1.5 Step 5: Create the wrapper
      + 5.6.1.6 Step 6: Create the server
      + 5.6.1.7 Step 7: Create a user mapping
      + 5.6.1.8 Step 8: Create nicknames for tables and views
      + 5.6.1.9 Step 9: Optional: Obtain ODBC traces
    + 5.6.2 Reviewing Microsoft SQL Server code pages

------------------------------------------------------------------------
Administration

* Administration Guide: Planning
  o 6.1 Chapter 8. Physical Database Design
    + 6.1.1 Partitioning Keys
  o 6.2 Designing Nodegroups
  o 6.3 Chapter 9. Designing Distributed Databases
    + 6.3.1 Updating Multiple Databases
  o 6.4 Chapter 13. High Availability in the Windows NT Environment
    + 6.4.1 Need to Reboot the Machine Before Running DB2MSCS Utility
  o 6.5 Chapter 14. DB2 and High Availability on Sun Cluster 2.2
  o 6.6 Veritas Support on Solaris
  o 6.7 Appendix B. Naming Rules
    + 6.7.1 Notes on Greater Than 8-Character User IDs and Schema Names
    + 6.7.2 User IDs and Passwords
  o 6.8 Appendix D. Incompatibilities Between Releases
    + 6.8.1 Windows NT DLFS Incompatible with Norton's Utilities
    + 6.8.2 SET CONSTRAINTS Replaced by SET INTEGRITY
  o 6.9 Appendix E. National Language Support
    + 6.9.1 National Language Versions of DB2 Version 7
      + 6.9.1.1 Control Center and Documentation Filesets
    + 6.9.2 Locale Setting for the DB2 Administration Server
    + 6.9.3 DB2 UDB Supports the Baltic Rim Code Page (MS-1257) on Windows Platforms
    + 6.9.4 Deriving Code Page Values
    + 6.9.5 Country Code and Code Page Support
    + 6.9.6 Character Sets
* Administration Guide: Implementation
  o 7.1 Adding or Extending DMS Containers (New Process)
  o 7.2 Chapter 1. Administering DB2 using GUI Tools
  o 7.3 Chapter 3. Creating a Database
    + 7.3.1 Creating a Table Space
      + 7.3.1.1 Using Raw I/O on Linux
    + 7.3.2 Creating a Sequence
    + 7.3.3 Comparing IDENTITY Columns and Sequences
    + 7.3.4 Creating an Index, Index Extension, or an Index Specification
  o 7.4 Chapter 4. Altering a Database
    + 7.4.1 Adding a Container to an SMS Table Space on a Partition
    + 7.4.2 Altering an Identity Column
    + 7.4.3 Altering a Sequence
    + 7.4.4 Dropping a Sequence
    + 7.4.5 Switching the State of a Table Space
    + 7.4.6 Modifying Containers in a DMS Table Space
  o 7.5 Chapter 5. Controlling Database Access
    + 7.5.1 Sequence Privileges
    + 7.5.2 Data Encryption
  o 7.6 Chapter 8. Recovering a Database
    + 7.6.1 How to Use Suspended I/O
    + 7.6.2 Incremental Backup and Recovery
      + 7.6.2.1 Restoring from Incremental Backup Images
    + 7.6.3 Parallel Recovery
    + 7.6.4 Backing Up to Named Pipes
    + 7.6.5 Backup from Split Image
    + 7.6.6 On Demand Log Archive
    + 7.6.7 Log Mirroring
    + 7.6.8 Cross Platform Backup and Restore Support on Sun Solaris and HP
    + 7.6.9 DB2 Data Links Manager Considerations/Backup Utility Considerations
    + 7.6.10 DB2 Data Links Manager Considerations/Restore and Rollforward Utility Considerations
    + 7.6.11 Restoring Databases from an Offline Backup without Rolling Forward
    + 7.6.12 Restoring Databases and Table Spaces, and Rolling Forward to the End of the Logs
    + 7.6.13 DB2 Data Links Manager and Recovery Interactions
    + 7.6.14 Detection of Situations that Require Reconciliation
  o 7.7 Appendix C. User Exit for Database Recovery
  o 7.8 Appendix D. Issuing Commands to Multiple Database Partition Servers
  o 7.9 Appendix I. High Speed Inter-node Communications
    + 7.9.1 Enabling DB2 to Run Using VI
* Administration Guide: Performance
  o 8.1 Chapter 3. Application Considerations
    + 8.1.1 Specifying the Isolation Level
    + 8.1.2 Adjusting the Optimization Class
    + 8.1.3 Dynamic Compound Statements
  o 8.2 Chapter 4. Environmental Considerations
    + 8.2.1 Using Larger Index Keys
  o 8.3 Chapter 5. System Catalog Statistics
    + 8.3.1 Collecting and Using Distribution Statistics
    + 8.3.2 Rules for Updating Catalog Statistics
    + 8.3.3 Sub-element Statistics
  o 8.4 Chapter 6. Understanding the SQL Compiler
    + 8.4.1 Replicated Summary Tables
    + 8.4.2 Data Access Concepts and Optimization
  o 8.5 Chapter 8. Operational Performance
    + 8.5.1 Managing the Database Buffer Pool
    + 8.5.2 Managing Multiple Database Buffer Pools
  o 8.6 Chapter 9. Using the Governor
  o 8.7 Chapter 13. Configuring DB2
    + 8.7.1 Sort Heap Size (sortheap)
    + 8.7.2 Sort Heap Threshold (sheapthres)
    + 8.7.3 Maximum Percent of Lock List Before Escalation (maxlocks)
    + 8.7.4 Configuring DB2/DB2 Data Links Manager/Data Links Access Token Expiry Interval (dl_expint)
    + 8.7.5 MIN_DEC_DIV_3 Database Configuration Parameter
    + 8.7.6 Application Control Heap Size (app_ctl_heap_sz)
    + 8.7.7 Database System Monitor Heap Size (mon_heap_sz)
    + 8.7.8 Maximum Number of Active Applications (maxappls)
    + 8.7.9 Recovery Range and Soft Checkpoint Interval (softmax)
    + 8.7.10 Track Modified Pages Enable (trackmod)
    + 8.7.11 Change the Database Log Path (newlogpath)
    + 8.7.12 Location of Log Files (logpath)
    + 8.7.13 Maximum Storage for Lock List (locklist)
  o 8.8 Appendix A. DB2 Registry and Environment Variables
    + 8.8.1 Table of New and Changed Registry Variables
  o 8.9 Appendix C. SQL Explain Tools
* Administering Satellites Guide and Reference
  o 9.1 Setting up Version 7.2 DB2 Personal Edition and DB2 Workgroup Edition as Satellites
    + 9.1.1 Prerequisites
      + 9.1.1.1 Installation Considerations
    + 9.1.2 Configuring the Version 7.2 System for Synchronization
    + 9.1.3 Installing FixPak 2 or Higher on a Version 6 Enterprise Edition System
      + 9.1.3.1 Upgrading Version 6 DB2 Enterprise Edition for Use as the DB2 Control Server
    + 9.1.4 Upgrading a Version 6 Control Center and Satellite Administration Center
* Command Reference
  o 10.1 db2batch - Benchmark Tool
  o 10.2 db2cap (new command)
    + db2cap - CLI/ODBC Static Package Binding Tool
  o 10.3 db2ckrst (new command)
    + db2ckrst - Check Incremental Restore Image Sequence
  o 10.4 db2gncol (new command)
    + db2gncol - Update Generated Column Values
  o 10.5 db2inidb - Initialize a Mirrored Database
  o 10.6 db2look - DB2 Statistics Extraction Tool
  o 10.7 db2updv7 - Update Database to Version 7 Current Fix Level
  o 10.8 New Command Line Processor Option (-x, Suppress printing of column headings)
  o 10.9 True Type Font Requirement for DB2 CLP
  o 10.10 ADD DATALINKS MANAGER
  o 10.11 ARCHIVE LOG (new command)
    + Archive Log
  o 10.12 BACKUP DATABASE
    + 10.12.1 Syntax Diagram
    + 10.12.2 DB2 Data Links Manager Considerations
  o 10.13 BIND
  o 10.14 CALL
  o 10.15 DROP DATALINKS MANAGER (new command)
    + DROP DATALINKS MANAGER
  o 10.16 EXPORT
  o 10.17 GET DATABASE CONFIGURATION
  o 10.18 GET ROUTINE (new command)
    + GET ROUTINE
  o 10.19 GET SNAPSHOT
  o 10.20 IMPORT
  o 10.21 LIST HISTORY
  o 10.22 LOAD
  o 10.23 PING (new command)
    + PING
  o 10.24 PUT ROUTINE (new command)
    + PUT ROUTINE
  o 10.25 RECONCILE
  o 10.26 REORGANIZE TABLE
  o 10.27 RESTORE DATABASE
    + 10.27.1 Syntax
    + 10.27.2 DB2 Data Links Manager Considerations
  o 10.28 ROLLFORWARD DATABASE
  o 10.29 Documentation Error in CLP Return Codes
* Data Movement Utilities Guide and Reference
  o 11.1 Chapter 2. Import
    + 11.1.1 Using Import with Buffered Inserts
  o 11.2 Chapter 3. Load
    + 11.2.1 Pending States After a Load Operation
    + 11.2.2 Load Restrictions and Limitations
    + 11.2.3 totalfreespace File Type Modifier
  o 11.3 Chapter 4. AutoLoader
    + 11.3.1 rexecd Required to Run Autoloader When Authentication Set to YES
* Replication Guide and Reference
  o 12.1 Replication and Non-IBM Servers
  o 12.2 Replication on Windows 2000
  o 12.3 Known Error When Saving SQL Files
  o 12.4 DB2 Maintenance
  o 12.5 Data Difference Utility on the Web
  o 12.6 Chapter 3. Data replication scenario
    + 12.6.1 Replication Scenarios
  o 12.7 Chapter 5. Planning for replication
    + 12.7.1 Table and Column Names
    + 12.7.2 DATALINK Replication
    + 12.7.3 LOB Restrictions
    + 12.7.4 Planning for Replication
  o 12.8 Chapter 6. Setting up your replication environment
    + 12.8.1 Update-anywhere Prerequisite
    + 12.8.2 Setting Up Your Replication Environment
  o 12.9 Chapter 8. Problem Determination
  o 12.10 Chapter 9. Capture and Apply for AS/400
  o 12.11 Chapter 10. Capture and Apply for OS/390
    + 12.11.1 Prerequisites for DB2 DataPropagator for OS/390
    + 12.11.2 UNICODE and ASCII Encoding Schemes on OS/390
      + 12.11.2.1 Choosing an Encoding Scheme
      + 12.11.2.2 Setting Encoding Schemes
  o 12.12 Chapter 11. Capture and Apply for UNIX platforms
    + 12.12.1 Setting Environment Variables for Capture and Apply on UNIX and Windows
  o 12.13 Chapter 14. Table Structures
  o 12.14 Chapter 15. Capture and Apply Messages
  o 12.15 Appendix A. Starting the Capture and Apply Programs from Within an Application
* System Monitor Guide and Reference
  o 13.1 db2ConvMonStream
* Troubleshooting Guide
  o 14.1 Starting DB2 on Windows 95, Windows 98, and Windows ME When the User Is Not Logged On
  o 14.2 Chapter 2. Troubleshooting the DB2 Universal Database Server
* Using DB2 Universal Database on 64-bit Platforms
  o 15.1 Chapter 5. Configuration
    + 15.1.1 LOCKLIST
    + 15.1.2 shmsys:shminfo_shmmax
  o 15.2 Chapter 6. Restrictions
* XML Extender Administration and Programming
* MQSeries
  o 17.1 Installation and Configuration for the DB2 MQSeries Functions
    + 17.1.1 Install MQSeries
    + 17.1.2 Install MQSeries AMI
    + 17.1.3 Enable DB2 MQSeries Functions
  o 17.2 MQSeries Messaging Styles
  o 17.3 Message Structure
  o 17.4 MQSeries Functional Overview
    + 17.4.1 Limitations
    + 17.4.2 Error Codes
  o 17.5 Usage Scenarios
    + 17.5.1 Basic Messaging
    + 17.5.2 Sending Messages
    + 17.5.3 Retrieving Messages
    + 17.5.4 Application-to-Application Connectivity
      + 17.5.4.1 Request/Reply Communications
      + 17.5.4.2 Publish/Subscribe
  o 17.6 enable_MQFunctions
    + enable_MQFunctions
  o 17.7 disable_MQFunctions
    + disable_MQFunctions

------------------------------------------------------------------------
Administrative Tools

* Control Center
  o 18.1 Ability to Administer DB2 Server for VSE and VM Servers
  o 18.2 Java 1.2 Support for the Control Center
  o 18.3 "Invalid shortcut" Error when Using the Online Help on the Windows Operating System
  o 18.4 Java Control Center on OS/2
  o 18.5 "File access denied" Error when Attempting to View a Completed Job in the Journal on the Windows Operating System
  o 18.6 Multisite Update Test Connect
  o 18.7 Control Center for DB2 for OS/390
  o 18.8 Required Fix for Control Center for OS/390
  o 18.9 Change to the Create Spatial Layer Dialog
  o 18.10 Troubleshooting Information for the DB2 Control Center
  o 18.11 Control Center Troubleshooting on UNIX Based Systems
  o 18.12 Possible Infopops Problem on OS/2
  o 18.13 Help for the jdk11_path Configuration Parameter
  o 18.14 Solaris System Error (SQL10012N) when Using the Script Center or the Journal
  o 18.15 Help for the DPREPL.DFT File
  o 18.16 Launching More Than One Control Center Applet
  o 18.17 Online Help for the Control Center Running as an Applet
  o 18.18 Running the Control Center in Applet Mode (Windows 95)
  o 18.19 Working with Large Query Results
* Information Center
  o 19.1 "Invalid shortcut" Error on the Windows Operating System
  o 19.2 Opening External Web Links in Netscape Navigator when Netscape is Already Open (UNIX Based Systems)
  o 19.3 Problems Starting the Information Center
* Wizards
  o 20.1 Setting Extent Size in the Create Database Wizard
  o 20.2 MQSeries Assist wizard
  o 20.3 OLE DB Assist wizard

------------------------------------------------------------------------
Business Intelligence

* Business Intelligence Tutorial
  o 21.1 Revised Business Intelligence Tutorial
* Data Warehouse Center Administration Guide
  o 22.1 Troubleshooting
  o 22.2 Setting up Excel as a Warehouse Source
  o 22.3 Defining and Running Processes
  o 22.4 Export Metadata Dialog
  o 22.5 Defining Values for a Submit OS/390 JCL Jobstream (VWPMVS) Program
  o 22.6 Changes to the Data Warehousing Sample Appendix
  o 22.7 Data Warehouse Center Messages
  o 22.8 Creating an Outline and Loading Data in the DB2 OLAP Integration Server
  o 22.9 Using Classic Connect with the Data Warehouse Center
  o 22.10 Data Warehouse Center Environment Structure
  o 22.11 Using the Invert Transformer
  o 22.12 Accessing DB2 Version 5 Data with the DB2 Version 7 Warehouse Agent
    + 22.12.1 Migrating DB2 Version 5 Servers
    + 22.12.2 Changing the Agent Configuration
      + 22.12.2.1 UNIX Warehouse Agents
      + 22.12.2.2 Microsoft Windows NT, Windows 2000, and OS/2 Warehouse Agents
  o 22.13 IBM ERwin metadata extract program
    + 22.13.1 Contents
    + 22.13.2 Software requirements
    + 22.13.3 Program files
    + 22.13.4 Creating tag language files
    + 22.13.5 Importing a tag language file into the Data Warehouse Center
    + 22.13.6 Importing a tag language file into the Information Catalog Manager
    + 22.13.7 Troubleshooting
    + 22.13.8 ERwin to DB2 Data Warehouse Center mapping
      + 22.13.8.1 ERwin to Information Catalog Manager mapping
  o 22.14 Name and address cleansing in the Data Warehouse Center
    + 22.14.1
      + 22.14.1.1 Requirements
      + 22.14.1.2 Trillium Software System components
      + 22.14.1.3 Using the Trillium Batch System with the Data Warehouse Center
      + 22.14.1.4 Importing Trillium metadata
      + 22.14.1.5 Mapping the metadata
      + 22.14.1.6 Restrictions
    + 22.14.2 Writing Trillium Batch System JCL file
    + 22.14.3 Writing Trillium Batch System script file on UNIX and Windows
    + 22.14.4 Defining a Trillium Batch System step
    + 22.14.5 Using the Trillium Batch System user-defined program
    + 22.14.6 Error handling
      + 22.14.6.1 Error return codes
      + 22.14.6.2 Log file
  o 22.15 Integration of MQSeries with the Data Warehouse Center
    + 22.15.1 Creating views for MQSeries messages
      + 22.15.1.1 Requirements
      + 22.15.1.2 Restrictions
      + 22.15.1.3 Creating a view for MQSeries messages
    + 22.15.2 Importing MQSeries messages and XML metadata
      + 22.15.2.1 Requirements
      + 22.15.2.2 Restrictions
      + 22.15.2.3 Importing MQSeries messages and XML metadata
      + 22.15.2.4 Using the MQSeries user-defined program
      + 22.15.2.5 Error return codes
      + 22.15.2.6 Error Log file
  o 22.16 Microsoft OLE DB and Data Transaction Services support
    + 22.16.1 Creating views for OLE DB table functions
    + 22.16.2 Creating views for DTS packages
  o 22.17 Using incremental commit with replace
  o 22.18 Component trace data file names
  o 22.19 Open Client needed for Sybase sources on AIX and the Solaris Operating Environment
  o 22.20 Sample entries corrected
  o 22.21 Chapter 3. Setting up warehouse sources
    + 22.21.1 Mapping the Memo field in Microsoft Access to a warehouse source
  o 22.22 Chapter 10. Maintaining the Warehouse Database
    + 22.22.1 Linking tables to a step subtype for the DB2 UDB RUNSTATS program
  o 22.23 The Default Warehouse Control Database
  o 22.24 The Warehouse Control Database Management Window
  o 22.25 Changing the Active Warehouse Control Database
  o 22.26 Creating and Initializing a Warehouse Control Database
  o 22.27 Creating editioned SQL steps
  o 22.28 Changing sources and targets in the Process Modeler window
  o 22.29 Adding descriptions to Data Warehouse Center objects
  o 22.30 Running Sample Contents
  o 22.31 Editing a Create DDL SQL statement
  o 22.32 Migrating Visual Warehouse business views
  o 22.33 Generating target tables and primary keys
  o 22.34 Using Merant ODBC drivers
  o 22.35 New ODBC Driver
  o 22.36 Defining a warehouse source or target in an OS/2 database
  o 22.37 Monitoring the state of the warehouse control database
  o 22.38 Using SQL Assist with the TBC_MD sample database
  o 22.39 Using the FormatDate function
  o 22.40 Changing the language setting
  o 22.41 Using the Generate Key Table transformer
  o 22.42 Maintaining connections to databases
  o 22.43 Setting up a remote Data Warehouse Center client
  o 22.44 Defining a DB2 for VM warehouse source
  o 22.45 Defining a DB2 for VM or DB2 for VSE target table
  o 22.46 Enabling delimited identifier support
  o 22.47 Data Joiner Error Indicates a Bind Problem
  o 22.48 Setting up and Running Replication with Data Warehouse Center
  o 22.49 Troubleshooting Tips
  o 22.50 Accessing Sources and Targets
  o 22.51 Additions to Supported non-IBM Database Sources
  o 22.52 Creating a Data Source Manually in Data Warehouse Center
  o 22.53 Importing and Exporting Metadata Using the Common Warehouse Metadata Interchange (CWMI)
    + 22.53.1 Introduction
    + 22.53.2 Importing Metadata
    + 22.53.3 Updating Your Metadata After Running the Import Utility
    + 22.53.4 Exporting Metadata
  o 22.54 OS/390 Runstats utility step
  o 22.55 OS/390 Load utility step
  o 22.56 Common Warehouse Metamodel (CWM) XML support
  o 22.57 Process modeler
  o 22.58 Schema modeler
  o 22.59 Mandatory fields
  o 22.60 Data Warehouse Center launchpad enhancements
  o 22.61 Printing step information to a file
* Data Warehouse Center Application Integration Guide
  o 23.1 Additional metadata templates
    + 23.1.1 Commit.tag
      + 23.1.1.1 Tokens
      + 23.1.1.2 Examples of values
    + 23.1.2 ForeignKey.tag
      + 23.1.2.1 Tokens
      + 23.1.2.2 Examples of values
    + 23.1.3 ForeignKeyAdditional.tag
      + 23.1.3.1 Tokens
      + 23.1.3.2 Examples of values
    + 23.1.4 PrimaryKey.tag
      + 23.1.4.1 Tokens
      + 23.1.4.2 Examples of values
    + 23.1.5 PrimaryKeyAdditional.tag
      + 23.1.5.1 Tokens
      + 23.1.5.2 Examples of values
* Data Warehouse Center Online Help
  o 24.1 Defining Tables or Views for Replication
  o 24.2 Running Essbase VWPs with the AS/400 Agent
  o 24.3 Using the Publish Data Warehouse Center Metadata Window and Associated Properties Window
  o 24.4 Foreign Keys
  o 24.5 Replication Notebooks
  o 24.6 Importing a Tag Language
  o 24.7 Links for Adding Data
  o 24.8 Importing Tables
  o 24.9 Correction to RUNSTATS and REORGANIZE TABLE Online Help
  o 24.10 Notification Page (Warehouse Properties Notebook and Schedule Notebook)
  o 24.11 Agent Module Field in the Agent Sites Notebook
* DB2 OLAP Starter Kit
  o 25.1 OLAP Server Web Site
  o 25.2 Supported Operating System Service Levels
  o 25.3 Completing the DB2 OLAP Starter Kit Setup on UNIX
  o 25.4 Configuring ODBC for the OLAP Starter Kit
    + 25.4.1 Configuring Data Sources on UNIX systems
      + 25.4.1.1 Configuring ODBC Environment Variables
      + 25.4.1.2 Editing the odbc.ini File
      + 25.4.1.3 Adding a data source to an odbc.ini file
      + 25.4.1.4 Example of ODBC Settings for DB2
      + 25.4.1.5 Example of ODBC Settings for Oracle
    + 25.4.2 Configuring the OLAP Metadata Catalog on UNIX Systems
    + 25.4.3 Configuring Data Sources on Windows Systems
    + 25.4.4 Configuring the OLAP Metadata Catalog on Windows Systems
    + 25.4.5 After You Configure a Data Source
  o 25.5 Logging in from OLAP Starter Kit Desktop
    + 25.5.1 Starter Kit Login Example
  o 25.6 Manually creating and configuring the sample databases for OLAP Starter Kit
  o 25.7 Migrating Applications to OLAP Starter Kit Version 7.2
  o 25.8 Known Problems and Limitations
  o 25.9 OLAP Spreadsheet Add-in EQD Files Missing
* Information Catalog Manager Administration Guide
  o 26.1 Information Catalog Manager Initialization Utility
    + 26.1.1
    + 26.1.2 Licensing issues
    + 26.1.3 Installation Issues
  o 26.2 Accessing DB2 Version 5 Information Catalogs with the DB2 Version 7 Information Catalog Manager
  o 26.3 Setting up an Information Catalog
  o 26.4 Exchanging Metadata with Other Products
  o 26.5 Exchanging Metadata using the flgnxoln Command
  o 26.6 Exchanging Metadata using the MDISDGC Command
  o 26.7 Invoking Programs
* Information Catalog Manager Programming Guide and Reference
  o 27.1 Information Catalog Manager Reason Codes
* Information Catalog Manager User's Guide
* Information Catalog Manager: Online Messages
  o 29.1 Message FLG0260E
  o 29.2 Message FLG0051E
  o 29.3 Message FLG0003E
  o 29.4 Message FLG0372E
  o 29.5 Message FLG0615E
* Information Catalog Manager: Online Help
  o 30.1 Information Catalog Manager for the Web
* DB2 Warehouse Manager Installation Guide
  o 31.1 Software requirements for warehouse transformers
  o 31.2 Connector for SAP R/3
    + 31.2.1 Installation Prerequisites
  o 31.3 Connector for the Web
    + 31.3.1 Installation Prerequisites
* Query Patroller Administration Guide
  o 32.1 DB2 Query Patroller Client is a Separate Component
  o 32.2 Migrating from Version 6 of DB2 Query Patroller Using dqpmigrate
  o 32.3 Enabling Query Management
  o 32.4 Location of Table Space for Control Tables
  o 32.5 New Parameters for dqpstart Command
  o 32.6 New Parameter for iwm_cmd Command
  o 32.7 New Registry Variable: DQP_RECOVERY_INTERVAL
  o 32.8 Starting Query Administrator
  o 32.9 User Administration
  o 32.10 Creating a Job Queue
  o 32.11 Using the Command Line Interface
  o 32.12 Query Enabler Notes
  o 32.13 DB2 Query Patroller Tracker may Return a Blank Column Page
  o 32.14 Query Patroller and Replication Tools
  o 32.15 Appendix B. Troubleshooting DB2 Query Patroller Clients

------------------------------------------------------------------------
Application Development

* Administrative API Reference
  o 33.1 db2ArchiveLog (new API)
    + db2ArchiveLog
  o 33.2 db2ConvMonStream
  o 33.3 db2DatabasePing (new API)
    + db2DatabasePing - Ping Database
  o 33.4 db2HistData
  o 33.5 db2HistoryOpenScan
  o 33.6 db2XaGetInfo (new API)
    + db2XaGetInfo - Get Information for Resource Manager
  o 33.7 db2XaListIndTrans (new API that supersedes sqlxphqr)
    + db2XaListIndTrans - List Indoubt Transactions
  o 33.8 db2GetSnapshot - Get Snapshot
  o 33.9 Forget Log Record
  o 33.10 sqlaintp - Get Error Message
  o 33.11 sqlbctcq - Close Tablespace Container Query
  o 33.12 sqlubkp - Backup Database
  o 33.13 sqlureot - Reorganize Table
  o 33.14 sqlurestore - Restore Database
  o 33.15 Documentation Error Regarding AIX Extended Shared Memory Support (EXTSHM)
  o 33.16 SQLFUPD
    + 33.16.1 locklist
  o 33.17 SQLEDBDESC
  o 33.18 SQLFUPD Documentation Error
* Application Building Guide
  o 34.1 Chapter 1. Introduction
    + 34.1.1 Supported Software
    + 34.1.2 Sample Programs
  o 34.2 Chapter 3. General Information for Building DB2 Applications
    + 34.2.1 Build Files, Makefiles, and Error-checking Utilities
  o 34.3 Chapter 4. Building Java Applets and Applications
    + 34.3.1 Setting the Environment
      + 34.3.1.1 JDK Level on OS/2
      + 34.3.1.2 Java2 on HP-UX
  o 34.4 Chapter 5. Building SQL Procedures
    + 34.4.1 Setting the SQL Procedures Environment
    + 34.4.2 Setting the Compiler Environment Variables
    + 34.4.3 Customizing the Compilation Command
    + 34.4.4 Retaining Intermediate Files
    + 34.4.5 Backup and Restore
    + 34.4.6 Creating SQL Procedures
    + 34.4.7 Calling Stored Procedures
    + 34.4.8 Distributing Compiled SQL Procedures
  o 34.5 Chapter 7. Building HP-UX Applications
    + 34.5.1 HP-UX C
    + 34.5.2 HP-UX C++
  o 34.6 Chapter 9. Building OS/2 Applications
    + 34.6.1 VisualAge C++ for OS/2 Version 4.0
  o 34.7 Chapter 10. Building PTX Applications
    + 34.7.1 ptx/C++
  o 34.8 Chapter 12. Building Solaris Applications
    + 34.8.1 SPARCompiler C++
  o 34.9 Chapter 13. Building Applications for Windows 32-bit Operating Systems
    + 34.9.1 VisualAge C++ Version 4.0
* Application Development Guide
  o 35.1 Chapter 2. Coding a DB2 Application
    + 35.1.1 Activating the IBM DB2 Universal Database Project and Tool Add-ins for Microsoft Visual C++
  o 35.2 Chapter 6. Common DB2 Application Techniques
    + 35.2.1 Generating Sequential Values
      + 35.2.1.1 Controlling Sequence Behavior
      + 35.2.1.2 Improving Performance with Sequence Objects
      + 35.2.1.3 Comparing Sequence Objects and Identity Columns
  o 35.3 Chapter 7. Stored Procedures
    + 35.3.1 DECIMAL Type Fails in Linux Java Routines
    + 35.3.2 Using Cursors in Recursive Stored Procedures
    + 35.3.3 Writing OLE Automation Stored Procedures
  o 35.4 Chapter 12. Working with Complex Objects: User-Defined Structured Types
    + 35.4.1 Inserting Structured Type Attributes Into Columns
  o 35.5 Chapter 13. Using Large Objects (LOBs)
    + 35.5.1 Large object (LOBs) support in federated database systems
      + 35.5.1.1 How DB2 retrieves LOBs
      + 35.5.1.2 How applications can use LOB locators
      + 35.5.1.3 Restrictions on LOBs
      + 35.5.1.4 Mappings between LOB and non-LOB data types
    + 35.5.2 Tuning the system
  o 35.6 Part 5. DB2 Programming Considerations
    + 35.6.1 IBM DB2 OLE DB Provider
  o 35.7 Chapter 20. Programming in C and C++
    + 35.7.1 C/C++ Types for Stored Procedures, Functions, and Methods
  o 35.8 Chapter 21. Programming in Java
    + 35.8.1 Java Method Signature in PARAMETER STYLE JAVA Procedures and Functions
    + 35.8.2 Connecting to the JDBC Applet Server
  o 35.9 Appendix B. Sample Programs
* CLI Guide and Reference
  o 36.1 Binding Database Utilities Using the Run-Time Client
  o 36.2 Using Static SQL in CLI Applications
  o 36.3 Limitations of JDBC/ODBC/CLI Static Profiling
  o 36.4 ADT Transforms
  o 36.5 Chapter 3. Using Advanced Features
    + 36.5.1 Writing Multi-Threaded Applications
    + 36.5.2 Scrollable Cursors
      + 36.5.2.1 Server-side Scrollable Cursor Support for OS/390
    + 36.5.3 Using Compound SQL
    + 36.5.4 Using Stored Procedures
      + 36.5.4.1 Writing a Stored Procedure in CLI
      + 36.5.4.2 CLI Stored Procedures and Autobinding
  o 36.6 Chapter 4. Configuring CLI/ODBC and Running Sample Applications
    + 36.6.1 Configuration Keywords
  o 36.7 Chapter 5. DB2 CLI Functions
    + 36.7.1 SQLBindFileToParam - Bind LOB File Reference to LOB Parameter
    + 36.7.2 SQLNextResult - Associate Next Result Set with Another Statement Handle
      + 36.7.2.1 Purpose
      + 36.7.2.2 Syntax
      + 36.7.2.3 Function Arguments
      + 36.7.2.4 Usage
      + 36.7.2.5 Return Codes
      + 36.7.2.6 Diagnostics
      + 36.7.2.7 Restrictions
      + 36.7.2.8 References
  o 36.8 Appendix D. Extended Scalar Functions
    + 36.8.1 Date and Time Functions
  o 36.9 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility
* Message Reference
  o 37.1 Getting Message and SQLSTATE Help
  o 37.2 SQLCODE Remapping Change in DB2 Connect
  o 37.3 New and Changed Messages
    + 37.3.1 Call Level Interface (CLI) Messages
    + 37.3.2 DB2 Messages
    + 37.3.3 DBI Messages
    + 37.3.4 Data Warehouse Center (DWC) Messages
    + 37.3.5 SQL Messages
  o 37.4 Corrected SQLSTATES
* SQL Reference
  o 38.1 SQL Reference is Provided in One PDF File
  o 38.2 Chapter 3. Language Elements
    + 38.2.1 Naming Conventions and Implicit Object Name Qualifications
    + 38.2.2 DATALINK Assignments
    + 38.2.3 Expressions
      + 38.2.3.1 Syntax Diagram
      + 38.2.3.2 OLAP Functions
      + 38.2.3.3 Sequence Reference
  o 38.3 Chapter 4. Functions
    + 38.3.1 Enabling the New Functions and Procedures
    + 38.3.2 Scalar Functions
      + 38.3.2.1 ABS or ABSVAL
      + 38.3.2.2 DECRYPT_BIN and DECRYPT_CHAR
      + 38.3.2.3 ENCRYPT
      + 38.3.2.4 GETHINT
      + 38.3.2.5 IDENTITY_VAL_LOCAL
      + 38.3.2.6 LCASE and UCASE (Unicode)
      + 38.3.2.7 MQPUBLISH
      + 38.3.2.8 MQREAD
      + 38.3.2.9 MQRECEIVE
      + 38.3.2.10 MQSEND
      + 38.3.2.11 MQSUBSCRIBE
      + 38.3.2.12 MQUNSUBSCRIBE
      + 38.3.2.13 MULTIPLY_ALT
      + 38.3.2.14 REC2XML
      + 38.3.2.15 ROUND
      + 38.3.2.16 WEEK_ISO
    + 38.3.3 Table Functions
      + 38.3.3.1 MQREADALL
      + 38.3.3.2 MQRECEIVEALL
    + 38.3.4 Procedures
      + 38.3.4.1 GET_ROUTINE_SAR
      + 38.3.4.2 PUT_ROUTINE_SAR
  o 38.4 Chapter 5. Queries
    + 38.4.1 select-statement/syntax diagram
    + 38.4.2 select-statement/fetch-first-clause
  o 38.5 Chapter 6. SQL Statements
    + 38.5.1 Update of the Partitioning Key Now Supported
      + 38.5.1.1 Statement: ALTER TABLE
      + 38.5.1.2 Statement: CREATE TABLE
      + 38.5.1.3 Statement: DECLARE GLOBAL TEMPORARY TABLE PARTITIONING KEY (column-name,...)
      + 38.5.1.4 Statement: UPDATE
    + 38.5.2 Larger Index Keys for Unicode Databases
      + 38.5.2.1 ALTER TABLE
      + 38.5.2.2 CREATE INDEX
      + 38.5.2.3 CREATE TABLE
    + 38.5.3 ALTER SEQUENCE
      + ALTER SEQUENCE
    + 38.5.4 ALTER TABLE
    + 38.5.5 Compound SQL (Embedded)
    + 38.5.6 Compound Statement (Dynamic)
      + Compound Statement (Dynamic)
    + 38.5.7 CREATE FUNCTION (Source or Template)
    + 38.5.8 CREATE FUNCTION (SQL Scalar, Table or Row)
    + 38.5.9 CREATE METHOD
      + CREATE METHOD
    + 38.5.10 CREATE SEQUENCE
      + CREATE SEQUENCE
    + 38.5.11 CREATE TRIGGER
      + CREATE TRIGGER
    + 38.5.12 CREATE WRAPPER
    + 38.5.13 DECLARE CURSOR
    + 38.5.14 DELETE
    + 38.5.15 DROP
    + 38.5.16 GRANT (Sequence Privileges)
      + GRANT (Sequence Privileges)
    + 38.5.17 INSERT
    + 38.5.18 SELECT INTO
    + 38.5.19 SET ENCRYPTION PASSWORD
      + SET ENCRYPTION PASSWORD
    + 38.5.20 SET transition-variable
      + SET Variable
    + 38.5.21 UPDATE
  o 38.6 Chapter 7. SQL Procedures now called Chapter 7. SQL Control Statements
    + 38.6.1 SQL Procedure Statement
      + SQL Procedure Statement
    + 38.6.2 FOR
      + FOR
    + 38.6.3 Compound Statement changes to Compound Statement (Procedure)
    + 38.6.4 RETURN
      + RETURN
    + 38.6.5 SIGNAL
      + SIGNAL
  o 38.7 Appendix A. SQL Limits
  o 38.8 Appendix D.
Catalog Views + 38.8.1 SYSCAT.SEQUENCES * DB2 Stored Procedure Builder o 39.1 Java 1.2 Support for the DB2 Stored Procedure Builder o 39.2 Remote Debugging of DB2 Stored Procedures o 39.3 Building SQL Procedures on Windows, OS/2 or UNIX Platforms o 39.4 Using the DB2 Stored Procedure Builder on the Solaris Platform o 39.5 Known Problems and Limitations o 39.6 Using DB2 Stored Procedure Builder with Traditional Chinese Locale o 39.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored Procedure Builder o 39.8 Building SQL Stored Procedures on OS/390 o 39.9 Debugging SQL Stored Procedures o 39.10 Exporting Java Stored Procedures o 39.11 Inserting Stored Procedures on OS/390 o 39.12 Setting Build Options for SQL Stored Procedures on a Workstation Server o 39.13 Automatically Refreshing the WLM Address Space for Stored Procedures Built on OS/390 o 39.14 Developing Java stored procedures on OS/390 o 39.15 Building a DB2 table user defined function (UDF) for MQ Series and OLE DB * Unicode Updates o 40.1 Introduction + 40.1.1 DB2 Unicode Databases and Applications + 40.1.2 Documentation Updates o 40.2 SQL Reference + 40.2.1 Chapter 3 Language Elements + 40.2.1.1 Promotion of Data Types + 40.2.1.2 Casting Between Data Types + 40.2.1.3 Assignments and Comparisons + 40.2.1.4 Rules for Result Data Types + 40.2.1.5 Rules for String Conversions + 40.2.1.6 Expressions + 40.2.1.7 Predicates + 40.2.2 Chapter 4 Functions + 40.2.2.1 Scalar Functions o 40.3 CLI Guide and Reference + 40.3.1 Chapter 3. Using Advanced Features + 40.3.1.1 Writing a DB2 CLI Unicode Application + 40.3.2 Appendix C. DB2 CLI and ODBC + 40.3.2.1 ODBC Unicode Applications o 40.4 Data Movement Utilities Guide and Reference + 40.4.1 Appendix C. 
Export/Import/Load Utility File Formats ------------------------------------------------------------------------ Connecting to Host Systems * Connectivity Supplement o 41.1 Setting Up the Application Server in a VM Environment o 41.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings ------------------------------------------------------------------------ General Information * General Information o 42.1 DB2 Universal Database Business Intelligence Quick Tour o 42.2 DB2 Everywhere is Now DB2 Everyplace o 42.3 Mouse Required o 42.4 Attempting to Bind from the DB2 Run-time Client Results in a "Bind files not found" Error o 42.5 Search Discovery o 42.6 Memory Windows for HP-UX 11 o 42.7 User Action for dlfm client_conf Failure o 42.8 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop o 42.9 Uninstalling DB2 DFS Client Enabler o 42.10 Client Authentication on Windows NT o 42.11 AutoLoader May Hang During a Fork o 42.12 DATALINK Restore o 42.13 Define User ID and Password in IBM Communications Server for Windows NT (CS/NT) + 42.13.1 Node Definition o 42.14 Federated Systems Restrictions o 42.15 DataJoiner Restriction o 42.16 Hebrew Information Catalog Manager for Windows NT o 42.17 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit) Support o 42.18 DB2's SNA SPM Fails to Start After Booting Windows o 42.19 Locale Setting for the DB2 Administration Server o 42.20 Shortcuts Not Working o 42.21 Service Account Requirements for DB2 on Windows NT and Windows 2000 o 42.22 Lost EXECUTE Privilege for Query Patroller Users Created in Version 6 o 42.23 Query Patroller Restrictions o 42.24 Need to Commit all User-defined Programs That Will Be Used in the Data Warehouse Center (DWC) o 42.25 New Option for Data Warehouse Center Command Line Export o 42.26 Backup Services APIs (XBSA) o 42.27 OS/390 agent + 42.27.1 Installation overview + 42.27.2 Installation details + 42.27.3 Setting up additional agent functions + 42.27.4 Scheduling warehouse steps 
with the trigger program (XTClient) + 42.27.5 Transformers + 42.27.6 Accessing databases outside of the DB2 family + 42.27.7 Running DB2 for OS/390 utilities + 42.27.8 Replication + 42.27.9 Agent logging o 42.28 Client Side Caching on Windows NT o 42.29 Trial Products on Enterprise Edition UNIX CD-ROMs o 42.30 Trial Products on DB2 Connect Enterprise Edition UNIX CD-ROMs o 42.31 Drop Data Links Manager o 42.32 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets o 42.33 Error SQL1035N when Using CLP on Windows 2000 o 42.34 Enhancement to SQL Assist o 42.35 Gnome and KDE Desktop Integration for DB2 on Linux o 42.36 Running DB2 under Windows 2000 Terminal Server, Administration Mode o 42.37 Online Help for Backup and Restore Commands o 42.38 "Warehouse Manager" Should Be "DB2 Warehouse Manager" ------------------------------------------------------------------------ Additional Information * Additional Information o 43.1 DB2 Universal Database and DB2 Connect Online Support o 43.2 DB2 Magazine ------------------------------------------------------------------------ Appendixes * Appendix A. Notices o A.1 Trademarks ------------------------------------------------------------------------ Welcome to DB2 Universal Database Version 7! Note: Set the font to monospace for better viewing of these Release Notes. The DB2 Universal Database and DB2 Connect Support site is updated regularly. Check http://www.ibm.com/software/data/db2/udb/winos2unix/support for the latest information. 
This file contains information about the following products that was not available when the DB2 manuals were printed:

   IBM DB2 Universal Database Personal Edition, Version 7.2
   IBM DB2 Universal Database Workgroup Edition, Version 7.2
   IBM DB2 Universal Database Enterprise Edition, Version 7.2
   IBM DB2 Data Links Manager, Version 7.2
   IBM DB2 Universal Database Enterprise - Extended Edition, Version 7.2
   IBM DB2 Query Patroller, Version 7.2
   IBM DB2 Personal Developer's Edition, Version 7.2
   IBM DB2 Universal Developer's Edition, Version 7.2
   IBM DB2 Data Warehouse Manager, Version 7.2
   IBM DB2 Relational Connect, Version 7.2

A separate Release Notes file, installed as READCON.TXT, is provided for the following products:

   IBM DB2 Connect Personal Edition, Version 7.2
   IBM DB2 Connect Enterprise Edition, Version 7.2

The What's New book contains an overview of some of the major DB2 enhancements for Version 7.2. If you don't have the Version 7.2 What's New book, you can view or download it from http://www.ibm.com/software/data/db2/udb/winos2unix/support.

Note: A revision bar (|) on the left side of a page indicates that the line has been added or modified since the Release Notes were first published.

------------------------------------------------------------------------

Special Notes

------------------------------------------------------------------------

Special Notes

------------------------------------------------------------------------

1.1 Accessibility Features of DB2 UDB Version 7

The DB2 UDB family of products includes a number of features that make the products more accessible for people with disabilities.
These features include:

* Features that facilitate keyboard input and navigation
* Features that enhance display properties
* Options for audio and visual alert cues
* Compatibility with assistive technologies
* Compatibility with accessibility features of the operating system
* Accessible documentation formats

1.1.1 Keyboard Input and Navigation

1.1.1.1 Keyboard Input

The DB2 Control Center can be operated using only the keyboard. Menu items and controls provide access keys that allow users to activate a control or select a menu item directly from the keyboard. These keys are self-documenting, in that the access keys are underlined on the control or menu where they appear.

1.1.1.2 Keyboard Focus

In UNIX-based systems, the position of the keyboard focus is highlighted, indicating which area of the window is active and where the user's keystrokes will have an effect.

1.1.2 Features for Accessible Display

The DB2 Control Center has a number of features that enhance the user interface and improve accessibility for users with low vision. These accessibility enhancements include support for high-contrast settings and customizable font properties.

1.1.2.1 High-Contrast Mode

The Control Center interface supports the high-contrast-mode option provided by the operating system. This feature assists users who require a higher degree of contrast between background and foreground colors.

1.1.2.2 Font Settings

The Control Center interface allows users to select the color, size, and font for the text in menus and dialog windows.

1.1.2.3 Non-dependence on Color

Users do not need to distinguish between colors in order to use any of the functions in this product.

1.1.3 Alternative Alert Cues

The user can opt to receive alerts through audio or visual cues.

1.1.4 Compatibility with Assistive Technologies

The DB2 Control Center interface is compatible with screen reader applications such as Via Voice.
When in application mode, the Control Center interface has the properties required for these accessibility applications to make onscreen information available to blind users.

1.1.5 Accessible Documentation

Documentation for the DB2 family of products is available in HTML format. This allows users to view documentation according to the display preferences set in their browsers. It also allows the use of screen readers and other assistive technologies.

------------------------------------------------------------------------

1.2 Additional Required Solaris Patch Level

DB2 Universal Database Version 7 for Solaris Version 2.6 requires patch 106285-02 or higher, in addition to the patches listed in the DB2 for UNIX Quick Beginnings manual.

------------------------------------------------------------------------

1.3 Supported CPUs on DB2 Version 7 for Solaris

CPUs prior to UltraSPARC are not supported.

------------------------------------------------------------------------

1.4 Problems When Adding Nodes to a Partitioned Database

When adding nodes to a partitioned database that has one or more system temporary table spaces with a page size that is different from the default page size (4 KB), you may encounter the error message "SQL6073N Add Node operation failed" and an SQLCODE. This occurs because only the IBMDEFAULTBP buffer pool, with a page size of 4 KB, exists when the node is created.

For example, you can use the db2start command to add a node to the current partitioned database:

   DB2START NODENUM 2 ADDNODE HOSTNAME newhost PORT 2

If the partitioned database has system temporary table spaces with the default page size, the following message is returned:

   SQL6075W The Start Database Manager operation successfully added the
   node. The node is not active until all nodes are stopped and started
   again.
However, if the partitioned database has system temporary table spaces that are not the default page size, the returned message is:

   SQL6073N Add Node operation failed. SQLCODE = "<-902>"

In a similar example, you can use the ADD NODE command after manually updating the db2nodes.cfg file with the new node description. After editing the file and running the ADD NODE command against a partitioned database that has system temporary table spaces with the default page size, the following message is returned:

   DB20000I The ADD NODE command completed successfully.

However, if the partitioned database has system temporary table spaces that are not the default page size, the returned message is:

   SQL6073N Add Node operation failed. SQLCODE = "<-902>"

One way to prevent the problems outlined above is to run:

   DB2SET DB2_HIDDENBP=16

before issuing db2start or the ADD NODE command. This registry variable enables DB2 to allocate hidden buffer pools of 16 pages each, using page sizes different from the default. This enables the ADD NODE operation to complete successfully.

Another way to prevent these problems is to specify the WITHOUT TABLESPACES clause on the ADD NODE or the db2start command. After doing this, you will have to create the buffer pools using the CREATE BUFFERPOOL statement, and associate the system temporary table spaces with the buffer pools using the ALTER TABLESPACE statement.

When adding nodes to an existing nodegroup that has one or more table spaces with a page size that is different from the default page size (4 KB), you may encounter the error message "SQL0647N Bufferpool "" is currently not active.". This occurs because the non-default page size buffer pools created on the new node have not been activated for the table spaces.
For example, you can use the ALTER NODEGROUP statement to add a node to a nodegroup:

   DB2START
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2)

If the nodegroup has table spaces with the default page size, the following message is returned:

   SQL1759W Redistribute nodegroup is required to change data
   positioning for objects in nodegroup "" to include some added nodes
   or exclude some dropped nodes.

However, if the nodegroup has table spaces that are not the default page size, the returned message is:

   SQL0647N Bufferpool "" is currently not active.

One way to prevent this problem is to create buffer pools for each page size and then to reconnect to the database before issuing the ALTER NODEGROUP statement:

   DB2START
   CONNECT TO mpp1
   CREATE BUFFERPOOL bp1 SIZE 1000 PAGESIZE 8192
   CONNECT RESET
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2)

A second way to prevent the problem is to run:

   DB2SET DB2_HIDDENBP=16

before issuing the db2start command and the CONNECT and ALTER NODEGROUP statements.

Another problem can occur when the ALTER TABLESPACE statement is used to add a table space to a node. For example:

   DB2START
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2) WITHOUT TABLESPACES
   ALTER TABLESPACE ts1 ADD ('ts1') ON NODE (2)

This series of commands and statements generates the error message SQL0647N (not the expected message SQL1759W). To complete this change correctly, reconnect to the database after the ALTER NODEGROUP ... WITHOUT TABLESPACES statement:

   DB2START
   CONNECT TO mpp1
   ALTER NODEGROUP ng1 ADD NODE (2) WITHOUT TABLESPACES
   CONNECT RESET
   CONNECT TO mpp1
   ALTER TABLESPACE ts1 ADD ('ts1') ON NODE (2)

Another way to prevent the problem is to run:

   DB2SET DB2_HIDDENBP=16

before issuing the db2start command and the CONNECT, ALTER NODEGROUP, and ALTER TABLESPACE statements.
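The WITHOUT TABLESPACES route described above can be pulled together into one CLP session. The sketch below is illustrative only: the node number, host name, database name, buffer pool name (bp8k), and system temporary table space name (tmp8k) are placeholder values, not names taken from the product documentation.

```sql
-- Sketch: add node 2 without table space containers, then create the
-- missing non-default (8 KB) buffer pool and associate the 8 KB system
-- temporary table space with it. bp8k and tmp8k are placeholder names.
DB2START NODENUM 2 ADDNODE HOSTNAME newhost PORT 2 WITHOUT TABLESPACES
CONNECT TO mpp1
CREATE BUFFERPOOL bp8k SIZE 1000 PAGESIZE 8192
CONNECT RESET
CONNECT TO mpp1
ALTER TABLESPACE tmp8k BUFFERPOOL bp8k
CONNECT RESET
```

The CONNECT RESET and reconnect between CREATE BUFFERPOOL and ALTER TABLESPACE mirror the reconnect pattern used in the examples above, so that the new buffer pool is visible before the table space is associated with it.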
------------------------------------------------------------------------

1.5 Errors During Migration

During migration, error entries in the db2diag.log file (database not migrated) appear even when migration is successful, and can be ignored.

------------------------------------------------------------------------

1.6 Chinese Locale Fix on Red Flag Linux

If you are using Simplified Chinese Red Flag Linux Server Version 1.1, contact Red Flag to receive the Simplified Chinese locale fix. Without the Simplified Chinese locale fix for Version 1.1, DB2 does not recognize that the code page of Simplified Chinese is 1386.

------------------------------------------------------------------------

1.7 DB2 Install May Hang if a Removable Drive is Not Attached

During DB2 installation, the install may hang after selecting the install type when using a computer with a removable drive that is not attached. To solve this problem, run setup, specifying the -a option:

   setup.exe -a

------------------------------------------------------------------------

1.8 Additional Locale Setting for DB2 for Linux in a Japanese and Simplified Chinese Linux Environment

An additional locale setting is required when you want to use the Java GUI tools, such as the Control Center, on a Japanese or Simplified Chinese Linux system. Japanese or Chinese characters cannot be displayed correctly without this setting. Include the following setting in your user profile, or run it from the command line before every invocation of the Control Center.

For a Japanese system:

   export LC_ALL=ja_JP

For a Simplified Chinese system:

   export LC_ALL=zh_CN

------------------------------------------------------------------------

1.9 Control Center Problem on Microsoft Internet Explorer

There is a problem caused by the Internet Explorer (IE) security options settings. The Control Center uses unsigned jars; therefore, access to system information is disabled by the security manager.
To eliminate this problem, reconfigure the IE security options as follows:

1. Select Internet Options on the View menu (IE4) or the Tools menu (IE5).
2. On the Security page, select the Trusted sites zone.
3. Click Add Sites....
4. Add the Control Center Web server to the trusted sites list. If the Control Center Web server is in the same domain, it may be useful to add only the Web server name (without the domain name). For example:

      http://ccWebServer.ccWebServerDomain
      http://ccWebServer

5. Click OK.
6. Click Settings....
7. Scroll down to Java --> Java Permissions and select Custom.
8. Click Java Custom Settings....
9. Select the Edit Permissions page.
10. Scroll down to Unsigned Content --> Run Unsigned Content --> Additional Unsigned Permissions --> System Information and select Enable.
11. Click OK on each open window.

------------------------------------------------------------------------

1.10 Incompatibility between Information Catalog Manager and Sybase in the Windows Environment

Installing Information Catalog Manager (ICM) Version 7 on the same Windows NT or Windows 2000 machine as Sybase Open Client results in an error, and the Sybase utilities stop working. An error message similar to the following occurs:

   Fail to initialize LIBTCL.DLL. Please make sure the SYBASE
   environment variable is set correctly.

Avoid this scenario by removing the environment parameter LC_ALL from the Windows environment parameters. LC_ALL is a locale category parameter. Locale categories are manifest constants used by the localization routines to specify which portion of the locale information a program uses. The locale refers to the locality (or country) for which certain aspects of your program can be customized. Locale-dependent areas include, for example, the formatting of dates or the display format for monetary values. LC_ALL affects all locale-specific behavior (all categories).
If you remove the LC_ALL environment parameter so that ICM can coexist with Sybase on the Windows NT platform, the following facilities no longer work:

* Information Catalog User
* Information Catalog Administrator
* Information Catalog Manager

------------------------------------------------------------------------

1.11 Loss of Control Center Function

Applying FixPak 2 to a DB2 server should introduce no problems for downlevel Control Center clients. However, in DB2 Version 7.2, downlevel Control Center clients lose nearly all functionality. Downlevel in this case refers to any Version 6 client prior to FixPak 6, and any Version 7 client prior to FixPak 2. Version 5 clients are not affected. The suggested fix is to upgrade any affected clients: Version 6 clients must be upgraded to FixPak 6 or later, and Version 7 clients must be upgraded to FixPak 2 or later.

------------------------------------------------------------------------

1.12 Netscape CD not shipped with DB2 UDB

The Netscape CD is no longer shipped with DB2 UDB. Netscape products are available from http://www.netscape.com.

------------------------------------------------------------------------

1.13 Error in XML Readme Files

The README.TXT file for DB2 XML Extender Version 7.1 says the following under "Considerations":

   3. The default version of DB2 UDB is DB2 UDB Version 7.1. If you
   wish to use DB2 UDB Version 6.1 on AIX and Solaris, you should
   ensure that you are running with DB2 UDB V6.1 instance and with the
   DB2 UDB V6.1 libraries.

This is incorrect. The DB2 XML Extender is supported only with DB2 Version 7.1 and 7.2.

The files readme.aix, readme.nt, and readme.sun list software requirements of:

* DB2 UDB 6.1 with FP1_U465423 or higher (AIX)
* DB2 Universal Database Version 6.1 or higher with FixPak 3 installed (NT)
* DB2 UDB Version 6.1 with FixPak FP1_U465424 or higher (Sun)

This is incorrect. The DB2 XML Extender requires DB2 Version 7.1 or 7.2.
------------------------------------------------------------------------

1.14 Possible Data Loss on Linux for S/390

When using DB2 on Linux for S/390 with a 2.2 series kernel, limit the amount of available RAM on the Linux machine to less than 1 GB. Limiting the RAM to 1 GB avoids possible data loss in DB2 due to a Linux kernel bug. This affects only DB2 on Linux for S/390, not Linux on Intel. A kernel patch will be made available at http://www10.software.ibm.com/developerworks/opensource/linux390/alpha_src.html, after which it will be possible to use more than 1 GB of RAM.

------------------------------------------------------------------------

1.15 DB2 UDB on Windows 2000

Throughout these Release Notes, references to Windows NT include Windows 2000, unless otherwise specified.

------------------------------------------------------------------------

Online Documentation (HTML, PDF, and Search)

------------------------------------------------------------------------

2.1 Supported Web Browsers on the Windows 2000 Operating System

We recommend that you use Microsoft Internet Explorer on Windows 2000. If you use Netscape, be aware of the following:

* DB2 online information searches may take a long time to complete on Windows 2000 using Netscape. Netscape will use all available CPU resources and appear to run indefinitely. While the search results may eventually return, we recommend that you change focus by clicking on another window after submitting the search. The search results will then return in a reasonable amount of time.

* You may notice that when you request help, it is displayed correctly in a Netscape browser window; however, if you leave the browser window open and request help later from a different part of the Control Center, nothing changes in the browser. If you close the browser window and request help again, the correct help comes up.
You may be able to fix this problem by following the steps in 2.4, Error Messages when Attempting to Launch Netscape. You can also get around the problem by closing the browser window before requesting help for the Control Center.

* When you request Control Center help, or a topic from the Information Center, you may get an error message. To fix this, follow the steps in 2.4, Error Messages when Attempting to Launch Netscape.

------------------------------------------------------------------------

2.2 Searching the DB2 Online Information on Solaris

If you are having problems searching the DB2 online information on Solaris, check your system's kernel parameters in /etc/system. Here are the minimum kernel parameters required by DB2's search system, NetQuestion:

   semsys:seminfo_semmni 256
   semsys:seminfo_semmap 258
   semsys:seminfo_semmns 512
   semsys:seminfo_semmnu 512
   semsys:seminfo_semmsl 50
   shmsys:shminfo_shmmax 6291456
   shmsys:shminfo_shmseg 16
   shmsys:shminfo_shmmni 300

To set a kernel parameter, add a line at the end of /etc/system as follows:

   set <parameter-name> = value

You must reboot your system for any new or changed values to take effect.

------------------------------------------------------------------------

2.3 Switching NetQuestion for OS/2 to Use TCP/IP

The instructions for switching NetQuestion to use TCP/IP on OS/2 systems are incomplete. The location of the *.cfg files mentioned in those instructions is the data subdirectory of the NetQuestion installation directory. You can determine the NetQuestion installation directory by entering one of the following commands:

   echo %IMNINSTSRV%    //for SBCS installations
   echo %IMQINSTSRV%    //for DBCS installations

------------------------------------------------------------------------

2.4 Error Messages when Attempting to Launch Netscape

If you encounter the following error messages when attempting to launch Netscape:

   Cannot find file (or one of its components).
   Check to ensure the path and filename are correct and that all
   required libraries are available.

   Unable to open "D:\Program Files\SQLLIB\CC\..\doc\html\db2help\XXXXX.htm"

you should take the following steps to correct this problem on Windows NT, 95, or 98 (see below for what to do on Windows 2000):

1. From the Start menu, select Programs --> Windows Explorer. Windows Explorer opens.
2. From Windows Explorer, select View --> Options. The Options notebook opens.
3. Click the File types tab. The File types page opens.
4. Highlight Netscape Hypertext Document in the Registered file types field and click Edit. The Edit file type window opens.
5. Highlight "Open" in the Actions field.
6. Click the Edit button. The Editing action for type window opens.
7. Uncheck the Use DDE check box.
8. In the Application used to perform action field, make sure that "%1" appears at the very end of the string (include the quotation marks, and a blank space before the first quotation mark).

If you encounter the messages on Windows 2000, you should take the following steps:

1. From the Start menu, select Windows Explorer. Windows Explorer opens.
2. From Windows Explorer, select Tools --> Folder Options. The Folder Options notebook opens.
3. Click the File Types tab.
4. On the File Types page, in the Registered file types field, highlight HTM Netscape Hypertext Document and click Advanced. The Edit File Type window opens.
5. Highlight "open" in the Actions field.
6. Click the Edit button. The Editing Action for Type window opens.
7. Uncheck the Use DDE check box.
8. In the Application used to perform action field, make sure that "%1" appears at the very end of the string (include the quotation marks, and a blank space before the first quotation mark).
9. Click OK.
10. Repeat steps 4 through 8 for the HTML Netscape Hypertext Document and SHTML Netscape Hypertext Document file types.
------------------------------------------------------------------------

2.5 Configuration Requirement for Adobe Acrobat Reader on UNIX Based Systems

Acrobat Reader is offered only in English on UNIX based platforms, and errors may be returned when attempting to open PDF files with language locales other than English. These errors suggest font access or extraction problems with the PDF file, but are actually due to the fact that the English Acrobat Reader cannot function correctly within a non-English UNIX language locale. To view such PDF files, switch to the English locale by performing one of the following steps before launching the English Acrobat Reader:

* Edit the Acrobat Reader launch script by adding the following line after the #!/bin/sh statement in the launch script file:

      LANG=C;export LANG

  This approach ensures correct behavior when Acrobat Reader is launched by other applications, such as Netscape Navigator, or from an application help menu.

* Enter LANG=C at the command prompt to set the Acrobat Reader application environment to English.

For further information, contact Adobe Systems (http://www.Adobe.com).

------------------------------------------------------------------------

2.6 SQL Reference is Provided in One PDF File

The "Using the DB2 Library" appendix in each book indicates that the SQL Reference is available in PDF format as two separate volumes. This is incorrect. Although the printed book appears in two volumes, and the two corresponding form numbers are correct, there is only one PDF file, and it contains both volumes. The PDF file name is db2s0x70.
------------------------------------------------------------------------ Installation and Configuration * General Installation Information o 3.1 Downloading Installation Packages for All Supported DB2 Clients o 3.2 Installing DB2 on Windows 2000 o 3.3 Migration Issue Regarding Views Defined with Special Registers o 3.4 IPX/SPX Protocol Support on Windows 2000 o 3.5 Stopping DB2 Processes Before Upgrading a Previous Version of DB2 o 3.6 Run db2iupdt After Installing DB2 If Another DB2 Product is Already Installed o 3.7 Setting up the Linux Environment to Run the DB2 Control Center o 3.8 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition for Linux on S/390 o 3.9 DB2 Universal Database Enterprise - Extended Edition for UNIX Quick Beginnings o 3.10 shmseg Kernel Parameter for HP-UX o 3.11 Migrating IBM Visual Warehouse Control Databases o 3.12 Accessing Warehouse Control Databases * Data Links Manager Quick Beginnings o 4.1 Dlfm start Fails with Message: "Error in getting the afsfid for prefix" o 4.2 Setting Tivoli Storage Manager Class for Archive Files o 4.3 Disk Space Requirements for DFS Client Enabler o 4.4 Monitoring the Data Links File Manager Back-end Processes on AIX o 4.5 Installing and Configuring DB2 Data Links Manager for AIX: Additional Installation Considerations in DCE-DFS Environments o 4.6 Failed "dlfm add_prefix" Command o 4.7 Installing and Configuring DB2 Data Links Manager for AIX: Installing DB2 Data Links Manager on AIX Using the db2setup Utility o 4.8 Installing and Configuring DB2 Data Links Manager for AIX: DCE-DFS Post-Installation Task o 4.9 Installing and Configuring DB2 Data Links Manager for AIX: Manually Installing DB2 Data Links Manager Using Smit o 4.10 Installing and Configuring DB2 Data Links DFS Client Enabler o 4.11 Installing and Configuring DB2 Data Links Manager for Solaris o 4.12 Choosing a Backup Method for DB2 Data Links Manager on AIX o 4.13 Choosing a Backup Method for DB2 Data Links Manager on 
Solaris Operating Environment o 4.14 Choosing a Backup Method for DB2 Data Links Manager on Windows NT o 4.15 Backing up a Journalized File System on AIX o 4.16 Administrator Group Privileges in Data Links on Windows NT o 4.17 Minimize Logging for Data Links File System Filter (DLFF) Installation + 4.17.1 Logging Messages after Installation o 4.18 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets o 4.19 Before You Begin/Determine hostname o 4.20 Working with the Data Links File Manager: Cleaning up After Dropping a DB2 Data Links Manager from a DB2 Database o 4.21 DLFM1001E (New Error Message) o 4.22 DLFM Setup Configuration File Option o 4.23 Error when Running Data Links/DFS Script dmapp_prestart on AIX o 4.24 Tivoli Space Manager Integration with Data Links + 4.24.1 Restrictions and Limitations o 4.25 Chapter 4. Installing and Configuring DB2 Data Links Manager for AIX + 4.25.1 Common Installation Considerations + 4.25.1.1 Migrating from DB2 File Manager Version 5.2 to DB2 Data Links Manager Version 7 * Installation and Configuration Supplement o 5.1 Chapter 5. Installing DB2 Clients on UNIX Operating Systems + 5.1.1 HP-UX Kernel Configuration Parameters o 5.2 Chapter 12. Running Your Own Applications + 5.2.1 Binding Database Utilities Using the Run-Time Client + 5.2.2 UNIX Client Access to DB2 Using ODBC o 5.3 Chapter 24. Setting Up a Federated System to Access Multiple Data Sources + 5.3.1 Federated Systems + 5.3.1.1 Restriction + 5.3.2 Installing DB2 Relational Connect + 5.3.2.1 Installing DB2 Relational Connect on Windows NT servers + 5.3.2.2 Installing DB2 Relational Connect on AIX, Linux, and Solaris Operating Environment servers o 5.4 Chapter 26. 
Accessing Oracle data sources + 5.4.1 Documentation Errors o 5.5 Accessing Sybase data sources (new chapter) + 5.5.1 Adding Sybase data sources to a federated server + 5.5.1.1 Step 1: Set the environment variables and update the profile registry + 5.5.1.2 Step 2: Link DB2 to Sybase client software (AIX and Solaris only) + 5.5.1.3 Step 3: Recycle the DB2 instance + 5.5.1.4 Step 4: Create and set up an interfaces file + 5.5.1.5 Step 5: Create the wrapper + 5.5.1.6 Step 6: Optional: Set the DB2_DJ_COMM environment variable + 5.5.1.7 Step 7: Create the server + 5.5.1.8 Optional: Step 8: Set the CONNECTSTRING server option + 5.5.1.9 Step 9: Create a user mapping + 5.5.1.10 Step 10: Create nicknames for tables and views + 5.5.2 Specifying Sybase code pages o 5.6 Accessing Microsoft SQL Server data sources using ODBC (new chapter) + 5.6.1 Adding Microsoft SQL Server data sources to a federated server + 5.6.1.1 Step 1: Set the environment variables (AIX only) + 5.6.1.2 Step 2: Run the shell script (AIX only) + 5.6.1.3 Step 3: Optional: Set the DB2_DJ_COMM environment variable + 5.6.1.4 Step 4: Recycle the DB2 instance (AIX only) + 5.6.1.5 Step 5: Create the wrapper + 5.6.1.6 Step 6: Create the server + 5.6.1.7 Step 7: Create a user mapping + 5.6.1.8 Step 8: Create nicknames for tables and views + 5.6.1.9 Step 9: Optional: Obtain ODBC traces + 5.6.2 Reviewing Microsoft SQL Server code pages ------------------------------------------------------------------------ General Installation Information ------------------------------------------------------------------------ 3.1 Downloading Installation Packages for All Supported DB2 Clients To download installation packages for all supported DB2 clients, which include all the pre-Version 7 clients, connect to the IBM DB2 Client Application Enabler Pack Web site at http://www.ibm.com/software/data/db2/db2tech/clientpak.html ------------------------------------------------------------------------ 3.2 Installing DB2 on Windows 2000 On 
Windows 2000, when installing over a previous version of DB2 or when reinstalling the current version, ensure that the recovery options for all of the DB2 services are set to "Take No Action". ------------------------------------------------------------------------ 3.3 Migration Issue Regarding Views Defined with Special Registers Views become unusable after database migration if the special register USER or CURRENT SCHEMA is used to define a view column. For example:

   create view v1 (c1) as values user

In Version 5, USER and CURRENT SCHEMA were of data type CHAR(8), but since Version 6, they have been defined as VARCHAR(128). In this example, the data type for column c1 is CHAR if the view is created in Version 5, and it will remain CHAR after database migration. When the view is used after migration, it will compile at run time, but will then fail because of the data type mismatch. The solution is to drop and then recreate the view. Before dropping the view, capture the syntax used to create it by querying the SYSCAT.VIEWS catalog view. For example:

   select text from syscat.views where viewname='<>'

------------------------------------------------------------------------ 3.4 IPX/SPX Protocol Support on Windows 2000 This information refers to the Planning for Installation chapter in your Quick Beginnings book, in the section called "Possible Client-to-Server Connectivity Scenarios." The published protocol support chart is not completely correct. A Windows 2000 client connected to any OS/2 or UNIX based server using IPX/SPX is not supported. Also, any OS/2 or UNIX based client connected to a Windows 2000 server using IPX/SPX is not supported. ------------------------------------------------------------------------ 3.5 Stopping DB2 Processes Before Upgrading a Previous Version of DB2 This information refers to the migration information in your DB2 for Windows Quick Beginnings book. 
If you are upgrading a previous version of DB2 that is running on your Windows machine, the installation program provides a warning containing a list of processes that are holding DB2 DLLs in memory. At this point, you have the option to manually stop the processes that appear in that list, or you can let the installation program shut down these processes automatically. It is recommended that you manually stop all DB2 processes before installing to avoid loss of data. The best way to ensure that DB2 processes are not running is to view your system's processes through the Windows Services panel. In the Windows Services panel, ensure that there are no DB2 services, OLAP services, or Data warehouse services running. Note: You can only have one version of DB2 running on Windows platforms at any one time. For example, you cannot have DB2 Version 7 and DB2 Version 6 running on the same Windows machine. If you install DB2 Version 7 on a machine that has DB2 Version 6 installed, the installation program will delete DB2 Version 6 during the installation. Refer to the appropriate Quick Beginnings manual for more information on migrating from previous versions of DB2. ------------------------------------------------------------------------ 3.6 Run db2iupdt After Installing DB2 If Another DB2 Product is Already Installed The following information should have been available in your Quick Beginnings installation documentation. When installing DB2 UDB Version 7 on UNIX based systems, and a DB2 product is already installed, you will need to run the db2iupdt command to update those instances with which you intend to use the new features of this product. Some features will not be available until this command is run. ------------------------------------------------------------------------ 3.7 Setting up the Linux Environment to Run the DB2 Control Center This information should be included with the "Installing the DB2 Control Center" chapter in your Quick Beginnings book. 
After leaving the DB2 installer on Linux and returning to the terminal window, type the following commands to set the correct environment to run the DB2 Control Center:

   su -l
   export JAVA_HOME=/usr/jdk118
   export DISPLAY=:0

Then, open another terminal window and type:

   su root
   xhost +

Close that terminal window, return to the terminal where you are logged in as the instance owner ID, and type the command:

   db2cc

to start the Control Center. ------------------------------------------------------------------------ 3.8 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition for Linux on S/390 DB2 Universal Database Enterprise Edition and DB2 Connect Enterprise Edition are now available for Linux on S/390. Before installing Linux on an S/390 machine, you should be aware of the software and hardware requirements:

Hardware
   S/390 9672 Generation 5 or higher, or Multiprise 3000.
Software
   * SuSE Linux v7.0 for S/390 or Turbolinux Server 6 for zSeries and S/390
   * kernel level 2.2.16, with patches for S/390 (see below)
   * glibc 2.1.3
   * libstdc++ 6.1

No patches are currently required for Linux on S/390. For the latest updates, go to the http://www.software.ibm.com/data/db2/linux Web site. Notes: 1. Only 32-bit Intel-based Linux and Linux on S/390 are supported. 2. The following are not available on Linux/390 in DB2 Version 7: o DB2 UDB Enterprise - Extended Edition o DB2 Extenders o Data Links Manager o DB2 Administrative Client o Change Password Support o LDAP Support ------------------------------------------------------------------------ 3.9 DB2 Universal Database Enterprise - Extended Edition for UNIX Quick Beginnings Chapter 5, "Installing and Configuring DB2 Universal Database on Linux", should indicate that each physical node in a Linux EEE cluster must have the same kernel, glibc, and libstdc++ levels. 
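One way to verify this is to run a short script on every node and compare the output. This is only a sketch: the rpm package names (glibc, libstdc++) are assumptions and vary by distribution.

```shell
#!/bin/sh
# Print the levels that must match on every physical node of a
# Linux EEE cluster. Run this on each node and diff the results.
# The rpm package names are assumptions; adjust for your distribution.
echo "kernel:    $(uname -r)"
echo "glibc:     $(rpm -q glibc 2>/dev/null || echo unknown)"
echo "libstdc++: $(rpm -q libstdc++ 2>/dev/null || echo unknown)"
```

Running the script on each node (for example over rsh) and comparing the three lines is enough to spot a mismatched node before installation.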
A trial version of DB2 EEE for Linux can be downloaded from the following Web site: http://www6.software.ibm.com/dl/db2udbdl/db2udbdl-p ------------------------------------------------------------------------ 3.10 shmseg Kernel Parameter for HP-UX Information about updating the HP-UX kernel configuration parameters provided in your Quick Beginnings book is incorrect. The recommended value for the shmseg kernel parameter for HP-UX should be ignored. The default HP-UX value (120) should be used instead. ------------------------------------------------------------------------ 3.11 Migrating IBM Visual Warehouse Control Databases DB2 Universal Database Quick Beginnings for Windows provides information about how the active warehouse control database is migrated during a typical install of DB2 Universal Database Version 7 on Windows NT and Windows 2000. If you have more than one warehouse control database to be migrated, you must use the Warehouse Control Database Management window to migrate the additional databases. Only one warehouse control database can be active at a time. If the last database that you migrate is not the one that you intend to use when you next log on to the Data Warehouse Center, you must use the Warehouse Control Database Management window to register the database that you intend to use. ------------------------------------------------------------------------ 3.12 Accessing Warehouse Control Databases In a typical installation of DB2 Version 7 on Windows NT, a DB2 Version 7 warehouse control database is created along with the warehouse server. If you have a Visual Warehouse warehouse control database, you must upgrade the DB2 server containing the warehouse control database to DB2 Version 7 before the metadata in the warehouse control database can be migrated for use by the DB2 Version 7 Data Warehouse Center. You must migrate any warehouse control databases that you want to continue to use to Version 7. 
The metadata in your active warehouse control database is migrated to Version 7 during the DB2 Version 7 install process. To migrate the metadata in any additional warehouse control databases, use the Warehouse Control Database Migration utility, which you start by selecting Start --> Programs --> IBM DB2 --> Warehouse Control Database Management on Windows NT. For information about migrating your warehouse control databases, see DB2 Universal Database for Windows Quick Beginnings. ------------------------------------------------------------------------ Data Links Manager Quick Beginnings ------------------------------------------------------------------------ 4.1 Dlfm start Fails with Message: "Error in getting the afsfid for prefix" For a Data Links Manager running in the DCE-DFS environment, contact IBM Service if dlfm start fails with the following error: Error in getting the afsfid for prefix The error may occur when a DFS file set registered to the Data Links Manager using "dlfm add_prefix" is deleted. ------------------------------------------------------------------------ 4.2 Setting Tivoli Storage Manager Class for Archive Files To specify which TSM management class to use for the archive files, set the DLFM_TSM_MGMTCLASS DB2 registry entry to the appropriate management class name. ------------------------------------------------------------------------ 4.3 Disk Space Requirements for DFS Client Enabler The DFS Client Enabler is an optional component that you can select during DB2 Universal Database client or server installation. You cannot install a DFS Client Enabler without installing a DB2 Universal Database client or server product, even though the DFS Client Enabler runs on its own without the need for a DB2 UDB client or server. In addition to the 2 MB of disk space required for the DFS Client Enabler code, you should set aside an additional 40 MB if you are installing the DFS Client Enabler as part of a DB2 Run-Time Client installation. 
You will need more disk space if you install the DFS Client Enabler as part of a DB2 Administration Client or DB2 server installation. For more information about disk space requirements for DB2 Universal Database products, refer to the DB2 for UNIX Quick Beginnings manual. ------------------------------------------------------------------------ 4.4 Monitoring the Data Links File Manager Back-end Processes on AIX There has been a change to the output of the dlfm see command. When this command is issued to monitor the Data Links File Manager back-end processes on AIX, the output that is returned will be similar to the following:

   PID   PPID  PGID  RUNAME UNAME ETIME DAEMON NAME
   17500 60182 40838 dlfm   root  12:18 dlfm_copyd_(dlfm)
   41228 60182 40838 dlfm   root  12:18 dlfm_chownd_(dlfm)
   49006 60182 40838 dlfm   root  12:18 dlfm_upcalld_(dlfm)
   51972 60182 40838 dlfm   root  12:18 dlfm_gcd_(dlfm)
   66850 60182 40838 dlfm   root  12:18 dlfm_retrieved_(dlfm)
   67216 60182 40838 dlfm   dlfm  12:18 dlfm_delgrpd_(dlfm)
   60182 1     40838 dlfm   dlfm  12:18 dlfmd_(dlfm)

   DLFM SEE request was successful.

The name that is enclosed within the parentheses is the name of the dlfm instance, in this case "dlfm". ------------------------------------------------------------------------ 4.5 Installing and Configuring DB2 Data Links Manager for AIX: Additional Installation Considerations in DCE-DFS Environments In the section called "Installation prerequisites", there is new information that should be added: You must also install either an e-fix for DFS 3.1, or PTF set 1 (when it becomes available). The e-fix is available from: http://www.transarc.com/Support/dfs/datalinks/efix_dfs31_main_page.html Also, the DFS client must be running before you install the Data Links Manager; use db2setup or smitty. In the section called "Keytab file", there is an error that should be corrected as: The keytab file, which contains the principal and password information, should be called datalink.ktb and .... 
The correct name, datalink.ktb, is used in the example below. The "Keytab file" section should be moved under "DCE-DFS Post-Installation Task", because the creation of this file cannot occur until after the DLMADMIN instance has been created. In the section called "Data Links File Manager servers and clients", it should be noted that the Data Links Manager server must be installed before any of the Data Links Manager clients. A new section, "Backup directory", should be added: If the backup method is to a local file system, this must be a directory in the DFS file system. Ensure that this DFS file set has been created by a DFS administrator. This should not be a DMLFS file set. ------------------------------------------------------------------------ 4.6 Failed "dlfm add_prefix" Command For a Data Links Manager running in the DCE/DFS environment, the dlfm add_prefix command might fail with a return code of -2061 (backup failed). If this occurs, perform the following steps:
1. Stop the Data Links Manager daemon processes by issuing the dlfm stop command.
2. Stop the DB2 processes by issuing the dlfm stopdbm command.
3. Get DCE root credentials by issuing the dce_login root command.
4. Start the DB2 processes by issuing the dlfm startdbm command.
5. Register the file set with the Data Links Manager by issuing the dlfm add_prefix command.
6. Start the Data Links Manager daemon processes by issuing the dlfm start command.
------------------------------------------------------------------------ 4.7 Installing and Configuring DB2 Data Links Manager for AIX: Installing DB2 Data Links Manager on AIX Using the db2setup Utility In the section "DB2 database DLFM_DB created", the DLFM_DB is not created in the DCE-DFS environment. This must be done as a post-installation step. In the section "DCE-DFS pre-start registration for DMAPP", Step 2 should be changed to the following: 2. 
Commands are added to /opt/dcelocal/tcl/user_cmd.tcl to ensure that the DMAPP is started when DFS is started. ------------------------------------------------------------------------ 4.8 Installing and Configuring DB2 Data Links Manager for AIX: DCE-DFS Post-Installation Task The following new section, "Complete the Data Links Manager Install", should be added: On the Data Links Manager server, the following steps must be performed to complete the installation:
1. Create the keytab file as outlined under "Keytab file" in the section "Additional Installation Considerations in DCE-DFS Environment", in the chapter "Installing and Configuring DB2 Data Links Manager for AIX".
2. As root, enter the following commands to start the DMAPP:

      stop.dfs all
      start.dfs all

3. Run "dlfm setup" using DCE root credentials as follows:
   a. Log in as the Data Links Manager administrator, DLMADMIN.
   b. As root, issue dce_login.
   c. Enter the command: dlfm setup.
On the Data Links Manager client, the following steps must be performed to complete the installation:
1. Create the keytab file as outlined under "Keytab file" in the section "Additional Installation Considerations in DCE-DFS Environment", in the chapter "Installing and Configuring DB2 Data Links Manager for AIX".
2. As root, enter the following commands to start the DMAPP:

      stop.dfs all
      start.dfs all

------------------------------------------------------------------------ 4.9 Installing and Configuring DB2 Data Links Manager for AIX: Manually Installing DB2 Data Links Manager Using Smit Under the section "SMIT Post-installation Tasks", modify step 7 to indicate that the command "dce_login root" must be issued before "dlfm setup". Step 11 is not needed; this step is performed automatically when Step 6 (dlfm server_conf) or Step 8 (dlfm client_conf) is done. Also remove step 12 (dlfm start). To complete the installation, perform the following steps: 1. 
Create the keytab file as outlined under "Keytab file" in the section "Additional Installation Considerations in DCE-DFS Environment", in the chapter "Installing and Configuring DB2 Data Links Manager for AIX". 2. As root, enter the following commands to start the DMAPP:

      stop.dfs all
      start.dfs all

------------------------------------------------------------------------ 4.10 Installing and Configuring DB2 Data Links DFS Client Enabler In the section "Configuring a DFS Client Enabler", add the following information to Step 2: Performing the "secval" commands will usually complete the configuration. It may, however, be necessary to reboot the machine as well. If problems are encountered in accessing READ PERMISSION DB files, reboot the machine where the DB2 DFS Client Enabler has just been installed. ------------------------------------------------------------------------ 4.11 Installing and Configuring DB2 Data Links Manager for Solaris The following actions must be performed after installing DB2 Data Links Manager for Solaris: 1. Add the following three lines to the /etc/system file:

      set dlfsdrv:glob_mod_pri=0x100800
      set dlfsdrv:glob_mesg_pri=0xff
      set dlfsdrv:ConfigDlfsUid=UID

   where UID is the user ID of the dlfm user. 2. Reboot the machine to activate the changes. ------------------------------------------------------------------------ 4.12 Choosing a Backup Method for DB2 Data Links Manager on AIX In addition to Disk Copy and XBSA, you can also use Tivoli Storage Manager (TSM) for backing up files that reside on a Data Links server. To use Tivoli Storage Manager as an archive server: 1. Install Tivoli Storage Manager on the Data Links server. For more information, refer to your Tivoli Storage Manager product documentation. 2. Register the Data Links server client application with the Tivoli Storage Manager server. For more information, refer to your Tivoli Storage Manager product documentation. 3. 
Add the following environment variables to the Data Links Manager Administrator's db2profile or db2cshrc script files:

   (for Bash, Bourne, or Korn shell)
   export DSMI_DIR=/usr/tivoli/tsm/client/api/bin
   export DSMI_CONFIG=$HOME/tsm/dsm.opt
   export DSMI_LOG=$HOME/dldump
   export PATH=$PATH:$DSMI_DIR

   (for C shell)
   setenv DSMI_DIR /usr/tivoli/tsm/client/api/bin
   setenv DSMI_CONFIG ${HOME}/tsm/dsm.opt
   setenv DSMI_LOG ${HOME}/dldump
   setenv PATH ${PATH}:${DSMI_DIR}

4. Ensure that the dsm.sys TSM system options file is located in the $DSMI_DIR directory. 5. Ensure that the dsm.opt TSM user options file is located in the INSTHOME/tsm directory, where INSTHOME is the home directory of the Data Links Manager Administrator. 6. Set the PASSWORDACCESS option to generate in the /usr/tivoli/tsm/client/api/bin/dsm.sys Tivoli Storage Manager system options file. 7. Register the TSM password with the generate option before starting the Data Links File Manager for the first time. This way, you will not need to provide a password when the Data Links File Manager initiates a connection to the TSM server. For more information, refer to your TSM product documentation. 8. Set the DLFM_BACKUP_TARGET registry variable to TSM. The value of the DLFM_BACKUP_DIR_NAME registry variable will be ignored in this case. This will activate the Tivoli Storage Manager backup option. Notes: 1. If you change the setting of the DLFM_BACKUP_TARGET registry variable between TSM and disk at run time, you should be aware that the archived files are not moved to the newly specified archive location. For example, if you start the Data Links File Manager with the DLFM_BACKUP_TARGET registry value set to TSM, and change the registry value to a disk location, all newly archived files will be stored in the new location on the disk. The files that were previously archived to TSM will not be moved to the new disk location. 2. To override the default TSM management class, there is a new registry variable called DLFM_TSM_MGMTCLASS. 
If this registry variable is left unset, then the default TSM management class will be used. 9. Stop the Data Links File Manager by entering the dlfm stop command. 10. Start the Data Links File Manager by entering the dlfm start command. ------------------------------------------------------------------------ 4.13 Choosing a Backup Method for DB2 Data Links Manager on Solaris Operating Environment In addition to Disk Copy and XBSA, you can also use Tivoli Storage Manager (TSM) for backing up files that reside on a Data Links server. To use Tivoli Storage Manager as an archive server: 1. Install Tivoli Storage Manager on the Data Links server. For more information, refer to your Tivoli Storage Manager product documentation. 2. Register the Data Links server client application with the Tivoli Storage Manager server. For more information, refer to your Tivoli Storage Manager product documentation. 3. Add the following environment variables to the Data Links Manager Administrator's db2profile or db2cshrc script files:

   (for Bash, Bourne, or Korn shell)
   export DSMI_DIR=/opt/tivoli/tsm/client/api/bin
   export DSMI_CONFIG=$HOME/tsm/dsm.opt
   export DSMI_LOG=$HOME/dldump
   export PATH=$PATH:/opt/tivoli/tsm/client/api/bin

   (for C shell)
   setenv DSMI_DIR /opt/tivoli/tsm/client/api/bin
   setenv DSMI_CONFIG ${HOME}/tsm/dsm.opt
   setenv DSMI_LOG ${HOME}/dldump
   setenv PATH ${PATH}:/opt/tivoli/tsm/client/api/bin

4. Ensure that the dsm.sys TSM system options file is located in the /opt/tivoli/tsm/client/api/bin directory. 5. Ensure that the dsm.opt TSM user options file is located in the INSTHOME/tsm directory, where INSTHOME is the home directory of the Data Links Manager Administrator. 6. Set the PASSWORDACCESS option to generate in the /opt/tivoli/tsm/client/api/bin/dsm.sys Tivoli Storage Manager system options file. 7. Register the TSM password with the generate option before starting the Data Links File Manager for the first time. 
This way, you will not need to provide a password when the Data Links File Manager initiates a connection to the TSM server. For more information, refer to your TSM product documentation. 8. Set the DLFM_BACKUP_TARGET registry variable to TSM. The value of the DLFM_BACKUP_DIR_NAME registry variable will be ignored in this case. This will activate the Tivoli Storage Manager backup option. Notes: 1. If you change the setting of the DLFM_BACKUP_TARGET registry variable between TSM and disk at run time, you should be aware that the archived files are not moved to the newly specified archive location. For example, if you start the Data Links File Manager with the DLFM_BACKUP_TARGET registry value set to TSM, and change the registry value to a disk location, all newly archived files will be stored in the new location on the disk. The files that were previously archived to TSM will not be moved to the new disk location. 2. To override the default TSM management class, there is a new registry variable called DLFM_TSM_MGMTCLASS. If this registry variable is left unset, then the default TSM management class will be used. 9. Stop the Data Links File Manager by entering the dlfm stop command. 10. Start the Data Links File Manager by entering the dlfm start command. ------------------------------------------------------------------------ 4.14 Choosing a Backup Method for DB2 Data Links Manager on Windows NT Whenever a DATALINK value is inserted into a table with a DATALINK column that is defined for recovery, the corresponding DATALINK files on the Data Links server are scheduled to be backed up to an archive server. Currently, Disk Copy (the default method) and Tivoli Storage Manager are the two options that are supported for file backup to an archive server. Future releases of DB2 Data Links Manager for Windows NT will support other vendors' backup media and software. 
Disk Copy (default method) When the backup command is entered on the DB2 server, it ensures that the linked files in the database are backed up on the Data Links server to the directory specified by the DLFM_BACKUP_DIR_NAME environment variable. The default value for this variable is c:\dlfmbackup, where c:\ represents the Data Links Manager backup installation drive. To set this variable to c:\dlfmbackup, enter the following command:

   db2set -g DLFM_BACKUP_DIR_NAME=c:\dlfmbackup

Ensure that the location specified by the DLFM_BACKUP_DIR_NAME environment variable is not on a file system that uses a Data Links Filesystem Filter, and that the required space is available in the directory that you specified for the backup files. Also, ensure that the DLFM_BACKUP_TARGET variable is set to LOCAL by entering the following command:

   db2set -g DLFM_BACKUP_TARGET=LOCAL

After setting or changing these variables, stop and restart the Data Links File Manager using the dlfm stop and dlfm start commands. Tivoli Storage Manager To use Tivoli Storage Manager as an archive server: 1. Install Tivoli Storage Manager on the Data Links server. For more information, refer to your Tivoli Storage Manager product documentation. 2. Register the Data Links server client application with the Tivoli Storage Manager server. For more information, refer to your Tivoli Storage Manager product documentation. 3. Click on Start and select Settings --> Control Panel --> System. The System Properties window opens. Select the Environment tab and enter the following environment variables and corresponding values:

   Variable     Value
   DSMI_DIR     c:\tsm\baclient
   DSMI_CONFIG  c:\tsm\baclient\dsm.opt
   DSMI_LOG     c:\tsm\dldump

4. Ensure that the dsm.sys TSM system options file is located in the c:\tsm\baclient directory. 5. Ensure that the dsm.opt TSM user options file is located in the c:\tsm\baclient directory. 6. Set the PASSWORDACCESS option to generate in the c:\tsm\baclient\dsm.sys Tivoli Storage Manager system options file. 7. 
Register the TSM password with the generate option before starting the Data Links File Manager for the first time. This way, you will not need to provide a password when the Data Links File Manager initiates a connection to the TSM server. For more information, refer to your TSM product documentation. 8. Set the DLFM_BACKUP_TARGET environment variable to TSM using the following command:

   db2set -g DLFM_BACKUP_TARGET=TSM

The value of the DLFM_BACKUP_DIR_NAME environment variable will be ignored in this case. This will activate the Tivoli Storage Manager backup option. Notes: 1. If you change the setting of the DLFM_BACKUP_TARGET environment variable between TSM and LOCAL at run time, you should be aware that the archived files are not moved to the newly specified archive location. For example, if you start the Data Links File Manager with the DLFM_BACKUP_TARGET environment variable set to TSM, and change its value to LOCAL, all newly archived files will be stored in the new location on the disk. The files that were previously archived to TSM will not be moved to the new disk location. 2. To override the default TSM management class, there is a new environment variable called DLFM_TSM_MGMTCLASS. If this variable is left unset, then the default TSM management class will be used. 9. Stop the Data Links File Manager by entering the dlfm stop command. 10. Start the Data Links File Manager by entering the dlfm start command. ------------------------------------------------------------------------ 4.15 Backing up a Journalized File System on AIX The book states that the Data Links Manager must be stopped, and that an offline backup should be made of the file system. The following approach, which removes the requirement of stopping the Data Links Manager, is suggested for users who require higher availability. 1. Access the CLI source file quiesce.c and the shell script online.sh. These files are located in the /samples/dlfm directory. 2. 
Compile quiesce.c:

   xlC -o quiesce -L$HOME/sqllib/lib -I$HOME/sqllib/include -c quiesce.c

3. As root, run the script on the node that has the DLFS file system. The shell script online.sh assumes that you have a catalog entry on the Data Link Manager node for each database that is registered with the Data Link Manager. It also assumes that /etc/filesystems has the complete entry for the DLFS file system. The shell script does the following: * Quiesces all the tables in databases that are registered with the Data Links Manager. This will stop any new activity. * Unmounts and remounts the file system as a read-only file system. * Performs a file system backup. * Unmounts and remounts the file system as a read-write file system. * Resets the DB2 tables; that is, brings them out of the quiesce state. The script must be modified to suit your environment as follows: 1. Select the backup command and put it in the do_backup function of the script. 2. Set the following environment variables within the script: o DLFM_INST: set this to the DLFM instance name. o PATH_OF_EXEC: set this to the path where the "quiesce" executable resides. Invoke the script as follows: online.sh ------------------------------------------------------------------------ 4.16 Administrator Group Privileges in Data Links on Windows NT On Windows NT, a user belonging to the administrator group has, for most functions, the same privileges with regard to files linked using DataLinks as a root user on UNIX. The following table compares both.

   Operation                  UNIX (root)  Windows NT (Administrator)
   Rename                     Yes          Yes
   Access file without token  Yes          Yes
   Delete                     Yes          No (see note below)
   Update                     Yes          No (see note below)

Note: NTFS disallows these operations for a read-only file. The administrator user can make these operations succeed by enabling write permission for the file. 
------------------------------------------------------------------------ 4.17 Minimize Logging for Data Links File System Filter (DLFF) Installation You can minimize logging for the Data Links File System Filter (DLFF) installation by changing the dlfs_cfg file. The dlfs_cfg file is passed to the strload routine to load the driver and configuration parameters. The file is located in the /usr/lpp/db2_07_01/cfg/ directory. Through a symbolic link, the file can also be found in the /etc directory. The dlfs_cfg file has the following format:

   d 'driver-name' 'vfs number' 'dlfm id' 'global message priority' 'global module priority' - 0 1

where:

   d
      The d parameter specifies that the driver is to be loaded.
   driver-name
      The full path of the driver to be loaded. For instance, the full path for DB2 Version 7 is /usr/lpp/db2_07_01/bin/dlfsdrv. The name of the driver is dlfsdrv.
   vfs number
      The vfs entry for DLFS in /etc/vfs.
   dlfm id
      The user ID of the DataLinks Manager administrator.
   global message priority
      The global message priority.
   global module priority
      The global module priority.
   0 1
      The minor numbers for creating non-clone nodes for this driver. The node names are created by appending the minor number to the cloned driver node name. No more than five minor numbers can be given (0-4).

A real-world example might look as follows:

   d /usr/lpp/db2_07_01/bin/dlfsdrv 14,208,255,-1 - 0 1

The messages that are logged depend on the settings for the global message priority and the global module priority. To minimize logging, you can change the value for the global message priority. There are four message priority values you can use:

   #define LOG_EMERGENCY    0x01
   #define LOG_TRACING      0x02
   #define LOG_ERROR        0x04
   #define LOG_TROUBLESHOOT 0x08

Most of the messages in DLFF have LOG_TROUBLESHOOT as the message priority. 
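Because these values are bit flags, a global message priority is simply the sum (bitwise OR) of the priorities you want logged. As a sketch of that arithmetic, the following shell function reproduces the values used in the configuration examples that follow; the flag names mirror the #defines above and the function itself is illustrative only, not part of any DB2 tool.

```shell
#!/bin/sh
# Compute a DLFF global message priority by OR-ing the bit flags:
# LOG_EMERGENCY=1, LOG_TRACING=2, LOG_ERROR=4, LOG_TROUBLESHOOT=8.
priority() {
  p=0
  for flag in "$@"; do
    case $flag in
      emergency)    p=$((p | 1)) ;;
      tracing)      p=$((p | 2)) ;;
      error)        p=$((p | 4)) ;;
      troubleshoot) p=$((p | 8)) ;;
    esac
  done
  echo "$p"
}

priority emergency error               # emergency + error -> 5
priority emergency error troubleshoot  # plus troubleshooting -> 13
```

The computed value (5, 13, and so on) is what goes into the third comma-separated field of the dlfs_cfg driver line.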
Here are a few alternative configuration examples:

If you require emergency messages and error messages, set the global message priority to 5 (1+4) in the dlfs_cfg configuration file:

   d /usr/lpp/db2_07_01/bin/dlfsdrv 14,208,5,-1 - 0 1

If only error messages are required, set the global message priority to 4:

   d /usr/lpp/db2_07_01/bin/dlfsdrv 14,208,4,-1 - 0 1

If you do not require logging for DLFS, set the global message priority to 0:

   d /usr/lpp/db2_07_01/bin/dlfsdrv 14,208,0,-1 - 0 1

4.17.1 Logging Messages after Installation

If you need to log emergency, error, and troubleshooting messages after installation, you must modify the dlfs_cfg file. The dlfs_cfg file is located in the /usr/lpp/db2_07_01/cfg directory. The global message priority must be set to 255 (maximum priority) or to 13 (8+4+1). Setting the priority to 13 (8+4+1) will log emergency, error, and troubleshooting information. After setting the global message priority, unmount the DLFS filter file system and reload the dlfsdrv driver to have the new priority values set at load time. After reloading the dlfsdrv driver, the DLFS filter file system must be remounted.

Note: The settings in dlfs_cfg remain in effect for any subsequent loading of the dlfsdrv driver until the dlfs_cfg file is changed again.

------------------------------------------------------------------------ 4.18 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets

Before uninstalling DB2 (Version 5, 6, or 7) from an AIX machine on which the Data Links Manager is installed, follow these steps:

1. As root, make a copy of /etc/vfs using the command:
      cp -p /etc/vfs /etc/vfs.bak
2. Uninstall DB2.
3. As root, replace /etc/vfs with the backup copy made in step 1:
      cp -p /etc/vfs.bak /etc/vfs

------------------------------------------------------------------------ 4.19 Before You Begin/Determine hostname

You must determine the names of each of your DB2 servers and Data Links servers.
You will need to know these hostnames to verify the installation.

When connecting to a DB2 Data Links File Manager, the DB2 UDB server internally sends the following information to the DLFM:

* Database name
* Instance name
* Hostname

The DLFM then compares this information with its internal tables to determine whether the connection should be allowed. It will allow the connection only if this combination of database name, instance name, and hostname has been registered with it, using the dlfm add_db command. The hostname that is used in the dlfm add_db command must exactly match the hostname that is internally sent by the DB2 UDB server. Use the exact hostname that is obtained as follows:

1. Enter the hostname command on your DB2 server. For example, this command might return db2server.
2. Depending on your platform, do one of the following:
   o On AIX, enter the host db2server command, where db2server is the name obtained in the previous step. This command should return output similar to the following:
        db2server.services.com is 9.11.302.341, Aliases: db2server
   o On Windows NT, enter the nslookup db2server command, where db2server is the name obtained in the previous step. This command should return output similar to the following:
        Server: dnsserv.services.com
        Address: 9.21.14.135
        Name: db2server.services.com
        Address: 9.21.51.178
   o On Solaris, enter cat /etc/hosts | grep 'hostname'. This should return output similar to the following if the hostname is specified without a domain name in /etc/hosts:
        9.112.98.167 db2server loghost
     If the hostname is specified with a domain name, the command returns output similar to the following:
        9.112.98.167 db2server.services.com loghost

Use db2server.services.com for the hostname when registering a DB2 UDB database using the dlfm add_db command. The DB2 server's internal connections to the DLFM will fail if any other aliases are used in the dlfm add_db command.
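As a minimal sketch of picking out the name to register with dlfm add_db from an /etc/hosts-style entry (the sample entry below is invented for illustration):

```shell
# Sample /etc/hosts entry (invented); the second field is the fully
# qualified name that must be passed to "dlfm add_db".
hosts_entry="9.112.98.167 db2server.services.com loghost"

# Extract the registered hostname: field 2 of the entry.
dlfm_hostname=$(echo "$hosts_entry" | awk '{print $2}')
echo "$dlfm_hostname"
```

Any alias other than the name extracted this way will cause the DB2 server's internal connections to the DLFM to fail.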
A Data Links server is registered to a DB2 database using the DB2 "add datalinks manager for database database_alias using node hostname port port_number" command.

The hostname is the name of the Data Links server. Any valid alias of the Data Links server can be used in this command. DATALINK values that are references to this Data Links server must specify the hostname in the URL value; that is, the exact name that was used in the "add datalinks manager" command must be used when assigning URL values to DATALINK columns. Using a different alias will cause the SQL statement to fail.

------------------------------------------------------------------------ 4.20 Working with the Data Links File Manager: Cleaning up After Dropping a DB2 Data Links Manager from a DB2 Database

When a DB2 Data Links Manager is dropped from a database using the DROP DATALINKS MANAGER command, the command itself does not clean up the corresponding information on the DB2 Data Links Manager. Users can explicitly initiate unlinking of any files linked to the database and garbage collection of backup information. This can be done using the dlfm drop_dlm command. This command initiates asynchronous deletion of all information for a particular database. The DB2 Data Links Manager must be running for this command to be successful. It is extremely important that this command be used only after dropping a DB2 Data Links Manager; otherwise, important information about the DB2 Data Links Manager will be lost and cannot be recovered.

To initiate unlink processing and garbage collection of backup information for a particular database:

1. Log on to the system as the DB2 Data Links Manager Administrator.
2. Issue the following command:
      dlfm drop_dlm database instance hostname
   where:
      database is the name of the remote DB2 UDB database;
      instance is the instance under which the database resides; and
      hostname is the host name of the DB2 UDB server on which the database resides.
3. Log off.
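The cleanup procedure above can be sketched as a small dry-run helper; the database, instance, and host names below are hypothetical, and the real command must be issued while logged on as the DB2 Data Links Manager Administrator:

```shell
# Dry-run sketch: print the dlfm command for the cleanup procedure above.
# (Echoed rather than executed, since dlfm ships with Data Links Manager.)
drop_dlm_cleanup() {
  database=$1; instance=$2; hostname=$3
  echo "dlfm drop_dlm $database $instance $hostname"
}

# Hypothetical names for illustration:
drop_dlm_cleanup SAMPLEDB db2inst1 db2server.services.com
```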
For a complete usage scenario that shows the context in which this command should be used, see the Command Reference. A new error code has been created for this command (see 4.21, DLFM1001E (New Error Message)). ------------------------------------------------------------------------ 4.21 DLFM1001E (New Error Message) DLFM1001E: Error in drop_dlm processing. Cause: The Data Links Manager was unable to initiate unlink and garbage collection processing for the specified database. This can happen because of any of the following reasons: * The Data Links Manager is not running. * An invalid combination of database, instance, and hostname was specified in the command. * There was a failure in one of the component services of the Data Links Manager. Action: Perform the following steps: 1. Ensure that the Data Links Manager is running. Start the Data Links Manager if it is not already running. 2. Ensure that the combination of database, instance, and hostname identifies a registered database. You can do this using the "dlfm list registered databases" command on the Data Links Manager. 3. If an error still occurs, refer to information in the db2diag.log file to see if any component services (for example, the Connection Management Service, the Transaction Management Service, and so on) have failed. Note the error code in db2diag.log, and take the appropriate actions suggested under that error code. ------------------------------------------------------------------------ 4.22 DLFM Setup Configuration File Option The dlfm setup dlfm.cfg option has been removed. Any references to it in the documentation should be ignored. ------------------------------------------------------------------------ 4.23 Error when Running Data Links/DFS Script dmapp_prestart on AIX If the command /usr/sbin/cfgdmepi -a "/usr/lib/drivers/dmlfs.ext" fails with a return code of 1 when you run the Data Links/DFS script dmapp_prestart, install DFS 3.1 ptfset1 to fix the cfgdmepi. 
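The failure condition in 4.23 can be sketched as follows; the cfgdmepi function here is a stub standing in for /usr/sbin/cfgdmepi (which exists only on AIX systems with DFS), used to show how a return code of 1 from dmapp_prestart signals that the fix is needed:

```shell
# Stub simulating the failing /usr/sbin/cfgdmepi call described above.
cfgdmepi() { return 1; }

# A return code of 1 means DFS 3.1 ptfset1 must be installed to fix cfgdmepi.
if ! cfgdmepi -a "/usr/lib/drivers/dmlfs.ext"; then
  echo "cfgdmepi failed: install DFS 3.1 ptfset1"
fi
```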
------------------------------------------------------------------------ 4.24 Tivoli Space Manager Integration with Data Links

DB2 Data Links Manager can now take advantage of the functionality of Tivoli Space Manager. The Tivoli Space Manager Hierarchical Storage Manager (HSM) client program automatically migrates eligible files to storage to maintain specific levels of free space on local file systems. It automatically recalls migrated files when they are accessed, and permits users to migrate and recall specific files.

This new feature benefits customers who have file systems with large files that must be moved to tertiary storage periodically, and whose file system space must be managed on a regular basis. For many customers, Tivoli Space Manager currently provides the means to manage their tertiary storage. The new DB2 Data Links Manager support of Tivoli Space Manager provides greater flexibility in managing the space for DATALINK files. Rather than pre-allocating enough storage in the DB2 Data Links Manager file system for all files which may be stored there, Tivoli Space Manager allows allocations of the Data Links-managed file system to be adjusted over a period of time without the risk of inadvertently filling up the file system during normal usage.

Adding both Data Links and HSM support to a file system

When registering a file system with Hierarchical Storage Management (HSM), register it with HSM first and then with the DataLinks File Manager.
1. Register with HSM, using the command "dsmmigfs add /fs".
2. Register with DLM, using the command "dlfmfsmd /fs".

Data Links support for a file system is reflected in the stanza in /etc/filesystems for an HSM file system via the following entries:

   vfs      = dlfs
   mount    = false
   options  = rw,Basefs=fsm
   nodename = -

Adding Data Links support to an existing HSM file system

Register with DLM, using the command "dlfmfsmd /fs".

Adding HSM support to an existing Data Links file system

1.
Register with HSM, using the command "dsmmigfs add /fs".
2. Register with DLM, using the command "dlfmfsmd /fs".

Removing Data Links support from a Data Links-HSM file system

Remove Data Links support, using the command "dlfmfsmd -j /fs".

Removing HSM support from a Data Links-HSM file system

1. Remove HSM support, using the command "dsmmigfs remove /fs".
2. Remove Data Links support, using the command "dlfmfsmd -j /fs".
3. Register with DLM, using the command "dlfmfsmd /fs".

Removing both Data Links and HSM support from a Data Links-HSM file system

1. Remove HSM support, using the command "dsmmigfs remove /fs".
2. Remove Data Links support, using the command "dlfmfsmd -j /fs".

4.24.1 Restrictions and Limitations

This function is currently supported on AIX only.

Selective migration (dsmmigrate) and recall of an FC (Read permission DB) linked file should be done by a root user only. Selective migration can be performed only by the file owner, which in the case of Read Permission DB files is the DataLinks Manager Administrator (dlfm). To access such files, a token is required from the host database side. The only user who does not require a token is the "root" user, so it is easier for the "root" user to perform the selective migrate and recall on Read Permission DB files. The dlfm user can migrate an FC file using a valid token only the first time. The second time migration is attempted (after a recall), the operation will fail with the error message "ANS1028S Internal program error. Please see your service representative." Running dsmmigrate on an FC file as a non-root user will not succeed. This limitation is minor, as it is typically administrators who will access the files on the file server.

The stat and statfs system calls show the Vfs-type as fsm rather than dlfs, although dlfs is mounted over fsm. This behavior supports the normal functioning of the dsmrecalld daemon, which performs statfs on the file system to check whether its Vfs-type is fsm.
The dsmls command does not show any output if a file having the minimum inode number is FC (Read permission DB) linked. The dsmls command is similar to the ls command and lists the files being administered by TSM. No user action is required.

------------------------------------------------------------------------ 4.25 Chapter 4. Installing and Configuring DB2 Data Links Manager for AIX

4.25.1 Common Installation Considerations

4.25.1.1 Migrating from DB2 File Manager Version 5.2 to DB2 Data Links Manager Version 7

The information in step 3 is incorrect. Step 3 should read as follows:

"3. As DLFM administrator, run the /usr/lpp/db2_07_01/adm/db2dlmmg command."

------------------------------------------------------------------------ Installation and Configuration Supplement ------------------------------------------------------------------------

5.1 Chapter 5. Installing DB2 Clients on UNIX Operating Systems

5.1.1 HP-UX Kernel Configuration Parameters

The recommendation for setting HP-UX kernel parameters incorrectly states that msgmnb and msgmax should be set to 65535 or higher. Both parameters must be set to exactly 65535.

------------------------------------------------------------------------ 5.2 Chapter 12. Running Your Own Applications

5.2.1 Binding Database Utilities Using the Run-Time Client

The Run-Time Client cannot be used to bind the database utilities (import, export, reorg, the command line processor) and DB2 CLI bind files to each database; you must use the DB2 Administration Client or the DB2 Application Development Client instead. You must bind these database utilities and DB2 CLI bind files to each database before they can be used with that database. In a network environment, if you are using multiple clients that run on different operating systems, or are at different versions or service levels of DB2, you must bind the utilities once for each operating system and DB2-version combination.
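A sketch of the binding step described above, shown dry-run (the database name SAMPLE is hypothetical; db2ubind.lst and db2cli.lst are the standard utility and CLI bind list files shipped in sqllib/bnd):

```shell
# Dry-run: the commands to issue, from an Administration or Application
# Development Client, once per operating system and DB2-version combination.
echo "db2 connect to SAMPLE"
for list in db2ubind.lst db2cli.lst; do   # utilities, then CLI bind files
  echo "db2 bind @$list blocking all grant public"
done
```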
5.2.2 UNIX Client Access to DB2 Using ODBC Chapter 12 ("Running Your Own Applications") states that you need to update odbcinst.ini if you install an ODBC Driver Manager with your ODBC client application or ODBC SDK. This is partially incorrect. You do not need to update odbcinst.ini if you install a Merant ODBC Driver Manager product. ------------------------------------------------------------------------ 5.3 Chapter 24. Setting Up a Federated System to Access Multiple Data Sources 5.3.1 Federated Systems A DB2 federated system is a special type of distributed database management system (DBMS). A federated system allows you to query and retrieve data located on other DBMSs, such as Oracle, Sybase, and Microsoft SQL Server. SQL statements can refer to multiple DBMSs or individual databases in a single statement. For example, you can join data located in a DB2 Universal Database table, an Oracle table, and a Sybase view. Supported DBMSs include Oracle, Sybase, Microsoft SQL Server (for AIX and Windows NT), and members of the DB2 Universal Database family (such as DB2 for OS/390, DB2 for AS/400, and DB2 for Windows). A DB2 federated system consists of a server with a DB2 instance (a database that will serve as the federated database) and one or more data sources. The federated database contains catalog entries identifying data sources and their characteristics. A data source consists of a DBMS and data. DB2 Universal Database has protocols, called wrappers, that you can use to access these data sources. Wrappers are the mechanism that federated servers use to communicate with and retrieve data from data sources. Nicknames are used to refer to tables and views located in the data sources. Applications connect to the federated database just like any other DB2 database. The wrapper that you use depends on the platform on which DB2 Universal Database is running. 
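As an illustration of such a distributed request (the nicknames and columns below are invented; they would map, via nicknames, to a DB2 table, an Oracle table, and a Sybase view):

```shell
# Hypothetical federated query joining three data sources through nicknames.
# Stored in a variable and printed, since running it requires a configured
# federated server.
federated_query='SELECT d.custno, o.balance, s.region
  FROM db2sales d, ora_accounts o, syb_regions s
 WHERE d.custno = o.custno AND d.region = s.region'
echo "$federated_query"
```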
After a federated system is set up, the information in data sources can be accessed as though it were in one large database. Users and applications send queries to one federated database, which retrieves data from the data sources. A DB2 federated system operates under some restrictions. Distributed requests are limited to read-only operations in DB2 Version 7. In addition, you cannot execute utility operations (LOAD, REORG, REORGCHK, IMPORT, RUNSTATS, and so on) against nicknames. You can, however, use a pass-through facility to submit DDL and DML statements directly to DBMSs using the SQL dialect associated with that data source. 5.3.1.1 Restriction The new wrappers in Version 7.2 (such as Oracle on Linux and Solaris, Sybase on AIX and Solaris, and Microsoft SQL Server on NT and AIX) are not available in FixPak 3; you must purchase DB2 Relational Connect Version 7.2. 5.3.2 Installing DB2 Relational Connect This section provides instructions for installing DB2 Relational Connect on the server that you will use as your federated system server. 5.3.2.1 Installing DB2 Relational Connect on Windows NT servers Before you install DB2 Relational Connect on your Windows NT federated server: * Make sure that you have either DB2 Universal Database Enterprise Edition or DB2 Universal Database Enterprise -- Extended Edition installed on the federated server. If you intend to include DB2 family databases in your distributed requests, you must have selected the Distributed Join for DB2 data sources option when you installed DB2 Universal Database. To verify that this option was implemented, check that the FEDERATED parameter is set to YES. You can check this setting by issuing the GET DATABASE MANAGER CONFIGURATION command, which displays all of the parameters and their current settings. * Make sure that you have installed the client software for the data source (such as Sybase Open Client) on the federated server. 1. 
Log on to the system with the user account that you created to perform the installation.
2. Shut down any programs that are running so that the setup program can update files as required.
3. Invoke the setup program. You can invoke the setup program either automatically or manually. If the setup program fails to start automatically, or if you want to run the setup in a different language, invoke the setup program manually.
   o To invoke the setup program automatically, insert the DB2 Relational Connect CD into the drive. The auto-run feature automatically starts the setup program. The system language is determined, and the setup program for that language is launched.
   o To invoke the setup program manually:
      a. Click Start and select the Run option.
      b. In the Open field, type the following command:
            x:\setup /i language
         where:
            x: Represents your CD-ROM drive.
            language Represents the country code for your language (for example, EN for English).
      c. Click OK. The installation launchpad opens.
4. Click Install to begin the installation process.
5. Follow the prompts in the setup program. When the program completes, DB2 Relational Connect will be installed in your install directory with your other DB2 products.

5.3.2.2 Installing DB2 Relational Connect on AIX, Linux, and Solaris Operating Environment servers

Before you install DB2 Relational Connect on your AIX, Linux, and Solaris Operating Environment federated servers:

* Make sure that you have either DB2 Universal Database Enterprise Edition or DB2 Universal Database Enterprise -- Extended Edition installed on the federated server. If you intend to include DB2 family databases in your distributed requests, you must have selected the Distributed Join for DB2 data sources option when you installed DB2 Universal Database. To verify that this option was implemented, check that the FEDERATED parameter is set to YES.
You can check this setting by issuing the GET DATABASE MANAGER CONFIGURATION command, which displays all of the parameters and their current settings.

* Make sure that you have installed the client software for the data source (such as Sybase Open Client) on the federated server.

To install DB2 Relational Connect on your AIX, Linux, and Solaris Operating Environment servers, use the db2setup utility:

1. Log in as a user with root authority.
2. Insert and mount your DB2 product CD-ROM. For information on how to mount a CD-ROM, see Quick Beginnings for AIX.
3. Change to the directory where the CD-ROM is mounted by entering the cd /cdrom command, where cdrom is the mount point of your product CD-ROM.
4. Type the ./db2setup command. After a few moments, the DB2 Setup Utility window opens.
5. Select Install. The Install DB2 V7 window opens.
6. Navigate to the DB2 Relational Connect product for your client, for example, Relational Connect for Sybase, and press the space bar to select it. An asterisk appears next to the option when it is selected.
7. Select OK. The Create DB2 Services window opens.
8. You can choose to create a DB2 instance. Select OK. The Summary Report Installation window opens. Two items are automatically installed: the distributed join for Oracle and the Product Signature for DB2 Relational Connect. The Product Signature is required for you to connect to Sybase data sources.
9. Choose Continue. A window appears to indicate that this is your final chance to stop the Relational Connect setup. Choose OK to continue with the setup. It may take a few minutes to complete the setup.
10. When a notice appears indicating that the installation completed successfully, select OK. The Summary Report window opens, indicating the success or failure of each installed option; select OK again.

When the installation is complete, DB2 Relational Connect will be installed in the directory with your other DB2 products. On AIX, this is the /usr/lpp/db2_07_01 directory.
On Solaris, this is the /opt/IBMdb2/V7.1 directory. On Linux, this is the /usr/IBMdb2/V7.1 directory. ------------------------------------------------------------------------ 5.4 Chapter 26. Accessing Oracle data sources In addition to supporting wrappers on AIX and Windows NT, DB2 Universal Database now supports the Oracle wrapper on Linux and the Solaris Operating Environment. This support is limited to Oracle Version 8. To access the wrappers for these platforms, you need to insert the V7.2 DB2 Relational Connect CD and select DB2 Relational Connect for Oracle data sources. Once you have installed DB2 Relational Connect, you can add an Oracle data source to a federated server: 1. Install and configure the Oracle client software on the DB2 federated server. 2. Set the data source environment variables by modifying the db2dj.ini file and issuing the db2set command. 3. For DB2 federated servers running on UNIX platforms, run the djxlink script to link-edit the Oracle SQL*Net or Net8 libraries to your DB2 federated server. 4. Ensure that the SQL*Net or Net8 tnsnames.ora file is updated. 5. Recycle the DB2 instance. 6. Create the wrapper. 7. Optional: Set the DB2_DJ_COMM environment variable. 8. Create a server. 9. Create a user mapping. 10. Create nicknames for tables and views. Detailed instructions for these steps, including setting the environment variables, are in Chapter 26. Setting Up a Federated System to Access Oracle Data Sources in the DB2 Installation and Configuration Supplement. 5.4.1 Documentation Errors The section, "Adding Oracle Data Sources to a Federated System" has the following errors: * A step is missing in the procedure. The correct steps are: 1. Install and configure the Oracle client software on the DB2 federated server using the documentation provided by Oracle. 2. Set data source environment variables by modifying the db2dj.ini file and issuing the db2set command. The db2set command updates the DB2 profile registry with your settings. 
Detailed instructions for setting the environment variables are in Chapter 26. Setting Up a Federated System to Access Oracle Data Sources of the DB2 Installation and Configuration Supplement.

3. For DB2 federated servers running on UNIX platforms, run the djxlink script to link-edit the Oracle SQL*Net or Net8 libraries to your DB2 federated server. Depending on your platform, the djxlink script is located in:

      /usr/lpp/db2_07_01/bin   on AIX
      /opt/IBMdb2/V7.1/bin     on Solaris
      /usr/IBMdb2/V7.1/bin     on Linux

   Run the djxlink script only after installing Oracle's client software on the DB2 federated server.

* The documentation indicates to set:

      DB2_DJ_INI = sqllib/cfg/db2dj.ini

  This is incorrect; it should be set to the following:

      DB2_DJ_INI = $INSTHOME/sqllib/cfg/db2dj.ini

------------------------------------------------------------------------ 5.5 Accessing Sybase data sources (new chapter)

Before you add Sybase data sources to a federated server, you need to install and configure the Sybase Open Client software on the DB2 federated server. See the installation procedures in the documentation that comes with the Sybase database software for specific details on how to install the Open Client software. As part of the installation, make sure that you include the catalog stored procedures and the Sybase Open Client libraries.

To set up your federated server to access data stored on Sybase data sources, you need to:

1. Install DB2 Relational Connect Version 7.2. See 5.3.2, Installing DB2 Relational Connect.
2. Add Sybase data sources to your federated server.
3. Specify the Sybase code pages.

This chapter discusses steps 2 and 3. The instructions in this chapter apply to Windows NT, AIX, and the Solaris Operating Environment. The platform-specific differences are noted where they occur.

5.5.1 Adding Sybase data sources to a federated server

To add a Sybase data source to a federated server, you need to:

1. Set the environment variables and update the profile registry.
2.
Link DB2 to Sybase client software (AIX and Solaris only).
3. Recycle the DB2 instance.
4. Create and set up an interfaces file.
5. Create the wrapper.
6. Optional: Set the DB2_DJ_COMM environment variable.
7. Create a server.
8. Optional: Set the CONNECTSTRING server option.
9. Create a user mapping.
10. Create nicknames for tables and views.

These steps are explained in detail in this section.

5.5.1.1 Step 1: Set the environment variables and update the profile registry

Set data source environment variables by modifying the db2dj.ini file and issuing the db2set command. The db2dj.ini file contains configuration information about the Sybase client software installed on your federated server. The db2set command updates the DB2 profile registry with your settings.

In a partitioned database system, you can use a single db2dj.ini file for all nodes in a particular instance, or you can use a unique db2dj.ini file for one or more nodes in a particular instance. A non-partitioned database system can have only one db2dj.ini file per instance.

To set the environment variables:

1. Edit the db2dj.ini file located in sqllib/cfg, and set the following environment variable:

      SYBASE="<sybase-path>"

   where <sybase-path> is the directory where the Sybase client is installed.

2. Update the .profile file of the DB2 instance with the Sybase environment variables. You can do this by issuing the following commands:

      export SYBASE="<sybase-path>"
      export PATH="$SYBASE/bin:$PATH"

   where <sybase-path> is the directory where the Sybase client is installed.

3. Execute the DB2 instance .profile by entering:

      . .profile

4. Issue the db2set command to update the DB2 profile registry with your changes. The syntax of the db2set command depends on your database system structure.
This step is only necessary if you are using the db2dj.ini file in any of the following database system structures:

If you are using the db2dj.ini file in a non-partitioned database system, or if you want the db2dj.ini file to apply to the current node only, issue:

   db2set DB2_DJ_INI=sqllib/cfg/db2dj.ini

If you are using the db2dj.ini file in a partitioned database system, and you want the values in the db2dj.ini file to apply to all nodes within this instance, issue:

   db2set -g DB2_DJ_INI=sqllib/cfg/db2dj.ini

If you are using the db2dj.ini file in a partitioned database system, and you want the values in the db2dj.ini file to apply to a specific node, issue:

   db2set -i INSTANCEX 3 DB2_DJ_INI=sqllib/cfg/node3.ini

where:

   INSTANCEX is the name of the instance.
   3 is the node number as listed in the db2nodes.cfg file.
   node3.ini is the modified and renamed version of the db2dj.ini file.

5.5.1.2 Step 2: Link DB2 to Sybase client software (AIX and Solaris only)

To enable access to Sybase data sources, the DB2 federated server must be link-edited to the client libraries. The link-edit process creates a wrapper for each data source with which the federated server will communicate. When you run the djxlink script, you create the wrapper library. To run the djxlink script, type:

   djxlink

5.5.1.3 Step 3: Recycle the DB2 instance

To ensure that the environment variables are set in the program, recycle the DB2 instance. When you recycle the instance, you refresh the DB2 instance so that it accepts the changes that you made. Issue the following commands to recycle the DB2 instance:

On DB2 for Windows NT servers:
   NET STOP instance_name
   NET START instance_name

On DB2 for AIX and Solaris servers:
   db2stop
   db2start

5.5.1.4 Step 4: Create and set up an interfaces file

To create and set up an interfaces file, you must create the file and make the file accessible.

1.
Use the Sybase-supplied utility to create an interfaces file that includes the data for all the Sybase Open Servers that you want to access. See the installation documentation from Sybase for more information about using this utility.

   Windows NT typically names this file sql.ini. Rename the file you just created from sql.ini to interfaces to name the file universally across all platforms. If you choose not to rename sql.ini to interfaces, you must use the IFILE parameter or the CONNECTSTRING option that is explained in step 8.

   On AIX and Solaris systems, this file is named $HOME/sqllib/interfaces.

2. Make the interfaces file accessible to DB2.

   On DB2 for Windows NT servers: Put the file in the DB2 instance's %DB2PATH% directory.

   On DB2 for AIX and Solaris servers: Put the file in the DB2 instance's $HOME/sqllib directory. Use the ln command to link to the file from the DB2 instance's $HOME/sqllib directory. For example:

      ln -s -f /home/sybase/interfaces /home/db2djinst1/sqllib

5.5.1.5 Step 5: Create the wrapper

Use the CREATE WRAPPER statement to specify the wrapper that will be used to access Sybase data sources. Wrappers are mechanisms that federated servers use to communicate with and retrieve data from data sources. DB2 includes two wrappers for Sybase: CTLIB and DBLIB. The following example shows a CREATE WRAPPER statement:

   CREATE WRAPPER CTLIB

where CTLIB is the default wrapper name used with Sybase Open Client software. The CTLIB wrapper can be used on Windows NT, AIX, and Solaris servers. You can substitute the default wrapper name with a name that you choose. However, if you do so, you must also include the LIBRARY parameter and the name of the wrapper library for your federated server in the CREATE WRAPPER statement. See the CREATE WRAPPER statement in the DB2 SQL Reference for more information about wrapper library names.
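For example, choosing a non-default wrapper name would look like the following dry-run sketch; the wrapper name mysyb is invented, and libctlib.a is the AIX CTLIB library name given in step 6:

```shell
# Dry-run sketch: a custom wrapper name must also name its wrapper library
# via the LIBRARY parameter (library name varies by platform; AIX shown).
stmt="CREATE WRAPPER mysyb LIBRARY 'libctlib.a'"
echo "db2 \"$stmt\""
```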
5.5.1.6 Step 6: Optional: Set the DB2_DJ_COMM environment variable

To improve performance when the Sybase data source is accessed, set the DB2_DJ_COMM environment variable. This variable determines whether a wrapper is loaded when the federated server initializes. Set the DB2_DJ_COMM environment variable to include the wrapper library that corresponds to the wrapper that you specified in the previous step; for example:

On DB2 for Windows NT servers:
   db2set DB2_DJ_COMM='ctlib.dll'
On DB2 for AIX servers:
   db2set DB2_DJ_COMM='libctlib.a'
On DB2 for Solaris servers:
   db2set DB2_DJ_COMM='libctlib.so'

Ensure that there are no spaces on either side of the equal sign (=). Refer to the DB2 SQL Reference for more information about wrapper library names. Refer to the Administration Guide for information about the DB2_DJ_COMM environment variable.

5.5.1.7 Step 7: Create the server

Use the CREATE SERVER statement to define each Sybase server whose data sources you want to access; for example:

   CREATE SERVER SYBSERVER TYPE SYBASE VERSION 12.0 WRAPPER CTLIB OPTIONS (NODE 'sybnode', DBNAME 'sybdb')

where:

SYBSERVER
   Is a name that you assign to the Sybase server. This name must be unique.
SYBASE
   Is the type of data source to which you are configuring access. Sybase is the only data source that is supported.
12.0
   Is the version of Sybase that you are accessing. The supported versions are 10.0, 11.0, 11.1, 11.5, 11.9, and 12.0.
CTLIB
   Is the wrapper name that you specified in the CREATE WRAPPER statement.
'sybnode'
   Is the name of the node where SYBSERVER resides. Obtain the node value from the interfaces file. This value is case-sensitive. Although the name of the node is specified as an option, it is required for Sybase data sources. See the DB2 SQL Reference for information on additional options.
'sybdb'
   Is the name of the Sybase database that you want to access.
5.5.1.8 Step 8: Optional: Set the CONNECTSTRING server option

Specify the timeout thresholds, the path and name of the interfaces file, and the packet size. Sybase Open Client uses timeout thresholds to interrupt queries and responses that run for too long a period of time. You can set these thresholds in DB2 by using the CONNECTSTRING option of the CREATE SERVER OPTION DDL statement. Use the CONNECTSTRING option to specify:

* Timeout duration for SQL queries.
* Timeout duration for login response.
* Path and name of the interfaces file.
* Packet size.

      .-;----------------------------------.
      V                                    |
   >>----+------------------------------+--+----------------------><
         +-TIMEOUT-- = --seconds--------+
         +-LOGIN_TIMEOUT-- = --seconds--+
         +-IFILE-- = --"string"---------+
         +-PACKET_SIZE-- = --bytes------+
         '-;----------------------------'

TIMEOUT
   Specifies the number of seconds for DB2 Universal Database to wait for a response from Sybase Open Client for any SQL statement. The value of seconds is a positive whole number in DB2 Universal Database's integer range. The timeout value that you specify depends on which wrapper you are using. Windows NT, AIX, and Solaris servers are all able to utilize the DBLIB wrapper. The default value for the DBLIB wrapper is 0. On Windows NT, AIX, and Solaris servers, the default value for DBLIB causes DB2 Universal Database to wait indefinitely for a response.
LOGIN_TIMEOUT
   Specifies the number of seconds for DB2 Universal Database to wait for a response from Sybase Open Client to the login request. The default values are the same as for TIMEOUT.
IFILE
   Specifies the path and name of the Sybase Open Client interfaces file. The path that is identified in string must be enclosed in double quotation marks ("). On Windows NT servers, the default is %DB2PATH%. On AIX and Solaris servers, the default value is sqllib/interfaces in the home directory of your DB2 Universal Database instance.
PACKET_SIZE
   Specifies the network packet size, in bytes, to use for the connection. If the data source does not support the specified packet size, the connection will fail. Increasing the packet size when each record is very large (for example, when inserting rows into large tables) significantly increases performance. The byte size is a numeric value. See the Sybase reference manuals for more information.

Examples:

On Windows NT servers, to set the timeout value to 60 seconds, the login timeout to 5 seconds, and the interfaces file to C:\etc\interfaces, use:

   CREATE SERVER OPTION connectstring FOR SERVER sybase1
      SETTING 'TIMEOUT=60;LOGIN_TIMEOUT=5;IFILE="C:\etc\interfaces";'

On AIX and Solaris servers, to set the timeout value to 60 seconds, the packet size to 4096 bytes, and the interfaces file to /etc/interfaces, use:

   CREATE SERVER OPTION connectstring FOR SERVER sybase1
      SETTING 'TIMEOUT=60;PACKET_SIZE=4096;IFILE="/etc/interfaces";'

5.5.1.9 Step 9: Create a user mapping

If a user ID or password on the federated server is different from a user ID or password on a Sybase data source, use the CREATE USER MAPPING statement to map the local user ID to the user ID and password defined at the Sybase data source; for example:

   CREATE USER MAPPING FOR DB2USER SERVER SYBSERVER
      OPTIONS (REMOTE_AUTHID 'sybuser', REMOTE_PASSWORD 'dayl1te')

where:

DB2USER
   Is the local user ID that you are mapping to a user ID defined at a Sybase data source.
SYBSERVER
   Is the name of the Sybase data source that you defined in the CREATE SERVER statement.
'sybuser'
   Is the user ID at the Sybase data source to which you are mapping DB2USER. This value is case sensitive.
'dayl1te'
   Is the password associated with 'sybuser'. This value is case sensitive.

See the DB2 SQL Reference for more information on additional options.

5.5.1.10 Step 10: Create nicknames for tables and views

Assign a nickname for each view or table located at your Sybase data source. You will use these nicknames when you query the Sybase data source. Sybase nicknames are case sensitive.
Enclose both the schema and table names in double quotation marks ("). The following example shows a CREATE NICKNAME statement:

   CREATE NICKNAME SYBSALES FOR SYBSERVER."salesdata"."europe"

where:

SYBSALES
   Is a unique nickname for the Sybase table or view.
SYBSERVER."salesdata"."europe"
   Is a three-part identifier that follows this format:

      data_source_name."remote_schema_name"."remote_table_name"

Repeat this step for each table or view for which you want to create nicknames. When you create the nickname, DB2 will use the connection to query the data source catalog. This query tests your connection to the data source. If the connection does not work, you receive an error message. See the DB2 SQL Reference for more information about the CREATE NICKNAME statement. For more information about nicknames in general, and to verify data type mappings, see the DB2 Administration Guide.

5.5.2 Specifying Sybase code pages

This step is necessary only when the DB2 federated server and the Sybase server are running different code pages. Data sources that are using the same code set as DB2 require no translation. The following table provides equivalent Sybase options for common National Language Support (NLS) code pages. Either your Sybase data sources must be configured to correspond to these equivalents, or the client code must be able to detect the mismatch and flag it as an error or map the data by using its own semantics. If no conversion table can be found from the source code page to the target code page, DB2 issues an error message. Refer to your Sybase documentation for more information.

Table 1.
Sybase Code Page Options

   Code page   Equivalent Sybase option
   850         cp850
   897         sjis
   819         iso_1
   912         iso_2
   1089        iso_6
   813         iso_7
   916         iso_8
   920         iso_9

------------------------------------------------------------------------

5.6 Accessing Microsoft SQL Server data sources using ODBC (new chapter)

Before you add Microsoft SQL Server data sources to a DB2 federated server, you need to install and configure the ODBC driver on the federated server. To set up your federated server to access data stored in Microsoft SQL Server data sources, you need to:

1. Install and configure the ODBC driver on the federated server. See the installation procedures in the documentation that comes with the ODBC driver for specific details on how to install the ODBC driver.

   On DB2 for Windows NT servers:
      Configure a system DSN using the ODBC Data Source Administrator.
   On DB2 for AIX servers:
      Install the threaded version of the libraries supplied by MERANT, specify the MERANT library directory as the first entry in the LIBPATH, and set up the .odbc.ini file. Create the .odbc.ini file in the home directory.

2. Install DB2 Relational Connect Version 7.2. See 5.3.2, Installing DB2 Relational Connect.

3. Add Microsoft SQL Server data sources to your federated server.

4. Specify the Microsoft SQL Server code pages.

This chapter discusses steps 3 and 4. The instructions in this chapter apply to Windows NT and AIX platforms. The platform-specific differences are noted where they occur.

5.6.1 Adding Microsoft SQL Server data sources to a federated server

After you install the ODBC driver and DB2 Relational Connect, add Microsoft SQL Server data sources to your federated server using these steps:

1. Set the environment variables (AIX only).
2. Run the shell script (AIX only).
3. Optional: Set the DB2_DJ_COMM environment variable.
4. Recycle the DB2 instance (AIX only).
5.
Create the wrapper.
6. Create the server.
7. Create a user mapping.
8. Create nicknames for the tables and views.
9. Optional: Obtain the ODBC traces.

These steps are explained in detail in the following sections.

5.6.1.1 Step 1: Set the environment variables (AIX only)

Set data source environment variables by modifying the db2dj.ini file and issuing the db2set command. The db2dj.ini file contains configuration information to connect to Microsoft SQL Server data sources. The db2set command updates the DB2 profile registry with your settings. In a partitioned database system, you can use a single db2dj.ini file for all nodes in a particular instance, or you can use a unique db2dj.ini file for one or more nodes in a particular instance. A non-partitioned database system can have only one db2dj.ini file per instance. To set the environment variables:

1. Edit the db2dj.ini file located in $HOME/sqllib/cfg/, and set the following environment variables:

      ODBCINI=$HOME/.odbc.ini
      DJX_ODBC_LIBRARY_PATH=/lib
      LIBPATH=/lib
      DB2ENVLIST=LIBPATH

2. Issue the db2set command to update the DB2 profile registry with your changes. The syntax of db2set is dependent upon your database system structure:

   * If you are using the db2dj.ini file in a non-partitioned database system, or if you are using the db2dj.ini file in a partitioned database system and you want the values in the db2dj.ini file to apply to the current node only, issue this command:

        db2set DB2_DJ_INI=/db2dj.ini

   * If you are using the db2dj.ini file in a partitioned database system and you want the values in the db2dj.ini file to apply to all nodes within this instance, issue this command:

        db2set -g DB2_DJ_INI=/db2dj.ini

   * If you are using the db2dj.ini file in a partitioned database system, and you want the values in the db2dj.ini file to apply to a specific node, issue this command:

        db2set -i INSTANCEX 3 DB2_DJ_INI=$HOME/sqllib/cfg/node3.ini

     where:

     INSTANCEX
        Is the name of the instance.
     3
        Is the node number as listed in the db2nodes.cfg file.
     node3.ini
        Is the modified and renamed version of the db2dj.ini file.

5.6.1.2 Step 2: Run the shell script (AIX only)

The djxlink.sh shell script links the client libraries to the wrapper libraries. To run the shell script:

   djxlink

5.6.1.3 Step 3: Optional: Set the DB2_DJ_COMM environment variable

If you find it takes an inordinate amount of time to access the Microsoft SQL Server data source, you can improve the performance by setting the DB2_DJ_COMM environment variable to load the wrapper when the federated server initializes, rather than when you attempt to access the data source. Set the DB2_DJ_COMM environment variable to include the wrapper library that corresponds to the wrapper that you specified in Step 5. For example:

On DB2 for Windows NT servers:
   db2set DB2_DJ_COMM=djxmssql3.dll
On DB2 for AIX servers:
   db2set DB2_DJ_COMM=libmssql3.a

Ensure that there are no spaces on either side of the equal sign (=). See the DB2 SQL Reference for more information about wrapper library names.

5.6.1.4 Step 4: Recycle the DB2 instance (AIX only)

To ensure that the environment variables are set in the program, recycle the DB2 instance. When you recycle the instance, you refresh the DB2 instance to accept the changes that you made. Recycle the DB2 instance by issuing the following commands:

   db2stop
   db2start

5.6.1.5 Step 5: Create the wrapper

DB2 Universal Database has two different protocols, called wrappers, that you can use to access Microsoft SQL Server data sources. Wrappers are the mechanism that federated servers use to communicate with and retrieve data from data sources. The wrapper that you use depends on the platform on which DB2 Universal Database is running. Use Table 2 as a guide to selecting the appropriate wrapper.

Table 2.
ODBC drivers

   ODBC driver                                 Platform     Wrapper name
   ODBC 3.0 (or higher) driver                 Windows NT   DJXMSSQL3
   MERANT DataDirect Connect ODBC 3.6 driver   AIX          MSSQLODBC3

Use the CREATE WRAPPER statement to specify the wrapper that will be used to access Microsoft SQL Server data sources. The following example shows a CREATE WRAPPER statement:

   CREATE WRAPPER DJXMSSQL3

where DJXMSSQL3 is the default wrapper name used on a DB2 for Windows NT server (using the ODBC 3.0 driver). If you have a DB2 for AIX server, you would specify the MSSQLODBC3 wrapper name. You can substitute the default wrapper name with a name that you choose. However, if you do so, you must include the LIBRARY parameter and the name of the wrapper library for your federated server platform in the CREATE WRAPPER statement. For example:

On DB2 for Windows NT servers:

   CREATE WRAPPER wrapper_name LIBRARY 'djxmssql3.dll'

where wrapper_name is the name that you want to give the wrapper, and 'djxmssql3.dll' is the library name.

On DB2 for AIX servers:

   CREATE WRAPPER wrapper_name LIBRARY 'libmssql3.a'

where wrapper_name is the name that you want to give the wrapper, and 'libmssql3.a' is the library name.

See the CREATE WRAPPER statement in the DB2 SQL Reference for more information about wrapper library names.

5.6.1.6 Step 6: Create the server

Use the CREATE SERVER statement to define each Microsoft SQL Server data source to which you want to connect. For example:

   CREATE SERVER sqlserver TYPE MSSQLSERVER VERSION 7.0 WRAPPER djxmssql3
      OPTIONS (NODE 'sqlnode', DBNAME 'database_name')

where:

sqlserver
   Is a name that you assign to the Microsoft SQL Server server. This name must be unique.
MSSQLSERVER
   Is the type of data source to which you are configuring access.
7.0
   Is the version of Microsoft SQL Server that you are accessing. DB2 Universal Database supports versions 6.5 and 7.0 of Microsoft SQL Server.
DJXMSSQL3
   Is the wrapper name that you defined in the CREATE WRAPPER statement.
'sqlnode'
   Is the system DSN name that references the version of Microsoft SQL Server that you are accessing. This value is case sensitive. Although the name of the node (system DSN name) is specified as an option in the CREATE SERVER statement, it is required for Microsoft SQL Server data sources. See the DB2 SQL Reference for additional options that you can use with the CREATE SERVER statement.
'database_name'
   Is the name of the database to which you are connecting. Although the name of the database is specified as an option in the CREATE SERVER statement, it is required for Microsoft SQL Server data sources.

5.6.1.7 Step 7: Create a user mapping

If a user ID or password at the federated server is different from a user ID or password at a Microsoft SQL Server data source, use the CREATE USER MAPPING statement to map the local user ID to the user ID and password defined at the Microsoft SQL Server data source; for example:

   CREATE USER MAPPING FOR db2user SERVER server_name
      OPTIONS (REMOTE_AUTHID 'mssqluser', REMOTE_PASSWORD 'dayl1te')

where:

db2user
   Is the local user ID that you are mapping to a user ID defined at the Microsoft SQL Server data source.
server_name
   Is the name of the server that you defined in the CREATE SERVER statement.
'mssqluser'
   Is the user ID at the Microsoft SQL Server data source to which you are mapping db2user. This value is case sensitive.
'dayl1te'
   Is the password associated with 'mssqluser'. This value is case sensitive.

See the DB2 SQL Reference for additional options that you can use with the CREATE USER MAPPING statement.

5.6.1.8 Step 8: Create nicknames for tables and views

Assign a nickname for each view or table located in your Microsoft SQL Server data source that you want to access. You will use these nicknames when you query the Microsoft SQL Server data source. Use the CREATE NICKNAME statement to assign a nickname.
Nicknames are case sensitive. The following example shows a CREATE NICKNAME statement:

   CREATE NICKNAME mssqlsales FOR server_name.salesdata.europe

where:

mssqlsales
   Is a unique nickname for the Microsoft SQL Server table or view.
server_name.salesdata.europe
   Is a three-part identifier that follows this format:

      data_source_server_name.remote_schema_name.remote_table_name

   Double quotation marks are recommended for the remote_schema_name and remote_table_name portions of the nickname.

When you create a nickname, DB2 attempts to access the data source catalog tables (Microsoft SQL Server refers to these as system tables). This tests the connection to the data source. If the connection fails, you receive an error message. Repeat this step for all database tables and views for which you want to create nicknames. For more information about the CREATE NICKNAME statement, see the DB2 SQL Reference. For more information about nicknames in general, and to verify data type mappings, see the DB2 Administration Guide.

5.6.1.9 Step 9: Optional: Obtain ODBC traces

If you are experiencing problems when accessing the data source, you can obtain ODBC tracing information to analyze and resolve these problems. To ensure that ODBC tracing works properly, use the trace tool provided by the ODBC Data Source Administrator. Activating tracing impacts your system performance; therefore, you should turn off tracing once you have resolved the problems.

5.6.2 Reviewing Microsoft SQL Server code pages

Microsoft SQL Server supports many of the common National Language Support (NLS) code page options that DB2 UDB supports. Data sources that are using the same code set as DB2 require no translation. Table 3 lists the code pages that are supported by both DB2 Universal Database and Microsoft SQL Server.

Table 3. DB2 UDB and Microsoft SQL Server Code Page Options

   Code page   Language supported
   1252        ISO character set
   850         Multilingual
   437         U.S. English
   874         Thai
   932         Japanese
   936         Chinese (simplified)
   949         Korean
   950         Chinese (traditional)
   1250        Central European
   1251        Cyrillic
   1253        Greek
   1254        Turkish
   1255        Hebrew
   1256        Arabic

When the DB2 federated server and the Microsoft SQL Server are running different National Language Support (NLS) code pages, either your Microsoft SQL Server data sources must be configured to correspond to these equivalents, or the client code must be able to detect the mismatch and flag it as an error or map the data by using its own semantics. If no conversion table can be found from the source code page to the target code page, DB2 issues an error message. Refer to your Microsoft SQL Server documentation for more information.

------------------------------------------------------------------------ Administration * Administration Guide: Planning o 6.1 Chapter 8. Physical Database Design + 6.1.1 Partitioning Keys o 6.2 Designing Nodegroups o 6.3 Chapter 9. Designing Distributed Databases + 6.3.1 Updating Multiple Databases o 6.4 Chapter 13. High Availability in the Windows NT Environment + 6.4.1 Need to Reboot the Machine Before Running DB2MSCS Utility o 6.5 Chapter 14. DB2 and High Availability on Sun Cluster 2.2 o 6.6 Veritas Support on Solaris o 6.7 Appendix B. Naming Rules + 6.7.1 Notes on Greater Than 8-Character User IDs and Schema Names + 6.7.2 User IDs and Passwords o 6.8 Appendix D. Incompatibilities Between Releases + 6.8.1 Windows NT DLFS Incompatible with Norton's Utilities + 6.8.2 SET CONSTRAINTS Replaced by SET INTEGRITY o 6.9 Appendix E. 
National Language Support + 6.9.1 National Language Versions of DB2 Version 7 + 6.9.1.1 Control Center and Documentation Filesets + 6.9.2 Locale Setting for the DB2 Administration Server + 6.9.3 DB2 UDB Supports the Baltic Rim Code Page (MS-1257) on Windows Platforms + 6.9.4 Deriving Code Page Values + 6.9.5 Country Code and Code Page Support + 6.9.6 Character Sets * Administration Guide: Implementation o 7.1 Adding or Extending DMS Containers (New Process) o 7.2 Chapter 1. Administering DB2 using GUI Tools o 7.3 Chapter 3. Creating a Database + 7.3.1 Creating a Table Space + 7.3.1.1 Using Raw I/O on Linux + 7.3.2 Creating a Sequence + 7.3.3 Comparing IDENTITY Columns and Sequences + 7.3.4 Creating an Index, Index Extension, or an Index Specification o 7.4 Chapter 4. Altering a Database + 7.4.1 Adding a Container to an SMS Table Space on a Partition + 7.4.2 Altering an Identity Column + 7.4.3 Altering a Sequence + 7.4.4 Dropping a Sequence + 7.4.5 Switching the State of a Table Space + 7.4.6 Modifying Containers in a DMS Table Space o 7.5 Chapter 5. Controlling Database Access + 7.5.1 Sequence Privileges + 7.5.2 Data Encryption o 7.6 Chapter 8. Recovering a Database + 7.6.1 How to Use Suspended I/O + 7.6.2 Incremental Backup and Recovery + 7.6.2.1 Restoring from Incremental Backup Images + 7.6.3 Parallel Recovery + 7.6.4 Backing Up to Named Pipes + 7.6.5 Backup from Split Image + 7.6.6 On Demand Log Archive + 7.6.7 Log Mirroring + 7.6.8 Cross Platform Backup and Restore Support on Sun Solaris and HP + 7.6.9 DB2 Data Links Manager Considerations/Backup Utility Considerations + 7.6.10 DB2 Data Links Manager Considerations/Restore and Rollforward Utility Considerations + 7.6.11 Restoring Databases from an Offline Backup without Rolling Forward + 7.6.12 Restoring Databases and Table Spaces, and Rolling Forward to the End of the Logs + 7.6.13 DB2 Data Links Manager and Recovery Interactions + 7.6.14 Detection of Situations that Require Reconciliation o 7.7 Appendix C. 
User Exit for Database Recovery o 7.8 Appendix D. Issuing Commands to Multiple Database Partition Servers o 7.9 Appendix I. High Speed Inter-node Communications + 7.9.1 Enabling DB2 to Run Using VI * Administration Guide: Performance o 8.1 Chapter 3. Application Considerations + 8.1.1 Specifying the Isolation Level + 8.1.2 Adjusting the Optimization Class + 8.1.3 Dynamic Compound Statements o 8.2 Chapter 4. Environmental Considerations + 8.2.1 Using Larger Index Keys o 8.3 Chapter 5. System Catalog Statistics + 8.3.1 Collecting and Using Distribution Statistics + 8.3.2 Rules for Updating Catalog Statistics + 8.3.3 Sub-element Statistics o 8.4 Chapter 6. Understanding the SQL Compiler + 8.4.1 Replicated Summary Tables + 8.4.2 Data Access Concepts and Optimization o 8.5 Chapter 8. Operational Performance + 8.5.1 Managing the Database Buffer Pool + 8.5.2 Managing Multiple Database Buffer Pools o 8.6 Chapter 9. Using the Governor o 8.7 Chapter 13. Configuring DB2 + 8.7.1 Sort Heap Size (sortheap) + 8.7.2 Sort Heap Threshold (sheapthres) + 8.7.3 Maximum Percent of Lock List Before Escalation (maxlocks) + 8.7.4 Configuring DB2/DB2 Data Links Manager/Data Links Access Token Expiry Interval (dl_expint) + 8.7.5 MIN_DEC_DIV_3 Database Configuration Parameter + 8.7.6 Application Control Heap Size (app_ctl_heap_sz) + 8.7.7 Database System Monitor Heap Size (mon_heap_sz) + 8.7.8 Maximum Number of Active Applications (maxappls) + 8.7.9 Recovery Range and Soft Checkpoint Interval (softmax) + 8.7.10 Track Modified Pages Enable (trackmod) + 8.7.11 Change the Database Log Path (newlogpath) + 8.7.12 Location of Log Files (logpath) + 8.7.13 Maximum Storage for Lock List (locklist) o 8.8 Appendix A. DB2 Registry and Environment Variables + 8.8.1 Table of New and Changed Registry Variables o 8.9 Appendix C. 
SQL Explain Tools * Administering Satellites Guide and Reference o 9.1 Setting up Version 7.2 DB2 Personal Edition and DB2 Workgroup Edition as Satellites + 9.1.1 Prerequisites + 9.1.1.1 Installation Considerations + 9.1.2 Configuring the Version 7.2 System for Synchronization + 9.1.3 Installing FixPak 2 or Higher on a Version 6 Enterprise Edition System + 9.1.3.1 Upgrading Version 6 DB2 Enterprise Edition for Use as the DB2 Control Server + 9.1.4 Upgrading a Version 6 Control Center and Satellite Administration Center * Command Reference o 10.1 db2batch - Benchmark Tool o 10.2 db2cap (new command) + db2cap - CLI/ODBC Static Package Binding Tool o 10.3 db2ckrst (new command) + db2ckrst - Check Incremental Restore Image Sequence o 10.4 db2gncol (new command) + db2gncol - Update Generated Column Values o 10.5 db2inidb - Initialize a Mirrored Database o 10.6 db2look - DB2 Statistics Extraction Tool o 10.7 db2updv7 - Update Database to Version 7 Current Fix Level o 10.8 New Command Line Processor Option (-x, Suppress printing of column headings) o 10.9 True Type Font Requirement for DB2 CLP o 10.10 ADD DATALINKS MANAGER o 10.11 ARCHIVE LOG (new command) + Archive Log o 10.12 BACKUP DATABASE + 10.12.1 Syntax Diagram + 10.12.2 DB2 Data Links Manager Considerations o 10.13 BIND o 10.14 CALL o 10.15 DROP DATALINKS MANAGER (new command) + DROP DATALINKS MANAGER o 10.16 EXPORT o 10.17 GET DATABASE CONFIGURATION o 10.18 GET ROUTINE (new command) + GET ROUTINE o 10.19 GET SNAPSHOT o 10.20 IMPORT o 10.21 LIST HISTORY o 10.22 LOAD o 10.23 PING (new command) + PING o 10.24 PUT ROUTINE (new command) + PUT ROUTINE o 10.25 RECONCILE o 10.26 REORGANIZE TABLE o 10.27 RESTORE DATABASE + 10.27.1 Syntax + 10.27.2 DB2 Data Links Manager Considerations o 10.28 ROLLFORWARD DATABASE o 10.29 Documentation Error in CLP Return Codes * Data Movement Utilities Guide and Reference o 11.1 Chapter 2. Import + 11.1.1 Using Import with Buffered Inserts o 11.2 Chapter 3. 
Load + 11.2.1 Pending States After a Load Operation + 11.2.2 Load Restrictions and Limitations + 11.2.3 totalfreespace File Type Modifier o 11.3 Chapter 4. AutoLoader + 11.3.1 rexecd Required to Run Autoloader When Authentication Set to YES * Replication Guide and Reference o 12.1 Replication and Non-IBM Servers o 12.2 Replication on Windows 2000 o 12.3 Known Error When Saving SQL Files o 12.4 DB2 Maintenance o 12.5 Data Difference Utility on the Web o 12.6 Chapter 3. Data replication scenario + 12.6.1 Replication Scenarios o 12.7 Chapter 5. Planning for replication + 12.7.1 Table and Column Names + 12.7.2 DATALINK Replication + 12.7.3 LOB Restrictions + 12.7.4 Planning for Replication o 12.8 Chapter 6. Setting up your replication environment + 12.8.1 Update-anywhere Prerequisite + 12.8.2 Setting Up Your Replication Environment o 12.9 Chapter 8. Problem Determination o 12.10 Chapter 9. Capture and Apply for AS/400 o 12.11 Chapter 10. Capture and Apply for OS/390 + 12.11.1 Prerequisites for DB2 DataPropagator for OS/390 + 12.11.2 UNICODE and ASCII Encoding Schemes on OS/390 + 12.11.2.1 Choosing an Encoding Scheme + 12.11.2.2 Setting Encoding Schemes o 12.12 Chapter 11. Capture and Apply for UNIX platforms + 12.12.1 Setting Environment Variables for Capture and Apply on UNIX and Windows o 12.13 Chapter 14. Table Structures o 12.14 Chapter 15. Capture and Apply Messages o 12.15 Appendix A. Starting the Capture and Apply Programs from Within an Application * System Monitor Guide and Reference o 13.1 db2ConvMonStream * Troubleshooting Guide o 14.1 Starting DB2 on Windows 95, Windows 98, and Windows ME When the User Is Not Logged On o 14.2 Chapter 2. Troubleshooting the DB2 Universal Database Server * Using DB2 Universal Database on 64-bit Platforms o 15.1 Chapter 5. Configuration + 15.1.1 LOCKLIST + 15.1.2 shmsys:shminfo_shmmax o 15.2 Chapter 6. 
Restrictions * XML Extender Administration and Programming * MQSeries o 17.1 Installation and Configuration for the DB2 MQSeries Functions + 17.1.1 Install MQSeries + 17.1.2 Install MQSeries AMI + 17.1.3 Enable DB2 MQSeries Functions o 17.2 MQSeries Messaging Styles o 17.3 Message Structure o 17.4 MQSeries Functional Overview + 17.4.1 Limitations + 17.4.2 Error Codes o 17.5 Usage Scenarios + 17.5.1 Basic Messaging + 17.5.2 Sending Messages + 17.5.3 Retrieving Messages + 17.5.4 Application-to-Application Connectivity + 17.5.4.1 Request/Reply Communications + 17.5.4.2 Publish/Subscribe o 17.6 enable_MQFunctions + enable_MQFunctions o 17.7 disable_MQFunctions + disable_MQFunctions

------------------------------------------------------------------------

Administration Guide: Planning

------------------------------------------------------------------------

6.1 Chapter 8. Physical Database Design

6.1.1 Partitioning Keys

In the "Nodegroup Design Considerations" subsection of the "Designing Nodegroups" section, the following text from the "Partitioning Keys" sub-subsection, which states points to be considered when defining partitioning keys, should be deleted only if DB2_UPDATE_PART_KEY=ON:

* You cannot update the partitioning key column value for a row in the table.
* You can only delete or insert partitioning key column values.

Note: If DB2_UPDATE_PART_KEY=OFF, then the restrictions still apply.

Note: In FixPak 3 and later, the default value will be ON.

------------------------------------------------------------------------

6.2 Designing Nodegroups

Within the section titled "Designing Nodegroups," under the subsection titled "Nodegroup Design Considerations" and its sub-subsection titled "Replicated Summary Tables," disregard the last sentence of the second paragraph:

   The REPLICATED keyword can only be specified for a summary table that is defined with the REFRESH DEFERRED option.

------------------------------------------------------------------------

6.3 Chapter 9.
Designing Distributed Databases

6.3.1 Updating Multiple Databases

In the section "Updating Multiple Databases," the list of setup steps has an inaccuracy. Step 4, which now reads as follows:

   Precompile your application program to specify a type 2 connection (that is, specify CONNECT 2 on the PRECOMPILE PROGRAM command), and one-phase commit (that is, specify SYNCPOINT ONEPHASE on the PRECOMPILE PROGRAM command), as described in the Application Development Guide.

should be changed to:

   Precompile your application program to specify a type 2 connection (that is, specify CONNECT 2 on the PRECOMPILE PROGRAM command), and two-phase commit (that is, specify SYNCPOINT TWOPHASE on the PRECOMPILE PROGRAM command), as described in the Application Development Guide.

------------------------------------------------------------------------

6.4 Chapter 13. High Availability in the Windows NT Environment

6.4.1 Need to Reboot the Machine Before Running DB2MSCS Utility

The DB2MSCS utility is used to perform the required setup to enable DB2 for failover support under the Microsoft Cluster Service environment. For the DB2MSCS utility to run successfully, the Cluster Service must be able to locate the resource DLL, db2wolf.dll, which resides under the %ProgramFiles%\SQLLIB\bin directory. The DB2 UDB Version 7 installation program sets the PATH system environment variable to point to the %ProgramFiles%\SQLLIB\bin directory. On the Windows 2000 operating system, you are not otherwise required to reboot the machine after installation. However, if you want to run the DB2MSCS utility, you must reboot the machine so that the updated PATH environment variable becomes visible to the Cluster Service.

------------------------------------------------------------------------

6.5 Chapter 14. DB2 and High Availability on Sun Cluster 2.2

DB2 Connect is supported on Sun Cluster 2.2 if:

* The protocol to the host is TCP/IP (not SNA)
* Two-phase commit is not used.
This restriction is relaxed if the user configures the SPM log to be on a shared disk (this can be done through the spm_log_path database manager configuration parameter), and the failover machine has an identical TCP/IP configuration (the same host name, IP address, and so on).

------------------------------------------------------------------------

6.6 Veritas Support on Solaris

DB2 now supports Veritas, which provides cluster support for DB2 High Availability on Solaris.

Description
   Brings online, takes offline, and monitors a DB2 UDB instance.

Entry Points
   Online
      Use db2start to bring up the instance.
   Offline
      Use db2stop to bring down the instance.
   Monitor
      Determines if the specified DB2 instance is up. Uses appropriate process monitoring and (optional) database monitoring.
   Clean
      Removes DB2 instance resources.

Attributes

   Attribute       Type      Definition
   probeDatabase   string    Database to be monitored
   instanceOwner   string    Instance owner name
   instanceHome    string    Home directory of the instance owner
   probeTable      string    Table in probeDatabase to monitor
   monitorLevel    integer   1 implies process monitoring; 2 implies database monitoring
   nodeNumber      integer   Node number of the instance to start (unset implies EE)

Type Definition

   type DB2UDB (
      static int CleanTimeout = 240
      static int MonitorTimeout = 30
      static int OfflineTimeout = 240
      static int OnlineRetryLimit = 2
      static int OnlineTimeout = 120
      static int OnlineWaitLimit = 1
      static int RestartLimit = 3
      static int ToleranceLimit = 1
      static str ArgList[] = { probeDatabase, instanceOwner, instanceHome,
                               probeTable, monitorLevel, nodeNumber }
      NameRule = resource.db2udb
      str probeDatabase
      str instanceOwner
      str instanceHome
      str probeTable
      int monitorLevel
      int nodeNumber
   )

Sample Configuration

   DB2UDB db2_resource_n0 (
      probeDatabase = sample
      probeTable = vcstable
      instanceOwner = stevera
      instanceHome = "/export/home/stevera"
      monitorLevel = 2
   )

Installation

o Create the directory /opt/VRTSvcs/bin/DB2UDB.
o Copy the files online, offline, monitor, clean, DB2UDBAgent into /opt/VRTSvcs/bin/DB2UDB and ensure that they are marked executable. o Copy the file db2udb.type.cf into /etc/VRTSvcs/conf/config. o Stop the cluster (for example, hastop -all). o Add the line include db2udb.type.cf into the file main.cf after the line include types.cf o Verify the cluster configuration is valid with /opt/VRTSvcs/bin/hacf -verify /etc/VRTSvcs/conf/config You are now ready to create the DB2 resources necessary to control DB2 instances. ------------------------------------------------------------------------ 6.7 Appendix B. Naming Rules 6.7.1 Notes on Greater Than 8-Character User IDs and Schema Names * DB2 Version 7 products on Windows 32-bit platforms support user IDs that are up to 30 characters long. However, because of native support of Windows NT and Windows 2000, the practical limit for user ID is 20 characters. * DB2 Version 7 supports non-Windows 32-bit clients connecting to Windows NT and Windows 2000 with user IDs longer than 8 characters when user ID and password are being specified explicitly. This excludes connections using Client or DCE authentication. * DCE authentication on all platforms continues to have the 8-character user ID limit. * The authid returned in the SQLCA from a successful CONNECT or ATTACH is truncated to 8 characters. The SQLWARN fields contain warnings when truncation occurs. For more information, refer to the description of the CONNECT statement in the SQL Reference. * The authid returned by the command line processor (CLP) from a successful CONNECT or ATTACH is truncated to 8 characters. An ellipsis (...) is appended to the authid to indicate truncation. * DB2 Version 7 supports schema names with length up to 30 bytes, with the following exceptions: o Tables with schema names longer than 18 bytes cannot be replicated. o User defined types (UDTs) cannot have schema names longer than 8 bytes. 
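The authid display rule described above (CLP truncation to 8 characters with an appended ellipsis) can be sketched as follows. This is an illustrative model only; the function name is mine, not a DB2 API:

```python
def display_authid(authid: str) -> str:
    # Mimic the CLP display rule described above: an authid longer than
    # 8 characters is truncated to 8 characters and an ellipsis ("...")
    # is appended to indicate the truncation.
    if len(authid) > 8:
        return authid[:8] + "..."
    return authid

# For example, a successful CONNECT as ADMINISTRATOR would display
# the authid as ADMINIST...
```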
6.7.2 User IDs and Passwords

Within the section titled "User IDs and Passwords," change the reference to "A through Z" to:

Single-byte uppercase and lowercase Latin letters (A...Z, a...z). Support for other letters and characters depends on the code page being used. See the appendix titled "National Language Support (NLS)" for more information on code page support.

------------------------------------------------------------------------

6.8 Appendix D. Incompatibilities Between Releases

6.8.1 Windows NT DLFS Incompatible with Norton Utilities

The Windows NT Data Links File System is incompatible with Norton Utilities. When a file is deleted from a drive controlled by DLFS, a kernel exception results: error 0x1E (Kernel Mode Exception Not Handled), with exception code 0xC0000005 (Access Violation). This access violation happens because the Norton Utilities driver is loaded after the DLFS filter driver. A temporary work-around is to load the DLFSD driver after the Norton Utilities driver is loaded. To do this, change the DLFSD driver startup to manual: click Start and select Settings --> Control Panel --> Devices --> DLFSD, and set it to manual. You can then create a batch file, added to the startup folder, that loads the DLFSD driver and the DLFM Service on system startup. The contents of the batch file are as follows:

net start dlfsd
net start "dlfm service"

Name this batch file start_dlfs.bat, and copy it into the C:\WINNT\Profiles\Administrator\Start Menu\Programs\Startup directory. Only the administrator has the privilege to load the DLFS filter driver and the DLFM service.

6.8.2 SET CONSTRAINTS Replaced by SET INTEGRITY

The SET CONSTRAINTS statement has been replaced by the SET INTEGRITY statement. For backwards compatibility, both statements are accepted in DB2 UDB V7.

------------------------------------------------------------------------

6.9 Appendix E.
National Language Support

6.9.1 National Language Versions of DB2 Version 7

DB2 Version 7 is available in English, French, German, Italian, Spanish, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, Traditional Chinese, Danish, Finnish, Norwegian, Swedish, Czech, Dutch, Hungarian, Polish, Turkish, Russian, Bulgarian, and Slovenian.

On UNIX-based platforms, the DB2 product messages and library can be installed in several different languages. The DB2 installation utility lays down the message catalog file sets into the most commonly used locale directory for a given platform, as shown in the following tables. Table 4 provides information for AIX, HP-UX, and Solaris. Table 5 provides information for Linux, Linux/390, SGI, and Dynix.

Table 4. AIX, HP-UX, Solaris

                AIX               HP-UX                  Solaris
Language        Locale     Code   Locale          Code   Locale      Code
                           Page                   Page               Page
French          fr_FR      819    fr_FR.iso88591  819    fr          819
                Fr_FR      850    fr_FR.roman8    1051
German          de_DE      819    de_DE.iso88591  819    de          819
                De_DE      850    de_DE.roman8    1051
Italian         it_IT      819    it_IT.iso88591  819    it          819
                It_IT      850    it_IT.roman8    1051
Spanish         es_ES      819    es_ES.iso88591  819    es          819
                Es_ES      850    es_ES.roman8    1051
Brazilian       pt_BR      819                           pt_BR       819
Portuguese
Japanese        ja_JP      954    ja_JP.eucJP     954    ja          954
                Ja_JP      932
Korean          ko_KR      970    ko_KR.eucKR     970    ko          970
Simplified      zh_CN      1383   zh_CN.hp15CN    1383   zh          1383
Chinese         Zh_CN.GBK  1386
Traditional     zh_TW      964    zh_TW.eucTW     964    zh_TW       964
Chinese         Zh_TW      950    zh_TW.big5      950    zh_TW.BIG5  950
Danish          da_DK      819    da_DK.iso88591  819    da          819
                Da_DK      850    da_DK.roman8    1051
Finnish         fi_FI      819    fi_FI.iso88591  819    fi          819
                Fi_FI      850    fi_FI.roman8    1051
Norwegian       no_NO      819    no_NO.iso88591  819    no          819
                No_NO      850    no_NO.roman8    1051
Swedish         sv_SE      819    sv_SE.iso88591  819    sv          819
                Sv_SE      850    sv_SE.roman8    1051
Czech           cs_CZ      912
Hungarian       hu_HU      912
Polish          pl_PL      912
Dutch           nl_NL      819
                Nl_NL      850
Turkish         tr_TR      920
Russian         ru_RU      915
Bulgarian       bg_BG      915    bg_BG.iso88595  915
Slovenian       sl_SI      912    sl_SI.iso88592  912    sl_SI       912

Table 5.
Linux, Linux/390, SGI, Dynix

             Linux             Linux/390         SGI               Dynix
Language     Locale      Code  Locale      Code  Locale      Code  Locale  Code
                         Page              Page              Page          Page
French       fr          819   fr          819   fr          819
German       de          819   de          819   de          819
Italian
Spanish      es          819
Brazilian
Portuguese
Japanese     ja_JP.ujis  954   ja_JP.ujis  954   ja_JP.EUC   954
Korean       ko          970   ko          970   ko_KO.euc   970
Simplified   zh          1386  zh          1386
Chinese      zh_CN.GBK         zh_CN.GBK
Traditional  zh_TW.Big5  950   zh_TW.Big5  950
Chinese
Danish
Finnish
Norwegian
Swedish
Czech
Hungarian
Polish
Dutch        nl          819
Turkish
Russian
Bulgarian
Slovenian

If your system uses the same code pages but different locale names than those provided above, you can still see the translated messages by creating a link to the appropriate message directory. For example, if your AIX machine's default locale is ja_JP.IBM-eucJP and the code page of ja_JP.IBM-eucJP is 954, you can create a link from /usr/lpp/db2_07_01/msg/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/msg/ja_JP by issuing the following command:

ln -s /usr/lpp/db2_07_01/msg/ja_JP /usr/lpp/db2_07_01/msg/ja_JP.IBM-eucJP

After the execution of this command, all DB2 messages come up in Japanese.

6.9.1.1 Control Center and Documentation Filesets

The Control Center, Control Center Help, and documentation filesets are placed in the following directories on the target workstation:

* DB2 for AIX:
  o /usr/lpp/db2_07_01/cc/%L
  o /usr/lpp/db2_07_01/java/%L
  o /usr/lpp/db2_07_01/doc/%L
  o /usr/lpp/db2_07_01/qp/%L
  o /usr/lpp/db2_07_01/spb/%L
* DB2 for HP-UX:
  o /opt/IBMdb2/V7.1/cc/%L
  o /opt/IBMdb2/V7.1/java/%L
  o /opt/IBMdb2/V7.1/doc/%L
* DB2 for Linux:
  o /usr/IBMdb2/V7.1/cc/%L
  o /usr/IBMdb2/V7.1/java/%L
  o /usr/IBMdb2/V7.1/doc/%L
* DB2 for Solaris:
  o /opt/IBMdb2/V7.1/cc/%L
  o /opt/IBMdb2/V7.1/java/%L
  o /opt/IBMdb2/V7.1/doc/%L

Control Center file sets are in the Unicode code page. Documentation and Control Center help file sets are in a browser-recognized code set.
If your system uses a different locale name than the one provided, you can still run the translated version of the Control Center and see the translated version of the help by creating links to the appropriate language directories. For example, if your AIX machine's default locale is ja_JP.IBM-eucJP, you can create links from /usr/lpp/db2_07_01/cc/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/cc/ja_JP and from /usr/lpp/db2_07_01/doc/ja_JP.IBM-eucJP to /usr/lpp/db2_07_01/doc/ja_JP by issuing the following commands:

* ln -s /usr/lpp/db2_07_01/cc/ja_JP /usr/lpp/db2_07_01/cc/ja_JP.IBM-eucJP
* ln -s /usr/lpp/db2_07_01/doc/ja_JP /usr/lpp/db2_07_01/doc/ja_JP.IBM-eucJP

After the execution of these commands, the Control Center and help text come up in Japanese.

Note: The Web Control Center is not supported running natively on Linux/390 or NUMA-Q. It can, however, be used from a client workstation to manage databases on these platforms.

6.9.2 Locale Setting for the DB2 Administration Server

Ensure that the locale of the DB2 Administration Server instance is compatible with the locale of the DB2 instance. Otherwise, the DB2 instance cannot communicate with the DB2 Administration Server. If the LANG environment variable is not set in the user profile of the DB2 Administration Server, the DB2 Administration Server is started with the default system locale. If the default system locale is not defined, the DB2 Administration Server is started with code page 819. If the DB2 instance uses one of the DBCS locales and the DB2 Administration Server is started with code page 819, the instance will not be able to communicate with the DB2 Administration Server. The locale of the DB2 Administration Server and the locale of the DB2 instance must be compatible. For example, on a Simplified Chinese Linux system, "LANG=zh_CN" should be set in the DB2 Administration Server's user profile.
6.9.3 DB2 UDB Supports the Baltic Rim Code Page (MS-1257) on Windows Platforms

DB2 UDB supports the Baltic Rim code page, MS-1257, on Windows 32-bit operating systems. This code page is used for Latvian, Lithuanian, and Estonian.

6.9.4 Deriving Code Page Values

Within the section titled "Deriving Code Page Values," change the first paragraph, from:

However, it is not necessary to set the DB2CODEPAGE registry variable, because DB2 will determine the appropriate code page value from the operating system.

to:

Normally, you do not need to set the DB2CODEPAGE registry variable because DB2 automatically derives the code page information from the operating system.

6.9.5 Country Code and Code Page Support

Within the section titled "Country Code and Code Page Support," add the following information to the table:

Code                               Country
Page  Group  Code-Set  Tr.  Code   Locale     OS   Country Name
----  -----  --------  ---  -----  ------     ---  ------------
943   D-1    IBM-943   JP   81     ja_JP.PCK  Sun  Japan

6.9.6 Character Sets

Within the section titled "Character Sets" and the subsection "Character Set for Identifiers," replace the last two sentences in the first paragraph with the following:

Use special characters #, @, and $ with care in an NLS environment because they are not included in the NLS host (EBCDIC) invariant character set. Characters from the extended character set can also be used, depending on the code page that is being used. If you are using the database in a multiple code page environment, you must ensure that all code pages support any elements from the extended character set you plan to use.
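The identifier guidance above can be turned into a simple portability check. The function name and the conservative character set below are my own illustration; DB2's actual identifier rules are code-page dependent:

```python
# A conservative "invariant" set: Latin letters, digits, and underscore.
# Deliberately excludes #, @, and $, which the text above warns are not
# in the NLS host (EBCDIC) invariant character set.
INVARIANT = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789_"
)

def portable_identifier(name: str) -> bool:
    """Return True if the identifier avoids #, @, $ and any character
    outside the conservative invariant set above."""
    return all(c in INVARIANT for c in name)
```

A multiple code page environment would additionally need to verify that every code page in use supports any extended characters an identifier contains.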
------------------------------------------------------------------------

Administration Guide: Implementation

------------------------------------------------------------------------

7.1 Adding or Extending DMS Containers (New Process)

DMS containers (both file containers and raw device containers) that are added (during table space creation or afterward) or extended are now processed in parallel through the prefetchers. To increase the parallelism of these create and resize container operations, you can increase the number of prefetchers running in the system. The only operations not done in parallel are the logging of these actions and, in the case of creating containers, the tagging of the containers.

Note: Parallelism of the CREATE TABLESPACE / ALTER TABLESPACE statements (with respect to adding new containers to an existing table space) no longer increases once the number of prefetchers equals the number of containers being added.

------------------------------------------------------------------------

7.2 Chapter 1. Administering DB2 using GUI Tools

Within the section titled "The Alert Center", remove the last two sentences in the section.

Within the section titled "Performance Monitor", remove the second bullet item from the "Define performance variables" list in the subsection "Monitoring Performance at a Point in Time." Also later in this same subsection, the last few paragraphs in the section should be rewritten as follows:

For each, a variety of performance variables can be monitored. The Performance Variable Reference Help, available from the Help menu of any Snapshot Monitor window, provides a description of all the performance variables. These variables are organized into categories.
The following categories exist:

* Instance: Agents, Connections, Sort
* Database: Lock and Deadlock, Buffer Pool and I/O, Connections, Sort, SQL Statement Activity
* Table: Table
* Table space: Buffer Pool and I/O
* Database Connections: Buffer Pool and I/O, Lock and Deadlock, Sort, SQL Statement Activity

For detailed information on how to generate snapshots, see the online help.

In this same section, remove the last sentence in the subsection titled "Action Required When an Object Appears in the Alert Center."

------------------------------------------------------------------------

7.3 Chapter 3. Creating a Database

7.3.1 Creating a Table Space

7.3.1.1 Using Raw I/O on Linux

Linux has a pool of raw device nodes that must be bound to a block device before raw I/O can be performed on it. A raw device controller acts as the central repository of raw-to-block device binding information. Binding is performed using a utility named raw, which is normally supplied by the Linux distributor.

Before you set up raw I/O on Linux, you require the following:

* one or more free IDE or SCSI disk partitions
* Linux kernel 2.4.0 or later (however, some Linux distributions offer raw I/O on 2.2 kernels)
* a raw device controller named /dev/rawctl or /dev/raw; if neither exists, create a symbolic link:

  # ln -s /dev/your_raw_dev_ctrl /dev/rawctl

* the raw utility, which is usually provided with the Linux distribution
* DB2 Version 7.1 FixPak 3 or later

Note: Among the distributions currently supporting raw I/O, the naming of raw device nodes differs:

Distribution  Raw device nodes      Raw device controller
------------  --------------------  ---------------------
RedHat 6.2    /dev/raw/raw1 to 255  /dev/rawctl
SuSE 7.0      /dev/raw1 to 63       /dev/raw

DB2 supports either of the above raw device controllers, and most other names for raw device nodes. Raw devices are not supported by DB2 on Linux/390.

To configure raw I/O on Linux, follow the steps below. In this example, the raw partition to be used is /dev/sda5.
It should not contain any valuable data.

Step 1. Calculate the number of 4096-byte pages in this partition, rounding down if necessary. For example:

# fdisk /dev/sda
Command (m for help): p

Disk /dev/sda: 255 heads, 63 sectors, 1106 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot   Start      End    Blocks  Id  System
/dev/sda1            1      523   4200997  83  Linux
/dev/sda2          524     1106  4682947+   5  Extended
/dev/sda5          524     1106   4682947  83  Linux

Command (m for help): q
#

The number of pages in /dev/sda5 is:

num_pages = floor( ((1106-524+1)*16065*512)/4096 )
num_pages = 1170736

Step 2. Bind an unused raw device node to this partition. This needs to be done every time the machine is rebooted, and requires root access. Use raw -a to see which raw device nodes are already in use:

# raw /dev/raw/raw1 /dev/sda5
/dev/raw/raw1:  bound to major 8, minor 5

Step 3. Set global read permissions on the raw device controller and the disk partition. Set global read and write permissions on the raw device:

# chmod a+r /dev/rawctl
# chmod a+r /dev/sda5
# chmod a+rw /dev/raw/raw1

Step 4. Create the table space in DB2, specifying the raw device, not the disk partition. For example:

CREATE TABLESPACE dms1
  MANAGED BY DATABASE
  USING (DEVICE '/dev/raw/raw1' 1170736)

Table spaces on raw devices are also supported for all other page sizes supported by DB2.

7.3.2 Creating a Sequence

Following the section titled "Defining an Identity Column on a New Table," add the following section, "Creating a Sequence":

A sequence is a database object that allows the automatic generation of values. Sequences are ideally suited to the task of generating unique key values. Applications can use sequences to avoid possible concurrency and performance problems resulting from the generation of a unique counter outside the database. Unlike an identity column attribute, a sequence is not tied to a particular table column, nor is it bound to a unique table column and only accessible through that table column.
A sequence can be created, or altered, so that it generates values in one of these ways: * Increment or decrement monotonically without bound * Increment or decrement monotonically to a user-defined limit and stop * Increment or decrement monotonically to a user-defined limit and cycle back to the beginning and start again The following is an example of creating a sequence object: CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 NOMAXVALUE NOCYCLE CACHE 24 In this example, the sequence is called order_seq. It will start at 1 and increase by 1 with no upper limit. There is no reason to cycle back to the beginning and restart from 1 because there is no assigned upper limit. The number associated with the CACHE parameter specifies the maximum number of sequence values that the database manager preallocates and keeps in memory. The sequence numbers generated have the following properties: * Values can be any exact numeric data type with a scale of zero. Such data types include: SMALLINT, BIGINT, INTEGER, and DECIMAL. * Consecutive values can differ by any specified integer increment. The default increment value is 1. * Counter value is recoverable. The counter value is reconstructed from logs when recovery is required. * Values can be cached to improve performance. Preallocating and storing values in the cache reduces synchronous I/O to the log when values are generated for the sequence. In the event of a system failure, all cached values that have not been committed are never used and considered lost. The value specified for CACHE is the maximum number of sequence values that could be lost. If a database that contains one or more sequences is recovered to a prior point in time, then this could cause the generation of duplicate values for some sequences. To avoid possible duplicate values, a database with sequences should not be recovered to a prior point in time. Sequences are only supported in a single node database. 
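The caching behavior described above, and why up to CACHE values can be lost after a failure, can be illustrated with a toy model. This is not DB2's implementation; the class and its internals are purely illustrative:

```python
class ToySequence:
    """A toy model of START WITH / INCREMENT BY / CACHE behavior."""

    def __init__(self, start=1, increment=1, cache=24):
        self.increment = increment
        self.cache = cache
        self.next_block = start   # first value of the next preallocation
        self.cached = []          # preallocated, in-memory values
        self.logged_high = start  # high-water mark synced to the "log"

    def nextval(self):
        if not self.cached:
            # Preallocate a block of values; only this log write
            # is synchronous, which is the performance benefit.
            self.cached = [self.next_block + i * self.increment
                           for i in range(self.cache)]
            self.next_block += self.cache * self.increment
            self.logged_high = self.next_block
        return self.cached.pop(0)

    def crash_recover(self):
        # After a failure, uncommitted cached values are never used:
        # generation resumes from the logged high-water mark, so up to
        # CACHE values are "lost".
        self.cached = []
```

With CACHE 24 and START WITH 1, two values are handed out (1 and 2); after a simulated crash, the next value generated is 25, and values 3 through 24 are lost.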
There are two expressions used with a sequence. The PREVVAL expression returns the most recently generated value for the specified sequence for a previous statement within the current session. The NEXTVAL expression returns the next value for the specified sequence. A new sequence number is generated when a NEXTVAL expression specifies the name of the sequence. However, if there are multiple instances of a NEXTVAL expression specifying the same sequence name within a query, the counter for the sequence is incremented only once for each row of the result. The same sequence number can be used as a unique key value in two separate tables by referencing the sequence number with a NEXTVAL expression for the first table, and a PREVVAL expression for any additional tables. For example: INSERT INTO order (orderno, custno) VALUES (NEXTVAL FOR order_seq, 123456); INSERT INTO line_item (orderno, partno, quantity) VALUES (PREVVAL FOR order_seq, 987654, 1) The NEXTVAL or PREVVAL expressions can be used in the following locations: * INSERT statement, VALUES clause * SELECT statement, SELECT list * SET assignment statement * UPDATE statement, SET clause * VALUES or VALUES INTO statement 7.3.3 Comparing IDENTITY Columns and Sequences Following the new section titled "Creating a Sequence", add the following section: While there are similarities between IDENTITY columns and sequences, there are also differences. The characteristics of each can be used when designing your database and applications. An identity column has the following characteristics: * An identity column can be defined as part of a table only when the table is created. Once a table is created, you cannot alter it to add an identity column. (However, existing identity column characteristics may be altered.) * An identity column automatically generates values for a single table. * When an identity column is defined as GENERATED ALWAYS, the values used are always generated by the database manager. 
Applications are not allowed to provide their own values during the modification of the contents of the table.

A sequence object has the following characteristics:

* A sequence object is a database object that is not tied to any one table.
* A sequence object generates sequential values that can be used in any SQL statement.
* Because a sequence object can be used by any application, there are two expressions used to control the retrieval of the next value in the specified sequence and the value generated previous to the statement being executed. The PREVVAL expression returns the most recently generated value for the specified sequence for a previous statement within the current session. The NEXTVAL expression returns the next value for the specified sequence. The use of these expressions allows the same value to be used across several SQL statements within several tables.

While this is not an exhaustive list of the characteristics of these two items, it will assist you in determining which to use depending on your database design and the applications using the database.

7.3.4 Creating an Index, Index Extension, or an Index Specification

Within the section titled "Creating an Index, Index Extension, or an Index Specification", add the following note to the paragraph beginning with the sentence "Any column that is part of an index key is limited to 255 bytes.":

Note: The DB2_INDEX_2BYTEVARLEN registry variable can be used to allow columns with a length greater than 255 bytes to be specified as part of an index key.

------------------------------------------------------------------------

7.4 Chapter 4. Altering a Database

Under the section "Altering a Table Space", the following new sections are to be added:

7.4.1 Adding a Container to an SMS Table Space on a Partition

You can add a container to an SMS table space on a partition (or node) that currently has no containers. The contents of the table space are rebalanced across all containers.
Access to the table space is not restricted during the rebalancing. If you need to add more than one container, you should add them all at the same time.

To add a container to an SMS table space using the command line, enter the following:

ALTER TABLESPACE tablespace-name
  ADD ('container-path')
  ON NODE (partition-number)

The partition specified by number, and every partition (or node) in the range of partitions, must exist in the nodegroup on which the table space is defined. A partition number may appear, explicitly or within a range, in exactly one on-nodes-clause for the statement.

The following example shows how to add a new container to partition number 3 of the nodegroup used by table space "plans" on a UNIX based operating system:

ALTER TABLESPACE plans
  ADD ('/dev/rhdisk0')
  ON NODE (3)

Following the section titled "Changing Table Attributes," add the following sections:

7.4.2 Altering an Identity Column

Modify the attributes of an existing identity column with the ALTER TABLE statement. For more information on this statement, including its syntax, refer to the SQL Reference. There are several ways to modify an identity column so that it has some of the characteristics of sequences. Some tasks are unique to ALTER TABLE and the identity column:

* RESTART resets the sequence associated with the identity column to the value specified, implicitly or explicitly, as the starting value when the identity column was originally created.
* RESTART WITH resets the sequence associated with the identity column to the exact numeric constant value. The numeric constant can be any positive or negative value, with no non-zero digits to the right of any decimal point, that could be assigned to the identity column.

7.4.3 Altering a Sequence

Modify the attributes of an existing sequence with the ALTER SEQUENCE statement. For more information on this statement, including its syntax, refer to the SQL Reference.
The attributes of the sequence that can be modified include: * Changing the increment between future values * Establishing new minimum or maximum values * Changing the number of cached sequence numbers * Changing whether the sequence will cycle or not * Changing whether sequence numbers must be generated in order of request * Restarting the sequence There are two tasks that are not found as part of the creation of the sequence. They are: * RESTART. Resets the sequence to the value specified implicitly or explicitly as the starting value when the sequence was created. * RESTART WITH numeric-constant. Resets the sequence to the exact numeric constant value. The numeric constant can be any positive or negative value with no non-zero digits to the right of any decimal point. After restarting a sequence or changing to CYCLE, it is possible to generate duplicate sequence numbers. Only future sequence numbers are affected by the ALTER SEQUENCE statement. The data type of a sequence cannot be changed. Instead, you must drop the current sequence and then create a new sequence specifying the new data type. All cached sequence values not used by DB2 are lost when a sequence is altered. 7.4.4 Dropping a Sequence To delete a sequence, use the DROP statement. For more information on this statement, including its syntax, refer to the SQL Reference. A specific sequence can be dropped by using: DROP SEQUENCE sequence_name where the sequence_name is the name of the sequence to be dropped and includes the implicit or explicit schema name to exactly identify an existing sequence. Sequences that are system-created for IDENTITY columns cannot be dropped using the DROP SEQUENCE statement. Once a sequence is dropped, all privileges on the sequence are also dropped. 
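The RESTART and RESTART WITH behavior described in 7.4.3 above can be illustrated with a bare counter. This toy model is my own illustration, not DB2 code:

```python
class ToyCounter:
    """A bare counter modeling a sequence's RESTART semantics."""

    def __init__(self, start=1, increment=1):
        self.start = start
        self.increment = increment
        self.value = start

    def nextval(self):
        v = self.value
        self.value += self.increment
        return v

    def restart(self, with_value=None):
        # RESTART resets to the original starting value;
        # RESTART WITH numeric-constant resets to the given value.
        # Note that either form can later produce duplicate values,
        # as the text above warns.
        self.value = self.start if with_value is None else with_value
```

For example, a counter started at 100 hands out 100 and 101; after restart() it hands out 100 again (a duplicate), and after restart(with_value=500) it hands out 500.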
7.4.5 Switching the State of a Table Space

The SWITCH ONLINE clause of the ALTER TABLESPACE statement can be used to move table spaces in an OFFLINE state to an ONLINE state if the containers associated with that table space have become accessible. The table space is moved to an ONLINE state while the rest of the database is still up and being used. An alternative to the use of this clause is to disconnect all applications from the database and then have the applications connect to the database again. This moves the table space from an OFFLINE state to an ONLINE state.

To switch the table space to an ONLINE state using the command line, enter:

ALTER TABLESPACE tablespace-name SWITCH ONLINE

7.4.6 Modifying Containers in a DMS Table Space

DMS table spaces are now created and resized in parallel, which offers a performance benefit. The degree of parallelism is equal to the number of prefetchers plus 1.

------------------------------------------------------------------------

7.5 Chapter 5. Controlling Database Access

Following the section titled "Index Privileges," add the following section:

7.5.1 Sequence Privileges

The creator of a sequence automatically receives the USAGE privilege. The USAGE privilege is needed to use the NEXTVAL and PREVVAL expressions for the sequence. To allow other users to use the NEXTVAL and PREVVAL expressions, sequence privileges must be granted to PUBLIC. This allows all users to use the expressions with the specified sequence.

Following the section titled "Monitoring Access to Data Using the Audit Facility," add the following section:

7.5.2 Data Encryption

One part of your security plan may involve encrypting your data. To do this, you can use the encryption and decryption built-in functions: ENCRYPT, DECRYPT_BIN, DECRYPT_CHAR, and GETHINT. For more information on these functions, including their syntax, refer to the SQL Reference section of the Release Notes. The ENCRYPT function encrypts data using a password-based encryption method.
These functions also allow you to encapsulate a password hint. The password hint is embedded in the encrypted data. Once encrypted, the only way to decrypt the data is by using the correct password. Developers who choose to use these functions should plan for the management of forgotten passwords and unusable data.

The result of the ENCRYPT function is the same data type as the first argument. Only VARCHARs can be encrypted. The declared length of the result is one of the following:

* The length of the data argument plus 42 when the optional hint parameter is specified.
* The length of the data argument plus 10 when the optional hint parameter is not specified.

The DECRYPT_BIN and DECRYPT_CHAR functions decrypt data using password-based decryption. The result of the DECRYPT_BIN and DECRYPT_CHAR functions is the same data type as the first argument. The declared length of the result is the length of the original data.

The GETHINT function returns an encapsulated password hint. A password hint is a phrase that helps data owners remember passwords. For example, the word "Ocean" can be used as a hint to remember the password "Pacific".

The password that is used to encrypt the data is determined in one of two ways:

* Password Argument. The password is a string that is explicitly passed when the ENCRYPT function is invoked. The data is encrypted and decrypted with the given password.
* Special Register Password. The SET ENCRYPTION PASSWORD statement encrypts the password value and sends the encrypted password to the database manager to store in a special register. ENCRYPT, DECRYPT_BIN, and DECRYPT_CHAR functions invoked without a password parameter use the value in the ENCRYPTION PASSWORD special register. The initial or default value for the special register is an empty string.

Valid lengths for passwords are between 6 and 127 inclusive. Valid lengths for hints are between 0 and 32 inclusive.
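The declared-length and validity rules above can be captured in a few helper functions. The helper names are mine; only the arithmetic and the ranges come from the text:

```python
def encrypt_result_length(data_len: int, with_hint: bool) -> int:
    """Declared result length of ENCRYPT per the rules above:
    data length + 42 when a hint is specified, + 10 when it is not."""
    return data_len + (42 if with_hint else 10)

def valid_password(pw: str) -> bool:
    # Valid password lengths are between 6 and 127 inclusive.
    return 6 <= len(pw) <= 127

def valid_hint(hint: str) -> bool:
    # Valid hint lengths are between 0 and 32 inclusive.
    return len(hint) <= 32
```

For example, encrypting the 7-character password-protected value "Pacific" with the hint "Ocean" would require a declared result length of 49 (7 + 42).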
When the ENCRYPTION PASSWORD special register is set from the client, the password is encrypted at the client, sent to the database server, and then decrypted. To ensure that the password is not left readable, it is also re-encrypted at the database server. The DECRYPT_BIN and DECRYPT_CHAR functions must decrypt the special register before use. The value found in the ENCRYPTION PASSWORD special register is likewise not left readable. Gateway security is not supported.

------------------------------------------------------------------------

7.6 Chapter 8. Recovering a Database

7.6.1 How to Use Suspended I/O

In Chapter 8, "Recovering a Database", the following new section on using the suspended I/O function is to be added:

Note: The information below about the db2inidb utility supersedes the information in the Version 7.2 What's New book.

db2inidb is a new tool shipped with DB2 that can perform crash recovery and put a database in rollforward pending state. Suspended I/O supports continuous system availability by providing a full implementation for online split mirror handling, that is, splitting a mirror without shutting down the database. If a customer cannot afford to do offline or online backups of a large database, backups or system copies can be made from a mirror image by using suspended I/O and a split mirror.

Depending on how the storage devices are being mirrored, the uses of db2inidb will vary. The following uses assume that the entire database is mirrored consistently through the storage system. In a multi-node environment, the db2inidb tool must be run on every partition before the split image can be used from any of the partitions. The db2inidb tool can be run on all partitions simultaneously.

1. Making a Clone Database

The objective here is to have a clone of the primary database to be used for read-only purposes. The following procedure describes how a clone database may be made:

a.
Suspend I/O on the primary system by entering the following command:

   db2 set write suspend for database

b. Use an operating system level command to split the mirror from the primary database.

c. Resume I/O on the primary system by entering the following command:

   db2 set write resume for database

After running this command, the database on the primary system should be back to a normal state.

d. Attach to the mirrored database from another machine.

e. Start the database instance by entering the following command:

   db2start

f. Start DB2 crash recovery by entering the following command:

   db2inidb database_name AS SNAPSHOT

Note: This command will roll back the changes made by transactions that are in flight at the time of the split. You can also use this process for an offline backup, but if restored on the primary system, this backup cannot be used to roll forward, because the log chain will not match.

2. Using the Split Mirror as a Standby Database

As the mirrored (standby) database is continually rolling forward through the logs, new logs that are being created by the primary database are constantly fetched from the primary system. The following procedure describes how the split mirror can be used as a standby database:

a. Suspend I/O writes on the primary database.
b. Split the mirror from the primary system.
c. Resume the I/O writes on the primary database so that the primary database goes back to normal processing.
d. Attach the mirrored database to another instance.
e. Place the mirror in rollforward pending state and roll the mirror forward. Run the db2inidb tool (db2inidb database_name AS STANDBY) to remove the suspended write state and to place the mirrored database in rollforward pending state.
f. Copy logs by setting up a user exit program to retrieve log files from the primary system, to ensure that the latest logs will be available for this mirrored database.
g. Roll forward the database to the end of the logs.
h.
Go back to step f and repeat this process until the primary database is down.

3. Using the Split Mirror as a Backup Image

The following procedure describes how to use the mirrored system as a backup image to restore over the primary system:

a. Use operating system commands to copy the mirrored data and logs on top of the primary system.

b. Start the database instance by entering the following command:

   db2start

c. Run the following command to place the mirrored database in a rollforward pending state and to remove the suspend write state:

   db2inidb database_alias AS MIRROR

d. Roll forward the database to the end of the logs.

7.6.2 Incremental Backup and Recovery

In Chapter 8, "Recovering a Database," the following is a new section about incremental backup and recovery:

As the size of databases, and particularly warehouses, continues to expand into the terabyte and petabyte range, the time and hardware resources required to back up and recover these databases are also growing substantially. Full database and table space backups are not always the best approach when dealing with large databases, because the storage requirements for multiple copies of such databases are enormous. Consider the following issues:

* When a small percentage of the data in a warehouse changes, it should not be necessary to back up the entire database.
* Appending table spaces to existing databases and then taking only table space backups is risky, because data outside of the backed-up table spaces may change.

DB2 now supports incremental backup and recovery (but not of long field or large object data). An incremental backup is a backup image that contains only pages that have been updated since the previous backup was taken. In addition to updated data and index pages, each incremental backup image also contains all of the initial database meta-data (such as database configuration, table space definitions, database history, and so on) that is normally stored in full backup images.
Two types of incremental backup are supported:

* Incremental. An incremental backup image is a copy of all database data that has changed since the most recent successful full backup operation. This is also known as a cumulative backup image, because a series of incremental backups taken over time will each have the contents of the previous incremental backup image. The predecessor of an incremental backup image is always the most recent successful full backup of the same object.

* Delta. A delta, or incremental delta, backup image is a copy of all database data that has changed since the last successful backup (full, incremental, or delta) of the table space in question. This is also known as a differential, or non-cumulative, backup image. The predecessor of a delta backup image is the most recent successful backup containing a copy of each of the table spaces in the delta backup image.

The key difference between incremental and delta backup images is their behavior when successive backups are taken of an object that is continually changing over time. Each successive incremental image contains the entire contents of the previous incremental image, plus any data that has changed, or is new, since the previous backup was produced. Delta backup images contain only the pages that have changed since the previous image was produced.

Combinations of database and table space incremental backups are permitted, in both online and offline modes of operation. Be careful when planning your backup strategy, because combining database and table space incremental backups implies that the predecessor of a database backup (or a table space backup of multiple table spaces) is not necessarily a single image, but could be a unique set of previous database and table space backups taken at different times.
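The cumulative versus non-cumulative behavior described above can be sketched with a small simulation (illustrative Python, not DB2 code; the page numbers stand in for modified database pages):

```python
# Pages modified on each day following a Sunday full backup.
daily_changes = [{1, 2}, {2, 3}, {4}]

# Incremental (cumulative) images: each contains every page changed
# since the last full backup, so it includes its predecessor's contents.
incrementals, changed_since_full = [], set()
for pages in daily_changes:
    changed_since_full |= pages
    incrementals.append(set(changed_since_full))

# Delta (non-cumulative) images: each contains only the pages changed
# since the most recent backup of any kind.
deltas = [set(pages) for pages in daily_changes]
```

After three days, the incremental chain grows to {1, 2}, {1, 2, 3}, {1, 2, 3, 4}, while the delta images stay at {1, 2}, {2, 3}, {4}; restoring from deltas therefore needs every image in the chain, while the latest incremental alone carries all changes since the full backup.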
To rebuild the database or the table space to a consistent state, the recovery process must begin with a consistent image of the entire object (database or table space) to be restored, and must then apply each of the appropriate incremental backup images in the order described below (see "7.6.2.1 Restoring from Incremental Backup Images").

To enable the tracking of database updates, DB2 supports a new database configuration parameter, TRACKMOD, which can have one of two accepted values:

* NO. Incremental backup is not permitted with this configuration. Database page updates are not tracked or recorded in any way.

* YES. Incremental backup is permitted with this configuration. When update tracking is enabled, the change becomes effective at the first successful connection to any database in the instance. A full database backup is necessary before an incremental backup can be taken.

The default TRACKMOD setting for existing databases is NO; for new databases, it is YES. The granularity of the tracking is at the table space level for both SMS and DMS table spaces. Although minimal, the tracking of updates to the database can have an impact on the run-time performance of transactions that update or insert data.

7.6.2.1 Restoring from Incremental Backup Images

A restore operation from incremental backup images always consists of the following steps:

1. Identifying the incremental target image. The DBA must first determine the final image to be restored, and request an incremental restore operation from the DB2 restore utility. This image is known as the target image of the incremental restore, because it will be the last image to be restored. An incremental restore command against this image may initiate the creation of a new database with the configuration and table space definitions from this target image. The incremental target image is specified using the TAKEN AT parameter in the RESTORE DATABASE command.

2.
Restoring the most recent full database or table space image to establish a baseline against which each of the subsequent incremental backup images can be applied.

3. Restoring each of the required full or table space incremental backup images, in the order in which they were produced, on top of the baseline image restored in Step 2.

4. Repeating Step 3 until the target image from Step 1 is read a second time.

The target image is accessed twice during a complete incremental restore operation. During the first access, only initial data is read from the image; none of the user data is read. The complete image is read and processed only during the second access. The target image of the incremental restore operation must be accessed twice to ensure that the database is initially configured with the correct history, database configuration, and table space definitions for the database that will be created during the restore operation. In cases where a table space has been dropped since the initial full database backup image was taken, the table space data for that image will be read from the backup images but ignored during incremental restore processing.

For example:

1. db2 restore database sample incremental taken at <ts>

   where <ts> points to the last incremental backup image to be restored

2. db2 restore database sample incremental taken at <ts1>

   where <ts1> points to the initial full database (or table space) image

3. db2 restore database sample incremental taken at <tsX>

   where <tsX> points to each incremental backup image in creation sequence

4. Repeat Step 3, restoring each incremental backup image up to and including image <ts>

In cases where a database restore operation is being attempted, and table space incremental backup images have been produced, the table space images must be restored in the chronological order of their backup time stamps.

7.6.3 Parallel Recovery

DB2 now uses multiple agents to perform both crash recovery and database rollforward recovery.
You can expect better performance during these operations, particularly on symmetric multi-processor (SMP) machines; using multiple agents during database recovery takes advantage of the extra CPUs that are available on SMP machines. The new agent type introduced by this enhancement is db2agnsc. DB2 chooses the number of agents to be used for database recovery based on the number of CPUs on the machine. For SMP machines, the number of agents used is (number of CPUs + 1). On a machine with a single CPU, three agents are used for more efficient reading of logs, processing of log records, and prefetching of data pages. DB2 distributes log records to these agents so that they can be reapplied concurrently, where appropriate. The processing of log records is parallelized at the page level (log records on the same data page are processed by the same agent); therefore, performance is enhanced, even if all the work was done on one table.

7.6.4 Backing Up to Named Pipes

Support is now available for database backup to (and database restore from) local named pipes on UNIX based systems. Both the writer and the reader of the named pipe must be on the same machine. The pipe must exist and be located on a local file system. Because the named pipe is treated as a local device, there is no need to specify that the target is a named pipe. Following is an AIX example:

1. Create a named pipe:

   mkfifo /u/dbuser/mypipe

2. Use this pipe as the target for a database backup operation:

   db2 backup db sample to /u/dbuser/mypipe

3. Restore the database:

   db2 restore db sample into mynewdb from /u/dbuser/mypipe

7.6.5 Backup from Split Image

DB2 now supports a full offline database backup on the split mirrored copy of a database. Online backup is not supported and is not necessary because the database, which is in rollforward pending state, is unavailable.
When a split mirrored backup image is restored, it must be rolled forward, because there may have been active transactions when the split occurred.

Note: For DB2 Version 7.1 FixPak 3 and DB2 Version 7.2, this support is limited to databases that contain only DMS table spaces. If an attempt is made to back up a database after a split and the database contains any SMS table spaces, the backup will fail.

Once a database has been split, the db2inidb utility must be used to specify one of the following options:

* Snapshot. This initiates crash recovery, making the database consistent. A new log chain starts, and the database will not be able to roll forward through any of the logs from the original database. The database is available for any operation, including backup.

* Standby. This places the database in rollforward pending state. Crash recovery is not performed, and the database remains inconsistent.

* Mirror. This causes a mirrored copy of the database to replace the original database. The database is placed in rollforward pending state, and the WRITE SUSPEND state is turned off. Crash recovery is not performed, and the database remains inconsistent.

Following are some usage scenarios:

* Making a database clone. The objective here is to have a read-only clone of the primary database that can be used, for example, to create reports. To do this, follow these steps:

  1. Suspend I/O on the primary system:

     db2 set write suspend for database

  2. Split the mirror. Use operating system level commands to split the mirror from the primary database.

  3. Resume I/O on the primary system:

     db2 set write resume for database

     The database on the primary system should now be back to a normal state.

  4. Mount the split mirrors of the database to another host.

  5. Start the instance:

     db2start

  6.
Start DB2 crash recovery:

     db2inidb database_alias as snapshot

  You can also use this process for an offline backup, but if restored on the primary system, this backup cannot be used to roll forward, because the log chain will not match.

* Using the split mirror as a standby database. The idea here is that the mirrored (standby) database is continually rolling forward through the logs, and even new logs that are being created by the primary database are continually fetched from the primary system. To use the split mirror as a standby database, follow these steps:

  1. Suspend I/O on the primary system:

     db2 set write suspend for database

  2. Split the mirror. Use operating system level commands to split the mirror from the primary database.

  3. Resume I/O on the primary system:

     db2 set write resume for database

     The database on the primary system should now be back to a normal state.

  4. Mount the split mirrors of the database to another host.

  5. Remove the suspend write state and put the mirrored database in rollforward pending state:

     db2inidb database_alias as standby

  6. Copy logs. Set up a user exit program to retrieve log files from the primary system's archive location, so that the latest logs will be available for this mirrored database.

  7. Roll forward the mirror to the end of the logs:

     db2 rollforward db database_alias to end of logs

  8. Repeat the process from step 6 until the primary database is down.

* Using the split mirror to recover the primary system. The following procedure describes how to use the mirrored system as a backup image to restore the primary system:

  1. Copy over. Use operating system commands to copy the mirrored data and logs on top of the primary system.

  2. Start the instance:

     db2start

  3. Put the restored mirror in rollforward pending state and roll the mirror forward to the end of the logs:

     db2inidb database_alias as mirror

* Taking a backup without performing crash recovery.
Performing an offline backup on the split mirror without performing crash recovery means that you can restore this backup image on top of the primary system. To do this, follow these steps:

  1. Suspend I/O on the primary system:

     db2 set write suspend for database

  2. Split the mirror. Use operating system level commands to split the mirror from the primary database.

  3. Resume I/O on the primary system:

     db2 set write resume for database

     The database on the primary system should now be back to a normal state.

  4. Mount the split mirrors of the database to another host.

  5. Start the instance:

     db2start

  6. Put the mirrored database in rollforward pending state:

     db2inidb database_alias as standby

  7. Invoke a database backup operation:

     db2 backup database

  This results in an implicit database connection, but does not initiate DB2 crash recovery.

7.6.6 On Demand Log Archive

DB2 now supports the closing (and, if the user exit option is enabled, the archiving) of the active log for a recoverable database at any time. This allows you to collect a complete set of log files up to a known point, and then to use these log files to update a standby database.

Note: On demand log archiving does not guarantee that the log files will be archived immediately; it truncates the log file and issues an archive request, but the request is still subject to delays associated with the user exit program.

You can initiate on demand log archiving by invoking the new DB2 ARCHIVE LOG command, or by calling the new db2ArchiveLog API.

7.6.7 Log Mirroring

In Chapter 8, "Recovering a Database," the following new section on log mirroring is to be added:

DB2 now supports log mirroring at the database level.
Mirroring log files helps protect a database from:

* Accidental deletion of an active log
* Data corruption caused by hardware failure

If you are concerned that your active logs may be damaged (as a result of a disk crash), you should consider using a new DB2 registry variable, DB2_NEWLOGPATH2, to specify a secondary path for the database to manage copies of the active log, mirroring the volumes on which the logs are stored. The DB2_NEWLOGPATH2 registry variable allows the database to write an identical second copy of log files to a different path. It is recommended that you place the secondary log path on a physically separate disk (preferably one that is also on a different disk controller). That way, the disk controller cannot be a single point of failure.

Note: Because Windows NT and OS/2 do not allow "mounting" a device under an arbitrary path name, it is not possible (on these platforms) to specify a secondary path on a separate device.

DB2_NEWLOGPATH2 can be enabled (set to 1) or disabled (set to 0). The default value is zero. If this variable is set to 1, the secondary path name is the current value of the LOGPATH variable concatenated with the character 2. For example, in an SMP environment, if LOGPATH is /u/dbuser/sqllogdir/logpath, the secondary log path will be /u/dbuser/sqllogdir/logpath2. In an MPP environment, if LOGPATH is /u/dbuser/sqllogdir/logpath, DB2 will append the node indicator to the path and use /u/dbuser/sqllogdir/logpath/NODE0000 as the primary log path. In this case, the secondary log path will be /u/dbuser/sqllogdir/logpath2/NODE0000.

When DB2_NEWLOGPATH2 is first enabled, it will not actually be used until the current log file is completed on the next database startup. This is similar to how NEWLOGPATH is currently used.
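The path derivation described above can be sketched as follows (illustrative Python assuming UNIX-style paths; log_paths is a hypothetical helper, not a DB2 API):

```python
def log_paths(logpath, node=None):
    # With DB2_NEWLOGPATH2=1, the secondary path is LOGPATH with the
    # character "2" appended. In an MPP environment, the node indicator
    # (for example, NODE0000) follows that suffix on both paths.
    if node is None:                      # SMP environment
        return logpath, logpath + "2"
    node_dir = "NODE%04d" % node          # MPP environment
    return logpath + "/" + node_dir, logpath + "2/" + node_dir
```

Calling log_paths("/u/dbuser/sqllogdir/logpath") reproduces the SMP pair from the example above, and passing node=0 reproduces the MPP pair.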
If there is an error writing to either the primary or secondary log path, the database will mark the failing path as "bad", write a message to the db2diag.log file, and write subsequent log records to the remaining "good" log path only. DB2 will not attempt to use the "bad" path again until the current log file is completed. When DB2 needs to open the next log file, it will verify that this path is valid, and if so, will begin to use it. If not, DB2 will not attempt to use the path again until the next log file is accessed for the first time. There is no attempt to synchronize the log paths, but DB2 keeps information about access errors that occur, so that the correct paths are used when log files are archived. If a failure occurs while writing to the remaining "good" path, the database abends.

7.6.8 Cross Platform Backup and Restore Support on Sun Solaris and HP

Support is now available for cross platform backup and restore between Sun Solaris and HP. When you transfer the backup image between systems, you must transfer it in binary mode. On the target system, the database must be created with the same code page/territory as the system on which the original database was created.

7.6.9 DB2 Data Links Manager Considerations/Backup Utility Considerations

Replace the second paragraph in this section with:

When files are linked, the Data Links servers schedule them to be copied asynchronously to an archive server (such as ADSM) or to disk. When the backup utility runs, DB2 ensures that all files scheduled for copying have been copied. At the beginning of backup processing, DB2 contacts all Data Links servers that are specified in the DB2 configuration file. If a Data Links server has one or more linked files and is not running, or stops running during the backup operation, the backup will not contain complete DATALINK information; the backup operation will, however, complete successfully.
Before the Data Links server can be marked as available to the database again, backup processing for all outstanding backups must complete successfully. If a backup is initiated when there are already twice the value of num_db_backups (see below) outstanding backups waiting to be completed on the Data Links server, the backup operation will fail. That Data Links server must be restarted and the outstanding backups completed before additional backups are allowed.

7.6.10 DB2 Data Links Manager Considerations/Restore and Rollforward Utility Considerations

Replace the paragraphs beginning with:

   When you restore a database or table space and do not specify the WITHOUT DATALINK...

and

   When you restore a database or table space and you do specify the WITHOUT DATALINK option...

with:

When you restore a database or table space, the following conditions must be satisfied for the restore operation to succeed:

o If any Data Links server recorded in the backup file is not running, the restore operation will still complete successfully. Tables with DATALINK column information that are affected by the missing Data Links server will be put into datalink reconcile pending state after the restore operation (or the rollforward operation, if used) completes. Before the Data Links servers can be marked as available to the database again, this restore processing must complete successfully.

o If any Data Links server recorded in the backup file stops running during the restore operation, the restore operation will fail. The restore can be restarted with the Data Links server down (see above).

o If a previous database restore operation is still incomplete on any Data Links server, subsequent database or table space restore operations will fail until those Data Links servers are restarted and the incomplete restore is completed.

o Information about all DATALINK columns that are recorded in the backup file must exist in the appropriate Data Links servers' registration tables.
If all the information about the DATALINK columns is not recorded in the registration tables, the table with the missing DATALINK column information is put into datalink reconcile not possible state after the restore operation (or the roll-forward operation, if used) completes.

If the backup is not recorded in the registration tables, it may mean that the backup file that is provided is earlier than the value for num_db_backups and has already been "garbage collected". This means that the archived files from this earlier backup have been removed and cannot be restored. All tables that have DATALINK columns are put into datalink reconcile pending state.

If the backup is not recorded in the registration tables, it may also mean that backup processing has not yet been completed because the Data Links server is not running. All tables that have DATALINK columns are put into datalink reconcile pending state. When the Data Links server is restarted, backup processing will be completed before restore processing.

The table remains available to users, but the values in the DATALINK columns may not reference the files accurately (for example, a file may not be found that matches a value for the DATALINK column). If you do not want this behavior, you can put the table into check pending state by issuing the "SET CONSTRAINTS FOR tablename TO DATALINK RECONCILE PENDING" statement.

If, after a restore operation, you have a table in datalink reconcile not possible state, you can fix the DATALINK column data in one of the ways suggested under "Removing a Table from the Datalink_Reconcile_Not_Possible State".

The note at the bottom of the first paragraph remains the same. Add the following at the end of this section:

It is strongly recommended that the datalink.cfg file be archived to cover certain unusual recovery cases, since the datalink.cfg file in the database backup image only reflects the datalink.cfg as of the backup time.
Having the latest datalink.cfg file is required to cover all recovery cases. Therefore, the datalink.cfg file must be backed up after every ADD DATALINKS MANAGER or DROP DATALINKS MANAGER command invocation. This makes it possible to retrieve the latest datalink.cfg file if it is no longer available on disk. If the latest datalink.cfg file is not available on disk, replace the existing datalink.cfg file (restored from a backup image) with the latest archived datalink.cfg file before running a rollforward operation. Do this after the database is restored.

7.6.11 Restoring Databases from an Offline Backup without Rolling Forward

You can only restore without rolling forward at the database level, not at the table space level. To restore a database without rolling forward, you can either restore a nonrecoverable database (that is, a database that uses circular logging), or specify the WITHOUT ROLLING FORWARD parameter on the RESTORE DATABASE command.

If you use the restore utility with the WITHOUT DATALINK option, all tables with DATALINK columns are placed in datalink reconcile pending (DRP) state, and no reconciliation is performed with the Data Links servers during the restore operation.

If you do not use the WITHOUT DATALINK option, and a Data Links server recorded in the backup file is no longer defined to the database (that is, it has been dropped using the DROP DATALINKS MANAGER command), tables that contain DATALINK data referencing the dropped Data Links server are put in DRP state by the restore utility.

If you do not use the WITHOUT DATALINK option, all the Data Links servers are available, and all information about the DATALINK columns is fully recorded in the registration tables, the following occurs for each Data Links server recorded in the backup file:

* All files linked after the backup image that was used for the database restore operation are marked as unlinked (because they are not recorded in the backup image as being linked).
* All files that were unlinked after the backup image, but that were linked before the backup image was taken, are marked as linked (because they are recorded in the backup image as being linked). If the file was subsequently linked to another table in another database, the restored table is put into datalink reconcile pending state.

Note: The above cannot be done if the backup image that was used for the database restore operation was taken when at least one Data Links server was not running, since the DATALINK information in the backup is incomplete. The above is also not done if the backup image that was used for the database restore operation was taken after a database restore with or without rollforward. In both cases, all tables with DATALINK columns are placed in datalink reconcile pending state, and no reconciliation is performed with the Data Links servers during the restore operation.

7.6.12 Restoring Databases and Table Spaces, and Rolling Forward to the End of the Logs

If you restore, then roll forward the database or table space to the end of the logs (meaning that all logs are provided), a reconciliation check is not required unless at least one of the Data Links servers recorded in the backup file is not running during the restore operation. If you are not sure whether all the logs were provided for the roll-forward operation, or think that you may need to reconcile DATALINK values, do the following:

1. Issue the following SQL statement for the table (or tables) involved:

   SET CONSTRAINTS FOR tablename TO DATALINK RECONCILE PENDING

   This puts the table into datalink reconcile pending state and check pending state.

2. If you do not want a table to be in check pending state, issue the following SQL statement:

   SET CONSTRAINTS FOR tablename IMMEDIATE CHECKED

   This takes the table out of check pending state, but leaves it in datalink reconcile pending state. You must use the reconcile utility to take the table out of this state.
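The effect of the two statements above on a table's state flags can be sketched as follows (a hypothetical state model in Python, not DB2 internals):

```python
def set_datalink_reconcile_pending(table):
    # SET CONSTRAINTS ... TO DATALINK RECONCILE PENDING puts the table
    # into both datalink reconcile pending and check pending state.
    table["datalink_reconcile_pending"] = True
    table["check_pending"] = True

def set_immediate_checked(table):
    # SET CONSTRAINTS ... IMMEDIATE CHECKED takes the table out of
    # check pending state, but leaves datalink reconcile pending set;
    # only the reconcile utility clears that state.
    table["check_pending"] = False

t = {"datalink_reconcile_pending": False, "check_pending": False}
set_datalink_reconcile_pending(t)
set_immediate_checked(t)
```

After both calls, the table is out of check pending state but still in datalink reconcile pending state, matching the two-step procedure above.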
It may happen that the backup file contains DATALINK data that refers to a DB2 Data Links Manager (that is, a DB2 Data Links Manager was registered to the database when the backup was taken) that has been dropped from the database. For each table space being rolled forward that contains at least one table with DATALINK data referencing the dropped DB2 Data Links Manager, all tables are put in DRP state by the rollforward utility.

7.6.13 DB2 Data Links Manager and Recovery Interactions

The following list shows the different types of recovery that you can perform, the DB2 Data Links Manager processing that occurs during restore and roll-forward processing, and whether you need to run the reconcile utility after the recovery is complete.

Non-recoverable database (logretain=NO):

* Database restore of a complete backup, all Data Links servers up
  - Processing during restore: Fast reconcile is performed
  - Processing during rollforward: N/A
  - Reconcile: Can be optionally run if a problem with file links is suspected

* Database restore using the WITHOUT DATALINK option
  - Processing during restore: Tables put in Datalink_Reconcile_Pending state
  - Processing during rollforward: N/A
  - Reconcile: Required

* Database restore of a complete backup, at least one Data Links server down
  - Processing during restore: Fast reconcile is performed only on those tables in table spaces that do not have links to a Data Links server that is down; other tables put in Datalink_Reconcile_Pending state
  - Processing during rollforward: N/A
  - Reconcile: Required for tables in table spaces with links to the Data Links server that is down

* Database restore of an incomplete backup, all Data Links servers up
  - Processing during restore: Fast reconcile is not performed; all tables with DATALINK columns put in Datalink_Reconcile_Pending state
  - Processing during rollforward: N/A
  - Reconcile: Required

Recoverable database (logretain=YES):

* Database restore using the WITHOUT ROLLING FORWARD option, using a complete backup, all Data Links servers up
  - Processing during restore: Fast reconcile is performed
  - Processing during rollforward: N/A
  - Reconcile: Optional

* Database restore using the WITHOUT ROLLING FORWARD and WITHOUT DATALINK options, using a complete or incomplete backup, Data Links servers up or down
  - Processing during restore: Tables put in Datalink_Reconcile_Pending state
  - Processing during rollforward: N/A
  - Reconcile: Required

* Database restore using the WITHOUT ROLLING FORWARD option, using a complete backup, at least one Data Links server down
  - Processing during restore: Fast reconcile is performed only on those tables in table spaces that do not have links to the Data Links servers that are down; other tables put in Datalink_Reconcile_Pending state
  - Processing during rollforward: N/A
  - Reconcile: Required on tables in table spaces with links to the Data Links servers that are down

* Database restore using the WITHOUT ROLLING FORWARD option, using an incomplete backup, Data Links servers up or down
  - Processing during restore: Fast reconcile is not performed; all tables with DATALINK columns put into Datalink_Reconcile_Pending state
  - Processing during rollforward: N/A
  - Reconcile: Required

* Database restore and roll forward to end of logs, using a complete backup, all Data Links servers up
  - Processing during restore: No action
  - Processing during rollforward: No action
  - Reconcile: Optional

* Database restore and roll forward to end of logs, using a complete backup, at least one Data Links server down during roll forward processing
  - Processing during restore: No action
  - Processing during rollforward: No action
  - Reconcile: Optional

* Database restore and roll forward to end of logs, using a complete or an incomplete backup, any Data Links server down during restore
  - Processing during restore: No action
  - Processing during rollforward: All tables with DATALINK columns put into Datalink_Reconcile_Pending state
  - Reconcile: Required for all tables with DATALINK columns

* Database restore and roll forward to end of logs, using an incomplete backup, all Data Links servers up during restore
  - Processing during restore: No action
  - Processing during rollforward: No action
  - Reconcile: Optional

* Database restore and roll forward to end of logs, using a complete or an incomplete backup, all Data Links servers up, backup unknown at any Data Links server
  - Processing during restore: No action
  - Processing during rollforward: All tables in table spaces with links to a Data Links server where the backup is unknown put in Datalink_Reconcile_Pending state
  - Reconcile: Required

* Table space restore and roll forward to end of logs, using a complete backup, all Data Links servers up
  - Processing during restore: No action
  - Processing during rollforward: No action
  - Reconcile: Optional

* Table space restore and roll forward to end of logs, using a complete backup, at least one Data Links server down during roll forward processing
  - Processing during restore: No action
  - Processing during rollforward: No action
  - Reconcile: Optional

* Table space restore and roll forward to end of logs, using a complete or an incomplete backup, any Data Links server down during restore processing
  - Processing during restore: No action
  - Processing during rollforward: All tables in table spaces with links to any Data Links server that is down put into Datalink_Reconcile_Pending state
  - Reconcile: Required for tables in table spaces with links to any Data Links server that is down

* Table space restore and roll forward to end of logs, using an incomplete backup, all Data Links servers up
  - Processing during restore: No action
  - Processing during rollforward: No action
  - Reconcile: Optional

* Database restore and roll forward to a point in time, using a complete or an incomplete backup, Data Links servers up or down during restore and/or roll forward processing
  - Processing during restore: No action
  - Processing during rollforward: Tables put in Datalink_Reconcile_Pending state
  - Reconcile: Required

* Table space restore and roll forward to a point in time, using a complete or an incomplete backup, Data Links servers up or down during restore and/or rollforward processing
  - Processing during restore: No action
  - Processing during rollforward: Tables put in Datalink_Reconcile_Pending state
  - Reconcile: Required

* Database restore to a different database name, alias, hostname, or instance, with no roll forward (NOTE1)
  - Processing during restore: Tables put in Datalink_Reconcile_Not_Possible state
  - Processing during rollforward: N/A
  - Reconcile: Optional, but tables in Datalink_Reconcile_Not_Possible state must be manually fixed

* Database restore to a different database name, alias, hostname, or instance, and roll forward
  - Processing during restore: No action
  - Processing during rollforward: Tables put in Datalink_Reconcile_Not_Possible state
  - Reconcile: Optional, but tables in Datalink_Reconcile_Not_Possible state must be manually fixed

* Database restore from an unusable backup (image has been garbage-collected on the Data Links server), with no roll forward (NOTE1), with or without the WITHOUT DATALINK option
  - Processing during restore: Tables put in Datalink_Reconcile_Pending state
  - Processing during rollforward: No action
  - Reconcile: Required

* Database restore from an unusable backup (image has been garbage-collected on the Data Links server), and roll forward, with or without the WITHOUT DATALINK option
  - Processing during restore: No action
  - Processing during rollforward: Tables put in Datalink_Reconcile_Pending state
  - Reconcile: Required

* Table space No
action Tables put in Required restore from an Datalink_Reconcile _Pending unusable backup state (image has been garbage-collected on the Data Links server), and roll forward Notes: 1. A restore using an offline backup and the WITHOUT ROLLING FORWARD option (logretain is on), or a restore using an offline backup (logretain is off). 2. A complete backup is a backup taken when all required Data Links servers were running. An incomplete backup is a backup taken when at least one required Data Links server was not running. 3. Fast reconcile processing will not be performed if the backup image that was used for the database restore operation was taken after a database restore, with or without rollforward. In this case, all tables with DATALINK columns are put in Datalink_Reconcile_Pending state. 7.6.14 Detection of Situations that Require Reconciliation Following are some situations in which you may need to run the reconcile utility: * The entire database is restored and rolled forward to a point in time. Because the entire database is rolled forward to a committed transaction, no tables will be in check pending state (due to referential constraints or check constraints). All data in the database is brought to a consistent state. The DATALINK columns, however, may not be synchronized with the metadata in the DB2 Data Links Manager, and reconciliation is required. In this situation, tables with DATALINK data will already be in DRP state. You should invoke the reconcile utility for each of these tables. * A particular Data Links server running the DB2 Data Links Manager loses track of its metadata. This can occur for different reasons. For example: o The Data Links server was cold started. o The Data Links server metadata was restored to a back-level state. In some situations, such as during SQL UPDATEs and DELETEs, DB2 may be able to detect a problem with the metadata in a Data Links server. In these situations, the SQL statement would fail. 
You would put the table in DRP state by using the SET CONSTRAINTS
statement, then run the reconcile utility on that table.

* A file system is not available (for example, because of a disk
  crash) and is not restored to the current state. In this situation,
  files may be missing.

* A DB2 Data Links Manager is dropped from a database, and there are
  DATALINK FILE LINK CONTROL values referencing that DB2 Data Links
  Manager. You should run the reconcile utility on such tables.

------------------------------------------------------------------------

7.7 Appendix C. User Exit for Database Recovery

Under the section "Archive and Retrieve Considerations", the following
paragraph is no longer true and should be removed from the list:

A user exit may be interrupted if a remote client loses its connection
to the DB2 server. That is, while the user exit is handling the
archiving of logs, one of the other SNA-connected clients dies or
powers off, resulting in a signal (SIGUSR1) being sent to the server.
The server passes the signal to the user exit, causing an interrupt.
The user exit program can be modified to check for an interrupt and
then continue.

In the Error Handling section, the contents of Note 3 in the Notes
list should be replaced with the following information:

* User exit program requests are suspended for five minutes. During
  this time, all requests are ignored, including the log file request
  that caused the return code. Following the five-minute suspension,
  the next request is processed. If no error occurs in the processing
  of this request, processing of new user exit program requests
  continues, and DB2 reissues the archive request for the log files
  that either failed to archive previously or were suspended. If a
  return code greater than 8 is generated during the retry, requests
  are suspended for an additional five minutes. The five-minute
  suspensions continue until the problem is corrected or the database
  is stopped and restarted.
Once all applications disconnect from the database and the database is
reopened, DB2 will issue the archive request for any log file that
might not have been successfully archived in the previous use of the
database.

If the user exit program fails to archive log files, your disk can
fill with log files, and performance may be degraded because of the
extra work to format these log files. Once the disk becomes full, the
database manager will not accept further application requests for
database changes. If the user exit program was called to retrieve log
files, roll-forward recovery is suspended but not stopped, unless a
stop was specified in the ROLLFORWARD DATABASE utility. If a stop was
not specified, you can correct the problem and resume recovery.

------------------------------------------------------------------------

7.8 Appendix D. Issuing Commands to Multiple Database Partition Servers

At the bottom of the section "Specifying the Command to Run", add the
following:

When you run any Korn shell script that contains logic to read from
stdin in the background, you should explicitly redirect stdin to a
source from which the process can read without being stopped on the
terminal (SIGTTIN message). To redirect stdin, you can run a script
with the following form:

   shell_script

... 8.5 AND C <= 10. The estimate of the r_2 value using linear
interpolation must be changed to the following:

          10 - 8.5
   r_2 ~= --------- x (number of rows with value > 8.5 and <= 100.0)
          100 - 8.5

          10 - 8.5
   r_2 ~= --------- x (10 - 7)
          100 - 8.5

          1.5
   r_2 ~= ---- x (3)
          91.5

   r_2 ~= 0

The paragraph following this new example must also be modified to read
as follows:

The final estimate is r_1 + r_2 ~= 7, and the error is only -12.5%.
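As a quick numeric check, the linear interpolation above can be
reproduced in a few lines of Python (the function name is ours, for
illustration only):

```python
# Linear interpolation used to estimate the number of rows in a
# sub-range, assuming values are spread evenly across the range.
def interpolate_rows(low, high, target, rows_in_range):
    """Estimate the rows with value > low and <= target, out of
    rows_in_range rows falling between low and high."""
    return (target - low) / (high - low) * rows_in_range

# Values from the example above: 3 rows lie between 8.5 and 100.0.
r_2 = interpolate_rows(8.5, 100.0, 10.0, 10 - 7)
print(round(r_2, 2))  # 0.05, i.e. roughly 0 rows
```

With r_1 = 7, the final estimate r_1 + r_2 is therefore approximately 7.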
8.3.2 Rules for Updating Catalog Statistics

Within the section titled "Rules for Updating Column Statistics", the
last bulleted-list item in the first list item should be replaced by
the following:

HIGH2KEY must be greater than LOW2KEY whenever there are more than
three distinct values in the corresponding column. In the case of
three or fewer distinct values in the column, HIGH2KEY can be equal to
LOW2KEY.

8.3.3 Sub-element Statistics

In FixPak 1, an option is provided to collect and use sub-element
statistics. These are statistics about the content of data in columns
when the data has a structure in the form of a series of sub-fields or
sub-elements delimited by blanks.

For example, suppose a database contains a table DOCUMENTS in which
each row describes a document, and suppose that DOCUMENTS has a column
called KEYWORDS containing a list of keywords relevant to the document
for text-retrieval purposes. The values in KEYWORDS might be as
follows:

   'database simulation analytical business intelligence'
   'simulation model fruitfly reproduction temperature'
   'forestry spruce soil erosion rainfall'
   'forest temperature soil precipitation fire'

In this example, each column value consists of five sub-elements, each
of which is a word (the keyword), separated from the others by one
blank. For queries that specify LIKE predicates on such columns using
the % match-all character:

   SELECT .... FROM DOCUMENTS WHERE KEYWORDS LIKE '%simulation%'

it is often beneficial for the optimizer to know some basic statistics
about the sub-element structure of the column, namely:

SUB_COUNT
   The average number of sub-elements.

SUB_DELIM_LENGTH
   The average length of each delimiter separating the sub-elements,
   where a delimiter, in this context, is one or more consecutive
   blank characters.

In the KEYWORDS column example, SUB_COUNT is 5, and SUB_DELIM_LENGTH
is 1, because each delimiter is a single blank character.
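The two statistics can be illustrated with a short Python sketch that
derives them from the sample KEYWORDS values (the helper function is
illustrative only; it is not how RUNSTATS computes them internally):

```python
# Sketch: derive SUB_COUNT and SUB_DELIM_LENGTH from column data,
# where sub-elements are blank-delimited words and a delimiter is a
# run of one or more consecutive blanks.
import re

keywords = [
    'database simulation analytical business intelligence',
    'simulation model fruitfly reproduction temperature',
    'forestry spruce soil erosion rainfall',
    'forest temperature soil precipitation fire',
]

def sub_element_stats(values):
    counts, delim_lengths = [], []
    for v in values:
        counts.append(len(v.split()))            # sub-elements per value
        delim_lengths.extend(len(d) for d in re.findall(r' +', v))
    sub_count = sum(counts) / len(counts)
    sub_delim_length = sum(delim_lengths) / len(delim_lengths)
    return sub_count, sub_delim_length

print(sub_element_stats(keywords))  # (5.0, 1.0)
```

Each sample value has five single-blank-separated keywords, so the
averages come out to SUB_COUNT = 5 and SUB_DELIM_LENGTH = 1, matching
the text above.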
In FixPak 1, the system administrator controls the collection and use
of these statistics by means of an extension to the DB2_LIKE_VARCHAR
registry variable. This registry variable affects how the DB2 UDB
optimizer deals with a predicate of the form:

   COLUMN LIKE '%xxxxxx'

where xxxxxx is any string of characters; that is, any LIKE predicate
whose search value starts with a % character. (It may or may not end
with a % character.) These are referred to as "wildcard LIKE
predicates" below. For all predicates, the optimizer has to estimate
how many rows match the predicate. For wildcard LIKE predicates, the
optimizer assumes that the COLUMN being matched has a structure of a
series of elements concatenated together to form the entire column,
and estimates the length of each element based on the length of the
string, excluding leading and trailing % characters.

The new syntax is:

   db2set DB2_LIKE_VARCHAR=[Y|N|S|num1][,Y|N|num2]

where the first term (preceding the comma) means the following, but
only for columns that do not have positive sub-element statistics:

S
   Use the algorithm used in DB2 Version 2.
N
   Use a fixed-length sub-element algorithm.
Y (default)
   Use a variable-length sub-element algorithm with a default value
   for the algorithm parameter.
num1
   Use a variable-length sub-element algorithm, and use num1 as the
   algorithm parameter.

and the second term (following the comma) means:

N (default)
   Do not collect or use sub-element statistics.
Y
   Collect sub-element statistics. Use a variable-length sub-element
   algorithm that uses those statistics, together with a default value
   for the algorithm parameter, in the case of columns with positive
   sub-element statistics.
num2
   Collect sub-element statistics. Use a variable-length sub-element
   algorithm that uses those statistics, together with num2 as the
   algorithm parameter, in the case of columns with positive
   sub-element statistics.
If the value of DB2_LIKE_VARCHAR contains only the first term, no sub-element statistics are collected, and any that have previously been collected are ignored. The value specified affects how the optimizer calculates the selectivity of wildcard LIKE predicates in the same way as before; that is: * If the value is S, the optimizer uses the same algorithm as was used in DB2 Version 2, which does not presume the sub-element model. * If the value is N, the optimizer uses an algorithm that presumes the sub-element model, and assumes that the COLUMN is of a fixed length, even if it is defined as variable length. * If the value is Y (the default) or a floating point constant, the optimizer uses an algorithm that presumes the sub-element model and recognizes that the COLUMN is of variable length, if so defined. It also infers sub-element statistics from the query itself, rather than from the data. This algorithm involves a parameter (the "algorithm parameter") that specifies how much longer the element is than the string enclosed by the % characters. * If the value is Y, the optimizer uses a default value of 1.9 for the algorithm parameter. * If the value is a floating point constant, the optimizer uses the specified value for the algorithm parameter. This constant must lie within the range of 0 to 6.2. If the value of DB2_LIKE_VARCHAR contains two terms, and the second is Y or a floating point constant, sub-element statistics on single-byte character set string columns of type CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC are collected during a RUNSTATS operation and used during compilation of queries involving wildcard LIKE predicates. The optimizer uses an algorithm that presumes the sub-element model and uses the SUB_COUNT and SUB_DELIM_LENGTH statistics, as well as an algorithm parameter, to calculate the selectivity of the predicate. 
The algorithm parameter is specified in the same way as for the
inferential algorithm, that is:

* If the value is Y, the optimizer uses a default value of 1.9 for the
  algorithm parameter.

* If the value is a floating point constant, the optimizer uses the
  specified value for the algorithm parameter. This constant must lie
  within the range of 0 to 6.2.

If, during compilation, the optimizer finds that sub-element
statistics have not been collected on the column involved in the
query, it will use the "inferential" sub-element algorithm; that is,
the one used when only the first term of DB2_LIKE_VARCHAR is
specified. Thus, in order for the sub-element statistics to be used by
the optimizer, the second term of DB2_LIKE_VARCHAR must be set both
during RUNSTATS and during compilation.

The values of the sub-element statistics can be viewed by querying
SYSIBM.SYSCOLUMNS. For example:

   select substr(NAME,1,16), SUB_COUNT, SUB_DELIM_LENGTH
   from sysibm.syscolumns
   where tbname = 'DOCUMENTS'

The SUB_COUNT and SUB_DELIM_LENGTH columns are not present in the
SYSSTAT.COLUMNS statistics view, and therefore cannot be updated.

Note: RUNSTATS may take longer if this option is used. For example,
RUNSTATS may take between 15% and 40% longer on a table with five
character columns if the DETAILED and DISTRIBUTION options are not
used. If either the DETAILED or the DISTRIBUTION option is specified,
the percentage overhead is less, even though the absolute amount of
overhead is the same. If you are considering using this option, you
should assess this overhead against improvements in query performance.

------------------------------------------------------------------------

8.4 Chapter 6. Understanding the SQL Compiler

The following sections require changes:

8.4.1 Replicated Summary Tables

The following information will replace or be added to the existing
information already in this section:

Replicated summary tables can be used to assist in the collocation of
joins.
For example, if you have a star schema in which a large fact table is
spread across twenty nodes, the joins between the fact table and the
dimension tables are most efficient if these tables are collocated. If
all of the tables are placed in the same nodegroup, at most one
dimension table can be partitioned correctly for a collocated join.
All other dimension tables could not be used in a collocated join,
because their join columns would not correspond to the fact table's
partitioning key.

For example, you could have a table called FACT (C1, C2, C3, ...)
partitioned on C1; a table called DIM1 (C1, dim1a, dim1b, ...)
partitioned on C1; a table called DIM2 (C2, dim2a, dim2b, ...)
partitioned on C2; and so on. From this example, you can see that the
join between FACT and DIM1 is perfect, because the predicate DIM1.C1 =
FACT.C1 is collocated: both tables are partitioned on column C1. The
join between DIM2 and FACT with the predicate WHERE DIM2.C2 = FACT.C2
cannot be collocated, because FACT is partitioned on column C1 and not
on column C2. In this case, it would be good to replicate DIM2 in the
fact table's nodegroup, so that the join can be done locally on each
partition.

Note: The replicated summary tables discussion here has to do with
intra-database replication. Inter-database replication involves
subscriptions, control tables, and data located in different databases
and on different operating systems. If you are interested in
inter-database replication, refer to the Replication Guide and
Reference for more information.

When creating a replicated summary table, the source table can be a
single-node nodegroup table or a multi-node nodegroup table. In most
cases, the table is small and can be placed in a single-node
nodegroup.
You may place a limit on the data to be replicated by specifying only
a subset of the columns from the table, by limiting the number of rows
through the predicates used, or by using both methods when creating
the replicated summary table.

Note: The data capture option is not required for replicated summary
tables to function.

The replicated summary table can also be created in a multi-node
nodegroup, the same nodegroup in which you have placed your large
tables. In this case, copies of the source table are created on all of
the partitions of the nodegroup. Joins between a large fact table and
the dimension tables have a better chance of being done locally in
this environment, rather than having to broadcast the source table to
all partitions.

Indexes on replicated tables are not created automatically. You can
create indexes, and they may be different from those on the source
table.

Note: You cannot create unique indexes on (or define any constraints
on) the replicated tables. This prevents constraint violations that
are not present on the source tables. These constraints are disallowed
even if the same constraint exists on the source table.

After using the REFRESH statement, you should run RUNSTATS on the
replicated table as you would on any other table.

Replicated tables can be referenced directly within a query. However,
you cannot use the NODENUMBER() predicate with a replicated table to
see the table data on a particular partition.

To see whether a replicated summary table was used (given a query that
references the source table), you can use the EXPLAIN facility. First,
ensure that the EXPLAIN tables exist. Then create an explain plan for
the SELECT statement you are interested in. Finally, use the db2exfmt
utility to format the EXPLAIN output. The access plan chosen by the
optimizer may or may not use the replicated summary table, depending
on the information that needs to be joined.
The optimizer may bypass the replicated summary table if it determines
that it would be cheaper to broadcast the original source table to the
other partitions in the nodegroup.

8.4.2 Data Access Concepts and Optimization

The section "Multiple Index Access" under "Index Scan Concepts" has
changed. Add the following information before the note at the end of
the section:

To realize the performance benefits of dynamic bitmaps when scanning
multiple indexes, it may be necessary to change the value of the sort
heap size (sortheap) database configuration parameter and the sort
heap threshold (sheapthres) database manager configuration parameter.
Additional sort heap space is required when dynamic bitmaps are used
in access plans. When sheapthres is set relatively close to sortheap
(that is, less than two or three times sortheap per concurrent query),
dynamic bitmaps with multiple index access must work with much less
memory than the optimizer anticipated. The solution is to increase the
value of sheapthres relative to sortheap.

The section "Search Strategies for Star Join" under "Predicate
Terminology" has changed. Add the following information at the end of
the section:

The dynamic bitmaps created and used as part of the Star Join
technique use sort heap memory. See Chapter 13, "Configuring DB2" in
the Administration Guide: Performance manual for more information on
the Sort Heap Size (sortheap) database configuration parameter.

------------------------------------------------------------------------

8.5 Chapter 8. Operational Performance

8.5.1 Managing the Database Buffer Pool

Within the section titled "Managing the Database Buffer Pool", add the
following information after the paragraph that begins "When creating
the buffer pool, by default the page size is 4 KB.":

When working with Windows 2000, buffer pool sizes of up to 64 GB are
supported, less the memory used by DB2 and the operating system.
(This assumes that DB2 is the primary product on the system.) This
support is available through Microsoft Address Windowing Extensions
(AWE). Although AWE can be used with buffer pools of any size, larger
buffer pools require one of the higher-end Windows products: Windows
2000 Advanced Server provides support for up to 8 GB of memory, and
Windows 2000 Datacenter Server provides support for up to 64 GB of
memory.

DB2 and Windows 2000 must be configured correctly to support AWE
buffer pools. The buffer pool that will take advantage of AWE must
exist in the database. To have a 3 GB user space allocated, use the
/3GB Windows 2000 boot option. This allows a larger AWE window size to
be used. To enable access to more than 4 GB of memory through the AWE
memory interface, use the /PAE Windows 2000 boot option.

To verify that you have the correct boot option selected, open the
Control Panel, select System, and then select "Startup and Recovery".
From the drop-down list you can see the available boot options. If the
boot option you want (/3GB or /PAE) is selected, you are ready to
proceed to the next task in setting up AWE support. If the option you
want is not available for selection, you must add it to the boot.ini
file on the system drive. The boot.ini file contains a list of actions
to be done when the operating system is started. Add /3GB, or /PAE, or
both (separated by blanks) at the end of the list of existing
parameters. Once you have saved the changed file, you can verify and
select the correct boot option as described above.

Windows 2000 also has to be modified to grant the "Lock pages in
memory" right to the user under which DB2 is installed. To set the
"Lock pages in memory" right, log on to Windows 2000 as the user who
installed DB2, select the "Administrative Tools" folder under the
Start menu, and then select the "Local Security Policy" program.
Under the local policies, you can select the user rights assignment
for the "Lock pages in memory" right.

DB2 requires the setting of the DB2_AWE registry variable. To set this
registry variable correctly, you need to know the buffer pool ID of
the buffer pool that is to support AWE. You also need to know the
number of physical pages and the number of address window pages to
allocate. The number of physical pages to allocate should be somewhat
less than the total number of available physical pages; the actual
number chosen depends on your working environment. For example, in an
environment where only DB2 and database applications are used on the
system, you can normally choose a value between one-half and one
gigabyte less than the total physical memory as the amount given to
the DB2_AWE variable. In an environment where other, non-database
applications also use the system, you must subtract more from the
total, to leave physical pages for those other applications. The
number used in the DB2_AWE registry variable is the number of physical
pages to be used by DB2 in support of AWE. The upper limit on the
address window pages is 1.5 GB, or 2.5 GB when the /3GB Windows 2000
boot option is in effect.

For information on setting the DB2_AWE registry variable, see the
table of new and changed registry variables in "Appendix A. DB2
Registry and Environment Variables" later in this section.

8.5.2 Managing Multiple Database Buffer Pools

Within the section titled "Managing Multiple Database Buffer Pools",
add the following paragraph after the paragraph that begins "When
working with your database design, you may have determined that tables
with 8 KB page sizes are best.":

When working with Windows 2000, the DB2_AWE registry variable can be
used to override the buffer pool size settings in the catalog and
configuration files. Use of this registry variable allows buffer pool
sizes of up to approximately 64 GB.
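The physical-page arithmetic described above is straightforward; the
following sketch shows it with illustrative figures (the function name
and the 16 GB / 1 GB values are assumptions for the example, not DB2
defaults):

```python
# Pages to dedicate to AWE: total physical memory minus a reserve left
# for the operating system and any other applications, in 4 KB pages.
PAGE_SIZE = 4096

def awe_physical_pages(total_ram_bytes, reserve_bytes):
    return (total_ram_bytes - reserve_bytes) // PAGE_SIZE

# A DB2-only system with 16 GB of RAM, reserving 1 GB:
GB = 1024 ** 3
print(awe_physical_pages(16 * GB, 1 * GB))  # 3932160
```

On a system shared with non-database applications, the reserve would
be larger, reducing the page count given to DB2_AWE accordingly.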
Within the same section, replace the paragraph just before the note
with the following:

The reason for allowing the database manager to start with
minimal-sized values is to allow you to connect to the database. You
can then reconfigure the buffer pool sizes, or perform other critical
tasks, with the goal of restarting the database with the correct
buffer pool sizes. Do not operate the database for an extended time in
such a state.

Within the section titled "Reorganizing Catalogs and User Tables", the
last sentence (with a short list) in the paragraph that begins "The
REORG utility allows you to specify a temporary table space..." can be
replaced by:

Using the same table space to reorganize tables is faster, but more
logging occurs and there must be enough space for the reorganized
table. If you specify a temporary table space, it is generally
recommended that you specify an SMS temporary table space. A DMS
temporary table space is not recommended, because you can have only
one REORG in progress using this type of table space.

Within the section titled "Extending Memory", add the following
paragraph after the third paragraph in this section:

When allocating Windows 2000 Address Windowing Extensions (AWE) buffer
pools using the DB2_AWE registry variable, the extended storage cache
cannot be used.

------------------------------------------------------------------------

8.6 Chapter 9. Using the Governor

Within the section titled "Creating the Governor Configuration File",
the first sentence in the first paragraph following the schedule
action discussion should be replaced with:

If more than one rule applies to an application, all of the rules are
applied. Depending on the rule and the limits being set, the action
associated with the first rule limit encountered is the first action
applied.

------------------------------------------------------------------------

8.7 Chapter 13.
Configuring DB2

The following parameters require changes:

8.7.1 Sort Heap Size (sortheap)

The "Recommendation" section has changed. The information should now
read:

When working with the sort heap, you should consider the following:

* Appropriate indexes can minimize the use of the sort heap.
* Hash join buffers and dynamic bitmaps (used for index ANDing and
  Star Joins) use sort heap memory. Increase the size of this
  parameter when these techniques are used.
* Increase the size of this parameter when frequent large sorts are
  required.
* ... (the rest of the items are unchanged)

8.7.2 Sort Heap Threshold (sheapthres)

The second-last paragraph in the description of this parameter has
changed. The paragraph should now read:

Examples of operations that use the sort heap include: sorts, dynamic
bitmaps (used for index ANDing and Star Joins), and operations where
the table is in memory.

The following information is to be added to the description of this
parameter:

There is no reason to increase the value of this parameter when moving
from a single-node to a multi-node environment. Once you have tuned
the database and database manager configuration parameters in a
single-node (DB2 EE) environment, the same values will in most cases
work well in a multi-node (DB2 EEE) environment.

The Sort Heap Threshold parameter, as a database manager configuration
parameter, applies across the entire DB2 instance. The only way to set
this parameter to different values on different nodes or partitions is
to create more than one DB2 instance. This would require managing
different DB2 databases over different nodegroups, an arrangement that
defeats many of the advantages of a partitioned database environment.

8.7.3 Maximum Percent of Lock List Before Escalation (maxlocks)

The following change pertains to the Recommendation section of the
"Maximum Percent of Lock List Before Escalation (maxlocks)" database
configuration parameter.
Recommendation: The following formula allows you to set maxlocks so
that an application can hold twice the average number of locks:

   maxlocks = 2 * 100 / maxappls

where 2 is used to achieve twice the average, and 100 represents the
largest percentage value allowed. If you have only a few applications
that run concurrently, you could use the following formula as an
alternative:

   maxlocks = 2 * 100 / (average number of applications running
                         concurrently)

One of the considerations when setting maxlocks is to use it in
conjunction with the size of the lock list (locklist). The actual
limit on the number of locks held by an application before lock
escalation occurs is:

   maxlocks * locklist * 4096 / (100 * 36)

where 4096 is the number of bytes in a page, 100 is the largest
percentage value allowed for maxlocks, and 36 is the number of bytes
per lock. If you know that one of your applications requires 1000
locks, and you do not want lock escalation to occur, choose values for
maxlocks and locklist such that the result of this formula is greater
than 1000. (Using 10 for maxlocks and 100 for locklist, the formula
yields more than the 1000 locks needed.)

If maxlocks is set too low, lock escalation happens while there is
still enough lock space for other concurrent applications. If maxlocks
is set too high, a few applications can consume most of the lock
space, and other applications will have to perform lock escalation.
The need for lock escalation in this case results in poor concurrency.
You may use the database system monitor to help you track and tune
this configuration parameter.

8.7.4 Configuring DB2/DB2 Data Links Manager/Data Links Access Token
Expiry Interval (dl_expint)

Contrary to the documentation, if dl_expint is set to "-1", the access
control token expires. The workaround is to set dl_expint to its
maximum value, 31536000 (seconds).
This corresponds to an expiration time of one year, which should be
adequate for all applications.

8.7.5 MIN_DEC_DIV_3 Database Configuration Parameter

The MIN_DEC_DIV_3 database configuration parameter is provided as a
quick way to enable a change to the computation of the scale for
decimal division in SQL. MIN_DEC_DIV_3 can be set to YES or NO. The
default value is NO.

The MIN_DEC_DIV_3 database configuration parameter changes the
resulting scale of a decimal arithmetic operation involving division.
If the value is NO, the scale is calculated as 31-p+s-s'. Refer to the
SQL Reference, Chapter 3, "Decimal Arithmetic in SQL" for more
information. If set to YES, the scale is calculated as
MAX(3, 31-p+s-s'). This causes the result of decimal division to
always have a scale of at least 3. Precision is always 31.

Changing this database configuration parameter may change the behavior
of applications on existing databases, wherever the resulting scale of
a decimal division is affected. Listed below are some possible
scenarios that may impact applications. These scenarios should be
considered before changing MIN_DEC_DIV_3 on a database server with
existing databases.

* If the resulting scale of one of the view columns is changed, a view
  that is defined in an environment with one setting could fail with
  SQLCODE -344 when referenced after the database configuration
  parameter is changed. The message SQL0344N refers to recursive
  common table expressions; however, if the object name (first token)
  is a view, you will need to drop the view and create it again to
  avoid this error.

* A static package will not change behavior until the package is
  rebound, either implicitly or explicitly. For example, after
  changing the value from NO to YES, the additional scale digits may
  not be included in the results until a rebind occurs.
For any changed static packages, an explicit rebind command can be used to force a rebind.

* A check constraint involving decimal division may restrict some values that were previously accepted. Such rows now violate the constraint, but the violation is not detected until one of the columns involved in the check constraint is updated, or until the SET INTEGRITY command with the IMMEDIATE CHECKED option is processed. To force checking of such a constraint, issue an ALTER TABLE statement to drop the check constraint, and then issue another ALTER TABLE statement to add the constraint again.

Note: DB2 Version 7 also has the following limitations:

1. The command GET DB CFG FOR DBNAME will not display the MIN_DEC_DIV_3 setting. The best way to determine the current setting is to observe the side-effect of a decimal division result. For example, consider the following statement:

   VALUES (DEC(1,31,0)/DEC(1,31,5))

   If this statement returns error SQL0419N, the database does not have MIN_DEC_DIV_3 support, or the parameter is set to NO. If the statement returns 1.000, MIN_DEC_DIV_3 is set to YES.

2. MIN_DEC_DIV_3 does not appear in the list of configuration keywords when you run the following command:

   ? UPDATE DB CFG

8.7.6 Application Control Heap Size (app_ctl_heap_sz)

The text for this parameter should now read:

For partitioned databases, and for non-partitioned databases with intra-parallelism enabled (intra_parallel=ON), this is the size of the shared memory area allocated for the application control heap. For non-partitioned databases where intra-parallelism is disabled (intra_parallel=OFF), this is the maximum private memory that will be allocated for the heap. There is one application control heap per connection per partition.

The application control heap is required primarily for sharing information between agents working on behalf of the same request and, in a partitioned database environment, for storing executable sections representing SQL statements.
Usage of this heap is minimal for non-partitioned databases when running queries with a degree of parallelism less than or equal to 1.

This heap is also used to store descriptor information for declared temporary tables. The descriptor information for a declared temporary table that has not been explicitly dropped is kept in this heap's memory, and cannot be freed until that declared temporary table is dropped.

The "Recommendation" portion remains unchanged.

8.7.7 Database System Monitor Heap Size (mon_heap_sz)

The default for the OS/2 and Windows NT Database server with local and remote clients, and for the Satellite database server with local clients, has changed from 24 to 32. The range is unchanged.

8.7.8 Maximum Number of Active Applications (maxappls)

The upper range limit for all platforms has changed from 64 000 to 60 000. The default value is unchanged.

8.7.9 Recovery Range and Soft Checkpoint Interval (softmax)

The unit of measure has changed to the percentage of the size of one primary log file.

8.7.10 Track Modified Pages Enable (trackmod)

Configuration Type: Database
Parameter Type: Configurable
Default [Range]: Off [ On; Off ]

When this parameter is set to ON, the database manager tracks which pages in the database have changed since the most recent full backup was taken. This allows the backup utility to determine which pages should be included in an incremental backup without having to examine every page individually. For SMS table spaces, the granularity of this tracking is at the table space level. For DMS table spaces, the granularity is at the extent level for data and index pages, and at the table space level for other page types. After setting this parameter to ON, you must take a full database backup in order to have a baseline against which incremental backups can be taken.
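The page-tracking idea behind trackmod can be sketched in a few lines. This is illustrative Python only; the class and method names are invented for this sketch and are not part of DB2, which tracks changes internally at table space or extent granularity as described above.

```python
class PageTracker:
    """Toy model of trackmod-style modified-page tracking for incremental backup."""

    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.modified = set()   # pages changed since the last full backup

    def write_page(self, page_no):
        self.modified.add(page_no)   # record each change as it happens

    def full_backup(self):
        # A full backup copies every page and establishes a new baseline.
        self.modified.clear()
        return set(range(self.num_pages))

    def incremental_backup(self):
        # Only pages touched since the baseline need to be copied;
        # no need to examine every page individually.
        return set(self.modified)


tracker = PageTracker(num_pages=1000)
tracker.full_backup()          # baseline required before incrementals
tracker.write_page(7)
tracker.write_page(42)
print(sorted(tracker.incremental_backup()))   # -> [7, 42]
```

This mirrors the requirement stated above: without a full backup to reset the baseline, the set of "modified" pages has no reference point, which is why DB2 requires a full database backup after turning trackmod ON.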
8.7.11 Change the Database Log Path (newlogpath)

Configuration Type: Database
Parameter Type: Configurable
Default [Range]: Null [ any valid path or device ]
Related Parameters: Location of Log Files (logpath); Database is Consistent (database_consistent)

This parameter allows you to specify a string of up to 242 bytes to change the location where the log files are stored. The string can point either to a path name or to a raw device. If the string points to a path name, it must be a fully qualified path name, not a relative path name.

Note: In a partitioned database environment, the node number is automatically appended to the path. This is done to maintain the uniqueness of the path in multiple logical node configurations.

To specify a device, specify a string that the operating system identifies as a device. For example, on Windows NT:

   \\.\d:  or  \\.\PhysicalDisk5

Note: You must have Windows NT Version 4.0 with Service Pack 3 installed to be able to write logs to a device.

On UNIX-based platforms:

   /dev/rdblog8

Note: You can only specify a device on AIX, Windows 2000, Windows NT, Solaris, HP-UX, NUMA-Q, and Linux platforms.

The new setting does not become the value of logpath until both of the following occur:

* The database is in a consistent state, as indicated by the database_consistent parameter.
* All users are disconnected from the database.

When the first new connection is made to the database, the database manager moves the logs to the new location specified by logpath.

There might be log files in the old log path that have not been archived. You might need to archive these log files manually. Also, if you are running replication on this database, replication might still need the log files from before the log path change.
If the database is configured with the User Exit Enable (userexit) database configuration parameter set to "Yes", and if all the log files have been archived, either by DB2 automatically or by you manually, then DB2 will be able to retrieve the log files to complete the replication process. Otherwise, you can copy the files from the old log path to the new log path.

Recommendation: Ideally, the log files should be on a physical disk that does not have high I/O activity. For instance, avoid putting the logs on the same disk as the operating system or high-volume databases. This allows for efficient logging activity with a minimum of overhead, such as waiting for I/O.

You can use the database system monitor to track the number of I/Os related to database logging. For more information, refer to the following monitor element descriptions in the System Monitor Guide and Reference:

* log_reads (number of log pages read)
* log_writes (number of log pages written)

The preceding data elements return the amount of I/O activity related to database logging. You can use an operating system monitor tool to collect information about other disk I/O activity, then compare the two types of I/O activity.

8.7.12 Location of Log Files (logpath)

Configuration Type: Database
Parameter Type: Informational
Related Parameters: Change the Database Log Path (newlogpath)

This parameter contains the current path being used for logging purposes. You cannot change this parameter directly; it is set by the database manager after a change to the newlogpath parameter becomes effective. When a database is created, the recovery log file for it is created in a subdirectory of the directory containing the database. The default is a subdirectory named SQLOGDIR under the directory created for the database.

8.7.13 Maximum Storage for Lock List (locklist)

The maximum value is increased from 60 000 to 524 288.

------------------------------------------------------------------------

8.8 Appendix A.
DB2 Registry and Environment Variables

The following registry variables are new or require changes:

8.8.1 Table of New and Changed Registry Variables

Table 6. Registry Variables

DB2MAXFSCRSEARCH
Operating System: All
Values: Default=5; -1, 1 to 33554
Specifies the number of free space control records to search when adding a record to a table. The default is to search five free space control records. Modifying this value allows you to balance insert speed with space reuse. Use large values to optimize for space reuse. Use small values to optimize for insert speed. Setting the value to -1 forces the database manager to search all free space control records.

DLFM_TSM_MGMTCLASS
Operating System: AIX, Windows NT, Solaris
Values: Default: the default TSM management class; any valid TSM management class
Specifies which TSM management class to use to archive and retrieve linked files. If no value is set for this variable, the default TSM management class is used.

DB2_CORRELATED_PREDICATES
Operating System: All
Values: Default=YES; YES or NO
When there are unique indexes on correlated columns in a join, and this registry variable is set to YES, the optimizer attempts to detect and compensate for correlation of join predicates. It uses the KEYCARD information of unique index statistics to detect cases of correlation, and dynamically adjusts the combined selectivities of the correlated predicates, thus obtaining a more accurate estimate of the join size and cost.

DB2_VI_DEVICE
Operating System: Windows NT
Values: Default=null; nic0 or VINIC
Specifies the symbolic name of the device or Virtual Interface Provider Instance associated with the Network Interface Card (NIC). Independent hardware vendors (IHVs) each produce their own NIC. Only one NIC is allowed per Windows NT machine; multiple logical nodes on the same physical machine share the same NIC.
The symbolic device name "VINIC" must be in uppercase and can only be used with Synfinity Interconnect. All other currently supported implementations use "nic0" as the symbolic device name.

DB2_SELECTIVITY
Operating System: All
Values: Default=NO; YES or NO
This registry variable controls where the SELECTIVITY clause can be used. See the SQL Reference, Language Elements, Search Conditions for complete details on the SELECTIVITY clause. When this registry variable is set to YES, the SELECTIVITY clause can be specified when the predicate is a basic predicate in which at least one expression contains host variables.

DB2_UPDATE_PART_KEY
Operating System: All
Values: Default=YES; YES or NO
For FixPak 3 and later, the default value is YES. This registry variable specifies whether or not update of the partitioning key is permitted.

DB2_BLOCK_ON_LOG_DISK_FULL
Operating System: All
Values: Default=NO; YES or NO
This registry variable can be set to prevent "disk full" errors from being generated when DB2 cannot create a new log file in the active log path. Instead, DB2 attempts to create the log file every 5 minutes until it succeeds. After each attempt, DB2 writes a message to the db2diag.log file. The only way to confirm that your application is hanging because of a log-disk-full condition is to monitor the db2diag.log file. Until the log file is successfully created, any user application that attempts to update table data will not be able to commit transactions. Read-only queries may not be directly affected; however, if a query needs to access data that is locked by an update request, or a data page that is fixed in the buffer pool by the updating application, read-only queries will also appear to hang.

DB2_INDEX_2BYTEVARLEN
Operating System: All
Values: Default=NO; YES or NO
This registry variable allows columns with a length greater than 255 bytes to be specified as part of an index key. Indexes created before this registry variable is turned to YES continue to have the 255-byte key limit restriction.
Indexes created after this registry variable is turned to YES behave as two-byte indexes even if the registry variable is later turned back to NO. Several SQL statements are affected by changes to this registry variable, including CREATE TABLE, CREATE INDEX, and ALTER TABLE. For more information on these statements, refer to the changes documented for the SQL Reference.

DB2_FORCE_FCM_BP
Operating System: AIX
Values: Default=NO; YES or NO
Specifies from where the fast communications manager (FCM) resources are allocated: either the database manager shared memory segment or a separate segment. With multiple logical nodes on the same machine, this registry variable should be used. On a partitioned database system with symmetric multi-processing (SMP) enabled, the setting of this registry variable has no effect on how communication takes place; in this case, communication is always through shared memory. However, it does affect the number of shared memory segments DB2 will use.

DB2_AWE
Operating System: Windows 2000
Values: Default=Null; entries of the form <buffer pool ID>,<number of physical pages>,<number of address windows>, separated by semicolons
Allows DB2 UDB on Windows 2000 to allocate buffer pools that use up to 64 GB of memory. Windows 2000 must be configured correctly to support Address Windowing Extensions (AWE) buffer pools. This includes associating the "lock pages in memory" right with the user on Windows 2000, and setting this registry variable on DB2. In setting this variable you need to know the ID of the buffer pool that is to be used for AWE support, and you need to determine the number of physical pages to allocate and the number of address windows. For information on determining the number of physical pages and the number of address windows, see the section on "Managing the Database Buffer Pool" found in "Chapter 8. Operational Performance" earlier in this section. Note: If AWE support is enabled, extended storage (ESTORE) cannot be used for any of the buffer pools in the database.
The buffer pools referenced by this variable must already exist in SYSIBM.SYSBUFFERPOOLS.

DB2_STPROC_LOOKUP_FIRST
Operating System: All
Values: Default=NO; YES or NO
This registry variable has been renamed from DB2_DARI_LOOKUP_ALL.

DB2MEMDISCLAIM
Operating System: AIX
Values: Default=YES; YES or NO
On AIX, memory used by DB2 processes may have some associated paging space. This paging space may remain reserved even when the associated memory has been freed, depending on the AIX system's tunable virtual memory management allocation policy. This registry variable controls whether DB2 agents explicitly request that AIX disassociate the reserved paging space from the freed memory. A setting of YES results in smaller paging space requirements, and possibly less disk activity from paging. A setting of NO results in greater paging space requirements, and possibly more disk activity from paging. In some situations, such as when paging space is plentiful and real memory is so plentiful that paging never occurs, a setting of NO provides a small performance improvement.

DB2MEMMAXFREE
Operating System: All
Values: Default=8 388 608 bytes; 0 to 2^32-1 bytes
This registry variable controls the maximum amount of unused memory, in bytes, retained by DB2 processes.

DB2_ANTIJOIN
Operating System: All
Values: Default=NO in an EEE environment, YES in a non-EEE environment; YES or NO
For DB2 Universal Database EEE environments: when YES is specified, the optimizer searches for opportunities to transform NOT EXISTS subqueries into anti-joins, which can be processed more efficiently by DB2. For non-EEE environments: when NO is specified, the optimizer limits the opportunities to transform NOT EXISTS subqueries into anti-joins.

NEWLOGPATH2
Operating System: UNIX
Values: Default=NO; YES or NO
This parameter allows you to specify whether a secondary path should be used to implement dual logging. The path that will be used is generated by appending the character '2' to the current value of LOGPATH.
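The secondary-path derivation described for NEWLOGPATH2 amounts to a one-line rule, sketched here in Python. This is illustrative only; the function name is invented for this example, and DB2 performs the derivation internally.

```python
def secondary_log_path(logpath: str) -> str:
    """Derive the dual-logging secondary path by appending '2' to LOGPATH."""
    # Strip any trailing slash first so the '2' lands on the path name itself.
    return logpath.rstrip("/") + "2"


print(secondary_log_path("/db2/logs"))   # -> /db2/logs2
```

So with LOGPATH set to /db2/logs, enabling NEWLOGPATH2 would direct the second copy of the logs to /db2/logs2.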
DB2DOMAINLIST
Operating System: Windows NT
Values: Default=Null; one or more valid Windows NT domains, separated by commas
Defines one or more Windows NT domains. Only users belonging to these domains will have their connection or attachment requests accepted. This registry variable should only be used in a pure Windows NT domain environment with DB2 servers and clients running DB2 Universal Database Version 7.1 (or later).

DB2_LIKE_VARCHAR
Operating System: All
Values: Default=Y,N; Y, N, S, or a floating point constant between 0 and 6.2
Controls the collection and use of sub-element statistics. These are statistics about the content of data in columns when the data has a structure in the form of a series of sub-fields or sub-elements delimited by blanks. This registry variable affects how the optimizer deals with a predicate of the form:

   COLUMN LIKE '%xxxxxx%'

where xxxxxx is any string of characters. The syntax for this registry variable is:

   db2set DB2_LIKE_VARCHAR=[Y|N|S|num1] [,Y|N|S|num2]

where:

* The term preceding the comma, or the only term specified, applies only to columns that do not have positive sub-element statistics, and means the following:
  o S - The optimizer estimates the length of each element in a series of elements concatenated together to form a column, based on the length of the string enclosed in the % characters.
  o Y - The default. Use a variable-length sub-element algorithm with an algorithm parameter value of 1.9.
  o N - Use a fixed-length sub-element algorithm.
  o num1 - Use the value of num1 as the algorithm parameter with the variable-length sub-element algorithm.
* The term following the comma means the following:
  o N - The default. Do not collect or use sub-element statistics.
  o Y - Collect sub-element statistics.
Use a variable-length sub-element algorithm that uses the collected statistics, together with the 1.9 default value for the algorithm parameter, for columns with positive sub-element statistics.
  o num2 - Collect sub-element statistics. Use a variable-length sub-element algorithm that uses the collected statistics, together with the value of num2 as the algorithm parameter, for columns with positive sub-element statistics.

DB2_PINNED_BP
Operating System: AIX, HP-UX
Values: Default=NO; YES or NO
This variable is used to hold the database global memory (including buffer pools) associated with the database in main memory on some AIX operating systems. Keeping this database global memory in the system main memory allows database performance to be more consistent. If, for example, the buffer pool were swapped out of the system main memory, database performance would deteriorate. The reduction of disk I/O achieved by having the buffer pools in system memory improves database performance. If you have other applications that require more of the main memory, you may want to allow the database global memory to be swapped out of main memory, depending on the system main memory requirements.

When working with HP-UX in a 64-bit environment, in addition to modifying this registry variable, the DB2 instance group must be given the MLOCK privilege. To do this, a user with root access rights performs the following steps:

1. Add the DB2 instance group to the /etc/privgroup file. For example, if the DB2 instance group belongs to the db2iadm1 group, the following line must be added to the /etc/privgroup file:

   db2iadm1 MLOCK

2. Issue the following command:

   setprivgrp -f /etc/privgroup

DB2_RR_TO_RS
Operating System: All
Values: Default=NO; YES or NO
Next-key locking guarantees the Repeatable Read (RR) isolation level by automatically locking the next key for all INSERT and DELETE statements, and the next higher key value above the result set for SELECT statements.
For UPDATE statements that alter key parts of an index, the original index key is deleted and the new key value is inserted. Next-key locking is done on both the key insertion and the key deletion. Next-key locking is required to guarantee ANSI and SQL92 standard RR, and is the DB2 default.

If your application appears to stop or hang, examine the snapshot information for your application. If the problem appears to be with next-key locking, you can turn the DB2_RR_TO_RS registry variable on, provided that two conditions are met: none of your applications rely on Repeatable Read (RR) behavior, and it is acceptable for scans to skip over uncommitted deletes.

The skipping behavior affects the RR, Read Stability (RS), and Cursor Stability (CS) isolation levels. (There is no row locking for the Uncommitted Read (UR) isolation level.) When DB2_RR_TO_RS is on, RR behavior cannot be guaranteed for scans on user tables, because next-key locking is not done during index key insertion and deletion. Catalog tables are not affected by this option. The other change in behavior is that with DB2_RR_TO_RS on, scans skip over rows that have been deleted but not committed, even though the row may have qualified for the scan.

------------------------------------------------------------------------

8.9 Appendix C. SQL Explain Tools

The section titled "Running db2expln and dynexpln" should have the last paragraph replaced with the following:

To run db2expln, you must have SELECT privilege on the system catalog views, as well as EXECUTE authority for the db2expln package. To run dynexpln, you must have BINDADD authority for the database; the schema you are using to connect to the database must exist, or you must have IMPLICIT_SCHEMA authority for the database; and you must have any privileges needed for the SQL statements being explained. (Note that if you have SYSADM or DBADM authority, you automatically have all these authorization levels.)
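As a cross-check of the lock-escalation threshold formula given with the maxlocks recommendation earlier in these notes (maxlocks * locklist * 4096 / (100 * 36)), a short script reproduces the arithmetic. This is a sketch of the calculation only; DB2 evaluates it internally, and the constants come straight from the formula as documented.

```python
PAGE_BYTES = 4096   # bytes in a page, per the formula
LOCK_BYTES = 36     # bytes per lock in Version 7, per the formula

def escalation_limit(maxlocks_pct: int, locklist_pages: int) -> float:
    """Number of locks an application may hold before lock escalation occurs."""
    return maxlocks_pct * locklist_pages * PAGE_BYTES / (100 * LOCK_BYTES)


# With maxlocks=10 and locklist=100, the limit is about 1138 locks,
# exceeding the 1000 locks required in the worked example above.
print(escalation_limit(10, 100))
```

Running this confirms the worked example: an application needing 1000 locks avoids escalation with maxlocks=10 and locklist=100, since the computed limit exceeds 1000.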
------------------------------------------------------------------------

Administering Satellites Guide and Reference

------------------------------------------------------------------------

9.1 Setting up Version 7.2 DB2 Personal Edition and DB2 Workgroup Edition as Satellites

The sections that follow describe how to set up Windows-based Version 7.2 DB2 Personal Edition and DB2 Workgroup Edition systems so that they can be used as fully functional satellites in a satellite environment. For information about the terms and concepts used in the information that follows, refer to the Administering Satellites Guide and Reference. You can find this book at the following URL:

http://www-4.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/v6pubs.d2w/en_main

For Technotes that supplement the information in the Administering Satellites Guide and Reference, refer to the following URL:

http://www-4.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/browse.d2w/report?type=tech5udb&tech5udb=Y

9.1.1 Prerequisites

To set up either DB2 Personal Edition or DB2 Workgroup Edition as a satellite, you require the following:

1. A DB2 control server

The DB2 control server is a DB2 Enterprise Edition system that runs on Windows NT or AIX, and has the Control Server component installed. The DB2 Enterprise Edition system that you use must be at Version 6 with FixPak 2 or higher, or Version 7 at any FixPak level.

o If you have a Version 6 Enterprise Edition system that you want to use as the DB2 control server, see 9.1.3, Installing FixPak 2 or Higher on a Version 6 Enterprise Edition System.

o If you are using Version 7 and do not have the Control Server component installed, install this component, re-install any FixPaks that you have already installed, and then create the DB2 control server instance and satellite control database. Refer to the Administering Satellites Guide and Reference for instructions on creating these objects.
Note: If you are installing a Version 7.2 Enterprise Edition system on Windows NT for use as the DB2 control server, and you want to perform a response file installation, see the Technote entitled "DB2 Control Server Response File Keywords" for information about the keywords to specify in the response file.

2. The DB2 control server instance and the satellite control database

The DB2 control server instance is typically called DB2CTLSV, and the satellite control database is called SATCTLDB. The DB2 control server instance and the satellite control database are on the Enterprise Edition system and, on Windows NT, are automatically created when you install DB2 with the Control Server component. If you install DB2 on AIX, see the Administering Satellites Guide and Reference for information about creating the DB2 control server instance and the satellite control database.

3. The Satellite Administration Center

The Satellite Administration Center is the set of GUI tools that you use to set up and administer the satellite environment. You access this set of tools from the Control Center. For more information about the Satellite Administration Center and the satellite environment, see the Administering Satellites Guide and Reference, and the online help that is available from the Satellite Administration Center. If you are running a Version 6 Control Center, see 9.1.4, Upgrading a Version 6 Control Center and Satellite Administration Center.

If you have not already used the Satellite Administration Center to set up the satellite environment and to create the object that represents the new satellite, you should do so before installing the satellite. For more information, see the description of how to set up and test a satellite environment in the Administering Satellites Guide and Reference.

4. A Version 7.2 Personal Edition or Workgroup Edition system that you want to use as a satellite.
9.1.1.1 Installation Considerations

When you install either DB2 Personal Edition or DB2 Workgroup Edition, you do not have to select any special component to enable the system to synchronize. If you intend to perform a response file installation, see "Performing a Response File Installation" for the keywords that you should specify when installing the Version 7.2 system. If you are performing an interactive installation of your Version 7.2 system, see 9.1.2, Configuring the Version 7.2 System for Synchronization, after you finish installing DB2, for values that you must set on the Version 7.2 system to enable it to synchronize.

Performing a Response File Installation

If you are performing a response file installation of Version 7.2 DB2 Personal Edition or DB2 Workgroup Edition, you can set the following keywords in the response file. If you decide not to specify one or more of these keywords during the response file installation, see 9.1.2, Configuring the Version 7.2 System for Synchronization, for additional steps that you must perform after installing DB2 to enable the Version 7.2 system to synchronize. You can also use the instructions in that section if you want to change any values that were specified during the response file installation.

db2.db2satelliteid
Sets the satellite ID on the system.
Note: If you do not specify this keyword, the satellite ID is automatically set to the user ID that was used to install DB2. If you want to use this user ID as the satellite ID, you do not have to specify a value for this keyword.

db2.db2satelliteappver
Sets the application version on the system.
Note: If you do not specify this keyword, the application version on the satellite is automatically set to V1R0M00. If you want to use this value as the application version, you do not have to specify a value for this keyword.

db2.satctldb_username
Sets the user name to be used for the system to connect to the satellite control database.
db2.satctldb_password
Sets the password that the user name passes to the DB2 control server when it connects to the satellite control database.

After you complete the response file installation, the Version 7.2 system is ready to synchronize. You should issue the db2sync -t command on the satellite to verify that the values specified on the satellite are correct, and that the satellite can connect to the satellite control database. For additional information about performing a response file installation, refer to the Administering Satellites Guide and Reference.

Notes:

1. In Version 7, user IDs and passwords are required for the creation of all services on Windows NT and Windows 2000. These user IDs and passwords are specified in the response file by keyword pairs. The first keyword pair found in the response file becomes the default user ID and password for all services, unless you provide an override for a service by specifying the specific keyword pair for that service.

In Version 6, the admin.userid and admin.password keywords could be specified during a response file installation of DB2 Satellite Edition to specify the user ID and password that would be used by the Remote Command Service. For Version 7.2 Personal Edition and Workgroup Edition, if you specify these keywords, they are used for the DB2DAS00 instance on the Version 7.2 system. On a DB2 Version 7.2 system, the Remote Command Service uses the user ID and password that is used by the DB2 instance on the system. If you do not specify values for db2.userid and db2.password, the defaulting rule described above applies.

2. In Version 6, you could create a database when installing DB2 Satellite Edition using a response file installation. You cannot create a database during a response file installation on the Version 7.2 Personal Edition or Workgroup Edition system that you intend to use as a satellite.
The following keywords (which are described in the Administering Satellites Guide and Reference) are not supported:

o db2.userdb_name
o db2.userdb_recoverable
o db2.userdb_rep_src

9.1.2 Configuring the Version 7.2 System for Synchronization

If you install the Version 7.2 system interactively, several values must be set on the DB2 Personal Edition or DB2 Workgroup Edition system after installing DB2 before the system can synchronize.

Note: You can execute an operating system script on the system to set all values at the satellite except for the user ID and password that the satellite uses to connect to the satellite control database (see step 4).

1. Set the satellite ID by using the db2set command.

If you install DB2 Personal Edition or DB2 Workgroup Edition interactively, the satellite ID is automatically set to the user ID that was used to install DB2. If you want to use this user ID as the satellite ID, you do not have to perform this step. For information about setting the satellite ID, see the Administering Satellites Guide and Reference.

2. Set the application version on the satellite by using the db2sync -s command.

If you install DB2 Personal Edition or DB2 Workgroup Edition interactively, the application version on the satellite is automatically set to V1R0M00. If you want to use this value as the application version, you do not have to perform this step. You can use the db2sync -g command on the satellite to view the current setting of the application version. If you want to change this value, issue the db2sync -s command. You are prompted to provide a new value for the application version. For more information about setting the application version, see the Administering Satellites Guide and Reference.

3. Issue the catalog node and catalog database commands on the satellite to catalog the DB2 control server instance and the satellite control database, SATCTLDB, at the satellite.
You can also use the db2sync -t command on the satellite to open the DB2 Synchronizer application in test mode. If the SATCTLDB database is not cataloged at the satellite when you issue the command, the Catalog Control Database window opens. You can either use the DB2 discovery feature that is available from the Catalog Control Database window to catalog the DB2 control server and the SATCTLDB database, or you can type the hostname and server name in this window. You will also be prompted to specify the user ID and password that the satellite will use to connect to the satellite control database, as described in step 4.

Note: After you install Version 7.2 DB2 Personal Edition or DB2 Workgroup Edition interactively, the DB2 Synchronizer does not start automatically in test mode (as was the case for Version 6 DB2 Satellite Edition).

4. Issue the db2sync -t command on the satellite to:

o Specify the user ID and the password that the satellite will use to connect to the satellite control database. If synchronization credentials are not already stored at the satellite, the Connect to Control Database window opens. You must use this window to specify the user ID and password the satellite will use to connect to the satellite control database.
o Verify that the values set on the satellite are correct.
o Verify that the satellite can connect to the satellite control database.

After you complete these configuration tasks, the Version 7.2 system is ready to synchronize.

9.1.3 Installing FixPak 2 or Higher on a Version 6 Enterprise Edition System

The sections that follow describe the tasks that you must perform to upgrade a Version 6 Enterprise Edition system on Windows NT or AIX for use as a DB2 control server.
If you are using a Version 6 Control Center, also perform the steps in 9.1.4, Upgrading a Version 6 Control Center and Satellite Administration Center to verify that you have the correct level of the Control Center and the Satellite Administration Center to administer the satellite environment. 9.1.3.1 Upgrading Version 6 DB2 Enterprise Edition for Use as the DB2 Control Server For a Version 6 DB2 Enterprise Edition system to be used as the DB2 control server, it must be installed with the Control Server component, and DB2 Enterprise Edition should be at the FixPak 2 service level, or higher. Depending on whether the DB2 control server component is installed, and the service level of DB2 Enterprise Edition, you will have to perform one of the following tasks: * Install the DB2 control server component to an existing DB2 Enterprise Edition V6.1 system and install FixPak 2 or higher. Then update the satellite control database (SATCTLDB) on the system. * Upgrade an already installed DB2 control server to the FixPak 2 level or higher. Use the information that follows to identify which of the two preceding tasks you need to perform, and the steps that apply to your situation. The following is a summary of the steps that you will perform. 1. First, assess the current state of your DB2 Enterprise Edition installation. You will determine whether the Control Server component is installed, and the service level of DB2. 2. Second, based on the state information that you obtain, you will determine what needs to be done. 3. Third, you will perform the necessary steps to upgrade DB2 Enterprise Edition. The DB2 control server can only run on DB2 Enterprise Edition for Windows NT and AIX. 
Continue with the instructions that are appropriate for your platform:
* Upgrading DB2 Enterprise Edition on Windows NT
* Upgrading DB2 Enterprise Edition on AIX

Upgrading DB2 Enterprise Edition on Windows NT

Use the information in the sections that follow to determine the current service level of your Version 6 DB2 Enterprise Edition system, and the steps that you need to perform to update the system to the FixPak 2 service level or higher. You will need to perform the steps of one or more of the following sections:
* Assessing DB2 Enterprise Edition on Windows NT
* Determining What Needs to Be Done
* Installing the Control Server Component on Windows NT
* Installing FixPak 2 or Higher on Windows NT
* Upgrading the SATCTLDB on Windows NT

Assessing DB2 Enterprise Edition on Windows NT

If you have DB2 Enterprise Edition installed on Windows NT, perform the following steps:
1. Check whether the Control Server component is installed. Use the Registry Editor to display the list of installed components:
   a. Enter regedit at a command prompt.
   b. Under the HKEY_LOCAL_MACHINE\SOFTWARE\IBM\DB2\Components registry key, check whether the Control Server is listed. If it is not listed, the control server is not installed.
2. Determine the service level of DB2 Enterprise Edition. Issue the db2level command from a command prompt. Use the table that follows to interpret the output:

   Values of Key Fields in the db2level Output

   Release    Level      Informational Tokens               Your DB2 system is at:
   SQL06010   01010104   db2_v6, n990616                    Version 6.1 base
   SQL06010   01020104   DB2 V6.1.0.1, n990824, WR21136     Version 6.1 plus FixPak 1
   SQL06010   01030104   DB2 V6.1.0.6, s991030, WR21163     Version 6.1 plus FixPak 2
                         or DB2 V6.1.0.9, s000101, WR21173

   Note: If the level is greater than 01030104, your system is at a higher FixPak than FixPak 2.
3. Record the information that you find, and continue at Determining What Needs to Be Done.
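Both assessment checks can be run from a Windows NT command prompt. The reg query invocation below is an assumption (reg.exe ships with the Windows NT Resource Kit and may not be present on every system; the Registry Editor steps above are the documented route):

```shell
# Check for the Control Server component (assumes reg.exe is available;
# otherwise use regedit as described above).
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\IBM\DB2\Components"

# Determine the service level; compare the Level field and informational
# tokens with the table above.
db2level
```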
Determining What Needs to Be Done

Using the information that you have gathered, find the row in the following table that applies to your situation, and follow the steps that are required to prepare your DB2 Enterprise Edition system to support the DB2 control server at the FixPak 2 level or higher. Sections that follow the table provide instructions for performing the required steps. Consider checking off each step as you perform it. Only perform the steps that apply to your situation.

   Control Server       Service Level of DB2         Steps required to prepare your
   Component Installed  Enterprise Edition System    DB2 Enterprise Edition system

   No                   Version 6.1 base, or         1. Installing the Control Server
                        Version 6.1 plus FixPak 1,      Component on Windows NT
                        or Version 6.1 plus          2. Installing FixPak 2 or Higher
                        FixPak 2 or higher              on Windows NT
                                                     3. Upgrading the SATCTLDB on
                                                        Windows NT

   Yes                  Version 6.1 base, or         1. Installing FixPak 2 or Higher
                        Version 6.1 plus FixPak 1       on Windows NT
                                                     2. Upgrading the SATCTLDB on
                                                        Windows NT

   Yes                  Version 6.1 plus FixPak 2    1. Upgrading the SATCTLDB on
                        or higher                       Windows NT

Installing the Control Server Component on Windows NT

To install the Control Server component on Windows NT:
1. Ensure that all database activity on the system is complete before proceeding.
2. Insert the DB2 Universal Database Enterprise Edition Version 6.1 CD in the CD drive. If the installation program does not start automatically, run the setup command in the root of the CD to start the installation process.
3. When prompted, shut down all the processes that are using DB2.
4. On the Welcome window, select Next.
5. On the Select Products window, ensure that DB2 Enterprise Edition is selected.
6. On the Select Installation Type panel, click Custom.
7. On the Select Components panel, ensure that the Control Server component is selected, and click Next.
Note: If you select other components that are not already installed on your system, these components will be installed too. You cannot alter the drive or directory in which DB2 is installed. 8. On the Configure DB2 Services panels, you can modify the protocol values and the start-up options for the Control Server instance, or take the default values. Either modify the defaults and click Next, or click Next to use the defaults. 9. Click Next on the Start Copy files window to begin the installation process. 10. When the file copying process is complete, you have the option of rebooting your system. You should reboot now. The changes made to the system for the Control Server do not take effect until the system is rebooted. When the installation process is complete and you have rebooted the system, the satellite control database (SATCTLDB) that was created as part of the Control Server installation must be cataloged in the DB2 instance if you want to use the Control Center and Satellite Administration Center locally on the system. To catalog the SATCTLDB database: 1. Open a DB2 Command Window by selecting Start>Programs>DB2 for Windows NT>Command Window 2. Ensure that you are in the db2 instance. Issue the set command and check the value of db2instance. If the value is not db2, issue the following command: set db2instance=db2 3. Catalog the db2ctlsv instance by entering the following command: db2 catalog local node db2ctlsv instance db2ctlsv 4. Catalog the SATCTLDB database by entering the following command db2 catalog database satctldb at node db2ctlsv 5. Commit the cataloging actions by entering the following command: db2 terminate 6. Close the DB2 Command Window. Installing FixPak 2 or Higher on Windows NT To upgrade an existing Version 6 DB2 Enterprise Edition system on Windows NT to FixPak 2 or higher, either: * Download the latest FixPak for DB2 Enterprise Edition for Windows NT V6.1 from the Web, along with its accompanying readme. 
The FixPak can be downloaded by following the instructions at URL: http://www-4.ibm.com/software/data/db2/db2tech/version61.html Install the FixPak following the instructions in the readme.txt file.
* Use a DB2 Universal Database, Version 6.1 FixPak for Windows NT CD that is at FixPak 2 level or higher, and follow the instructions in the readme.txt file in the WINNT95 directory on the CD to complete the installation.

Upgrading the SATCTLDB on Windows NT

To upgrade the SATCTLDB database on Windows NT:
1. Determine the level of the SATCTLDB database:
   a. Log on with a user ID that has local administrative authority on the Windows NT system.
   b. Open a DB2 Command Window by selecting Start>Programs>DB2 for Windows NT>Command Window.
   c. Connect to the SATCTLDB database by entering the following command: db2 connect to satctldb
   d. Determine if the trigger I_BATCHSTEP_TRGSCR exists in the database by issuing the following query: db2 select name from sysibm.systriggers where name='I_BATCHSTEP_TRGSCR' Record the number of rows that are returned.
   e. Enter the following command to close the connection to the database: db2 connect reset
   If step 1d returned one row, the database is at the correct level. In this situation, skip step 2, and continue at step 3. If zero (0) rows are returned, the database is not at the correct level, and must be upgraded, as described in step 2, before you can perform step 3.
2. To upgrade the SATCTLDB database, perform the following steps. Enter all commands in the DB2 Command Window:
   a. Switch to the directory <install_path>\misc, where <install_path> is the install drive and path, for example c:\sqllib.
   b. Ensure that you are in the db2ctlsv instance. Issue the set command and check the value of db2instance. If the value is not db2ctlsv, issue the following command: set db2instance=db2ctlsv
   c. Drop the SATCTLDB database by entering the following command: db2 drop database satctldb
   d.
Create the new SATCTLDB database by entering the following command: db2 -tf satctldb.ddl -z satctldb.log
   e. Issue the following command: db2 terminate
3. Bind the db2satcs.dll stored procedure to the SATCTLDB database. Perform the following steps:
   a. Connect to the SATCTLDB database by entering the following command: db2 connect to satctldb
   b. Switch to the directory <install_path>\bnd, where <install_path> is the install drive and path, for example c:\sqllib.
   c. Issue the bind command, as follows: db2 bind db2satcs.bnd
4. Enter the following command to close the connection to the database: db2 connect reset
5. Close the DB2 Command Window.

Upgrading DB2 Enterprise Edition on AIX

Use the information in the sections that follow to determine the current service level of your Version 6 DB2 Enterprise Edition system, and the steps that you need to perform to update the system to the FixPak 2 service level, or higher. You will need to perform the steps of one or more of the following sections:
* Assessing DB2 Enterprise Edition on AIX
* Determining What Needs to Be Done
* Installing the Control Server Component on AIX
* Installing FixPak 2 or Higher on AIX
* Upgrading the SATCTLDB Database on AIX

Assessing DB2 Enterprise Edition on AIX

If you have Version 6 DB2 Enterprise Edition installed on AIX, perform the following steps:
1. Check whether the Control Server component is installed. Enter the following command: lslpp -l | grep db2_06_01.ctsr If no data is returned, the Control Server component is not installed.
2. Determine the service level of DB2 Enterprise Edition. Log on as a DB2 instance owner, and issue the db2level command.
Use the table that follows to interpret the output:

   Values of Key Fields in the db2level Output

   Release    Level      Informational Tokens               Your DB2 system is at:
   SQL06010   01010104   db2_v6, n990616                    Version 6.1 base
   SQL06010   01020104   DB2 V6.1.0.1, n990824, U465423     Version 6.1 plus FixPak 1
   SQL06010   01030104   DB2 V6.1.0.6, s991030, U468276     Version 6.1 plus FixPak 2
                         or DB2 V6.1.0.9, s000101, U469453

   Note: If the level is greater than 01030104, your system is at a higher FixPak than FixPak 2.
3. Record the information that you find, and continue at Determining What Needs to Be Done.

Determining What Needs to Be Done

Using the information that you have gathered, find the row in the following table that applies to your situation, and follow the steps that are required to prepare your Version 6 DB2 Enterprise Edition system to support the DB2 control server at the FixPak 2 level. Sections that follow the table provide instructions for performing the required steps. Consider checking off each step as you perform it. Only perform the steps that apply to your situation.

   Control Server       Service Level of DB2         Steps required to prepare your
   Component Installed  Enterprise Edition System    DB2 Enterprise Edition system

   No                   Version 6.1 base, or         1. Installing the Control Server
                        Version 6.1 plus FixPak 1,      Component on AIX
                        or Version 6.1 plus          2. Installing FixPak 2 or Higher
                        FixPak 2 or higher              on AIX
                                                     3. Upgrading the SATCTLDB
                                                        Database on AIX

   Yes                  Version 6.1 base, or         1. Installing FixPak 2 or Higher
                        Version 6.1 plus FixPak 1       on AIX
                                                     2. Upgrading the SATCTLDB
                                                        Database on AIX

   Yes                  Version 6.1 plus FixPak 2    1. Upgrading the SATCTLDB
                        or higher                       Database on AIX

Installing the Control Server Component on AIX

To install the Control Server component on AIX:
1. Log on as a user with root authority.
2. Insert the DB2 Universal Database Enterprise Edition Version 6.1 CD in the CD drive.
3.
Change to the directory where the CD is mounted, for example, cd /cdrom.
4. Type the following command to start the DB2 installer: ./db2setup
5. When the DB2 Installer window opens, use the tab key to select the Install option, and press Enter.
6. Locate the Enterprise Edition line and use the tab key to select the Customize option beside it. Press Enter.
7. Select the DB2 Control Server component, tab to OK, and press Enter.
8. Follow the instructions on the remaining windows to complete the installation of the DB2 Control Server component.
When the installation process is complete, create the DB2CTLSV instance and the SATCTLDB database. To perform these tasks, follow the detailed instructions in "Setting up the DB2 Control Server on AIX" in Chapter 13 of the Administering Satellites Guide and Reference.

Installing FixPak 2 or Higher on AIX

To upgrade an existing DB2 Enterprise Edition system on AIX to FixPak 2 or higher, either:
* Download the latest FixPak for DB2 Enterprise Edition for AIX V6.1 from the Web, along with its accompanying FixPak readme. The FixPak can be downloaded by following the instructions at URL: http://www-4.ibm.com/software/data/db2/db2tech/version61.html Install the FixPak following the instructions in the FixPak readme file.
* Use a DB2 Universal Database, Version 6.1 FixPak for AIX CD that is at FixPak 2 level or higher, and follow the instructions in the readme directory on the CD to complete the installation.
Ensure that you have updated the DB2CTLSV instance by running the db2iupdt command as instructed in the FixPak readme file.

Upgrading the SATCTLDB Database on AIX

To upgrade the SATCTLDB database on AIX:
1. Determine the level of the SATCTLDB database:
   a. Log in as db2ctlsv.
   b. Ensure that the database server has been started. If the server is not started, issue the db2start command.
   c. Connect to the SATCTLDB database by entering the following command: db2 connect to satctldb
   d.
Determine if the trigger I_BATCHSTEP_TRGSCR exists in the database by issuing the following query: db2 "select name from sysibm.systriggers where name='I_BATCHSTEP_TRGSCR'" Record the number of rows that are returned. e. Enter the following command to close the connection to the database: db2 connect reset If step 1d returned one row, the database is at the correct level. In this situation, skip step 2, and continue at step 3. If zero (0) rows are returned, the database is not at the correct level, and must be upgraded, as described in step 2, before you can perform step 3. 2. To upgrade the SATCTLDB database to the FixPak 2 level, perform the following steps. Enter all commands in the DB2 Command Window: a. Switch to the $HOME/sqllib/misc directory. b. Drop the SATCTLDB database by entering the following command: db2 drop database satctldb c. Create the new SATCTLDB database by entering the following command: db2 -tf satctldb.ddl -z $HOME/satctldb.log d. Issue the following command: db2 terminate 3. Bind the db2satcs.dll stored procedure to the SATCTLDB database. Perform the following steps: a. Connect to the SATCTLDB database by entering the following command db2 connect to satctldb b. Switch to the directory $HOME/sqllib/bnd. c. Issue the bind command, as follows: db2 bind db2satcs.bnd 4. Enter the following command to close the connection to the database: db2 connect reset 9.1.4 Upgrading a Version 6 Control Center and Satellite Administration Center To use a Version 6 Control Center and Satellite Administration Center with a Version 6 DB2 control server and satellite control database (SATCTLDB) that have been upgraded to FixPak 2 or higher, the tools must also be upgraded to FixPak 2 or higher. If the Control Center and the Satellite Administration Center are running on the same system as the DB2 control server, they were upgraded when the DB2 Enterprise Edition system was upgraded to FixPak 2. 
However, if you run these tools on another system, you must upgrade this system to the FixPak 2 level or higher. To upgrade this system to FixPak 2 or higher: * Download the latest FixPak for your product at the V6.1 level from the Web, along with its accompanying readme. FixPaks can be downloaded by following the instructions at URL: http://www-4.ibm.com/software/data/db2/db2tech/version61.html Install the FixPak following the instructions in the readme file. * Use a DB2 Universal Database, Version 6.1 FixPak CD for the operating system that you are running that is at FixPak 2 level or higher, and follow the instructions in the readme to complete the installation. ------------------------------------------------------------------------ Command Reference ------------------------------------------------------------------------ 10.1 db2batch - Benchmark Tool The last sentence in the description of the PERF_DETAIL parameter should read: A value greater than 1 is only valid on DB2 Version 2 and DB2 UDB servers, and is not currently supported on host machines. ------------------------------------------------------------------------ 10.2 db2cap (new command) db2cap - CLI/ODBC Static Package Binding Tool Binds a capture file to generate one or more static packages. A capture file is generated during a static profiling session of a CLI/ODBC/JDBC application, and contains SQL statements that were captured during the application run. This utility processes the capture file so that it can be used by the CLI/ODBC/JDBC driver to execute static SQL for the application. For more information on how to use static SQL in CLI/ODBC/JDBC applications, see the Static Profiling feature in the CLI Guide and Reference. Authorization * Access privileges to any database objects referenced by SQL statements recorded in the capture file. * Sufficient authority to set bind options such as OWNER and QUALIFIER if they are different from the connect ID used to invoke the db2cap command. 
* BINDADD authority if the package is being bound for the first time; otherwise, BIND authority is required. Command Syntax >>-db2cap----+----+--bind--capture-file----d--database_alias----> +--h-+ '--?-' >-----+--------------------------------+----------------------->< '--u--userid--+---------------+--' '--p--password--' Command Parameters -h/-? Displays help text for the command syntax. bind capture-file Binds the statements from the capture file and creates one or more packages. -d database_alias Specifies the database alias for the database that will contain one or more packages. -u userid Specifies the user ID to be used to connect to the data source. Note: If a user ID is not specified, a trusted authorization ID is obtained from the system. -p password Specifies the password to be used to connect to the data source. Usage Notes The command must be entered in lowercase on UNIX platforms, but can be entered in either lowercase or uppercase on Windows operating systems and OS/2. This utility supports a number of user-specified bind options that can be found in the capture file. For performance and security reasons, the file can be examined and edited with a text editor to change these options. The SQLERROR(CONTINUE) and the VALIDATE(RUN) bind options can be used to create a package. When using this utility to create a package, static profiling must be disabled. The number of packages created depends on the isolation levels used for the SQL statements that are recorded in the capture file. The package name consists of up to a maximum of the first seven characters of the package keyword from the capture file, and one of the following single-character suffixes: * 0 - Uncommitted Read (UR) * 1 - Cursor Stability (CS) * 2 - Read Stability (RS) * 3 - Repeatable Read (RR) * 4 - No Commit (NC) To obtain specific information about packages, the user can: * Query the appropriate SYSIBM catalog tables using the COLLECTION and PACKAGE keywords found in the capture file. 
* View the capture file. ------------------------------------------------------------------------ 10.3 db2ckrst (new command) db2ckrst - Check Incremental Restore Image Sequence Queries the database history and generates a list of timestamps for the backup images required for an incremental restore. A simplified restore syntax for a manual incremental restore is also generated. Authorization None Required Connection None Command Syntax >>-db2ckrst----d--database name----t--timestamp-----------------> >-----+---------------------+---+-----------------------------+-> | .-database---. | | .--------------------. | '--r--+-tablespace-+--' | V | | '--n-----tablespace name---+--' >-----+----+--------------------------------------------------->< +--h-+ +--u-+ '--?-' Command Parameters -d database name Specifies the alias name for the database that will be restored. -t timestamp Specifies the timestamp for a backup image that will be incrementally restored. -r Specifies the type of restore that will be executed. The default is database. Note: If tablespace is chosen and no table space names are given, the utility looks into the history entry of the specified image and uses the table space names listed to do the restore. -n tablespace name Specifies the name of one or more table spaces that will be restored. Note: If a database restore type is selected and a list of table space names is specified, the utility will continue as a tablespace restore using the table space names given. -h/-u/-? Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed. Examples db2ckrst -d mr -t 20001015193455 -r database db2ckrst -d mr -t 20001015193455 -r tablespace db2ckrst -d mr -t 20001015193455 -r tablespace -n tbsp1 tbsp2 > db2 backup db mr Backup successful. The timestamp for this backup image is : 20001016001426 > db2 backup db mr incremental Backup successful.
The timestamp for this backup image is : 20001016001445 > db2ckrst -d mr -t 20001016001445 Suggested restore order of images using timestamp 20001016001445 for database mr. =================================================================== db2 restore db mr incremental taken at 20001016001445 db2 restore db mr incremental taken at 20001016001426 db2 restore db mr incremental taken at 20001016001445 =================================================================== > db2ckrst -d mr -t 20001016001445 -r tablespace -n userspace1 Suggested restore order of images using timestamp 20001016001445 for database mr. =================================================================== db2 restore db mr tablespace ( USERSPACE1 ) incremental taken at 20001016001445 db2 restore db mr tablespace ( USERSPACE1 ) incremental taken at 20001016001426 db2 restore db mr tablespace ( USERSPACE1 ) incremental taken at 20001016001445 =================================================================== Usage Notes The database history must exist in order for this utility to be used. If the database history does not exist, specify the HISTORY FILE option in the RESTORE command before using this utility. If the FORCE option of the PRUNE HISTORY command is used, it will be possible to delete entries that are required for recovery from the most recent, full database backup image. The default operation of the PRUNE HISTORY command prevents required entries from being deleted. It is recommended that the FORCE option of the PRUNE HISTORY command not be used. It is recommended that you keep good records of your backups and use this utility as a guide. ------------------------------------------------------------------------ 10.4 db2gncol (new command) db2gncol - Update Generated Column Values Updates generated columns in tables that are in check pending mode and have limited log space. 
This tool is used to prepare for a SET INTEGRITY statement on a table that has columns which are generated by expressions. Authorization One of the following * sysadm * dbadm Command Syntax >>-db2gncol----d--database----s--schema_name----t--table_name---> >-----c--commit_count----+---------------------------+----------> '--u--userid---p--password--' >-----+-----+-------------------------------------------------->< '--h--' Command Parameters -d database Specifies an alias name for the database in which the table is located. -s schema_name Specifies the schema name for the table. The schema name is case sensitive. -t table_name Specifies the table for which new column values generated by expressions are to be computed. The table name is case sensitive. -c commit_count Specifies the number of rows updated between commits. This parameter influences the size of the log space required to generate the column values. -u userid Specifies a user ID with system administrator or database administrator privileges. If this option is omitted, the current user is assumed. -p password Specifies the password for the specified user ID. -h Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed. Usage Notes Using this tool instead of the FORCE GENERATED option on the SET INTEGRITY statement may be necessary if a table is large and the following conditions exist: * All column values must be regenerated after altering the generation expression of a generated column. * An external UDF used in a generated column was changed, causing many column values to change. * A generated column was added to the table. * A large load or load append was performed that did not provide values for the generated columns. * The log space is too small due to long-running concurrent transactions or the size of the table. This tool will regenerate all column values that were created based on expressions. 
While the table is being updated, intermittent commits are performed to avoid using up all of the log space. Once db2gncol has been run, the table can be taken out of check pending mode using the SET INTEGRITY statement. ------------------------------------------------------------------------ 10.5 db2inidb - Initialize a Mirrored Database In a split mirror environment, this command is used to initialize a mirrored database for different purposes. Authorization Must be one of the following: o sysadm o sysctrl o sysmaint Required Connection None Command Syntax >>-db2inidb----database_alias----AS----+-SNAPSHOT-+------------>< +-STANDBY--+ '-MIRROR---' Command Parameters database_alias Specifies the alias of the database to be initialized. SNAPSHOT Specifies that the mirrored database will be initialized as a clone of the primary database. This database is read only. STANDBY Specifies that the database will be placed in roll forward pending state. New logs from the primary database can be fetched and applied to the standby database. The standby database can then be used in place of the primary database if it goes down. MIRROR Specifies that the mirrored database is to be used as a backup image which can be used to restore the primary database. 
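For example, after splitting a mirror of a database, the copy could be initialized in any of the three modes (the database alias sample is hypothetical; a given copy would be initialized in only one mode):

```shell
db2inidb sample as snapshot   # read-only clone of the primary database
db2inidb sample as standby    # roll forward pending; apply logs from the primary
db2inidb sample as mirror     # treat the copy as a backup image for restore
```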
------------------------------------------------------------------------ 10.6 db2look - DB2 Statistics Extraction Tool The syntax diagram should appear as follows: >>-db2look---d--DBname----+--------------+---+-----+---+-----+--> '--u--Creator--' '--s--' '--g--' >-----+-----+---+-----+---+-----+---+-----+---+-----+-----------> '--a--' '--h--' '--r--' '--c--' '--p--' >-----+------------+---+-------------------+--------------------> '--o--Fname--' '--e--+----------+--' '--t Tname-' >-----+-------------------+---+-----+---+-----+-----------------> '--m--+----------+--' '--l--' '--x--' '--t Tname-' >-----+---------------------------+---+-----+------------------>< '--i--userid---w--password--' '--f--' The -td x parameter has been added following the -c parameter. Its definition is as follows: Specifies the statement delimiter for SQL statements generated by db2look. If this option is not specified, the default is the semicolon ';'. It is recommended that this option be used if the -e option is specified. In this case, the extracted objects may contain triggers or SQL routines. The following example will also be added: Generate the DDL statements for objects created by all users in the database DEPARTMENT. The db2look output is sent to file db2look.sql: db2look -d department -a -e -td % -o db2look.sql db2 -td% -f db2look.sql ------------------------------------------------------------------------ 10.7 db2updv7 - Update Database to Version 7 Current Fix Level This command updates the system catalogs in a database to support the current FixPak in the following ways: * Enables the use of the new built-in functions (ABS, DECRYPT_BIN, DECRYPT_CHAR, ENCRYPT, GETHINT, MULTIPLY_ALT, and ROUND). * Enables the use of the new built-in procedures (GET_ROUTINE_SAR and PUT_ROUTINE_SAR). * Adds or applies corrections to WEEK_ISO and DAYOFWEEK_ISO functions on Windows and OS/2 databases. * Applies a correction to table packed descriptors for tables migrated from Version 2 to Version 6.
* Creates the view SYSCAT.SEQUENCES. Authorization sysadm Required Connection Database. This command automatically establishes a connection to the specified database. Command Syntax >>-db2updv7----d---database_name--------------------------------> >-----+---------------------------+---+-----+------------------>< '--u--userid---p--password--' '--h--' Command Parameters -d database-name Specifies the name of the database to be updated. -u userid Specifies the user ID. -p password Specifies the password for the user. -h Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed. Example After installing the FixPak, update the system catalog in the sample database by issuing the following command: db2updv7 -d sample Usage Notes This tool can only be used on a database running DB2 Version 7.1 or Version 7.2 with at least FixPak 2 installed. If the command is issued more than once, no errors are reported and each of the catalog updates is applied only once. To enable the new built-in functions, all applications must disconnect from this database and the database must be deactivated if it has been activated. ------------------------------------------------------------------------ 10.8 New Command Line Processor Option (-x, Suppress printing of column headings) A new option, -x, tells the command line processor to return data without any headers, including column names. The default setting for this command option is OFF. ------------------------------------------------------------------------ 10.9 True Type Font Requirement for DB2 CLP To correctly display the national characters for single-byte (SBCS) languages from the DB2 command line processor (CLP) window, change the font to True Type.
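The new -x option described in 10.8 is most useful in scripts, where column headings would otherwise have to be stripped from the output. A hypothetical example (the employee table and its columns are illustrative, as found in the DB2 SAMPLE database):

```shell
# With -x, only the data rows are returned -- no column names or summary line.
db2 -x "select empno from employee where workdept = 'A00'"
```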
------------------------------------------------------------------------ 10.10 ADD DATALINKS MANAGER The required authorization level for this command is one of the following: * sysadm * sysctrl * sysmaint The following usage note should be added: This command is effective only after all applications have been disconnected from the database. The DB2 Data Links Manager being added must be completely set up and running for this command to be successful. The database must also be registered on the DB2 Data Links Manager using the dlfm add_db command. The maximum number of DB2 Data Links Managers that can be added to a database is 16. ------------------------------------------------------------------------ 10.11 ARCHIVE LOG (new command) Archive Log Closes and truncates the active log file for a recoverable database. If user exit is enabled, issues an archive request. Authorization One of the following: * sysadm * sysctrl * sysmaint * dbadm Required Connection This command automatically establishes a connection to the specified database. If a connection already exists, an error is returned. Command Syntax >>-ARCHIVE LOG FOR----+-DATABASE-+--database-alias--------------> '-DB-------' >-----+---------------------------------------+-----------------> '-USER--username--+------------------+--' '-USING--password--' >-------| On Node clause |------------------------------------->< On Node clause |---ON----+-| Node List clause |-------------------------+------| '-ALL NODES--+-------------------------------+-' '-EXCEPT--| Node List clause |--' Node List clause .-,-----------------------------------. V | |---+-NODE--+--(-----node number--+------------------+--+---)---| '-NODES-' '-TO--node number--' Command Parameters DATABASE database-alias Specifies the alias of the database whose active log is to be archived. USER username Identifies the user name under which a connection will be attempted. USING password Specifies the password to authenticate the user name. 
ON ALL NODES Specifies that the command should be issued on all nodes in the db2nodes.cfg file. This is the default if a node clause is not specified. EXCEPT Specifies that the command should be issued on all nodes in the db2nodes.cfg file, except those specified in the node list. ON NODE/ON NODES Specifies that the logs should be archived for the specified database on a set of nodes. node number Specifies a node number in the node list. TO node number Used when specifying a range of nodes for which the logs should be archived. All nodes from the first node number specified up to and including the second node number specified are included in the node list. Usage Notes This command can be used to collect a complete set of log files up to a known point. The log files can then be used to update a standby database. This function can only be executed when there is no database connection to the specified database. This prevents a user from executing the command with uncommitted transactions. As such, the ARCHIVE LOG command will not forcibly commit the user's incomplete transactions. If a database connection to the specified database already exists and this command is executed, the command will terminate and return an error. If another application has transactions in progress with the specified database when this command is executed, there will be a slight performance degradation since the command flushes the log buffer to disk. Any other transactions attempting to write log records to the buffer will have to wait until the flush is complete. If used in an MPP environment, a subset of nodes may be specified by using a node clause. If the node clause is not specified, the default behaviour for this command is to close and archive the active log on all nodes. Using this command will cause a database to lose a portion of its LSN space, and thereby hasten the exhaustion of valid LSNs. 
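The node clause semantics above (inclusive TO ranges, and ALL NODES with an EXCEPT list) can be sketched as follows. This is an illustrative model only; the function name and the (start, end) pair representation are hypothetical, not part of DB2:

```python
def expand_node_clause(ranges, all_nodes=None, except_ranges=None):
    """Expand an ARCHIVE LOG node clause into a set of node numbers.

    ranges: list of (start, end) pairs; use (n, n) for a single node.
    all_nodes + except_ranges models ALL NODES EXCEPT (...), where
    all_nodes stands for the nodes listed in db2nodes.cfg.
    """
    def expand(rs):
        out = set()
        for start, end in rs:
            out.update(range(start, end + 1))  # TO is inclusive of both ends
        return out

    if all_nodes is not None:
        return set(all_nodes) - expand(except_ranges or [])
    return expand(ranges)

# ON NODES (1 TO 3, 5)
print(sorted(expand_node_clause([(1, 3), (5, 5)])))  # [1, 2, 3, 5]
# ON ALL NODES EXCEPT NODE (2), with nodes 0-3 in db2nodes.cfg
print(sorted(expand_node_clause([], all_nodes=range(4), except_ranges=[(2, 2)])))  # [0, 1, 3]
```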
------------------------------------------------------------------------ 10.12 BACKUP DATABASE 10.12.1 Syntax Diagram The syntax diagram for BACKUP DATABASE will be updated to reflect the new INCREMENTAL parameter and the optional DELTA argument. Specifying the INCREMENTAL option alone will result in a cumulative backup image being produced. The optional DELTA argument can be used to specify the production of a non-cumulative backup image. >>-BACKUP----+-DATABASE-+---database-alias----------------------> '-DB-------' >-----+---------------------------------------+-----------------> '-USER--username--+------------------+--' '-USING--password--' >-----+--------------------------------------------+------------> | .-,------------------. | | V | | '-TABLESPACE--(-----tablespace-name---+---)--' >-----+---------+---+--------------------------+----------------> '-ONLINE--' '-INCREMENTAL--+--------+--' '-DELTA--' >-----+-------------------------------------------------------+-> +-USE TSM--+-------------------------------+------------+ | '-OPEN--num-sessions--SESSIONS--' | | .-,--------. | | V | | +-TO----+-dir-+--+--------------------------------------+ | '-dev-' | '-LOAD--library-name--+-------------------------------+-' '-OPEN--num-sessions--SESSIONS--' >-----+-----------------------------+---------------------------> '-WITH--num-buffers--BUFFERS--' >-----+----------------------+---+-----------------+------------> '-BUFFER--buffer-size--' '-PARALLELISM--n--' >----+-------------------+------------------------------------->< '-WITHOUT PROMPTING-' 10.12.2 DB2 Data Links Manager Considerations If one or more Data Links servers are configured for the database, the backup operation will succeed, even if a Data Links server is not available. When the Data Links server restarts, backup processing will be completed on that Data Links server before it becomes available to the database again. 
Note: If there are twice as many backups still waiting for an unavailable Data Links server as are retained in the history file for the database (database configuration parameter num_db_backups), the backup operation will fail. ------------------------------------------------------------------------ 10.13 BIND The command syntax for DB2 should be modified to show the federated parameter as follows: FEDERATED--+--NO--+-- '-YES--' FEDERATED Specifies whether a static SQL statement in a package references a nickname or a federated view. If this option is not specified and a static SQL statement in the package references a nickname or a federated view, a warning is returned and the package is created. NO A nickname or federated view is not referenced in the static SQL statements of the package. If a nickname or federated view is encountered in a static SQL statement during the prepare or bind of this package, an error is returned and the package is not created. YES A nickname or federated view can be referenced in the static SQL statements of the package. If no nicknames or federated views are encountered in static SQL statements during the prepare or bind of the package, no errors or warnings are returned and the package is created. Note: In Version 7 FixPak 2, an SQL1179W warning message is generated by the server when precompiling a source file or binding a bind file without specifying a value for the FEDERATED option. The same message is generated when the source file or bind file includes static SQL references to a nickname. There are two exceptions: o For clients that are at an earlier FixPak than Version 7 FixPak 2 or for downlevel clients, the sqlaprep() API does not report this SQL1179W warning in the message file. The Command Line Processor PRECOMPILE command also does not output the warning in this case. 
o For clients that are at an earlier FixPak than Version 7 FixPak 2 or for downlevel clients, the sqlabndx API does report this SQL1179W warning in the message file. However, the message file also incorrectly includes an SQL0092N message indicating that no package was created. This is not correct as the package is indeed created. The Command Line Processor BIND command returns the same erroneous warning. ------------------------------------------------------------------------ 10.14 CALL The syntax for the CALL command should appear as follows: .-,---------------. V | >>-CALL--proc-name---(-----+-----------+--+---)---------------->< '-argument--' The description of the argument parameter has been changed to: Specifies one or more arguments for the stored procedure. All input and output arguments must be specified in the order defined by the procedure. Output arguments are specified using the "?" character. For example, a stored procedure foo with one integer input parameter and one output parameter would be invoked as "call foo (4, ?)". Notes: 1. When invoking this utility from an operating system prompt, it may be necessary to delimit the command as follows: "call DEPT_MEDIAN (51)" A single quotation mark (') can also be used. 2. The stored procedure being called must be uniquely named in the database. 3. The stored procedure must be cataloged. If an uncataloged procedure is called, a DB21036 error message is returned. 4. A DB21101E message is returned if not enough parameters are specified on the command line, or the command line parameters are not in the correct order (input, output), according to the stored procedure definition. 5. There is a maximum of 1023 characters for a result column. 6. LOBs and binary data (FOR BIT DATA, VARBINARY, LONGVARBINARY, GRAPHIC, VARGRAPHIC, or LONGVARGRAPHIC) are not supported. 7. CALL supports result sets. 8. If a stored procedure with an OUTPUT variable of an unsupported type is called, the CALL fails, and message DB21036 is returned. 9.
The maximum length for an INPUT parameter to CALL is 1024. ------------------------------------------------------------------------ 10.15 DROP DATALINKS MANAGER (new command) DROP DATALINKS MANAGER Drops a DB2 Data Links Manager from the list of registered DB2 Data Links Managers for a specified database. Authorization One of the following: * sysadm * sysctrl * sysmaint Command Syntax >>-DROP DATALINKS MANAGER FOR----+-DATABASE-+--dbname---USING---> '-DB-------' >----name------------------------------------------------------>< Command Parameters DATABASE dbname Specifies a database name. USING name Specifies the name of the DB2 Data Links Manager server as shown by the LIST DATALINKS MANAGER command. Examples Example 1 Dropping a DB2 Data Links Manager micky.almaden.ibm.com from database TEST under instance validate residing on host bramha.almaden.ibm.com when some database tables have links to micky.almaden.ibm.com. It is extremely important that the following steps be taken when dropping a DB2 Data Links Manager. 1. Take a database backup for database TEST. 2. If there are any links to micky.almaden.ibm.com, unlink them: a. Log on with a user ID belonging to SYSADM_GROUP and obtain an exclusive mode connection to the database TEST. connect to test in exclusive mode Ensure that this is the only connection to test using that user ID. This will prevent any new links from being created. b. Obtain a list of all FILE LINK CONTROL DATALINK columns and the tables containing them in the database. select tabname, colname from syscat.columns where substr(dl_features, 2, 1) = 'F' c. For each FILE LINK CONTROL DATALINK column in the list, issue SQL SELECT to determine if links to micky.almaden.ibm.com exist. For example, for a DATALINK column c in table t, the SELECT statement would be: select count(*) from t where dlurlserver(t.c) = 'MICKY.ALMADEN.IBM.COM' d.
For each FILE LINK CONTROL DATALINK column containing such links, issue SQL UPDATE to unlink values which are links to micky.almaden.ibm.com. For example, for a DATALINK column c in table t, the UPDATE statement would be: update t set t.c = null where dlurlserver(t.c) = 'MICKY.ALMADEN.IBM.COM' If t.c is not nullable, the following can be used instead: update t set t.c = dlvalue('') where dlurlserver(t.c) = 'MICKY.ALMADEN.IBM.COM' e. Commit this SQL UPDATE: commit 3. Issue the DROP DATALINKS MANAGER command: drop datalinks manager for db test using node micky.almaden.ibm.com 4. Terminate the exclusive mode connection to make the changes effective and to allow other connections to the database: terminate 5. Initiate unlink processing and garbage collection of backup information for TEST on micky.almaden.ibm.com. As DB2 Data Links Manager Administrator, issue the following command on micky.almaden.ibm.com: dlfm drop_dlm test validate bramha.almaden.ibm.com This will unlink any files that are still linked to database TEST, just in case the user has missed unlinking them before invoking step 3. If micky.almaden.ibm.com has backup information (for example, archive files, metadata) for files previously linked to database TEST, this command will initiate garbage collection of that information. The actual unlinking and garbage collection will be performed asynchronously. Example 2 Deleting DATALINK values that are links to files on a DB2 Data Links Manager called micky.almaden.ibm.com, when the Manager has already been dropped from database TEST. This may be required if steps in Example 1 were not followed while dropping micky.almaden.ibm.com. SQL DELETE, SELECT, and UPDATE statements will not be successful for such DATALINK values (SQL0368). The user must run a reconcile operation for each table that contains such DATALINK values. Each DATALINK value that was a link to micky.almaden.ibm.com will be updated to NULL or a zero-length DATALINK value. 
Any row containing such a value will be inserted into the exception table (if one was specified). However, the DATALINK value will not include the prefix name. The prefix name in the original DATALINK value is no longer obtainable by the system, because micky.almaden.ibm.com has been dropped. For example, if the original DATALINK value was 'http://host.com/dlfs/x/y/a.b' and '/dlfs' is the prefix name, the DATALINK value in the exception table will contain 'http://host.com/x/y/a.b'. The files referenced by these DATALINK values will continue to remain in linked state on the DB2 Data Links Manager. The dlfm drop_dlm command can be issued on micky.almaden.ibm.com to initiate unlink processing for these files. If micky.almaden.ibm.com has backup information (for example, archive files, metadata) for files previously linked to database TEST, this command will initiate garbage collection of that information. The actual unlinking and garbage collection will be performed asynchronously. Example 3 Multiple incarnations of a DB2 Data Links Manager micky.almaden.ibm.com for a database TEST. This scenario demonstrates that a DB2 Data Links Manager can be re-registered after being dropped, and that it is then treated as a completely new DB2 Data Links Manager. The following steps are only illustrative of a scenario that is possible. If, as recommended, the steps in Example 1 are followed for dropping micky.almaden.ibm.com, links to the older incarnation of micky.almaden.ibm.com will not exist; that is, one will not see error SQL0368 in step 7 below. 1. Register micky.almaden.ibm.com to database TEST: add datalinks manager for db test using node micky.almaden.ibm.com port 14578 2. Create links to files on micky.almaden.ibm.com: connect to test create table t(c1 int, c2 datalink linktype url file link control mode db2options) insert into t values(1, dlvalue('file://micky.almaden.ibm.com/pictures/yosemite.jpg')) commit terminate 3. 
Drop micky.almaden.ibm.com from database TEST: drop datalinks manager for db test using micky.almaden.ibm.com 4. Select DATALINK values: connect to test select * from t terminate The user will see: SQL0368 The DB2 Data Links Manager "MICKY.ALMADEN.IBM.COM" is not registered to the database. SQLSTATE=55022. 5. Register micky.almaden.ibm.com to database TEST again: add datalinks manager for db test using node micky.almaden.ibm.com port 14578 6. Insert more DATALINK values: connect to test insert into t values(2, dlvalue('file://micky.almaden.ibm.com/pictures/tahoe.jpg')) commit 7. Select DATALINK values: select c2 from t where c1 = 2 is successful because the value being selected is a link to the currently registered incarnation of micky.almaden.ibm.com. select c2 from t where c1 = 1 returns: SQL0368 The DB2 Data Links Manager "MICKY.ALMADEN.IBM.COM" is not registered to the database. SQLSTATE=55022. because the value being selected is a link to the incarnation of micky.almaden.ibm.com which was dropped in step 3 above. Usage Notes The effects of the DROP DATALINKS MANAGER command cannot be rolled back. It is extremely important to follow the steps outlined in Example 1 when using the DROP DATALINKS MANAGER command. This command is effective only after all applications have been disconnected from the database. Upon successful completion of the command, the user is informed (DB210201I) that no processing has been done on the DB2 Data Links Manager. Before dropping a DB2 Data Links Manager, the user must ensure that the database does not have any links to files on that DB2 Data Links Manager. If links do exist in the database after a DB2 Data Links Manager has been dropped, the user must run the reconcile utility to get rid of them. The reconcile utility will set these links to NULL (if the DATALINK column is nullable), or to a zero-length DATALINK value. Files corresponding to links between a database and a dropped DB2 Data Links Manager remain in linked state. 
That is, they are inaccessible to operations like read, write, rename, delete, change of permissions, or change of ownership. Archived copies of unlinked files on the DB2 Data Links Manager will not be garbage collected by this command. However, users can explicitly initiate unlink processing and garbage collection using the dlfm drop_dlm command on the DB2 Data Links Manager. It is recommended that a database backup be taken before dropping a DB2 Data Links Manager. In addition, ensure that all replication subscriptions have replicated all changes involving this DB2 Data Links Manager. If a backup was taken before the DB2 Data Links Manager was dropped from a database, and that backup image is used to restore after that DB2 Data Links Manager was dropped, restore or rollforward processing may put certain tables in datalink reconcile pending (DRP) state. ------------------------------------------------------------------------ 10.16 EXPORT In the section "DB2 Data Links Manager Considerations", Step 3 of the procedure to ensure that a consistent copy of the table and the corresponding files referenced by DATALINK columns are copied for export should read: 3. Run the dlfm_export utility at each Data Links server. Input to the dlfm_export utility is the control file name, which is generated by the export utility. This produces a tar (or equivalent) archive of the files listed within the control file. For Distributed File Systems (DFS), the dlfm_export utility will get the DCE network root credentials before archiving the files listed in the control file. dlfm_export does not capture the ACLs information of the files that are archived. In the same section, the bullets following "Successful execution of EXPORT results in the generation of the following files" should be modified as follows: The second sentence in the first bullet should read: A DATALINK column value in this file has the same format as that used by the import and load utilities. 
The first sentence in the second bullet should read: Control files server_name, which are generated for each Data Links server. (On the Windows NT operating system, a single control file, ctrlfile.lst, is used by all Data Links servers. For DFS, there is one control file for each cell.) The following sentence should be added to the paragraph before Table 5: For more information about dlfm_export, refer to the "Data Movement Utilities Guide and Reference" under "Using Export to move DB2 Data Links Manager Data". ------------------------------------------------------------------------ 10.17 GET DATABASE CONFIGURATION The description of the DL_TIME_DROP configuration parameter should be changed to the following: Applies to DB2 Data Links Manager only. This parameter specifies the number of days files would be retained on an archive server (such as a TSM server) after a DROP DATABASE command is issued. The new parameter TRACKMOD will be added to the GET DATABASE CONFIGURATION command. The syntax will appear as follows: >>-GET----+-DATABASE-+---+-CONFIGURATION-+--FOR-----------------> '-DB-------' +-CONFIG--------+ '-CFG-----------' .-NO--. >----database-alias---TRACKMOD--+-YES-+------------------------>< The parameter description will be added as follows: TRACKMOD Indicates whether DB2 should track modified pages in the database in order to allow incremental backups to be taken. NO Specifies that changed pages should not be tracked. This is the default for databases created prior to Version 7.1, FixPak 3. YES Specifies that changed pages should be tracked. When this parameter is set, incremental backups of the database can be made. This is the default for databases created with Version 7.1, FixPak 3 or later. ------------------------------------------------------------------------ 10.18 GET ROUTINE (new command) GET ROUTINE Retrieves a routine SQL Archive (SAR) file for a specified SQL routine. Authorization dbadm Required Connection Database.
If implicit connect is enabled, a connection to the default database is established. Command Syntax >>-GET ROUTINE--INTO---file_name----FROM----+-----------+-------> '-SPECIFIC--' >-------PROCEDURE----routine_name------------------------------>< Command Parameters INTO file-name Names the file where routine SQL archive (SAR) is stored. FROM Indicates the start of the specification of the routine to be retrieved. SPECIFIC The specified routine-name is given as a specific name. PROCEDURE The routine is an SQL procedure. routine-name The name of the procedure. If SPECIFIC is specified then it is the specific name of the procedure. If the name is not qualified with a schema name, the CURRENT SCHEMA is used as the schema name of the routine. The routine-name must be an existing procedure that is defined as an SQL procedure. Examples GET ROUTINE INTO procs/proc1.sar FROM PROCEDURE myappl.proc1; ------------------------------------------------------------------------ 10.19 GET SNAPSHOT The description for the FCM FOR ALL NODES parameter should appear as follows: Provides Fast Communication Manager (FCM) statistics between the node against which the GET SNAPSHOT command was issued and the other nodes in the EEE instance. ------------------------------------------------------------------------ 10.20 IMPORT In the section "DB2 Data Links Manager Considerations", the following sentence should be added to Step 3: For Distributed File Systems (DFS), update the cell name information in the URLs (of the DATALINK columns) from the exported data for the SQL table, if required. The following sentence should be added to Step 4: For DFS, define the cells at the target configuration in the DB2 Data Links Manager configuration file. The paragraph following Step 4 should read: When the import utility runs against the target database, files referred to by DATALINK column data are linked on the appropriate Data Links servers.
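The routine-name resolution rule in 10.18 (GET ROUTINE) can be sketched as follows: an unqualified name is qualified with the CURRENT SCHEMA special register. This is an illustrative model only; the function name is hypothetical and the actual resolution is performed inside DB2:

```python
def qualify_routine_name(name, current_schema):
    """Resolve the routine name for GET ROUTINE: a name without an
    explicit schema qualifier is qualified with CURRENT SCHEMA."""
    return name if "." in name else f"{current_schema}.{name}"

print(qualify_routine_name("myappl.proc1", "USER1"))  # already qualified -> myappl.proc1
print(qualify_routine_name("proc1", "USER1"))         # unqualified -> USER1.proc1
```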
------------------------------------------------------------------------ 10.21 LIST HISTORY The CREATE TABLESPACE parameter will be added to the LIST HISTORY command. The syntax diagram will appear as follows: >>-LIST HISTORY----+-------------------+------------------------> +-BACKUP------------+ +-ROLLFORWARD-------+ +-ALTER TABLESPACE--+ +-DROPPED TABLE-----+ +-LOAD--------------+ +-RENAME TABLESPACE-+ '-CREATE TABLESPACE-' >-----+-ALL--------------------------------+--------------------> +-SINCE--timestamp-------------------+ '-CONTAINING--+-_schema.objectname-+-' '-_objectname--------' >----FOR--+----------+---database-alias------------------------>< +-DATABASE-+ '-DB-------' The parameter description will be added as follows: CREATE TABLESPACE Lists all CREATE TABLESPACE and DROP TABLESPACE operations. The Usage Notes will be updated as follows: The following symbols will be added to the Operation section of the report: * A - Create tablespace * O - Drop tablespace * U - Unload The symbols under the Type section of the report will be reorganized as follows: * Backup Types o F - Offline o N - Online o I - Incremental Offline o O - Incremental Online o D - Delta Offline o E - Delta Online * Rollforward Types o E - End of log o P - Point in time * Load Types o I - Insert o R - Replace * Alter tablespace Types o C - Add containers o R - Rebalance * Quiesce Types o S - Quiesce Share o U - Quiesce Update o X - Quiesce Exclusive o Z - Quiesce Reset ------------------------------------------------------------------------ 10.22 LOAD In the section "DB2 Data Links Manager Considerations", add the following sentence to Step 1 of the procedure that is to be performed before invoking the load utility, if data is being loaded into a table with a DATALINK column that is defined with FILE LINK CONTROL: For Distributed File Systems (DFS), ensure that the DB2 Data Links Managers within the target cell are registered. 
The following sentence should be added to Step 5: For DFS, register the cells at the target configuration referred to by DATALINK data (to be loaded) in the DB2 Data Links Manager configuration file. In the section "Representation of DATALINK Information in an Input File", the first note following the parameter description for urlname should read: Currently "http", "file", "unc", and "dfs" are permitted as a schema name. The first sentence of the second note should read: The prefix (schema, host, and port) of the URL name is optional. For DFS, the prefix refers to the schema cellname filespace-junction portion. In the DATALINK data examples for both the delimited ASCII (DEL) file format and the non-delimited ASCII (ASC) file format, the third example should be removed. The DATALINK data examples in which the load or import specification for the column is assumed to be DL_URL_DEFAULT_PREFIX should be removed and replaced with the following: Following are DATALINK data examples in which the load or import specification for the column is assumed to be DL_URL_REPLACE_PREFIX ("http://qso"): * http://www.almaden.ibm.com/mrep/intro.mpeg This sample URL is stored with the following parts: o schema = http o server = qso o path = /mrep/intro.mpeg o comment = NULL string * /u/me/myfile.ps This is stored with the following parts: o schema = http o server = qso o path = /u/me/myfile.ps o comment = NULL string ------------------------------------------------------------------------ 10.23 PING (new command) PING Tests the network response time of the underlying connectivity between a client and a database server where DB2 Connect is used to establish the connection. Authorization None Required Connection Database Command Syntax .-time-. .-1--+------+---------------------. 
>>-PING---db_alias----+-+-----------------------------+-+------>< '-number_of_times--+-------+--' +-times-+ '-time--' Command Parameters db_alias Specifies the database alias for the database on a DRDA server that the ping is being sent to. Note: This parameter, although mandatory, is not currently used. It is reserved for future use. Any valid database alias name can be specified. number of times Specifies the number of iterations for this test. The value must be between 1 and 32767 inclusive. The default is 1. One timing will be returned for each iteration. Examples To test the network response time for the connection to the host database server hostdb once: db2 ping hostdb 1 or: db2 ping hostdb The command will display output that looks like this: Elapsed time: 7221 microseconds To test the network response time for the connection to the host database server hostdb 5 times: db2 ping hostdb 5 or: db2 ping hostdb 5 times The command will display output that looks like this: Elapsed time: 8412 microseconds Elapsed time: 11876 microseconds Elapsed time: 7789 microseconds Elapsed time: 10124 microseconds Elapsed time: 10988 microseconds Usage Notes A database connection must exist before invoking this command, otherwise an error will result. The elapsed time returned is for the connection between the client and a DRDA server database via DB2 Connect. ------------------------------------------------------------------------ 10.24 PUT ROUTINE (new command) PUT ROUTINE Uses the specified routine SQL Archive (SAR) file to define a routine in the database. Authorization dbadm Required Connection Database. If implicit connect is enabled, a connection to the default database is established. 
Command Syntax >>-PUT ROUTINE----FROM----file-name-----------------------------> >-----+---------------------------------------+---------------->< '-OWNER--new-owner--+----------------+--' '-USE REGISTERS--' Command Parameters FROM file-name Names the file where routine SQL archive (SAR) is stored. OWNER new-owner Specifies a new authorization-name for the routine that will be used for authorization checking of the routine. The new-owner must have the necessary privileges for the routine to be defined. If the OWNER clause is not specified, the authorization-name that originally defined the routine is used. USE REGISTERS Indicates that the CURRENT SCHEMA and CURRENT PATH special registers are used to define the routine. If this clause is not specified, the settings for the default schema and SQL path are the settings used when the routine was originally defined. CURRENT SCHEMA is used as the schema name for unqualified object names in the routine definition (including the name of the routine) and CURRENT PATH is used to resolve unqualified routines and data types in the routine definition. Examples PUT ROUTINE FROM procs/proc1.sar; Usage Notes No more than one procedure can be concurrently installed under a given schema. ------------------------------------------------------------------------ 10.25 RECONCILE The following usage note should be added to the command description: During reconciliation, attempts are made to link files which exist according to table data, but which do not exist according to Data Links Manager metadata, if no other conflict exists. A required DB2 Data Links Manager is one which has a referenced DATALINK value in the table. Reconcile tolerates unavailability of a required DB2 Data Links Manager as well as those that are configured to the database but are not part of the table data. 
If an exception table is not specified, the exception report file (filename.exp) will have the host name, file name, column ID, and reason code for each of the DATALINK column values for which file references could not be re-established. If the file reference could not be re-established because the DB2 Data Links Manager itself was dropped from the database using the DROP DATALINKS MANAGER command, the file name reported in the exception report file is not the full file name; that is, the prefix part is missing. In the exception table for the DATALINK values whose DB2 Data Links Manager is dropped or is not available, the file name in the DATALINK value is not the full file name. The prefix part is missing. For example, if the original DATALINK value was 'http://host.com/dlfs/x/y/a.b', the value reported in the exception table will be 'http://host.com/x/y/a.b'; that is, the prefix name 'dlfs' will not be included. The exception report file in this case will have 'x/y/a.b'; that is, the prefix name 'dlfs' will not be included. At the end of the reconciliation process, the table is taken out of datalink reconcile pending (DRP) state only if reconcile processing is complete on all the required DB2 Data Links Managers. If reconcile processing is pending on any of the required DB2 Data Links Managers (because they were unavailable), the table will remain, or be placed, in DRP state. The following should be added to the list of possible violations: 00010-DB2 Data Links Manager referenced by the DATALINK value has been dropped from the database using the DROP DATALINKS MANAGER command. In this case, the corresponding DATALINK value in the exception table will not contain the prefix name. For example, if the original DATALINK value was 'http://host.com/dlfs/prfx/x/y/a.b', and '/dlfs/prfx' is the prefix name, the exception table will contain 'http://host.com/x/y/a.b'. 
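The prefix-stripping behavior described above can be sketched as follows. This is an illustrative model only (the function name and string handling are hypothetical; DB2 tracks the prefix internally): once the DB2 Data Links Manager is dropped, the prefix is no longer obtainable, so the exception table value omits it and the exception report file holds only the path:

```python
def exception_values(url, prefix):
    """Model how reconcile reports a DATALINK value whose Data Links
    Manager was dropped: the exception table value omits the prefix,
    and the exception report file contains only the file path."""
    head, sep, rest = url.partition(prefix)
    if not sep:
        raise ValueError("prefix not found in DATALINK value")
    table_value = head + rest        # e.g. 'http://host.com' + '/x/y/a.b'
    report_value = rest.lstrip("/")  # e.g. 'x/y/a.b'
    return table_value, report_value

print(exception_values('http://host.com/dlfs/prfx/x/y/a.b', '/dlfs/prfx'))
# -> ('http://host.com/x/y/a.b', 'x/y/a.b')
```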
------------------------------------------------------------------------ 10.26 REORGANIZE TABLE The following sentence will be added to the Usage Notes: REORGANIZE TABLE cannot use an index that is based on an index extension. ------------------------------------------------------------------------ 10.27 RESTORE DATABASE 10.27.1 Syntax The following option will be added to the syntax of the RESTORE DATABASE command after the TABLESPACE/TABLESPACE ONLINE/HISTORY FILE options: >>-+-------------------------+--------------------------------->< '-INCREMENTAL--+-------+--' '-ABORT-' The parameter descriptions will be added as follows: INCREMENTAL Results in a manual cumulative restore of the database. The user will issue each of the restore commands. ABORT This parameter should be used to terminate an incremental restore before successful completion. The following examples will also be added: The following is a sample weekly incremental backup strategy with a recoverable database. A full backup is scheduled once per week, a delta every day, plus an incremental mid-week: (Sun) backup db kdr use adsm (Mon) backup db kdr online incremental delta use adsm (Tue) backup db kdr online incremental delta use adsm (Wed) backup db kdr online incremental use adsm (Thu) backup db kdr online incremental delta use adsm (Fri) backup db kdr online incremental delta use adsm (Sat) backup db kdr online incremental use adsm For a manual database restore of images created above on Friday morning, issue the following commands: restore db kdr incremental taken at (Thu) restore db kdr incremental taken at (Sun) restore db kdr incremental taken at (Wed) restore db kdr incremental taken at (Thu) Note: Any RESTORE command of the form db2 restore db will perform a full database restore, regardless of whether the image being restored is a database image or a table space image. 
Any RESTORE command of the form db2 restore db tablespace will perform a table space restore of the table spaces found in the image. Any RESTORE command in which a list of table spaces is provided will perform a restore of whatever table spaces were explicitly listed. 10.27.2 DB2 Data Links Manager Considerations The second paragraph in the section entitled "DB2 Data Links Manager Considerations" should be replaced with: If one or more Data Links servers are configured for the database, the restore operation will succeed, even if a Data Links server is not available. When the Data Links server restarts, restore processing will be completed on that Data Links server before it becomes available to the database again. Note: If a database restore operation is still waiting for an unavailable Data Links server, any subsequent database or table space restore operations will fail. ------------------------------------------------------------------------ 10.28 ROLLFORWARD DATABASE The second paragraph in the section entitled "DB2 Data Links Manager Considerations" should be replaced with: If one or more Data Links servers are configured for the database, the rollforward operation will succeed, even if a Data Links server is not available. When the Data Links server restarts, rollforward processing will be completed on that Data Links server before it becomes available to the database again. ------------------------------------------------------------------------ 10.29 Documentation Error in CLP Return Codes In the Command Line Processor Return Codes section of Chapter 2, the second paragraph should appear as follows: For example, the following Bourne shell script executes the GET DATABASE MANAGER CONFIGURATION command, then inspects the CLP return code:

db2 get database manager configuration
if [ "$?" = "0" ]
then echo "OK!"
fi

------------------------------------------------------------------------ Data Movement Utilities Guide and Reference ------------------------------------------------------------------------ 11.1 Chapter 2. Import 11.1.1 Using Import with Buffered Inserts The note at the end of this section should read: Note: In all environments except EEE, the buffered inserts feature is disabled during import operations in which the INSERT_UPDATE parameter is specified. ------------------------------------------------------------------------ 11.2 Chapter 3. Load 11.2.1 Pending States After a Load Operation The first two sentences in the last paragraph in this section have been changed to the following: The fourth possible state associated with the load process (check pending state) pertains to referential and check constraints, DATALINK constraints, AST constraints, or generated column constraints. For example, if an existing table is a parent table containing a primary key referenced by a foreign key in a dependent table, replacing data in the parent table places both tables (not the table space) in check pending state. 11.2.2 Load Restrictions and Limitations The following restrictions apply to generated columns and the load utility: * It is not possible to load a table that has a generated column in a unique index unless the generated column is an "include column" of the index or the generatedoverride file type modifier is used. If this modifier is used, all values for the column are expected to be supplied in the input data file. * It is not possible to load a table that has a generated column in the partitioning key unless the generatedoverride file type modifier is used. If this modifier is used, all values for the column are expected to be supplied in the input data file. 11.2.3 totalfreespace File Type Modifier The totalfreespace file type modifier (LOAD) has been modified to accept a value between 0 and 2 147 483 647.
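The two generated-column restrictions in 11.2.2 amount to a simple rule. The following checker is hypothetical — it is not part of the load utility — and merely encodes the quoted restrictions:

```python
def load_allowed(gen_col_in_unique_index, gen_col_is_include_column,
                 gen_col_in_partitioning_key, generatedoverride):
    """Encode the generated-column restrictions on LOAD (hypothetical
    checker, not part of the LOAD utility)."""
    # A generated column in a unique index needs to be an "include
    # column" of that index, or the generatedoverride modifier.
    if gen_col_in_unique_index and not (gen_col_is_include_column or
                                        generatedoverride):
        return False
    # A generated column in the partitioning key always needs the
    # generatedoverride modifier.
    if gen_col_in_partitioning_key and not generatedoverride:
        return False
    return True

# Examples of the rule:
# load_allowed(True, False, False, False) -> False  (blocked)
# load_allowed(True, True,  False, False) -> True   (include column)
# load_allowed(False, False, True, True)  -> True   (generatedoverride)
```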
------------------------------------------------------------------------ 11.3 Chapter 4. AutoLoader 11.3.1 rexecd Required to Run Autoloader When Authentication Set to YES In the Autoloader Options section, the following note will be added to the AUTHENTICATION and PASSWORD parameters description: In a Linux environment, if you are running the autoloader with the authentication option set to YES, rexecd must be enabled on all machines. If rexecd is not enabled, the following error message will be generated:

openbreeze.torolab.ibm.com: Connection refused
SQL6554N An error occurred when attempting to remotely execute a process.

The following error messages will be generated in the db2diag.log file:

2000-10-11-13.04.16.832852 Instance:svtdbm Node:000
PID:19612(db2atld) Appid:
oper_system_services sqloRemoteExec Probe:31

------------------------------------------------------------------------ Replication Guide and Reference ------------------------------------------------------------------------ 12.1 Replication and Non-IBM Servers You must use DataJoiner Version 2 or later to replicate data to or from non-IBM servers such as Informix, Microsoft SQL Server, Oracle, Sybase, and Sybase SQL Anywhere. You cannot use the relational connect function for this type of replication because DB2 Relational Connect Version 7 does not have update capability. Also, you must use DJRA (DataJoiner Replication Administration) to administer such heterogeneous replication on all platforms (AS/400, OS/2, OS/390, UNIX, and Windows) for all existing versions of DB2 and DataJoiner. ------------------------------------------------------------------------ 12.2 Replication on Windows 2000 DB2 DataPropagator Version 7 is compatible with the Windows 2000 operating system. ------------------------------------------------------------------------ 12.3 Known Error When Saving SQL Files If you use the Control Center in DB2 Connect Personal Edition, you cannot save SQL files.
If you try to save SQL files, you get an error message that the Database Administration Server (DAS) is not active, when in fact DAS is not available because it is not shipped with DB2 Connect PE. ------------------------------------------------------------------------ 12.4 DB2 Maintenance It is recommended that you install the latest DB2 maintenance for the various DB2 products that you use in your replication environment. ------------------------------------------------------------------------ 12.5 Data Difference Utility on the Web You can download the Data Difference utility (DDU) from the Web at ftp://ftp.software.ibm.com/ps/products/datapropagator/fixes/. The DDU is a sample utility that you can use to compare two versions of the same file and produce an output file that shows the differences. See the README file that accompanies the sample utility for details. ------------------------------------------------------------------------ 12.6 Chapter 3. Data replication scenario 12.6.1 Replication Scenarios See the Library page of the DataPropagator Web site (http://www.ibm.com/software/data/dpropr/) for a new heterogeneous data replication scenario. Follow the steps in that scenario to copy changes from a replication-source table in an Oracle database on AIX to a target table in a database on DB2 for Windows NT. That scenario uses the DB2 DataJoiner Replication Administration (DJRA) tool, Capture triggers, the Apply program, and DB2 DataJoiner. On page 44 of the book, the instructions in Step 6 for creating a password file should read as follows: Step 6: Create a password file Because the Apply program needs to connect to the source server, you must create a password file for user authentication. Make sure that the user ID that will run the Apply program can read the password file. To create a password file: 1. From a Windows NT command prompt window, change to the C:\scripts directory. 2. Create a new file in this directory called DEPTQUAL.PWD. 
You can create this file using any text editor, such as Notepad. The naming convention for the password file is applyqual.pwd, where applyqual is a case-sensitive string that must match the case and value of the Apply qualifier used when you created the subscription set. For this scenario, the Apply qualifier is DEPTQUAL. Note: The file-naming convention from Version 5 of DB2 DataPropagator is also supported. 3. The contents of the password file have the following format:

SERVER=server USER=userid PWD=password

Where:

server
   The name of the source, target, or control server, exactly as it appears in the subscription set table. For this scenario, these names are SAMPLE and COPYDB.
userid
   The user ID that you plan to use to administer that particular database. This value is case-sensitive for Windows NT and UNIX operating systems.
password
   The password that is associated with that user ID. This value is case-sensitive for Windows NT and UNIX operating systems.

Do not put blank lines or comment lines in this file. Add only the server-name, user ID, and password information. 4. The contents of the password file should look similar to:

SERVER=SAMPLE USER=subina PWD=subpw
SERVER=COPYDB USER=subina PWD=subpw

For more information about DB2 authentication and security, refer to the IBM DB2 Administration Guide. ------------------------------------------------------------------------ 12.7 Chapter 5. Planning for replication 12.7.1 Table and Column Names Replication does not support blanks in table and column names. 12.7.2 DATALINK Replication DATALINK replication is available on Solaris as part of Version 7.1 FixPak 1. It requires an FTP daemon that runs in the source and target DATALINK file system and supports the MDTM (modtime) command, which displays the last modification time of a given file. If you are using Version 2.6 of the Solaris operating system, or any other version that does not include FTP support for MDTM, you need additional software such as WU-FTPD.
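An FTP daemon's MDTM support can be probed by sending the command and checking the reply. The parser below is an illustrative sketch: MDTM replies have the form "213 YYYYMMDDHHMMSS", and the function simply turns that into a timestamp (some daemons may append fractional seconds, which this sketch does not handle):

```python
from datetime import datetime

def parse_mdtm_reply(reply):
    """Parse an FTP MDTM reply such as "213 20000102030405" into a
    datetime (illustrative sketch, not part of DB2 or WU-FTPD)."""
    code, _, stamp = reply.partition(" ")
    if code != "213":
        # the daemon rejected MDTM or the file was not found
        raise ValueError("MDTM not supported or failed: " + reply)
    return datetime.strptime(stamp.strip(), "%Y%m%d%H%M%S")

# A reply for a file modified on January 2, 2000:
# parse_mdtm_reply("213 20000102030405") -> datetime(2000, 1, 2, 3, 4, 5)
```

With Python's standard ftplib, such a reply could be obtained via `FTP.sendcmd("MDTM " + path)` against the daemon in question; a 5xx reply would indicate the missing MDTM support described above.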
You cannot replicate DATALINK columns between DB2 databases on AS/400 and DB2 databases on other platforms. On the AS/400 platform, there is no support for the replication of the "comment" attribute of DATALINK values. If you are running AIX 4.2, before you run the default user exit program (ASNDLCOPY) you must install the PTF for APAR IY03101 (AIX 4210-06 RECOMMENDED MAINTENANCE FOR AIX 4.2.1). This PTF contains a Y2K fix for the "modtime/MDTM" command in the FTP daemon. To verify the fix, check the last modification time returned by the modtime command for a file that was modified after January 1, 2000. If the target table is an external CCD table, DB2 DataPropagator calls the ASNDLCOPY routine to replicate DATALINK files. For the latest information about how to use the ASNDLCOPY and ASNDLCOPYD programs, see the prologue section of each program's source code. The following restrictions apply: * Internal CCD tables can contain DATALINK indicators, but not DATALINK values. * Condensed external CCD tables can contain DATALINK values. * Noncondensed CCD target tables cannot contain any DATALINK columns. * When the source and target servers are the same, the subscription set must not contain any members with DATALINK columns. 12.7.3 LOB Restrictions Condensed internal CCD tables cannot contain references to LOB columns or LOB indicators. 12.7.4 Planning for Replication On page 65, "Connectivity" should include the following fact: If the Apply program cannot connect to the control server, the Apply program terminates. When using data blocking for AS/400, you must ensure that the total amount of data to be replicated during the interval does not exceed "4 million rows", not "4 MB" as stated on page 69 of the book. ------------------------------------------------------------------------ 12.8 Chapter 6.
Setting up your replication environment 12.8.1 Update-anywhere Prerequisite If you want to set up update-anywhere replication with conflict detection and with more than 150 subscription set members in a subscription set, you must run the following DDL to create the ASN.IBMSNAP_COMPENSATE table on the control server:

CREATE TABLE ASN.IBMSNAP_COMPENSATE (
       APPLY_QUAL char(18) NOT NULL,
       MEMBER     SMALLINT,
       INTENTSEQ  CHAR(10) FOR BIT DATA,
       OPERATION  CHAR(1));

12.8.2 Setting Up Your Replication Environment Page 95, "Customizing CD table, index, and tablespace names", states that the DPREPL.DFT file is in either the \sqllib\bin directory or the \sqllib\java directory. This is incorrect; DPREPL.DFT is in the \sqllib\cc directory. On page 128, the retention limit description should state that the retention limit is used to prune rows only when Capture warm starts or when you use the Capture prune command. If you started Capture with the auto-pruning option, it will not use the retention limit to prune rows. ------------------------------------------------------------------------ 12.9 Chapter 8. Problem Determination The Replication Analyzer runs on Windows 32-bit systems and AIX. To run the Analyzer on AIX, ensure that the sqllib/bin directory appears before /usr/local/bin in your PATH environment variable to avoid conflicts with /usr/local/bin/analyze. The Replication Analyzer has two additional optional keywords: CT and AT.

CT=n
   Show only those entries from the Capture trace table that are newer than n days old. This keyword is optional. If you do not specify this keyword, the default is 7 days.
AT=n
   Show only those entries from the Apply trail table that are newer than n days old. This keyword is optional. If you do not specify this keyword, the default is 7 days.
Example: analyze mydb1 mydb2 f=mydirectory ct=4 at=2 deepcheck q=applyqual1 For the Replication Analyzer, the following keyword information is updated: deepcheck Specifies that the Analyzer perform a more complete analysis, including the following information: CD and UOW table pruning information, DB2 for OS/390 tablespace-partitioning and compression detail, analysis of target indexes with respect to subscription keys, subscription timelines, and subscription-set SQL-statement errors. The analysis includes all servers. This keyword is optional. lightcheck Specifies that the following information be excluded from the report: all column detail from the ASN.IBMSNAP_SUBS_COLS table, subscription errors or anomalies or omissions, and incorrect or inefficient indexes. This reduction in information saves resources and produces a smaller HTML output file. This keyword is optional and is mutually exclusive with the deepcheck keyword. Analyzer tools are available in PTFs for replication on AS/400 platforms. These tools collect information about your replication environment and produce an HTML file that can be sent to your IBM Service Representative to aid in problem determination. To get the AS/400 tools, download the appropriate PTF (for example, for product 5769DP2, you must download PTF SF61798 or its latest replacement). Add the following problem and solution to the "Troubleshooting" section: Problem: The Apply program loops without replicating changes; the Apply trail table shows STATUS=2. The subscription set includes multiple source tables. To improve the handling of hotspots for one source table in the set, an internal CCD table is defined for that source table, but in a different subscription set. Updates are made to the source table but the Apply process that populates the internal CCD table runs asynchronously (for example, the Apply program might not be started or an event not triggered, and so on). 
The Apply program that replicates updates from the source table to the target table loops because it is waiting for the internal CCD table to be updated. To stop the looping, start the Apply program (or trigger the event that causes replication) for the internal CCD table. The Apply program will populate the internal CCD table and allow the looping Apply program to process changes from all source tables. A similar situation could occur for a subscription set that contains source tables with internal CCD tables that are populated by multiple Apply programs. ------------------------------------------------------------------------ 12.10 Chapter 9. Capture and Apply for AS/400 On page 178, "A note on work management" should read as follows: You can alter the default definitions or provide your own definitions. If you create your own subsystem description, you must name the subsystem QZSNDPR and create it in a library other than QDPR. See "OS/400 Work Management V4R3", SC41-5306 for more information about changing these definitions. Add the following to page 178, "Verifying and customizing your installation of DB2 DataPropagator for AS/400": If you have problems with lock contention due to a high volume of transactions, you can increase the default wait timeout value from 30 to 120. You can change the job every time the Capture job starts or you can use the following procedure to change the default wait timeout value for all jobs running in your subsystem: 1. Issue the following command to create a new class object by duplicating QGPL/QBATCH:

CRTDUPOBJ OBJ(QBATCH) FROMLIB(QGPL) OBJTYPE(*CLS) TOLIB(QDPR) NEWOBJ(QZSNDPR)

2. Change the wait timeout value for the newly created class (for example, to 300 seconds):

CHGCLS CLS(QDPR/QZSNDPR) DFTWAIT(300)

3.
Update the routing entry in subsystem description QDPR/QZSNDPR to use the newly created class:

CHGRTGE SBSD(QDPR/QZSNDPR) SEQNBR(9999) CLS(QDPR/QZSNDPR)

On page 194, "Using the delete journal receiver exit routine" should include this sentence: If you remove the registration for the delete journal receiver exit routine, make sure that all the journals used for source tables have DLTRCV(*NO). On page 195, the ADDEXITPGM command parameters should read:

ADDEXITPGM EXITPNT(QIBM_QJO_DLT_JRNRCV) FORMAT(DRCV0100)
           PGM(QDPR/QZSNDREP) PGMNBR(*LOW) CRTEXITPNT(*NO)
           PGMDTA(65535 10 QSYS)

------------------------------------------------------------------------ 12.11 Chapter 10. Capture and Apply for OS/390 In Chapter 10, the following paragraphs are updated: 12.11.1 Prerequisites for DB2 DataPropagator for OS/390 You must have DB2 for OS/390 Version 5, DB2 for OS/390 Version 6, or DB2 for OS/390 Version 7 to run DB2 DataPropagator for OS/390 Version 7 (V7). 12.11.2 UNICODE and ASCII Encoding Schemes on OS/390 DB2 DataPropagator for OS/390 V7 supports UNICODE and ASCII encoding schemes. To exploit the new encoding schemes, you must have DB2 for OS/390 V7 and you must manually create or convert your DB2 DataPropagator source, target, and control tables as described in the following sections. However, your existing replication environment will work with DB2 DataPropagator for OS/390 V7 even if you do not modify any encoding schemes. 12.11.2.1 Choosing an Encoding Scheme If your source, CD, and target tables use the same encoding scheme, you can minimize the need for data conversions in your replication environment. When you choose encoding schemes for the tables, follow the single CCSID rule: Character data in a table space can be encoded in ASCII, UNICODE, or EBCDIC. All tables within a table space must use the same encoding scheme. The encoding scheme of all the tables in an SQL statement must be the same.
Also, all tables that you use in views and joins must use the same encoding scheme. If you do not follow the single CCSID rule, DB2 will detect the violation and return SQLCODE -873 during bind or execution. Which tables should be ASCII or UNICODE depends on your client/server configuration. Specifically, follow these rules when you choose encoding schemes for the tables: * Source or target tables on DB2 for OS/390 can be EBCDIC, ASCII, or UNICODE. They can be copied from or to tables that have the same or different encoding scheme in any supported DBMS (DB2 family, or non-DB2 with DataJoiner). * On a DB2 for OS/390 source server, all CD, UOW, register, and prune control tables on the same server must use the same encoding scheme. To ensure this consistency, always specify the encoding scheme explicitly. * All the control tables (ASN.IBMSNAP_SUBS_xxxx) on the same control server must use the same encoding scheme. * Other control tables can use any encoding scheme; however, it is recommended that the ASN.IBMSNAP_CRITSEC table remain EBCDIC. 12.11.2.2 Setting Encoding Schemes To specify the proper encoding scheme for tables, modify the SQL that is used to generate the tables: * Create new source and target tables with the proper encoding scheme, or change the encoding schemes of the existing target and source tables. It is recommended that you stop the Capture and Apply programs before you change the encoding scheme of existing tables, and afterwards that you cold start the Capture program and restart the Apply program. To change the encoding scheme of existing tables: 1. Use the Reorg utility to copy the existing table. 2. Drop the existing table. 3. Re-create the table specifying the new encoding scheme. 4. Use the Load utility to load the old data into the new table. See the DB2 Universal Database for OS/390 Utility Guide and Reference for more information on the Load and Reorg utilities. 
* Create new control tables with the proper encoding scheme or modify the encoding scheme for existing ones. DPCNTL.MVS is shipped with DB2 for OS/390 in sqllib\samples\repl and it contains several CREATE TABLE statements that create the control tables. For those tables that need to be ASCII or UNICODE (for example, ASN.IBMSNAP_REGISTER and ASN.IBMSNAP_PRUNCNTL), add the CCSID ASCII or CCSID UNICODE keyword, as shown in the following example:

CREATE TABLE ASN.IBMSNAP_PRUNCNTL (
       TARGET_SERVER    CHAR(18) NOT NULL,
       TARGET_OWNER     CHAR(18) NOT NULL,
       TARGET_TABLE     CHAR(18) NOT NULL,
       SYNCHTIME        TIMESTAMP,
       SYNCHPOINT       CHAR(10) FOR BIT DATA,
       SOURCE_OWNER     CHAR(18) NOT NULL,
       SOURCE_TABLE     CHAR(18) NOT NULL,
       SOURCE_VIEW_QUAL SMALLINT NOT NULL,
       APPLY_QUAL       CHAR(18) NOT NULL,
       SET_NAME         CHAR(18) NOT NULL,
       CNTL_SERVER      CHAR(18) NOT NULL,
       TARGET_STRUCTURE SMALLINT NOT NULL,
       CNTL_ALIAS       CHAR(8))
CCSID UNICODE
DATA CAPTURE CHANGES
IN TSSNAP02;

To modify existing control tables and CD tables, use the Reorg and Load utilities. * When you create new replication sources or subscription sets, modify the SQL file generated by the administration tool to specify the proper encoding scheme. The SQL has several CREATE TABLE statements that are used to create the CD and target tables for the replication source and subscription set, respectively. Add the keyword CCSID ASCII or CCSID UNICODE where appropriate. For example:

CREATE TABLE user1.cdtable1 (
       employee_name varchar,
       employee_age  decimal)
CCSID UNICODE;

The DB2 UDB for OS/390 SQL Reference contains more information about CCSID. ------------------------------------------------------------------------ 12.12 Chapter 11. Capture and Apply for UNIX platforms 12.12.1 Setting Environment Variables for Capture and Apply on UNIX and Windows If you created the source database with a code page other than the default code page value, set the DB2CODEPAGE environment variable to that code page.
See the DB2 Administration Guide for information about deriving code page values before you set DB2CODEPAGE. Capture must be run in the same code page as the database for which it is capturing data. DB2 derives the Capture code page from the active environment where Capture is running. If DB2CODEPAGE is not set, DB2 derives the code page value from the operating system. The value derived from the operating system is correct for Capture if you used the default code page when creating the database. ------------------------------------------------------------------------ 12.13 Chapter 14. Table Structures On page 339, append the following sentence to the STATUS column description for the value "2": If you use internal CCD tables and you repeatedly get a value of "2" in the status column of the Apply trail table, go to "Chapter 8: Problem Determination" and refer to "Problem: The Apply program loops without replicating changes, the Apply trail table shows STATUS=2". ------------------------------------------------------------------------ 12.14 Chapter 15. Capture and Apply Messages Message ASN0017E should read: ASN0017E The Capture program encountered a severe internal error and could not issue the correct error message. The routine name is "routine". The return code is "return_code". Message ASN1027S should be added: ASN1027S There are too many large object (LOB) columns specified. The error code is "". Explanation: Too many large object (BLOB, CLOB, or DBCLOB) columns are specified for a subscription set member. The maximum number of columns allowed is 10. User response: Remove the excess large object columns from the subscription set member. Message ASN1048E should read as follows: ASN1048E The execution of an Apply cycle failed. See the Apply trail table for full details: "" Explanation: An Apply cycle failed. In the message, "" identifies the "", "", and "". User response: Check the APPERRM fields in the audit trail table to determine why the Apply cycle failed. 
------------------------------------------------------------------------ 12.15 Appendix A. Starting the Capture and Apply Programs from Within an Application On page 399 of the book, a few errors appear in the comments of the Sample routine that starts the Capture and Apply programs; however, the code in the sample is correct. The latter part of the sample pertains to the Apply parameters, despite the fact that the comments indicate that it pertains to the Capture parameters. You can get samples of the Apply and Capture API, and their respective makefiles, in the following directories:

For NT - sqllib\samples\repl
For UNIX - sqllib/samples/repl

------------------------------------------------------------------------ System Monitor Guide and Reference ------------------------------------------------------------------------ 13.1 db2ConvMonStream In the Usage Notes, the structure for the snapshot variable datastream type SQLM_ELM_SUBSECTION should be sqlm_subsection. ------------------------------------------------------------------------ Troubleshooting Guide ------------------------------------------------------------------------ 14.1 Starting DB2 on Windows 95, Windows 98, and Windows ME When the User Is Not Logged On For a db2start command to be successful in a Windows 95, Windows 98, or Windows Millennium Edition (ME) environment, you must either: * Log on using the Windows logon window or the Microsoft Networking logon window * Issue the db2logon command (see note 1 for information about the db2logon command). In addition, the user ID that is specified either during the logon or for the db2logon command must meet DB2's requirements (see note 2). When the db2start command starts, it first checks to see if a user is logged on. If a user is logged on, the db2start command uses that user's ID.
If a user is not logged on, the db2start command checks whether a db2logon command has been run, and, if so, the db2start command uses the user ID that was specified for the db2logon command. If the db2start command cannot find a valid user ID, the command terminates. During the installation of DB2 Universal Database Version 7 on Windows 95, Windows 98, and Windows ME, the installation software, by default, adds a shortcut to the Startup folder that runs the db2start command when the system is booted (see note 1 for more information). If the user of the system has neither logged on nor issued the db2logon command, the db2start command will terminate. If you or your users do not normally log on to Windows or to a network, you can hide the requirement to issue the db2logon command before a db2start command by running commands from a batch file as follows: 1. Create a batch file that issues the db2logon command followed by the db2start.exe command. For example:

@echo off
db2logon db2local /p:password
db2start
cls
exit

2. Name the batch file db2start.bat, and store it in the /bin directory that is under the drive and path where you installed DB2. You store the batch file in this location to ensure that the operating system can find the path to the batch file. The drive and path where DB2 is installed is stored in the DB2 registry variable DB2PATH. To find the drive and path where you installed DB2, issue the following command:

db2set -g db2path

Assume that the db2set command returns the value c:\sqllib. In this situation, you would store the batch file as follows: c:\sqllib\bin\db2start.bat 3. To start DB2 when the system is booted, you should run the batch file from a shortcut in the Startup folder. You have two options: o Modify the shortcut that is created by the DB2 installation program to run the batch file instead of db2start.exe. In the preceding example, the shortcut would now run the db2start.bat batch file.
The shortcut that is created by the DB2 installation program is called DB2 - DB2.lnk, and is located in c:\WINDOWS\Start Menu\Programs\Startup\DB2 - DB2.lnk on most systems. o Add your own shortcut to run the batch file, and delete the shortcut that is added by the DB2 installation program. Use the following command to delete the DB2 shortcut:

del "C:\WINDOWS\Start Menu\Programs\Startup\DB2 - DB2.lnk"

If you decide to use your own shortcut, you should set the close on exit attribute for the shortcut. If you do not set this attribute, the DOS command prompt is left in the task bar even after the db2start command has successfully completed. To prevent the DOS window from being opened during the db2start process, you can create this shortcut (and the DOS window it runs in) set to run minimized. Note: As an alternative to starting DB2 during the boot of the system, DB2 can be started prior to the running of any application that uses DB2. See note 5 for details. If you use a batch file to issue the db2logon command before the db2start command is run, and your users occasionally log on, the db2start command will continue to work, the only difference being that DB2 will use the user ID of the logged-on user. See note 1 for additional details. Notes: 1. The db2logon command simulates a user logon. The format of the db2logon command is:

db2logon userid /p:password

The user ID that is specified for the command must meet the DB2 naming requirements (see note 2 for more information). If the command is issued without a user ID and password, a window opens to prompt the user for the user ID and password. If the only parameter provided is a user ID, the user is not prompted for a password; under certain conditions a password is required, as described below. The user ID and password values that are set by the db2logon command are only used if the user did not log on using either the Windows logon window or the Microsoft Networking logon window.
If the user has logged on, and a db2logon command has been issued, the user ID from the db2logon command is used for all DB2 actions, but the password specified on the db2logon command is ignored. When the user has not logged on using the Windows logon window or the Microsoft Networking logon window, the user ID and password that are provided through the db2logon command are used as follows: o The db2start command uses the user ID when it starts, and does not require a password. o In the absence of a high-level qualifier for actions like creating a table, the user ID is used as the high-level qualifier. For example: 1. If you issue the following: db2logon db2local 2. Then issue the following: create table tab1 The table is created with the high-level qualifier db2local, as db2local.tab1. You should use a user ID that is equal to the schema name of your tables and other objects. o When the system acts as a client to a server, and the user issues a CONNECT statement without a user ID and password (for example, CONNECT TO TEST) and authentication is set to server, the user ID and password from the db2logon command are used to validate the user at the remote server. If the user connects with an explicit user ID and password (for example, CONNECT TO TEST USER userID USING password), the values that are specified for the CONNECT statement are used. 2. In Version 7, the user ID that is either used to log on or specified for the db2logon command must conform to the following DB2 requirements: o It cannot be any of the following: USERS, ADMINS, GUESTS, PUBLIC, LOCAL, or any SQL reserved word that is listed in the SQL Reference. o It cannot begin with: SQL, SYS, or IBM o Characters can include: + A through Z (Windows 95, Windows 98, and Windows ME support case-sensitive user IDs) + 0 through 9 + @, #, or $ 3.
You can prevent the creation of the db2start shortcut in the Startup folder by performing a customized interactive installation, or by performing a response file installation and specifying the DB2.AUTOSTART=NO option. If you use these options, there is no db2start shortcut in the Startup folder, and you must add your own shortcut to run the db2start.bat file.
4. On Windows 98 and Windows ME, an option is available that you can use to specify a user ID that is always logged on when Windows 98 or Windows ME is started. In this situation, the Windows logon window will not appear. If you use this option, a user is logged on and the db2start command will succeed if the user ID meets DB2 requirements (see note 2 for details). If you do not use this option, the user will always be presented with a logon window. If the user cancels out of this window without logging on, the db2start command will fail unless the db2logon command was previously issued, or invoked from the batch file, as described above.
5. If you do not start DB2 during a system boot, DB2 can be started by an application. You can run the db2start.bat file as part of the initialization of applications that use DB2. Using this method, DB2 will only be started when the application that will use it is started. When the user exits the application, a db2stop command can be issued to stop DB2. Your business applications can start DB2 in this way, if DB2 is not started during the system boot.
To use the DB2 Synchronizer application or call the synchronization APIs from your application, DB2 must be started if the scripts that are downloaded for execution contain commands that operate either against a local instance or a local database. These commands can be in database scripts, instance scripts, or embedded in operating system (OS) scripts. If an OS script does not contain Command Line Processor commands or DB2 APIs that use an instance or a database, it can be run without DB2 being started.
Because it may be difficult to tell in advance what commands will be run from your scripts during the synchronization process, DB2 should normally be started before synchronization begins. If you are calling either the db2sync command or the synchronization APIs from your application, you would start DB2 during the initialization of your application. If your users will be using the DB2 Synchronizer shortcut in the DB2 for Windows folder to start synchronization, the DB2 Synchronization shortcut must be modified to run a db2sync.bat file. The batch file should contain the following commands to ensure that DB2 is running before synchronization begins:
@echo off
db2start.bat
db2sync.exe
db2stop.exe
cls
exit
In this example, it is assumed that the db2start.bat file invokes the db2logon and db2start commands as described above. If you decide to start DB2 when the application starts, ensure that the installation of DB2 does not add a shortcut to the Startup folder to start DB2. See note 3 for details.
------------------------------------------------------------------------ 14.2 Chapter 2. Troubleshooting the DB2 Universal Database Server
Under the "Locking and Deadlocks" section, under the "Applications Slow or Appear to Hang" subsection, change the description under "Lock waits or deadlocks are not caused by next key locking" to:
Next key locking guarantees the Repeatable Read (RR) isolation level by automatically locking the next key for all INSERT and DELETE statements and the next higher key value above the result set for SELECT statements. For UPDATE statements that alter key parts of an index, the original index key is deleted and the new key value is inserted. Next key locking is done on both the key insertion and the key deletion. It is required to guarantee ANSI and SQL92 standard RR, and is the DB2 default. Examine snapshot information for the application.
If the problem appears to be with next key locking, you can turn the DB2_RR_TO_RS option on if none of your applications rely on Repeatable Read (RR) behavior and it is acceptable for scans to skip over uncommitted deletes.
When DB2_RR_TO_RS is on, RR behavior cannot be guaranteed for scans on user tables because next key locking is not done during index key insertion and deletion. Catalog tables are not affected by this option. The other change in behavior is that with DB2_RR_TO_RS on, scans will skip over rows that have been deleted but not committed, even though the row may have qualified for the scan. For example, consider the scenario where transaction A deletes the row with column1=10 and transaction B does a scan where column1>8 and column1<12. With DB2_RR_TO_RS off, transaction B will wait for transaction A to commit or roll back. If transaction A rolls back, the row with column1=10 will be included in the result set of transaction B's query. With DB2_RR_TO_RS on, transaction B will not wait for transaction A to commit or roll back. It will immediately receive query results that do not include the deleted row. Do not use this option if you require ANSI and SQL92 standard RR or if you do not want scans to skip uncommitted deletes.
------------------------------------------------------------------------ Using DB2 Universal Database on 64-bit Platforms ------------------------------------------------------------------------ 15.1 Chapter 5. Configuration
15.1.1 LOCKLIST
The following information should be added to Table 2:
Parameter    Previous Upper Limit    Current Upper Limit
LOCKLIST     60000                   524288
15.1.2 shmsys:shminfo_shmmax
DB2 users on the 64-bit Solaris operating system should increase the value of "shmsys:shminfo_shmmax" in /etc/system, as necessary, to be able to allocate a large database shared memory set. The DB2 for UNIX Quick Beginnings book recommends setting that parameter to "90% of the physical RAM in the machine, in bytes".
This recommendation is also valid for 64-bit implementations. However, there is a problem with the following recommendation in the DB2 for UNIX Quick Beginnings book: for 32-bit systems with more than 4 GB of RAM (up to 64 GB in total is possible on the Solaris operating system), if a user sets the shmmax value to a number larger than 4 GB and is using a 32-bit kernel, the kernel only looks at the lower 32 bits of the number, sometimes resulting in a very small value for shmmax.
------------------------------------------------------------------------ 15.2 Chapter 6. Restrictions
There is currently no LDAP support on 64-bit operating systems.
32-bit and 64-bit databases cannot be created on the same path. For example, if a 32-bit database already exists on a given path, then a db2 create db command specifying that same path, issued from a 64-bit instance, fails with "SQL10004C An I/O error occurred while accessing the database directory."
------------------------------------------------------------------------ XML Extender Administration and Programming
Release Notes for the IBM DB2 XML Extender can be found on the DB2 XML Web site: http://www-4.ibm.com/software/data/db2/extenders/xmlext/library.html
------------------------------------------------------------------------ MQSeries
This section describes how DB2 and MQSeries can be used to construct applications that combine messaging and database access. The focus in this section is a set of functions, similar to User-Defined Functions (UDFs), that may be optionally enabled in DB2 Universal Database Version 7.2. Using these basic functions, it is possible to support a wide range of applications, from simple event notification to data warehousing.
For more information about data warehousing applications, refer to 22.15, Integration of MQSeries with the Data Warehouse Center.
------------------------------------------------------------------------ 17.1 Installation and Configuration for the DB2 MQSeries Functions
This section describes how to configure a DB2 environment to use the DB2 MQSeries Functions. Upon successful completion of the following procedure you will be able to use the DB2 MQSeries Functions from within SQL. A description of these functions can be found in the SQL Reference section of the Release Notes. Additional information, including the latest documentation, hints, and tips, can be found at http://www.ibm.com/software/data/integration/MQSeries.
The basic procedure for configuring and enabling the DB2 MQSeries Functions is:
1. Install MQSeries.
2. Install MQSeries AMI.
3. Enable and configure the DB2 MQSeries Functions.
In addition, to make use of the publish/subscribe capabilities provided by the DB2 MQSeries Functions, you must also install either MQSeries Integrator or the MQSeries Publish/Subscribe Functions. Information on MQSeries Integrator can be found at http://www.ibm.com/software/ts/mqseries/integrator. Information on the MQSeries Publish/Subscribe feature can be found at http://www.ibm.com/software/ts/mqseries/txppacs under category 3.
17.1.1 Install MQSeries
The first step is to ensure that MQSeries Version 5.2 is installed on your DB2 server. If this version of MQSeries is already installed, skip to the next step, "Install MQSeries AMI." DB2 Version 7.2 includes a copy of the MQSeries server to be used with DB2. Platform-specific instructions for installing MQSeries, or for upgrading an existing MQSeries installation, can be found in a platform-specific Quick Beginnings book at http://www.ibm.com/software/ts/mqseries/library/manuals. Be sure to set up a default queue manager as you go through the installation process.
17.1.2 Install MQSeries AMI
The next step is to install the MQSeries Application Messaging Interface (AMI). This is an extension to the MQSeries programming interfaces that provides a clean separation of administrative and programming tasks. The DB2 MQSeries Functions require the installation of this interface. If the MQSeries AMI is already installed on your DB2 server, skip to the next step, "Enable DB2 MQSeries Functions." If the MQSeries AMI is not installed, you can install it either from the installation package provided with DB2 7.2 or by downloading a copy of the AMI from the MQSeries Support Pacs web site at http://www.ibm.com/software/ts/mqseries/txppacs. The AMI may be found under "Category 3 - Product Extensions." For convenience, we have provided a copy of the MQSeries AMI with DB2. This file is located in the sqllib/cfg directory. The name of the file is operating system dependent:
AIX Version 4.3 and greater     ma0f_ax.tar.Z
HP-UX                           ma0f_hp.tar.Z
Solaris Operating Environment   ma0f_sol7.tar.Z
Windows 32-bit                  ma0f_nt.zip
Follow the normal AMI installation process as outlined in the AMI readme file contained in the compressed installation image.
17.1.3 Enable DB2 MQSeries Functions
During this step, you will configure and enable a database for the DB2 MQSeries Functions. The enable_MQFunctions utility is a flexible command that first checks that the proper MQSeries environment has been set up, then installs and creates a default configuration for the DB2 MQSeries functions, enables the specified database with these functions, and confirms that the configuration works.
1. For Windows NT or Windows 2000, go to step 5.
2. Setting groups on UNIX: If you are enabling these functions on UNIX, you must first add the DB2 instance owner (often db2inst1) and the user ID associated with fenced UDFs (often db2fenc1) to the MQSeries group mqm. This is needed for the DB2 functions to access MQSeries.
3.
Set DB2 environment variables on UNIX: Add the AMT_DATA_PATH environment variable to the list understood by DB2. You can edit the file $INSTHOME/sqllib/profile.env and add AMT_DATA_PATH to DB2ENVLIST. The db2set command can also be used.
4. On UNIX, restart the database instance: For the environment variable changes to take effect, the database instance must be restarted.
5. Change directory to $INSTHOME/sqllib/cfg on UNIX or %DB2PATH%/cfg on Windows.
6. Run the command enable_MQFunctions to configure and enable a database for the DB2 MQSeries Functions. Refer to 17.6, enable_MQFunctions, for a complete description of this command. Some common examples are given below. After successful completion, the specified database will have been enabled and the configuration tested.
7. To test these functions using the Command Line Processor, issue the following commands after you have connected to the enabled database:
values DB2MQ.MQSEND('a test')
values DB2MQ.MQRECEIVE()
The first statement will send the message "a test" to the DB2MQ_DEFAULT_Q queue and the second will receive it back.
Note: As a result of running enable_MQFunctions, a default MQSeries environment will be established. The MQSeries queue manager DB2MQ_DEFAULT_MQM and the default queue DB2MQ_DEFAULT_Q will be created. The files amt.xml, amthost.xml, and amt.dtd will be created if they do not already exist in the directory pointed to by AMT_DATA_PATH. If an amthost.xml file does exist, and does not contain a definition for connectionDB2MQ, a line will be added to the file with the appropriate information. A copy of the original file will be saved as DB2MQSAVE.amthost.xml.
------------------------------------------------------------------------ 17.2 MQSeries Messaging Styles
The DB2 MQSeries functions support three messaging models: datagrams, publish/subscribe (p/s), and request/reply (r/r). Messages sent as datagrams are sent to a single destination with no reply expected.
In the p/s model, one or more publishers send a message to a publication service, which distributes the message to one or more subscribers. Request/reply is similar to datagram, but the sender expects to receive a response.
------------------------------------------------------------------------ 17.3 Message Structure
MQSeries does not itself mandate or support any particular structuring of the messages it transports. Other products, such as MQSeries Integrator (MQSI), do offer support for messages formed as C or COBOL structures or as XML strings. Structured messages in MQSI are defined by a message repository. XML messages typically have a self-describing message structure and may also be managed through the repository. Messages may also be unstructured, requiring user code to parse or construct the message content. Such messages are often semi-structured; that is, they use either byte positions or fixed delimiters to separate the fields within a message. Support for such semi-structured messages is provided by the MQSeries Assist Wizard. Support for XML messages is provided through new features of the DB2 XML Extender.
------------------------------------------------------------------------ 17.4 MQSeries Functional Overview
A set of MQSeries functions is provided with DB2 UDB Version 7.2 to allow SQL statements to include messaging operations. This means that this support is available to applications written in any supported language (for example, C, Java, or SQL) using any of the database interfaces. All examples shown below are in SQL. This SQL may be used from other programming languages in all the standard ways. All of the MQSeries messaging styles described above are supported. For more information about the MQSeries functions, see the SQL Reference section of the Release Notes. In a basic configuration, an MQSeries server is located on the database server machine along with DB2.
The MQSeries functions are installed into DB2 and provide access to the MQSeries server. DB2 clients may be located on any machine accessible to the DB2 server. Multiple clients can concurrently access the MQSeries functions through the database. Through the provided functions, DB2 clients may perform messaging operations within SQL statements. These messaging operations allow DB2 applications to communicate among themselves or with other MQSeries applications.
The enable_MQFunctions command is used to enable a DB2 database for the MQSeries functions. It will automatically establish a simple default configuration that client applications may use with no further administrative action. For a description, see enable_MQFunctions and disable_MQFunctions. The default configuration gives application programmers a quick way to get started and a simpler interface for development. Additional functionality may be configured incrementally as needed.
Example 1: To send a simple message using the default configuration, the SQL statement would be:
VALUES DB2MQ.MQSEND('simple message')
This will send the message simple message to the MQSeries queue manager and queue specified by the default configuration.
The Application Messaging Interface (AMI) of MQSeries provides a clean separation between messaging actions and the definitions that dictate how those actions should be carried out. These definitions are kept in an external repository file and managed using the AMI Administration tool. This makes AMI applications simple to develop and maintain. The MQSeries functions provided with DB2 are based on the AMI MQSeries interface. AMI supports the use of an external configuration file, called the AMI Repository, to store configuration information. The default configuration includes an MQSeries AMI Repository configured for use with DB2.
Two key concepts in MQSeries AMI, service points and policies, are carried forward into the DB2 MQSeries functions.
A service point is a logical end-point from which a message may be sent or received. In the AMI repository, each service point is defined with an MQSeries queue name and queue manager. Policies define the quality of service options that should be used for a given messaging operation. Key qualities of service include message priority and persistence. Default service points and policy definitions are provided and may be used by developers to further simplify their applications. Example 1 can be rewritten as follows to explicitly specify the default service point and policy name:
Example 2:
VALUES DB2MQ.MQSEND('DB2.DEFAULT.SERVICE', 'DB2.DEFAULT.POLICY', 'simple message')
Queues may be serviced by one or more applications at the server on which the queues and applications reside. In many configurations, multiple queues will be defined to support different applications and purposes. For this reason, it is often important to define different service points when making MQSeries requests. This is demonstrated in the following example:
Example 3:
VALUES DB2MQ.MQSEND('ODS_Input', 'simple message')
Note: In this example, the policy is not specified, so the default policy will be used.
17.4.1 Limitations
MQSeries provides the ability for message operations and database operations to be combined in a single unit of work as an atomic transaction. This feature is not initially supported by the MQSeries Functions on UNIX and Windows.
When using the sending or receiving functions, the maximum length of a message is 4000 characters. This is also the maximum message size for publishing a message using MQPublish.
17.4.2 Error Codes
The return codes returned by the MQSeries Functions can be found in Appendix B of the MQSeries Application Messaging Interface manual.
------------------------------------------------------------------------ 17.5 Usage Scenarios
The MQSeries Functions can be used in a wide variety of scenarios.
This section reviews some of the more common scenarios, including basic messaging, application connectivity, and data publication.
17.5.1 Basic Messaging
The most basic form of messaging with the MQSeries DB2 Functions occurs when all database applications connect to the same DB2 server. Clients may be local to the database server or distributed in a network environment.
In a simple scenario, Client A invokes the MQSEND function to send a user-defined string to the default service location. The MQSeries functions are then executed within DB2 on the database server. At some later time, Client B invokes the MQRECEIVE function to remove the message at the head of the queue defined by the default service, and return it to the client. Again, the MQSeries functions to perform this work are executed by DB2.
Database clients can use simple messaging in a number of ways. Some common uses for messaging are:
* Data collection -- Information is received in the form of messages from one or more possibly diverse sources of information. Information sources may be commercial applications such as SAP or applications developed in-house. Such data may be received from queues and stored in database tables for further processing or analysis.
* Workload distribution -- Work requests are posted to a queue shared by multiple instances of the same application. When an instance is ready to perform some work, it receives a message from the top of the queue containing a work request to perform. Using this technique, multiple instances can share the workload represented by a single queue of pooled requests.
* Application signaling -- In a situation where several processes collaborate, messages are often used to coordinate their efforts. These messages may contain commands or requests for work to be performed. Typically, this kind of signaling is one-way; that is, the party that initiates the message does not expect a reply. See 17.5.4.1, Request/Reply Communications, for more information.
* Application notification -- Notification is similar to signaling in that data is sent from an initiator with no expectation of a response. Typically, however, notification contains data about business events that have taken place. A more advanced form of notification is described in 17.5.4.2, Publish/Subscribe.
The following scenario extends the simple scenario described above to incorporate remote messaging. That is, a message is sent between Machine A and Machine B. The sequence of steps is as follows:
1. The DB2 client executes an MQSEND call, specifying a target service that has been defined to represent a remote queue on Machine B.
2. The MQSeries DB2 functions perform the actual MQSeries work to send the message. The MQSeries server on Machine A accepts the message and guarantees that it will deliver it to the destination defined by the service point definition and the current MQSeries configuration of Machine A. The server determines that this is a queue on Machine B. It then attempts to deliver the message to the MQSeries server on Machine B, transparently retrying as needed.
3. The MQSeries server on Machine B accepts the message from the server on Machine A and places it in the destination queue on Machine B.
4. An MQSeries client on Machine B requests the message at the head of the queue.
17.5.2 Sending Messages
Using MQSEND, a DB2 user or developer chooses what data to send, where to send it, and when to send it. In the industry this is commonly called "Send and Forget," meaning that the sender just sends a message, relying on the guaranteed delivery protocols of MQSeries to ensure that the message reaches its destination. The following examples illustrate this.
Example 4: To send a user-defined string to the service point myplace with the policy highPriority:
VALUES DB2MQ.MQSEND('myplace','highPriority','test')
Here, the policy highPriority refers to a policy defined in the AMI Repository that sets the MQSeries priority to the highest level and perhaps adjusts other qualities of service, such as persistence, as well.
The message content may be composed of any legal combination of SQL and user-specified data. This includes nested functions, operators, and casts. For instance, given a table EMPLOYEE, with VARCHAR columns LASTNAME, FIRSTNAME, and DEPARTMENT, to send a message containing this information for each employee in DEPARTMENT 5LGA you would do the following:
Example 5:
SELECT DB2MQ.MQSEND(LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT) FROM EMPLOYEE WHERE DEPARTMENT = '5LGA'
If this table also had an integer AGE column, it could be included as follows:
Example 6:
SELECT DB2MQ.MQSEND(LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT || ' ' || char(AGE)) FROM EMPLOYEE WHERE DEPARTMENT = '5LGA'
Finally, the following example shows how message content may be derived using any valid SQL expression. Given a second table DEPT containing VARCHAR columns DEPT_NO and DEPT_NAME, messages can be sent that contain employee LASTNAME and DEPT_NAME:
Example 7:
SELECT DB2MQ.MQSEND(e.LASTNAME || ' ' || d.DEPT_NAME) FROM EMPLOYEE e, DEPT d WHERE e.DEPARTMENT = d.DEPT_NO
17.5.3 Retrieving Messages
The MQSeries DB2 Functions allow messages to be either received or read. The difference between reading and receiving is that reading returns the message at the head of a queue without removing it from the queue, while receiving causes the message to be removed from the queue. A message retrieved using a receive operation can be retrieved only once, while a message retrieved using a read operation allows the same message to be retrieved many times.
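The read/receive distinction can be modeled outside of DB2. The following Python fragment is an illustrative sketch only (it does not call MQSeries or DB2; the class and method names are invented for this example), showing a queue whose read operation peeks at the head while receive dequeues it:

```python
from collections import deque

class MessageQueue:
    """Toy model of the MQREAD/MQRECEIVE semantics described above."""

    def __init__(self):
        self._messages = deque()

    def send(self, msg):
        # Like MQSEND: append the message to the tail of the queue.
        self._messages.append(msg)

    def read(self):
        # Like MQREAD: return the head of the queue without removing it,
        # or None (a null value) if no messages are available.
        return self._messages[0] if self._messages else None

    def receive(self):
        # Like MQRECEIVE: remove and return the head of the queue,
        # or None if the queue is empty.
        return self._messages.popleft() if self._messages else None

q = MessageQueue()
q.send('simple message')
assert q.read() == 'simple message'      # readable many times
assert q.read() == 'simple message'      # queue unchanged by reads
assert q.receive() == 'simple message'   # retrievable only once
assert q.receive() is None               # queue is now empty
```

As in the DB2 functions, a read never changes the queue, so repeated reads return the same head message until some application receives it.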
The following examples demonstrate this:
Example 8:
VALUES DB2MQ.MQREAD()
This example returns a VARCHAR string containing the message at the head of the queue defined by the default service, using the default quality of service policy. It is important to note that if no messages are available to be read, a null value is returned. The queue is not changed by this operation.
Example 9:
VALUES DB2MQ.MQRECEIVE('Employee_Changes')
The above example shows how a message can be removed from the head of the queue defined by the Employee_Changes service using the default policy.
One very powerful feature of DB2 is the ability to generate a table from a user-defined (or DB2-provided) function. You can exploit this table function feature to allow the contents of a queue to be materialized as a DB2 table. The following example demonstrates the simplest form of this:
Example 10:
SELECT t.* FROM table (DB2MQ.MQREADALL()) t
This query returns a table consisting of all of the messages in the queue defined by the default service and the metadata about these messages. While the full definition of the table structure returned is defined in the Appendix, the first column reflects the contents of the message and the remaining columns contain the metadata. To return just the messages, the example could be rewritten:
Example 11:
SELECT t.MSG FROM table (DB2MQ.MQREADALL()) t
The table returned by a table function is no different from a table retrieved from the database directly. This means that you can use this table in a wide variety of ways. For instance, you can join the contents of the table with another table or count the number of messages in a queue:
Example 12:
SELECT t.MSG, e.LASTNAME FROM table (DB2MQ.MQREADALL()) t, EMPLOYEE e WHERE t.MSG = e.LASTNAME
Example 13:
SELECT COUNT(*) FROM table (DB2MQ.MQREADALL()) t
You can also hide the fact that the source of the table is a queue by creating a view over a table function.
For instance, the following example creates a view called NEW_EMP over the queue referred to by the service named NEW_EMPLOYEES:
Example 14:
CREATE VIEW NEW_EMP (msg) AS SELECT t.msg FROM table (DB2MQ.MQREADALL()) t
In this case, the view is defined with only a single column containing an entire message. If messages are simply structured, for instance containing two fields of fixed length, it is straightforward to use the DB2 built-in functions to parse the message into the two columns. For example, if you know that messages sent to a particular queue always contain an 18-character last name followed by an 18-character first name, then you can define a view containing each field as a separate column as follows:
Example 15:
CREATE VIEW NEW_EMP2 AS SELECT left(t.msg,18) AS LNAME, right(t.msg,18) AS FNAME FROM table(DB2MQ.MQREADALL()) t
A new feature of the DB2 Stored Procedure Builder, the MQSeries Assist Wizard, can be used to create new DB2 table functions and views that will map delimited message structures to columns.
Finally, it is often desirable to store the contents of one or more messages in the database. This may be done using the full power of SQL to manipulate and store message content. Perhaps the simplest example of this is:
Example 16:
INSERT INTO MESSAGES SELECT t.msg FROM table (DB2MQ.MQRECEIVEALL()) t
Given a table MESSAGES with a single VARCHAR(2000) column, the statement above will insert the messages from the default service queue into the table. This technique can be embellished to cover a very wide variety of circumstances.
17.5.4 Application-to-Application Connectivity
Application integration is a common element in many solutions. Whether integrating a purchased application into an existing infrastructure or integrating a newly developed application into an existing environment, we are often faced with the task of gluing a heterogeneous collection of subsystems together to form a working whole.
MQSeries is commonly viewed as an essential tool for integrating applications. Accessible in most hardware, software, and language environments, MQSeries provides the means to interconnect a very heterogeneous collection of applications. This section discusses some application integration scenarios and how they may be used with DB2. As the topic is quite broad, a comprehensive treatment of application integration is beyond the scope of this work. Therefore, the focus is on just two simple topics: Request/Reply communications, and MQSeries Integrator and Publish/Subscribe.
17.5.4.1 Request/Reply Communications
The Request/Reply (R/R) communications method is a very common technique for one application to request the services of another. One way to do this is for the requester to send a message to the service provider requesting some work to be performed. Once the work has been completed, the provider may decide to send results (or just a confirmation of completion) back to the requester. But using the basic messaging techniques described above, there is nothing that connects the sender's request with the service provider's response. Unless the requester waits for a reply before continuing, some mechanism must be used to associate each reply with its request. Rather than force the developer to create such a mechanism, MQSeries provides a correlation identifier that allows the correlation of messages in an exchange. While there are a number of ways in which this mechanism could be used, the simplest is for the requester to mark a message with a known correlation identifier using, for instance, the following:
Example 17:
DB2MQ.MQSEND ('myRequester','myPolicy','SendStatus:cust1','Req1')
This statement adds a final parameter, Req1, to the MQSEND statement from above to indicate the correlation identifier for the request.
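The correlation mechanism can be sketched in a few lines of Python. This is an illustration of the pattern only; the class and names are invented for this example and nothing here calls MQSeries:

```python
from collections import deque

class ReplyQueue:
    """Toy model of selectively receiving the first reply that
    matches a given correlation identifier."""

    def __init__(self):
        self._messages = deque()

    def send(self, msg, correlid):
        # The service provider tags its reply with the requester's id.
        self._messages.append((msg, correlid))

    def receive(self, correlid):
        # Remove and return the first message whose correlation id
        # matches, or None if no matching reply has arrived yet.
        for item in list(self._messages):
            if item[1] == correlid:
                self._messages.remove(item)
                return item[0]
        return None

replies = ReplyQueue()
# The provider is still busy: a poll for Req1 finds nothing.
assert replies.receive('Req1') is None
# The provider eventually replies, tagged with the request's id.
replies.send('status=OK', 'Req1')
assert replies.receive('Req1') == 'status=OK'
```

The first (empty) poll mirrors the situation described below, where an MQRECEIVE issued before the reply is sent finds no matching message; the requester simply polls again later.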
To receive a reply to this specific request, use the corresponding MQRECEIVE statement to selectively retrieve the first message defined by the indicated service that matches this correlation identifier, as follows:
Example 18:
DB2MQ.MQRECEIVE('myReceiver','myPolicy','Req1')
If the application servicing the request is busy and the requester issues the above MQRECEIVE before the reply is sent, no messages matching this correlation identifier will be found. To receive both the service request and the correlation identifier, a statement like the following is used:
Example 19:
SELECT msg, correlid FROM table (DB2MQ.MQRECEIVEALL('aServiceProvider','myPolicy',1)) t
This returns the message and correlation identifier of the first request from the service aServiceProvider. Once the service has been performed, it sends the reply message to the queue described by aRequester. Meanwhile, the service requester could have been doing other work. In fact, there is no guarantee that the initial service request will be responded to within a set time. Application-level timeouts such as this must be managed by the developer; the requester must poll to detect the presence of the reply.
The advantage of such time-independent asynchronous processing is that the requester and service provider execute completely independently of one another. This can be used both to accommodate environments in which applications are only intermittently connected, and in more batch-oriented environments in which multiple requests or replies are aggregated before processing. This kind of aggregation is often used in data warehouse environments to periodically update a data warehouse or operational data store.
17.5.4.2 Publish/Subscribe
Simple Data Publication
Another common scenario in application integration is for one application to notify other applications about events of interest. This is easily done by sending a message to a queue monitored by another application.
The contents of the message can be a user-defined string or can be composed from database columns. Often a simple message is all that needs to be sent, using the MQSEND function. When such messages need to be sent concurrently to multiple recipients, the Distribution List facility of the MQSeries AMI can be used. A distribution list is defined using the AMI Administration tool. A distribution list comprises a list of individual services. A message sent to a distribution list is forwarded to every service defined within the list. This is especially useful when it is known that a few services will always be interested in every message. The following example shows the sending of a message to the distribution list interestedParties:

Example 20:
DB2MQ.MQSEND('interestedParties','information of general interest');

When more control over the messages that particular services should receive is required, a Publish/Subscribe capability is needed. Publish/Subscribe systems typically provide a scalable, secure environment in which many subscribers can register to receive messages from multiple publishers. To support this capability, the MQPublish interface can be used in conjunction with MQSeries Integrator or the MQSeries Publish/Subscribe facility. MQPublish allows users to optionally specify a topic to be associated with a message. Topics allow a subscriber to more clearly specify the messages to be accepted. The sequence of steps is as follows:

1. An MQSeries administrator configures MQSeries Integrator publish/subscribe capabilities.

2. Interested applications subscribe to subscription points defined by the MQSI configuration, optionally specifying topics of interest to them. Each subscriber selects relevant topics, and can also utilize the content-based subscription techniques of MQSeries Integrator V2. It is important to note that queues, as represented by service names, define the subscriber.

3. A DB2 application publishes a message to the service point Weather. 
The message indicates that the weather is Sleet, with a topic of Austin, thus notifying interested subscribers that the weather in Austin is Sleet.

4. The mechanics of actually publishing the message are handled by the MQSeries functions provided by DB2. The message is sent to MQSeries Integrator using the service named Weather.

5. MQSI accepts the message from the Weather service, performs any processing defined by the MQSI configuration, and determines which subscriptions it satisfies. MQSI then forwards the message to the subscriber queues whose criteria it meets.

6. Applications that have subscribed to the Weather service, and registered an interest in Austin, will receive the message Sleet in their receiving service.

To publish this data using all the defaults and a null topic, you would use the following statement:

Example 21:
SELECT DB2MQ.MQPUBLISH(LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT || ' ' || char(AGE))
FROM EMPLOYEE WHERE DEPARTMENT = '5LGA'

Fully specifying all the parameters and simplifying the message to contain only the LASTNAME, the statement would look like:

Example 22:
SELECT DB2MQ.MQPUBLISH('HR_INFO_PUB', 'SPECIAL_POLICY', LASTNAME, 'ALL_EMP:5LGA', 'MANAGER')
FROM EMPLOYEE WHERE DEPARTMENT = '5LGA'

This statement publishes messages to the HR_INFO_PUB publication service using the SPECIAL_POLICY service. The messages indicate that the sender is MANAGER. The topic string demonstrates that multiple topics, concatenated using a ':', can be specified. In this example, the use of two topics allows subscribers to register for either ALL_EMP or just 5LGA to receive these messages.

To receive published messages, you must first register your interest in messages containing a given topic and indicate the name of the subscriber service that messages should be sent to. It is important to note that an AMI subscriber service defines a broker service and a receiver service. 
The broker service is how the subscriber communicates with the publish/subscribe broker, and the receiver service is where messages matching the subscription request will be sent. The following statement registers an interest in the topic ALL_EMP:

Example 23:
DB2MQ.MQSUBSCRIBE('aSubscriber', 'ALL_EMP')

Once an application has subscribed, messages published with the topic ALL_EMP will be forwarded to the receiver service defined by the subscriber service. An application can have multiple concurrent subscriptions. To obtain the messages that meet your subscription, any of the standard message retrieval functions can be used. For instance, if the subscriber service aSubscriber defines the receiver service to be aSubscriberReceiver, then the following statement will non-destructively read the first message:

Example 24:
DB2MQ.MQREAD('aSubscriberReceiver')

To determine both the messages and the topics that they were published under, you would use one of the table functions. The following statement would receive the first five messages from aSubscriberReceiver and display both the message and the topic:

Example 25:
SELECT t.msg, t.topic FROM table (DB2MQ.MQRECEIVEALL('aSubscriberReceiver',5)) t

To read all of the messages with the topic ALL_EMP, you can leverage the power of SQL to issue:

Example 26:
SELECT t.msg FROM table (DB2MQ.MQREADALL('aSubscriberReceiver')) t WHERE t.topic = 'ALL_EMP'

Note: It is important to realize that if MQRECEIVEALL is used with a constraint, then the entire queue will be consumed, not just those messages published with topic ALL_EMP. This is because the table function is performed before the constraint is applied.

When you are no longer interested in subscribing to a particular topic, you must explicitly unsubscribe using a statement such as:

Example 27:
DB2MQ.MQUNSUBSCRIBE('aSubscriber', 'ALL_EMP')

Once this statement is issued, the publish/subscribe broker will no longer deliver messages matching this subscription. 
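Putting the statements above together, a typical subscriber lifecycle can be sketched as follows. All service and topic names (aSubscriber, aSubscriberReceiver, ALL_EMP) are taken from the examples above and assume the corresponding AMI configuration is in place:

```sql
-- 1. Register interest: messages published with topic ALL_EMP will be
--    routed to the receiver service defined by aSubscriber.
DB2MQ.MQSUBSCRIBE('aSubscriber', 'ALL_EMP')

-- 2. Periodically read matching messages. MQREADALL is non-destructive,
--    so the WHERE clause can filter by topic without consuming
--    messages published under other topics.
SELECT t.msg FROM table (DB2MQ.MQREADALL('aSubscriberReceiver')) t
WHERE t.topic = 'ALL_EMP'

-- 3. Stop receiving messages for this subscription.
DB2MQ.MQUNSUBSCRIBE('aSubscriber', 'ALL_EMP')
```
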
Automated Publication

Another important technique in database messaging is automated publication. Using the trigger facility within DB2, you can automatically publish messages as part of a trigger invocation. While other techniques exist for automated data publication, the trigger-based approach allows administrators or developers great freedom in constructing the message content and flexibility in defining the trigger actions. As with any use of triggers, attention must be paid to the frequency and cost of execution. The following examples demonstrate how triggers may be used with the MQSeries DB2 Functions.

The example below shows how easy it is to publish a message each time a new employee is hired. Any users or applications subscribing to the HR_INFO_PUB service with a registered interest in NEW_EMP will receive a message containing the date, name, and department of each new employee.

Example 28:
CREATE TRIGGER new_employee AFTER INSERT ON employee REFERENCING NEW AS n FOR EACH ROW MODE DB2SQL
VALUES DB2MQ.MQPUBLISH('HR_INFO_PUB', 'NEW_EMP', current date || ' ' || n.LASTNAME || ' ' || n.DEPARTMENT)

------------------------------------------------------------------------

17.6 enable_MQFunctions

enable_MQFunctions

Enables DB2 MQSeries functions for the specified database and validates that the DB2 MQSeries functions can be executed properly. The command will fail if MQSeries and MQSeries AMI have not been installed and configured.

Authorization

One of the following:
* sysadm
* dbadm
* IMPLICIT_SCHEMA on the database, if the implicit or explicit schema name of the function does not exist
* CREATEIN privilege on the schema, if the schema name, DB2MQ, exists

Command Syntax

enable_MQFunctions -n database -u userid -p password [-force] [-noValidate]

Command Parameters

-n database
   Specifies the name of the database to be enabled.
-u userid
   Specifies the user ID to connect to the database. 
-p password
   Specifies the password for the user ID.
-force
   Specifies that warnings encountered during a re-installation should be ignored.
-noValidate
   Specifies that validation of the DB2 MQSeries functions will not be performed.

Examples

In the following example, the DB2MQ functions are created. The user connects to the database SAMPLE. The default schema DB2MQ is used.

enable_MQFunctions -n sample -u user1 -p password1

Usage Notes

The DB2 MQ functions run under the schema DB2MQ, which is automatically created by this command.

Before executing this command:
* Ensure that MQSeries and the MQSeries AMI are installed, and that the version of MQSeries is 5.2 or higher.
* Ensure that the environment variable $AMT_DATA_PATH is defined.
* Change the directory to the cfg subdirectory of DB2PATH.

On UNIX:
* Use db2set to add AMT_DATA_PATH to the DB2ENVLIST.
* Ensure that the user account associated with UDF execution is a member of the mqm group.
* Ensure that the user who will be calling this command is a member of the mqm group.

Note: AIX 4.2 is not supported by MQSeries 5.2.

------------------------------------------------------------------------

17.7 disable_MQFunctions

disable_MQFunctions

Disables the use of DB2 MQSeries functions for the specified database.

Authorization

One of the following:
* sysadm
* dbadm
* IMPLICIT_SCHEMA on the database, if the implicit or explicit schema name of the function does not exist
* CREATEIN privilege on the schema, if the schema name, DB2MQ, exists

Command Syntax

disable_MQFunctions -n database -u userid -p password

Command Parameters

-n database
   Specifies the name of the database.
-u userid
   Specifies the user ID used to connect to the database.
-p password
   Specifies the password for the user ID.

Examples

In the following example, DB2MQ functions are disabled for the database SAMPLE. 
disable_MQFunctions -n sample -u user1 -p password1 ------------------------------------------------------------------------ Administrative Tools * Control Center o 18.1 Ability to Administer DB2 Server for VSE and VM Servers o 18.2 Java 1.2 Support for the Control Center o 18.3 "Invalid shortcut" Error when Using the Online Help on the Windows Operating System o 18.4 Java Control Center on OS/2 o 18.5 "File access denied" Error when Attempting to View a Completed Job in the Journal on the Windows Operating System o 18.6 Multisite Update Test Connect o 18.7 Control Center for DB2 for OS/390 o 18.8 Required Fix for Control Center for OS/390 o 18.9 Change to the Create Spatial Layer Dialog o 18.10 Troubleshooting Information for the DB2 Control Center o 18.11 Control Center Troubleshooting on UNIX Based Systems o 18.12 Possible Infopops Problem on OS/2 o 18.13 Help for the jdk11_path Configuration Parameter o 18.14 Solaris System Error (SQL10012N) when Using the Script Center or the Journal o 18.15 Help for the DPREPL.DFT File o 18.16 Launching More Than One Control Center Applet o 18.17 Online Help for the Control Center Running as an Applet o 18.18 Running the Control Center in Applet Mode (Windows 95) o 18.19 Working with Large Query Results * Information Center o 19.1 "Invalid shortcut" Error on the Windows Operating System o 19.2 Opening External Web Links in Netscape Navigator when Netscape is Already Open (UNIX Based Systems) o 19.3 Problems Starting the Information Center * Wizards o 20.1 Setting Extent Size in the Create Database Wizard o 20.2 MQSeries Assist wizard o 20.3 OLE DB Assist wizard ------------------------------------------------------------------------ Control Center ------------------------------------------------------------------------ 18.1 Ability to Administer DB2 Server for VSE and VM Servers The DB2 Universal Database Version 7 Control Center has enhanced its support of DB2 Server for VSE and VM databases. 
All DB2 Server for VSE and VM database objects can be viewed by the Control Center. There is also support for the CREATE INDEX, REORGANIZE INDEX, and UPDATE STATISTICS statements, and for the REBIND command. REORGANIZE INDEX and REBIND require a stored procedure running on the DB2 Server for VSE and VM hosts. This stored procedure is supplied by the Control Center for VSE and VM feature of DB2 Server for VSE and VM. The fully integrated Control Center allows the user to manage DB2, regardless of the platform on which the DB2 server runs. DB2 Server for VSE and VM objects are displayed on the Control Center main window, along with DB2 Universal Database objects. The corresponding actions and utilities to manage these objects are invoked by selecting the object. For example, a user can list the indexes of a particular database, select one of the indexes, and reorganize it. The user can also list the tables of a database and run update statistics, or define a table as a replication source. For information about configuring the Control Center to perform administration tasks on DB2 Server for VSE and VM objects, refer to the DB2 Connect User's Guide, or the Installation and Configuration Supplement.

------------------------------------------------------------------------

18.2 Java 1.2 Support for the Control Center

The Control Center supports bi-directional languages, such as Arabic and Hebrew, using the bi-di support in Java 1.2. This support is provided for the Windows NT platform only. Java 1.2 must be installed for the Control Center to recognize and use it:

1. JDK 1.2.2 is available on the DB2 UDB CD under the DB2\bidi\NT directory. ibm-inst-n122p-win32-x86.exe is the installer program, and ibm-jdk-n122p-win32-x86.exe is the JDK distribution. Copy both files to a temporary directory on your hard drive, then run the installer program from there.

2. Install it under DB2PATH\java\Java12, where DB2PATH is the installation path of DB2.

3. 
Do not select JDK/JRE as the System VM when prompted by the JDK/JRE installation.

After Java 1.2 is installed successfully, starting the Control Center in the normal manner will use Java 1.2. To stop the use of Java 1.2, you may either uninstall the JDK/JRE from DB2PATH\java\Java12, or simply rename the DB2PATH\java\Java12 sub-directory to something else.

Note: Do not confuse DB2PATH\java\Java12 with DB2PATH\Java12. DB2PATH\Java12 is part of the DB2 installation, and includes JDBC support for Java 1.2.

------------------------------------------------------------------------

18.3 "Invalid shortcut" Error when Using the Online Help on the Windows Operating System

When using the Control Center online help, you may encounter an error like: "Invalid shortcut". If you have recently installed a new Web browser or a new version of a Web browser, ensure that HTML and HTM documents are associated with the correct browser. See the Windows Help topic "To change which program starts when you open a file".

------------------------------------------------------------------------

18.4 Java Control Center on OS/2

The Control Center must be installed on an HPFS-formatted drive.

------------------------------------------------------------------------

18.5 "File access denied" Error when Attempting to View a Completed Job in the Journal on the Windows Operating System

On DB2 Universal Database for Windows NT, a "File access denied" error occurs when attempting to open the Journal to view the details of a job created in the Script Center. The job status shows complete. This behavior occurs when a job created in the Script Center contains the START command. To avoid this behavior, use START/WAIT instead of START in both the batch file and in the job itself.

------------------------------------------------------------------------

18.6 Multisite Update Test Connect

Multisite Update Test Connect functionality in the Version 7 Control Center is limited by the version of the target instance. 
The target instance must be at least Version 7 for the "remote" test connect functionality to run. To run Multisite Update Test Connect functionality in Version 6, you must bring up the Control Center locally on the target instance and run it from there. ------------------------------------------------------------------------ 18.7 Control Center for DB2 for OS/390 The DB2 UDB Control Center for OS/390 allows you to manage the use of your licensed IBM DB2 utilities. Utility functions that are elements of separately orderable features of DB2 UDB for OS/390 must be licensed and installed in your environment before being managed by the DB2 Control Center. The "CC390" database, defined with the Control Center when you configure a DB2 for OS/390 subsystem, is used for internal support of the Control Center. Do not modify this database. Although DB2 for OS/390 Version 7.1 is not mentioned specifically in the Control Center table of contents, or the Information Center Task information, the documentation does support the DB2 for OS/390 Version 7.1 functions. Many of the DB2 for OS/390 Version 6-specific functions also relate to DB2 for OS/390 Version 7.1, and some functions that are DB2 for OS/390 Version 7.1-specific in the table of contents have no version designation. If you have configured a DB2 for OS/390 Version 7.1 subsystem on your Control Center, you have access to all the documentation for that version. To access and use the Generate DDL function from the Control Center for DB2 for OS/390, you must have the Generate DDL function installed: * For Version 5, install DB2Admin 2.0 with DB2 for OS/390 Version 5. * For Version 6, install the small programming enhancement that will be available as a PTF for the DB2 Admin feature of DB2 for OS/390 Version 6. * For Version 7.1, the Generate DDL function is part of the separately priced DB2 Admin feature of DB2 for OS/390 Version 7.1. 
You can access Stored Procedure Builder from the Control Center, but you must have already installed it by the time you start the DB2 UDB Control Center. It is part of the DB2 Application Development Client.

To catalog a DB2 for OS/390 subsystem directly on the workstation, use the Client Configuration Assistant tool:

1. On the Source page, specify the Manually configure a connection to a database radio button.
2. On the Protocol page, complete the appropriate communications information.
3. On the Database page, specify the subsystem name in the Database name field.
4. On the Node Options page, select the Configure node options (Optional) check box.
5. Select MVS/ESA, OS/390 from the list in the Operating system field.
6. Click Finish to complete the configuration.

To catalog a DB2 for OS/390 subsystem via a gateway machine, follow steps 1-6 above on the gateway machine, and then:

1. On the client machine, start the Control Center.
2. Right-click the Systems folder and select Add.
3. In the Add System dialog, type the gateway machine name in the System name field.
4. Type DB2DAS00 in the Remote instance field.
5. For the TCP/IP protocol, in the Protocol parameters, specify the gateway machine's host name in the Host name field.
6. Type 523 in the Service name field.
7. Click OK to add the system. You should now see the gateway machine added under the Systems folder.
8. Expand the gateway machine name.
9. Right-click the Instances folder and select Add.
10. In the Add Instance dialog, click Refresh to list the instances available on the gateway machine. If the gateway machine is a Windows NT system, the DB2 for OS/390 subsystem was probably cataloged under the instance DB2.
11. Select the instance. The protocol parameters are filled in automatically for this instance.
12. Click OK to add the instance.
13. Open the Instances folder to see the instance you just added.
14. Expand the instance.
15. Right-click the Databases folder and select Add.
16. 
Click Refresh to display the local databases on the gateway machine. If you are adding a DB2 subsystem, type the subsystem name in the Database name field of the Add Database dialog. Optionally, type a local alias name for the subsystem (or the database).

17. Click OK.

You have now successfully added the subsystem in the Control Center. When you open the database, you should see the DB2 for OS/390 subsystem displayed.

The first paragraph in the section "Control Center 390" states:

The DB2 UDB Control Center for OS/390 allows you to manage the use of your licensed IBM DB2 utilities. Utility functions that are elements of separately orderable features of DB2 UDB for OS/390 must be licensed and installed in your environment before being managed by the DB2 Control Center.

This section should now read:

The DB2 Control Center for OS/390 allows you to manage the use of your licensed IBM DB2 utilities. Utility functions that are elements of separately orderable products must be licensed and installed in your environment in order to be managed by DB2 Control Center.

------------------------------------------------------------------------

18.8 Required Fix for Control Center for OS/390

You must apply APAR PQ36382 to the 390 Enablement feature of DB2 for OS/390 Version 5 and DB2 for OS/390 Version 6 to manage these subsystems using the DB2 UDB Control Center for Version 7. Without this fix, you cannot use the DB2 UDB Control Center for Version 7 to run utilities for those subsystems. The APAR should be applied to the following FMIDs:

DB2 for OS/390 Version 5 390 Enablement: FMID JDB551D
DB2 for OS/390 Version 6 390 Enablement: FMID JDB661D

------------------------------------------------------------------------

18.9 Change to the Create Spatial Layer Dialog

The "<<" and ">>" buttons have been removed from the Create Spatial Layer dialog. 
------------------------------------------------------------------------ 18.10 Troubleshooting Information for the DB2 Control Center In the "Control Center Installation and Configuration" chapter in your Quick Beginnings book, the section titled "Troubleshooting Information" tells you to unset your client browser's CLASSPATH from a command window if you are having problems running the Control Center as an applet. This section also tells you to start your browser from the same command window. However, the command for starting your browser is not provided. To launch Internet Explorer, type start iexplore and press Enter. To launch Netscape, type start netscape and press Enter. These commands assume that your browser is in your PATH. If it is not, add it to your PATH or switch to your browser's installation directory and reissue the start command. ------------------------------------------------------------------------ 18.11 Control Center Troubleshooting on UNIX Based Systems If you are unable to start the Control Center on a UNIX based system, set the JAVA_HOME environment variable to point to your Java distribution: * If java is installed under /usr/jdk118, set JAVA_HOME to /usr/jdk118. * For the sh, ksh, or bash shell: export JAVA_HOME=/usr/jdk118. * For the csh or tcsh shell: setenv JAVA_HOME /usr/jdk118 ------------------------------------------------------------------------ 18.12 Possible Infopops Problem on OS/2 If you are running the Control Center on OS/2, using screen size 1024x768 with 256 colors, and with Workplace Shell Palette Awareness enabled, infopops that extend beyond the border of the current window may be displayed with black text on a black background. To fix this problem, either change the display setting to more than 256 colors, or disable Workplace Shell Palette Awareness. 
------------------------------------------------------------------------ 18.13 Help for the jdk11_path Configuration Parameter In the Control Center help, the description of the Java Development Kit 1.1 Installation Path (jdk11_path) configuration parameter is missing a line under the sub-heading Applies To. The complete list under Applies To is: * Database server with local and remote clients * Client * Database server with local clients * Partitioned database server with local and remote clients * Satellite database server with local clients ------------------------------------------------------------------------ 18.14 Solaris System Error (SQL10012N) when Using the Script Center or the Journal When selecting a Solaris system from the Script Center or the Journal, the following error may be encountered: SQL10012N - An unexpected operating system error was received while loading the specified library "/udbprod/db2as/sqllib/function/unfenced/ db2scdar!ScheduleInfoOpenScan". SQLSTATE=42724. This is caused by a bug in the Solaris runtime linker. To correct this problem, apply the following patch: 105490-06 (107733 makes 105490 obsolete) for Solaris 2.6 ------------------------------------------------------------------------ 18.15 Help for the DPREPL.DFT File In the Control Center, in the help for the Replication page of the Tool Settings notebook, step 5d says: Save the file into the working directory for the Control Center (for example, SQLLIB\BIN) so that the system can use it as the default file. Step 5d should say: Save the file into the working directory for the Control Center (SQLLIB\CC) so that the system can use it as the default file. ------------------------------------------------------------------------ 18.16 Launching More Than One Control Center Applet You cannot launch more than one Control Center applet simultaneously on the same machine. This restriction applies to Control Center applets running in all supported browsers. 
------------------------------------------------------------------------

18.17 Online Help for the Control Center Running as an Applet

When the Control Center is running as an applet, the F1 key only works in windows and notebooks that have infopops. You can press the F1 key to bring up infopops in the following components:
* DB2 Universal Database for OS/390
* The wizards

In the rest of the Control Center components, F1 does not bring up any help. To display help for the other components, please use the Help push button, or the Help pull-down menu.

------------------------------------------------------------------------

18.18 Running the Control Center in Applet Mode (Windows 95)

An attempt to open the Script Center may fail if an invalid user ID and password are specified. Ensure that a valid user ID and password are entered when signing on to the Control Center.

------------------------------------------------------------------------

18.19 Working with Large Query Results

It is easy for a user to produce a query that returns a large number of rows. It is not so easy for a user to predict how many rows might actually be returned. With a query that could potentially return thousands (or millions) of rows, there are two problems:

1. It can take a long time to retrieve the result.
2. A large amount of client memory can be required to hold the result.

To address these problems, DB2 breaks up large result sets into chunks. It will retrieve and display the results of a query one chunk at a time. As a result:

1. Display time will be reduced, as the first chunk of a query is available for viewing while the remaining chunks are being retrieved.
2. Memory requirements on the client will be reduced, as only one chunk of a query result will be stored on the client at any given time.

To control the number of query result rows in memory:

1. Open the General page of the Tool Settings notebook.
2. 
In the Maximum size section, select:

o Sample Contents to limit the number of result rows displayed in the Sample Contents window. Specify the chunk size of the result set (number of rows) in the entry field.
o Command Center to limit the number of result rows displayed on the Query Results page of the Command Center. Specify the chunk size of the result set (number of rows) in the entry field.

When working with the results of a query in the Sample Contents window or on the Query Results page of the Command Center, the Rows in memory field indicates the number of rows being held in memory for the query. This number will never be greater than the Maximum size set. Click Next to retrieve the next chunk of the result set. When Next is inactive, you have reached the end of the result set.

------------------------------------------------------------------------

Information Center

------------------------------------------------------------------------

19.1 "Invalid shortcut" Error on the Windows Operating System

When using the Information Center, you may encounter an error like: "Invalid shortcut". If you have recently installed a new Web browser or a new version of a Web browser, ensure that HTML and HTM documents are associated with the correct browser. See the Windows Help topic "To change which program starts when you open a file".

------------------------------------------------------------------------

19.2 Opening External Web Links in Netscape Navigator when Netscape is Already Open (UNIX Based Systems)

If Netscape Navigator is already open and displaying either a local DB2 HTML document or an external Web site, an attempt to open an external Web site from the Information Center will result in a Netscape error. The error will state that "Netscape is unable to find the file or directory named ." To work around this problem, close the open Netscape browser before opening the external Web site. Netscape will restart and bring up the external Web site. 
Note that this error does not occur when attempting to open a local DB2 HTML document with Netscape already open. ------------------------------------------------------------------------ 19.3 Problems Starting the Information Center On some systems, the Information Center can be slow to start if you invoke it using the Start Menu, First Steps, or the db2ic command. If you experience this problem, start the Control Center, then select Help --> Information Center. ------------------------------------------------------------------------ Wizards ------------------------------------------------------------------------ 20.1 Setting Extent Size in the Create Database Wizard Using the Create Database Wizard, it is possible to set the Extent Size and Prefetch Size parameters for the User Table Space (but not those for the Catalog or Temporary Tables) of the new database. This feature will be enabled only if at least one container is specified for the User Table Space on the "User Tables" page of the Wizard. ------------------------------------------------------------------------ 20.2 MQSeries Assist wizard DB2 Version 7.2 provides a new MQSeries Assist wizard. This wizard creates a table function that reads from an MQSeries queue using the DB2 MQSeries Functions, which are also new in Version 7.2. The wizard can treat each MQSeries message as a delimited string or a fixed length column string depending on your specification. The created table function parses the string according to your specifications, and returns each MQSeries message as a row of the table function. The wizard also allows you to create a view on top of the table function and to preview an MQSeries message and the table function result. This wizard can be launched from Stored Procedure Builder or Data Warehouse Center. Requirements for this wizard are: * MQSeries version 5.2 * MQSeries Application Messaging Interface (AMI) * DB2 MQSeries Functions For details on these requirements, see MQSeries. 
For samples and MQSeries Assist wizard tutorials, go to the tutorials section at http://www.ibm.com/software/data/db2/udb/ide ------------------------------------------------------------------------ 20.3 OLE DB Assist wizard This wizard helps you to create a table function that reads data from another database provider that supports the Microsoft OLE DB standard. You can optionally create a DB2 table with the data that is read by the OLE DB table function, and you can create a view for the OLE DB table function. This wizard can be launched from Stored Procedure Builder or Data Warehouse Center. Requirements for this wizard are: * An OLE DB provider (such as Oracle, Microsoft SQL Server) * OLE DB support functions For samples and OLE DB Assist wizard tutorials, go to the tutorials section at http://www.ibm.com/software/data/db2/udb/ide ------------------------------------------------------------------------ Business Intelligence * Business Intelligence Tutorial o 21.1 Revised Business Intelligence Tutorial * Data Warehouse Center Administration Guide o 22.1 Troubleshooting o 22.2 Setting up Excel as a Warehouse Source o 22.3 Defining and Running Processes o 22.4 Export Metadata Dialog o 22.5 Defining Values for a Submit OS/390 JCL Jobstream (VWPMVS) Program o 22.6 Changes to the Data Warehousing Sample Appendix o 22.7 Data Warehouse Center Messages o 22.8 Creating an Outline and Loading Data in the DB2 OLAP Integration Server o 22.9 Using Classic Connect with the Data Warehouse Center o 22.10 Data Warehouse Center Environment Structure o 22.11 Using the Invert Transformer o 22.12 Accessing DB2 Version 5 Data with the DB2 Version 7 Warehouse Agent + 22.12.1 Migrating DB2 Version 5 Servers + 22.12.2 Changing the Agent Configuration + 22.12.2.1 UNIX Warehouse Agents + 22.12.2.2 Microsoft Windows NT, Windows 2000, and OS/2 Warehouse Agents o 22.13 IBM ERwin metadata extract program + 22.13.1 Contents + 22.13.2 Software requirements + 22.13.3 Program files + 22.13.4 
Creating tag language files + 22.13.5 Importing a tag language file into the Data Warehouse Center + 22.13.6 Importing a tag language file into the Information Catalog Manager + 22.13.7 Troubleshooting + 22.13.8 ERwin to DB2 Data Warehouse Center mapping + 22.13.8.1 ERwin to Information Catalog Manager mapping o 22.14 Name and address cleansing in the Data Warehouse Center + 22.14.1 + 22.14.1.1 Requirements + 22.14.1.2 Trillium Software System components + 22.14.1.3 Using the Trillium Batch System with the Data Warehouse Center + 22.14.1.4 Importing Trillium metadata + 22.14.1.5 Mapping the metadata + 22.14.1.6 Restrictions + 22.14.2 Writing Trillium Batch System JCL file + 22.14.3 Writing Trillium Batch System script file on UNIX and Windows + 22.14.4 Defining a Trillium Batch System step + 22.14.5 Using the Trillium Batch System user-defined program + 22.14.6 Error handling + 22.14.6.1 Error return codes + 22.14.6.2 Log file o 22.15 Integration of MQSeries with the Data Warehouse Center + 22.15.1 Creating views for MQSeries messages + 22.15.1.1 Requirements + 22.15.1.2 Restrictions + 22.15.1.3 Creating a view for MQSeries messages + 22.15.2 Importing MQSeries messages and XML metadata + 22.15.2.1 Requirements + 22.15.2.2 Restrictions + 22.15.2.3 Importing MQSeries messages and XML metadata + 22.15.2.4 Using the MQSeries user-defined program + 22.15.2.5 Error return codes + 22.15.2.6 Error Log file o 22.16 Microsoft OLE DB and Data Transaction Services support + 22.16.1 Creating views for OLE DB table functions + 22.16.2 Creating views for DTS packages o 22.17 Using incremental commit with replace o 22.18 Component trace data file names o 22.19 Open Client needed for Sybase sources on AIX and the Solaris Operating Environment o 22.20 Sample entries corrected o 22.21 Chapter 3. Setting up warehouse sources + 22.21.1 Mapping the Memo field in Microsoft Access to a warehouse source o 22.22 Chapter 10. 
Maintaining the Warehouse Database + 22.22.1 Linking tables to a step subtype for the DB2 UDB RUNSTATS program o 22.23 The Default Warehouse Control Database o 22.24 The Warehouse Control Database Management Window o 22.25 Changing the Active Warehouse Control Database o 22.26 Creating and Initializing a Warehouse Control Database o 22.27 Creating editioned SQL steps o 22.28 Changing sources and targets in the Process Modeler window o 22.29 Adding descriptions to Data Warehouse Center objects o 22.30 Running Sample Contents o 22.31 Editing a Create DDL SQL statement o 22.32 Migrating Visual Warehouse business views o 22.33 Generating target tables and primary keys o 22.34 Using Merant ODBC drivers o 22.35 New ODBC Driver o 22.36 Defining a warehouse source or target in an OS/2 database o 22.37 Monitoring the state of the warehouse control database o 22.38 Using SQL Assist with the TBC_MD sample database o 22.39 Using the FormatDate function o 22.40 Changing the language setting o 22.41 Using the Generate Key Table transformer o 22.42 Maintaining connections to databases o 22.43 Setting up a remote Data Warehouse Center client o 22.44 Defining a DB2 for VM warehouse source o 22.45 Defining a DB2 for VM or DB2 for VSE target table o 22.46 Enabling delimited identifier support o 22.47 Data Joiner Error Indicates a Bind Problem o 22.48 Setting up and Running Replication with Data Warehouse Center o 22.49 Troubleshooting Tips o 22.50 Accessing Sources and Targets o 22.51 Additions to Supported non-IBM Database Sources o 22.52 Creating a Data Source Manually in Data Warehouse Center o 22.53 Importing and Exporting Metadata Using the Common Warehouse Metadata Interchange (CWMI) + 22.53.1 Introduction + 22.53.2 Importing Metadata + 22.53.3 Updating Your Metadata After Running the Import Utility + 22.53.4 Exporting Metadata o 22.54 OS/390 Runstats utility step o 22.55 OS/390 Load utility step o 22.56 Common Warehouse Metamodel (CWM) XML support o 22.57 Process modeler o 
22.58 Schema modeler o 22.59 Mandatory fields o 22.60 Data Warehouse Center launchpad enhancements o 22.61 Printing step information to a file * Data Warehouse Center Application Integration Guide o 23.1 Additional metadata templates + 23.1.1 Commit.tag + 23.1.1.1 Tokens + 23.1.1.2 Examples of values + 23.1.2 ForeignKey.tag + 23.1.2.1 Tokens + 23.1.2.2 Examples of values + 23.1.3 ForeignKeyAdditional.tag + 23.1.3.1 Tokens + 23.1.3.2 Examples of values + 23.1.4 PrimaryKey.tag + 23.1.4.1 Tokens + 23.1.4.2 Examples of values + 23.1.5 PrimaryKeyAdditional.tag + 23.1.5.1 Tokens + 23.1.5.2 Examples of values * Data Warehouse Center Online Help o 24.1 Defining Tables or Views for Replication o 24.2 Running Essbase VWPs with the AS/400 Agent o 24.3 Using the Publish Data Warehouse Center Metadata Window and Associated Properties Window o 24.4 Foreign Keys o 24.5 Replication Notebooks o 24.6 Importing a Tag Language o 24.7 Links for Adding Data o 24.8 Importing Tables o 24.9 Correction to RUNSTATS and REORGANIZE TABLE Online Help o 24.10 Notification Page (Warehouse Properties Notebook and Schedule Notebook) o 24.11 Agent Module Field in the Agent Sites Notebook * DB2 OLAP Starter Kit o 25.1 OLAP Server Web Site o 25.2 Supported Operating System Service Levels o 25.3 Completing the DB2 OLAP Starter Kit Setup on UNIX o 25.4 Configuring ODBC for the OLAP Starter Kit + 25.4.1 Configuring Data Sources on UNIX systems + 25.4.1.1 Configuring ODBC Environment Variables + 25.4.1.2 Editing the odbc.ini File + 25.4.1.3 Adding a data source to an odbc.ini file + 25.4.1.4 Example of ODBC Settings for DB2 + 25.4.1.5 Example of ODBC Settings for Oracle + 25.4.2 Configuring the OLAP Metadata Catalog on UNIX Systems + 25.4.3 Configuring Data Sources on Windows Systems + 25.4.4 Configuring the OLAP Metadata Catalog on Windows Systems + 25.4.5 After You Configure a Data Source o 25.5 Logging in from OLAP Starter Kit Desktop + 25.5.1 Starter Kit Login Example o 25.6 Manually creating and 
configuring the sample databases for OLAP Starter Kit o 25.7 Migrating Applications to OLAP Starter Kit Version 7.2 o 25.8 Known Problems and Limitations o 25.9 OLAP Spreadsheet Add-in EQD Files Missing * Information Catalog Manager Administration Guide o 26.1 Information Catalog Manager Initialization Utility + 26.1.1 + 26.1.2 Licensing issues + 26.1.3 Installation Issues o 26.2 Accessing DB2 Version 5 Information Catalogs with the DB2 Version 7 Information Catalog Manager o 26.3 Setting up an Information Catalog o 26.4 Exchanging Metadata with Other Products o 26.5 Exchanging Metadata using the flgnxoln Command o 26.6 Exchanging Metadata using the MDISDGC Command o 26.7 Invoking Programs * Information Catalog Manager Programming Guide and Reference o 27.1 Information Catalog Manager Reason Codes * Information Catalog Manager User's Guide * Information Catalog Manager: Online Messages o 29.1 Message FLG0260E o 29.2 Message FLG0051E o 29.3 Message FLG0003E o 29.4 Message FLG0372E o 29.5 Message FLG0615E * Information Catalog Manager: Online Help o 30.1 Information Catalog Manager for the Web * DB2 Warehouse Manager Installation Guide o 31.1 Software requirements for warehouse transformers o 31.2 Connector for SAP R/3 + 31.2.1 Installation Prerequisites o 31.3 Connector for the Web + 31.3.1 Installation Prerequisites * Query Patroller Administration Guide o 32.1 DB2 Query Patroller Client is a Separate Component o 32.2 Migrating from Version 6 of DB2 Query Patroller Using dqpmigrate o 32.3 Enabling Query Management o 32.4 Location of Table Space for Control Tables o 32.5 New Parameters for dqpstart Command o 32.6 New Parameter for iwm_cmd Command o 32.7 New Registry Variable: DQP_RECOVERY_INTERVAL o 32.8 Starting Query Administrator o 32.9 User Administration o 32.10 Creating a Job Queue o 32.11 Using the Command Line Interface o 32.12 Query Enabler Notes o 32.13 DB2 Query Patroller Tracker may Return a Blank Column Page o 32.14 Query Patroller and Replication Tools 
o 32.15 Appendix B. Troubleshooting DB2 Query Patroller Clients
------------------------------------------------------------------------

Business Intelligence Tutorial

------------------------------------------------------------------------

21.1 Revised Business Intelligence Tutorial

FixPak 2 includes a revised Business Intelligence Tutorial and Data Warehouse Center Sample database that correct various problems in Version 7.1. To apply the revised Data Warehouse Center Sample database, do the following:

If you have not yet installed the sample databases, create new sample databases using the First Steps launch pad. Click Start and select Programs --> IBM DB2 --> First Steps.

If you have previously installed the sample databases, drop the sample databases DWCTBC, TBC_MD, and TBC. If you have added any data that you want to keep to the sample databases, back them up before dropping them.

To drop the three sample databases:
1. To open the DB2 Command Window, click Start and select Programs --> IBM DB2 --> Command Window.
2. In the DB2 Command Window, type each of the following three commands, pressing Enter after typing each one:
   db2 drop database dwctbc
   db2 drop database tbc_md
   db2 drop database tbc
3. Close the DB2 Command Window.
4. Create new sample databases using the First Steps launch pad. Click Start and select Programs --> IBM DB2 --> First Steps.

------------------------------------------------------------------------

Data Warehouse Center Administration Guide

------------------------------------------------------------------------

22.1 Troubleshooting

The Data Warehouse Center troubleshooting information has moved to the DB2 Troubleshooting Guide.

------------------------------------------------------------------------

22.2 Setting up Excel as a Warehouse Source

In "Chapter 3.
Setting up warehouse sources," section "Setting up non-DB2 database warehouse sources in Windows NT," the section concerning Microsoft Excel is missing a step. The new step is shown below as Step 3.

If you are using the Microsoft Excel 95/97 ODBC driver to access the Excel spreadsheets, you need to create a named table for each of the worksheets within the spreadsheet. To create a named table for each worksheet:
1. Select the columns and rows that you want.
2. In Excel, click Insert --> Name --> Define.
3. Ensure that the "Refers to" field of the Define Name window contains the cells that you selected in Step 1. If not, click the icon on the far right of the "Refers to" field to include all the cells that you selected.
4. Type a name (or use the default name) for the marked data.
5. Click OK.

------------------------------------------------------------------------

22.3 Defining and Running Processes

In "Chapter 5. Defining and running processes", section "Starting a step from outside the Data Warehouse Center", it should be noted that JDK 1.1.8 or later is required on the warehouse server workstation and the agent site if you start a step that has a double-byte name.

------------------------------------------------------------------------

22.4 Export Metadata Dialog

In Chapter 12, in the section entitled "Exporting and Importing Data Warehouse Center Metadata," in the subsection entitled "Exporting the metadata to a tag language file," Step 5 should be as follows:

If you do not want to export schedule information related to the processes that you are exporting, clear the Include schedules check box.

------------------------------------------------------------------------

22.5 Defining Values for a Submit OS/390 JCL Jobstream (VWPMVS) Program

On page 180, section "Defining values for a Submit OS/390 JCL jobstream (VWPMVS) program," step 8 states that you must define a .netrc file in the same directory as the JES file. Instead, the program creates the .netrc file.
If the file does not exist, the program creates the file in the home directory. If a .netrc file already exists in the home directory, the program renames the existing file and creates a new file. When the program finishes processing, it deletes the new .netrc file that it created and renames the original file to .netrc. ------------------------------------------------------------------------ 22.6 Changes to the Data Warehousing Sample Appendix * In the Data warehousing sample appendix, section "Viewing and modifying the sample metadata", the GEOGRAPHIES table should be included in the list of source tables. * In the Data warehousing sample appendix, section "Promoting the steps", in the procedure for promoting steps to production mode, the following statement is incorrect because the target table was created when you promoted the step to test mode: The Data Warehouse Center starts to create the target table, and displays a progress window. ------------------------------------------------------------------------ 22.7 Data Warehouse Center Messages On Microsoft Windows NT and Windows 2000, the Data Warehouse Center logs events to the system event log. The Event ID corresponds to the Data Warehouse Center message number. For information about the Data Warehouse Center messages, refer to the Message Reference. ------------------------------------------------------------------------ 22.8 Creating an Outline and Loading Data in the DB2 OLAP Integration Server The example in Figure 20 on page 315 has an error. The following commands are correct: "C:\IS\bin\olapicmd" < "C:\IS\Batch\my_script.script" > "C:\IS\Batch\my_script.log" The double quotation marks around "C:\IS\bin\olapicmd" are necessary if the name of a directory in the path contains a blank, such as Program Files. ------------------------------------------------------------------------ 22.9 Using Classic Connect with the Data Warehouse Center * In "Appendix F. 
Using Classic Connect with the Data Warehouse Center", the section "Installing the CROSS ACCESS ODBC driver" on page 388 has been replaced with the following information: Install the CROSS ACCESS ODBC driver by performing a custom install of the DB2 Warehouse Manager Version 7, and selecting the Classic Connect Drivers component. The driver is not installed as part of a typical installation of the DB2 Warehouse Manager. The CROSS ACCESS ODBC driver will be installed in the ODBC32 subdirectory of the SQLLIB directory. After the installation is complete, you must manually add the path for the driver (for example, C:\Program Files\SQLLIB\ODBC32) to the PATH system environment variable. If you have another version of the CROSS ACCESS ODBC driver already installed, place the ...\SQLLIB\ODBC32\ path before the path for the other version. The operating system will use the first directory in the path that contains the CROSS ACCESS ODBC driver. * The following procedure should be added to "Appendix F. Using Classic Connect with the Data Warehouse Center": Installing the Classic Connect ODBC Driver: 1. Insert the Warehouse Manager CD-ROM into your CD-ROM drive. The launchpad opens. 2. Click Install from the launchpad. 3. In the Select Products window, ensure that the DB2 Warehouse Manager check box is selected, then click Next. 4. In the Select Installation Type window, select Custom, then click Next. 5. In the Select Components window, select Classic Connect Drivers and Warehouse Agent, clear all other check boxes, and then click Next. 6. In the Start Copying Files window, review your selections. If you want to change any of your selections, click Back to return to the window where you can change the selection. Click Next to begin the installation. ------------------------------------------------------------------------ 22.10 Data Warehouse Center Environment Structure In "Appendix G. 
Data Warehouse Center environment structure" on page 401, there is an incorrect entry in the table. C:\Program Files\SQLLIB\ODBC32 is not added to the PATH environment variable. The only update to the PATH environment variable is C:\Program Files\SQLLIB\BIN.

------------------------------------------------------------------------

22.11 Using the Invert Transformer

The book states that the Invert Transformer can create a target table based on parameters, but it does not mention that the generated target table will not contain the desired output columns; you must create these columns explicitly in the target table.

------------------------------------------------------------------------

22.12 Accessing DB2 Version 5 Data with the DB2 Version 7 Warehouse Agent

DB2 Version 7 warehouse agents, as configured by the DB2 Version 7 install process, support access to DB2 Version 6 and DB2 Version 7 data. If you need to access DB2 Version 5 data, you must take one of the following two approaches:
* Migrate DB2 Version 5 servers to DB2 Version 6 or DB2 Version 7.
* Modify the agent configuration, on the appropriate operating system, to allow access to DB2 Version 5 data.

DB2 Version 7 warehouse agents do not support access to data from DB2 Version 2 or earlier versions.

22.12.1 Migrating DB2 Version 5 Servers

For information about migrating DB2 Version 5 servers, see DB2 Universal Database Quick Beginnings for your operating system.

22.12.2 Changing the Agent Configuration

The following information describes how to change the agent configuration on each operating system. When you migrate the DB2 servers to DB2 Version 6 or later, remove the changes that you made to the configuration.

22.12.2.1 UNIX Warehouse Agents

To set up a UNIX warehouse agent to access data from DB2 Version 5 with either CLI or ODBC access:
1. Install the DB2 Version 6 run-time client.
You can obtain the run-time client by selecting the client download from the following URL: http://www.ibm.com/software/data/db2/udb/support
2. Update the warehouse agent configuration file so that the DB2INSTANCE environment variable points to a DB2 Version 6 instance.
3. Catalog all databases in this DB2 Version 6 instance that the warehouse agent is to access.
4. Stop the agent daemon process by issuing the kill command with the agent daemon process ID. The agent daemon will then restart automatically. You need root authority to kill the process.

22.12.2.2 Microsoft Windows NT, Windows 2000, and OS/2 Warehouse Agents

To set up a Microsoft Windows NT, Windows 2000, or OS/2 warehouse agent to access data from DB2 Version 5:
1. Install DB2 Connect Enterprise Edition Version 6 on a workstation other than the one where the DB2 Version 7 warehouse agent is installed. DB2 Connect Enterprise Edition is included as part of DB2 Universal Database Enterprise Edition and DB2 Universal Database Enterprise - Extended Edition. If Version 6 of either of these DB2 products is installed, you do not need to install DB2 Connect separately.
   Restriction: You cannot install multiple versions of DB2 on the same Windows NT or OS/2 workstation. You can install DB2 Connect on another Windows NT workstation or on an OS/2 or UNIX workstation.
2. Configure the warehouse agent and DB2 Connect Version 6 for access to the DB2 Version 5 data. For more information, see the DB2 Connect User's Guide. The following steps provide an overview of what is required:
   a. On the DB2 Version 5 system, use the DB2 Command Line Processor to catalog the Version 5 database that the warehouse agent is to access.
   b. On the DB2 Connect system, use the DB2 Command Line Processor to catalog:
      + The TCP/IP node for the DB2 Version 5 system
      + The database for the DB2 Version 5 system
      + The DCS entry for the DB2 Version 5 system
   c.
On the warehouse agent workstation, use the DB2 Command Line Processor to catalog:
      + The TCP/IP node for the DB2 Connect system
      + The database for the DB2 Connect system
   For information about cataloging databases, see the DB2 Universal Database Installation and Configuration Supplement.
3. At the warehouse agent workstation, bind the DB2 CLI package to each database that is to be accessed through DB2 Connect. The following DB2 commands give an example of binding to v5database, a hypothetical DB2 Version 5 database. Use the DB2 Command Line Processor to issue the following commands. db2cli.lst and db2ajgrt are located in the \sqllib\bnd directory.
   db2 connect to v5database user userid using password
   db2 bind db2ajgrt.bnd
   db2 bind @db2cli.lst blocking all grant public
   where userid is the user ID for the Version 5 database and password is the password for that user ID.

An error occurs when db2cli.lst is bound to the DB2 Version 5 database. This error occurs because large objects (LOBs) are not supported in this configuration. The error does not affect the warehouse agent's access to the DB2 Version 5 database. FixPak 14 for DB2 Universal Database Version 5, available in June 2000, is required for accessing DB2 Version 5 data through DB2 Connect. Refer to APAR number JR14507 in that FixPak.

------------------------------------------------------------------------

22.13 IBM ERwin metadata extract program

22.13.1 Contents

Software requirements
Program files
Creating tag language files
Importing a tag language file into the Data Warehouse Center
Importing a tag language file into the Information Catalog Manager
Troubleshooting
ERwin to Data Warehouse Center mapping
ERwin to Information Catalog Manager mapping

This section describes how to use the IBM ERwin Metadata Extract Program to extract metadata from an ER1 file and create a DB2 Data Warehouse Center or Information Catalog Manager (DataGuide) tag language file.
The metadata extract program extracts all physical objects, such as databases, tables, and columns, that are stored in the input ER1 file and writes the metadata model to a Data Warehouse Center or an Information Catalog Manager tag language file. The logical model for the Information Catalog Manager, consisting of entities and attributes, is also extracted, along with all the relevant relationship tags between objects (for example, between databases and tables, and between tables and entities). For tables without a database, a default database named DATABASE is created. For tables without a schema, a default schema of USERID is used. For a model name, the ER1 file name is used. For more information about mapping of ER1 attributes to Data Warehouse Center or Information Catalog Manager tags, see "ERwin to DB2 Data Warehouse Center mapping" and "ERwin to Information Catalog Manager mapping." The metadata extract program supports all ER1 models with relational databases, including DB2, Informix, Oracle, Sybase, ODBC data sources, and Microsoft SQL Server.

22.13.2 Software requirements

The following software requirements must be met to run the metadata extract program:
* Windows NT 4.0 or later
* ERwin 3.5.2 with Service Pack 3 Build 466

The following software requirements must be met to import the ERwin tag language file:
For the Data Warehouse Center: IBM DB2 Universal Database Version 7.2
For the Information Catalog Manager: IBM DB2 Warehouse Manager Version 7.2

The template tag language files (.tag) must be in the directory that is pointed to by the VWS_TEMPLATES environment variable. The type tag language files (.typ) must be in the directory that is pointed to by the DGWPATH environment variable.

22.13.3 Program files

The metadata extract program is installed in the sqllib\bin subdirectory of the IBM DB2 directory.
The program installs the following files in your directory: flgerwin.exe Main migration program erwext.dll Tag language file generator DLL cdmerwsn.dll ERwin API wrapper class DLL To start the extract program, issue the flgerwin command from a command prompt. 22.13.4 Creating tag language files To create a Data Warehouse Center or Information Catalog Manager tag language file, run the flgerwin.exe program and provide two main parameters. The first parameter is the ER1 file from which to extract metadata. The second parameter is the name of the output tag language file. By default, the extract program adds the MERGE parameter to the Data Warehouse Center tag language file. The command syntax is: flgerwin inputFile.er1 outputFile.tag [-dwc] [-icm] [ -m] [-u] [-a] [-d] The syntax for the command if you want to create a star schema is: flgerwin inputFile.er1 outputFile.tag [-dwc] [-starschema] -dwc Creates a Data Warehouse Center tag language file. Optional parameters available for -dwc are -m and -starschema. -icm Creates an Information Catalog Manager tag language file. Optional parameters available for -icm are -m, -u, -a, and -d. -starschema Creates an ERwin model star schema tag language file. -m Specifies the action on the object as MERGE. -u Specifies the action on the object as UPDATE. -a Specifies the action on the object as ADD. -d Specifies the action on the object as DELETE. The metadata extract program works with metadata, not data. After you complete the ERwin tag language file import and before you use the target table, you need to match passwords and user IDs. To merge metadata with existing database data: Change the Data Warehouse Center user ID and password under Properties --> Database --> Userid to match the merged database user ID and password. With the metadata extract program, you can import a tag language file as a target. In newly imported metadata, the tables are not yet populated. 
You can view these tables as logical or physical representations and then build a warehouse step to populate the table definitions imported from ERwin. The input ER1 file must be in a writable state. After the metadata extract program runs, the ER1 file becomes read-only. To change the file to read/write mode, use a command such as the following example: attrib -r erwinsimplemode.er1 where erwinsimplemode.er1 is the name of the ERwin flat file. The metadata extract program saves the ER1 file in a read-only state if the file is being used in a current ERwin session or if some error condition was detected. You might receive an abnormal program termination error message if the ER1 file is in a read-only state. The metadata extract program displays the table name it is currently processing. You will receive an informational message when the metadata extract program finishes processing. When you are creating star schemas by autojoining dimension tables to fact tables, processing can take a long time depending on how many tables you use. During processing, the autojoin lines are green. When saved, the lines change to black. Use the automatically generated constraint name to ensure that the constraint name is unique. During processing, you might receive the message, "Duplicate column found. Column will not be extracted." This is an informational message and does not affect the successful completion of the extract program. This message is displayed when the physical name of a foreign key is the same as the physical name of a column in the table currently being processed. 22.13.5 Importing a tag language file into the Data Warehouse Center You can import a tag language file into the Data Warehouse Center in either of two ways. You can use the Data Warehouse Center or the command line. To use the Data Warehouse Center to import a tag language file: 1. Click Start --> Programs --> IBM DB2 --> Control Center. The DB2 Control Center opens. 2. 
Open the Data Warehouse Center and log on. 3. Right-click on Warehouse. The Import window opens. 4. Click Import Metadata --> ERwin. The Import Metadata window opens. 5. In the Input file field, type the name of the input tag language file, and click OK. 6. Select the Extract star schema check box to define an ERwin star schema metadata model as a warehouse schema. After the import is completed, you can click View --> Refresh to view the new step.

To import a tag language file using the command line, enter the following command:
   iwh2imp2 tag-filename log-pathname target-control-db userid password
tag-filename The full path and file name of the tag language file.
log-pathname The full path name of the log file.
target-control-db The name of the target database for the import.
userid The user ID used to access the control database.
password The password used to access the control database.

To change a DB2 database definition so that it is a source in the Data Warehouse Center, you can change the tag language file:
* Change the ISWH tag from ISWH(Y) to ISWH(N) for each database that you want as a source.
* Change the relationship tag from :RELTYPE.TYPE(LINK) SOURCETYPE(SCGTARIR) TARGETYPE(DATABASE) to :RELTYPE.TYPE(LINK) SOURCETYPE(SCGSRCIR) TARGETYPE(DATABASE) for each database that you want as a source.

When the tag language file is imported, you might receive the following message: Message: DWC13238E The object of type "COLUMN" identified by "DBNAME(___) OWNER(___) TABLE(___) COLUMNS(___)" is defined twice in the tag language file. This is an informational message, and your import was completed successfully. You might receive this message if you have an entity that has foreign keys with the same name, or an entity with similarly named columns that were affected by truncation, or other similar circumstances. Check your model for duplicate column names, and make adjustments as appropriate.
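The two tag language file edits above can also be scripted. The following is a minimal sketch, assuming a UNIX-style shell with sed and assuming that every database definition in the file should become a source; always run it against a copy of the exported file.

```shell
# Sketch: apply the two substitutions described above to a tag
# language file read from standard input. Note that this flips every
# database definition in the file to a source; edit by hand if only
# some databases should change.
flip_to_source() {
  sed -e 's/ISWH(Y)/ISWH(N)/g' \
      -e 's/SOURCETYPE(SCGTARIR)/SOURCETYPE(SCGSRCIR)/g'
}
# Usage: flip_to_source < exported.tag > exported-as-source.tag
```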
22.13.6 Importing a tag language file into the Information Catalog Manager There are two ways to import a tag language file into the Information Catalog Manager. You can use the Information Catalog Administrator or use the command line. To use the Information Catalog Administrator to import a tag language file: 1. Click Start --> Programs --> DB2 --> Information Catalog Manager. 2. Click Catalog --> Import. The Import window opens. 3. Click Find to search for the tag language file, then click Import. After the import is completed, you can double-click on the Subjects icon, which opens up a window that shows all the imported models and databases. To import a tag language file using the command interface, enter the following command: DGUIDE /USERID userid /PASSWORD password /DGNAME dgname /IMPORT filename /LOGFILE filename /ADMIN /RESTART (B|C) /USERID The user ID used to access the control database. /PASSWORD The password for this user ID. /DGNAME The Information Catalog Manager name. /IMPORT The full path and file name of the tag language file. /LOGFILE The full path name of the log file. /ADMIN Indicates that you're logging in as an administrator. /RESTART Indicates that the import will start at the beginning of the tag language file (choice B) or start from the last committed point (choice C, the default). 22.13.7 Troubleshooting If you receive an error message, look for the message here with the action you can take to resolve the error. Missing ER1 input file or tag output file. The metadata extract program requires two parameters in a specific order. The first parameter is the name of the ER1 file, and the second is the name of the tag output file. If you specify the name of an existing tag language file, the file will be overwritten. Windows system abnormal program termination. The input ER1 file is probably in a read-only state. This can happen if a problem occurred when you saved the ER1 file, and the metadata extract program put the file in read-only mode. 
Issue the command attrib -r inputFile.er1 in a command shell to change the state of the ER1 file to read/write.

Tag language file ... could not be opened. Check whether any system problems exist that might prevent a file from being created or opened on the current drive.

Path to template files not found. The environment variable VWS_TEMPLATES is not set. Check to see that the Data Warehouse Center is installed.

Path to type files not found. The environment variable DGWPATH is not set. Check to see that the Data Warehouse Center is installed.

Unsupported server version: ... The input ER1 file that you are trying to extract from is stored on a target server that is not supported by the program. Start ERwin, open the ER1 file, click Server --> Target Server, and select the appropriate version (see "Software requirements"). Save the ER1 file.

Unknown ERwAPI error. An ERwin API error has occurred, and the program was unable to obtain more information about the error. Make sure that ERwin 3.5.2 is installed and that the ERwin API is registered. To register the ERwin API, run the following command from the directory where your ERwin program files are installed: regsvr32 er2api32.dll. You will see the message "DllRegisterServer in er2api32.dll succeeded." You can start the extract program from the Data Warehouse Center or by issuing the flgerwin command from a command shell.

Extract program error: ... Check the error message and take action as appropriate. Most likely, this is an internal extract program error, and the problem needs to be reported to an IBM representative.

Unknown extract program error. An unknown error has occurred. Most likely, this is an internal error, and the problem needs to be reported to an IBM representative.

Extract program terminated due to error(s). An error has occurred that prevents the completion of the extract program. Refer to any additional error messages to solve the problem, or contact an IBM representative.
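Two of the errors above ("Path to template files not found" and "Path to type files not found") come down to missing environment variables. A quick shell check such as the following sketch can confirm both before you run the extract program:

```shell
# Sketch: confirm the environment variables the extract program needs.
# VWS_TEMPLATES must point at the template .tag files, and DGWPATH at
# the type .typ files (see "Software requirements").
check_extract_env() {
  rc=0
  if [ -z "${VWS_TEMPLATES:-}" ]; then
    echo "VWS_TEMPLATES is not set (template .tag files)" >&2
    rc=1
  fi
  if [ -z "${DGWPATH:-}" ]; then
    echo "DGWPATH is not set (type .typ files)" >&2
    rc=1
  fi
  return $rc
}
```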
22.13.8 ERwin to DB2 Data Warehouse Center mapping

This section shows how the main ERwin object attributes correspond to the Data Warehouse Center tags:

Database - WarehouseDatabase.tag or SourceDatabase.tag

   ERwin                 Command line tag   Data Warehouse Center
   Diagram Name          NAME               Name of Warehouse Source or Warehouse Target
   Diagram Author        RESPNSBL           Contact
   Database Name         DBNAME             Database name
   Database Version      DBTYPE             Database type
   Diagram Description   SHRTDESC           Description

Table - Table.tag

   ERwin                 Command line tag   Data Warehouse Center
   Table Name            NAME               Table name
   Table Name            TABLES             Table name
   Database Name         DBNAME             n/a
   Table Owner           OWNER              Table schema
   Table Comment         SHRTDESC           Description

Column - Column.tag

   ERwin                 Command line tag   Data Warehouse Center
   Column Name           NAME               Column name
   Datatype              NATIVEDT           Datatype
   Length                LENGTH             Length
   Scale                 SCALE              Scale
   Null Option           NULLABLE           Allows Nulls (check box)
   Position              POSNO              n/a
   Primary Key           KEYPOSNO           n/a
   Database Name         DBNAME             n/a
   Table Owner           OWNER              n/a
   Table Name            TABLES             n/a
   Column Comment        SHRTDESC           Description

22.13.8.1 ERwin to Information Catalog Manager mapping

This section shows how the main ERwin object attributes correspond to the Information Catalog Manager tags:

Database - Database.tag

   ERwin                 Command line tag   Information Catalog Manager interface
   Diagram Name          NAME               Database name
   Diagram Author        RESPNSBL           Database owner
   Database Name         DBNAME             Database name
   Database Version      DBTYPE             Database type
   Diagram Description   SHRTDESC           Short Description

Table - TableOrView.tag

   ERwin                 Command line tag   Information Catalog Manager interface
   Table Name            NAME               Table name
   Table Name            TABLES             Table name
   Database Name         DBNAME             Database name
   Table Owner           OWNER              Table owner
   Table Comment         SHRTDESC           Short Description
   ERwin API             TABLVIEW           Definition represents a view

Column - ColumnOrField.tag

   ERwin                 Command line tag   Information Catalog Manager interface
   Column Name           NAME               Column name
   Datatype              DATATYPE           Datatype of column
   Length                LENGTH             Length of column
   Scale                 SCALE              Scale of column
   Null Option           NULLS              Can column be null (?)
   Position              POSNO              Column position
   Primary Key           KEYPOSNO           Position of column in primary key
   ERwin API             ISKEY              Is column part of key (?)
   ERwin API             UNIQKEY            Is column a unique key (?)
   Database Name         DBNAME             Database name
   Table Owner           OWNER              Table owner
   Table Name            TABLES             Table name
   Column Comment        SHRTDESC           Short Description
   ERwin                 ISTEXT             Is data text (?)
   ERwin API             IDSRES             Resolution of data

Model - Model.tag

   ERwin                 Command line tag   Information Catalog Manager interface
   ER1 file name         NAME               Model name
   Diagram Author        RESPNSBL           For further information...
   Diagram Description   SHRTDESC           Short Description

Entity - Entity.tag

   ERwin                 Command line tag   Information Catalog Manager interface
   Entity Name           NAME               Entity name
   Notes                 SHRTDESC           Short Description
   Definition            LONGDESC           Long Description
   Entity Owner          RESPNSBL           For further information...

Attribute - Attribute.tag

   ERwin                 Command line tag   Information Catalog Manager interface
   Attribute Name        NAME               Attribute name
   Notes                 SHRTDESC           Short Description
   Definition            LONGDESC           Long Description
   Datatype              DATATYPE           Datatype of member
   Length                LENGTH             Length of member

------------------------------------------------------------------------

22.14 Name and address cleansing in the Data Warehouse Center

22.14.1 Use the Data Warehouse Center and the Trillium Software System to cleanse name and address data.

The Trillium Software System is a name and address cleansing product that reformats, standardizes, and verifies name and address data. You can use the Trillium Software System in the Data Warehouse Center by starting the Trillium Batch System programs from a user-defined program. The user-defined program is added to the Warehouse tree when you import the metadata from the Trillium Batch System script or JCL. The Data Warehouse Center already provides integration with tools from Vality and Evolutionary Technologies, Inc.

22.14.1.1 Requirements

* You must install the Trillium Software System on the warehouse agent site or on a remote host.
* On UNIX and Windows platforms, the path to the Trillium Software System's bin directory must be added to the system environment variable PATH to enable the agent's process to run the Trillium Batch System programs. On UNIX, this must be done by adding the PATH variable in the IWH.environment file before starting the vwdaemon process.
* Users must have a working knowledge of Trillium software.

The following table shows the software requirements:

UNIX
   Trillium Software System Version 4.0
   Data Warehouse Manager Version 7.2 warehouse agent
Windows NT and Windows 2000
   Trillium Software System Version 4.0
   Data Warehouse Manager Version 7.2 warehouse agent
   For remote access, the host must have the ftpd and rexecd daemons installed.
OS/390
   Trillium Software System Version 4.0 installed on the remote OS/390 host
   Data Warehouse Manager Version 7.2 warehouse agent installed on UNIX or Windows NT
   TCP/IP 3.2 or above must be installed
   The OS/390 operating system is supported as a remote host only.

22.14.1.2 Trillium Software System components

The Trillium Software System consists of four main components: converter, parser, geocoder, and matcher. Use the components as a set of functions to perform name and address cleansing operations. You can run the components from the Trillium Batch System, which is a user-defined program.

Converter
   Use the converter to standardize and convert the source data into the specified output format.
Parser
   Use the parser to interpret name and address source data and create metadata about the source data.
Geocoder
   Use the geocoder to compare the source data with postal service data to supply any missing information, such as courier or ZIP+4 codes. The geocoder also performs matching operations with United States Census data.
Matcher
   Use the matcher to compare similar names and addresses to identify duplicate records. You can perform reference matching by using the matcher to compare one record to a group of records.
22.14.1.3 Using the Trillium Batch System with the Data Warehouse Center

In the Data Warehouse Center, you can import Trillium Batch System metadata and create a user-defined program step. This step calls a Trillium Batch System script on the local warehouse agent site, or on a remote warehouse agent site. In the Data Warehouse Center, the Trillium Batch System script is a step with a source and target file. The source file is the input data file used for the first Trillium Batch System command. The target file is the output data file created by the last Trillium command in the script. The step can then be copied to another process to be used with other steps. The following figures show the relationship between the Trillium Batch System input and output data files and the source and target files in the Data Warehouse Center.

Figure 1. Sample Trillium script file

   REM Running the converter
   pfcondrv -parmfile c:\tril40\us_proj\parms\pfcondrv.par
   REM Running the parser
   pfprsdrv -parmfile c:\tril40\us_proj\parms\pfprsdrv.par
   REM Running the matcher
   cfmatdrv -parmfile c:\tril40\us_proj\parms\pfmatdrv.par

Figure 2. Contents of the pfcondrv.par file

   INP_FNAME01 c:\tril40\us_proj\data\convinp
   INP_DDL01 c:\tril40\us_proj\dict\input.ddl

Figure 3. Contents of the pfmatdrv.par file

   OUT_DDNAME c:\tril40\us_proj\data\maout
   DDL_OUT_FNAME c:\tril40\us_proj\dict\parseout.ddl

Figure 4. The Trillium Batch System step definition

   c:\Tril40\us_proj\data\convinp (source file)
      --> Trillium Batch System step
      --> c:\tril40\us_proj\data\maout (target file)

22.14.1.4 Importing Trillium metadata

To import Trillium metadata into the Data Warehouse Center:

1. Create a Trillium Batch System script or JCL. You can use any script or JCL writing tool to create the script or JCL file.
2. Right-click Warehouse, and click Import Metadata --> Trillium to open the Trillium Batch System window.
3.
In the Script or JCL field, type the name of the Trillium Batch System script or JCL file that you want to run.
4. In the Input file field, type the name of the input data file for the Trillium Batch System program that runs first in the specified script or JCL file.
5. In the Input DDL field, type the name of the input DDL file that describes the input data file. This file must be available on the warehouse agent site.
6. In the Output file field, type the name of the output data file for the last Trillium Batch System program in the script or JCL file.
7. In the Output DDL field, type the name of the output DDL file that describes the output data file. This file must be available on the warehouse agent site.
8. Optional: In the Output error file field, type the name of the output error file that you want to use. This error file captures the run-time errors from the Trillium Batch System program; these errors are recorded in the stderr log. For local hosts, a default output error file is created if you do not specify a name here. For more information about the output error file, see the topic "Error handling."
9. Click the Connection tab.
10. If the Trillium metadata that you are importing is on the warehouse agent site, click Local host. If the Trillium metadata that you are importing is not on the warehouse agent site, click Remote host, and specify the remote host. See the topic "Specifying the remote host" later in this section.
11. Click OK to import the Trillium metadata and close the notebook.
12. If the script or JCL does not run from the default agent site, specify the warehouse agent site that you are using in the Properties notebook for the Trillium Batch System step.

The following warehouse objects are added to the Warehouse tree when the import operation is complete:

* Trillium Batch System.scriptName template, where scriptName is the name of the script or JCL file.
* Trillium Batch System process.
* Trillium Batch System step that runs the user-defined program.
* The warehouse file source and warehouse file target that you specified when you imported the metadata. The file source and file target are fixed files.
* Trillium Batch System program group.

Specifying the remote host

To specify a remote host:

1. Click Remote host, and type the TCP/IP host name of the remote system that contains the metadata that you are importing. If you select Remote host, the target file is created as a local file because remote target files are not supported. You can add an FTP step to get the remote file to the specified local target file.
2. In the Remote operating system list, click the operating system of the remote host that you are accessing.
3. In the Remote user ID field, type the user ID for the remote host that you are accessing.
4. In the Password option list, select the password option that you want to use for the remote host that you are accessing:

   Password not required
      Specifies that no password is required to access the metadata on the remote host.
   Retrieve password
      Specifies that the password will be retrieved from a user-defined program. In the Password program field, type the name of the password program that will retrieve the password. The program must reside on the warehouse agent site and write the password to an output file in the first line. In the Program parameters field, type the parameters for the password program. The first parameter must be the output file to which the password is written.
   Enter password later
      Specifies that the password will be entered at a later time. Enter the password in the Properties notebook for the step that runs the Trillium Batch System program.

22.14.1.5 Mapping the metadata

To create the metadata for the source and target files, Trillium reads the Trillium DDL files.
The DDL file is converted to the following Data Warehouse Center data types:

   DDL data type                        Warehouse data type
   ASCII CHARACTER                      CHARACTER(n)
   ASCII NUMERIC, EBCDIC CHARACTER,
   EBCDIC NUMERIC, and other types      NUMERIC

Note: The EBCDIC CHARACTER and EBCDIC NUMERIC data types are supported only if the Trillium Software System is running on the OS/390 operating system. The variable n is the number of characters in the string.

22.14.1.6 Restrictions

You can specify overlapping fields in the input and output DDL files with the Trillium DDL and the import metadata operation in the Data Warehouse Center. However, the corresponding warehouse source and warehouse target files cannot be used in the Data Warehouse Center with the SQL step or Sample contents. Because the import metadata operation ignores overlapping fields that span the whole record, you can still specify these fields, but they will not be used as columns in the resulting source and target files. If an error file is specified, the name of the script cannot contain any blank spaces.

22.14.2 Writing a Trillium Batch System JCL file

The following requirements must be met if you are writing a Trillium Batch System JCL file:

* The job name must be the user ID plus one character.
* The job must be routed to the held output class.
* Each job step that runs a Trillium Batch System program must include a SYSTERM DD statement that defines a permanent data set. The data set contains the errors from the Trillium Batch System programs. This data set is automatically deleted before the JCL is submitted. For more information about error handling and reporting, see the topic "Error handling."

The output error file must be specified when the script or JCL runs on a remote host; otherwise, the error messages will not be captured and returned to the Data Warehouse Center.
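The JCL naming rule above (the job name must be the user ID plus one character) is easy to validate before submission. A minimal sketch, assuming job names and user IDs are compared case-insensitively as is conventional for JCL:

```python
def valid_tbs_job_name(job_name: str, user_id: str) -> bool:
    """Check the Trillium Batch System JCL rule: the job name must be
    the submitting user ID plus exactly one extra character.
    Case-insensitive comparison is an assumption, not stated in the doc."""
    job, user = job_name.upper(), user_id.upper()
    return len(job) == len(user) + 1 and job.startswith(user)

# e.g. user DBADMIN submits job DBADMINA
print(valid_tbs_job_name("DBADMINA", "DBADMIN"))   # True
print(valid_tbs_job_name("DBADMIN", "DBADMIN"))    # False: no extra character
```

A check like this can run in whatever tool generates the JCL, before the job is ever routed to the held output class.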
On UNIX or Windows, the simplest way to capture the error messages is to write another script that calls the Trillium Batch System script and pipes the standard error to an output file.

Figure 5. Example of a job step that includes a SYSTERM DD statement

   //SYSTERM  DD  UNIT=&UNIT,
   //             DISP=(MOD,CATLG,KEEP),
   //             SPACE=(400,(20,20),,,ROUND),
   //             DSN=&PROJPREF.&TRILVER.&PROJECT.STDERR;

22.14.3 Writing a Trillium Batch System script file on UNIX and Windows

If the Trillium Batch System script or parameter files contain relative paths of input files, you must put a cd statement at the beginning of the script file that changes to the directory containing the script file.

22.14.4 Defining a Trillium Batch System step

You must import the Trillium metadata that you want to use in the process before you define a Trillium Batch System step. To add a Trillium Batch System step to a process:

1. Open the process in the process modeler.
2. Click the Trillium Batch System icon on the palette.
3. Click Trillium Batch System program --> programName, where programName is the name of the Trillium Batch System program that you want to use.
4. Click the place on the canvas where you want the step to appear.
5. Complete the steps in the topic "Defining a step that runs a user-defined program" in the DB2 Universal Database help.

22.14.5 Using the Trillium Batch System user-defined program

The Trillium Batch System user-defined program is included with DB2 Data Warehouse Center Version 7.2 for Windows NT and UNIX. The Trillium Batch System step that is created when you import Trillium metadata runs the Trillium Batch System user-defined program, which in turn calls the Trillium Batch System script or JCL. The following table describes the parameters for the Trillium Batch System script or JCL:

Remote host
   * localhost is the default value. Use this value if the Trillium Batch System is installed on the warehouse agent site.
   * The name of the remote host if the Trillium Batch System is installed on a remote operating system.
Script or JCL
   The name of the script or JCL.
Remote operating system
   The name of the operating system on the remote host. This parameter is ignored if the value of the Remote host parameter is localhost. The valid values are:
   * MVS for the OS/390 operating system
   * UNIX for the AIX, Sun Solaris, HP-UX, and NUMA-Q operating systems
   * WIN for the Windows NT or Windows 2000 operating system
Remote user ID
   The user ID with the authority to execute the remote command. This parameter is ignored if the value of the Remote host parameter is localhost.
Password option
   The method used to obtain the password. The valid values are:
   ENTERPASSWORD
      Use this value if the password is passed in the next parameter.
   PASSWORDNOTREQUIRED
      Use this value if no password is needed.
   GETPASSWORD
      Use this value if a password program name is passed in the next parameter. Restrictions:
      * The program must reside on the agent site, write the password to an output file in the first line, and return 0 if it runs successfully.
      * The value of the Password parameter must be the name of the password program.
      * The value of the Program parameters parameter must be a string enclosed in double quotation marks.
      * The first parameter in the string must be the name of the output file where the password will be written.
Password
   The password or the name of the password program. The password program must be local to the warehouse agent site.
Program parameters
   The parameters for the password program.
Output error file
   The name of the output error file.

Note: The data type for all of the parameters in this table is CHARACTER.

22.14.6 Error handling

The Trillium Batch System programs write error messages to the standard error (stderr) file on the Windows NT and UNIX operating systems, and to the SYSTERM data set on the OS/390 operating system.
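The GETPASSWORD contract described above (a program on the agent site that writes the password to the first line of the output file named by its first parameter and returns 0 on success) could be satisfied by a minimal program along these lines. The environment variable used as the password source here is purely illustrative; a real program would fetch the password from whatever secure store your site uses.

```python
import os
import sys

def write_password(out_path: str) -> int:
    """Fetch the password and write it as the first line of out_path.
    Returns 0 on success, nonzero on failure, matching the exit-code
    contract the Trillium Batch System user-defined program expects.
    TBS_PASSWORD is an illustrative source, not part of the product."""
    password = os.environ.get("TBS_PASSWORD")
    if password is None:
        return 1
    with open(out_path, "w") as f:
        f.write(password + "\n")   # the password must be on the first line
    return 0

if __name__ == "__main__" and len(sys.argv) > 1:
    # First program parameter = output file, as the restrictions require.
    sys.exit(write_password(sys.argv[1]))
```

The Program parameters string passed to the step would then name the output file, for example "c:\temp\pw.out".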
To capture the errors from the Trillium Batch System programs on the Windows NT or UNIX operating systems, the standard error must be redirected to an output error file. To capture the errors from the Trillium Batch System programs on the OS/390 operating system, the JCL must include a SYSTERM DD statement.

If you specify the output error file name in the Import Metadata window, you must redirect or store the standard error output in that error file. The Data Warehouse Center reads the file and reports every line that contains the string ERROR as an error message. All of the Trillium Batch System program error messages contain the string ERROR.

If the output error file is not specified for a script or JCL running on the warehouse agent site, the Data Warehouse Center automatically creates a file name and redirects the standard error output to that file. If an error is encountered, the error file is not deleted. The error file is stored in the directory specified by the environment variable VWS_LOGGING. The file name is tbsudp-date-time.err, where date is the system date when the file is created, and time is the system time when the file is created. The following file name shows the format of the output error file name:

   tbsudp-021501-155606.err

22.14.6.1 Error return codes

0
   Success.
4
   Warning. Either the password file could not be erased, or an internal error occurred while the Trillium Batch System user-defined program was accessing a temporary file. Check the status of the password file, or of the temporary files created under the directory that is specified by the environment variable VWS_LOGGING.
8
   The number or values of the parameters are not correct. Read the log file or the documentation for the correct syntax.
12
   A problem occurred while the Trillium Batch System user-defined program was connecting to the remote host through FTP. Check the FTP connection, host name, user ID, and password.
16
   The Trillium Batch System user-defined program cannot create the log file or an internal file. Check that the user has the correct authorization and that the disk is not full.
20
   Either the OS/390 JCL cannot be executed, or an error occurred while the Trillium Batch System user-defined program was deleting or getting a file from OS/390 through FTP. Check the JESLogFile to identify the reason.
48
   The environment variable VWS_LOGGING cannot be found, or the log file cannot be created. Check the log file for more information.
56
   Either the Windows NT or UNIX script cannot be executed, or an error occurred while the Trillium Batch System user-defined program was connecting to the remote host. Check the connection, host name, user ID, and password.
500
   The script or JCL file returned an error, or it returned no error but the error file contains data. Check the log file for more information. On OS/390, also check the JESLogFile.

22.14.6.2 Log file

The Data Warehouse Center stores all diagnostic information in a log file when the Trillium Batch System user-defined program runs. The name of the log file is tbsudp-date-time.log, where date is the system date when the file is created, and time is the system time when the file is created. The log file is created in the directory specified by the environment variable VWS_LOGGING on the agent site. The log file is deleted if the Trillium Batch System user-defined program runs successfully.

------------------------------------------------------------------------

22.15 Integration of MQSeries with the Data Warehouse Center

The Data Warehouse Center now enables you to access data from an MQSeries message queue as a DB2 database view. A wizard is provided to create a DB2 table function and the DB2 view through which you can access the data. Each MQSeries message is treated as a delimited string, which is parsed according to your specification and returned as a result row.
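The per-message parsing just described (each MQSeries message treated as a delimited string and returned as one result row) can be sketched as follows. The delimiter and column count stand in for whatever you specify in the wizard; they are not fixed values of the product.

```python
def message_to_row(message, delimiter=",", columns=3):
    """Parse one MQSeries message as a delimited string and return the
    fields that would form one result row of the generated view.
    delimiter and columns are placeholders for the wizard settings."""
    # maxsplit keeps any extra delimiters inside the final column
    fields = message.split(delimiter, columns - 1)
    # Pad short messages so every row has the declared number of columns.
    fields += [None] * (columns - len(fields))
    return fields

msg = "10042,widget,19.99"
print(message_to_row(msg))   # ['10042', 'widget', '19.99']
```

Messages with fewer fields than columns yield trailing nulls, mirroring how a view column with no corresponding field would surface.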
In addition, MQSeries messages that are XML documents can be accessed as a warehouse source. Using the Data Warehouse Center, you can import metadata from an MQSeries message queue and a DB2 XML Extender Document Access Definition (DAD) file.

22.15.1 Creating views for MQSeries messages

22.15.1.1 Requirements

* DB2 Universal Database Version 7.2
* DB2 Warehouse Manager Version 7.2
* MQSeries support

Please refer to MQSeries for more information on MQSeries requirements. See the setup section for user-defined functions for information on setting up the warehouse source.

22.15.1.2 Restrictions

* When a warehouse source database is cataloged, the database alias is cataloged on the agent machine. However, when you create MQSeries and XML views, the Data Warehouse Center assumes that the database alias is also defined on the client machine and attempts to connect to it using the warehouse source database user ID and password. If the connection is successful, the wizard is invoked and you can create the view. If the connection is unsuccessful, a warning message is displayed, and you must either catalog the database alias on the client or choose a different database alias in the wizard.
* Please refer to the SQL Reference section of the Release Notes for the maximum length of MQ messages.

22.15.1.3 Creating a view for MQSeries messages

To create a view for MQSeries messages:

1. From the Data Warehouse Center window, expand the Warehouse Sources tree.
2. Expand the warehouse source that is to contain the view.
3. Right-click the Views folder, and click Create for MQSeries messages.... The MQSeries wizard opens.

When you have completed the wizard, a new view is created in the Data Warehouse Center. When the view is selected, the MQSeries queue is accessed and each message is parsed as a delimited string according to your specifications in the wizard.

22.15.2 Importing MQSeries messages and XML metadata

22.15.2.1 Requirements

* DB2 Universal Database Version 7.2
* DB2 XML Extender Version 7.2
* MQSeries support
Please refer to MQSeries for more information on MQSeries requirements. See the setup section on user-defined functions for information on setting up the warehouse source.

22.15.2.2 Restrictions

The import will fail if the target tables exist with primary or foreign keys. You must manually delete the definitions of these keys in the Data Warehouse Center before the import.

22.15.2.3 Importing MQSeries messages and XML metadata

To import MQSeries metadata into the Data Warehouse Center:

1. Prepare the warehouse target database:
   o You must define the warehouse target, and register and enable transformers.
   o You must enable the warehouse target for DB2 XML Extender. Refer to the DB2 XML Extender Version 7.2 Release Notes for more information.
   o Create an XML Extender Document Access Definition (DAD) file to tell the Data Warehouse Center how to map the contents of the XML document to warehouse tables. Enable an XML collection using the DAD file for the database. Refer to the DB2 XML Extender Version 7.2 Release Notes for more information.
2. Right-click Warehouse, and click Import Metadata --> MQSeries to open the Import Metadata window.
3. In the AMI service field, type the service point that a message is sent to or retrieved from.
4. In the AMI policy field, type the policy that the messaging system will use to perform the operation.
5. In the DAD file field, type the name of the DB2 XML Extender DAD file, or search for a file to select by clicking the ellipsis (...). This file must be local.
6. In the Warehouse target field, select from the list the name of the warehouse target where the step will run. The warehouse target must already be defined.
7. In the Schema field, type the name of a schema to qualify the table names in the DAD file that do not have a qualifier. The default schema is the logon user ID of the warehouse target that you selected previously.
8.
Choose a target option: if you want the step to replace the target table contents at run time, click the Replace table contents radio button. If you want the step to append to the target table contents at run time, click the Append table contents radio button.
9. Click OK. The Import Metadata window closes.

The following warehouse objects are added to the Warehouse tree when the import operation is complete:

* A subject area named MQSeries and XML.
* A process named MQSeries and XML.
* A user-defined program group named MQSeries and XML.
* Definitions of all warehouse target tables described in the DAD file.
* A step.
* A program template.

If the warehouse target agent site is different from the local machine, you must change the step parameters:

1. Right-click the step, and select Properties. Click the Parameters tab in the Properties notebook.
2. Change the name of the DAD file parameter to the name of the same DAD file on the remote warehouse target agent site.
3. Make sure that the Agent Site field on the Processing Options tab contains the desired agent site.

22.15.2.4 Using the MQSeries user-defined program

The MQSeries and XML stored procedure is called MQXMLXF and is included with DB2 Data Warehouse Center Version 7.2 for Windows NT and UNIX. The step that is created when you import MQSeries and XML metadata runs the stored procedure. Its parameters are described in the following table:

MQSeries ServiceName
   The name of the service point that a message is sent to or retrieved from.
MQSeries PolicyName
   The name of the policy that the messaging system will use to perform the operation.
DAD file name
   The name of the DB2 XML Extender DAD file.
TargetTableList
   The list of target tables of the step, separated by commas.
Option
   REPLACE or APPEND.
RUN ID
   The step edition number (for logging purposes).

Note: The data type for all of the parameters in this table is CHARACTER.

The stored procedure deletes all rows from the target tables if Option has a value of REPLACE.
The stored procedure also calls the DB2 XML Extender stored procedure to populate the target tables for all existing MQSeries messages.

22.15.2.5 Error return codes

When the step runs, the stored procedure may return error code SQLCODE -443 with SQLSTATE 38600. To diagnose the error, see the following possible diagnostic texts:

AMIRC=xxxxx;
   xxxxx is the return code from the AMI layer. Refer to the MQSeries documentation for more details. The diagnostic text also indicates the location of the log file.
XMLRC=xxxxx;
   xxxxx is the return code from the DB2 XML Extender. Refer to the DB2 XML Extender documentation for descriptions of the return codes. The diagnostic text also indicates the location of the log file.
SQLCODE=xxxxx;
   xxxxx is the nonzero SQLCODE returned when an SQL request is performed. The diagnostic text also indicates the location of the log file.

For all errors, refer to the log file for more information.

22.15.2.6 Error log file

The Data Warehouse Center stores all diagnostic information in a log file when MQXMLXF runs. The name of the log file is mqxfnnnn.log, where nnnn is the RunID that was passed to the stored procedure. The Data Warehouse Center creates the file in the directory indicated by the VWS_LOGGING environment variable. If this environment variable is not defined, the log file is created in the temporary directory.

To make the VWS_LOGGING environment variable visible to the stored procedure on UNIX platforms, add VWS_LOGGING to the DB2ENVLIST environment variable using the db2set command before running the db2start command. The figure below is an example environment command.

Figure 6. Environment variable command example

   db2set DB2ENVLIST="AMT_DATA_PATH VWS_LOGGING"

The log file is deleted if the step runs successfully.

------------------------------------------------------------------------

22.16 Microsoft OLE DB and Data Transaction Services support

The Data Warehouse Center now enables you to access data from an OLE DB provider as a DB2 database view.
You can use the OLE DB Assist wizard provided with the Data Warehouse Center to create a DB2 OLE DB table function and the DB2 view through which you can access the data. Microsoft Data Transformation Services (DTS) allows you to import, export, and transform data between OLE DB sources and targets to build data warehouses and data marts. DTS is installed with Microsoft SQL Server. All DTS tasks are stored in DTS packages that you can run and access using the Microsoft OLE DB Provider for DTS Packages. Because you can access packages from DTS as OLE DB sources, you can also create views with the OLE DB Assist wizard for DTS packages, the same way as for OLE DB data sources. When you access the view at run time, the DTS package executes, and the target table of the task in the DTS package becomes the created view.

After you create a view in the Data Warehouse Center, you can use it as you would any other view. For example, you can join a DB2 table with an OLE DB source in an SQL step. When you use the created view in an SQL step, the DTS provider is called and the DTS package runs.

Software requirements:

* DB2 Universal Database for Windows NT Version 7.2 as the warehouse target database.
* DB2 Warehouse Manager Version 7.2.
* If the warehouse target database was created before Version 7.2, you must run the db2updv7 command after installing DB2 UDB for Windows NT Version 7.2.
* When you catalog a warehouse source database, the database alias is cataloged on the warehouse agent site. However, when you start the wizard, the Data Warehouse Center assumes that the database alias is also defined on the client workstation and attempts to connect to it using the warehouse source database user ID and password. If the connection is successful, the wizard starts and you can create the view. If the connection is not successful, a warning message is displayed, and you must either catalog the database alias on the client or choose a different database alias in the wizard.
* To identify a specific table from a DTS package, you must select the DSO rowset provider check box on the Options tab of the Workflow Properties window of the DataPumpTask that creates the target table. If you turn on multiple DSO rowset provider attributes, only the result of the first selected step is used. When a view is selected, the rowset of its target table is returned, and all other rowsets that you create in subsequent steps are ignored.
* When you enter the table name for the wizard, use the step name, which is shown on the Options page of the Workflow Properties notebook for the task.
* The DTS package connection string has the same syntax as that of the dtsrun command.

22.16.1 Creating views for OLE DB table functions

To create a view for an OLE DB table function:

1. From the Data Warehouse Center window, expand the Warehouse Sources tree.
2. Expand the warehouse source that is to contain the view.
3. Right-click the Views folder, and click Create for OLE DB table function. The OLE DB Assist wizard opens. The wizard steps you through the task of creating a new view in the warehouse source database.

22.16.2 Creating views for DTS packages

To create a view for a DTS package:

1. From the Data Warehouse Center window, expand the Warehouse Sources tree.
2. Expand the warehouse source that is to contain the view.
3. Right-click the Views folder, and click Microsoft OLE DB Provider for DTS Packages. The OLE DB Assist wizard opens. The wizard steps you through the task of creating a new view in the warehouse source database.

For more information about DTS, see the Microsoft Platform SDK 2000 documentation, which includes a detailed explanation of how to build the provider string that the wizard needs to connect to the DTS provider.

------------------------------------------------------------------------

22.17 Using incremental commit with replace

In a step where the population type is Replace, an incremental commit is used only when the new data is inserted.
The old data is deleted within a single commit scope. If you need to delete the data without producing log records, run a step that loads an empty file before you run the SQL step with the Append population type.

------------------------------------------------------------------------
22.18 Component trace data file names

The Data Warehouse Center writes these files on Windows NT:

AGNTnnnn.Log
   Contains trace information. nnnn is the numeric process ID of the warehouse agent, which can be 4 or 5 digits depending on the operating system.
AGNTnnnn.Set
   Contains environment settings for the agent. nnnn is the numeric process ID of the warehouse agent, which can be 4 or 5 digits depending on the operating system.

The default directory is x:\program files\sqllib\logging, where x is the drive where DB2 is installed.

------------------------------------------------------------------------
22.19 Open Client needed for Sybase sources on AIX and the Solaris Operating Environment

In "Chapter 3. Setting up warehouse sources", the Sybase entry in Table 3 (Connectivity requirements for supported data sources on AIX) and in Table 4 (Connectivity requirements for supported data sources on the Solaris Operating Environment) should contain an additional step in the "How to connect:" column, shown below as step 3:

3. Install the Open Client.

Note that the Open Client is also required for connecting to Sybase sources on Windows NT or Windows 2000 platforms.

------------------------------------------------------------------------
22.20 Sample entries corrected

Figures 6, 8, 10, and 11 in "Chapter 3. Setting up warehouse sources" in the Data Warehouse Center Administration Guide contain an incorrect path for the Driver attribute. The following paths are correct.
Figure 6:  Driver=/home/db2_07_01/3.6/odbc/lib/ivinf12.so
Figure 8:  Driver=/home/db2_07_01/3.6/odbc/lib/ivsyb1112.so
Figure 10: Driver=/home/db2_07_01/3.6/lib/ivor814.so
Figure 11: Driver=/home/db2_07_01/3.6/odbc/lib/ivmsss14.so

------------------------------------------------------------------------
22.21 Chapter 3. Setting up warehouse sources

22.21.1 Mapping the Memo field in Microsoft Access to a warehouse source

The Memo field of a Microsoft Access database is represented in a Data Warehouse Center source as a column of data type LONG VARCHAR with a column size exceeding 1 GB. To support practical system configurations, the Data Warehouse Center truncates values that exceed 128 KB. To avoid truncating Memo field values in the warehouse source, change the data type of the column that receives the Memo field data from LONG VARCHAR to CLOB before you use the table in a step. If you do not change the data type of the column, any values larger than 128 KB will be truncated.

DRDA support for the CLOB data type is required for OS/390 and OS/400. The CLOB data type is supported for OS/390 beginning with DB2 Version 6. The CLOB data type is supported for OS/400 beginning with Version 4, Release 4 with DB FixPak 4 or later (PTF SF99104). For OS/400, the install disk for Version 4, Release 4 dated February 1999 also contains support for the CLOB data type.

------------------------------------------------------------------------
22.22 Chapter 10. Maintaining the Warehouse Database

22.22.1 Linking tables to a step subtype for the DB2 UDB RUNSTATS program

The step subtype for a RUNSTATS program reads from and writes to a warehouse target. Link a target to the step subtype in the Process Model window before you define the values for the step.
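The RUNSTATS step subtype runs the DB2 RUNSTATS utility against the linked warehouse target. For reference, the equivalent manual operation from a DB2 V7 command line processor session looks roughly like the following sketch; the database name TARGETDB and the table name IWH.SALES_FACT are placeholders, not names from this document:

```shell
# Hypothetical sketch: refresh catalog statistics for a warehouse
# target table. TARGETDB and IWH.SALES_FACT are placeholder names.
# Requires a DB2 V7 command line processor; skips cleanly without one.
if command -v db2 >/dev/null 2>&1; then
  db2 connect to TARGETDB
  db2 "runstats on table IWH.SALES_FACT with distribution and detailed indexes all"
  db2 terminate
else
  echo "db2 CLP not found; skipping"
fi
```

Running RUNSTATS through a step instead of by hand lets the Data Warehouse Center schedule the statistics refresh after the steps that populate the target.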
------------------------------------------------------------------------
22.23 The Default Warehouse Control Database

During a typical DB2 installation on Windows NT or Windows 2000, DB2 creates and initializes a default warehouse control database for the Data Warehouse Center if no active warehouse control database is identified in the Windows NT registry. Initialization is the process in which the Data Warehouse Center creates the control tables that are required to store Data Warehouse Center metadata. The default warehouse control database is named DWCTRLDB. When you log on, the Data Warehouse Center specifies DWCTRLDB as the warehouse control database by default. To see the name of the warehouse control database that will be used, click the Advanced button on the Data Warehouse Center Logon window.

------------------------------------------------------------------------
22.24 The Warehouse Control Database Management Window

The Warehouse Control Database Management window is installed during a typical DB2 installation on Windows NT or Windows 2000. You can use this window to change the active warehouse control database, create and initialize new warehouse control databases, and migrate warehouse control databases that have been used with IBM Visual Warehouse. The following sections discuss each of these activities. Stop the warehouse server before using the Warehouse Control Database Management window.

------------------------------------------------------------------------
22.25 Changing the Active Warehouse Control Database

If you want to use a warehouse control database other than the active warehouse control database, use the Warehouse Control Database Management window to register the database as the active control database.
If you specify a name other than that of the active warehouse control database when you log on to the Data Warehouse Center, you will receive an error stating that the database that you specified does not match the database specified by the warehouse server.

To register the database:
1. Click Start --> Programs --> IBM DB2 --> Warehouse Control Database Management.
2. In the New control database field, type the name of the control database that you want to use.
3. In the Schema field, type the name of the schema to use for the database.
4. In the User ID field, type the user ID that is required to access the database.
5. In the Password field, type the password for the user ID.
6. In the Verify Password field, type the password again.
7. Click OK. The window remains open. The Messages field displays messages that indicate the status of the registration process.
8. After the process is complete, close the window.

------------------------------------------------------------------------
22.26 Creating and Initializing a Warehouse Control Database

If you want to create a warehouse control database other than the default, you can create it during the installation process, or after installation by using the Warehouse Control Database Management window. You can use the installation process to create a database on the same workstation as the warehouse server or on a different workstation. To change the name of the warehouse control database that is created during installation, you must perform a custom installation and change the name on the Define a Local Warehouse Control Database window. The installation process will create the database with the name that you specify, initialize the database for use with the Data Warehouse Center, and register the database as the active warehouse control database.
To create a warehouse control database during installation on a workstation other than the one where the warehouse server is installed, select Warehouse Local Control Database during a custom installation. The installation process will create the database. After installation, you must then use the Warehouse Control Database Management window on the warehouse server workstation, following the steps in 22.25, Changing the Active Warehouse Control Database. Specify the database name that you specified during installation. The database will be initialized for use with the Data Warehouse Center and registered as the active warehouse control database.

To create and initialize a warehouse control database after the installation process, use the Warehouse Control Database Management window on the warehouse server workstation. If the new warehouse control database is not on the warehouse server workstation, you must create the database first and catalog it on the warehouse server workstation. Then follow the steps in 22.25, Changing the Active Warehouse Control Database. Specify the database name that you specified during installation. When you log on to the Data Warehouse Center, click the Advanced button and type the name of the active warehouse control database.

------------------------------------------------------------------------
22.27 Creating editioned SQL steps

When you create editioned SQL steps, depending on usage, consider creating a non-unique index on the edition column to speed the deletion of editions. Consider this for large warehouse tables only, because insert performance can be affected when inserting a small number of rows.

------------------------------------------------------------------------
22.28 Changing sources and targets in the Process Model window

In the Process Model window, if you change a source or target, the change that you made is saved automatically and immediately.
If you make any other change, such as adding a step, you must explicitly save the change to make it permanent. To save the change, click Process --> Save.

------------------------------------------------------------------------
22.29 Adding descriptions to Data Warehouse Center objects

You can specify up to 254 characters in the Description field of notebooks in the Data Warehouse Center. This maximum replaces the maximum lengths specified in the online help.

------------------------------------------------------------------------
22.30 Running Sample Contents

* You cannot successfully run a Sample Contents request that uses the AS/400 agent on a flat file source. Although you can create a flat file source and attempt to use an AS/400 agent to issue a Sample Contents request, the request will fail.
* You might receive an error when you run Sample Contents on a warehouse target in the Process Model window. This error is related to the availability of a common agent site to the warehouse source, the warehouse target, and the step in a process. The list of available agent sites for a step is obtained from the intersection of the warehouse source IR agent sites, the warehouse target IR agent sites, and the agent sites available for that particular step. (The steps are selected on the last page of the agent site properties notebook.) For example, suppose that you want to view the Sample Contents for a process that runs the FTP Put program (VWPRCPY). The step used in the process must be selected for the agent site in the agent site definition. When you run Sample Contents against the target file, the first agent site in the selected list is usually used. However, database maintenance operations might affect the order in which the agent sites are listed. Sample Contents will fail if the selected agent site does not reside on the same system as the source or target file.
------------------------------------------------------------------------
22.31 Editing a Create DDL SQL statement

When you try to edit the Create DDL SQL statement for a target table for a step in development mode, you see the following misleading message:

   "Any change to the Create DDL SQL statement will not be reflected on the table definition or actual physical table. Do you want to continue?"

The change will be reflected in the actual physical table. Ignore the message and continue changing the Create DDL statement. The corrected version of this message for steps in development mode should read as follows:

   "Any change to the Create DDL SQL statement will not be reflected in the table definition. Do you want to continue?"

For steps in test or production mode, the message is correct. The Data Warehouse Center will not change the physical target table that was created when you promoted the step to test mode.

------------------------------------------------------------------------
22.32 Migrating Visual Warehouse business views

If you want to migrate Visual Warehouse metadata synchronization business views to the Data Warehouse Center, promote the business views to production status before you migrate the warehouse control database. If the business views are in production status, their schedules are migrated to the Data Warehouse Center. If the business views are not in production status, they will be migrated in test status without their schedules, and you cannot promote the migrated steps to production status. In that case, you must create the synchronization steps again in the Data Warehouse Center and delete the migrated steps.

------------------------------------------------------------------------
22.33 Generating target tables and primary keys

When the Data Warehouse Center generates the target table for a step, it does not generate a primary key for the target table.
Some of the transformers, such as Moving Average, use the generated table as a source table and also require that the source table have a primary key. Before you use the generated table with such a transformer, define the primary key for the table by right-clicking the table in the DB2 Control Center and clicking Alter.

------------------------------------------------------------------------
22.34 Using Merant ODBC drivers

To access Microsoft SQL Server on Windows NT using the Merant ODBC drivers, verify that the system path contains the sqllib\odbc32 directory.

------------------------------------------------------------------------
22.35 New ODBC Driver

If you will be using a Data Warehouse Center AIX or Sun agent that has been linked to access Merant ODBC sources and will be accessing DB2 databases as well, change the value of the "Driver=" attribute in the DB2 source section of the .odbc.ini file as follows:

AIX: The driver name is /usr/lpp/db2_07_01/lib/db2_36.o

Sample ODBC source entry for AIX:

   [SAMPLE]
   Driver=/usr/lpp/db2_07_01/lib/db2_36.o
   Description=DB2 ODBC Database
   Database=SAMPLE

Sun: The driver name is /opt/IBMdb2/V7.1/lib/libdb2_36.so

Sample ODBC source entry for Sun:

   [SAMPLE]
   Driver=/opt/IBMdb2/V7.1/lib/libdb2_36.so
   Description=DB2 ODBC Database
   Database=SAMPLE

------------------------------------------------------------------------
22.36 Defining a warehouse source or target in an OS/2 database

When you define a warehouse source or warehouse target for an OS/2 database, type the database name in uppercase letters.

------------------------------------------------------------------------
22.37 Monitoring the state of the warehouse control database

The DB2 Control Center or the command line processor might indicate that the warehouse control database is in an inconsistent state. This state is expected: it indicates that the warehouse server did not commit its initial startup message to the warehouse logger.
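The .odbc.ini change described in 22.35 can also be scripted. The following sketch works on a local copy of the file; the file name odbc.ini.sample and the old driver path are placeholders, and the new path is the AIX library named in 22.35:

```shell
# Hypothetical sketch: point the [SAMPLE] data source at the DB2 3.6
# ODBC driver library for AIX. The starting Driver= value and the
# file name are placeholders; only the new path comes from the text.
printf '[SAMPLE]\nDriver=/old/path/libdb2.so\nDescription=DB2 ODBC Database\nDatabase=SAMPLE\n' > odbc.ini.sample
sed 's|^Driver=.*|Driver=/usr/lpp/db2_07_01/lib/db2_36.o|' odbc.ini.sample > odbc.ini.new && mv odbc.ini.new odbc.ini.sample
grep '^Driver=' odbc.ini.sample
```

For the Sun agent, the same edit would substitute /opt/IBMdb2/V7.1/lib/libdb2_36.so instead.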
------------------------------------------------------------------------
22.38 Using SQL Assist with the TBC_MD sample database

In the data warehousing sample contained in the TBC_MD database, you cannot use SQL Assist to change the SQL in the Select Scenario SQL step, because the SQL was edited after it was generated by SQL Assist.

------------------------------------------------------------------------
22.39 Using the FormatDate function

To use the FormatDate function, click Build SQL on the SQL Statement page of the Properties notebook for an SQL step. The output of the FormatDate function is of data type VARCHAR(255). You cannot change the data type by selecting Date, Time, or Date/Time from the Category list on the Function Parameters - FormatDate page.

------------------------------------------------------------------------
22.40 Changing the language setting

On AIX and the Solaris Operating Environment, the installation process sets the language used to publish to the information catalog and to export to the OLAP Integration Server. If you want to use these functions in a language other than the one set during installation, create a soft link by entering the following command on one line.

On AIX:

   /usr/bin/ln -sf /usr/lpp/db2_07_01/msg/locale/flgnxolv.str /usr/lpp/db2_07_01/bin/flgnxolv.str

On the Solaris Operating Environment:

   /usr/bin/ln -sf /opt/IBMdb2/V7.1/msg/locale/flgnxolv.str /opt/IBMdb2/V7.1/bin/flgnxolv.str

In both commands, locale is the locale name of the language in xx_yy format.

------------------------------------------------------------------------
22.41 Using the Generate Key Table transformer

When you use the Update the value in the key column option of the Generate Key Table transformer, the transformer updates only those rows in the table that do not have key values (that is, rows whose key values are null).
When additional rows are inserted into the table, their key values are null until you run the transformer again. To avoid this problem, after the initial run of the transformer, use the Replace all values option to create the keys for all the rows again.

------------------------------------------------------------------------
22.42 Maintaining connections to databases

The warehouse server does not maintain connections to local or remote databases when the DB2 server that manages the databases is stopped and restarted. If you stop and restart DB2, stop and restart the warehouse services as well.

------------------------------------------------------------------------
22.43 Setting up a remote Data Warehouse Center client

When you install the DB2 Administration Client and the Data Warehousing Tools to set up a Data Warehouse Center administrative client on a different workstation from the one that contains the warehouse server, you must add the TCP/IP port number at which the warehouse server workstation is listening to the services file on the client workstation. Add an entry to the services file as follows:

   vwkernel 11000/tcp

------------------------------------------------------------------------
22.44 Defining a DB2 for VM warehouse source

When you define a warehouse source for a DB2 for VM database that is accessed through a DRDA gateway, there are restrictions on the use of the CLOB and BLOB data types:
* You cannot use the Sample Contents function to view data of the CLOB and BLOB data types.
* You cannot use columns of the CLOB and BLOB data types with an SQL step.

These are known restrictions of the DB2 for VM Version 5.2 server, which cannot transmit LOB objects to a DB2 Version 7 client using DRDA.
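The services-file entry for the remote-client setup in 22.43 can be appended from a command prompt. This minimal sketch operates on a local copy of the file (a placeholder); the real file is /etc/services on UNIX or %SystemRoot%\system32\drivers\etc\services on Windows NT:

```shell
# Sketch: add the warehouse server's listener port to a copy of the
# client's services file, only if an entry is not already present.
services=./services.copy      # placeholder for the real services file
touch "$services"
grep -q '^vwkernel' "$services" || echo 'vwkernel 11000/tcp' >> "$services"
grep '^vwkernel' "$services"
```

The grep guard makes the edit idempotent, so rerunning the setup does not produce duplicate entries.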
------------------------------------------------------------------------
22.45 Defining a DB2 for VM or DB2 for VSE target table

When you define a DB2 for VM or DB2 for VSE target table in the Data Warehouse Center, do not select the Grant to public check box. The GRANT command syntax that the Data Warehouse Center generates is not supported on DB2 for VM and DB2 for VSE.

------------------------------------------------------------------------
22.46 Enabling delimited identifier support

To enable delimited identifier support for Sybase and Microsoft SQL Server on Windows NT, select the Enable Quoted Identifiers check box on the Advanced page of the ODBC Driver Setup notebook. To enable delimited identifier support for Sybase on UNIX, edit the Sybase data source in the .odbc.ini file to include the connect attribute EQI=1.

------------------------------------------------------------------------
22.47 Data Joiner Error Indicates a Bind Problem

Customers using DataJoiner with DB2 Version 7.1 FixPak 2 or later may get an error indicating a bind problem. For example, when using a DataJoiner source with a Data Warehouse Center Version 7 agent, you may get an error like:

   DWC07356E An agent's processing of a command of type
   "importTableNames" failed for edition "0" of step "?".
   SQL0001N Binding or precompilation did not complete successfully.
   SQL0001N Package "NULLID.SQLL6D05" was not found. SQLSTATE=51002
   RC = 7356 RC2 = 8600

To correct the problem, add the following lines to the db2cli.ini file:

   [COMMON]
   DYNAMIC=1

On UNIX systems, the db2cli.ini file is located in the .../sqllib/cfg directory. On Windows NT, the db2cli.ini file is located in the .../sqllib directory.

------------------------------------------------------------------------
22.48 Setting up and Running Replication with Data Warehouse Center

1. Setting up the Replication Control Tables
   Setting up and running replication with the Data Warehouse Center requires that the Replication Control tables exist on both the warehouse control database and the warehouse target databases. The Replication Control tables are in the ASN schema, and their names all start with IBMSNAP. The Replication Control tables are created automatically on a database when you define a Replication Source through the Control Center, if the control tables do not already exist. Note that the control tables must also exist on the target database. To get a set of control tables created on the target database, you can either create a Replication Source using the Control Center and then remove the Replication Source, leaving just the control tables in place, or use the Data Joiner Replication Administration (DJRA) tool to define just the control tables.
2. Installing and Using the DJRA
   If you need to use the DJRA to define the control tables, you must install it first. The DJRA ships as part of DB2. To install the DJRA, go to the d:\sqllib\djra directory (where d: is the drive where DB2 is installed) and click the djra.exe package. This installs the DJRA on your system. To access the DJRA afterward on Windows NT, from the Start menu, click the DB2 for Windows NT selection, then select Replication, then select Replication Administration Tools. The DJRA interface differs somewhat from usual Windows NT applications: for each function that it performs, it generates a set of SQL statements but does not execute them. You must save the generated SQL manually and then select the Execute SQL function to run it.
3. Setup to Run Capture and Apply
   For the system that you are testing on, see the Replication Guide and Reference for instructions on configuring your system to run the Capture and Apply program.
   You must bind the Capture and Apply programs on each database where they will be used. Note that you do NOT need to create a password file; the Data Warehouse Center automatically creates a password file for the replication subscription.
4. Define a Replication Source in the Control Center
   Use the Control Center to define a Replication Source. The Data Warehouse Center supports five types of replication: user copy, point-in-time, base aggregate, change aggregate, and staging tables (CCD tables). The user copy, point-in-time, and condensed staging table types require that the replication source table have a primary key; the other replication types do not. Keep this in mind when choosing an input table to be defined as a Replication Source. A Replication Source is actually the definition of the original source table plus a created CD (change data) table that holds the data changes before they are moved to the target table. When you define a Replication Source in the Control Center, a record is written to ASN.IBMSNAP_REGISTER to define the source and its CD table. The CD table is created at the same time, but initially it contains no data. When you define a Replication Source, you can choose to include only the after-image columns or both the before-image and after-image columns. These choices are made through check boxes in the Control Center Replication Source interface. Your selection of before-image and after-image columns is then translated into columns created in the new CD table. In the CD table, after-image columns have the same names as the corresponding source table columns; before-image columns have an 'X' as the first character of the column name.
5. Import the Replication Source into the Data Warehouse Center
   After you create the Replication Source in the Control Center, you can import it into the Data Warehouse Center. When importing the source, be sure to select the check box that says "Tables that can be replicated".
   This tells the Data Warehouse Center to look at the records in the ASN.IBMSNAP_REGISTER table to see which tables have been defined as Replication Sources.
6. Define a Replication Step in the Data Warehouse Center
   In the Process Model window, select one of the five replication types: base aggregate, change aggregate, point-in-time, staging table, or user copy. If you want to define a base aggregate or change aggregate replication type, see step 13 below, which describes how to set up a base aggregate or change aggregate replication in the Data Warehouse Center. Select an appropriate Replication Source for the replication type; as noted above, the user copy, point-in-time, and condensed staging table types require that the input source have a primary key. Connect the Replication Source to the replication step. Open the properties of the replication step and go to the Parameters tab. Select the desired columns. Select the check box to have a target table created, and select a warehouse target. Go to the Processing Options page and fill in the parameters. Click OK.
7. Start the Capture Program
   In a DOS window, enter:

      ASNCCP source-database COLD PRUNE

   The COLD parameter indicates a cold start and deletes any existing data in the CD tables. The PRUNE parameter tells the Capture program to maintain the IBMSNAP_PRUNCNTL table. Leave the Capture program running; when it is time to quit, you can stop it with Ctrl-Break in its DOS window. Be aware that you must start the Capture program before you start the Apply program.
8. Replication Step Promote-To-Test
   Back in the Data Warehouse Center, promote the defined replication step to test mode. This causes the replication subscription information to be written to the Replication Control tables. You will see records added to IBMSNAP_SUBS_SET, IBMSNAP_SUBS_MEMBR, IBMSNAP_SUBS_COLS, and IBMSNAP_SUBS_EVENT to support the subscription. The target table will also be created in the target database.
   If the replication type is user copy, point-in-time, or condensed staging table, a primary key is required on the target table; go to the Control Center to create the primary key. Note that some replication target tables also require unique indexes on various columns. The Data Warehouse Center creates these unique indexes when the table is created, so you do NOT have to create them yourself. Note, though, that if you define a primary key in the Control Center and a unique index already exists for that column, you will get a warning message when you create the primary key. Ignore this warning message.
9. Replication Step Promote-To-Production
   No replication subscription changes are made during promote-to-production. This is strictly a Data Warehouse Center operation, like any other step.
10. Run a Replication Step
   After a replication step has been promoted to test mode, it can be run. Do an initial run before making any changes to the source table. Go to the Work in Progress (WIP) window, select the replication step, and run it. When the step is run, the event record in the IBMSNAP_SUBS_EVENT table is updated, and the subscription record in IBMSNAP_SUBS_SET is posted as active. The subscription should run immediately. When the subscription runs, the Apply program is called by the agent to process the active subscriptions. If you update the original source table after that point, the changed data is moved into the CD table. If you then run the replication step again, so that the Apply program runs again, the changed data is moved from the CD table to the target table.
11. Replication Step Demote-To-Test
   No replication subscription changes are made during demote-to-test. This is strictly a Data Warehouse Center operation, like any other step.
12. Replication Step Demote-To-Development
   When you demote a replication step to development, the subscription information is removed from the Replication Control tables.
   No records remain in the Replication Control tables for that particular subscription after the demote-to-development finishes. The target table is also dropped at this point. The CD table remains in place, because it belongs to the definition of the Replication Source.
13. How to Set Up a Base Aggregate or Change Aggregate Replication in the Data Warehouse Center
   o Input table. Choose an input table that can be used with a GROUP BY statement. This example uses an input table that has the columns SALES, REGION, and DISTRICT.
   o Replication step. Choose base aggregate or change aggregate, and open the step properties.
     + When the Apply program runs, it needs to execute a SELECT statement of the form SELECT SUM(SALES), REGION, DISTRICT FROM source-table GROUP BY REGION, DISTRICT. Therefore, in the output columns, select REGION and DISTRICT plus one calculated column for SUM(SALES). Use the Add Calculated Column button; for this example, enter SUM(SALES) in the Expression field and save it.
     + WHERE clause. Replication requires that when you set up a replication step that needs only a GROUP BY clause, you must also provide a dummy WHERE clause, such as 1=1. Do NOT include the word "WHERE" in the WHERE clause. For base aggregate, the Data Warehouse Center GUI has only a WHERE clause entry field; in this field, for this example, enter: 1=1 GROUP BY REGION, DISTRICT. For change aggregate, there are both a WHERE clause field and a GROUP BY field: in the WHERE clause field enter 1=1, and in the GROUP BY field enter GROUP BY REGION, DISTRICT.
     + Set up the rest of the step properties as you would for any other type of replication. Click OK to save the step and create the target table object.
   o Open the target table object. Rename the output column for the calculated column expression to a valid column name, and specify a valid data type for the column. Save the target table object.
   o Run promote-to-test on the replication step. The target table will be created; it does NOT need a primary key.
   o Run the step like any other replication step.

------------------------------------------------------------------------
22.49 Troubleshooting Tips

* To turn on tracing for the Apply program, set the Agent Trace value to 4 in the Warehouse Properties window. The agent turns on full tracing for Apply when Agent Trace = 4.
* If you do not see any data in the CD table, then most likely either the Capture program has not been started or you have not updated the original source table to create some changed data.
* The mail server field of the Notification page of the Schedule notebook is missing from the online help. The mail server must support ESMTP for Data Warehouse Center notification to work.
* In the "Open the Work in Progress window" help, click Warehouse --> Work in Progress rather than Warehouse Center --> Work in Progress.

------------------------------------------------------------------------
22.50 Accessing Sources and Targets

The following tables list the version and release levels of the sources and targets that the Data Warehouse Center supports.

Table 7. Version and release levels of supported IBM warehouse sources

   Source                                           Version/Release
   IMS                                              5.1
   DB2 Universal Database for Windows NT            5.2 - 7.1
   DB2 Universal Database Enterprise-Extended       5.2 - 7.1
     Edition
   DB2 Universal Database for OS/2                  5.2 - 7.1
   DB2 Universal Database for AS/400                3.7 - 4.5
   DB2 Universal Database for AIX                   5.2 - 7.1
   DB2 Universal Database for Solaris Operating     5.2 - 7.1
     Environment
   DB2 Universal Database for OS/390                4.1 - 7.1
   DB2 DataJoiner                                   2.1.1
   DB2 for VM                                       5.3.4 or later
   DB2 for VSE                                      7.1

Version and release levels of supported non-IBM warehouse sources:

   Source                 Windows NT       AIX
   Informix               7.2.2 - 8.2.1    7.2.4 - 9.2.0
   Oracle                 7.3.2 - 8.1.5    8.1.5
   Microsoft SQL Server   7.0
   Microsoft Excel        97
   Microsoft Access       97
   Sybase                 11.5             11.9.2

Table 8. Version and release levels of supported IBM warehouse targets

   Target                                           Version/Release
   DB2 Universal Database for Windows NT            6 - 7
   DB2 Universal Database Enterprise-Extended       6 - 7
     Edition
   DB2 Universal Database for OS/2                  6 - 7
   DB2 Universal Database for AS/400                3.1 - 4.5
   DB2 Universal Database for AIX                   6 - 7
   DB2 Universal Database for Solaris Operating     6 - 7
     Environment
   DB2 Universal Database for OS/390                4.1 - 7
   DB2 DataJoiner                                   2.1.1
   DB2 DataJoiner/Oracle                            8
   DB2 for VM                                       3.4 - 5.3.4
   DB2 for VSE                                      3.2, 7.1
   CA/400                                           3.1.2

------------------------------------------------------------------------
22.51 Additions to Supported non-IBM Database Sources

The following table contains additions to the supported non-IBM database sources:

   Database   Operating system    Database client requirements
   Informix   AIX                 Informix-Connect and ESQL/C version
                                  9.1.4 or later
   Informix   Solaris Operating   Informix-Connect and ESQL/C version
              Environment         9.1.3 or later
   Informix   Windows NT          Informix-Connect for Windows Platforms
                                  2.x or Informix-Client Software
                                  Developer's Kit for Windows
                                  Platforms 2.x
   Oracle 7   AIX                 Oracle7 SQL*Net and the Oracle7
                                  SQL*Net shared library (built by the
                                  genclntsh script)
   Oracle 7   Solaris Operating   Oracle7 SQL*Net and the Oracle7
              Environment         SQL*Net shared library (built by the
                                  genclntsh script)
   Oracle 7   Windows NT          The appropriate DLLs for the current
                                  version of SQL*Net, plus OCIW32.DLL.
                                  For example, SQL*Net 2.3 requires
                                  ORA73.DLL, CORE35.DLL, NLSRTL32.DLL,
                                  CORE350.DLL, and OCIW32.DLL.
   Oracle 8   AIX                 Oracle8 Net8 and the Oracle8 SQL*Net
                                  shared library (built by the
                                  genclntsh8 script)
   Oracle 8   Solaris Operating   Oracle8 Net8 and the Oracle8 SQL*Net
              Environment         shared library (built by the
                                  genclntsh8 script)
   Oracle 8   Windows NT          To access remote Oracle8 database
                                  servers at version 8.0.3 or later,
                                  install Oracle Net8 Client version
                                  7.3.4.x, 8.0.4, or later. On Intel
                                  systems, install the appropriate DLLs
                                  for the Oracle Net8 Client (such as
                                  Ora804.DLL, PLS804.DLL, and OCI.DLL)
                                  on your path.
Sybase AIX In a non-DCE environment (ibsyb15 ODBC driver): libct library In a DCE environment (ibsyb1115 ODBC driver): Sybase 11.1 client library libct_r Sybase Solaris Operating In a non-DCE environment Environment (ibsyb15 ODBC driver): libct library In a DCE environment (ibsyb1115 ODBC driver): Sybase 11.1 client library libct_r Sybase Windows NT Sybase Open Client-Library 10.0.4 or later and the appropriate Sybase Net-Library. ------------------------------------------------------------------------ 22.52 Creating a Data Source Manually in Data Warehouse Center When a data source is created using Relational Connect and the "Create Nickname" statement, the data source will not be available in the functions related to importing tables in Data Warehouse Center. To use the data source as a source or target table, perform the following steps: 1. Define the source/target without importing any tables. 2. Expand the Warehouse Sources/Targets tree from the main window of the Data Warehouse Center, and right-click "Tables" for the desired source/target. 3. Click Define. 4. Define the data source using the notebook that opens and ensure that the columns are defined for each data source. For more information see, "Defining a Warehouse Source Table" or "Defining a Warehouse Target Table " in the Information Center. ------------------------------------------------------------------------ 22.53 Importing and Exporting Metadata Using the Common Warehouse Metadata Interchange (CWMI) 22.53.1 Introduction In addition to the existing support for tag language files, the Data Warehouse Center can now import and export metadata to and from XML files that conform to the Common Warehouse Metamodel (CWM) standard. Importing and exporting these CWM-compliant XML files is referred to as the Common Warehouse Metadata Interchange (CWMI). 
You can import and export metadata from the following Data Warehouse Center objects:
* Warehouse sources
* Warehouse targets
* Subject areas, including processes, sources, targets, and steps
* User-defined programs

The CWMI import and export utility does not currently support certain kinds of metadata, including: schedules, warehouse schemas, shortcut steps, cascade relationships, users, and groups.

The Data Warehouse Center creates a log file that contains the results of the import and export processes. Typically, the log file is created in the x:\program files\sqllib\logging directory (where x: is the drive where you installed DB2), or in the directory that you specified in the VWS_LOGGING environment variable. The log file is plain text; you can view it with any text editor.

22.53.2 Importing Metadata

You can import metadata either from within the Data Warehouse Center or from the command line. New objects that are created through the import process are assigned to the default Data Warehouse Center security group. For more information, see "Updating security after importing" in these Release Notes.

If you are importing metadata about a step, multiple files can be associated with the step. Metadata about the step is stored in an XML file, but sometimes a step has associated data stored as BLOBs. The BLOB metadata has the same file name as the XML file, but it is in separate files that have numbered extensions. All of the related step files must be in the same directory when you import.

Updating steps when they are in test or production mode

A step must be in development mode before the Data Warehouse Center can update the step's metadata. If the step is in test or production mode, demote the step to development mode before importing the metadata:
1. Log on to the Data Warehouse Center.
2. Right-click the step that you want to demote, and click Mode.
3. Click Development.
The step is now in development mode.
Change the step back to either test or production mode after you import the metadata.

Importing data from the Data Warehouse Center

You can import metadata from within the Data Warehouse Center:
1. Log on to the Data Warehouse Center.
2. In the left pane, click Warehouse.
3. Click Selected --> Import Metadata --> Interchange File.
4. In the Import Metadata window, specify the file name that contains the metadata that you want to import. You can either type the file name or browse for the file.
   o If you know the location, type the fully qualified path and file name that you want to import. Be sure to include the .xml file extension to specify that you want to import metadata in the XML format.
   o To browse for your files:
     a. Click the ellipsis (...) push button.
     b. In the File window, change Files of type to XML.
     c. Go to the correct directory and select the file that you want to import.
        Note: The file must have an .xml extension.
     d. Click OK.
5. In the Import Metadata window, click OK to finish.
The Progress window is displayed while the Data Warehouse Center imports the file.

Using the command line to import metadata

You can also use the command line to import metadata. Here is the import command syntax:

CWMImport XML_file dwcControlDB dwcUserId dwcPW [PREFIX=DWCtbschema]

XML_file
    The fully qualified path and file name (including the drive and directory) of the XML file that you want to import. This parameter is required.
dwcControlDB
    The name of the warehouse control database into which you want to import your metadata. This parameter is required.
dwcUserId
    The user ID that you use to log on to the warehouse control database. This parameter is required.
dwcPW
    The user password that you use to log on to the warehouse control database. This parameter is required.
[PREFIX=DWCtbschema]
    The database schema name for the Data Warehouse Center system tables, sometimes referred to as the table prefix. If no value for PREFIX= is specified, the default schema name is IWH. This parameter is optional.

22.53.3 Updating Your Metadata After Running the Import Utility

Updating security after importing

As a security measure, the Data Warehouse Center does not import or export passwords. You need to update the passwords on new objects as needed. For more details on import considerations, see the Data Warehouse Center Administration Guide, Chapter 12, "Exporting and importing Data Warehouse Center metadata."

When you import metadata, all of the objects are assigned to the default security group. You can change the groups that have access to the object:
1. Log on to the Data Warehouse Center.
2. Right-click the folder that contains the object that you want to change.
3. Click Properties, and then click the Security tab.
4. Remove groups from the Selected warehouse groups list, or add groups from the Available warehouse groups list.
5. Click OK.

22.53.4 Exporting Metadata

You can export metadata either from within the Data Warehouse Center or from the command line.

Some steps have metadata that is stored as a BLOB. The BLOB metadata is exported to a separate file that has the same file name as the step's XML file, but with a numbered extension (.1, .2, and so on).

Exporting data from the Data Warehouse Center

You can export metadata from within the Data Warehouse Center:
1. Log on to the Data Warehouse Center.
2. In the left pane, click Warehouse.
3. Click Selected --> Export Metadata --> Interchange File.
4. In the Export Metadata window, specify the file name that will contain the exported metadata. You can either enter the file name or browse for the file:
   o If you know the fully qualified path and file name that you want to use, type it in the File name entry field. Be sure to include the .xml file extension to specify that you want to export metadata in the XML format.
   o To browse for your files:
     a. Click the ellipsis (...) push button.
     b. In the File window, change Files of type to XML.
     c. Go to the correct directory and select the file that you want to contain the exported metadata.
        Note: Any existing file that you select is overwritten with the exported metadata.
     d. Click OK.
5. When the Export Metadata window displays the correct file name, click the object from the Available objects list whose metadata you want to export.
6. Click the > sign to move the selected object from the Available objects list to the Selected objects list. Repeat until all of the objects that you want to export are listed in the Selected objects list.
7. Click OK.

The Data Warehouse Center creates an input file, which contains information about the Data Warehouse Center objects that you selected to export, and then exports the metadata about those objects. The Progress window is displayed while the Data Warehouse Center is exporting the metadata. When the export process is complete, you will receive an informational message about the export process. A return code of 0 indicates that the export was successful. You can also view the log file for more detailed information.

Using the command line to export metadata

Before you can export metadata from the command line, you must first create an input file. The input file is a text file with an .INP extension, and it lists all of the objects, by object type, that you want to export. When you export from within the Data Warehouse Center, the input file is created automatically, but to export from the command line you must first create the input file. You can create the input file with any text editor. Type all of the object names as they appear in the Data Warehouse Center. Make sure that you create the file in a read/write directory. When you run the export utility, the Data Warehouse Center writes the XML files to the same directory where the input file is.
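To make the command-line export concrete, here is a minimal, hypothetical Python sketch that writes an input file listing the sample objects (the same names as the sample input file that follows) and assembles the CWMExport command line. The control database name (DWCTRLDB), user ID, password, and PREFIX value are placeholders, and the .INP section-marker syntax is not reproduced here; CWMExport itself ships with DB2 and is not invoked in this sketch.

```python
# Sketch only: object names are from the sample below; credentials,
# control database, and prefix are hypothetical placeholders.
from pathlib import Path

# One object name per line, exactly as the objects are named in the
# Data Warehouse Center. (The real .INP format also groups the names
# into sections by object type; that syntax is not shown here.)
objects = [
    "Tutorial Fact Table Process",   # process
    "Tutorial file source",          # warehouse source
    "Tutorial target",               # warehouse target
    "New Program group",             # user-defined program group
]
inp_file = Path("export.INP")
inp_file.write_text("\n".join(objects) + "\n")

# CWMExport INPcontrol_file dwcControlDB dwcUserID dwcPW [PREFIX=DWCtbschema]
cmd = ["CWMExport", str(inp_file), "DWCTRLDB", "db2admin", "secret", "PREFIX=IWH"]
print(" ".join(cmd))
# prints: CWMExport export.INP DWCTRLDB db2admin secret PREFIX=IWH
```

The exported XML files are written to the directory containing the .INP file, so the sketch creates the file in the current (read/write) directory.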
Here's a sample input file:

    Tutorial Fact Table Process
    Tutorial file source
    Tutorial target
    New Program group

In the (processes) section, list all of the processes that you want to export. In the (information resources) section, list all the warehouse sources and targets that you want to export. The Data Warehouse Center automatically includes the tables and columns that are associated with these sources and targets. In the (user defined programs) section, list all the program groups that you want to export.

To export metadata, enter the following command at a DOS command prompt:

CWMExport INPcontrol_file dwcControlDB dwcUserID dwcPW [PREFIX=DWCtbschema]

INPcontrol_file
    The fully qualified path and file name (including the drive and directory) of the .INP file that contains the objects that you want to export. This parameter is required.
dwcControlDB
    The name of the warehouse control database that you want to export from. This parameter is required.
dwcUserID
    The user ID that you use to log on to the warehouse control database. This parameter is required.
dwcPW
    The password that you use to log on to the warehouse control database. This parameter is required.
[PREFIX=DWCtbschema]
    The database schema name for the Data Warehouse Center system tables, sometimes referred to as the table prefix. If no value for PREFIX= is specified, the default value is IWH. This parameter is optional.
------------------------------------------------------------------------
22.54 OS/390 Runstats utility step

When defining an OS/390 Runstats utility step in DWC, be aware of the following on the Parameters tab of the Step Properties dialog. For the tablespace field, enter the name in uppercase. If the tablespace is not in database DSNDB04, the tablespace name must be qualified by the database that contains it; for example, enter SAMPLE.EMPLOYEE. The help currently does not mention this field.
------------------------------------------------------------------------
22.55 OS/390 Load utility step

When defining an OS/390 Load utility step in DWC, be aware of the following on the Parameters tab of the Step Properties dialog. For the load to work, you must always select the Advanced button. Otherwise the INTO clause of the load statement is not generated, and the load will fail when run. In addition, FixPak 3 includes a fix to remove the double quotes surrounding the load dataset name. Without this fix, a load will not work.
------------------------------------------------------------------------
22.56 Common Warehouse Metamodel (CWM) XML support

The Version 7.2 CWM toolkit works on Java Development Kit (JDK) 1.2.2 or 1.3. You can now import and export the following CWM XML objects:

Shortcut steps from other processes
    When you export a process that contains a step that has a relationship to a step in another process (a shortcut), both processes are exported and the relationship is maintained.
Conditionally cascaded relationships
    You can now import and export different cascaded relationships between steps, including CHILD, SUCCESS, FAILURE, and UNCONDITIONAL.
Warehouse sources as view objects
    When you export, you can now define a warehouse source as a view object. View objects are processed the same as table objects.
SQLDataType for columns and fields
    You can now use SQLDataType for columns and fields.
Multiple correlation names for the same table
    During import or export, you can have multiple correlation names, each with its own column mapping for the same table.
New SAP and WebSphere Site Analyzer (WSA) source support
    With the addition of new source support tags, you can now export SAP and WSA information that is saved in your warehouse.
------------------------------------------------------------------------
22.57 Process modeler

You can resize the process modeler palette to fit your screen.
The icons on the palette will automatically reposition to a multicolumn palette. When you click on a palette icon, you will see a heading on palette objects.

You can now see table and file objects by their business names by selecting the Show Business Names option in the View menu. You can also adjust the percentage settings and make your process views smaller or larger by selecting the Zoom To option.

If objects are overlapping within the palette, you can click on the objects to bring them to the top of the screen. In addition, object names now wrap into multiple rows to save palette space.

You can now use the Delete key to remove objects. Table changes are saved when you save a process. Selection behavior is not automatic. If you want to remove a table, file, or view from a warehouse source or target and process, you can right-click and select the Remove from Source action if the object is in a warehouse source, or the Remove from Target action if the object is in a warehouse target.

The cursor now shows the palette selection state. Additionally, the status line shows the name of the object that the cursor is on.
------------------------------------------------------------------------
22.58 Schema modeler

You can now minimize and maximize tables within the schema modeler. When you minimize a table, it changes into an icon. For greater visual accuracy, you can now create a star schema layout.
------------------------------------------------------------------------
22.59 Mandatory fields

The Data Warehouse Center now displays red borders on required fields. The red borders alert you to mandatory information, such as database names, user IDs, or passwords, that is needed to define Data Warehouse Center objects. When you enter the required information, the borders disappear.
------------------------------------------------------------------------
22.60 Data Warehouse Center launchpad enhancements

When you create a Data Warehouse Center object from the launchpad, the navigation tree expands to show the location of the new object.
------------------------------------------------------------------------
22.61 Printing step information to a file

You can now print information about a step (such as subject area, source table names, and target table names) to a text file. To print step information to a file, right-click the step icon in the process modeler, click Print --> Print to File, and specify the name of the file to which you want to print the information.
------------------------------------------------------------------------
Data Warehouse Center Application Integration Guide

In Chapter 5, "Metadata Templates", Table 16 describes Column tag tokens. The information in the manual should state that "*ColumnPositionNumber" should start with "1". The manual incorrectly gives "0" as the starting value.

Later in Chapter 5, in Table 42, the TableTypeIfFile token is required if the type specified for the DatabaseType token in the corresponding SourceDataBase.tag is ISV_IR_FFLan. If it is not specified, an error will be detected.

In Chapter 6, "Data Warehouse Center metadata", the description of the POSNO column object property should be changed to: An index, starting with 1, of the column or field in the row of the table or file.

In Chapter 8, "Information Catalog Manager object types", the directory where you can find the .TYP files, which include the tag language for defining an object type, has been changed to \SQLLIB\DGWIN\TYPES.
------------------------------------------------------------------------
23.1 Additional metadata templates

In Chapter 5, "Metadata Templates", the following metadata templates should be included.

Table 9. New metadata templates supplied with the Data Warehouse Center

Template                     See:
Commit.tag                   23.1.1, "Commit.tag"
ForeignKey.tag               23.1.2, "ForeignKey.tag"
ForeignKeyAdditional.tag     23.1.3, "ForeignKeyAdditional.tag"
PrimaryKey.tag               23.1.4, "PrimaryKey.tag"
PrimaryKeyAdditional.tag     23.1.5, "PrimaryKeyAdditional.tag"

23.1.1 Commit.tag

Use this template to improve performance when you are using large tag language files. A commit template can be inserted between any of the groups of templates listed here. A commit template cannot be inserted between templates within a group. For example, it is valid to insert a commit template between AgentSite.tag and VWPGroup.tag, but invalid to insert one between VWPProgramTemplate.tag and VWPProgramTemplateParameter.tag. If commit templates are used incorrectly, import may report an error.

* AgentSite.tag
* VWPGroup.tag
* VWPProgramTemplate.tag, VWPProgramTemplateParameter.tag
* SourceDatabase.tag
* WarehouseDatabase.tag
* Table.tag, Column.tag
* SubjectArea.tag
* Process.tag
* Step.tag, StepInputTable.tag, StepOutputTable.tag, StepVWPOutputTable.tag, StepVWPProgramInstance.tag, VWPProgramInstanceParameter.tag
* StepCascade.tag
* StarSchema.tag, StarSchemaInputTable.tag
* PrimaryKey.tag, PrimaryKeyAdditional.tag
* ForeignKey.tag, ForeignKeyAdditional.tag

The use of the commit template is optional.

23.1.1.1 Tokens

Table 10 provides information about each token in the template.

Table 10. Commit.tag tokens

Relationship parameters:

*CurrentCheckPointID++
    An index, starting with 0, that increases each time it is substituted in a token. This token is required.
    Allowed values: a numeric value.

23.1.1.2 Examples of values

Table 11 provides example values for each token to illustrate the kind of metadata you might provide for each token.

Table 11. Example values for Commit.tag tokens

Token                        Example value
*CurrentCheckPointID++       1

23.1.2 ForeignKey.tag

Use this template to define foreign key constraints on tables. The ForeignKey.tag template defines the relationships to the table and the column on which the constraint is being defined. This template also defines the relationships to the table and column of the primary key that is being referred to.

Before you use the ForeignKey.tag template, you must define the primary key constraint (using the PrimaryKey.tag template) and the tables and columns (using the Table.tag and Column.tag templates) on which you want to define the foreign key constraint.

23.1.2.1 Tokens

Table 12 provides information about each token in the template.

Table 12. ForeignKey.tag tokens

Entity parameters:

*ConstraintName
    The name of the constraint. The name must be unique within a table or field. This token is required.
    Allowed values: a text string, up to 80 bytes in length.
*ForeignColumnKeyName
    The name of the column on which the foreign key constraint is being defined.
    Allowed values: a text string, up to 254 bytes in length.
*ForeignKeyID
    The key that uniquely identifies the foreign key. The key must be unique from all other keys in the tag language file. Tip: Finish processing the ForeignKey.tag template before increasing the value of the key. This token is required.
    Allowed values: a numeric value.
*MapID
    An arbitrary number that is unique from all other keys in the interchange file. Tip: Finish processing the ForeignKey.tag template before increasing the value of this token. This token is required.
    Allowed values: a numeric value.
*PrimaryColumnKeyName
    The column name of the referenced column.
    Allowed values: a text string, up to 80 bytes in length.
*ReferencedPrimaryKeyID
    The key that uniquely identifies the primary key. The key must be unique from all other keys in the tag language file. Tip: Finish processing the ForeignKey.tag template before increasing the value of the key. This token is required.
    Allowed values: a numeric value.

Relationship parameters:

*DatabaseName
    The business name of the warehouse source or warehouse target. This token is required.
    Allowed values: a text string, up to 40 bytes in length.
*ForeignTablePhysicalName
    The database-defined name of the physical table containing the foreign keys that reference the keys in other tables.
    Allowed values: a text string, up to 254 bytes in length.
*PrimaryTablePhysicalName
    The database-defined name of the physical table containing the keys that are referenced by the foreign keys.
    Allowed values: a text string, up to 80 bytes in length.
*PrimaryTableOwner
    The owner, high-level qualifier, collection, or schema of the table that contains the primary key column that is being referenced. This token is required.
    Allowed values: a text string, up to 128 bytes in length.
*ForeignTableOwner
    The owner, high-level qualifier, collection, or schema of the table that contains the foreign key constraint column. This token is required.
    Allowed values: a text string, up to 128 bytes in length.

23.1.2.2 Examples of values

Table 13 provides example values for each token to illustrate the kind of metadata that you might provide for each token.

Table 13. Example values for ForeignKey.tag tokens

Token                        Example value
*ConstraintName              Department
*DatabaseName                Finance Warehouse
*ForeignColumnKeyName        Country_code
*ForeignKeyID                07011
*ForeignTablePhysicalName    Geography
*MapID                       02568
*PrimaryColumnKeyName        State_code
*ReferencedPrimaryKeyID      Name
*PrimaryTablePhysicalName    City
*PrimaryTableOwner           DB2ADMIN
*ForeignTableOwner           IWH

23.1.3 ForeignKeyAdditional.tag

Use this template to define a composite foreign key. Before you use the ForeignKeyAdditional.tag template, you must define a constraint (using the ForeignKey.tag template) on the first column. You can then add columns by using this template for each column that you want to add.

23.1.3.1 Tokens

Table 14 provides information about each token in the template.

Table 14. ForeignKeyAdditional.tag tokens

Entity parameters:

*ForeignColumnKeyName
    The name of the column on which the foreign key constraint is being defined.
    Allowed values: a text string, up to 80 bytes in length.
*ForeignKeyID
    The key that uniquely identifies the foreign key. The key must be unique from all other keys in the tag language file. Tip: Finish processing the ForeignKeyAdditional.tag template before increasing the value of the key. This token is required.
    Allowed values: a numeric value.
*MapID
    An arbitrary number that is unique from all other keys in the interchange file. Tip: Finish processing the ForeignKeyAdditional.tag template before increasing the value of this token. This token is required.
    Allowed values: a numeric value.
*MapSeqNo
    A number signifying each additional column added as part of a composite key to the foreign key constraint.
    Allowed values: a unique, increasing, consecutive number starting at 2.
*PrimaryColumnKeyName
    The column name of the referenced column.
    Allowed values: a text string, up to 80 bytes in length.

Relationship parameters:

*DatabaseName
    The business name of the warehouse source or warehouse target. This token is required.
    Allowed values: a text string, up to 40 bytes in length.
*ForeignTablePhysicalName
    The database-defined name of the physical table containing the foreign keys that reference the keys in other tables.
    Allowed values: a text string, up to 80 bytes in length.
*PrimaryTablePhysicalName
    The database-defined name of the physical table containing the keys that are referenced by the foreign keys.
    Allowed values: a text string, up to 80 bytes in length.
*PrimaryTableOwner
    The owner, high-level qualifier, collection, or schema of the table that contains the primary key column that is being referenced. This token is required.
    Allowed values: a text string, up to 128 bytes in length.
*ForeignTableOwner
    The owner, high-level qualifier, collection, or schema of the table that contains the foreign key constraint column. This token is required.
    Allowed values: a text string, up to 128 bytes in length.

23.1.3.2 Examples of values

Table 15 provides example values for each token to illustrate the kind of metadata that you might provide for each token.

Table 15. Example values for ForeignKeyAdditional.tag tokens

Token                        Example value
*DatabaseName                Finance Warehouse
*ForeignColumnKeyName        Country_code
*ForeignKeyID                07011
*ForeignTablePhysicalName    Geography
*MapID                       22578
*MapSeqNo                    2
*PrimaryColumnKeyName        State_code
*PrimaryTablePhysicalName    City
*PrimaryTableOwner           DB2ADMIN
*ForeignTableOwner           IWH

23.1.4 PrimaryKey.tag

Use this template to define primary key constraints on tables. The template also defines the relationships to the table and the column on which the constraint is being defined. Before you use the PrimaryKey.tag template, you must define the tables and columns (using the Table.tag and Column.tag templates) on which you want to define the primary key constraint.

23.1.4.1 Tokens

Table 16 provides information about each token in the template.

Table 16. PrimaryKey.tag tokens

Entity parameters:

*ColumnName
    The name of the column or field. The name must be unique within a table or field. This token is required.
    Allowed values: a text string, up to 80 bytes in length.
*MapID
    An arbitrary number that is unique from all other keys in the interchange file. Tip: Finish processing the PrimaryKey.tag template before increasing the value of this token. This token is required.
    Allowed values: a numeric value.
*PrimaryKeyID
    The key that uniquely identifies the primary key. The key must be unique from all other keys in the tag language file. Tip: Finish processing the PrimaryKey.tag template before increasing the value of the key. This token is required.
    Allowed values: a numeric value.

Relationship parameters:

*DatabaseName
    The business name of the warehouse source or warehouse target. This token is required.
    Allowed values: a text string, up to 40 bytes in length.
*TableOwner
    The owner, high-level qualifier, collection, or schema of the table that contains the column. This token is required.
    Allowed values: a text string, up to 128 bytes in length.
*TablePhysicalName
    The physical name of the table or file that contains the column as defined to the database manager or file system. This token is required.
    Allowed values: a text string, up to 80 bytes in length.

23.1.4.2 Examples of values

Table 17 provides example values for each token to illustrate the kind of metadata that you might provide for each token.

Table 17. Example values for PrimaryKey.tag tokens

Token                        Example value
*ColumnName                  Country_code
*DatabaseName                Finance Warehouse
*MapID                       54627
*PrimaryKeyID                74622
*TableOwner                  DB2ADMIN
*TablePhysicalName           GEOGRAPHY

23.1.5 PrimaryKeyAdditional.tag

Use this template to define a composite primary key. Before you use the PrimaryKeyAdditional.tag template, you must define a constraint on the first column by using the PrimaryKey.tag template. Any additional columns can then be added using this template. The template also relates the additional primary keys to the first primary key, which is defined using PrimaryKey.tag.

23.1.5.1 Tokens

Table 18 provides information about each token in the template.

Table 18. PrimaryKeyAdditional.tag tokens

Entity parameters:

*ColumnName
    The name of the column or field. The name must be unique within a table or field. This token is required.
    Allowed values: a text string, up to 80 bytes in length.
*FirstPrimaryKeyID
    The key that uniquely identifies the primary key. The key must be unique from all other keys in the tag language file. Tip: Finish processing the PrimaryKeyAdditional.tag template before increasing the value of the key. This token is required.
    Allowed values: a numeric value.
*MapID
    An arbitrary number that is unique from all other keys in the interchange file. Tip: Finish processing the PrimaryKeyAdditional.tag template before increasing the value of this token. This token is required.
    Allowed values: a numeric value.
*MapSeqNo
    A number signifying each additional column added as part of a composite key to the primary key constraint.
    Allowed values: a unique, increasing, consecutive number starting at 2.

Relationship parameters:

*DatabaseName
    The business name of the warehouse source or warehouse target. This token is required.
    Allowed values: a text string, up to 40 bytes in length.
*TableOwner
    The owner, high-level qualifier, collection, or schema of the table that contains the column. This token is required.
    Allowed values: a text string, up to 15 bytes in length.
*TablePhysicalName
    The physical name of the table or file that contains the column as defined to the database manager or file system. This token is required.
    Allowed values: a text string, up to 80 bytes in length.

23.1.5.2 Examples of values

Table 19 provides example values for each token to illustrate the kind of metadata that you might provide for each token.

Table 19. Example values for PrimaryKeyAdditional.tag tokens

Token                        Example value
*ColumnName                  Country_code
*DatabaseName                Finance Warehouse
*MapID                       99542
*MapSeqNo                    2
*FirstPrimaryKeyID           07801
*TableOwner                  DB2ADMIN
*TablePhysicalName           GEOGRAPHY
------------------------------------------------------------------------
Data Warehouse Center Online Help
------------------------------------------------------------------------
24.1 Defining Tables or Views for Replication

A table or view must be defined for replication using the DB2 Control Center before it can be used as a replication source in the Data Warehouse Center.
------------------------------------------------------------------------
24.2 Running Essbase VWPs with the AS/400 Agent

Before running the Essbase VWPs with the AS/400 agent, ARBORLIB and ARBORPATH need to be set as *sys environment variables. To set these, the user ID must have *jobctl authority. These environment variables need to point to the library where Essbase is installed.
------------------------------------------------------------------------

24.3 Using the Publish Data Warehouse Center Metadata Window and Associated Properties Window

In step 10 of the task help, an example states that if you specify a limit value of 1 (Limit the levels of objects in the tree) and publish a process, only 1 step from that process is published and displayed. This example is not correct in all situations.

In step 8, in the second bulleted item, the first statement is incorrect. It should read "Click at the column level to generate a transformation object between an information catalog source column and a target column."

------------------------------------------------------------------------

24.4 Foreign Keys

Any references in the online help to "foreign keys" should read "warehouse foreign keys."

------------------------------------------------------------------------

24.5 Replication Notebooks

Any references in the online help to the "Define Replication notebook" should read "replication step notebook."

------------------------------------------------------------------------

24.6 Importing a Tag Language

In the Importing a tag language online help, the bulleted list showing common import errors includes the item "Importing a tag language file that was not exported properly". This item is not applicable to the list of common import errors.

------------------------------------------------------------------------

24.7 Links for Adding Data

In the "Add data" topic of the online help, the links to the "Adding source tables to a process" and "Adding target tables to a process" topics are broken. You can find these topics in the help index.

------------------------------------------------------------------------

24.8 Importing Tables

The help topics "Importing source tables and views into a warehouse source" and "Importing target tables into a warehouse target" contain incorrect information regarding the wildcard character.
The sentence:

   For example, XYZ* would return tables and views with schemas that start with these characters.

should read:

   For example, XYZ% would return tables and views with schemas that start with these characters.

------------------------------------------------------------------------

24.9 Correction to RUNSTATS and REORGANIZE TABLE Online Help

The online help for these utilities states that the table that you want to run statistics on, or that is to be reorganized, must be linked as both the source and the target. However, because the step writes to the source, you only need to link from the source to the step.

------------------------------------------------------------------------

24.10 Notification Page (Warehouse Properties Notebook and Schedule Notebook)

On the Notification page of the Warehouse Properties notebook, the statement:

   The Sender entry field is initialized with the string .

should be changed to:

   The Sender entry field is initialized with the string .

On the Notification page of the Schedule notebook, the sender is initialized to what is set in the Warehouse Properties notebook. If nothing is set, it is initialized to the current logon user's e-mail address. If there is no e-mail address associated with the logon user, the sender is set to the logon user ID.

------------------------------------------------------------------------

24.11 Agent Module Field in the Agent Sites Notebook

The Agent Module field in the Agent Sites notebook provides the name of the program that is run when the warehouse agent daemon spawns the warehouse agent. Do not change the name of the field unless IBM directs you to do so.

------------------------------------------------------------------------

DB2 OLAP Starter Kit

The IBM DB2 OLAP Starter Kit 7.2 adds support for Oracle, MS-SQL, Sybase, and Informix relational database management systems (RDBMSs) on certain operating system platforms.
Version 7.2 contains scripts and tools for all supported RDBMSs, including DB2. There are some restrictions; see 25.8, Known Problems and Limitations, for more information.

The service level of DB2 OLAP Starter Kit for DB2 Universal Database Version 7.2 is the equivalent of patch 2 for Hyperion Essbase 6.1 plus patch 2 for Hyperion Integration Server 2.0.

------------------------------------------------------------------------

25.1 OLAP Server Web Site

For the latest installation and usage tips for the DB2 OLAP Starter Kit, check the Library page of the DB2 OLAP Server Web site:
http://www.ibm.com/software/data/db2/db2olap/library.html

------------------------------------------------------------------------

25.2 Supported Operating System Service Levels

The server components of the OLAP Starter Kit for Version 7.2 support the following operating systems and service levels:
* Windows NT 4.0 servers with SP 5 and Windows 2000
* AIX version 4.3.3 or higher
* Solaris Operating System version 2.6, 7, and 8 (Sun OS 5.6, 5.7, or 5.8)

The client components run on Windows 95, Windows 98, Windows NT 4.0 SP5, and Windows 2000.

------------------------------------------------------------------------

25.3 Completing the DB2 OLAP Starter Kit Setup on UNIX

The DB2 OLAP Starter Kit install follows the basic procedures of the DB2 Universal Database install for UNIX. The installation program lays down the product files in a system directory (for AIX: /usr/lpp/db2_07_01; for Solaris: /opt/IBMdb2/V7.1). Then, during the instance creation phase, two DB2 OLAP directories (essbase and is) are created within the instance user's home directory under sqllib. Only one instance of OLAP server can run on a machine at a time.

To complete the setup, the user must manually set the is/bin directory so that it is not a link to the is/bin directory in the system. It should link to a writable directory within the instance's home directory.
To complete the setup for Solaris, log on using the instance ID, change to the sqllib/is directory, then enter the following:

rm bin
mkdir bin
cd bin
ln -s /opt/IBMdb2/V7.1/is/bin/ismesg.mdb ismesg.mdb
ln -s /opt/IBMdb2/V7.1/is/bin/olapicmd olapicmd
ln -s /opt/IBMdb2/V7.1/is/bin/olapisvr olapisvr
ln -s /opt/IBMdb2/V7.1/is/bin/essbase.mdb essbase.mdb
ln -s /opt/IBMdb2/V7.1/is/bin/libolapams.so libolapams.so

------------------------------------------------------------------------

25.4 Configuring ODBC for the OLAP Starter Kit

IBM DB2 OLAP Starter Kit 7.2 requires an ODBC.ini file for operation of Open Database Connectivity (ODBC) connections from OLAP Integration Server to the relational data source and to the OLAP Metadata Catalog.

* On Windows systems, this file is in the Registry under HKEY_LOCAL_MACHINE/SOFTWARE/ODBC. Use ODBC Data Source Administrator to store information about how to connect to a relational data source.

* On UNIX systems, the installation program creates a model odbc.ini file. To store information about how to connect to a relational data source, edit the file using your preferred editor.

The ODBC.ini file is available in ODBC software packages and is included with Microsoft Office software. For more information about applications that install ODBC drivers or the ODBC Administrator, visit the following web site: http://support.microsoft.com/support/kb/articles/Q113/1/08.asp

For Oracle users on AIX machines: To configure ODBC for Oracle, you must update the ODBC.ini file to point to the MERANT 3.6 drivers.

In Version 7.2, the OLAP Starter Kit manages ODBC connections to the relational data source and to the OLAP Metadata Catalog. To accommodate these ODBC connections, the OLAP Starter Kit uses the following ODBC drivers on Windows NT 4.0, Windows 2000, AIX, and Solaris:
* DB2 Universal Database Version 6 Database Client: DB2 Version 6 ODBC drivers on Windows NT 4.0 SP5 or Windows 2000, AIX 4.3.3, and Solaris Operating System 2.6, 7, or 8 (Sun OS 5.6, 5.7, or 5.8).
* DB2 Universal Database 7.1 Database Client: DB2 Version 7 ODBC drivers on Windows NT 4.0 SP5 or Windows 2000, AIX 4.3.3, and Solaris Operating System 2.6, 7, or 8 (Sun OS 5.6, 5.7, or 5.8).
* Oracle 8.04 and 8i SQL*Net 8.0 Database Client: MERANT 3.6 ODBC drivers on Windows NT 4.0 SP5 or Windows 2000, AIX 4.3.3, Solaris Operating System 2.6, 7 or 8 (Sun OS 5.6, 5.7, or 5.8).
* MS SQL Server 6.5.201 (no Database Client required): MS SQL Server 6.5 ODBC drivers on Windows NT 4.0 SP5 or Windows 2000.
* MS SQL Server 7.0 (no Database Client required): MS SQL Server 7.0 ODBC drivers on Windows NT 4.0 SP5 or Windows 2000.

25.4.1 Configuring Data Sources on UNIX systems

On AIX and Solaris, you must manually set environment variables for ODBC and edit the odbc.ini file to configure the relational data source and OLAP Metadata Catalog. Make sure you edit the odbc.ini file if you add a new driver or data source, or if you change the driver or data source.

If you will be using the DB2 OLAP Starter Kit on AIX or Solaris to access Merant ODBC sources and DB2 databases, change the value of the "Driver=" attribute in the DB2 source section of the .odbc.ini file as follows:

AIX: The driver name is /usr/lpp/db2_07_01/lib/db2_36.o

Sample ODBC source entry for AIX:
[SAMPLE]
Driver=/usr/lpp/db2_07_01/lib/db2_36.o
Description=DB2 ODBC Database
Database=SAMPLE

Solaris Operating Environment: The driver name is /opt/IBMdb2/V7.1/lib/libdb2_36.so

Sample ODBC source entry for Solaris:
[SAMPLE]
Driver=/opt/IBMdb2/V7.1/lib/libdb2_36.so
Description=DB2 ODBC Database
Database=SAMPLE

25.4.1.1 Configuring ODBC Environment Variables

On UNIX systems, you must set environment variables to enable access to ODBC core components.
The is.sh and is.csh shell scripts that set the required variables are provided in the Starter Kit home directory. You must run one of these scripts before using ODBC to connect to data sources. You should include these scripts in the login script for the user name you use to run the OLAP Starter Kit.

25.4.1.2 Editing the odbc.ini File

To configure a data source in an odbc.ini file, you must add a name and description for the ODBC data source, and provide the ODBC driver path, file name, and other driver settings in a separate section that you create for the data source name. The installation program installs a sample odbc.ini file in the ISHOME directory. The file contains generic ODBC connection and configuration information for supported ODBC drivers. Use the file as a starting point to map the ODBC drivers that you use to the relational data source and OLAP Metadata Catalog.

If you use a different file than the odbc.ini file, be sure to set the ODBCINI environment variable to the name of the file you use.

25.4.1.3 Adding a data source to an odbc.ini file

1. On the system running the OLAP Starter Kit servers, open the odbc.ini file by using a text editor such as vi.
2. Find the section starting with [ODBC Data Sources] and add a new line with the data source name and description, such as: mydata=data source for analysis. To minimize confusion, the name of the data source should match the name of the database in the RDBMS.
3. Add a new section to the file by creating a new line with the name of the new data source enclosed in brackets, such as: [mydata].
4. On the lines following the data source name, add the full path and file name for the ODBC driver required for this data source and any other required ODBC driver information. Use the examples shown in the following sections as a guideline to map to the data source on your RDBMS. Make sure that the ODBC driver file actually exists in the location you specify for the Driver= setting.
5. When you have finished editing odbc.ini, save the file and exit the text editor.

25.4.1.4 Example of ODBC Settings for DB2

The following example shows how you might edit odbc.ini to connect to a relational data source, db2data, on DB2 Universal Database Version 6.1 on AIX, using an IBM DB2 native ODBC driver. Using a text editor such as vi, open the file named by the ODBCINI environment variable (for example, vi $ODBCINI) and insert the following statements:

[ODBC Data Sources]
db2data=DB2 Source Data on AIX
...
[db2data]
Driver=/home/db2inst1/sqllib/lib/db2.o
Description=DB2 Data Source - AIX, native

25.4.1.5 Example of ODBC Settings for Oracle

Here is an example of how you might edit odbc.ini to connect to a relational data source, oradata, on Oracle Version 8 (on Solaris), using a MERANT Version 3.6 ODBC driver. In this example, LogonID and Password are overridden with the actual values used in the OLAP Starter Kit user name and password.

[ODBC Data Sources]
oradata=Oracle8 Source Data on Solaris
...
[oradata]
Driver=/export/home/users/dkendric/is200/odbclib/ARor815.so
Description=my oracle source

25.4.2 Configuring the OLAP Metadata Catalog on UNIX Systems

Configuring an OLAP Metadata Catalog on AIX and Solaris is similar to configuring a data source. For the OLAP Metadata Catalog database, add a data source name and section to the odbc.ini file, as described in 25.4.1.2, Editing the odbc.ini File. No other changes are required. You must create an OLAP Metadata Catalog database in a supported RDBMS before configuring it as an ODBC data source.

Here is an example of how you might edit odbc.ini to connect to the OLAP Metadata Catalog, TBC_MD, on DB2 Version 6.1 (on Solaris), using a native ODBC driver:

[ODBC Data Sources]
ocd6a5a=db2 v6
...
[ocd6a5a]
Driver=/home/db2inst1/sqllib/lib/db2.o
Description=db2

25.4.3 Configuring Data Sources on Windows Systems

To configure a relational data source on Windows NT or Windows 2000 systems, you must start ODBC Administrator and then create a connection to the data source that you will use for creating OLAP models and metaoutlines. Run the ODBC Administrator utility from the Windows Control Panel. The following example creates a DB2 data source; the dialog boxes for other RDBMSs will differ.

To configure a relational data source with ODBC Administrator, complete the following steps:
1. On the Windows desktop, open the Control Panel window.
2. In the Control Panel window, perform one of the following steps:
   a. On Windows NT, double-click the ODBC icon to open the ODBC Data Source Administrator dialog box.
   b. On Windows 2000, double-click the Administrative Tools icon, and then double-click the Data Sources (ODBC) icon to open the ODBC Data Source Administrator dialog box.
3. In the ODBC Data Source Administrator dialog box, click the System DSN tab.
4. Click Add to open the Create New Data Source dialog box.
5. In the driver list box of the Create New Data Source dialog box of ODBC Administrator, select an appropriate driver, such as IBM DB2 ODBC Driver, and click Finish to open the ODBC IBM DB2 Driver - Add dialog box.
6. In the ODBC IBM DB2 Driver - Add dialog box, in the Database alias drop-down list, select the name of the database for your relational source data (for example, TBC in the sample application).
7. In the Description text box, type an optional description that indicates how you use this driver and click Add. For example, type the following words to describe the My Business database:
   Customers, products, markets
   You might type the following words to describe the sample application database:
   Sample relational data source
   The descriptions help to identify the available data sources for your selection when you connect from OLAP Starter Kit Desktop.
8.
Click OK to return to the ODBC Data Source Administrator dialog box. The data source name you entered and the driver you mapped to it are displayed in the System Data Sources list box on the System DSN tab.

To edit configuration information for a data source:
1. Select the data source name and click Configure to open the ODBC IBM DB2 Driver - Add dialog box.
2. Correct any information you want to change.
3. Click OK twice to exit.

25.4.4 Configuring the OLAP Metadata Catalog on Windows Systems

To configure an OLAP Metadata Catalog on Windows NT or Windows 2000, start ODBC Administrator and then create a connection to the data source that contains the OLAP Metadata Catalog database. The following example creates a DB2 data source; dialog boxes for other RDBMSs will differ.

To create a data source for the OLAP Metadata Catalog, complete the following steps:
1. On the desktop, open the Control Panel window.
2. In the Control Panel window, perform one of the following steps:
   a. On Windows NT, double-click the ODBC icon to open the ODBC Data Source Administrator dialog box.
   b. On Windows 2000, double-click the Administrative Tools icon, and then double-click the Data Sources (ODBC) icon to open the ODBC Data Source Administrator dialog box.
3. In the ODBC Data Source Administrator dialog box, click the System DSN tab.
4. Click Add to open the Create New Data Source dialog box.
5. In the driver list box of the Create New Data Source dialog box of ODBC Administrator, select an appropriate driver, such as IBM DB2 ODBC Driver, and click Finish to open the ODBC IBM DB2 Driver - Add dialog box.
6. In the ODBC IBM DB2 Driver - Add dialog box, in the Database alias drop-down list, select the name of the database for your OLAP Metadata Catalog (for example, TBC_MD in the sample application). The name of the selected database is automatically displayed in the Data Source Name text box.
7.
If you want to change the name of the data source, select the name displayed in the Data Source Name text box, type a new name to indicate how you use this driver, and click Add. For example, you might type the following name to indicate that you are using the driver to connect to the first OLAP Metadata Catalog:
   OLAP Catalog first
   You would type the following name to indicate that you are connecting to the sample application OLAP Metadata Catalog database:
   TBC_MD
8. In the Description text box, enter a description that indicates how you use this driver. For example, you might type the following words to describe the OLAP Metadata Catalog:
   My first models and metaoutlines
   You might type the following words to describe the sample application OLAP Metadata Catalog database:
   Sample models and metaoutlines
   The descriptions help you to identify the catalog that you want to select when you connect to the OLAP Metadata Catalog from the OLAP Starter Kit Desktop.
9. Click OK to return to the ODBC Data Source Administrator dialog box. The data source name you entered and the driver you mapped to it are displayed in the System Data Sources list box on the System DSN tab.

To edit configuration information for a data source:
1. Select the data source name and click Configure to open the ODBC IBM DB2 Driver - Add dialog box.
2. Correct any information you want to change.
3. Click OK twice to exit.

25.4.5 After You Configure a Data Source

After you configure the relational data source and OLAP Metadata Catalog, you can connect to them from the OLAP Starter Kit. You can then create, modify, and save OLAP models and metaoutlines.

The SQL Server ODBC driver may time out during a call to an SQL Server database. Try again when the database is not busy. Increasing the driver time-out period may avoid this problem. For more information, see the ODBC documentation for the driver you are using.
For more information on ODBC connection problems and solutions, see the OLAP Integration Server System Administrator's Guide.

------------------------------------------------------------------------

25.5 Logging in from OLAP Starter Kit Desktop

To use the OLAP Starter Kit Desktop to create OLAP models and metaoutlines, you must connect the client software to two server components: DB2 OLAP Integration Server and DB2 OLAP Server. The login dialog prompts you for the necessary information for the Desktop to connect to these two servers. On the left side of the dialog, enter information about DB2 OLAP Integration Server. On the right side, enter information about DB2 OLAP Server.

To connect to DB2 OLAP Integration Server:

* Server: Enter the host name or IP address of your Integration Server. If you have installed the Integration Server on the same workstation as your desktop, then typical values are "localhost" or "127.0.0.1".

* OLAP Metadata Catalog: When you connect to OLAP Integration Server, you must also specify a Metadata Catalog. OLAP Integration Server stores information about the OLAP models and metaoutlines you create in a relational database known as the Metadata Catalog. This relational database must be registered for ODBC. The catalog database contains a special set of relational tables that OLAP Integration Server recognizes. On the login dialog, you can specify an Integration Server and then expand the pull-down menu for the OLAP Metadata Catalog field to see a list of the ODBC data source names known to the OLAP Integration Server. Choose an ODBC database that contains the metadata catalog tables.

* User Name and Password: OLAP Integration Server will connect to the Metadata Catalog using the user name and password that you specify on this panel. This is a login account that exists on the server (not the client, unless the server and client are running on the same machine). The user name must be the user who created the OLAP Metadata Catalog. Otherwise, OLAP Integration Server will not find the relational tables in the catalog database, because the table schema names are different.

The DB2 OLAP Server information is optional, so the input fields on the right side of the Login dialog may be left blank. However, some operations in the Desktop and the Administration Manager require that you connect to a DB2 OLAP Server. If you leave these fields blank, then the Desktop will display the Login dialog again if the Integration Server needs to connect to DB2 OLAP Server in order to complete an operation that you requested. It is recommended that you always fill in the DB2 OLAP Server fields on the Login dialog.

To connect to DB2 OLAP Server:

* Server: Enter the host name or IP address of your DB2 OLAP Server. If you are running the OLAP Starter Kit, then your OLAP Server and Integration Server are the same. If the Integration Server and OLAP Server are installed on different hosts, then enter the host name or an IP address that is defined on OLAP Integration Server.

* User Name and Password: OLAP Integration Server will connect to DB2 OLAP Server using the user name and password that you specify on this panel. This user name and password must already be defined to the DB2 OLAP Server. OLAP Server manages its own user names and passwords separately from the host operating system.

25.5.1 Starter Kit Login Example

The following example assumes that you created the OLAP Sample, and that you selected db2admin as your administrator user ID and password as your administrator password during OLAP Starter Kit installation.
* For OLAP Integration Server: Server is localhost, OLAP Metadata Catalog is TBC_MD, User Name is db2admin, Password is password
* For DB2 OLAP Server: Server is localhost, User Name is db2admin

------------------------------------------------------------------------

25.6 Manually creating and configuring the sample databases for OLAP Starter Kit

The sample databases are created automatically when you install OLAP Starter Kit. The following instructions explain how to set up the Catalog and Sample databases manually, if necessary.

1. In Windows, open the DB2 Command Window by clicking Start --> Programs --> DB2 for Windows NT --> Command Window.
2. Create the production catalog database:
   a. Type db2 create db OLAP_CAT
   b. Type db2 connect to OLAP_CAT
3. Create tables in the database:
   a. Navigate to \SQLLIB\IS\ocscript\ocdb2.sql
   b. Type db2 -tf ocdb2.sql
4. Create the sample source database:
   a. Type db2 connect reset
   b. Type db2 create db TBC
   c. Type db2 connect to TBC
5. Create tables in the database:
   a. Navigate to \SQLLIB\IS\samples\
   b. Copy tbcdb2.sql to \SQLLIB\samples\db2sampl\tbc
   c. Copy lddb2.sql to \SQLLIB\samples\db2sampl\tbc
   d. Navigate to \SQLLIB\samples\db2sampl\tbc
   e. Type db2 -tf tbcdb2.sql
   f. Type db2 -vf lddb2.sql to load sample source data into the tables.
6. Create the sample catalog database:
   a. Type db2 connect reset
   b. Type db2 create db TBC_MD
   c. Type db2 connect to TBC_MD
7. Create tables in the database:
   a. Navigate to \SQLLIB\IS\samples\tbc_md
   b. Copy ocdb2.sql to \SQLLIB\samples\db2sampl\tbcmd
   c. Copy lcdb2.sql to \SQLLIB\samples\db2sampl\tbcmd
   d. Navigate to \SQLLIB\samples\db2sampl\tbcmd
   e. Type db2 -tf ocdb2.sql
   f. Type db2 -vf lcdb2.sql to load sample metadata into the tables.
8. Configure ODBC for TBC_MD, TBC, and OLAP_CAT:
   a. Open the NT control panel by clicking Start --> Settings --> Control Panel
   b. Select ODBC (or ODBC data sources) from the list.
   c. Select the System DSN tab.
   d. Click Add. The Create New Data Source window opens.
   e. Select IBM DB2 ODBC DRIVER from the list.
   f. Click Finish. The ODBC IBM DB2 Driver - Add window opens.
   g. Type the name of the data source (OLAP_CAT) in the Data source name field.
   h. Type the alias name in the Database alias field, or click the down arrow and select OLAP_CAT from the list.
   i. Click OK.
   j. Repeat these steps for the TBC_MD and the TBC databases.

------------------------------------------------------------------------

25.7 Migrating Applications to OLAP Starter Kit Version 7.2

The installation program does not reinstall the OLAP Starter Kit sample applications, databases, and data files. Your existing applications and databases are not affected in any way. However, it is always a good idea to back up your applications and databases before an installation. Your applications are automatically migrated to Version 7.2 when you open them.

------------------------------------------------------------------------

25.8 Known Problems and Limitations

This section lists known limitations for DB2 OLAP Starter Kit.

Informix RDBMS Compatibility with Merant Drivers for Windows Platforms
   In order for the Merant drivers for Windows platforms to work with the Informix RDBMS, the following two entries must be added to the PATH statement:
   o C:\Informix
   o C:\Informix\bin
   Both entries must be at the beginning of the PATH.

Possible Inconsistency Between Dimensions in OLAP Models and Associated Metaoutlines
   Under certain conditions, you can create a dimension in a metaoutline that has no corresponding dimension in the OLAP model. This can occur in the following scenario:
   1. Create a new OLAP model and save it.
   2. Create a metaoutline based on the model, but do not save the metaoutline.
   3. Return to the OLAP model and delete a dimension on which one of the metaoutline dimensions is based.
   4. Return to the metaoutline, save it, close it, and reopen it. The metaoutline will contain a dimension that does not have a corresponding dimension in the OLAP model.
The OLAP Starter Kit cannot distinguish between an inconsistent dimension created in this manner and a user-defined dimension in a metaoutline. Consequently, the inconsistent dimension will be displayed in the metaoutline, but the metaoutline regards it as a user-defined dimension, since no corresponding dimension exists in the OLAP model.

On Windows 2000 Platforms, the Environment Variable Setting for TMP Causes Member and Data Loads to Fail
   Because of a difference in the default system and user environment variable settings for TMP between Windows 2000 and Windows NT, member and data loads fail when the OLAP Starter Kit is running on Windows 2000 platforms. The resulting error message tells users that the temp file could not be created. You can work around this limitation on Windows 2000 by taking the following steps:
   1. Create a directory named C:\TEMP
   2. Set the environment variable TMP for both the system and the user to TMP=C:\TEMP

Installation of ODBC Does Not Replace Existing Merant Driver
   The existing 3.6 Merant ODBC drivers will not be updated with this installation. If you are upgrading from the OLAP Starter Kit Version 7.1, FixPak 2 or earlier, you should continue using the previously installed ODBC drivers.

Using Merant Informix ODBC Drivers on UNIX Platforms
   To use the Merant Informix ODBC drivers on UNIX platforms, you must do one of the following:
   o Before starting the Starter Kit, set the LANG environment variable to "en_US". For example, for the Korn shell, type:
     export LANG='en_US'
     Set this variable every time you start the OLAP Starter Kit.
   o If your LANG environment variable is already set to a different value, make the following symbolic link after installation:
     ln -s $ISHOME/locale/en_US $ISHOME/locale/$LANG

Mixing service levels of OLAP clients and servers
   IBM recommends that you keep both client and server components of the DB2 OLAP Starter Kit at the same version and FixPak level.
But in some situations, you might be able to mix different service levels of client and server components:

Using clients and servers at different service levels within a version
   IBM does not support, and recommends against, using newer clients with older servers. However, you might be able to use older clients with newer servers, although IBM does not support it. You might experience some problems. For example:
   + Messages from the server might be incorrect. You can work around this problem by upgrading the message.MDB file on the client to match the level on the server.
   + New server features do not work. The client, the server, or both may fail when you attempt to use a new feature.
   + The client might not connect properly with the server.

Using multiple servers with a single client within a version
   If you need to connect a client to several OLAP servers on different machines or operating systems, IBM recommends that you make them all the same version and service level. Your client should be at least at the same level as the lowest-level server. If you experience problems, you might need to use different client machines to match up with the appropriate host, or upgrade all clients and servers to the same service level.

Mixing clients and servers from different versions
   IBM does not support using OLAP Starter Kit clients and servers from Version 7.1 with clients and servers from Version 7.2. When IBM OLAP products are upgraded to a new version level, there are often network updates and data format changes that require the client and server to be at the same version level.

Mixing IBM products (DB2 OLAP Starter Kit) with Hyperion products (Hyperion Essbase and Hyperion Integration Server)
   IBM does not support mixing OLAP clients and servers from IBM with OLAP clients and servers from Hyperion Solutions. There are some differences in features that may cause problems, even though mixing these components might work in some situations.
------------------------------------------------------------------------ 25.9 OLAP Spreadsheet Add-in EQD Files Missing In the DB2 OLAP Starter Kit, the Spreadsheet add-in has a component called the Query Designer (EQD). The online help menu for EQD includes a button called Tutorial that does not display anything. The material that should be displayed in the EQD tutorials is a subset of chapter two of the OLAP Spreadsheet Add-in User's Guide for Excel, and the OLAP Spreadsheet Add-in User's Guide for 1-2-3. All the information in the EQD tutorial is available in the HTML versions of these books in the Information Center, and in the PDF versions. ------------------------------------------------------------------------ Information Catalog Manager Administration Guide ------------------------------------------------------------------------ 26.1 Information Catalog Manager Initialization Utility 26.1.1 With the Initialize Information Catalog Manager (ICM) utility, you can now append an SQL statement to the end of the CREATE TABLE statement using the following command: CREATEIC \DBTYPE dbtype \DGNAME dgname \USERID userid \PASSWORD password \KA1 userid \TABOPT "directory:\tabopt.file" You can specify the TABOPT keyword in the CREATEIC utility from the directory where DB2 is installed. The value following the TABOPT keyword is the tabopt.file file name with the full path. If the directory name contains blanks, enclose the name with quotation marks. The contents of the tabopt.file file must contain information to append to the CREATE TABLE statement. You can use any of the SQL statements below to write to this tabopt.file file. The ICM utility will read this file and then append it to the CREATE TABLE statement. Table 20. 
SQL statements:

IN MYTABLESPACE
    Creates a table with its data in MYTABLESPACE
DATA CAPTURE CHANGES
    Creates a table and logs SQL changes in an extended format
IN ACCOUNTING INDEX IN ACCOUNT_IDX
    Creates a table with its data in ACCOUNTING and its index in ACCOUNT_IDX

The maximum size of the content file is 1000 single-byte characters. This new capability is available only on Windows and UNIX systems. 26.1.2 Licensing issues If you get the following message: FLG0083E: You do not have a valid license for the IBM Information Catalog Manager Initialization utility. Please contact your local software reseller or IBM marketing representative. You must purchase the DB2 Warehouse Manager or the IBM DB2 OLAP Server and install the Information Catalog Manager component, which includes the Information Catalog Initialization utility. 26.1.3 Installation Issues If you installed the DB2 Warehouse Manager or IBM DB2 OLAP Server and then installed another Information Catalog Manager Administrator component (using the DB2 Universal Database CD-ROM) on the same workstation, you might have overwritten the Information Catalog Initialization utility. In that case, from the \sqllib\bin directory, find the files createic.bak and flgnmwcr.bak and rename them to createic.exe and flgnmwcr.exe respectively. If you install additional Information Catalog Manager components from DB2 Universal Database, the components must be on a separate workstation from where you installed the Data Warehouse Manager. For more information, see Chapter 3, Installing Information Catalog Manager components, in the DB2 Warehouse Manager Installation Guide. 
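For illustration, a CREATEIC invocation using the TABOPT keyword might look like the following sketch. The database type, catalog name, user ID, password, file path, and the table space named inside the file are all hypothetical values chosen for this example, not defaults:

```shell
# Hypothetical contents of C:\temp\tabopt.file (text appended to CREATE TABLE):
#   IN MYTABLESPACE
# Run CREATEIC from the directory where DB2 is installed; the value after
# \TABOPT is the full path to the file, quoted in case it contains blanks.
CREATEIC \DBTYPE DB2 \DGNAME ICMSAMP \USERID db2admin \PASSWORD passw0rd \KA1 db2admin \TABOPT "C:\temp\tabopt.file"
```

With that file in place, the ICM utility appends IN MYTABLESPACE to the CREATE TABLE statement it issues.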
------------------------------------------------------------------------ 26.2 Accessing DB2 Version 5 Information Catalogs with the DB2 Version 7 Information Catalog Manager The DB2 Version 7 Information Catalog Manager subcomponents, as configured by the DB2 Version 7 install process, support access to information catalogs stored in DB2 Version 6 and DB2 Version 7 databases. You can modify the configuration of the subcomponents to access information catalogs that are stored in DB2 Version 5 databases. The DB2 Version 7 Information Catalog Manager subcomponents do not support access to data from DB2 Version 2 or any other previous versions. To set up the Information Catalog Administrator, the Information Catalog User, and the Information Catalog Initialization Utility to access information catalogs that are stored in DB2 Version 5 databases: 1. Install DB2 Connect Enterprise Edition Version 6 on a workstation other than where the DB2 Version 7 Information Catalog Manager is installed. DB2 Connect Enterprise Edition is included as part of DB2 Universal Database Enterprise Edition and DB2 Universal Database Enterprise - Extended Edition. If Version 6 of either of these DB2 products is installed, you do not need to install DB2 Connect separately. Restriction: You cannot install multiple versions of DB2 on the same Windows NT or OS/2 workstation. You can install DB2 Connect on another Windows NT workstation or on an OS/2 or UNIX workstation. 2. Configure the Information Catalog Manager and DB2 Connect Version 6 for access to the DB2 Version 5 data. For more information, see the DB2 Connect User's Guide. The following is an overview of the required steps: a. On the DB2 Version 5 system, use the DB2 Command Line Processor to catalog the Version 5 database that the Information Catalog Manager is to access. b. 
On the DB2 Connect system, use the DB2 Command Line Processor to catalog: + The TCP/IP node for the DB2 Version 5 system + The database for the DB2 Version 5 system + The DCS entry for the DB2 Version 5 system c. On the workstation with the Information Catalog Manager, use the DB2 Command Line Processor to catalog: + The TCP/IP node for the DB2 Connect system + The database for the DB2 Connect system For information about cataloging databases, see the DB2 Universal Database Installation and Configuration Supplement. 3. At the workstation with the Information Catalog Manager, bind the DB2 CLI package to each database that is to be accessed through DB2 Connect. The following DB2 commands give an example of binding to v5database, a hypothetical DB2 Version 5 database. Use the DB2 Command Line Processor to issue the following commands. db2cli.lst and db2ajgrt are located in the \sqllib\bnd directory. db2 connect to v5database user userid using password db2 bind db2ajgrt.bnd db2 bind @db2cli.lst blocking all grant public where userid is the user ID for v5database and password is the password for the user ID. An error occurs when db2cli.lst is bound to the DB2 Version 5 database. This error occurs because large objects (LOBs) are not supported in this configuration. This error will not affect the warehouse agent's access to the DB2 Version 5 database. FixPak 14 for DB2 Universal Database Version 5, which became available in June 2000, is required for accessing DB2 Version 5 data through DB2 Connect. Refer to APAR number JR14507 in that FixPak. ------------------------------------------------------------------------ 26.3 Setting up an Information Catalog Step 2 in the first section of Chapter 1, "Setting up an information catalog", says: When you install either the DB2 Warehouse Manager or the DB2 OLAP Server, a default information catalog is created on DB2 Universal Database for Windows NT. The statement is incorrect. You must define a new information catalog. 
See the "Creating the Information Catalog" section for more information. ------------------------------------------------------------------------ 26.4 Exchanging Metadata with Other Products In Chapter 6, "Exchanging metadata with other products", in the section "Identifying OLAP objects to publish", there is a statement in the second paragraph that says: When you publish DB2 OLAP Integration Server metadata, a linked relationship is created between an information catalog "dimensions within a multi-dimensional database" object type and a table object in the OLAP Integration Server. The statement should say: When you publish DB2 OLAP Integration Server metadata, a linked relationship is created between an information catalog "dimensions within a multi-dimensional database object and a table object". This statement also appears in Appendix C, "Metadata mappings", in the section "Metadata mappings between the Information Catalog Manager and OLAP Server". ------------------------------------------------------------------------ 26.5 Exchanging Metadata using the flgnxoln Command In Chapter 6, "Exchanging Metadata", there is a section entitled "Identifying OLAP objects to publish". At the end of this section there is an example of using the flgnxoln command to publish OLAP server metadata to an information catalog. The example incorrectly shows the directory for the db2olap.ctl and db2olap.ff files as x:\Program Files\sqllib\logging. The directory name should be x:\Program Files\sqllib\exchange as described on page 87. ------------------------------------------------------------------------ 26.6 Exchanging Metadata using the MDISDGC Command Chapter 6. Exchanging metadata with other products: "Converting MDIS-conforming metadata into a tag language file", page 97. You cannot issue the MDISDGC command from the MS-DOS command prompt. You must issue the MDISDGC command from a DB2 command window. 
The first sentence of the section, "Converting a tag language file into MDIS-conforming metadata," also says you must issue the DGMDISC command from the MS-DOS command prompt. You must issue the DGMDISC command from a DB2 command window. ------------------------------------------------------------------------ 26.7 Invoking Programs Some examples in the Information Catalog Administration Guide show commands that contain the directory name Program Files. When you invoke a program that contains Program Files as part of its path name, you must enclose the program invocation in double quotation marks. For example, Appendix B, "Predefined Information Catalog Manager object types", contains an example in the section called "Initializing your information catalog with the predefined object types". If you use the example in this section, you will receive an error when you run it from the DOS prompt. The following example is correct: "X:\Program Files\SQLLIB\SAMPLES\SAMPDATA\DGWDEMO" /T userid password dgname ------------------------------------------------------------------------ Information Catalog Manager Programming Guide and Reference ------------------------------------------------------------------------ 27.1 Information Catalog Manager Reason Codes In Appendix D: Information Catalog Manager reason codes, some text might be truncated at the far right column for the following reason codes: 31014, 32727, 32728, 32729, 32730, 32735, 32736, 32737, 33000, 37507, 37511, and 39206. If the text is truncated, please see the HTML version of the book to view the complete column. ------------------------------------------------------------------------ Information Catalog Manager User's Guide In Chapter 2, there is a section called "Registering a server node and remote information catalog." The section lists steps that you can complete from the DB2 Control Center before registering a remote information catalog using the Information Catalog Manager. 
The last paragraph of the section says that after completing a set of steps from the DB2 Control Center (add a system, add an instance, and add a database), you must shut down the Control Center before opening the Information Catalog Manager. That information is incorrect. It is not necessary to shut down the Control Center before opening the Information Catalog Manager. The same correction also applies to the online help task "Registering a server node and remote information catalog", and the online help for the Register Server Node and Information Catalog window. ------------------------------------------------------------------------ Information Catalog Manager: Online Messages ------------------------------------------------------------------------ 29.1 Message FLG0260E The second sentence of the message explanation should say: The error caused a rollback of the information catalog, which failed. The information catalog is not in stable condition, but no changes were made. ------------------------------------------------------------------------ 29.2 Message FLG0051E The second bullet in the message explanation should say: The information catalog contains too many objects or object types. The administrator response should say: Delete some objects or object types from the current information catalog using the import function. ------------------------------------------------------------------------ 29.3 Message FLG0003E The message explanation should say: The information catalog must be registered before you can use it. The information catalog might not have been registered correctly. ------------------------------------------------------------------------ 29.4 Message FLG0372E The first sentence of the message explanation should say: The ATTACHMENT-IND value was ignored for an object because that object is an Attachment object. 
------------------------------------------------------------------------ 29.5 Message FLG0615E The second sentence of the message should say: The Information Catalog Manager has encountered an unexpected database error or cannot find the bind file in the current directory or path. ------------------------------------------------------------------------ Information Catalog Manager: Online Help Information Catalog window: The online help for the Selected menu Open item incorrectly says "Opens the selected object". It should say "Opens the Define Search window". ------------------------------------------------------------------------ 30.1 Information Catalog Manager for the Web When using an information catalog that is located on a DB2 UDB for OS/390 system, case insensitive search is not available. This is true for both a simple search and an advanced search. The online help does not explain that all searches on a DB2 UDB for OS/390 information catalog are case sensitive. Moreover, all grouping category objects are expandable, even when there are no underlying objects. ------------------------------------------------------------------------ DB2 Warehouse Manager Installation Guide ------------------------------------------------------------------------ 31.1 Software requirements for warehouse transformers The Java Developer's Kit (JDK) Version 1.1.8 or later must be installed on the database where you plan to use the warehouse transformers. ------------------------------------------------------------------------ 31.2 Connector for SAP R/3 When mapping columns from fields of an SAP R/3 business object to DB2 tables, some generated column names might be longer than 30 characters. In this case, the generated column name will reflect only the first 30 characters of the SAP field name. If the generated name is not what you want, you can change it using the Properties notebook for the table. 
31.2.1 Installation Prerequisites Set the RFC_INI environment variable. For example, Set RFC_INI=c:\rfcapl.ini. After you set this variable, you must reboot the machine. ------------------------------------------------------------------------ 31.3 Connector for the Web If you have problems running the Connector for the Web, IBM Service might request that you send a trace for the Connector. To enable tracing for the Connector for the Web, set the Warehouse Center agent trace to a level greater than 0. The trace file is named WSApid.log, where pid is the Windows process ID for the agent. The trace file is created in the \sqllib\logging directory. 31.3.1 Installation Prerequisites Install the Java run-time environment (JRE) or Java virtual machine (JVM), version 1.2.2 or later, and make it your default. To make a version of the JRE your default, add the path for the 1.2.2 JRE to your system PATH variable (for example, C:\JDKs\IBM\java12\bin;). After you change your default JRE, you must reboot the machine. If you do not have Java installed, you can install it from the Data Warehouse Connectors installation CD. ------------------------------------------------------------------------ Query Patroller Administration Guide ------------------------------------------------------------------------ 32.1 DB2 Query Patroller Client is a Separate Component The DB2 Query Patroller client is a separate component that is not part of the DB2 Administration client. This means that it is not installed during the installation of the DB2 Administration Client, as indicated in the Query Patroller Installation Guide. Instead, the Query Patroller client must be installed separately. The version and level of the Query Patroller client and the Query Patroller server must be the same. 
------------------------------------------------------------------------ 32.2 Migrating from Version 6 of DB2 Query Patroller Using dqpmigrate The dqpmigrate command must be used if the Version 7 Query Patroller Server was installed over the Version 6 Query Patroller Server. For FixPak 2 or later, you do not have to run dqpmigrate manually as the installation of the FixPak runs this command for you. Without using this command, the existing users defined in Version 6 have no EXECUTE privileges on several new stored procedures added in Version 7. Note: dqpmigrate.bnd is found in the sqllib/bnd directory and dqpmigrate.exe is found in the sqllib/bin directory. To use dqpmigrate manually to grant the EXECUTE privileges, perform the following after installing the FixPak: 1. Bind the /sqllib/bnd/dqpmigrate.bnd package file to the database where the Query Patroller server has been installed by entering the following command: db2 bind dqpmigrate.bnd 2. Execute dqpmigrate by entering the following: dqpmigrate dbalias userid passwd ------------------------------------------------------------------------ 32.3 Enabling Query Management In the "Getting Started" chapter under "Enabling Query Management", the text should read: You must be the owner of the database, or you must have SYSADM, SYSCTRL, or SYSMAINT authority to set database configuration parameters. ------------------------------------------------------------------------ 32.4 Location of Table Space for Control Tables In Chapter 1, System Overview, under DB2 Query Patroller Control Tables, the following text is to be added at the end of the section's first paragraph: The table space for the DB2 Query Patroller control tables must reside in a single-node nodegroup, or DB2 Query Patroller will not function properly. 
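Putting the two steps in 32.2 together, a manual migration session might look like the following sketch; the database alias dqpdb, the user ID, and the password are placeholders for your own values:

```shell
# Connect to the database where the Query Patroller server is installed
db2 connect to dqpdb user userid using passwd
# Bind the migration package (dqpmigrate.bnd is in the sqllib/bnd directory)
db2 bind dqpmigrate.bnd
db2 connect reset
# Grant the Version 6 users EXECUTE privileges on the new stored procedures
dqpmigrate dqpdb userid passwd
```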
------------------------------------------------------------------------ 32.5 New Parameters for dqpstart Command In Chapter 2, Getting Started, under Starting and Stopping DB2 Query Patroller, the following text is to be added following the last paragraph: New Parameters for the dqpstart command: RESTART parameter: Allows the user to replace the host name and/or the node type of the specified node in the dqpnodes.cfg file. DB2 Query Patroller will be started on this node. Note: Before running the DQPSTART command with the RESTART parameter, ensure the following: 1. DB2 Query Patroller is already stopped on the host that is going to be replaced. 2. DB2 Query Patroller is not already running on the new host. The syntax is as follows: dqpstart nodenum node_num restart hostname server | agent | none ADDNODE parameter: Allows the user to add a new node to the dqpnodes.cfg file. DB2 Query Patroller will be started on this node after the new node entry is added to the dqpnodes.cfg file. The syntax is as follows: dqpstart nodenum node_num addnode hostname server | agent | none DROPNODE parameter: Allows the user to drop a node from the dqpnodes.cfg file. DB2 Query Patroller will be stopped on this node before the node entry is dropped from the dqpnodes.cfg file. The syntax is as follows: dqpstop nodenum node_num dropnode ------------------------------------------------------------------------ 32.6 New Parameter for iwm_cmd Command A new -v parameter has been added to the iwm_cmd command to allow the user to recover the status of the jobs that were running on the node specified. Only jobs on an inactive node are allowed to be recovered. This command should be issued when there is a node failure and there are some jobs running on that node or being cancelled at the time. Jobs that were in "Running" state will be resubmitted and set back to "Queued" state. Jobs that were in "Cancelling" state will be set to "Cancelled" state. 
The partial syntax is as follows:

   iwm_cmd [-u user_id [-p password]] -v node_id_to_recover

node_id_to_recover
   Specifies the node on which the jobs are to be recovered.

------------------------------------------------------------------------ 32.7 New Registry Variable: DQP_RECOVERY_INTERVAL There is a new registry variable called DQP_RECOVERY_INTERVAL which is used to set the interval of time, in minutes, at which the iwm_scheduler searches for recovery files. The default is 60 minutes. ------------------------------------------------------------------------ 32.8 Starting Query Administrator In the "Using QueryAdministrator to Administer DB2 Query Patroller" chapter, instructions are provided for starting QueryAdministrator from the Start menu on Windows. The first step provides the following text: If you are using Windows, you can select DB2 Query Patroller --> QueryAdministrator from the IBM DB2 program group. The text should read: DB2 Query Patroller --> QueryAdmin. ------------------------------------------------------------------------ 32.9 User Administration In the "User Administration" section of the "Using QueryAdministrator to Administer DB2 Query Patroller" chapter, the definition for the Maximum Elapsed Time parameter indicates that if the value is set to 0 or -1, the query will always run to completion. This parameter cannot be set to a negative value. The text should indicate that if the value is set to 0, the query will always run to completion. The Max Queries parameter specifies the maximum number of jobs that the DB2 Query Patroller will run simultaneously. Max Queries must be an integer within the range of 0 to 32767. 
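For illustration, the dqpstart parameters described in 32.5 might be used as follows; the node numbers and host names are hypothetical values for this sketch:

```shell
# RESTART: replace the host name of node 2 in dqpnodes.cfg and start
# DB2 Query Patroller on that node as a server node
dqpstart nodenum 2 restart hosta server
# ADDNODE: add node 3 on host hostb as an agent node, then start it
dqpstart nodenum 3 addnode hostb agent
# DROPNODE: stop node 3, then drop its entry from dqpnodes.cfg
dqpstop nodenum 3 dropnode
```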
------------------------------------------------------------------------ 32.10 Creating a Job Queue In the "Job Queue Administration" section of the "Using QueryAdministrator to Administer DB2 Query Patroller" chapter, the screen capture in the steps for "Creating a Job Queue" should be displayed after the second step. The Information about new Job Queue window opens once you click New on the Job Queue Administration page of the QueryAdministrator tool. References to the Job Queues page or the Job Queues tab should read Job Queue Administration page and Job Queue Administration tab, respectively. ------------------------------------------------------------------------ 32.11 Using the Command Line Interface For a user with User authority on the DB2 Query Patroller system to submit a query and have a result table created, the user may require CREATETAB authority on the database. The user does not require CREATETAB authority on the database if the DQP_RES_TBLSPC profile variable is left unset, or if the DQP_RES_TBLSPC profile variable is set to the name of the default table space. The creation of the result tables will succeed in this case because users have the authority to create tables in the default table space. ------------------------------------------------------------------------ 32.12 Query Enabler Notes * When using third-party query tools that use a keyset cursor, queries will not be intercepted. In order for Query Enabler to intercept these queries, you must modify the db2cli.ini file to include: [common] DisableKeySetCursor=1 * For AIX clients, please ensure that the environment variable LIBPATH is not set. Library libXext.a, shipped with the JDK, is not compatible with the library in the /usr/lib/X11 subdirectory. This will cause problems with the Query Enabler GUI. 
------------------------------------------------------------------------ 32.13 DB2 Query Patroller Tracker may Return a Blank Column Page FixPak 3 includes a fix for the DB2 Query Patroller Tracker. The Tracker will now correctly report queries which hit no columns. An example of such a query is "SELECT COUNT(*) FROM ...". Since this kind of query does not hit any column in the table, the Tracker will present a blank page for the column page. This blank column page is not a defect. ------------------------------------------------------------------------ 32.14 Query Patroller and Replication Tools Query Patroller Version 7 will intercept the queries of the replication tools (asnapply, asnccp, djra and analyze) and cause these tools to malfunction. A workaround is to disable dynamic query management when running these tools. ------------------------------------------------------------------------ 32.15 Appendix B. Troubleshooting DB2 Query Patroller Clients In Appendix B, Troubleshooting DB2 Query Patroller Clients, section: Common Query Enabler Problems, problem #2, the text of the first bullet is replaced with: Ensure that the path setting includes jre. 
------------------------------------------------------------------------ Application Development * Administrative API Reference o 33.1 db2ArchiveLog (new API) + db2ArchiveLog o 33.2 db2ConvMonStream o 33.3 db2DatabasePing (new API) + db2DatabasePing - Ping Database o 33.4 db2HistData o 33.5 db2HistoryOpenScan o 33.6 db2XaGetInfo (new API) + db2XaGetInfo - Get Information for Resource Manager o 33.7 db2XaListIndTrans (new API that supercedes sqlxphqr) + db2XaListIndTrans - List Indoubt Transactions o 33.8 db2GetSnapshot - Get Snapshot o 33.9 Forget Log Record o 33.10 sqlaintp - Get Error Message o 33.11 sqlbctcq - Close Tablespace Container Query o 33.12 sqlubkp - Backup Database o 33.13 sqlureot - Reorganize Table o 33.14 sqlurestore - Restore Database o 33.15 Documentation Error Regarding AIX Extended Shared Memory Support (EXTSHM) o 33.16 SQLFUPD + 33.16.1 locklist o 33.17 SQLEDBDESC o 33.18 SQLFUPD Documentation Error * Application Building Guide o 34.1 Chapter 1. Introduction + 34.1.1 Supported Software + 34.1.2 Sample Programs o 34.2 Chapter 3. General Information for Building DB2 Applications + 34.2.1 Build Files, Makefiles, and Error-checking Utilities o 34.3 Chapter 4. Building Java Applets and Applications + 34.3.1 Setting the Environment + 34.3.1.1 JDK Level on OS/2 + 34.3.1.2 Java2 on HP-UX o 34.4 Chapter 5. Building SQL Procedures + 34.4.1 Setting the SQL Procedures Environment + 34.4.2 Setting the Compiler Environment Variables + 34.4.3 Customizing the Compilation Command + 34.4.4 Retaining Intermediate Files + 34.4.5 Backup and Restore + 34.4.6 Creating SQL Procedures + 34.4.7 Calling Stored Procedures + 34.4.8 Distributing Compiled SQL Procedures o 34.5 Chapter 7. Building HP-UX Applications. + 34.5.1 HP-UX C + 34.5.2 HP-UX C++ o 34.6 Chapter 9. Building OS/2 Applications + 34.6.1 VisualAge C++ for OS/2 Version 4.0 o 34.7 Chapter 10. Building PTX Applications + 34.7.1 ptx/C++ o 34.8 Chapter 12. 
Building Solaris Applications + 34.8.1 SPARCompiler C++ o 34.9 Chapter 13. Building Applications for Windows 32-bit Operating Systems + 34.9.1 VisualAge C++ Version 4.0 * Application Development Guide o 35.1 Chapter 2. Coding a DB2 Application + 35.1.1 Activating the IBM DB2 Universal Database Project and Tool Add-ins for Microsoft Visual C++ o 35.2 Chapter 6. Common DB2 Application Techniques + 35.2.1 Generating Sequential Values + 35.2.1.1 Controlling Sequence Behavior + 35.2.1.2 Improving Performance with Sequence Objects + 35.2.1.3 Comparing Sequence Objects and Identity Columns o 35.3 Chapter 7. Stored Procedures + 35.3.1 DECIMAL Type Fails in Linux Java Routines + 35.3.2 Using Cursors in Recursive Stored Procedures + 35.3.3 Writing OLE Automation Stored Procedures o 35.4 Chapter 12. Working with Complex Objects: User-Defined Structured Types + 35.4.1 Inserting Structured Type Attributes Into Columns o 35.5 Chapter 13. Using Large Objects (LOBs) + 35.5.1 Large object (LOBs) support in federated database systems + 35.5.1.1 How DB2 retrieves LOBs + 35.5.1.2 How applications can use LOB locators + 35.5.1.3 Restrictions on LOBs + 35.5.1.4 Mappings between LOB and non-LOB data types + 35.5.2 Tuning the system o 35.6 Part 5. DB2 Programming Considerations + 35.6.1 IBM DB2 OLE DB Provider o 35.7 Chapter 20. Programming in C and C++ + 35.7.1 C/C++ Types for Stored Procedures, Functions, and Methods o 35.8 Chapter 21. Programming in Java + 35.8.1 Java Method Signature in PARAMETER STYLE JAVA Procedures and Functions + 35.8.2 Connecting to the JDBC Applet Server o 35.9 Appendix B. Sample Programs * CLI Guide and Reference o 36.1 Binding Database Utilities Using the Run-Time Client o 36.2 Using Static SQL in CLI Applications o 36.3 Limitations of JDBC/ODBC/CLI Static Profiling o 36.4 ADT Transforms o 36.5 Chapter 3. 
Using Advanced Features + 36.5.1 Writing Multi-Threaded Applications + 36.5.2 Scrollable Cursors + 36.5.2.1 Server-side Scrollable Cursor Support for OS/390 + 36.5.3 Using Compound SQL + 36.5.4 Using Stored Procedures + 36.5.4.1 Writing a Stored Procedure in CLI + 36.5.4.2 CLI Stored Procedures and Autobinding o 36.6 Chapter 4. Configuring CLI/ODBC and Running Sample Applications + 36.6.1 Configuration Keywords o 36.7 Chapter 5. DB2 CLI Functions + 36.7.1 SQLBindFileToParam - Bind LOB File Reference to LOB Parameter + 36.7.2 SQLNextResult - Associate Next Result Set with Another Statement Handle + 36.7.2.1 Purpose + 36.7.2.2 Syntax + 36.7.2.3 Function Arguments + 36.7.2.4 Usage + 36.7.2.5 Return Codes + 36.7.2.6 Diagnostics + 36.7.2.7 Restrictions + 36.7.2.8 References o 36.8 Appendix D. Extended Scalar Functions + 36.8.1 Date and Time Functions o 36.9 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility * Message Reference o 37.1 Getting Message and SQLSTATE Help o 37.2 SQLCODE Remapping Change in DB2 Connect o 37.3 New and Changed Messages + 37.3.1 Call Level Interface (CLI) Messages + 37.3.2 DB2 Messages + 37.3.3 DBI Messages + 37.3.4 Data Warehouse Center (DWC) Messages + 37.3.5 SQL Messages o 37.4 Corrected SQLSTATES * SQL Reference o 38.1 SQL Reference is Provided in One PDF File o 38.2 Chapter 3. Language Elements + 38.2.1 Naming Conventions and Implicit Object Name Qualifications + 38.2.2 DATALINK Assignments + 38.2.3 Expressions + 38.2.3.1 Syntax Diagram + 38.2.3.2 OLAP Functions + 38.2.3.3 Sequence Reference o 38.3 Chapter 4. 
Functions + 38.3.1 Enabling the New Functions and Procedures + 38.3.2 Scalar Functions + 38.3.2.1 ABS or ABSVAL + 38.3.2.2 DECRYPT_BIN and DECRYPT_CHAR + 38.3.2.3 ENCRYPT + 38.3.2.4 GETHINT + 38.3.2.5 IDENTITY_VAL_LOCAL + 38.3.2.6 LCASE and UCASE (Unicode) + 38.3.2.7 MQPUBLISH + 38.3.2.8 MQREAD + 38.3.2.9 MQRECEIVE + 38.3.2.10 MQSEND + 38.3.2.11 MQSUBSCRIBE + 38.3.2.12 MQUNSUBSCRIBE + 38.3.2.13 MULTIPLY_ALT + 38.3.2.14 REC2XML + 38.3.2.15 ROUND + 38.3.2.16 WEEK_ISO + 38.3.3 Table Functions + 38.3.3.1 MQREADALL + 38.3.3.2 MQRECEIVEALL + 38.3.4 Procedures + 38.3.4.1 GET_ROUTINE_SAR + 38.3.4.2 PUT_ROUTINE_SAR o 38.4 Chapter 5. Queries + 38.4.1 select-statement/syntax diagram + 38.4.2 select-statement/fetch-first-clause o 38.5 Chapter 6. SQL Statements + 38.5.1 Update of the Partitioning Key Now Supported + 38.5.1.1 Statement: ALTER TABLE + 38.5.1.2 Statement: CREATE TABLE + 38.5.1.3 Statement: DECLARE GLOBAL TEMPORARY TABLE PARTITIONING KEY (column-name,...) + 38.5.1.4 Statement: UPDATE + 38.5.2 Larger Index Keys for Unicode Databases + 38.5.2.1 ALTER TABLE + 38.5.2.2 CREATE INDEX + 38.5.2.3 CREATE TABLE + 38.5.3 ALTER SEQUENCE + ALTER SEQUENCE + 38.5.4 ALTER TABLE + 38.5.5 Compound SQL (Embedded) + 38.5.6 Compound Statement (Dynamic) + Compound Statement (Dynamic) + 38.5.7 CREATE FUNCTION (Source or Template) + 38.5.8 CREATE FUNCTION (SQL Scalar, Table or Row) + 38.5.9 CREATE METHOD + CREATE METHOD + 38.5.10 CREATE SEQUENCE + CREATE SEQUENCE + 38.5.11 CREATE TRIGGER + CREATE TRIGGER + 38.5.12 CREATE WRAPPER + 38.5.13 DECLARE CURSOR + 38.5.14 DELETE + 38.5.15 DROP + 38.5.16 GRANT (Sequence Privileges) + GRANT (Sequence Privileges) + 38.5.17 INSERT + 38.5.18 SELECT INTO + 38.5.19 SET ENCRYPTION PASSWORD + SET ENCRYPTION PASSWORD + 38.5.20 SET transition-variable + SET Variable + 38.5.21 UPDATE o 38.6 Chapter 7. SQL Procedures now called Chapter 7. 
SQL Control Statements + 38.6.1 SQL Procedure Statement + SQL Procedure Statement + 38.6.2 FOR + FOR + 38.6.3 Compound Statement changes to Compound Statement (Procedure) + 38.6.4 RETURN + RETURN + 38.6.5 SIGNAL + SIGNAL o 38.7 Appendix A. SQL Limits o 38.8 Appendix D. Catalog Views + 38.8.1 SYSCAT.SEQUENCES * DB2 Stored Procedure Builder o 39.1 Java 1.2 Support for the DB2 Stored Procedure Builder o 39.2 Remote Debugging of DB2 Stored Procedures o 39.3 Building SQL Procedures on Windows, OS/2 or UNIX Platforms o 39.4 Using the DB2 Stored Procedure Builder on the Solaris Platform o 39.5 Known Problems and Limitations o 39.6 Using DB2 Stored Procedure Builder with Traditional Chinese Locale o 39.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored Procedure Builder o 39.8 Building SQL Stored Procedures on OS/390 o 39.9 Debugging SQL Stored Procedures o 39.10 Exporting Java Stored Procedures o 39.11 Inserting Stored Procedures on OS/390 o 39.12 Setting Build Options for SQL Stored Procedures on a Workstation Server o 39.13 Automatically Refreshing the WLM Address Space for Stored Procedures Built on OS/390 o 39.14 Developing Java stored procedures on OS/390 o 39.15 Building a DB2 table user defined function (UDF) for MQ Series and OLE DB * Unicode Updates o 40.1 Introduction + 40.1.1 DB2 Unicode Databases and Applications + 40.1.2 Documentation Updates o 40.2 SQL Reference + 40.2.1 Chapter 3 Language Elements + 40.2.1.1 Promotion of Data Types + 40.2.1.2 Casting Between Data Types + 40.2.1.3 Assignments and Comparisons + 40.2.1.4 Rules for Result Data Types + 40.2.1.5 Rules for String Conversions + 40.2.1.6 Expressions + 40.2.1.7 Predicates + 40.2.2 Chapter 4 Functions + 40.2.2.1 Scalar Functions o 40.3 CLI Guide and Reference + 40.3.1 Chapter 3. Using Advanced Features + 40.3.1.1 Writing a DB2 CLI Unicode Application + 40.3.2 Appendix C. 
DB2 CLI and ODBC + 40.3.2.1 ODBC Unicode Applications o 40.4 Data Movement Utilities Guide and Reference + 40.4.1 Appendix C. Export/Import/Load Utility File Formats ------------------------------------------------------------------------ Administrative API Reference ------------------------------------------------------------------------ 33.1 db2ArchiveLog (new API) db2ArchiveLog Closes and truncates the active log file for a recoverable database. If user exit is enabled, issues an archive request. Authorization One of the following: * sysadm * sysctrl * sysmaint * dbadm Required Connection This API automatically establishes a connection to the specified database. If a connection to the specified database already exists, the API will return an error. API Include File db2ApiDf.h C API Syntax /* File: db2ApiDf.h */ /* API: Archive Active Log */ SQL_API_RC SQL_API_FN db2ArchiveLog ( db2Uint32 version, void *pDB2ArchiveLogStruct, struct sqlca * pSqlca); typedef struct { char *piDatabaseAlias; char *piUserName; char *piPassword; db2Uint16 iAllNodeFlag; db2Uint16 iNumNodes; SQL_PDB_NODE_TYPE *piNodeList; db2Uint32 iOptions; } db2ArchiveLogStruct; Generic API Syntax /* File: db2ApiDf.h */ /* API: Archive Active Log */ SQL_API_RC SQL_API_FN db2gArchiveLog ( db2Uint32 version, void *pDB2ArchiveLogStruct, struct sqlca * pSqlca); typedef struct { db2Uint32 iAliasLen; db2Uint32 iUserNameLen; db2Uint32 iPasswordLen; char *piDatabaseAlias; char *piUserName; char *piPassword; db2Uint16 iAllNodeFlag; db2Uint16 iNumNodes; SQL_PDB_NODE_TYPE *piNodeList; db2Uint32 iOptions; } db2gArchiveLogStruct; API Parameters version Input. Specifies the version and release level of the variable passed in as the second parameter, pDB2ArchiveLogStruct. pDB2ArchiveLogStruct Input. A pointer to the db2ArchiveLogStruct structure. pSqlca Output. A pointer to the sqlca structure. iAliasLen Input. A 4-byte unsigned integer representing the length in bytes of the database alias.
iUserNameLen Input. A 4-byte unsigned integer representing the length in bytes of the user name. Set to zero if no user name is used. iPasswordLen Input. A 4-byte unsigned integer representing the length in bytes of the password. Set to zero if no password is used. piDatabaseAlias Input. A string containing the database alias (as cataloged in the system database directory) of the database for which the active log is to be archived. piUserName Input. A string containing the user name to be used when attempting a connection. piPassword Input. A string containing the password to be used when attempting a connection. iAllNodeFlag MPP only. Input. Flag indicating whether the operation should apply to all nodes listed in the db2nodes.cfg file. Valid values are: DB2ARCHIVELOG_NODE_LIST Apply to nodes in a node list that is passed in piNodeList. DB2ARCHIVELOG_ALL_NODES Apply to all nodes. piNodeList should be NULL. This is the default value. DB2ARCHIVELOG_ALL_EXCEPT Apply to all nodes except those in the node list passed in piNodeList. iNumNodes MPP only. Input. Specifies the number of nodes in the piNodeList array. piNodeList MPP only. Input. A pointer to an array of node numbers against which to apply the archive log operation. iOptions Input. Reserved for future use. ------------------------------------------------------------------------ 33.2 db2ConvMonStream In the Usage Notes, the structure for the snapshot variable datastream type SQLM_ELM_SUBSECTION should be sqlm_subsection. ------------------------------------------------------------------------ 33.3 db2DatabasePing (new API) db2DatabasePing - Ping Database Tests the network response time of the underlying connectivity between a client and a database server. This API can be used by an application when a host database server is accessed via DB2 Connect either directly or through a gateway. Authorization None Required Connection Database API Include File db2ApiDf.h C API Syntax /* File: db2ApiDf.h */ /* API: Ping Database */ /* ... 
*/ SQL_API_RC SQL_API_FN db2DatabasePing ( db2Uint32 versionNumber, void *pParmStruct, struct sqlca *pSqlca); /* ... */ typedef SQL_STRUCTURE db2DatabasePingStruct { char iDbAlias[SQL_ALIAS_SZ + 1]; db2Uint16 iNumIterations; db2Uint32 *poElapsedTime; } db2DatabasePingStruct; Generic API Syntax /* File: db2ApiDf.h */ /* API: Ping Database */ /* ... */ SQL_API_RC SQL_API_FN db2gDatabasePing ( db2Uint32 versionNumber, void *pParmStruct, struct sqlca *pSqlca); /* ... */ typedef SQL_STRUCTURE db2gDatabasePingStruct { db2Uint16 iDbAliasLength; char iDbAlias[SQL_ALIAS_SZ]; db2Uint16 iNumIterations; db2Uint32 *poElapsedTime; } db2gDatabasePingStruct; API Parameters versionNumber Input. Version and release of the DB2 Universal Database or DB2 Connect product that the application is using. Note: Constant db2Version710 or higher should be used for DB2 Version 7.1 or higher. iDbAliasLength Input. Length of the database alias name. Note: This parameter is not currently used. It is reserved for future use. iDbAlias Input. Database alias name. Note: This parameter is not currently used. It is reserved for future use. iNumIterations Input. Number of test request iterations. The value must be between 1 and 32767 inclusive. poElapsedTime Output. A pointer to an array of 32-bit integers where the number of elements is equal to iNumIterations. Each element in the array will contain the elapsed time in microseconds for one test request iteration. Note: The application is responsible for allocating the memory for this array prior to calling this API. pSqlca Output. A pointer to the sqlca structure. For more information about this structure, see the Administrative API Reference. Usage Notes A database connection must exist before invoking this API, otherwise an error will result. This function can also be invoked using the PING command. For a description of this command, see the Command Reference.
------------------------------------------------------------------------ 33.4 db2HistData The following entries should be added to Table 11. Fields in the db2HistData Structure:

Field Name   Data Type   Description
oOperation   char        See Table 12.
oOptype      char        See Table 13.

The following table will be added following Table 11.

Table 12. Valid event values for oOperation in the db2HistData Structure

Value   Description         C Definition                    COBOL/FORTRAN Definition
A       add tablespace      DB2HISTORY_OP_ADD_TABLESPACE    DB2HIST_OP_ADD_TABLESPACE
B       backup              DB2HISTORY_OP_BACKUP            DB2HIST_OP_BACKUP
C       load-copy           DB2HISTORY_OP_LOAD_COPY         DB2HIST_OP_LOAD_COPY
D       dropped table       DB2HISTORY_OP_DROPPED_TABLE     DB2HIST_OP_DROPPED_TABLE
F       roll forward        DB2HISTORY_OP_ROLLFWD           DB2HIST_OP_ROLLFWD
G       reorganize table    DB2HISTORY_OP_REORG             DB2HIST_OP_REORG
L       load                DB2HISTORY_OP_LOAD              DB2HIST_OP_LOAD
N       rename tablespace   DB2HISTORY_OP_REN_TABLESPACE    DB2HIST_OP_REN_TABLESPACE
O       drop tablespace     DB2HISTORY_OP_DROP_TABLESPACE   DB2HIST_OP_DROP_TABLESPACE
Q       quiesce             DB2HISTORY_OP_QUIESCE           DB2HIST_OP_QUIESCE
R       restore             DB2HISTORY_OP_RESTORE           DB2HIST_OP_RESTORE
S       run statistics      DB2HISTORY_OP_RUNSTATS          DB2HIST_OP_RUNSTATS
T       alter tablespace    DB2HISTORY_OP_ALT_TABLESPACE    DB2HIST_OP_ALT_TBS
U       unload              DB2HISTORY_OP_UNLOAD            DB2HIST_OP_UNLOAD

The following table will also be added.

Table 13. Valid oOptype values in the db2HistData Structure

oOperation   oOptype   Description           C/COBOL/FORTRAN Definition
B            F         Offline               DB2HISTORY_OPTYPE_OFFLINE
             N         Online                DB2HISTORY_OPTYPE_ONLINE
             I         Incremental offline   DB2HISTORY_OPTYPE_INCR_OFFLINE
             O         Incremental online    DB2HISTORY_OPTYPE_INCR_ONLINE
             D         Delta offline         DB2HISTORY_OPTYPE_DELTA_OFFLINE
             E         Delta online          DB2HISTORY_OPTYPE_DELTA_ONLINE
F            E         End of log            DB2HISTORY_OPTYPE_EOL
             P         Point in time         DB2HISTORY_OPTYPE_PIT
L            I         Insert                DB2HISTORY_OPTYPE_INSERT
             R         Replace               DB2HISTORY_OPTYPE_REPLACE
Q            S         Quiesce share         DB2HISTORY_OPTYPE_SHARE
             U         Quiesce update        DB2HISTORY_OPTYPE_UPDATE
             X         Quiesce exclusive     DB2HISTORY_OPTYPE_EXCL
             Z         Quiesce reset         DB2HISTORY_OPTYPE_RESET
R            F         Offline               DB2HISTORY_OPTYPE_OFFLINE
             N         Online                DB2HISTORY_OPTYPE_ONLINE
             I         Incremental offline   DB2HISTORY_OPTYPE_INCR_OFFLINE
             O         Incremental online    DB2HISTORY_OPTYPE_INCR_ONLINE
T            C         Add containers        DB2HISTORY_OPTYPE_ADD_CONT
             R         Rebalance             DB2HISTORY_OPTYPE_REB

------------------------------------------------------------------------ 33.5 db2HistoryOpenScan The following value will be added to the iCallerAction parameter. DB2HISTORY_LIST_CRT_TABLESPACE Select only the CREATE TABLESPACE and DROP TABLESPACE records that pass the other filters. ------------------------------------------------------------------------ 33.6 db2XaGetInfo (new API) db2XaGetInfo - Get Information for Resource Manager Extracts information for a particular resource manager once an xa_open call has been made. Authorization None Required Connection Database API Include File sqlxa.h C API Syntax /* File: sqlxa.h */ /* API: Get Information for Resource Manager */ /* ... */ SQL_API_RC SQL_API_FN db2XaGetInfo ( db2Uint32 versionNumber, void * pParmStruct, struct sqlca * pSqlca); typedef SQL_STRUCTURE db2XaGetInfoStruct { db2int32 iRmid; struct sqlca oLastSqlca; } db2XaGetInfoStruct; API Parameters versionNumber Input. Specifies the version and release level of the structure passed in as the second parameter, pParmStruct. pParmStruct Input.
A pointer to the db2XaGetInfoStruct structure. pSqlca Output. A pointer to the sqlca structure. For more information about this structure, see the Administrative API Reference. iRmid Input. Specifies the resource manager for which information is required. oLastSqlca Output. Contains the sqlca for the last XA API call. Note: Only the sqlca that resulted from the last failing XA API can be retrieved. ------------------------------------------------------------------------ 33.7 db2XaListIndTrans (new API that supersedes sqlxphqr) db2XaListIndTrans - List Indoubt Transactions Provides a list of all indoubt transactions for the currently connected database. Scope This API affects only the node on which it is issued. Authorization One of the following: * sysadm * dbadm Required Connection Database API Include File db2ApiDf.h C API Syntax /* File: db2ApiDf.h */ /* API: List Indoubt Transactions */ /* ... */ SQL_API_RC SQL_API_FN db2XaListIndTrans ( db2Uint32 versionNumber, void * pParmStruct, struct sqlca * pSqlca); typedef SQL_STRUCTURE db2XaListIndTransStruct { db2XaRecoverStruct * piIndoubtData; db2Uint32 iIndoubtDataLen; db2Uint32 oNumIndoubtsReturned; db2Uint32 oNumIndoubtsTotal; db2Uint32 oReqBufferLen; } db2XaListIndTransStruct; typedef SQL_STRUCTURE db2XaRecoverStruct { sqluint32 timestamp; SQLXA_XID xid; char dbalias[SQLXA_DBNAME_SZ]; char applid[SQLXA_APPLID_SZ]; char sequence_no[SQLXA_SEQ_SZ]; char auth_id[SQL_USERID_SZ]; char log_full; char connected; char indoubt_status; char originator; char reserved[8]; } db2XaRecoverStruct; API Parameters versionNumber Input. Specifies the version and release level of the structure passed in as the second parameter, pParmStruct. pParmStruct Input. A pointer to the db2XaListIndTransStruct structure. pSqlca Output. A pointer to the sqlca structure. For more information about this structure, see the Administrative API Reference. piIndoubtData Input. A pointer to the application supplied buffer where indoubt data will be returned.
The indoubt data is in db2XaRecoverStruct format. The application can traverse the list of indoubt transactions by using the size of the db2XaRecoverStruct structure, starting at the address provided by this parameter. If the value is NULL, DB2 will calculate the size of the buffer required and return this value in oReqBufferLen. oNumIndoubtsTotal will contain the total number of indoubt transactions. The application may allocate the required buffer size and issue the API again. oNumIndoubtsReturned Output. The number of indoubt transaction records returned in the buffer specified by piIndoubtData. oNumIndoubtsTotal Output. The total number of indoubt transaction records available at the time of API invocation. If the piIndoubtData buffer is too small to contain all the records, oNumIndoubtsTotal will be greater than oNumIndoubtsReturned. The application may reissue the API in order to obtain all records. Note: This number may change between API invocations as a result of automatic or heuristic indoubt transaction resynchronisation, or as a result of other transactions entering the indoubt state. oReqBufferLen Output. Required buffer length to hold all indoubt transaction records at the time of API invocation. The application can use this value to determine the required buffer size by calling the API with piIndoubtData set to NULL. This value can then be used to allocate the required buffer, and the API can be issued with piIndoubtData set to the address of the allocated buffer. Note: The required buffer size may change between API invocations as a result of automatic or heuristic indoubt transaction resynchronisation, or as a result of other transactions entering the indoubt state. The application may allocate a larger buffer to account for this. timestamp Output. Specifies the time when the transaction entered the indoubt state. xid Output. Specifies the XA identifier assigned by the transaction manager to uniquely identify a global transaction.
dbalias Output. Specifies the alias of the database where the indoubt transaction is found. applid Output. Specifies the application identifier assigned by the database manager for this transaction. sequence_no Output. Specifies the sequence number assigned by the database manager as an extension to the applid. auth_id Output. Specifies the authorization ID of the user who ran the transaction. log_full Output. Indicates whether or not this transaction caused a log full condition. Valid values are: SQLXA_TRUE This indoubt transaction caused a log full condition. SQLXA_FALSE This indoubt transaction did not cause a log full condition. connected Output. Indicates whether or not the application is connected. Valid values are: SQLXA_TRUE The transaction is undergoing normal syncpoint processing, and is waiting for the second phase of the two-phase commit. SQLXA_FALSE The transaction was left indoubt by an earlier failure, and is now waiting for resynchronisation from the transaction manager. indoubt_status Output. Indicates the status of this indoubt transaction. Valid values are: SQLXA_TS_PREP The transaction is prepared. The connected parameter can be used to determine whether the transaction is waiting for the second phase of normal commit processing or whether an error occurred and resynchronisation with the transaction manager is required. SQLXA_TS_HCOM The transaction has been heuristically committed. SQLXA_TS_HROL The transaction has been heuristically rolled back. SQLXA_TS_MACK The transaction is missing commit acknowledgement from a node in a partitioned database. SQLXA_TS_END The transaction has ended at this database. This transaction may be re-activated, committed, or rolled back at a later time. It is also possible that the transaction manager encountered an error and the transaction will not be completed. If this is the case, this transaction requires heuristic actions, because it may be holding locks and preventing other applications from accessing data. 
Usage Notes A typical application will perform the following steps after setting the current connection to the database or to the partitioned database coordinator node: 1. Call db2XaListIndTrans with piIndoubtData set to NULL. This will return values in oReqBufferLen and oNumIndoubtsTotal. 2. Use the returned value in oReqBufferLen to allocate a buffer. This buffer may not be large enough if additional transactions have entered the indoubt state since the initial invocation of this API to obtain oReqBufferLen. The application may provide a buffer larger than oReqBufferLen. 3. Determine if all indoubt transaction records have been obtained. This can be done by comparing oNumIndoubtsReturned to oNumIndoubtsTotal. If oNumIndoubtsTotal is greater than oNumIndoubtsReturned, the application can repeat the above steps. See Also "sqlxhfrg - Forget Transaction Status", "sqlxphcm - Commit an Indoubt Transaction", and "sqlxphrl - Roll Back an Indoubt Transaction" in the Administrative API Reference. ------------------------------------------------------------------------ 33.8 db2GetSnapshot - Get Snapshot The syntax for the db2GetSnapshot API should be as follows: int db2GetSnapshot( unsigned char version, db2GetSnapshotData *data, struct sqlca *sqlca); The parameters described in data are: typedef struct db2GetSnapshotData{ sqlma *piSqlmaData; sqlm_collected *poCollectedData; void *poBuffer; db2uint32 iVersion; db2int32 iBufferSize; db2uint8 iStoreResult; db2uint16 iNodeNumber; db2uint32 *poOutputFormat; }db2GetSnapshotData; ------------------------------------------------------------------------ 33.9 Forget Log Record The following information will be added to Appendix F following the MPP Subordinator Prepare section. This log record is written after the rollback of an indoubt transaction, or after the commit of a transaction that used two-phase commit. The log record is written to mark the end of the transaction and releases any log resources held.
In order for the transaction to be forgotten, it must be in a heuristically completed state.

Table 21. Forget Log Record Structure

Description   Type                        Offset (Bytes)
Log header    LogManagerLogRecordHeader   0 (20)
time          sqluint64                   20 (8)

Total Length: 28 bytes

------------------------------------------------------------------------ 33.10 sqlaintp - Get Error Message The following usage note is to be added to the description of this API: In a multi-threaded application, sqlaintp must be attached to a valid context; otherwise, the message text for SQLCODE -1445 cannot be obtained. ------------------------------------------------------------------------ 33.11 sqlbctcq - Close Tablespace Container Query Load is not a valid Authorization level for this API. ------------------------------------------------------------------------ 33.12 sqlubkp - Backup Database For the BackupType parameter, the SQLUB_FULL value will be replaced by SQLUB_DB. A backup of all tablespaces in the database will be taken. To support the new incremental backup functionality, the SQLUB_INCREMENTAL and SQLUB_DELTA parameters will also be added. An incremental backup image is a copy of all database data which has changed since the most recent successful full backup. A delta backup image is a copy of all database data that has changed since the most recent successful backup of any type. ------------------------------------------------------------------------ 33.13 sqlureot - Reorganize Table The following sentence will be added to the Usage Notes: REORGANIZE TABLE cannot use an index that is based on an index extension. ------------------------------------------------------------------------ 33.14 sqlurestore - Restore Database For the RestoreType parameter, the SQLUD_FULL value will be replaced by SQLUD_DB. A restore of all table spaces in the database will be performed. This will be run offline. To support the new incremental restore functionality, the SQLUD_INCREMENTAL parameter will also be added.
An incremental backup image is a copy of all database data which has changed since the most recent successful full backup. ------------------------------------------------------------------------ 33.15 Documentation Error Regarding AIX Extended Shared Memory Support (EXTSHM) In "Appendix E. Threaded Applications with Concurrent Access", Note 2 should now read: 2. By default, AIX does not permit 32-bit applications to attach to more than 11 shared memory segments per process, of which a maximum of 10 can be used for local DB2 connections. To use EXTSHM with DB2, do the following: In client sessions: export EXTSHM=ON When starting DB2: export EXTSHM=ON db2set DB2ENVLIST=EXTSHM db2start On EEE, also add the following lines to sqllib/db2profile: EXTSHM=ON export EXTSHM ------------------------------------------------------------------------ 33.16 SQLFUPD 33.16.1 locklist The name of the token has changed from SQLF_DBTN_LOCKLIST to SQLF_DBTN_LOCK_LIST. The locklist parameter has been changed from a SMALLINT to a 64-bit unsigned INTEGER. The following addition should be made to the table of Updatable Database Configuration Parameters:

Parameter Name   Token                 Token Value   Data Type
locklist         SQLF_DBTN_LOCK_LIST   704           Uint64

The new maximum for this parameter is 524 288. ------------------------------------------------------------------------ 33.17 SQLEDBDESC Two values will be added to the list of valid values for SQLDBCSS (defined in sqlenv). They are: SQL_CS_SYSTEM_NLSCHAR Collating sequence from system using the NLS version of compare routines for character types. SQL_CS_USER_NLSCHAR Collating sequence from user using the NLS version of compare routines for character types. ------------------------------------------------------------------------ 33.18 SQLFUPD Documentation Error In "Chapter 3. Data Structures", Table 53. Updatable Database Configuration Parameters incorrectly lists the token value for dbheap as 701. The correct value is 58.
------------------------------------------------------------------------ Application Building Guide ------------------------------------------------------------------------ 34.1 Chapter 1. Introduction 34.1.1 Supported Software Note: PHP. PHP can now be used as a method to access DB2 from web-based applications. PHP is a server-side, HTML-embedded, cross-platform scripting language. It supports DB2 access using the Unified-ODBC access method, in which the user-level PHP communicates to DB2 using ODBC calls. Unlike standard ODBC, with the Unified-ODBC method, communication is directly to the DB2 CLI layer, not through the ODBC layer. For more information about using PHP with DB2, search the DB2 support site at www.ibm.com/software/data/db2/udb/winos2unix/support. AIX The listed versions for C and C++ compilers should be the following: IBM C and C++ Compilers for AIX Version 3.6.6 (Version 3.6.6.3 for 64-bit) IBM C for AIX 4.4 IBM VisualAge C++ Version 4.0 Note: Please download the latest available FixPaks for these compiler versions from http://www.ibm.com/software/ad/vacpp/service/csd.html The listed versions for the Micro Focus COBOL compiler should be the following: AIX 4.2.1 Micro Focus COBOL Version 4.0.20 (PRN 12.03 or later) Micro Focus COBOL Version 4.1.10 (PRN 13.04 or later) AIX 4.3 Micro Focus COBOL Server Express Version 1.0 Note: For information on DB2 support for Micro Focus COBOL stored procedures and UDFs on AIX 4.3, see the DB2 Application Development Web page: http://www.ibm.com/software/data/db2/udb/ad To build 64-bit applications with the IBM XL Fortran for AIX Version 5.1.0 compiler, use the "-q64" option in the compile and link steps. Note that 64-bit applications are not supported on earlier versions of this compiler. 
HP-UX The listed version for the C++ compiler should be the following: HP aC++, Version A.03.25 Note: HP does not support binary compatibility among objects compiled with old and new compilers, so this will force recompiles of any C++ application built to access DB2 on HP-UX. C++ applications must also be built to handle exceptions with this new compiler. This is the URL for the aCC transition guide: http://www.hp.com/esy/lang/cpp/tguide. The C++ incompatibility portion is here: http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.1.2 http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.3.3 The C vs C++ portion is here: http://www.hp.com/esy/lang/cpp/tguide/transcontent.html#RN.CVT.3.3.1 Even though C and aCC are compatible, when using the two different object types, the object containing "main" must be compiled with aCC, and the final executable must be linked with aCC. Linux DB2 for Linux supports the following REXX version: Object REXX Interpreter for Linux Version 2.1 Linux/390 DB2 for Linux/390 supports only Java, C and C++. 
OS/2 The listed versions for C/C++ compiler should be the following: IBM VisualAge C++ for OS/2 Version 3.6.5 and Version 4.0 Note: Please download the latest available FixPaks for these compiler versions from http://www.ibm.com/software/ad/vacpp/service/csd.html For limitations on future service support for these VisualAge C++ compilers, please see the news section at: http://www-4.ibm.com/software/ad/vacpp/ Solaris The listed version for the Micro Focus COBOL compiler should be: Micro Focus COBOL Server Express Version 1.0 Windows 32-bit Operating Systems The listed versions for the IBM VisualAge C++ compiler should be the following: IBM VisualAge C++ for Windows Versions 3.6.5 and 4.0 Note: Please download the latest available FixPaks for these compiler versions from http://www.ibm.com/software/ad/vacpp/service/csd.html For limitations on future service support for these VisualAge C++ compilers, please see the news section at: http://www-4.ibm.com/software/ad/vacpp/ The listed versions for the Micro Focus COBOL compiler should be the following: Micro Focus COBOL Version 4.0.20 Micro Focus COBOL Net Express Version 3.0 34.1.2 Sample Programs The following should be added to the "Object Linking and Embedding Samples" section: salarycltvc A Visual C++ DB2 CLI sample that calls the Visual Basic stored procedure, salarysrv. SALSVADO A sample OLE automation stored procedure (SALSVADO) and a SALCLADO client (SALCLADO), implemented in 32-bit Visual Basic and ADO, that calculates the median salary in table staff2. The following should be added to the "Log Management User Exit Samples" section: Applications on AIX using the ADSM API Client at level 3.1.6 and higher must be built with the xlc_r or xlC_r compiler invocations, not with xlc or xlC, even if the applications are single-threaded. This ensures that the libraries are thread-safe. This applies to the Log Management User Exit Sample, db2uext2.cadsm. 
If you have an application that is compiled with a non thread-safe library, you can apply fixtest IC21925E or contact your application provider. The fixtest is available on the index.storsys.ibm.com anonymous ftp server. This will regress the ADSM API level to 3.1.3. ------------------------------------------------------------------------ 34.2 Chapter 3. General Information for Building DB2 Applications 34.2.1 Build Files, Makefiles, and Error-checking Utilities The entry for bldevm in table 16 should read: bldevm The event monitor sample program, evm (only available on AIX, OS/2, and Windows 32-bit operating systems). Table 17 should include the entries: bldmevm The event monitor sample program, evm, with the Microsoft Visual C++ compiler. bldvevm The event monitor sample program, evm, with the VisualAge C++ compiler. ------------------------------------------------------------------------ 34.3 Chapter 4. Building Java Applets and Applications 34.3.1 Setting the Environment If you are using IBM JDK 1.1.8 on supported platforms to build SQLJ programs, a JDK build date of November 24, 1999 (or later) is required. Otherwise you may get JNI panic errors during compilation. If you are using IBM JDK 1.2.2 on supported platforms to build SQLJ programs, a JDK build date of April 17, 2000 (or later) is required. Otherwise, you may get Invalid Java type errors during compilation. For sub-sections AIX, HP-UX, Linux, and Solaris, replace the information on JDBC 2.0 with the following: Using the JDBC 2.0 Driver with Java Applications The JDBC 1.22 driver is still the default driver on all operating systems. To take advantage of the new features of JDBC 2.0, you must install JDK 1.2 support. Before executing an application that takes advantage of the new features of JDBC 2.0, you must set your environment by issuing the usejdbc2 command from the sqllib/java12 directory. 
If you want your applications to always use the JDBC 2.0 driver, consider adding the following line to your login profile, such as .profile, or your shell initialization script, such as .bashrc, .cshrc, or .kshrc: . sqllib/java12/usejdbc2 Ensure that this command is placed after the command to run db2profile, as usejdbc2 should be run after db2profile. To switch back to the JDBC 1.22 driver, execute the following command from the sqllib/java12 directory: . usejdbc1 Using the JDBC 2.0 Driver with Java Stored Procedures and UDFs To use the JDBC 2.0 driver with Java stored procedures and UDFs, you must set the environment for the fenced user ID for your instance. The default fenced user ID is db2fenc1. To set the environment for the fenced user ID, perform the following steps: 1. Add the following line to the fenced user ID profile, such as .profile, or the fenced user ID shell initialization script, such as .bashrc, .cshrc, or .kshrc: . sqllib/java12/usejdbc2 2. Issue the following command from the CLP: db2set DB2_USE_JDK12=1 To switch back to the JDBC 1.22 driver support for Java UDFs and stored procedures, perform the following steps: 1. Remove the following line from the fenced user ID profile, such as .profile, or the fenced user ID shell initialization script, such as .bashrc, .cshrc, or .kshrc: . sqllib/java12/usejdbc2 2. Issue the following command from the CLP: db2set DB2_USE_JDK12= HP-UX Java stored procedures and user-defined functions are not supported on DB2 for HP-UX with JDK 1.1.
Silicon Graphics IRIX When building SQLJ applications with the -o32 object type, using the Java JIT compiler with JDK 1.2.2, if the SQLJ translator fails with a segmentation fault, try turning off the JIT compiler with this command: export JAVA_COMPILER=NONE JDK 1.2.2 is required for building Java SQLJ programs on Silicon Graphics IRIX. Windows 32-bit Operating Systems Using the JDBC 2.0 Driver with Java Stored Procedures and UDFs To use the JDBC 2.0 driver with Java stored procedures and UDFs, you must set the environment by performing the following steps: 1. Issue the following command in the sqllib\java12 directory: usejdbc2 2. Issue the following command from the CLP: db2set DB2_USE_JDK12=1 To switch back to the JDBC 1.22 driver support for Java UDFs and stored procedures, perform the following steps: 1. Issue the following command in the sqllib\java12 directory: usejdbc1 2. Issue the following command from the CLP: db2set DB2_USE_JDK12= 34.3.1.1 JDK Level on OS/2 Some messages will not display on OS/2 running versions of JDK 1.1.8 released prior to 09/99. Ensure that you have the latest JDK Version 1.1.8. 34.3.1.2 Java2 on HP-UX To run Java2 stored procedures, the shared library path has to be changed to be similar to the following: export SHLIB_PATH=$JAVADIR/jre/lib/PA_RISC:$JAVADIR/ jre/lib/PA_RISC/classic:$HOME/sqllib/lib:/usr/lib:$SHLIB_PATH $JAVADIR is the location of the Java2 SDK. ------------------------------------------------------------------------ 34.4 Chapter 5. Building SQL Procedures 34.4.1 Setting the SQL Procedures Environment These instructions are in addition to the instructions for setting up the DB2 environment in "Setup". For SQL procedures support, you have to install the Application Development Client on the server. For information about installing the Application Development Client, refer to the Quick Beginnings book for your platform. For the C and C++ compilers supported by DB2 on your platform, see "Supported Software by Platform".
Note: On an OS/2 FAT file system, you are limited to a schema name for SQL Procedures of eight characters or less. You have to use the HPFS file system for schema names longer than eight characters. The compiler configuration consists of two parts: setting the environment variables for the compiler, and defining the compilation command. The environment variables provide the paths to the compiler's binaries, libraries and include files. The compilation command is the full command that DB2 will use to compile the C files generated for SQL procedures. 34.4.2 Setting the Compiler Environment Variables There are different rules for configuring the environment on OS/2, Windows, and UNIX based operating systems, as explained below. In some cases, no configuration is needed; in other cases, the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable must be set to point to an executable script that sets the environment variables appropriately. Note: You can either use the db2set command or use the SQL Stored Procedures Build Options dialog from the Stored Procedure Builder to set the value of this DB2 registry variable. Using the SQL Stored Procedures Build Options dialog eliminates the need to physically access the database server or for the database server to be restarted in order for the changes to take effect. On OS/2: for IBM VisualAge C++ for OS/2 Version 3.6: db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcxxo\bin\setenv.cmd" for IBM VisualAge C++ for OS/2 Version 4: db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcpp40\bin\setenv.cmd" Note: For these commands, it is assumed that the C++ compiler is installed on the c: drive. Change the drive or the path, if necessary, to reflect the location of the C++ compiler on your system. On Windows 32-bit operating systems, if the environment variables for your compiler are set as SYSTEM variables, no configuration is needed. 
Otherwise, set the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable as follows: for Microsoft Visual C++ Versions 5.0: db2set DB2_SQLROUTINE_COMPILER_PATH="c:\devstudio\vc\bin\vcvars32.bat" for Microsoft Visual C++ Versions 6.0: db2set DB2_SQLROUTINE_COMPILER_PATH="c:\Micros~1\vc98\bin\vcvars32.bat" for IBM VisualAge C++ for Windows Version 3.6: db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcxxw\bin\setenv.bat" for IBM VisualAge C++ for Windows Version 4: db2set DB2_SQLROUTINE_COMPILER_PATH="c:\ibmcppw40\bin\setenv.bat" Note: For these commands, it is assumed that the C++ compiler is installed on the c: drive. Change the drive or the path, if necessary, to reflect the location of the C++ compiler on your system. On UNIX based operating systems, DB2 will generate the executable script file $HOME/sqllib/function/routine/sr_cpath (which contains the default values for the compiler environment variables) the first time you compile a stored procedure. You can edit this file if the default values are not appropriate for your compiler. Alternatively, you can set the DB2_SQLROUTINE_COMPILER_PATH DB2 registry variable to contain the full path name of another executable script that specifies the desired settings (see examples above). 
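On UNIX based systems, the script named by DB2_SQLROUTINE_COMPILER_PATH (or the generated sr_cpath file) is an ordinary shell script. The following minimal sketch shows the shape such a script might take; the compiler location /opt/SUNWspro is an assumption for illustration only and must be replaced with the path to your own C/C++ compiler:

```shell
#!/bin/sh
# Sketch of a compiler environment script for SQL procedure builds.
# CC_HOME below is a hypothetical compiler install location -- adjust
# it to where your C/C++ compiler is actually installed.
CC_HOME=/opt/SUNWspro
PATH=$CC_HOME/bin:$PATH
export PATH
```

If you save such a script somewhere other than the default sr_cpath location, make it executable (chmod +x) and point the DB2_SQLROUTINE_COMPILER_PATH registry variable at its full path, as described above.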
34.4.3 Customizing the Compilation Command The installation of the Application Development Client provides a default compilation command that works for at least one of the compilers supported on each platform: AIX: IBM C Set++ for AIX Version 3.6.6 Solaris: SPARCompiler C++ Versions 4.2 and 5.0 HP-UX: HP-UX C++ Version A.12.00 Linux: GNU/Linux g++ Version egcs-2.90.27 980315 (egcs-1.0.2 release) PTX: ptx/C++ Version 5.2 OS/2: IBM VisualAge C++ for OS/2 Version 3 Windows NT and Windows 2000: Microsoft Visual C++ Versions 5.0 and 6.0 To use other compilers, or to customize the default command, you must set the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable with a command like: db2set DB2_SQLROUTINE_COMPILE_COMMAND=compilation_command where compilation_command is the C or C++ compilation command, including the options and parameters required to create stored procedures. In the compilation command, use the keyword SQLROUTINE_FILENAME to replace the filename for the generated SQC, C, PDB, DEF, EXP, messages log and shared library files. For AIX only, use the keyword SQLROUTINE_ENTRY to replace the entry point name. Note: You can either use the db2set command or use the SQL Stored Procedures Build Options dialog from the Stored Procedure Builder to set the value of this DB2 registry variable. Using the SQL Stored Procedures Build Options dialog eliminates the need to physically access the database server, and allows the changes to take effect without restarting the database server. The following are the default values for the DB2_SQLROUTINE_COMPILE_COMMAND for C or C++ compilers on supported server platforms. 
AIX To use IBM C for AIX Version 3.6.6: db2set DB2_SQLROUTINE_COMPILE_COMMAND=xlc -H512 -T512 \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c -bE:SQLROUTINE_FILENAME.exp \ -e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -lc -ldb2 To use IBM C Set++ for AIX Version 3.6.6: db2set DB2_SQLROUTINE_COMPILE_COMMAND=xlC -H512 -T512 \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c -bE:SQLROUTINE_FILENAME.exp \ -e SQLROUTINE_ENTRY -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -lc -ldb2 This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. Note: To compile 64-bit SQL procedures on AIX, add the -q64 option to the above commands. To use IBM VisualAge C++ for AIX Version 4: db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld" If you do not specify a configuration file after the vacbld command, DB2 will create the following default configuration file the first time you create an SQL procedure: $HOME/sqllib/function/routine/sqlproc.icc If you want to use your own configuration file, you can specify it when setting the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND: db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld %DB2PATH%/function/sqlproc.icc" HP-UX To use HP C Compiler Version A.11.00.03: db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc +DAportable +ul -Aa +z \ -I$HOME/sqllib/include -c SQLROUTINE_FILENAME.c; \ ld -b -o SQLROUTINE_FILENAME SQLROUTINE_FILENAME.o \ -L$HOME/sqllib/lib -ldb2 To use HP-UX C++ Version A.12.00: db2set DB2_SQLROUTINE_COMPILE_COMMAND=CC +DAportable +a1 +z -ext \ -I$HOME/sqllib/include -c SQLROUTINE_FILENAME.c; \ ld -b -o SQLROUTINE_FILENAME SQLROUTINE_FILENAME.o \ -L$HOME/sqllib/lib -ldb2 This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. 
Linux To use GNU/Linux gcc Version 2.7.2.3: db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \ -shared -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -ldb2 To use GNU/Linux g++ Version egcs-2.90.27 980315 (egcs-1.0.2 release): db2set DB2_SQLROUTINE_COMPILE_COMMAND=g++ \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \ -shared -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -ldb2 This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. PTX To use ptx/C Version 4.5: db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc -KPIC \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \ -G -o SQLROUTINE_FILENAME.so -L$HOME/sqllib/lib -ldb2 ; \ cp SQLROUTINE_FILENAME.so SQLROUTINE_FILENAME To use ptx/C++ Version 5.2: db2set DB2_SQLROUTINE_COMPILE_COMMAND=c++ -KPIC \ -D_RWSTD_COMPILE_INSTANTIATE=0 -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \ -G -o SQLROUTINE_FILENAME.so -L$HOME/sqllib/lib -ldb2 ; \ cp SQLROUTINE_FILENAME.so SQLROUTINE_FILENAME This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. OS/2 To use IBM VisualAge C++ for OS/2 Version 3: db2set DB2_SQLROUTINE_COMPILE_COMMAND="icc -Ge- -Gm+ -W2 -I%DB2PATH%\include SQLROUTINE_FILENAME.c /B\"/NOFREE /NOI /ST:64000\" SQLROUTINE_FILENAME.def %DB2PATH%\lib\db2api.lib" This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. 
To use IBM VisualAge C++ for OS/2 Version 4: db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld" If you do not specify a configuration file after the vacbld command, DB2 will create the following default configuration file the first time you create an SQL procedure: %DB2PATH%\function\routine\sqlproc.icc If you want to use your own configuration file, you can specify it when setting the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND: db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld %DB2PATH%\function\sqlproc.icc" Solaris To use SPARCompiler C Versions 4.2 and 5.0: db2set DB2_SQLROUTINE_COMPILE_COMMAND=cc -xarch=v8plusa -Kpic \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \ -G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib \ -R$HOME/sqllib/lib -ldb2 To use SPARCompiler C++ Versions 4.2 and 5.0: db2set DB2_SQLROUTINE_COMPILE_COMMAND=CC -xarch=v8plusa -Kpic \ -I$HOME/sqllib/include SQLROUTINE_FILENAME.c \ -G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib \ -R$HOME/sqllib/lib -ldb2 This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. Notes: 1. The compiler option -xarch=v8plusa has been added to the default compiler command. For details on why this option has been added, see 34.8, "Chapter 12. Building Solaris Applications". 2. To compile 64-bit SQL procedures on Solaris, remove the -xarch=v8plusa option and add the -xarch=v9 option to the above commands. Windows NT and Windows 2000 Note: SQL procedures are not supported on Windows 98 or Windows 95. To use Microsoft Visual C++ Versions 5.0 and 6.0: db2set DB2_SQLROUTINE_COMPILE_COMMAND=cl -Od -W2 /TC -D_X86_=1 -I%DB2PATH%\include SQLROUTINE_FILENAME.c /link -dll -def:SQLROUTINE_FILENAME.def /out:SQLROUTINE_FILENAME.dll %DB2PATH%\lib\db2api.lib This is the default compile command if the DB2_SQLROUTINE_COMPILE_COMMAND DB2 registry variable is not set. 
To use IBM VisualAge C++ for Windows Version 3.6: db2set DB2_SQLROUTINE_COMPILE_COMMAND="ilib /GI SQLROUTINE_FILENAME.def & icc -Ti -Ge- -Gm+ -W2 -I%DB2PATH%\include SQLROUTINE_FILENAME.c /B\"/ST:64000 /PM:VIO /DLL\" SQLROUTINE_FILENAME.exp %DB2PATH%\lib\db2api.lib" To use IBM VisualAge C++ for Windows Version 4: db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld" If you do not specify a configuration file after the vacbld command, DB2 will create the following default configuration file the first time you create an SQL procedure: %DB2PATH%\function\routine\sqlproc.icc If you want to use your own configuration file, you can specify it when setting the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND: db2set DB2_SQLROUTINE_COMPILE_COMMAND="vacbld %DB2PATH%\function\sqlproc.icc" To return to the default compiler options, set the DB2 registry value for DB2_SQLROUTINE_COMPILE_COMMAND to null with the following command: db2set DB2_SQLROUTINE_COMPILE_COMMAND= 34.4.4 Retaining Intermediate Files You must manually delete intermediate files that may be left behind when an SQL procedure is not created successfully. These files are in the following directories: UNIX $DB2PATH/function/routine/sqlproc/$DATABASE/$SCHEMA/tmp where $DB2PATH represents the directory in which the instance was created, $DATABASE represents the database name, and $SCHEMA represents the schema name with which the SQL procedures were created. OS/2 and Windows %DB2PATH%\function\routine\sqlproc\%DATABASE%\%SCHEMA%\tmp where %DB2PATH% represents the directory in which the instance was created, %DATABASE% represents the database name, and %SCHEMA% represents the schema name with which the SQL procedures were created. 34.4.5 Backup and Restore When an SQL procedure is created, the generated shared library/DLL is also kept in the catalog table if the generated shared library/DLL is smaller than 2 MB. 
When the database is backed up and restored, any SQL procedure with a generated shared library/DLL less than 2 MB will be backed up and restored with the version kept in the catalog table. If you have SQL procedures with a generated shared library/DLL larger than 2 MB, ensure that you also do the filesystem backup and restore with the database backup and restore. If not, you will have to recreate the shared library/DLL of the SQL procedure manually by using the source in the syscat.procedures catalog table. Note: At database recovery time, all the SQL procedure executables on the filesystem belonging to the database being recovered will be removed. If the index creation configuration parameter (indexrec) is set to RESTART, all SQL procedure executables will be extracted from the catalog table and put back on the filesystem at next connect time. Otherwise, the SQL executables will be extracted on first execution of the SQL procedures. The executables will be put back in the following directory: UNIX $DB2PATH/function/routine/sqlproc/$DATABASE where $DB2PATH represents the directory in which the instance was created and $DATABASE represents the database name with which the SQL procedures were created. OS/2 and Windows %DB2PATH%\function\routine\sqlproc\%DATABASE% where %DB2PATH% represents the directory in which the instance was created and %DATABASE% represents the database name with which the SQL procedures were created. 34.4.6 Creating SQL Procedures Set the database manager configuration parameter KEEPDARI to 'NO' for developing SQL procedures. If an SQL procedure is kept loaded once it is executed, you may have problems dropping and recreating the stored procedure with the same name, as the library cannot be refreshed and the executables cannot be dropped from the filesystem. You will also have problems when you try to rollback the changes or drop the database because the executables cannot be deleted. 
See 'Updating the Database Manager Configuration File' in "Chapter 2. Setup" of the 'Application Building Guide' for more information on setting the KEEPDARI parameter. Note: SQL procedures do not support the following data types for parameters: o LONG VARGRAPHIC o Binary Large Object (BLOB) o Character Large Object (CLOB) o Double-byte Character Large Object (DBCLOB) 34.4.7 Calling Stored Procedures The first paragraph in 'Using the CALL Command' should read: To use the call command, you must enter the stored procedure name plus any IN or INOUT parameters, as well as '?' as a place-holder for each OUT parameter. For details on the syntax of the CALL command, see 10.14, "CALL". 34.4.8 Distributing Compiled SQL Procedures Note: To distribute compiled SQL procedures between DB2 servers, you must perform the following steps for every DB2 server that serves as the source of, or the destination for, a compiled SQL procedure: Step 1. Install FixPak 3 Step 2. Issue the db2updv7 command to enable DB2 to extract and install compiled SQL procedures: db2updv7 -d database_name When you define an SQL procedure, it is converted to a C program, precompiled, bound against the target database, compiled and linked to create a shared library. The compile and link steps require a C or C++ compiler to be available on the database server machine. However, once you define an SQL procedure, you can distribute it in compiled form to DB2 databases that run on the same platform but do not necessarily have access to a C or C++ compiler. DB2 allows the user to extract SQL procedures in compiled form from one database and install SQL procedures in compiled form into another database. DB2 provides both a command line interface and a programming interface to the extraction and installation operations. The command line interface consists of two CLP commands: GET ROUTINE and PUT ROUTINE. The programmatic interface consists of two built-in stored procedures: GET_ROUTINE_SAR and PUT_ROUTINE_SAR. 
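The command line interface described above might be scripted along the following lines. This is a sketch only: the database names SRCDB and TGTDB, the routine name MYSCHEMA.MYPROC, and the file name myproc.sar are placeholders, and a CLP session is assumed on each server.

```shell
# Sketch of the extract/install flow for a compiled SQL procedure.
# All names below are placeholder examples, not real objects.
distribute_myproc() {
    # On the source server: extract the compiled procedure to a file.
    db2 "CONNECT TO SRCDB" &&
    db2 "GET ROUTINE INTO myproc.sar FROM PROCEDURE MYSCHEMA.MYPROC" &&
    db2 "CONNECT RESET"
    # Transfer myproc.sar to the target server, then there:
    db2 "CONNECT TO TGTDB" &&
    db2 "PUT ROUTINE FROM myproc.sar" &&
    db2 "CONNECT RESET"
}
```

Remember that both servers must be at FixPak 3 or later, must have had db2updv7 run against the databases involved, and must share the same operating system and DB2 level.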
For more information on the command line interface, refer to the Command Reference. For more information on the programming interface, refer to the SQL Reference. To distribute a compiled SQL procedure from one database server to another database server, perform the following steps: Step 1. Develop the application, including defining the SQL procedures that are part of the application. Step 2. After testing the procedures, extract the compiled version of each procedure into a different file. For more information, refer to the GET ROUTINE command in the Command Reference or the GET_ROUTINE_SAR stored procedure in the SQL Reference. Step 3. Install the compiled version of each procedure on each server, either by issuing the PUT ROUTINE command, or by invoking the PUT_ROUTINE_SAR stored procedure, using the files created in Step 2. Each database server must have the same operating system and DB2 level. ------------------------------------------------------------------------ 34.5 Chapter 7. Building HP-UX Applications 34.5.1 HP-UX C In "Multi-threaded Applications", the bldmt script file has been revised with different compile options. The new version is in the sqllib/samples/c directory. 34.5.2 HP-UX C++ In the build scripts, the C++ compiler variable CC has been replaced by aCC, for the HP aC++ compiler. The revised build scripts are in the sqllib/samples/cpp directory. The "+u1" compile option should be used to build stored procedures and UDFs with the aCC compiler. This option allows unaligned data access. The sample build scripts shipped with DB2 for HP-UX, bldsrv and bldudf, and the sample makefile, have not been updated with this option. They should be revised to add this option before use. Here is the new compile step for the bldsrv and bldudf scripts: aCC +DAportable +u1 -Aa +z -ext -I$DB2PATH/include -c $1.C In "Multi-threaded Applications", the bldmt script file has been revised with different compile options. 
The new version is in the sqllib/samples/cpp directory. ------------------------------------------------------------------------ 34.6 Chapter 9. Building OS/2 Applications 34.6.1 VisualAge C++ for OS/2 Version 4.0 For OS/2 and Windows, use the set command instead of the export command documented in this section. For example, set CLI=tbinfo. In 'DB2 CLI Applications', sub-section 'Building and Running Embedded SQL Applications', for OS/2 and Windows the cliapi.icc file must be used instead of the cli.icc file, as embedded SQL applications need the db2api.lib library linked in by cliapi.icc. ------------------------------------------------------------------------ 34.7 Chapter 10. Building PTX Applications 34.7.1 ptx/C++ Libraries need to be linked with the -shared option to build stored procedures and user-defined functions. In the sqllib/samples directory, the makefile and the build scripts bldsrv and bldudf have been updated to include this option, as in the following link step from bldsrv: c++ -shared -G -o $1 $1.o -L$DB2PATH/lib -ldb2 ------------------------------------------------------------------------ 34.8 Chapter 12. Building Solaris Applications 34.8.1 SPARCompiler C++ Problems with executing C/C++ Applications and running SQL Procedures on Solaris When using the Sun WorkShop Compiler C/C++, if you experience problems with your executable and receive errors like the following: 1. syntax error at line 1: `(' unexpected 2. ksh: application_name: cannot execute (where application_name is the name of the compiled executable) you may be experiencing a known problem: the compiler does not produce valid executables when linking with libdb2.so. 
One suggestion to fix this is to add the following compiler option to your compile and link commands: -xarch=v8plusa For example, when compiling the sample application dynamic.sqc: embprep dynamic sample embprep utilemb sample cc -c utilemb.c -xarch=v8plusa -I/export/home/db2inst1/sqllib/include cc -o dynamic dynamic.c utilemb.o -xarch=v8plusa -I/export/home/db2inst1/sqllib/include \ -L/export/home/db2inst1/sqllib/lib -R/export/home/db2inst1/sqllib/lib -ldb2 Notes: 1. If you are using SQL Procedures on Solaris, and you are using your own compile string via the DB2_SQLROUTINE_COMPILE_COMMAND profile variable, ensure that you include the compiler option given above. The default compiler command includes this option: db2set DB2_SQLROUTINE_COMPILE_COMMAND="cc -# -Kpic -xarch=v8plusa -I$HOME/sqllib/include \ SQLROUTINE_FILENAME.c -G -o SQLROUTINE_FILENAME -L$HOME/sqllib/lib -R$HOME/sqllib/lib -ldb2" 2. To compile 64-bit SQL procedures on Solaris, remove the -xarch=v8plusa option and add the -xarch=v9 option to the above commands. ------------------------------------------------------------------------ 34.9 Chapter 13. Building Applications for Windows 32-bit Operating Systems 34.9.1 VisualAge C++ Version 4.0 For OS/2 and Windows, use the set command instead of the export command documented in this section. For example, set CLI=tbinfo. In 'DB2 CLI Applications', sub-section 'Building and Running Embedded SQL Applications', for OS/2 and Windows the cliapi.icc file must be used instead of the cli.icc file, as embedded SQL applications need the db2api.lib library linked in by cliapi.icc. ------------------------------------------------------------------------ Application Development Guide ------------------------------------------------------------------------ 35.1 Chapter 2. 
Coding a DB2 Application 35.1.1 Activating the IBM DB2 Universal Database Project and Tool Add-ins for Microsoft Visual C++ Before running the db2vccmd command (step 1), please ensure that you have started and stopped Visual C++ at least once with your current login ID. The first time you run Visual C++, a profile is created for your user ID, and that is what gets updated by the db2vccmd command. If you have not started it once, and you try to run db2vccmd, you may see errors like the following: "Registering DB2 Project add-in ...Failed! (rc = 2)" ------------------------------------------------------------------------ 35.2 Chapter 6. Common DB2 Application Techniques 35.2.1 Generating Sequential Values Generating sequential values is a common database application development problem. The best solution to that problem is to use sequence objects and sequence expressions in SQL. Each sequence object is a uniquely named database object that can be accessed only by sequence expressions. There are two sequence expressions: the PREVVAL expression and the NEXTVAL expression. The PREVVAL expression returns the most recently generated value for the specified sequence for a previous statement. The NEXTVAL sequence expression increments the value of the sequence object and returns the new value of the sequence object. To create a sequence object, issue the CREATE SEQUENCE statement. For example, to create a sequence object called id_values using the default attributes, issue the following statement: CREATE SEQUENCE id_values To display the current value of the sequence object, issue a VALUES statement using the PREVVAL expression: VALUES PREVVAL FOR id_values 1 ----------- 1 1 record(s) selected. You can repeatedly retrieve the current value of the sequence object, and the value that the sequence object returns does not change until you issue a NEXTVAL expression. 
In the following example, the PREVVAL expression returns a value of 1, until the NEXTVAL expression increments the value of the sequence object: VALUES PREVVAL FOR id_values 1 ----------- 1 1 record(s) selected. VALUES PREVVAL FOR id_values 1 ----------- 1 1 record(s) selected. VALUES NEXTVAL FOR id_values 1 ----------- 2 1 record(s) selected. VALUES PREVVAL FOR id_values 1 ----------- 2 1 record(s) selected. To update the value of a column with the next value of the sequence object, include the NEXTVAL expression in the UPDATE statement, as follows: UPDATE staff SET id = NEXTVAL FOR id_values WHERE id = 350 To insert a new row into a table using the next value of the sequence object, include the NEXTVAL expression in the INSERT statement, as follows: INSERT INTO staff (id, name, dept, job) VALUES (NEXTVAL FOR id_values, 'Kandil', 51, 'Mgr') For more information on the PREVVAL and NEXTVAL expressions, refer to the SQL Reference. 35.2.1.1 Controlling Sequence Behavior You can tailor the behavior of sequence objects to meet the needs of your application. You change the attributes of a sequence object when you issue the CREATE SEQUENCE statement to create a new sequence object, and when you issue the ALTER SEQUENCE statement for an existing sequence object. Following are some of the attributes of a sequence object that you can specify: Data type The AS clause of the CREATE SEQUENCE statement specifies the numeric data type of the sequence object. The data type, as specified in the "SQL Limits" appendix of the SQL Reference, determines the possible minimum and maximum values of the sequence object. You cannot change the data type of a sequence object; instead, you must drop the sequence object by issuing the DROP SEQUENCE statement and issuing a CREATE SEQUENCE statement with the new data type. Start value The START WITH clause of the CREATE SEQUENCE statement sets the initial value of the sequence object. 
The RESTART WITH clause of the ALTER SEQUENCE statement resets the value of the sequence object to a specified value. Minimum value The MINVALUE clause sets the minimum value of the sequence object. Maximum value The MAXVALUE clause sets the maximum value of the sequence object. Increment value The INCREMENT BY clause sets the value that each NEXTVAL expression adds to the sequence object. To decrement the value of the sequence object, specify a negative value. Sequence cycling The CYCLE clause causes the value of a sequence object that reaches its maximum or minimum value to return to its start value on the following NEXTVAL expression. For example, to create a sequence object called id_values that starts with a value of 0, has a maximum value of 1000, increments by 2 with each NEXTVAL expression, and returns to its start value when the maximum value is reached, issue the following statement: CREATE SEQUENCE id_values START WITH 0 INCREMENT BY 2 MAXVALUE 1000 CYCLE For more information on the CREATE SEQUENCE and ALTER SEQUENCE statements, refer to the SQL Reference. 35.2.1.2 Improving Performance with Sequence Objects As with identity columns, using sequence objects to generate values generally improves the performance of your applications in comparison to alternative approaches. The alternative to sequence objects is to create a single-column table that stores the current value and to increment that value either with a trigger or under the control of the application. In a distributed environment where applications concurrently access the single-column table, the locking required to force serialized access to the table can seriously affect performance. Sequence objects avoid the locking issues that are associated with the single-column table approach and can cache sequence values in memory to improve DB2 response time. 
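The CYCLE behavior of the id_values example in 35.2.1.1 (START WITH 0, INCREMENT BY 2, MAXVALUE 1000) can be illustrated outside DB2 with a small shell model. This sketch only mimics the documented semantics of the NEXTVAL expression; it is not how DB2 implements sequences:

```shell
# Model of the id_values example: START WITH 0, INCREMENT BY 2,
# MAXVALUE 1000, CYCLE. nextval is a hypothetical helper that mimics
# the NEXTVAL expression as described in the text above.
SEQ_START=0
SEQ_INCR=2
SEQ_MAX=1000
seq_val=$SEQ_START
nextval() {
    SEQ_CURRENT=$seq_val                      # value this NEXTVAL returns
    if [ $((seq_val + SEQ_INCR)) -gt "$SEQ_MAX" ]; then
        seq_val=$SEQ_START                    # CYCLE: wrap to the start value
    else
        seq_val=$((seq_val + SEQ_INCR))
    fi
}
```

Calling nextval repeatedly yields 0, 2, 4, and so on up to 1000, after which the next call wraps back to the start value 0, matching the CYCLE clause in the CREATE SEQUENCE example above.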
To maximize the performance of applications that use sequence objects, ensure that your sequence object caches an appropriate number of sequence values. The CACHE clause of the CREATE SEQUENCE and ALTER SEQUENCE statements specifies the maximum number of sequence values that DB2 generates and stores in memory. If your sequence object must generate values in order, without introducing gaps in that order due to a system failure or database deactivation, use the ORDER and NO CACHE clauses in the CREATE SEQUENCE statement. The NO CACHE clause guarantees that no gaps appear in the generated values at the cost of some of your application's performance because it forces your sequence object to write to the database log every time it generates a new value. 35.2.1.3 Comparing Sequence Objects and Identity Columns Although sequence objects and identity columns appear to serve similar purposes for DB2 applications, there are a number of important differences: * An identity column automatically generates values for a column in a single table. A sequence object generates sequential values that can be used in any SQL statement. * An identity column generates values that are guaranteed to be unique. Including the CYCLE clause in a CREATE SEQUENCE or ALTER SEQUENCE statement enables that sequence object to generate duplicate values. ------------------------------------------------------------------------ 35.3 Chapter 7. Stored Procedures 35.3.1 DECIMAL Type Fails in Linux Java Routines This problem occurs because the IBM Developer Kit for Java does not create links for its libraries in the /usr/lib directory. The security model for DB2 routines does not allow them to access libraries outside of the standard system libraries. To enable DECIMAL support in Java routines on Linux, perform the following steps: 1. 
Create symbolic links from the IBM Developer Kit for Java libraries to /usr/lib/ by issuing the following command with root authority: For IBM Developer Kit for Java 1.1.8: ln -sf /usr/jdk118/lib/linux/native_threads/* /usr/lib/ For IBM Developer Kit for Java 1.3: ln -sf /opt/IBMJava2-13/jre/bin/*.so /usr/lib/ 2. Issue the ldconfig command to update the list of system-wide libraries. 35.3.2 Using Cursors in Recursive Stored Procedures To avoid errors when using SQL Procedures or stored procedures written in embedded SQL, close all open cursors before issuing a recursive CALL statement. For example, assume the stored procedure MYPROC contains the following code fragment: OPEN c1; CALL MYPROC(); CLOSE c1; DB2 returns an error when MYPROC is called because cursor c1 is still open when MYPROC issues a recursive CALL statement. The specific error returned by DB2 depends on the actions MYPROC performs on the cursor. To successfully call MYPROC, rewrite MYPROC to close any open cursors before the nested CALL statement as shown in the following example: OPEN c1; CLOSE c1; CALL MYPROC(); Close all open cursors before issuing the nested CALL statement to avoid an error. 35.3.3 Writing OLE Automation Stored Procedures The last sentence in the following paragraph is missing from the second paragraph under section "Writing OLE automation Stored Procedures": After you code an OLE automation object, you must register the methods of the object as stored procedures using the CREATE PROCEDURE statement. To register an OLE automation stored procedure, issue a CREATE PROCEDURE statement with the LANGUAGE OLE clause. The external name consists of the OLE progID identifying the OLE automation object and the method name separated by ! (exclamation mark). The OLE automation object needs to be implemented as an in-process server (.DLL). ------------------------------------------------------------------------ 35.4 Chapter 12. 
Working with Complex Objects: User-Defined Structured Types 35.4.1 Inserting Structured Type Attributes Into Columns The following rule applies to embedded static SQL statements: To insert an attribute of a user-defined structured type into a column that is of the same type as the attribute, enclose the host variable that represents the instance of the type in parentheses, and append the double-dot operator and attribute name to the closing parenthesis. For example, consider the following situation: - PERSON_T is a structured type that includes the attribute NAME of type VARCHAR(30). - T1 is a table that includes a column C1 of type VARCHAR(30). - personhv is the host variable declared for type PERSON_T in the programming language. The proper syntax for inserting the NAME attribute into column C1 is: EXEC SQL INSERT INTO T1 (C1) VALUES ((:personhv)..NAME) ------------------------------------------------------------------------ 35.5 Chapter 13. Using Large Objects (LOBs) 35.5.1 Large object (LOBs) support in federated database systems DB2 supports three types of large objects (LOBs): character large objects (CLOBs), double-byte character large objects (DBCLOBs) and binary large objects (BLOBs). For general information about DB2 LOB support, see the following DB2 books: * DB2 Application Development Guide * DB2 SQL Reference * DB2 Administration Guide: Planning In a federated database system, you can access and manipulate LOBs at remote data sources. Because LOBs can be very large, transferring LOBs from a remote data source can be time consuming. The DB2 federated database attempts to minimize transferring LOB data from the data sources, and also attempts to deliver requested LOB data directly from the data source to the requesting application without materializing the LOB at DB2. 
This section discusses: * How DB2 retrieves LOBs * How applications can use LOB locators * Restrictions on LOBs * Mappings between LOB and non-LOB data types * Tuning the system 35.5.1.1 How DB2 retrieves LOBs DB2 federated systems use two mechanisms to retrieve LOBs: LOB streaming and LOB materialization. LOB streaming In LOB streaming, LOB data is retrieved in stages. DB2 uses LOB streaming for data in result sets of queries that are completely pushed down. For example, consider the following query: SELECT empname, picture FROM orc_emp_table WHERE empno = '01192345' where picture represents a LOB column and orc_emp_table represents a nickname referencing an Oracle table containing employee data. The DB2 query processor marks the picture column for streaming if it decides to run the entire query at the Oracle data source. At execution time, if DB2 notes that a LOB is marked for streaming, it retrieves the LOB in stages from the data source. DB2 then transfers the data to the application memory space. LOB materialization In LOB materialization, the remote LOB data is retrieved by DB2 and stored locally at the federated server. DB2 uses LOB materialization when: * The LOB column cannot be deferred or streamed. * A function must be applied to a LOB column locally, before the data is transferred. This happens when DB2 compensates for functions not available at a remote data source. For example, Microsoft SQL Server does not provide a SUBSTR function for LOB columns. To compensate, DB2 materializes the LOB column locally and applies the DB2 SUBSTR function to the retrieved LOB. 35.5.1.2 How applications can use LOB locators Applications can request LOB locators for LOBs stored in remote data sources. A LOB locator is a 4-byte value stored in a host variable that a program can use to refer to a LOB value (or LOB expression) held in the database system. Using a LOB locator, a program can manipulate the LOB value as if the LOB value was stored in a regular host variable. 
The difference in using the LOB locator is that there is no need to transport the LOB value from the server to the application (and possibly back again). See the DB2 Application Development Guide for additional information about LOB locators. DB2 can retrieve LOBs from remote data sources, store them at DB2, and then issue a LOB locator against the stored LOB. LOB locators are released when:
* Applications issue "FREE LOCATOR" SQL statements.
* Applications issue COMMIT statements.
* DB2 is restarted.

35.5.1.3 Restrictions on LOBs

When using and retrieving LOBs, consider that:
* DB2 is unable to bind remote LOBs to a file reference variable.
* LOBs are not supported in pass-through mode.

35.5.1.4 Mappings between LOB and non-LOB data types

There are a few cases in which you can map a DB2 LOB data type to a non-LOB data type at a data source. When you need to create a mapping between a column with a DB2 LOB type and its counterpart column at a data source, it is recommended that you use a LOB data type as the counterpart if at all possible. To create a mapping, use the CREATE TYPE MAPPING DDL statement. For example:

CREATE TYPE MAPPING my_oracle_lob FROM sysibm.clob TO SERVER TYPE oracle TYPE long

where:
my_oracle_lob   Is the name of the type mapping.
sysibm.clob     Is the DB2 CLOB data type.
oracle          Is the type of server you are connecting to.
long            Is the Oracle data type counterpart.

35.5.2 Tuning the system

If an application that retrieves remote LOBs returns an error message indicating that there are not enough system resources to process the statement, increase the value of the application heap size parameter, APPLHEAPSZ, in the database configuration file. For example:

DB2 UPDATE DB CFG FOR EMPLOYEE USING APPLHEAPSZ 512

where EMPLOYEE is the name of the database you are tuning and 512 is the value of the application heap size parameter.
------------------------------------------------------------------------
35.6 Part 5.
DB2 Programming Considerations

35.6.1 IBM DB2 OLE DB Provider

Installing IBM DB2 Version 7.1 FixPak 1 or later corrects the condition that caused DB2 to issue the following error:

Test connection failed because of an error in initializing provider. The IBM OLE DB Provider is not available at this time. Please refer to the readme file for more information.

For more information on using the IBM OLE DB Provider for DB2, refer to http://www.ibm.com/software/data/db2/udb/ad/v71/oledb.html.
------------------------------------------------------------------------
35.7 Chapter 20. Programming in C and C++

The following table supplements the information included in chapter 7, "Stored Procedures", chapter 15, "Writing User-Defined Functions and Methods", and chapter 20, "Programming in C and C++". The table lists the supported mappings between SQL data types and C data types for stored procedures, UDFs, and methods.

35.7.1 C/C++ Types for Stored Procedures, Functions, and Methods

Table 22. SQL Data Types Mapped to C/C++ Declarations

SQL Column Type (SQLTYPE)             C/C++ Data Type                                   SQL Column Type Description
SMALLINT (500 or 501)                 sqlint16                                          16-bit signed integer
INTEGER (496 or 497)                  sqlint32                                          32-bit signed integer
BIGINT (492 or 493)                   sqlint64                                          64-bit signed integer
REAL (480 or 481)                     float                                             Single-precision floating point
DOUBLE (480 or 481)                   double                                            Double-precision floating point
DECIMAL(p,s) (484 or 485)             Not supported.                                    To pass a decimal value, define the parameter to be of a data type castable from DECIMAL (for example, CHAR or DOUBLE) and explicitly cast the argument to this type.
CHAR(n) (452 or 453)                  char[n+1], where n is large enough                Fixed-length, null-terminated character string; 1<=n<=254
                                      to hold the data
CHAR(n) FOR BIT DATA (452 or 453)     char[n+1], where n is large enough                Fixed-length character string; 1<=n<=254
                                      to hold the data
VARCHAR(n) (448 or 449) (460 or 461)  char[n+1], where n is large enough                Null-terminated varying-length string; 1<=n<=32 672
                                      to hold the data
VARCHAR(n) FOR BIT DATA (448 or 449)  struct { sqluint16 length; char[n] }              Not null-terminated varying-length character string; 1<=n<=32 672
LONG VARCHAR (456 or 457)             struct { sqluint16 length; char[n] }              Not null-terminated varying-length character string; 32 673<=n<=32 700
CLOB(n) (408 or 409)                  struct { sqluint32 length; char data[n]; }        Non null-terminated varying-length character string with 4-byte string length indicator; 1<=n<=2 147 483 647
BLOB(n) (404 or 405)                  struct { sqluint32 length; char data[n]; }        Non null-terminated varying-length binary string with 4-byte string length indicator; 1<=n<=2 147 483 647
DATE (384 or 385)                     char[11]                                          Null-terminated character form
TIME (388 or 389)                     char[9]                                           Null-terminated character form
TIMESTAMP (392 or 393)                char[27]                                          Null-terminated character form

Note: The following data types are only available in the DBCS or EUC environment when precompiled with the WCHARTYPE NOCONVERT option.

GRAPHIC(n) (468 or 469)               sqldbchar[n+1], where n is large enough           Fixed-length, null-terminated double-byte character string; 1<=n<=127
                                      to hold the data
VARGRAPHIC(n) (400 or 401)            sqldbchar[n+1], where n is large enough           Not null-terminated, variable-length double-byte character string; 1<=n<=16 336
                                      to hold the data
LONG VARGRAPHIC (472 or 473)          struct { sqluint16 length; sqldbchar[n] }         Not null-terminated, variable-length double-byte character string; 16 337<=n<=16 350
DBCLOB(n) (412 or 413)                struct { sqluint32 length; sqldbchar data[n]; }   Non null-terminated varying-length character string with 4-byte string length indicator; 1<=n<=1 073 741 823

------------------------------------------------------------------------
35.8 Chapter 21. Programming in Java

35.8.1 Java Method Signature in PARAMETER STYLE JAVA Procedures and Functions

If specified after the Java method name in the EXTERNAL NAME clause of the CREATE PROCEDURE or CREATE FUNCTION statement, the Java method signature must correspond to the default Java type mapping for the signature specified after the procedure or function name. For example, the default Java mapping of the SQL type INTEGER is "int", not "java.lang.Integer".

35.8.2 Connecting to the JDBC Applet Server

It is essential that the db2java.zip file used by the Java applet be at the same FixPak level as the JDBC applet server. Under normal circumstances, db2java.zip is loaded from the Web server where the JDBC applet server is running, as shown in Figure 22 of the book. This ensures a match. If, however, your configuration has the Java applet loading db2java.zip from a different location, a mismatch can occur. Prior to FixPak 2, this could lead to unexpected failures. As of FixPak 2, matching FixPak levels between the two files is strictly enforced at connection time.
If a mismatch is detected, the connection is rejected, and the client receives one of the following exceptions:

* If db2java.zip is at FixPak 2 or later:
  COM.ibm.db2.jdbc.DB2Exception: [IBM][JDBC Driver] CLI0621E Unsupported JDBC server configuration.
* If db2java.zip is prior to FixPak 2:
  COM.ibm.db2.jdbc.DB2Exception: [IBM][JDBC Driver] CLI0601E Invalid statement handle or statement is closed. SQLSTATE=S1000

If a mismatch occurs, the JDBC applet server logs one of the following messages in the jdbcerr.log file:

* If the JDBC applet server is at FixPak 2 or later:
  jdbcFSQLConnect: JDBC Applet Server and client (db2java.zip) versions do not match. Unable to proceed with connection., einfo= -111
* If the JDBC applet server is prior to FixPak 2:
  jdbcServiceConnection(): Invalid Request Received., einfo= 0

------------------------------------------------------------------------
35.9 Appendix B. Sample Programs

The following should be added to the "Object Linking and Embedding Samples" section:

salarycltvc   A Visual C++ DB2 CLI sample that calls the Visual Basic stored procedure, salarysrv.
SALSVADO      A sample OLE automation stored procedure (SALSVADO) and client (SALCLADO), implemented in 32-bit Visual Basic and ADO, that calculates the median salary in table staff2.

------------------------------------------------------------------------
CLI Guide and Reference
------------------------------------------------------------------------
36.1 Binding Database Utilities Using the Run-Time Client

The Run-Time Client cannot be used to bind the database utilities (import, export, reorg, the command line processor) and DB2 CLI bind files; you must use the DB2 Administration Client or the DB2 Application Development Client instead. These database utilities and DB2 CLI bind files must be bound to each database before they can be used with that database.
In a network environment, if you are using multiple clients that run on different operating systems, or are at different versions or service levels of DB2, you must bind the utilities once for each operating system and DB2-version combination.
------------------------------------------------------------------------
36.2 Using Static SQL in CLI Applications

For more information on using static SQL in CLI applications, see the Web page at: http://www.ibm.com/software/data/db2/udb/staticcli/
------------------------------------------------------------------------
36.3 Limitations of JDBC/ODBC/CLI Static Profiling

JDBC/ODBC/CLI static profiling currently targets straightforward applications. It is not meant for complex applications with many functional components and complex program logic during execution.

An SQL statement must have successfully executed for it to be captured in a profiling session. In a statement matching session, unmatched dynamic statements will continue to execute as dynamic JDBC/ODBC/CLI calls.

An SQL statement must be identical character-by-character to the one that was captured and bound to be a valid candidate for statement matching. Spaces are significant: for example, "COL = 1" is considered different than "COL=1". Use parameter markers in place of literals to improve match hits.

When executing an application with pre-bound static SQL statements, dynamic registers that control the dynamic statement behavior will have no effect on the statements that are converted to static.

If an application issues DDL statements for objects that are referenced in subsequent DML statements, you will find all of these statements in the capture file. The JDBC/ODBC/CLI Static Profiling Bind Tool will attempt to bind them. The bind attempt will be successful with DBMSs that support the VALIDATE(RUN) bind option, but it will fail with ones that do not. In this case, the application should not use Static Profiling.
The Database Administrator may edit the capture file to add, change, or remove SQL statements, based on application-specific requirements.
------------------------------------------------------------------------
36.4 ADT Transforms

The following supersedes existing information in the book.

* There is a new descriptor type (smallint), SQL_DESC_USER_DEFINED_TYPE_CODE, with values:

  SQL_TYPE_BASE       0 (this is not a USER_DEFINED_TYPE)
  SQL_TYPE_DISTINCT   1
  SQL_TYPE_STRUCTURED 2

  This value can be queried with either SQLColAttribute or SQLGetDescField (IRD only). The following attributes are added to obtain the actual type names:

  SQL_DESC_REFERENCE_TYPE
  SQL_DESC_STRUCTURED_TYPE
  SQL_DESC_USER_TYPE

  The above values can be queried using SQLColAttribute or SQLGetDescField (IRD only).

* Add SQL_DESC_BASE_TYPE in case the application needs it. For example, the application may not recognize the structured type, but intends to fetch or insert it, and let other code deal with the details.

* Add a new connection attribute called SQL_ATTR_TRANSFORM_GROUP to allow an application to set the transform group (rather than use the SQL "SET CURRENT DEFAULT TRANSFORM GROUP" statement).

* Add a new statement/connection attribute called SQL_ATTR_RETURN_USER_DEFINED_TYPES that can be set or queried using SQLSetConnectAttr, which causes CLI to return the value SQL_DESC_USER_DEFINED_TYPE_CODE as a valid SQL type. This attribute is required before using any of the transforms.
  o By default, the attribute is off, and causes the base type information to be returned as the SQL type.
  o When enabled, SQL_DESC_USER_DEFINED_TYPE_CODE will be returned as the SQL_TYPE. The application is expected to check for SQL_DESC_USER_DEFINED_TYPE_CODE, and then to retrieve the appropriate type name. This will be available to SQLColAttribute, SQLDescribeCol, and SQLGetDescField.
* The SQLBindParameter does not give an error when you bind SQL_C_DEFAULT, because there is no code to allow SQLBindParameter to specify the type SQL_USER_DEFINED_TYPE. The standard default C types will be used, based on the base SQL type flowed to the server. For example: sqlrc = SQLBindParameter (hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR, 30, 0, &c2, 30, NULL); ------------------------------------------------------------------------ 36.5 Chapter 3. Using Advanced Features 36.5.1 Writing Multi-Threaded Applications The following should be added to the end of the "Multi-Threaded Mixed Applications" section: Note: It is recommended that you do not use the default stack size, but instead increase the stack size to at least 256 000. DB2 requires a minimum stack size of 256 000 when calling a DB2 function. You must ensure therefore, that you allocate a total stack size that is large enough for both your application and the minimum requirements for a DB2 function call. 36.5.2 Scrollable Cursors The following information should be added to the "Scrollable Cursors" section: 36.5.2.1 Server-side Scrollable Cursor Support for OS/390 The UDB client for the Unix, Windows, and OS/2 platforms supports updatable server-side scrollable cursors when run against OS/390 Version 7 databases. To access an OS/390 scrollable cursor on a three-tier environment, the client and the gateway must be running DB2 UDB Version 7.1, FixPak 3 or later. There are two application enablement interfaces that can access scrollable cursors: ODBC and JDBC. The JDBC interface can only access static scrollable cursors, while the ODBC interface can access static and keyset-driven server-side scrollable cursors. Cursor Attributes The table below lists the default attributes for OS/390 Version 7 cursors in ODBC. Table 23. 
Default attributes for OS/390 cursors in ODBC

Cursor Type      Cursor Sensitivity   Cursor Updatable   Cursor Concurrency      Cursor Scrollable
forward-only(a)  unspecified          non-updatable      read-only concurrency   non-scrollable
static           insensitive          non-updatable      read-only concurrency   scrollable
keyset-driven    sensitive            updatable          values concurrency      scrollable

(a) Forward-only is the default behavior for a scrollable cursor without the FOR UPDATE clause. Specifying FOR UPDATE on a forward-only cursor creates an updatable, lock concurrency, non-scrollable cursor.

Supported Fetch Orientations

All ODBC fetch orientations are supported via the SQLFetchScroll or SQLExtendedFetch interfaces.

Updating the Keyset-Driven Cursor

A keyset-driven cursor is an updatable cursor. The CLI driver appends the FOR UPDATE clause to the query, except when the query is issued as a SELECT ... FOR READ ONLY query, or if the FOR UPDATE clause already exists. The keyset-driven cursor implemented in DB2 for OS/390 is a values concurrency cursor. A values concurrency cursor results in optimistic locking, where locks are not held until an update or delete is attempted. When an update or delete is attempted, the database server compares the previous values the application retrieved to the current values in the underlying table. If the values match, the update or delete succeeds. If the values do not match, the operation fails. If failure occurs, the application should query the values again and re-issue the update or delete if it is still applicable.

An application can update a keyset-driven cursor in two ways:
* Issue an UPDATE WHERE CURRENT OF "<cursor name>" or DELETE WHERE CURRENT OF "<cursor name>" using SQLPrepare() with SQLExecute() or SQLExecDirect().
* Use SQLSetPos() or SQLBulkOperations() to update, delete, or add a row to the result set.

Note: Rows added to a result set via SQLSetPos() or SQLBulkOperations() are inserted into the table on the server, but are not added to the server's result set.
Therefore, these rows are not updatable nor are they sensitive to changes made by other transactions. The inserted rows will appear, however, to be part of the result set, since they are cached on the client. Any triggers that apply to the inserted rows will appear to the application as if they have not been applied. To make the inserted rows updatable, sensitive, and to see the result of applicable triggers, the application must issue the query again to regenerate the result set. Troubleshooting for Applications Created Before Scrollable Cursor Support Since scrollable cursor support is new, some ODBC applications that were working with previous releases of UDB for OS/390 or UDB for Unix, Windows, and OS/2 may encounter behavioral or performance changes. This occurs because before scrollable cursors were supported, applications that requested a scrollable cursor would receive a forward-only cursor. To restore an application's previous behavior before scrollable cursor support, set the following configuration keywords in the db2cli.ini file: Table 24. Configuration keyword values restoring application behavior before scrollable cursor support Configuration Keyword Setting Description PATCH2=6 Returns a message that scrollable cursors (both keyset-driven and static) are not supported. CLI automatically downgrades any request for a scrollable cursor to a forward-only cursor. DisableKeysetCursor=1 Disables both the server-side and client-side keyset-driven scrollable cursors. This can be used to force the CLI driver to give the application a static cursor when a keyset-driven cursor is requested. UseServerKeysetCursor=0 Disables the server-side keyset-driven cursor for applications that are using the client-side keyset-driven cursor library to simulate a keyset-driven cursor. 
Only use this option when problems are encountered with the server-side keyset-driven cursor, since the client-side cursor incurs a large amount of overhead and will generally have poorer performance than a server-side cursor.

36.5.3 Using Compound SQL

The following note is missing from the book:

Any SQL statement that can be prepared dynamically, other than a query, can be executed as a statement inside a compound statement.

Note: Inside Atomic Compound SQL, savepoint, release savepoint, and rollback to savepoint SQL statements are also disallowed. Conversely, Atomic Compound SQL is disallowed within a savepoint.

36.5.4 Using Stored Procedures

36.5.4.1 Writing a Stored Procedure in CLI

Following is an undocumented limitation on CLI stored procedures: If you are making calls to multiple CLI stored procedures, the application must close the open cursors from one stored procedure before calling the next stored procedure. More specifically, the first set of open cursors must be closed before the next stored procedure tries to open a cursor.

36.5.4.2 CLI Stored Procedures and Autobinding

The following supplements information in the book:

The CLI/ODBC driver will normally autobind the CLI packages the first time a CLI/ODBC application executes SQL against the database, provided the user has the appropriate privilege or authorization. Autobinding of the CLI packages cannot be performed from within a stored procedure, and therefore will not take place if the very first thing an application does is call a CLI stored procedure. Before running a CLI application that calls a CLI stored procedure against a new DB2 database, you must bind the CLI packages once with this command:

UNIX             db2 bind /@db2cli.lst blocking all
Windows and OS/2 db2 bind "%DB2PATH%\bnd\@db2cli.lst" blocking

The recommended approach is to always bind these packages at the time the database is created to avoid autobind at runtime.
Autobind can fail if the user does not have privilege, or if another application tries to autobind at the same time.
------------------------------------------------------------------------
36.6 Chapter 4. Configuring CLI/ODBC and Running Sample Applications

36.6.1 Configuration Keywords

Disregard the last paragraph in the CURRENTFUNCTIONPATH keyword. The correct information is as follows:

This keyword is used as part of the process for resolving unqualified function and stored procedure references that may have been defined in a schema name other than the current user's schema. The order of the schema names determines the order in which the function and procedure names will be resolved. For more information on function and procedure resolution, refer to the SQL Reference.
------------------------------------------------------------------------
36.7 Chapter 5. DB2 CLI Functions

36.7.1 SQLBindFileToParam - Bind LOB File Reference to LOB Parameter

The last parameter - IndicatorValue - in the SQLBindFileToParam() CLI function is currently documented as "output (deferred)". It should be "input (deferred)".

36.7.2 SQLNextResult - Associate Next Result Set with Another Statement Handle

The following text should be added to Chapter 5, "DB2 CLI Functions":

36.7.2.1 Purpose

Specification: DB2 CLI 7.x

36.7.2.2 Syntax

SQLRETURN SQLNextResult (SQLHSTMT StatementHandle1,
                         SQLHSTMT StatementHandle2);

36.7.2.3 Function Arguments

Table 25. SQLNextResult Arguments

Data Type   Argument           Use     Description
SQLHSTMT    StatementHandle1   input   Statement handle.
SQLHSTMT    StatementHandle2   input   Statement handle.

36.7.2.4 Usage

A stored procedure returns multiple result sets by leaving one or more cursors open after exiting. The first result set is always accessed by using the statement handle that called the stored procedure. If multiple result sets are returned, either SQLMoreResults() or SQLNextResult() can be used to describe and fetch the result set.
SQLMoreResults() is used to close the cursor for the first result set and allow the next result set to be processed, whereas SQLNextResult() moves the next result set to StatementHandle2, without closing the cursor on StatementHandle1. Both functions return SQL_NO_DATA_FOUND if there are no result sets to be fetched.

Using SQLNextResult() allows result sets to be processed in any order once they have been transferred to other statement handles. Mixed calls to SQLMoreResults() and SQLNextResult() are allowed until there are no more cursors (open result sets) on StatementHandle1.

When SQLNextResult() returns SQL_SUCCESS, the next result set is no longer associated with StatementHandle1. Instead, the next result set is associated with StatementHandle2, as if a call to SQLExecDirect() had just successfully executed a query on StatementHandle2. The cursor, therefore, can be described using SQLNumResultCols(), SQLDescribeCol(), or SQLColAttribute().

After SQLNextResult() has been called, the result set now associated with StatementHandle2 is removed from the chain of remaining result sets and cannot be used again in either SQLNextResult() or SQLMoreResults(). This means that for 'n' result sets, SQLNextResult() can be called successfully at most 'n-1' times.

If SQLFreeStmt() is called with the SQL_CLOSE option, or SQLFreeHandle() is called with HandleType set to SQL_HANDLE_STMT, all pending result sets on this statement handle are discarded.

SQLNextResult() returns SQL_ERROR if StatementHandle2 has an open cursor or StatementHandle1 and StatementHandle2 are not on the same connection. If any errors or warnings are returned, SQLError() must always be called on StatementHandle1.

Note: SQLMoreResults() also works with a parameterized query with an array of input parameter values specified with SQLParamOptions() and SQLBindParameter(). SQLNextResult(), however, does not support this.
36.7.2.5 Return Codes

* SQL_SUCCESS
* SQL_SUCCESS_WITH_INFO
* SQL_STILL_EXECUTING
* SQL_ERROR
* SQL_INVALID_HANDLE
* SQL_NO_DATA_FOUND

36.7.2.6 Diagnostics

Table 26. SQLNextResult SQLSTATEs

SQLSTATE     Description                        Explanation
40003 08S01  Communication link failure.        The communication link between the application and data source failed before the function completed.
58004        Unexpected system failure.         Unrecoverable system error.
HY001        Memory allocation failure.         DB2 CLI is unable to allocate the memory required to support execution or completion of the function.
HY010        Function sequence error.           The function was called while in a data-at-execute (SQLParamData(), SQLPutData()) operation. StatementHandle2 has an open cursor associated with it. The function was called while within a BEGIN COMPOUND and END COMPOUND SQL operation.
HY013        Unexpected memory handling error.  DB2 CLI was unable to access the memory required to support execution or completion of the function.
HYT00        Time-out expired.                  The time-out period expired before the data source returned the result set. Time-outs are only supported on non-multitasking systems such as Windows 3.1 and Macintosh System 7. The time-out period can be set using the SQL_ATTR_QUERY_TIMEOUT attribute for SQLSetConnectAttr().

36.7.2.7 Restrictions

Only SQLMoreResults() can be used for parameterized queries.

36.7.2.8 References

* "SQLMoreResults - Determine If There Are More Result Sets" on page 535
* "Returning Result Sets from Stored Procedures" on page 120
------------------------------------------------------------------------
36.8 Appendix D. Extended Scalar Functions

36.8.1 Date and Time Functions

The following functions are missing from the Date and Time Functions section of Appendix D "Extended Scalar Functions":

DAYOFWEEK_ISO( date_exp )
Returns the day of the week in date_exp as an integer value in the range 1-7, where 1 represents Monday.
Note the difference between this function and the DAYOFWEEK() function, where 1 represents Sunday.

WEEK_ISO( date_exp )
Returns the week of the year in date_exp as an integer value in the range of 1-53. Week 1 is defined as the first week of the year to contain a Thursday. Therefore, Week 1 is equivalent to the first week that contains Jan 4, since Monday is considered to be the first day of the week. Note that WEEK_ISO() differs from the current definition of WEEK(), which returns a value up to 54. For the WEEK() function, Week 1 is the week containing the first Saturday. This is equivalent to the week containing Jan. 1, even if the week contains only one day.

DAYOFWEEK_ISO() and WEEK_ISO() are automatically available in a database created in Version 7. If a database was created prior to Version 7, these functions may not be available. To make the DAYOFWEEK_ISO() and WEEK_ISO() functions available in such a database, use the db2updb system command. For more information about db2updb, see the "Command Reference" section in these Release Notes.
------------------------------------------------------------------------
36.9 Appendix K. Using the DB2 CLI/ODBC/JDBC Trace Facility

The sections within this appendix have been updated. See the "Traces" chapter in the Troubleshooting Guide for the most up-to-date information on this trace facility.
------------------------------------------------------------------------
Message Reference
------------------------------------------------------------------------
37.1 Getting Message and SQLSTATE Help

The help available from the command line processor contains new and updated help for messages and SQLSTATE values that is not available in the Message Reference. To display message help from the command line processor, enter the following command at the operating system command prompt:

db2 "? XXXnnnnn"

where XXX represents the message prefix and nnnnn represents the message number. For example, db2 "? SQL30081" displays help about the SQL30081 message. To display SQLSTATE text from the command line processor, enter the following command at the operating system command prompt:

db2 "? XXXXX"

where XXXXX represents the SQLSTATE value. For example, db2 "? 428F1" displays the text for SQLSTATE 428F1.
------------------------------------------------------------------------
37.2 SQLCODE Remapping Change in DB2 Connect

The default SQLCODE remapping for DB2 Connect has changed in Version 7.2. When a host database returns SQLCODE value -567, DB2 Connect now remaps the SQLCODE value to -551 before returning the SQLCODE value to the DB2 client.
------------------------------------------------------------------------
37.3 New and Changed Messages

The following list contains the message numbers of messages that have changed since the Message Reference was published for DB2 Version 7.1. If you receive one of these messages while using DB2, you will receive the correct updated message; however, the message will not correspond with the information in the Message Reference.
37.3.1 Call Level Interface (CLI) Messages CLI0645E CLI0646E CLI0647E 37.3.2 DB2 Messages DB21086I DB210060E DB210061E DB210062E DB210113E DB210114E DB210115E DB210116E DB210117E DB210118E DB210120E DB210121E DB210200I DB210201I 37.3.3 DBI Messages DBI1172E DBI1793W DBI1794E DBI1795E DBI1796W DBI1797I 37.3.4 Data Warehouse Center (DWC) Messages DWC0000I DWC03504E DWC08900E DWC08901E DWC08902E DWC08903E DWC08904E DWC08907C DWC08908C DWC08909C DWC08910E DWC08911E DWC08912E DWC08913E DWC08914E DWC08915E DWC08917E DWC08919I DWC08930E DWC08931E DWC08932E DWC08933E DWC08934E DWC08935E DWC08936W DWC08937I DWC08938I DWC08939I DWC08940I DWC08941I DWC08960I DWC08961I DWC08962I DWC08963I DWC08964I DWC08965I DWC08966E DWC08967E DWC08968E DWC13239E DWC13300E DWC13301E DWC13302E DWC13304E DWC13603E DWC13700E DWC13701E DWC13702E DWC13703E DWC13705E DWC13706E DWC13707E 37.3.5 SQL Messages SQL0017N SQL0056N SQL0057N SQL0058N SQL0097N SQL0224N SQL0225N SQL0227N SQL0228N SQL0231W SQL0243N SQL0244N SQL0270N SQL0301N SQL0303N SQL0336N SQL0348N SQL0349N SQL0357N SQL0358N SQL0368N SQL0408N SQL0423N SQL0590N SQL0670N SQL0845N SQL0846N SQL1179W SQL1186N SQL1550N SQL1551N SQL1552N SQL1553N SQL1704N SQL2077W SQL2078N SQL2417N SQL2426N SQL2571N SQL2572N SQL2573N SQL2574N SQL2575N SQL2576N SQL4942N SQL5012N SQL6583N SQL20005N SQL20117N SQL20121N SQL20133N SQL20134N SQL20135N SQL20143N SQL20144N SQL20145N SQL20146N SQL20147N SQL20148N SQL20153N SQL21000N ------------------------------------------------------------------------ 37.4 Corrected SQLSTATES Table 27. 42630 An SQLSTATE or SQLCODE variable is not valid in this context. 42631 An expression must be specified on a RETURN statement in an SQL function. 42632 There must be a RETURN statement in an SQL function or method. 428F2 An integer expression must be specified on a RETURN statement in an SQL procedure. 560B7 For a multiple row INSERT, the usage of a NEXTVAL sequence expression must be the same for each row. 
------------------------------------------------------------------------
SQL Reference
------------------------------------------------------------------------
38.1 SQL Reference is Provided in One PDF File

The "Using the DB2 Library" appendix in each book indicates that the SQL Reference is available in PDF format as two separate volumes. This is incorrect. Although the printed book appears in two volumes, and the two corresponding form numbers are correct, there is only one PDF file, and it contains both volumes. The PDF file name is db2s0x70.
------------------------------------------------------------------------
38.2 Chapter 3. Language Elements

38.2.1 Naming Conventions and Implicit Object Name Qualifications

Add the following note to this section in Chapter 3: The following names, when used in the context of SQL Procedures, are restricted to the characters allowed in an ordinary identifier, even if the names are delimited:
- condition-name
- label
- parameter-name
- procedure-name
- SQL-variable-name
- statement-name

38.2.2 DATALINK Assignments

A paragraph in this section has been changed to the following: Note that the size of a URL parameter or function result is the same on both input and output, and is bound by the length of the DATALINK column. However, in some cases the URL value returned has an access token attached. In situations where this is possible, the output location must have sufficient storage space for the access token and the length of the DATALINK column. Hence, the actual length of the comment and URL in its fully expanded form provided on input should be restricted to accommodate the output storage space. If the restricted length is exceeded, this error is raised.

38.2.3 Expressions

38.2.3.1 Syntax Diagram

The syntax diagram has changed. An expression is an optionally signed (+ or -) operand, or a series of such operands connected by an operator, where:

* An operand is one of: function, (expression), constant, column-name, host-variable, special-register, (scalar-fullselect), labeled-duration, case-expression, cast-specification, dereference-operation, OLAP-function, method-invocation, subtype-treatment, or sequence-reference.
* An operator is one of: CONCAT, /, *, +, or -. (|| may be used as a synonym for CONCAT.)

38.2.3.2 OLAP Functions

The following represents a correction to the "OLAP Functions" section under "Expressions" in Chapter 3.

aggregation-function:
  column-function OVER ( [window-partition-clause]
                         [window-order-clause [window-aggregation-group-clause]] )

window-order-clause:
  ORDER BY sort-key-expression [asc option | desc option]
           [, sort-key-expression [asc option | desc option]]...
  asc option:   ASC [NULLS LAST (default) | NULLS FIRST]
  desc option:  DESC [NULLS FIRST (default) | NULLS LAST]

window-aggregation-group-clause:
  {ROWS | RANGE} {group-start | group-between | group-end}

group-end:
  {UNBOUNDED FOLLOWING | unsigned-constant FOLLOWING}

In the window-order-clause description:

NULLS FIRST The window ordering considers null values before all non-null values in the sort order.
NULLS LAST The window ordering considers null values after all non-null values in the sort order. In the window-aggregation-group-clause description: window-aggregation-group-clause The aggregation group of a row R is a set of rows, defined relative to R in the ordering of the rows of R's partition. This clause specifies the aggregation group. If this clause is not specified, the default is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, providing a cumulative aggregation result. ROWS Indicates the aggregation group is defined by counting rows. RANGE Indicates the aggregation group is defined by an offset from a sort key. group-start Specifies the starting point for the aggregation group. The aggregation group end is the current row. Specification of the group-start clause is equivalent to a group-between clause of the form "BETWEEN group-start AND CURRENT ROW". group-between Specifies the aggregation group start and end based on either ROWS or RANGE. group-end Specifies the ending point for the aggregation group. The aggregation group start is the current row. Specification of the group-end clause is equivalent to a group-between clause of the form "BETWEEN CURRENT ROW AND group-end". UNBOUNDED PRECEDING Includes the entire partition preceding the current row. This can be specified with either ROWS or RANGE. Also, this can be specified with multiple sort-key-expressions in the window-order-clause. UNBOUNDED FOLLOWING Includes the entire partition following the current row. This can be specified with either ROWS or RANGE. Also, this can be specified with multiple sort-key-expressions in the window-order-clause. CURRENT ROW Specifies the start or end of the aggregation group based on the current row. If ROWS is specified, the current row is the aggregation group boundary. If RANGE is specified, the aggregation group boundary includes the set of rows with the same values for the sort-key-expressions as the current row. 
This clause cannot be specified in group-bound2 if group-bound1 specifies value FOLLOWING. value PRECEDING Specifies either the range or number of rows preceding the current row. If ROWS is specified, then value is a positive integer indicating a number of rows. If RANGE is specified, then the data type of value must be comparable to the type of the sort-key-expression of the window-order-clause. There can only be one sort-key-expression, and the data type of the sort-key-expression must allow subtraction. This clause cannot be specified in group-bound2 if group-bound1 is CURRENT ROW or value FOLLOWING. value FOLLOWING Specifies either the range or number of rows following the current row. If ROWS is specified, then value is a positive integer indicating a number of rows. If RANGE is specified, then the data type of value must be comparable to the type of the sort-key-expression of the window-order-clause. There can only be one sort-key-expression, and the data type of the sort-key-expression must allow addition. 38.2.3.3 Sequence Reference The following information should be added to the end of the Expressions section (after "Subtype Treatment"). sequence-reference |--+-| nextval-expression |-+-----------------------------------| '-| prevval-expression |-' nextval-expression |---NEXTVAL FOR--sequence-name----------------------------------| prevval-expression |---PREVVAL FOR--sequence-name----------------------------------| NEXTVAL FOR sequence-name A NEXTVAL expression returns the next value for the sequence specified by sequence-name. PREVVAL FOR sequence-name A PREVVAL expression returns the most recently generated value for the specified sequence for a previous statement within the current session. This value can be repeatedly referenced using PREVVAL expressions specifying the name of the sequence. There may be multiple instances of PREVVAL expressions specifying the same sequence name within a single statement and they all return the same value. 
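The NEXTVAL and PREVVAL behavior described above can be sketched in a small Python model. This is an illustration of the documented session semantics only, not how DB2 implements sequences (engine-side caching and recovery are not modeled), and the class and method names are invented for the sketch:

```python
class ToySequence:
    """Toy model of the documented NEXTVAL/PREVVAL session semantics."""

    def __init__(self, start=1, increment=1):
        self._next = start
        self._increment = increment
        self._prev = None  # most recently generated value in this session

    def nextval(self):
        # Each NEXTVAL reference generates (and consumes) a new value.
        value = self._next
        self._next += self._increment
        self._prev = value
        return value

    def prevval(self):
        # PREVVAL is valid only after a NEXTVAL for the same sequence has
        # been referenced in the current session (SQLSTATE 51035).
        if self._prev is None:
            raise RuntimeError("SQLSTATE 51035: NEXTVAL not yet referenced")
        return self._prev


# A NEXTVAL reference in one statement generates a value; PREVVAL in a
# later statement of the same session sees that same value again.
order_seq = ToySequence(start=1, increment=1)
order_no = order_seq.nextval()   # generates the value used for the order row
line_item = order_seq.prevval()  # reuses it for the line_item row
```

This mirrors the order/line_item example later in this section, where NEXTVAL generates the order number and PREVVAL carries it into the next INSERT.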
A PREVVAL expression can only be used if a NEXTVAL expression specifying the same sequence name has already been referenced in the current user session (in the current or a previous transaction) (SQLSTATE 51035). Note: o A new sequence number is generated when a NEXTVAL expression specifies the name of the sequence. However, if there are multiple instances of a NEXTVAL expression specifying the same sequence name within a query, the counter for the sequence is incremented only once for each row of the result. o The most recently generated value for a sequence can be repeatedly referenced using a PREVVAL expression specifying the name of the sequence. There may be multiple instances of a PREVVAL expression specifying the same sequence name within a single statement. o The same sequence number can be used as a unique key value in two separate tables by referencing the sequence number with a NEXTVAL expression for the first row (this generates the sequence value), and a PREVVAL expression for the other rows (this instance of PREVVAL refers to the sequence value generated by the NEXTVAL expression in the previous statement), as shown below: INSERT INTO order(orderno, custno) VALUES (NEXTVAL FOR order_seq, 123456); INSERT INTO line_item (orderno, partno, quantity) VALUES (PREVVAL FOR order_seq, 987654, 1); o Examples of where NEXTVAL and PREVVAL expressions can be specified are: + select-statement or SELECT INTO statement: within the select-clause as long as the statement does not contain a DISTINCT keyword, a GROUP BY clause, an ORDER BY clause, a UNION keyword, an INTERSECT keyword, or EXCEPT keyword + INSERT statement: within a VALUES clause + INSERT statement: within the select-clause of the fullselect + UPDATE statement: within the select-clause of the fullselect of an expression in the SET clause (either searched or positioned UPDATE statement) + VALUES INTO statement: within the select-clause of the fullselect of an expression o Examples of where NEXTVAL and 
PREVVAL expressions cannot be specified (SQLSTATE 428F9) are:
  + join condition of a full outer join
  + DEFAULT value for a column in a CREATE TABLE or ALTER TABLE statement
  + generated column definition in a CREATE TABLE or ALTER TABLE statement
  + condition of a CHECK constraint
  + CREATE TRIGGER statement
  + CREATE VIEW statement
  + CREATE METHOD statement
  + CREATE FUNCTION statement.
o In addition, a NEXTVAL expression cannot be specified (SQLSTATE 428F9)
  in:
  + CASE expression
  + parameter list of an aggregate function
  + subquery
  + SELECT statement that contains a DISTINCT operator
  + join condition of a join
  + GROUP BY clause of a SELECT statement
  + SELECT statement that is combined with another SELECT statement using
    the UNION, INTERSECT, or EXCEPT set operator
  + nested table expression
  + parameter list of a table function
  + WHERE clause of a SELECT, DELETE, or UPDATE statement
  + ORDER BY clause
  + parameter list of a CALL statement.
o When a value is generated for a sequence, that value is consumed, and
  the next time that a value is needed, a new value will be generated.
  This is true even when the statement containing the NEXTVAL expression
  fails.
o If an INSERT statement includes a NEXTVAL expression in the VALUES list
  for the column, and if some error occurs at some point during the
  execution of the INSERT (it could be a problem in generating the next
  sequence value, or a problem with the value for another column), then
  an insertion failure occurs, and the value generated for the sequence
  is considered to be consumed. In some cases, reissuing the same INSERT
  statement might lead to success. For example, consider an error that is
  the result of the existence of a unique index for the column for which
  NEXTVAL was used and the sequence value generated already exists in the
  index. It is possible that the next value generated for the sequence is
  a value that does not exist in the index and so the subsequent INSERT
  would succeed.
o If in generating a value for a sequence, the maximum value for the sequence is exceeded (or the minimum value for a descending sequence) and cycles are not permitted, then an error occurs (SQLSTATE 23522). In this case, the user could ALTER the sequence to extend the range of acceptable values, or enable cycles for the sequence, or DROP and CREATE a new sequence with a different data type that has a larger range of values. For example, a sequence may have been defined with a data type of SMALLINT, and eventually the sequence runs out of assignable values. To redefine the sequence as INTEGER, you would need to drop and recreate the sequence with the new definition. o A reference to PREVVAL in a select statement of a cursor refers to a value that was generated for the specified sequence prior to the opening of the cursor. However, closing the cursor can affect the values returned by PREVVAL for the specified sequence in subsequent statements, or even for the same statement in the event that the cursor is reopened. This would be the case when the select statement of the cursor included a reference to NEXTVAL for the same sequence name. Examples: These examples assume that there is a table called "order" and that a sequence called "order_seq" is created as follows: CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 NOMAXVALUE NOCYCLE CACHE 24 * Some examples of how to generate an "order_seq" sequence number with a NEXTVAL expression for the sequence created above: INSERT INTO order(orderno, custno) VALUES (NEXTVAL FOR order_seq, 123456); or, UPDATE order SET orderno = NEXTVAL FOR order_seq WHERE custno = 123456; or, VALUES NEXTVAL FOR order_seq INTO :hv_seq; ------------------------------------------------------------------------ 38.3 Chapter 4. Functions 38.3.1 Enabling the New Functions and Procedures Version 7 FixPaks deliver new SQL built-in scalar functions. Refer to the SQL Reference updates for a description of these new functions. 
The new functions are not automatically enabled on each database when the database server code is upgraded to the new service level. To enable these new functions, the system administrator must issue the command db2updv7, specifying each database at the server. This command makes an entry in the database that ensures that database objects created prior to executing this command use existing function signatures that may match the new function signatures. For information on enabling the MQSeries functions (those defined in the MQDB2 schema), see MQSeries. 38.3.2 Scalar Functions 38.3.2.1 ABS or ABSVAL >>-+-ABS----+--(expression)------------------------------------>< '-ABSVAL-' The schema is SYSIBM. This function was first available in FixPak 2 of Version 7.1. Note: The SYSFUN version of the ABS (or ABSVAL) function continues to be available. Returns the absolute value of the argument. The argument is an expression that returns a value of any built-in numeric data type. The result of the function has the same data type and length attribute as the argument. If the argument can be null or the database is configured with DFT_SQLMATHWARN set to yes, then the result can be null; if the argument is null, the result is the null value. For example: ABS(-51234) returns an INTEGER with a value of 51234. 38.3.2.2 DECRYPT_BIN and DECRYPT_CHAR >>-+-DECRYPT_BIN--+---------------------------------------------> '-DECRYPT_CHAR-' >----(--encrypted-data--+--------------------------------+---)-->< '-,--password-string-expression--' The schema is SYSIBM. This function was first available in FixPak 3 of Version 7.1. The DECRYPT_BIN and DECRYPT_CHAR functions return a value that is the result of decrypting encrypted-data. The password used for decryption is either the password-string-expression value or the ENCRYPTION PASSWORD value (as assigned using the SET ENCRYPTION PASSWORD statement). 
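Before the detailed rules below, the overall contract of ENCRYPT, DECRYPT_BIN/DECRYPT_CHAR, and GETHINT can be illustrated with a toy Python model. The XOR "cipher" and the field layout here are illustrative assumptions only: DB2 actually uses an RC2 block cipher with an MD2-derived key, its on-disk format is not documented here, and, unlike this toy, it detects a wrong password (SQLSTATE 428FD):

```python
# Toy model of the ENCRYPT / DECRYPT_CHAR / GETHINT contract.
# NOT DB2's real format: a throwaway XOR "cipher" stands in for RC2.

def toy_encrypt(data: str, password: str, hint: str = "") -> bytes:
    if not 6 <= len(password) <= 127:
        raise ValueError("SQLSTATE 428FC: password must be 6 to 127 bytes")
    raw = data.encode()
    key = password.encode()
    body = bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))
    # 4-byte length header, then a 32-byte hint field stored in the clear
    # (so GETHINT can read it without the password), then the "ciphertext".
    return len(raw).to_bytes(4, "big") + hint.encode()[:32].ljust(32) + body

def toy_decrypt(blob: bytes, password: str) -> str:
    # Only the password used to encrypt can recover the original value;
    # this toy cannot detect a wrong password the way DB2 does (428FD).
    n = int.from_bytes(blob[:4], "big")
    key = password.encode()
    body = blob[36:36 + n]
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(body)).decode()

def toy_gethint(blob: bytes) -> str:
    # The hint, if any, travels with the encrypted value.
    return blob[4:36].decode().rstrip()
```

The point of the sketch is the round-trip contract: decryption with the original password returns the original string, and the hint (when supplied) is retrievable without decrypting.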
The DECRYPT_BIN and DECRYPT_CHAR functions can only decrypt values that were encrypted using the ENCRYPT function (SQLSTATE 428FE). encrypted-data An expression that returns a CHAR FOR BIT DATA or VARCHAR FOR BIT DATA value that is a complete, encrypted data string that was encrypted using the ENCRYPT function. password-string-expression An expression that returns a CHAR or VARCHAR value with at least 6 bytes and no more than 127 bytes (SQLSTATE 428FC). This should be the same password used to encrypt the data or decryption will result in an error (SQLSTATE 428FD). If the value of the password argument is null or not provided, the data will be decrypted using the ENCRYPTION PASSWORD value, which must have been set for the session (SQLSTATE 51039). The result of the DECRYPT_BIN function is VARCHAR FOR BIT DATA. The result of the DECRYPT_CHAR function is VARCHAR. If the encrypted-data included a hint, the hint is not returned by the function. The length attribute of the result is the length attribute of the data type of encrypted-data minus 8 bytes. The actual length of the value returned by the function will match the length of the original string that was encrypted. If the encrypted-data includes bytes beyond the encrypted string, these bytes are not returned by the function. If the first argument can be null, the result can be null; if the first argument is null, the result is the null value. If the data is decrypted on a different system using a code page other than the code page in which the encryption took place, it is possible that expansion may occur when converting the decrypted value to the database code page. In such situations, the encrypted-data value should be cast to a VARCHAR string with a larger number of bytes. Also see 38.3.2.3, ENCRYPT and 38.3.2.4, GETHINT for additional information on using this function. Examples: Example 1: This example uses the ENCRYPTION PASSWORD value to hold the encryption password. 
  SET ENCRYPTION PASSWORD = 'Ben123';
  INSERT INTO EMP (SSN) VALUES ENCRYPT('289-46-8832');
  SELECT DECRYPT_CHAR(SSN) FROM EMP;

The value returned is '289-46-8832'.

Example 2: This example explicitly passes the encryption password.

  SELECT DECRYPT_CHAR(SSN,'Ben123') FROM EMP;

The value returned is '289-46-8832'.

38.3.2.3 ENCRYPT

>>-ENCRYPT------------------------------------------------------>
>----(--data-string-expression--+----------------------------------------------------------------+---)->
                                '-,--password-string-expression--+----------------------------+--'
                                                                 '-,--hint-string-expression--'
>--------------------------------------------------------------><

The schema is SYSIBM. This function was first available in FixPak 3 of
Version 7.1.

The ENCRYPT function returns a value that is the result of encrypting
data-string-expression. The password used for encryption is either the
password-string-expression value or the ENCRYPTION PASSWORD value (as
assigned using the SET ENCRYPTION PASSWORD statement).

data-string-expression
  An expression that returns a CHAR or VARCHAR value to be encrypted. The
  length attribute for the data type of data-string-expression is limited
  to 32663 without a hint-string-expression argument and 32631 when the
  hint-string-expression argument is specified (SQLSTATE 42815).
password-string-expression
  An expression that returns a CHAR or VARCHAR value with at least 6
  bytes and no more than 127 bytes (SQLSTATE 428FC). The value represents
  the password used to encrypt the data-string-expression. If the value
  of the password argument is null or not provided, the data will be
  encrypted using the ENCRYPTION PASSWORD value, which must have been set
  for the session (SQLSTATE 51039).
hint-string-expression
  An expression that returns a CHAR or VARCHAR value up to 32 bytes that
  will help data owners remember passwords (for example, 'Ocean' as a
  hint to remember 'Pacific').
If a hint value is given, the hint is embedded into the result and can be retrieved using the GETHINT function. If this argument is null or not provided, no hint will be embedded in the result. The result data type of the function is VARCHAR FOR BIT DATA. The length attribute of the result is: * When the optional hint parameter is specified, the length attribute of the non-encrypted data + 8 bytes + the number of bytes to the next 8 byte boundary + 32 bytes for the hint length. * With no hint parameter, the length attribute of the non-encrypted data + 8 bytes + the number of bytes to the next 8 byte boundary. If the first argument can be null, the result can be null; if the first argument is null, the result is the null value. Notice that the encrypted result is longer than the data-string-expression value. Therefore, when assigning encrypted values, ensure that the target is declared with sufficient size to contain the entire encrypted value. Notes: * Encryption Algorithm: The internal encryption algorithm used is RC2 block cipher with padding, the 128-bit secret key is derived from the password using a MD2 message digest. * Encryption Passwords and Data: It is the user's responsibility to perform password management. Once the data is encrypted only the password used to encrypt it can be used to decrypt it (SQLSTATE 428FD). Be careful when using CHAR variables to set password values as they may be padded with blanks. The encrypted result may contain null terminator and other non-printable characters. * Table Column Definition: When defining columns and types to contain encrypted data always calculate the length attribute as follows. For encrypted data with no hint: Maximum length of the non-encrypted data + 8 bytes + the number of bytes to the next 8 byte boundary = encrypted data column length. 
For encrypted data with embedded hint:

  Maximum length of the non-encrypted data + 8 bytes + the number of
  bytes to the next 8 byte boundary + 32 bytes for the hint length =
  encrypted data column length.

Any assignment or cast to a length shorter than the suggested data length
may result in failed decryption in the future and lost data. Blanks are
valid encrypted data values that may be truncated when stored in a column
that is too short.

Sample Column Length Calculations

  Maximum length of non-encrypted data             6 bytes
  8 bytes                                          8 bytes
  Number of bytes to the next 8 byte boundary      2 bytes
                                                  ---------
  Encrypted data column length                    16 bytes

  Maximum length of non-encrypted data            32 bytes
  8 bytes                                          8 bytes
  Number of bytes to the next 8 byte boundary      8 bytes
                                                  ---------
  Encrypted data column length                    48 bytes

* Administration of encrypted data: Encrypted data can only be decrypted
  on servers that support the decryption functions that correspond to the
  ENCRYPT function. Hence, replication of columns with encrypted data
  should only be done to servers that support the DECRYPT_BIN or
  DECRYPT_CHAR function.

Also see 38.3.2.2, DECRYPT_BIN and DECRYPT_CHAR and 38.3.2.4, GETHINT for
additional information on using this function.

Examples:

Example 1: This example uses the ENCRYPTION PASSWORD value to hold the
encryption password.

  SET ENCRYPTION PASSWORD = 'Ben123';
  INSERT INTO EMP (SSN) VALUES ENCRYPT('289-46-8832');

Example 2: This example explicitly passes the encryption password.

  INSERT INTO EMP (SSN) VALUES ENCRYPT('289-46-8832','Ben123','');

Example 3: The hint 'Ocean' is stored to help the user remember the
encryption password of 'Pacific'.

  INSERT INTO EMP (SSN) VALUES ENCRYPT('289-46-8832','Pacific','Ocean');

38.3.2.4 GETHINT

>>-GETHINT--(--encrypted-data--)-------------------------------><

The schema is SYSIBM. This function was first available in FixPak 3 of
Version 7.1.

The GETHINT function will return the password hint if one is found in the
encrypted-data.
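The column length rules from the ENCRYPT section above can be condensed into a small helper function. This is a sketch of the documented sizing arithmetic only; `encrypted_column_length` is a hypothetical name, not a DB2 API:

```python
def encrypted_column_length(max_data_len: int, with_hint: bool = False) -> int:
    """Column length needed to hold ENCRYPT output, per the rules above:
    data length + 8 bytes of overhead, plus the bytes to the next 8-byte
    boundary, plus 32 bytes when a hint is embedded."""
    pad = 8 - max_data_len % 8          # bytes to the next 8-byte boundary
    total = max_data_len + 8 + pad
    return total + 32 if with_hint else total
```

It reproduces the sample calculations in the ENCRYPT section: 6-byte data needs a 16-byte column and 32-byte data a 48-byte column; embedding a hint adds 32 bytes to either.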
A password hint is a phrase that will help data owners remember passwords
(for example, 'Ocean' as a hint to remember 'Pacific').

encrypted-data
  An expression that returns a CHAR FOR BIT DATA or VARCHAR FOR BIT DATA
  value that is a complete, encrypted data string that was encrypted
  using the ENCRYPT function (SQLSTATE 428FE).

The result of the function is VARCHAR(32). The result can be null; if the
hint parameter was not added to the encrypted-data by the ENCRYPT
function or the first argument is null, the result is the null value.

Also see 38.3.2.2, DECRYPT_BIN and DECRYPT_CHAR and 38.3.2.3, ENCRYPT for
additional information on using this function.

Example: In this example the hint 'Ocean' is stored to help the user
remember the encryption password 'Pacific'.

  INSERT INTO EMP (SSN) VALUES ENCRYPT('289-46-8832', 'Pacific','Ocean');
  SELECT GETHINT(SSN) FROM EMP;

The value returned is 'Ocean'.

38.3.2.5 IDENTITY_VAL_LOCAL

>>-IDENTITY_VAL_LOCAL--(--)------------------------------------><

The schema is SYSIBM. This function was first available in FixPak 3 of
Version 7.1.

The IDENTITY_VAL_LOCAL function is a non-deterministic function that
returns the most recently assigned value for an identity column, where
the assignment occurred as a result of a single row INSERT statement
using a VALUES clause. The function has no input parameters. The result
is a DECIMAL(31,0), regardless of the actual data type of the
corresponding identity column.

The value returned by the function is the value assigned to the identity
column of the table identified in the most recent single row INSERT
statement. The INSERT statement must be made using a VALUES clause on a
table containing an identity column. Also, the INSERT statement must be
issued at the same level 1 (that is, the value is available locally at
the level it was assigned, until it is replaced by the next assigned
value).
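The "most recent single row INSERT at the same level" rule can be modeled roughly in Python. This sketches the documented behavior only, not DB2 internals; the class and method names, and the use of a stack for nesting levels, are invented for the illustration:

```python
class IdentityLevels:
    """Toy model of IDENTITY_VAL_LOCAL's "same nesting level" rule.

    Each level (the invoking application, or a triggered action) tracks
    its own most recently assigned identity value; None models SQL null.
    """

    def __init__(self):
        self._levels = [None]        # level 0: the invoking application

    def enter_triggered_action(self):
        self._levels.append(None)    # a trigger body runs one level deeper

    def leave_triggered_action(self):
        self._levels.pop()

    def single_row_insert(self, assigned_identity):
        # A single-row INSERT ... VALUES into a table with an identity column.
        self._levels[-1] = assigned_identity

    def identity_val_local(self):
        return self._levels[-1]


session = IdentityLevels()
session.single_row_insert(1000)       # application-level INSERT assigns 1000
session.enter_triggered_action()      # an after insert trigger fires
inside_before = session.identity_val_local()  # null: no insert at this level
session.single_row_insert(1)          # triggered INSERT assigns 1
inside_after = session.identity_val_local()
session.leave_triggered_action()
at_top = session.identity_val_local() # still 1000 at the application level
```

Inserts performed inside a triggered action do not change what the function returns at the invoking application's level, which is the point of the nested trigger example later in this section.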
The assigned value is either a value supplied by the user (if the identity column is defined as GENERATED BY DEFAULT), or an identity value generated by DB2. The function returns a null value in the following situations: * When a single row INSERT statement with a VALUES clause has not been issued at the current processing level for a table containing an identity column. * When a COMMIT or ROLLBACK of a unit of work has occurred since the most recent INSERT statement that assigned a value 2 . The result of the function is not affected by the following: * A single row INSERT statement with a VALUES clause for a table without an identity column. * A multiple row INSERT statement with a VALUES clause. * An INSERT statement with a fullselect. * A ROLLBACK TO SAVEPOINT statement. Notes: * Expressions in the VALUES clause of an INSERT statement are evaluated prior to the assignments for the target columns of the INSERT statement. Thus, an invocation of an IDENTITY_VAL_LOCAL function inside the VALUES clause of an INSERT statement will use the most recently assigned value for an identity column from a previous INSERT statement. The function returns the null value if no previous single row INSERT statement with a VALUES clause for a table containing an identity column has been executed within the same level as the IDENTITY_VAL_LOCAL function. * The identity column value of the table for which the trigger is defined can be determined within a trigger, by referencing the trigger transition variable for the identity column. * The result of invoking the IDENTITY_VAL_LOCAL function from within the trigger condition of an insert trigger is a null value. * It is possible that multiple before or after insert triggers exist for a table. In this case each trigger is processed separately, and identity values assigned by one triggered action are not available to other triggered actions using the IDENTITY_VAL_LOCAL function. 
This is true even though the multiple triggered actions are conceptually defined at the same level. * It is not generally recommended to use the IDENTITY_VAL_LOCAL function in the body of a before insert trigger. The result of invoking the IDENTITY_VAL_LOCAL function from within the triggered action of a before insert trigger is the null value. The value for the identity column of the table for which the trigger is defined cannot be obtained by invoking the IDENTITY_VAL_LOCAL function within the triggered action of a before insert trigger. However, the value for the identity column can be obtained in the triggered action, by referencing the trigger transition variable for the identity column. * The result of invoking the IDENTITY_VAL_LOCAL function from within the triggered action of an after insert trigger 3 is the value assigned to an identity column of the table identified in the most recent single row INSERT statement invoked in the same triggered action that had a VALUES clause for a table containing an identity column. If a single row INSERT statement with a VALUES clause for a table containing an identity column was not executed within the same triggered action, prior to the invocation of the IDENTITY_VAL_LOCAL function, then the function returns a null value. * Since the results of the IDENTITY_VAL_LOCAL function are not deterministic, the result of an invocation of the IDENTITY_VAL_LOCAL function within the SELECT statement of a cursor can vary for each FETCH statement. * The assigned value is the value actually assigned to the identity column (that is, the value that would be returned on a subsequent SELECT statement). This value is not necessarily the value provided in the VALUES clause of the INSERT statement, or a value generated by DB2. The assigned value could be a value specified in a SET transition variable statement, within the body of a before insert trigger, for a trigger transition variable associated with the identity column. 
* The value returned by the function is unpredictable following a failed
  single row INSERT with a VALUES clause into a table with an identity
  column. The value may be the value that would have been returned from
  the function had it been invoked prior to the failed INSERT, or it may
  be the value that would have been assigned had the INSERT succeeded.
  The actual value returned depends on the point of failure and is
  therefore unpredictable.

Examples:

Example 1: Set the variable IVAR to the value assigned to the identity
column in the EMPLOYEE table. If this insert is the first into the
EMPLOYEE table, then IVAR would have a value of 1.

  CREATE TABLE EMPLOYEE
    (EMPNO INTEGER GENERATED ALWAYS AS IDENTITY,
     NAME CHAR(30), SALARY DECIMAL(5,2), DEPTNO SMALLINT)

Example 2: An IDENTITY_VAL_LOCAL function invoked in an INSERT statement
returns the value associated with the previous single row INSERT
statement, with a VALUES clause for a table with an identity column.
Assume for this example that there are two tables, T1 and T2. Both T1 and
T2 have an identity column named C1. DB2 generates values in sequence,
starting with 1, for the C1 column in table T1, and values in sequence,
starting with 10, for the C1 column in table T2.

  CREATE TABLE T1
    (C1 INTEGER GENERATED ALWAYS AS IDENTITY,
     C2 INTEGER);
  CREATE TABLE T2
    (C1 DECIMAL(15,0) GENERATED BY DEFAULT AS IDENTITY (START WITH 10),
     C2 INTEGER);
  INSERT INTO T1 (C2) VALUES (5);
  INSERT INTO T1 (C2) VALUES (6);
  SELECT * FROM T1

which gives a result of:

  C1          C2
  ----------- ----------
            1          5
            2          6

and now, invoking the function to set the variable IVAR:

  VALUES IDENTITY_VAL_LOCAL() INTO :IVAR

At this point, the IDENTITY_VAL_LOCAL function would return a value of 2
in IVAR, because that was the value most recently assigned by DB2. The
following INSERT statement inserts a single row into T2, where column C2
gets a value of 2 from the IDENTITY_VAL_LOCAL function.
  INSERT INTO T2 (C2) VALUES (IDENTITY_VAL_LOCAL());
  SELECT * FROM T2
    WHERE C1 = DECIMAL(IDENTITY_VAL_LOCAL(),15,0)

returning a result of:

  C1                C2
  ----------------- ----------
                10.          2

Invoking the IDENTITY_VAL_LOCAL function after this insert results in a
value of 10, which is the value generated by DB2 for column C1 of T2.

In a nested environment involving a trigger, use the IDENTITY_VAL_LOCAL
function to retrieve the identity value assigned at a particular level,
even though there might have been identity values assigned at lower
levels. Assume that there are three tables, EMPLOYEE, EMP_ACT, and
ACCT_LOG. There is an after insert trigger defined on EMPLOYEE that
results in additional inserts into the EMP_ACT and ACCT_LOG tables.

  CREATE TABLE EMPLOYEE
    (EMPNO SMALLINT GENERATED ALWAYS AS IDENTITY (START WITH 1000),
     NAME CHAR(30), SALARY DECIMAL(5,2), DEPTNO SMALLINT);

  CREATE TABLE EMP_ACT
    (ACNT_NUM SMALLINT GENERATED ALWAYS AS IDENTITY (START WITH 1),
     EMPNO SMALLINT);

  CREATE TABLE ACCT_LOG
    (ID SMALLINT GENERATED ALWAYS AS IDENTITY (START WITH 100),
     ACNT_NUM SMALLINT, EMPNO SMALLINT);

  CREATE TRIGGER NEW_HIRE
    AFTER INSERT ON EMPLOYEE
    REFERENCING NEW AS NEW_EMP
    FOR EACH ROW MODE DB2SQL
    BEGIN ATOMIC
      INSERT INTO EMP_ACT (EMPNO) VALUES (NEW_EMP.EMPNO);
      INSERT INTO ACCT_LOG (ACNT_NUM, EMPNO)
        VALUES (IDENTITY_VAL_LOCAL(), NEW_EMP.EMPNO);
    END

The first triggered INSERT statement inserts a row into the EMP_ACT
table. This INSERT statement uses a trigger transition variable for the
EMPNO column of the EMPLOYEE table, to indicate that the identity value
for the EMPNO column of the EMPLOYEE table is to be copied to the EMPNO
column of the EMP_ACT table. The IDENTITY_VAL_LOCAL function could not be
used to obtain the value assigned to the EMPNO column of the EMPLOYEE
table.
This is because an INSERT statement has not been issued at this level of
the nesting, and as such, if the IDENTITY_VAL_LOCAL function were invoked
in the VALUES clause of the INSERT for EMP_ACT, then it would return a
null value. This INSERT statement for the EMP_ACT table also results in
the generation of a new identity column value for the ACNT_NUM column.

A second triggered INSERT statement inserts a row into the ACCT_LOG
table. This statement invokes the IDENTITY_VAL_LOCAL function to indicate
that the identity value assigned to the ACNT_NUM column of the EMP_ACT
table in the previous INSERT statement in the triggered action is to be
copied to the ACNT_NUM column of the ACCT_LOG table. The EMPNO column is
assigned the same value as the EMPNO column of the EMPLOYEE table.

From the invoking application (that is, the level at which the INSERT to
EMPLOYEE is issued), set the variable IVAR to the value assigned to the
EMPNO column of the EMPLOYEE table by the original INSERT statement.

  INSERT INTO EMPLOYEE (NAME, SALARY, DEPTNO)
    VALUES ('Rupert', 989.99, 50);

The contents of the three tables after processing the original INSERT
statement and all of the triggered actions are:

  SELECT EMPNO, SUBSTR(NAME,1,10) AS NAME, SALARY, DEPTNO FROM EMPLOYEE;

  EMPNO       NAME       SALARY      DEPTNO
  ----------- ---------- ----------- -----------
         1000 Rupert          989.99          50

  SELECT ACNT_NUM, EMPNO FROM EMP_ACT;

  ACNT_NUM    EMPNO
  ----------- -----------
            1        1000

  SELECT * FROM ACCT_LOG;

  ID          ACNT_NUM    EMPNO
  ----------- ----------- -----------
          100           1        1000

The result of the IDENTITY_VAL_LOCAL function is the most recently
assigned value for an identity column at the same nesting level. After
processing the original INSERT statement and all of the triggered
actions, the IDENTITY_VAL_LOCAL function returns a value of 1000, because
this is the value assigned to the EMPNO column of the EMPLOYEE table. The
following VALUES statement results in setting IVAR to 1000.
The insert into the EMP_ACT table (which occurred after the insert into the EMPLOYEE table and at a lower nesting level) has no effect on what is returned by this invocation of the IDENTITY_VAL_LOCAL function.

VALUES IDENTITY_VAL_LOCAL() INTO :IVAR;

38.3.2.6 LCASE and UCASE (Unicode)

In a Unicode database, the entire repertoire of Unicode characters is uppercased (or lowercased) based on the Unicode properties of those characters. Double-wide versions of ASCII characters, as well as Roman numerals, now convert to uppercase or lowercase correctly.

38.3.2.7 MQPUBLISH

>>-MQPUBLISH--(--+-----------------------------------------------+-->
                 '-publisher-service--,--+--------------------+--'
                                         '-service-policy--,--'
>--msg-data--+-----------------------------------+--)--><
             '-,--topic--+--------------------+--'
                         |                (1) |
                         '-,--correl-id-------'

Notes:

1. The correl-id cannot be specified unless a service and a policy are previously defined.

The schema is MQDB2.

The MQPUBLISH function publishes data to MQSeries. This function requires the installation of either MQSeries Publish/Subscribe or MQSeries Integrator. Please consult www.ibm.com/software/MQSeries for further details.

The MQPUBLISH function publishes the data contained in msg-data to the MQSeries publisher specified in publisher-service, using the quality of service policy defined by service-policy. An optional topic for the message can be specified, and an optional user-defined message correlation identifier may also be specified. The function returns a value of '1' if successful or a '0' if unsuccessful.

publisher-service
A string containing the logical MQSeries destination where the message is to be sent. If specified, the publisher-service must refer to a publisher Service Point defined in the AMT.XML repository file. A service point is a logical end-point from which a message is sent or received.
Service point definitions include the name of the MQSeries Queue Manager and Queue. See the MQSeries Application Messaging Interface for further details. If publisher-service is not specified, then the DB2.DEFAULT.PUBLISHER will be used. The maximum size of publisher-service is 48 characters. service-policy A string containing the MQSeries AMI Service Policy to be used in handling of this message. If specified, the service-policy must refer to a Policy defined in the AMT.XML repository file. A Service Policy defines a set of quality of service options that should be applied to this messaging operation. These options include message priority and message persistence. See the MQSeries Application Messaging Interface manual for further details. If service-policy is not specified, then the default DB2.DEFAULT.POLICY will be used. The maximum size of service-policy is 48 characters. msg-data A string expression containing the data to be sent via MQSeries. The maximum size is 4000 characters. topic A string expression containing the topic for the message publication. If no topic is specified, none will be associated with the message. The maximum size of topic is 40 characters. Multiple topics can be specified in one string (up to 40 characters long). Each topic must be separated by a colon. For example, "t1:t2:the third topic" indicates that the message is associated with all three topics: t1, t2, and "the third topic". correl-id An optional string expression containing a correlation identifier to be associated with this message. The correl-id is often specified in request and reply scenarios to associate requests with replies. If not specified, no correlation id will be added to the message. The maximum size of correl-id is 24 characters. Examples Example 1: This example publishes the string "Testing 123" to the default publisher service (DB2.DEFAULT.PUBLISHER) using the default policy (DB2.DEFAULT.POLICY). No correlation identifier or topic is specified for the message. 
VALUES MQPUBLISH('Testing 123')

Example 2: This example publishes the string "Testing 345" to the publisher service "MYPUBLISHER" under the topic "TESTS". The default policy is used and no correlation identifier is specified.

VALUES MQPUBLISH('MYPUBLISHER','Testing 345', 'TESTS')

Example 3: This example publishes the string "Testing 678" to the publisher service "MYPUBLISHER" using the policy "MYPOLICY" with a correlation identifier of "TEST1". The message is published with topic "TESTS".

VALUES MQPUBLISH('MYPUBLISHER','MYPOLICY','Testing 678','TESTS','TEST1')

Example 4: This example publishes the string "Testing 901" to the default publisher service (DB2.DEFAULT.PUBLISHER) under the topic "TESTS", using the default policy (DB2.DEFAULT.POLICY) and no correlation identifier.

VALUES MQPUBLISH('Testing 901','TESTS')

All examples return the value '1' if successful.

38.3.2.8 MQREAD

>>-MQREAD--(--+------------------------------------------+--)--><
              '-receive-service--+--------------------+--'
                                 '-,--service-policy--'

The schema is MQDB2.

The MQREAD function returns a message from the MQSeries location specified by receive-service, using the quality of service policy defined in service-policy. Executing this operation does not remove the message from the queue associated with receive-service, but instead returns the message at the head of the queue. The return value is a VARCHAR(4000) containing the message. If no messages are available to be returned, a NULL is returned.

receive-service
A string containing the logical MQSeries destination from which the message is to be received. If specified, the receive-service must refer to a Service Point defined in the AMT.XML repository file. A service point is a logical end-point from which a message is sent or received. Service point definitions include the name of the MQSeries Queue Manager and Queue. See the MQSeries Application Messaging Interface for further details.
If receive-service is not specified, then the DB2.DEFAULT.SERVICE will be used. The maximum size of receive-service is 48 characters. service-policy A string containing the MQSeries AMI Service Policy used in handling this message. If specified, the service-policy must refer to a Policy defined in the AMT.XML repository file. A Service Policy defines a set of quality of service options that should be applied to this messaging operation. These options include message priority and message persistence. See the MQSeries Application Messaging Interface manual for further details. If service-policy is not specified, then the default DB2.DEFAULT.POLICY will be used. The maximum size of service-policy is 48 characters. Examples: Example 1: This example reads the message at the head of the queue specified by the default service (DB2.DEFAULT.SERVICE), using the default policy (DB2.DEFAULT.POLICY). VALUES MQREAD() Example 2: This example reads the message at the head of the queue specified by the service "MYSERVICE" using the default policy (DB2.DEFAULT.POLICY). VALUES MQREAD('MYSERVICE') Example 3: This example reads the message at the head of the queue specified by the service "MYSERVICE", and using the policy "MYPOLICY". VALUES MQREAD('MYSERVICE','MYPOLICY') All of these examples return the contents of the message as a VARCHAR(4000) if successful. If no messages are available, then a NULL is returned. 38.3.2.9 MQRECEIVE >>-MQRECEIVE----------------------------------------------------> >----(--+-------------------------------------------------------------+---)-> '-receive-service--+---------------------------------------+--' '-,--service-policy--+---------------+--' '-,--correl-id--' >-------------------------------------------------------------->< The schema is MQDB2. The MQRECEIVE function returns a message from the MQSeries location specified by receive-service, using the quality of service policy service-policy. 
Performing this operation removes the message from the queue associated with receive-service. If the correl-id is specified, then the first message with a matching correlation identifier will be returned. If correl-id is not specified, then the message at the head of the queue will be returned. The return value is a VARCHAR(4000) containing the message. If no messages are available to be returned, a NULL is returned.

receive-service
A string containing the logical MQSeries destination from which the message is received. If specified, the receive-service must refer to a Service Point defined in the AMT.XML repository file. A service point is a logical end-point from which a message is sent or received. Service point definitions include the name of the MQSeries Queue Manager and Queue. See the MQSeries Application Messaging Interface for further details. If receive-service is not specified, then the DB2.DEFAULT.SERVICE is used. The maximum size of receive-service is 48 characters.

service-policy
A string containing the MQSeries AMI Service Policy to be used in the handling of this message. If specified, the service-policy must refer to a Policy defined in the AMT.XML repository file. If service-policy is not specified, then the default DB2.DEFAULT.POLICY is used. The maximum size of service-policy is 48 characters.

correl-id
A string containing an optional correlation identifier to be associated with this message. The correl-id is often specified in request and reply scenarios to associate requests with replies. If not specified, no correlation id will be used. The maximum size of correl-id is 24 characters.

Examples:

Example 1: This example receives the message at the head of the queue specified by the default service (DB2.DEFAULT.SERVICE), using the default policy (DB2.DEFAULT.POLICY).

VALUES MQRECEIVE()

Example 2: This example receives the message at the head of the queue specified by the service "MYSERVICE" using the default policy (DB2.DEFAULT.POLICY).
VALUES MQRECEIVE('MYSERVICE')

Example 3: This example receives the message at the head of the queue specified by the service "MYSERVICE" using the policy "MYPOLICY".

VALUES MQRECEIVE('MYSERVICE','MYPOLICY')

Example 4: This example receives the first message with a correlation id that matches '1234' from the head of the queue specified by the service "MYSERVICE" using the policy "MYPOLICY".

VALUES MQRECEIVE('MYSERVICE','MYPOLICY','1234')

All these examples return the contents of the message as a VARCHAR(4000) if successful. If no messages are available, a NULL will be returned.

38.3.2.10 MQSEND

>>-MQSEND--(--+------------------------------------------+-->
              '-send-service--,--+--------------------+--'
                                 '-service-policy--,--'
>--msg-data--+--------------------+--)--><
             |                (1) |
             '-,--correl-id-------'

Notes:

1. The correl-id cannot be specified unless a service and a policy are previously defined.

The schema is MQDB2.

The MQSEND function sends the data contained in msg-data to the MQSeries location specified by send-service, using the quality of service policy defined by service-policy. An optional user-defined message correlation identifier may be specified by correl-id. The function returns a value of '1' if successful or a '0' if unsuccessful.

msg-data
A string expression containing the data to be sent via MQSeries. The maximum size is 4000 characters.

send-service
A string containing the logical MQSeries destination where the message is to be sent. If specified, the send-service refers to a service point defined in the AMT.XML repository file. A service point is a logical end-point from which a message may be sent or received. Service point definitions include the name of the MQSeries Queue Manager and Queue. See the MQSeries Application Messaging Interface manual for further details. If send-service is not specified, then the value of DB2.DEFAULT.SERVICE is used. The maximum size of send-service is 48 characters.
service-policy
A string containing the MQSeries AMI Service Policy used in the handling of this message. If specified, the service-policy must refer to a service policy defined in the AMT.XML repository file. A Service Policy defines a set of quality of service options that should be applied to this messaging operation. These options include message priority and message persistence. See the MQSeries Application Messaging Interface manual for further details. If service-policy is not specified, then a default value of DB2.DEFAULT.POLICY will be used. The maximum size of service-policy is 48 characters.

correl-id
An optional string containing a correlation identifier associated with this message. The correl-id is often specified in request and reply scenarios to associate requests with replies. If not specified, no correlation id will be sent. The maximum size of correl-id is 24 characters.

Examples:

Example 1: This example sends the string "Testing 123" to the default service (DB2.DEFAULT.SERVICE), using the default policy (DB2.DEFAULT.POLICY), with no correlation identifier.

VALUES MQSEND('Testing 123')

Example 2: This example sends the string "Testing 345" to the service "MYSERVICE", using the policy "MYPOLICY", with no correlation identifier.

VALUES MQSEND('MYSERVICE','MYPOLICY','Testing 345')

Example 3: This example sends the string "Testing 678" to the service "MYSERVICE", using the policy "MYPOLICY", with correlation identifier "TEST3".

VALUES MQSEND('MYSERVICE','MYPOLICY','Testing 678','TEST3')

Example 4: This example sends the string "Testing 901" to the service "MYSERVICE", using the default policy (DB2.DEFAULT.POLICY), and no correlation identifier.

VALUES MQSEND('MYSERVICE','Testing 901')

All examples return a scalar value of '1' if successful.
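The request and reply scenario that correl-id supports can be sketched end to end. This is an illustrative sketch, not taken from the original text: the service points MYREQUESTS and MYREPLIES and the policy MYPOLICY are hypothetical names that would have to be defined in the AMT.XML repository file.

```sql
-- Requester: send a request tagged with correlation id 'REQ1'.
-- (A service and a policy must be given explicitly when correl-id is used.)
VALUES MQSEND('MYREQUESTS','MYPOLICY','GetPrice:Widget','REQ1')

-- Replier (a separate application): after processing the request, send the
-- answer back on the reply service point with the same correlation id.
VALUES MQSEND('MYREPLIES','MYPOLICY','Price:9.99','REQ1')

-- Requester: retrieve exactly the reply matching 'REQ1', removing it from
-- the queue; other messages on the reply queue are left in place.
VALUES MQRECEIVE('MYREPLIES','MYPOLICY','REQ1')
```

Each MQSEND call returns '1' on success; the MQRECEIVE call returns the matching message text as VARCHAR(4000), or NULL if no message with that correlation identifier is available.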
38.3.2.11 MQSUBSCRIBE

>>-MQSUBSCRIBE--(--+------------------------------------------------+-->
                   '-subscriber-service--,--+--------------------+--'
                                            '-service-policy--,--'
>--topic--)--><

The schema is MQDB2.

The MQSUBSCRIBE function is used to register interest in MQSeries messages published on a specified topic. The subscriber-service specifies a logical destination for messages that match the specified topic. Messages that match topic will be placed on the queue defined by subscriber-service and can be read or received through a subsequent call to MQREAD, MQRECEIVE, MQREADALL, or MQRECEIVEALL. This function requires the installation and configuration of an MQSeries based publish and subscribe system, such as MQSeries Integrator or MQSeries Publish/Subscribe. See www.ibm.com/software/MQSeries for further details.

The function returns a value of '1' if successful or a '0' if unsuccessful. Successfully executing this function will cause the publish and subscribe server to forward messages matching the topic to the service point defined by subscriber-service.

subscriber-service
A string containing the logical MQSeries subscription point to which messages matching topic will be sent. If specified, the subscriber-service must refer to a Subscribers Service Point defined in the AMT.XML repository file. Service point definitions include the name of the MQSeries Queue Manager and Queue. See the MQSeries Application Messaging Interface manual for further details. If subscriber-service is not specified, then the DB2.DEFAULT.SUBSCRIBER will be used instead. The maximum size of subscriber-service is 48 characters.

service-policy
A string containing the MQSeries AMI Service Policy to be used in handling the message. If specified, the service-policy must refer to a Policy defined in the AMT.XML repository file.
A Service Policy defines a set of quality of service options to be applied to this messaging operation. These options include message priority and message persistence. See the MQSeries Application Messaging Interface manual for further details. If service-policy is not specified, then the default DB2.DEFAULT.POLICY will be used instead. The maximum size of service-policy is 48 characters. topic A string defining the types of messages to receive. Only messages published with the specified topics will be received by this subscription. Multiple subscriptions may coexist. The maximum size of topic is 40 characters. Multiple topics can be specified in one string (up to 40 characters long). Each topic must be separated by a colon. For example, "t1:t2:the third topic" indicates that the message is associated with all three topics: t1, t2, and "the third topic". Examples: Example 1: This example registers an interest in messages containing the topic "Weather". The default subscriber-service (DB2.DEFAULT.SUBSCRIBER) is registered as the subscriber and the default service-policy (DB2.DEFAULT.POLICY) specifies the quality of service. VALUES MQSUBSCRIBE('Weather') Example 2: This example demonstrates a subscriber registering interest in messages containing "Stocks". The subscriber registers as "PORTFOLIO-UPDATES" with policy "BASIC-POLICY". VALUES MQSUBSCRIBE('PORTFOLIO-UPDATES','BASIC-POLICY','Stocks') All examples return a scalar value of '1' if successful. 38.3.2.12 MQUNSUBSCRIBE >>-MQUNSUBSCRIBE---(--------------------------------------------> >-----+------------------------------------------------+--------> '-subscriber-service--,--+--------------------+--' '-service-policy--,--' >----topic---)------------------------------------------------->< The schema is MQDB2. The MQUNSUBSCRIBE function is used to unregister an existing message subscription. The subscriber-service, service-policy, and topic are used to identify which subscription is cancelled. 
This function requires the installation and configuration of an MQSeries based publish and subscribe system, such as MQSeries Integrator or MQSeries Publish/Subscribe. See www.ibm.com/software/MQSeries for further details. The function returns a value of '1' if successful or a '0' if unsuccessful. The result of successfully executing this function is that the publish and subscribe server will remove the subscription defined by the given parameters. Messages with the specified topic will no longer be sent to the logical destination defined by subscriber-service. subscriber-service If specified, the subscriber-service must refer to a Subscribers Service Point defined in the AMT.XML repository file. Service point definitions include the name of the MQSeries Queue Manager and Queue. See the MQSeries Application Messaging Interface manual for further details. If subscriber-service is not specified, then the DB2.DEFAULT.SUBSCRIBER value is used. The maximum size of subscriber-service is 48 characters. service-policy If specified, the service-policy must refer to a Policy defined in the AMT.XML repository file. A Service Policy defines a set of quality of service options to be applied to this messaging operation. See the MQSeries Application Messaging Interface manual for further details. If service-policy is not specified, then the default DB2.DEFAULT.POLICY will be used. The maximum size of service-policy is 48 characters. topic A string specifying the subject of messages that are not to be received. The maximum size of topic is 40 characters. Multiple topics can be specified in one string (up to 40 characters long). Each topic must be separated by a colon. For example, "t1:t2:the third topic" indicates that the message is associated with all three topics: t1, t2, and "the third topic". Examples: Example 1: This example cancels an interest in messages containing the topic "Weather". 
The default subscriber-service (DB2.DEFAULT.SUBSCRIBER) is registered as the unsubscriber and the default service-policy (DB2.DEFAULT.POLICY) specifies the quality of service. VALUES MQUNSUBSCRIBE('Weather') Example 2: This example demonstrates a subscriber cancelling an interest in messages containing "Stocks". The subscriber is registered as "PORTFOLIO-UPDATES" with policy "BASIC-POLICY". VALUES MQUNSUBSCRIBE('PORTFOLIO-UPDATES','BASIC-POLICY','Stocks') These examples return a scalar value of '1' if successful and a scalar value of '0' if unsuccessful. 38.3.2.13 MULTIPLY_ALT >>-MULTIPLY_ALT-------------------------------------------------> >----(exact_numeric_expression, exact_numeric_expression)------>< The schema is SYSIBM. This function was first available in FixPak 2 of Version 7.1. The MULTIPLY_ALT scalar function returns the product of the two arguments as a decimal value. It is provided as an alternative to the multiplication operator, especially when the sum of the precisions of the arguments exceeds 31. The arguments can be any built-in exact numeric data type (DECIMAL, BIGINT, INTEGER, or SMALLINT). The result of the function is a DECIMAL. The precision and scale of the result are determined as follows, using the symbols p and s to denote the precision and scale of the first argument, and the symbols p' and s' to denote the precision and scale of the second argument. * The precision is MIN(31, p + p') * The scale is: o 0 if the scale of both arguments is 0 o MIN(31, s+s') if p+p' is less than or equal to 31 o MAX(MIN(3, s+s'), 31-(p-s+p'-s') ) if p+p' is greater than 31. The result can be null if at least one argument can be null or the database is configured with DFT_SQLMATHWARN set to yes; the result is the null value if one of the arguments is null. The MULTIPLY_ALT function is a better choice than the multiplication operator when performing decimal arithmetic where a scale of at least 3 is needed and the sum of the precisions exceeds 31. 
In these cases, the internal computation is performed so that overflows are avoided. The final result is then assigned to the result data type, using truncation where necessary to match the scale. Note that overflow of the final result is still possible when the scale is 3. The following is a sample comparing the result types using MULTIPLY_ALT and the multiplication operator.

Type of argument 1   Type of argument 2   Result using MULTIPLY_ALT   Result using multiplication operator
DECIMAL(31,3)        DECIMAL(15,8)        DECIMAL(31,3)               DECIMAL(31,11)
DECIMAL(26,23)       DECIMAL(10,1)        DECIMAL(31,19)              DECIMAL(31,24)
DECIMAL(18,17)       DECIMAL(20,19)       DECIMAL(31,29)              DECIMAL(31,31)
DECIMAL(16,3)        DECIMAL(17,8)        DECIMAL(31,9)               DECIMAL(31,11)
DECIMAL(26,5)        DECIMAL(11,0)        DECIMAL(31,3)               DECIMAL(31,5)
DECIMAL(21,1)        DECIMAL(15,1)        DECIMAL(31,2)               DECIMAL(31,2)

Example: Multiply two values where the data type of the first argument is DECIMAL(26,3) and the data type of the second argument is DECIMAL(9,8). The data type of the result is DECIMAL(31,7).

values multiply_alt(98765432109876543210987.654,5.43210987)

1
---------------------------------
536504678578875294857887.5277415

Note that the complete product of these two numbers is 536504678578875294857887.52774154498, but the last 4 digits were truncated to match the scale of the result data type. Using the multiplication operator with the same values results in an arithmetic overflow, since the result data type is DECIMAL(31,11) and the result value has 24 digits to the left of the decimal point, but the result data type supports only 20 digits.

38.3.2.14 REC2XML

>>-REC2XML---(--decimal-constant---,--format-string------------->
                              .------------------.
                              V                  |
>----,--row-tag-string----------,--column-name---+---)--><

The schema is SYSIBM.

The REC2XML function returns a string formatted with XML tags and containing column names and column values.

decimal-constant
The expansion factor for replacing column value characters.
The decimal value must be greater than 0.0 and less than or equal to 6.0 (SQLSTATE 42820). The decimal-constant value is used to calculate the result length of the function. For every column with a character data type, the length attribute of the column is multiplied by this expansion factor before it is added to the result length. To specify no expansion, use a value of 1.0. Specifying a value less than 1.0 reduces the calculated result length. If the actual length of the result string is greater than the calculated result length of the function, then an error is raised (SQLSTATE 22001).

format-string
The string constant that specifies which format the function is to use during execution. The format-string is case-sensitive, so the following values must be specified in uppercase to be recognized.

COLATTVAL or COLATTVAL_XML
These formats return a string with columns as attribute values.

>>-<row-tag-string>--------------------------------------------->
   .--------------------------------------------------------.
   V                                                        |
>----<column name="column-name"--+->column-value</column>-+-+-->
                                 '-null="true"/>----------'
>--</row-tag-string>-------------------------------------------><

Column names may or may not be valid XML attribute values. For those column names which are not valid XML attribute values, character replacement is performed on the column name before it is included in the result string. Column values may or may not be valid XML element values. If the format-string COLATTVAL is specified, for those column values which are not valid XML element values, character replacement is performed on the column value before it is included in the result string. If the format-string COLATTVAL_XML is specified, character replacement is not performed on column values (note that character replacement is still performed on column names).

row-tag-string
A string constant that specifies the tag used for each row. If an empty string is specified, then a value of 'row' is assumed.
If a string of one or more blank characters is specified, then no beginning row-tag-string or ending row-tag-string (including the angle bracket delimiters) will appear in the result string.

column-name
A qualified or unqualified name of a table column. The column must have one of the following data types (SQLSTATE 42815):
o numeric (SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, REAL, DOUBLE)
o character string (CHAR, VARCHAR)
o datetime (DATE, TIME, TIMESTAMP)
o a user-defined type based on one of the above types
The same column name cannot be specified more than once (SQLSTATE 42734).

The result of the function is VARCHAR. The maximum length is 32672 bytes (SQLSTATE 54006).

Consider the following invocation:

REC2XML (dc, fs, rt, c1, c2, ..., cn)

If the value of fs is "COLATTVAL" or "COLATTVAL_XML", the result is the same as the following expression:

'<' CONCAT rt CONCAT '>' CONCAT y1 CONCAT y2 CONCAT ... CONCAT yn CONCAT '</' CONCAT rt CONCAT '>'

where yn is equivalent to:

'<column name="' CONCAT xvcn CONCAT '">' CONCAT rn CONCAT '</column>'

if the column is not null, and

'<column name="' CONCAT xvcn CONCAT '" null="true"/>'

if the column value is null. xvcn is equivalent to a string representation of the column name of cn, where any characters appearing in Table 29 are replaced with the corresponding representation. This ensures that the resulting string is a valid XML attribute or element value token. rn is equivalent to a string representation as indicated in Table 28.

Result Column Values: Based on the data type of the column and the actual format-string specified, the column values from the table may be transformed before being concatenated into the result string. The following table shows the transformations done on the column values.

Table 28. Column Values String Result

Data type of cn: CHAR, VARCHAR
rn: The value is a string. If the format-string does not end in the characters "_XML", then each character in cn is replaced with the corresponding replacement representation from Table 29, as indicated. The length attribute is: dc * the length attribute of cn.
Data type of cn: SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, REAL, DOUBLE
rn: The value is LTRIM(RTRIM(CHAR(cn))). The length attribute is the result length of CHAR(cn). The decimal character is always the period character.

Data type of cn: DATE
rn: The value is CHAR(cn, ISO). The length attribute is the result length of CHAR(cn, ISO).

Data type of cn: TIME
rn: The value is CHAR(cn, JIS). The length attribute is the result length of CHAR(cn, JIS).

Data type of cn: TIMESTAMP
rn: The value is CHAR(cn). The length attribute is the result length of CHAR(cn).

Character Replacement: Depending on the value specified for the format-string, certain characters in column names and column values will be replaced to ensure that the column names form valid XML attribute values and the column values form valid XML element values.

Table 29. Character Replacements for XML Attribute Values and Element Values

<  is replaced by  &lt;
>  is replaced by  &gt;
"  is replaced by  &quot;
&  is replaced by  &amp;
'  is replaced by  &apos;

Examples:

* Using the DEPARTMENT table, format the department table row, except the DEPTNAME and LOCATION columns, for department 'D01' into a string of valid XML. Since the data does not contain any of the characters which require replacement, the expansion factor will be 1.0 (no expansion). Also note that the MGRNO value is null for this row.

SELECT REC2XML (1.0, 'COLATTVAL', '', DEPTNO, MGRNO, ADMRDEPT)
  FROM DEPARTMENT
  WHERE DEPTNO = 'D01'

This example returns the following VARCHAR(117) string:

<row>
<column name="DEPTNO">D01</column>
<column name="MGRNO" null="true"/>
<column name="ADMRDEPT">A00</column>
</row>

Note: REC2XML does not insert new line characters in the output. The above example output is formatted for the sake of readability.

* A 5-day university schedule introduces a class with the name '&43<FIE'. The calculated result length includes the beginning and ending row-tag-string overhead, 21 for the column names, 75 for the '<column name="', '">', '</column>' markup and double quotes, 7 for the CLASS_CODE data, 6 for the DAY data, and 8 for the STARTING data. Since the '&' and '<' characters will be replaced, an expansion factor of 1.0 will not be sufficient.
The length attribute of the function will need to support an increase from 7 to 14 characters for the new format CLASS_CODE data. However, since it is known that the DAY value will never be more than 1 digit long, an extra 5 is calculated into the length that will never be used. Therefore, the expansion only needs to handle an increase of 2. Since CLASS_CODE is the only character string column in the argument list, this is the only column value to which the expansion factor applies. To get an increase of 2 for the length, an expansion factor of 9/7 (approximately 1.2857) would be needed. An expansion factor of 1.3 will be used.

SELECT REC2XML (1.3, 'COLATTVAL', 'record', CLASS_CODE, DAY, STARTING)
  FROM CL_SCHED
  WHERE CLASS_CODE = '&43<FIE'

This example returns the following string:

<record>
<column name="CLASS_CODE">&amp;43&lt;FIE</column>
<column name="DAY">5</column>
<column name="STARTING">06:45:00</column>
</record>

Note: REC2XML does not insert new line characters in the output. The above example output is formatted for the sake of readability.

* This example shows characters replaced in a column name.

SELECT REC2XML (1.3,'COLATTVAL', '', Class, "time

38.3.4.1 GET_ROUTINE_SAR

>>-GET_ROUTINE_SAR--(--sarblob--,--type--,--routine_name_string--)--><

The schema is SYSFUN.

This procedure was first available in FixPak 3 of Version 7.1.

The GET_ROUTINE_SAR procedure retrieves the necessary information to install the same routine in another database server running the same level on the same operating system. The information is retrieved into a single BLOB string representing an SQL archive file. The invoker of the GET_ROUTINE_SAR procedure must have DBADM authority.

sarblob
An output argument of type BLOB(3M) that contains the routine SAR file contents.

type
An input argument of type CHAR(2) that specifies the type of routine, using one of the following values:
o P for a procedure.
o SP for the specific name of a procedure.

routine_name_string
An input argument of type VARCHAR(257) that specifies a qualified name of the routine.
If no schema name is specified, the default is the CURRENT SCHEMA when the routine is processed. Note: The routine_name_string cannot include the double quote character ("). The qualified name of the routine is used to determine which routine to retrieve. The routine that is found must be an SQL routine or an error is raised (SQLSTATE 428F7). When not using a specific name, this may result in more than one routine and an error is raised (SQLSTATE 42725). If this occurs, the specific name of the routine must be used to get the routine. The SAR file must include a bind file which may not be available at the server. If the bind file cannot be found and stored in the SAR file, an error is raised (SQLSTATE 55045). 38.3.4.2 PUT_ROUTINE_SAR >>-PUT_ROUTINE_SAR----------------------------------------------> >----(--sarblob--+-------------------------------------+--)---->< '-,--new_owner--,--use_register_flag--' The schema is SYSFUN. This procedure was first available in FixPak 3 of Version 7.1. The PUT_ROUTINE_SAR procedure passes the necessary file to create an SQL routine at the server and then defines the routine. The invoker of the PUT_ROUTINE_SAR procedure must have DBADM authority. sarblob An input argument of type BLOB(3M) that contains the routine SAR file contents. new_owner An input argument of type VARCHAR(128) that contains an authorization-name used for authorization checking of the routine. The new-owner must have the necessary privileges for the routine to be defined. If new-owner is not specified, the authorization-name of the original routine definer is used. use_register_flag An input argument of type INTEGER that indicates whether or not the CURRENT SCHEMA and CURRENT PATH special registers are used to define the routine. If the special registers are not used, the settings for the default schema and SQL path are the settings used when the routine was originally defined. 
Possible values for use_register_flag: 0 Do not use the special registers of the current environment. 1 Use the CURRENT SCHEMA and CURRENT PATH special registers. If the value is 1, CURRENT SCHEMA is used for unqualified object names in the routine definition (including the name of the routine) and CURRENT PATH is used to resolve unqualified routines and data types in the routine definition. If use_register_flag is not specified, the behavior is the same as if a value of 0 was specified. The identification information contained in sarblob is checked to confirm that the inputs are appropriate for the environment; otherwise, an error is raised (SQLSTATE 55046). The PUT_ROUTINE_SAR procedure then uses the contents of the sarblob to define the routine at the server. The contents of the sarblob argument are extracted into the separate files that make up the SQL archive file. The shared library and bind files are written to files in a temporary directory. The environment is set so that the routine definition statement processing is aware that compiling and linking are not required, and that the location of the shared library and bind files is available. The contents of the DDL file are then used to dynamically execute the routine definition statement. Note: No more than one procedure can be concurrently installed under a given schema. Processing of this statement may result in the same errors as executing the routine definition statement using other interfaces. During routine definition processing, the presence of the shared library and bind files is noted and the precompile, compile and link steps are skipped. The bind file is used during bind processing and the contents of both files are copied to the usual directory for an SQL routine.
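The extract-and-install flow described above can be sketched as follows; the schema, routine, and host-variable names here are hypothetical, and the calls are shown as they might appear in embedded SQL:

```sql
-- Connected to the source database: retrieve the SQL procedure
-- PAYROLL.CALC_BONUS (hypothetical name) into a BLOB(3M) host variable.
CALL GET_ROUTINE_SAR (:sarblob, 'P', 'PAYROLL.CALC_BONUS')

-- Connected to the target database (same level, same operating system):
-- define the routine, checking authorization as DBUSER2 and using the
-- CURRENT SCHEMA and CURRENT PATH special registers (use_register_flag = 1).
CALL PUT_ROUTINE_SAR (:sarblob, 'DBUSER2', 1)
```

If the qualified name matches more than one routine (SQLSTATE 42725), the first call can be repeated with type 'SP' and the routine's specific name.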
Note: If a GET ROUTINE or a PUT ROUTINE operation (or the corresponding procedure) fails to execute successfully, it will always return an error (SQLSTATE 38000), along with diagnostic text providing information about the cause of the failure. For example, if the procedure name provided to GET ROUTINE does not identify an SQL procedure, the diagnostic text "100, 02000" is returned, where "100" and "02000" are the SQLCODE and SQLSTATE, respectively, that identify the cause of the problem. The SQLCODE and SQLSTATE in this example indicate that the row specified for the given procedure name was not found in the catalog tables. ------------------------------------------------------------------------ 38.4 Chapter 5. Queries 38.4.1 select-statement/syntax diagram The syntax diagram changes to: >>-+---------------------------------------+--fullselect--------> | .-,--------------------------. | | V | | '-WITH-----common-table-expression---+--' >----+-----------------+--+--------------------+----------------> '-order-by-clause-' '-fetch-first-clause-' >----*--+---------------------+--*--+---------------------+--*--> +-read-only-clause----+ '-optimize-for-clause-' | (1) | '-update-clause-------' >-----+---------------+---------------------------------------->< '-WITH--+-RR-+--' +-RS-+ +-CS-+ '-UR-' Notes: 1. The update-clause and the order-by-clause cannot both be specified in the same select-statement. Add the following paragraph to the description below the syntax diagram: The optional WITH clause specifies the isolation level at which the select statement is executed. o RR - Repeatable Read o RS - Read Stability o CS - Cursor Stability o UR - Uncommitted Read The default isolation level of the statement is the isolation level of the package in which the statement is bound.
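As a brief illustration of the new clause, assuming the ORG sample table, the isolation level can now be set per statement:

```sql
-- Read ORG at Uncommitted Read, overriding the package isolation level:
SELECT deptnumb, deptname FROM org WITH UR

-- The WITH clause follows the other select-statement clauses:
SELECT deptnumb, deptname FROM org
  ORDER BY deptnumb
  FETCH FIRST 5 ROWS ONLY
  WITH RR
```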
38.4.2 select-statement/fetch-first-clause The last paragraph in the description of the fetch-first-clause: Specification of the fetch-first-clause in a select-statement makes the cursor not deletable (read-only). This clause cannot be specified with the FOR UPDATE clause. is incorrect and should be removed. ------------------------------------------------------------------------ 38.5 Chapter 6. SQL Statements 38.5.1 Update of the Partitioning Key Now Supported Update of the partitioning key is now supported. The following text from various statements in Chapter 6 should be deleted only if the registry variable DB2_UPDATE_PART_KEY is set to ON: Note: If DB2_UPDATE_PART_KEY=OFF, then the restrictions still apply. 38.5.1.1 Statement: ALTER TABLE Rules * A partitioning key column of a table cannot be updated (SQLSTATE 42997). * A nullable column of a partitioning key cannot be included as a foreign key column when the relationship is defined with ON DELETE SET NULL (SQLSTATE 42997). 38.5.1.2 Statement: CREATE TABLE Rules * A partitioning key column of a table cannot be updated (SQLSTATE 42997). * A nullable column of a partitioning key cannot be included as a foreign key column when the relationship is defined with ON DELETE SET NULL (SQLSTATE 42997). 38.5.1.3 Statement: DECLARE GLOBAL TEMPORARY TABLE PARTITIONING KEY (column-name,...) Note: The partitioning key columns cannot be updated (SQLSTATE 42997). 38.5.1.4 Statement: UPDATE Footnotes * 108 A column of a partitioning key is not updatable (SQLSTATE 42997). The row of data must be deleted and inserted to change columns in a partitioning key. 38.5.2 Larger Index Keys for Unicode Databases 38.5.2.1 ALTER TABLE The length of variable length columns that are part of any index, including primary and unique keys, defined when the registry variable DB2_INDEX_2BYTEVARLEN was on, can be altered to a length greater than 255 bytes.
The fact that a variable length column is involved in a foreign key will no longer prevent the length of that column from being altered to larger than 255 bytes, regardless of the registry variable setting. However, data with length greater than 255 cannot be inserted into the table unless the column in the corresponding primary key has length greater than 255 bytes, which is only possible if the primary key was created with the registry variable ON. 38.5.2.2 CREATE INDEX Indexes can be defined on variable length columns whose length is greater than 255 bytes if the registry variable DB2_INDEX_2BYTEVARLEN is ON. 38.5.2.3 CREATE TABLE Primary and unique keys with variable keyparts can have a size greater than 255 if the registry variable DB2_INDEX_2BYTEVARLEN is ON. Foreign keys can be defined on variable length columns whose length is greater than 255 bytes. 38.5.3 ALTER SEQUENCE ALTER SEQUENCE The ALTER SEQUENCE statement modifies the attributes of a sequence by: * Restarting the sequence * Changing the increment between future sequence values * Setting new minimum or maximum values * Changing the number of cached sequence numbers * Changing whether the sequence can cycle or not * Changing whether sequence numbers must be generated in order of request Invocation This statement can be embedded in an application program or issued through the use of dynamic SQL statements. It is an executable statement that can be dynamically prepared. However, if the bind option DYNAMICRULES BIND applies, the statement cannot be dynamically prepared (SQLSTATE 42509). Authorization The privileges held by the authorization ID of the statement must include at least one of the following: * Definer of the sequence * The ALTERIN privilege for the schema implicitly or explicitly specified * SYSADM or DBADM authority Syntax >>-ALTER SEQUENCE--sequence-name--------------------------------> .-------------------------------------------. 
V | >-------+-RESTART--+-------------------------+-+--+------------>< | '-WITH--numeric-constant--' | +-INCREMENT BY--numeric-constant-------+ +-+-MINVALUE--numeric-constant--+------+ | '-NO MINVALUE-----------------' | +-+-MAXVALUE--numeric-constant--+------+ | '-NO MAXVALUE-----------------' | +-+-CYCLE----+-------------------------+ | '-NO CYCLE-' | +-+-CACHE--integer-constant--+---------+ | '-NO CACHE-----------------' | '-+-ORDER----+-------------------------' '-NO ORDER-' Description sequence-name Identifies the particular sequence. The combination of name, and the implicit or explicit schema name must identify an existing sequence at the current server. If no sequence by this name exists in the explicitly or implicitly specified schema, an error (SQLSTATE 42704) is issued. RESTART Restarts the sequence. If numeric-constant is not specified, the sequence is restarted at the value specified implicitly or explicitly as the starting value on the CREATE SEQUENCE statement that originally created the sequence. WITH numeric-constant Restarts the sequence with the specified value. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820) as long as there are no non-zero digits to the right of the decimal point (SQLSTATE 42894). INCREMENT BY Specifies the interval between consecutive values of the sequence. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), and does not exceed the value of a large integer constant (SQLSTATE 42815), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA). If this value is negative, then the sequence of values descends. If this value is positive, then the sequence of values ascends. 
If this value is 0 or greater than the range defined by MINVALUE and MAXVALUE, only one value will be generated, but the sequence is treated as an ascending sequence otherwise. MINVALUE or NO MINVALUE Specifies the minimum value at which a descending sequence either cycles or stops generating values, or an ascending sequence cycles to after reaching the maximum value. MINVALUE numeric-constant Specifies the numeric constant that is the minimum value. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA), but the value must be less than or equal to the maximum value (SQLSTATE 42815). NO MINVALUE For an ascending sequence, the value is the START WITH value, or 1 if START WITH is not specified. For a descending sequence, the value is the minimum value of the data type associated with the sequence. This is the default. MAXVALUE or NO MAXVALUE Specifies the maximum value at which an ascending sequence either cycles or stops generating values, or a descending sequence cycles to after reaching the minimum value. MAXVALUE numeric-constant Specifies the numeric constant that is the maximum value. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA), but the value must be greater than or equal to the minimum value (SQLSTATE 42815). NO MAXVALUE For an ascending sequence, the value is the maximum value of the data type associated with the sequence. For a descending sequence, the value is the START WITH value, or -1 if START WITH is not specified. This is the default. CYCLE or NO CYCLE Specifies whether the sequence should continue to generate values after reaching either its maximum or minimum value.
The boundary of the sequence can be reached either with the next value landing exactly on the boundary condition, or by overshooting it in which case the next value would be determined from wrapping around to the START WITH value if cycles were permitted. CYCLE Specifies that values continue to be generated for this sequence after the maximum or minimum value has been reached. If this option is used, after an ascending sequence reaches its maximum value, it generates its minimum value; or after a descending sequence reaches its minimum value, it generates its maximum value. The maximum and minimum values for the sequence determine the range that is used for cycling. When CYCLE is in effect, then duplicate values can be generated for the sequence. NO CYCLE Specifies that values will not be generated for the sequence once the maximum or minimum value for the sequence has been reached. This is the default. CACHE or NO CACHE Specifies whether to keep some preallocated values in memory for faster access. This is a performance and tuning option. CACHE integer-constant Specifies the maximum number of sequence values that are preallocated and kept in memory. Preallocating and storing values in the cache reduces synchronous I/O to the log when values are generated for the sequence. In the event of a system failure, all cached sequence values that have not been used in committed statements are lost (that is, they will never be used). The value specified for the CACHE option is the maximum number of sequence values that could be lost in case of system failure. The minimum value is 2 (SQLSTATE 42815). The default value is CACHE 20. NO CACHE Specifies that values of the sequence are not to be preallocated. It ensures that there is not a loss of values in the case of a system failure, shutdown or database deactivation. When this option is specified, the values of the sequence are not stored in the cache. 
In this case, every request for a new value for the sequence results in synchronous I/O to the log. NO ORDER or ORDER Specifies whether the sequence numbers must be generated in order of request. ORDER Specifies that the sequence numbers are generated in order of request. NO ORDER Specifies that the sequence numbers do not need to be generated in order of request. This is the default. After restarting a sequence or changing to CYCLE, it is possible for sequence numbers to duplicate values generated previously by the sequence. Notes * Only future sequence numbers are affected by the ALTER SEQUENCE statement. * The data type of a sequence cannot be changed. Instead, drop and recreate the sequence specifying the desired data type for the new sequence. * All cached values are lost when a sequence is altered. Examples Example 1: A possible reason for specifying RESTART without a numeric value would be to reset the sequence to the START WITH value. In this example, the goal is to generate the numbers from 1 up to the number of rows in the table and then insert the numbers into a column added to the table using temporary tables.
Another use would be to get results back where all the resulting rows are numbered: ALTER SEQUENCE org_seq RESTART SELECT NEXTVAL for org_seq, org.* FROM org 38.5.4 ALTER TABLE Changes to syntax fragments: column-alteration |--column-name--------------------------------------------------> >-----+-SET--+-DATA TYPE--+-VARCHAR-----------+---(--integer--)--+-------+> | | +-CHARACTER VARYING-+ | | | | '-CHAR VARYING------' | | | '-EXPRESSION AS--(--generation-expression--)--------' | +-ADD SCOPE--+-typed-table-name-+----------------------------------+ | '-typed-view-name--' | '-+-| identity-alteration |--------------------------------------+-' '-SET GENERATED--+-ALWAYS-----+---+--------------------------+-' '-BY DEFAULT-' '-| identity-alteration |--' >---------------------------------------------------------------| identity-alteration |---+-RESTART--+--------------------------+-+-------------------| | '-WITH--numeric-constant---' | +-SET INCREMENT BY--numeric-constant----+ | (1) | +-SET--+-NO MINVALUE-----------------+--+ | '-MINVALUE--numeric-constant--' | +-SET--+-NO MAXVALUE-----------------+--+ | '-MAXVALUE--numeric-constant--' | +-SET--+-CYCLE----+---------------------+ | '-NO CYCLE-' | +-SET--+-NO CACHE-----------------+-----+ | '-CACHE--integer-constant--' | '-SET--+-NO ORDER-+---------------------' '-ORDER----' Notes: 1. These parameters can be specified without spaces: NOMINVALUE, NOMAXVALUE, NOCYCLE, NOCACHE, and NOORDER. These single word versions are all acceptable alternatives to the two word versions. Add the following parameters: SET GENERATED Specifies whether values are to be generated for the column always or only when a default value is needed. ALWAYS A value will always be generated for the column when a row is inserted or updated in the table. The column must already be defined as a generated column (SQLSTATE 42837). BY DEFAULT The value will be generated for the column when a row is inserted into the table, unless a value is specified. 
The column must already be defined as a generated column (SQLSTATE 42837). RESTART or RESTART WITH numeric-constant Resets the state of the sequence associated with the identity column. If WITH numeric-constant is not specified, then the sequence for the identity column is restarted at the value that was specified, either implicitly or explicitly, as the starting value when the identity column was originally created. The numeric-constant is an exact numeric constant that can be any positive or negative value that could be assigned to this column (SQLSTATE 42820) as long as there are no non-zero digits to the right of the decimal point (SQLSTATE 42894). The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). The numeric-constant will be used as the next value for the column. SET INCREMENT BY numeric-constant Specifies the interval between consecutive values of the identity column. The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). This value is any positive or negative value that could be assigned to this column (SQLSTATE 42820), and does not exceed the value of a large integer constant (SQLSTATE 42815), as long as there are no non-zero digits to the right of the decimal point (SQLSTATE 42894). If this value is negative, then the sequence of values for this identity column descends. If this value is positive, then the sequence of values for this identity column ascends. If this value is 0, or is greater than the range defined by MINVALUE and MAXVALUE, then DB2 will only generate one value, but the sequence is treated as an ascending sequence otherwise. SET MINVALUE numeric-constant or NO MINVALUE Specifies the minimum value at which a descending identity column either cycles or stops generating values, or the value to which an ascending identity column cycles to after reaching the maximum value. The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). 
MINVALUE numeric-constant Specifies the numeric constant that is the minimum value. This value can be any positive or negative value that could be assigned to this column (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 42894), but the value must be less than the maximum value (SQLSTATE 42815). NO MINVALUE For an ascending sequence, the value is the START WITH value, or 1 if START WITH is not specified. For a descending sequence, the value is the minimum value of the data type of the column. SET MAXVALUE numeric-constant or NO MAXVALUE Specifies the maximum value at which an ascending identity column either cycles or stops generating values, or the value to which a descending identity column cycles to after reaching the minimum value. The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). MAXVALUE numeric-constant Specifies the numeric constant that is the maximum value. This value can be any positive or negative value that could be assigned to this column (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 42894), but the value must be greater than the minimum value (SQLSTATE 42815). NO MAXVALUE For an ascending sequence, the value is the maximum value of the data type of the column. For a descending sequence, the value is the START WITH value, or -1 if START WITH is not specified. SET CYCLE or NO CYCLE Specifies whether this identity column should continue to generate values after generating either the maximum or minimum value. The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). CYCLE Specifies that values continue to be generated for this column after the maximum or minimum value has been reached. If this option is used, then after an ascending identity column reaches the maximum value, it generates its minimum value; or after a descending sequence reaches the minimum value, it generates its maximum value.
The maximum and minimum values for the identity column determine the range that is used for cycling. When CYCLE is in effect, then duplicate values can be generated for an identity column. Although not required, if unique values are desired, a single-column unique index defined using the identity column will ensure uniqueness. If a unique index exists on such an identity column and a non-unique value is generated, then an error occurs (SQLSTATE 23505). NO CYCLE Specifies that values will not be generated for the identity column once the maximum or minimum value has been reached. SET CACHE integer-constant or NO CACHE Specifies whether to keep some preallocated values in memory for faster access. This is a performance and tuning option. The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). CACHE integer-constant Specifies how many values of the identity sequence are preallocated and kept in memory. When values are generated for the identity column, preallocating and storing values in the cache reduces synchronous I/O to the log. If a new value is needed for the identity column and there are no unused values available in the cache, then the allocation of the value requires waiting for I/O to the log. However, when a new value is needed for the identity column and there is an unused value in the cache, the allocation of that identity value can happen more quickly by avoiding the I/O to the log. When a database manager is stopped (database deactivation, system failure, or shutdown, for example), all cached sequence values that have not been used in committed statements are lost (that is, they will never be used). The value specified for the CACHE option is the maximum number of values for the identity column that could be lost in case of system failure. The minimum value is 2 (SQLSTATE 42815). NO CACHE Specifies that values for the identity column are not to be preallocated.
When this option is specified, the values of the identity column are not stored in the cache. In this case, every request for a new identity value results in synchronous I/O to the log. SET ORDER or NO ORDER Specifies whether the identity column values must be generated in order of request. The column must already be defined with the IDENTITY attribute (SQLSTATE 42837). ORDER Specifies that the identity column values are generated in order of request. NO ORDER Specifies that the identity column values do not need to be generated in order of request. 38.5.5 Compound SQL (Embedded) A prepared COMMIT statement is not allowed in an ATOMIC compound SQL statement. 38.5.6 Compound Statement (Dynamic) Compound Statement (Dynamic) A compound statement groups other statements together into an executable block. You can declare SQL variables within a dynamically prepared atomic compound statement. Invocation This statement can be embedded in a trigger, SQL Function, or SQL Method, or issued through the use of dynamic SQL statements. It is an executable statement that can be dynamically prepared. Authorization No privileges are required to invoke a dynamic compound statement. However, the authorization ID of the compound statement must hold the necessary privileges to invoke the SQL statements embedded in the compound statement. Syntax dynamic-compound-statement >>-+--------------+--BEGIN ATOMIC-------------------------------> | (1) | '-label:-------' >-----+-----------------------------------------------+---------> | .-----------------------------------------. | | V | | '-----+-| SQL-variable-declaration |-+---;---+--' '-| condition-declaration |----' .-,-----------------------------. V | >--------SQL-procedure-statement--;---+---END--+--------+------>< '-label--' SQL-variable-declaration .-,--------------------. V | |---DECLARE-------SQL-variable-name---+--data-type--------------> .-DEFAULT NULL-------------. 
>-----+--------------------------+------------------------------| '-DEFAULT--default-values--' condition-declaration |---DECLARE--condition-name--CONDITION--FOR---------------------> .-VALUE-. .-SQLSTATE--+-------+---. >----+-----------------------+---string-constant----------------| Notes: 1. A label can only be specified when the statement is in a function, method, or trigger definition. Description label Defines the label for the code block. If the beginning label is specified, it can be used to qualify SQL variables declared in the dynamic compound statement and can also be specified on a LEAVE statement. If the ending label is specified, it must be the same as the beginning label. ATOMIC ATOMIC indicates that, if an error occurs in the compound statement, all SQL statements in the compound statement will be rolled back and any remaining SQL statements in the compound statement are not processed. SQL-procedure-statement The following list of SQL-control-statements can be used within the dynamic compound statement: o FOR Statement o GET DIAGNOSTICS Statement o IF Statement o ITERATE Statement o LEAVE Statement o SIGNAL Statement o WHILE Statement The SQL statements that can be issued are: o fullselect o Searched UPDATE o Searched DELETE o INSERT o SET variable statement SQL-variable-declaration Declares a variable that is local to the dynamic compound statement. SQL-variable-name Defines the name of a local variable. DB2 converts all SQL variable names to uppercase. The name cannot: + Be the same as another SQL variable within the same compound statement. + Be the same as a parameter name. + Be the same as column names. If an SQL statement contains an identifier with the same name as an SQL variable and a column reference, DB2 interprets the identifier as a column. data-type Specifies the data type of the variable. DEFAULT default-values or NULL Defines the default for the SQL variable. The variable is initialized when the dynamic compound statement is called.
If a default value is not specified, the variable is initialized to NULL. condition-declaration Declares a condition name and corresponding SQLSTATE value. condition-name Specifies the name of the condition. The condition name must be unique within the procedure body and can be referenced only within the compound statement in which it is declared. FOR SQLSTATE string-constant Specifies the SQLSTATE associated with the condition. The string-constant must be specified as five characters enclosed in single quotes, and cannot be '00000'. Notes * Dynamic compound statements are compiled by DB2 as one single statement. This statement is effective for short scripts involving little control flow logic but significant data flow. For larger constructs with nested complex control flow, a better choice is to use SQL procedures. 38.5.7 CREATE FUNCTION (Source or Template) The syntax diagram changes to the following: >>-CREATE FUNCTION--function-name-------------------------------> >----(--+------------------------------------------+---)---*----> | .-,----------------------------------. | | V | | '----+-----------------+---data-type1---+--' '-parameter-name--' >----RETURNS--data-type2---*----+--------------------------+----> '-SPECIFIC--specific-name--' >----*----------------------------------------------------------> >-----+-SOURCE--+-function-name--------------------------------+------------------+> | +-SPECIFIC--specific-name----------------------+ | | '-function-name--(--+-------------------+---)--' | | | .-,-----------. | | | | V | | | | '----data-type---+--' | | .-NOT DETERMINISTIC--. .-EXTERNAL ACTION----. 
| '-AS TEMPLATE--*----+--------------------+--*----+--------------------+--*--' '-DETERMINISTIC------' '-NO EXTERNAL ACTION-' >----*--------------------------------------------------------->< Add the following to the "Description" section: DETERMINISTIC or NOT DETERMINISTIC This optional clause specifies whether the function always returns the same results for given argument values (DETERMINISTIC) or whether the function depends on some state values that affect the results (NOT DETERMINISTIC). That is, a DETERMINISTIC function must always return the same table from successive invocations with identical inputs. Optimizations taking advantage of the fact that identical inputs always produce the same results are prevented by specifying NOT DETERMINISTIC. NOT DETERMINISTIC must be explicitly or implicitly specified if the body of the function accesses a special register or calls another non-deterministic function (SQLSTATE 428C2). NO EXTERNAL ACTION or EXTERNAL ACTION This optional clause specifies whether or not the function takes some action that changes the state of an object not managed by the database manager. By specifying NO EXTERNAL ACTION, the system can use certain optimizations that assume functions have no external impacts. EXTERNAL ACTION must be explicitly or implicitly specified if the body of the function calls another function that has an external action (SQLSTATE 428C2). 38.5.8 CREATE FUNCTION (SQL Scalar, Table or Row) The syntax diagram changes to: >>-CREATE FUNCTION--function-name-------------------------------> >----(--+------------------------------------+---)---*----------> | .-,----------------------------. | | V | | '----parameter-name--data-type1---+--' >----RETURNS--+-data-type2--------------------+--*--------------> '--+-ROW---+---| column-list |--' '-TABLE-' .-LANGUAGE SQL--. >-----+--------------------------+--*----+---------------+--*---> '-SPECIFIC--specific-name--' .-NOT DETERMINISTIC--. .-EXTERNAL ACTION----. 
>-----+--------------------+--*----+--------------------+--*----> '-DETERMINISTIC------' '-NO EXTERNAL ACTION-' .-READS SQL DATA--. .-STATIC DISPATCH--. >-----+-----------------+--*----+------------------+--*---------> '-CONTAINS SQL----' (1) .-CALLED ON NULL INPUT-------. >-----+----------------------------+--*-------------------------> >-----+-----------------------------------------------------+---> | (2) | '-PREDICATES--(--| predicate-specification |--)-------' >----| SQL-function-body |------------------------------------->< column-list .-,--------------------------. V | |---(-----column-name--data-type3---+---)-----------------------| SQL-function-body |---+-RETURN Statement-----------+------------------------------| '-dynamic-compound-statement-' Notes: 1. NULL CALL may be specified in place of CALLED ON NULL INPUT 2. Valid only if RETURNS specifies a scalar result (data-type2) Change the following parameters: LANGUAGE SQL Specifies that the function is written using SQL. This parameter section replaces the "RETURN expression, NULL, WITH common-table-expression, fullselect" parameter section. SQL-function-body Specifies the body of the function. Parameter names can be referenced in the SQL-function-body. Parameter names may be qualified with the function name to avoid ambiguous references. If the SQL-function-body is a dynamic compound statement, it must contain at least one RETURN statement and a RETURN statement must be executed when the function is called (SQLSTATE 42632). If the function is a table or row function, then it can contain only one RETURN statement which must be the last statement in the dynamic compound (SQLSTATE 429BD). For additional details, see Compound Statement (Dynamic) and RETURN. 
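To make the LANGUAGE SQL changes concrete, here is a sketch of a scalar function whose SQL-function-body is a dynamic compound statement; the function name and logic are hypothetical:

```sql
-- Hypothetical SQL scalar function: the dynamic compound body declares
-- a local SQL variable and must execute a RETURN statement when called.
CREATE FUNCTION tax_amount (price DECIMAL(9,2), rate DECIMAL(4,3))
  RETURNS DECIMAL(9,2)
  LANGUAGE SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  CONTAINS SQL
  BEGIN ATOMIC
    DECLARE amt DECIMAL(9,2) DEFAULT 0;
    SET amt = price * rate;
    RETURN amt;
  END
```

Because the body accesses no special registers and calls no non-deterministic functions, DETERMINISTIC and NO EXTERNAL ACTION can be specified, allowing the optimizations described above.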
38.5.9 CREATE METHOD The syntax diagram changes to: CREATE METHOD Syntax >>-CREATE-------------------------------------------------------> >-----+-METHOD--+-method-name----------+---FOR--type-name--+----> | '-| method-signature |-' | '-SPECIFIC METHOD--specific-name---------------------' >-----+-*----EXTERNAL--+-----------------------+--*----+------------------------------+--*--+> | '-NAME--+-'string'---+--' '-TRANSFORM GROUP--group-name--' | | '-identifier-' | '-| SQL-method-body |-----------------------------------------------------------------' >-------------------------------------------------------------->< method-signature |---method-name--(--+---------------------------------------------------------+---)--> | .-,--------------------------------------------------. | | V | | '----+-----------------+---data-type1--+-------------+--+-' '-parameter-name--' '-AS LOCATOR--' >----+------------------------------------------------------------------+-> '-RETURNS--+-data-type2--+-------------+------------------------+--' | '-AS LOCATOR--' | '-data-type3--CAST FROM--data-type4--+-------------+-' '-AS LOCATOR--' >---------------------------------------------------------------| SQL-method-body |---+-RETURN Statement-----------+------------------------------| '-dynamic-compound-statement-' The following parameters replace the "RETURN scalar-expression or NULL" section: SQL-method-body The SQL-method-body defines how the method is implemented if the method specification in CREATE TYPE is LANGUAGE SQL. The SQL-method-body must comply with the following parts of the method specification: o DETERMINISTIC or NOT DETERMINISTIC (SQLSTATE 428C2) o EXTERNAL ACTION or NO EXTERNAL ACTION (SQLSTATE 428C2) o CONTAINS SQL or READS SQL DATA (SQLSTATE 42985) Parameter names can be referenced in the SQL-method-body. The subject of the method is passed to the method implementation as an implicit first parameter named SELF.
For additional details, see Compound Statement (Dynamic) and RETURN.

38.5.10 CREATE SEQUENCE

CREATE SEQUENCE

The CREATE SEQUENCE statement creates a sequence at the application server.

Invocation

This statement can be embedded in an application program or issued through the use of dynamic SQL statements. It is an executable statement that can be dynamically prepared. However, if the bind option DYNAMICRULES BIND applies, the statement cannot be dynamically prepared (SQLSTATE 42509).

Authorization

The privileges held by the authorization ID of the statement must include at least one of the following:
* CREATEIN privilege for the implicitly or explicitly specified schema
* SYSADM or DBADM authority

Syntax

.-AS INTEGER-----. >>-CREATE SEQUENCE--sequence-name---*----+----------------+--*--> '-AS--data-type--' >-----+-------------------------------+--*----------------------> '-START WITH--numeric-constant--' .-INCREMENT BY 1------------------. >-----+---------------------------------+--*--------------------> '-INCREMENT BY--numeric-constant--' (1) .-NO MINVALUE-----------------. >-----+-----------------------------+--*------------------------> '-MINVALUE--numeric-constant--' .-NO MAXVALUE-----------------. .-NO CYCLE--. >-----+-----------------------------+--*----+-----------+--*----> '-MAXVALUE--numeric-constant--' '-CYCLE-----' .-CACHE 20-----------------. .-NO ORDER--. >-----+--------------------------+--*----+-----------+--*------>< +-CACHE--integer-constant--+ '-ORDER-----' '-NO CACHE-----------------'

Notes:
1. These parameters can be specified without spaces: NOMINVALUE, NOMAXVALUE, NOCYCLE, NOCACHE, and NOORDER. These single-word versions are all acceptable alternatives to the two-word versions.

Description

sequence-name
   Names the sequence. The combination of name and the implicit or explicit schema name must not identify an existing sequence at the current server (SQLSTATE 42710). The unqualified form of sequence-name is an SQL identifier.
   The qualified form is a qualifier followed by a period and an SQL identifier. The qualifier is a schema name. If the sequence name is explicitly qualified with a schema name, the schema name cannot begin with 'SYS' or an error (SQLSTATE 42939) is raised.

AS data-type
   Specifies the data type to be used for the sequence value. The data type can be any exact numeric type (SMALLINT, INTEGER, BIGINT or DECIMAL) with a scale of zero, or a user-defined distinct type for which the source type is an exact numeric type with a scale of zero (SQLSTATE 42815). The default is INTEGER.

START WITH numeric-constant
   Specifies the first value for the sequence. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA). The default is MINVALUE for ascending sequences and MAXVALUE for descending sequences. This value is not necessarily the value that a sequence would cycle to after reaching the maximum or minimum value of the sequence. The START WITH clause can be used to start a sequence outside the range that is used for cycles. The range used for cycles is defined by MINVALUE and MAXVALUE.

INCREMENT BY numeric-constant
   Specifies the interval between consecutive values of the sequence. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), and does not exceed the value of a large integer constant (SQLSTATE 42815), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA). If this value is negative, then the sequence of values descends. If this value is positive, then the sequence of values ascends. If this value is 0 or greater than the range defined by MINVALUE and MAXVALUE, only one value will be generated, but the sequence is treated as an ascending sequence otherwise. The default is 1.
MINVALUE or NO MINVALUE
   Specifies the minimum value at which a descending sequence either cycles or stops generating values, or an ascending sequence cycles to after reaching the maximum value.

   MINVALUE numeric-constant
      Specifies the numeric constant that is the minimum value. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA), but the value must be less than or equal to the maximum value (SQLSTATE 42815).

   NO MINVALUE
      For an ascending sequence, the value is the START WITH value, or 1 if START WITH is not specified. For a descending sequence, the value is the minimum value of the data type associated with the sequence. This is the default.

MAXVALUE or NO MAXVALUE
   Specifies the maximum value at which an ascending sequence either cycles or stops generating values, or a descending sequence cycles to after reaching the minimum value.

   MAXVALUE numeric-constant
      Specifies the numeric constant that is the maximum value. This value can be any positive or negative value that could be assigned to a column of the data type associated with the sequence (SQLSTATE 42820), without non-zero digits existing to the right of the decimal point (SQLSTATE 428FA), but the value must be greater than or equal to the minimum value (SQLSTATE 42815).

   NO MAXVALUE
      For an ascending sequence, the value is the maximum value of the data type associated with the sequence. For a descending sequence, the value is the START WITH value, or -1 if START WITH is not specified. This is the default.

CYCLE or NO CYCLE
   Specifies whether the sequence should continue to generate values after reaching either its maximum or minimum value. The boundary of the sequence can be reached either with the next value landing exactly on the boundary condition, or by overshooting it.
   CYCLE
      Specifies that values continue to be generated for this sequence after the maximum or minimum value has been reached. If this option is used, after an ascending sequence reaches its maximum value it generates its minimum value; after a descending sequence reaches its minimum value it generates its maximum value. The maximum and minimum values for the sequence determine the range that is used for cycling. When CYCLE is in effect, duplicate values can be generated for the sequence.

   NO CYCLE
      Specifies that values will not be generated for the sequence once the maximum or minimum value for the sequence has been reached. This is the default.

CACHE or NO CACHE
   Specifies whether to keep some preallocated values in memory for faster access. This is a performance and tuning option.

   CACHE integer-constant
      Specifies the maximum number of sequence values that are preallocated and kept in memory. Preallocating and storing values in the cache reduces synchronous I/O to the log when values are generated for the sequence. In the event of a system failure, all cached sequence values that have not been used in committed statements are lost (that is, they will never be used). The value specified for the CACHE option is the maximum number of sequence values that could be lost in case of system failure. The minimum value is 2 (SQLSTATE 42815). The default value is CACHE 20.

   NO CACHE
      Specifies that values of the sequence are not to be preallocated. It ensures that there is not a loss of values in the case of a system failure, shutdown or database deactivation. When this option is specified, the values of the sequence are not stored in the cache. In this case, every request for a new value for the sequence results in synchronous I/O to the log.

NO ORDER or ORDER
   Specifies whether the sequence numbers must be generated in order of request.

   ORDER
      Specifies that the sequence numbers are generated in order of request.
   NO ORDER
      Specifies that the sequence numbers do not need to be generated in order of request. This is the default.

Notes

* It is possible to define a constant sequence, that is, one that would always return a constant value. This could be done by specifying the same value for both MINVALUE and MAXVALUE, or by specifying an INCREMENT value of zero. In either case, in order to allow for NEXTVAL to generate the same value more than once, CYCLE must be specified. A constant sequence can be used as a numeric global variable. ALTER SEQUENCE can be used to adjust the values that will be generated for a constant sequence.
* A sequence can be cycled manually by using the ALTER SEQUENCE statement. If NO CYCLE is implicitly or explicitly specified, the sequence can be restarted or extended using the ALTER SEQUENCE statement to cause values to continue to be generated once the maximum or minimum value for the sequence has been reached.
* Caching sequence numbers implies that a range of sequence numbers can be kept in memory for fast access. When an application accesses a sequence that can allocate the next sequence number from the cache, the sequence number allocation can happen quickly. However, if an application accesses a sequence that cannot allocate the next sequence number from the cache, the sequence number allocation may require waiting for I/O operations to persistent storage. The choice of the value for CACHE should be made keeping in mind the tradeoff between performance and application requirements.
* The owner has the ALTER and USAGE privileges on the new sequence. Only the USAGE privilege can be granted by the owner, and only to PUBLIC.
* The following syntax is also supported: NOMINVALUE, NOMAXVALUE, NOCYCLE, NOCACHE, and NOORDER.
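For instance, a constant sequence as described in the first note above could be defined as follows (the sequence name and value are illustrative):

   CREATE SEQUENCE answer_seq
     START WITH 42
     INCREMENT BY 0
     CYCLE

Because the increment is zero and CYCLE is specified, each invocation of NEXTVAL FOR answer_seq returns 42 until the sequence is adjusted with ALTER SEQUENCE.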
Examples

Example 1: Create a sequence called org_seq:

   CREATE SEQUENCE org_seq
     START WITH 1
     INCREMENT BY 1
     NO MAXVALUE
     NO CYCLE
     CACHE 24

38.5.11 CREATE TRIGGER

CREATE TRIGGER

Syntax

>>-CREATE TRIGGER--trigger-name----+-NO CASCADE BEFORE-+--------> '-AFTER-------------' >-----+-INSERT-----------------------------+--ON--table-name----> +-DELETE-----------------------------+ '-UPDATE--+------------------------+-' | .-,--------------. | | V | | '-OF----column-name---+--' >-----+----------------------------------------------------------------------+> | .----------------------------------------------------. | | V (1) (2) .-AS-. | | '-REFERENCING-------------------+-OLD--+----+--correlation-name--+--+--' | .-AS-. | +-NEW-+----+--correlation-name---+ | .-AS-. | +-OLD_TABLE-+----+--identifier---+ | .-AS-. | '-NEW_TABLE-+----+--identifier---' >-----+-FOR EACH ROW---------------+--MODE DB2SQL---------------> | (3) | '--------FOR EACH STATEMENT--' >-----| triggered-action |------------------------------------->< 

triggered-action

|--+-------------------------------+--SQL-procedure-statement---| '-WHEN--(--search-condition--)--'

Notes:
1. OLD and NEW may only be specified once each.
2. OLD_TABLE and NEW_TABLE may only be specified once each, and only for AFTER triggers.
3. FOR EACH STATEMENT may not be specified for BEFORE triggers.

Replace the description of "triggered-action" with the following:

triggered-action
   Specifies the action to be performed when a trigger is activated. A triggered-action is composed of an SQL-procedure-statement and an optional condition for the execution of the SQL-procedure-statement.

   WHEN (search-condition)
      Specifies a condition that is true, false, or unknown. The search-condition provides a capability to determine whether or not a certain triggered action should be executed. The associated action is performed only if the specified search condition evaluates as true.
      If the WHEN clause is omitted, the associated SQL-procedure-statement is always performed.

   SQL-procedure-statement
      The SQL-procedure-statement can contain a dynamic compound statement or any of the SQL control statements listed in Compound Statement (Dynamic). If the trigger is a BEFORE trigger, then an SQL-procedure-statement can also include a fullselect or a SET variable statement (SQLSTATE 42987). If the trigger is an AFTER trigger, then an SQL-procedure-statement can also include one of the following (SQLSTATE 42987):
      o an INSERT SQL statement
      o a searched UPDATE SQL statement
      o a searched DELETE SQL statement
      o a SET variable statement
      o a fullselect

      The SQL-procedure-statement cannot reference an undefined transition variable (SQLSTATE 42703) or a declared temporary table (SQLSTATE 42995). The SQL-procedure-statement in a BEFORE trigger cannot reference a summary table defined with REFRESH IMMEDIATE (SQLSTATE 42997). The SQL-procedure-statement in a BEFORE trigger cannot reference a generated column, other than the identity column, in the new transition variable (SQLSTATE 42989).

The Notes section changes to the following:

* The result of a fullselect specified in the SQL-procedure-statement is not available inside or outside of the trigger.
* Inoperative triggers: An inoperative trigger is a trigger that is no longer available and is therefore never activated. A trigger becomes inoperative if:
   o A privilege that the creator of the trigger is required to have for the trigger to execute is revoked.
   o An object such as a table, view or alias, upon which the triggered action is dependent, is dropped.
   o A view, upon which the triggered action is dependent, becomes inoperative.
   o An alias that is the subject table of the trigger is dropped.

  In practical terms, an inoperative trigger is one in which a trigger definition has been dropped as a result of cascading rules for DROP or REVOKE statements.
  For example, when a view is dropped, any trigger with an SQL-procedure-statement defined using that view is made inoperative.

  When a trigger is made inoperative, all packages with statements performing operations that were activating the trigger will be marked invalid. When the package is rebound (explicitly or implicitly), the inoperative trigger is completely ignored. Similarly, applications with dynamic SQL statements performing operations that were activating the trigger will also completely ignore any inoperative triggers.

  The trigger name can still be specified in the DROP TRIGGER and COMMENT ON TRIGGER statements.

  An inoperative trigger may be recreated by issuing a CREATE TRIGGER statement using the definition text of the inoperative trigger. This trigger definition text is stored in the TEXT column of SYSCAT.TRIGGERS. Note that there is no need to explicitly drop the inoperative trigger in order to recreate it. Issuing a CREATE TRIGGER statement with the same trigger-name as an inoperative trigger will cause that inoperative trigger to be replaced with a warning (SQLSTATE 01595).

  Inoperative triggers are indicated by an X in the VALID column of the SYSCAT.TRIGGERS catalog view.

* Errors executing triggers: Errors that occur during the execution of triggered SQL statements are returned using SQLSTATE 09000 unless the error is considered severe. If the error is severe, the severe error SQLSTATE is returned. The SQLERRMC field of the SQLCA for a non-severe error will include the trigger name, SQLCODE, SQLSTATE and as many tokens as will fit from the tokens of the failure.

  The SQL-procedure-statement could include a SIGNAL SQLSTATE statement or contain a RAISE_ERROR function. In both these cases, the SQLSTATE returned is the one specified in the SIGNAL SQLSTATE statement or the RAISE_ERROR condition.

38.5.12 CREATE WRAPPER

Linux uses libraries called LIBDRDA.SO and LIBSQLNET.SO, not LIBDRDA.A and LIBSQLNET.A.
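On Linux, a wrapper registration therefore names the .SO library; a minimal sketch (the wrapper name shown is illustrative):

   CREATE WRAPPER DRDA LIBRARY 'libdrda.so'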
38.5.13 DECLARE CURSOR

Within the "DECLARE CURSOR" statement, near the end of the Notes section, the following sentence should be changed from:

   An ambiguous cursor is considered read-only if the BLOCKING bind option is ALL, otherwise it is considered deletable.

to:

   An ambiguous cursor is considered read-only if the BLOCKING bind option is ALL; otherwise, it is considered updatable.

The change is from the word "deletable" to the word "updatable".

38.5.14 DELETE

The searched DELETE syntax diagram changes to the following:

>>-DELETE FROM----+-table-name-------------------+--------------> +-view-name--------------------+ '-ONLY--(--+-table-name-+---)--' '-view-name--' >-----+---------------------------+-----------------------------> | .-AS-. | '-+----+--correlation-name--' >-----+--------------------------+---+---------------+--------->< '-WHERE--search-condition--' '-WITH--+-RR-+--' +-RS-+ +-CS-+ '-UR-'

Positioned DELETE:

>>-DELETE FROM----+-table-name-------------------+--------------> +-view-name--------------------+ '-ONLY--(--+-table-name-+---)--' '-view-name--' >----WHERE CURRENT OF--cursor-name----------------------------->< 

Add the following to the description section:

WITH
   Specifies the isolation level used when locating the rows to be deleted.

   RR  Repeatable Read
   RS  Read Stability
   CS  Cursor Stability
   UR  Uncommitted Read

   The default isolation level of the statement is the isolation level of the package in which the statement is bound.

38.5.15 DROP

Add the following option:

>>-SEQUENCE--sequence-name--RESTRICT--------------------------->< 

Add the following parameters:

SEQUENCE sequence-name RESTRICT
   Identifies the particular sequence that is to be dropped. The sequence-name, along with the implicit or explicit schema name, must identify an existing sequence at the current server. If no sequence by this name exists in the explicitly or implicitly specified schema, an error (SQLSTATE 42704) is raised.
   The RESTRICT keyword enforces the rule that the sequence is not dropped if the definition of a table column refers to the sequence (through an IDENTITY column).

Note:
o System-created sequences for IDENTITY columns cannot be dropped using the DROP SEQUENCE statement.
o When a sequence is dropped, all privileges on the sequence are also dropped.

The table showing the dependencies that objects have on each other (Table 27) needs to be updated as follows:

New row: DROP SEQUENCE. The entry at the intersection of the new row "DROP SEQUENCE" and the column "PACKAGE" will be an "A". The rest of the entries in this new row will be "-".

38.5.16 GRANT (Sequence Privileges)

GRANT (Sequence Privileges)

This form of the GRANT statement grants privileges on a user-defined sequence.

Invocation

This statement can be embedded in an application program or issued through the use of dynamic SQL statements. It is an executable statement that can be dynamically prepared. However, if the bind option DYNAMICRULES BIND applies, the statement cannot be dynamically prepared (SQLSTATE 42509).

Authorization

The privileges held by the authorization ID of the statement must include at least one of the following:
* Owner of the sequence
* SYSADM or DBADM authority

Syntax

>>-GRANT--USAGE--ON SEQUENCE--sequence-name--TO PUBLIC--------->< 

Description

USAGE
   Grants the USAGE privilege for a sequence.

ON SEQUENCE sequence-name
   Identifies the sequence on which the USAGE privilege is to be granted. The sequence-name, including the implicit or explicit schema qualifier, must uniquely identify an existing sequence at the current server. If no sequence by this name exists in the specified schema, an error (SQLSTATE 42704) is raised.

TO PUBLIC
   Grants the USAGE privilege to all users.
Examples

Example 1: Grant any user the privilege on a sequence called MYNUM:

   GRANT USAGE ON SEQUENCE MYNUM TO PUBLIC

38.5.17 INSERT

Syntax diagram changes to:

>>-INSERT INTO----+-table-name-+--------------------------------> '-view-name--' >-----+----------------------------+----------------------------> | .-,--------------. | | V | | '-(-----column-name---+---)--' .-,------------------------------------. V | >-----+-VALUES------+-+-expression-+----------------+--+--------+> | | +-NULL-------+ | | | | '-DEFAULT----' | | | | .-,-----------------. | | | | V | | | | '-(------+-expression-+--+---)--' | | +-NULL-------+ | | '-DEFAULT----' | '-+---------------------------------------+---fullselect--' | .-,--------------------------. | | V | | '-WITH-----common-table-expression---+--' >-----+---------------+---------------------------------------->< '-WITH--+-RR-+--' +-RS-+ +-CS-+ '-UR-'

Add the following to the description section:

WITH
   Specifies the isolation level at which the fullselect is executed.

   RR  Repeatable Read
   RS  Read Stability
   CS  Cursor Stability
   UR  Uncommitted Read

   The default isolation level of the statement is the isolation level of the package in which the statement is bound.

38.5.18 SELECT INTO

The syntax diagram changes to:

.-,----------------. V | >>-select-clause--INTO-------host-variable---+--from-clause-----> >----+--------------+--+-----------------+--+---------------+---> '-where-clause-' '-group-by-clause-' '-having-clause-' >-----+---------------+---------------------------------------->< '-WITH--+-RR-+--' +-RS-+ +-CS-+ '-UR-'

Add the following to the description section:

WITH
   Specifies the isolation level at which the SELECT INTO statement is executed.

   RR  Repeatable Read
   RS  Read Stability
   CS  Cursor Stability
   UR  Uncommitted Read

   The default isolation level of the statement is the isolation level of the package in which the statement is bound.
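As an illustration of the new WITH clause on INSERT (the table names here are hypothetical), the isolation level applies to the fullselect and is written at the end of the statement:

   INSERT INTO archive_orders
     SELECT * FROM orders
     WHERE ship_date < '2001-01-01'
     WITH UR

Here the rows to be copied are read at Uncommitted Read rather than at the isolation level of the package in which the statement is bound.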
38.5.19 SET ENCRYPTION PASSWORD

SET ENCRYPTION PASSWORD

The SET ENCRYPTION PASSWORD statement sets the password that will be used by the encryption and decryption functions. The password is not tied to DB2 authentication, and is used for data encryption only. This statement is not under transaction control.

Invocation

The statement can be embedded in an application program or issued interactively. It is an executable statement that can be dynamically prepared.

Authorization

No authorization is required to execute this statement.

Syntax

.-=-. >>-SET--ENCRYPTION PASSWORD--+---+--+-host-variable---+-------->< '-string-constant-'

Description

The ENCRYPTION PASSWORD can be used by the ENCRYPT, DECRYPT_BIN, and DECRYPT_CHAR built-in functions for password-based encryption. The length must be between 6 and 127 inclusive. All characters must be specified in the exact case intended, as there is no conversion to uppercase characters.

host-variable
   A variable of type CHAR or VARCHAR. The length of the contents of the host-variable must be between 6 and 127 inclusive (SQLSTATE 428FC). It cannot be set to null. All characters must be specified in the exact case intended, as there is no conversion to uppercase characters.

string-constant
   A character string constant. The length must be between 6 and 127 inclusive (SQLSTATE 428FC).

Rules

* The initial ENCRYPTION PASSWORD value is the empty string ('').
* The host-variable or string-constant is transmitted to the database server using normal DB2 mechanisms.

Notes

* See 38.3.2.3, ENCRYPT and 38.3.2.2, DECRYPT_BIN and DECRYPT_CHAR for additional information on using this statement.

Examples

Example 1: The following statement sets the ENCRYPTION PASSWORD.

   SET ENCRYPTION PASSWORD = 'bubbalu'

38.5.20 SET transition-variable

This section changes to the following:

SET Variable

The SET Variable statement assigns values to local variables or to new transition variables. It is under transaction control.
Invocation This statement can only be used as an SQL statement in either a dynamic compound statement, trigger, SQL function or SQL method. Authorization To reference a transition variable, the privileges held by the authorization ID of the trigger creator must include at least one of the following: * UPDATE of the columns referenced on the left hand side of the assignment and SELECT for any columns referenced on the right hand side. * CONTROL privilege on the table (subject table of the trigger) * SYSADM or DBADM authority. To execute this statement with a row-fullselect as the right hand side of the assignment, the privileges held by the authorization ID of either the trigger definer or the dynamic compound statement owner must also include at least one of the following, for each table or view referenced: * SELECT privilege * CONTROL privilege * SYSADM or DBADM. Syntax >>-SET----------------------------------------------------------> .-,---------------------------------------------------------------------------------. V | >--------+-| target-variable |--=--+-expression-+--------------------------------------+--+> | +-NULL-------+ | | '-DEFAULT----' | | .-,----------------------. .-,--------------------. | | V | V (1) | | '-(-----| target-variable |---+---)--=--(--+----+-expression------+--+-+---)--' | +-NULL------------+ | | '-DEFAULT---------' | | (2) | '-row-fullselect------------' >-------------------------------------------------------------->< target-variable |---+-SQL-variable-name--------+---+--------------------------+-| '-transition-variable-name-' | .--------------------. | | V | | '----..attribute-name---+--' Notes: 1. The number of expressions, NULLs and DEFAULTs must match the number of target-variables. 2. The number of columns in the select list must match the number of target-variables. Description target-variable Identifies the target variable of the assignment. 
A target-variable representing the same variable must not be specified more than once (SQLSTATE 42701).

SQL-variable-name
   Identifies the SQL variable that is the assignment target. SQL variables must be declared before they are used. SQL variables can be defined in a dynamic compound statement.

transition-variable-name
   Identifies the column to be updated in the transition row. A transition-variable-name must identify a column in the subject table of a trigger, optionally qualified by a correlation name that identifies the new value (SQLSTATE 42703).

..attribute-name
   Specifies the attribute of a structured type that is set (referred to as an attribute assignment). The SQL-variable-name or transition-variable-name specified must be defined with a user-defined structured type (SQLSTATE 428DP). The attribute-name must be an attribute of the structured type (SQLSTATE 42703). An assignment that does not involve the ..attribute-name clause is referred to as a conventional assignment.

expression
   Indicates the new value of the target-variable. The expression is any expression of the type described in Chapter 2 of the SQL Reference. The expression cannot include a column function except when it occurs within a scalar fullselect (SQLSTATE 42903). In the context of a CREATE TRIGGER statement, an expression may contain references to OLD and NEW transition variables and must be qualified by the correlation-name to specify which transition variable (SQLSTATE 42702).

NULL
   Specifies the null value and can only be specified for nullable columns (SQLSTATE 23502). NULL cannot be the value in an attribute assignment (SQLSTATE 429B9), unless it was specifically cast to the data type of the attribute.

DEFAULT
   Specifies that the default value should be used. If target-variable is a column, the value inserted depends on how the column was defined in the table.

   o If the column was defined using the WITH DEFAULT clause, then the value is set to the default defined for the column.
   o If the column was defined using the IDENTITY clause, the value is generated by the database manager.
   o If the column was defined without specifying the WITH DEFAULT clause, the IDENTITY clause, or the NOT NULL clause, then the value is NULL.
   o If the column was defined using the NOT NULL clause and the IDENTITY clause is not used, or the WITH DEFAULT clause was not used or DEFAULT NULL was used, the DEFAULT keyword cannot be specified for that column (SQLSTATE 23502).

   If target-variable is an SQL variable, then the value inserted is the default as specified or implied in the variable declaration.

row-fullselect
   A fullselect that returns a single row with the number of columns corresponding to the number of target-variables specified for assignment. The values are assigned to each corresponding target-variable. If the result of the row-fullselect is no rows, then null values are assigned. In the context of a CREATE TRIGGER statement, a row-fullselect may contain references to OLD and NEW transition variables which must be qualified by their correlation-name to specify which transition variable to use (SQLSTATE 42702). An error is returned if there is more than one row in the result (SQLSTATE 21000).

Rules

* The number of values to be assigned from expressions, NULLs and DEFAULTs or the row-fullselect must match the number of target-variables specified for assignment (SQLSTATE 42802).
* A SET Variable statement cannot assign an SQL variable and a transition variable in one statement (SQLSTATE 42997).
* Values are assigned to target-variables under the assignment rules described in Chapter 2 of the SQL Reference. If the statement is used in a BEFORE UPDATE trigger, and the registry variable DB2_UPDATE_PART_KEY=OFF, then a transition-variable specified as target-variable cannot be a partitioning key column (SQLSTATE 42997).

Notes

* If more than one assignment is included, all expressions and row-fullselects are evaluated before the assignments are performed.
  Thus references to target-variables in an expression or row-fullselect are always the value of the target-variable prior to any assignment in the single SET statement.

* When an identity column defined as a distinct type is updated, the entire computation is done in the source type, and the result is cast to the distinct type before the value is actually assigned to the column.
* To have DB2 generate a value on a SET statement for an identity column, use the DEFAULT keyword:

   SET NEW.EMPNO = DEFAULT

  In this example, NEW.EMPNO is defined as an identity column, and the value used to update this column is generated by DB2.

The examples for this statement stay the same.

38.5.21 UPDATE

The searched UPDATE syntax diagram is changed to:

>>-UPDATE----+-table-name-------------------+-------------------> +-view-name--------------------+ '-ONLY--(--+-table-name-+---)--' '-view-name--' >-----+---------------------------+-----------------------------> | .-AS-. | '-+----+--correlation-name--' >-----SET--| assignment-clause |--------------------------------> >-----+--------------------------+---+---------------+--------->< '-WHERE--search-condition--' '-WITH--+-RR-+--' +-RS-+ +-CS-+ '-UR-'

Add the following to the description section:

WITH
   Specifies the isolation level at which the UPDATE statement is executed.

   RR  Repeatable Read
   RS  Read Stability
   CS  Cursor Stability
   UR  Uncommitted Read

   The default isolation level of the statement is the isolation level of the package in which the statement is bound.

------------------------------------------------------------------------

38.6 Chapter 7. SQL Procedures now called Chapter 7. SQL Control Statements

Control statements are SQL statements that allow SQL to be used in a manner similar to writing a program in a structured programming language. SQL control statements can be used in the body of a routine, trigger or a dynamic compound statement.
This chapter contains the syntax and descriptions of the supported SQL control statements, along with the SQL-procedure-statement. 38.6.1 SQL Procedure Statement The SQL Procedure Statement information changes to the following: SQL Procedure Statement This chapter contains syntax diagrams, semantic descriptions, rules, and examples of the use of the statements that constitute the procedure body of an SQL routine, trigger, or dynamic compound statement. Syntax >>-+---------+---+-| SQL-control-statement |-+----------------->< '-label:--' '-| SQL-statement |---------' SQL-control-statement (1) |---+-ALLOCATE CURSOR statement---------+-----------------------| | (1) | +-assignment statement--------------+ | (1) | +-ASSOCIATE LOCATORS statement------+ | (1) | +-CASE statement--------------------+ | (2) | +-dynamic-compound statement--------+ +-FOR statement---------------------+ +-GET DIAGNOSTICS statement---------+ | (1) | +-GOTO statement--------------------+ +-IF statement----------------------+ +-ITERATE statement-----------------+ +-LEAVE statement-------------------+ | (1) | +-LOOP statement--------------------+ | (1) | +-procedure-compound statement------+ | (1) | +-REPEAT statement------------------+ | (1) | +-RESIGNAL statement----------------+ +-RETURN statement------------------+ +-SIGNAL statement------------------+ '-WHILE statement-------------------' Notes: 1. This statement is only supported in the scope of an SQL Procedure. 2. This statement is only supported within a trigger, SQL function, or SQL method. It must be the outermost statement. Description label: Specifies the label for an SQL procedure statement. The label must be unique within a list of SQL procedure statements, including any compound statements nested within the list. Note that compound statements that are not nested may use the same label. A list of SQL procedure statements is possible in a number of SQL control statements. 
In the context of a trigger, an SQL function or method, or a dynamic compound statement, only the dynamic compound statement, the FOR statement, and the WHILE statement may be labeled.

SQL-statement
   In the body of an SQL procedure, all executable SQL statements can be contained, with the exception of the following:
   o CONNECT
   o CREATE any object other than indexes, tables, or views
   o DESCRIBE
   o DISCONNECT
   o DROP any object other than indexes, tables, or views
   o FLUSH EVENT MONITOR
   o REFRESH TABLE
   o RELEASE (connection only)
   o RENAME TABLE
   o RENAME TABLESPACE
   o REVOKE
   o SET CONNECTION
   o SET INTEGRITY

Note: You may include CALL statements within an SQL procedure body, but these CALL statements can only call another SQL procedure or a C procedure. CALL statements within an SQL procedure body cannot call other types of stored procedures.

38.6.2 FOR

FOR

The FOR statement executes a statement or group of statements for each row of a table.

Syntax

>>-+---------+---FOR--for-loop-name--AS------------------------->
   '-label:--'
>-----+-------------------------------+--select-statement---DO-->
      |                  (1)          |
      '-cursor-name--CURSOR FOR-------'
      .-------------------------------.
      V                               |
>--------SQL-procedure-statement--;---+--END FOR----+--------+-><
                                                    '-label--'

Notes:
1. This option can only be used in the context of an SQL Procedure.

Description

label
   Specifies the label for the FOR statement. If the beginning label is specified, that label can be used in LEAVE and ITERATE statements. If the ending label is specified, it must be the same as the beginning label.

for-loop-name
   Specifies a label for the implicit compound statement generated to implement the FOR statement. It follows the rules for the label of a compound statement, except that it cannot be used with an ITERATE or LEAVE statement within the FOR statement. The for-loop-name is used to qualify the column names returned by the specified select-statement.
cursor-name
   Names the cursor that is used to select rows from the result table of the SELECT statement. If not specified, DB2 generates a unique cursor name.

select-statement
   Specifies the SELECT statement of the cursor. All columns in the select list must have a name, and there cannot be two columns with the same name.
   In a trigger, function, method, or dynamic compound statement, the select-statement must consist of only a fullselect with optional common table expressions.

SQL-procedure-statement
   Specifies a statement (or statements) to be invoked for each row of the table.

Rules
* The select list must consist of unique column names, and the table specified in the select list must exist when the procedure is created, or it must be a table created in a previous SQL procedure statement.
* The cursor specified in a for-statement cannot be referenced outside the for-statement, and cannot be specified in an OPEN, FETCH, or CLOSE statement.

Examples

In the following example, the for-statement is used to iterate over the entire employee table. For each row in the table, the SQL variable fullname is set to the last name of the employee, followed by a comma, the first name, a blank space, and the middle initial. Each value for fullname is inserted into table tnames.

   BEGIN
      DECLARE fullname CHAR(40);
      FOR vl AS
         SELECT firstnme, midinit, lastname FROM employee
         DO
         SET fullname = lastname || ',' || firstnme || ' ' || midinit;
         INSERT INTO tnames VALUES (fullname);
      END FOR
   END

38.6.3 Compound Statement changes to Compound Statement (Procedure)

A procedure compound statement groups other statements together in an SQL procedure. You can declare SQL variables, cursors, and condition handlers within a compound statement.

The syntax diagram now has a title: procedure-compound-statement.

                         .-NOT ATOMIC--.
>>-+---------+--BEGIN----+-------------+------------------------>
   '-label:--'           '-ATOMIC------'
>-----+-----------------------------------------------+--------->
      |  .-----------------------------------------.  |
      |  V                                         |  |
      '-----+-| SQL-variable-declaration |-+---;---+--'
            +-| condition-declaration |----+
            '-| return-codes-declaration |-'
>-----+--------------------------------------+------------------>
      |  .--------------------------------.  |
      |  V                                |  |
      '----| statement-declaration |--;---+--'
>-----+-------------------------------------+------------------->
      |  .-------------------------------.  |
      |  V                               |  |
      '----DECLARE-CURSOR-statement--;---+--'
>-----+------------------------------------+-------------------->
      |  .------------------------------.  |
      |  V                              |  |
      '----| handler-declaration |--;---+--'
      .-------------------------------.
      V                               |
>--------SQL-procedure-statement--;---+---END--+--------+------><
                                               '-label--'

SQL-variable-declaration

      .-,--------------------.
      V                      |
|---DECLARE-------SQL-variable-name---+------------------------->
                     .-DEFAULT NULL-------.
>-----+-data-type----+--------------------+-+-------------------|
      |              '-DEFAULT--constant--' |
      '-RESULT_SET_LOCATOR--VARYING---------'

condition-declaration

|---DECLARE--condition-name--CONDITION--FOR--------------------->
                 .-VALUE-.
      .-SQLSTATE--+-------+---.
>----+-----------------------+---string-constant----------------|

statement-declaration

      .-,-----------------.
      V                   |
|---DECLARE-----statement-name---+---STATEMENT------------------|

return-codes-declaration

|---DECLARE----+-SQLSTATE--CHAR (5)--+---+--------------------+-|
               '-SQLCODE--INTEGER----'   '-DEFAULT--constant--'

handler-declaration

|---DECLARE----+-CONTINUE-+---HANDLER--FOR---------------------->
               +-EXIT-----+
               '-UNDO-----'
      .-,-----------------------------------.
      V             .-VALUE-.               |
>---------+-SQLSTATE--+-------+--string--+--+------------------->
          +-condition-name---------------+
          +-SQLEXCEPTION-----------------+
          +-SQLWARNING-------------------+
          '-NOT FOUND--------------------'
>----SQL-procedure-statement------------------------------------|

statement-declaration
   A statement-declaration declares a list of one or more names that are local to the compound statement. A statement name cannot be the same as another statement name within the same compound statement.

38.6.4 RETURN

RETURN

The RETURN statement is used to return from the routine. For SQL functions or methods, it returns the result of the function or method. For an SQL procedure, it optionally returns an integer status value.

Syntax

>>-RETURN--+---------------------------------------------------------+->
           +-expression----------------------------------------------+
           +-NULL----------------------------------------------------+
           '-+---------------------------------------+---fullselect--'
             |       .-,--------------------------.  |
             |       V                            |  |
             '-WITH-----common-table-expression---+--'
>--------------------------------------------------------------><

Description

expression
   Specifies a value that is returned from the routine:
   o If the routine is a function or method, one of expression, NULL, or fullselect must be specified (SQLSTATE 42630), and the data type of the result must be assignable to the RETURNS type of the routine (SQLSTATE 42866).
   o A scalar expression (other than a scalar fullselect) cannot be specified for a table function (SQLSTATE 428F1).
   o If the routine is a procedure, the data type of expression must be INTEGER (SQLSTATE 428E2). A procedure cannot return NULL or a fullselect.

NULL
   Specifies that the function or method returns a null value of the data type defined in the RETURNS clause. NULL cannot be specified for a RETURN from a procedure.

WITH common-table-expression
   Defines a common table expression for use with the fullselect that follows.
fullselect
   Specifies the row or rows to be returned for the function. The number of columns in the fullselect must match the number of columns in the function result (SQLSTATE 42811). In addition, the static column types of the fullselect must be assignable to the declared column types of the function result, using the rules for assignment to columns (SQLSTATE 42866). The fullselect cannot be specified for a RETURN from a procedure.
   If the routine is a scalar function or method, then the fullselect must return one column (SQLSTATE 42823) and, at most, one row (SQLSTATE 21000). If the routine is a row function, it must return, at most, one row (SQLSTATE 21505). If the routine is a table function, it can return zero or more rows with one or more columns.

Rules
* The execution of an SQL function or method must end with a RETURN statement (SQLSTATE 42632).
* In an SQL table or row function using a dynamic-compound-statement, the only RETURN statement allowed is the one at the end of the compound statement (SQLSTATE 429BD).

Notes
* When a value is returned from a procedure, the caller may access the value using:
   o the GET DIAGNOSTICS statement to retrieve the RETURN_STATUS, when the SQL procedure was called from another SQL procedure
   o the parameter bound for the return value parameter marker in the escape clause CALL syntax (?=CALL...) in a CLI application
   o the SQLCA returned from processing the CALL of an SQL procedure, by retrieving the value of SQLERRD[0] when the SQLCODE is not less than zero (assume a value of -1 when the SQLCODE is less than zero).

Examples

Use a RETURN statement to return from an SQL stored procedure with a status value of zero if successful, and -200 if not.

   BEGIN
      ...
      GOTO FAIL
      ...
      SUCCESS: RETURN 0
      FAIL: RETURN -200
   END

38.6.5 SIGNAL

The SIGNAL SQLSTATE statement is no longer used; it is superseded by the following usage.

SIGNAL

The SIGNAL statement is used to signal an error or warning condition.
It causes an error or warning to be returned with the specified SQLSTATE, along with optional message text.

Syntax

                      .-VALUE-.
>>-SIGNAL----+-SQLSTATE--+-------+--sqlstate-string-constant--+->
             '-condition-name---------------------------------'
>-----+--------------------------------------------------------+-><
      +-SET--MESSAGE_TEXT-- = --+-variable-name--------------+-+
      |                         '-diagnostic-string-constant-' |
      |                                          (1)           |
      '-(--diagnostic-string--)--------------------------------'

Notes:
1. This option is only provided within the scope of a CREATE TRIGGER statement for compatibility with older versions of DB2.

Description

SQLSTATE VALUE sqlstate-string-constant
   The specified string constant represents an SQLSTATE. It must be a character string constant with exactly 5 characters that follow the rules for SQLSTATEs:
   o Each character must be from the set of digits ('0' through '9') or non-accented upper case letters ('A' through 'Z').
   o The SQLSTATE class (first two characters) cannot be '00', since this represents successful completion.
   In the context of either a dynamic compound statement, trigger, SQL function, or SQL method, the following rules must also be applied:
   o The SQLSTATE class (first two characters) cannot be '01' or '02', since these are not error classes.
   o If the SQLSTATE class starts with the numbers '0' through '6' or the letters 'A' through 'H', then the subclass (the last three characters) must start with a letter in the range of 'I' through 'Z'.
   o If the SQLSTATE class starts with the numbers '7', '8', '9', or the letters 'I' through 'Z', then the subclass can be any of '0' through '9' or 'A' through 'Z'.
   If the SQLSTATE does not conform to these rules, an error is raised (SQLSTATE 428B3).

condition-name
   Specifies the name of the condition. The condition name must be unique within the procedure and can only be referenced within the compound statement in which it is declared.

SET MESSAGE_TEXT =
   Specifies a string that describes the error or warning.
The string is returned in the SQLERRMC field of the SQLCA. If the actual string is longer than 70 bytes, it is truncated without warning. This clause can only be specified if an SQLSTATE or condition-name is also specified (SQLSTATE 42601).

variable-name
   Identifies an SQL variable that must be declared within the compound statement. The SQL variable must be defined as a CHAR or VARCHAR data type.

diagnostic-string-constant
   Specifies a character string constant that contains the message text.

diagnostic-string
   An expression with a type of CHAR or VARCHAR that returns a character string of up to 70 bytes to describe the error condition. If the string is longer than 70 bytes, it is truncated. This option is only provided within the scope of a CREATE TRIGGER statement, for compatibility with older versions of DB2. Regular use is not recommended.

Notes
* If a SIGNAL statement is issued, the SQLCODE that is assigned is:
     +438 if the SQLSTATE begins with '01' or '02'
     -438 otherwise
* If the SQLSTATE or condition indicates that an exception (SQLSTATE class other than '01' or '02') is signaled:
   o Then the exception is handled and control is transferred to a handler, provided that a handler exists in the same compound statement (or an outer compound statement) as the signal statement, and the compound statement contains a handler for the specified SQLSTATE, condition-name, or SQLEXCEPTION;
   o If the exception cannot be handled, then control is immediately returned to the end of the compound statement.
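The SQLSTATE conformance rules and the +438/-438 SQLCODE mapping described above can be sketched as a small validator. This is an illustrative helper only, not part of DB2; the function names are invented:

```python
def is_valid_signal_sqlstate(sqlstate, in_compound_context=False):
    """Check an SQLSTATE string against the SIGNAL conformance rules."""
    allowed = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    if len(sqlstate) != 5 or any(c not in allowed for c in sqlstate):
        return False
    cls, subclass = sqlstate[:2], sqlstate[2:]
    if cls == "00":          # '00' means successful completion; cannot be signaled
        return False
    if in_compound_context:  # trigger, SQL function/method, or dynamic compound
        if cls in ("01", "02"):          # warning / not-found classes disallowed
            return False
        if cls[0] in "0123456ABCDEFGH" and subclass[0] not in "IJKLMNOPQRSTUVWXYZ":
            return False                 # reserved class: subclass must start I-Z
    return True

def signal_sqlcode(sqlstate):
    """SQLCODE assigned on SIGNAL: +438 for classes '01'/'02', -438 otherwise."""
    return 438 if sqlstate[:2] in ("01", "02") else -438
```

For example, '75002' (used in the example below in these notes) is valid in any context, while '23503' could be handled but not signaled from within a trigger or dynamic compound statement.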
* If the SQLSTATE or condition indicates that a warning (SQLSTATE class '01') or not found condition (SQLSTATE class '02') is signaled:
   o Then the warning or not found condition is handled and control is transferred to a handler, provided that a handler exists in the same compound statement (or an outer compound statement) as the signal statement, and the compound statement contains a handler for the specified SQLSTATE, condition-name, SQLWARNING (if the SQLSTATE class is '01'), or NOT FOUND (if the SQLSTATE class is '02');
   o If the warning cannot be handled, then processing continues with the next statement.
* SQLSTATE values consist of a two-character class code value, followed by a three-character subclass code value. Class code values represent classes of successful and unsuccessful execution conditions.
   Any valid SQLSTATE value can be used in the SIGNAL statement. However, it is recommended that programmers define new SQLSTATEs based on ranges reserved for applications. This prevents the unintentional use of an SQLSTATE value that might be defined by the database manager in a future release.
   o SQLSTATE classes that begin with the characters '7' through '9', or 'I' through 'Z' may be defined. Within these classes, any subclass may be defined.
   o SQLSTATE classes that begin with the characters '0' through '6', or 'A' through 'H' are reserved for the database manager. Within these classes, subclasses that begin with the characters '0' through 'H' are reserved for the database manager. Subclasses that begin with the characters 'I' through 'Z' may be defined.

Examples

An SQL procedure for an order system that signals an application error when a customer number is not known to the application. The ORDERS table includes a foreign key to the CUSTOMER table, requiring that the CUSTNO exist before an order can be inserted.
   CREATE PROCEDURE SUBMIT_ORDER
          (IN ONUM INTEGER, IN CNUM INTEGER,
           IN PNUM INTEGER, IN QNUM INTEGER)
          SPECIFIC SUBMIT_ORDER
          MODIFIES SQL DATA
          LANGUAGE SQL
          BEGIN
             DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
                SIGNAL SQLSTATE '75002'
                SET MESSAGE_TEXT = 'Customer number is not known';
             INSERT INTO ORDERS (ORDERNO, CUSTNO, PARTNO, QUANTITY)
                VALUES (ONUM, CNUM, PNUM, QNUM);
          END

------------------------------------------------------------------------

38.7 Appendix A. SQL Limits

There is a change to Table 33, Database Manager Limits. With the registry variable DB2_INDEX_2BYTEVARLEN set to ON, the longest variable index key part (in bytes) can now be greater than 255.

------------------------------------------------------------------------

38.8 Appendix D. Catalog Views

A new catalog view has been added:

38.8.1 SYSCAT.SEQUENCES

The view SYSCAT.SEQUENCES is automatically generated for databases created with FixPak 3 or later. For databases created prior to FixPak 3, run the db2updv7 command in order to add the view to the database. See the Command Reference update in the Release Notes for details.

This catalog view is updated during normal operation, in response to SQL data definition statements, environment routines, and certain utilities. Data in the catalog view is available through normal SQL query facilities. Columns have consistent names based on the type of objects that they describe.

Table 30. Columns in SYSCAT.SEQUENCES Catalog View

Column Name   Data Type      Nullable  Description
SEQSCHEMA     VARCHAR(128)             Schema of the sequence.
SEQNAME       VARCHAR(128)             Sequence name (generated by DB2 for an
                                       identity column).
DEFINER       VARCHAR(128)             Definer of the sequence.
OWNER         VARCHAR(128)             Owner of the sequence.
SEQID         INTEGER                  Internal ID of the sequence.
SEQTYPE       CHAR(1)                  Sequence type:
                                       S - Regular sequence
INCREMENT     DECIMAL(31,0)            Increment value.
START         DECIMAL(31,0)            Starting value.
MAXVALUE      DECIMAL(31,0)  Yes       Maximum value.
MINVALUE      DECIMAL(31,0)            Minimum value.
CYCLE         CHAR(1)                  Whether cycling will occur when a
                                       boundary is reached:
                                       Y - cycling will occur
                                       N - cycling will not occur
CACHE         INTEGER                  Number of sequence values to preallocate
                                       in memory for faster access. 0 indicates
                                       that values are not preallocated.
ORDER         CHAR(1)                  Whether or not the sequence numbers must
                                       be generated in order of request:
                                       Y - sequence numbers must be generated
                                           in order of request
                                       N - sequence numbers are not required to
                                           be generated in order of request
DATATYPEID    INTEGER                  For built-in types, the internal ID of
                                       the built-in type. For distinct types,
                                       the internal ID of the distinct type.
SOURCETYPEID  INTEGER                  For a built-in type, this has a value of
                                       0. For a distinct type, this is the
                                       internal ID of the built-in type that is
                                       the source type for the distinct type.
CREATE_TIME   TIMESTAMP                Time when the sequence was created.
ALTER_TIME    TIMESTAMP                Time when the last ALTER SEQUENCE
                                       statement was executed for this sequence.
PRECISION     SMALLINT                 The precision defined for a sequence
                                       with a decimal or numeric type. Values
                                       are: 5 for SMALLINT, 10 for INTEGER, and
                                       19 for BIGINT.
ORIGIN        CHAR(1)                  Sequence origin:
                                       U - User generated sequence
                                       S - System generated sequence
REMARKS       VARCHAR(254)   Yes       User supplied comments, or null.

------------------------------------------------------------------------

DB2 Stored Procedure Builder

------------------------------------------------------------------------

39.1 Java 1.2 Support for the DB2 Stored Procedure Builder

The DB2 Stored Procedure Builder supports building Java stored procedures using Java 1.2 functionality. In addition, the Stored Procedure Builder supports bi-directional languages, such as Arabic and Hebrew, using the bi-di support in Java 1.2. This support is provided for Windows NT platforms only.

In order for the Stored Procedure Builder to recognize and use Java 1.2 functionality, Java 1.2 must be installed. To install Java 1.2:

1. JDK 1.2.2 is available on the DB2 UDB CD under the DB2\bidi\NT directory.
   ibm-inst-n122p-win32-x86.exe is the installer program, and ibm-jdk-n122p-win32-x86.exe is the JDK distribution. Copy both files to a temporary directory on your hard drive, then run the installer program from there.

2. Install it under DB2PATH\java\Java12, where DB2PATH is the installation path of DB2.

3. Do not select JDK/JRE as the System VM when prompted by the JDK/JRE installation.

After Java 1.2 is installed successfully, start the Stored Procedure Builder in the normal manner. To execute Java stored procedures using JDK 1.2 support, set the database server environment variable DB2_USE_JDK12 to TRUE using the following command:

   DB2SET DB2_USE_JDK12=TRUE

Also, set your JDK11_PATH to point to the directory where your Java 1.2 support is installed. You set this path by using the following command, where path is that directory:

   DB2 UPDATE DBM CFG USING JDK11_PATH path

To stop the use of Java 1.2, you can either uninstall the JDK/JRE from DB2PATH\java\Java12, or simply rename the DB2PATH\java\Java12 subdirectory.

Important: Do not confuse DB2PATH\java\Java12 with DB2PATH\Java12. DB2PATH\Java12 is part of the DB2 installation and includes JDBC support for Java 1.2.

------------------------------------------------------------------------

39.2 Remote Debugging of DB2 Stored Procedures

To use the remote debugging capability for Java and C stored procedures on the UNIX and Windows platforms, you need to install the IBM Distributed Debugger. The IBM Distributed Debugger is included on the VisualAge for Java Professional Edition CD. The debugger client runs only on the Windows platform. Supported server platforms include: Windows, AIX, and Solaris.

Use the Stored Procedure Builder built-in SQL debug capability to debug local and remote SQL stored procedures on the Windows and UNIX platforms. Support for the OS/2 platform is not available at this time.
For more information on the DB2 for OS/390 Stored Procedure Builder, go to the following Web site: http://www-4.ibm.com/software/data/db2/os390/spb/exciting To debug SQL procedures on the OS/390 platform, you must also have the IBM C/C++ Productivity Tools for OS/390 R1 product. For more information on the IBM C/C++ Productivity Tools for OS/390 R1, go to the following Web site: http://www.ibm.com/software/ad/c390/pt/ ------------------------------------------------------------------------ 39.3 Building SQL Procedures on Windows, OS/2 or UNIX Platforms Before you can use the Stored Procedure Builder to successfully build SQL Procedures on your Windows, OS/2 or UNIX database, you must configure your server for SQL Procedures. For information on how to configure your server for SQL Procedures, see 34.3, Chapter 4. Building Java Applets and Applications. ------------------------------------------------------------------------ 39.4 Using the DB2 Stored Procedure Builder on the Solaris Platform To use the Stored Procedure Builder on the Solaris platform: 1. Download and install JDK 1.1.8. You can download JDK 1.1.8 from the JavaSoft web site. 2. Set the environment variable JAVA_HOME to the location where you installed the JDK. 3. Set your DB2 JDK11_PATH to the directory where you installed the JDK. To set the DB2 JDK11_PATH, use the command: DB2 UPDATE DBM CFG USING JDK11_PATH. ------------------------------------------------------------------------ 39.5 Known Problems and Limitations * SQL Procedures are not currently supported on Windows 98. * For Java stored procedures, the JAR ID, class names, and method names cannot contain non-ASCII characters. * On AS/400 the following V4R4 PTFs must be applied to OS/400 V4R4: - SF59674 - SF59878 * Stored procedure parameters with a character subtype of FOR MIXED DATA or FOR SBCS DATA are not shown in the source code in the editor pane when the stored procedure is restored from the database. 
* Currently, there is a problem when Java source code is retrieved from a database. At retrieval time, the comments in the code come out collapsed. This will affect users of the DB2 Stored Procedure Builder who are working in non-ASCII code pages, and whose clients and servers are on different code pages. ------------------------------------------------------------------------ 39.6 Using DB2 Stored Procedure Builder with Traditional Chinese Locale There is a problem when using Java Development Kit or Java Runtime 1.1.8 with the Traditional Chinese locale. Graphical aspects of the Stored Procedure Builder program (including menus, editor text, messages, and so on) will not display properly. The solution is to make a change to the file font.properties.zh_TW, which appears in one or both of the following directories: sqllib/java/jdk/lib sqllib/java/jre/lib Change: monospaced.0=\u7d30\u660e\u9ad4,CHINESEBIG5_CHARSET,NEED_CONVERTED to: monospaced.0=Courier New,ANSI_CHARSET ------------------------------------------------------------------------ 39.7 UNIX (AIX, Sun Solaris, Linux) Installations and the Stored Procedure Builder For Sun Solaris installations, and if you are using a Java Development Kit or Runtime other than the one installed on AIX with UDB, you must set the environment variable JAVA_HOME to the path where Java is installed (that is, to the directory containing the /bin and /lib directories). Stored Procedure Builder is not supported on Linux, but can be used on supported platforms to build and run stored procedures on DB2 UDB for Linux systems. Supported platforms include AIX, Solaris and NT for the client and AIX, Solaris, NT, Linux, OS/2, HP-UX and NUMA-Q for the server. ------------------------------------------------------------------------ 39.8 Building SQL Stored Procedures on OS/390 DB2 Stored Procedure Builder supports building SQL stored procedures on the DB2 UDB for OS390 V7 server. 
------------------------------------------------------------------------ 39.9 Debugging SQL Stored Procedures Debugging of SQL stored procedures on Windows and UNIX platforms is now directly integrated into the DB2 Stored Procedure Builder. The KEEPDARI database manager configuration option can be set to YES or NO when debugging unfenced (trusted) SQL Procedures; however, it must be set to YES (the default) when debugging fenced (non-trusted) SQL Procedures. See the Stored Procedure Builder help for additional information about using the integrated debugger. ------------------------------------------------------------------------ 39.10 Exporting Java Stored Procedures DB2 Stored Procedure Builder now supports exporting Java stored procedures. To export a Java stored procedure: 1. Right click the stored procedures folder, and click Export Java Stored Procedures to open the Export Java Stored Procedures window. 2. Select the stored procedures that you want to export, and move them to the "Selected stored procedures" column. 3. Select your preferred options, then click OK. ------------------------------------------------------------------------ 39.11 Inserting Stored Procedures on OS/390 For DB2 Stored Procedure Builder Version 5 and later, running on OS/390, if you use the wizard to insert a stored procedure and indicate no WLM environment options, the generated code contains the following text: NO WLM ENVIRONMENT. This line of code causes the stored procedure to run in the SPAS address space as expected. This fix resolves a problem that existed on DB2 Stored Procedure Builder version 6 and above. 
The generated code after the fix appears as follows: CREATE PROCEDURE SYSPROC.Proc2 ( ) RESULT SETS 1 LANGUAGE SQL MODIFIES SQL DATA COLLID TEST NO WLM ENVIRONMENT ASUTIME NO LIMIT RUN OPTIONS 'NOTEST(ALL,*,,VADTCPIP&9.112.14.91:*)' ------------------------------------------------------------------- -- SQL Stored Procedure ------------------------------------------------------------------- P1: BEGIN -- Declare cursor DECLARE cursor1 CURSOR WITH RETURN FOR SELECT * FROM SYSIBM.SYSPROCEDURES; -- Cursor left open for client application OPEN cursor1; END P1 ------------------------------------------------------------------------ 39.12 Setting Build Options for SQL Stored Procedures on a Workstation Server Using DB2 Stored Procedure Builder on UNIX and Windows platforms, you can set build options for all SQL stored procedures. These build options include the following compiler and precompiler DB2 registry variables: * DB2_SQLROUTINE_PREPOPTS * DB2_SQLROUTINE_COMPILER_PATH * DB2_SQLROUTINE_COMPILE_COMMAND * DB2_SQLROUTINE_KEEP_FILES Although it is possible to set these registry variables using the db2set command, using the Stored Procedure Builder eliminates the need to physically access the database server to issue the command or to stop then restart the server in order for the changes to take effect. To open the SQL Stored Procedure Build Options window, right-click a database connection in your project view, and click SQL Stored Procedure Build Options. For more information about setting these options, see the DB2 Stored Procedure help. ------------------------------------------------------------------------ 39.13 Automatically Refreshing the WLM Address Space for Stored Procedures Built on OS/390 After you successfully build a stored procedure on OS/390 that will run in WLM, DB2 Stored Procedure Builder automatically refreshes the WLM address space. 
------------------------------------------------------------------------

39.14 Developing Java Stored Procedures on OS/390

DB2 Stored Procedure Builder supports the development of Java stored procedures on DB2 UDB for OS/390 Version 6 and above. You can create new Java stored procedures or change existing ones.

------------------------------------------------------------------------

39.15 Building a DB2 Table User Defined Function (UDF) for MQSeries and OLE DB

DB2 Stored Procedure Builder provides wizards that help you to create table UDFs for both MQSeries and OLE DB.

You can use the Create OLE DB table UDF wizard to access OLE DB data providers. The wizard creates an OLE DB table UDF and an optional table view.

You can use the Create MQSeries table UDF wizard to create a table UDF, with an optional table view, to access MQSeries messages and parse the data into a tabular format.

------------------------------------------------------------------------

Unicode Updates

------------------------------------------------------------------------

40.1 Introduction

The Unicode standard is the universal character encoding scheme for written characters and text. Unicode is a multi-byte representation of characters. It defines a consistent way of encoding multilingual text that enables the exchange of text data internationally and creates the foundation for global software.

Unicode provides the following two encoding schemes.

The default encoding scheme is UTF-16, which is a 16-bit encoding format. UCS-2 is a subset of UTF-16 which uses two bytes to represent a character. UCS-2 is generally accepted as the universal code page capable of representing all the necessary characters from all existing single and double byte code pages. UCS-2 is registered in IBM as code page 1200.

The other Unicode encoding format is UTF-8, which is byte-oriented and has been designed for ease of use with existing ASCII-based systems.
UTF-8 uses a varying number of bytes (usually 1-3, sometimes 4) to store each character. The invariant ASCII characters are stored as single bytes. All other characters are stored using multiple bytes. In general, UTF-8 data can be treated as extended ASCII data by code that was not designed for multi-byte code pages. UTF-8 is registered in IBM as code page 1208. It is important that applications take into account the requirements of data as it is converted between the local code page, UCS-2 and UTF-8. For example, 20 characters will require exactly 40 bytes in UCS-2 and somewhere between 20 and 60 bytes in UTF-8, depending on the original code page and the characters used. 40.1.1 DB2 Unicode Databases and Applications A DB2 Universal database for Unix, Windows, or OS/2 created with a UTF-8 codeset can be used to store data in both UCS-2 and UTF-8 formats. Such a database is referred to as a Unicode database. SQL CHAR data is encoded using UTF-8 and SQL GRAPHIC data is encoded using UCS-2. This can be equated to storing Single-Byte (SBCS) and Multi-Byte(MBCS) codesets in CHAR columns and Double-Byte (DBCS) codesets in GRAPHIC columns. The code page of an application may not match the code page that DB2 uses to store data. In a non-Unicode database, when the code pages are not the same, the database manager converts character and graphic (pure DBCS) data that is transferred between client and server. In a Unicode database, the conversion of character data between the client code page and UTF-8 is automatically performed by the database manager, but all graphic (UCS-2) data is passed without any conversion between the client and the server. Notes: 1. When connecting to Unicode Databases, if the application sets DB2CODEPAGE=1208, the local code page is UTF-8, so no code page conversion is needed. 2. When connected to a Unicode Database, CLI applications can also receive character data as graphic data, and graphic data as character data. 
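The byte-count arithmetic described above (20 characters needing exactly 40 bytes in UCS-2 and between 20 and 60 bytes in UTF-8) can be verified in any Unicode-capable language. A Python sketch, with invented sample strings; 'utf-16-be' stands in for UCS-2 for characters in the Basic Multilingual Plane, and big-endian matches DB2's internal byte order:

```python
ascii20 = "ABCDEFGHIJ0123456789"   # 20 invariant ASCII characters
cjk20   = "\u4e09" * 20            # 20 double-byte (CJK) characters

# UCS-2: always exactly 2 bytes per character, regardless of the character
assert len(ascii20.encode("utf-16-be")) == 40
assert len(cjk20.encode("utf-16-be")) == 40

# UTF-8: 1 byte per invariant ASCII character, up to 3 bytes per BMP character
assert len(ascii20.encode("utf-8")) == 20   # lower bound: 20 bytes
assert len(cjk20.encode("utf-8")) == 60     # upper bound: 60 bytes
```

This is why a conversion between the client code page and UTF-8 can change the storage length of character data, while graphic (UCS-2) data keeps a fixed two bytes per character.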
It is possible for an application to specify a UTF-8 code page, indicating that it will send and receive all graphic data in UCS-2 and character data in UTF-8. This application code page is only supported for Unicode databases. Other points to consider when using Unicode: 1. The database code page is determined at the time the database is created, and by default its value is determined from the operating system locale (or code page). The CODESET and TERRITORY keywords can be used to explicitly create a Unicode DB2 database. For example: CREATE DATABASE unidb USING CODESET UTF-8 TERRITORY US 2. The application code page also defaults to the local code page, but this can be overridden by UTF-8 in one of two ways: o Setting the application code page to UTF-8 (1208) with this command: db2set DB2CODEPAGE=1208 o For CLI/ODBC applications, by calling SQLSetConnectAttr() and setting the SQL_ATTR_ANSI_APP to SQL_AA_FALSE. The default setting is SQL_AA_TRUE. 3. Data in GRAPHIC columns will take exactly two bytes for each Unicode character, whereas data in CHAR columns will take from 1 to 3 bytes for each Unicode character. SQL limits in terms of characters for GRAPHIC columns are generally half of those for CHAR columns, but in terms of bytes they are equal. The maximum character length for a CHAR column is 254. The maximum character length for a graphic column is 127. For more information, see MAX in the "Functions" chapter of the SQL Reference. 4. A graphic literal is differentiated from a character literal by a G prefix. For example: SELECT * FROM mytable WHERE mychar = 'utf-8 data' AND mygraphic = G'ucs-2 data' Note: The G prefix is not required for Unicode databases. See "Literals in Unicode Databases" for more information and updated support. 5. Support for CLI/ODBC and JDBC applications differ from the support for Embedded applications. For information specific to CLI/ODBC support, see 40.3, "CLI Guide and Reference". 6. 
The byte ordering of UCS-2 data may differ between platforms. Internally, DB2 uses big-endian format.

40.1.2 Documentation Updates

This document updates the following information on using Unicode with DB2 Version 7.1:

* SQL Reference: Chapter 3. Language Elements; Chapter 4. Functions
* CLI Guide and Reference: Chapter 3. Using Advanced Features; Appendix C. DB2 CLI and ODBC
* Data Movement Utilities Guide and Reference: Appendix C. Export/Import/Load Utility File Formats

For more information on using Unicode with DB2, refer to the Administration Guide, Appendix J. National Language Support (NLS): "Unicode/UCS-2 and UTF-8 Support in DB2 UDB".

------------------------------------------------------------------------

40.2 SQL Reference

40.2.1 Chapter 3 Language Elements

40.2.1.1 Promotion of Data Types

In this section, Table 5 shows the precedence list for each data type. Please note:

1. For a Unicode database, the following are considered to be equivalent data types:
   o CHAR and GRAPHIC
   o VARCHAR and VARGRAPHIC
   o LONG VARCHAR and LONG VARGRAPHIC
   o CLOB and DBCLOB
2. In a Unicode database, it is possible to create functions where the only difference in the function signature is between equivalent CHAR and GRAPHIC data types, for example, foo(CHAR(8)) and foo(GRAPHIC(8)). We strongly recommend that you do not define such duplicate functions, since migration to a future release will require one of them to be dropped before the migration will proceed. If such duplicate functions do exist, the choice of which one to invoke is determined by a two-pass algorithm. The first pass attempts to find a match using the same algorithm as is used for resolving functions in a non-Unicode database.
If no match is found, then a second pass will be done taking into account the following promotion precedence for CHAR and GRAPHIC strings:

GRAPHIC-->CHAR-->VARGRAPHIC-->VARCHAR-->LONG VARGRAPHIC-->LONG VARCHAR-->DBCLOB-->CLOB

40.2.1.2 Casting Between Data Types

The following entry has been added to the list introduced as: "The following casts involving distinct types are supported":

* for a Unicode database, cast from a VARCHAR or VARGRAPHIC to distinct type DT with a source data type CHAR or GRAPHIC.

The following are updates to Table 6. Supported Casts between Built-in Data Types. Only the affected rows of the table are included.

Table 31. Supported Casts between Built-in Data Types

                           Target Data Type
                                  LONG                              LONG
Source Data Type   CHAR  VARCHAR  VARCHAR  CLOB  GRAPHIC  VARGRAPHIC  VARGRAPHIC  DBCLOB
CHAR               Y     Y        Y        Y     Y1       Y1          -           -
VARCHAR            Y     Y        Y        Y     Y1       Y1          -           -
LONG VARCHAR       Y     Y        Y        Y     -        -           Y1          Y1
CLOB               Y     Y        Y        Y     -        -           -           Y1
GRAPHIC            Y1    Y1       -        -     Y        Y           Y           Y
VARGRAPHIC         Y1    Y1       -        -     Y        Y           Y           Y
LONG VARGRAPHIC    -     -        Y1       Y1    Y        Y           Y           Y
DBCLOB             -     -        -        Y1    Y        Y           Y           Y

1 Cast is only supported for Unicode databases.

40.2.1.3 Assignments and Comparisons

Assignments and comparisons involving both character and graphic data are only supported when one of the strings is a literal. For function resolution, graphic literals and character literals will both match character and graphic function parameters.

The following are updates to Table 7. Data Type Compatibility for Assignments and Comparisons. Only the affected rows of the table, and the new footnote 6, are included:

                  Binary   Decimal  Floating  Character  Graphic              Time-  Binary
Operands          Integer  Number   Point     String     String   Date  Time  stamp  String  UDT
Character String  No       No       No        Yes        Yes 6    1     1     1      No 3    2
Graphic String    No       No       No        Yes 6      Yes      No    No    No     No      2

6 Only supported for Unicode databases.
String Assignments

Storage Assignment

The last paragraph of this sub-section is modified as follows:

When a string is assigned to a fixed-length column and the length of the string is less than the length attribute of the target, the string is padded to the right with the necessary number of single-byte, double-byte, or UCS-2 (2) blanks. The pad character is always a blank, even for columns defined with the FOR BIT DATA attribute.

Retrieval Assignment

The third paragraph of this sub-section is modified as follows:

When a character string is assigned to a fixed-length variable and the length of the string is less than the length attribute of the target, the string is padded to the right with the necessary number of single-byte, double-byte, or UCS-2 (2) blanks. The pad character is always a blank, even for strings defined with the FOR BIT DATA attribute.

2 UCS-2 defines several SPACE characters with different properties. For a Unicode database, the database manager always uses the ASCII SPACE at position x'0020' as the UCS-2 blank. For an EUC database, the IDEOGRAPHIC SPACE at position x'3000' is used for padding GRAPHIC strings.

Conversion Rules for String Assignments

The following paragraph has been added to the end of this sub-section:

For Unicode databases, character strings can be assigned to a graphic column, and graphic strings can be assigned to a character column.

DBCS Considerations for Graphic String Assignments

The first paragraph of this sub-section has been modified as follows:

Graphic string assignments are processed in a manner analogous to that for character strings. For non-Unicode databases, graphic string data types are compatible only with other graphic string data types, and never with numeric, character string, or datetime data types. For Unicode databases, graphic string data types are compatible with character string data types.
String Comparisons

Conversion Rules for Comparison

This sub-section has been modified as follows:

When two strings are compared, one of the strings is first converted, if necessary, to the encoding scheme and/or code page of the other string. For details, see the "Rules for String Conversions" section of Chapter 3 Language Elements in the SQL Reference.

40.2.1.4 Rules for Result Data Types

Character and Graphic Strings in a Unicode Database

This is a new sub-section inserted after the sub-section "Graphic Strings". In a Unicode database, character strings and graphic strings are compatible.

If one operand is...  And the other operand is...             The data type of the result is...
GRAPHIC(x)            CHAR(y) or GRAPHIC(y)                   GRAPHIC(z) where z = max(x,y)
VARGRAPHIC(x)         CHAR(y) or VARCHAR(y)                   VARGRAPHIC(z) where z = max(x,y)
VARCHAR(x)            GRAPHIC(y) or VARGRAPHIC(y)             VARGRAPHIC(z) where z = max(x,y)
LONG VARGRAPHIC       CHAR(y) or VARCHAR(y) or LONG VARCHAR   LONG VARGRAPHIC
LONG VARCHAR          GRAPHIC(y) or VARGRAPHIC(y)             LONG VARGRAPHIC
DBCLOB(x)             CHAR(y) or VARCHAR(y) or CLOB(y)        DBCLOB(z) where z = max(x,y)
DBCLOB(x)             LONG VARCHAR                            DBCLOB(z) where z = max(x,16350)
CLOB(x)               GRAPHIC(y) or VARGRAPHIC(y)             DBCLOB(z) where z = max(x,y)
CLOB(x)               LONG VARGRAPHIC                         DBCLOB(z) where z = max(x,16350)

40.2.1.5 Rules for String Conversions

The third point has been added to the following list in this section:

For each pair of code pages, the result is determined by the sequential application of the following rules:

* If the code pages are equal, the result is that code page.
* If either code page is BIT DATA (code page 0), the result code page is BIT DATA.
* In a Unicode database, if one code page denotes data in an encoding scheme different from the other code page, the result is UCS-2 over UTF-8 (that is, the graphic data type over the character data type).1
* Otherwise, the result code page is determined by Table 8 of the "Rules for String Conversions" section of Chapter 3 Language Elements in the SQL Reference.
An entry of 'first' in the table means the code page from the first operand is selected, and an entry of 'second' means the code page from the second operand is selected.

1 In a non-Unicode database, conversion between different encoding schemes is not supported.

40.2.1.6 Expressions

The following has been added:

In a Unicode database, an expression that accepts a character or graphic string will accept any string types for which conversion is supported.

With the Concatenation Operator

The following has been added to the end of this sub-section:

In a Unicode database, concatenation involving both character string operands and graphic string operands will first convert the character operands to graphic operands. Note that in a non-Unicode database, concatenation cannot involve both character and graphic operands.

40.2.1.7 Predicates

The following entry has been added to the list introduced by the sentence: "The following rules apply to all types of predicates":

* In a Unicode database, all predicates that accept a character or graphic string will accept any string types for which conversion is supported.

40.2.2 Chapter 4 Functions

40.2.2.1 Scalar Functions

The following sentence has been added to the end of this section:

In a Unicode database, all scalar functions that accept a character or graphic string will accept any string types for which conversion is supported.

------------------------------------------------------------------------

40.3 CLI Guide and Reference

40.3.1 Chapter 3. Using Advanced Features

The following is a new section for this chapter.

40.3.1.1 Writing a DB2 CLI Unicode Application

There are two main areas of support for DB2 CLI Unicode applications:

1. The addition of a set of functions that can accept Unicode string arguments in place of ANSI string arguments.
2. The addition of new C and SQL data types to describe data as ANSI or Unicode data.

The following sections provide more information for both of these areas.
To be considered a Unicode application, the application must set the SQL_ATTR_ANSI_APP connection attribute to SQL_AA_FALSE before a connection is made. This ensures that CLI will connect as a Unicode client, and that all Unicode data will be sent in UTF-8 for CHAR data or UCS-2 for GRAPHIC data.

Unicode Functions

The following ODBC API functions support both Unicode (W) and ANSI (A) versions (the Unicode version of the function name has a W suffix):

SQLBrowseConnect      SQLForeignKeys        SQLPrimaryKeys
SQLColAttribute       SQLGetConnectAttr     SQLProcedureColumns
SQLColAttributes      SQLGetConnectOption   SQLProcedures
SQLColumnPrivileges   SQLGetCursorName      SQLSetConnectAttr
SQLColumns            SQLGetDescField       SQLSetConnectOption
SQLConnect            SQLGetDescRec         SQLSetCursorName
SQLDataSources        SQLGetDiagField       SQLSetDescField
SQLDescribeCol        SQLGetDiagRec         SQLSetStmtAttr
SQLDriverConnect      SQLGetInfo            SQLSpecialColumns
SQLDrivers            SQLGetStmtAttr        SQLStatistics
SQLError              SQLNativeSQL          SQLTablePrivileges
SQLExecDirect         SQLPrepare            SQLTables

Unicode functions that take or return string length arguments always pass those lengths as counts of characters. For functions that return length information for server data, the display size and precision are described in number of characters. When the length (transfer size of the data) could refer to string or non-string data, the length is described in octet lengths. For example, SQLGetInfoW will still take the length as a count of bytes, but SQLExecDirectW will use a count of characters.

CLI will return result sets in either Unicode or ANSI, depending on the application's binding. If an application binds to SQL_C_CHAR, the driver will convert SQL_WCHAR data to SQL_CHAR. The driver manager maps SQL_C_WCHAR to SQL_C_CHAR for ANSI drivers, but does no mapping for Unicode drivers.

New Data Types and Valid Conversions

There are two new CLI- or ODBC-defined data types, SQL_C_WCHAR and SQL_WCHAR. SQL_C_WCHAR indicates that the C buffer contains UCS-2 data.
SQL_WCHAR indicates that a particular column or parameter marker contains Unicode data. For DB2 Unicode servers, graphic columns will be described as SQL_WCHAR. Conversion will be allowed between SQL_C_WCHAR and SQL_CHAR, SQL_VARCHAR, SQL_LONGVARCHAR, and SQL_CLOB, as well as between the graphic data types.

Table 32. Supported Data Conversions

[The conversion matrix for this table is not legible in this copy. Its rows are the SQL data types BLOB, CHAR, CLOB, DATE, DBCLOB, DECIMAL, DOUBLE, FLOAT, GRAPHIC (non-Unicode and Unicode), INTEGER, LONG VARCHAR, LONG VARGRAPHIC (non-Unicode and Unicode), NUMERIC, REAL, SMALLINT, BIGINT, TIME, TIMESTAMP, VARCHAR, and VARGRAPHIC (non-Unicode and Unicode); its columns are the C data types SQL_C_CHAR, SQL_C_WCHAR, SQL_C_LONG, SQL_C_SHORT, SQL_C_TINYINT, SQL_C_FLOAT, SQL_C_DOUBLE, SQL_C_TYPE_DATE, SQL_C_TYPE_TIME, SQL_C_TYPE_TIMESTAMP, SQL_C_BINARY, SQL_C_BIT, SQL_C_DBCHAR, SQL_C_CLOB_LOCATOR, SQL_C_BLOB_LOCATOR, SQL_C_DBCLOB_LOCATOR, SQL_C_BIGINT, and SQL_C_NUMERIC. Refer to the CLI Guide and Reference for the complete table.]

Note:
D     Conversion is supported. This is the default conversion for the SQL data type.
X     All IBM DBMSs support the conversion.
blank No IBM DBMS supports the conversion.

o Data is not converted to LOB locator types; rather, locators represent a data value. Refer to Using Large Objects for more information.
o SQL_C_NUMERIC is only available on 32-bit Windows operating systems.
Obsolete Keyword/Patch Value

Before Unicode applications were supported, applications that were written to work with single-byte character data could be made to work with double-byte graphic data through a series of CLI initialization file keywords, such as GRAPHIC=1, 2, or 3, and Patch2=7. These workarounds presented graphic data as character data, and also affected the reported length of the data. These keywords are no longer required for Unicode applications, and in fact should not be used; otherwise there could be serious side effects. If it is not known whether a particular application is a Unicode application, we suggest you try it without any of the keywords that affect the handling of graphic data.

Literals in Unicode Databases

In non-Unicode databases, data in LONG VARGRAPHIC and LONG VARCHAR columns cannot be compared. Data in GRAPHIC/VARGRAPHIC and CHAR/VARCHAR columns can only be compared, or assigned to each other, using explicit cast functions, since no implicit code page conversion is supported. This includes GRAPHIC/VARGRAPHIC and CHAR/VARCHAR literals, where a GRAPHIC/VARGRAPHIC literal is differentiated from a CHAR/VARCHAR literal by a G prefix.

For Unicode databases, casting between GRAPHIC/VARGRAPHIC and CHAR/VARCHAR literals is not required. Also, a G prefix is not required in front of a GRAPHIC/VARGRAPHIC literal. Provided at least one of the arguments is a literal, implicit conversions occur. This allows literals with or without the G prefix to be used within statements that use either SQLPrepareW() or SQLExecDirect(). Literals for LONG VARGRAPHICs must still have a G prefix. For more information, see "Casting Between Data Types" in Chapter 3 Language Elements of the SQL Reference.

New CLI Configuration Keywords

The following three keywords have been added to avoid extra overhead when Unicode applications connect to a database.

1. DisableUnicode

Keyword Description: Disables the underlying support for Unicode.
db2cli.ini Keyword Syntax: DisableUnicode = 0 | 1

Default Setting: 0 (false)

DB2 CLI/ODBC Settings Tab: This keyword cannot be set using the CLI/ODBC Settings notebook. The db2cli.ini file must be modified directly to make use of this keyword.

Usage Notes:

With Unicode support enabled, and when called by a Unicode application, CLI will attempt to connect to the database using the best client code page possible, to ensure there is no unnecessary data loss due to code page conversion. This may increase the connection time as code pages are exchanged, or may cause code page conversions on the client that did not occur before this support was added.

Setting this keyword to True will cause all Unicode data to be converted to the application's local code page first, before the data is sent to the server. This can cause data loss for any data that cannot be represented in the local code page.

2. ConnectCodepage

Keyword Description: Specifies a specific code page to use when connecting to the data source, to avoid extra connection overhead.

db2cli.ini Keyword Syntax: ConnectCodepage = 0 | 1

Default Setting: 0

DB2 CLI/ODBC Settings Tab: This keyword cannot be set using the CLI/ODBC Settings notebook. The db2cli.ini file must be modified directly to make use of this keyword.

Usage Notes:

Non-Unicode applications always connect to the database using the application's local code page, or the DB2CODEPAGE environment setting. By default, CLI will ensure that Unicode applications connect to Unicode databases using the UTF-8 and UCS-2 code pages, and connect to non-Unicode databases using the database's code page. This ensures there is no unnecessary data loss due to code page conversion. This keyword allows the user to specify the database's code page when connecting to a non-Unicode database, in order to avoid any extra overhead on the connection.
Specify a value of 1 to cause SQLDriverConnect() to return the correct value in the output connection string, so that the value can be used on future SQLDriverConnect() calls.

3. UnicodeServer

Keyword Description: Indicates that the data source is a Unicode server. Equivalent to setting ConnectCodepage=1208.

db2cli.ini Keyword Syntax: UnicodeServer = 0 | 1

Default Setting: 0

DB2 CLI/ODBC Settings Tab: This keyword cannot be set using the CLI/ODBC Settings notebook. The db2cli.ini file must be modified directly to make use of this keyword.

Usage Notes:

This keyword is equivalent to ConnectCodepage=1208, and is added only for convenience. Set this keyword to avoid extra connect overhead when connecting to DB2 for OS/390 Version 7 or higher. There is no need to set this keyword for DB2 for Windows, DB2 for UNIX, or DB2 for OS/2 databases, since there is no extra processing required.

40.3.2 Appendix C. DB2 CLI and ODBC

The following is a new section added to this appendix.

40.3.2.1 ODBC Unicode Applications

A Unicode ODBC application sends and retrieves character data primarily in UCS-2. It does this by calling the Unicode versions of the ODBC functions (those with a 'W' suffix) and by indicating Unicode data types. The application does not explicitly specify a local code page. The application can still call the ANSI functions and pass local code page strings.

For example, the application may call SQLConnectW() and pass the DSN, user ID, and password as Unicode arguments. It may then call SQLExecDirectW() and pass in a Unicode SQL statement string, and then bind a combination of ANSI local code page buffers (SQL_C_CHAR) and Unicode buffers (SQL_C_WCHAR). The database data types may be local code page or UCS-2 and UTF-8.

If a CLI application calls SQLConnectW(), or calls SQLSetConnectAttr() with SQL_ATTR_ANSI_APP set to SQL_AA_FALSE, the application is considered a Unicode application. This means all CHAR data is sent and received from the database in UTF-8 format.
The application can then fetch CHAR data into SQL_C_CHAR buffers in the local code page (with possible data loss), or into SQL_C_WCHAR buffers in UCS-2 without any data loss.

If the application does not make either of the two calls above, CHAR data is converted to the application's local code page at the server. This means CHAR data fetched into SQL_C_WCHAR may suffer data loss. If the DB2CODEPAGE instance variable is set (using db2set) to code page 1208 (UTF-8), the application will receive all CHAR data in UTF-8, since this is now the local code page. The application must also ensure that all CHAR input data is in UTF-8. ODBC also assumes that all SQL_C_WCHAR data is in the native endian format. CLI will perform any required byte-reversal for SQL_C_WCHAR.

ODBC Unicode Versus Non-Unicode Applications

This release of DB2 Universal Database contains the SQLConnectW() API. A Unicode driver must export SQLConnectW in order to be recognized as a Unicode driver by the driver manager. It is important to note that many ODBC applications (such as Microsoft Access and Visual Basic) call SQLConnectW(). In previous releases of DB2 Universal Database, DB2 CLI did not support this API, and thus was not recognized as a Unicode driver by the ODBC driver manager. This caused the ODBC driver manager to convert all Unicode data to the application's local code page. With the added support of the SQLConnectW() function, these applications will now connect as Unicode applications, and DB2 CLI will take care of all required data conversion.

DB2 CLI now accepts both Unicode APIs (with a suffix of "W") and regular ANSI APIs. ODBC defines a set of functions with a suffix of "A", but the driver manager does not pass ANSI functions with the "A" suffix to the driver. Instead, it converts these functions to ANSI function calls without the suffix, and then passes them to the driver. An ODBC application that calls the SQLConnectW() API is considered a Unicode application.
Since the ODBC driver manager will always call the SQLConnectW() API regardless of which version the application called, ODBC introduced the SQL_ATTR_ANSI_APP connect attribute to notify the driver whether the application should be considered an ANSI or Unicode application. If SQL_ATTR_ANSI_APP is not set to SQL_AA_FALSE, DB2 CLI converts all Unicode data to the local code page before sending it to the server.

------------------------------------------------------------------------

40.4 Data Movement Utilities Guide and Reference

40.4.1 Appendix C. Export/Import/Load Utility File Formats

The following update has been added to this appendix:

The export, import, and load utilities are not supported when they are used with a Unicode client connected to a non-Unicode database. Unicode client files are only supported when the Unicode client is connected to a Unicode database.

------------------------------------------------------------------------

Connecting to Host Systems

* Connectivity Supplement
  o 41.1 Setting Up the Application Server in a VM Environment
  o 41.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings

------------------------------------------------------------------------

Connectivity Supplement

------------------------------------------------------------------------

41.1 Setting Up the Application Server in a VM Environment

Add the following sentence after the first (and only) sentence in the section "Provide Network Information", subsection "Defining the Application Server":

The RDB_NAME is provided on the SQLSTART EXEC as the DBNAME parameter.

------------------------------------------------------------------------

41.2 CLI/ODBC/JDBC Configuration PATCH1 and PATCH2 Settings

The CLI/ODBC/JDBC driver can be configured through the Client Configuration Assistant or the ODBC Driver Manager (if it is installed on the system), or by manually editing the db2cli.ini file.
For more details, see either the Installation and Configuration Supplement or the CLI Guide and Reference.

The DB2 CLI/ODBC driver's default behavior can be modified by specifying values for both the PATCH1 and PATCH2 keywords, through either the db2cli.ini file or the SQLDriverConnect() or SQLBrowseConnect() CLI APIs.

The PATCH1 keyword is specified by adding together all patch values that the user wants to set. For example, if patches 1, 2, and 8 were specified, then PATCH1 would have a value of 11. Following is a description of each keyword value and its effect on the driver:

1 - This makes the driver search for "count(exp)" and replace it with "count(distinct exp)". This is needed because some versions of DB2 do not support the "count(exp)" syntax, and that syntax is generated by some ODBC applications. Needed by Microsoft applications when the server does not support the "count(exp)" syntax.

2 - Some ODBC applications are trapped when SQL_NULL_DATA is returned in the SQLGetTypeInfo() function for either the LITERAL_PREFIX or LITERAL_SUFFIX column. This forces the driver to return an empty string instead. Needed by Impromptu 2.0.

4 - This forces the driver to treat the input time stamp data as date data if the time and the fraction parts of the time stamp are zero. Needed by Microsoft Access.

8 - This forces the driver to treat the input time stamp data as time data if the date part of the time stamp is 1899-12-30. Needed by Microsoft Access.

16 - Not used.

32 - This forces the driver to not return information about SQL_LONGVARCHAR, SQL_LONGVARBINARY, and SQL_LONGVARGRAPHIC columns. To the application it appears as though long fields are not supported. Needed by Lotus 1-2-3.

64 - This forces the driver to NULL-terminate graphic output strings. Needed by Microsoft Access in a double-byte environment.

128 - This forces the driver to let the query "SELECT Config, nValue FROM MSysConf" go to the server.
Currently the driver returns an error with an associated SQLSTATE value of S0002 (table not found). Needed if the user has created this configuration table in the database and wants the application to access it.

256 - This forces the driver to return the primary key columns first in the SQLStatistics() call. Currently, the driver returns the indexes sorted by index name, which is standard ODBC behavior.

512 - This forces the driver to return FALSE in SQLGetFunctions() for both SQL_API_SQLTABLEPRIVILEGES and SQL_API_SQLCOLUMNPRIVILEGES.

1024 - This forces the driver to return SQL_SUCCESS instead of SQL_NO_DATA_FOUND in SQLExecute() or SQLExecDirect() if the executed UPDATE or DELETE statement affects no rows. Needed by Visual Basic applications.

2048 - Not used.

4096 - This forces the driver to not issue a COMMIT after closing a cursor when in autocommit mode.

8192 - This forces the driver to return an extra result set after invoking a stored procedure. This result set is a one-row result set consisting of the output values of the stored procedure. Can be accessed by PowerBuilder applications.

32768 - This forces the driver to make Microsoft Query applications work with DB2 MVS synonyms.

65536 - This forces the driver to manually insert a "G" in front of character literals which are in fact graphic literals. This patch should always be supplied when working in a double-byte environment.

131072 - This forces the driver to describe a time stamp column as a CHAR(26) column instead, when it is part of a unique index. Needed by Microsoft applications.

262144 - This forces the driver to use the pseudo-catalog table db2cli.procedures instead of the SYSCAT.PROCEDURES and SYSCAT.PROCPARMS tables.

524288 - This forces the driver to use SYSTEM_TABLE_SCHEMA instead of TABLE_SCHEMA when doing a system table query to a DB2/400 V3.x system. This results in better performance.

1048576 - This forces the driver to treat a zero-length string passed through SQLPutData() as SQL_NULL_DATA.
The PATCH2 keyword differs from the PATCH1 keyword. In this case, multiple patches are specified using comma separators. For example, if patches 1, 4, and 5 were specified, then PATCH2 would have a value of "1,4,5". Following is a description of each keyword value and its effect on the driver:

1 - This forces the driver to convert the name of the stored procedure in a CALL statement to uppercase.

2 - Not used.

3 - This forces the driver to convert all arguments to schema calls to uppercase.

4 - This forces the driver to return the Version 2.1.2-like result set for schema calls (that is, SQLColumns(), SQLProcedureColumns(), and so on), instead of the Version 5-like result set.

5 - This forces the driver to not optimize the processing of input VARCHAR columns, where the pointer to the data and the pointer to the length are consecutive in memory.

6 - This forces the driver to return a message that scrollable cursors are not supported. This is needed by Visual Basic programs if the DB2 client is Version 5 and the server is DB2 UDB Version 5.

7 - This forces the driver to map all GRAPHIC column data types to the CHAR column data type. This is needed in a double-byte environment.

8 - This forces the driver to ignore catalog search arguments in schema calls.

9 - Do not commit on early close of a cursor.

10 - Not used.

11 - Report that catalog name is supported (VB stored procedures).

12 - Remove double quotes from schema call arguments (Visual InterDev).

13 - Do not append keywords from db2cli.ini to the output connection string.

14 - Ignore the schema name on SQLProcedures() and SQLProcedureColumns().

15 - Always use a period for the decimal separator in character output.

16 - Force return of describe information for each open.

17 - Do not return column names on describe.

18 - Attempt to replace literals with parameter markers.

19 - Currently, DB2 MVS V4.1 does not support the ODBC syntax where parentheses are allowed in the ON clause of an outer join clause.
Turning on this PATCH2 will cause the IBM DB2 ODBC driver to strip the parentheses when the outer join clause is in an ODBC escape sequence. This PATCH2 should only be used when going against DB2 MVS 4.1.

20 - Currently, DB2 on MVS does not support the BETWEEN predicate with parameter markers as both operands (expression ? BETWEEN ?). Turning on this patch will cause the IBM ODBC driver to rewrite the predicate as (expression >= ? and expression <= ?).

21 - Set all OUTPUT-only parameters for stored procedures to SQL_NULL_DATA.

22 - This PATCH2 causes the IBM ODBC driver to report OUTER join as not supported. This is for applications that generate SELECT DISTINCT col1 or ORDER BY col1 when using an outer join statement where col1 has a length greater than 254 characters, which causes DB2 UDB to return an error (since DB2 UDB does not support columns greater than 254 bytes in this usage).

23 - Do not optimize input for parameters bound with cbColDef=0.

24 - Access workaround for mapping Time values as Characters.

25 - Access workaround for decimal columns - removes trailing zeros in the character representation.

26 - Do not return SQLCODE 464 to the application - indicates result sets are returned.

27 - Force SQLTables to use the TABLETYPE keyword value, even if the application specifies a valid value.

28 - Describe real columns as double columns.

29 - ADO workaround for decimal columns - removes leading zeros for values x, where 1 > x > -1 (only needed for some MDAC versions).

30 - Disable the stored procedure caching optimization.

31 - Report statistics for aliases on the SQLStatistics call.

32 - Override the SQLCODE -727 reason code 4 processing.

33 - Return the ISO version of the time stamp when converted to char (as opposed to the ODBC version).

34 - Report CHAR FOR BIT DATA columns as CHAR.

35 - Report an invalid TABLENAME when SQL_DESC_BASE_TABLE_NAME is requested - ADO read-only optimization.

36 - Reserved.

37 - Reserved.

------------------------------------------------------------------------

General Information

*
General Information o 42.1 DB2 Universal Database Business Intelligence Quick Tour o 42.2 DB2 Everywhere is Now DB2 Everyplace o 42.3 Mouse Required o 42.4 Attempting to Bind from the DB2 Run-time Client Results in a "Bind files not found" Error o 42.5 Search Discovery o 42.6 Memory Windows for HP-UX 11 o 42.7 User Action for dlfm client_conf Failure o 42.8 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop o 42.9 Uninstalling DB2 DFS Client Enabler o 42.10 Client Authentication on Windows NT o 42.11 AutoLoader May Hang During a Fork o 42.12 DATALINK Restore o 42.13 Define User ID and Password in IBM Communications Server for Windows NT (CS/NT) + 42.13.1 Node Definition o 42.14 Federated Systems Restrictions o 42.15 DataJoiner Restriction o 42.16 Hebrew Information Catalog Manager for Windows NT o 42.17 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit) Support o 42.18 DB2's SNA SPM Fails to Start After Booting Windows o 42.19 Locale Setting for the DB2 Administration Server o 42.20 Shortcuts Not Working o 42.21 Service Account Requirements for DB2 on Windows NT and Windows 2000 o 42.22 Lost EXECUTE Privilege for Query Patroller Users Created in Version 6 o 42.23 Query Patroller Restrictions o 42.24 Need to Commit all User-defined Programs That Will Be Used in the Data Warehouse Center (DWC) o 42.25 New Option for Data Warehouse Center Command Line Export o 42.26 Backup Services APIs (XBSA) o 42.27 OS/390 agent + 42.27.1 Installation overview + 42.27.2 Installation details + 42.27.3 Setting up additional agent functions + 42.27.4 Scheduling warehouse steps with the trigger program (XTClient) + 42.27.5 Transformers + 42.27.6 Accessing databases outside of the DB2 family + 42.27.7 Running DB2 for OS/390 utilities + 42.27.8 Replication + 42.27.9 Agent logging o 42.28 Client Side Caching on Windows NT o 42.29 Trial Products on Enterprise Edition UNIX CD-ROMs o 42.30 Trial Products on DB2 Connect Enterprise Edition UNIX CD-ROMs o 42.31 Drop 
Data Links Manager o 42.32 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets o 42.33 Error SQL1035N when Using CLP on Windows 2000 o 42.34 Enhancement to SQL Assist o 42.35 Gnome and KDE Desktop Integration for DB2 on Linux o 42.36 Running DB2 under Windows 2000 Terminal Server, Administration Mode o 42.37 Online Help for Backup and Restore Commands o 42.38 "Warehouse Manager" Should Be "DB2 Warehouse Manager" ------------------------------------------------------------------------ General Information ------------------------------------------------------------------------ 42.1 DB2 Universal Database Business Intelligence Quick Tour The Quick Tour is not available on DB2 for Linux or Linux/390. The Quick Tour is optimized to run with small system fonts. You may have to adjust your Web browser's font size to correctly view the Quick Tour on OS/2. Refer to your Web browser's help for information on adjusting font size. To view the Quick Tour correctly (SBCS only), it is recommended that you use an 8-point Helv font. For Japanese and Korean customers, it is recommended that you use an 8-point Mincho font. When you set font preferences, be sure to select the "Use my default fonts, overriding document-specified fonts" option in the Fonts page of the Preference window. In some cases the Quick Tour may launch behind a secondary browser window. To correct this problem, close the Quick Tour, and follow the steps in 2.4, Error Messages when Attempting to Launch Netscape. When launching the Quick Tour, you may receive a JavaScript error similar to the following: file:/C/Program Files/SQLLIB/doc/html/db2qt/index4e.htm, line 65: Window is not defined. This JavaScript error prevents the Quick Tour launch page, index4e.htm, from closing automatically after the Quick Tour is launched. You can close the Quick Tour launch page by closing the browser window in which index4e.htm is displayed. 
In the "What's New" section, under the Data Management topic, it is stated that "on-demand log archive support" is supported in Version 7.1. This is not the case. It is also stated that: The size of the log files has been increased from 4GB to 32GB. This sentence should read: The total active log space has been increased from 4GB to 32GB. The section describing the DB2 Data Links Manager contains a sentence that reads: Also, it now supports the use of the Veritas XBSA interface for backup and restore using NetBackup. This sentence should read: Also, it now supports the XBSA interface for file archival and restore. Storage managers that support the XBSA interface include Legato NetWorker and Veritas NetBackup. ------------------------------------------------------------------------ 42.2 DB2 Everywhere is Now DB2 Everyplace The name of DB2 Everywhere has changed to DB2 Everyplace. ------------------------------------------------------------------------ 42.3 Mouse Required For all platforms except Windows, a mouse is required to use the tools. ------------------------------------------------------------------------ 42.4 Attempting to Bind from the DB2 Run-time Client Results in a "Bind files not found" Error Because the DB2 Run-time Client does not have the full set of bind files, the binding of GUI tools cannot be done from the DB2 Run-time Client, and can only be done from the DB2 Administration Client. ------------------------------------------------------------------------ 42.5 Search Discovery Search discovery is only supported on broadcast media. For example, search discovery will not function through an ATM adapter. However, this restriction does not apply to known discovery. ------------------------------------------------------------------------ 42.6 Memory Windows for HP-UX 11 Memory windows is for users on large HP 64-bit machines, who want to take advantage of greater than 1.75 GB of shared memory for 32-bit applications. 
Memory windows is not required if you are running the 64-bit version of DB2. Memory windows makes available a separate 1 GB of shared memory per process or group of processes. This allows an instance to have its own 1 GB of shared memory, plus the 0.75 GB of global shared memory. If users want to take advantage of this, they can run multiple instances, each in its own window. Following are prerequisites and conditions for using memory windows:

* DB2 EE environment
  o Patches: Extension Software 12/98, and PHKL_17795.
  o The $DB2INSTANCE variable must be set for the instance.
  o There must be an entry in the /etc/services.window file for each DB2 instance that you want to run under memory windows. For example:

       db2instance1 50
       db2instance2 60

    Note: There can only be a single space between the name and the ID.
  o Any DB2 commands that you want to run on the server, and that require more than a single statement, must be run using a TCP/IP loopback method. This is because the shell terminates when memory windows finishes processing the first statement. DB2 Service knows how to accomplish this.
  o Any DB2 command that you want to run against an instance that is running in memory windows must be prefaced with db2win (located in sqllib/bin). For example:

       db2win db2start
       db2win db2stop

  o Any DB2 command that is run outside of memory windows (but while memory windows is running) returns SQL1042. For example:

       db2win db2start      <== OK
       db2 connect to db    <== SQL1042
       db2stop              <== SQL1042
       db2win db2stop       <== OK

* DB2 EEE environment
  o Patches: Extension Software 12/98, and PHKL_17795.
  o The $DB2INSTANCE variable must be set for the instance.
  o The DB2_ENABLE_MEM_WINDOWS registry variable must be set to TRUE.
  o There must be an entry in the /etc/services.window file for each logical node of each instance that you want to run under memory windows. The first field of each entry should be the instance name concatenated with the port number.
    For example:

       === $HOME/sqllib/db2nodes.cfg for db2instance1 ===
       5 host1 0
       7 host1 1
       9 host2 0

       === $HOME/sqllib/db2nodes.cfg for db2instance2 ===
       1 host1 0
       2 host2 0
       3 host2 1

       === /etc/services.window on host1 ===
       db2instance10 50
       db2instance11 55
       db2instance20 60

       === /etc/services.window on host2 ===
       db2instance10 30
       db2instance20 32
       db2instance21 34

  o You must not preface any DB2 command with db2win, which is to be used in an EE environment only.
------------------------------------------------------------------------
42.7 User Action for dlfm client_conf Failure

If dlfm client_conf fails on a DLFM client, "stale" entries in the DB2 catalogs may be the cause. The solution is to issue the following commands:

   db2 uncatalog db <database>
   db2 uncatalog node <node>
   db2 terminate

Then try dlfm client_conf again.
------------------------------------------------------------------------
42.8 In the Rare Event that the Copy Daemon Does Not Stop on dlfm stop

In very rare situations, dlfm_copyd (the copy daemon) does not stop when a user issues a dlfm stop, or there is an abnormal shutdown. If this happens, issue a dlfm shutdown before trying to restart dlfm.
------------------------------------------------------------------------
42.9 Uninstalling DB2 DFS Client Enabler

Before the DB2 DFS Client Enabler is uninstalled, root should ensure that no DFS file is in use, and that no user has a shell open in DFS file space. As root, issue the command:

   stop.dfs dfs_cl

Check that /... is no longer mounted:

   mount | grep -i dfs

If this is not done, and the DB2 DFS Client Enabler is uninstalled, the machine will need to be rebooted.
------------------------------------------------------------------------
42.10 Client Authentication on Windows NT

A new DB2 registry variable, DB2DOMAINLIST, is introduced to complement the existing client authentication mechanism in the Windows NT environment.
This variable is used on the DB2 for Windows NT server to define one or more Windows NT domains. Only connection or attachment requests from users belonging to the domains defined in this list will be accepted. This registry variable should only be used under a pure Windows NT domain environment with DB2 servers and clients running at Version 7 (or higher). For information about setting this registry variable, refer to the "DB2 Registry and Environment Variables" section in the Administration Guide: Performance. ------------------------------------------------------------------------ 42.11 AutoLoader May Hang During a Fork AIX 4.3.3 contains a fix for a libc problem that could cause the AutoLoader to hang during a fork. The AutoLoader is a multithreaded program. One of the threads forks off another process. Forking off a child process causes an image of the parent's memory to be created in the child. It is possible that locks used by libc.a to manage multiple threads allocating memory from the heap within the same process have been held by a non-forking thread. Since the non-forking thread will not exist in the child process, this lock will never be released in the child, causing the parent to sometimes hang. ------------------------------------------------------------------------ 42.12 DATALINK Restore Restore of any offline backup that was taken after a database restore, with or without rollforward, will not involve fast reconcile processing. In such cases, all tables with DATALINK columns under file link control will be put in datalink reconcile pending (DRP) state. ------------------------------------------------------------------------ 42.13 Define User ID and Password in IBM Communications Server for Windows NT (CS/NT) If you are using APPC as the communication protocol for remote DB2 clients to connect to your DB2 server and if you use CS/NT as the SNA product, make sure that the following keywords are set correctly in the CS/NT configuration file. 
This file is commonly found in the x:\ibmcs\private directory.

42.13.1 Node Definition

TG_SECURITY_BEHAVIOR
     This parameter determines how the node handles security information present in the ATTACH when the TP is not configured for security. Its values are:

     IGNORE_IF_NOT_DEFINED
          Security parameters present in the ATTACH are ignored if the TP is not configured for security. If you use IGNORE_IF_NOT_DEFINED, you do not have to define a User ID and password in CS/NT.

     VERIFY_EVEN_IF_NOT_DEFINED
          Security parameters present in the ATTACH are verified even if the TP is not configured for security. This is the default. If you use VERIFY_EVEN_IF_NOT_DEFINED, you must define a User ID and password in CS/NT.

To define the CS/NT User ID and password, perform the following steps:

1. Click Start --> Programs --> IBM Communications Server --> SNA Node Configuration. The Welcome to Communications Server Configuration window opens.
2. Choose the configuration file you want to modify. Click Next. The Choose a Configuration Scenario window opens.
3. Highlight CPI-C, APPC or 5250 Emulation. Click Finish. The Communications Server SNA Node window opens.
4. Click the [+] beside CPI-C and APPC.
5. Click the [+] beside LU6.2 Security.
6. Right-click User Passwords and select Create. The Define a User ID Password window opens.
7. Fill in the User ID and password. Click OK. Click Finish to accept the changes.
------------------------------------------------------------------------
42.14 Federated Systems Restrictions

Following are restrictions that apply to federated systems:

* The Oracle data types NCHAR, NVARCHAR2, NCLOB, and BFILE are not supported in queries involving nicknames.
* The Create Server Option, Alter Server Option, and Drop Server Option commands are not supported from the Control Center.
To issue any of these commands, you must use the command line processor (CLP). * For queries involving nicknames, DB2 UDB does not always abide by the DFT_SQLMATHWARN database configuration option. Instead, DB2 UDB returns the arithmetic errors or warnings directly from the remote data source regardless of the DFT_SQLMATHWARN setting. * The CREATE SERVER OPTION statement does not allow the COLSEQ server option to be set to 'I' for data sources with case-insensitive collating sequences. * The ALTER NICKNAME statement returns SQL0901N when an invalid option is specified. * For Oracle, Microsoft SQL Server, and Sybase data sources, numeric data types cannot be mapped to DB2's BIGINT data type. By default, Oracle's number(p,s) data type, where 10 <= p <= 18, and s = 0, maps to DB2's DECIMAL data type. ------------------------------------------------------------------------ 42.15 DataJoiner Restriction Distributed requests issued within a federated environment are limited to read-only operations. ------------------------------------------------------------------------ 42.16 Hebrew Information Catalog Manager for Windows NT The Information Catalog Manager component is available in Hebrew and is provided on the DB2 Warehouse Manager for Windows NT CD. The Hebrew translation is provided in a zip file called IL_ICM.ZIP and is located in the DB2\IL directory on the DB2 Warehouse Manager for Windows NT CD. To install the Hebrew translation of Information Catalog Manager, first install the English version of DB2 Warehouse Manager for Windows NT and all prerequisites on a Hebrew Enabled version of Windows NT. After DB2 Warehouse Manager for Windows NT has been installed, unzip the IL_ICM.ZIP file from the DB2\IL directory into the same directory where DB2 Warehouse Manager for Windows NT was installed. Ensure that the correct options are supplied to the unzip program to create the directory structure in the zip file. 
After the file has been unzipped, the global environment variable LC_ALL must be changed from En_US to Iw_IL. To change the setting:

1. Open the Windows NT Control Panel and double-click the System icon.
2. In the System Properties window, click the Environment tab, then locate the variable LC_ALL in the System Variables section.
3. Click the variable to display the value in the Value edit box. Change the value from En_US to Iw_IL.
4. Click the Set button.
5. Close the System Properties window and the Control Panel.

The Hebrew version of Information Catalog Manager should now be installed.
------------------------------------------------------------------------
42.17 Microsoft SNA Server and SNA Multisite Update (Two Phase Commit) Support

Host and AS/400 applications cannot access DB2 UDB servers using SNA two phase commit when Microsoft SNA Server is the SNA product in use. Any DB2 UDB publications indicating this is supported are incorrect. IBM Communications Server for Windows NT Version 5.02 or greater is required.

Note: Applications accessing host and AS/400 database servers using DB2 UDB for Windows can use SNA two phase commit with Microsoft SNA Server Version 4 Service Pack 3 or greater.
------------------------------------------------------------------------
42.18 DB2's SNA SPM Fails to Start After Booting Windows

If you are using Microsoft SNA Server Version 4 SP3 or later, verify that DB2's SNA SPM started properly after a reboot. Check the \sqllib\<instance name>\db2diag.log file for entries that are similar to the following:

2000-04-20-13.18.19.958000 Instance:DB2 Node:000 PID:291(db2syscs.exe) TID:316 Appid:none
common_communication sqlccspmconnmgr_APPC_init Probe:19
SPM0453C Sync point manager did not start because Microsoft SNA Server has not been started.
2000-04-20-13.18.23.033000 Instance:DB2 Node:000 PID:291(db2syscs.exe) TID:302 Appid:none
common_communication sqlccsna_start_listen Probe:14
DIA3001E "SNA SPM" protocol support was not successfully started.

2000-04-20-13.18.23.603000 Instance:DB2 Node:000 PID:291(db2syscs.exe) TID:316 Appid:none
common_communication sqlccspmconnmgr_listener Probe:6
DIA3103E Error encountered in APPC protocol support. APPC verb "APPC(DISPLAY 1 BYTE)". Primary rc was "F004". Secondary rc was "00000000".

If such entries exist in your db2diag.log, and the time stamps match your most recent reboot time, you must:

1. Invoke db2stop.
2. Start the SnaServer service (if not already started).
3. Invoke db2start.

Check the db2diag.log file again to verify that the entries are no longer appended.
------------------------------------------------------------------------
42.19 Locale Setting for the DB2 Administration Server

Ensure that the locale of the DB2 Administration Server instance is compatible with the locale of the DB2 instance. Otherwise, the DB2 instance cannot communicate with the DB2 Administration Server. If the LANG environment variable is not set in the user profile of the DB2 Administration Server, the DB2 Administration Server will be started with the default system locale. If the default system locale is not defined, the DB2 Administration Server will be started with code page 819. If the DB2 instance uses one of the DBCS locales, and the DB2 Administration Server is started with code page 819, the instance will not be able to communicate with the DB2 Administration Server. The locale of the DB2 Administration Server and the locale of the DB2 instance must be compatible. For example, on a Simplified Chinese Linux system, "LANG=zh_CN" should be set in the DB2 Administration Server's user profile.
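As a concrete illustration of the Simplified Chinese example, the sketch below only generates the profile line to be added; the locale zh_CN is the sample value from this section, and the destination (the DAS user's profile) must be chosen for your own system.

```shell
# Sketch only: build the LANG line that belongs in the DB2 Administration
# Server owner's profile so the DAS starts with the instance's locale.
# zh_CN is the Simplified Chinese sample locale; substitute your own.
INSTANCE_LANG=zh_CN
printf 'export LANG=%s\n' "$INSTANCE_LANG" > lang_profile_addition
cat lang_profile_addition   # append this line to the DAS user's profile
```
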
------------------------------------------------------------------------ 42.20 Shortcuts Not Working In some languages, for the Control Center on UNIX based systems and on OS/2, some keyboard shortcuts do not work. Please use the mouse to select options. ------------------------------------------------------------------------ 42.21 Service Account Requirements for DB2 on Windows NT and Windows 2000 During the installation of DB2 for Windows NT or Windows 2000, the setup program creates several Windows services and assigns a service account for each service. To run DB2 properly, the setup program grants the following user rights to the service account that is associated with the DB2 service: * Act as part of the operating system * Create a token object * Increase quotas * Log on as a service * Replace a process level token. If you want to use a different service account for the DB2 services, you must grant these user rights to the service account. In addition to these user rights, the service account must also have write access to the directory where the DB2 product is installed. The service account for the DB2 Administration Server service (DB2DAS00 service) must also have the authority to start and stop other DB2 services (that is, the service account must belong to the Power Users group) and have DB2 SYSADM authority against any DB2 instances that it administers. ------------------------------------------------------------------------ 42.22 Lost EXECUTE Privilege for Query Patroller Users Created in Version 6 Because of some new stored procedures (IWM.DQPGROUP, IWM.DQPVALUR, IWM.DQPCALCT, and IWM.DQPINJOB) added in Query Patroller Version 7, existing users created in Query Patroller Version 6 do not hold the EXECUTE privilege on those packages. An application to automatically correct this problem has been added to FixPak 1. When you try to use DQP Query Admin to modify DQP user information, please do not try to remove existing users from the user list. 
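For reference, a minimal sketch of a manual workaround: it generates GRANT statements for the four packages named above, which the FixPak 1 application otherwise issues automatically. The user name V6USER is a placeholder for a Query Patroller user created in Version 6.

```shell
# Sketch: generate the EXECUTE grants for the Query Patroller Version 7
# stored-procedure packages listed above. V6USER is a hypothetical user.
for pkg in IWM.DQPGROUP IWM.DQPVALUR IWM.DQPCALCT IWM.DQPINJOB; do
    echo "GRANT EXECUTE ON PACKAGE $pkg TO USER V6USER;"
done > dqp_grants.sql
cat dqp_grants.sql
```

While connected to the Query Patroller database, the generated file could then be run with db2 -tvf dqp_grants.sql.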
------------------------------------------------------------------------ 42.23 Query Patroller Restrictions Because of JVM (Java Virtual Machine) platform restrictions, the Query Enabler is not supported on HP-UX and NUMA-Q. In addition, the Query Patroller Tracker is not supported on NUMA-Q. If all of the Query Patroller client tools are required, we recommend the use of a different platform (such as Windows NT) to run these tools against the HP-UX or NUMA-Q server. ------------------------------------------------------------------------ 42.24 Need to Commit all User-defined Programs That Will Be Used in the Data Warehouse Center (DWC) If you want to use a stored procedure built by the DB2 Stored Procedure Builder as a user-defined program in the Data Warehouse Center (DWC), you must insert the following statement into the stored procedure before the con.close(); statement: con.commit(); If this statement is not inserted, changes made by the stored procedure will be rolled back when the stored procedure is run from the DWC. For all user-defined programs in the DWC, it is necessary to explicitly commit any included DB2 functions for the changes to take effect in the database; that is, you must add the COMMIT statements to the user-defined programs. ------------------------------------------------------------------------ 42.25 New Option for Data Warehouse Center Command Line Export Command line export to tag files has a new option, /B. This option is not available through the Data Warehouse Center interface. 
The new syntax for the iwh2exp2 command is:

   iwh2exp2 filename.INP dbname userid password [PREFIX=table_schema] [/S] [/R] [/B]

where:
   - filename.INP is the full path name of the INP file
   - dbname is the Data Warehouse Center control database name
   - userid is the user ID used to log on to the database
   - password is the password used to log on to the database

The optional parameters are:
   - PREFIX=table_schema: the table schema for the control database tables (the default value is IWH)
   - /S: export schedules with selected steps
   - /R: do not export warehouse sources with selected steps
   - /B: do not export contributing steps with selected steps

Note: If /R or /B is specified, the warehouse sources or contributing steps must already exist when the resulting tag file is imported, or an error is returned.
------------------------------------------------------------------------
42.26 Backup Services APIs (XBSA)

Backup Services APIs (XBSA) have been defined by the Open Group in the United Kingdom as an open application programming interface between applications or facilities needing data storage management for backup or archiving purposes. This is documented in "Open Group Technical Standard System Management: Backup Services API (XBSA)", Document Number C425 (ISBN: 1-85912-056-3). In support of this, two new DB2 registry variables have been created and are currently supported on AIX, HP, Solaris, and Windows NT:

DB2_VENDOR_INI
     Points to a file containing all vendor-specific environment settings. The value is picked up when the database manager starts.

DB2_XBSA_LIBRARY
     Points to the vendor-supplied XBSA library. On AIX, the setting must include the shared object if it is not named shr.o. HP, Solaris, and Windows NT do not require the shared object name.
For example, to use Legato's NetWorker Business Suite Module for DB2, the registry variable must be set as follows:

   db2set DB2_XBSA_LIBRARY="/usr/lib/libxdb2.a(bsashr10.o)"

The XBSA interface can be invoked through the BACKUP DATABASE or the RESTORE DATABASE command. For example:

   db2 backup db sample use XBSA
   db2 restore db sample use XBSA

------------------------------------------------------------------------
42.27 OS/390 agent

What's in this document?

In this document, you'll find instructions on how to install the OS/390 agent and information about its features. See "Installation overview" for a quick review of the installation process and "Installation details" for detailed instructions. See "Setting up additional agent functions", "Transformers", and "Accessing databases outside of the DB2 family" for information on the agent's features.

Overview

DB2 Warehouse Center includes an OS/390 agent. You can use the agent to communicate between DB2 Universal Database for OS/390 and other databases, including DB2 databases on other platforms and non-DB2 databases. The agent can communicate with supported data sources through an ODBC connection. The agent runs on OS/390 UNIX Systems Services. It requires OS/390 V2R6 or later, and it is compatible with DB2 for OS/390 Versions 5, 6, and 7. The OS/390 agent supports the following tasks:

* Copy data from a source DB2 database to a target DB2 database
* Sample contents from a table or file
* Run user-defined programs
* Access non-DB2 databases through IBM DataJoiner
* Access VSAM or IMS data through Cross Access Classic Connect
* Run DB2 Universal Database for OS/390 utilities
* Run the apply job for IBM Data Propagator

42.27.1 Installation overview

These steps summarize the installation process. The "Installation details" section provides more details on these steps.

1. Install the OS/390 agent from the DB2 Universal Database for OS/390 tape.
2. Update the environment variables in your profile file.
3. Set up connections:
   o Between the kernel and the agent daemon.
   o Between the agent and the databases that it will access.
4. Bind CLI locally and to any remote databases.
5. Set up your ODBC initialization file.
6. Set up authorizations so that the user:
   o Can run the agent daemon.
   o Has execute authority on plan DSNAOCLI.
   o Has read and write authorization to the logging and ODBC trace directories, if needed.
7. Start the agent daemon.

42.27.2 Installation details

Installing the OS/390 agent

The OS/390 agent is included on the DB2 Universal Database for OS/390 Version 7 tape. See the Program Directory that accompanies the tape for more information on installing the OS/390 agent. You must apply APAR PQ36585 or PQ36586 to your DB2 subsystem before installing the OS/390 agent.

Updating the environment variables in your profile file

The variables point the agent to various DB2 libraries, output directories, and so on. The following example shows the contents of a sample .profile file. The .profile file defines the environment variables, and it must be in the home directory of the user who starts the agent daemon:

   export VWS_LOGGING=/usr/lpp/DWC/logs
   export VWP_LOG=/usr/lpp/DWC/vwp.log
   export VWS_TEMPLATES=/usr/lpp/DWC/
   export DSNAOINI=/usr/lpp/DWC/dsnaoini
   export LIBPATH=/usr/lpp/DWC/:$LIBPATH
   export PATH=/usr/lpp/DWC/:$PATH
   export STEPLIB=DSN710.SDSNEXIT:DSN710.SDSNLOAD

Setting up connections

To set up the kernel and daemon connections, add the following lines to your /etc/services file or TCPIP.ETC.SERVICES file:

   vwkernal 11000/tcp
   vwd 11001/tcp
   vwlogger 11002/tcp

To set up connections between the OS/390 agent and databases, add any remote databases to your OS/390 communications database (CDB).
Here are some sample CDB inserts:

   INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT)
      VALUES ('NTDB','VWNT704','60002');
   INSERT INTO SYSIBM.IPNAMES (LINKNAME, SECURITY_OUT, USERNAMES, IPADDR)
      VALUES ('VWNT704', 'P', 'O', 'VWNT704.STL.IBM.COM');
   INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
      VALUES ('O', 'MVSUID', 'VWNT704', 'NTUID', 'NTPW');

For more information on setting up connections and updating your communications database, see the "Connecting Distributed Database Systems" chapter in DB2 UDB for OS/390 Installation Guide, GC26-9008-00.

Binding CLI

Because the OS/390 agent uses CLI to communicate with DB2, you must bind your CLI plan to all of the remote databases that your agent will access. Here are some sample bind package statements for a local DB2 for OS/390 database:

   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLICS) ISO(CS)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLINC) ISO(NC)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIRR) ISO(RR)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIRS) ISO(RS)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIUR) ISO(UR)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIMS)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIC1)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIC2)
   BIND PACKAGE (DWC6CLI) MEMBER(DSNCLIF4)

Here are some sample bind package statements for a DB2 database running on Windows NT:

   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLICS) ISO(CS)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLINC) ISO(NC)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIRR) ISO(RR)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIRS) ISO(RS)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIUR) ISO(UR)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIC1)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIC2)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIQR)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIF4)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIV1)
   BIND PACKAGE (NTDB.DWC6CLI) MEMBER(DSNCLIV2)

Here is a sample bind statement to bind the CLI packages together in a plan:

   BIND PLAN(DWC6CLI) PKLIST(*.DWC6CLI.*)

Setting up your ODBC initialization file

A sample ODBC initialization file, inisamp, is included in the /usr/lpp/DWC/ directory. You can edit this file to work with your system, or you can create your own file. To be sure that the file works correctly, verify that it is properly configured:

* The DSNAOINI environment variable must point to the initialization file.
* The file name should use the naming convention dsnaoini.location_name.
* The file must include the CONNECTTYPE=2 and MVSATTACHTYPE=CAF parameters.

For more information about binding CLI and the DSNAOINI file, see DB2 UDB for OS/390 ODBC Guide and Reference, SC26-9005.

Setting up authorizations

The OS/390 agent is a daemon process. You can run the agent daemon with regular UNIX security or with OS/390 UNIX security. Because the agent requires daemon authority, define these agent executables to RACF Program Control:

* libtls4d.dll
* iwhcomnt.dll
* vwd

To define the executable programs to RACF Program Control, change to the directory where the Data Warehouse Center executable files are stored and run the following commands:

   extattr +p libtls4d.dll
   extattr +p iwhcomnt.dll
   extattr +p vwd

To use the extattr command with the +p parameter, you must have at least read access to the BPX.FILEATTR.PROGCTL FACILITY class. The following example shows the RACF commands used to give this permission to user ID SMORG:

   RDEFINE FACILITY BPX.FILEATTR.PROGCTL UACC(NONE)
   PERMIT BPX.FILEATTR.PROGCTL CLASS(FACILITY) ID(SMORG) ACCESS(READ)
   SETROPTS RACLIST(FACILITY) REFRESH

For more information about authorizations, see OS/390 UNIX System Services Planning, SC28-1890.

Starting the agent daemon

After you finish configuring the system, start the agent daemon:

1. Telnet to UNIX Systems Services on OS/390 through the OS/390 host name and USS port.
2. Start the agent daemon:
   o To start the daemon in the foreground, enter vwd on the command line.
   o To start the daemon in the background, enter:

        vwd >/usr/lpp/DWC/logs/vwd.log 2>&1 &

To verify that the OS/390 agent daemon is running, enter the following command on a UNIX shell command line:

   ps -e | grep vwd

Or, enter D OMVS,a=all on the OS/390 console and search for the string vwd.

42.27.3 Setting up additional agent functions

The DB2 Warehouse Manager package includes the following user-defined programs:

* vwpftp: Runs an FTP command file.
* vwpmvs: Submits a JCL jobstream.
* vwprcpy: Copies a file using FTP.
* XTClient: Client trigger program.
* etidlmvs: A utility from ETI (Evolutionary Technologies International); deletes a file on MVS.
* etircmvs: A utility from ETI; runs FTP on an MVS host.
* etiexmvs: A utility from ETI; runs JCL on MVS.

In addition, you can create user-defined programs and stored procedures in the Data Warehouse Center. The OS/390 agent supports any executable programs that run under UNIX Systems Services. A user-defined program is assigned to one or more steps. When you run the user-defined program, the following actions occur:

* The agent starts.
* The agent runs the user-defined program.
* The user-defined program returns a return code and a feedback file to the agent.
* The agent returns the results to the kernel.

To run ETI programs on OS/390, you must first apply FixPak 2 to DB2 Universal Database Version 7.1. Use the VWP_LOG environment variable to define a directory where the user-defined programs can write output.

If you use a user-defined program to submit a job using FTP, you must first create the JCL and data that you want to submit. The job name in the JCL must be USERIDx, where x is a 1-character letter or number (for example: MYUSERA). The output class for the MSGCLASS and SYSOUT files contained in your JCL must specify a JES-held output class.

Restriction: The maximum LRECL for the submitted job is 254 characters. JES scans only the first 72 characters of JCL.
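The naming and record-length rules above can be checked before a deck is submitted. The sketch below writes a two-line sample deck and verifies it; MYUSER and sample.jcl are placeholder names, not anything shipped with the product.

```shell
# Sketch: pre-check a JCL deck against the rules above -- the job name must
# be the user ID plus one letter or number, and no record may exceed 254
# characters. MYUSER and sample.jcl are placeholders.
USERID=MYUSER
cat > sample.jcl << 'EOF'
//MYUSERA JOB ,'FTP SUBMIT',CLASS=A,MSGCLASS=H
//STEP1 EXEC PGM=FTP
EOF
head -1 sample.jcl | grep -q "^//${USERID}[A-Z0-9] JOB" && echo "job name ok"
awk 'length($0) > 254 { bad = 1; print "line " NR " exceeds LRECL 254" }
     END { exit bad }' sample.jcl && echo "record length ok"
```
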
Changing the Data Warehouse Center template for FTP support
The Data Warehouse Center installs a JCL template for transferring files using FTP. If you plan to have the OS/390 agent use the FTP commands GET or PUT to transfer files from an OS/390 host to another remote host, you need to change the account information in the JCL template for your OS/390 system:
1. Log on with an ID that has authority to copy and update files in the /usr/lpp/DWC directory.
2. Find ftp.jcl and copy the file with the new file name systemname.ftp.jcl, where systemname is the name of the OS/390 system.
3. Create a copy of this file for each OS/390 system on which you plan to run the conversion programs vwpmvs or ETI extract. For example, if you want to run either of these programs on STLMVS1, create a copy of the file called STLMVS1.ftp.jcl.
4. Use a text editor to customize the JCL to meet your site's requirements. Change the account information to match the standard account information for your MVS system. Do not change any parameters that are contained in brackets, such as [USERID] and [FTPFILE]. (The brackets are the hexadecimal characters x'AD' and x'BD', respectively. If your TSO terminal type is not set to 3278A in SPF option 0, these values might display as special characters rather than as brackets. This is not a problem if you do not change the x'AD' or the x'BD', or any of the data that is between the characters.)
5. Update the environment variable VWS_TEMPLATES to point to the directory of the copied template file.
The Data Warehouse Center includes this sample JCL template:
//[USERID]A JOB , 'PUT/GET',
// CLASS=A,
// USER=&SYSUID,
// NOTIFY=&SYSUID,
// TIME=(,30),
// MSGCLASS=H
//STEP1 EXEC PGM=FTP,PARM='( EXIT'
//INPUT DD DSN=[FTPFILE],DISP=SHR
//OUTPUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
Sampling contents of a table or file
Using the OS/390 agent, you can sample the contents of DB2 tables and flat files such as UNIX Systems Services files and OS/390 native flat files.
You can also sample the contents of IMS or VSAM files with Classic Connect using the OS/390 agent. For flat files, the agent looks at the parameters in the properties of the file definition to determine the file format.
42.27.4 Scheduling warehouse steps with the trigger program (XTClient)
Use the trigger program to schedule warehouse steps from the OS/390 platform. You or an OS/390 job scheduler can submit a job that triggers a step in the Data Warehouse Center. If the step is successful, the trigger step in the JCL returns a return code of 0.
You must have the Java Development Kit (JDK) 1.1.8 or later installed on your OS/390 UNIX Systems Services to use the trigger program.
To start the trigger, first start XTServer on the machine where your warehouse server is running. This process is described in Chapter 5 of the Data Warehouse Center Administration Guide, in the topic "Starting a step from outside the Data Warehouse Center." After XTServer is started, start XTClient on OS/390. The following example shows sample JCL to start the trigger.
//DBA1A JOB 1,'XTCLIENT',CLASS=A,MSGCLASS=H,
// MSGLEVEL=(1,1),REGION=4M,NOTIFY=&SYSUID
//******************************************************
//* submit iwhetrig
//******************************************************
//BRADS EXEC PGM=BPXBATCH,
// PARM=('sh cd /usr/lpp/DWC/; java XTClient 9.317.171.133 1100x
// 9 drummond pw bvmvs2nt 1 1 100')
//STDOUT DD PATH='/tmp/xtclient.stdout',
// PATHOPTS=(OWRONLY,OCREAT),
// PATHMODE=SIRWXU
//STDERR DD PATH='/tmp/xtclient.stderr',
// PATHOPTS=(OWRONLY,OCREAT),
// PATHMODE=SIRWXU
//
Note: The sample JCL above shows how to continue the parameters on a new line. To do so, type the parameters up to column 71, put an 'X' in column 72, and continue in column 16 on the next line.
The first part of the parameter is a statement (cd /usr/lpp/DWC/;) that changes to the directory where the OS/390 agent is installed.
The second part of the parameter starts XTClient and passes the following 8 parameters:
* The Data Warehouse Center server host name or IP address
* The Data Warehouse Center server port (normally 11009)
* The Data Warehouse Center user ID
* The Data Warehouse Center password
* The name of the step to run
* The Data Warehouse Center server command, where:
o 1 = populate the step
o 2 = promote the step to test mode
o 3 = promote the step to production mode
o 4 = demote the step to test mode
o 5 = demote the step to development mode
* Whether to wait for step completion, where 1 = yes and 0 = no
* The maximum number of rows (use 0 or blank to fetch all rows)
42.27.5 Transformers
Introduction
These 12 transformers are Java stored procedures that provide some basic data transformations. To run these transformers, you must first set up Java stored procedures on your DB2 subsystem. Additional information on these transformers is available in the IBM DB2 Universal Database Data Warehouse Center Administration Guide Version 7, SC26-9993-00.
IWH.CLEAN
IWH.PERIODTABLE
IWH.KEYTABLE
IWH.CHISQUARE
IWH.CORRELATION
IWH.STATISTICS
IWH.INVERTDATA
IWH.PIVOTDATA
IWH.REGRESSION
IWH.ANOVA
IWH.SUBTOTAL
IWH.MOVINGAVERAGE
Setting up Java Stored Procedures
These instructions are a brief version of the complete instructions on how to set up Java stored procedures, which can be found in the Application Programming Guide and Reference for Java(TM), SC26-9018.
1. Apply PTFs UQ46170 and UQ46114 to your DB2 subsystem.
2. Install VisualAge for Java 2.0 or later on your OS/390 system.
3. Install JDBC on your DB2, and bind the JDBC packages in your DB2 subsystem.
4. Set up RRS and DB2 WLM stored procedures for your DB2 subsystem.
5. Set up Java stored procedures for your DB2. This includes creating a Java WLM startup procedure for the Java stored procedures address space.
6. Under WLM, you must associate your Java WLM startup procedure with a WLM environment name.
Use the WLM application environment panel entitled "Create an Application Environment" to associate the environment name with the JCL procedure.
7. Specify the WLM application environment name for the WLM_ENVIRONMENT option on CREATE or ALTER PROCEDURE to associate a stored procedure or user-defined function with an application environment.
8. Ensure that the owner of your DB2 started tasks has access to the libraries in the Java WLM startup procedure.
Steps to setting up Warehouse Transformers
These instructions are a brief version of the complete instructions, which can be found in the IBM DB2 Universal Database Data Warehouse Center Administration Guide Version 7, SC26-9993-00.
1. Either apply FixPak 3 to DB2 Universal Database for NT Version 7, or update the Warehouse Control Database so that TRANSREGISTERED = 1 and TRANSFENCED = 1. To update the Warehouse Control Database, enter the following SQL on a DB2 Universal Database command line processor:
CONNECT TO your_vw_control_database
UPDATE IWH.INFORESOURCE SET TRANSREGISTERED = '1' WHERE SUBDBTYPE = 'DB2 MVS'
UPDATE IWH.INFORESOURCE SET TRANSFENCED = '1' WHERE SUBDBTYPE = 'DB2 MVS'
2. Define the transformers to DB2:
o If you have DB2 for OS/390 Version 7, use the SQL statements in /usr/lpp/DWC/createXfSQLV7.
o If you have DB2 for OS/390 Version 6, use the SQL statements in /usr/lpp/DWC/createXfSQL.
o If you have DB2 for OS/390 Version 5, use the commented SQL statements in /usr/lpp/DWC/createXfSQL. Comment out all of the CREATE PROCEDURE statements. Then uncomment and use the INSERT INTO SYSIBM.SYSPROCEDURES statements to define the transformers to DB2 for OS/390 Version 5.
When you set up Java stored procedures, you use WLM to associate the Java WLM startup procedure with a WLM environment name. The environment name is specified in the WLM ENVIRONMENT option of the CREATE PROCEDURE statement. DSNWLMJ is the WLM environment name included with the transformer definitions described above.
You can either add a WLM association name of DSNWLMJ, or change the WLM ENVIRONMENT option for each transformer definition to a name that you have already associated with your startup procedure.
3. Set up links from UNIX Systems Services to the transformer load modules in IWH710.SIWHLOAD:
o Telnet to UNIX Systems Services on your OS/390 host system.
o Change to the directory where you installed the OS/390 agent. The default installation directory is /usr/lpp/DWC.
o If you are using DB2 V7, skip to step 4. If you are using DB2 V5 or V6, edit the trlinks data set in the installed directory. Comment out this line by putting a pound sign (#) in column 1:
ln -e IWHXFV7 xf.jll;
Uncomment this line by removing the pound sign (#) in column 1:
#ln -e IWHXF xf.jll;
Save your changes.
o Type trlinks and press Enter. This creates an xf.jll link in the directory, which directs the agent to load either the IWHXF or the IWHXFV7 module.
4. APF-authorize IWH710.SIWHPDSE, then add it to the STEPLIB concatenation in your DB2 Java stored procedures startup procedure.
5. Add the directory where your xf.jll link is (default: /usr/lpp/DWC) to the CLASSPATH and LIBPATH environment variables in your WLM environment data set.
o If you are not sure where your WLM environment data set is, look in your DB2 Java stored procedures startup procedure. Your WLM environment data set is the one that your JAVAENV DD card points to.
6. Start the stored procedures, then create and run your warehouse steps.
Restrictions for Java stored procedures
Java objects in a stored procedure's signature are supported only in DB2 for OS/390 Version 7. For this reason, the transformers do not support null values in their parameters in DB2 for OS/390 Version 5 or 6. In these versions, if you pass a null parameter, it behaves like a zero. The Version 5 and 6 transformers treat zero parameters like null strings.
DB2 supports the COMMIT SQL statement in stored procedures only in DB2 for OS/390 Version 7.
The INVERTDATA stored procedure drops and re-creates a table within the stored procedure; therefore, it requires a COMMIT statement. For that reason, IWH.INVERTDATA is not supported in DB2 for OS/390 Version 5 or Version 6.
DB2 for OS/390 does not support Java user-defined functions, so IWH.FORMATDATE is not supported on the OS/390 platform.
Sample startup procedure for Java stored procedures (described in "DB2 for OS/390 Application Programming Guide and Reference for Java"):
//DSNWLMJ PROC DB2SSN=DSN,NUMTCB=5,APPLENV=DSNWLMJ <-- WLM ENVIRONMENT value in CREATE PROC
//*******************************************************************
//* THIS PROC IS USED TO START THE WLM-ESTABLISHED SPAS *
//* ADDRESS SPACE FOR THE DSNWLMJ APPLICATION ENVIRONMENT *
//* V WLM,APPLENV=DSNWLMJ,RESUME *
//*******************************************************************
//DSNWLMJ EXEC PGM=DSNX9WLM,TIME=1440,REGION=0M,
// PARM='&DB2SSN, &NUMTCB, &APPLENV'
//STEPLIB DD DSN=DSN.TESTLIB,DISP=SHR
// DD DSN=IWH710.SIWHPDSE,DISP=SHR <-- This has the transformers in it
// DD DSN=DSN.HPJSP.PDSE.JDBC,DISP=SHR <-- HPJ DLLs from HPJ setup
// DD DSN=SYS1.PP.PDSELINK,DISP=SHR <-- HPJ runtime libraries
// DD DSN=DSN710.SDSNEXIT,DISP=SHR
// DD DSN=DSN710.SDSNLOAD,DISP=SHR
// DD DSN=SYS1.SCEERUN,DISP=SHR
// DD DSN=DSN.PDSE,DISP=SHR <-- HPJ setup info
//JAVAENV DD DSN=DSN.WLMENVJ.JSPENV,DISP=SHR <-- Environment variables, see below
//CEEDUMP DD SYSOUT=A
//DSSPRINT DD SYSOUT=A
//JSPDEBUG DD SYSOUT=A
//SYSABEND DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
Sample environment variable data set (described in "DB2 for OS/390 Application Programming Guide and Reference for Java"):
ENVAR("TZ=PST07",
"DB2SQLJPROPERTIES=/usr/lpp/db2/jdbc/db2710/classes/db2sqljjdbc.properties",
"LIBPATH=/usr/lpp/DWC",
"VWSPATH=/usr/lpp/DWC",
"CLASSPATH=/usr/lpp/db2/jdbc/db2710/classes:/usr/lpp/DWC:/usr/lpp/hpj/lib"),
MSGFILE(JSPDEBUG)
National Language Support for the Transformers
Most messages produced by the OS/390 agent are sent to the NT
platform to be interpreted, so in most cases the message language depends on how DB2 UDB for NT was installed. The transformers are an exception. The OS/390 agent ships the following message files for the transformers:
File name: For language:
Xf.properties_Fi_FI Finnish in Finland
Xf.properties_No_NO Norwegian in Norway
Xf.properties_Ru_RU Russian in Russia
Xf.properties_Zh_CN Chinese in China (People's Republic of China)
Xf.properties_Zh_TW Chinese in Taiwan
Xf.properties_Da_DK Danish in Denmark
Xf.properties_De_DE German in Germany
Xf.properties_En_US English in U.S.
Xf.properties_Es_ES Spanish in Spain
Xf.properties_Fr_FR French in France
Xf.properties_It_IT Italian in Italy
Xf.properties_Ja_JP Japanese in Japan
Xf.properties_Ko_KR Korean in Korea
Xf.properties_Pt_BR Portuguese in Brazil
Xf.properties_Sv_SE Swedish in Sweden
If you want your transformer messages in a language other than English, select one of the files above and copy its contents to Xf.properties.
42.27.6 Accessing databases outside of the DB2 family
To access non-DB2 Universal Database systems, the OS/390 agent uses DataJoiner. DataJoiner enables the agent to use a normal DRDA flow to it, as if it were a UDB database. If an ODBC request is directed to a non-DB2 family database source, DataJoiner invokes an additional layer of code to access the foreign database. DataJoiner can access Oracle, Sybase, Informix, Microsoft SQL Server, Teradata, and any other database that has an ODBC driver that runs on Windows NT, AIX, or Sun's Solaris Operating Environment.
The OS/390 agent can access DataJoiner as a source, but not as a target. DataJoiner does not support two-phase commit. Although DataJoiner supports TCP/IP as an application requester in Versions 2.1 and 2.1.1, it does not have an application server. Because the OS/390 agent requires an application server to use TCP/IP, you must use an SNA connection instead to access DataJoiner from OS/390.
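The transformer message-file selection described above under "National Language Support for the Transformers" amounts to a single file copy. The sketch below is illustrative only: it uses a temporary staging directory in place of the agent install directory (/usr/lpp/DWC in the text), and the property contents are made up for demonstration.

```shell
# Illustrative staging directory standing in for /usr/lpp/DWC,
# with a stand-in German message file (contents are hypothetical).
DWC_DIR=/tmp/dwc-nls-demo
mkdir -p "$DWC_DIR"
echo "sampleKey=Beispielwert" > "$DWC_DIR/Xf.properties_De_DE"

# To get German transformer messages, copy the chosen message file
# over the active Xf.properties file, as the notes describe.
cp "$DWC_DIR/Xf.properties_De_DE" "$DWC_DIR/Xf.properties"
cat "$DWC_DIR/Xf.properties"
```

On a real system the copy would be run in the agent install directory, choosing whichever Xf.properties_xx_XX file matches the wanted language.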
Accessing IMS and VSAM on OS/390
Classic Connect is purchased and installed separately from the warehouse agent. The OS/390 agent can access IMS and VSAM through the Classic Connect ODBC driver. With Classic Connect, you can set up a DB2-like definition of IMS and VSAM data sets, and then access them using ODBC. The OS/390 agent loads the correct ODBC driver based on whether a request is directed to Classic Connect or to DB2. If you are accessing a DB2 source, the agent loads the DB2 ODBC driver. If you are accessing a VSAM or IMS source, the agent loads the Classic Connect ODBC driver. The agent request is then processed.
Setting up the Classic Connect ODBC driver and warehouse access
Classic Connect is purchased and installed separately from the OS/390 agent. Classic Connect can view a single file or a portion of a file as one or more relational tables. You must map the IMS and VSAM data for Classic Connect to access it. You can map the data manually, or use the Microsoft Windows Classic Connect non-relational data mapper.
1. Install the Classic Connect Data Server on OS/390.
2. Optional: Install the Classic Connect Data Mapper product on NT.
3. Define Classic Connect's logical table definitions so that Classic Connect can access data relationally. You can use the data mapper to create the definitions for IMS and VSAM structures, or create the definitions manually.
4. After you set up Classic Connect, you can set up access to your warehouse:
a. Create a Classic Connect .ini file. A sample Classic Connect application configuration file, cxa.ini, is in the /usr/lpp/DWC/ directory, and it is reproduced here:
* national language for messages
NL = US English
* resource master file
NL CAT = usr/lpp/DWC/v4r1m00/msg/engcat
FETCH BUFFER SIZE = 32000
DEFLOC = CXASAMP
USERID = uid
USERPASSWORD = pwd
DATASOURCE = DJX4DWC tcp/9.112.46.200/1035
MESSAGE POOL SIZE = 1000000
b. Update the DATASOURCE line in the .ini file. This line contains a data source name and a protocol address.
The data source name must correspond to a Query Processor name defined on the Classic Connect Data Server, which is located in the QUERY PROCESSOR SERVICE INFO ENTRY in the data server configuration file. The protocol address can be found in the same file, in the TCP/IP SERVICE INFO entry. The USERID and USERPASSWORD in this file are used when defining a warehouse data source.
c. Export the CXA_CONFIG environment variable to point to your Classic Connect program files, which are usually in the same directory as your .ini file.
d. Update your LIBPATH environment variable to include the path to your Classic Connect program files, which are usually in the same directory as your .ini file.
e. Optional: Verify the installation with the test program cxasamp. Enter cxasamp from the directory that contains your .ini file. The location/uid/pwd is the data source name/userid/userpassword that is defined in your .ini file.
f. Define a data source to the warehouse in the same way that you define any DB2 data source. You do not need to update your dsnaoini file, because DB2 for OS/390 does not have a driver manager. The driver manager for Classic Connect is built into the OS/390 agent.
42.27.7 Running DB2 for OS/390 utilities
You must apply APAR PQ44904 to the OS/390 agent in order to use the agent to run utilities.
DSNUTILS is a DB2 for OS/390 stored procedure that runs in a WLM and RRS environment. You can use it to run any installed DB2 utilities by using the user-defined stored procedure interface. The DB2 for OS/390 LOAD, REORG, and RUNSTATS utilities have property sheets that you can use to change how the utility runs. To change a utility's properties, right-click the utility in the Process Modeler window and click Properties. The Warehouse Manager also provides an interface to DSNUTILS so that you can include DB2 utilities in Warehouse Manager steps.
To set up the DSNUTILS stored procedure:
1. Run the DSNTIJSG job when installing DB2 to set up and bind the DSNUTILS stored procedure.
Make sure that the definition of DSNUTILS has PARAMETER STYLE GENERAL.
2. Enable the WLM-managed stored procedures.
3. Set up your RRS and WLM environments.
4. Run the sample batch DSNUTILS programs supplied by DB2. (This step is recommended but not required.)
5. Bind the DSNUTILS plan with your DSNCLI plan so that CLI can call the stored procedure:
BIND PLAN(DSNAOCLI) PKLIST(*.DSNAOCLI.*, *.DSNUTILS.*)
6. Set up a step using the Warehouse Manager and run the step. The population type should be APPEND. If it is not, the Warehouse Manager deletes everything in the table before running the utility.
Copying data between DB2 for OS/390 tables using the LOAD utility
Suppose that you want to copy a table by unloading it into a flat file, and then loading the flat file into a different table. To do this, you normally have to unload the data, edit the load control statements that the unload produces, and then load the data. Using the warehouse, you can specify that you want to reload into a different table without having to stop between steps and manually edit the control statements. Here is how:
Use the Reorg/Generic interface to create a step that unloads a file using the UNLOAD utility or the REORG TABLESPACE utility. Both of these utilities produce two output data sets: one with the table data, and one with the utility control statement that can be input to LOAD. In the control statement that the utility generates, the INTO TABLE table name is the name of the unloaded table. Here is an example of the DSNUTILS parameters you might use for the Reorg Unload step:
Table 33. Properties for the Reorg Unload step
UTILITY_ID    REORGULX
RESTART       NO
UTSTMT        REORG TABLESPACE DBVW.USAINENT UNLOAD EXTERNAL
UTILITY_NAME  REORG TABLESPACE
RECDSN        DBVW.DSNURELD.RECDSN
RECDEVT       SYSDA
RECSPACE      50
PNCHDSN       DBVW.DSNURELD.PNCHDSN
PNCHDEVT      SYSDA
PNCHSPACE     3
Use the Reorg/Generic DSNUTILS interface to create a load step. Normally, the DSNUTILS utility statement parameter specifies a utility control statement.
The warehouse utility interface also allows a file name in the utility statement field. You can specify the file that contains the valid control statement using the keyword :FILE:, and the name of the table that you want to load using the keyword :TABLE:. To use the LOAD utility to work with the output from the previous example, apply the following parameter values in the LOAD properties:
Note: In the UTSTMT field, type either a load statement or the name of the file that was output from the REORG utility with the UNLOAD EXTERNAL option.
Table 34. LOAD step properties
UTILITY_ID    LOADREORG
RESTART       NO
UTSTMT        :FILE:DBVW.DSNURELD.PNCHDSN:TABLE:[DBVW].INVENTORY
UTILITY_NAME  LOAD
RECDSN        DBVW.DSNURELD.RECDSN
RECDEVT       SYSDA
This works for any DB2 for OS/390 source and target tables on the same or different DB2 subsystems. The control statement flat file can be either an HFS file or a native MVS file. For more detailed information about DSNUTILS and the DB2 utilities available for OS/390, see the DB2 for OS/390 Utility Guide and Reference.
42.27.8 Replication
You can use the OS/390 agent to automate your DataPropagator replication apply steps. Replication requires a source database, a control database, and a target database. These can be different databases or the same database. A capture job reads the DB2 log to determine which rows in the source database have been added, updated, or changed, and then writes the changes to a change-data table. An apply job is then run to apply the changes to a target database. The DB2 Warehouse Manager package can automate the apply job by creating a replication step. Use the Warehouse Manager to define the type of apply job to run and when to run it. You need to export your SASNLINK library in the STEPLIB environment variable.
Adding replication support to the Data Warehouse Center template
The Data Warehouse Center includes a JCL template for replication support.
If you plan to use the OS/390 agent to run the apply program, you need to change the account and data set information in this template for your OS/390 system. To change the template:
1. Log on with an ID that has authority to copy and update files in the /usr/lpp/DWC/ directory.
2. Find apply.jcl and copy this file as systemname.apply.jcl, where systemname is the name of the MVS system. For example, on STLMVS1, create a copy of the file named STLMVS1.apply.jcl.
3. Use a text editor to customize the JCL to meet your site's requirements. Change the account information to match the standard account information, and change the data set for STEPLIB DD and MSGS DD for your MVS system.
4. If necessary, change the program name on the EXEC card. For details on changing program names, see the DB2 Replication Guide and Reference. Do not change any parameters that are contained in brackets, such as [USERID] and [APPLY_PARMS]. (The brackets are the hexadecimal characters x'AD' and x'BD', respectively. If your TSO terminal type is not set to 3278A in SPF option 0, these values might display as special characters rather than as brackets. This is not a problem if you do not change the x'AD' or the x'BD', or any of the data that is between the characters.)
5. Update the environment variable VWS_TEMPLATES to point to the directory of the copied template file.
The following example shows the JCL template that is included with the Data Warehouse Center:
Apply JCL template:
//[USERID]A JOB ,MSGCLASS=H,MSGLEVEL=(1,1),
// REGION=2M,TIME=1440,NOTIFY=&SYSUID
//* DON'T CHANGE THE FIRST LINE OF THIS TEMPLATE.
//* THE REMAINING JCL SHOULD BE MODIFIED FOR YOUR SITE.
//**********************************************
//* RUN APPLY/MVS ON OS/390 DB2 6.1 *
//**********************************************
//ASNARUN EXEC PGM=ASNAPV66,REGION=10M,
// [APPLY_PARMS]
//STEPLIB DD DISP=SHR,DSN=DPROPR.V6R1M0.SASNLINK
// DD DISP=SHR,DSN=DSN610.SDSNLOAD
//MSGS DD DSN=DPROPR.V2R1M0A.MSGS,DISP=SHR
//ASNASPL DD DSN=&&ASNASPL,DISP=(NEW,DELETE,DELETE),
// UNIT=SYSDA,SPACE=(CYL,(10,1)),
// DCB=(RECFM=VB,BLKSIZE=6404)
//SYSTERM DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//
42.27.9 Agent logging
Many DB2 Warehouse Manager components, such as the server, the logger, agents, and some Data Warehouse Center programs, write logs to the logging directory, which is specified in the VWS_LOGGING environment variable. These log files are plain text.
You can start agent logging from the Data Warehouse Center. From the left pane, right-click Warehouse and click Properties. On the Trace Level tab, change the settings to the trace level that you want. The agent trace supports levels 0-4:
* Level 1 - entry/exit tracing
* Level 2 - level 1 plus debugging trace
* Level 3 - level 2 plus data tracing
* Level 4 - internal buffer tracing
When the trace is set higher than level 1, performance will be slower. Turn on tracing only for debugging purposes. The tracing information is stored in the file AGNTxxx.LOG. Environment information is stored in the file AGNTxxx.SET.
------------------------------------------------------------------------
42.28 Client Side Caching on Windows NT
If a user with a valid token tries to access a READ PERM DB file through a shared drive, where the file resides on a Windows NT server machine on which DB2 Data Links is installed, the file opens as expected. However, subsequent open requests using the same token do not actually reach the server; they are serviced from the cache on the client. Even after the token expires, the contents of the file continue to be visible to the user, because the entry is still in the cache.
However, this problem does not occur if the file resides on a Windows NT workstation.
A solution is to set the registry entry HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lanmanserver\Parameters\EnableOpLocks to zero on the Windows NT server. With this registry setting, whenever a file residing on the server is accessed from a client workstation through a shared drive, the request always reaches the server instead of being serviced from the client cache. Therefore, the token is revalidated for all requests. The negative impact of this solution is that it affects the overall performance of all file access from the server over shared drives. Even with this setting, if the file is accessed through a shared drive mapping on the server itself, as opposed to from a different client machine, the request still appears to be serviced from the cache. Therefore, the token expiry does not take effect.
Note: In all cases, if the file access is local and not through a shared drive, token validation and subsequent token expiry occur as expected.
------------------------------------------------------------------------
42.29 Trial Products on Enterprise Edition UNIX CD-ROMs
The DB2 Universal Database (UDB) Enterprise Edition (EE) CD-ROMs for UNIX platforms Version 6 and Version 7 contain a 90-day trial version of DB2 Connect Enterprise Edition (CEE). Because DB2 Connect functionality is built into the DB2 UDB EE product, you do not have to install the DB2 CEE product on systems where DB2 UDB EE is installed to use DB2 Connect functionality. If you install the 90-day trial version of DB2 CEE and decide to upgrade to a licensed version, you must purchase the DB2 CEE product and install the DB2 CEE license key. You do not have to reinstall the product. The instructions for installing the license key are provided in the DB2 EE or DB2 CEE for UNIX Quick Beginnings book.
If you installed the trial CEE product along with your EE installation, and do not want to install CEE permanently, you can remove the CEE 90-day trial version by following these instructions. If you remove the trial version of Connect EE, you will still have DB2 Connect functionality available with DB2 EE. To remove DB2 Connect Version 7, uninstall the following filesets from the respective platforms: * On AIX, uninstall the db2_07_01.clic fileset. * On NUMA-Q and the Solaris Operating Environments, uninstall the db2clic71 package. * On Linux, uninstall the db2clic71-7.1.0-x RPM. * On HP-UX, uninstall the DB2V7CONN.clic fileset. To remove DB2 Connect Version 6, uninstall the following filesets from the respective platforms: * On AIX, uninstall the db2_06_01.clic fileset. * On NUMA-Q and the Solaris Operating Environments, uninstall the db2cplic61 package. * On Linux, uninstall the db2cplic61-6.1.0-x RPM. * On HP-UX, uninstall the DB2V6CONN.clic fileset. ------------------------------------------------------------------------ 42.30 Trial Products on DB2 Connect Enterprise Edition UNIX CD-ROMs The DB2 Connect Enterprise Edition (EE) CD-ROMs for UNIX platforms Version 6 and Version 7 contain a 90-day trial version of DB2 Universal Database (UDB) Enterprise Edition (EE). The DB2 UDB EE 90-day trial version is provided for evaluation, but is not required for DB2 Connect to work. If you install the 90-day trial version of DB2 UDB EE and decide to upgrade to a licensed version, you must purchase the DB2 UDB EE product and install the DB2 UDB EE license key. You do not have to reinstall the product. The instructions for installing the license key are provided in the DB2 EE or DB2 CEE for UNIX Quick Beginnings book. If you installed the trial UDB EE product along with your Connect EE installation, and you do not want to install UDB EE permanently, you can remove the EE 90-day trial version by following these instructions. 
If you remove the trial version of DB2 UDB EE, it will not impact the functionality of DB2 Connect EE.
To remove DB2 UDB EE Version 7, uninstall the following filesets from the respective platforms:
* On AIX, uninstall the db2_07_01.elic fileset.
* On NUMA-Q and the Solaris Operating Environments, uninstall the db2elic71 package.
* On Linux, uninstall the db2elic71-7.1.0-x RPM.
* On HP-UX, uninstall the DB2V7ENTP.elic fileset.
To remove DB2 UDB EE Version 6, uninstall the following filesets from the respective platforms:
* On AIX, uninstall the db2_06_01.elic fileset.
* On NUMA-Q and the Solaris Operating Environments, uninstall the db2elic61 package.
* On Linux, uninstall the db2elic61-6.1.0-x RPM.
* On HP-UX, uninstall the DB2V6ENTP.elic fileset.
------------------------------------------------------------------------
42.31 Drop Data Links Manager
You can now drop a DB2 Data Links Manager for a specified database. The processing of some Data Links-related SQL requests, as well as utilities such as backup and restore, involves communicating with all DLMs configured for a database. Previously, DB2 did not have the capability to drop a configured DLM even though it may not have been operational. This resulted in additional overhead in SQL and utilities processing. Once a DLM was added, the engine communicated with it when processing requests, which may have resulted in the failure of some SQL requests (for example, drop table/tablespace/database).
------------------------------------------------------------------------
42.32 Uninstalling DLFM Components Using SMIT May Remove Additional Filesets
Before uninstalling DB2 (Version 5, 6, or 7) from an AIX machine on which the Data Links Manager is installed, follow these steps:
1. As root, make a copy of /etc/vfs using the command:
cp -p /etc/vfs /etc/vfs.bak
2. Uninstall DB2.
3.
As root, replace /etc/vfs with the backup copy made in step 1:
cp -p /etc/vfs.bak /etc/vfs
------------------------------------------------------------------------
42.33 Error SQL1035N when Using CLP on Windows 2000
If DB2 is installed in a directory to which only some users (for example, administrators) have write access, a regular user may receive error SQL1035N when attempting to use the DB2 Command Line Processor. To solve this problem, DB2 should be installed in a directory to which all users have write access.
------------------------------------------------------------------------
42.34 Enhancement to SQL Assist
The SQL Assist tool now allows the user to specify a join operator other than "=" for table joins. The Join Type dialog, which is launched by clicking the Join Type button on the Joins page of the SQL Assist tool, has been enhanced to include a drop-down list of join operators. The available operators are "=", "<>", "<", ">", "<=", and ">=".
SQL Assist is a tool that assists the user in creating simple SQL statements. It is available from the Command Center (Interactive tab), the Control Center (Create View and Create Trigger dialogs), the Stored Procedure Builder ("Inserting SQL Stored Procedure" wizard), and the Data Warehouse Center (SQL Process step).
------------------------------------------------------------------------
42.35 Gnome and KDE Desktop Integration for DB2 on Linux
DB2 now includes a set of utilities for the creation of DB2 desktop folders and icons for launching the most commonly used DB2 tools on the Gnome and KDE desktops for supported Intel-based Linux distributions. These utilities are installed by DB2 Version 7.2 by default, and can be used after the installation to create and remove desktop icons for one or more selected users. To add a set of desktop icons for one or more users, use the following command:
db2icons <userID> [<userID> ...]
Note: If icons are generated while a Gnome or KDE desktop environment is running, the user may need to force a manual desktop refresh to see the new icons.
To remove a set of desktop icons for one or more users, use the following command: db2rmicons <userID> [<userID> ...]
Note: You must have sufficient authority to generate or remove icons for other users. Typically, db2icons and db2rmicons can be used to create or remove icons for yourself if you are a normal user, and for others only if you are root or another user with the authority to write to the specified users' home directories.
------------------------------------------------------------------------
42.36 Running DB2 under Windows 2000 Terminal Server, Administration Mode
For DB2 UDB Version 7.1, FixPak 3 and later, DB2 can run under the Windows 2000 Terminal Server, Administration Mode. Previously, you could not run DB2 under the client session of a Windows 2000 Terminal Server, Administration Mode.
------------------------------------------------------------------------
42.37 Online Help for Backup and Restore Commands
Incorrect information appears when you type db2 ? backup. The correct output is:
BACKUP DATABASE database-alias [USER username [USING password]]
[TABLESPACE (tblspace-name [ {,tblspace-name} ... ])] [ONLINE]
[INCREMENTAL [DELTA]]
[USE TSM [OPEN num-sess SESSIONS] |
 TO dir/dev [ {,dir/dev} ... ] |
 LOAD lib-name [OPEN num-sess SESSIONS]]
[WITH num-buff BUFFERS] [BUFFER buffer-size] [PARALLELISM n]
[WITHOUT PROMPTING]
Incorrect information appears when you type db2 ? restore. The correct output is:
RESTORE DATABASE source-database-alias { restore-options | CONTINUE | ABORT }
restore-options:
[USER username [USING password]]
[{TABLESPACE [ONLINE] |
 TABLESPACE (tblspace-name [ {,tblspace-name} ... ]) [ONLINE] |
 HISTORY FILE [ONLINE]}]
[INCREMENTAL [ABORT]]
[{USE TSM [OPEN num-sess SESSIONS] |
 FROM dir/dev [ {,dir/dev} ...
] |
 LOAD shared-lib [OPEN num-sess SESSIONS]}]
[TAKEN AT date-time] [TO target-directory]
[INTO target-database-alias] [NEWLOGPATH directory]
[WITH num-buff BUFFERS] [BUFFER buffer-size]
[DLREPORT file-name] [REPLACE EXISTING] [REDIRECT] [PARALLELISM n]
[WITHOUT ROLLING FORWARD] [WITHOUT DATALINK] [WITHOUT PROMPTING]
------------------------------------------------------------------------
42.38 "Warehouse Manager" Should Be "DB2 Warehouse Manager"
All occurrences of the phrase "Warehouse Manager" in product screens and in product documentation should read "DB2 Warehouse Manager".
------------------------------------------------------------------------
Additional Information
------------------------------------------------------------------------
43.1 DB2 Universal Database and DB2 Connect Online Support
For a complete and up-to-date source of DB2 information, including information about issues discovered after this document was published, use the DB2 Universal Database & DB2 Connect Online Support Web site, located at http://www.ibm.com/software/data/db2/udb/winos2unix/support.
------------------------------------------------------------------------
43.2 DB2 Magazine
For the latest information about the DB2 family of products, obtain a free subscription to "DB2 magazine". The online edition of the magazine is available at http://www.db2mag.com; instructions for requesting a subscription are also posted on this site.
------------------------------------------------------------------------
Appendixes
------------------------------------------------------------------------
Appendix A. Notices
IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: IBM World Trade Asia Corporation Licensing 2-31 Roppongi 3-chome, Minato-ku Tokyo 106, Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Canada Limited Office of the Lab Director 1150 Eglinton Ave. East North York, Ontario M3C 1H7 CANADA Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this information and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. 
IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information may contain examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information may contain sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows: (C) (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. (C) Copyright IBM Corp. _enter the year or years_. All rights reserved.
------------------------------------------------------------------------
A.1 Trademarks
The following terms, which may be denoted by an asterisk (*), are trademarks of International Business Machines Corporation in the United States, other countries, or both.
ACF/VTAM, AISPO, AIX, AIX/6000, AIXwindows, AnyNet, APPN, AS/400, BookManager, C Set++, C/370, CICS, DATABASE 2, DataHub, DataJoiner, DataPropagator, DataRefresher, DB2, DB2 Connect, DB2 Extenders, DB2 OLAP Server, DB2 Universal Database, Distributed Relational Database Architecture, DRDA, eNetwork, Extended Services, FFST, First Failure Support Technology, IBM, IMS, IMS/ESA, LAN Distance, MVS, MVS/ESA, MVS/XA, Net.Data, OS/2, OS/390, OS/400, PowerPC, QBIC, QMF, RACF, RISC System/6000, RS/6000, S/370, SP, SQL/DS, SQL/400, System/370, System/390, SystemView, VisualAge, VM/ESA, VSE/ESA, VTAM, WebExplorer, WIN-OS/2
The following terms are trademarks or registered trademarks of other companies:
Microsoft, Windows, and Windows NT are trademarks or registered trademarks of Microsoft Corporation.
Java and all Java-based trademarks and logos, and Solaris are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Tivoli and NetView are trademarks of Tivoli Systems Inc. in the United States, other countries, or both.
UNIX is a registered trademark in the United States and other countries and is licensed exclusively through X/Open Company Limited.
Other company, product, or service names, which may be denoted by a double asterisk (**), may be trademarks or service marks of others.
------------------------------------------------------------------------
1 A new level is initiated each time a trigger, function, or stored procedure is invoked.
2 Unless automatic commit is turned off, interfaces that automatically commit after each statement will return a null value when the function is invoked in separate statements.
3 This applies to both FOR EACH ROW and FOR EACH STATEMENT after insert triggers.
4 A Service Policy defines a set of quality of service options that should be applied to this messaging operation. These options include message priority and message persistence. See the MQSeries Application Messaging Interface manual for further details.
5 A character string with a subtype of BIT DATA is not allowed.
6 A common-table-expression may precede the fullselect.
7 A common-table-expression may precede a fullselect.
8 There is no casting of the previous value to the source type prior to the computation.