Release notes for test fix 9


IBM(R) InfoSphere(TM) Master Data Management Server for Product Information Management, Version 6.0.0, Test Fix 9 is now available. This release notes document addresses system requirements, installation, and known problems for this test fix.

Description of this test fix

InfoSphere MDM Server for PIM provides a highly scalable, enterprise Product Information Management (PIM) solution. InfoSphere MDM Server for PIM is the middleware that establishes a single, integrated, consistent view of product and service information inside and outside of an enterprise.

Caution:
It is strongly recommended that you apply the test fix only to a test system. Test fixes do not undergo official IBM testing.

This test fix addresses the following enhancements and fixes:

Product fixes and enhancements discovered internally

The following fixes are included in this test fix as a result of internal defects that were discovered by the development team. If you want to learn more about one of these fixes, you can use the internal defect number for reference when speaking to someone from IBM Software Support:

Limitations:

You can review the release notes of previous maintenance releases for InfoSphere MDM Server for PIM, version 6.0.0, to see what enhancements or product fixes were included previously:

Fix release       Release date
fix pack 7        07/29/2010
interim fix 4     06/25/2010
interim fix 3     05/17/2010
fix pack 6        04/29/2010
interim fix 2     12/03/2009
fix pack 5        10/01/2009
fix pack 4        06/19/2009
fix pack 3        05/07/2009
fix pack 2        03/02/2009
interim fix 1     12/24/2008
fix pack 1        11/25/2008

System requirements

For information about hardware and software compatibility, see the detailed system requirements document at: http://www-01.ibm.com/support/docview.wss?uid=swg27013146.

Installing test fix 9

Before you install this test fix, you must have installed InfoSphere MDM Server for PIM, Version 6.0.0, either at the base level or with any previous fix packs for version 6.0.0 applied. In particular, make sure that you have applied all configuration changes, for example, any schema changes and common.properties updates that are documented in the Requirements sections of those release notes.

Important: Before starting the installation process, you must clean your browser cache. In addition, all users must clean their browser cache before using the user interface for the first time after a fix pack, interim fix, or test fix has been applied. JavaScript files that the user interface depends on are frequently updated and installed with each release, and the browser caches these files when the user interface loads. To avoid incompatibilities and problems in the user interface, clean your browser cache so that the latest JavaScript files are loaded and used.

Remember: This maintenance release is a cumulative patch. You can apply this patch from any previous maintenance release level of V6.0.0.

Important:

The steps for installing this test fix include:

  1. Preparing to install the test fix
  2. Installing the test fix
  3. Updating the property files
  4. Modifying the database schema
  5. Verifying the installation

Step 1. Preparing to install the test fix

Before you install the test fix, make sure to complete these steps:

Important: InfoSphere MDM Server for PIM, Version 6.0.0, Fix Pack 3 onwards enforces certain database constraints. During the migration process, data that violates these constraints is moved to temporary tables and is not available to the application. Although the impact of these database constraints is expected to be minimal, a full backup of the product directories and the product database is critical so that any removed data can be restored.

Important: Starting with Version 6.0.0, fix pack 4, InfoSphere MDM Server for PIM prevents user code in certain extension points from committing or rolling back an already active transaction. Refer to Limitations on using transactions within extension points and review your usage of transactional operations from the affected extension points. Also, an exception thrown from a post-save script is now re-thrown to any user code that calls the item save operation. If your code invokes item save (which results in invocation of the post-save script), review and modify your code to handle the exception from the post-save script as intended.

Remember: If you are migrating from InfoSphere MDM Server for PIM, version 6.0.0, fix pack 3, fix pack 4, fix pack 5, interim fix 2, fix pack 6, interim fix 3, interim fix 4, or fix pack 7, you are not required to run the data verification report shell script. You can skip steps 1.1 to 1.3 and continue with step 1.4.

  1. Download the following file:
  2. Copy the script file that you downloaded in the previous step to the $TOP/bin/migration directory.
  3. Verify whether the InfoSphere MDM Server for PIM database requires any cleanup for conflicting data before the constraints are enforced.
    1. Run the following script to produce a report of all the possible violations:
      $TOP/bin/migration/constraint_data_verification_report.sh
      
    2. Review the reported violations in the generated log file, which is available at the following location:
      $TOP/logs/constraint_data_verification_report.out
      
    3. If no violations are found, proceed with the test fix installation.
    4. If violations are found, contact IBM Support for assistance and do not proceed with the test fix installation.
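
    For example, one quick way to scan the generated report for violations from the command line (a sketch only; the exact wording that the report uses may differ in your environment):

      grep -i violation $TOP/logs/constraint_data_verification_report.out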

  4. Stop the InfoSphere MDM Server for PIM application on the local server.
    1. Check the scheduler to make sure that no critical jobs are running or need to complete. If the queue is clear, stop the scheduler manually by running the following shell script:

      $TOP/bin/go/stop/stop_scheduler.sh

    2. Check the workflow engine to make sure no critical workflow events are running or need to complete by running the following shell script:

      $TOP/bin/go/workflow_status.sh

      If no events are running, shut down the workflow engine manually by running the following shell script:

      $TOP/bin/go/stop/stop_workflowengine.sh

    3. For all applications that are deployed in a cluster environment, stop all specified application servers by running the abort_local.sh shell script, which is located in the $TOP/bin/go/ directory:

      Syntax
      abort_local.sh --appservernames=CSV_file
      

      Parameter
      --appservernames
      CSV_file is the fully qualified file name of the comma-separated values (CSV) file. If you do not specify the --appservernames parameter, the abort_local.sh shell script aborts the default application server that is specified in the init_ccd_vars.sh file.

      Running the abort_local.sh shell script does not affect any of the other JVM services.
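
      For example, a hypothetical invocation that aborts the application servers listed in a CSV file (the file path is an illustration only):

      abort_local.sh --appservernames=/home/wpc/appservers.csv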

    4. Use the ps command to make sure that all processes have stopped. Stop any other Java or RMI registry processes that remain after shutting down the InfoSphere MDM Server for PIM instance. It might take several attempts to stop all Java processes, but continue stopping Java processes until they are all stopped.
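
      For example, a minimal check for remaining Java or RMI registry processes (adjust the user name and patterns for your environment):

      ps -ef | grep $USER | grep -E "java|rmiregistry" | grep -v grep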

  5. Back up your system.

    The installation will overwrite your current files with updated versions from the test fix. If any issues occur when installing the test fix, you can use this backup copy to roll back the installation.

    1. Create a full backup of all of your InfoSphere MDM Server for PIM directories.
    2. Back up the database by using native database utilities. Example database utilities include DB2(R) offline backup and Oracle Recovery Manager (RMAN).
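
    For example, a minimal sketch of the file-system and DB2 backups, assuming a hypothetical backup directory of /backup and a database named pimdb (substitute the values for your environment):

      tar -czf /backup/mdmpim_top_backup.tgz $TOP
      db2 backup database pimdb to /backup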

Step 2. Installing the test fix

To install this test fix, you must perform the following steps:

For WebLogic Application Server
  1. To extract and update any new installation files into the current working directory, perform the following steps:
    1. Copy the InfoSphere MDM Server for PIM .tar file to the user or temporary directory.

      For example: WPC_HOME/tarFileTemp

    2. Change the directory to $TOP (or the current working directory) then verify that the correct permissions exist for unpacking the .tar file by running the following commands:
      cd $TOP
      chmod -R 755 $TOP
      
    3. Unpack the .tar file.

      For example, when you use the GNU tar utility, the following command decompresses and extracts the .tgz file (specify an absolute path to the file if it is not in the current directory):

      gzip -dc 6.0.0-WS-MDMPIM-WL9_ORA-TF009-01_update_from_6000-88.tgz | tar -xvf -
      

      Important: The InfoSphere MDM Server for PIM files were packed using the GNU tar utility. For best results, use the GNU tar utility to unpack the files.

  2. Run the application server script.

    The application server shell script, update_weblogic_xml.sh, installs InfoSphere MDM Server for PIM into the WebLogic application server.

    1. Run the application server shell script, update_weblogic_xml.sh, which is located in the $TOP/bin/weblogic/ directory.

      An updated ccd.war file is created in the $TOP/etc/default/weblogic92/wpc_domain/autodeploy directory.

    2. Extract the content of this ccd.war file into the final runtime location.

      Example
      jar -xvf $TOP/etc/default/weblogic92/wpc_domain/autodeploy/ccd.war 
      

  3. Optional: If you are using the Script Workbench for InfoSphere MDM Server for PIM, when you install this test fix, the Script Workbench communication .jsp and .jar files are removed from your server configuration. To regain communication, you must reinstall the docstore_tooling.jsp and docstore_tooling.jar files. See the Script Workbench for InfoSphere MDM Server for PIM User's Guide for installation instructions.

Step 3. Updating the property files

To successfully use the fixes and enhancements in this maintenance release, you must modify the InfoSphere MDM Server for PIM configuration files.

InfoSphere MDM Server for PIM uses the following configuration files:

common.properties
During system startup, the common.properties file is used to read in all system-level parameters and is located in the $TOP/etc/default directory.

admin_properties.xml
The admin_properties.xml file is used by the administrative utilities to configure clusters of the application and is located in the $TOP/etc/default directory.

init_ccd_vars.sh
The init_ccd_vars.sh shell script initializes the shell variables that are used by the system and is located in the $TOP/setup directory.

The following table includes all InfoSphere MDM Server for PIM fix versions, along with their corresponding configuration file changes that you must apply. Depending on the InfoSphere MDM Server for PIM fix version you are migrating from, use this table to determine which configuration file changes you must apply, then view the list of configuration file changes that follow for the specific details.

Remember: If you migrate from version 6.0.0 GA, you must apply the configuration file changes defined in 1, 2, 3, 4, 5, 6, and 7 of the list of configuration file changes.

Your fix version       List numbers of the configuration file changes
6.0.0 fix pack 7       7
6.0.0 interim fix 4    6, 7
6.0.0 interim fix 3    6, 7
6.0.0 fix pack 6       6, 7
6.0.0 interim fix 2    5, 6, 7
6.0.0 fix pack 5       5, 6, 7
6.0.0 fix pack 4       4, 5, 6, 7
6.0.0 fix pack 3       3, 4, 5, 6, 7
6.0.0 fix pack 2       3, 4, 5, 6, 7
6.0.0 fix pack 1       2, 3, 4, 5, 6, 7
6.0.0 interim fix 1    2, 3, 4, 5, 6, 7
6.0.0 GA               1, 2, 3, 4, 5, 6, 7

List of configuration file changes

  1. New Properties introduced in version 6.0.0 fix pack 1:

    Add the following properties and description by copying and pasting the text to the end of your common.properties file:

    #Default dictionary language for Spell Checker
    #possible values:: en_US/en_CA/en_GB [en_US for English(United States), 
    #en_CA for English(Canada) and en_GB for English(United Kingdom).]
    #If this value is not set then en_US will be taken as default dictionary language.
    spell_default_locale=
     
    # set to true in order to activate filtering of non indexed attributes from
    # the rich search page default view.
    # This will not affect custom templates.
    rich_search_default_view_indexed_only=false
     
    # Controls whether to use the new attribute filter for improved performance
    # true = use the new filter
    # false = use the old filter
    medit_use_new_header_atr_filter=true
     
    # set to true to activate prefetch of lock information for items in the multiedit page.
    enable_medit_lock_prefetch=true
    

    Updated Property description

    An additional warning has been added in the configuration file common.properties for the parameter must_save_before_paging_entries. It is recommended that you set this parameter to true. Setting this parameter to false may cause significant performance degradation. The corresponding section in common.properties has changed.

    Section of common.properties in InfoSphere MDM Server for PIM, Version 6.0.0, fix pack 1:

    # false -> (old behavior) no saving required
    

    Same section of common.properties in InfoSphere MDM Server for PIM, Version 6.0.0, fix pack 2:

    # false -> (old behavior) no saving required WARNING: this may introduce a significant
    # performance degradation depending on how many items exist in the workflow step.
    
  2. Updated Property description introduced in version 6.0.0 fix pack 2
    Additional dictionary support has been made available. The corresponding section in the configuration file common.properties has changed.

    Section of common.properties in InfoSphere MDM Server for PIM, Version 6.0.0, fix pack 1:

    #Default dictionary language for Spell Checker
    #possible values:: en_US/en_CA/en_GB [en_US for English(United States),
    #en_CA for English(Canada) and en_GB for English(United Kingdom).]
    #If this value is not set then en_US will be taken as default dictionary language.
    

    Same section of common.properties in InfoSphere MDM Server for PIM, Version 6.0.0, fix pack 2:

    #Default dictionary language for Spell Checker
    #possible values:: en_US/en_CA/en_GB [en_US for English(United States),
    #en_CA for English(Canada) and en_GB for English(United Kingdom).]
    #Other possible values are:: es_ES/fr_FR/it_IT/pt_BR [es_ES for Spanish, fr_FR for French, it_IT for Italian and pt_BR for Portuguese (Brazilian)]
    #To enable dictionary for the above locales, required libraries must be present. Refer documentation for more details.
    #If this value is not set then en_US will be taken as default dictionary language.
    

  3. Updated Property description introduced in version 6.0.0 fix pack 4
    The description of property entrypreview_refresh_entries_post_run has been updated.

    From:

    # Controls if the entry edit page is refreshed after an entry preview popup is closed
    # true  = will refresh entry edit page when the popup is closed
    # false = do not refresh entry edit page
    

    To:

    # Controls if the entry edit page is refreshed after an entry preview popup is closed
    # true = will refresh entry edit page when the popup is closed only when hard coded
    # attribute value is passed for implicit entry object in the script operations for setting entry attributes.
    # false = will improve performance of Entry Preview Script and do not refresh entry edit page
    
  4. Updated Property description introduced in version 6.0.0 fix pack 5
    The description of property reset_schedule_when_enabled has been updated.

    From:

    # Set this property to true if you want a disabled schedule to be reset when it is enabled.
    # Resetting would mean that when the schedule is enabled the next running time would be set for the schedule before it is enabled.
    

    To:

    # Set this property to true if you want a re-enabled job to be reset to its next scheduled time.
    # If this property is set to true, the job's schedule is reset to run at its scheduled time
    # intervals starting with the next one that comes after the current time.
    # For example, consider a disabled hourly job originally scheduled to run at 15 min past every hour that is now enabled at 20 min past the hour.
    # If this property is set to false, the job runs immediately and then the next running time is set to 15 min past the next hour.
    # If this property is set to true, the next running time is set to 15 min past the next hour and the job will run at 15 min past the next hour.
    reset_schedule_when_enabled=false
    
  5. Updated Properties in 6.0.0-FP6

    Remove the following property and description from the common.properties file:

    # Controls whether to use the new attribute filter for 
    # improved performance
    # true = use the new filter
    # false = use the old filter
    medit_use_new_header_atr_filter=true
    


    Update the default value of the property enable_memorymonitor:

    # Enable memory monitoring of session usage 
    enable_memorymonitor=false
    
  6. New Properties introduced in 6.0.0-FP7

    Add the following property and description by copying and pasting the text to the end of your common.properties file:

    # Specify granularity of master locking of entries in a workflow event's entrySet (checkout, interimCheckin, reserve, checkin, and drop events only).
    # Possible values: all_entries (must obtain master locks on all entries, at the outset of event processing.
    # single_entry (will "explode" events of those five kinds, into a stream of one-entry events, each processed in its own transaction.
    # If no value is specified, a default value of "all_entries" will be used.
    workflow_entry_locking_granularity=all_entries
     
    # Specify the destination next step, for the failing entries of interimCheckin and checkin events (if and when there ever are such failed entries).
    # Possible values: default (similar to traditional product behavior - entries failng interimCheckin go to its specified next step, 
    # entries failing final checkin go to the Fixit step.)
    # stay_in_step (failing entries "stay where they are": for interimCheckin, they stay there;  for final checkin, they stay in the Success step.)
    # If no value is specified, a default value of "default" will be used.
    checkin_failure_behavior=default
    
  7. New Properties introduced in 6.0.0-Test Fix 9

    Add the following property and description by copying and pasting the text to the end of your common.properties file:

    #  For "Is empty" (null) searches on non-indexed multi-occurring attributes, 
    # choose "same as indexed" behavior, or original way.
    nonindexed_search_like_indexed=false
    
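
    For example, one way to back up common.properties and append the new property from a shell (a sketch; the .bak file name is an illustration, and the paths assume the default $TOP layout):

      cp $TOP/etc/default/common.properties $TOP/etc/default/common.properties.bak
      echo '# For "Is empty" (null) searches on non-indexed multi-occurring attributes,' >> $TOP/etc/default/common.properties
      echo '# choose "same as indexed" behavior, or original way.' >> $TOP/etc/default/common.properties
      echo 'nonindexed_search_like_indexed=false' >> $TOP/etc/default/common.properties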

Step 4. Modifying the database schema

Several changes are made to the database schema in each fix pack release of InfoSphere MDM Server for PIM version 6.0.0; therefore, you must run a migration shell script to migrate to the database schema of InfoSphere MDM Server for PIM, version 6.0.0, Test Fix 9.

Remember: If you are migrating from InfoSphere MDM Server for PIM, Version 6.0.0, Fix Pack 7, or Test Fix 8, you are not required to run the migration shell script. You can skip this step and continue with the next step, Step 5. Verifying the installation.

To migrate your database schema:

  1. Ensure that you have stopped the InfoSphere MDM Server for PIM application on the local server.
  2. Identify the version of fix pack that you are migrating from.
    1. If you are migrating from version 6.0.0, then run the migration shell script migrateToInstalledFP.sh that is located in the $TOP/bin/migration/ directory.

      Syntax
      migrateToInstalledFP.sh --fromversion=BASE|FP1|IF1|FP2|FP3|FP4|FP5|IF2|FP6|IF3|IF4
      

      Parameter
      --fromversion
      BASE, FP1, IF1, FP2, FP3, FP4, FP5, IF2, FP6, IF3, IF4 correspond to the fix pack you are migrating from. For example, you must use BASE if you have never installed any fix pack, interim fix, or test fix over InfoSphere MDM Server for PIM, version 6.0.0.

      If you are migrating from a Test Fix version, use the following table to determine which Fix Pack the Test Fix was built on. In the migration shell script, use the Fix Pack version that corresponds to the Test Fix version you are migrating from.

      Test fix version    Fix pack version
      TF1                 FP4
      TF2                 IF2
      TF3                 IF2
      TF4                 FP6
      TF5                 FP6
      TF6                 FP6
      TF7                 FP6
      TF8                 FP7

      Fix pack migration example:
      In this example the migration shell script will migrate from FP2:
      $TOP/bin/migration/migrateToInstalledFP.sh --fromversion=FP2
      

      Test fix migration example:
      In this example, the system is at the TF3 level, which was built on IF2, so the migration shell script migrates from IF2:
      $TOP/bin/migration/migrateToInstalledFP.sh --fromversion=IF2
      

      Interim fix migration example:
      In this example the migration shell script will migrate from IF1:
      $TOP/bin/migration/migrateToInstalledFP.sh --fromversion=IF1
      

      Limited availability patch example:
      If you are migrating from a limited availability patch (LA), use the fix pack version that corresponds to the LA version you are migrating from. For example, the following command migrates from LA1:
      $TOP/bin/migration/migrateToInstalledFP.sh --fromversion=IF1
      
  3. Verify that the database schema migration was successful:
    1. Open your verify.log file, which is located in the $TOP/logs directory.
    2. Compare the content of your verify.log file against the expected log file that corresponds to the fix pack you migrated from (see the following table and the example after it).

    The following log files include the expected log output for a successful migration based on both the fix pack you migrated from and the database you are using.

    Fix pack The expected log file output for Oracle databases The expected log file output for DB2 databases
    version 6.0.0 BASEtoFP7Oracle.log BASEtoFP7DB2.log
    version 6.0.0, fix pack 1, interim fix 1, and fix pack 2 FP1_FP2toFP7Oracle.log FP1_FP2toFP7DB2.log
    version 6.0.0, fix pack 3, and fix pack 4 FP3_FP4toFP7Oracle.log FP3_FP4toFP7DB2.log
    version 6.0.0, fix pack 5, and interim fix 2 FP5_IF2toFP7Oracle.log FP5_IF2toFP7DB2.log
    version 6.0.0, fix pack 6, interim fix 3, and interim fix 4 FP6_IF3_IF4toFP7Oracle.log FP6_IF3_IF4toFP7DB2.log
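
    For example, assuming that you have downloaded the expected log file for your migration path to a temporary directory (the /tmp location shown is hypothetical), you could compare the files as follows:

      diff $TOP/logs/verify.log /tmp/FP5_IF2toFP7Oracle.log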

  4. DB2 database only:
    1. Run the analyze_schema.sh shell script.

      This optimizer script uses the catalog tables from a database to obtain information about the database, including the amount of data within the database and other characteristics. The DB2 optimizer uses this information to determine how to access the data. If current database statistics are not available, the optimizer might choose an inefficient access plan that is based on the default statistics, which are inaccurate.

      Use the analyze_schema.sh shell script to collect current statistics for the tables and indexes, especially if significant update activity has occurred since the last time you ran the analyze_schema.sh shell script.

      To run the analyze_schema.sh shell script in DB2, refer to the InfoSphere MDM Server for PIM technote: Analyzing WebSphere Product Center schema and collecting statistics in DB2 located at: http://www-1.ibm.com/support/docview.wss?uid=swg21205939.

      Important: Run the analyze_schema.sh shell script on your InfoSphere MDM Server for PIM databases at least once a week or when there has been at least a twenty percent increase or change in data on the database.

      Tip: Oracle versions 10g and 11g have an automatic statistics gathering job enabled by default.

    2. Update the DB2 temporary table space page size settings for InfoSphere MDM Server for PIM installations that use DB2 as the backend database.
      1. Shut down InfoSphere MDM Server for PIM.
      2. Back up the DB2 database.
      3. Connect to the DB2 database that is used by InfoSphere MDM Server for PIM by using the DB2 admin user ID.
      4. Run the following DB2 commands and ensure that they complete successfully. Replace <db container path> in the following statements with the correct file system path for your environment.
        • db2 drop tablespace temp_user
        • db2 drop tablespace temp_system
        • db2 drop bufferpool tempusrbp
        • db2 drop bufferpool tempsysbp
        • db2 "CREATE BUFFERPOOL TEMPUSRBP SIZE AUTOMATIC PAGESIZE 32K"
        • db2 "CREATE BUFFERPOOL TEMPSYSBP SIZE AUTOMATIC PAGESIZE 32K"
        • db2 "CREATE USER TEMPORARY TABLESPACE TEMP_USER PAGESIZE 32K MANAGED BY SYSTEM USING ('<db container path>') EXTENTSIZE 32 PREFETCHSIZE 32 BUFFERPOOL TEMPUSRBP"
        • db2 "CREATE SYSTEM TEMPORARY TABLESPACE TEMP_SYSTEM PAGESIZE 32K MANAGED BY SYSTEM USING ('<db container path>') EXTENTSIZE 32 PREFETCHSIZE 32 BUFFERPOOL TEMPSYSBP"
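
        For example, with a hypothetical database name of pimdb, a DB2 admin user of db2admin, and a container path of /db2/pim/tempspace (substitute the values for your environment), the connection and one of the CREATE statements would look like this:

        db2 connect to pimdb user db2admin
        db2 "CREATE USER TEMPORARY TABLESPACE TEMP_USER PAGESIZE 32K MANAGED BY SYSTEM USING ('/db2/pim/tempspace') EXTENTSIZE 32 PREFETCHSIZE 32 BUFFERPOOL TEMPUSRBP"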

  5. Optional: If you have migrated from Version 5.3.0 or 5.3.1, run the Index Regeneration Capability utility, indexRegenerator.sh, from the $TOP/bin directory, to correct or enable the new Rich Search Option feature of version 5.3.2.

    Syntax

    indexRegenerator.sh --company=COMPANY_NAME RUN-OPTIONS [TUNING-OPTIONS]

    Only one RUN-OPTIONS combination can be used at a time. However, zero or more TUNING-OPTIONS may be used together.

    RUN-OPTIONS include

    Full Container:

    --catalog=CATALOG_NAME
    --hierarchy=HIERARCHY_NAME
    

    Container subset:

    --items=FULL_PATH_CSV_FILE (2 columns: PK, CATALOG_NAME)
    --categories=FULL_PATH_CSV_FILE (2 columns: PK, HIERARCHY_NAME)
    

    Generate PK files for multiple machines (numFiles - default is 1)

    --catalog=NAME --items=FULL_PATH_TO_DESIRED_FILE --numFiles=NUMBER_FILES
    --hierarchy=NAME --categories=FULL_PATH_TO_DESIRED_FILE --numFiles=NUMBER_FILES
    

    TUNING-OPTIONS include
    --nodePaths=FULL_PATH_TO_NODES_SEPARATED_BY_COLONS speed performance by
     specifying node paths, default is all paths. example:
     --nodePaths="SpecName/Node1:SpecName/Node2"
     
    --lockContainer=[YES|NO] speed performance by locking container, 
    disadvantage is this locks out other users; default is true
     
    --threads=NUMBER_THREADS speed performance by using more than one thread,
     but be careful enough DB connections exist! default is 1
    

    Example
    In this example, the Index Regeneration Capability utility regenerates the index for the items that are listed in the CSV file $TOP/item-list.csv, in the company named test_Co, by using 2 threads.
    $TOP/bin/indexRegenerator.sh --company=test_Co --items=$TOP/item-list.csv --threads=2
    

    Parameters that contain spaces and special characters must be enclosed in escaped quotes (\"). Also, special characters must be escaped with a backslash (\).

    If you specify more than one file, the file number is placed before the file extension. For example, items.csv becomes items-1.csv, items-2.csv, and so on.

    Restriction: Catalog and hierarchy arguments cannot be combined. For example, --catalog and --hierarchy cannot be used together, nor can --catalog and --categories. When arguments --catalog and --items are used together, only PK files are generated and no index regeneration is performed.
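
    Another example (a sketch that uses the options described above, with a hypothetical catalog name and file path): to generate PK files that split the items of a catalog named MyCatalog across three machines, without regenerating any indexes, you could run:

    $TOP/bin/indexRegenerator.sh --company=test_Co --catalog=MyCatalog --items=$TOP/mycatalog-items.csv --numFiles=3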

Step 5. Verifying the installation

Every time that you install a test fix, verify that the installation was successful. To verify the installation of this test fix, complete these steps:

  1. Start InfoSphere MDM Server for PIM:

    The shell script, start_local.sh, located in the $TOP/bin/go/ directory, starts all the services that you need to run InfoSphere MDM Server for PIM.

    1. Run the start_local.sh shell script:
      • For the WebSphere(R) Application Server platform, run:
        start_local.sh --redeploy=yes
        

        The --redeploy=yes parameter ensures that all Web services are properly redeployed.

      • For the WebLogic Server platform, run:
        start_local.sh
        

        You are not required to use the --redeploy=yes parameter for WebLogic Server.

    2. Run the start_local.sh shell script to start your application servers:

      The start_local.sh shell script also supports starting multiple application servers from one individual InfoSphere MDM Server for PIM instance.

      Syntax
      start_local.sh --appservername=appserver_Name
      

      Parameters
      --appservername
      appserver_Name specifies the application server. If it is not specified, the default application server, which is specified in the init_ccd_vars.sh file, is used.

      Multiple application servers can be specified by listing each application server separated by a comma (see the example below).

      Example
      In this example, WebSphere Application Server is the platform and the start_local.sh shell script is started on a host called wpcserver that has wpc_server as the defined value for WAS_APPSERVERNAME.

      If these two application servers were specified during installation:

      $TOP/bin/websphere/install_war.sh --svc_name=appsvr1_WPCSERVER --appservername=wpc_server1 --conf_appsvr_port=9188
       
      $TOP/bin/websphere/install_war.sh --svc_name=appsvr2_WPCSERVER --appservername=wpc_server2 --conf_appsvr_port=9388
      

      Then you can start both application servers by executing the following shell script:

      $TOP/bin/go/start_local.sh --appservernames=wpc_server1,wpc_server2
      

      The resulting application servers will start:

      • wpc_server1, with RMI name appsvr1_WPCSERVER.
      • wpc_server2, with RMI name appsvr2_WPCSERVER.

    This process should take only approximately 30 to 40 seconds, depending on the speed of your processor.

  2. Verify that all InfoSphere MDM Server for PIM JVM services have started.

    Run the $TOP/bin/go/rmi_status.sh script and verify that the following services have started correctly:

  3. Verify the InfoSphere MDM Server for PIM installation by checking the version that the installed InfoSphere MDM Server for PIM displays:

Known problems

If the spec ordering of an item in the tabbed view of a workflow is not in the expected order, refresh the collaboration area to view the specs in the expected order. A fix for this issue will be provided in a subsequent maintenance release.

Known problems are documented in the form of individual technotes in the Support knowledge base at the InfoSphere MDM Server for PIM Support site. As problems are discovered and resolved, the IBM Support team updates the knowledge base. By searching the knowledge base, you can quickly find workarounds or solutions to problems.

The following link launches a customized query of the live Support knowledge base for all published technotes for InfoSphere MDM Server for PIM: View all known problems for InfoSphere MDM Server for PIM

You can search for keywords within this complete list of technotes.