Test and Performance Platform v3.3.0 - Release notes


1.0 Known problems and limitations
1.1 Generic Log Adapter
1.1.1 Problems running Generic Log Adapter rules using IBM's Java Runtime Environment (JRE) v1.4.1
1.1.2 Importing a log file from a remote z/OS system may result in incomplete data
1.1.3 Continuously parsing a log file with a footer results in missing records
1.1.4 Some error messages are duplicated in the Problems view of the GLA Configuration Editor
1.1.5 Generic Log Adapter does not support creating rules to parse multiple timestamp formats
1.1.6 Formatter errors in Problems view when new GLA adapter file is run in the Generic Log Adapter perspective
1.1.7 HTTP Server access log rules parser does not parse some records correctly
1.2 Agent Controller
1.2.1 Console text is garbled when profiling a Java application on a DBCS system
1.2.2 Agent Controller file copy on HP 11i does not work
1.2.3 Agent Controller reports "sh: sysdef: not found" error on Solaris
1.2.4 Agent Controller running with a Sun JVM on Linux enters an infinite loop
1.2.5 Multiple instances of Agent Controller on one machine not permitted
1.2.6 FileNotFoundExceptions not reported by file transfer engine when files on remote server cannot be found
1.2.7 Running the Agent Controller under secured mode on iSeries
1.2.8 Data not collected when monitoring multiple agents simultaneously
1.2.9 Segmentation violation when shutting down the Agent Controller
1.2.10 "Out Of Memory" error when profiling applications
1.2.11 Data collected by agent does not reach the client
1.2.12 Unsuccessful termination of an agent that is running in a process with multiple agents
1.2.13 Request peer monitoring does not work on EBCDIC platforms
1.3 Log and Trace Analyzer
1.3.1 Continuous log monitoring is not supported for localhost
1.3.2 Logging examples readme does not open
1.3.3 Remote Log Import with a filter does not work when Agent Controller is started incorrectly
1.3.4 Remote log import process remains in "live" state when Agent Controller not started
1.3.5 Importing some HTTP Server access logs may result in a "String index out of range" error
1.3.6 Unreadable data in some events when importing Microsoft Windows System event log on DBCS system
1.3.7 NullPointerException when importing an empty log
1.3.8 Importing Windows Application event log generates Common Base Event formatting errors
1.3.9 Log import from a remote HP-UX system hangs when an invalid log file name is specified
1.4 Probekit
1.5 Profiling Tool
1.5.1 Problem with garbage collection when using IBM JDK 1.4.1
1.5.2 With Sun JDK, some method calls are not traced
1.5.3 Profiling on Solaris using the Sun JDK 1.4.x or on HP using the HP JDK 1.4.x may cause JVM to crash
1.5.4 Potential crash when running in standalone mode with STACK_INFORMATION=contiguous on Solaris
1.5.5 Negative timeout values for WAIT and WAITED events
1.5.6 Incorrect monitor dumps with IBM JDK 1.4.2
1.5.7 Method counts incorrect with JIT Inlining
1.5.8 Method level CPU time statistics limitations on AIX and Solaris
1.5.9 Profiling to an existing profile file fails on Linux
1.5.10 Importing profile files generated from headless profiling
1.5.11 Duplicate filter views are displayed after workbench is closed abnormally
1.5.12 Free up memory action may fail silently
1.5.13 Incorrect agent options are sent when Execution History > Full Graphical Details is selected without editing
1.5.14 Import profile file with package level filtering shows empty view
1.5.15 Profiling mode shows more data than expected
1.6 Statistical Console
1.7 Test
1.7.1 Common Test Issues
1.7.1.1 JUnit, Manual, and URL Tests do not work on iSeries
1.7.1.2 Datapool Access
1.7.2 URL Test
1.7.2.1 Executing URL Tests as JUnit tests
1.7.2.2 Executing the URL Test Sample
 

1.0 Known problems and limitations

1.1 Generic Log Adapter

1.1.1 Problems running Generic Log Adapter rules using IBM's Java Runtime Environment (JRE) v1.4.1

The IBM JDK 1.4.1 releases that shipped in 2003 cause problems in the rules-based Apache access log parser.

Service Release 2 (SR2) or later of IBM's Java Runtime Environment (JRE) v1.4.1 is required to use the Generic Log Adapter or to import log files with a rules-based log file parser.

1.1.2 Importing a log file from a remote z/OS system may result in incomplete data

Bugzilla defect: 80730

Importing a log file using Log and Trace Analyzer from a remote z/OS system may result in incomplete data shown in the Log View. The import operation may stop prematurely and not all of the log records are shown in the Log View. This problem occurs when one of the following IBM JDK versions is installed on the z/OS system:

This problem is fixed in IBM JDK 1.4.2 with PTF UK00802.  Upgrade the JDK to that version or a later version.  If you cannot upgrade the JDK version, to work around the problem, change the configuration of the Agent Controller on the z/OS system by doing the following steps:

  1. Edit the file plugins/org.eclipse.hyades.logging.parsers/config/pluginconfig.xml in the Agent Controller install directory.
  2. Add a new Parameter to the RemoteLogParserLoader Application element after the java.version parameter. For example:
    <Parameter position="prepend" value="-Djava.version=1.4"/>
    <Parameter position="prepend" value="-Djava.compiler=NONE"/>
    <Parameter position="append" value="&quot;config_path=%GLA_CONFIG_PATH%&quot;"/>
  3. Restart Agent Controller.
  4. Import the log file again.

1.1.3 Continuously parsing a log file with a footer results in missing records

Bugzilla defect: 97974

Continuously parsing a log file that contains a footer section sometimes results in records missing from the parsed output. Specifically, when a log file is appended with new records, the first of the appended records is not parsed and is not included in the parsed output. This problem occurs when the context instance is configured with continuousOperation="true" in the adapter configuration file and the log file contains a footer section. To work around this problem, parse the log file once by configuring the context instance with continuousOperation="false".

1.1.4 Some error messages are duplicated in the Problems view of the GLA Configuration Editor

Bugzilla defect: 101184

Some error messages are shown multiple times in the Problems view of GLA Configuration Editor. The Problems view is not always cleared of existing messages before the adapter configuration file is executed by clicking the Rerun adapter... button. Modifying and saving the file will clear the Problems view and show any adapter configuration validation errors.

1.1.5 Generic Log Adapter does not support creating rules to parse multiple timestamp formats

The Generic Log Adapter does not support parsing log files that have locale-sensitive timestamp formats with a single rules-based adapter configuration file. If an application generates log files whose timestamp formats depend on the locale in which they are generated, these logs cannot be parsed with a single rules-based adapter. For example, if the date format is MM/dd/yy in log files generated on en_US systems, yy/MM/dd in log files generated on ja_JP systems, and dd.MM.yy in log files generated on de_DE systems, then a separate adapter configuration file is required for each log file, each with a parsing rule that uses the correct timestamp format for that locale.
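To illustrate why a single parsing rule cannot cover these logs, the sketch below (plain Java, not part of GLA) parses the same date with each locale's pattern; the three strings denote the same date but require three different patterns:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class TimestampFormats {
    public static void main(String[] args) throws ParseException {
        // One pattern per locale-specific log format; a single rules-based
        // adapter configuration file carries only one timestamp pattern.
        SimpleDateFormat enUs = new SimpleDateFormat("MM/dd/yy"); // en_US logs
        SimpleDateFormat jaJp = new SimpleDateFormat("yy/MM/dd"); // ja_JP logs
        SimpleDateFormat deDe = new SimpleDateFormat("dd.MM.yy"); // de_DE logs

        // All three strings represent the same date, 9 February 2005,
        // written in three different locale-specific formats.
        System.out.println(enUs.parse("02/09/05").equals(jaJp.parse("05/02/09"))); // prints "true"
        System.out.println(enUs.parse("02/09/05").equals(deDe.parse("09.02.05"))); // prints "true"
    }
}
```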

1.1.6 Formatter errors in Problems view when new GLA adapter file is run in the Generic Log Adapter perspective

The Problems view in the Generic Log Adapter perspective shows the following error when you try to execute a new GLA adapter file by clicking the "Rerun adapter..." button:

IWAT0438E Common Base Event formatter N76D20B0042411D98000E0362B33D6F0 cannot create a CommmonBaseEvent because required property sourceComponentId is missing.

This message indicates that the formatter component of GLA could not create a Common Base Event because sourceComponentId, a required Common Base Event property, is missing. To work around this problem, add parser rules to the adapter file for the sourceComponentId attributes. Note that the situation property is also a required Common Base Event property; to avoid similar errors, add parser rules for the situation property as well. This ensures that the GLA creates CommonBaseEvents that contain all of the required properties.

1.1.7 HTTP Server access log rules parser does not parse some records correctly

Bugzilla defect: 101545

HTTP Server access log rules parser does not parse the following records correctly:

9.26.5.6 - - [09/Feb/2005:17:07:53 -0500] "VERSION" 501 -
9.26.5.6 - - [09/Feb/2005:17:14:52 -0500] "GET_CONFIG\r" 501 -
9.26.5.6 - - [09/Feb/2005:17:15:00 -0500] "< NSP/0.2 >" 400 299
9.26.5.6 - - [09/Feb/2005:17:22:40 -0500] "\x16\x03\x01" 501 -

The severity is not parsed correctly for the first two records and the last record. Some of the other record data is not captured in extended data elements correctly.

1.2 Agent Controller

1.2.1 Console text is garbled when profiling a Java application on a DBCS system

When profiling a remote Java application within Eclipse on a DBCS (e.g. Traditional Chinese, Simplified Chinese, Japanese, Korean) system, the console output is displayed as garbled text. This problem may happen on any platform.

To work around this problem, add the Java VM argument -Dconsole.encoding=<native encoding> when launching the remote Java application. This ensures proper encoding when transferring the console output from the remote side back to the Eclipse workbench. To determine <native encoding> on Windows, open a command prompt and run the command chcp. For example, if the result is 950, the value of <native encoding> is MS950 and the Java VM argument is -Dconsole.encoding=MS950. For a list of valid encodings, refer to Sun's Java documentation, "Supported Encodings", under the section "Internationalization".
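As a cross-check, you can also query the JVM's default charset, which normally reflects the platform's native encoding (a sketch in plain Java; note that on Windows the console code page reported by chcp can differ from the JVM default, so chcp remains the authoritative check for console output):

```java
import java.nio.charset.Charset;

public class NativeEncoding {
    public static void main(String[] args) {
        // The default charset generally matches the platform's native
        // encoding; on a Traditional Chinese Windows system this is
        // typically MS950, the value to pass as -Dconsole.encoding=MS950.
        System.out.println(Charset.defaultCharset().name());
        System.out.println(System.getProperty("file.encoding"));
    }
}
```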

1.2.2 Agent Controller file copy does not work on HP 11i

The file copy does not work because the file server fails to start: the JVM library libjvm.sl is not loaded at run time, which in turn prevents the file server from running.

To work around this, the linker patch version PHSS_30049 or higher is required. The linker version from patch 30049 is as follows:

/bin/ld:
        $Revision: 1.1 $
        HP aC++ B3910B X.03.37.01 Classic Iostream Library
        HP aC++ B3910B X.03.37.01 Language Support Library
        ld_msgs.cat: $Revision: 1.1 $
        92453-07 linker command s800.sgs ld PA64 B.11.38 REL 031217

To check the version number, run: what /bin/ld

To list the installed patches, run: swlist -l fileset

Grep the output for "ld" to find the version number of the cumulative ld and linker tools patch.

1.2.3 Agent Controller reports "sh: sysdef: not found" error on Solaris

The Agent Controller uses the sysdef command to obtain the maximum size for a shared memory buffer on your system. If the Agent Controller is unable to run sysdef, it will use dataChannelSize="30M" specified in the <RAServer>/plugins/org.eclipse.hyades.datacollection/pluginconfig.xml file. The following error will be reported on the console where the RAServer.exe was launched:

sh: sysdef: not found

To work around this problem, add the /usr/sbin directory, which contains sysdef, to the PATH variable.

1.2.4 Agent Controller running with a Sun JVM on Linux enters an infinite loop

When running the Agent Controller on a Linux machine with a Sun 1.4.2_04 JVM, the engine enters into an infinite loop. The following messages are logged to the servicelog.log with the last three lines repeated continuously until a kill command is issued to stop the RAServer process:
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" text="Service starting"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" 
            text="Successfully loaded plugin: org.eclipse.hyades.datacollection"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" 
            text="Successfully loaded plugin: org.eclipse.hyades.logging.parsers"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" 
            text="Successfully loaded plugin: org.eclipse.hyades.test"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" 
            text="Active configuration set to: default"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" 
            text="Loaded configuration: default"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="INFORMATION" 
            text="Service started successfully"/>  
<SERVER_MSG time="2004:6:3:17:42:49" severity="WARNING" text="Server stopping"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="WARNING" text="Internal server closed"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="WARNING" text="External server closed"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="WARNING" text="Server stopping"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="WARNING" text="Internal server closed"/>
<SERVER_MSG time="2004:6:3:17:42:49" severity="WARNING" text="External server closed"/>
To work around this problem, set LD_LIBRARY_PATH to point to all of the JVM's .so files before starting Agent Controller. For example, before running RAServer, issue this command:

export LD_LIBRARY_PATH=/opt/j2sdk1.4.2_04/jre/lib/i386/server:/opt/j2sdk1.4.2_04/jre/lib/i386

1.2.5 Multiple instances of Agent Controller on one machine not permitted

Only one instance of the Agent Controller may be installed on a machine. This means that if you have installed the engine, or an extended version of the engine, with another product, you will have to uninstall that instance for a new instance to work correctly. For example, some IBM WebSphere Studio and IBM Rational products, and the Autonomic Computing Toolkit from developerWorks, include optional installations of the Agent Controller.

1.2.6 FileNotFoundExceptions not reported by file transfer engine when files on remote server cannot be found

The file transfer protocol does not report a FileNotFoundException when you attempt a GET operation on a nonexistent file from a remote file server. Instead, you are notified of a successful transfer of a file with size 0. If a file of size 0 is returned after a GET operation, it is either because the file does not exist on the remote server or because it exists and is size 0. Currently, the transfer protocol does not differentiate between these two possibilities.
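Because the protocol cannot distinguish a missing remote file from an empty one, client code should treat a zero-length transfer result as ambiguous. A minimal sketch (the helper below is hypothetical, not a TPTP API):

```java
import java.io.File;
import java.io.IOException;

public class TransferCheck {
    // Hypothetical helper: a zero-length local file after a GET may mean
    // the remote file was empty, or that it did not exist at all. The
    // ambiguity can only be resolved out of band, for example by listing
    // the remote directory.
    static boolean isAmbiguousResult(File transferred) {
        return transferred.exists() && transferred.length() == 0;
    }

    public static void main(String[] args) throws IOException {
        File local = File.createTempFile("transfer", ".tmp"); // empty file
        local.deleteOnExit();
        System.out.println(isAmbiguousResult(local)); // prints "true"
    }
}
```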

1.2.7 Running the Agent Controller under secured mode on iSeries

Running the Agent Controller under secured mode on iSeries requires special account authorities. The user account used to start the Agent Controller job "RASTART" must have the special authorities *SECADM and *ALLOBJ. You may need to add these authorities by updating the user profile with the "WRKUSRPRF" command.

1.2.8 Data not collected when monitoring multiple agents simultaneously

Sometimes when monitoring two or more agents associated with a single process simultaneously, no data is collected for one of the agents. The data channel for one of the agents fails to initialize properly, so no data can be returned to the client from that agent.

To work around this problem, monitor only one agent for a process at a time.

1.2.9 Segmentation violation when shutting down the Agent Controller

Bugzilla defect: 99788

When shutting down the Agent Controller, a segmentation violation is reported. Other than interrupting the display, there are no other effects, and no action is required. This segmentation violation was reported on Red Hat Enterprise Linux 3.0 update 4.

1.2.10 "Out Of Memory" error when profiling applications

Bugzilla defect: 57786

An "Out Of Memory" error may be issued by the JVM if the JVM arguments -Xmxnnn and -XrunpiAgent are specified when starting the application and the application is attached and monitored with the Profiling and Logging perspective of TPTP. The dataChannelSize attribute setting for the Java Profiling Agent in the Agent Controller configuration may affect the amount of memory available to the JVM which can cause an "Out Of Memory" error. To work around this problem, reduce the -Xmx value or the dataChannelSize value for the Java Profiling Agent or both.

1.2.11 Data collected by agent does not reach the client

Bugzilla defect: 73668

Sometimes when an agent collects data, the data is not sent to the client that is monitoring that agent. The following CommonBaseEvent message in the Agent Controller servicelog.log file shows the cause of the problem:

msg="Shared memory allocation failure: -518"

The shared memory buffer used as the data channel for sending data from the agent to the Agent Controller cannot be allocated. Shared memory buffer names are reused when Agent Controller is restarted. Sometimes the shared memory buffers are not cleaned up completely by the system after a previous use. When attempting to allocate a buffer with a name that was not previously cleaned up, the allocation fails. To work around this problem, perform the monitoring operation again to use a different shared memory buffer name.

1.2.12 Unsuccessful termination of an agent that is running in a process with multiple agents

Bugzilla defect: 100870

When you try to terminate an agent running in a process that has more than one agent, the agent is terminated successfully but the status of the process remains unterminated. Repeated attempts to terminate the agent will also be unsuccessful in this case.

To work around the problem, terminate at the process level instead of the agent level.

1.2.13 Request peer monitoring does not work on EBCDIC platforms

Request peer monitoring does not work on EBCDIC platforms. There is currently no workaround for TPTP 3.3. This limitation has been removed in TPTP 4.0.

1.3 Log and Trace Analyzer

1.3.1 Continuous log monitoring is not supported for localhost

Log and Trace Analyzer does not support continuous log monitoring via localhost. However, you can continuously monitor local log files through the loopback interface (127.0.0.1), which simulates a remote import with a local log file. In this case the logging agent can be terminated at any time to avoid hanging the UI.

To import or continuously monitor via loopback, Agent Controller must be started (this is not necessary when importing from localhost).

1.3.2 Logging examples readme does not open

When you create a logging sample project (File > New > Example), a readme file should open in your system browser. However, if the Workbench's file association preferences have not been set correctly, the file may not open.

To fix this problem, go to the File Association preferences page by selecting Window > Preferences and then selecting Workbench > File Associations. In the File types list, select .html. In the Associated editors list, click Add. Select the External Programs button and then select your default browser. Click OK. Click OK to apply the new preference.

1.3.3 Remote Log Import with a filter does not work when Agent Controller is started incorrectly

Bugzilla defect 95615

A request to import a log file from a non-Windows system with a filter specified results in the following message being displayed when the Agent Controller is started incorrectly:

"An error occurred while attempting to import the log file /home/user/app.log.
Reason: [Ljava.lang.StackTraceElement;@538c718"

The following exception is thrown as a result of this error and is logged to the .log file. Finding this exception in the .log file is also indicative of the Agent Controller being started incorrectly:

org.eclipse.hyades.internal.execution.core.file.ServerNotAvailableException: 
     java.net.ConnectException: Connection refused: connect

Ensure that the directories of the JRE that contain executable libraries such as libjvm.so are added to the appropriate library path environment variable for the system before starting Agent Controller.  Refer to the getting_started.html file located in the Agent Controller installation directory for more details.

1.3.4 Remote log import process remains in "live" state when Agent Controller not started

Bugzilla defect 100084

When attempting to import a remote log when Agent Controller is not running on the remote system, a "Connection failed ..." error message is displayed but the log import process listed under Logs in the Log Navigator pane is still marked as "live" when in fact the process has completed. To work around this problem, start Agent Controller on the remote system and try to import the same log again with the same Destination configuration. The process will show the correct state now.

1.3.5 Importing some HTTP Server access logs may result in a "String index out of range" error

Bugzilla defect 100979

Importing some HTTP Server access logs with the static parser may stop before all records are parsed and a message similar to the following may be displayed:

IWAT0030E An error occurred during the execution of the remote log parser
"org.eclipse.hyades.logging.adapter.config.StaticParserWrapper": IWAT0412E
Errors occurred parsing the log file /home/userId/logs/access.log.
IWAT0357E Exception parsing file /home/userId/logs/access.log:
org.eclipse.hyades.logging.parsers.LogParserException: IWAT0054E Error parsing
access log.
IWAT0306E Error while parsing line number 1535:

9.26.5.6 - - [09/Feb/2005:17:07:53 -0500] "VERSION" 501 -
String index out of range: -2.

The HTTP Server access log static parser cannot parse log records that do not include a file name. An example of such a record is:

9.26.5.6 - - [09/Feb/2005:17:07:53 -0500] "VERSION" 501 -

To work around this problem use the rules-based parser to import the log file.

1.3.6 Unreadable data in some events when importing Microsoft Windows System event log on DBCS system

Bugzilla defect 95077

Importing the Microsoft Windows System event log from a Double Byte Character Set system may result in some Common Base Events being shown in the Log View with missing or unreadable msg values.

1.3.7 NullPointerException when importing an empty log

Bugzilla defect 100743

When an empty log is imported, or when an import filter is used that filters out all log events, the Log View appears empty and a NullPointerException (in XMLLoader.endElement) may be thrown. Check the log file, or try a different filter that allows some events to be loaded.

1.3.8 Importing Windows Application event log generates Common Base Event formatting errors

Bugzilla defect 101718

Sometimes when importing the Microsoft Windows Application event log the following messages are displayed:

IWAT0027E Error importing the specified log file(s).
IWAT0412E Errors occurred parsing the log file null.
IWAT0438E Common Base Event formatter N6B1EE3005B511D880008CD5D1F4FA98 cannot
create a CommmonBaseEvent because required property creationTime is missing.

The log parser fails to parse some log records properly. However, most log records are imported and shown in the Log view.

1.3.9 Log import from a remote HP-UX system hangs when an invalid log file name is specified

Bugzilla defect 101491

If an invalid log file name is specified when importing a log from a remote HP-UX system, the import operation may appear to never end. The job status bar shows "Importing log file...", the progress indicator continues scrolling, and no error message is displayed. A log import job in this state cannot be cancelled; to stop it, stop the Eclipse workbench. To avoid this problem, ensure that the specified log file name is correct.

1.4 Probekit

N/A

1.5 Profiling Tool

1.5.1 Problem with garbage collection when using IBM JDK 1.4.1

Bugzilla defect: 56182

If the user's application uses an extremely large amount of heap space, requesting Collect Object References or Run GC can potentially cause the JVM to crash with the following error message:

 **Out of memory, aborting**

*** panic: JVMCI023: Cannot allocate memory to collect heap dump in jvmpi_heap_dump

abnormal program termination

You can try to work around this by running without the -Xmx parameter, if you are currently running with it.

1.5.2 With Sun JDK, some method calls are not traced

Bugzilla defect: 69051

Using the Sun JDK on Windows, certain method calls in Java programs are not being traced by JVMPI.

There is no known workaround.

1.5.3 Profiling on Solaris using the Sun JDK 1.4.x or on HP using the HP JDK 1.4.x may cause JVM to crash

Bugzilla defect: 56404

Profiling on Solaris using the Sun JDK 1.4.x or on HP using the HP JDK 1.4.x may cause the JVM to crash.

The problem on Sun is due to a bug in the Sun JVM. To work around this problem, use only one of the following profiling sets:

The problem arises if you use these profiling sets in combination or if "Show instance level" information is turned on. Alternatively, you can upgrade to the Sun JDK 1.4.2_08-b03 build where the problem has been fixed.

The HP JDK bug has been fixed as of JDK 1.4.2_04. The only solution on HP is to upgrade to this JDK version or later.

1.5.4 Potential crash when running in standalone mode with STACK_INFORMATION=contiguous on Solaris

Bugzilla defect: 50090

When profiling on Solaris, you may encounter problems with standalone profiling. The problem only occurs when STACK_INFORMATION=contiguous (or boundaryAndContiguous) and TRACE_MODE=full. This problem may result in your JVM crashing.

To work around this problem with STACK_INFORMATION=contiguous, set TRACE_MODE=noObjectCorrelation. The problem does not occur when STACK_INFORMATION=none or STACK_INFORMATION=normal.

1.5.5 Negative timeout values for WAIT and WAITED events

Bugzilla defect: 63969

When running with the IBM 1.4.2 JDK with the JVMPI profile option 'MONITOR_MODE=all' (in standalone mode), you may see negative timeout attributes on monitorWait and monitorWaited elements in the trace. These are actually extremely large timeout values that appear negative when interpreted as signed 64-bit integers. This is the result of a JDK bug.

The JDK bug has been fixed as of IBM JDK 1.4.2 SR1a. A solution is to upgrade to this JDK level or later.

1.5.6 Incorrect monitor dumps with IBM JDK 1.4.2

Bugzilla defects: 65193 and 72180

Because of a JDK bug, when running the Test and Performance Platform in standalone mode with the JVMPI profile option 'MONITOR_MODE=all', you may get incorrect monitor dumps. For bug 65193 in particular, this happens when the '-Xj9' VM argument is used.

1.5.7 Method counts incorrect with JIT inlining

Bugzilla defect 70660 (closed as "Won't fix")

If you suspect that the method counts you are seeing in the analysis tools are too low, turn off JIT inlining if you are using it. This problem happens only with the IBM Java 2 Runtime Environment v1.4.2, and only when the JIT is enabled.

The only work-around for this problem is to turn off inlining. To do this, set the following environment variable:

JITC_COMPILEOPT=NINLINING

1.5.8 Method level CPU time statistics limitations on AIX and Solaris

In TPTP 3.0 and 4.0, method level CPU time statistics are available for collection. Optionally, you can view method level CPU time statistics in an additional column in the Method Statistics view or Method Invocation table. Platform limitations for this feature are as follows:

There is no support for method level CPU time statistics reporting on AIX 4.3.

On AIX Version 5.1, method level CPU time statistics reporting requires that the environment variable "AIXTHREAD_ENRUSG=ON" be exported.

The method level CPU time statistics feature is currently not supported on Solaris.

1.5.9 Profiling to an existing profile file fails on Linux

Bugzilla defect: 95803

Profiling to an existing profile file fails on Linux platforms. An invalid path separator is used in the code, which results in a FileNotFoundException.

To work around the problem, profile to a new file instead of an existing profile file.

1.5.10 Importing profile files generated from headless profiling

When a profile file is generated from headless profiling, the file cannot be imported to the Eclipse workbench properly because it is missing the top level <TRACE> element.

The workaround is to manually edit the profile file and add the string <TRACE> at the beginning and </TRACE> at the end of the file before importing it into the Eclipse workbench.
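The manual edit can also be scripted. The sketch below (a hypothetical helper, not part of TPTP) copies a headless profile file into a new file wrapped in the <TRACE> element:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class WrapTrace {
    // Copy the headless profile file into a new file, adding <TRACE> at
    // the beginning and </TRACE> at the end so the workbench can import it.
    static void wrap(File in, File out) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(in));
        PrintWriter writer = new PrintWriter(new FileWriter(out));
        try {
            writer.println("<TRACE>");
            String line;
            while ((line = reader.readLine()) != null) {
                writer.println(line);
            }
            writer.println("</TRACE>");
        } finally {
            reader.close();
            writer.close();
        }
    }
}
```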

1.5.11 Duplicate filter views are displayed after workbench is closed abnormally

Bugzilla defect: 97894

If the workbench crashes or closes abnormally, Trace and Log filters may not be saved properly, resulting in the recreation of a filter when the workbench is relaunched. As a result, duplicate filters are shown in the view filter list.

To remove a duplicate filter, delete the filter using the Manage Filter Wizard, which can be accessed from the view's drop-down menu.

1.5.12 "Free up memory" action may fail silently

The "Free up memory" action may fail silently. If a failure occurs, you may need to close and reopen the Profiling and Logging perspective.

1.5.13 Incorrect agent options are sent when Execution History > Full Graphical Details is selected without editing

Bugzilla defect: 99492

When the profiling set "Execution History - Full Graphical Details" is selected under the Profiling tab of the Profile launch configuration wizard without editing any of its content, more profiling data is collected than is required, such as object allocation data.

To work around this problem, click Edit after selecting the "Execution History - Full Graphical Details" profiling set and step through the wizard pages by clicking Next on each page. After you have advanced through the wizard, click Finish to update the settings for the profiling set.

1.5.14 Import profile file with package level filtering shows empty view

Bugzilla defect: 100334

When the profile file is generated with the Memory Analysis profiling type selected, events are not saved in the profile file in chronological sequence. This causes failures, such as lost packages, when the profile file is subsequently imported with filtering at the package level.

To work around the problem, import the profile file without any filtering at the package level and filter the data in the statistics views after the import is complete.

1.5.15 Profiling mode shows more data than expected

When profiling an application with the profiling types Basic Memory Analysis (with no instance-level information) and Execution Time Analysis (with execution flow graphical details and no instance-level information), instance-level information still appears in the Execution Statistics view when the Instance Level Information toolbar button is selected.

1.6 Statistical Console

N/A

1.7 Test

1.7.1 Common Test Issues

1.7.1.1 JUnit, Manual, and URL Tests do not work on iSeries

Bugzilla defect: 68899

1.7.1.2 Datapool access

Bugzilla defect: 68911

The documentation that describes accessing a datapool from a test is missing a step and contains a code sample that does not completely work.

The following jars need to be added to the Java build path. ([ECLIPSE_HOME] is the directory where Eclipse has been installed.)

	[ECLIPSE_HOME]/plugins/org.eclipse.hyades.models.common_3.0.0/common_model.jar
	[ECLIPSE_HOME]/plugins/org.eclipse.hyades.test.datapool_3.0.0/datapool_api.jar
	[ECLIPSE_HOME]/plugins/org.eclipse.emf.ecore_2.0.0/runtime/ecore.jar
	[ECLIPSE_HOME]/plugins/org.eclipse.emf.common_2.0.0/runtime/common.jar
	

The following code snippet demonstrates how to access a datapool and extract information properly.  

	// Create the factory and load the datapool file from disk.
	IDatapoolFactory dpFactory = new Common_DatapoolFactoryImpl();
	IDatapool datapool = dpFactory.load(new File("d:\\hyades3.0\\workspace\\testproj\\dpoo1.datapool"), false);
	// Open a sequential, private iterator over the datapool and initialize it.
	IDatapoolIterator iter = dpFactory.open(datapool, "org.eclipse.hyades.datapool.DatapoolIteratorSequentialPrivate");
	iter.dpInitialize(datapool, -1);

	// Visit each record, reading cells by column name.
	while (!iter.dpDone())
	{
		String firstName = iter.dpCurrent().getCell("First Name").getStringValue();
		// your code here
		iter.dpNext();
	}
	

1.7.2 URL Test

1.7.2.1 Executing URL Tests as JUnit tests

URL Tests can be executed as JUnit tests. In order to do so, the following entries must be added to the Java build path of the project containing the URL Test:

      [ECLIPSE_HOME]/plugins/org.eclipse.hyades.logging.core_3.3.0/hlcore.jar
      [ECLIPSE_HOME]/plugins/org.eclipse.hyades.logging.core_3.3.0/hlcbe101.jar
      [ECLIPSE_HOME]/plugins/org.eclipse.emf.ecore_2.0.2/runtime/ecore.jar
      [ECLIPSE_HOME]/plugins/org.eclipse.hyades.logging.java14_3.3.0/hl14.jar
      [ECLIPSE_HOME]/plugins/org.eclipse.emf.common_2.0.1/runtime/common.jar
	

1.7.2.2 Executing the URL Test Sample

The .class and .java files were removed from the URL Test sample to prevent compilation problems. The sample is not intended to be executed.
 

Return to the main readme file