To best suit the needs of your environment, you can fine-tune the settings in the Data Collector properties and toolkit properties files. The files are specific to each monitored server instance (or J2EE application) and are located under the following instance directory, depending on the application server:
WebLogic | If the monitored server instance is represented by a WebLogic machine: DC_home/runtime/wlsapp_server_version.domain_name.machine_name.instance_name/datacollector.properties; otherwise: DC_home/runtime/wlsapp_server_version.domain_name.host_name.instance_name/datacollector.properties |
Tomcat | DC_home/runtime/tomcatapp_server_version.host_name.instance_name/datacollector.properties |
JBoss | DC_home/runtime/jbossapp_server_version.host_name.instance_name/datacollector.properties |
NetWeaver | DC_home/runtime/netweaverapp_server_version.sap_node_ID_host_name.sap_instance_number/datacollector.properties |
J2SE | DC_home/runtime/j2se.application_name.host_name.instance_name/datacollector.properties |
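As a concrete illustration of how the path variables resolve, with hypothetical values (data collector home /opt/IBM/itcam/DC, Tomcat version 7.0, host appsrv01, instance instance1; all of these names are placeholders, not defaults), the Tomcat properties file would be found at a path like:

```
/opt/IBM/itcam/DC/runtime/tomcat7.0.appsrv01.instance1/datacollector.properties
```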
The data collector properties file is automatically created by the data collector, and is unique for every application server instance that is monitored by the data collector. It is located in the instance directory and its name is datacollector.properties.
However, to facilitate future upgrades, do not change this file.
Instead, add the settings that you want to modify to the data collector custom properties file. This file is located in the custom subdirectory of the instance directory. Its name is datacollector_custom.properties.
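For example, a minimal sketch of a datacollector_custom.properties file that overrides a single setting (the property shown, dc.turbomode.enabled, is documented later in this section; the value is illustrative, not a recommendation):

```properties
# custom/datacollector_custom.properties
# Settings here override the generated datacollector.properties
# and are preserved across upgrades.
dc.turbomode.enabled=true
```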
The following properties are in the Data Collector properties file. Only the properties that are recommended for you to modify are listed.
Example:
probe.library.name=am
internal.lockanalysis.collect.L1.lock.contention.events = false
The variable n represents the MOD (Monitoring On Demand) level: L1, L2, or L3. Possible values are true, false, or justone. This parameter controls whether lock contention events are collected.
A value of true means contention records are collected: for each lock acquisition request that results in contention, a pair of contention records is written for each thread that acquired the lock ahead of the requesting thread. A value of false means contention records are not written. A value of justone means contention records are written, but at most one pair of contention records is written for each lock acquisition request that encounters contention, regardless of how many threads actually acquired the lock before the requesting thread.
Setting this parameter to true enables you to determine whether a single thread is holding a lock for an excessive time, or whether too many threads are attempting to acquire the same lock simultaneously. The recommended setting at L1 is false. The recommended setting at L2 is justone, which collects just one pair of contention records for each lock acquisition that encountered contention. The recommended setting at L3 is true, but only for a limited time because of the performance cost; this setting identifies every thread that acquired the lock ahead of the requesting thread.
internal.lockanalysis.collect.L2.lock.contention.events = justone
internal.lockanalysis.collect.L3.lock.contention.events = true
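The recommended per-level settings described above can be collected in the data collector custom properties file. This sketch writes the property with one consistent name for all three levels; verify the exact property names against your generated datacollector.properties before using it:

```properties
# Lock contention event collection, per MOD level (recommended values)
internal.lockanalysis.collect.L1.lock.contention.events=false
internal.lockanalysis.collect.L2.lock.contention.events=justone
# Use 'true' at L3 only for a limited time, because of the performance cost
internal.lockanalysis.collect.L3.lock.contention.events=true
```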
It is not necessary to define the deploymentmgr.rmi.port property if you are running a stand-alone application server. This property is needed for version 5 application server clusters, or for application servers that are controlled by a Deployment Manager.
deploymentmgr.rmi.port=<Deployment Manager RMI (bootstrap) port>
It is not necessary to define the deploymentmgr.rmi.host property if you are running a stand-alone application server. This property is needed for version 5 application server clusters, or for application servers that are controlled by a Deployment Manager.
deploymentmgr.rmi.host=<Deployment Manager host>
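For a clustered environment controlled by a Deployment Manager, both properties are set together. A sketch with hypothetical values (the host name and port number below are placeholders; substitute the values for your own Deployment Manager):

```properties
# Needed only for clusters or Deployment Manager-controlled servers;
# omit both properties for a stand-alone application server.
deploymentmgr.rmi.host=dmgr01.example.com
deploymentmgr.rmi.port=9809
```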
Time interval after which the connection between the Data Collector and the Publish Server is reset. The default value, -1, means that the connection is never reset.
networkagent.socket.resettime=-1
By default, the Data Collector limits the amount of native memory it uses to 100 MB; see the description of the internal.memory.limit property. The Data Collector enters turbo mode when its native memory use exceeds 75% of the native memory limit (by default, 75 MB). You can adjust this percentage with the turbo.mem.ulimit property, but do not set turbo.mem.ulimit unless directed by IBM® Software Support. The behavior when memory utilization is below 75 MB is the same whether turbo mode is enabled or disabled.
Behavior when dc.turbomode.enabled is enabled and the Data Collector is in turbo mode
When the Data Collector switches to turbo mode, a message Switching to Turbo Mode is logged in the trace-dc-native.log file.
In turbo mode, the Data Collector stops monitoring new requests and places existing requests on hold. It raises the priorities of the Network Agent and Event Agent threads to the values specified by the na.turbo.priority and ea.turbo.priority properties, and lowers the sleep times of those threads to the values specified by the na.turbo.sleep and ea.turbo.sleep properties. All these actions drain native memory quickly by sending accumulated event data to the Publish Server.
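The thread-priority and sleep properties named above can be set in the data collector custom properties file. The values below are purely illustrative placeholders; the valid ranges and units for these properties are not stated here, so confirm them before changing the defaults:

```properties
# Turbo-mode thread tuning (illustrative values only)
# Higher priorities for the Network Agent and Event Agent threads
na.turbo.priority=9
ea.turbo.priority=9
# Shorter sleep times for the same threads while in turbo mode
na.turbo.sleep=10
ea.turbo.sleep=10
```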
In turbo mode, if a new request comes in, the Data Collector does not monitor it, but it continues to monitor the requests that are already running. For every new request that is not monitored, the Data Collector notifies the Managing Server by sending a dropped record. The Publish Server in turn reflects this status in its corrupted request counters, which you can obtain with the amctl.sh ps1 status command.
When turbo mode is enabled, data in the Application Monitor user interface is always accurate. The accuracy comes at the cost of pausing application threads for a few seconds.
Behavior when dc.turbomode.enabled is enabled and the Data Collector is in normal mode
The Data Collector switches back to normal mode when its native memory use falls below 75% of the limit. When the switch happens, the Data Collector releases the requests that were placed on hold when it entered turbo mode, and resumes monitoring all requests from then on.
When the Data Collector switches to normal mode, a message Switching to Normal Mode is logged in the trace-dc-native.log file. It also logs memory utilization and a time stamp.
Behavior when dc.turbomode.enabled is disabled
A value of false disables turbo mode. When turbo mode is disabled, the Data Collector does not pause application threads when native memory use exceeds 75% of the limit. Instead, it drops the accumulated diagnostic data rather than sending it to the Managing Server. Therefore, the data shown in the Application Monitor user interface is incomplete, but the response time of the application threads is not negatively impacted. A message indicating that data is dropped is logged in msg-dc-native.log and trace-dc-native.log. When the Data Collector drops records related to a request, the Managing Server discards all the diagnostic data gathered for that request.
Disabling dc.turbomode.enabled
The default setting is true, which enables turbo mode.
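To disable turbo mode as described above, set the property to false in the data collector custom properties file:

```properties
# Disable turbo mode: when native memory use exceeds the threshold,
# diagnostic data is dropped instead of pausing application threads.
dc.turbomode.enabled=false
```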
The toolkit properties file is automatically created by the data collector at startup, using various input files. It is unique for every application server instance monitored by the data collector. It is located in the instance directory and its name is toolkit.properties.
Because this file is re-created at each data collector startup, do not make any changes to this file; if you do, they will be overwritten.
Instead, add the settings that you want to modify to the toolkit custom properties file. This file is located in the custom subdirectory of the instance directory. Its name is toolkit_custom.properties.
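For example, a sketch of a toolkit_custom.properties file. The property name below is a hypothetical placeholder, not a real toolkit setting; take the actual property names from your generated toolkit.properties:

```properties
# custom/toolkit_custom.properties
# Entries here override the generated toolkit.properties, which is
# re-created at every data collector startup.
# some.toolkit.property=value   <- hypothetical placeholder
```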