Jim Sharpe
Measuring latency of a database toolkit operator?
I have a situation where a Streams application needs to be sensitive to the performance of the database with which it is interacting. Basically, I want to throttle back if I'm pounding it too hard. One logical way to do this is to watch the latency of certain operations. I realize I could do this with a custom Java or C++ operator that handled the actual interface with the database, but that would mean duplicating all the capability already present in the existing DBToolkit operators. Before I go down that road, is there some metric on the out-of-the-box operators that I can use for this? For example, the congestion factor on the output port of the upstream operator, or better yet, a latency metric on the DB operator itself.
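To make the latency-based throttling idea concrete, here is a minimal, language-agnostic sketch (in Python, outside of Streams) of the approach described above: time each database operation, smooth the samples with an exponentially weighted moving average, and inject a delay once latency exceeds a threshold. The class name, smoothing factor, and thresholds are all illustrative, not part of any Streams or DB toolkit API.

```python
import time

class LatencyThrottle:
    """Tracks an EWMA of per-operation latency and suggests a delay
    when the database appears overloaded. All tuning values here are
    illustrative assumptions, not Streams defaults."""

    def __init__(self, alpha=0.2, threshold_s=0.050, max_delay_s=0.5):
        self.alpha = alpha              # EWMA smoothing factor
        self.threshold_s = threshold_s  # latency above this triggers throttling
        self.max_delay_s = max_delay_s  # cap on the injected delay
        self.ewma = 0.0

    def record(self, latency_s):
        # Standard EWMA update: each sample pulls the average toward itself.
        self.ewma = self.alpha * latency_s + (1 - self.alpha) * self.ewma

    def delay(self):
        # Scale the pause with how far the smoothed latency exceeds
        # the threshold, up to the configured maximum.
        if self.ewma <= self.threshold_s:
            return 0.0
        excess = self.ewma - self.threshold_s
        return min(self.max_delay_s, excess * 10)

def timed_db_call(throttle, do_insert):
    """Wrap one database operation: back off first, then time it."""
    time.sleep(throttle.delay())
    start = time.perf_counter()
    do_insert()                         # the actual database operation
    throttle.record(time.perf_counter() - start)
```

Inside Streams this logic would have to live in a custom operator wrapping the DB call, which is exactly the duplication the question hopes to avoid.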
SystemAdmin
Re: Measuring latency of a database toolkit operator? (2013-02-15T16:58:31Z, accepted answer)

Hello Jim,
Use the congestion metric on the upstream operator's output port to determine whether Streams is causing performance problems in the database. The DB toolkit metric "Dropped Tuples" keeps track of failed statement executions, which is probably not adequate for this problem. We typically handle DB load issues manually. For example, if we run an application and see that it can't keep up with, say, INSERTs into a database, we reconfigure the application to have more parallel ODBCAppend operator instances, or, if we're using DB2, change the number of partitions or something like that. But all of that means stopping, changing, and recompiling; nothing dynamic.
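As a sketch of how the congestion metric could drive pacing, the following Python fragment pauses submission while congestion is high. The `read_congestion` callable is a hypothetical stand-in for however you would actually obtain the upstream output port's congestion factor (for instance via the Streams management interfaces); it is not a DB toolkit API, and the thresholds are assumptions.

```python
import time

def paced_submit(tuples, submit, read_congestion,
                 high_water=80, pause_s=0.1):
    """Submit tuples one at a time, pausing whenever the upstream
    congestion factor (assumed to be on a 0-100 scale) is at or
    above `high_water`, to let the downstream DB operator drain."""
    for t in tuples:
        while read_congestion() >= high_water:
            time.sleep(pause_s)
        submit(t)
```

A scheme like this keeps the out-of-the-box DB toolkit operators in place and confines the throttling logic to the source side of the flow.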
The developers have made a note to investigate adding a feature that would do this dynamically (possibly in a future release).