and made numerous postings in the developerWorks DataPower forum using the coproc2 service for demonstration.
The coproc2 service itself (MPGW export attached in this posting) is unchanged since the first posting.
A one-line (bash) shell-script client (coproc2) and a Java client (coproc2.java) are available.
Today I posted a new coproc2 client, coproc2swa, another (bash) shell-script client usable on Linux and under Cygwin on Windows (tested!). http://www.ibm.com/developerworks/forums/thread.jspa?threadID=387511
...
# This file is an all-in-one solution in that it
#
# 1) is the coproc2swa client for processing SWA data with coproc2 service
#    ("bash" script, in "preamble")
#
# 2) is a SWA file itself(!) and can be used as sample input
#
# 3) allows to inspect the SWA file by -preamble/-epilogue/-print (cids)
#
# 4) contains four demo stylesheets which may be referenced by "cid:...":
#
#    attman.xsl
#    - accessing attachment manifest and content-types
#
#    pgm1st.xsl
#    - output first "image/x-portable-greymap" attachment (binary data!)
#
#    xml1st.xsl
#    - output first "text/xml" attachment
#
#    zipit.xsl
#    - return zip of all "text/xml" and "image/x-portable-greymap" attachments
...
The demo I like most is cid:zipit.xsl (invoked by "coproc2swa cid:zipit.xsl ~/bin/coproc2swa http://dp3-l3:2223 -s >z.zip").
It determines all attachments of type "text/xml" and "image/x-portable-greymap" in the SWA file,
adds them to an "archive-zip" archive, and returns that archive as the result (binary data)!
Today I want to show a little tool we use in the Böblingen lab to keep track of our DataPower boxes (Level 2 and Level 3 support boxes, as well as some Techsales boxes).
This tool is most useful when firmware versions change frequently across boxes (as in development or support).
Boxes with different hardware features can also be tracked easily.
Here I have selected only my Level-3 support boxes to keep the screenshot small.
As you can see, you get a timestamp and then, for every box (clickable), all of its version information in the left section. The right section provides all feature information ("Y/N" means that the feature is licensed, but the installed firmware does not support it).
Click on the picture to see the details!
Here is the screenshot of all Boeblingen boxes (nearly full rack):
This is the display when a box is inaccessible (I intentionally rebooted dp5-l3 for this):
The solution is intentionally not installed on one of our boxes, but on a normal Linux server.
The solution described below can also be used on Windows systems, either by adaptation or by using Cygwin (search for "cygwin cron").
Our boxesBB page gets updated every ten minutes automatically by a cron job, see the output of "crontab -l":
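As a sketch, a cron entry that runs a script every ten minutes looks like the following (the script path here is hypothetical, not the one from our installation):

```shell
# run the update script every ten minutes (path is made up):
*/10 * * * * /home/dpadmin/boxesBB/update.sh
```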
This is the shell script executed to generate the updated view. Since this solution runs outside a DataPower box, "xsltproc" is used for the stylesheet processing.
#!/bin/bash
cd `dirname $0`
xsltproc create_script.xsl empty.xml >script
sh script
sleep 15
xsltproc generate.xsl empty.xml >/var/www/html/myBoxesBB.html
The solution allows you to configure which boxes should be tracked.
For each box you specify its name, the password of the admin user (this may be any user with XML management access rights), and optionally the port of the XML management interface if it differs from the default 5550.
The gport entry specifies the port on the serial console by which the box can be accessed.
And this is the stylesheet to create the complete status page based on the results retrieved for all the boxes by the generated bash script.
The shell script is necessary because xsltproc does not allow dp:url-open() calls ... ;-) The generated HTML page does not know at which interval crontab regenerates it. To keep the display current, it refreshes itself every minute.
A customer wanted to know how to store files (retrieved by a scheduled rule) locally on the DataPower box.
The solution consists of:
* scheduled rule
* dp:url-open to retrieve file
* base64-encode it (either its serialization for XML, or the binaryNode response for non-XML)
* use set-file XML management operation to store the file
* access XML management interface from within a stylesheet to execute the request
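On the box itself the last two steps are done from a stylesheet via dp:url-open against the XML management interface; the following bash fragment only illustrates the shape of the set-file request. The payload, the target file name, and the box name/credentials in the commented curl line are all made-up examples.

```shell
# Base64-encode a payload; "<hello/>" stands in for the fetched file:
PAYLOAD=$(printf '<hello/>' | base64)

# Wrap it in a SOMA set-file request; "local:///fetched/data.xml" is a
# hypothetical target file name on the box:
REQUEST=$(cat <<EOF
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request xmlns:dp="http://www.datapower.com/schemas/management"
                domain="default">
      <dp:set-file name="local:///fetched/data.xml">$PAYLOAD</dp:set-file>
    </dp:request>
  </env:Body>
</env:Envelope>
EOF
)

echo "$REQUEST"
# Sending it would look like (placeholders for box and credentials):
# curl -k -u admin -d "$REQUEST" https://dp3-l3:5550/service/mgmt/current
```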
> How can i count number of requests passing through the processing policy.
For policies that are tied to specific stylesheets you may look
up "Status-->XML Processing-->Stylesheet Executions" in the WebGUI.
You get the exact counts of stylesheet executions for the last 10 seconds, 1 minute, 10 minutes, 1 hour, and 1 day.
You may reset the counters to 0 by flushing the corresponding XML manager's stylesheet cache.
> is there any Dp:element , XSL element or function available.
This seems to indicate that you want to access these counts from within a stylesheet?
You can access the above-mentioned counters via the XML management interface:
$ doSoma admin StatusStylesheetExecutions.xml dp3-l3.boeblingen.de.ibm.com:5550 | \
> xpath++ "/*/*/*/*/*[contains(URL,'coproc2.xsl')]" -
Enter host password for user 'admin':
-------------------------------------------------------------------------------
<StylesheetExecutions xmlns:env="http://www.w3.org/2003/05/soap-envelope"
                      xmlns:dp="http://www.datapower.com/schemas/management">
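doSoma is a local helper script; the same can be sketched with plain curl and a SOMA get-status request. Everything box-specific (host, credentials) in the commented curl line is a placeholder.

```shell
# SOMA get-status request for the StylesheetExecutions status provider:
REQUEST=$(cat <<'EOF'
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request xmlns:dp="http://www.datapower.com/schemas/management"
                domain="default">
      <dp:get-status class="StylesheetExecutions"/>
    </dp:request>
  </env:Body>
</env:Envelope>
EOF
)

echo "$REQUEST"
# Sending it (placeholders for box and credentials):
# curl -k -u admin -d "$REQUEST" https://dp3-l3:5550/service/mgmt/current
```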
The advantage of dp:decode(_,'base-64') is that it performs a UTF-8 validation and prevents negative effects.
I cannot give a more complete example until I have access to my boxes in the IBM network, which I do not have (and do not want to have :-) ) from my cell phone or the Internet terminal in our youth hostel in Germany's most northern town, List, on the North Sea island of Sylt ...
... the total precision is therefore 53 bits (approximately 16 decimal digits). ... For the next range, from 2^53 (= 9,007,199,254,740,992) to 2^54, everything is multiplied by 2, ...
Below you can see the output of the DataPower, xsltproc, xalan, and saxon XSLT processors for numbers with 15 to 18 digits. Reported are the length, the conversion of the input string to a number (by the XSLT core function number()), and the original input string. The last 16-digit number is the lowest positive integer with precision loss:
In addition the posting shows how to use the netcat tool as the simplest form of a raw TCP server. This
is useful for debugging issues between DataPower and other TCP servers
-- get it working with netcat and you know that the DataPower side is fine.
The latency values get logged at information level when a DataPower service accesses a real backend (not with skip-backside).
If you do want the entries in the loopback case, just put a simple pass-thru XML firewall on the box as the "backend".
The technote describes the 16 values in the order they appear in the log message.
This does not correspond to the order in which these values get timestamped during the execution of a transaction.
Below you find the technote table reordered according to the real execution order (see <EDIT>...</EDIT> below for a restriction).
The messages in the table have been replaced by the WebGUI online help messages (shown if you click on "Latency: ..." in the log).
In addition I added coloring of the different phases of the transaction:
receive request header
front side processing
transmit to back side
receive header from back side
back side processing
transmit response to front side
You have access to 3 of the values via read-only service variables:
In addition you may access var://service/time-elapsed (the value used for creation of all 16 latency values, read-only).
request headers read
front side transform begun
front side parsing complete
front side stylesheet ready
front side transform complete
back side connection attempted
(this is when the connection is first tried)
back side connection completed
(or time connection attempt gave up)
request headers sent
entire request transmitted
response headers received
back side transform begun
back side parsing complete
back side stylesheet ready
back side transform complete
response headers sent
A colleague made me aware that this order cannot be guaranteed in case of streaming processing.
(see DataPower XML data processing)
Unfortunately the time column did not report timing values in milliseconds (as it should) but in sub-microseconds. The real values are not that important: for performance tuning, the relative correctness of the reported times was good enough to work on transformation bottlenecks. The incorrect timing values have been fixed in the April fixpack: