I have an orchestration that processes a file from FTP and inserts the data into a cloud-based application.
When Cast Iron processes a single file, it does the job fine. When multiple files are processed at once, performance degrades considerably.
I would like to understand the underlying design. Does the runtime create one orchestration instance for each file it processes?
SystemAdmin 110000D4XK238
Re: Orchestration instances | 2012-04-20T19:19:28Z | This is the accepted answer.
There are a few possibilities here:
1) If you are running with high logging levels and the files you are processing are quite large, you will cause a lot of disk I/O, and this has an impact on performance.
2) If you are running performance tests on a development appliance, you have to expect slower performance because of the lower CPU and memory configurations.
3) You should be using the Max Jobs setting in the WMC to control how many files you process concurrently. By default, the FTP connector will start as many jobs as it can, based on the Max Jobs setting and the number of files that match the file pattern on the FTP site (up to a maximum of 100 jobs on a physical appliance, and 10 in Cast Iron Live). Obviously, 100 concurrent files will cause a significant amount of disk I/O and CPU contention.
4) Are you making the most of your inserts to the cloud? Are you able to send batches of data rather than using a for-each loop? That is, parse the file, then use a split or a bulk map to the target endpoint where possible. This reduces both the orchestration's execution time and network I/O.
5) Are you seeing any significant garbage collection in the graph on the WMC? This may also indicate that you are overloading the appliance's resources, and it will significantly impact the performance of your jobs.
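The effect of the Max Jobs cap in point 3 can be sketched outside Cast Iron. Max Jobs itself is a WMC setting, not code; the snippet below is only an analogy, using a semaphore (a hypothetical `MAX_JOBS` value of 3) to show how capping concurrency bounds resource contention no matter how many files arrive at once.

```python
import threading
import time

MAX_JOBS = 3          # assumed cap, standing in for the WMC Max Jobs setting
gate = threading.Semaphore(MAX_JOBS)

active = 0            # jobs currently running
peak = 0              # highest concurrency observed
lock = threading.Lock()

def process_file(name):
    """Placeholder for one orchestration job handling one file."""
    global active, peak
    with gate:                      # wait here until a job slot is free
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)            # simulate parsing + inserting the data
        with lock:
            active -= 1

# Ten files appear on the FTP site at the same time.
threads = [threading.Thread(target=process_file, args=(f"file{i}.csv",))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds MAX_JOBS
```

With the gate in place, the ten "files" are worked off three at a time instead of all at once, which is the same trade the Max Jobs setting makes: slightly longer total wall time in exchange for bounded disk and CPU contention.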
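The batching idea in point 4 can also be sketched generically. The functions below are hypothetical stand-ins (Cast Iron expresses this as a split or bulk map in the orchestration, not as code): grouping parsed rows into batches turns one network call per record into one call per batch.

```python
def batches(records, size):
    """Split parsed rows into fixed-size batches for bulk insert."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def bulk_insert(batch):
    """Stand-in for one bulk call to the cloud endpoint.

    Returns the number of records sent in this round trip.
    """
    return len(batch)

rows = [{"id": n} for n in range(10)]           # rows parsed from the file
calls = [bulk_insert(b) for b in batches(rows, 4)]

print(len(calls))   # 3 round trips instead of 10 single-record inserts
```

The per-record for-each loop pays network latency once per row; the batched version pays it once per batch, which is where most of the saving in execution time and network I/O comes from.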
Alan Moore, WebSphere Cast Iron ISSW STSM