ECM Community Blog
We often get asked about the best way to write a custom iWidget when you want to integrate it into your Case Manager solutions.
Although iWidgets and their main content are, by definition, very self-contained little web apps, there are development approaches that can make their content more easily reusable in other web environments.
This new article provides a great view into that, with examples to get you on your way.
Happy New Year to everyone!
For those of you who have been actively following our Content Navigator component and wondering how you can start to really take advantage of what it offers, here's a new developerWorks article that talks about how to build custom step processors for your process-centric solutions.
IBM Case Manager (ICM) unites information, process, and people to provide a 360-degree view of case information and achieve optimized outcomes. You can further enhance your case management solution by integrating forms, additional rules, analytics, and logging and reporting.
Author: Jos Olminkhof (email@example.com)
Summary: This is a step-by-step overview that shows how you can integrate an ILOG JRules business rule into an ICM solution.
4. Integrating with Business Process Manager
Author: Guo Yan Fen (firstname.lastname@example.org)
Summary: By integrating with IBM Business Process Manager, ICM can take advantage of IBPM processes. This document shows the configuration steps for ICM-IBPM integration and demonstrates how an IBPM inbox widget works from the ICM side.
Author: Lannie Truong (email@example.com), Jeff Lee, Chi Nguyen, and others from the ICM team.
Summary: Making the IBM Enterprise Records RM_Operations component available in the IBM Case Manager Solution Designer component of IBM Business Process Manager Process Designer requires special configuration. This document gives you detailed instructions on how to configure the integration.
6. Integrating with Cognos 8 Business Intelligence
Author: Gang Zhan (firstname.lastname@example.org)
Summary: Cognos 8 Business Intelligence is a product that can automatically create analysis reports from different perspectives, with no need to manually draw tables and fill in the data. This paper describes how to integrate Cognos BI v8.4.1 with ICM 5.1.1 so that ICM reports can be created automatically.
Author: He Long (email@example.com)
Summary: By integrating IBM Enterprise Content Management with IBM Case Manager, you can use documents that are created in ECM in your case management solution. This document gives you detailed instructions on how to achieve this.
Author: Gang Zhan (firstname.lastname@example.org)
Summary: By integrating IBM Case Manager with IBM Cognos Real-time Monitoring, you can easily get analysis reports from different perspectives, so you can react quickly to revenue and cost-saving opportunities.
One of our readers asked me about a problem he was seeing with one of my Tips and Tricks.
When used in the Script Adapter, the overrideMimeType call in the snippet below works great in Firefox, but causes an exception in IE 8.
xmlhttp = new XMLHttpRequest();
xmlhttp.overrideMimeType("application/json"); // this is the call that throws in IE 8
xmlhttp.open('GET', getUserURL, false); // synchronous request
xmlhttp.send(); // send() must be called before responseText is populated
var myResObj = eval('(' + xmlhttp.responseText + ')');
The IE version of the XMLHttpRequest object does not support that method and in most cases, it is not required.
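A minimal cross-browser sketch of the fix, assuming getUserURL is already defined by the surrounding Script Adapter code as in the original tip, is to feature-detect the method before calling it:

var xmlhttp = new XMLHttpRequest();
// overrideMimeType() does not exist on IE 8's XMLHttpRequest, so only call it where available
if (xmlhttp.overrideMimeType) {
    xmlhttp.overrideMimeType("application/json");
}
xmlhttp.open('GET', getUserURL, false); // synchronous, as in the original tip
xmlhttp.send();
// on browsers with native JSON support, JSON.parse(xmlhttp.responseText) is a safer alternative to eval
var myResObj = eval('(' + xmlhttp.responseText + ')');

The guard costs nothing in Firefox and simply skips the unsupported call in IE 8.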
Dave Perman
As I start to wind down for the Holidays, I thought I would deliver one little gift to you all before I do.
This use case has been asked about many times so here is an approach that might work for your projects. Enjoy!
Integrating Service Level Baselining, Performance Reporting, and Application Monitoring into your Software Development Methodology - Part 2
This is the second installment of a multi-part blog series that examines how to leverage application and user experience monitoring when developing applications, especially customer facing applications. It examines integration with different methodologies and varied infrastructure deployment. The series is not intended to be comprehensive, but is a reflection on my personal experiences and time spent with hundreds of ECM customers since starting with the ECM industry in 1996.
Integration of application and user experience monitoring into your SDLC with a traditional Waterfall methodology can quickly pay dividends. For a software developer, timelines are tight and the demand for low-defect code is high. A developer can't take a casual attitude toward the early stages of a project, because little issues at the beginning can become "show stoppers" later on.
Some terms and acronyms used in this installment are outlined in the first installment.
The Software Development Life Cycle typically contains several phases: Requirements, Design, Development/Implementation, Testing, and Operation/Maintenance.
During the Requirements phase, the functional requirements are specified, outlining what the application is “supposed to do”. An important part of the Requirements phase is the non-functional requirements – requirements that outline how the system is supposed to “operate”. These can include: SLAs, HA/DR operation, auditability, performance, usability, capacity, supportability, and response times.
The Design phase creates the system architecture that meets the requirements outlined in the previous phase. The design should not ignore "Run the Engine" (RTE) components and must take the non-functional requirements into consideration. Design should eliminate any "magic happens here" black boxes, especially those driven by vague business requirements or last-minute cursory management reviews.
The Development/Implementation phase is where the application development team "cranks out the code". In my experience, development teams produce a quality product that meets the outlined functional requirements. Issues with the overall application are typically found in the interaction with other applications or systems. Often the non-functional requirements aren't uniform between applications or systems, leading to problems during integration. Business requirements can also suffer from "that's what we specified, but not what we meant".
The Testing phase is where an independent testing team compares the application developed against the requirements (functional AND non-functional) specified during the Requirements phase. Integration testing, where the new application interacts with existing or newly built applications, can lead to considerable remediation efforts. Testing should not be underestimated; it is the application’s first contact with real world users and issues.
The Operation/Maintenance phase begins when the development team completes development on the release and hands off the application to the operations/support staff. The application is made available to the target internal or external customers and starts to perform the work that the requirements outlined. The support teams need to keep the “engine running” and provide feedback to management and the development teams about what is working well and where remediation may be needed.
The Waterfall methodology is a sequential process that closely follows the SDLC outlined above. Each step in the SDLC “flows” to the next, with some overlap between steps. Requirements lead to a design, which leads to development, then onto testing, production, and finally maintenance.
The Requirements phase should produce a number of non-functional requirements. The Business User community imposes some of these: response times, SLAs, system availability. Other non-functional requirements come from operations staff and management: HA/DR, capacity, auditability. Business users typically gloss over many non-functional but critical "plumbing" requirements, as they are focused on what the application is "supposed to do". However, during application testing and deployment, the lack of defined non-functional requirements can become a large point of contention with the development team.
The Design phase needs to include detailed design elements on how to meet the non-functional requirements. For example, is logging/reporting to meet audit requirements going to be written directly into the application or accomplished externally? Application Monitoring (AM) and Experience and Performance Monitoring (EPM) should be integrated into the design, allowing full use of tools and reporting during subsequent phases. The application development team needs the design in order to demonstrate that it is meeting the functional and non-functional requirements.
During the Development/Implementation phase, monitoring starts taking a more pronounced role. Leveraging application, system, and experience monitoring during development gives the development team insight into meeting the non-functional requirements: how is the system performing, what are the user response times, are any errors being generated? AM can provide early information on performance metrics, queue depths, component status, and component interaction from actual application behavior, rather than leaving design issues to be found near production rollout. Automated monitoring, reporting, and correction can reduce the waste of valuable development time troubleshooting basic operational issues during development. Including AM early in development helps deliver a more complete and supportable product to support/operations, freeing the development team to work on new projects.
Both AM and EPM pay huge dividends during the Testing phase. The testing team can certainly follow test scripts to validate functional requirement use cases, but how do they objectively measure non-functional requirements? The testing team can't submit a report to management stating that "we think the system responds fast enough". EPM allows the team to provide exact response times for users during testing and across a variety of situations (load, location, error conditions). Properly implemented monitoring can provide objective reporting on capacity, storage use, response times, errors generated (and remediated), and a host of other "non-functional" items. Properly designed monitoring also allows troubleshooting of issues encountered during integration testing, providing information about data flowing in and out of an application and sending alerts if actual operation differs from expectations.
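As a rough illustration of that idea (this is not code from any particular EPM product; the URL and loop below are invented for the sketch), a test script can capture objective response times instead of impressions:

// Time a request and record the elapsed milliseconds
var samples = [];
function timedRequest(url) {
    var start = new Date().getTime();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, false); // synchronous, to keep the sketch simple
    xhr.send();
    samples.push(new Date().getTime() - start);
}
// Drive the scenario under test, then report a concrete number
for (var i = 0; i < 50; i++) { timedRequest('/solution/getCaseList'); }
samples.sort(function (a, b) { return a - b; });
var p95 = samples[Math.floor(samples.length * 0.95)]; // 95th percentile response time in ms

Numbers like that 95th percentile give management something objective to sign off on, rather than "we think it responds fast enough".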
Baselining a system during the final user acceptance test is critical; a baseline gives management and support an overall "picture" of the application. Once the application goes into production, comparison to the baseline will help identify bottlenecks, volume-related performance issues, capacity, and growth. As application or environment fixes and enhancements are put in place after "go live", comparison to the baseline measurements verifies that the changes remediated the issue, improved performance/operation, or, most importantly, "did no harm". Service Level Baselining gives management a measurement of user interactions in an ideal situation and again offers a comparison point when user load or environmental issues occur.
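A minimal sketch of that "did no harm" comparison (the figures and names below are invented for illustration):

// Baseline captured during final user acceptance testing
var baseline = { avgResponseMs: 850, errorsPerHour: 2 };
// Metrics gathered after a post-go-live fix or enhancement
var current = { avgResponseMs: 910, errorsPerHour: 2 };

// Fractional change relative to the baseline; positive means worse
function deviation(now, base) { return (now - base) / base; }

if (deviation(current.avgResponseMs, baseline.avgResponseMs) > 0.10) {
    // More than 10% slower than the baseline: the change did harm, so investigate
    console.log('Response time regressed vs. baseline');
}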
As the project follows the Waterfall methodology and the application is placed into production, AM allows the support and management teams to know that the application is "really working". Typically, the support team is responsible for a whole host of applications and can't have the level of understanding and involvement with the application that previous teams (architects, developers, testers) had. Properly designed and configured monitoring gives the support staff in-depth and immediate visibility into an application. By using Application Service Level Monitoring, the system can be administered by less experienced administrators, freeing up senior resources for critical issues elsewhere. In the case where the application is running out of JVM memory or storage, the support team can be alerted before it becomes an issue and a "fire drill" is eliminated. If users report "slow" performance during production, EPM objectively reports response times and SLAs as accurate input for constructive business operation reviews. It's important that application owners have full visibility of the application operating "stack" when an issue is occurring.
Maintaining the application takes a couple of tracks. First, the application development team may need an effort to remediate any application defects found upon contact with the users (never underestimate the ability of the user base to quickly exercise defects!). These defects also include non-functional issues. Often how the Business Analysts think a user will use the system is very different from how a user actually uses the system. Performance issues and bottlenecks may be identified with Application Monitoring, and the gathered metrics and comparisons will help the application development team track these down.
Second, maintaining the operating environment, which usually falls to the application support team, needs to be considered. After common support tasks, such as network changes or adding storage/CPU/memory, are complete, is there an automated set of monitors that lets management and support know that the system is back in a fully operational state? Have the changes impacted the business users, either positively or negatively? The network changes may have been made for performance reasons, but do the users see a 1 second improvement or only a 1 ms improvement? Getting ahead of potential issues helps maintain the system: giving the storage team a couple of extra weeks to purchase, provision, and configure storage makes a huge difference for deployment and harmony. Properly configured monitoring can help achieve this harmony between groups.
Planning for and integrating Application and User Experience Monitoring early in the SDLC, and within each phase, provides immediate benefits and helps produce a better, more complete, and more sustainable application. Application and Service Level Monitoring shouldn't be an afterthought left only for the application support team to worry about. Every phase of a Waterfall-based project can make use of some aspect of monitoring, ultimately making your life as an application developer, support engineer, project manager, or application manager better when working with a new application. Implementing AM/EPM throughout the SDLC sends a positive signal to the business participants in the project: that you understand that service levels and end user response are key project success criteria.
The next blog entry will talk about integration of monitoring with Rapid Application Development methodologies. In the world of short timeframes and high expectations, it’s imperative to know what’s really going on.
By configuring the Case Operations component, users can perform custom actions in a workflow in FileNet® P8 Platform applications. Some example custom actions included in the following sample code are creating a case, adding a case comment, attaching an external file, and creating a subfolder. If you intend to use the source code in any way, read this document and the flowchart diagram before you take any action.
Disclaimer: The code can be compiled, modified, or enhanced to fit your needs. However, IBM does not support the code in any way and is not liable for any detrimental usage induced by the code.
Ensure that you install and configure the following software:
IBM Case Manager Information Center:
IBM Redbooks® publication Advanced Case Management with IBM Case Manager.
Download the Case Operations Component Sample for IBM Case Manager for Multiplatform and the Case Operations Component Sample for IBM Case Manager English from the link given below:
Here is a small presentation that highlights some of the new capabilities included with IBM Case Manager 5.1.1.
Integrating Service Level Baselining, Performance Reporting, and Monitoring into your Software Development Methodology
This is a multi-part blog series that will examine how to leverage application and user experience monitoring when developing applications, especially customer facing applications, to achieve world class service levels. It will examine integration with different methodologies, using various infrastructure deployment approaches. The series is not intended to be comprehensive, but is a reflection on my personal experiences and time spent with hundreds of ECM customers since starting with the ECM industry in 1996.
Traditionally monitoring has been considered a “Run the Engine” (RTE) type of activity, much like the dashboard lights and gauges on your automobile. Deploy the application and start monitoring to make sure the application is running. In reality, monitoring must be integrated early in the development process to provide data, get user feedback, and to prepare for deployment and RTE activities. Monitoring helps to improve the development process and be better prepared for production, especially when the integration starts early. Done correctly, application monitoring contributes to the ‘DevOps’ shift occurring within IT organizations.
When developing a new business application or product, there are many items to consider.
Timelines can be a real problem: given a set of business requirements, the team must develop a product that meets those requirements and is complete in time to meet a business, regulatory, or other deadline. I've experienced that, since the requirements need to be met and are the most visible and tangible deliverables, it's the operational items, performance testing, and comprehensive in-depth understanding of the application that suffer the most.
In many cases, as developers strive to meet the previous concerns, other important topics get pushed out until after deployment or end up being dropped all together.
As a software developer, there are a couple types of customers that your application answers to.
As a starting point for some of the topics that will be discussed in future entries, it's important to outline some terms. There may be more exhaustive or slightly different definitions for these terms, but I'm using them as described below. I've introduced a couple already.
RTE - "Run the Engine". After the application has been deployed and put into production, RTE is the effort and adjustments needed to keep the application performing its designated task(s).
SDM - Software Development Methodology. The plan, processes, and controls that an application development group uses to deliver an application that meets specified business requirements. An SDM can also specify a linear or iterative approach to development.
SDLC - Software Development Life Cycle. Closely related to the SDM, this outlines the processes, phases, and deliverables needed. The SDLC encompasses much more than the development phase.
Waterfall Development - A linear and sequential development approach, traditionally used for "big project" development with long timelines.
RAD - Rapid Application Development. An iterative development approach; Agile and Scrum are popular examples.
Baselining - The process of measuring, analyzing, and documenting performance at a given point in time: a "snapshot" of system performance. These metrics are used as a reference point for comparison with future metrics.
Performance Reporting - The process of gathering, storing, consolidating, and distributing operational metrics for an application or process. This applies not only to "physical" metrics (CPU, memory, I/O), but also to process metrics (time from ingestion to completion).
SLA - Service Level Agreement. An agreement between a service provider and the consumer of that service. It typically covers items such as system availability, response time, processing volumes, and other metrics.
System Monitoring - "Ping, power, and pipes" monitoring. Provides information that the "hardware" and operating system are operational, often along with system performance information such as CPU, storage, and memory usage.
AM - Application Monitoring. Monitoring at the "application" level. Provides information on how the application is performing and processing information, and on any errors or potential issues. End-to-end status of data flow is possible, with metrics and reporting throughout the process. AM extends system monitoring to a more granular level on items related to the application.
ASLM - Application Service Level Monitoring. Externally and objectively watching system AND application performance, then alerting, reporting, and automatically responding to the metrics gathered. Analyzing metrics gathered over time yields a better understanding of application operation, and alerting with automated response gives the customer a more stable system and process that meets the agreed-upon SLAs (a toy sketch follows this list).
EPM - Experience and Performance Monitoring. Monitoring actual user experience while using an application (not synthetic transaction monitoring). Helps support staff bridge the gap between how the application is running and what the business user is experiencing.
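To make the ASLM idea concrete, here is a toy sketch; the thresholds and helper functions are invented for illustration, not a real monitoring API:

// Compare gathered metrics against the agreed service levels and react automatically
var sla = { maxQueueDepth: 500, maxResponseMs: 2000 };

function checkServiceLevels(metrics) {
    if (metrics.queueDepth > sla.maxQueueDepth) {
        alertOps('Queue depth ' + metrics.queueDepth + ' exceeds the SLA limit of ' + sla.maxQueueDepth);
        addWorker(); // automated response: scale out before the backlog reaches users
    }
    if (metrics.responseMs > sla.maxResponseMs) {
        alertOps('Response time ' + metrics.responseMs + ' ms breaches the agreed SLA');
    }
}

function alertOps(message) { console.log('ALERT: ' + message); } // stand-in for paging or e-mail
function addWorker() { /* hook into your operations tooling here */ }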
The next blog entry will examine integration of these monitoring topics into a “traditional” SDLC and with Waterfall methodologies. Future topics will include: integration with RAD methodologies, working with infrastructure, communicating the appropriate information to keep the customer happy, and monitoring technology guidelines.
I just wanted to make sure you all heard that we have been busy over the summer and released IBM Case Manager 5.1.1 last Friday.