IBM Operational Decision Manager Blog
On Friday last week I travelled to the UK to meet with one of our insurance customers. They have deployed JRules v7.1 for a variety of underwriting-related tasks and are looking to expand their usage of rules over the next 2 years to cover core underwriting as well as fraud detection. They clearly have a strong architecture team and have made tactical use of IBM services to accelerate project start-up and reduce risk. I was impressed by the maturity of their rules practice, with clearly defined processes for business-controlled changes into production (what they call L1 changes) as well as IT-controlled changes, such as changes to the XOM/BOM/B2X (called L2). All deployment to test/staging and production is fully automated, with jobs polling Rule Team Server and deploying L1-validated rulesets on the hour with no IT intervention.
They have invested in a robust governance process that incorporates 2 levels of business review (6 eyes!), strict permission control and well-defined interactions between IT and the business. The customer architecture and services teams have clearly done an outstanding job to ensure the product is being used sustainably and to its potential. We reviewed their existing product usage, as well as their plans for implementing underwriting and fraud detection rules.
In summary, it is always fascinating to see the product being used in the real world. As ever we came back with a head full of ideas for enhancements, and it was a pleasure to hear about the customer's success. In particular, they have reduced the need for traditional IT changes: business changes go live within 1 day, versus several weeks (minimum) for IT changes.
I hope we can convince the customer to travel out to IBM Impact next year to present their experiences to a wider audience.
IBM WODM v8: A business rules management system. Typical usage is to externalize business rules from a host application, often for reasons of agility, transparency, performance, business-user accessibility or regulatory compliance.
Differences: The table below attempts to summarize the major differences between the two approaches.
Daniel Selman | Tags: streams, bep, decision-management, analytics, cep, events, processes, bpm, batch, brms
I thought I'd close out 2011 with 5 guiding principles for companies striving to be more competitive and agile over the next 12-36 months. I'd love to hear your comments (and plans) related to these! You can comment below or reach me on Twitter at @danielselman. Happy Christmas! Have a fun, compassionate, healthy and prosperous 2012!
1. Exploit Historical Data
Enterprises amass huge volumes of transactional and event data. They use distributed batch processing to aggregate and classify historical data to gain insight and advantage. In some industries sophisticated predictive models are built from the historical data. Batch processing is very compute intensive and must typically complete within hours.
2. React in Time
To complement the insight computed from historical data in batch, the enterprise reacts to events. Millisecond, or sub-millisecond, response times are required to ensure enterprises stay ahead of competitors. In many cases simplifying algorithmic assumptions are made to speed computation, compared to more exact batch techniques. The next night's batch run may therefore change the enterprise's world view...
Monitoring dashboards provide humans with the insight they need to remain in control and to supplement the automated system with human intelligence.
Intelligent reaction requires a rich set of action primitives: send a formatted email, send an SMS, start/stop a process, block network access, set profile data.
Events tell us something about the state of the world at time T. What was believed to be true at time T may not be true at time T+1. How reliable are our event streams?
3. Make Consistent, High-Quality Decisions
Decisions made by the enterprise must be of high-quality and consistent across multiple touch-points: streams, processes, batch, and transaction processing. Decisions and processes emit events. Subject Matter Experts need sophisticated and scalable testing and simulation tools to ensure rules function as designed.
4. Performance, Performance, Performance
Both vertical and horizontal scalability are critical due to the volumes of data to be processed. Competitive advantage is based on crunching more data, faster, using more sophisticated algorithms, producing smarter outcomes. Elastic compute grids are required to easily scale and meet peak loads.
5. Move from Segments to Customers
Enterprises need to make the right decision for customer Daniel Selman, not the right decision for white males living in Brittany, France. Every customer has a unique historical context and personalized rules, and these must be taken into account during decision making. The enterprise context for a single customer may be spread across an Event Processing Network, CRM system, process instances, transactional databases and batch results. Customers may benefit from a distributed authoring experience to author and manage their own rules, on their own devices.
Daniel Selman | Tags: big-data, personalization, brms, decision-management, analytics, big-rules
Business rules has proved itself to be a key enabler of the agile enterprise. One of our major challenges for 2012 is how to best expose business rules (which often implement decisions) to a range of IT and business systems in an efficient, intuitive and safe manner. Decisions do not operate in a silo, they are often triggered by events or processes, make use of data managed by Master Data Management and may kick off processes or fire events. Making good decisions sometimes means referring to past outcomes, expressed as predictive models.
Simultaneously I am seeing an explosion in the number of rules (Big Rules) as well as the size of the simulation datasets (Big Data). 5 years ago a customer with 10K rules was pushing the envelope. Today we have customers requiring 5M rules and running simulations against 400M records. Many of the use cases for extreme numbers of rules are driven by requirements for mass-personalization: a bank has 20M customers and each customer has 10 personalization rules. We move from a few very large rulesets (e.g. mortgage approval) to millions of very small rulesets. Some of these rulesets may be very personal indeed, perhaps even running on a personal device.
The good news however is that we have already made significant progress!
In fact, every week I get inquiries from customers and product teams who are integrating business rules into their solutions: Master Data Management, Smarter Cities, Smarter Government, Adaptive Case Management, Health, Stream processing... the possibilities and opportunities are enormous!
The next time you visit the BRMS pages on ibm.com, a survey window will pop-up asking you to complete a survey. If you are interested in providing us some feedback, please click Yes. We are looking for feedback to help improve the pages.
The survey should not take more than 5 minutes to complete. Head on over to the Business Rules Management page if you're interested in completing the survey.
Just launched this week, the Business Rule Management Systems (BRMS) on-demand recorded presentations library page is designed to give you insight into the BRMS product family without requiring a significant investment of time.
Whether you’re looking to learn more about BRMS best practices or how to improve insurance claims processing using business rules, you can find it here. At your convenience, choose from this library and learn how BRMS can help you reduce costs and increase revenue opportunities by automating the key decision points that drive your most important business processes and applications.
Now that Impact is over and IBM BPM 7.5 has been officially announced I can talk a little more about its rules capabilities. If you want a general overview of IBM BPM 7.5, Sandy Kemsley has a nice write-up here and you can watch Phil Gilbert here.
For the past few months I have been leading a small, but very dedicated, team that has been improving the consumability of core ILOG BRMS components/APIs and supporting the BPM team as they perform the integration of the ILOG components within the BPM design and runtime. I believe it will lead to a considerably better rules experience for BPM customers as well as provide a smooth migration path to the full ILOG BRMS, if and when required. Of course it also just makes good engineering sense to share ILOG's expertise and code in the business rules space with our comrades in the BPM team.
The rules capabilities embedded within BPM have the following characteristics:
I am excited to see how the technology gets used by BPM customers, particularly as for many it will be their first exposure to a "real" business rules capability! ;-)
A great new article has just been published by Chabane Hamma and Antoine Melki, entitled Implementing a continuous build system for WebSphere ILOG Business Rule Management Systems. This article highlights how to implement and deploy a continuous build system for WebSphere® ILOG Business Rule Management Systems Rule Team Server and Decision Validation Services using the JRules Java™ APIs and customization facilities. It also shows how to configure and deploy the system across three machines.
I highly recommend checking it out, a very well written article.
John Pingel, a consultant with CoreLogic, recently emailed me with some good questions related to quality metrics for rule projects. Each of these questions deserves a chapter of a book, but I will take a stab at some answers -- please don't hesitate to comment if you have other ideas.
What design/authoring quality metrics can JRules gather for an IRL rule-based system?
Here John was imagining something similar to the common software complexity metrics, such as those calculated by the popular JDepend library. Let's see how each could be applied to a rules-based system:
Number of classes and interfaces: Here we can easily calculate the statistics for the rule project, in terms of the complexity of the BOM, VOC and B2X as well as the rules: from Rule Studio, export a rule project statistics report.
Afferent Couplings: The way I interpret this is the relative responsibility of a given rule or rule package, i.e. how many rules can a given rule enable?
For example, given the two rules below:
Rule: bar.setDaniel
  if the name of 'the customer' contains "Smith"
  then set the name of 'the customer' to "Daniel Selman";

Rule: foo.isSelman
  if the name of 'the customer' contains "Selman"
  then print "Hello Dan!";
You can create the following query to discover the data-driven relationship between the rules:
Find all business rules such that each business rule may disable "bar.setDaniel" or each business rule may enable "bar.setDaniel"
As you'd expect, the result of the query is foo.isSelman, because the setDaniel rule may enable it: setting the customer's name to "Daniel Selman" can make the foo.isSelman condition true.
Unfortunately we don't currently have queries at the package level or support wildcards in the rule name in the query, so you have to create a query for each rule in your package and then aggregate the results. It should be possible to create a Rule Studio plugin that runs the "each business rule may enable" query on each rule within a package and rolls the results up at the package level. A good starting point for such a plugin is the studio/samples/brmmanagement/queryexecution sample.
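Such a plugin would drive the JRules query API to collect the per-rule results (the queryexecution sample shows how); the roll-up step itself is simple. Here is a minimal sketch in plain Java of just that aggregation, assuming the per-rule results have already been fetched into a map (the rule names and method names here are illustrative, not product APIs):

```java
import java.util.*;

// Sketch of the package-level roll-up only: given per-rule query results
// (rule name -> list of rules it may enable), total the counts by package.
// Fetching the per-rule results would use the JRules query API, as in the
// studio/samples/brmmanagement/queryexecution sample; that part is omitted.
public class PackageRollup {

    static Map<String, Integer> rollUpByPackage(Map<String, List<String>> perRule) {
        Map<String, Integer> byPackage = new TreeMap<>();
        for (Map.Entry<String, List<String>> e : perRule.entrySet()) {
            // Package name = everything before the last '.', e.g. "bar" in "bar.setDaniel".
            String rule = e.getKey();
            int dot = rule.lastIndexOf('.');
            String pkg = dot < 0 ? "" : rule.substring(0, dot);
            byPackage.merge(pkg, e.getValue().size(), Integer::sum);
        }
        return byPackage;
    }

    public static void main(String[] args) {
        Map<String, List<String>> perRule = Map.of(
            "bar.setDaniel", List.of("foo.isSelman"),
            "foo.isSelman", List.of());
        System.out.println(rollUpByPackage(perRule)); // {bar=1, foo=0}
    }
}
```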
An easier way to run the query manually is to right-click on a rule and select "Find Dependencies":
This is essentially the inverse of the above (for this simple example...), showing which rules can cause a given rule to become enabled. E.g.
Find all business rules such that each business rule may enable "foo.isSelman"
Abstractness (A)
I don't think this is relevant for rules.
Instability (I)
Here we can use the same definition as for code: the ratio of efferent coupling (Ce) to total coupling (Ce + Ca), such that I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package.
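The arithmetic is trivial, but for completeness here is a small, self-contained Java sketch of the metric (this is not JRules API code, and the coupling counts in the example are made up):

```java
// Instability metric I = Ce / (Ce + Ca), as defined above.
// Ce = efferent couplings (rules this package may enable or disable),
// Ca = afferent couplings (rules that may enable or disable this package).
public class InstabilityMetric {

    /** Returns I in [0,1]; 0 = completely stable, 1 = completely unstable. */
    static double instability(int efferent, int afferent) {
        int total = efferent + afferent;
        // A package with no couplings at all is conventionally treated as stable.
        if (total == 0) {
            return 0.0;
        }
        return (double) efferent / total;
    }

    public static void main(String[] args) {
        // e.g. a package that can enable 3 other rules and be enabled by 1
        System.out.println(instability(3, 1)); // 0.75
    }
}
```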
Distance from the Main Sequence (D)
I don't think this is relevant as we do not have the concept of abstractness.
Package Dependency Cycles
We can interpret this as loops in the "enable/disable" relationships.
If we introduce a third rule:
Rule: baz.setDan
  if the name of 'the customer' contains "Selman"
  then set the name of 'the customer' to "Dan Selman";
And for all three rules we run "Find Rule Dependencies > Rules which may enable or disable this rule" we get the following graph:
To detect such cycles our custom plugin would have to walk the dependency graph pushing rules or packages onto a stack. If a rule or package already exists on the stack then we have detected a cycle.
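Here is a minimal, self-contained Java sketch of that stack-based walk over a generic "may enable" graph. The graph here is hand-built for illustration; a real Rule Studio plugin would populate it from the dependency query results:

```java
import java.util.*;

// A minimal sketch of the stack walk described above, over a generic
// "may enable" graph keyed by rule name (the rule names are just examples).
public class CycleDetector {

    /** Depth-first walk; returns true if a cycle is reachable from 'rule'. */
    static boolean hasCycle(Map<String, List<String>> enables,
                            String rule, Deque<String> stack, Set<String> done) {
        if (stack.contains(rule)) {
            return true;            // rule already on the stack: cycle detected
        }
        if (done.contains(rule)) {
            return false;           // already fully explored, no cycle through it
        }
        stack.push(rule);
        for (String next : enables.getOrDefault(rule, List.of())) {
            if (hasCycle(enables, next, stack, done)) {
                return true;
            }
        }
        stack.pop();
        done.add(rule);
        return false;
    }

    /** Checks every rule in the graph as a starting point. */
    static boolean hasCycle(Map<String, List<String>> enables) {
        Set<String> done = new HashSet<>();
        for (String rule : enables.keySet()) {
            if (hasCycle(enables, rule, new ArrayDeque<>(), done)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Illustrative graph: foo.isSelman and baz.setDan may enable each other.
        Map<String, List<String>> enables = Map.of(
            "bar.setDaniel", List.of("foo.isSelman"),
            "foo.isSelman", List.of("baz.setDan"),
            "baz.setDan", List.of("foo.isSelman"));
        System.out.println(hasCycle(enables)); // true
    }
}
```

The same walk works at the package level by keying the graph on package names instead of rule names.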
In subsequent posts I will try to address John's other questions:
Live webcast for Business Partners: Opportunity Identification (OI) Crash Course: Process Improvement - When to Lead with BPM, Decision Management or Both
Join us on March 16 @ 11:00am ET / 8:00am PT to hear Brett Stineman (IBM WebSphere Product Marketing Manager) discuss when to lead with BPM, Decision Management or both; choosing the right approach is essential to lower implementation cost, effort and risk.
Attend this lively 60-minute webcast and learn:
Who Should Attend?
A few weeks back a new fix pack of WebSphere ILOG JRules was released. This updated release addresses an issue that prevents the installation of add-ons on top of Rule Studio when installed in a dedicated Eclipse IDE. This fix pack is for licensed users of JRules V7.0.x and V7.1.x; follow the posted instructions to install it. To download your free 90-day trial, all you need to do is complete the short registration form, select your operating system (Unix, Windows, or Mac OS) and download WebSphere ILOG JRules. All of the WebSphere ILOG business rules product features are enabled during the trial period. When you download this trial, you are also entitled to submit technical problems and questions through our WebSphere ILOG BRMS technical forum.
Instructions to install the WebSphere ILOG JRules fix pack were posted by the WebSphere support blog. I also recently discovered a great resource on developerWorks for the WebSphere ILOG JRules trial. This page provides additional details on the trial, support information, and product documentation for the WebSphere ILOG JRules family. I highly recommend checking it out.
I just came across Jerry Cuomo's annual WebSphere trends blog post. Every year he posts the emerging trends for the year. One topic of interest was Connecting Business Events, Rules, Decisions and Process. In 2011, WebSphere will continue the integration of rules, events and decisions for both the tooling and runtime. Also exciting: the integration of JRules in the WebSphere Decision Server made so much sense that WebSphere is also looking at incorporating a rules feature across many of its middleware products. Check out Jerry's 2011 WebSphere Trends.
And new this year, the in-house WebSphere band, Mind the Gap produced a music video that presents the trends:
The Global WebSphere Community has also launched an interactive series of open mic webcasts designed to bring the IBM development lab experts to you. The 60-minute monthly GWC Lab Chat sessions will feature a different technology or technology trend each month and provide an opportunity for you to interact with the IBM development lab experts as you relay questions, offer recommendations, voice feedback and opinions and become part of the conversation with the IBM technologists and development teams.
Join us on Wednesday, March 16th @ 11:00am ET as we kick off our first GWC Lab Chat with our host Jerry Cuomo, IBM Fellow, VP and Chief Technology Officer for the IBM WebSphere products, as he introduces what is sure to be an engaging conversation on upcoming trends in WebSphere, including Business Events and Business Rules.
And don't forget, if you are interested in learning more about these trends, register to attend Impact 2011 to hear more from expert speakers.
In the last edition of the IBM SOA and BPM newsletter, Cheryl Wilson and Brett Stineman explain how decision management technologies – specifically business rule management systems (BRMS), business event processing (BEP) and business analytics (BA) – support business process management (BPM). In particular, this article provides a quick overview of how BRMS, BEP and analytics can be used either together, or individually, to help automate and improve operational decision-making, offering the ability to achieve greater business outcomes.
Read their article Confused about when to use BRMS, BEP, BA and BPM?
For more information about how business rules and events can improve the timing and quality of operational decision-making, I recommend reading the white paper.
JeffGoodhue | Tags: bpm, bpi, websphere, ilog, brms, rule_engine, business_process_improvem..., business_rules, business_process_manageme...
At a recent BrainStorm San Francisco conference, IBM hosted an Innovation Workshop titled The Business Rules Advantage for Process Improvement. The content included a section on the synergies between Business Rules Management and Business Process Management, including a live demo. You may also wish to watch the video companion to this entry. Below, let's run through how this demo was created and presented, with a focus on the integration itself. We will not address the reasons to use BRM and BPM, a good idea for a future entry ;) ...
Our use case is eligibility claims processing. For this integration between BPM and BRMS, we assume we have a rule-based decision service ready and waiting to be executed along with a business process that will use that rule-based decision service. For more on creating a rule-based decision service, you might see this post. Using IBM WebSphere Process Server and ILOG JRules, we will apply a common, integration lifecycle including:
Developer Review of Business Rules
First, we can review the business rules available in ILOG Rule Studio, both a standard business rule (BAL rule) and a decision table:
Automated SCA Component creation
Once we have an idea of the rules available, we can reuse these rules in a few different ways within our business process. Web Services are a nice, loosely coupled option and easily available with the Hosted Transparent Decision Service (HTDS) in ILOG JRules, where a WSDL is auto-generated with no code deployment. For this integration, we chose another option: an SCA Component. For more on SCA Components, see this developerWorks article. In short, "Service Component Architecture (SCA) is a set of specifications which describe a model for building applications and systems using a Service-Oriented Architecture." Further, "SCA encourages an SOA organization of business application code based on components that implement business logic, which offer their capabilities through service-oriented interfaces...."
With IBM WPS and ILOG JRules, we have a tight integration available via the LA71 SupportPac: WebSphere ILOG JRules Integration for WebSphere Process Server. Installed in ILOG Rule Studio and IBM WebSphere Integration Developer (in the same Eclipse workbench), we can easily generate an SCA Component for a RuleApp (or deployable set of rules). First we select the SCA Component from RuleApp wizard:
Then, we provide the RuleApp and select the rule-based decision service:
Finally, we provide a few more naming details:
And Presto! We have all the necessary SCA interfaces and resources generated to call the rule-based decision service within a business process.
Developer Review of Business Process
Let's take a look at the business process next and see where we can use this SCA Component to call the ILOG BRMS. Here is the business process:
And here is the specification of the Invoke block in the above process using the SCA component and rule-based decision service named Eligibility:
Developer Testing of Business Process
From here, we can easily test our process within WebSphere Integration Developer and see some output from the server logs:
Business Changes to the Business Rules
Now, if the business needs to make a change to the business rules without modifying the process, they can log in to ILOG Rule Team Server and view the rules in an easy and powerful interface designed specifically for business users. Here is an example of the same decision table in Rule Team Server:
The business needs to make two changes to the rules:
Here is the updated decision table after the changes were made:
The rules can then be deployed from Rule Team Server using a simple wizard as depicted below:
See Change in Business Process
Once the new rules are deployed, the process can be immediately invoked and the new rules are taken into account with new resulting output in the server logs. Below is a sample of the log output with the same input data as above, where we can see that now WA state has a manual process where it did not earlier:
Above, and in the video companion to this entry, we see how managing business rules separate from business process allows dynamic changes to decision logic within an organization. By combining IBM WebSphere Process Server and IBM WebSphere ILOG JRules, true business process improvement is realized! Thank you for reading and feel free to provide feedback in the Comments section.
For the past month or so I have been pondering how to describe the primary applications of rules technology on a compact but useful slide. If that sounds intriguing to you check out this post on javarules.org.
We are getting some nice reviews of BRMS V7.0 and some interesting articles generally. James Taylor has written two very nice pieces on V7 on his JT on EDM blog. James' deep knowledge of BRMS is evident, because he "just gets it"... :-)
Billy Newport from the IBM WebSphere eXtreme Scale team has also been experimenting with deploying JRules within a fault-tolerant (replicated memory) grid architecture. I've never met Billy personally but I feel like I know him well from my days at BEA Systems, and his many posts to theserverside! It's great to have talented engineers like Billy testing JRules to its limits and experimenting with new eXtreme use cases.
Internally the team has been simultaneously dealing with the last-minute minutiae of getting the release out of the door under IBM's product release process, and spending some time catching up on new technology. Towards the end of a release the development team enters what we call the "cool down period", during which we can experiment with any technology that catches the eye. It's an opportunity to step away from the coal face for a couple of weeks and build some technical proof-of-concepts or catch up with some training. After the rigors of dealing almost exclusively with bug fixes for several months it is important for the engineers to stretch their coding muscles again, and get fired-up and creative before we start the NEXT release.
The Product Management team however is very busy at the moment, discussing the core user stories for our next release. Over the coming weeks the developers will be spending a lot of time with the PMs fleshing out the details of the stories, working on estimation, and finally release planning.
And so the wheel turns, and a new release cycle will soon be upon us...
ILOG BRMS V7.0 is a major release, and that is probably a major understatement! Based on the foundations of the JRules 6.x architecture, BRMS V7.0 has two major overarching themes: business user empowerment and enabling platform agnostic decision services. Supporting these two themes are a panoply of technical advances across the whole product stack.
The BRMS team has worked incredibly hard over the past 18 months to deliver what is without a doubt the most ambitious release we have attempted since I joined ILOG (my first release was JRules 4.6). As an indication of the work involved, the first construction iteration for BRMS V7.0 started 19th November 2007! Here is a very quick round-up of the major new features. I hope to dive into the details of each of these over the coming days and weeks.
I just got back from an on-site visit with one of our customers in the USA, a major insurance company. The customer is using JRules 6.6.x for commercial property insurance underwriting. The trip was very interesting as we were able to get hands-on with their large rule project for 3 days -- approximately 40,000 rules. We gained some fascinating insights into their development challenges and in exchange we performed an audit to ensure they were getting the most out of JRules. I came back with a couple of bugs to fix and a headful of ideas for enhancements.
One of their major challenges is that they simultaneously work on 3 versions of their platform: a 3-month major release (say 2.0), a 1-month minor release (1.1) and a weekly "patch" release (1.0.5). The patch release is edited in Rule Team Server by business users, while the other two releases are edited in Rule Studio by Java developers with JRules training. Due to this rolling release schedule they version the rules in a source code control system (using branches) and have to perform regular merges of the changes coming from RTS into RS.
The RTS repository does not support branches. This was a conscious limitation when we designed RTS as we believed that branching and merging would be problematic for a business user. The development process we therefore recommended is to treat RTS as a satellite user of the source code control system.
As you can see one of the RS users essentially "proxies" the changes coming from RTS into her local workspace.
Due to the textual nature of BAL rules (if-then-else) merging them is relatively straightforward. Decision Tables and Trees are more problematic as they are persisted as complex XML documents. We spent some time discussing the APIs that we provide that might enable a graphical merge capability for Decision Tables.
We also spent quite a bit of time optimizing the RS build time for these large projects. Here is a checklist:
I hope these tips will help as your projects get larger. Rest assured that we take real-world feedback sessions such as this one very seriously and we are constantly trying to stay one step ahead in the race for better performance.
It's my pleasure to introduce Pierre-Henri Clouin, an ILOG colleague, for this post on Complex Event Processing (CEP). CEP has emerged as one of the "hot topics" in the rules space over the past 12 months. Pierre-Henri is based in Sunnyvale, California and has spent several months looking at the technical capabilities of the CEP products as well as how they are positioned. His first post provides a nice introduction to the subject matter.
Pierre-Henri Clouin, ILOG
As interest grows in CEP, we have started receiving inquiries about how CEP and BRMS compete with or complement each other. After discussing with customers, prospects, and vendors, and reviewing a wide range of use cases, a few patterns have emerged.
CEP shines when:
These core capabilities are well documented. For additional details, Mark Tsimelzon’s CEP Complexity Scorecard summarizes them very effectively.
On the other hand, a BRMS addresses three critical needs:
ILOG BRMS does not compromise on performance either, as benchmarks and actual deployments with demanding customers, such as some of the largest websites, payment networks, underwriters, and telecom operators, have shown.
The map above sums it up: a CEP engine complements a BRMS for use cases with large data rates, low latency, and rich decision automation and management. The CEP engine pares down the volume of events and only passes interesting events on to the BRMS to perform a rich decision process. Examples abound, notably in fraud management and national security.
Conversely, CEP overlaps with BRMS at the low end of data rates, latency requirements and rulesets. This is the area where we’ve seen some confusing accounts and claims and where a CEP engine provides limited value on top of a BRMS.
In upcoming posts, we will continue to explore and discuss best practices surrounding BRMS and CEP. We encourage you to reach out to us with related experience and questions.