IBM Operational Decision Manager Blog
On Friday last week I travelled to the UK to meet with one of our insurance customers. They have deployed JRules v7.1 for a variety of underwriting-related tasks and are looking to expand their use of rules over the next two years to cover core underwriting as well as fraud detection. They clearly have a strong architecture team and have made tactical use of IBM services to accelerate project start-up and reduce risk. I was impressed by the maturity of their rules practice, with clearly defined processes for business-controlled changes into production (what they call L1 changes) as well as IT-controlled changes, such as changes to the XOM/BOM/B2X (called L2). All deployment to test/staging and production is fully automated, with jobs polling Rule Team Server and deploying validated L1 rulesets on the hour with no IT intervention.
They have invested in a robust governance process that incorporates two levels of business review (six eyes!), strict permission control and well-defined interactions between IT and the business. The customer architecture and services teams have clearly done an outstanding job of ensuring the product is being used sustainably and to its potential. We reviewed their existing product usage, as well as their plans for implementing underwriting and fraud-detection rules.
In summary, it is always fascinating to see the product being used in the real world. As ever, we came back with a head full of ideas for enhancements, and it was a pleasure to hear about the customer's success. In particular, their success in reducing the need for traditional IT changes: a business change can go live within one day, versus several weeks (minimum) for an IT change.
I hope we can convince the customer to travel out to IBM Impact next year to present their experiences to a wider audience.
IBM WODM v8: A business rules management system. Typical usage is to externalize business rules from a host application, often for reasons of agility, transparency, performance, business-user accessibility or regulatory compliance.
Differences: The table below attempts to summarize the major differences between the two approaches.
Daniel Selman Tags: streams bep decision-management analytics cep events processes bpm batch brms 8,152 views
I thought I'd close out 2011 with 5 guiding principles for companies striving to be more competitive and agile over the next 12-36 months. I'd love to hear your comments (and plans) related to these! You can comment below or reach me on Twitter at @danielselman. Happy Christmas! Have a fun, compassionate, healthy and prosperous 2012!
1. Exploit Historical Data
Enterprises amass huge volumes of transactional and event data. They use distributed batch processing to aggregate and classify historical data to gain insight and advantage. In some industries, sophisticated predictive models are built from the historical data. Batch processing is typically very compute-intensive and must complete within hours.
2. React in Time
To complement the historical insight computed in batch, the enterprise reacts to events. Millisecond, or sub-millisecond, response times are required to ensure enterprises stay ahead of competitors. In many cases, simplifying algorithmic assumptions are made to speed computation, compared to more exact batch techniques. The next night's batch run may therefore change the enterprise's world view...
Monitoring dashboards provide humans with the insight they need to remain in control and to supplement the automated system with human intelligence.
Intelligent reaction requires a rich set of action primitives: send a formatted email, send an SMS, start/stop a process, block network access, set profile data.
Events tell us something about the state of the world at time T. What was believed to be true at time T may not be true at time T+1. How reliable are our event streams?
3. Make Consistent, High-Quality Decisions
Decisions made by the enterprise must be high quality and consistent across multiple touch-points: streams, processes, batch, and transaction processing. Decisions and processes emit events. Subject Matter Experts need sophisticated and scalable testing and simulation tools to ensure rules function as designed.
4. Performance, Performance, Performance
Both vertical and horizontal scalability are critical due to the volumes of data to be processed. Competitive advantage is based on crunching more data, faster, using more sophisticated algorithms, producing smarter outcomes. Elastic compute grids are required to easily scale and meet peak loads.
5. Move from Segments to Customers
Enterprises need to make the right decision for customer Daniel Selman, not the right decision for white males living in Brittany, France. Every customer has a unique historical context, and personalized rules, and these must be taken into account during decision making. The enterprise context for a single customer may be spread across an Event Processing Network, CRM system, process instances, transactional databases and batch results. Customers may benefit from a distributed authoring experience to author and manage their own rules, on their own devices.
Daniel Selman Tags: big-data personalization brms decision-management analytics big-rules 7,408 views
Business rules have proved themselves to be a key enabler of the agile enterprise. One of our major challenges for 2012 is how best to expose business rules (which often implement decisions) to a range of IT and business systems in an efficient, intuitive and safe manner. Decisions do not operate in a silo: they are often triggered by events or processes, make use of data managed by Master Data Management, and may kick off processes or fire events. Making good decisions sometimes means referring to past outcomes, expressed as predictive models.
Simultaneously I am seeing an explosion in the number of rules (Big Rules) as well as the size of the simulation datasets (Big Data). 5 years ago a customer with 10K rules was pushing the envelope. Today we have customers requiring 5M rules and running simulations against 400M records. Many of the use cases for extreme numbers of rules are driven by requirements for mass-personalization: a bank has 20M customers and each customer has 10 personalization rules. We move from a few very large rulesets (e.g. mortgage approval) to millions of very small rulesets. Some of these rulesets may be very personal indeed, perhaps even running on a personal device.
The good news however is that we have already made significant progress!
In fact, every week I get inquiries from customers and product teams who are integrating business rules into their solutions: Master Data Management, Smarter Cities, Smarter Government, Adaptive Case Management, Health, Stream processing... the possibilities and opportunities are enormous!
The next time you visit the BRMS pages on ibm.com, a window will pop up inviting you to complete a survey. If you are interested in providing us some feedback, please click Yes. We are looking for feedback to help improve the pages.
The survey should not take more than 5 minutes to complete. Head on over to the Business Rules Management page if you're interested in completing the survey.
Just launched this week, the Business Rule Management Systems (BRMS) on-demand recorded presentations library page is designed to give you insight into the BRMS product family without requiring a significant investment of time.
Whether you’re looking to learn more about BRMS best practices or how to improve insurance claims processing using business rules, you can find it here. At your convenience, choose from this library and learn how BRMS can help you reduce costs and increase revenue opportunities by automating the key decision points that drive your most important business processes and applications.
Now that Impact is over and IBM BPM 7.5 has been officially announced, I can talk a little more about its rules capabilities. If you want a general overview of IBM BPM 7.5, Sandy Kemsley has a nice write-up here, and you can watch Phil Gilbert here.
For the past few months I have been leading a small, but very dedicated, team that has been improving the consumability of core ILOG BRMS components/APIs and supporting the BPM team as they perform the integration of the ILOG components within the BPM design and runtime. I believe it will lead to a considerably better rules experience for BPM customers as well as provide a smooth migration path to the full ILOG BRMS, if and when required. Of course it also just makes good engineering sense to share ILOG's expertise and code in the business rules space with our comrades in the BPM team.
The rules capabilities embedded within BPM have the following characteristics:
I am excited to see how the technology gets used by BPM customers, particularly as for many it will be their first exposure to a "real" business rules capability! ;-)
A great new article has just been published by Chabane Hamma and Antoine Melki, entitled Implementing a continuous build system for WebSphere ILOG Business Rule Management Systems. The article shows how to implement and deploy a continuous build system for WebSphere® ILOG Business Rule Management Systems Rule Team Server and Decision Validation Services using the JRules Java™ APIs and customization facilities, and how to configure and deploy it across three machines.
I highly recommend checking it out; it's a very well-written article.
Daniel Selman Tags: rules business analysis quality brms metrics 7,662 views
John Pingel, a consultant with CoreLogic, recently emailed me with some good questions related to quality metrics for rule projects. Each of these questions deserves a chapter of a book, but I will take a stab at some answers -- please don't hesitate to comment if you have other ideas.
What design/authoring quality metrics can JRules gather for an IRL rule-based system?
Here John was imagining something similar to the common software complexity metrics, such as those calculated by the popular JDepend library. Let's see how each could be applied to a rules-based system:
Number of classes and interfaces: Here we can easily calculate the statistics for the rule project, in terms of the complexity of the BOM, VOC and B2X as well as the rules: from Rule Studio, export a rule project statistics report.
Afferent Couplings: The way I interpret this is the relative responsibility of a given rule or rule package. That is, how many rules can a given rule enable?
For example, given the two rules below:
Rule: bar.setDaniel
if the name of 'the customer' contains "Smith"
then set the name of 'the customer' to "Daniel Selman";

Rule: foo.isSelman
if the name of 'the customer' contains "Selman"
then print "Hello Dan!";
You can create the following query to discover the data-driven relationship between the rules:
Find all business rules such that each business rule may disable "bar.setDaniel" or each business rule may enable "bar.setDaniel"
As you'd expect, the result of the query is foo.isSelman, because that rule will only fire if the setDaniel rule has been triggered.
Unfortunately we don't currently have queries at the package level, or support for wildcards in the rule name within a query, so you have to create a query for each rule in your package and then aggregate the results. It should be possible to create a Rule Studio plugin that runs the "each business rule may enable" query on each rule within a package and rolls the results up to the package level. A good starting point for such a plugin is the studio/samples/brmmanagement/queryexecution sample.
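The aggregation side of such a plugin might look something like the sketch below. The `queryMayEnable` callback is hypothetical, standing in for the real Rule Studio query API (which I won't reproduce here), and the stub results in `main` are just the two example rules above:

```java
import java.util.*;
import java.util.function.Function;

/** Rolls per-rule "may enable" query results up to the package level.
    The query callback is a hypothetical stand-in for the Rule Studio API. */
public class PackageCoupling {

    /** For each rule, runs the per-rule query and counts the enable-couplings
        arriving at each package. Returns package name -> coupling count. */
    public static Map<String, Integer> couplingsByPackage(
            Collection<String> ruleNames,
            Function<String, Set<String>> queryMayEnable) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String rule : ruleNames) {
            // Run the "may enable" query for one rule...
            for (String enabled : queryMayEnable.apply(rule)) {
                // ...and attribute the coupling to the enabled rule's package.
                counts.merge(packageOf(enabled), 1, Integer::sum);
            }
        }
        return counts;
    }

    static String packageOf(String qualifiedRuleName) {
        int dot = qualifiedRuleName.lastIndexOf('.');
        return dot < 0 ? "" : qualifiedRuleName.substring(0, dot);
    }

    public static void main(String[] args) {
        // Stubbed query results for the two rules from the example above.
        Map<String, Set<String>> stub = Map.of(
                "bar.setDaniel", Set.of("foo.isSelman"),
                "foo.isSelman", Set.of());
        System.out.println(couplingsByPackage(stub.keySet(), stub::get));
    }
}
```

A real plugin would replace the stub with calls into the query execution sample and present the rolled-up counts per package.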
An easier way to run the query manually is to right-click on a rule and select "Find Dependencies":
This is essentially the inverse of the above (for this simple example...), showing which rules can cause a given rule to become enabled. E.g.
Find all business rules such that each business rule may enable "foo.isSelman"
Abstractness (A)
I don't think this is relevant for rules.
Instability (I)
Here we can use the same definition as for code: the ratio of efferent coupling (Ce) to total coupling (Ce + Ca), such that I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package.
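As a quick worked example (using made-up coupling counts, not real JRules output), a package with three outgoing enable/disable couplings and one incoming coupling would score I = 3 / (3 + 1) = 0.75, i.e. fairly unstable:

```java
public class Instability {
    /** I = Ce / (Ce + Ca): 0 = completely stable, 1 = completely unstable. */
    public static double instability(int efferent, int afferent) {
        int total = efferent + afferent;
        // A package with no couplings at all is conventionally treated as stable.
        return total == 0 ? 0.0 : (double) efferent / total;
    }

    public static void main(String[] args) {
        // 3 outgoing couplings, 1 incoming: I = 3 / (3 + 1) = 0.75
        System.out.println(instability(3, 1)); // prints 0.75
    }
}
```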
Distance from the Main Sequence (D)
I don't think this is relevant as we do not have the concept of abstractness.
Package Dependency Cycles
We can interpret this as loops in the "enable/disable" relationships.
If we introduce a third rule:
Rule: baz.setDan
if the name of 'the customer' contains "Selman"
then set the name of 'the customer' to "Dan Selman";
And for all three rules we run "Find Rule Dependencies > Rules which may enable or disable this rule" we get the following graph:
To detect such cycles our custom plugin would have to walk the dependency graph pushing rules or packages onto a stack. If a rule or package already exists on the stack then we have detected a cycle.
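That stack-walking idea can be sketched as a depth-first search over the "may enable" graph. The graph literal in `main` is my own illustrative reading of the three example rules (note that baz.setDan may re-enable itself, since the new name still contains "Selman"), not output from JRules:

```java
import java.util.*;

/** Detects enable/disable cycles in a rule dependency graph via depth-first
    search, keeping the current path on a stack as described above. */
public class RuleCycleDetector {

    public static boolean hasCycle(Map<String, List<String>> enables) {
        Set<String> finished = new HashSet<>();
        Deque<String> path = new ArrayDeque<>(); // rules on the current DFS path
        for (String rule : enables.keySet()) {
            if (dfs(rule, enables, path, finished)) return true;
        }
        return false;
    }

    private static boolean dfs(String rule, Map<String, List<String>> enables,
                               Deque<String> path, Set<String> finished) {
        if (path.contains(rule)) return true;      // already on the stack: cycle
        if (finished.contains(rule)) return false; // already fully explored
        path.push(rule);
        for (String next : enables.getOrDefault(rule, List.of())) {
            if (dfs(next, enables, path, finished)) return true;
        }
        path.pop();
        finished.add(rule);
        return false;
    }

    public static void main(String[] args) {
        // Illustrative "may enable" edges for the three example rules.
        Map<String, List<String>> enables = Map.of(
                "bar.setDaniel", List.of("foo.isSelman", "baz.setDan"),
                "baz.setDan", List.of("foo.isSelman", "baz.setDan"));
        System.out.println(hasCycle(enables)); // prints true
    }
}
```

The same traversal works at the package level: push package names instead of rule names and merge each package's outgoing edges before recursing.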
In subsequent posts I will try to address John's other questions:
Live webcast for Business Partners: Opportunity Identification (OI) Crash Course: Process Improvement - When to Lead with BPM, Decision Management or Both
duffys Tags: brms bpm decision-management rules-management 5,275 views
Join us on March 16 @ 11:00am ET / 8:00am PT to hear Brett Stineman (IBM WebSphere Product Marketing Manager) discuss when to lead with BPM, Decision Management, or both. Choosing the right approach is essential to lowering implementation cost, effort and risk.
Attend this lively 60-minute webcast and learn:
Who Should Attend?