Here are two articles that I wrote for publication on the BR Community web site. The articles were published in the July 2005 and October 2005 editions of the Business Rules Journal. The folks at the BRJ have been kind enough to approve republication of the articles in this blog. I hope you find them interesting and useful.
IBM Operational Decision Manager Blog
Here is an article that I wrote for publication on the Business Rules Community web site in 2003. I thought some of you might find it interesting.
Keeping Business Rules Separate from their Enforcement
by Oscar Chappel
The Business Rules Manifesto raises a very important question: "What is the difference between a business rule and its enforcement?" It seems that it is difficult to get strong agreement on exactly what constitutes a business rule. I have found that it is even more difficult to reach agreement on the definition of a business rule's enforcement.
When I think of a business rule, I think of the many possible definitions that Ron Ross published in The Principles of the Business Rule Approach. Ron provided 10 different definitions on pages 183 and 184. These definitions range from the very simple "...[A]n explicit statement of a constraint that exists within a business's ontology" to the very complex "...[A] statement that defines or constrains some aspect of the business ... [which is] intended to assert business structure, or to control or influence the behavior of the business. [A business rule] can not be broken down or decomposed further into more detailed business rules.... [I]f reduced any further, there would be a loss of important information about the business." Regardless of the definition used, it is very difficult to explain exactly what a business rule really is. It seems that we should be able to agree that business rules describe the way business people want their business to operate.
Business rules are stated in many different ways but always using natural language. They can be stated in business procedure manuals, where they are often stated in terms of business policies. They can be stated in government regulations and manuals such as the Hospital Prospective Payment System Manual from the Centers for Medicare and Medicaid Services (CMS). They can be stated in business contracts, where they are represented by the terms and conditions of the contract. They can be stated in service level agreements. They can be 'common knowledge' among the business executives, management, and staff: "this is the way we always do it." Regardless of the source of the business rules, little attention is given to the enforcement of those rules. What is to be done if the business rule is violated? This example from the health care domain is illustrative.
These business rules leave the enforcement up to the reader's imagination. The enforcement rules are missing. What actions should be taken, by whom, and when, when these business rules are violated?
The traditional approach to dealing with the missing enforcement rules is to bury the business logic in application software, stored procedures, database constraints, and other application source code, essentially hiding the enforcement rules from the business people and the software developers. Over time, the rules become increasingly unmanageable and costly to maintain. Any change in business rules or policies results in a software change project that can take months to complete and cost the business both dollars and lost revenue. This scenario is the reason for the renewed interest in business rule systems.
It is no wonder that it is difficult to achieve consensus on the meaning of Enforcement Rules. When I think of the enforcement for business rules I think of software that has a base of knowledge, facts, business policies, and, yes, business rules that express 'what should be' in the business environment. I think of objects that represent this knowledge, stored in some form of fact repository, most probably a relational database. I think of a persistence service that facilitates the insertion and extraction of these fact objects. I think of interfaces that allow business people to create these fact objects using natural language. I think of objects that are created by a business application to represent 'what is' at the point in time enforcement rules are expected to be employed. I think of a few, well-organized, logical enforcement rules that match the objects representing 'what is' to the objects that represent 'what should be'. I think of business rule systems that help a business run smoothly by enforcing the business rules and policies. I think of economic solutions that empower the business people to control their business.
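As a toy illustration of that architecture (every class and method name here is my own assumption, not any product's API), a fact repository persists the 'what should be' objects, and enforcement reduces to querying it at the moment 'what is' arrives:

```python
# Toy sketch of a fact repository for 'what should be' knowledge.
# All names are illustrative assumptions, not a real rule product's API.

class FactRepository:
    """Stands in for a persistence service over a relational database."""
    def __init__(self):
        self._facts = []

    def insert(self, subject, requirement):
        """Business people would create these facts via a natural-language
        front end; here we insert them directly."""
        self._facts.append((subject, requirement))

    def extract(self, subject):
        """Return every 'what should be' requirement for a subject."""
        return [req for subj, req in self._facts if subj == subject]

repo = FactRepository()
repo.insert("order over $10,000", "manager approval")   # 'what should be'

observed = {"order over $10,000"}                       # 'what is'
for subject in observed:
    for requirement in repo.extract(subject):
        print(f"'{subject}' requires '{requirement}'")
```

The point of the sketch is only the separation: the repository holds the business knowledge, while the small loop at the bottom is the only piece of enforcement logic.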
The business rule community seems to suffer from a handicap when we attempt to codify the enforcement rules for business rules. We are constrained by the things we know, our past experiences, and the habits we have developed over years of embedding business logic within business applications. We see business rules as "if ... then ... else" branching control statements like those we would write in COBOL, Java, C, C++, C#, or any other programming language, and we write business rules in this style believing it a good way to enforce the business experts' visions of the way the businesses should operate. Our rule systems become bloated and begin to perform poorly. We struggle to find ways to process thousands, tens of thousands, even hundreds of thousands of rules rather than finding a way to reduce the number of rules while simultaneously achieving the desired result.
Over a period of nearly two decades I have had the opportunity to learn about both rules and object-oriented technologies and to apply the things I have learned in several industries including defense, insurance, manufacturing, entertainment, finance, and health care. Over the years, I have observed that an understanding of object-oriented programming technologies and predicate logic is invaluable in implementing successful business rule applications. I have learned that a rule engine is, first and foremost, an optimized pattern-matching algorithm. I have learned that the pattern-matching algorithm must test at least one of the conditions of each and every rule in a rule set to determine if the rule conditions should be tested further and if the rule can become a candidate for firing. I have learned that the more rules I provide the rule engine, the longer it takes to process those rules. I have learned that objects embody business facts ('what should be') and business observations ('what is'). I have learned that classes can implement predicates (methods that return a Boolean value) and that these predicates can be used in the conditions of enforcement rules. I have learned that good enforcement rules provide patterns that enable a rule engine to efficiently compare 'what is' to 'what should be' and to recommend the appropriate actions when 'what is' is not 'what should be'. I have learned that good enforcement rules can be reused across multiple industries when the business knowledge and policies are represented in a consistent manner. I have learned that good rule engine implementations will replace variables in rules with business objects and create 'virtual' rules that can replace the thousands, tens of thousands, even hundreds of thousands of rules that can cause poor performance.
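A few of these lessons can be sketched in miniature. The class, the predicate, and the tiny match loop below are my own illustrative assumptions, not any particular engine's API:

```python
# Sketch: a predicate is a Boolean method on a class, and a rule engine
# is, at heart, a pattern matcher that tests the rule's condition against
# every object, producing one 'virtual' activation per matching object.
# Names and value ranges are illustrative only.

class RevenueCode:
    def __init__(self, value):
        self.value = value

    def requires_occurrence_code(self):
        """Predicate usable directly as a rule condition."""
        return 420 <= self.value <= 429

def activations(condition, objects):
    """Test the condition against each object; matches become candidate
    activations for firing. More rules means more of these tests."""
    return [obj for obj in objects if condition(obj)]

codes = [RevenueCode(v) for v in (120, 420, 424, 450)]
matched = activations(RevenueCode.requires_occurrence_code, codes)
print([c.value for c in matched])   # [420, 424]
```

Because the engine must probe every rule's conditions this way, shrinking the rule set shrinks the work, which is exactly why the 'virtual' rule approach pays off.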
Using the health care examples in Table 1 we can develop the following business rules.
These business rules are not enforcement rules; they do not describe the actions to take when the rules are violated. These rules tell us 'what should be' when a Revenue Code with a value in the range 420 to 429 occurs on a health care claim. But they do not tell us what to do when 'what is' -- i.e., the codes reported on an actual claim -- doesn't match 'what should be'. The usual approach is to reject a claim that violates the business rule and to provide a message explaining the cause for the rejection.
These business rules have additional problems. For example, the use of the X to signify a 'wild card' would lead the uninitiated to believe that a Revenue Code with a value 426 would be a valid code. This is, in fact, not the case. The Revenue Codes in the 42X range can actually take on valid values of 420 to 424 and 429. So Revenue Code 42X does not mean the continuous interval 420 through 429. The same holds true for the Revenue Codes with values in the 43X and 44X ranges.
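This pitfall carries straight into code: a naive range test would wrongly accept 426, so the valid values belong in an explicit set. A minimal sketch, using the 42X values given above:

```python
# 42X does not mean the continuous interval 420-429: as described above,
# the valid values are only 420 through 424 and 429.
VALID_42X = {420, 421, 422, 423, 424, 429}

def is_valid_42x(code):
    """Membership test against the explicit value set, not a range."""
    return code in VALID_42X

print(is_valid_42x(424))   # True
print(is_valid_42x(426))   # False; a naive 420 <= code <= 429 range
                           # test would wrongly accept this value
```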
There are many approaches to writing the enforcement rules for the business rules stated above. Among the most common is writing rules similar to these:
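The example rules are not reproduced in this republication, but a hedged reconstruction of the style, built from the Occurrence Codes mentioned later in the article, might look like the following (the claim shape and message text are my own assumptions):

```python
# Illustrative reconstruction of the 'one hard-coded rule per value set'
# style critiqued here. Claim shape and messages are assumptions.

def check_42x_occurrence_11(claim):
    """If a 42X Revenue Code is reported, Occurrence Code 11 must also
    be reported; otherwise reject with an explanatory message."""
    if claim["revenue_codes"] & {420, 421, 422, 423, 424, 429}:
        if 11 not in claim["occurrence_codes"]:
            return "Reject: a 42X Revenue Code requires Occurrence Code 11"
    return "Accept"

# Near-identical functions would repeat this pattern for Occurrence
# Code 35 and for each of the other Revenue Code value sets.
print(check_42x_occurrence_11({"revenue_codes": {420, 450},
                               "occurrence_codes": set()}))   # rejected with a message
print(check_42x_occurrence_11({"revenue_codes": {420},
                               "occurrence_codes": {11}}))    # Accept
```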
There will be a total of six such rules, one for each of the Revenue Code value sets. This doesn't appear to be too large a challenge, but when one considers that there are currently more than 890 revenue codes that interact with other codes -- e.g., Value Codes, HCPCS Codes and so on -- the number of rules can quickly become unmanageable. This example illustrates how claim clearing houses end up with tens of thousands of health care claim edit rules.
Change is continuous in the health care industry. So imagine what happens when the format for Revenue Code values changes to a four-digit format containing a leading zero, as happened in 2000. Now all the rules must be changed to insert a leading zero for all Revenue Code values. If a new Revenue Code with value 0426 is implemented, a new condition ("or Revenue Code 0426") could be added to the current rules. Alternatively, completely new rules might be written to account for the new facts: "Revenue Code 0426 requires Occurrence Code 11" and "Revenue Code 0426 requires Occurrence Code 35." Either way, the maintenance headaches, costs, and delays begin to mount.
A more efficient alternative lies in the exploitation of object technologies and unification, a technique that causes variables to be instantiated in rules, much like dynamic, parameterized SQL. Using these approaches one might create relationships between the various codes and enable the codes to 'know' about and find effective relationships. One might use a model similar to that in Figure 1. From this model, one might write a single enforcement rule similar to this:
where Claim, Reported Code, and Required Code are all variables that will be instantiated through unification as the rule is evaluated by an inference engine.
When supported with a 'knowledge base' of fact objects that implement the model in Figure 1, the single rule described above can be used to replace hundreds of 'one-off' rules. The rule is reusable because it can be applied equally well to the health care and warranty claim processing domains. In fact, this one rule can be applied in any domain where there is a proliferation of codes that are involved in relationships with other codes.
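A rough Python analogue of that single rule (the model below is my own sketch, since Figure 1 is not reproduced here; all names are illustrative): code objects 'know' which other codes they require, and the Claim, Reported Code, and Required Code variables are bound to objects as the rule is evaluated.

```python
# Sketch of the single, unification-style enforcement rule. The
# Reported Code / Required Code variables become the loop bindings
# below. Model and names are illustrative assumptions.

class Code:
    def __init__(self, value):
        self.value = value
        self.requires = []          # relationships to other Code objects

def enforcement_rule(claim_codes):
    """For every reported code and every code it requires: if the
    required code is absent from the claim, recommend an action."""
    reported = {c.value for c in claim_codes}
    actions = []
    for code in claim_codes:              # binds 'Reported Code'
        for needed in code.requires:      # binds 'Required Code'
            if needed.value not in reported:
                actions.append(f"reject: {code.value} requires {needed.value}")
    return actions

occ11 = Code("Occurrence 11")
rev0420 = Code("Revenue 0420")
rev0420.requires.append(occ11)            # a fact object, not a new rule

print(enforcement_rule([rev0420]))        # one recommended action
print(enforcement_rule([rev0420, occ11])) # [] -- nothing to enforce
```

Note that when the Revenue Code format changed to four digits, only the fact objects would change; the rule itself is untouched.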
Business rules and their enforcement can and must be separated in order to achieve the results that business people demand from their rule based applications. This task requires a thorough understanding of object-oriented and rule engine technologies, and predicate logic. Business experts must be pressed to describe the actions that must be taken when 'what is' doesn't match 'what should be'. Thorough analysis, the application of the principles of object-oriented technologies, the use of predicate logic, and a good understanding of rule engine operating characteristics can relieve rule bloat and enable businesses to achieve the advantages of business rule applications.
 The Business Rules Group, Business Rules Manifesto ~ The Principles of Rule Independence (Ver. 1.2), January 2003. Available at www.BusinessRulesGroup.org (in English as well as translations to other languages).
After a long journey from Paris yesterday I've arrived in Dallas, Texas for the October Rules Fest. I will be blogging the sessions I attend and I hope to catch up on the blog backlog that has built over the past few weeks. We've been heads-down on JRules 7 recently so please accept my apologies for the blogging lapse. The product should make up for it however!
"If I want to manage change, I need those drawings, that architecture..."
The keynote presentation by Dr. Leon A. Kappelman presented the Zachman enterprise architecture world-view. Although useful, I suspect the talk was rather high-level for the technical, implementation-focused audience. He discussed ontologies, as well as the overall role of enterprise architecture. His talk did provide useful context and an introduction to the rest of the day, however.
"Optimizing the whole as well as the parts..."
Rolando Hernandez followed up with an informative and amusing summary of system failures, using real-world examples and rules for reference. He then transitioned into a Letterman-style "Top Ten" list of best practices for avoiding system failure. He finished with a discussion of the differences between rule-based programming and traditional procedural programming, as well as the typical benefits: centralization, consistency, accessibility to the business, etc. He also placed the rules approach within the general context of the Zachman framework and enterprise architecture.
Lawrence Terrill then dived into the differences between procedural and declarative (rules-based) programming. Lawrence has been giving JRules training for many years and his depth of real-world experience showed in his presentation. He also touched on OOP and Domain Specific Languages. I thought his presentation was extremely concise and clear. If this is something you are interested in I recommend you check out his slides.
Dr. Gopal Gupta took us back to the 80s with a fascinating review of the Japanese 5th Generation project. He used the failures of the project to highlight the inherent challenges of declarative parallel programming and argued that the project was ahead of its time. He described the features that have been added to Prolog over the past 30 years to help overcome the initial limitations.
Jason Morris dove into the "modern expert system shell", ontologies, and the relationship between the two. He essentially described the challenge of extracting expert knowledge into an expert system shell. His presentation made me feel young again, quietly reminiscing about sitting in a Knowledge Representation AI MSc lecture at Edinburgh University! It's quite fascinating to me that we are seeing a lot of the "old school" papers and books being referenced once again by a new generation of engineers.
The day finished with two talks from the Drools guys, the first one on their temporal reasoning (CEP) extensions for Rete, the second on their web-based rule management environment. The CEP presentation was more technically interesting to me, while the second strayed into product-pitch territory -- earning them a rebuke from James Owen, the Master of Ceremonies! Of course I would have liked them to acknowledge more of the work all the commercial vendors have done in both these spaces...
Overall I'd give the day a 'B' grade. I got to chat with some of our customers, as well as attendees from other companies and universities. There is a broad range of interests represented, but so far the conference has lived up to its technical billing. Tomorrow I will present with Keith Lindsey from UBS. Wish us luck!
Dr. Dan Levine started the day with a plunge into the anatomy of the brain and how brain regions appear to be mapped to higher-level cognitive function. He discussed decision theory and risk assessment. Some of the points reminded me of the blog post I wrote a year or so ago on the neocortex. Although fascinating stuff, the link to hands-on business rules implementation is fairly tenuous.
Rolando Hernandez returned to discuss different "decision metaphors", such as IF-THEN-ELSE rules, decision tables, and decision trees. He showed how directed acyclic graphs (decision trees with node sharing) can be used, as well as Excel as a quick-and-dirty execution and simulation application. He reprised some of his themes from yesterday.
Carlos Serrano-Morales and Carole-Ann Berlioz-Matignon from Fair Isaac described the Fair Isaac decision management "blueprint". Basically, putting in place a feedback loop that allows automated decisions to progressively become more effective. The feedback loop is based on predictive models, optimization and case management. Carole-Ann described scorecards, the development process for scorecards, and how they can be used to incorporate feedback.
Jacob Feldman described the similarities and synergies between business rules, optimization and constraint programming (CP). He discussed a number of real-world CP applications he worked on, using ILOG CP and optimization technology. He used a nice interactive demo application to show constraints being propagated and solutions being found for several common domain problems: scheduling, resource allocation. Jacob also explained his vision for a Common CP API, an API that allows application developers to access different CP engines through an adapter layer.
John McQuary described knowledge management for a large intranet. Fluor has an in-house rule engine to implement 3D pipe routing plans. He also discussed a next-generation system that will use a rules-based explanation facility to detail why design decisions were made, based on engineering standards and best practice documents.
Carlos Serrano-Morales returned to give a pretty standard presentation on integrating BRMS within enterprise applications and the BRMS feature set. He included some interesting slides on DAGs and EDAG notation (DAGs with exceptions) for decision trees.
Keith Lindsey and I finished the day with a presentation on using JRules within a grid architecture at UBS Bank, and a review of the JRules sequential mode. The presentation went well and we had some good questions and discussion after the session, and in the bar afterwards.
Dr. Rick Hicks described approaches to rule base validation. He then discussed propositional logic systems and a transformation that allows rules to be converted to procedural functions and executed (solved) without a traditional engine. In essence, conflict resolution has been performed at design time, so that at runtime the engine performance is independent of the size of the rule base. From my limited understanding, the approach sounded similar to that of Corticon.
Gary Riley reviewed the 20+ year history of CLIPS and his overall development philosophy and methodology. He showed a nice chart illustrating that CLIPS performance has gone from 4 rules fired per second on an IBM AT 21 years ago to over 100K rules fired per second on a modern MacBook Pro! He then described the optimizations he added to CLIPS 6.3 that resulted in dramatic performance improvements on the academic benchmarks -- with two real-world applications the performance improvement is about 2x.
Mark Proctor described the collect and accumulate condition elements (keywords) added to Drools 4. He then discussed Drools 4 ruleflow groups, as well as the ability for rules to control ruleflows and vice versa. An interesting technical approach, but my fear is that such tight coupling between rules and "processes" (ruleflows) creates methodology (human) and reuse concerns.
James Owen recapped the Rete algorithm and conflict resolution.
Dr. Charles Forgy sketched out some fascinating work he has been doing on enabling the Rete algorithm's "match" cycle to run in parallel. He is using the jsr166y task-join framework to perform lightweight task dispatch.
Let me introduce Laurent Grateau, who led the technical effort with RES.NET and the certification process. Congratulations to the team once again on a great effort. Below are a link to our recent press release and Laurent's words about the certification process.
What is Certification?
By Laurent Grateau, ILOG.
The Rule Execution Server within Rules for .NET 3.0 recently passed Microsoft's rigorous certification program and is now "Certified for Windows Server 2008". Laurent Grateau from ILOG worked with Microsoft's engineers on the certification and he explains how the ISV certification program works below.
The Windows Server 2008 software certification program tests an application's compliance while running on the Windows Server operating system. The program features two designations (logos):
Microsoft makes a suite of tools available to ISVs working on certification, described in the sections below.
The Cookbook covers the most common application compatibility issues and provides tips on how to modify applications or redesign them to provide a quality experience on Windows Server 2008 and/or the Windows Vista operating system.
This document details the top 10 compatibility issues when developing applications for Windows Server 2008.
The Works with Tool is a wizard-style tool that can help determine an application's compatibility in enterprise environments in around four hours.
This tool creates two snapshots of fixed drives, registry settings, drivers, and services at different points in time and compares them to view differences.
A wizard-style tool to help walk through the requirements and tests in the Certification program. Users can take advantage of the built-in automation and results-tracking features and collaborate and track progress easily.
After some effort, RES.NET passed all 100 test cases defined by the Certification Program. The tests were run and certified by the independent test vendor Lionbridge through its VeriTest certification program.
Some example tests that were run:
Additional Certification Resources
A few weeks ago I asked folks to think about where they were using rules within their architecture (see the original post here). I thought it would be a good idea to peel back the layers a bit more and dive into where RFDN fits within a prototypical ASP.NET architecture while also including WCF and WF.
The execution object model (XOM) is typically referenced at all levels of the diagram since it represents instances of transient data that pass across tier boundaries and establish the data portion of the contract within the interfaces. In using WCF I have found that at times it may make sense to have a slightly different set of parameters for the WCF service contracts versus exposing the interface of business objects directly. This is the same issue one has when wrapping a simple data access object (DAO) method with a similarly named business object method. They may remain one-to-one for a while, but eventually many wrapped methods will evolve independently and require additional behavior.
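A language-neutral sketch of that evolution problem (Python here for brevity; every name is hypothetical): the service contract carries its own parameter object rather than exposing the business object directly, so the two can diverge later without breaking the contract.

```python
# Sketch: a service-contract parameter object kept separate from the
# business object it initially mirrors. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Claim:                        # business object crossing tiers
    claim_id: str
    revenue_codes: list
    internal_notes: str             # detail the service should not expose

@dataclass
class ValidateClaimRequest:         # parameter object for the contract
    claim_id: str
    revenue_codes: list

def to_request(claim: Claim) -> ValidateClaimRequest:
    """Mapping layer: one-to-one today, free to diverge tomorrow."""
    return ValidateClaimRequest(claim.claim_id, claim.revenue_codes)

req = to_request(Claim("C-1", [420], "adjuster notes"))
print(req)    # only the contract's fields survive the mapping
```

The mapping function is the seam: when the contract later needs extra behavior or fewer fields, only this layer changes.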
Finally, I added WF to the business tier because upon review it could be a better way to aggregate lower-level in-process object calls with out-of-process services. Orchestration adds some overhead, but it may reduce the complexity of a specific method. Obviously one has to weigh the cost/benefit for each method before employing WF to solve the problem.
RFDN adds value at just about all levels of this architecture. It can be used to establish how to stream the GUI for a portal all the way down to the database and be called by a stored procedure (I don't typically recommend this, but customers ask about it all the time). Moreover, it is essential tooling whenever you need to analyze the state of data at any given moment and you find that your code has suddenly become non-trivial.
It is true that I have omitted many other interesting frameworks and tools such as O/R and LINQ; but in spite of this, does this architecture look like something you could get excited about? Please post a comment below and tell me about your architecture.