In the post (I Want To Believe Redux) my goal was to describe a fictitious story about a bright architect who matured an in-house rules technology. Along the way, the architect experienced pressure points and addressed requirements in motion. At the start, the architect might have reached different conclusions before proceeding with the effort, for we all lack a critical view of something on the horizon from time to time. Below are some of the key pressure points I have experienced while walking in the “wilderness”:
Many rules, many places
Rules are in too many places: across multiple deployed applications (BPM, vertical solutions, and channels), teams, servers, and data centers. IT managers and architects eventually find themselves responsible for changes across these boundaries and are now thinking strategically about how to limit the impact of change across these deployed systems.
Rules are constrained by release cycles
When business-level needs are translated into software projects, the required changes are constrained by the existing software development life cycle (SDLC) of each group or team. Moreover, many changes may have nothing to do with presentation; they simply involve the behavior of a production system. Strategically, the IT manager or architect would like the change to be effective in between formal release cycles.
The collaboration problem
IT and technical team members need to collaborate with non-IT staff and business stakeholders. Requirements documents are not always up to date, and the master copy of the policy must constantly be translated into business language for audits, simple reviews, and other activities that require visibility. In fact, the process of translating business requirements into executable rule policies is very expensive because of the amount of human interaction involved. Strategically, organizations should look for technologies that streamline human interactions to avoid waste while keeping the goal in view. This is the whole point of declarative approaches to software.
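To make the declarative point concrete, here is a minimal sketch contrasting the same (invented) discount policy written imperatively, where the policy is buried in control flow, and declaratively, where the policy is data a non-technical reviewer can read and audit. The policy, names, and thresholds are all hypothetical.

```python
# Imperative version: the policy is buried in control flow.
def discount_imperative(customer_type, order_total):
    if customer_type == "gold":
        if order_total > 1000:
            return 0.15
        return 0.10
    if customer_type == "silver" and order_total > 1000:
        return 0.05
    return 0.0

# Declarative version: the policy is plain data. A reviewer can read
# it, an auditor can print it, and a change is an edit to a table
# rather than to branching logic.
DISCOUNT_RULES = [
    # (customer type, order total must exceed, discount rate)
    ("gold",   1000, 0.15),
    ("gold",      0, 0.10),
    ("silver", 1000, 0.05),
]

def discount_declarative(customer_type, order_total):
    # First matching rule wins.
    for rule_type, minimum, rate in DISCOUNT_RULES:
        if customer_type == rule_type and order_total > minimum:
            return rate
    return 0.0
```

The two functions compute the same result; the difference is who can safely read and change the policy.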
Architects and developers do not share the same language as analysts. Analysts often create and maintain glossaries that map technical terms to business terms, and they need a way to manage these glossaries while also solving their rule-visibility problems. A consistent vocabulary reduces (and in some cases removes) ambiguity and allows organizations to scale. This is a more abstract version of a “contract first” approach to design, applied at the level of language and human behavior.
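A glossary like this can itself be a managed artifact rather than a stale document. The sketch below is hypothetical (the terms and field names are invented); the idea is simply that a single shared mapping lets analysts speak in business terms while developers resolve them to technical identifiers, and that drift fails loudly instead of silently.

```python
# Hypothetical shared glossary: business vocabulary on the left,
# technical field names on the right. One artifact, one meaning per term.
GLOSSARY = {
    "Applicant Credit Score": "applicant.credit_score",
    "Requested Loan Amount":  "loan.requested_amount",
    "Debt-to-Income Ratio":   "applicant.dti_ratio",
}

def to_technical(business_term):
    """Resolve a business term to its technical identifier, failing
    loudly if the vocabulary has drifted out of sync."""
    try:
        return GLOSSARY[business_term]
    except KeyError:
        raise KeyError(f"Unmapped business term: {business_term!r}")
```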
Shared execution (SOA)
Once a decision is created, architects would like the same capability available across all of their deployed applications without touching other deployed servers that may be spread across data centers and geographies. In turn, they would like to take an SOA approach to decision deployment so that all channels and platforms can reuse a single policy.
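The shape of that SOA approach can be sketched in a few lines: one authoritative decision function sits behind a service boundary, every channel sends the same request shape, and a policy change is deployed exactly once. The decision logic, field names, and thresholds below are all invented for illustration.

```python
import json

# Hypothetical sketch of a shared decision service. Channels (web,
# branch, call center) call this service rather than embedding the
# policy themselves, so a policy change happens in one place.

def eligibility_decision(payload: dict) -> dict:
    """The single authoritative decision. Fields and thresholds are
    invented for this example."""
    eligible = payload.get("age", 0) >= 18 and payload.get("score", 0) >= 620
    return {"eligible": eligible}

def handle_request(raw_body: str) -> str:
    """Transport-level wrapper that a web framework or ESB would call
    with the request body; returns the JSON response body."""
    return json.dumps(eligibility_decision(json.loads(raw_body)))
```

Each channel only needs to know the request and response contract, not the policy itself.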
Code has become brittle
Business logic has reached a tipping point in algorithmic complexity. This complexity makes the rules opaque to business analysts and complicates communication; it also makes the rules increasingly difficult to debug and test. Specific problems lie in recursive patterns, complex data structures, and the amount of data that must be tested at various stages of a decision. The developer needs tooling to simplify rule expression and to model how rules are used. Simplifying logic and the expression of rules is critical for adoption by business analysts and other non-technical consumers of a decision.
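One small tooling idea that helps with the debug-and-test problem: record which rules fired at each stage of a decision, so the trace itself becomes something you can inspect and assert on. This is a hypothetical toy, not any particular rule engine; the rule names and facts are invented.

```python
# Hypothetical sketch: a rule evaluator that returns both the result
# and a trace of which rules fired, making each stage of the decision
# visible and testable.
def evaluate(facts, rules):
    """Apply each (name, condition, action) rule in order; return the
    final facts plus the names of the rules that fired."""
    trace = []
    for name, condition, action in rules:
        if condition(facts):
            facts = action(facts)
            trace.append(name)
    return facts, trace

# Invented example rules: charge a base fee, then waive it for gold tier.
RULES = [
    ("apply_base_fee", lambda f: "fee" not in f, lambda f: {**f, "fee": 25}),
    ("waive_fee_gold", lambda f: f.get("tier") == "gold", lambda f: {**f, "fee": 0}),
]
```

A failing test can then point at the exact rule that did or did not fire, instead of at an opaque final answer.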
Tools are too technical
Business analysts need a tool that fits them: non-technical staff cannot use the same tools as developers and need to manage rule artifacts independently of the SDLC used by IT groups.
Administrators need tools too
Administrators usually do not have tooling to manage how services are created and maintained. They need tools that leverage their experience and practices in the data center. Living with a solution and keeping ongoing deployment costs low is as important as up-front costs; however, these long-term costs are more difficult to address because they require a broader view of the organization.
In short, the pressure points are real and ever present. I recall my first dive into test-first programming and unit testing. In the end, I found I was still doing what came naturally, but the tooling added consistency, repeatability, and EVIDENCE that the code worked. As a byproduct of the effort, I discovered many other benefits that I could not have foreseen without using the methodology and tools: knowing more quickly whether a contract or object signature would stand the test of time, or finding that refactoring got a lot easier (just watch all the tests break). The same is true for rules. It would be nice if humans could understand the world a priori, but we don’t. Fortunately there are some walking around with a flashlight.
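That "evidence the code worked" is worth making concrete. A unit test pins down a contract, and a refactor that breaks the contract breaks the test the moment it runs. The helper and its contract below are invented purely for illustration.

```python
import unittest

def normalize_rate(percent):
    """Invented helper: convert a whole-number percentage to a fraction.
    The test below pins down this contract; change the signature or the
    behavior during a refactor and the test breaks immediately."""
    return percent / 100.0

class NormalizeRateTest(unittest.TestCase):
    def test_contract(self):
        # These assertions are the evidence: not a belief that the code
        # works, but a repeatable check that it does.
        self.assertEqual(normalize_rate(15), 0.15)
        self.assertEqual(normalize_rate(0), 0.0)
```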