It has been a long week since my last entry ;-), as publishing a book uses a lot of bandwidth.
Before going into variability, I want to explain the issue that leads to it.
We often forget that services expose information that is carried by processes. Thus, if the structure of the information evolves, the change propagates to the services and the processes. Let me give you an example I encountered in a project: a product catalog may contain attributes for its products. Whether or not these new attributes were represented as new columns in a database, they ended up as new tags in a schema, such as <TV_Channels> for a video-on-demand product. Since the schema changed, the services accessing the product catalog changed signature, and the business processes using these services had to be regenerated and tested. Two product attribute changes were occurring per week, while the system test for the affected processes took two weeks on average per process. This ended up being a catch-22.
That being said, I now need to answer the following questions: how can I model the information to prevent structural changes, and how can I evaluate the testing effort of a business process?
On the first question, I wrote a full chapter on various techniques in my book, including the xsd:any approach I already mentioned in this blog, but the one we used in this specific case is the CharacteristicValue and CharacteristicSpec pattern from the TeleManagement Forum SID model for telco operators. This model defines a characteristic specification that describes the attribute with metadata, including the allowed values, the type, and the validity dates. The characteristic value itself holds a link to its specification, so that a common specification can be shared by many values.
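To make the pattern concrete, here is a minimal sketch of the idea in Python. The class and field names are simplified illustrations, not the normative SID definitions: the point is that attribute metadata lives in the specification, so adding a new product attribute adds data rather than a new schema element.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CharacteristicSpec:
    """Describes an attribute with metadata (simplified, not the SID schema)."""
    name: str
    value_type: str                       # e.g. "string", "integer"
    allowed_values: Optional[list] = None # None means any value is accepted
    valid_from: Optional[date] = None
    valid_to: Optional[date] = None

    def accepts(self, value) -> bool:
        return self.allowed_values is None or value in self.allowed_values

@dataclass
class CharacteristicValue:
    """A concrete value, linked back to the shared specification."""
    spec: CharacteristicSpec
    value: object

    def is_valid(self) -> bool:
        return self.spec.accepts(self.value)

# A new attribute such as "TV_Channels" is now just new data, not a new tag:
tv_channels = CharacteristicSpec("TV_Channels", "integer",
                                 allowed_values=[50, 100, 200])
vod_product = [CharacteristicValue(tv_channels, 100)]
print(all(cv.is_valid() for cv in vod_product))  # True
```

With this structure, the service signature stays stable when the catalog gains an attribute: only the specification data changes, not the message schema.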
On the second question, experience shows that the creation, change, and test effort of processes is roughly proportional to the number of arcs (connections) in a given business process. Even if you only change a small aspect, you will need to test all the internal variations. It is quite common to spend two to four person-hours of effort per arc in the process.
The following picture shows that with only 3 tasks and 5 nodes in a process you can have 10 arcs, so you may expect up to 5 days of testing.
This is somewhat related to the cyclomatic complexity metric used to estimate testing effort in software development.
Another important aspect of variability is rules and policies. A consistent enterprise approach to rules and policies requires the creation of a common business vocabulary, whose content must be aligned with the concepts in the information model.
With OMG's SBVR there is now a standard for the structure of rules and policies, but not for their content, which will always be specific to an industry and/or an enterprise. The vocabulary describes the core elements of the information model, while the rules content model defines the acceptable value ranges when rules or policies require them.
If we now integrate this information variability with SOA and BPM, but also with rules and policies, we can have business processes whose behavior is driven by the content of the information and is much less sensitive to changes. Using a business vocabulary for the rules, with a human-language-like rule or policy description, enables business users to manipulate the rules themselves and shifts the changes from IT to the business. In a future blog entry I plan to give real examples of such policies.
My regards to readers.