Here is a quick, public summary of MCIF (IBM Rational's Measured Capability Improvement Framework):
- Establish business and operational objectives
- Prioritize practices and define roadmap
- Accelerate adoption with tools and services
- Report, analyze and act on results
From a process engineer's perspective, business and operational objectives are a given, whether derived through MCIF or some other method. We certainly escalate suggestions of benefits that might be realized by exploiting an opportunity for change through the adoption of one or more practices, but we are typically tasked with attacking the shortcomings currently apparent to executives. For this posting, let's agree that we are given objectives.
Our task is then to find a set of practices that will move the teams toward achieving those objectives. Reqt #1: We cannot select practices without metrics.
In particular, I strongly assert that metrics which represent the current state, and which are expected to indicate the value (or loss) from process change, must be expressed before any attempt is made at practice selection. How else am I to compare the available practices and select from among them? We could implement any of them without baselining*, but afterwards we would be unable to tell 'different' from 'better.'
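To make the 'different' versus 'better' point concrete, here is a minimal sketch, assuming a hypothetical cycle-time measure (hours per use case) and entirely invented numbers. Only because the baseline was captured first can we ask whether the post-change sample is better, rather than merely different.

```python
from statistics import mean, stdev

# Hypothetical cycle-time measurements (hours per use case); illustrative only.
baseline = [14.0, 16.5, 15.2, 13.8, 17.1, 15.9, 14.6, 16.2]
after_change = [12.1, 13.4, 11.8, 12.9, 13.0, 12.5, 11.6, 13.2]

base_mean, base_sd = mean(baseline), stdev(baseline)

# Without the baseline we could only say the new numbers are "different";
# with it we can ask whether they fall below the prior typical range.
improved = mean(after_change) < base_mean - base_sd
print(f"baseline {base_mean:.1f}h, after {mean(after_change):.1f}h, improved={improved}")
```

The one-standard-deviation threshold here is an arbitrary illustration; a real comparison would use whatever significance test the metric warrants.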
How do we get from objectives to metrics? I once attended an ITMPI presentation by Dr. Victor Basili (of my alma mater) on the Goal Question Metric (GQM) approach, which has been applied to study the value of (waterfall) software process improvements.
Let's now assume that a successful GQM or other technique provides some metrics with which to select and assess process changes. MCIF would then prioritize these and define a roadmap for their adoption.
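To show the shape of that hand-off, a GQM breakdown can be kept as plain data. The goal, questions, and metric names below are invented for illustration; only the goal → question → metric structure comes from GQM itself.

```python
# Illustrative GQM tree; the goal, questions, and metrics are hypothetical.
gqm = {
    "goal": "Reduce the cost of delivering tested functionality",
    "questions": {
        "How much effort does a use case consume from analysis to test?": [
            "cost per use case point",
        ],
        "Where is rework concentrated?": [
            "defects found per review hour",
        ],
    },
}

# Flatten to the candidate metrics MCIF would prioritize into a roadmap.
candidate_metrics = [m for metrics in gqm["questions"].values() for m in metrics]
print(candidate_metrics)
```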
Metrics are necessary but insufficient for achieving process improvement. Reqt #2: Metrics must be decomposed into measures, and those measures (as well as their relationship to the other components of the metrics) must be communicated to the development team. Please note: this does not imply quotas for measures.
The distinction is important because members of a software development team have direct control over product and process measures but may have no ability to control (or even to view) metrics. An analyst may not know the average cost to deliver test results per use case point, but can directly affect the minutes required to outline the scenario currently under development. Process engineers need to provide the team with measures which can be viewed, tracked, and controlled by the development team members. Importantly, special-cause fluctuations in those measures can be identified immediately by the team and communicated to the process engineer.
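One way to sketch that arrangement, with invented names and numbers: the team records a measure it controls directly (minutes to outline a scenario), and a simple three-sigma control check against an earlier in-control period surfaces candidate special-cause points for the process engineer.

```python
from statistics import mean, stdev

# Hypothetical measure the analyst controls directly:
# minutes to outline each scenario, from an earlier in-control period.
baseline = [22, 25, 19, 24, 21, 23, 20, 26]
m, sd = mean(baseline), stdev(baseline)

# Simple control limits; points outside them are candidate special causes
# the team escalates to the process engineer immediately.
upper, lower = m + 3 * sd, m - 3 * sd

new_points = [24, 58, 21]
special_cause = [x for x in new_points if not lower <= x <= upper]
print(f"limits [{lower:.1f}, {upper:.1f}], special-cause points: {special_cause}")
```

Note that no quota is implied: the limits describe the process's own recent behavior, not a target imposed on the team.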
Metric? Measure? What's the diff? Gary Pollice provided better definitions, but here are mine:
- A measure communicates a value relative only to a scale. (millisecond, defect)
- A metric relates measures to one another to convey something no single measure carries. (average cost to deliver test results per use case point)
- Descriptive statistics are a special form of metric which relate a set of measurements to itself to predict expected values for such measurements. (longest-running transaction, typical defect severity)
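The same distinction in code, with invented numbers: each element of the list is a measure, a value derived by relating the set of measurements to itself is a descriptive statistic, and a value relating measures to other measures is a metric. The feature count and data are hypothetical.

```python
# Measures: raw values on a scale (milliseconds per transaction); hypothetical data.
transaction_ms = [120, 95, 310, 88, 102, 97]

# Descriptive statistic: relates the set of measurements to itself.
longest_running = max(transaction_ms)  # → 310

# Metric: relates measures to other measures
# (here, total transaction time per delivered feature).
features_delivered = 3
ms_per_feature = sum(transaction_ms) / features_delivered
```

An analyst can directly shave milliseconds off a transaction (a measure) without ever seeing, let alone controlling, `ms_per_feature` (a metric).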
*Baselining is a key part of Shewhart's PDCA/PDSA cycle.