An elephant is no more complex than a hummingbird. A finely crafted mechanical watch is more complex than a one-ton granite boulder. The three billion base pairs that comprise the human genome form an artifact of staggering complexity.
Can I meaningfully compare the complexity of the architecture of one software-intensive system to that of another?
Assuming that one can even calculate the complexity of a system's architecture, what value should one expect in that number? Simply knowing that system A has complexity X while system B (or a different implementation of A) has complexity Y may be a fascinating curiosity, but then what? Does one try to do something (refactor the architecture) to optimize the value of the metric? Possibly, but the danger, of course, is that one may end up optimizing that number to no material benefit for the use or economics of the system or, even worse, at the cost of destroying some other important aspect of the system.
An observation, then: I think it's a mistake to devise a single complexity measure for a system's architecture. Rather, I think that the only meaningful path is to devise a set of measures for each particular view of a system's architecture. This is a subtle difference, but an important one: one may have a system with a relatively simple logical architecture but a very complex deployment architecture (e.g., a nuclear simulation running on a massively parallel machine) or vice versa (e.g., a face recognition system running on a single processor). Another, related observation: across the spectrum of software-intensive systems, you'll find widely varying architectural styles. It seems to me that each style warrants a particular complexity profile (e.g., the structure of a real-time system will look very different from that of a layered, Web-centric enterprise system or a rule-based system).
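To make the per-view idea concrete, here's a minimal sketch in Python; the view names and the figures are entirely hypothetical. The point is that complexity is recorded as a profile, one measure per view, and never collapsed into a single scalar for the whole system.

```python
# A sketch of per-view complexity profiles; the views and numbers are
# hypothetical, and each view could carry its own kind of metric.
from dataclasses import dataclass

@dataclass
class ViewMeasure:
    view: str       # e.g. "logical", "process", "deployment"
    value: float    # whatever metric suits that particular view

# A nuclear simulation on a massively parallel machine:
# simple logical view, complex deployment view.
simulation = [ViewMeasure("logical", 3.0), ViewMeasure("deployment", 40.0)]

# A face recognition system on a single processor: the reverse profile.
recognizer = [ViewMeasure("logical", 35.0), ViewMeasure("deployment", 1.0)]

# Comparison happens view by view; no single number summarizes a system.
for a, b in zip(simulation, recognizer):
    print(f"{a.view}: simulation={a.value} recognizer={b.value}")
```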
It is relatively easy to devise a naive set of architectural complexity metrics (for example, by trying to adapt coupling and cohesion metrics to components at higher levels of abstraction, as in the sketch below), but it's not clear to me that there's any particular value in doing so. In these matters, I'm more of an inductive reasoning kind of guy. I'd approach the problem by selecting a pile of architectures (you could limit yourself to one particular architectural style, but then you'd have to be careful about applying that suite to other styles), get my mind around how one "feels" more complex than another, sort them into groups of relatively equal complexity... then analyze the heck out of them to see if I could discern what distinguishes each group from the others. That, I think, would lead to a more useful set of metrics.
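By a naive metric, I mean something like the following sketch, which adapts classic fan-in/fan-out coupling to components at the architectural level; the component graph here is invented for illustration.

```python
# A deliberately naive coupling metric lifted to the component level.
# The dependency graph below is hypothetical.
dependencies = {
    "ui":          {"services"},
    "services":    {"domain", "persistence"},
    "persistence": {"domain"},
    "domain":      set(),
}

def coupling(component: str) -> int:
    """Efferent plus afferent coupling: the number of components this
    one depends on, plus the number that depend on it."""
    efferent = len(dependencies[component])
    afferent = sum(1 for deps in dependencies.values() if component in deps)
    return efferent + afferent

for c in dependencies:
    print(f"{c}: coupling={coupling(c)}")
```

A number pops out for every component, but nothing in it tells you whether the system serves its use or economics, which is precisely my worry about optimizing such metrics for their own sake.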
Others (such as here) have explored this problem. Personally, I think the most fruitful path of investigation ties architectural complexity to entropy, that is, to a measure of the disorganization of a system and/or the size of its state space. Clearly, there's work to be done. I can "feel" that one system or implementation is more or less complex than another, but in many cases I can't empirically defend that feeling.
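One way to make the entropy intuition concrete, as a sketch under assumed data: treat the distribution of inter-component interactions as a probability distribution and compute its Shannon entropy. A well-layered system, where most interaction flows through a few seams, scores lower than one whose parts all talk to one another evenly. The interaction counts below are hypothetical.

```python
import math

def shannon_entropy(counts: list[int]) -> float:
    """Shannon entropy (in bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

layered = [9, 1, 1, 1]  # most interaction concentrated in one seam
tangled = [3, 3, 3, 3]  # interaction spread evenly across all pairs

print(shannon_entropy(layered))  # ~1.21 bits: more "organized"
print(shannon_entropy(tangled))  # 2.00 bits: maximal for four buckets
```

This captures only the disorganization half of the intuition; the state-space half would need something more like a count of reachable configurations, which grows multiplicatively with every loosely constrained part.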
Quote of the day: Complexity comes from a large number of parts that interact in a non-simple way. — Herbert Simon