As my blog reported earlier this week, I've returned from six weeks of back-to-back travel. Happily, I plan to stay put for the rest of the year so that I can get back to the Handbook. In my travels, I particularly enjoyed returning to OOPSLA, having been away from the conference for several years. This was also my first year at WICSA, where it was wonderful spending time with a number of the pioneers in software architecture. Speaking of pioneers, while in Vancouver, I also had a chance to meet up with my dear colleague Philippe Kruchten.
From these travels, I've accumulated a number of architectural issues I've been remiss in blogging about, so here I begin, in random order.
The term intimate computing has popped up from time to time, mainly in the context of human/machine interaction. Ray Kurzweil has taken this concept to its natural conclusion in his latest book, The Singularity Is Near.
Lately, I've been working with a number of customers using IBM's cell processor, a nine-processor shared memory chip. As I've said many times, the average developer does not have as a core competency the ability to write distributed, concurrent, and secure systems. This is not meant to be a reflection on the skills of any individual or group of developers: writing correctly functioning multi-threaded systems is intrinsically hard. Most developers are experienced in single-threaded applications; even on the Web, which is inherently distributed, one can merrily ignore issues of concurrency, simply because the disparate threads and processes are hidden in lower layers of abstraction. In such loosely-coupled distributed systems, this is as it should be. However, when you have tightly-coupled processors arranged around shared memory, as with the cell, you have a very different problem. Exploiting the power of such multiprocessor systems puts the problems of concurrency squarely in the developer's face, a programming problem I call intimate concurrency. In this space, the classical problems of resource locking, race conditions, deadlock, livelock, and starvation are ever-present. Compounding the problem is the fact that software tooling is lagging the hardware: there are only a handful of good tools out there to help one reason about the behavior of intimate concurrency.
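To make the resource locking problem concrete, here's a minimal Python sketch (my own illustration, not drawn from any cell codebase): several threads increment a shared counter, and the read-modify-write is only correct because a lock makes it atomic. Remove the lock and you have a textbook race condition, with updates silently lost.

```python
import threading

N_THREADS = 8
N_INCREMENTS = 100_000

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(N_INCREMENTS):
        # Without the lock, `counter += 1` is a non-atomic
        # read-modify-write: two threads can read the same old
        # value and one update is lost. The lock serializes it.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock held around each update, no increment is lost.
assert counter == N_THREADS * N_INCREMENTS
```

The fix is trivial here; the hard part in real systems is knowing which of the thousands of shared accesses need such protection, and doing so without introducing deadlock.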
What is the proper engineering reaction to this issue? First, you can ignore it by keeping everything single-threaded (but then you miss out on exploiting the theoretical computational capacity). Second, you can hide it. Hiding can happen in a number of ways: in the language (this is what the X10 programming language is all about), in patterns (see the work at the Center for Distributed Object Computing at Washington University), or in middleware.
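As a small sketch of what hiding can look like (in Python rather than X10, purely for illustration), a thread pool lets the developer write a pure function while the library owns the threads, queues, and synchronization:

```python
from concurrent.futures import ThreadPoolExecutor

# The worker is a pure function with no shared mutable state,
# so there are no locks for the application developer to manage.
def square(x: int) -> int:
    return x * x

# The executor hides thread creation, scheduling, and result
# collection behind a simple map interface.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

assert results == [x * x for x in range(10)]
```

The concurrency has not gone away; it has merely been pushed into a lower layer of abstraction, which is exactly the point.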
Is concurrency a legitimate architectural issue? Absolutely. Furthermore, returning to fundamentals, having a clear separation of concerns between the logical view of a system and its process view is a good thing, which is why in my architecture metamodel I separate these two, following Philippe's 4+1 view model.