A dear friend of mine, Joe Marasco, sent me a link to the proceedings of a workshop held in Princeton in celebration of John Archibald Wheeler's 90th birthday. Among many other things, Wheeler was the thesis advisor for Richard Feynman, one of my heroes. In those proceedings, George Ellis presented a paper titled True complexity and its associated ontology, which apparently cited my work in object-oriented design and classification; I've got a copy of the proceedings on order, and methinks I shall track down George and have a lively dialog with him.
I often think that if software had not found me, I would have been a quantum physicist, or a priest, or an itinerant musician. I suppose I've ended up being a little bit of each.
Software architecture, software engineering, and Renaissance Jazz
gbooch 120000P81R 475 Views
It is strangely comforting knowing the likely means of one's own death.
My father died of an aortic aneurysm; my uncle died of an aortic aneurysm; this summer, my 20-year-old nephew died of an aortic aneurysm; my sister has been diagnosed with an aortic aneurysm; just before Christmas, I too was diagnosed with an aneurysm of the ascending aorta. In effect, I have a live grenade in my chest, and the pin has been pulled. Although I am otherwise in excellent health, over time the risks of my dying from this defect lie on a curve that approaches one. This condition is operable, but it does require major open heart surgery involving circulatory arrest, and thus the risks of neurocognitive deficits, paralysis, renal failure, infection, or death also lie on a curve that approaches one. My current task, therefore, is to make a judgment as to where those curves intersect, at which time the risks of the aneurysm are greater than the risks of surgery. I have been searching the known universe for the right organization and surgeon to address my case, and thus far my search has led me to the University of Colorado Hospital, the Mayo Clinic, the Texas Heart Institute, and the Cleveland Clinic. IBM's chief medical officer has assisted my search, given the company's vast connections with medical institutions around the world. It is not clear how events will unfold, but in the interim, life does go on.
While this local drama consumes me, my thoughts and prayers go out to a much more global drama, and the lives of those devastated by the recent tsunami. The raw power of the earth and the ocean are humbling indeed, to us who try to control only tiny structures composed of silicon and thought.
This is a most somber beginning to the new year, I realize, but let me offer you my wishes for a healthy, prosperous, and lively year.
I typically ignore the Microsoft marketing machine: while occasionally entertaining, the messages therein are typically so void of detail, suspect of schedule, and full of noisy FUD that they are distracting. Recently, however, a colleague forwarded me a link to a presentation about Visual Studio 2005 Team System that I couldn't ignore, because of its use of phrases I remember writing several years ago.
I do respect Microsoft's work in this space: development is a team sport, and tooling for most domains requires far more than just a faster/better/cheaper compiler and debugger. In the VSTS Tech Ed 2004 presentation cited, Rick LaPlante and crew presented Team Studio. What first struck me was his use of the phrase "friction free" development. About four years ago, Alan Brown and I wrote a paper on collaborative development environments (since published in Advances in Computers, volume 59, and also mentioned in my EclipseCon keynote provided in the references section of the blog). Rick is correct - though he's a few years late in recognizing it - that integrated development tools are all about reducing the friction among different stakeholders. Speaking of stakeholders, the Microsoft presentation here is also virtually the same as what IBM Rational has described for several years with regard to our suites, and their mention of having a rich partner ecosystem is what Eclipse is all about, although the former is captive and proprietary while the latter is open.
Following the demo in Rick's presentation was interesting, but IMHO a bit naive: on the one hand, Team Studio does automate some of the mechanics of change control (that's a good thing), but it totally ignores the things that can be done to address the social dynamics of collaboration (that's a bad thing). Some of the things Team Studio automates simply add some ceremony to what you'd get in a development team that was already jelled and embodied good communication patterns, but the scenarios covered by Team Studio didn't necessarily encourage those good practices in teams that were geographically and/or temporally distributed or already somewhat dysfunctional. Messaging, presence, the formation of small, temporary work groups, lightweight versioning of work products, the existence of a virtual team meeting space: these are collectively critical to addressing those social dynamics, and that's the trajectory along which I see such tooling needing to go (as I also explained in the EclipseCon keynote).
I've been slogging through a three-foot-high pile of clippings and articles I've collected over the past several months, all randomly encountered factoids on various of the systems under study for the Handbook.
Politicians have think tanks, but check out Vanguard, one such group for our industry.
On a completely unrelated topic (I told you these were randomly encountered; this is sort of a meta-archeological dig I'm conducting), while most of you reading this are likely engaged in enterprise systems, there's a whole world out there beyond servers and such, as for example in the area of medical electronics. I've met some teams who build such devices (thank you!) and recently encountered a computed tomography system from Siemens. The physics for these devices is well understood, and while there continue to be hardware advances in terms of resolution and performance, the biggest focus in this space is the operational software; labs typically buy a machine (a million or so a pop) but then enter into a stream of software upgrades over the years, adding various features and improving visualization. I honestly know nothing about the software architecture of such devices - which is why it's on my list for the Handbook (I write books in order to learn, and there's soooo much that I don't know).
The past couple of months, Microsoft has unleashed a torrent of words detailing their marketecture for software factories. Alan Brown and Simon Johnston of IBM Rational have previously and very ably commented on this work, but Steve Cook of Microsoft has drawn me into the fray in his blog, so I feel compelled to reply: Steve wrote "I hope Grady Booch reads this"; well, I did :-).
I know many of the folks involved in Microsoft's software factory effort, and I very much respect what they are trying to do. We have differences of opinion, as you'll see in this blog, but it's good to watch Microsoft putting some energy into improving the development experience. As I've said many times here and elsewhere, software development has been, is, and will remain fundamentally hard, and whatever can be done to improve the profession of developing software is a Good Thing.
That being said, I'm disappointed that Microsoft chose the term "software factory," because it's an emotionally laden phrase that harkens back to extremely mature manufacturing methods that focus on stamping out endless copies of the same thing, although perhaps with slight variants therein. There's no doubt that reuse at the level of design patterns or, even better, vertically oriented architectural patterns is a Good Thing, but what Microsoft is proposing to do is not exactly like the manufacturing metaphor, and so their use of the term is a bit misleading (although Steve has curiously used the image of a conveyor belt when describing the Microsoft factory process). Tom DeMarco, in his book Why Does Software Cost So Much?, devotes a chapter to software factories in which he notes - and I agree with him - that "I think factory methods for software are dead wrong, witless, and counter-effective. Organizations that build good software know that software is an R&D activity, not a production activity. Organizations that try to make it into a production activity produce bad software (though potentially lots of it)."
At OOPSLA, Rick Rashid of Microsoft publicly spoke of their strategy (in a somewhat controversial way, as reported by Spencer F. Katt, which was a bit surprising given the typically frictionless Microsoft marketing machine). That strategy was subsequently reviewed in ADT Magazine. While I agree with much of that article, they too fell into the pit of taking the software factory term a bit too literally. Perhaps the best source of Microsoft's deep thinking in this space, in addition to their site, is the book by Jack Greenfield and others, titled Software Factories: Assembling Applications with Patterns, Models, and Tools. Therein you'll see Microsoft's emphasis upon reusable assets and tooling to support them.
To that end, there's considerable common ground between IBM's and Microsoft's approaches to the problem: we both agree that reusable components, as manifest both in code and in patterns, are the right next stage in cutting the Gordian knot of software. Indeed, IBM has been in the pattern space for some time, starting with many of the authors of the seminal book Design Patterns to the current work led by Grant Larsen, as manifest in the open standard we pioneered through the Object Management Group, the Reusable Asset Specification.
However, we do disagree with Microsoft's rejection of the UML in favor of proprietary domain-specific languages, as noted not only in Jack's book but also in Alan Wills's blog. To be clear, as Jim Rumbaugh has commented back to me, our observation - and that of our customers - is that the UML has proven itself useful much of the time, yet there are a few purposes for which it may be less appropriate. In many cases, the semantics of the UML are pretty close to what you need, although they are deeper than necessary; in such cases, a suitable UML profile is sufficient to focus the language, which allows you to leverage standard UML tools and training and yet eliminate the bloat. In those cases where the business concepts are more naturally expressed in a specialized syntax, inventing a suitable DSL is reasonable. At the extreme, this is essentially the path that Charles Simonyi has been treading for some years, a path that requires a very, very deep and integrated underlying semantic model. Indeed, as I've pointed out in one of my earlier blogs, the root problem is not simply making one set of stakeholders more expressive, but rather weaving their work into that of all the other stakeholders. This requires common semantics for common tooling and training, so even if you start with a set of pure DSLs, you'll most often end up covering the same semantic ground as the UML.
Wills's blog had a number of errors of fact, which Bran Selic has pointed out to me and which I'll paraphrase here. Alan wrote "So here's why we don't want to limit ourselves to the UML as a basis for our users' domain-specific language" and then went on to say:
"A careful look at the specialization mechanisms for UML reveals their limitations. Stereotypes and tagged values allow you to change icons etc, although even simple alterations like decorating a box to show the state of some property isn't within range. You can't change the semantic constraints, or invent new sorts of diagram or new categories of element." This is incorrect: a stereotype allows you to define a set of associated constraints (in OCL, for example) that can capture the characteristics of your domain-specific context. While it is true that you cannot violate the semantics of the metaclass that you have stereotyped, this is actually an advantage of the stereotypeing mechanism. A stereotype is a type-compatible specialization of an existing UML concept. Consequently, you can reuse standard UML tools and expertise even though you are using a domain-specific language. Of course, if you want a language that is incompatible with UML, that is OK as well (specifically, you can define it using MOF), but you will be losing some of those benefits.
"You can't take stuff away, so your users are always distracted by other options and elements and diagrams that aren't relevant to your language. Tools that use a UML editor as a front-end have to have a (very annoying) validation step before they generate their DB schema or whatever it is." Also incorrect: as Jim noted above, a UML profile can remove any metaclasses it chooses.
"UML only includes certain types of graphical format. If you want your language to include tree-structured diagrams, or tables, or math formulae, or block-structured text, or prose, or if you want hyperlinks - well, I think you'd have a hard time. While our initial offering won't include all those styles, we'd certainly like to support them at some stage in the future." Actually, the current UML spec really does not restrict graphical formats in any way -- it simply provides a standard set of notations, but not at the exclusion of other notations. In other words, there really is no "illegal" UML graphical syntax. The formal definition of a UML graphical syntax is an outstanding item before the OMG. While this is not good, it also means that Alan's criticisms about its graphical restrictions are misguided - and Microsoft too acknowledges that these different graphical representations are a future desire for them, not a present reality.
"An important aspect of the definition of a modern language includes how you interact with it: how you navigate and elide or elaborate and edit etc - think of your favorite GUI composing tool, in which the GUI-defining language is scarcely separable from its editor. You can't do anything about these aspects in UML." We agree - but this has nothing to do with the UML but rather is all about the tool environment. IBM's tooling approach is to build upon the open standard of Eclipse, not a proprietary IDE.
"What you get out the back of one of our tools says things like CheckinDesk and ConveyorBelt. What you get out the back of a UML tool says things like Class and Stereotype - much more difficult to read. (Yes, of course you could write translators, but it's an extra layer of hassle.)" This is also incorrect: using the UML, at the model level you get back stereotypes called CheckinDesk and ConveyorBelt. At any rate, why would anyone want to look at things at the XMI or metamodel level? XML is not really for human consumption, and ultimately, MDD is all about raising the level of abstraction for the developer.
"You have to understand all of UML (stereotypes etc) before you can create your language on top of it." Not true: you just need to know the subset that you are specializing (mostly it has to do with things such as classes and associations which is what most profiles specialize). Of course, if you want to specialize state machines, you need to know the metamodel. But, if you don't care about state machines, you can ignore them safely.
"Your users have to understand an editor that's intended for something far more complex than they probably have in hand - or at least whose complexities are in a different direction." Again, this is a issue that confuses tooling with language definition.
I hope that Steve and Alan read this :-)
I'm back from holiday but am now 12 hours from wheels up again to the east coast. I have some thoughts regarding Microsoft's software factory initiative which I've not had time to post, but will do so before the end of the week.
Speaking of holiday, our family took a cruise, and while a Fun Time Was Had By All, I was most taken by the degree of automation onboard. Docking the boat appeared to be an almost hands-free operation, and the stability system - even under moderate seas - was quite amazing in keeping the vessel upright (which I suppose is always an important use case). Doing a dig of a ship's operational system is on my list for the Handbook, although I've not yet started to identify which one to study.
I've been in the midst of planning Rational's projects with IBM Research for the coming year. We invested several million for Rational alone this year and will do the same for next year in the areas of model-driven development, quality, change management, middleware tools, and collaboration. A few of these plays are, quite frankly, long shots, but you need to do a few of them to seed and nurture the future. Some of these efforts will bear fruit in terms of yielding tangible products, while some will not, but even from these we'll learn a great deal. The depth of talent inside IBM Research is really quite amazing, and it's quite invigorating to work with these folks who are scattered in labs across the world.
I've told this story from time to time in my public lectures and I've decided to retire this tale, but before I do, I'll preserve it for reference in my blog.
My wife and I designed and built a home a few years ago, and being an alpha geek I just had to fill it with all sorts of automated elements. I hired a contractor to pull the wires (he put about 5 miles of Cat 5 wires in the walls) but as CTO/CIO of the home, I installed the rest of the network. Shortly after I booted the house for the first time, we invited some friends over for dinner. They arrived at the appointed time, rang the doorbell - but we never heard it. They knocked on the door - and we didn't hear that either - so they finally called us on their cell phone, while standing at the front door.
My doorbell had crashed.
Now, doorbells have very simple use cases: you push the button, it rings a tone inside the home. However, my implementation of said doorbell was a bit more complex, and I failed my user base by letting the bones of the underlying technology stick through. You see, the doorbell sends a signal to our PBX system, which I hacked to extract events (such as the doorbell being pressed). That event gets routed to an application server - running a non-Macintosh, non-Linux operating system, I might add - which has a daemon that intercepts various events (such as from the PBX, the security system, and so on) and in this case would send an event to the A/V subsystem, where a seasonally appropriate and pleasant tone would sound through the home. Alas, I failed to use Rational's own tools (Purify in this case) and I had a memory leak in my application server. The solution was to reboot that server, which brought the doorbell back to life.
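For the curious, the event chain described above can be sketched roughly as follows. All class, event, and tone names are hypothetical; this is of course not the actual code that ran my house, just an illustration of the publish/dispatch shape of it:

```python
# Illustrative sketch of the doorbell's event chain: PBX event -> daemon -> A/V.
# All names are made up; this is not the actual home-automation code.

class AVSubsystem:
    """Plays a tone through the home's audio zones."""
    def __init__(self):
        self.played = []

    def play_tone(self, tone: str) -> None:
        self.played.append(tone)

class EventDaemon:
    """Routes events from sources (PBX, security system, ...) to handlers."""
    def __init__(self):
        self.handlers = {}

    def register(self, event_type, handler) -> None:
        self.handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type, payload=None) -> None:
        for handler in self.handlers.get(event_type, []):
            handler(payload)

av = AVSubsystem()
daemon = EventDaemon()
# The hacked PBX reports a "doorbell" event; the daemon forwards it to A/V.
daemon.register("doorbell", lambda _: av.play_tone("seasonal-chime"))
daemon.dispatch("doorbell")
```

The fragility, of course, lies not in this routing logic but in everything around it - which is precisely why the leak in the application server took the doorbell down with it.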
I have a very demanding customer (my wife) who really doesn't like to have my software lying around on the floor, and so she was at first annoyed and then amused at the incident. The good news is that I've ripped out the first implementation (I'm not saddled by legacy software here) and my doorbell now works as any good little doorbell should, with all the complexity hidden below the surface.
Yet another example of why the primary task of the software development team is to engineer the illusion of simplicity.
I'm back from travel again, this time from trips to New York City and Chicago, where I've been working with a number of clients on their emerging enterprise architectures. The common theme I've encountered is that large enterprises are beginning to see their way out of the global economic slump and so are turning their attention to what they can do to extract value from their legacy systems, by unifying the artifacts and activities that reside across existing silos and by unifying their customer experience.
Speaking of legacy systems, one gentleman introduced me to the new phrase heritage software as a euphemism for old, tired software. A noble concept, but a rose by any other name is still a rose. It reminds me of phrases such as pre-owned vehicle and arbitrary termination of life.
Service-oriented architectures (SOA) are on the mind of all such enterprises - and rightly so - for services do offer a mechanism for transcending the multiplatform, multilingual, multisemantic underpinnings of most enterprises, which typically have grown organically and opportunistically over the years. That being said, I need to voice the dark side of SOA, the same things I've told these and other customers. First, services are just a mechanism, a specific mechanism for allowing communication across standard Web protocols. As such, the best service-oriented architectures seem to come from good component-oriented architectures, meaning that the mere imposition of services does not an architecture make. Second, services are a useful but insufficient mechanism for interconnection among systems of systems. It's a gross simplification, but services are most applicable to large-grained/low-frequency interactions, and one typically needs other mechanisms for fine-grained/high-frequency flows. It's also the case that many legacy - sorry, heritage - systems are not already Web-centric, and thus using a services mechanism which assumes Web-centric transport introduces an impedance mismatch. Third, simply defining services is only one part of establishing a unified architecture: one also needs shared semantics of messages and behavioral patterns for common synchronous and asynchronous messaging across services.
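The granularity point can be made concrete with a back-of-the-envelope sketch. The overhead and per-item work figures below are assumed for illustration only, not measurements of any real system:

```python
# Sketch of the granularity argument: with a fixed per-call overhead (network
# round trip, XML marshalling), one coarse-grained service call beats many
# fine-grained ones. The numbers are illustrative assumptions.

CALL_OVERHEAD_MS = 50   # assumed fixed cost per service invocation
WORK_MS_PER_ITEM = 1    # assumed cost of the actual work per item

def coarse_grained(items: int) -> int:
    """One service call that processes the whole batch."""
    return CALL_OVERHEAD_MS + items * WORK_MS_PER_ITEM

def fine_grained(items: int) -> int:
    """One service call per item: the overhead dominates."""
    return items * (CALL_OVERHEAD_MS + WORK_MS_PER_ITEM)

print(coarse_grained(1000))  # 1050 ms
print(fine_grained(1000))    # 51000 ms
```

However you tune the assumed constants, the shape of the result is the same, which is why high-frequency flows want a different interconnection mechanism than Web-protocol services.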
In short, SOA is just one part of establishing an enterprise architecture, and those organizations who think that imposing an SOA alone will bring order out of chaos are sadly misguided. As I've said many times before and will say again, solid software engineering practices never go out of style (crisp abstractions, clear separation of concerns, balanced distribution of responsibilities), and while SOA supports such practices, SOA is not a sufficient architectural practice.
One more thing before I go. If I were a betting man, I imagine my ability to predict the future success of many of these organizations would be quite high (and I don't mean their technical success, I mean the very life of the company itself). There are some organizations I encounter in which there's a tight connection between the CEO and CTO/CIO (and development teams) - these are the companies I expect will flourish, for at the highest levels of the company they understand the strategic weapon that lives in software, and the importance of building a development organization that's able to execute predictably and with agility. Sadly, there are too many organizations where the highest level of the company simply does not grok the value of software - and these are the organizations that will be overtaken.
I cast my vote late last week, in order to avoid the long lines that were expected (and are indeed materializing) at the polls today. Vote early, vote often, is my motto :-)
I'm not one of the many undecideds, but rather had made up my mind several weeks ago. Thusly robed in the extreme pleasure and honor of being able to cast a private vote in this democratic process, I strolled over to our local voting precinct - and waited about an hour to weave my way through the lines. When I finally got to the voting booth, I was surprised and delighted to note that our county had installed electronic voting machines. I wasn't able to read the label of the manufacturer, and I expected I'd draw some unwanted attention if I reached around behind or under the machine to look. The use case for voting was really quite straightforward: the polling officer identified my precinct, picked up a block that matched my precinct, and inserted it in the machine, bringing up the appropriate ballot for me. Voting was easy to do on the touch screen display, and changing votes/going back was even possible (I know, because I intentionally explored the edges of the use case). I wish a paper copy had been created; it seems like such a simple thing to do and, in this era of hanging chads and such, seems to be a prudent safeguard. I was also surprised to see that the machines had no UPS; they were plugged straight into the wall - one wonders what checkpointing is done in the event power fails. While I'm on this riff of surprises, I'm also surprised that there were no obvious parity checks: having a manual count of voters per machine and then matching them to the votes actually placed would be another simple and obvious check and balance.
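The parity check I have in mind is almost trivial to express. This sketch - with made-up machine names and counts, and no relation to any real voting system - reconciles the poll workers' manual tally against what each machine recorded:

```python
# Sketch of the "parity check" suggested above: reconcile the manual count of
# voters per machine against the votes the machine actually recorded.
# Machine names and counts are made up for illustration.

def reconcile(manual_counts: dict, machine_counts: dict) -> list:
    """Return the machines whose recorded votes differ from the manual tally."""
    return [machine for machine in manual_counts
            if manual_counts[machine] != machine_counts.get(machine)]

manual = {"machine-1": 412, "machine-2": 388}     # poll workers' tally
recorded = {"machine-1": 412, "machine-2": 391}   # three-vote discrepancy
print(reconcile(manual, recorded))  # ['machine-2']
```

A discrepancy wouldn't tell you which votes were wrong, of course, but it would flag a machine for audit - which is exactly the kind of cheap check and balance I was surprised not to see.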
As the day unfolds, I'll be glued to my favorite Internet radio and then hosting an election party where we'll watch the returns.
In the general architecture section of the Handbook, I've collected a set of references (a glossary, personal contacts, books, papers, presentations, sites, and so forth). Recently, a couple of readers pointed me to their work, which I'd like to highlight here.
First, check out the work of Jeff Garland and Richard Anthony on Large-Scale Software Architecture on their site. I'd overlooked the publication of this book, but thanks to Amazon, a corresponding set of atoms should be flying itself to Colorado. Second, Vaughn Vernon pointed me to his project to codify enterprise architectures, with chapters being posted on TheServerSide.com. Thanks to both Jeff and Vaughn for pointing me to their work.
Amazingly, I'm not scheduled to be on an airplane for at least two weeks, so I hope to make a dent in the physical pile of notes I've collected for the Handbook.
I see that Microsoft is poised to make some announcements about their offerings for domain-specific languages next week. I'll keep an open mind, but as I've posted before, I'm skeptical.
First, while I agree that development is a team sport and that multiple stakeholders must collaborate in weaving together their diverse, interdependent views, one still needs a common semantic basis for all those languages. If you accept that not unreasonable position, you will end up covering the identical semantic ground as has the UML - albeit in an open manner, quite unlike Microsoft's historical record. Second, does one really need separate languages, or is it sufficient to provide a common language with acceptable variations, as the UML is? As witnessed by the very slow take-up of C#, one may design a solid language, but then you have to support that language, provide sites/papers/books/courses to help people become fluent in it, and in general build up a community of interest and a community of practice. If you do otherwise, you end up with just another isolated language that adds to the babble that every development organization already has to speak. Development teams need greater simplicity, not greater complexity, in their programming model. Third, will any such set of domain-specific languages be sufficiently long-lived that any reasonable development organization would be willing to commit its people and resources to that set? If not, then such languages may be useful only for writing totally disposable software: remember that useful software tends to live on, although often retouched over time, and that means if the support for its expression evaporates, then your organization is, at worst, left hanging or, at best, forced to spend the overhead of porting that expression to yet another form.
I'm currently in Austin, participating in the annual meeting of the IBM Academy of Technology.
In an organization as large, deep, and broad as IBM, one ongoing challenge is the exploitation of cross-divisional and cross-discipline integration. The Academy is an essential mechanism for IBM to bring this kind of integration about, primarily by throwing many of IBM's best and brightest into one mix so that connections can be made at deep technical levels. I just finished listening to a fascinating presentation on nanotechnology - where else could a software geek like me hear about the latest developments in this space, and get coverage of the points of pain therein? Very high cool factor.
On a more pragmatic level, I connected with some of the folks who have pioneered performance and timing analysis tools for IBM's chip business. This is a field that's not only mature, but is pretty much at the core of all contemporary chip development processes. It didn't always use to be that way, but as the complexity of chips has grown, such best practices have proven essential to managing that complexity. In contrast, performance and timing analysis is highly underserved in most software shops except in obvious domains such as time-sensitive embedded systems. By way of reference, check out the pioneering work of Lloyd Williams and Connie Smith on performance engineering.
Just ahead of the latest typhoon I escaped from Japan, where I also survived a 5.7 earthquake as well as my keynote for the first IBM Rational Software Development Conference which drew around 1,500 attendees. I always enjoy my time in Tokyo: the city is quite electric (literally and figuratively), I can find the best uni, the Park Hyatt Tokyo is one of the classiest hotels in the world, and there are some very cool projects with which to work. Alas, at the moment, I'm attached to the web via a supremely inferior connection which makes it painful to even type, so I'll have to defer the juicy details until I find a fatter pipe.
I was asked a most interesting question by one of the Japanese press: do I see any differences in development styles around the world? Warning lights flashed in my head, telling me that this was a classic black hole and that crossing its Schwarzschild radius would permit me to offer a really stupid answer that in turn would shower me with hate mail and earn me a visit to the principal's office. Emboldened by terminal jet lag, I gave a politically correct response (i.e. one void of any identifiable useful information). In retrospect, however, I can make the following observations (which, I must add, are my own and not representative of any other person, living or dead, or of any large multinational corporation whose initials are each one off from those of HAL): moving from east to west, I find - very much on average - European developers to be more formal, US East coast developers to be more conservative, US West coast developers to be greater risk-takers, and Asian developers to be more methodical.
Please please PLEASE don't read too much into these broad generalizations, for the microclimate of each individual team is unique, and I mean nothing pejorative by any of these terms. In the end, software development has been, is, and will remain fundamentally hard, and each team has to face its own demons, wrestling them to the ground with all the best moves and practices and tools that they have at their disposal.
I just returned from Silicon Valley, having met with a couple of local companies regarding their architectural practices. Two immediate observations from those visits: first, the essential architecture of most interesting systems really is locked in the tribal memory of a few individuals; and second, as such businesses mature, a focus on architecture becomes increasingly important as a means of driving economies of scale (by harvesting and then exploiting common architectural elements) and innovation (by creating the opportunity for entirely new businesses by opening up their architectures for integration with other systems of systems).
I spent some time with the folks at Adobe, who were very gracious hosts. Part of my time was spent with the architect of Photoshop, digging through its architecture. Photoshop is the "professional standard in desktop digital imaging" according to their literature. Opening up the hood, there's a really beautiful and elegant architecture that lies within. The current release of Photoshop consists of about one million SLOC in C++, with several million SLOC of basic libraries written in a variety of languages. The 80/20 rule seems to apply here: the essence of the Photoshop architecture swirls around abstractions and mechanisms for documents, layers, channels, and tiles (some 20% of the total code), while the rest of the code is there for all the surrounding cruft (licensing, plugins, file manipulation, event handling, the user experience).
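To give a flavor of those abstractions, here is a toy sketch of a document/layer/tile model. It reflects only the public description above - the real implementation is in C++ and of course far richer - and the class shapes and tile size are illustrative assumptions:

```python
# Toy sketch of the document / layer / tile abstractions described above.
# Purely illustrative; not Photoshop's actual design.

class Tile:
    """A fixed-size block of pixels; layers are tiled so that only the
    tiles an edit touches need to be recomputed or paged in."""
    def __init__(self, x: int, y: int, size: int = 256):
        self.x, self.y, self.size = x, y, size

class Layer:
    """One compositing layer, stored as a grid of tiles."""
    def __init__(self, width: int, height: int, tile_size: int = 256):
        self.tiles = [Tile(x, y, tile_size)
                      for y in range(0, height, tile_size)
                      for x in range(0, width, tile_size)]

class Document:
    """The root abstraction: an ordered stack of layers."""
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.layers = []

    def add_layer(self) -> Layer:
        layer = Layer(self.width, self.height)
        self.layers.append(layer)
        return layer

doc = Document(1024, 768)
layer = doc.add_layer()
print(len(layer.tiles))  # 4 x 3 = 12 tiles for a 1024x768 layer at 256px
```

The payoff of this shape is locality: an edit dirties a handful of tiles rather than a whole layer, which is what makes interactive editing of enormous images tractable.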
I also spent some time with a company who for the moment shall remain nameless, except for me to say that these guys are one of the dot-com era's big success stories. They are running with a code base of about five million SLOC in C++ and Java, with a regular rhythm of releases that pushes out a new system every two weeks. It's really a joy to step inside a company where things seem to be clicking: they've got a good business model, they are growing and even hiring, and they are serious about improving their software development practices (which already are far better than the average I see in the industry). Above all, in both this company and Adobe, while there are always issues and risks to be addressed, their development teams seem to be having a good time, and that's always a sign of a healthy organization.