Jazz platform development
Like many "knowledge workers" these days, I primarily work from home, dialing into conference calls and connecting to IBM's intranet via a virtual private network (VPN) client.
This has some advantages, e.g. you save time driving to and from work and you save money on lunches and coffee.
But sometimes it's challenging.
The other day I got a box in the mail containing the Xbox video game "Star Wars: Knights of the Old Republic II: The Sith Lords" (which I naturally bought on the web in used condition for 35% off retail).
One of the advantages of working on site is that you don't have to walk by your HD TV and your Xbox and see the case of the new video game you just bought staring at you saying, "Play me Bill. Just for a few minutes. No one will notice. Just leave a message on Sametime that you're on a call".
But this path leads to the dark side. For even though I'd only intend to play for 20 minutes, soon it would be an hour, then two, and pretty soon I'd be writing my farewell blog entry.
bhiggins at us.ibm.com
One week until I leave China and head back to the US. Today I got sort of stranded at my brother-in-law's house. It's too cold to go outside and I somehow forgot all of my books and my journal at my parents-in-law's house. No one else is around, so I find myself with the following options:
Well, if you're reading this you've figured out which of the above options I chose. Because reading news articles and technical articles gets tiring after a few hours, I decided to look for something a little more meaty. I found it in The Art of Unix Programming by Eric Raymond, which is available to read online.
The book is really interesting and has many insights that I wasn't familiar with, which is unusual, since most books on computers tend to hit 80% of the same well-trodden topics and stories.
The book is pretty hostile to Microsoft (in a very one-sided manner) which may be a turn-off to some readers, but I guess I just look at it as the artistic license of a true believer.
One part that I've found particularly interesting is the write-up on the Unix maxim that "silence is golden"; i.e. if your program doesn't have anything interesting to say, then don't say anything. This was one of the "features" of Unix that really caused me problems when I first started programming at Penn State University. I never knew what the hell was going on because the command prompt would say nothing, whether I simply moved a file or accidentally overwrote a programming assignment due the next day (which is funny ... in hindsight). Reading Raymond's arguments, I think that the "silence is golden" rule holds up somewhat better in a command-line world than in a GUI world. In a GUI world there are many subtle mechanisms to provide feedback without being obtrusive about it.
On a related personal note, my technically-proficient wife sometimes calls me "Mr. Unix" because I occasionally forget to provide re-assuring "uh huh"s to statements she makes that I don't disagree with.
PS - I would personally pay $100 to watch Raymond and Donald Norman debate the merits of software providing constant feedback to the user. But perhaps "celebrity deathmatch" would provide a more appropriate forum.
I said in the last post that I was going to review Bruce Schneier's book Secrets and Lies which is Ted Neward's (and now my) essential primer on digital security.
Schneier introduced me to the term "countermeasure" which is simply some mechanism that either attempts to prevent or effectively respond to a security incident.
I had to think of this while watching Weird Science (the movie) on cable this weekend. For those of you who weren't a young boy in the 1980s, Weird Science is about a couple of high-school nerds, Gary and Wyatt, who use their computer skills to create a woman they name Lisa who has supermodel looks (played by Kelly LeBrock), magic powers, and who will do whatever Gary and Wyatt want her to do (yes, this was a movie squarely targeted at adolescent males).
They create her through a computer program that simulates the creation of a woman, both her physical and mental characteristics. They hack into a government facility to get more computer power, wire a Barbie doll up to their computer and voilà, there she is.
Anyhow, it's a movie worth seeing, if only for the performance of Bill Paxton as Wyatt's incredibly obnoxious brother Chet, but the reason I mention it here is because of something to do with computer security.
As mentioned before, through a circa 1985 personal computer, Wyatt and Gary hack into a government facility to "steal more computer power". Ok, fair enough. But what was really cool to me then and hilarious to me now was the government system's response to being hacked. I have never attempted to hack a system but I imagine that if you got user access to a computer you hacked, you would either see a command prompt or a typical Windows / Linux / whatever GUI. But not the government computer that Wyatt hacks. When Wyatt bypasses the security program he is treated to a vivid artsy display of 3-dimensional graphics including freaky faces and whirling clocks - sort of suggesting that they've entered a secret wonderful computer world that they didn't know existed.
I realize that this is a movie so I'm not criticizing it for not being realistic. It's just that after working as a programmer it's funny to imagine a scenario that would lead to the existence of such a "feature". Say you're a system designer for the National Security Agency (NSA) and security is of utmost importance. You're in a meeting discussing what should be the response to a system security breach.
I wonder if the NSA sub-contracted to a graphics programming shop to improve the quality of the break-in graphics? And what was the budget to design and implement said graphics?
Ah, movies that involve computer programming are funny. But I guess in a movie that's based on the premise that using a 286 PC, a modem and a Barbie doll, you can generate a living, breathing woman resembling Kelly LeBrock who can perform magic ... then in comparison displaying fancy graphics in response to a security breach is pretty believable!
Then again, Microsoft Excel 1997 included a hidden flight-simulation video game, so perhaps it's not so far-fetched to have such a feature!
Warning: for the experienced software engineer, the following may be a long-winded explanation of the blindingly obvious; it is more intended for people newer to software engineering.
In software engineering you often find yourself talking to another designer or programmer to learn more about some system you either have to work on or use. This is often difficult because the designer / programmer has spent so much time with the system that he no longer thinks in terms of "why" the system does things or "what" it does, but rather only in terms of "how" it works.
This is usually fine within a tight development team because they've been working together for a long time and have a shared implicit context of the "why's" and can therefore talk in terms of "how's" without getting lost in the forest.
A trivial but useful example of this is making coffee. If my use case was "make a pot of coffee" here are three different levels:
why: "I'm a little tired and need to get more work done".
what: "Make a pot of coffee".
how: "Grind coffee beans, put coffee beans in filter, put water in tank, hit 'On' button".
Say I started grinding beans and for some reason my wife in the other room didn't recognize the noise - in reality she would because she's heard it enough. But for the sake of this blog she says "what is that noise?" and I reply "I'm grinding coffee beans!". She implicitly knows the "what" (that I'm making coffee) and probably doesn't think about the "why" (that I'm tired, because I drink coffee even when I'm not tired). This is because she has a shared context with me. If she was one of the five English-speaking humans on Earth who had never heard of coffee, just telling her that I was grinding coffee beans would leave her equally clueless.
Where is this going you may be wondering?
Well this is something I struggled with mightily. Reading all the books on software design a common theme kept popping up in a couple of different guises:
In other words people would always make the point that if you're either writing a specification or programming against some service or library, think about the "what" and try to ignore the "how".
But this always seemed arbitrary to me because any process or activity can be decomposed almost ad infinitum. For instance, if I decide to make a pot of coffee that answers the question "what will you drink?" but does not answer how I will get it. I decide to make myself coffee although I could also go buy a cup of coffee from Barnes and Noble (more on that later). But now that I've committed to this "how" I need to come up with a couple of concrete "what's" to end up with a pot of coffee. This is where the decompositional aspect comes in. The first what is "grind coffee beans". This is now a "what". But how are the coffee beans ground? I really don't know. I put them in this device, push a button and magically they become coffee grounds. This is a key insight. I don't care "how" the coffee is ground because I don't have to do it. I have a machine which provides the grinding service for me as a black box (well in reality a white cylinder).
So the truth is that any process (in the business sense) or procedure (in the computing sense) can be rolled up or drilled down as much or as little as you like. The hard part, if you're a software designer, is "What level do I expose to other programmers who will use my service or API?" You could, but shouldn't, tell them not only what your service does but how it does it. The negative consequence of this is that the "how" is typically arbitrary from a functional perspective, and at some point you may wish to change how you do it for non-functional reasons (e.g. better performance, more robust security, etc.). But if you've published your implementation to the world, people may have coded to your service with assumptions about that implementation in mind, and therefore changing it might break them.
So now you can happily agree that you should not let implementation details bleed through either your API specification or use case specification.
Happy? No. Because how the heck do you determine what level of decomposition your specification should be at?
I didn't know the answer to this for a while but it turns out it is pretty simple. Simon Johnston explained it to me during a mentoring session on business process modeling and its relationship to use cases. He was drawing a simple use case diagram on the board and making a point about only stating "what the system does not how it does it" so I went off on the same spiel above about how a "what" is just a higher-level summation of a bunch of "how's". And then he said, "well, you determine the 'what' by asking 'what does the actor care about?'".
It turns out that this whole thing is subjective and there is never a definitive answer. It all depends on the nature of who is using the service. Back to the coffee example. Say my wife asked me for a cup of coffee. Her desire for drinking coffee is the 'what' and she delegates to me how it is done. My 'what' is now procuring coffee for her. She doesn't care if I make it or drive to Barnes and Noble to buy it. I decide to make it and so now I also have to care about 'how' it is made. My first 'what' in the making process is 'get coffee grounds'. Since I have beans I need to grind them and since I have a coffee grinder I don't have to worry about how this grinding is accomplished.
It's the same in programming or use case writing. Depending on what your goal is, you may have very different levels of specification vs. implementation. Specification is the what, and implementation is the how. If you're responsible for an implementation, you will come up with a design which will leave you with a new set of what's that need to be further decomposed, perhaps by your own code, perhaps by a Java library class.
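The coffee delegation above can be sketched in code. Here's a minimal Java sketch (all the class and method names are my own invention for illustration) where the "what" is an interface and each "how" is a separate implementation that the caller never depends on:

```java
// Hypothetical names, invented for this example.
interface CoffeeProvider {
    // The "what": produce a cup of coffee, somehow.
    String getCoffee();
}

// One "how": brew it at home (grind beans, fill tank, press On --
// all details the caller never sees).
class HomeBrewProvider implements CoffeeProvider {
    public String getCoffee() {
        return "home-brewed coffee";
    }
}

// Another "how": drive to the bookstore and buy it.
class BarnesAndNobleProvider implements CoffeeProvider {
    public String getCoffee() {
        return "store-bought coffee";
    }
}

class Wife {
    // The actor depends only on the interface; she doesn't care
    // which provider fulfills the request, or how.
    String requestCoffee(CoffeeProvider provider) {
        return provider.getCoffee();
    }
}
```

Swapping `HomeBrewProvider` for `BarnesAndNobleProvider` changes nothing from the requester's point of view, which is exactly the point: her "what" stays fixed while my "how" is free to vary.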
You can navigate these levels with the following tools:
The point is that whenever you're either writing code that other people might use (i.e. an API) or if you're drawing a use case diagram to say what a system does, think in terms of what the client / actor / user (whichever term is relevant to you) is trying to do - his goal. That is the magic formula for figuring out what the right level of detail is to create a specification that omits unnecessary implementation details.
This set of principles is what underlies the object-oriented notion of "polymorphism", the big impressive jargon word which basically means that some specification may have different implementations, but you don't care because you're happy with the behavior specified by the more abstract type.
In Java a good example of this is the Collections framework. If you write a method that needs to return a collection of non-duplicate elements but don't really care about anything else, you should return the Set interface. Inside your method you may implement it as a HashSet or TreeSet or whatever suits your needs, but since you haven't shown this implementation to your client, you can change it at a whim.
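For instance, here's a minimal sketch (the class and method names are my own invention) of a method whose contract promises only a Set; whether it's backed by a HashSet or a TreeSet stays hidden from the client:

```java
import java.util.HashSet;
import java.util.Set;

public class TagService {
    // The contract: return the unique, normalized tags -- that's the "what".
    // The HashSet below is the "how", an implementation detail we could
    // swap for a TreeSet (e.g. to get sorted iteration) without breaking
    // any caller, because the declared return type is just Set.
    public static Set<String> uniqueTags(String[] rawTags) {
        Set<String> tags = new HashSet<String>();
        for (String tag : rawTags) {
            tags.add(tag.trim().toLowerCase());
        }
        return tags;
    }
}
```

A caller can iterate the result or test membership, but the moment it tries to assume HashSet-specific behavior, the compiler stops it, which is the specification doing its job.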
There's more to it, but that's more than enough for one blog. For the two or three of you who have survived my rambling this long, I offer to provide you with a hot cup of fresh coffee if we should ever meet in Raleigh-Durham. But I refuse to specify whether I will brew it or buy it for you :-)
PS - Later I found that Alistair Cockburn talks about this very thing in his excellent book Writing Effective Use Cases under the section "Raising and Lowering Goal Levels" on page 69.
PPS - Dave Parnas wrote the seminal paper on this topic way back in 1972. You can read a copy of it here (note that parts of it are now pretty low-level and hard to understand). As Bass and Clements say, "if you think you've thought of something new in software, you should first check Parnas's stuff to make sure he didn't already think of it back in the 1970s".
PPPS - This idea is also found in the Strategy design pattern, which can be found in the ever-popular book by Gamma, Helm, Johnson and Vlissides.
I'm working at home today and had CNN on in the background. At 1:46 PM Eastern US time, CNN announced that CIA Director Porter Goss is resigning. Just for fun, I immediately went to the Wikipedia entry on Porter Goss. It didn't mention his resignation. A couple of minutes later I hit refresh and it was updated to reflect that he was "a former CIA director". I looked at the version diff and it turns out someone updated Wikipedia within 2 minutes of the news breaking.
I know Wikipedia often catches flak from people like Nick Carr who lament spotty content quality, but I am still amazed that there's a valid encyclopedia that can be updated within two minutes of a surprise announcement. When I was growing up in Hershey and Harrisburg, Pennsylvania, for the longest time we had a 10-year-old set of Encyclopedia Britannica, and I remember thinking that was pretty cool!
My kids are definitely going to laugh at me some day... :-)
- Bill
I've written a second article on Ajax, this one titled "Meeting the challenges of Ajax software development". Here's an excerpt:
The newness of the Ajax/REST architectural style presents challenges to organizations that have traditionally used the server-side Web application style. Though Ajax has several compelling architectural advantages over the traditional model, an immediate and total transition to a pure Ajax/REST architecture isn't realistic for all organizations. Those that lack Ajax development skills can begin their Ajax exploration by incrementally adding Ajax functionality to existing server-side Web architectures. As these organizations begin to gain experience with Ajax/REST, they can confidently attempt more interesting and ambitious projects.
As always, if you have any comments or questions on the article above, I'd appreciate it if you would leave a comment on this blog entry. I'm really interested to know whether other folks who have developed Ajax applications agree or disagree with the judgments and conclusions of this article.
PS - If you haven't read the first Ajax/REST article, I provide an excerpt and a link in this older blog entry.
In my last post I went through a long-winded explanation of how to enable and disable capabilities within Rational Application Developer (RAD) and Rational Software Architect (RSA). I notified Emeka Nwafor, product manager for RSA, about the blog and found out from him that there is a much simpler (and ingenious!) way to enable/disable capabilities.
When you launch RAD/RSA (or any of the other products listed below), go to the Welcome screen (Help -> Welcome). In the lower-right corner of the Welcome screen, you'll see an abstract icon of a person, with a number of smaller icons to his left. If you put your mouse over this little man, you'll see the text "Enable Roles". Click the little man icon and you'll see a number of possible roles that you can enable. One of these days we'll be able to post pictures on our dW blogs and I'll be able to show you these things!
A role basically corresponds to a set of capabilities that are required by that role. The roles are very self-descriptive ... e.g. "Requirements Manager", "Modeler", "Java Developer" etc. Often a person plays multiple roles in their work, so simply enable the roles that you play and disable the roles that you don't play. The whole "capabilities" discussion gets abstracted away.
This is a really, really cool feature, which greatly improves "user experience scalability". Its only shortcoming is its somewhat inconspicuous location on the Welcome screen. Emeka's looking into perhaps giving the role-enablement/disablement function a more visible place in the RAD/RSA real estate.
I have to give a compliment to the user-centered design practice within IBM, which has really changed the way we design products, internal systems and customer systems. When I look at RAD/RSA v6 and think about the way I initially struggled with WSAD v4, I am really impressed by the gains we've made in usability - keep it up folks!
This makes me think that I really have to post on user-centered design in general ... an area I've been studying more and more lately and have come to appreciate as much as technical architecture and design.
Update! Here's a screenshot. Hosting courtesy of ImageShack
Role enablement widget in Rational Software Architect welcome screen
I just read and really appreciated Nicholas Carr's blog entry "The amorality of Web 2.0". Like Carr, I get very uncomfortable when I read Tim O'Reilly and others speak of Web 2.0, or just the web for that matter, in quasi-religious language.
Is the web a culturally-transformative phenomenon? Undoubtedly. Should we try to assess this phenomenon objectively? Absolutely. Should we approach the web like religious zealots? Perhaps only after many, many beers.
I must admit, it's nice to read a fellow Web 2.0 skeptic - it seems that many of the bloggers I read are almost obsessed with Web 2.0, and it makes me feel uncomfortable to read it.
But then again, for the last eight years up until May, I was obsessed with the new Star Wars movies, so who am I to judge? :-)
contact me: firstname.lastname@example.org
So having used RSS for a few months, here are the three styles of feed items I've noticed, ordered by how much I like the style, descending:
I understand the business motivation to force readers to click a hyperlink to view a full entry - it's the most straightforward way to measure relative and absolute interest in your entries. But sometimes the "summary" sent via the feed is not so much a summary as a vague excerpt. Maybe it's just me, but frequently reading the short form of a blog entry, there's not enough there to even determine if I should follow the link and read more.
But then there's the Fast Company "quote summary" which is truly in a league of its own. Here are a few recent (real) examples:
The New Lure of Internet Marketing
"What better form of personalization is there than hearing something from a friend?"
-Scott Griffith, CEO, SoftLock.com
The Man From CHAOS
"Americans like reorganization. They don't like technology."
-Richard Morley, Founder, Modicon
Are You Being Coached?
"Figure out what behavior needs to change and how to change it."
-David Thomson, Vice President, Hewlett-Packard
So you can see the pattern emerging:
Uninteresting, sometimes unrelated, title
Underwhelming quote with two instances of bold.
-Name of person I've not heard of, title, company
As I became aware of this pattern in the Fast Company RSS feed, I went through the following phases:
If you want to share in the fun of the occasional Fast Company quote summary RSS item, you can subscribe here.
-Bill Higgins, frustrated blog reader, IBM
A couple of weeks ago I wrote about the Genographic project, which aims to understand the paths the human race took to migrate from our origins in Africa and populate the world.
A few new items:
According to a recent internal report, 6,000 IBMers have participated in the Genographic project so far. Anyone, IBMer or not, can participate, so if you'd like to, go here.