My family and I are leaving for China tomorrow for a three and a half week vacation. My wife is originally from Jinan, China and her family is still there. We'll also be visiting Beijing and Shanghai.
There are three things I'm really looking forward to on this trip: meeting my in-laws, seeing historic Chinese locations, and eating lots of authentic Chinese food.
For the last item on that list, I'm most excited to go to Shanghai, home of the world-famous "pork steamed soup buns", or xiao long bao in Mandarin. These things are perhaps the yummiest food I've ever had. If you're not likely to go to Shanghai in your lifetime, you can also get them in New York City's Chinatown. Go to Joe's Shanghai and order "pork steamed soup buns". In fact, order several plates of them. Actually, just don't eat anything else the rest of the day and concentrate on the soup buns. They are amazing.
So this will likely be my last blog entry until January 2005, although I might try to post from China on December 23rd to wish my friends and family in the U.S. a happy Festivus. Hopefully I won't have any problems getting my Festivus pole through customs.
Happy holidays!
Jazz platform development
The New York Times has an interesting article (registration required) that discusses the behind-the-scenes courtship between IBM and Lenovo over the past several years that ultimately led to IBM's sale of its PC division. It seems that much of the content of the article is based on a recent interview with Sam, so you know that it's based on fact, not fiction.
Here's a summary timeline extracted from the article:
What amazes me is how many people were in the loop on these goings-on for several years, and yet the story wasn't leaked to the press until four days before the sale. Just good old-fashioned discipline by the participants, I suppose. For instance, a friend and colleague of mine at IBM has been working on an Integrated Supply Chain project related to the PC Division sale for the past several months. During this time, he wouldn't tell me the details of his current project, which seemed goofy and frankly annoyed me a little bit (because we're good friends). But now, in hindsight, I understand that his secretiveness was absolutely necessary.
After all, I'm always looking for juicy blog material! :-)
There has been a civil yet passionate intellectual debate going on between IBM Rational and Microsoft's development tools division with regards to modeling languages.
Rational, of course, contributed a great deal of thought-leadership to the definition and standardization of UML and has banked its modeling strategy on the MOF/UML base.
Microsoft, on the other hand, says that UML is severely flawed for a number of important applications. They have instead proposed something called Domain Specific Modeling Languages which they assert are preferable to UML in many regards.
I won't pretend to understand this debate at a deep level, as the people involved in the debate have been thinking deeply about modeling much longer than I have been in the field of software engineering. However, it's fun and a great learning experience to watch this debate unfold between such smart people from the IBM Rational and Microsoft camps. Grady just posted a really long response to a couple of posts by Alan Wills and Steve Cook of Microsoft. It's a good place to start since it gives a good overview of the debate, and links back to a couple of important blogs related to the debate.
So if you have an interest in modeling in general or UML in particular, I encourage you to monitor this debate by following these blogs:
The IBM Rational camp
The Microsoft camp
For some good background reading on the topics under discussion I recommend the following books and web site:
Domain Specific Modeling Languages
Enjoy the debate!
In my last post I went through a long-winded explanation of how to enable and disable capabilities within Rational Application Developer (RAD) and Rational Software Architect (RSA). I notified Emeka Nwafor, product manager for RSA, about the blog and found out from him that there is a much simpler (and ingenious!) way to enable/disable capabilities.
When you launch RAD/RSA (or any of the other products listed below), go to the Welcome screen (Help -> Welcome). In the lower-right corner of the Welcome screen, you'll see an abstract icon of a person, with a number of smaller icons to his left. If you hover your mouse over this little man, you'll see the text "Enable Roles". Click the little man icon and you'll see a number of possible roles that you can enable. One of these days we'll be able to post pictures on our dW blogs and I'll be able to show you these things!
A role basically corresponds to a set of capabilities that are required by that role. The roles are very self-descriptive ... e.g. "Requirements Manager", "Modeler", "Java Developer" etc. Often a person plays multiple roles in their work, so simply enable the roles that you play and disable the roles that you don't play. The whole "capabilities" discussion gets abstracted away.
This is a really, really cool feature, which greatly improves "user experience scalability". Its only shortcoming is its somewhat inconspicuous location on the Welcome screen. Emeka's looking into getting the role-enablement/disablement function a more visible spot in the RAD/RSA real estate.
I have to give a compliment to the user-centered design practice within IBM, which has really changed the way we design products, internal systems, and customer systems. When I look at RAD/RSA v6 and think about the way I initially struggled with WSAD v4, I am really impressed by the gains we've made in usability - keep it up, folks!
This makes me think that I really have to post on user-centered design in general ... an area I've been studying more and more lately and have come to appreciate as much as technical architecture and design.
Update! Here's a screenshot. Hosting courtesy of ImageShack
Role enablement widget in Rational Software Architect welcome screen
Yesterday, I downloaded and have been playing with the new Rational Software Architect integrated development environment for J2EE development and model-driven development that I blogged about a while back. At some point I need to write up a full review to talk about how awesome this new tool is, but right now I'm going to talk about something I just figured out about the tool that may be helpful to others. I posted this as a lesson learned on our internal Rational Knowledge Community (with screenshots!) but of course anyone reading this who doesn't work for IBM can't see it there.
A new feature of Eclipse v3-based products (including Rational Application Developer) is a mechanism to enable and disable tool capabilities in order to show functions that are relevant to your job and hide functions that are irrelevant to your job. This article describes how to use this mechanism inside the new Rational desktop products.
This lesson-learned is applicable to users of the following products:
Eclipse is a framework for client-side software integration. Products based on Eclipse add features and plug-ins to provide new functionality to the user or to augment existing functionality. A downside of this extensibility is that products like WebSphere Studio Application Developer provide literally hundreds of features, but only a small subset of those features might be important to a particular user. Before Eclipse 3, a user would have to deal with toolbars and menus including all of the features of the product.
Eclipse 3 introduced a new feature that allows the user to enable and disable workbench "capabilities". A capability in this context is a coarse-grained grouping of related features. This allows the user to see features that are of interest to him or her and filter those that are not of interest. However, this feature significantly changes the Eclipse user experience and therefore may be non-intuitive to long-time Eclipse users who are used to the previous "see everything" experience.
Rational Application Developer (RAD) v6 and Rational Software Architect (RSA) v6 (and other products) are part of the Atlantic wave of Rational desktop products based on Eclipse 3. Users of these products should be familiar with the mechanisms to enable and disable capabilities so that they can ensure that all necessary capabilities are enabled and all unnecessary capabilities are disabled (hidden). This will result in a more usable experience, as you will only see menu items and toolbar buttons that correspond to capabilities you care about.
RAD and RSA enable and disable certain capabilities by default based on most-likely needed capabilities by the targeted user base. It is likely that a certain capability you desire is disabled when you first install the product. There are three ways to enable it:
The only way to disable capabilities is through the Workbench Capabilities preferences page (as described in #3, above).
January 25, 2005 update: as per IBM Corporate standards, if I link to an audio or video clip, I need to provide a textual description for people with disabilities. I've posted a description of the video at the end of this blog, also in blue.
I got pinged* by Michael O'Connell the day before Thanksgiving pointing me to a web site with an article on corporate blogging at big technical companies like HP, Sun, Microsoft, and yes, IBM.
The article itself was fairly interesting - for instance - I didn't realize that Sun #2 guy Jonathan Schwartz has a blog. Surprisingly, it quoted my recent blog on Steve Ballmer's comments on Microsoft vs. open source security. The quote taken didn't really reflect (IMHO) the "fair and balanced" tone I tried to strike in the blog, but oh well, not that big a deal.
And now for something really funny.
While searching for the full text of Steve Ballmer's quote**, I stumbled across a web page that links to several videos that show Ballmer doing and saying some really funny things. I would try to describe this video, but I don't know if my words could do justice to its funniness. So instead, just right-click and save this link, then enjoy the show. Here's the whole page, which has one other funny video.
Though I disagree with Ballmer's arguments against open source, I really appreciate his passion for Microsoft and his willingness to pump up the Microsofties (not being sarcastic).
Should Sam Palmisano (IBM's CEO) ever do a dance like that on camera, I will definitely link to it ... but I'm not holding my breath :-)
* "pinged" simply means "sent an instant message to someone" in IBM lingo. Of course the term comes from the eternally useful ping program that you can use from Windows or *nix to determine whether a certain computer is alive or not. I'm not sure if this term is used in this context at other companies / institutions or not.
** Thanks to Michael O'Connell and my nephew Jay Solano for linking to the full-text in the comments section of the blog. Hi Jay!
Video description: Steve Ballmer, Microsoft CEO, runs on stage at a Microsoft developers conference and runs around, screaming and doing a funny "dance". Finally, a sweaty and exhausted Ballmer steps to the microphone and shouts "I ... LOVE ... THIS ... COMPANY!!! WOOOOOO!!!!".
I read a quote from Microsoft CEO Steve Ballmer today that I think merits some discussion.
We think our software is far more secure than open-source software. It is more secure because we stand behind it, we fixed it, because we built it. Nobody ever knows who built open-source software. (source)

Let's analyze this quote for a moment:
We think our software is far more secure than open-source software.

Fair enough. Everyone's welcome to their opinion, and there's nothing wrong with standing up for your products.
It is more secure because we stand behind it, we fixed it, because we built it.

I wouldn't judge a system's security based on who made it. Rather, I would judge the quality of a system's security based on what independent security experts have said about it and the security principles applied to the system's architecture (e.g. using battle-tested encryption protocols vs. proprietary "security by obscurity" encryption protocols). Ballmer here asserts that Windows security is implied by the fact that Microsoft created it and has improved it over time. To give Microsoft credit, they have made great strides in their products' security as part of the "Trustworthy Computing" initiative that they launched a couple of years ago.
However, I think many, if not most people (outside of Microsoft sales and marketing) tend to associate the Microsoft software brand with subpar security. This is unfortunate, because there are some good things to be said about the security in systems such as Windows, and Windows and Office get a disproportionate number of attacks because of their dominant market share on the desktop. Still, when it comes to branding, perception is reality, and the weekly announcements of new major vulnerabilities and associated patches to Microsoft products (especially Windows and the bundled Internet Explorer web browser) have taken a heavy toll on the industry's perception of security in products coming out of Microsoft.
Ballmer could make a much more compelling argument if he focused on objective security measures and analyses rather than simply saying "trust us".
Finally, he says:
Nobody ever knows who built open-source software.

This statement could be kindly called "an extreme exaggeration" but in reality is simply untrue. Although it may not be possible to trace every line of open-source code back to the organization or developer who wrote it, it's quite common that the individual or organization behind some open-source component is well known. For instance, IBM's OTI subsidiary wrote the majority of the code in Eclipse and reviewed the many valuable contributions submitted by other organizations and individuals. And in the case of the Linux kernel, there is a well-known group of "committers" who create much of the code and review that which they do not create.
Once again, I think that Ballmer would do his company better service by speaking about more objective comparisons and analyses of security rather than comparing Microsoft Windows' not-so-pristine security reputation (again, somewhat unfounded) with a specious argument about not knowing the identities of creators of open-source software.
Steve Ballmer is a very smart man and has made a lot of money for Microsoft (and himself) with his sales and marketing abilities (at heart he's a sales guy, not a hard-core geek). As the saying goes, "the ultimate measure of success is success", but it's still unfortunate that he uses a specious argument on such an important topic as security to bolster Microsoft and spread FUD about open source. Alas, this isn't the first use of this technique in the software industry, and Microsoft isn't the only guilty party. Hopefully the folks who listened to his speech will compare the security of Microsoft products vs. Linux products using more objective criteria than Ballmer used in this quote.
PS - I couldn't find a transcript of the full speech and it would be interesting to see if he elaborated his argument beyond the soundbite listed above. If anyone finds a transcript, please link to it in the comment section below.
IBM Fellow and self-ascribed "alpha geek" Grady Booch speaks to the benefits and the danger of over-selling service-oriented architecture.
Service-oriented architecture is one of those IT topics that drive me crazy because:
Grady is no luddite but he's more interested in creating good software than in creating hype around a hot methodology.
Check it out.
PS - If you'd like to learn more about SOA or think that SOA is nothing but hype, check out this article which lays out the practical benefits of SOA and puts them into a historical context.
After a two month hiatus, Alan Brown is blogging again.
Alan's in charge of model-driven development strategy for Rational and used to work for the Software Engineering Institute (SEI) at Carnegie Mellon.
He wrote a good book called Large-Scale, Component-Based Development that is the first source I can find that mentions service-orientation, though there is probably something before that.
Also, he and Grady Booch co-authored a really excellent paper on Collaborative Development Environments that is driving a lot of my work these days.
Anyhow, check it out - he's got some really interesting things to say.
This isn't related to work at IBM, but there may be a few fellow Star Wars geeks out there who are interested.
Starwars.com has posted the teaser trailer for Star Wars Episode III, "Revenge of the Sith".
The term "teaser trailer" is movie lingo for a short preview which doesn't really tell you much about the movie but shows some cool images and gives you the general theme. Teaser trailers usually come out about a half year before a big movie comes out to make the public aware that it is on the horizon. Then about two months before the movie the actual full-blown trailer comes out that reveals part of the plot. Then a couple of weeks before the movie comes out you start seeing short TV advertisements.
Why are they called "trailers"? A long time ago previews for new movies were shown *after* the feature presentation completed, rather than before. In other words, they trailed the main feature. The term has stuck even though the original meaning is now inaccurate.
Sort of like SOAP. Originally it stood for "Simple Object Access Protocol". Now according to the SOAP 1.2 specification:
In previous versions of this specification the SOAP name was an acronym. This is no longer the case.
I said in the last post that I was going to review Bruce Schneier's book Secrets and Lies which is Ted Neward's (and now my) essential primer on digital security.
Schneier introduced me to the term "countermeasure" which is simply some mechanism that either attempts to prevent or effectively respond to a security incident.
I had to think of this while watching Weird Science (the movie) on cable this weekend. For those of you who weren't a young boy in the 1980s, Weird Science is about a couple of high-school nerds, Gary and Wyatt, who use their computer skills to create a woman they name Lisa who has supermodel looks (played by Kelly LeBrock), magic powers, and who will do whatever Gary and Wyatt want her to do (yes, this was a movie squarely targeted at adolescent males).
They create her through a computer program that simulates the creation of a woman, both physical and mental characteristics. They hack into a government facility to get more computer power, wire a Barbie doll up to their computer and voilà, there she is.
Anyhow, it's a movie worth seeing, if only for the performance of Bill Paxton as Wyatt's incredibly obnoxious brother Chet, but the reason I mention it here is because of something to do with computer security.
As mentioned before, through a circa 1985 personal computer, Wyatt and Gary hack into a government facility to "steal more computer power". Ok, fair enough. But what was really cool to me then and hilarious to me now was the government system's response to being hacked. I have never attempted to hack a system but I imagine that if you got user access to a computer you hacked, you would either see a command prompt or a typical Windows / Linux / whatever GUI. But not the government computer that Wyatt hacks. When Wyatt bypasses the security program he is treated to a vivid artsy display of 3-dimensional graphics including freaky faces and whirling clocks - sort of suggesting that they've entered a secret wonderful computer world that they didn't know existed.
I realize that this is a movie so I'm not criticizing it for not being realistic. It's just that after working as a programmer it's funny to imagine a scenario that would lead to the existence of such a "feature". Say you're a system designer for the National Security Agency (NSA) and security is of utmost importance. You're in a meeting discussing what should be the response to a system security breach.
I wonder if the NSA sub-contracted to a graphics programming shop to improve the quality of the break-in graphics? And what was the budget to design and implement said graphics?
Ah, movies that involve computer programming are funny. But I guess in a movie that's based on the premise that, using a 286 PC, a modem and a Barbie doll, you can generate a living, breathing woman resembling Kelly LeBrock who can perform magic ... then in comparison, displaying fancy graphics in response to a security breach is pretty believable!
Then again, Microsoft Excel 97 included a hidden flight-simulator video game, so perhaps it's not so far-fetched to have such a feature!
After my initial burst of posting I've had a slow couple of weeks. This is because I've been in a heavy input mode. I tend to go through phases of heavy input where I'm reading a lot of books and articles and generally studying, then going into a heavy output mode where I do a lot of work and post to this blog.
So I'll say a few words about what I've been reading. As I mentioned in an earlier post, Ted Neward's book, Effective Enterprise Java is a must have for all enterprise Java programmers. However, it is by design a breadth book, not a depth book. It touches on essential practices related to such huge topics as architecture, inter-process communication, security, state management, and others. But Ted recognized this and supplements each section with references to books that he considers to be the essential primers and/or references on each topic.
These last couple of weeks I read Ted's primers on transaction processing and security. So here is a review of the transaction processing book (I'll do one on the security book later).
Principles of Transaction Processing by Bernstein and Newcomer
This book was written in 1997 which is often considered ancient in "Internet-years" but it is still very relevant because it focuses on fundamental principles of transaction processing (TP) rather than the latest whiz-bang technologies that optimize TP.
For those of you who aren't TP experts, a transaction is a computer operation that meets the ACID test. ACID here stands for Atomicity, Consistency, Isolation, and Durability.
Why does this matter to the system user or stakeholder? The canonical example is that of the ATM machine (or the "handy bank" if you're Australian). When you withdraw money from an ATM, it has to go out and validate that you have enough funds to cover the withdrawal, reserve those funds, and dispense cash - all within the same transaction. If the ATM failed after your bank account had been debited but before you'd gotten your money, you'd be very upset; conversely, if the cash was dispensed but the debit procedure failed, the bank would be very upset. Ted provides a very amusing analogy for this using a wedding ceremony, but you can read that in his book.
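The ATM example above can be sketched in plain Java. This is a hypothetical simulation of my own (the class and method names are made up, a HashMap stands in for the bank's database, and a boolean stands in for the cash dispenser) - the point is just to show atomicity: either both the debit and the dispense happen, or neither does.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an ATM withdrawal treated as an atomic unit of work.
// If any step fails, all prior steps are undone (rolled back).
class AtmSketch {
    private final Map<String, Integer> balances = new HashMap<>();

    AtmSketch() {
        balances.put("alice", 100);   // a sample account
    }

    int balanceOf(String account) {
        return balances.get(account);
    }

    // Returns true if cash was dispensed; on any failure the debit is rolled back.
    boolean withdraw(String account, int amount, boolean dispenserWorks) {
        int before = balances.get(account);      // remember state for rollback
        if (before < amount) {
            return false;                        // insufficient funds: nothing changed
        }
        balances.put(account, before - amount);  // step 1: debit the account
        if (!dispenserWorks) {
            balances.put(account, before);       // step 2 failed: roll back the debit
            return false;
        }
        return true;                             // both steps happened: commit
    }
}
```

A real TP monitor or database does this with logs and locks rather than a saved copy of one value, but the user-visible guarantee is the same: you never end up debited without your cash.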
There's a whole lot more to transaction processing beyond ACID and the ATM example, including two-phase commit (TPC), high-availability, massive concurrency, and crash recovery. To find out about all of these topics, read the book. One thing to remember though is that most application developers will never have to deal with the extremely complex details of providing a working and robust transaction management implementation, but like any technology it's important to understand the technology's fundamental principles and mechanics to effectively use it.
The book itself is extremely dense. The content of the book is "only" 324 pages long but covers a large amount of ground in a good amount of detail. Definitely read it in a quiet place free of interruptions, with a strong cup of coffee.
One shortcoming of the book is that it was written in 1997 so it doesn't cover TP implementations in Java (e.g. JTA, EJBs, etc.) but it was nice to finally find out what the heck IBM's CICS and IMS products are.
Interestingly enough, I have never had to deal with complex transaction processing (i.e. two-phase commit) in my short IBM career. This is probably because I've worked on business-to-consumer (B2C) applications where only one data source is involved rather than a business-to-business system where multiple data sources are involved. I'll have to ask the B2B guys if they get heavy into two-phase commit or if it's not an issue.
The reason I read this book is because I've always been a bit mystified by Enterprise JavaBeans (EJBs). When I joined IBM, I knew the word, but I was not familiar with such topics as object-relational persistence, object remoting, and transaction processing, so to me EJBs were simply things that took four classes/interfaces to do what I could do in one simple POJO. Ted Neward, in a very interesting web interview on TheServerSide.com, mentioned that he used to think EJBs were completely worthless, but during the process of writing Effective Enterprise Java came to realize that they were not worthless but rather over-marketed. He said that they should have been called Transactional JavaBeans rather than Enterprise JavaBeans, because transactions are what EJBs do very well. So, hearing this from Ted, I decided to read a book on the fundamentals of transaction processing so that I could understand EJBs better. Now that I've read all about TP principles, I've picked up Richard Monson-Haefel's book again, and all of a sudden EJBs are starting to make a lot more sense.
Alright, well I've managed to ramble on about the transaction processing book long enough to turn this into yet another lengthy entry. I'll do a write-up on the security book (Bruce Schneier's "Secrets and Lies") another night.
Warning: for the experienced software engineer, the following may be a long-winded explanation of the blindingly obvious; it is more intended for people newer to software engineering.
In software engineering you often find yourself talking to another designer or programmer to learn more about some system you either have to work on or use. This is often difficult because the designer / programmer has been working with the system for so long that he no longer thinks in terms of "why" the system does things or "what" it does, but rather only in terms of "how" it works.
This is usually fine within a tight development team because they've been working together for a long time and have a shared implicit context of the "why's" and can therefore talk in terms of "how's" without getting lost in the forest.
A trivial but useful example of this is making coffee. If my use case was "make a pot of coffee" here are three different levels:
why: "I'm a little tired and need to get more work done".
what: "Make a pot of coffee".
how: "Grind coffee beans, put coffee beans in filter, put water in tank, hit 'On' button".
Say I started grinding beans and for some reason my wife in the other room didn't recognize the noise - in reality she would, because she's heard it enough. But for the sake of this blog she says "what is that noise?" and I reply "I'm grinding coffee beans!". She implicitly knows the "what" (that I'm making coffee) and probably doesn't think about the "why" (that I'm tired - because I drink coffee even when I'm not tired). This is because she has a shared context with me. If she were one of the five English-speaking humans on Earth who had never heard of coffee, just telling her that I was grinding coffee beans would leave her equally clueless.
Where is this going you may be wondering?
Well, this is something I struggled with mightily. As I read all the books on software design, a common theme kept popping up in a couple of different guises:
In other words people would always make the point that if you're either writing a specification or programming against some service or library, think about the "what" and try to ignore the "how".
But this always seemed arbitrary to me because any process or activity can be decomposed almost ad infinitum. For instance, if I decide to make a pot of coffee that answers the question "what will you drink?" but does not answer how I will get it. I decide to make myself coffee although I could also go buy a cup of coffee from Barnes and Noble (more on that later). But now that I've committed to this "how" I need to come up with a couple of concrete "what's" to end up with a pot of coffee. This is where the decompositional aspect comes in. The first what is "grind coffee beans". This is now a "what". But how are the coffee beans ground? I really don't know. I put them in this device, push a button and magically they become coffee grounds. This is a key insight. I don't care "how" the coffee is ground because I don't have to do it. I have a machine which provides the grinding service for me as a black box (well in reality a white cylinder).
So the truth is that any process (in the business sense) or procedure (in the computing sense) can be rolled up or drilled down as much or as little as you like. The hard part, if you're a software designer, is "What level do I expose to other programmers who will use my service or API?" You could, but shouldn't, tell them not only what your service does but also how it does it. The negative consequence of this is that the implementation is typically arbitrary from a functional perspective, and at some point you may wish to change how you do it for non-functional reasons (e.g. better performance, more robust security, etc.). But if you've published your implementation to the world, people may have coded to your service with assumptions about that implementation in mind, and therefore changing it might break them.
So now you can happily agree that you should not let implementation details bleed through either your API specification or use case specification.
Happy? No. Because how the heck do you determine what level of decomposition your specification should be at?
I didn't know the answer to this for a while but it turns out it is pretty simple. Simon Johnston explained it to me during a mentoring session on business process modeling and its relationship to use cases. He was drawing a simple use case diagram on the board and making a point about only stating "what the system does not how it does it" so I went off on the same spiel above about how a "what" is just a higher-level summation of a bunch of "how's". And then he said, "well, you determine the 'what' by asking 'what does the actor care about?'".
It turns out that this whole thing is subjective and there is never a definitive answer. It all depends on the nature of who is using the service. Back to the coffee example. Say my wife asked me for a cup of coffee. Her desire for drinking coffee is the 'what' and she delegates to me how it is done. My 'what' is now procuring coffee for her. She doesn't care if I make it or drive to Barnes and Noble to buy it. I decide to make it and so now I also have to care about 'how' it is made. My first 'what' in the making process is 'get coffee grounds'. Since I have beans I need to grind them and since I have a coffee grinder I don't have to worry about how this grinding is accomplished.
It's the same in programming or use case writing. Depending on what your goal is, you may have very different levels of specification vs. implementation. Specification is the what, and implementation is the how. If you're responsible for an implementation, you will come up with a design which will leave you with a new set of what's that need to be further decomposed, perhaps by your own code, perhaps by a Java library class.
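The coffee analogy can be put directly into code. Here's a minimal Java sketch - the names (CoffeeProvider, HomeBrewer, StoreBuyer) are hypothetical, invented for illustration - where the interface is the "what" my wife cares about, and each implementation is a "how" she never needs to see:

```java
// The "what": the specification my wife (the client) depends on.
interface CoffeeProvider {
    String provideCoffee();   // a cup of coffee appears; she doesn't care how
}

// One "how": grind beans, fill the tank, press the button.
class HomeBrewer implements CoffeeProvider {
    public String provideCoffee() {
        return "home-brewed coffee";
    }
}

// A different "how": drive to the store and buy a cup.
class StoreBuyer implements CoffeeProvider {
    public String provideCoffee() {
        return "store-bought coffee";
    }
}

class CoffeeDemo {
    // The client codes only to the specification, so either
    // implementation can be swapped in without the client changing.
    static String serve(CoffeeProvider provider) {
        return provider.provideCoffee();
    }
}
```

Note that `serve` compiles against CoffeeProvider alone; I can switch from brewing to buying (a non-functional decision, maybe I'm out of beans) without touching any code that calls it.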
You can navigate these levels with the following tools:
The point is that whenever you're either writing code that other people might use (i.e. an API) or if you're drawing a use case diagram to say what a system does, think in terms of what the client / actor / user (whichever term is relevant to you) is trying to do - his goal. That is the magic formula for figuring out what the right level of detail is to create a specification that omits unnecessary implementation details.
This set of principles underlies the object-oriented notion of "polymorphism", the big impressive jargon word that basically means a specification may have different implementations, but you don't care because you're happy with the behavior specified by the more abstract type.
In Java a good example of this is the Collections framework. If you write a method that needs to return a collection of non-duplicate elements but don't really care about anything else, you should return the Set interface. Inside your method you may implement it as a HashSet or TreeSet or whatever suits your needs, but since you haven't shown this implementation to your client, you can change it at a whim.
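Here is a small sketch of that Collections example (the method and class names, like TagCollector, are invented for illustration):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TagCollector {
    // The return type, Set, is the specification: a collection of
    // non-duplicate elements, nothing more. HashSet is the hidden
    // implementation detail - we could switch to TreeSet later
    // without breaking a single client.
    public static Set<String> uniqueTags(List<String> rawTags) {
        Set<String> tags = new HashSet<String>();
        for (String tag : rawTags) {
            tags.add(tag.toLowerCase());
        }
        return tags;
    }

    public static void main(String[] args) {
        Set<String> tags = uniqueTags(Arrays.asList("java", "Java", "uml"));
        System.out.println(tags.size()); // prints 2 - duplicates collapsed
    }
}
```

Clients see only the Set contract, so the choice of concrete collection stays your private business.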
There's more to it, but that's more than enough for one blog. For the two or three of you who have survived my rambling this long, I offer to provide you with a hot cup of fresh coffee if we should ever meet in Raleigh-Durham. But I refuse to specify whether I will brew it or buy it for you :-)
PS - Later I found that Alistair Cockburn talks about this very thing in his excellent book Writing Effective Use Cases under the section "Raising and Lowering Goal Levels" on page 69.
PPS - Dave Parnas wrote the seminal paper on this topic way back in 1972. You can read a copy of it here (note parts of it are now pretty low-level and hard to understand). As Bass and Clements say, "if you think you've thought of something new in Software, you should first check Parnas's stuff to make sure he didn't already think of it back in the 1970s".
PPPS - This idea also appears in the Strategy design pattern, described in the ever-popular book by Gamma, Helm, Johnson and Vlissides.
What if Superman thought and talked like a project manager? You end up with a big beefy guy in spandex pants who talks in cliches and who won't act impulsively but rather has a well-defined "go-forward plan"!
His name is ... ACTION ITEM! *
* For optimal comic strip-viewing experience, play John Williams's rousing "Superman's main theme" on a THX-certified sound system.
As I mentioned last night Rational just announced its "Atlantic" wave of products. I didn't mention any specific features because I didn't think Rational had gotten that far along. Well it turns out I was wrong.
If you go to this page it lists all of the Atlantic products including the ones I'm looking forward to, Rational Application Developer (RAD, successor to WSAD) and Rational Software Architect (RSA).
So anyhow, there is a datasheet for each product that enumerates features and even has screen shots of the much-improved UML graphics engine and user interface I mentioned yesterday. The links are below. You'll need Adobe Acrobat Reader for each.
I recently did a peer review of the 7th edition of Core Java for Cay Horstmann and Prentice Hall. When I got my complimentary reviewer's copy in the mail last month, I noticed on the cover that it said "J2SE 5.0". Now of course it should have said "J2SE 1.5" (a.k.a. Tiger) so I thought "oh my gosh! I can't believe they misprinted the version on the cover of the book! how dumb!" But as I started looking through the book and continued to see references to J2SE 5.0, I realized that the joke was probably on me.
So I quickly flexed my Google muscles and searched on "J2SE 5.0 name change" and sure enough, I learned that Sun had changed the name from J2SE 1.5 to J2SE 5.0 "to better reflect the level of maturity, stability, scalability and security built into J2SE" (source).
Yet it's still "Java 2" which "indicates the 2nd generation Java platform, introduced with J2SE 1.2". Does this imply that in the far-flung future we can look forward to "Java 4 version 13"?
Also don't forget that:
Due to significant popularity within the Java developer community, the development kit has reverted back to the name "JDK" from "Java 2 SDK" (or "J2SDK"), and the runtime environment has reverted back to "JRE" from "J2RE". Notice that "JDK" stands for "J2SE Development Kit". The name "Java Development Kit" has not been used since 1.1, prior to the advent of J2EE and J2ME.

Got all that?
Bobby Woolf mentioned in a blog today that he wrote an article for the WebSphere Technical Journal about ... oh hell, I can't explain it briefly - just check out the article.
Seriously though, if you'd like to gain a better understanding about what JNDI is for and the service locator pattern (and a nasty potential problem with it in J2EE 1.3), check out the article.
PS - on a related side note, Martin Fowler has a typically insightful article talking about the Dependency Injection (a.k.a. Inversion of Control, or IoC) pattern and how it compares to the Service Locator pattern that Bobby talks about. Thanks to Ted Neward for telling me what IoC stood for since Google couldn't.
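For a rough feel of the difference Fowler describes, here is a minimal Java sketch (every class name below is hypothetical, made up for illustration; it is not code from either article). The locator-style client actively looks up its dependency from a central registry, the role JNDI plays in J2EE; the injection-style client passively receives it from whoever assembles the application:

```java
import java.util.HashMap;
import java.util.Map;

interface MailService {
    void send(String to, String body);
}

// A trivial implementation that just records the last recipient.
class InMemoryMailService implements MailService {
    String lastRecipient;
    public void send(String to, String body) {
        lastRecipient = to;
    }
}

// Service Locator style: a central registry that clients query by name.
class ServiceLocator {
    private static final Map<String, Object> registry = new HashMap<String, Object>();
    static void register(String name, Object service) { registry.put(name, service); }
    static Object lookup(String name) { return registry.get(name); }
}

class LocatorClient {
    void notifyUser() {
        // The client reaches out and pulls its dependency in.
        MailService mail = (MailService) ServiceLocator.lookup("mail");
        mail.send("user@example.com", "welcome!");
    }
}

// Dependency Injection style: the dependency is pushed in from outside
// (here via the constructor), typically by an IoC container.
class InjectedClient {
    private final MailService mail;
    InjectedClient(MailService mail) { this.mail = mail; }
    void notifyUser() {
        mail.send("user@example.com", "welcome!");
    }
}
```

Both clients end up with the same MailService; the difference is purely in who is responsible for finding it, which is exactly the trade-off Fowler's article weighs.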
As some of you might have seen, Bob Sutor had quite a listing of news articles related to the announcement of WebSphere Application Server (WAS) version 6.
But amid all of the hubbub around WAS 6, it seems that the announcement of the next wave of development and architecture tools has gone by quietly. As (I think) anyone who develops for the WebSphere platform knows, WebSphere Studio Application Developer (WSAD) v5.x makes life much simpler (can anyone even imagine hacking a complex deployment descriptor by hand anymore?!)
So since there is a new major version of WAS, you'd expect another major version of WSAD. Well, there is a successor to WSAD, but it's no longer called WSAD. Since WSAD 5 came out, IBM has bought Rational (in Feb. 2003), and Rational now owns all of IBM's development products, so the successor to WSAD 5.x has been announced under a new name: Rational Application Developer (RAD) version 6.
Obviously the runtime environment is more important to businesses, but the development tools are more exciting to geeks (and I do not use that term pejoratively). There will also be a higher-end version called Rational Software Architect (RSA) that will include modeling support for UML 2.0 (note the article's mention that support for Rose and XDE will remain strong).
Both RAD and RSA have been available internally to eager IBMers like yours truly for a couple of months, so I and many others have been playing with the new bits. The press releases have not gotten into specifics, so I won't here, but I will say that the Eclipse 3.0-based user interface is greatly improved and there are a ton of new features that make life easier. In the architect version, I think people are going to be stunned by the improvement in the quality of the modeling graphics and the ease of use of the modeling experience. I've been doing a great deal of UML-based modeling over the past couple of months, and it's been very tempting to take a chance on using the RSA beta to do real modeling work ... but alas, I'll be risk-averse and wait until the product GAs before I really start using it. So hurry up, Emeka and co.!
This makes me think ... it would be really cool if we had a blogger from the Rational development organization ... maybe one of the product managers.
IBM today announced that it will acquire SystemCorp, a software company that makes project and portfolio management software. IBM Global Services is currently deploying a SystemCorp product called PMOffice to support our own enterprise IT projects.
Why is this cool? Having an integrated project and portfolio management suite (and having an organization dedicated to getting good data into it) is fundamental to executing enterprise application integration (EAI) projects. If you cannot track dependencies / issues / risks across different teams, your integration project is doomed to either failure or severe budget / schedule overruns. So now assume that you have this integrated project / portfolio management framework (with good data inside). Now you have a chance at succeeding at EAI (although you have to do the technical stuff well too, but that's another matter). If you can successfully do EAI, you have a chance to successfully do end-to-end business process integration. And end-to-end business process integration is one of the cornerstones of our On Demand initiative.
So as more and more projects within IBM use PMOffice (and, more importantly, use it consistently and correctly), I really think it will help us start to get a view of our different interrelated projects in a structured, measurable, queryable (real word?) manner. I've talked to some projects that are using it today, and the more familiar the users get with it, the more they begin to dig its mojo, though at first people are afraid to go from semi-structured Excel spreadsheets and independent MS Project plans to something that's rigorously structured.
But as I've alluded to before, we must have good data. No matter how sophisticated a tool suite ... garbage in, garbage out. Lack of data will kill you as well. Getting all of the PMs to rigorously use the tool correctly is more of a cultural matter, and as you all know, cultural matters are at least as important, if not more important, than technical matters in the field of large-scale systems engineering.
It would really be interesting to hear Simon's or Grady's opinion on this, but they may not be able to say too much because Steve Mills and Mike Devlin wouldn't like it very much if they shared long-term strategic goals with competitors.
I wanted to put a quick post out today with a book recommendation for October. The book is Effective Enterprise Java by Ted Neward. I've also linked to this on the right-nav of this page under My Monthly Book Recommendation.
I read quite a few software-related books every year and this one, for me, is definitely the best one for 2004. Now if you don't do anything related to Java, it's probably not for you, but if you are a Java developer, especially with enterprise Java systems, you must read this book.
PS - You can read a preview chapter on state management over at theserverside.com. You'll need a theserverside.com id.