The latest version of WebSphere Application Server, WAS 6, has a new feature called "Service Integration Bus" (WAS benefits talks about "a new pure-Java JMS engine"). The SIB is implemented as a group of messaging engines running in application servers (usually one-to-one engine-to-server) in a cell. As a service in WAS 6, SIB is a complete JMS v1.1 provider implementation. (Not just the API; a working messaging system.) The JMS provider is a pure Java implementation that runs completely within the application server's JVM process. (For persistent messaging, WAS also requires a JDBC database such as DB2.) Thus JMS messaging is built into WAS and easily available to any J2EE application deployed in WAS.
Why Service Integration Bus? IBM's software customers over the past few years have divided into two overlapping but still distinct markets with different needs:
- Connect any kind of app to any other kind of app. This is the traditional WebSphere MQ market, where you've got different apps written in different languages running on different operating systems and you want them all to talk to each other. This market hasn't changed nor has IBM's commitment to supporting this market.
- Connect J2EE apps running in WAS servers. What's changed in the last few years is that many of our customers are converting everything to J2EE apps deployed in WAS and so they don't need to be able to support every platform imaginable, just WAS. WAS 5 addressed this market with its Embedded Messaging feature (see below). This market is now better addressed with Service Integration Bus in WAS 6.
For a customer that finds itself in both groups--lots of WAS apps communicating, but also a need to communicate with other non-WAS apps--you will still need full WebSphere MQ. Embedded Messaging and Service Integration Bus only support WAS apps, so if any of the apps are not WAS apps, you need full WMQ. WAS 6 has a feature called MQ Link for connecting SIB and WMQ.
So here's the basic breakdown of WebSphere JMS options:
- MQ Simulator -- A feature of the test server (aka, the single user WAS server) in WebSphere Studio
and Rational Application Developer. Not a real messaging provider (did the term "simulator" tip you off?), it doesn't provide interprocess communication (pretty much a must-have for messaging) or persistence. What it is very useful for, and why it's in the test server, is testing and demoing your WAS apps that use JMS, without needing a separate JMS provider. When you're developing J2EE apps that use JMS, use this simulator.
- Embedded Messaging -- A feature of WAS 5 for messaging just between WAS applications. It is a simplified version of the WMQ code base and is a full JMS implementation, but does not provide all of the quality of service advantages of full WMQ. It runs as several processes (written in C) outside of the WAS JVMs, so it involves more moving parts that consume more resources and need to be managed.
- Service Integration Bus -- The replacement for Embedded Messaging in WAS 6. Implements the JMS spec; implemented in Java, runs in the app server JVM. (Think of it as "Really Embedded Messaging"!) Provides most (all?) of the same quality of service of full WMQ (such as clustering, which works as part of the WAS ND clustering model), but only supports WAS apps.
- WebSphere MQ -- Messaging for just about any computer platform used in business, including WAS and JMS. WMQ is used to connect non-J2EE apps, and to connect a J2EE app to a non-J2EE app. It can also be used to connect J2EE apps, although this is usually because you also have non-J2EE apps as well. Written in C, it runs in its own processes, and does not require WAS or Java in any way (unless your app is a WAS app).
- External JMS Provider -- This is the support WAS provides for using any J2EE-compliant JMS provider, so you can use our app server with someone else's JMS product.
Feb 23, 2005
Building an Enterprise Service Bus with WebSphere Application Server V6 -- Part 1
-- The first in a series of articles on the SIB in WAS 6. Also see my blog posting, IBM Info on ESBs.
Here's an interesting conflict between JMS and J2EE that I just rediscovered:
You can't run an implementor of MessageListener
in a J2EE container. J2EE 1.3 says not to do it in the EJB container. J2EE 1.4 says not to do it in the Web container either. Basically, you can't use it in any container that controls thread creation, which is any container except an application client container.
WAS 5 and 6 don't allow MessageListeners to be used in either container. When you try, you get an error like this:
javax.jms.IllegalStateException: Method setMessageListener not permitted
. . .
So WAS doesn't actually prevent you from deploying a class that implements MessageListener, but when you try to run your code, WAS prevents the MessageConsumer.setMessageListener(MessageListener)
method from running by throwing an IllegalStateException
. For details, see IBM WMQ FAQ answer #92
and IBM Technote #1114239.
So when you get this error, the problem isn't a bug in your code, it's your entire approach. In a nutshell, if you want to run a MessageListener in J2EE, don't implement a MessageListener, implement a (can you guess?) message-driven bean
(the JMS kind, which implements MessageListener). And if you don't like using EJBs
? Get used to it. MDBs work in J2EE; MessageListeners don't.
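To make the contrast concrete, here's a minimal sketch of what the MDB approach looks like. This is an EJB 2.x-style bean; the class name and the queue it would bind to are invented for illustration, and the code only compiles against the J2EE APIs inside a container--which is the point: the container, not your code, creates the threads and calls onMessage(), so you never call setMessageListener() yourself.

```java
// Sketch of an EJB 2.x message-driven bean (the JMS kind). The name
// OrderMessageBean is made up; a deployment descriptor would bind the
// bean to an actual queue or topic.
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderMessageBean implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext context;

    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.context = ctx;
    }

    public void ejbCreate() {}

    public void ejbRemove() {}

    // Called by the container, on a container-managed thread, once per message.
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                System.out.println("Received: " + body);
            }
        } catch (JMSException e) {
            context.setRollbackOnly(); // let the container redeliver the message
        }
    }
}
```

Notice there's no listener-registration code anywhere; the deployment descriptor tells the container which destination to listen on, and the container handles the threading that J2EE forbids your code from doing.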
IBM sees at least three main trends for "The Future of Computing
." (This web page specifically speaks to governmental programs, but the trends apply to any use of computing, including commercial.) The trends are:
- Pervasive Computing (WebSphere) (developerWorks) -- Computing devices everywhere, from PDA's and wrist watches to car dashboards and kitchen appliances. Many of these devices may not have much power or memory, but they will be networked, often wirelessly, to provide access to any information you need. A simple example is a dashboard device giving driving directions that can adjust your route real-time to avoid traffic backups.
- Autonomic Computing (developerWorks) -- Computers and networks that manage themselves, fix or reroute around problems--to quote the "Self-Everything" commercial, a computer that can "practically heal itself." No more admins getting woken in the middle of the night because the system's down. A simple example is a network storage device that notices one of its disks is deteriorating, moves that data to other disks, shuts down the faulty disk before it loses any data or availability, and notifies a network administrator to replace the disk.
- Grid Computing (developerWorks) -- Distributed computing where tasks dynamically load balance across resources to optimize resource utilization and maximize task performance. Resources become "virtualized" so that the grid looks like a single gigantic computer whose different parts can be harnessed as needed to perform the tasks at hand. A simple example is a program that performs a large task as many small tasks, dispersing the tasks on various computers that have idle time available, then merging the results.
So, if you're looking to learn more about where computing is going, these topics are a good place to start.
Something about SOA that no one seems to be talking about is that when you go to develop an application with a service-oriented architecture (SOA), you're developing the application in two distinct tiers. These tiers (i.e. layers, parts; pick a term) are as distinct as the two parts in a client/server architecture, serve different purposes, and require different skills to develop.
For the sake of discussion, I'll name and define these two tiers this way:
- SOA Service Provider (SOA-SP) -- This is the tier whose code implements the services. The code has a service API which declares the services and provides the means for clients to invoke those services.
- SOA Service Coordinator (SOA-SC) -- This is the tier whose code provides user functionality which is implemented using services in one or more SOA-SPs. It probably has a UI/GUI so the user can interact with the SOA-SC as a traditional application.
If the SOA-SC has a GUI, when the user tells the GUI to perform a task, the SOA-SC runs the task synchronously (while the user waits). It implements the task using one or more services, probably run sequentially and perhaps conditionally, but concurrent services are also possible. When a task needs to run for longer than a user wants to wait synchronously, the GUI can provide the user the means to kick off the task asynchronously. The GUI would implement the asynchronous task using a business process/workflow (such as one implemented in WebSphere Process Choreographer).
An SOA-SC and all of the SOA-SPs it uses could all run in the same process (i.e. Java virtual machine), which would be more efficient and provide better performance, but less clustering and fault-tolerance, and isn't really the idea of the whole SOA thing. Rather, the idea is that the SOA-SC runs in a process and the services it uses run in one or more SOA-SPs, where each SOA-SP runs in its own process. In theory, a single process might contain code for both an SOA-SP providing services and an SOA-SC using services and providing end-user functionality, but in practice I think an SOA-SC and SOA-SP will run in different processes with different QoS requirements. (For example, a service in an SOA-SP needs to always be available and always scale, whereas an SOA-SC that's unavailable may be annoying but permissible.)
While an SOA-SC may have the ability to directly invoke the services in the SOA-SPs it uses, a better plan is for an SOA-SC to connect to its SOA-SP through an Enterprise Service Bus
(ESB). The SOA-SC gains the ESB's advantages of not needing to know what the SOA-SPs are, how to connect to them, etc. (See ESB vs. Message Bus.)
So when you think about an SOA, don't think about it as a single application running in a single process. (This is possible, but not really the whole idea of SOA.) Although it all may be running on the server, its two distinct tiers are probably running in two processes. Is that still one application? Users will think of the SOA-SC as the application, but developers will put the heavy-lifting horsepower mostly in the SOA-SP, so they'll seem like separate applications.
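Here's a toy, single-process sketch just to make the division of labor between the two tiers concrete. All the names (QuoteService, PortfolioCoordinator, and so on) are invented; in a real SOA the coordinator would reach the provider across a process boundary, ideally via an ESB, rather than through a direct method call.

```java
import java.util.Arrays;
import java.util.List;

// The "SOA-SP" tier: a provider exposing a service API.
interface QuoteService {               // the service API the SP declares
    double quote(String symbol);
}

class SimpleQuoteProvider implements QuoteService {  // SOA-SP implementation
    public double quote(String symbol) {
        return symbol.length() * 10.0; // stand-in for real pricing logic
    }
}

// The "SOA-SC" tier: coordinates one or more services to perform a user
// task, here running them sequentially while the (imagined) user waits.
class PortfolioCoordinator {
    private final QuoteService quotes;

    PortfolioCoordinator(QuoteService quotes) { this.quotes = quotes; }

    double portfolioValue(List<String> symbols) {
        double total = 0.0;
        for (String s : symbols) {
            total += quotes.quote(s); // each call could cross processes via an ESB
        }
        return total;
    }
}

public class SoaSketch {
    public static void main(String[] args) {
        PortfolioCoordinator sc = new PortfolioCoordinator(new SimpleQuoteProvider());
        System.out.println(sc.portfolioValue(Arrays.asList("IBM", "HPQ")));
    }
}
```

The interesting design point is that the coordinator depends only on the service API (the interface), not on the provider class, which is exactly what lets the two tiers live in separate processes with separate QoS.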
In Web Services Compression and Reliability
, I explained that new SOAP compression specs like XOP and MTOM will not eliminate the need for messaging systems. What might give messaging systems
like WebSphere MQ
a run for their money are emerging standards around making Web services reliable.
Like many specification efforts surrounding Web services these days, there isn't just one spec in the works, but two: WS-ReliableMessaging and WS-Reliability.
The two specs are similar approaches by two sets of companies to solve the same problem: how to transmit SOAP messages reliably over a less-reliable protocol such as HTTP. For a comparison of the two specs, see "WS-RM and WS-R: Can SOAP be reliably delivered from confusion?"
Also, in the book I co-authored, Sean Neville
wrote an Emerging Standards
chapter which contains a good overview of WS-Reliability and WS-ReliableMessaging. The details may be a bit dated at this point, but it's still a good introduction. Here's a neat picture from Sean's chapter that shows quite effectively how WS-ReliableMessaging works:
So, if you can send SOAP messages reliably, who needs a messaging system anymore? Well, the thing is, the specs are just specs; you need products that implement them. In this case, you need some software running on each end of the connection that stores the messages and resends them/confirms receipt until successful. You know what that software is? That's a messaging system! So guess who's going to be in the marketplace of providing these spec-compliant products?
So as I see it, the key to reliable Web services is not as a replacement for messaging systems, but as a way to make messaging systems interoperable
. Wait, doesn't JMS make messaging systems interoperable? Seems like it, but no. JMS is an API that makes two different providers look the same to an application. JMS has nothing to do with making two different messaging systems talk to each other, even if they both support the JMS API. However, two messaging systems which both implement the same reliable Web services standard should be able to interoperate via that standard. Interoperability is the whole point of the standard.
So reliable Web services won't eliminate the need for messaging systems. Rather, the standards will make messaging systems interoperable.
If you'd like to play around with an implementation of WS-ReliableMessaging, check out the Emerging Technologies Toolkit
(ETTK) on Alphaworks
. To learn more about the latest release of the WS-RM spec, see "Web Services Reliable Messaging reloaded."
The World Wide Web Consortium
(W3C)--keepers of the pivotal Web specifications like HTTP
and SOAP
--has announced a new initiative to make SOAP more efficient and therefore make Web services faster. (Web services that are SOAP-based, anyway.) This comes out of an effort called SOAP Optimized Serialization Use Cases and Requirements
, part of the SOAP 1.2
effort. Optimized SOAP consists of three parts:
- XML-binary Optimized Packaging (XOP) -- Enables an XML document to contain binary data. Previously, the binary data would have to be converted to character data, increasing the size of the data.
- SOAP Message Transmission Optimization Mechanism (MTOM) -- Enables a SOAP message to contain binary data using XOP. This effectively replaces SOAP w/Attachments (SwA), which was never supported by Microsoft and therefore not part of the WS-I Basic Profiles 1.0 or 1.1 (although WS-I Attachments Profile 1.0 does support SwA). MTOM can also be used to encode an entire SOAP message in binary form, decreasing the size of the message.
- Resource Representation SOAP Header Block (RRSHB) -- Enables a SOAP message to contain references to external resources. These resources can be cached by the recipient so that they don't need to be transmitted in the message, which saves bandwidth when multiple messages contain the resource.
Do you want to know more?
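A quick way to see the problem XOP solves: without it, binary data has to be encoded as character data (typically base64), which turns every 3 bytes into 4 characters. Here's a small sketch of that overhead using the JDK's base64 encoder (the 1500-byte payload size is just an arbitrary example):

```java
import java.util.Base64;

// Demonstrates why embedding binary in plain XML is costly: base64
// turns every 3 bytes into 4 characters (roughly 33% bigger), which is
// the overhead XOP/MTOM avoid by carrying the raw bytes outside the XML.
public class Base64Overhead {
    public static int encodedLength(int rawBytes) {
        byte[] data = new byte[rawBytes];
        return Base64.getEncoder().encodeToString(data).length();
    }

    public static void main(String[] args) {
        int raw = 1500;                       // arbitrary example payload size
        int encoded = encodedLength(raw);
        System.out.println(raw + " raw bytes -> " + encoded + " base64 chars");
    }
}
```

That roughly 33% growth, plus the CPU cost of encoding and decoding, is what MTOM avoids by carrying the raw bytes alongside the XML instead of inside it.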
Remember Six Degrees of Separation
(and Six Degrees of Kevin Bacon
)? Ever heard that you're not just sleeping with
someone, you're sleeping with everyone they ever slept with
? Well, here's a picture of what those relationships look like.
This is an interesting example of how graphics can be used to illustrate data. Time Magazine
recently had an interesting article, "A Snapshot of Teen Sex
." It describes the results of a study of sexual behavior amongst students at an anonymous
but real high school in the Midwestern United States in 1995. An accompanying graphic illustrates who had sex with whom. (Besides describing what teens are up to, the really interesting part of the article is the graphic, which isn't included in the Time Online Edition
version. You can find the graphic in Sexual network of high school mapped by researchers
, shown below.)
The picture shows who had relations during an 18-month period. It shows a number of small clusters of people who had relations in small groups (such as 63 monogamous
pairs), and then one huge ring of 288 students that each had relations with someone else in the cluster. It's difficult to describe, but easy to see in the picture, hence the value of showing statistical data graphically.
This also shows the limits of graphics
, and how they can be misleading
. The picture hopes to show how an STD
can easily spread from one person to many. The problem is, the picture doesn't show time
. Those 288 students probably didn't all have sex at the same time
. The circle was probably many isolated parts until people between parts connected to make bigger links and finally a full ring. In other words, there was no ring until enough time went by for enough people to have enough sex with enough partners, and that took a while (although apparently less than 18 months). So to show the potential path of an STD, the links not only need to show encounters, but the order of the encounters.
Many readers probably look at a picture like this and don't think too discerningly
about it. They read the article, glance at the picture, and think, "Golly, if any one of those 288 teens has an STD, so will the rest of them.
" Well, only if the teen with the STD was part of the first couple in the statistical period to hookup
, and only if everyone else in the ring became part of the ring when they hookedup, not through later activities of their partners. So the graphic is interesting, and compelling, but can also lead to false assumptions.
Separately, I also like this quote from the article: "Adult sexual networks...usually involve clusters of wanton individuals known to public-health experts as 'core transmitters.' (Think prostitutes, NBA stars.)" NBA stars? See Wilt Chamberlain.
Oh, and if you really want to be disturbed about how teens are behaving today, see the movie Thirteen
. For a lighter take, see Mean Girls.
Billy Newport, one of the architects and developers of WebSphere Application Server, has a blog: /dev/websphere
, has a blog: /dev/websphere
. It's a really good resource that explains some of the more interesting new developments in the WAS products, as told by one of the guys who helped put those features in there. I've added it to this blogs list of blogs that I need to be keeping up with (a.k.a. "Blogs I read," in the right-hand column).
Case in point: Billy just posted WebSphere 6.0 and NFS V4/SAN FS, a match in heaven
. Of course! Huh?
Basically, he explains how the transaction manager in WAS 6
provides rapid (i.e. about 12 seconds) recovery of in-doubt transactions
. That means that if your Web site crashes, even if a customer was in the middle of placing a bizillion dollar order and had successfully committed the transaction, the order won't be lost; his work will fail over to another WAS server in the cluster and will take a fraction of a minute to recover, as if nothing happened. That's pretty quick recovery, and it's automatic. Furthermore, such an arrangement usually requires some pretty fancy (and expensive) storage hardware such as a SAN or a disk array, but WAS 6 does it with the disks on any standard file server.
Check out Billy's blog
to learn more.
In Better Web Services Performance
, I talked about three new specs from the W3C that will help make Web services more efficient. A reader asked if this will make SOAP compete with messaging systems. No, they're different things.
Specifically, the reader asked:
Do you think these changes will cut into MQ's market share in middleware messaging? I know that MQ is often used as a transport for Web Services. Do you think SOAP can compete in this space?
So, are MTOM and XOP the death knell for WebSphere MQ
and similar products? Interesting question, but the comparison is apples and oranges. XOP and MTOM are about compression; message-oriented middleware
(MOM) is about reliability.
The typical transport for Web services is HTTP
, based on TCP/IP
, which is generally regarded as an unreliable protocol. Why? Because when there are network problems, packets can get lost. To make it reliable, you need something like FTP
that will detect and resend lost packets.
A more reliable transport is asynchronous messaging
, such as JMS
providers like WebSphere MQ. A messaging system provides exactly-once delivery semantics, meaning that a message can't get lost. Why? Because the messaging system will store the message at the sender's end and retry transmitting the message to the receiver's end until it succeeds. (See Basic Messaging Terminology
.) That's exactly-once delivery.
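Here's a toy illustration of that store-and-retry idea. It's emphatically not how any real messaging product is implemented; it just shows the two halves of exactly-once delivery: the sender keeps the stored message and retries until it gets an acknowledgment, and the receiver discards duplicates by message id (so a retry of an already-delivered message doesn't deliver it twice).

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy sketch of exactly-once delivery: retry until acked, dedup by id.
public class StoreAndForward {
    static class FlakyReceiver {
        private int failuresLeft;                       // simulated network losses
        final Set<String> seenIds = new HashSet<>();
        final List<String> delivered = new ArrayList<>();

        FlakyReceiver(int failuresLeft) { this.failuresLeft = failuresLeft; }

        // Returns true (an ack) only when the transfer "succeeds".
        boolean receive(String id, String body) {
            if (failuresLeft > 0) { failuresLeft--; return false; } // lost in transit
            if (seenIds.add(id)) delivered.add(body);               // dedup by id
            return true;
        }
    }

    // The sender keeps the message stored until it gets an ack.
    public static int attemptsToDeliver(FlakyReceiver r, String id, String body) {
        int attempts = 0;
        boolean acked = false;
        while (!acked) {
            attempts++;
            acked = r.receive(id, body);
        }
        return attempts;
    }

    public static void main(String[] args) {
        FlakyReceiver r = new FlakyReceiver(2);   // first two attempts fail
        int attempts = attemptsToDeliver(r, "msg-1", "hello");
        System.out.println("attempts=" + attempts + " delivered=" + r.delivered);
    }
}
```

Retrying gives at-least-once delivery; the duplicate check at the receiver is what turns at-least-once into exactly-once.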
Also see the Guaranteed Delivery
pattern. As my co-author Gregor Hohpe
likes to say: Reliability is relative. No networking transport is completely reliable. Even with guaranteed messaging, the disk might get full, or it might get hit by a meteorite
--either way, messages are lost. Heck, even the ARPANET might not survive a nuclear attack. (Answers.com
, Internet Gurus.)
So a messaging system is used to make delivery of a SOAP message reliable, whereas XOP and MTOM compress it. HTTP can still lose an MTOM message; it will just waste less bandwidth doing so.
Here's a question that keeps coming up that I keep forgetting the answer to:
Say you're trying to measure how long it takes some data to get from one computer to another, whether it's an RPC call
or a message
or whatever. Or say you want to use message expiration
. In either case, to measure the elapsed time, both computers have to agree on what time it currently is. How do you make sure that two computers' clocks are synchronized?
The answer (as you may have guessed from the title) is the Network Time Protocol
(NTP). It runs on Internet Protocol
(IP), typically on port 123. It uses a master/slave configuration: the master hosts a reference clock
and the NTP service to make that clock available; the slaves synchronize by getting the time from the master and using that to set their own clocks.
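The arithmetic at the heart of that synchronization is worth seeing. Given the four NTP timestamps--request sent (t0), request received (t1), reply sent (t2), reply received (t3)--the slave can estimate its clock offset from the master even though the network adds delay. The timestamps below are made up for the example:

```java
// The core of NTP's synchronization: from four timestamps -- request
// sent (t0), request received (t1), reply sent (t2), reply received
// (t3) -- the slave estimates its clock offset and the round-trip delay.
public class NtpOffset {
    public static double offset(double t0, double t1, double t2, double t3) {
        return ((t1 - t0) + (t2 - t3)) / 2.0;
    }

    public static double roundTripDelay(double t0, double t1, double t2, double t3) {
        return (t3 - t0) - (t2 - t1);
    }

    public static void main(String[] args) {
        // Made-up timestamps in seconds: the slave's clock is 5s behind
        // the master, with 0.5s of network delay each way.
        double t0 = 100.0, t1 = 105.5, t2 = 105.6, t3 = 101.1;
        System.out.println("offset=" + offset(t0, t1, t2, t3));
        System.out.println("delay=" + roundTripDelay(t0, t1, t2, t3));
    }
}
```

In the example, the slave is 5 seconds behind the master and the round trip took 1 second, which is exactly what the two formulas recover--the averaging cancels out the network delay as long as it's roughly symmetric.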
For more information:
BTW, a handy dandy site for showing the time to us humans (at least in the United States) is The Official U.S. Time
, operated by the National Institute of Standards and Technology
(NIST) and the U. S. Naval Observatory
(USNO). It displays Coordinated Universal Time
(UTC) for any U.S. time zone
you choose, including adjustments for Daylight Saving Time.
Did you know you can configure Windows XP
to automatically synchronize your computer's clock with an Internet time server? See Synchronizing your computer clock.
This month's issue of the IBM WebSphere Developer Technical Journal
is now available. Here's some of what it features:
So, lots of good stuff to check out. Also, for older articles, they have an archive of the previous issues
Lately I've been speaking with some colleagues who are knowledgeable people, but who know relatively little about messaging. So for their benefit and for any of you who feel a little confusion along these lines, here's an explanation of basic messaging terminology. For reference, I'll use terms from the Java Message Service
(JMS) API and the corresponding patterns from Enterprise Integration Patterns
, the book I co-authored with Gregor Hohpe
and several contributors.
Messaging is a technology applications use to exchange data, to transfer a data structure (such as a record or set of records, a serialized Java object, the text for an XML document, etc.) from one process' memory heap to another. To transfer the data via a messaging system, the applications put the data in a Message and transfer it via a Destination (aka Message Channel). An application that adds messages to a destination does so using a MessageProducer. An application uses a MessageConsumer to remove messages from a destination. (Producers and consumers are a.k.a. Message Endpoints.)
There are two kinds (subtypes) of destination in JMS:
- Queue (aka Point-to-Point Channel) -- A queue delivers each message to exactly one consumer. A queue can have multiple consumers, but only one will get each message. (See Competing Consumers.)
- Topic (aka Publish-Subscribe Channel) -- A topic delivers each message to all of the topic's consumers, so every consumer gets a copy of every message.
A queue producer is called a QueueSender
and a queue consumer is called a QueueReceiver
. A topic producer is called a TopicPublisher and a topic consumer is called a TopicSubscriber.
So, some quick conversational messaging-speak: An application uses a sender to send a message to exactly one receiver (via a queue), but uses a publisher to broadcast a message to multiple subscribers (via a topic). For more of a quick overview of what messaging is all about, check out the EIP book's introduction.
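If you'd like to see the queue-versus-topic distinction in running code without installing a JMS provider, here's a toy in-memory sketch. It is emphatically not a real messaging system (no persistence, no interprocess communication, invented class names); it only demonstrates the delivery semantics described above.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy sketch of the two destination kinds: a queue hands each message
// to exactly one consumer; a topic copies each message to every subscriber.
public class DestinationsDemo {
    // Point-to-point: consumers compete; each message is taken once.
    static class ToyQueue {
        private final Queue<String> messages = new ArrayDeque<>();
        void send(String msg) { messages.add(msg); }
        String receive() { return messages.poll(); }  // null when empty
    }

    // Publish-subscribe: every subscriber gets its own copy.
    static class ToyTopic {
        private final List<List<String>> subscribers = new ArrayList<>();
        List<String> subscribe() {
            List<String> inbox = new ArrayList<>();
            subscribers.add(inbox);
            return inbox;
        }
        void publish(String msg) {
            for (List<String> inbox : subscribers) inbox.add(msg);
        }
    }

    public static void main(String[] args) {
        ToyQueue q = new ToyQueue();
        q.send("order-1");
        System.out.println("consumer A got: " + q.receive()); // gets the message
        System.out.println("consumer B got: " + q.receive()); // null -- already taken

        ToyTopic t = new ToyTopic();
        List<String> sub1 = t.subscribe();
        List<String> sub2 = t.subscribe();
        t.publish("price-update");
        System.out.println("sub1: " + sub1 + "  sub2: " + sub2); // both get a copy
    }
}
```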
Now, the next time you're at a dinner party and the conversation turns to messaging, you'll be prepared. You can thank me later.
As I've mentioned before
, the department I work in publishes the WebSphere Recommended Reading List
. As promised, it has been updated for 2005. You can also download the PDF
. The updated list already includes resources for WAS 6, including a WebSphere Application Server V6 Technical Overview.
One of the new Google tools I mentioned in Google Stuff
was Google Suggest
. Turns out Bill has already discovered Google Suggest
, as has Joel on Software
. Great minds think alike, or at least rediscover the same things.
Turns out there's also a Google Suggest FAQ
. (Access it through that "Learn More" link on the Suggest page.) Google Labs
lists a whole bunch of tools they're working on. (Reminds me of IBM's alphaWorks
.) There's also 2004 Year-End Google Zeitgeist
, a pile of statistics
about what was searched on during the year.
Part of what's cool about Google Suggest
is not just what it does, but how it's implemented. It has a dropdown list that updates as you type, without reloading the web page. How do they do that? Joel points to the answer: XmlHttpRequest
from Apple (doesn't it figure somehow? Apple is still the innovator of so much cool stuff!), as explained in Auto complete comes of age
. As usual, Slashdot has been on this for two months now, complete with an explanation of the Implementation details.
There's an article on ACM Queue
, "A Conversation with Alan Kay
." Alan Kay
is a founder of Smalltalk
(where Java got all its best ideas!), a Fellow at Apple, Turing Award
winner, and much more. As Alan likes to say
, "The best way to predict the future is to invent it," so go read
what Alan has to say about the future today.
There's a lot of different search engine stuff going on, as the rest of this post will show. But before I disappear into that trivia escapade, I should make the point that the effort of organizing and finding stuff on the Web, and getting stuff onto the Web so that it can be found, is big business. "What's Next for Google
" (MIT Press) argues that the search wars will be the next browser wars and that Google could end up like Netscape (flattened by Microsoft). It explains the need for Google to establish search standards so that search engines can work together, search will become a natural part of applications instead of a separate activity, and no one company will be able to dominate. It's good reading.
Now, what can you do with Google these days?
Want to know about the features available
in a Google search? Some of the different Google tools and services
available like Google Catalogs
and Google Local
? How about a whole (unofficial) Google blog
? It's all out there, and more.
Here's an interesting one I just learned about: Google Suggest
. I would have expected "http://suggest.google.com," and maybe it'll be there eventually, but right now it must still be too experimental. In any event, Suggest does keyword completion while you type. Type in "bobby " and a dropdown list appears with suggestions like "bobby darin," "bobby brown," and "bobby jones." Type in "bobby woo" and fourth in the list is "bobby woolf." And Suggest says "bobby woolf" has 127,000 results! The search claims "about 322,000" results! Those can't all be about me, can they? Also, Suggest lists this developerWorks blog
fourth, whereas the standard Google Search (i.e. http://www.google.com/search?q=bobby+woolf
) lists the blog second. Do the two tools use different searches?
The easiest place to find articles on breaking news is Google News
. Google is also taking on MapQuest
with Google Maps
. Google Scholar
specifically searches for scholarly literature. For example, http://scholar.google.com/scholar?q=ValueModel
finds my 1994 paper "Understanding and using the ValueModel framework in VisualWorks Smalltalk," which has been cited six times (apparently the record amongst papers on ValueModels
). I'm a scholar!
Google may well digitize the world's libraries, similar to Amazon's effort to create a digital archive of books
. Digitizing Initiatives
summarizes what's going on with all this.
Amazon, not to be left out, is jumping into the game too. A9.com
is Amazon's search engine. A9 includes a Yellow Pages feature that includes pictures of business locations. For example, the search invoked by the URL http://a9.com/books?a=oyp
shows bookstores in Seattle, Washington, in case you don't want to order from Amazon. (Interestingly, the A9 Advanced Search page looks a lot like Google's. Seems like if Google had patented advanced search the way Amazon patented one-click shopping
, Amazon would be in trouble.)
I think this also shows a difference of approaches. Google has explicitly separate tools, each with their own subdomain URL, whereas Amazon's features are embedded and not separately addressable. Seems like Google's tools will be more reusable (which is probably what both companies have in mind).
For more reading:
Feb 14, 2005
Turns out there's also Google Labs
, where Google showcases its ideas that aren't ready for prime time.
A colleague of mine just asked me about keeping up with blogs, so here's the answer as a blog entry (kinda self-referential):
In summary: I use SharpReader
, and so does James Snell.
Here's the long version:
A current thread of discussion is what blog readers to use. A problem with blogs, as you may well have discovered, is that there's lots of them and they're updated at all different rates. It gets old checking them each day just to find that there's nothing new, but it also sucks to check one after a week and find there's been something useful there for several days but you didn't know about it. You kind of wish blogs worked more like e-mail, such that each new blog entry would be mailed to you and you'd be notified. Perhaps you could just subscribe to the ones you're interested in (and not all one-bazillion blogs there seem to be in the world these days), and perhaps have a different mailbox for each blog.
This is the idea behind RSS
feeds. They syndicate information in a machine-readable form (XML
--what a shock!) so that a feed reader (which works like a web browser or e-mail viewer) can process the information and present it to you. Not only can the reader subscribe to only the blogs you point it at and keep each one's feed separate, but it can automatically check the feeds periodically and let you know when there's new content.
For example, if you're reading my blog on the developerWorks site
, in the upper-right corner of the web page, you'll notice a calendar and a button under it labeled "RSS." If you click on it, the link
returns not an HTML
document but an XML
document. The XML's root element is an <rss> element, containing a <channel> element, containing a bunch of <item> elements. Your RSS reader interprets this and displays the blog to you in the manner it sees fit.
James Snell has commented
that he uses SharpReader
, which is also what I use. Seems like I tried several a couple of months ago and settled on this one. It's a GUI that runs like a web browser or e-mail viewer. Bob Sutor
and Bill Higgins
say they like to use Bloglines
, which is a web site that shows you the blogs you're interested in.
What I like about something like SharpReader is that I can sync it, then read stuff off-line, something you can't do with Bloglines since it's a web site. (Then again, some blog feeds don't support off-line reading, they essentially just feed a URL to the entry, which isn't much of a feed. Or they only feed the first paragraph, so you have to be on-line to read the rest of the entry. Guess they're saving bandwidth.) Bill says that Bloglines is not so much a reader, but a community that lets you know who's reading what blogs and other blogs that are similar to the ones you like. So I guess it all depends on what you're looking for.
One problem that RSS feeds still can't solve is that once you subscribe to one, you get all of the items published on that feed, even if you're only interested in some of them. There is currently no "RSS policy" that we authors can use to describe our items and you subscribers can use to filter the items you receive.
James Snell has also made some interesting comments
about the information pull model that services like RSS and Atom represent as opposed to the push model of e-mail, where you get lots of junk pushed your way by anyone who knows your e-mail address, and where you have to actively ask someone else to unsubscribe you from a mailing list.
In any event, if you're manually polling blog web sites and thinking there's got to be a better way, there is. Check it out.
A book that simply has not been receiving the amount of attention it deserves is Domain-Driven Design
by Eric Evans
(a friend of mine). This book does an excellent job of taking the most powerful object-oriented practice so far, domain modeling
, and explaining it with what is probably the most revolutionary documentation technique of at least the past decade, patterns
. The result is a book that describes exactly how to develop the domain model your application needs. As Kent Beck
commented, "The book is absolutely fabulous! I wish I had written it."
There's now an article by Jimmy Nilsson
(another friend), "Simplify Your Efforts With DDD
." In it, Jimmy captures the essence of Eric's book and techniques (in a nutshell, as it were) and illustrates why it's so useful. Give it a read and find out what you're missing.
For more thoughts on books you might want to check out, see my recent post ISSW Recommended Reading List.
Each month, the WebSphere Developer's Zone
column "Meet the Experts
" features an IBM WebSphere expert to answer your questions. I was the expert in December
. This month's expert is one of my IBM Software Services for WebSphere
colleagues, Gang Chen, talking about J2EE transactions. Gang is wicked smart about how transactions work inside of WAS. So put Gang to work; ask him some questions.
In Break up HP?
, Bill comments on how analysts are calling for the breakup of Hewlett-Packard
. Bill points out that there were similar calls to break up IBM in the early nineties, but Lou Gerstner
kept IBM together, and the company has been a lot better for it. The comparison between IBM then and HP today is a tempting analogy
, but I don't think it holds for HP.
The Gerstner administration was able to help IBM find and develop a lot of synergies between its business parts, the most obvious being IBM software
products running together and on IBM hardware
. IBM Global Services
got started in the '90s, and now a lot of its business involves helping clients use IBM software on IBM hardware. The businesses work well together and complement each other. I don't know if that was obvious when Gerstner started, but it's obvious today.
I don't see the HP businesses working well together. The vast majority of HP's profits come from printers and supplies
. Their PCs, servers, and storage are also-rans
, their software loses money, and their services/consulting haven't taken off. (Check out recent articles in Fortune
, Business Week
, etc.) Where's the synergy? I don't see how they're going to be able to form an IBM-caliber enterprise IT company around printers.
Another leading technology company that's been in trouble for a while is Sun Microsystems
. Linux is stealing Solaris' thunder, their custom microprocessor
R&D is becoming prohibitively expensive, and despite their enormous leadership role in Java
, they can't make money off of it. (By comparison: IBM defrays the costs of developing Power chips
by selling them to lots of partners; IBM's WebSphere Application Server
leads the J2EE marketplace whereas Sun's Java System Application Server
has very little market share.)
So where is all of this going? Will HP break up? If so, will the non-printer parts survive or be bought? Will parts or all of Sun survive or be bought? If bought, by whom? IBM already has these products and businesses. Microsoft doesn't want them. Computer Associates used to buy any company going out of business (or at least its discontinued software products), but CA is in its own trouble these days. Oracle can't buy everybody (or can it?). Symantec's looking for new businesses to get into (now that Microsoft's stealing its thunder); are these it?
Where are these decent but unprofitable business units going to go?
(Usual disclaimer: I don't speak for IBM nor know what plans my employer may have. I'm just an informed outsider who reads business magazines.)
Wayne, an old Smalltalk friend
and IBM Software Services for WebSphere
colleague of mine, has a new developerWorks blog
, WebSphere Migrations: Practice and Experience
. Wayne's specialty in our department is migrating customers' business application code to the latest versions of WebSphere Application Server
, either from older versions of WAS or from competitors' J2EE
products. Wayne's also a real Extreme Programming
kind of guy.
Like me, Wayne focuses on changes in the J2EE specs like Can't Use MessageListeners in J2EE
and Change to JMS Sessions in J2EE 1.4
. He focuses on them more than I do because they mess people up when they try to migrate their code. So far in his blog, Wayne is discussing code that is trying to be loosely coupled, but isn't really, and what to do about it. So, go check it out.
There's a new web site, the PatternShare Community
. It's the brainchild of Ward Cunningham
and is being hosted by Microsoft
. It's an effort to collect patterns from several different sources, summarize them, and show how they fit together into a unified pattern space. The idea is to help answer the question, "How do the patterns in this one book fit with the patterns in this other book?" (See "What is a PatternShare.")
To start with, a group of authors and their publishers decided to work together to use their books to seed the web site. So the books that are represented now are Patterns of Enterprise Application Architecture
, Domain-Driven Design
, Enterprise Integration Patterns
, and several other prominent patterns books.
The site has been a long time in the making, but is still just getting started. We authors are endeavoring to link our material together better. We also hope other authors will want to join, and we will add their work in over time. Meantime, have a look; hopefully you will find it helpful.
Simon Johnston has a rather interesting developerWorks blog
, Service Oriented Architecture and Business-Level Tooling
. He hasn't had time to blog in a couple of months, but popped up again last week with a couple of interesting postings. The first posting
discusses a major advantage of SOA services: a service's components and all their prerequisites are already hosted for you. Would you really want to host your own credit card validation component? What database would it work off of? The second posting
discusses the relationship of services, components, and objects. Also, does SOA encourage fewer connections, less distribution, and fewer dependencies between components, or more of them?
It's good to have Simon back.
As noted two weeks ago
, the latest JDO
2.0 draft (JSR 243
) was not approved. Also, as noted back in October
, a lot of the JDO effort is now being channeled into JSR 220.
Now Richard Monson-Haefel
(J2EE: A Standard In Jeopardy?
) posts The Death Knell: The JCP EC Rejects JDO 2.0
, where he speculates that the rejection will "make JDO a footnote in the annals of Object-Relational persistence
." He believes all significant Java O/R persistence effort will now occur in EJB, not JDO.
Interestingly, Richard asserts: "Right now it makes more sense to use JDO or Hibernate than it does to use EJB 2.0/2.1 container-managed persistence
" (for new development) because EJB 3.0 will likely be more like the former than the latter. I've said otherwise
, that WebSphere
customers should stick with J2EE. The way I've heard it, the J2EE vendors have a very strong stake in supporting their customers by making sure the new versions of EJB and J2EE are backwards-compatible with code developed for the current ones. Yes, prominent JDO and Hibernate figures are on the JSR 220 committee now, but so are EJB 2.x people who care very much about getting current customers to buy the next version of their products.
I don't decide these things for IBM, but it's hopefully a no-brainer
that current customers will be supported going forward. Therefore, I would stick with J2EE, and change when J2EE does, not before.
Let me go meta
here for a bit to discuss off-topic blog postings (of which this is one!
). As Stephen O'Grady puts it, "Does personal material belong in a work blog?
" Fellow dW blogger Bob Sutor has divided his work into three blogs
: public, internal, and personal. I've been wrestling with the same concerns. Now, I don't want to get off on a rant here, but...
First, there are some things I don't blog about because they're currently confidential information that IBM hasn't chosen to make public yet. That's what an internal blog is for.
Second, there's personal material that doesn't have much to do with work. That would be me discussing my hobbies and other parts of my fascinating social life. That's what a personal blog is for.
The third option, obviously, is the professional blog, where you discuss public information pertaining to a professional topic publicly. My blog that you're reading now is a professional blog about J2EE development. Other examples are political blogs, news blogs--commercial sources of information.
So the three-blog approach that Bob uses makes sense. But I think there are still issues around what belongs on a professional blog. There are some broad guidelines, like don't make your employer look bad. But then there's a range of what I consider being on-topic vs. off-topic. I find that my blog postings fall into one of three categories:
- original content -- Postings with new information that isn't available elsewhere, or new summaries or analysis of previously existing information.
- references -- Links to existing content that don't add much to the content, but make the reader aware of the content and where to find it, and enough information about the content for the reader to decide whether to pursue it.
- tangential discussions -- Commentary about topics that are not the purpose of the blog but that are interesting to the blogger and hopefully interesting to the readers.
Original content tends to be on-topic. References are usually on-topic too, but not as valuable as original content because the referenced material would still be available and discoverable whether or not the blogger pointed it out. Many blogs do little more than simply point to other material, and many of those are more news sites than blogs (or are, indeed, news sites, with all the personal flavor of a TV newscaster). References are still of value, though, helping readers find material of value that they might otherwise not know about. Tangential discussions are the least valuable, but sometimes the most interesting. They're usually quite off-topic, but fun.
For tangential discussions, I think the important consideration is not so much that the blogger thinks the tangent is interesting, but that they think it will be interesting to their readers. When I go onto a tangent (graphing statistics, swarming downloads, Simpsons quotes), I try to ask myself, "Will the J2EE developers who hopefully are reading my blog find this interesting? Or, is there something about this that makes it interesting to J2EE developers?" For example, I've done a couple of entries on happiness. Why? Not for the J2EE content! But I find this topic interesting and useful, which is why I read up on it. I blog about it because I find a lot of J2EE developers, and technical/engineering people in general, often struggle with being unhappy even though we generally have a lot to be happy about. So I would say that discussions of happiness are in many ways as important for this readership as discussions about J2EE (although a topic I myself have much less expertise about). After all, wouldn't you rather be a happy coder and work with happy coders?
So I have wrestled, and continue to wrestle, with the mix of material on this blog. Obviously original content is good, but there's only so much of it that I can produce, in terms of inspiration and in terms of time to document it. References are frankly a lot easier to blog and hopefully still valuable. Tangents add color and keep things interesting, if not for the readers then at least for me (so that I'm motivated to continue doing original content and references!).
I often wonder what readers want. From what little surveying I've been able to do, they like the original content and want more. But that's hard to produce. Given that the amount of original content is fixed (bounded by inspiration, time and commitments, confidentiality, etc.), what other options are there? Is it better for a blog to have just original content, even though that's less material in total? Or to have more material, with references and tangents in addition to original content, because that stuff is valuable too? I think all of us bloggers are trying to figure that out.
There was a court ruling on Tuesday that didn't end the SCO Linux lawsuit, but it didn't go well for SCO. I don't know anything about this except for what I read, so here you go:
This is a bit outside the scope of my blog, but interesting news: Carly Fiorina
(no longer here
), the chairman and CEO of Hewlett-Packard
, has resigned. Fiorina has been facing pressure from HP's board and stockholders because the controversial merger with Compaq
she spearheaded in 2002 has not produced the increased shareholder value that was promised. HP and IBM are often seen as each other's biggest competitors.
See Carly Fiorina steps down as chairman, CEO of Hewlett-Packard
and HEWLETT-PACKARD: Why Carly's Big Bet Is Failing.
The Apache Software Foundation
has announced a new Struts subproject
. Struts Shale is a JSF
-based version of Struts. The old version of Struts will be renamed Struts Classic and will continue to be maintained (if there are people interested in doing so). (For some JSF links, see Service Data Objects (SDO).)
Say a file server has a really popular file that everyone wants to download all at once. (My latest blog posting?! No, something important, and huge, like the latest release of RedHat Linux
.) How can a server (or even a cluster) possibly have enough bandwidth? Who wants to pay for all those servers and network connections?
The answer is "download swarming." Swarming enables a server that's being hit with numerous requests for the same file to upload that file with more bandwidth than the server actually has. Impossible, right?
A swarming download is first of all a segmented download
, whereby the server breaks the file into parts; parts can be downloaded concurrently, and an interrupted download can be resumed. Swarming takes segmentation one step further: as a client downloads segments, it must also upload segments to other clients that are downloading the file. (The main server redirects new clients to these established clients.) Thus a swarming download has a multiplier effect: the original server uploads to a few clients, which in turn upload to more clients, and so on.
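As a rough sketch of the segmentation step, here's how a client might divide a file into byte ranges to fetch concurrently (for example, via HTTP Range requests). The function name and the fixed segment count are my own illustration, not any particular downloader's API:

```python
def segment_ranges(file_size, segment_count):
    """Split a file of file_size bytes into segment_count
    contiguous (start, end) byte ranges, end inclusive."""
    base, extra = divmod(file_size, segment_count)
    ranges = []
    start = 0
    for i in range(segment_count):
        # Spread any remainder bytes over the first few segments
        # so every byte of the file is covered exactly once.
        length = base + (1 if i < extra else 0)
        ranges.append((start, start + length - 1))
        start += length
    return ranges

# A 1000-byte file split four ways yields four 250-byte segments.
print(segment_ranges(1000, 4))
```

Each range can then be downloaded on its own connection, and a client that was interrupted only needs to re-request the ranges it's missing; swarming reuses the same segments as the units that clients trade with each other.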
Why should clients upload and not just download? Because upload throughput regulates download throughput; you have to upload more to download more. As one FAQ
puts it: "You could hack the source to not upload, but then your download rate would suck. Downloaders engage in tit-for-tat with their peers, so leeches have very little success downloading."
There's an open-source program, BitTorrent
, which is all the rage for swarming. It's the brainchild of Bram Cohen
, recently praised in "Downloading Hollywood
." According to the docs
, "Its advantage over plain HTTP is that when multiple downloads of the same file happen concurrently, the downloaders upload to each other, making it possible for the file source to support very large numbers of downloaders with only a modest increase in its load.
" Wikipedia has an incredibly thorough write-up
with pictures and everything. (Although another page then seems to say that segmented downloading and swarming downloading are the same thing. Oops.)
A similar effort is Peer Distributed Transfer Protocol
(PDTP): "PDTP decreases the amount of bandwidth a server needs to effectively serve files to a large number of clients by having the clients distribute portions of files to each other whenever possible.
" Interestingly, Onion Networks
claims to have invented the swarming approach to file transfer and to have multiple pending patents.
As usual, the collective wisdom of Slashdot
was on this years ago: Finally Real P2P With Brains
. It's potentially a way to help handle the Slashdot effect
. (Again, a problem no one accuses my blog of ever having caused.)
IBM has Download Director
, a Java applet that performs segmented downloads transparently. But it doesn't do swarming (nor would customers want that, I suspect; "I have to upload to download?!"). IBM also has an experimental grid downloader, creatively called "downloadGrid
," which runs on an experimental grid computing network
. The grid approach is not just downloading segments concurrently from one server, but from lots of servers, whichever can currently serve you best. But that's still not swarming.
For more information about grid computing, visit the developerWorks Grid Computing zone
. We don't have a Swarming Computing zone, at least not yet.