This is good news. If nothing else, it helps conceptually split up the laundry list of features in WXD into three logical groups with descriptive names. It also means that if your organization only needs one set of capabilities, you don't need to buy all three; you can save some money and simplify the installation. Or, for simplicity when you want all three, you can still buy the integrated suite as a single product. My understanding of our pricing (check with your salesperson) is that if you want two of the products, you might as well buy the suite; they're about the same price, so you get the third product more or less for free.
How should you set up WebSphere Process Server in your production environment?
I've talked about WebSphere Process Server (WPS) and the latest version, 6.2. The simplest way to install a WPS runtime is a single server on a single node in a single cell, which is perfectly adequate for unit testing (and in fact is the topology for the WPS test server in WebSphere Integration Developer (WID)). However, this is not the best setup for production. In production, you usually want a basic amount of high availability (HA) so that your users still get service even when part of your infrastructure goes down.
Let's take a quick look at what's in the golden topology. Two details to observe are that the cell contains two nodes and three clusters. Why is that?
Nodes -- The cell contains two nodes which should be installed on two different host machines (or perhaps two LPARs on the same host machine). This way, if one node crashes or has to be taken down for maintenance, the work will fail over to the other node and the users will still be able to do their work. During normal operation, workload management (WLM) will distribute requests across both nodes (and in fact also handles the failover when one node fails). An even better topology might have three or four nodes for increased redundancy.
Clusters -- Since the cell contains multiple nodes, the application should be deployed not in a single server running on a single node, but in a cluster with cluster members (aka application servers) on multiple nodes. In WPS, it's helpful to actually use three clusters: one for the business logic parts of the application and two for some of the main WPS infrastructure (see the configuration sketch after this list).
Business logic cluster -- This is where you deploy your SCA modules, including business process modules that run in the Business Flow Manager and Human Task Manager.
SIBus engine cluster -- This is for WPS to host the messaging engines that are part of the service integration bus. A separate cluster is the easiest way to enable all of the business logic application instances to access the messaging engines, even when the engines fail over.
CEI engine cluster -- This is for WPS to report what's going on for monitoring by products like IBM Tivoli Composite Application Manager (ITCAM) for IT monitoring and WebSphere Business Monitor for business monitoring. The separate CEI cluster helps lessen monitoring's interference with the business application.
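For a concrete flavor of the configuration, here's a minimal wsadmin (Jython) sketch of creating the three clusters across two nodes. The node and cluster names are invented for illustration, and a real WPS install would typically use the deployment environment wizard and the product documentation rather than a bare script like this:

```python
# Hypothetical wsadmin (Jython) sketch of the golden topology's three clusters.
# Names are invented for illustration. Run against the deployment manager, e.g.:
#   wsadmin -lang jython -f goldenTopology.py

nodes = ['Node01', 'Node02']   # two nodes, ideally on separate host machines

# One cluster each for business logic, SIBus messaging engines, and CEI
clusters = ['AppTarget', 'Messaging', 'Support']

for cluster in clusters:
    # Create the cluster itself
    AdminTask.createCluster('[-clusterConfig [-clusterName %s]]' % cluster)
    # Put one member on each node so work fails over if a node goes down
    first = 1
    for node in nodes:
        member = '%s_%s' % (cluster, node)
        config = ('-clusterName %s -memberConfig [-memberNode %s -memberName %s]'
                  % (cluster, node, member))
        if first:
            # The first member also establishes the server template
            config = config + ' -firstMember [-templateName default]'
            first = 0
        AdminTask.createClusterMember('[%s]' % config)

AdminConfig.save()
```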
So when deciding how to install WPS in production, the golden topology is at least a good start.
The IBM Sequoia supercomputer will be faster than the fastest 500 supercomputers, combined.
So says "IBM Sequoia: Faster Than the Fastest 500 Supercomputers, Combined" on a blog called Gizmodo. "IBM Tapped For 20-Petaflop Government Supercomputer" (InformationWeek) says the Sequoia, which will be built at the Lawrence Livermore National Laboratory for the National Nuclear Security Administration, will deliver 20-petaflops of computing power. That's 20 times more powerful than today's fastest computer. "The system will comprise 96 refrigerator-size racks with a combined 1.6 PB of memory, 98,304 compute notes, and 1.6 million IBM Power processor cores" and cover 3,422 square feet. "Uncle Sam buys 20 petaflops BlueGene super" (Channel Register) gives details of the Dawn (a BlueGene/P), a 501 teraflop machine to be delivered late this year, and the Sequoia, a 20.13 petaflop machine to be delivered in 2011. The Sequoia will even be energy efficient; "U.S. taps IBM for 20 petaflops computer" (EE Times) quotes IBM's Herb Schultz: "The Sequoia system will be 15 times faster than BlueGene/P with roughly the same footprint and a modest increase in power consumption."
Integration is the primary use case for more than half of the ESBs deployed today. The core language of EAI, defined in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf, is also the core language for defining ESB flows and orchestrations, as seen in the ESB's developer tooling. For the users seeking integration, the ESB brings connectivity, protocol conversion, mediation, and other integration features together in one place to support the design, development, and management of integrated business solutions.
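The mediation the report mentions is where those patterns show up in practice. As a toy illustration, here's a minimal sketch of one of the best-known patterns, the Content-Based Router, in plain Python; the message shape and channel names are invented, and a real ESB would define the same flow in its own developer tooling:

```python
# A toy Content-Based Router, one of the Enterprise Integration Patterns.
# Channels are just lists here; in an ESB they'd be queues or endpoints.

def route(message, channels):
    """Inspect the message content and forward it to the matching channel."""
    order_type = message.get('type')
    channel = channels.get(order_type, channels['invalid'])
    channel.append(message)  # 'send' the message to the channel

channels = {'widget': [], 'gadget': [], 'invalid': []}

route({'type': 'widget', 'id': 1}, channels)
route({'type': 'gadget', 'id': 2}, channels)
route({'type': 'gizmo',  'id': 3}, channels)  # unknown type -> invalid channel

print(channels)
```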
I'm interested that only about half of the ESBs deployed today are used primarily for integration. What are the rest being used for? In fact, I'd be more specific and say that an ESB should be used for service integration (or as IBM likes to call it, "service connectivity"), i.e. connecting together service requestors and providers in an SOA. Buses can be used for other things, like transporting data and providing event notification, but I wouldn't exactly call those functions of a service bus.
Anyway, nice to be thought of as having helped to document the core language of EAI.
Thanks to my friend Dave for pointing out to me that I was mentioned in this report.
Marquez says, "Despite the total meltdowns of the U.S. and global economies last October, IBM executed flawlessly and handily beat analysts' earnings estimates, expanding both its margins and its profit outlook for 2009." Others have also made positive evaluations of IBM's earnings report, such as "IBM posts earnings rise, sees strong growth in 2009" (MarketWatch), and the stock rose as a result ("IBM shares rise after earnings beat expectations" in MarketWatch). Marquez also says, "The virtue of IBM's model is that it has effectively transformed itself from the cyclical hardware company that gave it its name into a software-and-service-oriented firm that gives it a recurring revenue stream. In addition into this well-thought-out business model that concentrates on high-margin, value-added businesses." Yeah, I think that sounds like us.
Another point Marquez makes is that "the many stimulus plans being implemented around the world will no doubt increase demand in many of IBM’s product-and-service areas." This sounds like a nod to the "Smarter Planet" efforts IBM has been talking about.
Marquez also advises his nephew, who has an internship at IBM, to try to leverage it into a permanent job with the company because "It is a superb global company, with a bullet-proof business model and a balance sheet that gives them a huge sustainable competitive advantage." So, Marquez thinks IBM is a good place to work, too.
One trend I find very interesting is Middleware-as-a-Service, which combines cloud computing with middleware products like the IBM SOA Foundation products. I think where this is leading is that customers will be able to create application hosting environments (aka clouds), either on the customer's hardware or in a third-party data center. Application development teams will be able to deploy their applications into these environments and make those applications available to the enterprise without having to be too concerned about the details of what exactly their applications are running in or on. This will create a helpful separation and point of coordination between the infrastructure group managing the hardware and middleware across multiple projects and each project that simply needs a runtime for its applications.
Another neat theme is an expansion of the approaches available to achieve business/IT alignment. Business mash-ups will bring quickly built situational applications to the business world. Business rules, via technology like the ILOG acquisition, will enable business users to participate more easily and meaningfully in defining the policies for how the business works, policies which the applications are supposed to follow and enforce.
Two more trends, Extreme Scale and WAS.NEXT, will further evolve the long-running trend of bringing the power of the mainframe to clusters of distributed servers and yet make the middleware on those servers more pluggable so that the applications only run the infrastructure they need.
Jerry's got a pretty interesting article, so check it out. (Thanks to my friend Bill Higgins for reminding me of this article and that I ought to blog about it.)
The book covers the DataPower products (announcement, background), network appliances with software built in that you just plug into your network, configure, and get ESB-style SOA connectivity.
IBM WebSphere DataPower SOA Appliance Handbook begins by introducing the rationale for SOA appliances and explaining how DataPower appliances work from network, security, and Enterprise Service Bus perspectives. Next, the authors walk through DataPower installation and configuration; then they present deep detail on DataPower’s role and use as a network device. Using many real-world examples, the authors systematically introduce the services available on DataPower devices, especially the "big three": XML Firewall, Web Service Proxy, and Multi-Protocol Gateway. They also present thorough and practical guidance on day-to-day DataPower management, including monitoring, configuration, and build-and-deploy techniques.
The book is 960 pages long, so it should answer any questions you may have and then some.
So if you'd like to learn more, check out the book.
A good rules engine is an important part of the middleware foundation for enterprise applications because it helps extract policies and decisions from the rest of the application and express them in a form which makes the policies easier to understand, manage, and modify. This encapsulation of decisions also makes the rest of the application easier to write because it can just delegate to a rule set when a decision needs to be made, rather than a developer having to hand code the decision logic. A good rules engine can also execute a large, complex set of rules far more efficiently than equivalent (Java, C#, etc.) code can. There tends to be good synergy between a process engine and a rules engine, where the process engine is used to guide a series of tasks over time and a rules engine is used to make a decision at a point in time.
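To make the delegation idea concrete, here's a minimal sketch in plain Python, not any actual rules product, of an application handing a decision off to a rule set instead of hard-coding it. The rules and the discount scenario are invented for illustration, and the first-match firing here is a simplification of the much richer matching and conflict resolution a real engine (ILOG JRules, for example) provides:

```python
# Toy illustration of delegating a decision to a rule set.
# Each rule is a (condition, action) pair over an order dict.
discount_rules = [
    (lambda o: o['total'] > 1000,         lambda o: o.update(discount=0.10)),
    (lambda o: o['customer'] == 'gold',   lambda o: o.update(discount=0.15)),
    (lambda o: o.get('discount') is None, lambda o: o.update(discount=0.0)),
]

def decide(order, rules):
    """Fire the first rule whose condition matches; return the decision."""
    for condition, action in rules:
        if condition(order):
            action(order)
            return order['discount']
    return None

# The application just delegates; it never hard-codes the discount policy.
order = {'customer': 'gold', 'total': 500, 'discount': None}
print(decide(order, discount_rules))  # 0.15
```

The point is the shape: the discount policy lives in the rule set, where it can be changed without touching the application code that calls decide().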
I believe the ILOG acquisition is an exciting addition to the WebSphere SOA family of products. Being able to include this rules engine more easily within SOA application infrastructure will enable us to more easily develop better SOA applications.
Web content providers seem to be losing interest in network neutrality.
"Google Wants Its Own Fast Track on the Web" (Wall Street Journal) reports that Google, Microsoft, and Yahoo are partnering with phone and cable companies to create fast lanes on the Internet for their own traffic. This goes against the principle of network neutrality that these companies and others had been supporting, which says that all Web sites and all traffic on the Internet should have equal access to available bandwidth. The counterargument being made by the phone and cable companies is that the content providers should help pay for their network costs. This would also enable companies that control distribution to favor their own content over competitors'.
The article also says that President-elect Obama plans to name Lawrence Lessig as head of the Federal Communications Commission (FCC). Apparently Lessig is loosening his position on network neutrality, saying that content providers should be able to pay for faster service. He compares tiers of Internet service to tiers of postal service, where overnight delivery is available to those willing to pay extra.
"How Apple's iPhone Reshaped the Industry" (Business Week) discusses how the usefulness of cell phones is moving from making calls and sending text messages to running applications. Thus the most important aspect of your phone is shifting from who is your service provider (in the US: AT&T, Verizon, Sprint, etc.) to what platform the phone embodies (such as Palm, BlackBerry, or iPhone). This represents a loss of power for the service providers; they're being disintermediated.
Not so fast (it seems to me). You typically (in the US) buy your cell phone from your service provider, and that phone only works on that provider's network. So although AT&T and Sprint sell virtually identical BlackBerry models (to pick an example), a BlackBerry you buy from AT&T won't work on Sprint's network and vice versa. This is partially a technical issue (CDMA vs. GSM, two different standards for cell phone networks and thus for the chip that wirelessly connects the phone to the network), but it's also political: The service providers subsidize the cost of the phone when you sign up for a long-term service agreement, and otherwise don't want you using their phone on another network.
Computers don't work this way. If you want a Dell PC, you don't buy it from AOL (to pick an example) and then have a computer that can only connect to the Internet via AOL. No, you buy a computer from your favorite vendor (Dell, Lenovo, Apple, etc.) and then connect via your favorite ISP (AOL, Earthlink, your hotel's wi-fi network, etc.). Any computer works with any Internet provider. Both groups have to constantly compete to provide the best equipment and service to get you to continue to choose them instead of the competition.
The cell phone industry doesn't have this kind of interchangeability, where any phone will work on any service provider's network. Until that's the case, I'd say we're still pretty locked into our service providers. Google was supposedly working towards a cell phone that works with any carrier (see the Open Handset Alliance, and coverage such as "Breaking Wireless Wide Open" (Business Week)), but the Google G1 cell phone only works with T-Mobile (in the US); so much for neutrality.
Traffic congestion management systems are an example of what IBM means when we talk about developing a Smarter Planet, and an example of real systems IBM is already building. The idea is to leverage technology to help make our societies work better.
Want to be able to perform desktop tasks without the expense of a desktop?
"IBM and Business Partners Introduce a Linux-Based, Virtual Desktop" (press release) describes a Linux-based, server-based system where the server runs the applications and stores the files; all the user needs on his desktop is a simple terminal. The sever-based system will be open standards-based, less expensive to purchase, and easier to maintain.