Use the createVersionedSCAModule command to create a new instance of a versioned SCA module when you want to deploy the same versioned module across multiple clusters in a cell. You must use this command once for each additional instance of the module you want to deploy. The new instance is created in a new EAR file; the new EAR file name contains the module version value and the specified unique cell ID.
This is useful because two SCA modules with the same name cannot be deployed to the same cell (because then some of the generated resources would have the same name and collide). So if you want to deploy the same module twice (say one for your internal employees to use and one for your external customers), you previously had to deploy them in two separate cells. Now you can deploy them in the same cell as long as they're two different versions. The createVersionedSCAModule command doesn't change any of the code, so the new module runs the same as the old one, but it changes some of the component identifiers so that they don't have the same names.
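To see why distinct versioned names avoid the collision, here is a minimal Python sketch. The naming format is a hypothetical illustration (the function name, the separator layout, and the example cell IDs are my own assumptions, not the command's actual output); the real command embeds the module version and the unique cell ID in the generated EAR name in an analogous way.

```python
def versioned_ear_name(module_name: str, version: str, cell_id: str) -> str:
    """Build an EAR file name that embeds the module version and a
    cell-unique ID (hypothetical format, for illustration only)."""
    return f"{module_name}-{version}-{cell_id}.ear"

# Two instances of the same module get distinct EAR names, so both can
# be deployed into the same cell without their resources colliding.
internal = versioned_ear_name("CustomerService", "2.0", "internalID")
external = versioned_ear_name("CustomerService", "2.0", "externalID")
```

Because the cell ID differs per instance, every deployment gets a unique name even though the code inside is identical.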
Thanks to my fellow ISSW colleague David Currie for pointing out this command to me.
This is good news. If nothing else, it helps conceptually split up the laundry list of features in WXD into three logical groups with descriptive names. It also means that if your organization only needs one set of capabilities, you don't need to buy all three; you can save some money and simplify the installation. Or, for simplicity when you want all three, you can still buy the integrated suite as a single product. My understanding of our pricing (check with your salesperson) is that if you want two of the products, you might as well buy the suite; they're about the same price, so you get the third product more or less for free.
How should you set up WebSphere Process Server in your production environment?
I've talked about WebSphere Process Server (WPS) and the latest version 6.2. The simplest way to install a WPS runtime is a single server on a single node in a single cell, which is perfectly adequate for unit testing (and in fact is the topology for the WPS test server in WebSphere Integration Developer (WID)). However, this is not the best set-up for production. In production, you usually want a basic amount of high availability (HA) so that your users still get service even when part of your infrastructure goes down.
Let's take a quick look at what's in the golden topology. Two details to observe are that the cell contains two nodes and three clusters. Why is that?
Nodes -- The cell contains two nodes which should be installed on two different host machines (or perhaps two LPARs on the same host machine). This way, if one node crashes or has to be taken down for maintenance, the work will fail over to the other node and the users will still be able to do their work. During normal operation, workload management (WLM) will distribute requests across both nodes (and in fact also handles the failover when one node fails). An even better topology might have three or four nodes for increased redundancy.
Clusters -- Since the cell contains multiple nodes, the application should be deployed not in a single server running on a single node, but in a cluster with cluster members (aka application servers) on multiple nodes. In WPS, it's helpful to actually use three clusters, one for the business logic parts of the application and two for some of the main WPS infrastructure.
Business logic cluster -- This is where you deploy your SCA modules, including business process modules that run in the Business Flow Manager and Human Task Manager.
SIBus engine cluster -- This is for WPS to host messaging engines that are part of the service integration bus. A separate cluster is the easiest way to enable all of the business logic app instances to access the messaging engines even if/when they fail over.
CEI engine cluster -- This is for WPS to report what's going on for monitoring by products like Tivoli Composite Application Manager (ITCAM) (for IT monitoring) and WebSphere Business Monitor (for business monitoring). The separate CEI cluster helps lessen monitoring's interference with the business application.
So when deciding how to install WPS in production, the golden topology is at least a good start.
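The golden topology described above can be sketched as plain data. This is an illustrative model only: the cell, node, and cluster names are my own placeholders, not actual WebSphere identifiers, and the check simply encodes the HA rule that every cluster must span at least two nodes so work can fail over.

```python
# Sketch of the WPS "golden topology": two nodes, three clusters.
# All names are illustrative placeholders, not real WebSphere IDs.
topology = {
    "cell": "ProductionCell",
    "nodes": ["node1", "node2"],  # ideally on separate host machines
    "clusters": {
        "AppCluster": {"role": "business logic (SCA modules, BPEL, human tasks)",
                       "members": ["node1", "node2"]},
        "MECluster":  {"role": "service integration bus messaging engines",
                       "members": ["node1", "node2"]},
        "CEICluster": {"role": "CEI events for IT and business monitoring",
                       "members": ["node1", "node2"]},
    },
}

def is_highly_available(topo) -> bool:
    """True if every cluster has members on at least two distinct nodes,
    so that each service survives the loss of any single node."""
    return all(len(set(c["members"])) >= 2 for c in topo["clusters"].values())
```

Separating the three clusters keeps messaging engine failover and monitoring traffic from interfering with the business logic servers, while the two-node rule gives each cluster somewhere to fail over to.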
The IBM Sequoia supercomputer will be faster than the fastest 500 supercomputers, combined.
So says "IBM Sequoia: Faster Than the Fastest 500 Supercomputers, Combined" on a blog called Gizmodo. "IBM Tapped For 20-Petaflop Government Supercomputer" (InformationWeek) says the Sequoia, which will be built at the Lawrence Livermore National Laboratory for the National Nuclear Security Administration, will deliver 20 petaflops of computing power. That's 20 times more powerful than today's fastest computer. "The system will comprise 96 refrigerator-size racks with a combined 1.6 PB of memory, 98,304 compute nodes, and 1.6 million IBM Power processor cores" and cover 3,422 square feet. "Uncle Sam buys 20 petaflops BlueGene super" (Channel Register) gives details of the Dawn (a BlueGene/P), a 501 teraflop machine to be delivered late this year, and the Sequoia, a 20.13 petaflop machine to be delivered in 2011. The Sequoia will even be energy efficient; "U.S. taps IBM for 20 petaflops computer" (EE Times) quotes IBM's Herb Schultz: "The Sequoia system will be 15 times faster than BlueGene/P with roughly the same footprint and a modest increase in power consumption."
Integration is the primary use case for more than half of the ESBs deployed today. The core language of EAI, defined in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf, is also the core language of defining ESB flows and orchestrations, as seen in the ESB's developer tooling. For the users seeking integration, the ESB brings connectivity, protocol conversion, mediation, and other integration features together in one place to support the design, development, and management of integrated business solutions.
I'm interested that 50% of ESBs today are used primarily for integration. What are the other 50% being used for? In fact, I'd be more specific and say that an ESB should be used for service integration (or as IBM likes to call it, "service connectivity"), i.e. connecting together service requestors and providers in an SOA. Buses can be used for other things like transporting data and providing event notification, but I wouldn't exactly call those functions of a service bus.
Anyway, nice to be thought of as having helped to document the core language of EAI.
Thanks to my friend Dave for pointing out to me that I was mentioned in this report.
Marquez says, "Despite the total meltdowns of the U.S. and global economies last October, IBM executed flawlessly and handily beat analysts' earnings estimates, expanding both its margins and its profit outlook for 2009." Others have also made positive evaluations of IBM's earnings report, such as "IBM posts earnings rise, sees strong growth in 2009" (MarketWatch), and the stock rose as a result ("IBM shares rise after earnings beat expectations" in MarketWatch). Marquez also says, "The virtue of IBM's model is that it has effectively transformed itself from the cyclical hardware company that gave it its name into a software-and-service-oriented firm that gives it a recurring revenue stream. Add to this a well-thought-out business model that concentrates on high-margin, value-added businesses." Yeah, I think that sounds like us.
Another point Marquez makes is that "the many stimulus plans being implemented around the world will no doubt increase demand in many of IBM’s product-and-service areas." This sounds like a nod to the "Smarter Planet" efforts IBM has been talking about.
Marquez also advises his nephew, who has an internship at IBM, to try to leverage it into a permanent job with the company because "It is a superb global company, with a bullet-proof business model and a balance sheet that gives them a huge sustainable competitive advantage." So, Marquez thinks IBM is a good place to work, too.
One trend I find very interesting is Middleware-as-a-Service, which combines cloud computing with middleware products like the IBM SOA Foundation products. I think where this is leading is that customers will be able to create application hosting environments (aka clouds), either on the customer's hardware or in a third-party data center, that application development teams will be able to deploy their applications into and make those applications available for the enterprise without having to be too concerned about the details of what exactly their applications are running in or on. This will create a helpful separation and point of coordination between the infrastructure group managing the hardware and middleware across multiple projects and each project that simply needs a runtime for its applications.
Another neat theme is an expansion of the approaches available to achieve business/IT alignment. Business mash-ups will bring quickly built situational applications to the business world. Business rules, such as those from the ILOG acquisition, will enable business users to participate more easily and meaningfully in defining the policies for how the business works, policies which the applications are supposed to follow and enforce.
Two more trends, Extreme Scale and WAS.NEXT, will further the long-running trend of bringing the power of the mainframe to clusters of distributed servers, while also making the middleware on those servers more pluggable so that applications run only the infrastructure they need.
Jerry's got a pretty interesting article, so check it out. (Thanks to my friend Bill Higgins for reminding me of this article and that I ought to blog about it.)
The book covers the DataPower products (announcement, background), network appliances with software built in that you just plug into your network, configure, and get ESB-style SOA connectivity.
IBM WebSphere DataPower SOA Appliance Handbook begins by introducing the rationale for SOA appliances and explaining how DataPower appliances work from network, security, and Enterprise Service Bus perspectives. Next, the authors walk through DataPower installation and configuration; then they present deep detail on DataPower's role and use as a network device. Using many real-world examples, the authors systematically introduce the services available on DataPower devices, especially the "big three": XML Firewall, Web Service Proxy, and Multi-Protocol Gateway. They also present thorough and practical guidance on day-to-day DataPower management, including monitoring, configuration, and build-and-deploy techniques.
The book is 960 pages long, so it should answer any questions you may have and then some.
So if you'd like to learn more, check out the book.