Imagine elevators that group together people going to the same floors.
I ride on a lot of elevators in a variety of buildings, but today I rode on some of the most interesting ones I've ever seen.
The elevator is actually a bank of elevators. Each elevator is pretty typical, but it's the way they work together that's so interesting. The button panel to call the elevator isn't the usual pair of up and down buttons. Rather it's a list of floors, one button per floor--what's normally inside the elevator rather than on the wall outside. You push the button for the floor you want to go to. A display next to the buttons shows a letter and arrow indicating which elevator to go to. Sure enough, each elevator door has a different letter above it; you go to the elevator indicated on the display and wait for the elevator. When the doors open and you get on the elevator, there are no buttons to press to select the floor; rather a display shows the floors the elevator is going to, including yours. It's like the control system for the elevators is turned inside out, with the controls on the outside instead of the inside.
Rather than pressing "up" or "down" in the lobby, and then indicating the destination floor once one has boarded the elevator, one may alternatively key in one's destination floor whilst in the lobby, using a central dispatch panel. The dispatch panel will then tell the passenger which elevator to use.
What's really cool about this approach is this: Since the elevator system knows where everyone's going (not just up or down), it groups people going to the same floor in the same elevator. Rather than getting on an elevator which stops on every floor, yours only goes to a couple of floors. This means the elevators travel less, saving energy, and elevator rides take less time.
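The grouping idea is easy to sketch in code: given each passenger's destination floor, the dispatcher partitions the requested floors across the elevator cars so that each car stops at only a few floors. Here's a minimal illustration (the real scheduling algorithms are far more sophisticated; the function and the round-robin assignment are my own simplification, not how any actual elevator controller works):

```python
from collections import defaultdict

def dispatch(requests, num_elevators):
    """Assign destination floors to elevators, grouping passengers
    bound for the same floor into the same car.

    requests: list of destination floors, one per passenger.
    Returns: dict mapping elevator letter -> sorted list of stops.
    """
    # Group passengers by destination floor.
    by_floor = defaultdict(int)
    for floor in requests:
        by_floor[floor] += 1

    # Spread the distinct floors round-robin across the cars, so
    # each car serves only a couple of floors instead of all of them.
    letters = [chr(ord('A') + i) for i in range(num_elevators)]
    stops = {letter: [] for letter in letters}
    for i, floor in enumerate(sorted(by_floor)):
        stops[letters[i % num_elevators]].append(floor)
    return stops

# Ten passengers headed to four distinct floors, two elevators:
print(dispatch([3, 7, 3, 12, 7, 3, 9, 12, 7, 3], 2))
# -> {'A': [3, 9], 'B': [7, 12]}
```

Each car makes two stops instead of four, which is exactly the travel-time and energy saving described above.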
Register your destination on a keypad before you enter the elevator.
Advance knowledge of every passenger’s destination before they even reach the elevator.
Reduced passenger journey times.
Elimination of crowding during heavy traffic.
Assurance of a dedicated service for people with special needs.
Greater design flexibility for building core configuration.
I just think this is really neat and wish more large elevator systems (multiple elevator cars, lots of floors) worked this way. BTW, I'm also fortunate that I visited this building with a friend who'd been there before and knew how this worked. Otherwise, I probably would've been pretty confused about how to simply use the elevator!
Address the #1 Success Factor in SOA Implementations: Effective, Business-Driven Governance
Inadequate governance might be the most widespread root cause of SOA failure. In SOA Governance, a team of IBM’s leading SOA governance experts share hard-won best practices for governing IT in any service-oriented environment.
The authors begin by introducing a comprehensive SOA governance model that has worked in the field. They define what must be governed, identify key stakeholders, and review the relationship of SOA governance to existing governance bodies as well as governance frameworks like COBIT. Next, they walk you through SOA governance assessment and planning, identifying and fixing gaps, setting goals and objectives, and establishing workable roadmaps and governance deliverables. Finally, the authors detail the build-out of the SOA governance model with a case study.
The authors illuminate the unique issues associated with applying IT governance to a services model, including the challenges of compliance auditing when service behavior is inherently unpredictable. They also show why services governance requires a more organizational, business-centric focus than “conventional” IT governance.
Understanding the problems SOA governance needs to solve
Establishing and governing service production lines that automate SOA development activities
Identifying reusable elements of your existing IT governance model and prioritizing improvements
Establishing SOA authority chains, roles, responsibilities, policies, standards, mechanisms, procedures, and metrics
Implementing service versioning and granularity
Refining SOA governance frameworks to maintain their vitality as business and IT strategies change
[BPM BlueWorks is] a cloud-based set of strategy and business process tools. BPM BlueWorks provides business users with the collateral they need to implement business strategies within their organizations based on industry-proven business process management techniques.
I don't find that description very helpful. It seems to be a lot of things to a lot of people: A tool for creating business processes; a cloud hosting that tool; a cloud hosting reusable process components which can be used by the processes developed with the tool; a community for developing these business processes. According to Michael Vizard:
The idea is that IT people and business executives can collaboratively model a business process and then export that model to a set of IBM Websphere Business Events software to execute it. The model serves to configure the Websphere software to match the business process. Because the model essentially represents a higher level of abstraction above the core middleware software, it also allows customers to update the business process as often as required, which for the first time allows IT to be responsive to the rapidly changing needs of the business.
I'm not sure what Websphere Business Events has to do with BPM. According to Sandy, BlueWorks is for creating "dynamic processes," which is interesting because processes usually contain a centralized plan (e.g. orchestration); dynamic implies the collaboration is more like choreography. If so, then WBE makes sense because it supports choreography, which is to say that it creates very dynamic connections between components.
How would you like a program to automatically install WAS on all of the machines in your data center?
WebSphere CloudBurst Appliance (press release) is just such a machine. Plug it into your data center network, specify a bunch of other machines on the network, and CloudBurst installs WAS on those machines automatically. We say that what it's installing is a cloud, as in cloud computing. So with CloudBurst and some server computers, you can set up your own private WebSphere cloud in your data center.
CloudBurst is a significant advancement in simplifying the installation of WAS environments. The installs are fast, reliable, and easily repeatable, and they don't require a large operations staff to spend considerable time performing them. What's installed is WebSphere Application Server Hypervisor Edition. CloudBurst is an appliance in part to make its own installation simple; we wouldn't want a chicken-and-egg problem where the installation tool is as difficult to install as the software it's supposed to install. CloudBurst's development codename was Rainmaker, which Jerry Cuomo discussed in 2009 Trends and Directions for WebSphere.
CloudBurst comes with WAS HV images already installed on it, which it then uses to install WAS HV on the other machines as you direct. This means it can install a cloud of WAS 6.1 and 7.0 servers. Odds are CloudBurst will support other WAS-based products in the future, such as Process Server and Portal.
CloudBurst makes sure that WAS is installed and configured the same way on each host server, which avoids a number of production problems. You can also use CloudBurst to install both your production environment and test environment, ensuring that the two environments really are configured the same. This will cut down on a problem we often see: an application problem can be reproduced in production but not in test because the two environments, which are supposed to be identical, in fact are not, and such differences can be very difficult to track down.
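The value here is avoiding configuration drift between environments. A toy sketch of the kind of drift check that CloudBurst's identical installs make unnecessary (the settings shown are hypothetical examples, not actual CloudBurst or WAS parameters):

```python
def config_diff(env_a, env_b):
    """Report settings that differ between two environment
    configurations, given as dicts of setting name -> value."""
    keys = set(env_a) | set(env_b)
    return {k: (env_a.get(k), env_b.get(k))
            for k in sorted(keys)
            if env_a.get(k) != env_b.get(k)}

# Hypothetical settings; the JVM heap was changed in test only.
prod = {"jvm_heap_mb": 1024, "was_version": "7.0", "datasource": "jdbc/orders"}
test = {"jvm_heap_mb": 512,  "was_version": "7.0", "datasource": "jdbc/orders"}
print(config_diff(prod, test))
# -> {'jvm_heap_mb': (1024, 512)}
```

When both environments come from the same CloudBurst image, this diff is empty by construction.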
CloudBurst can also be loaded with feature packs and your own application EARs to have those installed in the cloud along with WAS.
When you install the image, you use one of several pre-configured profiles to make the server run as a stand-alone server, a cluster member, a deployment manager, etc. WAS runs in the image on, I believe, SUSE Linux; in the end, the OS WAS runs on doesn't matter much, as long as the VM engine supports it.
Other editions of WAS are base (aka stand alone), network deployment (ND), express, etc. Versions of WAS are v6.1, v7.0, etc. I believe both WAS 6.1 and WAS 7.0 are available as hypervisor editions.
Sounds like you deploy a Java EE application EAR to a WAS Hypervisor server the same way as to a standard WAS server. This means you'll still need to resolve references to external resources (databases, etc.), which probably requires configuring the image to enable access.
See WebSphere charges into the clouds for a slicker marketing-sounding description. Note that while WAS HV can be installed via WebSphere CloudBurst, WAS HV can also be bought and installed without CloudBurst.
A computer which outsmarts people at games isn't so far-fetched. Deep Blue is a computer which not only plays chess but in 1997 beat the reigning World Chess Champion. IBM Research has details.
Whereas the trick to winning chess strategies is largely mathematical, the trick for trivia is sorting through vast amounts of data and drawing inferences, including understanding semantics (the meanings behind words). What does this have to do with business? Watson is part of IBM's smarter planet efforts. As IBM's CEO explains it, "With advanced computing power and deep analytics, we can infuse business and societal systems with intelligence."
There's even a YouTube video introducing the idea:
A smart electric grid in Houston which can quickly isolate outages for repairs and meanwhile route around them (similar to Smart Grid City)
A food distribution tracking network in Norway which can tell you which farm a package of meat came from
What do these problem domains have in common? They can all be modeled as smart networks. A dumb network cannot measure its own effectiveness, whereas a smart network can measure its operating condition, report its status, diagnose problems, and repair itself. A smart network can be autonomic, which a dumb one cannot.
Why are smart networks such a big opportunity? Two of the biggest, most successful users of computer software are the financial and insurance industries. This is because their products are virtual and so can easily be modeled and operated by computers. Telephony companies are other huge users of computers, because their products, the phone network and data networks, are, well, networks which can easily be modeled and operated by computers.
The big insight behind smart networks is that more phenomena in life can be modeled as networks. By putting sensors in the network and tagging the items in the network so that they can be tracked (by sensing them), the network can be modeled; by embedding actuators with the sensors, the network can also be operated by computers.
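The sense-diagnose-repair loop can be sketched in a few lines. Everything here (the class names, the "meter" nodes) is an illustrative toy, not any real product's model:

```python
class Node:
    """A network element that can report its own condition (sensing)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def status(self):
        return (self.name, "up" if self.healthy else "down")

class SmartNetwork:
    """A network that can measure itself, diagnose failures,
    and repair them -- the autonomic loop described above."""
    def __init__(self, nodes):
        self.nodes = nodes

    def diagnose(self):
        # Measure operating condition and report which nodes are down.
        return [n.name for n in self.nodes if not n.healthy]

    def repair(self):
        # Actuate: bring failed nodes back up, then re-diagnose.
        for n in self.nodes:
            n.healthy = True
        return self.diagnose()

net = SmartNetwork([Node("meter-1"), Node("meter-2"), Node("meter-3")])
net.nodes[1].healthy = False     # simulate an outage
print(net.diagnose())            # -> ['meter-2']
print(net.repair())              # -> []
```

A dumb network is just the `Node` objects; it's the sensing (`status`/`diagnose`) plus the actuation (`repair`) layered on top that makes it smart.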
And being in a business magazine, the article pointed out that IBM is making a fortune doing this.
But it's that and more. According to Gunnar Peterson, the biggest hurdle to becoming a security pro is understanding security integration, and the best way to learn that is by reading EIP. This is because, Peterson explains, it's easier to teach security to developers who know how to design distributed systems well than it is to teach network security experts how to develop applications.
And I quote:
Rather than obsessing about the latest and greatest threat, its much more strategically important to sort out the logistics, constraints, and economics to distribute and scale out the security mechanisms and processes we have. Specifically how are they impacted by and how do they impact the message flows, endpoints, routing, transformation, and management. These patterns are aptly described and cataloged in Hohpe and Woolf's book and provide an important starting point for meaningful and useful security improvement over time.
So if you'd like to learn how to design distributed systems so that they can be secured easily and effectively, check out EIP.
As of WAS 7.0, you can now connect an MDB to a remote SIBus.
Great, so what does that mean? The Service Integration Bus is the feature as of WAS 6.0 that is a built-in JMS provider. Message-driven beans are EJBs for receiving JMS messages. For an application's MDB to receive messages from a queue in a particular SIBus bus instance, the application must be running in a WAS application server or cluster that is a member of the bus, basically meaning that the bus has one of its messaging engines running in the server/cluster. Thus when an MDB reads messages from a queue, the bus for that queue is essentially local to the application.
An MDB is configured by a JMS activation specification. The JMS activation specification in WAS 7.0 adds a new property, Provider endpoints, which (as the docs explain somewhat subtly) "allow the applications to consume messages from a remote cell." Technically, it will work with any remote bus, but the main reason to connect to a bus remotely is because it's in a different cell; if it were in the same cell, you could just add the application's server/cluster as a bus member of the bus and therefore make the bus local to the app.
With this property, when the MDB pool is activated, the beans will first try to connect to the bus specified in the Bus name property (if any). If that fails, they will then try to connect to the buses specified in the Provider endpoints. These Provider endpoint buses need not be local ones--ones where the server/cluster is a bus member. Because the buses are specified by the host name and port number of their bootstrap server (essentially, the point for connecting to a bus remotely via TCP/IP), the bus can be remote--the server/cluster does not have to be a bus member. A remote bus is sometimes also referred to as a foreign bus.
Normally to connect to a foreign bus, you need to use a service integration bus link to connect one of your server's local buses to the foreign bus. This is still the only way to send messages onto a destination on a foreign bus. But to receive messages from a destination on a foreign bus, you can now configure an MDB to connect to the bus remotely, bypassing the server's local buses (if any).
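The connection order described above, try the locally named bus first, then fall back through the listed provider endpoints, can be sketched like this. The `try_connect` callable and the endpoint string are stand-ins for the container's real connection logic, not a WAS API:

```python
def connect_mdb(bus_name, provider_endpoints, try_connect):
    """Sketch of the WAS 7.0 MDB connection order: try the bus in
    the 'Bus name' property first (if set), then each bootstrap
    endpoint listed in 'Provider endpoints', in order.

    try_connect(target) returns a connection object, or None on failure.
    """
    candidates = ([bus_name] if bus_name else []) + provider_endpoints
    for target in candidates:
        conn = try_connect(target)
        if conn is not None:
            return conn
    raise ConnectionError("no bus reachable: %s" % candidates)

# Endpoint in host:port:chain form; in this toy run only the remote
# cell's bootstrap server accepts the connection.
endpoints = ["remotehost:7276:BootstrapBasicMessaging"]
conn = connect_mdb("LocalBus", endpoints,
                   lambda t: ("connected", t) if "remotehost" in t else None)
print(conn)
# -> ('connected', 'remotehost:7276:BootstrapBasicMessaging')
```

The point of the sketch is the asymmetry in the article: receiving can now reach a foreign bus directly through these endpoints, while sending still requires a service integration bus link.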
Use the createVersionedSCAModule command to create a new instance of a versioned SCA module when you want to deploy the same versioned module across multiple clusters in a cell. You must use this command once for each additional instance of the module you want to deploy. The new instance is created in a new EAR file; the new EAR file name contains the module version value and the specified unique cell ID.
This is useful because two SCA modules with the same name cannot be deployed to the same cell (because then some of the generated resources would have the same name and collide). So if you want to deploy the same module twice (say one for your internal employees to use and one for your external customers), you would previously have to deploy them in two separate cells. Now you can deploy them in the same cell as long as they're two different versions. The create version command doesn't change any of the code, so the new module runs the same as the old one, but it changes some of the component identifiers so that they don't have the same name.
Thanks to my fellow ISSW colleague David Currie for pointing out this command to me.
This is good news. If nothing else, it helps conceptually split-up the laundry list of features in WXD into three logical groups with descriptive names. It also means that if your organization only needs one set of capabilities, you don't need to buy all three; you can save some money and simplify the installation. Or, for simplicity when you want all three, you can still buy the integrated suite as a single product. My understanding of our pricing (check with your salesperson) is that if you want two of the products, you might as well buy the suite; they're about the same price so you get the third product more-or-less for free.
How should you set up WebSphere Process Server in your production environment?
I've talked about WebSphere Process Server (WPS) and the latest version 6.2. The simplest way to install a WPS runtime is a single server on a single node in a single cell, which is perfectly adequate for unit testing (and in fact is the topology for the WPS test server in WebSphere Integration Developer (WID)). However, this is not the best set-up for production. In production, you usually want a basic amount of high availability (HA) so that your users still get service even when part of your infrastructure goes down.
Let's take a quick look at what's in the golden topology. Two details to observe are that the cell contains two nodes and three clusters. Why is that?
Nodes -- The cell contains two nodes which should be installed on two different host machines (or perhaps two LPARs on the same host machine). This way, if one node crashes or has to be taken down for maintenance, the work will fail over to the other node and the users will still be able to do their work. During normal operation, workload management (WLM) will distribute requests across both nodes (and in fact also handles the failover when one node fails). An even better topology might have three or four nodes for increased redundancy.
Clusters -- Since the cell contains multiple nodes, the application should be deployed not in a single server running on a single node, but in a cluster with cluster members (aka application servers) on multiple nodes. In WPS, it's helpful to actually use three clusters, one for the business logic parts of the application and two for some of the main WPS infrastructure.
Business logic cluster -- This is where you deploy your SCA modules, including business process modules that run in the Business Flow Manager and Human Task Manager.
SIBus engine cluster -- This is for WPS to host messaging engines that are part of the service integration bus. A separate cluster is the easiest way to enable all of the business logic app instances to access the messaging engines even if/when they fail over.
CEI engine cluster -- This is for WPS to report what's going on for monitoring by products like Tivoli Composite Application Manager (ITCAM) (for IT monitoring) and WebSphere Business Monitor (for business monitoring). The separate CEI cluster helps lessen monitoring's interference with the business application.
So when deciding how to install WPS in production, the golden topology is at least a good start.
The IBM Sequoia supercomputer will be faster than the fastest 500 supercomputers, combined.
So says "IBM Sequoia: Faster Than the Fastest 500 Supercomputers, Combined" on a blog called Gizmodo. "IBM Tapped For 20-Petaflop Government Supercomputer" (InformationWeek) says the Sequoia, which will be built at the Lawrence Livermore National Laboratory for the National Nuclear Security Administration, will deliver 20 petaflops of computing power. That's 20 times more powerful than today's fastest computer. "The system will comprise 96 refrigerator-size racks with a combined 1.6 PB of memory, 98,304 compute nodes, and 1.6 million IBM Power processor cores" and cover 3,422 square feet. "Uncle Sam buys 20 petaflops BlueGene super" (Channel Register) gives details of the Dawn (a BlueGene/P), a 501-teraflop machine to be delivered late this year, and the Sequoia, a 20.13-petaflop machine to be delivered in 2011. The Sequoia will even be energy efficient; "U.S. taps IBM for 20 petaflops computer" (EE Times) quotes IBM's Herb Schultz: "The Sequoia system will be 15 times faster than BlueGene/P with roughly the same footprint and a modest increase in power consumption."
Integration is the primary use case for more than half of the ESBs deployed today. The core language of EAI, defined in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf, is also the core language of defining ESB flows and orchestrations, as seen in the ESB's developer tooling. For the users seeking integration, the ESB brings connectivity, protocol conversion, mediation, and other integration features together in one place to support the design, development, and management of integrated business solutions.
I'm interested that 50% of ESBs today are used primarily for integration. What are the other 50% being used for? In fact, I'd be more specific and say that an ESB should be used for service integration (or as IBM likes to call it, "service connectivity"), i.e. connecting together service requestors and providers in an SOA. Buses can be used for other things like transporting data and providing event notification, but I wouldn't exactly call those functions of a service bus.
Anyway, nice to be thought of as having helped to document the core language of EAI.
Thanks to my friend Dave for pointing out to me that I was mentioned in this report.
Marquez says, "Despite the total meltdowns of the U.S. and global economies last October, IBM executed flawlessly and handily beat analysts' earnings estimates, expanding both its margins and its profit outlook for 2009." Others have also made positive evaluations of IBM's earnings report, such as "IBM posts earnings rise, sees strong growth in 2009" (MarketWatch), and the stock rose as a result ("IBM shares rise after earnings beat expectations" in MarketWatch). Marquez also says, "The virtue of IBM's model is that it has effectively transformed itself from the cyclical hardware company that gave it its name into a software-and-service-oriented firm that gives it a recurring revenue stream. In addition into this well-thought-out business model that concentrates on high-margin, value-added businesses." Yeah, I think that sounds like us.
Another point Marquez makes is that "the many stimulus plans being implemented around the world will no doubt increase demand in many of IBM’s product-and-service areas." This sounds like a nod to the "Smarter Planet" efforts IBM has been talking about.
Marquez also advises his nephew, who has an internship at IBM, to try to leverage it into a permanent job with the company because "It is a superb global company, with a bullet-proof business model and a balance sheet that gives them a huge sustainable competitive advantage." So, Marquez thinks IBM is a good place to work, too.
One trend I find very interesting is Middleware-as-a-Service, which combines cloud computing with middleware products like the IBM SOA Foundation products. I think where this is leading is that customers will be able to create application hosting environments (aka clouds), either on the customer's hardware or in a third-party data center, that application development teams will be able to deploy their applications into and make those applications available for the enterprise without having to be too concerned about the details of what exactly their applications are running in or on. This will create a helpful separation and point of coordination between the infrastructure group managing the hardware and middleware across multiple projects and each project that simply needs a runtime for its applications.
Another neat theme is an expansion of the approaches available to achieve business/IT alignment. Business Mash-ups will bring quickly built situational applications to the business world. Business Rules, bolstered by the ILOG acquisition, will enable business users to participate more easily and meaningfully in defining the policies for how the business works, policies which the applications are supposed to follow and enforce.
Two more trends, Extreme Scale and WAS.NEXT, will further evolve the long-running trend of bringing the power of the mainframe to clusters of distributed servers and yet make the middleware on those servers more pluggable so that the applications only run the infrastructure they need.
Jerry's got a pretty interesting article, so check it out. (Thanks to my friend Bill Higgins for reminding me of this article and that I ought to blog about it.)
The book covers the DataPower products (announcement, background), network appliances with software built in that you just plug into your network, configure, and get ESB-style SOA connectivity.
IBM WebSphere DataPower SOA Appliance Handbook begins by introducing the rationale for SOA appliances and explaining how DataPower appliances work from network, security, and Enterprise Service Bus perspectives. Next, the authors walk through DataPower installation and configuration; then they present deep detail on DataPower's role and use as a network device. Using many real-world examples, the authors systematically introduce the services available on DataPower devices, especially the "big three": XML Firewall, Web Service Proxy, and Multi-Protocol Gateway. They also present thorough and practical guidance on day-to-day DataPower management, including monitoring, configuration, and build and deploy techniques.
The book is 960 pages long, so it should answer any questions you may have and then some.
So if you'd like to learn more, check out the book.