Author Hugh Lofting's "Doctor Dolittle" featured an intriguing two-headed animal character called a pushmi-pullyu. Upon starting to move, its heads would go in opposite directions until both minds, equally stubborn, agreed on a common direction.
The same push-pull battle has been waged in the world of mobile application design since mobile devices were invented. Should users submit queries to pull the latest information from the systems of record? Or should the most critical information be pushed to users' devices proactively?
Naturally, there is no one-approach-fits-all answer. But two forward thinkers at IBM have come up with a model that shows that in the proper situations, a push design could consume just a small fraction of the network, server, and other resources needed to support the same application in a traditional pull-based architecture.
I just finished editing Mobile Design Patterns: Push, Don't Pull, an IBM Redbooks Point-of-View by IBM WebSphere VP Jerry Cuomo and Program Director Robert Vila. And it's an eye opener. The authors use a mobile banking scenario to show the dramatic resource savings that could be achieved by using push notifications to update users' mobile banking apps. They say their approach might even lead to the elimination of banking apps' "Check Balance" buttons, which millions of users press several times each day, initiating multiple millions of resource-intensive queries to bank servers.
How much more efficient might a push pattern be in these instances?
By the authors' reckoning, for common bank account-related queries, a typical pull pattern results in 20 million so-called load units on a bank's back-end systems each day. That's in part because users frequently press "Check Balance" even when there has been no change to their account.
By changing to a push design, a messaging infrastructure would bypass the bank's web servers and send secure updates to each user's device whenever (and only when) the user's account balance has actually changed. The back-end server load would be trimmed to just 4 million load units a day -- an 80% savings compared to the pull approach. Eventually, when users come to trust that their app's balance information is always current, there would be even less need for the bank to maintain the higher server capacity that is required today to handle the deluge of "Check Balance" requests that come in on paydays. At these peak query times, the authors see app-related server load dropping to as little as 1/25 of what it is today.
Like I said, the piece is an eye opener. It has convinced me that banks and other enterprises will be heading in the direction of push patterns in future mobile applications. I think even a pushmi-pullyu would agree that is where things are heading.
By Alex Ross, WebSphere MQ Software Engineer
IBM Hursley Laboratory
As a tester on the WebSphere MQ development team, I get to hear a lot about the new features that are coming in the next release so that I can plan how to go about testing them. When I heard about multiple cluster transmission queues I thought, "That sounds amazing. Exactly what our customers were asking for!"
This new feature in WebSphere MQ V7.5 improves application isolation, and makes it easier to monitor the traffic being sent to different clustered destinations. A queue manager can automatically create multiple transmission queues. Instead of all clustered traffic sharing the same SYSTEM.CLUSTER.TRANSMIT.QUEUE, independent queues can be used.
Hearing that it was going to be implemented in two different ways let me plan my tests in advance of the development work being completed.
The first and quickest way to enable multiple transmission queues is to alter a queue manager attribute called DEFCLXQ. When this attribute is set to CHANNEL, the next time the cluster sender channels start, they each have their own automatically created transmission queue. Simple!
It's also possible to manually define a transmission queue and link it to a channel by altering its CLCHNAME attribute.
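In MQSC, the two approaches can be sketched as follows (the queue and channel names here are purely illustrative, not taken from the book):

```
* Option 1: let the queue manager create one transmission queue
* per cluster-sender channel automatically
ALTER QMGR DEFCLXQ(CHANNEL)

* Option 2: manually define a transmission queue and link it to a
* specific cluster-sender channel through its CLCHNAME attribute
DEFINE QLOCAL(XMITQ.PAYMENTS) USAGE(XMITQ) CLCHNAME('TO.PAYMENTS.QM')
```

A manually defined cluster transmission queue must have USAGE(XMITQ), and CLCHNAME accepts generic (wildcard) values, so one manually defined queue can serve several channels.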
Now customers can monitor traffic to different queue managers, or for different applications, in their cluster with much greater ease. Application traffic can be split over different transmission queues, so if one destination becomes unavailable, the backlog of messages builds up only on that destination's transmission queue rather than affecting all cluster traffic.
I've given presentations on the new enhancements to clustering and even a workshop. General feedback has been very positive and I can't wait to hear about more customers using it in the field. If you really want to learn more about it, I am one of the authors of an IBM Redbooks publication IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements in which I have gone into greater detail on how to use the new attributes and demonstrated a working scenario.
The scenario uses Windows and Linux to demonstrate how to start using multiple cluster transmission queues. I included a z/OS queue manager as a destination to show that, even though z/OS does not have this enhancement, it doesn't affect the behavior from the distributed platforms end.
WebSphere MQ Software Engineer, IBM Hursley Laboratory
As a level 3 support engineer for WebSphere MQ, the first customer problem I picked up has stuck with me. It was my first foray into providing customer service for the product I'd been working on testing for the best part of 10 years. It was my first customer service success: I gave my response and the customer was happy. What more could I want? Having now been in level 3 support for 20 months, I look back and, despite it being my first customer service success, I would handle it very differently. I solved the problem the customer asked about, but I didn't consider why they wanted to solve it.
They were a security conscious company, and as a result, out of fear of rogue channel usage, they decided to delete the default channel definitions on their queue manager. Their rationale: they're default channels, everyone that uses WebSphere MQ knows we have them, we don't want people connecting to them.
Seems perfectly reasonable to me, except for the following...try this...
Create a queue manager (TRASH_QM) that you don't mind trashing, at any release. Using runmqsc, delete one of the default channel objects, say SYSTEM.DEF.SVRCONN, using the command delete chl(SYSTEM.DEF.SVRCONN). Now try to create your own SVRCONN type channel (MY.TRASH.SVRCONN) using the command def chl(MY.TRASH.SVRCONN) chltype(SVRCONN). The command will fail, with the reason that SYSTEM.DEF.SVRCONN doesn't exist. What does SYSTEM.DEF.SVRCONN have to do with me creating a new channel?
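The whole repro fits in a few lines of MQSC, run against the throwaway queue manager (the exact wording of the error message varies by release):

```
* Against TRASH_QM, inside runmqsc:
DELETE CHANNEL(SYSTEM.DEF.SVRCONN)

* Now attempt to create a new server-connection channel...
DEFINE CHANNEL(MY.TRASH.SVRCONN) CHLTYPE(SVRCONN)
* ...and the DEFINE fails, because the default object
* SYSTEM.DEF.SVRCONN no longer exists to supply the defaults
```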
This is the problem they hit...they deleted their default channels, but then they could not create any new channels! At least they did not have to worry about any rogue connections now! This fails because, for any attributes you don't supply on the DEFINE CHANNEL command, MQ refers to the default objects to fill those parts in. So of course, the customer has one get-out route: each time they define a channel they can specify every single channel attribute. For example...
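A heavily abbreviated sketch of what such a definition looks like (the real command must spell out every SVRCONN attribute; the names and values shown here are only illustrative):

```
DEFINE CHANNEL(MY.APP.SVRCONN) CHLTYPE(SVRCONN) +
       TRPTYPE(TCP) +
       MCAUSER('nobody') +
       MAXMSGL(4194304) +
       HBINT(300) +
       SHARECNV(10) +
       DESCR('Defined without reference to SYSTEM.DEF.SVRCONN')
* ...plus every remaining SVRCONN attribute, since none can be
* picked up from the (deleted) default object
```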
An administrative nightmare to say the least. But it works.
WebSphere MQ 7.1 introduced channel authentication records, designed to allow granular control over access to channels. This sounds exactly like what my first customer was trying to do! Basically they wanted to block access to any channels that were named SYSTEM.*. One of the rules that you get out of the box with 7.1 is precisely that rule...All access to the queue manager over any channel called SYSTEM.* is blocked. No matter who you are, no matter where you're coming from, the gates are closed.
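That out-of-the-box rule can be expressed in MQSC like this (this matches the documented default; re-issuing it is harmless):

```
* Block remote access over any channel whose name starts SYSTEM.
SET CHLAUTH('SYSTEM.*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
```

A companion default rule relaxes this for SYSTEM.ADMIN.SVRCONN so that administrators are not completely locked out, while a further default rule still blocks privileged (*MQADMIN) user IDs on all channels.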
After 20 months of customer service work, if I could go back, I wouldn't just give that customer the answer to their question, I'd take the time to understand what their requirement was and provide them with a better solution.
Being part of the team that wrote the IBM Redbooks publication IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements made me think about why WebSphere MQ provides these features in its new releases, who they are for, and what they are for. Who would have thought that it would land me back thinking about my first success in customer service?
So delete or block? NEVER delete! If you're not yet on WebSphere MQ V7.1, set your default SVRCONN/CLNTCONN channels to use MCAUSER('nobody'), and with WebSphere MQ V7.1 and above, block, block, block with channel authentication records!
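For the pre-V7.1 option, the sketch is simply the following (these channel names are the shipped defaults; 'nobody' should be a non-privileged user ID on your system):

```
ALTER CHANNEL(SYSTEM.DEF.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('nobody')
ALTER CHANNEL(SYSTEM.AUTO.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('nobody')
```

Because the default objects still exist, new channel definitions continue to work, but anyone connecting over the defaults runs with an unprivileged identity.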
IBM Global Accounts (IGA) organization, IBM Brazil
I'm a consultant for messaging integration middleware in the IBM Global Accounts (IGA) organization, in IBM Brazil. WebSphere MQ is one of the main and most popular components in the infrastructure we deliver to our customers. Providing a highly reliable service at the lowest possible cost is a constant challenge. I'm always on the lookout for features and enhancements we can exploit to attain our goals. This blog post is inspired by a new feature provided with WebSphere MQ V7.1: the Use Dead-Letter Queue (USEDLQ) channel attribute.
InfoSphere Replication Server, usually called Q Replication or Q Rep, enables greater data availability for critical applications. It replicates data between source and target tables based on rules that the DBA and applications enable. Using a process called Q Capture on the source side, Q Replication reads the DB2 log for committed DB2 updates and uses WebSphere MQ to transmit them to the target. On the target side, the Q Apply process reads the queue and applies the same updates to the local replica. Q Rep is able to handle a high volume of transactions, as messages, between the source and target databases, or subsystems. There is some latency between the database replicas, but it is typically very small. To rebuild each transaction, it is critical that the messages that represent it are received and applied in the correct sequence (serialized application).
If a message cannot be delivered because, for example, the target queue is full, WebSphere MQ routes it to a dead-letter queue, if one is available. This can cause problems for the Q Apply program because messages can arrive on the target queue out of sequence. If this occurs the replication process is stopped until the problem can be addressed by an administrator.
To avoid this situation, the preferred WebSphere MQ environment for replication, and other applications that require messages to be processed in a strict sequence, has been not to define a dead-letter queue. This prevents messages being routed elsewhere, which preserves their sequence. An administrator only has to restart the channel once the problem has been resolved.
Our service organization supports several other applications that require a dead-letter queue. Before WebSphere MQ V7.1, the dead-letter queue was a global setting for all channels defined on a queue manager. Therefore, applications that require a dead-letter queue could not share a queue manager with applications such as Q Replication. Naturally, not being able to share a queue manager makes the support cost of these applications more expensive. With WebSphere MQ V7.1, each channel can be configured to use the dead-letter queue, or not, independently. We are planning to migrate our WebSphere MQ infrastructure to V7.1 to take advantage of this feature, and many other important enhancements, that will help us to meet several requirements from our customers and lower the service cost.
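With V7.1 the choice becomes a per-channel setting. As a sketch (the channel names are invented for illustration), the receiver channel serving Q Replication disables the dead-letter queue, while another channel on the same queue manager keeps it enabled:

```
* Q Replication channel: never reroute messages, preserving sequence
ALTER CHANNEL(QREP.TO.TARGET) CHLTYPE(RCVR) USEDLQ(NO)

* Other application channels keep using the dead-letter queue
ALTER CHANNEL(APP.TO.TARGET) CHLTYPE(RCVR) USEDLQ(YES)
```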
I recently participated in a project with a team of WebSphere MQ experts from IBM to write the IBM Redbooks publication IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements. The book will help you understand the benefits of upgrading to WebSphere MQ V7.1 and V7.5 and how to implement the new functions.
CICS and Messaging Middleware for z/OS consultant
IBM STG Lab Services
Recently I participated in the IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements IBM Redbooks publication project. It was a great experience and gave me the chance to meet and work with a great bunch of WebSphere MQ experts, including some members of the IBM Hursley development lab.
One of the chapters I wrote revolves around improving WebSphere MQ for z/OS resiliency with a new attribute, CFCONLOS, which allows control over whether a queue manager terminates or tolerates the loss of connectivity to a coupling facility (CF) or coupling facility structure.
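The attribute exists at two levels, and setting it can be sketched as follows (the structure name is illustrative, and CFCONLOS on a CFSTRUCT requires the structure to be at CFLEVEL(5)):

```
* Queue manager level: applies to the administration structure
ALTER QMGR CFCONLOS(TOLERATE)

* Structure level: applies to an individual application structure
ALTER CFSTRUCT(APP1) CFCONLOS(TOLERATE)
```

A structure can also specify CFCONLOS(ASQMGR) to inherit the queue manager setting.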
I was able to have the testing environment set up so that it included multiple coupling facilities. One CF was configured so that it was only connected to the two LPARs on which I was testing. This allowed me to move one of the application structures used by the queue sharing group into this CF.
In the first test, the coupling facility resource management (CFRM) policy for the application structure had another CF defined as an alternate in which it could be rebuilt. The CFSTRUCT CFCONLOS attribute was set to TERMINATE, and the INJERROR tool was used to cause the structure to fail. The queue managers disconnected and a system-managed rebuild was requested. The structure was rebuilt in the second CF, the queue managers reconnected, and a RECOVER CFSTRUCT was automatically requested. I had expected the queue manager to terminate, but in retrospect I realized that connectivity to the CF was never lost.
Next, the application structure was moved back to the original coupling facility and the CFRM policy was changed so that the application structure did not have an alternate CF in which it could rebuild. I then caused the structure to fail again. The result was the same, except that the structure was rebuilt in the same CF in which it failed.
Next, the CF was varied offline to one of the LPARs, and the queue manager on that LPAR abended. When the CF was varied offline to the other LPAR, the queue managers there also abended.
At this point I confirmed that the CFCONLOS attribute relates to the loss of connectivity to the CF, not to the failure of a structure in the CF. When a CF structure fails, the queue manager disconnects and a system-managed rebuild is requested, but that is not the same as a loss of connectivity to the CF. When the CF itself fails, the CFCONLOS attribute determines whether the queue manager abends or continues processing messages.
In the WMQ07 Wildfire workshop environment, where I normally work, we have limited coupling facility resources, and we have to be cautious about making changes because the next workshop is generally only a week or two away. The folks at IBM Redbooks who provided the lab environment for the residency said that we could define several structures and alter them at will! I could test different combinations without fear. It was wonderful.
It also led to some really interesting observations, including what can happen during capacity tests, and ultimately in a production environment, when there are multiple demands on the system.
In our process we ran the same tests many times to allow the automatic resizing of the coupling facility structures and the expansion of the data sets. The first few test series showed the expected results. These tests were repeatable and I was happy. The next week I started capturing the test results for the publication and, of course, things were not behaving.
After some frantic investigation, the answer became obvious: another residency had started using some of the same resource pools. That team's DB2 structures were also defined using ALLOWAUTOALT(YES). When they were testing they were grabbing MY storage, and when I was testing I was only using what was fair. Well, that is my opinion...
We actually kept those results in the book IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements. It adds to the complete picture of the interaction between subsystems and applications that can change expected (and tested!) behavior. Something else to keep in mind for production.
By Jamie Squibb, WebSphere MQ Software Engineer
IBM Hursley Laboratory
As a member of the WebSphere MQ development organization, I hear about requests to enhance the product to satisfy new requirements. These requests vary from the addition of a new property or attribute to much larger functional enhancements. One long-standing request concerned migration: WebSphere MQ is usually part of a larger software solution, and the inability of WebSphere MQ versions to coexist on Windows, UNIX, and Linux caused problems for customers when migrating from one version to another. This was especially pertinent when many applications ran on the same machine, or when software products only certified support for specific versions of WebSphere MQ. Arranging for many applications to have a concurrent outage can be difficult, and can also be disruptive to service level agreements.
This restriction has been lifted as of version 7.1. It is now possible to customize the location where WebSphere MQ is installed, and also to install multiple copies on the same machine, at the same, or different, version. Queue managers can be individually migrated and applications can connect to a queue manager irrespective of which installation it is associated with. This enhancement increases the options available to administrators and can reduce the downtime for a queue manager, during a migration, to the time it takes to stop and then restart it.
So when should you use multiple installations?
If you only have a single queue manager or application, or the impact of an outage during a migration is small, you might favour the simplicity of a single installation. If you have a complex setup, need to minimise the duration of outages, would like to stage the migration of your queue managers and applications, or wish to maximise flexibility, then multiple installations are something you should consider.
In a multiple installation environment there are additional considerations for administration and application connectivity. However, additional commands and other features have been added to simplify this as much as possible. These include being able to view the available installations, and see, or change, the installation each queue manager is associated with. Support has also been added so applications can load the required WebSphere MQ libraries automatically. This can help isolate applications from installation changes and allows concurrent connections to queue managers associated with different installations. There are a few restrictions, so although many applications will work without change, others may need to be updated before you can use them in such an environment. These considerations are discussed as part of this new capability in the new IBM Redbooks publication IBM WebSphere MQ V7.1 and V7.5 Features and Enhancements.
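A sketch of the associated control commands (the installation and queue manager names are invented for illustration):

```
dspmqinst                        # list the installations on this machine
setmqinst -i -p /opt/mqm75       # make the installation at this path primary
setmqm -m QM1 -n Installation2   # associate queue manager QM1 with Installation2
setmqenv -m QM1                  # set up the command environment for QM1's installation
```

In practice you would run dspmqinst first to confirm the installation names before re-associating any queue manager.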
Over the next weeks, all the team members will be sharing some of their technical experience with WebSphere MQ. The first entry comes courtesy of Craig Both and Alex Ross, who share their experience of writing an IBM Redbooks publication.
This post is the experience of 2 of us, Craig Both and Alexander Ross, (mentor and mentee), based in IBM Hursley and both working on WebSphere MQ.
For Alex, this was his first trip to the USA, so of course he needed to be prepared.
Firstly, the basics: traffic drives on the other side of the road, you can turn right on a red light most of the time ("WHAT?"), all the money looks the same ("WHAT?"), and most importantly DO NOT make jokes at immigration ("WHY?")...Just don't.
Next the language barrier. Do you have your American dictionary? ("WHAT?") It's gas, not petrol. It's a sidewalk, not a path. It's apartment, not flat. It's a faucet, not a tap. It's a parking lot, not a car park. It's thanks, not CHEERS!
Our flight was uneventful and got us there successfully, pretty much on time. A sports bar, showing a vast array of sports, was the choice for something quick and easy to eat before hitting the sack, ready for the residency to start the following day. We've since lost count of the number of American football matches we watched during the four weeks.
Monday morning, day 1 of the residency and although we've spoken on the phone several times, we meet everyone in the team for the first time: Alex, Jamie Squibb, and I all from the IBM Hursley Laboratory. The three of us have been working on WebSphere MQ development, test, and customer support for several years. Lyn Elkins is an IT Specialist from Advanced Technical Skill in the USA; she has plenty of experience helping customers with WebSphere MQ solutions. Barry Dearfield is a consultant in IBM STG Lab Services; he is a CICS and WebSphere MQ expert. Cezar Aranha is a consultant in IBM Brazil; his focus is messaging integration middleware. Mark Taylor, a well known WebSphere MQ evangelist also from Hursley, is providing overall technical direction. The project leader is Marcela Adan from the IBM Redbooks organization. Great team, many years of experience, and a good mix of skills and backgrounds. We learned a lot from each other.
We all went in convoy from the hotel to the IBM site and settled into our desks, ready to get started. We immediately started to reap the rewards of doing much of the planning for the book before we arrived: sorting out the structure of the book, setting up the lab environment, and deciding who would write what. We all jumped right in, and by the end of the first week the book already stood at something of the order of 200 pages! We'd set ourselves the target of completing a first draft of everything (literally everything) by the end of the second week. By that time we'd even started to review our colleagues' work too! Was this the best team on an IBM Redbooks project ever!?!?
At the same time, Mark was also updating the IBM Redpaper WebSphere MQ Primer: An Introduction to Messaging and WebSphere MQ, an IBM Redbooks publication that has been very popular since it was first published in 1999.
It was going well, but let us assure you, it was hard work. It involved some long days, some weekend working, and of course being away from home for so long. We even made it all the way to the penultimate day before having a full blown disagreement about content, but after some discussion and an hour and a half later, we reached a decision on the matter and moved on. We thought one disagreement in the space of four weeks was good going!
So when we weren't writing the book, what were we doing? Almost certainly one of the following things: eating, playing pool, or watching American football. Remember Alex, you said you were ready to eat...don't let us down now! Our American football feast ended with us seeing a game live: the University of North Carolina against Duke University. Duke being the home team, that's who we were supporting. It wasn't looking good for them, until the very final moments when they won the game and a pitch invasion ensued. It was nothing short of amazing. We did experience some of the night life while there, our English accents being immediate ice breakers, and we made lots of friends that we'll never see again!
The experience for us both was excellent: the people we met, the work we did, the fun we had. If you get the chance, you should definitely do it! To see the latest residencies accepting nominations, check out the IBM Redbooks residencies page on the IBM Redbooks website.
In my job as a Project Leader in ITSO (also known as the IBM Redbooks team) I have published more than 100 books since 1999. My first two Redbooks even had the red covers. Anyone still remember them?
I have always enjoyed working for ITSO, because of the opportunity to meet with people from all over the world and to be able to work on the leading-edge IBM products.
Most of the Redbooks projects are executed in one of the two ITSO centers in the US -- Raleigh and Poughkeepsie. However, recently we have started running some of these outside the US, in particular in Growth Market Unit (GMU) countries such as India, Turkey, Russia, China, and Brazil. I managed such a project in Brazil two years ago and truly enjoyed it. So when Richard Baird, Vice President, WebSphere Customer Value and Competitive Initiatives, mentioned he was considering sponsoring a Redbooks project in Turkey, I jumped at the opportunity.
Holding Redbooks residencies in GMU countries is a major skill-attainment vehicle for these countries, which typically have a much younger IT workforce. Each year, thousands of new IT graduates join the IBM workforce there, and these people have massive technical enablement needs.
There were several candidates for the Redbooks topic, and after some discussions, WebSphere Application Server Migration Guide was selected. We were going to update the WebSphere Application Server V7: Competitive Migration Guide for the new migration capabilities that are available with IBM WebSphere Application Server Migration Toolkit V3.5 (Migration Toolkit). The Migration Toolkit is a suite of tools that can help you quickly and cost-effectively migrate to WebSphere Application Server V7, V8, or V8.5. You can migrate from a previous version of WebSphere Application Server or a competitive application server, including Apache Tomcat Server, JBoss Application Server, Oracle Application Server, and Oracle WebLogic Server. The WebSphere Application Server V7: Competitive Migration Guide, which I co-authored in 2010, was based on the first version of the Migration Toolkit. When it was time to update this book, we created the WebSphere Application Server V8.5 Migration Guide:
Let the project start!
We "kicked off" the project on May 14 at IBM Istanbul. We had an impressive team consisting of clients, IBM Business Partners, and IBMers:
2 local clients: Ersan (Akbank) and Hakan (GarantiBank)
3 local partners: Kurtcebe (Sadeyazilim), Sinan (Eteration), Tayfun (VBT)
6 IBMers: Burak, Hatice, Levent (IBM Turkey), Dave, Ross (IBM UK), and Rispna (IBM USA)
Dave and Ross are WebSphere developers from IBM Hursley in the UK, and they joined the team to help write the book and also to provide skills transfer to the local team. The following picture shows the residency team:
Left to right: Hakan Yildirim, Levent Kaya, Tayfun Yurdagul, Burak Cakil, Ersan Arik, Sinan Konya, Vasfi Gucer, Dave Vines, Kurtcebe Eroglu, Ross Pavitt, Hatice Meric (not shown: Rispna Jain)
So, our Istanbul adventure started. It was probably not as exciting as the Istanbul scenes in the latest Bond movie, Skyfall, but it was nevertheless quite a valuable experience. After discussing the content outline with product management, development, and the residency team, we decided on the following table of contents for the WebSphere Application Server V8.5 Migration Guide:
Chapter 1. Overview of WebSphere Application Server V8.5
Chapter 2. Migration strategy and planning
Chapter 3. Common migration issues
Chapter 4. Installation and configuration of the Application Migration Tools
Chapter 5. Differences between Eclipse and Rational Application Developer
Chapter 6. Migrating from Oracle WebLogic
Chapter 7. Migrating from Oracle Application Server
Chapter 8. Migrating from JBoss
Chapter 9. Migrating from Apache Tomcat
Chapter 10. Application Framework migration
Chapter 11. Installation and configuration of the Application Migration Tool - WebSphere Version to Version
Chapter 12. Migrating from earlier versions of WebSphere Application Server
Appendix A. Migration questionnaires
We decided to write a practical reference book for migration specialists, but the biggest challenge was to find meaningful and realistic applications that we could use to show the capabilities of the Migration Toolkit. We did not want to use the same applications from the original book, and most of the applications we found on the Internet had license restrictions, so we ended up writing most of the sample applications during the residency. You can download these applications from the Additional Materials section of the book.
My impressions with the Migration Toolkit
So having worked extensively on the Migration Toolkit during this project, what are my impressions?
I found the Migration Toolkit V3.5 vastly improved compared to V1, which we used in the original book. It will be your best friend when migrating from Apache Tomcat Server, JBoss Application Server, Oracle Application Server, and Oracle WebLogic Server.
Among the scenarios, the only case where the Migration Toolkit provided limited help was framework migration. This is covered in Chapter 10, "Application Framework migration", where we present two migration scenarios, for Seam and for Spring. In these scenarios we used Rational Application Developer for WebSphere Software as a build and test environment.
You can upgrade the version of Java SE by using the Migration Toolkit, but doing so does not change the Java EE specification requirements of your application. If you decide to upgrade the Java EE level of your application, you can use the Java EE specification upgrade wizard within Rational Application Developer. However, specification migration is best done after the platform migration, because WebSphere Application Server is backward-compatible with earlier Java EE versions.
We cover migrating from Apache Tomcat 7.0.27 to the WebSphere Application Server V8.5 Liberty profile in Chapter 9. We show that the Migration Toolkit provides useful instructions for simple applications, and it shows an even greater benefit when migrating applications that use a larger number of features and configurations that are specific to Apache Tomcat.
In the original book, we did not cover WebSphere Application Server version-to-version migration, but in this newer book we dedicated two chapters to it. Although we found that migrating applications between versions of WebSphere Application Server is straightforward, some applications, such as those that use a third-party web services engine, can be more complicated. With some care, migration of such applications is still easy to achieve.
We did not forget the videos! During the project we shot a video with our executive sponsor Richard Baird. He shared his views about why our clients should consider migrating to WebSphere Application Server V8.5 and how Migration Toolkit and this IBM Redbooks can help them with this migration.
At the end of the project, we also created a farewell video with some members of the residency team. They talked about their experiences working with this project.
I would like to hear your comments or questions about this book. You can contact me at email@example.com or on Twitter: @vasfigucer. I am looking forward to my next GMU Redbooks project, who knows where!
Vasfi Gucer is an IBM Redbooks Project Leader. He leads publications creation about Tivoli, WebSphere, and Cloud Computing.
Is your organization struggling to manage the massive volumes of data and transactions being produced at an unprecedented pace through converging technologies, social business, and the proliferation of mobile and interconnected devices? Studies show that CEOs, business and IT leaders understand the growing importance of technology to address these challenges, but may not feel ready to leverage it.
Most CEOs said technology is the most important external factor affecting their business, when interviewed for the IBM 2012 CEO Study. Business and IT leaders polled recently by the IBM Institute for Business Value identified the top trends affecting the competitiveness of their organizations in the next 3 years as:
Mobile device proliferation
Analytics and solutions
Yet, respondents felt unprepared to take advantage of those technology trends.
At the October IBM InterConnect 2012 conference, IBM made announcements to help businesses speed the delivery of new technologies and change the economics of IT by:
Deploying integrated systems built for cloud, and
Simultaneously architecting their businesses for cloud, mobile and big data.
These announcements introduce unique systems with built-in expertise, advancements in cloud computing and mobile technologies, plus many enhancements across the IBM WebSphere portfolio. The objective is to help businesses manage today's pace of change, reinvent relationships, and uncover new markets, driving innovation and growth, truly changing the game.
Need to run critical applications in a private cloud environment?
The enhanced IBM CICS® Transaction Server V5.1 enables businesses to easily extend CICS applications and robust transaction processing to the cloud. Businesses can reduce operating costs and increase performance of CICS environments.
Need infinite scalability in public or private clouds?
The enhanced IBM WebSphere eXtreme Scale V8.6 enables businesses to improve customer service with extremely fast application response times. It offers consistent and predictable performance for business-critical applications. Businesses can create a true enterprise-wide data grid across multiple application environments.
Need to build, connect and transform mobile applications?
The enhanced IBM Mobile Foundation V5.0.5 and IBM Worklight V5.0.5 enable organizations to optimize the user experience across multiple devices. These offerings also help reduce time-to-market for new applications and simplify management of application distribution across the enterprise.
Need fast deployment of packaged expertise that is affordable?
New IBM PureApplication® System Patterns reuse industry best practices to accelerate middleware deployment. With these patterns, businesses can optimize IT resources and costs for applications and middleware deployments. The patterns include IBM Business Process Manager Pattern and IBM Operational Decision Manager Pattern - both on Red Hat Enterprise Linux, IBM SOA Policy Pattern and IBM SOA Policy Gateway Pattern, IBM Messaging Extension for Web Application Pattern, and IBM WebSphere MQ Hypervisor Edition for Red Hat Enterprise Linux Server.
Need to get serious about integration but require a simplified package?
IBM WebSphere MQ Advanced V7.5 helps to quickly and cost-effectively address integration requirements from new technologies. Businesses can increase infrastructure agility and rapidly pursue new market opportunities.
One of the key topics at IBM Impact 2012, to be held in Las Vegas April 29 to May 4, will be IBM PureSystems. It's a new family of what IBM calls expert integrated systems that combines the flexibility of general purpose systems, the elasticity of cloud and the simplicity of an appliance tuned to the workload. And I think that the cloud and workload aspects are key ones here.
I had the chance to talk with Jerry Cuomo, IBM Fellow, VP and WebSphere CTO -- and one of the key presenters on PureSystems at Impact -- about the recent announcement and what it will mean to the world of business and IT. Its impact, if you will. But before I share Jerry's insights, I'd like to step back and talk about cloud in a more general way -- then we'll see how PureSystems fits in.
I sometimes think one of the most important and underrated aspects of cloud computing is "abstraction" -- the way clouds can empower organizations to move up from a lower level of abstract thought and execution to a higher, better one.
Of course, abstraction is a little... abstract itself, as subjects go. So let me trot out one of my patented analogies to clarify a bit.
Have you ever seen a baby when it's first learning to walk? The job is really quite a complex one as far as the baby is concerned. It has to ponder large muscle groups very consciously, deliberately thinking about using one leg, then another, all while also using small muscle groups to maintain its balance.
But eventually the baby can stop thinking about things on that level -- the level of specific muscle control -- and start thinking on a higher, more abstract, more effective level.
Now it's not "I need to move my left leg forward, and put my weight on my left foot" but, much more simply, "I want to walk into the next room."
This new, higher level of abstraction the baby has reached gives it new power to pursue its goals (which may or may not include terrorizing the family pet and deep-searching local trash cans).
And if this baby is ultimately going to reach the highest level of competitive motion -- perhaps becoming a world-class sprinter, the next Usain Bolt -- it is going to have to be thinking on a very high level of abstraction indeed. There is just no time to think about such details as which muscles you'll move next, when you're running sprints in the Olympics. There is instead only nine and a half seconds to travel a hundred meters.
That's not a bad metaphor for business today -- a similarly competitive world, in which market agility tends to translate into market success. You don't want to have to think about the technical details; you really may not have the time.
You want to focus on your goals and strategies and services, the heart of the value you're creating in the world, and trust that your infrastructure will be up to the efficient execution of whatever you have in mind.
Clouds -- done right -- can be that infrastructure.
The question isn't "What's our tech?" but "How well do we fulfill our workloads?"
All this crossed my mind when I learned about PureSystems and talked with Jerry Cuomo. He agreed with me about the importance of abstraction, but was quick to point out that the new launch delivers far more benefits than just that.
It seems that PureSystems is the end result of IBM's underlying goal to deliver a next-generation service delivery platform that fulfills workloads optimally -- even as workloads change dynamically over time, across technical and business domains, and across organizations.
"PureSystems is unique to our industry," he said. "It represents a bold balance of being open yet prescriptive, and preserving compatibility with your current applications while introducing support for highly efficient new workloads. PureSystems do not just hold the potential to be workload-aware; they are workload-aware. PureSystems do not merely enable workloads; they contain them, including a scalable web workload. They facilitate lifecycle management like monitoring and license management, and what's more, those capabilities work right out of the box. Simply put, IBM PureSystems are not just your cloud-in-a-box solution, they are your workload-aware cloud."
What are the ingredients of the PureSystems recipe? Basically, they're packaged in two groups. The first group -- "next-generation platforms," or NGP -- is a top-caliber variation on Infrastructure-as-a-Service.
But it's in the second group, which focuses on application systems, that the real magic happens.
Recall that IBM, almost uniquely to the IT industry, produces solutions at every layer of the technology stack. That means IBM, almost uniquely to the IT industry, also has the power to combine those layers into optimized packages -- all of which also benefit from IBM's enormous experience consulting with organizations of all sizes, in all industries, on cloud computing topics.
For PureSystems application systems, that means IBM's strengths are multiplied, each helping all the others.
"Today, organizations have choices at every level -- processors, storage, network, OS, middleware and applications," said Cuomo. "While the last decade of open competition around these components has driven record capability and quality, enterprises gained the ability to mix and match these best-of-breed parts only by paying the very high price tag of the labor cost and skills needed to orchestrate the final composition. That leaves very little in the enterprise's innovation budget. PureSystems give the customer back their innovation budget. Our hardware and software experts have used our cumulative experience to create an integrated system that also empowers our clients to stir in their own expertise and capabilities -- easily."
Here you see just what IBM means by �expert integrated systems.� It's not just IBM's expertise that's being integrated; it's also the customer's. This is the magic of PureSystems: it is an ideal foundation for private cloud computing that
(a) delivers the best technologies IBM has to offer, drawn from the industry's strongest cloud portfolio, (b) combines those technologies in the best ways for a private cloud, in direct support of proven best practices, and (c) still allows the new cloud to be easily tweaked to create a perfect fit for any given organization's needs.
Instant time to value, but also straightforward tailoring
In fact, beyond merely "allowing" that kind of tweaking, IBM has made it remarkably straightforward.
For instance, cloud services executing on PureSystems can be managed by team members both inside and outside of IT proper.
Line of business managers are going to enjoy being able to request a new service right from a catalog, then have oversight of that service themselves -- an experience they may never have had before, and a power akin to being able to walk, instead of having to ask someone else to carry you.
They're also going to enjoy the fact that cloud management for PureSystems can easily be aligned with job roles, so they can manage their services using the interface that works best for them, as determined by the performance metrics that they deem most significant.
IBM has, in fact, created a new admin paradigm just for PureSystems -- another variation on the theme of multiple levels of abstraction -- and Cuomo is very optimistic about how it's likely to be received.
"One of the aspects of PureSystems we think our customers will love is the way they make management so straightforward," he said. "Via our approach of progressive disclosure, they can administer services at the technical level that makes the best sense for them. Specifically, we support a progression with three levels of disclosure. The first, Virtual Application, only requires you to know the needs of your application -- middleware and hardware are hidden. The second, Virtual Systems, pre-arranges middleware in patterns designed to power specific workloads. Last, Virtual Appliance supports a bring-your-own-expertise model, allowing you to include your own middleware and construct your own patterns."
This concept of workload patterns is yet another selling point of PureSystems. Thanks to literally decades of experience in IT consulting, IBM has acquired an extraordinary level of knowledge about middleware/hardware combinations and the patterns that tend to apply. That insight is baked in, so you can leverage the patterns right away. And most organizations will do exactly that.
But you can also, as Cuomo suggested, create and roll out new patterns from scratch. And you can combine these two models -- integrating, in a sense, the best of IBM's expertise and the best of your own.
It's hard to get much more expert or integrated than that, and Impact 2012 will be the place to learn more about it.
About the author: Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Cloud computing has become, in certain ways, the eat-right-and-get-some-exercise of IT infrastructures.
By this I mean that everybody's heard the message, and everybody knows the potential benefits... but not everybody actually follows through to the degree they could, or should, to get the best possible results. Even in 2012, the world is full of organizations that remain cloud holdouts. (I won't go so far as to call them cloud Luddites.)
Now, there are a number of valid reasons for this reluctance -- security and compliance, for instance, are major worries for certain sensitive applications, which aren't likely to migrate outside company walls any time soon.
Guaranteed performance is another common issue. For certain particularly business-crucial applications, like ERP, many organizations are simply not willing to trust a shared architecture like cloud in which many different services execute in parallel. So instead they're sticking with a tried-and-true, dedicated architecture to play it safe.
This, however, means that the information locked away in those applications can't easily be leveraged in other ways, and for other reasons -- very awkward and unfortunate for business purposes.
Fortunately, there's a good compromise: hybrid cloud models that deliver a sort of best-of-both-worlds approach. In short, you put your cloud-friendly apps in the cloud, leave the other apps (perhaps compliance-sensitive or ERP apps) in your conventional, in-house infrastructure and then integrate them as cleanly as you can to meet your needs.
Getting this done, however, means finding clever ways to get information flowing as it should between the two architectures. And by clever, what I really mean is fast, cost-efficient and yet complete, migrating all the information you want (and none of the information you don't) into the cloud.
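To make the "all the information you want, and none of the information you don't" idea concrete, here is a minimal sketch in Python of a one-way, filtered sync between an on-premise system and a cloud target. Every name in it (the record layout, the `sensitive` flag, the function names) is invented for illustration; it is not a real integration API, just the shape of the selectivity being described:

```python
# Hypothetical sketch: selectively sync on-premise records to a cloud target.
# The predicate decides which records are "cloud-friendly"; everything else
# stays behind the firewall. All names here are illustrative, not a real API.

from typing import Callable, Iterable


def sync_records(records: Iterable[dict],
                 should_migrate: Callable[[dict], bool],
                 push: Callable[[dict], None]) -> int:
    """Push only the records that pass the filter; return how many moved."""
    moved = 0
    for rec in records:
        if should_migrate(rec):
            push(rec)
            moved += 1
    return moved


# Example: migrate only records not flagged as compliance-sensitive.
on_prem = [
    {"id": 1, "sensitive": False},
    {"id": 2, "sensitive": True},   # stays on-premise
    {"id": 3, "sensitive": False},
]
cloud_side: list = []
count = sync_records(on_prem, lambda r: not r["sensitive"], cloud_side.append)
print(count)  # 2
```

The hard part in practice, of course, is not the loop but getting the predicate and the field mappings right for each pair of applications -- which is exactly the problem the template-based approach discussed below is meant to solve.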
How to make that happen? One way would be to try to custom-code the interfaces between these apps.
But anybody with IT experience is probably already cringing at that idea. It might yield complete results, but it's not likely to be either fast or cost-efficient.
Is there a pragmatic plan B? Turns out there is.
Accelerate almost any hybrid cloud initiative via fast, seamless information integration
Recently I talked with Chandar Pattabhiram, who drives go-to-market strategy for the IBM WebSphere® Cast Iron product line. And he confirmed for me that indeed hybrid cloud models are increasingly attractive -- if you can take care of your information-migration needs in a business-optimized way.
"It's a hybrid world today and will continue to be so for a long time," said Pattabhiram. "Integration has become a critical component of this hybrid world because companies need to rapidly connect the new cloud services they're adopting with the rest of the on-premise applications. And that's where IBM WebSphere Cast Iron Cloud Integration capabilities can really lend a helping hand."
Does �Cast Iron� ring a bell for you? If you're an IT pro, you may recall that in 2010 IBM acquired Cast Iron -- a leading provider of solutions designed to integrate cloud and in-house apps in an accelerated way.
The Cast Iron technology thus turns out to target the exact 2012 scenario I describe above -- a company wants to link its own apps seamlessly with cloud apps in a hybrid model, generating the least possible complexity, costs and risks along the way.
"Integration has become the 'productivity application' for cloud computing," said Pattabhiram. "Without integration, cloud users can wind up 'swivel chairing' -- trying to alternate between two completely different architectures to get access to critical business information in a rather clumsy way. But with integration, they get all the information they want in one place: the cloud. Net result is that integration helps companies maximize productivity, increase adoption and also maximize the value of their cloud investment."
Drag and drop your way to cloud nirvana
How exactly does IBM WebSphere Cast Iron Cloud Integration work this magic? The answer is basically threefold: (1) Out-of-the-box templates and (2) special functions, both of which are managed via a simple drag-and-drop interface, and, if necessary, (3) custom scripting to handle the rare odd case.
Let's look at the templates first -- the heart of the solution. These have been developed based on the premise that companies struggling with integration issues are quite often dealing with the same groups of applications.
I mentioned ERP before; SAP apps are a good example along those lines. And migrating the information from SAP into the cloud really means, typically, migrating it into a particular cloud environment/application. One very common example: Salesforce.com.
So, to reflect this situation, the Cast Iron solution includes hundreds of templates to perform such jobs, each designed for a particular type of migration such as SAP-to-Salesforce. And in the majority of cases, a template will be found that (following a wizard-driven Q&A and basic validation checks) does the necessary job right out of the box.
How does that sound in terms of our previous evaluative criteria ("complete, fast and cost-efficient")? Pretty fair, I'd say.
Now, there are certainly going to be cases where not every data record lines up perfectly between the two infrastructures; a little jiggering may be required. In scenarios like that, the Cast Iron solution also provides a range of handy data modification functions. Imagine, for instance, that you need to combine two text strings from the SAP data set into a single text string in the Salesforce application. To do that, you could use the concatenation function, which glues the two strings together. Problem solved, and we still haven't left the drag-and-drop interface.
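As an illustration of the mapping just described -- and only an illustration, since the actual Cast Iron tool does this through its drag-and-drop interface rather than code -- here is what gluing two source strings into one target field might look like if written out in Python. The field names (`FIRST_NAME`, `Name`, and so on) are invented for the example, not real SAP or Salesforce schema:

```python
# Hypothetical sketch of a field mapping with a concatenation step.
# Field names and the record layouts are invented for illustration;
# the real Cast Iron tool expresses this via drag and drop, not code.

def concat(a: str, b: str, sep: str = " ") -> str:
    """Glue two source strings together into a single target string."""
    return a + sep + b


def map_record(sap_record: dict) -> dict:
    """Map a (hypothetical) SAP-style record onto a Salesforce-style one."""
    return {
        "Name": concat(sap_record["FIRST_NAME"], sap_record["LAST_NAME"]),
        "AccountId": sap_record["ACCOUNT_ID"],
    }


source = {"FIRST_NAME": "Ada", "LAST_NAME": "Lovelace", "ACCOUNT_ID": "A-1001"}
print(map_record(source))  # {'Name': 'Ada Lovelace', 'AccountId': 'A-1001'}
```

The point of the sketch is simply that each "function" in the drag-and-drop palette corresponds to a small, well-defined transformation like `concat` above -- which is why chaining a few of them usually beats writing and maintaining custom interface code.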
So when you add up the convenience and capabilities, IBM WebSphere Cast Iron Cloud Integration strikes me as a tidy solution to a very common problem. Furthermore, thanks to the way it can be tweaked and modified as needed, it works well even in cases where the in-house app is completely homegrown, and there's therefore no template available.
Pattabhiram sees things the same way. "The templates are remarkably comprehensive, but, no, they won't work for all scenarios," he said. "Still, even for home-grown applications, Cast Iron's 'configuration, not coding' approach is the way to go -- much faster and much less expensive than trying to custom code the interfaces between these apps."
The final step, following the new orchestration across the two architectures you've just created, is to export it to an appropriate form factor for your needs. Specifically, we're talking about one of three options: (a) a physical server, (b) a virtual server or (c) a cloud-based service. The Cast Iron solution can be used for all three. That's a range of choices to fit any customer's requirements, and it also avoids locking them into a specific architecture or business process that, down the road, they might want to change.
"Integrating the cloud doesn't always really mean integration in the cloud," said Pattabhiram. "What we've seen is that customers choose amongst a variety of form factors -- physical appliance, virtual appliance or integration as a service -- for their cloud integration needs. The key is to provide this flexibility of deployment options to customers depending on their size and IT environment."
Maybe all of that sounds a little theoretical to you, and you need a little proof-of-concept? Take a look at the situation faced by Siemens Energy.
These guys faced the exact scenario I describe above -- an SAP-to-Salesforce hybrid cloud integration for significantly faster mirroring of information and key performance metrics across the two environments. And not only did the Cast Iron solution get the job done, it got it done in under two weeks.
How does your organization measure up? What's your cloud integration strategy?
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Just like the Internet transformed retailing, media and entertainment in the 1990s, social networking and mobile communications are now putting even more power in the hands of individuals.
Today, 70 percent of a customer's first interaction with a product or service takes place online, 64 percent make a first purchase because of a digital experience, and of the two billion people connected to the internet, more than 600 million are on Facebook. This is compounded by an explosion of mobile purchases, which are tripling annually, to $119 billion this year alone. Think of this as the era of the connected consumer.
The shift is a good thing. It means consumers know more than ever about their choices and can comparison shop for the best price with ease. They get what they want when they want it. And they can make their opinions known -- positive or negative -- to thousands or even millions of other consumers.
The power of the connected customer
This big shift in how customers connect brings profound consequences -- redefining the term "commerce." What used to be seen as a flow of goods from manufacturers through a distribution chain to customers has become an interactive feedback loop, where consumers, producers, distributors, the media, and marketers all have new roles to play. Smart companies see "selling" not so much as a traditional function of their organization but rather as an ever-evolving set of services they perform for their customers -- performed in concert with their business partners.
Toward that end, organizations are getting more intelligent, so that vast amounts of customer data -- from demographics, to product-purchase histories, to online conversations -- can be analyzed and turned into real value in real time.
They are getting more interconnected, so that customer insight can be fed into every point in the process -- from design to distribution. And they are extending this network of insight to suppliers and partners, because no business can innovate alone.
And they are getting more instrumented, so every item of inventory can be tracked; every interaction with customers can be understood.
Smarter Commerce at work
Leaders in every industry are turning to dynamic business networks that span human, digital, social and mobile modes.
For example, an electronics retailer is using seemingly unrelated purchasing events to get the products its customers want on the shelves when they want them, and make the whole shopping experience seamless across all channels -- from brick-and-mortar, to the Web, to mobile.
An automaker is continuously improving its products by infusing customer feedback and reviews into the design process, and pulling in the best parts, suppliers, and assembly expertise without disruption as market needs continuously change.
A bank lender is taking a 360-degree view of its customers using predictive analytics to determine which types of products might interest a patron and even when, where and how to approach them -- putting customers at the center of its strategy for what new services are introduced.
The complexity of the task
Building dynamic business networks that span human, digital, social and mobile access modes isn't easy. Businesses often find themselves with too many siloed systems, and too many unique processes that don't share information or integrate very well.
But now there's new technology that enhances and automates the way businesses connect -- across the wide range of systems and activities flowing between departments, businesses, and into the cloud. There's also clever analytics software that can turn vast streams of data into a narrative that people can understand and put to use.
Defining a new market
Powerful software tools and services are available from IBM to help companies to better address the connected customer.
In 2010, IBM added to its own WebSphere Commerce software platform with three related acquisitions -- Sterling Commerce for order management and supply chain optimization; Coremetrics for analyzing customer behavior; and Unica for managing marketing campaigns from beginning to end. Together, they address a broad spectrum of enterprise commerce activities -- new ways to buy, sell and secure greater customer loyalty in the era of mobile and social networks.
In March 2011 IBM debuted its Smarter Commerce Initiative with new software solutions designed to help companies intelligently automate supplier and trading partner interactions, automatically turn marketplace insights into marketing and sales actions, and seamlessly connect online, mobile and social channels to physical stores. IBM is defining and leading this new market, which is expected to grow to a $20 billion opportunity in software alone by 2015, driven by demand from clients that must bring new levels of automation to marketing, selling and fulfillment, and managing brands.
IBM has also put together a robust consulting services practice and a "university" program to teach commerce skills to clients. It's packaging all of these capabilities together and presenting them as an integrated set of technology and business solutions. And now with IBM's help, leaders in every industry are serving the connected customer's needs at every turn.
At its first Smarter Commerce Global Summit, held September 19-21 in San Diego, CA, IBM announced new software and services that address a broad spectrum of enterprise commerce activities -- new ways to buy, sell and secure greater customer loyalty in the era of mobile and social networks.
Here's a simple video on the 'how' and 'why' of Smarter Commerce.
The centerpiece of the supplement: An interview with IBM's Tom Rosamilia (general manager of Power and z Systems) and Hayden Lindsey (Rational vice president and Distinguished Engineer, Enterprise Modernization), who talk about 1) IBM's recent zEnterprise and Smarter Computing launches, 2) ways companies are accelerating enterprise modernization on System z and Power systems using the latest IBM development tools, and 3) what customers can expect from the IBM Innovate 2011 Conference (June 5-9 in Orlando).
The supplement also includes links to information about the various products discussed, a couple of videos, and the complete schedule of sessions in the System z and Power Systems tracks at Innovate.
And here's a secret: Even though the link provided here takes you directly to the supplement, once there you can page through the rest of the magazine. I heartily recommend you do it -- it's a terrific publication -- and then subscribe to the IBM Systems Magazine Digital Edition, which is FREE (and pretty impressive as free publications go!).