September 7, 2012 | Written by: Suchitra Joshi
Scale service levels rapidly for a better business outcome
One of the key questions IT leaders are asking today is this: “How scalable will my services be — and what do I need to do to maximize scalability for particularly dynamic services?”
In large part, this is because business requirements and customer interests both change much more quickly today than at any point in the past.
If a business creates a new service, whether internal or external, it has no certain way to predict just how successful or widely utilized that service will be. And if workload levels turn out to be much higher than expected, and the infrastructure isn’t capable of responding properly, the business outcome will certainly be diminished.
For internal services, that translates into reduced team member productivity, because the services employees rely on to carry out their job duties don't perform as well as they should, creating delays.
For external services, the impact could be even worse — problematic service performance, or even a service outage, which could easily result in loss of revenue, a less-than-ideal customer experience and, over the long term, brand damage and poor public relations.
Minimize scaling costs and excess capacity — even given unpredictable demand levels
One way to solve this problem, of course, is to allocate resources for services in such a way as to anticipate even the highest likely demand levels.
This, unfortunately, also means spending more for those resources — more, in fact, than the organization is likely to need during some (or perhaps even all) of the time those services will be up and running.
As a superior alternative, it would be possible to create a service delivery infrastructure that flexibly responds to changing workloads in a smart fashion — requesting and receiving more resources based on dynamic requirements, then returning those resources to a general pool when the demand level falls again.
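The request-and-return pattern described above can be sketched in a few lines. This is a minimal illustration, not a real cloud API; the names (`ResourcePool`, `acquire`, `release`, `rescale`) and the capacity units are hypothetical.

```python
# Hypothetical sketch of elastic resource allocation: a service borrows
# capacity from a shared pool as demand rises and returns it when demand
# falls. All names and numbers here are illustrative assumptions.

class ResourcePool:
    def __init__(self, total_units):
        self.free = total_units  # units of capacity (e.g., vCPUs)

    def acquire(self, units):
        # Grant as much as the pool can spare, never more than is free.
        granted = min(units, self.free)
        self.free -= granted
        return granted

    def release(self, units):
        self.free += units


def rescale(pool, held, demand):
    """Grow or shrink a service's allocation toward current demand."""
    if demand > held:
        held += pool.acquire(demand - held)
    elif demand < held:
        pool.release(held - demand)
        held = demand
    return held


pool = ResourcePool(total_units=16)
held = 0
for demand in [2, 6, 12, 4, 1]:  # a fluctuating workload
    held = rescale(pool, held, demand)
    print(f"demand={demand:2d}  allocated={held:2d}  pool free={pool.free:2d}")
```

The key property is the last branch of `rescale`: capacity goes back to the general pool the moment demand falls, so other services can use it.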
That’s certainly a major factor driving the interest in virtualization and, as its more advanced expression, cloud computing. Even in a virtualized or cloud architecture, however, there will still be circumstances when more physical resources are required: more storage, more network bandwidth or, very commonly, more processing power.
So given this context, the question becomes: “If I add more processors, how easily and quickly will the system be able to take advantage of them, and how will they then contribute to the total execution of all the workloads my architecture handles?”
Need more computational power? Add another POWER7 processor and watch workloads instantly benefit
All of this was on the minds of IBM’s POWER7 processor architects, and they have included many compelling features in response.
A great example: IBM’s SMP (symmetric multiprocessing) interconnect fabric. This technology — an area where we lead the high-tech industry — allows more processors to be added at will to IBM Power Systems, which then leverage them very efficiently as a new source of computational power.
How does IBM’s SMP fabric work? Consider it the logical liaison between any given POWER7 processor and the IBM Power Systems server as a whole. The SMP fabric provides the foundation by which the system recognizes a new processor, allocates work to it and orchestrates the execution of all workloads, holistically, throughout the entire system.
Thanks to its exceptionally efficient design, this technology allows IT to add more processors at will to an IBM Power Systems server: up to eight processors total, for as many as 64 cores in a single system.
Each processor has its own level two and level three cache, which are used to buffer key information or instructions, and each processor has its own power requirements as well, which must be carefully managed for optimal results. Each processor can, in this sense, be considered almost a separate processing world of its own.
Thanks to IBM’s SMP interconnect fabric, those separate worlds are unified, and unified in an optimal fashion. In a perfect world, adding new processors to an SMP-capable system would result in linear scaling of service performance; that is, computational power would grow in direct proportion to the processors added. Thanks to the efficiency of IBM’s interconnect fabric, the outcome you receive comes very close to that linear ideal.
If the processor count goes up by 20 percent, then the computational capability of the IBM Power System receiving those new processors will also climb by roughly 20 percent. That’s a claim few other system vendors can make because their fabric implementations aren’t as efficient, and added processors will not yield nearly as linear a performance gain.
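The arithmetic behind that claim can be made concrete with a small sketch. The efficiency figures below are illustrative assumptions, not IBM benchmark data; the point is simply that the closer a fabric’s scaling efficiency is to 1.0, the closer added processors come to delivering proportional throughput.

```python
# Illustrative scaling arithmetic only -- the efficiency values are
# assumed for the example, not measured results from any real system.

def scaled_throughput(base, procs_before, procs_after, efficiency=1.0):
    """Estimated throughput after adding processors, given a scaling
    efficiency (1.0 = perfectly linear; lower values model interconnect
    overhead that eats into the gain from each added processor)."""
    added_fraction = (procs_after - procs_before) / procs_before
    return base * (1 + efficiency * added_fraction)

base = 100.0  # arbitrary throughput units with 5 processors

# 20 percent more processors (5 -> 6) on a near-linear fabric:
print(scaled_throughput(base, 5, 6, efficiency=0.98))

# The same upgrade on a less efficient fabric yields a smaller gain:
print(scaled_throughput(base, 5, 6, efficiency=0.70))
```

With `efficiency` near 1.0, a 20 percent increase in processor count yields roughly a 20 percent throughput gain, which is the behavior the article attributes to the POWER7 SMP fabric.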
In sum, then, the SMP fabric represents one more way IBM has empowered organizations to get superior, more finely tuned performance for unpredictable business workloads: by allowing them to add more computational power to IBM Power Systems exactly when they need it, and then leverage that power very efficiently for more scalable, responsive services.