How to find the right balance between scale up and scale out in cloud

Cloud computing provides the elasticity to scale infrastructure on demand. Applications with dynamic workload demands get application programming interface (API)-driven access to a flexible infrastructure, so they can meet performance guarantees while minimizing resource costs.

The viral nature of social media has made responding to a sudden surge in demand extremely important: slow response times quickly translate into lost revenue or lost customers. The burden of scaling, however, falls on the user or on an automated process that scales the application infrastructure. Reactive scaling decisions are based on rules with thresholds on resource utilization and response times; proactive scaling can draw on historical usage data, modeling, analytics and tracking of social media sites.
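To make the reactive case concrete, here is a minimal sketch of a threshold-based scaling rule. The metric names, thresholds and service-level target are illustrative assumptions for this example, not settings from any particular autoscaler or product.

    # Minimal sketch of a reactive, threshold-based scaling rule.
    # All thresholds below are assumptions chosen for illustration.

    def scaling_decision(cpu_utilization, response_time_ms,
                         cpu_high=0.80, cpu_low=0.30, latency_slo_ms=500):
        """Return 'scale_out', 'scale_in', or 'hold' from current metrics."""
        if cpu_utilization > cpu_high or response_time_ms > latency_slo_ms:
            return "scale_out"   # add capacity before users see slow responses
        if cpu_utilization < cpu_low and response_time_ms < latency_slo_ms / 2:
            return "scale_in"    # release idle capacity to cut cost
        return "hold"

    if __name__ == "__main__":
        print(scaling_decision(cpu_utilization=0.92, response_time_ms=650))  # scale_out
        print(scaling_decision(cpu_utilization=0.20, response_time_ms=120))  # scale_in

A real rule set would also add cooldown periods and hysteresis so the system does not flap between scaling out and scaling in on every metric sample.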

Scale up and scale out

Applications are differentiated by their resource mixes. Some are heavy on storage while others are heavy on compute, and different tiers of an application require different scaling. Certain applications can only benefit from scale up, and re-engineering them for scale out may be difficult. Applications running on SoftLayer Cloud Services can be scaled up easily in whatever increments best suit demand. SoftLayer also offers high-performance computing with Tesla GPUs for raw parallel processing power.

If the application is built for scale out, you can add new servers within a few minutes and make them part of the cluster, so you pay on demand only for the resources you allocate to handle traffic spikes. Flex Image from SoftLayer lets you not only clone to multiple instances but also scale up from virtual servers to dedicated servers, expanding the options for how you mix and match your server deployments in a distributed hybrid architecture in the cloud.
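As a rough illustration of paying only for the spike, the sketch below estimates how many servers a traffic peak requires and adds just the difference. The per-server throughput, headroom factor and traffic figures are assumptions for the example, not SoftLayer numbers.

    import math

    # Rough scale-out sizing sketch; all numbers are illustrative assumptions.

    def servers_needed(requests_per_second, capacity_per_server_rps, headroom=0.2):
        """Servers required to absorb the load with a safety headroom."""
        effective = requests_per_second * (1 + headroom)
        return math.ceil(effective / capacity_per_server_rps)

    current_servers = 4
    spike_rps = 2600          # observed or predicted peak traffic
    per_server_rps = 400      # measured throughput of one server

    target = servers_needed(spike_rps, per_server_rps)
    print(f"add {max(target - current_servers, 0)} servers")   # prints: add 4 servers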

Scale in with workload optimized systems

At the IBM Pulse 2014 conference, IBM announced that SoftLayer is integrating IBM Power Systems into its cloud infrastructure to provide adaptable cloud environments that can handle the next level of big data. Workload-optimized IBM Power Systems use densely configured servers. This scale-in approach takes advantage of the performance and efficiencies inherent in Power Systems and allows you to execute dynamic, unexpected workloads with linear performance gains while making the most efficient use of existing server capacity.

System z products help enterprises integrate data with analytics, enable cloud delivery and accommodate high densities of very small workloads with resource guarantees. Marist College uses System z as a cloud platform, hosting a multi-faceted, multi-service, heterogeneous hybrid cloud environment. Scale in provides the best of scale up and scale out by letting you run multiple operating system instances and applications in a single box, with resources shared among them dynamically.

The New York Municipal Shared Services Cloud unifies hardware and software in one location on an IBM mainframe so that municipalities share resources in the cloud. The goal is to consolidate services while the municipalities continue to operate separately under reduced budgets, cutting inefficiencies and eliminating waste.

Summing up

Scaling in with Power Systems or System z is beneficial for certain classes of applications. If your application requires fast, single-threaded performance, you have no choice but to get dedicated servers with the fastest CPUs, which may be supplemented with GPUs. With larger resources available on a single machine and applications that can use multiple cores, scale up is the way to go if your resource demands can be met within a single dedicated server, though it may require downtime. If supported, you could take advantage of hot-add RAM and hot-plug CPU functionality so you do not need to shut down your virtual machine or application.

If your application allows distribution, then instead of a single large system you would choose multiple bare metal or virtual servers that provide an optimal cost/performance ratio. If you want to upgrade your application without any downtime, you will need multiple servers so that you can do a rolling upgrade (sketched below). If you have customers in multiple geographies, you will want to bring up servers in multiple data centers closer to those customers. Whatever scaling you choose, you will eventually hit a wall and will need to re-engineer your application and environment to accommodate the complexity.
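Here is a minimal sketch of the rolling-upgrade idea, assuming a load-balanced pool of servers. The helper functions are hypothetical stand-ins for your load balancer and deployment tooling; replace them with real calls in your environment.

    import time

    # Hypothetical placeholders for load balancer and deploy tooling (assumptions).
    def remove_from_load_balancer(host): print(f"draining {host}")
    def add_to_load_balancer(host):      print(f"re-enabling {host}")
    def deploy(host, version):           print(f"deploying {version} to {host}")
    def health_check(host):              return True   # assume success for the sketch

    def rolling_upgrade(servers, new_version, settle_seconds=30):
        """Upgrade one server at a time so the rest keep serving traffic."""
        for host in servers:
            remove_from_load_balancer(host)   # stop routing traffic to this host
            deploy(host, new_version)         # upgrade while the others handle the load
            if not health_check(host):
                raise RuntimeError(f"upgrade failed on {host}; rest of pool still serving")
            add_to_load_balancer(host)        # return the host to rotation
            time.sleep(settle_seconds)        # let metrics settle before the next host

    if __name__ == "__main__":
        rolling_upgrade(["app-01", "app-02", "app-03"], "v2.4.1", settle_seconds=0)

The key property is that at every step the remaining servers carry the traffic, which is exactly why a single large system cannot give you zero-downtime upgrades.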

Share your thoughts in the comments below or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about how you accomplish auto scaling in the cloud.
