The cloud infrastructure market is crowded. Hundreds of providers compete, each with seemingly similar cloud offerings, and potential customers face the daunting challenge of distinguishing one from another. That assessment of the space usually raises a simple question: “Why SoftLayer?” Over the past few months, hundreds of analysts, journalists, IBMers and IBM customers have asked me that question, so I’m sure the Thoughts on Cloud audience is interested as well.
SoftLayer’s fundamental differentiator can be summed up in a single word: Performance.
Unfortunately, “performance” has become a loaded term in the cloud industry. Hundreds of providers tout processing power and network speed numbers as proof of how well a platform performs, but we all know that a high-performance car is more than just a powerful engine. Let’s unpack what “performance” means for SoftLayer.
Performance starts with the platform
We built SoftLayer, from the ground up, to allow total automation of everything we offer. To accomplish that goal, we standardized every process and every piece of hardware in each of our data center pods—from the servers and network gear to the colors of the network cables and the server rack layout. To connect the data center pods to each other and to the outside world, we implemented a triple-network architecture that separates public, out-of-band, and internal traffic, and our development team built the Infrastructure Management System (IMS) to be the brains behind the entire platform.
As a single system to automate the management and control of every aspect of our data centers, IMS lets us deploy bare metal servers in very much the same way we (and the rest of the industry) deliver public cloud instances. Because IMS treats physical servers and virtual cloud instances as equal and relatable devices, both types of cloud environments coexist on the same management plane and on the same network.
Additionally, IMS was developed with a robust API for accessing all of its capabilities. We used the API to make a web-based customer portal and mobile apps for customers to directly control their infrastructure, and we provide API access to our customers so they can integrate similar functionality directly into their own systems.
While hardware standardization and internal systems development might not seem very exotic, they are fundamental to our operational, server and network performance.
With the inherent benefits of our platform, our customers can manage their infrastructure at will, scale on the fly, and pay for only as much as they need. Getting a bare metal server from other providers can take days, weeks, or even months, but an organization that has an immediate need to add infrastructure for a business-critical application doesn’t have that kind of time. On top of that, with the API, customers can automate their own operations.
A perfect example of how customers can take advantage of API automation is the way one of our gaming customers completely automated the adjustment and management of its social game infrastructure with our API. When a given game’s server infrastructure reaches a certain utilization threshold (monitored via the API), a new bare metal server with a specific configuration is ordered, provisioned, and configured behind the game’s load balancer (also via the API), and the additional capacity is made available to the game when everything is complete. Without anyone picking up a phone, manually placing an order, or venturing out onto the data center floor, the game scales horizontally on high-powered bare metal resources.
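The scaling logic described above can be sketched in a few lines. This is a minimal illustration, not actual SoftLayer API code: the functions passed in (`get_utilization`, `order_server`, `add_to_pool`) are hypothetical stand-ins for the monitoring, ordering, and load-balancer calls a customer would implement against the API.

```python
# Hedged sketch of an API-driven autoscaling loop. The callables here are
# illustrative placeholders, not real SoftLayer API methods.

SCALE_UP_THRESHOLD = 0.80  # e.g. scale when average fleet utilization tops 80%


def autoscale(fleet, get_utilization, order_server, add_to_pool):
    """If average utilization across the fleet exceeds the threshold,
    order one new server and register it behind the load balancer."""
    avg = sum(get_utilization(server) for server in fleet) / len(fleet)
    if avg > SCALE_UP_THRESHOLD:
        new_server = order_server()  # provision a bare metal server via the API
        add_to_pool(new_server)      # attach it behind the game's load balancer
        fleet.append(new_server)
    return fleet
```

A scheduler would run a check like this periodically; the same shape works in reverse for scaling down when utilization falls.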
Bare metal servers have a higher performance profile than virtual servers—you get all the resources of the dedicated server and none of the overhead of a hypervisor. But bare metal servers aren’t necessarily the best choice for all use cases, and that’s why we designed the SoftLayer platform to dial into the right level of power, scalability, and cost with a blend of bare metal and virtual resources.
When bare metal physical servers and virtual cloud instances are treated as equal and relatable devices, workloads can leverage the resources that best suit their needs. A customer with a big data application might choose to provision Web servers on public cloud instances while storing and accessing data on bare metal servers with solid-state drives (SSDs). While it’s possible to run both pieces of the application in a public cloud environment, results of our Riak Performance Analysis showed big data performs much better on dedicated hardware.
Also, with bare metal and virtual servers hosted in the same environment, customers aren’t married to one type of infrastructure. To break down the barriers between physical and virtual, we’ve even developed Flex Images technology to streamline the process of moving between the two.
Like I said earlier, we don’t think performance boils down to speeds and feeds. If it did, I’d tell you we have more than 2,000 Gbps of network capacity powered by transit and peering relationships with bandwidth providers around the world. I’d mention our sixteen network points of presence, which let users who aren’t geographically close to a SoftLayer data center connect to our network quickly. By minimizing the number of hops (and ISPs) between users and content in one of our data centers, we decrease latency significantly and gain more control to optimize network paths.
But network performance is about more than that. Above, I mentioned our triple-network architecture—every SoftLayer server is connected to the public network, our private network and our out-of-band management network. Being able to segment traffic across those three distinct physical networks creates a whole new world of possibilities for our customers. End users can access content from a server over the public network while the server transfers data across the private network to another server in a different data center. All the while, our customer can be administering the server via VPN over the management network, and none of the traffic has to fight with the others for bandwidth. And all of the traffic across the private network and management network is free and unmetered.
Put it all together and…
At the end of the day, truly great performance is defined by a customer’s experience. All of these differentiators look good on paper, but let’s look at one specific example of their value in the real world.
In August, we announced that the Open Source Robotics Foundation (OSRF) chose SoftLayer to host the DARPA Virtual Robotics Challenge (VRC). The VRC is a competition in which teams from around the world develop software to enable a simulated robot to execute tasks similar to what emergency personnel might face in a disaster response situation. OSRF designed a specialized environment in which each team controlled its own five-server constellation, separate from the other teams. For that environment, OSRF needed powerful servers, hyper-fast communication between the servers, and on-demand provisioning and management. In other words, they needed server performance, network performance, and operational performance. They knew where to find it.