April 22, 2021 By Adam Shaw
Bradley Knapp
3 min read

What role do bare metal cloud servers play in the world of cloud infrastructure?

Over the past several years, the trend of migrating from on-premises bare metal servers (often virtualized with solutions like VMware) to cloud-based virtual instances has been impossible to miss. While that migration is driven by many factors (e.g., cost and speed of deployment, infrastructure management tooling, infrastructure resiliency through diversified platform providers), dedicated bare metal servers are rarely part of the conversation.

Cloud infrastructure, including bare metal servers, offers numerous benefits (e.g., hardware-specific applications, highly secure deployment, dedicated resources with low overhead), but the one we'd like to focus on here is the impressive storage performance you can achieve with bare metal servers.

Options in the cloud space

When it comes to direct-attached storage, most customers are left with three options:

  • SATA drives, which come in both platter (HDD) and solid-state (SSD) varieties
  • SAS drives, which also come in both HDD and SSD
  • NVMe drives, which are exclusively SSD

For this blog post, we will focus only on SSD performance because HDD, though superior in cost and capacity, is so markedly slow in data transfer rates that it rarely hits the bottlenecks imposed by the data bus (SATA-150 being an obvious exception).

SATA drives

SATA drives, usually categorized as commodity hardware, are still largely the standard for desktop computers and SMB NAS boxes. These drives are relatively slow (the SATA III bus is limited to 600 MB/s) compared to the other options, but they benefit from being cheap and hot-swappable. In addition, when paired with a dedicated RAID controller, performance can be improved by using RAID striping to spread reads and writes across multiple drives. That said, per-drive throughput is still bottlenecked by the slow transfer speed of the SATA bus.
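To see why striping helps even though each SATA port is capped, here is a minimal sketch of the arithmetic. The drive speeds and counts are hypothetical, chosen only for illustration; real arrays are also subject to controller and PCIe-link limits not modeled here.

```python
# Estimate aggregate sequential throughput of a RAID 0 (striped) array of
# SATA SSDs. Each drive sits on its own SATA III port, so the 600 MB/s
# ceiling applies per drive, not to the array as a whole.

SATA_III_LIMIT_MB_S = 600  # per-port ceiling for SATA III

def raid0_throughput(drive_count: int, drive_speed_mb_s: float) -> float:
    """Aggregate sequential throughput: each drive is capped by its own port."""
    per_drive = min(drive_speed_mb_s, SATA_III_LIMIT_MB_S)
    return drive_count * per_drive

# A single SSD rated at 550 MB/s is already near the bus ceiling:
print(raid0_throughput(1, 550))  # 550.0
# Striping four such drives scales the aggregate, per-drive cap unchanged:
print(raid0_throughput(4, 550))  # 2200.0
```

The takeaway matches the paragraph above: striping raises aggregate throughput, but no single drive can ever exceed its 600 MB/s port.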

SAS drives

SAS drives are the traditional data center workhorse. One of their biggest benefits is built-in error checking. They are also hot-swappable and offer similar RAID options to SATA when paired with a dedicated RAID controller. SAS also offers higher transfer speeds for applications that can take advantage of sequential (or simultaneous) reads and writes.
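The per-link speed advantage of SAS over SATA falls out of the line rates directly. Both SATA III (6 Gb/s) and SAS-3 (12 Gb/s) use 8b/10b line coding, so ten bits travel on the wire for every data byte; a quick sketch of that conversion:

```python
def effective_mb_s(line_rate_gbit_s: float) -> float:
    """Effective throughput after 8b/10b encoding: 10 wire bits per data byte."""
    return line_rate_gbit_s * 1000 / 10  # Gb/s -> MB/s

print(effective_mb_s(6))   # SATA III: 600.0 MB/s
print(effective_mb_s(12))  # SAS-3:   1200.0 MB/s
```

So a single SAS-3 link carries roughly twice what a SATA III port can, before considering SAS features such as dual porting.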

NVMe drives

NVMe is the newest standard, first formalized in 2011. While NVMe drives traditionally cannot be hot-swapped (some NVMe-oF enclosures are an exception, but these are not direct-attached), the throughput and latency NVMe offers are truly remarkable. A four-lane PCIe 4.0 NVMe drive can exceed 5 GB/s for sequential writes, and a four-lane PCIe 6.0 drive is theoretically capable of more than 30 GB/s. Even where SAS can approach comparable transfer speeds, NVMe bypasses the SCSI protocol entirely, which reduces latency.
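Those headline numbers follow from PCIe lane arithmetic. A sketch of the theoretical four-lane bandwidth, using the published per-lane transfer rates and an approximate encoding efficiency (PCIe 4.0 uses 128b/130b; the PCIe 6.0 figure below treats FLIT overhead as a rough 4%, which is an approximation rather than a spec value):

```python
def pcie_x4_gb_s(gt_per_s: float, encoding_efficiency: float) -> float:
    """Theoretical 4-lane PCIe bandwidth in GB/s (1 GB = 10**9 bytes)."""
    return 4 * gt_per_s * encoding_efficiency / 8  # 8 bits per byte

print(round(pcie_x4_gb_s(16, 128 / 130), 2))  # PCIe 4.0 x4: ~7.88 GB/s
print(round(pcie_x4_gb_s(64, 0.96), 2))       # PCIe 6.0 x4: ~30.72 GB/s
```

A drive sustaining 5 GB/s of sequential writes therefore fits comfortably inside a PCIe 4.0 x4 link, and the ~30 GB/s figure for PCIe 6.0 x4 is a bus ceiling, not a shipping drive's rating.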

In the enterprise world, there are many reasons for needing the highest possible IOPS with the lowest latency from your storage. Clustered databases, dynamic content streaming and other high-intensity applications that serve thousands of clients can quickly eat through the throughput of a SATA-based drive or drive array and create application slowdowns because of the limited data transfer speeds.

As an example of a database that can take advantage of this kind of extreme speed, SAP HANA has NVMe-based certified configurations specifically focused on improving the performance of high-intensity SAP workloads.

NVMe and CPU

While NVMe drives are extremely fast and feature extremely low latency, they also can become bottlenecked. Unlike SAS drives, which communicate over a dedicated SCSI controller, PCIe drives do not have an intermediary controller between the drive and the CPU.

Adding numerous NVMe drives to a server can result in the CPU (via the PCI bus) becoming the bottleneck of the system, especially if the storage load on the server is high, which is to be expected if the customer is building out a chassis with multiple NVMe drives.
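One concrete way to reason about this is the CPU's PCIe lane budget: every x4 NVMe drive consumes four lanes that also have to be shared with NICs, HBAs and other peripherals. The lane counts below are illustrative and not tied to any specific CPU model; real systems may also use PCIe switches to oversubscribe lanes deliberately.

```python
# Hypothetical lane-budget check for an NVMe-heavy chassis.

def lanes_needed(nvme_drives: int, lanes_per_drive: int = 4) -> int:
    """Total PCIe lanes the drives alone consume."""
    return nvme_drives * lanes_per_drive

def oversubscribed(nvme_drives: int, cpu_lanes: int, reserved: int = 16) -> bool:
    """reserved: lanes kept back for NICs, HBAs and other peripherals."""
    return lanes_needed(nvme_drives) > cpu_lanes - reserved

# A single-socket CPU with 64 usable lanes, 16 reserved for other devices:
print(oversubscribed(8, 64))   # False -- 32 drive lanes fit in the 48 left over
print(oversubscribed(16, 64))  # True  -- 64 drive lanes exceed the 48 available
```

When the check comes back `True`, the remedies are exactly those described below: a CPU (or second socket) with more lanes, fewer NVMe drives, or accepting shared bandwidth through a switch.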

For customers experiencing these issues, the solution is usually to switch to a more powerful CPU or add additional CPUs via multi-CPU boards. Alternatively, you can reduce the NVMe drive count based on your application’s needs (supplementing lost storage with SAS disks) or reduce the CPU load from extra running processes.

This tuning can be quite involved, which is why SAP-certified configurations are so important. Knowing not only that the application will work, but work well, is an important factor for any customer running their applications on the newest and best technologies.


NVMe is a fantastic option for high-performance I/O applications, if not the only viable one, but NVMe arrays need to be properly paired and tuned with the processor to get optimal performance.

For customers looking to implement new bare metal chassis with NVMe drives and drive arrays, IBM Cloud sales engineers are trained to evaluate and tune these arrays to meet our customers' enterprise performance demands.

Our IBM Cloud sales team is available 24/7 via chat to answer any questions. To start building your high-performance servers today, get started by configuring your server.
