March 21, 2019 By Alex Hudak 6 min read

Everything you need to know about GPUs

Gone are the days when it took companies weeks or months to run compute-intensive workloads. Now, companies can get data-driven results within hours or minutes, without worrying about managing ever-changing technology on complex IT infrastructure. Graphics processing units (GPUs) are becoming more prevalent as accelerated computing rapidly changes the modern enterprise. They support resource-demanding new applications and provide new capabilities for gaining valuable insight from customer data.

But, what is a GPU and why is it so critical for companies to start thinking about adopting them for their workloads?

In this video, I will cover the basics of GPUs, including the differences between a GPU and a CPU, the top GPU applications (including industry examples), and why it’s beneficial to use GPUs on cloud infrastructure.

Video Transcript

What is a GPU?

Hi, my name is Alex Hudak. I’m an Offering Manager at IBM, and today I’m going to talk to you about: What is a GPU?

Questions about GPUs

So, I get some pretty basic questions on GPUs, and that’s what I’m going to go over today. The first question is, what is a GPU? What is the difference between a GPU and a CPU—so I’m going to represent those here.

And then, lastly, why use a GPU? And, is it even important to use a GPU on cloud?

Definitions of GPUs and CPUs

So, let’s start first with, what is a GPU? GPU stands for graphics processing unit. But, oftentimes people are more familiar with CPUs.

So, CPUs are actually made up of just a few cores. You can think of these cores as the CPU’s capacity to perform certain calculations or computations.

On the other hand, though, GPUs are made up of hundreds of cores. But, what difference does it make? 

So, the thing with the CPU is that when it does a computation, it does so serially: one computation at a time. But the GPU does its computations in parallel. The importance of this difference is that with a GPU, you’re able to do many computations all at once, and very intense computations at that.
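The serial-versus-parallel contrast above can be sketched in plain Python. This is only an analogy using CPU threads (a real GPU runs thousands of hardware threads at once), and the function names here are invented for illustration:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # A stand-in for one core's worth of computation.
    return math.sqrt(x) * math.sin(x)

values = list(range(8))

# CPU-style: serial execution, one computation after another.
serial = [work(v) for v in values]

# GPU-style: the same computations issued all at once.
# (Threads here only illustrate the programming model; a GPU
# would run each computation on its own core.)
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(work, values))

assert serial == parallel  # same results, different execution model
```

The results are identical either way; what changes is how long you wait when the list of computations grows from eight to millions.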

Distribution of workloads between GPUs and CPUs

So, oftentimes when you have application code, most of it is going to run on the CPU.

But then every now and then, you’re going to have an application that requires quite a bit of compute-intensive support that the CPU just can’t handle, so it’s going to be offloaded to the GPU. So, you can think of a GPU as that extra muscle or extra brainpower that the CPU just can’t provide on its own.
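That offload pattern can be roughed out as a simple dispatcher. All the function and field names below are hypothetical, made up purely to illustrate the idea:

```python
def run_on_cpu(task):
    # Ordinary application code stays on the CPU.
    return f"cpu:{task['name']}"

def run_on_gpu(task):
    # Compute-intensive work is offloaded to the accelerator.
    return f"gpu:{task['name']}"

def dispatch(task):
    # Route the heavy pieces to the GPU, everything else to the CPU.
    if task["compute_intensive"]:
        return run_on_gpu(task)
    return run_on_cpu(task)

results = [dispatch(t) for t in [
    {"name": "render_frame", "compute_intensive": True},
    {"name": "update_ui",    "compute_intensive": False},
]]
# → ["gpu:render_frame", "cpu:update_ui"]
```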

Use cases for GPUs

So, there are two main providers of GPUs in industry: NVIDIA and AMD. Both providers manufacture GPUs that are optimized for certain use cases. So, let’s jump into that, because a big question I get is: Why do I even need a GPU? In what industries and in what use cases? 

Virtual desktop infrastructure (VDI)

So the first we’ll talk about is VDI. 

VDI stands for virtual desktop infrastructure. So, GPUs are created to support graphics-intensive applications. Think about if you’re a construction worker, right? And you’re out in the field, and you need to access a very graphics-intensive 3D CAD program. So, rather than having the server right next to you or right in the field with you, you can have a server that’s a country away in a cloud data center and be able to view that 3D graphic as if that server were right with you. And that’s going to be supported by the GPU because the GPU supports graphics-intensive applications. Another example of this would be movie animation or rendering.

So, in fact, GPUs actually first got their name mainly with the gaming industry. Oftentimes they were referred to as “gaming processing units” because of this ability to provide end users with low-latency graphics. 

Artificial intelligence (AI)

But, gaming is no longer the sole focus in industry. It’s a big piece of it, but now financial services, life sciences, and even healthcare are starting to get into it with artificial intelligence. So, artificial intelligence has two big pieces to it: there’s machine learning and there’s deep learning.

So now there are also GPUs that are optimized and created specifically for those applications. There are some that are created for inferencing for machine learning purposes, and there are some that are created to help data scientists create and train neural networks. In other words, they’re trying to create these algorithms that can think like a human brain. That’s something that a CPU simply cannot do on its own; it requires GPU capabilities.
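To see why GPUs shine here: a neural network is, at its core, an enormous number of small multiply-add operations like the single neuron sketched below, and a GPU can run huge batches of them in parallel. This is a minimal illustration with made-up weights, not any particular framework’s API:

```python
import math

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum followed by a nonlinearity.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# "Inference" means applying already-trained weights to new data;
# "training" means repeatedly adjusting the weights themselves.
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)  # → ~0.599
```

A real network stacks thousands of these neurons per layer, which is exactly the kind of uniform, parallel arithmetic a GPU’s hundreds of cores are built for.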

High performance computing (HPC)

And then, lastly, let’s talk about HPC. HPC is a buzzword that’s been going around; it stands for high performance computing. While a GPU is not absolutely necessary for HPC, it’s an important part of it.

So, high performance computing is a company’s ability to spread out its compute-intensive workloads amongst multiple compute nodes (or, in the case of cloud, servers). Oftentimes, though, these applications are very compute-intensive (it could include rendering, it could include AI), and that’s where a GPU comes in. You can add GPUs to the servers that make up an HPC cluster and put them to work on those workloads.
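The “spreading out” step can be sketched as a tiny scheduler that deals work out across nodes. The helper name is hypothetical, invented for illustration:

```python
def split_work(items, n_nodes):
    # Deal the workload out round-robin across compute nodes
    # (on cloud: across servers, each of which may carry a GPU).
    chunks = [[] for _ in range(n_nodes)]
    for i, item in enumerate(items):
        chunks[i % n_nodes].append(item)
    return chunks

# 10 work items spread across 3 nodes:
split_work(list(range(10)), 3)
# → [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

Each node then churns through its own chunk in parallel with the others, which is where per-node GPUs multiply the benefit.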

GPUs on cloud

So, this is a nice little segue into why we should use GPUs on cloud. If HPC is such a big piece of that, what else is important about cloud?

High performance

So, the first part of that is that you get high performance, and you need the cloud for that.

The GPUs are great . . . but not on their own. 

So, back in the day (and even still today), there are companies that use a lot of on-prem infrastructure and utilize it for their compute-intensive applications. However, especially in the case of GPUs, the technology is ever-changing. In fact, there’s typically a new GPU coming out almost every single year. So, it’s actually very expensive and nearly impractical for companies to keep up with the latest technology at this point. Cloud providers, on the other hand, have the ability to continually update their technology and provide GPUs to companies to utilize when they need them.

Bare metal vs. virtual servers

So, on a more granular basis though, cloud technology can often be broken down from an infrastructure perspective between bare metal and virtual servers. So, let’s talk about the differences. 

There are advantages to using a GPU on both types of infrastructure. If a company utilizes a GPU on bare metal infrastructure, it oftentimes has access to the entire server itself and can customize the configuration. So, this is great for companies that are going to be utilizing that server and that GPU-intensive application on a pretty consistent basis.

But, for companies that need a GPU maybe just on a burst-workload scenario, the virtual server option might be even better. And the nice thing about virtual is that there are often different pricing models as well, including hourly. 

You only pay for what you use

And the cool thing about cloud is that you only pay for what you use. 

So, if a company is using on-prem technology or infrastructure but they’re not utilizing it at the time, that technology is depreciating, and it’s essentially a waste of money for that company. 

When they offload to the cloud, they only pay for what they’re using. And, so, it just makes a lot more sense from a cost perspective; and then, because the GPU is so great at performance, it makes sense from a performance perspective as well.
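The cost argument is easy to see with hypothetical numbers. The hourly rate below is invented purely for illustration; it is not a real price from any provider:

```python
hourly_rate = 2.50          # USD per GPU-hour (hypothetical rate)
hours_in_month = 24 * 30    # 720 hours

# Burst workload: the GPU is only needed 40 hours this month.
burst_cost = hourly_rate * 40                   # pay-as-you-go: 100.0 USD
always_on_cost = hourly_rate * hours_in_month   # paying for idle time: 1800.0 USD
```

Under these assumed numbers, paying only for the hours used costs a small fraction of keeping equivalent capacity powered (or depreciating on-prem) around the clock.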

So, companies are able to focus way more on output than they are on keeping up with the latest technology.

Wrapping up

So, in summary, what we covered is: 

  • What is a GPU (graphics processing unit)? 

  • What are the differences between a GPU and a CPU? 

  • The use cases for GPUs: VDI, AI, and HPC. 

  • Why is it even important for GPUs to be used on cloud? 
