March 21, 2019 By Alex Hudak 6 min read

Everything you need to know about GPUs

Gone are the days when it took companies weeks or months to run compute-intensive workloads. Now, companies can receive data-driven results within hours or minutes without worrying about managing ever-changing technology on complex IT infrastructure. Graphics processing units (GPUs) are becoming more prevalent as accelerated computing rapidly changes the modern enterprise. They support new, resource-demanding applications and provide new capabilities for gaining valuable insight from customer data.

But what is a GPU, and why is it so critical for companies to start thinking about adopting GPUs for their workloads?

In this video, I will cover the basics of GPUs, including the differences between a GPU and a CPU, the top GPU applications (including industry examples), and why it’s beneficial to use GPUs on cloud infrastructure.

Video Transcript

What is a GPU?

Hi, my name is Alex Hudak, I’m an Offering Manager at IBM, and today I’m going to talk to you about: What is a GPU? 

Questions about GPUs

So, I get some pretty basic questions on GPUs, and that’s what I’m going to go over today. First question is, what is a GPU? What is the difference between a GPU and CPU—so I’m going to represent those here. 

And then, lastly, why use a GPU? And, is it even important to use a GPU on cloud?

Definitions of GPUs and CPUs

So, let’s start first with, what is a GPU? GPU stands for graphics processing unit. But, oftentimes people are more familiar with CPUs. 

So, CPUs are actually made up of just a few cores. You can think of these cores as the power, the ability of a CPU to do certain calculations or computations. 

On the other hand, though, GPUs are made up of hundreds of cores. But, what difference does it make? 

So, the thing with the CPU is that when it does a computation, it does so in serial form: one computation at a time. But the GPU does its computations in parallel. The importance of this difference is that with a GPU, you’re able to run many computations all at once, and very intense computations at that. 
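To make that serial-versus-parallel difference concrete, here is a minimal Python sketch. It assumes an NVIDIA GPU and the CuPy library are available (neither is named in the video); NumPy stands in for the ordinary CPU-side work.

```python
import numpy as np

# Serial-style CPU work: one result at a time in a Python loop.
values = np.random.rand(1_000_000)
cpu_total = 0.0
for v in values:                 # each iteration runs one after another
    cpu_total += v * v

# Parallel GPU work: the same sum of squares, but every element is
# squared at once across the GPU's many cores (assumes CuPy + NVIDIA GPU).
import cupy as cp
gpu_values = cp.asarray(values)                    # copy the data to GPU memory
gpu_total = float(cp.sum(gpu_values * gpu_values)) # runs in parallel on the GPU

print(cpu_total, gpu_total)
```

On a large array, the GPU version squares every element at once across its hundreds of cores, while the CPU loop works through them one by one.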

Distribution of workloads between GPUs and CPUs

So, oftentimes when you have application code, a lot of it is going to run on the CPU. 

But then, every now and then, you’re going to have an application that requires quite a bit of compute-intensive support that the CPU just can’t handle, so that work is offloaded to the GPU. So, you can think of a GPU as the extra muscle, the extra brain power, that the CPU just can’t provide on its own. 
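That offloading pattern often looks something like the sketch below: the bulk of the application logic stays on the CPU, and only the heavy, parallel-friendly step is handed to the GPU when one is present. CuPy and the helper name heavy_step are illustrative assumptions, not anything prescribed in the video.

```python
import numpy as np

try:
    import cupy as cp
    GPU_AVAILABLE = cp.cuda.runtime.getDeviceCount() > 0
except Exception:
    GPU_AVAILABLE = False

def heavy_step(matrix: np.ndarray) -> np.ndarray:
    """The compute-intensive piece that benefits from GPU parallelism."""
    if GPU_AVAILABLE:
        gpu_matrix = cp.asarray(matrix)        # offload the data to the GPU
        result = gpu_matrix @ gpu_matrix.T     # runs in parallel on GPU cores
        return cp.asnumpy(result)              # bring the result back to the CPU
    return matrix @ matrix.T                   # serial CPU fallback

# Everything else (I/O, bookkeeping, orchestration) stays on the CPU.
data = np.random.rand(2048, 2048)
print(heavy_step(data).shape)
```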

Use cases for GPUs

So, there are two main providers of GPUs in industry: NVIDIA and AMD. Both providers manufacture GPUs that are optimized for certain use cases. So, let’s jump into that, because a big question I get is: Why do I even need a GPU? In what industries and in what use cases? 

Virtual desktop infrastructure (VDI)

So, the first one we’ll talk about is VDI. 

VDI stands for virtual desktop infrastructure. So, GPUs are created to support highly graphics-intensive applications. Think about if you’re a construction worker, right? You’re out in the field, and you need to access a very graphics-intensive 3D CAD program. So, rather than having the server right next to you in the field, you can have a server that’s a country away in a cloud data center and still view that 3D graphic as if the server were right with you. And that’s supported by the GPU, because the GPU handles graphics-intensive applications. Another example of this would be movie animation or rendering. 

So, in fact, GPUs actually first made their name in the gaming industry. Oftentimes they were informally referred to as “gaming processing units” because of this ability to provide end users with low-latency graphics. 

Artificial intelligence (AI)

But gaming is no longer the sole focus in industry. It’s still a big piece of it, but now financial services, life sciences, and even healthcare are getting into it with artificial intelligence. So, artificial intelligence has two big pieces to it: there’s machine learning and there’s deep learning. 

So now there are also GPUs that are optimized and created specifically for those applications. There are some that are created for inferencing for machine learning purposes, and there are some that are created to help data scientists build and train neural networks. In other words, they’re trying to create algorithms that can think like a human brain. That’s something a CPU simply cannot do on its own, and it requires GPU capabilities. 
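As a rough illustration of what training on a GPU looks like in code, here is a minimal sketch using PyTorch (my assumption; the video doesn’t name a framework). The toy model and data are placeholders; the key point is that moving the model and tensors to the cuda device is what puts the GPU’s cores to work during training.

```python
import torch
import torch.nn as nn

# Use the GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy model and toy data, purely for illustration.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
inputs = torch.randn(256, 64, device=device)
targets = torch.randn(256, 1, device=device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)   # forward pass runs on the GPU
    loss.backward()                          # so does backpropagation
    optimizer.step()
```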

High performance computing (HPC)

And then, lastly, let’s talk about HPC. HPC is a buzzword that’s been going around; it stands for high performance computing. While a GPU is not absolutely necessary for HPC, it’s an important part of it. 

So, high performance computing is a company’s ability to spread out its compute-intensive workloads amongst multiple compute nodes (or, in the case of cloud, servers). Oftentimes, though, these applications are very compute-intensive; they could include rendering, they could include AI, and that’s where a GPU comes in. You can add GPUs to the servers that make up an HPC cluster and use them to accelerate those workloads. 
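To picture how that looks in practice, here is a hypothetical sketch that scatters a compute-intensive job across the nodes of a cluster with MPI and lets each node accelerate its chunk on a GPU. mpi4py and CuPy are my assumptions; the video doesn’t prescribe any particular tooling.

```python
# Hypothetical HPC sketch: split a compute-intensive job across MPI ranks,
# with each rank using its node's GPU when one is present.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Rank 0 splits the workload into one chunk per node.
chunks = None
if rank == 0:
    data = np.random.rand(size * 4096, 256)
    chunks = np.array_split(data, size)
chunk = comm.scatter(chunks, root=0)

try:
    import cupy as cp                      # use the node's GPU if it has one
    local = cp.asnumpy(cp.asarray(chunk) @ cp.asarray(chunk).T)
except Exception:
    local = chunk @ chunk.T                # CPU fallback

results = comm.gather(local.sum(), root=0)
if rank == 0:
    print("combined result:", sum(results))
```

Run under an MPI launcher (for example, mpirun -n 4 python script.py), each rank handles one slice of the data and the results are gathered back on rank 0.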

GPUs on cloud

So, this is a nice little segue into why we should use GPUs on cloud. If HPC is such a big piece of that, what else makes cloud important? 

High performance

So, the first part of that is that you get high performance, and you need the cloud for that.

The GPUs are great... but not on their own. 

So, back in the day (and even still today), there are companies that use a lot of on-prem infrastructure, and they use that infrastructure for their compute-intensive applications. However, especially in the case of GPUs, the technology is ever-changing. In fact, there’s typically a new GPU coming out almost every single year. So, it’s actually very expensive and nearly impractical for companies to keep up with the latest technology on their own. Cloud providers, on the other hand, are able to continually update their technology and provide GPUs for companies to use when they need them.

Bare metal vs. virtual servers

So, on a more granular basis, though, cloud infrastructure can often be broken down into bare metal and virtual servers. So, let’s talk about the differences. 

There are advantages to using a GPU on both types of infrastructure. If you utilize a GPU on bare metal infrastructure, you often have access to the entire server itself, and you can customize the configuration. So, this is great for companies that are going to be really utilizing that server and that GPU-intensive application on a pretty consistent basis. 

But, for companies that need a GPU maybe just for burst workloads, the virtual server option might be even better. And the nice thing about virtual is that there are often different pricing models as well, including hourly. 

You only pay for what you use

And the cool thing about cloud is that you only pay for what you use. 

So, if a company is using on-prem technology or infrastructure but isn’t utilizing it at the time, that technology is depreciating, and it’s essentially a waste of money for that company. 

When they offload to the cloud, they only pay for what they’re using. And so, it just makes a lot more sense from a cost perspective; and then, because the GPU is so great at performance, it makes sense from a performance perspective as well. 

So, companies are able to focus way more on output than they are on keeping up with the latest technology.

Wrapping up

So, in summary, what we covered is: 

  • What is a GPU (graphics processing unit)? 

  • What are the differences between a GPU and a CPU? 

  • The use cases for GPUs: VDI, AI, and HPC. 

  • Why is it even important for GPUs to be used on cloud? 
