March 21, 2019 By Alex Hudak 6 min read

Everything you need to know about GPUs

Gone are the days when it took companies weeks or months to run compute-intensive workloads. Now, companies can receive data-driven results within hours or minutes without worrying about managing ever-changing technology on complex IT infrastructure. Graphics processing units (GPUs) are becoming more prevalent as accelerated computing rapidly changes the modern enterprise. They support new, resource-demanding applications and provide new capabilities for gaining valuable insight from customer data.

But, what is a GPU and why is it so critical for companies to start thinking about adopting them for their workloads?

In this video, I cover the basics of GPUs, including the differences between a GPU and a CPU, the top GPU applications (including industry examples), and why it’s beneficial to use GPUs on cloud infrastructure.


Video Transcript

What is a GPU?

Hi, my name is Alex Hudak, I’m an Offering Manager at IBM, and today I’m going to talk to you about: What is a GPU? 

Questions about GPUs

So, I get some pretty basic questions on GPUs, and that’s what I’m going to go over today. First question is, what is a GPU? What is the difference between a GPU and CPU—so I’m going to represent those here. 

And then, lastly, why use a GPU? And, is it even important to use a GPU on cloud?

Definitions of GPUs and CPUs

So, let’s start first with, what is a GPU? GPU stands for graphics processing unit. But, oftentimes people are more familiar with CPUs. 

So, CPUs are actually made up of just a few cores. You can think of these cores as the power, the ability of a CPU to do certain calculations or computations. 

On the other hand, though, GPUs are made up of hundreds of cores. But, what difference does it make? 

So, the thing with the CPU is that when it does a computation it does so in a serial form. So, it does one computation at a time. But with the GPU, it does it in parallel. So, the importance of these two differences is that with a GPU, you’re able to do computations all at once, and very intense computations at that. 
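To make that serial-versus-parallel idea concrete, here’s a rough Python sketch. It stays entirely on the CPU, using NumPy’s vectorized operations as a stand-in for the “same instruction across many cores at once” style of work a GPU does; the actual speedups on real GPU hardware would look very different.

```python
import time
import numpy as np

data = np.random.rand(1_000_000)

# Serial, CPU-style: handle one element at a time, one after another.
start = time.perf_counter()
serial_result = [x * x for x in data]
serial_time = time.perf_counter() - start

# Parallel-style: apply the same operation to the whole array in one shot,
# analogous to a GPU running one instruction across hundreds of cores.
start = time.perf_counter()
vector_result = data * data
vector_time = time.perf_counter() - start

print(f"serial: {serial_time:.3f}s, vectorized: {vector_time:.3f}s")
```

Both paths compute the same million squares; the difference is purely in how the work is scheduled, which is exactly the CPU/GPU distinction at a much larger scale.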

Distribution of workloads between GPUs and CPUs

So, oftentimes when you run application code, most of it is going to go to the CPU. 

But then every now and then, you’re going to have an application that requires quite a bit of compute-intensive support that the CPU just can’t provide, so it’s going to be offloaded to the GPU. So, you can think of a GPU as that extra muscle or that extra brain power that the CPU doesn’t have on its own. 
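A common way this offload shows up in code is a device-agnostic pattern: run on the GPU when one is available, and fall back to the CPU otherwise. Here’s a hypothetical sketch using the CuPy library as the GPU-side stand-in (CuPy mirrors NumPy’s API); the specific kernel shown is just an illustrative matrix multiplication, not tied to any one vendor.

```python
# Hypothetical offload pattern: use CuPy (a GPU array library) when a GPU
# is present, and fall back to NumPy on the CPU otherwise.
try:
    import cupy as xp          # runs on the GPU if one is available
    on_gpu = True
except ImportError:
    import numpy as xp         # CPU fallback
    on_gpu = False

def heavy_computation(n):
    """A compute-intensive kernel: a large matrix multiplication."""
    a = xp.arange(n * n, dtype=xp.float64).reshape(n, n)
    return a @ a

result = heavy_computation(200)
print(f"ran on {'GPU' if on_gpu else 'CPU'}, shape = {result.shape}")
```

Because CuPy and NumPy share an interface, the application logic stays the same either way; only the compute-intensive part gets handed to whichever processor can do it best.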

Use cases for GPUs

So, there are two main providers of GPUs in industry: NVIDIA and AMD. Both providers manufacture GPUs that are optimized for certain use cases. So, let’s jump into that, because a big question I get is: Why do I even need a GPU? In what industries and in what use cases? 

Virtual desktop infrastructure (VDI)

So the first we’ll talk about is VDI. 

VDI stands for virtual desktop infrastructure. GPUs were created to support graphics-intensive applications. Think about if you’re a construction worker, right? You’re out in the field, and you need to access a very graphics-intensive 3D CAD program. So, rather than having the server right next to you in the field, you can have a server that’s a country away in a cloud data center and still view that 3D graphic as if the server were right with you. And that’s supported by the GPU, because the GPU handles graphics-intensive applications. Another example of this would be movie animation or rendering. 

So, in fact, GPUs first made their name in the gaming industry, thanks to their ability to provide end users with low-latency graphics. 

Artificial intelligence (AI)

But, gaming is no longer the sole focus in industry. It’s still a big piece of it, but now financial services, life sciences, and even healthcare are getting into GPUs through artificial intelligence. So, artificial intelligence has two big pieces to it: there’s machine learning and there’s deep learning. 

So now there are also GPUs that are optimized and created specifically for those applications. There are some created for inferencing in machine learning, and some created to help data scientists build and train neural networks. In other words, they’re trying to create algorithms that can think like a human brain. That’s something a CPU simply can’t do on its own; it requires GPU capabilities. 

High performance computing (HPC)

And then, lastly, let’s talk about HPC. HPC is a buzzword that’s been going around; it stands for high performance computing. While a GPU is not absolutely necessary for HPC, it’s an important part of it. 

So, high performance computing is a company’s ability to spread out its compute-intensive workloads amongst multiple compute nodes (or, in the case of cloud, servers). Oftentimes, though, these applications are very compute-intensive—it could include rendering, it could include AI—and that’s where a GPU comes in. You can add a GPU to each of the servers spread out across an HPC application and utilize them in that manner. 

GPUs on cloud

So, this is a nice little segue into why we should use GPUs on cloud. If HPC is such a big piece of that, what else makes cloud important? 

High performance

So, the first part of that is high performance, and you need the cloud to get it.

GPUs are great . . . but not on their own. 

So, back in the day (and even still today), there are companies that use a lot of on-prem infrastructure and rely on it for their compute-intensive applications. However, especially in the case of GPUs, the technology is ever-changing. In fact, a new GPU typically comes out almost every single year, so it’s become very expensive and often impractical for companies to keep up with the latest technology on their own. Cloud providers, on the other hand, can continually update their technology and provide companies with GPUs to utilize when they need them.

Bare metal vs. virtual servers

So, on a more granular basis though, cloud technology can often be broken down from an infrastructure perspective between bare metal and virtual servers. So, let’s talk about the differences. 

There are advantages to using a GPU on both types of infrastructure. If you utilize a GPU on bare metal infrastructure, companies oftentimes have access to the entire server itself and can customize its configuration. So, this is great for companies that will be utilizing that server and that GPU-intensive application on a pretty consistent basis. 

But, for companies that need a GPU maybe just on a burst-workload scenario, the virtual server option might be even better. And the nice thing about virtual is that there are often different pricing models as well, including hourly. 

You only pay for what you use

And the cool thing about cloud is that you only pay for what you use. 

So, if a company is using on-prem technology or infrastructure but they’re not utilizing it at the time, that technology is depreciating, and it’s essentially a waste of money for that company. 

When they offload to the cloud, they only pay for what they’re using. And, so, it just makes a lot more sense from a cost perspective; and then, because the GPU is so great at performance, it just makes sense for a performance perspective as well. 

So, companies are able to focus way more on output than they are on keeping up with the latest technology.

Wrapping up

So, in summary, what we covered is: 

  • What is a GPU (graphics processing unit)? 

  • What are the differences between a GPU and a CPU? 

  • What are the use cases for GPUs (VDI, AI, and HPC)? 

  • Why is it important to use GPUs on cloud? 

