January 16, 2020 By Bradley Knapp 7 min read

Infrastructure as a Service, or IaaS, is a type of cloud service that provides users an instant computing infrastructure that can be provisioned and managed over the internet.

In this video, I’m going to explain how IaaS delivers fundamental compute, network, and storage resources to consumers on-demand, over the internet, and on a pay-as-you-go basis.

Make sure you subscribe to our YouTube channel if you enjoy this video, because we’ll have lots more coming soon.


Video Transcript 

What is IaaS?

Alright, hi everybody, and welcome back to the channel. My name is Bradley Knapp and I’m one of the Product Managers here at IBM Cloud. What I want to talk with you guys about today is a question that we get fairly commonly when folks are starting their cloud journey and starting to learn about cloud, and that’s: “What is IaaS? I read about cloud, I see this IaaS thing everywhere, so what does it actually mean?”

I = “Infrastructure”

And so IaaS is an acronym, right, and so it’s broken into two parts. The first part—the “I”—that’s infrastructure. 

If you think of cloud as being just some other dude’s computer running somewhere else, that’s the infrastructure part. If it’s not cloud, that infrastructure could be running in a data center somewhere, or in a closet somewhere; your laptop or your desktop is infrastructure, too.

aaS = “as-a-Service” 

And then the “aaS” piece is “as-a-Service,” right? That’s the billing method; that’s the way that you consume it. And there are other kinds of as-a-Service. You’ve got “PaaS”—Platform-as-a-Service—you have “SaaS”—Software-as-a-Service. There’s lots of different kinds of things that you can consume as-a-Service, but very specifically, what we want to talk about is the “I”—it’s the infrastructure.

The three main categories of infrastructure

And so I’ve got this diagram written out over here because infrastructure really falls into three main categories. 

Compute

The first category is going to be compute. That’s where the processors are; that’s where the actual lifting and computing gets done.

Storage

The second piece is storage, which kind of falls into three main buckets (with lots of smaller buckets on top of those) because there are different kinds of storage.

Network

And then the third piece, the piece that ties everything together—that’s our network piece. And so we’re gonna draw this one over here because without network, you can’t do anything. 

Network is how the compute talks to the storage, and it’s how the compute talks to the other compute.

Compute

And so, like I said, we can break this down into different pieces. On the compute side, I’ve got three things called out up here—the first one, I’ve just got it labeled compute—it’s general-purpose compute, right?

This is your normal web server or application server; it can really handle whatever general-purpose computing needs you have.

The second two (or the second and the third, really) are more specific. So GPU is a graphics processor—that’s a very, very high-speed processor that’s used in conjunction with a traditional processor for specific kinds of workloads. This is gonna be your machine learning and your AI. 

And then the third piece—HPC—that’s high-performance computing. So there are specific kinds of workloads that have very specific requirements as far as frequency, which is your clock speed, and the number of cores that are required, where you have to have lots of power packed into a very, very, very small footprint—that’s gonna be your HPC.
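To make the specialty compute piece a bit more concrete, here’s a minimal sketch in Python using PyTorch (a library the video doesn’t mention; the matrix sizes are arbitrary) of how a machine learning workload uses a GPU alongside the traditional processor, falling back to the CPU if no GPU is attached:

```python
import torch

# Use the GPU if one is attached to this compute instance; otherwise fall
# back to the general-purpose CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A stand-in "training" step: a large matrix multiply, which is the kind of
# math that GPUs accelerate for machine learning and AI workloads.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
result = a @ b

print(f"Ran on: {device}, result shape: {tuple(result.shape)}")
```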

Storage

And likewise, on the storage side, you’ve got different kinds of storage because you have different storage needs. 

The most commonly used one is gonna be object storage. Object storage is a little bit lower-performance, but it’s relatively inexpensive and that’s for your general-purpose storage, right?

What goes into object storage? Well you can have things like pictures, you can have documents, you can have—really, whatever you want can go into that object storage.

All of the data and all of the graphics on that web server—that’s all hiding in object storage.
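To give a rough sense of how that works in practice, IBM Cloud Object Storage speaks an S3-compatible API, so a sketch like the following (using the boto3 library; the endpoint, credentials, bucket, and file names are placeholders) shows how a picture goes into object storage and comes back out:

```python
import boto3

# Object storage is accessed over the network with an S3-compatible API.
# The endpoint, credentials, and bucket name below are placeholders.
cos = boto3.client(
    "s3",
    endpoint_url="https://s3.us-south.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Put a picture into a bucket...
with open("cat.jpg", "rb") as f:
    cos.put_object(Bucket="my-pictures", Key="cat.jpg", Body=f)

# ...and read it back later, from anywhere that can reach the endpoint.
obj = cos.get_object(Bucket="my-pictures", Key="cat.jpg")
data = obj["Body"].read()
print(f"Downloaded {len(data)} bytes")
```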

And then the second and the third piece that I’ve got called out here—block and file—these are specific kinds of storage (specific kinds of network storage), and they attach in very specific ways. 

Block storage attaches with iSCSI, file storage attaches with NFS—it’s the way that they mount into the actual compute itself. And there are specific kinds of applications that require block storage or file storage because each of them has its own features and benefits.
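To illustrate the difference in how those attach, here’s a rough sketch; the NFS server, export path, iSCSI portal address, and mount points are placeholders, and in practice these steps are usually run directly from the command line rather than from Python:

```python
import subprocess

# File storage attaches over NFS: the remote export is mounted straight into
# the filesystem. (Server address and paths below are placeholders.)
subprocess.run(
    ["mount", "-t", "nfs", "fileserver.example.com:/export/data", "/mnt/file"],
    check=True,
)

# Block storage attaches over iSCSI: the host discovers and logs in to a
# target, gets a raw block device, and then formats and mounts it like a
# local disk.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", "10.0.0.5:3260"],
    check=True,
)
subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)

# Either way, the compute instance just sees a mounted path it can write to.
with open("/mnt/file/hello.txt", "w") as f:
    f.write("written over NFS\n")
```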

Network

And so to talk about how we pull all of these things together, we need to talk about the network, because network has two main components that matter. And so what I want you to do is I want you to think of your network as a pipe, right? 

And so a network can be a small pipe; that would be like a pipe measured in megabits per second, so you can’t push much data through it.

Or it can be a very large pipe. That very large pipe, that would be measured in gigabits per second. 

And so the more data you need to push simultaneously, the larger pipe you need and the more bandwidth you need.

The second way that we measure network traffic is how much data gets pushed through this pipe over a set period of time. Normally, it’s billed by the month, but it could also be billed by the minute, by the second, or maybe even by the day or by the week.
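As a back-of-the-envelope illustration of why the size of the pipe matters (the numbers here are made up), consider how long it takes to push the same amount of data through a small pipe versus a large one:

```python
# How long does it take to push 1 TB through pipes of different sizes?
# (Illustrative numbers only; real-world throughput is lower than line rate.)
data_tb = 1
data_bits = data_tb * 1e12 * 8          # 1 TB expressed in bits

for name, bits_per_second in [("100 Mbps pipe", 100e6), ("10 Gbps pipe", 10e9)]:
    seconds = data_bits / bits_per_second
    print(f"{name}: {seconds / 3600:.1f} hours")

# 100 Mbps pipe: ~22.2 hours
# 10 Gbps pipe:  ~0.2 hours (about 13 minutes)
```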

Looking at an example: An AI workload

And so to take all of this and tie it together, I want to use an example of something that requires some specialty components, right? We’re going to talk a little bit about an AI workload.

Think about an AI workload where you’re going to do automatic visual recognition of pictures. Let’s say that you have a billion pictures down here in object storage that you are then going to use to train your model that’s running on these GPU servers.

You take that billion pictures—and since a billion is a lot and pictures are very large, you have to push them through a really big pipe—that’s your network pipe—up into the GPU server. 

But the GPU server doesn’t have any storage inherent to it. So that GPU server is actually going to take that data and write it into block storage. And it’s going to write that data back and forth, and back and forth, until the model is done.

Once it’s trained, it’s going to take all of the data that we pushed up here and all of the results, and it’s going to write all of that back down into object storage.

Why into object storage? Because again, it’s less expensive, it’s a good archiving solution.

You’re pushing a ton of data through these pipes while they’re turned on, and then once you’re done, you get rid of them.
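Pulling those pieces together, a rough sketch of that AI workload flow might look like this; the bucket names, prefix, block-storage mount point, and the training step itself are all placeholders:

```python
import os
import boto3

# Same S3-compatible client as the earlier sketch; the endpoint, bucket
# names, and mount point below are all placeholders.
cos = boto3.client("s3", endpoint_url="https://s3.us-south.example.com")

SCRATCH = "/mnt/block"  # fast block storage attached to the GPU server


def train_model(input_dir: str, output_path: str) -> str:
    """Placeholder for the real GPU training loop that reads and writes
    block storage over and over until the model is done."""
    with open(output_path, "wb") as f:
        f.write(b"trained-model-bytes")
    return output_path


# 1. Pull the training pictures out of object storage, through the network
#    pipe, and onto the GPU server's attached block storage.
pages = cos.get_paginator("list_objects_v2").paginate(
    Bucket="training-pictures", Prefix="images/"
)
for page in pages:
    for item in page.get("Contents", []):
        local_path = os.path.join(SCRATCH, os.path.basename(item["Key"]))
        cos.download_file("training-pictures", item["Key"], local_path)

# 2. Train on the GPU, churning against block storage until the model is done.
model_path = train_model(input_dir=SCRATCH, output_path=f"{SCRATCH}/model.bin")

# 3. Write the results back down into inexpensive object storage for archiving,
#    then get rid of the GPU server and its block storage.
cos.upload_file(model_path, "training-results", "model.bin")
```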

“as-a-Service” is the way you consume

The second piece that I want to talk about is the “as-a-Service” piece—this is the way that you consume. And so when we talk about as-a-Service, there are kind of four things that really, really matter in this model. 

Offerings are shared

The first one is that offerings that are consumed as-a-Service are, generally speaking, shared. And so by shared, I mean they’re multi-tenant—many people use the same offering, we just take and carve it up and make it available to multiple different customers simultaneously. So that’s the first piece of as-a-Service.

Hourly/monthly billing

The second piece is the hourly or monthly piece. This is talking about how we bill. In the case of compute, it could be a certain number of cents or a certain number of dollars per hour or per month.

In the case of storage, we would bill based on the amount of data that’s stored in a given month—so that would be cents per gigabyte per month.

In the case of network, there are two different metrics we talked about earlier, right? The size of the pipe—you would pay a per-month charge for that—and then the amount of data that goes through it—again, measured in gigabytes per month and billed in cents per gigabyte per month. So that’s our billing metric.
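To put rough numbers on those billing metrics (the unit rates below are made up for illustration, not real IBM Cloud pricing), a pay-as-you-go monthly bill might be estimated like this:

```python
# Illustrative, made-up unit rates -- not real pricing.
COMPUTE_PER_HOUR = 0.10      # dollars per hour per virtual server
STORAGE_PER_GB_MONTH = 0.02  # dollars per gigabyte stored per month
EGRESS_PER_GB = 0.09         # dollars per gigabyte pushed through the pipe

hours_used = 300          # compute only runs while you need it
gb_stored = 500           # data sitting in object storage this month
gb_transferred = 1_000    # data that went out over the network

bill = (
    hours_used * COMPUTE_PER_HOUR
    + gb_stored * STORAGE_PER_GB_MONTH
    + gb_transferred * EGRESS_PER_GB
)
print(f"Estimated monthly bill: ${bill:.2f}")  # $30 + $10 + $90 = $130.00
```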

No contracts

And then the third piece, and this is a very important one, is that there are no contracts involved in an as-a-Service model, or there aren’t necessarily contracts. There can certainly be contracts, but when there are, they’re generally advantageous to you.

By no contracts we mean that you don’t have to agree to use something for a set amount of time—you use it for as long as you need it and then you get rid of it.

And so rather than a checkmark for no contracts, I’m just gonna put a little X there. You only use it when you need it; it’s on-demand.

Self-service

And then the last piece, and this is probably the most important: as-a-Service offerings are self-service. That means that you can go out to a website, punch in your information and your payment details, click the “Go” button, and that as-a-Service offering is going to be provisioned and delivered to you.

It’s not something that takes days or weeks or months to set up and configure; it’s something that can be provisioned in minutes or hours.
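In practice, “click the ‘Go’ button” can just as easily be an API call or a script. The sketch below uses Python’s requests library against a hypothetical provisioning endpoint; the URL, token, profile names, and response fields are illustrative placeholders, not a real IBM Cloud API:

```python
import requests

# Hypothetical self-service provisioning call -- the endpoint, token, and
# request body are illustrative placeholders, not a real IBM Cloud API.
response = requests.post(
    "https://api.example-cloud.com/v1/instances",
    headers={"Authorization": "Bearer MY_API_TOKEN"},
    json={
        "name": "web-server-01",
        "profile": "2-cpu-4gb",     # general-purpose compute
        "image": "ubuntu-22-04",
        "zone": "us-south-1",
    },
    timeout=30,
)
response.raise_for_status()
print("Provisioned:", response.json().get("id"))
# Minutes later the instance is up; delete it the same way when you're done.
```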
