A Brief History of Cloud Computing

5 min read

By: IBM Cloud Team

The concept of “cloud computing” has been around much longer than you think. Let’s dive into its history.

The humble beginnings of cloud

Believe it or not, the modern-day idea of “cloud computing” dates back to the 1950s, when large-scale mainframes were made available to schools and corporations. The mainframe’s colossal hardware infrastructure was installed in what could be called a “server room” (since the room generally could hold only a single mainframe). Multiple users were able to access the mainframe via “dumb terminals”: stations whose sole function was to facilitate access to the mainframe.

Due to the cost of buying and maintaining mainframes, an organization wouldn’t be able to afford a mainframe for each user. It became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Virtualization changes everything

Twenty years later in the 1970s, IBM released an operating system called VM that permitted admins on its System/370 mainframe systems to run multiple virtual systems, or “virtual machines (VMs),” on a single physical node. The VM operating system took the 1950s practice of shared access to a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment.

Most of the basic functions of the virtualization software you see nowadays can be traced back to this early VM OS. Each VM ran its own “guest” operating system with what appeared to be its own memory, CPU, and hard drives, along with CD-ROMs, keyboards, and networking, even though those resources were shared. “Virtualization” became a key technology driver and a huge catalyst for some of the biggest evolutions in communications and computing.

In the 1990s, telecommunications companies that had historically offered only dedicated point-to-point data connections began offering virtualized private network connections, with the same service quality as dedicated services at a reduced cost. Rather than building out physical infrastructure to allow more users to have their own connections, telecommunications companies provided users with shared access to the same physical infrastructure. This change allowed telecommunications companies to shift traffic as necessary, leading to better network balance and more control over bandwidth usage.

Virtualization meets the Internet

Meanwhile, virtualization for PC-based systems started in earnest. As the Internet became more accessible, the next logical step was to take virtualization online. If you were in the market to buy servers 10 or 20 years ago, you know that the costs of physical hardware, while not at the level of the mainframes of the 1950s, were pretty outrageous. As more and more people needed to be online, those costs had to come out of the stratosphere and into reality.

One of the ways that happened was through (you guessed it) virtualization. Servers were virtualized into shared hosting environments, virtual private servers, and virtual dedicated servers using the same types of functionality provided by the VM OS in the 1970s.

What did this look like in practice? Let’s say your company required 13 physical systems to run your sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company’s needs.
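
To make the arithmetic concrete, here’s a minimal sketch of that consolidation. Everything in it is hypothetical: the host names, the capacity figures, and the simple first-fit placement strategy are chosen purely to illustrate how 13 systems’ workloads might be packed onto two physical nodes.

```python
# Hypothetical footprints for the 13 legacy systems: (cpu_cores, ram_gb)
workloads = [(2, 4)] * 8 + [(4, 8)] * 4 + [(8, 16)]  # 13 systems total

# Two physical virtualization hosts with illustrative capacities
hosts = [
    {"name": "node-a", "cpu": 32, "ram": 64, "vms": []},
    {"name": "node-b", "cpu": 32, "ram": 64, "vms": []},
]

def place(workload, hosts):
    """First-fit placement: put the VM on the first host with room for it."""
    cpu, ram = workload
    for host in hosts:
        if host["cpu"] >= cpu and host["ram"] >= ram:
            host["cpu"] -= cpu   # reserve the cores
            host["ram"] -= ram   # reserve the memory
            host["vms"].append(workload)
            return host["name"]
    raise RuntimeError("no host has capacity for this workload")

for i, workload in enumerate(workloads):
    print(f"system {i + 1} -> {place(workload, hosts)}")

for host in hosts:
    print(f"{host['name']}: {len(host['vms'])} VMs, "
          f"{host['cpu']} cores / {host['ram']} GB still free")
```

In practice a hypervisor’s scheduler handles this packing (and much more), but the infrastructure savings fall out of the same arithmetic: 13 machines’ worth of workloads, two machines’ worth of hardware.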

As the costs of server hardware slowly came down, more users could afford to purchase their own dedicated servers. But they ran into a different kind of problem: one server wasn’t enough to provide the necessary resources. The market shifted from a “These servers are expensive; let’s split them up” belief to a “These servers are cheap; let’s figure out how to combine them” mentality. Because of that shift, the most basic understanding of “cloud computing” was born online.

The cloud is born

By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment’s resources as though they lived in a single physical node. To visualize that environment, technologists used terms like “utility computing” and “cloud computing,” since the sum of the parts seemed like a nebulous blob of computing resources that you could then segment out as needed (much as telecommunications companies did in the 1990s). In these cloud computing environments, adding resources to the “cloud” was easy: add another server to the rack and configure it to become part of the bigger system.

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to carve up the bigger environment. They wanted to make the cloud’s benefits available to users who didn’t have an abundance of physical servers with which to create their own cloud computing infrastructure. Those users could order “cloud computing instances” (also known as “cloud servers”) by requesting just the resources they needed from the larger pool of available cloud resources. Because the servers were already online, “powering up” a new instance or server was almost instantaneous. And because the cloud’s software handled new orders and cancellations with little overhead for the environment’s owner, managing the environment was much easier.
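
Here’s a minimal sketch of that model, again with hypothetical names and numbers: hypervisor-managed nodes pour their capacity into a single pool, growing the cloud amounts to adding a node, and provisioning or cancelling an instance is pure bookkeeping against the pool, which is why it feels almost instantaneous.

```python
class CloudPool:
    """A toy model of pooled capacity across hypervisor-managed nodes."""

    def __init__(self):
        self.cpu = 0   # unallocated cores across all nodes
        self.ram = 0   # unallocated RAM (GB) across all nodes

    def add_node(self, cpu, ram):
        """Racking another server just grows the pool."""
        self.cpu += cpu
        self.ram += ram

    def provision(self, cpu, ram):
        """'Power up' an instance: reserve slices of the pool."""
        if cpu > self.cpu or ram > self.ram:
            raise RuntimeError("pool exhausted; rack another node")
        self.cpu -= cpu
        self.ram -= ram
        return {"cpu": cpu, "ram": ram}  # the customer's "cloud server"

    def cancel(self, instance):
        """Cancelling an instance returns its resources to the pool."""
        self.cpu += instance["cpu"]
        self.ram += instance["ram"]

pool = CloudPool()
pool.add_node(cpu=32, ram=64)
pool.add_node(cpu=32, ram=64)      # growing the cloud = adding a server

vm = pool.provision(cpu=4, ram=8)  # near-instant: the hardware is already on
print(pool.cpu, pool.ram)          # 60 56
pool.cancel(vm)                    # no hardware is touched either way
```

Real clouds layer scheduling, isolation, and billing on top of this, but the core pattern of carving slices out of a shared pool is the same.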

Go beyond the standard cloud computing environment

Most companies today operate with the aforementioned definition of “the cloud” as the be-all and end-all, but IBM Cloud isn’t “most companies.” IBM Cloud took the idea of a cloud computing environment and pulled it back one more step. Instead of installing software on a cluster of machines to let users grab pieces, we built a platform that automated the manual aspects of bringing a server online, without a hypervisor on the server. We call this platform “IMS.” What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours.

Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. And because we automate almost everything in our data centers, you can spin up load balancers, firewalls, and storage devices on demand and turn them off when you’re done with them.

Other providers have cloud-enabled servers. We have cloud-enabled data centers.

IBM Cloud is leading the drive toward wider adoption of innovative cloud services. We have ambitious goals for the future. If you think we’ve come a long way from the mainframes of the 1950s, you ain’t seen nothin’ yet.

Build for free on IBM Cloud.
