August 23, 2016 | Written by: Maximilliano Destefani Neto
One of the first questions asked with the introduction of a new technology is: “When was it invented?” Other questions like “When was it first mentioned?” and “What are the prospects for its future?” are also common.
When we think of cloud computing, we think of situations, products and ideas that started in the 21st century. But that is not the whole truth. Cloud concepts have existed for decades. Let’s go back to where it began.
It was a gradual evolution that started in the 1950s with mainframe computing.
Multiple users could access a central computer through dumb terminals, whose only function was to provide access to the mainframe. Because of the cost of buying and maintaining mainframe computers, it was not practical for an organization to buy and maintain one for every employee. Nor did the typical user need the large (for the time) storage capacity and processing power that a mainframe provided. Providing shared access to a single resource was the solution that made economic sense for this sophisticated piece of technology.
(Related: Infographic: A brief history of cloud — 1950s to present day)
After some time, around 1970, the concept of virtual machines (VMs) was created.
Using virtualization software such as VMware, it became possible to run one or more operating systems simultaneously, each in an isolated environment. Complete virtual computers could run on a single piece of physical hardware, which in turn could be running an entirely different operating system.
The VM operating system took the shared-access mainframe of the 1950s to the next level, permitting multiple distinct computing environments to reside on one physical machine. Virtualization became a driving force behind the technology and an important catalyst in the evolution of information and communications.
In the 1990s, telecommunications companies started offering virtualized private network connections.
Historically, telecommunications companies only offered single, dedicated point-to-point data connections. The newly offered virtualized private network connections had the same service quality as the dedicated services, at a reduced cost. Instead of building out physical infrastructure so that more users could have their own connections, telecommunications companies were now able to give users shared access to the same physical infrastructure.
The following list briefly explains the evolution of cloud computing:
• Grid computing: Solving large problems with parallel computing
• Utility computing: Offering computing resources as a metered service
• SaaS: Network-based subscriptions to applications
• Cloud computing: Anytime, anywhere access to IT resources delivered dynamically as a service
Now let’s talk a bit about the present.
SoftLayer is one of the largest global providers of cloud computing infrastructure.
IBM already has platforms in its portfolio that include private, public and hybrid cloud solutions. SoftLayer delivers an even more comprehensive infrastructure as a service (IaaS) solution. While many companies choose to keep some applications in their own data centers, many others are moving to public clouds.
Even bare metal servers can now be purchased on a commercial cloud model (for example, billing by usage, or, put another way, physical servers billed by the hour). As a result, a bare metal server with exactly the resources needed, and nothing more, can be delivered within a matter of hours.
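To make the billing model concrete, here is a minimal sketch of usage-based (metered) billing for an hourly server. The rate and the hours used are hypothetical illustrations, not actual SoftLayer pricing:

```python
def metered_cost(hours_used: float, hourly_rate: float) -> float:
    """Return the usage-based charge for a server billed by the hour.

    Hypothetical example of metered billing: the customer pays only
    for the hours actually used, at a fixed hourly rate.
    """
    return round(hours_used * hourly_rate, 2)


# A bare metal server used for 72 hours at a hypothetical $0.50/hour:
print(metered_cost(72, 0.50))  # 36.0
```

The point of the model is that cost scales with consumption: releasing the server stops the meter, which is what distinguishes utility-style billing from buying and depreciating hardware up front.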
The story does not end here, though. The evolution of cloud computing has only begun. What do you think the future holds for cloud computing? Connect with me on Twitter @maxdneto.
A version of this post was originally published in March 2014.