What is a hyperscale data center?

Published: 21 March 2024
Contributors: Phill Powell, Ian Smalley

A hyperscale data center is a massive data center that provides extreme scalability and is engineered for large-scale workloads, with an optimized network infrastructure, streamlined network connectivity and minimized latency.

Due to the ever-increasing demand for data storage, hyperscale data centers are in wide use globally by numerous providers and for a wide variety of purposes, including artificial intelligence (AI), automation, data analytics, data storage, data processing and other big data computing pursuits.

“Hyperscale data centers” or “hyperscalers”?

It’s worth taking a moment to clarify these terms and any ambiguities they may present. For starters, the terms are often used interchangeably, which can create confusion.

“Hyperscalers” is often used as a nickname for hyperscale data centers. Unfortunately, the term already has an established meaning and is just as often used to refer to cloud service providers (CSPs) like AWS and other companies that offer hyperscale data center services.

Since the same term can denote both a type of data center and the businesses that specialize in hyperscale computing and provide those data centers, the potential for confusion is real (e.g., “The hyperscaler created the hyperscaler.”). So, for the purposes of this page, we’ll refer to hyperscale data centers and cloud service providers (CSPs) each by their own term and avoid the more generic “hyperscaler.”


How do hyperscale data centers work?

Data centers (of all sizes) can track their origins back to the important concept of virtualization. Virtualization uses software to create an abstraction layer over computer hardware that enables the division of a single computer’s hardware components into multiple virtual machines (VMs). Each VM runs its own OS and behaves as an independent computer, even though it’s running on just a portion of the actual underlying computer hardware. In this way, virtualization enables cloud computing.
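To make the idea concrete, here is a minimal sketch—the class names and capacities are hypothetical, not any particular hypervisor’s API—of how an abstraction layer divides one physical host’s CPU and memory among independent VMs:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_gb: int
    os: str  # each VM runs its own operating system

@dataclass
class PhysicalHost:
    """One physical server whose hardware is shared by many VMs."""
    cores: int
    memory_gb: int
    vms: list = field(default_factory=list)

    def provision(self, vm: VirtualMachine) -> bool:
        """Admit a VM only if enough unreserved CPU and memory remain."""
        used_cpu = sum(v.vcpus for v in self.vms)
        used_mem = sum(v.memory_gb for v in self.vms)
        if used_cpu + vm.vcpus <= self.cores and used_mem + vm.memory_gb <= self.memory_gb:
            self.vms.append(vm)
            return True
        return False

host = PhysicalHost(cores=64, memory_gb=512)
host.provision(VirtualMachine("web-01", vcpus=8, memory_gb=32, os="Linux"))
host.provision(VirtualMachine("db-01", vcpus=16, memory_gb=128, os="Linux"))
print(f"{len(host.vms)} VMs sharing one physical host")  # -> 2 VMs sharing one physical host
```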

A hyperscale data center differs from a traditional data center primarily by virtue of its sheer size. A hyperscale data center requires a physical site large enough to house all associated equipment—including at least 5,000 servers and quite possibly miles of cabling and networking equipment. As such, hyperscale data centers can easily encompass millions of square feet of space.

Redundancy is another important aspect of hyperscale data centers. It simply means providing backup measures and devices that kick in automatically should equipment fail or power be lost. Redundancy is especially critical for hyperscale data centers because these systems often run automatically—in the background, around the clock and with little direct supervision.
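A minimal sketch of that idea, with a randomized stand-in for a real health probe (the feed names and failure rate are invented for illustration):

```python
import random

def is_healthy(unit: str) -> bool:
    """Hypothetical health probe; a real facility would poll power feeds,
    cooling loops and servers over a dedicated management network."""
    return random.random() > 0.1  # simulate a 10% chance of failure

def select_active(units: list[str]) -> str | None:
    """Return the first healthy unit: the primary if possible, else a backup."""
    for unit in units:
        if is_healthy(unit):
            return unit
    return None  # total outage: every redundant unit has failed

# Redundant power sources, ordered by preference.
feeds = ["utility-power", "battery-ups", "diesel-generator"]
print(f"active power source: {select_active(feeds)}")
```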

Hyperscale data centers: Initial considerations

An organization wishing to operate a hyperscale data center will have no shortage of decisions to make as it determines the best course of action. The first question may be: “Should we build or rent?” Building even a modest data center entails a substantial investment. Constructing a hyperscale data center requires a financial commitment that’s more serious still.

Many companies opt to go another way by choosing a colocation data center—a data center whose owners rent out facilities and server space to other businesses. The appeal should be immediately apparent: renting space for hardware demands a significantly smaller investment than building an entire structure to house equipment, at least in terms of up-front costs.

Comparing the two basic options, it’s clear each has its advantages and disadvantages. Building a hyperscale data center is typically expensive and labor-intensive, but it yields a facility custom-built for that company, with every adjustable aspect suitably optimized.

Meanwhile, renting space in a colocation data center offers more mobility and requires far less up-front investment. But it’s unlikely that a colocation facility will have been designed to that client’s ideal specifications.
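The build-versus-colocate trade-off boils down to simple cost arithmetic. In the sketch below, every figure is an assumption chosen for illustration, not a market rate:

```python
# Build vs. colocate over a planning horizon (all figures hypothetical).
YEARS = 10
build_capex = 500_000_000         # construct the facility up front
build_opex_per_year = 30_000_000  # staff, power, maintenance
colo_rent_per_year = 90_000_000   # leased space, power and cooling included

build_total = build_capex + build_opex_per_year * YEARS
colo_total = colo_rent_per_year * YEARS

print(f"build: ${build_total:,}  colocate: ${colo_total:,}")
# Building wins only once its large up-front cost amortizes below the
# recurring rent; shorter horizons favor colocation.
```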

Some organizations seek yet another option—one that assures them that as they continue to grow, they can scale upward without purchasing additional, costly storage equipment for a private-cloud-based system. For many of these companies, the right answer involves migrating away from a privately owned system and relocating operations to a public cloud environment, such as Software-as-a-Service (SaaS) apps like Microsoft 365 or Google Workspace.

There are other variations, like modular data centers—pre-engineered facilities designed for use as data centers. Not only are modular data centers pre-engineered, they’re also pre-piped and equipped with the necessary cooling equipment. They are ideal for organizations that want to experiment with data center functionality in a limited way before making huge investments, as well as for companies that need to implement a reliable data center solution quickly.

Hyperscale history

Data centers have been used since the 1940s, back when a single, mammoth computer filled an entire department’s space. As computers became more size-efficient over the years, the on-premises physical space allotted to them also evolved. Then, in the 1990s, the first burst of microcomputers radically reduced the space needed for IT operations. It wasn’t long before “server rooms” were being referred to instead as “data centers.”

In terms of important advancements, the first hyperscale data center is often considered to have been launched in 2006 by Google in The Dalles, OR (near Portland). This hyperscale facility currently occupies 1.3 million square feet of space and employs a staff of approximately 200 data center operators. The term “currently” is used because Google is now pursuing plans to scale out this “energy campus” with a USD 600 million expansion project that will add a fifth building of 290,000 square feet. Showing its further commitment to this technology, Google now operates 14 data center facilities in the US—as well as 6 in Europe, 3 in Asia and 1 in Chile.

At present, the world’s largest hyperscale facilities (link resides outside ibm.com) exist within the Inner Mongolia region of China. That’s where China Telecom operates a hyperscale data center that’s roughly 10.7 million square feet. Put another way, the largest data center is the size of 165 regulation US football fields—all conjoined in a space 11 football fields wide and 15 football fields long. Not surprisingly, this humongous hyperscale data center (which cost USD 3 billion to construct) is outfitted with everything it needs functionally and even boasts residence facilities for the hyperscale data center operators who work there.

The top three hyperscale companies

Ranking the leading hyperscale companies can be problematic. First, there’s the relatively easy issue of defining what criteria will be used. Theoretically, you could focus on the number of hyperscale data centers owned or built, but that’s not an entirely accurate way to judge a company’s delivery of cloud services, since numerous outfits (including Apple) lease some of the data centers they use from other service providers.

That leaves only one viable means of comparison: percentage of market share. This is ultimately the best indicator of which hyperscale companies are truly driving the market, even if the method is slightly imperfect—the hyperscale computing market is never static; it’s always in a state of flux.

However, there is ample trend evidence to suggest that an upper tier of three hyperscale providers has currently established a solid hold on the largest segment of this market:

Amazon Web Services (AWS)

Presently, AWS is the largest provider of hyperscale cloud services, with a commanding market share (link resides outside ibm.com) of roughly 32%. AWS operates in 32 cloud regions and 102 availability zones, with a total space of 33.5 million square feet. AWS is known for its expertise in automation, database management and data analytics.

Microsoft Azure

Microsoft’s popular hyperscale platform Azure currently garners approximately 23% (link resides outside ibm.com) of the hyperscale market. Azure operates in 62 cloud regions, with 120 availability zones. Not surprisingly, Microsoft Azure works especially well in conjunction with Microsoft software for enterprise data centers.

Google Cloud Platform (GCP)

Known originally and primarily for its vast expertise in data handling, Google Cloud controls roughly 10% (link resides outside ibm.com) of the hyperscale market and operates in 39 cloud regions, with 118 availability zones. In addition to data handling and data processing, GCP attracts businesses for its strengths in artificial intelligence (AI) and advanced analytics.

Next-tier hyperscale companies

Numerous other providers operate in this same market within a secondary tier of market share.

  • Alibaba Cloud: Although not as large as AWS or other top providers, Alibaba does possess a strong hyperscale market share within the Asia-Pacific region. Its notable offerings include infrastructure-related products and AI services.

  • Apple: Apple uses a hybrid model for its cloud services. Apple owns eight data centers in the US, Europe and China, with plans to build more. In addition, Apple has multi-year lease arrangements for cloud-computing services from providers like AWS and GCP.

  • IBM Cloud: Long synonymous with technology expansion, IBM Cloud has deep experience working with enterprise data centers and delivering services in a wide range of related areas. Lately, the company’s pioneering work with AI is bringing it renewed visibility.

  • Meta Platforms: The parent company behind popular online platforms like Facebook and Instagram, Meta Platforms operates 21 hyperscale data centers around the world whose total space exceeds 50 million square feet.

  • Oracle Cloud Infrastructure (OCI): OCI has positioned itself as a lower-cost alternative to AWS, offering similar services at a significantly reduced price point. OCI excels at enabling the creation of cloud-native apps and easy migration of mission-critical workloads.

Hyperscale data center power usage

Power usage is the most pressing issue surrounding hyperscale data centers. The massive computing power they deliver demands an equally massive supply of electricity. Like cryptocurrency and bitcoin mining, hyperscale data centers are a fairly recent technological development whose unusually large appetite for power places them at least somewhat at odds with ecological sustainability goals (although some companies are finding ways to mitigate or even eliminate such problems).

In addition to the thousands of computer servers in constant operation, there are the electrical needs of power and networking equipment (such as transformers and routers) to consider, plus the critically important cooling systems that keep hardware from thermal overload. And that doesn’t even begin to address the costs of constructing and running an enormous structure to contain this beehive of activity.

That’s why energy efficiency is so crucial to running hyperscale data centers effectively. If one server is not working at peak efficiency, the impact might be negligible. But when a facility contains thousands of servers that aren’t working efficiently, the problem becomes much larger and more expensive.
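The industry’s standard yardstick for this (not named in this article, but widely used) is power usage effectiveness, or PUE: total facility power divided by the power that actually reaches IT equipment. A rough sketch with assumed figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: 1.0 is ideal; higher values mean more
    power is spent on cooling, transformers and other overhead."""
    return total_facility_kw / it_equipment_kw

# Assumed figures: 5,000 servers drawing ~500 W each, plus facility overhead.
it_load_kw = 5_000 * 0.5        # 2,500 kW of server load
facility_kw = it_load_kw * 1.4  # assume 40% overhead for cooling and power gear
print(f"PUE: {pue(facility_kw, it_load_kw):.2f}")  # -> PUE: 1.40
```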

Depending on their size, hyperscale data centers may require megawatts or even gigawatts of power. There’s so much variance among hyperscale data centers that nailing down an average energy usage figure can be difficult. The following guidelines for different-sized data centers are not official definitions, but they have become accepted informally over time and can assist our general understanding of these facilities (a classification sketch follows the list):

  • Micro data center: The smallest recognized data center is generally used by single companies or for remote offices. Micro data centers usually have a capacity of 10 server racks or fewer, which works out to a total capacity of approximately 140 servers. They typically occupy less than 5,000 square feet of space. Energy draw: under 100–150kW.

  • Small data center: Small data centers typically require between 5,000–20,000 square feet of space and may host anywhere from 500 to 2,000 servers. Energy draw: 1–5MW.

  • Average data center: The average onsite data center typically has between 2,000 and 5,000 servers. Its square footage can vary between 20,000 and 100,000 square feet. Energy draw: around 100MW.

  • Hyperscale data centers: The IDC definition of a hyperscaler (link resides outside ibm.com) holds that to be considered a true hyperscale data center, the facility should contain at least 5,000 servers and occupy at least 10,000 square feet of physical space. Energy draw: Over 100MW.
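Taken together, these informal guidelines amount to a simple classification rule. The sketch below encodes the thresholds exactly as listed above; the tiers themselves remain unofficial:

```python
def classify_data_center(servers: int, square_feet: int) -> str:
    """Map a facility onto the informal size tiers described above."""
    if servers >= 5_000 and square_feet >= 10_000:
        return "hyperscale"  # IDC's working definition
    if servers >= 2_000:
        return "average"
    if servers >= 500:
        return "small"
    return "micro"

print(classify_data_center(servers=140, square_feet=4_000))      # -> micro
print(classify_data_center(servers=8_000, square_feet=250_000))  # -> hyperscale
```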

Energy supply and geography issues

The enormous power needs of hyperscale data centers pose a geographic puzzle for companies wishing to invest heavily in this type of infrastructure.

Energy prices vary by location, and companies often look to undeveloped or underdeveloped countries and regions as potential building sites for their hyperscale data centers because electricity is priced more attractively within those economies.

But that’s only one criterion. Finding an area with an affordable energy source is key, as is respecting local sustainability concerns. So is locating a site that likely won’t be subject to constant, hazardous weather that could imperil a company’s mission-critical functions through power outages and downtime.
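One way to reason about such competing criteria is a weighted site score. The sketch below is purely illustrative—the sites, weights and 0–10 ratings are invented (for weather, a higher rating means a safer site):

```python
# Relative importance of each criterion; weights sum to 1.0.
WEIGHTS = {"energy_cost": 0.4, "weather_safety": 0.35, "sustainability": 0.25}

candidate_sites = {
    "site-a": {"energy_cost": 9, "weather_safety": 4, "sustainability": 6},
    "site-b": {"energy_cost": 6, "weather_safety": 9, "sustainability": 8},
}

def score(ratings: dict[str, float]) -> float:
    """Weighted sum of a site's ratings across all criteria."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

best = max(candidate_sites, key=lambda site: score(candidate_sites[site]))
print({s: round(score(r), 2) for s, r in candidate_sites.items()}, "->", best)
```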

Companies are trying to balance their corporate initiatives with sustainability goals so that hyperscale data centers are affordable to run and leave as light a carbon footprint as possible. The drive to reduce power usage is even steering companies toward renewable energy solutions for powering their hyperscale data centers. Most of the major businesses involved in hyperscale computing have investigated or are now exploring renewable options like solar and wind power as a way to offset their considerable power usage.

In some cases, cloud service providers have even pledged to ensure that their data centers offer complete sustainability. Most impressive is Apple (link resides outside ibm.com), which has supported sustainability with actual constructive action since 2014. That’s when the company mandated that all of its data centers become powered completely by renewable energy. For a decade now, Apple’s data centers—all of them—have been run on various combinations of biogas fuel cells, hydropower, solar power and wind power.

Recent trends and future developments for hyperscale data centers

The data boom shows no sign of stopping or even letting up. In fact, there’s much evidence to suggest that it’s still just ramping up.

More data from more sources

There’s never been a time when so much data was being produced, recorded and studied. Technology has reached a milestone where even simple gadgets are so cleverly engineered that many of them independently generate data and transmit it for archival and analysis purposes, via the Internet of Things (IoT).

Storage to triple over coming years

Market intelligence firm Synergy Research Group announced findings in October 2023 showing that AI advances will help propel the capacity of hyperscale data centers upward, and that between now and 2028, the average hyperscale data center capacity (link resides outside ibm.com) will more than triple.

Ancillary effect on other industries

The creation of hyperscale data centers stimulates other industries, such as manufacturing (consider the thousands of server racks that must be produced). Another is real estate, where massive parcels of undeveloped land are bought and sold for use by these mega-facilities. The construction industry also benefits.

Market experiencing “growing pains”

A 2024 Newmark report (link resides outside ibm.com) analyzed a current situation affecting the US market—the world’s largest market for data centers and hyperscale data centers. Newmark found that demand for new data centers in the US far exceeds current capacity, especially in and around major US cities.

Power usage to double by 2030

The technological leaps ushered in by AI and machine learning will require vastly increasing amounts of electricity. In the same 2024 report, Newmark projected that data center power demand in 2030 (link resides outside ibm.com) will nearly double compared to its 2022 level of 17GW, with total US data center usage expected to reach 35GW.
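As a quick sanity check on that projection, growing from 17GW in 2022 to 35GW in 2030 implies a compound annual growth rate of roughly 9–10%:

```python
# Implied compound annual growth rate for US data center power demand,
# using the 2022 (17 GW) and 2030 (35 GW) figures cited above.
start_gw, end_gw, years = 17, 35, 2030 - 2022
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")  # -> ~9.4% per year
```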

Related solutions
IBM Storage Scale

Discover the easiest way to deploy IBM Storage Scale with performance and capacity that scales from 1 to over 1000 nodes.

Explore IBM Storage Scale

IBM storage area network (SAN) solutions

Update your storage infrastructure with IBM b-type and c-type SAN switches and reduce your total cost of ownership.

Explore IBM storage area network (SAN) solutions

IBM Consulting Cloud Accelerator

Shift your adoption of the IBM Cloud platform into overdrive with an Accelerator that supports a wide range of apps and landing zones.

Explore IBM Consulting Cloud Accelerator
IBM Storage

Balance both sides of the data storage equation by using IBM software (like IBM FlashSystem) and IBM hardware solutions.

Explore IBM Storage

Resources

What is hyperscale?

Hyperscale is a distributed computing environment and architecture that’s designed to provide extreme scalability in order to accommodate workloads of massive scale.

What is a storage area network (SAN)?

A storage area network (SAN) is a dedicated network that’s tailored to a specific environment—combining servers, storage systems, networking switches, software and services.

What is a data center?

A data center is a physical room, building or facility that houses IT infrastructure for building, running and delivering applications and services, as well as for storing and managing the data associated with those applications and services.

What is data storage?

Data storage refers to magnetic, optical or mechanical media that records and preserves digital information for ongoing or future operations.

What is object storage?

Object storage is a data storage architecture ideal for storing, archiving, backing up and managing high volumes of static unstructured data—reliably, efficiently and affordably.

What is IT infrastructure?

IT infrastructure refers to the combined components needed for the operation and management of enterprise IT services and IT environments.

Take the next step

Hyperscale data centers are taking the storage concept to entirely new levels of productivity, and they are proving to be a powerful ally for AI and machine learning applications. Discover how to best leverage these powerful new capabilities in your business with IBM Storage Scale.

Explore IBM Storage Scale