What is a data center?

A data center is a physical room, building or facility that houses IT infrastructure for building, running, and delivering applications and services, and for storing and managing the data associated with those applications and services.

Data centers have evolved in recent years from privately owned, tightly controlled on-premises facilities that house traditional IT infrastructure for the exclusive use of one company, to remote facilities or networks of facilities owned by cloud service providers that house virtualized IT infrastructure for the shared use of multiple companies and customers.


Types of data centers

There are different types of data center facilities, and a single company may use more than one type, depending on its workloads and business needs.

Enterprise (on-premises) data centers

In this data center model, all IT infrastructure and data are hosted on premises. Many companies choose to have their own on-premises data centers because they feel they have more control over information security and can more easily comply with regulations such as the European Union General Data Protection Regulation (GDPR) or the U.S. Health Insurance Portability and Accountability Act (HIPAA). In an enterprise data center, the company is responsible for all deployment, monitoring, and management tasks.

Public cloud data centers

Cloud data centers (also called cloud computing data centers) house IT infrastructure resources for shared use by multiple customers—from scores to millions of customers—via an Internet connection.

Many of the largest cloud data centers—called hyperscale data centers—are run by major cloud service providers such as Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud, Microsoft Azure, and Oracle Cloud Infrastructure. In fact, most leading cloud providers run several hyperscale data centers around the world. Cloud service providers also typically maintain smaller edge data centers located closer to cloud customers (and cloud customers’ customers). For real-time, data-intensive workloads such as big data analytics, artificial intelligence (AI), and content delivery applications, edge data centers can help minimize latency, improving overall application performance and customer experience.

Managed data centers and colocation facilities

Managed data centers and colocation facilities are options for organizations that don’t have the space, staff, or expertise to deploy and manage some or all of their IT infrastructure on premises—but prefer not to host that infrastructure using the shared resources of a public cloud data center.

In a managed data center, the client company leases dedicated servers, storage and networking hardware from the data center provider, and the data center provider handles the administration, monitoring and management for the client company.

In a colocation facility, the client company owns all the infrastructure, and leases a dedicated space to host it within the facility. In the traditional colocation model, the client company has sole access to the hardware and full responsibility for managing it; this is ideal for privacy and security but often impractical, particularly during outages or emergencies. Today, most colocation providers offer management and monitoring services for clients who want them.

Managed data centers and colocation facilities are often used to house remote data backup and disaster recovery technology for small and midsized businesses (SMBs).

Data center architecture

Most modern data centers—even in-house on-premises data centers—have evolved from traditional IT architecture, where every application or workload runs on its own dedicated hardware, to cloud architecture, in which physical hardware resources—CPUs, storage, networking—are virtualized. Virtualization enables these resources to be abstracted from their physical limits, and pooled into capacity that can be allocated across multiple applications and workloads in whatever quantities they require.

Virtualization also enables software-defined infrastructure (SDI)—infrastructure that can be provisioned, configured, run, maintained and ‘spun down’ programmatically, without human intervention.
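
To make the idea of programmatic provisioning concrete, here is a minimal Python sketch of how an application might request and later release a virtual server through an SDI or IaaS REST API. The endpoint URL, payload fields, and token are hypothetical placeholders, not any particular provider's interface.

# Minimal sketch of software-defined infrastructure (SDI) provisioning.
# The endpoint, payload fields, and API token are hypothetical placeholders;
# real providers each define their own schema and authentication.
import requests

API_URL = "https://sdi.example.com/v1/instances"   # hypothetical endpoint
API_TOKEN = "replace-with-real-token"              # hypothetical credential

def provision_instance(name: str, vcpus: int, memory_gb: int) -> str:
    """Request a new virtual server and return its (hypothetical) instance ID."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"name": name, "vcpus": vcpus, "memory_gb": memory_gb},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]

def decommission_instance(instance_id: str) -> None:
    """'Spin down' the instance when it is no longer needed."""
    requests.delete(
        f"{API_URL}/{instance_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    instance_id = provision_instance("web-01", vcpus=4, memory_gb=16)
    print(f"Provisioned {instance_id}")
    decommission_instance(instance_id)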

The combination of cloud architecture and SDI offers many advantages to data centers and their users, including the following:

  • Optimal utilization of compute, storage, and networking resources. Virtualization enables companies or clouds to serve the most users using the least hardware, and with the least unused or idle capacity.
     

  • Rapid deployment of applications and services. SDI automation makes provisioning new infrastructure as easy as making a request via a self-service portal.
     

  • Scalability. Virtualized IT infrastructure is far easier to scale than traditional IT infrastructure. Even companies using on-premises data centers can add capacity on demand by bursting workloads to the cloud when necessary (see the sketch after this list).
     

  • Variety of services and data center solutions. Companies and clouds can offer users a range of ways to consume and deliver IT, all from the same infrastructure. Choices are made based on workload demands, and include infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). These services can be offered in a private data center, or as cloud solutions in either a private cloud, public cloud, hybrid cloud, or multicloud environment.
     

  • Cloud-native development. Containerization and serverless computing, along with a robust open-source ecosystem, enable and accelerate DevOps cycles and application modernization as well as enable develop-once-deploy-anywhere apps.
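
As a rough illustration of the bursting behavior mentioned in the Scalability bullet above, the following Python sketch decides how many incoming workloads stay on-premises and how many burst to a public cloud. The capacity figure and 90 percent threshold are assumed values for illustration only.

# Illustrative sketch of a "burst to cloud" scaling decision. The capacity
# figures are hypothetical; a real setup would use a provider's autoscaling API.

ON_PREM_CAPACITY = 200          # max concurrent workloads on-premises (assumed)
BURST_THRESHOLD = 0.9           # burst when on-prem utilization exceeds 90%

def place_workloads(pending_workloads: int, on_prem_in_use: int) -> dict:
    """Decide how many workloads stay on-premises and how many burst to cloud."""
    free_on_prem = max(ON_PREM_CAPACITY - on_prem_in_use, 0)
    headroom = int(ON_PREM_CAPACITY * BURST_THRESHOLD) - on_prem_in_use
    keep_local = min(pending_workloads, max(headroom, 0), free_on_prem)
    burst_to_cloud = pending_workloads - keep_local
    return {"on_premises": keep_local, "public_cloud": burst_to_cloud}

if __name__ == "__main__":
    # 150 slots already busy, 80 new workloads arrive: 30 stay local, 50 burst.
    print(place_workloads(pending_workloads=80, on_prem_in_use=150))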

Data center infrastructure components

Servers

Servers are powerful computers that deliver applications, services and data to end-user devices. Data center servers come in several form factors:

  • Rack-mount servers are wide, flat standalone servers—the size of a small pizza box—designed to be stacked on top of each other in a rack to save space (vs. a tower or desktop server). Each rack-mount server has its own power supply, cooling fans, network switches, and ports, along with the usual processor, memory, and storage.
     

  • Blade servers are designed to save even more space. Each blade contains processors, network controllers, memory, and sometimes storage; they’re made to fit into a chassis that holds multiple blades and contains the power supply, network management, and other resources for all the blades in the chassis.
     

  • Mainframes are high-performance computers with multiple processors that can do the work of an entire room of rack-mount or blade servers. The first virtualizable computers, mainframes can process billions of calculations and transactions in real time.

The choice of form factor depends on many factors including available space in the data center, the workloads being run on the servers, the available power, and cost.

Storage systems

Most servers include some local storage capability—called direct-attached storage (DAS)—to enable the most frequently used data (hot data) to remain close to the CPU.

Two other common data center storage configurations are network-attached storage (NAS) and storage area networks (SANs).

NAS provides data storage and data access to multiple servers over a standard Ethernet connection. The NAS device is usually a dedicated server with multiple storage media—hard disk drives (HDDs) and/or solid state drives (SSDs).

Like NAS, a SAN enables shared storage, but a SAN uses a separate network for the data and consists of a more complex mix of multiple storage servers, application servers, and storage management software.

A single data center may use all three storage configurations—DAS, NAS, and SAN—as well as file storage, block storage, and object storage types.
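
The sketch below contrasts file-based access with object storage access from an application's point of view. The mount point, endpoint, bucket, and credentials are assumptions; boto3 is used only as one common client for S3-compatible object stores.

# Sketch contrasting file storage with object storage from an application's
# point of view. The bucket name, endpoint, and credentials are assumptions.
import boto3

# File (or block-backed) storage: data is reached through a filesystem path,
# typically an NFS mount (NAS) or a filesystem on a SAN/DAS block volume.
with open("/mnt/shared/reports/q1.csv", "w") as f:      # hypothetical mount point
    f.write("region,revenue\nemea,1200\n")

# Object storage: data is reached by bucket and key over an HTTP API,
# with no server-side filesystem hierarchy.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",     # hypothetical endpoint
    aws_access_key_id="replace-me",
    aws_secret_access_key="replace-me",
)
s3.put_object(Bucket="reports", Key="q1.csv", Body=b"region,revenue\nemea,1200\n")
obj = s3.get_object(Bucket="reports", Key="q1.csv")
print(obj["Body"].read().decode())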

Networking

The data center network, consisting of various types of switches, routers and fiber optics, carries network traffic across the servers (called east/west traffic), and to/from the servers to the clients (called north/south traffic).

As noted above, a data center’s network services are typically virtualized. This enables the creation of software-defined overlay networks, built on top of the network’s physical infrastructure, to accommodate specific security controls or service level agreements (SLAs).
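
The toy Python sketch below illustrates the east/west versus north/south distinction by classifying a flow based on whether both endpoints sit inside the data center's address space. The address ranges are illustrative assumptions.

# Toy sketch classifying flows as east/west (server to server inside the data
# center) or north/south (between servers and outside clients).
from ipaddress import ip_address, ip_network

DATA_CENTER_NETWORKS = [ip_network("10.0.0.0/8")]       # assumed internal ranges

def is_internal(address: str) -> bool:
    return any(ip_address(address) in net for net in DATA_CENTER_NETWORKS)

def traffic_direction(src: str, dst: str) -> str:
    """Return 'east/west' for internal-to-internal flows, else 'north/south'."""
    return "east/west" if is_internal(src) and is_internal(dst) else "north/south"

print(traffic_direction("10.0.1.5", "10.0.2.9"))      # east/west
print(traffic_direction("10.0.1.5", "203.0.113.7"))   # north/south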

Power supply and cable management

Data centers need to be always-on, at every level. Most servers feature dual power supplies. Battery-powered uninterruptible power supplies (UPS) protect against power surges and brief power outages. Powerful generators can kick in if a more severe power outage occurs.

With thousands of servers connected by various cables, cable management is an important data center design concern. If cables are too near to each other, they can cause cross-talk, which can negatively impact data transfer rates and signal transmission. Also, if too many cables are packed together, they can generate excessive heat. Data center construction and expansion must consider building codes and industry standards to ensure cabling is efficient and safe.

Redundancy and disaster recovery

Data center downtime is costly to data center providers and to their customers, and data center operators and architects go to great lengths to increase resiliency of their systems. These measures include everything from redundant arrays of independent disks (RAIDs) to protect against data loss or corruption in the case of storage media failures, to backup data center cooling infrastructure that keeps servers running at optimal temperatures, even if the primary cooling system fails.

Many large data center providers have data centers located in geographically distinct regions, so that if a natural disaster or political disruption occurs in one region, operations can be failed over to a different region for uninterrupted services.
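
A simplified sketch of that failover idea follows: route traffic to the first region whose health check responds, falling back to the secondary region if the primary is unreachable. The region endpoints and health-check path are hypothetical.

# Simplified sketch of failing over to a secondary region when the primary
# stops responding. The region endpoints and health-check path are hypothetical.
import requests

REGIONS = [
    "https://eu-west.example.com",   # primary (hypothetical)
    "https://us-east.example.com",   # secondary (hypothetical)
]

def pick_healthy_region() -> str:
    """Return the first region whose health check responds, in priority order."""
    for endpoint in REGIONS:
        try:
            if requests.get(f"{endpoint}/health", timeout=2).ok:
                return endpoint
        except requests.RequestException:
            continue                 # region unreachable; try the next one
    raise RuntimeError("No healthy region available")

if __name__ == "__main__":
    print("Routing traffic to", pick_healthy_region())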

The Uptime Institute uses a four-tier system to rate the redundancy and resiliency of data centers:

  • Tier I—Provides basic capacity components, such as an uninterruptible power supply (UPS) and 24/7 cooling, to support IT operations for an office setting or beyond.
     

  • Tier II—Adds additional redundant power and cooling subsystems, such as generators and energy storage devices, for improved safety against disruptions.
     

  • Tier III—Adds redundant components as a key differentiator from other data centers. Tier III facilities require no shutdowns when equipment needs maintenance or replacement.
     

  • Tier IV—Adds fault tolerance by implementing several independent, physically isolated redundant capacity components, so that when a piece of equipment fails there is no impact to IT operations.
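
As a back-of-the-envelope illustration of what these tiers imply, the short calculation below converts an availability target into allowable downtime per year. The 99.982 percent figure is only an example input, not the definition of any specific tier.

# Back-of-the-envelope conversion from an availability target to allowable
# annual downtime. The 99.982% figure is only an illustrative input.
MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600 minutes

def annual_downtime_minutes(availability_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

print(round(annual_downtime_minutes(99.982), 1))   # roughly 94.6 minutes per year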

Environmental controls

Data centers must be designed and equipped to control environmental factors—most of which are interrelated—that can damage or destroy hardware and lead to expensive or catastrophic downtime.

Temperature: Most data centers employ some combination of air cooling and liquid cooling to keep servers and other hardware operating in the proper temperature ranges. Air cooling is basically air conditioning—specifically, computer room air conditioning (CRAC) targeted at the entire server room, or at specific rows or racks of servers. Liquid cooling technologies pump liquid directly to processors, or in some cases immerse servers in coolant. Data center providers are turning increasingly to liquid cooling for greater energy efficiency and sustainability—it requires less electricity and less water than air cooling.

Humidity: High humidity can cause equipment to rust; low humidity can increase the risk of static electricity surges (see below). Humidity control equipment includes the aforementioned CRAC systems, proper ventilation, and humidity sensors.

Static electricity: As little as 25 volts of static discharge can damage equipment or corrupt data. Data center facilities are outfitted with equipment that monitors static electricity and discharges it safely.

Fire: For obvious reasons, data centers must be equipped with fire-prevention equipment, which must be tested regularly.
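
A hypothetical sketch of how such environmental thresholds might be monitored in software is shown below; the temperature and humidity ranges and the sensor stub are illustrative assumptions, not vendor specifications.

# Hypothetical sketch of an environmental monitoring check for a server room.
# Threshold values and the read_sensors() stub are illustrative assumptions.
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),   # assumed acceptable inlet temperature range
    "humidity_pct": (40.0, 60.0),    # assumed acceptable relative humidity range
}

def read_sensors() -> dict:
    """Stand-in for real sensor telemetry (for example, from a DCIM system)."""
    return {"temperature_c": 29.5, "humidity_pct": 35.0}

def check_environment(readings: dict) -> list:
    """Return alert messages for any reading outside its acceptable range."""
    alerts = []
    for metric, (low, high) in THRESHOLDS.items():
        value = readings[metric]
        if not low <= value <= high:
            alerts.append(f"{metric}={value} outside {low}-{high}")
    return alerts

for alert in check_environment(read_sensors()):
    print("ALERT:", alert)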
