What is a data center?
A data center is the physical facility that makes enterprise computing possible, and it houses the following:
- Enterprise computer systems.
- The networking equipment and associated hardware needed to ensure the computer systems’ ongoing connectivity to the Internet or other business networks.
- Power supplies and subsystems, electrical switches, backup generators, and environmental controls (such as air conditioning and server cooling devices) that protect the data center hardware and keep it up and running.
A data center is central to an enterprise’s IT operations. It’s a repository for the majority of business-critical systems, where most business data is stored, processed, and disseminated to users.
Maintaining the security and reliability of data centers is essential to protecting an enterprise’s operational continuity—its ability to conduct business without interruption.
What is in a data center?
The IT equipment within a data center consists of three main elements necessary for a computing environment to function:
- Compute: The memory and processing power needed to run applications, usually supplied by enterprise-grade servers.
- Storage: Data centers include primary and backup storage devices. These may be hard disk drives or even tape drives, but best-in-class facilities typically feature all-flash arrays.
- Networking: Data centers contain a broad array of networking equipment, ranging from routers and switches to controllers and firewalls.
In addition to the IT equipment it contains, every data center houses that equipment’s support infrastructure, including the following:
- Environmental controls: Sensors monitor the airflow, humidity, and temperature in the facility at all times, with systems in place to guarantee that temperature and humidity remain within hardware manufacturers' specified ranges.
- Server racks: Most data center equipment is housed in specially designed racks or in purpose-built cabinets or shelving.
- Power supplies: Most data centers employ battery-based backup power systems able to compensate for short-term power outages, as well as larger generators that can supply power during longer outages of the commercial power grid.
- Cabling and cable management systems: An enterprise data center may contain hundreds of miles of fiber optic cable. Systems and equipment to keep that cabling orderly and accessible are a must.
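The environmental-control monitoring described above boils down to comparing sensor readings against acceptable operating ranges. The sketch below illustrates the idea; the sensor names and threshold values are illustrative assumptions, not any manufacturer's actual specifications (real facilities follow their hardware vendors' documented ranges and guidelines such as ASHRAE's).

```python
# Minimal sketch of an environmental-control check. The limits below are
# assumed, illustrative values -- real thresholds come from hardware
# manufacturers' specifications.

LIMITS = {
    "temperature_c": (18.0, 27.0),  # assumed acceptable temperature range
    "humidity_pct": (20.0, 80.0),   # assumed acceptable relative humidity
}

def out_of_range(readings: dict) -> list:
    """Return the names of any readings outside their configured limits."""
    alerts = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not low <= value <= high:
            alerts.append(name)
    return alerts

# A reading of 31.2 C trips the temperature alert; humidity is fine.
print(out_of_range({"temperature_c": 31.2, "humidity_pct": 45.0}))
```

In practice such checks run continuously and feed an alerting system, so that cooling or humidification equipment can respond before hardware is at risk.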
Data center facilities
Many large data centers are located in dedicated, purpose-built buildings. Smaller data centers may be situated in specially designed rooms within buildings constructed to serve multiple functions. Since data centers consume large amounts of energy, it's important to ensure the physical structures that house them are well-designed and appropriately insulated to optimize temperature controls and energy efficiency.
Data centers should be located near reliable sources of electricity and high-speed network connectivity. The site should not be in or near flood zones, nor should it be vulnerable to other environmental hazards. The load-bearing capabilities of the building’s walls and floors must be sufficient for the weight of the hardware, racks, and other support infrastructure it will house. And the facility should have ample security and fire suppression systems, as well as appropriate monitoring systems.
Cloud data centers
When enterprises migrate their data and workloads to cloud data centers, those workloads reside in physical infrastructure just like that found in best-in-class on-premises data centers. The cloud customer is no longer obligated to design, build, maintain, power, staff, or secure a physical building. Instead, the cloud provider assumes responsibility for supplying highly available, fault-tolerant computing resources as a service. This frees enterprise cloud consumers to focus more resources on their business.
With cloud computing adoption continuing to rise, cloud data centers are hosting an ever-larger percentage of enterprise workloads. According to research firm Gartner, 80% of enterprises will have closed their traditional on-premises data centers by 2025.
The cloud provider typically offers customers shared access to virtualized computing resources (e.g., virtual machines (VMs)) or dedicated access to specific individual physical computers, storage, and networking hardware.
Benefits of cloud data centers
Cloud providers enjoy the advantages of economies of scale and are thus able to supply tenants with up-to-date hardware, cutting-edge security, and better availability and resiliency than the tenant organizations would be able to afford to build in their own data centers.
Some of the chief benefits of cloud data centers include the following:
- Efficient use of resources: In public cloud architectures, multiple tenants share the same physical infrastructure. This means that individual enterprises do not have to purchase, build, and maintain resources like compute and storage just to have them available for peak usage periods or to provide failover capabilities.
- Rapid deployment and scalability: Resources can be provisioned with just a few clicks, so deploying new services takes only a tiny fraction of the time it would take if on-premises facilities needed to be built to support the deployment.
- Reduced capital expenditure (CAPEX) costs: Because cloud tenants pay for services on an as-needed basis, usually via a subscription model, there’s no need to make major up-front investments in new hardware.
- Freeing IT staff: The cloud provider takes responsibility for securing and maintaining the infrastructure, freeing customers’ IT departments from daily hardware maintenance tasks.
- Access to a global network of data centers: Major cloud providers have distributed their data centers across multiple regions and continents. This allows customers to meet their security and regulatory requirements and ensures that processing performance is optimized for their customer base, no matter where in the world it’s located. A global network’s performance can be estimated by comparing the distance the data must travel with the speed at which light travels in fiber, which yields a potential round-trip time (RTT) for the data. The closer your data is stored to its users, the better your services will perform.
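The RTT estimate mentioned above is simple arithmetic: twice the distance divided by the signal speed in fiber. The sketch below assumes light in fiber travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum); real-world RTTs are higher because of routing, switching, and indirect cable paths.

```python
# Best-case round-trip time (RTT) over fiber, given a one-way distance.
# Assumes signal speed in fiber of ~200,000 km/s, i.e. 200 km per millisecond.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def best_case_rtt_ms(distance_km: float) -> float:
    """Lower-bound RTT in milliseconds for a round trip over the given distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Roughly 5,570 km separate New York and London, so the best-case RTT
# for data making that round trip is a little under 56 ms.
print(round(best_case_rtt_ms(5570), 1))
```

This lower bound is why providers place data centers close to users: no amount of bandwidth can beat the physics of distance, so latency-sensitive services benefit directly from a nearby region.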
Data center colocation
Colocation offers an additional option for organizations trying to find a middle ground between cloud computing and building dedicated data centers. With colocation services, businesses can rent space for their own computing hardware from a data center facility.
Typically, the colocation customer leases space in server racks or rooms, and the colocation facility provides power, Internet connectivity and bandwidth, physical security, and environmental controls. Customers tend to be responsible for maintaining and administering their own hardware devices.
Customers receive some of the benefits of cloud computing with colocation, including reduced capital expenditure (CAPEX) and freedom from the need to build and maintain data center facilities—while continuing to maintain control over their own equipment.
Data center security
Enterprise-grade data centers should be protected with rigorous physical and logical security controls. Physical security measures should include monitoring, fire prevention and suppression systems, and access controls that ensure only verified employees are able to enter the facility.
Data security measures should safeguard the data that’s stored or processed in the facility when it is at rest (on any storage medium), in transit (to or from the facility), and in use (during processing or while resident in memory).
Logical security controls should include data encryption, network monitoring (usually by a security team working from a 24x7 security operations center (SOC)), and logging and auditing of all user activities. Most cloud experts promote a model of shared responsibility for security: Providers guarantee the physical security of the infrastructure, but the tenant is responsible for their data’s security, including access controls, configuration management, and security monitoring.
Data centers and IBM
The IBM Cloud network is built on a physical backbone of more than 60 data centers located in 19 countries on every continent except Antarctica. These data centers were built to meet global customers’ need for local data access, high reliability and performance, and low latency. The IBM Cloud data center network is subdivided into 18 global availability zones designed to ensure resilience, redundancy, and high availability.
All IBM data center facilities are built according to standardized, best practices-based designs, and feature the industry’s most advanced hardware and support equipment. High bandwidth is available to guarantee low latency and consistent performance. Physical security controls are rigorous, and IBM Cloud never provides encryption keys to government agencies or any other third parties (including its own internal teams).
IBM Cloud's global data center network provides the physical infrastructure to support more than 170 enterprise-grade products and services—everything from advanced business analytics and AI to cutting-edge developer tools and the world-class compute and storage resources to match.
To get instant access to the rich resources housed within IBM’s global data center network, sign up for a free IBM Cloud account today.