IBM is the source for IT solutions to deploy your AI applications
IBM Power Systems for AI can help enterprises realize the full potential of AI and analytics to achieve stronger data-driven decisions, access deeper insights, and develop trust and confidence.
Get accurate model results that can give you greater confidence in business decisions.
Dynamic, industry-tested and validated tools enable productivity across all of your resources: people, processors, and processes.
Stay on the cutting edge of AI technology with high data throughput, AI-assisted model optimization, and the backing of IBM Research.
Build on a secure AI solution backed by the built-in security of Power Systems and IBM-secured open source frameworks.
Meet the IBM Enterprise AI Servers
Power Systems LC922: The data server for AI
The IBM Power System LC922 server is engineered to meet AI data and workload requirements. It has a storage-rich design that delivers industry-leading compute to analyze and explore data, along with the vast storage capacity to contain it.
Power Systems IC922: The inference server for AI
The IBM Power System IC922 inference server is engineered to put your AI models to work and unlock business insights. It uses optimized hardware and software to deliver the necessary components for AI inference that will move you from data to insight.
Advanced interconnects (PCIe Gen4, OpenCAPI) support faster data throughput and decreased latency.
PCIe generation 4 delivers approximately 2x the data bandwidth of the PCIe generation 3 interconnect found in x86 servers.
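The "approximately 2x" figure follows directly from the published PCIe per-lane transfer rates. A minimal sketch of the arithmetic for a x16 link, using public PCIe specification numbers (not IBM benchmark data):

```python
# Theoretical x16 link bandwidth for PCIe Gen3 vs. Gen4.
# Per-lane rates and 128b/130b encoding come from the PCIe specs.
GT_PER_S = {"gen3": 8.0, "gen4": 16.0}   # giga-transfers per second, per lane
ENCODING_EFFICIENCY = 128 / 130           # 128b/130b line encoding (both gens)

def x16_bandwidth_gbytes(gen: str) -> float:
    # lanes * transfer rate * encoding efficiency, divided by 8 bits per byte
    return 16 * GT_PER_S[gen] * ENCODING_EFFICIENCY / 8

print(round(x16_bandwidth_gbytes("gen3"), 2))  # ~15.75 GB/s per direction
print(round(x16_bandwidth_gbytes("gen4"), 2))  # ~31.51 GB/s per direction
```

Doubling the per-lane transfer rate while keeping the same encoding yields the roughly 2x bandwidth advantage cited above.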
Elinar saw the disruptive potential of AI for its enterprise content management solutions and deployed IBM Power infrastructure to become an early adopter, slashing time-to-market and winning new clients.
Building a pipeline of data is an important aspect of any AI infrastructure. In order to support the data intensive needs of enterprise AI, companies need reliable storage solutions that are optimized from the point of data ingestion all the way to data inferencing.
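The ingestion-to-inference flow described above can be pictured as a chain of streaming stages. The sketch below is purely illustrative: the stage names and the toy "model" are assumptions for demonstration, not an IBM API.

```python
# Minimal sketch of an ingest -> preprocess -> inference pipeline.
# Each stage is a generator, so records stream through without
# materializing the whole dataset in memory.
from typing import Iterable, Iterator, List

def ingest(records: Iterable[str]) -> Iterator[str]:
    # In practice this stage would read from optimized storage
    # (e.g. a data lake or object store); here it just cleans input.
    for rec in records:
        yield rec.strip()

def preprocess(records: Iterable[str]) -> Iterator[List[float]]:
    # Toy featurization: one feature per record (its length).
    for rec in records:
        yield [float(len(rec))]

def infer(features: Iterable[List[float]], threshold: float = 4.0) -> Iterator[str]:
    # Stand-in for a trained model served on an inference node.
    for feats in features:
        yield "long" if feats[0] > threshold else "short"

raw = ["alpha ", " hi", "inference"]
results = list(infer(preprocess(ingest(raw))))
print(results)  # ['long', 'short', 'long']
```

Composing the stages as generators mirrors the point in the text: throughput depends on every link in the chain, from ingestion through to inference.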
¹ Results are based on IBM internal measurements running 1,000 iterations of an Enlarged GoogLeNet model (mini-batch size = 5) on an Enlarged ImageNet dataset (2240x2240). IBM stack: Power AC922; 40 cores (2 x 20c chips), POWER9 with NVLink 2.0; 2.25 GHz; 1024 GB memory; 4x Tesla V100 GPU; Red Hat Enterprise Linux 7.4 for Power Little Endian (POWER9) with CUDA 9.1/cuDNN 7. Competitive stack: 2x Intel Xeon E5-2640 v4; 20 cores (2 x 10c chips)/40 threads; 2.4 GHz; 1024 GB memory; 4x Tesla V100 GPU; Ubuntu 16.04 with CUDA 9.0/cuDNN 7. Software: IBM Caffe with LMS. Source code: https://github.com/ibmsoe/caffe/tree/master-lms (link resides outside ibm.com)