Intel® Gaudi® 3 AI Accelerators on IBM Cloud

The powerful, cost-effective and open AI accelerator for generative AI workloads.
Close view of Intel Gaudi 3, which features 64 Tensor Processor Cores (TPCs) and eight Matrix Multiplication Engines (MMEs)
IBM Cloud is the first global cloud service provider to deliver Intel® Gaudi® 3

Unlock, innovate and deploy new AI solutions with Intel® Gaudi® 3 AI accelerators on IBM Cloud®—designed to help you cost-effectively scale for enterprise AI demands with high-performance, flexibility in deployment and open development.   

Support a broad range of generative AI inferencing applications and frameworks, including large language models (LLM) and multi-modal models (MMM). Start quickly with IBM Cloud Virtual Servers for VPC deployment. Support for IBM watsonx®, Red Hat® OpenShift® Kubernetes Service and an automated Terraform-based deployment is planned for 1H 2025. Support for Red Hat OpenShift AI clusters, IBM Cloud Kubernetes Service and deployable architectures on IBM Cloud is planned for 2H 2025.

Learn more about Intel Gaudi 3 technology

A new solution for enterprise AI
Competitive AI price and performance

Get cost-effective generative AI performance for high inferencing throughput and optimized total cost of ownership.


Fast, efficient scaling

Easily increase system scalability with flexible capacity support and freedom from closed system lock-ins.

Open development, choice in deployment

Accelerate AI workloads with the Intel Gaudi 3 deployment model of your choice and help remove developer barriers with open-source models on an open-standards public cloud.

Deploy based on your infrastructure and software requirements

Intel® Gaudi® 3 AI accelerators are paired with 5th Gen Intel® Xeon® processors on IBM Cloud Virtual Servers for VPC.

IBM Cloud infrastructure for AI
Provision a stand-alone server on the IBM Cloud Virtual Private Cloud (VPC)

Intel Gaudi 3 AI accelerators can be deployed through IBM Cloud Virtual Servers for VPC cloud instances. IBM Cloud VPC is designed for high resiliency and security inside a software-defined network where clients can build isolated private clouds while maintaining essential public cloud benefits. The Intel Gaudi 3 cloud instance, which also supports Red Hat Enterprise Linux AI images, is ideal for clients with highly specialized software stacks, or those who require full control over their underlying server.
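For teams that automate provisioning, the following is a minimal sketch of creating a virtual server instance on IBM Cloud VPC with the IBM Cloud VPC Python SDK (ibm-vpc). The zone, profile name, image, VPC, subnet and SSH key values below are placeholders, not the actual Intel Gaudi 3 profile or image identifiers; look these up in the IBM Cloud catalog and documentation for your account and region.

```python
# Minimal sketch: creating an IBM Cloud VPC virtual server instance with the
# IBM Cloud VPC Python SDK (pip install ibm-vpc).
# All IDs, the zone and the instance profile name are placeholders -- check the
# IBM Cloud catalog for the actual Intel Gaudi 3 profile and image.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_vpc import VpcV1

authenticator = IAMAuthenticator("YOUR_IBM_CLOUD_API_KEY")
vpc_service = VpcV1(authenticator=authenticator)
vpc_service.set_service_url("https://us-east.iaas.cloud.ibm.com/v1")

instance_prototype = {
    "name": "gaudi3-demo-vs",
    "zone": {"name": "us-east-1"},                      # placeholder zone
    "profile": {"name": "GAUDI3_PROFILE_NAME"},         # placeholder profile name
    "image": {"id": "IMAGE_ID"},                        # e.g. a RHEL AI image ID
    "vpc": {"id": "VPC_ID"},
    "primary_network_interface": {"subnet": {"id": "SUBNET_ID"}},
    "keys": [{"id": "SSH_KEY_ID"}],
}

response = vpc_service.create_instance(instance_prototype)
print(response.get_result()["id"])
```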

Explore server deployment
The difference is in the design
High-bandwidth memory (HBM)

Speed up generative AI performance and build with more tokens and more models on a single card with 128 GB of HBM capacity and 3.7 TB/s of bandwidth.

Industry-standard Ethernet

Eliminate fabric lock-in and help reduce integration costs while increasing your choice of switching with industry-standard Ethernet.


High-capacity data transmission

Get massive scale-out and scale-up capacity with 24x 200 GbE ports of high-capacity RoCE (RDMA over Converged Ethernet).

Open development

Simplify development with the Intel® Gaudi® 3 Extension for PyTorch and help reduce development time and code maintenance with an optimized model library on Hugging Face.
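As one hedged illustration of that workflow, the sketch below fine-tunes a Hugging Face model on Gaudi with the Optimum Habana library (optimum-habana); the model name, dataset and Gaudi configuration are placeholder choices rather than a recommended setup.

```python
# Sketch: training a Hugging Face model on Intel Gaudi with optimum-habana.
# Model, dataset and Gaudi config names are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

training_args = GaudiTrainingArguments(
    output_dir="./gaudi-out",
    use_habana=True,           # run on the Gaudi (HPU) device
    use_lazy_mode=True,        # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # Gaudi config published on the Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```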

High core performance

Take advantage of the Intel® Gaudi® 3 AI Accelerator Matrix Multiplication Engine with specialized high-performance cores designed to require fewer data transfers.

Simplified migration

Lift and shift models with as few as three lines of code, using open software and user-friendly developer tools.
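As a rough sketch of what such a migration can look like, the snippet below moves a plain PyTorch training loop onto the Gaudi (HPU) device using the Intel Gaudi PyTorch bridge; the model and tensors are stand-ins for your own workload.

```python
# Sketch: the typical handful of changes to run existing PyTorch code on Gaudi.
# The model and input tensors are placeholders for your own workload.
import torch
import habana_frameworks.torch.core as htcore   # (1) load the Intel Gaudi PyTorch bridge

device = torch.device("hpu")                     # (2) target the Gaudi (HPU) device
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 1024, device=device)
targets = torch.randn(32, 1024, device=device)

for _ in range(10):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
    htcore.mark_step()                           # (3) flush the lazy-mode graph each step

print(loss.item())
```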

Frequently asked questions

What are the key technical features of Intel® Gaudi® 3 AI accelerators on IBM Cloud?

Intel® Gaudi® 3 AI accelerators on IBM Cloud are designed for high-performance AI workloads, featuring 64 Tensor Processor Cores (TPCs) and eight Matrix Multiplication Engines (MMEs) to help accelerate deep neural network computations. They are equipped with 128 GB of HBM2E memory with up to 3.7 TB/s of memory bandwidth, and support industry-standard Ethernet networking with 24x 200 GbE ports, providing 9.6 Tbps of bidirectional bandwidth for scalable system interconnectivity.
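The bidirectional figure follows directly from the port count, as the quick arithmetic check below shows (numbers taken from the specifications above):

```python
# Quick arithmetic check of the Ethernet figures quoted above.
ports = 24
gbps_per_port = 200                            # 200 GbE, per direction

one_way_tbps = ports * gbps_per_port / 1000    # 4.8 Tbps in each direction
bidirectional_tbps = 2 * one_way_tbps          # 9.6 Tbps bidirectional
print(one_way_tbps, bidirectional_tbps)        # 4.8 9.6
```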

Which AI applications do Intel® Gaudi® 3 AI accelerators support?

Intel® Gaudi® 3 AI accelerators deliver broad AI application support, including inferencing, 3D generation, text generation, classification, video generation, sentiment analysis, translation, image generation, summarization and Q&A, with a focus on multi-modal models, large language models (LLMs) and retrieval-augmented generation (RAG).

How does the memory design benefit AI developers?

With 128 GB of HBM2E memory and up to 3.7 TB/s of memory bandwidth, Intel® Gaudi® 3 AI accelerators on IBM Cloud help ensure fast data throughput, reducing bottlenecks and enabling developers to process massive datasets more quickly and efficiently.
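As a rough, back-of-the-envelope illustration (weights only, ignoring activations, KV cache and runtime overhead), the sketch below estimates how many model parameters fit in 128 GB of HBM at common precisions:

```python
# Back-of-the-envelope: model sizes whose weights alone fit in 128 GB of HBM.
# Real deployments need extra headroom for activations, KV cache and runtime.
HBM_BYTES = 128e9

def max_params_billions(bytes_per_param: float) -> float:
    """Largest parameter count (in billions) whose weights fit in HBM."""
    return HBM_BYTES / bytes_per_param / 1e9

print(f"BF16/FP16 (2 bytes/param): ~{max_params_billions(2):.0f}B parameters")
print(f"FP8 (1 byte/param): ~{max_params_billions(1):.0f}B parameters")
```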

How are Intel® Gaudi® 3 AI accelerators deployed on IBM Cloud?

Intel® Gaudi® 3 AI accelerators on IBM Cloud are fitted within IBM Cloud Virtual Servers on the IBM Cloud Virtual Private Cloud (VPC). IBM Cloud VPC is a highly resilient and highly secure software-defined network (SDN) on which you can build isolated private clouds while maintaining essential public cloud benefits. The Intel® Gaudi® 3 virtual server profile on IBM Cloud VPC is a pre-configured combination of vCPU, RAM and storage that lets you quickly start a virtual server instance.

Which AI frameworks and models are supported?

Intel® Gaudi® 3 AI accelerators on IBM Cloud support popular frameworks, including PyTorch, ONNX and DeepSpeed. Over 400,000 models are available on Hugging Face, optimized for use with the Optimum Habana software library. The full Intel® Gaudi® software suite and framework support are designed to facilitate easy migration, enabling developers to integrate existing models with minimal code changes.


Take the next step

Explore Intel® Gaudi® 3 AI Accelerators on IBM Cloud.

Configure, price and quote
Explore documentation