With NVIDIA L4 GPUs, industries are achieving 2.5x better performance than the previous generation of GPUs for generative AI use cases. Creators can achieve 4x higher graphics performance to generate cinematic-quality graphics, scenes for virtual worlds and cloud gaming. Similarly, AR/VR live-stream use cases have enjoyed a 120x performance boost compared to CPU-based solutions. As the need for GPUs grows, IBM continues to demonstrate its commitment to addressing climate change; NVIDIA L4 GPUs consume less energy and have a significantly lower carbon footprint.
IBM is proud to announce GX3, a suite of NVIDIA L4 Tensor Core GPU flavors, as the newest addition to the GPU profiles available with IBM Cloud Kubernetes Service (IKS) and Red Hat OpenShift on IBM Cloud (ROKS) clusters that run on IBM Cloud VPC.
For more information on NVIDIA L4 GPUs and how you can use them to accelerate and optimize your workload performance, see NVIDIA L4 Tensor Core GPU (link resides outside of ibm.com).
The following NVIDIA L4 GPU flavors are available for IBM Cloud VPC clusters that run Kubernetes version 1.28+ or any version of Red Hat OpenShift.
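As an illustrative sketch, you can list the GPU flavors available in a zone and provision a VPC cluster with one of them using the IBM Cloud CLI. The flavor name, zone, VPC and subnet IDs below are placeholders, not guaranteed values; check the output of the `flavors` command for the GX3 flavor names offered in your region.

```shell
# List the worker-node flavors (including GX3 GPU flavors) available in a zone
ibmcloud ks flavors --zone us-south-1 --provider vpc-gen2

# Create a VPC cluster with a GX3 flavor (placeholder flavor name and IDs)
ibmcloud ks cluster create vpc-gen2 \
  --name my-gpu-cluster \
  --zone us-south-1 \
  --version 1.28 \
  --flavor <gx3-flavor-name> \
  --workers 2 \
  --vpc-id <vpc-id> \
  --subnet-id <subnet-id>
```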
Enjoy a plug-and-play experience when provisioning a cluster with IBM Cloud Kubernetes Service. GPU drivers are installed automatically, so you can get started right away by provisioning a new cluster at version 1.28 or later with GX3 worker nodes. No additional configuration is required to set up the GPU. If you already have a 1.28+ cluster, simply add a worker pool that uses GX3 nodes to your existing cluster. For more information, see Deploying an app on a GPU machine for IBM Cloud Kubernetes Service.
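Once the GX3 worker nodes are up, a workload claims a GPU through the standard Kubernetes `nvidia.com/gpu` resource. The sketch below is a minimal example pod, assuming a public CUDA base image (the image tag is an assumption; substitute your own GPU-enabled image):

```yaml
# Minimal pod that requests one NVIDIA L4 GPU and prints nvidia-smi output
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04  # assumed image tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1  # schedule onto a GX3 (L4) worker node
```

Apply it with `kubectl apply -f gpu-smoke-test.yaml` and check `kubectl logs gpu-smoke-test` to confirm the L4 GPU is visible inside the container.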
With Red Hat OpenShift on IBM Cloud, installing the NVIDIA GPU Operator automates the management of all the necessary NVIDIA software components. Once installation is complete, provision a new cluster or worker pool with GX3 worker nodes. For more information, see Deploying an app on a GPU machine for Red Hat OpenShift on IBM Cloud.
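The GPU Operator is typically installed from OperatorHub; a minimal sketch of the equivalent Subscription manifest is shown below. The channel value is an assumption and changes with operator releases, so confirm the current channel in OperatorHub before applying:

```yaml
# Subscription for the certified NVIDIA GPU Operator (sketch; verify the channel)
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: nvidia-gpu-operator
spec:
  channel: stable            # assumed channel name; check OperatorHub
  name: gpu-operator-certified
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

This assumes the `nvidia-gpu-operator` namespace and a matching OperatorGroup already exist; the OperatorHub console flow creates both for you.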
Furthermore, you can leverage Red Hat OpenShift AI on GX3 worker nodes to rapidly develop, train, serve and monitor machine learning models on premises, in the public cloud or at the edge. To learn more, see Installing Red Hat OpenShift AI.