IBM and H2O.ai combine forces to provide machine learning on IBM Power Systems

4 minute read | June 7, 2018

Collaboration taps POWER9 for open source AI workbench

I am delighted to announce our new collaboration with H2O.ai to bring together our technologies surrounding artificial intelligence (AI), machine learning and deep learning. This collaboration further cements and grows IBM Power Systems’ goal of being an enterprise’s preferred AI platform. By working with H2O.ai, we are expanding our capabilities by offering a simplified onramp to machine learning on Power.

Accelerating the adoption of AI by automating machine learning

Data scientists today rely on a host of open-source software to build AI models that extract insights from data. However, we have seen that data scientists are often challenged to make sense of, and deploy, the many open-source pieces needed to build an AI workflow.

IBM and H2O.ai have a shared vision of improving the reach and adoption of AI by bringing it to more software application developers, whether they have data science skills or not. H2O’s machine learning (ML) software, Driverless AI, automates several key steps in the machine learning pipeline, such as feature engineering, model validation and model tuning. Similarly, IBM’s PowerAI is an enterprise distribution of some of the most popular open-source deep learning frameworks, including TensorFlow, Keras and PyTorch. PowerAI enhances open-source software like TensorFlow to make it easier to use in the enterprise, with greatly improved model training times.
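To make the idea of automated model validation and tuning concrete, here is a minimal, self-contained sketch. This is not H2O’s API or implementation; it is an illustrative stand-in showing the kind of hyperparameter sweep that Driverless AI automates, using a one-weight ridge regression fit with a hand-picked penalty grid.

```python
# Illustrative sketch (not H2O's API): the model-validation-and-tuning loop
# that tools like Driverless AI automate. We fit y = w*x with an L2 penalty
# and pick the penalty strength (lambda) that minimizes validation error.

def fit_ridge_1d(xs, ys, lam):
    # closed-form ridge solution for a single weight:
    # w = sum(x*y) / (sum(x^2) + lambda)
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def val_error(w, xs, ys):
    # mean squared error of the fitted weight on held-out data
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# tiny synthetic train/validation split drawn from y ~ 2x (made-up numbers)
train_x, train_y = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
val_x, val_y = [4.0, 5.0], [8.0, 10.1]

# automated "tuning": sweep the hyperparameter grid, keep the best model
best = min(
    ((lam, fit_ridge_1d(train_x, train_y, lam)) for lam in [0.0, 0.1, 1.0, 10.0]),
    key=lambda lw: val_error(lw[1], val_x, val_y),
)
print(f"best lambda={best[0]}, weight={best[1]:.3f}")
```

An automated pipeline generalizes this same loop across many feature transformations and model families, which is what makes it usable by developers without deep data science expertise.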

Together, H2O Driverless AI and IBM PowerAI provide companies with a data science platform or an “AI workbench” that addresses a broad set of use cases for machine learning and deep learning in every industry.

Accelerating H2O Driverless AI with IBM POWER9 and GPUs

The IBM POWER9 processor was built from the ground up for data-intensive workloads like deep learning and machine learning. With industry-exclusive I/O technology, IBM POWER9 includes the next-generation NVIDIA NVLink interconnect for CPU-to-GPU communication, which improves data movement between POWER9 CPUs and NVIDIA Tesla V100 GPUs by up to 5.6x compared to the PCIe Gen3 buses used in x86 systems[i]. This interconnect is capable of improving the training times of deep learning frameworks by nearly 4x[ii], allowing enterprises to build more accurate AI applications, faster.

GPU acceleration on Power Systems can boost the performance of machine learning applications as well. By running Driverless AI on GPU-equipped Power Linux systems, users are able to improve the performance of time-series workloads by 5x compared to CPU-only x86 systems, according to H2O.ai’s internal measurements. Time-series machine learning looks at data collected over time. It can be used to help businesses predict future sales revenue based on seasonal trends and the current sales pipeline, or to predict equipment failures before they occur based on cascading warning indicators.
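The seasonal sales-forecasting idea above can be sketched in a few lines. This is a deliberately simple, hypothetical example (a seasonal-naive forecast with a trend adjustment), not Driverless AI’s method, which automates far richer lag and seasonal feature engineering; the quarterly sales figures are made up.

```python
# Hypothetical sketch of time-series forecasting from seasonal trends:
# predict the next quarter from the same quarter one year ago, adjusted
# by the average year-over-year change. (Illustrative only.)

def seasonal_forecast(series, season_len):
    """next value = value one season ago + average year-over-year change"""
    same_season_last_year = series[-season_len]
    yoy_changes = [series[i] - series[i - season_len]
                   for i in range(season_len, len(series))]
    trend = sum(yoy_changes) / len(yoy_changes)
    return same_season_last_year + trend

# quarterly sales with a seasonal spike in Q4 and steady growth (made-up data)
sales = [100, 110, 105, 150,   # year 1: Q1..Q4
         112, 121, 118, 165]   # year 2: Q1..Q4
print(seasonal_forecast(sales, season_len=4))  # forecast for year-3 Q1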

Unlocking explainable AI on IBM Power Systems with H2O.ai

For some enterprises adopting AI, the ability to explain how a system reached its AI-driven conclusion is a critical requirement. Just as children often aren’t satisfied with parents telling them they can’t do something “because they said so,” enterprise leaders need more justification for changing business processes than “because an algorithm said so.” Unlike traditional analytics, it often isn’t clear how an AI system came to its conclusion. Driverless AI incorporates a variety of technologies to help instrument model development and track why a model makes one prediction or decision over another. This capability opens up the potential to use AI models in regulated industries where auditability is a requirement, and even where there is no specific regulatory requirement, it can help users build confidence in the decision-making capability of the model.
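One widely used explainability technique is permutation importance: shuffle each input feature and measure how much the model’s error grows, revealing which features the model’s decisions actually hinge on. The sketch below is a hedged illustration of that general idea, not H2O’s implementation; the "trained" model is a stand-in linear scorer.

```python
# Hedged sketch (not H2O's implementation) of permutation importance:
# if shuffling a feature sharply increases the model's error, the model
# relied heavily on that feature for its predictions.
import random

def model(row):
    # stand-in "trained" model: a fixed linear scorer over two features
    return 3.0 * row[0] + 0.2 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

random.seed(0)
rows = [[random.random(), random.random()] for _ in range(200)]
targets = [model(r) for r in rows]   # targets the model fits exactly
base_error = mse(rows, targets)      # 0.0 by construction

importance = []
for j in range(2):
    shuffled_col = [r[j] for r in rows]
    random.shuffle(shuffled_col)     # break the feature/target relationship
    perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, shuffled_col)]
    importance.append(mse(perturbed, targets) - base_error)

print(importance)  # feature 0 dominates: predictions hinge on it
```

Reporting scores like these alongside each prediction is one way a model’s reasoning can be surfaced for auditors and business stakeholders.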

Be in the driver’s seat for machine learning with IBM and H2O.ai

Today’s collaboration represents another step forward in IBM’s commitment to making Power Systems one of the leaders in enterprise AI. With H2O.ai’s Driverless AI integration, we are expanding customer choice and adding more onramps for organizations of all sizes to adopt AI and accelerate the time to value for the AI-driven insights that transform their business.

To learn more about this announcement from H2O.ai’s CEO and founder Sri Ambati, you can read his blog post here.

You can also view their news release here.

Please leave a comment below or reach out to your IBM representative if you want to try H2O Driverless AI on POWER9 Linux Systems.

[i] Results are based on IBM Internal Measurements running the CUDA H2D Bandwidth Test
Hardware: Power AC922; 32 cores (2 x 16c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4xTesla V100 GPU; Ubuntu 16.04. S822LC for HPC; 20 cores (2 x 10c chips), POWER8 with NVLink; 2.86 GHz, 512 GB memory, Tesla P100 GPU
Competitive HW: 2x Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; Intel Xeon E5-2640 v4; 2.4 GHz; 1024 GB memory, 4xTesla V100 GPU, Ubuntu 16.04

[ii] Results are based on IBM Internal Measurements running 1000 iterations of Enlarged GoogleNet model on Enlarged Imagenet Dataset (2560×2560). Hardware: Power AC922; 40 cores (2 x 20c chips), POWER9 with NVLink 2.0; 2.25 GHz, 1024 GB memory, 4xTesla V100 GPU, Pegas 1.0. Competitive stack: 2x Xeon E5-2640 v4; 20 cores (2 x 10c chips) / 40 threads; 2.4 GHz; 1024 GB memory, 4xTesla V100 GPU, Ubuntu 16.04. Software: Chainer v3 / LMS / Out of Core with CUDA 9 / cuDNN 7 with patches found at and