How to leverage the IBM-zDNN-Plugin for TensorFlow

AI brings transformative capabilities that enterprise clients are eager to leverage. The ability to draw new insights from their data and applications represents a massive opportunity.

However, artificial intelligence (AI) is also a very complex and continuously developing space. With the exciting opportunities comes the need to invest resources to develop skills on the latest technologies and techniques that are in use in the industry. At its core, AI software is driven by a rich and diverse open-source ecosystem that supports multiple phases of the model lifecycle. This includes the ability to provide highly optimized training and inference capabilities that can accelerate time to value.

As we’ve worked with enterprise clients, it’s become clear that they recognize and embrace the use of open source in their AI projects and have developed advanced skills in popular frameworks like TensorFlow. To enable our clients to leverage these skills in IBM Z and IBM LinuxONE environments, IBM has focused on ensuring the most exciting and popular open-source AI is available on our systems with the same look and feel as other commonly used environments.

IBM is also focusing on ensuring models are seamlessly optimized for IBM Z and LinuxONE when deployed for production use. Through technologies like the Open Neural Network Exchange (ONNX) and the IBM Z Deep Learning Compiler, we provide simple portability and optimized inference that can leverage our newest capabilities, including the on-chip AI accelerator in IBM z16 and LinuxONE (the IBM Integrated Accelerator for AI).

Recently, we announced the general availability of new capabilities that enable TensorFlow to directly leverage the on-chip AI inference accelerator featured in IBM z16 and LinuxONE Emperor 4.

What is the IBM-zDNN-Plugin for TensorFlow?

TensorFlow is one of the most popular AI frameworks, with over 171K GitHub stars, more than 150K active contributors, and over 87K GitHub forks. It is an open-source framework that supports the entire machine-learning lifecycle, from model development through deployment. TensorFlow also has a robust extended ecosystem that can help augment your AI projects.

A few weeks back, we introduced the ibm-zdnn-plugin for TensorFlow. We have optimized it not only to run on the IBM Z and LinuxONE platforms, but also to leverage IBM z16's on-chip Integrated Accelerator for AI. As a result, customers can bring TensorFlow models trained anywhere and seamlessly deploy them on the IBM Z platform, closer to where their business-critical applications run.

This enables real-time inferencing across a massive number of transactions with negligible latency. For example, customers can screen all of their credit card transactions for fraud in real time and react quickly enough to prevent the fraud from happening in the first place.

On IBM zSystems and LinuxONE, TensorFlow has the same look and feel as on any other platform. Users can continue to build and train their TensorFlow models on the platform of their choice (x86, cloud, or IBM zSystems), and TensorFlow models trained on other platforms port to IBM Z and LinuxONE with ease.
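To illustrate that portability, here is a minimal sketch of the flow, assuming a simple Keras model and the standard TensorFlow SavedModel format; the model architecture, path and shapes are illustrative assumptions, not taken from the announcement:

    import numpy as np
    import tensorflow as tf

    # On any platform (x86, cloud, or IBM zSystems): build, train, and save a model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(np.random.rand(256, 32), np.random.randint(0, 2, 256), epochs=1)
    model.save("fraud_model")  # standard TensorFlow SavedModel format (illustrative path)

    # On IBM Z or LinuxONE: load the same SavedModel and run inference unchanged.
    reloaded = tf.keras.models.load_model("fraud_model")
    print(reloaded.predict(np.random.rand(8, 32)))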

We leverage the TensorFlow community's PluggableDevice architecture and have developed an IBM Z-focused pluggable device that targets the IBM Integrated Accelerator for AI on IBM z16.
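As a quick sanity check after installing the plugin (a hedged sketch, not from the announcement), you can list TensorFlow's physical devices; on supported hardware, the pluggable device registered by ibm-zdnn-plugin should appear alongside the CPU:

    import tensorflow as tf

    # CPU devices are always listed; on supported hardware, the pluggable device
    # registered by ibm-zdnn-plugin should appear alongside them. The exact device
    # name reported by the plugin is not shown here and may vary.
    print(tf.config.list_physical_devices())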

How to get started

You can begin leveraging the power of the IBM-zDNN-Plugin for TensorFlow with very little effort. Getting started is a simple process:

  • Build and train the TensorFlow model on the platform of your choice.
  • Install TensorFlow 2.9 and the IBM Z Deep Neural Network Library (zDNN):
    • Container images with TensorFlow core 2.9 pre-built and pre-installed are available on the IBM Z and LinuxONE Container Registry.
    • Alternatively, you can build and install TensorFlow from source by following the steps here.
  • Install the IBM-zDNN-Plugin from the Python Package Index (PyPI).
  • On an IBM z16 or LinuxONE Emperor 4 system, TensorFlow will transparently target the Integrated Accelerator for AI for several compute-intensive operations during inferencing, with no changes needed to your TensorFlow models (see the sketch after this list).
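Putting the steps together, the sketch below assumes a TensorFlow 2.9 environment on IBM z16 or LinuxONE Emperor 4 and a previously exported SavedModel; the package name comes from PyPI as noted above, while the model path and input shape are illustrative:

    # Install the plugin into an environment that already has TensorFlow 2.9
    # (shell command, run once):
    #   pip install ibm-zdnn-plugin

    import numpy as np
    import tensorflow as tf

    # Load a model trained on any platform and exported in SavedModel format.
    model = tf.keras.models.load_model("fraud_model")  # illustrative path

    # Run inference as usual; on IBM z16 / LinuxONE Emperor 4, eligible
    # compute-intensive operations are routed transparently to the Integrated
    # Accelerator for AI, with no changes to the model or this code.
    batch = np.random.rand(8, 32).astype("float32")  # illustrative input shape
    print(model.predict(batch))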

Our recent technical blog has further details and points to a simple example that can guide you in getting started.
