At IBM Think this year, we are excited to announce the next natural step in the evolution of the PowerAI portfolio: PowerAI Enterprise is joining the Watson family.
Cognitive applications and workloads like AI, machine learning and deep learning are of key strategic importance to our clients and an area of strategic investment within IBM. To better serve the needs of our clients and build upon the success we have had with PowerAI, we are deepening our integration with the wider portfolio of IBM AI offerings and capabilities. Our goal across the IBM teams is to bring the best of the Watson data science and machine learning portfolio together with PowerAI into a common set of offerings.
What is happening with PowerAI?
- PowerAI Enterprise is now called Watson Machine Learning Accelerator. Watson Machine Learning Accelerator is part of Watson Machine Learning.
- There will be no change this quarter to the PowerAI base framework package/offering, allowing us to continue making frameworks easily available for clients who want to use them on their own on Power servers.
- As part of this quarter’s release, we will be adding a deeper integration between Accelerator and Watson Studio/Watson Machine Learning. The goal is to make the distributed GPU training and inference capabilities that were part of PowerAI Enterprise available to a broader range of data science clients and users. With the introduction of Watson Machine Learning Accelerator, we bring the ability to deliver an optimized and shared data science environment to the IBM Watson portfolio.
- The initial release of the integrated PowerAI components in the Watson portfolio will target on-premises deployment in private cloud environments.
What does this mean for you as a PowerAI client?
- A common AI software stack from IBM now brings together the best of our data science productivity tools across Watson Studio, Watson Machine Learning and Accelerator.
- Users of Watson Studio and the rest of the IBM AI portfolio will now have at their fingertips all the AI acceleration capabilities that were once part of PowerAI, available to help increase AI performance and reduce time to results.
- Support for optimized workload management and sharing of AI/analytics resources through the integration of Apache Spark, Python and deep learning frameworks.
- Distributed deep learning: enabling efficient scaling of deep learning training across multiple GPUs.
- Software libraries for machine learning acceleration (SnapML) and large model support (LMS), allowing data scientists to leverage the unique capabilities of the underlying Power servers to speed up their ML and DL workloads.
- Elastic training and inference enabling optimized use of CPU and GPU resources across multiple servers.
- Productivity tools for training, monitoring and hyperparameter optimization that assist data scientists with their experiments and speed up the model development and training cycle.
- An integrated team of industry and technical experts from IBM, across Watson, Analytics and Cognitive Systems, that can assist clients on their data science and AI journey. The IBM team can provide a range of value-add and paid service offerings covering everything from the data science software itself to assistance in developing AI models and the design and delivery of the required AI infrastructure architecture.
In conclusion, our PowerAI team in Cognitive Systems is extremely excited to be a part of this evolution of the IBM AI portfolio. The initial success we have had with our clients on the journey we started over two years ago has set us up to accelerate the PowerAI strategy through its deeper integration into the larger IBM AI plans. We look forward to connecting with many more clients through the Watson team, bringing them the value of our Accelerator software and the underlying Cognitive Systems, and helping them advance on their journey to AI.
If you are attending THINK 2019 in San Francisco, join me for my session, Better Together: IBM PowerAI Family and the Broader AI Ecosystem.