I am pleased to announce that several major deep learning frameworks are now available on the Power platform, as "distros" (distributions) that are easily installable using the Ubuntu system installer.
Deep learning, the use of multi-layer neural networks, has revolutionized speech recognition, natural language processing, and computer vision, and it continues to transform IT thanks to the availability of rich data sets, new methods for accelerating neural network training, and extremely fast hardware with GPU accelerators.
Deep learning applications range from safety systems to personal assistants to enterprise software. Driver-assist technologies increasingly rely on machine and deep learning to recognize objects in a rapidly changing environment, and personal digital assistants are learning to categorize and group e-mail, text messages, and other content based on context. In the enterprise, machine and deep learning applications can identify high-value sales opportunities, enable smart call-center automation, detect and react to intrusion or fraud, and suggest solutions to technical or business problems.
The frameworks that are available on POWER as pre-built binaries optimized for GPU acceleration include:
- Caffe, a dedicated ANN training environment developed by the Berkeley Vision and Learning Center at the University of California at Berkeley
- Torch, a framework consisting of several ANN modules built on an extensible mathematics library
- Theano, a Python library for defining, optimizing, and evaluating mathematical expressions over multi-dimensional arrays, developed at the Université de Montréal
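What these frameworks provide, beyond GPU acceleration, is automatic differentiation and optimized layer implementations. As a rough illustration only (this sketch uses plain NumPy and is not taken from Caffe, Torch, or Theano), here is the kind of multi-layer network training step they implement for you:

```python
# Illustrative sketch: a two-layer network trained by gradient descent,
# written by hand in NumPy. Frameworks like Caffe, Torch, and Theano
# automate the gradient computation below and run it on the GPU.
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 4 inputs -> 8 hidden ReLU units -> 1 linear output.
W1, b1 = rng.standard_normal((4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros(1)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)   # hidden layer with ReLU
    return h, h @ W2 + b2              # linear output layer

def train_step(X, y, lr=0.1):
    """One gradient-descent step on the squared error."""
    global W1, b1, W2, b2
    h, y_hat = forward(X)
    err = y_hat - y                    # shape (n, 1)
    # Output-layer gradients.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    # Hidden-layer gradients, backpropagated through the ReLU.
    dh = (err @ W2.T) * (h > 0)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float((err ** 2).mean())

X = rng.standard_normal((64, 4))
y = X.sum(axis=1, keepdims=True)       # a simple target to fit
losses = [train_step(X, y) for _ in range(200)]
```

After 200 steps the loss should fall well below its starting value; the frameworks listed above do the same work with far less code and with the heavy linear algebra offloaded to the GPU.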
In addition to the pre-built, GPU-accelerated binaries for Power, we have worked to ensure that these environments can be built from their source repositories by those who prefer to compile their own binaries. Finally, we have enabled the DL4J (Deep Learning 4 Java), TensorFlow, and CNTK frameworks and are working with their developers to ensure Power support for these environments “out of the box”.
The POWER platform is ideal for deep learning, big data, and machine learning due to its high performance, large caches, 2x-3x higher memory bandwidth, very high I/O bandwidth, and of course, tight integration with GPU accelerators. The parallel, multi-threaded Power architecture, with its high memory and I/O bandwidth, is particularly well suited to ensuring that GPUs are used to their fullest potential.
Today, these software packages are available on our Power Linux 822LC server, which features two POWER8 CPUs and two NVIDIA Tesla K80 GPU accelerators. We are currently optimizing the deep learning software to take advantage of upcoming POWER8 servers connected via the high-speed NVLink interface to NVIDIA Tesla P100 (Pascal) GPU accelerators. This brings a huge advantage to cognitive computing applications like deep learning by giving applications running on the GPU fast access to large system memory over the NVLink interface to the CPU. Coupled with the higher-performance POWER8 CPUs, the overall workflow for deep learning applications such as voice recognition, natural language processing, and computer vision gets a massive performance leap thanks to data-centric system design and optimization.
So, give deep learning on Power Linux servers a shot. To get started with the MLDL frameworks, find installation instructions and more information at http://openpowerfoundation.org/blogs/openpower-deep-learning-distribution/, or go directly to the installation instructions at http://ibm.biz/power-mldl. Contact me at firstname.lastname@example.org to get started with an evaluation.
Update: The latest version of the co-optimized binary distribution of deep learning frameworks for Power is now available under the name PowerAI. You can download the latest version of PowerAI for Red Hat and Ubuntu here.