GPU

High-Efficiency Distributed Learning for Speech Modeling

A distributed deep learning architecture for automatic speech recognition that shortens run time without compromising model accuracy.


The Future of AI Needs Better Compute: Hardware Accelerators Based on Analog Memory Devices

A machine learning technique for evaluating the materials used to make analog accelerators, whose lower power consumption and higher speed can drive deep learning.


IBM GPU-Accelerated Semantic Similarity Search at Scale Shows ~30,000x Speed-Up

Modern data processing systems must be able to ingest, store, and search across a prodigious amount of textual information. Efficient text search requires both high-quality results and fast execution across millions of documents. The amount of unstructured text-based data grows every day. Querying, clustering, and classifying this big […]

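At its core, GPU-accelerated semantic search reduces to comparing a query embedding against millions of document embeddings in one batched matrix operation. The sketch below illustrates only that general idea; the corpus size, embedding dimension, and random vectors are placeholder assumptions, not the system or the numbers reported in the post.

import torch

# Hypothetical corpus embeddings: 1M documents, 384-dim vectors
# (in practice these would come from a sentence-embedding model).
num_docs, dim = 1_000_000, 384
device = "cuda" if torch.cuda.is_available() else "cpu"

doc_emb = torch.randn(num_docs, dim, device=device)
doc_emb = torch.nn.functional.normalize(doc_emb, dim=1)  # unit length, so dot product = cosine similarity

def search(query_emb: torch.Tensor, k: int = 10):
    """Return indices and scores of the k most similar documents."""
    q = torch.nn.functional.normalize(query_emb.to(device), dim=0)
    scores = doc_emb @ q            # one batched matrix-vector product on the GPU
    top = torch.topk(scores, k)
    return top.indices.tolist(), top.values.tolist()

# Example: a random "query" vector stands in for an encoded sentence.
idx, sim = search(torch.randn(dim))
print(idx[:3], sim[:3])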

IBM Sets Tera-scale Machine Learning Benchmark Record with POWER9 and NVIDIA GPUs; Available Soon in PowerAI

Today, at IBM THINK in Las Vegas, we are reporting a breakthrough in AI performance using new software and algorithms on optimized hardware, including POWER9 with NVIDIA® V100™ GPUs. In a newly published benchmark, using an online advertising dataset released by Criteo Labs with over 4 billion training examples, we train a logistic regression classifier […]

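For context on what such a benchmark trains, here is a rough, hypothetical sketch of mini-batch logistic regression on a GPU. The dataset shapes, positive rate, and hyperparameters are made-up stand-ins for a click-prediction workload; this is not the PowerAI/Snap ML implementation or the Criteo data described above.

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-in for a click-through-rate dataset: dense features, 0/1 labels.
n, d = 100_000, 64
X = torch.randn(n, d, device=device)
y = (torch.rand(n, device=device) < 0.03).float()   # ~3% positive rate, as in ad data

w = torch.zeros(d, device=device, requires_grad=True)
b = torch.zeros(1, device=device, requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.1)

for epoch in range(5):
    for i in range(0, n, 8192):                      # mini-batches keep GPU memory bounded
        xb, yb = X[i:i + 8192], y[i:i + 8192]
        logits = xb @ w + b
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, yb)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")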

The future of hardware is AI

To make great strides in AI, hardware must change: starting with GPUs, then evolving to analog devices, and eventually to fault-tolerant quantum computers.
