IBM Research Editorial Staff

Will Adam Algorithms Work for Me?

A simple and effective approach to monitoring the convergence of Adam algorithms, a generic class of adaptive gradient methods for non-convex optimization.
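
The teaser doesn't spell out the monitoring criterion, so the sketch below is only illustrative: it shows the standard Adam update alongside a hypothetical convergence check based on a running window of gradient norms. The function names, window size, and tolerance are assumptions, not the method from the post.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates with bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def looks_converged(grad_norm_history, window=50, tol=1e-4):
    """Illustrative monitor (an assumption, not the paper's criterion):
    declare convergence when the recent mean squared gradient norm is small."""
    if len(grad_norm_history) < window:
        return False
    return np.mean(np.array(grad_norm_history[-window:]) ** 2) < tol
```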

IBM Research AI at CHI 2019

At the ACM CHI Conference on Human Factors in Computing Systems, IBM researchers present recent work in human-computer interaction in the context of AI.

IBM Research at the Intersection of HCI and AI

IBM research on explainable AI, human-computer interaction (HCI), and automated ML is featured at this year's Conference on Intelligent User Interfaces.

Think 2019 Kicks Off with Live Debate Between Man and Machine

Today, an artificial intelligence (AI) system engaged in a live, public debate with a human debate champion at Think 2019 in San Francisco.

We Have Winners! … Of The IBM Q Teach Me Quantum Challenge

We’re happy to announce the winners of the fourth IBM Q Award: the IBM Q Teach Me Quantum Challenge.

IBM Researchers Remove the “Mem” from Memcache

Data Store for Memcache replaces DRAM with NVM storage for caching while keeping the same memcache API. In a benchmark, it proved to be 33 percent faster.
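
Because the NVM-backed store speaks the standard memcache protocol, existing client code should keep working unchanged. A minimal sketch using the pymemcache client; the address, key, and value below are illustrative:

```python
from pymemcache.client.base import Client

# Ordinary memcache client code. Since the NVM-backed data store exposes the
# same memcache API, pointing the client at it requires no application changes.
client = Client(("127.0.0.1", 11211))

client.set("user:42:profile", b'{"name": "Ada"}', expire=300)
value = client.get("user:42:profile")   # served from NVM instead of DRAM
print(value)
```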

AI Can Help Retailers Understand the Consumer

To truly understand the consumer, retailers now need to look at much finer market segments, and AI can help them do so.

Fingernail Sensors and AI Can Help Clinicians to Monitor Health and Disease Progression

Grip strength is a useful metric in a surprisingly broad set of health issues. It has been associated with the effectiveness of medication in individuals with Parkinson’s disease, the degree of cognitive function in schizophrenics, the state of an individual’s cardiovascular health, and all-cause mortality in geriatrics. At IBM Research, one of our ongoing challenges […]

TAPAS: Frugally Predicting the Accuracy of a Neural Network Prior to Training

Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. What if you could forecast a network's accuracy before training it, drawing on accumulated experience and approximation? This was the goal of a recent project at IBM Research, and the result is TAPAS, or Train-less Accuracy Predictor […]
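
TAPAS's actual features and model are not described in this excerpt, so the following is only a sketch of the general idea: fit a regressor on descriptions of past (dataset, architecture) experiments and their observed accuracies, then query it for a new configuration without training. The feature columns, numbers, and choice of regressor are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative only: each row pairs a (dataset, architecture) description with
# the accuracy actually observed in a past experiment.
# Columns: [n_classes, n_train_examples, n_layers, n_params_millions]
X_past = np.array([
    [10, 50_000, 20, 1.2],
    [100, 50_000, 56, 23.5],
    [10, 60_000, 8, 0.3],
])
y_past = np.array([0.91, 0.74, 0.97])   # accuracies from earlier runs

predictor = RandomForestRegressor(n_estimators=200, random_state=0)
predictor.fit(X_past, y_past)

# Forecast accuracy for a new dataset/architecture pair without training it.
x_new = np.array([[10, 40_000, 32, 5.0]])
print(predictor.predict(x_new))
```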

AI Year in Review: Highlights of Papers and Predictions from IBM Research AI

For more than seventy years, IBM Research has been inventing, exploring, and imagining the future. We have been pioneering the field of artificial intelligence (AI) since its inception. We were there when the field was launched at the famous 1956 Dartmouth workshop. Just three years later, an IBMer and early computer pioneer, Arthur Samuel, coined […]

Efficient Deep Learning Training on the Cloud with Small Files

Here I describe an approach to efficiently train deep learning models on machine learning cloud platforms (e.g., IBM Watson Machine Learning) when the training dataset consists of a large number of small files (e.g., JPEG format) and is stored in an object store like IBM Cloud Object Storage (COS). As an example, I train a […]
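
The excerpt cuts off before the approach itself, so the sketch below illustrates one common remedy under that caveat: bundle the many small JPEGs into a few large archive shards before uploading to COS, so the training job streams a handful of large objects instead of issuing a request per tiny file. The shard size and naming scheme are illustrative.

```python
import os
import tarfile

def pack_into_shards(image_dir, out_dir, images_per_shard=10_000):
    """Bundle many small JPEG files into a few large tar shards.

    Reading a handful of large objects from an object store is far cheaper
    than issuing one GET per tiny file.
    """
    os.makedirs(out_dir, exist_ok=True)
    files = sorted(f for f in os.listdir(image_dir) if f.endswith(".jpg"))
    for start in range(0, len(files), images_per_shard):
        shard_path = os.path.join(out_dir, f"shard-{start // images_per_shard:05d}.tar")
        with tarfile.open(shard_path, "w") as tar:
            for name in files[start:start + images_per_shard]:
                tar.add(os.path.join(image_dir, name), arcname=name)

# pack_into_shards("train_jpegs/", "train_shards/")  # then upload the shards to COS
```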

Interpretability and Performance: Can the Same Model Achieve Both?

Interpretability and performance of a system are usually at odds with each other, as many of the best-performing models (viz. deep neural networks) are black boxes. In our work on improving simple models, we try to bridge this gap by proposing a method to transfer information from a high-performing neural network to another model […]
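
The excerpt does not spell out the transfer mechanism, so the sketch below shows just one simple way such a transfer could look: weight the simple model's training samples by the confidence a higher-capacity network assigns to them. This is an illustrative assumption, not necessarily the method proposed in the work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# High-performing but opaque model.
net = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)

# Confidence the network assigns to the true label of each training sample.
conf = net.predict_proba(X_tr)[np.arange(len(y_tr)), y_tr]

# Interpretable model, trained with the network's confidences as sample weights.
simple = DecisionTreeClassifier(max_depth=4, random_state=0)
simple.fit(X_tr, y_tr, sample_weight=conf)
print("simple model accuracy:", simple.score(X_te, y_te))
```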
