A forecasting method applicable to arbitrary sequences, with a regret bound against a comparison class of methods that includes Kalman filters.
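To make the setting concrete, here is a minimal sketch of online forecasting with a regret flavor: an online-gradient-descent predictor that forecasts the next element of an arbitrary sequence from its two most recent values. The linear two-lag feature choice and learning rate are hypothetical illustrations, not the method from the paper.

```python
import numpy as np

def online_forecast(seq, lr=0.1):
    """One-step-ahead forecasting of an arbitrary sequence via
    online gradient descent on squared loss.

    Illustrative only: the predictor is a linear model over the
    last two observations (a hypothetical feature choice)."""
    w = np.zeros(2)                    # weights over [y_{t-1}, y_{t-2}]
    preds, losses = [], []
    for t in range(2, len(seq)):
        x = np.array([seq[t - 1], seq[t - 2]])
        yhat = w @ x                   # predict before seeing seq[t]
        preds.append(yhat)
        err = yhat - seq[t]
        losses.append(err ** 2)
        w -= lr * 2 * err * x          # gradient step on (yhat - y)^2
    return np.array(preds), float(np.sum(losses))
```

On a sequence with exploitable structure (e.g., a decaying geometric series), the learner's cumulative squared loss falls below that of the trivial always-zero predictor; regret bounds of the kind the article mentions formalize this comparison against the best predictor in a class, chosen in hindsight.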
IBM researchers developed a novel compression algorithm that could significantly improve training times for deep learning models in large-scale AI systems.
In an upcoming presentation at the 2018 AAAI Conference, our team of deep learning experts at IBM Research India proposes a new, exploratory technique that automatically ingests published research papers, infers the deep learning algorithms they describe, and recreates them in source code for inclusion in libraries for multiple deep learning frameworks (TensorFlow, Keras, Caffe). With […]
Recently, impressive progress has been made in neural network question answering (QA) systems, which analyze a passage to answer a question. These systems work by matching a representation of the question against the text to find the relevant answer phrase. But what if the text is potentially all of Wikipedia? And what if the […]
At the 32nd AAAI Conference on Artificial Intelligence, IBM will share significant progress from its AI research team, including technical papers as well as results from the company’s ongoing collaboration with academic institutions through the MIT-IBM Watson AI Lab and the AI Horizons Network. Among the featured IBM AI research projects that will be […]