At AAAI, our team presented two new multilingual research techniques that enable AI systems to understand multiple languages while being trained on only one.
Our team has developed an AI that verifies the fairness of other AI systems by generating counterfactual text samples and testing machine learning models without supervision.
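The idea of counterfactual fairness testing can be sketched as follows. This is a toy illustration, not the published system: the term pairs, the stand-in classifier, and the `fairness_gap` metric are all hypothetical simplifications of what the paper's generator and test procedure would do.

```python
# Hedged sketch: probe a model for fairness by swapping protected-attribute
# terms and comparing its outputs on the original vs. counterfactual text.
# The classifier below is a stand-in, NOT the system under test in the paper.

COUNTERFACTUAL_PAIRS = {"he": "she", "his": "her", "him": "her",
                        "she": "he", "her": "his"}

def make_counterfactual(text: str) -> str:
    """Swap protected-attribute terms to produce a counterfactual sample."""
    return " ".join(COUNTERFACTUAL_PAIRS.get(tok, tok)
                    for tok in text.lower().split())

def toy_classifier(text: str) -> float:
    """Stand-in model: scores text by overlap with a 'positive' lexicon."""
    positive = {"excellent", "skilled", "reliable"}
    toks = text.lower().split()
    return sum(t in positive for t in toks) / max(len(toks), 1)

def fairness_gap(texts, model) -> float:
    """Largest score change caused by the counterfactual perturbation."""
    return max(abs(model(t) - model(make_counterfactual(t))) for t in texts)

samples = ["he is a skilled and reliable engineer",
           "she is an excellent manager"]
print(fairness_gap(samples, toy_classifier))  # 0.0: this model ignores gender terms
```

A nonzero gap would flag that the model's predictions depend on the protected attribute rather than on task-relevant content.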
In a recent paper presented at the 2021 AAAI Conference on Artificial Intelligence (AAAI), we describe an AI that trades off ‘exploration’ of the world against ‘exploitation’ of its action strategy to maximize rewards. In reinforcement learning, an AI receives a reward – such as a bag of gold behind a locked door in a video game – every time it reaches specific desirable states. We have greatly improved this exploration-vs-exploitation tradeoff using additional commonsense knowledge, in the form of crowdsourced text. Our work could lead to better mapping and navigation applications, and to a new generation of interactive assistive agents able to reason like humans.
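The exploration-vs-exploitation tradeoff can be illustrated with a minimal epsilon-greedy bandit, where a prior stands in for text-derived commonsense hints. This is a generic sketch, not the paper's algorithm: the reward values, the `PRIOR` vector, and the way it seeds the estimates are all assumptions made for illustration.

```python
import random

random.seed(0)

TRUE_REWARDS = [0.2, 0.8, 0.5]   # unknown to the agent
PRIOR = [0.1, 0.6, 0.3]          # hypothetical stand-in for text-derived hints

def run(episodes=2000, epsilon=0.1):
    """Epsilon-greedy bandit whose value estimates start from the prior."""
    counts = [0, 0, 0]
    values = list(PRIOR)
    for _ in range(episodes):
        if random.random() < epsilon:                      # explore
            a = random.randrange(3)
        else:                                              # exploit best estimate
            a = max(range(3), key=lambda i: values[i])
        r = 1.0 if random.random() < TRUE_REWARDS[a] else 0.0
        counts[a] += 1
        values[a] += (r - values[a]) / (counts[a] + 1)     # incremental mean
    return values, counts

values, counts = run()
best = max(range(3), key=lambda i: values[i])
```

Because the prior already favors the truly best arm, the agent exploits it early and spends only its epsilon budget confirming the alternatives; with a misleading prior, the same exploration budget is what corrects the estimates over time.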
We use AI to automatically break down the overall application by representing application code as graphs. Our AI relies on graph representation learning – a popular method in deep learning – and graphs are a natural representation for software and applications. We translate the application into a graph in which programs become nodes and their relationships with other programs become edges; these edges determine the boundaries that separate groups of nodes sharing common business functionality.
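The programs-as-nodes, relationships-as-edges idea can be sketched with a toy call graph whose connected components become candidate functional boundaries. The program names and edges below are made up, and the published pipeline uses graph representation learning rather than this simple traversal; this only illustrates the graph framing.

```python
from collections import defaultdict

# Hypothetical program-call edges: (caller, callee)
CALLS = [("billing", "invoice"), ("invoice", "tax"),
         ("catalog", "search"), ("search", "ranking")]

def components(edges):
    """Group programs into candidate boundaries via connected components."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:                       # iterative depth-first traversal
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(graph[n])
        groups.append(group)
    return groups

for g in components(CALLS):
    print(sorted(g))
# ['billing', 'invoice', 'tax']
# ['catalog', 'ranking', 'search']
```

Here the two components suggest two units of common business functionality; the learned approach refines this by embedding nodes and clustering in the embedding space rather than relying on raw connectivity.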
We present a forecasting method that is applicable to arbitrary sequences and comes with a regret bound against a comparator class that includes Kalman filters.
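To make "regret-bounded forecasting" concrete, here is a minimal online learner of the generic kind that such guarantees cover: online gradient descent fitting a single linear coefficient on a stream. This is not the paper's method, and the data stream is a made-up example; it only shows the online setting in which regret against a comparator class is measured.

```python
def online_forecaster(pairs, lr=0.05):
    """Online gradient descent on squared loss for predictions y ≈ a * u."""
    a, total_loss = 0.0, 0.0
    for u, y in pairs:
        pred = a * u
        err = pred - y
        total_loss += err * err
        a -= lr * 2 * err * u          # gradient step on (a*u - y)^2
    return a, total_loss

inputs = [1.0, -1.0, 0.5, 2.0] * 50      # arbitrary covariate stream
stream = [(u, 0.9 * u) for u in inputs]  # targets follow a fixed linear rule
a, loss = online_forecaster(stream)
print(round(a, 3))  # 0.9
```

Regret compares `total_loss` to the loss of the best comparator in hindsight (here, the fixed coefficient 0.9, which incurs zero loss); a regret bound guarantees this gap grows sublinearly in the stream length.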
IBM researchers developed a novel compression algorithm that could significantly improve training times for deep learning models in large-scale AI systems.
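One common family of compression schemes for distributed training is top-k gradient sparsification with a residual carried forward. The sketch below is a generic illustration of that family, not IBM's published algorithm; the gradient values and the choice of k are arbitrary.

```python
def compress_topk(grad, k):
    """Keep the k largest-magnitude entries; return (sparse, residual).

    sparse   -- list of (index, value) pairs that would be communicated
    residual -- the dropped mass, added back to the next gradient (error feedback)
    """
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    kept = set(idx)
    sparse = [(i, grad[i]) for i in idx]
    residual = [0.0 if i in kept else g for i, g in enumerate(grad)]
    return sparse, residual

grad = [0.01, -0.9, 0.05, 0.6, -0.02]
sparse, residual = compress_topk(grad, 2)
print(sparse)  # [(1, -0.9), (3, 0.6)]
```

Sending only the top-k entries shrinks each worker's communication from the full gradient to k index/value pairs, which is where the training-time savings come from in bandwidth-bound settings.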
In an upcoming presentation at the 2018 AAAI Conference, our team of deep learning experts at IBM Research India proposes a new and exploratory technique that automatically ingests published research papers, infers the deep learning algorithms they describe, and recreates them as source code for inclusion in libraries across multiple deep learning frameworks (TensorFlow, Keras, Caffe). With […]
Recently, impressive progress has been made in neural network question answering (QA) systems that can analyze a passage to answer a question. These systems work by matching a representation of the question to the text to find the relevant answer phrase. But what if the text is potentially all of Wikipedia? And what if the […]
At the 32nd AAAI Conference on Artificial Intelligence, IBM will share significant progress from its AI research team, including technical papers as well as results from the company’s ongoing collaboration with academic institutions through the MIT-IBM Watson AI Lab and the AI Horizons Network. Among the featured IBM AI research projects that will be […]