December 10, 2018 | Written by: Talia Gershon
Categorized: AI | Open Source
For decades, IBM Research has been pushing the limits of what AI technology can do. We’ve published our work in top conferences and worked with our clients and product teams to deliver real value to people through advances in technology. Now, we’re taking some of that technology and handing it over to you, so that you can explore, learn, and maybe build upon some of our innovations.
The set of technologies we chose to start with in the AI Experiments hub highlights the breadth of our research agenda. We’ve got the Adversarial Robustness Toolbox, an open-source library for making AI systems more secure: it curates a collection of known attacks against neural networks, along with techniques for defending your models. We’ve got a demo of NeuNetS, a technology that automatically builds an optimized neural network for a given dataset; this technology is part of IBM Watson Studio. We’ve got the AI Fairness 360 project, which provides a suite of algorithms to detect and mitigate unwanted bias in AI models. And we’ve got a new analog accelerator that encodes the weights of a neural network in the conductance states of analog memristors; the result is a chip with the potential to train neural networks significantly faster, and run inference at much lower power, than what systems are capable of today.
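To give a flavor of the kind of attack the Adversarial Robustness Toolbox curates, here is a minimal, self-contained sketch of the Fast Gradient Sign Method against a toy logistic-regression classifier. This is an illustration of the technique only, not the toolbox’s API; the weights and epsilon are made-up values chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy linear classifier (weights chosen by hand for illustration).
w = np.array([2.0, -3.0, 1.0])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the loss. For logistic regression with true label y,
    d(loss)/dx = (p - y) * w."""
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.3])
y = 1.0                        # true label
print(predict(x))              # confidently correct before the attack
x_adv = fgsm(x, y, eps=0.5)
print(predict(x_adv))          # confidence collapses after the attack
```

A small, targeted perturbation is enough to flip the model’s decision, which is exactly the vulnerability that the toolbox’s defense techniques are designed to mitigate.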
We’ve got a few more experiments planned for the coming months, so watch this space! In the meantime, we’d love to know what you think. Tweet us @IBMResearch and share your ideas!
Neural Network Synthesizer (NeuNetS)
Fast-track development of deep-learning models by using AI to automatically synthesize customized neural networks. Choose whether to optimize for speed or accuracy, and watch the model build and train itself using NeuNetS. Now available in Watson Studio, you can take NeuNetS for a test drive today.
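The core idea behind automated synthesis, searching a space of architectures and scoring each candidate on accuracy traded off against cost, can be sketched in a few lines. This is not the NeuNetS algorithm itself; the toy dataset, candidate widths, and the speed-penalty weight are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D dataset: label is 1 inside a ring, so it is not linearly separable.
X = rng.uniform(-1, 1, size=(400, 2))
r = np.linalg.norm(X, axis=1)
y = ((r > 0.4) & (r < 0.8)).astype(float)

def train_mlp(width, epochs=300, lr=0.5):
    """Train a one-hidden-layer network of the given width; return accuracy."""
    W1 = rng.normal(scale=0.5, size=(2, width))
    b1 = np.zeros(width)
    W2 = rng.normal(scale=0.5, size=width)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        d = (p - y) / len(y)               # gradient of mean cross-entropy
        dh = np.outer(d, W2) * (1 - h**2)  # backprop through tanh
        W2 -= lr * (h.T @ d); b2 -= lr * d.sum()
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)
    return ((p > 0.5) == (y > 0.5)).mean()

# "Speed vs. accuracy" knob: penalize larger (slower) networks.
speed_penalty = 0.002   # assumed weight; raise it to favor smaller nets
best = max(range(2, 17, 2), key=lambda w: train_mlp(w) - speed_penalty * w)
print("chosen hidden width:", best)
```

Real systems like NeuNetS search far richer architecture spaces with far more sophisticated strategies, but the same tradeoff knob is what the speed/accuracy choice exposes.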
Customizing hardware for AI applications has led to dramatic speed-ups for important workloads. IBM Research is advancing hardware beyond traditional CMOS to see how far we can push performance. Watch what happens when you use our accelerator on a simple MNIST classification task.
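The principle behind the analog approach can be sketched numerically: weights are stored as differences of two conductances, inputs are applied as voltages, and Ohm’s and Kirchhoff’s laws sum the resulting currents along each column, computing a matrix-vector product in place. The number of conductance levels below is an assumption for illustration, not a measured device parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # target weight matrix

# Map each weight to a pair of non-negative conductances (G+, G-),
# quantized to a fixed number of discrete states (assumed value).
levels = 32
g_max = np.abs(W).max()
step = g_max / (levels - 1)
Gp = np.round(np.clip(W, 0, None) / step) * step   # positive part
Gn = np.round(np.clip(-W, 0, None) / step) * step  # negative part

x = rng.normal(size=3)        # input activations, applied as voltages
y_analog = (Gp - Gn) @ x      # column currents sum to dot products
y_exact = W @ x

print(np.max(np.abs(y_analog - y_exact)))  # small quantization error
```

Because the multiply-accumulate happens in the memory array itself, no weights need to be moved to a processor, which is where the speed and power advantages come from.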