Open standards for deep learning to simplify development of neural networks

Among the various fields of exploration in artificial intelligence, deep learning is an exciting and increasingly important area of research that holds great potential for helping computers understand and extract meaning from data, such as deciphering images and sounds.

To help further the creation and adoption of interoperable deep learning models, IBM joined the Open Neural Network Exchange (ONNX), an industry ecosystem established by Facebook and Microsoft in September 2017. ONNX provides a common open format for representing deep learning models, with the aim of letting data scientists move models seamlessly between open-source frameworks and accelerate development.

ONNX can free developers from having to commit to a specific deep learning framework during the research and development phase. It gives engineers and researchers the freedom to explore a variety of possibilities, move more easily between frameworks and computational environments, and choose the option whose features best suit their project.
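To make that workflow concrete, here is a minimal sketch of exporting a model to the ONNX format and validating the result. It assumes the PyTorch and onnx Python packages are installed; the toy model and file name are purely illustrative:

```python
import torch
import torch.nn as nn
import onnx

# A toy two-layer network standing in for a real model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Serialize the model to ONNX's common open format. The resulting
# file is framework-neutral and can be consumed by any
# ONNX-compatible framework or runtime.
dummy_input = torch.randn(1, 4)  # example input that fixes the graph's shapes
torch.onnx.export(model, dummy_input, "model.onnx")

# Reload the exported graph and validate it against the ONNX spec.
loaded = onnx.load("model.onnx")
onnx.checker.check_model(loaded)
```

Once exported, the model.onnx file carries the network's structure and weights independently of the framework that produced it, which is what makes moving between tools possible.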

We have already begun exploring the possibilities ONNX opens up for developers, and quickly recognized that our work on a common tensor output format could have a broader and more rapid impact if incorporated into ONNX. Our deep learning research team is also investigating other ways ONNX can help data scientists bring their models to market.

IBM has long supported, and actively encourages, the adoption of open standards and collaborative innovation. ONNX will promote interoperability among deep learning systems, spurring AI innovation and accelerating the development and use of neural networks across a wide range of research projects, products and solutions.
