NeuNetS uses AI to automatically synthesize deep neural networks faster and more easily than ever before, scaling up the deployment and adoption of AI.
Here I describe an approach to efficiently train deep learning models on machine learning cloud platforms (e.g., IBM Watson Machine Learning) when the training dataset consists of a large number of small files (e.g., JPEG format) and is stored in an object store like IBM Cloud Object Storage (COS). As an example, I train a […]
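The core idea behind approaches like this is to avoid issuing thousands of tiny GET requests against the object store by bundling the small files into one large sequential object. As a minimal illustrative sketch (not the exact pipeline from the post; the function names and the local-directory setup are assumptions for illustration), the packing step can look like this:

```python
import tarfile
from pathlib import Path


def pack_images(image_dir, archive_path):
    """Bundle many small image files into a single tar archive.

    One large sequential read from cloud object storage is far faster
    than thousands of per-file GET requests.
    """
    with tarfile.open(archive_path, "w") as tar:
        for path in sorted(Path(image_dir).glob("*.jpg")):
            tar.add(path, arcname=path.name)


def iter_images(archive_path):
    """Stream (filename, raw bytes) pairs back out of the archive."""
    with tarfile.open(archive_path, "r") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()
```

During training, a data loader would consume `iter_images` and decode each JPEG on the fly, so the object store only ever serves the single archive.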
The delta-encoder is a novel approach for few- and one-shot object recognition, in which a modified auto-encoder (called a delta-encoder) extracts transferable intra-class deformations (deltas) between same-class pairs of training examples, then applies them to a few examples of a new class (unseen during training) to efficiently synthesize samples from that class. The synthesized samples are then […]
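The intuition can be shown with a toy sketch: harvest a deformation from a pair of same-class examples, then apply it to a single anchor example of an unseen class. This is only an illustration of the concept; the real delta-encoder learns a nonlinear encoding of the deformation, whereas plain vector subtraction stands in for it here.

```python
def extract_delta(x_a, x_b):
    """Toy 'delta': elementwise difference between two same-class
    feature vectors (a stand-in for the learned encoder)."""
    return [a - b for a, b in zip(x_a, x_b)]


def apply_delta(delta, anchor):
    """Synthesize a sample for an unseen class by applying a
    deformation harvested from a seen class to one anchor example."""
    return [d + v for d, v in zip(delta, anchor)]
```

Many deltas harvested from seen classes, each applied to the one or few anchors of the new class, yield a synthetic training set for a standard classifier.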
In a previous post we explained how to write a probabilistic model using Edward and run it on the IBM Watson Machine Learning (WML) platform. In this post, we discuss the same example written in Pyro, a deep probabilistic programming language built on top of PyTorch. Deep probabilistic programming languages (DPPLs) such as Edward and […]
Edward is a deep probabilistic programming language (DPPL), that is, a language for specifying both deep neural networks and probabilistic models. DPPLs draw upon programming languages, Bayesian statistics, and deep learning to ease the development of powerful AI applications. Probabilistic languages let the user express a probabilistic model as a program with an intuitive formalism […]
Glaucoma is the second leading cause of blindness in the world, impacting approximately 2.7 million people in the U.S. alone. It is a complex set of diseases and, if left untreated, can lead to blindness. It’s a particularly large issue in Australia, where only 50% of all people who have it are actually diagnosed […]
Medical imaging creates tremendous amounts of data: many emergency room radiologists must examine as many as 200 cases each day, and some medical studies contain up to 3,000 images. Each patient’s image collection can contain 250GB of data, ultimately creating collections across organizations that are petabytes in size. Within IBM Research, we see potential in […]
Selecting the best architecture for a deep learning model is typically a time-consuming process that requires expert input, but using AI can streamline this process. I am developing an evolutionary algorithm for architecture selection that is up to 50,000 times faster than other methods, with only a small increase in error rate. Deep learning models are […]
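The shape of such an evolutionary search can be sketched in a few lines: represent an architecture as a list of hidden-layer widths, mutate, and keep the fittest. This is a generic toy, not the algorithm from the post; the mutation operators and the cheap fitness proxy are assumptions for illustration.

```python
import random


def mutate(arch, rng):
    """Randomly widen, narrow, duplicate, or drop one layer of the
    architecture (a list of hidden-layer widths)."""
    arch = list(arch)
    i = rng.randrange(len(arch))
    op = rng.choice(["widen", "narrow", "add", "drop"])
    if op == "widen":
        arch[i] *= 2
    elif op == "narrow":
        arch[i] = max(1, arch[i] // 2)
    elif op == "add":
        arch.insert(i, arch[i])
    elif len(arch) > 1:  # "drop"
        arch.pop(i)
    return arch


def evolve(fitness, population, generations, rng):
    """Simple (mu + mu) evolutionary loop: mutate every survivor,
    keep the best half by fitness (lower is better), repeat."""
    for _ in range(generations):
        children = [mutate(a, rng) for a in population]
        population = sorted(population + children, key=fitness)[: len(population)]
    return population[0]
```

In practice the expensive part is the fitness call (training or estimating accuracy); the speedups reported in the post come from making that evaluation vastly cheaper, not from the loop itself.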
The ability to infer abstract high-level concepts from raw sensory inputs is a key part of human intelligence. Developing models that recapitulate this ability is an important goal in AI research. A fundamental challenge in this respect is disentangling the underlying factors of variation that give rise to the observed data. For example, factors of […]
Today, with contributions made by IBM scientists, IBM introduces Deep Learning as a Service within Watson Studio, a rich set of cloud-based tools for developers and data scientists that helps remove the barriers to training deep learning models in the enterprise. Deep learning and machine learning require expensive hardware and software resources as well as […]
IBM researchers developed a novel compression algorithm that could significantly improve training times for deep learning models in large-scale AI systems.
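A common family of such gradient-compression schemes is top-k sparsification: each worker transmits only its largest-magnitude gradient entries as (index, value) pairs. The sketch below shows that generic idea, not the specific IBM algorithm, whose adaptive details are beyond this excerpt.

```python
def compress_topk(grad, k):
    """Keep only the k largest-magnitude gradient entries,
    transmitted as sparse (index, value) pairs."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return [(i, grad[i]) for i in sorted(idx)]


def decompress(pairs, size):
    """Rebuild a dense gradient; untransmitted entries are zero."""
    dense = [0.0] * size
    for i, v in pairs:
        dense[i] = v
    return dense
```

Real systems typically also accumulate the untransmitted residual locally so that small gradients are eventually sent rather than lost.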
In an upcoming presentation at the 2018 AAAI Conference, our team of deep learning experts at IBM Research India proposes a new, exploratory technique that automatically ingests published research papers, infers the deep learning algorithms they describe, and recreates them as source code for inclusion in libraries for multiple deep learning frameworks (TensorFlow, Keras, Caffe). With […]