Medical imaging creates tremendous amounts of data: many emergency room radiologists must examine as many as 200 cases each day, and some medical studies contain up to 3,000 images. Each patient’s image collection can contain 250GB of data, ultimately creating collections across organizations that are petabytes in size. Within IBM Research, we see potential in applying AI to help radiologists sift through this information, including imaging analysis from breast, liver, and lung exams.
IBM researchers are applying deep learning to discover ways to overcome some of the technical challenges that AI can face when analyzing X-rays and other medical images. Their latest findings will be presented at the 21st International Conference on Medical Image Computing & Computer Assisted Intervention (MICCAI) in Granada, Spain, from September 16 to 20.
Artificial neural networks can often struggle to learn when presented with an insufficient amount of training data. These networks also face the challenge of identifying very small regions in images depicting anomalies, such as nodules and masses, that might represent cancers.
At MICCAI 2018, researchers from IBM Research-Almaden and IBM Research-Haifa will present papers describing novel approaches to deep learning that may hold the potential to help address some of these challenges.
Learning from incomplete data
Ken Wong, a Research Staff Member at IBM Research-Almaden, will present a novel AI network design that, in a study, was shown to be capable of analyzing twice as many potential disease markers in 3D images, and of accurately segmenting small structures in those images, in half the time of previously studied AI-based network architectures.
Deep neural networks used to train AI systems can sometimes have difficulty partitioning medical images into distinct anatomical regions, a process called segmentation. This can make it harder to accurately identify small disease markers, limiting the use of these networks in clinical settings. The project is our first effort directly targeting this challenge.
Training AI with minimal data
Mehdi Moradi, IBM Research-Almaden’s Manager of Image Analysis and Machine Learning Research, and colleagues will discuss their study of neural network architectures that were trained using images and text to automatically mark regions of new medical images that doctors can examine closely for signs of disease.
The researchers trained one network using combined image and text data and a second network using separated text and images, because there are different ways an AI-based imaging system might receive input to analyze. In the study, both networks autonomously located potential health threats in chest X-rays with a level of accuracy comparable to that of experienced radiologists analyzing and annotating the same images.
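The two training setups described above correspond to what is often called early and late fusion of modalities. The following is a minimal toy sketch of that distinction, not the researchers' actual architecture: all dimensions, weights, and the simple linear-ReLU "encoder" are illustrative assumptions standing in for deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy encoder: a linear projection followed by ReLU (stand-in for a deep net)."""
    return np.maximum(x @ w, 0.0)

# Hypothetical inputs: 64-d image features and a 16-d text embedding
# (e.g., from a radiology report accompanying the X-ray).
img_feat = rng.normal(size=64)
txt_feat = rng.normal(size=16)

# Network A: combined input -- concatenate image and text, then encode jointly.
w_joint = rng.normal(size=(80, 32))
joint_repr = encode(np.concatenate([img_feat, txt_feat]), w_joint)

# Network B: separated input -- encode each modality on its own, then combine.
w_img, w_txt = rng.normal(size=(64, 32)), rng.normal(size=(16, 32))
late_repr = encode(img_feat, w_img) + encode(txt_feat, w_txt)

# Either representation can feed a head that scores candidate image regions.
w_out = rng.normal(size=32)
score_joint = float(joint_repr @ w_out)
score_late = float(late_repr @ w_out)
```

The practical point is that a deployed system may receive an image with its report already attached (Network A's case) or image and text from separate sources (Network B's case), so both input regimes are worth training for.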
Recognizing obscure abnormalities
Scientists from IBM Research-Haifa in Israel developed a specialized deep neural network designed for mass detection and localization in breast mammography and will present their findings at MICCAI’s 4th Breast Image Analysis Workshop.
Standard breast cancer screening involves taking two mammography X-ray projections for each breast and comparing the views to pinpoint areas of interest. The new network’s design included identical “Siamese” subnetworks whose outputs were compared to produce image evaluations. The study suggested an effective way of training AI to flag areas of abnormal and potentially cancerous breast tissue.
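The defining property of a Siamese design is that both subnetworks share the same weights, so features extracted from the two mammography views live in the same space and can be compared directly. A minimal numpy sketch of that idea, with a toy feature extractor standing in for the paper's deep subnetworks (sizes and the distance-based comparison are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def subnet(view, w):
    """Shared-weight feature extractor (a toy stand-in for a deep CNN)."""
    return np.tanh(view.ravel() @ w)

# Two X-ray projections of the same breast (toy 8x8 'images');
# in screening these would be e.g. the CC and MLO views.
view_cc = rng.normal(size=(8, 8))
view_mlo = rng.normal(size=(8, 8))

# The Siamese twins use the SAME weight matrix, making features comparable.
w_shared = rng.normal(size=(64, 16))
feat_cc = subnet(view_cc, w_shared)
feat_mlo = subnet(view_mlo, w_shared)

# Comparing the two embeddings (here, by Euclidean distance) feeds
# the final decision about areas of interest.
distance = float(np.linalg.norm(feat_cc - feat_mlo))
```

Weight sharing is what lets the comparison step mirror how a radiologist cross-references the two projections of the same tissue.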
As the number of medical images taken in the U.S. reaches tens of millions annually, healthcare organizations are increasingly turning to AI to help them accurately and efficiently analyze vital information contained in patient MRIs, CT scans, and other visual diagnostic aids. A 2015 Consumer Reports investigation found 80 million CT scans alone are performed annually in the U.S. AI-infused imaging systems hold promise to help doctors sift through large numbers of images, plan treatment options, and perform clinical studies.
Other IBM Research work presented at the MICCAI conference this year includes:
1. Deep Multiscale Convolutional Feature Learning for Weakly Supervised Localization of Chest Pathologies in X-ray Images
In this work, scientists developed a novel neural network architecture and a weakly supervised learning method to localize small pathologies, such as lung nodules and lung masses, in chest X-ray images. This method can be used to enhance the effectiveness of a computer-aided diagnosis system by increasing the rate of incidental findings in routine check-ups.
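"Weakly supervised" here means the network learns from image-level labels (pathology present or absent) without any bounding boxes, yet can still point at where the pathology is. One common way this works, sketched below as an illustrative assumption rather than the paper's exact architecture, is to score each spatial location of a convolutional feature map and read the localization off the resulting heat map:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 12x12 spatial feature map with 8 channels from a CNN backbone.
feat = rng.normal(size=(12, 12, 8))

# Classifier weights for one pathology class, learned from image-level
# labels only -- no box annotations, which is what makes this weakly supervised.
w_cls = rng.normal(size=8)

# Per-location class evidence: a coarse heat map over the image.
heatmap = feat @ w_cls            # shape (12, 12)

# The image-level prediction comes from pooling the map...
image_score = heatmap.max()

# ...while the argmax of the same map localizes the suspected pathology.
y, x = np.unravel_index(heatmap.argmax(), heatmap.shape)
```

Multiscale variants combine heat maps from several backbone depths so that small findings such as nodules are not lost to downsampling.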
2. Joint Segmentation and Uncertainty Visualization of Retinal Layers in Optical Coherence Tomography Images using Bayesian Deep Learning
In this paper, researchers developed a Bayesian deep learning-based method to segment retinal layers and quantify the associated uncertainties in optical coherence tomography images. The method provides an explanation for incorrect segmentations produced by the model; it is therefore useful for determining the confidence of image analysis modules that use the segmentation output for downstream analysis.
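A common practical route to Bayesian uncertainty in deep networks is Monte Carlo dropout: keeping dropout active at test time and treating repeated stochastic passes as samples from an approximate posterior. The sketch below illustrates that general technique on a toy per-pixel segmenter; it is an assumed approximation method, not necessarily the one used in the paper, and all shapes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def segment_with_dropout(image, w, drop=0.5):
    """One stochastic forward pass: dropout stays ON at test time."""
    mask = rng.random(w.shape) > drop
    logits = image @ (w * mask / (1.0 - drop))
    return 1.0 / (1.0 + np.exp(-logits))   # per-pixel foreground probability

image = rng.normal(size=(100, 32))          # 100 pixels, 32 features each
w = rng.normal(size=32)

# Monte Carlo sampling: many stochastic passes yield a distribution of maps.
samples = np.stack([segment_with_dropout(image, w) for _ in range(50)])

mean_seg = samples.mean(axis=0)             # the segmentation itself
uncertainty = samples.std(axis=0)           # high where the model is unsure
```

The uncertainty map is what makes errors explainable: pixels where the samples disagree flag regions a downstream module should not trust.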
3. Joint Registration and Segmentation of X-ray Images Using Generative Adversarial Networks
Scientists propose a deep learning-based approach for joint registration and segmentation (JRS) of chest X-ray images. Generative adversarial networks (GANs) are trained to register a floating image to a reference image by combining their segmentation map similarity with conventional feature maps.
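The core of a joint registration-and-segmentation objective is a loss that rewards both intensity agreement between the warped and reference images and overlap between their segmentation maps. The sketch below shows one plausible form of such a combined loss using a Dice overlap term; the specific terms, the weighting, and the thresholding "segmenter" are illustrative assumptions, not the paper's GAN formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def dice(a, b, eps=1e-6):
    """Overlap between two binary segmentation maps, in [0, 1]."""
    return (2.0 * (a * b).sum() + eps) / (a.sum() + b.sum() + eps)

def intensity_similarity(a, b):
    """Conventional image-similarity term (negative mean squared error)."""
    return -np.mean((a - b) ** 2)

reference = rng.random((16, 16))
# Toy 'warped floating image': the registration output, close to the reference.
warped = reference + 0.05 * rng.normal(size=(16, 16))

# Toy segmentation maps (a real system would use a learned segmenter).
seg_ref = (reference > 0.5).astype(float)
seg_warped = (warped > 0.5).astype(float)

# The registration loss mixes both terms; lam weights the
# segmentation-overlap contribution (value here is illustrative).
lam = 0.5
overlap = dice(seg_ref, seg_warped)
loss = -(intensity_similarity(warped, reference) + lam * overlap)
```

Coupling the two tasks this way lets segmentation quality guide the registration and vice versa, which is the motivation for solving them jointly rather than in sequence.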