
Synthesized X-Ray Images Help Train AI Programs

By HospiMedica International staff writers
Posted on 17 Jul 2018
Image: Real X-ray image (L) next to a synthesized X-ray created by DCGAN. Underneath the X-ray images are the corresponding heatmaps (Photo courtesy of Hojjat Salehinejad/MIMLab).
A new study describes how computer-generated X-rays can be used to augment artificial intelligence (AI) training sets.

To generate and continually improve artificial X-rays, researchers at the University of Toronto (Canada) used deep convolutional generative adversarial network (DCGAN) algorithms, which are made up of two networks: one that generates the images, and another that tries to discriminate synthetic images from real ones. The two networks are trained together until they reach a point at which the discriminator can no longer differentiate real images from synthesized ones. Once a sufficient number of artificial X-rays have been created, they are used to train another deep convolutional network that classifies the images accordingly.
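The adversarial setup described above can be illustrated with a toy sketch. This is not the study's DCGAN (which operates on X-ray images); it is a minimal one-dimensional stand-in, with all names and hyperparameters being illustrative assumptions: a linear "generator" learns to mimic a Gaussian data distribution while a logistic "discriminator" tries to tell real samples from generated ones, and the two are updated in alternation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real data": 1-D samples standing in for real X-rays.
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator g(z) = a*z + b, fed with noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), outputs P(x is real).
w, c = 0.0, 0.0

lr = 0.05
for step in range(2000):
    # Discriminator update: raise D on real samples, lower it on fakes.
    # Gradient descent on -log D(x_real) - log(1 - D(x_fake)).
    x_real = real_sample()
    x_fake = a * random.gauss(0.0, 1.0) + b
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - s_real) * x_real - s_fake * x_fake)
    c += lr * ((1 - s_real) - s_fake)

    # Generator update: move fakes toward where D says "real".
    # Gradient descent on -log D(x_fake) w.r.t. a and b.
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    s_fake = sigmoid(w * x_fake + c)
    a += lr * (1 - s_fake) * w * z
    b += lr * (1 - s_fake) * w

# After training, generated samples cluster near the real mean of 4.0.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generated mean ~ {fake_mean:.2f} (real mean is 4.0)")
```

In the full-scale version, the same alternating objective is what drives the DCGAN's generator toward X-ray images the discriminator cannot distinguish from real ones.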

The researchers then compared classification accuracy when their AI system was trained on the artificially augmented dataset versus the original one, and found that accuracy improved by 20% for common conditions. For some rare conditions, accuracy improved by up to 40%. An advantage of the method is that, because the synthetic X-rays are not real, the dataset can be made readily available to researchers outside hospital premises without raising privacy concerns. The study was presented at the IEEE International Conference on Acoustics, Speech and Signal Processing, held during April 2018 in Calgary (Canada).
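The augmentation idea, generating extra examples of under-represented conditions so the training set becomes balanced, can be sketched as follows. The `synthesize` helper here is a hypothetical stand-in (it merely jitters existing rare-class examples), whereas the study samples wholly new images from a trained DCGAN; labels and counts are illustrative.

```python
import random
from collections import Counter

random.seed(1)

# Toy labeled dataset: (features, label). "effusion" is the rare class.
real = [([random.random()], "normal") for _ in range(95)] + \
       [([random.random()], "effusion") for _ in range(5)]

# Hypothetical stand-in for DCGAN sampling: jitter a rare-class example
# to produce n synthetic look-alikes with the same label.
def synthesize(example, n):
    feats, label = example
    return [([f + random.gauss(0.0, 0.01) for f in feats], label)
            for _ in range(n)]

rare = [ex for ex in real if ex[1] == "effusion"]
synthetic = [s for ex in rare for s in synthesize(ex, 18)]

# The augmented set mixes real and synthetic examples, balancing classes.
augmented = real + synthetic
counts = Counter(label for _, label in augmented)
print(counts)  # both classes now have 95 examples
```

A classifier trained on `augmented` sees the rare condition as often as the common one, which is the mechanism behind the reported accuracy gains on rare conditions.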

“In a sense, we are using machine learning to do machine learning,” said senior author and study presenter Professor Shahrokh Valaee, PhD, of the Machine Intelligence in Medicine Lab (MIMLab). “We are creating simulated X-rays that reflect certain rare conditions so that we can combine them with real X-rays to have a sufficiently large database to train the neural networks to identify these conditions in other X-rays.”

“Deep learning only works if the volume of training data is large enough, and this is one way to ensure we have neural networks that can classify images with high precision,” concluded Professor Valaee. “We've been able to show that artificial data generated by deep convolutional GANs can be used to augment real datasets. This provides a greater quantity of data for training and improves the performance of these systems in identifying rare conditions.”

Related Links:
University of Toronto



Copyright © 2000-2018 Globetech Media. All rights reserved.