Researchers Use AI to Improve Mammogram Interpretation

By HospiMedica International staff writers
Posted on 04 Jul 2018
Image: Researchers used AI to improve mammogram image interpretation (Photo courtesy of the Department of Energy’s Oak Ridge National Laboratory).
A team of researchers at the Department of Energy’s Oak Ridge National Laboratory (Oak Ridge, TN, USA) successfully used artificial intelligence to improve understanding of the cognitive processes involved in image interpretation. Their work, which was published in the Journal of Medical Imaging, will help reduce errors in the analyses of diagnostic images by health professionals and has the potential to improve health outcomes for women affected by breast cancer.

Early detection of breast cancer, which is critical for effective treatment, requires accurate interpretation of a patient’s mammogram. The ORNL-led team found that radiologists’ analyses of mammograms were significantly influenced by context bias, that is, by the radiologist’s previous diagnostic experiences. New radiology trainees were the most susceptible to the phenomenon, although even the more experienced radiologists fell victim to it to some degree, according to the researchers.

The researchers designed an experiment that followed the eye movements of radiologists at various skill levels in order to better understand the context bias involved in their individual interpretations of the images. The experiment tracked the eye movements of three board-certified radiologists and seven radiology residents as they analyzed 100 mammographic studies from the University of South Florida’s Digital Database for Screening Mammography. The 400 images, representing a mix of cancer, no cancer, and cases that mimicked cancer but were benign, were specifically selected to cover a range of cases similar to that found in a clinical setting.

The participants, who were grouped by level of experience and had no prior knowledge of what was contained in the individual X-rays, were outfitted with a head-mounted eye-tracking device designed to record their “raw gaze data,” which characterized their overall visual behavior. The study also recorded the participants’ diagnostic decisions via the location of suspicious findings along with their characteristics according to the BI-RADS lexicon, the radiologists’ reporting scheme for mammograms. By computing a measure known as a fractal dimension on each participant’s scan path (the map of their eye movements) and performing a series of statistical calculations, the researchers were able to discern how the eye movements of the participants differed from mammogram to mammogram. They also calculated the deviation in the context of the different image categories, such as images that show cancer and those that may be easier or more difficult to decipher.
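Neither the article nor the underlying paper spells out how the fractal dimension was computed; one standard way to estimate it for a two-dimensional scan path is box counting. The sketch below is a minimal Python/NumPy illustration rather than the authors' code, and the function name, scale range, and synthetic test path are assumptions.

```python
import numpy as np

def box_counting_dimension(points, n_scales=10):
    """Estimate the box-counting (fractal) dimension of a 2D scan path.

    points: (N, 2) array of gaze coordinates, e.g. pixel positions over time.
    """
    pts = np.asarray(points, dtype=float)
    # Normalize the path into the unit square so box sizes are comparable.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)
    pts = (pts - mins) / span

    sizes = np.logspace(-0.5, -2.5, n_scales)   # box edge lengths, ~0.32 down to ~0.003
    counts = []
    for s in sizes:
        # Count how many boxes of edge length s the path visits.
        occupied = np.unique(np.floor(pts / s), axis=0)
        counts.append(len(occupied))

    # Slope of log(count) vs. log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Synthetic random-walk "scan path" as a stand-in for real gaze data.
rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(size=(5000, 2)), axis=0)
print(box_counting_dimension(path))
```

A higher estimate indicates a scan path that fills the image more densely; comparing the estimate across image categories is one way to quantify how a reader's visual behavior shifts from case to case.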

In order to effectively track the participants’ eye movements, the researchers had to employ real-time sensor data, which logs nearly every movement of the participants’ eyes. With 10 observers interpreting 100 cases, however, the data quickly added up, making it impractical to manage such a data-intensive task manually. This led the researchers to turn to artificial intelligence to help them make sense of the results efficiently and effectively. Using ORNL’s Titan supercomputer, the researchers were able to rapidly train the deep learning models required to analyze the large data sets. While similar studies in the past have used aggregation methods to summarize such enormous data sets, the ORNL team processed the full data sequence, a critical choice because, over time, the sequence revealed how the participants’ eye paths differed as they analyzed the various mammograms.
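The papers do not publish the data pipeline, but the contrast between aggregation and full-sequence processing can be sketched roughly as follows; the sampling rate, 60-second reading time, and summary fields are illustrative assumptions rather than the authors' setup.

```python
import numpy as np

# Hypothetical raw gaze log for one observer reading one case:
# (timestamp in ms, x, y) sampled continuously by the eye tracker at ~250 Hz.
rng = np.random.default_rng(1)
timestamps = np.arange(0, 60_000, 4)                      # 60 s reading, 4 ms between samples
x = np.cumsum(rng.normal(size=timestamps.size)) + 800.0   # x gaze coordinate (pixels)
y = np.cumsum(rng.normal(size=timestamps.size)) + 600.0   # y gaze coordinate (pixels)
raw_log = np.column_stack([timestamps, x, y])

# Aggregation-style summary: collapses the recording to a handful of numbers,
# discarding the temporal structure of the scan path.
summary = {
    "n_samples": raw_log.shape[0],
    "mean_x": x.mean(),
    "mean_y": y.mean(),
    "path_length": np.linalg.norm(np.diff(raw_log[:, 1:], axis=0), axis=1).sum(),
}

# Full-sequence representation: keep every sample per (observer, case) pair,
# so a model can see how the eye path evolves over time.
full_sequence = raw_log[:, 1:]     # shape (15000, 2)

print(summary)
print(full_sequence.shape)
```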

In a related paper published in the Journal of Human Performance in Extreme Environments, the researchers demonstrated how convolutional neural networks, a type of artificial intelligence commonly applied to the analysis of images, significantly outperformed other methods, such as deep neural networks and deep belief networks, in parsing the eye tracking data and, by extension, validating the experiment as a means to measure context bias. Furthermore, while the experiment focused on radiology, the resulting data drove home the need for “intelligent interfaces and decision support systems” to assist human performance across a range of complex tasks including air-traffic control and battlefield management.
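The published comparison covers architectures rather than code, so the following is only a rough PyTorch illustration of a convolutional model operating on raw eye-tracking sequences; the layer sizes, sequence length, and three-way label set are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    """Illustrative 1D convolutional network that classifies fixed-length
    gaze sequences (x and y coordinates over time) into image categories."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),   # 2 input channels: x and y
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),   # average over the (possibly long) time axis
            nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):              # x: (batch, 2, sequence_length)
        return self.classifier(self.features(x))

# Toy forward pass on a batch of synthetic gaze sequences.
model = GazeCNN()
dummy = torch.randn(8, 2, 2000)        # 8 recordings, 2 channels, 2000 time samples
print(model(dummy).shape)              # torch.Size([8, 3])
```

Because the convolutions slide along the time axis, a model of this kind can pick up local patterns in how the gaze moves, which is one plausible reason convolutional architectures compared favorably with fully connected alternatives on this kind of sequence data.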

While machines are unlikely to replace radiologists (or other humans involved in rapid, high-impact decision-making) any time soon, they hold enormous potential to help health professionals and other decision makers reduce errors due to phenomena such as context bias, according to Gina Tourassi, team lead and director of ORNL’s Health Data Science Institute. “These findings will be critical in the future training of medical professionals to reduce errors in the interpretations of diagnostic imaging. These studies will inform human/computer interactions going forward, as we use artificial intelligence to augment and improve human performance,” said Tourassi.

Related Links:
Oak Ridge National Laboratory
