Deep Learning May Play a Role in Assessing Breast Texture
RSNA - by Elizabeth Gardner
Can a computer network that mimics the neural structure of the brain and the visual cortex, and that has been trained to analyze and recognize nonmedical images (deep learning), assess breast texture, and therefore the risk of breast cancer, more accurately than standard radiographic texture analysis?
In a study presented during the Hot Topics in Breast Imaging series at RSNA 2016, researchers determined that convolutional neural networks can analyze full-field digital mammography (FFDM) images and extract features that are missed both by the human eye and by other types of computer analysis.
“I think that in the future, both texture analysis and deep learning will be applied to mammograms on a routine basis,” said Maryellen Giger, PhD, A.N. Pritzker Professor of Radiology at the University of Chicago.
Breast cancer is the second leading cause of cancer death among women in North America. Mammography is currently an effective tool for early breast cancer detection and for reducing mortality. Breast density and mammographic parenchymal patterns can both be useful in assessing the risk of developing breast cancer. Better risk assessment allows physicians to manage patients more effectively and can potentially lead to personalized screening regimens and precision medicine.
Previous work by the Giger Lab at the University of Chicago suggests that parenchymal texture predicts cancer risk more accurately than breast density percentage. A 2014 study published in the Journal of Medical Imaging by Dr. Giger, Hui Li, MD, and colleagues used radiographic texture analysis to compare a low-risk population with two high-risk populations (women with BRCA1 or BRCA2 mutations and women with unilateral breast cancer). The high-risk groups had coarser, lower-contrast parenchymal patterns than the control group, even though breast density percentage did not differ significantly between the high-risk and control groups.
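The article does not specify which texture features the study used, but "contrast" in radiographic texture analysis is commonly computed from a gray-level co-occurrence matrix (GLCM). As a rough, illustrative sketch only (a toy example on synthetic patches, not the Giger Lab's actual method), GLCM contrast can be computed like this:

```python
# Illustrative sketch: one classic texture measure, GLCM contrast, applied
# to tiny synthetic "image" patches. Real FFDM texture analysis uses far
# richer feature sets; the function name and patches here are hypothetical.

def glcm_contrast(image, levels):
    """GLCM contrast for horizontally adjacent pixel pairs, offset (0, 1)."""
    # Count co-occurrences of gray levels in adjacent pixel pairs.
    glcm = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    total = sum(sum(r) for r in glcm)
    # Contrast = sum over (i, j) of (i - j)^2 * P(i, j):
    # large gray-level jumps between neighbors raise the score.
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))

# A uniform patch has zero contrast; an alternating patch scores high.
smooth = [[1, 1, 1, 1]] * 4
busy = [[0, 3, 0, 3]] * 4
print(glcm_contrast(smooth, 4))  # 0.0
print(glcm_contrast(busy, 4))    # 9.0
```

In this framing, the study's finding that high-risk patterns were "lower contrast" would correspond to lower values of a measure like this one over parenchymal regions.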