As the role of artificial intelligence in medicine continues to evolve, doctors and robots are often seen as pitted against each other. But two new studies suggest that it’s not man versus machine, it’s man and machine.
The research harnesses artificial intelligence to “read” medical images, from X-rays to more advanced scans such as magnetic resonance imaging, or MRI, scans. The ultimate goal is not to replace physicians, says co-senior author Matthew Lungren, PhD, but to provide all clinicians with tools to make faster, more accurate imaging diagnoses and treatment plans — something that could one day benefit patients across the world.
A team of scientists led by Lungren, an assistant professor of radiology, and Andrew Ng, PhD, an adjunct professor of computer science, has devised two algorithms poised to do just that. One of the latest algorithms, which diagnoses knee injuries by automatically reading MRI scans, has already made its way into the hands of Stanford clinicians for preliminary testing.
The results were published today in PLOS Medicine. Graduate students Nicholas Bien and Pranav Rajpurkar are the first authors.
“One of the main goals of this paper is to figure out how doctors, both radiologists and non-radiologist clinicians, would work with algorithms that read medical scans,” said Lungren. “Does it make them better? Worse?”
So far, it’s one of the first studies to investigate how the physician-and-machine dynamic might work. The preliminary results show that the algorithm, called MRNet, not only performed at about the same level as radiologists on its own (although on some scans it performed slightly below them), it actually helped radiologists more accurately diagnose MRI scans of knee injuries.
“We developed MRNet for knee MRIs, but the same algorithm could also potentially classify other types of MRI,” said Bien. “After the algorithm has been trained on a large dataset, it can generate predictions for a new MRI scan in a matter of seconds.”
MRNet was trained on a dataset of 1,370 MRI scans. Such image-based diagnosis comes down to pattern recognition, and in this case the scientists trained the algorithm to recognize visual patterns associated with ligament and meniscal tears in the knee, as well as anything else that looks iffy. A panel of expert radiologists then sifted through a separate set of 120 knee MRIs to establish a “ground truth” for each scan, or the diagnoses the radiologists agreed were the right call. This set of images served to measure MRNet’s accuracy.
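The study itself doesn’t publish its consensus procedure in this article, but one common way to turn several radiologists’ reads into a single “ground truth” label is a simple majority vote. A minimal sketch (the panel reads below are hypothetical, not data from the study):

```python
from collections import Counter

def majority_label(labels):
    """Return the label most readers agreed on (simple majority vote)."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical panel reads for one scan: 1 = tear present, 0 = no tear
panel_reads = [1, 1, 0]
print(majority_label(panel_reads))  # -> 1
```

Labels agreed on this way then serve as the reference standard against which the algorithm’s predictions are scored.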
To test whether MRNet helped or hindered doctors in diagnosing MRIs, a group of seven radiologists and two orthopedic surgeons took turns reading MRI scans with and without the algorithm. When reading the scans with the help of MRNet, they produced fewer false positives (meaning they were less likely to say a patient had a serious knee injury when the patient was in fact healthy) than they did without the algorithm.
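A false positive here is a case a reader calls injured when the ground truth says healthy. To make the comparison concrete, a tiny illustrative sketch (the labels below are made up, not the study’s data):

```python
def false_positives(predictions, ground_truth):
    """Count cases the reader called positive that the ground truth says are negative."""
    return sum(1 for p, t in zip(predictions, ground_truth) if p == 1 and t == 0)

# Hypothetical reads over six scans: 1 = injury called, 0 = healthy
truth         = [0, 0, 1, 0, 1, 0]
reader_alone  = [1, 0, 1, 1, 1, 0]  # reader without the algorithm
reader_w_model = [0, 0, 1, 1, 1, 0]  # reader assisted by the model

print(false_positives(reader_alone, truth))    # -> 2
print(false_positives(reader_w_model, truth))  # -> 1
```

In the study, this kind of comparison, repeated across readers and scans, is what showed the assisted readers making fewer false-positive calls.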
The results of the study, although still very early, are encouraging to physicians like Gary Fanton, MD, a clinical professor of orthopedic surgery at Stanford. He was one of the first doctors to give MRNet a try.
“As a clinician, having immediate and accurate data-driven information will allow us to make clinical decisions and mobilize a treatment plan with greater efficiency, especially in urgent care facilities or the emergency room,” Fanton said. “MRNet takes us closer to a more efficient, accurate and cost-effective means of delivering the most important first step in patient management — a diagnosis.”
Lungren and Ng likewise have plans to move another one of their diagnostic algorithms into the clinic to test how it fares among doctors. The algorithm, called CheXNeXt, can simultaneously recognize 10 different pathologies on chest X-rays about as well as radiologists can; for one disease, it even outdid the experts.
CheXNeXt was featured in a recent video, and Lungren spoke to its potential in our release:
“I could see [CheXNeXt] working in a few ways. [It] could triage the X-rays, sorting them into prioritized categories for doctors to review, like normal, abnormal or emergent,” Lungren said. Or the algorithm could sit bedside with primary care doctors for on-demand consultation, he said.
In this case, Lungren said, the algorithm could step in to help confirm or cast doubt on a diagnosis.
Eventually, the scientists hope that algorithms like MRNet and CheXNeXt could stand alone, efficiently and effectively scanning for a wide range of diseases or injuries typically diagnosed through image-based medical exams. Such algorithms could one day even serve as a sort of digital consultant for resource-deprived regions of the world that don’t typically have access to professional radiologists.
Photo by Kurt Hickman