The doctor enters and pulls up the electronic medical record. The patient’s history is already there. So is the CT scan. The doctor drags and drops the image, presses the “analyze” button. An actionable diagnosis appears a moment later.
If artificial intelligence (AI) were to one day take over much of clinical practice, as some fear or anticipate — being potentially faster, more reliable, and generally better at certain tasks than humans — clinical decisions may no longer depend on tired eyes, imperfect risk scores, or lagging guidelines. Does that leave a role for physicians as decision-makers? Or will they become like Uber drivers in the age of GPS, just following directions issued from a device?
“GPS is not always precise,” offered Thomas Luscher, MD, a cardiologist at Switzerland’s University Hospital in Zurich, during a recent panel at the American Heart Association (AHA) meeting. “If you’re smart enough you can figure out how false it is.”
That’s the general idea of how things may also go in medicine once AI enters the mainstream. Clinicians today are cautiously optimistic that AI won’t exactly take their jobs. At risk, however, are those whose work involves a lot of repetitive tasks, such as reading scans.
And that’s what automation is good for: processing information in rote, or nearly rote, fashion at high speed and with no issues of distraction or fatigue.
“I tell young people, ‘Don’t become a radiologist. You will be substituted by a machine,'” Luscher said.
Skeptics might point out that automation in medicine is hardly new in 2017. Automated ECG analysis began in the 1970s; computer vision is long established in liquid cytology for analyzing Pap smears; the MelaFind device for screening skin lesions is FDA approved.
Those systems, however, were narrowly focused and to a large extent still rely on human backup. ECG analysis software, for example, produces enough false diagnoses of atrial fibrillation, and triggers enough inappropriate cardiac catheterization laboratory activations, that cardiologists generally know not to rely on it for clinical decisions.
What sets AI apart is the growing sophistication enabled by increased computing power and, just as important, the emergence of “Big Data” for training the algorithms.
Earlier this year, an AI system reportedly beat clinicians at recognizing 12 of 14 types of arrhythmia, with better sensitivity and specificity. And just a few weeks ago, a computer algorithm outperformed human pathologists in diagnosing cancer metastasis in sentinel lymph node specimens.
The reach of AI is expected to increase as databases grow with the influx of information from wearables, electronic health records, and personal genomics firms as well as conventional sources.
“AI is a critical transformative part of the history of medicine. Medicine is an information system, now more firmly so than ever before,” commented Harlan Krumholz, MD, of Yale School of Medicine in New Haven, Connecticut, at the AHA panel, adding that his academic group has started hiring mathematicians and people who can code in Python.
“It will find its way into decision support, providing guidance on diagnostic interpretations, assisting in assessing prognosis, enabling better assessments of risks and benefits of particular clinical strategies – and generally spreading expertise. It will be a foundation for precision medicine,” he later told MedPage Today.
“The biggest impact of AI in medicine in the short term will be in the area of pattern recognition and image interpretation. Currently we are limited by human cognitive capabilities, which lead to high miss rates on image interpretation and marked inter and intra-observer variability,” Krumholz said. “We will also be able to extract useful information that is hard to discern with our eyes. Other important areas will center on prediction, with improved means of estimating prognosis – and of complications and risk.”
In addition to its potential for delivering better care, AI will be a democratizer of healthcare wherever it is available, according to Rima Arnaout, MD, a cardiovascular imaging sub-specialist at University of California San Francisco.
“Right now, the big academic medical centers with experienced niche specialists often provide better diagnosis and care for patients with rare or complex conditions. But if models for patient diagnosis and management are trained on data from those experts and made widely available, patients at big academic centers and rural clinics alike will have better access to better care,” she said.
Arnaout and Krumholz are both in the trenches of making AI a reality in medicine: she is part of the effort to develop image-recognition tools that diagnose disease with greater accuracy and precision than human beings, while he is investigating ways that AI may improve patient care and outcomes by identifying high-risk patients, such as those more likely to be readmitted to the hospital after a medical procedure.
But to say that today’s technology is in its infancy would be an understatement.
Joseph Hill, MD, chief of cardiology at UT Southwestern in Dallas, said that he doesn’t know anyone practicing medicine with AI today. “In my world, AI-facilitated interpretation of echocardiograms is on the horizon but likely 10 years from prime time,” he said, predicting that initial uses will likely be in analyzing x-ray images.
It comes down to the technology not being widely available at an affordable cost, said George Welch, MD, a cardiologist at New York’s Manhattan Cardiology. “The cost of that technology will have to come down significantly before any of us are using it on a larger scale.”
Available Now or Soon
One application Welch is following is IBM’s Watson, which is being trained by a coalition of hospitals, ambulatory radiology providers, and imaging technology companies for use in health imaging. Other Watson-enabled projects include WatsonPaths and Watson EMR Assistant, launched in collaboration with the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University. The former seeks to make it easier to trace how AI arrived at the best option for each clinical scenario; the latter is designed to highlight the most important information in patient medical records.
Yet another recent AI-based rollout comes from Hitachi and Boston-based Partners Connected Health, which purportedly can predict hospital readmissions within 30 days for patients with heart failure. It identifies patients who would benefit from a special readmission prevention program, potentially saving the healthcare system substantial sums.
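The details of the Hitachi-Partners system are proprietary, but readmission-risk tools of this kind typically score a handful of patient features and flag anyone above a threshold for intervention. The sketch below is a minimal, hypothetical illustration of that pattern: the feature names and weights are invented for this example, whereas a real system would learn them from historical records.

```python
import math

# Hypothetical feature weights for a 30-day heart-failure readmission model.
# Real systems learn these coefficients from historical patient data; the
# numbers here are invented purely for illustration.
WEIGHTS = {
    "prior_admissions_past_year": 0.45,
    "ejection_fraction_below_40": 0.80,
    "lives_alone": 0.30,
    "missed_followup_visits": 0.55,
}
INTERCEPT = -2.0

def readmission_risk(patient: dict) -> float:
    """Logistic model: map patient features to a 0-1 readmission probability."""
    score = INTERCEPT + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))

def flag_for_program(patient: dict, threshold: float = 0.5) -> bool:
    """Flag patients above the risk threshold for a prevention program."""
    return readmission_risk(patient) >= threshold

high_risk = {"prior_admissions_past_year": 3, "ejection_fraction_below_40": 1,
             "lives_alone": 1, "missed_followup_visits": 2}
low_risk = {"prior_admissions_past_year": 0, "ejection_fraction_below_40": 0,
            "lives_alone": 0, "missed_followup_visits": 0}

print(flag_for_program(high_risk))  # True
print(flag_for_program(low_risk))   # False
```

The clinical value comes less from the arithmetic than from what happens next: flagged patients are routed into a prevention program before the readmission occurs.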
And it’s not just the big names that are putting real effort into this technology: Andreessen Horowitz just created a new $450-million fund to invest in startups developing AI and machine learning for healthcare.
After all is said and done, when AI has been fed enough data and cash, when it moves from infancy to its teenage years, and people are satisfied with its reliability and accessibility, the question remains: will doctors really lose their jobs to a computer?
“This type of capacity will reduce a lot of the rote work that cardiologists do and will free up our time to spend with patients and create a better connection and bond with patients,” Welch suggested.
“There’s still a role for radiologists — all the machine is doing is giving you a prediction,” said former FDA Commissioner Robert Califf, MD, at the AHA panel. “What do you do with probabilities? There’s a tremendous role for radiologists. This is bringing humanism back to medicine, the interaction with people. The role of the doctor is going to get bigger and bigger.”
For his part, Califf is now employed at Verily, Google’s life sciences division, where he has a hand in Project Baseline. The study continues to recruit for a goal of 10,000 participants who will volunteer massive amounts of their daily health information via wearable technology, surveys, and clinic visits — perfect fodder for feeding to AI, ostensibly.
Yet “AI is not a cure-all for the problems in healthcare,” Arnaout emphasized, adding that “especially with respect to supervised learning, an AI tool is only as good as the data you train it with. The medical community needs to be very careful in curating training data for these models, providing high-quality data that represents all ages, genders, races, ethnicities, and patient conditions.”
“If we don’t do this, we can allow some dangerous biases about our patients and diseases to get baked right into our systems of care,” she warned.
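Arnaout’s point about baked-in bias can be made concrete with a toy example. The sketch below is not a real clinical model; it is a deliberately simple “majority label” classifier, with invented group and diagnosis names, showing how a population missing from the training data simply inherits the majority verdict.

```python
from collections import Counter

# Toy illustration of training-data bias: a classifier that learns the most
# common diagnosis per patient group. Groups absent from the training data
# fall back to the overall majority label, baking the skew into predictions.

def train(records):
    """records: list of (group, diagnosis) pairs. Returns per-group majority
    labels plus an overall fallback label."""
    by_group = {}
    for group, dx in records:
        by_group.setdefault(group, Counter())[dx] += 1
    overall = Counter(dx for _, dx in records)
    fallback = overall.most_common(1)[0][0]
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}, fallback

def predict(model, group):
    per_group, fallback = model
    return per_group.get(group, fallback)

# Training set drawn almost entirely from one population.
skewed = [("urban_adult", "benign")] * 90 + [("urban_adult", "disease")] * 10
model = train(skewed)

# A group never seen in training inherits the majority verdict, right or wrong.
print(predict(model, "rural_elderly"))  # "benign"
```

Real supervised models are far more sophisticated, but the failure mode is the same: whatever the training data underrepresents, the model will handle poorly, which is why Arnaout stresses careful curation across ages, genders, races, ethnicities, and conditions.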