Health technology investment and research company Rock Health reports that from 2011 to 2017, 121 digital health companies raised $2.7 billion in venture funding to apply artificial intelligence (AI) to 19 areas ranging from drug research and development to clinical-decision support to health benefits administration.
Even the government is getting in on the action. Earlier this year, the Centers for Medicare & Medicaid Services (CMS) launched an AI health outcomes challenge, offering a total of $1.65 million in prizes for AI models that predict hospital and skilled nursing facility readmissions and adverse events.
But for all the hope and hype about AI, good evidence is in short supply, and what exists is patchy and cited selectively by proponents.
Eric Topol, founder and director of the Scripps Research Translational Institute and an early adopter of remote patient monitoring, is optimistic about AI’s use in genomic and other kinds of basic research. However, he noted, “The field is certainly high on promise and relatively low on data and proof.”
AI is currently employed in American health care in two general areas: improving clinical care and streamlining administrative work. While AI’s analysis of images and diagnosis of disease catch headlines, its more typical uses are detecting insurance fraud, reducing doctors’ documentation time, and automating customer service.
When it comes to AI, Topol argues that real-world clinical validation is important because a model’s accuracy doesn’t guarantee it will actually work in a clinical setting or improve outcomes.
As with any application of computers and technology, the aphorism GIGO—garbage in, garbage out—still applies to AI. “We should be thinking about testing data sets like a drug,” said physician and venture capital investor Bob Kocher.
The FDA agrees. It is tackling AI safety by subjecting AI-based software intended to “treat, diagnose, cure, mitigate, or prevent disease” to the same approval process that medical devices face.
Patient safety isn’t the only problem facing AI. Some say AI could perpetuate or even exacerbate the inequities in the U.S. health care system. If an AI model is trained with data sets that don’t adequately sample a specific group, its “thinking” may be blind to issues particular to that group. Ethnic minorities and women, for example, are often underrepresented in health care data sets.
Ultimately, AI is just a tool, not a panacea, says Rock Health research leader Megan Zweig.
This post originally appeared in Managed Care Magazine.