Deploying AI in healthcare: Separating the hype from the helpful
Of all the industries romanticizing AI, healthcare organizations may be the most smitten. Hospital executives hope AI will one day perform healthcare administrative tasks such as scheduling appointments, entering disease severity codes, managing patients’ lab tests and referrals, and remotely monitoring and responding to the needs of entire cohorts of patients as they go about their daily lives.
By improving efficiency, safety, and access, AI may be of enormous benefit to the healthcare industry, says Nigam Shah, professor of medicine (biomedical informatics) and of biomedical data science at Stanford University and an affiliated faculty member of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
But caveat emptor, Shah says. Buyers of healthcare AI need to consider not only whether an AI model will reliably provide the correct output — which has been the primary focus of AI researchers — but also whether it is the right model for the task at hand. “We need to be thinking beyond the model,” he says.
This means executives should consider the complex interplay between an AI system, the actions that it will guide, and the net benefit of using AI compared with not using it. And, before executives bring any AI system on board, Shah says, they should have a clear data strategy, a means of testing the AI system before buying it, and a clear set of metrics for evaluating whether the AI system will achieve the goals the organization has set for it.
“In deployment, AI ought to be better, faster, safer, and cheaper. Otherwise it is useless,” Shah says.
This spring, Shah will lead a Stanford HAI executive education course for senior healthcare executives called “Safe, Ethical, and Cost-Effective Use of AI in Healthcare: Critical Topics for Senior Leadership” to delve into these issues.
The business case for AI in healthcare
A recent McKinsey report outlined the various ways that innovative technologies such as AI are slowly being integrated into healthcare business models. Some AI systems will improve organizational efficiency by doing rote tasks such as assigning severity codes for billing. “You can have a human read the chart and take 20 minutes to assign three codes or you can have a computer read the chart and assign three codes in a millisecond,” Shah says.
Other AI systems may increase patient access to care. For example, AI systems could help ensure that patients are referred to the right specialist, and that they obtain key tests before an initial visit. “Too often patients’ first visits with specialists are wasted because they are told to go get five tests and return in two weeks,” Shah says. “An AI system could short-circuit that.” And by skipping these wasted visits, doctors can see more patients.
AI could also be beneficial for health management, Shah says. For example, an AI system might watch over patients’ medication orders, or even supervise patients in their homes with an eye toward impending deterioration. So-called hospital-at-home programs might demand more nursing staff than there is supply, Shah says, “but if we can put five sensors in the home to provide early warning of an issue, such programs become feasible.”
When to deploy AI in healthcare
Despite AI’s widespread potential, there are currently no standard methods for determining whether an AI system will save money for a hospital or improve patient care. “All of the guidance that people or professional societies have given is around ways to build AI,” Shah says. “There’s been very little on if, how, or when to use AI.”
Shah’s advice to executives: Define a clear data strategy, have a plan to try before you buy, and set clear metrics for evaluating if deployment is beneficial.
Define a data strategy
Because AI is only as good as the data it learns from, executives need to have a strategy and staff for gathering diverse data, properly labeling and cleaning that data, and maintaining the data on an ongoing basis, Shah says. “Without a data strategy, there’s no hope for successful AI deployment.”
For example, if a vendor is selling medical image-reading software, the purchasing organization needs to have on hand a substantial set of retrospective data that it can use to test the software. In addition, the organization needs to have the ability to store, process, and annotate its data so that it can continue testing the product again in the future, to make sure it’s still working properly.
Try before you buy
Healthcare organizations should test AI models at their own sites before buying them and making them operational, Shah says. Such testing will help hospitals separate snake oil — AI that doesn’t live up to its claims — from effective AI, as well as help them assess whether the model is appropriately generalizable from its original site to a new one. For example, Shah says, if a model was developed in Palo Alto, California, but is being deployed in Mumbai, India, there should be some testing to ascertain whether the model works in this new context.
In addition to checking if the model is accurate and generalizable, executives will have to pay attention to whether the model is actually useful when deployed, whether it can be smoothly implemented into existing workflows, and whether there are clear procedures for monitoring how well the AI is working post-deployment. “It’s like a free pony,” Shah says. “There may be no cost to buy it, but there could be a huge cost to building it a barn and feeding it for life.”
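The try-before-you-buy check above can be sketched in code: score the vendor model’s predictions against the site’s own retrospective outcomes and compare local discrimination to the vendor’s claimed performance. This is a minimal illustration, not a prescribed protocol; the function names, the example data, and the 0.05 tolerance are all assumptions for the sketch.

```python
# Minimal sketch of local "try before you buy" validation: compare a
# vendor model's discrimination on a site's own retrospective data to
# the vendor's reported figure. All names, the sample data, and the
# tolerance are illustrative assumptions.

def auroc(labels, scores):
    """Probability that a random positive case outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative outcomes")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def passes_local_validation(labels, scores, vendor_auroc, tolerance=0.05):
    """Flag models whose local performance falls well below the claim."""
    return auroc(labels, scores) >= vendor_auroc - tolerance

# Retrospective outcomes (1 = event occurred) and the vendor model's scores.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]
print(auroc(labels, scores))                                   # 1.0
print(passes_local_validation(labels, scores, vendor_auroc=0.85))  # True
```

A real evaluation would also look at calibration, subgroup performance, and drift over time, which is why the organization needs the ongoing data capacity described above.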
Establish clear metrics for deployable AI
Purchasers of AI systems also need to evaluate the net benefit of an AI system to help them decide when to use it and when to turn it off, Shah says.
This means considering issues such as the context in which an AI is deployed, the possibility of unintended consequences, and the healthcare organization’s capacity to respond to an AI’s recommendations. If, for example, the organization is testing an AI model that predicts readmissions of discharged patients and it flags 50 people for follow-up, the organization needs to have staff available to do that follow-up. If it doesn’t, the AI system isn’t helpful.
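The readmission example can be made concrete with a back-of-the-envelope net-benefit check: flags the staff cannot act on deliver no value, while every flag still carries a cost. The counts and dollar figures below are made-up assumptions for illustration, not data from the article.

```python
# Hedged sketch of the capacity check described above: a readmission
# model that flags more patients than staff can follow up on yields no
# benefit for the overflow. All numbers are illustrative assumptions.

def actionable_flags(flagged, capacity):
    """Follow-ups that available staff can actually perform."""
    return min(flagged, capacity)

def net_benefit(flagged, capacity, benefit_per_followup, cost_per_flag):
    """Value of acted-on flags minus the cost of generating every flag."""
    acted = actionable_flags(flagged, capacity)
    return acted * benefit_per_followup - flagged * cost_per_flag

# 50 patients flagged, but staff for only 20 follow-ups.
print(actionable_flags(50, 20))                                    # 20
print(net_benefit(50, 20, benefit_per_followup=100, cost_per_flag=30))  # 500
print(net_benefit(50, 50, benefit_per_followup=100, cost_per_flag=30))  # 3500
```

Under these assumed numbers, the same model is far more valuable at an organization with enough follow-up capacity, which is Shah’s point: the net benefit depends on business processes, not just model accuracy.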
“Even if the model is built right, given your business processes and your cost structure, it might not be the right model for you,” Shah says.
Ripple effects of AI in healthcare
Finally, Shah cautions, executives must consider the broader ramifications of AI deployment. Some uses could displace people from long-held jobs while other uses could augment human effort in a way that increases access to care. It’s hard to know which impact will happen first or which will be more significant. And eventually, hospitals will need a plan for retraining and re-skilling displaced workers.
“While AI certainly has a lot of potential in the healthcare setting,” Shah says, “realizing that potential is going to require creating organizational units that manage the data strategy, the machine learning model life cycle, and the end-to-end delivery of AI into the care system.”
Katharine Miller is a contributing writer for the Stanford Institute for Human-Centered AI.
This story originally appeared on Hai.stanford.edu. Copyright 2022.