Earlier today, I was messaging back and forth with a fellow health IT enthusiast about the adoption of AI. The conversation was part of an ongoing discussion of next-gen technology solutions in healthcare on the Currnt discussion board.
In his comments, the executive suggested that with the ubiquitous presence of EMRs, increasing levels of computational power, the open source movement and the rise of cloud providers, training AI/machine learning models has become much easier.
I was intrigued by this notion. Some factors that suggest he might be right:
- It’s hard to argue with the idea that, at a high level, providers probably have most or all of the data management tools they need to kick off AI trials and experiments. Check.
- If CIOs want to share data across organizational boundaries, it’s at least possible that they can create a big data store together. Bingo.
- Key vendors are largely aware of the challenges healthcare organizations will face in rolling out AI-driven applications and can probably offer at least some support for such projects. Yes, this is probably so.
On the other hand, few healthcare organizations are sure how to determine which applications offer the best use case, when it’s time to test one out or what support users will need. Some of the challenges they face include the following:
- Can they identify an area which might deliver a short-term clinical or financial benefit? How will they measure the degree to which the AI technology is delivering these benefits?
- Assuming the pilot has an impact on the clinical care workflow (which is often the case), are the clinicians willing to adjust to this?
- Can they identify clinicians and administrative staffers willing to champion the use of the AI tool to peers? Are those champions also prepared to offer feedback?
- Do they have enough of the right kind of data (such as a very large store of images in the case of radiology projects) to train the AI?
In addition to addressing all of these concerns, health IT leaders need to plan on doing a fair amount of clinician education and support when they deploy an AI application. After all, even if clinicians are willing to test out new AI-driven care tools, they probably won’t feel comfortable following through if the recommendations that AI tool makes are at all mysterious.
Just as they would with a human colleague, physicians will probably need some additional supporting evidence and context for any course of action they might not have arrived at on their own. They’re not going to take it on faith that the AI algorithm’s process was legitimate (a concern known as the “black box” problem).
The bottom line here is that while many providers may have the infrastructure in place to deploy and pilot AI applications, that may not be as much of an advantage as it might appear to be. At least at this stage in the evolution of healthcare AI solutions, knowing what to do, what to say and why is far from simple, even if the “how” is much easier to tackle than in the past.