You’ve probably noticed that the movement of healthcare AI from visionary to commonplace has already begun. There are endless examples I could cite to demonstrate this, but here’s a taste:
- A UK hospital is delegating some tasks usually performed by doctors and nurses to AI technology
- The AMA is working to set standards for physician use of AI
- Competition between AI-based disease management players is increasing
- New AI software can detect signs of diabetic retinopathy without involving a physician
Of course, anytime a technology seems poised to take over the world, there’s a voice in our heads asking, “Are you sure?” And we all know that plenty of technologies turn out to be flashes in the pan.
When it comes to AI, however, we may be on the brink of such widespread adoption that no one could argue that it hasn’t arrived. According to a recent Intel survey of U.S. healthcare leaders, AI will be in use across the healthcare spectrum by 2023.
The research, which was conducted in partnership with Convergys Analytics, surveyed 200 U.S. healthcare decision-makers in April 2018 on their attitudes toward AI. The survey also asked subjects what barriers still stood in the way of industry-wide AI adoption.
First, a majority of respondents (54%) said they expected AI to be in wide use across the industry within the next five years. Also, a substantial minority (37%) said they already used AI, though most reported that such use was limited.
Among those organizations that use AI, clinical use accounted for 77%, followed by operational use (41%) and financial use (26%). Meanwhile, respondents whose organizations hadn’t adopted AI still seem very enthusiastic about its possibilities, with 91% expecting that it will offer predictive analytics tools for early intervention, 88% saying it will improve care and 83% saying it will improve the accuracy of medical diagnoses.
Despite their enthusiasm, however, many of those surveyed weren’t sure they could trust AI just yet. More than one-third of respondents said that patients wouldn’t trust AI enough for it to play an active role in their care (and they are probably right, at least for now). Meanwhile, 30% assumed that clinicians wouldn’t trust AI either, predicting that concerns over fatal errors would kill their interest. Again, that’s probably a good guess.
In addition, there’s the issue of the AI “black box” to bear in mind. Though Intel didn’t go into detail on this, both clinicians and healthcare executives are concerned about how AI reaches its conclusions. My informal research suggests that until doctors and nurses understand how AI tools have made their decisions — and what data influenced those decisions — it will be hard to get them comfortable with the technology.