Today, most health IT leaders seem sold on the potential of healthcare AI deployment, as its benefits become clearer every day. It has become almost a no-brainer to believe that AI can do great things for predictive analytics, for example.
In a recent survey by medical malpractice insurer The Doctors Company, 53% of the physicians it contacted said they were optimistic about the prospects for AI in medicine, and 35% of physician respondents reported that they were already using AI in their practices.
Not only that, 66% of the physicians surveyed believe that AI will lead to faster diagnoses, and the same share predicted that its use would lead to more accurate ones.
Specific benefits they see from healthcare AI include:
- Assistance with case triage
- Enhanced image scanning and segmentation
- Improved detection
- Decision support
- Integration and improvement of workflow
- Personalized care
- Automated tumor tracking
- Disease development prediction
- Making healthcare delivery more accessible, humane and equitable
That being said, potential risks associated with healthcare AI use have gotten considerably less play. The Doctors Company paper also offered a thought-provoking list of what it sees as the most prominent risks that may arise from using these technologies. They include the following:
- When AI models are trained on partial or poor data sets, they can demonstrate bias toward demographics represented more fully in the training data.
- Even well-trained AI systems will get things wrong at times, so it will remain important for human beings to stay in the loop. It will also be tricky to figure out who is liable if an AI prediction results in a misdiagnosis.
- If providers become over-reliant on AI recommendations, it could cause problems in the long run. As AI accuracy levels improve, there's a chance health workers won't want to challenge AI results, even if their own training and experience contradict an AI-driven result.
- With so-called “black box” algorithms driving AI suggestions without justifying those results, it may be difficult to establish a chain of accountability.
- As with other technologies, AI systems will have cybersecurity vulnerabilities. Attackers could, for example, deliberately cause learning-based medical systems to misclassify their predictions.
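The first risk on that list, bias from unrepresentative training data, is easy to see in a toy example. The sketch below is purely hypothetical and not from the survey: a trivial "model" learns one risk-score cutoff from data dominated by one patient group, and that cutoff then systematically misses at-risk patients in the under-represented group, whose true cutoff differs.

```python
# Hypothetical illustration of training-data bias: a toy model learns a
# single decision threshold from imbalanced data. Group A (90% of the
# training set) has a true risk cutoff of 0.6; group B's is 0.4. The
# learned threshold fits group A and under-flags group B.

def train_threshold(records):
    """Pick the score threshold that maximizes accuracy on the training set."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted({r["score"] for r in records}):
        acc = sum((r["score"] >= t) == r["at_risk"] for r in records) / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(records, t):
    """Fraction of records where (score >= t) matches the true label."""
    return sum((r["score"] >= t) == r["at_risk"] for r in records) / len(records)

# Synthetic patients: group A is over-represented 9-to-1.
group_a = [{"score": s / 10, "at_risk": s / 10 >= 0.6} for s in range(10)] * 9
group_b = [{"score": s / 10, "at_risk": s / 10 >= 0.4} for s in range(10)]

threshold = train_threshold(group_a + group_b)  # dominated by group A: 0.6
print(accuracy(group_a, threshold))  # 1.0 for the majority group
print(accuracy(group_b, threshold))  # 0.8 — at-risk group B patients are missed
```

The model's overall accuracy looks excellent (98%), which is exactly why this failure mode is dangerous: the errors are concentrated in the group the training data under-represents.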
It’s worth pointing out that none of these problems are insurmountable. For example, as long as the right policies and workflow design are in place, a clinician will always be involved in the treatment and diagnosis process, and they’ll be fully empowered to object if an AI-driven result doesn’t square with their understanding of a patient’s case.
Also, while at present it may not always be clear how AI algorithms make “decisions,” that can be fixed: if we work to develop more transparent systems, explainability will improve over time.
Even so, none of these obstacles will disappear quickly, either. It’s likely to be a while before providers have AI systems in place that offer just the right level of transparency and receive just the right level of supervision.
Beyond that, the problem of amassing the right data to train AI systems won’t go away anytime soon. Selecting the right data is a challenge in itself, and avoiding baking in the biases already inherent in the system will be harder still. Given the shocking disparities in how care is already delivered, making sure we don’t reproduce them digitally is probably job #1.
As I see it, AI is clearly a high-impact technology that can do a lot of good for healthcare. However, it’s important to keep at least one eye on concerns like these. It would be a shame if we let pitfalls like the ones outlined by The Doctors Company undercut AI’s potential benefits.