AI Won’t Replace Doctors, But It Could Undercut Them

Even if doctors have the same training, practice in the same specialty area and work in the same office, they have idiosyncrasies. AI-based medical technologies don’t.

Because they’ll be trained on different data, AIs will function differently from one healthcare institution to another. But within a given institution, they’ll be capable of a level of consistency humans can’t match, not only in their results but in the processes they follow. And if I were a physician, that would worry me just a little bit.

Don’t misunderstand me: I’m not suggesting that AI will replace physicians completely anytime in the foreseeable future. There are plenty of reasons why this is so, not least that AI technology is nowhere near approximating what we call judgment. AI technologies only perform analyses, which are a component of judgment but not the whole show. And as long as healthcare remains an imperfect science, judgment matters.

On the other hand, AI platforms are growing increasingly sophisticated, and they’re becoming more important by the day. As a result, it’s worth considering how a badly handled rollout of an AI tool could undercut doctors and spike burnout levels. I’m imagining a situation in which health leaders trust the judgment of the machine more than that of their own physicians.

If I were managing population health programs for a health system, I might favor the consistency of well-tuned analytics over the messier process of human intuition, research, and physician-patient communication. At least when it comes to diagnostics, AI algorithms might even outperform physicians over time, since (unfair though it may be) they never get tired, bored, angry or frustrated.

With hundreds of thousands or even millions of patients to manage, some health leaders might see it as downright irresponsible to let doctors muck about with processes that AI can handle competently.

In fact, it’s entirely possible that risk-averse healthcare executives would demand that doctors defer to the AI and follow its recommendations. After all, once an organization puts Dr. AI into production as a tool, it could someday be sued if its doctors didn’t use it. Not doing so could even be considered negligent.

To my knowledge, there aren’t any AI tools in production likely to interfere directly with the doctor-patient relationship. So far, we’re only looking at a few pilot tests which, though they’ve generated exciting results, are still at a very early stage.

Not only that, the idea of health leaders letting a machine completely take over a key aspect of patient care is a worst-case scenario. Speaking as a patient, I think I’d boycott any institution that forced doctors down this particular path.

Still, as early applications prove their worth in clinical settings, we need to guard against using AI to stifle doctors. Let’s not repeat the mistakes we made in deploying EHRs. AI could be used to create supportive tools that help physicians take better care of patients, or it could be deployed to club doctors into conformity.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.
