Patients Fear That Medical AI Can’t Handle Their Unique Needs

A new study has concluded that patients are skittish about interacting with medical AI technology in part because they fear that AI won’t address their unique characteristics adequately.

The study looked at how receptive consumers were to the use of “medical AI,” which researchers defined as any machine using an algorithm or statistical model to perform perceptual, cognitive and/or conversational functions in patient care.

A central focus of the study was the impact of "uniqueness neglect," the belief among patients that medical AI won't take their unique issues into account when making judgments.

Among the key conclusions of the study was that consumers are less willing to receive healthcare from AI providers than from human providers, seeing care delivered this way as too standardized and likely to neglect their individual needs. The more unique consumers considered themselves to be, the greater their resistance to medical AI.

In addition, the study found that patients are less willing to pay for healthcare delivered by AI than by humans.

That resistance went away when the AI-driven provider offered added input from a human physician. Consumers were also more comfortable with medical AI when someone other than themselves was using it to get care.

It also helped if providers tackled the uniqueness neglect problem directly. When medical AI was framed as offering personalized, customizable care based on unique patient attributes, patients were more comfortable with its use, the researchers said.

For me, these results were somewhat counterintuitive. I would have predicted that consumers would fear that medical AI systems are more likely to make mistakes than human providers, particularly given that AI applications aren't likely to be capable of explaining how they reached a given decision. That concern, however, didn't seem to be a big issue for the respondents.

Physicians are certainly not thrilled with the extent to which AI tools fail to explain how they make decisions, an issue known as the “black box” problem, but of course, their interactions with such technologies are different.

What it comes down to is that consumers still crave some degree of hands-on human contact when they get care. In the future, as medical AI companies work to humanize their interfaces, consumers may be less guarded. For the time being, though, they don't seem ready to let go of the give-and-take they can expect when their clinician is a fellow human.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.
