With many providers adopting AI technology, it’s likely that some of them will face medical malpractice suits blaming the AI for patient harm. While it’s still unclear how to tackle this problem, it’s clear that healthcare organizations will have to address it soon.
According to Saurabh Jha, who wrote a piece on this subject for Stat, it’s still not clear who or what is liable when an AI-fueled mistake harms a patient.
Consider, for example, a hospital that uses AI rather than a radiologist to interpret chest x-rays. If the AI misses a case of pneumonia, leading to the patient’s death from septic shock, who’s responsible? Existing law offers some guidelines, but important questions remain far from settled.
On the one hand, if the hospital developed the algorithm in house, things are comparatively straightforward: the hospital will likely be on the hook under the concept of enterprise liability. “Though the medical center isn’t legally obligated to have radiologists oversee AI’s interpretation of x-rays, by removing radiologists from the process it assumes the risk of letting AI fly solo,” he notes.
However, if the hospital bought the AI technology from an outside vendor, Jha says, things get more complicated.
Assuming the AI algorithm was approved by the FDA, vendors can avoid some risk due to a legal concept known as preemption. The idea behind the preemption doctrine is that when state and federal laws are in conflict, federal law prevails. This allows the AI vendor to avoid meeting each state’s safety requirements.
On the other hand, not all FDA blessings are equal. In one case, medical device maker Medtronic wasn’t shielded from liability after its pacemaker failed, because the device had been cleared through the less-strict expedited 510(k) pathway.
Also, it’s not clear whether preemption can apply to AI in the first place, given that unlike other software, AI software learns and evolves over time. Until the FDA develops new regulatory approaches that handle these emerging issues, many liability issues will remain in limbo.
Then there’s the question of whether AI will become the expected way to provide medical care, especially diagnoses. If doctors who use AI miss fewer serious problems, leveraging AI in medical practice could become the standard of care, and physicians who don’t adopt it could face much greater medical malpractice risks.
In fact, physicians may ultimately face lawsuits for disagreeing with the AI’s conclusions, Jha says. “A string of such lawsuits would make [physicians] practice defensively,” he suggests. “Eventually, they would stop disagreeing with AI because the legal costs of doing that would be too high.”