Not long ago, I wrote about what happened when I requested a digital copy of my medical records from my healthcare provider, Kaiser Permanente. In that article, I described struggling with a nearly 600-page PDF whose organization seemed to have no discernible rhyme or reason to it.
After spending a few weeks feeling flabbergasted, I did in fact go back into my records and delve more deeply into a few areas of concern. When I did so, I was a bit shocked about some of the things I saw.
For example, in one report on an encounter regarding the status of my Parkinson’s disease, a neurologist wrote that I said I was “well-read” on Parkinson’s.
And you know what? I am. My ability to function depends upon it. (Heck, I may be more current than some active clinicians who simply don’t have time to keep up with the latest journal articles.) For what it’s worth, I brought this up not to challenge the doctor but to let them know that they didn’t need to recap the basics during the visit. Not that my reasons should matter.
But it seems the doctor didn’t approve of my showing some confidence.
If you think I’m reading things into the notes that aren’t there, contact me and I’ll send you a page or two of the actual report. I defy you to draw a different conclusion when you read the rest of the note.
In another instance, a cardiologist wrote that I challenged her conclusions and told her that I was frustrated with our interaction. This is absolutely true. But is a medical record the place to document a customer service issue or a patient–physician relationship matter?
Again, it struck me that the doctor used the visit notes to express her discomfort with my assertive side and her skepticism about my (objectively accurate) belief that I am well informed about my health.
You can say that these words aren’t a big deal, but to me, it’s obvious that they are. As I know from writing pieces like this for 30 years, the tone of a written communication absolutely matters, especially when that communication can assume a great deal of practical and even legal significance.
Okay, so why am I sharing all this with you here in this forum? My reason is that, as I see it, health IT may provide at least a partial solution to these problems. Having been reminded how physicians inevitably bring their prejudices to their work, I’m beginning to think that having notes compiled by a listening AI would make the data cleaner and more useful.
Yes, I’m aware that by definition the data used to train an AI system includes its own embedded prejudices, including some that perpetuate disparities in diagnosis and treatment. We should definitely keep our eye on such distortions and address them as soon as possible.
That being said, there’s a lot that’s right about taking an AI-driven approach to documentation, most particularly that an AI system has no “feelings” about anything. An AI system won’t feel threatened or impose individual judgments on how I’m managing my care. Algorithms don’t have an ego or a value system that tells them patients like me should sit down and shut up. They don’t have a social agenda.
If what I saw is typical of human documentation, give me sweet, nonjudgmental AI.