Over the past several years, EMRs have taken some steps forward. At least in some cases, analytics have gotten better, vendors have begun offering both cloud and on-premises versions of their products, and even user interfaces have improved.
But one problem with EMRs that seems to be nearly unfixable is the need for providers to stare at an EMR screen, leaving patients to fidget uncomfortably while they wait for a bit of face-to-face contact and discussion. Sure, you’ll see scribes in hospital emergency departments, allowing ED docs to speak to patients without interruption, but in the outpatient settings where patients spend most of their time, the EMR screen is king.
Such a focus on the EMR display isn’t unreasonable, given the importance of the data being entered. But as critics have noted countless times, it makes it more likely that the provider will miss subtle cues about the patient’s condition, and possibly end up offering lower-quality care than they would have in an old-fashioned, computerless encounter.
I have long thought, however, that there’s a solution to this problem which would help both the physician and the patient, one which would literally put patients and doctors on the same page. I’m speaking of a new set of EMR settings designed specifically to let patients collaborate with physicians.
Such an EMR setting, as I envision it, would begin with a screen depicting a dummy patient of the appropriate gender. The patient would touch the areas of the body that were causing them problems, while the doctor typed up a narrative version of the problem presentation. The two would then zoom in together to more specific descriptions of what the patient’s trouble might be, and the doctor would educate the patient as to what kind of treatment these different conditions might require.
At that point, depending on what condition(s) the doctor chose as requiring further study, lists of potential tests would come up. If a patient wanted to learn what these tests were intended to accomplish, they’d have the liberty to drill down and learn, say, what a CBC measures and why. The patient would also see, where possible, the data (such as high cholesterol levels) which caused the doctor to seek further insight.
If the patient had a known illness being managed by the physician, such as heart disease, a tour through a 3-D visual model of the heart would also be part of the collaboration, allowing the doctor to educate the patient effectively as to what they were jointly trying to accomplish (such as halting heart muscle thickening).
The final step in this patient-doctor process would come with the system presenting a list of current medications taken by the patient, and if appropriate, new medications that might address any new or recurring symptoms the patient was experiencing.
The final result would come in the form of a PDF, e-mailed to the patient or printed out for their use, offering an overview of their shared journey. The doctor might have to spend a few minutes adding details to their notes after the patient left, but for the most part, the collaborative consult would have met everyone’s needs.
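For readers who think in code, the collaborative visit described above could be modeled as a simple data structure. This is purely a hypothetical sketch — every class and field name here is invented for illustration, not drawn from any real EMR product:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BodyMapSelection:
    """A region the patient touched on the on-screen dummy figure."""
    region: str
    patient_description: str


@dataclass
class TestOrder:
    """A test the doctor selected, with a plain-language explanation
    the patient can drill into (e.g. what a CBC measures and why)."""
    name: str
    purpose: str
    supporting_data: Optional[str] = None  # e.g. a high cholesterol reading


@dataclass
class ConsultSession:
    """One collaborative visit, built up step by step on a shared screen."""
    narrative: str = ""  # doctor's typed version of the problem presentation
    selections: List[BodyMapSelection] = field(default_factory=list)
    candidate_conditions: List[str] = field(default_factory=list)
    ordered_tests: List[TestOrder] = field(default_factory=list)
    current_medications: List[str] = field(default_factory=list)
    proposed_medications: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """Plain-text overview of the shared visit -- the content that
        would be rendered to PDF and emailed or printed for the patient."""
        lines = ["VISIT SUMMARY", f"Narrative: {self.narrative}"]
        for sel in self.selections:
            lines.append(f"Reported: {sel.region} - {sel.patient_description}")
        for test in self.ordered_tests:
            lines.append(f"Ordered: {test.name} ({test.purpose})")
        meds = self.current_medications + self.proposed_medications
        if meds:
            lines.append("Medications: " + ", ".join(meds))
        return "\n".join(lines)
```

The point of the sketch is simply that the whole encounter — body-map selections, the doctor’s narrative, ordered tests with their explanations, and the medication list — accumulates in one shared object, so the closing summary falls out of the session rather than requiring a separate documentation step.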
Now you tell me: Why aren’t we doing this now? Wouldn’t it make much more sense, and take much more advantage of the powerful desktops, tablets and smartphones we have, than having a provider stare at a screen for most of their visit with a patient?