Without a doubt, doctors benefit from the face-to-face contact with patients that scribes restore, and patients seem to like being able to talk freely without waiting for doctors to catch up on their typing. Unfortunately, though, putting scribes in place to gather EMR information can be pricey.
But what if human scribes could be replaced by digital versions, ones that interpret the content of office visits using speech recognition and machine learning tools and automatically enter that data into an EHR system? Could this be done effectively, safely and affordably? (Side Note: John proposed something similar happening with what he called the Video EHR back in 2006.)
We don’t know the answer yet, but we may find out soon. Working with Google, a Stanford University doctor is piloting the use of digital scribes at the family medicine clinic where he works. Dr. Steven Lin is conducting a nine-month study of the concept at the clinic, which will include all nine doctors currently working there.
Patients can choose whether to participate or not. If they do opt in, researchers plan to protect their privacy by removing their protected health information from any data used in the study.
To capture the visit information, doctors will wear a microphone and record the session. Once the session is recorded, team members plan to use machine learning algorithms to detect patterns in the recordings that can be used to complete progress notes automatically.
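To make the idea concrete, here is a minimal sketch of what the pattern-detection step might look like once speech recognition has already turned the recording into a transcript. The phrase patterns, note fields, and function names below are illustrative assumptions on my part, not details of the Stanford/Google pipeline:

```python
import re

# Hypothetical sketch: scan a visit transcript (already produced by
# speech recognition) for phrase patterns, and use the matches to
# draft fields of a simple progress note. Real systems would use
# trained models rather than hand-written regexes; this only
# illustrates the "detect patterns, fill the note" concept.

SECTION_PATTERNS = {
    "chief_complaint": re.compile(
        r"(?:i've been having|i'm here for|complaining of)\s+(.+?)(?:\.|$)", re.I),
    "plan": re.compile(
        r"(?:i'm going to prescribe|let's start you on|i recommend)\s+(.+?)(?:\.|$)", re.I),
}

def draft_progress_note(transcript: str) -> dict:
    """Return a draft note mapping section names to extracted text."""
    note = {}
    for section, pattern in SECTION_PATTERNS.items():
        match = pattern.search(transcript)
        note[section] = match.group(1).strip() if match else ""
    return note

transcript = ("Patient: I've been having chest pain for two days. "
              "Doctor: I'm going to prescribe a low-dose aspirin.")
print(draft_progress_note(transcript))
```

Even this toy version hints at why the approach is attractive: the note drafts itself while the doctor keeps talking to the patient.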
As one might imagine, the purpose of the pilot is to see what challenges doctors face in using digital scribes. Not surprisingly, Dr. Lin (and doubtless Google as well) hopes to develop a digital scribe tool that can be used widely if the test goes well.
While the information Stanford is sharing on the pilot is intriguing in and of itself, there are a few questions I’d hope to see project leaders answer in the future:
- Will the use of digital scribes save money over the cost of human scribes? How much?
- How much human technical involvement will be necessary to make this work? If the answer is “a lot,” can this approach scale up to widespread use?
- How will providers do quality control? After all, even the best voice recognition software isn’t perfect. Unless there’s some form of human content oversight, mistranscribed words could end up in patient records indefinitely – and that could lead to major problems.
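On the quality-control question, one common pattern is to use the per-word confidence scores that speech recognizers typically return and route anything below a threshold to a human reviewer before it reaches the record. The data shape and threshold below are illustrative assumptions, not part of the pilot as described:

```python
# Hypothetical quality-control gate: flag low-confidence words from a
# speech recognizer for human review instead of writing them straight
# into the patient record. The (word, confidence) format and the 0.85
# cutoff are assumptions for illustration only.

REVIEW_THRESHOLD = 0.85

def flag_for_review(words):
    """Given (word, confidence) pairs, return the low-confidence words."""
    return [word for word, conf in words if conf < REVIEW_THRESHOLD]

recognized = [("patient", 0.98), ("reports", 0.95),
              ("dyspnea", 0.62), ("on", 0.97), ("exertion", 0.71)]
print(flag_for_review(recognized))
```

A gate like this wouldn't eliminate the oversight problem, but it would focus scarce human attention on the words most likely to be wrong.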
Don’t get me wrong: I think this is a super idea, and if this approach works it could conceivably change EHR information gathering for the better. I just think it’s important that we consider some of the tradeoffs that we’ll inevitably face if it takes off after the pilot has come and gone.