Vanderbilt Creating Voice-Command Interface For Its Epic EHR

I am not convinced that voice assistants like Siri and Alexa are the best way to access healthcare data. My experience with voice recognition – which has been a staple of my life since I broke my shoulder two years ago – is that it’s good for churning out a lot of words, but not for getting all of them right.

Even after being trained to understand my unique tone, pronunciation and vocabulary, my software runs at what I’d estimate to be 70% to 80% accuracy. And while I’d love to believe that the recognition engines from big players like Apple and Amazon are dramatically better, my sense is that they aren’t.

That being said, they seem to have a much more optimistic view of voice recognition technology at Vanderbilt University Medical Center. Researchers at VUMC are creating voice-controlled virtual assistant software that will let team members interact with their Epic EHR.

The new assistant will be known as EVA, short for EHR Voice Assistant. In a news item published by VUMC, project leader Yaa Kumah-Crystal, MD, MPH, MS, pulled up a patient chart and asked EVA a standard clinical question.

“What was the last sodium?” she asked EVA, which transcribed and displayed the query, retrieved the test value, and indicated whether the result was normal and where it fell in the range of possible sodium values. While at this point EVA responds with text, it will eventually have its own voice, the VUMC item notes.
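To make that interaction concrete, here’s a rough sketch, in Python, of the retrieve-and-check step behind a question like this one. Everything in it (the chart data, the function names, the hard-coded reference range) is my invention for illustration; the article doesn’t describe EVA’s internals:

```python
# Hypothetical sketch of answering "What was the last sodium?" -- the
# names and mock data are invented; this is not VUMC's actual code.

# A commonly cited adult serum sodium reference range, in mmol/L.
SODIUM_RANGE = (135, 145)

# Stand-in for a chart lookup: recent sodium results for one patient.
MOCK_CHART = {
    "sodium": [{"value": 141, "unit": "mmol/L", "drawn": "2018-12-03"}],
}

def last_result(chart, analyte):
    """Return the most recent result for an analyte, or None."""
    results = chart.get(analyte, [])
    return results[-1] if results else None

def answer_query(chart, analyte, ref_range):
    """Retrieve the latest value and report where it falls in the range."""
    result = last_result(chart, analyte)
    if result is None:
        return f"No {analyte} results on file."
    low, high = ref_range
    status = "normal" if low <= result["value"] <= high else "abnormal"
    return (f"Last {analyte}: {result['value']} {result['unit']} "
            f"({status}; reference range {low}-{high} {result['unit']}).")

print(answer_query(MOCK_CHART, "sodium", SODIUM_RANGE))
# -> Last sodium: 141 mmol/L (normal; reference range 135-145 mmol/L).
```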

According to Kumah-Crystal, another use case would be issuing orders while rounding in the hospital. “What if you could say, ‘Let’s order a BMP’ or ‘Let’s order a CBC’ and it’s just done? The goal is to make it as natural as having an actual conversation with a really useful intern,” she said.
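At its simplest, the interpretation step for a command like that could be a lookup from a recognized phrase to an orderable. Here’s a toy sketch; the names are mine, and a real system would route through Epic’s order entry with all of its safety checks:

```python
# Toy mapping from spoken phrases to orderables -- purely illustrative.
# A real system would confirm the order, the patient, and the ordering
# provider's credentials before anything is "just done."
ORDERABLES = {
    "bmp": "Basic Metabolic Panel",
    "cbc": "Complete Blood Count",
}

def parse_order(utterance):
    """Return the orderable mentioned in an utterance, or None."""
    for word in utterance.lower().replace(",", " ").split():
        if word in ORDERABLES:
            return ORDERABLES[word]
    return None

assert parse_order("Let's order a BMP") == "Basic Metabolic Panel"
assert parse_order("Let's order a CBC") == "Complete Blood Count"
```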

OK, so where will the voice assistant logic come from? I don’t like to go overboard citing press pieces, but this one contains a lot of meat, so here’s more on VUMC’s process straight from the horse’s mouth.

According to Peter Shave, the Center’s executive director of health IT systems architecture, the organization is testing various commercial software packages for use in the EVA infrastructure. In this process, the prototype randomly selects a natural language processing service to convert the query into text; a second component then converts the text into something meaningful. “It’s this middle component that’s the more recent advancement that’s made all these at-home devices possible,” Shave notes.
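As I read Shave’s description, the testing harness works roughly like the sketch below: each query is routed to a randomly chosen transcription service, and a separate component maps the transcript to a structured request. To be clear, this is my guess at the shape of the thing, not VUMC’s actual code:

```python
import random

# Stand-ins for the commercial speech-to-text services under evaluation;
# each would wrap a vendor API in a real harness.
def service_a(audio):
    return "what was the last sodium"

def service_b(audio):
    return "what was the last sodium"

TRANSCRIBERS = [service_a, service_b]

def interpret(transcript):
    """The 'middle component': turn free text into a structured request.
    A real natural language understanding service would do intent and
    slot extraction; this stub handles a single pattern."""
    if "last" in transcript and "sodium" in transcript:
        return {"intent": "latest_result", "analyte": "sodium"}
    return {"intent": "unknown"}

def handle_query(audio):
    # Pick a transcription service at random, as Shave describes, so
    # vendors' accuracy can be compared across many live queries.
    transcriber = random.choice(TRANSCRIBERS)
    return transcriber.__name__, interpret(transcriber(audio))

print(handle_query(b"...raw audio bytes..."))
```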

The team expects to begin getting feedback from health system clinicians in February. Epic, which is working with VUMC on the project, is building its own EHR voice assistant, though it’s not clear when its version will become available.

While this all sounds good, I find myself wondering whether project architects have any sense of what’s involved in fostering good voice recognition. In particular, I’m curious about how they’ll cope with noisy hospital environments.

In my case, I’ve found that ambient noise in the room or hallway, nearby conversations, or even air currents can throw off voice recognition or even crash the software. If it hasn’t come up already, VUMC and Epic may both find that simply managing the audio environment for voice recognition is really difficult. Let’s hope this otherwise promising project isn’t sidelined by that kind of issue.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

1 Comment

  • Agree with your observations. A couple of minor points. “Voice” recognition is the act of recognizing whose voice is speaking, typically for commands; “speech” recognition is the act of converting speech into text. These terms are often confused. As for using voice/speech recognition in a noisy environment: if the application can be trained on the user’s voice, then as long as that training happens in an environment similar to the one where it will be used, the application should be able to adapt. Training of users is critical to success! Users frequently get frustrated trying to use the system and give up. Also, since the process involves correctly interpreting what the user is saying, it is important to either train the word or phrase, or use the application to correct the error, rather than deleting it and typing the correction. Deleting removes the speech associated with the created text, and the system does not learn. One final point: medical dictation is most accurate when a specialized medical vocabulary is used. Off-the-shelf applications will not produce results as accurate, since specialized medical terms are likely not part of their vocabulary.
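The commenter’s final point, that a specialized medical vocabulary beats an off-the-shelf one, can be illustrated with a simple rescoring step: given several candidate transcripts from a general-purpose recognizer, prefer the one whose words best match a medical lexicon. A minimal sketch using only the Python standard library (the lexicon and candidates are invented):

```python
import difflib

# Tiny stand-in for a medical lexicon; a real one would hold many
# thousands of terms (drug names, lab tests, anatomy, and so on).
MEDICAL_LEXICON = ["sodium", "bmp", "cbc", "metabolic", "panel"]

def lexicon_score(transcript):
    """Sum each word's best similarity to any lexicon term, so exact
    medical terms outscore near-misses like 'bump' vs 'bmp'."""
    total = 0.0
    for word in transcript.lower().split():
        total += max(difflib.SequenceMatcher(None, word, term).ratio()
                     for term in MEDICAL_LEXICON)
    return total

def pick_best(candidates):
    """Choose the recognizer hypothesis that best fits the domain."""
    return max(candidates, key=lexicon_score)

# A general-purpose recognizer might mishear "BMP"; domain rescoring helps.
print(pick_best(["let's order a bump", "let's order a bmp"]))
# -> let's order a bmp
```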
