I am not convinced that voice assistants like Siri and Alexa are the best way to access healthcare data. My experience with voice recognition – which has been a staple of my life since I broke my shoulder two years ago – is that it’s good for churning out a lot of words, but not for getting all of them right.
Even after being trained to understand my unique tone, pronunciation and vocabulary, my software runs at what I’d estimate to be 70% to 80% accuracy. I’d love to believe that the recognition engines from big players like Apple and Amazon are dramatically better, but my sense is that they aren’t.
That being said, researchers at Vanderbilt University Medical Center seem to take a much more optimistic view of voice recognition technology. A team at VUMC is in the process of creating voice-controlled virtual assistant software that will allow team members to interact with their Epic EHR.
The new assistant will be known as EVA, short for EHR Voice Assistant. In a news item published by VUMC, project leader Yaa Kumah-Crystal, MD, MPH, MS, pulled up a patient chart and asked EVA a standard clinical question.
“What was the last sodium?” she asked EVA, which transcribed and displayed the query, retrieved the test value, indicated whether the result was normal, and showed where it fell in the range of possible sodium results. While at this point EVA responds with text, it will eventually have its own voice, the VUMC item notes.
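The range check behind that kind of response is simple to sketch. The following is purely illustrative and not taken from the VUMC project; the reference range used (135–145 mmol/L for serum sodium) is a common textbook range, and the function name is my own invention:

```python
def classify_result(value: float, low: float, high: float) -> str:
    """Report whether a lab value is low, normal, or high for its range."""
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "normal"

# Illustrative reference range for serum sodium, in mmol/L
SODIUM_RANGE = (135.0, 145.0)

print(classify_result(140.0, *SODIUM_RANGE))  # prints "normal"
```

The hard part, of course, isn’t classifying the number; it’s reliably hearing the question in the first place.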
According to Kumah-Crystal, another use case would be issuing orders while rounding in the hospital. “What if you could say, ‘Let’s order a BMP’ or ‘Let’s order a CBC’ and it’s just done? The goal is to make it as natural as having an actual conversation with a really useful intern,” she said.
OK, so where will the voice assistant logic come from? I don’t like to go overboard citing press pieces, but this one contains a lot of meat, so here’s more on VUMC’s process straight from the horse’s mouth.
According to Peter Shave, the Center’s executive director of health IT systems architecture, the organization is testing various commercial software packages for use in the EVA infrastructure. As part of that process, the prototype randomly selects a natural language processing service to convert the spoken query into text; a second component then converts the text into something meaningful. “It’s this middle component that’s the more recent advancement that’s made all these at-home devices possible,” Shave notes.
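A rough sketch of that second stage, turning transcribed text into something the EHR can act on, might look like the snippet below. To be clear, the function name, intent format and regex approach here are my own invention for illustration; a real system like EVA would use a trained NLP model, not pattern matching:

```python
import re

def text_to_intent(query: str) -> dict:
    """Map a transcribed query to a structured EHR request (illustrative only)."""
    normalized = query.lower().strip("?! ")
    # Lab-retrieval queries like "What was the last sodium?"
    match = re.match(r"what was the last (\w+)", normalized)
    if match:
        return {"action": "retrieve_lab", "test": match.group(1)}
    # Order queries like "Let's order a BMP"
    match = re.match(r"(?:let's )?order an? (\w+)", normalized)
    if match:
        return {"action": "place_order", "order": match.group(1)}
    return {"action": "unknown", "raw": query}

print(text_to_intent("What was the last sodium?"))
# prints {'action': 'retrieve_lab', 'test': 'sodium'}
```

The point of the sketch is the separation of concerns Shave describes: speech recognition produces text, and a distinct middle layer assigns that text meaning, which is what makes consumer voice devices work at all.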
The team expects to begin getting feedback from health system clinicians in February. Epic, which is working with VUMC on the project, is building its own EHR voice assistant, though it’s not clear when its version will become available.
While this all sounds good, I find myself wondering whether project architects have any sense of what’s involved in fostering good voice recognition. In particular, I’m curious about how they’ll cope with noisy hospital environments.
In my case, I’ve found that ambient noise in the room or hallway, nearby conversations or even air currents can throw off voice recognition or even crash the software. If it hasn’t come up already, both VUMC and Epic may find that simply managing the audio environment for voice recognition is quite difficult. Let’s hope this otherwise promising project isn’t sidelined by that kind of issue.