Mobile for EMR Data Input

Note: I know that there are some mistakes and incongruities in this post. That was partially by design, since it illustrates my first attempt at using voice recognition for blog posting. I did try to correct many things along the way, but as you’ll see, some of it doesn’t read very well.

I’m stuck on the tarmac at JFK airport, so I thought I’d see how well voice recognition works today for input on a mobile phone.

The amazing thing is that this is the first time I’ve used voice recognition on my Android S3 phone. It seems like a pretty good experience so far; even speaking with a soft voice in an airplane, it’s turning out quite well.

I was a little concerned about how long it would take to write a blog post on the phone, but the voice recognition has worked out quite well. I have had to make a few corrections, but for the most part it’s done really well.

I’m not sure how many doctors will want to use voice, and of course I haven’t tried many medical terms yet. For example, I can talk about my son’s previous diagnosis of mastocytosis to see how it will transcribe. As you can see, I didn’t actually have to correct it; I got mastocytosis without any problem, so it did pretty well. I wonder whether other doctors have used the voice recognition on Android phones or tablets and how well it handles medical terms. That said, the second time I said mastocytosis it didn’t get it right.

Overall, I’m pretty happy with the voice recognition. I have written this whole post in about five minutes, and it’s the first time I’ve used voice recognition on the phone. With that said, I would still probably rather type than use voice recognition for blog posts. However, I would rather use voice recognition than the keyboard on the phone.

Have you used voice recognition? In what ways do you use it? I’m looking forward to using voice recognition more, and I’ll let you know how it goes. What is amazing is that this technology is built into every new smartphone out there.

I’m off to the CHIME conference later today so I’ll have more details on that coming soon.

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles, with over half of the articles written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the Healthcare and IT Marketing Conference, a first-of-its-kind conference and community focused on healthcare marketing, as well as EXPO.health, a healthcare IT conference focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. John is highly involved in social media, and in addition to his blogs can be found on Twitter: @techguy.

6 Comments

  • I’ve used Naturally Speaking and didn’t like it. Between watching my words appear and making corrections, it was never any quicker than my typing.

    Yet, on my Android phone, I regularly “talk my text” messages. Funny thing is, I’ve been doing this for years, yet Siri gets more publicity.

    It is just short of shocking how well my phone does with voice recognition.

  • John,
    As I mention in the post, I’d also prefer typing to voice recognition, as long as I have a full keyboard. One thing I didn’t mention is that for me it’s easier to put together my thoughts as I type than as I talk. I’m sure I could learn, and many doctors who have used transcription have been able to do it with no problem.

    Also, it’s interesting that you like Siri so much since it’s the same technology as Dragon. So, the recognition will be basically the same.

  • There are some important differences between dictating non-medical content with built-in voice recognition technologies and dictating medical content using Dragon Medical. First, the best recognition happens when the software is “trained” to your voice. Mobile applications such as those available on the iPhone 4S (and up) and Android devices are speaker independent. As a result, the software never learns your unique phrases and pronunciations. And the voice processing is performed on a server, which is why you can only use the voice recognition component when you have an Internet connection.

    Compare that to Dragon Medical, which runs entirely on your computer and is trained to your voice. And a handheld or headset mic is superior to the built-in mic on your mobile device. The real difference, though, is Language Modeling. Language Modeling, not the medical vocabulary, is what makes medical voice recognition so good. Essentially, the software expects to hear certain words and phrases. (Interestingly, the software doesn’t know the WORDS you’re dictating; rather, it hears the 44 distinct SOUNDS in the English language, then applies a cousin of Bayes’ theorem to write the words it thinks you said.)

    Another way to explain Language Modeling is to observe what happens when you use a cardiology model/vocabulary to dictate the Pledge of Allegiance. Instead of “I pledge allegiance…” the software transcribes something like “I plan to lesions.” Why? Because it’s expecting to hear the cardiovascular terms and phrases that cardiologists are likely to dictate. (This also explains why using the consumer Dragon NaturallySpeaking product doesn’t produce the greatest accuracy in medical uses.)

    Although it continues to be challenged by names and short words like “he” and “she,” Dragon Medical has come a long way in a short time. That’s thanks partly to more powerful hardware and partly to improved voice recognition algorithms. With the right software configuration, the right microphone, and a little training, almost all users reach 99% or better recognition accuracy within the first week.
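The “cousin of Bayes’ theorem” idea above can be sketched in a few lines of code. In this toy Python example, every sound, word, and probability is invented purely for illustration (real recognizers use far richer acoustic and language models), but it shows how swapping the language model changes which words get written for the exact same sounds:

```python
# Toy noisy-channel sketch: the recognizer hears sounds, not words, and
# combines P(sound | word) with a language model P(word | previous word)
# to decide what to write. All values below are made up for illustration.

# Acoustic model: how likely each heard sound is for each candidate word.
acoustic = {
    "plej":   {"pledge": 0.6, "plan": 0.4},
    "uh-lee": {"allegiance": 0.5, "to lesions": 0.5},
}

# Two hypothetical bigram language models, P(word | previous word).
general_lm = {
    ("i", "pledge"): 0.4, ("i", "plan"): 0.3,
    ("pledge", "allegiance"): 0.7, ("plan", "to lesions"): 0.01,
}
cardiology_lm = {
    ("i", "pledge"): 0.01, ("i", "plan"): 0.5,
    ("pledge", "allegiance"): 0.01, ("plan", "to lesions"): 0.4,
}

def decode(sounds, lm, start="i"):
    """Greedily pick, for each sound, the word maximizing
    P(word | previous word) * P(sound | word)."""
    prev, words = start, [start]
    for sound in sounds:
        candidates = acoustic[sound]
        best = max(candidates,
                   key=lambda w: lm.get((prev, w), 1e-6) * candidates[w])
        words.append(best)
        prev = best
    return " ".join(words)

print(decode(["plej", "uh-lee"], general_lm))     # i pledge allegiance
print(decode(["plej", "uh-lee"], cardiology_lm))  # i plan to lesions
```

With the general model the two sounds decode to the Pledge; with the cardiology-biased model the very same sounds decode to “plan” and “to lesions,” because that model expects cardiology phrasing.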

  • I don’t like or dislike Siri, I noted that Android had quality VR well before Siri came around.

    Another issue with Siri that people need to be aware of is that when you speak to Siri, that data is sent to central servers, then sent back.

    In short, don’t use your iPhone for dictation that includes PHI.

  • Nice post, and a neat idea to try VR for your own uses, but as John B points out, it’s a different use case with different results.

    As John says, Dragon on the PC (or Mac) is trained and customized to your voice, speech patterns, and sequence of words. It also typically has better-quality audio (very important for good recognition accuracy) with a decent microphone, although many phones now use greatly improved microphones, sometimes more than one, to help digitally subtract background noise.

    In fact, recognition in a specific domain is easier for the technology, since the possible results are limited by the vocabulary and the normal sequence of words in clinical text.

    The mobile version of speech on your phone is not trained to your voice. Despite that, note how well it performs. I think that is very impressive, and it suggests that if you extend cloud-based voice recognition with a Dragon-style customized solution and a user-specific profile, you get great accuracy. This is certainly what we see with our cloud-based version of Dragon, which is integrated into several medical applications (Sparrow EDIS, for example):

    Sparrow EDIS Powered by Nuance Healthcare from Montrue Technologies, Inc. on Vimeo.

    and there are many more in the pipeline.

    John B is also right about the free versions of speech and PHI: these are not medically hardened applications, and the information is not secure. That’s why we spent some time creating a secure cloud-based voice recognition tool set that secures information on the phone, in transit, in the data center, and on the return trip, so that it can be used with PHI. This is the 360 Development Platform, which includes not only speech recognition but also understanding services that apply Natural Language Processing.

    Mobile voice technology is being included in many EMRs; Cerner and Epic both announced this in the last couple of weeks, and there are many others on the way. Exciting times for you, John, and for all the clinicians who need another method to capture information on their mobile platform.

  • Thanks for the shout out, Dr. Nick! We are getting astonishing accuracy with the cloud version of Dragon in our app. At ACEP last week in Denver we tried it out with a couple doctors from Turkey with thick accents. They assured me it wouldn’t work. I insisted they try. They both got 99% accuracy without even using their own speech profile. Now they are presenting Sparrow EDIS at the Pan-Asian Health IT conference this week in Korea.
