Need Point of Care EMR Documentation to Meet Future EMR Documentation Requirements

As part of my ongoing writing about what people are starting to call the EHR Backlash, I started to think about the shifting tides of EMR documentation. One of the strongest parts of the EHR backlash from doctors centers on the convoluted documentation that an EMR creates. There is no shortage of doctors who are tired of getting a stack of EMR documentation where only two lines in the middle mean anything to them.

Related to this is the physician backlash against “having to do SOOOO many clicks.” (emphasis theirs) I still love the analogy comparing EHR clicks to playing a piano, but unfortunately EHR vendors haven’t done a good job solving the two things described in that article: fast, predictable response and training.

With so many doctors dissatisfied with all the clicking, I predict we’re going to see a shift in documentation requirements toward needing a full keyboard, as many doctors do away with the point-and-click craziness that fills so many doctors’ days. Sure, transcription and voice recognition can play a role for many doctors, and scribes or similar documentation methods will have their place, but I don’t see them taking over the documentation. The next generation of doctors types quickly and won’t have any problem typing their notes, just like I don’t have any issue typing this blog post.

As I think about the need for the keyboard, it makes me think about the various point of care computing options out there. I really don’t see a virtual keyboard on a tablet ever becoming a regular typing instrument. At CES I saw a projected keyboard that was pretty cool, but it still had a lot of development to go. This explains why the COWs (computers on wheels) that I saw demoed at HIMSS are so popular and likely will be for a long time to come.

Even if you subscribe to the scribe or another data input method, I still think most of that documentation is going to need to be available at the point of care. I’ve seen firsthand the difference between having a full keyboard documentation tool in the room with you and charting in some other location. There’s just so much efficiency lost when you’re not able to document in the EMR at the point of care.

I expect that as EMR documentation options change, the need to have EMR documentation at the point of care is going to become even more important.

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles, with over half of them written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the first-of-its-kind conference and community focused on healthcare marketing, the Healthcare and IT Marketing Conference, and a healthcare IT conference, EXPO.health, focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. He is highly involved in social media, and in addition to his blogs can be found on Twitter: @techguy.

7 Comments

  • You can add Natural Language Processing to the recipe for getting fuller and more usable records from doctors. Some institutions are already using NLP tools to extract population health measures from their documentation. Doctors can do their stream-of-consciousness thing without trying to figure out which of the 30 fields they need to enter a value into. I think the investment here will pay off faster than expensive new hardware with a rigid model for delivery of care.

  • I wrote what was probably the first PM/EMR for Windows in the early 1990s. Nothing has really changed since then. Of course, we have more integration with outside services now, like labs and Rx, and pervasive Internet connectivity.

    But, I heard the same major complaint in 1990, 2000, 2010, and now.
    Like the Mozart movie: “there are too many notes, just cut a few.”
    And my response, then and now: “which ones?”

  • Well, maybe Google Glass can solve that problem for doctors.

    Just imagine an MD wearing Glass during a patient exam. The doctor just has a conversation with the patient. Some guy in India is watching, clicking in the EMR on his behalf.

    If the MD needs any data, they should use a combination of an iPhone app and Glass NLP to determine intent, generate an HL7 query message, get the data back, and display it on Glass (a rough sketch of what such a query might look like is at the end of this comment).

    The future of medical documentation spans multiple devices. Each device has its pros and cons. Clinicians should be extracting the pros from every form factor, and using other devices to make up for the cons.
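
    To make the HL7 piece a little more concrete, here is a bare-bones Python sketch of assembling a v2-style patient query once the intent is known. The message type (QRY^A19), the application names, and the patient ID are all illustrative placeholders; a real interface would negotiate those details with the receiving EHR.

        from datetime import datetime

        def build_hl7_patient_query(patient_id, what_filter="DEM"):
            # Assemble a minimal HL7 v2 QRY^A19 query as a pipe-delimited string.
            # Sending/receiving application names and the DEM (demographics)
            # filter are illustrative placeholders, not a real interface spec.
            now = datetime.now().strftime("%Y%m%d%H%M%S")
            msh = f"MSH|^~\\&|GLASSAPP|CLINIC|EHR|HOSPITAL|{now}||QRY^A19|MSG0001|P|2.3"
            qrd = f"QRD|{now}|R|I|Q0001|||1^RD|{patient_id}|{what_filter}"
            return "\r".join([msh, qrd]) + "\r"

        # Segments are joined with carriage returns per HL7 v2; swap for
        # newlines here just so the example prints readably.
        print(build_hl7_patient_query("12345").replace("\r", "\n"))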

  • “Determine intent” – I’d like to see that algorithm.

    Is the upset stomach from an ulcer, something he/she ate, or a fight with his/her spouse?

  • Axeo

    Great point. I should have been more specific about the NLP I was referring to.

    Glass is the ultimate second screen for doctors, who are constantly moving around. The NLP doesn’t need to be on the data input side – that’s what the guy in India is for. The NLP is for determining intent on what data to show on the Glass display itself. For example, “Ok Glass, show me John’s lab tests from yesterday” or “Ok Glass, what are John’s most recent vitals.” The intent in most of these cases is quite clear, and there’s a very prominent keyword that the NLP can hook into.
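
    To make that concrete, the keyword hook can start out as simple as this toy Python sketch (the intents and keywords are examples I made up, not a real Glass integration):

        # Map prominent keywords to display intents (illustrative only).
        INTENT_KEYWORDS = {
            "labs": ["lab", "labs", "results"],
            "vitals": ["vitals", "vital signs", "blood pressure"],
            "meds": ["meds", "medications", "prescriptions"],
        }

        def determine_intent(utterance):
            # Return the first intent whose keyword appears in the spoken text.
            text = utterance.lower()
            for intent, keywords in INTENT_KEYWORDS.items():
                if any(keyword in text for keyword in keywords):
                    return intent
            return "unknown"

        print(determine_intent("Ok Glass, show me John's lab tests from yesterday"))  # labs
        print(determine_intent("Ok Glass, what are John's most recent vitals"))       # vitals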

  • It seems to me that the main theme of the thread is point-of-care charting and documentation — data entry. And, having the provider complete the related tasks during the visit. One objection, now and always, has been “too many clicks”. Takes too long, easy to get lost, etc.

    Voice recognition software has been able to execute a command like “show me vitals” for some time now.

    How do you see NLP helping on the data entry side to capture the structured data the politicians want and think our health care system needs?

  • I’m not particularly optimistic about NLP on the data entry side. The technical challenge, coupled with the user training and expectation management, is quite daunting, given how many different paths can be taken. Each of those paths has its own unique set of data fields, and determining intent between them is extremely difficult.

    I think the best short- to medium-term bet for NLP technologies is to train the user on the extent of the NLP. If the NLP can only parse a structured HPI – timing, quality, severity, associated symptoms, etc. – then doctors need to structure that part of their free-text narrative accordingly (see the toy sketch at the end of this comment). When doctors finish the HPI and move on to the physical exam, they should speak and act differently, knowing there isn’t NLP behind them.

    We should compare against the NLP performed by super mega companies, like Apple’s Siri and Google Now. On the data entry side, each of those technologies is codified to a certain set of tags – for example, reminders, alerts, phone calls, text messages, calendar appointments, airplane tickets, etc. Of course, none of those particular issues is THAT complicated relative to parsing strings into dozens of data fields. And perhaps most importantly, those are all different enough subjects that the NLP is relatively easy. Even still, Apple and Google are only ~85% accurate. Companies with fewer resources and harder problems to solve have a long way to go.
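
    For what it’s worth, here’s a toy Python example of the kind of constrained HPI parsing I mean – pulling timing, severity, and quality out of a narrative when the doctor dictates in a predictable pattern. The patterns and fields are purely illustrative; production NLP is a much harder problem.

        import re

        # Toy patterns for a few structured HPI elements (illustrative only).
        HPI_PATTERNS = {
            "timing":   re.compile(r"(?:for|since)\s+([\w\s]+?)(?:[,.]|$)", re.IGNORECASE),
            "severity": re.compile(r"(\d+)\s*(?:/|out of)\s*10", re.IGNORECASE),
            "quality":  re.compile(r"\b(sharp|dull|burning|crampy|throbbing)\b", re.IGNORECASE),
        }

        def parse_hpi(narrative):
            # Extract a few structured HPI fields from free text.
            # A real system would miss far more than this catches.
            fields = {}
            for name, pattern in HPI_PATTERNS.items():
                match = pattern.search(narrative)
                if match:
                    fields[name] = match.group(1).strip()
            return fields

        note = "Sharp epigastric pain for three days, 7/10, worse after meals."
        print(parse_hpi(note))  # {'timing': 'three days', 'severity': '7', 'quality': 'Sharp'}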
