What Duplicate Patient Notes Reveal About Health Care and Its Records

Bloat in patient notes has been alarming doctors for some time. The American Medical Informatics Association began a project to reduce patient documentation to 25% of its current volume by 2025. This task won’t be solved by any single organization or sector; the AMIA calls on providers and health systems, health IT vendors, and policy and advocacy groups to join the effort.

A recent study in a JAMA publication, “Prevalence and Sources of Duplicate Information in the Electronic Medical Record,” helps drive discussion of bloat forward by focusing on one manifestation: the duplication of text from one patient note to another. Fully half of all notes, the authors find, consist of text copied from previous notes. (Side Note: Check out this video interview on Physician Burnout with one of the authors.)

To establish what’s a duplicate, the authors checked for 10-word sequences that were exactly the same in different notes for the same patient. This seems to me a reasonable way to identify duplicates, although one can question what happens when an EHR automatically generates text. We’ll return to that issue later.
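To make the method concrete, here is a minimal sketch in Python (my own illustration, not the study’s code) of the basic idea: break each note into overlapping 10-word sequences and flag any sequence that appears verbatim in two of a patient’s notes. How the study normalized case and punctuation is an assumption on my part.

    import re

    WINDOW = 10  # length of the word sequences compared in the study

    def word_sequences(note, window=WINDOW):
        """Return every run of `window` consecutive words in a note."""
        words = re.findall(r"\w+", note.lower())
        return {tuple(words[i:i + window]) for i in range(len(words) - window + 1)}

    def duplicated_sequences(earlier_note, later_note):
        """Sequences in the later note that also appear verbatim in the earlier one."""
        return word_sequences(earlier_note) & word_sequences(later_note)

    # Hypothetical notes for the same patient: any overlap marks duplicated text.
    note_2021 = ("Patient reports chest pain on exertion, relieved by rest. "
                 "No shortness of breath.")
    note_2022 = note_2021 + " Plan unchanged; continue current medication."
    print(len(duplicated_sequences(note_2021, note_2022)), "shared 10-word sequences")

Counting how many words fall inside such shared sequences, rather than how many sequences match, would give something closer to the study’s percentage figures.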

The authors of the study couldn’t tell why clinicians write duplicate notes. That question would call for yet another study, which would interview the nurses and doctors themselves.

And such a study might be hard to carry out. I informally polled some doctors I knew or was introduced to. Most failed to respond, probably because they were busy. I think that some are constrained by strict rules at their institutions that prohibit them from talking to any outsider about their behavior.

Still, through various contacts in the health IT field, I put together some tentative answers to two questions that arise naturally when reading the study:

  • Why do duplicates occur so often?
  • What can be done to improve the situation?

I’ll look at both questions, following a brief exploration of the problem.

The toll taken by duplicates and bloat

Gerry Miller, Founder & CEO of Cloudticity, offered me an interesting perspective: Is bloat really bad? After all, disk space is cheap and can expand without growing pains if you use a cloud vendor for storage. I’ll look later at solutions proposed by Miller and others, notably natural language processing (NLP), for helping clinicians and researchers deal with large patient records.

But for the authors of the study and most of my respondents, Miller’s solution is unsatisfactory. Sometimes a doctor wants to read a whole patient record. The whole point of duplication (to jump ahead a bit) may be to spare the reader from having to look back past the most recent note. But the most recent note might not address some condition or some detail from a previous patient encounter.

And disk space, however cheap, is not something to waste wantonly. It raises costs and consumes energy.

The worst impact of bloat, under our current record system, is the burden placed on patients who ask for access to their records. Most hospitals are not set up to share their records; interoperability is still limited. So instead, the hospitals copy the patient’s entire record onto a CD, or sometimes even onto printed paper, and charge what they consider a “reasonable” fee for handing patients their own data. The logistics and costs end up denying access to many patients.

A final risk of duplication, noted by many respondents, is inconsistency in the data, which is more likely to creep into records when text is copied from note to note. Inconsistencies are alarming because incorrect information can lead to incorrect treatment. They also interfere with automated analysis by clinicians and researchers, who are searching for patterns in big data in order to lower costs or improve care.

The AMIA would not launch a multi-year initiative around something that tools could work around. Duplicate patient notes are a real problem, in my opinion.

Why do duplicates occur so often?

Respondents had some intriguing reasons to offer for the presence of duplicates—none involving laziness.

Covering your assurance

Clinicians worry constantly about lawsuits from patients and fines by regulators, which can be triggered if outside reviewers think the clinicians ignored some relevant aspect of the patient’s condition and treatment. As a result, each time the clinician deals with the patient (which could easily be several times a day for in-patient nursing staff), the clinician must make sure to include everything about that patient in the note.

Doug McGill, Lead Advisory Solutions Consultant for Q-Centrix, said that such lawsuits and fines aren’t as common as clinicians believe, but caution still reigns supreme at their institutions.

Several respondents say that government regulations dating from the Meaningful Use era, which began in 2009, are hard to comply with without loading each note down with duplicate information.

Follow the money

A related motivation was suggested by two administrators with nursing backgrounds: Joy Avery and Donna Pritchard from CipherHealth. They think that the detailed note is needed to make sure the clinician is reimbursed by payers for everything they do.

Several respondents pointed out that the primary purpose of EHRs is billing, not treatment. McGill suggested research to see whether there is a match between the duplicated text and the kind of text needed for reimbursement. (He also uttered the “Follow the money” phrase I used for a heading.)

Putting all necessary information in one place

It is certainly convenient to find all the information you need about a patient in a single note. As I explained in the previous section, though, it’s impossible to guarantee that everything relevant is in one note. Duplicating text just makes it harder to find the needles in the haystack when you want to check past notes for details.

Nick Hayes, Director of Clinical Research and Outcomes at Cumberland Heights Foundation, says that EHRs offer a scaffold or template to enter notes, encouraging clinicians to rerecord the same information each time. Doctors must follow strict rules about what documents to fill out, and when. All this leads to big and repetitive notes.

Stephen Dart, VP of Engineering at AdvancedMD, says that when a clinician reviews an on-going condition, it’s natural to “pull forward” information related to that condition from a previous note just to make it clear that a condition hasn’t changed. But Dart emphasizes that clinicians must be careful what information is being “pulled forward” so that they avoid erroneous notes in patient records.

I heard one anecdote suggesting that pulling forward could lead to fraud. A colleague of mine in the health care field checked his records and found that doctors had pulled forward text that not only described his condition but also recorded that a test had been performed. In other words, they were claiming that the test had been performed on two or more visits when it was actually performed just once. Hopefully, payers have tools to catch such errors, which are probably inadvertent.

Digital systems are superb at linking information, so there is no reason to duplicate text; a note can simply link to the original location of the information. EHRs have not taken enough advantage of the seventy-odd years of computer science innovation in linking. The study cited at the beginning of this article calls this approach “dynamic documentation” or “the wiki model” (borrowing a relatively late addition to the computer field, the wiki) and says it is only beginning to appear in medical records.
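For contrast, here is a toy sketch in Python (entirely hypothetical, not modeled on any real EHR) of what a link-based, wiki-style note could look like: a follow-up note stores a reference to the earlier passage, and the earlier text is pulled in only when somebody reads the record, rather than being copied into the new note.

    from dataclasses import dataclass, field

    @dataclass
    class Note:
        note_id: str
        text: str = ""
        links: list = field(default_factory=list)  # IDs of earlier notes referenced, not copied

    def render(note, record):
        """Expand a note for reading by pulling in linked passages on demand."""
        linked = [f"[from {nid}] {record[nid].text}" for nid in note.links]
        return "\n".join(linked + [note.text])

    # Hypothetical patient record keyed by note ID.
    record = {"2021-03-01": Note("2021-03-01", "Hypertension, stable on lisinopril.")}
    followup = Note("2022-04-12", "No new complaints today.", links=["2021-03-01"])
    record[followup.note_id] = followup
    print(render(followup, record))

The earlier text is stored once, and any later correction to it shows up wherever it is linked, which is exactly what copy-and-paste cannot do.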

Duplicating information from structured fields

Avery and Pritchard have found that much of the information entered into the EHR through checkboxes is then repeated in plain text. The clinicians don’t quite trust the EHR and are afraid that the checkboxes don’t record enough information. To protect themselves, as noted earlier, they spend extra time putting the same information into the note in their own words.

Automated text generation

Miller explained that, as doctors fill out boxes and click on menus in their EHR, it busily inserts boilerplate text into the record. This is often a response to regulation, and could well explain the prevalence of identical text in multiple notes. The study compared notes only within a single patient’s record, so it did not check for duplicate text across multiple patients.

Multiple record systems

Clinical sites often run several record systems and must enter the same data into each one. Sometimes different departments install different EHRs appropriate for their specialties; oncologists record very different kinds of data from cardiologists, for instance. On a larger scale, different facilities may choose different record systems and find it impossible to harmonize them after mergers, which have been taking place at breakneck speed in recent years.

The study did not say whether it considered multiple record systems for a single patient. It seems safe to assume that the study took everything from a single system and that each patient had a single record. So the problems identified by the study are only compounded when multiple systems are in use.

What can be done to improve the situation?

Given the regulatory, billing, and technical pressures to insert duplicate notes, solutions are hard to come by. The study warns, “Administrators should be wary of simple solutions such as an outright ban on duplication; without addressing the clinical need to maintain information visibility, these solutions will only exacerbate other hazards.”

Education

Avery and Pritchard believe that there are ways to cut down on duplication, and that clinicians can learn how to use EHRs more efficiently. Miller also suggested that scribes might learn how to use the EHR more effectively than the clinicians, because entering data is the scribe’s essential job. McGill pointed out that each practitioner learns their job from previous ones, so they may preserve obsolete practices after systems evolve.

Education takes time and money, though, and the information uncovered during my interviews suggests that much of the duplication we suffer from has rational causes.

Automating searches and deduplication

Natural language processing (NLP) can extract important information and boil it down to a list for the clinician. Analytics are particularly valuable for researchers looking for trends and anomalies.

Miller pointed to services that normalize ICD-10 diagnostic codes and to RxNorm, which can determine (for instance) that Tylenol and acetaminophen refer to the same medication. He said that Amazon Comprehend Medical and Microsoft Text Analytics for Health are examples of comprehensive, widely available services offering NLP and analytics.
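As a small illustration of what normalization buys you, here is a sketch in Python. The lookup table is a hypothetical stand-in for a real terminology service such as RxNorm, which a production system would query instead of hard-coding mappings.

    # Tiny stand-in for a terminology service; the mappings are illustrative only.
    BRAND_TO_GENERIC = {
        "tylenol": "acetaminophen",
        "paracetamol": "acetaminophen",
        "advil": "ibuprofen",
        "motrin": "ibuprofen",
    }

    def normalize_med(name):
        """Collapse brand and generic variants to one canonical name."""
        key = name.strip().lower()
        return BRAND_TO_GENERIC.get(key, key)

    def same_medication(a, b):
        """True if two medication mentions refer to the same underlying drug."""
        return normalize_med(a) == normalize_med(b)

    print(same_medication("Tylenol", "acetaminophen"))  # True
    print(same_medication("Advil", "acetaminophen"))    # False

Once mentions are mapped to canonical concepts, the same kind of duplicate and inconsistency checks discussed earlier can run over meanings rather than raw strings.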

Nick Hayes also suggests using analytics to check for and remove duplication.

A starting point for dealing with duplication

At least we know now that there’s a huge amount of unnecessary duplicate text in patient records. The authors of the study, as well as people cited in this article, have suggested further research to uncover the roots of the problem.

It already seems clear that solutions will have to take place at many points in the health care system. EHRs will have to sprout new software architectures to record information more intelligently. Clinicians will need tools and education to help information get to the right place. Finally, payers and regulators will have to design rules intelligently to minimize unnecessary recording, for which every clinician will be thankful. In the meantime, automated NLP-based tools that help institutions adapt to bloat will appeal to clinicians.

About the author

Andy Oram

Andy is a writer and editor in the computer field. His editorial projects have ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. A correspondent for Healthcare IT Today, Andy also writes often on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, named USTPC, and is on the editorial board of the Linux Professional Institute.
