My EMR is DOWN!!!

Two real-life stories from users of an EMR (sent to me by a reader of the site). In each case, the users ended up figuratively shouting, "My EMR is DOWN!!!"

1. One of my clients, a pediatric group, went down yesterday. Their firewall box choked, which left their internal network useless. Each computer was alive and well; they just couldn't talk to the server. They couldn't print. They couldn't get to the Internet. They were dead in the water all day. This is a group that didn't grow up in the old days when day-to-day unreliability was the norm, so they had no printouts from which to work. And because the firewall was 5+ years old, the hardware tech really had to scramble to find a replacement.

2. Another client has been using wireless laptops to talk to their server. For some reason, the wireless reliability has dropped like a rock over the past couple of weeks, and it is not clear what the culprit is. Signal strength is excellent according to the laptop, and then poof, no more server. The event viewer reports a disconnection. They reconnect, work a little while, and poof.

The funny part is that in neither case was the EMR software responsible for being down. It was the other technology that facilitates the EMR. Yet the EMR takes the blame. It's kind of like a patient who has a bad experience with a nurse, receptionist, or doctor at an office and reports to their friends that the PRACTICE is a disaster. It's easier to blame the whole than the responsible part.

About the author

John Lynn


John Lynn is the Founder of a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles, with over half of them written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the first of its kind conference and community focused on healthcare marketing, the Healthcare and IT Marketing Conference, and a healthcare IT conference focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. He is highly involved in social media, and in addition to his blogs can be found on Twitter: @techguy.


  • John,

    Great topic. I think that, given the typical brevity of downtime, the issue is not given as much attention as it deserves. I often find that organizations do not address downtime scenarios, solutions, and protocols until they experience a downtime event.

    That said, Galen Healthcare Solutions provides a solution that would have prevented downtime in the two scenarios you describe above, as well as many other scenarios. The solution, VitalCenter, provides uninterrupted access to patients' charts.

    Whether the downtime is the result of a network outage, an application or hardware failure, or any other cause, VitalCenter allows organizations to continue functioning during periods of downtime. It also offers the ability to electronically document a patient visit during a downtime situation, and the documents are automatically uploaded to the EHR once it becomes available.

    This solution provides many advantages over other approaches (namely disaster recovery, clustered db/web farms, and high availability/hot sites), as illustrated in the feature comparison chart.

    I’d be interested to hear approaches and solutions that other organizations have in place.


  • Great post, Justin.

    To my mind, the issues you describe are easily avoided with a trusted IT partner. Most practices that I come into contact with really don't have the technical resources they need.

    What I find is that often a practice will have a person who has a bit of knowledge or they use a company on a “break fix” basis. While this type of service has its place, it often doesn’t serve the business needs of a practice.

    When sourcing external IT support, the decision often comes down to the cheapest option, rather than looking at it from a strategic position and asking the question, "If my IT doesn't work for one day, how much does it cost me?"

    With the anticipated surge in EMR adoption, practices are going to have to invest in technology. Technology can generate business efficiencies; however, to ensure those efficiencies are realized, someone is going to have to support and manage it.

    As the two examples demonstrate, if you don't have reliable IT support and management in place, business efficiencies and patient care can be severely impacted.

    In my opinion, practices should look to external IT experts who understand the business goals, can provide a technology strategy/road map to support the business needs of the practice, and can then support and manage the IT investment.

  • John,

    For a medical provider, downtime is unacceptable. Providers have delivered care during natural disasters (with no electricity) and in remote locations far from civilization. When an EMR becomes an integral part of care and makes the provider dependent on it, a failure that forces the provider to stop operations, for whatever reason, is too costly for everyone. EMRs must be treated as mission-critical by vendors and providers alike. Several levels of redundancy in all aspects of the system need to be planned. It can be achieved, but it costs money, and it may be beyond the budgets of small ambulatory providers (1-4 doctors). When vendors and EMR proponents downplay costs and implement low-cost systems, those systems become showstoppers for providers. I have known providers who abandoned their EMRs when their systems went down for prolonged periods. It is not a win-win situation if risks are not discussed and mitigated during the planning stage.

    Until integrated and affordable solutions that assure 24x7 operational capability are available, we must be upfront about the limitations of the technology.

  • EMR/PMS should be treated as a mission-critical application. Applications like these should have IT support that can be called in immediately and that knows the environment where the EMR is installed. Where I work, I support the IT environment for a medium-size company, and we are responsible for the company's ERP system. If the ERP is down for even an hour, we lose tens of thousands of dollars in revenue, plus the downtime demonstrates our (un)reliability to our customers. An EMR system should be even more reliable than an ERP system; it should have 100% uptime, nothing less. You can't ask a dying patient to wait because you can't get her medical records from the computer, can you? That is why an EMR is not just a matter of deploying the software and making it work. It requires a reliable IT infrastructure in place. JP

  • Glad to see that I’ve hit a nerve with my readers. I really enjoy this type of discussion. I might have to do a future post summarizing some of the comments made in this thread.

    One thing that I think is worth saying is that it's impossible to guarantee 100% uptime. Even Google has some downtime. Plus, my brother, who is great at measuring this type of thing and planning for a certain service level, always says that it's never a problem to deliver 99.999% uptime to someone; it just costs a lot of money to guarantee it. The cost of adding those extra 9's is significant. Most doctors' offices should be completely satisfied with 99% uptime, I believe (although I haven't calculated it out exactly).

    Plus, it's amazing how lost doctors who are unprepared for EMR downtime can be. We discussed this issue in our clinic and came to an important conclusion. Sure, the EMR being down is a challenge and far from ideal. However, that doesn't mean we can't still serve patients. We might have to ask more questions. We might even have to occasionally delay a decision, but we can still treat patients to a large extent. We had to plan for it, though, so that when it happens panic mode doesn't set in.

    Amen to having reliable IT support. That’s important, but hard for a practice to plan for in my opinion.
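The cost-of-nines point above is easy to make concrete. This is a quick arithmetic sketch (no EMR-specific assumptions) of how much downtime each availability level actually allows per year:

```python
# Translate availability "nines" into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability):
    """Minutes per year a system at this availability level may be down."""
    return MINUTES_PER_YEAR * (1 - availability)

for availability in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{availability:.3%} uptime allows "
          f"{downtime_minutes_per_year(availability):,.1f} min of downtime/yr")
```

At 99%, that works out to roughly 3.65 days of downtime a year, while 99.999% allows only about 5 minutes, which puts the "should a small practice pay for more nines?" question in perspective.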

  • John,

    It appears you had a single point of failure on the infrastructure side. Sekhar and Jojo are absolutely correct when it comes to treating the technology as a strategic asset. Unfortunately, technology and compliance costs can be prohibitive to smaller operations, especially when trying to build in redundancy to address business continuity. However, that does not eliminate the need to identify those risks and mitigate the vulnerabilities whenever you can.

  • John,
    Just to be clear, the two scenarios above weren't my implementations. They were experiences sent to me by a reader. However, I think this happens to a lot of people who haven't planned very well.

    There’s definitely a lot that even small practices can do to have a reliable infrastructure.

  • Case 1 – REDUNDANCY. It's in every textbook and on every test. They should have had vital spares (like a hardware firewall) stocked on the premises. They also should have had a software firewall configured, tested, and ready to start up at a moment's notice, all for just such an eventuality. Continuous uptime is the second most important statistic for any network, the first being security, of course.
    Case 2 – If you run vital computer operations, you must have real-time TROUBLESHOOTING/ANALYTICAL capability. A question of money, of course. Many places rely on part-time or remote tech help, which is obviously not always sufficient. (The problem in question sounds like channel congestion. In a busy building, several WiFi access points can end up using the same broadcast channel. You just have to "move" to an unused one.)
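The channel-congestion fix can be sketched as a simple selection among the non-overlapping 2.4 GHz channels (1, 6, and 11). The survey data here is made up for illustration; in practice it would come from a WiFi site-survey tool:

```python
from collections import Counter

def least_crowded_channel(observed_channels):
    """Pick whichever of the non-overlapping 2.4 GHz channels (1, 6, 11)
    has the fewest overlapping neighbors in a site survey."""
    counts = Counter(observed_channels)

    def overlap(candidate):
        # 2.4 GHz channels within +/-4 of each other overlap in frequency.
        return sum(n for ch, n in counts.items() if abs(ch - candidate) <= 4)

    return min((1, 6, 11), key=overlap)

# Hypothetical survey: most neighbors sit on channel 6, a few on 1.
print(least_crowded_channel([6, 6, 6, 1, 1, 11]))  # -> 11
```

This is only the selection logic; actually changing the channel happens in the access point's admin interface.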

  • Business continuity is generally an afterthought for many healthcare organizations and many EMR vendors. There is an exponential cost associated with system uptime beyond which an organization will not go, yet they want 100% reliability. It is similar to life insurance: no one wants to pay for it, as they don't see any benefit from it on a daily basis, but if (when) they ever need it, they wish they had paid.

    In our organization (a large multi-facility CCRC), we do have replication taking place at an offsite facility for critical clinical applications. If the system goes down for an extended period of time, we can switch our users over to this backup environment. However, for most system outages (maintenance or otherwise), we simply resort to our business continuity process, which is cheap and effective. For the most critical information (MARs, TARs, clinical notes, etc.), we created custom reports that pull the data from our clinical enterprise system at our corporate office on an hourly or daily basis. These reports are scheduled to run and are deployed to stand-alone PCs in the medical practices/nursing units at our various facilities. If there is ever a system or network outage, the clinicians can refer to these reports for the information they need to continue operating. Creating scheduled reports that copy critical information to the desktop can be a cheap business continuity solution for small providers.

  • Calvin,
    Have you ever heard of a small practice doing this? Or of an EMR vendor that provides this service to small practices? The idea of deploying just the data, without the ability to update it, to workstations is interesting and would cost less. But if the EMR vendor doesn't provide the service, small providers certainly won't be able to do it themselves.

  • Well, of the 11 comments, none of them touch on the true issue: end-to-end design.

    Having robust, redundant hardware systems does no good if the software doesn’t know how to make use of it. Additionally, if the IT or vendor staff don’t configure everything perfectly, all the benefit is lost.

    I have run some of the largest and most heavily used websites in existence over the last 12 years, and one thing has become painfully obvious: there are four things that must be done, and done correctly.

    1) The hardware (computers, storage, printers, etc.) design must be fully fault tolerant. Lots of folks do this.

    2) The software must be designed to likewise be fault tolerant and abstracted enough to not really care about the hardware underneath.

    3) The network design needs to be fault tolerant, self-cleaning, and self-healing.

    4) The staff must be well trained and cognisant of the way things are done in 1-3 to minimize downtime.

    And no, the cloud doesn't solve all the problems or prevent downtime.

    100% availability can be achieved, but not with the standard business practice of disaster recovery. One must change one's paradigm and design for disaster avoidance.

  • Mike,
    I can't really argue with your 4 steps. However, let's just be honest and practical here for a minute. Almost no small practice can afford to do all of that well. So the question for small practices is: how can you get very close to that, while staying aware of the weaknesses so you're prepared when those weak points fail?

    The original comment was actually directed at the consumer as much as at the providers. Granted, I haven't done a study of EMR systems in over 2 years, but when I last did, the systems fell into 2 categories: software only and turnkey. Neither category did particularly well in fault-tolerant design.

    Specifically for a small practice, the things I would look for:

    A turnkey solution with the option of buying or leasing equipment. With reasonable and knowledgeable maintenance, expect to replace desktops somewhere between years 3 and 5, and figure on replacing laptops between years 2 and 3.

    Primary storage of all data should be local, with nightly monitored uploads to another facility or the cloud. In either case, the uploads should be fully encrypted. By monitored, I mean that the data going up needs to be checked for validity on the remote side, with a warning message generated, preferably to both the vendor and the clinic staff, if it fails. The same goes if a lack of network connectivity prevents the upload.

    Wired networks in the clinic will always be faster and safer, but if you want to economize on hardware by using traveling laptops, then the network needs to be locked down tight. This is done by using MAC address allow lists and running WPA-Enterprise encryption. It won't keep a determined hacker out, but it will keep the script kiddies and opportunists off of your network. And by all means, if you decide to offer wireless in your waiting room, please, please still password-protect it and put it on a separate subnet.

    As far as power goes, there are a couple of decisions to be made. The first is how many locations around the clinic need a UPS. At a minimum, the server(s) need one, and ordinarily the network gear does too. If running laptops, then each wireless access point needs a UPS as well, or the software needs to be smart enough to wait until it is back in network range (either because things are working again or because it is near the one access point, probably near the servers, that is on a UPS). The second is how long of a power outage you want to survive: 30 minutes, 60 minutes, or longer.

    Now, if for some reason you decide that you need more than 1-2 hours of backup power, you should consider a standby generator that automatically kicks in when the power goes out.

    If all of this is too much to handle, do the following:

    Ask the sales rep to diagram how the patient data is handled by their system. Once you get the diagram, point at a symbol on it and ask the sales rep how the system functions if that part fails. Do that for every symbol on the diagram and tally how many of them completely break the system. Typically, there will be at least 3 such components. If there are many more than that, ask the sales rep why there are so many delicate parts in their system.

    Any more questions can be directed to me at mike_moran at

  • Another known redundant solution is the use of colocation (colo) facilities; the back-end (server) systems are located in rented colo space that has already been made redundant with respect to network, power, and server hardware, assuming that the EMR software is more of a client (desktop computer)/server system. There may be solutions for hooking colo servers up to auxiliary equipment (e.g. scanners, sensors, etc.), but your mileage may vary. The clinic is then left to make its client side redundant, again with respect to the network, power, and client hardware (desktops/laptops/printers).

  • The whole issue is finding the right balance of security and availability to keep the practice functioning the majority of the time.

    The problem is there are no set rules for how to do it. And if done wrong, it can not only cause a headache, but might also have the practitioner run afoul of federal and state privacy laws/rules.

    Each case is unique.
