As I discussed in the last blog post, we were very busy this past spring with our biggest EMR upgrade to date:
- Upgrade from the 2005 software to the 2010 version – a big jump (we delayed the 2011 upgrade on the advice of our VAR).
- Purchase new servers and new storage (a SAN)
- Switch to virtual servers / VMware
- Convert our database from a two-database structure to a single database to accommodate the 2010 software.
This was more of a system replacement than an upgrade. The only parts that weren’t completely replaced were the network components and some peripheral applications (web portal and document scanning).
Despite what we thought were realistic expectations, the upgrade took longer than planned, and some problems took weeks to solve.
- Despite a successful test run, the dual-to-single database conversion was fraught with problems. The computer running the conversion software (a “migration tool”) suffered a RAM failure mid-operation, which slowed the conversion but didn’t kill it. When we saw the operation slow down we faced a dilemma: stop to troubleshoot, or let it keep running slowly? With over 250,000 patient records in our database, the conversion was expected to take well over 72 hours – longer than a weekend – so we were already looking at EMR downtime during office hours. We stopped the migration to diagnose and replace the RAM. Then the migration tool itself failed, forcing another interruption while our vendor troubleshot and patched it. The migration tool is an unusual piece of software: you only need it once, so about the time you have learned to use it you don’t need it anymore. On the vendor side, every customer’s database and hardware situation is different, so the migration tool is never totally debugged. That is why we delayed our upgrade so long – we wanted the vendor to gain experience with the migration tool before we used it. Even so, ours was by far the largest database conversion they had ever done. In spite of the difficulties, the result was an intact single database that gave us no further trouble once the migration was completed.
- Another contributor to our delay was waiting for our vendor to support VMware and publish hardware specs. Even with that accomplished, VMware was a nightmare to set up: performance was very slow initially and took days to correct. The biggest problem was printing. Printer preferences were lost several times a day, and it was not unusual for my documents to print at a member practice across town even after I had reset my preferences several times that day. That wreaked havoc on clinic operations and took over a month to fix.
- We were blindsided by a bizarre “failure” of a T1 line to one of our offices. The line had somehow been put into a diagnostic mode that rendered it unable to function while still showing as normal to our monitoring. For days we assumed that office’s performance problems were related to the upgrade.
- Some issues were purely our fault. We did not adequately staff the upgrade: only our chief operating officer and our IT specialist were available to handle problems and questions, and they couldn’t get off the phone long enough to fix anything, which also significantly impaired communications. To make things worse, each of them had an immediate family member fall suddenly ill, requiring time off during the upgrade.
The next post will be my analysis of this great adventure.