Is MUMPS the Major Healthcare Interoperability Problem?

Jeremy Bikman from KATALUS Advisors wrote this interesting comment on a LinkedIn discussion I was participating in:

Perhaps there is a place for MUMPS, but only if healthcare continues to thumb its nose at the prevailing technology trends. It’s hard for me to envision healthcare continuing to embrace a technology that doesn’t like to play nicely with other non-MUMPS systems. If there were real advantages to it, you would see a fair number of high tech firms utilizing it (Facebook, salesforce.com, Twitter, Spotify, etc.).

If your goal is to have an enterprise system with a database that has some scale to it and certainly has good speed, and you don’t really care about interoperability with other systems, then MUMPS is certainly a good viable option. But IMO, the days of healthcare IT being insular, and moving out of phase with the rest of the tech world, are numbered.

I found this comment incredibly interesting, mostly because I’ve never personally believed that the fact that many of the larger healthcare IT and EMR systems are built on MUMPS was any part of the reason why healthcare entities aren’t interoperable. I’m a tech guy by background, but I’ve never worked on a MUMPS software system myself, so I don’t have firsthand knowledge of MUMPS in particular. However, it seems wrong to “blame” MUMPS for the lack of healthcare data interoperability.

I guess the way I look at it is that no matter which database back end you have, you’re always going to need some front-end interface to take care of transporting the healthcare data to another system. Is this any harder with MUMPS than with another SQL or even NoSQL database? From my experience it shouldn’t matter, but I’d love to hear if there are reasons why it is harder.

I also don’t want to give the impression that Jeremy is trying to say that MUMPS is the only reason that healthcare IT has been so insular and closed. I’m pretty sure he agrees with me that a lot of other factors have stopped healthcare from sharing data. I just don’t believe that MUMPS is one of those reasons.

Of course, the question of whether MUMPS should continue in healthcare is a different question. In fact, I wrote about MUMPS in healthcare IT and EMR here.

What are your thoughts? Is MUMPS the problem with healthcare interoperability? What are the other reasons stopping healthcare interoperability?

Update: Jeremy Bikman provided the following clarifying comment in the comments of this post:
Good points, John. I really should have clarified. MUMPS is not really the issue (although I still stand by my assertion that if it were such a superior technology you’d see it all over Silicon Valley, RTP, etc.). The main issue is really with the walled garden (w/ razor wire and machine guns along the top) approach of the major EMR/HIS vendors that have it as their foundation.

The more control you exert over your clients and the harder you make it to connect with other systems, the more money you can make…at least in the short-term.

John’s thought: I still look forward to the discussion around MUMPS and healthcare interoperability in general.

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles with over half of the articles written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the first-of-its-kind conference and community focused on healthcare marketing, the Healthcare and IT Marketing Conference, and a healthcare IT conference, EXPO.health, focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. John is highly involved in social media, and in addition to his blogs he can be found on Twitter: @techguy.

39 Comments

  • Good points, John. I really should have clarified. MUMPS is not really the issue (although I still stand by my assertion that if it were such a superior technology you’d see it all over Silicon Valley, RTP, etc.). The main issue is really with the walled garden (w/ razor wire and machine guns along the top) approach of the major EMR/HIS vendors that have it as their foundation.

    The more control you exert over your clients and the harder you make it to connect with other systems, the more money you can make…at least in the short-term.

  • I’m an IT guy. All I know about MUMPS is that it is used exclusively in the medical community, and is likely a scripting language (which is good).

    FYI: For a full-featured scripting language, with all the built-in tools needed to efficiently manipulate data of all kinds, nothing beats Perl. Ruby is Perl made object-oriented (and thus slower), and Ruby on Rails is very popular now within the IT world.

  • David – my company is developing a few solutions to help us with client interactions and it’s all MongoDB, RoR, and HTML5. I’d love to see a lot more healthcare software firms get with the times.

  • Think back awhile and you may see why there is a lack of interoperability.

    For a long time in the EHR/PM world non-interoperability was a way to prevent docs from easily switching to the competition.

    EHR vendors didn’t give a hoot about interoperability.

    There were, and still are, companies whose sole purpose was to connect different systems.

    I’ve dealt with mapping from one EHR to another…and it is a nightmare.

    I think this lack of interoperability is still part of the industry and won’t change quickly.

    It doesn’t help that “standards” groups want to reinvent the wheel when it comes to this interoperable communication.

  • MongoDB and all the other NoSQL databases are based on the structure of a MUMPS database. They are all binary-tree-type structures.

    The problem is these databases are just that: databases. All current versions of MUMPS support every transport methodology you would want, from TCP to WCF services.
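    As a rough sketch of that structural similarity (the global and subscript names here are hypothetical), a MUMPS global is a sparse, hierarchical key-value tree that looks a lot like a NoSQL document:

        SET ^Patient(123,"name")="Smith,John"
        SET ^Patient(123,"bp","2013-07-11")="120/80"
        WRITE ^Patient(123,"name")  ; direct keyed access, no schema layer in between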

  • Oops, I meant to post this here->

    The core of the EHR system I work on is considered a legacy product, which few are familiar with.

    The funny thing is, I also work in a lot of newer tech (.NET, PHP, SQL Server, NoSQL – CouchDB, to name a few), and I feel qualified to say with certainty that older tech often has fewer interoperability options out of the box. That doesn’t mean it’s impossible to be interoperable, just harder.

    For example, I can automate a simple data file extraction using graphical tools in SQL Server (using SSIS) in a heartbeat. For comparison, in a legacy product, a programmer may have to write all the code and build the automation manually to accomplish the same task. It’s still doable. Even things like web service calls are doable in the legacy world. They require some capability in the legacy product itself to support the necessary interconnectivity, and someone remotely knowledgeable in the legacy product to support it.

    When you go out as a vendor and the customer asks “what’s the tech behind your system?”, and you don’t respond with a buzzword or a current tech that people are familiar with, you tend to hit a mental wall/blockade with many people.

    The age of MUMPS and how it was adapted to the interconnected modern world were, and are, challenges, but the politics and opinions that surround these legacy products and related decisions are the larger difficulty.

    In the end, the choice of tool does not depend solely on whether or not the tool is capable of doing the job.

  • Jon, thanks for posting again.

    You said it about as well as it could be said – very well done. In truth, the underlying technology is rarely the bad guy; the fault almost always remains with the vendor utilizing it and the restrictions they build into their own products and systems of control.

    Now I sound like Morpheus…

  • I was told that one of the challenges with MUMPS (M) is that it is a flat-file-based database and not relational. Does anyone know if this is true?

  • @Jeremy, sometimes vendor sales pitches feel like “will it be the red pill or the blue pill”!!

    @Mike, MUMPS is different from most data stores that people know. It is definitely not relational, but calling it a flat file isn’t really accurate either; in fact, it has only recently begun to be compared to NoSQL databases.

    MUMPS is actually a “hierarchical database”, and the database and programming language live side by side, providing direct access to the data. There’s lots of info at http://en.wikipedia.org/wiki/MUMPS. A fun read if you’re a technology historian, with things to learn from the past!

    Most people don’t realize that MUMPS, while old, was actually kind of cool and powerful. In fact, there are a lot of similarly cool technologies, such as multivalue databases, that got ignored when the relational model became popular (although InterSystems’ Cache, a modern descendant of MUMPS, is alive and well).

    These technologies are also being “re-discovered” in NoSQL currently, but they have been around since at least the 70s. Of course, the NoSQL databases have a distributed, highly scalable aspect that the older ones did not.
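    To make the “database and programming language live side by side” point concrete, here is a minimal sketch (the global name is hypothetical): $ORDER walks the subscripts of a global directly, with no query layer in between.

        ; list every patient ID and name stored under ^Patient
        SET ID="" FOR  SET ID=$ORDER(^Patient(ID)) QUIT:ID=""  DO
        . WRITE ID,": ",$GET(^Patient(ID,"name")),!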

  • Having worked with healthcare EHRs at the VA and other large academic medical centers, I can say MUMPS is a serious problem. Getting data out of MUMPS is the main issue. You either have to write an M routine and then import the output, use a ghastly .NET component, or use the MUMPS Data Extractor (about 40K per install). If you attempt to use standard ODBC, you will be waiting years to get your data.

    Given that you pretty much have to have an operational data store and data warehouse for reporting, and that the data are exceptionally hard to get out, MUMPS is a serious problem. I’d also like to note that since the data are hierarchical, it can be very difficult to pull them into something relational, as getting too “deep” into the data frequently results in the loss of primary or foreign keys. So you have to keep traversing the hierarchy and expanding an already bloated data pull, which further slows the system.

    Because of this data structure you can’t query the data in place, as aggregation queries take far too long. Want to know how many males have high blood pressure? Better be ready to spend a month pulling data and transforming it. MUMPS IS A PROBLEM!
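    To illustrate Bob’s traversal point (with a hypothetical global layout), flattening even one hierarchical branch into relational-style rows means a hand-written nested $ORDER loop for every subscript level you descend:

        ; walk patients, then their readings, emitting one delimited row per reading
        SET ID="" FOR  SET ID=$ORDER(^Patient(ID)) QUIT:ID=""  DO
        . SET DT="" FOR  SET DT=$ORDER(^Patient(ID,"bp",DT)) QUIT:DT=""  DO
        . . WRITE ID,"^",DT,"^",^Patient(ID,"bp",DT),!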

  • Bob,

    Great post. We’ve been hearing the same issue more and more from Epic and Meditech hospitals, namely:

    “ok we have our EMR up and we can enter our orders fine, and we can do our progress notes fine, etc, but now when we want to start doing some real analysis and quality reporting it’s almost impossible to get it out. We’ve got to spend a fortune on data extraction and creating an entirely new separate Big Data repository and go with Microsoft Amalga or a SAP solution to go the next level as a hospital.”

    Check out how many Epic and Meditech hospitals are looking at Microsoft Amalga, dbMotion, and other Big Data aggregators. ‘Nuff said.

  • I have always wondered: if MUMPS/CACHE is so fast, why does EPIC rely on their Clarity component to pull the data into a relational(ish) structure, either SQL Server or Oracle? Shouldn’t we just be able to query in place? Or have an ODS in MUMPS? Why move it to SQL Server or Oracle? Jeremy, for those suffering with reports from EPIC: if they don’t do Clarity, they will continue to suffer.

  • Brian – I don’t know an Epic site that doesn’t use Clarity in conjunction with another reporting tool that is pulling from a relational(ish) data warehouse that is itself pulling from MUMPS. Lots of additional work and steps could be interpreted as suffering.

  • Seems to me your article says the problem isn’t MUMPS, it’s the verticality and proprietary coding of the vendors. That can happen in any language.

    I’ve been programming in MUMPS for 30 years, and it’s the most fantastic language. I love the organic shape of the data, which easily replaces 50% of the code necessary in flat or rectangular data sets.

    I once was writing an interface between MUMPS and a C++ application. They sent out their best programmer. We wrote it one dataset at a time, so I’d write two lines of MUMPS, test it locally, and wait. He’d write a page of code, try to compile it and get errors, then compile and link and then test, get errors and edit source and then compile and link and load. We did this a few times and in exasperation he finally said, “What are you doing over there!”

    “I’m programming in MUMPS,” was my reply!
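    Purely to give a flavor of that anecdote (the record layout here is hypothetical), a two-line M handler for one inbound dataset really can look like this:

        READ REC SET ID=$PIECE(REC,"^",1)
        SET ^ORDERS(ID)=$PIECE(REC,"^",2,99)  ; file the rest of the record under its key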

  • “the problem isn’t MUMPS, it’s the verticality and proprietary coding of the vendors.”

    Max – I think you nailed it.

  • I would argue that since the MUMPS Development Committee (MDC) chose to make its own ANSI standard instead of becoming ANSI SQL compliant, they are proprietary and are part of the “verticality and proprietary coding of the vendors”. And therefore are part of the problem.

  • I’m always amazed how passionate people are for or against MUMPS. Although, we haven’t gotten any of the really big MUMPS fans in this thread…yet.

  • I remember those yearly meetings. Power to the Programmers! Each year MUMPS programmers got together and discussed and then voted on WHAT WE WANTED as the actual programmers, and within two years those functions became the ANSI standard, and if a vendor wanted to call themselves MUMPS Standard, they had to do as we said.

    I remember the vendors fought us tooth and nail about being able to $ORDER up as well as down a global. Then, one vote, and forevermore, $O(^GLO,-1) gets you the node before the current one.

    Occupy Programming!
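    For anyone who never used it, the feature Max describes looks like this (the global name is hypothetical); the optional second argument flips $ORDER’s direction:

        SET LAST=$ORDER(^GLO(""),-1)   ; last subscript in collating order
        SET PREV=$ORDER(^GLO(KEY),-1)  ; the subscript just before KEY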

  • Other than Amalga and dbMotion, what other ways are there to get to the data? Mirth claims it’s no problem with their Mirth Connect platform, but I have no first-hand experience with that on either Epic or Meditech. What about other integration engines, e.g. Cloverleaf?

  • I would not recommend using an integration engine for querying the db. That will be tooooo slowwww!
    If you tear apart the big WI Cache system, you will see they use KB_SQL. More info: http://kbsql.com/products_kbsql.html

    But since these are not typical relational databases, you cannot reverse engineer an ERD. So if you are doing this on your own, you may never know where your data are! We wasted tons of time thinking we were getting the correct fields from VistA, only to find we weren’t even close. And we had an ERD!

  • The main reason MUMPS is a roadblock is the lack of market penetration. There aren’t a lot of MUMPS developers out there, so when it comes time to write third-party interfaces it ends up causing problems. It forces the company with the MUMPS system to be responsible for doing ALL the translation for EVERY interface. This creates a development bottleneck and slows everything down. If it were SQL, the developers of one software could more easily work with their counterparts and get things talking nicely together.

    MUMPS itself is not really the problem; it’s just another instance of non-standard software gumming up the works. I’m sure it had compelling arguments in its favor when it was initially chosen, but when it became clear that it was not going to win a significant market share, companies using it should have cut their losses and migrated.

  • The Case for MUMPS and the future.

    The new OO MUMPS implementations, GT.M and Cache, are state-of-the-art technologies.

    Many of the older implementations have converted or will convert to modern GT.M or Cache; even VistA cannot stay where it is. For VistA it will just be a longer haul to get there. All the program size and name length restrictions have been removed in the new OOD/OOP versions of MUMPS. The underlying DB continues to be superior to anything else in the market in every measurement. I know this from personal experience, having worked with more databases than I can remember.

    Regarding the language: it is extremely powerful in the hands of a craftsman, has few syntactical restrictions (except for the newer integrated OO features), and is missing nothing in the core command and function/method libraries. The language is also contextually based, and spacing is significant, which confuses the uninitiated. Nevertheless, the new paradigm contains all the modern constructs to make the code simpler to read and maintain. All of MUMPS’s shortcomings have been removed, not by taking out the older features but by adding newer ones. Backward compatibility remains. Therefore, complaining about the older implementations is like complaining about the car you used to drive 10 years ago; it’s just a memory in the past for you, but someone else is still happily driving the same car.

  • I too have experience with Cache, and it suffers from the same problems as MUMPS. Access to data is terrible! Cache ODBC is terribly slow – in fact I would say useless. It’s much faster to export text files and import them somewhere else. Or buy a third-party tool. Cache is also not strongly typed. Want a date and time of 100312? Great, put it in, no problem. What the value is, who knows!
    Cache adds OO to MUMPS. Definitely not a solution to the problem. Just another bolt-on to a system that is keeping healthcare in the dark ages.
    We need to stop putting data into systems that we cannot get it out of! If we ever want to look at what works in healthcare, we cannot keep storing data in silos.

  • I can see your experience with Cache is superficial.

    No one in their right mind would use an ODBC connection when you can go native (Class access method library) or native SQL.

    Your knowledge of the date/time storage patterns is incomplete.
    Native date/time storage is in what we call $H format: the number of days since 12/31/1840 and the number of seconds into the day, i.e., 59834,14598 = 10/26/2004 04:03:18. The ordering is natural, and no adjustments are necessary to read forwards or backwards chronologically. $ZH is even more interesting in that you can measure activity in millionths of a second. There is much more; there are 12 or more other date formats, with automatic adjustments for time zones, daylight saving changes and international exchange standards.
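    A quick worked sketch of that $H arithmetic in plain M (no Caché-specific functions), using the example values above:

        SET H="59834,14598"
        SET SECS=$PIECE(H,",",2)
        WRITE SECS\3600,":",SECS#3600\60,":",SECS#60  ; prints 4:3:18 (unpadded)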

    Regarding typing of data, let me correct you: MUMPS is not strongly typed because it was purposely made that way. Cache (object MUMPS), via classes that control properties and their associated attributes, can enforce type constraints and create any data type you can imagine, dream or conjure up.

    So, the only conclusion I can draw from your negative comments is that your knowledge of Cache is very limited. I suggest you read the manuals, experiment, get up to speed and stop being presumptuous.

  • To Matt (July 11, 2013),

    “If it were SQL, the developers of one software could more easily work with their counterparts and get things talking nicely together.”

    Cache solved this problem over a decade ago with a native SQL interface that can talk to any other SQL database. While some of the less popular databases require an ODBC interface, ODBC is not ideal for use as an application’s main database connection. It should only be used for transferring data to other, dissimilar systems that require periodic API exchanges, not for the day-to-day storage of application activities.
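    For anyone unfamiliar with what that native interface looks like, here is a minimal embedded-SQL sketch in Caché ObjectScript (the table, column, and host-variable names are hypothetical):

        ; host variables (:id, :name) move values between ObjectScript and SQL
        SET id=123
        &sql(SELECT Name INTO :name FROM MyApp.Patient WHERE PatientId = :id)
        IF SQLCODE=0 WRITE name,!
        ELSE  WRITE "no matching row, SQLCODE=",SQLCODE,!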

  • Extracted from the initiating comment

    … It’s hard for me to envision healthcare continuing to embrace a technology that doesn’t like to play nicely with other non-MUMPS systems. … (Jeremy Bikman)

    Let me be frank (well, I’m going to be anyway): this statement is just as absurd as “Hurricane Sandy was caused by Global Warming”. Both contain a declaration, and both are missing any facts to back it up. It is all so presumptuous.

    Do you know anyone who has tried and failed to make an interface with traditional MUMPS, or is it just an assumption that it won’t work? With GT.M or Cache I dare say the claim has no basis.

    Older MUMPS implementations are not for sale, so even if you wanted to try an interface, there is nowhere you could buy one except from an installation that has discontinued its use after converting to GT.M or Cache.

    All this bashing of MUMPS is quite unnecessary and unjustified. It’s generally politically driven, or comes from lazy CIOs who climb into the same Oracle or MS/SQL boat and are happy they didn’t have to do any real work in deciding what the best solution was.

    The largest banks and the largest traders in the world use MUMPS/Cache/GT.M. Do you really think they’re stupid? You say it’s not popular or exposed enough. So let me ask this: why would a bank or trader using Cache/GT.M not recommend it? I will tell you: if they did recommend it, then your bank or trader might become just as competitive, and that would be disadvantageous to them, wouldn’t it? They can run 50 times faster on machines at half the price and handle millions of transactions per day without breaking a sweat, and with only one DBA per company. So, go ahead and have all your profits consumed by the big hogs, or do your due diligence and make an informed decision.

  • Great discussion. Thanks for joining in to share all sides of it from multiple perspectives. You do show your bias, Roger, when you say, “The underlying DB continues to be superior to anything else in the market in every measurement.” It’s always hard to justify words like “every.”

  • John,

    I have made those measurements, in varying degrees. Even though we know it’s a moving target and each product is theoretically being improved from release to release, the performance differences between Cache and everything else are so dramatic that I do not think the others are anywhere near catching up in this decade.

    Those DB measurements included iterative reads, deletes, inserts (updates), copies, audits, and triggers on before and after events, in all the permutations, plus random I/O to simulate application activity. Even varying x columns by y record size, indexes, …

    Only the small DBs, like dBase V, came close in some tests.

    Databases (I know): Oracle, MS/SQL, Progress, DL/1 (IBM), Btrieve, dBase, FoxPro, Informix, Total (NCR), Cascade, Access (MS), FMS (DEC VMS), Magic and DB2 (IBM). (Note: some DBs are retired and I can’t do any renewed comparisons.)

    Oracle and MS/SQL wouldn’t dare go head to head with Cache on the same platform and expose how slow their databases really are. That is why they never mention Cache or GT.M when they brag about performance.

    But it is not only performance, Cache consumes less space and requires little to no maintenance. I could go on and elaborate on the language (code) performance but I have to go work for a few hours, maybe later.

  • So what reporting tool do you Cache users give BI/data analysts to use against Cache?
    I have only found one (Information Builders) that has any “native” interface to Cache. The rest rely on ODBC/OLE DB. Since ODBC and any aggregation queries against Cache are too slow to use, we have to move the data out of Cache.
    Once you move the data to a real database is when you start to see all the problems others mentioned above (terrible dates and times, as Cache stores them as strings with no data typing).
    Anytime we have to pull data from Cache, it adds 4-6 months onto the project, and we end up having to export text files and FTP them.
    If Cache is so fast, why do they not participate in TCP.org tests?
    And why is it that all the other guys need to get something special to work with Cache? I move data between SQL Server, Oracle, MySQL, FileMaker and DB2 all with no problems. But you throw in one Cache server and suddenly everything comes to a halt. Maybe when you start to blame everyone else, the problem lies within.

  • If you will carefully note, I made no claims regarding migration of data from a Cache database to another, non-Cache database, irrespective of direction.

    There is no TCP.org website, and besides, TCP is independent of databases and cannot measure DB I/O performance. Maybe you can supply sufficient information regarding these performance standards.

    You shouldn’t be extracting in logical mode. Use ODBC mode instead and then you wouldn’t have these date issues. See SQL Create Procedure in Caché SQL Reference Manual or %SelectMode Property (ODBC = 1) in the Using Caché SQL manual.

    If you have any other issues please kindly ask.

  • Please see http://www.tpc.org/ for actual database benchmark testing. Also note that InterSystems has never submitted numbers.
    Many years ago InterSystems had a banner on their website indicating they were the world’s fastest database. This has since been corrected to say the fastest OO database.
    Internal operations within Cache are indeed very fast.
    However, getting data out of Cache is terrible.
    And you have to get data out of Cache to combine it with data from other sources. Not even Epic can keep its data in Cache for analysis. They export it to a SQL database so users can analyze the data.
    Until Cache gets its ODBC functionality fixed, it will remain a niche system that requires data to be exported for analysis.

  • I looked over the TPC.ORG site. Where are the design specifications to build the schema, UI and methods required to run the performance simulations? I went through every page of the site and there are no testing design details. Are they not public? Who builds these “common ground” scenarios? There are so many database, operating system, hardware and programming language permutations.

    Here is a significant performance variable I believe TPC leaves out: the skill set of the developer. A seasoned software craftsman with 5+ years in the database and language can write code that will run an order of magnitude faster than a novice’s. Where is the equality in measurement then?

    I am writing to TPC for the complete design requirements for all testing scenarios. If they won’t give them to me, the argument concerning performance ends.
