Recently, an article appearing in the healthcare journal Health Affairs argued that hospitals’ progress toward interoperability has been modest to date. The article, which looked at the extent to which hospitals found, sent, received and integrated information from outside providers in 2015, found that they’d made few gains across all four categories.
Researchers found that the percentage of hospitals engaging in all four activities rose to 29.7% that year, up from 24.5% in 2014. The two activities that grew the most in frequency were sending (up 8.1%) and receiving (up 8.4%). Despite this expansion, only 18.7% of hospitals reported that they used this data often. The extent to which hospitals integrated the information they received didn’t change from 2014 to 2015.
Interesting, isn’t it, how these stats fail to align with what we know of hospitals’ priorities? Not only did the rates at which hospitals sent and received data increase slowly between those two years, but hospitals also don’t seem to be making any advances in integrating (and presumably, using) shared data. This doesn’t make sense given hospitals’ intense efforts to make interoperability happen.
The question is, are hospitals still limping along in their efforts, or are we failing to measure their progress effectively? For years now, looking at the extent to which they sent, received, found and integrated data has been the accepted yardstick in most quarters. To my knowledge, though, those metrics haven’t been validated by formal research as the best way to define and capture levels of interoperability.
Yes, hospital health data interoperability may be moving as slowly as the Health Affairs article suggests. After all, I hardly have to tell readers like you how difficult it has been to foster interoperability in any form, and how challenging it has been to achieve any kind of consensus on data sharing standards. If someone tells you that progress toward health data exchange between hospitals hasn’t reached robust levels yet, it probably won’t surprise you in the least.
Still, before we draw sweeping conclusions about something as important as interoperability, it probably wouldn’t hurt to double-check that we’re asking the right questions.
For example, is the extent to which providers send data to outside organizations as important as the extent to which they receive such data? I know, in theory, that health data exchanges would be just that: a back-and-forth between parties on both sides. Certainly, such arrangements are probably better for the industry as a whole over the long term. But does that mean we should discount the importance of one side of the process or the other?
Perhaps more importantly, at least in my book, is the degree to which hospitals integrate the data into their own systems a good proxy for measuring who’s making interoperability progress? And should we assume that if they integrate the data, they’re likely to use it to improve outcomes or streamline care?
Don’t misunderstand me, I’m not suggesting that the existing metrics are useless. However, it would be nice to know whether they actually measure what we want them to measure. We need to validate our tools if we want to use them to make important judgments about care delivery. Otherwise, why bother with measurements in the first place?