Today I’ve been attending the AHIMA virtual conference. AHIMA deserves applause for the work they did to put this together, and credit goes to the amazing HIM professionals who seem really engaged with the community. I’ve been to a lot of virtual health IT events lately and most attendees aren’t nearly as engaged as AHIMA’s.
As I visited a number of virtual exhibitors, I had a really great conversation with the people at Dolbey. The majority of our conversation revolved around computer assisted coding (CAC). When asked where we were in the adoption of CAC and how effective AI was at coding, they shared that it really depends on the healthcare organization. No surprise there, since we’re now entering a phase where many healthcare organizations tried CAC five years ago and are coming back to it because the accuracy and technology have improved so dramatically over the last few years.
However, what they said next was the best description of where we’re at with AI assistance that I’ve heard in a while. The Dolbey rep, Heather Gladden, suggested that CAC was a lot like a multiple choice test. It doesn’t give you the answer, but it makes things much easier.
Ok, if you set aside our general distaste for “tests,” I thought the comparison was really apt. We all know that multiple choice tests are easier because the answer is presented in front of you and you just have to pick it out. There are usually a few options you can exclude quickly, and then you only have to decide between a couple. It’s a great description of how an AI engine can help someone be more efficient and accurate.
Of course, this is just the start. I think most of us dream of the day when the AI engine will take over the mundane tasks in healthcare that are easy and obvious. In fact, Dolbey says its system is already doing that for simple charts, which can auto-close because the coding is obvious. For complex charts, however, they still need a human. I think that will continue to change over time as AI gets better, but we’re not quite there yet. (Many of my HIM friends say it will never happen in coding.)
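The auto-close-versus-human-review split Dolbey describes can be sketched as a simple confidence threshold: if the engine is sure enough about the top code, the chart closes automatically; otherwise the coder gets a short list to choose from, i.e. the multiple choice test. This is purely illustrative; the names, threshold, and data shapes below are my own assumptions, not Dolbey’s actual product.

```python
# Hypothetical sketch of confidence-threshold routing in a CAC workflow.
# All names and the threshold value are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class CodeSuggestion:
    icd10_code: str    # e.g. "E11.9"
    confidence: float  # engine's confidence, 0.0 to 1.0


AUTO_CLOSE_THRESHOLD = 0.95  # assumed cutoff for an "obvious" chart


def route_chart(suggestions, max_choices=4):
    """Auto-close obvious charts; otherwise return a shortlist
    (the 'multiple choice test') for a human coder to pick from."""
    ranked = sorted(suggestions, key=lambda s: s.confidence, reverse=True)
    if ranked and ranked[0].confidence >= AUTO_CLOSE_THRESHOLD:
        return "auto_close", [ranked[0]]
    return "human_review", ranked[:max_choices]


# A simple chart with one high-confidence code auto-closes; a complex
# chart yields a shortlist for the coder to choose from.
simple = [CodeSuggestion("E11.9", 0.98), CodeSuggestion("E11.8", 0.40)]
complex_chart = [
    CodeSuggestion("I50.9", 0.62),
    CodeSuggestion("I50.22", 0.58),
    CodeSuggestion("I50.32", 0.31),
]
```

Calling `route_chart(simple)` would auto-close with the single top code, while `route_chart(complex_chart)` would hand the ranked shortlist to a human, which is the narrowing-down behavior the analogy captures.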
Plus, there’s another reason the AI engine often can’t be trusted today to fully automate the coding process. Dolbey told me that the CAC AI frequently doesn’t have access to all the data needed to code a chart properly. Good old healthcare interoperability strikes again. The CAC will do as well as it can with the data it has, but a human may tweak the CAC result based on data they can see that the AI engine can’t.
While I’d love for interoperability to solve the problem described above, I think ambient clinical voice documentation and recordings will probably solve it even better. Those recordings are already being processed to create the clinical documentation, and there’s no reason they can’t feed the CAC as well. In fact, as technology and AI continue to evolve and improve, it’s not hard to see a day when analyzing the audio from a visit could do a better job of coding the visit than analyzing the entered documentation. Combine both and it will be better still.
Until we get to full AI automation, I’m going to start describing where we’re at today as the AI presenting a multiple choice test. It still needs the human to choose the right answer, but having AI narrow down the possibilities can do a lot to improve the accuracy and speed of the human. That’s certainly something worth doing. Plus, it’s a stepping stone to training the AI for more and that’s a great thing.