Over the past few years, as some standout use cases began to emerge, enthusiasm for AI has grown among health leaders, to the point where you’ll hear little but positive thoughts about AI’s potential benefits.
Now that interest in AI has moved almost entirely into the mainstream, it’s probably time to look at areas where the industry has perhaps gotten carried away with itself and lost sight of potential challenges to AI’s use.
This is why I was pleased to stumble across an article in Harvard Business Review that calls out some bothersome myths about machine learning in healthcare that have already begun to take hold in the industry.
The authors of the piece include Derek Haas, MD, an orthopedic surgeon with Henry Ford Health System; Joseph Schwab, MD, an associate professor of orthopedic surgery at Harvard Medical School; and John Halamka, MD, executive director of the Health Technology Exploration Center of Beth Israel Lahey Health. Collectively, they’ve developed and used dozens of ML applications.
In the article, they identify three particularly pervasive beliefs about ML which they consider to be myths. They include:
- That machine learning tools can do much of what doctors do. According to the authors, ML applications are not likely to replace most of what doctors do for the foreseeable future. They concede that ML can help doctors prevent people from getting sick and diagnose medical problems, particularly when it comes to analyzing images, but argue that it’s not going to replicate physicians’ ability to provide care and treatment anytime soon. “The ML output still must be analyzed by someone with domain knowledge,” they write. “Otherwise, trivial data may be interpreted as essential and essential data as trivial.” Humans also still play a critical role in helping patients decide whether to receive treatment and, if so, what type, they note.
- That “big data” + brilliant data scientists are always a recipe for success. Highly skilled data scientists are indeed critical for building sophisticated ML models, but that’s not enough on its own. It’s also important to have domain experts in place who understand how to think about the models and the output drawn from those large databases, the authors write. One way to succeed is to leverage an ML-plus-human approach: the ML algorithm makes its “decisions” about, for example, classifying medical supplies and drugs, and once that’s done, a hospital brings in an expert to review the ML output and fix any problems, focusing in particular on ML guesses that fall in a lower confidence range.
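The ML-plus-human review pattern the authors describe can be sketched roughly as follows. This is a minimal illustration, not their implementation: the class names, example items, and the 0.90 threshold are all assumptions made for the sake of the example.

```python
# Sketch of confidence-based human-in-the-loop review: the model's
# classifications are auto-accepted when confidence is high, and routed
# to a domain expert when confidence falls below a chosen threshold.
from dataclasses import dataclass

@dataclass
class Prediction:
    item: str          # e.g., a medical supply or drug description
    label: str         # the model's proposed category
    confidence: float  # model's confidence score in [0, 1]

def triage(predictions, threshold=0.90):
    """Split predictions into auto-accepted and flagged-for-review lists."""
    accepted, needs_review = [], []
    for p in predictions:
        (accepted if p.confidence >= threshold else needs_review).append(p)
    return accepted, needs_review

preds = [
    Prediction("suction catheter, 10 Fr", "airway management", 0.97),
    Prediction("misc. surgical tray item", "instruments", 0.42),
]
accepted, needs_review = triage(preds)
# High-confidence classifications pass through automatically;
# the low-confidence one is queued for expert review.
```

The key design choice is the threshold: raising it sends more items to the human reviewer, trading staff time for accuracy on exactly the cases where the model is least reliable.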
- That healthcare leaders will adopt and use successful algorithms once they have been discovered. In the authors’ experience, many powerful algorithms aren’t widely adopted because they aren’t integrated into the workflow of potential users. For example, they tell the story of a hospital that built a specialist referral algorithm that never got used because, to access it, doctors had to exit their EHR, open the referral app, enter information into it, then return to the EHR. This took up far too much time. In contrast, they write, Beth Israel Deaconess Medical Center in Boston had great success with an ML application that automatically scanned incoming faxes, paper and electronically transmitted documents related to surgical consent and filed them into the right medical record. The app also added an alert to the pre-operative checklist. This saved about 120 hours of staff time per month.
Seems to me that the myths the authors identify are similar to those that hover around any emerging technology – that the technology can magically solve problems, that if we build the right apps they will come, and that we’ll need fewer professionals to get a given job done once the tech is mature. Regardless, given that AI use is accelerating quickly, it’s good to have mistaken notions about its uses identified clearly.