In recent times, healthcare organizations have increasingly used AI to tackle key problems created by COVID-19. One approach that could prove powerful in the future is using AI to reduce healthcare-acquired infections (HAIs), according to one healthcare executive.
John Langton, who leads data science efforts at Wolters Kluwer, writes that COVID-19 is pushing health leaders to fast-track the use of machine learning and AI tools to fight the virus.
These tools are increasingly being used to strengthen clinical surveillance systems so they can detect and track the disease more accurately. (As discussed below, machine learning predictions may not be as accurate as they seem, and the implications of that finding extend well beyond this story.)
The idea of using AI to tackle HAIs grows out of the long-term evolution of clinical surveillance systems. AI-based approaches have come to seem more feasible as those systems have expanded to address hospitals' surveillance, data analytics and regulatory compliance needs, Langton notes.
However, this will require pulling together data from both inside and outside the hospital system to flag at-risk patients before they deteriorate or spread their illness to others. Armed with this information, physicians can intervene earlier in the course of illness by addressing modifiable risk factors; Langton cites the use of high-risk antimicrobials and acid suppressants as examples.
Meanwhile, he notes, many clinicians remain skeptical about the effectiveness of AI in patient care settings. Many distrust the data and worry about the impact AI tools may have on their workflow.
Patients, too, aren’t entirely comfortable with how well AI tools will protect their privacy and ensure their safety when used for diagnosis and treatment. It may be some time before providers build enough trust with clinicians and patients for AI to be accepted across the board.
These clinician and patient worries aren’t unjustified. In fact, a recent study by a team led by Google computer engineers found that a phenomenon known as underspecification may be substantially undermining the accuracy of machine learning-based predictions.
Underspecification occurs when many different models fit the same training data equally well early on, yet go on to produce sharply divergent predictions later, once conditions change. Since a machine learning process ultimately needs to commit to a single set of answers, this is a serious flaw.
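A toy sketch can make the idea concrete. In the hypothetical example below (illustrative only, not code from the Google study), two features are perfectly correlated in the training data, so two different linear models fit the training set identically; but when that correlation breaks at deployment time, their predictions diverge. All names and values here are invented for illustration.

```python
# Minimal illustration of underspecification: two models that are
# indistinguishable on training data but disagree once the data shifts.

def predict(weights, x):
    """Simple linear model: weighted sum of the input features."""
    return sum(w * xi for w, xi in zip(weights, x))

# Training data in which feature 2 always equals feature 1
# (a spurious correlation the models cannot tell apart).
train = [([1.0, 1.0], 2.0), ([2.0, 2.0], 4.0), ([3.0, 3.0], 6.0)]

# Two equally valid solutions on this training set:
model_a = [2.0, 0.0]   # relies entirely on feature 1
model_b = [0.0, 2.0]   # relies entirely on feature 2

train_err_a = sum(abs(predict(model_a, x) - y) for x, y in train)
train_err_b = sum(abs(predict(model_b, x) - y) for x, y in train)
print(train_err_a, train_err_b)   # both 0.0 -- indistinguishable in training

# At deployment, the correlation no longer holds (feature 2 has drifted).
shifted = [1.0, 5.0]
print(predict(model_a, shifted))  # 2.0
print(predict(model_b, shifted))  # 10.0 -- same training fit, very different answer
```

Nothing in the training process distinguishes the two models, so which one a pipeline "commits to" can come down to arbitrary choices like a random seed, which is why the phenomenon is worrying for clinical predictions.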
Nonetheless, much AI technology is quite robust. To build trust in these tested, more reliable systems, Langton suggests three steps:
- Make sure clinicians have “explainability” – a visual picture that confirms the AI solution is effective. The picture should also illustrate how and why the AI makes its predictions, and how it will make the right information available at the right time to support decisions.
- Involve clinicians in the development of AI tools from start to finish, so they can ensure the AI fits seamlessly into their workflow. Meanwhile, data scientists must work with clinicians to validate the patterns AI finds through error analysis and feature engineering.
- Make as much data as possible available for AI use, then verify that it isn’t prone to generating biased results. The data must also reflect the patient population and not discriminate based on gender, race, ethnicity or other factors.
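The bias check in the last step can be sketched in a few lines. The example below (an assumption for illustration, not a method Langton describes) audits a hypothetical AI surveillance tool by comparing its positive-flag rate across patient groups; a large gap between groups is one simple signal that the data or model deserves scrutiny.

```python
# Illustrative demographic-parity check: compare how often an AI tool
# flags patients in each group. Group labels and data are hypothetical.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: list of (group_label, was_flagged) pairs -> rate per group."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        if was_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical audit sample of the tool's flags.
sample = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = positive_rate_by_group(sample)
gap = abs(rates["group_a"] - rates["group_b"])
print(rates, gap)  # a wide gap between groups warrants a closer look
```

Real fairness auditing is considerably more involved (base rates legitimately differ across populations), but even a basic check like this can catch gross imbalances before a tool reaches clinicians.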
In any event, it’s obvious that there are still a huge number of clinical puzzles that AI might help solve. The next year or two should see the development of many new technologies designed to ease painful problems in the care delivery process.