Artificial intelligence has come slowly to health care, but in the past year or so a number of providers and insurers have followed the path forged by other industries and applied machine learning seriously to health care costs and treatments. Naturally, the risks of getting something wrong are high in health care, so special attention must be paid to the correct and ethical use of machine learning. These issues apply throughout the field of artificial intelligence, but health care institutions have a special need to prove that they are addressing them:
Accuracy of input
Data taken from devices, from medical records, and from patients themselves each present different risks. Devices sometimes interpret unfamiliar activities in bizarre ways, producing meaningless data. Medical records can be skewed by upcoding and other distortions introduced for billing purposes, and can reflect variation in how doctors assign codes. Patient-reported information carries its own biases and is not always honest.
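One common defense against devices producing meaningless data is a plausibility filter that discards physiologically impossible readings before they reach any model. The sketch below is illustrative only, not Current Health's actual pipeline, and the ranges are assumptions chosen for the example:

```python
# Illustrative plausibility filter for device vitals. The field names and
# plausible ranges are assumptions for this example, not a clinical standard.

PLAUSIBLE_RANGES = {
    "pulse_bpm": (20, 250),
    "spo2_pct": (50, 100),
    "resp_rate": (4, 60),
    "skin_temp_c": (30.0, 43.0),
}

def filter_reading(reading):
    """Keep only the fields whose values fall in a plausible range."""
    clean = {}
    for field, value in reading.items():
        low, high = PLAUSIBLE_RANGES.get(field, (float("-inf"), float("inf")))
        if low <= value <= high:
            clean[field] = value
    return clean

# A device misreading an unfamiliar activity might report an impossible pulse:
raw = {"pulse_bpm": 412, "spo2_pct": 96, "resp_rate": 14}
print(filter_reading(raw))  # the bogus pulse reading is dropped
```

Filters like this trade recall for trust: a dropped reading is a gap in the record, but a retained impossible value can silently corrupt everything downstream.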
Diversity and reproducibility
Medical research has always been skewed toward “mainstream” patients, because the researchers seek people who are “pure” representatives of a condition without the comorbidities brought by most patients. Analytics on real data can correct for that, but they still suffer from a concentration on patients who are on average younger, more educated, higher-income, and more likely to be Caucasian than the general population. This lack of representation is not only discriminatory toward under-represented demographics, but makes the results of analytics in one setting unreproducible in other settings.
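One concrete way to catch this reproducibility problem is to break a model's accuracy out by demographic subgroup before trusting it in a new setting. This is a hypothetical sketch; the records, groups, and labels are invented for illustration:

```python
# Hypothetical check: does a model's accuracy hold up across subgroups?
# A large gap between groups signals results that may not generalize.

from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("under_65", 1, 1), ("under_65", 0, 0), ("under_65", 1, 1),
    ("over_65", 1, 0), ("over_65", 0, 0),
]
print(accuracy_by_group(records))
```

A model that scores well overall but poorly on an under-represented group is exactly the discriminatory, non-reproducible result the paragraph above warns about.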
Explainability
When analytics are applied to high-stakes situations, people intuitively distrust the results, and even sophisticated observers know that algorithms can be thrown off. Doctors therefore demand that machine learning models explain which factors led to their decisions.
Responsibility
Virtually no software developers are willing to take legal liability for the results of their software: they know that bugs are always present. In health care, doctors ultimately retain responsibility for diagnostic and treatment decisions, and that's how it should be for the foreseeable future. Software developers define the limits of their responsibility with great precision.
I recently talked to Stewart Whiting, CTO of Current Health, to see how the company handles these issues and helps clients with them. Their relatively new platform currently serves several hundred patients, and they plan to scale to 20,000 patients over the next two years.
They are currently focused on chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), and sepsis, all common and dangerous conditions. They are also currently researching trends in cardiac rhythms to look for indications of upcoming problems. The main goal of their analytics is to predict patient deterioration. For instance, they track lethargy, confusion, and similar behaviors in the home that can suggest that the patient needs intervention.
To address the risk of trusting device data, Current Health, although fundamentally a software company focused on analytics, took on the bold task of creating its own monitoring device. The device can be used in the hospital, in a community care facility such as a nursing or rehab facility, or in the patient's home. Long-term tracking is critical for gaining insight into the course of a patient's progress and for finding the factors that eventually lead to a health crisis.
The audacious decision to build their own device emerged from their research into current devices. Devices change frequently, and although better ones become available regularly, it’s hard to tell how their output reflects a patient’s condition. A change in something trivial such as a lens can break an analytical model developed for another device.
The Current Health device measures respiration rate, oxygen saturation (SpO2), pulse, skin temperature, posture, and motion. They also take blood glucose and international normalized ratio (INR) readings from a patch manufactured by another company, weight from a scale, and several other blood-related measurements that devices make available. Current Health doesn't currently use lab results or other data from electronic health records, partly because their clients are payers, such as insurers and employers, rather than providers in clinical settings.
To handle questions of risk and responsibility, Current Health stops short of making diagnoses or even recommendations. The analytics operate by clustering data, based both on pre-defined labels (known as supervised learning in AI) and on new factors or dimensions that emerge from crunching the data (known as unsupervised learning). They have found that high-risk patterns include nighttime apnea, low oxygen saturation, lack of movement, and degraded speech patterns.
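The distinction between the two approaches can be made concrete with a toy sketch on one-dimensional vital-sign data (all values invented, and far simpler than any production model): supervised learning fits to known labels, while unsupervised learning discovers groupings without them.

```python
# Toy illustration of supervised vs. unsupervised learning on respiration
# rates (breaths per minute). Values and labels are invented.

def nearest_centroid_predict(labeled, x):
    """Supervised: classify x by the closest class mean of labeled data."""
    sums, counts = {}, {}
    for value, label in labeled:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def two_means(values, iters=10):
    """Unsupervised: split values into two clusters (1-D k-means)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return a, b

# Supervised: respiration rates labeled by a clinician
labeled = [(14, "stable"), (16, "stable"), (28, "deteriorating"), (32, "deteriorating")]
print(nearest_centroid_predict(labeled, 30))  # "deteriorating"

# Unsupervised: the same values split into two groups with no labels at all
print(two_means([14, 16, 28, 32]))
```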
Current Health’s analytics, based on these inputs, look for differences between patients that explain why some deteriorate while others stay stable. The Current Health application can simply show doctors the implicated factors, along with a probability score indicating the likelihood of deterioration, suggesting that doctors pay special attention to patients flagged as at risk. This presentation of information aids explainability: it helps doctors justify their choices to themselves, their patients, and their payers.
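Pairing a probability with the factors behind it might look like the following sketch, which uses a simple logistic score. The weights are invented for illustration; a real system would learn them from patient data rather than hard-code them:

```python
# Hedged sketch: a deterioration probability plus the factors that drove it.
# Feature names echo the article; weights and bias are invented.

import math

WEIGHTS = {"nighttime_apnea": 1.4, "low_spo2": 1.8,
           "lack_of_movement": 0.9, "degraded_speech": 1.1}
BIAS = -2.5

def deterioration_risk(features):
    """features: dict of 0/1 indicators. Returns (probability, implicated factors)."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic function
    implicated = sorted((f for f, c in contributions.items() if c > 0),
                        key=lambda f: -contributions[f])
    return probability, implicated

prob, factors = deterioration_risk(
    {"nighttime_apnea": 1, "low_spo2": 1, "lack_of_movement": 0, "degraded_speech": 0})
print(round(prob, 2), factors)
```

Showing the ranked factors, not just the score, is what lets a doctor sanity-check the alert rather than take it on faith.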
Current Health’s process, as Whiting describes it, is to “observe AI as it fails in order to improve it.” One source of input is feedback from doctors: when they accept or reject an alert, Current Health can incorporate that information, the way your mail client incorporates your feedback when you mark a message as “spam” or “not spam”. They hope to improve the representation of different populations by making measurement affordable and easy in a multiplicity of settings.
There is an inextricable tie between data collection and analysis. Thus, combining devices with machine learning is a natural trend in health care. A recent survey of device manufacturers suggests they are recognizing the need for making sense of the data they collect, and they may well end up competing with Current Health. In any case, more attention to data that is both trustworthy and applicable will improve treatment.
Still, Current Health differs from many of the health IT start-ups I’ve talked to in the company’s comprehensive approach to producing results that are trustworthy. We’ll have to see what kinds of treatments and reductions in risk they come up with.