Artificial Intelligence could be the death of us all. I heard that claim roughly a hundred times last week in shares on Twitter. While the theme may be premature, the question of teaching machines ethics and the value of human life is relevant to machine learning, especially in healthcare. Can we teach a machine ethical bounds? As Elon Musk calls for laws and boundaries, I wondered what semantics and mathematics would be needed to frame that question. A Facebook friend once told me he knew a lot about artificial intelligence and wanted to warn me about the coming robot revolution.
He did not, in fact, know much about artificial intelligence. He could not code and had no background in the underlying mathematical theory, but he was familiar with the worst-case scenarios in which the robots we create find humanity superfluous and eliminate us. I was so underwhelmed that I messaged a friend who builds robots for Boston Dynamics and asked him about his latest project.
I was disappointed by that Facebook interaction. The disappointment was offset last week when the API changed and a cheerful Facebook message asked whether I wanted to revisit an ad I had previously liked. I purposely like ads when I like the company. Good old Facebook predictive ads. Sometimes I also comment on a picture or tag someone in an ad to see if I can change which advertisements appear in my feed.
One of my happiest moments with this feature was finding great new running socks. I commented on a friend's picture that I liked her running socks, and within an hour I saw my first ad for those very same socks. While I am not claiming to have seen the predictive and advertising algorithms behind Facebook advertising, machine learning is behind that ad. Photo recognition through image processing can identify the socks my friend Ilana was wearing while running a half marathon. Simple keyword scans can read my positive comments about the socks, which tells advertisers what I like. Paired with photos from advertisers, those socks seamlessly showed up in my feed as a buying option within an hour of my "liking" them. Are there ethical considerations in knowing exactly what my buying power, buying patterns, and personal history are? Yes. Similarly, there will be ethical considerations when insurance companies can predict exactly which patients will and will not be able to pay for their healthcare. While I appreciate great running socks, I have mixed feelings about machines assessing my ability to pay for the best medical care.
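To make the mechanics concrete, here is a minimal sketch of how a keyword scan might flag positive interest in a product from a comment. The word list, function name, and matching rule are my own illustrative assumptions, not Facebook's actual pipeline:

```python
# Toy sketch (assumed logic): flag products mentioned alongside a
# positive keyword in a user's comment.
POSITIVE_WORDS = {"love", "like", "great", "awesome", "best"}

def flag_product_interest(comment, product_tags):
    """Return the product tags mentioned alongside a positive keyword."""
    words = set(comment.lower().replace("!", "").split())
    if not words & POSITIVE_WORDS:
        return []  # no positive sentiment detected, nothing to target
    return [tag for tag in product_tags if tag.lower() in comment.lower()]

interests = flag_product_interest(
    "I love those running socks!", ["running socks", "trail shoes"])
print(interests)  # the positive comment mentions "running socks"
```

A real system would pair signals like this with image recognition and a user's profile history, but even this toy version shows how little "understanding" is needed to place a well-timed ad.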
Can a machine be taught to value the best medical care and to reason ethically? We hear plenty of debate about whether machines can be taught not to kill us. Teaching a machine ethics will be complicated, even as models show, for example, how poor nutrition directly changes how long patients live. Some claim these systems are dangerous to create; others say the difference will be human intuition. Can human intuition be replicated, and what application would that have for medicine? I have always considered intuition to be connections our brains recognize without our direct awareness, so a machine should be able to learn something like intuition through deep learning networks.
Creating “laws” or “rules” for ethics in artificial intelligence, as Elon Musk calls for, is difficult because ethical bounds are hard to teach machines. In a recent interview Musk claimed that artificial intelligence is the biggest risk we face as a civilization. Ethical rules and bounds are difficult for humanity itself. Once, while we were looking at data patterns, trust patterns, and disease prediction, someone turned to me and said, “But insurance companies will use this information to deny people coverage. If they could read your genes, people would die.” In terms of teaching a machine ethics or adding outer bounds, one weakness is that trained learning systems can get very good in a narrow domain, but they do not transfer that learning to different domains. They cannot reason by analogy; machines are terrible at that. I spoke with Adam Pease about how ontology can increase the benefits of machine learning for healthcare outcomes. Ontology creates a way to process meaning that is more robust in a semantic view. He shared his open source ontology projects and suggested we should be speaking with philosophers and computer science experts about ontology and meaning.
Can a machine learn empathy? Will a naturally learning and evolving system teach itself different bounds, hold life sacred, and interpret every doctor's charge to “do no harm”? The medical community should come together to agree on ethical bounds and collaborate with computer scientists to assess the capacity to teach those bounds and the possible outliers in motivation.
Most natural language processing today powers applications that are pretty shallow in terms of what the machine understands. Machines are good at matching patterns: if your pattern is a bag of words, the system can match it against other bags of words across a collection of documents. Many companies have done extensive work training systems that work with patients to learn what words mean and to recognize common patterns in patient care. Health Navigator, for example, has done extensive work forming the clinical bounds for a telemedicine system. When patients ask about their symptoms, they get clinically relevant information paired with those symptoms, even if they use non-medical language to describe their chief complaint. Clinical bounds create a very important framework for an artificial intelligence system to process large amounts of inbound data and help triage patients to appropriate care.
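As an illustration of how shallow that matching is, here is a minimal bag-of-words sketch in Python. The lay phrases, clinical topic labels, and similarity measure are hypothetical stand-ins; a real triage system like Health Navigator's is far more sophisticated:

```python
from collections import Counter
import math

def bag_of_words(text):
    """Lowercase and split text into a word-count vector."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical mapping from indexed phrases to clinical topics
CLINICAL_TOPICS = {
    "chest pain pressure": "possible cardiac symptom",
    "stomach ache nausea": "possible gastrointestinal symptom",
}

def match_complaint(complaint):
    """Return the clinical topic whose bag of words best matches the complaint."""
    query = bag_of_words(complaint)
    best = max(CLINICAL_TOPICS,
               key=lambda phrase: cosine_similarity(query, bag_of_words(phrase)))
    return CLINICAL_TOPICS[best]

print(match_complaint("I have a bad pain in my chest"))
```

The sketch "understands" nothing; it only counts shared words. That is exactly why clinical bounds built by experts matter so much as a framework around these systems.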
With the help of Symplur Signals, I looked at ethics and artificial intelligence in healthcare and who was having conversations online in those areas. Wen Dombrowski MD, MPH led much of the discussion. Appropriately, part of what Catalaize Health does is artificial intelligence due diligence for healthcare organizations. Joe Babaian, who leads the #HCLDR discussion, was also a significant contributor.
A “smart” system needs to be able to make the same inferences a human can. Teaching inference and semantic standards is the first step toward teaching a machine “intuition” or “ethics.” Predictive pattern recognition appears more developed than the ethical and meaning boundaries around sensitive patient information. We can teach a system to recognize an increased risk of suicidal behavior from rushed words, dramatically altered behavior, or higher-pitched speech, but is it ethical to monitor at-risk patients through their phones? I attended a mental health provider meeting about how that kind of surveillance would affect patients with paranoia. Where are the ethical lines between protection and control? Can the meaning of those lines be taught to a machine?
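To show how crude such a detector could be, and why the ethical questions bite, here is a purely illustrative sketch. The features, thresholds, and baselines are invented for the example; real clinical screening is vastly more complex and is exactly what the ethical debate above is about:

```python
# Toy sketch (illustrative assumptions only): flag when speech rate and
# pitch both deviate sharply from a patient's own baseline. Real risk
# assessment involves clinicians, context, and consent, not two thresholds.
def risk_signal(words_per_minute, pitch_hz, baseline_wpm, baseline_pitch_hz):
    """Return True when both speech rate and pitch jump well above baseline."""
    rate_jump = words_per_minute > 1.5 * baseline_wpm
    pitch_jump = pitch_hz > 1.2 * baseline_pitch_hz
    return rate_jump and pitch_jump

# Rushed, higher-pitched speech relative to a calm baseline trips the flag
print(risk_signal(240, 260, baseline_wpm=140, baseline_pitch_hz=200))  # True
```

A few lines of thresholds can "detect" a pattern, but they cannot weigh the harm of false alarms, the effect on a patient with paranoia, or the line between protection and control. Those judgments are the part we have not yet taught machines.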
I am looking forward to seeing what bounds healthcare providers decide on, and how they train machines on those ontologies.