Proceeding with Caution: The Impact of (Generative) AI on Healthcare and Life Sciences

The following is a guest article by Steven Lazer, Global Healthcare & Life Sciences CTO, Alex Long, Head of Life Sciences Sales Strategy, and Michael Giannopoulos, Healthcare Global CISO and CTO for the Americas at Dell Technologies.

Artificial intelligence (AI) has been around for a long time. We have typically thought of it as algorithms and software programs that learn from historical studies or collections of information in order to develop and train a model. In its earliest incarnations, AI was used to build guardrails and basic algorithms that help interpret data. Artificial intelligence is utilized across many different industries and is readily accepted as a method of information analysis, in healthcare and elsewhere, with a high level of success. Since those early days, AI has grown in capacity and utilization across a broad range of use cases and has become integrated, often unseen, into how business functions. As a matter of fact, multiple forms of artificial intelligence were used to create this article: natural language processing (NLP) in the form of dictation, along with spellcheck and grammar AI, helped us compose this document, and some content was created with ChatGPT. “AI” is not just the AI you hear and read about in the media and in online articles. Recent headlines around the release of large language models (LLMs) and generative AI capabilities have changed the common perspective and brought AI to the forefront of everyday conversation.

Artificial intelligence has been successfully integrated into applications such as patient management, financial modeling, disease-spread modeling, therapeutic success forecasting, and image analysis through computer vision. These early AI applications were typically reactive, focusing on generating numerical model outputs. This document delves deeper into how AI has evolved, its potential impact on healthcare and life sciences, and our recommendations for responsible and effective implementation of this technology in order to protect patient safety and privacy.

A brief primer on the forms of AI:

The various sub-fields of AI research are centered around particular outcomes and the tools applied. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, neuromorphic software and hardware designs, and methods based on statistics, probability, and economics. AI also draws upon the fields of computer science, psychology, linguistics, philosophy, and others.

Some early forms of AI include optical character recognition (OCR), data mining, search engine recommendations, speech understanding as in Siri and Alexa, and many others. The field continues to grow, and what is considered AI today may no longer be considered AI in the future as these functions become commonplace expectations and experiences.

Large language models (LLMs), including the recently released ChatGPT, Bard, DALL-E, LaMDA, and others, are typically based on neural networks trained on extremely large datasets of unlabeled text using self-supervised or semi-supervised learning, and they have been around for more than five years. With the most recent advances in compute capability, the amount of data these models can process has increased dramatically, creating a much more robust and, frankly, commonly usable toolset. Speech recognition, simplified programming methodologies, and everyday language interfaces have made these tools readily accessible, no longer requiring mathematical reasoning to build and develop questions or algorithms. There are multiple forms of large language model approaches available to us today. Generative models can incorporate new information into the model with any and every request made of the model.
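To make that accessibility point concrete, here is a minimal sketch of generating text with a pretrained model, assuming the open-source Hugging Face transformers library and the small public gpt2 checkpoint (illustrative choices on our part, not a recommendation):

```python
# A minimal sketch of how accessible these tools have become, assuming the
# open-source Hugging Face "transformers" library and the small public
# "gpt2" checkpoint (both illustrative choices, not an endorsement).
from transformers import pipeline

# Download a pretrained generative model and wrap it in a simple interface.
generator = pipeline("text-generation", model="gpt2")

# A plain-English prompt; no mathematical reasoning or algorithm design needed.
result = generator("AI in healthcare can help clinicians by", max_new_tokens=40)
print(result[0]["generated_text"])
```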

The rise of Generative AI

At Dell Technologies, we see early adoption of generative AI in many industries. However, the same old “GIGO” (Garbage In/Garbage Out) rule we learned in basic computer concepts in high school still applies. When a generative AI tool doesn’t have what it needs, it creates its own facts, called “hallucinations,” which can sound highly plausible, especially to non-subject-matter experts. Different groups or teams can also feed the AI data, knowingly or unknowingly impacting results or creating an unwarranted bias. To be clear, even if you are an expert, it is challenging to know where an AI model got its “facts” from or to test their veracity. These factors create real concerns, such as unintentionally taking an AI hallucination as accurate or adopting a strong bias based on any number of factors, leading to unexpected or bad outcomes. In addition, bad actors use these tools to shape or drive activity for nefarious or criminal purposes. Even the creators of this technology are concerned about guardrails, the rules of engagement, and who should have access to what information or systems. Two examples of this are Sam Altman’s recent testimony before Congress and the published remarks of deep learning pioneer Geoffrey Hinton.

Extractive LLMs, such as Pryon or Haystack, are trained on the datasets they are presented with. Unless equipped with a generative function, they will only allow those datasets to become part of the response when a question is asked. Extractive AI does not permit the possibility of hallucination and can be developed to cite the sources used to create any response. While extractive models lack broad applicability and require more resources to create, we at Dell Technologies believe extractive models are likely to be one of the best ways to apply LLM capabilities to healthcare data in a secure fashion.
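As an illustration of the extractive idea, here is a minimal sketch using the Hugging Face transformers question-answering pipeline as a generic stand-in (not the Pryon or Haystack APIs); the answer is a span copied verbatim from the supplied text, with character offsets that serve as a citation back to the source:

```python
# A minimal sketch of extractive question answering, using the Hugging Face
# "transformers" question-answering pipeline as a generic stand-in (not the
# Pryon or Haystack APIs). The answer is a span copied verbatim from the
# supplied document, so it can always be traced back to its source.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

document = (
    "The hospital's sepsis protocol requires a lactate measurement "
    "within three hours of presentation."
)
answer = qa(question="When must lactate be measured?", context=document)

# 'start' and 'end' are character offsets into the source text, which is how
# an extractive system can cite exactly where its answer came from.
print(answer["answer"], answer["start"], answer["end"], answer["score"])
```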

Privatized large language models will likely become the norm for healthcare and life sciences, as the information involved is both protected and extremely valuable. Privatizing the large language model offers a straightforward means of safeguarding against the introduction of misinformation, and the model can be tuned to significantly reduce the number of “hallucinations.” By adopting an on-premises approach, organizations can maintain the privacy of protected health information while enabling the model to extract and deliver relevant insights. This stands in contrast to the public models that have garnered recent media attention.
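As a minimal sketch of the on-premises idea, the snippet below assumes the Hugging Face transformers library and a hypothetical model directory that was downloaded once and then kept inside the private network; with local_files_only=True, prompts containing protected health information never leave the site:

```python
# A minimal sketch of a privatized LLM, assuming Hugging Face "transformers"
# and a model directory that was downloaded once and then copied inside the
# private network (the path below is hypothetical).
from transformers import AutoModelForCausalLM, AutoTokenizer

LOCAL_MODEL_DIR = "/secure/models/private-llm"  # hypothetical on-prem path

# local_files_only=True guarantees nothing is fetched from the internet,
# so prompts containing protected health information never leave the site.
tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(LOCAL_MODEL_DIR, local_files_only=True)

inputs = tokenizer("Summarize this discharge note: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```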

When contemplating the use of LLMs to gain further insights and make inferences related to patient treatment, the issue of model transparency becomes a significant part of the conversation. Generative models have the capacity to incorporate new information used to generate outcomes. In the healthcare context, utilizing an LLM necessitates a model that remains fixed unless a deliberate update is chosen, followed by thorough revalidation of its capabilities. Conventional generative models tend to deviate from their initial training as they assimilate additional information, resulting in alterations to the model itself, rendering it unsuitable for healthcare treatment purposes – whether in the research or clinical setting. Conversely, generative models may find better application in other analyses that do not involve regulated algorithms.
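One lightweight way to enforce a fixed model, sketched below under assumed file paths and a hypothetical recorded digest, is to fingerprint the weight files at validation time and refuse to serve if the fingerprint ever changes:

```python
# A minimal sketch of keeping a clinical model "fixed": fingerprint the weight
# files at validation time, then refuse to serve if the fingerprint changes.
# The model path and the recorded digest below are hypothetical.
import hashlib
from pathlib import Path

def model_fingerprint(model_dir: str) -> str:
    """Hash every file in the model directory into one stable digest."""
    digest = hashlib.sha256()
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            digest.update(path.read_bytes())
    return digest.hexdigest()

# Digest recorded when the model was last validated (placeholder value).
VALIDATED_DIGEST = "e3b0c442..."

current = model_fingerprint("/secure/models/private-llm")
if current != VALIDATED_DIGEST:
    raise RuntimeError("Model weights changed since validation; revalidate before use.")
```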

Any discussion regarding artificial intelligence would be remiss if we did not address the topics of trust, ethics, and bias. In traditional use cases of AI, where humans and human welfare are not involved, these topics are less burdensome than they are in healthcare and life sciences.

It’s a question of trust:

Trust is essential for healthcare professionals, patients, and other stakeholders to fully embrace and rely on AI technologies. Some of the challenge in developing trust lies with the caregivers and clinicians, who have always relied on their own instincts to develop not only concepts but treatment methodologies for patients. Additionally, trust in the data that generates the outcomes of any algorithm has been in question since IoT sensors were first applied to healthcare. Patient-generated or anecdotal data is frequently dismissed by the clinical community.

Development of trust within the patient community is the other half of this challenge. Patients’ comfort level with computer-based diagnostic inference and algorithmic treatment recommendations is growing slowly as our population ages into familiarity with these tools. Trust will be built over time, and it is not something that can be rushed. Overcoming these challenges requires transparent AI systems, clear communication of AI’s limitations and capabilities, robust data governance practices, and collaborative efforts to ensure that AI technologies align with the values and needs of healthcare professionals and patients.

Human oversight will remain necessary, meaning clinicians will never be superseded by what can be done with technology.

Global ethics:

The integration of AI in healthcare and life sciences raises a multitude of ethical challenges. One of the key concerns is ensuring the responsible and ethical use of AI in decision-making processes that directly impact patient care and well-being. There is a need to address biases in AI algorithms that could result in unequal treatment or disparities in healthcare outcomes.

In addition, ethics are not the same around the globe. The ethics built into an AI system depend entirely upon the perspective of the developer of the algorithm itself. What is ethical in one part of the world may not be ethical in other parts of the world based upon culture, worldviews, and, unfortunately, political agendas. It is crucial to establish clear guidelines and regulatory frameworks that govern the design, deployment, and monitoring of AI in healthcare to ensure that these technologies uphold ethical standards and prioritize patient safety and autonomy.

What bias?

Bias is the most challenging topic of all. As noted by NIST and others, AI bias is already impacting data integrity in biomedical research, prompting increasing concern and calls for standards to govern existing use cases.

AI systems are susceptible to inheriting biases present in the data used for training, which can result in discriminatory or unfair outcomes. Bias can arise from various sources, including data collection practices, imbalances in data representation, or underlying societal prejudices. Bias can also develop through the underrepresentation of certain groups in the data. We all have biases, whether we recognize them or not; they are based on our culture, our learning, and the experiences we have had since the beginning of our existence. Producing unbiased AI is theoretically impossible. However, producing AI that is minimally biased is possible and realistic.

Developing AI with a limited and reasonable amount of bias requires careful data selection, preprocessing, and algorithm design to mitigate potential disparities, along with validation of the output.
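As one concrete example of the data-selection step, the sketch below (with hypothetical records and a hypothetical 10% threshold) audits how well each subgroup is represented before a model is trained:

```python
# A minimal sketch of one bias check named above: auditing how well each
# subgroup is represented in the training data before a model is built.
# The records and the 10% threshold are hypothetical.
from collections import Counter

training_records = [
    {"age_group": "18-40", "outcome": 1},
    {"age_group": "41-65", "outcome": 0},
    {"age_group": "65+",   "outcome": 1},
    {"age_group": "41-65", "outcome": 1},
]

counts = Counter(record["age_group"] for record in training_records)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.10 else ""
    print(f"{group}: {n} records ({share:.0%}){flag}")
```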

Patient Safety

At the heart of healthcare delivery is the principle of doing no harm to those under care. The possibility for AI to create harmful scenarios exists, especially with tools like generative AI. As one begins to think about engaging AI within healthcare and life sciences, our recommendation is to start in a place where AI can do no harm: develop algorithms and concepts with the support of AI where things like model drift, hallucination, and inconsistent results will not impact patient safety. Although this approach to AI may not create headlines or provide the flash some are looking for, it is a safe way to start the AI journey.
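As one example of what such a safe starting point might look like, the sketch below (with hypothetical scores) uses SciPy's two-sample Kolmogorov-Smirnov test to flag possible model drift so outputs can be routed for human review before they could ever affect a patient:

```python
# A minimal sketch of watching for model drift in a low-risk setting, using
# SciPy's two-sample Kolmogorov-Smirnov test to compare recent model scores
# against the scores recorded at validation time. All data are hypothetical.
from scipy.stats import ks_2samp

validation_scores = [0.62, 0.71, 0.55, 0.68, 0.73, 0.59, 0.66, 0.70]  # frozen baseline
recent_scores     = [0.81, 0.88, 0.79, 0.85, 0.90, 0.83, 0.86, 0.84]  # last week's outputs

stat, p_value = ks_2samp(validation_scores, recent_scores)
if p_value < 0.05:
    print(f"Possible drift detected (KS={stat:.2f}, p={p_value:.3f}); "
          "route outputs for human review and consider revalidation.")
```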

Recommendation: Take a Circumspect Approach to AI in Healthcare and Life Sciences

All organizations must have a clear and comprehensive implementation strategy with prioritized AI requirements across every department and data source. This includes determining data access policies and governance models, considering the implications of monetizing data, and prioritizing patient safety and improved outcomes. By taking a circumspect approach, healthcare organizations can mitigate the potential pitfalls of moving too quickly, ensure ethical and responsible use of AI, and maximize the positive impact of these technologies on patient care and medical advancements.

Next Steps: Engage Dell Technologies for Expert Guidance and Innovative Solutions

Dell Technologies recently announced its new products at Dell Technologies World 2023 in Las Vegas, including our AI solutions. Dell’s experienced and knowledgeable healthcare advisors have a deep understanding of the healthcare and life sciences industries and of these new technologies, and they are here to help. Our announced solutions allow organizations to explore AI without becoming the product and to maintain ongoing ownership of their data. In short, as with any new development, organizations should monitor and pay close attention to the impacts of generative AI. Then, continually test, execute, and test again.

Conclusion:

The benefits of AI in healthcare outweigh the negatives, but that does not mean organizations should jump in without careful consideration. Initiating the AI conversation and creating a concrete action plan are essential first steps; reach out to the Dell team so we can help you navigate these challenging conversations.

Dell Technologies can help by working with healthcare, clinical, research, operational, and IT teams to create and codify action plans that establish robust governance frameworks and ensure data integrity – ultimately protecting patient safety and privacy. Pandora’s box has been opened, and we cannot close the lid. We can, however, adapt conscientiously to this new technology to enhance healthcare delivery, advance medical research, and improve patient outcomes. For further discussion around AI in healthcare, please consult with your local Dell Technologies healthcare field director, who can provide information on platforms that can be used to support your AI development.

About Steven Lazer

Steven is the Global Healthcare and Life Sciences CTO for Dell Technologies. He brings robust health IT competencies and management strategies to healthcare organizations, ensuring successful healthcare IT solution delivery. He drives technical strategy and solutions development for healthcare and ISV technical relationships, including joint solutions R&D, technical advisory, and technical escalation processes. Steven is part of one of the strongest healthcare practices in the technology industry, with a heritage of more than 30 years building solutions around the globe with clinical ISV partners and providing essential technology infrastructure to hospitals of all sizes.

About Alex Long

Alex Long is the Head of Life Sciences Sales Strategy at Dell Technologies. He is a seasoned executive with an impressive background in driving growth and innovation in the life sciences and healthcare sectors. With a wealth of experience in sales strategy, business development, and industry leadership, Alex has consistently delivered outstanding results and spearheaded transformative initiatives. In his current role, Alex is pivotal in delivering new solutions to the life sciences and healthcare verticals. Before his tenure at Dell Technologies, Alex played a significant role in the growth and success of Impinj. As a key stakeholder in the company’s IPO process, Alex provided valuable analysis, reporting, and sales strategy support, further solidifying his life sciences and healthcare expertise.

About Michael Giannopoulos

Michael K. Giannopoulos serves as the Dell Technologies Healthcare Global CISO and CTO for the Americas. He is also the Federal Healthcare Director for Dell Technologies. His goal is to help organizations realize measurable digital transformation across the healthcare delivery continuum. His experience in advanced systems design, from the edge to the datacenter to the private cloud and out to the public cloud, coupled with his extensive operational experience within organizations spanning many US states, tens of thousands of acute care beds, and millions of lives under care, makes Michael a valuable resource in delivering true digital healthcare transformation that is not only executable but also sustainable in an ever-changing world. Michael has an extensive background in the healthcare, technology, and security sectors, both locally in New England (Boston based) and at the national level within multi-state, multi-jurisdiction IDNs.

Dell Technologies is a proud sponsor of Healthcare Scene.
