
How AI Tools Will Transform Quality Management in the Life Sciences, Part 1



U.S. Food and Drug Administration (FDA) officials and leaders in the pharma and medical device spaces agree that artificial intelligence (AI) tools could enable a step change in quality management in those industries. Areas that could be affected include supply chain management, lot release, manufacturing, compliance operations, clinical trial endpoints and drug discovery, among others.

AI has drawn the attention of the pharma industry recently based on impressive successes in other industries, such as machines performing face recognition, driving vehicles, competing at master levels in chess and composing music. To date, the primary applications of AI in pharma have been in R&D and clinical settings. These include predicting Alzheimer’s disease, diagnosing breast cancer, and precision and predictive medicine applications.

The Xavier Health Artificial Intelligence Initiative brought together key players and experts from industry, academia and government in August 2017 to explore the possibilities and potential roadblocks. At the FDA/Xavier PharmaLink conference in March 2018 at Xavier University in Cincinnati, Ohio, two working groups that are part of the initiative gave preliminary readouts of their progress to date. More in-depth summaries will be provided at the Xavier AI Summit in August 2018, along with more detailed discussions of the use of AI in pharma and medical device companies.

The Xavier Health AI Initiative is working to expand the use of AI across the pharma and device industries. Its task is to identify ways to implement AI across quality operations, regulatory affairs, supply chain operations and manufacturing operations, augmenting human decisions with AI so they are better informed. The vision is to use AI to move the industry from reactive to proactive, to predictive and eventually to prescriptive, so that actions are right-first-time.

The intent is to increase patient safety by ensuring the consistency of product quality. The initiative aims to promote a move from traditional pharma techniques — such as plant audits and product sampling, which are snapshots in time — to continuous monitoring of huge amounts of GMP and non-GMP data to produce continuous product quality assurance.

What Is AI?

Simply put, AI is shorthand for a computer performing tasks in a way that equals or surpasses human capability. It makes use of varied methods such as knowledge bases, expert systems and machine learning. Using computer algorithms, AI can sift through large amounts of raw data looking for patterns and connections far more quickly and efficiently than a human could.
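
As a minimal illustration of that kind of pattern-sifting (a sketch only, assuming the scikit-learn library is available; the quality attributes and values are hypothetical), an off-the-shelf anomaly detector can scan hundreds of lot records in moments:

```python
# Sketch: flagging unusual lot records with an anomaly detector.
# Assumes scikit-learn; the three quality attributes are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history: 500 lots described by assay (%), moisture (%) and fill weight (g).
normal_lots = rng.normal(loc=[99.0, 1.2, 50.0], scale=[0.4, 0.1, 0.3], size=(500, 3))

# A handful of drifting lots mixed in.
odd_lots = rng.normal(loc=[97.5, 1.8, 49.0], scale=[0.4, 0.1, 0.3], size=(5, 3))
lots = np.vstack([normal_lots, odd_lots])

# The model learns what "typical" looks like and scores each record against it.
detector = IsolationForest(contamination=0.01, random_state=0).fit(lots)
flags = detector.predict(lots)  # -1 = anomalous, 1 = typical
print("Flagged lot indices:", np.where(flags == -1)[0])
```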

One AI subset, machine learning, covers algorithms that improve through exposure to data rather than through explicit programming. Many machine learning systems rely on neural networks, computer systems loosely modeled on the human brain. These networks perform multilevel probabilistic analysis, allowing computers to simulate, and perhaps expand on, how the human brain processes information.

Deep learning, a machine learning variant built on many-layered neural networks, breaks the solution to a complex problem into multiple stages, or layers. It examines data sets and discovers their underlying structure, with deeper layers refining the output of the previous ones. A mature network passes data forward through fully connected layers and propagates errors backward to adjust its weights.
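
To make the layering concrete, here is a minimal sketch using scikit-learn’s MLPClassifier (the toy data set and the two hidden-layer sizes are illustrative choices, not a recommended design):

```python
# Sketch: a small multilayer network in which each hidden layer
# refines the representation produced by the previous one.
# Assumes scikit-learn; data and layer sizes are illustrative.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two fully connected hidden layers; training runs the forward pass,
# then backpropagates errors to adjust the weights.
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print(f"Held-out accuracy: {net.score(X_test, y_test):.2f}")
```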

Are There Any Red Flags?

As these machines “learn,” the pathways they take to reach decisions change, so even the original programmers of the algorithms cannot trace how a given decision was made. This creates a “black box” that can be problematic for highly regulated industries, such as pharma, where the reasons for decisions and actions need to be documented.

“We are from an industry where we like to validate our processes — it is done once in one way, and we keep doing it that way,” Xavier Health Director Marla Phillips commented at the March conference. “With systems that continuously learn, the algorithm evolves, and it is not the same any more. How do you manage in this new world?”

In addition, since the logic of the decisions is not obvious, decisions the AI machines make might be questioned. The Xavier Health AI Initiative is focused on augmenting human decisions with more robust data and information. The credibility of the data source gives the end user confidence in the outcome.

The process of linking the input to the outcome is referred to as “explainability” and is another work stream Xavier is taking on. Oftentimes, the AI algorithm is considered intellectual property, but through explainability, the end user can know the inputs that led to the outcome. The AI is tested and trained using known inputs and known outcomes first to gain confidence in the algorithm.
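
As a simple illustration of that input-to-outcome testing (a sketch assuming scikit-learn; the model choice and feature names are hypothetical), permutation importance measures how much each known input actually drives the output:

```python
# Sketch: linking inputs to outcomes with permutation importance.
# Assumes scikit-learn; the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=4, n_informative=2,
                           random_state=0)
features = ["assay", "moisture", "hardness", "fill_weight"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each input in turn; the resulting accuracy drop shows how much
# the model's outcome depends on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>12}: {score:.3f}")
```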

This supervised learning is an important first step. Most of the industry is working at this level. However, the next step is unsupervised learning through deep neural networks. Whether supervised or unsupervised, the integrity of the linkage between inputs and the output must be maintained so humans can confidently augment their decisions through AI.
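
The distinction can be shown in a few lines (a sketch assuming scikit-learn and synthetic data): a supervised model trains against known outcomes, while an unsupervised one must find structure on its own, leaving a human to interpret what it found.

```python
# Sketch: supervised learning uses known labels; unsupervised learning
# must discover structure itself. Assumes scikit-learn; data are synthetic.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# Supervised: train against known outcomes, then check against them.
clf = LogisticRegression().fit(X, y)
print("Supervised accuracy vs. known labels:", clf.score(X, y))

# Unsupervised: no labels given; the algorithm groups the data itself,
# and a human must decide what the groups mean.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes found without labels:",
      [int((clusters == c).sum()) for c in (0, 1)])
```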

Moving forward, the culture organizations need to have when using AI must be considered. For example, Phillips asked, “What if you get information [from an AI tool] that says, ‘This product is going to fail, do not release it?’ It sure looks the same as it has the past 20 years when we have been releasing it. Are you going to tell your management, ‘Sorry, we have to discard this?’ How do you know this was going to fail? It might be right. We are in a very different decision-making situation and culture.”

A Cautionary Tale

Kumar Madurai, principal consultant and solutions delivery manager at Computer Task Group (CTG) and Xavier AI core team member, provided a cautionary tale regarding trust in the science behind AI-produced decisions, based on an experience with one of his clients.

“One client started off with the idea of having a group dedicated to data analytics. They started in a big-bang way,” Madurai said. “They integrated data from eight different systems. The thinking was that once it was all integrated and linked they could build the queries and the analytics on top of that.

“What happened in that case is they developed some tools. But the subject matter experts [SMEs] who were supposed to take action based on what the tool was telling them did not believe what it was telling them.”

The expertise, training and culture have to be in place to use AI effectively and confidently. The client decided to start over with simpler tools that offer more visualization and more capability for data exploration, targeting a system the SMEs will help run, use and better understand.

Evaluating Continuously Learning Systems

One of the two Xavier AI teams is tasked with exploring how to evaluate a continuously learning system (CLS) — one whose output at different test times may be different as the algorithm evolves. One of the two team leaders for this effort is Berkman Sahiner, FDA Center for Devices and Radiological Health (CDRH) senior biomedical research scientist.

FDA involvement is critical in this effort, as both industry and regulatory agencies need to trust the science behind the AI and evolve their understanding of AI together. In general, industry and regulators have been accustomed to traditional science and validated processes. More information on the team is available here.

The team’s goal is to identify how one can provide a reasonable level of confidence in the performance of a CLS in a way that minimizes risks to product quality and patient safety and maximizes the advantages of AI in advancing patient health. Stated differently, what types of tests or design processes would provide reasonable assurance to a user of a CLS that the output is reliable and usable?

As part of reaching this goal, the team is looking at a series of questions:

• Since a CLS is dynamic, the algorithm changes over time. What are the criteria for updating the algorithm? Is it entirely automated? Or is there human involvement?

• Because the performance changes over time, can we monitor the performance in the field to get a better understanding of where the CLS is going? (See the monitoring sketch after this list.)

• Understanding that users may be affected as the algorithm evolves and provides different responses, what is an effective way to communicate the changes?

• How do we ensure new data that leads to changes in the algorithm is of adequate quality?
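
As a rough sketch of the kind of field monitoring the second question points at (every name and threshold below is a hypothetical illustration, not a Xavier team recommendation), performance can be tracked on a rolling window, with any algorithm update gated behind human approval:

```python
# Sketch: monitoring a continuously learning system in the field and
# gating algorithm updates behind a human decision. All names and
# thresholds are hypothetical illustrations.
from collections import deque

WINDOW = 100           # recent predictions to track
ALERT_ACCURACY = 0.90  # hypothetical performance floor

recent_outcomes = deque(maxlen=WINDOW)

def record_outcome(prediction, actual):
    """Log whether a field prediction matched the eventual known outcome."""
    recent_outcomes.append(prediction == actual)

def field_accuracy():
    """Rolling accuracy over the most recent window."""
    return sum(recent_outcomes) / len(recent_outcomes) if recent_outcomes else 1.0

def maybe_update(retrain, human_approves):
    """Propose an update only when performance drifts, and only with
    human sign-off; automation never changes the algorithm on its own."""
    if field_accuracy() < ALERT_ACCURACY and human_approves():
        retrain()
        recent_outcomes.clear()  # start a fresh window for the new version
```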

The team is also examining the facets of explainability, security and privacy — exercises important for any new software tool. The CLS team is divided into two sub-teams: one to focus on the pharma aspects, the other on the medical device aspects. The primary deliverable for the CLS team is a white paper covering best practices for a CLS and how they can be adapted for health care. It is not intended to be a guidance or a standard. The intended audience is medical device software developers and CLS users.

In Part 2 of this blog post, learn about Xavier’s Continuous Product Quality Assurance efforts, the concept of a “data lake,” and how to determine if your company is ready for AI.



Jerry Chapman is a GMP consultant with nearly 40 years of experience in the pharmaceutical industry, with technical and leadership positions in product development, manufacturing, plant quality, site quality, corporate quality and quality systems. He designed and implemented a comprehensive “GMP Intelligence” process at Eli Lilly and again as a consultant at a top-five animal health firm in 2017. Chapman served as senior editor at International Pharmaceutical Quality (IPQ) for six years, where he stayed current with U.S. and international GMP and CMC topics and reported extensively on them. He now consults on GMP intelligence, quality knowledge management, the development and implementation of GMP training and other GMP topics. He is also a freelance writer. Visit Chapman's website here to learn more about Jerry and what he has to offer. His email is [email protected].

