In the life sciences, we are rapidly approaching a time when the use of artificial intelligence and machine learning (AI/ML) will be an industry norm.
Just last year, an international team demonstrated that the AI system it developed can detect breast cancer and, in some instances, outperform medical experts.1 The technology is not yet ready for clinical use, but its success illustrates the potential for AI to elevate our work as we strive to improve and save lives.
The impact of AI will be enormous, and the only thing that looms larger is our obligation to approach AI/ML responsibly.
Like the many health care practitioners who vow to do no harm, life sciences technical professionals must embrace equally lofty standards for AI/ML. As we set the bar, it must be higher than in other sectors, because lives depend on it.
Historically, people have tended to mistrust new technology, and their concerns rise alongside its complexity. While the challenges of helping people trust and adopt new technology are not unique, the intricacies of AI, and the implications of trusting it with a wealth of data to inform its learning process, are uncharted territory. Accenture has reported on the most common worries related to AI.
Responsible AI provides the building blocks for a foundation of trust, without which AI will never see widespread adoption. Because trust carries so much weight, technology leaders including Google and Microsoft have created responsible AI models to help guide organizations through effective ways to address these concerns. While there is not yet a global standard, most responsible AI models share qualities similar to those discussed here, and all aim to assuage fear by creating a deliberate framework that is human-centric, private, unbiased, and transparent.
Human-Centric: A common misconception is that AI will replace people, but humans will continue to have a critical role to play. As one example, Accenture’s responsible AI model calls for humans to monitor the performance of algorithms to safeguard against problems such as bias and unintended consequences.3
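To illustrate what that human oversight can look like in practice, here is a minimal sketch of a hypothetical triage step in which low-confidence model outputs are routed to a reviewer rather than acted on automatically. The threshold and record fields are illustrative assumptions, not part of Accenture’s model.

```python
# Minimal human-in-the-loop sketch: predictions the model is least confident
# about are queued for a person to review instead of being acted on directly.
# REVIEW_THRESHOLD and the record structure are illustrative assumptions.

REVIEW_THRESHOLD = 0.80  # assumed confidence cut-off; tune per use case


def triage_predictions(predictions):
    """Split model outputs into auto-accepted and human-review queues.

    `predictions` is a list of dicts such as
    {"record_id": "P-001", "label": "malignant", "confidence": 0.91}.
    """
    auto_accepted, needs_review = [], []
    for p in predictions:
        if p["confidence"] >= REVIEW_THRESHOLD:
            auto_accepted.append(p)
        else:
            needs_review.append(p)  # a person makes the final call
    return auto_accepted, needs_review


if __name__ == "__main__":
    sample = [
        {"record_id": "P-001", "label": "malignant", "confidence": 0.97},
        {"record_id": "P-002", "label": "benign", "confidence": 0.62},
    ]
    accepted, review = triage_predictions(sample)
    print(f"auto-accepted: {len(accepted)}, sent for human review: {len(review)}")
```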
Private: Data is necessary for effective ML, but individual privacy can never be compromised. In the life sciences, we often handle sensitive data, and our commitment to privacy and security must remain unwavering.
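One small, commonly used safeguard is pseudonymizing records before they reach an ML pipeline. The sketch below assumes a hypothetical record layout and uses salted hashing of direct identifiers; it is illustrative only and no substitute for a formal de-identification review under GDPR, HIPAA, or similar regimes.

```python
# Minimal pseudonymization sketch: direct identifiers are replaced with salted
# hashes so records can still be linked for training without exposing who they
# belong to. Field names and salt handling here are illustrative assumptions.

import hashlib
import os

# Keep the real salt out of source control (environment variable name is assumed).
SALT = os.environ.get("PSEUDONYM_SALT", "change-me")


def pseudonymize(record: dict, identifier_fields=("patient_id", "email")) -> dict:
    """Return a copy of the record with direct identifiers replaced by salted hashes."""
    safe = dict(record)
    for field in identifier_fields:
        if field in safe:
            digest = hashlib.sha256((SALT + str(safe[field])).encode("utf-8")).hexdigest()
            safe[field] = digest[:16]  # truncated hash used as a stable pseudonym
    return safe


if __name__ == "__main__":
    raw = {"patient_id": "12345", "email": "jane@example.org", "age": 54, "biomarker": 1.8}
    print(pseudonymize(raw))
```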
Unbiased: AI that draws on a biased data source will reach biased conclusions, and decisions based on skewed data can be particularly dangerous in our field. PwC notes that a component of responsible AI is becoming more aware of bias and taking corrective action to improve a system’s decision-making.4
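One concrete form of that awareness is measuring whether a model’s positive-prediction rate differs across subgroups before acting on its output. The sketch below uses the common “four-fifths” rule of thumb and made-up data; both are illustrative assumptions rather than PwC’s prescribed method.

```python
# Minimal bias-check sketch: compare positive-prediction rates across subgroups
# and flag large disparities for corrective action. Group labels, data, and the
# 0.8 threshold are illustrative assumptions.

from collections import defaultdict


def selection_rates(predictions, groups):
    """Positive-prediction rate per subgroup."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / counts[g] for g in counts}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest subgroup selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = selection_rates(preds, groups)
    ratio = disparate_impact_ratio(rates)
    print(rates, f"disparate impact ratio = {ratio:.2f}")
    if ratio < 0.8:  # common rule of thumb, not a legal standard
        print("Potential bias detected: review training data and features.")
```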
Transparent: Mistrust of technology can stem from not understanding how it operates, which is why an individual or the tool itself needs to be able to explain results and how a particular conclusion was reached. The Institute for Ethical AI & Machine Learning encourages people to develop tools “to continuously improve transparency and explainability of machine learning models where reasonable.”5
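Permutation feature importance is one widely used way to make a model more explainable: shuffle each input feature and observe how much performance drops, which indicates how heavily the model relies on it. The sketch below uses scikit-learn and a synthetic dataset purely for illustration; it is one technique among many, not a tool endorsed by the Institute.

```python
# Minimal explainability sketch using permutation feature importance.
# The synthetic dataset and model choice are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Higher mean importance means the model leans more on that feature for its conclusions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```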
Ultimately, these standards will be defined and described by regulatory bodies. For the moment, however, it is advantageous for the industry to establish its own agreed-upon framework and definitions of common terms to help inform the regulations that will eventually be put forth.
Global standards for responsible AI are a foregone conclusion. The remaining questions relate to scope, timing, and which regulatory body or bodies will issue guidance that compels the rest of the world to follow suit. Regarding the latter, current frontrunners are the U.S. and the European Union (EU).
In 2021 alone, both the U.S. and the EU made significant strides.
The European Commission’s legal framework is the first of its kind and calls for a risk-based approach to AI; the announcement of its release included a statement explaining that the Commission is seeking to establish global norms.8 With the creation of the General Data Protection Regulation (GDPR), the EU set the bar for data security, and it could deliver a repeat performance in responsible AI.
The proposal has received mixed reactions. Brookings noted that portions of the framework are sound, but that some topics, such as the fairness of algorithms, are not given adequate attention, and that the general consensus in Silicon Valley is that emerging technology should not be regulated.9 U.S. National Security Advisor Jake Sullivan expressed his support on social media, tweeting, “The United States welcomes the EU’s new initiatives on artificial intelligence. We will work with our friends and allies to foster trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens.”10
Before the proposal becomes law, the European Parliament and member states need to provide their input, and if GDPR is any indication, the rollout of AI regulations will be a lengthy process. GDPR was proposed in 2012, approved by Parliament four years later, and became law in 2018.11
As international guidance continues to take shape, numerous countries, including Canada, France, Russia, and China, have established their own regulations or standards. The U.S. is looking to do the same through a draft memorandum titled “Guidance for Regulation of Artificial Intelligence Applications,” which was issued in 2019, with comments requested the following year.12 Given the current pace, another iteration of the guidance can be expected soon.
Being on the cusp of transformation offers a unique vantage point from which to view our immediate and future needs. If responsible AI is to succeed, our immediate goal must be to continue developing reasonable regulatory guidance that is informed by the industry. Once established, the responsible AI framework and its regulations must be allowed to evolve with advancements in technology to secure ongoing success.
In the life sciences sector, one of the more pressing needs for long-term success is that we agree today that the parameters in place for responsible AI at any given time should serve only as a starting point. The nature of our work demands that we uphold higher standards.
For instance, we need to remain vigilant about both the positive and negative consequences of choices made on the basis of AI. This requires us to develop novel approaches to validating and verifying AI-driven choices and to ensuring that the data and models behind those decisions are of the highest quality. Only high ideals can ensure AI will be effective in elevating our work to improve and save lives.
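To make the data-quality point above concrete, a minimal pre-training gate might check completeness, plausible ranges, and duplicates before a model is allowed to learn from a dataset. The column names and thresholds in the sketch below are illustrative assumptions, not an established standard.

```python
# Minimal data-quality validation sketch run before model training.
# Column names ('age', 'biomarker') and thresholds are illustrative assumptions.

import pandas as pd


def validate_dataset(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues; an empty list means the checks passed."""
    issues = []

    # Completeness: flag columns with more than 5% missing values (assumed threshold).
    missing = df.isna().mean()
    for column, fraction in missing.items():
        if fraction > 0.05:
            issues.append(f"{column}: {fraction:.0%} missing values")

    # Plausibility: an example range check on an assumed 'age' column.
    if "age" in df.columns and not df["age"].between(0, 120).all():
        issues.append("age: values outside the plausible 0-120 range")

    # Uniqueness: duplicated records can silently skew what a model learns.
    duplicates = int(df.duplicated().sum())
    if duplicates:
        issues.append(f"{duplicates} duplicated rows")

    return issues


if __name__ == "__main__":
    sample = pd.DataFrame({"age": [34, 210, 54, 54], "biomarker": [1.2, None, 0.8, 0.8]})
    for issue in validate_dataset(sample):
        print("DATA QUALITY:", issue)
```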
Sources: