The U.S. Food and Drug Administration (FDA), the International Society for Pharmaceutical Engineering (ISPE), the International Organization for Standardization (ISO), and other regulatory and industry bodies have advocated a risk-based approach to computer software assurance (CSA) for some years now. Real-world experience has confirmed most, if not all, of the promoted benefits of switching to the methodology detailed in the new guidance: risk-based validation is far less burdensome and less costly than the previously accepted methodology, and it makes it much easier to adopt cloud-based systems, where routine software updates are the norm.
The current FDA regulations for computer systems are defined in 21 CFR Part 11, established in 1997, with specific guidance on those regulations issued in 2002 and 2003. Much has changed since then in software development, testing methodologies, and the widespread adoption of cloud-based applications. The validation methodologies developed to meet the original requirements were so burdensome that many companies stayed with paper-based systems rather than pursue the cost, productivity, quality, and safety benefits of modern computer systems. Instead of helping to ensure product quality and safety (the guiding principle behind them), the older guidelines were in many cases actually preventing improvement. After years of study, the FDA and other regulating bodies proposed a new, less burdensome risk-based approach to resolve that dilemma.
CSA, as advocated by the FDA and others, should now be a risk-based approach that ensures a computer system is truly fit for its intended use. At the heart of risk-based validation is determining the appropriate level of effort and activities for establishing confidence in the software. This determination should consider the risk to product safety and quality should the software fail to perform as intended, as well as the impact on the records that support a product’s quality and safe use. By focusing more effort on higher risks, the burden of validation becomes only what is needed to adequately address each level of risk, not a “validate everything” approach.
In practical terms, a risk-based validation usually involves the following steps:
An important aspect of the risk-based approach is to leverage all the validation activities that a software supplier has already done. As part of a supplier’s evaluation, it can be especially important to look at their development process as well as their software quality model. Best-in-class software suppliers use quality models such as those defined in ISO/IEC 25010 or ones that are similar. With a “Trusted Supplier” assessment by a company’s own IT QA auditors in hand, the validation plan can take credit for the work already done and not re-invent the wheel.
Mapping out the business processes and the software’s intended use is typically where the most time is spent on a validation. It’s crucial to understand how the software will be used in order to comprehend how it could affect product safety, product quality, or the important records associated with producing the product.
The validation plan is the roadmap for the validation project. It details the methodology (risk-based) and processes to be used, as well as normal project elements: roles and responsibilities, the scope of the validation, the overall system landscape and interfaces to other software, the proposed test methodology, test-results tracking and traceability, how system configuration will be handled, and whether data migration will be done.
Once the processes and functions that will be used have been mapped out, the next step is to determine whether each one is relevant to ensuring product safety, product quality, or the integrity and retention of the important records regarding the production of the product. This is often referred to as GMP (Good Manufacturing Practice) relevance or, more broadly, GxP relevance, where the x can refer not only to manufacturing but also to distribution (GDP), clinical (GCP), laboratory (GLP), or other regulated business cases outside actual manufacturing.
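The GxP-relevance screen can be pictured as a simple decision helper. The following is an illustrative sketch only; the field names and example system functions are assumptions, not terms from any regulation or guidance.

```python
from dataclasses import dataclass

@dataclass
class SystemFunction:
    # Illustrative fields: each flag captures one GxP-relevance criterion
    name: str
    affects_product_safety: bool = False
    affects_product_quality: bool = False
    creates_regulated_records: bool = False

def is_gxp_relevant(fn: SystemFunction) -> bool:
    """A function is GxP-relevant if it can impact product safety, product
    quality, or the integrity/retention of regulated records."""
    return (fn.affects_product_safety
            or fn.affects_product_quality
            or fn.creates_regulated_records)

# Hypothetical examples of functions being screened
functions = [
    SystemFunction("batch record approval", affects_product_quality=True,
                   creates_regulated_records=True),
    SystemFunction("dashboard color theme"),
]
print([f.name for f in functions if is_gxp_relevant(f)])
# → ['batch record approval']
```

Functions that fail the screen can be excluded from further risk analysis, which is where much of the reduction in validation effort comes from.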
The requirements specifications are usually referred to as the User Requirements Specification (URS) and the Functional Requirements Specification (FRS). They describe the requirements necessary to meet the needs of the company and the associated regulations, and they form the basis of the testing requirements when the risk level indicates testing is needed.
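Traceability from user requirements through functional requirements to test cases is usually maintained in a traceability matrix. A minimal sketch of that linkage, with invented URS/FRS/test-case IDs, might look like:

```python
# Hypothetical traceability data: all IDs below are invented for illustration
urs_to_frs = {
    "URS-001": ["FRS-001", "FRS-002"],
    "URS-002": ["FRS-003"],
}
frs_to_tests = {
    "FRS-001": ["TC-010"],
    "FRS-002": ["TC-011", "TC-012"],
    "FRS-003": [],  # risk level indicated no testing needed here
}

def trace(urs_id: str) -> list[str]:
    """Collect every test case that traces back to a user requirement."""
    tests = []
    for frs_id in urs_to_frs.get(urs_id, []):
        tests.extend(frs_to_tests.get(frs_id, []))
    return tests

print(trace("URS-001"))  # → ['TC-010', 'TC-011', 'TC-012']
```

Keeping this mapping explicit makes it easy to show an auditor which requirements are covered by which evidence, and which were deliberately left untested on risk grounds.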
The risk analysis step is where the risk level is established and, based on that risk, the degree of testing, if any, required to minimize it. The analysis should take into account work already done by the software supplier, the detectability of errors in subsequent steps, standard operating procedures (SOPs) and controls external to the application that mitigate risk, the impact of an error on product quality and safety, the probability of an error occurring, and other relevant considerations. Taking advantage of every aspect of risk, and of the steps already taken to minimize it, allows the focus to be placed on the higher risks, where it really matters, and reduces the amount of overall testing required.
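One common way to combine impact, probability, and detectability is a simple multiplicative risk score. The scales and thresholds below are assumptions chosen for illustration, not values prescribed by FDA or GAMP guidance:

```python
def risk_level(impact: int, probability: int, detectability: int) -> str:
    """Classify risk from three 1–3 factors (illustrative model).

    impact, probability: 1 (low) .. 3 (high)
    detectability: 1 (errors easily detected downstream) .. 3 (hard to detect)
    """
    score = impact * probability * detectability  # ranges from 1 to 27
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# High impact, moderate probability, hard to detect → high risk
print(risk_level(impact=3, probability=2, detectability=3))  # → high
# Low impact, moderate probability, easily detected → low risk
print(risk_level(impact=1, probability=2, detectability=1))  # → low
```

Note how detectability pulls the score down: a potential error that would be caught by a downstream SOP or review step warrants less up-front testing than one that would propagate silently.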
Once the requirements and risks are known, creating the test plan usually becomes straightforward. This is where the level of testing is specified, commensurate with the level of risk. Testing can range from robust, fully scripted testing, through hybrid, limited-scripted testing, to unscripted, ad-hoc, error-guessing, and exploratory testing, or to no testing at all when the validation has already been covered by the supplier or through other measures.
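The mapping from risk level to testing rigor can then be expressed as a simple lookup. The rigor ladder mirrors the one described above; the dictionary encoding itself is a hypothetical sketch:

```python
# Illustrative mapping from risk level to testing approach (an assumption,
# though the rigor ladder itself follows the CSA risk-based model)
TEST_APPROACH = {
    "high": "robust, fully scripted testing with documented evidence",
    "medium": "hybrid, limited-scripted testing",
    "low": "unscripted, ad-hoc, or exploratory testing",
    "none": "no additional testing (supplier validation leveraged)",
}

def select_test_approach(risk: str) -> str:
    # Default to the most rigorous approach for unknown risk levels
    return TEST_APPROACH.get(risk, TEST_APPROACH["high"])

print(select_test_approach("medium"))
# → hybrid, limited-scripted testing
```

Defaulting unknown inputs to the most rigorous approach is a deliberately conservative choice: an unclassified function should never silently receive less testing than a high-risk one.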
With the test plan in place, the final step is to execute the required risk-based testing and write the final validation report.
When doing risk-based validation, important best practices include:
Switching to a risk-based approach to computer software validation provides much-needed flexibility and agility in the validation process, which in turn encourages wider use of computer systems and their associated benefits. The entire validation and risk assessment process can now be far more streamlined and cost-effective, and it is much more practical for companies to use the new generation of cloud-based applications. The routine updates and upgrades associated with cloud-based systems previously made keeping such software in a validated state too time-consuming, difficult, and expensive to be attractive. That has changed dramatically with the new approach: much of the software from best-of-breed developers can now be supplied in a “validation ready” state and easily maintained in a validated state as updates and upgrades occur. Risk-based validation has even motivated some eQMS providers to develop validation software for their product offerings, lowering still further the barriers to realizing the benefits of today’s best-in-class enterprise software. In short, risk-based validation is making the adoption of a much broader range of computer software systems practical and, through their use, improving product quality.