The U.S. Food and Drug Administration (FDA) has defined the requirements for validation of life science products, primarily set out in the 21 CFR Parts 820, 210 and 211 regulations. These regulations call for a comprehensive testing process in which all systems are thoroughly examined and tested under a criticality-based, scientific approach.
Recent guidance and initiatives, including the FDA's Process Validation: General Principles and Practices and ICH Q11, Development and Manufacture of Drug Substances, have provided a streamlined, risk-based approach using an updated life cycle management method.
Under this scenario, a new definition of validation has emerged, best described by the FDA as “the collection and evaluation of data, from the process design stage through production, which establishes scientific evidence that a process is capable of consistently delivering quality products.” However, this contrasts with the classical definition in the device regulations under 21 CFR 820.75, which states: “… the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures.”
What this means is that a risk-based life cycle management approach with relevant scientific rationale and evidence can be used in lieu of a traditional top-down comprehensive approach. Many of us remember the golden rule of validation: testing in triplicate, an output of this classic approach. Regardless of the complexity or simplicity of the system, we always applied the test in triplicate.
Essentially, what the FDA and ICH are now saying is that you can justify a different test plan with a risk-based approach. The result is a streamlined validation process and potentially fewer steps to production. This article presents an approach to risk management that I have used to successfully meet the updated FDA and ICH guidances.
Whether validating equipment, processes or software, I recommend writing a user requirement specification (URS). This provides a starting point with inputs and traceability to ensure that basic functions are established. These basic functions will be used later for assessing risks. Medical device software validation also typically includes a functional requirement specification (FRS) that follows the URS in a logical, traceable way. The FRS shows how the configured software will meet the requirements of the URS.
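To make the traceability idea concrete, here is a minimal sketch in Python; the Requirement class, its field names and the example requirement IDs are hypothetical illustrations, not part of any prescribed template:

    # Minimal sketch of URS -> FRS traceability (hypothetical field names).
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        req_id: str                                     # e.g. "URS-001" or "FRS-001"
        description: str
        traces_to: list = field(default_factory=list)   # downstream items

    urs_001 = Requirement("URS-001", "System shall record batch release decisions")
    frs_001 = Requirement("FRS-001", "Release decision is captured via electronic signature")
    urs_001.traces_to.append(frs_001)

    # A simple trace check: every URS item must have at least one downstream FRS item.
    for req in (urs_001,):
        assert req.traces_to, f"{req.req_id} has no downstream trace"

The same links can later be extended from FRS items to test cases, so each requirement carries an auditable path from user need to executed test.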
A risk assessment follows the URS and FRS processes. However, before applying a risk assessment to the functional processes developed in the URS and FRS, use the ISO 14971 risk management methodologies to establish the acceptance criteria and risk levels. The standard risk matrix (see graphic below) illustrates a three-level system with low (green), medium (yellow) and high (red) risk categories characterized as follows:
Your organization must develop (and justify) your own criteria. This example matrix defines the categories as follows:
After developing the acceptance criteria, complete the following tasks:
After determining the criticality of the individual functional items from the URS, you can assemble a validation approach for each functional category. The following are types of validations that can be used with a risk-based process.
These criteria are then applied to the table of functional items. The output is the validation test plan described below.
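As an illustration of how such a matrix can be applied to the functional items, the following Python sketch scores each item for severity and probability and maps the resulting risk level to a test rigor; the cut-off scores, rigor descriptions and example items are assumptions for demonstration, not prescribed values:

    # Illustrative risk matrix: severity x probability -> risk level -> test rigor.
    # The cut-offs and rigor descriptions are example values only; your organization
    # must define and justify its own criteria (per ISO 14971).
    SEVERITY = {"low": 1, "medium": 2, "high": 3}
    PROBABILITY = {"low": 1, "medium": 2, "high": 3}

    def risk_level(severity: str, probability: str) -> str:
        score = SEVERITY[severity] * PROBABILITY[probability]
        if score >= 6:
            return "high"      # red
        if score >= 3:
            return "medium"    # yellow
        return "low"           # green

    TEST_RIGOR = {
        "high": "challenge/negative-case testing with documented objective evidence",
        "medium": "positive functional testing",
        "low": "verification against vendor documentation",
    }

    # Hypothetical functional items taken from a URS, each scored for risk.
    functional_items = [
        ("Electronic signature on batch release", "high", "medium"),
        ("Report header formatting", "low", "low"),
    ]

    for name, sev, prob in functional_items:
        level = risk_level(sev, prob)
        print(f"{name}: risk={level}, approach={TEST_RIGOR[level]}")

Driving the classification from the same table of functional items keeps the resulting test plan traceable back to the URS.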
According to the new guidance for process validation, the collection and evaluation of data, from the process design stage through production, establishes scientific evidence that a process is capable of consistently delivering quality products. This has resulted in validation being split into three stages:
Manufacturers must prove that the product can be manufactured according to the quality attributes before a batch is placed on the market. For this purpose, data from laboratory, scale-up and pilot-scale studies should be used. The data should cover conditions involving a range of process variations. The manufacturer must:
Qualification activities that are not based on a sound understanding of the process cannot ensure a safe product. The process must be maintained during routine operations, accounting for materials, equipment, environment, personnel and changes in the manufacturing procedures.
The process design stage involves building and capturing process knowledge. The manufacturing process should be defined and tested, and then reflected in the manufacturing and testing documentation. Earlier development stages do not need to be conducted under current good manufacturing practices (cGMP). Still, the basis should be sound scientific methods and principles, including good documentation practice (GDP).
There is no regulatory expectation for the process to be developed and tested until it fails. However, combinations of conditions that involve high process risk should be known. To achieve this level of process understanding, implementing Design of Experiments (DOE) in connection with risk analysis tools is recommended. Other methods, such as classical laboratory tests, are also considered acceptable. It is also essential to adequately document the process understanding and its supporting rationale.
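As a minimal sketch of what such a DOE set-up might look like, the following Python snippet enumerates a two-level full factorial design; the factor names and levels are invented for illustration:

    # Hypothetical two-level full factorial design for three process parameters.
    # itertools.product enumerates every combination, so the high-risk corners of
    # the design space are exercised rather than only the nominal set point.
    from itertools import product

    factors = {
        "granulation_time_min": (5, 15),
        "inlet_temp_C": (50, 70),
        "spray_rate_g_min": (20, 40),
    }

    design = list(product(*factors.values()))
    for run_no, settings in enumerate(design, start=1):
        run = dict(zip(factors, settings))
        print(f"Run {run_no}: {run}")

    # 2^3 = 8 runs; responses (e.g., assay or dissolution) would be measured for
    # each run and analyzed for main effects and interactions.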
This stage shows that the process design is suitable for consistently manufacturing commercial batches. It contains two steps:
This stage encompasses the activities that are currently summarized under process validation. Qualified equipment is used to demonstrate that the process can create a product in conformity with the specifications. The terms design qualification (DQ), installation qualification (IQ) and operational qualification (OQ) are no longer described as part of the qualification. However, they are still conceptually used within the validation plan, which should cover these items:
The final stage is intended to keep the validated state of the process current during routine production. The manufacturer is required to establish a system to detect unplanned process variations. Data should be evaluated accordingly (in-process), so the process does not get out of control. The data must be statistically trended, and the analysis must be done by a qualified person.
These evaluations are meant to be reviewed by the quality unit in order to detect changes in the process (i.e., alert limits) at an early stage and to allow implementation of process improvements. Still, even in a well-developed process, unexpected process changes can occur.
In this case, the guidance recommends that the manufacturer use quantitative, statistical methods whenever feasible to identify and investigate the root cause. At the beginning of routine production, the guidance recommends that the scope and frequency of monitoring activities and sampling be the same as in the process qualification stage until enough data have been collected.
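To illustrate the kind of statistical trending the guidance describes, the following Python sketch applies a simple three-sigma limit check to a series of batch results (a simplification of a formal control chart); the measurement values are invented for illustration:

    # Illustrative trending of an in-process measurement against 3-sigma limits.
    # The values are made up; in practice they come from routine batch records.
    from statistics import mean, stdev

    measurements = [99.8, 100.1, 100.4, 99.9, 100.2, 101.9, 100.0, 100.3]

    center = mean(measurements)
    sigma = stdev(measurements)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    for batch_no, value in enumerate(measurements, start=1):
        flag = "investigate" if not (lcl <= value <= ucl) else "ok"
        print(f"Batch {batch_no}: {value} ({flag})")
    print(f"Center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

Additional run rules, such as a trend toward an alert limit, can be layered on top of the simple limit check so drift is detected before a value is formally out of control.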
Analysis from complaints, out-of-specification (OOS) results, deviations and non-conformances can also provide data and trends regarding process variability. Employees on the production line and in quality assurance are encouraged to give feedback on the process performance. It’s also helpful to track operator errors to determine if training measures are appropriate.
Finally, the data sets can be used to develop process improvements. However, changes may only be implemented in a structured way, with final approval from quality assurance and, potentially, re-validation in the process qualification stage.