Computer system validation (CSV) can drag on for months, but it doesn’t have to. The U.S. Food and Drug Administration (FDA) has clearly indicated that a risk-based approach is compliant and significantly reduces the burden on life sciences companies. At MasterControl, we’ve developed patented validation software that helps our customers validate in minutes, and we have plans to make the process even faster in the future.
In a recent presentation on validation, MasterControl Vice President of Product Management Erin Wright discussed these tools and our upcoming validation process. During the Q&A at the end of the presentation, Wright answered the following validation questions.
A: The full validation package and documentation available from your vendor, such as the transfer operational qualification (TOQ) and transfer performance qualification (TPQ), can help you determine what you can leverage from the vendor and what you need to create on your own. The vendor can also tell you what changed in the latest upgrade and how those changes affect the rest of the software.
Wright contrasted two possible changes that present very different levels of risk: “Is this something that’s isolated to a certain screen, or did we completely change the search functionality that touches every piece of data within the software?”
Other questions to ask are, “What would I need to change in my validated state to take advantage of this new feature and functionality, and how interconnected is it to other portions of the software?”
A: With a risk-based approach, you need to document your reasoning and critical thinking. Wright suggested, “You’ll want to create your validation plan like you always do. And you’ll want to document what kind of risk model you are using to determine the risk. What variables are you considering? What are the definitions of those variables? Once you have that model defined and embedded into the validation plan … you’ll want to create that risk assessment that breaks it down into your intended use.”
Part of the documentation should include remediation for each level. For low risk, you might just do user acceptance testing (UAT). For medium risk, you might do guided exploratory testing with a checklist of things you want to specifically test. For high risk, you might have to do formal validation with test scripts.
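To make the idea concrete, the risk-to-remediation breakdown above can be captured as a simple lookup. This is a minimal illustrative sketch, not part of MasterControl's product; the names and structure are assumptions for the example.

```python
# Hypothetical sketch: map each risk level to the remediation activity
# described above (low -> UAT, medium -> guided exploratory testing,
# high -> formal validation with test scripts).
REMEDIATION_BY_RISK = {
    "low": "user acceptance testing (UAT)",
    "medium": "guided exploratory testing with a checklist",
    "high": "formal validation with test scripts",
}

def remediation_for(risk_level: str) -> str:
    """Return the documented testing activity for a given risk level."""
    return REMEDIATION_BY_RISK[risk_level.lower()]
```

Documenting the mapping once, in the validation plan, is what lets each individual risk assessment stay short: the assessment only has to assign a level, and the plan already says what testing that level requires.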
And of course, you’ll still need a final validation report.
A: As with everything else, it really depends on your intended use. If you’re using a Microsoft Excel spreadsheet to “calculate formulations and yields for recipes in batch records, absolutely you should validate that … because you’re using that to make business decisions that have a direct impact on product quality and patient safety.”
However, “If you’re using an Excel spreadsheet to track who’s been assigned to update which SOPs, [that’s] maybe a little less critical” and doesn’t require validation.
Not surprisingly, Wright suggests grounding risk-based validation in historical data. By looking at where you’ve had problems with the software in the past, you know where to focus your efforts.
“Every software has defects. It’s a matter of figuring out what those historic trends have been, what are some brittle areas of the code that you have experienced, and taking that into account as you are assessing the risk of those individual components.”
You may have noticed in the past that certain functionality tends to have more issues than other areas. Based on your experience, those areas would be considered higher risk for you, and the higher-risk areas warrant more targeted validation testing.
A: Wright points out that, even as we wait for the computer software assurance (CSA) guidance, the idea is far from new and comes from the FDA’s General Principles of Software Validation guidance document. “The FDA has been saying the same exact thing since 2002. This isn’t a new mindset. This is a new guidance to help people understand what they were saying in 2002.”
She suggests starting with a gap assessment. Look at where your organization is, what software systems you use that have a regulatory need to be validated, and which of those are homegrown and which are from third-party vendors.
Then document your intended usage. As a point of clarification, Wright said, “intended usage isn’t always ‘how I use it every day,’ it’s ‘how do I use the software’?” High-risk, important usage needs to be taken into account, even if it isn’t an everyday occurrence.
Next, figure out the risk variables that are important to you. At MasterControl, we use regulatory impact, changes to the validated state, and a subjective assessment. Then define each variable and how you’ll measure it.
Taking a risk-based approach cuts down on validation time considerably — and it’s compliant. The FDA has been trying to encourage this method of validation since 2002, and the upcoming CSA guidance is another way to emphasize that. There’s no need to wait for the guidance before implementing a risk-based approach at your organization.