When incorporating a new software package at your life sciences organization, it is important not only to validate the software itself for GxP compliance, but also to qualify the underlying infrastructure.
Over the past few years, more and more life science companies have begun to transition these essential services to the cloud, moving to what is known as an Infrastructure as a Service (IaaS) model. Besides the commonly known cloud advantages of flexibility, scalability, cost efficiency, and security, this move can also dramatically decrease qualification efforts.
This is even truer when an Infrastructure as Code (IaC) strategy is employed.
Think of it like this: Instead of a city planner ordering vital infrastructure such as water pipes as they see fit, an algorithm can accurately predict from data points where water mains need to be replaced, and places orders for supplies automatically.
Infrastructure as Code (IaC) is a type of deployment where infrastructure is provisioned and managed through code, in an automated way, rather than being controlled by individuals via a manual process. IaC is especially useful when many instances—or virtual machines (VMs)—need to be provisioned on a regular basis.
This code might be responsible for the creation of your whole environment, a part of it, or just certain VMs. IaC can also be parameterized: by adding parameters to your IaC, you create code that allows for small variations. A parameter could be, for instance, the region of deployment. Then, when you run the code, the script prompts you to select the desired value, letting you choose between, for example, the West Europe and East Europe regions.
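The parameterization idea can be sketched in a few lines of Python. This is a minimal, illustrative example, not a real cloud SDK: the function name, the region identifiers, and the VM size are all assumptions standing in for whatever your IaC tooling actually exposes.

```python
# Illustrative sketch of parameterized IaC (hypothetical API, not a real SDK).
# The deployment region is a parameter, so one qualified script covers
# small, pre-approved variations instead of one script per environment.

ALLOWED_REGIONS = {"westeurope", "easteurope"}  # region names are illustrative

def build_vm_spec(name: str, region: str, size: str = "Standard_B2s") -> dict:
    """Return a declarative VM specification for the given region.

    Rejecting out-of-scope parameters up front keeps every run of the
    script inside the boundaries that were qualified.
    """
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"Region {region!r} is not in the approved list")
    return {"name": name, "region": region, "size": size}

if __name__ == "__main__":
    # One script, two environments: only the parameter changes.
    print(build_vm_spec("lab-vm-01", "westeurope"))
    print(build_vm_spec("lab-vm-02", "easteurope"))
```

The design point is that the variation lives in a small, checkable parameter rather than in a separately written (and separately qualified) script.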
Both Microsoft Azure and Amazon Web Services support IaC, and IaC can handle every aspect of a cloud server installation within those platforms. This means, generally, that as cloud resources are needed, they are accurately provisioned and incorporated into your existing infrastructure, without the limitations and variability that come with manual processes.
Now that you understand how IaC speeds up the process of building a cloud infrastructure, you may be wondering how a computer-controlled process, one that takes some control out of the hands of humans, improves qualification efforts.
As we’re all aware, the IT infrastructure supporting your business needs to be qualified and, depending on the size of your organization, the amount of infrastructure to be qualified can be substantial. The qualification effort doesn’t grow linearly, either; with more to plan and execute, it can grow almost exponentially, especially if you have to manually qualify each database, volume, virtual machine, and so on.
With an IaC-based system, this effort can be minimized by removing a large amount of manual work and by following a “building block” approach, where we focus on qualifying the different types of components in the IaaS offering along with the IaC provisioning process.
When working in the cloud, it can be very easy to test certain infrastructure architectures, as you are only paying for what you use, and can easily abandon unused environments. This makes it possible to extensively test your IaC in a temporary test environment, prior to the building of the final production environment.
After testing is complete and the IaC operates according to your needs, it can be stored in a controlled environment, such as a secured code repository like Bitbucket, as you would with any other type of controlled record.
The next step is to actually qualify the code.
This is typically done in a validation/test environment, where the code is run and outcomes are verified by explicitly checking all critical components, ensuring they are deployed and configured correctly. Once this is completed, logs are reviewed to confirm that the correct IaC file was used to control the process and that the parameters were set correctly according to the prior specifications.
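The outcome-verification step described above can be partially automated. The sketch below is a hedged illustration, assuming a hypothetical inventory returned by your cloud provider's API and a specification written during qualification planning; neither data structure reflects any particular tool.

```python
# Illustrative sketch: verifying a deployment against its specification.
# The inventory dict stands in for what your cloud provider's API would
# report; EXPECTED stands in for the prior specification. Both are
# assumptions for the sake of the example.

EXPECTED = {
    "vm-app-01": {"region": "westeurope", "size": "Standard_B2s"},
    "db-main":   {"region": "westeurope", "tier": "GeneralPurpose"},
}

def verify_deployment(inventory: dict, expected: dict) -> list:
    """Return a list of discrepancies; an empty list means all critical
    components are deployed and configured as specified."""
    issues = []
    for name, spec in expected.items():
        actual = inventory.get(name)
        if actual is None:
            issues.append(f"{name}: missing from deployment")
            continue
        for key, value in spec.items():
            if actual.get(key) != value:
                issues.append(f"{name}: {key}={actual.get(key)!r}, expected {value!r}")
    return issues
```

A script like this produces an objective, reviewable record of the check, which sits naturally alongside the log review confirming that the correct IaC file and parameters were used.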
Any high-risk items within this environment, defined through a prior risk assessment, should then undergo additional functional testing. Once this has been completed, we can safely say that this particular code is qualified and declare the environment qualified.
For the production environment, an on-premises set-up would normally require repeating the same process used for the test environment. In a cloud deployment, however, once we know the code works, we only need to verify that the code ran correctly. If the parameters used are in scope, the environment can be deemed qualified, drastically reducing the work for the production environment!
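The production-run check reduces to two questions: was the qualified version of the script used, and were the parameters in scope? A minimal sketch, assuming a hypothetical commit hash for the qualified IaC file and an illustrative parameter scope table:

```python
# Sketch: confirming a production run used the qualified script version and
# in-scope parameters. QUALIFIED_SHA and PARAM_SCOPE are illustrative
# assumptions; in practice they would come from your change control records.

QUALIFIED_SHA = "3f9c2ab"  # hypothetical hash of the qualified IaC file
PARAM_SCOPE = {
    "region": {"westeurope", "easteurope"},
    "env": {"test", "prod"},
}

def run_is_qualified(script_sha: str, params: dict) -> bool:
    """A run inherits qualified status only if it executed the qualified
    script version with every parameter inside the approved scope."""
    if script_sha != QUALIFIED_SHA:
        return False
    return all(params.get(key) in allowed for key, allowed in PARAM_SCOPE.items())
```

Because the heavy verification was done once in the test environment, each subsequent deployment only needs this lightweight conformance check.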
This effort is not only valuable for deploying the production area; it can also be reused for deployments of extra environments with the same architecture (again checking that the script used is controlled and qualified and that the parameters are correct). When you have made scripts for individual VMs or parts of your infrastructure, this also helps with scaling up and down, making sure that the great flexibility of the cloud isn’t impacted by the constraints of working in a regulated environment.
The cloud has been seen by some as a potential “wild west” in terms of validation, with its ever-changing nature hard to nail down and therefore validation hard to maintain. However, when looking at the benefits offered by virtualization of infrastructure known as IaaS and automation of provisioning through IaC, there may actually be a reduction in the effort required to validate systems.
While knowledge of regulations remains key to any validation effort, a new skillset is gaining traction. A common refrain in many industries today is that the strength of your organization is highly dependent on the strength of your coders. As more companies adopt a cloud-based infrastructure and define themselves on the power the cloud offers their efforts, it’s critical for life science companies to stay on top of this by marrying regulatory knowledge and coding abilities.