Most TMF problems at inspection are not surprises. They are the accumulated result of decisions made, or not made, much earlier in the study. By the time an inspector is asking questions, the window to fix things has long since closed.
That was the throughline of the Season 4 premiere of The State of TMF, where Paul Carter, Founder and CEO of Montrium, and I spent an hour working through one of the most practically challenging moments in clinical operations: the TMF handoff from CRO to sponsor. We covered oversight during the study, the mechanics of transfer, archiving obligations, and where AI fits into all of it.
The questions that came in from attendees made clear this topic hits close to home for a lot of teams.
You can outsource the work. You cannot outsource the accountability.
There is an inherent tension in the CRO model that does not go away no matter how experienced your CRO is. The CRO manages the TMF day to day. The sponsor holds ultimate accountability. Those two things are not the same, and confusing them is where oversight gaps begin.
Paul framed it directly: "You're not delegating oversight fully. You still have the responsibility as a sponsor. And so it's really important that you understand the process and you agree with the process." Both Paul and I have worked at CROs earlier in our careers. We know the model from both sides, and the tension is real in both directions. Sponsors want visibility. CROs want a process they can run consistently across clients. Neither of those is an unreasonable ask.
The challenge is when neither side takes the time to align on what that looks like in practice before the study starts. When oversight expectations are not clearly defined upfront, gaps tend to accumulate quietly throughout the study, and only become visible at close-out when there is little time to address them.
Oversight starts at the contract, not at close-out
One of the most useful observations in the episode came from an attendee who raised the chicken-and-egg problem: the contract is signed before the TMF plan is written. How do you align on oversight when the details are not yet defined?
The answer is that sponsors need to think ahead, which is easier said than done when you are also trying to get a study off the ground. The contract is typically in place before the TMF plan is fully defined, so expectations for TMF transfer and oversight need to be aligned during contract negotiations, and ideally the appropriate budget provisioned for them at the same time.
That budget point matters more than it might seem. CROs are often navigating competing pressures: delivering quality service while managing tight timelines and cost constraints.
Building oversight scope into the contract from the start is better for everyone. It gives CROs clarity on what is expected, and it gives sponsors confidence that what they need is actually funded and agreed upon. Asking for more mid-study puts both sides in a difficult position.
What adequate oversight actually looks like
The phrase "adequate oversight" is used constantly. It is defined precisely far less often.
Adequate oversight means having enough visibility and control to understand the true state of your TMF throughout the study.
In practice, that looks like:
- Clear alignment on roles, expectations, and QC standards
- A defined process for how the TMF is managed and how you as a sponsor have access to it
- Ongoing monitoring of key indicators such as completeness, timeliness, and quality
- Evidence that oversight is actually happening
It is continuous, risk-based, and visible.
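To make the monitoring bullet concrete, here is a minimal sketch of how completeness and timeliness indicators might be computed over a record inventory. The field names (due_date, filed_date) and the indicator definitions are illustrative assumptions, not a standard; real eTMF systems report these out of the box.

```python
from datetime import date

def tmf_indicators(expected_records, today=None):
    """Compute illustrative completeness/timeliness indicators.

    Each record is a dict with a due_date and, once filed, a filed_date.
    Field names are assumptions for this sketch, not a defined schema.
    """
    today = today or date.today()
    total = len(expected_records)
    filed = [r for r in expected_records if r.get("filed_date")]
    on_time = [r for r in filed if r["filed_date"] <= r["due_date"]]
    overdue = [r for r in expected_records
               if not r.get("filed_date") and r["due_date"] < today]
    return {
        "completeness": len(filed) / total if total else 1.0,  # filed / expected
        "timeliness": len(on_time) / len(filed) if filed else 1.0,  # on-time / filed
        "overdue_count": len(overdue),  # expected, unfiled, and past due
    }

# Example: two expected records, one filed on time, one overdue.
sample = [
    {"due_date": date(2024, 1, 1), "filed_date": date(2023, 12, 20)},
    {"due_date": date(2024, 1, 1), "filed_date": None},
]
print(tmf_indicators(sample, today=date(2024, 2, 1)))
```

Tracked on a regular cadence, even simple indicators like these give a sponsor something reviewable, and review records are themselves oversight evidence.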
The evidence piece is the one most sponsors underestimate, sometimes until it is too late. Regulators do not take your word for it. Inspectors are not looking for a flawless TMF. They are looking for proof that the sponsor was engaged throughout, that issues were identified and addressed as they arose.
A TMF with no documented issues at all is not necessarily a sign of a great process. It may signal that the review mechanisms were not sensitive enough to catch what was there. What inspectors want to see is an active quality system, not a perfect one.
ICH E6(R3) reinforces this directly. The guidance puts a stronger emphasis on quality built into the design from the start rather than checked at the end. That is a meaningful shift in expectation, and it applies to how sponsors and CROs structure their oversight arrangements together from day one.
How should sponsors access the TMF during an active CRO-managed study?
If the CRO is managing the TMF in their own system, the question of access needs to be settled before the study starts. Sometimes sponsors are not given full access, and there may be legitimate process reasons for that. But if that is the arrangement, a more fundamental question needs to be asked: how is oversight actually happening, and where is the evidence of it?
This is where having the right system in place makes a practical difference. When your oversight plan includes direct access to a shared filing environment, completeness and quality indicators become part of the regular study cadence rather than a discovery at close-out. eTMF Connect gives sponsors real-time visibility into TMF completeness at the study, country, and site level, so issues surface during the study, which is the only point at which you can actually do something about them.
An attendee made the point plainly during the session: you should never wait to look at the TMF until the study is almost over. The documentation of your reviews as a sponsor is critical for inspection readiness. That documentation needs to exist in your vendor oversight plan before an inspector asks for it, not after.
Risk-based QC: what it actually means
A question came in during the session asking: what percentage of CRO TMF content should be selected for QC as part of sponsor oversight? It is a fair question, and the answer tends to disappoint people who are looking for a simple number.
There is no meaningful right percentage.
A fixed percentage across the board is meaningless because it is based on volume, not risk. When risk is the lens, you can make decisions that are actually proportionate to the study. You may perform 100% QC on critical records and 20% on lower-risk ones, for example. If a significant study event occurs, such as a participant unblinding, you may want to review 100% of the records that event generated. In some areas, you may review very little; in others, you may have to go deeper.
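A risk-tiered sampling plan like the one described above can be sketched in a few lines. The tier names, rates, and record fields here are illustrative assumptions; the actual rates should come from your own documented risk assessment.

```python
import random

# Hypothetical tiers and rates -- illustrative only, not a regulatory standard.
SAMPLING_RATES = {
    "critical": 1.0,   # e.g. consent or unblinding records: 100% QC
    "high": 0.5,       # elevated-risk artifacts: 50% sample
    "standard": 0.2,   # lower-risk artifacts: 20% sample
}

def select_for_qc(records, seed=42):
    """Return the subset of records selected for QC review, by risk tier."""
    rng = random.Random(seed)  # fixed seed so the selection is reproducible and auditable
    selected = []
    for rec in records:
        rate = SAMPLING_RATES.get(rec["risk_tier"], 1.0)  # unknown tier: review fully
        if rng.random() < rate:
            selected.append(rec)
    return selected

records = [
    {"id": "ICF-001", "risk_tier": "critical"},
    {"id": "LOG-017", "risk_tier": "standard"},
    {"id": "DEV-004", "risk_tier": "high"},
]
qc_set = select_for_qc(records)
# Critical records are always selected, whatever the random draw.
assert any(r["id"] == "ICF-001" for r in qc_set)
```

The design point is less the code than the structure: the plan is explicit, the rates trace back to risk decisions, and the selection is reproducible, all of which is evidence an inspector can follow.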
Paul extended this further. Risk assessment needs to go beyond individual record types and account for the full picture of what happened in the study: quality issues detected, types of deviations, the operational areas that generated the most activity.
"If you were receiving a TMF at the end of the study from a CRO, you really need to perform a risk assessment not only on the TMF processes that were used to manage the TMF, but also looking at things like the quality issues that were detected: duplicates, misfilings, illegible records, timeliness problems."
The goal is not coverage. It is confidence that the TMF is an accurate reflection of what happened in the study. Those are different things, and conflating them is how teams end up with high QC percentages and low inspection readiness.
Archiving: longer and more complicated than most sponsors expect
Archiving tends to be treated as a storage problem. It is not.
It is a long-term compliance obligation that starts with decisions made at the beginning of the study, and it is an area where the industry still lacks clear, practical structure.
At Montrium, we have been thinking about this a lot because the demand for a simpler, more cost-effective approach to archiving is real and growing.
Regulatory retention requirements run up to 25 years for trials conducted in the EU, per Article 58 of EU Regulation 536/2014. Canada sits at 15 years. The US FDA requirement is two years after approval of the marketing application, though most global sponsors sensibly default to the longest applicable period.
A few things to keep in mind when preparing to archive, particularly when dealing with a mix of paper and electronic records:
- Define the scope clearly. The TMF is not just the eTMF system. It includes records held in other systems such as CTMS, safety, and IRT. Know what is in scope before transfer begins.
- Decide how paper will be handled. Will it remain physical or be digitized? If digitized, what is the process for preserving the integrity of the record?
- Reconcile before archiving. What you received should match what you expected.
- Be clear on authoritative source. Paul raised this from direct inspection experience: "I was in an inspection, and it was one of the questions the inspector asked: am I looking at the authoritative source? And if you don't know the answer to that question, that's a problem."
- Ensure long-term accessibility. The ability to retrieve, read, and interpret records years later is a regulatory requirement, not a preference.
On the question of whether documents and audit trails can be transferred separately: in practice, they often are. The audit trail from the source system is not typically recreated in the destination. What matters is that it is preserved, linked to the records it supports, and in a human-readable format.
Paul made the case for applying a structure that the data management world already uses well: a formal data transfer agreement. "In clinical data management, you have a data transfer agreement, which clearly specifies how data is going to be moved from one place to another or one organization to another, either during the study or at the end. It's that agreement that really governs exactly how that process is going to happen."
Timing and format should be defined in the TMF plan or a data transfer agreement from the start, so that when the handoff arrives, the process is already agreed and both sides know exactly what to expect.
Can AI help? Yes, with the right framing.
AI came up repeatedly throughout the episode, and it is worth being precise about where it adds value and where it requires caution. The enthusiasm in the industry is real. So is the uncertainty.
Where AI genuinely helps is at scale. The TMF can contain thousands of records. Even a well-designed risk-based manual review has limits.
AI can surface signals that manual review would miss: patterns like repeated protocol deviations across sites, clusters of QC issues, delays in record finalization, and gaps between what a process document describes and what evidence actually exists in the TMF.
Paul described practical testing Montrium has done: "We've been doing some interesting tests around reporting protocol deviations and seeing that sometimes they can be misreporting across different records, which if it came up in an inspection would obviously be a problem."
A question from the audience asked how you feed data into AI in a compliant way. It is the right question, and one we spend a lot of time on at Montrium. The answer matters particularly given data residency, integrity and privacy requirements. The better approach is to bring AI to the data, not the data to AI.
Every time you move data, you introduce risk of integrity loss or compliance concerns depending on where it is stored or processed. Connecting AI to existing sources, querying and analyzing data in place, and avoiding large-scale data duplication preserves integrity, maintains clear ownership, and keeps you aligned with privacy and residency requirements.
Paul added an important note on reliability that is worth sitting with. AI will give you an answer, and it will give it to you confidently, regardless of whether the answer is correct. "From a regulatory standpoint, it's much better if you can take a deterministic approach whereby you have much more predictable outcomes. If we're able to establish a clear set of data that we know is accurate and then start to ask questions on that set of data, all of a sudden we start to get much more accurate results."

The industry and the regulators are both still working through what governed, compliant AI use looks like in this context.
That is not a reason to wait. It is a reason to build the right frameworks now, before the pressure is on.
Questions asked, straight from our attendees
What does a TMF plan need to say about archiving?
A TMF plan should specify what is in scope for archiving, the transfer format and destination, and the review process prior to archiving.
This forces early alignment between sponsor and CRO before close-out pressure sets in. If archiving expectations are not agreed upfront, both sides end up negotiating mid-handoff, which is the worst possible time.
Can TMF documents and audit trails be transferred separately?
Yes. In practice they often are, especially when exporting between systems. The audit trail from the source system is not typically recreated in the destination. What matters is that it is preserved, linked to the records it supports, and in a human-readable format. Transfer timing and format should be defined upfront in the TMF plan or a data transfer agreement.
How do you push back on a CRO that is resistant to more oversight?
Reframe the conversation from "more oversight" to "better risk management." Tie requests to documented study risk, and distinguish between oversight and execution: requesting evidence of work already being done is not the same as asking the CRO to do extra work. Use data to support the conversation and stay open to how the deliverable is achieved.
How do you handle a TMF transfer when metadata quality is poor?
Define a minimum metadata set and require the CRO to map to it before transfer begins. Prioritize critical fields: artifact type, study, country, site, and dates. Use pattern checks to flag inconsistencies, and document anything that could not be verified and why; that transparency is part of the TMF record. Feed structured observations back to the CRO to improve future transfers.
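The pattern checks described above can be as simple as a required-field and date-format pass over each transferred record. The field names and the minimum set here are assumptions for illustration; the real set is whatever you and the CRO agree on before transfer.

```python
import re

# Hypothetical minimum metadata set -- agree the real one with your CRO upfront.
REQUIRED_FIELDS = ["artifact_type", "study_id", "country", "site_id", "doc_date"]
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # expect ISO 8601 dates

def check_record(meta):
    """Return a list of metadata issues found on one transferred record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not meta.get(field):  # missing key or empty value
            issues.append(f"missing {field}")
    if meta.get("doc_date") and not DATE_PATTERN.match(meta["doc_date"]):
        issues.append(f"malformed doc_date: {meta['doc_date']!r}")
    return issues

# Example: one record with a missing site and a non-ISO date.
issues = check_record({
    "artifact_type": "Protocol Deviation",
    "study_id": "ABC-123",
    "country": "CA",
    "site_id": "",
    "doc_date": "12/03/2024",
})
print(issues)  # ['missing site_id', "malformed doc_date: '12/03/2024'"]
```

Running something like this across the whole transfer gives you the structured observations to feed back to the CRO, and a documented record of what could not be verified.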
Christina Mantzioros
With a mandate to bridge the gap between clinical research, technical development, and business applications, Christina plays a vital role as Montrium's Product Implementation Manager. With over five years' experience working in project management roles in clinical operations departments at both academic and clinical research organizations, Christina provides the Montrium team with essential insights into how clinical professionals use our products and platforms. Highly engaged with the life science industry and an expert on clinical and regulatory best practices, Christina regularly contributes to industry conferences, company webinars, and the Montrium blog.
