In the first of an eight-part Q&A series which aims to explore some of the current hot topics in Risk-Based Monitoring (RBM), CluePoints has been speaking with Craig Serra from Pfizer. Craig is currently Senior Director and Data Management Business Process Owner, with accountability for the data management process in study conduct and closeout. His experience is in data management, clinical systems, project management, and clinical operations, with a diverse educational background in business, management, information systems, and pharmacology/toxicology.
Here’s what Craig had to say when we put our questions to him.
Addressing current best practice in Risk-Based Monitoring is difficult because adoption is still not widespread across the industry, despite regulators advising us to do just that. Think of faxes at your doctor's office: why are they still commonplace? Healthcare is notoriously slow to adopt technology into everyday practice; the Health Resources and Services Administration estimates a 10-15 year lag in implementation of computing capabilities versus other industries. Some colleagues in the industry are piloting Risk-Based Monitoring, and I have seen examples of basic frameworks that rely on KRIs and on identifying risks early on or during study startup. However, I have not yet seen clinical data interrogation, and actions focused on sites and data, that will truly have an impact on the statistical validity and core conclusions of a trial.
The best practice will consist of two things. First, robust identification during study startup of the risk factors and of what is truly critical to ensuring patient safety, statistical validity, and correct trial conclusions. Second, as data accumulate in a trial, the study team lets the data produced by the sites speak for themselves: the interrogation of clinical and operational data, relative to the key risk indicators (KRIs), dictates the appropriate actions.
The real risk I see to sponsor organizations is an overly cautious approach to monitoring in the face of regulations that are designed to facilitate drug development. We need to listen to regulators, especially when they are pushing us towards a much better approach to ensuring trials are conducted properly.
CROs are also at a crossroads with regard to Risk-Based Monitoring. Monitoring costs can account for a large chunk of a trial's total budget, around 25-35%. Risk-Based Monitoring is a danger to CROs that rely on that top-line revenue from monitoring visits and that don't want to amend their business practices to support greater efficiency. However, it is a major asset and differentiating factor for CROs that deliver a centralized monitoring approach, thereby aiming to reduce the total number of on-site monitoring visits and the amount of SDV. Another trend is CROs developing their own Risk-Based Monitoring software. I believe this to be the wrong approach, since CROs are, at their core, service providers. Software development is a different skill set, and the result is CRO-built software that misses the mark in terms of utilizing advanced analytical and statistical methodologies.
We already have the reflection paper from the EMA and the guidance from the FDA, so regulators are more than ready for it, and other agencies will be issuing guidance in the near future. It is the industry that is not moving forward. I have actually seen examples where regulators are specifically looking for sponsors to adopt a Risk-Based Monitoring approach, but the sponsors are putting on the brakes and insisting on a traditional monitoring approach. We have to stop talking about it, debating philosophy, and otherwise treating it as an academic exercise. We just have to commit to it and implement it.
I don’t think systems will become obsolete, but I do think we are going to start seeing integration between them all. That said, given our industry’s track record when it comes to adopting new processes and ideas, I think all this technology will coexist for at least a decade, if not two or three. What is important is having software developers who fundamentally understand statistical approaches to monitoring and who put user-friendly software on the market.
For adoption to increase, it will be crucial to ensure the industry understands the difference between software for data visualization and software that contains an analytics engine. Understanding of how data are processed and analyzed using Central Statistical Monitoring (CSM), as opposed to how visualization software works, seems to be somewhat lacking. There is a view that any software producing an aesthetically pleasing visualization automatically has a robust and statistically valid analytical component behind it; that is simply not the case.
It is vital that study teams know that the data visualizations produced via CSM reflect complex algorithms that are light years ahead of what the industry has used to date. CSM allows teams to identify areas of risk reflecting both clinical and operational data, which can of course be visualized. However, we aren’t just visualizing data; we are visualizing the analysis of those data.
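To make the distinction concrete, here is a minimal sketch of the kind of site-level statistical test an analytics engine runs before anything is drawn on screen, as opposed to simply charting raw rates. The data, the KRI, and the flagging rule are all hypothetical and illustrative only; they are not CluePoints' actual methodology.

```python
import math

# Hypothetical KRI per site: (visits with at least one adverse event reported,
# total visits). A centralized statistical approach tests each site's rate
# against the pool of all OTHER sites, rather than just plotting the rates.
site_counts = {
    "site_01": (42, 100),
    "site_02": (38, 95),
    "site_03": (5, 90),   # suspiciously low AE reporting
    "site_04": (40, 105),
}

def site_z_scores(counts):
    """Two-proportion z-test of each site versus the remaining sites pooled."""
    scores = {}
    for site, (x, n) in counts.items():
        rest_x = sum(xx for s, (xx, _) in counts.items() if s != site)
        rest_n = sum(nn for s, (_, nn) in counts.items() if s != site)
        p_site, p_rest = x / n, rest_x / rest_n
        p_pool = (x + rest_x) / (n + rest_n)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n + 1 / rest_n))
        scores[site] = (p_site - p_rest) / se
    return scores

scores = site_z_scores(site_counts)
# Flag sites whose KRI deviates from the rest by more than ~3 standard errors.
flagged = [site for site, z in scores.items() if abs(z) > 3]
```

A pure visualization tool would render all four bars with equal authority; the analysis layer is what singles out the site whose deviation is statistically meaningful rather than merely visible.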
When weighing subjective versus objective approaches, there is a place for both. Until study teams have received enough clinical and operational data, perhaps three or four months into a study, they don't really have the ability to be terribly objective. Until that point, there is nothing wrong with teams setting subjective thresholds based on previous experience, both with individual sites and from the study team members themselves.
Crucially, the study team needs to understand that this is the landscape only for a particular period of time, and that once the data are available, they should use analytics tools to interrogate the data and start fine-tuning the thresholds for that particular study. That is when teams should move away from the pre-set thresholds and adapt to the objective reality of the trial.
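The shift described above, from experience-based thresholds to thresholds derived from the trial's own data, can be sketched as follows. The KRI (site-level query rate), the numbers, and the switch-over rule are hypothetical, chosen only to illustrate the idea.

```python
import statistics

# Hypothetical KRI: data queries raised per 100 data points, per site.
# Early in the study the team uses a subjective threshold from past trials;
# once enough sites have reported, the threshold is re-derived empirically.
SUBJECTIVE_THRESHOLD = 10.0  # from prior experience, not from this trial

def data_driven_threshold(observed_rates, n_sds=2.0):
    """Mean plus n_sds standard deviations of the observed site-level rates."""
    return statistics.mean(observed_rates) + n_sds * statistics.stdev(observed_rates)

def threshold(observed_rates, min_sites=10):
    # Fall back to the subjective value until enough sites have reported.
    if len(observed_rates) < min_sites:
        return SUBJECTIVE_THRESHOLD
    return data_driven_threshold(observed_rates)

early = [4.1, 5.0, 3.8]  # three sites reporting: stay subjective
later = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 3.9, 4.8, 5.1, 4.3, 4.7, 4.5]

early_threshold = threshold(early)   # still the experience-based value
later_threshold = threshold(later)   # now tuned to this trial's reality
```

In this illustration the accumulated data pull the threshold well below the initial guess, which is exactly the fine-tuning the interview describes: the pre-set value was a reasonable placeholder, not the trial's objective reality.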
The Addendum really reflects the new way of thinking and will support those companies pushing for Risk-Based Monitoring adoption, as they now have the backing of the guidelines. Initially, implementation will help companies obtain higher-quality data in order to make better-quality decisions. Further down the line, as study teams adopt RBM technology in more of their trials and learn from it, they can start to create efficiencies which, in turn, could result in cost reductions.
Like many recommendations from the regulators, the guidelines are subject to individual companies’ interpretation. By working with the regulators to agree to a practical approach and framework, and importantly, sticking to it throughout the study, sponsors can ensure Risk-Based Monitoring implementation results in a successful operational oversight ecosystem.
Ultimately, listening to and acting on the data is key. After all, statistical methodologies dictate proof of safety and efficacy of a drug, so why aren't we letting that same hypothesis testing tell us about the quality of the data during the trial? The paradigm in clinical development is changing to increase our ability to produce actionable and reliable data, as well as to reduce costs and shorten timelines. Advances like RBM and CSM can be used to focus our attention where it matters, ultimately safeguarding our patients and delivering more productively for all who depend on us.