Over the past few weeks, we’ve been discussing the thoughts of some of the leading experts in Risk-Based Monitoring, and we’ve really appreciated those of you who have got involved in the debates. The penultimate question we put to our partners centered around the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.
Similar to my last post, which looked at how organizations plan to complement the use of KRIs with Central Statistical Monitoring (CSM), the partners offered quite a range of opinions. Two people who shared similar views were Angie Maurer, Co-Founder & CEO at Zynapsys, and Craig Serra, Senior Director and Data Management Business Process Owner at Pfizer, who agreed that there is a place for both in today’s clinical trials, with Craig commenting: “Until study teams start receiving enough clinical and operational data, perhaps three or four months into a study, they don’t really have the ability to be terribly objective. So until that point, there is nothing wrong with teams setting subjective thresholds based on previous experience.”
Jamie O’Keefe, Vice President, Life Sciences and R&D Practice at Paragon Solutions, discussed the challenge of defining subjective thresholds, compared with objective site-to-site comparisons, which in his opinion are fairly straightforward. To set a subjective threshold, a study team needs to consider the opinions of many people, all of whom have important qualitative interactions with sites. In Jamie’s view, the industry currently lacks a system that facilitates this conversation and brings those many opinions together into a single threshold.
Steve Young, Senior Director of Transformation Services at OmniComm Systems, pointed out that: “Even when study teams compare sites against each other there can be a level of subjectivity about where to set the statistical thresholds.” He used an example to demonstrate this: “A team might statistically compare the rate of adverse events that are being recorded per subject across all of the sites in a clinical trial, but how does the team know that when a site appears to have an unrealistically low adverse event rate, that it should be flagged as a site that is potentially underreporting or not reporting all of the adverse events that are occurring? Teams can define ‘x’ number of standard deviations as the appropriate threshold, however, different individuals or organizations may set thresholds at different levels, which is where the challenge lies. Currently, there is no correct solution to this issue that I am aware of, but it is likely that a common consensus on defining those thresholds will be developed over time.”
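Steve Young’s example can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual method: the site names, rates, and the choice of `k` standard deviations below are all illustrative assumptions, and `k` itself is exactly the subjective choice he describes.

```python
# Sketch of the cross-site comparison Steve Young describes: flag sites
# whose adverse-event (AE) rate per subject falls more than k standard
# deviations below the mean across all sites, as potential underreporters.
# All data and the threshold k are illustrative assumptions.
from statistics import mean, stdev

def flag_low_ae_sites(site_rates, k):
    """Return sites whose AE-per-subject rate is more than k standard
    deviations below the cross-site mean."""
    rates = list(site_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    cutoff = mu - k * sigma
    return [site for site, rate in site_rates.items() if rate < cutoff]

# Hypothetical data: AEs recorded divided by enrolled subjects, per site.
site_rates = {
    "Site 101": 0.42,
    "Site 102": 0.38,
    "Site 103": 0.45,
    "Site 104": 0.40,
    "Site 105": 0.05,  # suspiciously low rate
}

print(flag_low_ae_sites(site_rates, k=1.5))  # ['Site 105']
print(flag_low_ae_sites(site_rates, k=2.0))  # []
```

Note that with this data the low-rate site is flagged at `k=1.5` but not at `k=2.0`, which is precisely the challenge in the quote above: different individuals or organizations setting the threshold at different levels will reach different conclusions from the same data.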
Craig Serra provided a useful summary, concluding that: “Ultimately, listening to and acting on the data is key. After all, as an industry we are willing to let stats dictate the conclusion in terms of safety and efficacy of a trial, so why aren’t we letting that same hypothesis testing actually tell us about the quality of data during the trial, and what is more, use it during the trial to help safety and efficacy?”
From my own perspective, I couldn’t agree more with Craig: the statistical approach used to determine safety and efficacy can also be leveraged, via Central Statistical Monitoring of the data, to derive indicators of risk by site. There is certainly good reason to employ subjective thresholds as a trial starts, since the amount of data required for a statistically valid, objective result may take weeks or months to accrue. Those subjective thresholds are best set using prior experience, but we also recommend applying a statistical approach to historic studies with a similar data structure in the same therapy area as a starting point. As the trial progresses and data accrues, we have seen that sponsors want to assess the Key Risk Indicators objectively as soon as practically possible, and for some variables (such as Adverse Events or SAEs) the use of both subjective and statistically driven thresholds has been the standard approach. It is clear from the results we have seen, and the actions taken, that objective, statistically derived results carry more weight in terms of ‘proving’ where risk lies within the study being analyzed.