Part Two
Recently, CluePoints has been taking time out with some of our valued partners to discuss hot topics in Risk-Based Monitoring (RBM). Here, in the second of an eight-part Q&A series, we spoke with Adam Butler, Senior Vice President, Strategic Development & Corporate Marketing at Bracket Global, who shared these valuable insights with us.
When it comes to the practical implementation and strategy of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?
In the context of current best practice, there is a huge amount of variety. The large pharma companies and CROs have taken an approach to Risk-Based Monitoring that is very much focused on managing costs and reducing the amount of site monitoring, but not necessarily centered on clinical or statistical outcomes.
This may be because internal initiatives to implement Risk-Based Monitoring tend to focus on the high-level expectations laid out in regulatory guidance and TransCelerate’s paper, with little attention paid to the complex statistical problems involved. Small companies, however, tend to implement Risk-Based Monitoring on a program-by-program basis, considering the type of clinical trial, therapeutic area, size of the program, and so on. This is encouraging, yet, as is often the case in our industry, it just doesn’t seem like anyone is embracing the new approach fast enough.
How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring – are they ready for it and why?
All indications are that Risk-Based Monitoring adoption will be enthusiastically embraced by regulators, especially the FDA, which has already released some very specific guidance and high-level recommendations. Even back in 2009, when it introduced its PRO guidance, the FDA was starting to look at how to address issues like missing data and clinical data analysis. Subsequent guidance on eSource and Clinical Outcome Assessments (COAs) also addresses how to do better statistical monitoring. We’ve been encouraging people for years to pay more attention to statistical analysis rather than focusing solely on SDV data, so the FDA guidance validates a lot of what we are already working towards.
In my experience, regulators are very willing to answer questions and offer recommendations outside of the context of a specific guidance. So in my opinion, the best way for companies to ensure successful implementation of Risk-Based Monitoring is to engage with the relevant authorities as often as possible and if necessary, adjust study protocols according to their advice. The existing guidance is further proof that regulators are open to new approaches, so the burden really is on the industry to make sure their methods are validated before they are submitted for approval.
How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?
Technology in clinical research continues to prove its worth (EDC and eCOA, for example, have both undoubtedly led to better efficiencies), so there is no reason to believe that other new technologies couldn’t support this further. However, pharmaceutical research is at least one generation behind in terms of the adoption and integration of new technologies, so it needs to play catch-up. As the industry continues to do this, there is no doubt that we will see the introduction and adoption of more and more technologies within clinical research.
There will also be greater focus on interoperability between these technologies, which will bring with it a burden on everyone, especially the technology developers, to make sure the technologies are not only able to work with each other, but work for the researchers using them. At Bracket, we invest time testing new technology and working with end users, mostly at clinical research centers, to make sure it works in the field, and ultimately improve the chances of it being adopted by pharma organizations.
Much of the early work in Risk-Based Monitoring has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?
When it was introduced, I think much of the initial focus was on how Risk-Based Monitoring could address operational problems, such as keeping track of missing data or the number of data change forms. However, I think much of the adoption we will see moving forward will center on the clinical endpoint of a trial and making sure the data captured is as reliable and valid as possible. Data quality is the ultimate goal in improving the chances of study success, and implementing a CSM solution helps ensure that all data captured during a trial is clean and accurate.
At Bracket, we begin a project by considering a study’s protocol, including the clinical endpoint and outcomes, and then develop a strategy which addresses all of the potential risks that could arise when collecting the data, based on those endpoints and outcomes. As we continue to do this, Risk-Based Monitoring and CSM will be seen as essential support tools. That said, while defining the potential risks is an important part of the process, what a team does in response to any issues identified in the data is crucial. To ensure clean and consistent data capture throughout the trial, teams must implement clearly defined, meaningful actions and remediations to rectify any issues highlighted as soon as possible.
Discuss the complexity of the strategy of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.
In our experience, defining subjective thresholds can sometimes lead study teams down a path which doesn’t necessarily represent anything meaningful. When risks are properly validated and understood upfront, site comparisons can be a really powerful tool in identifying issues which would be impossible to discover if a team was just monitoring data within the context of one center.
At Bracket, we are usually quite focused on the Clinical Outcome Assessments, the PROs and rating scales that are often used as endpoints. Suppose you are evaluating a new anti-depressant product and the primary outcome is a clinical interview: if an interview lasts only four minutes when the median interview length is usually 17 minutes, that could be an issue. If the team is looking at individual data points across numerous sites, that would flag as a very meaningful negative outlier, but it would be very difficult to identify if the team were only analyzing data within the context of one center.
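The cross-site comparison described above can be sketched in a few lines of code. This is an illustrative example only, not Bracket’s actual method: the site names, data, and the 3.5 robust z-score cutoff are all hypothetical, and it uses the median absolute deviation (MAD) as one common robust way to measure spread around the pooled median.

```python
# Illustrative sketch (hypothetical data and threshold): flag sites whose
# median interview duration is an extreme outlier relative to the pooled,
# study-wide median, using the median absolute deviation (MAD) for spread.
from statistics import median

def flag_outlier_durations(durations_by_site, threshold=3.5):
    """Return site IDs whose median interview duration deviates strongly
    from the study-wide median (robust z-score above the threshold)."""
    all_durations = [d for ds in durations_by_site.values() for d in ds]
    study_median = median(all_durations)
    mad = median(abs(d - study_median) for d in all_durations)
    if mad == 0:  # no spread at all; nothing can be flagged robustly
        return []
    flagged = []
    for site, ds in durations_by_site.items():
        # 0.6745 rescales the MAD so the score is comparable to a z-score
        robust_z = 0.6745 * abs(median(ds) - study_median) / mad
        if robust_z > threshold:
            flagged.append(site)
    return flagged

# Hypothetical data: most sites interview for roughly 17 minutes, while
# site "S04" takes about 4 minutes, mirroring the example above.
durations = {
    "S01": [16, 18, 17, 19],
    "S02": [15, 17, 16, 18],
    "S03": [17, 20, 16, 17],
    "S04": [4, 5, 3, 4],
}
print(flag_outlier_durations(durations))  # ['S04']
```

Comparing site medians against the pooled median is exactly the kind of check that is invisible when each center’s data is reviewed in isolation.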
What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?
In my view, the ICH (E6) Addendum is a continuing validation of the work we have already been doing in Risk-Based Monitoring and CSM. What it does is give those with doubts, for example someone in a pharma QA position, the reassurance they need that Risk-Based Monitoring is an effective solution that will be approved by a regulator. The important element for me is that, by analyzing site characteristics and performance metrics, Risk-Based Monitoring validates the need to look at all of the data in a clinical program and leverage every opportunity to analyze it, in order to uncover and rectify inconsistencies and issues.
I would hope that the primary impact of the Addendum will be an increase in Risk-Based Monitoring adoption. It gives those in the industry who have been pushing for its implementation the ammunition they need to argue the case for going beyond the traditional monitoring approach. Ultimately, it has the potential to significantly change how studies are conducted and improve the overall quality of data captured during trials.