Risk-Based Monitoring Insights From The Industry: Steve Young

Part Five

We’ve heard some great insights on the current hot topics in Risk-Based Monitoring (RBM) from a number of leading experts over recent weeks. In part five, we hear what Steve Young, Senior Director of Transformation Services at OmniComm Systems, has to say on the subject when we put our questions to him.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different size?

I strongly believe that simplicity is key to successful implementation of Risk-Based Monitoring for any organization. I have unfortunately seen too many organizations unintentionally over-engineer processes at their first attempts, particularly in areas such as the pre-study risk assessment and risk identification. The start-up period of a study is a very busy time for cross-functional study teams, so organizations implementing Risk-Based Monitoring need to make sure they don’t create a resource- and time-heavy process which will increase burden and may actually contribute to increased risk.

Many sponsors are exploring how to approach Risk-Based Monitoring with their CRO partners. My recommendation to sponsors is to make sure they are closely involved in the planning, design and implementation of Risk-Based Monitoring, in order to ensure that their business objectives and study outcomes are being met. Sponsors should also have access to operational metrics, signal detection etc. throughout the study in order to monitor quality management.

Another important point to mention here is that having well-considered, robust, centralized statistical data monitoring capabilities in place is absolutely critical to a successful and effective roll-out of Risk-Based Monitoring. Centralized analytics and centralized monitoring of key risk indicators are the tools that allow organizations to effectively identify emerging risks and proactively remediate them. This in turn allows study teams to be more targeted with monitoring activity and eliminates the need for the traditional 100% source data verification (SDV) and site visits every four weeks. To truly implement a targeted approach operationally, organizations simply have to have a very effective centralized monitoring system in place.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring – are they ready for it and why?

It’s clear that the regulatory authorities have been actively encouraging the industry to increase adoption of Risk-Based Monitoring over recent years. The FDA guidance and the EMA reflection papers make it very clear that Risk-Based Monitoring is considered a superior approach to achieving quality in clinical trials, and that they are not just passively suggesting it as an alternate approach to quality management. In Japan, the PMDA has also recently made clear its strong support for Risk-Based Monitoring, which is promising.

The ICH E6 GCP guidance is receiving its first major update in 15 years, which is due to be finalized later this year. The centerpiece of the update is Risk-Based Monitoring and risk-based quality management principles, so we can see that Risk-Based Monitoring is quickly turning from regulatory recommendation into GCP expectation.

There are still many organizations with concerns about how well the regulatory auditors are actually going to react to this change. However, it’s quite clear from comments the regulatory authorities have made that, as long as sponsors have a well-documented quality management plan demonstrating how the risk assessment was carried out, how the monitoring plan was guided by that assessment, and what the findings (and any remediations) were, there should be few issues. In my view, the door has been opened very clearly by the regulatory authorities to encourage industry to move forward with Risk-Based Monitoring.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

There is an evolution which needs to happen and it’s actually already underway.
EDC systems need to have robust, built-in intelligent features that allow study teams to plan, set up and effectively manage a targeted SDV approach. A number of EDC systems have already been updated to include these types of capabilities, but I think the development of this technology will continue over time to become even more robust and fit-for-purpose.

As I mentioned earlier, centralized analytics tools are critically important to the success of Risk-Based Monitoring. While historically these tools have not played a major role in the quality management of clinical studies, as Risk-Based Monitoring adoption continues to increase, it will be critical for current technologies to incorporate these analytical capabilities in order for the approach to be truly successful.

Many organizations are already putting in place initial centralized analytics, key risk indicators, statistical monitoring methods etc, but there is still much refinement needed. Some are more advanced than others, but in my view, there is still a danger of ‘immature analytics’. By this, I mean that a study team could be using data from these tools to identify emerging risks, but if those metrics are not configured properly or well considered, teams run the risk of acting on unreliable data, which runs directly counter to what Risk-Based Monitoring aims to achieve.

Workflow support is also going to be very important and is evolving behind these two other areas. Prior to the study, workflow support is needed to manage the development, review and approval of a risk management plan, and during the study, effective support is required to follow up on emerging issues detected by centralized monitoring, including adequately documenting them through to resolution or remediation.

Much of the early work in Risk-Based Monitoring has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?

Advanced Central Statistical Monitoring capabilities, which can identify outliers and unrealistic data patterns, are crucial. The industry is becoming more interested in this approach and, as a result, I think adoption rates will increase significantly in the near future as organizations strive to have a full complement of tools to detect operational and data integrity issues. That said, at this early stage, most organizations are still working out how this will operate in practice.

Organizationally, my view continues to be that this is going to gravitate towards skill sets that exist in clinical data management organizations, within both CRO and sponsor clinical data management departments. Certainly biostatistics departments and representatives will be important stakeholders in the adoption process; however, they will actually play a more supportive, as opposed to central, role in the operational use of these tools, especially as the tools become increasingly sophisticated and have more refined user interfaces.

While data managers are not statisticians, they typically have good data analytics skills, so in my opinion are probably the best equipped to work with the monitoring team to translate data into an appropriate remediation action.

Discuss the complexity of defining subjective thresholds for Key Risk Indicators (KRIs), rather than (or in addition to) the objectivity of comparing one site against the others.

Even when study teams compare sites against each other, there can be a level of subjectivity about where to set the statistical thresholds. For example, a team might statistically compare the rate of adverse events recorded per subject across all of the sites in a clinical trial, but how does the team know when a site should actually be flagged as potentially under-reporting – or not reporting all of the adverse events that are occurring? Teams can define ‘x’ number of standard deviations from the study average as the appropriate threshold; however, different individuals or organizations may set thresholds at different levels, which is where the challenge lies. Currently there is no consensus that I am aware of, but it is likely that one will be established over time.

An important issue is also study teams’ understanding that identifying potential risks is not black and white. Even once thresholds are set, sites should not be dismissed just because they haven’t hit the alert level. For example, there may be a site which is not flagged as a concern but is just below the alert threshold, so it may be in a team’s interest to review activity at this site too.
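As a minimal sketch of the threshold approach described above, the snippet below flags any site whose adverse-event rate per subject falls more than a chosen number of standard deviations below the study average, and keeps a second, lower band for sites just under the alert level that may still warrant review. The site names, counts, and the 2-SD cut-off are illustrative assumptions only, not recommended values.

```python
# Sketch: flag sites whose adverse-event (AE) reporting rate per subject
# falls well below the study average -- a possible sign of under-reporting.
# All data and thresholds here are hypothetical.
from statistics import mean, stdev

# Hypothetical per-site data: (site id, AE count, enrolled subjects)
sites = [
    ("Site A", 42, 20),
    ("Site B", 38, 19),
    ("Site C", 9, 21),   # markedly low AE rate
    ("Site D", 35, 18),
    ("Site E", 30, 17),
    ("Site F", 40, 19),
    ("Site G", 33, 16),
]

rates = {site: aes / subjects for site, aes, subjects in sites}
avg, sd = mean(rates.values()), stdev(rates.values())

ALERT = 2.0   # alert threshold, in standard deviations (a subjective choice)
REVIEW = 1.5  # "worth a look" band just below the alert level

flags = {}
for site, rate in rates.items():
    deviation = (avg - rate) / sd  # positive when the site is below average
    if deviation > ALERT:
        flags[site] = "alert"
    elif deviation > REVIEW:
        flags[site] = "review"

for site, status in flags.items():
    print(f"{site}: rate {rates[site]:.2f} per subject -> {status}")
```

Note that a single extreme site inflates the study-wide standard deviation and can mask itself; more robust variants (e.g. median-based measures) are one reason different organizations arrive at different thresholds.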

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

I believe that its main impact will be to turn Risk-Based Monitoring into an expectation rather than a suggestion, and from a pragmatic perspective, it’s going to accelerate adoption across the industry. The FDA and EMA set a foundation which has really increased the conversation, but these were just guidelines that don’t carry the force of a specific GCP expectation. The ICH E6 Addendum sets clear expectations for the industry, so in my view, it is likely to be a game changer, with many organizations that were previously holding back having to actively start putting Risk-Based Monitoring processes in place. It is becoming clear that within a couple of years Risk-Based Monitoring will simply be the standard, accepted approach to operational quality management across the industry.
