
Risk-Based Monitoring Insights from the Industry: Steve Young


Part Five

We’ve heard some great insights on the current hot topics in Risk-Based Monitoring (RBM) from a number of leading experts over recent weeks. In part five, we hear what Steve Young, Senior Director of Transformation Services at OmniComm Systems, has to say on the subject when we put our questions to him.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different sizes?

I strongly believe that simplicity is key to successful implementation of Risk-Based Monitoring for any organization. I have unfortunately seen too many organizations unintentionally over-engineer processes in their first attempts, particularly in areas such as the pre-study risk assessment and risk identification. The start-up period of a study is a very busy time for cross-functional study teams, so organizations implementing Risk-Based Monitoring need to make sure they don’t create a resource- and time-heavy process which will increase burden and may actually contribute to increased risk.

Many sponsors are exploring how to approach Risk-Based Monitoring with their CRO partners. My recommendation to sponsors is to make sure they are closely involved in the planning, design and implementation of Risk-Based Monitoring, in order to ensure that their business objectives and study outcomes are being met. Sponsors should also have access to operational metrics, signal detection etc. throughout the study in order to monitor quality management.

Another important point to mention here is that having well-considered, robust, centralized statistical data monitoring capabilities in place is absolutely critical to a successful and effective roll-out of Risk-Based Monitoring. Centralized analytics and centralized monitoring of key risk indicators are the tools that allow organizations to effectively identify emerging risks and proactively remediate them. This in turn allows study teams to be more targeted with monitoring activity and eliminates the need for traditional 100% source data verification (SDV) and site visits every four weeks. To implement a truly targeted approach in day-to-day operations, organizations simply have to have a very effective centralized monitoring system in place.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

It’s clear that the regulatory authorities have been actively encouraging the industry to increase adoption of Risk-Based Monitoring over recent years. The FDA guidance and the EMA reflection papers make it very clear that Risk-Based Monitoring is considered a superior approach to achieving quality in clinical trials, and that they are not just passively suggesting it as an alternate approach to quality management. In Japan, the PMDA has also recently made clear its strong support for Risk-Based Monitoring, which is promising.

The ICH E6 GCP guidance is undergoing its first major update in 15 years, due to be finalized later this year. The centerpiece of the update is Risk-Based Monitoring and risk-based quality management principles, so we can see that Risk-Based Monitoring is quickly turning from regulatory recommendation into GCP expectation.

There are still many organizations with concerns about how the regulatory auditors are actually going to react to this change. However, it’s quite clear from comments the regulatory authorities have made that, as long as sponsors have a well-documented quality management plan that demonstrates how the risk assessment was carried out, how the monitoring plan was guided by that risk assessment, and what the findings (and any remediations) were, there should be few issues. In my view, the door has been opened very clearly by the regulatory authorities to encourage industry to move forward with Risk-Based Monitoring.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

There is an evolution which needs to happen and it’s actually already underway.

EDC systems need to have robust, built-in intelligent features that allow study teams to plan, set up and effectively manage a targeted SDV approach. A number of EDC systems have already been updated to include these types of capabilities, but I think the development of this technology will continue over time to become even more robust and fit-for-purpose.

As I mentioned earlier, centralized analytics tools are critically important to the success of Risk-Based Monitoring. While historically these tools have not played a major role in the quality management of clinical studies, as Risk-Based Monitoring adoption continues to increase, it will be critical for current technologies to incorporate these analytical capabilities in order for the approach to be truly successful.

Many organizations are already putting in place initial centralized analytics, key risk indicators, statistical monitoring methods etc., but there is still much refinement needed. Some are more advanced than others, but in my view, there is still a danger of ‘immature analytics’. By this, I mean that a study team could be using data from these tools to identify emerging risks, but if those metrics are not configured properly or well considered, teams run the risk of acting on unreliable data, which runs directly counter to what Risk-Based Monitoring aims to achieve.

Workflow support is also going to be very important and is evolving behind these two other areas. Prior to the study, workflow support is needed to manage the development, review and approval of a risk management plan, and during the study, effective support is required to follow up on emerging issues detected by centralized monitoring, including adequately documenting them through to resolution or remediation.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

Advanced Central Statistical Monitoring capabilities, which can identify outliers and unrealistic data patterns, are crucial. The industry is becoming more interested in this approach and, as a result, I think adoption rates will increase significantly in the near future as organizations strive to have a full complement of tools to detect operational and data integrity issues. That said, at this early stage of development, most organizations are still trying to figure out how this will work from a practical perspective.

Organizationally, my view continues to be that this is going to gravitate towards skill sets that exist in clinical data management organizations, within the clinical data management departments of both CROs and sponsors. Certainly biostatistical departments and representatives will be important stakeholders in the adoption process; however, they will actually play a more supportive, as opposed to central, role in the operational use of these tools, especially as the tools become increasingly sophisticated and gain more refined user interfaces.

While data managers are not statisticians, they typically have good data analytics skills, so in my opinion they are probably the best equipped to work with the monitoring team to translate data into an appropriate remediation action.

Discuss the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.

Even when study teams compare sites against each other there can be a level of subjectivity about where to set the statistical thresholds. For example, a team might statistically compare the rate of adverse events being recorded per subject across all of the sites in a clinical trial, but how does the team know when a site should actually be flagged as potentially under-reporting, or not reporting all of the adverse events that are occurring? Teams can define ‘x’ number of standard deviations from the study average as the appropriate threshold; however, different individuals or organizations may set thresholds at different levels, which is where the challenge lies. Currently there is no consensus that I am aware of, but it is likely that one will be established over time.

It is also important for study teams to understand that identifying potential risks is not black and white. Even once thresholds are set, sites should not be dismissed just because they haven’t hit the alert level. For example, there may be a site which is not flagged as a concern but sits just below the alert threshold, so it may be in a team’s interest to review activity at this site too.
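To make the arithmetic behind this kind of KRI concrete, here is a minimal Python sketch of the check described above: per-site adverse event rates are compared against the study average, sites falling more than a chosen number of standard deviations below it are flagged, and sites sitting just under that alert level are surfaced for a closer look. The site names, counts and threshold values are purely illustrative assumptions, not taken from any real study or product.

```python
# Minimal sketch of a KRI-style threshold check for potential AE under-reporting.
# Site names, counts and thresholds below are illustrative assumptions only.
from statistics import mean, stdev

# Adverse events reported and subjects enrolled per site (hypothetical values).
sites = {
    "Site 101": {"ae_count": 48, "subjects": 20},
    "Site 102": {"ae_count": 44, "subjects": 20},
    "Site 103": {"ae_count": 42, "subjects": 20},
    "Site 104": {"ae_count": 46, "subjects": 20},
    "Site 105": {"ae_count": 40, "subjects": 20},
    "Site 106": {"ae_count": 6,  "subjects": 20},  # suspiciously low reporting
}

ALERT_SD = 2.0   # flag sites more than 2 SD below the study average
WATCH_SD = 1.5   # also surface sites approaching, but not crossing, the alert level

rates = {name: s["ae_count"] / s["subjects"] for name, s in sites.items()}
study_avg, study_sd = mean(rates.values()), stdev(rates.values())

for name, rate in rates.items():
    deviation = (rate - study_avg) / study_sd  # distance from the study average, in SD
    if deviation <= -ALERT_SD:
        status = "ALERT: possible AE under-reporting"
    elif deviation <= -WATCH_SD:
        status = "WATCH: just below the alert threshold"
    else:
        status = "OK"
    print(f"{name}: {rate:.2f} AEs/subject ({deviation:+.2f} SD) -> {status}")
```

With these made-up numbers, the last site falls roughly two standard deviations below the study average and would be flagged, while the others pass; in practice the choice of ALERT_SD and WATCH_SD is exactly the subjective decision discussed above.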

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

I believe that its main impact will be to turn Risk-Based Monitoring into an expectation rather than a suggestion, and from a pragmatic perspective, it’s going to accelerate adoption across the industry. The FDA and EMA set a foundation which has really increased the conversation, but these were just guidelines that don’t carry the force of setting a specific GCP expectation. The ICH E6 Addendum sets clear expectations for the industry, so in my view, it is likely to be a game changer, with many organizations that were previously holding back having to actively start putting Risk-Based Monitoring processes in place. It is becoming clear that within a couple of years Risk-Based Monitoring will simply be the standard, accepted approach to operational quality management across the industry.

Join CluePoints’ dedicated Risk-Based Monitoring LinkedIn Group to keep abreast of the latest industry trends!


Defining Clinical Trial Thresholds: Objective Vs Subjective?



Over the past few weeks, we’ve been discussing the thoughts of some of the leading experts in Risk-Based Monitoring, and we’ve really appreciated those of you who have got involved in the debates. The penultimate question we put to our partners centered around the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.

Similar to my last post, which looked at how organizations are going to complement the use of KRIs with CSM, the partners offered quite a range of opinions. Two people who shared similar views were Angie Maurer, Co-Founder & CEO at Zynapsys, and Craig Serra, Senior Director and Data Management Business Process Owner at Pfizer, who agreed that there is a place for both in today’s clinical trials, with Craig commenting: “Until study teams start receiving enough clinical and operational data, perhaps three or four months into a study, they don’t really have the ability to be terribly objective. So until that point, there is nothing wrong with teams setting subjective thresholds based on previous experience.”

Jamie O’Keefe, Vice President, Life Sciences and R&D Practice at Paragon Solutions, discussed the challenge of defining subjective thresholds compared with being objective, which, in his opinion, is fairly straightforward. In order to set a subjective threshold, a study team needs to consider the opinions of many people, all of whom have important interactions with sites from a qualitative perspective. In Jamie’s view, the industry is currently missing a system which facilitates this conversation and brings those numerous opinions together to set one threshold.

Steve Young, Senior Director of Transformation Services at OmniComm Systems, pointed out that: “Even when study teams compare sites against each other there can be a level of subjectivity about where to set the statistical thresholds.” He used an example to demonstrate this: “A team might statistically compare the rate of adverse events being recorded per subject across all of the sites in a clinical trial, but how does the team know, when a site appears to have an unrealistically low adverse event rate, that it should be flagged as a site that is potentially under-reporting or not reporting all of the adverse events that are occurring? Teams can define ‘x’ number of standard deviations as the appropriate threshold; however, different individuals or organizations may set thresholds at different levels, which is where the challenge lies. Currently, there is no correct solution to this issue that I am aware of, but it is likely that a common consensus on defining those thresholds will be developed over time.”

Craig Serra provided a useful summary, concluding that: “Ultimately, listening to and acting on the data is key. After all, as an industry we are willing to let stats dictate the conclusion in terms of safety and efficacy of a trial, so why aren’t we letting that same hypothesis testing actually tell us about the quality of data during the trial, and what is more, use it during the trial to help safety and efficacy?”

From my own perspective, I couldn’t agree more with Craig: the statistical approach used to help determine safety and efficacy can also be leveraged, via Central Statistical Monitoring of the data, to determine indicators of risk by site. There is certainly very good reason to employ subjective thresholds as the trial starts, since the amount of data required to get a statistically valid, objective result may take a few weeks or months to accrue. Such a subjective threshold is best set using prior experience, but we also recommend using a statistical analysis of historic studies with a similar data structure in the same therapy area as the starting point. As the trial progresses and data accrues, we have seen that sponsors want to assess the Key Risk Indicators objectively as soon as practically possible, and for some variables (such as Adverse Events or SAEs) the use of both subjective and statistically-driven thresholds has been the standard approach. It is clear from the results we have seen, and the actions taken, that the objective, statistically derived results carry more weight in terms of ‘proving’ where risk lies within the study being analyzed.
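As a rough illustration of how these two kinds of threshold might be combined in practice, the hypothetical Python sketch below falls back on an experience-based (subjective) floor while too little data has accrued for a reliable estimate, then switches to a statistically derived floor once enough sites are reporting, without ever relaxing below the subjective limit. The constants, the minimum-site cut-off and the combination rule are all illustrative assumptions, not a description of any particular system.

```python
# Illustrative combination of a subjective, experience-based threshold with a
# statistically derived one as study data accrues. All values are hypothetical.
from statistics import mean, stdev

SUBJECTIVE_MIN_AE_RATE = 1.0  # AEs per subject; assumed to be seeded from prior
                              # experience and historic studies in the same therapy area
MIN_SITES_FOR_STATS = 5       # too few reporting sites -> study-level stats unreliable
ALERT_SD = 2.0                # statistically derived floor: study mean minus 2 SD


def effective_lower_threshold(site_rates):
    """Return the AE-rate floor below which a site would be flagged for review."""
    if len(site_rates) < MIN_SITES_FOR_STATS:
        # Early in the study: rely on the subjective, experience-based limit.
        return SUBJECTIVE_MIN_AE_RATE
    statistical_floor = mean(site_rates) - ALERT_SD * stdev(site_rates)
    # Once enough data has accrued, give the statistically derived floor more
    # weight, but never relax below the subjective limit for a safety-critical KRI.
    return max(statistical_floor, SUBJECTIVE_MIN_AE_RATE)


# Early study (3 sites reporting) vs. later (8 sites reporting).
print(effective_lower_threshold([2.1, 1.9, 2.3]))                           # -> 1.0
print(effective_lower_threshold([2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4, 2.0]))  # -> ~1.68
```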

Central Statistical Monitoring Should Support Key Risk Indicators



As you may well know, much of the early work in Risk-Based Monitoring has focused on relatively simple Key Risk Indicators (KRIs) and traffic-light dashboards, which are easy to understand. However, there is now a growing requirement to complement this approach with a more sophisticated and comprehensive analysis of data using a Central Statistical Monitoring (CSM) methodology. At CluePoints, we’re interested to know how you think companies are going to adopt these complementary approaches to ensure data accuracy and integrity in their trials.

Unlike the other Risk-Based Monitoring topics discussed over recent weeks, when we spoke with a number of our partners about this, the responses were quite wide-ranging, so we’ve tried to provide a quick overview for you in this post (for their full responses, please see here). We would love to hear your views and experiences on this matter too, so please feel free to get the discussion started!

The majority of our partners agreed that we are likely to see an increase in adoption of Central Statistical Monitoring across the industry over the coming years due to the advanced capabilities and value that the technology offers. Angie Maurer, Co-Founder & CEO at Zynapsys, commented: “These systems deliver not only more transparent and efficient data but the sophisticated technology ensures data is also much more robust, so I think we will definitely see an increase in adoption.” This opinion was echoed by Steve Young, Senior Director of Transformation Services at OmniComm Systems, who said: “Advanced Central Statistical Monitoring capabilities, that can identify outliers and unrealistic data patterns, are crucial. The industry is becoming more interested in this approach and as a result, I think adoption rates will increase significantly in the near future.”

Oracle Health Sciences’ Senior Director of Life Sciences Product Strategy, James Streeter, thinks that we are likely to see the use of Key Risk Indicators (KRIs) reduce over time, saying: “For now, what we are likely to see is companies adopting an approach which combines the use of Key Risk Indicators (KRIs) and Central Statistical Monitoring, to give them a broader picture of the potential risks within their clinical trials. As study teams begin to understand that Central Statistical Monitoring provides them with more access to real-world data, coupled with the increase in data being collected from emerging technology, the use of Key Risk Indicators (KRIs) will diminish.”

In terms of practical implementation, Adam Butler, Senior Vice President, Strategic Development & Corporate Marketing at Bracket Global urged sponsors to ensure they develop a strategy which addresses all of the potential risks that could arise when collecting data, and when doing this, utilize Risk-Based Monitoring and Central Statistical Monitoring as support tools. Jamie O’Keefe, Vice President, Life Sciences and R&D Practice at Paragon Solutions, thinks that we will see a divide between large and smaller companies, commenting: “It will be much more achievable for smaller organizations to integrate Central Statistical Monitoring and adopt this new approach across all of their trials. The challenge for big pharma companies will be changing existing complex infrastructure. Trying to drive this change through multi-million dollar, multi-year implementations, will be a huge task and have a significant impact on business.”

While Jamie thinks the adoption of Central Statistical Monitoring could be easier for smaller organizations, Angie Maurer thinks that budget will have an influence for smaller companies and would like to see providers develop a flexible platform that will allow even small start-up companies to afford this new technology.

Finally, Steve Young raised an interesting question about who within an organization should take responsibility for the management of Central Statistical Monitoring. In Steve’s opinion, while data managers are not statisticians, they have the data analytics skills, so will be best equipped to work with the monitoring team to translate data into an appropriate remediation action.

In our own experience to date, it is evident that Central Statistical Monitoring is just as applicable for small pharma as it is for large pharma and CROs. The ‘light checks’ offered by Key Risk Indicator (KRI) functionality certainly provide companies with an operational tool to regularly check data and drive reduced monitoring activities. However, most of the companies that CluePoints works with have identified that they also want to use an independent and objective approach to comprehensively interrogate the data. The KRI approach focuses on only 15-20 largely operational variables, whereas CSM leaves no stone unturned in comprehensively analyzing all the clinical data within a study. The result is a rigorous, scientifically validated health check of the study and the identification of anomalies within the data that can be examined and course-corrected to ensure patient safety and improved accuracy and integrity. The regulators favor this approach, and the revised ICH E6 guideline also helps sponsors and CROs alike determine how Central Monitoring is best approached. The gold medal position is for sponsors to harness the power of both KRIs and Central Statistical Monitoring to ensure the improvement and integrity of the data throughout their studies and reduced risk when it comes to submission.