
Risk-Based Monitoring Insights from the Industry: Dr. Peter Schiemann

May 26, 2016 (updated March 9, 2022)

In the fourth installment of the CluePoints Risk-Based Monitoring Q&A series, we spoke to Dr. Peter Schiemann, Managing Partner and Co-Founder of Widler & Schiemann, who had some interesting views to share with us.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different size?

At Widler & Schiemann we have observed that the adoption of Risk-Based Monitoring is still very much in its infancy, with companies of all sizes experimenting with different approaches. For that reason, we don’t think we are in a position to talk about “best practice” yet. In some cases we have seen companies following the FDA guidance a bit too closely and, as a result, neglecting the suggestions in the EMA reflection paper. Indeed, on the one hand, the focus in those studies has been very much on the practical execution of study monitoring, such as finding ways to reduce the level of Source Document Verification (SDV), and not necessarily on the prerequisites that organizations must have in place to enable successful Risk-Based Monitoring; on the other hand, strategies and processes for designing protocols that are more “fit for purpose” have been neglected.

Many companies are using generic risk indicators to measure certain events, for example low(er) frequency in adverse event reporting or protocol deviations, and then using the metrics collected to draw conclusions on site behavior and determine where any changes to the monitoring plan are needed. In addition, tools such as CluePoints’ SMART engine are being used to deliver a central review of the data being collected during a clinical trial to check for outliers and on the basis of such insight fine-tune monitoring activities.

From what we have seen so far, many of the companies attempting Risk-Based Monitoring adoption are not effectively determining risk during the protocol design and implementation process, and are therefore not initiating risk mitigation before issues materialize. For instance, the number and type of endpoints of the study, the selection and posology of the comparators, the patient population, experience of outsourcing to service providers, the knowledge of site teams, the experience of the partners and contractors involved, etc. are elements of a study that can cause issues such as protocol deviations if they are not properly addressed in the design. All of this information is key and should be used to assess the risk inherent in the study protocol and study set-up. Failing to do so is something we know the regulators see as sub-optimal and are not inclined to accept.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring – are they ready for it and why?

A number of regulators have published papers outlining their support for the approach and their expectations; however, the guidance does not, rightly so, provide details on how Risk-Based Monitoring should actually be implemented. Regulators have resisted the call to define what a “right” RbM approach is, as it is clear that the sponsor and its partners (e.g., CRO) are the parties responsible for identifying the right Risk-Based Monitoring strategy; a one-size-fits-all approach would not do justice to a true Quality by Design and Risk-Based Monitoring approach. While those at the agencies responsible for the recommendations have an understanding of how Risk-Based Monitoring can work practically, unfortunately, there are still many who are not yet confident with the approach. This, of course, may cause some difficulties when they come to carry out inspections of Risk-Based Monitoring approaches.
From past discussions we have had with competent authorities, we believe they will follow a path of logic when it comes to approvals. Risk-Based Monitoring approaches need to be based on data and facts and follow a clear plan. This backs up the EMA reflection paper, which advises that decisions should be made on fact and that organizations should be able to substantiate decisions.

As it stands, it is our view that neither party is currently ready for a full inspection. That said, Risk-Based Monitoring is a work in progress, and progress has been, and is still being, made. This is further reflected in the ICH Addendum.

How do you think the multitude of electronic systems is going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete, or will they need to evolve too?

At last year’s PCT conference, and on other occasions, we visited many vendors who have developed software that supports Risk-Based Monitoring, but in our opinion there are currently very few able to fulfill all of the requirements of this approach, let alone incorporate the multitude of systems that need to be used concurrently. Quite simply, the evolution of current technology is definitely needed in order to meet industry needs. Software solutions that simply produce visualizations of trends, without providing objective criteria for deciding what is an “acceptable” vs. “non-acceptable” deviation, do not serve the purpose of enabling a fact-driven risk management approach. In other words, a software solution that does not enable the user to define the “Design Space” is an unsuitable Risk-Based Monitoring tool.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards, which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM).
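The traffic-light idea mentioned above can be reduced to a simple threshold check. The sketch below is purely illustrative: the KRI (protocol deviations per enrolled patient) and the amber/red cut-offs are invented for the example, not recommended limits.

```python
# Minimal sketch of a traffic-light KRI check. The KRI values and the
# amber/red thresholds are illustrative, not recommended limits.

def traffic_light(value, amber, red):
    """Classify a KRI value against amber/red thresholds (higher = worse)."""
    if value >= red:
        return "red"
    if value >= amber:
        return "amber"
    return "green"

# Hypothetical KRI: protocol deviations per enrolled patient at each site
kri_values = {"Site A": 0.05, "Site B": 0.22, "Site C": 0.41}

for site, value in kri_values.items():
    print(site, traffic_light(value, amber=0.15, red=0.35))
```

The appeal of such dashboards is their readability; the limitation, as noted above, is that the thresholds themselves are subjective choices rather than objective, data-driven criteria.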

How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?

The aim with Risk-Based Monitoring is to prevent risks and issues from occurring by detecting trends in the data in real time. The use of KRIs is an important element of this, as they tell study teams how people behave and if/how many mistakes they make, and alert teams to risks before too many have occurred. The industry can utilize CSM to complement this. With a CSM system, unexpected deviations or outliers are spotted by examining data produced from patients who have already been treated. The new regulations require sites and companies to carry out a self-assessment of their data; by helping teams identify which sites and data they can trust, CSM can make this happen.
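One common form such a central check can take (a minimal sketch; the site data and the |z| > 3 cut-off are invented for illustration, and this is not CluePoints' actual method) is to compare each site's summary statistic against the distribution of the remaining sites and flag the inconsistent one:

```python
# Minimal sketch of a central statistical check: flag a site whose mean
# measurement differs markedly from the other sites. All data are invented.
import statistics

site_means = {"Site A": 120.4, "Site B": 119.8, "Site C": 120.9, "Site D": 132.5}

flags = {}
for site, mean in site_means.items():
    # Compare this site against the distribution of all the other sites
    others = [m for s, m in site_means.items() if s != site]
    mu = statistics.mean(others)
    sigma = statistics.stdev(others)
    z = (mean - mu) / sigma
    # A large |z| suggests the site is inconsistent with the rest of the
    # study (the cut-off of 3 is illustrative)
    flags[site] = abs(z) > 3

print(flags)
```

Real CSM systems apply many such tests across all collected variables at once, but the principle is the same: the rest of the study serves as the reference against which each site is judged.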

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

We think this is best answered through an example. Take adverse event reporting, a very important activity which is directly linked to patient safety. How do study teams ensure that sites are reporting adverse events correctly, i.e., all the events that should be reported? Study teams could carry out a comparison, but there are considerations with this. Simply comparing the raw number of adverse events reported at each site is not appropriate, as one site could just be better at reporting these incidents, or one may have more patients than another. Study teams must instead compare each site against all sites involved and examine the average number of adverse events reported per patient, per visit. It is also important to take any cultural differences into consideration when making comparisons; for example, patients in one geography could be culturally much less likely to complain than in another.
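The normalization described above can be sketched in a few lines. All figures here are hypothetical, and the "half the study-wide average" flagging rule is an illustrative choice, not a validated criterion:

```python
# Minimal sketch: compare per-patient, per-visit adverse event (AE) rates
# across sites. Site data and the flagging rule are hypothetical.

sites = {
    # site_id: (total AEs reported, number of patients, visits per patient)
    "Site A": (40, 20, 5),
    "Site B": (6, 15, 5),
    "Site C": (33, 18, 5),
}

# Normalize: AE rate per patient, per visit, so sites of different
# sizes become comparable
rates = {s: aes / (patients * visits)
         for s, (aes, patients, visits) in sites.items()}

# Compare each site against the average across all sites, not site-to-site
overall = sum(rates.values()) / len(rates)

for site, rate in rates.items():
    # Flag sites reporting far fewer AEs than the study-wide average,
    # a possible sign of under-reporting (the rule is illustrative)
    flag = "review" if rate < 0.5 * overall else "ok"
    print(f"{site}: {rate:.3f} AEs/patient/visit ({flag})")
```

Note that a low rate is only a signal to investigate, not proof of under-reporting; as mentioned above, cultural differences between regions can legitimately shift these rates.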

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

For the moment, Risk-Based Monitoring is more or less a recommendation. However, if the ICH E6 Addendum goes through, it is likely to come into effect in early 2017, making it quite urgent for companies to start thinking about how to address the requirements.

The most important thing is having Risk-Based Monitoring and quality management systems in place, plus complete vendor oversight. It is also crucial for the industry to realize that this will not only impact sponsors, but all organizations involved in clinical studies. Large hospitals, for example, would also need to better document their quality management systems, as well as their approaches to monitoring.

As time goes on, regulators will increasingly question the reliability of data from organizations not deploying a risk-based approach, especially when errors that are identified and corrected at one site are not identified at other sites in the same study, or even in other studies. In addition, it is worth bearing in mind that a risk-based approach to monitoring is more cost-effective in the long term, so companies making the shift now will see the financial rewards of their efforts sooner.

There is no doubt that the Addendum will have a major impact. When it comes to a risk-based approach, many companies are setting the wrong objectives and the wrong incentives, with many concentrating on the potential cost-savings thanks to reduced site monitoring, less travel, reduced resource costs, etc. With monitoring activity typically accounting for 30-40 percent of clinical trial costs, this is of course tempting.

What needs to happen is for the Risk-Based Monitoring protocol to be designed carefully from the outset, in order to minimize issues when it comes to operationalizing the study. If a study is planned correctly, with a clear primary objective and well-defined endpoints, and the focus kept on these elements without distraction by “unnecessary” parameters or exploratory objectives, study teams can streamline execution. By reducing the trial duration by even one week, organizations can potentially save thousands, if not millions, of dollars, not to mention the additional benefits of introducing the drug to market earlier and gaining longer effective patent exclusivity for the marketed product.

This quality by design approach is what the ICH E6 Addendum proposes, rather than retrospectively checking data once events have occurred. Of course, it will not be easy for the industry to make this shift, but organizations that fail to make the transition will lose out in the long run.

Patrick Hughes

Patrick holds a Marketing degree from the University of Newcastle-upon-Tyne, UK, and a post-graduate Marketing diploma in Business-to-Business Marketing Strategy from Northwestern University - Kellogg School of Management, Chicago, Illinois. Responsible for leading global sales, product, marketing, operational and technical teams throughout his career, Patrick is a Senior Executive with over eighteen years international commercial experience within life sciences, healthcare and telecommunications. In the past, Patrick consulted on corporate and commercial strategy for various life sciences companies and was responsible for successfully positioning ClinPhone as the leading Clinical Technology Organization during his 10-year tenure with the company.