Monthly Archives: May 2016

Are the Regulatory Authorities Ready for the Adoption of Risk-Based Monitoring?


Firstly, thanks to those of you who got involved in our recent discussion about current best practice in Risk-Based Monitoring; it was great to hear from you. Next, we asked our partners to tell us how they think the regulatory authorities will respond to the adoption of Risk-Based Monitoring and, importantly, whether they are ready for it.

With both the FDA and the EMA having released draft guidance on the conduct of Risk-Based Monitoring, there is no doubt that a shift to this approach by sponsors and CROs will be welcomed by the authorities. However, when it comes to actually putting it into practice, our partners still have questions about whether those carrying out audits and inspections are, in practice, ready and willing to accept these new methodologies. James Streeter, Senior Director of Life Sciences Product Strategy at Oracle Health Sciences, commented that despite the regulatory authorities pushing for widespread adoption of Risk-Based Monitoring, “in reality, it may take some time for individual auditors to accept the new Risk-Based Monitoring methodologies and how these differ between organizations,” and this could prove a hurdle for the industry.

In an industry typically slow to adapt to change, it is no surprise that some organizations are yet to respond to this guidance – perhaps because it is just that, guidance. Karen Fanouillere, Biostatistics Project Leader at Sanofi, offered one explanation, saying that, “while the authorities are definitely on board with Risk-Based Monitoring and already advising the industry to adopt this approach, they have yet to outline any specific recommendations or guidance on exactly how they would like to see it implemented.” Our partners were in general agreement that the introduction of the ICH (E6) Addendum later this year should provide some of the much-needed clarity and ‘specifics’ that the industry is waiting for, and will also force the hand of many organizations still resisting the change.

All that said, the majority of our partners are in regular discussions with the authorities, and many are of the opinion that Risk-Based Monitoring methodologies which, as Dr. Peter Schiemann, Managing Partner and Co-Founder of Widler & Schiemann, said, “are based on data and facts, and follow a clear plan,” are likely to be accepted by the regulators. Steve Young, Senior Director of Transformation Services at OmniComm, agreed with this viewpoint, saying: “The door has been opened very clearly by the regulatory authorities to encourage the industry to move forward with Risk-Based Monitoring,” and, “as long as sponsors have a well-documented quality management plan that demonstrates how risk assessment was carried out, how the monitoring plan was guided by that risk assessment and makes clear the findings (and any remediations), then there should be few issues.”

From the CluePoints perspective, we can certainly see that the agencies, via the various guidance documents, are clearly insisting that sponsors and their partners adopt new centralized monitoring processes that will both improve data quality and reduce cost. The agencies themselves are also scrutinizing their own processes, and I wouldn’t mind betting that they will adopt similar new techniques for selecting sites for inspection in the near future. The regulatory bodies have certainly given the industry the push it needed to effect change that will herald a new way of managing study conduct and ensuring data quality while reducing risk. At CluePoints, we use a very effective analogy with the airline industry of the late 1980s, which went through a comprehensive process re-engineering designed to reduce costs while also improving safety. The results were extraordinary, and the similarities to the challenges we now collectively face in Pharma are remarkable. Let us know if you’d like to hear more.

What are your experiences of this? Do you have any advice to share from your experiences with the regulators?

Risk-Based Monitoring Insights from the Industry: Dr. Peter Schiemann


Part Four

In the fourth installment of the CluePoints Risk-Based Monitoring Q&A series, we spoke to Dr. Peter Schiemann, Managing Partner and Co-Founder of Widler & Schiemann, who had some interesting views to share with us.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different size?

At Widler & Schiemann we have observed that the adoption of Risk-Based Monitoring is still very much in its infancy, with companies of all sizes experimenting with different approaches. For that reason, we don’t think we are in a position to talk about “best practice” yet. In some cases we have seen companies following the FDA guidance a bit too closely and, as a result, neglecting the suggestions in the EMA reflection paper. Indeed, on the one hand, the focus in those studies has been very much on the practical execution of study monitoring, such as finding ways to reduce the level of Source Document Verification (SDV), and not necessarily on the prerequisites that organizations must have in place to enable successful Risk-Based Monitoring; on the other hand, strategies and processes to design protocols that are more “fit for purpose” have been neglected.

Many companies are using generic risk indicators to measure certain events, for example a low(er) frequency of adverse event reporting or protocol deviations, and then using the metrics collected to draw conclusions about site behavior and determine where any changes to the monitoring plan are needed. In addition, tools such as CluePoints’ SMART engine are being used to deliver a central review of the data being collected during a clinical trial to check for outliers and, on the basis of such insight, fine-tune monitoring activities.

From what we have seen so far, many of the companies attempting Risk-Based Monitoring adoption are not effectively determining risk during the protocol design and implementation process and, as a result, are not initiating risk mitigation before issues materialize. For instance, the number and type of endpoints of the study, the selection and posology of the comparators, the patient population, experience of outsourcing to service providers, the knowledge of site teams, the experience of the partners and contractors involved, etc. are elements of a study that can cause issues such as protocol deviations if they are not properly designed for. All of this information is key and should be used to assess the risk inherent in the study protocol and study set-up. Failing to do so is something we know the regulators see as sub-optimal and are not inclined to accept.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

A number of regulators have published papers outlining their support for the approach and their expectations; however, the guidance, rightly so, does not provide details on how Risk-Based Monitoring should actually be implemented. Regulators have resisted the call to define what a “right” RbM approach is, as it is clear that the sponsor and its partners (e.g., the CRO) are the responsible parties for identifying the right Risk-Based Monitoring strategy; a one-size-fits-all approach would not do justice to a true Quality by Design and Risk-Based Monitoring approach. While those at the agencies responsible for the recommendations have an understanding of how Risk-Based Monitoring can work in practice, unfortunately there is still a large number who are not yet confident with the approach. This, of course, may cause some difficulties when they come to carry out inspections of Risk-Based Monitoring approaches.

From past discussions we have had with competent authorities, we believe they will follow a logical path when it comes to approvals: Risk-Based Monitoring approaches need to be based on data and facts and follow a clear plan. This is in line with the EMA reflection paper, which advises that decisions should be made on facts and that organizations should be able to substantiate their decisions.

As it stands, it is our view that neither party is currently ready for a full inspection. That said, Risk-Based Monitoring is a work in progress, and progress has been (and is currently being) made. This is further reflected in the ICH Addendum.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

At last year’s PCT conference and on other occasions, we visited many vendors who have developed software that supports Risk-Based Monitoring, but in our opinion there are currently very few able to fulfill all of the requirements of this approach, let alone incorporate the multitude of systems that need to be used concurrently. Quite simply, the evolution of current technology is definitely needed in order to meet industry needs. Software solutions that simply produce visualizations of trends, without providing objective criteria for deciding what is an “acceptable” vs. “non-acceptable” deviation, do not serve the purpose of enabling a fact-driven risk management approach. In other words, a software solution that does not enable the user to define the “Design Space” is an unsuitable Risk-Based Monitoring tool.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards, which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

The aim of Risk-Based Monitoring is to prevent risks and issues from occurring by detecting trends in the data in real time. The use of KRIs is an important element of this, as they tell study teams how people behave and if (and how many) mistakes they make, and alert teams to risks before too many have occurred. The industry can utilize CSM to complement this. With a CSM system, unexpected deviations or outliers are spotted by examining the data produced from patients who have already been treated. The new regulations require sites and companies to carry out a self-assessment of their data, and by helping teams identify which sites and data they can trust, CSM can make this happen.
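To make this distinction concrete, below is a minimal, purely illustrative sketch (in Python) of the kind of simple traffic-light KRI check described above, applied to a per-site protocol-deviation rate. The thresholds, field names and metric are our own assumptions for the sake of illustration, not a description of any particular vendor’s system; a CSM engine would complement a check like this by statistically screening the accumulated patient data itself for unexpected outliers.

```python
# Minimal sketch of a traffic-light KRI check. Thresholds and the
# deviations-per-patient metric are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class SiteSummary:
    site_id: str
    enrolled_patients: int
    protocol_deviations: int


def kri_status(site: SiteSummary,
               amber_per_patient: float = 0.5,
               red_per_patient: float = 1.0) -> str:
    """Classify a site's protocol-deviation KRI as green/amber/red."""
    if site.enrolled_patients == 0:
        return "grey"  # no data yet, nothing to flag
    rate = site.protocol_deviations / site.enrolled_patients
    if rate >= red_per_patient:
        return "red"
    if rate >= amber_per_patient:
        return "amber"
    return "green"


if __name__ == "__main__":
    sites = [
        SiteSummary("S001", enrolled_patients=20, protocol_deviations=3),
        SiteSummary("S002", enrolled_patients=12, protocol_deviations=15),
    ]
    for s in sites:
        print(s.site_id, kri_status(s))  # S002 would show as "red"
```

The point of the sketch is simply that a KRI of this kind depends on thresholds someone has to choose, which is exactly where the subjectivity discussed in the next question comes from.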

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

We think this is best answered through an example. Take adverse event reporting, a very important activity which is directly linked to patient safety. How do study teams ensure that sites are reporting adverse events correctly, i.e., all the events that should be reported? Study teams could carry out a comparison, but there are considerations with this. Simply comparing the raw number of adverse events reported at each site is not appropriate, as one site could just be better at reporting these events, or one may have a higher number of patients than another. Study teams must instead compare one site against all sites involved and examine the average number of adverse events reported per patient, per visit. It is also important to take any cultural differences into consideration when making comparisons; for example, patients in one geography could be culturally much less likely to complain than in another.
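As a rough illustration of the comparison described above, the sketch below computes each site’s adverse-event rate per patient visit and flags sites reporting well below the pooled rate of all other sites. The data, field names and cut-off are hypothetical assumptions; in practice a formal statistical comparison, and adjustments for regional or cultural differences, would be needed.

```python
# Illustrative comparison of one site's AE reporting rate against the
# pooled rate of all other sites, per patient visit. The 0.5 cut-off and
# the data below are assumptions for illustration only.
from typing import Dict, List, Tuple

# per site: (number of reported adverse events, number of patient visits)
SiteCounts = Dict[str, Tuple[int, int]]


def ae_rate_flags(counts: SiteCounts, ratio_cutoff: float = 0.5) -> List[str]:
    """Flag sites whose AE rate per visit is well below that of all other sites."""
    flagged = []
    for site, (aes, visits) in counts.items():
        if visits == 0:
            continue
        other_aes = sum(a for s, (a, v) in counts.items() if s != site)
        other_visits = sum(v for s, (a, v) in counts.items() if s != site)
        if other_visits == 0:
            continue
        site_rate = aes / visits
        others_rate = other_aes / other_visits
        # Flag if the site reports less than ratio_cutoff of the pooled rate.
        if others_rate > 0 and site_rate < ratio_cutoff * others_rate:
            flagged.append(site)
    return flagged


if __name__ == "__main__":
    data = {"S001": (40, 200), "S002": (35, 180), "S003": (4, 190)}
    print(ae_rate_flags(data))  # S003 reports far fewer AEs per visit
```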

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

For the moment, Risk-Based Monitoring is more or less a recommendation. However, if the ICH E6 Addendum goes through, it is likely to come into effect in early 2017, making it really quite urgent for companies to start thinking about how to address the requirements.

The most important thing is having Risk-Based Monitoring and quality management systems in place, plus complete vendor oversight. It is also crucial for the industry to realize that this will not only impact sponsors, but all organizations involved in clinical studies. Large hospitals, for example, would also need to better document their quality management systems, as well as their approaches to monitoring.

As time goes on, regulators will increasingly question the reliability of data from organizations not deploying a risk-based approach, especially when errors that occur – and are perhaps identified and corrected at one site – are not identified at other sites in the same study, or even in other studies. In addition, it is worth bearing in mind that a risk-based approach to monitoring is more cost-effective in the long term, so companies making the shift now will see the financial rewards of their efforts sooner.

There is no doubt that the Addendum will have a major impact. When it comes to a risk-based approach, many companies are setting the wrong objectives and the wrong incentives, concentrating on the potential cost savings from reduced site monitoring, less travel, reduced resource costs, and so on. With monitoring activity typically accounting for 30-40 percent of clinical trial costs, this is of course tempting.

What needs to happen is for the Risk-Based Monitoring protocol to be designed carefully from the outset, in order to minimize issues when it comes to operationalizing the study. If a study is planned correctly, with a clear primary objective, well-defined endpoints, and a focus on these elements, without distraction from “unnecessary” parameters or exploratory objectives, study teams can streamline execution. By reducing the trial duration by even one week, organizations can potentially save thousands, if not millions, of dollars, not to mention the additional benefits of introducing the drug to market earlier and enjoying longer patent exclusivity for the marketed product.

This quality-by-design approach is what the ICH E6 Addendum proposes, rather than the historical approach of checking data once events have occurred. Of course, it will not be easy for the industry to make this shift, but those organizations that fail to make the transition will ultimately lose out in the long run.

How will Risk-Based Monitoring Coexist with other Clinical Trial Technology?


We’ve been really enjoying hearing your thoughts on the latest Risk-Based Monitoring hot topics over the last few weeks, so thank you to everyone who has taken time out to be involved. Next on our agenda is technology. As Risk-Based Monitoring takes off and becomes more mainstream within clinical studies, how will new technology exist alongside the multitude of electronic systems already in use? Will we see an evolution of technology or are some systems at risk of becoming obsolete?

We put these questions to some of our partners during our recent Q&A series, and Angie Maurer, Co-Founder & CEO of Zynapsys, summed up the general consensus of the group when she said, “we are likely to see the evolution and integration of current systems to incorporate Risk-Based Monitoring capabilities, as well as the introduction of new technologies.” James Streeter, Senior Director of Life Sciences Product Strategy at Oracle Health Sciences, also quite rightly pointed out that, “studies can last for many years, so the systems being used in current trials or those beginning soon, will need to co-exist with new solutions introduced to the market over the coming years.”

Most colleagues we spoke with discussed the probability that current clinical trial technology will have to evolve over the coming years to incorporate Risk-Based Monitoring capabilities or risk becoming obsolete further down the line. In fact, many discussed the likelihood that going forward, providers of existing solutions, such as EDC, CTMS or site monitoring technology, will need to work together to integrate their systems to create larger, comprehensive Risk-Based Monitoring platforms for the industry. Angie Maurer was passionate about this potential for collaboration, saying that, “by working together in partnership, companies specializing in different areas, for example, EDC and monitoring, can create powerful new solutions for the industry.” Angie went on to say that in an industry saturated with specialists, all of which have spent years developing a deep understanding of their customers and markets, it is unlikely that one company will be able to provide the best quality solution in all the areas needed. What are your thoughts on industry collaboration?

While everyone we spoke with agreed that clinical trial technology needs to – and will – evolve over time, many pointed out that there is likely to be a considerable period of transition for the industry, while companies work to make this change happen. Commenting on this, Adam Butler, Senior Vice President, Strategic Development & Corporate Marketing, Bracket Global said: “Pharmaceutical research is at least one generation behind in terms of the adoption and integration of new technologies, so it needs to play catch up.” Craig Serra, Senior Director and Data Management Business Process Owner at Pfizer, agreed, saying that, “given our industry’s track record when it comes to the adoption of new processes and ideas, I think there will be a coexistence of all this technology for at least a decade, if not two or three.”

As a provider of enabling technology to drive Risk-Based Monitoring and Data Quality Oversight, CluePoints sees that, whilst there is still a huge amount of process change required to fully exploit the technological advancements, many sponsors are finding that they are still able to use existing technology today to improve data quality. Further, periodic assessments of both operational and clinical data quality are helping to improve the way in which data is collected using existing, incumbent technology. Several companies that are now invested in RBM are taking a big step by looking to implement a workflow for an enhanced data interrogation and monitoring process that is a hybrid of new and existing technology tools. If anyone would like more information about this approach, please get in touch and we would be delighted to provide an overview of how these seamless links have been implemented.

Risk-Based Monitoring Insights from the Industry: Karen Fanouillere


Part Three

In the third of an eight-part Q&A series exploring some of the current hot topics in Risk-Based Monitoring (RBM), CluePoints has been speaking with Karen Fanouillere, a Biostatistics Project Leader for 15 years and now Head of Clinical Information Governance at Sanofi, about the perspectives of a large pharma company implementing new monitoring methodologies for drug development programs.

Here’s what Karen had to say when we put our questions to her.

When it comes to the practical implementation of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?

In my experience, the size of the company has an impact on how Risk-Based Monitoring is implemented. For example, for larger organizations, the implementation of any new practice tends to be more difficult because of the complexity of integrating the solutions practically throughout the business.

At Sanofi, until Risk-Based Monitoring becomes practically available for use in all studies, we are only utilizing the approach for pivotal and large-scale studies. Our decision to use Risk-Based Monitoring within a trial usually depends on the number of patients and number of sites, or whether the type of study lends itself to a Risk-Based Monitoring approach. For Phase 1 studies, for example, where we feel Risk-Based Monitoring may not add as much value, we are still opting to utilize more traditional monitoring approaches.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

While the authorities are definitely on board with Risk-Based Monitoring and already advising the industry to adopt this approach, they have yet to outline any specific recommendations or guidance on exactly how they would like to see it implemented. At Sanofi, we began implementing Risk-Based Monitoring because we recognized it as a way to gain insight into the quality and the precision of data in our studies, not because it was advised by the regulatory authorities.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

I think that there are multiple ways of integrating Risk-Based Monitoring within the many tools and systems currently in use. There are numerous solutions implemented across the industry, all of which identify trends in study data in different ways – the industry now needs to find a way to integrate these into one cohesive offering.

The introduction of Risk-Based Monitoring technology means we can now view much more comprehensive data, and view it centrally across sites. As well as identifying potential risks to the study at the start of a program, Risk-Based Monitoring allows teams to identify trends in data more quickly than ever before during the course of a study, meaning they can take remedial action and minimize potential issues before they happen or progress any further. Existing systems are not designed for that. And while I don’t think these will become obsolete, as many offer capabilities complementary to Risk-Based Monitoring, there will certainly need to be some evolution of the technologies in order to create more appropriate risk management platforms.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards, which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

Sanofi first implemented central statistical monitoring (CSM) in a pilot study in 2013, and now we have just begun implementing KRIs, so we have done things a little bit in reverse.

In the past, we only had between 5 and 10 indicators that could be considered a KRI, which was very simple and not very useful or user-friendly. Because of this, we moved away from the use of KRIs. Then we initiated CSM with CluePoints, which has enabled us to derive KRIs and given us greater understanding of how to utilize them. So we are now working on a pilot study to implement a dashboard with KRIs, and appreciate that both approaches can add significant value to the data monitoring process.

Moving forward, we would like to implement both approaches, particularly in our larger trials, as we acknowledge the importance and value both can bring to a study. However, the challenge is finding a way to do this without doing the same assessments twice, so the team is currently working on developing a process which integrates the use of CSM and KRIs in the same study, which avoids gaps and minimizes duplication.

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

At Sanofi, we use a defined list of standard KRIs that are implemented across all of our study programs, but it is the study team’s responsibility to decide whether to use pre-defined thresholds or to compare sites against the average of the values observed within the study. We like to avoid the fear of the blank page, so pre-defined KRIs help us focus and look, for example, at adverse events, treatment withdrawals, missing data for critical endpoints, and so on. In addition, at the start of a trial, we provide a list of factors based on the biggest issues posed to a particular therapeutic area or trial, drawing on our experience and on the figures reported in the literature – so, for example, we can check that a study enrolls the expected 10% of diabetic patients and identify sites with a very low rate of diabetic patients for investigation. CSM, on the other hand, allows us to compare one site against all other sites and identifies challenges that aren’t always predictable, allowing us to follow up potential under- or over-estimation at sites.
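The sketch below illustrates the two kinds of check Karen describes: a pre-defined threshold (the expected ~10% rate of diabetic patients mentioned above) and a comparison of each site against the average observed across the study. The flagging margins, function names and data are illustrative assumptions, not Sanofi’s actual rules.

```python
# Illustrative sketch: flag sites whose rate of diabetic patients is well
# below a pre-defined expectation or well below the study-wide average.
# Margins (0.5 of expected / 0.5 of study average) are assumptions.
from typing import Dict, List


def flag_low_rate_sites(diabetic: Dict[str, int],
                        enrolled: Dict[str, int],
                        expected_rate: float = 0.10,
                        min_fraction_of_expected: float = 0.5,
                        min_fraction_of_study: float = 0.5) -> List[str]:
    total_diabetic = sum(diabetic.values())
    total_enrolled = sum(enrolled.values())
    study_rate = total_diabetic / total_enrolled if total_enrolled else 0.0
    flagged = []
    for site, n in enrolled.items():
        if n == 0:
            continue
        rate = diabetic.get(site, 0) / n
        below_expected = rate < min_fraction_of_expected * expected_rate
        below_study = study_rate > 0 and rate < min_fraction_of_study * study_rate
        if below_expected or below_study:
            flagged.append(site)
    return flagged


if __name__ == "__main__":
    enrolled = {"S001": 100, "S002": 80, "S003": 120}
    diabetic = {"S001": 11, "S002": 9, "S003": 1}
    print(flag_low_rate_sites(diabetic, enrolled))  # S003 stands out
```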

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

This is something that we have been trying to anticipate during the past year. In terms of EDC, Risk-Based Monitoring, and numerous other tools, we have been trying to improve the way they integrate to streamline our processes. We are also currently working towards formalizing how we document the monitoring of risk.

While there is no doubt we are making progress, I think there are definitely still occasions when we are all guilty of paying the most attention to risk in our respective areas. To overcome this, what organizations need to do now is start looking at risk in a more holistic sense. By that I mean getting the team to consider risk for the organization as a whole, rather than just in their area(s) of responsibility. If everyone focuses only on their own primary endpoints, there is a chance that certain issues, fraud for example, could go undetected.

In my opinion, by encouraging us to work together and look at risk holistically, the ICH (E6) Addendum will clearly help us improve the management of our studies, and we are currently working to adapt our internal procedures to meet these recommendations.

It will also bring much more collaboration between global and local teams, and make everyone at all levels across organizations more aware of the risks associated with entire studies rather than simply their own functions. This will not only improve communication across businesses and help teams better understand the impact of poor quality data, but also allow us to quantify the quality of the sites to identify low-performing centers.