Category Archives: Insights from the Industry Blog Series

Risk-Based Monitoring Insights from the Industry: Angie Maurer

Part Seven

In the latest of our eight-part ‘Insights from the Industry’ series, we caught up with Angie Maurer, Co-Founder & CEO at Zynapsys. Here’s what she had to share with us.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different sizes?

There is currently no standard way of implementing RBM across the industry. In my opinion, the first thing any company seeking to implement this new approach should do is undertake an organizational assessment. They should consider available resources and budget, and examine the company’s needs, including its vision and goals and, importantly, its expectations of the RBM program.

With any RBM implementation, it is important that organizations are realistic about what is achievable and feasible, and take that into consideration during the program design phase. Are you implementing a simple RBM function, or are you developing a more holistic program which takes into consideration the other departments and functional areas that will be impacted by RBM?

Looking at where the industry is now, I think a lot of companies are ‘dipping their toes in the water.’ We have seen much more prevalent implementation of RBM in the large pharma space, particularly by the top two, mainly because there is an availability of funding. We are also seeing some sponsors considering how they can embed RBM within their organizations as a standard capability, rather than as an isolated practice in one-off trials, which is promising.

We have spent a lot of time with the smaller pharma and biotech companies, where the focus tends to be on how they can actually go about this within an organization of their size. As small pharma companies typically outsource much of their clinical trial management to CROs, there is a question over how they should manage the adoption of RBM. Should they just outsource it, or should they take on certain components themselves, such as central monitoring? In my opinion, much of it comes down to whether they will get anything out of it from an efficiency or value perspective. Or are they so small that the cost implications make the transition not worth it? Are they going to see an outcome over the next couple of years that makes it worthwhile? This is an important consideration for small companies that simply don’t have the same financial support as the larger pharma organizations.

What is apparent to organizations of all sizes is that this is a big change, in terms of culture and systems as well as technology. In order to drive RBM adoption efficiently, accurately and compliantly, organizations must make sure the changes are communicated to all the relevant people in the business process, including any third-party partners. From a technology viewpoint, it’s about systems being able to meet the needs of all users; giving them access to the information they need, when and how they need it, so they have the knowledge to act accordingly.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

I think by releasing the guidance they have to be, but I think there will be a transition period in which the authorities allow the industry to get up to standard. How long that period is, I’m not sure. I think it will be a case of waiting and seeing how this pans out as more and more organizations adopt this approach.

The changes soon to come into force via the ICH (E6) Addendum very much focus on improving quality, but in my view, there will need to be some flexibility on timings for smaller companies that don’t necessarily have the budget or resources to implement RBM as easily or quickly as the larger pharmaceutical organizations.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

I think we are likely to see the evolution and integration of current systems to incorporate RBM capabilities, as well as the introduction of new technologies.

In my view, our industry has many companies that have spent years developing deep understanding of their markets and customers, and creating innovative, targeted solutions. For that reason, I don’t think one single company will be able to provide the best quality solution in all the areas needed within one product, and as such, there is a real opportunity for collaboration. By working together in partnership, companies specializing in different areas, for example EDC and monitoring, can create powerful new solutions for the industry.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

I think companies will be open to CSM because, without the support of technology, we can only do so much in terms of the review and analysis of data. These systems not only make data review more transparent and efficient, but the sophisticated technology also makes the data itself much more robust, so I think we will definitely see an increase in adoption.

That said, I think budget will be an influence here. I would like to see technology providers develop a flexible platform that will allow even small start-up companies to afford and implement these powerful tools.

Discuss the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.

I think this comes down to the individual needs of an organization. I personally don’t think you can do one without the other, and study teams should combine both approaches. What’s important when reviewing risk for a study program is finding a way to take subjective information and make it measurable and easy for study teams to understand.

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

In terms of practical implementation, the first thing companies need is a quality-by-design/RACT-type tool. Designing the study protocol is critical, and implementing this type of tool will ensure the bare minimum of the guidelines is met. Second is the implementation of CSM, and third is the use of a risk tracking system. A risk tracking tool is a way to monitor, manage and rank all of the risks for a study, so by supporting risk control and mitigation, these tools help study teams directly address a large part of section 5.0 of the ICH (E6) Addendum.
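As an illustration of the risk-tracking idea described above, here is a minimal sketch of a risk register that scores and ranks study risks. The 1–5 likelihood/impact scale, the example risks, and the likelihood-times-impact scoring are illustrative assumptions in the spirit of a RACT, not features of any specific commercial tool or of the Addendum itself:

```python
# Hypothetical risk register: each risk carries a likelihood and an impact
# score (1-5), and risks are ranked by their product, RACT-style.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (critical)
    mitigation: str = ""
    status: str = "open"

    @property
    def score(self):
        # Simple likelihood-times-impact score used for ranking.
        return self.likelihood * self.impact

def rank_risks(risks):
    """Return the register with the highest-scoring (most urgent) risks first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

register = [
    Risk("Complex endpoint assessment", likelihood=4, impact=5),
    Risk("First-time site staff", likelihood=3, impact=3),
    Risk("Comparator supply delay", likelihood=2, impact=4),
]
for r in rank_risks(register):
    print(r.score, r.description)
```

Ranking the register this way gives study teams a defensible order in which to plan mitigations, and the `mitigation`/`status` fields give a place to document follow-up through to resolution.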

A quality management plan, which details how the study team is going to ensure quality throughout the program and summarizes all the key information, is the next step. Organizations can then provide this document as evidence to the regulators during the inspection. Finally, it is crucial for teams to continue with the traditional, ongoing risk review activities, typical of most current trials. By holding regular team meetings, where any issues are discussed and minuted, study teams can ensure risks are highlighted and managed on a regular basis.

In my experience, our industry is very reactive. Everyone is so busy with day-to-day activity that there is little time or head space to take a step back and plan ahead, meaning we are often not as proactive as we could be. With this in mind, I think the ICH (E6) Addendum will really make companies stop and think, and force them to be more proactive in the management of their programs. This will come into play from the very beginning of trials, as companies start to assess risks when they are drafting their protocols, meaning the number of iterations to the program should decrease because of this proactive planning.

Join CluePoints’ dedicated Risk-Based Monitoring LinkedIn Group to keep abreast of the latest industry trends!

 

Risk-Based Monitoring Insights from the Industry: Jamie O’Keefe

Part Six

In the latest of our eight-part Q&A series, we catch up with Jamie O’Keefe, Vice President, Life Sciences, and R&D Practice at Paragon Solutions to get his thoughts on the latest issues in Risk-Based Monitoring (RBM).

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

In my experience, the people who are actually performing the assessments at study level are not necessarily as on board with RBM adoption as those at senior level, who are definitely ready for it. Despite this, there has been some strong success with the TransCelerate pilot studies, which have received positive feedback from the regulators.

Success will come down to how well a sponsor is documenting and, importantly, following their Standard Operating Procedures (SOPs). As long as there is robust documentation and supporting information to demonstrate their actions, and show how they have managed risk appropriately, I don’t think there will be a significant challenge from a regulatory authority perspective.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

There is going to be a big shift from a technology perspective. There will be certain categories of systems which will always be around; EDC is a great example, as there will always be a need to collect data. But in terms of how the EDC technology is used going forward, I think we are at a crossroads. It can either evolve to take on additional necessary capabilities, or it will be relegated. In the long term, I think that we will see some components of many current systems, for example, CTMS or site monitoring technology, integrated to create larger, RBM platforms.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

I think this is where we are going to see a divide between large and smaller companies. It will be much more achievable for smaller organizations to integrate CSM and adopt this new approach across all of their trials. The challenge for big pharma companies will be changing existing complex infrastructure. Trying to drive this change through multi-million dollar, multi-year implementations, will be a huge challenge and have a significant impact on business. For that reason, I think we will see the larger pharma organizations take much smaller steps towards RBM and CSM, as they make this transition slowly.

I also think another interesting discussion point is how far the industry could take this from a technology perspective. For example, is there an option further down the line to evolve CSM technology to become a complete risk management platform, rather than just a risk monitoring system?

Discuss the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.

While there needs to be a level of granularity and flexibility in where the actual thresholds are set, based on the therapeutic area, indication type, or study design, setting an objective threshold is fairly straightforward.

Setting subjective thresholds will always be more challenging. Instead of simply taking one opinion, assigning it a score and adding it into the overall calculation, there needs to be a system to facilitate the conversation from a group perspective. The site monitor, the site manager, the study manager, the data manager and others all have important qualitative interactions with sites, so how does a study team bring all these opinions together and set one threshold?

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

I think this will be the biggest roadblock for the industry. Current systems and tools, and what they are used for, differs greatly across the sector, so for the time being, implementation will be tailored to individual organizations.

As study teams make the transition from current to new systems, it will be important for them to have a clear vision and documented plan detailing how they will approach implementation, which acknowledges timeframes for adoption and any associated risks. The timeframe for implementation will vary depending on the size of the organization. For large pharma companies, the transition could take up to five years, whereas a small biotech business could potentially make the transition within six months.

The biggest impact is going to be defining the roles of the major stakeholders and ensuring teams work together cohesively. Different team members, for example data monitors and study team leaders, will interact with data and make decisions in very different ways, so it will be crucial to consider how this will impact system capability and/or the flow of information across the team. In my opinion, success will come down to organizations making a plan and sticking to it. RBM is a quickly evolving area, so communicating and documenting any changes to the original plan will be crucial in order to increase compliance across the organization.

It is also important to have an active dialogue with the health authorities. A collaborative approach will minimize the chance of challenges and issues when it comes to review by the regulators and ultimately result in more drugs being approved for market.

Join CluePoints’ dedicated Risk-Based Monitoring LinkedIn Group to keep abreast of the latest industry trends!

 

Risk-Based Monitoring Insights from the Industry: Steve Young

Part Five

We’ve heard some great insights on the current hot topics in Risk-Based Monitoring (RBM) from a number of leading experts over recent weeks. In part five, we hear what Steve Young, Senior Director of Transformation Services at OmniComm Systems, has to say on the subject when we put our questions to him.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different sizes?

I strongly believe that simplicity is key to successful implementation of Risk-Based Monitoring for any organization. I have unfortunately seen too many organizations unintentionally over-engineer processes at their first attempts, particularly in areas such as the pre-study risk assessment and risk identification. The start-up period of a study is a very busy time for cross-functional study teams, so organizations implementing Risk-Based Monitoring need to make sure they don’t create a resource- and time-heavy process which will increase burden and may actually contribute to increased risk.

Many sponsors are exploring how to approach Risk-Based Monitoring with their CRO partners. My recommendation to sponsors is to make sure they are closely involved in the planning, design and implementation of Risk-Based Monitoring, in order to ensure that their business objectives and study outcomes are being met. Sponsors should also have access to operational metrics, signal detection etc. throughout the study in order to monitor quality management.

Another important point to mention here is that having well-considered, robust, centralized statistical data monitoring capabilities in place is absolutely critical to a successful and effective roll-out of Risk-Based Monitoring. Centralized analytics and centralized monitoring of key risk indicators are the tools that allow organizations to effectively identify emerging risks and proactively remediate them. This in turn allows study teams to be more targeted with monitoring activity and eliminate the need for the traditional 100% source data verification (SDV) and site visits every four weeks. Implementing a truly targeted approach, operationally and effectively, means that organizations simply must have a highly effective centralized monitoring system in place.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

It’s clear that the regulatory authorities have been actively encouraging the industry to increase adoption of Risk-Based Monitoring over recent years. The FDA guidance and the EMA reflection papers make it very clear that Risk-Based Monitoring is considered a superior approach to achieving quality in clinical trials, and that they are not just passively suggesting it as an alternate approach to quality management. In Japan, the PMDA has also recently made clear its strong support for Risk-Based Monitoring, which is promising.

The ICH E6 GCP guidance is making its first real major update in 15 years, which is due to be finalized later this year. The centerpiece of the new update is Risk-Based Monitoring and risk-based quality management principles, so we can see that Risk-Based Monitoring is quickly turning from regulatory recommendation into GCP expectation.

There are still many organizations with concerns about how well the regulatory auditors are actually going to react to this change. However, it’s quite clear from comments the regulatory authorities have made, that as long as sponsors have a well-documented quality management plan that demonstrates how risk assessment was carried out, how the monitoring plan was guided by that risk assessment, and makes clear the findings (and any remediations), then there should be few issues. In my view, the door has been opened very clearly by the regulatory authorities to encourage industry to move forward with Risk-Based Monitoring.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

There is an evolution which needs to happen and it’s actually already underway.

EDC systems need to have robust, built-in intelligent features that allow study teams to plan, set up and effectively manage a targeted SDV approach. A number of EDC systems have already been updated to include these types of capabilities, but I think the development of this technology will continue over time to become even more robust and fit-for-purpose.

As I mentioned earlier, centralized analytics tools are critically important to the success of Risk-Based Monitoring. While historically these tools have not played a major role in the quality management of clinical studies, as Risk-Based Monitoring adoption continues to increase, it will be critical for current technologies to incorporate these analytical capabilities in order for the approach to be truly successful.

Many organizations are already putting in place initial centralized analytics, key risk indicators, statistical monitoring methods etc, but there is still much refinement needed. Some are more advanced than others, but in my view, there is still a danger of ‘immature analytics’. By this, I mean that a study team could be using data from these tools to identify emerging risks, but if those metrics are not configured properly or well considered, teams run the risk of acting on unreliable data, which runs directly counter to what Risk-Based Monitoring aims to achieve.

Workflow support is also going to be very important and is evolving behind these two other areas. Prior to the study, workflow support is needed to manage the development, review and approval of a risk management plan, and during the study, effective support is required to follow up on emerging issues detected by centralized monitoring, including adequately documenting them through to resolution or remediation.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

Advanced Central Statistical Monitoring capabilities that can identify outliers and unrealistic data patterns are crucial. The industry is becoming more interested in this approach and, as a result, I think adoption rates will increase significantly in the near future as organizations strive to have a full complement of tools to detect operational and data integrity issues. That said, at this early stage of development, most organizations are still trying to figure out how this will work from a practical perspective.

Organizationally, my view continues to be that this is going to gravitate towards the skill sets that exist in clinical data management organizations, within both CROs’ and sponsors’ clinical data management departments. Certainly biostatistical departments and representatives will be important stakeholders in the adoption process; however, they will actually play a more supportive, as opposed to central, role in the operational use of these tools, especially as they become increasingly sophisticated and have more refined user interfaces.

While data managers are not statisticians, they typically have good data analytics skills, so in my opinion are probably the best equipped to work with the monitoring team to translate data into an appropriate remediation action.

Discuss the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.

Even when study teams compare sites against each other, there can be a level of subjectivity about where to set the statistical thresholds. For example, a team might statistically compare the rate of adverse events being recorded per subject across all of the sites in a clinical trial, but how does the team know when a site should actually be flagged as potentially under-reporting (that is, not reporting all of the adverse events that are occurring)? Teams can define ‘x’ number of standard deviations from the study average as the appropriate threshold; however, different individuals or organizations may set thresholds at different levels, which is where the challenge lies. Currently there is no consensus that I am aware of, but it is likely that one will be established over time.
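The standard-deviation approach described above can be sketched in a few lines. The site rates, the cut-offs, and the flagging rule here are illustrative assumptions rather than any established method; the point is to show how the same data can flag a site at one threshold and not at another:

```python
# Illustrative sketch of a standard-deviation threshold for under-reporting:
# flag any site whose adverse-event (AE) rate per subject falls more than
# `threshold` standard deviations below the study average.
from statistics import mean, stdev

def flag_underreporting_sites(ae_rates, threshold):
    """ae_rates: dict mapping site ID -> AEs reported per subject."""
    avg = mean(ae_rates.values())
    sd = stdev(ae_rates.values())
    flagged = {}
    for site, rate in ae_rates.items():
        z = (rate - avg) / sd if sd else 0.0
        if z < -threshold:  # far below the study average
            flagged[site] = round(z, 2)
    return flagged

# Hypothetical per-subject AE rates for five sites.
rates = {"S01": 1.8, "S02": 2.1, "S03": 1.9, "S04": 0.4, "S05": 2.0}
print(flag_underreporting_sites(rates, threshold=2.0))  # no site flagged
print(flag_underreporting_sites(rates, threshold=1.5))  # S04 flagged
```

In this made-up data, site S04 sits roughly 1.8 standard deviations below the study average, so it is flagged at a 1.5-SD threshold but slips past a 2-SD one; exactly the kind of borderline case that may still merit manual review.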

It is also important for study teams to understand that identifying potential risks is not black and white. Even once thresholds are set, sites should not be dismissed just because they haven’t hit the alert level. For example, a site that has not been flagged as a concern but sits just below the alert threshold may still be worth reviewing.

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

I believe that its main impact will be to turn Risk-Based Monitoring into an expectation rather than a suggestion, and from a pragmatic perspective, it’s going to accelerate adoption across the industry. The FDA and EMA set a foundation which has really increased the conversation, but these were just guidelines that don’t carry the force of setting a specific GCP expectation. The ICH E6 Addendum sets clear expectations for the industry, so in my view, it is likely to be a game changer, with many organizations that were previously holding back having to actively start putting Risk-Based Monitoring processes in place. It is becoming clear that within a couple of years Risk-Based Monitoring will simply be the standard, accepted approach to operational quality management across the industry.

Join CluePoints’ dedicated Risk-Based Monitoring LinkedIn Group to keep abreast of the latest industry trends!

 

Risk-Based Monitoring Insights from the Industry: Dr. Peter Schiemann

Part Four

In the fourth installment of the CluePoints Risk-Based Monitoring Q&A series, we spoke to Dr. Peter Schiemann, Managing Partner and Co-Founder of Widler & Schiemann, who had some interesting views to share with us.

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different sizes?

At Widler & Schiemann we have observed that the adoption of Risk-Based Monitoring is still very much in its infancy, with companies of all sizes experimenting with different approaches. For that reason, we don’t think we are in a position to talk about “best practice” yet. In some cases we have seen companies following the FDA guidance a bit too closely and, as a result, neglecting the suggestions in the EMA reflection paper. Indeed, on one hand, the focus in those studies has been very much on the practical execution of study monitoring, such as finding ways to reduce the level of Source Document Verification (SDV), and not necessarily on the prerequisites that organizations must have in place to enable successful Risk-Based Monitoring; on the other hand, strategies and processes to design protocols that are more “fit for purpose” have been neglected.

Many companies are using generic risk indicators to measure certain events, for example low(er) frequency in adverse event reporting or protocol deviations, and then using the metrics collected to draw conclusions on site behavior and determine where any changes to the monitoring plan are needed. In addition, tools such as CluePoints’ SMART engine are being used to deliver a central review of the data being collected during a clinical trial to check for outliers and on the basis of such insight fine-tune monitoring activities.

From what we have seen so far, many of the companies attempting Risk-Based Monitoring adoption are often not effectively determining risk during the protocol design and implementation process, and are therefore not initiating risk mitigation before issues materialize. For instance, the number and type of endpoints of the study, the selection and posology of the comparators, the patient population, experience of outsourcing to service providers, the knowledge of site teams, the experience of the partners and contractors involved, etc. are elements of a study that can cause issues such as protocol deviations if they are not properly designed. All of this information is key and should be used to assess the risk inherent in the study protocol and study set-up. Failing to do so is something we know the regulators see as sub-optimal and are not inclined to accept.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

A number of regulators have published papers outlining their support for the approach and their expectations; however, the guidance does not, rightly so, provide details on how Risk-Based Monitoring should actually be implemented. Regulators have resisted the call to define what a “right” RBM approach is, as it is clear that the sponsor and its partners (e.g., the CRO) are the parties responsible for identifying the right Risk-Based Monitoring strategy; a one-size-fits-all approach would not do justice to a true Quality by Design and Risk-Based Monitoring approach. While those at the agencies responsible for the recommendations have an understanding of how Risk-Based Monitoring can work practically, there is unfortunately still a large number who are not yet confident with the approach. This, of course, may cause some difficulties when they come to carry out inspections of Risk-Based Monitoring approaches.

From past discussions we have had with competent authorities, we believe they will follow a path of logic when it comes to approvals. Risk-Based Monitoring approaches need to be based on data and facts and follow a clear plan. This backs up the EMA reflection paper, which advises that decisions should be based on facts and that organizations should be able to substantiate those decisions.

As it stands, it is our view that neither party is currently ready for a full inspection. That said, Risk-Based Monitoring is a work in progress, and progress has been, and is currently being, made. This is further reflected in the ICH Addendum.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

At last year’s PCT conference and on other occasions, we have met many vendors who have developed software supporting Risk-Based Monitoring, but in our opinion, very few are currently able to fulfill all of the requirements of this approach, let alone incorporate the multitude of systems that need to be used concurrently. Quite simply, current technology needs to evolve in order to meet industry needs. Software solutions that simply produce visualizations of trends, without providing objective criteria for deciding what is an “acceptable” vs. “non-acceptable” deviation, do not serve the purpose of enabling a fact-driven risk management approach. In other words, a software solution that does not enable the user to define the “Design Space” is an unsuitable Risk-Based Monitoring tool.

Much of the early work in Risk-Based Monitoring has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?

The aim with Risk-Based Monitoring is to prevent risks and issues from occurring by detecting trends in the data in real time. The use of KRIs is an important element of this, as they tell study teams how people behave and if/how many mistakes they make, and alert teams to risks before too many have occurred. The industry can utilize CSM to complement this. With a CSM system, unexpected deviations or outliers are spotted by examining data produced from patients who have already been treated. The new regulations require sites and companies to carry out a self-assessment of their data, and CSM supports this by helping teams identify which sites and data they can trust.

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

We think this is best answered through an example. Take adverse event reporting, a very important activity which is directly linked to patient safety. How do study teams ensure that sites are reporting adverse events correctly, i.e., all the events that should be reported? Study teams could carry out a comparison, but there are considerations with this. Simply comparing the raw number of adverse events reported at each site is not appropriate, as one site could simply be better at reporting these events, or one may have more patients than another. Study teams must instead compare each site against all sites involved and examine the average number of adverse events reported per patient, per visit. It is also important to take any cultural differences into consideration when making comparisons; for example, patients in one geography could be culturally much less likely to complain than in another.
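As an illustration of the normalization described above, here is a minimal sketch of comparing each site’s adverse-event rate per visit against the pooled rate across all sites. The site names, figures, and the two-fold flagging cut-off are all hypothetical, chosen only to make the idea concrete:

```python
# Illustrative only: compare each site's AE reporting rate (per visit)
# against the pooled rate across all sites, rather than raw counts.
ae_data = {
    # site: (adverse events reported, patient visits completed)
    "Site A": (42, 300),
    "Site B": (5, 280),   # suspiciously low reporting rate?
    "Site C": (38, 310),
    "Site D": (35, 290),
}

total_events = sum(events for events, _ in ae_data.values())
total_visits = sum(visits for _, visits in ae_data.values())
overall_rate = total_events / total_visits  # pooled AEs per visit

for site, (events, visits) in sorted(ae_data.items()):
    rate = events / visits
    # Flag sites reporting at less than half, or more than double, the pooled rate.
    flag = "REVIEW" if rate < 0.5 * overall_rate or rate > 2 * overall_rate else "ok"
    print(f"{site}: {rate:.3f} AEs/visit vs overall {overall_rate:.3f} -> {flag}")
```

Normalizing per visit handles sites with differing patient numbers; a real central monitoring system would replace the crude cut-off with proper statistical tests, and, as noted above, cultural differences mean a flag should prompt investigation rather than an automatic conclusion.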

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

For the moment, Risk-Based Monitoring is more or less a recommendation. However, if the ICH E6 Addendum goes through, it is likely to come into effect in early 2017, making it really quite urgent for companies to start thinking about how to address the requirements.

The most important thing is having Risk-Based Monitoring and quality management systems in place, plus complete vendor oversight. It is also crucial for the industry to realize that this will not only impact sponsors, but all organizations involved in clinical studies. Large hospitals, for example, would also need to better document their quality management systems, as well as their approaches to monitoring.

As time goes on, regulators will increasingly question the reliability of data from organizations not deploying a risk-based approach, especially when errors that occur – and are perhaps identified and corrected at one site – go undetected at other sites in the same study, or even in other studies. In addition, it is worth bearing in mind that a risk-based approach to monitoring is more cost-effective in the long term, so companies making the shift now will see the financial rewards of their efforts sooner.

There is no doubt that the Addendum will have a major impact. When it comes to a risk-based approach, many companies are setting the wrong objectives and the wrong incentives, with many concentrating on the potential cost-savings thanks to reduced site monitoring, less travel, reduced resource costs, etc. With monitoring activity typically accounting for 30-40 percent of clinical trial costs, this is of course tempting.

What needs to happen is for the Risk-Based Monitoring protocol to be designed carefully from the outset, in order to minimize issues when it comes to operationalizing the study. If a study is planned correctly, with a clear primary objective and well-defined endpoints, and with focus kept on these elements rather than on “unnecessary” parameters or exploratory objectives, study teams can streamline execution. By reducing the trial duration by even one week, organizations can potentially save thousands, if not millions, of dollars – not to mention the additional benefits of bringing the drug to market earlier and enjoying longer patent exclusivity for the marketed product.

This quality by design approach is what the ICH E6 Addendum proposes, rather than historically checking data once events have occurred. Of course, it will not be easy for the industry to make this shift, but those organizations who fail to make the transition will ultimately lose out in the long run.

Risk-Based Monitoring Insights from the Industry: Karen Fanouillere

By | Blog, Insights from the Industry Blog Series | No Comments

Part Three

In the third of an eight-part Q&A series which aims to explore some of the current hot topics in Risk-Based Monitoring (RBM), CluePoints has been speaking with Karen Fanouillere, Biostatistics Project Leader for 15 years and now Head of Clinical Information Governance at Sanofi about the perspectives of a large pharma company in implementing new monitoring methodologies for drug development programs.

Here’s what Karen had to say when we put our questions to her.

When it comes to the practical implementation of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?

In my experience, the size of the company has an impact on how Risk-Based Monitoring is implemented. For example, for larger organizations, the implementation of any new practice tends to be more difficult because of the complexity of integrating the solutions practically throughout the business.

At Sanofi, until Risk-Based Monitoring becomes practically available for use in all studies, we are only utilizing the approach for pivotal and large-scale studies. Our decision to use Risk-Based Monitoring within a trial usually depends on the number of patients and number of sites, or whether the type of study lends itself to a Risk-Based Monitoring approach. For Phase 1 studies, for example, where we feel Risk-Based Monitoring may not add as much value, we are still opting to utilize more traditional monitoring approaches.



How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

While the authorities are definitely on board with Risk-Based Monitoring and already advising the industry to adopt this approach, they have yet to outline any specific recommendations or guidance on exactly how they would like to see it implemented. At Sanofi, we began implementing Risk-Based Monitoring because we recognized it as a way to gain insight into the quality and the precision of data in our studies, not because it was advised by the regulatory authorities.



How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

I think that there are multiple ways of integrating Risk-Based Monitoring within the many tools and systems currently in use. There are numerous solutions implemented across the industry, all of which identify trends in study data in different ways – the industry now needs to find a way to integrate these into one cohesive offering.

The introduction of Risk-Based Monitoring technology means we can now view much more comprehensive data from sites, centrally. As well as identifying potential risks to the study at the start of a program, Risk-Based Monitoring allows teams to identify trends in data quicker than ever before during the course of a study, meaning they can take remedial action and contain potential issues before they happen or progress any further. Existing systems are not designed for that. And while I don’t think they will become obsolete, as many offer capabilities complementary to Risk-Based Monitoring, there will certainly need to be some evolution of the technologies in order to create more appropriate risk management platforms.



Much of the early work in Risk-Based Monitoring has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?

Sanofi first implemented central statistical monitoring (CSM) in a pilot study in 2013, and now we have just begun implementing KRIs, so we have done things a little bit in reverse.

In the past, we only had between 5 and 10 indicators that could be considered a KRI, which was very simple and not very useful or user-friendly. Because of this, we moved away from the use of KRIs. Then we initiated CSM with CluePoints, which has enabled us to derive KRIs and given us greater understanding of how to utilize them. So we are now working on a pilot study to implement a dashboard with KRIs, and appreciate that both approaches can add significant value to the data monitoring process.

Moving forward, we would like to implement both approaches, particularly in our larger trials, as we acknowledge the importance and value both can bring to a study. However, the challenge is finding a way to do this without doing the same assessments twice, so the team is currently working on developing a process which integrates the use of CSM and KRIs in the same study, which avoids gaps and minimizes duplication.



Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

At Sanofi, we use a defined list of standard KRIs that are implemented across all of our study programs, but it is the study team’s responsibility to decide whether to use pre-defined thresholds or to compare sites against the average of the observed values within the study. We like to avoid the fear of the blank page, so pre-defined KRIs help us focus on, for example, adverse events, treatment withdrawals, and missing data for critical endpoints. In addition, at the start of a trial, we provide a list of factors covering the biggest issues posed to a particular therapeutic area or trial, based on our experience and on the figures expected from the literature. So, for example, we can check that a study enrolls the expected 10% of diabetic patients and flag sites with a very low rate of diabetic patients for investigation. CSM, on the other hand, allows us to compare one site against all other sites and identifies those challenges that aren’t always predictable, following potential under- or over-estimations at sites.

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

This is something that we have been trying to anticipate during the past year. In terms of EDC, Risk-Based Monitoring, and numerous other tools, we have been trying to improve the way they integrate to streamline our processes. We are also currently working towards formalizing how we document the monitoring of risk.

While there is no doubt we are making progress, I think there are definitely still occasions when we are all guilty of paying the most attention to risk in our respective areas. To overcome this, I think what organizations need to do now is start looking at risk in a more holistic sense. By that I mean getting the team to consider risk for the organization as a whole, rather than just in their own area(s) of responsibility. If everyone focuses only on their primary endpoints, there is a chance that certain issues, for example, fraud, could go undetected.

In my opinion, by encouraging us to work together and look at risk holistically, the ICH (E6) Addendum will clearly help us improve the management of our studies, and we are currently working to adapt our internal procedures to meet its recommendations.

It will also bring much more collaboration between global and local teams, and make everyone at all levels across organizations more aware of the risks associated with entire studies rather than simply their own functions. This will not only improve communication across businesses and help teams better understand the impact of poor quality data, but also allow us to quantify the quality of the sites to identify low-performing centers.

Risk-Based Monitoring Insights from the Industry: Adam Butler

By | Blog, Insights from the Industry Blog Series | No Comments

Part Two

Recently, CluePoints has been taking time out with some of our valued partners to discuss hot topics in Risk-Based Monitoring (RBM). Here, in the second of an eight-part Q&A series, we spoke with Adam Butler, Senior Vice President, Strategic Development & Corporate Marketing at Bracket Global, who shared these valuable insights with us.

When it comes to the practical implementation of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?

In the context of current best practice, there is a huge amount of variety. The large pharma companies and CROs have taken an approach to Risk-Based Monitoring that is very much focused on managing costs and reducing the amount of site monitoring, but not necessarily centered on the clinical or statistical outcomes.

This could be due to the fact that internal initiatives to implement Risk-Based Monitoring seem to be focused on the high-level expectations laid out in regulatory guidance and TransCelerate’s paper. Not much of it is centered on the complex statistical problems that are out there. Small companies, however, tend to implement Risk-Based Monitoring on a program-by-program basis, considering the type of clinical trial, therapeutic area or size of the program, etc. This is encouraging, yet as is often the case with our industry, it just doesn’t seem like anyone is embracing the new approach fast enough.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

All indications are that Risk-Based Monitoring adoption will be enthusiastically embraced by regulators, especially the FDA, which has already released some very specific guidance and high-level recommendations from a regulatory perspective. Even back in 2009, when it introduced its PRO guidance, the FDA was starting to look at how to address issues like missing data and clinical data analysis, and its subsequent guidance on eSource and Clinical Outcomes Assessments (COA) also addresses how to do better statistical monitoring. We’ve been encouraging people to pay more attention to statistical analysis, and not just focus on SDV data, for years, so the FDA guidance validates a lot of what we are already working towards.

In my experience, regulators are very willing to answer questions and offer recommendations outside of the context of a specific guidance. So in my opinion, the best way for companies to ensure successful implementation of Risk-Based Monitoring is to engage with the relevant authorities as often as possible and if necessary, adjust study protocols according to their advice. The existing guidance is further proof that regulators are open to new approaches, so the burden really is on the industry to make sure their methods are validated before they are submitted for approval.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

As technology in clinical research continues to prove its worth, for example, EDC and eCOA, which have both undoubtedly led to better efficiencies, there is no reason to believe that other new technologies couldn’t support this further. However, pharmaceutical research is at least one generation behind in terms of the adoption and integration of new technologies, so it needs to play catch up. As the industry continues to do this, there is no doubt that we will see the introduction and adoption of more and more technologies within clinical research.

There will also be greater focus on interoperability between these technologies, which will bring with it a burden on everyone, especially the technology developers, to make sure the technologies are not only able to work with each other, but work for the researchers using them. At Bracket, we invest time testing new technology and working with end users, mostly at clinical research centers, to make sure it works in the field, and ultimately improve the chances of it being adopted by pharma organizations.

Much of the early work in Risk-Based Monitoring has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?

When it was introduced, I think much of the initial focus was on how Risk-Based Monitoring could address operational problems, such as keeping track of missing data, or the number of data change forms, etc. However, I think that much of the adoption we will see moving forward will center on the clinical endpoint of a trial and making sure the data captured is as reliable and valid as possible. Data quality is the ultimate goal in order to improve the chances of study success, and implementing a CSM solution will help ensure all data captured during a trial is clean and accurate.

At Bracket, we begin a project by considering a study’s protocol, including the clinical endpoint and outcomes, and then develop a strategy which addresses all of the potential risks that could arise when collecting the data, based on those endpoints and outcomes. As we continue to do this, Risk-Based Monitoring and CSM will be seen as essential support tools. That said, while defining the potential risks is an important part of the process, what a team does in response to any issues identified in the data is crucial. To ensure clean and consistent data capture throughout the trial, teams must implement clearly defined, meaningful actions and remediations to rectify any issues highlighted as soon as possible.

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

In our experience, defining subjective thresholds can sometimes lead study teams down a path which doesn’t necessarily represent anything meaningful. When risks are properly validated and understood upfront, site comparisons can be a really powerful tool in identifying issues which would be impossible to discover if a team was just monitoring data within the context of one center.

At Bracket, we are usually quite focused on the Clinical Outcomes Assessments, the PROs and rating scales that are often used as endpoints. If you are evaluating a new anti-depressant product whose primary outcome is a clinical interview, and one interview lasts only four minutes while the median interview length is 17 minutes, this could be an issue. If the team is looking at individual data points across numerous sites, that would flag as a very meaningful negative outlier, but it would be very difficult to identify if the team was only analyzing data within the context of one center.
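The interview-length example above can be sketched as a simple robust outlier check across sites. The durations and the 5×MAD cut-off below are invented for illustration; real monitoring systems apply far more sophisticated statistics:

```python
# Illustrative only: flag clinical-interview durations that deviate
# strongly from the study-wide median, using a robust spread estimate.
import statistics

# Interview durations in minutes, pooled across sites (hypothetical data).
durations_min = [17, 16, 18, 15, 19, 17, 16, 18, 4, 17, 20, 16]

median = statistics.median(durations_min)
# Median absolute deviation (MAD): robust to the very outliers we seek.
mad = statistics.median(abs(d - median) for d in durations_min)

# Flag any duration more than 5 MADs from the median.
outliers = [d for d in durations_min if abs(d - median) > 5 * mad]
print(f"median={median} min, MAD={mad}, outliers={outliers}")  # flags the 4-minute interview
```

Because the median and MAD are barely moved by a single extreme value, the four-minute interview stands out clearly against the study-wide pattern, whereas within a single site’s small sample it might pass unnoticed.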

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

In my view, the ICH (E6) Addendum is a continuing validation of the work we have already been doing in Risk-Based Monitoring and CSM. What it does is give those with doubts, for example, someone in a pharma QA position, the reassurance they need that Risk-Based Monitoring is an effective solution which will be accepted by regulators. The important element for me is that by analyzing site characteristics and performance metrics, Risk-Based Monitoring validates the need to look at all of the data in a clinical program and leverage every opportunity to analyze it, in order to uncover and rectify inconsistencies and issues.

I would hope that the primary impact of the Addendum will be an increase in Risk-Based Monitoring adoption. It gives those in the industry who have been pushing for its implementation, the ammunition they need to argue the case for going beyond the traditional monitoring approach. Ultimately, it has the potential to have a significant impact on how studies are being conducted and improve the overall quality of data captured during trials.

Risk-Based Monitoring Insights from the Industry: Craig Serra

By | Blog, Insights from the Industry Blog Series | No Comments

Part One

In the first of an eight-part Q&A series which aims to explore some of the current hot topics in Risk-Based Monitoring (RBM), CluePoints has been speaking with Craig Serra from Pfizer. Craig is currently Senior Director and Data Management Business Process Owner, with accountability for the data management process in study conduct and closeout. His experience is in data management, clinical systems, project management, and clinical operations, with a diverse educational background in business, management, information systems, and pharmacology/toxicology.

Here’s what Craig had to say when we put our questions to him.

When it comes to the practical implementation of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?

Addressing what the current best practice is in terms of Risk-Based Monitoring is difficult because there is still a lack of widespread adoption across the industry, despite regulators advising companies to do just that. Think of faxes at your doctor’s office. Why are they still commonplace? Healthcare is notoriously slow to adopt technology into everyday practice—the Health Resources and Services Administration estimates a 10-15 year lag in implementation of computing capabilities versus other industries. Some colleagues in the industry are piloting Risk-Based Monitoring, and I’ve seen examples of basic frameworks which rely on KRIs and the identification of risks early on or during study start-up. However, I have not seen clinical data interrogation and actions focused on the sites and data that will truly have an impact on the statistical validity and core conclusions of a trial.

The best practice will comprise two things. First, robust identification during study start-up of risk factors and of what is truly critical to ensure patient safety, statistical validity, and correct conclusions of a trial. Second, as data accumulates in a trial, the study team lets the data produced by the sites speak for itself—that is, the interrogation of clinical and operational data, relative to the key risk indicators (KRIs), dictates appropriate actions.

The true risk I am seeing to sponsor organizations is an overly cautious approach to monitoring in the face of regulations that are designed to facilitate drug development. We need to listen to regulators, especially when they are pushing us towards a much better approach to ensuring trials are conducted properly.

CROs are also at a crossroads with regard to Risk-Based Monitoring. Monitoring costs can account for a large chunk of a trial’s total budget—around 25-35%. Risk-Based Monitoring is a danger to CROs that rely on that top-line revenue from monitoring visits and don’t want to amend their business practice to support more efficiency. However, it is a major asset and differentiating factor for CROs who deliver a centralized monitoring approach, thereby aiming to reduce the total number of on-site monitoring visits and the amount of SDV. Another trend is CROs developing their own Risk-Based Monitoring software. I believe this to be the wrong approach, since CROs are, at their core, service providers. Software development is a different skillset, and the result is often software that misses the mark in terms of utilizing advanced analytical and statistical methodologies.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

We already have the reflection paper from the EMA and the guidance from the FDA, so they are more than ready for it – other agencies will be issuing guidance in the near future. It is the industry that is not moving forward. I’ve actually seen examples where regulators are specifically looking for sponsors to adopt a Risk-Based Monitoring approach, but the sponsors are putting the brakes on and insisting on a traditional monitoring approach. We have to stop talking about it, discussing philosophy, and otherwise making it an academic exercise. We just have to commit to it and implement it.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

I don’t think systems will become obsolete, but I do think that we are going to start seeing integration between them all. That said, given our industry’s track record when it comes to the adoption of new processes and ideas, I think there will be a coexistence of all this technology for at least a decade, if not two or three. What is important is having software developers who fundamentally understand statistical approaches to monitoring and who put user-friendly software on the market.

Much of the early work in RBM has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely going to adopt these complementary approaches to ensure data accuracy and integrity?

In order for adoption to increase, it is going to be crucial to ensure the industry understands the difference between software for data visualization and software that contains an analytics engine. Understanding of how data is processed and analyzed using CSM, as opposed to how visualization software works, seems to be somewhat lacking. There seems to be a view that any software that produces an aesthetically pleasing visualization automatically has a robust and statistically valid analytical component to it—that is simply not the case.

It is vital that study teams know that the data visualizations produced via CSM reflect algorithms which are complex and effectively light years ahead of what has been used in the industry. CSM allows teams to identify areas of risk which actually reflect both clinical and operational data, which can of course be visualized. However, we aren’t just visualizing data—we are visualizing the analysis of those data. 

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

When considering something like subjective versus objective, there is evidence that there is a place for both. Until study teams have received enough clinical and operational data, perhaps three or four months into a study, they don’t really have the ability to be terribly objective. So until that point, there is nothing wrong with teams setting subjective thresholds based on previous experience, both with individual sites and the experience of study team members.

Crucially, study teams need to understand that this is the landscape only for a particular period of time, and that once the data are available, they should use analytics tools to interrogate the data and start fine-tuning the specific thresholds for that particular study – that’s when teams should shift from those pre-set thresholds and adapt to the objective reality of the trial.

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

The Addendum really reflects the new way of thinking and will support those companies who are really pushing for Risk-Based Monitoring adoption, as they now have the backing of the guidelines. Initially, implementation will help companies obtain higher-quality data in order to make better-quality decisions. Further down the line, as study teams adopt RBM technology within more of their trials and learn from it, they can start to create efficiencies which, in turn, could result in cost reductions.

Like many recommendations from the regulators, the guidelines are subject to individual companies’ interpretation. By working with the regulators to agree to a practical approach and framework, and importantly, sticking to it throughout the study, sponsors can ensure Risk-Based Monitoring implementation results in a successful operational oversight ecosystem.

Ultimately, listening to and acting on the data is key. After all, statistical methodologies dictate proof of safety and efficacy of a drug, so why aren’t we letting that same hypothesis testing actually tell us about the quality of data during the trial? The paradigm is changing in clinical development in order to increase our ability to have actionable and reliable data, as well as to reduce cost and shorten timelines. Advances like RBM and CSM can be used to focus our attention to ultimately safeguard our patients and deliver more productively for all that depend on us.