Monthly Archives: April 2016

Unveiling CluePoints’ New and Improved Risk-Based Monitoring Software


Today, we’re thrilled to unveil CluePoints’ latest release of its award-winning Risk-Based Monitoring Software. For this release, we wanted to home in on several customer feature requests to deliver a solution that further advances the ability to meet the guidance proposed by regulators regarding Risk-Based Monitoring. In addition to an abundance of new features, we’ve expanded current functionality to further enhance the user experience.

Advanced Key Risk Indicator (KRI) Dashboard

A new version of the Key Risk Indicator (KRI) dashboard includes:

  • A centralized view of all of the KRIs activated for the study
  • Interactive graphs displaying relative scores and absolute values, giving you both subjective context and independent objectivity in the results
  • Larger icons to identify centres at risk

New Key Risk Indicator (KRI) Setup

The new release adds the ability to define Key Risk Indicators using CluePoints’ intelligent KRI setup wizard. It now takes seconds to set up a new KRI or amend an existing indicator, with a full audit history of who made the changes and why.

  • Define datasets, variables, tests, relative scores and absolute values (see the illustrative sketch after this list)
  • Add suggested corrective actions based on the dataset/study
  • View adjustments to thresholds for the duration of study conduct with the new change history tracker
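
To make the bullet points above concrete, here is a purely hypothetical sketch of how such a KRI definition might be represented as a simple data structure. The field names and values are illustrative assumptions only, not CluePoints’ actual setup schema.

# Hypothetical illustration only: field names and values are assumptions,
# not CluePoints' actual KRI setup schema.
adverse_event_kri = {
    "name": "AE reporting rate",
    "dataset": "AE",                     # dataset the indicator draws on
    "variable": "ae_count_per_subject",  # variable to be tested
    "test": "rate_vs_study_average",     # comparison applied across centres
    "scores": {"relative": True, "absolute": True},
    "thresholds": {"warning": 2.0, "action": 3.0},
    "suggested_actions": ["Contact the site to review AE reporting practices"],
    "change_history": [],                # who changed what, when and why
}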

New Signal Tracker

What do you do when you identify a potential centre at risk? Based on customer feedback, CluePoints has created a flexible approach, ensuring the workflow gets the critical actions to the right users at the right time. Using CluePoints’ enhanced signal tracker you can:

  • Ignore, assign or watch potential issues that are surfaced
  • View all of the outstanding actions in a centralized view
  • Set up email alerts for actions
  • View and add comments against the signals and actions with a full audit history
  • Export the data

Seamless Integration with Medidata Rave Electronic Data Capture (EDC)

You asked, we delivered!

Hot on the heels of the integration with Oracle’s InForm, CluePoints’ new connector for Medidata Rave makes it easy to connect your study data to the CluePoints Risk-Based Monitoring platform.

  • One-time setup
  • Seamless integration
  • No custom coding required

About CluePoints

CluePoints is the premier provider of Risk-Based Monitoring and Data Quality Oversight Software. Our products utilize statistical algorithms to determine the quality, accuracy, and integrity of clinical trial data both during and after study conduct. Aligned with guidance from the FDA, EMA, and the new ICH (E6) addendum, CluePoints is deployed to support traditional monitoring and data management and can be implemented as the ultimate engine to drive Risk-Based Monitoring. The value of CluePoints lies in its powerful and timely ability to identify anomalous data and site errors allowing optimization of central and on-site monitoring and a significant reduction in overall regulatory submission risk.

Like this post? Follow @CluePoints on Twitter for blog post alerts, industry news, and more!

Risk-Based Monitoring Insights from the Industry: Adam Butler


Part Two

Recently, CluePoints has been taking time out with some of our valued partners to discuss hot topics in Risk-Based Monitoring (RBM). Here, in the second of an eight-part Q&A series, we spoke with Adam Butler, Senior Vice President, Strategic Development & Corporate Marketing at Bracket Global, who shared these valuable insights with us.

When it comes to the practical implementation of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?

In the context of current best practice, there is a huge amount of variety. The large pharma companies and CROs have taken an approach to Risk-Based Monitoring that is very much focused on managing costs and reducing the amount of site monitoring, but not necessarily centered on the clinical or statistical outcomes.

This could be due to the fact that internal initiatives to implement Risk-Based Monitoring seem to be focused on the high-level expectations laid out in regulatory guidance and TransCelerate’s paper. Not much of it is centered on the complex statistical problems that are out there. Small companies, however, tend to implement Risk-Based Monitoring on a program-by-program basis, considering the type of clinical trial, therapeutic area or size of the program, etc. This is encouraging, yet as is often the case with our industry, it just doesn’t seem like anyone is embracing the new approach fast enough.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

All indications are that Risk-Based Monitoring adoption will be enthusiastically embraced by regulators, especially the FDA, which has already released some very specific guidance and high-level recommendations from a regulatory perspective. Even back in 2009, when it introduced its PRO guidance, the FDA was starting to look at how to address issues like missing data and clinical data analysis. Subsequent guidance on eSource and Clinical Outcomes Assessments (COA) likewise addresses how to do better statistical monitoring. We’ve been encouraging people to pay more attention to statistical analysis and not just focus on SDV data for years, so the FDA guidance validates a lot of what we are already working towards.

In my experience, regulators are very willing to answer questions and offer recommendations outside of the context of a specific guidance. So in my opinion, the best way for companies to ensure successful implementation of Risk-Based Monitoring is to engage with the relevant authorities as often as possible and if necessary, adjust study protocols according to their advice. The existing guidance is further proof that regulators are open to new approaches, so the burden really is on the industry to make sure their methods are validated before they are submitted for approval.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

As technology in clinical research continues to prove its worth (EDC and eCOA, for example, have both undoubtedly led to better efficiencies), there is no reason to believe that other new technologies couldn’t support this further. However, pharmaceutical research is at least one generation behind in terms of the adoption and integration of new technologies, so it needs to play catch-up. As the industry continues to do this, there is no doubt that we will see the introduction and adoption of more and more technologies within clinical research.

There will also be greater focus on interoperability between these technologies, which will bring with it a burden on everyone, especially the technology developers, to make sure the technologies are not only able to work with each other, but work for the researchers using them. At Bracket, we invest time testing new technology and working with end users, mostly at clinical research centers, to make sure it works in the field, and ultimately improve the chances of it being adopted by pharma organizations.

Much of the early work in Risk-Based Monitoring has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

When it was introduced, I think much of the initial focus was on how Risk-Based Monitoring could address operational problems, such as keeping track of missing data or the number of data change forms. However, I think that much of the adoption we will see moving forward will center on the clinical endpoint of a trial and making sure the data captured is as reliable and valid as possible. Data quality is the ultimate goal in order to improve the chances of study success, and implementing a CSM solution will ensure all data captured during a trial is clean and accurate.

At Bracket, we begin a project by considering a study’s protocol, including the clinical endpoint and outcomes, and then develop a strategy which addresses all of the potential risks that could arise when collecting the data, based on those endpoints and outcomes. As we continue to do this, Risk-Based Monitoring and CSM will be seen as essential support tools. That said, while defining the potential risks is an important part of the process, what a team does in response to any issues identified in the data is crucial. To ensure clean and consistent data capture throughout the trial, teams must implement clearly defined, meaningful actions and remediations to rectify any issues highlighted as soon as possible.

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

In our experience, defining subjective thresholds can sometimes lead study teams down a path which doesn’t necessarily represent anything meaningful. When risks are properly validated and understood upfront, site comparisons can be a really powerful tool in identifying issues which would be impossible to discover if a team was just monitoring data within the context of one center.

At Bracket, we are usually quite focused on the Clinical Outcomes Assessments, the PROs and rating scales that are often used as endpoints. If you are evaluating a new anti-depressant product where the primary outcome is a clinical interview, and an interview lasts only four minutes when the median interview length is usually 17 minutes, this could be an issue. If the team is looking at individual data points across numerous sites, that would flag as a very meaningful negative outlier, but it would be very difficult to identify if the team was only analyzing data within the context of one center.
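
As a purely illustrative sketch of the kind of cross-site comparison Adam describes (not Bracket’s or CluePoints’ actual methodology), the following assumes hypothetical interview durations per site and flags any site whose median duration is an extreme outlier relative to the rest of the study:

# Illustrative sketch only: data, threshold and scoring rule are assumptions.
from statistics import median

durations_by_site = {            # minutes per completed interview (hypothetical)
    "Site 101": [18, 16, 17, 19],
    "Site 102": [15, 17, 20, 16],
    "Site 103": [4, 5, 6, 4],    # suspiciously short interviews
}

all_durations = [d for ds in durations_by_site.values() for d in ds]
study_median = median(all_durations)
mad = median(abs(d - study_median) for d in all_durations) or 1.0  # robust spread

for site, ds in durations_by_site.items():
    site_median = median(ds)
    robust_z = (site_median - study_median) / (1.4826 * mad)
    if robust_z < -3:  # far below the rest of the study
        print(f"{site}: median {site_median} min vs study median {study_median} min -> flag")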

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

In my view, the ICH (E6) Addendum is a continuing validation of the work we have already been doing in Risk-Based Monitoring and CSM. What it does is give those with doubts, for example, someone in a pharma QA position, the reassurance they need that Risk-Based Monitoring is an effective solution which will be approved by a regulator. The important element for me is that, by analyzing site characteristics and performance metrics, Risk-Based Monitoring validates the need to look at all of the data in a clinical program and leverage every opportunity to analyze it, in order to uncover and rectify inconsistencies and issues.

I would hope that the primary impact of the Addendum will be an increase in Risk-Based Monitoring adoption. It gives those in the industry who have been pushing for its implementation the ammunition they need to argue the case for going beyond the traditional monitoring approach. Ultimately, it has the potential to have a significant impact on how studies are being conducted and improve the overall quality of data captured during trials.

Is it Too Soon to Define Best Practice When it Comes to the Practical Implementation of Risk-Based Monitoring?


Last week CluePoints launched its eight-part Q&A blog series which aims to explore the current talking points in Risk-Based Monitoring (RBM). We’ve been gathering insights from a number of CluePoints’ partners but are keen to open up the discussion to the wider industry. The first question we put to our partners was centered around the practical implementation of Risk-Based Monitoring and where they feel current best practice lies amongst different sizes of companies and CROs. As expected, opinions varied but our discussions revealed some interesting common themes.

When it comes to current best practice, Jamie O’Keefe at Paragon Solutions said that many companies are simply, “dipping their toes in the water,” and there was a general consensus from the other partners that this is the case. All agreed that while some in the industry are starting to utilize RBM within their study programs, there is a huge variety in how they are approaching implementation, and for that reason, it is difficult to define current best practice.

Many of our partners also agreed that whether, when, how and how quickly Risk-Based Monitoring is implemented very much depends on the size of the organization. For larger companies, many of whom are now starting to utilize elements of Risk-Based Monitoring, the ability to make it standard practice across all studies is a complex challenge involving many stakeholders. Existing infrastructure, processes, technology, etc. mean the transition to Risk-Based Monitoring will take considerable time and investment, and will significantly alter their business. While on paper it may seem more practical for smaller organizations to integrate Risk-Based Monitoring across their business, many perhaps don’t have the same financial support as big pharma. As a result, smaller companies tend to be implementing RBM on a program-by-program basis or relying on third-party vendors to support them.

The pace at which Risk-Based Monitoring is being adopted across the industry was also highlighted as an issue – despite being actively encouraged to do so by the regulators, there still seems to be a lack of urgency. Bracket Global’s Adam Butler commented that, “as is often the case with our industry, it just doesn’t seem like anyone is embracing the new approach fast enough.” Craig Serra from Pfizer agreed, quite rightly pointing out the irony that, despite operating in a very risk-averse industry, current working practice is actually riskier because Risk-Based Monitoring is not being adopted more quickly.

In my opinion, the industry in general recognizes the importance of mitigating risk and improving data quality and it is heartening to see the adoption of data interrogation techniques that have been used in other industries for many years, but we could be doing more. The ICH E6 guidance, which is due to be published in its final form in November, will insist that all companies adopt comprehensive risk management plans. This will be the turning point for adoption and we can already see many organizations who are embracing initiatives to prepare themselves for these requirements.

What’s your experience of current best practice of Risk-Based Monitoring implementation? Is it too early to say? Is more guidance needed for organizations? We’d be really interested to hear your thoughts on this topic so please do get involved with the discussion.

 

Risk-Based Monitoring Insights from the Industry: Craig Serra


Part One

In the first of an eight-part Q&A series which aims to explore some of the current hot topics in Risk-Based Monitoring (RBM), CluePoints has been speaking with Craig Serra from Pfizer. Craig is currently Senior Director and Data Management Business Process Owner, with accountability for the data management process in study conduct and closeout. His experience is in data management, clinical systems, project management, and clinical operations, with a diverse educational background in business, management, information systems, and pharmacology/toxicology.

Here’s what Craig had to say when we put our questions to him.

When it comes to the practical implementation of Risk-Based Monitoring, where do you feel current best practice lies amongst different sizes of companies and CROs?

Addressing what the current best practice is in terms of Risk-Based Monitoring is difficult because there is still a lack of widespread adoption across the industry, despite being advised by regulators to do just that. Think of faxes at your doctor’s office. Why is this still commonplace? Healthcare is notoriously slow to adopt technology into everyday practice—the Health Resources and Services Administration estimates a 10-15 year lag in implementation of computing capabilities versus other industries. Some colleagues in the industry are piloting Risk-Based Monitoring and I’ve seen examples of basic frameworks which rely on KRIs and identification of risks early on or during study start up. However, I have not seen clinical data interrogation and actions focused on sites and data that truly will have an impact on statistical validity and core conclusions of a trial.

The best practice will comprise two things. First, robust identification in study startup of risk factors and what is truly critical to ensure patient safety, statistical validity, and correct conclusions of a trial. Second, as data accumulates in a trial, the study team lets the data produced by the sites speak for itself—that is, the interrogation of clinical and operational data, relative to the key risk indicators (KRIs), dictates appropriate actions.

The true risk I am seeing to sponsor organizations is an overly cautious approach to monitoring in the face of regulations that are designed to facilitate drug development. We need to listen to regulators, especially when they are pushing us towards a much better approach to ensuring trials are conducted properly.

CROs are also at a crossroads with regards to Risk-Based Monitoring. Trial budgets can have monitoring costs that account for a large chunk of the total budget—around 25-35%. Risk-Based Monitoring is a danger to CROs that rely on that top-line revenue from monitoring visits and don’t want to amend their business practice to support more efficiency. However, Risk-Based Monitoring is a major asset and differentiating factor for CROs who will deliver a centralized monitoring approach, thereby aiming to reduce the total amount of on-site monitoring visits and SDV. Another trend is CROs developing their own Risk-Based Monitoring software. I believe this to be the wrong approach, since CROs are, at their core, service providers. Software development is a different skillset, which leads to CROs developing software that misses the mark in terms of utilizing advanced analytical and statistical methodologies.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring - are they ready for it and why?

We already have the reflection paper from the EMA and the guidance from the FDA, so they are more than ready for it – other agencies will be issuing guidance in the near future. It is the industry that is not moving forward. I’ve actually seen examples in the industry where regulators are specifically looking for sponsors to adopt a Risk-Based Monitoring approach but the sponsors are putting the brakes on and insisting on using a traditional monitoring approach. We have to stop talking about it, discussing philosophy, and otherwise making it an academic exercise. We just have to commit to it and implement it.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

I don’t think systems will become obsolete, but I do think that we are going to start seeing integration between them all. That said, given our industry’s track record when it comes to the adoption of new processes and ideas, I think there will be a coexistence of all this technology for at least a decade, if not two or three. What is important is having software developers who fundamentally understand statistical approaches to monitoring and who put user-friendly software on the market.

Much of the early work in RBM has a focus on relatively simple KRIs and traffic-light dashboards which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

In order for adoption to increase, what is going to be crucial is ensuring the industry understands the difference between software for data visualization and software that contains an analytics engine. The understanding of how data is processed and analyzed using CSM, versus how visualization software works, seems to be lacking somewhat. There seems to be a view that any software that produces an aesthetically pleasing visualization automatically has a robust and statistically valid analytical component to it—that is simply not the case.

It is vital that study teams know that the data visualizations produced via CSM reflect algorithms which are complex and effectively light years ahead of what has been used in the industry. CSM allows teams to identify areas of risk which actually reflect both clinical and operational data, which can of course be visualized. However, we aren’t just visualizing data—we are visualizing the analysis of those data. 

Discuss the complexity of defining subjective thresholds using KRIs rather than (or in addition to) the objectivity of comparing one site against the others.

When considering something like subjective versus objective, there is evidence that there is a place for both. Until study teams start receiving enough clinical and operational data, perhaps three or four months into a study, they don’t really have the ability to be terribly objective. So until that point, there is nothing wrong with teams setting subjective thresholds based on previous experience, both with individual sites and the experience of study team members.

Crucially, the study team needs to understand that this is the landscape for a particular period of time, and that once the data are available, they should use analytics tools to actually interrogate the data and start fine-tuning the specific thresholds for that particular study – that’s when teams should shift from those pre-set thresholds and adapt to the objective reality of the trial.
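
A minimal sketch of that shift from a pre-set, subjective threshold to a data-driven one might look like the following; the function name, the minimum-data rule and the mean-plus-two-standard-deviations formula are assumptions made purely for illustration, not a prescribed methodology:

# Illustrative sketch only: the switching rule and the formula used once
# enough data exists are assumptions, not a prescribed methodology.
import statistics

def current_threshold(observed_site_rates, subjective_threshold=0.05, min_points=30):
    """Return the rate above which a site should be flagged for review."""
    if len(observed_site_rates) < min_points:
        # Early in the study: rely on the pre-set, experience-based threshold.
        return subjective_threshold
    # Enough data has accumulated: derive the threshold from the study itself,
    # here as the mean site rate plus two standard deviations.
    mean = statistics.mean(observed_site_rates)
    sd = statistics.stdev(observed_site_rates)
    return mean + 2 * sd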

What are your thoughts on the ICH (E6) Addendum, how to implement the parts outlined in the guidelines practically for the respective area (EDC, Risk-Based Monitoring, etc.) and how the changes could impact current practices?

The Addendum really reflects the new way of thinking and will support those companies who are really pushing for Risk-Based Monitoring adoption, as they have the backing of the guidelines. Initially, implementation will help companies obtain higher quality data in order to make better quality decisions. Further down the line, as study teams start adopting RBM technology within more of their trials and learning from it, they can start to create efficiencies which, in turn, could result in cost reductions.

Like many recommendations from the regulators, the guidelines are subject to individual companies’ interpretation. By working with the regulators to agree to a practical approach and framework, and importantly, sticking to it throughout the study, sponsors can ensure Risk-Based Monitoring implementation results in a successful operational oversight ecosystem.

Ultimately, listening to and acting on the data is key. After all, statistical methodologies dictate proof of safety and efficacy of a drug, so why aren’t we letting that same hypothesis testing actually tell us about the quality of data during the trial? The paradigm is changing in clinical development in order to increase our ability to have actionable and reliable data, as well as to reduce cost and shorten timelines. Advances like RBM and CSM can be used to focus our attention to ultimately safeguard our patients and deliver more productively for all that depend on us.

CluePoints Announce 2nd Annual Risk-Based Monitoring Roadshows to take place in Basel, Switzerland & Cambridge, UK


Join CluePoints CEO Francois Torche and OmniComm Sr. Director Steve Young at this special thought leadership event where you will hear about the benefits derived from an integrated solution for electronic data capture (EDC) technology, targeted source data verification (SDV), and centralized analytics/key risk indicators (KRIs).

Expect to learn about:

  • Best practices for RBM study planning and execution, and key pitfalls to avoid
  • Drawing a clear line of sight between study risk assessment and operational KRIs and quality oversight methods
  • Capitalizing on the best use of statistics in central data review
  • Differentiating between KRIs and data quality assessment

Dates and Locations:

May 24, 2016
08:30-11:30 AM
Grand Hotel Les Trois Rois, Blumenrain, Basel, Switzerland

Register

May 25, 2016
08:30-11:30 AM
Hotel Felix, Whitehouse Lane, Huntingdon Road, Cambridge, CB3 0LX

Register