Monthly Archives

July 2016

CluePoints Risk-Based Monitoring Platform (Version 1.9.0) is Live!


We’re thrilled to announce that version 1.9.0 of the CluePoints Central Monitoring platform is live! This release adds an abundance of new, customer-requested features and extends existing functionality.

There’s lots to cover, so let’s get started.

Trends Analysis

The new trends analysis functionality provides the study team with a workbench for reviewing how Key Risk Indicators (KRIs) and scores evolve over time. This is especially helpful when a study team wants to follow a specific indicator, for instance to assess the effectiveness of an action taken. The feature will also make it easier for data analysts to interpret scores for the Overall Data Quality Assessment.

Multivariate Tests

Until now, the statistical tests in the SMART statistical engine have mainly looked at one variable at a time. The new correlation test allows users to investigate the relationship between pairs of variables. Its goal is to detect an atypical correlation in a given center with respect to the correlation observed across all other centers.
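For readers who like to see the idea in code, here is a minimal sketch (in Python, with made-up column names) of what a correlation check of this kind might look like. It is not the SMART Engine’s actual algorithm; it simply compares the correlation between two variables at one site against the pooled correlation at all other sites using Fisher’s z-transform.

```python
# Illustrative sketch only -- not CluePoints' SMART algorithm.
# Compares the correlation between two variables at one site against the
# pooled correlation at all other sites, using Fisher's z-transform.
import numpy as np
import pandas as pd

def flag_atypical_correlation(df: pd.DataFrame, var_x: str, var_y: str,
                              site_col: str = "site", z_cutoff: float = 3.0):
    results = []
    for site, site_data in df.groupby(site_col):
        others = df[df[site_col] != site]
        if len(site_data) < 4 or len(others) < 4:
            continue  # too few patients to estimate a correlation reliably
        r_site = site_data[var_x].corr(site_data[var_y])
        r_rest = others[var_x].corr(others[var_y])
        # Fisher z-transform and an approximate standard error of the difference
        z_diff = np.arctanh(r_site) - np.arctanh(r_rest)
        se = np.sqrt(1.0 / (len(site_data) - 3) + 1.0 / (len(others) - 3))
        z_stat = z_diff / se
        results.append({"site": site, "r_site": r_site, "r_others": r_rest,
                        "z": z_stat, "flagged": abs(z_stat) > z_cutoff})
    return pd.DataFrame(results)
```

A site would then stand out when its correlation deviates from the rest by more than the chosen cutoff; in practice the SMART Engine applies its own statistical tests, so this snippet is purely for intuition.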

New Scatter Plot in Patient Level Data

A new scatter plot has been added to the patient-level data screen to make the results of the correlation test easier to interpret. The visualization shows the relationship between the variables being assessed for a given site versus all other sites. Users can interact with the graph and identify which point corresponds to which patient.
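As a rough illustration of this kind of view (a static sketch, not the interactive CluePoints visualization, and with assumed column names), the snippet below highlights one site’s patients against all other sites on a scatter plot.

```python
# Illustrative sketch of a "site versus all other sites" scatter plot;
# not the CluePoints visualization, and the column names are assumptions.
import matplotlib.pyplot as plt
import pandas as pd

def plot_site_vs_others(df: pd.DataFrame, var_x: str, var_y: str,
                        site, site_col: str = "site"):
    is_site = df[site_col] == site
    fig, ax = plt.subplots()
    ax.scatter(df.loc[~is_site, var_x], df.loc[~is_site, var_y],
               color="lightgray", label="All other sites")
    ax.scatter(df.loc[is_site, var_x], df.loc[is_site, var_y],
               color="crimson", label=f"Site {site}")
    ax.set_xlabel(var_x)
    ax.set_ylabel(var_y)
    ax.legend()
    return fig, ax
```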

New Setup Screen

The all-new setup screen was created based on the feedback we received from customers. We merged some of the setup screens to provide the data analyst with a unified view of how the study is set up.

That’s all for now!

If you would like a demonstration of the new features, please email contact@cluepoints.com.

About CluePoints

CluePoints is the premier provider of Risk-Based Monitoring and Data Quality Oversight Software. Our products utilize statistical algorithms to determine the quality, accuracy, and integrity of clinical trial data both during and after study conduct. Aligned with guidance from the FDA, EMA, and the new ICH (E6) addendum, CluePoints is deployed to support traditional monitoring and data management and can be implemented as the ultimate engine to drive Risk-Based Monitoring. The value of CluePoints lies in its powerful and timely ability to identify anomalous data and site errors allowing optimization of central and on-site monitoring and a significant reduction in overall regulatory submission risk.

Like this post? Follow @CluePoints on Twitter for blog post alerts, industry news, and more!

CluePoints Founder, Marc Buyse to Deliver Clinical Trial Data Quality Course at the ASA Biopharmaceutical Workshop


The ASA Biopharmaceutical Section Regulatory-Industry Statistics Workshop is fast approaching, and we’re thrilled to announce that CluePoints Founder Marc Buyse will deliver a course, alongside Paul Schuette (FDA) and Richard C. Zink (JMP Life Sciences), focused on methods to assess data integrity in clinical trials.

Course: SC3 Short Course 3: An Overview of Methods to Assess Data Integrity in Clinical Trials

Date: 09/28/16
Time: 08:30 AM – 12:00 PM
Location: Marriott Wardman Park, Washington DC

Course Overview

The quality of data from clinical trials has received a great deal of attention in recent years. Of central importance is the need to protect the well-being of study participants and maintain the integrity of final analysis results. However, traditional approaches to assessing data quality have come under increased scrutiny as providing little benefit for the substantial cost. Numerous regulatory guidance documents and industry position papers have described risk-based approaches to identify quality and safety issues. An emphasis on risk-based approaches forces the sponsor to take a more proactive approach to quality through a well-defined protocol and sufficient training and communication, and by highlighting those data most important to patient safety and the integrity of the final study results. Identifying problems early allows sponsors to refine procedures to address shortcomings as the trial is ongoing. The instructors of this short course will provide an overview of recent regulatory and industry guidance on data quality, and explore issues involving data standards and integration, sampling schemes for source data verification, risk indicators and their corresponding thresholds, and analyses to enrich sponsor insight and site intervention. In addition, statistical and graphical algorithms used to identify patient- and investigator trial misconduct and other data quality issues will be presented, and corresponding multiplicity considerations will be described. To supplement concepts, this course will provide numerous practical illustrations and describe examples from the literature. The role of statisticians in assessing data quality will be discussed.

Course Outline

1. Regulatory Landscape (Schuette)
2. Background (Zink)
   a. Recent history
   b. TransCelerate
   c. Classification of risk-based approaches
   d. Definitions
   e. Data sources and data standards
   f. Prospective approaches to quality
   g. Role of the statistician and why we should care
   h. Sampling Approaches for Source Data Verification (Zink)
3. Supervised Methods (TransCelerate) (Zink)
   a. Risk indicators and thresholds
   b. Graphical approaches
   c. Advanced analyses
4. Unsupervised Methods of Statistical Monitoring (Zink)
   a. Patient- and site-level analyses
   b. Graphical approaches
   c. Multiplicity, power and sample size considerations
5. Unsupervised Methods Using All Data (Buyse)
   a. Patient-, site-, and country-level analyses
   b. Graphical approaches
   c. Multiplicity and power considerations
   d. Scoring and prioritizing centers for audits
   e. Experience with these methods
6. Conclusions (Zink)
   a. Review cycle
   b. Models for centralized review
7. References

Risk-Based Monitoring Insights from the Industry: Jamie O’Keefe


Part Six

In the latest of our eight-part Q&A series, we catch up with Jamie O’Keefe, Vice President, Life Sciences and R&D Practice, at Paragon Solutions, to get his thoughts on the latest issues in Risk-Based Monitoring (RBM).

When it comes to the practical implementation of Risk-Based Monitoring, where do you observe differences in current best Risk-Based Monitoring practices between companies and CROs of different size?

Looking at where the industry is now, I think a lot of companies are ‘dipping their toes in the water.’ We have seen much more prevalent implementation of RBM in the large pharma space, particularly by the top two, mainly because funding is available. We are also seeing some sponsors considering how they can embed RBM within their organizations as a standard capability, rather than as an isolated practice in one-off trials, which is promising.

We have spent a lot of time with smaller pharma and biotech companies, where the focus tends to be on how they can actually go about this within an organization of their size. As small pharma companies typically outsource much of their clinical trial management to CROs, there is a question over how they should manage the adoption of RBM. Should they just outsource it? Or should they take on certain components themselves, such as central monitoring? In my opinion, much of it comes down to whether they are going to get anything out of it from an efficiency or value perspective. Or are they so small that the cost implications make the transition not worthwhile? Are they going to get an outcome from it over the next couple of years to make it worth it? This is an important consideration for small companies that simply don’t have the same financial support as the larger pharma organizations.

What is apparent to organizations of all sizes is that this is a big change, both in terms of culture and systems, as well as technology. In order to drive RBM adoption efficiently, accurately and compliantly, organizations must make sure the changes are communicated to all the relevant people in the business process, including any third-party partners. From a technology viewpoint, it’s about systems being able to meet the needs of all users: giving them access to the information they need, when and how they need it, so they have the knowledge to act accordingly.

How are the regulatory authorities likely to respond to the widespread adoption of Risk-Based Monitoring? Are they ready for it, and why?

In my experience, the people who are actually performing the assessments at study level are not necessarily as on board with RBM adoption as those at senior level, who are definitely ready for it. Despite this, there has been some strong success with the TransCelerate pilot studies, which have received positive feedback from the regulators.

Success will come down to how well a sponsor is documenting and, importantly, following their Standard Operating Procedures (SOPs). As long as there is robust documentation and supporting information to demonstrate their actions, and show how they have managed risk appropriately, I don’t think there will be a significant challenge from a regulatory authority perspective.

How do you think that the multitude of electronic systems are going to co-exist as Risk-Based Monitoring takes off? Will other systems become obsolete or will they need to evolve too?

There is going to be a big shift from a technology perspective. There will be certain categories of systems which will always be around; EDC is a great example, as there will always be a need to collect data. But in terms of how the EDC technology is used going forward, I think we are at a crossroads. It can either evolve to take on additional necessary capabilities, or it will be relegated. In the long term, I think that we will see some components of many current systems, for example, CTMS or site monitoring technology, integrated to create larger, RBM platforms.

Much of the early work in Risk-Based Monitoring has focused on relatively simple KRIs and traffic-light dashboards, which are easy to understand. There is a growing need to complement this approach with a more sophisticated and comprehensive analysis known as Central Statistical Monitoring (CSM). How are companies likely to adopt these complementary approaches to ensure data accuracy and integrity?

I think this is where we are going to see a divide between large and smaller companies. It will be much more achievable for smaller organizations to integrate CSM and adopt this new approach across all of their trials. The challenge for big pharma companies will be changing existing complex infrastructure. Trying to drive this change through multi-million-dollar, multi-year implementations will be a huge challenge and have a significant impact on business. For that reason, I think we will see the larger pharma organizations take much smaller steps towards RBM and CSM, as they make this transition slowly.

I also think another interesting discussion point is how far the industry could take this from a technology perspective. For example, is there an option further down the line to evolve CSM technology to become a complete risk management platform, rather than just a risk monitoring system?

Discuss the complexity of defining subjective thresholds using Key Risk Indicators (KRIs) rather than (or in addition to) the objectivity of comparing one site against the others.

While there needs to be a level of granularity and flexibility for what the literal thresholds will be, based on the therapeutic area, indication type, or study design, setting an objective threshold is fairly straightforward.

Setting subjective thresholds will always be more challenging. Instead of simply taking one opinion, assigning it a score and adding it into the overall calculation, there needs to be a system to facilitate the conversation from a group perspective. For example, the site monitor, the site manager, the study manager and the data manager all have important interactions with sites from a qualitative perspective. How does a study team bring all these opinions together and set one threshold?

What are your thoughts on the ICH (E6) Addendum, how to practically implement the parts outlined in the guidelines for the respective areas (EDC, Risk-Based Monitoring, etc.), and how could the changes impact current practices?

I think this will be the biggest roadblock for the industry. Current systems and tools, and what they are used for, differ greatly across the sector, so for the time being, implementation will be tailored to individual organizations.

As study teams make the transition from current to new systems, it will be important for them to have a clear vision and documented plan detailing how they will approach implementation, which acknowledges timeframes for adoption and any associated risks. The timeframe for implementation will vary depending on the size of the organization. For large pharma companies, the transition could take up to five years, whereas a small biotech business could potentially make the transition within six months.

The biggest impact is going to be defining the roles of the major stakeholders and ensuring teams work together cohesively. Different team members, for example, data monitors and study team leaders, will interact with data and make decisions in very different ways, so it will be crucial to consider how this will impact system capability and/or the flow of information across the team. In my opinion, success will come down to organizations making a plan and sticking to it. RBM is a rapidly evolving area, so communicating and documenting any changes to the original plan will be crucial in order to increase compliance across the organization.

It is also important to have an active dialogue with the health authorities. A collaborative approach will minimize the chance of challenges and issues when it comes to review by the regulators and ultimately result in more drugs being approved for market.

Join CluePoints’ dedicated Risk-Based Monitoring LinkedIn Group to keep abreast of the latest industry trends!

Is Risk-Based Monitoring the Answer to Data Quality Issues?


It’s no secret that data quality is at the heart of every clinical trial. Data validation is one of the main objectives of the FDA’s pre-approval inspection (PAI), which is performed to give the FDA assurance that all submitted data are accurate and complete.

For example, in clinical trials where patients complete diaries to record important trial information, such as outcomes or intake of medication, there may be a risk of diary falsification. And for trials that include a large number of patients spread across numerous sites, it becomes harder to validate data, especially if the Source Data Verification (SDV) method is used. This leaves many sponsors and CROs wondering how they can accurately validate data and ensure that their house is in order in preparation for regulatory submission.

A recent paper published by TransCelerate BioPharma (Statistical Monitoring in Clinical Trials: Best Practices for Detecting Anomalies Suggestive of Fabrication or Misconduct, Therapeutic Innovation & Regulatory Science 2016, Vol. 50(2), 144-154) suggests that traditional site-monitoring techniques are not an optimal vehicle for identifying data fabrication and other issues that may impact the overall trial results, and that a more reliable option would be to implement a centralized statistical monitoring approach to complement traditional site monitoring. The authors write:

“Traditional site-monitoring techniques are not optimal in finding data fabrication and other nonrandom data distributions with the greatest potential for jeopardizing the validity of study results. TransCelerate BioPharma conducted an experiment testing the utility of statistical methods for detecting implanted fabricated data and other signals of noncompliance. Methods: TransCelerate tested statistical monitoring on a data set from a chronic obstructive pulmonary disease (COPD) clinical study with 178 sites and 1554 subjects. Fabricated data were selectively implanted in 7 sites and 43 subjects by expert clinicians in COPD. The data set was partitioned to simulate studies of different sizes. Analyses of vital signs, spirometry, visit dates, and adverse events included distributions of standard deviations, correlations, repeated values, digit preference, and outlier/inlier detection. An interpretation team, including clinicians, statisticians, site monitoring, and data management, reviewed the results and created an algorithm to flag sites for fabricated data. Results: The algorithm identified 11 sites (19%), 19 sites (31%), 28 sites (16%), and 45 sites (25%) as having potentially fabricated data for studies 2A, 2, 1A, and 1, respectively. For study 2A, 3 of 7 sites with fabricated data were detected, 5 of 7 were detected for studies 2 and 1A, and 6 of 7 for study 1. Except for study 2A, the algorithm had good sensitivity and specificity (>70%) for identifying sites with fabricated data. Conclusions: We recommend a cross-functional, collaborative approach to statistical monitoring that can adapt to study design and data source and use a combination of statistical screening techniques and confirmatory graphics.”

This statement is not surprising given that TransCelerate has advocated Central Statistical Monitoring to inform Risk-Based Monitoring, and this paper validates some of the monitoring recommendations made in earlier TransCelerate papers.

At CluePoints, we’ve spent over ten years perfecting software to give sponsors peace of mind when it comes to identifying and mitigating data quality issues in clinical trials. Aligned with FDA, EMA, and ICH E6 guidance, CluePoints’ Central Monitoring Platform employs a set of unique statistical algorithms to support a Risk-Based Monitoring strategy. These algorithms are embedded in the SMART Engine, a powerful cloud-based application that evaluates how the data from every center or patient differs from the data across all centers/patients.

Learn how the CluePoints SMART Engine detected fraudulent reporting in a large Phase III cardiovascular study.

Like this post? Join over 1,600 industry professionals in our Risk-Based Monitoring group on LinkedIn to receive our latest posts!