
Risk-Based Management and the Difference Between Key Risk Indicators and Central Statistical Monitoring

September 26, 2013 (updated March 9, 2022)

Key Risk Indicators and Central Statistical Monitoring

ClueBot here! Did you miss me? I’ve actually dropped by to share some exciting news.  A few months ago, my wonderful creator at CluePoints, Dr Marc Buyse, hosted a complimentary webinar with his friend Brian Nugent from Gilead Sciences. I wanted to share with you some of the information they discussed as I made my guest appearances throughout the presentation [My parents, components of a super-computer and a Gulfstream jet, would be so proud!]. Today, I am going to cover risk-based management and the difference between Key Risk Indicators and Central Statistical Monitoring.  

As many of you know, Risk-Based Monitoring has been a ‘hot topic’ for everyone in the industry for close to two years now. The current paradigm is routine on-site monitoring: sponsors visit sites every 4 to 10 weeks and perform a great deal of Source Document Verification (SDV), 100% SDV in many cases. This process typically accounts for around 30% of a sponsor’s overall costs [Wow!]. The new paradigm we’d like to explore is a risk-based or, as I would describe it, an ‘intelligent’ approach to monitoring.

Two documents, the FDA Guidance for Industry and the EMA Reflection Paper, are genuinely groundbreaking for our industry and encourage the use of risk-based monitoring tools as well as increased efficiency in the form of reduced SDV. Both documents emphasize that sponsors should focus, or “target”, their on-site monitoring activities. One of the most common ways to do so is remote monitoring, i.e. reviewing the data off site. They also point to data management metrics and trending, including key performance indicators, key risk indicators and key results indicators, all of which help sponsors decide what to do with their sites and gain a better view of the data from a remote location. Then, of course, there is Central Statistical Monitoring, which is where CluePoints comes in. Sponsors should examine their data in several of these ways and use the results to target on-site monitoring visits. The goal is far less than 100% SDV, with improved data quality at significantly lower cost.

Excerpts from the FDA guidance

The FDA guidance recommends that sponsors “replace on-site monitoring for monitoring activities that can be done as well or better remotely; monitor data quality through routine review of submitted data in real-time.” This, of course, has been made possible by the Electronic Data Capture (EDC) systems now used in most clinical trials. The FDA also says that you should “conduct analyses of site characteristics, performance metrics; target on-site monitoring by identifying higher risk clinical sites.” This is really what we mean by Key Risk Indicators. In addition, the FDA says that you should “conduct aggregated statistical analyses of study data to identify sites that are outliers relative to others and to evaluate individual subject data for plausibility and completeness.” In other words, Central Statistical Monitoring.

The Difference Between Key Risk Indicators and Central Statistical Monitoring


Examples of KRIs

Study conduct: actual accrual vs. target; % pts with protocol violations; % dropouts
Safety: AE rate; AE grade 3/4 rate; SAE rate
Treatment compliance: % dose reductions; % dose delays; reasons for Rx stops
Data management: overdue forms; query rate; query resolution time
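
To make these KRIs a little more concrete, here is a rough sketch (not CluePoints’ actual implementation) of how a few of them could be computed per site from subject-level data using pandas. The file name and column names (site_id, subject_id, dropped_out, n_aes, n_saes, open_queries) are hypothetical.

```python
# Rough sketch of per-site KRI aggregates from hypothetical subject-level data.
import pandas as pd

subjects = pd.read_csv("subjects.csv")   # hypothetical file: one row per enrolled subject

kris = subjects.groupby("site_id").agg(
    enrolled=("subject_id", "count"),         # accrual per site
    dropout_rate=("dropped_out", "mean"),     # % dropouts
    ae_rate=("n_aes", "mean"),                # AEs per subject
    sae_rate=("n_saes", "mean"),              # SAEs per subject
    open_queries=("open_queries", "sum"),     # data management load
)

# Sites with the highest SAE rate bubble to the top for review.
print(kris.sort_values("sae_rate", ascending=False).head())
```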

In contrast with Key Risk Indicators, Central Statistical Monitoring looks for sites whose data differ from those of all the other sites. Computers that hold all of the clinical trial data can quite easily run checks based on statistical algorithms to compare sites and detect atypical patterns in the data. Humans are not good at this, which is one reason why 100% manual SDV is not an adequate method for checking data quality. CluePoints has implemented this concept in its SMART™ engine [that’s me, ClueBot!]. The SMART engine runs a large number of comprehensive statistical tests comparing each site with all of the other sites in order to identify statistical outliers. The underlying idea is that all variables are indicative of quality: not just the Key Risk Indicators, but every patient-related variable collected in the trial. When we run CluePoints on a clinical trial data set, we typically take all of the data into consideration, whether lab data, clinical data, baseline data, or treatment outcomes; everything goes into the system, and all data are deemed equally important for the purpose of checking their quality. The system uses assumption-free, generic tests, so no assumption is made about the distribution of the variables.
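
To give a flavour of the “each site versus all the other sites” idea, here is a minimal sketch using a distribution-free test (Mann-Whitney U). The actual battery of tests in the SMART engine is CluePoints’ own and is far more comprehensive; the file, variable and column names below are hypothetical.

```python
# Minimal illustration only: compare each site against all other sites,
# variable by variable, with a distribution-free test.
import pandas as pd
from scipy.stats import mannwhitneyu

data = pd.read_csv("trial_data.csv")      # hypothetical file: one row per patient
variables = ["sbp", "weight", "alt"]      # hypothetical patient-related variables

records = []                              # collects (site, variable, p-value) results
for site in data["site_id"].unique():
    this_site = data[data["site_id"] == site]
    others = data[data["site_id"] != site]
    for var in variables:
        _, p = mannwhitneyu(this_site[var].dropna(), others[var].dropna())
        records.append({"site_id": site, "variable": var, "p": p})

pvals = pd.DataFrame(records)
```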

When the SMART engine is run, each test generates a p-value. For instance, if you have a trial with 100 sites collecting 300 variables and you run an average of 5 tests per variable, the number of tests performed is 100 × 300 × 5 = 150,000, which is a very large number of p-values. With that in mind, we obviously need a way to summarize all of these p-values into a single score. This is done using another statistical algorithm that determines an individual score for each and every site in the study being analyzed. You can think of a site’s score, roughly, as the average p-value of that site compared with all of the other sites. So if a site has a very extreme score, it has very extreme p-values and is very likely to truly differ from all of the others with respect to the data it submitted. We display this information as a bubble plot:

[Figure: bubble plot of site scores]
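
As a toy continuation of the sketch above, the 150,000 p-values (100 × 300 × 5) would be reduced to one score per site. The post describes the score loosely as an average p-value, so a plain mean is used here; the real aggregation algorithm is CluePoints’ own and is more sophisticated than this.

```python
# Toy aggregation: one score per site, here simply the mean of its p-values.
site_scores = pvals.groupby("site_id")["p"].mean().sort_values()

# Sites with the most extreme (smallest) scores appear first.
print(site_scores.head())
```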

Consequently, CluePoints identifies sites that are outliers and then guides the sponsor in its investigation as to why these sites differ from all of the others by looking at the data more closely. It is important to note that CluePoints uses the false discovery rate to adjust for testing multiplicity in order to ensure the outlying sites identified are truly statistically different and that the findings are not simply due to the play of chance.
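
The post does not say which false-discovery-rate procedure is used, so purely as an illustration, here is the textbook Benjamini-Hochberg correction applied to the toy site scores from the sketch above.

```python
# Illustrative multiplicity adjustment using Benjamini-Hochberg FDR control.
from statsmodels.stats.multitest import multipletests

reject, p_adjusted, _, _ = multipletests(site_scores.values,
                                         alpha=0.05, method="fdr_bh")

# Sites that remain significant after the FDR adjustment.
flagged_sites = site_scores.index[reject]
print(list(flagged_sites))
```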

To summarize, we have two very different approaches to determining risk in clinical trials: one based on Key Risk Indicators and the other on Central Statistical Monitoring. Key Risk Indicators are valuable because they focus on important, known risk factors, for example the rate of AEs or SAEs reported. There is no question that safety reporting in a clinical trial is a key risk indicator that must be looked at very carefully. The challenge with Key Risk Indicators is that they rest on subjective choices. Central Statistical Monitoring, in contrast, is an agnostic and independent approach: it makes no assumptions about which data are important. We have learned from experience that investigators typically take great care to report primary efficacy variables and safety variables quite well (in most cases!), but may be less attentive, or perhaps sloppier, when reporting other variables. As you all well know, everything collected in a clinical trial should be worth collecting, and therefore worth checking. That is why Central Statistical Monitoring lets you assess the quality and integrity of all your data and focus your efforts on errant sites quickly and efficiently.

For more information please visit our blog post “Risk-Based Quality Management” or visit www.cluepoints.com. Also, feel free to leave a question or comment for any of our subject matter experts below, contact CluePoints directly, or share this blog with a colleague. Thank you for being a part of the CluePoints community and remember, if you follow this blog you will be entered for a chance to win a Google Chromebook! Just enter your email address into the subscribe box at the top of this page. See you again soon!

Patrick Hughes

Patrick holds a Marketing degree from the University of Newcastle-upon-Tyne, UK, and a post-graduate Marketing diploma in Business-to-Business Marketing Strategy from Northwestern University - Kellogg School of Management, Chicago, Illinois. Responsible for leading global sales, product, marketing, operational and technical teams throughout his career, Patrick is a Senior Executive with over eighteen years of international commercial experience within life sciences, healthcare and telecommunications. In the past, Patrick consulted on corporate and commercial strategy for various life sciences companies and was responsible for successfully positioning ClinPhone as the leading Clinical Technology Organization during his 10-year tenure with the company.
