Harnessing Risk-Based Quality Management and Deep Learning to Improve Trial Knowledge and Drive Better Outcomes – Your Questions Answered

February 23, 2023 (updated March 8, 2023)

Risk-Based Quality Management (RBQM) adoption continues to rise, ensuring effective oversight of disparate data sources within a clinical trial.

Integrating deep learning capabilities has only enhanced RBQM’s potential to improve trial knowledge and drive better outcomes for sponsors and patients.

In our latest blog, Steve Young, Chief Scientific Officer at CluePoints, answers your questions on how RBQM and deep learning empower sites to identify risks more efficiently and effectively.

How does CluePoints fit into the RBQM space?

CluePoints was founded in 2012 based on the SMART® engine, a suite of specific statistical tests designed to assess the quality of patient data in clinical trials. These tests run not just on one patient variable, data field, or data domain, but across most or all of the patient data.

In 2014, we introduced the first version of a self-service cloud platform with a user-friendly interface. It enables study teams and organizations to set up and run the statistical tests independently, without needing statisticians or statistical expertise. The interface guides them to the sites at greatest risk and allows them to drill into the causes of anomalies and act on them.

Why are we moving away from manual risk assessment?

Before the age of RBQM, quality oversight relied on manual, visual inspection reviews. Even today, we still see a significant number of manually intensive reviews leading to manual queries, data-cleaning efforts, and the use of various visualizations to look for trends.

These approaches are inefficient and cannot effectively oversee quality or identify the real, important risks emerging in a trial.

Is central statistical monitoring more efficient than KRIs?

Analysis across hundreds of studies that used both unsupervised statistical monitoring and Key Risk Indicators (KRIs) found that up to 40% of all confirmed issues arising from identified risks came from central statistical monitoring.

That is strong evidence that you are likely missing a lot if you just rely on KRIs and some visualizations.

How can statistical methods support the evaluation and detection of risk?

Sophisticated statistical methods support all components of central monitoring. For example, in addition to setting risk thresholds based on discrete values, you can use statistical, or relative, risk thresholds that compare how any given site performs on a KRI metric against the whole-study trend. This allows you to identify statistically which sites are the biggest outliers and which represent a real risk.
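
To make the relative-threshold idea concrete, here is a minimal sketch (not CluePoints' implementation) in which each site's rate on a hypothetical KRI, adverse events per visit, is tested against the whole-study rate, and sites whose deviation is statistically unlikely are flagged. The site names and counts are invented for illustration.

```python
from scipy import stats

# Hypothetical per-site KRI: adverse events (AEs) reported per subject visit.
# site -> (number of AEs, number of visits); all values are illustrative.
site_counts = {
    "S001": (12, 300), "S002": (18, 280), "S003": (45, 310),
    "S004": (15, 295), "S005": (11, 305), "S006": (14, 290),
}

total_aes = sum(a for a, _ in site_counts.values())
total_visits = sum(v for _, v in site_counts.values())
study_rate = total_aes / total_visits  # the whole-study trend

for site, (aes, visits) in site_counts.items():
    # Exact binomial test: is this site's AE rate consistent with the study rate?
    p_value = stats.binomtest(aes, visits, study_rate).pvalue
    flag = "OUTLIER" if p_value < 0.01 else "ok"
    print(f"{site}: rate={aes / visits:.3f}  study rate={study_rate:.3f}  p={p_value:.4f}  {flag}")
```

Rather than asking "did this site exceed 0.1 AEs per visit?", the question becomes "is this site's rate plausibly drawn from the same distribution as the rest of the study?", which adapts automatically as the study-wide trend evolves.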

Quality tolerance limits (QTLs) do require discrete thresholds, but statistical confidence intervals can tell you and your study team whether a given breach of a QTL is statistically significant.
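
As an illustration of that second point, the sketch below assumes a hypothetical QTL of 5% premature discontinuation and made-up counts, and uses a Wilson confidence interval to judge whether an observed breach is statistically meaningful or likely just noise.

```python
from scipy.stats import binomtest

# Hypothetical QTL: no more than 5% of randomized patients prematurely discontinue.
qtl_limit = 0.05
discontinued, randomized = 14, 220  # illustrative counts, not real trial data

observed_rate = discontinued / randomized
# 95% Wilson confidence interval around the observed discontinuation rate.
ci = binomtest(discontinued, randomized, qtl_limit).proportion_ci(method="wilson")

print(f"Observed rate {observed_rate:.3f}, 95% CI [{ci.low:.3f}, {ci.high:.3f}], QTL {qtl_limit:.2f}")
if ci.low > qtl_limit:
    print("Breach is statistically significant: the entire interval sits above the QTL.")
elif observed_rate > qtl_limit:
    print("Observed rate exceeds the QTL, but the interval still covers it; may be noise.")
else:
    print("Within the QTL.")
```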

Can you give a practical example of where RBQM has identified an emerging risk in a trial?

In a trial with an endpoint related to weight loss, one site submitted data showing patient weights steadily decreasing throughout participation. Plotting the same data for another, randomly selected site revealed a markedly different pattern, but with just two sites to compare, it was difficult to know which one was the outlier.

Adding more sites to the picture increased the frame of reference and clearly showed the first site was different from all the others. An investigation revealed it was reporting inaccurate data.

However, this significant finding was not identified by Source Data Verification (SDV), Source Data Review (SDR), or any data management reviews or checks because all patient weight observations were within a believable range.

This is just one example of how RBQM approaches can identify risks typically missed by traditional methods.
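
A rough sketch of the underlying principle, using simulated per-patient weight-change slopes rather than real trial data: once each site's trend is compared against the pooled data from many other sites, the anomalous site stands out clearly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: per-patient weight-change slope (kg per visit) at each site.
# Most sites show modest, noisy weight loss; "S001" shows an implausibly steady decline.
def site_slopes(n_patients, mean_slope, noise):
    return rng.normal(mean_slope, noise, n_patients)

slopes = {f"S{i:03d}": site_slopes(20, -0.3, 0.6) for i in range(2, 12)}
slopes["S001"] = site_slopes(20, -1.5, 0.05)  # the anomalous site

# Compare each site's mean slope with the pooled distribution from all other sites.
for site, s in sorted(slopes.items()):
    others = np.concatenate([v for k, v in slopes.items() if k != site])
    z = (s.mean() - others.mean()) / (others.std(ddof=1) / np.sqrt(len(s)))
    flag = "  <-- outlier" if abs(z) > 4 else ""
    print(f"{site}: mean slope {s.mean():+.2f} kg/visit, z = {z:+.1f}{flag}")
```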

How widely used is this methodology, and how can the data benefit future clinical trials?

More than 1,000 studies from around 90 organizations have been run on our platform. These have generated more than 120,000 documented risk signals and more than 30,000 actions related to those signals, and both numbers are growing rapidly month by month.

This information represents the knowledge we need to tap into to help further enable end-to-end RBQM solutions.

Is there a risk that the power of central monitoring will lead to operational bias?

RBQM techniques have been shown to reduce deviation and bias, resulting in more accurate patient outcomes. Measures can also be put in place to minimize potential bias.

For example, our statistical monitoring solution performed scheduled analyses every six months over the four-year duration of a global trial. The analyses were not conducted blind to the sites: staff knew that data-quality oversight was taking place, but they were given no access to the results. This meant sites did not know how they ranked or what kinds of problems had been detected, and because no information that could have led to unblinding was shared, no bias was introduced.

Do you envision deep learning techniques will replace the statistical monitoring engine?

Conceptually, we may find an opportunity for machine learning to start doing more efficiently what our SMART engine statistical tests do today. It is not something we see happening in the next couple of years, and perhaps it never will.

We see machine learning working with the statistical engine to improve the effectiveness of prioritizing risks and anomalies.

How can machine learning make the study setup more efficient?

We have developed a solution that is already showing significant benefits in study setup activity and effort. It uses machine learning and natural language processing (NLP) to identify where datasets should be filtered and which variables to prepare for statistical testing. Previously a manually intensive part of the process, this will now be highly automated.
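
As a very simplified illustration of the NLP idea (not our actual model), the sketch below matches new variable labels against labels whose handling is already known, using character n-gram similarity; the labels and categories are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training set: variable labels from past studies and how they were handled.
labeled = [
    ("systolic blood pressure", "vital-sign value: include in statistical testing"),
    ("date of informed consent", "administrative date: exclude"),
    ("adverse event severity", "categorical safety variable: include"),
    ("visit number", "structural variable: use for dataset filtering"),
]
new_labels = ["diastolic blood pressure (mmHg)", "consent signature date", "adverse event seriousness"]

# Character n-grams are robust to abbreviations and small wording differences.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
ref_matrix = vectorizer.fit_transform([text for text, _ in labeled])

for label in new_labels:
    sims = cosine_similarity(vectorizer.transform([label]), ref_matrix)[0]
    best = sims.argmax()
    print(f"{label!r} -> {labeled[best][1]} (similarity {sims[best]:.2f})")
```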

Can deep learning help identify potential data issues like professional patients?

We have a module that detects duplicate patients who might have enrolled in a study more than once at different sites. It uses a statistics-based comparison to determine the relative likelihood that any given pair of patients is the same person. We also have intelligent patient profiles that score patients statistically and identify those with unusual data patterns compared to others in the study.
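
A toy sketch of the pairwise-comparison idea, using invented, anonymised attributes and a crude match score rather than our actual statistical model:

```python
from itertools import combinations

# Hypothetical anonymised attributes: (site, birth_year, sex, height_cm, weight_kg)
patients = {
    "P01": ("S001", 1962, "F", 164, 71.2),
    "P02": ("S004", 1962, "F", 164, 70.8),   # suspiciously similar to P01
    "P03": ("S002", 1975, "M", 181, 92.5),
    "P04": ("S003", 1980, "F", 158, 60.1),
}

def match_score(a, b):
    """Crude similarity: exact matches on discrete fields, closeness on continuous ones."""
    _, year_a, sex_a, ht_a, wt_a = a
    _, year_b, sex_b, ht_b, wt_b = b
    score = (year_a == year_b) + (sex_a == sex_b)
    score += max(0.0, 1 - abs(ht_a - ht_b) / 5)   # within 5 cm counts partially
    score += max(0.0, 1 - abs(wt_a - wt_b) / 5)   # within 5 kg counts partially
    return score / 4                              # normalise to 0..1

for (id_a, a), (id_b, b) in combinations(patients.items(), 2):
    if a[0] == b[0]:
        continue  # only flag potential duplicates enrolled at *different* sites
    score = match_score(a, b)
    if score > 0.8:
        print(f"Possible duplicate: {id_a} ({a[0]}) vs {id_b} ({b[0]}), score {score:.2f}")
```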

Deep learning can also help identify other issues, such as inadequate documentation. Since our Root Cause Decision Support feature was implemented in 2021, the proportion of risk signals with unclear documentation has dropped from around 40% to 30%.

Do you have results demonstrating the reliability of deep learning methods?

We have been using machine learning to learn from past medical coding so that adverse events and concomitant medications can be coded more effectively. We take the verbatim text and run it through a deep-learning model, and the accuracy achieved to date is extremely impressive. In adverse event (AE) coding, for example, the first prediction is already correct well over 90% of the time, and when we list the top five predictions, the correct code is among them well over 95% of the time.
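
For context, top-1 and top-5 accuracy are computed roughly as in the sketch below; the verbatim terms, gold codes, and ranked predictions are invented for illustration and are not model output.

```python
# Evaluation sketch: top-k accuracy of a coding model's ranked predictions.
# Each entry pairs a verbatim AE term with the gold-standard coded term and the
# model's ranked candidate list (illustrative values only).
examples = [
    ("splitting headache", "Headache", ["Headache", "Migraine", "Head discomfort"]),
    ("tummy ache", "Abdominal pain", ["Nausea", "Abdominal pain", "Dyspepsia"]),
    ("felt dizzy", "Dizziness", ["Dizziness", "Vertigo", "Syncope"]),
]

def top_k_accuracy(rows, k):
    hits = sum(gold in ranked[:k] for _, gold, ranked in rows)
    return hits / len(rows)

print(f"top-1 accuracy: {top_k_accuracy(examples, 1):.0%}")
print(f"top-5 accuracy: {top_k_accuracy(examples, 5):.0%}")
```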

Steve Young

As Chief Scientific Officer for CluePoints, Steve oversees the research and development of advanced methods for data analytics, data surveillance and risk management, along with providing guidance to customers in RBQM methodology and best practices. Steve worked for three bio-pharmaceutical companies over a span of 15 years where he assumed leadership positions in clinical data management and led the successful enterprise roll-out of EDC at both J&J and Centocor. He spent an additional 6 years with eClinical solution providers Medidata and OmniComm, leading the development of analytics and risk-based quality management (RBQM) solutions and providing RBQM consultation to many organizations. Steve also led a pivotal RBM-related analysis in collaboration with TransCelerate, and is currently leading RBQM best-practice initiatives for several industry RBM consortiums. Steve holds a Master’s degree in Mathematics from Villanova University.