
QTLs: Where Are We, And How Much Further Can We Go?

Quality tolerance limits (QTLs) may be a relatively new concept in clinical trials, but they are already making research safer and its results more reliable.

Since being mandated in 2016 under the International Council for Harmonisation (ICH) guideline for good clinical practice, E6(R2), QTLs have been implemented by a growing number of sponsors and CROs.1 However, some ambiguity remains around the best way to select QTL parameters and set thresholds.

During a session at the recent RBQM Live event, CluePoints’ Chief Scientific Officer, Steve Young, spoke to leading voices about their motivations, experiences, and thoughts on the future of QTLs.

An ethical imperative

Aside from the regulatory requirement, sponsors are adopting QTLs because it is the right thing to do, said Joanna Florek-Marwitz, Data Management, Data Science, and RBQM Senior Leader at UCB.

“The primary motivation for implementing this is to conduct studies safely and to generate reliable evidence,” she said.

Marion Wolfs, Senior Director of Risk Management and Central Monitoring at Janssen, agreed that an important ethical element existed. “We want to make sure we work in the safest possible way, that we develop medications that are as safe as possible, and that they do what we intend them to do,” she said, adding that the very process of selecting QTL parameters and thresholds forced study teams to think about their scientific questions “on a deeper level” and “challenge their assumptions.”

It keeps the study teams “focused on what’s important,” explained Steve Gilbert, Pfizer’s Senior Director of Statistics. “We work in partnership with our patients. We are responsible for keeping them safe and ensuring that their commitment to our trial is not wasted… if we run a study but can’t get a clean readout because of quality issues, it’s not ethical, and it’s not fair to our patients.”

Selecting QTL parameters

When asked how they chose QTL parameters, the common themes were making them study-specific and conducting a thorough risk assessment.

“We have whiteboarding sessions where everybody gathers to find and rank all the study risks. Then, from that, we generate the study-specific key risk indicators (KRIs), and the QTLs will come from those,” said Gilbert, adding that it all started with identifying “critical to quality” factors.

All three speakers said their organizations maintained a library of common QTLs, which Florek-Marwitz explained helped ease implementation and avoid overburdening the study team. However, these tended to be more generic, with the study team refining them to suit the trial.

“The QTL has to be connected to the study’s scientific question,” said Wolfs. While the library may suggest monitoring “missed endpoints,” she explained, it was down to the study team to select a study-specific endpoint that would aid with the final analysis. “The QTLs we select should help us answer our scientific question,” she added.

All three said their organizations tended to implement between two and four QTLs per trial, alongside numerous KRIs to monitor risk at the site level.

Setting QTL thresholds

Florek-Marwitz described the setting of QTL thresholds as a “cross-functional, expert-led process.”

At UCB, study teams have access to a database of historical data they can search by categories, including therapeutic area, compound, study design type, and study phase. This information then informs discussions with medics, statisticians, and other subject matter experts, who set the thresholds collectively.

All three speakers’ organizations had a similar approach. Wolfs said Janssen also ran various scenarios to calculate how variations in the identified QTL parameters would impact data quality at different breakpoints.
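The kind of scenario analysis Wolfs describes can be sketched in code. The example below is a minimal illustration, not Janssen's actual method: it assumes a lost-to-follow-up QTL and uses simulation to estimate how often a study whose true rate matches the historical rate would nonetheless breach each candidate threshold. The function name, rates, and study size are all hypothetical.

```python
import random

def breach_probability(historical_rate, n_patients, threshold,
                       n_sims=10_000, seed=42):
    """Estimate the chance that a study whose true lost-to-follow-up
    rate equals the historical rate would still breach a candidate
    QTL threshold, purely through random variation."""
    rng = random.Random(seed)
    breaches = 0
    for _ in range(n_sims):
        # Simulate one study: each patient is lost with the historical probability.
        lost = sum(rng.random() < historical_rate for _ in range(n_patients))
        if lost / n_patients > threshold:
            breaches += 1
    return breaches / n_sims

# Compare candidate thresholds for a hypothetical 300-patient study
# with a 12% historical lost-to-follow-up rate.
for threshold in (0.15, 0.18, 0.20):
    p = breach_probability(0.12, 300, threshold)
    print(f"Threshold {threshold:.0%}: ~{p:.1%} chance of a false-alarm breach")
```

Running candidate thresholds through a check like this shows the trade-off the speakers allude to: a threshold set too close to the historical rate generates frequent false alarms, while one set too loosely may only trigger once data quality is already compromised.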

They also all highlighted the limits of basing decisions on historical data. For one, that data may rest on assumptions different from those underpinning the current scientific question. What’s more, “things change over time,” said Wolfs, explaining it was necessary to consider how much historical data was “still appropriate to use.”

“Regulatory demands can also come into play,” said Gilbert. “I might know that, in a particular therapeutic area, the FDA will not accept the results if we lose more than 20% of patients, for example.” A “lost to follow-up” QTL, then, could be useful.
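As a minimal sketch of the lost-to-follow-up QTL Gilbert mentions, the check itself reduces to comparing the observed proportion against the predefined limit. The function and the patient counts below are illustrative assumptions, not any sponsor's implementation.

```python
def check_qtl(lost_to_follow_up, enrolled, qtl=0.20):
    """Return the observed lost-to-follow-up rate and whether it
    exceeds the predefined QTL (here, an assumed 20% limit)."""
    rate = lost_to_follow_up / enrolled
    return rate, rate > qtl

# Hypothetical study: 48 of 220 enrolled patients lost to follow-up.
rate, breached = check_qtl(lost_to_follow_up=48, enrolled=220)
print(f"Observed rate {rate:.1%} — QTL breached: {breached}")
```

In practice such a check would run continuously against accumulating study data, with a breach triggering the root-cause investigation and corrective action that E6(R2) expects.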

QTLs and the future

The use of QTLs is a process of “continual learning” for the industry, said Wolfs. She added that, over time, she expected organizations to get better at selecting the parameters and thresholds that add the most value.

As the sector becomes more proficient, Florek-Marwitz expects the approach to expand into other areas. “We have established risk-based monitoring and have all this experience. I see an opportunity to translate that into other areas, such as clinical data management. It could change our thinking from fixing issues to preventing and mitigating risk.”

For Gilbert’s part, he said he expected to see this become “business as usual,” with further automation helping to make QTL and KRI parameter selection and threshold setting “one seamless process.”

To learn more about this topic, listen to the full on-demand session from RBQM Live.

Reference:

  1. Bhagat, R., Bojarski, L., Chevalier, S., Görtz, D. R., Le Meignen, S., Makowski, M., … & Turri, S. (2021). Quality tolerance limits: framework for successful implementation in clinical development. Therapeutic Innovation & Regulatory Science, 55(2), 251–261.