Q1) Are the indicators percentages? E.g., % AEs for the site vs. the study, or just raw counts (which could suggest there are many AEs simply because there are many subjects)?
The units for each KRI may be different. In the example of AE Rate, we suggest computing a rate of AEs per patient visit. You can think of this as the average number of AEs per patient per visit – and a site with a very low average compared to other sites in the study may be at risk of under-reporting AEs.
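As an illustrative sketch of this KRI (not CluePoints' actual implementation, and with hypothetical site data), the AE rate per patient-visit can be computed per site and compared against the study-wide rate to flag possible under-reporters:

```python
# Illustrative AE Rate KRI: AEs per patient-visit, per site.
# Site data below is hypothetical.

def ae_rate_per_visit(ae_count, visit_count):
    """Average number of AEs reported per patient visit at a site."""
    return ae_count / visit_count if visit_count else 0.0

# Hypothetical sites: (site_id, total AEs reported, total patient visits)
sites = [("S01", 42, 120), ("S02", 38, 110), ("S03", 4, 115)]

rates = {site: ae_rate_per_visit(aes, visits) for site, aes, visits in sites}
study_rate = sum(aes for _, aes, _ in sites) / sum(v for _, _, v in sites)

# Flag sites far below the study average as possible AE under-reporters.
# The 50% threshold is an arbitrary illustration, not a recommended cutoff.
flagged = [s for s, r in rates.items() if r < 0.5 * study_rate]
print(flagged)  # ['S03']: its rate (~0.03) is well below the study rate (~0.24)
```

In practice a statistical test, rather than a fixed threshold, would decide whether a site's rate is anomalously low.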
Q2) The concept of Risk-Based Monitoring has now been around for some time – how do sites feel about this approach now? Last year, we felt it was a very mixed bag – some being receptive, others very much against it.
We are also seeing mixed feedback from sites, along with a fair amount of confusion and misperceptions about what RBM is and what it means for them. One misperception is that RBM requires sites to scan or fax lots of patient source charts for remote review/SDV. While some sponsors indeed require sites to do this, we don’t consider this an RBM-specific practice (i.e., RBM => less reliance on SDV, NOT remote SDV).
Q3) Assume some sponsors have their own DM team and may outsource Risk-Based Monitoring – how does CluePoints ensure that activities performed don't overlap with traditional DM activities?
CluePoints provides an advanced platform for centralized statistical data monitoring and KRIs (along with RACT and issue workflow support). We cannot dictate the assignment of roles and responsibilities between a sponsor and CRO. However, it's important to note that the CluePoints solution enables very robust statistical methods to detect anomalous patterns in data and operational quality metrics, whereas traditional data management reviews (dictionary coding, data listings reviews, programmed edit checks, SAE reconciliation, etc.) are focused on identifying and cleaning discrepant data on a case-by-case basis. The two approaches should, therefore, be largely complementary and not overlap.
Q4) Has the CluePoints platform (SMART engine, etc.) undergone inspection by any authorities – PMDA, FDA, EMA, etc. – and if so, any feedback?
CluePoints has not yet undergone a formal regulatory authority inspection. However, as Francois mentioned during the webinar – stay tuned for news regarding FDA and CluePoints.
Q5) What’s the recommended approach for small studies – single-site Phase I, for example – RBM versus traditional SDV, etc.?
RBM as a methodology is just as valid for small (phase 1) studies as for large, global studies. The operational methods of monitoring risk will likely be different, however. In particular, some of the statistical methods used to detect sites with anomalous/unusual patterns of data will not be practical in a single site study. But doing pre-study risk planning including use of an RACT, along with the use of some operational KRIs, is a good idea even on smaller studies.
Q6) Do you think the costs of software/technological solutions for Risk-Based Monitoring negate the cost savings of reduced on-site monitoring?
The technology investment needed to enable and support effective RBM is minimal compared to the potential cost savings from reduced SDV and on-site monitoring. CluePoints has developed an RBM value model that can be used to illustrate the potential savings for any given study design. Overall, we expect potential savings of up to 12% of the overall study budget – and that includes the average cost of RBM technology and the incremental resources needed to perform centralized monitoring activities.
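To make the trade-off concrete, here is a back-of-the-envelope sketch of how such a net-savings figure can arise. All input figures are hypothetical illustrations, not CluePoints' actual value model:

```python
# Hypothetical savings arithmetic: reduced on-site monitoring minus the
# cost of RBM technology and central-monitoring resources.

study_budget = 10_000_000          # total study budget ($), hypothetical
monitoring_share = 0.30            # fraction of budget spent on on-site monitoring
monitoring_reduction = 0.45        # reduction via less SDV / fewer visits
rbm_tech_and_staff_cost = 150_000  # RBM platform + central monitoring resources

gross_savings = study_budget * monitoring_share * monitoring_reduction
net_savings = gross_savings - rbm_tech_and_staff_cost
net_savings_pct = net_savings / study_budget
print(f"net savings: ${net_savings:,.0f} ({net_savings_pct:.1%} of budget)")
# net savings: $1,200,000 (12.0% of budget)
```

With these example inputs the net savings land at 12% of the study budget; actual results depend on the study design and monitoring plan.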
Q7) 1) Does your engine evaluate the distribution of the data and then suggest the type of statistics to apply? Assuming normality is common – how do skews affect what gets called out as an outlier in non-normally distributed data sets? 2) Regarding Agile SDLC, did this include revising how you approach GAMP 5?
- SMART detects the type of variable and chooses the test accordingly.
- A variable with C < 11 distinct values is considered categorical and analyzed as a set of C binary variables.
- Tests assume normality for continuous variables but a log-transformation is systematically applied to get closer to normality. This design choice was made for three distinct reasons: (1) the engine should run automatically in all cases, (2) the test is likely to be informative even under departures from normality, and (3) other tests will detect outliers and other data anomalies that could render the distributions non-normal.
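The rules above can be sketched in code. This is an illustrative mock-up of the stated logic, not the actual SMART engine: a variable with fewer than 11 distinct values is treated as categorical, and continuous variables are log-transformed toward normality before testing.

```python
import math

def classify_variable(values):
    """Return 'categorical' if the variable has < 11 distinct values,
    else 'continuous' (the C < 11 rule described above)."""
    return "categorical" if len(set(values)) < 11 else "continuous"

def prepare_continuous(values):
    """Log-transform positive continuous values to reduce skewness
    before applying tests that assume normality."""
    return [math.log(v) for v in values if v > 0]

# Hypothetical variables:
sex = ["M", "F", "F", "M"]                      # 2 distinct values
weight_kg = [61.2, 70.5, 80.1, 95.3, 55.0, 68.8,
             72.4, 90.0, 66.1, 77.7, 83.2]       # 11 distinct values

print(classify_variable(sex))        # categorical
print(classify_variable(weight_kg))  # continuous
```

A categorical variable with C distinct values would then be expanded into C binary indicator variables for testing, per the description above.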
2) We perform all the validation activities after completion of the development sprints included in a release. We then apply a classic CSV approach (IQ/OQ/PQ/MQ).
Q8) 1) How can Risk-Based Monitoring be adapted to trials that have a high number of sites with a low number of patients per site (e.g., oncology trials)? 2) Do you have a reference(s) for the metric that < 2% of data change from SDV?
1) Studies with a very low number of patients per site do indeed present some challenges regarding proactive issue detection and remediation – specifically because advanced statistical methods generally rely on a certain minimum volume of actual observations/data per site in order to make statistically significant assessments. Thus the window of opportunity for applying these methods in such studies will be smaller. However, there is likely still value in applying these methods to detect issues that may otherwise never have been discovered. Additionally, use of targeted KRIs for which a priori expectations are available (e.g., timeliness of data entry and query responses, screen failure rates, etc.) can be very effective even with small volumes of enrolled patients per site.
2) Yes – the TransCelerate group published results of an SDV analysis in the DIA Journal in 2014, and CluePoints’ own Steve Young co-authored the article and was intimately involved in the actual design and analysis. Please contact CluePoints (firstname.lastname@example.org) if you’d like a copy of the article.
Q9) Nice presentation – Are protocol deviations part of Risk-Based Monitoring?
Assessing rate or volume of PDs at each site is certainly a worthwhile KRI to consider as part of your centralized monitoring. A number of CluePoints customers are indeed using or planning to use KRIs related to PDs.
Q10) Once critical data have been identified that need to be 100% SDV’d, how do we determine which non-critical data get 50% SDV?
There is no absolute guideline or rule available for deciding how much critical and or non-critical data to SDV at each site. However, some general best practices to consider would include:
- a) focus relatively more SDV on the first patient (or first couple of patients) at each site, and do much less SDV on subsequent patients.
- b) focus relatively more SDV on critical data than on non-critical data, and relatively more on AEs/SAEs than on non-critical data.
We would also suggest (similar to the TransCelerate recommendation) that overall reliance on SDV be minimized in favor of centralized statistical methods, KRIs, etc.
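The tiering above could be sketched as a simple SDV targeting rule. The specific percentages and the patient-index cutoff below are hypothetical illustrations, not a recommended standard:

```python
# Hypothetical SDV plan implementing best practices (a) and (b) above:
# heavy SDV on a site's first patients, then coverage tiered by criticality.

def sdv_level(patient_index, field_category):
    """Suggested SDV coverage (fraction of data verified) for one field.

    patient_index: 1-based enrollment order of the patient at the site.
    field_category: 'critical', 'ae_sae', or 'non_critical'.
    """
    if patient_index <= 2:              # first couple of patients: full SDV
        return 1.0
    return {"critical": 1.0,            # always verify critical data
            "ae_sae": 0.75,             # more scrutiny than non-critical
            "non_critical": 0.1}[field_category]

print(sdv_level(1, "non_critical"))  # 1.0 -> first patient fully verified
print(sdv_level(5, "critical"))      # 1.0
print(sdv_level(5, "non_critical"))  # 0.1
```

Any real plan would set these levels from the study's risk assessment rather than fixed constants.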
Q11) Please define the composition of the Central Monitoring Team and how do they work together?
Many organizations are currently working this out and coming to different conclusions regarding roles and responsibilities for centralized monitoring. Generally, though, you should consider a “data analyst” type role with responsibility for performing ongoing review of the statistical monitoring results and for identifying and organizing potential issues for review with the larger clinical operations study team. Appropriate study team representatives should then decide which identified issues require follow-up/remediation, with actions assigned and tracked through completion and ultimate issue resolution.
Q12) 1) How does your system support issue management in terms of identifying and tracking all issues in one place? 2) How does your system support the overall communication required for risk-based monitoring, such as escalation of risks/issues to relevant stakeholders, and documentation of this communication?
1) CluePoints includes very robust issue management/tracking workflow support, which allows effective tracking of all identified issues and associated actions through resolution in one place. All of the information can also be exported into various formats for sharing and/or for eTMF upload at end of study.
2) The CluePoints issue management system allows email alerts to stakeholders who have new/pending actions in the system. All actions, responses, and statuses are retained and audited within the platform and can be exported into reports/documentation.
Q13) 1) What are the general guidelines for determining how much data is necessary to usefully begin using the central statistical monitoring tools? 2) Do we have any regulator feedback to date on how sponsors are implementing RBM?
It is difficult to set a lower bound on the volume of data required before CluePoints can be run successfully, since studies differ from each other in many ways (number of centers, number and type of variables, whether variables are repeatedly measured, etc.). Based on CluePoints’ experience with studies of various sizes, a general rule of thumb is that the ODIS analysis requires at least ten centers with two or more patients each, but there are exceptions to this rule. In some trials (e.g., orphan indications), the majority of centers have only one patient. In this case, the analyses can still be performed using individual patients instead of centers as the units of analysis. Some tests require more than 50 measurements of a variable for a center to be scored. This requirement translates to a minimum of 51 patients for variables measured only once, but not for repeatedly measured variables (e.g., variables measured once per patient-visit). Simulations are currently underway with a wide range of study sizes and data issues, in order to inform the lower volume of data required for CluePoints to have adequate sensitivity and specificity.
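These rules of thumb can be expressed as simple checks. The helper functions below are hypothetical illustrations of the stated thresholds, not a CluePoints API:

```python
# Rule-of-thumb checks: >= 10 centers with 2+ patients for center-level
# analysis (otherwise fall back to patient-level), and > 50 measurements
# of a variable for a center to be scored by some tests.

def analysis_unit(patients_per_center):
    """Choose 'center' vs 'patient' as the unit of analysis."""
    eligible = sum(1 for n in patients_per_center if n >= 2)
    return "center" if eligible >= 10 else "patient"

def center_scorable(n_measurements):
    """Whether a test requiring > 50 measurements can score this center."""
    return n_measurements > 50

# Hypothetical studies: patient counts per center.
multi_patient_study = [3, 5, 2, 4, 2, 6, 2, 3, 2, 5, 1]  # 10 centers with 2+
orphan_study = [1] * 30                                   # one patient each

print(analysis_unit(multi_patient_study))  # center
print(analysis_unit(orphan_study))         # patient
```

The orphan-indication case illustrates the fallback described above: with mostly single-patient centers, patients themselves become the units of analysis.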
Q14) you said you need people with a hybrid background in data management and statistics. Traditionally, people with this background do not fully understand the clinical operations perspective to be able to communicate effectively. Would not a person with a clinical background that also has either DM or Stats background be helpful as well?
Yes, agreed – this would be ideal! So, to your point, we could clarify that the ideal profile for a central monitoring analyst includes a clinical background combined with clinical data management and/or biostatistics.
Q15) Gathering the KRIs – currently, listings are held within other departments and are often not in a user-friendly format (free-text entries requiring extensive cleaning, etc.) – how do you collect the data, and clean them where applicable, in order to produce the output?
CluePoints does not pre-clean discrepant clinical data before assessing it. This is generally not an issue, since what our platform looks for are unusual or anomalous patterns in the data or operational KRIs.
Q16) How are the KRI decided? Do you keep a core listing and adapt to each study depending on the primary endpoint?
The selection of KRIs is up to each organization and/or study team and, ideally, should be guided by a pre-study risk planning process. However, the CluePoints team has deep experience in KRI selection and optimization and consults with our customers to ensure they have the best possible approach. We are also actively developing a core library of advanced/optimized KRIs that we will make available to all customers.
Q17) How deep does the search for KRIs go? Number of queries per patient per site? Average number of visits per patient per site (a potential surrogate efficacy indicator)?
We believe there will be a core set of KRIs – likely no more than 15 or so – that should be applicable for use in most studies. CluePoints would be happy to engage you and your organization and offer our expertise in this area.
Q18) How does the calibration of the RACT work? Is the “weight” of each KRI modifiable “live” to assess the output and determine the best weighting?
The weight of each question should be defined upfront. Once the team has answered the questions, the weights should no longer be modified, as doing so might introduce bias.
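A weighted RACT score might be sketched as follows. The questions, weights, and 1-to-3 answer scale below are hypothetical illustrations; the key point is that the weights are fixed before any answers are entered:

```python
# Weights are defined upfront and treated as constants thereafter,
# so that answering the questions cannot influence the weighting.
WEIGHTS = {"complex_endpoint": 3, "vulnerable_population": 2, "new_site": 1}

def ract_score(answers):
    """Weighted risk score; answers maps question id -> score in {1, 2, 3}
    (1 = low risk, 3 = high risk)."""
    return sum(WEIGHTS[q] * score for q, score in answers.items())

low_risk = {"complex_endpoint": 1, "vulnerable_population": 1, "new_site": 1}
high_risk = {"complex_endpoint": 3, "vulnerable_population": 3, "new_site": 2}

print(ract_score(low_risk), ract_score(high_risk))  # 6 17
```

Re-tuning `WEIGHTS` after seeing the answers would amount to steering the output, which is the bias the answer above warns against.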
Q19) To Craig: why would it be beneficial to “reduce” the number of autoqueries? Why is it a goal?
In conversations about RBM, ROI is often brought up. In looking for explicit savings, it is often observed that there may not be a reduction in site visits – rather, site visits are re-purposed for SDR instead of SDV. So where can some of the explicit cost savings come from? In study startup, taking a risk-based approach, it is very possible to build robust edit checks on the critical fields and fewer or none on fields that aren’t critical or part of the risk framework. When you reduce edit checks, you reduce autoqueries; when you reduce autoqueries, you reduce cost. As the Applied Clinical Trials article “High Rate of Autoqueries Demonstrates Benefits of EDC” notes, “it is estimated that the average cost to resolve each query is $53.87 (Medidata Solutions analysis of PICAS® and CROCAS® data).”
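The arithmetic behind this point is straightforward. Using the $53.87-per-query estimate cited above (the query counts here are hypothetical illustrations):

```python
# Savings from fewer autoqueries when edit checks target only critical fields.
COST_PER_QUERY = 53.87  # per the cited Medidata PICAS/CROCAS analysis

queries_all_fields = 8_000      # hypothetical: edit checks on every field
queries_critical_only = 3_000   # hypothetical: checks on critical fields only

savings = (queries_all_fields - queries_critical_only) * COST_PER_QUERY
print(f"${savings:,.2f}")  # $269,350.00
```

Even modest reductions in autoquery volume compound quickly at this per-query cost.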