“The Rest is Noise”: The Promise of eCOA in Increasing Signal Detection

By Christopher Randolph, PhD, ABPP-CN
Medical Director of Neuropsychology, Loyola University Medical Center
VP of Neurocognition, MedAvante

MedAvante Research White Paper



“The Rest is Noise”: The Promise of eCOA in Increasing Signal Detection © 2016 MedAvante

Inconclusive clinical trials keep treatments from patients who need them most: a problem that is especially prevalent in central nervous system (CNS) disorders. For all neurological conditions combined, researchers estimate the Phase 3 success rate to be approximately 50%.1 This is due, in part, to imprecise endpoint measurements that add noise and reduce signal detection, decreasing the chances of a successful trial. Alzheimer’s disease (AD) provides a particularly striking example of the steep road to trial success: between 2002 and 2012, Phase 3 trials in AD had a failure rate estimated at 99.6%.2

What’s behind these daunting odds, and what can sponsors and contract research organizations (CROs) do to improve them?

A recent study that I led with colleagues at MedAvante, a company dedicated to improving signal detection in clinical trials, suggests that the use of paper-based assessments could be an important factor in failed or inconclusive trials.3 Why? Paper increases the possibility of rater and administrative errors in an already difficult process. This effect is often compounded by the challenges posed by subjectively reported, subjectively scored assessments.

    Trial sponsors and CROs, then, should be aware of the limitations of paper in CNS trials and understand that digital alternatives offer significant benefits. The human and financial stakes are too high not to use all available means to achieve conclusive results.

1 Hay M, Thomas DW, Craighead JL, Economides C, Rosenthal J. Clinical development success rates for investigational drugs. Nat Biotechnol. 2014;32(1):40–51. doi: 10.1038/nbt.2786.

2 Cummings JL, Morstorf T, Zhong K. Alzheimer’s disease drug-development pipeline: few candidates, frequent failures. Alzheimers Res Ther. 2014;6(4):37. PMCID: PMC4095696. The study examined trials on ClinicalTrials.gov. The authors write, “In the decade of 2002 through 2012, 244 compounds were assessed in 413 trials for AD. Of the agents advanced to Phase 3 (and excluding those currently in Phase 3), one was advanced to the FDA and approved for marketing (1.8%). Excluding the 14 compounds currently in Phase 3, the overall success rate for approval is 0.4% (99.6% attrition). This is among the lowest for any therapeutic area.”

    3 Negash S, Böhm P, Steele S, Sorantin P, Randolph C. Virgil Investigative Study Platform Minimizes Scoring Discrepancies to Improve Signal Detection. MedAvante Inc.; Loyola University Medical Center. Poster presentation, 14th Annual Athens/Springfield Symposium on Advances in Alzheimer Therapy, March 2016.

Risks for imprecision in CNS trials

A number of factors contribute to the high level of imprecision in CNS trials. First, traditional paper-based clinical outcome assessments are prone to high rates of error.

For example, the total score of the Clinical Dementia Rating scale (CDR) requires calculation, either by the rater using the “Sum of Boxes” score or by using an online tool in the case of the CDR Global Score. Then, in an additional step, score data must be transcribed manually into an electronic data capture (EDC) system and later monitored for source data verification (SDV). Attempts to implement central oversight for data quality assurance typically involve scanning a large volume of paper, as well as uploading audio or video files. All these steps create opportunities for new errors while adding to the heavy administrative burden on trial sites, perpetuating significant obstacles in an already complex undertaking.4
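To illustrate the kind of calculation an eCOA platform can automate outright, here is a minimal sketch of the CDR Sum of Boxes computation, the sum of the six domain (“box”) scores. The function and domain names are illustrative, the validation is simplified, and this is not Virgil’s actual implementation.

```python
# Illustrative sketch of auto-scoring the CDR "Sum of Boxes" from the six
# domain ("box") scores. Names and validation rules are simplified
# placeholders, not MedAvante's actual code.

ALLOWED_SCORES = {0.0, 0.5, 1.0, 2.0, 3.0}
CDR_DOMAINS = (
    "memory", "orientation", "judgment_problem_solving",
    "community_affairs", "home_hobbies", "personal_care",
)

def cdr_sum_of_boxes(box_scores: dict) -> float:
    """Validate the six CDR domain scores and return their sum (0-18)."""
    missing = [d for d in CDR_DOMAINS if d not in box_scores]
    if missing:
        # The paper workflow silently tolerates this; a tablet can flag it.
        raise ValueError(f"missing domain scores: {missing}")
    for domain in CDR_DOMAINS:
        if box_scores[domain] not in ALLOWED_SCORES:
            raise ValueError(f"invalid score for {domain}: {box_scores[domain]}")
    return sum(box_scores[d] for d in CDR_DOMAINS)

# Example: a mildly impaired participant
scores = {"memory": 1.0, "orientation": 0.5, "judgment_problem_solving": 0.5,
          "community_affairs": 0.5, "home_hobbies": 0.5, "personal_care": 0.0}
print(cdr_sum_of_boxes(scores))  # 3.0
```

Performing this sum on the device removes both the hand calculation and the separate EDC transcription step described above.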

Reviewing assessments based on source documents and recordings may reveal scoring discrepancies between raters and reviewers. Discrepancies take time to resolve and compromise data quality by calling ratings into question.

A better option?

Given the disadvantages of paper administration as well as the clinical challenges of neuropsychiatric assessments, an electronic clinical outcome assessment (eCOA) system that removes paper from the equation presents obvious advantages. That was the thinking that led to the development of the Virgil® Investigative Study Platform, which reduces administrative burden and enables the real-time detection of calculation errors and discrepant scores. Such advantages point to a greater likelihood of a conclusive trial.

eCOA can save time and increase accuracy for raters and study coordinators at the study sites, while also providing ready-to-analyze data to principal investigators, sponsors, and CROs far more quickly than paper collection.

    4 Cummings JL, et al. Alzheimer's disease drug-development pipeline: few candidates, frequent failures. Alzheimers Res Ther. 2014 Jul 3;6(4):37.


We compared paper-administered assessments from a recent clinical trial of mild cognitive impairment (MCI) due to AD with Virgil tablet administrations of the same assessments from two separate trials, also in MCI due to AD. All were multinational Phase II/III trials.

    We examined the following four commonly used rating scales:

    • the Alzheimer’s Disease Assessment Scale-Cognition (ADAS-Cog)

• the Alzheimer’s Disease Cooperative Study-Activities of Daily Living Inventory, MCI version (ADCS-ADL-MCI)

• the Mini-Mental State Examination (MMSE)

• the Clinical Dementia Rating (CDR)

A cohort of Central Reviewers examined the first 150 administrations of each assessment for the paper-and-pencil trial and then compared them with the first 150 administrations done with the Virgil tablet.

This same cohort of reviewers then used audio recordings and worksheets to identify errors in both modes of administration.

Comparing the paper-based and Virgil administrations, the study quantified the percentage of reviewed assessments with the following kinds of errors:

    • one discrepancy (different score found by rater and reviewer)

• two or more discrepancies

The results? The study demonstrated that, compared to paper-and-pencil administration, Virgil substantially reduced scoring-discrepancy errors for all four AD rating scales. Virgil’s reduction of discrepancy errors shows just how effective its guidance is in leading raters to provide accurate scores.

Figures 1a-1d show the magnitude of this reduction. The percentage of assessments with one discrepancy declined by:

• 26% (CDR)

• 43% (ADAS-Cog)

• 78% (ADCS-ADL-MCI)

• 71% (MMSE)
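These figures are relative reductions in the discrepancy rate. For the CDR, for example, Fig. 1a's one-discrepancy rate fell from 47% with paper to 35% with Virgil, which is the 26% reduction listed above. A quick check of the arithmetic:

```python
# Relative reduction in discrepancy rate, using the CDR values from Fig. 1a.
paper_rate, virgil_rate = 0.47, 0.35   # % of reviews with one discrepancy
reduction = (paper_rate - virgil_rate) / paper_rate
print(f"{reduction:.0%}")  # 26%
```

The same calculation on Fig. 1a's two-or-more-discrepancy rates (24% paper vs. 7% Virgil) gives the 71% CDR reduction reported later in the paper.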

    For trial raters who administer multiple assessments throughout a day, the tablet-based Virgil platform can:

• eliminate calculation errors (by performing any calculations and auto-scoring)

• flag missing data

• capture all data input once and forever

• ease the process of central oversight by simultaneously audio- or video-recording the administration of assessments, facilitating review without creating extra steps for the site

• eliminate the EDC transfer step and associated SDV monitoring

Equally important, Virgil eCOA addresses the problem of subjectivity. It provides raters with real-time clinical guidance—developed by expert clinicians—right on the screen of the tablet used to administer the assessment. Links to scoring anchors, item descriptions, and study-specific guidelines help maintain consistency across the assessment’s various domains.

Imagine a rater administering the CDR who enters, say, a 0 or 0.5 (no or very mild impairment) in the memory domain after an informant has responded “rarely” to the question “Can the patient recall recent events?” A flag can then pop up on screen pointing out a possible inconsistency and asking the rater whether this was the intended scoring.
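That kind of consistency check amounts to a simple rule. The sketch below is hypothetical: the function name, answer set, and score threshold are illustrative choices, not Virgil's actual logic.

```python
# Hypothetical sketch of the real-time consistency check described above:
# the informant reports the patient "rarely" recalls recent events, yet the
# rater enters a CDR memory score of 0 or 0.5 (none/questionable impairment).
# Answer set and threshold are illustrative, not Virgil's actual rules.

def check_memory_consistency(informant_recall: str, memory_score: float):
    """Return a warning prompt if the score conflicts with the interview."""
    impaired_answers = {"rarely", "never"}   # answers suggesting impairment
    if informant_recall.lower() in impaired_answers and memory_score <= 0.5:
        return ("Possible inconsistency: informant reported the patient "
                f"'{informant_recall}' recalls recent events, but the memory "
                f"score is {memory_score}. Is this the intended scoring?")
    return None  # no flag needed

print(check_memory_consistency("rarely", 0.5) is not None)   # True: flag shown
print(check_memory_consistency("usually", 0.5) is None)      # True: no flag
```

The rater can confirm or correct the entry on the spot, before the score ever reaches the database.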

Error rates in AD trials drop with the use of eCOA

My colleagues at MedAvante and I examined data from our work in major clinical trials to evaluate the effectiveness of an eCOA solution to alleviate the root cause of inconclusive trials.5

We found that using MedAvante’s Virgil tablet to administer neurocognitive assessments could significantly reduce error rates in AD trials.

    Our study compared error rates in the administration of clinical outcome assessments in AD trials between the traditional paper-and-pencil method and the Virgil tablet.

    5 Negash S, et al. Virgil Investigative Study Platform Minimizes Scoring Discrepancies to Improve Signal Detection. Poster presentation, Athens/Springfield Symposium, March 2016.


Fig. 1a: CDR. Percentage of reviews with discrepancies, Paper-Pencil vs. Virgil:

• % reviews with 1 discrepancy: Paper-Pencil 47%, Virgil 35%

• % reviews with 2 or more discrepancies: Paper-Pencil 24%, Virgil 7%

ANOVA: F(1,298) = 15.0
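The F statistic in Fig. 1a comes from a one-way ANOVA comparing the two administration modes (150 administrations per arm, hence df = 1, 298). A minimal two-group ANOVA can be written in pure Python; the binary discrepancy indicators below are illustrative counts reconstructed from the rounded percentages, not the trial's actual data.

```python
# Minimal one-way ANOVA for two groups (illustrative data, not the trial's).
# Each observation is 1 if the review found a discrepancy, else 0.

def one_way_anova_f(group_a, group_b):
    """Return (F, df_between, df_within) for a two-group one-way ANOVA."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    # Between-group and within-group sums of squares
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    df_between, df_within = 1, n_a + n_b - 2
    return (ss_between / df_between) / (ss_within / df_within), df_between, df_within

paper  = [1] * 70 + [0] * 80   # hypothetical: ~47% of 150 reviews flagged
virgil = [1] * 52 + [0] * 98   # hypothetical: ~35% of 150 reviews flagged
f_stat, df1, df2 = one_way_anova_f(paper, virgil)
print(f"F({df1},{df2}) = {f_stat:.1f}")  # F(1,298) = 4.5 on these data
```

On these rounded, any-discrepancy indicators the sketch yields F(1,298) of roughly 4.5, well below the 15.0 reported for the trial, since a simple yes/no flag discards the multiple-discrepancy information captured in the actual analysis.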


    At the same time, the percentage of assessments with two or more discrepancies declined by:

• 71% (CDR)

• 59% (ADAS-Cog)

• 94% (ADCS-ADL-MCI)

• 57% (MMSE)

    Such reductions are notable when considering that multi-discrepancy assessments likely account for a greater amount of “noise.”6

An item-by-item look at discrepancies provides further detail (Figures 2a-2d). Worth noting was Virgil’s effectiveness in reducing discrepancies in the orientation domains of both the CDR and the ADAS-Cog, as well as in items that are particularly hard to score, such as the memory domain in the CDR.7 Here, paper-and-pencil administration resulted in a 19% discrepancy rate compared to 9% with Virgil.

6 Analysis of variance (ANOVA) showed that discrepancy rates for the Virgil eSource system were statistically significantly lower than for paper-and-pencil administration on all four types of assessments.


Conclusion

Our study was the first to examine the actual error-reduction outcomes that derive from using a tablet-based eCOA system such as Virgil. Directly comparable data sets from multinational clinical trials in MCI due to AD were available, and error rates were examined in a central review performed by the same cohort of calibrated raters. Error reduction was substantial, ranging from approximately 50% to over 80% by scale, and highly statistically significant.

This is a compelling demonstration of the clinical utility of the Virgil Investigative Study Platform and its meaningful advantages over paper-and-pencil administration of assessments in clinical trials. The use of Virgil eCOA leads to fewer errors and discrepancies. This reduction in site-based scoring errors is likely due to the tablet’s functions of displaying clinical guidance, performing auto-calculations, and offering consistency checks to raters in real time. These advantages can increase signal detection and boost the chances of a successful trial.

An eCOA system also has substantial benefits for sponsors and CROs, reducing administrative burden and costs and speeding trials. The result is fewer errors and discrepancies, improved data quality, clearer signals, and standardized, accurate studies.

    The ultimate beneficiaries of clearer signals and reduced administrative burden are of course CNS patients and their families. Their quality of life depends on medical advances – advances that improvements in the clinical trial process have vast potential to accelerate.

    What do the results of this study suggest for raters, investigators, CROs, sponsors, regulatory agencies and, ultimately, patients?

They confirm the perception that traditional paper-and-pencil administration of COAs in AD trials is characterized by high error rates that contribute to error variance, which has the potential to degrade signal detection.

Fig. 2d: MMSE, % score discrepancy by scoring category and item

Scoring category        Item                     Paper-Pencil   Virgil® Platform
Orientation to place    Building type                 7%              5%
Repetition              No ifs, ands, or buts         3%              3%
Drawing                 Pentagons                     4%              1%
Orientation to place    County                        3%              0%
Writing                 Sentence                      3%              0%
Orientation to time     Date                          3%              0%
Attention               72                            1%              1%
Orientation to place    City/town                     3%              0%
Orientation to place    Floor                         3%              0%
Recall                  Penny                         1%              1%
Recall                  Apple                         1%              1%
Attention               65                            1%              1%
Orientation to time     Season                        2%              0%
Attention               86                            1%              1%
Recall                  Table                         1%              1%
Orientation to time     Month                         1%              0%
Naming                  Watch                         0%              1%
Attention               93                            1%              0%
Orientation to time     Day of week                   1%              0%
Orientation to place    State                         1%              0%
Comprehension           Put on floor                  0%              0%
Registration            Apple                         0%              0%
Comprehension           Fold in half                  0%              0%
Registration            Penny                         0%              0%
Comprehension           Right hand                    0%              0%
Registration            Table                         0%              0%
Naming                  Pencil/pen                    0%              0%
Attention               79                            0%              0%
Orientation to time     Year                          0%              0%
Reading                 Close your eyes               0%              0%

MedAvante
100 American Metro Boulevard #106
Hamilton, NJ 08619, USA
+1 609 528 9400
medavante.com