P1‐064: THE USE OF STATISTICAL MODELING TO COMPLEMENT DATA QUALITY PROGRAMS
Author(s) -
Karas Sarah M.,
Barbone Jordan M.,
DeBonis Dan,
Solomon Todd M.
Publication year - 2018
Publication title -
Alzheimer's & Dementia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.713
H-Index - 118
eISSN - 1552-5279
pISSN - 1552-5260
DOI - 10.1016/j.jalz.2018.06.066
Subject(s) - medicine , outlier , clinical trial , data quality , longitudinal data , computer science , data mining , artificial intelligence , metric (unit) , operations management , economics
Background: Data quality programs that reduce rater error are considered an important component of clinical trials, especially those in Alzheimer's disease (AD), given the continued high failure rate in AD drug development (1,2). Data quality programs have historically used a standard visit-based schedule of review in conjunction with targeted reviews based on rater performance (3). Adding a statistically driven approach that targets longitudinal, statistically aberrant variability in the scores of key outcome measures relative to study means has the potential to identify data issues that traditional approaches may not have detected. In this analysis, we evaluated the frequency of scoring and/or administrative errors detected during review of ADAS-Cog and ADCS-ADL assessments that showed statistically aberrant longitudinal change.

Methods: Data were obtained during a multi-national AD clinical trial. For key outcome measures, a data quality program was used that confirmed standard administration procedures were followed, scores were transcribed accurately, and standard scoring practices were applied (4). All post-randomization visits with an ADAS-Cog and ADCS-ADL were evaluated for statistically abnormal patterns of change. Any scores meeting the a priori criteria for being statistically aberrant were evaluated for adherence to standard administration practices and accurate scoring by an independent expert reviewer. Results were analyzed for frequency and type of error.

Results: In total, 6251 post-randomization visits were completed. Based on statistical modeling, 12.00% (n = 750) of these visits were considered outliers. When these visits were reviewed, 20% (n = 152) had at least one scoring or administrative error on the ADAS-Cog or ADCS-ADL. Because each visit could contain multiple errors, 175 scoring and administrative errors were identified in total. A majority of these were ADAS-Cog scoring errors (n = 102), whereas fewer than 8% (n = 13) were ADCS-ADL scoring errors. See Table 1 for additional administration and scoring error frequencies for the ADAS-Cog and ADCS-ADL.

Conclusions: The use of statistical modeling to assist in identifying assessments with potentially problematic ratings due to aberrant data patterns is a complementary methodology to traditional means of ensuring data quality.
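The abstract does not report the specific a priori outlier criteria used in the trial. As one plausible illustration of the general approach (flagging visits whose longitudinal change is statistically aberrant relative to study means), the minimal sketch below standardizes each visit's change from baseline against the study-wide mean and standard deviation of change at that visit and flags values beyond a z-score threshold. The column names (subject_id, visit, adas_cog), the baseline convention, and the threshold of 3 standard deviations are assumptions made for illustration only, not the trial's actual model.

import pandas as pd

def flag_aberrant_visits(df: pd.DataFrame, score_col: str = "adas_cog",
                         z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag post-randomization visits whose change from baseline deviates from
    the study mean change at that visit by more than z_threshold SDs.
    (Illustrative sketch; the trial's a priori criteria are not specified.)"""
    # Per-subject baseline score (visit 0 is assumed to be the baseline visit).
    baseline = df[df["visit"] == 0].set_index("subject_id")[score_col]

    # Post-randomization visits only, with change from each subject's baseline.
    post = df[df["visit"] > 0].copy()
    post["change"] = post[score_col] - post["subject_id"].map(baseline)

    # Study-wide mean and SD of change at each post-randomization visit.
    stats = post.groupby("visit")["change"].agg(["mean", "std"])
    post = post.join(stats, on="visit")

    # A visit is flagged when its change is statistically aberrant
    # relative to the study mean change for that visit.
    post["z"] = (post["change"] - post["mean"]) / post["std"]
    post["flagged_for_review"] = post["z"].abs() > z_threshold
    return post

In the program described above, flagged visits were not corrected automatically; they were routed to an independent expert reviewer who checked adherence to standard administration practices and scoring accuracy.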
