Defining the Burden of Osteoarthritis in Population‐Based Surveys
Author(s) -
Golightly Yvonne M.,
Allen Kelli D.,
Jordan Joanne M.
Publication year - 2016
Publication title -
Arthritis Care & Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.032
H-Index - 163
eISSN - 2151-4658
pISSN - 2151-464X
DOI - 10.1002/acr.22716
Subject(s) - osteoarthritis , medicine , population , environmental health , alternative medicine , pathology
Osteoarthritis (OA) is a leading cause of disability and a common chronic health condition (1–3). The burden of OA on the current health care system is significant and growing. By the year 2030, the number of knee replacements performed to treat knee OA (the most common type of OA) is expected to increase rapidly to more than 3 million per year, costing $14.3 billion annually (4). In this issue of Arthritis Care & Research, the authors of “Alternative Methods for Defining Osteoarthritis and the Impact on Estimating Prevalence in a US Population-Based Survey” correctly report that available prevalence estimates of OA from large national survey data are outdated (5). Data from population-based cohort studies suggest that OA prevalence has risen during recent decades and may be higher than previously estimated (2,6–11). Additionally, data from the National Health and Nutrition Examination Surveys (NHANES) show an increase in knee pain from the 1970s to the 1990s (11), which may be linked to escalating knee OA prevalence. Given the impact of OA on disability and its burden on health care, updated estimates of the prevalence of OA are needed.

Cisternas et al employed a new approach to attempt to update OA prevalence estimates. The survey they used, the nationally representative Medical Expenditure Panel Survey (MEPS), included data based on both self-report from the home component (using multiple types of questions) and medical billing records. The authors suggest that this survey would provide OA estimates reflecting “current adverse health impact” by capturing data from patients who seek health care, report disability, or report being bothered by their conditions. This self-report strategy likely would capture the health impact for many of the patients with OA, but it could miss those who are unaware that they have OA (e.g., those who never received a diagnosis because they did not seek treatment for their symptoms, or who disregarded their joint symptoms because they believed joint pain was a normal part of aging).

With the MEPS, the authors explored 3 OA definitions using different combinations of home component survey data (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] coded data and self-report of a physician or other health care provider diagnosis of OA or of arthritis other than rheumatoid arthritis). To assess the performance of these definitions, the authors evaluated the sensitivity, specificity, and positive predictive value (PPV) of each definition compared to the gold standard of diagnostic codes from medical providers (a nonrepresentative subsample of the MEPS called the medical provider component). The authors reported that the OA definition with the best performance (based on a strategy to optimize and balance sensitivity, specificity, and PPV) resulted in an OA prevalence estimate of 13.4%, which, surprisingly, is not notably different from the NHANES 1971–1975 estimate of 12% (12).

In their Discussion section, Cisternas et al provide concise explanations for these results. The purpose of this editorial is to expand upon some of those explanations by focusing on 1) the calculations for and relationships among sensitivity, specificity, and PPV and 2) the gold standard. Potential implications of this study for survey methods used to examine OA are also discussed. One intriguing finding of the article was that the sensitivities and specificities were moderate to high for all 3 OA definitions, while the PPVs were unusually low.
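For reference, the standard epidemiologic definitions of these measures, written in terms of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) relative to the gold standard, are given below; the second expression for PPV, obtained from Bayes' theorem, makes explicit that PPV, unlike sensitivity and specificity, depends on the prevalence of disease in the sample (the notation here is ours, not the article's).

\begin{align*}
\text{Sensitivity} &= \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP},\\[4pt]
\text{PPV} &= \frac{TP}{TP + FP} \;=\; \frac{\text{Sensitivity} \times \text{Prevalence}}{\text{Sensitivity} \times \text{Prevalence} + (1 - \text{Specificity})\,(1 - \text{Prevalence})}.
\end{align*}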
The values presented in the paper were calculated using 2 different data subsets. The subset available for calculating specificity and PPV included only respondents who could be matched on home component and medical provider component encounters and therefore was smaller than the subset available for calculating sensitivity (which included data regardless of whether encounters were matched). These differing subsets do not allow a straightforward recreation of the calculations from the data available in the article by Cisternas et al; for illustrative purposes, a simple example (Table 1) will be used to examine sensitivity, specificity, and PPV. In this example, the true prevalence of OA would be 10% (10 of 100 people with OA based on the gold standard).
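To make the arithmetic concrete, the minimal sketch below works through a hypothetical 2 × 2 table at the same scale as the example above (100 respondents, a true gold-standard prevalence of 10%). The cell counts are illustrative assumptions, not the values in Table 1 of the article; they are chosen so that sensitivity and specificity are moderate to high, which shows how a low prevalence alone can pull the PPV down.

# Illustrative only: hypothetical 2x2 table with a gold-standard OA prevalence
# of 10% among 100 respondents. The cell counts are assumptions for this sketch,
# not the values reported in Table 1 of the article.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, and PPV from 2x2 table cell counts."""
    sensitivity = tp / (tp + fn)   # true positives / all gold-standard cases
    specificity = tn / (tn + fp)   # true negatives / all gold-standard non-cases
    ppv = tp / (tp + fp)           # true positives / all survey-defined cases
    return sensitivity, specificity, ppv

# Assumed counts: 10 gold-standard cases, 90 non-cases. Suppose the survey
# definition captures 8 of the 10 cases and falsely labels 9 of the 90 non-cases.
tp, fp, fn, tn = 8, 9, 2, 81

se, sp, ppv = diagnostic_metrics(tp, fp, fn, tn)
print(f"Sensitivity: {se:.2f}")   # 0.80
print(f"Specificity: {sp:.2f}")   # 0.90
print(f"PPV:         {ppv:.2f}")  # ~0.47, low despite good sensitivity and specificity

Under these assumed counts, just over half of the respondents flagged as having OA by the survey definition would be false positives, even though sensitivity is 80% and specificity is 90%; this is the kind of prevalence-driven behavior of PPV discussed in the paragraphs that follow.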