Ayres Sensory Integration Meets Criteria for an Evidence‐Based Practice: A Response to Stevenson [2019]
Author(s) -
Schoen Sarah A.,
Lane Shelly J.,
Schaaf Roseann C.,
Mailloux Zoe,
Parham L. Diane,
Roley Susanne S.,
May-Benson Teresa
Publication year - 2019
Publication title -
Autism Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.656
H-Index - 66
eISSN - 1939-3806
pISSN - 1939-3792
DOI - 10.1002/aur.2164
We are grateful for the opportunity to address Stevenson’s [2019] response to our paper entitled “A systematic review of Ayres Sensory Integration Intervention for children with autism” [Schoen et al., 2019]. These authors challenge our conclusion that Ayres Sensory Integration (ASI) meets criteria for an evidence-based intervention based on the Council for Exceptional Children’s Standards for Classifying the Evidence Base of Practices in Special Education [Council for Exceptional Children (CEC), 2014]. In our review, we state that the Schaaf, Benevides, et al. [2014] study of ASI meets 100% of the CEC criteria and that the Pfeiffer, Koenig, Kinnealey, Sheppard, and Henderson [2011] study meets 85% of the criteria. Cook et al. [2015] clearly state that at least half of the criteria of the CEC standards must be met in order for a practice to be considered evidence based; for example, “when at least half of the EBP criteria related to a number of studies, with positive effects for at least two research designs are met, a practice can be classified as evidence-based” [p. 229; see also CEC, 2014, pp. 8–9]. Furthermore, Cook et al. [2015] state that in order to be considered evidence based, the practice must include “two methodologically sound group comparison studies with random assignment to groups, positive effects, and at least 60 total participants across studies; AND meet at least 50% of criteria for two or more of the study designs described” [Cook et al., 2015, fig. 1, p. 230]. They clarify that the stated purpose is not to prescribe all the desirable elements of an ideal study but to enable special education researchers to determine which studies have the minimal methodological features to merit confidence in their findings.
As they state, “Thus, rather than relying on the findings of a single, potentially flawed study, research consumers should identify effective practices on the basis of multiple, high-quality studies that use experimental research designs and demonstrate robust effects on student outcomes (i.e., evidence-based practices or EBPs)” [Cook et al., 2015, p. 220]. Although Stevenson [2019] challenges our application of the guidelines, our analysis is completely in line with Cook et al.’s recommendation, indicating that ASI meets the guidelines for an evidence-based practice according to the CEC guidelines. Stevenson [2019] raises an additional concern that the Pfeiffer et al. [2011] study did not meet criteria for a methodologically sound study. As the CEC specifies, “A methodologically sound paper is one that meets all of the quality indicators” [CEC, 2014, p. 6]. The only quality indicator that was assigned a partial rating for the Pfeiffer et al. [2011] review was data analysis, specifically related to the average effect size of the findings. Upon re-examination of this criterion, we determined that Pfeiffer et al.’s [2011] study did actually meet this quality indicator and thus meets 100% of the CEC quality indicators. Specifically, this indicator states, “Data analysis techniques are appropriate for comparing change in performance of two or more groups. Study reports one or more appropriate effect size statistics for all outcomes relevant to review being conducted, even if the outcome is not statistically significant or provides data from which appropriate ESs can be calculated” [CEC, 2014, p. 228]. As indicated in the Schoen et al. [2019] review, the study by Pfeiffer and colleagues used analyses that were appropriate for the data, and effect sizes were calculated on all measures. Thus, it fully meets this quality indicator and therefore should be considered a methodologically sound study.
Although we did not convert the reported eta-squared (η²) effect sizes to Cohen’s d when calculating the average effect size in our original review, doing so now yields an average effect size of 1.01, which is well above the 0.25 criterion set forth in the What Works Clearinghouse [2011] guidelines adopted by the CEC. This was determined using the conversion formula provided by Fritz, Morris, and Richler [2012] (i.e., d = 2√(η²/(1 − η²))).
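For readers who wish to verify conversions of this kind, the eta-squared-to-Cohen’s-d formula cited above can be sketched in a few lines. The η² value below is illustrative only, not a figure from Pfeiffer et al. [2011]:

```python
import math

def eta_squared_to_d(eta_sq: float) -> float:
    """Convert eta-squared to Cohen's d using d = 2 * sqrt(eta^2 / (1 - eta^2)),
    the between-groups conversion given by Fritz, Morris, and Richler [2012]."""
    if not 0 <= eta_sq < 1:
        raise ValueError("eta-squared must lie in [0, 1)")
    return 2 * math.sqrt(eta_sq / (1 - eta_sq))

# Hypothetical example: an eta-squared of 0.20 corresponds to d = 1.0,
# a large effect by conventional benchmarks.
print(eta_squared_to_d(0.20))  # → 1.0
```

Note that the conversion grows rapidly as η² increases, so averaging converted d values (as done above) can differ from converting an averaged η²; the review averaged the per-outcome d values.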