Said Another Way: Asking the Right Questions Regarding the Effectiveness of Simulations
Author(s) - Goodman William M., Lamers Angela
Publication year - 2010
Publication title - Nursing Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.618
H-Index - 36
eISSN - 1744-6198
pISSN - 0029-6473
DOI - 10.1111/j.1744-6198.2010.00199.x
Applying simulations in healthcare practice and education is increasingly accepted, yet several recent authors have questioned the effectiveness of these technologies. The contention is that while high‐fidelity simulators may contribute to educational gains, those gains are often “not significant” when compared with low‐tech alternatives. That assessment, however, and the evidence on which it is based, may be a consequence of asking the wrong questions. Typical studies compare a measure of “average success” on some criterion for one group's members versus another's, but this can mask important information in the “tails” of the distribution of trainee performance. An alternative approach, adapted from quality control, compares the groups' error rates in aggregate. Using this method can change the statistical results of an evaluation, as illustrated by a recent study showing that simulation training can significantly reduce the frequency of medication administration errors among student nurses on placement. The paper includes a case study that demonstrates concretely how the way the evaluation question is framed can reverse the apparent finding of the significance test.
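Because the abstract contrasts two ways of framing the same evaluation, the minimal Python sketch below illustrates the distinction computationally. It is not taken from the paper; all scores, group sizes, and the 20-step checklist are hypothetical. It compares a two-sample t-test on per-trainee scores (the “average success” framing) with a chi-square test on pooled error counts (the aggregate error-rate framing adapted from quality control); the two framings can yield different p-values for the same underlying data.

```python
# Hypothetical illustration only: contrast two framings of "is the simulation
# group better?" on the same simulated data. Numbers are not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-trainee scores: steps performed correctly out of a 20-step
# checklist. The control group's scores are more spread out (a heavier lower
# tail), while the group means are fairly close.
sim_scores = np.clip(rng.normal(17.5, 1.5, 60).round(), 0, 20)
ctl_scores = np.clip(rng.normal(17.0, 3.0, 60).round(), 0, 20)

# Framing 1: compare "average success" per group with a two-sample t-test.
t_stat, p_means = stats.ttest_ind(sim_scores, ctl_scores, equal_var=False)
print(f"Mean-score comparison:        p = {p_means:.3f}")

# Framing 2: pool every omitted or incorrect step as one error and compare
# aggregate error rates across groups, as in a quality-control comparison.
sim_errors, sim_total = int((20 - sim_scores).sum()), 20 * len(sim_scores)
ctl_errors, ctl_total = int((20 - ctl_scores).sum()), 20 * len(ctl_scores)
table = [[sim_errors, sim_total - sim_errors],
         [ctl_errors, ctl_total - ctl_errors]]
chi2, p_rates, _, _ = stats.chi2_contingency(table)
print(f"Aggregate error-rate comparison: p = {p_rates:.3f}")
```

Whether the second framing reaches significance while the first does not depends on the data, but the sketch shows why the two questions can diverge: the aggregate error-rate comparison is sensitive to a cluster of very poor performances in one group's tail that a comparison of means can wash out.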