Systematic review of applied usability metrics within usability evaluation methods for hospital electronic healthcare record systems
Author(s) -
Wronikowska Marta Weronika,
Malycha James,
Morgan Lauren J.,
Westgate Verity,
Petrinic Tatjana,
Young J Duncan,
Watkinson Peter J.
Publication year - 2021
Publication title -
Journal of Evaluation in Clinical Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.737
H-Index - 73
eISSN - 1365-2753
pISSN - 1356-1294
DOI - 10.1111/jep.13582
Subject(s) - usability , checklist , cinahl , computer science , system usability scale , systematic review , medline , heuristic evaluation , cognitive walkthrough , medicine , nursing , psychology , psychological intervention , human–computer interaction , law , political science , cognitive psychology
Background and objectives - Electronic healthcare records have become central to patient care. Evaluations of new systems include a variety of usability evaluation methods or usability metrics (often referred to interchangeably as usability components or usability attributes). This study reviews the breadth of usability evaluation methods, metrics, and associated measurement techniques that have been reported for assessing systems designed to help hospital staff evaluate inpatient clinical condition.
Methods - Following Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) methodology, we searched Medline, EMBASE, CINAHL, the Cochrane Database of Systematic Reviews, and Open Grey from 1986 to 2019. For included studies, we recorded usability evaluation methods or usability metrics as appropriate, together with any measurement techniques applied to illustrate them. We classified and described all usability evaluation methods, usability metrics, and measurement techniques. Study quality was evaluated using a modified Downs and Black checklist.
Results - The search identified 1336 studies. After abstract screening, 130 full texts were reviewed. In the 51 included studies, 11 distinct usability evaluation methods were identified. Within these methods, seven usability metrics were reported; the most common were the ISO 9241‐11 and Nielsen's components. An additional "usefulness" metric was reported in almost 40% of included studies. We identified 70 measurement techniques used to evaluate systems. Overall study quality was reflected in a mean modified Downs and Black checklist score of 6.8/10 (range 1–9): 33% of studies were classified as "high quality" (scoring eight or higher), 51% as "moderate quality" (scoring 6–7), and the remaining 16% as "low quality" (scoring below five).
Conclusion - There is little consistency within the field of electronic health record system evaluation. This review highlights the variability in usability methods, metrics, and reporting. Standardized processes may improve the evaluation and comparison of electronic health record systems and improve their development and implementation.