The Revised METRIQ Score: A Quality Evaluation Tool for Online Educational Resources
Author(s) -
Colmers-Gray Isabelle N.,
Krishnan Keeth,
Chan Teresa M.,
Trueger N. Seth,
Paddock Michael,
Grock Andrew,
Zaver Fareen,
Thoma Brent
Publication year - 2019
Publication title -
AEM Education and Training
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.49
H-Index - 9
ISSN - 2472-5390
DOI - 10.1002/aet2.10376
Background
With the rapid proliferation of online medical education resources, quality evaluation is increasingly critical. The Medical Education Translational Resources: Impact and Quality (METRIQ) study evaluated the METRIQ-8 quality assessment instrument for blogs and collected feedback to improve it.

Methods
As part of the larger METRIQ study, participants rated the quality of five blog posts on clinical emergency medicine topics using the eight-item METRIQ-8 score. Next, participants used a 7-point Likert scale and free-text comments to evaluate the METRIQ-8 score on ease of use, clarity of items, and likelihood of recommending it to others. Descriptive statistics were calculated, and comments were thematically analyzed to guide the development of a revised METRIQ (rMETRIQ) score.

Results
A total of 309 emergency medicine attendings, residents, and medical students completed the survey. The majority of participants felt the METRIQ-8 score was easy to use (mean ± SD = 2.7 ± 1.1 out of 7, with 1 indicating strong agreement) and would recommend it to others (2.7 ± 1.3 out of 7, with 1 indicating strong agreement). The thematic analysis suggested clarifying ambiguous questions, shortening the 7-point scale, specifying scoring anchors for the questions, eliminating the "unsure" option, and grouping related questions. This analysis guided changes that resulted in the rMETRIQ score.

Conclusion
Feedback on the METRIQ-8 score contributed to the development of the rMETRIQ score, which has improved clarity and usability. Further validity evidence for the rMETRIQ score is required.
