Evaluating an Anatomy‐Specific Tool for Blooming Exam Questions
Author(s) -
Thompson Andrew R.,
O'Loughlin Valerie Dean
Publication year - 2013
Publication title -
The FASEB Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.709
H-Index - 277
eISSN - 1530-6860
pISSN - 0892-6638
DOI - 10.1096/fasebj.27.1_supplement.957.12
Subject(s) - rubric , taxonomy (biology) , consistency (knowledge bases) , computer science , replicate , psychology , medical education , mathematics education , artificial intelligence , medicine , biology , ecology , mathematics , statistics
Bloom's taxonomy is commonly used to assess the cognitive level of exam questions and other course assignments. Our previous research found that ‘Blooming’ pathology exam questions was difficult when using guidelines that were not discipline‐specific. Achieving consistent scoring required that the observers independently rank and then discuss each exam question. Although this strategy is highly reliable, it is inefficient and may limit the number of questions that can be evaluated. In addition, the discussion‐based nature of this approach makes it difficult for outside researchers to replicate the results. Building on research by Crowe et al. (2008), we developed a Blooming Anatomy Tool (BAT) that provides tailored guidelines for Blooming anatomy exam questions. To test the efficacy of the BAT, two groups of instructors Bloomed a series of anatomy exam questions, with each group receiving different scoring criteria. The first group was given a worksheet that broadly outlined Bloom's taxonomy, while the second group received the BAT. After comparing the Bloom levels assigned by individuals in each group to a key generated by the authors, we found that the BAT improved both accuracy and consistency in assigning Bloom categories. We suggest that researchers using Bloom's taxonomy to assess course materials consider seeking out or developing a discipline‐specific rubric.