Identifying Student Misconceptions in Biomedical Course Assessments in Dental Education
Author(s) - Curtis Donald A., Lind Samuel L., Dellinges Mark, Schroeder Kurt
Publication year - 2012
Publication title - Journal of Dental Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.53
H-Index - 68
eISSN - 1930-7837
pISSN - 0022-0337
DOI - 10.1002/j.0022-0337.2012.76.9.tb05373.x
Subject(s) - psychology , medical education , dental education , multiple choice , mathematics education , medicine , significant difference
Dental student performance on examinations has traditionally been estimated by calculating the percentage of correct responses rather than by identifying student misconceptions. Although misconceptions can impede student learning and are refractory to change, they are seldom measured in biomedical courses in dental schools. Our purpose was to determine if scaling student confidence and the clinical impact of incorrect answers could be used on multiple‐choice questions (MCQs) to identify potential student misconceptions. To provide a measure of student misconception, faculty members indicated the correct answer on twenty clinically relevant MCQs and noted whether the three distracters represented potentially benign, inappropriate, or harmful application of student knowledge to patient treatment. A group of 105 third‐year dental students selected what they believed was the most appropriate answer and their level of sureness (1 to 4 representing very unsure, unsure, sure, and very sure) about their answer. Misconceptions were defined as sure or very sure incorrect responses that could result in inappropriate or harmful clinical treatment. In the results, 5.2 percent of the answers represented student misconceptions, and 74 percent of the misconceptions were from four case‐based interpretation questions. The mean student sureness was 3.6 on a 4.0 scale. The students’ sureness was higher with correct than with incorrect answers (p<0.001), yet there was no difference in sureness levels among their incorrect (benign, inappropriate, or harmful) responses (p>0.05). This study found that scaling student confidence and clinical impact of incorrect answers provided helpful insights into student thinking in multiple‐choice assessment.
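The misconception rule described above combines two ratings: the student's self-reported sureness (1 to 4) and the faculty rating of the chosen distracter's clinical impact. The following Python sketch is purely illustrative and is not from the article; the Response structure, field names, and thresholds are assumptions used to restate that rule in code.

# Illustrative sketch (not the authors' instrument): flag a response as a potential
# misconception when it is incorrect, given with high sureness (sure or very sure),
# and its chosen distracter was rated inappropriate or harmful by faculty.
from dataclasses import dataclass

SURENESS_LABELS = {1: "very unsure", 2: "unsure", 3: "sure", 4: "very sure"}

@dataclass
class Response:
    correct: bool   # did the student choose the keyed answer?
    sureness: int   # self-reported sureness, 1-4
    impact: str     # faculty rating of the chosen distracter: "benign", "inappropriate", or "harmful"

def is_misconception(r: Response) -> bool:
    """Sure or very sure (>= 3) incorrect response with inappropriate or harmful clinical impact."""
    return (not r.correct) and r.sureness >= 3 and r.impact in {"inappropriate", "harmful"}

# Example: a very sure but harmful incorrect answer counts as a misconception.
print(is_misconception(Response(correct=False, sureness=4, impact="harmful")))  # True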
