Open Access
“Where Do We Teach What?”
Author(s) - Denny Joshua C., Smithers Jeffrey D., Armstrong Brian, Spickard Anderson
Publication year - 2005
Publication title - Journal of General Internal Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.746
H-Index - 180
eISSN - 1525-1497
pISSN - 0884-8734
DOI - 10.1111/j.1525-1497.2005.0203.x
Subject(s) - relevance (law) , ranking (information retrieval) , curriculum , gold standard (test) , test (biology) , receiver operating characteristic , set (abstract data type) , information retrieval , learning curve , medicine , variable (mathematics) , medline , computer science , medical education , machine learning , psychology , radiology , mathematics , paleontology , pedagogy , mathematical analysis , biology , political science , law , programming language , operating system
Background: Medical educators and students often do not know where important concepts are taught and learned in medical school. Manual efforts to identify and track concepts covered across the curriculum are inaccurate and resource intensive.
Objective: To test the ability of a web-based application called KnowledgeMap (KM) to automatically locate where broad biomedical concepts are covered in lecture documents at the Vanderbilt School of Medicine.
Methods: In 2003, the authors derived a gold-standard set of curriculum documents by ranking 383 lecture documents as high, medium, or low relevance in their coverage of 4 broad biomedical concepts: genetics, women's health, dermatology, and radiology. The gold-standard rankings were compared with those produced by KM, an automated tool that generates a variable number of subconcepts for each broad concept to calculate a relevance score for each document. Receiver operating characteristic (ROC) curves and areas under the curve were derived for each ranking using varying relevance-score cutoffs.
Results: ROC curve areas were acceptably high for each broad concept (range 0.74 to 0.98). At relevance scores that optimized sensitivity and specificity, 78% to 100% of highly relevant documents were identified. The best results were obtained with the application of 63 to 1437 subconcepts for a given broad concept. Search times were fast.
Conclusions: The KM tool capably and automatically locates the detailed coverage of broad concepts across medical school documents in real time. KM or similar tools may prove useful for other medical schools to identify broad concepts in their curricula.
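
For readers unfamiliar with the evaluation described in the Methods, the following minimal sketch (not taken from the paper; the relevance scores, gold-standard labels, and function names are all illustrative) shows how per-document relevance scores can be swept over varying cutoffs to build an ROC curve and compute its area against a manual gold standard.

# Minimal sketch: ROC curve and area-under-the-curve for document relevance
# scores versus gold-standard judgments. KM's actual scoring is not shown;
# the data below are purely illustrative.

def roc_points(scores, labels):
    """Sweep score cutoffs; return (false-positive rate, true-positive rate) pairs."""
    thresholds = sorted(set(scores), reverse=True)
    positives = sum(labels)
    negatives = len(labels) - positives
    points = [(0.0, 0.0)]
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / negatives, tp / positives))
    points.append((1.0, 1.0))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Illustrative data: hypothetical KM relevance scores and gold-standard labels
# (1 = document judged highly relevant to the broad concept, 0 = not).
scores = [0.92, 0.80, 0.75, 0.40, 0.33, 0.10]
labels = [1, 1, 0, 1, 0, 0]
print(f"AUC = {auc(roc_points(scores, labels)):.2f}")

Choosing the cutoff that best balances sensitivity and specificity on such a curve corresponds to the "relevance scores that optimized sensitivity and specificity" reported in the Results.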
