
Enhancing efficiency, reliability, and rigor in competency model analysis using natural language processing
Author(s) -
Garman, Andrew N.,
Standish, Melanie P.,
Kim, Dae Hyun
Publication year - 2018
Publication title -
The Journal of Competency‐Based Education
Language(s) - English
Resource type - Journals
ISSN - 2379-6154
DOI - 10.1002/cbe2.1164
Subject(s) - natural language processing , artificial intelligence , computer science , health care , competency modeling , reliability , psychology
Abstract -
Background: Competency modeling is frequently used in higher education and workplace settings to inform a variety of learning and performance improvement programs. However, the approaches commonly taken to modeling tasks can be very labor‐intensive and are vulnerable to the perceptual and experiential biases of raters.
Aims: The present study assesses the potential for natural language processing (NLP) to support competency‐related tasks by developing a baseline comparison of results generated by NLP against results generated by human raters.
Methods: Two raters separately cross‐walked the leadership competency models of graduate healthcare management programs from eight universities against a newly validated competency model from the National Center for Healthcare Leadership containing 28 competencies, producing 224 cross‐walked pairs of "best matches."
Results: The NLP model performed at least as accurately as the human raters, who required a total of 16 work hours to complete the task, whereas the NLP calculations were nearly instantaneous.
Conclusion: Based on these findings, we conclude that NLP has substantial promise as a high‐efficiency adjunct to human evaluations in competency cross‐walks.
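Note: The abstract does not specify which NLP technique was used to score matches. As a hedged illustration only, the Python sketch below shows one common way such a cross‐walk could be automated: representing each competency statement as a TF‐IDF vector and selecting, for each program competency, the reference competency with the highest cosine similarity. The competency statements, variable names, and scikit‐learn pipeline here are illustrative assumptions, not details taken from the study.

# Hypothetical sketch of an automated competency cross-walk.
# Not the authors' method: one plausible approach using TF-IDF
# vectors and cosine similarity to pick each "best match."
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder competency statements (illustrative, not from the study).
nchl_competencies = [
    "Communicates clearly and persuasively with diverse stakeholders",
    "Builds and leads effective teams across the organization",
    "Applies financial analysis to support organizational decisions",
]
program_competencies = [
    "Demonstrates effective written and verbal communication",
    "Uses budgeting and financial skills in management decisions",
]

# Fit one shared vocabulary over both sets so vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(nchl_competencies + program_competencies)
ref_vecs = vectorizer.transform(nchl_competencies)
prog_vecs = vectorizer.transform(program_competencies)

# For each program competency, choose the reference competency with
# the highest cosine similarity, the automated analogue of a rater's
# "best match" judgment in the cross-walk.
similarities = cosine_similarity(prog_vecs, ref_vecs)
for i, statement in enumerate(program_competencies):
    best = similarities[i].argmax()
    print(f"{statement!r} -> {nchl_competencies[best]!r} "
          f"(cosine = {similarities[i][best]:.2f})")

A pipeline along these lines makes the reported efficiency gap plausible: scoring all 224 pairs reduces to a single vectorization pass and one similarity-matrix computation, which completes in well under a second on commodity hardware.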