A Comparison of Two Alternate Scaling Approaches Employed for Task Analyses in Credentialing Examination Development
Author(s) - James R. Fidler, Nicole M. Risk
Publication year - 2018
Publication title -
Educational Measurement: Issues and Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.158
H-Index - 52
eISSN - 1745-3992
pISSN - 0731-1745
DOI - 10.1111/emip.12200
Subject(s) - credentialing, blueprint, task (project management), rating scale, psychology, applied psychology, medical education
Credentialing examination developers rely on task (job) analyses to establish inventories of the task and knowledge areas in which competency is required for safe and successful practice in target occupations. There are many ways in which task-related information may be gathered from practitioner ratings, each with its own advantages and limitations. Two of the myriad alternative task-analysis rating approaches are compared in situ: one establishes relative task saliency through a single scale of rated importance, and the other employs a composite of several independent scales. Outcomes for tasks ranked by two practitioner groups are compared. A relatively high degree of association is observed between the tasks ranked through each approach, yielding comparable, though not identical, examination blueprints.
