A Potential Barrier To Completing The Assessment Feedback Loop
Author(s) - Promod Vohra
Publication year - 2020
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--10323
Subject(s) - engineering education, assessment, capstone, internship, curriculum, pedagogy
Northern Illinois University’s College of Engineering and Engineering Technology employs a comprehensive nine-component assessment model. Each element of the model (Pretest, Post-test, and Portfolio; Standardized Testing; Student and Faculty Surveys; Student Internships and Cooperative Work Performance; the Capstone Experience; Student Placement Information; Employer Surveys; Alumni Participation; and Peer Review of the Curriculum) provides a mechanism for data collection. Within the context of this model, the paper details strategies for analyzing and using assessment results as feedback directed toward improving total program quality. Incorporating feedback into the assessment process is often difficult: even assuming the measurement of selected learning-outcome criteria is both valid and reliable, benchmarks for acceptable performance must still be established, and decision rules that provide a basis for detecting meaningful differences must be formulated. These tasks, moreover, are carried out in a policy environment where the implementation of affirmative steps may be constrained by numerous internal and external stakeholders. One of the most fundamental problems in assessment research is how results are to be placed within a meaningful comparative context: any analysis of assessment results involves ascertaining the significance of differences from an established performance baseline, a performance goal, or other criteria, and that significance may be evaluated using statistical and/or substantive criteria. The paper explores the potential and limits of statistical analysis, particularly as both relate to the concept of statistical power in survey research, and discusses several strategies for dealing with the problems posed by inadequate numbers of respondents.
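The abstract's central statistical concern — whether survey samples are large enough to detect meaningful differences — is the standard power-analysis question. As a minimal illustration (not taken from the paper; the function names, the choice of a two-sample mean comparison, and the normal approximation are all assumptions for this sketch), power for detecting a standardized mean difference can be estimated as follows:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(effect_size: float, n_per_group: int,
                     z_crit: float = 1.96) -> float:
    """Approximate power of a two-sided, two-sample mean comparison.

    effect_size: standardized mean difference (Cohen's d).
    n_per_group: respondents in each of the two groups being compared.
    z_crit: critical value; 1.96 corresponds to alpha = 0.05.
    Uses the normal approximation, so this is a planning estimate only.
    """
    # Noncentrality: expected z-statistic under the true difference.
    ncp = effect_size * math.sqrt(n_per_group / 2.0)
    # Power = probability of landing in either rejection tail.
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

# A medium effect (d = 0.5) needs roughly 64 respondents per group
# for power near 0.80; with only 10 per group, power collapses.
print(round(two_sample_power(0.5, 64), 2))
print(round(two_sample_power(0.5, 10), 2))
```

Run in reverse, the same arithmetic gives the "inadequate numbers of respondents" problem the abstract raises: with the small per-program cohorts typical of survey-based program assessment, only very large effects are detectable at conventional power levels.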