Open Access
How Effectively are University Students Tested? A Case Study
Author(s) - Jane Kembo
Publication year - 2020
Publication title - East African Journal of Education Studies
Language(s) - English
Resource type - Journals
eISSN - 2707-3947
pISSN - 2707-3939
DOI - 10.37284/eajes.2.1.170
Subject(s) - psychology, mathematics education, higher education, charter, subject (documents), medical education, report card, descriptive statistics, final examination, pedagogy, medicine, political science, library science, computer science, statistics, mathematics, law
Testing and examining go on in higher education all the time through continuous assessments and end-of-semester examinations. The grades students score determine not only academic mobility but eventually who gets employed in a job market that seems to be shrinking all over the world. Those charged with testing are often staff who hold higher qualifications in their subject areas but are not necessarily teaching or examination experts. Against this background, the researcher wanted to find out what was happening at a selected university across three schools: Social Studies, Education, and Science. The university is fairly young, having been awarded its charter twenty years ago. The paper asked two questions: first, at what levels of Bloom's Taxonomy are lecturers asking examination questions? Secondly, do the level and balance of questions show growth in examining skills? The study evaluated 1,039 questions from randomly selected examination papers obtained from the Examinations Office for the academic years 2014/15 to 2017/18 (three academic years). A guide based on the list of verbs in Anderson's revision of Bloom's Taxonomy was used to analyze the questions. Descriptive statistics were used to describe the trends in testing for each year, and ANOVA and t-tests were used to find out whether there were significant differences between numbers across and within categories. The results show that most examination questions sit at the two lowest levels of the taxonomy: remember (knowledge) and understand (comprehension). In the 2016/17 and 2017/18 academic years, there were significant differences in the percentage of questions examined in these two categories. However, it seems from the study that testing and examining skills do not grow merely through the practice of setting questions. There is a need for examiners to be trained to use this knowledge in setting questions that discriminate effectively across the academic abilities of the students they teach.
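
The abstract describes a two-step method: tag each examination question with a Bloom level using a verb guide, then compare category counts across years with ANOVA and t-tests. The Python sketch below is a minimal illustration of that workflow only; the verb lists, the per-year percentages, and the classify_question helper are all hypothetical placeholders, since the paper's actual coding guide and data are not reproduced here.

    # Illustrative sketch only: the verb guide and yearly figures below are
    # hypothetical, not taken from the paper.
    from scipy import stats

    # Example action verbs for the two lowest levels of Anderson's revision
    # of Bloom's Taxonomy (remember, understand).
    VERB_LEVELS = {
        "remember": {"define", "list", "name", "state", "identify", "recall"},
        "understand": {"explain", "describe", "summarize", "discuss", "classify"},
    }

    def classify_question(question: str) -> str:
        """Tag a question with the first Bloom level whose verb it contains."""
        words = {w.strip(".,?;:()").lower() for w in question.split()}
        for level, verbs in VERB_LEVELS.items():
            if words & verbs:
                return level
        return "higher-order"  # apply/analyze/evaluate/create, not modeled here

    # Hypothetical percentages of remember-level questions per sampled paper,
    # one list per academic year.
    year_1 = [62, 58, 65, 60]
    year_2 = [64, 61, 66, 63]
    year_3 = [63, 65, 60, 62]

    # One-way ANOVA: do the yearly means differ significantly?
    f_stat, p_value = stats.f_oneway(year_1, year_2, year_3)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

    print(classify_question("Define the term 'assessment' and list its types."))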
