Open Access
Measuring learning within a large design research project
Author(s) -
Sharleen Forbes,
John Harraway,
Megan Drysdale
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.52041/srap.15203
Subject(s) - bootstrapping (finance) , multiple choice , computer science , relevance (law) , test (biology) , mathematics education , measure (data warehouse) , psychology , artificial intelligence , statistics , econometrics , mathematics , data mining , significant difference , paleontology , political science , law , biology
Conceptual learning is investigated among students from universities, schools and the workplace who took part in a research project to develop new material on bootstrapping and randomisation. The aim was to develop teaching strategies using dynamic visualisation software. Before and after instruction, students sat tests with multi-choice and True/False questions on sampling and confidence intervals. Performance is analysed in terms of increases in correct answers and changes in responses. The percentage correct varied widely in both the pre-test and post-test. Fewer than two thirds of students gave the same answers in both tests, but 5-18% changed correct answers to incorrect and 13-27% changed incorrect answers to correct. The relevance of the questions, the appropriateness of multi-choice and True/False questions in assessment, and the levels of learning (or unlearning) acceptable to teachers are discussed. Pre- and post-tests can measure student understanding and prior skills, but multi-choice and True/False questions may not be adequate for this purpose.
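The bootstrapping the abstract refers to can be illustrated with a minimal sketch: resample the data with replacement many times, compute the statistic on each resample, and take percentiles of the resulting distribution as a confidence interval. This is a generic percentile-bootstrap example, not code from the project; the sample data and the choice of the mean as the statistic are illustrative assumptions.

```python
import random

def bootstrap_ci(data, n_resamples=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean.

    Resamples `data` with replacement, records the mean of each
    resample, and returns the (alpha/2, 1 - alpha/2) percentiles.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(data)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical test scores, purely for illustration
data = [12, 15, 14, 10, 18, 13, 16, 11, 17, 14]
low, high = bootstrap_ci(data)
print(f"95% bootstrap CI for the mean: ({low:.2f}, {high:.2f})")
```

Dynamic visualisation tools of the kind the project developed typically animate exactly this loop, showing the resamples accumulating into the bootstrap distribution.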
