Optional Final Exams as an Assessment Tool in Engineering Curricula
Author(s) -
Anthony Gregerson,
Sean Franey
Publication year - 2020
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--21771
Subject(s) - curriculum, engineering education, mathematics education, computer science, pedagogy, engineering
The idea of a heavily weighted, comprehensive final examination is entrenched in the assessment process of many courses. In this paper, we offer arguments in favor of a policy of making comprehensive final exams optional to students and examine the effects on the assessment process. We do so by implementing an optional final exam policy in the assessment systems of two different engineering courses over the span of several semesters. We analyze the prior performance levels of students who choose to take optional exams and examine the impact that varying incentives have on exam participation rates. We also compare the performance of students who chose to take an optional exam to their performance on mandatory mid-semester exams and evaluate the impact that optional exams had on overall grades in our assessment schemes. Surveys of our participants show that over 80% of students viewed the optional exam policy as a positive change to the assessment scheme and only 3% viewed it negatively.

I. Background and Motivation

Researchers in Science, Technology, Engineering, and Math (STEM) education have been expressing concern over graduation and retention rates for decades. Recently, the issue has found its way into the highest levels of economic and educational policy discussion, as U.S. employers in high-tech industries struggle to find qualified employees amidst a shortage of STEM degrees in the workforce. In August of 2011, President Obama's Jobs and Competitiveness Council called for an additional 10,000 engineering graduates each year to meet these shortages. If these goals are to be met, educators in engineering disciplines must strive to improve their graduation rates, as only 40% of students who begin their education in STEM fields go on to complete their degree in that field. Such low graduation rates may be discouraging, but for educational researchers they highlight the opportunity for significant gains to be made.
However, realizing these gains may require a systematic reevaluation of all parts of the educational process, including methods of classroom assessment. In their seminal book on the reasons students give for leaving STEM fields, Seymour and Hewitt found that engineering students cited a 'curriculum overload' and 'overwhelming pace' in courses as being key factors in the decision to switch majors for 45% of students surveyed. Workload-related complaints were the second most common reason for engineering students to leave their field and ranked significantly higher for engineers than for science and math majors, who cited it only 25% of the time.

In engineering courses, the period of greatest overload often comes in the final weeks of the semester, when students must wrestle with homework assignments, semester-long projects and research papers, and end-of-semester exams all coming due within a short timeframe. It can prove to be a challenge simply for a single class, but for students feeling the cumulative effect of multiple engineering classes, it can be overwhelming. Students' less-than-sanguine feelings about this time are readily apparent in colorful nicknames like "Dead Week" and "Hell Week" in common use at many universities. Health researchers have even linked this period to increases in stress and measurable deterioration in health among students.

If we are to address students' feelings of being overwhelmed, it seems clear that some of the first strategies educators should explore are methods of reducing end-of-semester workloads. From our experience as both students and instructors, we believe that design projects and research papers are essential components of the educational process in engineering courses, as they most directly reflect the demands of engineering fields. As such, we wished to look outside of cuts to design and research projects to reduce workloads.
Therefore, we chose to question the role of the final exam and investigate its necessity in the assessment process.

The remainder of this paper is organized as follows: In Section II, we lay out some of the common arguments in favor of the necessity of mandatory comprehensive final exams and present our counterarguments. In Section III, we discuss the possible benefits of making these exams optional. In Section IV, we describe a set of pilot studies we carried out to assess the impact that optional final exams would have on assessment schemes in real courses. In Section V, we present and analyze the data from these studies. In Section VI, we discuss the ramifications of our results on our final exam policy. In Section VII, we describe potential future work. Finally, in Section VIII, we present our conclusions on the use of optional comprehensive exams.

II. Are Comprehensive Final Exams Necessary?

In this paper we focus on an exam system widely used in engineering classes: one or more exams are given during the semester, with each mid-semester exam focusing on a subsection of the course curriculum. This is followed by a comprehensive final exam at the end of the semester that covers all of the material from the course. Although such systems are by no means universal, they are common enough that many universities have academic schedules and policies built around the idea of large final exams at the end of each semester.

To be clear, when we discuss 'comprehensive' final exams, we are specifically talking about an exam that exclusively covers topics that students have already been tested on previously through mid-semester exams or other forms of assessment. If an exam contains a significant amount of material that students have never been tested on, we would not classify it as a purely comprehensive exam and would not suggest that it be made entirely optional.
Instead, we would suggest that the material that has been tested before be made optional and that the untested material be put on a separate mandatory exam, which can be made both much shorter and lower-stakes than a comprehensive final would be. Alternatively, this new material might be tested through other non-exam assessment instruments such as homework assignments.

On its face, a comprehensive final exam may serve a few purposes:

1. It gives instructors the chance to measure a student's current mastery of a topic; a more up-to-date measurement than what they got from a mid-semester exam or other earlier assessment.
2. Making multiple measurements of the same construct allows us to improve the reliability of our test results.
3. Increasing the number of questions you ask students on a given construct should increase the reliability of the assessment.
4. The final exam may give the instructor a chance to ask questions on topics not covered in the mid-semester exams due to exam length constraints.
5. As a comprehensive exam, the instructor may ask students to synthesize material from many different sections in a single test item.

We argue that these points are either flawed in concept or can be achieved without using a final exam.

More Recent Results Are Not Necessarily More Useful

Is a measurement of a student's mastery of a certain topic taken at the time of a final exam a more valid measurement than one taken on an earlier mid-semester exam? To answer this question, we need to think about a student's knowledge as a temporal function and think of an exam score as a sampling of this function at a given time. Sayre & Heckler propose a "Simple Model" for this cognitive function for students learning about electromagnetism. An example plot of their Simple Model is shown in Figure 1. This model represents the acquisition and decay of knowledge between initial instruction and the first assessment.
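Points 2 and 3 in the list above appeal to measurement reliability. The paper does not give a formula, but the standard Spearman-Brown prophecy formula from classical test theory quantifies the claim that asking more questions on a construct increases reliability. A minimal sketch (the 0.60 starting reliability is an illustrative value, not a figure from the paper):

```python
def spearman_brown(r: float, n: float) -> float:
    """Predicted reliability of a test lengthened by a factor of n.

    r: reliability of the original set of items (0 < r < 1)
    n: length factor (n=2.0 means twice as many questions)
    """
    return (n * r) / (1.0 + (n - 1.0) * r)

# Doubling the questions on a construct raises an item-set
# reliability of 0.60 to 0.75; quadrupling it gives about 0.86.
print(round(spearman_brown(0.60, 2.0), 2))  # → 0.75
print(round(spearman_brown(0.60, 4.0), 2))  # → 0.86
```

Note the diminishing returns: each additional measurement of the same construct buys less reliability than the last, which bears on whether a comprehensive final's extra measurements justify their cost.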
Figure 1: The Simple Model of student learning for a course module.

In Figure 2, we modify the Simple Model to include the effects of relearning, to reflect the assumption that students will need to review and relearn material they have forgotten between a mid-semester exam and a final exam. We model the 2nd period of learning with a steeper slope, as the process of relearning previously mastered material has been shown to be faster for both memory-recall knowledge [6] and hands-on skills. The relative height of the 2nd peak will depend on how much time the student invests in relearning the material. Students who devote a large amount of time to studying for the final exam or who performed poorly on the initial exam will likely exhibit a higher 2nd peak. Students who do not devote much time to studying for the final may exhibit a lower 2nd peak. However, we argue that given adequate time and resources, it is fair to expect that most students could relearn material to at least a similar level as their original mastery.

If we are using well-designed testing instruments in our classroom assessment, we expect that the scores we measure on tests will track with student mastery. Looking at the mastery curve in Figure 2 from the perspective of a classroom educator, it provokes the obvious question: What point on the curve should we try to sample when determining a student's grade in the class?

Figure 2: A modified version of the Simple Model that includes relearning (cramming) material from a module for a comprehensive final exam.

Our position is that the score we care about is not the most current one (if the student was tested today), but rather their peak score. The important point to grasp in this argument is that the grades we assign in our classes should be meaningful to the people who will attempt to use them in their decision-making.
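The qualitative shape of the Figure 2 curve can be sketched numerically. The rate constants below are illustrative assumptions, not values from Sayre & Heckler; the piecewise function only reproduces the qualitative picture: acquisition up to the mid-semester exam, decay afterwards, and a steeper relearning ramp while cramming for the final.

```python
import math

def mastery(t, t_mid=4.0, t_study=12.0, t_final=14.0,
            k_learn=0.8, k_decay=0.12, relearn_factor=3.0):
    """Toy mastery curve in [0, 1] over t weeks (illustrative constants)."""
    t = min(t, t_final)                   # hold the relearned level afterwards
    peak1 = 1.0 - math.exp(-k_learn * t_mid)
    if t <= t_mid:                        # initial acquisition
        return 1.0 - math.exp(-k_learn * t)
    if t <= t_study:                      # forgetting after the mid-term
        return peak1 * math.exp(-k_decay * (t - t_mid))
    low = peak1 * math.exp(-k_decay * (t_study - t_mid))
    # steeper relearning ramp while studying for the final
    regain = 1.0 - math.exp(-relearn_factor * k_learn * (t - t_study))
    return low + (peak1 - low) * regain

mid_score   = mastery(4.0)    # sampled at the mid-semester exam (1st peak)
pre_cram    = mastery(12.0)   # just before studying for the final (the trough)
final_score = mastery(14.0)   # sampled at the comprehensive final (2nd peak)
```

With these constants the curve regains roughly its original peak (mastery(14.0) is within a few percent of mastery(4.0)), matching the argument that, given adequate time, most students can relearn material to a level similar to their original mastery, and that the peak rather than the instantaneous score is the quantity worth sampling.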
For engineering education, this will primarily be two groups: (1) companies trying to determine if the student has requisite job skills, and (2) others in the educational system determining whether the student should be allowed to continue to more advanced classes. For both of these common-use cases, the most up-to-date measur
