Unseen Influences on Student Performance: Instructor Assessment Styles
Author(s) - Elif Miskioğlu
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/p.27108
Subject(s) - learning styles , preference , categorization , psychology , perception , statistics
Mass and energy balances is the common first course in chemical engineering (ChE) programs across the nation. This foundational course is essential for technical understanding, and thus high concept mastery among students is desired. Highly variable student performance (in the form of grades, the widely accepted means of assessing student mastery) at a large Midwestern university suggests, however, that concept mastery is not always attained. Learning styles are one way to describe how individuals gather and process information. The Felder-Silverman learning styles model consists of four dimensions, with two opposing preferences in each dimension, that categorize individuals based on how they best process, perceive, receive, and understand information. Originally, our study focused on uncovering correlations between student learning styles, self-efficacy, attitudes/perceptions, and performance in an undergraduate material balances course, in an effort to better understand our student population and provide a basis for curricular development. We categorized the learning styles engaged by the exam problems of five instructors, in both their presentation and their solution. While we discovered several instances where students of one learning style preference either outperformed or underperformed relative to others, we made an even more interesting observation while categorizing the learning styles exploited by the exam problems of different instructors. In most cases, there was little variation in the learning styles exploited by an individual instructor over the course of a semester. However, in the two instances where average student performance was statistically lower than in the other three instructors' classes, the two instructors exhibited similar deviations from the other three in the types of learning styles they over- or under-exploited on exams.
Further, the same two faculty had, on average, more questions and concepts on each exam than the other three instructors. An end-of-term concept inventory revealed no statistically significant difference in students' conceptual mastery between an average-performance and a low-performance section, suggesting that performance may not be a good indicator of concept mastery in this situation. We also observed that students in all classes consistently underperformed on questions categorized as "global" or "intuitive." Arguably, students at this introductory level can be expected to have underdeveloped global and intuitive skills; however, if these skills do not improve over the course of their education, that is cause for concern. For this reason, in future work we will track students through their curricular progression in order to better understand the development of their intuitive and global skills, and to assess the need for changes to the existing curriculum to foster those skills. Further, we are interested in tracking student attrition, and are specifically curious whether students from the "underperforming" material balances classes are more likely to leave the ChE program, regardless of concept mastery. If so, this may suggest a need to develop more homogeneous course goals and means to achieve them. After multiple semesters of evaluation, we will propose a new course model that ensures a more consistent experience in this course, and hopefully a better conceptual foundation for all students.

Overview of the Work and Methods

This paper focuses on understanding how instructors (and the exams they administer) may influence student factors for success in an introductory chemical engineering course (part of the sophomore-year curriculum at the institution studied). The course, commonly known as mass and energy (material) balances, is taught by two different instructors, as two separate sections, every semester.
While each instructor has their own course policies and teaching philosophy and writes their own exam problems, all instructors follow a "four exams and a final" model and use the same textbook. The exams often fall on the same day and cover much of the same content; thus, they provide a good basis for comparing instructors' teaching and evaluation tendencies. After noticing large differences in raw, unadjusted (unscaled) scores between two instructors teaching material balances in the same term, we became interested in the variability of student performance (as measured by grades) across all instructors of the course over the last several years. As exams are the largest contributor to students' final scores, our focus is on characterizing the exams administered by different faculty, and subsequent student performance on those exams. Our hypothesis, based on initial observations from early semesters of the study, was that student performance would vary by faculty, and by the types of exam problems each faculty instructor gives. We hypothesized that students in sections with highly sequential and sensing exam problems, as well as fewer overall concepts covered on each exam, would demonstrate higher performance. In short, we examined exams administered by six different faculty instructors (denoted Faculty A through F) to develop "faculty profiles" based on the types of exam problems administered. We characterized exam problems by the learning styles (from the Felder-Silverman model1 of learning styles) they engaged in either presentation or solution. This characterization was done using a criterion table we developed, and each exam problem was categorized by multiple trained chemical engineering graduate students (specifically, raters holding B.S. degrees in chemical engineering) against the criterion table to ensure accurate and consistent categorization.
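The multi-rater categorization step described above can be sketched as a simple majority-vote consolidation. This is an illustrative sketch only, not the authors' actual procedure; the labels and rating data below are hypothetical.

```python
from collections import Counter

def consolidate_labels(ratings):
    """Majority-vote consolidation of categorical labels from multiple raters.

    ratings: one inner list of labels per exam problem,
             e.g. [["sensing", "sensing", "intuitive"], ...].
    Returns the majority label per problem, or None when the top labels tie
    (flagging that problem for rater discussion).
    """
    consensus = []
    for labels in ratings:
        (top, n), *rest = Counter(labels).most_common()
        if rest and rest[0][1] == n:  # tie between the most common labels
            consensus.append(None)
        else:
            consensus.append(top)
    return consensus

# Hypothetical ratings for three exam problems by three raters
ratings = [
    ["sensing", "sensing", "intuitive"],
    ["global", "global", "global"],
    ["sequential", "global", "sequential"],
]
print(consolidate_labels(ratings))  # ['sensing', 'global', 'sequential']
```

In practice one would also report an inter-rater agreement statistic (e.g. Cohen's kappa) rather than rely on majority vote alone.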
We use the term "inherent bias" to refer to the learning styles engaged by a specific problem, and differentiate between presentation bias and solution bias. We further evaluated these exams by the number of problems and concepts per exam. We then examined the six instructors' exams and identified common features among instructors shown to have "low-performing" classes. Because the exam problems are written by each instructor, in Spring 2015 we also introduced a standard concept inventory2 to measure conceptual understanding across two sections of the course. We hypothesized that differences between instructors would be highlighted by the results of a standard inventory, where neither instructor has written the questions and students are evaluated on a broad range of relevant course topics. Our interest in the Felder-Silverman model of learning styles as a categorization tool for exam problem types comes from exploration into the best applications of learning styles theory in teaching. Learning styles describe how individuals receive, perceive, process, and understand information.1,3,4 These interactions with information are essential components of problem-solving, and thus suggest that learning styles may be a valuable lens through which to evaluate our methods for developing students as problem solvers. We used the Felder-Silverman model specifically because of its historical application in engineering and its multidimensional nature, allowing for two preferences in each of four dimensions (active/reflective, sensing/intuitive, visual/verbal, sequential/global) with associated strengths (strong, moderate, balanced) for each preference. This multidimensional model accounts for different facets of learning, and additionally emphasizes that these preferences are not fixed characteristics but merely, as they are called, preferences.
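The preference-plus-strength structure of the model can be made concrete with the standard ILS scoring scheme: each dimension has 11 forced-choice items, the dimension score is the signed difference between "a" and "b" answers (always odd, from -11 to 11), and the magnitude maps to balanced (1-3), moderate (5-7), or strong (9-11). The sign convention and function below are a sketch, not the instrument's official scoring code.

```python
def ils_category(a_count, b_count):
    """Classify one Felder-Silverman ILS dimension from item counts.

    a_count, b_count: number of 'a' and 'b' answers among the dimension's
    11 items. Sign convention here (an assumption for this sketch):
    'a' answers favor the first pole of the pair, e.g. sensing in
    sensing/intuitive.
    """
    assert a_count + b_count == 11, "ILS has 11 items per dimension"
    score = a_count - b_count          # odd value in -11..11
    magnitude = abs(score)
    if magnitude <= 3:
        strength = "balanced"
    elif magnitude <= 7:
        strength = "moderate"
    else:
        strength = "strong"
    pole = "first pole (a)" if score > 0 else "second pole (b)"
    return score, strength, pole

print(ils_category(9, 2))  # (7, 'moderate', 'first pole (a)')
```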
Though not a specific aim of this work, we hypothesized that faculty have learning style preferences, or simply habits, that unconsciously dictate their instruction, evaluation, and assessment strategies. That is, we began this study expecting to see that faculty exam problems would reveal inherent biases weighted toward certain preferences. Statistical analysis was performed using an ANOVA followed by a Tukey test, or a Kruskal-Wallis test with a subsequent Steel-Dwass test, as appropriate, all at a significance level of 0.05.

Results and Discussion Part 1: Learning Style Profiles across Four Semesters

In all semesters studied, students were given the Index of Learning Styles (ILS) Questionnaire5 to evaluate their learning styles under the Felder-Silverman model. Within each dimension, the class learning styles profiles vary very little from semester to semester (Figure 1). While Figure 1 displays aggregate data for the two sections taught concurrently, both sections showed similar distributions in each semester. Our students are largely balanced across the dimensions, except in the visual/verbal dimension, where they are mostly visual learners. This does not agree entirely with previous work1 suggesting that engineering students are active, sensing, visual, and global learners on the Felder-Silverman scale. We do note that among students who have a preference, there are more sensors and sequentials in the associated dimensions. Thus, while most students are balanced, it is important to note that there are very few intuitors, verbal learners, and global learners.

Figure 1: Learning styles profiles, by dimension, across four semesters of study. There is little variation among student population profiles throughout the duration of the study.
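The first step of the section-comparison statistics, a one-way ANOVA across instructors' classes, can be sketched in pure Python. The scores below are hypothetical, and in practice a statistics package would supply the p-value and the Tukey (or Steel-Dwass) post-hoc comparisons; this shows only how the F statistic is formed.

```python
from statistics import mean

def one_way_anova_F(groups):
    """One-way ANOVA F statistic for k groups of exam scores.

    F = (between-group mean square) / (within-group mean square);
    a large F suggests the section means differ more than chance alone.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical exam scores for three course sections
sections = [
    [78, 82, 85, 80],  # section 1
    [75, 79, 81, 77],  # section 2
    [60, 65, 63, 62],  # section 3
]
print(round(one_way_anova_F(sections), 2))  # 60.49
```

The F statistic would then be compared against the F(k-1, n-k) distribution at the 0.05 significance level; `scipy.stats.f_oneway` and `scipy.stats.tukey_hsd` cover this workflow, while the Steel-Dwass test requires a dedicated post-hoc package.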
Results and Discussion Part 2: Variations in Teaching Style

Our attention was first drawn to instructor-driven variability in student performance when we observed, in Spring 2014, that Faculty D scaled up their raw final scores by 20% to match the course averages of Faculty B. Further, we had observed throughout the semester that Faculty B and D gave very different exams. While the content was in theory the same, Faculty B typically had two questions per exam, and they were highly sequential and sensing (numerical). Faculty D, on the other hand, had up to six problems per exam, and engaged learning style preferences across a broader range of dimensions.