Open Access
The Unsubstantiated Cutoff: Deeper Analysis of Supplemental Instruction Sessions on Engineering Courses
Author(s) - Charles Wilson, Warren Waggenspack, Adrienne Steele, James Gegenheimer
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/p.27025
Subject(s) - attendance , mathematics education , selection bias , statistics , medical education
Active learning sessions such as those in the Supplemental Instruction model are often reported as successful when incorporated into high-DFW (Drop, Fail, Withdraw), high-enrollment courses (1). Research conducted by the U.S. Department of Education, Redish, Longfellow, and many others has reported significant benefits to students enrolled in courses that incorporate active learning strategies (1, 2, 3). The initial analysis of the impact of Supplemental Instruction on students in the College of Engineering at Louisiana State University (LSU) was consistent with these previous findings (4). However, researchers such as Dawson and McCarthy recognized some sobering truths: many analyses of Supplemental Instruction were incomplete and drew weak conclusions (5, 6). The research presented herein investigated two different modes of analysis to better determine the effectiveness of Supplemental Instruction (or similar models), taking advantage of the large dataset at LSU and attempting to rule out student self-selection bias. The first analysis was conducted in an attempt to directly answer Dawson's comment that SI success was often demonstrated only after choosing unsubstantiated cutoffs to define regular attendance; the number of SI sessions a student must attend to be considered a "regular attendee" varies greatly in the literature and can be defined to meet a researcher's preconceived notions of success (5). It was found that the trend appears linear: students' passing rates and course GPAs continue to improve as SI session attendance increases. Therefore, any choice of an attendance cutoff supports previous conclusions of increased course performance and passing rates. The second mode of analysis used standardized test scores to create a model predicting student success in certain courses and then determined whether SI attendance affected the modeled prediction, expanding on what has been done in previous literature (7).
When examining these variables independently, students who regularly attend SI sessions, as well as students with higher Math ACT scores, are more likely to pass a given course. However, Math ACT scores and SI session attendance were found to be inversely correlated, dispelling the misconception that only "good students" go to SI sessions. Similarly, when looking at both variables simultaneously, the data indicate that SI may help all groups of students regardless of their Math ACT scores; however, it appears to have the largest impact on those with lower Math ACT scores.

Background

In the 1990s, the U.S. Department of Education found that Supplemental Instruction (SI) participation is positively correlated with course grades, passing rates, and persistence. SI is a form of peer-led learning that utilizes undergraduate students who have previously excelled in the course being offered (8, 9). The general program goal is to help students succeed through collaborative learning (9). SI at LSU is offered in sophomore-level engineering mechanics courses with historically high enrollment and high DFW rates. Typically, SI is offered through one-and-a-half-hour sessions held biweekly, where material is presented to students using active learning strategies not common in lecture. Active learning requires more participation from students and can range from having students work problems on a board to having students solve a problem in groups (10). Supplemental Instructors (SIs) are chosen based on their previous success in engineering courses and their communication skills. Typically, SIs are required to have a written recommendation from the professor of the course they wish to teach and to interview with the SI coordinator before being hired. SIs create their own material, problems, and activities with guidance from the course professor, the program coordinator, and fellow engineering SIs.
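The joint analysis described above (pass rates broken out by both Math ACT score and SI attendance) can be sketched as a simple two-way table. This is an illustrative sketch only: the ACT band boundary (26), the attendance cutoff (4 sessions), the field names, and the tiny roster below are all assumptions for demonstration, not LSU's data or the authors' actual code.

```python
# Sketch: two-way pass-rate table by Math ACT band and SI attendance group.
# Band boundary, cutoff, field names, and records are illustrative assumptions.
from collections import defaultdict

def pass_rate_table(students):
    """Group students by (ACT band, attendance group); return pass rates."""
    counts = defaultdict(lambda: [0, 0])  # key -> [passed, total]
    for s in students:
        band = "low ACT" if s["math_act"] < 26 else "high ACT"
        group = "SI" if s["sessions"] >= 4 else "no/low SI"
        cell = counts[(band, group)]
        cell[1] += 1
        if s["grade"] in ("A", "B", "C"):  # A/B/C counts as passing, per the paper
            cell[0] += 1
    return {k: passed / total for k, (passed, total) in counts.items()}

# Tiny hypothetical roster, one student per cell
roster = [
    {"math_act": 22, "sessions": 6, "grade": "B"},
    {"math_act": 22, "sessions": 0, "grade": "D"},
    {"math_act": 30, "sessions": 5, "grade": "A"},
    {"math_act": 30, "sessions": 0, "grade": "C"},
]
table = pass_rate_table(roster)
```

Comparing cells within an ACT band isolates the apparent SI effect for that band, which is how one would check whether SI's impact is larger for students with lower Math ACT scores.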
Responsibilities of SIs include hosting biweekly sessions (the cornerstone of the program) and holding office hours, where students report feeling more comfortable asking questions. SIs also attend the course lectures, provide exam review sessions periodically throughout the semester, meet regularly with the course professor, and attend weekly SI meetings where active learning strategies are taught by the SI coordinator. Other responsibilities include monthly peer evaluations, in which SIs attend each other's sessions to review and learn about other active learning strategies, and biannual trainings, which are half-day workshops held at the beginning of each semester. Though the implementation varies at each university, the basic principles of SI remain the same, and data continue to support the programs' success (1-6, 8, 9). The success of SI programs has been strongly implied (1, 2, 3), although it is frequently scrutinized by administrators funding Supplemental Instruction. Many question whether SI's success is due to a motivation or self-selection bias (5, 6). The implication is that "good students" are more likely to go to SI sessions, thus boosting the statistics and perceived program success, and that these "good students" would have been successful anyway. Contributing to this speculation is the lack of detail on how most data were compiled for analysis, such as how regular attendance (or a benchmark of attendance) was defined in order to see an impact on student grades. Dawson et al. reported on several other groups' research utilizing similar benchmarks of attendance (5). The author critiqued that rationales for "regular attendance" cutoffs or boundaries were often "arbitrary and unsubstantiated," if addressed at all, and that the intervals were usually determined after an effect was expected. Dawson's critique is also applicable to this research team's previous reporting (4).
Therefore, in an attempt to better explain why there is a difference, or to find evidence for a cutoff, this research proposes a potential means of analyzing attendance cutoffs and attempts to further answer the question of whether the students who attend SI sessions would have been more likely to succeed regardless of session attendance. It is also the intention of this research to present and test potential alternative methods of analyzing Supplemental Instruction or similar out-of-class programs. For a full description of the Engineering SI Program at Louisiana State University's College of Engineering, see previous research (4).

Methods & Results: Part 1

This research explored two different modes of analysis to better determine the effectiveness of Supplemental Instruction by taking advantage of a large dataset at LSU. The aim of both analyses was to better understand and account for the possibility of student self-selection bias in program success. The first analysis was conducted in an attempt to more directly address Dawson's critique of unsubstantiated cutoffs to define regular attendance (5); the number of SI sessions a student must attend to be considered a "regular attendee" varies greatly in the literature and can be defined to meet a researcher's preconceived notions of success. The second analysis used standardized test scores (Math ACT scores (7)) to create a model predicting student success in certain courses and then determined whether SI attendance affected the modeled prediction. Data collection began during each SI session, where students were required to sign in. This usually occurred a few minutes into the session to give late students time to arrive. At the end of the semester, each student's SI session attendance count was merged with grades and other student data, such as Math ACT scores.
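The end-of-semester merge step described above (tallying sign-in sheets into per-student attendance counts, then joining them to the grade roster) can be sketched as follows. The student IDs, field names, and sample values are hypothetical stand-ins; this is not the authors' actual pipeline.

```python
# Sketch of the merge step: sign-in sheets -> attendance counts -> joined
# to grades and Math ACT scores by student ID. All identifiers are hypothetical.
from collections import Counter

def merge_records(signins, grades, math_act):
    """signins: list of student IDs, one entry per session sign-in.
    grades: dict id -> letter grade; math_act: dict id -> score (may be incomplete)."""
    attendance = Counter(signins)
    merged = []
    for sid, grade in grades.items():
        merged.append({
            "id": sid,
            "grade": grade,
            "sessions": attendance.get(sid, 0),  # 0 if the student never signed in
            "math_act": math_act.get(sid),       # None if no score on file
        })
    return merged

records = merge_records(
    signins=["s1", "s1", "s2", "s1"],          # s1 attended 3 sessions, s2 attended 1
    grades={"s1": "B", "s2": "C", "s3": "F"},
    math_act={"s1": 24, "s2": 29},
)
```

Iterating over the grade roster rather than the sign-in sheets matters: it keeps students who never attended a session (here, s3) in the merged dataset, which is essential for comparing attendees against non-attendees.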
In a previous analysis looking for impacts of SI on course passing rates, attendance data were grouped into students who attended no sessions, students who attended a few sessions (1-3), and students who regularly attended (4 or more sessions) (4). To determine whether the cutoff (4 or more sessions) was unsubstantiated, a test for linear correlation between the number of sessions attended and passing rates was performed (Figure 1). Passing rates (%) were calculated as the number of students earning an A, B, or C out of the entire course enrollment. Figure 1 shows a clear, positive correlation between the number of SI sessions attended and the average passing rate. The slope of this curve can be plotted (Figure 2) to indicate whether there is a significant increase at any particular number of sessions. The major disadvantage of breaking up passing rates by the distinct number of sessions attended is that uncertainty (scatter) greatly increases due to the smaller sample size of students who attend numerous sessions. However, the dataset at LSU includes over 5,000 students who have been offered SI in the College of Engineering, and this dataset continues to grow.

Figure 1: Passing rate of courses with SI offered compared to the number of sessions attended.

Figure 2: Change in pass rate per session attended compared to the number of sessions attended, to better understand whether there is a cutoff where maximum results may be observed.

Perhaps there is no perfect cutoff for a maximum perceived improvement. The probability of a student passing the class appears to increase roughly linearly as session attendance increases, which is why Figure 2 shows no apparent slope. After attending two sessions, each additional session attended is correlated with about a 1.5-2% increase in the chance of passing.
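The Figure 1 and Figure 2 computations described above can be sketched directly: pass rate per distinct session count, followed by the session-to-session change in pass rate (Figure 2's "slope"). The counts below are synthetic numbers chosen to mimic the reported roughly linear trend; they are not LSU's data.

```python
# Sketch of the Figure 1/2 analysis on synthetic counts (not LSU data).
# totals/passes map "sessions attended" -> number of students.

def pass_rates_by_sessions(totals, passes):
    """Figure 1: pass rate at each distinct session-attendance count."""
    return {n: passes[n] / totals[n] for n in sorted(totals)}

def per_session_change(rates):
    """Figure 2: change in pass rate between consecutive session counts.
    A cutoff would show up as a spike; a flat profile suggests a linear trend."""
    ns = sorted(rates)
    return {n2: rates[n2] - rates[n1] for n1, n2 in zip(ns, ns[1:])}

totals = {0: 1000, 1: 400, 2: 300, 3: 200, 4: 150}
passes = {0: 600, 1: 250, 2: 195, 3: 134, 4: 103}
rates = pass_rates_by_sessions(totals, passes)   # 0.60, 0.625, 0.65, ...
deltas = per_session_change(rates)
```

With these synthetic counts every per-session change is small and positive, illustrating the paper's point: when the increments are roughly constant, no particular cutoff stands out, and any chosen cutoff will show an attendance benefit.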
After the initial analysis plotting pass rate versus number of sessions attended (Figure 1), the authors realized that the total number of sessions offered varies between semesters and courses (ranging from fewer than 10 to more than 20). Percent-attendance bins seemed to be a reasonable replacement for the raw number of sessions attended, putting students into more even groupings across separate classes. For example, it may be argued that a student who attended 3