Open Access
Differences and Similarities in Student, Instructor, and Professional Perceptions of "Good Engineering Design" through Adaptive Comparative Judgment
Author(s) -
Scott Bartholomew,
Greg Strimel,
Şenay Purzer,
Liwei Zhang,
Emily Yoshikawa
Publication year - 2020
Publication title -
2018 ASEE Annual Conference and Exposition Proceedings
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--30333
Subject(s) - rubric , creativity , psychology , perception , work (physics) , process (computing) , general partnership , critical thinking , quality (philosophy) , pedagogy , computer science , medical education , mathematics education , engineering , political science , social psychology , mechanical engineering , neuroscience , law , operating system , philosophy , medicine , epistemology
This project details the results from first-year undergraduate engineering students, engineering instructors, and industry professionals collaborating to assess student design projects. Each group (students, instructors, and industry professionals) used adaptive comparative judgment to rank the final projects from a first-year engineering course designed to engage students in the process of design and analysis in engineering at a public land-grant institution. The similarities and differences in the resulting rank orders of the student design projects, as well as the accompanying rationale statements from the assessors, were used to identify the assessment values and perceptions of quality of each group. A better understanding of these similarities and differences can help inform education practice and may assist educators in ensuring alignment between current practice and industry needs toward future student success.

Introduction

Adequately preparing students for future careers, learning, and civic opportunities is at the forefront of discussions around best practices for education (Sharples et al., 2016). However, despite an emphasis on this preparation, research shows there is not always alignment between student, instructor, and industry perceptions of the skills, traits, and competencies necessary for this preparation (Adecco, 2014; Deloitte, 2015). Further, much of this preparation revolves around difficult-to-teach-and-assess 21st-century skills such as critical thinking, problem-solving, creativity, innovative thinking, and collaboration (NRC, 2012). In spite of the challenges associated with these issues, the adequate preparation of students for future success is paramount (Partnership, 2017).
Therefore, this research investigates the use of a relatively new form of assessment, Adaptive Comparative Judgment, as a tool not only for improving the assessment practices of educators in open-ended situations (Bartholomew, 2017), but also as a method for identifying potential disconnects between student, instructor, and industry perceptions of quality.

Problem Statement

As society continues to progress through a host of technological advances and innovations, skills such as creativity, problem solving, collaboration, critical thinking, and communication are consistently lauded as necessities for student success in school and the workplace (Partnership, 2017). Students' ability not only to interact with, but also to shape, influence, and catalyze, innovative forces has led to calls for an emphasis on design thinking and design skills in students (Goldman, Kabayadondo, Royalty, Carroll & Roth, 2014), with a specific emphasis on engineering design (Grubbs & Strimel, 2015; NRC, 2009). The emphasis on engineering design is not restricted to the United States but extends globally, as designing is considered a fundamental component of education worldwide (Banks & Williams, 2013; Barlex, 2006; Gattie & Wicklein, 2007; Leahy & Phelan, 2014; Sheppard, 2003; Wakefield & Owen-Jackson, 2013). Notwithstanding the increased emphasis on engineering design, its actual implementation has been disjointed, as several different engineering design processes and procedures have been developed, endorsed, and implemented in classroom learning environments (Grubbs & Strimel, 2015; Reeve, 2016). Though many of these engineering design process models have similarities, components exemplified in one model may be excluded from another (Flowers, 2010; Reeve, 2016). Other recent findings demonstrate that these engineering design processes may not accurately reflect the practices used in industry and technical fields (Reeve, 2016).
Accordingly, we investigated the perceptions of students, instructors, and practicing engineers through the assessment of a collection of student work from a first-year engineering course.

Research Questions

To investigate the potential similarities and differences in the values related to engineering design between students, instructors, and practicing engineers, the following questions guided our study:

RQ1: What correlation, if any, exists between the perceptions of first-year engineering education students, their instructors, and practicing engineers when assessing student design projects through adaptive comparative judgment?

RQ2: What design values, if any, can be identified through the collected comments from adaptive comparative judgments of students, instructors, and practicing engineers?

First-Year Engineering

Undergraduate students beginning their post-secondary engineering studies often enroll in a first-year engineering program prior to entering a discipline-specific major, such as chemical, electrical, or biomedical engineering. Through first-year programs, students typically share a common set of coursework with other engineering majors at a similar level of academic achievement (Strimel et al., 2018). These programs can be viewed as providing students with the information necessary to select an appropriate engineering discipline-specific major, along with the knowledge and skills necessary for success in that major. The first-year engineering curriculum is often designed to reinforce basic science and mathematics concepts while developing a student's engineering design capabilities. According to Strimel et al. (2018), the typical core requirements during a student's first year include physics, chemistry, multiple levels of calculus, and writing/composition, as well as an engineering orientation seminar and multiple engineering courses focused on design/problem solving.
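RQ1 above asks how strongly the rank orders produced by the three judging groups correlate. The paper does not specify its statistic at this point, but Spearman's rank correlation coefficient is a standard choice for comparing two rank orders of the same set of projects; the following is an illustrative sketch only, not the study's own analysis code.

```python
def spearman_rho(rank_x, rank_y):
    """Spearman rank correlation between two rankings of the same n items.

    Each argument is a list of ranks 1..n for the same items in the same
    order (no ties assumed). Returns a value in [-1, 1]:
    1.0 = identical orderings, -1.0 = exactly reversed orderings.
    """
    n = len(rank_x)
    # Sum of squared rank differences, per Spearman's formula.
    d_sq = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical ranks for four projects from two judging groups:
print(spearman_rho([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0 (perfect agreement)
print(spearman_rho([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0 (complete disagreement)
```

A value near zero would indicate that the two groups rank the projects essentially independently of one another.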
While the orientation seminars often include an exploration of each engineering discipline, the first-year engineering coursework typically focuses on fundamental knowledge and skills considered necessary across all engineering disciplines (i.e., engineering design, project management, teamwork, technical writing, and data analysis). These courses typically engage students in problem- and team-based activities to teach engineering design methods and tools and to develop their skills in logical thinking, problem-solving, design, computational thinking, collaboration, project management, and technical communication. Therefore, first-year engineering coursework can provide a strong platform for studying the perceptions, teaching, learning, assessment, and valuation of engineering design practices and capabilities.

Engineering Design

Project-based learning and design-based learning pedagogies are widely used in first-year engineering design courses (Dym, Agogino, Eris, Frey, & Leifer, 2005). An emphasis on teaching design and utilizing design-based pedagogies was noted by Dym et al. (2005), who specifically highlighted the trend of introducing project-based learning through design in first-year engineering courses. Engineering design often revolves around a process-based approach, typically involving problem scoping, needs analysis, solution generation, prototyping, testing and evaluation, presenting, and continuous improvement. Specifically, Dym et al. (2005) defined engineering design as:

A systematic, intelligent process in which designers generate, evaluate, and specify concepts for devices, systems, or processes whose form and function achieve clients' objectives or users' needs while satisfying a specified set of constraints.
(p. 104)

While emphasizing and teaching engineering design to students is seen as a positive and useful endeavor, assessing engineering design projects remains a challenge for educators due to the open-ended nature of many engineering design problems (Bartholomew, 2017). Chiu & Salustri (2010) suggested peer evaluation and bringing in experts to evaluate creativity in student design projects, while Dym et al. (2005) reported the use of holistic judgment, rubrics, and combinations of the two to assess the quality of project solutions. Further, Platanitis & Pop-Iliev (2010) suggested using rubrics to evaluate engineering projects but raised important questions about how to establish grading criteria. In addition to challenges with assessment, research on students working in engineering design contexts has revealed differences between students at different grade levels and practicing engineers. Specifically, Atman, Chimka, Bursic, & Nachtmann (1997, 1999) found that students who are new to engineering design tend to spend less time defining a problem and gathering information than more senior engineering students and practicing engineers.

Adaptive Comparative Judgment

Adaptive Comparative Judgment (ACJ) is a relatively new assessment approach that uses comparisons for decision-making and is grounded in research originally done by Thurstone (1927). Thurstone (1927), and later Pollitt (2004, 2012), argued that human beings are inherently better at making comparative judgments than rubric-based assessments. This claim has been validated repeatedly, in a variety of settings (Bartholomew et al., 2017; Hartell & Skogh, 2016; Kimbell, 2007, 2012; Pollitt, 2004, 2007), and has contributed to a large increase in research on and implementation of ACJ in educational settings (Bartholomew & Yoshikawa, 2017).
ACJ has consistently produced higher levels of reliability (Bartholomew et al., 2017; Hartell & Skogh, 2016; Kimbell, 2007, 2012; Pollitt, 2004, 2007) and validity (Bartholomew, Strimel, & Jackson, 2018; Bartholomew, Strimel, & Yoshikawa, in press) than traditional assessment methods. The process is iterative, with multiple judges viewing sets of items; as different pairings of items are compared, the reliability of the resulting rank order increases (the highest-ranking item is the one consistently chosen as "better" during the judgments). After each judgment is made, the judge can also be prompted to provide comments justifying the choice and revealing their thought process during the judgment. The process of choosing one item over another is holistic—jud
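The core mechanic described above, in which pairwise "better/worse" choices accumulate into a rank order, can be sketched in a few lines. The sketch below is a deliberately simplified, non-adaptive version: real ACJ systems choose each next pairing adaptively based on current estimates and fit a statistical model (following Thurstone's work) rather than counting raw wins, and the item names and judge function here are invented for illustration.

```python
import itertools

def comparative_judgment_rank(items, judge, rounds=3):
    """Toy comparative-judgment ranking (non-adaptive simplification of ACJ).

    items  : list of hashable item identifiers
    judge  : callable (a, b) -> whichever of a, b the judge prefers
    rounds : number of times every pairing is presented
    Returns the items sorted by total wins, best first.
    """
    wins = {item: 0 for item in items}
    for _ in range(rounds):
        # Present every pairing; in real ACJ, pairings are chosen adaptively.
        for a, b in itertools.combinations(items, 2):
            wins[judge(a, b)] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Hypothetical example: a judge who always prefers the higher "true quality".
quality = {"project_A": 0.9, "project_B": 0.4, "project_C": 0.7}
judge = lambda a, b: a if quality[a] >= quality[b] else b
print(comparative_judgment_rank(list(quality), judge))
# ['project_A', 'project_C', 'project_B']
```

With a consistent judge, the win counts recover the underlying quality ordering exactly; with multiple noisy human judges, the adaptive pairing and statistical modeling in real ACJ tools are what drive the reliability gains the paper reports.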
