A Strategy for Sustainable Student Outcomes Assessment for a Mechanical Engineering Program that Maximizes Faculty Engagement
Author(s) - Sriram Sundararajan
Publication year - 2020
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--20002
Subject(s) - accreditation , workload , engineering education , engineering management , engineering
Direct assessment of student outcomes is required both for continuous improvement of a program and for ABET accreditation, and it is highly informative in describing student learning. Direct assessment methods range from evaluating student performance on locally prepared examinations or standardized tests to assessing student portfolios or conducting performance appraisals. The choice of method depends on a range of factors, including the number of students in the program, the impact on faculty workload, and the appropriateness of the sample size. One of the challenges in implementing a successful direct assessment process is engaging the faculty and achieving a high level of participation and support. Here we describe the development and successful implementation of direct assessment processes for a large mechanical engineering program with 1,750 students and 42 faculty at a land-grant, research-intensive, doctoral-granting university. The process was piloted in Spring 2011 to identify potential issues and fully implemented by Spring 2012. Assessment of the process itself indicates a high level of faculty satisfaction and involvement, suggesting that the process is sustainable.

Introduction

Continual self-evaluation and improvement of instruction-related activities is critical to maintaining excellence in an undergraduate educational program [1]. In recognition of this fact, accreditation bodies (e.g., ABET for engineering) typically require the establishment of such a process as a condition for accreditation. For engineering programs, ABET has established a set of General Criteria for Baccalaureate Level Programs that must be satisfied by all programs accredited by the Engineering Accreditation Commission [2]. These criteria are intended to assure quality and to foster the systematic pursuit of improvement in the quality of engineering education that satisfies the needs of constituencies in a dynamic and competitive environment. Among these criteria are the establishment of program educational objectives (Criterion 2), student outcomes (Criterion 3), and a continuous improvement process (Criterion 4) that regularly uses appropriate, documented processes for assessing and evaluating the extent to which the student outcomes are being attained. The terminology is given below for clarity.

Program educational objectives are broad statements that describe what graduates are expected to attain within a few years of graduation. Program educational objectives are based on the needs of the program's constituencies.

Student outcomes describe what students are expected to know and be able to do by the time of graduation. They relate to the skills, knowledge, and behaviors that students acquire as they progress through the program. Student outcomes are often referred to as the ABET a-k outcomes. In addition, program-specific outcomes may exist; for example, the American Society of Mechanical Engineers (ASME) specifies some outcomes in addition to ABET a-k [2]. Typically, program educational objectives map to student outcomes, which in turn map in some way to course outcomes.

Assessment is one or more processes that identify, collect, and prepare data to evaluate the attainment of student outcomes and program educational objectives. Effective assessment uses relevant direct, indirect, quantitative, and qualitative measures as appropriate to the objective or outcome being measured. Appropriate sampling methods may be used as part of an assessment process.
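To make the data flow concrete, the following is a minimal sketch of how sampled direct-assessment data might be reduced to an attainment figure and checked against a target. The rubric scale, sample size, proficiency cutoff, and 70% target are illustrative assumptions, not values from the program described in this paper.

```python
import random

# Hypothetical rubric scores (1-4) for one student outcome, one per student.
# In practice these would come from graded exams, projects, or portfolios.
scores = {f"student_{i}": random.randint(1, 4) for i in range(200)}

SAMPLE_SIZE = 50   # assess a random sample rather than every student
PROFICIENT = 3     # rubric score treated as "meets the outcome" (assumed)
TARGET = 0.70      # assumed fraction of students expected to be proficient

# Draw the sample and compute the fraction of proficient students.
sample = random.sample(list(scores.values()), SAMPLE_SIZE)
attainment = sum(s >= PROFICIENT for s in sample) / SAMPLE_SIZE

print(f"Outcome attainment: {attainment:.0%} (target {TARGET:.0%})")
if attainment < TARGET:
    print("Flag this outcome for evaluation and possible curricular action")
```

Sampling in this way is what keeps the faculty-time cost bounded: the effort scales with the sample size rather than with total enrollment.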
Evaluation is one or more processes for interpreting the data and evidence accumulated through assessment processes. Evaluation determines the extent to which student outcomes and program educational objectives are being attained, and it results in decisions and actions regarding program improvement.

It is generally accepted that good assessment processes combine direct and indirect methods [3]. A summary of commonly used methods and their classification is shown in Fig. 1. In general, direct assessment methods are more effort- and time-intensive and often become the bottleneck in an assessment process. This is primarily due to the demand on faculty and staff time, which leads to frustration and subsequently to resistance to participation, eventually undermining the intent to uphold excellence in the educational effort. Since the faculty deliver the educational programs, it is essential to have them fully vested in the process [4]. Therefore, to be truly effective, the assessment and evaluation processes should be aligned with faculty efforts in the educational enterprise and should minimize faculty effort. This is especially important for programs at research-intensive, doctoral-granting institutions, where the research enterprise imposes additional constraints on time and effort. This paper describes the development and successful implementation of a sustainable direct assessment process to measure attainment of student outcomes (summative assessment) for the Mechanical Engineering program at Iowa State University. The program is representative of a large mechanical engineering program at a land-grant, doctoral-granting, highly research-active university.

Figure 1: Classification of commonly used assessment methods.

State of the mechanical engineering program

Iowa State University's first diploma, awarded in 1872, was in the discipline of "mechanic arts, including mechanical engineering." Since then, the mechanical engineering program's impact has continued to grow, with its first accreditation in 1936. Currently, the American Society for Engineering Education ranks the department among the top ten programs nationally in terms of bachelor's degrees awarded. As of Fall 2013, the mechanical engineering program had an enrollment of approximately 1,750 undergraduate students. There are currently thirty-six tenure-track faculty members, including the department chair and the Provost of the University, as well as six full-time lecturers.

Motivation for change in assessment processes

An assessment and evaluation process established in 2003 and refined in 2007 proved difficult to sustain, primarily due to two major factors:

• Highly data- and faculty-time-intensive assessment process: The process involved performing direct assessment on every course outcome in every departmentally administered course, and it was suggested that this be done every year. One can easily imagine the level of effort involved in such a process. In addition, it was not clear what could be learned from this large amount of data.

• Inefficient oversight: A highly complex and layered oversight system with largely distributed responsibility obscured ownership of the deliverables. This led to a very loose oversight system that typically was not active in engaging and reminding the faculty of their responsibilities.
With program enrollment and faculty size continuing to grow, there was an obvious need to establish a more sustainable assessment and evaluation process and oversight structure for long-term impact. Departmental leadership participated in several national workshops in 2010 to learn best practices for sustainable assessment. As a result, new assessment and evaluation processes were established in Fall 2010 by engaging all constituents (faculty, industrial advisory council) throughout the development and implementation process. The underlying philosophy was to focus on summative assessment of the program and to minimize faculty and staff burden.

New oversight structure and division of responsibility

The current oversight structure, implemented in Summer/Fall 2010, leverages existing leadership positions in the department and the existing Course Development Committees (CDCs) for the core curriculum courses; it is shown in Fig. 2. The CDCs typically consist of the instructors who usually teach a particular class, and each CDC is responsible for implementing major changes to a particular course. Oversight responsibility resides primarily with the Associate Chair for Undergraduate Studies and an assessment coordinator. Both individuals have a continuing formal responsibility for oversight of the assessment and evaluation process, as defined in their position responsibility statements. The assessment coordinator also sits on the College of Engineering ABET committee and facilitates the exchange of information and the promotion of collaborative assessment and evaluation efforts pertinent to accreditation. The Associate Chair for Undergraduate Studies also chairs the Undergraduate Education Committee, which comprises the Course Development Committee chairs, the assessment coordinator, and a staff support member. This committee is responsible for recommending changes to the assessment and evaluation process, evaluating the assessment data, and recommending changes to the curriculum. These recommendations are then presented to the faculty and the industrial advisory council of the department for feedback and finalization, and the entire faculty vote on any proposed changes to the curriculum. Finally, the Associate Chair for Undergraduate Studies and the assessment coordinator are responsible for reviewing the assessment/evaluation process and making changes as necessary, and for spearheading accreditation-related reporting. By concentrating responsibility in two individuals, ownership of the processes is clear.

Figure 2: Current oversight structure and division of responsibilities established in 2010.

Change process

As is typical in most engineering programs, indirect assessment of course outcomes was already being carried out through a student survey at the end of each semester. Students were asked to assess their opportunities to attain student outcomes.
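As a companion to the direct-assessment sketch above, the following illustrates how responses from such an end-of-semester survey might be aggregated per outcome. The 1-5 Likert scale, the specific outcome labels, and the 3.5 review threshold are hypothetical; the paper does not specify the survey's scale or cutoffs.

```python
from statistics import mean

# Hypothetical survey responses: each student rates, on a 1-5 Likert scale,
# their opportunity to attain a given outcome in a course. Labels refer to
# the ABET a-k outcomes, but the data here are invented for illustration.
responses = {
    "outcome_a": [5, 4, 4, 3, 5, 4],
    "outcome_e": [3, 2, 4, 3, 3, 2],
    "outcome_k": [4, 4, 5, 4, 3, 4],
}

REVIEW_BELOW = 3.5  # assumed mean rating below which the CDC revisits a course

for outcome, ratings in responses.items():
    avg = mean(ratings)
    flag = "  <- review" if avg < REVIEW_BELOW else ""
    print(f"{outcome}: mean opportunity rating {avg:.2f}{flag}")
```

Because these surveys are already administered every semester, the indirect data impose essentially no additional faculty workload, complementing the sampled direct assessment.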