Practical, Efficient Strategies For Assessment Of Engineering Projects And Engineering Programs
Author(s) - Kevin Dahm
Publication year - 2020
Publication title - Papers on Engineering Education Repository (American Society for Engineering Education)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--16438
Subject(s) - accreditation, engineering education, engineering management, curriculum, grading (engineering), multidisciplinary approach, engine department, engineering, medical education, computer science, medicine, psychology, pedagogy, civil engineering, social science, sociology
The process of seeking and gaining accreditation for an engineering program changed substantially ten years ago, when the EC2000 criteria were implemented. (The moniker EC2000 is no longer in use; they are now simply the ABET criteria.) Programs must now define goals and objectives for their program, provide evidence that graduates are meeting these objectives, and demonstrate evidence of continuous improvement. These accreditation criteria present programs with significant challenges: departments must determine what data are needed and collect those data regularly, and, to be sustainable, assessment plans must make efficient use of faculty time. This paper will present strategies for collecting assessment data that serve multiple purposes beyond accreditation, using the Rowan University Junior/Senior Engineering Clinic as an example.

The Rowan University Junior/Senior Engineering Clinic is a multidisciplinary, project-based course required for engineering students in all disciplines. Students solve real engineering research and design problems, many of which are sponsored by local industry. Because each Clinic project is unique, grading student work while maintaining approximately uniform expectations across all projects is a significant challenge. At the same time, the Clinic is the course within the Rowan Engineering curriculum that best reflects professional engineering practice. Consequently, the Junior/Senior Clinic provides an excellent forum for assessing whether students have achieved the desired pedagogical outcomes of the curriculum.

This paper will present a set of assessment rubrics currently used by the Rowan Chemical Engineering department. The data collected serve two purposes: they are used to grade individual student projects, and they are used for program-level assessment. The assessment strategies presented are of potential utility to any engineering faculty member, but may be of particular interest to new faculty members, for whom research productivity and generation of publications are essential. This paper will present evidence that implementation of the assessment process led directly to improved student performance in the Junior/Senior Clinic, and thus improved the overall research productivity of the entire department. Further, new faculty members often have innovative ideas for classroom teaching; this paper will demonstrate how the assessment rubrics have been used as a tool for turning pedagogical innovations into publishable pedagogical scholarship.

Programmatic Assessment for Engineering

Background

Since 2000, ABET [1] has required that, in order to be accredited, engineering programs demonstrate evidence of continuous assessment and continuous improvement.
Components of a good assessment strategy include:

1) Establish goals and desired educational outcomes for the degree program, which must include the 11 outcomes [2] (designated "A-K") identified by ABET as essential for all engineering programs.

2) Measure whether graduates of the program are attaining the goals and outcomes.

3) Use the data collected in step 2 to identify opportunities for improvement, and modify the program accordingly.

4) "Close the loop" by assessing whether the changes led to improved attainment of the desired outcomes [1].

According to Gloria Rogers [3], the most difficult part of the process, and one that most engineering programs do not do well, is "identification of a limited number of performance indicators for each outcome." An outcome is a broad statement such as "The Chemical Engineering Program at Rowan University will produce graduates who demonstrate an ability to apply knowledge of mathematics, science, and engineering," which mirrors ABET outcome A [1]. Dr. Rogers notes that programs "...tend to go from broad outcomes to data collection without articulating specifically what students need to demonstrate..." [3] The next section discusses a strategy employed at Rowan University for our first two ABET visits, in 2000 and 2006.

Strategies Employed at Rowan University

The Rowan University Chemical Engineering department developed a set of assessment rubrics, which were published previously in Chemical Engineering Education [4]. A sample rubric is shown in Table 1. For each outcome, 3-6 indicators were identified, and these are located in the leftmost column. For each indicator, precise descriptions of four different levels of achievement were devised. When reviewing a sample of work product (exam, lab report, etc.), the evaluator simply moves from left to right until he/she finds the descriptor that accurately characterizes the student's work.

The department also conducted a study [4] demonstrating that these rubrics provide excellent consistency among different raters evaluating a particular exam or report. This result highlights one significant merit of the indicators: such inter-rater reliability would presumably not be present if the evaluator were making a single, holistic determination of whether the student "demonstrates an ability to apply knowledge of mathematics, science and engineering," or were rating work on a scale from 1-4 with no specific description of what each number meant.
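To make the mechanics concrete, the sketch below models a rubric of this kind as a simple data structure: each indicator maps to four level descriptors, and an evaluator records the level whose descriptor fits the work. The indicator names, descriptor wording, and the percent-agreement calculation are illustrative assumptions, not the published Rowan rubrics or the statistic used in the department's reliability study.

```python
# Hypothetical sketch of a four-level assessment rubric. The indicators
# and descriptors below are invented for illustration; they are not the
# rubrics published by the Rowan Chemical Engineering department.

# Each indicator maps to four descriptors, ordered from the lowest
# achievement level (1) to the highest (4). An evaluator reads left to
# right (here, index 0 to 3) and records the level that fits the work.
RUBRIC = {
    "formulates governing equations": [
        "equations are absent or incorrect",
        "equations are present but misapplied",
        "equations are correct with minor errors",
        "equations are correct and well justified",
    ],
    "interprets results physically": [
        "no interpretation offered",
        "interpretation is superficial",
        "interpretation is sound but incomplete",
        "interpretation is thorough and insightful",
    ],
}

def score_work(levels_chosen: dict[str, int]) -> dict[str, int]:
    """Validate and return the level (1-4) chosen for each indicator."""
    for indicator, level in levels_chosen.items():
        if indicator not in RUBRIC:
            raise KeyError(f"unknown indicator: {indicator}")
        if not 1 <= level <= 4:
            raise ValueError(f"level must be 1-4, got {level}")
    return levels_chosen

def percent_agreement(rater_a: dict[str, int], rater_b: dict[str, int]) -> float:
    """Crude inter-rater consistency: the fraction of indicators on
    which two raters chose exactly the same level."""
    shared = rater_a.keys() & rater_b.keys()
    matches = sum(rater_a[k] == rater_b[k] for k in shared)
    return matches / len(shared)

if __name__ == "__main__":
    a = score_work({"formulates governing equations": 3,
                    "interprets results physically": 4})
    b = score_work({"formulates governing equations": 3,
                    "interprets results physically": 3})
    print(f"agreement: {percent_agreement(a, b):.0%}")  # agreement: 50%
```

A formal reliability study would use a more rigorous statistic (Cohen's kappa, for example), but even this simple agreement fraction illustrates why tightly worded descriptors matter: raters can only agree consistently when each level is unambiguous.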
While these rubrics were an effective tool for measuring student achievement of goals and objectives, it would have been impractical to apply them to every student assignment. Further, not all outcomes can be assessed from all assignments; an exam, for example, is not particularly useful for assessing the outcome "The Chemical Engineering program will produce graduates who have effective written communication skills." The department consequently chose a portfolio of five assignments (Unit Operations Lab report, Chemical Plant Design final report, HAZOP report, Chemical Reaction Engineering final exam, and Junior/Senior Clinic final report) and determined that every program outcome was substantially addressed by at least two of these.

The rubrics were applied to the portfolios each year from 2000-2006, and data were compared across years. Several programmatic improvements have resulted from this process; one example is given here. The spring 2005 portfolios consisted of writing assignments that were good overall, but a number had weak literature reviews. The department took a number of steps to address the problem, as summarized in Appendix 1. Notably, however, the weakness likely would not have been identified at all without the formulation of specific indicators within each objective. Because the writing assignments were generally good, grades or other holistic measures of assignment quality would not have detected a problem. This is one of several reasons [5] why ABET recommends against using student grades as assessment instruments.

Multiple Uses for Assessment Data

The previous section provided a nutshell description of an effective assessment plan and its use in evaluating and improving an engineering program. The primary drawback of this strategy is that the process of evaluating portfolios is very time-consuming, and using time efficiently is a priority for any faculty member. This section demonstrates how assessment data can be collected and used for multiple purposes, using the Rowan University Junior/Senior Engineering Clinic as an example.

Junior/Senior Engineering Clinic

Rowan University has an eight-semester Engineering Clinic program that provides engineering students with experience solving practical, open-ended engineering problems. The sequence culminates in the Rowan Junior/Senior Engineering Clinic, in which students work on real engineering research and design projects. Project teams work under close faculty supervision and usually consist of 3-4 students, sometimes drawn from a single discipline and sometimes representing several, depending on the needs of the particular project. Most projects are externally sponsored, either by local industry or by government agencies. The Mechanical Engineering and Electrical and Computer Engineering programs use the Junior/Senior Clinic as the capstone design experience in their programs. While the Chemical Engineering and Civil and Environmental Engineering departments have separate capstone design courses, these departments also recognize the Junior/Senior Clinic as a course that closely reflects engineering practice. Consequently, the Junior/Senior Clinic figures prominently in the assessment efforts of all four programs.

As noted in the previous section, the Junior/Senior Clinic final reports were included in the portfolios of student work reviewed at the end of every year. While the department obtained valuable data from the portfolio evaluation, an inefficiency in the process was also evident: each paper was read first by the project supervisor(s), who assigned a grade to the report, and then read a second time by different faculty member(s), who evaluated it using the rubrics for assessment purposes. The next section describes a new system that has been implemented to accomplish both tasks in a single reading.

Rubrics for Assessing Engineering Clinic Projects

Two members of the department produced a second set of rubrics, designed specifically for Junior/Senior Engineering Clinic projects. (It is expected that the rubrics could be applied, with little or no modification, to undergraduate engineering research projects at other universities.) Sixteen elements of a Clinic project were identified, and for each, descriptions of four levels of performance were written. These rubrics were published in Chemical Engineering Education [6], and two of the original 16 rubrics are shown in Table 2.
Levels of performance were mapped to letter grades (A, B, C, and D/F), and the rubrics were passed out to students on the first day of Junior/Senior Clinic in order to clarify expectations for the course. Note that the rubrics are intended for overall evaluation of a team project; separate mechanisms are needed for evaluating individual contributions to the team's work.
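As a rough illustration of the grading side of this dual use, the sketch below maps each of the four performance levels to a letter grade and combines the per-element results into an overall project grade. The paper does not specify how the sixteen element scores are aggregated, so the equal-weight grade-point averaging, the cutoffs, and the element names here are all assumptions.

```python
# Hypothetical aggregation of Clinic rubric scores into a letter grade.
# The level-to-grade mapping follows the paper (A, B, C, D/F); the
# equal-weight averaging and the element names are illustrative
# assumptions, not the department's published procedure.

LEVEL_TO_GRADE = {4: "A", 3: "B", 2: "C", 1: "D/F"}
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D/F": 0.5}

def project_grade(element_levels: dict[str, int]) -> str:
    """Average the per-element grade points and map back to a letter."""
    points = [GRADE_POINTS[LEVEL_TO_GRADE[lvl]]
              for lvl in element_levels.values()]
    gpa = sum(points) / len(points)
    # Conventional GPA cutoffs, chosen here for illustration only.
    if gpa >= 3.5:
        return "A"
    if gpa >= 2.5:
        return "B"
    if gpa >= 1.5:
        return "C"
    return "D/F"

if __name__ == "__main__":
    scores = {"literature review": 4, "experimental design": 3,
              "data analysis": 3, "final report": 4}
    print(project_grade(scores))  # -> "A" (average 3.5)
```

The appeal of such a scheme is that the same per-element level records feed both this grade computation and the program-level tallies described above, so a single reading of each report yields both grading and assessment data.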
