An Evaluation of an Engineering Leadership Development Program on Alumni Job Placement and Career Progression
Author(s) -
Dena Lang,
Travis Gehr,
Meg Handley,
John Park,
Andrew Erdman
Publication year - 2020
Publication title -
2020 ASEE Virtual Annual Conference Content Access Proceedings
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--34119
Subject(s) - curriculum , likert scale , career development , leadership development , professional development , psychology , pedagogy , engineering
This is a ‘work-in-progress’ paper and is appropriate for the ‘Inform’ topic area. Leadership development programs have become an integral part of the engineering curriculum, meeting the professional development needs of our graduates as well as the needs of their employers. This paper reports preliminary results from a survey of alumni of an undergraduate engineering leadership development program. The survey was developed to assess the degree to which the program is meeting its goals, which include ensuring that the program targets the skills needed in today’s workplace, as well as enhancing students’ ability to land their first job and advance in their careers. Graduates of the program (n=136) were surveyed to better understand the impact of the program on their initial career placement, subsequent career advancement, and the development of skills needed for today’s engineering work. Alumni were asked to rate their agreement (on a Likert scale: Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree) with the following statements: 1) The ELD program was instrumental in helping me get my first job; 2) The ELD program was instrumental in helping me get one or more promotions; and 3) The ELD program helped me develop skills needed for today’s engineering work. These survey questions were intended to assess whether the alumni regarded their participation in the leadership development program as important to their initial hire and subsequent career progression. In addition, the third survey item was used to assess whether alumni believed that the program’s developmental objectives were meeting the needs of our graduates in the workplace. Results from the alumni survey indicated that respondents felt the ELD program was instrumental in helping ELD minor graduates get their first job (64% responded strongly agree or agree) and in getting one or more promotions (57% responded strongly agree or agree).
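The agreement figures reported above are "top-two-box" percentages (the share of respondents answering Agree or Strongly Agree), which can be tallied in a few lines of Python. The response counts below are hypothetical, chosen only so that they sum to n=136 and reproduce the reported 64% for the first survey item.

```python
from collections import Counter

# Hypothetical responses to item 1 ("The ELD program was instrumental in
# helping me get my first job"); counts are invented for illustration but
# sum to the paper's n=136 and yield its reported 64% agreement.
responses = (["Strongly Agree"] * 38 + ["Agree"] * 49 + ["Neutral"] * 30 +
             ["Disagree"] * 14 + ["Strongly Disagree"] * 5)

def top_two_box(responses):
    """Percent answering Agree or Strongly Agree on a 5-point Likert item."""
    counts = Counter(responses)
    agree = counts["Agree"] + counts["Strongly Agree"]
    return round(100 * agree / len(responses))

print(f"{top_two_box(responses)}% responded strongly agree or agree")
```

The same function applied to the promotion and skills items would yield the paper's 57% and 86% figures, given those items' (unpublished) response counts.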
In addition, the survey results indicate that respondents believed the program helped them develop the skills needed for today’s engineering work (86% responded strongly agree or agree). Future work will explore whether participation in the leadership development program results in differences in salary level upon graduation compared with similar graduates not in the leadership program. In addition, follow-up work will aim at better understanding where improvements can be made within the leadership development curriculum.

Introduction and Background

Many universities have incorporated leadership development programs into their curricula, at both the undergraduate and graduate levels. Through a review of the 2018 U.S. News and World Report rankings, Reyes et al. (2019) reported that the top 50 ranked universities all offered some form of leadership development for their students. With recent updates to the Accreditation Board for Engineering and Technology (ABET) criteria, criterion 3 (student outcomes) now includes several outcomes that are relevant to leadership development programs: (2) an ability to apply engineering design to produce solutions that meet specified needs with consideration of public health, safety, and welfare, as well as global, cultural, social, environmental, and economic factors; (3) an ability to communicate effectively with a range of audiences; (4) an ability to recognize ethical and professional responsibilities in engineering situations and make informed judgments, which must consider the impact of engineering solutions in global, economic, environmental, and societal contexts; and (5) an ability to function effectively on a team whose members together provide leadership, create a collaborative and inclusive environment, establish goals, plan tasks, and meet objectives (ABET, 2020).
While the inclusion of leadership development programs has been common practice in many disciplines, it has been on the increase within engineering programs, particularly over the last decade. A necessary component of any leadership development program is the ability to assess the program’s effectiveness and its impact on students. Reyes et al. (2019) performed a meta-analysis of leadership development program evaluation in higher education. The aim of their analysis was to identify the design and delivery methods that are best at developing students as leaders, as well as to identify gaps between management science and higher education practice, particularly when it comes to program evaluation. Previous meta-analyses have mostly focused on the evaluation of leadership development programs as implemented within the workforce, examining outcomes based on employee performance and participant feedback (Avolio et al., 2009; Burke & Day, 1986; Collins & Holton, 2004; Lacerenza et al., 2017; Powell & Yalcin, 2010). Reyes et al. (2019) reported that their meta-analysis was the first attempt to examine leadership development effectiveness from the student perspective within a university context. Reyes et al. evaluated 73 leadership development studies, comprising 5,654 participants in total, with 78% of the samples at the undergraduate level and 16% at the graduate level. Reyes et al. utilized Kirkpatrick’s training evaluation framework when evaluating the effectiveness of leadership development programs. Kirkpatrick identified four primary types of outcomes: trainee reactions (opinions, perceived utility); learning (level of knowledge related to targeted KSAs); transfer of training (extent of application of KSAs to the workplace); and results (organizational outcomes, e.g., financial performance, turnover) (Kirkpatrick, 1959). Reyes et al.
report that 43.1% of studies measured skill-based outcomes; 20.8% measured affective outcomes; 6.9% measured cognitive outcomes only; and 29.2% measured a combination of outcomes. Reyes et al. also indicated that the following methods were used by these leadership development programs to evaluate program outcomes: self-report methods (80.8%); peer ratings (1.4%); observer ratings (5.5%); objective reports (4.1%); and multiple methods (8.2%; either self-report and observer ratings, self-report and objective ratings, self-report and peer ratings, or self-report, objective, and observer ratings) (Reyes et al., 2019). Reyes et al. report that leadership development programs in higher education are increasing ‘learning’ by 19%, but the ‘transfer of learning’ lags behind, with an increase of 14%. The authors recognize that the lack of transfer of learning to the workplace is well documented in the training literature, but also suggest that it may result from a lack of transfer focus within the leadership development programs, or may be related to constraints on gathering transfer data within the educational environment, as compared with an organization providing training to its own employees. Due to a small sample size, they were unable to test the effect of the leadership development programs on ‘reactions’ and ‘results’ outcomes. Their review indicated that most programs focus on skill-based learning (including communicating, persuading others, setting goals, and problem solving), and they suggest that future research also evaluate cognitive and affective outcomes, as these have been shown to be important in shaping behaviors (Kahle & Berman, 1979). Their review also indicated that most programs used implementation approaches that were convenient and inexpensive, and they suggest that programs should include more practice, such as reflective activities, role-play, goal setting, and games. Given that the majority of programs used self-report assessments, Reyes et al.
also suggest that researchers consider best practices for program evaluation, in particular avoiding endogeneity concerns within the evaluation data. Through their meta-analysis, they identified three dominant concerns that threaten causal inference within the examined studies: 1) non-random assignment to treatment and control groups; 2) self-selection bias (when programs are voluntary); and 3) the use of a single method of self-reporting. The majority (63.2%) of the programs studied in the meta-analysis had all three issues, and the remaining programs had either two of the issues (24.6%) or one of the issues (12.3%). Antonakis et al. (2010) discuss these common issues within social science research, specifically in the context of leadership research. Antonakis et al. indicate that endogeneity is a critical issue present in an alarming number of studies in the literature, one that undermines our ability to make causal inferences. For causal relationships to be inferred, the independent variable must vary randomly and must not be correlated with other, unexamined causes (exogeneity). Self-selection into leadership development programs results in a non-random treatment group, and other variables (such as IQ, extraversion, or emotional intelligence) may be related to both self-selection and leadership ability. The authors highlight the gold standard in treatment evaluation: a randomly assigned and representative sample compared with an equivalent control group. However, they also acknowledge that this is not possible in many instances, and they provide an in-depth review of the problematic issues confronting many research studies in the leadership literature, as well as recommendations on how to design research studies to address these common issues.
They recommend several methods for inferring causality in non-experimental settings: “Propensity score analysis: Compare individuals who were selected to treatment to statistically similar controls using a matching algorithm; Simultaneous-equation models: Using ‘instruments’ (exogenous sources of variance that do not correlate with the error term) to purge the endogenous x variable from bias; Regression discontinuity: Select individuals to treatment using a modelled cut-off; Difference-in-differences […]”
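The first of these remedies, propensity-score matching, can be illustrated with simulated data. The sketch below is not from the paper: the confounder ("ability"), the true program effect of 3 salary units, and the sample are all invented. It shows how matching each self-selected participant to the most similar non-participant shrinks the upward bias that a naive treated-versus-untreated comparison suffers under self-selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a latent 'ability' confounder drives both self-selection
# into the program and later salary, mimicking the endogeneity problem.
n = 1000
ability = rng.normal(0, 1, n)
enrolled = (ability + rng.normal(0, 1, n)) > 0           # voluntary enrollment
salary = 60 + 5 * ability + 3 * enrolled + rng.normal(0, 2, n)  # true effect = 3

# Naive comparison: biased upward, because enrollees have higher ability.
naive = salary[enrolled].mean() - salary[~enrolled].mean()

# Propensity-score matching: score each unit on the observed confounder,
# then pair every treated unit with the nearest-scoring control.
score = 1 / (1 + np.exp(-ability))                       # logistic propensity
controls = np.flatnonzero(~enrolled)
nearest = np.abs(score[controls][None, :] -
                 score[enrolled][:, None]).argmin(axis=1)
matched = controls[nearest]
matched_effect = (salary[enrolled] - salary[matched]).mean()

print(f"naive difference:   {naive:.2f}")     # well above the true effect of 3
print(f"matched difference: {matched_effect:.2f}")  # much closer to 3
```

Matching only removes bias from confounders that are actually observed; if ability were unmeasured, the remaining methods Antonakis et al. list (instrumental variables, regression discontinuity, difference-in-differences) would be needed instead.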