Quantitative Assessment of Students’ Revision Processes
Author(s) -
Lisa R. Volpatti,
Alex J. Hanson,
Jennifer Schall,
Jesse Dunietz,
Amanda Chen,
Rohan Chitnis,
Eric J. Alm,
Alison Takemura,
Diana M. Chien
Publication year - 2020
Publication title -
2020 ASEE Virtual Annual Conference Content Access Proceedings
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--35117
Subject(s) - rubric, technical communication, professional communication, coaching, graduate students, engineering education, pedagogy
Abstract - Communication is a crucial skill set for engineers, yet graduates [1]–[3] and their employers [4]–[8] continue to report a lack of preparation for effective communication upon completion of undergraduate or graduate programs. Thus, technical communication training merits deeper investigation and creative solutions. At the 2017 ASEE Meeting, we introduced the MIT School of Engineering Communication Lab, a discipline-specific technical communication service that is akin to a writing center but embedded within engineering departments [9]. By drawing on the expertise of graduate student and postdoctoral peer coaches within a given discipline, the Communication Lab provides a scalable, content-aware solution with the benefits of just-in-time, one-on-one [10], and peer [11] training. When we first introduced this model, we offered easy-to-record metrics of the Communication Lab's effectiveness (such as usage statistics and student and faculty opinion surveys), as are commonly used to assess writing centers [12], [13]. Here we present a formal quantitative study of the effectiveness of Communication Lab coaching. We designed a pre-post test study of two related tasks: personal statements for applications to graduate school and graduate fellowships. We designed an analytic rubric with seven categories (strategic alignment, audience awareness, context, evidence, organization/flow, language mechanics, and visual impact) and tested it to ensure inter-rater reliability. Over one semester, we collected and anonymized 119 personal statement drafts from 47 unique Communication Lab clients across four engineering departments. Peer coaches scored the drafts with the rubric, and we developed a statistical model based on maximum likelihood to identify significant score changes in individual rubric categories across trajectories of sequential drafts. In addition, post-session surveys of clients and their peer coaches provided insight into clients' qualitative experiences during coaching sessions. Taken together, our quantitative and qualitative findings suggest that our peer coaches are most effective in supporting organization/flow, strategic alignment, and the use of appropriate evidence; this aligns with our program's emphasis on supporting high-level communication skills. Our results also suggest that a major factor in coaching efficacy is coach-client discussion of major takeaways from a session: rubric category scores were more likely to improve across a drafting trajectory when a category had been identified as a takeaway. Hence, we show quantitative evidence that, through collaborative conversations, technical peer coaches can guide clients to identify and effectively revise key areas for improvement. Finally, because we have gathered a sizable dataset and developed analytical tools, we have laid the groundwork for future quantitative writing assessments by both our program and others. We argue that although inter-rater variability poses a challenge, statistical methods and skill-based assessments of authentic communication tasks can provide both insights into students' writing and revision abilities and direction for improving communication resources.
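The abstract does not specify the maximum-likelihood model used to detect score changes across draft trajectories. As a minimal, purely illustrative sketch of one way such a test could be set up (a Gaussian likelihood-ratio test for a linear trend across sequential drafts; the function name and data below are hypothetical and not drawn from the paper):

import numpy as np
from scipy import stats

def trend_lr_test(scores):
    """Likelihood-ratio test: constant mean vs. linear trend across drafts.

    Illustrative only; not the model used in the study.
    """
    y = np.asarray(scores, dtype=float)
    t = np.arange(len(y))
    n = len(y)
    # Null model: the category score has a constant mean across drafts.
    rss0 = np.sum((y - y.mean()) ** 2)
    # Alternative model: the score follows a linear trend over draft order.
    slope, intercept = np.polyfit(t, y, 1)
    rss1 = np.sum((y - (intercept + slope * t)) ** 2)
    # With Gaussian errors and the variance profiled out,
    # 2 * (loglik1 - loglik0) = n * log(rss0 / rss1), which is
    # asymptotically chi-squared with 1 degree of freedom (the slope).
    lr = n * np.log(rss0 / rss1)
    p_value = stats.chi2.sf(lr, df=1)
    return slope, p_value

# Hypothetical trajectory: one client's organization/flow scores over four drafts.
slope, p = trend_lr_test([2, 3, 3, 4])
print(f"slope={slope:.2f}, p={p:.3f}")

With so few drafts per client, the asymptotic chi-squared approximation is rough; a pooled model across many trajectories, as the study's dataset of 119 drafts would permit, is the more realistic setting.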
Introduction

One of the greatest gaps in engineering education is the development of communication skills: degree accreditation agencies and employers alike identify communication as one of the most crucial skills [14]–[18], yet most graduates feel unprepared for the demands of professional communication [3], [18]. To fill this gap, educational programs have often adopted curricular interventions such as technical communication courses or communication tasks embedded within technical courses [19]–[21]. However, writing centers (co-curricular interventions that provide students with just-in-time support throughout their training) have been both underused and much less studied [9]. We previously introduced the Communication Lab (Comm Lab), an adaptation of the writing center model specifically for STEM contexts, which originated in 2012 in a single department at the Massachusetts Institute of Technology (MIT) [9], [22]. By training STEM graduate students and postdocs as peer coaches, the model leverages the educational benefits of peers' first-hand experience with communication in the discipline [23]–[26], learning through authentic tasks [27]–[29], and just-in-time support. We described the Comm Lab's original implementation within several MIT engineering departments in [9]. Subsequently, we compared its adaptations across several different technical and liberal-arts institutions in [22]. Our first publication underlined the affordability and flexibility of a peer coaching model, in contrast to a one-time curricular intervention. Likewise, our second publication highlighted the adaptability of the Comm Lab model to different institutional constraints and needs (e.g., service to undergraduates only versus both undergraduate and graduate students). Indeed, adaptation to local conditions is a core tenet of the model, and its success is demonstrated by the Comm Lab's continued growth across both MIT departments and external institutions.

The Communication Lab's core pedagogical approach

The Comm Lab's coaching model emphasizes self-analysis and the incorporation of feedback through revision. An appointment with a Comm Lab coach encourages the client to take an active role in analyzing their work and proposing solutions; the coach facilitates by asking open-ended questions and acting as a proxy for the client's eventual, technical audience. A typical appointment of 30–60 minutes proceeds as follows:

1. The client and coach discuss the intended audience for the communication task and the client's own strategic goals.
2. The coach suggests an activity that will help the client analyze their own work (such as distilling the three most important points they wish to convey), while the coach reviews the work.
3. The coach focuses first on reviewing high-level communication choices like argument and structure, but also assesses the client's success in executing these according to field-specific expectations: e.g., is the logical flow of an argument technically sound?
4. Following assessment, the coach and client discuss the communication issues identified, compare examples from the field (which may include the coach's own experiences), and model and practice strategies for revision.
5. The coach ensures that the client identifies priorities for revision on their own.

In short, during a session the coach models for the client a process for both high-level analysis and practical revision. Research on writing centers confirms numerous benefits of such peer learning experiences, including increased writer satisfaction, improved writing and revision processes, and improved course outcomes [30]. Empirical research likewise highlights the advantage of peers with disciplinary knowledge, who can address both rhetoric and content by, for example, challenging students' technical claims and evidence [23].
In other words, a "knowledgeable peer" [31] offers a combination of social-emotional, communication, and technical support.

Our aims in designing a quantitative and qualitative study of the Communication Lab

In this study, our primary research question was: is the Comm Lab succeeding in improving clients' work according to our own metrics of success? That is, do sessions bring clients closer to our standards for a given communication task, standards informed by both rhetorical principles and real-world field expectations? To answer this question, we designed a quantitative, rubric-based, pre-post evaluation of authentic writing products (drafts for graduate school and graduate fellowship applications), assessed by authentic evaluators: a team of our own peer coaches. To build a broader picture of the client's analytical and reflective experience, we complemented the quantitative core of the study with qualitative reflections on the content of each coaching session. Overall, we argue that our study design provides useful qualitative and quantitative information about the effectiveness of the Comm Lab, despite the many limitations inherent in writing assessments. Writing studies experts agree that writing assessments are challenging: whether quantitative or qualitative, of writing centers in particular or of the writing process more broadly, it is difficult to design direct, authentic assessments that concretely demonstrate student success or growth [12], [32]. Our past publications [9], [22] offered typical indirect measures used by writing centers, such as repeat visits, client self-assessment, and faculty testimonials. While useful for program justification, such indirect metrics are clearly limited in their ability to concretely evaluate student growth [12], [13], [33]. Direct assessments are complicated by three considerations: validity, reliability, and ethical limitations on truly scientific study design. Validity asks: does the assessment measure what it is supposed to measure? Reliability asks: can writing be consistently and quantitatively evaluated by different evaluators? Finally, ethics forbid writing centers from executing the classic "treatment/no treatment" experimental design: true negative controls would require denying writing center access to students who want it. Due to these three constraints, "the typical evaluation of writing programs...usually fails to obtain statistically significant results" [34]. For this reason, since roughly the 1990s, research on writing assessment, and especially writing center assessment, has focused on qualitative studies, despite the advantages of a quantitative pre-post test design [26]. Nonetheless, we designed our study to maximize validity and reliability within these constraints by addressing the most important concerns and recommendations about both. First, most concerns about validity revolve around…
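Because inter-rater reliability is central to the rubric design described above, a brief sketch of one standard way to quantify it may be useful. The paper does not state which statistic it used; the following assumes quadratic-weighted Cohen's kappa (via scikit-learn's cohen_kappa_score) and entirely hypothetical scores:

from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric scores (1-5 scale) from two coaches rating the same
# ten drafts on a single rubric category; the data are illustrative only.
rater_a = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5]
rater_b = [3, 4, 3, 5, 2, 4, 4, 2, 4, 5]

# Quadratic weights penalize large disagreements more heavily than
# near-misses, which suits ordinal rubric scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Quadratic-weighted kappa: {kappa:.2f}")

Values near 1 indicate strong agreement between raters; values near 0 indicate agreement no better than chance, which would signal that the rubric categories need clearer descriptors or further rater calibration.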