Automated Grading of Access® Databases Using the Matlab® Database Toolbox
Author(s) - Curtis Cohenour, Audra Anjum
Publication year - 2018
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--27647
Subject(s) - toolbox , database , computer science , grading (engineering) , summative assessment , formative assessment , coding (social sciences) , alphanumeric , relational database , programming language , mathematics education , engineering , statistics , civil engineering , mathematics
This evidence-based paper describes an automated method of grading Microsoft Access® databases using the MATLAB® Database Toolbox. A total of 70 students completed an average of 50 individual database assignments per week over a five-week portion of a semester. The assignments were from the Benchmark Series Microsoft® Access® 2013 (BM) text. The course schedule covered two chapters every week. Students were required to import and manipulate either raw text data or a partially completed database to complete each assignment. Students were given unique starter data to discourage academic dishonesty. The automated grader compared each student database submission with the starter data and a truth database. In addition, the database relations, keys, and data types were compared to the truth database and graded. Graded assignments were returned to students within one day of submission, eliminating a potential bottleneck in instructor feedback. The use of the grader allows for increased formative assessment opportunities rather than a reliance on traditional, summative examinations. Automated grading offers a flexible and reliable tool for instructors to provide objective, detailed, and timely feedback to students. The features of the automated grader, current outcomes, and future directions are discussed.

Introduction

An automated grader was developed using the MATLAB® Database Toolbox for grading the Access® portion of a freshman-level Enterprise Computing Course (ECC) in Engineering Technology and Management (ETM) at Ohio University. The ECC is a freshman-level class that all students entering the ETM program must take and pass as a prerequisite for subsequent ETM courses. The class is taught online and is approximately 50% Microsoft Excel, 40% Access, and 10% other Microsoft Office products, including PowerPoint and Visio. Database skills are necessary for subsequent courses that cover Microsoft Structured Query Language (SQL) in detail.
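The comparison step described above rests on the Database Toolbox's ability to open Access® files and fetch tables. A minimal sketch of that step might look as follows; the data source names, table name, and column name here are illustrative assumptions, not the actual grader's code, and they presume Access® ODBC data sources are registered on the grading machine:

```matlab
% Sketch (assumed names): open a student submission and the instructor's
% truth database through ODBC data sources, then fetch the same table
% from each so the contents can be compared.
connStudent = database('StudentAccessDSN', '', '');  % DSN pointing at the student's .accdb
connTruth   = database('TruthAccessDSN',   '', '');  % DSN pointing at the truth .accdb

% Pull the same table from both; ORDER BY makes row order deterministic.
studentRows = fetch(connStudent, 'SELECT * FROM Customers ORDER BY CustomerID');
truthRows   = fetch(connTruth,   'SELECT * FROM Customers ORDER BY CustomerID');

% The table is fully correct when every row and column agrees with the truth.
tableMatches = isequal(studentRows, truthRows);

close(connStudent);
close(connTruth);
```

Schema-level checks (keys, relationships, data types) can be approached in a similar spirit by inspecting the column metadata returned with each result set rather than the data values themselves.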
Access® introduces the students to tables, keys, relationships, and queries, which are used in the subsequent classes. The main motivation for developing the grader was to provide detailed, timely, and objective feedback on a high volume of gradable material to students in a large online class. The promptness of the feedback was crucial, as it allowed students to reflect on their errors, take corrective actions, and fill knowledge gaps well in advance of summative assessments. The weekly output of gradable assignments for each of the 70 enrolled students typically covered two chapters and included ten Access® databases with approximately 60 tables, queries, reports, and relationship diagrams. Students submitted these databases and printouts (in PDF format) using the Box file sharing service (Box Inc., n.d.). The automated grader allowed the instructor to grade and return all the assignments within a day of submission. Additionally, the automated grader maintains a record of individual performance and provides indicators that identify top performers in the course who may do well in the advanced courses in the ETM program. Before the development of the automated grader, managing the grading was challenging for ECC instructors, and students could not receive quality feedback promptly. Because of the sheer volume of items to be graded, instructors were forced to find alternatives to full manual grading. Methods tried for this course include: a) grading one submission at random per week, b) using highly specific quizzes to elicit correct/incorrect values, c) using the BM publisher’s assessment tool, SNAP, and d) relying on summative assessments only (no grading of homework assignments). In the random selection method, a graduate student or teaching assistant reviewed a single database submitted by each student per week.
The submission was considered complete only if all the components were included, such as relationships, tables, queries, reports, and forms. Although the method was designed to decrease grading time, it is not amenable for use in a large class: manual grading still demands the focus and availability of human graders, the evaluation of individual components of each assignment, and additional checks for completeness and academic dishonesty, across many students. In addition to a slow turnaround time, human graders may not be able to provide consistent, objective feedback and specific suggestions for improved learning outcomes for each student. Developing quizzes that elicit specific information from completed assignments assumes that students have completed the required work in order to perform well on the quizzes. These quizzes are multiple choice and are graded automatically. Given that information related to the database assignments is freely available online (for example, cramster.com), there is no objective way to uphold student accountability unless the quizzes are proctored. Proctoring requires extra time and resources from the ECC teaching team. Using the publisher’s assessment tool SNAP provides options for automated grading but presents a unique set of disadvantages. First, it requires students to purchase an electronic pass, which increases their financial burden. Second, SNAP covers only part of the topics, meaning students are not exposed to all the learning activities outlined in the course objectives. Third, the SNAP tool is designed in a way that allows students to skip directly to the assessment portion without visiting the learning content. Students can consult online resources in real time to pass the exams without mastering the learning material or activities. Finally, the SNAP system requires the student to complete each exercise in small steps.
The student is not allowed to make mistakes, discover the errors, and make corrections. The final possibility is to simply not grade the assignments and rely solely on summative assessments. The problem with this approach is that students learn to cram for an exam, not how to use Access®. The method described in this paper addresses all of the concerns of the alternate methods. The use of automated grading relieves the instructor of a heavy grading load. The automated grader evaluates all student submissions and provides specific and timely feedback. Students do not feel cheated if they do 90% of the work correctly but make mistakes on the one item that is graded. Students are free to make mistakes in one step and then backtrack and correct the errors as they are discovered. Students lose (at least partially) the incentive to game the system by using plagiarized data or cramming for exams.
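The fairness point above, that a student who completes 90% of the work should not lose all credit, suggests per-row partial scoring when comparing a submission against the truth database. The helper below is a hedged sketch of one way to compute such a score; the function name, signature, and scoring rule are illustrative assumptions, not the paper's actual implementation:

```matlab
function pct = gradeTable(connStudent, connTruth, sqlQuery)
% gradeTable  Fraction of rows in one table or query that match the truth data.
%   connStudent and connTruth are open Database Toolbox connections; sqlQuery
%   should include an ORDER BY clause so rows line up deterministically.
%   (Illustrative sketch; names and scoring rule are assumptions.)
studentRows = fetch(connStudent, sqlQuery);
truthRows   = fetch(connTruth,   sqlQuery);
nTruth = size(truthRows, 1);
nMatch = 0;
for k = 1:min(size(studentRows, 1), nTruth)
    if isequal(studentRows(k, :), truthRows(k, :))
        nMatch = nMatch + 1;   % row k agrees with the truth data
    end
end
pct = nMatch / max(nTruth, 1); % partial credit: matching rows / expected rows
end
```

Scoring each table, query, and relationship independently and summing the results lets a mostly correct submission earn mostly full credit, while still pinpointing for the student exactly which component failed.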