
What Types of Automated Tests do Developers Write?
Author(s) -
Marko Ivanković,
Luka Rimanić,
Ivan Budiselić,
Goran Petrović,
Gordon Fraser,
René Just
Publication year - 2025
Publication title -
2025 IEEE/ACM International Conference on Automation of Software Test (AST)
Language(s) - English
Resource type - Conference proceedings
eISSN - 2833-9061
ISBN - 979-8-3315-0179-2
DOI - 10.1109/ast66626.2025.00015
Subject(s) - computing and processing
Abstract -
Software testing is a widely adopted quality assurance technique that assesses whether a software system meets a given specification. The overall goal of software testing is to develop effective tests that capture desired program behaviors and reveal defects. Automated software testing is an essential part of modern software development processes, in particular those that focus on continuous integration and deployment. Existing test classifications (e.g., unit vs. integration vs. system tests) and testing best practices offer general conceptual frameworks, but instantiating these conceptual models requires a definition of what is considered a unit, or even a test. These conceptual models are rarely made explicit in the literature or documentation, which makes interpretation and generalization of results (e.g., comparisons of unit and integration testing efficacy) difficult. Additionally, comparatively little is known about how developers operationalize software testing in modern industrial contexts, how they write and automate software tests, and how well those tests fit into existing classifications. Since software engineering processes have substantially evolved, it is time to revisit and refine test classifications to support future research on software testing efficacy and best practices. This is especially important with the advent of AI-generated test code, where such classifications may be used to automatically classify the types of generated tests or to formulate the desired test output.

This paper presents a novel test classification framework, developed using insights and data on what types of tests developers write in practice. The data was collected in an industrial setting at Google and involves tens of thousands of developers and tens of millions of tests. The developed classification framework is precise enough that it can be encoded in an automated analysis.
We describe our proof-of-concept implementation and report on the development approach and costs. We also report on the results of applying the automated classification to all tests in Google’s repository and on what types of automated tests developers write.