Why don't we publish more TDD research papers?
Author(s) - Jeff Offutt
Publication year - 2018
Publication title - Software Testing, Verification and Reliability
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.216
H-Index - 49
eISSN - 1099-1689
pISSN - 0960-0833
DOI - 10.1002/stvr.1670
Subject(s) - agile software development , test-driven development , software engineering , computer science , test suite , test oracle , code refactoring , test case , software development , scrum
This issue contains two papers on practical software testing problems. Random or Evolutionary Search for Object-Oriented Test Suite Generation?, by Sina Shamshiri, Jose Miguel Rojas, Luca Gazzola, Gordon Fraser, Phil McMinn, Leonardo Mariani, and Andrea Arcuri, asks whether evolutionary search can find tests to cover more branches than random search can. (Recommended by Lu Zhang.) PESTO: Automated Migration of DOM-based Web Tests towards the Visual Approach, by Maurizio Leotta, Andrea Stocco, Filippo Ricca, and Paolo Tonella, presents a tool that transforms old-style DOM-based automated tests in Selenium into more modern visual image recognition-based tests in Sikuli. (Recommended by Rob Hierons.)

One of the most important changes in the software engineering industry over the last decade has been the emergence and growth of agile processes. The agile process that has the most effect on our field is test-driven development (TDD), which is being adopted by more companies every week.

TDD puts testing "front and center" by using automated tests to replace functional requirements. TDD asks the engineer to define the initial behavior of the software with a test case. Each automated TDD test includes input values and a desired response encoded in an assertion. The desired response, or behavior, replaces the test oracle in more traditional tests. Since a TDD test expresses desired behavior, it initially fails. Then the engineer writes just enough software to allow the test to pass; that is, the engineer implements the desired behavior. After the latest test passes, the engineer should refactor the software by cleaning up the design and structure so that the software behaves identically but is easier to modify in subsequent rounds. This process of writing a test, writing new code, and refactoring the existing code repeats until the engineer is satisfied with the software's behavior. (A sketch of one such cycle appears below.)

TDD inventors, users, and advocates suggest many benefits of this approach, which you can read about elsewhere. I simply want to make you aware that TDD is in widespread and growing use. If nothing else, testing researchers, educators, and practitioners should understand how TDD works.

However, I'm not sure that knowledge of TDD is widespread in the research community. At the very least, major software testing venues are publishing few papers about TDD. Although a search of IEEE Xplore turns up a few papers here and there, I found no papers at ICST 2018, ICSE 2018, or ISSTA 2018 with TDD, test-driven, agile, or refactor in the title. Nor have any such papers been submitted to STVR in the last 5 years.

Despite this lack of research (or perhaps because of it), software testers face real problems in applying TDD in practice. Perhaps the most widely discussed problem is that TDD tests are not particularly good at functionally testing the software. They tend to be "happy path" tests, focusing more on what the software should do under normal conditions and less on what the software should not do, or what it should do in unusual conditions. Not surprisingly, in small-scale studies, my students have found that TDD tests usually have very low coverage on the software as released. In a similar vein, TDD tests tend to focus on the unit or integration levels and often are less useful at the system level. This leads to an important research question: can we leverage TDD tests to develop good system tests? In fact, engineers often struggle to write good TDD tests in the first place, thus hampering development.
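To make both the cycle and the happy-path concern concrete, here is a minimal sketch of one red-green-refactor round in JUnit 5. The names (ShippingCalculator, costFor) are invented for illustration and come from no paper in this issue.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShippingCalculatorTest {
    // Step 1 (red): written first; fails because ShippingCalculator
    // does not exist yet. The assertion encodes the desired behavior,
    // taking the place of a traditional test oracle.
    @Test
    void standardOrderShipsForFlatRate() {
        ShippingCalculator calc = new ShippingCalculator();
        assertEquals(5.00, calc.costFor(2), 0.001);  // 2 items, flat rate
    }
}

// Step 2 (green): just enough code to make the test pass.
class ShippingCalculator {
    double costFor(int items) {
        return 5.00;  // hard-coding passes the only test we have
    }
}

// Step 3 (refactor): clean up design without changing behavior,
// then repeat the cycle with the next test.

Note that nothing in this sketch says what costFor(0) or costFor(-1) should do; tests for those cases would have to be added deliberately, which is exactly the gap the happy-path criticism points at.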
And sometimes the inputs are well thought out, but the behavior (usually encoded in assertions) is not. Thus, passing the test does not really mean that the feature the test describes is correctly, or even fully, implemented. Practitioners also often do not apply TDD as intended, doing things such as skipping the refactoring step, writing insufficient TDD tests, and adding functionality that is not required by, or in response to, a TDD test. I'm sure this just scratches the surface, and there are many additional research problems. These problems are not just interesting from an intellectual perspective; they are important to practitioners. That is, solutions can help real software engineers build better software. I hope this essay will encourage readers to work on some of these important TDD problems.

I have one more plea, for educators. For the good of software testing, for the good of your students, and for the good of all software, please find a way to infuse TDD into your courses. Modern introductory programming courses now universally include test automation (using JUnit or one of its variants). If you teach one of those courses, have your students go through at least one TDD exercise. It's a great exercise in a lab with small teams, where they can go through one test cycle in 5 or 10 minutes. If you teach a general software engineering class, make sure your students practice TDD there as well.
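For such a lab, a compact exercise can also expose the weak-assertion problem raised above. The sketch below is hypothetical (the Discount class and its apply method are invented): both tests run against the same deliberately wrong implementation, but only the weak one passes.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class DiscountTest {
    // Weak: passes as long as *some* value comes back, even though
    // the discount computation below is wrong.
    @Test
    void weakAssertionChecksOnlyThatSomethingIsReturned() {
        assertNotNull(Discount.apply(100.0, 0.10));
    }

    // Strong: encodes the intended behavior, so it fails against the
    // buggy implementation and forces a real fix.
    @Test
    void strongAssertionPinsDownTheBehavior() {
        assertEquals(90.0, Discount.apply(100.0, 0.10), 0.001);
    }
}

class Discount {
    // Deliberately buggy "green" code: ignores the rate entirely.
    static Double apply(double price, double rate) {
        return price;
    }
}

Students who watch the weak test stay green while the strong test fails see immediately why "the tests pass" is not the same claim as "the feature works."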
