Reimagining Unit Test Generation with AI: A Journey from Evolutionary Models to Transformers
Author(s) - Sintayehu Zekarias Esubalew, Beakal Gizachew Assefa
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3597049
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
The rapid evolution of software development demands efficient and scalable unit testing methodologies to ensure software reliability. Traditional manual test case generation is time-consuming and often inadequate for modern agile workflows. Artificial Intelligence (AI) has emerged as a transformative solution, automating test case generation while optimizing coverage and fault detection. This paper presents a comprehensive review of AI algorithms for unit test case generation, categorizing them into traditional machine learning (e.g., genetic algorithms, SVMs), deep learning (e.g., RNNs, CNNs, GNNs), and transformer-based approaches. We introduce a novel taxonomy to classify these methods based on their underlying principles, highlighting their strengths and limitations in terms of effectiveness (bug detection, coverage), usability, maintainability, and advanced capabilities (e.g., explainability, assertion generation). Our analysis reveals that transformer-based models, enhanced by parameter-efficient fine-tuning (PEFT) techniques like LoRA and adapters, outperform traditional methods in generating syntactically valid and semantically meaningful test cases. However, challenges persist in scalability, computational cost, and assertion accuracy. We critically evaluate state-of-the-art tools (e.g., A3Test, AthenaTest, ChatUnitTest) and propose future directions to bridge gaps in test signature verification, integration with CI/CD pipelines, and adaptive learning for complex codebases. This review serves as a roadmap for researchers and practitioners aiming to leverage AI for automated, high-quality unit testing.
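The abstract's first category, evolutionary test generation via genetic algorithms, can be illustrated with a minimal sketch. The function under test (`classify`), the fitness measure (branch coverage), and all parameter values below are illustrative assumptions, not from the paper: a population of candidate input suites is mutated across generations, and suites that exercise more branches survive.

```python
import random

def classify(x: int) -> str:
    # Toy function under test with three branches.
    if x < 0:
        return "negative"
    if x % 2 == 0:
        return "even"
    return "odd"

def coverage(inputs):
    # Fitness: number of distinct branches the input suite exercises (max 3).
    return len({classify(x) for x in inputs})

def evolve_test_inputs(pop_size=20, suite_size=3, generations=50, seed=0):
    """Evolve suites of test inputs toward full branch coverage
    using selection of the fittest half plus single-point mutation."""
    rng = random.Random(seed)
    population = [[rng.randint(-10, 10) for _ in range(suite_size)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=coverage, reverse=True)
        if coverage(population[0]) == 3:  # all branches covered, stop early
            return population[0]
        survivors = population[:pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            # Mutation: replace one input with a fresh random value.
            child[rng.randrange(suite_size)] = rng.randint(-10, 10)
            children.append(child)
        population = survivors + children
    return max(population, key=coverage)

best = evolve_test_inputs()
print(coverage(best))  # typically 3: all branches of classify() covered
```

Transformer-based approaches reviewed in the paper replace this search loop with a model that generates test code directly; the coverage metric, however, remains a common evaluation signal in both families of methods.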
