Optimizing Test Case Prioritization with Meta Deep Reinforcement Learning in Continuous Integration
Author(s) -
Nahlah A. AlRakban,
Mubarak Alrashoud,
M. Abdullah-Al-Wadud
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3617387
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Software developers use Continuous Integration (CI) environments to reduce integration issues and expedite development cycles. Regression testing is an important part of CI practice, as it involves re-executing all test cases to ensure system stability after code changes. However, as test suites grow, this process becomes increasingly resource-intensive and time-consuming. While many Test Case Prioritization (TCP) techniques have been proposed to address this challenge, previous approaches often rely on static configurations and lack the adaptability needed to handle the dynamic nature of CI environments and varying dataset complexities. To address these gaps, this study presents a novel TCP framework based on Deep Reinforcement Learning (DRL), integrating a pairwise ranking model with state-of-the-art DRL algorithms, including A2C, DQN, PPO, and TRPO. The proposed framework improves prioritization accuracy and execution efficiency, particularly when combined with an optimal cycle count strategy. An adaptive training framework based on Meta-Deep Reinforcement Learning (meta-DRL) was introduced to further enhance adaptability. This component allows the DRL agent to assess its performance during training and dynamically adjust key hyperparameters, thereby improving its ability to learn effective prioritization strategies over time. Finally, the results of the proposed methodology demonstrate that meta-DRL reduces training time by 60% compared to existing approaches. These findings show the efficiency of meta-DRL-based TCP in providing a scalable and adaptive solution for enhancing regression testing in CI environments.
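To illustrate the pairwise ranking idea at the heart of such TCP approaches, the toy sketch below uses a perceptron-style pairwise update over two hypothetical test-case features (recent failure rate, last execution time). This is a minimal illustration under assumed features and a linear scorer, not the paper's actual DRL model or reward design:

```python
# Toy pairwise-ranking sketch of Test Case Prioritization (TCP).
# Assumption: each test case is a feature tuple (recent_failure_rate, last_exec_time),
# and a linear scorer stands in for the paper's DRL-based ranking agent.

def score(weights, test):
    """Priority score of a test case: a simple weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, test))

def train(pairs, lr=0.1, epochs=50):
    """Perceptron-style pairwise update: whenever the test that should run
    first is not scored above its partner, nudge the weights toward the
    feature difference (better - worse)."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for better, worse in pairs:  # 'better' should be executed earlier
            if score(weights, better) <= score(weights, worse):
                weights = [w + lr * (b - c)
                           for w, b, c in zip(weights, better, worse)]
    return weights

# Synthetic history: recently failing tests have a high failure rate and
# should be prioritized so regressions surface early in the CI cycle.
failing = [(0.9, 0.2), (0.8, 0.5)]
passing = [(0.1, 0.3), (0.05, 0.7)]
pairs = [(f, p) for f in failing for p in passing]

w = train(pairs)
suite = failing + passing
ranked = sorted(suite, key=lambda t: score(w, t), reverse=True)
print(ranked[:2])  # the historically failing tests are scheduled first
```

A full DRL formulation would replace the linear scorer with a policy network trained against a reward derived from detected failures per cycle, and a meta-DRL layer would additionally adjust hyperparameters such as the learning rate between CI cycles.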