Controlling Functional Complexity for Overfitting Reduction and Improved Interpretability in GP
Author(s) -
Sara Silva,
Ines Magessi,
Leonardo Vanneschi
Publication year - 2025
Publication title -
IEEE Transactions on Evolutionary Computation
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 3.463
H-Index - 180
eISSN - 1941-0026
pISSN - 1089-778X
DOI - 10.1109/tevc.2025.3614086
Subject(s) - computing and processing
Like other machine learning methods, Genetic Programming (GP) frequently faces the issue of overfitting when applied to supervised learning tasks. Traditional regularization techniques, though well studied, are challenging to apply to GP due to the free-form nature of the evolved models. This work proposes a novel approach that prevents overfitting while inherently improving the interpretability of GP models. It involves a dual optimization process that minimizes loss while penalizing functional complexity using multi-objective selection mechanisms. The improved complexity measure used in this study approximates the mathematical curvature of a function in linear time. While loss minimization is common in GP, penalizing functional complexity is an additional step aimed at evolving robust and smooth functions, less prone to overfitting and potentially more interpretable. Experimental results demonstrate the effectiveness of the two variants of our method, benchmarked against standard GP and two of the strongest overfitting-reduction methods reported in the literature. By focusing on both loss and complexity, our approach achieves state-of-the-art generalization on difficult problems and strong feature selection that improves interpretability, making it a unified improvement to GP.
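To make the idea of a linear-time, curvature-based complexity penalty concrete, the following sketch pairs a loss objective with a smoothness objective. The paper's exact measure is not reproduced in this record, so the second-difference proxy below (sum of absolute second differences of predictions along one sorted feature, which approximates integrated curvature in O(n) time) is an illustrative assumption, as are the function and parameter names.

```python
import numpy as np

def curvature_complexity(model, X, feature=0):
    """Linear-time proxy for functional complexity: the sum of absolute
    second differences of the model's predictions along one sorted feature.
    Smooth responses give values near zero; wiggly ones give large values.
    (Illustrative only; not the paper's exact measure.)"""
    order = np.argsort(X[:, feature])   # sort inputs along the chosen axis
    preds = model(X[order])
    # Second differences approximate the curvature of the response curve.
    return float(np.abs(np.diff(preds, n=2)).sum())

def objectives(model, X, y):
    """Return the (loss, complexity) pair that a multi-objective
    selection mechanism (e.g., Pareto-based) would minimize jointly."""
    preds = model(X)
    loss = float(np.mean((preds - y) ** 2))      # objective 1: training loss
    complexity = curvature_complexity(model, X)  # objective 2: smoothness penalty
    return loss, complexity
```

Under this proxy, a linear model incurs essentially zero complexity while an oscillating one is heavily penalized, so selection pressure favors smooth candidates at comparable loss:

```python
X = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
linear = lambda X: 2.0 * X[:, 0]
wiggly = lambda X: np.sin(20.0 * X[:, 0])
assert curvature_complexity(linear, X) < curvature_complexity(wiggly, X)
```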
Accelerating Research
Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom