Open Access
A new formulation of gradient boosting
Author(s) -
Alex Wozniakowski,
Jayne Thompson,
Mile Gu,
Felix C. Binder
Publication year - 2021
Publication title -
Machine Learning: Science and Technology
Language(s) - English
Resource type - Journals
ISSN - 2632-2153
DOI - 10.1088/2632-2153/ac1ee9
Subject(s) - boosting (machine learning), gradient boosting, stacking, regression, computer science, calibration, artificial intelligence, machine learning, algorithm, pattern recognition (psychology), mathematics, statistics, physics, random forest, nuclear magnetic resonance
In the setting of regression, the standard formulation of gradient boosting generates a sequence of improvements to a constant model. In this paper, we reformulate gradient boosting so that it generates a sequence of improvements to a nonconstant model, which may encode prior knowledge or physical insight about the data-generating process. Moreover, we introduce a simple variant of multi-target stacking that extends our approach to multi-target regression. An experiment on a real-world superconducting quantum device calibration dataset demonstrates that our approach outperforms the state-of-the-art calibration model despite receiving only a small number of training examples. Further, it significantly outperforms the well-known gradient boosting algorithm LightGBM, as well as an entirely data-driven reimplementation of the calibration model, which supports the viability of our approach.
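The core idea, boosting a sequence of improvements onto a nonconstant base model rather than a constant one, can be illustrated with off-the-shelf tools. Below is a minimal sketch assuming scikit-learn, whose GradientBoostingRegressor exposes an init parameter for the initial estimator; the linear base model and synthetic dataset are hypothetical stand-ins for the paper's calibration model and quantum-device data, not the authors' actual setup.

```python
# Minimal sketch of boosting from a nonconstant base model, assuming
# scikit-learn. The linear "prior" model and synthetic data below are
# hypothetical placeholders, not the authors' calibration model or dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: a mostly linear signal plus a nonlinear residual.
X = rng.normal(size=(500, 5))
y = (X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0])
     + np.sin(3 * X[:, 0])
     + rng.normal(scale=0.1, size=500))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standard gradient boosting: the initial model is a constant (the mean).
standard = GradientBoostingRegressor(random_state=0)
standard.fit(X_train, y_train)

# Boosting from a nonconstant model: the linear estimator stands in for
# prior knowledge about the data-generating process, and the boosting
# stages learn a sequence of improvements to it via the `init` hook.
prior = LinearRegression()
boosted_prior = GradientBoostingRegressor(init=prior, random_state=0)
boosted_prior.fit(X_train, y_train)

print("constant init    R^2:", standard.score(X_test, y_test))
print("nonconstant init R^2:", boosted_prior.score(X_test, y_test))
```

For the multi-target setting, the stacking variant described in the abstract could plausibly be approximated by feeding first-stage per-target predictions to second-stage models as additional features, though the exact construction is specified in the paper itself.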
