A translucent box: interpretable machine learning in ecology
Author(s) - Tim C. D. Lucas
Publication year - 2020
Publication title - Ecological Monographs
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 4.254
H-Index - 156
eISSN - 1557-7015
pISSN - 0012-9615
DOI - 10.1002/ecm.1422
Subject(s) - interpretability, machine learning, artificial intelligence, black box, computer science, ecology, variable (mathematics), interpretation (philosophy), mathematics, biology, mathematical analysis, programming language
Machine learning has become popular in ecology, but its use has remained restricted to predicting, rather than understanding, the natural world. Many researchers consider machine learning algorithms to be a black box. These models can, however, with careful examination, be used to inform our understanding of the world. They are translucent boxes. Furthermore, the interpretation of these models can be an important step in building confidence in a model or in a specific prediction from a model. Here I review a number of techniques for interpreting machine learning models at the level of the system, the variable, and the individual prediction, as well as methods for handling non-independent data. I also discuss the limits of interpretability for different methods and demonstrate these approaches using a case study of understanding litter sizes in mammals.
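
As a concrete illustration of the variable-level interpretation the abstract refers to, the sketch below computes permutation importance: each predictor is shuffled in turn, and the drop in held-out model score measures how much the model relies on it. This is a minimal sketch on synthetic data, not the paper's analysis of the mammalian litter-size case study; the random-forest model and the trait names are assumptions made purely for illustration.

# Minimal sketch of variable-level interpretation via permutation
# importance. Synthetic data stands in for the paper's litter-size data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 "species", three hypothetical traits.
# The response depends on trait_a and trait_b but not trait_c.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle one variable at a time and record how much the held-out
# R^2 degrades; a large drop marks a variable the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=30, random_state=0)
for name, mean, std in zip(["trait_a", "trait_b", "trait_c"],
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")

In this toy example, trait_a and trait_b show large score drops while trait_c stays near zero, which is the kind of variable-level reading the review discusses alongside system-level and single-prediction methods.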
