Open Access
Low-resource finetuning of foundation models beats state-of-the-art in histopathology
Author(s)
Benedikt Roth,
Valentin Koch,
Sophia J. Wagner,
Julia A. Schnabel,
Carsten Marr,
Tingying Peng
Publication year: 2024
To handle the large scale of whole slide images in computational pathology, most approaches first tessellate the images into smaller patches, extract features from these patches, and finally aggregate the feature vectors with weakly-supervised learning. The performance of this workflow strongly depends on the quality of the extracted features. Recently, foundation models in computer vision showed that leveraging huge amounts of data through supervised or self-supervised learning improves feature quality and generalizability for a variety of tasks. In this study, we benchmark the most popular vision foundation models as feature extractors for histopathology data. We evaluate the models in two settings: slide-level classification and patch-level classification. We show that foundation models are a strong baseline. Our experiments demonstrate that by finetuning a foundation model on a single GPU for only two hours or three days depending on the dataset, we can match or outperform state-of-the-art feature extractors for computational pathology. These findings imply that even with limited resources one can finetune a feature extractor tailored towards a specific downstream task and dataset. This is a considerable shift from the current state, where only a few institutions with large amounts of resources and datasets are able to train a feature extractor. We publish all code used for training and evaluation as well as the finetuned models.
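The workflow described in the abstract (tessellate a slide into patches, embed each patch with a feature extractor, then aggregate patch features into one slide-level representation via weakly-supervised pooling) can be sketched in a few lines. This is a minimal NumPy illustration of attention-style aggregation only; the array sizes, random features, and attention parameters are hypothetical stand-ins, not the paper's actual model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: one slide tessellated into 500 patches, each embedded
# by a (frozen or finetuned) foundation model into a 384-d feature vector.
# Random values stand in for real extracted features.
n_patches, feat_dim = 500, 384
features = rng.standard_normal((n_patches, feat_dim))

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention-based weakly-supervised pooling: score each patch, normalize the
# scores into weights, and take a weighted sum of patch features so the whole
# slide is summarized by a single vector for slide-level classification.
w_att = rng.standard_normal(feat_dim) * 0.01   # illustrative attention parameters
scores = features @ w_att                      # one scalar score per patch
alpha = softmax(scores)                        # attention weights, sum to 1
slide_vector = alpha @ features                # slide-level feature, shape (384,)

print(slide_vector.shape)
```

In a real pipeline the attention parameters would be learned jointly with the slide-level classifier from slide labels only, which is what makes the aggregation weakly supervised.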
Language(s): English
