Parameter Efficient Self-Supervised Geospatial Domain Adaptation
Type
conference paper
Date Issued
2024-06-21
Abstract
As large-scale foundation models become publicly available for different domains, efficiently adapting them to individual downstream applications and additional data modalities has turned into a central challenge. For example, foundation models for geospatial and satellite remote sensing applications are commonly trained on large optical RGB or multi-spectral datasets, although data from a wide variety of heterogeneous sensors are available in the remote sensing domain. This leads to significant discrepancies between pre-training and downstream target data distributions for many important applications. Fine-tuning large foundation models to bridge that gap incurs high computational cost and can be infeasible when target datasets are small. In this paper, we address the question of how large pre-trained foundation transformer models can be efficiently adapted to downstream remote sensing tasks involving different data modalities or limited dataset size. We present a self-supervised adaptation method that boosts downstream linear evaluation accuracy of different foundation models by 4-6% (absolute) across 8 remote sensing datasets while outperforming full fine-tuning when training only 1-2% of the model parameters. Our method significantly improves label efficiency and increases few-shot accuracy by 6-10% on different datasets.
Language
English (United States)
Book title
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
IEEE/CVF
Start page
27841
End page
27851
Pages
11