Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar #228
Claude summary: This paper presents an approach for generating very high resolution (< 1 m) canopy height maps (CHMs) from RGB satellite imagery using deep learning. The method relies on self-supervised pre-training of a vision transformer on a large dataset of unlabeled images, followed by supervised fine-tuning of a convolutional decoder on a smaller dataset of aerial LiDAR measurements. The summary covers the datasets, the model architecture and training, and the evaluation and results; a rough sketch of the encoder-decoder setup follows below.
In summary, this work demonstrates the effectiveness of combining self-supervised learning, transformers, and multi-modal data for high-resolution canopy height mapping. The proposed approach leverages a large volume of unlabeled satellite imagery to learn a generic visual representation, which is then adapted to the specific task of canopy height estimation using a smaller dataset of LiDAR measurements. The results show state-of-the-art performance and good generalization to new geographies, highlighting the potential of this method for large-scale, high-resolution forest monitoring applications.
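To make the encoder-decoder setup concrete, here is a minimal, hypothetical sketch of one training step: a frozen self-supervised ViT backbone feeding a small convolutional decoder that regresses per-pixel canopy height against LiDAR-derived targets. The publicly available DINOv2 ViT-S/14 weights stand in for the paper's backbone (which was pre-trained on satellite imagery), and the decoder layers, loss, hyperparameters, and synthetic batch are all illustrative assumptions, not code from the HighResCanopyHeight repository.

```python
# Minimal sketch (not the authors' released code): frozen SSL ViT encoder +
# small convolutional decoder regressing per-pixel canopy height.
import torch
import torch.nn as nn

class ConvDecoder(nn.Module):
    """Upsamples ViT patch tokens to image resolution and predicts height in metres."""
    def __init__(self, embed_dim: int, patch: int = 14):
        super().__init__()
        self.patch = patch
        self.head = nn.Sequential(
            nn.Conv2d(embed_dim, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),  # single output channel: canopy height
        )

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, N, C) patch embeddings -> (B, C, h/patch, w/patch) feature map
        b, _, c = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(b, c, h // self.patch, w // self.patch)
        return self.head(fmap).squeeze(1)  # (B, H, W) canopy height map

# Frozen self-supervised backbone (DINOv2 ViT-S/14 as a stand-in for the paper's
# satellite-pretrained encoder); only the decoder is fine-tuned on LiDAR labels.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

decoder = ConvDecoder(embed_dim=384)
optimizer = torch.optim.AdamW(decoder.parameters(), lr=1e-4)
loss_fn = nn.SmoothL1Loss()  # robust regression loss against CHM targets

# Dummy batch: RGB tiles and matching LiDAR-derived canopy height rasters (metres).
imgs = torch.randn(2, 3, 224, 224)
chm_target = torch.rand(2, 224, 224) * 40.0

with torch.no_grad():
    tokens = backbone.forward_features(imgs)["x_norm_patchtokens"]  # (B, 256, 384)

pred = decoder(tokens, 224, 224)
loss = loss_fn(pred, chm_target)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

In the actual paper the decoder is trained on aerial LiDAR-derived height rasters and the encoder's self-supervised pre-training is what enables generalization to new geographies; this sketch only illustrates the division of labour between the frozen representation and the lightweight regression head.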
https://sustainability.fb.com/blog/2024/04/22/using-artificial-intelligence-to-map-the-earths-forests/
https://github.com/facebookresearch/HighResCanopyHeight/blob/main/README.md
This is an incredibly interesting paper: it effectively builds something akin to a version of Clay and then applies it to a very specific use case.