Label transfer between multi-temporal point clouds

M-GEO
Robotics
ACQUAL
Additional Remarks

Students should possess strong programming skills in Python and have taken a deep learning course.

Topic description

Labelling point clouds is tedious work. Although significant progress has been made in the performance of deep learning networks for semantic point cloud segmentation, networks with classification accuracies of 95% still leave 5% for manual correction by human operators, which is time-consuming and expensive.

When a new point cloud is captured over the same area, classification is required again. The figure shows two point clouds, colour-coded by elevation, captured some five years apart. Some changes in buildings and vegetation are visible, but large parts are unchanged. To reduce the labelling costs, companies use rule-based procedures to determine which parts of the area are unchanged, so that the labels of the old dataset can be transferred to points at the same location in the new dataset. These rule-based procedures raise the classification accuracy, but further improvements are still needed.
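To make the idea of rule-based label transfer concrete, the sketch below assigns to every point in the new cloud the label of its nearest neighbour in the old cloud, provided that neighbour lies within a distance threshold; points farther away are left unlabelled as candidate changes. The function name, the threshold value, and the nearest-neighbour criterion are illustrative assumptions only and do not represent the procedures actually used in practice.

    import numpy as np
    from scipy.spatial import cKDTree

    def transfer_labels(old_xyz, old_labels, new_xyz, max_dist=0.5):
        """Transfer labels from an old, classified point cloud to a new one.

        New points whose nearest old point lies within max_dist (metres)
        are treated as unchanged and inherit that point's label; all other
        points are marked -1 and left for later classification. The
        threshold is a placeholder, not a validated rule.
        """
        tree = cKDTree(old_xyz)               # spatial index on the old cloud
        dist, idx = tree.query(new_xyz, k=1)  # nearest old point for every new point
        new_labels = np.full(len(new_xyz), -1, dtype=np.int64)
        unchanged = dist <= max_dist          # crude "no change" test
        new_labels[unchanged] = old_labels[idx[unchanged]]
        return new_labels, unchanged

In practice, such a distance test fails near class boundaries and where the geometry changes only slightly (e.g. grown vegetation), which is exactly where a learned approach is expected to help.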

While the example here relates to airborne laser scanning, similar questions pop up in semantic SLAM when robots revisit an area that was previously reconstructed and semantically segmented. Labelling of unchanged scene parts could then also be avoided.

Topic objectives and methodology

The objective of this MSc thesis project is to design, implement, and evaluate a deep learning network that transfers labels from an old point cloud to a new one in the parts where the scene appears unchanged. In the changed parts, the new point cloud should be classified, exploiting the context provided by the unchanged surroundings.

In 2024, De Gélis et al. proposed a Siamese network that jointly processes two unlabelled point clouds over the same area to detect and classify all changes, using labels such as new vegetation or demolished building. With N classes to be distinguished in both point clouds, the number of possible change labels grows to N², which makes the network hard to train when many classes, and hence many change categories, have to be distinguished. Considering that the old point clouds are usually already labelled, a (pseudo-)Siamese network could instead take one labelled and one unlabelled point cloud as input rather than two unlabelled ones. Consequently, the labelling effort can be restricted to a single point cloud. The question is how to train such a network so that it decides where labels can be transferred from the old point cloud and where new labels are needed.
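A minimal sketch of what such a pseudo-Siamese setup could look like is given below. It is not the architecture of De Gélis et al. (2024), which uses a KPConv backbone, but a toy per-point MLP version meant only to show how one branch can consume the old labels while the other sees the new, unlabelled points. All layer sizes, names, and the simple global-context fusion are assumptions.

    import torch
    import torch.nn as nn

    class PseudoSiameseSegNet(nn.Module):
        """Toy pseudo-Siamese network: one branch encodes the old, labelled
        cloud (xyz + one-hot label), the other the new, unlabelled cloud
        (xyz only). Per-point features of the new cloud are fused with a
        pooled context vector from the old cloud and classified into the N
        semantic classes. Simple MLPs stand in for a real point-cloud
        backbone such as KPConv (Thomas et al., 2019)."""

        def __init__(self, num_classes, feat_dim=64):
            super().__init__()
            self.enc_old = nn.Sequential(        # input: xyz + one-hot label
                nn.Linear(3 + num_classes, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, feat_dim), nn.ReLU())
            self.enc_new = nn.Sequential(        # input: xyz only
                nn.Linear(3, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, feat_dim), nn.ReLU())
            self.head = nn.Sequential(           # per-point classifier for the new cloud
                nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, num_classes))

        def forward(self, old_xyz, old_labels_onehot, new_xyz):
            # old_xyz: (B, N_old, 3), old_labels_onehot: (B, N_old, C), new_xyz: (B, N_new, 3)
            f_old = self.enc_old(torch.cat([old_xyz, old_labels_onehot], dim=-1))
            f_new = self.enc_new(new_xyz)
            context = f_old.max(dim=1, keepdim=True).values        # global summary of the old epoch
            fused = torch.cat([f_new, context.expand(-1, f_new.shape[1], -1)], dim=-1)
            return self.head(fused)              # per-point class scores, (B, N_new, num_classes)

Because only the new cloud needs per-point predictions, the output space stays at N classes instead of N² change labels; the open question of the project is how to supervise and fuse the two branches so that unchanged regions reliably inherit the old labels.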

The code of the networks designed by De Gélis et al. (2024) is available from https://github.com/IdeGelis/. Classified airborne laser scanning data captured over multiple years is abundantly available over the Netherlands and easily accessed through Geotiles.
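As an indication of how such data could be read, the snippet below loads a classified LAZ tile with the laspy package and extracts the coordinates and stored classification codes; the file name is a placeholder for any tile downloaded via Geotiles, and laspy requires the lazrs or laszip backend to decompress LAZ files.

    import numpy as np
    import laspy

    # Placeholder file name: any classified AHN tile downloaded via Geotiles.
    las = laspy.read("ahn_tile.laz")

    xyz = np.vstack([las.x, las.y, las.z]).T      # point coordinates in metres
    labels = np.asarray(las.classification)       # ASPRS classification code per point

    print(xyz.shape, np.unique(labels))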

References for further reading

De Gélis, I., Corpetti, T. and Lefèvre, S. (2024). Change detection needs change information: Improving deep 3-D point cloud change detection. IEEE Transactions on Geoscience and Remote Sensing, 62, pp. 1-10. https://doi.org/10.1109/TGRS.2024.3359484.

Thomas, H., Qi, C. R., Deschaud, J. E., Marcotegui, B., Goulette, F., and Guibas, L. J. (2019). KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6411-6420). https://arxiv.org/abs/1904.08889.