Deep learning to solve ambiguities for laser scanners with high pulse repetition rates
Students should possess strong programming skills in Python and C++ and have taken a deep learning course.
Current laser scanners can operate at pulse repetition rates of 2 MHz, i.e., the time between emitted pulses is 0.5 µs. For a laser scanner on a drone flying at 150 m altitude, an emitted light pulse takes 1 µs to travel down to the ground and back to the receiver. Hence, two light pulses are in the air simultaneously. If the scene contains objects higher than 75 m, a pulse reflected from a high object may return to the receiver before a previously emitted pulse reflected from the ground. This makes it difficult to determine the emission time of each received pulse.
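As a quick illustration of these numbers (a minimal Python sketch; the constants are the values given above, not part of the provided simulation code):

    C = 3.0e8                                  # speed of light in m/s
    PRF = 2.0e6                                # pulse repetition rate in Hz
    ALTITUDE = 150.0                           # flying altitude in m

    pulse_interval = 1.0 / PRF                           # 0.5 µs between emitted pulses
    round_trip_time = 2.0 * ALTITUDE / C                 # 1.0 µs to the ground and back
    pulses_in_air = round_trip_time / pulse_interval     # two pulses in the air at once
    ambiguity_range = C / (2.0 * PRF)                    # 75 m ambiguity interval

    print(pulse_interval, round_trip_time, pulses_in_air, ambiguity_range)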
Associating a received pulse with the emission time of another pulse leads to a range error of 75 m and an incorrect location of the reconstructed point. The left figure shows a point cloud of a windmill resulting from all matches of emitted and received pulses, both correct and incorrect. The challenge is to identify which matches are correct; these are shown in green on the right.
Surface smoothness, in combination with pulse emission time modulation, can be used to identify correct matches of emitted and received pulses. Simple smoothness criteria, however, still leave a significant number of errors.
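To make the idea of a smoothness criterion concrete, a hypothetical rule could keep, for each received pulse, the candidate point whose local neighbourhood is closest to planar. The Python sketch below (using NumPy and SciPy) is only an illustration of such a rule; it is not one of the classifiers already implemented in the project and it ignores the pulse emission time modulation:

    import numpy as np
    from scipy.spatial import cKDTree

    def plane_residual(points):
        # RMS distance of a neighbourhood to its best-fitting plane (via SVD).
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]                        # direction of smallest variance
        return np.sqrt(np.mean((centered @ normal) ** 2))

    def choose_by_smoothness(candidates_a, candidates_b, k=10):
        # candidates_a, candidates_b: (N, 3) arrays, the two candidate points per pulse.
        # Returns True where candidate A lies in the smoother (more planar) neighbourhood.
        cloud = np.vstack([candidates_a, candidates_b])
        tree = cKDTree(cloud)
        pick_a = np.zeros(len(candidates_a), dtype=bool)
        for i, (pa, pb) in enumerate(zip(candidates_a, candidates_b)):
            _, idx_a = tree.query(pa, k=k)
            _, idx_b = tree.query(pb, k=k)
            pick_a[i] = plane_residual(cloud[idx_a]) <= plane_residual(cloud[idx_b])
        return pick_a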
The objective of this MSc research project is to investigate whether a deep learning-based classification can better identify correct matches of emitted and received pulses.
You will study literature on the range ambiguity problem and on deep learning networks for point cloud classification. You will design a network for the problem sketched above, taking into account that a received pulse can be matched with only one emitted pulse; hence, the points of a point cloud such as the one shown in the figure cannot be classified independently. The results will be compared with those of already implemented classifiers based on simple smoothness criteria. C++ code is available to generate realistic point clouds and ground truth data.
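For orientation only, the PyTorch sketch below shows one possible way to encode the constraint that a received pulse is matched with exactly one emitted pulse: the two candidate points of each pulse are scored jointly and normalised with a softmax, instead of being classified independently. The per-candidate features, the toy MLP backbone (in practice a point-cloud network such as KPConv would replace it) and the loss are assumptions for illustration, not the project's design:

    import torch
    import torch.nn as nn

    class CandidatePairClassifier(nn.Module):
        # Scores the two candidate points of each received pulse jointly;
        # a softmax over the pair selects exactly one match per pulse.
        def __init__(self, d_in=8, d_hidden=64):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(d_in, d_hidden), nn.ReLU(),
                nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                nn.Linear(d_hidden, 1),
            )

        def forward(self, feats):                   # feats: (B, N, 2, d_in)
            logits = self.score(feats).squeeze(-1)  # (B, N, 2), one score per candidate
            return torch.log_softmax(logits, dim=-1)

    model = CandidatePairClassifier()
    feats = torch.randn(4, 1024, 2, 8)              # dummy per-candidate features
    labels = torch.randint(0, 2, (4, 1024))         # ground truth: which candidate is correct
    log_probs = model(feats)
    loss = nn.functional.nll_loss(log_probs.flatten(0, 1), labels.flatten())
    loss.backward()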
Rieger, P., & Ullrich, A. (2012). Resolving range ambiguities in high-repetition rate airborne light detection and ranging applications. Journal of Applied Remote Sensing, 6(1), 063552. https://doi.org/10.1117/1.JRS.6.063552.
Thomas, H., Qi, C. R., Deschaud, J. E., Marcotegui, B., Goulette, F., & Guibas, L. J. (2019). KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 6411-6420). https://arxiv.org/abs/1904.08889.