Hybrid adjustment of UAS-based LiDAR and image data

ACQUAL

Potential supervisors

Bashar Alsadik, Fabio Remondino, Francesco Nex

Spatial Engineering

This topic is not adaptable to Spatial Engineering

Suggested Electives

photogrammetry, UAV for Earth Observation, bundle adjustment, laser scanning

Additional Remarks

The student should be able to use a suitable programming language.

Description

Unmanned aircraft systems (UAS) are developing rapidly: new sensors are continuously added or improved to meet the growing demands of the market across many levels of application.
Recently, both consumer-grade and professional hybrid LiDAR–camera UAS have come on the market, offering customers a multi-purpose remote sensing system capable of gathering rich information about the scanned objects. This opens the door to broad applications and investment in sectors such as mapping, agriculture, forestry, and urban planning.
However, combining two different sensor types, namely the LiDAR and the camera(s), brings new scientific challenges that must be solved to obtain a reliable final product. One challenge is the possible misalignment between the LiDAR and the camera, which causes the intended integration of their derived point clouds to fail, or their quality to degrade. It is therefore highly desirable to find a robust and efficient solution for accurately co-registering the data of both sensors, either in post-processing or in an on-line approach.

Objectives and Methodology

The objective of this research is to find an efficient and practical method to co-register LiDAR data (trajectories and point clouds) and image data (camera poses and point clouds) acquired simultaneously with a hybrid UAS system.
Given the initial UAS trajectory data (GPS/INS), the LiDAR point clouds, and the image tie points, the unknown parameters to be solved are the corrected LiDAR trajectory, the corrected LiDAR point cloud, and the camera orientation parameters. In this way everything is expressed in the same reference system, and the LiDAR-based and image-based point clouds will match.
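As a minimal illustration of one building block of such a co-registration, the following Python sketch estimates the rigid transform that aligns matched points of a LiDAR-based point cloud to an image-based one, using the standard Kabsch/Umeyama least-squares (SVD) solution. This is a simplified, hypothetical sketch: the actual research problem additionally involves correcting the trajectory and camera poses in a joint adjustment, and all names and the synthetic data below are illustrative only.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate the rigid transform (R, t) minimizing sum ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. matched points
    between the LiDAR-based and image-based point clouds (Kabsch/Umeyama).
    """
    src_c = src - src.mean(axis=0)            # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: apply a small boresight-like misalignment to a toy
# "LiDAR" cloud and recover it from the correspondences.
rng = np.random.default_rng(0)
lidar = rng.normal(size=(50, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.1, 0.05])
image_cloud = lidar @ R_true.T + t_true

R_est, t_est = rigid_align(lidar, image_cloud)
residual = np.abs(lidar @ R_est.T + t_est - image_cloud).max()
```

In practice the correspondences would come from feature matching or nearest-neighbour search (as in ICP), and this closed-form step would sit inside an iterative or globally adjusted solution rather than being applied once.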

Further reading

https://d-nb.info/1187928178/34