Semantic Labeling of Urban Areas in Aerial Imagery

ACQUAL

Potential supervisors

Michael Yang, Francesco Nex

Spatial Engineering

This topic is not adaptable to Spatial Engineering

Suggested Electives

Scene Understanding with Unmanned Aerial Vehicles

Additional Remarks

The topic will have a statistical and mathematical context. Good knowledge of Matlab/Caffe/Tensorflow/Python programming is a plus.

Description

In this project, we consider the problem of segmenting the pixels of aerial images into different semantic classes. These images are acquired from aerial flights, e.g. from a helicopter or a UAV. This problem is a core component of several real-world applications, including urban modeling and the automatic generation of virtual cities. Advanced airborne oblique camera systems allow us to capture urban imagery from multiple views. Moreover, current photogrammetric techniques allow us to generate 3D data from oblique aerial imagery over large urban areas, and this 3D information has proven helpful for delineating more accurate object boundaries, such as those of buildings.
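As a minimal illustration of how such photogrammetric 3D data can be combined with the imagery, the sketch below stacks a normalized height raster with the RGB channels of a tile before feeding them to a segmentation model. The tile size, the placeholder data, and the use of NumPy are assumptions for illustration only, not part of the project specification.

```python
import numpy as np

# Placeholder data standing in for a real tile: a 256 x 256 RGB aerial image
# and a normalized DSM (height above ground) derived photogrammetrically
# from the oblique imagery. Shapes and values are assumptions for illustration.
rgb = np.random.rand(256, 256, 3).astype(np.float32)
height = np.random.rand(256, 256, 1).astype(np.float32)

# Fuse the elevation information with the colour channels so a segmentation
# network receives a single 4-channel input per pixel.
x = np.concatenate([rgb, height], axis=-1)  # shape: (256, 256, 4)

# Per-pixel labels: one class index per pixel, e.g. 0 = background, 1 = building.
labels = np.zeros((256, 256), dtype=np.int64)

print(x.shape, labels.shape)
```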

Objectives and Methodology

Deep learning has transformed the field of computer vision and now rivals human-level performance in tasks such as image recognition and object detection. Once trained, these models serve as generic feature extractors and can be applied to a wide range of problems. Recently proposed Fully Convolutional Networks and Graph Convolutional Networks will be applied to the segmentation of urban environments. In addition to RGB images, 3D information from point clouds will be considered to address the problem. The dataset will be provided by the supervisors.
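A minimal sketch of a fully convolutional segmentation network in TensorFlow/Keras (one of the frameworks named under Additional Remarks) is given below. The number of classes, the 4-channel RGB-plus-height input, and the layer configuration are assumptions for illustration, not the architecture prescribed for the project.

```python
import tensorflow as tf

NUM_CLASSES = 6  # assumed class count, e.g. building, road, tree, low vegetation, car, clutter

def build_fcn(input_channels=4):
    """A minimal fully convolutional network: a small encoder that downsamples,
    followed by transposed convolutions back to the input resolution, producing
    per-pixel class logits."""
    inputs = tf.keras.Input(shape=(None, None, input_channels))

    # Encoder: two convolution blocks with downsampling.
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)

    # Decoder: upsample back to the original resolution.
    x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)

    # Per-pixel class scores; softmax over the channel axis gives class probabilities.
    outputs = tf.keras.layers.Conv2D(NUM_CLASSES, 1, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_fcn()
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```

Because the network is fully convolutional, it accepts tiles of arbitrary size (divisible by four in this sketch) and outputs a class score map at the same resolution, which is what makes this family of models suited to pixel-wise labeling of large aerial scenes.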

Further reading