prof. dr. Norman Kerle, dr. Peter Hofmann (advisors)
In disaster risk management, remote sensing data are frequently used for validating forecasts and the underlying models, but also as input for simulations or for decision support based on Land Use/Land Cover (LULC) classifications. Often the forecast models produce a probability that a certain event will happen (within a given time frame), not only for particular areas but also for individual objects and entities. Especially in the context of Impact-Based Forecasting (IBF), certainties and probabilities of expected impacts can be given explicitly by the forecast models at the object level.
In remote sensing, so-called soft classifiers produce classification results together with an explicit certainty for each individual result. That is, every object or pixel that is assigned to one or more classes additionally obtains a certainty or probability value for its class assignment. This makes it possible to analyze a classification result’s reliability, to alter class assignments, or to trigger further steps of image processing and analysis.
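As a minimal sketch of this idea (the classifier choice and the synthetic data are assumptions for illustration only, not part of this project’s methodology), the following Python snippet assigns each pixel a class label together with the posterior probability of that label, which can then be carried along as an explicit certainty layer:

```python
# Minimal sketch of soft classification: every pixel obtains a class label
# plus an explicit certainty value (here: the posterior probability of the
# winning class). Classes, features and data are synthetic placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)

# Synthetic spectral features for two LULC classes (e.g. water vs. built-up)
X = np.vstack([rng.normal(0.2, 0.1, (100, 4)),   # class 0 samples
               rng.normal(0.6, 0.1, (100, 4))])  # class 1 samples
y = np.repeat([0, 1], 100)

clf = GaussianNB().fit(X, y)

# "Soft" output: the full posterior per pixel, not just a hard label
pixels = rng.normal(0.4, 0.2, (5, 4))            # unseen pixels
posterior = clf.predict_proba(pixels)            # shape (n_pixels, n_classes)
labels = posterior.argmax(axis=1)                # hard class assignment
certainty = posterior.max(axis=1)                # certainty of that assignment

for lab, cert in zip(labels, certainty):
    print(f"class {lab}, certainty {cert:.2f}")
```

Thresholding such a certainty layer is one simple way to flag pixels whose class assignment should not be trusted in downstream processing.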
If remote sensing classification results (e.g. LULC classifications) are used as input for simulations and prediction models, their classification reliability can affect the simulation results, their validation and, ultimately, their overall reliability. That is, if a model is fed with more or less reliable data, the model itself becomes more or less reliable. Likewise, for model validation, the assessment of a model’s validity is only as reliable as the underlying validation (i.e. reference) information, and vice versa.
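One way to make this dependence concrete is Monte Carlo propagation: draw many plausible classification maps that are consistent with the per-pixel certainties, run the downstream model on each draw, and inspect the spread of the outputs. The sketch below uses a deliberately simple stand-in model (built-up pixels inside an assumed flood zone); the probability map, the grid size and the exposure rule are all hypothetical:

```python
# Sketch: propagate per-pixel classification uncertainty into a toy
# downstream model via Monte Carlo sampling. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Per-pixel probability of class "built-up" from a soft classifier
# (here: a random 20x20 probability map as a stand-in)
p_builtup = rng.uniform(0.0, 1.0, (20, 20))
flood_zone = rng.uniform(0.0, 1.0, (20, 20)) < 0.3  # assumed hazard mask

def exposed_area(builtup_map, flood_mask):
    """Toy impact model: count built-up pixels inside the flood zone."""
    return np.sum(builtup_map & flood_mask)

# Draw many classification maps consistent with the per-pixel certainties
samples = [exposed_area(rng.random((20, 20)) < p_builtup, flood_zone)
           for _ in range(1000)]

print(f"exposed built-up pixels: mean={np.mean(samples):.1f}, "
      f"std={np.std(samples):.1f}, 5-95%={np.percentile(samples, [5, 95])}")
```

The spread of the sampled model outputs then quantifies how much of the model’s uncertainty is inherited from the classification alone.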
In this research, the following major questions will be addressed:
- How can the explicit classification certainty of individual objects or pixels be used as additional information in forecast modelling, e.g. to trigger alternative simulations or to compare more and less likely scenarios?
- Different classification methods (e.g. CNNs, Bayesian or fuzzy classifiers) produce different classification certainties and probabilities. What is their impact on the models’ reliability? Do they complement each other? (A small comparison sketch follows this list.)
- What impact do a classification’s certainty and reliability have on a model’s reliability? How robust are particular models to uncertain classification results?
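Regarding the second question, the following sketch compares the per-pixel certainties that two different probabilistic classifiers assign to the same pixels; a Gaussian naive Bayes model and a logistic regression are used here purely as convenient stand-ins, and CNN softmax outputs or fuzzy memberships would be compared analogously. The data are synthetic and assumed for illustration:

```python
# Sketch: two soft classifiers assign different certainties to the same
# pixels; comparing them is one way to study mutual complements.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.15, (200, 4)),
               rng.normal(0.5, 0.15, (200, 4))])
y = np.repeat([0, 1], 200)

pixels = rng.normal(0.35, 0.2, (500, 4))  # unseen pixels

cert = {}
for name, clf in [("Bayes", GaussianNB()),
                  ("LogReg", LogisticRegression())]:
    proba = clf.fit(X, y).predict_proba(pixels)
    cert[name] = proba.max(axis=1)  # per-pixel certainty of winning class

# Where one classifier is uncertain, the other may still be confident:
disagreement = np.abs(cert["Bayes"] - cert["LogReg"])
print(f"mean |certainty difference|: {disagreement.mean():.3f}")
print(f"pixels where Bayes < 0.6 but LogReg > 0.9: "
      f"{np.sum((cert['Bayes'] < 0.6) & (cert['LogReg'] > 0.9))}")
```

Pixels where one method is uncertain while the other remains confident are candidates for mutual complementation, e.g. by combining or cross-checking the two certainty layers before they enter a forecast model.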