Machine learning for disaster risk management - how can we ensure everyone is included and protect societal/human values?
Students should have suitable programming skills (e.g. Python).
Humanitarian organizations such as the Red Cross Red Crescent and UNICEF use risk models to identify vulnerable populations and mitigate the effects of natural disasters. These risk models increasingly depend on machine learning and on data automatically extracted from satellite and drone imagery. However, these workflows may affect human/societal values: they can contain biases or private information that could harm the very populations they are meant to protect. This research will examine a real-world use case, identify the possible risks to human/societal values such as bias and privacy, and show possible ways to mitigate them.
A study will be conducted of the real-world use case to identify possible risks to human/societal values such as biases, personally identifiable information (PII), or demographically identifiable information (DII). The study will cover not only the technical aspects of the machine learning algorithm used but also the social aspects, by considering the users and subjects of the algorithm. You will review existing mechanisms for reducing bias and anonymizing sensitive information and propose methods to incorporate them into the use case.
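As a concrete illustration of the kind of bias analysis the study could start from, the sketch below computes a simple group-fairness metric (the demographic parity difference: the gap in positive-prediction rates between groups) for a binary risk classifier. All names and data here are hypothetical toy values, not from the actual use case; real analyses would typically use a dedicated library such as Fairlearn or AIF360.

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups (0.0 means perfectly equal rates).

    predictions: list of 0/1 model outputs
    groups: list of group labels of the same length, e.g. "A"/"B"
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(member_preds) / len(member_preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy example: the model flags group A 75% of the time, group B 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A large gap like this would prompt exactly the follow-up questions the study is about: is the disparity justified by genuine differences in risk, or is it an artifact of biased training data, e.g. uneven satellite-imagery coverage?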
For an M-SE student, the multidisciplinary aspect of the study will be emphasized. The student will need to consult with various stakeholders to understand their conflicting views on protecting PII/DII versus the need to use geospatial information for disaster risk applications.