Tuesday, January 24, 2023

Explainable AI for dwelling detection

Suggested by: Lorenz Wendt (lorenz.wendt@plus.ac.at)

Short description: Neural networks are often seen as black boxes: they seem to work fine, but what exactly they do and how well they do it is sometimes unclear. Therefore, one branch of research deals with making deep neural networks more explainable.

An interesting approach in this direction is presented in this paper:
Towards explainable deep neural networks (xDNN)

During training, neural networks usually need many samples. The idea of this paper is to automatically select a small number of "ideal" or "prototypical" samples which best represent the variety of samples the network has seen. Conceptually, one can think of them somewhat like the endmembers in hyperspectral unmixing. Applied to building detection, the network would say "I think there is a building at this location in the unknown satellite image, because it looks similar to this reference sample you showed me". A human interpreter can then check whether this similarity is indeed justified.
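
To make the idea concrete, here is a minimal sketch of prototype-based classification in Python. It is a deliberate simplification, not the authors' xDNN implementation (which identifies prototypes as local density peaks in feature space); the function name and the random stand-in embeddings are purely illustrative.

import numpy as np

# Classify an image patch by its similarity to a few stored prototype
# feature vectors, and report which prototype drove the decision, so a
# human interpreter can inspect that reference sample.
def classify_by_prototype(feature, prototypes, labels):
    # feature: (d,) embedding of the unknown patch (e.g. from a CNN backbone)
    # prototypes: (k, d) embeddings of the chosen prototypical samples
    # labels: (k,) class label of each prototype
    sims = prototypes @ feature / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(feature) + 1e-12
    )  # cosine similarity between the patch and every prototype
    best = int(np.argmax(sims))
    return labels[best], best, float(sims[best])

# Example with random stand-in embeddings; real ones would come from a
# pretrained network applied to satellite image patches.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 128))
labels = np.array(["building", "building", "road", "field", "building"])
patch = prototypes[1] + 0.1 * rng.normal(size=128)  # patch resembling prototype 1
pred, idx, sim = classify_by_prototype(patch, prototypes, labels)
print(f"predicted {pred!r}: looks similar to prototype {idx} (similarity {sim:.2f})")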

The authors of the article use this approach to detect COVID-19 in medical images of lungs; the code is available on GitHub. The task of this MSc thesis is to adapt the code for dwelling detection in satellite images, and to test whether a relatively small number of well-chosen samples is indeed sufficient to detect a large number of targets.
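
The central question, whether a handful of prototypes can stand in for a large training set, can be prototyped on synthetic data before touching satellite imagery. The sketch below is again only an illustration under stated assumptions: it picks surrogate prototypes with per-class k-means (xDNN instead derives them from density peaks) and measures nearest-prototype accuracy as the number of prototypes per class varies.

import numpy as np
from sklearn.cluster import KMeans

def select_prototypes(features, labels, k_per_class):
    # Pick k cluster centres per class as surrogate prototypes.
    protos, proto_labels = [], []
    for c in np.unique(labels):
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=0)
        km.fit(features[labels == c])
        protos.append(km.cluster_centers_)
        proto_labels.extend([c] * k_per_class)
    return np.vstack(protos), np.array(proto_labels)

def nearest_prototype_accuracy(protos, proto_labels, test_x, test_y):
    # Assign each test sample the label of its closest prototype.
    dists = np.linalg.norm(test_x[:, None, :] - protos[None, :, :], axis=2)
    pred = proto_labels[np.argmin(dists, axis=1)]
    return float((pred == test_y).mean())

# Synthetic two-class embeddings standing in for real patch features.
rng = np.random.default_rng(1)
train_x = np.vstack([rng.normal(loc=m, size=(100, 16)) for m in (0.0, 3.0)])
train_y = np.repeat(["background", "dwelling"], 100)
test_x = np.vstack([rng.normal(loc=m, size=(50, 16)) for m in (0.0, 3.0)])
test_y = np.repeat(["background", "dwelling"], 50)

for k in (1, 2, 5):
    protos, proto_y = select_prototypes(train_x, train_y, k)
    print(k, "prototypes/class ->", nearest_prototype_accuracy(protos, proto_y, test_x, test_y))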

Project context: The Master thesis would take place in the context of the Christian Doppler Lab GEOHUM, in which EO and GIS applications for Doctors without Borders/MSF are being developed.

Literature references

  • Angelov, P., & Soares, E. (2020). Towards explainable deep neural networks (xDNN). Neural Networks, 130, 185–194.

Start / finish by: anytime

Prerequisites / qualifications: the student should have a strong affinity for programming and computer vision, and should be able to read and understand papers like the one above.

