Explainable AI (XAI)

PhD in Control and Computer Engineering


Danilo Giordano danilo.giordano@polito.it
Marco Mellia marco.mellia@polito.it

PhD Student: Francesco De Santis

Context of the research activity

This scholarship focuses on the study of methods that allow human-in-the-loop inspection of the reasons behind classifier predictions. Explanations can help data scientists and domain experts understand and interactively investigate individual decisions made by black-box models. The research activity fits in the SmartData@PoliTo interdepartmental center, which brings together competencies from different fields. The research activity will be funded by the PNRR project of the “Centro Nazionale HPC, Big Data e Quantum Computing”: PNRR – M4 C2 INV. 1.4 – CN00000013 “NATIONAL CENTRE FOR HPC, BIG DATA AND QUANTUM COMPUTING”. CUP: E13C22000990001.


Exploring and understanding the motivations behind black-box model predictions is becoming essential in many different applications. Different techniques are usually needed to account for different data types (e.g., images, structured data, time series). The research activity will consider the networking and industrial domains (e.g., the spatial domain), in which the availability of understandable explanations is relevant. The explanation algorithms will target both structured data and time series. The following facets of XAI (Explainable AI) will be addressed.

Model understanding. The research work will address the local analysis of individual predictions. These techniques will allow the inspection of the local behavior of different classifiers and the analysis of the knowledge different classifiers exploit for their predictions. The final aim is to support human-in-the-loop inspection of the reasons behind model predictions.

Model trust. Insights into how machine learning models arrive at their decisions allow evaluating whether a model can be trusted. Methods to evaluate the reliability of different models will be proposed. In case of negative outcomes, techniques to suggest enhancements of the model to cope with wrong behaviors and improve its trustworthiness will be studied.
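As a flavor of the local-explanation techniques mentioned above, the sketch below shows a minimal perturbation-based sensitivity analysis of a single prediction: each feature of one instance is jittered and the average change in the black-box output is taken as that feature's local importance. The `black_box` scoring rule and the feature names are purely hypothetical, invented here for illustration; they are not part of the research activity.

```python
import random

# Hypothetical black-box classifier over structured (e.g., networking) data:
# scores an instance [duration, bytes, packets] and returns an anomaly score.
def black_box(x):
    duration, nbytes, packets = x
    score = 0.0
    if nbytes > 1000:
        score += 0.6
    if packets > 50:
        score += 0.3
    if duration < 1.0:
        score += 0.1
    return min(score, 1.0)

def local_explanation(model, x, n_samples=500, noise=0.2, seed=0):
    """Perturbation-based local importance: how much does the prediction
    change, on average, when each feature is jittered around x?"""
    rng = random.Random(seed)
    base = model(x)
    importance = []
    for i in range(len(x)):
        total = 0.0
        for _ in range(n_samples):
            xp = list(x)
            # Multiplicative jitter of +/- `noise` on feature i only.
            xp[i] = xp[i] * (1.0 + rng.uniform(-noise, noise))
            total += abs(model(xp) - base)
        importance.append(total / n_samples)
    return importance

instance = [0.5, 1050.0, 60.0]
print(local_explanation(black_box, instance))
```

For this instance, the bytes feature sits just above the model's internal threshold, so its importance dominates; the duration never crosses a decision boundary under the jitter and scores zero. Surrogate-model methods in the LIME/SHAP family refine this idea by fitting an interpretable model to such local perturbations.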

