Topological Methods for eXplainable Artificial Intelligence – TopXAI

PhD in Pure And Applied Mathematics

Supervisors

Francesco Vaccarino – francesco.vaccarino@polito.it

PhD Student: Marco Nurisso

Context of the research activity

eXplainable Artificial Intelligence (XAI) algorithms aim to open the “black box” characterizing many of the algorithms populating the AI/ML landscape. This question is of paramount importance especially when the unfathomable mechanism behind the responses given by these modern oracles touches on very sensitive issues, such as those covered by the GDPR. The project aims to develop attribution methods based on game theory, using simplicial homology and the formalism of discrete exterior calculus. The objective of this activity will be to provide a system for evaluating the contributions of coalitions and to use the extracted information to implement “XAI-based” pruning and size-reduction methodologies. Furthermore, the resulting simplicial extension will allow a connection with geometric deep learning, enabling us to formulate a framework connecting the internal representation spaces of neural networks with those that emerge from XAI explanation techniques. Project co-funded by MUR – DM 352 – CUP no. E12B22000570006

Objectives

One of the most relevant research topics in artificial intelligence (AI) and, in particular, machine learning (ML) is that of so-called “explainability”: the ability to provide explanations for the decisions adopted by an algorithm. The question is of extreme relevance wherever sensitive attributes are involved, such as, for example but not limited to, gender, religion, race, and political or sexual orientation. In critical scenarios, the best supervised ML algorithms are treated as black boxes; eXplainable Artificial Intelligence (XAI) algorithms aim to overcome this problem.

In this program we will focus on attribution methods, that is, XAI algorithms that evaluate the importance of the components of the input data (predictors) with respect to the prediction of a supervised algorithm. Among them, the methods based on the theory of cooperative games stand out: considering each predictor as a player and the value of the prediction as the profit obtained from the cooperation of all the players, the importance of a single predictor is defined as the Shapley value of the corresponding player. The problem of assessing the importance of coalitions, that is, of subsets of players, and therefore of the interaction between predictors, remains open.

Recently, it has been shown how the Hodge decomposition of the cochain groups of the coalition graph provides a new interpretation of the Shapley value and of the axioms that characterize it. In this program, we will develop an extension of this approach using the simplicial complex generated by the cooperation relations, in order to determine high-dimensional analogues of the Shapley value in terms of simplicial homology operators and through the formalism of discrete exterior calculus.
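To make the game-theoretic attribution concrete, the following is a minimal sketch of exact Shapley-value computation by enumerating all coalitions. The worth function, the predictor names `x1`/`x2`, and the toy coalition values are illustrative assumptions, not part of the project; in an actual attribution method the worth of a coalition would come from evaluating the model with only those predictors active.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.

    `players` is a list of player labels; `value` maps a frozenset
    coalition to its worth, with value(frozenset()) == 0.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Shapley weight |S|! (n - |S| - 1)! / n! times the
                # marginal contribution of player i to coalition S.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical two-predictor game: coalition worths chosen so that
# the predictors interact (4.0 > 1.0 + 2.0).
worths = {
    frozenset(): 0.0,
    frozenset({"x1"}): 1.0,
    frozenset({"x2"}): 2.0,
    frozenset({"x1", "x2"}): 4.0,
}
phi = shapley_values(["x1", "x2"], worths.__getitem__)
# By the efficiency axiom the values sum to the grand coalition's worth.
```

Note that the interaction surplus (here 1.0) is split evenly between the two players; it is exactly this kind of coalition-level information that the higher-dimensional, homological extensions described above aim to capture explicitly.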
The objective of this activity will be not only to provide a system for evaluating the contributions of coalitions with solid mathematical roots but also to use the information extracted to implement “XAI-based” pruning and size reduction methodologies.
In parallel with the activities described above, the simplicial extension of the XAI language will allow connecting these results with the recent formalism of geometric deep learning, which reformulates deep learning techniques in terms of the symmetries preserved by the problem. The objective of this connection will be to formulate a group-theoretic framework capable of connecting the internal representation spaces of neural networks with those that emerge from XAI explanation techniques.

