PhD Program in Pure and Applied Mathematics
PhD Student: Francesco Della Santa
Context of the research activity
Machine learning (ML) is one of the most successful technologies currently used to extract information from big data. The methodologies developed in this field, in particular deep neural networks with their various training settings, have proven incredibly powerful at gaining new knowledge, often outperforming human experts, e.g. in breast cancer prediction, in automated tasks such as Tesla's self-driving cars, and in complex games, as in Google AlphaGo's victory over the world Go champion.
One of the problems with machine learning is that, as exciting as its performance gains have been, nobody knows quite how these models work, and that means no one can predict when they might fail. This poses several questions on the business side, where data and the accompanying analytics bring real value if and only if they are “actionable”, but also, and even more seriously, on the scientific side, especially in medicine. On the other hand, there is another, older aspect of the use of computers in science and business, namely simulation. Natural and human-generated systems such as weather, biological processes, supply chains, or computers can be represented by mathematical models and computer software. Such models are widely used today to better understand and predict the behavior of these systems by means of simulation. Here we face a situation dual to that of ML: we know how to produce the data, and they may form a rather large ensemble exhibiting all the Vs that characterize big data.
The candidate will focus his activity on studying the interplay between machine learning (but not only), computer simulations, and statistical models. By analyzing with ML techniques the configuration space arising from a known mathematical/statistical model, we will try to identify the relevant parameters and to refine or simplify the model. The knowledge thus acquired will then be used to infer models from big data, whose configuration spaces approximate the given data, and to use these simplified models to turn correlation into causation, finally making our information “actionable”. We will also use the developed framework to enlarge high-quality, low-quantity datasets and to work on model reduction, assessment, and validation in the area of FEM.
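As a minimal sketch of the first step described above (identifying the relevant parameters of a known model from its sampled configuration space), one could fit a surrogate to simulated data and screen parameters by their sensitivity. The toy simulator, the linear surrogate, and the screening threshold below are all illustrative assumptions, not part of the position description:

```python
import numpy as np

# Hypothetical "simulator": a known model with three input parameters,
# where the third parameter has no influence on the output.
def simulate(x):
    return 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.0 * x[:, 2]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))      # sampled configuration space
y = simulate(X) + rng.normal(0.0, 0.05, 500)   # noisy simulated observations

# Linear surrogate fitted by least squares; the magnitude of each
# coefficient serves as a crude sensitivity measure for that parameter.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
relevant = np.abs(coef) > 0.1                  # screening threshold (assumed)

print(coef.round(2))      # third coefficient is close to zero
print(relevant.tolist())  # -> [True, True, False]
```

Dropping the parameters flagged as irrelevant yields a reduced model, which is the simplest instance of the refine/simplify loop the project envisions; in practice nonlinear surrogates and proper sensitivity indices would replace the linear fit.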
The objective is to develop a mathematical and computational framework addressing the questions raised above.
Skills and Competencies for the Development of the Activity
The candidate is required to have very good competence in basic machine learning, topology/geometry, and numerical analysis, experience in algorithm design and analysis, and good programming skills.