PhD in Electrical, Electronics and Communications Engineering
PhD Student: Philippe Bich
Context of the research activity
Tiny machine learning (TinyML) is broadly defined as a fast‐growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms, and software, capable of performing on‐device sensing and data analytics at extremely low power (sometimes in the mW range and below), thus enabling a variety of always‐on use cases on, for instance, battery‐operated devices.
TinyML is one of the fastest‐growing areas of Deep Learning and is rapidly becoming more accessible. It differs from mainstream machine learning (e.g., server‐ and cloud‐based) in that it requires not only software expertise but also embedded‐hardware expertise.
The main advantages of TinyML are energy efficiency (stemming from the use of MCUs rather than GPUs), the low cost of the frugal hardware platforms employed, low latency (thanks to the capability to execute ML algorithms at the edge of the network instead of offloading computing tasks to the cloud), and data security (since data remain local rather than being transmitted to remote servers as in traditional ML solutions).
Traditional applications of TinyML range from Industry 4.0 to personalized health monitoring, smart environments, and precision agriculture.
The proposed research activity will explore another potentially disruptive application of TinyML as “constrained machine learning”: AI‐based techniques for Failure Detection, Identification and Recovery (FDIR) in satellites; we will assume that at least the “inference” part of the process is performed by on‐board computing devices.
This is a very innovative application and part of this activity is expected to run in collaboration with Thales‐Alenia Space.
The overall FDIR problem in satellites can be decomposed into three phases: 1) detection/prediction of failures, 2) identification of the failing subsystem, and 3) planning and actuation of the recovery action. The work proposed in the framework of this PhD focuses mainly on 1) and only partially on 2). The input for 1) will be a set of time series coming from various satellite sensors, provided by Thales‐Alenia Space; its output is a set of alerts signaling that something is failing right now or is going to fail in the near future. From the point of view of sensor time series, failures are behaviors that are statistically different from normal behaviors, i.e., anomalies. Yet, the same time series may contain other anomalies that do not result in immediate failures but may be prodromic to future malfunctioning.
In any case, a basic step in failure detection/prediction is anomaly detection, i.e., a binary classification task that analyzes a time window of the given set of time series and produces an alert if the observed trends do not correspond to normal behavior. Theoretically, this is achieved by observing long stretches of normal behavior so that its statistics can be identified. New suspect instances are then matched against those statistics to decide whether the probability with which they would be observed in normal conditions is so low (potentially zero) as to indicate an anomaly. Such a general scheme can be implemented in different ways, and the aim here is to concentrate on neural architectures. Possible choices to be studied range from Auto‐Encoders (AEs) and Variational Auto‐Encoders (VAEs), which are based on feedforward networks and better address short time windows, to Sequence‐to‐Sequence (Seq2Seq) architectures, which adopt Recurrent Neural Networks (RNNs) to handle long windows, thus enlarging the number of samples among which they look for statistical relationships. Another possible solution for detecting anomalies is based on a predictor that compares the real waveform with the predicted one, recognizing a failure when the match is poor. Straightforward DNNs, possibly containing convolutional layers (CNNs), used as nonlinear regressors could be employed for short windows.
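As an illustration, the reconstruction‐based scheme described above can be sketched in a few lines. Here the trained auto‐encoder is replaced by a purely linear stand‐in (projection onto the top principal components of normal windows) so that the example stays self‐contained; the detection logic, however — reconstruct the window, measure the error, compare it with a threshold calibrated on normal data only — is the same one an AE or VAE would use. All names and the toy telemetry are illustrative.

```python
import numpy as np

def fit_normal_model(windows, k=2):
    """Learn a linear 'encoder' (top-k principal components) from windows
    of normal-only data, plus an alert threshold on the reconstruction
    error (here simply the worst error seen on normal data; a real system
    would calibrate a percentile)."""
    mean = windows.mean(axis=0)
    centered = windows - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                                  # (k, window_len)
    recon = centered @ basis.T @ basis + mean
    errors = np.mean((windows - recon) ** 2, axis=1)
    return mean, basis, float(errors.max())

def is_anomalous(window, mean, basis, threshold):
    """Flag a window whose reconstruction error exceeds the threshold."""
    recon = (window - mean) @ basis.T @ basis + mean
    return np.mean((window - recon) ** 2) > threshold

# Toy usage: sinusoidal "normal" telemetry vs. an unseen high-frequency mode.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 32)
normal = np.stack([np.sin(t + p) + 0.05 * rng.standard_normal(32)
                   for p in rng.uniform(0, 2 * np.pi, 500)])
mean, basis, thr = fit_normal_model(normal)
print(is_anomalous(normal[0], mean, basis, thr))      # normal window
print(is_anomalous(np.sin(5 * t), mean, basis, thr))  # anomalous oscillation
```

The point of the toy data is that normal windows live (up to noise) in a two‐dimensional subspace, so the learned "decoder" reconstructs them well, while a waveform outside that subspace produces a large reconstruction error and triggers the alert.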
RNNs may come into play when long windows are considered, to capture effects that are distant in time. In this case, Long Short‐Term Memory (LSTM) architectures, which prevent the vanishing‐gradient effect from spoiling the training phase, will be a solution to test.
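The prediction‐based alternative can be sketched similarly. A linear autoregressive model fitted by least squares stands in for the CNN/LSTM regressor (an assumption made purely to keep the example runnable without a deep‐learning framework); the anomaly logic — predict the next sample, compare it with the observed one, and alert when the prediction error is large — is unchanged.

```python
import numpy as np

def fit_ar_predictor(series, order=8):
    """Fit a linear autoregressive one-step predictor by least squares
    on normal data, and derive an alert threshold as 6 sigma of the
    training residuals."""
    X = np.stack([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return coeffs, 6 * resid.std()

def prediction_alerts(series, coeffs, threshold):
    """Return the indices where |predicted - observed| exceeds the threshold."""
    order = len(coeffs)
    alerts = []
    for i in range(order, len(series)):
        pred = series[i - order:i] @ coeffs
        if abs(series[i] - pred) > threshold:
            alerts.append(i)
    return alerts

# Toy usage: periodic telemetry with an injected sudden sensor jump.
rng = np.random.default_rng(1)
t = np.arange(2000)
normal = np.sin(2 * np.pi * t / 50) + 0.02 * rng.standard_normal(2000)
coeffs, thr = fit_ar_predictor(normal)

faulty = normal.copy()
faulty[1500] += 2.0                       # inject the fault
alerts = prediction_alerts(faulty, coeffs, thr)
print(alerts[0])                          # first alert at the fault index
```

An LSTM would replace the linear predictor when the relevant dependencies span windows too long for a fixed low‐order regressor.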
Once the NN solution for on‐board anomaly detection has been determined, the next objective of the activity will be to assess the feasibility of executing the ML solution on processors typically used in space data‐handling systems.
The presence of a general‐purpose rad‐hard, high‐reliability microprocessor‐based system is mandatory, and in many cases, for example in low‐cost missions, it is the only computing device on board. Among its advantages are the low cost and the absence of dedicated hardware. This will be challenging, though, due to the need to ensure, via suitable TinyML techniques (such as pruning of the NN structure or training with additional energy constraints), adequate performance of the proposed FDIR even on low‐performance platforms.
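As a minimal illustration of what such compression techniques do, the sketch below applies unstructured magnitude pruning to a random weight matrix: the smallest‐magnitude weights are zeroed, producing a sparse matrix that is cheaper to store and, with suitable sparse kernels, cheaper to execute on a constrained processor. The numbers are purely illustrative; real pruning would be interleaved with fine‐tuning to preserve accuracy.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the fraction `sparsity` of weights with the smallest
    magnitude -- the basic unstructured pruning step used to shrink a
    trained network before deployment on a constrained platform."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Toy usage on a random 64x64 layer.
rng = np.random.default_rng(2)
w = rng.standard_normal((64, 64)).astype(np.float32)
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.0%} of weights")   # -> kept 10% of weights
```

Energy‐constrained training, mentioned above, pursues the same goal from the other direction: instead of sparsifying a trained network, it penalizes costly structures during training.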
Skills and competencies for the development of the activity
The candidate must be familiar with the concept of Deep Neural Networks as well as with their training techniques.
Acquaintance with digital programmable devices, such as microcontrollers and/or FPGAs, is also of interest.
The capability to master theoretical subjects (related to signal‐processing algorithms in particular), as well as good programming skills (C, C++, Matlab and Python), are also desired prerequisites.
Further information about the PhD program at Politecnico can be found here