Neural Symbolic Integration

Presenter: Francesco Manigrasso
Monday, February 21st, 2022, 17:30
Location: SmartData@Covivio

Neural-Symbolic techniques, such as Logic Tensor Networks (LTNs), allow the combination of semantic knowledge representation and reasoning with the ability to efficiently learn from examples typical of neural networks.
We focus here on the subsumption, or isOfClass, predicate, which is fundamental to encode most semantic image interpretation tasks, and introduce two architectures that merge an LTN with convolutional networks.
Faster-LTN is an object detector composed of a convolutional backbone and a Logic Tensor Network, trained in an end-to-end manner. The PROTOtypical Logic Tensor Network (PROTO-LTN), on the other hand, extends the current LTN formulation by grounding classes as parametrized prototypes in a high-dimensional embedding space.
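As a rough illustration of this idea (a sketch under common assumptions, not the exact formulation presented in the talk), the isOfClass predicate can be grounded as a differentiable truth value computed from the distance between an embedded instance and a learnable class prototype; the module and parameter names below are illustrative.

```python
import torch
import torch.nn as nn

class IsOfClass(nn.Module):
    """Sketch of a prototype-based grounding of isOfClass(x, c):
    the truth value is derived from the distance between the embedding
    of x and a learnable prototype for class c."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        # One learnable prototype per class in the embedding space.
        self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings: torch.Tensor, class_idx: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance between each embedding and its class prototype.
        protos = self.prototypes[class_idx]                 # (batch, embed_dim)
        sq_dist = ((embeddings - protos) ** 2).sum(dim=-1)  # (batch,)
        # Map distance to a fuzzy truth value in (0, 1]: close -> ~1, far -> ~0.
        return torch.exp(-sq_dist)
```

Mapping the distance through exp(-d²) keeps the output in (0, 1], so it can be combined with fuzzy-logic connectives and optimized by gradient descent together with the backbone.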
We show how these architectures can be effectively trained in object detection and zero-shot learning scenarios, respectively.
The proposed formulations open up new opportunities to integrate background knowledge, in the form of logical axioms, to compensate for the lack of labelled examples and to improve the learning model when a sufficient knowledge base is available.
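To sketch how such axioms can enter training (using common LTN conventions, which may differ in detail from those presented in the talk), a universally quantified rule such as ∀x isOfClass(x, cat) → isOfClass(x, animal) can be turned into a differentiable satisfaction score and maximized alongside the usual supervised loss; the truth values and the aggregator choice below are illustrative.

```python
import torch

def forall(truth_values: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    """Approximate the universal quantifier with a generalized mean over the
    per-example errors (one common LTN aggregator)."""
    return 1.0 - ((1.0 - truth_values) ** p).mean() ** (1.0 / p)

def implies(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Reichenbach fuzzy implication: a -> b."""
    return 1.0 - a + a * b

# Background-knowledge axiom: forall x, isOfClass(x, cat) -> isOfClass(x, animal).
# The truth values would come from a grounding such as the prototype-based one above.
cat_truth = torch.tensor([0.9, 0.2, 0.7])
animal_truth = torch.tensor([0.95, 0.6, 0.8])
axiom_satisfaction = forall(implies(cat_truth, animal_truth))

# Training maximizes satisfaction, i.e. minimizes (1 - satisfaction).
loss = 1.0 - axiom_satisfaction
```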

Biography: Francesco Manigrasso is a second-year Ph.D. student at the Dipartimento di Automatica e Informatica of the Politecnico di Torino and a member of the SmartData@Polito and GRAINS research centers. His research focuses on applications of machine learning models to the interpretation of multidimensional data such as images. He received B.Sc. and M.Sc. degrees in Computer Engineering from the Politecnico di Torino, Italy, in 2017 and 2018, respectively.
