This page presents a selected collection of AI-related thesis topics provided by professors and lecturers of various research institutions.
Contact information is available on each professor's personal website.
Learning and evolution in simulated highly-modular soft robots
Voxel-based soft robots (VSRs) are a kind of robot that is intrinsically modular and reconfigurable, being an aggregation of many simple soft blocks. These features make VSRs potentially suitable for a wide range of tasks, provided that their body and brain are tailored to the specific task. Optimization and learning can be the means through which this tailoring is obtained, but current algorithms and representations do not explicitly exploit the intrinsic modularity and reconfigurability of VSRs: a good VSR for a task might be obtained by automatically reusing parts of other VSRs that are good at other tasks. Several research activities may be carried out in this context, e.g., cooperative co-evolution of body and brain of VSRs, reinforcement learning for VSRs, auto-assembly of VSRs, and semi-supervised learning for the classification of VSR behaviors and bodies. All these activities fit, in terms of amount and kind of work, a master's thesis project, and we proactively organize the activity so as to eventually publish a research paper in a high-level publication venue.
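As a purely illustrative sketch of the evolutionary side of this topic, the following toy loop evolves a population of body/brain genotypes with mutation and truncation selection. The grid size, controller length, and fitness function are all hypothetical stand-ins for a real VSR simulator score:

```python
import random

random.seed(0)

# Hypothetical toy: a VSR genotype as a boolean 3x3 body grid (flattened)
# plus a real-valued controller vector; fitness is a stand-in for a
# simulated locomotion score, not a real VSR evaluation.
GRID = 9
CTRL = 4

def random_genotype():
    body = [random.random() < 0.5 for _ in range(GRID)]
    brain = [random.uniform(-1, 1) for _ in range(CTRL)]
    return body, brain

def fitness(genotype):
    body, brain = genotype
    # Placeholder objective: reward larger bodies with small controller weights.
    return sum(body) - 0.1 * sum(w * w for w in brain)

def mutate(genotype):
    body, brain = genotype
    body = [b ^ (random.random() < 0.1) for b in body]       # flip voxels
    brain = [w + random.gauss(0, 0.1) for w in brain]        # perturb weights
    return body, brain

# Simple (10+10) evolutionary loop with truncation selection.
pop = [random_genotype() for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
```

A cooperative co-evolution variant would instead keep separate body and brain populations and evaluate them jointly; the loop structure stays the same.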
Kernels for temporal logic
Define, efficiently implement, and test a kernel for (temporal) logic formulae (i.e., formulae expressing temporal relationships among events), thus enabling machine learning on spaces of logical properties using kernel-based methods. At the moment there is a first definition of the kernel, which appears to work. It needs to be investigated extensively, also from an experimental perspective, and checked for use in problems like formula embeddings and formula-space visualization (e.g., using t-SNE), with everything implemented in PyTorch to run on GPUs.
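To make the idea concrete, here is a deliberately crude, hypothetical kernel that compares formulae by their (normalized) shared token counts; the kernel actually under study would compare formula semantics, not just syntax, but any such symmetric positive-semidefinite function can be plugged into kernel-based methods in the same way:

```python
from collections import Counter
import math

# Hypothetical sketch: a bag-of-tokens kernel over temporal-logic formulae.
# Tokens like "G", "F", "U" are temporal operators (always, eventually,
# until); everything else is treated as an atom or punctuation.
def features(formula):
    return Counter(formula.split())

def kernel(f1, f2):
    a, b = features(f1), features(f2)
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values()) * sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two formulae that differ only in the atom reached "eventually".
k = kernel("G ( p -> F q )", "G ( p -> F r )")
```

The resulting Gram matrix can be fed, for instance, to an SVM with a precomputed kernel, or used as a similarity for t-SNE-style visualization of a formula space.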
Metric embeddings for temporal logic
The idea is to use NLP techniques based on deep learning to embed (temporal) logic formulae (i.e., formulae expressing temporal relationships among events) into a metric space, using an encoder-decoder architecture. The encoder-decoder will make it possible to map search problems in formula space into optimization problems in the metric space, and to decode the results back, also by learning deep approximations of the functions to be optimized.
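The pipeline can be sketched with a toy, entirely hypothetical example: a handful of formulae embedded as 2-D points, an objective defined over the latent space, continuous (here, hill-climbing) optimization in that space, and nearest-neighbour decoding back to a formula. A real system would learn both the embedding and the objective approximation with deep networks:

```python
import math

# Hypothetical fixed embedding standing in for a learned encoder.
EMBEDDING = {
    "G p":         (0.0, 0.0),
    "F p":         (1.0, 0.0),
    "G ( p U q )": (0.5, 1.0),
}

def decode(point):
    # Nearest-neighbour decoding back to a concrete formula; a learned
    # decoder would generate the formula instead.
    return min(EMBEDDING, key=lambda f: math.dist(EMBEDDING[f], point))

def objective(point):
    # Stand-in for a learned, smooth score over the latent space.
    return -math.dist(point, (0.6, 0.9))

# Crude hill climbing in the latent space replaces combinatorial search
# in formula space.
best = (0.0, 0.0)
for _ in range(200):
    for dx, dy in ((0.05, 0), (-0.05, 0), (0, 0.05), (0, -0.05)):
        cand = (best[0] + dx, best[1] + dy)
        if objective(cand) > objective(best):
            best = cand

found = decode(best)
```

With a differentiable learned objective, gradient-based optimizers would replace the hill climbing.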
Verifiable AI. This area covers different kinds of techniques to verify that an ML model, typically a deep learning one, satisfies given requirements. Whenever possible, the technique produces a certifiable answer. The idea is to explore the use of these techniques for controllers and models of critical systems learned from data (e.g., deep learning models of the artificial pancreas) or in the context of reinforcement learning (to enforce/certify ethical and normative properties). The projects here start by exploring the literature, with the idea of first testing existing methods on a given problem (likely starting with the artificial pancreas, a device that releases insulin to diabetic patients when needed, given measurements of glucose in the blood). The actual scalability of these methods may require exploiting Bayesian reasoning in this verification context.
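One standard technique in this space is interval bound propagation: pushing an entire box of inputs through the network and checking that the output bounds satisfy the requirement, which certifies the property for the whole box rather than for sampled points. A minimal sketch for a single affine + ReLU layer, with illustrative (not learned) weights:

```python
# Hypothetical sketch: interval bound propagation through one tiny
# ReLU layer. Weights W, bias b, and the safety threshold are
# illustrative stand-ins, not a trained controller.
def interval_affine(lo, hi, W, b):
    # Propagate an input box [lo, hi] through x -> Wx + b: positive
    # weights take the matching bound, negative weights the opposite one.
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -0.2]
lo, hi = interval_affine([0.0, 0.0], [0.1, 0.1], W, b)
lo, hi = interval_relu(lo, hi)

# If the upper bounds stay below the threshold, the property is
# certified for every input in the box, not just tested points.
safe = all(h <= 1.0 for h in hi)
```

Dedicated tools tighten these bounds layer by layer; the certifiable answer is exactly this kind of sound over-approximation.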
Deep learning and Cyber-Physical Systems
Controlling CPS by GANs
We have recently been working on a method to synthesise a controller for a CPS in a potentially challenging environment using Generative Adversarial Networks. Here the goal is to test the approach extensively, also on large-scale models, and to implement techniques to learn the network structure as well.
Deep abstraction of stochastic models
In this line of work, we have considered several forms of deep models to abstract a transition kernel or a stochastic model, seen as a distribution in trajectory space. We have used mixture density networks, Dirichlet processes, and Generative Adversarial Networks with FNNs or CNNs. We want to explore this direction further, also using RNNs, gated networks, or LSTMs, and possibly Variational Autoencoders rather than GANs. Another direction is to try echo state networks as the underlying engine.
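To illustrate the mixture-density flavour of such an abstraction: given a state, a network head would output mixture weights, means, and standard deviations for the next-state distribution, from which one samples instead of running the expensive stochastic model. In this hypothetical sketch the parameters are fixed rather than produced by a network:

```python
import random

random.seed(1)

# Hypothetical stand-in for a mixture-density-network head: for a state
# x it returns (weight, mean, std) for two Gaussian components of the
# next-state distribution.
def mixture_params(x):
    return [(0.7, x + 1.0, 0.1), (0.3, x - 1.0, 0.2)]

def sample_next_state(x):
    params = mixture_params(x)
    r, acc = random.random(), 0.0
    for w, mu, sigma in params:
        acc += w
        if r <= acc:
            return random.gauss(mu, sigma)
    return random.gauss(*params[-1][1:])

# Abstract one-step simulation: draw many next states from the cheap
# surrogate rather than the original stochastic model.
samples = [sample_next_state(0.0) for _ in range(1000)]
mean = sum(samples) / len(samples)
```

The empirical mean should sit near the mixture mean 0.7·1 + 0.3·(−1) = 0.4; a trained abstraction is judged by exactly this kind of distributional agreement with the original model.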
Mobile Robots in crowded environments
This project is in cooperation with the robotics group of the engineering department. The goal is to train a hierarchical controller for a mobile robot to move in a crowded environment in a safe way. The controller should combine a long-range planner and a short-range obstacle-avoidance system capable of avoiding moving people in the area by anticipating their actions. We will use deep reinforcement learning.
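The hierarchical split can be caricatured in a few lines: a long-range planner proposes the next waypoint towards the goal, and a short-range reactive layer vetoes moves that get too close to a person. In the actual project a deep RL policy would replace the hand-written reactive rule; everything below (step sizes, positions, the stop-and-wait override) is a hypothetical toy:

```python
import math

def planner_step(pos, goal, step=0.5):
    # Long-range layer: move a fixed step along the straight line to the goal.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d < step:
        return goal
    return (pos[0] + step * dx / d, pos[1] + step * dy / d)

def safe_step(pos, goal, people, min_dist=1.0):
    # Short-range layer: accept the planned step only if it keeps a safe
    # distance from every person; otherwise stop and wait.
    cand = planner_step(pos, goal)
    if all(math.dist(cand, p) >= min_dist for p in people):
        return cand
    return pos

pos, goal = (0.0, 0.0), (3.0, 0.0)
people = [(1.5, 0.1)]          # one person standing near the straight path
trace = [pos]
for _ in range(12):
    pos = safe_step(pos, goal, people)
    trace.append(pos)
```

Here the robot halts once the planned path would violate the safety margin; a learned short-range policy would instead detour around the person, anticipating their motion.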
Neural predictive monitoring for partially observable and noisy systems
NPM is a deep-learning-based method for the online monitoring of CPS, aimed at identifying potentially unsafe scenarios in advance. We want to extend the basic method to partially observable and noisy systems, in which the current state of the system needs to be estimated from the current observations.
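A minimal sketch of the extended pipeline, with all dynamics, noise levels, and thresholds hypothetical: the true state is hidden, only noisy observations arrive, a smoother stands in for the learned state estimator, and a threshold rule stands in for the learned reachability predictor that raises alarms in advance:

```python
import random

random.seed(2)

def observe(x, noise=0.3):
    # Partial observability: we only ever see a noisy reading of x.
    return x + random.gauss(0.0, noise)

def estimate(obs_window):
    # Simple moving average standing in for a learned state estimator.
    return sum(obs_window) / len(obs_window)

def predict_unsafe(x_est, drift=0.2, horizon=5, threshold=2.0):
    # Stand-in predictor: will the estimated state cross the unsafe
    # threshold within the prediction horizon under the nominal drift?
    return x_est + drift * horizon >= threshold

x, window, alarms = 0.0, [], []
for t in range(10):
    x += 0.2                                  # hidden dynamics drift upward
    window = (window + [observe(x)])[-4:]     # sliding window of observations
    alarms.append(predict_unsafe(estimate(window)))
```

In NPM proper, both the estimator and the predictor are neural networks trained offline, and the monitor runs them online at each observation.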
Bayesian Deep Learning
Robustness of BNNs to adversarial attacks
We have recently started to investigate the robustness of BNNs to adversarial attacks, proving that in the large-data limit they are invulnerable to gradient-based attacks. In this setting, we want to extend the current idea and test it on large models, possibly investigating the robustness of surrogate forms of BNNs (one Bayesian layer, with SGD as approximate Bayesian sampling). Furthermore, we want to develop specific non-gradient-based attacks for BNNs and investigate what happens under other non-gradient-based attacks.
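The mechanism behind the vulnerability (and its loss) can be shown on a toy Bayesian logistic model: a gradient-based attack like FGSM perturbs the input along the sign of the loss gradient, but against a BNN that gradient is averaged over posterior weight samples, and if the sampled gradients cancel the attack loses its signal. Everything below, including the two hand-picked "posterior samples", is a hypothetical illustration of that cancellation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def input_grad(x, w, y):
    # Gradient of the logistic loss w.r.t. the input x, for label y
    # under a single weight sample w.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

# Two posterior weight samples with opposing signs: their input
# gradients cancel exactly at this input.
posterior = [[1.0, -0.5], [-1.0, 0.5]]
x, y, eps = [0.2, 0.4], 1, 0.1

avg_grad = [0.0, 0.0]
for w in posterior:
    g = input_grad(x, w, y)
    avg_grad = [a + gi / len(posterior) for a, gi in zip(avg_grad, g)]

# FGSM step on the posterior-averaged gradient: with a zero average
# gradient, the attack cannot move the input at all.
x_adv = [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
         for xi, g in zip(x, avg_grad)]
```

Non-gradient-based attacks (e.g., random or query-based searches) are unaffected by this cancellation, which is why they are the natural next thing to develop and test.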
Deep Learning in Medicine
AI for intensive care
We are exploring DL-based approaches to detect emerging critical situations for patients undergoing continuous monitoring in the ICU or during surgery (currently in the pre-data-collection phase; an online database is usable). One goal is artifact detection (false alarms signaled by the monitoring machines), also using additional data sources like video cameras. Other goals concern the emergence of potentially critical scenarios such as tachycardia, hyper- and hypotension, and the like.
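The artifact-detection idea can be illustrated with a hand-written rule on a made-up heart-rate trace: a single-sample spike that instantly returns to baseline is more likely a sensor dropout than a real physiological event, whereas a sustained rise is not flagged. A DL model would learn such distinctions from labelled waveforms instead of this hypothetical rule:

```python
def flag_artifacts(hr, jump=40):
    # Flag sample i as an artifact if it jumps by more than `jump` bpm
    # relative to BOTH neighbours (an isolated spike or dropout).
    flags = []
    for i in range(1, len(hr) - 1):
        spike_up = hr[i] - hr[i - 1] > jump and hr[i] - hr[i + 1] > jump
        spike_dn = hr[i - 1] - hr[i] > jump and hr[i + 1] - hr[i] > jump
        flags.append(spike_up or spike_dn)
    return flags

# Made-up heart-rate trace (bpm): one dropout artifact at the fourth
# sample, then a genuine sustained rise (possible tachycardia onset).
hr = [78, 80, 79, 0, 81, 80, 110, 115, 120, 125]
artifacts = flag_artifacts(hr)
```

Only the isolated dropout is flagged; the sustained rise is left for the separate critical-scenario detectors.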