Deep Learning Day

A full immersion!

Information

The seminars will be held in English, and they will be both live-streamed and recorded. Links will be provided as soon as they are ready! 💪

Register for the seminars on Eventbrite, at this link! 🎟️

The seminars are open to the public (you do not need to be a member of AI2S). But if you are a student and want to join and support us, registrations to the association are always open: we have both an online registration form and a PayPal account for fast payments. See the Join AI2S page to subscribe! 🤠
If you are a company and you want to join our network of partners and supporters, contact us at relations@ai2s.it to learn more! 🌐

Schedule

📔 Date and Time

🗓 Monday, December 14th, 2020

  • Introduction to Neural Networks:
    🕚 11:00 - 13:00

  • Adversarial Attacks in Deep Learning:
    🕟 15:30 - 17:15

πŸ“ Platform

TBD

Introduction to Neural Networks


  • Foundations of Neural Networks and Deep Learning

  • Notebook in PyTorch

by Dr. Alessio Ansuini, Data Scientist @ AREA Science Park, Research & Technology Institute, and teacher of the "Deep Learning" course @ UniTS

Abstract

A lecture on the basics of Artificial Neural Networks. To put the topics of the lecture into practice, participants will be provided with a Google Colab notebook, referenced throughout the lecture, covering the basics of training and testing Neural Networks in PyTorch.

Deep learning is a branch of machine learning based on specific models, called artificial neural networks (ANNs), that are inspired by biological information processing at multiple levels. Deep learning models achieve state-of-the-art results on a wide range of problems: computer vision, audio and speech recognition, natural language processing, medical image analysis, and gaming, to name a few. In this brief overview, we will describe the basics of deep learning in a non-technical way, focusing on the main concepts. We will introduce some elements of a "grammar" (artificial neurons, convolutions, training, attention, memory, etc.) that is essential for describing many deep learning models, motivating their introduction, in some cases, by looking at their biological counterparts. We hope that you will get a glimpse of the variety and richness of models that can be created from a limited number of elements, and that you will be motivated to go "deeper" into the study of this exciting field.
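To give a concrete taste of the hands-on part, here is a minimal sketch of training and testing a small network in PyTorch. It only illustrates the kind of loop the Colab notebook will walk through: the architecture, the synthetic data, and the hyperparameters below are our own illustrative assumptions, not the actual notebook content.

    # A minimal PyTorch training/testing loop (illustrative sketch only).
    import torch
    import torch.nn as nn

    # Synthetic data standing in for a real dataset (an assumption of this
    # sketch): 256 samples, 20 features, binary labels from a linear rule.
    X = torch.randn(256, 20)
    y = (X.sum(dim=1) > 0).long()

    # A small fully-connected network: one hidden layer with a ReLU.
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(20):
        optimizer.zero_grad()        # reset gradients from the last step
        loss = loss_fn(model(X), y)  # forward pass and loss computation
        loss.backward()              # backpropagation
        optimizer.step()             # gradient-descent update of the weights

    # Testing (here on the training data; a real notebook would use a
    # held-out test set).
    with torch.no_grad():
        accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
    print(f"final loss: {loss.item():.3f}, accuracy: {accuracy:.2%}")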

Adversarial Attacks in Deep Learning


  • Foreword by Luca Bortolussi, Professor @ UniTS

  • Adversarial Attacks and Ethical Implications, by Andrea Cavallaro, Professor @ Queen Mary University of London

  • Robustness of Bayesian Neural Networks to Gradient-Based Attacks, by Ginevra Carbone, PhD student @ University of Trieste and member of AI2S. She will present her work, which was published @ NeurIPS 2020.

The seminars will cover advanced topics, without going into the basics of Neural Networks. If you are new to the subject, you can follow Alessio Ansuini's introduction in the morning! 😃

Abstracts

Adversarial Attacks and Ethical Implications

Images we share online reveal information about personal choices and preferences, which can be inferred by classifiers. To prevent privacy violations and to protect visual content from unwanted automatic inferences, I will discuss how to exploit the vulnerability of classifiers to adversarial attacks in order to craft perturbations that maintain (and even improve) image quality. Adversarial perturbations designed to protect visual content may, however, be ineffective against classifiers that were not seen when the perturbation was generated, or against defences that use re-quantization or image compression. I will therefore discuss how to craft perturbations based on randomised ensembles, to make them robust to defences; on image semantics, to selectively modify colours within chosen ranges that humans perceive as natural; and on enhancing image details.
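If you have never seen a gradient-based attack, the sketch below shows the classic Fast Gradient Sign Method (FGSM), which nudges every pixel in the direction that increases the classifier's loss. This is only a generic illustration of the vulnerability the talk builds on, not the speaker's own perturbation method; the model, image, and label arguments are placeholders.

    # FGSM: a classic gradient-based adversarial attack (generic sketch).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Return a copy of image perturbed to increase the model's loss.

        model: any classifier returning logits; image: a (1, C, H, W)
        tensor with pixel values in [0, 1]; label: a (1,) tensor of class
        indices. All three are placeholders, not the talk's actual models.
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()                                  # gradient w.r.t. pixels
        perturbed = image + epsilon * image.grad.sign()  # one signed-gradient step
        return perturbed.clamp(0, 1).detach()            # keep pixels in range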

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparameterized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets, with BNNs trained with Hamiltonian Monte Carlo and Variational Inference, support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient-based adversarial attacks.
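To connect the abstract to code: in a BNN, the prediction (and hence the attack gradient) is averaged over networks sampled from the posterior. The sketch below is our own rough illustration, not the paper's code; it computes that posterior-averaged input gradient, the quantity the paper argues vanishes in the large-data, overparameterized limit, so that gradient-based attacks lose their direction.

    # Posterior-averaged attack gradient for a BNN (rough illustration).
    import torch
    import torch.nn.functional as F

    def bnn_attack_direction(posterior_samples, image, label):
        """Average the input-gradient of the loss over posterior samples.

        posterior_samples: a list of nn.Module networks drawn from the BNN
        posterior (e.g. via Hamiltonian Monte Carlo or Variational
        Inference); image, label: placeholder input and class-index tensors.
        """
        grad_sum = torch.zeros_like(image)
        for net in posterior_samples:
            x = image.clone().detach().requires_grad_(True)
            F.cross_entropy(net(x), label).backward()  # gradient for this sample
            grad_sum += x.grad
        # The expected gradient: the paper shows it vanishes in the limit,
        # which is why BNN posteriors resist gradient-based attacks.
        return grad_sum / len(posterior_samples)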

Speakers

Dr. Alessio Ansuini

Data Scientist @ AREA Science Park, Research & Technology Institute, and teacher of the "Deep Learning" course @ UniTS

Professor Andrea Cavallaro

Professor @ Queen Mary University of London

Ginevra Carbone

PhD student @ University of Trieste

Deep in Deep Learning

Deep Learning has made it possible for computers to accomplish a number of incredible feats (some of which we are sure you have already heard of), such as:

  • Having former president Barack Obama introduce the MIT Deep Learning course (... sort of).

  • Beating the Go world champion. Go is a very popular board game in Asia, and is considered much more difficult for computers to master than other games such as chess.

  • Determining whether a file is malicious and, if so, identifying what type of malware it is (for example, ransomware or a Trojan).

Questions?

For any questions concerning the event, please contact us at events@ai2s.it.