On adversarial training and perimeter minimization problems

Date: Wed. February 16, 2022
Organized by: FAU DCN-AvH, Chair for Dynamics, Control and Numerics – Alexander von Humboldt Professorship at FAU Erlangen-Nürnberg (Germany)
Title: On adversarial training and perimeter minimization problems

Speaker: Prof. Dr. Nicolas Garcia Trillos
Affiliation: Department of Statistics, University of Wisconsin-Madison (USA)

Abstract. Adversarial training is a framework widely used by machine learning practitioners to enforce robustness of learning models. However, despite the development of several computational strategies for adversarial training and some theoretical developments in the broader distributionally robust optimization literature, several theoretical questions about adversarial training remain relatively unexplored. One such question is to understand, in more precise mathematical terms, the type of regularization enforced by adversarial training in modern settings such as non-parametric classification and classification with deep neural networks. In this talk, I will present a series of connections between adversarial training and several problems in the calculus of variations and geometric measure theory. These connections reveal a rich geometric structure of adversarial problems and conceptually all aim at answering the question: what is the regularization effect induced by adversarial training? In concrete terms, I will discuss, among other things, an equivalence between a family of adversarial training problems for non-parametric classification and a family of regularized risk minimization problems where the regularizer is a nonlocal perimeter functional. In the binary case, the resulting regularized risk minimization problems admit exact convex relaxations of the type L^1 + TV, a form frequently studied in image analysis and graph-based learning. I will highlight how these connections provide novel theoretical results on robust training of learning models, as well as a directly interpretable statistical motivation for a family of regularized risk minimization problems involving perimeter/total variation. This talk is based on joint works with Ryan Murray, Camilo A. García Trillos, Leon Bungert, and Jakwang Kim.
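
To give a rough sense of the equivalence described in the abstract, the following LaTeX sketch records a schematic version of the binary adversarial training problem and the perimeter-regularized form it corresponds to. The notation (data distribution mu, adversarial budget epsilon, nonlocal perimeter Per_epsilon, and the relaxed variable u) is assumed here purely for illustration and is not taken verbatim from the talk.

% Schematic binary adversarial training problem (assumed notation):
% a classifier is identified with the set A of points it labels 1.
\[
  \min_{A \subseteq \mathbb{R}^d} \;
  \mathbb{E}_{(x,y)\sim\mu}\!\left[\,
    \sup_{\|\tilde{x}-x\|\le \varepsilon}
    \mathbf{1}\{\, y \neq \mathbf{1}_A(\tilde{x}) \,\}
  \right]
\]
% Schematically, this is equivalent to a regularized risk minimization,
% where R(A) is the unperturbed 0-1 risk and Per_epsilon is a nonlocal
% perimeter functional of the decision region A:
\[
  \min_{A \subseteq \mathbb{R}^d} \; R(A) \;+\; \varepsilon\,\mathrm{Per}_{\varepsilon}(A),
\]
% with exact convex relaxations of L^1 + TV type,
%   min_u  \int |u - f| \, d\rho + \varepsilon \, \mathrm{TV}(u),
% as mentioned in the abstract for the binary case.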

Recording/Video:


Don’t miss out on our upcoming events!

|| Subscribe to our FAU DCN-AvH newsletter