BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
CALSCALE:GREGORIAN
PRODID:-//WordPress - MECv7.32.0//EN
X-ORIGINAL-URL:https://dcn.nat.fau.eu/
X-WR-CALNAME:FAU DCN-AvH
X-WR-CALDESC:FAU DCN-AvH. Chair for Dynamics\, Control\, Machine Learning and Numerics – Alexander von Humboldt Professorship
X-WR-TIMEZONE:Europe/Berlin
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-PUBLISHED-TTL:PT1H
X-MS-OLK-FORCEINSPECTOROPEN:TRUE
BEGIN:VTIMEZONE
TZID:Europe/Berlin
X-LIC-LOCATION:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=03;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CLASS:PUBLIC
UID:MEC-11b01bd09f8d22fecc14d3418f83caab@dcn.nat.fau.eu
DTSTART;TZID=Europe/Berlin:20211022T103000
DTEND;TZID=Europe/Berlin:20211022T113000
DTSTAMP:20211020T082745Z
CREATED:20211020T000000Z
LAST-MODIFIED:20220117T000000Z
PRIORITY:5
SEQUENCE:0
TRANSP:OPAQUE
SUMMARY:Analysis of gradient descent on wide two-layer ReLU neural networks
DESCRIPTION:Speaker: Dr. Lénaïc Chizat\nAffiliation: EPFL\, École Polytechnique Fédérale de Lausanne (Switzerland)\nOrganized by: FAU DCN-AvH\, Chair for Dynamics\, Control and Numerics – Alexander von Humboldt Professorship at FAU Erlangen-Nürnberg (Germany)\nZoom meeting link\nMeeting ID: 615 4539 3381 | PIN: 304949\nAbstract. In this talk\, we propose an analysis of gradient descent on wide two-layer ReLU neural networks that leads to sharp characterizations of the learned predictor. The main idea is to study the training dynamics when the width of the hidden layer goes to infinity\, which is a Wasserstein gradient flow. While this dynamics evolves on a non-convex landscape\, we show that for appropriate initializations\, its limit\, when it exists\, is a global minimizer. We also study the implicit regularization of this algorithm when the objective is the unregularized logistic loss\, which leads to a max-margin classifier in a certain functional space. We finally discuss what these results tell us about the generalization performance\, and in particular how these models compare to kernel methods.\nThis event on LinkedIn\n
URL:https://dcn.nat.fau.eu/events/analysis-of-gradient-descent-on-wide-two-layer-relu-neural-networks/
CATEGORIES:FAU DCN-AvH Seminar,Seminar/Talk
ATTACH;FMTTYPE=image/png:https://dcn.nat.fau.eu/wp-content/uploads/FAUDCNAvH-seminar-22oct2021-lChizat.png
END:VEVENT
END:VCALENDAR
