BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
CALSCALE:GREGORIAN
PRODID:-//WordPress - MECv6.5.5//EN
X-ORIGINAL-URL:https://dcn.nat.fau.eu/
X-WR-CALNAME:FAU DCN-AvH Chair for Dynamics, Control and Numerics - Alexander von Humboldt Professorship
X-WR-CALDESC:
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-PUBLISHED-TTL:PT1H
X-MS-OLK-FORCEINSPECTOROPEN:TRUE
BEGIN:VEVENT
CLASS:PUBLIC
DTSTART;TZID=Europe/Berlin:20201023T103000
DTEND;TZID=Europe/Berlin:20201023T233000
DTSTAMP:20211020T064700Z
UID:MEC-1c6e02b62a98d8d9341a81521edd3426@dcn.nat.fau.eu
CREATED:20211020T000000Z
LAST-MODIFIED:20220117T000000Z
PRIORITY:5
TRANSP:OPAQUE
SUMMARY:Large-time asymptotics in Deep Learning
DESCRIPTION:This Friday October 23rd, Borjan Geshkovski, PhD student at CMC Deusto on the DyCon ERC Project ( https://cmc.deusto.eus/erc-dycon/ ), will give a talk at the “Seminario de estadística” organized by UAM – Universidad Autónoma de Madrid, via Teams, on:\nLarge-time asymptotics in Deep Learning\nAbstract: It is by now well known that practical deep supervised learning can roughly be cast as an optimal control problem for a specific discrete-time nonlinear dynamical system, namely an artificial neural network. In this talk, we consider the continuous-time formulation of the deep supervised learning problem. Using an analytical approach, we present this problem’s behavior as the final time horizon increases, which in the neural network setting can be interpreted as increasing the number of layers. We show qualitative and quantitative estimates of the convergence to the zero-training-error regime, depending on the functional to be minimised.\nJoin this session via Teams.\nYou might also be interested in the “Math & Research” post by Borjan: The interplay of control and Deep Learning\n
URL:https://dcn.nat.fau.eu/events/large-time-asymptotics-in-deep-learning/
CATEGORIES:Seminar/Talk
ATTACH;FMTTYPE=image/png:https://dcn.nat.fau.eu/wp-content/uploads/seminar-bGeshkovski-23oct2020.png
END:VEVENT
END:VCALENDAR