BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
CALSCALE:GREGORIAN
PRODID:-//WordPress - MECv7.32.0//EN
X-ORIGINAL-URL:https://dcn.nat.fau.eu/
X-WR-CALNAME:
X-WR-CALDESC:FAU DCN-AvH. Chair for Dynamics\, Control\, Machine Learning and Numerics - Alexander von Humboldt Professorship
X-WR-TIMEZONE:Europe/Berlin
BEGIN:VTIMEZONE
TZID:Europe/Berlin
X-LIC-LOCATION:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=03;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-PUBLISHED-TTL:PT1H
X-MS-OLK-FORCEINSPECTOROPEN:TRUE
BEGIN:VEVENT
CLASS:PUBLIC
UID:MEC-de65a3fb6d48ee7cd6f5f9309d16e3fb@dcn.nat.fau.eu
DTSTART;TZID=Europe/Berlin:20201211T080000
DTEND;TZID=Europe/Berlin:20201211T180000
DTSTAMP:20211020T084717Z
RDATE;TZID=Europe/Berlin:20200908T100000,20200917T000000,20200924T000000,20201001T000000
CREATED:20211020T000000Z
LAST-MODIFIED:20220117T000000Z
PRIORITY:5
SEQUENCE:0
TRANSP:OPAQUE
SUMMARY:CCM Course: Inverse problems in Reinforcement Learning
DESCRIPTION:Carlos Esteve Yague\, Postdoctoral Researcher from CCM Deusto\, will present the CCM course “An Introduction to Reinforcement Learning and Optimal Control Theory ( https://cmc.deusto.eus/an-introduction-to-reinforcement-learning-and-optimal-control-theory/ )” this month in four sessions:\nSpeaker: Carlos Esteve Yague ( https://cmc.deusto.eus/carlos-esteve-yague/ )\, Postdoctoral Researcher at CCM\nFrom September 8th to October 1st\, 2020\nSessions: 4\, one session per week\nAbstract: This mini-course aims to be an introduction to Reinforcement Learning for people with a background in control theory. We will discuss the differences and similarities between the two settings\, relying on Markov decision processes (MDP) and dynamical systems (DS)\, respectively. We will present and analyze the most elementary Reinforcement Learning techniques\, based on the dynamic programming principle. By means of the HJB equation\, we will also discuss the possibility of implementing RL methods in continuous settings. Finally\, we will consider inverse problems arising in this context\, where the goal is to identify the underlying dynamics of the system and/or a cost functional compatible with a given optimal policy.\nSession plan:\n  Introduction and Dynamic Programming Methods\n  From discrete to continuous models\, Hamilton-Jacobi-Bellman equations\n  Q-Learning\n  Inverse problems in Reinforcement Learning\n
URL:https://dcn.nat.fau.eu/events/ccm-course-inverse-problems-in-reinforcement-learning/
CATEGORIES:Course
ATTACH;FMTTYPE=image/jpeg:https://dcn.nat.fau.eu/wp-content/uploads/ccm-course-robotic_surgery.jpg
END:VEVENT
END:VCALENDAR
