
Jeroen Lamb
Martin Rasmussen
Dmitry Turaev
Sebastian van Strien
Mike Field
Fabrizio Bianchi
Trevor Clark
Nikos Karaliolios
Dongchen Li
Björn Winckler
Alex Athorne
Sajjad Bakrani Balani
Giulia Carigi
Andrew Clarke
Maximilian Engel
Federico Graceffa
Michael Hartl
Giuseppe Malavolta
Guillermo Olicón Méndez
Cezary Olszowiec
Christian Pangerl
Mohammad Pedramfar
Kalle Timperi
Shangzhi Li
Ole Peters
Camille Poignard
Cristina Sargent
Bill Speares
Kevin Webster
Mauricio Barahona
Davoud Cheraghi
Martin Hairer
Darryl Holm
Xue-Mei Li
Greg Pavliotis

DynamIC Seminars (Complete List)

Jennifer Creaser (University of Exeter)
Sequential escapes for network dynamics
Tuesday, 20 March 2018, 14:00, Huxley 139
Abstract: It is well known that the addition of noise to a multistable system can induce random transitions between stable states. Analysis of the transient dynamics responsible for these transitions is crucial to understanding a diverse range of brain functions and neurological disorders such as epilepsy. We consider directed networks in which each node has two stable states, one of which is only marginally stable. We assume that all nodes start in the marginally stable state and that, once a node has escaped, the transition times back are astronomically large by comparison. We use first-passage-time theory and the well-known Kramers' escape time to characterize transition rates between the attractors. Using numerical and theoretical techniques, we explore how the sequential escape times of the network are affected by changes in node dynamics, network structure, and coupling strength.

Georg Ostrovski (DeepMind)
Exploration in Deep Reinforcement Learning
Thursday, 22 March 2018, 16:00, Huxley 340
Abstract: In recent years, the use of deep neural networks in reinforcement learning has allowed significant empirical progress, enabling generic learning algorithms with little domain-specific prior knowledge to solve a wide variety of previously challenging tasks. Examples include reinforcement learning agents that learn to play video games at beyond human-level performance, or beat the world's strongest players at board games such as Go and chess. Despite these practical successes, the problem of effective exploration in high-dimensional domains, recognized as one of the key ingredients of more competent and generally applicable AI, remains a great challenge and an active area of empirical research. In this talk I will introduce basic ideas from deep learning and its use in reinforcement learning, and show some of their applications. I will then zoom in on the exploration problem and present some recent algorithmic approaches for creating 'curious' reinforcement learning agents.

Bastien Fernandez (CNRS)
Title: TBA
Tuesday, 22 May 2018, 14:00, Huxley 140

Patricia Soto (Benemerita Universidad Autonoma de Puebla)
Title: TBA
Thursday, 21 June 2018, 13:00, Huxley 130

Mike Todd (St Andrews)
Title: TBA
Tuesday, 23 October 2018, 14:00, Huxley 139
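The Kramers' escape time mentioned in the Creaser abstract above can be illustrated numerically. The following is a minimal sketch (not code from the talk): overdamped Langevin dynamics in a double-well potential, comparing the simulated mean first-passage time out of one well against the classical Kramers estimate. The potential, noise level, and all parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 0.1           # noise intensity in dx = -U'(x) dt + sqrt(2 D) dW
dt = 0.01         # Euler-Maruyama step size
n_trials = 200    # independent escape attempts

def dU(x):        # U(x) = x**4/4 - x**2/2: wells at x = +-1, barrier at x = 0
    return x**3 - x

# Start every walker in the left well; record the first time it reaches
# the far side of the barrier (x > 0.9, near the other well's bottom).
x = np.full(n_trials, -1.0)
t = np.zeros(n_trials)
escaped = np.zeros(n_trials, dtype=bool)
while not escaped.all():
    active = ~escaped
    n = int(active.sum())
    x[active] += -dU(x[active]) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)
    t[active] += dt
    escaped |= x > 0.9

# Kramers estimate: tau = 2*pi / sqrt(U''(-1) * |U''(0)|) * exp(Delta_U / D),
# with U''(-1) = 2, |U''(0)| = 1, and barrier height Delta_U = 1/4.
tau_kramers = 2 * np.pi / np.sqrt(2.0) * np.exp(0.25 / D)
print(f"simulated mean escape time: {t.mean():.1f}")
print(f"Kramers estimate:           {tau_kramers:.1f}")
```

The two numbers agree to within sampling error at this noise level; as D shrinks, escape times grow exponentially and the asymptotic Kramers formula becomes increasingly accurate.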
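The 'curious' agents mentioned in the Ostrovski abstract above rest on a simple idea: reward the agent for visiting rarely seen states. This is a toy sketch of that idea, not DeepMind's method or code; it uses tabular Q-learning on a hypothetical 10-state chain with a UCB-style visit-count bonus, and every parameter here is an illustrative choice.

```python
import random
from collections import defaultdict

random.seed(0)
N_STATES, GOAL = 10, 9            # chain of states 0..9; extrinsic reward only at the far end
alpha, gamma, beta = 0.5, 0.95, 1.0

Q = defaultdict(float)            # Q[(state, action)]; actions: 0 = left, 1 = right
counts = defaultdict(int)         # N(s, a): visit counts driving the exploration bonus

def step(s, a):
    """Deterministic chain walk; reward 1.0 only on reaching the goal state."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(200):              # episodes
    s = 0
    for _ in range(50):           # step limit per episode
        # Pick the action maximizing Q plus a count-based bonus: rarely tried
        # actions look artificially attractive, so the agent seeks them out.
        a = max((0, 1), key=lambda a: Q[(s, a)] + beta / (1 + counts[(s, a)]) ** 0.5)
        counts[(s, a)] += 1
        s2, r, done = step(s, a)
        target = r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

print("Q-value of stepping right next to the goal:", round(Q[(8, 1)], 3))
```

Without the bonus this greedy agent never leaves the start; with it, the bonus decays where the agent has been and pulls it toward unvisited states until the distant reward is found. Count-based bonuses of this form are one of several approaches to exploration; the talk's setting replaces exact counts with learned density models to scale the idea to high-dimensional observations.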

DynamIC Workshops and Mini-Courses (Complete List)

Dynamics, Bifurcations, and Topology
Monday, 14 May 2018 – Sunday, 20 May 2018, Imperial College London

One Day of Network Dynamics
Friday, 9 February 2018, Imperial College London

Short-term DynamIC Visitors (Complete List)

No visitors are currently scheduled.