Lecturers

Each Lecturer will hold up to four lectures on one or more research topics.


Anirudh Goyal
Google DeepMind & Mila, Université de Montréal, Canada
 

Topics

Machine Learning, Deep Learning, Deep Reinforcement Learning

Biography

Anirudh Goyal is a Montreal-based research scientist at Google DeepMind specializing in reinforcement learning, with a focus on using inductive biases and modular decomposition of knowledge to enable robust out-of-distribution generalization of complex behaviors. With over 12 years in AI research, his career spans graduate work at Mila, internships at DeepMind and Google, and visiting researcher roles at MPI Tübingen and UC Berkeley, reflecting a progression from theory to production-ready ML systems. He is an active open-source contributor, notably improving Theano’s Reshape operation across CPU and GPU paths with C-code generation and performance-focused fixes. He holds a PhD in Deep Learning from Université de Montréal and a B.Tech (Hons) in Computer Science from IIIT Hyderabad, grounding his work in strong theoretical and practical foundations. His research aims to factorize knowledge into independent components that can be dynamically composed, enabling systematic generalization in multi-process dynamical environments such as video games and multi-agent settings.

Lectures



Arthur Gretton
 

Topics

Generative Models, Causality, Hypothesis Testing, Machine Learning

Biography

Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit, CSML, UCL, which he joined in 2010. He received degrees in physics and systems engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He worked from 2002 to 2012 at the MPI for Biological Cybernetics, and from 2009 to 2010 at the Machine Learning Department, Carnegie Mellon University. Arthur’s research interests include machine learning, kernel methods, statistical learning theory, nonparametric hypothesis testing, blind source separation, Gaussian processes, and nonparametric techniques for neural data analysis. He was an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013 and has been an Action Editor for JMLR since April 2013; he has also served as a member of the NIPS Program Committee in 2008 and 2009, a Senior Area Chair for NIPS in 2018, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013. Arthur was co-chair of AISTATS in 2016 (with Christian Robert) and co-tutorials chair of ICML in 2018 (with Ruslan Salakhutdinov).

Lectures



Katja Hofmann

Topics

Machine Learning, Generative Models, Reinforcement Learning, Video Games

Biography

I am a Partner Research Manager at Microsoft Research Cambridge, where I co-lead the People-Centric AI research area. My work focuses on generative AI, interactive media, and game intelligence, combining advances in machine learning with human-computer interaction, design, and social science. Together with my team, I aim to create AI systems that empower people through collaboration, creativity, and play – unlocking new forms of interaction and addressing complex real-world challenges. I am passionate about driving interdisciplinary research that shapes the future of AI experiences across productivity, entertainment, and beyond.

Previously, I led the Game Intelligence team, which focused on machine learning research for video games and now forms part of the broader People-Centric AI area.

I am proud to serve the academic research community in my current roles of Board Member (since 2022) and Secretary of the Board (since 2024) of the International Conference on Learning Representations (ICLR), and have previously served as Senior Program Chair (ICLR 2021) and General Chair (ICLR 2022).

As part of the Microsoft Research PhD Scholarship program, I have deeply enjoyed co-supervising, and successfully graduating, a number of PhD students.

Before joining Microsoft Research, I completed my PhD in Computer Science as part of the former ILPS group at the University of Amsterdam. I worked with Maarten de Rijke and Shimon Whiteson on smart search engines that learn directly from their users. For a list of my publications before joining MSR, please see the ILPS (Information and Language Processing Systems) list of publications, MSR Academic, or dblp.

Lectures



Arnulf Jentzen

Topics

Deep Learning, Gradient Descent Optimization Methods, Mathematical Analysis of the Gradients in Deep Learning, Adam Algorithm, Scientific Machine Learning

Biography

Prof. Arnulf Jentzen holds appointments as a presidential chair professor at the Chinese University of Hong Kong, Shenzhen (since 2021) and as a full professor at the University of Münster (since 2019). He began his undergraduate studies in mathematics at Goethe University Frankfurt in Germany in 2004, received his diploma degree there in 2007, and completed his PhD in mathematics there in 2009. The core topics of his research group are machine learning approximation algorithms, computational stochastics, numerical analysis for high-dimensional partial differential equations (PDEs), stochastic analysis, and computational finance. He currently serves on the editorial boards of several scientific journals, including the Annals of Applied Probability, Communications in Mathematical Sciences, the Journal of Machine Learning, the SIAM Journal on Scientific Computing, and the SIAM Journal on Numerical Analysis. In 2020 he received the Felix Klein Prize of the European Mathematical Society (EMS), and in 2022 he was awarded an ERC Consolidator Grant from the European Research Council (ERC) as well as the Joseph F. Traub Prize for Achievement in Information-Based Complexity. Further details on the activities of his research group can be found at http://www.ajentzen.de.

Lectures



Pushmeet Kohli

Topics

AI for Science, Machine Learning, AI Safety, Computer Vision, Program Synthesis

Biography

Pushmeet Kohli is an Indian-British computer scientist and Vice President of Research at Google DeepMind, where he heads the “Science and Strategic Initiatives Unit”. Time magazine named him one of the 100 most influential people in AI in its Time 100 AI list.

He has led and supervised a number of projects including AlphaFold, a system for predicting the 3D structures of proteins; AlphaEvolve, a general-purpose evolutionary coding agent; SynthID, a system for watermarking and detecting AI-generated content; and Co-Scientist, an agent for generating and testing new scientific hypotheses.

Pushmeet’s papers have won multiple awards and have appeared at conferences in the fields of machine learning, computer vision, game theory, and human-computer interaction. His research has also been covered by popular media outlets such as Wired, Forbes, the BBC, New Scientist, and MIT Technology Review.

https://en.wikipedia.org/wiki/Pushmeet_Kohli

 

Lectures



Panos Pardalos

Topics

Data Science, Global Optimization, Mathematical Modeling, Financial Applications, AI

Biography

Panos Pardalos was born in Drosato (Mezilo), Argithea, in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD in Computer and Information Sciences from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida, and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments.

Panos  Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, Panos  Pardalos has been awarded the 2013 EURO Gold Medal prize bestowed by the Association for European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”

Panos Pardalos has been awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date – fundamental discoveries, new theories, insights that have had significant impact on their discipline.

Panos Pardalos is also a member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and a co-founder of the International Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited or authored over 200 books. He is one of the most cited authors and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos.

Panos Pardalos has lectured and given invited keynote addresses worldwide in countries including Austria, Australia, Azerbaijan, Belgium, Brazil,  Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland,  Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.

https://scholar.google.com/citations?user=4e_KEdUAAAAJ&hl=en

Lectures



Michal Valko

Topics

Large Language Models, Reasoning, Foundation Models, Fine-tuning Large Language Models, Reinforcement Learning with Human Feedback, Test-Time Computation

Biography

Michal is the Founding Researcher at a stealth startup, a tenured researcher at Inria, and a lecturer in the MVA program at ENS Paris-Saclay. He is primarily interested in designing algorithms that require as little human supervision as possible. He works on methods and settings that can cope with minimal feedback, such as deep reinforcement learning, bandit algorithms, self-supervised learning, and self-play. Michal has recently worked on representation learning, world models, and deep (reinforcement) learning algorithms with theoretical underpinnings. In the past he has also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. He is now working on a new generation of large language models (LLMs), providing algorithmic solutions for their scalable test-time inference, fine-tuning, and alignment. He received his PhD in 2011 from the University of Pittsburgh, obtained a tenured position at Inria in 2012, and co-created Google DeepMind Paris with R. Munos. In 2024 he became a Principal Llama Scientist at Meta, building the online reinforcement learning stack and research for Llama 3.

Lectures



David van Dijk

Topics

Large‑Scale Foundation Models, Generative AI Models, Large Language Models, Machine Learning

Biography

Dr. David van Dijk is an Associate Professor in the Department of Internal Medicine and the Department of Computer Science at Yale University, where he leads a research group focused on the cutting-edge application of machine learning methods to big biomedical data. His group develops new algorithms for discovering hidden structure, signals, and patterns in complex high-dimensional and high-throughput data, including single-cell RNA sequencing, microbiome, medical imaging, and electronic health records. His research team comprises trainees from diverse backgrounds, including computer science, mathematics, physics, biology, medicine, and neuroscience. Dr. van Dijk completed his PhD in Computer Science at the University of Amsterdam and the Weizmann Institute of Science, where he used machine learning to understand how gene regulation is encoded in DNA sequence. As a postdoc in Genetics and Computer Science at Yale, he developed machine learning methods for single-cell data that are widely used in the biomedical community.

NeurIPS 2025, CaDDi: https://neurips.cc/virtual/2025/loc/san-diego/poster/115857 (arXiv: https://arxiv.org/abs/2502.09767)

ICLR 2025, “Intelligence at the Edge of Chaos”: https://iclr.cc/virtual/2025/poster/30160 (arXiv: https://arxiv.org/abs/2410.02536)

Lectures



Jason Weston

Topics

Artificial Intelligence, Machine Learning, Natural Language Processing, Vision

Biography

Jason is a Research Scientist at Facebook, NY and a Visiting Research Professor at NYU. He earned his Ph.D. in machine learning at Royal Holloway, University of London and at AT&T Research in Red Bank, NJ. Previously, he was a researcher at Biowulf Technologies, a research scientist at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, a research staff member at NEC Labs America in Princeton, and a research scientist at Google, NY. His interests lie in statistical machine learning, with a focus on reasoning, memory, perception, interaction, and communication. Jason has published over 100 papers, including Best Paper awards at ICML and ECML, and received a Test of Time Award for his work “A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning” (with Ronan Collobert). He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. Jason was also listed as the 16th most influential machine learning scholar by AMiner and as one of the top 50 authors in Computer Science in Science.

Lectures




 

Tutorial Speakers

Each Tutorial Speaker will hold several lessons on one or more research topics.

(TBA)