Lectures


Lecture 1/3 “Agentic AI”

Abstract TBA

Lecture 2/3 “Agentic AI”

Abstract TBA

Lecture 3/3 “Agentic AI”

Abstract TBA



Lecture 1/3 TBA

Abstract TBA

Lecture 2/3 TBA

Abstract TBA

Lecture 3/3 TBA

Abstract TBA



Lecture 1/2 “Causal Effect Estimation with Context and Confounders (Part 1)”

A fundamental causal modelling task is to predict the effect of an intervention (or treatment) on an outcome, given context/covariates. Examples include predicting the effect of a medical treatment on patient health given patient symptoms and demographic information, or predicting the effect of ticket pricing on airline sales given seasonal fluctuations in demand. The problem becomes especially challenging when the treatment and context are complex (for instance, the “treatment” might be a web ad design or a radiotherapy plan), and when only observational data are available (i.e., we have access to historical data, but cannot intervene or conduct trials ourselves). The challenge is greater still when relevant covariates are unobserved, constituting a hidden source of confounding.

I will give an overview of practical tools and methods for estimating the causal effects of complex, high-dimensional treatments from observational data. The approach is based on conditional feature means, which represent conditional expectations of relevant model features. These features can be deep neural network features (adaptive, finite-dimensional, learned from data) or kernel features (fixed, infinite-dimensional, enforcing smoothness). The methods will be applied to modelling employment outcomes for the US Job Corps program for Disadvantaged Youth, and to policy evaluation in reinforcement learning.

Part 1 addresses the setting where all relevant information is observed (no hidden confounding), and where the aim is to predict (conditional) average causal effects from observational data, without resorting to interventions or randomized trials.
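As a toy illustration of this no-hidden-confounding setting (a minimal synthetic sketch of my own, not material from the lecture: a single scalar covariate and a linear outcome model stand in for the lecture's neural-net or kernel features), an average treatment effect can be estimated by regression adjustment, i.e., fitting E[Y | T, X] and then averaging over the observed covariates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Covariate X confounds both treatment assignment and outcome.
x = rng.normal(size=n)
t = (x + rng.normal(size=n) > 0).astype(float)  # treatment more likely for large x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)      # true causal effect of T is 2.0

# The naive group contrast is biased, since X drives both T and Y.
naive = y[t == 1].mean() - y[t == 0].mean()

# Regression (back-door) adjustment: fit E[Y | T, X], then average over X.
design = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

d1 = np.column_stack([np.ones(n), np.ones(n), x])   # everyone treated
d0 = np.column_stack([np.ones(n), np.zeros(n), x])  # no one treated
ate = (d1 @ beta - d0 @ beta).mean()

print(f"naive contrast: {naive:.2f}, adjusted ATE: {ate:.2f}")
```

For this linear model the adjusted estimate is simply the treatment coefficient, but the average-over-X recipe is the general one: with richer feature maps, the same averaging is what the conditional feature means in the lecture compute.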
Lecture 2/2 “Causal Effect Estimation with Context and Confounders (Part 2)”

A fundamental causal modelling task is to predict the effect of an intervention (or treatment) on an outcome, given context/covariates. Examples include predicting the effect of a medical treatment on patient health given patient symptoms and demographic information, or predicting the effect of ticket pricing on airline sales given seasonal fluctuations in demand. The problem becomes especially challenging when the treatment and context are complex (for instance, the “treatment” might be a web ad design or a radiotherapy plan), and when only observational data are available (i.e., we have access to historical data, but cannot intervene or conduct trials ourselves). The challenge is greater still when relevant covariates are unobserved, constituting a hidden source of confounding.

I will give an overview of practical tools and methods for estimating the causal effects of complex, high-dimensional treatments from observational data. The approach is based on conditional feature means, which represent conditional expectations of relevant model features. These features can be deep neural network features (adaptive, finite-dimensional, learned from data) or kernel features (fixed, infinite-dimensional, enforcing smoothness). The methods will be applied to modelling employment outcomes for the US Job Corps program for Disadvantaged Youth, and to policy evaluation in reinforcement learning.

Part 2 addresses the setting where hidden confounding is present and can be accounted for using techniques such as instrumental variables and proxy variables.
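A minimal sketch of one such technique, instrumental variables (synthetic data and variable names are my own, not taken from the lecture): given a single instrument Z that shifts the treatment T but affects the outcome Y only through T, the two-stage least-squares estimate reduces to the Wald ratio Cov(Z, Y) / Cov(Z, T), which removes the bias induced by a hidden confounder U:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

u = rng.normal(size=n)                      # hidden confounder (never observed)
z = rng.normal(size=n)                      # instrument: moves T, no direct path to Y
t = z + u + 0.5 * rng.normal(size=n)
y = 2.0 * t + 3.0 * u + rng.normal(size=n)  # true causal effect of T on Y is 2.0

# Naive OLS slope of Y on T is biased, since U drives both T and Y.
c_ty = np.cov(t, y)
naive = c_ty[0, 1] / c_ty[0, 0]

# With one instrument, two-stage least squares reduces to the Wald ratio.
iv = np.cov(z, y)[0, 1] / np.cov(z, t)[0, 1]

print(f"naive slope: {naive:.2f}, IV estimate: {iv:.2f}")
```

The IV estimate recovers the true effect even though U is never observed; the lecture's conditional-feature-mean machinery extends this idea to complex treatments and nonlinear relationships.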


Lecture 1/3 “World Models 1”

Abstract TBA

Lecture 2/3 “World Models 2”

Abstract TBA

Lecture 3/3 “World Models 3”

Abstract TBA



Lecture 1/3 “Mathematical Introduction to Stochastic Gradient Descent Optimization”

Abstract TBA

Lecture 2/3 “Error Analyses for Adam and further Accelerated and Adaptive Optimizers”

Abstract TBA

Lecture 3/3 “Deep Learning for High-Dimensional Partial Differential Equations”

Abstract TBA



Lecture TBA

Abstract TBA



Lecture TBA

Abstract TBA



Lecture 1/3 “Lightspeed RL Fine Tuning for LLMs”

Abstract TBA

Lecture 2/3 “Online and Offline RL Considerations for LLMs”

Abstract TBA

Lecture 3/3 “New Advances on the Theory of Language Generation and Hallucination”

Abstract TBA



Lecture 1/3 TBA

Abstract TBA

Lecture 2/3 TBA

Abstract TBA

Lecture 3/3 TBA

Abstract TBA



Lecture 1/3 “Self-Improving Language Models”

Abstract TBA

Lecture 2/3 “Self-Improving Agents”

Abstract TBA

Lecture 3/3 “The Future of Self-Improvement & the Promise of Co-Improving AI”

Abstract TBA




 

Tutorials


(TBA)