# RTG 2131 - Mini-Workshop 2018

__Mini-Workshop on high-dimensional BSDEs and PDEs__

**ARNULF JENTZEN (ETH Zürich)**

#### An Introduction to Machine Learning Approximation Methods: Algorithms, Error Analyses, Curse of Dimensionality, and Partial Differential Equations (PDEs)

Machine learning approximation methods have been used successfully in a series of applications ranging from computer vision, image classification, speech recognition, and natural language processing to computational advertising. Recently, machine learning approximation methods have also started to be used to solve complex mathematical problems such as high-dimensional partial differential equations (PDEs). The aim of this short course is to provide a self-contained introduction to machine learning approximation methods. The course covers material

- on deterministic and stochastic optimization algorithms,
- on artificial neural networks and their approximation capacities,
- on the curse of dimensionality as well as
- on machine learning based approximation methods for PDEs.

In particular, we will provide an introduction to state-of-the-art stochastic gradient descent optimization methods such as the Adam optimizer.
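To make the optimizer concrete, here is a minimal sketch of the Adam update rule in NumPy; the function name, step sizes, and the toy objective below are our own illustrative choices, not material from the course:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and of its
    elementwise square, with bias correction for the zero initialization."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)          # bias-corrected first moment
    v_hat = v / (1 - beta2**t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = (x - 3)^2 from noisy gradient evaluations,
# mimicking the stochastic gradients that arise from mini-batching.
rng = np.random.default_rng(0)
theta, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2 * (theta - 3.0) + 0.1 * rng.standard_normal(1)  # noisy gradient
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
```

The per-coordinate normalization by $\sqrt{\hat v}$ is what distinguishes Adam from plain stochastic gradient descent: step sizes adapt automatically to the scale of each gradient component.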

**CHRISTA CUCHIERO (University of Vienna)**

#### Markovian representations of affine stochastic Volterra processes

We consider stochastic partial differential equations appearing as Markovian lifts of Volterra processes with jumps. In particular we provide existence and uniqueness results for Markovian lifts of affine rough volatility models of general jump diffusion type. We also discuss extensions to polynomial Volterra processes and provide a moment formula.

**NIZAR TOUZI** (École Polytechnique, France)

#### Random horizon second order backward SDE and Principal-Agent problem

Backward stochastic differential equations extend the martingale representation theorem to the nonlinear setting. This can be seen as the path-dependent counterpart of the extension from the heat equation to fully nonlinear parabolic equations in the Markov setting. We provide an extension of such a nonlinear representation to the context where the random variable of interest is measurable with respect to the information at a finite stopping time. We provide a complete well-posedness theory which covers the semilinear case (backward SDE), the semilinear case with obstacle (reflected backward SDE), and the fully nonlinear case (second order backward SDE). The results can be applied to the so-called Principal-Agent problem in continuous-time contract theory.

**NADJA OUDJANE** (EDF R&D, France)

#### McKean Feynman-Kac representations of nonlinear PDEs and related numerical approximations

The presentation focuses on recent forward numerical schemes based on generalized Fokker-Planck representations for nonlinear PDEs in high space dimension. In the specific case of mass-conservative PDEs, it is well known that the solution can be probabilistically represented via the marginal densities of a Markov diffusion that is nonlinear in the sense of McKean. One can then design forward interacting particle schemes to approximate the solution of the PDE numerically. We present extensions of this kind of representation, and of the associated interacting particle schemes, to a large class of PDEs, including non-conservative and non-integrable cases with various kinds of nonlinearities.

This is a joint work with Francesco Russo, ENSTA ParisTech.
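To illustrate the general mechanism of a forward interacting particle scheme, here is a minimal sketch for a toy mean-field model of our own choosing (a linear mean-reverting McKean-Vlasov SDE, not the specific class of PDEs treated in the talk): the unknown mean $\mathbb{E}[X_t]$ in the drift is replaced by the empirical mean of $N$ particles, and the empirical distribution then approximates the marginal density solving the corresponding nonlinear Fokker-Planck equation.

```python
import numpy as np

def simulate_particles(N=5000, T=1.0, n_steps=200, sigma=0.5, seed=0):
    """Euler scheme for the interacting particle system approximating
       dX_t = -(X_t - E[X_t]) dt + sigma dW_t  (toy McKean-Vlasov SDE)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = rng.normal(2.0, 1.0, size=N)       # initial law N(2, 1)
    for _ in range(n_steps):
        drift = -(X - X.mean())            # interaction via the empirical mean
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return X

X = simulate_particles()
# For this drift the mean is conserved, so the empirical mean stays near 2,
# while the variance relaxes toward its stationary value sigma^2 / 2.
```

A histogram of the returned particles is the forward, density-free approximation of the PDE solution at time $T$; no backward equation or grid is needed.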

**THOMAS KRUSE (University of Duisburg-Essen)**

#### Multilevel Picard approximations for high-dimensional nonlinear parabolic partial differential equations

In this talk we present a family of new approximation methods for high-dimensional PDEs and BSDEs. A key idea of our methods is to combine multilevel approximations with Picard fixed-point approximations. Thereby we obtain a class of multilevel Picard approximations. Our error analysis proves that for semi-linear heat equations, the computational complexity of one of the proposed methods is bounded by $O(d\,\varepsilon^{-(4+\delta)})$ for any $\delta > 0$, where $d$ is the dimensionality of the problem and $\varepsilon \in (0,\infty)$ is the prescribed accuracy. We illustrate the efficiency of one of the proposed approximation methods by means of numerical simulations presenting approximation accuracy against runtime for several nonlinear PDEs from physics (such as the Allen-Cahn equation) and financial engineering (such as derivative pricing incorporating default risks) in the case of $d=100$ space dimensions.

The talk is based on joint work with W. E, M. Hutzenthaler, and A. Jentzen.
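The combination of multilevel and Picard approximations mentioned above can be summarized, in our own (simplified) notation, by a telescoping identity: writing $\Phi$ for the Feynman-Kac fixed-point map, so that the PDE solution satisfies $u = \Phi(u)$, the Picard iterates obey

```latex
u_k \;=\; \Phi(u_{k-1})
    \;=\; \Phi(u_0) \;+\; \sum_{l=1}^{k-1}\bigl(\Phi(u_l) - \Phi(u_{l-1})\bigr).
```

The multilevel idea is to estimate the level-$l$ correction $\Phi(u_l) - \Phi(u_{l-1})$ with a number of Monte Carlo samples that decreases as $l$ grows: the corrections at fine Picard levels are small and therefore need few samples, which is what keeps the overall cost polynomial in $d$.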

**LUKASZ SZPRUCH (University of Edinburgh)**

#### Weak Error Expansion for Mean-Field SDEs

We consider a stochastic process $X$, described by a mean-field stochastic differential equation, whose coefficients depend on the evolving law of the process itself. Such equations arise as limits of systems of interacting particles $Y^{i,N}$, i.e., SDEs coupled through their empirical law. In this talk we show that, under suitable regularity assumptions, the weak error between $X$ and $Y^{i,N}$ can be expressed as $\sum_{j=1}^{k-1}\frac{C_j}{N^j} + O(\frac{1}{N^k})$ for some constants $C_1, \ldots, C_{k-1}$ that do not depend on $N$. That is, we formulate a weak-error particle expansion in the spirit of Talay and Tubaro. The expansion relies on the powerful machinery of differentiation with respect to a probability measure, proposed by P.-L. Lions in his lectures at the Collège de France. At the core of our proof lies a study of the regularity of PDEs on measure spaces, which might be of independent interest.
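A standard practical consequence of an expansion of this form (exactly as in the Talay-Tubaro analysis of the Euler scheme) is Richardson extrapolation across particle numbers: combining simulations with $N$ and $2N$ particles cancels the leading bias term. For a test function $\varphi$,

```latex
\mathbb{E}\bigl[\varphi(Y^{1,N}_T)\bigr]
  \;=\; \mathbb{E}\bigl[\varphi(X_T)\bigr] \;+\; \frac{C_1}{N} \;+\; O\!\Bigl(\frac{1}{N^2}\Bigr)
\quad\Longrightarrow\quad
2\,\mathbb{E}\bigl[\varphi(Y^{1,2N}_T)\bigr] \;-\; \mathbb{E}\bigl[\varphi(Y^{1,N}_T)\bigr]
  \;=\; \mathbb{E}\bigl[\varphi(X_T)\bigr] \;+\; O\!\Bigl(\frac{1}{N^2}\Bigr).
```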