Statistics of complex stochastic models in mathematical finance
Project within the DFG Collaborative Research Centre 823 "Statistik nichtlinearer dynamischer Prozesse"

The goal of this project is to develop new statistical methods for estimating selected parameters in generalized moving-average and Ornstein-Uhlenbeck processes, which provide a unified framework for modelling and analysing a variety of processes in financial and engineering applications; processes with long-range dependence are also subsumed under this framework. To this end, we first study distributional properties of generalized moving-average processes and, building on these, propose new estimation methods that remain applicable where existing methods fail, for example in the case of low-frequency observations. Finally, we intend to apply the resulting statistical procedures to electricity market data.
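As a toy illustration of the kind of estimation problem involved, the following sketch simulates a standard Ornstein-Uhlenbeck process (a special case of the generalized processes studied here) at discrete observation times and recovers the mean-reversion parameter by least squares; all parameter values and function names are illustrative assumptions, not the project's methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exact discretization of dX_t = -theta * X_t dt + sigma dW_t:
#   X_{t+dt} = a * X_t + eps,  a = exp(-theta*dt),  Var(eps) = sigma^2 (1 - a^2) / (2 theta)
theta_true, sigma_true, dt, n = 1.5, 0.4, 0.1, 20000
a = np.exp(-theta_true * dt)
noise_sd = sigma_true * np.sqrt((1 - a**2) / (2 * theta_true))

x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + noise_sd * rng.standard_normal()

# Least-squares (AR(1)) estimate of the autoregression coefficient,
# then invert the exact discretization to recover theta
a_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
theta_hat = -np.log(a_hat) / dt
print(theta_hat)
```

Note that this classical estimator relies on the Markov/AR(1) structure; for the generalized moving-average processes of the project (in particular under long-range dependence or low observation frequency), exactly this kind of approach breaks down, which is what motivates the new methods.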
Solving optimal stopping problems and reflected backward stochastic differential equations by convex optimization and penalization
Project within the DFG Priority Programme 1324 "Extraktion quantifizierbarer Information aus komplexen Systemen", jointly with Prof. Ch. Bender, Universität des Saarlandes
The theory of optimal stopping is concerned with choosing a time to take a particular action in order to maximize an expected reward or minimize an expected cost. Reflected backward stochastic differential equations can be regarded as generalizations of optimal stopping problems in which the reward functional may also depend on the solution. Such problems arise in many areas of statistics, economics, and mathematical finance (e.g. the pricing of American options). Primal and dual approaches have been developed in the literature which give rise to Monte Carlo algorithms for high-dimensional stopping problems. Typically, these algorithms lead to problems of functional convex optimization in which the original objective functionals have to be estimated by Monte Carlo. Despite the convexity, the performance of these optimization algorithms deteriorates sharply as the dimension of the underlying state space increases, unless there exists a good low-dimensional approximation of the optimal value function. The aim of this project is to develop several novel approaches, based on penalizing the corresponding empirical objective functionals, that are able either to recover the most important components of the state space or to identify a sparse representation of the value function within a given class of functions.
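A minimal sketch of the regression-based (primal) Monte Carlo approach for a Bermudan put, with a simple ridge penalty on the empirical regression objective standing in for the more refined penalization schemes to be developed; all parameter values, the polynomial basis, and the penalty weight are illustrative assumptions, not the project's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bermudan put on geometric Brownian motion (illustrative parameters)
s0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate log-normal stock paths
z = rng.standard_normal((paths, steps))
log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)

def basis(x):
    """Polynomial basis for the continuation-value regression."""
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

lam = 1e-6  # ridge penalty added to the empirical least-squares objective
cash = np.maximum(K - s[:, -1], 0.0)  # payoff at maturity
for t in range(steps - 2, -1, -1):
    cash *= disc  # discount future cashflows back one exercise date
    itm = K - s[:, t] > 0  # regress only on in-the-money paths
    if itm.sum() < 10:
        continue
    A = basis(s[itm, t] / K)
    # Penalized least squares: (A'A + lam I) beta = A'y
    beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ cash[itm])
    cont = A @ beta  # estimated continuation value
    exercise = np.maximum(K - s[itm, t], 0.0)
    stop_now = exercise > cont
    idx = np.where(itm)[0][stop_now]
    cash[idx] = exercise[stop_now]

price = disc * cash.mean()
print(price)
```

In higher dimensions the basis grows rapidly and the plain least-squares step becomes ill-conditioned; the project's penalization approaches aim precisely at keeping such regressions tractable by enforcing sparsity or selecting the relevant components of the state space.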
Calibration errors in risk management
Project within the DFG Collaborative Research Centre 823 "Statistik nichtlinearer dynamischer Prozesse"
With the dissemination of quantitative methods in risk management and the introduction of complex derivative products, statistical methods have come to play an increasingly important role in the pricing of financial derivatives, especially in the context of calibrating models to derivative instruments. While the use of such methods has undeniably led to better management of market risk, it has in turn given rise to a new type of risk linked to the unknown error bounds for the quantities delivered by these methods. Once a pricing model is specified, the aim of calibration is to estimate its parameters from the prices of liquidly traded options, such as call and put options on major indices, exchange rates and major stocks. For such an option the price is determined by supply and demand on the market. Because of the bid-ask spread and the small number of options available daily on a given stock (or interest rate), calibration is an ill-posed problem and has to be treated carefully. For example, in the Merton jump-diffusion model, with a bid-ask spread of order 1%, even a set of as many as 50 vanilla call options can lead to relative errors in the parameter estimates of up to 20% if the calibration is not accompanied by proper regularization. Moreover, the use of different calibration procedures (for example, based on different error measures) can lead to different calibration results and gives rise to calibration uncertainty, or calibration risk. The unknown error bounds can not only lead to the mispricing of derivative products but can also render this mispricing unnoticeable for a long time. While this type of risk is acknowledged by most practitioners who make use of quantitative methods, most of the discussion of this subject has remained at a qualitative level.
The aim of this project is to quantify the statistical and numerical errors arising during calibration (by constructing error bounds and investigating worst-case scenarios) and to propose new, computationally efficient calibration algorithms that can reduce these errors.
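To illustrate why regularization matters, the following sketch calibrates a single Black-Scholes volatility to synthetic option prices perturbed by bid-ask-style noise, using a Tikhonov penalty toward a prior value; the one-parameter model, the penalty weight, and all numbers are our illustrative choices, far simpler than the Merton-type calibrations considered in the project:

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(s, k, t, r, vol):
    """Black-Scholes call price."""
    d1 = (log(s / k) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * N(d1) - k * exp(-r * t) * N(d2)

# Synthetic "market": true vol 0.25, prices perturbed by ~1% bid-ask-like noise
rng = np.random.default_rng(2)
s0, r, t = 100.0, 0.02, 0.5
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
true_vol = 0.25
market = np.array([bs_call(s0, k, t, r, true_vol) for k in strikes])
market *= 1 + 0.01 * rng.uniform(-1, 1, market.size)

def objective(vol, lam, vol_prior):
    """Squared pricing error plus Tikhonov penalty toward a prior volatility."""
    model = np.array([bs_call(s0, k, t, r, vol) for k in strikes])
    return np.sum((model - market) ** 2) + lam * (vol - vol_prior) ** 2

# Grid search over the single parameter (keeps the sketch dependency-free)
grid = np.linspace(0.05, 0.60, 1101)
lam, vol_prior = 50.0, 0.20
vol_hat = grid[np.argmin([objective(v, lam, vol_prior) for v in grid])]
print(vol_hat)
```

With one parameter the penalty mainly stabilizes the estimate against price noise; in multi-parameter models such as Merton's, where the data term has flat directions, the choice of the penalty and its weight determines how large the 20%-type parameter errors described above become, which is exactly the error quantification this project targets.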