Schmocker, Philipp
Dates: 2023-04-13; 2022-09-19
Handle: https://www.alexandria.unisg.ch/handle/20.500.14171/108272

This dissertation studies universal approximation between infinite-dimensional spaces, which is of particular interest for applications in Finance. In the finite-dimensional case, an interesting class of functions possessing the universal approximation property was discovered eighty years ago: neural networks, which, as concatenations of affine and non-linear functions, are rich enough to approximate every continuous function on a compact subset of Euclidean space arbitrarily well. However, this class of functions gained wider attention only after the turn of the millennium, through novel and promising applications in image and speech recognition as well as computer games. Today, neural networks are well known and an integral part of machine learning, a branch of artificial intelligence used to solve high-dimensional and non-linear approximation problems that were previously inaccessible to conventional methods. While the universal approximation property of neural networks between finite-dimensional spaces was proven mathematically forty years ago, the generalization to infinite-dimensional spaces is not straightforward. Of course, one could argue that everything in our world is finite, such as the number of atoms. Nevertheless, a generalization of neural networks and their approximation property to infinite-dimensional spaces would be desirable for solving problems in functional data analysis, dynamical systems, and partial differential equations. This would make it possible to learn path-dependent functionals, such as the payoff of an Asian option, or more general operators, for example the solution operator of a partial differential equation.
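The finite-dimensional universal approximation statement can be illustrated with a minimal sketch (an illustration under assumptions of our own, not code from the thesis): a one-hidden-layer tanh network whose hidden weights are drawn at random, and whose output layer alone is fitted by least squares, already approximates a smooth target function on a compact interval to high accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on the compact set [0, 1]
f = lambda x: np.cos(3 * x)
x = np.linspace(0.0, 1.0, 200)

# One hidden layer: concatenation of an affine map and the non-linearity tanh.
# Hidden weights and biases are drawn randomly (not trained).
n_hidden = 100
a = 5.0 * rng.normal(size=n_hidden)
b = rng.normal(size=n_hidden)
Phi = np.tanh(np.outer(x, a) + b)          # shape (200, 100): hidden activations

# Fit only the linear output layer by least squares.
w, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

# Uniform error of the network on the grid
err = np.max(np.abs(Phi @ w - f(x)))
```

With enough hidden units, `err` can be driven arbitrarily close to zero, which is the practical content of the universal approximation property in finite dimensions.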
In the first application, the universal approximation result between infinite-dimensional spaces is used in the framework of Stochastic Portfolio Theory (SPT) to learn an optimal path-dependent portfolio. SPT is a relatively young area of mathematical finance that attempts to describe equity markets in terms of observable characteristics and under realistic assumptions. In this context, a generating function determines the weights of a portfolio and thus its overall investment behaviour, so that an optimal portfolio can be learned by approximating the optimal generating function. This approach to portfolio generation has been extended to the path-dependent setting, where the generated portfolio may also depend on the past market trajectory. Since the generating functional now depends on a continuous path, an infinite-dimensional object, we use a functional neural network to approximate the optimal path-dependent functional. In the second application, we use the universal approximation result with infinite-dimensional range to approximate the pricing operator of financial derivatives with non-linear dynamics, which corresponds to the solution operator of certain Hamilton-Jacobi-Bellman equations. We apply this approach to learn American option prices in a complete market as well as European option prices in an incomplete market. The method is model-free in the sense that it can be applied to any financial market model, and the entire value function of the financial derivative of interest can be learned from other known option prices.
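A concrete instance of the generating-function mechanism is the classical entropy-weighted portfolio from SPT (a standard textbook example, shown here only as an illustration, not the thesis's learned portfolio): the Shannon entropy of the market weights generates a portfolio that tilts the market weights toward the smaller stocks.

```python
import numpy as np

# Hypothetical market weights of a toy four-stock market
# (relative capitalizations, normalized to sum to 1).
mu = np.array([0.5, 0.3, 0.15, 0.05])

# Shannon entropy as generating function: S(mu) = -sum_i mu_i * log(mu_i)
S = -np.sum(mu * np.log(mu))

# Portfolio generated by S: pi_i = -mu_i * log(mu_i) / S(mu).
# The weights again sum to 1 and overweight small stocks relative to mu.
pi = -mu * np.log(mu) / S
```

Replacing the fixed function S by a trainable (and, in the path-dependent setting, functional) neural network is what turns this mechanism into a learnable portfolio.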
In summary, we solve problems in Finance using machine learning techniques, and we prove the applicability of these methods mathematically through concepts of functional analysis.

Language: English
Keywords: Machine learning; Approximation theory; Weighted approximation; Functional; Portfolio selection; Pricing; Path spaces; Universal approximation; Stone-Weierstrass; Nachbin
Identifier: EDIS-5248
Title: Universal Approximation on Path Spaces and Applications in Finance
Type: doctoral thesis