General Mathematics

Sequential learning and stochastic optimization of convex functions

Authors: Xavier Fontaine

In this thesis we study several machine learning problems that all involve the minimization of a noisy function, which will often be convex. Inspired by real-life applications, we focus on sequential learning problems, which consist in treating the data ``on the fly'', i.e., in an online manner. The first part of this thesis is thus devoted to the study of three different sequential learning problems, all of which face the classical ``exploration vs. exploitation'' trade-off. Each of these problems involves a decision maker who has to take actions in order to maximize a reward or to estimate a parameter under uncertainty, meaning that the rewards or the feedback from the possible actions are unknown and noisy. We show that all of these problems can be studied within the framework of stochastic convex optimization, and we propose and analyze algorithms to solve them. In the second part of this thesis we focus on the analysis of the Stochastic Gradient Descent algorithm, likely one of the most widely used stochastic optimization algorithms in machine learning. We provide an exhaustive analysis in the convex setting and in some non-convex situations by studying the associated continuous-time model, and obtain new optimal convergence results.
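
To illustrate the ``exploration vs. exploitation'' trade-off mentioned in the first part, here is a minimal, hypothetical sketch of an epsilon-greedy strategy on a stochastic multi-armed bandit. It is not the algorithm analyzed in the thesis, only a standard toy example: the function name `epsilon_greedy_bandit` and all parameters (`true_means`, `eps`, `noise_std`) are assumptions made for this illustration.

```python
import numpy as np

def epsilon_greedy_bandit(true_means, n_rounds=10_000, eps=0.1, noise_std=1.0, seed=0):
    """Illustrative epsilon-greedy strategy on a stochastic multi-armed bandit.

    With probability `eps` the decision maker explores a uniformly random arm;
    otherwise it exploits the arm with the best empirical mean. Rewards are noisy,
    so the empirical means are only estimates of the (unknown) `true_means`.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)        # number of pulls per arm
    estimates = np.zeros(k)     # empirical mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < eps:
            arm = int(rng.integers(k))       # explore
        else:
            arm = int(np.argmax(estimates))  # exploit
        reward = true_means[arm] + noise_std * rng.standard_normal()
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total_reward += reward
    return total_reward, estimates

# Three arms with unknown means: the strategy must balance trying every arm
# (exploration) against repeatedly pulling the empirically best one (exploitation).
reward, est = epsilon_greedy_bandit(true_means=[0.2, 0.5, 0.8])
print(est)
```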
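
For the second part, the following is a minimal sketch of Stochastic Gradient Descent on a noisy convex objective (single-sample least squares) with a decreasing step size of order 1/t, a standard choice for strongly convex problems. It only illustrates the general algorithm studied in the thesis; the setup (`sgd_least_squares`, the synthetic data model, and the step-size schedule) is an assumption for the example, not the thesis's specific analysis.

```python
import numpy as np

def sgd_least_squares(n_steps=5_000, dim=5, noise_std=0.5, seed=0):
    """Minimal SGD sketch on a noisy convex objective (linear least squares).

    At each step one random sample (x_t, y_t) with y_t = <x_t, theta*> + noise is drawn,
    and a gradient step is taken on the single-sample squared loss with step size 1/t.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(dim)    # unknown minimizer (fixed for the demo)
    theta = np.zeros(dim)
    for t in range(1, n_steps + 1):
        x = rng.standard_normal(dim)
        y = x @ theta_star + noise_std * rng.standard_normal()
        grad = (x @ theta - y) * x           # gradient of 0.5 * (<x, theta> - y)^2
        gamma = 1.0 / t                      # decreasing step size
        theta -= gamma * grad
    return np.linalg.norm(theta - theta_star)

# Distance to the true minimizer; it shrinks as the number of noisy gradient steps grows.
print(sgd_least_squares())
```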