Proposed courses
Optimization in machine learning
Summary: This course covers recent advances in scalable algorithms for convex optimization, with a particular emphasis on training (linear) predictors via the empirical risk minimization paradigm. The material will be presented in a unified way wherever possible. Randomized, deterministic, primal, dual, accelerated, serial, parallel and distributed methods will be discussed. The course will start in an unusual place: a concise yet powerful theory of randomized iterative methods for linear systems. While of independent interest, this will highlight many of the algorithmic schemes and tools we shall encounter later in the course.
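The course's starting point, randomized iterative methods for linear systems, can be illustrated with the randomized Kaczmarz method. The sketch below is a minimal illustration and not course material; the row-sampling distribution (rows weighted by their squared norm) follows the classical Strohmer–Vershynin variant.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve a consistent linear system Ax = b by randomized row projections."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample rows with probability proportional to their squared norm.
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project the current iterate onto the hyperplane {x : A[i] @ x = b[i]}.
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Example: a small consistent system with a known solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true))  # close to zero
```

Each iteration touches a single row of `A`, which is exactly the kind of cheap randomized update that scales to large problems.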
Reinforcement learning
Summary: The course will cover the basic models and techniques of reinforcement learning (RL). We will begin by reviewing the Markov decision process (MDP) model used to formalize the interaction between a learning agent and an (unknown) dynamic environment. After introducing the dynamic programming techniques used to compute the exact optimal solution of an MDP known in advance, we will move to the actual learning problem where the MDP is unknown, and we will introduce popular algorithms such as Q-learning and SARSA. This will lead to the analysis of two critical aspects of RL algorithms: how to trade off exploration and exploitation, and how to accurately approximate solutions. The core of the exploration-exploitation problem will be studied in the celebrated multi-armed bandit framework and its application to modern recommendation systems. Finally, a few examples of approximate dynamic programming will be presented together with some guarantees on their performance. The hands-on session will focus on implementing multi-armed bandit algorithms applied to the problem of policy optimization and online RL for simple navigation problems.
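As a taste of the exploration-exploitation trade-off, here is a minimal ε-greedy strategy for a Bernoulli multi-armed bandit. This is an illustrative sketch only; the arm means and hyperparameters are invented for the example.

```python
import numpy as np

def epsilon_greedy_bandit(true_means, steps=10000, eps=0.1, seed=0):
    """Run ε-greedy on a Bernoulli multi-armed bandit (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)      # pulls per arm
    estimates = np.zeros(k)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            a = int(rng.integers(k))        # explore: pick a random arm
        else:
            a = int(np.argmax(estimates))   # exploit: pick the best estimate
        reward = float(rng.random() < true_means[a])  # Bernoulli reward
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean
    return estimates, counts

est, cnt = epsilon_greedy_bandit([0.2, 0.5, 0.8])
print(int(np.argmax(cnt)))  # the best arm (index 2) is pulled most often
```

With a small constant ε, the agent spends most of its pulls on the empirically best arm while still sampling the others often enough to correct early mistakes.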
Dictionary learning
Summary: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this course is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
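Sparse coding, i.e. representing a signal as a linear combination of a few dictionary atoms, can be sketched with Orthogonal Matching Pursuit, one standard greedy method. In this toy example the dictionary is random rather than learned, purely for illustration.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of dictionary D
    to approximate signal y as a sparse linear combination."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# A random overcomplete dictionary and a 3-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(256)
x_true[[3, 40, 200]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = omp(D, y, k=3)
print(np.flatnonzero(x_hat))  # selected atoms; ideally the true support {3, 40, 200}
```

Dictionary learning alternates a sparse-coding step like this one with an update of the atoms themselves, so that `D` adapts to the data.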
Information Retrieval and Machine Learning
Summary: This course is an introduction to the intersection between Information Retrieval (IR) and Machine Learning (ML) models. ML has been at the basis of some IR tasks such as document ranking and relevance feedback. On the other hand, IR poses new challenges to ML because of the peculiar nature of the context in which data are observed. In this course, I will first introduce the tasks of IR and then the use of ML techniques to address them.
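As a minimal example of the document-ranking task, the sketch below scores documents for a query with a simple TF-IDF heuristic. It is an illustrative toy, not the course's method; the tokenization and smoothing choices are arbitrary.

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank documents for a query by a simple length-normalized TF-IDF score."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)

    def idf(term):
        # Smoothed inverse document frequency: rare terms weigh more.
        df = sum(term in doc for doc in tokenized)
        return math.log((n + 1) / (df + 1)) + 1

    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = sum(tf[t] * idf(t) for t in query.lower().split())
        scores.append(score / (len(doc) or 1))  # normalize by document length
    # Return document indices sorted by descending score.
    return sorted(range(n), key=lambda i: -scores[i])

docs = [
    "machine learning for ranking documents",
    "cooking recipes and kitchen tips",
    "learning to rank with machine learning models",
]
print(tfidf_rank("machine learning", docs))  # → [2, 0, 1]
```

Learning-to-rank methods replace such hand-crafted scores with models trained on relevance judgments, which is where ML enters the picture.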