# Advanced Continuous Optimization (M2 course at University Paris-Saclay)

This page describes the 60-hour course Advanced Continuous Optimization, given at University Paris-Saclay in the M2 Optimization during the academic year 2018-2019. It contains
- the teachers' web pages,
- a short presentation of the course contents,
- the detailed program of part I, part II, and part III.

### Teachers

Jean Charles Gilbert (Inria Paris),
Claudia Sagastizábal (Unicamp, Brazil).

### Presentation

The first part of the module (30 hours) starts with the optimality conditions of an optimization problem stated in a rather general form, so that they can be used for a large variety of problems: the constraints are expressed by $c(x)\in G$, where $c:\mathbb{E}\to\mathbb{F}$ is a nonlinear map between two Euclidean spaces $\mathbb{E}$ and $\mathbb{F}$, and $G$ is a closed convex subset of $\mathbb{F}$. Next, the course describes and analyzes various advanced algorithms for solving functional inclusions (of the form $F(x)+N(x)\ni0$, where $F$ is a function and $N$ is a multifunction) and optimization problems (nonsmooth methods, linearization methods, proximal and augmented Lagrangian methods, interior-point methods), and shows how they can be used to solve a few classical optimization problems (linear optimization, convex quadratic optimization, semidefinite optimization (SDO), nonlinear optimization). Along the way, various tools from convex and nonsmooth analysis are presented. Everything is set in finite dimension. The goal of the lectures is therefore to consolidate basic knowledge of optimization, in both its theoretical and algorithmic aspects.
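As a minimal illustration of the functional-inclusion viewpoint (not part of the course material, and in Python rather than the course's Matlab): when $N$ is the normal cone to a closed convex set $G$ and $F=\nabla f$, the inclusion $F(x)+N(x)\ni0$ is equivalent to the fixed-point equation $x = P_G(x - t\nabla f(x))$, which the projected-gradient iteration exploits. All names below are illustrative.

```python
import numpy as np

# Minimize f(x) = 0.5*||A x - b||^2 over the box G = [0, 1]^n.
# Stationarity is the inclusion  grad f(x) + N_G(x) ∋ 0,  where N_G is the
# normal cone to G; its fixed-point form is  x = P_G(x - t * grad f(x)).

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)

def grad(x):
    return A.T @ (A @ x - b)

def project_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

t = 1.0 / np.linalg.norm(A.T @ A, 2)   # step 1/L, L = Lipschitz constant of grad
x = np.zeros(5)
for _ in range(2000):
    x = project_box(x - t * grad(x))

# At a solution, x is a fixed point of the projected-gradient map.
residual = np.linalg.norm(x - project_box(x - t * grad(x)))
print(residual)
```

The fixed-point residual is the natural optimality measure here: it vanishes exactly when $0\in\nabla f(x)+N_G(x)$.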

The second part of the module (20 hours) focuses on the implementation in Matlab of some of the algorithms seen previously; it lets the student understand their behavior and evaluate their efficiency on a concrete problem. A choice must be made between the following two projects: (i) the implementation of an SQP solver to solve the hanging chain problem (viewed as a nonlinear optimization problem), or (ii) the implementation of a self-dual conic optimization (SDCO) solver in real numbers to solve various academic/concrete problems, such as the global minimization of a univariate polynomial or a few small-size OPF problems (more precisely, the rank relaxation of a QCQP version of this problem).
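To give a flavor of project (i) without revealing its solution, here is a hedged Python sketch of the hanging chain problem (the projects themselves are in Matlab): the chain's joints are the variables, the potential energy is minimized, and the rigid links give equality constraints. SciPy's SLSQP is itself an SQP-type method, so it stands in for the solver the student would implement. The discretization below (energy as the sum of joint heights, equal link lengths) is a simplification chosen for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hanging chain: n free joints between fixed endpoints a=(0,0) and b=(1,0),
# joined by n+1 rigid links of equal length L.  Minimizing the potential
# energy (here simplified to the sum of joint heights) subject to the
# link-length constraints is a small nonlinear program; scipy's SLSQP is
# itself an SQP-type method.
n, a, b = 5, np.array([0.0, 0.0]), np.array([1.0, 0.0])
L = 2.0 / (n + 1)          # total chain length 2 > dist(a, b) = 1, so it sags

def joints(z):
    # z packs the free joints; prepend/append the fixed endpoints
    p = z.reshape(n, 2)
    return np.vstack([a, p, b])

def energy(z):
    return joints(z)[1:-1, 1].sum()     # sum of free-joint heights

def link_lengths(z):
    p = joints(z)
    return np.linalg.norm(np.diff(p, axis=0), axis=1) - L   # = 0 when feasible

z0 = np.column_stack([np.linspace(0, 1, n + 2)[1:-1],
                      -0.1 * np.ones(n)]).ravel()
res = minimize(energy, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": link_lengths}])
print(res.x.reshape(n, 2))
```

At the solution the joints trace a discrete catenary-like curve sagging below the endpoints.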

The last part of the module is a 10 hour lecture given by Claudia Sagastizábal (Unicamp, Brazil).

### Detailed program

#### First part

The course is composed of 7 lectures of 4h15 each, which makes it approximately 30h long.
The course is given at ENSTA, room 2234. Contact Victoria Perez de Laborda (victoria.perez-de-laborda@ensta-paristech.fr) to obtain authorization to enter ENSTA.
See below for the schedule.

Bibliography
• J.F. Bonnans, J.Ch. Gilbert, C. Lemaréchal, C. Sagastizábal (2006). Numerical Optimization - Theoretical and Practical Aspects (second edition). Universitext, Springer Verlag, Berlin. [authors] [editor]
• J.F. Bonnans, A. Shapiro (2000). Perturbation Analysis of Optimization Problems. Springer Verlag, New York.
• A.L. Dontchev, R.T. Rockafellar (2009). Implicit Functions and Solution Mappings - A View from Variational Analysis. Springer Monographs in Mathematics. Springer.
• J.Ch. Gilbert (2015). Éléments d'Optimisation Différentiable - Théorie et Algorithmes. Lecture notes, ENSTA-ParisTech. [internet].
• J.-B. Hiriart-Urruty, C. Lemaréchal (1996). Convex Analysis and Minimization Algorithms (second edition). Grundlehren der mathematischen Wissenschaften, 305-306. Springer-Verlag.
• A.F. Izmailov, M.V. Solodov (2014). Newton-Type Methods for Optimization and Variational Problems. Springer Series in Operations Research and Financial Engineering, Springer.
• Y.E. Nesterov, A.S. Nemirovskii (1994). Interior-Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics, 13. SIAM, Philadelphia, PA, USA.
• J. Renegar (2001). A Mathematical View of Interior-Point Methods in Convex Optimization. MPS-SIAM Series on Optimization 3, SIAM.
• R.T. Rockafellar (1970). Convex Analysis. Princeton Mathematics Ser., 28. Princeton University Press, Princeton, New Jersey.
• R.T. Rockafellar, R. Wets (1998). Variational Analysis. Grundlehren der mathematischen Wissenschaften, 317. Springer.
• C. Roos, T. Terlaky, J.-Ph. Vial (2006). Interior Point Methods for Linear Optimization (second edition). Springer.
• S.J. Wright (1997). Primal-Dual Interior-Point Methods. SIAM Publication, Philadelphia.
Day-by-day program

#### Second part

This part is composed of 5 sessions of 4h each (Mondays, 14h-18h), which makes it 20h long.

The goal of this course is to guide the student in the implementation of some well-known optimization algorithms and in their use to solve a concrete problem. The student will choose between the following two projects.

• Project SQP+HC. Implementation of an SQP algorithm (a solver of nonlinear optimization) and its use to solve the hanging chain problem.
• Project SDCO+OPF. Implementation of an interior point algorithm to solve a self-dual conic optimization problem in real numbers (which includes linear semidefinite optimization and linear optimization) and its use to solve the rank relaxation of a small-size OPF problem (expressed as a QCQP problem).
Presentation of the projects
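For project SDCO+OPF, the simplest instance of self-dual conic optimization is linear optimization over the nonnegative orthant. The following is a bare-bones, illustrative Python sketch (the project itself is in Matlab) of a primal-dual interior-point iteration: a damped Newton step on the perturbed KKT system, with the perturbation $\mu$ driven to zero. All names and parameter choices are illustrative, not the project's specification.

```python
import numpy as np

# A bare-bones primal-dual interior-point method for the LP
#     min c'x  s.t.  A x = b,  x >= 0,
# i.e. conic optimization over the nonnegative orthant (the simplest
# instance of the self-dual conic framework).  All names are illustrative.
def ip_lp(A, b, c, x, y, s, iters=30, sigma=0.1):
    m, n = A.shape
    for _ in range(iters):
        mu = sigma * (x @ s) / n        # target on the central path
        r_p = b - A @ x                 # primal residual
        r_d = c - A.T @ y - s           # dual residual
        r_c = mu - x * s                # perturbed complementarity residual
        d = x / s
        # normal equations for the Newton step on the KKT system
        M = A @ (d[:, None] * A.T)
        rhs = r_p - A @ (r_c / s) + A @ (d * r_d)
        dy = np.linalg.solve(M, rhs)
        ds = r_d - A.T @ dy
        dx = (r_c - x * ds) / s
        # damped step keeping (x, s) strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# min x1 + 2 x2  s.t.  x1 + x2 = 1,  x >= 0   (solution: x = (1, 0))
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y, s = ip_lp(A, b, c, x=np.array([0.5, 0.5]), y=np.zeros(1), s=c.copy())
print(x)   # close to (1, 0)
```

The SDCO project generalizes this scheme from the nonnegative orthant to products of semidefinite cones, which is what makes the OPF rank relaxation tractable.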

#### Third part

A 10-hour lecture by Claudia Sagastizábal on
A V-U point of view of nonsmooth optimization
at ENSTA ParisTech, Palaiseau:
• Lecture 1 - Monday, January 14, 09h30-12h30, Room 2.2.13
• Lecture 2 - Monday, January 14, 14h00-17h00, Room 2.2.13
• Lecture 3 - Tuesday, January 15, 09h30-12h30, Room 2.2.13
• Lecture 4 - Tuesday, January 15, 14h00-17h00, Room 2.2.13

Summary of the lectures

The realization that many nondifferentiable functions exhibit some form of structured nonsmoothness has been attracting the efforts of many researchers in the last decades. Identifying theoretically and computationally certain manifolds where a nonsmooth function behaves smoothly poses challenges for the nonsmooth optimization community. We review a sequence of milestones in the area that led to the development of algorithms of the bundle type that can track the region of smoothness and mimic a Newton algorithm to converge with superlinear speed. The new generation of bundle methods is sufficiently versatile to deal with structured objective functions, even when the available information is inexact.
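The primitive that bundle methods refine and stabilize is Kelley's cutting-plane method: the nonsmooth convex objective is replaced by the max of its accumulated linearizations (cuts), and each iterate minimizes that polyhedral model. The following Python sketch is purely illustrative of this primitive (it is not the V-U algorithm of the lectures); the test function and tolerances are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Kelley's cutting-plane method: replace the nonsmooth convex f by the max
# of its accumulated linearizations (cuts), and minimize that model.
f = lambda x: abs(x) + 0.5 * (x - 0.5) ** 2          # minimizer x* = 0, f* = 0.125
g = lambda x: np.sign(x) + (x - 0.5)                 # a subgradient of f

cuts = []                      # each cut: f(xi) + gi * (x - xi) <= t
x = 1.0
for _ in range(40):
    fx, gx = f(x), g(x)
    cuts.append((gx, gx * x - fx))                   # gi*x - t <= gi*xi - f(xi)
    A = [[gi, -1.0] for gi, _ in cuts]
    b = [bi for _, bi in cuts]
    # minimize t over the polyhedral model, with x restricted to [-1, 1]
    res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b,
                  bounds=[(-1.0, 1.0), (None, None)])
    x, model_val = res.x
    if f(x) - model_val < 1e-9:                      # model matches f: stop
        break
print(x, f(x))
```

Bundle methods improve on this by adding a proximal (stabilizing) term to the model, which is precisely where the V-U decomposition enters: a Newton-like step can be taken along the subspace where f behaves smoothly.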

Examination

Write a report of a few pages explaining what you have understood (and possibly learned) from the lectures.
Deadline: Sunday, January 27, 12 pm.