M2 course at the "Institut Polytechnique de Paris"
Advanced Continuous Optimization
This page describes the 48-hour course entitled Advanced Continuous Optimization, given at the "Institut Polytechnique de Paris" in the
M2 Optimization,
during the academic year 2020-2021. It contains
- the teacher's internet page,
- a short presentation of its contents,
- the detailed program of part I and part II.
The timetable of the other courses of the M2 is given here.
Jean-Charles Gilbert (Inria Paris).
Presentation
The first part of the module (30 hours) starts with the presentation of the optimality conditions of
an optimization problem described in a rather general manner, so that these can be useful for dealing with a large variety of
problems: the constraints are expressed by $c(x)\in G$, where $c:\mathbb{E}\to\mathbb{F}$ is a nonlinear map between two
Euclidean spaces $\mathbb{E}$ and $\mathbb{F}$, and $G$ is a closed convex part of $\mathbb{F}$. Next, the course describes and
analyzes various advanced algorithms to solve functional inclusions (of the kind $F(x)+N(x)\ni0$, where $F$ is a function and
$N$ is a multifunction) and optimization problems (nonsmooth methods, linearization methods, proximal and augmented Lagrangian
methods, interior point methods) and shows how they can be used to solve a few classical optimization problems (linear
optimization, convex quadratic optimization, semidefinite optimization (SDO), nonlinear optimization). Along the way, various
tools from convex and nonsmooth analysis will be presented. Everything is conceptualized in finite dimension. The goal of the
lectures is therefore to consolidate basic knowledge in optimization, on both theoretical and algorithmic aspects.
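In symbols, the general problem and the first-order condition sketched above can be written as follows (standard formulation, with the notation of the paragraph above):

```latex
% General problem (P_G): f is the objective, c maps E to F, G is closed convex in F
\[
(P_G)\qquad \min_{x\in\mathbb{E}} \; f(x) \quad\text{subject to}\quad c(x)\in G.
\]
% At a solution x* satisfying a constraint qualification (e.g., Robinson's condition),
% there is a multiplier \lambda such that
\[
\nabla f(x^*) + c'(x^*)^{*}\lambda = 0, \qquad \lambda \in N_G(c(x^*)),
\]
% where N_G denotes the normal cone to G. The functional inclusions studied next
% generalize such optimality systems: find x such that
\[
F(x) + N(x) \ni 0,
\]
% where F is a function and N is a multifunction (e.g., a normal cone mapping).
```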
The second part of the module (20 hours) focuses on the implementation, in Matlab, of some of the previously seen
algorithms, allowing the student to understand their behavior and to evaluate their efficiency on concrete problems. A
choice will have to be made among the following three projects:
- the implementation of an SQP solver to solve the hanging chain problem (viewed as a nonlinear optimization problem);
- the implementation of a self-dual conic optimization (SDCO) solver in real numbers to solve various academic/concrete
problems, such as the global minimization of a univariate polynomial or a few small-size OPF problems (more precisely, the rank
relaxation of a QCQP version of this problem);
- the implementation of a semismooth-Newton-like algorithm to minimize a convex quadratic function on a convex polyhedron, with
application to the case when the polyhedron is the Birkhoff polytope.
To follow the second part, the student must have followed the first part, because the algorithms implemented in the second part
are presented and analyzed in the first part.
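To give a taste of the third project's target set, the Euclidean projection onto the Birkhoff polytope (the doubly stochastic matrices) can be approximated with Dykstra's alternating-projection algorithm. This is only a simple baseline sketch, not the semismooth-Newton method studied in the course; the function names and data below are illustrative.

```python
import numpy as np

def proj_affine(X):
    """Exact projection onto the affine set {X : X 1 = 1, X^T 1 = 1}
    (all row sums and column sums equal to one)."""
    n = X.shape[0]
    e = np.ones(n)
    r, c = X @ e, X.T @ e              # row and column sums
    mu = (n - r.sum()) / (2 * n ** 2)  # symmetric choice of the free multiplier
    u = (1 - r) / n - mu
    v = (1 - c) / n - mu
    return X + np.outer(u, e) + np.outer(e, v)

def project_birkhoff(A, iters=1000):
    """Approximate Euclidean projection of A onto the Birkhoff polytope,
    the intersection of the affine set above with the nonnegative orthant,
    computed by Dykstra's algorithm (which, unlike plain alternating
    projections, converges to the true projection)."""
    X = A.copy()
    p = np.zeros_like(A)  # correction term for the affine set
    q = np.zeros_like(A)  # correction term for the nonnegativity constraint
    for _ in range(iters):
        Y = proj_affine(X + p)
        p = X + p - Y
        X = np.maximum(Y + q, 0.0)
        q = Y + q - X
    return X

# Example: the projection of 2*I (n = 2) is the identity matrix.
P = project_birkhoff(2 * np.eye(2))
```

Each sweep costs O(n^2), which is what makes the semismooth-Newton approach of the project, with its faster local convergence, attractive for repeated projections.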
First part

The course is composed of 7 lectures of 4h15 each, which makes it approximately 30h long. It is always given on Monday from
2:00 pm to 6:30 pm.

Due to the coronavirus pandemic, the course is given online this year.
You must contact Anne Richard (anne.richard@ensta-paris.fr) to obtain the authorization to enter ENSTA (this is
automatically ensured for those who are registered in the M2).

See below for the schedule.
Bibliography

J.F. Bonnans, J.Ch. Gilbert, C. Lemaréchal, C. Sagastizábal (2006).
Numerical Optimization - Theoretical and Practical Aspects (second edition).
Universitext, Springer Verlag, Berlin.
[authors]
[editor]

J.F. Bonnans, A. Shapiro (2000).
Perturbation Analysis of Optimization Problems.
Springer Verlag, New York.

A.L. Dontchev, R.T. Rockafellar (2009).
Implicit Functions and Solution Mappings - A View from Variational Analysis.
Springer Monographs in Mathematics. Springer.

J.Ch. Gilbert (2015).
Fragments d'Optimisation Différentiable - Théorie et Algorithmes.
Course syllabus, ENSTA ParisTech.
[internet].

J.-B. Hiriart-Urruty, C. Lemaréchal (1996).
Convex Analysis and Minimization Algorithms (second edition).
Grundlehren der mathematischen Wissenschaften, 305-306.
Springer-Verlag.

A.F. Izmailov, M.V. Solodov (2014).
Newton-Type Methods for Optimization and Variational Problems.
Springer Series in Operations Research and Financial Engineering,
Springer.

Y.E. Nesterov, A.S. Nemirovskii (1994).
Interior-Point Polynomial Algorithms in Convex Programming.
SIAM Studies in Applied Mathematics, 13.
SIAM, Philadelphia, PA, USA.

J. Renegar (2001).
A Mathematical View of InteriorPoint Methods in Convex Optimization.
MPS-SIAM Series on Optimization 3, SIAM.

R.T. Rockafellar (1970).
Convex Analysis.
Princeton Mathematics Ser., 28.
Princeton University Press, Princeton, New Jersey.

R.T. Rockafellar, R. Wets (1998).
Variational Analysis.
Grundlehren der mathematischen Wissenschaften, 317.
Springer.

C. Roos, T. Terlaky, J.-Ph. Vial (2006).
Interior Point Methods for Linear Optimization (second edition).
Springer.

S.J. Wright (1997).
Primal-Dual Interior-Point Methods.
SIAM Publication, Philadelphia.
Day-by-day program (access to the lecture materials requires the username "Student" and the given password)
Date | Themes of the lectures | Lecture materials | Details of the contents of the lectures
   
Monday November 16 2020 
Presentation and review of background 
Manual Scripts 1
Video 1

Presentation of the course
Background
- Convex analysis: relative interior, absorbing point, dual cone and Farkas lemma, tangent and normal cones
- Nonlinear analysis: multifunction
- Optimization: tangent cone and Peano-Kantorovich necessary optimality condition of the first order for $(P_X)$,
sufficient optimality condition of the first order for a convex problem $(P_X)$, KKT conditions for $(P_{EI})$,
sufficient conditions for constraint qualification
Optimality conditions
- First order optimality conditions for $(P_G)$: definition of the problem and its convexity, tangent and
linearizing cones to the feasible set, constraint qualification, NC1, SC1 for a convex problem

   
Monday November 23 2020 
First order optimality conditions of the general optimization problem $(P_G)$ 
Manual Scripts 2
Video 2

Background
- Convex analysis: asymptotic cone
Optimality conditions
- First order optimality conditions for $(P_G)$: SC1 for a convex problem, Robinson's condition and metric
regularity, open multifunction theorem, metric regularity diffusion theorem, Robinson's constraint qualification,
Gauvin's boundedness characterization

   
Monday November 30 2020 
Second order optimality conditions for the equality and inequality constrained optimization problem $(P_{EI})$ 
Manual Scripts 3
Video 3

Background
- Optimization: second order optimality conditions for $(P_E)$, linear optimization duality
Optimality conditions
- Second order optimality conditions for $(P_{EI})$: difficulties, solution
strategy, NC2, SC2

   
Monday December 7 2020 
Josephy-Newton algorithm for solving functional inclusions 
Manual Scripts 4
Video 4

Background
- Convex analysis: convex polyhedron
- Algorithmics: speeds of convergence, Newton's method, quasi-Newton algorithms, Dennis and Moré superlinear criterion
Linearization methods
- Overview of linearization methods
- Josephy-Newton algorithm for functional inclusions: functional inclusion problems, semistable solutions of
functional inclusions, speed of convergence of the JN algorithm, semistable solutions of polyhedral variational
inequalities

   
Monday December 14 2020 
The SQP algorithm for solving problem $(P_{EI})$ 
Manual Scripts 5
Video 5

Background
- Short overview of algorithms for solving constrained optimization problems: primal, dual, and primal-dual methods
Linearization methods
- Josephy-Newton algorithm for functional inclusions: local convergence
- The SQP algorithm: the algorithm viewed as a Josephy-Newton method, semistability of a solution, local
convergence, exact penalization, globalization by linesearch

   
Monday January 4 2021 
The semismooth Newton algorithm for solving nonsmooth equations 
Manual Scripts 6
Video 6 is lost

Linearization methods
- The semismooth Newton algorithm for nonsmooth equations: motivation, Clarke differential, semismoothness, local convergence

   
Monday January 11 2021 
Semidefinite optimization 
Manual Scripts 7
Video 7
Lecture Notes (2020-02-17)
Slides
Exercises

Background
- Convex analysis: asymptotic function, separation of convex sets
Semidefinite optimization
- Problem definition and examples, existence of solution, optimality conditions, central path, sketch of an interior point algorithm

   
Monday January 18 2021 
Examination
- Written examination of 3 hours (2pm-5pm), at Ensta (room 1315). Please wear a mask and bring your own exam sheets.
- The questions are similar to those given in the homework exercises, though decomposed into more subquestions.
- The documents distributed during the course may be consulted. You can put the lecture notes on your computer and look at them during the exam.
- Each student will attach to his/her script the solution to 2 exercises proposed during the lectures (presumably the hardest he/she was able to solve).

It is recommended to register for this course by contacting "Jean-Charles.Gilbert@inria.fr". Indeed,
some technical arrangements depend on the participants (like having an account at Ensta and establishing access to Matlab through a
VPN connection to Ensta, as explained on the page https://cascad.ensta.fr/BYOD).
The course is made up of 5 sessions of 3h45 each (given on Thursday, 2pm-6pm), which makes it 18h45 in total.
The goal of the course is to guide the student in the implementation of one of several well-known optimization/nonsmooth-system
algorithms and in its use to solve a concrete problem. The student will choose one project among the following.
- Project SQP+HC. Implementation of an SQP algorithm (a nonlinear optimization solver) and its use to solve the
hanging chain problem. This project is recommended to those who have never implemented an optimization solver, since it is
instructive in many respects. The project is well organized and guided.
- Project SDCO+OPF. Implementation of an interior point algorithm to solve a self-dual conic optimization problem in
real numbers (which concatenates several linear semidefinite optimization problems and one linear optimization problem) and its
use to solve the rank relaxation of a small-size OPF problem (expressed as a QCQP problem) and the global minimization of univariate
polynomials on a finite union of intervals. The project is well organized and guided.
- Project SN+Proj. This last project is more exploratory, with a large part dedicated to understanding all
the aspects of a version of a semismooth-Newton-like method for computing the projection onto a convex polyhedron. This is
then applied, abstractly and numerically, to the projection onto the Birkhoff polytope and, more generally, to the minimization of a
convex quadratic function (defined on the space of matrices) subject to the Birkhoff constraint. For the time being (it may change
during the course of the project), the project is left entirely to the responsibility of the student, which makes it
closer to a research subject. The role of the teacher is therefore to discuss with the student during the realization of the
project.
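To give the flavor of a semismooth Newton method on a tiny instance (this is not the project's code; the data Q, q and the box constraint below are made up for illustration), one can solve the nonsmooth projection equation $x - P_{[0,1]^2}(x - \nabla f(x)) = 0$ that characterizes the minimizer of a convex quadratic over a box:

```python
import numpy as np

# Made-up data: minimize f(x) = 0.5 x^T Q x + q^T x over the box [0,1]^2.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -4.0])

def F(x):
    """Nonsmooth equation whose root is the constrained minimizer:
    F(x) = x - P_[0,1](x - grad f(x))."""
    return x - np.clip(x - (Q @ x + q), 0.0, 1.0)

def semismooth_newton(x, iters=20, tol=1e-10):
    """Semismooth Newton iteration on F, using an element of the Clarke
    generalized Jacobian at each step."""
    I = np.eye(2)
    for _ in range(iters):
        if np.linalg.norm(F(x)) <= tol:
            break
        y = x - (Q @ x + q)
        # An element of the Clarke Jacobian of F: I - D (I - Q), where the
        # diagonal D selects the components at which the clip is inactive.
        D = np.diag(((y > 0.0) & (y < 1.0)).astype(float))
        J = I - D @ (I - Q)
        x = x + np.linalg.solve(J, -F(x))
    return x

x_star = semismooth_newton(np.zeros(2))  # converges to the solution [1, 1]
```

On this instance both upper bounds are active at the solution, so the method identifies the active set and terminates in very few iterations, which illustrates the fast local convergence studied in the first part.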
Students having already realized (part of) the SQP+HC project must choose another one.
Presentation of the projects (2021-01-19,
19:06). It is recommended to make your choice before the first session (and send it to "Jean-Charles.Gilbert@inria.fr"),
although more information on the projects can be obtained at the beginning of the first session.
The material/concrete aspects of the course are still not determined, due to the pandemic (20/1/2021). Clearly, the course will
be given online. Since it aims at the realization of an optimization solver, the student must be able to implement it on
his/her machine, preferably in Matlab (or Octave). If access to Matlab is not provided through Ensta (this has not been
determined yet), it might be possible to use other free languages (preferably interpreted ones), like Python, Julia, ..., but the
teacher may not have the skill to guide the student in all these languages. The access to Matlab should be possible through
a VPN connection to Ensta.
Day-by-day program
Documents useful for the SQP+HC project:
Documents useful for the SDCO+OPF project:
Documents useful for the SN+Proj project:

F.H. Clarke (1990).
Optimization and Nonsmooth Analysis (second edition).
Classics in Applied Mathematics 5. SIAM, Philadelphia, PA, USA.
[DOI]

Y. Cui, D.F. Sun, K.C. Toh (2016).
On the asymptotic superlinear convergence of the augmented Lagrangian method for semidefinite programming with multiple solutions.
Technical report.
[arXiv]

J.Y. Han, D.F. Sun (1997).
Newton and quasiNewton methods for normal maps with polyhedral sets.
Journal of Optimization Theory and Applications, 94, 659-676.
[DOI]

J.-B. Hiriart-Urruty, J.-J. Strodiot, V.H. Nguyen (1984).
Generalized Hessian matrix and second-order optimality conditions for problems with $C^{1,1}$ data.
Applied Mathematics and Optimization, 11, 43-56.
[DOI]

X. Li, D. Sun, K.C. Toh (2016).
QSDPNAL: A two-phase augmented Lagrangian method for convex quadratic semidefinite programming.
Technical report.
[arXiv]

X. Li, D. Sun, K.C. Toh (2020).
On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope.
Mathematical Programming, 179, 419-446.
[DOI]