M2 course at University Paris-Saclay
Advanced Continuous Optimization

This page describes the 60-hour course entitled Advanced Continuous Optimization, given at University Paris-Saclay in the M2 Optimization program during the academic year 2018-2019. It contains
- the teachers' internet pages,
- a short presentation of the course contents,
- the detailed program of part I, part II, and part III.

Teachers

Jean Charles Gilbert (Inria Paris),
Claudia Sagastizábal (Unicamp, Brazil).

Presentation

The first part of the module (30 hours) starts with the presentation of the optimality conditions of an optimization problem stated in a rather general manner, so that they can be used to deal with a large variety of problems: the constraints are expressed by $c(x)\in G$, where $c:\mathbb{E}\to\mathbb{F}$ is a nonlinear map between two Euclidean spaces $\mathbb{E}$ and $\mathbb{F}$, and $G$ is a closed convex subset of $\mathbb{F}$. Next, the course describes and analyzes various advanced algorithms for solving functional inclusions (of the kind $F(x)+N(x)\ni0$, where $F$ is a function and $N$ is a multifunction) and optimization problems (nonsmooth methods, linearization methods, proximal and augmented Lagrangian methods, interior point methods), and shows how they can be used to solve a few classical optimization problems (linear optimization, convex quadratic optimization, semidefinite optimization (SDO), nonlinear optimization). Along the way, various tools from convex and nonsmooth analysis are presented. Everything is set in finite dimension. The goal of the lectures is therefore to consolidate basic knowledge in optimization, in both its theoretical and algorithmic aspects.
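As an illustration (ours, not an excerpt from the course notes), the classical nonlinear optimization problem with equality and inequality constraints fits this framework: the problem

$$\min_x\ f(x) \quad\text{subject to}\quad c_E(x)=0,\quad c_I(x)\leq 0,$$

where $c_E$ and $c_I$ have $p$ and $q$ components, corresponds to $c=(c_E,c_I)$ and $G=\{0_{\mathbb{R}^p}\}\times\mathbb{R}^q_-$. Its first-order (KKT) optimality conditions can in turn be written as a functional inclusion $F(z)+N(z)\ni0$ in the primal-dual variable $z=(x,\lambda_E,\lambda_I)$, with smooth part $F(z)=(\nabla f(x)+c_E'(x)^\top\lambda_E+c_I'(x)^\top\lambda_I,\ -c_E(x),\ -c_I(x))$ and multifunction $N(z)=\{0\}\times\{0\}\times N_{\mathbb{R}^q_+}(\lambda_I)$, where $N_{\mathbb{R}^q_+}(\lambda_I)$ denotes the normal cone to the nonnegative orthant at $\lambda_I$.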

The second part of the module (20 hours) focuses on the implementation in Matlab of some of the algorithms seen previously; it allows the student to understand their behavior and to evaluate their efficiency on a concrete problem. A choice will have to be made between the following two projects: (i) the implementation of an SQP solver to solve the hanging chain problem (viewed as a nonlinear optimization problem), or (ii) the implementation of a self-dual conic optimization (SDCO) solver in real numbers to solve various academic/concrete problems, such as the global minimization of a univariate polynomial or a few small-size optimal power flow (OPF) problems (more precisely, the rank relaxation of a QCQP version of this problem).
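To give a flavor of project (i), here is a minimal Matlab sketch of one local SQP iteration, namely a Newton step on the KKT system of the equality constrained problem $\min\{f(x) : c(x)=0\}$; the simulator interface below (function handles c, g, A, H) is an assumption made for this illustration, not the interface specified in the project.

```matlab
% One local SQP iteration for  min f(x)  subject to  c(x) = 0.
% Hypothetical simulator interface:
%   c(x)     constraint value           (m x 1)
%   g(x)     gradient of f              (n x 1)
%   A(x)     Jacobian of c              (m x n)
%   H(x,lm)  Hessian of the Lagrangian  (n x n)
function [x, lm] = sqp_step(x, lm, c, g, A, H)
  n  = length(x);
  cx = c(x);
  m  = length(cx);
  K   = [H(x,lm), A(x)'; A(x), zeros(m)];  % KKT matrix
  rhs = -[g(x) + A(x)'*lm; cx];            % minus the KKT residual
  d   = K \ rhs;                           % Newton step on the KKT system
  x   = x  + d(1:n);                       % primal update
  lm  = lm + d(n+1:n+m);                   % multiplier (dual) update
end
```

On the quadratic problem $\min\{x_1^2+x_2^2 : x_1+x_2=1\}$, for instance, with c = @(x) x(1)+x(2)-1, g = @(x) 2*x, A = @(x) [1 1], and H = @(x,lm) 2*eye(2), a single call converges to the solution $(1/2,1/2)$ from any starting point, since the KKT system is then linear.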

The last part of the module is a 10-hour lecture given by Claudia Sagastizábal (Unicamp, Brazil).

Detailed program

First part

The course is composed of 7 lectures of 4h15 each, which makes it approximately 30h long.
The course is given at ENSTA, room 2234. You must contact Victoria Perez de Laborda (victoria.perez-de-laborda@ensta-paristech.fr) to obtain authorization to enter ENSTA.
See below for the schedule.

Post-lecture notes (access to some of the notes requires the username "Student" and the password given during the course)

Bibliography

Actual program on a daily basis

Each entry below gives the date, the scheduled theme, and the actual details of the contents.

Monday, September 17, 2018: Presentation and review
Presentation of the course
Background
  • Convex analysis: relative interior, absorbing point, dual cone and Farkas lemma, tangent and normal cones
  • Nonlinear analysis: multifunction
  • Optimization: tangent cone and Peano-Kantorovich first-order necessary optimality condition (NC1), first-order sufficient optimality condition (SC1) for a convex problem
Optimality conditions
  • First-order optimality conditions for a general problem: definition of the problem and its convexity
Monday, September 24, 2018: First-order optimality conditions of a general optimization problem
Background
  • Convex analysis: asymptotic cone
Optimality conditions
  • First-order optimality conditions for a general problem: tangent and linearizing cones, constraint qualification, NC1, SC1 for a convex problem, Robinson's condition and metric regularity, open multifunction theorem, metric regularity diffusion theorem
Monday, October 1, 2018: Second-order optimality conditions for the equality and inequality constrained problem
Background
  • Optimization: second-order optimality conditions for equality constrained optimization, linear optimization duality
Optimality conditions
  • Robinson's constraint qualification, Gauvin's boundedness characterization
  • Second-order optimality conditions for the equality and inequality constrained problem: difficulties, solution strategy, NC2
Monday, October 8, 2018: Josephy-Newton algorithm for solving functional inclusions
Background
  • Convex analysis: convex polyhedron
  • Algorithmics: speeds of convergence, Dennis and Moré superlinear criterion
Optimality conditions
  • Second-order optimality conditions for the equality and inequality constrained problem: SC2
Linearization methods
  • Overview
  • Josephy-Newton algorithm for functional inclusions: functional inclusion problems, semistable solutions of functional inclusions, speed of convergence of the JN algorithm, semistable solutions of polyhedral variational inequalities (the JN iteration is recalled after this program)
Monday, October 15, 2018: The SQP algorithm for solving the equality and inequality constrained problem
Background
  • Short overview of algorithms for solving constrained optimization problems: primal, dual, and primal-dual methods
  • Algorithmics: line-search and trust-region methods
Linearization methods
  • Josephy-Newton algorithm for functional inclusions: local convergence
  • The SQP algorithm: the algorithm viewed as a Josephy-Newton method, semistability of a solution, local convergence, exact penalization
Monday, October 22, 2018: The semismooth Newton algorithm for solving nonsmooth nonlinear equations
Linearization methods
  • The SQP algorithm: globalization by linesearch
  • The semismooth Newton algorithm for nonlinear equations: motivation, Clarke differential, semismoothness, local convergence
Monday, November 5, 2018: Semidefinite optimization
Background
  • Convex analysis: asymptotic function
Interior point methods
  • Semidefinite optimization: problem definition and examples, existence of solution, optimality conditions, central path, sketch of an IP algorithm
Monday, November 12, 2018: Examination
  • written examination, made of "exercises",
  • 3 hours,
  • the documents distributed during the course may be consulted,
  • on Sunday, November 11, each student sends by email (Jean-Charles.Gilbert@inria.fr) the list of the exercises that he/she has been able to solve, and one or two of these exercise solutions (selected on the day of the examination) are appended to the examination script.
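
To fix ideas about the linearization methods listed above (a gloss of ours, not an excerpt from the course notes): given the functional inclusion $F(x)+N(x)\ni0$, the Josephy-Newton algorithm linearizes only the smooth part $F$, computing the next iterate $x_{k+1}$ as a solution of

$$F(x_k)+F'(x_k)(x_{k+1}-x_k)+N(x_{k+1})\ni0.$$

When $N\equiv\{0\}$, this reduces to Newton's method for the equation $F(x)=0$; applied to the first-order optimality system of the equality and inequality constrained problem, written as an inclusion with a normal cone multifunction, it yields the SQP algorithm.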

Second part

It is composed of 5 sessions of 4 hours each (on Mondays, 14h-18h), which makes it 20h long.

The goal of this course is to guide the student in the implementation of some well-known optimization algorithms and in their use to solve a concrete problem. The student will choose between the following two projects.

Presentation of the projects

Actual program on a daily basis

Each entry below gives the session contents for project SQP+HC and for project SDCO+OPF.

Monday, November 19, 2018
  Project SQP+HC: Problem modeling and simulator design (20-11-2018)
  Project SDCO+OPF: Directions of displacement

Monday, November 26, 2018
  Project SQP+HC: The local SQP algorithm (26-11-2018)
  Project SDCO+OPF: A feasible predictor-corrector algorithm

Monday, December 3, 2018
  Project SQP+HC: Globalization by linesearch (3-12-2018)
  Project SDCO+OPF: Finding an appropriate starting point

Monday, December 10, 2018
  Project SQP+HC: Quasi-Newton version (10-12-2018)
  Project SDCO+OPF: Starting from an infeasible point

Monday, December 17, 2018
  Lecture notes (19-12-2018)

Monday, January 7, 2019: Oral examination
  Project SQP+HC: On the examination (19-12-2018); Test case A (19-12-2018); Test case B (19-12-2018); Test case C (19-12-2018); Test case D (19-12-2018); Test case E (19-12-2018)
  Project SDCO+OPF: On the examination (19-12-2018); polminmain.m (19-12-2018); polmin2sdco.m (19-12-2018)

Third part

A 10-hour lecture by Claudia Sagastizábal on "A V-U point of view of nonsmooth optimization", given at ENSTA Paristech, Palaiseau.

Summary of the Lecture

The realization that many nondifferentiable functions exhibit some form of structured nonsmoothness has been attracting the efforts of many researchers in the last decades. Identifying theoretically and computationally certain manifolds where a nonsmooth function behaves smoothly poses challenges for the nonsmooth optimization community. We review a sequence of milestones in the area that led to the development of algorithms of the bundle type that can track the region of smoothness and mimic a Newton algorithm to converge with superlinear speed. The new generation of bundle methods is sufficiently versatile to deal with structured objective functions, even when the available information is inexact.

Examination

Write a report of a few pages explaining what you have understood (and possibly learned) from the lectures.
Deadline: Sunday, January 27, 2019, at noon.