UPMC, Master 2 Mathematics and Applications, Fall 2021
High performance computing for numerical methods and data analysis


Syllabus | Schedule of Lectures | Recommended reading


Syllabus, Wednesdays 9 AM - 12 PM

Instructor: L. Grigori
Location: Sorbonne Université, Jussieu campus (for each session see specific room)
Hands-on sessions with P. H. Tournier; for details, see here.

The objective of this course is to provide the background necessary for designing efficient parallel algorithms, both in scientific computing and in the analysis of large volumes of data. The operations considered are the most costly steps at the heart of many complex numerical simulations. Parallel computing aspects of the analysis of large data sets will be studied through tensor calculus in high dimension. The course also gives an introduction to the most recent algorithms in large-scale numerical linear algebra and an analysis of their numerical stability, together with a study of their complexity in terms of both computation and communication.

The course also addresses one of the major challenges in high performance computing: the increasing cost of communication. The traditional metric for the cost of an algorithm is the number of arithmetic or logical operations it performs. But the most expensive operation an algorithm performs, measured in time or energy, is not arithmetic or logic but communication, i.e. moving data between levels of a memory hierarchy or between processors over a network. The difference in cost can be orders of magnitude and keeps growing due to technological trends such as Moore's Law. Our goal is therefore to design new algorithms that communicate much less than conventional ones, attaining communication lower bounds when possible.
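
To make this cost model concrete, here is a back-of-the-envelope sketch in C that evaluates the time estimate T = gamma*#flops + beta*#words + alpha*#messages for a classical 2D parallel matrix multiplication; the machine parameters alpha, beta, gamma and the problem size are hypothetical round numbers, chosen only to show how the three terms compare as n and P vary.

    /* Back-of-the-envelope sketch of the alpha-beta-gamma cost model:
       T = gamma*#flops + beta*#words + alpha*#messages.
       All parameters below are hypothetical round numbers, not measurements. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double gamma = 1e-10;  /* time per flop (s), hypothetical       */
        double beta  = 1e-9;   /* time per word moved (s), hypothetical */
        double alpha = 1e-6;   /* latency per message (s), hypothetical */

        double n = 4096.0;     /* matrix dimension     */
        double P = 64.0;       /* number of processors */

        /* Per-processor counts for a classical 2D parallel matrix multiplication
           (Cannon-style): 2n^3/P flops, O(n^2/sqrt(P)) words, O(sqrt(P)) messages. */
        double flops    = 2.0 * n * n * n / P;
        double words    = 2.0 * n * n / sqrt(P);
        double messages = 2.0 * sqrt(P);

        printf("arithmetic term: %.3e s\n", gamma * flops);
        printf("bandwidth  term: %.3e s\n", beta  * words);
        printf("latency    term: %.3e s\n", alpha * messages);
        return 0;
    }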

This course will also present an overview of novel "communication-avoiding" (CA) algorithms that attain these goals, including communication lower bounds, and practical implementations showing large speedups. Problem domains considered include dense and sparse linear algebra and tensors.
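
The simplest instance of this idea in the dense case is sequential blocked (tiled) matrix multiplication: if the block size is chosen so that three blocks fit in fast memory of size M, the number of words moved between slow and fast memory drops from roughly O(n^3) for the naive triple loop to O(n^3/b), attaining the known lower bound Omega(n^3/sqrt(M)) when b is on the order of sqrt(M/3). The sketch below is a minimal illustration in C; the block size BS is an arbitrary placeholder rather than a value tuned for any machine.

    /* Minimal sketch of blocked (tiled) matrix multiplication, the textbook
       sequential communication-avoiding kernel.  All matrices are n-by-n,
       stored in row-major order; the routine computes C += A*B.
       BS is a placeholder block size: in practice it is chosen so that
       roughly 3*BS*BS doubles fit in cache. */
    #include <stdio.h>
    #include <stdlib.h>

    #define BS 64

    static int imin(int a, int b) { return a < b ? a : b; }

    void matmul_blocked(int n, const double *A, const double *B, double *C)
    {
        for (int ii = 0; ii < n; ii += BS)
            for (int kk = 0; kk < n; kk += BS)
                for (int jj = 0; jj < n; jj += BS)
                    /* one pair of BS-by-BS tiles: the only data touched in the
                       inner loops, so it can stay in fast memory */
                    for (int i = ii; i < imin(ii + BS, n); i++)
                        for (int k = kk; k < imin(kk + BS, n); k++) {
                            double aik = A[(size_t)i * n + k];
                            for (int j = jj; j < imin(jj + BS, n); j++)
                                C[(size_t)i * n + j] += aik * B[(size_t)k * n + j];
                        }
    }

    int main(void)
    {
        int n = 512;
        double *A = calloc((size_t)n * n, sizeof *A);
        double *B = calloc((size_t)n * n, sizeof *B);
        double *C = calloc((size_t)n * n, sizeof *C);
        for (int i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 1.0; }
        matmul_blocked(n, A, B, C);
        printf("C[0][0] = %g (expected %d)\n", C[0], n); /* each entry of C equals n */
        free(A); free(B); free(C);
        return 0;
    }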


Schedule of lectures

  • Oct 20, room 56/66-111
    9:00 - 10:30 Introduction to high performance computing (no slides)
    10:30 - 12:00 Introduction to communication avoiding algorithms and matrix multiplication (Part 1), slides. Suggested reading: pdf, a chapter in Computational Mathematics, Numerical Analysis and Applications, SEMA SIMAI Springer Series.
  • Oct 27, no class (replaced by Nov 12 class)
  • Nov 10, room 56/66-111
    9:00 - 10:30 Randomized low rank approximations, slides
    Project on randomized algorithms, pdf, deadline for submission: December 28, 2021.
    10:30 - 12:00 Introduction to distributed memory machines and MPI, cost of MPI routines (see the ping-pong sketch after the schedule for a way to estimate these costs)
    Recommended slides: Lecture 7 of J. Demmel's class
  • Nov 12, room 15/25-104
    9:00 - 12:00 Communication avoiding LU and QR factorizations, slides1, slides2
  • Nov 17, online (by video conference)
    9:00 - 12:00 Krylov subspace iterative solvers, slides
  • Nov 22, computer room 519 at the Atrium (optional TP)
    5 PM - 7 PM HPC hands-on session (TP) with P. H. Tournier (see details)
  • Nov 24, online (by video conference)
    9:00 - 12:00 Krylov subspace iterative solvers (continued with slides from Nov. 17)
  • Nov 29, computer room 519 at the Atrium (optional TP)
    5 PM - 7 PM HPC hands-on session (TP) with P. H. Tournier (see details)
  • Dec 1, room 56/66-111
    9:00 - 12:00 Low rank matrix approximation, slides
  • Dec 8, room 56/66-111
    9:00 - 12:00 Low rank matrix approximation (continued with slides from Dec 1)
  • Dec 13, online (optional TP)
    5 PM - 7 PM HPC hands-on session (TP) with P. H. Tournier (see details)
  • Dec 17, room 15/25-104
    9:00 - 12:00 Tensors, slides
  • Jan 5, 2022, room 15/16-112
    Final exam (only paper documents are allowed during the exam)
    Example questions given at a previous year's exam are available here.
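
As a complement to the Nov 10 session on the cost of MPI routines, here is a minimal ping-pong sketch using only standard MPI calls: it times round trips between ranks 0 and 1, from which the latency (alpha) and inverse bandwidth (beta) entering the cost model above can be estimated. The message sizes and repetition count are arbitrary illustrative choices.

    /* Minimal MPI ping-pong sketch for estimating the latency (alpha) and
       inverse bandwidth (beta) of the network.  Run with at least 2 processes;
       the message sizes and repetition count are illustrative, not tuned. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) printf("run with at least 2 processes\n");
            MPI_Finalize();
            return 0;
        }

        const int reps = 100;
        for (int bytes = 8; bytes <= (1 << 20); bytes *= 16) {
            char *buf = calloc(bytes, 1);
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int r = 0; r < reps; r++) {
                if (rank == 0) {            /* send, then wait for the echo */
                    MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {     /* echo the message back */
                    MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double t = (MPI_Wtime() - t0) / reps / 2.0; /* average one-way time */
            if (rank == 0)
                printf("%8d bytes: %.3e s one-way\n", bytes, t);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }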

Recommended reading (evolving)



    For communication avoiding algorithms:

    Lower bounds on communication
  • Grey Ballard, James Demmel, Olga Holtz, and Oded Schwartz,
    Minimizing Communication in Numerical Linear Algebra,
    SIAM Journal on Matrix Analysis and Applications, Vol. 32, No. 3, pp. 866-901, 2011, pdf.

    Communication avoiding algorithms for dense linear algebra
  • J. Demmel, L. Grigori, M. F. Hoemmen, and J. Langou,
    Communication-optimal parallel and sequential QR and LU factorizations,
    SIAM Journal on Scientific Computing, Vol. 34, No. 1, 2012, pdf (also available as arXiv:0808.2664v1), short version of UCB-EECS-2008-89 and LAWN 204, available since 2008.
  • L. Grigori, J. Demmel, and H. Xiang,
    CALU: a communication optimal LU factorization algorithm,
    SIAM J. Matrix Anal. & Appl., Vol. 32, pp. 1317-1350, 2011, pdf, preliminary version published as LAWN 226.
  • J. Demmel, L. Grigori, M. Gu, and H. Xiang,
    Communication avoiding rank revealing QR factorization with column pivoting,
    SIAM J. Matrix Anal. & Appl., Vol. 36, No. 1, pp. 55-89, 2015, pdf.

    Communication avoiding algorithms for iterative methods and preconditioners
  • L. Grigori, S. Moufawad, and F. Nataf,
    Enlarged Krylov Subspace Conjugate Gradient Methods for Reducing Communication, INRIA TR 8597,
    SIAM J. Matrix Anal. & Appl., 2016.

    Randomized numerical linear algebra: overview, elementary proofs, low rank approximation algorithms, sketching algorithms
  • For a list of suggested reading, please check the Reading group on randomized linear algebra, Spring 2015.

    INRIA Paris
    Laboratoire J. L. Lions
    Sorbonne Université
    Contact: Laura Grigori (Laura(dot)Grigori(at)inria(dot)fr)
