UPMC, Master 2 Mathematics and Applications, Fall 2019
High performance computing for numerical methods and data analysis


Syllabus | Schedule of lectures | Recommended reading


Syllabus, Wednesdays 9 AM - 12 PM

Instructor: L. Grigori
Location: 15-25/103 (except on October 24 when the location will be 15-25/101), UPMC


The objective of this course is to provide the background needed to design efficient parallel algorithms for scientific computing and for the analysis of large volumes of data. The operations considered are the most costly steps at the heart of many complex numerical simulations. Parallel computing aspects in the analysis of large data sets will be studied through tensor calculus in high dimension. The course also introduces the most recent algorithms in large-scale numerical linear algebra, analyzes their numerical stability, and studies their complexity in terms of both computation and communication.

The course will also discuss one of the major challenges in high performance computing: the increasing cost of communication. The traditional metric for the cost of an algorithm is the number of arithmetic or logical operations it performs. But the most expensive operation an algorithm performs, measured in time or energy, is not arithmetic or logic but communication, i.e., moving data between levels of a memory hierarchy or between processors over a network. The difference in cost can be orders of magnitude and is growing over time due to technological trends such as Moore's Law. Our goal is therefore to design new algorithms that communicate much less than conventional ones, attaining known lower bounds on communication when possible.
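The effect of data movement between memory levels can be seen already on matrix multiplication, a recurring example in this course. The following sketch (my own illustration, not course code) simulates a two-level memory with an LRU cache of M words and counts the words that must be moved from slow memory ("misses") for a naive triple loop versus a blocked (tiled) loop; both perform exactly the same 2n^3 flops, only the access order differs.

```python
# Illustration (assumptions mine): count slow-memory traffic for naive vs
# blocked matrix multiplication under a simple LRU cache model of M words.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()
        self.misses = 0          # each miss = one word moved from slow memory

    def access(self, word):
        if word in self.slots:
            self.slots.move_to_end(word)
        else:
            self.misses += 1
            self.slots[word] = True
            if len(self.slots) > self.capacity:
                self.slots.popitem(last=False)   # evict least recently used

def naive_misses(n, M):
    """Traffic of the classical i-j-k triple loop for n x n matrices."""
    cache = LRUCache(M)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                cache.access(('A', i, k))
                cache.access(('B', k, j))
                cache.access(('C', i, j))
    return cache.misses

def blocked_misses(n, b, M):
    """Traffic of the same computation tiled into b x b blocks (b divides n)."""
    cache = LRUCache(M)
    for ii in range(0, n, b):
        for jj in range(0, n, b):
            for kk in range(0, n, b):
                for i in range(ii, ii + b):
                    for j in range(jj, jj + b):
                        for k in range(kk, kk + b):
                            cache.access(('A', i, k))
                            cache.access(('B', k, j))
                            cache.access(('C', i, j))
    return cache.misses

n, b, M = 32, 4, 64   # three 4x4 blocks (48 words) fit in the fast memory
print("naive:", naive_misses(n, M), "blocked:", blocked_misses(n, b, M))
```

Choosing the block size so that three blocks fit in fast memory brings the traffic from O(n^3) down to O(n^3 / sqrt(M)); the communication-avoiding algorithms studied here generalize this reuse idea and push it to provable lower bounds.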

This course will also present an overview of novel "communication-avoiding" (CA) algorithms that attain these goals, including communication lower bounds, and practical implementations showing large speedups. Problem domains considered include dense and sparse linear algebra and tensors.
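As a back-of-the-envelope illustration of what "attaining the lower bound" means (my own numbers, not from the course slides): for classical O(n^3) matrix multiplication on P processors with M words of local memory each, the bandwidth lower bound says each processor must move on the order of (flops per processor)/sqrt(M) words, while a 2D algorithm such as Cannon's moves O(n^2 / sqrt(P)) words.

```python
# Sketch (assumptions mine): compare the communication lower bound with the
# traffic of a 2D (Cannon-style) parallel matrix multiplication algorithm.
import math

def bandwidth_lower_bound(n, P, M):
    """Words moved per processor: (flops per processor) / sqrt(M)."""
    flops_per_proc = 2 * n**3 / P
    return flops_per_proc / math.sqrt(M)

def cannon_words(n, P):
    """Words sent/received per processor by a 2D (Cannon-style) algorithm."""
    return 2 * n**2 / math.sqrt(P)

n, P = 4096, 64
M = 3 * n * n // P    # just enough local memory for three n/sqrt(P) blocks
lb = bandwidth_lower_bound(n, P, M)
print(f"lower bound ~ {lb:.3g} words, Cannon ~ {cannon_words(n, P):.3g} words")
```

With this choice of M the ratio between the two quantities is a constant (sqrt(3)), independent of n and P: the 2D algorithm attains the lower bound up to a constant factor, which is what "communication optimal" means.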


Schedule of lectures

  • Oct 24
    Introduction to high performance computing and communication avoiding algorithms
  • Nov 7
    9:00 - 10:30 Introduction to distributed memory machines and MPI
Recommended slides: Lecture 7 of J. Demmel's class
    10:30 - 12:00 LU and QR factorizations and their backward stability, slides
    The error analysis of LU and QR is not required for the exam.
  • Nov 14
9:00 - 10:30 LU and QR factorizations and their backward stability (cont'd), in the usual room 15-25/103
    10:30 - 12:00 MPI hands-on (1st part, TP1), this class will take place in 15-25/322
  • Nov 21
9:00 - 10:30 Communication avoiding matrix multiplication, LU, QR (Part 1), slides (in the usual room 15-25/103). Suggested reading: pdf, chapter in Computational Mathematics, Numerical Analysis and Applications, SEMA SIMAI Springer Series.
10:30 - 12:00 MPI hands-on (graded lab, TP noté), this class will take place in 15-25/322
  • Nov 28
    9:00 - 10:30 Communication avoiding matrix multiplication, LU, QR (Part 2), same slides as Part 1
10:30 - 12:00 MPI hands-on (2nd part, graded lab; assignment: sujet), this class will take place in 15-25/322
The lab report is due by December 12, 2019.
Nov 30 (room 15-25/102, replaces the class from Dec 5)
9:00 - 12:00 Rank revealing factorizations and low rank approximations, slides.
  • Dec 5 - no class
  • Dec 12
9:00 - 12:00 Parallelism in space: iterative solvers and examples of preconditioners, slides.
Enlarged CG is not required for the exam (slides 19 to 32).
  • Dec 19
9:00-10:30 Parallelism in time: the parareal algorithm (not required for the exam)
10:30-12:00 Low rank tensor approximations, slides
Jan 9, room 15-25/101
    Final exam (only paper documents are allowed during the exam)
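Several of the lectures above revolve around communication-avoiding factorizations such as TSQR. As a rough sketch of the idea behind TSQR (assumptions mine, not the course's reference implementation): factor a tall-skinny matrix by independent QR factorizations of its row blocks, then combine only the small R factors, so that in a parallel setting each block's QR needs no communication.

```python
# Sketch of one reduction level of TSQR for a tall-skinny matrix split
# into two row blocks (illustration only, using NumPy's QR as the kernel).
import numpy as np

def tsqr_two_blocks(A):
    m, n = A.shape
    h = m // 2
    Q1, R1 = np.linalg.qr(A[:h])   # local QR of block 1 (no communication)
    Q2, R2 = np.linalg.qr(A[h:])   # local QR of block 2 (no communication)
    # Only the small n x n factors R1, R2 are communicated and combined:
    Qs, R = np.linalg.qr(np.vstack([R1, R2]))
    # Reconstruct the thin Q of A from the local Qs and the small Qs:
    Q = np.vstack([Q1 @ Qs[:n], Q2 @ Qs[n:]])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
Q, R = tsqr_two_blocks(A)
```

The resulting R agrees (up to row signs) with the R factor of a direct QR of A, and Q has orthonormal columns with A = QR; with P blocks the combination step becomes a tree reduction of small R factors, which is what makes the algorithm communication optimal.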

    Recommended reading (evolving)



    For communication avoiding algorithms:

    Lower bounds on communication
  • Michael Christ, James Demmel, Nicholas Knight, Thomas Scanlon, and Katherine Yelick
    Communication Lower Bounds and Optimal Algorithms for Programs That Reference Arrays — Part 1
    EECS Technical Report No. UCB/EECS-2013-61, May 14, 2013.
  • Grey Ballard, James Demmel, Olga Holtz, Oded Schwartz,
    Minimizing Communication in Numerical Linear Algebra,
SIAM Journal on Matrix Analysis and Applications, Volume 32, Number 3, pp. 866-901, 2011, pdf.
    Communication avoiding algorithms for dense linear algebra
  • J. Demmel, L. Grigori, M. F. Hoemmen, and J. Langou,
    Communication-optimal parallel and sequential QR and LU factorizations,
    SIAM Journal on Scientific Computing, Vol. 34, No 1, 2012, pdf (also available on arXiv:0808.2664v1), short version of UCB-EECS-2008-89 and LAWN 204, available since 2008.
  • L. Grigori, J. Demmel, and H. Xiang,
    CALU: a communication optimal LU factorization algorithm,
SIAM J. Matrix Anal. & Appl., 32, pp. 1317-1350, 2011, pdf; preliminary version published as LAWN 226.
  • J. Demmel, L. Grigori, M. Gu, and H. Xiang
    Communication avoiding rank revealing QR factorization with column pivoting, pdf,
    SIAM J. Matrix Anal. & Appl, Vol. 36, No. 1, pp. 55-89, 2015.
  • G. Ballard, J. Demmel, L. Grigori, M. Jacquelin, N. Knight, D. Nguyen
Reconstructing Householder vectors for QR factorization,
    Journal of Parallel and Distributed Computing, Aug 2015, pdf.
    Communication avoiding algorithms for iterative methods and preconditioners
  • L. Grigori, S. Moufawad, F. Nataf
Enlarged Krylov Subspace Conjugate Gradient Methods for Reducing Communication, INRIA TR 8597,
SIAM J. Matrix Anal. & Appl., 2016.
  • L. Grigori, S. Moufawad,
Communication avoiding ILU0 preconditioner, pdf,
SIAM Journal on Scientific Computing, Vol. 37, Issue 2, 2015;
preliminary version published as INRIA TR 8266.
    Randomized numerical linear algebra: overview, elementary proofs, low rank approximation algorithms, sketching algorithms
    For a list of suggested reading, please check the Reading group on randomized linear algebra, Spring 2015.

    INRIA Paris
    Laboratoire J. L. Lions
    Sorbonne Université
    Contact: Laura Grigori (Laura(dot)Grigori(at)inria(dot)fr)

    © 2007 rs