Reading group on randomized numerical linear algebra


Introduction | Meetings | Covered papers


Reading group details - UC Berkeley, Spring 2015

Randomized numerical linear algebra is currently a very active research area, drawing ideas from, and having an impact on, several fields, including the analysis of large data sets, probability and statistics, numerical analysis, and computational complexity.

We have started a new reading group on this topic, and some of the papers listed below will be covered during the meetings. We will focus in particular on sampling and projection algorithms for low-rank matrix approximation, least squares approximation, matrix multiplication, and related problems, with the exact topics to be determined by the participants; a toy numerical sketch of the sampling idea follows the list of expected outcomes below. Expected outcomes of the reading group are the following:
  • A good overview of some of the main results in randomized linear algebra, and of their connections with classical results in linear algebra
  • A better understanding of the metrics of success for these algorithms (e.g., accuracy, number of flops, communication, downstream usefulness, etc.), as used in the different fields interested in these methods (e.g., machine learning, numerical analysis, theory of algorithms, etc.)
  • Discussions of open questions and evolving research directions, such as different approaches to streaming algorithms
  • Organizers: Jim Demmel, Laura Grigori, Ming Gu, Michael Mahoney
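
As a minimal illustration of the sampling algorithms mentioned above, here is a toy numpy sketch, written for this page rather than taken from any of the papers below, which approximates a matrix product by sampling column-row outer products with probabilities proportional to their norms, in the spirit of the Monte Carlo matrix multiplication algorithms of Drineas, Kannan, and Mahoney:

    import numpy as np

    def sampled_matmul(A, B, c, seed=0):
        # Approximate A @ B by sampling c column-row outer products.
        # Column k of A (and row k of B) is drawn with probability
        # proportional to ||A[:, k]|| * ||B[k, :]||, and each sampled
        # outer product is rescaled so the estimate is unbiased.
        rng = np.random.default_rng(seed)
        p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = p / p.sum()
        idx = rng.choice(A.shape[1], size=c, p=p)
        return sum(np.outer(A[:, k], B[k, :]) / (c * p[k]) for k in idx)

    A = np.random.randn(100, 500)
    B = np.random.randn(500, 80)
    err = np.linalg.norm(sampled_matmul(A, B, c=200) - A @ B)

Larger sample sizes c trade flops for accuracy; with these probabilities, the expected squared Frobenius error decreases like 1/c.
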


    Meetings - Tuesdays 2-3pm - room 540 AB Cory Hall

    Archived video of the lectures may be seen here
  • Feb 17 - M. Mahoney: Randomized Algorithms for Matrices and Data, slides
  • Feb 24 - J. Demmel: notes of the lecture, video of the lecture
  • Mar 3 - J. Demmel, second part of the lecture from Feb 24: notes of the lecture, video of the lecture
  • Mar 10 - M. Gu: Subspace iteration randomization and singular value problems, slides, video of the lecture, arxiv.org
  • Mar 17 - Eric Hallman,
    based on the paper Sketching as a Tool for Numerical Linear Algebra, Woodruff, video of the lecture
  • Mar 24 - spring break
  • Mar 31 - Becca Roelofs, video of the lecture
    based on the paper Blendenpik: Supercharging LAPACK's least-squares solver, Avron et al.
  • Apr 7 - Arturo Fernandez, slides
    based on the paper Relative-Error CUR Matrix Decompositions, Drineas et al.,
    and
    Shivaram Venkataraman,
    High Performance Linear Algebra using Spark,
    video of the lecture, lecture starts at 3:59
  • Apr 14 - Aydin Buluc,
    based on the paper Low Rank Approximation and Regression in Input Sparsity Time, Clarkson and Woodruff,
    and
    Chris Melgaard,
    video of the lecture
  • Apr 21 - Laura Grigori,
    CA rank revealing QR and LU factorizations,
    and
    Pieter Ghysels,
    Construction of hierarchically semi-separable matrices using randomized sampling and application in a sparse direct solver
    video of the lecture, lecture starts at ~2:44
  • Apr 28 - Yudong Chen,
    Fast projected gradient descent algorithms for low-rank estimation
    video of the lecture, lecture starts at 4:59
    Abstract: Fitting a rank-r matrix to noisy data is in general NP-hard. A popular approach is convex relaxation via nuclear/trace-norm minimization, which is known to provide strong (often order-wise unimprovable) statistical guarantees in terms of error bounds and sample complexity. Computationally, however, while nuclear norm minimization can in principle be solved in polynomial time by semidefinite programming, its time complexity is often too high for large matrices. In this talk, we consider an alternative approach via projected gradient descent over the space of n-by-r matrices, which scales well to large instances. Moreover, we develop a unified framework characterizing the convergence of projected gradient descent for a broad range of non-convex low-rank estimation problems. Our results apply to matrix sensing, matrix completion, robust PCA, sparse PCA, densest subgraph detection, and other problems, for which we match the best known statistical guarantees provided by convex relaxation methods.
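
    To make the factored viewpoint in the abstract above concrete, here is a minimal numpy sketch (our own toy example, not the speaker's code) of gradient descent over n-by-r factors for the simplest low-rank estimation problem, fitting UU^T to a noisy symmetric matrix; the talk's framework additionally handles constraints through a projection step, which this unconstrained example omits:

        import numpy as np

        # Fit U @ U.T (U is n-by-r) to a noisy symmetric observation M
        # by gradient descent on f(U) = ||U @ U.T - M||_F^2 / 4.
        rng = np.random.default_rng(0)
        n, r = 200, 5
        U_true = rng.standard_normal((n, r))
        M = U_true @ U_true.T + 0.01 * rng.standard_normal((n, n))
        M = (M + M.T) / 2                        # symmetrize the noise

        # Spectral initialization: top-r eigenpairs of M.
        vals, vecs = np.linalg.eigh(M)
        U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

        step = 0.25 / np.linalg.norm(M, 2)       # conservative step size
        for _ in range(300):
            U -= step * (U @ U.T - M) @ U        # gradient of f at U

        rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
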

    List of papers covered (evolving)

    If you wish to present one of the following papers (for which there is not yet a speaker), or another relevant paper, please send an email to Laura Grigori.
    The papers are grouped into overlapping categories:

    Reviews
  • Randomized Algorithms for Matrices and Data,
    Mahoney, FnTML, or arxiv.org
    Speaker: M. Mahoney (Feb 17)

    Proofs of fundamental results
  • An Elementary Proof of a Theorem of Johnson and Lindenstrauss,
    S. Dasgupta, A. Gupta, Random Structures and Algorithms, 2003, or here
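
    As a quick numerical companion to the Dasgupta-Gupta paper, this toy numpy sketch (ours, with an illustrative and deliberately loose constant in the target dimension) projects a point set with a scaled Gaussian matrix and checks how well pairwise distances are preserved:

        import numpy as np

        def pairwise_dists(Z):
            # All pairwise Euclidean distances between the rows of Z.
            sq = np.sum(Z**2, axis=1)
            D2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
            iu = np.triu_indices(Z.shape[0], k=1)
            return np.sqrt(np.maximum(D2[iu], 0.0))

        rng = np.random.default_rng(0)
        n, d, eps = 50, 10000, 0.2
        k = int(8 * np.log(n) / eps**2)    # O(log(n)/eps^2) dimensions
        X = rng.standard_normal((n, d))
        G = rng.standard_normal((d, k)) / np.sqrt(k)
        ratios = pairwise_dists(X @ G) / pairwise_dists(X)
        # With high probability all ratios lie in [1 - eps, 1 + eps].
        print(ratios.min(), ratios.max())
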
    Low rank approximation
  • Subspace iteration randomization and singular value problems
    M. Gu, arxiv.org
    Speaker: Ming Gu (Mar 10)
  • Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
    Halko, Martinsson, and Tropp, SIAM Review 2011 or arxiv.org
  • Fast Monte-Carlo Algorithms for Finding Low-Rank Approximations
    Frieze, Kannan and Vempala, Journal of the ACM, or here
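
    The Halko-Martinsson-Tropp survey and Gu's paper above revolve around the following prototype; this is a minimal numpy sketch, written here for illustration, of a randomized SVD with oversampling and a few power (subspace) iterations:

        import numpy as np

        def randomized_svd(A, k, p=10, q=2, seed=0):
            # Rank-k SVD via a randomized range finder: k is the target
            # rank, p the oversampling, and q the number of power
            # (subspace) iterations, which help when the singular
            # values decay slowly.
            rng = np.random.default_rng(seed)
            Y = A @ rng.standard_normal((A.shape[1], k + p))
            Q, _ = np.linalg.qr(Y)
            for _ in range(q):
                Q, _ = np.linalg.qr(A.T @ Q)   # re-orthogonalize each pass
                Q, _ = np.linalg.qr(A @ Q)
            B = Q.T @ A                        # small (k+p)-by-n matrix
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
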

    Sketching algorithms
  • Sketching as a Tool for Numerical Linear Algebra,
    Woodruff, FnTTCS, or arxiv.org
  • Relative-Error CUR Matrix Decompositions,
    Drineas, Mahoney, and Muthukrishnan, SIAM J. Matrix Analysis and Applications
  • Communication avoiding rank revealing QR factorization with column pivoting
    Demmel, Grigori, Gu, Xiang, SIAM J. Matrix Analysis and Applications, 2015
    Speaker: Laura Grigori (Apr 21)
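
    Many of the algorithms in this group apply a random sketching matrix S to A; as one concrete example, here is our own minimal numpy sketch of CountSketch (the sparse embedding behind the input-sparsity-time results of Clarkson and Woodruff, listed below), which hashes each row of A to a random bucket with a random sign:

        import numpy as np

        def countsketch(A, s, seed=0):
            # Compute S @ A for a CountSketch matrix S with s rows:
            # row i of A is added, with a random sign, to bucket h(i).
            # The cost is proportional to the number of nonzeros of A.
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            buckets = rng.integers(0, s, size=n)   # hash h: [n] -> [s]
            signs = rng.choice([-1.0, 1.0], size=n)
            SA = np.zeros((s, A.shape[1]))
            np.add.at(SA, buckets, signs[:, None] * A)
            return SA
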

    Fast Least Squares
  • Blendenpik: Supercharging LAPACK's least-squares solver,
    Avron, Maymounkov, and Toledo, SIAM Journal on Scientific Computing
  • Low Rank Approximation and Regression in Input Sparsity Time
    Clarkson and Woodruff, STOC or arxiv.org
  • Faster Least Squares Approximation,
    Drineas, Mahoney, Muthukrishnan, and Sarlos, Numerische Mathematik
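
    A common thread in this group is to compress a tall least squares problem before solving it. The following minimal numpy sketch (ours) shows the basic sketch-and-solve variant with a Gaussian sketch; Blendenpik instead uses the sketch to build a preconditioner and then iterates to full accuracy:

        import numpy as np

        def sketched_lstsq(A, b, c, seed=0):
            # Solve min ||S @ (A x - b)|| for a Gaussian sketch S with
            # c rows, where n >> c >> d for a tall n-by-d matrix A.
            rng = np.random.default_rng(seed)
            S = rng.standard_normal((c, A.shape[0])) / np.sqrt(c)
            x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((20000, 50))
        b = rng.standard_normal(20000)
        x_sketch = sketched_lstsq(A, b, c=500)
        x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.linalg.norm(x_sketch - x_exact))
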
    INRIA Paris
    Laboratoire J. L. Lions
    Sorbonne University
    Contact: Laura Grigori (Laura(dot)Grigori(at)inria(dot)fr)
