Summer Course on Sparse Optimization and Applications to Information Processing

July 28, 2013

Mário A. T. Figueiredo (Technical Univ. Lisbon and IT, Portugal)

Stephen J. Wright (University of Wisconsin, Madison, USA)

09:00-09:50 Registration
09:50-10:00 Opening Remarks
10:00-13:00 School Lectures (30 min. break included)
13:00-14:30 Lunch
14:30-17:30 School Lectures (30 min. break included)

Abstract

Much of modern data processing requires identification of low-dimensional structures in high-dimensional spaces, using observations that are incomplete or erroneous. This general paradigm applies to denoising and deblurring of images (where natural images form a low-dimensional subset of the space of all possible images), compressed sensing (where the signal can be represented in terms of just a few elements of an appropriate basis), regularized regression (where we seek to explain observations in terms of just a few predictive variables), manifold learning (where we wish to identify a manifold from partially observed vectors on that manifold), matrix completion (where we seek a low-rank matrix that fits partial information about the matrix), and so on.
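As a concrete illustration of this setting, here is a minimal NumPy sketch of a compressed sensing measurement model y = A x + noise, where x is sparse; the dimensions, the Gaussian sensing matrix, and the noise level are illustrative assumptions, not details from the course.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 5                  # ambient dimension, measurements, sparsity
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)        # a k-sparse signal
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
    y = A @ x_true + 0.01 * rng.standard_normal(m)  # incomplete, noisy observations

Here m < n, so the system is underdetermined: recovering x_true from y is only possible by exploiting its sparsity.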

Sparse optimization provides valuable tools for formulating and solving problems of this type. A key concept is regularization, whereby we introduce functions into the optimization formulation that induce the required type of structure in the solutions. In the simplest case, the 1-norm of a vector x is used to obtain solutions in which x is sparse, that is, it has relatively few nonzero components. Often (though not always) the regularized formulations are convex but nonsmooth, and customized optimization algorithms are required to handle the large data sizes and dimensions involved; one such algorithm is sketched below.
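To make the regularization idea concrete, the following is a minimal sketch of the iterative shrinkage-thresholding algorithm (ISTA) for the 1-norm-regularized least-squares problem min_x 0.5*||A x - y||^2 + lam*||x||_1. ISTA is one standard method for this problem class, not the only algorithm the course covers, and the parameter values here are illustrative.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1: componentwise soft-thresholding.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam, n_iter=500):
        # Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by alternating a gradient
        # step on the smooth term with the 1-norm's proximal operator.
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

Applied to the (A, y) pair from the earlier sketch, x_hat = ista(A, y, lam=0.05) typically recovers the support of x_true. Accelerated and adaptive variants (e.g., FISTA, SpaRSA) follow the same gradient-plus-shrinkage template with modified step choices.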

This tutorial will survey the range of applications of sparse optimization in data processing, and then describe the formulation techniques and algorithms used to solve these problems.