Efficient learning and sampling algorithms for big data and complex models

This project aims to develop algorithms and analysis techniques for large-scale optimization and sampling problems that arise in machine learning. Some of the main threads are:

  • Stochastic greedy methods for convex optimization.
  • Generalizations of Nesterov's celebrated acceleration approach for smooth convex optimization in continuous time (see the ODE sketch after this list).
  • Parameter estimation methods for deep neural networks, based on an analysis of their representation and generalization properties.
  • Efficient sampling methods and their analysis via an optimization approach (see the Langevin sketch after this list).
  • Alternating minimization approaches to non-convex optimization problems, such as dictionary learning and neural network parameter estimation (see the dictionary-learning sketch after this list).
  • Effective uncertainty estimates for machine learning methods, based on the design of appropriate loss functions.
  • Methodology, based on fast Laplacian solvers, suitable for large-scale graph data.
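
One concrete instance of the continuous-time view of acceleration is the ODE of Su, Boyd, and Candès, of which Nesterov's method is a discretization. The sketch below states it together with its standard O(1/t^2) rate; it is included as a reference point for this research thread, not as a statement of the project's own results.

```latex
% Su-Boyd-Candes continuous-time limit of Nesterov's accelerated
% gradient method, for a smooth convex objective f:
\[
  \ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0,
  \qquad X(0) = x_0, \quad \dot{X}(0) = 0.
\]
% Along its trajectories the objective decays at the accelerated rate
\[
  f\bigl(X(t)\bigr) - f^\star \;\le\; \frac{2\,\lVert x_0 - x^\star \rVert^2}{t^2}.
\]
```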
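The link between sampling and optimization is often illustrated by the unadjusted Langevin algorithm: gradient descent on a potential U plus Gaussian noise, which (approximately) samples from the density proportional to exp(-U). The Python sketch below is a minimal illustration for a standard Gaussian target; the function names, step size eta, and iteration counts are illustrative assumptions, not the project's method.

```python
import numpy as np

def grad_potential(x):
    """Gradient of U(x) = ||x||^2 / 2, the potential of a standard
    Gaussian target density pi(x) proportional to exp(-U(x))."""
    return x

def unadjusted_langevin(x0, eta=0.01, n_steps=5000, rng=None):
    """Unadjusted Langevin algorithm (ULA):
        x_{k+1} = x_k - eta * grad U(x_k) + sqrt(2 * eta) * xi_k,
    with xi_k standard Gaussian noise. Dropping the noise term leaves
    plain gradient descent on U, which is the optimization view."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_potential(x) + np.sqrt(2 * eta) * noise
        samples[k] = x
    return samples

# After discarding burn-in, mean is close to 0 and std close to 1
# (ULA carries an O(eta) discretization bias).
samples = unadjusted_langevin(np.zeros(2), eta=0.05, n_steps=20000)
print(samples[5000:].mean(axis=0), samples[5000:].std(axis=0))
```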
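For the alternating-minimization thread, a minimal sketch is the classical alternation for sparse dictionary learning: hold the dictionary fixed and sparse-code the data with a few ISTA steps, then hold the codes fixed and refit the dictionary by least squares. Every choice here (the l1 penalty lam, iteration counts, the pseudo-inverse dictionary update) is an illustrative assumption, not the project's algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dictionary_learning(Y, n_atoms, lam=0.1, n_outer=50, n_ista=20, rng=None):
    """Alternating minimization for
        min_{D, C}  0.5 * ||Y - D C||_F^2 + lam * ||C||_1.
    Each outer iteration (a) sparse-codes the columns of Y with a few
    ISTA steps holding D fixed, then (b) updates D by least squares
    holding C fixed, renormalizing atoms to unit norm."""
    rng = np.random.default_rng(rng)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    C = np.zeros((n_atoms, n))
    for _ in range(n_outer):
        # (a) sparse coding: ISTA steps on C with step 1/L
        L = np.linalg.norm(D, 2) ** 2             # Lipschitz const. of grad
        for _ in range(n_ista):
            C = soft_threshold(C - (D.T @ (D @ C - Y)) / L, lam / L)
        # (b) dictionary update: least squares, then renormalize columns
        D = Y @ C.T @ np.linalg.pinv(C @ C.T)
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)
    return D, C

# Example: fit a dictionary to synthetic data with sparse codes.
rng = np.random.default_rng(0)
D_true = rng.standard_normal((20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
C_true = rng.standard_normal((30, 500)) * (rng.random((30, 500)) < 0.1)
D_hat, C_hat = dictionary_learning(D_true @ C_true, n_atoms=30, lam=0.05, rng=1)
```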

Project Researchers

  • Lead CI
  • PhD Student
  • Associate Investigator