Learning to Optimize (L2O) is a subset of machine learning (ML) that seeks to combine the theoretical guarantees of classic optimization methods with the strong empirical performance of data-driven algorithms. This is my primary research area and the underlying theme of the blog posts below, which present the core ideas of mathematical research, typically one post per published work. Feel free to browse, comment, and reach out if you would like to discuss these materials in further detail; I am quite open to collaborating on new projects.

Google Scholar

ResearchGate

Posts by Year

2020

Projecting to Manifolds via Unsupervised Learning

10 minute read

TL;DR - We present an algorithm for projecting onto the low-dimensional manifolds that efficiently represent true data.
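
As a rough sketch of the underlying idea (and only a sketch, not the method from the paper), suppose a decoder d(z) parameterizes the learned manifold; projecting a point x then amounts to finding the latent code whose decoding is nearest to x. Below, a toy one-dimensional decoder (the unit circle) stands in for a trained network, and the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def decode(z):
    """Toy 'learned' decoder: maps a latent scalar z onto the unit
    circle in R^2. A trained generative model would stand here."""
    return np.array([np.cos(z), np.sin(z)])

def decode_grad(z):
    """Derivative of the toy decoder with respect to z."""
    return np.array([-np.sin(z), np.cos(z)])

def project_to_manifold(x, z0=0.0, lr=0.1, iters=200):
    """Approximate proj_M(x) by minimizing ||decode(z) - x||^2 over
    the latent variable z with plain gradient descent."""
    z = z0
    for _ in range(iters):
        r = decode(z) - x                     # residual in data space
        z -= lr * 2.0 * (r @ decode_grad(z))  # chain rule through the decoder
    return decode(z)

x = np.array([2.0, 1.0])
print(project_to_manifold(x))  # ~ x / ||x||, the exact projection onto the circle
```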

Safeguarded Learned Convex Optimization

less than 1 minute read

TL;DR - Safeguarded learning to optimize (L2O) algorithms can leverage the strength of machine learning tools while maintaining convergence guarantees.
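
The core mechanism is simple to sketch: at each iteration a learned operator proposes an update, the proposal is kept only if it makes sufficient progress by a classical criterion, and otherwise a provably convergent fallback step is taken. The snippet below illustrates this under assumed choices (a least squares objective, a gradient-norm decrease test, a gradient descent fallback, and a stand-in "learned" operator); it is not the exact safeguard from the paper.

```python
import numpy as np

def safeguarded_l2o(A, b, learned_step, iters=100, delta=0.1):
    """Minimize f(x) = 0.5*||Ax - b||^2. Accept the learned update
    only if it shrinks the gradient norm by a factor (1 - delta);
    otherwise fall back to a safe gradient descent step."""
    grad = lambda v: A.T @ (A @ v - b)
    tau = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size 1/L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        candidate = learned_step(x)           # proposal from the L2O model
        if np.linalg.norm(grad(candidate)) <= (1 - delta) * np.linalg.norm(grad(x)):
            x = candidate                     # safeguard test passed
        else:
            x = x - tau * grad(x)             # provably convergent fallback
    return x

np.random.seed(0)
A, b = np.random.randn(20, 5), np.random.randn(20)
# Stand-in for a trained model: a deliberately over-aggressive gradient step.
learned = lambda x: x - (2.5 / np.linalg.norm(A, 2) ** 2) * (A.T @ (A @ x - b))
x_hat = safeguarded_l2o(A, b, learned)
print(np.linalg.norm(A.T @ (A @ x_hat - b)))  # gradient norm near zero
```

Because the fallback is itself convergent, the iteration inherits its guarantees no matter how the learned proposals behave.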


2019

Async Inertial Method for Common Fixed Points

less than 1 minute read

TL;DR - A simplified ("baby") version of ARock for common fixed points, developed without using any probability.
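
For intuition, the scheme can be sketched as a cyclic (hence probability-free) Krasnosel'skii-Mann iteration with an inertial term: extrapolate along the previous step, then average with one operator at a time. The toy below uses projections onto two convex sets as the nonexpansive operators, so every common fixed point lies in their intersection; the inertia and relaxation parameters are illustrative choices, not the conditions derived in the paper.

```python
import numpy as np

def proj_disk(x, c, r):
    """Project x onto the closed disk of radius r centered at c."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def proj_halfplane(x, a, offset):
    """Project x onto the halfplane {y : a . y <= offset}."""
    s = a @ x - offset
    return x if s <= 0 else x - s * a / (a @ a)

# Nonexpansive operators whose common fixed points form the intersection.
ops = [lambda x: proj_disk(x, np.zeros(2), 2.0),
       lambda x: proj_halfplane(x, np.array([1.0, 1.0]), 1.0)]

def inertial_km(ops, x0, beta=0.3, lam=0.5, iters=200):
    """Cyclic inertial Krasnosel'skii-Mann iteration: extrapolate with
    momentum beta, then relax toward one operator's output at a time."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(iters):
        T = ops[k % len(ops)]          # deterministic cyclic order, no randomness
        y = x + beta * (x - x_prev)    # inertial extrapolation
        x_prev, x = x, (1 - lam) * y + lam * T(y)
    return x

print(inertial_km(ops, np.array([5.0, 5.0])))  # converges into the intersection
```

The deterministic sweep order is what keeps the sketch probability-free; ARock itself relies on randomized asynchronous updates.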
