
    Safeguarded Learned Convex Optimization

    TL;DR - Safeguarded learning to optimize (L2O) algorithms can leverage the strengths of machine learning tools while maintaining convergence guarantees.

    less than 1 minute read

    Preprint Citation Code Slides

    Safe L2O

    Howard Heaton, Xiaohan Chen, Atlas Wang, and Wotao Yin

    (Under Construction - Jan 2021)
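
    While the full write-up is under construction, here is a minimal sketch of the safeguarding idea from the TL;DR above. It assumes a plain gradient-descent fallback and a gradient-norm decrease test as the safeguard; the threshold alpha, the toy least-squares problem, and the stand-in "learned" update are illustrative only, not the exact safeguard condition used in the paper.

    import numpy as np

    def safeguarded_l2o(x0, grad, learned_update, step_size=0.1,
                        alpha=0.99, max_iter=100):
        """Sketch of a safeguarded L2O loop for a smooth convex problem.

        Each iteration, a learned operator proposes an update. The proposal
        is kept only if it shrinks a classical residual (here the gradient
        norm) by a factor alpha; otherwise the method falls back to a plain
        gradient step, which retains the usual convergence guarantee.
        """
        x = x0
        for _ in range(max_iter):
            proposal = learned_update(x)        # L2O update (e.g., a trained network)
            fallback = x - step_size * grad(x)  # classical, provably convergent step
            # Safeguard test: accept the learned step only if it makes
            # sufficient progress relative to the current residual.
            if np.linalg.norm(grad(proposal)) <= alpha * np.linalg.norm(grad(x)):
                x = proposal
            else:
                x = fallback
        return x

    # Toy usage: minimize f(x) = 0.5 * ||A x - b||^2 with a stand-in "learned" update.
    A = np.array([[2.0, 0.0], [0.0, 1.0]])
    b = np.array([1.0, -1.0])
    grad = lambda x: A.T @ (A @ x - b)
    learned_update = lambda x: x - 0.5 * grad(x)  # placeholder for a trained model
    x_approx = safeguarded_l2o(np.zeros(2), grad, learned_update)
    print(x_approx)  # approaches the least-squares solution [0.5, -1.0]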

    Published: March 1, 2020

    Updated: January 12, 2021


