Dealing with unbounded gradients in stochastic saddle-point optimization

By Gergely Neu and Nneka Okolo
We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique...
June 7, 2024
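The abstract is truncated above, so the paper's exact regularizer is not reproduced here. As a minimal sketch of the setting only, the snippet below runs stochastic gradient descent-ascent on a toy bilinear saddle-point problem with a generic quadratic regularizer that pulls the iterates toward the origin; the function name sgda_regularized, the regularizer, and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sgda_regularized(grad_x, grad_y, x0, y0, steps=1000,
                     eta=0.01, reg=0.1, noise=0.01, seed=0):
    """Stochastic gradient descent-ascent with an illustrative quadratic
    regularizer (NOT the regularization technique proposed in the paper)."""
    rng = np.random.default_rng(seed)
    x, y = x0.copy(), y0.copy()
    for _ in range(steps):
        # Noisy gradient estimates, simulated here with additive Gaussian noise.
        gx = grad_x(x, y) + noise * rng.standard_normal(x.shape)
        gy = grad_y(x, y) + noise * rng.standard_normal(y.shape)
        # The quadratic terms (reg/2)||x||^2 - (reg/2)||y||^2 make the
        # regularized objective strongly convex-strongly concave, which
        # keeps the iterates in a bounded region around the origin.
        x = x - eta * (gx + reg * x)  # descent step in x
        y = y + eta * (gy - reg * y)  # ascent step in y
    return x, y

# Toy bilinear saddle-point problem f(x, y) = x^T A y, whose saddle point
# is at the origin; plain gradient descent-ascent diverges on it.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
gx = lambda x, y: A @ y    # gradient of f in x
gy = lambda x, y: A.T @ x  # gradient of f in y
x_out, y_out = sgda_regularized(gx, gy, np.ones(2), np.ones(2))
print(x_out, y_out)  # iterates contract to a small noise ball around the saddle
```

The intuition the abstract points at is visible even in this sketch: without the regularizing terms, the bilinear iterates spiral outward and the gradients grow without bound, whereas the regularized update is a contraction, so the iterates (and hence the gradients of a smooth objective evaluated at them) stay bounded throughout optimization.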