seldonian.optimizers.gradient_descent
Functions
gradient_descent_adam
    Implements KKT optimization, i.e., simultaneous gradient descent/ascent using the Adam optimizer on a Lagrangian L(theta, lambda) = f(theta) + lambda^T g(theta), where f is the primary objective, lambda is a vector of Lagrange multipliers, and g is a vector of upper-bound functions.

setup_gradients
    Wrapper to obtain the gradient functions of the primary objective and the upper-bound functions, given a gradient library.
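The sketch below illustrates the simultaneous descent/ascent idea on a toy problem; it is not the library's implementation. The objective f, the single constraint g, the learning rate, and the iteration count are illustrative assumptions. Each step descends the Lagrangian in theta and ascends it in lambda, with separate Adam state for each, and projects the multipliers onto the nonnegative orthant.

    import numpy as np

    # Toy primary objective f(theta) (assumed for illustration): a quadratic to minimize.
    def f(theta):
        return np.sum((theta - 2.0) ** 2)

    def grad_f(theta):
        return 2.0 * (theta - 2.0)

    # One upper-bound function g(theta) <= 0 (assumed): mean(theta) - 1 <= 0.
    def g(theta):
        return np.array([np.mean(theta) - 1.0])

    def grad_g(theta):
        # Jacobian of g w.r.t. theta, shape (n_constraints, n_params).
        return np.full((1, theta.size), 1.0 / theta.size)

    def adam_step(grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
        """One Adam update; returns the step and updated moment estimates."""
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        return lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    theta = np.zeros(3)            # primal variables, updated by descent
    lam = np.zeros(1)              # Lagrange multipliers, updated by ascent
    m_th, v_th = np.zeros_like(theta), np.zeros_like(theta)
    m_la, v_la = np.zeros_like(lam), np.zeros_like(lam)

    for t in range(1, 501):
        # Gradients of L(theta, lambda) = f(theta) + lambda^T g(theta)
        dL_dtheta = grad_f(theta) + grad_g(theta).T @ lam   # descend in theta
        dL_dlam = g(theta)                                   # ascend in lambda

        step_th, m_th, v_th = adam_step(dL_dtheta, m_th, v_th, t)
        theta -= step_th

        step_la, m_la, v_la = adam_step(dL_dlam, m_la, v_la, t)
        lam = np.maximum(0.0, lam + step_la)                 # keep multipliers nonnegative

    print("theta:", theta, "lambda:", lam, "g(theta):", g(theta))

In the library, the per-parameter gradient functions would instead come from the gradient-setup wrapper (e.g., via an automatic differentiation backend) rather than the hand-coded grad_f and grad_g used here.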