Convex optimization with gradient descent

  • What is the convergence rate of gradient descent for convex functions?

    Convex f.
    From Theorem ??, we know that gradient descent on a convex, differentiable f with Lipschitz-continuous gradient (and a suitably small fixed step size) converges at rate O(1/k), where k is the number of iterations.
    This implies that to achieve a bound of f(x^(k)) − f(x∗) ≤ ϵ, we must run O(1/ϵ) iterations of gradient descent.
    This rate is referred to as "sub-linear convergence"; a numerical sketch of the underlying bound follows below.
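The bound behind this rate, for step size t = 1/L (with L the Lipschitz constant of the gradient), is f(x^(k)) − f(x∗) ≤ ‖x^(0) − x∗‖² / (2tk). The following is a minimal sketch that checks this bound numerically; the least-squares objective, matrix sizes, and step-size choice are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative least-squares objective f(x) = 0.5 * ||A x - b||^2 (convex,
# gradient is Lipschitz with constant L = largest eigenvalue of A^T A).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad_f(x):
    return A.T @ (A @ x - b)

L = np.linalg.eigvalsh(A.T @ A).max()          # Lipschitz constant of grad f
t = 1.0 / L                                    # fixed step size t = 1/L

x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # reference optimum x*
f_star = f(x_star)

x0 = np.zeros(10)
x = x0.copy()
for k in range(1, 201):
    x = x - t * grad_f(x)                      # gradient descent update
    gap = f(x) - f_star                        # suboptimality f(x^(k)) - f(x*)
    bound = np.sum((x0 - x_star) ** 2) / (2 * t * k)  # O(1/k) guarantee
    assert gap <= bound + 1e-12                # the sub-linear bound holds

print(f"after {k} iterations: gap = {gap:.3e}, O(1/k) bound = {bound:.3e}")
```

On this particular example the actual gap shrinks much faster than the O(1/k) bound, since least squares is strongly convex; the bound is what the theorem guarantees for any convex f with Lipschitz-continuous gradient.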

Gradient descent tries to improve the function value by moving in the direction opposite to the gradient (i.e., the negative of the first derivative). For convex optimization it converges to the global optimum under fairly general conditions. For nonconvex optimization it generally converges only to a local optimum, which can depend on the starting point; see the sketch below.
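As a minimal illustration of that last point (the one-dimensional function, step size, and starting points are illustrative assumptions, not from the text), gradient descent on a nonconvex function can settle into different minima depending on where it starts:

```python
# Nonconvex example: f(w) = w**4 - 2*w**2 + 0.5*w has a global minimum near
# w ≈ -1.06 and a strictly local (non-global) minimum near w ≈ +0.93.
def f(w):
    return w**4 - 2 * w**2 + 0.5 * w

def grad(w):
    return 4 * w**3 - 4 * w + 0.5       # derivative of f

def gradient_descent(w0, step=0.05, iters=500):
    w = w0
    for _ in range(iters):
        w = w - step * grad(w)          # move opposite to the gradient
    return w

w_left = gradient_descent(-0.5)
w_right = gradient_descent(+0.5)
print(w_left, f(w_left))    # ≈ -1.06: the global minimum
print(w_right, f(w_right))  # ≈ +0.93: only a local minimum
```

Which minimum the iterates reach depends entirely on the starting point; for a convex f there is a single minimum (up to a convex set of minimizers), so this issue does not arise.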
