Useful tricks in Convex Optimization
Convex Optimization Problems
An optimization problem usually has the form

$$\begin{array}{ll} \text{minimize} & f_0(x) \\ \text{subject to} & f_i(x) \le 0, \quad i = 1, \ldots, m \end{array}$$

with optimization variable $x \in \mathbb{R}^n$ and objective function $f_0 : \mathbb{R}^n \to \mathbb{R}$. The problem is convex when $f_0, \ldots, f_m$ are all convex.
Linear Programming (LP)

$$\begin{array}{ll} \text{minimize} & c^\top x \\ \text{subject to} & Gx \preceq h \\ & Ax = b \end{array}$$

where $G \in \mathbb{R}^{m \times n}$ and $A \in \mathbb{R}^{p \times n}$.
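As a concrete sketch of this form, here is a tiny LP solved with SciPy's `linprog` (an assumption of this example; any LP solver works), where `A_ub`/`b_ub` play the role of $G$/$h$:

```python
import numpy as np
from scipy.optimize import linprog

# minimize  c^T x  subject to  G x <= h  and  A x = b
c = np.array([1.0, 2.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0]])  # encodes x >= 0 as -x <= 0
h = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# bounds=(None, None) disables linprog's default x >= 0 bounds,
# since the inequality constraints are given explicitly above.
res = linprog(c, A_ub=G, b_ub=h, A_eq=A, b_eq=b, bounds=(None, None))
print(res.x)  # optimal point: [1, 0]
```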
The following problems can be formulated as LPs.
Minimize $\|Ax - b\|_\infty$.

This is equivalent to the LP:

$$\begin{array}{ll} \text{minimize} & t \\ \text{subject to} & -t\mathbf{1} \preceq Ax - b \preceq t\mathbf{1} \end{array}$$

with variables $x$ and $t$. This follows from the fact that $\|y\|_\infty \le t \iff -t\mathbf{1} \preceq y \preceq t\mathbf{1}$. That is, $\|Ax - b\|_\infty = \min\{\, t : -t\mathbf{1} \preceq Ax - b \preceq t\mathbf{1} \,\}$.
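A sketch of this reformulation with SciPy's `linprog`, stacking the variables as $z = (x, t)$; `A` and `b` are small made-up data for illustration:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, -1.0, 2.0])
m, n = A.shape

# Variables z = (x, t); minimize t subject to -t*1 <= A x - b <= t*1.
c = np.r_[np.zeros(n), 1.0]
ones = np.ones((m, 1))
A_ub = np.block([[A, -ones],    #  A x - t 1 <=  b
                 [-A, -ones]])  # -A x - t 1 <= -b
b_ub = np.r_[b, -b]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x, t = res.x[:n], res.x[n]
print(t, np.abs(A @ x - b).max())  # the two values agree at the optimum
```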
Minimize $\|Ax - b\|_1$.

This is equivalent to the LP:

$$\begin{array}{ll} \text{minimize} & \mathbf{1}^\top s \\ \text{subject to} & -s \preceq Ax - b \preceq s \end{array}$$

with variables $x$ and $s$. This follows from the fact that $|y_i| \le s_i \iff -s_i \le y_i \le s_i$. The LP is optimal only when $s_i = |a_i^\top x - b_i|$ for every $i$ (otherwise some $s_i$ could be decreased), so $\mathbf{1}^\top s = \|Ax - b\|_1$. Then, it implies that $x$ is optimal for the original problem.
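A similar sketch for the $\ell_1$ problem, stacking $z = (x, s)$; the final check illustrates the optimality argument above, namely $s_i = |a_i^\top x - b_i|$ at the optimum:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, -1.0, 2.0])
m, n = A.shape

# Variables z = (x, s); minimize 1^T s subject to -s <= A x - b <= s.
c = np.r_[np.zeros(n), np.ones(m)]
I = np.eye(m)
A_ub = np.block([[A, -I],    #  A x - s <=  b
                 [-A, -I]])  # -A x - s <= -b
b_ub = np.r_[b, -b]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x, s = res.x[:n], res.x[n:]
print(s - np.abs(A @ x - b))  # ~0 componentwise: s_i = |a_i^T x - b_i|
```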
Minimize $\|Ax - b\|_1$ subject to $\|x\|_\infty \le 1$.

This is equivalent to the LP:

$$\begin{array}{ll} \text{minimize} & \mathbf{1}^\top s \\ \text{subject to} & -s \preceq Ax - b \preceq s \\ & -\mathbf{1} \preceq x \preceq \mathbf{1} \end{array}$$

with variables $x$ and $s$.
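The same scaffolding handles this variant; the extra constraint $\|x\|_\infty \le 1$ maps directly onto `linprog`'s `bounds` argument for the $x$ block (again a sketch on made-up data):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, -1.0, 2.0])
m, n = A.shape

c = np.r_[np.zeros(n), np.ones(m)]
I = np.eye(m)
A_ub = np.block([[A, -I], [-A, -I]])  # -s <= A x - b <= s
b_ub = np.r_[b, -b]

# -1 <= x_i <= 1 enforces ||x||_inf <= 1; s is left free
# (the inequality constraints already force s >= 0).
bounds = [(-1, 1)] * n + [(None, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[:n], np.abs(res.x[:n]).max())  # ||x||_inf <= 1 holds
```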
Minimize $\|x\|_1$ subject to $\|Ax - b\|_\infty \le 1$.

This is equivalent to the LP:

$$\begin{array}{ll} \text{minimize} & \mathbf{1}^\top s \\ \text{subject to} & -s \preceq x \preceq s \\ & -\mathbf{1} \preceq Ax - b \preceq \mathbf{1} \end{array}$$

with variables $x$ and $s$.
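A sketch with the roles of the two norms swapped; the data here is made up, chosen so that the constraint set $\{x : \|Ax - b\|_\infty \le 1\}$ is nonempty:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([-0.9, -1.1, -0.8])  # close to A @ [1, -1], so feasible
m, n = A.shape

# Variables z = (x, s); minimize 1^T s
# subject to -s <= x <= s and -1 <= A x - b <= 1.
c = np.r_[np.zeros(n), np.ones(n)]
I = np.eye(n)
Z = np.zeros((m, n))
A_ub = np.block([[I, -I],    #  x - s <= 0
                 [-I, -I],   # -x - s <= 0
                 [A, Z],     #  A x <= b + 1
                 [-A, Z]])   # -A x <= 1 - b
b_ub = np.r_[np.zeros(n), np.zeros(n), b + 1.0, 1.0 - b]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x = res.x[:n]
print(np.abs(x).sum(), np.abs(A @ x - b).max())  # ||x||_1, with ||Ax - b||_inf <= 1
```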
Minimize $\|Ax - b\|_1 + \|x\|_\infty$.

This is equivalent to the LP:

$$\begin{array}{ll} \text{minimize} & \mathbf{1}^\top s + t \\ \text{subject to} & -s \preceq Ax - b \preceq s \\ & -t\mathbf{1} \preceq x \preceq t\mathbf{1} \end{array}$$

with variables $x$, $s$, and $t$.
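Finally, a sketch combining both tricks in one variable stack $z = (x, s, t)$:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, -1.0, 2.0])
m, n = A.shape

# Variables z = (x, s, t); minimize 1^T s + t
# subject to -s <= A x - b <= s and -t*1 <= x <= t*1.
c = np.r_[np.zeros(n), np.ones(m), 1.0]
I_m, I_n = np.eye(m), np.eye(n)
A_ub = np.block([
    [A, -I_m, np.zeros((m, 1))],                #  A x - s <=  b
    [-A, -I_m, np.zeros((m, 1))],               # -A x - s <= -b
    [I_n, np.zeros((n, m)), -np.ones((n, 1))],  #  x - t 1 <= 0
    [-I_n, np.zeros((n, m)), -np.ones((n, 1))], # -x - t 1 <= 0
])
b_ub = np.r_[b, -b, np.zeros(n), np.zeros(n)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x = res.x[:n]
print(res.fun, np.abs(A @ x - b).sum() + np.abs(x).max())  # objectives agree
```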
Gradient Descent
A generic first-order scheme repeatedly approximates $f$ around the current iterate and minimizes the approximation:

for k in range(0, N):
    Approximate f by f_k according to x(k)
    Do something using f_k (e.g., setting x(k+1) = argmin f_k(x))
There are multiple forms of approximation.

First-order approximation (with a quadratic proximity term):

$$f_k(x) = f(x^{(k)}) + \nabla f(x^{(k)})^\top \left(x - x^{(k)}\right) + \frac{1}{2\eta}\left\|x - x^{(k)}\right\|_2^2,$$

where $\eta > 0$ is the step size; the quadratic term penalizes moving far from $x^{(k)}$, where the linear model is accurate.
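Setting the gradient of $f_k$ to zero shows why minimizing this approximation reproduces the familiar gradient step:

$$\nabla f_k(x) = \nabla f(x^{(k)}) + \frac{1}{\eta}\left(x - x^{(k)}\right) = 0 \quad\Longrightarrow\quad x^{(k+1)} = \operatorname*{argmin}_x f_k(x) = x^{(k)} - \eta\,\nabla f(x^{(k)}).$$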
Usually, the algorithm looks like

for k in range(0, N):
    if norm(gradient(x(k))) <= eps:
        return x(k)
    x(k+1) = x(k) - step_size * gradient(x(k))
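As a runnable sketch of the loop above (assuming NumPy; `grad_f`, the starting point, and the quadratic test objective are all made up for illustration):

```python
import numpy as np

def gradient_descent(grad_f, x0, step_size, N=1000, eps=1e-6):
    """Run gradient descent given a gradient oracle grad_f."""
    x = x0
    for _ in range(N):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:  # near-stationary point: stop early
            break
        x = x - step_size * g
    return x

# Example: f(x) = 0.5 * ||x - c||^2 has gradient x - c and minimizer c.
c = np.array([1.0, -2.0])
x_hat = gradient_descent(lambda x: x - c, x0=np.zeros(2), step_size=0.5)
print(x_hat)  # ~ [1.0, -2.0]
```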
Theorem: Let $f : \mathbb{R}^n \to \mathbb{R}$ be convex and twice differentiable. For any $L > 0$, the following are equivalent:
- $\|\nabla f(x) - \nabla f(y)\|_2 \le L\|x - y\|_2$ for all $x, y \in \mathbb{R}^n$ (i.e., $f$ has $L$-Lipschitz gradient)
- $f(y) \le f(x) + \nabla f(x)^\top (y - x) + \frac{L}{2}\|y - x\|_2^2$ for all $x, y \in \mathbb{R}^n$
- $\nabla^2 f(x) \preceq L I$ for all $x \in \mathbb{R}^n$
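As a quick numerical illustration of the second condition (a sketch, assuming NumPy): for the quadratic $f(x) = \tfrac{1}{2}x^\top Q x$ with $Q \succeq 0$, the gradient is $Qx$ and $L = \lambda_{\max}(Q)$, so the quadratic upper bound should hold for every pair of points:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
Q = M.T @ M                      # positive semidefinite
L = np.linalg.eigvalsh(Q).max()  # lambda_max(Q) = Lipschitz constant of the gradient

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

# Check f(y) <= f(x) + grad(x)^T (y - x) + (L/2) ||y - x||^2 on random pairs.
for _ in range(1000):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    bound = f(x) + grad(x) @ (y - x) + 0.5 * L * np.sum((y - x) ** 2)
    assert f(y) <= bound + 1e-9
```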
Theorem: Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function with $L$-Lipschitz gradient and $x^*$ be any minimizer of $f$. Gradient descent with step size $\eta = 1/L$ outputs a point $x$ such that $f(x) - f(x^*) \le \epsilon$ in $O\!\left(L\|x^{(0)} - x^*\|_2^2 / \epsilon\right)$ iterations.
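To see the theorem in action, the sketch below (assuming NumPy, with a made-up strongly convex quadratic) runs gradient descent with step size $1/L$ and compares the iteration count against the $L\|x^{(0)} - x^*\|_2^2/\epsilon$ scale of the bound; since this is only an upper bound, far fewer iterations are typically used:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 5, 1e-6
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)          # positive definite => unique minimizer
b = rng.standard_normal(n)

# f(x) = 0.5 x^T Q x - b^T x has gradient Q x - b and minimizer x* = Q^{-1} b;
# its gradient is L-Lipschitz with L = lambda_max(Q).
L = np.linalg.eigvalsh(Q).max()
x_star = np.linalg.solve(Q, b)
f = lambda x: 0.5 * x @ Q @ x - b @ x

x, k = np.zeros(n), 0
while f(x) - f(x_star) > eps:
    x = x - (1.0 / L) * (Q @ x - b)  # gradient step with step size 1/L
    k += 1
print(k, L * np.sum(x_star ** 2) / eps)  # iterations used vs. bound scale
```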