Convex Optimization

Useful tricks in Convex Optimization
 
 

Convex Optimization Problems

An optimization problem usually has the form

$$\begin{aligned} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\ & h_i(x) = 0, \quad i = 1, \dots, p \end{aligned}$$

with optimization variable $x \in \mathbb{R}^n$ and objective function $f_0$.
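As a quick illustration (a minimal sketch with hypothetical problem data of my own, not from the original notes), the standard form above can be written almost verbatim with the cvxpy package; here $f_0(x) = \|x\|_2^2$, with one inequality and one equality constraint:

import cvxpy as cp
import numpy as np

n = 3
x = cp.Variable(n)                                   # optimization variable
A = np.array([[1.0, 1.0, 1.0]])                      # equality constraint data
b = np.array([1.0])

objective = cp.Minimize(cp.sum_squares(x))           # f_0(x) = ||x||_2^2
constraints = [cp.norm(x, 1) - 2 <= 0,               # f_1(x) <= 0
               A @ x == b]                           # h_1(x) = 0
prob = cp.Problem(objective, constraints)
prob.solve()
print(x.value)                                       # roughly [1/3, 1/3, 1/3]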
 

Linear Programming (LP)

A linear program has the form

$$\begin{aligned} \text{minimize} \quad & c^\top x \\ \text{subject to} \quad & Gx \preceq h \\ & Ax = b \end{aligned}$$

where $G \in \mathbb{R}^{m \times n}$ and $A \in \mathbb{R}^{p \times n}$.
 
The following problems can be formulated as LPs (a worked code sketch for the first one follows the list).
Minimize $\|Ax - b\|_\infty$.
This is equivalent to the LP
$$\begin{aligned} \text{minimize} \quad & t \\ \text{subject to} \quad & -t\mathbf{1} \preceq Ax - b \preceq t\mathbf{1} \end{aligned}$$
over the variables $x$ and $t$. This follows from the fact that $|y_i| \le t \iff -t \le y_i \le t$. That is, $\|y\|_\infty \le t \iff -t\mathbf{1} \preceq y \preceq t\mathbf{1}$.
Minimize $\|Ax - b\|_1$.
This is equivalent to the LP
$$\begin{aligned} \text{minimize} \quad & \mathbf{1}^\top s \\ \text{subject to} \quad & -s \preceq Ax - b \preceq s \end{aligned}$$
over the variables $x$ and $s$. This follows from the fact that $|a_i^\top x - b_i| \le s_i \iff -s_i \le a_i^\top x - b_i \le s_i$. The LP is optimal only when $s_i = |a_i^\top x - b_i|$ for every $i$. Then, it implies that the $x$-part of the LP solution is optimal for the original problem.
Minimize $\|Ax - b\|_1$ subject to $\|x\|_\infty \le 1$.
This is equivalent to the LP:
$$\begin{aligned} \text{minimize} \quad & \mathbf{1}^\top s \\ \text{subject to} \quad & -s \preceq Ax - b \preceq s, \quad -\mathbf{1} \preceq x \preceq \mathbf{1}. \end{aligned}$$
Minimize $\|x\|_1$ subject to $\|Ax - b\|_\infty \le 1$.
This is equivalent to the LP:
$$\begin{aligned} \text{minimize} \quad & \mathbf{1}^\top s \\ \text{subject to} \quad & -s \preceq x \preceq s, \quad -\mathbf{1} \preceq Ax - b \preceq \mathbf{1}. \end{aligned}$$
Minimize $\|Ax - b\|_1 + \|x\|_\infty$.
This is equivalent to the LP:
$$\begin{aligned} \text{minimize} \quad & \mathbf{1}^\top s + t \\ \text{subject to} \quad & -s \preceq Ax - b \preceq s, \quad -t\mathbf{1} \preceq x \preceq t\mathbf{1}. \end{aligned}$$
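A minimal sketch of the first trick above using scipy.optimize.linprog (the problem data is my own random example, not from the original notes): to minimize $\|Ax - b\|_\infty$ we append one extra variable $t$ and solve $\min t$ subject to $-t\mathbf{1} \preceq Ax - b \preceq t\mathbf{1}$.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

# Decision vector z = [x, t]; the objective picks out t.
c = np.zeros(n + 1)
c[-1] = 1.0

# Ax - b <= t*1 and -(Ax - b) <= t*1, stacked as G z <= h.
ones = np.ones((m, 1))
G = np.vstack([np.hstack([A, -ones]), np.hstack([-A, -ones])])
h = np.concatenate([b, -b])

# linprog defaults to nonnegative variables, so make every variable free.
res = linprog(c, A_ub=G, b_ub=h, bounds=[(None, None)] * (n + 1))
x_opt, t_opt = res.x[:n], res.x[-1]
print(t_opt, np.max(np.abs(A @ x_opt - b)))  # the two numbers should agree

At the optimum the auxiliary variable $t$ equals $\|Ax^\star - b\|_\infty$, which is exactly why the reformulation is tight.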
 
 
 

Gradient Descent

Many first-order methods fit the following approximation template:
for k in range(0, N):
    Approximate f by f_k around x(k)
    Do something with f_k (e.g., set x(k+1) = argmin f_k(x))
There are multiple forms of approximation.
First-order approximation: $f_k(x) = f(x^{(k)}) + \nabla f(x^{(k)})^\top (x - x^{(k)})$.
Usually, the algorithm here looks like
for k in range(0, N):
    if norm(gradient(x(k))) <= eps:
        return x(k)
    x(k+1) = x(k) - step_size * gradient(x(k))
Theorem: Let $f$ be convex and differentiable. For any $L > 0$, the following are equivalent:
  1. $\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|$ for all $x, y$.
  2. $f(y) \le f(x) + \nabla f(x)^\top (y - x) + \tfrac{L}{2}\|y - x\|^2$ for all $x, y$.
  3. $(\nabla f(x) - \nabla f(y))^\top (x - y) \ge \tfrac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2$ for all $x, y$.
 
Theorem: Let $f$ be a convex function with $L$-Lipschitz gradient and $x^\star$ be any minimizer of $f$. Gradient descent with step size $1/L$ outputs a point $x$ such that $f(x) - f(x^\star) \le \epsilon$ in $O\!\left(\tfrac{L\|x^{(0)} - x^\star\|^2}{\epsilon}\right)$ iterations.
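As a sanity check on the $1/L$ step-size rule, here is a minimal sketch on a quadratic test problem of my own (not from the original notes), where $\nabla f(x) = Qx - b$ is $L$-Lipschitz with $L = \lambda_{\max}(Q)$:

import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(30, 10))
Q = M.T @ M + np.eye(10)             # positive definite Hessian
b = rng.normal(size=10)

f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
L = np.linalg.eigvalsh(Q)[-1]        # Lipschitz constant of the gradient
x_star = np.linalg.solve(Q, b)       # exact minimizer, for reference

x = np.zeros(10)
for k in range(500):
    g = grad(x)
    if np.linalg.norm(g) <= 1e-10:   # stop once the gradient is small
        break
    x = x - (1.0 / L) * g            # the step size from the theorem

print(f(x) - f(x_star))              # suboptimality gap; should be ~0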
 
Second-order approximation: $f_k(x) = f(x^{(k)}) + \nabla f(x^{(k)})^\top (x - x^{(k)}) + \tfrac{1}{2}(x - x^{(k)})^\top \nabla^2 f(x^{(k)}) (x - x^{(k)})$ (see the Newton-step sketch after this list)
Stochastic Approximation: Approximate $f$ (or its gradient) by $f_i$ for a random index $i$, e.g. when $f = \tfrac{1}{n}\sum_i f_i$
Matrix Approximation: Approximate a matrix $A$ by a simpler matrix $B$ with $B \approx A$
Set Approximation: Approximate a convex set by an ellipsoid or a polytope
Barrier Approximation: Approximate a convex set by a smooth function that blows up on the boundary
Polynomial Approximation: Approximate a function by a polynomial
Partial Approximation: Split the problem into two parts and approximate only one part
Taylor Approximation: Approximate a function by a truncated Taylor expansion (the first- and second-order approximations above are special cases)
Mixed - Approximation:
Stochastic Matrix Approximation: Approximate a matrix $A$ by a simpler random matrix $B$ with $\mathbb{E}[B] = A$ and small variance
Homotopy Method: Approximate a function by a family of functions
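Minimizing the second-order model $f_k$ from the list above exactly gives the Newton step $x^{(k+1)} = x^{(k)} - \nabla^2 f(x^{(k)})^{-1} \nabla f(x^{(k)})$. A minimal sketch on a smooth convex test function of my own choosing (not from the original notes), with a crude backtracking safeguard:

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

# Smooth, strictly convex test objective: f(x) = sum_i cosh(a_i^T x - b_i).
f = lambda x: np.sum(np.cosh(A @ x - b))
grad = lambda x: A.T @ np.sinh(A @ x - b)
hess = lambda x: A.T @ (np.cosh(A @ x - b)[:, None] * A)

x = np.zeros(5)
for k in range(50):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) <= 1e-10:
        break
    step = np.linalg.solve(H, g)        # minimizer of the quadratic model
    t = 1.0
    while f(x - t * step) > f(x):       # crude backtracking to stay safe
        t /= 2
    x = x - t * step

print(np.linalg.norm(grad(x)))          # should be ~0 at the minimizer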