\(\newcommand{\W}[1]{ \; #1 \; }\) \(\newcommand{\R}[1]{ {\rm #1} }\) \(\newcommand{\B}[1]{ {\bf #1} }\) \(\newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} }\) \(\newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} }\) \(\newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} }\) \(\newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }\)
abs_min_quad: Minimize a Quadratic Abs-normal Approximation¶
Syntax¶
ok = abs_min_quad(
   level, n, m, s, g_hat, g_jac, hessian, bound, epsilon, maxitr, delta_x
)
Prototype¶
template <class DblVector, class SizeVector>
bool abs_min_quad(
size_t level ,
size_t n ,
size_t m ,
size_t s ,
const DblVector& g_hat ,
const DblVector& g_jac ,
const DblVector& hessian ,
const DblVector& bound ,
const DblVector& epsilon ,
const SizeVector& maxitr ,
DblVector& delta_x )
Source¶
The following is a link to the source code for this example: abs_min_quad.hpp .
Purpose¶
We are given a point \(\hat{x} \in \B{R}^n\) and use the notation \(\tilde{f} (x)\) for the abs-normal approximation of \(f(x)\) near \(\hat{x}\). We are also given a vector \(b \in \B{R}_+^n\) and a positive definite matrix \(H \in \B{R}^{n \times n}\). This routine solves the problem
\[\begin{array}{llll} \R{minimize} & \Delta x^\R{T} H \Delta x / 2 + \tilde{f} ( \hat{x} + \Delta x ) & \R{w.r.t} & \Delta x \in \B{R}^n \\ \R{subject \; to} & | \Delta x_j | \leq b_j & \R{for} & j = 0 , \ldots , n-1 \end{array}\]
DblVector¶
is a SimpleVector class with elements of type double .
SizeVector¶
is a SimpleVector class with elements of type size_t .
f¶
We use the notation f for the original function; see f .
level¶
This value is less than or equal to 3.
If level == 0 ,
no tracing of the optimization is printed.
If level >= 1 ,
a trace of each iteration of abs_min_quad
is printed.
If level >= 2 ,
a trace of the qp_box sub-problem is printed.
If level >= 3 ,
a trace of the qp_interior sub-problem is printed.
n¶
This is the dimension of the domain space for f ; see n .
m¶
This is the dimension of the range space for f ; see m . This must be one so that \(f\) is an objective function.
s¶
This is the number of absolute value terms in f ; see s .
g¶
We use the notation g for the abs-normal representation of f ; see g .
g_hat¶
This vector has size m + s and is the value of g ( x , u ) at \(x = \hat{x}\) and \(u = a( \hat{x} )\).
g_jac¶
This vector has size ( m + s ) * ( n + s ) and is the Jacobian of \(g(x, u)\) at \(x = \hat{x}\) and \(u = a( \hat{x} )\).
hessian¶
This vector has size n * n . It is a row-major representation of the matrix \(H \in \B{R}^{n \times n}\).
bound¶
This vector has size n and is the vector \(b \in \B{R}^n\). The trust region is defined as the set of \(\Delta x\) such that
\[| \Delta x_j | \leq b_j\]
for \(j = 0 , \ldots , n-1\).
epsilon¶
The value epsilon [0] is a convergence criterion in terms of the infinity norm of the difference of delta_x between iterations. The value epsilon [1] is a convergence criterion in terms of the derivative of the objective.
maxitr¶
This is a vector with size 2.
The value maxitr [0] is the maximum number of
abs_min_quad
iterations to try before giving up on convergence.
The value maxitr [1] is the maximum number of iterations in
the qp_interior sub-problems.
delta_x¶
This vector \(\Delta x\) has size n . The input value of its elements does not matter. Upon return, it contains the approximate minimizer of the objective within the trust region.
Method¶
sigma¶
We use the notation
\[\sigma (x) = \R{sign} ( z[ x , a(x) ] )\]
where a(x) and z(x, u) are as defined in the abs-normal representation of \(f(x)\).
Cutting Planes¶
At each iteration, we are given affine functions \(p_k (x)\) such that \(p_k ( x_k ) = \tilde{f}( x_k )\) and \(p_k^{(1)} ( x_k )\) is the derivative \(\tilde{f}^{(1)} ( x_k )\) corresponding to \(\sigma ( x_k )\).
Iteration¶
At iteration \(k\), we solve the problem
\[\begin{array}{llll} \R{minimize} & \Delta x^\R{T} H \Delta x / 2 + \max \{ p_0 ( \hat{x} + \Delta x ) , \ldots , p_k ( \hat{x} + \Delta x ) \} & \R{w.r.t} & \Delta x \in \B{R}^n \\ \R{subject \; to} & | \Delta x_j | \leq b_j & \R{for} & j = 0 , \ldots , n-1 \end{array}\]
The solution is the new point \(x_K\) at which the new affine approximation \(p_K (x)\) is constructed. This process is iterated until the difference \(x_K - x_{K-1}\) is small enough.
Example¶
The file abs_min_quad.cpp contains an example and test of
abs_min_quad
.