\(\newcommand{\W}[1]{ \; #1 \; }\) \(\newcommand{\R}[1]{ {\rm #1} }\) \(\newcommand{\B}[1]{ {\bf #1} }\) \(\newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} }\) \(\newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} }\) \(\newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} }\) \(\newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }\)
abs_min_linear¶
abs_normal: Minimize a Linear Abs-normal Approximation¶
Syntax¶
ok = abs_min_linear(level, n, m, s, g_hat, g_jac, bound, epsilon, maxitr, delta_x)
Prototype¶
template <class DblVector, class SizeVector>
bool abs_min_linear(
size_t level ,
size_t n ,
size_t m ,
size_t s ,
const DblVector& g_hat ,
const DblVector& g_jac ,
const DblVector& bound ,
const DblVector& epsilon ,
const SizeVector& maxitr ,
DblVector& delta_x )
Source¶
The following is a link to the source code for this example: abs_min_linear.hpp .
Purpose¶
We are given a point \(\hat{x} \in \B{R}^n\) and use the notation \(\tilde{f} (x)\) for the abs-normal approximation of \(f(x)\) near \(\hat{x}\). We are also given a vector \(b \in \B{R}_+^n\). This routine solves the problem
\[\begin{array}{llll}
\R{minimize} & \tilde{f}( \hat{x} + \Delta x ) & \R{w.r.t} & \Delta x \in \B{R}^n
\\
\R{subject \; to} & | \Delta x_j | \leq b_j & & j = 0 , \ldots , n-1
\end{array}\]
DblVector¶
is a SimpleVector class with elements of type double .
SizeVector¶
is a SimpleVector class with elements of type size_t
.
f¶
We use the notation f for the original function; see f .
level¶
This value is less than or equal to 4.
If level == 0 ,
no tracing of the optimization is printed.
If level >= 1 ,
a trace of each iteration of abs_min_linear
is printed.
If level >= 2 ,
a trace of the lp_box sub-problem is printed.
If level >= 3 ,
a trace of the objective and primal variables \(x\) is printed
at each simplex_method iteration.
If level == 4 ,
the simplex tableau is printed at each simplex iteration.
n¶
This is the dimension of the domain space for f ; see n .
m¶
This is the dimension of the range space for f ; see m . This must be one so that \(f\) is an objective function.
s¶
This is the number of absolute value terms in f ; see s .
g¶
We use the notation g for the abs-normal representation of f ; see g .
g_hat¶
This vector has size m + s and is the value of g ( x , u ) at \(x = \hat{x}\) and \(u = a( \hat{x} )\).
g_jac¶
This vector has size ( m + s ) * ( n + s ) and is the Jacobian of \(g(x, u)\) at \(x = \hat{x}\) and \(u = a( \hat{x} )\).
bound¶
This vector has size n and we denote its value by \(b \in \B{R}_+^n\). The trust region is defined as the set of \(x\) such that
\[ | x_j - \hat{x}_j | \leq b_j \]
for \(j = 0 , \ldots , n-1\), where \(\hat{x}\) is the point at which we are approximating \(f(x)\).
epsilon¶
The value epsilon [0] is the convergence criterion in terms of the infinity norm of the change in delta_x between iterations. The value epsilon [1] is the convergence criterion in terms of the derivative of the objective; i.e., \(\tilde{f}(x)\).
maxitr¶
This is a vector with size 2.
The value maxitr [0] is the maximum number of
abs_min_linear
iterations to try before giving up on convergence.
The value maxitr [1] is the maximum number of iterations in
the simplex_method sub-problems.
delta_x¶
This vector \(\Delta x\) has size n . The input value of its elements does not matter. Upon return, the approximate minimizer of \(\tilde{f}(x)\) with respect to the trust region is \(x = \hat{x} + \Delta x\).
Method¶
sigma¶
We use the notation
\[ \sigma (x) = \R{sign} ( z[ x , a(x) ] ) \]
where a(x) and z(x, u) are as defined in the abs-normal representation of \(f(x)\).
Cutting Planes¶
At each iteration, we are given affine functions \(p_k (x)\) such that \(p_k ( x_k ) = \tilde{f}( x_k )\) and \(p_k^{(1)} ( x_k )\) is the derivative \(\tilde{f}^{(1)} ( x_k )\) corresponding to \(\sigma ( x_k )\).
Iteration¶
At iteration \(k\), we solve the problem
\[\begin{array}{llll}
\R{minimize} & \max \{ p_k (x) : k = 0 , \ldots , K-1 \} & \R{w.r.t} & x
\\
\R{subject \; to} & | x_j - \hat{x}_j | \leq b_j & & j = 0 , \ldots , n-1
\end{array}\]
The solution is the new point \(x_K\) at which the new affine approximation \(p_K (x)\) is constructed. This process is iterated until the difference \(x_K - x_{K-1}\) is small enough.
Example¶
The file abs_min_linear.cpp contains an example and test of abs_min_linear .