\(\newcommand{\W}[1]{ \; #1 \; }\) \(\newcommand{\R}[1]{ {\rm #1} }\) \(\newcommand{\B}[1]{ {\bf #1} }\) \(\newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} }\) \(\newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} }\) \(\newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} }\) \(\newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }\)
opt_val_hes¶
Jacobian and Hessian of Optimal Values¶
Syntax¶
signdet = opt_val_hes(x, y, fun, jac, hes)
See Also¶
BenderQuad
Reference¶
Algorithmic differentiation of implicit functions and optimal values, Bradley M. Bell and James V. Burke, Advances in Automatic Differentiation, 2008, Springer.
Purpose¶
We are given a function \(S : \B{R}^n \times \B{R}^m \rightarrow \B{R}^\ell\) and we define \(F : \B{R}^n \times \B{R}^m \rightarrow \B{R}\) and \(V : \B{R}^n \rightarrow \B{R}\) by
\[
F(x, y) = \sum_{k=0}^{\ell-1} S_k (x, y) \; , \qquad V(x) = F[ x , Y(x) ]
\]
where \(Y(x)\) is the value of \(y\) that solves the implicit equation
\[
0 = \partial_y F [ x , Y(x) ]
\]
We wish to compute the Jacobian, and possibly also the Hessian, of \(V (x)\).
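For a concrete illustration (an example constructed for this write-up, not part of the routine), take \(n = m = \ell = 1\) and
\[
S_0 (x, y) = x y + y^2 / 2
\]
Then \(0 = \partial_y F(x, y) = x + y\) gives \(Y(x) = -x\), and hence
\[
V(x) = - x^2 / 2 , \quad V^{(1)} (x) = -x , \quad V^{(2)} (x) = -1
\]
This example is used by the code sketches below.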
BaseVector¶
The type BaseVector must be a SimpleVector class. We use Base to refer to the type of the elements of BaseVector; i.e.,
BaseVector::value_type
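For example (one possible choice for this sketch, not a requirement of the routine), BaseVector could be CppAD::vector<double>, in which case Base is double:

typedef CppAD::vector<double> BaseVector; // Base = BaseVector::value_type = double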
x¶
The argument x has prototype
const BaseVector& x
and its size must be equal to n. It specifies the point at which we are evaluating the Jacobian \(V^{(1)} (x)\) (and possibly the Hessian \(V^{(2)} (x)\)).
y¶
The argument y has prototype
const BaseVector& y
and its size must be equal to m. It must be equal to \(Y(x)\); i.e., it must solve the implicit equation
\[
0 = \partial_y F ( x , y )
\]
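How y is obtained is up to the caller; opt_val_hes does not solve the implicit equation itself. As a minimal sketch (using the scalar example above with its derivative coded by hand; solve_y is a hypothetical helper, not part of CppAD), one could apply Newton's method to the first order condition:

# include <cmath>

// Newton iteration for 0 = d/dy F(x, y) in the scalar example,
// where F(x, y) = x * y + y * y / 2, so d/dy F = x + y and d^2/dy^2 F = 1
double solve_y(double x)
{   double y = 0.0;
    for(size_t itr = 0; itr < 50; itr++)
    {   double g    = x + y;   // gradient of F with respect to y
        double g_y  = 1.0;     // second derivative of F with respect to y
        double step = g / g_y;
        y -= step;             // Newton update
        if( std::fabs(step) < 1e-12 )
            break;
    }
    return y;  // here Y(x) = -x, reached in one iteration
}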
Fun¶
The argument fun is an object of type Fun which must support the member functions listed below. CppAD may be recording operations of type AD<Base> when these member functions are called. These member functions must not stop such a recording; e.g., they must not call AD<Base>::abort_recording.
Fun::ad_vector¶
The type Fun::ad_vector must be a SimpleVector class with elements of type AD<Base>; i.e.,
Fun::ad_vector::value_type
is equal to AD<Base>.
fun.ell¶
The type Fun must support the syntax
ell = fun.ell()
where ell has prototype
size_t ell
and is the value of \(\ell\); i.e., the number of terms in the summation.
One can choose ell equal to one, and have \(S(x,y)\) the same as \(F(x, y)\). Each of the functions \(S_k (x , y)\) (in the summation defining \(F(x, y)\)) is differentiated separately using AD. For very large problems, breaking \(F(x, y)\) into a sum of separate, simpler functions may reduce the amount of memory necessary for algorithmic differentiation and thereby speed up the process.
fun.s¶
The type Fun must support the syntax
s_k = fun.s(k, x, y)
The fun.s argument k has prototype
size_t k
and is between zero and ell - 1.
The argument x to fun.s has prototype
const Fun::ad_vector& x
and its size must be equal to n.
The argument y to fun.s has prototype
const Fun::ad_vector& y
and its size must be equal to m.
The fun.s result s_k has prototype
AD<Base> s_k
and its value must be given by \(s_k = S_k ( x , y )\).
fun.sy¶
The type Fun must support the syntax
sy_k = fun.sy(k, x, y)
The argument k to fun.sy has prototype
size_t k
The argument x to fun.sy has prototype
const Fun::ad_vector& x
and its size must be equal to n.
The argument y to fun.sy has prototype
const Fun::ad_vector& y
and its size must be equal to m.
The fun.sy result sy_k has prototype
Fun::ad_vector sy_k
its size must be equal to m, and its value must be given by \(sy_k = \partial_y S_k ( x , y )\).
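Putting the requirements above together for the scalar example \(S_0 (x, y) = x y + y^2 / 2\), a minimal Fun class might look as follows (a sketch written for this write-up; only the member names ad_vector, ell, s, and sy are fixed by opt_val_hes):

# include <cppad/cppad.hpp>

class Fun {
public:
    // SimpleVector class with elements of type AD<Base>, where Base = double
    typedef CppAD::vector< CppAD::AD<double> > ad_vector;

    // number of terms ell in the summation defining F(x, y)
    size_t ell(void) const
    {   return 1; }

    // S_k(x, y) = x[0] * y[0] + y[0] * y[0] / 2 for k = 0
    CppAD::AD<double> s(size_t k, const ad_vector& x, const ad_vector& y) const
    {   return x[0] * y[0] + y[0] * y[0] / 2.0; }

    // partial of S_k(x, y) with respect to y
    ad_vector sy(size_t k, const ad_vector& x, const ad_vector& y) const
    {   ad_vector result(1);
        result[0] = x[0] + y[0];
        return result;
    }
};

Note that these member functions only evaluate AD<double> operations and never stop a recording, as required above.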
jac¶
The argument jac has prototype
BaseVector& jac
and has size n or zero. The input values of its elements do not matter. If it has size zero, it is not affected. Otherwise, on output it contains the Jacobian of \(V (x)\); i.e., for \(j = 0 , \ldots , n-1\),
\[
jac[ j ] = V^{(1)} (x)_j
\]
where x is the first argument to opt_val_hes.
hes¶
The argument hes has prototype
BaseVector& hes
and has size n * n or zero. The input values of its elements do not matter. If it has size zero, it is not affected. Otherwise, on output it contains the Hessian of \(V (x)\); i.e., for \(i = 0 , \ldots , n-1\) and \(j = 0 , \ldots , n-1\),
\[
hes[ i * n + j ] = V^{(2)} (x)_{i,j}
\]
where x is the first argument to opt_val_hes.
signdet¶
If hes has size zero, signdet is not defined. Otherwise, the return value signdet is the sign of the determinant of \(\partial_{yy}^2 F(x , y)\). If it is zero, the matrix is singular and the Hessian is not computed (hes is not changed).
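Combining the pieces, a call for the scalar example might look as follows (a sketch using the hypothetical Fun class above; here \(\partial_{yy}^2 F = 1\), so signdet should be one):

# include <cppad/cppad.hpp>
# include <cassert>

int main(void)
{   typedef CppAD::vector<double> BaseVector;

    size_t n = 1, m = 1;
    BaseVector x(n), y(m);
    x[0] = 2.0;
    y[0] = -x[0];                   // y = Y(x) solves 0 = x[0] + y[0]

    BaseVector jac(n), hes(n * n);  // request both Jacobian and Hessian
    Fun fun;
    int signdet = CppAD::opt_val_hes(x, y, fun, jac, hes);

    // V(x) = -x^2 / 2, so V'(x) = -x and V''(x) = -1
    assert( signdet == 1 );
    assert( CppAD::NearEqual(jac[0], -x[0], 1e-10, 1e-10) );
    assert( CppAD::NearEqual(hes[0], -1.0,  1e-10, 1e-10) );
    return 0;
}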
Example¶
The file opt_val_hes.cpp contains an example and test of this operation.