atomic_four_jac_sparsity¶
Atomic Function Jacobian Sparsity Patterns¶
Syntax¶
Preferred¶
ok = jac_sparsity( call_id , dependency , ident_zero_x , select_x , select_y , pattern_out )
Deprecated 2022-05-10¶
ok = jac_sparsity( call_id , dependency , select_x , select_y , pattern_out )
Prototype¶
template <class Base>
bool atomic_four<Base>::jac_sparsity(
size_t call_id ,
bool dependency ,
const vector<bool>& ident_zero_x ,
const vector<bool>& select_x ,
const vector<bool>& select_y ,
sparse_rc< vector<size_t> >& pattern_out )
Implementation¶
This function must be defined if afun is used to define an ADFun object f , and Jacobian sparsity patterns are computed for f . (Computing Hessian sparsity patterns requires Jacobian sparsity patterns.)
Base¶
See Base .
vector¶
is the CppAD_vector template class.
call_id¶
See call_id .
dependency¶
If dependency is true, then pattern_out is a Dependency Pattern for this atomic function. Otherwise it is a Sparsity Pattern for the derivative of the atomic function.
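As an illustration (this atomic function is hypothetical, not one of the CppAD examples), suppose the atomic function computes \(g(x) = \mathrm{sign}(x_0) \, x_1\) . The value of \(g(x)\) depends on both \(x_0\) and \(x_1\), so a dependency pattern must contain the entries \((0, 0)\) and \((0, 1)\). On the other hand, the partial of \(g(x)\) with respect to \(x_0\) is zero wherever it is defined, so a Jacobian sparsity pattern only needs the entry \((0, 1)\).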
ident_zero_x¶
This can sometimes be used to create more efficient sparsity patterns. If you do not see a way to do this, you can just ignore it. This argument has size equal to the number of arguments to this atomic function; i.e. the size of ax . If ident_zero_x [ j ] is true, the argument ax [ j ] is a constant parameter that is identically zero. An identically zero value times any other value can be treated as being identically zero.
select_x¶
This argument has size equal to the number of arguments to this atomic function; i.e. the size of ax . It specifies which domain components are included in the calculation of pattern_out . If select_x [ j ] is false, then there will be no indices k such that pattern_out . col ()[ k ] == j . If select_x [ j ] is true, the argument ax [ j ] is a variable and ident_zero_x [ j ] will be false.
select_y¶
This argument has size equal to the number of results for this atomic function; i.e. the size of ay . It specifies which range components are included in the calculation of pattern_out . If select_y [ i ] is false, then there will be no indices k such that pattern_out . row ()[ k ] == i .
pattern_out¶
The input value of pattern_out does not matter. Upon return it is a dependency or sparsity pattern for the Jacobian of \(g(x)\), the function corresponding to afun . To be specific, for non-negative indices i and j , there is an index k such that pattern_out . row ()[ k ] == i and pattern_out . col ()[ k ] == j if and only if select_x [ j ] is true, select_y [ i ] is true, and \(g_i(x)\) depends on the value of \(x_j\) (and the partial of \(g_i(x)\) with respect to \(x_j\) is possibly non-zero).
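For example, the following stand-alone sketch (it is not part of the CppAD documentation or test suite) builds a pattern for a hypothetical function with two results and three arguments, where \(g_0(x)\) depends on \(x_0\) and \(g_1(x)\) depends on \(x_2\), and then reads the entries back through the row and col member functions:
# include <cstdio>
# include <cppad/cppad.hpp>

int main(void)
{   // a pattern with 2 rows, 3 columns, and 2 possibly non-zero entries
    CppAD::sparse_rc< CppAD::vector<size_t> > pattern(2, 3, 2);
    //
    // entry k = 0 is (i, j) = (0, 0): g_0 may depend on x_0
    pattern.set(0, 0, 0);
    // entry k = 1 is (i, j) = (1, 2): g_1 may depend on x_2
    pattern.set(1, 1, 2);
    //
    // row()[k] and col()[k] recover the (i, j) pair for the k-th entry
    for(size_t k = 0; k < pattern.nnz(); ++k)
        std::printf(
            "g_%zu may depend on x_%zu\n",
            pattern.row()[k], pattern.col()[k]
        );
    return 0;
}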
ok¶
If this calculation succeeded, ok is true. Otherwise it is false.
Example¶
The following is an example jac_sparsity definition taken from atomic_four_norm_sq.cpp :
// Use the deprecated version of this callback to test that it still works
// (missing the ident_zero_x argument).
bool jac_sparsity(
size_t call_id ,
bool dependency ,
// const CppAD::vector<bool>& ident_zero_x,
const CppAD::vector<bool>& select_x ,
const CppAD::vector<bool>& select_y ,
CppAD::sparse_rc< CppAD::vector<size_t> >& pattern_out ) override
{ size_t n = select_x.size();
size_t m = select_y.size();
# ifndef NDEBUG
assert( call_id == 0 );
assert( m == 1 );
# endif
// nnz
size_t nnz = 0;
if( select_y[0] )
{ for(size_t j = 0; j < n; ++j)
{ if( select_x[j] )
++nnz;
}
}
// pattern_out
pattern_out.resize(m, n, nnz);
size_t k = 0;
if( select_y[0] )
{ for(size_t j = 0; j < n; ++j)
{ if( select_x[j] )
pattern_out.set(k++, 0, j);
}
}
assert( k == nnz );
return true;
}
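The example above uses the deprecated overload and therefore ignores ident_zero_x . The sketch below is hypothetical (it is not part of the CppAD sources) and shows how the preferred overload might use ident_zero_x for an atomic function that computes the single product \(g(x) = x_0 x_1\) : the partial with respect to \(x_0\) is \(x_1\) and the partial with respect to \(x_1\) is \(x_0\) , so a column can be dropped whenever the other argument is identically zero.
// A sketch (not from the CppAD sources) of jac_sparsity for a hypothetical
// atomic function computing the product g(x) = x_0 * x_1.
bool jac_sparsity(
    size_t                                        call_id      ,
    bool                                          dependency   ,
    const CppAD::vector<bool>&                    ident_zero_x ,
    const CppAD::vector<bool>&                    select_x     ,
    const CppAD::vector<bool>&                    select_y     ,
    CppAD::sparse_rc< CppAD::vector<size_t> >&    pattern_out  ) override
{   // call_id and dependency are not used by this sketch
    assert( select_x.size() == 2 );
    assert( select_y.size() == 1 );
    //
    // The partial w.r.t. x_0 is x_1, and g(x) is identically zero when x_1
    // is, so column 0 can be dropped when ident_zero_x[1] is true; the
    // corresponding statement holds for column 1. This is valid for both
    // dependency and Jacobian sparsity patterns.
    bool include_0 = select_y[0] && select_x[0] && ! ident_zero_x[1];
    bool include_1 = select_y[0] && select_x[1] && ! ident_zero_x[0];
    //
    // pattern_out
    size_t nnz = 0;
    if( include_0 )
        ++nnz;
    if( include_1 )
        ++nnz;
    pattern_out.resize(1, 2, nnz);
    size_t k = 0;
    if( include_0 )
        pattern_out.set(k++, 0, 0);
    if( include_1 )
        pattern_out.set(k++, 0, 1);
    assert( k == nnz );
    return true;
}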