cv::dnn::Layer Class Reference
DNN (Deep Neural Network) module

This interface class allows building new Layers, which are the building blocks of networks. More...

#include <opencv2/dnn/dnn.hpp>

Inheritance diagram for cv::dnn::Layer:
cv::Algorithm cv::dnn::ActivationLayer cv::dnn::BaseConvolutionLayer cv::dnn::BlankLayer cv::dnn::ConcatLayer cv::dnn::ConstLayer cv::dnn::CropAndResizeLayer cv::dnn::CropLayer cv::dnn::DetectionOutputLayer cv::dnn::EltwiseLayer cv::dnn::FlattenLayer cv::dnn::InnerProductLayer cv::dnn::InterpLayer cv::dnn::LRNLayer cv::dnn::LSTMLayer cv::dnn::MaxUnpoolLayer cv::dnn::MVNLayer cv::dnn::NormalizeBBoxLayer cv::dnn::PaddingLayer cv::dnn::PermuteLayer cv::dnn::PoolingLayer cv::dnn::PriorBoxLayer cv::dnn::ProposalLayer cv::dnn::RegionLayer cv::dnn::ReorgLayer cv::dnn::ReshapeLayer cv::dnn::ResizeLayer cv::dnn::RNNLayer cv::dnn::ScaleLayer cv::dnn::ShiftLayer cv::dnn::ShuffleChannelLayer cv::dnn::SliceLayer cv::dnn::SoftmaxLayer cv::dnn::SplitLayer

Public Member Functions

Layer ()
Layer (const LayerParams &params)
Initializes only the name, type and blobs fields. More...
virtual ~Layer ()
virtual void applyHalideScheduler ( Ptr < BackendNode > &node, const std::vector< Mat *> &inputs, const std::vector< Mat > &outputs, int targetId) const
Automatic Halide scheduling based on layer hyper-parameters. More...
virtual void finalize (const std::vector< Mat *> &input, std::vector< Mat > &output)
Computes and sets internal parameters according to inputs, outputs and blobs. More...
virtual void finalize ( InputArrayOfArrays inputs, OutputArrayOfArrays outputs)
Computes and sets internal parameters according to inputs, outputs and blobs. More...
void finalize (const std::vector< Mat > &inputs, std::vector< Mat > &outputs)
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. More...
std::vector< Mat > finalize (const std::vector< Mat > &inputs)
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. More...
virtual void forward (std::vector< Mat *> &input, std::vector< Mat > &output, std::vector< Mat > &internals)
Given the input blobs, computes the output blobs. More...
virtual void forward ( InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
Given the input blobs, computes the output blobs. More...
void forward_fallback ( InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
Given the input blobs, computes the output blobs. More...
virtual int64 getFLOPS (const std::vector< MatShape > &inputs, const std::vector< MatShape > &outputs) const
virtual bool getMemoryShapes (const std::vector< MatShape > &inputs, const int requiredOutputs, std::vector< MatShape > &outputs, std::vector< MatShape > &internals) const
virtual void getScaleShift ( Mat &scale, Mat &shift) const
Returns parameters of layers with channel-wise multiplication and addition. More...
virtual Ptr < BackendNode > initCUDA (void *context, const std::vector< Ptr < BackendWrapper >> &inputs, const std::vector< Ptr < BackendWrapper >> &outputs)
Returns a CUDA backend node. More...
virtual Ptr < BackendNode > initHalide (const std::vector< Ptr < BackendWrapper > > &inputs)
Returns Halide backend node. More...
virtual Ptr < BackendNode > initInfEngine (const std::vector< Ptr < BackendWrapper > > &inputs)
virtual Ptr < BackendNode > initNgraph (const std::vector< Ptr < BackendWrapper > > &inputs, const std::vector< Ptr < BackendNode > > &nodes)
virtual Ptr < BackendNode > initVkCom (const std::vector< Ptr < BackendWrapper > > &inputs)
virtual int inputNameToIndex ( String inputName)
Returns the index of the input blob in the input array. More...
virtual int outputNameToIndex (const String &outputName)
Returns the index of the output blob in the output array. More...
void run (const std::vector< Mat > &inputs, std::vector< Mat > &outputs, std::vector< Mat > &internals)
Allocates layer and computes output. More...
virtual bool setActivation (const Ptr < ActivationLayer > &layer)
Tries to attach the subsequent activation layer to this layer, i.e. do the layer fusion in a partial case. More...
void setParamsFrom (const LayerParams &params)
Initializes only the name, type and blobs fields. More...
virtual bool supportBackend (int backendId)
Asks the layer whether it supports a specific backend for doing computations. More...
virtual Ptr < BackendNode > tryAttach (const Ptr < BackendNode > &node)
Implements layer fusing. More...
virtual bool tryFuse ( Ptr < Layer > &top)
Tries to fuse the current layer with the next one. More...
virtual void unsetAttached ()
"Detaches" all the layers attached to the particular layer. More...
- Public Member Functions inherited from cv::Algorithm
Algorithm ()
virtual ~Algorithm ()
virtual void clear ()
Clears the algorithm state. More...
virtual bool empty () const
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read). More...
virtual String getDefaultName () const
virtual void read (const FileNode &fn)
Reads algorithm parameters from a file storage. More...
virtual void save (const String &filename) const
virtual void write ( FileStorage &fs) const
Stores algorithm parameters in a file storage. More...
void write (const Ptr < FileStorage > &fs, const String &name= String ()) const
simplified API for language bindings This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. More...

Public Attributes

std::vector< Mat > blobs
List of learned parameters, which must be stored here to allow reading them by using Net::getParam() . More...
String name
Name of the layer instance, can be used for logging or other internal purposes. More...
int preferableTarget
Preferable target for layer forwarding. More...
String type
Type name which was used for creating the layer by the layer factory. More...

Additional Inherited Members

- Static Public Member Functions inherited from cv::Algorithm
template<typename _Tp >
static Ptr < _Tp > load (const String &filename, const String &objname= String ())
Loads algorithm from the file. More...
template<typename _Tp >
static Ptr < _Tp > loadFromString (const String &strModel, const String &objname= String ())
Loads algorithm from a String. More...
template<typename _Tp >
static Ptr < _Tp > read (const FileNode &fn)
Reads algorithm from the file node. More...
- Protected Member Functions inherited from cv::Algorithm
void writeFormat ( FileStorage &fs) const

Detailed Description

This interface class allows building new Layers, which are the building blocks of networks.

Each class derived from Layer must implement the allocate() methods to declare its own outputs and forward() to compute its outputs. Also, before using the new layer in a network you must register the layer by using one of the LayerFactory macros.
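
As a rough illustration of this workflow, the sketch below derives a trivial identity layer, implements getMemoryShapes() and forward() from this interface, and registers it with the layer factory. The class name, the "MyType" type string and the identity behaviour are purely illustrative; the CV_DNN_REGISTER_LAYER_CLASS macro is assumed to come from opencv2/dnn/layer.details.hpp and, as in the custom-layer tutorial, is invoked inside a function before any model containing such layers is parsed.

#include <opencv2/dnn.hpp>
#include <opencv2/dnn/layer.details.hpp>  // assumed location of CV_DNN_REGISTER_LAYER_CLASS

// Minimal identity layer: declares output shapes equal to input shapes and
// copies the first input blob to the first output blob.
class MyIdentityLayer : public cv::dnn::Layer
{
public:
    MyIdentityLayer(const cv::dnn::LayerParams &params) : Layer(params) {}

    // Factory hook used by the registration macro below.
    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new MyIdentityLayer(params));
    }

    bool getMemoryShapes(const std::vector<cv::dnn::MatShape> &inputs,
                         const int /*requiredOutputs*/,
                         std::vector<cv::dnn::MatShape> &outputs,
                         std::vector<cv::dnn::MatShape> &/*internals*/) const CV_OVERRIDE
    {
        outputs = inputs;   // output shapes match input shapes
        return false;       // no in-place computation is claimed here
    }

    void forward(cv::InputArrayOfArrays inputs_arr,
                 cv::OutputArrayOfArrays outputs_arr,
                 cv::OutputArrayOfArrays /*internals_arr*/) CV_OVERRIDE
    {
        std::vector<cv::Mat> inputs, outputs;
        inputs_arr.getMatVector(inputs);
        outputs_arr.getMatVector(outputs);
        inputs[0].copyTo(outputs[0]);   // identity transform
    }
};

int main()
{
    // Registration must run before a model containing "MyType" layers is parsed.
    CV_DNN_REGISTER_LAYER_CLASS(MyType, MyIdentityLayer);
    return 0;
}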

Constructor & Destructor Documentation

Layer() [1/2]

cv::dnn::Layer::Layer ( )

Layer() [2/2]

cv::dnn::Layer::Layer ( const LayerParams & params )
explicit

Initializes only the name, type and blobs fields.

~Layer()

virtual cv::dnn::Layer::~Layer ( )
virtual

Member Function Documentation

applyHalideScheduler()

virtual void cv::dnn::Layer::applyHalideScheduler ( Ptr < BackendNode > & node ,
const std::vector< Mat *> & inputs ,
const std::vector< Mat > & outputs ,
int targetId
) const
virtual

Automatic Halide scheduling based on layer hyper-parameters.

Parameters
[in] node Backend node with Halide functions.
[in] inputs Blobs that will be used in forward invocations.
[in] outputs Blobs that will be used in forward invocations.
[in] targetId Target identifier.
See also
BackendNode , Target

Layers don't use their own Halide::Func members because layer fusing may have been applied; in that case it is the fused function that should be scheduled.

finalize() [1/4]

virtual void cv::dnn::Layer::finalize ( const std::vector< Mat *> & input ,
std::vector< Mat > & output
)
virtual
Python:
outputs = cv.dnn_Layer.finalize( inputs[, outputs] )

Computes and sets internal parameters according to inputs, outputs and blobs.

Deprecated:
Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
Parameters
[in] input vector of already allocated input blobs
[out] output vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.

finalize() [2/4]

virtual void cv::dnn::Layer::finalize ( InputArrayOfArrays inputs ,
OutputArrayOfArrays outputs
)
virtual
Python:
outputs = cv.dnn_Layer.finalize( inputs[, outputs] )

Computes and sets internal parameters according to inputs, outputs and blobs.

Parameters
[in] inputs vector of already allocated input blobs
[out] outputs vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.
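
As an illustration, a derived layer might override this hook to cache quantities that depend on the (now known) input geometry so that forward() does not recompute them on every call. The class and member names below are hypothetical, and a 4-D NCHW blob layout is assumed.

// Sketch of a finalize() override in a hypothetical custom layer. The blobs
// passed in are already allocated, so shape-dependent data can be cached here.
void MyLayer::finalize(cv::InputArrayOfArrays inputs_arr,
                       cv::OutputArrayOfArrays /*outputs_arr*/)
{
    std::vector<cv::Mat> inputs;
    inputs_arr.getMatVector(inputs);
    CV_Assert(!inputs.empty() && inputs[0].dims == 4);  // assuming an NCHW blob
    cachedHeight = inputs[0].size[2];   // hypothetical member variables
    cachedWidth  = inputs[0].size[3];
}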

finalize() [3/4]

void cv::dnn::Layer::finalize ( const std::vector< Mat > & inputs ,
std::vector< Mat > & outputs
)
Python:
outputs = cv.dnn_Layer.finalize( inputs[, outputs] )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Deprecated:
Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

finalize() [4/4]

std::vector< Mat > cv::dnn::Layer::finalize ( const std::vector< Mat > & inputs )
Python:
outputs = cv.dnn_Layer.finalize( inputs[, outputs] )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Deprecated:
Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

forward() [1/2]

virtual void cv::dnn::Layer::forward ( std::vector< Mat *> & input ,
std::vector< Mat > & output ,
std::vector< Mat > & internals
)
virtual

Given the input blobs, computes the output blobs.

Deprecated:
Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
Parameters
[in] input the input blobs.
[out] output allocated output blobs, which will store results of the computation.
[out] internals allocated internal blobs

forward() [2/2]

virtual void cv::dnn::Layer::forward ( InputArrayOfArrays inputs ,
OutputArrayOfArrays outputs ,
OutputArrayOfArrays internals
)
virtual

Given the input blobs, computes the output blobs.

Parameters
[in] inputs the input blobs.
[out] outputs allocated output blobs, which will store results of the computation.
[out] internals allocated internal blobs

forward_fallback()

void cv::dnn::Layer::forward_fallback ( InputArrayOfArrays inputs ,
OutputArrayOfArrays outputs ,
OutputArrayOfArrays internals
)

Given the input blobs, computes the output blobs.

Parameters
[in] inputs the input blobs.
[out] outputs allocated output blobs, which will store results of the computation.
[out] internals allocated internal blobs
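
One common pattern, sketched below under the assumption of a hypothetical MyLayer subclass, is to delegate to forward_fallback() from a forward() override when the blobs arrive in a representation the specialized path does not handle (for example half-precision data stored as CV_16S), and to run the ordinary Mat-based computation otherwise.

void MyLayer::forward(cv::InputArrayOfArrays inputs_arr,
                      cv::OutputArrayOfArrays outputs_arr,
                      cv::OutputArrayOfArrays internals_arr)
{
    // Hand the call over to the generic fallback path when this layer cannot
    // process the incoming representation directly.
    if (inputs_arr.depth() == CV_16S)
    {
        forward_fallback(inputs_arr, outputs_arr, internals_arr);
        return;
    }
    std::vector<cv::Mat> inputs, outputs;
    inputs_arr.getMatVector(inputs);
    outputs_arr.getMatVector(outputs);
    inputs[0].copyTo(outputs[0]);   // illustrative computation only
}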

getFLOPS()

virtual int64 cv::dnn::Layer::getFLOPS ( const std::vector< MatShape > & inputs ,
const std::vector< MatShape > & outputs
) const
inline virtual

getMemoryShapes()

virtual bool cv::dnn::Layer::getMemoryShapes ( const std::vector< MatShape > & inputs ,
const int requiredOutputs ,
std::vector< MatShape > & outputs ,
std::vector< MatShape > & internals
) const
virtual

getScaleShift()

virtual void cv::dnn::Layer::getScaleShift ( Mat & scale ,
Mat & shift
) const
virtual

Returns parameters of layers with channel-wise multiplication and addition.

Parameters
[out] scale Channel-wise multipliers. Total number of values should be equal to number of channels.
[out] shift Channel-wise offsets. Total number of values should be equal to number of channels.

Some layers can fuse their transformations with further layers, for example convolution + batch normalization. In that case the base layer uses the weights of the layer that follows it, and the fused layer is skipped during inference. By default, scale and shift are empty, which means the layer has no element-wise multiplications or additions.
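
For illustration, a hypothetical channel-wise scale layer could report its parameters as follows, so that a preceding layer (e.g. a convolution) can fold them into its own weights; the blob layout shown is an assumption of this sketch.

// Sketch: report per-channel multipliers and offsets stored in this layer's
// blobs (layout assumed for the illustration: blobs[0] = scale, blobs[1] = shift).
void MyScaleLayer::getScaleShift(cv::Mat &scale, cv::Mat &shift) const
{
    scale = blobs.size() > 0 ? blobs[0] : cv::Mat();
    shift = blobs.size() > 1 ? blobs[1] : cv::Mat();
}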

initCUDA()

virtual Ptr < BackendNode > cv::dnn::Layer::initCUDA ( void * context ,
const std::vector< Ptr < BackendWrapper >> & inputs ,
const std::vector< Ptr < BackendWrapper >> & outputs
)
virtual

Returns a CUDA backend node.

Parameters
context void pointer to CSLContext object
inputs layer inputs
outputs layer outputs

initHalide()

virtual Ptr < BackendNode > cv::dnn::Layer::initHalide ( const std::vector< Ptr < BackendWrapper > > & inputs )
virtual

Returns Halide backend node.

Parameters
[in] inputs Input Halide buffers.
See also
BackendNode , BackendWrapper

Input buffers should be exactly the same ones that will be used in forward invocations. Although a Halide::ImageParam could be created from the input shape alone, using the actual buffers helps prevent some memory management issues (and if something goes wrong, the Halide tests will fail).

initInfEngine()

virtual Ptr < BackendNode > cv::dnn::Layer::initInfEngine ( const std::vector< Ptr < BackendWrapper > > & inputs )
virtual

initNgraph()

virtual Ptr < BackendNode > cv::dnn::Layer::initNgraph ( const std::vector< Ptr < BackendWrapper > > & inputs ,
const std::vector< Ptr < BackendNode > > & nodes
)
virtual

initVkCom()

virtual Ptr < BackendNode > cv::dnn::Layer::initVkCom ( const std::vector< Ptr < BackendWrapper > > & inputs )
virtual

inputNameToIndex()

virtual int cv::dnn::Layer::inputNameToIndex ( String inputName )
virtual

Returns the index of the input blob in the input array.

Parameters
inputName label of input blob

Each layer input and output can be labeled to easily identify them using "<layer_name>[.output_name]" notation. This method maps the label of an input blob to its index in the input vector.

Reimplemented in cv::dnn::LSTMLayer .
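
As a sketch of this notation, the fragment below wires two layers by name with cv::dnn::Net::connect(), which resolves the optional pin suffix through outputNameToIndex()/inputNameToIndex(). The layer names are illustrative, and the "Identity" type string is assumed to be registered with the layer factory.

#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net;
    cv::dnn::LayerParams lp;

    // Two trivial layers used only to demonstrate the pin notation.
    net.addLayer("layerA", "Identity", lp);
    net.addLayer("layerB", "Identity", lp);

    // "<layer_name>[.output_name]" notation: first output of layerA goes to
    // the first input of layerB.
    net.connect("layerA", "layerB");
    // Equivalent, with explicit pin suffixes: net.connect("layerA.0", "layerB.0");
    return 0;
}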

outputNameToIndex()

virtual int cv::dnn::Layer::outputNameToIndex ( const String & outputName )
virtual
Python:
retval = cv.dnn_Layer.outputNameToIndex( outputName )

Returns the index of the output blob in the output array.

See also
inputNameToIndex()

Reimplemented in cv::dnn::LSTMLayer .

run()

void cv::dnn::Layer::run ( const std::vector< Mat > & inputs ,
std::vector< Mat > & outputs ,
std::vector< Mat > & internals
)
Python:
outputs, internals = cv.dnn_Layer.run( inputs, internals[, outputs] )

Allocates layer and computes output.

Deprecated:
This method will be removed in the future release.

setActivation()

virtual bool cv::dnn::Layer::setActivation ( const Ptr < ActivationLayer > & layer )
virtual

Tries to attach the subsequent activation layer to this layer, i.e. do the layer fusion in a partial case.

Parameters
[in] layer The subsequent activation layer.

Returns true if the activation layer has been attached successfully.
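
A sketch of such an override for a hypothetical MyLayer subclass is shown below; the fusedActivation member is illustrative and would be applied while writing the outputs in forward().

// Sketch: a layer that accepts a subsequent activation for fusion.
bool MyLayer::setActivation(const cv::Ptr<cv::dnn::ActivationLayer> &layer)
{
    if (layer.empty())
        return false;          // nothing to fuse
    fusedActivation = layer;   // hypothetical member, applied at the end of forward()
    return true;               // the network may now skip the activation layer
}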

setParamsFrom()

void cv::dnn::Layer::setParamsFrom ( const LayerParams & params )

Initializes only the name, type and blobs fields.

supportBackend()

virtual bool cv::dnn::Layer::supportBackend ( int backendId )
virtual

Asks the layer whether it supports a specific backend for doing computations.

Parameters
[in] backendId computation backend identifier.
See also
Backend
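
For example, a custom layer that only provides the default OpenCV code path could answer as in this sketch (MyLayer is a hypothetical subclass declaring this override):

// Sketch: reject every backend except the default OpenCV one.
bool MyLayer::supportBackend(int backendId)
{
    return backendId == cv::dnn::DNN_BACKEND_OPENCV;
}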

tryAttach()

virtual Ptr < BackendNode > cv::dnn::Layer::tryAttach ( const Ptr < BackendNode > & node )
virtual

Implements layer fusing.

Parameters
[in] node Backend node of the bottom layer.
See also
BackendNode

Relevant for graph-based backends. If the layer is attached successfully, returns a non-empty cv::Ptr to a node of the same backend. Fusion is performed only over the last function.

tryFuse()

virtual bool cv::dnn::Layer::tryFuse ( Ptr < Layer > & top )
virtual

Tries to fuse the current layer with the next one.

Parameters
[in] top Next layer to be fused.
Returns
True if fusion was performed.
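
A sketch of such an override for a hypothetical MyLayer subclass, absorbing a following channel-wise layer via its getScaleShift() parameters; applyScaleShift() is an illustrative helper, not part of this interface:

bool MyLayer::tryFuse(cv::Ptr<cv::dnn::Layer> &top)
{
    cv::Mat scale, shift;
    top->getScaleShift(scale, shift);
    if (scale.empty() && shift.empty())
        return false;                 // the next layer is not channel-wise
    applyScaleShift(scale, shift);    // fold the values into this layer's own parameters
    return true;                      // the fused layer will be skipped during inference
}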

unsetAttached()

virtual void cv::dnn::Layer::unsetAttached ( )
virtual

"Detaches" all the layers attached to the particular layer.

Member Data Documentation

blobs

std::vector< Mat > cv::dnn::Layer::blobs

List of learned parameters, which must be stored here to allow reading them by using Net::getParam() .

name

String cv::dnn::Layer::name

Name of the layer instance, can be used for logging or other internal purposes.

preferableTarget

int cv::dnn::Layer::preferableTarget

Preferable target for layer forwarding.

type

String cv::dnn::Layer::type

Type name which was used for creating the layer by the layer factory.


The documentation for this class was generated from the following file:
opencv2/dnn/dnn.hpp