cv::dnn::Net Class Reference
DNN (Deep Neural Network) module

This class allows creating and manipulating comprehensive artificial neural networks. More...

#include <opencv2/dnn/dnn.hpp>

Inheritance diagram for cv::dnn::Net:
cv::dnn::Model cv::dnn::ClassificationModel cv::dnn::DetectionModel cv::dnn::KeypointsModel cv::dnn::SegmentationModel

Public Types

typedef DictValue LayerId
Container for strings and integers. More...

Public Member Functions

Net ()
Default constructor. More...
~Net ()
Destructor frees the net only if there aren't references to the net anymore. More...
int addLayer (const String &name, const String &type, LayerParams &params)
Adds new layer to the net. More...
int addLayerToPrev (const String &name, const String &type, LayerParams &params)
Adds new layer and connects its first input to the first output of previously added layer. More...
void connect ( String outPin, String inpPin)
Connects output of the first layer to input of the second layer. More...
void connect (int outLayerId, int outNum, int inpLayerId, int inpNum)
Connects #outNum output of the first layer to #inpNum input of the second layer. More...
String dump ()
Dump net to String. More...
void dumpToFile (const String &path)
Dump net structure, hyperparameters, backend, target and fusion to dot file. More...
bool empty () const
void enableFusion (bool fusion)
Enables or disables layer fusion in the network. More...
Mat forward (const String &outputName= String ())
Runs forward pass to compute output of layer with name outputName . More...
void forward ( OutputArrayOfArrays outputBlobs, const String &outputName= String ())
Runs forward pass to compute output of layer with name outputName . More...
void forward ( OutputArrayOfArrays outputBlobs, const std::vector< String > &outBlobNames)
Runs forward pass to compute outputs of layers listed in outBlobNames . More...
void forward (std::vector< std::vector< Mat > > &outputBlobs, const std::vector< String > &outBlobNames)
Runs forward pass to compute outputs of layers listed in outBlobNames . More...
AsyncArray forwardAsync (const String &outputName= String ())
Runs forward pass to compute output of layer with name outputName . More...
int64 getFLOPS (const std::vector< MatShape > &netInputShapes) const
Computes FLOP for whole loaded model with specified input shapes. More...
int64 getFLOPS (const MatShape &netInputShape) const
int64 getFLOPS (const int layerId, const std::vector< MatShape > &netInputShapes) const
int64 getFLOPS (const int layerId, const MatShape &netInputShape) const
Ptr< Layer > getLayer ( LayerId layerId)
Returns pointer to layer with specified id or name which the network uses. More...
int getLayerId (const String &layer)
Converts string name of the layer to the integer identifier. More...
std::vector< Ptr< Layer > > getLayerInputs ( LayerId layerId)
Returns pointers to input layers of specific layer. More...
std::vector< String > getLayerNames () const
int getLayersCount (const String &layerType) const
Returns count of layers of specified type. More...
void getLayerShapes (const MatShape &netInputShape, const int layerId, std::vector< MatShape > &inLayerShapes, std::vector< MatShape > &outLayerShapes) const
Returns input and output shapes for layer with specified id in loaded model; preliminary inferencing isn't necessary. More...
void getLayerShapes (const std::vector< MatShape > &netInputShapes, const int layerId, std::vector< MatShape > &inLayerShapes, std::vector< MatShape > &outLayerShapes) const
void getLayersShapes (const std::vector< MatShape > &netInputShapes, std::vector< int > &layersIds, std::vector< std::vector< MatShape > > &inLayersShapes, std::vector< std::vector< MatShape > > &outLayersShapes) const
Returns input and output shapes for all layers in loaded model; preliminary inferencing isn't necessary. More...
void getLayersShapes (const MatShape &netInputShape, std::vector< int > &layersIds, std::vector< std::vector< MatShape > > &inLayersShapes, std::vector< std::vector< MatShape > > &outLayersShapes) const
void getLayerTypes (std::vector< String > &layersTypes) const
Returns list of types for layers used in the model. More...
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, size_t &weights, size_t &blobs) const
Computes the number of bytes required to store all weights and intermediate blobs for the model. More...
void getMemoryConsumption (const MatShape &netInputShape, size_t &weights, size_t &blobs) const
void getMemoryConsumption (const int layerId, const std::vector< MatShape > &netInputShapes, size_t &weights, size_t &blobs) const
void getMemoryConsumption (const int layerId, const MatShape &netInputShape, size_t &weights, size_t &blobs) const
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, std::vector< int > &layerIds, std::vector< size_t > &weights, std::vector< size_t > &blobs) const
Computes the number of bytes required to store all weights and intermediate blobs for each layer. More...
void getMemoryConsumption (const MatShape &netInputShape, std::vector< int > &layerIds, std::vector< size_t > &weights, std::vector< size_t > &blobs) const
Mat getParam ( LayerId layer, int numParam=0)
Returns parameter blob of the layer. More...
int64 getPerfProfile (std::vector< double > &timings)
Returns overall time for inference and timings (in ticks) for layers. Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for those skipped layers. More...
std::vector< int > getUnconnectedOutLayers () const
Returns indexes of layers with unconnected outputs. More...
std::vector< String > getUnconnectedOutLayersNames () const
Returns names of layers with unconnected outputs. More...
void setHalideScheduler (const String &scheduler)
Compile Halide layers. More...
void setInput ( InputArray blob, const String &name="", double scalefactor=1.0, const Scalar & mean = Scalar ())
Sets the new input value for the network. More...
void setInputsNames (const std::vector< String > &inputBlobNames)
Sets outputs names of the network input pseudo layer. More...
void setParam ( LayerId layer, int numParam, const Mat &blob)
Sets the new value for the learned param of the layer. More...
void setPreferableBackend (int backendId)
Ask network to use specific computation backend where it is supported. More...
void setPreferableTarget (int targetId)
Ask network to make computations on specific target device. More...

Static Public Member Functions

static Net readFromModelOptimizer (const String &xml, const String &bin)
Create a network from Intel's Model Optimizer intermediate representation (IR). More...
static Net readFromModelOptimizer (const std::vector< uchar > &bufferModelConfig, const std::vector< uchar > &bufferWeights)
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR). More...
static Net readFromModelOptimizer (const uchar *bufferModelConfigPtr, size_t bufferModelConfigSize, const uchar *bufferWeightsPtr, size_t bufferWeightsSize)
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR). More...

Detailed Description

This class allows creating and manipulating comprehensive artificial neural networks.

A neural network is presented as a directed acyclic graph (DAG), where vertices are Layer instances, and edges specify relationships between layers' inputs and outputs.

Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.

This class supports reference counting of its instances, i.e. copies point to the same instance.

Examples:
samples/dnn/colorization.cpp, samples/dnn/openpose.cpp, and samples/dnn/text_detection.cpp.

Member Typedef Documentation

LayerId

Container for strings and integers.

Constructor & Destructor Documentation

Net()

cv::dnn::Net::Net ( )
Python:
<dnn_Net object> = cv.dnn_Net( )

Default constructor.

~Net()

cv::dnn::Net::~Net ( )

Destructor frees the net only if there aren't references to the net anymore.

Member Function Documentation

addLayer()

int cv::dnn::Net::addLayer ( const String & name ,
const String & type ,
LayerParams & params
)

Adds new layer to the net.

Parameters
name unique name of the adding layer.
type typename of the adding layer (type must be registered in LayerRegister).
params parameters which will be used to initialize the creating layer.
Returns
unique identifier of the created layer, or -1 if a failure happened.

addLayerToPrev()

int cv::dnn::Net::addLayerToPrev ( const String & name ,
const String & type ,
LayerParams & params
)

Adds new layer and connects its first input to the first output of previously added layer.

See also
addLayer()

connect() [1/2]

void cv::dnn::Net::connect ( String outPin ,
String inpPin
)
Python:
None = cv.dnn_Net.connect( outPin, inpPin )

Connects output of the first layer to input of the second layer.

Parameters
outPin descriptor of the first layer output.
inpPin descriptor of the second layer input.

Descriptors have the following template <layer_name>[.input_number] :

  • the first part of the template, layer_name, is the string name of the added layer. If this part is empty then the network input pseudo layer will be used;
  • the second, optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted then the first layer input will be used.

    See also
    setNetInputs(), Layer::inputNameToIndex() , Layer::outputNameToIndex()
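The descriptor template above can be illustrated with a small parsing sketch. This is plain Python, not part of OpenCV; the helper name is hypothetical and only mirrors the <layer_name>[.input_number] format:

```python
# Hypothetical helper illustrating the "<layer_name>[.input_number]"
# pin-descriptor template used by Net::connect (not OpenCV code).

def parse_pin_descriptor(descriptor):
    """Split a pin descriptor into (layer_name, pin).

    An empty layer_name refers to the network input pseudo layer;
    a missing pin part means the first input/output (index 0).
    """
    name, sep, pin = descriptor.partition(".")
    if not sep:                      # no '.' present: first pin
        return name, 0
    if pin.isdigit():                # numeric pin index
        return name, int(pin)
    return name, pin                 # labeled pin, resolved by the layer

print(parse_pin_descriptor("conv1"))    # ('conv1', 0)
print(parse_pin_descriptor("conv1.1"))  # ('conv1', 1)
print(parse_pin_descriptor(".0"))       # ('', 0) -> network input
```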

connect() [2/2]

void cv::dnn::Net::connect ( int outLayerId ,
int outNum ,
int inpLayerId ,
int inpNum
)
Python:
None = cv.dnn_Net.connect( outPin, inpPin )

Connects #outNum output of the first layer to #inpNum input of the second layer.

Parameters
outLayerId identifier of the first layer
outNum number of the first layer output
inpLayerId identifier of the second layer
inpNum number of the second layer input

dump()

String cv::dnn::Net::dump ( )
Python:
retval = cv.dnn_Net.dump( )

Dump net to String.

Returns
String with structure, hyperparameters, backend, target and fusion. Call this method after setInput() . To see the correct backend, target and fusion, run after forward() .

dumpToFile()

void cv::dnn::Net::dumpToFile ( const String & path )
Python:
None = cv.dnn_Net.dumpToFile( path )

Dump net structure, hyperparameters, backend, target and fusion to dot file.

Parameters
path path to output file with .dot extension
See also
dump()

empty()

bool cv::dnn::Net::empty ( ) const
Python:
retval = cv.dnn_Net.empty( )

Returns true if there are no layers in the network.

enableFusion()

void cv::dnn::Net::enableFusion ( bool fusion )
Python:
None = cv.dnn_Net.enableFusion( fusion )

Enables or disables layer fusion in the network.

Parameters
fusion true to enable the fusion, false to disable. The fusion is enabled by default.

forward() [1/4]

Mat cv::dnn::Net::forward ( const String & outputName = String () )
Python:
retval = cv.dnn_Net.forward( [, outputName] )
outputBlobs = cv.dnn_Net.forward( [, outputBlobs[, outputName]] )
outputBlobs = cv.dnn_Net.forward( outBlobNames[, outputBlobs] )
outputBlobs = cv.dnn_Net.forwardAndRetrieve( outBlobNames )

Runs forward pass to compute output of layer with name outputName .

Parameters
outputName name of the layer whose output is needed
Returns
blob for the first output of the specified layer.

By default runs forward pass for the whole network.

Examples:
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.

forward() [2/4]

void cv::dnn::Net::forward ( OutputArrayOfArrays outputBlobs ,
const String & outputName = String ()
)
Python:
retval = cv.dnn_Net.forward( [, outputName] )
outputBlobs = cv.dnn_Net.forward( [, outputBlobs[, outputName]] )
outputBlobs = cv.dnn_Net.forward( outBlobNames[, outputBlobs] )
outputBlobs = cv.dnn_Net.forwardAndRetrieve( outBlobNames )

Runs forward pass to compute output of layer with name outputName .

Parameters
outputBlobs contains all output blobs for the specified layer.
outputName name of the layer whose output is needed

If outputName is empty, runs forward pass for the whole network.

forward() [3/4]

void cv::dnn::Net::forward ( OutputArrayOfArrays outputBlobs ,
const std::vector< String > & outBlobNames
)
Python:
retval = cv.dnn_Net.forward( [, outputName] )
outputBlobs = cv.dnn_Net.forward( [, outputBlobs[, outputName]] )
outputBlobs = cv.dnn_Net.forward( outBlobNames[, outputBlobs] )
outputBlobs = cv.dnn_Net.forwardAndRetrieve( outBlobNames )

Runs forward pass to compute outputs of layers listed in outBlobNames .

Parameters
outputBlobs contains blobs for the first outputs of the specified layers.
outBlobNames names of the layers whose outputs are needed

forward() [4/4]

void cv::dnn::Net::forward ( std::vector< std::vector< Mat > > & outputBlobs ,
const std::vector< String > & outBlobNames
)
Python:
retval = cv.dnn_Net.forward( [, outputName] )
outputBlobs = cv.dnn_Net.forward( [, outputBlobs[, outputName]] )
outputBlobs = cv.dnn_Net.forward( outBlobNames[, outputBlobs] )
outputBlobs = cv.dnn_Net.forwardAndRetrieve( outBlobNames )

Runs forward pass to compute outputs of layers listed in outBlobNames .

Parameters
outputBlobs contains all output blobs for each layer specified in outBlobNames .
outBlobNames names of the layers whose outputs are needed

forwardAsync()

AsyncArray cv::dnn::Net::forwardAsync ( const String & outputName = String () )
Python:
retval = cv.dnn_Net.forwardAsync( [, outputName] )

Runs forward pass to compute output of layer with name outputName .

Parameters
outputName name of the layer whose output is needed

By default runs forward pass for the whole network.

This is an asynchronous version of forward(const String&) . dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.

getFLOPS() [1/4]

int64 cv::dnn::Net::getFLOPS ( const std::vector< MatShape > & netInputShapes ) const
Python:
retval = cv.dnn_Net.getFLOPS( netInputShapes )
retval = cv.dnn_Net.getFLOPS( netInputShape )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShapes )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShape )

Computes FLOP for whole loaded model with specified input shapes.

Parameters
netInputShapes vector of shapes for all net inputs.
Returns
computed FLOP.
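As a rough illustration of what such a count represents, the bulk of a convolutional network's FLOPs comes from its convolution layers, whose cost can be estimated from the layer shapes alone. This is a back-of-envelope sketch, not OpenCV's exact internal accounting (which may differ in constants and in which operations it counts):

```python
# Back-of-envelope FLOP estimate for one convolution layer
# (illustrative only; not OpenCV's internal accounting).

def conv_flops(h_out, w_out, c_in, c_out, k):
    # Each output element needs k*k*c_in multiply-adds,
    # counted here as 2 FLOPs (one multiply + one add) each.
    return 2 * h_out * w_out * c_out * (k * k * c_in)

# 3x3 convolution, 64 -> 128 channels, 56x56 output map:
print(conv_flops(56, 56, 64, 128, 3))  # 462422016, i.e. ~0.46 GFLOP
```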

getFLOPS() [2/4]

int64 cv::dnn::Net::getFLOPS ( const MatShape & netInputShape ) const
Python:
retval = cv.dnn_Net.getFLOPS( netInputShapes )
retval = cv.dnn_Net.getFLOPS( netInputShape )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShapes )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getFLOPS() [3/4]

int64 cv::dnn::Net::getFLOPS ( const int layerId ,
const std::vector< MatShape > & netInputShapes
) const
Python:
retval = cv.dnn_Net.getFLOPS( netInputShapes )
retval = cv.dnn_Net.getFLOPS( netInputShape )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShapes )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getFLOPS() [4/4]

int64 cv::dnn::Net::getFLOPS ( const int layerId ,
const MatShape & netInputShape
) const
Python:
retval = cv.dnn_Net.getFLOPS( netInputShapes )
retval = cv.dnn_Net.getFLOPS( netInputShape )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShapes )
retval = cv.dnn_Net.getFLOPS( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getLayer()

Ptr< Layer > cv::dnn::Net::getLayer ( LayerId layerId )
Python:
retval = cv.dnn_Net.getLayer( layerId )

Returns pointer to layer with specified id or name which the network uses.

Examples:
samples/dnn/colorization.cpp.

getLayerId()

int cv::dnn::Net::getLayerId ( const String & layer )
Python:
retval = cv.dnn_Net.getLayerId( layer )

Converts string name of the layer to the integer identifier.

Returns
id of the layer, or -1 if the layer wasn't found.

getLayerInputs()

std::vector< Ptr< Layer > > cv::dnn::Net::getLayerInputs ( LayerId layerId )

Returns pointers to input layers of specific layer.

getLayerNames()

std::vector< String > cv::dnn::Net::getLayerNames ( ) const
Python:
retval = cv.dnn_Net.getLayerNames( )

getLayersCount()

int cv::dnn::Net::getLayersCount ( const String & layerType ) const
Python:
retval = cv.dnn_Net.getLayersCount( layerType )

Returns count of layers of specified type.

Parameters
layerType type.
Returns
count of layers

getLayerShapes() [1/2]

void cv::dnn::Net::getLayerShapes ( const MatShape & netInputShape ,
const int layerId ,
std::vector< MatShape > & inLayerShapes ,
std::vector< MatShape > & outLayerShapes
) const

Returns input and output shapes for layer with specified id in loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShape shape of the input blob in the net input layer.
layerId id of the layer.
inLayerShapes output parameter for input layer shapes; order is the same as in layersIds
outLayerShapes output parameter for output layer shapes; order is the same as in layersIds

getLayerShapes() [2/2]

void cv::dnn::Net::getLayerShapes ( const std::vector< MatShape > & netInputShapes ,
const int layerId ,
std::vector< MatShape > & inLayerShapes ,
std::vector< MatShape > & outLayerShapes
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getLayersShapes() [1/2]

void cv::dnn::Net::getLayersShapes ( const std::vector< MatShape > & netInputShapes ,
std::vector< int > & layersIds ,
std::vector< std::vector< MatShape > > & inLayersShapes ,
std::vector< std::vector< MatShape > > & outLayersShapes
) const
Python:
layersIds, inLayersShapes, outLayersShapes = cv.dnn_Net.getLayersShapes( netInputShapes )
layersIds, inLayersShapes, outLayersShapes = cv.dnn_Net.getLayersShapes( netInputShape )

Returns input and output shapes for all layers in loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShapes shapes for all input blobs in the net input layer.
layersIds output parameter for layer IDs.
inLayersShapes output parameter for input layer shapes; order is the same as in layersIds
outLayersShapes output parameter for output layer shapes; order is the same as in layersIds

getLayersShapes() [2/2]

void cv::dnn::Net::getLayersShapes ( const MatShape & netInputShape ,
std::vector< int > & layersIds ,
std::vector< std::vector< MatShape > > & inLayersShapes ,
std::vector< std::vector< MatShape > > & outLayersShapes
) const
Python:
layersIds, inLayersShapes, outLayersShapes = cv.dnn_Net.getLayersShapes( netInputShapes )
layersIds, inLayersShapes, outLayersShapes = cv.dnn_Net.getLayersShapes( netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getLayerTypes()

void cv::dnn::Net::getLayerTypes ( std::vector< String > & layersTypes ) const
Python:
layersTypes = cv.dnn_Net.getLayerTypes( )

Returns list of types for layers used in the model.

Parameters
layersTypes output parameter for returning types.

getMemoryConsumption() [1/6]

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > & netInputShapes ,
size_t & weights ,
size_t & blobs
) const
Python:
weights, blobs = cv.dnn_Net.getMemoryConsumption( netInputShape )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShapes )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShape )

Computes the number of bytes required to store all weights and intermediate blobs for the model.

Parameters
netInputShapes vector of shapes for all net inputs.
weights output parameter to store resulting bytes for weights.
blobs output parameter to store resulting bytes for intermediate blobs.
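To give a sense of the weights term, it is essentially the total parameter count times the element size. The sketch below is an illustrative estimate, not OpenCV's actual implementation, and assumes float32 (4-byte) parameters:

```python
# Illustrative estimate of the "weights" term of getMemoryConsumption:
# parameter count times element size. Assumes float32 storage; this is
# a sketch, not OpenCV's actual implementation.

def weights_bytes(param_shapes, elem_size=4):
    total = 0
    for shape in param_shapes:
        n = 1
        for dim in shape:
            n *= dim                 # elements in this parameter blob
        total += n * elem_size       # bytes for this blob
    return total

# A 3x3 convolution (64 -> 128 channels) with a bias vector:
shapes = [(128, 64, 3, 3), (128,)]
print(weights_bytes(shapes))  # 295424 bytes (~288 KiB)
```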

getMemoryConsumption() [2/6]

void cv::dnn::Net::getMemoryConsumption ( const MatShape & netInputShape ,
size_t & weights ,
size_t & blobs
) const
Python:
weights, blobs = cv.dnn_Net.getMemoryConsumption( netInputShape )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShapes )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getMemoryConsumption() [3/6]

void cv::dnn::Net::getMemoryConsumption ( const int layerId ,
const std::vector< MatShape > & netInputShapes ,
size_t & weights ,
size_t & blobs
) const
Python:
weights, blobs = cv.dnn_Net.getMemoryConsumption( netInputShape )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShapes )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getMemoryConsumption() [4/6]

void cv::dnn::Net::getMemoryConsumption ( const int layerId ,
const MatShape & netInputShape ,
size_t & weights ,
size_t & blobs
) const
Python:
weights, blobs = cv.dnn_Net.getMemoryConsumption( netInputShape )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShapes )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getMemoryConsumption() [5/6]

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > & netInputShapes ,
std::vector< int > & layerIds ,
std::vector< size_t > & weights ,
std::vector< size_t > & blobs
) const
Python:
weights, blobs = cv.dnn_Net.getMemoryConsumption( netInputShape )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShapes )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShape )

Computes the number of bytes required to store all weights and intermediate blobs for each layer.

Parameters
netInputShapes vector of shapes for all net inputs.
layerIds output vector to save layer IDs.
weights output parameter to store resulting bytes for weights.
blobs output parameter to store resulting bytes for intermediate blobs.

getMemoryConsumption() [6/6]

void cv::dnn::Net::getMemoryConsumption ( const MatShape & netInputShape ,
std::vector< int > & layerIds ,
std::vector< size_t > & weights ,
std::vector< size_t > & blobs
) const
Python:
weights, blobs = cv.dnn_Net.getMemoryConsumption( netInputShape )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShapes )
weights, blobs = cv.dnn_Net.getMemoryConsumption( layerId, netInputShape )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

getParam()

Mat cv::dnn::Net::getParam ( LayerId layer ,
int numParam = 0
)
Python:
retval = cv.dnn_Net.getParam( layer[, numParam] )

Returns parameter blob of the layer.

Parameters
layer name or id of the layer.
numParam index of the layer parameter in the Layer::blobs array.
See also
Layer::blobs

getPerfProfile()

int64 cv::dnn::Net::getPerfProfile ( std::vector< double > & timings )
Python:
retval, timings = cv.dnn_Net.getPerfProfile( )

Returns overall time for inference and timings (in ticks) for layers. Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for those skipped layers.

Parameters
timings vector for tick timings for all layers.
Returns
overall ticks for model inference.
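Tick counts are converted to wall-clock time by dividing by the tick frequency, which in real code comes from cv::getTickFrequency() (cv.getTickFrequency() in Python). A sketch of the conversion, with an assumed frequency value standing in for the real call:

```python
# Converting tick counts from getPerfProfile to milliseconds.
# The frequency below is an assumed example value; in real code it
# comes from cv.getTickFrequency() / cv::getTickFrequency().

def ticks_to_ms(ticks, tick_frequency):
    return 1000.0 * ticks / tick_frequency

assumed_freq = 1_000_000_000          # assumption: 1 GHz tick counter
print(ticks_to_ms(5_000_000, assumed_freq))  # 5.0 (milliseconds)
```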

getUnconnectedOutLayers()

std::vector<int> cv::dnn::Net::getUnconnectedOutLayers ( ) const
Python:
retval = cv.dnn_Net.getUnconnectedOutLayers( )

Returns indexes of layers with unconnected outputs.

getUnconnectedOutLayersNames()

std::vector< String > cv::dnn::Net::getUnconnectedOutLayersNames ( ) const
Python:
retval = cv.dnn_Net.getUnconnectedOutLayersNames( )

Returns names of layers with unconnected outputs.
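A common pattern in OpenCV samples maps the ids from getUnconnectedOutLayers() to names via getLayerNames(): since the network input pseudo layer occupies id 0 while getLayerNames() starts at the first real layer, layer id i corresponds to index i - 1 in the names list. A sketch with plain Python lists standing in for the Net calls:

```python
# Mapping unconnected-output layer ids to layer names. Plain lists
# stand in for net.getLayerNames() and net.getUnconnectedOutLayers();
# the i - 1 offset exists because the input pseudo layer occupies
# id 0, while getLayerNames() starts at the first real layer.

layer_names = ["conv1", "relu1", "yolo_82", "yolo_94"]  # ids 1..4
unconnected_ids = [3, 4]                                # layer ids

out_names = [layer_names[i - 1] for i in unconnected_ids]
print(out_names)  # ['yolo_82', 'yolo_94']
```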

readFromModelOptimizer() [1/3]

static Net cv::dnn::Net::readFromModelOptimizer ( const String & xml ,
const String & bin
)
static
Python:
retval = cv.dnn.Net_readFromModelOptimizer( xml, bin )
retval = cv.dnn.Net_readFromModelOptimizer( bufferModelConfig, bufferWeights )

Create a network from Intel's Model Optimizer intermediate representation (IR).

Parameters
[in] xml XML configuration file with network's topology.
[in] bin Binary file with trained weights. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

readFromModelOptimizer() [2/3]

static Net cv::dnn::Net::readFromModelOptimizer ( const std::vector< uchar > & bufferModelConfig ,
const std::vector< uchar > & bufferWeights
)
static
Python:
retval = cv.dnn.Net_readFromModelOptimizer( xml, bin )
retval = cv.dnn.Net_readFromModelOptimizer( bufferModelConfig, bufferWeights )

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Parameters
[in] bufferModelConfig buffer with model's configuration.
[in] bufferWeights buffer with model's trained weights.
Returns
Net object.

readFromModelOptimizer() [3/3]

static Net cv::dnn::Net::readFromModelOptimizer ( const uchar * bufferModelConfigPtr ,
size_t bufferModelConfigSize ,
const uchar * bufferWeightsPtr ,
size_t bufferWeightsSize
)
static
Python:
retval = cv.dnn.Net_readFromModelOptimizer( xml, bin )
retval = cv.dnn.Net_readFromModelOptimizer( bufferModelConfig, bufferWeights )

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Parameters
[in] bufferModelConfigPtr buffer pointer to model's configuration.
[in] bufferModelConfigSize buffer size of model's configuration.
[in] bufferWeightsPtr buffer pointer to model's trained weights.
[in] bufferWeightsSize buffer size of model's trained weights.
Returns
Net object.

setHalideScheduler()

void cv::dnn::Net::setHalideScheduler ( const String & scheduler )
Python:
None = cv.dnn_Net.setHalideScheduler( scheduler )

Compile Halide layers.

Parameters
[in] scheduler Path to YAML file with scheduling directives.
See also
setPreferableBackend

Schedules layers that support the Halide backend, then compiles them for a specific target. For layers that are not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling will be applied.

setInput()

void cv::dnn::Net::setInput ( InputArray blob ,
const String & name = "" ,
double scalefactor = 1.0 ,
const Scalar & mean = Scalar ()
)
Python:
None = cv.dnn_Net.setInput( blob[, name[, scalefactor[, mean]]] )

Sets the new input value for the network.

Parameters
blob A new blob. Should have CV_32F or CV_8U depth.
name A name of input layer.
scalefactor An optional normalization scale.
mean Optional mean subtraction values.
See also
connect(String, String) to know the format of the descriptor.

If scale or mean values are specified, a final input blob is computed as:

\[input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\]

Examples:
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.
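The formula above can be sketched in plain Python on a small NCHW blob (nested lists stand in for a Mat; the helper name is hypothetical, not an OpenCV function):

```python
# Illustrative sketch of setInput's preprocessing, per the formula
#   input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)
# Nested lists stand in for an NCHW blob; not OpenCV code.

def apply_input_preprocessing(blob, scalefactor=1.0, mean=(0.0, 0.0, 0.0)):
    """blob: nested lists in NCHW layout; mean is per-channel."""
    return [
        [
            [[scalefactor * (v - mean[c]) for v in row] for row in channel]
            for c, channel in enumerate(image)
        ]
        for image in blob
    ]

# One 1x2 image with 3 channels (N=1, C=3, H=1, W=2):
blob = [[[[10.0, 20.0]], [[30.0, 40.0]], [[50.0, 60.0]]]]
out = apply_input_preprocessing(blob, scalefactor=0.5, mean=(10.0, 30.0, 50.0))
print(out)  # [[[[0.0, 5.0]], [[0.0, 5.0]], [[0.0, 5.0]]]]
```

Each channel is first shifted by its own mean, then the whole blob is scaled, matching the order in the formula.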

setInputsNames()

void cv::dnn::Net::setInputsNames ( const std::vector< String > & inputBlobNames )
Python:
None = cv.dnn_Net.setInputsNames( inputBlobNames )

Sets outputs names of the network input pseudo layer.

Each net always has its own special network input pseudo layer with id=0. This layer stores the user blobs only and doesn't make any computations. In fact, this layer provides the only way to pass user data into the network. As with any other layer, this layer can label its outputs, and this function provides an easy way to do this.

setParam()

void cv::dnn::Net::setParam ( LayerId layer ,
int numParam ,
const Mat & blob
)
Python:
None = cv.dnn_Net.setParam( layer, numParam, blob )

Sets the new value for the learned param of the layer.

Parameters
layer name or id of the layer.
numParam index of the layer parameter in the Layer::blobs array.
blob the new value.
See also
Layer::blobs
Note
If the shape of the new blob differs from the previous shape, then the following forward pass may fail.

setPreferableBackend()

void cv::dnn::Net::setPreferableBackend ( int backendId )
Python:
None = cv.dnn_Net.setPreferableBackend( backendId )

Ask network to use specific computation backend where it is supported.

Parameters
[in] backendId backend identifier.
See also
Backend

If OpenCV is compiled with Intel's Inference Engine library, DNN_BACKEND_DEFAULT means DNN_BACKEND_INFERENCE_ENGINE. Otherwise it equals to DNN_BACKEND_OPENCV.

setPreferableTarget()

void cv::dnn::Net::setPreferableTarget ( int targetId )
Python:
None = cv.dnn_Net.setPreferableTarget( targetId )

Ask network to make computations on specific target device.

Parameters
[in] targetId target identifier.
See also
Target

List of supported combinations backend / target:

                        DNN_BACKEND_OPENCV  DNN_BACKEND_INFERENCE_ENGINE  DNN_BACKEND_HALIDE  DNN_BACKEND_CUDA
DNN_TARGET_CPU                  +                       +                         +
DNN_TARGET_OPENCL               +                       +                         +
DNN_TARGET_OPENCL_FP16          +                       +
DNN_TARGET_MYRIAD                                       +
DNN_TARGET_FPGA                                         +
DNN_TARGET_CUDA                                                                                      +
DNN_TARGET_CUDA_FP16                                                                                 +
Examples:
samples/dnn/colorization.cpp .

The documentation for this class was generated from the following file: