4.1.6 Model Inference Interface Description
Overview
The Ubuntu system on the development board comes pre-installed with the Python model inference module pyeasy_dnn. Loading a model creates Model objects that provide model inference, data parsing, and related functions.
The inference workflow of this module has three steps: load the model, run inference on an image, and parse the output data. A code example follows:
from hobot_dnn import pyeasy_dnn as dnn

# load the model; returns a list of Model objects
models = dnn.load('./model.bin')

# run inference with an image
outputs = models[0].forward(image)

# collect each output tensor's data for post-processing
output_array = []
for item in outputs:
    output_array.append(item.buffer)
post_process(output_array)
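The image passed to forward must already match the model's expected input format; the example models in this chapter consume NV12. A numpy-only sketch of packing a BGR frame into an NV12 buffer is shown below. The BT.601 full-range coefficients and the packing layout are assumptions for illustration; the exact preprocessing depends on how your model was compiled.

```python
import numpy as np

def bgr_to_nv12(bgr):
    """Pack an HxWx3 uint8 BGR image into a flat NV12 buffer
    (full-resolution Y plane followed by interleaved, 2x2-subsampled UV)."""
    h, w, _ = bgr.shape
    b = bgr[:, :, 0].astype(np.float32)
    g = bgr[:, :, 1].astype(np.float32)
    r = bgr[:, :, 2].astype(np.float32)
    # BT.601 full-range RGB -> YUV (an assumption; match your toolchain)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128
    uv = np.empty((h // 2, w), dtype=np.float32)
    uv[:, 0::2] = u[::2, ::2]   # U samples in even columns
    uv[:, 1::2] = v[::2, ::2]   # V samples in odd columns
    nv12 = np.concatenate([y.reshape(-1), uv.reshape(-1)])
    return np.clip(nv12, 0, 255).astype(np.uint8)

# an NV12 buffer holds 1.5 bytes per pixel
img = bgr_to_nv12(np.zeros((512, 512, 3), dtype=np.uint8))
```

The resulting flat uint8 array can be passed to forward in place of the raw camera buffer.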
Model Object
The Model object is created when the model is loaded. It contains members and methods such as inputs, outputs, and forward, detailed as follows:
inputs
【Function Description】Returns the model's input tensor information. A specific input can be selected by index; for example, inputs[0] is the 0th input.
【Function Declaration】tuple(pyDNNTensor) Model.inputs
【Parameter Description】

Parameter Name | Description |
---|---|
index | Index of the input tensor |
【Usage Method】

def print_properties(pro):
    print("tensor type:", pro.tensor_type)
    print("data type:", pro.dtype)
    print("layout:", pro.layout)
    print("shape:", pro.shape)

models = dnn.load('../models/fcos_512x512_nv12.bin')
input = models[0].inputs[0]
print_properties(input.properties)
【Return Value】Returns a pyDNNTensor object with the following members:
Member Name | Description |
---|---|
properties | The properties of the tensor |
buffer | The tensor's data, as a numpy array |
name | The name of the tensor |
【Notes】None
outputs
【Function Description】Returns the model's output tensor information. A specific output can be selected by index; for example, outputs[0] is the 0th output.
【Function Declaration】tuple(pyDNNTensor) Model.outputs
【Parameter Description】

Parameter Name | Description |
---|---|
index | Index of the output tensor |
【Usage Method】

def print_properties(pro):
    print("tensor type:", pro.tensor_type)
    print("data type:", pro.dtype)
    print("layout:", pro.layout)
    print("shape:", pro.shape)

models = dnn.load('../models/fcos_512x512_nv12.bin')
output = models[0].outputs[0]
print_properties(output.properties)
【Return Value】Returns a pyDNNTensor object with the following members:
Member Name | Description |
---|---|
properties | The properties of the tensor |
buffer | The tensor's data, as a numpy array |
name | The name of the tensor |
【Notes】None
forward
【Function Description】Runs model inference on the specified input.
【Function Declaration】outputs Model.forward(args, **kwargs)
【Parameter Description】

Parameter Name | Description | Value Range |
---|---|---|
args | Input data for inference | numpy: single model input; list[numpy, numpy, ...]: multiple model inputs |
kwargs: core_id | Core on which the inference runs | 0: automatic allocation; 1: core0; 2: core1 |
kwargs: priority | Priority of this inference task | 0~255; the larger the number, the higher the priority |
【Usage Method】

img = cam.get_img(2, 512, 512)
img = np.frombuffer(img, dtype=np.uint8)
outputs = models[0].forward(img)
【Return Value】Returns the outputs object containing the inference results.
【Notes】None
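Once forward returns, each element of outputs exposes its data as a numpy array through buffer, so parsing is plain numpy. A minimal sketch for a hypothetical classification head follows; the (1, num_classes, 1, 1) buffer layout and the top_k helper are assumptions for illustration, and detection models such as fcos produce different output layouts.

```python
import numpy as np

# stand-in for outputs[0].buffer of a hypothetical classification model
logits = np.linspace(0.0, 1.0, 1000, dtype=np.float32).reshape(1, 1000, 1, 1)

def top_k(buffer, k=5):
    """Return (class index, score) pairs for the k highest scores."""
    scores = buffer.reshape(-1)
    idx = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in idx]

for cls, score in top_k(logits):
    print(f"class {cls}: {score:.4f}")
```

In a real pipeline, replace the synthetic logits with item.buffer collected from the outputs object.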
Example Code
See the Model Inference Example section for more details.