4.1.6 Model Inference Interface Description

Overview

The Ubuntu system on the development board comes with the pyeasy_dnn model inference module (part of the hobot_dnn Python package) pre-installed. Loading a model creates Model objects, which provide functions such as model inference and output data parsing.

The module inference process can be divided into three steps: loading the model, image inference, and data parsing. The code example is as follows:

```python
from hobot_dnn import pyeasy_dnn as dnn

# Load the model; returns a list of Model objects
models = dnn.load('./model.bin')

# Run inference on the image
outputs = models[0].forward(image)

# Parse the output tensors
output_array = []
for item in outputs:
    output_array.append(item.buffer)
post_process(output_array)
```

Model Object

The Model object is created when the model is loaded. It contains members and methods such as inputs, outputs, and forward, detailed as follows:

inputs

【Function Description】

Returns the model's input tensor information. A particular input can be selected by index; for example, inputs[0] is the 0th input.

【Function Declaration】

```python
Model.inputs(tuple(pyDNNTensor))
```
【Parameter Description】

| Parameter Name | Definition |
| --- | --- |
| index | Index of the input tensor |
【Usage Method】

```python
def print_properties(pro):
    print("tensor type:", pro.tensor_type)
    print("data type:", pro.dtype)
    print("layout:", pro.layout)
    print("shape:", pro.shape)

models = dnn.load('../models/fcos_512x512_nv12.bin')
input = models[0].inputs[0]

print_properties(input.properties)
```
【Return Value】

Returns an object of type pyDNNTensor, detailed as follows:

| Parameter Name | Description |
| --- | --- |
| properties | Properties of the tensor |
| buffer | Data in the tensor, as a numpy array |
| name | Name of the tensor |
【Notes】

None
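
For NV12-input models such as the fcos example above, the flattened input buffer holds the full-resolution Y plane followed by a half-resolution interleaved UV plane, so its length is width × height × 3/2. A minimal sketch in pure numpy (the 512×512 size is taken from the model filename above; the actual expected shape should always be read from `inputs[0].properties`):

```python
import numpy as np

# NV12 layout: full-resolution Y plane, then interleaved UV at quarter resolution.
h, w = 512, 512
y_plane = np.zeros(h * w, dtype=np.uint8)
uv_plane = np.zeros(h * w // 2, dtype=np.uint8)
nv12 = np.concatenate([y_plane, uv_plane])

print(nv12.size)  # 393216 == 512 * 512 * 3 // 2
```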

outputs

【Function Description】

Returns the model's output tensor information. A particular output can be selected by index; for example, outputs[0] is the 0th output.

【Function Declaration】

```python
Model.outputs(tuple(pyDNNTensor))
```
【Parameter Description】

| Parameter Name | Definition |
| --- | --- |
| index | Index of the output tensor |
【Usage Method】

```python
def print_properties(pro):
    print("tensor type:", pro.tensor_type)
    print("data type:", pro.dtype)
    print("layout:", pro.layout)
    print("shape:", pro.shape)

models = dnn.load('../models/fcos_512x512_nv12.bin')
output = models[0].outputs[0]

print_properties(output.properties)
```
【Return Value】

Returns an object of type pyDNNTensor, detailed as follows:

| Parameter Name | Description |
| --- | --- |
| properties | Properties of the tensor |
| buffer | Data in the tensor, as a numpy array |
| name | Name of the tensor |
【Notes】

None
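
The buffer member is what post-processing code typically consumes, as in the overview example. A hedged illustration of that pattern (the SimpleTensor class and the tensor names/shapes below are hypothetical stand-ins; on the board the items come from models[0].outputs or from forward()):

```python
import numpy as np

class SimpleTensor:
    """Hypothetical stand-in for pyDNNTensor: exposes only .name and .buffer."""
    def __init__(self, name, buffer):
        self.name = name
        self.buffer = buffer

outputs = [
    SimpleTensor("cls_scores", np.zeros((1, 80, 64, 64), dtype=np.float32)),
    SimpleTensor("bbox_preds", np.zeros((1, 4, 64, 64), dtype=np.float32)),
]

# Collect each tensor's numpy buffer for post-processing.
output_array = [item.buffer for item in outputs]
for item, arr in zip(outputs, output_array):
    print(item.name, arr.shape, arr.dtype)
```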

forward

【Function Description】

Performs model inference based on the specified input.

【Function Declaration】

```python
Model.forward(args, **kwargs)
```
【Parameter Description】

| Parameter Name | Definition | Value Range |
| --- | --- | --- |
| args | Input data for inference | numpy: single model input; list[numpy, numpy, ...]: multiple model inputs |
| core_id (kwargs) | Core id for model inference | 0: automatic allocation; 1: core0; 2: core1 |
| priority (kwargs) | Priority of the current inference task | 0~255; the larger the number, the higher the priority |

【Usage Method】

```python
# Get a frame from the camera and wrap it as a uint8 numpy array
img = cam.get_img(2, 512, 512)

img = np.frombuffer(img, dtype=np.uint8)
outputs = models[0].forward(img)
```
【Return Value】

Returns an outputs object containing the model's output tensors.

【Notes】

None
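
The args and kwargs combinations from the table above can be sketched as follows. The forward calls are left commented out because they require the board runtime, and the input sizes are assumed for illustration only:

```python
import numpy as np

# Single-input model: pass one numpy array.
single_input = np.zeros(512 * 512 * 3 // 2, dtype=np.uint8)  # e.g. one NV12 frame

# Multi-input model: pass a list with one numpy array per input tensor.
multi_input = [
    np.zeros(512 * 512 * 3 // 2, dtype=np.uint8),
    np.zeros((1, 4), dtype=np.float32),
]

# On the board (models comes from dnn.load, as in the sections above):
# outputs = models[0].forward(single_input)                          # automatic core allocation
# outputs = models[0].forward(multi_input, core_id=1, priority=255)  # pin to core0, highest priority
```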

Example Code

You can view the Model Inference Example section for more details.