
Problems about stm32ai-modelzoo and x-cube-ai with CubeMX

123456dad
Associate II

123456dad_0-1700136656954.png

I am curious about these parameters; I don't know where they are used.

123456dad_1-1700136788170.png

These are auto-generated by following the official guide, but if I want to build my own project with X-CUBE-AI and CubeMX and still use a model from the model zoo, how should I deal with these two parameters?

I don't see anything about this in the code auto-generated by CubeMX with X-CUBE-AI.

I just know that the input of the model is int8, as shown below, but I see these two parameters in the stm32ai-modelzoo official guide.

I am learning about quantization but am still not familiar with it. Should I do something in my own project generated by CubeMX?

Right now I just give my model input image data in the [0:255] range and it works, but after reading the model zoo guide I don't know if that's right. Thank you!

123456dad_2-1700137290334.png

 

1 ACCEPTED SOLUTION
GRATT.2
ST Employee

Hello @123456dad,

 

These two values, nn_input_norm_scale and nn_input_norm_zp, were created to later implement support for floating-point input values for the NN. They would have allowed the user to apply a scale/offset preprocessing. We later found that the float format is not suitable for edge AI image processing, so we kept only the uint8 and int8 input formats.

The reason your code works is that the quantized models included in the Model Zoo expect a uint8 input format. The camera image data is converted by the DMA2D into uint8 values, so there is no need to change their format in the preprocessing before it is used as input by the NN.
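For illustration, the scale/offset preprocessing these parameters would have driven can be sketched in Python. The scale and zero-point values below are assumed examples for this sketch, not values taken from the Model Zoo:

```python
# Sketch of the scale/offset preprocessing that nn_input_norm_scale and
# nn_input_norm_zp would have enabled, had float input been supported.
def quantize_pixel(x, scale, zero_point):
    """Map a normalized float value to its int8 quantized representation."""
    return int(round(x / scale)) + zero_point

# Assumed example parameters: scale = 1/255, zero point = -128.
SCALE = 1.0 / 255.0
ZERO_POINT = -128

print(quantize_pixel(0.0, SCALE, ZERO_POINT))  # -128 (black pixel)
print(quantize_pixel(1.0, SCALE, ZERO_POINT))  # 127 (white pixel)
```

With a uint8 camera pipeline, this step is unnecessary because the [0:255] pixel values are already in the representation the quantized model expects.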

 

Guillaume R.


4 REPLIES

iker_arrizabalaga
Associate II

Hi @123456dad ,

I am currently trying to implement an object detection model on an STM32H745I-DISCO, and I have a significant doubt. How did you import an image into the neural network? I mean, how did you use an image as the input of the neural network? I tried to convert an image into a .csv file and then open it in main.c, but it didn't work. It's important to point out that I am not using a camera, so I am using an arbitrary image directly. Could you please shed some light on my doubt?

Thank you in advance.

For an initial test, I saved every pixel value in a text file.
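As a sketch of that text-file approach: the formatting step below is pure Python; in practice the pixel tuples would come from an image library such as PIL (`Image.open(...).getdata()`) or OpenCV, and a small synthetic image stands in for them here:

```python
# Sketch: save every pixel of an image as one "R G B" line in a text file.
def pixels_to_lines(pixels):
    """Format an iterable of (R, G, B) tuples, one 'R G B' line per pixel."""
    return [f"{r} {g} {b}" for r, g, b in pixels]

# Synthetic 2x2 test image: black, white, red, blue.
pixels = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)]
with open("pixels.txt", "w") as f:
    f.write("\n".join(pixels_to_lines(pixels)))
```

The resulting file can then be parsed on the host, or its values pasted into a C array for a first test without a camera.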

GRATT.2
ST Employee

Hello @iker_arrizabalaga

 

To validate the model deployed on your target, you can either dump the image being preprocessed, to ensure the preprocessing operations are running properly and the input of the NN is correct, or inject an image at the different preprocessing steps. Both methods can be done with GDB.

The first thing you need to do is set breakpoints. Here are the positions where the breakpoints have to be set for the different steps:

- camera_capture_buffer: after the Camera_GetNextReadyFrame() function call
- rescaled_image_buffer: after the ImageResize() function call
- nn_input_buffer: after the PixelValueConversion() function call

Dump method: 

- When the debugger hits the breakpoint, use the GDB dump function to download the buffer as a binary file. Take care to set the right buffer size. 

Ex. : 

dump binary memory image.bin App_Config.camera_capture_buffer App_Config.camera_capture_buffer+CAM_FRAME_BUFFER_SIZE

- You can visualize the downloaded binary file using a raw pixel viewer website
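If you prefer decoding the dump locally rather than in a website, a short script can expand the raw pixels. This sketch assumes the capture buffer holds little-endian RGB565 pixels; check your camera/DCMI configuration, as the actual format may differ:

```python
import struct

def rgb565_to_rgb888(pixel):
    """Expand one 16-bit RGB565 value into an 8-bit (R, G, B) tuple."""
    r = (pixel >> 11) & 0x1F
    g = (pixel >> 5) & 0x3F
    b = pixel & 0x1F
    # Rescale each channel to the full 0-255 range.
    return ((r * 255) // 31, (g * 255) // 63, (b * 255) // 31)

def decode_dump(data):
    """Decode a dumped binary buffer of little-endian RGB565 pixels."""
    pixels = struct.unpack("<%dH" % (len(data) // 2), data)
    return [rgb565_to_rgb888(p) for p in pixels]

# Example: two pixels, pure red (0xF800) then pure white (0xFFFF).
print(decode_dump(b"\x00\xf8\xff\xff"))  # [(255, 0, 0), (255, 255, 255)]
```

The decoded triples can then be reshaped to the frame dimensions and displayed with any image library.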

Inject method: 

- First you need to generate an image to inject into the STM32. You can turn a standard-format image (JPG, PNG, ...) into a binary image file using Python libraries (PIL, OpenCV, ...) or GIMP.

- When the debugger hits the breakpoint, use the GDB restore function to upload the binary file in the buffer. 

Ex. : 

restore image.bin binary App_Config.camera_capture_buffer 0 0
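As a sketch of the generation step, the inverse conversion can build a binary file ready for the restore command. Little-endian RGB565 is again assumed for camera_capture_buffer; the pixels here are synthetic, whereas in practice they would come from PIL or OpenCV as suggested above:

```python
import struct

def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into a 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def write_image_bin(pixels, path):
    """Write (R, G, B) tuples as a little-endian RGB565 binary file."""
    with open(path, "wb") as f:
        for r, g, b in pixels:
            f.write(struct.pack("<H", rgb888_to_rgb565(r, g, b)))

# Example: a solid-red 4x4 test frame.
write_image_bin([(255, 0, 0)] * 16, "image.bin")
```

Make sure the generated file matches the buffer's pixel format and size before restoring it, or the downstream preprocessing steps will misinterpret the data.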

 

Guillaume