Accuracy of STM32ai modelzoo for STM32N6570-DK

JaeLee
Associate II

I've tried several models provided in the STM32 AI model zoo, and I'm wondering why some of them do not show acceptable accuracy. My test results are as follows.

efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnetv2b0_224/efficientnetv2b0_224_qdq_int8.onnx -> Not Good

mobilenetv2/Public_pretrainedmodel_public_dataset/ImageNet/mobilenetv2_a100_224/mobilenetv2_a100_224_qdq_int8.onnx -> Good

mobilenetv2_pt/Public_pretrainedmodel_public_dataset/ImageNet/mobilenetv2_a100_pt_224/mobilenetv2_a100_pt_224_qdq_int8.onnx -> Not Good

resnet50v2/Public_pretrainedmodel_public_dataset/ImageNet/resnet50v2_224/resnet50v2_224_qdq_int8.onnx -> Good

 

My testing environment

- stm32ai-modelzoo : 4.0

- stm32ai-modelzoo-services : 4.0

- STEdgeAI : 3.0

- STM32CubeIDE : 2.0.0

- Windows 11

To generate the network file and bin file, I used the Python script in stm32ai-modelzoo-services/image_classification as follows:

python stm32ai_main.py --config-path=.\config_file_examples --config-name=deployment_n6_config.yaml

 

1 ACCEPTED SOLUTION


OK, I see.
The shared ONNX models exported from PyTorch (with _pt in their name) don't have a softmax at the end, so the output is not a probability.
This doesn't change the ranking, but you can't interpret the raw output values as probabilities.
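For anyone hitting the same symptom: applying a softmax to the raw output restores a proper probability distribution without changing the predicted class. A minimal numpy sketch (the logit values here are made up for illustration):

```python
import numpy as np

# Hypothetical raw logits, as they might come from a *_pt model
# whose exported graph has no final softmax layer
logits = np.array([2.1, 8.4, -1.3, 0.7])

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)

# The ranking is unchanged: argmax of logits == argmax of probabilities
assert np.argmax(logits) == np.argmax(probs)
# But only probs lies in [0, 1] and sums to 1
```

This is why the raw scores can read as ">1000%" or "<-2000%" when formatted as percentages: they are logits, not probabilities.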


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.


Laurent FOLLIOT
ST Employee

Hello,
Could you share the 4 .yaml you are using for these deployments please?
Could you also clarify what "good" and "not good" mean, keeping in mind that these models are trained on ImageNet, so accuracy can vary widely depending on the class to be recognized?

Thanks,



Hello @Laurent FOLLIOT 

Thanks for your support.

As you might know, the inference result keeps changing when using the camera module, so I decided to embed an image file to test the models. I used two images for testing: one is a banana and the other is a daisy; both are attached to this reply.

The inference code is from https://github.com/STMicroelectronics/STM32N6-GettingStarted-ImageClassification (v2.2.0), and to generate the network.c, *.h, and *.raw files I used STM32AI-MODELZOO-SERVICES v4.0.


I've attached my yaml file; only model_path changes according to the ONNX file I want to use.

I've tested 5 models and the results are as follows. All of them are ImageNet models.

 

Model family     Model file                                Banana     Daisy
efficientnetv2   efficientnetv2b0_224_qdq_int8.onnx        88%        72%
mobilenetv2_pt   mobilenetv2_a100_pt_224_qdq_int8.onnx     >1000%     >1000%
mobilenetv2      mobilenetv2_a100_224_qdq_int8.onnx        100%       96%
resnet50v2       resnet50v2_224_qdq_int8.onnx              100%       96%
mobilenetv4      mobilenetv4small_pt_224_qdq_int8.onnx     <-2000%    <-1000%
    
david13
Associate

Some models in the STM32 AI Model Zoo can show lower accuracy after INT8 quantization, especially depending on how they were trained or calibrated. Architectures like EfficientNetV2 are often more sensitive to quantization compared to models such as MobileNetV2 or ResNet50, which tend to be more quantization-friendly.

You may want to check whether the models that performed poorly were quantized with a different calibration dataset or preprocessing pipeline than the one you're using for testing. Even small differences in input normalization, image size, or preprocessing steps can noticeably affect accuracy.
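To illustrate the point about preprocessing: the two common ImageNet input schemes map the same pixel to very different network inputs. A small numpy sketch (the mean/std values are the standard torchvision red-channel constants, used here purely for illustration):

```python
import numpy as np

# Toy 8-bit pixel values; a real input would be a 224x224x3 image
pixels = np.array([0, 128, 255], dtype=np.float32)

# TF/Keras-style scaling to [-1, 1] (used by many Keras pretrained models)
tf_style = pixels / 127.5 - 1.0

# PyTorch/torchvision-style normalization with ImageNet statistics
# (red-channel mean/std shown; real code normalizes per channel)
mean, std = 0.485, 0.229
pt_style = (pixels / 255.0 - mean) / std

# The same black pixel (0) becomes -1.0 in one scheme and about -2.12
# in the other, so feeding a model the wrong scheme wrecks accuracy.
```

If a model was quantized with one scheme but the firmware preprocesses with the other, poor results like those above are expected.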

It could also help to test the FP32 version of the same models first to confirm the baseline accuracy, then compare it with the INT8 results to see how much accuracy is lost during quantization.
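To get an intuition for the FP32-to-INT8 gap, here is a minimal numpy sketch of symmetric per-tensor int8 quantization. It is a simplification of what real quantization tools do (it ignores per-channel scales, zero points, and calibration data), but it shows where the rounding error comes from:

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor int8 quantization: map max |x| to 127
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

# Deterministic toy tensor standing in for a layer's weights
x = np.linspace(-3.0, 3.0, 1000).astype(np.float32)

q, scale = quantize_int8(x)
x_hat = q.astype(np.float32) * scale   # dequantize
err = np.max(np.abs(x - x_hat))
# Round-to-nearest bounds the error by half a quantization step
```

Comparing the FP32 and INT8 model outputs on the same preprocessed input (e.g. with onnxruntime on a PC) isolates quantization loss from deployment issues.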

Laurent FOLLIOT
ST Employee

Hello,
Yes, the dataset used for quantization can have a large impact. We internally used a small subset; the model can be quantized differently with the quantization service, starting from the pre-trained model, for instance.
I'm not sure I understand your table with accuracy percentages based on only one image. Did you use the camera pointed at one printed image?



Hi,

I know that quantization can degrade classification performance, but the accuracy should never be over 100% or under 0%.
For my testing, I directly copied the BMP image data into the input buffer (stai_ptr nn_in) before running inference; I didn't use streaming data from the camera module.
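For reference, embedding an image this way can be automated. Below is a hypothetical Python helper that emits a C array like input_image.h; the array name is illustrative, and the byte layout must match whatever input format the model expects (e.g. RGB888, 224x224):

```python
# Hypothetical helper: turn raw image bytes into a C header
# suitable for compiling into the firmware as a test input.
def to_c_header(data: bytes, name: str = "input_image") -> str:
    body = ",".join(str(b) for b in data)
    return f"const unsigned char {name}[{len(data)}] = {{{body}}};\n"

# Usage: a single red RGB888 pixel, for illustration
header = to_c_header(bytes([255, 0, 0]))
```

The generated string can then be written to a .h file and included in main.c in place of the camera frame buffer.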

Laurent FOLLIOT
ST Employee

Hi,
Yes, fully agree.
Where do these numbers come from? A log file, the terminal, or printed on the screen of the device?
Thanks



I've attached several files for references.

- STM32N6570-DK.zip: network.c, .bin, etc. for mobilenetv4small_pt_224_qdq_int8.onnx

- main.c: modified version

- input_image.h : bitmap image data

 

Thanks

IMG_4054.JPG
