Problem deploying ST SSD Mobilenet v1 on STM32N6

mtv
Associate II

Hello,

I have an STM32N6570-DK discovery kit and I am trying to follow the instructions in the ST Model Zoo Services to train, evaluate and deploy an object detection model that detects cars. I downloaded the Pascal_VOC_2012 dataset and did some custom data cleanup to keep only the images containing vehicles (classes [bicycle, bus, car, motorbike]); a sketch of this step is included further below. On the filtered dataset I used the predefined scripts in the ST model zoo to train st_ssd_mobilenet_v1. This went well and the evaluation gave good accuracy results.

Then I used the scripts to make predictions on some images. The prediction options "host" and "stedgeai_host" also went well. When I try the "stedgeai_n6" option, the compilation and programming of the board work fine, but the inference freezes. After some debugging I found out that there is a software epoch that performs a Tile operation, and this is the one that freezes. In network_generate_report.txt I noticed some question marks just before this Tile epoch:

epoch 39 HW
epoch 40 -SW- ( Conv )
epoch 41 ??
epoch 42 ??
epoch 43 -SW- ( Tile )
epoch 44 -SW- ( QuantizeLinear )
epoch 45 HW

In the generated network.c code this translates to epoch 40 being the Conv and epoch 41 the Tile, which I think is not how it should be.

So, is this something that you have seen before?

I am using X-Cube-AI 10.0.0 with STEdgeAI 2.0.0 and STM32Cube_FW_N6_V1.1.1
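For reference, the dataset cleanup I mentioned was along these lines (a minimal sketch, not my exact script; the paths and output layout are illustrative):

import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

# Keep only the VOC2012 images whose annotations contain at least one vehicle class
VOC_ROOT = Path("VOCdevkit/VOC2012")   # standard Pascal VOC 2012 layout
OUT_ROOT = Path("VOC2012_vehicles")    # filtered copy used for training
KEEP = {"bicycle", "bus", "car", "motorbike"}

(OUT_ROOT / "JPEGImages").mkdir(parents=True, exist_ok=True)
(OUT_ROOT / "Annotations").mkdir(parents=True, exist_ok=True)

for ann in (VOC_ROOT / "Annotations").glob("*.xml"):
    root = ET.parse(ann).getroot()
    names = {obj.findtext("name") for obj in root.iter("object")}
    if names & KEEP:
        img = VOC_ROOT / "JPEGImages" / (ann.stem + ".jpg")
        shutil.copy(img, OUT_ROOT / "JPEGImages" / img.name)
        shutil.copy(ann, OUT_ROOT / "Annotations" / ann.name)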

Through this post I am taking the opportunity to ask some additional questions:

1. In all the .yaml scripts in the ST Zoo Services, the output type in the quantization field is float (the exact field I mean is shown in the snippet after question 2). However, when I run prediction with "stedgeai_host" and "stedgeai_n6", they complain that the quantized model must have int8 output. The same happens when I use some of the predefined models from the model zoo. Does that mean that, to use them, I have to perform a quantization on the relevant .h5 model myself?

2. In the training README I read that I can train mobilenet with pretrained_weights = imagenet, but this cannot be done with any of the YOLO models: the pretrained_weights option is not recognized. However, the documentation mentions that for tiny_yolo the pretrained weights come from COCO. So is there a way to use pretrained weights for the YOLO models as well?
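For reference (question 1), here is the quantization section as it appears in the predefined user_config.yaml files (copied from the model zoo configs; field names may vary slightly between versions). The commented line is the one my question is about:

quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: uint8
  quantization_output_type: float   # predefined value; the stedgeai targets seem to expect int8 here
  export_dir: quantized_models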

Thanks in advance for your help

 

 

4 REPLIES
Julian E.
ST Employee

Hello @mtv,

 

Edited answer with the correct information given by Laurent below:

 

For your second question:

Only the SSD models have the option to be used with pretrained_weights, and it is available with imagenet only.

For all other models, you can import the .h5 model using model_path (in the general section, at the beginning of the config) and retrain it, but only with the same number of classes. Most of them are trained on either COCO person (1 class) or the 80 COCO classes, I believe.

For yolov8, we provide only the COCO person weights, but on Ultralytics, where the original models come from, you can find versions trained on the 80 COCO classes.
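As a sketch, the model_path route looks roughly like this (the path is just an example, and the imported .h5 must keep its number of classes; contrast it with the pretrained_weights option, which only the SSD models accept):

operation_mode: training

general:
  model_path: ./pretrained/tiny_yolo_v2_coco_person.h5   # example: imported .h5 to retrain as-is

# whereas for the SSD models only:
# training:
#   model:
#     name: st_ssd_mobilenet_v1
#     pretrained_weights: imagenet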

 

As Laurent also said: full transfer learning is not available yet for Object Detection; we are working on it.

 

Have a good day,

Julian

 

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hello,

Thank you for your response. In the .zip file you will find the quantized model along with the st_ai_output of the network generation process.

Regarding the pretraining of tiny yolov2, I tried to use the option but it fails with an "unrecognized option" message. As far as I can see in the source code, the pretrained_weights option is not expected.

(screenshot attachment: mtv_0-1748872334311.png)

Also, could you please comment on my first question?

Thanks in advance for your efforts

 

 

Laurent FOLLIOT
ST Employee

Hello,
Yes, only imagenet on SSD models can be used as pretrained weights at this time.
No issue on my side evaluating the st_ssd on the N6. I used the provided evaluation_n6_config.yaml file and changed the model_path, the model_type to st_ssd_mobilenet_v1, and the output_type to float32. It should work.
I don't understand the point about quantization, as there is no quantization field when evaluating on the device and, as tested just above, the output of st_ssd is float as well.
Full transfer learning is not available yet for OD; we are working on it.
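Concretely, the three edits amount to something like this (shown flat here for brevity; each key sits in its usual section of the provided evaluation_n6_config.yaml, and the exact layout can differ between model zoo versions):

model_path: <path to your quantized st_ssd .tflite>
model_type: st_ssd_mobilenet_v1
output_type: float32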

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hello,

Thank you for the answer. It helped me find the problem in my setup. In STEdgeAI 2.1.0 the Tile function that was causing the problem has been updated, so now I can perform the prediction that I wanted.
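In case it helps others: after updating, I simply reran the same flow so that the network was regenerated with the new tool. The model zoo scripts invoke the generate step for you; on the command line it is something like "stedgeai generate --model <quantized_model>.tflite --target stm32n6 --st-neural-art".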

Thanks