I can train the CIFAR-10 dataset with PyTorch up to 80% accuracy, but it is only about 30% accurate in the X-CUBE-AI desktop tool. Why?

wg
Associate II
 
jean-michel.d
ST Employee

Hello,

To compute the accuracy with the X-CUBE-AI desktop tool, do you provide your own data or do you use random data?

Is the ONNX model exported from PyTorch quantized or not?

Regards,

Jean-Michel

Thank you very much for your answer. I used self-generated data, and the model was not quantized. Here are my model and the data.

wg
Associate II

Here are my model and the data.

After I quantized the model, the accuracy of the desktop test is still 30.30%. Here is my quantized model.

I have solved this problem. There were some errors in my data. Thank you very much.

jean-michel.d
ST Employee

Thanks for your data and models (and the status update). After analysis, I did indeed detect an issue in the cifar10_test_label.npy file: the one-hot encoding representation was wrong (the class index was left-shifted by one). I don't know whether the "MCU_AI-masterCIFAR10etinynet_epoch272_params.oonx" model was correctly trained, but with the provided data I also got poor accuracy when running the ONNX file directly (outside X-CUBE-AI). The same applies to the "CIFAR10mobilenetSlim_quant_static.onnx" file.
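For reference, a quick sanity check along these lines can expose such a label problem; comparing against torchvision's CIFAR10 test labels is an assumption about how the file was originally generated:

```python
import numpy as np
from torchvision.datasets import CIFAR10

one_hot = np.load("cifar10_test_label.npy")            # expected shape (N, 10)
decoded = one_hot.argmax(axis=1)                        # class index per sample
reference = np.array(CIFAR10(root="./data", train=False,
                             download=True).targets)    # ground-truth 0..9
print("fraction of matching labels:", (decoded == reference).mean())  # should be ~1.0
```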

However, I noted that the model was quantized with the option per_channel=False; it is recommended to use per_channel=True for better precision.
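As a minimal sketch, static quantization with per-channel weights can be done with ONNX Runtime roughly as below; the file names, the input tensor name "input", and the layout of the calibration .npy file are assumptions to adapt to your model:

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantType,
                                      quantize_static)

class Cifar10CalibReader(CalibrationDataReader):
    """Feeds preprocessed CIFAR-10 images, one by one, to the calibrator."""
    def __init__(self, npy_path, input_name="input"):
        # Calibration images must use the SAME preprocessing as training.
        self.data = np.load(npy_path).astype(np.float32)   # (N, 3, 32, 32)
        self.input_name = input_name
        self.index = 0

    def get_next(self):
        if self.index >= len(self.data):
            return None
        feed = {self.input_name: self.data[self.index:self.index + 1]}
        self.index += 1
        return feed

quantize_static(
    "CIFAR10mobilenetSlim.onnx",                   # float ONNX model (assumed name)
    "CIFAR10mobilenetSlim_quant_static.onnx",      # quantized output
    Cifar10CalibReader("cifar10_calib_data.npy"),  # assumed calibration file
    per_channel=True,                              # per-channel weight scales
    weight_type=QuantType.QInt8,
)
```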

Best regards,

Jean-Michel

Thank you very much for your answer; I found the problem. When generating the .npy data, I did not apply the same transform as in the training process, and I also did not apply the same transform when generating the calibration data for static quantization. That is why the precision was so low.
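A minimal sketch of regenerating the test data consistently, assuming the training pipeline used the usual CIFAR-10 normalization statistics (substitute the exact mean/std and any resizing from your own training; the data file name is also an assumption):

```python
import numpy as np
import torch
from torchvision import datasets, transforms

# Same transform as during training (mean/std below are the common CIFAR-10
# statistics and are an assumption; use the exact values from your pipeline).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2470, 0.2435, 0.2616)),
])

test_set = datasets.CIFAR10(root="./data", train=False,
                            download=True, transform=transform)
loader = torch.utils.data.DataLoader(test_set, batch_size=len(test_set))

images, labels = next(iter(loader))                      # (N, 3, 32, 32), (N,)
np.save("cifar10_test_data.npy", images.numpy())
np.save("cifar10_test_label.npy",
        np.eye(10, dtype=np.float32)[labels.numpy()])    # correct one-hot rows
```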

I would like to know how to modify the code if I want to add a camera to collect images for object detection in the future. The generated code seems a little difficult to read; is there any demo I can refer to? I use the STM32F746G-Discovery.