PyTorch to STM32H7: Is there a simpler workflow?

Ernsy
Associate

Hello,
I recently deployed multiple individual PyTorch models onto my STM32H7A3ZI-Q. My workflow ended up being:

1. Convert from .pt -> .onnx -> .tf -> .tflite (quantized)
2. Build an X-CUBE-AI project via CubeMX
3. Swap models in the same project using the ST Edge AI Core CLI
4. Write scripts to semi-automate this flow

It works, but setting this up involved quite a few manual steps. 
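For context, the conversion step ended up as a script roughly like the sketch below. The checkpoint path, input shape, and calibration generator are placeholders for your own model:

import subprocess

import numpy as np
import tensorflow as tf
import torch

# 1a. PyTorch -> ONNX (assumes the whole module was saved with torch.save;
#     on torch >= 2.6 pass weights_only=False to torch.load)
model = torch.load("model.pt", map_location="cpu").eval()
dummy = torch.randn(1, 3, 96, 96)  # match your model's input shape
torch.onnx.export(model, dummy, "model.onnx", opset_version=13)

# 1b. ONNX -> TensorFlow SavedModel; onnx2tf also transposes NCHW -> NHWC
subprocess.run(["onnx2tf", "-i", "model.onnx", "-o", "saved_model"], check=True)

# 1c. SavedModel -> fully int8 .tflite via post-training quantization
def representative_data():
    for _ in range(100):  # placeholder calibration samples (NHWC after onnx2tf)
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())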

Is this the most effective way to approach this? I'm curious how others handle it.

Regards,
Ernest

Julian E.
ST Employee

Hi @Ernsy,

 

Why do you convert your model to tf?

We support ONNX (and quantized ONNX QDQ models):

https://stedgeai-dc.st.com/assets/embedded-docs/supported_ops_onnx.html
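For example, here is a minimal static-quantization sketch with onnxruntime that produces a QDQ model straight from your ONNX export (the calibration reader and input shape are placeholders):

import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class RandomReader(CalibrationDataReader):
    """Feeds a few random batches as calibration data (placeholder only;
    use real samples from your dataset in practice)."""
    def __init__(self, input_name="input", n=32):
        self.batches = iter(
            [{input_name: np.random.rand(1, 3, 96, 96).astype(np.float32)}
             for _ in range(n)]
        )

    def get_next(self):
        return next(self.batches, None)

quantize_static(
    "model.onnx",        # float model exported from PyTorch
    "model_qdq.onnx",    # quantized output in QDQ format
    RandomReader(),
    quant_format=QuantFormat.QDQ,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)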

 

Have a good day,

Julian

 


Ernsy
Associate

Hey Julian,

Initially I used .onnx and quantised it as per this guideline.

My reasoning for using tf was:

1. Keras models seem popular in the embedded space (incl. in your model zoo). Using the tf quant library allowed for simple model swaps without changing the semi-automated workflow.
2. Quantisation via the tf library produced faster and smaller models in my local benchmarking on the STM32.
3. Since .tflite seems to be the most popular format, I thought sticking with that ecosystem made sense.

I used the onnx2tf library to get a tf model, from which I then produced my quantised .tflite.
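For the benchmarking, I just ran ST Edge AI Core's analyze on each candidate and compared the reported footprint and complexity figures, roughly like this (filenames are placeholders; I left out report parsing since the format may change between releases):

import subprocess

# One candidate per quantization path (placeholder filenames)
for candidate in ["model_int8.tflite", "model_qdq.onnx"]:
    subprocess.run(
        ["stedgeai", "analyze", "--model", candidate, "--target", "stm32h7"],
        check=True,
    )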


Julian E.
ST Employee

Hi @Ernsy,

 

Thanks for the update.

 

You are right, our model zoo currently contains only TensorFlow Lite models, but that changes with the new release planned for the end of the month. We have doubled the number of models we propose, now including PyTorch models.

 

We have a lot of customers working with PyTorch. If you get better results with TensorFlow quantization you can stick with it, but feel free to use PyTorch; there is no rule that you must use TensorFlow.

 

Other than that, I don't see much else you could simplify.

 

Have a good day,

Julian

 

 



Ernsy
Associate

Hey @Julian E.,

So from next month PyTorch models will be supported directly? That's great news.

Regardless, thanks for your input.

Kind regards

Julian E.
ST Employee

Just to make sure we are aligned:

 

The ST Edge AI Core (the compiler) already supports ONNX and QDQ ONNX models.

It is only the model zoo that will now also propose ONNX models (and all the model zoo scripts were updated to support them).

 

So, with X-CUBE-AI or the ST Developer Cloud, ONNX models are already supported.
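That means a QDQ ONNX model can go straight into the generate step that feeds your CubeMX project. A minimal sketch, assuming the stedgeai CLI is on your PATH and using a placeholder output directory:

import subprocess

# Emit the C model files that the X-CUBE-AI / CubeMX project consumes
subprocess.run(
    ["stedgeai", "generate", "--model", "model_qdq.onnx",
     "--target", "stm32h7", "--output", "stm32ai_output"],
    check=True,
)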

 

The only thing is that the getting-started applications, where you can deploy various models from the model zoo, may not yet have been updated to let you use PyTorch models. For these GS applications, the preprocessing and postprocessing are needed for each model.

I am talking about this, for example: GitHub - STMicroelectronics/STM32N6-GettingStarted-ObjectDetection: an AI software application package demonstrating a simple implementation of an object detection use case on the STM32N6 product.

 

We will need to wait for them to be updated and see what they support.

 

Have a good day,

Julian

