Where can I get information about which layers X-Cube-AI supports?

Z-YF
Associate

I am trying to deploy a model onto an STM32N6570-DK board, and I encountered warnings that X-Cube-AI cannot support unsigned integer weights, nor certain layers such as linear_dynamic and ATen. I am wondering where I can find information about which layers and data types X-Cube-AI actually supports, so that I can fix my models.

1 ACCEPTED SOLUTION

hamitiya
ST Employee

Hello,

You can find more information here:

ST Neural-ART NPU - Supported operators and limitations

 

Best regards,

Yanis


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.


2 REPLIES 2
Z-YF
Associate

Hi.

Thanks for the response and your help. It turns out that I have to modify my quantization program, since PyTorch was skipping certain layers instead of converting them to int8.

Thanks again for your time and suggestions. :)
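For other readers hitting the same linear_dynamic warning: a minimal sketch of the kind of change involved, assuming the model was originally quantized with PyTorch's dynamic quantization (which leaves activations in fp32 and produces dynamic ops) and is switched to eager-mode static quantization so Linear layers are fully lowered to int8. The toy network and the "fbgemm" backend choice here are illustrative, not taken from the original poster's code:

```python
import torch
import torch.nn as nn

# Toy stand-in for the real network (illustrative only)
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # fp32 -> int8 entry point
        self.fc = nn.Linear(8, 4)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()  # int8 -> fp32 exit point

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc(x))
        return self.dequant(x)

model = TinyNet().eval()

# Static (not dynamic) quantization: both weights and activations become int8,
# so the exporter sees a quantized Linear instead of skipping it or emitting
# a dynamic-quantization op.
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)

# Calibration pass with representative data to collect activation ranges
prepared(torch.randn(1, 8))

quantized = torch.ao.quantization.convert(prepared)
print(type(quantized.fc))  # a quantized Linear module, not nn.Linear
```

Whether the resulting int8 graph is then accepted by the Neural-ART compiler still depends on the supported-operators list linked in the accepted solution.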