Where can I get information about which layers X-Cube-AI supports?
2025-04-11 1:59 AM
I am trying to deploy a model onto an STM32N6570-DK board, and I ran into warnings that X-Cube-AI does not support unsigned integer weights, nor certain layers such as linear_dynamic and ATen operators. I am wondering where I can find information about which layers and data types X-Cube-AI actually supports, so that I can fix my models.
Labels:
- ST Edge AI Core
- STM32CubeAI
Accepted Solutions
2025-04-11 2:15 AM
Hello,
You can find more information here:
ST Neural-ART NPU - Supported operators and limitations
Best regards,
Yanis
In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
2025-04-11 2:53 AM
Hi,
Thanks for the response and your help. It turns out I need to modify my quantization script, since torch skips certain layers instead of converting them to int8.
Thanks again for your time and suggestions. :)
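As a side note on the "unsigned int weight" warning: this typically means the weights were quantized to uint8, while int8 is expected. In standard affine quantization the two representations are interchangeable by shifting both the values and the zero point by 128, so the conversion is lossless. A minimal, dependency-free sketch (function names are my own, not part of X-Cube-AI or torch):

```python
# Affine quantization: real_value = scale * (q - zero_point).
# Subtracting 128 from both q and zero_point leaves that expression
# unchanged, so uint8 -> int8 conversion loses no precision.

def uint8_to_int8(q_uint8, zero_point):
    """Shift uint8 quantized values (0..255) to int8 range (-128..127)."""
    return [q - 128 for q in q_uint8], zero_point - 128

def dequantize(q, scale, zero_point):
    """Recover the real values from quantized ones."""
    return [scale * (v - zero_point) for v in q]

# The dequantized values match before and after the conversion.
weights_u8 = [0, 64, 128, 200, 255]
scale, zp_u8 = 0.05, 128
weights_i8, zp_i8 = uint8_to_int8(weights_u8, zp_u8)
assert dequantize(weights_u8, scale, zp_u8) == dequantize(weights_i8, scale, zp_i8)
```

In torch, the equivalent fix is usually to select a qconfig whose weight observer uses `torch.qint8` rather than `torch.quint8`, so the exporter never sees unsigned weights in the first place.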
