X-CUBE-AI Cannot generate code for Tensor Flow Lite Model when using external FLASH and RAM
2023-08-20 07:16 PM
I am prototyping a neural net model on the STM32U5A9xx development kit, which comes with 64MB of external NOR Flash and 64MB of external PSRAM. I plan on using both in memory-mapped mode for my project.
I loaded my model into X-CUBE-AI (v8.1.0) and analyzed it. The ROM and RAM usage were above the internal Flash and RAM limits. I checked the boxes to use external Flash (at 0x90000000) and external RAM (at 0xA0000000), but the tool continued to display an error saying the development kit did not meet the requirements. When I asked it to generate the code, no middleware was generated or linked.
I cannot tweak the model -- it is a 32-bit floating-point model and must keep all of its layers.
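For context, this is the kind of placement I expect the generated code (or my wrapper around it) to need. Everything below is my own sketch: the section names, buffer sizes, and the assumption that weights go to the memory-mapped NOR Flash and activations to the PSRAM are mine, not anything X-CUBE-AI has produced for me so far.

```c
/* Sketch: placing network buffers in the external memories (my assumptions).
 * A matching linker-script MEMORY/SECTIONS entry would map:
 *   .ext_flash (read-only weights) -> memory-mapped NOR Flash at 0x90000000
 *   .ext_ram   (activations)      -> memory-mapped PSRAM     at 0xA0000000
 */
#include <stdint.h>

/* Hypothetical sizes for a float32 model that exceeds internal memory. */
#define WEIGHTS_BYTES      (4u * 1024u * 1024u)  /* 4 MB of weights      */
#define ACTIVATIONS_BYTES  (3u * 1024u * 1024u)  /* 3 MB of activations  */

/* Weights are const, so they can live in read-only external Flash. */
__attribute__((section(".ext_flash")))
static const uint8_t network_weights[WEIGHTS_BYTES];

/* Activations are scratch memory, so they can go to external PSRAM. */
__attribute__((section(".ext_ram")))
static uint8_t network_activations[ACTIVATIONS_BYTES];
```

On the linker side I would then expect to add `MEMORY` regions for the two external devices (origins 0x90000000 and 0xA0000000) and route `.ext_flash` and `.ext_ram` into them, but I still need the tool to actually emit the middleware before any of this applies.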
I see that there are some examples in the FP-AI-VISION1 (V3.1.0) function pack, but they do not explain how to get past the errors I am seeing in X-CUBE-AI.
Does anybody have any clues or workarounds on how to get the GUI to generate the middleware firmware?
Cheers
- Labels:
- STM32U5 series