X-CUBE-AI cannot generate code for TensorFlow Lite model when using external FLASH and RAM


I am prototyping a neural-network model on the STM32U5A9xx development kit, which comes with 64 MB of external NOR flash and 64 MB of external PSRAM. I plan to use both in memory-mapped mode for my project.

I loaded my model into X-CUBE-AI (v8.1.0) and analyzed it. The ROM and RAM usage exceeded the internal flash and RAM limits, so I checked the boxes to use external FLASH (at 0x90000000) and external RAM (at 0xA0000000). But the tool continued to display the error that the development kit did not meet the requirements, and when I asked it to generate the code, no middleware was generated or linked.
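For context, my plan on the linker side was a sketch like the one below: add the two memory-mapped external regions to the GNU linker script and route dedicated sections into them. The region names, internal memory sizes, and section names here are my own assumptions, not anything X-CUBE-AI generated; check your part's datasheet for the real internal sizes.

```ld
MEMORY
{
  /* Internal memories -- sizes are assumptions, verify against the datasheet */
  FLASH      (rx)  : ORIGIN = 0x08000000, LENGTH = 4096K
  RAM        (rw)  : ORIGIN = 0x20000000, LENGTH = 2496K

  /* External memories in memory-mapped mode, at the addresses set in X-CUBE-AI */
  EXT_FLASH  (rx)  : ORIGIN = 0x90000000, LENGTH = 64M
  EXT_RAM    (rw)  : ORIGIN = 0xA0000000, LENGTH = 64M
}

SECTIONS
{
  /* Network weights placed in external NOR flash */
  .ext_flash :          { KEEP(*(.ext_flash*)) } > EXT_FLASH

  /* Activation buffer in external PSRAM; NOLOAD so it is not programmed */
  .ext_ram (NOLOAD) :   { *(.ext_ram*) } > EXT_RAM
}
```

With regions like these, a buffer in the application code can be steered into PSRAM with a GCC attribute such as `__attribute__((section(".ext_ram")))`, assuming the OCTOSPI peripherals have been put in memory-mapped mode before the buffer is touched. But none of this helps if the tool refuses to generate the middleware in the first place, which is my actual problem.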

I cannot tweak the model -- it is a 32-bit floating-point model and must keep all of its layers.

I see that there are some examples in the "FP-AI-VISION1_V3.1.0" function pack, but they do not explain HOW TO GET PAST the errors I am seeing in X-CUBE-AI.

Does anybody have any clues or workarounds for getting the GUI to generate the middleware firmware?