I am using X-CUBE-AI to generate C code for my ML model, but the resulting constant data (the weights) is larger than my flash. I'm using an F7 with 2 MB of flash.


How do I run larger ML models (greater than 10 MB)? Is interfacing external ROM over QSPI and memory-mapping it sufficient on its own?

Can I use an SD card instead? Can the card be memory-mapped the way internal flash is?

Can I write the weights to the memory card from my PC and have the ML model issue an SDMMC request every time it needs to access some data?


External QSPI NOR flash would give you up to 256 MB of space that can be memory-mapped into the CM7's address space.

An SD card via SDMMC cannot be memory-mapped, and neither can NAND, but you could pull the content into SDRAM and use it from there.

Associate II

Thanks for your answer. The ML model is generated partially in binary format by X-CUBE-AI, and I don't know its memory-access pattern. The model currently requires a pointer to the memory where the weights are stored; what it does with that pointer is unknown. If I consider pulling content into SDRAM, how would I know which portion of the 10 MB of weight data I need to bring in?