
How to understand the output of X-Cube-AI after analyzing a model?

Z-YF
Associate II

I am using X-Cube-AI to deploy a neural network onto an STM32N6570-DK board. I selected the "n6-allmems-O3" option, and after analyzing the model it reports two figures, labeled used RAM and used Flash. Since I selected the "allmems" option, I am wondering whether the RAM figure refers to external or internal RAM, and whether the Flash figure refers to internal or external Flash.

1 ACCEPTED SOLUTION

Julian E.
ST Employee

Hello @Z-YF,

 

Here is the doc:

  1. https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_neural_art_compiler.html#ref_st_neural_art_option
  2. https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_neural_art_compiler.html#ref_aton_compiler_mempools

 

Basically, the memory pool describes which memories you authorize the compiler to use when allocating the weights and activations of your model.

When you use allmems, you authorize the compiler to use all of them; it will try to use the fastest memory first and fall back to slower ones, so that you get the best inference time.
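As a rough mental model, a memory pool file lists the memories the compiler may allocate into, each with an address range and access rights. The sketch below only illustrates the idea — the field names and addresses are illustrative, not the actual .mpool schema; check the mempools reference linked above for the real format:

```json
{
    "mempools": [
        { "name": "AXISRAM",     "offset": "0x34000000", "size": "1MB",  "rights": "read-write" },
        { "name": "xSPI2_FLASH", "offset": "0x70000000", "size": "64MB", "rights": "read-only" }
    ]
}
```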

 

This can also be useful:

https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_programming_model.html

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.


4 REPLIES

Hi,

I'm just checking: if I want X-Cube-AI to automatically distribute the model across the external Flash and the internal RAM of the STM32N6570-DK board, I need to configure the AI core's JSON file with the real addresses of the external and internal memories, right?

Hello @Z-YF,

 

Yes, I think the best way is to edit user_neuralart.json and add an entry, for example:

  1. Copy one of the existing entries
  2. Give it a name: CUSTOM_MPOOLS in my case
  3. Create an mpool file and set its path on the "memory_pool" line
  4. Edit other options if you want
  5. Save
        "CUSTOM_MPOOLS": {
            "memory_pool": "./my_mpools/MY_CUSTOM_MPOOL.mpool",
            "memory_desc": "./my_mdescs/stm32n6.mdesc",
            "options": "--optimization 3 --all-buffers-info --mvei --no-hw-sw-parallelism --cache-maintenance --Oalt-sched --native-float --enable-virtual-mem-pools --Omax-ca-pipe 4 --Oshuffle-dma --Ocache-opt --Os"
        },
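The steps above can also be scripted. A minimal sketch, assuming the profiles are top-level keys of user_neuralart.json as the snippet above suggests (the file, profile, and mpool names are just the examples from this post):

```python
import json
from pathlib import Path

def add_custom_profile(cfg_path, base_profile, new_profile, mpool_path):
    """Copy an existing profile in user_neuralart.json under a new name
    and point its "memory_pool" entry at a custom .mpool file."""
    cfg_file = Path(cfg_path)
    cfg = json.loads(cfg_file.read_text())
    profile = dict(cfg[base_profile])               # step 1: copy an existing entry
    profile["memory_pool"] = mpool_path             # step 3: set the mpool path
    cfg[new_profile] = profile                      # step 2: store it under the new name
    cfg_file.write_text(json.dumps(cfg, indent=4))  # step 5: save
    return cfg

# Usage (paths are illustrative):
# add_custom_profile("user_neuralart.json", "n6-allmems-O3",
#                    "CUSTOM_MPOOLS", "./my_mpools/MY_CUSTOM_MPOOL.mpool")
```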

 

You can find the file here:

C:\Users\YOUR_USER\STM32Cube\Repository\Packs\STMicroelectronics\X-CUBE-AI\10.0.0\scripts\N6_scripts\

 

Then in Cube AI you can select it:

[Screenshot: selecting the custom memory-pool profile in the X-Cube-AI configuration]
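If you drive code generation from the command line instead of the CubeMX UI, the ST Edge AI compiler documented at the links above takes the profile via its --st-neural-art option; to the best of my knowledge the syntax is profile@json-file, but verify it against the compiler documentation:

```
stedgeai generate --model my_model.tflite --target stm32n6 \
        --st-neural-art CUSTOM_MPOOLS@user_neuralart.json
```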

 

Have a good day,

Julian



Yes, I get it.

Thank you for your time. :)