
No source available for "ai_platform_network_process() at 0x8xxxxx"

PGaja.1
Associate II

Hi,

We are trying a very simple deep neural network model with 20 inputs and 3 outputs and want to run inference in real time. We are following the instructions from the document "Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI)" (UM2526), using X-CUBE-AI 7.1.0 with STM32CubeMX. When we run the generated code, the debugger reports "No source available for ai_platform_network_process() at 0x8xxxxx".

Has anyone encountered the same error?

Thanks in advance.


Hello @josepauloo ,

You can find the minimal code example in the X-CUBE-AI documentation, under Embedded Inference ST Edge AI client API: https://wiki.st.com/stm32mcu/wiki/AI:X-CUBE-AI_documentation#X-CUBE-AI_embedded_documentation_access

 

You will find the aiInit() there.
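For reference, a minimal init step with that API looks roughly like the sketch below. It assumes the default model name "network" (so the generated headers and macros are prefixed network/AI_NETWORK_); the exact names follow whatever model name you set in CubeMX.

#include "network.h"        /* generated by X-CUBE-AI for a model named "network" */
#include "network_data.h"

static ai_handle network = AI_HANDLE_NULL;

/* Activation (scratch) buffer, sized by the generated macro */
AI_ALIGNED(4)
static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

static int aiInit(void)
{
  const ai_handle acts[] = { activations };

  /* Create and initialise the network instance in one call;
     NULL means "use the weights compiled into the generated code". */
  ai_error err = ai_network_create_and_init(&network, acts, NULL);
  if (err.type != AI_ERROR_NONE) {
    return -1;               /* err.type / err.code identify what failed */
  }
  return 0;
}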

 

Have a good day,

Julian



Dear Julian ,

Thank you for your response. However, I didn't quite understand the mention of the Edge platform, as I am not using Edge in my project. I developed my project using X-CUBE-AI within STM32CubeMX and STM32CubeIDE.

I believe the relevant documentation would be this one: file:///C:/Users/jpaul/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Documentation/embedded_client_api.html.

For example, in my project I don't see the function stai_runtime_init() or any other function starting with stai_. Additionally, it seems the issue is not during initialization, as the error only occurs when I attempt to execute the network.

Could you please clarify this point?

 

Best regards,

José

Hello @josepauloo 

 

Are you talking about ST Edge AI? If so, it is another name for Cube AI. Cube AI is the historical name of the CubeMX plugin, but we are now regrouping everything under ST Edge AI. Eventually the name Cube AI should disappear.

 

As for stai_runtime_init(), it is probably missing because you didn't use any template while generating your Cube AI project. I have never done it this way, so I cannot help you much there.

I would suggest creating a project using the project template when activating Cube AI in CubeMX, and then looking at the minimal example to run your model.
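In case it helps, here is roughly what that template wires up for you; the function names are the ones CubeMX generates into app_x-cube-ai.c, while the peripheral init calls and the exact template name depend on your own .ioc configuration and X-CUBE-AI version.

#include "main.h"
#include "app_x-cube-ai.h"   /* generated together with the application template */

int main(void)
{
  HAL_Init();
  SystemClock_Config();
  MX_GPIO_Init();
  MX_CRC_Init();             /* the X-CUBE-AI runtime needs the CRC peripheral enabled */

  MX_X_CUBE_AI_Init();       /* creates and initialises the model (the aiInit() step) */

  while (1)
  {
    /* fills the input buffer, runs one inference, reads the outputs */
    MX_X_CUBE_AI_Process();
  }
}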

 

Have a good day,

Julian

 

 



Hello Julian,

 

Thank you for your response.

I understand that ST Edge AI is the new name for CubeAI, but I’m still struggling to understand how to integrate the provided code snippets into a functional project.

The documentation and multiple versions of the X-CUBE-AI tool make it challenging to identify the correct steps to implement and use the stai_runtime_init() function, or to grasp how the generated code works as a whole.

Would you happen to have a fully functional example project that I could refer to? A complete, working example would be extremely helpful in understanding how to structure the code and utilize the templates effectively.

Thank you for your support!

 

Best regards,
José Paulo

Julian E.
ST Employee

Hello @josepauloo,

 

I would like to spend more time with X-CUBE-AI to create some documentation through easy examples, but we are pretty short-handed at the moment...

 

You can generate a code example that uses the model with a random input and sends some metrics (inference time, predicted class, etc.) over serial, targeting an STM32H747I-DISCO, using the ST Developer Cloud:

[Attached screenshot: JulianE_0-1733847586547.png]

You need to import a model (.onnx, .h5, or .tflite) and then go through the steps. You can basically skip everything until you reach the last part.

You can download different things depending on what you want. 

Currently CubeMX will have an issue because the Cube AI version in the Dev Cloud is higher than the one you can download in CubeMX (version 10.0 came out today, so it may already be fixed).

 

I don't know if it works for other boards; you can test it. I tested it with this H7, so it works there.

 

Have a good day,

Julian



Hello Julian,

Thank you for your support. In the meantime, I managed to make my model work using the structure of the Embedded Inference Client API (file:///C:/Users/jpaul/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Documentation/embedded_client_api.html) instead of the Embedded Inference Client ST Edge AI API (file:///C:/Users/jpaul/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Documentation/embedded_client_stai_api.html). When creating the project in CubeMX (I used version 9 of X-CUBE-AI), I could not find any functions starting with "stai" in the generated files, so I tried the other structure, and it worked.
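For anyone reading later, a minimal run step with that API looks roughly like the sketch below (default model name "network", a single input and output tensor; the buffer names are only illustrative).

#include "network.h"

extern ai_handle network;    /* handle created in aiInit() by ai_network_create_and_init() */

/* Illustrative I/O buffers, sized by the generated macros (20 inputs / 3 outputs here) */
static ai_float in_data[AI_NETWORK_IN_1_SIZE];
static ai_float out_data[AI_NETWORK_OUT_1_SIZE];

static int aiRun(void)
{
  ai_buffer *ai_input  = ai_network_inputs_get(network, NULL);
  ai_buffer *ai_output = ai_network_outputs_get(network, NULL);

  /* ai_network_run() ends up in ai_platform_network_process(); leaving these
     data pointers unset, or pointing at buffers that are too small, is a
     typical cause of a fault at that address. */
  ai_input[0].data  = AI_HANDLE_PTR(in_data);
  ai_output[0].data = AI_HANDLE_PTR(out_data);

  ai_i32 n_batch = ai_network_run(network, ai_input, ai_output);
  if (n_batch != 1) {
    ai_error err = ai_network_get_error(network);
    (void)err;               /* err.type / err.code explain the failure */
    return -1;
  }
  return 0;
}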

Unfortunately, the documentation linked above does not clarify many important details, which makes problem-solving more challenging. For instance, the TensorFlow version supported by X-CUBE-AI 9 is 2.12.0; using a newer version causes errors during model analysis in CubeMX. This should have been documented. There are significant version discrepancies between TensorFlow, Keras, and the ST tools, which complicate the process.

Thank you for your help and support.

 

Wishing you and your team a joyful holiday season!
José Paulo

Hello @josepauloo,

 

Thank you very much for your comment.

 

Cube AI is a plugin for CubeMX that uses ST Edge AI, which you can also use from the CLI.

  • The Embedded Inference Client API is the "old API" that CubeMX uses
  • The Embedded Inference Client ST Edge AI API is the "new API", which you can use if you want via the Extra command line options in the Advanced options

[Attached screenshot: JulianE_0-1734016791248.png]

So, as you said, you don't see the stai functions from the new API in your generated code.

I wasn't aware of this either, but now I know, thanks to you :).

As for what to add in the Extra command line options, I will need to look it up.

 

The date for the migration from the old API to the new one is not defined.

 

Concerning the required versions (of TensorFlow, for example), you can find them in the installation part of the documentation, but it may not be very clear.

[Attached screenshot: JulianE_1-1734017191025.png]

 

I also wish you a joyful holiday season!

Julian

