How to Feed Features and Get Predictions with X-CUBE-AI on STM32?

Stone_chan
Visitor

Hi everyone,
I'm working on deploying an MLP model on an STM32F746 using X-CUBE-AI. The model was converted from ONNX, and I'm running into trouble feeding the input features and retrieving the prediction result.

I want to run inference with the following flow:

float input_data[60] = { /* preprocessed features */ };
float output_data[1]; // expecting softmax probabilities
int output_label; // expecting predicted class label
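
For the label, I plan to take the argmax of the output buffer after inference. This is just my intended post-processing (not generated code), and it assumes the output really is a probability vector whose length is given by something like AI_MLP_OUT_1_SIZE in the generated mlp.h (name guessed from the model name "mlp"):

/* hypothetical post-processing: pick the class with the highest probability */
static int argmax(const float *probs, int n)
{
    int best = 0;
    for (int i = 1; i < n; ++i) {
        if (probs[i] > probs[best])
            best = i;
    }
    return best;
}

/* after ai_mlp_run(...) has filled output_data: */
output_label = argmax(output_data, AI_MLP_OUT_1_SIZE);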

My main confusion is how to actually use the interface generated by X-CUBE-AI:

  • What are the correct steps to initialize and fill the ai_buffer structs?

  • Do I need to manually align my input/output arrays, or use the data_ins[] / data_outs[] generated by default?

  • What is the purpose of macros like AI_HANDLE_PTR(...) and AI_MNETWORK_IN?

  • I tried calling ai_mlp_run(...) like this:

     
    ai_mlp_run(network, ai_input, ai_output);

    but I get build errors about invalid pointer types (my full attempted sequence is sketched below).
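
Here is the full sequence I think I am supposed to use, pieced together from the generated mlp.h / mlp_data.h and the ST documentation example (UM2526). All the AI_MLP_* names below are my assumption that the model was imported under the name "mlp", and I know the generated API changed in newer X-CUBE-AI releases (ai_mlp_create_and_init(), ai_mlp_inputs_get(), ...), so this may not match what my version generated:

#include "mlp.h"
#include "mlp_data.h"

static ai_handle network = AI_HANDLE_NULL;

/* working memory for the runtime, size taken from the generated data header */
AI_ALIGNED(4)
static ai_u8 activations[AI_MLP_DATA_ACTIVATIONS_SIZE];

/* ai_buffer descriptors initialized from the generated macros */
static ai_buffer ai_input[AI_MLP_IN_NUM]  = AI_MLP_IN;
static ai_buffer ai_output[AI_MLP_OUT_NUM] = AI_MLP_OUT;

static int mlp_init(void)
{
    const ai_network_params params = AI_NETWORK_PARAMS_INIT(
        AI_MLP_DATA_WEIGHTS(ai_mlp_data_weights_get()),
        AI_MLP_DATA_ACTIVATIONS(activations));

    ai_error err = ai_mlp_create(&network, AI_MLP_DATA_CONFIG);
    if (err.type != AI_ERROR_NONE)
        return -1;

    if (!ai_mlp_init(network, &params)) {
        /* ai_mlp_get_error(network) should give err.type / err.code here */
        return -1;
    }
    return 0;
}

static int mlp_run(const float *in, float *out)
{
    /* point the descriptors at my own (4-byte aligned) float arrays */
    ai_input[0].data  = AI_HANDLE_PTR(in);
    ai_output[0].data = AI_HANDLE_PTR(out);

    /* ai_mlp_run() should return the number of batches processed (1) */
    if (ai_mlp_run(network, ai_input, ai_output) != 1) {
        /* check ai_mlp_get_error(network) */
        return -1;
    }
    return 0;
}

Is this roughly right, or am I supposed to go through the data_ins[] / data_outs[] buffers and the ai_mnetwork_* wrappers that the application template generates in app_x-cube-ai.c instead?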

 



Model information:

problem.png

app_X-cube-ai.h:
problem2.png
