2025-07-13 4:41 AM
Hi everyone,
I'm working on deploying an MLP model on an STM32F746 using X-CUBE-AI. The model was converted from ONNX format, and I'm running into trouble feeding the input features in and retrieving the prediction result.
I want to run inference with the following flow:
float input_data[60] = { /* preprocessed features */ };
float output_data[1]; // expecting softmax probabilities
int output_label; // expecting predicted class label
I'm unclear on how to actually use the interface generated by Cube.AI:
What are the correct steps to initialize and fill the ai_buffer structs?
Do I need to manually align my input/output arrays, or use the data_ins[] / data_outs[] generated by default?
What is the purpose of macros like AI_HANDLE_PTR(...) and AI_MNETWORK_IN?
I tried calling ai_mlp_run(...) like this:
but I get build errors about incompatible/invalid pointer types.
Model information:
app_X-cube-ai.h: