X-CUBE-AI 7.1.0 Generated Code Initialisation and use

Raggio
Associate III

Hello, everyone.

I had a problem with version 7.1.0 of X-CUBE-AI.

When I load the TFLite model of the neural network and generate the code, I have problems initializing the input and output buffers. In particular, this assignment fails: ai_buffer ai_input[AI_NETWORK_IN_NUM] = AI_NETWORK_IN;

The compiler reports a problem with the AI_NETWORK_IN macro, which is marked as deprecated and now returns an ai_buffer*. Has anyone else encountered the same problem? How can it be solved?

In version 5.20.0, I had no problems with initialisation and already evaluated many different examples.

Thank you,

Davide

1 ACCEPTED SOLUTION

jean-michel.d
ST Employee

Hello Raggio,

I am not sure in which context you use this assignment, but since 7.x the macro "AI_NETWORK_IN" is mapped onto a function (ai_network_inputs_get()). To create the handles of the IO tensors before using them with the ai_network_run() function, the following typical code is expected:

/* Pointers to the arrays managing the model's input/output tensors */
static ai_buffer *ai_input;
static ai_buffer *ai_output;
 
void aiInit(void) {
...
  /* Retrieve pointers to the model's input/output tensors */
  ai_input = ai_network_inputs_get(network);
  ai_output = ai_network_outputs_get(network);
...
}
 
/* 
 * Run inference
 */
int aiRun(const void *in_data, void *out_data)  {
  ai_i32 n_batch;
  ai_error err;

  /* 1 - Update the IO handles with the address of the data payloads */
  ai_input[0].data = AI_HANDLE_PTR(in_data);
  ai_output[0].data = AI_HANDLE_PTR(out_data);
 
  /* 2 - Perform the inference */
  n_batch = ai_network_run(network, &ai_input[0], &ai_output[0]);
  if (n_batch != 1) {
      err = ai_network_get_error(network);
      ...
  }
  ...
  return 0;
}

br,

Jean-Michel



Thank you Jean-Michel,

Your solution is right, thank you for the answer.

Best Regards,

Raggio