Problems with Cube AI Project

LuizF_Ferro
Associate II

Hello,

I am working on a Human Activity Recognition model on an STM32U585CIU6 using Cube AI. In summary, my model takes accelerometer data from my LSM6DSOX sensor and uses a 2 s frame of samples (208 samples at an ODR of 104 Hz) as input to the network, which returns a prediction among 4 different classes.
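
For reference, the frame sizing works out roughly like this (a sketch only; the real input size comes from the generated AI_NETWORK_IN_1_SIZE macro in network.h, and the 3-axis interleaved layout is an assumption that matches how I fill the buffer in the snippet below):

/* Frame-sizing sketch (assumptions: 3 interleaved axes, raw float input) */
#define ODR_HZ             104                          /* accelerometer output data rate */
#define FRAME_SECONDS      2                            /* window length                  */
#define SAMPLES_PER_FRAME  (ODR_HZ * FRAME_SECONDS)     /* 208 samples per window         */
#define AXES               3                            /* X, Y, Z                        */
#define FRAME_FLOATS       (SAMPLES_PER_FRAME * AXES)   /* 624 floats, expected to equal AI_NETWORK_IN_1_SIZE */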

After struggling for some time to use the generated functions in my main.c, I managed to perform the inference, but not as I expected, and I have reached a point where I am not able to identify and solve the problem.

I left my board on the desk to run some inferences, and as you can see in the image, the prediction values are the same for all the classes.

[Image: LuizF_Ferro_0-1713877492994.png — inference output with identical prediction values for all classes]

I would really appreciate it if someone could help me identify what could possibly be wrong. My main suspicion is the network weights/parameters initialization, but I can't really figure it out. Here is an adapted snippet of my main.c:

ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];
ai_u8 weights[AI_NETWORK_DATA_WEIGHTS_SIZE];
ai_float in_data[AI_NETWORK_IN_1_SIZE];
ai_float out_data[AI_NETWORK_OUT_1_SIZE];

/* AI buffer IO handlers */
ai_buffer *ai_input;
ai_buffer *ai_output;

int main(void) {

  HAL_Init();
  MX_GPIO_Init();
  MX_ICACHE_Init();
  MX_CRC_Init();
  MX_I2C1_Init();
  MX_USART3_UART_Init();

  stmdev_ctx_t dev_ctx;
  uint8_t dummy;
  /* Initialize mems driver interface */
  dev_ctx.write_reg = platform_write;
  dev_ctx.read_reg = platform_read;
  dev_ctx.mdelay = platform_delay;
  dev_ctx.handle = &hi2c1;

  ai_handle network = AI_HANDLE_NULL;
  ai_error err;
  ai_network_report report;

  /** @brief Initialize network */
  const ai_handle acts[] = { activations };
  const ai_handle wgts[] = { weights };
  err = ai_network_create_and_init(&network, acts, wgts);
  if (err.type != AI_ERROR_NONE) {
      printf("ai init_and_create error\n");
      return -1;
  }

  /** @brief {optional} for debug/log purpose */
  if (ai_network_get_report(network, &report) != true) {
      printf("ai get report error\n");
      return -1;
  }

  printf("Model name      : %s\r\n", report.model_name);
  printf("Model signature : %s\r\n", report.model_signature);

  ai_input = &report.inputs[0];
  ai_output = &report.outputs[0];
  printf("input[0] : (%d, %d, %d)\r\n", AI_BUFFER_SHAPE_ELEM(ai_input, AI_SHAPE_HEIGHT),
                                      AI_BUFFER_SHAPE_ELEM(ai_input, AI_SHAPE_WIDTH),
                                      AI_BUFFER_SHAPE_ELEM(ai_input, AI_SHAPE_CHANNEL));
  printf("output[0] : (%d, %d, %d)\r\n", AI_BUFFER_SHAPE_ELEM(ai_output, AI_SHAPE_HEIGHT),
                                       AI_BUFFER_SHAPE_ELEM(ai_output, AI_SHAPE_WIDTH),
                                       AI_BUFFER_SHAPE_ELEM(ai_output, AI_SHAPE_CHANNEL));

  int ret = 1;
  ret = lsm6dsox_device_id_get(&dev_ctx, &dummy);

  lsm6dsox_xl_data_rate_set(&dev_ctx, LSM6DSOX_XL_ODR_104Hz);
  lsm6dsox_xl_full_scale_set(&dev_ctx, LSM6DSOX_4g);

  while (1) {
    /* Read accelerometer data */
    lsm6dsox_axis3bit16_t data_raw_acceleration;

    float acceleration_g[3];
    const float sensitivity_a = 0.122f / 1000.0f; // 0.122 mg/LSB converted to g/LSB

    /** @brief Fill input buffer with acceleration/gyroscope values*/
    int bufIdx = 0;
    while (bufIdx < AI_NETWORK_IN_1_SIZE) {
      uint8_t regXL = 0;
      lsm6dsox_xl_flag_data_ready_get(&dev_ctx, &regXL);
      
      if (regXL) {
        lsm6dsox_acceleration_raw_get(&dev_ctx, data_raw_acceleration.i16bit);

        /* int16 -> float conversion */
        acceleration_g[0] = data_raw_acceleration.i16bit[0] * sensitivity_a;
        acceleration_g[1] = data_raw_acceleration.i16bit[1] * sensitivity_a;
        acceleration_g[2] = data_raw_acceleration.i16bit[2] * sensitivity_a;
      
        in_data[bufIdx] = acceleration_g[0];
        in_data[bufIdx + 1] = acceleration_g[1];
        in_data[bufIdx + 2] = acceleration_g[2];
        bufIdx += 3;
      }
    }

    /** @brief Perform inference */
    ai_i32 n_batch;

    /** @brief Set input/output buffer addresses */
    ai_input[0].data = AI_HANDLE_PTR(in_data);
    ai_output[0].data = AI_HANDLE_PTR(out_data);
    ai_input[0].format = AI_BUFFER_FORMAT_FLOAT;
    ai_output[0].format = AI_BUFFER_FORMAT_FLOAT;

    /** @brief Perform the inference */
    n_batch = ai_network_run(network, &ai_input[0], &ai_output[0]);
    if (n_batch != 1) {
      err = ai_network_get_error(network);
      printf("ai run error %d, %d\n", err.type, err.code);
      return -1;
    }

    /** @brief Post-process the output results/predictions */
    printf("Inference output: [");
    float maxValue = -1.0f; // track the highest probability seen so far
    int maxIndex = -1;

    /** @brief AI_NETWORK_OUT_1_SIZE = 4 */
    for (int i = 0; i < AI_NETWORK_OUT_1_SIZE; i++) {
      float value;
      value = out_data[i];
      printf("%.2f", value);
      if (i < AI_NETWORK_OUT_1_SIZE - 1) printf(", ");
      if (value > maxValue) {
        maxValue = value;
        maxIndex = i;
      }
    }
    printf("]\r\n");

    // Print activity with highest probability
    const char* activities[AI_NETWORK_OUT_1_SIZE] = {"Sitting", "Stairs", "Standing", "Walking"};
    if (maxIndex != -1) printf("Activity: %s\r\n", activities[maxIndex]);

  }
}

I would really appreciate any help solving this problem. Thanks!

fauvarque.daniel
ST Employee

Your problem seems to be, at least, the weights: you should call ai_network_create_and_init with NULL as the third parameter. The weights are generated automatically by the code generation tool in the file network_data_param.c.
 

ai_network_create_and_init(&network, acts, NULL); 

Here is the pointer to the documentation, Documentation/embedded_client_api.html#ref_api_create_init, inside the package (normally under ~/STM32Cube/Repository/Pack/STMicroelectronics/X-CUBE-AI/<version>).
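
As a minimal sketch based on your own snippet (only the third argument changes; the activations buffer is still provided by the application):

ai_handle network = AI_HANDLE_NULL;
const ai_handle acts[] = { activations };

/* Pass NULL for the weights so the runtime uses the parameters generated
 * by the tool instead of an uninitialized application buffer */
ai_error err = ai_network_create_and_init(&network, acts, NULL);
if (err.type != AI_ERROR_NONE) {
    printf("ai init_and_create error\n");
    return -1;
}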

Regards

Hi Daniel, thanks for your reply. I had the third parameter set to NULL before, but my problem still persisted. Instead of the predicted outputs being equal for all classes, using "ai_network_create_and_init(&network, acts, NULL);" gives me an inference output like this:

[Image: LuizF_Ferro_0-1713880658864.png — inference output dominated by a single (wrong) class]

Whether I move the board or not, the prediction remains dominated by a wrong class.

Best regards,

Luiz