2024-12-11 02:19 AM - last edited on 2024-12-11 05:53 AM by Julian E.
I am working on deploying a 1D CNN model on an STM microcontroller using STM32CubeMX with the X-CUBE-AI package to convert the TensorFlow Lite model into C code.
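For reference, the target-side inference goes through the C API that X-CUBE-AI generates. Below is a minimal sketch, assuming the model was named "network" in CubeMX; the exact macro and function names come from the generated network.h / network_data.h and may differ between X-CUBE-AI versions, so treat them as assumptions to check against the generated files.

#include "network.h"        /* generated by X-CUBE-AI for a model named "network" */
#include "network_data.h"   /* generated weights and parameters */

/* Activation (scratch) buffer, sized by the generated macro */
AI_ALIGNED(32)
static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

/* Input/output storage, sized by the generated macros */
AI_ALIGNED(32)
static ai_i8 in_data[AI_NETWORK_IN_1_SIZE_BYTES];
AI_ALIGNED(32)
static ai_i8 out_data[AI_NETWORK_OUT_1_SIZE_BYTES];

static ai_handle network = AI_HANDLE_NULL;

int run_one_inference(void)
{
    const ai_handle acts[] = { activations };
    ai_buffer *ai_input;
    ai_buffer *ai_output;

    /* Create and initialize the network instance (done once at startup
     * in a real application) */
    ai_error err = ai_network_create_and_init(&network, acts, NULL);
    if (err.type != AI_ERROR_NONE)
        return -1;

    /* Get the I/O descriptors and attach the user buffers */
    ai_input  = ai_network_inputs_get(network, NULL);
    ai_output = ai_network_outputs_get(network, NULL);
    ai_input[0].data  = AI_HANDLE_PTR(in_data);
    ai_output[0].data = AI_HANDLE_PTR(out_data);

    /* ... fill in_data with one (quantized) input window here ... */

    /* Run one inference; the return value is the number of batches processed */
    if (ai_network_run(network, ai_input, ai_output) != 1)
        return -1;

    /* out_data now holds the raw model output (quantized if the model is int8) */
    return 0;
}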
While testing the model, I noticed significant discrepancies in predictions:
Steps Taken:
Questions:
2024-12-12 01:47 AM
The STM32Cube.AI (i.e. X-CUBE-AI) library used on the target has its own implementation of the neural network kernels, so the results on the target may differ from those obtained when the model is executed in Python.
In the X-CUBE-AI tool you have two validate buttons: one to validate on the desktop, using the same library as the one used on the target, and one to validate on the target.
The results of validation on desktop and validation on target should not differ, as it is basically the same library; on the target there are simply some additional optimizations depending on the processor you use.
If you see a bad COS (cosine similarity less than 0.98) when you validate the model with real data, then it is likely a bug in the implementation of the kernels that we should look at.
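For reference, the COS figure reported by the validation tool is a cosine similarity between the reference outputs and the outputs produced by the X-CUBE-AI runtime. A minimal C sketch of that metric (an illustrative helper, not part of the generated code):

#include <math.h>
#include <stddef.h>

/* Cosine similarity between the reference output vector and the one
 * produced by the C runtime: 1.0 means the two outputs point in the
 * same direction, values below ~0.98 indicate a significant mismatch. */
static float cosine_similarity(const float *ref, const float *out, size_t n)
{
    double dot = 0.0, norm_ref = 0.0, norm_out = 0.0;

    for (size_t i = 0; i < n; i++) {
        dot      += (double)ref[i] * (double)out[i];
        norm_ref += (double)ref[i] * (double)ref[i];
        norm_out += (double)out[i] * (double)out[i];
    }

    if (norm_ref == 0.0 || norm_out == 0.0)
        return 0.0f;

    return (float)(dot / (sqrt(norm_ref) * sqrt(norm_out)));
}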
Could you share your model and, ideally, some input/output data so we can reproduce the issue?
Thanks in advance
Regards
2025-01-05 08:55 PM
Thank you for your response. I’d like to clarify my issue further:
Steps I’ve taken:
Given these discrepancies, I suspect there may be differences in how TensorFlow Lite and STM32Cube.AI implement quantized operations. Could you please confirm whether this is expected behavior or suggest further steps to investigate?
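For context, one way to check this is to dequantize the int8 outputs of both runtimes with the output tensor's scale and zero-point (real value approximately scale * (q - zero_point)) and compare them, so that only genuine kernel differences remain. A minimal C sketch of that comparison, with illustrative helper names:

#include <stdint.h>
#include <stddef.h>

/* Affine int8 dequantization used by TensorFlow Lite quantized models:
 * real_value ~= scale * (quantized_value - zero_point). The scale and
 * zero_point come from the output tensor's quantization parameters. */
static void dequantize_int8(const int8_t *q, float *real, size_t n,
                            float scale, int32_t zero_point)
{
    for (size_t i = 0; i < n; i++)
        real[i] = scale * (float)((int32_t)q[i] - zero_point);
}

/* Largest absolute difference between the dequantized outputs of the
 * two runtimes; a difference of a few multiples of 'scale' is normal
 * rounding, larger gaps suggest a kernel-level difference. */
static float max_abs_diff(const float *a, const float *b, size_t n)
{
    float max_diff = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float d = (a[i] > b[i]) ? (a[i] - b[i]) : (b[i] - a[i]);
        if (d > max_diff)
            max_diff = d;
    }
    return max_diff;
}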
Thank you for your assistance.
Best regards,
Khushbu
2025-01-06 02:38 AM
Yes, I confirm that the implementations of the kernels are different.
If you see a significant accuracy drop using X-CUBE-AI, then it may be an issue we have to work on.
Regards
2025-01-26 09:49 PM
Thank you for your previous response. I trained a more complex and generalized model and achieved satisfactory results when deploying it on the STM32 board using X-CUBE-AI. Thank you for your support.
I am now exploring the possibility of training the model directly on the STM32 board to enable adaptive on-device learning. My model is a 1D Convolutional Neural Network (CNN) designed to predict target features from historical time-series data.
Could you please confirm if any STM32 microcontrollers support on-board training? If so, I would appreciate guidance on the tools, libraries, or methods to enable this functionality. Alternatively, if on-board training is not currently supported, any recommendations for adaptive personalization strategies would also be helpful.
Thank you for your time and assistance. I look forward to your response.
Best regards,
Khushbu
2025-01-27 12:26 AM
Hello @khushbu_parmar,
On-board training is not supported by X-CUBE-AI.
As you are working with 1D time-series data, you may try NanoEdge AI Studio instead.
The tool is very easy to use; it will take you just a few minutes to check whether you get good performance.
In NanoEdge, you simply import your data, the tool searches for the best model given that data, and it then outputs a C library with two to three functions to use.
Anomaly detection supports training on the device.
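As an illustration, the anomaly-detection library generated by NanoEdge AI Studio is used through a handful of C functions, roughly as in the sketch below. The exact signatures and the buffer-size defines come from the generated NanoEdgeAI.h and depend on your project, so treat this as an assumption to check against the generated header.

#include <stdint.h>
#include "NanoEdgeAI.h"   /* header generated by NanoEdge AI Studio */

/* One signal of DATA_INPUT_USER samples per axis, as defined
 * in the generated header. */
static float input_buffer[DATA_INPUT_USER * AXIS_NUMBER];

void anomaly_detection_example(void)
{
    uint8_t similarity = 0;

    /* Initialize the library once at startup */
    if (neai_anomalydetection_init() != NEAI_OK)
        return;

    /* Learning phase: feed a set of "normal" signals acquired on the
     * device itself; this is the on-device training step. */
    for (int i = 0; i < 100; i++) {     /* 100 is only an illustrative count */
        /* ... fill input_buffer with one new signal ... */
        neai_anomalydetection_learn(input_buffer);
    }

    /* Detection phase: a similarity close to 100 means the signal looks
     * like the learned "normal" ones; low values indicate an anomaly. */
    /* ... fill input_buffer with one new signal ... */
    neai_anomalydetection_detect(input_buffer, &similarity);
}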
Doc:
Have a good day,
Julian