2024-02-13 08:49 AM
Hi everyone,
I am working on an extrapolation task as part of a research project at my university, and I am comparing different models, in this case random forest models.
I assume that a model's complexity is related to the size of its knowledge array.
How can it be that the execution time of the models on the microcontroller does not increase with increasing model size? Are calculations performed in parallel? For example, models with a size of 200 kB can execute much faster than models with a size of 80 kB.
I am working with an STM32 Nucleo-144 development board with an STM32H723ZG MCU.
Thanks in advance.
2024-02-14 12:52 AM - edited 2024-02-14 01:44 AM
Hello,
Thank you for your question.
It is true that the computation time is related to the size of the library, but in NanoEdge AI Studio we provide a model together with its preprocessing, and I think what you are observing comes from this preprocessing.
You can take a look at the validation reports for several libraries and check the flowchart at the end to see which preprocessing is applied. FFTs and feature extraction can take quite some time to compute, which would explain what you are observing.
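To check this on your board, you can time the full inference call with the Cortex-M7 cycle counter and compare the cycle counts across libraries. Below is a minimal sketch; the neai_extrapolation name and signature are assumptions based on a typical NanoEdge extrapolation header, so adapt them to the NanoEdgeAI.h generated for your library:

#include <stdint.h>
#include "stm32h7xx.h"   /* CMSIS device header: CoreDebug, DWT */

/* Assumed signature (check your generated NanoEdgeAI.h). */
extern uint8_t neai_extrapolation(float data_input[], float *extrapolated_value);

static void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable trace/DWT unit */
    DWT->LAR = 0xC5ACCE55;                /* unlock DWT (needed on Cortex-M7) */
    DWT->CYCCNT = 0U;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;  /* start the cycle counter */
}

/* Returns the CPU cycles spent in one full inference call
 * (preprocessing + model), so libraries can be compared directly. */
static uint32_t time_inference(float data_input[])
{
    float extrapolated_value = 0.0f;
    uint32_t start = DWT->CYCCNT;
    (void)neai_extrapolation(data_input, &extrapolated_value);
    return DWT->CYCCNT - start;
}

If a 200 kB library consistently shows fewer cycles than an 80 kB one, that difference almost certainly sits in the preprocessing stage rather than in the model itself.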
You can find the report in the validation step, and the flowchart at the end of the report.
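As a side note, even leaving preprocessing aside, the size of a random forest does not directly determine its speed: inference walks one root-to-leaf path per tree, so the cycle cost scales with the depths of those paths, while the knowledge buffer scales with the total number of stored nodes. A forest of many shallow trees can be larger in flash yet faster to evaluate than a smaller forest of deep trees. A rough illustration with a hypothetical node layout (not the actual NanoEdge internal format):

#include <stdint.h>

typedef struct {
    uint16_t feature;    /* index of the feature to test */
    float    threshold;  /* split value */
    int16_t  left;       /* index of left child, -1 marks a leaf */
    int16_t  right;      /* index of right child */
    float    value;      /* prediction stored at a leaf */
} tree_node_t;

/* Walks one tree: the loop runs once per level, so the cycle cost is
 * proportional to the path depth, while the memory footprint grows
 * with the TOTAL node count. */
static float predict_tree(const tree_node_t *nodes, const float *features)
{
    int16_t i = 0;
    while (nodes[i].left >= 0) {  /* internal node */
        i = (features[nodes[i].feature] < nodes[i].threshold)
                ? nodes[i].left
                : nodes[i].right;
    }
    return nodes[i].value;        /* leaf reached */
}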
Please let me know if you have any other questions.
Have a good day,
Julian