2020-12-01 05:21 AM
I have multiple .tflite models that I want to deploy in the same project. Is that doable?
Also, can I validate data programmatically?
Thank you in advance
Hanya
2020-12-01 05:40 AM
Yes, you can have multiple models. In the generated code, the interface to each model contains the unique network name that you specify in the X-Cube-AI tool.
The only limitation is the amount of Flash available on the system to store all the networks.
For validation, we provide an interface to validate the generated network with your input/output data. We are thinking of exposing a Python interface on the PC to validate the network on the target.
Can you clarify your expectations on validation so we can see if what we plan to expose fits your needs?
Regards
Daniel
2020-12-01 06:00 AM
Hello,
Thank you for the fast reply.
I want to validate my data right before running the network for prediction, so that I can check the accuracy continually. I would like this to happen within the code itself, not through the interface.
Regards
Hanya
2020-12-01 10:29 AM
Hello Hanya,
Yes, as Daniel mentioned, it is possible to have multiple generated models in the same STM32 project. When a model is imported, if you give each model a specific name ("name1" for model1.tflite, "name2" for model2.tflite, ...), a set of model-specific C files is generated: name1.c/.h, name1_data.c/.h, and so on. They contain the specific C functions and data for each model. All public functions and data are prefixed by the code generator with the name you provided (ai_name1_create(), ...); everything else is declared with the static C keyword. Only the AI runtime library is shared.
In theory, there is no limit on the number of models. The practical limit is the Flash/SRAM available to store, respectively, the weights and the activation buffers (and/or the I/O buffers). If two models are never executed in parallel, the same activation buffer can be shared between them, as in the sketch below.
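Here is a minimal sketch of what this looks like for two models named name1 and name2. It is only illustrative: the function names follow the ai_name1_create()/init() pattern described above, and the macro names (AI_NAME1_DATA_CONFIG, AI_NAME1_DATA_ACTIVATIONS_SIZE, ...) are assumed to match the generated name1.h / name1_data.h headers, so please check them against the files produced by your version of the pack.

#include "name1.h"
#include "name1_data.h"
#include "name2.h"
#include "name2_data.h"

/* Shared activation buffer, sized for the larger of the two networks.
   Sharing is safe only because the models never run in parallel. */
#define ACT_SIZE ((AI_NAME1_DATA_ACTIVATIONS_SIZE > AI_NAME2_DATA_ACTIVATIONS_SIZE) \
                  ? AI_NAME1_DATA_ACTIVATIONS_SIZE : AI_NAME2_DATA_ACTIVATIONS_SIZE)

AI_ALIGNED(4)
static ai_u8 activations[ACT_SIZE];

static ai_handle net1 = AI_HANDLE_NULL;
static ai_handle net2 = AI_HANDLE_NULL;

int networks_init(void)
{
  /* Each model has its own prefixed create/init functions. */
  ai_error err = ai_name1_create(&net1, AI_NAME1_DATA_CONFIG);
  if (err.type != AI_ERROR_NONE)
    return -1;

  const ai_network_params params1 = {
    AI_NAME1_DATA_WEIGHTS(ai_name1_data_weights_get()),
    AI_NAME1_DATA_ACTIVATIONS(activations)
  };
  if (!ai_name1_init(net1, &params1))
    return -1;

  err = ai_name2_create(&net2, AI_NAME2_DATA_CONFIG);
  if (err.type != AI_ERROR_NONE)
    return -1;

  const ai_network_params params2 = {
    AI_NAME2_DATA_WEIGHTS(ai_name2_data_weights_get()),
    AI_NAME2_DATA_ACTIVATIONS(activations)  /* same buffer, reused */
  };
  if (!ai_name2_init(net2, &params2))
    return -1;

  return 0;
}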
There are two ways to generate this type of STM32 project if you don't want to use the provided aiValidation flow (for info, with the aiValidation flow you can inject your own data, and all generated/predicted data are stored in a specific npz file for post-processing analysis).
Don't hesitate to look at the documentation available in the pack for more details on the usage of the Embedded Inference API.
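Regarding the in-application accuracy check you asked about: you can simply call the generated run function on a small labeled reference set before (or between) your normal predictions. Below is a minimal sketch, assuming a float classifier named name1; NUM_REF, ref_in and ref_label are illustrative names for your own reference data, not generated symbols.

#include "name1.h"

#define NUM_REF 32  /* illustrative: number of labeled reference samples */

/* Illustrative reference set kept in Flash; replace with your own data. */
extern const ai_float ref_in[NUM_REF][AI_NAME1_IN_1_SIZE];
extern const int      ref_label[NUM_REF];

/* Runs the network on every reference sample and returns the
   fraction classified correctly (arg-max vs. expected label). */
float name1_check_accuracy(ai_handle net1)
{
  ai_buffer ai_in[AI_NAME1_IN_NUM]   = AI_NAME1_IN;
  ai_buffer ai_out[AI_NAME1_OUT_NUM] = AI_NAME1_OUT;
  ai_float  out_data[AI_NAME1_OUT_1_SIZE];
  int correct = 0;

  for (int i = 0; i < NUM_REF; i++) {
    ai_in[0].n_batches  = 1;
    ai_in[0].data       = AI_HANDLE_PTR((void *)ref_in[i]);
    ai_out[0].n_batches = 1;
    ai_out[0].data      = AI_HANDLE_PTR(out_data);

    if (ai_name1_run(net1, ai_in, ai_out) != 1)
      continue;  /* inference failed for this sample */

    /* Arg-max over the class scores. */
    int best = 0;
    for (int c = 1; c < AI_NAME1_OUT_1_SIZE; c++)
      if (out_data[c] > out_data[best])
        best = c;

    if (best == ref_label[i])
      correct++;
  }
  return (float)correct / (float)NUM_REF;
}

Since this uses exactly the same ai_name1_run() call as your normal predictions, you can invoke it periodically from the application code with no dependency on the PC-side validation tooling.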
Regards,
Jean-Michel