Can I implement multiple models through X-CUBE-AI?

HAhme.1
Associate II

I have multiple .tflite models that I want to implement in the same project. Is that doable?

Also, can I validate data programmatically?

Thank you in advance

Hanya

fauvarque.daniel
ST Employee

Yes, you can have multiple models. In the generated code, the interface to each model carries the unique network name that you specify in the X-CUBE-AI tool.

The only limitation is the amount of Flash available on the system to store all the networks.
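For illustration, here is a minimal sketch of what initializing two generated networks side by side could look like. It assumes two models imported under the names "name1" and "name2"; the ai_name1_*/ai_name2_* identifiers and AI_NAME1_*/AI_NAME2_* macros follow the usual X-CUBE-AI naming pattern, but exact headers, macros, and signatures depend on the pack version, so treat this as an outline rather than drop-in code:

```c
/* Sketch only: assumes X-CUBE-AI generated name1.c/.h, name1_data.c/.h,
 * name2.c/.h and name2_data.c/.h for two imported models. Check the
 * generated headers for the exact symbols in your pack version. */
#include "name1.h"
#include "name1_data.h"
#include "name2.h"
#include "name2_data.h"

static ai_handle net1 = AI_HANDLE_NULL;
static ai_handle net2 = AI_HANDLE_NULL;

/* Each network gets its own activations arena, sized by the per-model
 * macros the code generator emits (names here follow the usual pattern). */
AI_ALIGNED(4) static ai_u8 act1[AI_NAME1_DATA_ACTIVATIONS_SIZE];
AI_ALIGNED(4) static ai_u8 act2[AI_NAME2_DATA_ACTIVATIONS_SIZE];

int networks_init(void)
{
  /* First network: create it, then bind its weights and activations. */
  if (ai_name1_create(&net1, AI_NAME1_DATA_CONFIG).type != AI_ERROR_NONE)
    return -1;
  const ai_network_params p1 = {
    AI_NAME1_DATA_WEIGHTS(ai_name1_data_weights_get()),
    AI_NAME1_DATA_ACTIVATIONS(act1)
  };
  if (!ai_name1_init(net1, &p1))
    return -1;

  /* Second network: identical steps through its own prefixed API. */
  if (ai_name2_create(&net2, AI_NAME2_DATA_CONFIG).type != AI_ERROR_NONE)
    return -1;
  const ai_network_params p2 = {
    AI_NAME2_DATA_WEIGHTS(ai_name2_data_weights_get()),
    AI_NAME2_DATA_ACTIVATIONS(act2)
  };
  if (!ai_name2_init(net2, &p2))
    return -1;

  return 0;
}
```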

For validation, we provide an interface to validate the generated network with your input/output data. We are thinking of exposing a Python interface on the PC to validate the network on the target.

Can you clarify your expectations on validation so we can see whether what we plan to expose fits your needs?

Regards

Daniel


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
HAhme.1
Associate II

Hello,

Thank you for the fast reply.

I want to validate my data right before using the network for prediction, so that I can check the accuracy continually. I would like this to happen within the code itself, not through the interface.
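(For context, the kind of in-code check meant here could look like the sketch below. It assumes a classifier for which ground-truth labels are available at run time; model_predict() is a hypothetical wrapper around the generated inference call, and NUM_CLASSES is a placeholder.)

```c
#include <stdint.h>

/* Hypothetical wrapper around the generated ai_<name>_run() call; it
 * fills `scores` with the NUM_CLASSES output activations for one input. */
extern int model_predict(const float *input, float *scores);

#define NUM_CLASSES 10 /* placeholder: number of output classes */

/* Running tallies, updated each time a prediction is used. */
static uint32_t n_seen = 0, n_correct = 0;

static int argmax(const float *v, int n)
{
  int best = 0;
  for (int i = 1; i < n; ++i)
    if (v[i] > v[best]) best = i;
  return best;
}

/* Predict one sample and, when a ground-truth label is available,
 * fold the result into a running accuracy estimate. */
float predict_and_track(const float *input, int expected_label, int *predicted)
{
  float scores[NUM_CLASSES];
  model_predict(input, scores);
  *predicted = argmax(scores, NUM_CLASSES);

  n_seen++;
  if (*predicted == expected_label) n_correct++;
  return (float)n_correct / (float)n_seen; /* running accuracy so far */
}
```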

Regards

Hanya

jean-michel.d
ST Employee

Hello Hanya,

Yes, as Daniel mentioned, it is possible to have multiple generated models in the same STM32 project. When the models are imported, if you define a specific name for each one ("name1" for model1.tflite, "name2" for model2.tflite, ...), a set of model-specific C files is generated: name1.c/.h, name1_data.c/.h, and so on. They contain the C functions and data specific to each model. All public functions and data are prefixed by the code generator with the name you provided (ai_name1_create(), etc.); everything else is declared with the static C keyword. Only the AI runtime library is shared between the models.

In theory, there is no limit on the number of models. The practical limit is the Flash and SRAM available to store, respectively, the weights and the activation buffers (and/or the I/O buffers). If two models are never executed in parallel, the same memory buffer can be shared between them.
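To illustrate that last point, here is a minimal sketch of sharing one activations arena between two models that run strictly sequentially. The *_DATA_ACTIVATIONS_SIZE macros and prefixed identifiers follow the naming pattern above; verify the exact symbols in the generated name1_data.h/name2_data.h:

```c
/* Sketch: if "name1" and "name2" never run concurrently, one arena sized
 * for the larger of the two activation requirements can serve both.
 * Macro names follow the generated pattern; check the emitted headers. */
#define SHARED_ACT_SIZE \
  (AI_NAME1_DATA_ACTIVATIONS_SIZE > AI_NAME2_DATA_ACTIVATIONS_SIZE ? \
   AI_NAME1_DATA_ACTIVATIONS_SIZE : AI_NAME2_DATA_ACTIVATIONS_SIZE)

AI_ALIGNED(4) static ai_u8 shared_act[SHARED_ACT_SIZE];

/* Hand the same arena to both networks at init time (the handles are
 * assumed to come from the ai_<name>_create() calls). This is only safe
 * because the two ai_<name>_run() calls are never interleaved. */
void networks_init_shared(ai_handle net1, ai_handle net2)
{
  const ai_network_params p1 = {
    AI_NAME1_DATA_WEIGHTS(ai_name1_data_weights_get()),
    AI_NAME1_DATA_ACTIVATIONS(shared_act)
  };
  const ai_network_params p2 = {
    AI_NAME2_DATA_WEIGHTS(ai_name2_data_weights_get()),
    AI_NAME2_DATA_ACTIVATIONS(shared_act)
  };
  ai_name1_init(net1, &p1);
  ai_name2_init(net2, &p2);
}
```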

There are two ways to generate this type of STM32 project if you don't want to use the provided aiValidation flow (for information, with that flow the user can inject their own data, and all generated/predicted data are stored in a specific .npz file for post-processing analysis):

  • generate an aiTemplate project with the different models, then update the provided code to use them
  • generate an initial aiTemplate project with a given model to obtain the AI runtime library and the associated header files, then use the CLI to add/update the other models

Don't hesitate to look at the documentation available in the pack for more details on the usage of the Embedded Inference API.

Regards,

Jean-Michel