2024-06-10 08:38 AM
Dear community,
I have encountered some difficulties while using CUBEAI to analyze my self-trained object detection models.
I trained models on my own dataset using the object detection models available in the STM32AIModelZoo. I used tiny_yolo, st_ssd_mobilenet, and ssd_mobilenet_v2, and in each case I successfully obtained a trained model in .h5 format. The training itself appears to be error-free, as I strictly followed the configuration files provided in the model zoo's documentation.
However, when I attempt to analyze the trained models in CUBEAI, it reports an error stating that Lambda layers cannot be processed. The error occurs regardless of which model I use. After reviewing the .py files of the various models, I did not find any code that creates Lambda layers. I also tried analyzing the pre-trained models provided in the model zoo with CUBEAI, and they fail for the same reason: the presence of Lambda layers.
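For reference, one way to confirm whether an exported .h5 file actually contains Lambda layers is to list its layer types in Python. Below is a minimal sketch, assuming TensorFlow 2.x is installed; the file name is only a placeholder for one of my trained models:

import tensorflow as tf

# Placeholder file name: one of the .h5 models produced by the model zoo training.
model = tf.keras.models.load_model("trained_model.h5", compile=False)

# Print every layer's class name so any Lambda layers stand out.
for layer in model.layers:
    print(f"{layer.name}: {layer.__class__.__name__}")

lambda_layers = [l.name for l in model.layers if isinstance(l, tf.keras.layers.Lambda)]
print("Lambda layers found:", lambda_layers if lambda_layers else "none")

(Note that this only inspects the top-level layers; nested submodels would need to be walked recursively.)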
This issue has been plaguing me for quite some time, and I am eager to find a solution. I hope to receive some suggestions from this community.
If you have any insight into how to resolve this issue, please let me know. Your help will be greatly appreciated.
Thank you in advance for your assistance.
Best regards,
2024-06-17 11:53 PM
Hello,
Actually, we have no issue importing st_ssd_mobilenet_v1_025_192_int8_object_detection_COCO_2017.tflite, for instance.
Could you provide more details about your experiment, such as: are you using the latest version of the model zoo? What is the result of the training pass? Has the trained model been tested locally and/or in the dev cloud? On target?
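As a quick cross-check on your side, you could also try converting the trained .h5 to a .tflite file and analyzing that instead of the Keras model. A minimal sketch, assuming TensorFlow 2.x and placeholder file names (quantization options omitted for brevity):

import tensorflow as tf

# Placeholder path to one of your trained Keras models.
model = tf.keras.models.load_model("trained_model.h5", compile=False)

# Convert the Keras model to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("trained_model.tflite", "wb") as f:
    f.write(tflite_model)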
Thanks,
Laurent