TFLite BATCH_MATMUL and LOOP Operators Not Supported
2024-09-13 8:15 AM
I’m using the X-Cube-AI Toolbox 9.1 to deploy a Vision Transformer (ViT) but got stuck. I exported the model from PyTorch to ONNX with opset 15, then used onnx2tf to convert it to .tflite format; the resulting model looks valid when inspected in Netron.
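For reference, the conversion path was roughly as sketched below. This is only an illustrative sketch: the timm model name and the file names are placeholders, and the onnx2tf flags shown are the commonly used ones, not necessarily the exact command I ran.

```python
import torch
import timm  # illustrative source of a ViT model; any torch ViT would do

# Placeholder ViT; the actual model may differ.
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)

# Step 1: PyTorch -> ONNX, opset 15
torch.onnx.export(
    model, dummy, "vit.onnx",
    opset_version=15,
    input_names=["input"], output_names=["output"],
)

# Step 2: ONNX -> TFLite with the onnx2tf CLI, e.g.:
#   onnx2tf -i vit.onnx -o saved_model
# The float32 .tflite written to the output folder is what I analyzed in X-Cube-AI.
```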
1. When I try to analyze the .tflite model, an error pops up saying “BATCH_MATMUL not supported.”
2. Without that conversion, analyzing the .onnx model directly fails with something like “tuple has no method get_shape”.
3. I also converted the model to .keras and .h5. The .keras file gives “Unable to load the file”, after which the whole STM32CubeIDE hangs and needs a restart, while the .h5 file only reports something like “Length is four”.
All of the attempts above at deploying the ViT model failed with only a single-line error message during model analysis, so I have the following questions:
A) I tried to unroll the matmul into FOR loops, but unfortunately the LOOP operator is not supported either.
B) One important point, I think, is that the scaled dot-product attention module relies heavily on matmul / batch_matmul (see the sketch below). Could the matmul operators, such as the @ operator in torch, torch.bmm, or torch.einsum, be supported? Or is there an interface in X-Cube-AI for implementing matmul as a custom operator?
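For context, here is a minimal sketch of scaled dot-product attention, assuming the usual (batch, heads, seq_len, head_dim) layout; the shapes are illustrative, not taken from my exact model. Both matmuls operate on 4-D tensors, which is presumably why the exported graph ends up with BATCH_MATMUL nodes.

```python
import math
import torch

def sdpa(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    # Both torch.matmul calls below work on 4-D tensors, so they are
    # exported as batched matrix multiplications (BATCH_MATMUL in TFLite).
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    attn = torch.softmax(scores, dim=-1)
    return torch.matmul(attn, v)

# ViT-Base-like shapes, purely illustrative: 12 heads, 197 tokens, 64-dim heads.
q = k = v = torch.randn(1, 12, 197, 64)
out = sdpa(q, k, v)  # -> (1, 12, 197, 64)
```

Rewriting these matmuls as explicit Python loops over the batch/head dimensions is essentially what I attempted in (A), which is where the unsupported LOOP operator showed up.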
- Labels:
  - ST Edge AI Core
  - STM32CubeAI
Accepted Solutions
2024-09-26 9:10 AM
Vision Transformers are currently not supported in STM32Cube.AI.
It is in our development plans, but there is no release date yet.
Regards
Daniel
In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
