2025-04-09 9:33 AM
We are aiming to deploy a well-known CV model, Facebook's Segment Anything Model, on an MCU, but several of its layers are unsupported, and we cannot simply modify those layers without significantly affecting performance. Fortunately, inference time is not a concern for us, so the speed of the hardware (as opposed to a GPU) is not a major challenge.
Is there a timeline or plan for adding support, or some way we can implement these layers ourselves internally? The operators in question are:
LayerNormalization, MatMulInteger, DynamicQuantizeLinear, OneHot, ConvInteger, and Range
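For reference, here is roughly how we collected that list: a quick check of the exported graph with the onnx Python package. The model path is a placeholder for our own export, and the unsupported set is just the one above, not taken from any official tool:

```python
import onnx

# Operators we believe are unsupported, taken from the list above.
UNSUPPORTED = {
    "LayerNormalization", "MatMulInteger", "DynamicQuantizeLinear",
    "OneHot", "ConvInteger", "Range",
}

# "sam_model.onnx" is a placeholder for our exported model file.
model = onnx.load("sam_model.onnx")
used_ops = {node.op_type for node in model.graph.node}

print("Unsupported ops present in the graph:")
for op in sorted(used_ops & UNSUPPORTED):
    count = sum(1 for n in model.graph.node if n.op_type == op)
    print(f"  {op}: {count} node(s)")
```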
2025-04-10 8:47 AM
Hello @junaidsiddiqui,
Sadly, there is not much that can be done here at the moment; I think it is too early to use your model with the ST Edge AI Core.
First, more layers are added with each update of the stedgeai core, but I cannot tell you in which order or with what priority they are handled.
You can find the list of supported ONNX layers here: https://stedgeai-dc.st.com/assets/embedded-docs/supported_ops_onnx.html
It may be possible to replace the unsupported layers with equivalent operations built from other supported layers, although I am not sure this will work in every case. For example, for MatMulInteger we support MatMul, but only for float32, and QLinearMatMul, but I do not know whether they are fully equivalent.
It is probably a lot of work, and it will certainly have an impact on the accuracy of the model.
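If you want to go down that road, here is a very rough, untested sketch of what replacing one MatMulInteger with a float32 MatMul could look like: the int8 weight is dequantized offline with the onnx Python package and a plain MatMul is built against it. The file, tensor, and initializer names are placeholders, not taken from your model, and the surrounding DynamicQuantizeLinear / Cast / rescale nodes of the usual quantized pattern would also have to be removed before the graph is valid:

```python
import numpy as np
import onnx
from onnx import helper, numpy_helper

# Placeholder path and names -- you would look these up in your own graph.
model = onnx.load("sam_model.onnx")
graph = model.graph
inits = {init.name: init for init in graph.initializer}

def dequantize_weight(b_q_name, b_zp_name, b_scale_name):
    """Turn an int8 quantized weight initializer back into float32."""
    b_q = numpy_helper.to_array(inits[b_q_name]).astype(np.float32)
    b_zp = numpy_helper.to_array(inits[b_zp_name]).astype(np.float32)
    b_scale = numpy_helper.to_array(inits[b_scale_name]).astype(np.float32)
    return (b_q - b_zp) * b_scale

# Dequantize the weight of one MatMulInteger node and add it as a new initializer.
b_float = dequantize_weight("B_q", "B_zp", "B_scale")
graph.initializer.append(numpy_helper.from_array(b_float, name="B_float"))

# Plain float32 MatMul: "A_float" is the activation that fed DynamicQuantizeLinear,
# "C" is the tensor the original rescaled output was feeding into.
new_node = helper.make_node("MatMul", inputs=["A_float", "B_float"], outputs=["C"])
graph.node.append(new_node)

# Once the old DynamicQuantizeLinear / MatMulInteger / Cast / Mul nodes are removed,
# validate the rewritten graph before passing it to stedgeai.
onnx.checker.check_model(model)
onnx.save(model, "sam_model_float_matmul.onnx")
```

Keep in mind that float32 weights also take about four times the flash of int8 ones, which may matter on an MCU.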
Have a good day,
Julian
2025-04-10 10:12 AM
Hi Julian,
Thanks a bunch!
I've used alternative layer operations before for other AI architectures I customized on my own, where I could tolerate some loss of model accuracy.
If that is indeed the case, I'll investigate other avenues. Appreciate all your help, thanks!