MatMul operation is converted to an MCU-target Convolution

mincho00
Associate II

Hello. I am currently attempting to execute a MatMul operation on the NPU. I have created a simple TFLite model containing a single MatMul operation, as shown in the image below.

mincho00_3-1765548820534.png

When converting the model with ST Edge AI, I observed that the MatMul operator is mapped to the MCU instead of the NPU, and that the operation is converted into a Convolution. Inspecting the generated network.c file confirms that it becomes a convolution layer with an extremely large stride.
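For background on why a compiler might express a MatMul as a convolution at all: a matrix multiply y = x @ W is mathematically equivalent to a pointwise (1x1) convolution, with the rows of x treated as spatial positions and the columns of W as output-channel filters. The NumPy sketch below is only an illustration of that equivalence; it makes no claim about how ST Edge AI actually performs the lowering or why it chooses the large stride.

```python
import numpy as np

# Illustrative only (not ST Edge AI internals): a MatMul
# y = x @ W with x of shape (N, K) and W of shape (K, M) equals a
# 1x1 "convolution" over N spatial positions, with K input channels
# and M output channels.
rng = np.random.default_rng(0)
N, K, M = 4, 8, 5
x = rng.standard_normal((N, K))
W = rng.standard_normal((K, M))

# Plain matrix multiply.
y_matmul = x @ W

# Same computation phrased as a pointwise convolution: each output
# channel m at position pos is the dot product of that position's
# K input channels with the filter W[:, m].
y_conv = np.empty((N, M))
for pos in range(N):      # spatial positions
    for m in range(M):    # output channels
        y_conv[pos, m] = np.dot(x[pos, :], W[:, m])

assert np.allclose(y_matmul, y_conv)
```

So a Conv mapping is not wrong per se; the question remains why the operator lands on the MCU rather than the NPU.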

mincho00_1-1765548640104.png

mincho00_4-1765548858606.png


However, the operator support page at https://stm32ai-cs.st.com/assets/embedded-docs/stneuralart_operator_support.html states that the MatMul (batch MatMul) operator is supported on the ST Neural-ART accelerator.

How can I resolve this issue? Thank you.
