Issue with 16-bit input and 16-bit weight convolution on STM32N6 ConvAcc

yangweidage
Associate II

Hi all, 

According to the STM32N6 reference manual (see attached figure), both the diagram and the text description state that the ConvAcc should support 16-bit input × 16-bit weight (16×16) operations.

[Attached screenshot (2025-09-12): ConvAcc section of the STM32N6 reference manual]

To verify this, I designed a simple test model (see the attached files; a plain-C reference of the expected result is sketched after the list below): 

  • Input: 4×4 tensor (all ones)

  • Kernel: 3×3 (all ones)

  • Expected output: 2×2 tensor

 
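For reference, the expected output comes from a plain-C implementation of the same convolution (no accelerator involved), sketched below; with an all-ones 4×4 input and an all-ones 3×3 kernel, every element of the 2×2 output should be 9:

#include <stdio.h>
#include <stdint.h>

/* Plain-C reference for the test case: 4x4 all-ones input, 3x3 all-ones
 * kernel, no padding, stride 1 -> 2x2 output, every element = 9. */
int main(void)
{
    int16_t in[4][4], ker[3][3];
    int32_t out[2][2];

    for (int r = 0; r < 4; r++) for (int c = 0; c < 4; c++) in[r][c] = 1;
    for (int r = 0; r < 3; r++) for (int c = 0; c < 3; c++) ker[r][c] = 1;

    for (int r = 0; r < 2; r++) {
        for (int c = 0; c < 2; c++) {
            int32_t acc = 0;
            for (int kr = 0; kr < 3; kr++)
                for (int kc = 0; kc < 3; kc++)
                    acc += (int32_t)in[r + kr][c + kc] * ker[kr][kc];
            out[r][c] = acc;
            printf("out[%d][%d] = %ld\n", r, c, (long)out[r][c]);
        }
    }
    return 0;
}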

When I set the SIMD field to 1, the ConvAcc performs 8-bit input × 8-bit weight (8×8) operations, and the output matches the expected result.

When I set the SIMD field to 2, the ConvAcc performs 16-bit input × 8-bit weight (16×8) operations, and the results are also correct.
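For context, the only field I change between those two runs is simd; the rest of the init structure (geometry, buffer addresses, kseten, and so on) stays identical. A minimal sketch of that switch, using only the field names I listed above and a hypothetical helper name of my own:

/* LL_Convacc_InitTypeDef comes from the ST LL driver header for the
 * ConvAcc (include the header your firmware package provides). */

/* Hypothetical helper from my test harness: switches the ConvAcc data
 * width between runs while leaving every other field untouched. */
static void set_convacc_simd(LL_Convacc_InitTypeDef *cfg, unsigned simd)
{
    /* simd = 1 -> 8-bit input  x 8-bit weight : output matches the reference
     * simd = 2 -> 16-bit input x 8-bit weight : output matches the reference
     * 16-bit input x 16-bit weight            : unclear how to select (my question) */
    cfg->simd = simd;
}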

My questions are:

  1. Does the ConvAcc really support 16×16 convolution (16-bit input × 16-bit weight) as described in the reference manual?

  2. If yes, how should I correctly configure the fields of LL_Convacc_InitTypeDef to enable 16×16 operation? (e.g., simd, inbytes_f, outbytes_o, kseten, etc.)

 
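For question 2, this is the kind of configuration I have been guessing at for the 16×16 case. Every value below is an assumption on my part, extrapolated from the working 16×8 run rather than taken from documentation:

LL_Convacc_InitTypeDef conv_init = {0};   /* geometry/addresses set as in my 16x8 test, omitted here */

/* All of the following are guesses -- this is exactly what I am asking about: */
conv_init.simd       = 2;   /* or is there a different value / extra flag for 16x16? */
conv_init.inbytes_f  = 2;   /* 16-bit input features, i.e. 2 bytes per element? */
conv_init.outbytes_o = 2;   /* 16-bit output, or wider to hold the accumulation? */
conv_init.kseten     = 0;   /* does kernel-set handling change with 16-bit weights? */

/* ...followed by the same init/start sequence as my working 16x8 test. */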


Thanks in advance!