2024-07-19 12:15 PM
Hello All,
I needed a very fast vector addition routine for the H7, so I wrote it in assembly. However, I discovered that very small changes to the code caused huge differences in execution time. Both of the following functions add a 16-bit vector to a 32-bit sum vector. The vectors are 32-bit word aligned; the sum vector is in DTCM and the raw data vector is in external SRAM. The first routine adds sequential data to the sum, and the second adds every fourth point of a 4x larger vector to a sum vector of the same size. So both process the same amount of data, but the first is 3 times faster. Does anyone know what would cause such a large difference in execution time for these nearly identical functions?
Thanks
Dan
3X faster one
loop:
    LDRH r3, [r0, r2, lsl #1]  // load 16-bit raw data into r3
    LDR  r4, [r1, r2, lsl #2]  // load 32-bit sum into r4
    ADD  r4, r3                // add raw data to sum
    STR  r4, [r1, r2, lsl #2]  // store new sum
    SUBS r2, #1                // next data point
    BPL  loop
Slower one
loop:
    LDRH r3, [r0, r2, lsl #1]  // load 16-bit raw OTDR data into r3
    LDR  r4, [r1, r2]          // load 32-bit sum into r4
    ADD  r4, r3                // add raw data to sum
    STR  r4, [r1, r2]          // store new sum
    SUBS r2, #4                // next data point
    BPL  loop
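For reference, the two loops correspond roughly to this plain C (function and variable names are mine, not from the original code):

```c
#include <stdint.h>
#include <stddef.h>

/* Sequential: add each 16-bit sample to the matching 32-bit sum. */
void add_seq(uint32_t *sum, const uint16_t *raw, size_t n)
{
    for (size_t i = 0; i < n; i++)
        sum[i] += raw[i];
}

/* Strided: add every fourth sample of a 4x larger raw vector
   to a sum vector of the same size as before. */
void add_stride4(uint32_t *sum, const uint16_t *raw, size_t n)
{
    for (size_t i = 0; i < n; i++)
        sum[i] += raw[4 * i];
}
```

Both functions perform exactly n loads from the raw vector and n read-modify-writes of the sum vector; only the stride of the raw-data accesses differs.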
2024-07-19 12:33 PM
Where do you execute that loop from (which memory, what alignment)?
Cortex-M7 is not your friendly cycle-precise core, but an overcomplicated caching, mildly superscalar beast.
JW
2024-07-19 12:49 PM
The second loop touches 4x the total memory, so it misses the cache roughly 4x as often. Memory is loaded one cache line at a time; on the Cortex-M7 a data-cache line is 32 bytes. In the first loop you get 16 of the 16-bit data points per line fill; in the second, only 4.
What you're seeing is that keeping data within the cache makes a huge difference in speed.
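With the M7's 32-byte data-cache line, the per-line sample counts work out as follows (a small illustrative helper, assuming 16-bit samples):

```c
#include <stdint.h>

#define CACHE_LINE_BYTES 32u  /* Cortex-M7 D-cache line size */

/* How many 16-bit samples a single cache-line fill delivers
   when the loop reads the raw vector at a given element stride. */
static unsigned samples_per_line(unsigned stride_elems)
{
    return CACHE_LINE_BYTES / ((unsigned)sizeof(uint16_t) * stride_elems);
}
```

So a sequential read (stride 1) gets 16 useful samples per line fill, while a stride of 4 gets only 4, meaning the strided loop triggers roughly 4x as many line fills from external SRAM for the same number of samples processed.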
2024-07-19 01:05 PM
Hi,
just for info: did you write it in plain C with the optimizer set to -Ofast?
How does that compare to your asm?
2024-07-19 01:38 PM
I am using the ICache, but I'm not sure about the instruction alignment.
Dan