2014-03-04 03:23 PM
Hi all,
I saw many examples using the SysTick interrupt handler to generate an accurate delay (like the example below). My question is pretty simple: why does ''SysTick_Config(SystemCoreClock / 1000)'' generate a 1 millisecond delay? I need to measure the execution time of each section of code in my project, which might reach the microsecond level, so I'm trying to understand the SysTick timing calculation.

#include "LPC17xx.h"

uint32_t msTicks = 0;                  /* Variable to store millisecond ticks */

void SysTick_Handler(void) {           /* SysTick interrupt handler; see startup file startup_LPC17xx.s for the SysTick vector */
  msTicks++;
}

int main(void) {
  uint32_t returnCode;
  returnCode = SysTick_Config(SystemCoreClock / 1000); /* Configure SysTick to generate an interrupt every millisecond */
  if (returnCode != 0) {               /* Check return code for errors */
    // Error Handling
  }
  while (1);
}
I'm using STM32L. Thanks a lot!! ~S

2014-03-04 06:20 PM
SystemCoreClock contains the number of clock ticks in one second. 1 ms is 1/1000th of a second, so SystemCoreClock / 1000 is the number of ticks in 1 ms. The SysTick counter counts down from that value, and every time this number of ticks has elapsed the interrupt fires.
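For illustration, a minimal busy-wait delay built on the msTicks counter from your example could look like the sketch below (the ''Delay'' helper is hypothetical, not part of the original code):

/* Hypothetical sketch: busy-wait for a given number of milliseconds.
   Assumes SysTick_Config(SystemCoreClock / 1000) succeeded and that
   msTicks is incremented in SysTick_Handler; msTicks should be declared
   volatile since it is modified from an interrupt. */
extern volatile uint32_t msTicks;

void Delay(uint32_t ms) {
  uint32_t start = msTicks;
  while ((msTicks - start) < ms) {
    /* spin; unsigned subtraction stays correct across msTicks wrap-around */
  }
}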
If I were benchmarking code I'd use the cycle counter, which ticks at 48 MHz (or whatever you're running the STM32L1 at), so ~21 ns granularity. Read the value before and after the code being measured, then subtract the start time from the end time.
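For completeness, here is a minimal sketch of that cycle-counter approach using the standard CMSIS DWT and CoreDebug registers on the STM32L1's Cortex-M3 core (''measure_target'' is a hypothetical stand-in for the code section being timed):

#include "stm32l1xx.h"             /* CMSIS device header; pulls in the core DWT/CoreDebug definitions */

extern void measure_target(void);  /* hypothetical: the code section being measured */

uint32_t benchmark_cycles(void) {
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable the trace/debug blocks, including DWT */
  DWT->CYCCNT = 0;                                /* reset the cycle counter */
  DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;           /* start counting CPU cycles */

  uint32_t start = DWT->CYCCNT;    /* read before */
  measure_target();
  uint32_t end = DWT->CYCCNT;      /* read after */

  return end - start;              /* elapsed cycles; divide by SystemCoreClock for seconds */
}

At 48 MHz, each cycle is about 20.8 ns, which is where the ~21 ns granularity comes from.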
2014-03-05 11:24 AM
I see. Thanks for your explanation. I've learned a lot about ST clock management over these two days. :D