2022-02-22 12:58 AM
2022-02-22 03:12 AM
I assume you plan to run a tight loop equivalent to
for (volatile uint32_t i = 0; i < 100000; ++i) { }
(the volatile is needed, or the compiler will simply optimise the empty loop away). Although that seems easy (once calibrated for how many iterations you need per microsecond), it is often the wrong approach.
For a start, if you have anything going on in the background, such as interrupt service routines or DMA transfers, these will steal cycles from the loop, so your timing will be off.
Also, we often want to save power because the circuit is battery-powered. If there is no useful processing to be done, you can save power by putting the CPU to sleep with a wait-for-interrupt (WFI) instruction.
So what do I (in my arrogance) regard as the "right" way to have a time-delay?
I like to set a timer running, and then have the timer interrupt/wake up the CPU when the desired delay has elapsed.
Having said that, if you wish to run a timing loop in software, the easiest approach is to be empirical: set up a loop of (say) 100,000 counts and see how long it takes to execute, perhaps turning an LED on at the start and off at the end.
Hope this helps,
Danish
2022-02-22 10:13 AM
Count machine cycles, taking into account the performance of the FLASH, which is slow on the STM32F1 series.
You can calibrate/quantify delays in a given instance by toggling GPIO and scoping the duty/frequency.
The core has a cycle counter in the DWT debug unit (the CYCCNT register). I don't recall whether it accounts for the wait states inserted by the FLASH; I haven't actively worked with F1 devices in over a decade. It shouldn't be hard to test and quantify, with a little bit of thought/imagination..