STM32 Timer event in foreground vs hardware Timer

Chris Pad
Associate II

I've a colleague who is using the 32-bit timer of an STM32 to trigger an interrupt-handled pin toggle, rather than using the 16-bit hardware timer on the pin that needs toggling. The frequency on that pin varies a lot in use. When I wrote the bring-up code I used the 16-bit timer and a prescaler, and that is a very small sacrifice compared to using 32-bit resolution, particularly as the more interesting frequencies are high enough to have ARR below 2^6 for a PSC value of 0.
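
Roughly, the bring-up approach was along these lines; a register-level sketch in which the timer instance (TIM15), the 72 MHz clock figure and the pin mapping are placeholders rather than the real board, and the GPIO alternate-function setup is omitted:

#include "stm32f3xx.h"

#define TIM_CLK_HZ  72000000UL                  /* assumed timer kernel clock */

static void pin_toggle_from_timer(uint32_t f_out_hz)
{
    uint32_t ticks = TIM_CLK_HZ / (2UL * f_out_hz);   /* two toggles per output period     */
    uint32_t psc   = ticks >> 16;                     /* 0 at the frequencies of interest  */
    uint32_t arr   = ticks / (psc + 1U) - 1U;

    RCC->APB2ENR |= RCC_APB2ENR_TIM15EN;
    TIM15->PSC    = (uint16_t)psc;
    TIM15->ARR    = (uint16_t)arr;
    TIM15->CCR1   = 0;                                      /* match at counter reload      */
    TIM15->CCMR1  = TIM_CCMR1_OC1M_0 | TIM_CCMR1_OC1M_1;    /* OC1M=011: toggle on match    */
    TIM15->CCER   = TIM_CCER_CC1E;                          /* route OC1 to the pin         */
    TIM15->BDTR   = TIM_BDTR_MOE;                           /* TIM15 has a break block      */
    TIM15->EGR    = TIM_EGR_UG;                             /* latch PSC/ARR                */
    TIM15->CR1   |= TIM_CR1_CEN;
}

Once it's running, retuning the frequency is just rewriting PSC/ARR (ideally with preload enabled), still with no ISR anywhere in the edge path.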

Presuming the interrupt is of the highest priority, I can't think of a good argument other than the intelligence of the interrupt approach. Is it correct to assume that the propagation delay of the pin toggle will be constant and therefore not a source of jitter? The frequency in use doesn't get high enough for the propagation delay to limit the operating frequency.

The colleague is a software developer rather than an embedded one and I often find I know that something isn't the way I would do it, but I can't put my finger on why.


Those who insist on being called "software guy" often consider hardware as a necessary evil, something hard to capture. That's one of the sources of popularity of Hardware Abstraction schemes.

And that's OK, we are all different. Those who indulge in the minutiae of hardware then often struggle with (semi)abstract concepts of software architecture.

Toggling a random pin in an interrupt is probably conceptually easier to grasp than seeking out the hardware connections and then resolving the potentially needed split of the reload value, even if the latter generally offers better *control* (in terms of latency and jitter).
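
For what it's worth, that split is only a few lines once written down. A sketch (the names are illustrative, not from any library):

#include <stdint.h>

typedef struct { uint16_t psc; uint16_t arr; } tim_split_t;

/* Factor a (possibly long) period in timer ticks into PSC and ARR so both
   fit in 16 bits.  This just picks the smallest prescaler that fits; a real
   version would also minimise the rounding error. */
static tim_split_t split_period(uint32_t ticks)
{
    uint32_t psc = (ticks - 1U) >> 16;
    tim_split_t s;
    s.psc = (uint16_t)psc;
    s.arr = (uint16_t)(ticks / (psc + 1U) - 1U);
    return s;
}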

JW

TDK
Guru

I see no issue with using a 32-bit timer rather than a 16-bit one if it's available.

> Is it correct to assume that the propagation delay of the pin toggle will be constant and therefore not a source of jitter?

The propagation delay due to the IRQ request? You should see a jitter of a few ticks in normal scenarios, but up to 20-30 if another interrupt is already being handled.

Calling an interrupt every 2^6 ticks is going to tank your CPU resources.
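
For scale: assuming a 72 MHz core and an update interrupt every 2^6 ticks, that's an ISR firing at roughly 72 MHz / 64 ≈ 1.1 MHz; even a lean 30-cycle handler then eats close to half the CPU.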

Chris Pad
Associate II

It's not the 32 vs 16 bit query really, it's the hardware Timer controlled pin vs IRQ controlled pin.

> The propagation delay due to the IRQ request?

Yes, that's what I was wondering. The hardware-timer-controlled pin wouldn't have any CPU-induced jitter, but you think the IRQ-based one would, even if only a few ticks.

> but up to 20-30 if another interrupt is already being handled.

Is that only if the other interrupt has the same priority as the one servicing the Timer?

I don't subscribe to any particular number of cycles in this case.

If the other ISR has the same or higher "group" priority, then on top of the "natural" latency causes (e.g. uninterruptible multicycle instructions, instructions which wait out bus contention, bus contention during the stacking/unstacking operations, the latency of fetching the ISR's instructions, etc. etc.), the whole duration of the other ISR (less the tail-chaining saving) is imposed. Multiple same-or-higher-priority ISRs add up.

20-30 cycles of execution time per ISR is just about the minimum for a reasonably well written ISR. 24 cycles is the entry+exit alone (although for the purposes of another ISR's latency it's somewhat less thanks to tail-chaining, maybe half; I don't remember the exact number as it's mostly meaningless given the vast number of other influences), and that does not include the prologue/epilogue added by the compiler. If you use Cube or any other "library", don't enable compiler optimizations, or perform time-consuming and/or complex operations within the ISR, I'd count hundreds to thousands of cycles for the basic ISRs as they come from clicking in CubeMX.

Jitter is of course just a portion of the latency, and it is very hard to estimate, but it will be somewhat proportional to the latency, too.

If this is the highest-priority ISR, there's no latency imposed by other ISRs as such, leaving only the "natural" latency/jitter sources. These roughly tend to increase with the increasing "computing power" and thus complexity of the mcu, but again, there are many, many sources and there's no simple way to estimate, let alone calculate, these.
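
If actual numbers are wanted, one practical way to see that latency/jitter is simply to read the timer's own counter at ISR entry. A minimal sketch, assuming an upcounting timer whose counter restarts from 0 at the update event; the timer instance, pin and NVIC setup are placeholders:

#include "stm32f3xx.h"

volatile uint32_t worst_latency_ticks;       /* max timer ticks from update event to ISR entry */

void TIM2_IRQHandler(void)
{
    uint32_t now = TIM2->CNT;                /* counter restarted at 0 on the update event      */
    TIM2->SR = ~TIM_SR_UIF;                  /* clear the update flag early                     */

    if (now > worst_latency_ticks)
        worst_latency_ticks = now;           /* running worst case = latency + jitter           */

    GPIOA->ODR ^= (1U << 5);                 /* the IRQ-driven toggle being discussed (PA5 assumed) */
}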

JW

PS. Which STM32?

PS2. Just for the laughs, look up discussions about latency of GPIO pin toggling in 'H7 here. No, that's not even ISR, just pure pin toggling.

PS3. [self-promotion] assorted rants [/self-promotion]

> It's not the 32 vs 16 bit query really, it's the hardware Timer controlled pin vs IRQ controlled pin.

A timer PWM pin is superior in terms of performance, but it's also not available on every pin.

> Is that only if the other interrupt has the same priority as the one servicing the Timer?

No. If the task switch to an IRQ just started (which takes ~12 ticks) and the timer IRQ triggers at the same time, it will take another ~12 ticks to get there. So ~24 total, plus a few more for various things.

Edit: seems this is wrong. See "Late Arrival":

https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/beginner-guide-on-interrupt-latency-and-interrupt-latency-of-the-arm-cortex-m-processors

If the two are the same priority, the timer won't pre-empt at all and it will take even longer.

It's easy to criticize. If you have questions on the design, I'd at least ask the colleague first, maybe there's a reason. Could be that performance doesn't matter here.

Chris Pad
Associate II

I presume he has this interrupt as the only highest-priority interrupt, but it will still be subject to at least the multi-cycle instruction delay of a lower-priority interrupt, wouldn't it?

Also, you can clock the timers at 2x CPU clock so there is a loss there as well.

TDK
Guru

> Also, you can clock the timers at 2x CPU clock so there is a loss there as well.

Can you? Which chip?


> Also, you can clock the timers at 2x CPU clock so there is a loss there as well.

STM32F334 (but true of many F3s)

I implemented a 16-bit Timer on the pin directly and know that works well. He's gone against this recommendation and implemented it using the 32-bit Timer instead, and I wanted to check my understanding that this wasn't the right way to go before having the discussion.

I guess the smart way to do it, if you really wanted to use the 32-bit Timer, would be the following (roughly sketched below):

  • Set up the 32-bit Timer with the desired period
  • Set up a 16-bit Timer on a pin, triggered by the 32-bit Timer, to govern the required pulse width (triggered by the 32-bit Timer but clocked at 144MHz)
  • Set up a CPU interrupt, triggered by the 32-bit Timer, to do the per-cycle compute that needs to be done and which therefore isn't timing critical
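
A rough register-level sketch of that arrangement, assuming an F334-class part with TIM2 as the 32-bit master and TIM15 as the pulse-shaping slave; the ITR trigger mapping, NVIC and GPIO setup are left out and would need checking against the reference manual:

#include "stm32f3xx.h"

void pulse_gen_init(uint32_t period_ticks, uint16_t pulse_ticks)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;      /* 32-bit master        */
    RCC->APB2ENR |= RCC_APB2ENR_TIM15EN;     /* 16-bit pulse shaper  */

    /* Master: TIM2 sets the period and emits TRGO on each update event. */
    TIM2->ARR  = period_ticks - 1U;
    TIM2->CR2  = (TIM2->CR2 & ~TIM_CR2_MMS) | TIM_CR2_MMS_1;   /* MMS=010: update -> TRGO */
    TIM2->DIER = TIM_DIER_UIE;               /* per-cycle compute in this IRQ; NVIC enable omitted */

    /* Slave: TIM15 started by TIM2's TRGO, one pulse per trigger.
       TS left at ITR0 here; check the ITRx mapping for the chosen timer pair. */
    TIM15->SMCR  = TIM_SMCR_SMS_2 | TIM_SMCR_SMS_1;             /* SMS=110: trigger mode   */
    TIM15->CR1  |= TIM_CR1_OPM;                                 /* stop after one pulse    */
    TIM15->CCR1  = 1U;                                          /* 1-tick delay            */
    TIM15->ARR   = (uint32_t)pulse_ticks + 1U;                  /* pulse = ARR - CCR1      */
    TIM15->CCMR1 = TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1 | TIM_CCMR1_OC1M_0;  /* PWM mode 2  */
    TIM15->CCER  = TIM_CCER_CC1E;                               /* route OC1 to the pin    */
    TIM15->BDTR  = TIM_BDTR_MOE;                                /* TIM15 has a break block */

    TIM2->CR1 |= TIM_CR1_CEN;                /* only the master free-runs; TIM15 waits for triggers */
}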

That could even be as ugly as having the 32-bit Timer run at 2*f, where f is the desired pin frequency, and having the 16-bit Timer clocked by the 32-bit Timer with period 2 and pulse length 1. That would be subject to less jitter than an IRQ-based pin toggle.

Nasty; it's much cleaner to have the 16-bit Timer govern the pin, clocked off the 144MHz source.

I don't see how you're getting a 144MHz timer from a 72MHz clock speed. The x2 timer speed boost is only possible if the APB prescaler is more than /1.
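
In other words, on an F334 with SYSCLK at 72 MHz and the APB1 prescaler at /2, PCLK1 is 36 MHz and the TIM2/TIM3 kernel clock is 2 x 36 = 72 MHz, i.e. still capped at SYSCLK.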
