
How can it be that the number of lines of code in main() affects the time it takes to execute an interrupt routine?

MB7
Associate II

Hi community,

This question is all about the behavior of the DMA1_Channel1_IRQHandler in my application.

I am working with an STM32G431CBT running at 168 MHz, configured with 4 Flash wait states.

NVIC interrupt priorities are set up using CMSIS functions as follows:

SystemCoreClockUpdate();
clock = SystemCoreClock;
SysTick_Config(clock/1e3);
 
NVIC_SetPriorityGrouping(0);
 
NVIC_DisableIRQ(SysTick_IRQn); // note: SysTick is a core exception (IRQn < 0); CMSIS NVIC_Enable/DisableIRQ ignore negative IRQn, only NVIC_SetPriority takes effect here
irq_prio = NVIC_EncodePriority(NVIC_GetPriorityGrouping(), 10, 0);
NVIC_SetPriority(SysTick_IRQn, irq_prio);
NVIC_EnableIRQ(SysTick_IRQn);
 
irq_prio = NVIC_EncodePriority(NVIC_GetPriorityGrouping(), 13, 0);
NVIC_SetPriority(USART2_IRQn, irq_prio);
NVIC_EnableIRQ(USART2_IRQn);
 
irq_prio = NVIC_EncodePriority(NVIC_GetPriorityGrouping(), 0, 0); //highest priority!!!
NVIC_SetPriority(DMA1_Channel1_IRQn, irq_prio);
NVIC_EnableIRQ(DMA1_Channel1_IRQn);
 
 

TIM1->CCR5 (on match) triggers the ADC conversion sequence. The ADC triggers DMA1 Channel 1 (circular, peripheral-to-memory). DMA1 is set up to issue an interrupt on transfer complete (TCIE).

The DMA1_Channel1_IRQHandler (highest priority) then fires, and a few things are processed and calculated within it.

My code compiles with no errors or warnings, and as far as I can see it all works as intended.

But here comes the strange thing:

Right at the start of DMA1_Channel1_IRQHandler I set a GPIO pin high; at the end of DMA1_Channel1_IRQHandler I set the same pin low. Looking at the generated high-low pulse on an oscilloscope, I can measure the execution time of DMA1_Channel1_IRQHandler: it takes 1.69 µs.

But if I shrink the code in main() by a few lines, I can see that the execution time gets lower! And if I put just an empty while(1) loop in main(), the execution time drops to its lowest: 1.22 µs!

How can that be? An ISR should not be affected in such a way by the amount of code in a low-priority context like main().

Anybody any ideas on that?

Cheers

11 REPLIES
MB7
Associate II

I changed the setting of the IO pin to use the BSRR register, but that did not make a difference.

Next I tried moving the "GPIOC->BSRR |= (1U << (14+16)); // Set pin low" line upwards in the ISR code, to see whether there is a point from which on the ISR execution time no longer depends on the amount of code in main(). Of course the measured time gets smaller as setting high and setting low move closer together. But I have to conclude: such a point does not exist. Well... besides placing set pin high and set pin low right next to each other:

"GPIOC->BSRR |= (1U << (14)); //Set Pin high"

"GPIOC->BSRR |= (1U << (14+16)); //Set Pin low"

But in general: no matter where I place the "// Set pin low" write, there is always a dependency of the ISR execution time on the amount of code in main().

All variables used in the ISR are declared volatile.

For testing I also told the compiler not to use the FPU. This resulted in a much longer ISR execution time, but the issue was still present.

For testing I disabled all interrupts (even SysTick) except DMA1_Channel1_IRQHandler --> issue still present.

But then I found one interesting effect: telling the compiler to optimize for speed (-Ofast) instead of size (-Os) makes the problem disappear. No matter how much code is in main(), the ISR execution time is always 1.2 µs.

So, as setting the optimization level to -Ofast is more of a workaround than a solution to the root of the problem, I have to ask once again: with all that information and background, has anybody any idea what is wrong here? A bug in the compiler/linker, or even in silicon?

Can anybody confirm that behavior?

S.Ma
Principal

Check the asm generated in debug mode, or generate the asm listings and file-compare them if you need to reach full understanding. It might reveal an obvious truth...