
Why different running times between Simulation and Emulation?

helixwmonkey
Associate II
Posted on April 13, 2014 at 03:03

I have several lines of code in TIM1_CC_IRQHandler and need to know their exact running time because of some real-time requirements.

So when I run the simulation, I measure the time span between the entrance and the exit of the ISR. I note the value of TIM1->CNT at the entrance: the start time is 0x0242. Then I set a breakpoint at the exit of the ISR and run. At the exit, TIM1->CNT is 0x0288.

(I have enabled DBGMCU->CR |= DBGMCU_TIM1_STOP in the initialization.)

BUT when I run on the real hardware through a J-Link (emulation), the start time is 0x0245 and the exit time is 0x02DD.

That was a big shock to me.

The simulation's timespan differs from the emulation's timespan for the same code!

The simulation's timespan is 70 system clocks in decimal (TIM1 runs at 72 MHz), while the emulation's timespan is 152 system clocks. More than double!

So what is wrong? In my project I must know the exact time of the ISR. Which timespan should I trust?

Below is my ISR code:

#include <stm32f10x.h>

#define DBGMCU_TIM1_STOP  ((u32)0x00000400)

u16 CCR_Acc_x;
u16 CCR_Acc_y;

int main(void)
{
    RCC->APB2ENR = 0x0804;             //Enable TIM1 and GPIOA clocks
    GPIOA->CRH = 0xBB;                 //PA8, PA9 alternate-function push-pull, 50MHz
    TIM1->ARR = 0xffff;                //ARR maximum; I use CCR1 and CCR2 compare-match interrupts
    TIM1->DIER = 0x06;                 //Enable CC1IE and CC2IE interrupts
    TIM1->CCMR1 = 0x3030;              //CH1 and CH2 are both compare output (toggle)
    TIM1->BDTR = 0x8000;               //Enable MOE
    TIM1->CR1 |= TIM_CR1_CEN;          //Enable counter

    CCR_Acc_x = 562;
    CCR_Acc_y = 18000;
    TIM1->CCR1 = CCR_Acc_x;
    TIM1->CCR2 = CCR_Acc_y;
    TIM1->CCR3 = 0xffff;
    TIM1->CCR4 = 0xffff;

    NVIC_SetPriorityGrouping(4);       //3 bits preemption
    NVIC_SetPriority(TIM1_CC_IRQn, 0); //highest
    NVIC_EnableIRQ(TIM1_CC_IRQn);

    DBGMCU->CR |= DBGMCU_TIM1_STOP;    //Freeze TIM1 while the core is halted

    while (1)
    {
    }
}

u16 TIM1_SR_mask;

void TIM1_CC_IRQHandler(void)
{
    //entrance breakpoint here
    TIM1_SR_mask = TIM1->SR;
    if (TIM1_SR_mask & TIM_SR_CC1IF)   //CC1IF
    {
        TIM1->SR = ~TIM_SR_CC1IF;      //Clear only CC1IF (rc_w0: write 0 to clear)
        TIM1->CCR1 = TIM1->CCR1 + CCR_Acc_x;
    }
    if (TIM1_SR_mask & TIM_SR_CC2IF)   //CC2IF
    {
        TIM1->SR = ~TIM_SR_CC2IF;      //Clear only CC2IF
    }

    //Re-read SR: if another compare match arrived meanwhile, re-pend the IRQ
    TIM1_SR_mask = TIM1->SR;
    if (TIM1_SR_mask & TIM_SR_CC1IF)   //CC1IF
    {
        NVIC_SetPendingIRQ(TIM1_CC_IRQn);
    }
    if (TIM1_SR_mask & TIM_SR_CC2IF)   //CC2IF
    {
        NVIC_SetPendingIRQ(TIM1_CC_IRQn);
    }
    //exit breakpoint here
}

My chip: STM32F103ZET6

My IDE: Keil MDK 4.73

J-Link 4.76d

Posted on April 13, 2014 at 03:17

What did I tell you about using TIM1->SR &= ?

The simulator probably doesn't understand flash wait states.
helixwmonkey
Associate II
Posted on April 13, 2014 at 03:47

Oops, sorry, the tested code actually used TIM1->SR = ~TIM_SR_CC1IF;

It was a mistake that I copied the old code here. The differing timespans were obtained with =, not &=.

By the way, I followed your advice and you are right. The previous problem has almost been solved; later I will do a more thorough check and report the progress and any new problems. Thanks a lot for that!

helixwmonkey
Associate II
Posted on April 13, 2014 at 04:52

Do you mean that even if I use TIM1->SR =, the flash wait states will still be there, so the difference between simulation and emulation will always exist? By ''flash wait states'', do you mean the time cost while the core is fetching code from flash, am I right?