Timer delay deviation with STM32F051

evert2
Associate II
Posted on January 24, 2013 at 16:18

Hello,

I'm struggling with a timer delay function.

I've made a delay_us() function using TIM14.

However, when the delay is close to zero I measure a lot of deviation.

Here is my initialisation:

#define SYS_CLK 48000000            /* in this example we use SPEED_HIGH = 48 MHz */
#define DELAY_TIM_FREQUENCY 1000000 /* = 1 MHz -> timer counts in microseconds */

   /* Time base configuration */
   TIM_TimeBaseInitTypeDef TIM_TimeBaseStructure;
   TIM_TimeBaseStructInit(&TIM_TimeBaseStructure);
   TIM_TimeBaseStructure.TIM_Prescaler = (SYS_CLK / DELAY_TIM_FREQUENCY) - 1;
   TIM_TimeBaseStructure.TIM_Period = UINT16_MAX;
   TIM_TimeBaseStructure.TIM_ClockDivision = 0;
   TIM_TimeBaseStructure.TIM_CounterMode = TIM_CounterMode_Up;
   TIM_TimeBaseInit(TIM14, &TIM_TimeBaseStructure);

   /* Enable counter */
   TIM_Cmd(TIM14, ENABLE);

This is my delay function:

void delay_us(uint16_t us) {
   uint16_t start = TIM14->CNT;
   /* 16-bit wrap-around arithmetic keeps the compare valid across counter overflow */
   while ((uint16_t)(TIM14->CNT - start) <= us);
}

These are the deviations, measured on an output pin that is toggled before and after the delay:

delay_us(1)   -> actual delay = 2.55 us
delay_us(10)  -> actual delay = 11.24 us
delay_us(100) -> actual delay = 100.08 us

I've measured all the clock signals on the MCO pin and they are what I expected, as is the PLL.

Does anyone have an idea what is going on?

Regards,

Evert Huijben

3 REPLIES
Posted on January 24, 2013 at 16:55

Why wouldn't you use a finer granularity in your timer?

Say you read CNT just before it ticked: the delay for it to reach +1 approaches zero.

Why not let it clock at 48 MHz? You'd have ~21 ns granularity instead of 1000 ns, and for delays over ~1.3 ms (where the 16-bit counter wraps) you can decimate 1 ms at a time. Your accuracy could still be sub-microsecond.
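A minimal sketch of that idea, assuming the same SPL init as in the original post but with TIM_Prescaler = 0 so TIM14 ticks at the full 48 MHz (the function names and the 1 ms chunk size are illustrative, not from the original):

#define TICKS_PER_US 48 /* 48 MHz timer clock */

static void delay_ticks(uint16_t ticks) {
   uint16_t start = TIM14->CNT;
   /* same 16-bit wrap-around compare as the original loop */
   while ((uint16_t)(TIM14->CNT - start) < ticks);
}

void delay_us_fine(uint32_t us) {
   /* the 16-bit counter wraps after 65536/48 ~= 1365 us,
      so burn long delays down 1 ms at a time */
   while (us > 1000) {
      delay_ticks(1000 * TICKS_PER_US); /* 48000 ticks, still fits in 16 bits */
      us -= 1000;
   }
   delay_ticks((uint16_t)(us * TICKS_PER_US));
}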

You need to look at the code generated, but the periodicity of the check is likely to beat against the clock tick. You also need to account for call/return and setup overhead. Delay computation is a "y = mx + c" slope, where m is your per-iteration cost and c is your zero-delay path cost. For a 48 MHz clock m would be 48, and c would be less than 0, and hopefully bigger than -48; you have to calibrate it for your code implementation. This might be easier to do in assembler. A 1 ms delay would be around 48000 cycles, less a few dozen cycles of overhead.
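As a hedged sketch of that calibration step, reusing delay_ticks() from the sketch above (the overhead constant is a placeholder assumption, not a measured value; you must calibrate it on a scope):

/* cycles = m*us + c, with m = 48 cycles per microsecond at 48 MHz.
   CAL_OVERHEAD_TICKS stands in for the (negative) c term and is a
   hypothetical, board-specific value to be calibrated yourself. */
#define CAL_OVERHEAD_TICKS 40 /* placeholder, assumption */

void delay_us_cal(uint16_t us) {
   uint32_t ticks = (uint32_t)us * 48; /* the m*x term */
   /* subtract the fixed overhead, clamping at zero for very short delays */
   ticks = (ticks > CAL_OVERHEAD_TICKS) ? ticks - CAL_OVERHEAD_TICKS : 0;
   while (ticks > 48000) {             /* decimate as described above */
      delay_ticks(48000);
      ticks -= 48000;
   }
   delay_ticks((uint16_t)ticks);
}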

evert2
Associate II
Posted on January 24, 2013 at 21:29

Thank you Clive,

I don't have any special reason to use the given implementation; I just found this example somewhere.

Do you have any (assembly) example of a microsecond delay function?

Regards,

Evert

evert2
Associate II
Posted on January 25, 2013 at 10:07

In the meantime I've written a delay_us() in assembly:

static inline void delay_us(uint32_t us) {
   /* So (2^32)/12 microseconds max, or less than 6 minutes */
   us *= 12; /* 12 loop iterations per us: subs + taken bhi is ~4 cycles at 48 MHz */
   us -= 2;  /* offset seems to be around 2 cycles */
   us--;     /* fudge for function call overhead */

   __ASM volatile(" mov r0, %[us] \n\t"
                  ".syntax unified \n\t"
                  "1: subs r0, #1 \n\t"
                  ".syntax divided \n\t"
                  " bhi 1b \n\t"
                  :
                  : [us] "r" (us)
                  : "r0");
}

These are the measured delays:

delay_us(1)   -> 1.036 us
delay_us(10)  -> 10.04 us
delay_us(100) -> 100.04 us
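For reference, a hypothetical usage sketch for reproducing the measurement (the pin choice is illustrative and assumed to be already configured as a push-pull output):

/* generate a ~10 us high pulse to check on a scope;
   GPIOA pin 5 is an arbitrary, illustrative choice */
GPIO_SetBits(GPIOA, GPIO_Pin_5);
delay_us(10);
GPIO_ResetBits(GPIOA, GPIO_Pin_5);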

Any remarks are welcome,

Regards,

Evert Huijben