
Code works differently in debug mode than in real time

schperplata
Associate III
Posted on November 26, 2014 at 23:47

Hello.

I've got a problem with the STM32L100 (on a Discovery board). I've switched from Arduino, where I enjoyed the simplicity of printing data via UART and of the delay/millis functions. So I decided to write my own, and they served me well until I discovered that they don't work as they should.

The strangest thing is that in debug mode, stepping line by line, the code works flawlessly. When the code runs in normal mode, e.g. at 32 MHz, something weird is happening and I can't figure out what.

I've attached my own files for printing on USARTx and my own millis functions. Here is the main code, since the result printed on my computer is what bothers me.

main.c:

#include "main.h"

// STM32L100RCT6-Discovery clock setup function - SYSCLK = 32MHz, HSI.
void clockCorrection()
{
    RCC_DeInit();
    RCC_HSICmd(ENABLE);
    while (RCC_GetFlagStatus(RCC_FLAG_HSIRDY) != SET)    // Wait until HSI is ready
    {
    }

    FLASH->ACR |= FLASH_ACR_ACC64;      // Enable 64-bit access
    FLASH->ACR |= FLASH_ACR_PRFTEN;     // Enable Prefetch Buffer
    FLASH->ACR |= FLASH_ACR_LATENCY;    // Flash 1 wait state
    RCC->APB1ENR |= RCC_APB1ENR_PWREN;  // Power enable
    PWR->CR = PWR_CR_VOS_0;             // Select Voltage Range 1 (1.8 V)
    while ((PWR->CSR & PWR_CSR_VOSF) != RESET)           // Wait until the voltage regulator is ready
    {
    }

    RCC_PLLConfig(RCC_PLLSource_HSI, RCC_PLLMul_4, RCC_PLLDiv_2);  // HSI = 16MHz; 16MHz * 4 / 2 = 32MHz
    RCC_PLLCmd(ENABLE);
    while (RCC_GetFlagStatus(RCC_FLAG_PLLRDY) != SET)    // Wait until PLL is ready
    {
    }

    RCC_PCLK1Config(RCC_HCLK_Div1);     // Configure the low-speed APB (APB1) clock (PCLK1)
    RCC_PCLK2Config(RCC_HCLK_Div1);     // Configure the high-speed APB (APB2) clock (PCLK2)
    RCC_SYSCLKConfig(RCC_SYSCLKSource_PLLCLK);           // Configure the system clock source to PLL
    while ((RCC->CFGR & (uint32_t)RCC_CFGR_SWS) != (uint32_t)RCC_CFGR_SWS_PLL)  // Wait until PLL is used as system clock source
    {
    }
}

int main(void)
{
    uint32_t tajm;
    uint32_t tajm2;

    clockCorrection();
    timer2_millis_init();
    init_usart(USART1, GPIOA, 57600);
    printStringLn(USART1, "UART initialized!");

    while (1)
    {
        printString(USART1, "tajm:");          // print to UART1
        delay(500);                            // delay 500ms
        printNumberLn(USART1, micros(), DEC);  // print microseconds since tim2 init
    }
}

Values printed out on the terminal are wrong - the microsecond part of the number isn't changing at all:

tajm:UART initialized!

tajm:505003

tajm:1007003

tajm:1509003

tajm:2011003

tajm:2513003

tajm:3015003

tajm:3517003

tajm:4019003

tajm:4521003

tajm:5023003

tajm:5525003

tajm:6027003

tajm:6529003

tajm:7031003

tajm:7533003

tajm:8035003

tajm:8537003

If I add a simple character after the delay(500); call, e.g. printString(USART1, " ");, the output becomes:

tajm:UART initialized!

tajm: 505020

tajm: 1007021

tajm: 1509022

tajm: 2011022

tajm: 2513023

tajm: 3015024

tajm: 3517007

tajm: 4019008

tajm: 4521008

tajm: 5023009

tajm: 5525010

tajm: 6027010

tajm: 6529011

tajm: 7031012

tajm: 7533012

tajm: 8035013

tajm: 8537014

tajm: 9039014

tajm: 9541015

tajm: 10043016

tajm: 10545016

tajm: 11047017

tajm: 11549018

tajm: 12051017

tajm: 12553019

tajm: 13055019

tajm: 13557019

tajm: 14059020

tajm: 14561021

tajm: 15063021

tajm: 15565022

tajm: 16067023

tajm: 16569023

tajm: 17071024

tajm: 17573008

tajm: 18075008

tajm: 18577009

tajm: 19079009  or:printString(USART1, '' ''); 

tajm:UART initialized!

tajm: 505189

tajm: 1007190

tajm: 1509190

tajm: 2011191

tajm: 2513192

tajm: 3015192

tajm: 3517193

tajm: 4019177

tajm: 4521177

tajm: 5023178

tajm: 5525179

tajm: 6027179

tajm: 6529180

tajm: 7031181

tajm: 7533181

tajm: 8035182

tajm: 8537182 As you can see, values of microsecond part of printed numbers are incrementing, than at some point ''restarted'' and again incrementing. I can't figure why is that - and what is causing this ''delay dependant'' value. Seems like second and milisecond part of printed numbers are OK and are somewhat in ''time'' with real clock. If I run identical code in debug mode, printed values are OK. So it seem something - my uart or my millis ''library'' is messing with printed data when stm32 runs on high speed - 32MHz. I don't know, just guessing.  I hope I added all files, let me know if there is anything missing. Currently I've spent too much time for this ''simple'' problem, so I hope someone has a solution.

#stm32l1-usart-timer-delay

stm322399
Senior
Posted on November 27, 2014 at 09:50

The series of values with a constant sub-millisecond part does not look wrong to me. You are running a deterministic process. The 'delay' function terminates at a millisecond boundary (the value timer2_millis is updated by an interrupt that happens every millisecond). Then you enter the 'micros' function, which reads the timer value. The execution path between the interrupt (TIM2 wrap) and reading the TIM2 counter is deterministic enough to return the same value (3) every time.

When you introduce a print between the 'delay' and 'micros' calls, the execution path is less deterministic (because of the UART management) and the observed delay varies within a few tens of microseconds. But it is basically the same: 'micros' returns a value that is 1000x the millisecond value, plus 10 to 30 microseconds (in the second case) or plus 150 to 250 microseconds (in the third case).
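In other words, roughly this mechanism (a minimal sketch with assumed names and a 1 µs-per-count TIM2 tick; the actual attached library may differ):

#include <stdint.h>

volatile uint32_t timer2_millis = 0;   /* incremented by the TIM2 update IRQ, once per ms */

static uint32_t tim2_count(void)       /* stand-in for TIM_GetCounter(TIM2), 0..999 */
{
    return 3;                          /* pretend we always read the counter just after a wrap */
}

void TIM2_update_handler(void)         /* runs every time TIM2 wraps, i.e. every 1 ms */
{
    timer2_millis++;
}

uint32_t micros(void)
{
    return timer2_millis * 1000u + tim2_count();
}

void delay(uint32_t ms)
{
    uint32_t end = timer2_millis + ms;
    while (timer2_millis < end)
    {
        /* Spin on the millisecond counter only. The loop therefore exits a
         * fixed few CPU cycles after a TIM2 wrap, so micros() called right
         * after always reads TIM2 at the same tiny count (the constant 3 in
         * the output), unless extra work like a UART print happens first. */
    }
}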

schperplata
Associate III
Posted on November 27, 2014 at 23:06

Yes, I thought of that too. But with a little experimenting it seems there is something more going on. Also, a 150 us delay because of a print function - that is quite a lot of instructions at 32 MHz... I don't know much about this, but it seems like a lot of time consumed just for sending one or two characters.

Anyway, here are some more tests:


while (1)
{
    tajm = micros();
    printString(USART1, "tajm:");
    delay(450);
    tajm2 = micros();
    printNumberLn(USART1, tajm, DEC);
    printNumberLn(USART1, tajm2, DEC);
}

Again, the constant 3 microseconds in tajm2:


tajm:32669234

33120003

tajm:33123233

33574003

tajm:33577232

34028003

tajm:34031232

34482003

tajm:34485231

34936003

tajm:34939231

35390003

tajm:35393230

35844003

tajm:35847229

36298003

tajm:36301228

36752003

tajm:36755227

37206003

tajm:37209227

37660003

tajm:37663226

38114003

tajm:38117225

38568003

tajm:38571224

39022003

tajm:39025224

39476003

tajm:39479223

-----------------------------------------------

The strange thing is, when I replace delay(450) with delayMicroseconds(450000), the printed values seem OK:


while (1)
{
    tajm = micros();
    printString(USART1, "tajm:");
    delayMicroseconds(450000);
    tajm2 = micros();
    printNumberLn(USART1, tajm, DEC);
    printNumberLn(USART1, tajm2, DEC);
}

Result:


tajm:2725449

3176293

tajm:3179179

3630024

tajm:3632909

4083753

tajm:4086639

4537482

tajm:4540369

4991213

tajm:4994098

5444942

tajm:5447828

5898671

tajm:5901558

6352402

tajm:6355287

6806132

tajm:6809017

7259861

tajm:7262746

7713590

tajm:7716476

8167321

tajm:8170206

8621049

tajm:8623935

9074779

tajm:

schperplata
Associate III
Posted on November 28, 2014 at 00:07

Maybe I posted those results too quickly. I think I understand your point now, but there are still some "holes" in my understanding.

I changed the delay(ms) function to work with better precision (including the timer value, i.e. microseconds).


// Return microseconds from timer initialization or timer reset
uint32_t micros(void)
{
    uint64_t microseconds = (timer2_millis * 1000);
    uint32_t timer2_current_value = TIM_GetCounter(TIM2);
    microseconds += (uint64_t)(timer2_current_value * INCREMENT_RESOLUTION);
    return (uint32_t)(microseconds);
}

// Wait in this function for a specific amount of time.
// param: unsigned long [milliseconds]
void delay(uint32_t ms)
{
    /*
    uint32_t delay_millis = timer2_millis;
    delay_millis += ms;
    while (timer2_millis < delay_millis){
        __nop();
    }
    */
    uint64_t current_time = 0;
    uint64_t delay_millis = ((timer2_millis + ms) * 1000) + (TIM_GetCounter(TIM2) * INCREMENT_RESOLUTION);  // micros() + ms*1000

    while (current_time < delay_millis)
    {
        current_time = (timer2_millis * 1000 + (TIM_GetCounter(TIM2) * INCREMENT_RESOLUTION));
    }
}

As it turns out, micros() after delay(ms) now returns a value that makes sense to me (the delay plus some extra time for calculation and stack handling).


tajm:733783060

734233903

tajm:734237464

734688308

tajm:734691868

735142710

tajm:735146271

735597114

tajm:735600675

736051519

tajm:736055079

736505922

tajm:736509482

736960326

tajm:736963886

737414728

tajm:737418290

737869132

tajm:737872693

738323537

tajm:738327097

738777941

tajm:738781501

739232344

tajm:739235905

739686747

tajm:739690308

740141151

tajm:740144711

740595555

tajm:740599115

741049959

tajm:741053519

741504362

tajm:741507922

741958766

tajm:741962326

742413169

tajm:742416730

742867572

tajm:742871133

Since it is a delay, it isn't really important whether it is accurate to sub-milliseconds, so I will probably change the code back to the default 1 ms precision or so (working with timer2_millis). Anyway, I do have a couple of questions:

1. As you can see, inside the delay() function I changed uint32_t to uint64_t to avoid variable overflow - or at least to extend the range over which millis() and micros() work. Until now I haven't used casting very much, since I never had similar problems:

uint64_t delay_millis = ((timer2_millis + ms)* 1000) + (TIM_GetCounter(TIM2)*INCREMENT_RESOLUTION);

Should I cast every single variable, like this, or is it OK as I did it?

uint64_t delay_millis = (((uint64_t)timer2_millis + (uint64_t)ms)* 1000) + ((uint64_t)TIM_GetCounter(TIM2)*INCREMENT_RESOLUTION);
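(A small self-contained sketch of the plain C rule involved, independent of the ST library: arithmetic is performed in the operands' type, so a uint32_t * uint32_t multiplication wraps in 32 bits before the result is widened to uint64_t; casting one operand is enough to force 64-bit arithmetic.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t ms_counter = 5000000u;               /* about 83 minutes worth of milliseconds */

    /* 32-bit multiply: wraps around before the assignment widens the result */
    uint64_t wrong = ms_counter * 1000;           /* 5 000 000 000 mod 2^32 = 705 032 704 */

    /* one 64-bit operand is enough: the whole expression is done in 64 bits */
    uint64_t right = (uint64_t)ms_counter * 1000; /* 5 000 000 000 */

    printf("%llu vs %llu\n", (unsigned long long)wrong, (unsigned long long)right);
    return 0;
}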

2. I don't understand the extra delay that is added to every call of delay(). I made a simple experiment:

while (1)
{
    delay(100);
    printNumberLn(USART1, micros(), DEC);
}

I tested this code with delay(50) as well, and it turned out there are an extra 1370 - 1378 microseconds of delay attributable to the print and the while(1) loop. As my microcontroller is running at 32 MHz, each clock cycle - roughly one instruction time - is about 31 ns. So those extra 1370 us of delay correspond to 43840 clock cycles. If all instructions were single cycle, that is quite a lot of time for the print and while loop to need. I do understand that there are other things the microcontroller has to do between function calls - like passing variables on the stack, ... - but 40000 instructions seems like an awful lot of code to me. So what am I missing here? And thank you for helping me. :)
stm322399
Senior
Posted on November 28, 2014 at 15:44

The 'printString' and 'printStringLn' functions send characters one by one: they wait for the UART TX buffer to be empty before sending the next character. The first character is immediately moved into the UART output shift register and TXE is set. The second character is written to the TX register, and execution then waits for the first character to be completely shifted out before the second one is moved to the shift register and TXE is set again.

The bottleneck is the UART output: at 115200 baud each bit takes about 8.7 µs, so in 8N1 mode every character consumes about 87 µs. That is the major part of the observed delay. At 32 MHz, the processor spends most of that time just spinning in the wait loop.
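For reference, a typical blocking send loop over the Standard Peripheral Library looks roughly like this (a sketch, assuming printString() is built this way; the attached implementation may differ):

#include "stm32l1xx_usart.h"   /* SPL: USART_GetFlagStatus, USART_SendData, USART_FLAG_TXE */

/* Send a string one character at a time, polling TXE between characters. */
static void uart_put_string(USART_TypeDef *usart, const char *s)
{
    while (*s)
    {
        /* spin until the transmit data register is empty */
        while (USART_GetFlagStatus(usart, USART_FLAG_TXE) == RESET) { }
        USART_SendData(usart, (uint16_t)*s++);
    }
}

/* In 8N1 a character is 10 bits on the wire (start + 8 data + stop), so one
 * character takes 10/baud seconds: about 87 µs at 115200 baud and about
 * 174 µs at 57600 baud. The CPU sits in the TXE polling loop for most of it. */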

schperplata
Associate III
Posted on November 29, 2014 at 13:31

Oh, that makes sense then. 

Thank you!