How is wrap-around handled in CCR registers?

Capt_Karnage
Associate

Apologies in advance for what may be a basic question. I have a background in C programming and other languages, mostly at the application layer. I have, however, supervised, reviewed, and analyzed embedded C code at lower layers written by others. My current team doesn't have a strong background in it either: I've never done low-level microcontroller programming myself, and neither have they. There was a highly skilled programmer who left the group before I came on, and we haven't found a replacement. So "knows enough to be dangerous" is an apt description right now.

We are experiencing some bizarre, intermittent issues, and I'm trying to rule out things like wrap-around and overflow as a cause. As skilled as the previous programmer was, he was a bit sloppy with his code. It's hard to follow, and he didn't appear to sanity-check his variables.

The project I inherited is based on an STM32F105VBT6. The section of code I'm looking at right now uses capture/compare register 1 of timer 2, TIM2_CCR1.

Here is what I know so far:

The global C symbol for the register in use in the project, TIM2_CCR1, comes from an IDE-generated header file and is defined as follows:

__IO_REG32_BIT(TIM2_CCR1,     0x40000034,__READ_WRITE ,__tim_ccr_bits);

where __tim_ccr_bits is defined as follows:

/* Capture/compare register (TIMx_CCR) */
typedef struct {
  __REG32 CCR : 16;
  __REG32     : 16;
} __tim_ccr_bits;

and __IO_REG32_BIT is defined as follows:

#define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \
  volatile __no_init ATTRIBUTE union \
  {                                  \
    unsigned long NAME;              \
    BIT_STRUCT NAME ## _bit;         \
  } @ ADDRESS

I'm a little confused by this 32-bit definition. It's a union of an unsigned long (a 32-bit integer under the ILP32 data model this toolchain uses) and a bitfield. The upper 16 bits of the bitfield are unused, which tracks with ST's reference manual RM0008, and the starting memory address is also correct. So that part makes sense.
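If I'm reading the macro right, the union exposes the same word two ways, something like this (my own illustration, not project code):

TIM2_CCR1 = 1000u;                  /* whole-register view, as the project uses it */
unsigned low = TIM2_CCR1_bit.CCR;   /* bitfield view of the low 16 bits */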

However, in the project it is only ever referenced as an unsigned long, never as a bitfield. What I don't understand is what happens in the function where it is used, the IRQ handler TIM2_IRQHandler: every time the handler runs, TIM2_CCR1 is incremented by an interval named `Const_RTC_Interval`. Since the upper 16 bits are defined as "reserved" in the register, what actually happens when this value exceeds 2^16 - 1 (i.e. 65535)? What happens to the bits that land in the "reserved" section? Are they lost, or are they stored?
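For context, the handler follows roughly this pattern (a sketch; the flag handling is my reconstruction from RM0008, not the actual project code, and it assumes the header defines TIM2_SR the same way):

void TIM2_IRQHandler(void)
{
  if (TIM2_SR & (1u << 1))            /* CC1IF: capture/compare 1 match flag */
  {
    TIM2_SR = ~(1u << 1);             /* write 0 to clear CC1IF; writing 1 has no effect */
    TIM2_CCR1 += Const_RTC_Interval;  /* schedule the next compare match */
  }
}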

Note that I have not found anywhere in the project where the mode of TIM2_CCR1 is configured, so I don't even know which mode it is being used in. Unfortunately, I can't share the full source right now, as it's protected IP. If necessary, I'll try to post a redacted version later.

Seeing as this register is more or less a trigger for events to happen, I'm guessing a wrap-around wouldn't even matter to program execution. Is it correct that if the bottom 16 bits match the timer's counter, the compare interrupt triggers regardless? Even if the answer is inconsequential to the program, I would like to build my understanding of what's going on here and what happens to the bits.
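My working model of that hypothesis, as plain C (again my own illustration, not project code):

#include <stdint.h>

/* If the compare logic only sees 16 bits, a match occurs whenever the
   counter equals the truncated compare value, whatever the upper bits held. */
static int would_match(uint32_t counter, uint32_t ccr)
{
  return (uint16_t)counter == (uint16_t)ccr;
}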

2 REPLIES
TDK
Guru

It is not typical to (re-)define TIM2_CCR1 like that. The standard and recommended practice is to use the CMSIS device header, which defines and accesses CCR1 as a 32-bit unsigned integer register.

https://raw.githubusercontent.com/STMicroelectronics/STM32CubeF1/master/Drivers/CMSIS/Device/ST/STM32F1xx/Include/stm32f105xc.h
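For comparison, with that header the register is a plain volatile 32-bit member of the peripheral struct, accessed along these lines (illustrative snippet, assuming the STM32F105xC device define):

#include "stm32f1xx.h"

void set_compare(uint32_t value)
{
  TIM2->CCR1 = value;   /* CCR1 is a volatile uint32_t in TIM_TypeDef */
}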

The ordering of bitfields (such as in __tim_ccr_bits) is implementation-defined. Presumably it works for the compiler/architecture the project was set up with, but it is not well defined in general.

It should be straightforward to check in the debugger what happens if CCR1 is incremented past 16 bits. Maybe the hardware discards the upper bits, maybe not; maybe only the lower 16 bits are used by the compare logic. Either way, it's not behavior the documentation defines.
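A minimal probe would be something like this (assumes the project's register symbols are in scope):

unsigned long probe_ccr1(void)
{
  TIM2_CCR1 = 0x12345u;   /* bit 16 lands in the reserved field */
  return TIM2_CCR1;       /* a readback of 0x2345 would mean only 16 bits are stored */
}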

If you feel a post has answered your question, please click "Accept as Solution".
Capt_Karnage
Associate

@TDK thank you for the reply.

I suspected this was a possible case of "undefined behavior". I could test the behavior with the debugger, but if I'm playing it safe, shouldn't I check against UINT16_MAX and keep the potentially wrapped bits in a software variable, in case another calculation (e.g., a time difference) needs them later?
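Something along these lines is what I have in mind (a sketch with names I made up):

#include <stdint.h>

static uint32_t next_compare;   /* full-width software copy of the schedule */

void schedule_next_tick(uint32_t interval)
{
  next_compare += interval;             /* wraps mod 2^32, well defined for unsigned */
  TIM2_CCR1 = next_compare & 0xFFFFu;   /* hand the hardware only the low 16 bits */
}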

I was afraid this wasn't standard usage, but I chalked it up to my inexperience with this specific architecture. It appears my header file was originally generated by the third-party IDE we are using, back in 2009. I have since updated the software, but when I asked the vendor whether they had an updated version, or how to generate one, they stated it was actually generated by STM32Cube. However, when I used STM32Cube, it generated a file identical to the one you just linked, not the one I have. If I were doing this from the ground up, I'd certainly use the STM32Cube-generated version, but unfortunately I would have to restructure the entire existing code base to adopt that header.
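Unless, perhaps, a thin compatibility shim could bridge the two, mapping the legacy names onto the CMSIS structs (hypothetical and untested):

#include "stm32f1xx.h"

/* One #define per legacy register name, so existing code compiles unchanged. */
#define TIM2_CCR1  (TIM2->CCR1)
#define TIM2_SR    (TIM2->SR)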