2025-12-12 4:43 AM
Hello everyone. How are you?
I created a test program based on https://community.st.com/t5/stm32-mcus/how-to-calibrate-the-stm32-s-real-time-clock-rtc/ta-p/744958 I calibrated and checked the output signal (512 Hz @ PB2) with an oscilloscope, ending up with an error of less than 20 ppm. The RTC then ran for 24 hours and the seconds stayed in sync with my reference source (Windows). To me, this indicates that the hardware, crystal, and capacitors are OK.
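For reference, the calibration path in the test program looked roughly like this (a minimal sketch based on the STM32 HAL RTC extended driver, not my exact code; the function name and parameter are illustrative):

extern RTC_HandleTypeDef hrtc;   /* RTC handle created by CubeMX */

/* Route 512 Hz to the RTC_CALIB pin (PB2 here) and apply the smooth-calibration
 * offset measured with the oscilloscope. Each "minus pulse" masks one
 * 32.768 kHz cycle over the 32 s window (about 0.954 ppm per pulse). */
void rtc_calib_out_start(uint32_t minus_pulses)   /* 0..511 */
{
    HAL_RTCEx_SetCalibrationOutPut(&hrtc, RTC_CALIBOUTPUT_512HZ);
    HAL_RTCEx_SetSmoothCalib(&hrtc,
                             RTC_SMOOTHCALIB_PERIOD_32SEC,
                             RTC_SMOOTHCALIB_PLUSPULSES_RESET,
                             minus_pulses);
}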
After that, when I added my application code, the RTC always lagged (error > 5000 ppm). My code never calls HAL_RTC_SetTime or HAL_RTC_SetDate. Reading the RTC->CALR register returns the same calibration value I programmed.
Whether the calibration is set to -488 or +488 (the maximum), the RTC always lags. My feeling is that the processor gets busy doing something and ends up losing RTC pulses.
Unfortunately, I can't post the complete code here, but I can provide some other information:
- 1 timer ISR every 1 ms
- X-Cube FreeRTOS - 6 tasks
- 8 UART ISRs for RX and TX via DMA + 1 I2C channel
- iCache enabled / No USB device
Considering that the RTC runs independently in hardware, and that ISRs, DMA, an RTOS crash, etc. shouldn't affect it, does anyone have any idea what else I should analyze?
Thanks in advance,
leandro
2025-12-12 4:55 AM
Hi,
the RTC crystal is very sensitive to stray coupling from surrounding signals.
So shield it as well as possible: if it has a metal case, solder a ground wire to it, and avoid any switching on adjacent pins, as this might disturb the 32k clock.
And set the crystal drive level to high (if this chip has a drive-level selection for the RTC crystal).
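Roughly like this, if your family's HAL provides the LSE drive macro (a sketch, assuming an L4/G4/H5-style HAL; it must run before the LSE is enabled):

/* Set the LSE oscillator to its strongest drive level.
 * Backup-domain write access must be unlocked first. */
void lse_set_high_drive(void)
{
    HAL_PWR_EnableBkUpAccess();                    /* unlock backup domain */
    __HAL_RCC_LSEDRIVE_CONFIG(RCC_LSEDRIVE_HIGH);  /* LSEDRV = high drive  */
    /* ...then (re)run the RCC oscillator init that turns the LSE on. */
}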
2025-12-12 5:24 AM - edited 2025-12-12 5:25 AM
The RTC cannot be wrong; it is free-running on its own independent clock unless you change some setting.
Usually the RTC lags when you switch to the LSI clock by mistake, since that would require different prescaler divisions.
Can you route out a calibration frequency pulse, or use an RTC interrupt to get a PPS pulse, while still keeping your app code? That would confirm the RTC itself is working fine.
I strongly suggest using interrupts as much as possible to save the time and date values into variables - it only costs two 32-bit words of memory and seven clock cycles.
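Something like this (a sketch, assuming the HAL RTC wakeup timer; on some newer families HAL_RTCEx_SetWakeUpTimer_IT takes an extra auto-clear argument, and the PPS pin here is just an example):

extern RTC_HandleTypeDef hrtc;

volatile RTC_TimeTypeDef g_time;   /* last captured time */
volatile RTC_DateTypeDef g_date;   /* last captured date */

void rtc_pps_start(void)
{
    /* ck_spre (1 Hz) wakeup clock, counter 0 => one interrupt per second */
    HAL_RTCEx_SetWakeUpTimer_IT(&hrtc, 0, RTC_WAKEUPCLOCK_CK_SPRE_16BITS);
}

void HAL_RTCEx_WakeUpTimerEventCallback(RTC_HandleTypeDef *hrtc)
{
    HAL_GPIO_TogglePin(GPIOC, GPIO_PIN_13);   /* crude PPS marker on a spare pin */

    /* GetDate must always follow GetTime to release the shadow registers */
    HAL_RTC_GetTime(hrtc, (RTC_TimeTypeDef *)&g_time, RTC_FORMAT_BIN);
    HAL_RTC_GetDate(hrtc, (RTC_DateTypeDef *)&g_date, RTC_FORMAT_BIN);
}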
2025-12-12 5:57 AM
Hello AScha. Thank you for your reply.
There is no switching near the RTC crystal pins, and it is mounted very close to the microcontroller. Even with the signals near the crystal turned off, the RTC still shows a delay. But why would it work with test code and not with application code?
I tried using LSE High Drive, but the problems persist.
2025-12-12 6:08 AM
@Leandro_Araujo wrote:
My feeling is that the processor gets busy doing something and ends up losing RTC pulses.
Even if the CPU is blocking somewhere, that has nothing to do with RTC counting errors.
2025-12-12 6:13 AM
Hi mbarg. Thanks for your suggestion.
I know the RTC can't be wrong, and that's why I find this very weird.
I routed the RTC calibration output to PB2 at 1 Hz => measured 0.9812 Hz.
"I strongly suggest using interrupts as much as possible to save the time and date values into variables" - sorry, but I can't quite get the idea.
2025-12-12 6:16 AM
I know that. I just couldn't come up with a better explanation... lol
It's very strange... I don't know what the root cause is. If I change the hardware, the problem remains the same.
2025-12-12 6:54 AM - edited 2025-12-12 6:59 AM
The measured ~1 Hz frequency confirms that you are using a ~32000 Hz clock instead of 32768 Hz:
32000/32768 = 0.9765625 (the LSI is an imprecise RC oscillator, which is why your measurement of 0.9812 Hz is close to, but not exactly, this ratio).
That points to the LSI clock - check the .ioc file. You should have RCC.RTCClockSelection=RCC_RTCCLKSOURCE_LSE instead of RCC.RTCClockSelection=RCC_RTCCLKSOURCE_LSI.
Double-check by reading RTCSEL[1:0] in the RCC_BDCR register.
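If it helps, a quick runtime check (a sketch; register and HAL names taken from the standard CMSIS/HAL headers, with "main.h" assumed to pull them in):

#include "main.h"   /* brings in the device CMSIS header and the HAL */

/* Read RTCSEL[1:0] from RCC_BDCR: 0 = no clock, 1 = LSE, 2 = LSI, 3 = HSE/prescaler */
uint32_t rtc_clock_source(void)
{
    return (RCC->BDCR & RCC_BDCR_RTCSEL) >> RCC_BDCR_RTCSEL_Pos;
}

/* Select the LSE as RTC kernel clock. RTCSEL is write-once: the HAL resets the
 * backup domain when the source changes, so the RTC time and date are lost. */
void rtc_clock_select_lse(void)
{
    RCC_PeriphCLKInitTypeDef clk = {0};
    clk.PeriphClockSelection = RCC_PERIPHCLK_RTC;
    clk.RTCClockSelection    = RCC_RTCCLKSOURCE_LSE;
    HAL_RCCEx_PeriphCLKConfig(&clk);
}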