2026-03-10 1:29 AM
Hello,
In our design with the STM32H753ZIT6, I've adjusted the C0G load capacitors on the LSE 32.768 kHz crystal so that the real-time clock is now very accurate (less than 1 second of error in 24 hours). I've always performed the calibration in VBAT mode at a voltage between 2.3 and 3.2 V.
However, I've now noticed that in normal VDD mode, with the controller active, the real-time clock runs significantly too slow, with an error of 90 seconds in 24 hours. I've ruled out the VBAT supply voltage as the cause. Any ideas what could be causing this large error in normal VDD mode?
Thank you!
2026-03-17 11:52 PM
I did the test with the calibration signal at PC13.
Here is the signal in VBAT mode. (I set the scope's persistence to infinity.) The edges are very stable at the same position.
And here the signal in normal running mode, again with infinity persistence:
There is significant jitter. If I halt the CPU with the debugger, the frequency is stable again.
I have no explanation for why the frequency is not 512 Hz.
I changed the drive strength setting to medium-low and to medium-high (per the errata). With both settings I can achieve a stable frequency (of 512 Hz! :-/) in battery and powered-up mode. I decided to use the real medium-high mode and am running a long-term measurement now. I hope this solves my issue.
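For anyone landing here later, the drive-strength change can be sketched like this. This is a guess at the surrounding HAL code, not the poster's actual firmware, and the error hook name is assumed:

```c
/* Sketch: raise the LSE drive strength before enabling the oscillator.
 * Caution: on some STM32H7 revisions the medium-low / medium-high
 * encodings are swapped (see the device errata sheet), so check which
 * value is the "real" medium-high for your silicon revision. */
HAL_PWR_EnableBkUpAccess();                          /* unlock the backup domain */
__HAL_RCC_LSEDRIVE_CONFIG(RCC_LSEDRIVE_MEDIUMHIGH);  /* must be set while LSE is off */

RCC_OscInitTypeDef osc = {0};
osc.OscillatorType = RCC_OSCILLATORTYPE_LSE;
osc.LSEState = RCC_LSE_ON;
if (HAL_RCC_OscConfig(&osc) != HAL_OK)
{
  Error_Handler(); /* assumed: the project's existing error hook */
}
```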
2026-03-10 1:37 AM - edited 2026-03-10 1:41 AM
I would rule out any crystal, Vxx, or chip-related problem; it seems you are switching from the crystal to the LSI oscillator running at 32 kHz.
Usually, large errors like this are due to selecting the wrong oscillator.
2026-03-10 1:56 AM
Thank you for the quick reply.
I had a similar thought, but isn't this sufficient:
PeriphClkInitStruct.RTCClockSelection = RCC_RTCCLKSOURCE_LSE; in the SystemClock_Config() function?
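For context, a minimal sketch of the RTC clock-mux setup that line belongs to, assuming CubeMX-style HAL code (the error hook name is whatever the project uses):

```c
/* Sketch: select LSE as the RTC kernel clock. Note that RTCSEL is
 * write-once; it only changes again after a backup-domain reset. */
RCC_PeriphCLKInitTypeDef PeriphClkInitStruct = {0};
PeriphClkInitStruct.PeriphClockSelection = RCC_PERIPHCLK_RTC;
PeriphClkInitStruct.RTCClockSelection = RCC_RTCCLKSOURCE_LSE;
if (HAL_RCCEx_PeriphCLKConfig(&PeriphClkInitStruct) != HAL_OK)
{
  Error_Handler(); /* assumed: the project's existing error hook */
}
__HAL_RCC_RTC_ENABLE();
```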
2026-03-10 2:25 AM - edited 2026-03-10 3:16 AM
Another possible problem:
in run mode with VDD, you have switching signals close to the crystal or the RTC pins; the most sensitive is the pin next to the RTC pins (PC13). This can "disturb" the 32 kHz oscillator...
and obviously everything is quiet when the CPU is not powered.
+
How did you do the calibration in VBAT mode, with the CPU not running, to perform "smooth calibration"?
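For reference, digital smooth calibration can trim the RTC in firmware instead of (or on top of) adjusting the load capacitors. A hedged sketch using the HAL, with the handle name and function name assumed to match the application:

```c
/* Sketch, not from the thread: smooth calibration masks up to 511 of
 * every 2^20 LSE pulses (roughly 0.954 ppm per CALM step) to correct
 * a crystal that runs slightly fast; CALP adds pulses instead for a
 * crystal that runs slow. */
extern RTC_HandleTypeDef hrtc; /* assumed application handle */

void rtc_trim_fast_crystal(void)
{
  /* Hypothetical example: crystal measured ~9.5 ppm fast -> mask 10 pulses. */
  HAL_RTCEx_SetSmoothCalib(&hrtc,
                           RTC_SMOOTHCALIB_PERIOD_32SEC,
                           RTC_SMOOTHCALIB_PLUSPULSES_RESET,
                           10);
}
```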
2026-03-10 3:24 AM
Yes, it seems that you are using the correct clock.
You can double-check by looking at register RCC_BDCR, bits RTCSEL, for the value 0x01.
I would suggest you bring out the 512 Hz signal on a pin using the RTC_OUT_CALIB calibration output and check whether the frequency is stable; that can show whether it normally works fine while some random event, such as an injected noise spike, shifts the LSE oscillator.
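Both checks can be sketched as follows (the RTC handle name is assumed):

```c
/* Sketch: verify RTCSEL, then route the 512 Hz calibration clock to
 * RTC_OUT (PC13) for measurement with a scope. */
extern RTC_HandleTypeDef hrtc; /* assumed application handle */

void rtc_debug_checks(void)
{
  /* RTCSEL (RCC_BDCR bits 9:8): 0 = none, 1 = LSE, 2 = LSI, 3 = HSE. */
  uint32_t rtcsel = (RCC->BDCR & RCC_BDCR_RTCSEL) >> RCC_BDCR_RTCSEL_Pos;
  if (rtcsel != 0x1)
  {
    /* RTC is not clocked from LSE */
  }

  /* Output LSE/64 = 512 Hz on RTC_OUT (PC13). */
  HAL_RTCEx_SetCalibrationOutPut(&hrtc, RTC_CALIBOUTPUT_512HZ);
}
```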
2026-03-10 6:03 AM
PC13 is not connected and is configured as an input with pull-down.
+
I calibrated the clock by experimentally matching the load capacitance, comparing against an accurate reference clock every 24 hours.
2026-03-10 6:47 AM
The clock selection is configured correctly:
If I switch it to LSI manually, the speed is definitely wrong because of the prescalers for 32768 vs. 32000 Hz.
I do not fully understand your proposal for measuring the frequency.
2026-03-10 7:47 AM - edited 2026-03-10 7:49 AM
It's not necessarily PC13 which disturbs the LSE (although it may do that a lot). Any other signal in the oscillator's vicinity may do so, and, maybe even more importantly, so may currents flowing through common ground tracks/planes.
It may be hard to capture a disturbance which causes a 1000 ppm deviation, especially if the source is some sudden burst. You can test this hypothesis, though, by running the circuit under VDD with firmware which is "quiet", i.e. does not exercise its outputs, and with no external input signals.
You can also increase LSE drive, but I personally would recommend to track down the disturbance's source first.
Unless you do something directly with the RTC, such as put it into INIT mode often; that's not LSE-related.
JW
2026-03-10 7:51 AM
You can select the calibration output and observe the test clock on a pin, here PC13.
If the square wave has jitter, you have noise injected somewhere; a memory oscilloscope helps.
When I suggested double-checking, I meant looking at the registers with the debugger after some time: no doubt the setup is correct, but some code could overwrite the registers.
Eventually you can monitor them in real time.
2026-03-10 8:58 AM
Completely independent of noise and crystal:
wasn't there a problem in RTC initialisation that could produce a sub-second reset if the init is done each time the CPU wakes up or is restarted?
... which would not apply if the CPU never sleeps.
I can't find the thread right now.
I have something like this in my H7 source:
/* RTC - only init if off, otherwise lose subseconds */
if( (RCC->BDCR & RCC_BDCR_RTCEN) != RCC_BDCR_RTCEN ) RtcInit();