2026-01-07 2:30 PM
I have an STM32L011F4P6 with an RTC. It is running at 3.3 V and 16 MHz off the internal HSI RC clock. The RTC is driven by a +/-20 ppm 32.768 kHz crystal. It is running at room temperature (23-25 C). After calibration, I am losing several seconds per day! What's wrong?
Here's more information:
I have an auto-calibration routine: the RTC outputs a 1 Hz signal on PA2, and I capture the rising edges using TIM2 channel 3 running at 16 MHz. I capture the start time (1st edge), interrupt on overflow and add 65536 to a 32-bit variable, and capture the final (33rd) edge to get the total number of 16 MHz clocks in 32 seconds according to the RTC. At the same time, a GPS module with a 1 Hz output is fed into PA10, and TIM21 channel 1 is set up identically to TIM2 to capture the total number of 16 MHz clocks in 32 seconds according to the GPS.
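For clarity, the capture side looks roughly like this (a simplified sketch with illustrative names, not my exact code; the TIM21/GPS handler for PA10 is identical):

//Simplified sketch of the RTC-side capture (illustrative names).
//TIM2 CH3 captures the 1 Hz RTC edges on PA2 while counting the 16 MHz clock.
volatile uint32_t u32_RTCoverflowCount = 0;
volatile uint32_t u32_RTCfirstCapture = 0;
volatile uint8_t u8_RTCedgeCount = 0;
volatile int32_t i32_RTCclockTotalCount = 0;

void TIM2_IRQHandler(void)
{
    if(TIM2->SR & TIM_SR_UIF) //16-bit counter overflow
    {
        TIM2->SR = ~TIM_SR_UIF;
        u32_RTCoverflowCount++; //each overflow = 65536 clocks
    }
    if(TIM2->SR & TIM_SR_CC3IF) //rising edge captured on PA2
    {
        uint32_t u32_capture = TIM2->CCR3; //reading CCR3 clears CC3IF
        u8_RTCedgeCount++;
        if(u8_RTCedgeCount == 1) //1st edge: start of the 32 second window
        {
            u32_RTCfirstCapture = u32_capture;
            u32_RTCoverflowCount = 0;
        }
        else if(u8_RTCedgeCount == 33) //33rd edge: 32 RTC seconds later
        {
            i32_RTCclockTotalCount = (int32_t)((u32_RTCoverflowCount * 65536u) + u32_capture - u32_RTCfirstCapture);
        }
    }
}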
I take the two counts and compute the difference, not in ppm but in RTC calibration units (1 unit is about 0.954 ppm).
I get a reasonable value of 17 (about 16 ppm), and I adjust the RTC calibration accordingly.
I don't recall if it is +17 or -17 that it read, but the magnitude of the value seems reasonable.
This value is stored in EEPROM and loaded into the RTC on power up as well.
f_TempFloat = (float)(i32_RTCclockTotalCount - i32_GPSclockTotalCount);
f_TempFloat /= (float)i32_GPSclockTotalCount;
f_TempFloat *= 1048576.0f; //convert the ratio to calibration units (2^20 pulses per 32 second window)
if(f_TempFloat > 0)
{
    i16_FactoryCalibrationValue = (int16_t)(f_TempFloat + 0.5f);
}
else
{
    i16_FactoryCalibrationValue = (int16_t)(f_TempFloat - 0.5f);
}
if(i16_FactoryCalibrationValue > 511)
{
    i16_FactoryCalibrationValue = 511;
}
else if(i16_FactoryCalibrationValue < -511)
{
    i16_FactoryCalibrationValue = -511;
}
if(i16_FactoryCalibrationValue > 0)
{
    u32_SmoothCalibPlusPulses = RTC_CALR_CALP;
    u32_SmoothCalibMinusPulsesValue = (uint32_t)(512 - i16_FactoryCalibrationValue);
}
else
{
    u32_SmoothCalibPlusPulses = 0x00000000u;
    u32_SmoothCalibMinusPulsesValue = (uint32_t)(0 - i16_FactoryCalibrationValue);
}
//Disable write protection
RTC->WPR = 0xCA;
RTC->WPR = 0x53;
RTC->CALR = u32_SmoothCalibPeriod | u32_SmoothCalibPlusPulses | u32_SmoothCalibMinusPulsesValue;
//Enable write protection
RTC->WPR = 0xFE;
RTC->WPR = 0x64;
The clock is intended to operate outside, though so far I'm only testing at room temperature.
I am using the onboard temperature sensor to detect changes in temperature and adjust the calibration accordingly. The GPS calibration routine will always be done at room temperature, immediately after power up, so I reference the raw reading at that time to 25 C.
During normal operation, the temperature sensor is read every 5 minutes and a temperature calibration factor is calculated. This factor is added to i16_FactoryCalibrationValue and the RTC calibration is updated.
#define XTAL_K_VALUE 0.034f // Frequency-temperature curve coefficient (ppm/C^2)
#define XTAL_T_VALUE 25.0f // Turnover temperature (C)
#define PPM_VALUE 0.953674f // RTC calibration granularity (ppm per unit)
int16_t Get_Current_Temperature(void)
{
    uint16_t u16_temp;

    ADC->CCR |= ADC_CCR_TSEN; //Enable temp sensor; startup time <= 10 us
    ADC1->CHSELR = LL_ADC_CHANNEL_TEMPSENSOR;

    // Compute number of CPU cycles to wait for the sensor startup time
    uint32_t waitLoopIndex = (10 * (SystemCoreClock / 1000000U)); //roughly 10 us of loop iterations
    while(waitLoopIndex != 0U)
    {
        waitLoopIndex--;
    }

    // Wait until end of unitary conversion or sequence conversions flag is raised (~100 us timeout)
    waitLoopIndex = (100 * (SystemCoreClock / 1000000U));
    ADC1->ISR = ADC_ISR_EOC; //Clear end-of-conversion flag (write 1 to clear)
    ADC1->CR |= ADC_CR_ADSTART;
    while(((ADC1->ISR & ADC_ISR_EOC) != ADC_ISR_EOC) && (waitLoopIndex != 0U))
    {
        waitLoopIndex--;
    }
    ADC->CCR &= ~ADC_CCR_TSEN; //Turn off temp sensor

    if(waitLoopIndex)
    {
        // EOC is cleared by software writing 1 to it or by reading the ADC_DR register
        ReadEEPROM(EEPROM_TEMPSENSOR_CAL1_ADDR, &u16_temp);
        return (int16_t)((((int32_t)((ADC1->DR * ((uint32_t)(f_voltageRef * 1000.0f))) / TEMPSENSOR_CAL_VREFANALOG)
                           - (int32_t)*TEMPSENSOR_CAL1_ADDR)
                          * (int32_t)(TEMPSENSOR_CAL2_TEMP - TEMPSENSOR_CAL1_TEMP)
                          / (int32_t)((int32_t)*TEMPSENSOR_CAL2_ADDR - (int32_t)*TEMPSENSOR_CAL1_ADDR))
                         + TEMPSENSOR_CAL1_TEMP);
    }
    else
    {
        return -100; //Ignore this value
    }
}
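The 5-minute adjustment itself is roughly this (again a simplified sketch with illustrative names; Write_RTC_Calibration() is a hypothetical stand-in for the clamp/CALP/CALM/WPR sequence shown above, and it assumes the usual parabolic tuning-fork model, ppm error = -XTAL_K_VALUE * (T - XTAL_T_VALUE)^2):

//Simplified sketch of the 5-minute temperature compensation (illustrative names)
void Update_RTC_Temperature_Compensation(void)
{
    int16_t i16_tempC = Get_Current_Temperature();
    if(i16_tempC == -100)
    {
        return; //conversion timed out, skip this cycle
    }
    //Away from the turnover temperature the crystal runs slow, so the RTC
    //needs additional pulses (a more positive calibration value)
    float f_deltaT = (float)i16_tempC - XTAL_T_VALUE;
    float f_ppmSlow = XTAL_K_VALUE * f_deltaT * f_deltaT; //ppm slow relative to turnover
    int16_t i16_tempAdjust = (int16_t)((f_ppmSlow / PPM_VALUE) + 0.5f); //convert ppm to calibration units
    int16_t i16_total = (int16_t)(i16_FactoryCalibrationValue + i16_tempAdjust);
    Write_RTC_Calibration(i16_total); //hypothetical helper: clamp, split into CALP/CALM, write CALR
}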
What is causing me to lose time?
I have already checked and confirmed that the temp sensor reads within 19-29 C (24 +/-5 C rounds to an adjustment factor of 0).
I have already checked that the adjustment is the correct direction by using a frequency generator. A slower external clock results in a slower calibrated RTC and vice versa.
Is it the frequent writes to the calibration register that are causing issues?
2026-01-09 9:16 AM - edited 2026-01-09 9:32 AM
> range of -40 to +85C.
+
> target accuracy is +/-1 minute over 180 days. That works out to +/-3.858 PPM.
Forget it. That's not realistic, because:
A standard 32.768 kHz tuning fork crystal typically drifts by approximately -150 ppm to -180 ppm at the extremes of the -40°C to +80°C range.
And the basic room-temperature tolerance is only +/-20 ppm anyway.
For anything close to your expectations you need a TCXO as an external oscillator (HSE bypass).
To maintain high accuracy across this wide temperature range, a Temperature-Compensated Crystal Oscillator (TCXO) or an integrated RTC with internal compensation (like the DS3231) is required to stay within ±10 ppm.
btw
I use a MEMS TCXO:
2026-01-07 3:19 PM
Do you take into account the current CALR value? I don't see that done in the code. Perhaps log CALR to UART on each update and see what's happening; it should be relatively constant. You should be adjusting the existing value rather than writing a new one and ignoring what's currently in there.
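Something along these lines (untested sketch; i16_MeasuredAdjustment stands for whatever correction you just measured):

uint32_t u32_calr = RTC->CALR;
int16_t i16_current;
if(u32_calr & RTC_CALR_CALP)
{
    i16_current = (int16_t)(512 - (int16_t)(u32_calr & RTC_CALR_CALM));
}
else
{
    i16_current = (int16_t)(0 - (int16_t)(u32_calr & RTC_CALR_CALM));
}
int16_t i16_new = (int16_t)(i16_current + i16_MeasuredAdjustment);
//then clamp i16_new and split it back into CALP/CALM exactly as in your existing code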
Didn't quite follow the math you described. Why are you adding 65536 instead of just taking the difference? Otherwise the procedure seems sound.
2026-01-08 1:51 AM
What happens if you perform the comparison to GPS just once, write the calculated value to CALR, and then leave the RTC running without touching it for the next 24 hours (or whatever time you deem adequate) to see whether it drifts or not?
JW
2026-01-08 2:00 AM - edited 2026-01-08 2:00 AM
> Do you take into account the current CALR values?
Are you referring to the fact that RTC_OUT comes from the already CALR-corrected prescaler output?
That indeed will make a huge difference. If it isn't taken into account, it would probably result in alternating real-ppm/close-to-zero values on successive CALR writes, and the resulting long-term effect would be that only roughly half of the real ppm difference gets corrected.
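To put numbers on it (just an illustration): suppose the crystal is really 16 ppm slow. With CALR cleared you measure about +16 ppm and write ~17 units. If the next comparison is run with that CALR still active, RTC_OUT is already corrected, so you measure roughly 0 and write 0 - and the run after that measures +16 ppm again. Averaged over many such cycles, only about half of the real error stays corrected.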
JW
2026-01-08 5:34 AM
Yes, precisely this. And it should be obvious if CALR gets logged.
2026-01-08 7:50 AM
The timers (TIM2 and TIM21) are running at 16 MHz and will overflow multiple times between edges. The +65536 accounts for each 16-bit timer counter overflow.
Prior to GPS calibration, I clear out RTC->CALR so that the GPS pulses are compared to the unadjusted RTC pulses.
2026-01-08 8:02 AM - edited 2026-01-08 8:05 AM
So it isn't missed, I clear out RTC->CALR prior to GPS calibration so that the GPS pulses are compared to the unadjusted RTC pulses.
Since the HSI could have jitter and its frequency could change over the 32 second measurement window, I have the PCB wait for the 1st rising edge to turn on the RTC pulse train. This aligns the two clock edges so any frequency shifting on the HSI affects both rising edges equally, and effectively cancels out.
After starting this thread, I set up two PCBs. One retains the code as listed above. The other one has the temperature checks and adjustments removed so the RTC->CALR is only set once on power up from the stored GPS calibration value. I'm letting it run awhile to see if removing that has any effect.
I know the calibration works by adding or subtracting pulses from the 1048576 pulses in 32 seconds. The added/removed pulses are supposed to be spread evenly over those 32 seconds. As far as I know, where the clock is inside that 32-second window is not visible to the user. Do writes to RTC->CALR reset that 32-second window? Could writing to it cause it to skip over some of the added/removed pulses, and thus accumulate error?
2026-01-08 8:15 AM
Apart from the CALR issue, just to make sure...
- the RTC crystal is +-20 ppm, this means you might get some +-1.7 seconds per day
- check the crystal datasheet: the +-20 ppm does not mean that it's perfect at room temperature, but is within this error range -> and then comes the temperature drift...
- timer set up correctly? (mind the +-1 for some registers)
- the L01 can run at max. 32 MHz; maybe there are some other interrupts influencing your timing?
You probably know all that, but sometimes we forget the simple things.
2026-01-08 8:36 AM
Thank you! I've been lost in the weeds before and missed obvious things.
The +/-20 ppm is the base accuracy of the crystal. Any deviation from the ideal capacitive loading will affect this, potentially adding to the error. I used C0G capacitors, but so far I'm removing temperature as a variable by only calibrating and testing at 24 +/-1 C.
I believe the timers are correct, but I can post the timer setup code if there's reason to suspect it.
During calibration, the only interrupts running are TIM2, TIM21 and LPTIM.
The LPTIM interrupt is where all the main code runs.
This might be an obvious oversight, but since I'm using the CCR registers to capture the edge times and compensating for timer overflows, I assume any interrupt latency won't affect the results.
During normal (not calibration) operation, the device goes to sleep and the LPTIM interrupt periodically wakes it up (about every 50 ms). It reads the RTC and compares the RTC minute with a local variable. If they differ, it moves a motor attached to a minute hand and sets the local variable equal to the RTC minute. There will be some +/-25 ms jitter in when the minute hand moves, but that's not critical as long as it averages out over time.
If RTC minute % 5 == 0 and the minute just changed, the temperature compensation code is run (see the sketch at the end of this post).
The temperature compensation code is disabled on one of the two PCBs.
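For reference, the wakeup handler has roughly this shape (simplified sketch with illustrative names; Step_Minute_Hand_Motor() and Update_RTC_Temperature_Compensation() are hypothetical stand-ins for my motor and compensation code):

static uint8_t u8_localMinute = 0xFFu; //forces a hand update on the first wakeup

void LPTIM1_IRQHandler(void)
{
    LPTIM1->ICR = LPTIM_ICR_ARRMCF; //clear the autoreload-match (wakeup) flag
    uint32_t u32_tr = RTC->TR; //read calendar time (BCD)
    (void)RTC->DR; //unlock the calendar shadow registers
    uint8_t u8_rtcMinute = (uint8_t)((((u32_tr & RTC_TR_MNT) >> 12) * 10u) + ((u32_tr & RTC_TR_MNU) >> 8));
    if(u8_rtcMinute != u8_localMinute)
    {
        Step_Minute_Hand_Motor(); //advance the minute hand by one step
        u8_localMinute = u8_rtcMinute;
        if((u8_rtcMinute % 5u) == 0u)
        {
            Update_RTC_Temperature_Compensation(); //the 5-minute temperature adjustment
        }
    }
}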
2026-01-08 3:03 PM
The PCB without the temperature compensation code came in over 1 minute fast in 24 hours. This is around 700 ppm off!
I am now running it in the debugger (I have no external serial bus or anything) to directly monitor the RTC values.
This will eliminate any issues with the motor driving code and the physical clock mechanism to focus exclusively on the RTC code and calibration.
I'll report back after several hours to see if there's any drift.