2026-01-07 2:30 PM
I have an STM32L011F4P6 with an RTC. It is running at 3.3 V and 16 MHz off the internal HSI RC oscillator. The RTC is driven by a ±20 ppm 32.768 kHz crystal. It is running at room temperature (23-25 °C). After calibration, I am losing close to a minute per day! What's wrong?
Here's more information:
I have an auto-calibration routine: it outputs a 1 Hz signal on PA2 and captures the rising edges using TIM2 channel 3, running at 16 MHz. I capture the start time (1st edge), interrupt on overflow and add 65536 to a 32-bit variable, and capture the final (33rd) edge to get the total number of 16 MHz clocks in 32 seconds according to the RTC. At the same time, I have a GPS module with a 1 Hz output fed into PA10. I set up TIM21 channel 1 identically to the TIM2 setup to capture the total number of 16 MHz clocks in 32 seconds according to the GPS.
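In case it helps, here is a minimal sketch of that capture bookkeeping (the struct and function names are illustrative, not my actual ISR code; one instance exists per timer):
typedef struct {
    uint32_t u32_OverflowCount;  // incremented by 65536 per update event
    uint32_t u32_FirstCapture;   // extended timer count at the 1st rising edge
    uint32_t u32_EdgeCount;      // rising edges seen so far
    uint32_t u32_TotalCount;     // 16 MHz clocks between edge 1 and edge 33
} CaptureState_t;

// Called from the timer ISR on an update (overflow) event.
static void OnOverflow(CaptureState_t *p)
{
    p->u32_OverflowCount += 65536u; // 16-bit counter wraps every 65536 counts
}

// Called from the timer ISR on a capture event, with the CCR value.
static void OnCapture(CaptureState_t *p, uint16_t u16_Captured)
{
    p->u32_EdgeCount++;
    if (p->u32_EdgeCount == 1u)
    {
        p->u32_FirstCapture = p->u32_OverflowCount + u16_Captured;
    }
    else if (p->u32_EdgeCount == 33u) // 32 seconds after the first edge
    {
        p->u32_TotalCount = (p->u32_OverflowCount + u16_Captured)
                          - p->u32_FirstCapture;
    }
}
One pitfall I'm watching for with this scheme: a capture that lands right at a timer wrap. If the capture ISR runs before the pending overflow has been accounted for, the extended count can be off by 65536.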
I take the two numbers and find the relative difference, not in ppm but in RTC calibration units (1/2^20, about 0.954 ppm per step).
I get a reasonable value of 17 (about 16 ppm), and I adjust the RTC calibration accordingly.
I don't recall if it read +17 or -17, but the magnitude of the value seems reasonable.
This value is stored in EEPROM and loaded into the RTC on power-up as well.
// Convert the measured relative error into RTC smooth-calibration steps.
// One step is 1/2^20 of the RTC clock (~0.954 ppm), hence the 2^20 factor.
f_TempFloat = (float)(i32_RTCclockTotalCount - i32_GPSclockTotalCount);
f_TempFloat /= (float)i32_GPSclockTotalCount;
f_TempFloat *= 1048576.0f; // 2^20

// Round to the nearest integer step (half away from zero).
if (f_TempFloat > 0)
{
    i16_FactoryCalibrationValue = (int16_t)(f_TempFloat + 0.5f);
}
else
{
    i16_FactoryCalibrationValue = (int16_t)(f_TempFloat - 0.5f);
}

// Clamp to the range the CALR fields can express.
if (i16_FactoryCalibrationValue > 511)
{
    i16_FactoryCalibrationValue = 511;
}
else if (i16_FactoryCalibrationValue < -511)
{
    i16_FactoryCalibrationValue = -511;
}

// Map the signed step count onto CALP/CALM. The effective correction is
// approximately (512 * CALP - CALM) / 2^20, so a positive value uses CALP
// with CALM = 512 - value, and a negative value uses CALM alone.
if (i16_FactoryCalibrationValue > 0)
{
    u32_SmoothCalibPlusPulses = RTC_CALR_CALP;
    u32_SmoothCalibMinusPulsesValue = (uint32_t)(512 - i16_FactoryCalibrationValue);
}
else
{
    u32_SmoothCalibPlusPulses = 0x00000000u;
    u32_SmoothCalibMinusPulsesValue = (uint32_t)(0 - i16_FactoryCalibrationValue);
}

// Disable write protection
RTC->WPR = 0xCA;
RTC->WPR = 0x53;
// Per the reference manual, CALR must not be written while a previous
// recalibration is still pending (RECALPF set in RTC_ISR).
while (RTC->ISR & RTC_ISR_RECALPF)
{
}
RTC->CALR = u32_SmoothCalibPeriod | u32_SmoothCalibPlusPulses | u32_SmoothCalibMinusPulsesValue;
// Re-enable write protection
RTC->WPR = 0xFE;
RTC->WPR = 0x64;
The clock is intended to operate outdoors, though so far I'm only testing at room temperature.
I am using the on-chip temperature sensor to detect changes in temperature and adjust the calibration accordingly. The GPS calibration routine will always be done at room temperature, immediately after power-up, so I calibrate the raw sensor reading at that moment to 25 °C.
During normal operation, the temperature sensor is read every 5 minutes and a temperature calibration factor is calculated. This factor is added to i16_FactoryCalibrationValue and the RTC calibration is updated (see the sketch after the code below).
#define XTAL_K_VALUE 0.034f   // Frequency-temperature curve coefficient (ppm/°C^2)
#define XTAL_T_VALUE 25.0f    // Turnover temperature (°C)
#define PPM_VALUE 0.953674f   // RTC calibration step size: 1e6 / 2^20 ppm
int16_t Get_Current_Temperature(void)
{
    uint16_t u16_temp;
    ADC->CCR |= ADC_CCR_TSEN; // Enable temp sensor; startup time <= 10 us

    ADC1->CHSELR = LL_ADC_CHANNEL_TEMPSENSOR;

    // Busy-wait ~10 us for the sensor to stabilize. The index must be
    // volatile or the compiler may optimize the empty loop away.
    volatile uint32_t waitLoopIndex = (10U * (SystemCoreClock / 1000000U));
    while (waitLoopIndex != 0U)
    {
        waitLoopIndex--;
    }

    // Start a conversion and wait, with a ~100 us timeout, for end of conversion.
    waitLoopIndex = (100U * (SystemCoreClock / 1000000U));
    ADC1->ISR = ADC_ISR_EOC; // Clear EOC (write 1 to clear; plain '=' avoids
                             // accidentally clearing other pending flags as '|=' would)
    ADC1->CR |= ADC_CR_ADSTART;
    while (((ADC1->ISR & ADC_ISR_EOC) != ADC_ISR_EOC) && (waitLoopIndex != 0U))
    {
        waitLoopIndex--;
    }

    ADC->CCR &= ~ADC_CCR_TSEN; // Turn off temp sensor

    if (waitLoopIndex)
    {
        // Reading ADC_DR also clears the EOC flag.
        ReadEEPROM(EEPROM_TEMPSENSOR_CAL1_ADDR, &u16_temp); // (read but unused below)
        // Rescale the raw reading from the actual VREF to the 3.0 V used for the
        // factory calibration, then interpolate linearly between the two factory
        // calibration points.
        return (int16_t)(((int32_t)((ADC1->DR * ((uint32_t)(f_voltageRef * 1000.0f))) / TEMPSENSOR_CAL_VREFANALOG)
                          - (int32_t)*TEMPSENSOR_CAL1_ADDR)
                         * (int32_t)(TEMPSENSOR_CAL2_TEMP - TEMPSENSOR_CAL1_TEMP)
                         / (int32_t)((int32_t)*TEMPSENSOR_CAL2_ADDR - (int32_t)*TEMPSENSOR_CAL1_ADDR)
                         + TEMPSENSOR_CAL1_TEMP);
    }
    else
    {
        return -100; // Conversion timed out; caller should ignore this value
    }
}
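And here, roughly, is how I combine the constants and the temperature reading every 5 minutes (a sketch with hypothetical names: Apply_RTC_Calibration() stands in for the clamp-and-write-CALR sequence shown earlier, and the sign assumes the tuning-fork crystal only ever runs slow away from its turnover point):
void Update_Temperature_Compensation(void)
{
    int16_t i16_TempC = Get_Current_Temperature();
    if (i16_TempC == -100)
    {
        return; // ADC timed out; keep the previous calibration
    }

    float f_DeltaT = (float)i16_TempC - XTAL_T_VALUE;
    // Tuning-fork crystals follow f(T) = f0 * (1 - K * (T - T0)^2), so the
    // frequency deficit in ppm is K * deltaT^2 (always a slowdown).
    float f_PpmLow = XTAL_K_VALUE * f_DeltaT * f_DeltaT;

    // Convert ppm to smooth-calibration steps (~0.954 ppm each) and add to
    // the GPS-derived room-temperature calibration.
    int16_t i16_TempSteps = (int16_t)(f_PpmLow / PPM_VALUE + 0.5f);
    int16_t i16_Total = i16_FactoryCalibrationValue + i16_TempSteps;

    Apply_RTC_Calibration(i16_Total); // clamp and write CALR as shown above
}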
What is causing me to lose time?
I have already checked and confirmed that the temp sensor reads within 19-29 °C (24 ± 5 °C rounds to an adjustment factor of 0).
I have already checked that the adjustment is the correct direction by using a frequency generator. A slower external clock results in a slower calibrated RTC and vice versa.
Is it the frequent writes to the calibration register that are causing issues?
2026-01-08 6:07 PM
When you do the calibration, do you detect an error of 700 ppm? Or do the errors come in discrete steps, perhaps when the chip is starting up or when you are doing something with the RTC?
It's unlikely that a crystal is off by 700 ppm, so probably something else is to blame here.
The LSE has a CSS (clock security system) you can enable to see if it's dropping out. Or you can hook up a logic analyzer to the 1 Hz signal and record it for an hour; it shouldn't be too hard to catch if it's off by a minute over 24 hours.
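Something like this at the register level (bit names per the RM0377 RCC_CSR description; double-check the exact macro names in your device header):
/* Arm the clock security system on the LSE (after LSE is ready): */
RCC->CSR |= RCC_CSR_CSSLSEON;

/* Later, poll for a detected LSE failure: */
if (RCC->CSR & RCC_CSR_CSSLSED)
{
    // The LSE dropped out at some point -- the RTC clock is suspect
}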
2026-01-09 1:01 AM - edited 2026-01-09 1:02 AM
Isn't the RTC set by mistake to LSI?
JW
2026-01-09 8:14 AM
Good question, but it should be set to LSE.
Any reason not to use "LOW" drive strength?
LL_RCC_LSE_SetDriveCapability(LL_RCC_LSEDRIVE_LOW); // must be set while LSE is still off
LL_RCC_LSE_Enable();
/* Wait till LSE is ready */
while(LL_RCC_LSE_IsReady() != 1)
{
}
LL_RCC_SetRTCClockSource(LL_RCC_RTC_CLKSOURCE_LSE);
LL_RCC_EnableRTC();
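And a quick runtime answer to the LSI question, reading the selected source back (sketch, same LL API):
/* Confirm the RTC really is clocked from the LSE: */
if (LL_RCC_GetRTCClockSource() != LL_RCC_RTC_CLKSOURCE_LSE)
{
    // Source is LSI, HSE, or none -- the LSE calibration would be meaningless
}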
I started the timer last night at 4:57:00 pm with the RTC set to 0:00:00.
(16 hours and 57 minutes behind).
Today at 10:08:00 am (17 hours and 10 minutes later) it read 17:09:58.
With delays in setting breakpoints as well as startup time, I can see 1-2 seconds.
Even if it were legitimate drift, that's 23 ppm instead of 700+ ppm.
The issue may then be in the motor-driving code. I'll let it run a full 24 hours and report back.
Then I'll add back in the temperature compensation code and see what happens.
2026-01-09 8:43 AM
We use NTP, but the same approach should work fine with a PPS signal.
First, we use the RTC value to get a subsecond (1/1024 s) timestamp; we use the whole date and time, but you could use only the subsecond part. IMPORTANT: we read the subsecond, time, and date registers directly (three words) so the timestamp is coherent, aligned to the subsecond read.
Then we compute the offset of that timestamp from the reference; in your case the reference is a subsecond value of 0. You get a signed error (-512 to +511).
If we are too far off (abs(error) > XmaxDrift, usually 16), we slew by setting the maximum correction in the direction of the error.
Wait.
Once we are in range, we apply small correction steps to stay inside a ±8 error band, a kind of simple IIR/fuzzy filter.
Once locked (no error change for some seconds), we relax the check little by little, down to once per hour. We have clocks outdoors with big temperature swings and the standard Nucleo crystal, and that has proven to be enough to stay in range. A rough sketch follows.
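A minimal sketch of this disciplining loop (XMAX_DRIFT, CORRECTION_MAX, and Apply_RTC_Calibration are illustrative names, not our actual code):
#define XMAX_DRIFT      16   // beyond this, slew at maximum correction
#define CORRECTION_MAX  511  // full-scale smooth-calibration step count

/* i16_SubsecError: signed offset from the reference, -512..+511
   (1/1024 s units). Called after each reference pulse / NTP sync. */
void Discipline_RTC(int16_t i16_SubsecError)
{
    static int16_t i16_Correction = 0;

    if (i16_SubsecError > XMAX_DRIFT)
    {
        i16_Correction = CORRECTION_MAX;   // slew hard toward the reference
    }
    else if (i16_SubsecError < -XMAX_DRIFT)
    {
        i16_Correction = -CORRECTION_MAX;
    }
    else if ((i16_SubsecError > 8) || (i16_SubsecError < -8))
    {
        // In range: nudge one step at a time, like a crude IIR filter
        i16_Correction += (i16_SubsecError > 0) ? 1 : -1;
    }
    // else: inside the +/-8 band, leave the correction alone

    Apply_RTC_Calibration(i16_Correction); // maps to CALP/CALM as shown earlier
}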
Hope can help.
2026-01-09 9:01 AM
I appreciate the solution, but unfortunately that won't work for this.
The final product is going to be mass-produced and will operate in the field over a specified temperature range of -40 to +85 °C. It operates without access to any sort of time reference in the field: no GPS, cell signal, network access, etc. The target accuracy is ±1 minute over 180 days, which works out to ±3.858 ppm.
Since it will be mass-produced, it would be difficult to spend extended time in calibration. Your procedure would take hours, which we won't have in a production environment. I will probably have to fight the production manager for the 32 seconds I'm currently requiring. I wish I could do an extended-time calibration. I'm already going to have to rely on the typical temperature-drift characteristics of the crystal, even though those specs carry tolerances of their own.
My measurement resolution should be one count in 16 MHz × 32 s, i.e. ~2 parts per billion. That should be plenty for a base calibration to ±3.8 ppm, unless I'm not accounting for something. As I said above, it's easy to get lost in the weeds and overlook something obvious.
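For anyone who wants to check my arithmetic, a trivial standalone sanity check:
#include <stdio.h>

int main(void)
{
    // +/-1 minute over 180 days, expressed in ppm
    double target_ppm = 60.0 / (180.0 * 86400.0) * 1e6;
    // One 16 MHz count over a 32 s capture window, expressed in ppm
    double resolution_ppm = 1.0 / (16e6 * 32.0) * 1e6;

    printf("target:     %.3f ppm\n", target_ppm);                 // 3.858 ppm
    printf("resolution: %.6f ppm (~%.1f ppb)\n",
           resolution_ppm, resolution_ppm * 1000.0);              // ~2 ppb
    return 0;
}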
2026-01-09 9:16 AM - edited 2026-01-09 9:32 AM
> range of -40 to +85C.
+
> target accuracy is +/-1 minute over 180 days. That works out to +/-3.858 PPM.
Forget it. That's not realistic, because:
A standard 32.768 kHz tuning-fork crystal typically drifts by approximately -150 to -180 ppm at the extremes of the -40 °C to +85 °C range.
And the base tolerance is only ±20 ppm anyway.
For anything close to your expectations you need a TCXO: an external oscillator fed in via LSE bypass.
To maintain high accuracy across this wide temperature range, a temperature-compensated crystal oscillator (TCXO) or an integrated RTC with internal compensation (like the DS3231) is required to stay within ±10 ppm.
By the way, I use a MEMS TCXO.
2026-01-09 9:31 AM
I think you missed that I'm doing a base calibration against a GPS signal to eliminate the ±20 ppm at 25 °C. I'm also measuring temperature and adjusting the calibration with temperature to account for the -180 ppm across the range. In effect, I'm building my own TCXO out of the STM32's own capabilities.
2026-01-09 9:47 AM
But I bet you will have no success with a crystal and the STM32's internal compensation...
Read my last post again: I gave up on this and use a MEMS TCXO clock now.
2026-01-09 11:11 AM
The SIT1552AI-JE-DCC-32.768E is cheaper and uses far less power than I would have guessed. Since I have some other pending changes to the PCB, I can add this change as well.
I'll probably go this route. Thank you!