2018-07-11 08:03 AM
I'm curious if anyone else has seen this issue.
We are using a 20 ppm LSE at 32768 Hz as the clock source for our RTC. Over an 8-hour period we are a few seconds off, versus the expected maximum of 0.6 s for 20 ppm.
Our system clock is derived from a 14 MHz ± 10 ppm crystal oscillator, PLL'd to 70 MHz.
We set up a test to capture the number of system clocks between RTC one-second rollovers. From the data below, you can see that most readings are spot on, and some have slight software jitter that corrects itself on the next one-second rollover. The problem is that every 19 seconds a strange offset is applied. This doesn't correspond to any setting of the smooth calibration. I'm sure I could add smooth calibration to get rid of it, but what is it?
NOTE: These are the deltas, not the actual counts
69650000
70000000
70000000
70000000
70000006
69999994
70000006
69999994
70000000
70000000
70000000
70000000
70000000
70000000
70000000
70000000
70000000
70000000
70000000
69650000
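
For reference, a minimal sketch of one way such per-second deltas can be captured; this is not the actual firmware (which isn't available), and the use of the DWT cycle counter, the RTC seconds interrupt hookup, and the device header are assumptions:

/* Sketch (assumption, not the actual firmware): count CPU cycles between
   RTC 1 Hz events using the Cortex-M DWT cycle counter. At 70 MHz the
   expected delta is 70000000 per second; unsigned subtraction handles the
   32-bit counter wrap (about every 61 s at 70 MHz). */
#include "stm32f4xx.h"   /* assumption: substitute the actual device header */
#include <stdio.h>

static uint32_t last_cyccnt;

void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT block */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */
}

/* Call this from the RTC seconds/wakeup interrupt handler. */
void on_rtc_second(void)
{
    uint32_t now = DWT->CYCCNT;
    printf("%lu\r\n", (unsigned long)(now - last_cyccnt));
    last_cyccnt = now;
}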
2018-07-11 01:40 PM
Strange indeed.
Read out and check/post the content of the RCC registers, and maybe also RCC_BDCR.
Any chance to output LSE onto MCO and observe?
Any watchdog, sleep mode?
How exactly were the above numbers obtained?
JW
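
For reference, routing LSE to the MCO pin with the Cube HAL looks roughly like the sketch below; the HAL header and the exact macro names vary per family (the device isn't known here), so treat it as an assumption rather than the firmware's actual code:

/* Sketch (assumption: Cube HAL, MCO1 on PA8, family-specific source macro name). */
#include "stm32f4xx_hal.h"   /* assumption: substitute the actual device HAL header */

void lse_to_mco(void)
{
    /* HAL_RCC_MCOConfig() also configures the PA8/MCO pin itself. */
    HAL_RCC_MCOConfig(RCC_MCO1, RCC_MCO1SOURCE_LSE, RCC_MCODIV_1);
}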
2018-07-11 03:44 PM
Output to MCO looked good, which corresponds well with seeing 70000000 system clocks per RTC second.
No watchdog, no sleep
Not my code. FW engineer is on vacation.
2018-07-11 05:27 PM
For
"Read out and check/post the content of the RCC registers, and maybe also RCC_BDCR."
you don't need to post/publish code, nor a FW engineer.
JW
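
For anyone following along, dumping those registers can be as simple as the sketch below; the register names CR/CFGR/BDCR/CSR are common across STM32 families, but check them against the actual device's reference manual, and printf retargeting to a UART is assumed:

/* Sketch (assumption: printf is already retargeted to a console/UART). */
#include "stm32f4xx.h"   /* assumption: substitute the actual device header */
#include <stdio.h>

void dump_rcc(void)
{
    printf("RCC_CR   = 0x%08lX\r\n", (unsigned long)RCC->CR);
    printf("RCC_CFGR = 0x%08lX\r\n", (unsigned long)RCC->CFGR);
    printf("RCC_BDCR = 0x%08lX\r\n", (unsigned long)RCC->BDCR);  /* LSE state, RTC clock source */
    printf("RCC_CSR  = 0x%08lX\r\n", (unsigned long)RCC->CSR);   /* LSI state, reset flags */
}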