2020-08-03 11:31 AM
Good day,
We have multiple sensor devices running in a BLE Mesh configuration. The processor in use is the STM32L152RE.
I sync all the sensors' RTCs by means of a message transmitted from a connected gateway node, and I can verify that all the RTCs are initially in sync.
Over time, however, the sensors' RTCs drift out of sync, which I believe is caused by the LSE oscillator driving the RTC clock. I have not yet done any calibration on the RTCs, which is why I am posting this question.
I have found the Cube RTC package and its examples; however, they require an accurate external 1 Hz signal generator against which the RTC-derived 1 Hz signal is matched. That successfully demonstrates the calibration, but it is not practical in the field, unless I am misunderstanding it?
I still believe that example can be used in principle but I would like to verify if my approach is correct please:
Instead of using a TIMER to measure the 1 Hz signal from the sig gen and then using that to calibrate the RTC, can I simply set up the TIMER (whose clock is derived from HSI) to generate a 1 Hz signal internally, fire an interrupt when it overflows, and use that interrupt in the same way the external sig-gen interrupt is used?
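To make the question concrete, roughly what I have in mind is the following (the timer instance, prescaler values and HAL calls are only placeholders that would have to match the real clock tree; not tested):

#include "stm32l1xx_hal.h"

TIM_HandleTypeDef htim6;

void OneHzTimer_Init(void)
{
    __HAL_RCC_TIM6_CLK_ENABLE();

    /* Assuming a 16 MHz HSI-derived timer clock:
     * 16 MHz / (16000 * 1000) = 1 Hz update rate. */
    htim6.Instance = TIM6;
    htim6.Init.Prescaler   = 16000 - 1;
    htim6.Init.Period      = 1000 - 1;
    htim6.Init.CounterMode = TIM_COUNTERMODE_UP;
    HAL_TIM_Base_Init(&htim6);

    HAL_NVIC_SetPriority(TIM6_IRQn, 3, 0);
    HAL_NVIC_EnableIRQ(TIM6_IRQn);
    HAL_TIM_Base_Start_IT(&htim6);
}

/* TIM6_IRQHandler() would call HAL_TIM_IRQHandler(&htim6) as usual; the
 * update (overflow) callback then stands in for the external 1 Hz
 * input-capture event from the Cube calibration example. */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM6) {
        /* same handling as on the sig-gen edge in the example */
    }
}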
Thanks in advance for any help and advice!
Regards,
2020-08-03 01:20 PM
You propose to use a lower-accuracy oscillator (HSI) to calibrate a higher-accuracy oscillator (LSE with crystal). The likely outcome is more drift in the RTC sync, not less.
Due to variations in the LSE crystals no two RTCs will maintain exact sync over a long period, no matter how well you calibrate the oscillator. That's why atomic clocks exist.
The fundamental question for your design is where the authoritative RTC time and date comes from. By implication this is over a BLE network (I assume you are using BLE time services for this). From your description you have no way to calibrate your RTC locally in the 'L152 since the RTC itself is the most accurate time source you have available.
Without calibration hardware you'll have to rely on your network time server to periodically sync the local RTC. You can keep a running average of the correction factor each time the local RTC is updated and use it to compute a calibration offset that reduces drift. As I recall you can extract this information from BLE time service updates.
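A rough sketch of the running-average idea, purely to illustrate the arithmetic (the time representation and function name are just placeholders, not from any particular BLE stack):

#include <stdint.h>

static double   avg_drift_ppm = 0.0;
static uint32_t resync_count  = 0;

/* Called at every network resync. All times are seconds since some epoch:
 * rtc_time from the local RTC, network_time from the BLE time service,
 * last_sync_network_time remembered from the previous resync. */
void on_resync(int64_t rtc_time, int64_t network_time,
               int64_t last_sync_network_time)
{
    int64_t elapsed = network_time - last_sync_network_time;
    if (elapsed <= 0)
        return;

    /* Positive error: local RTC runs fast; negative: runs slow. */
    double error_s   = (double)(rtc_time - network_time);
    double drift_ppm = error_s * 1e6 / (double)elapsed;

    /* Cumulative running average over all resyncs so far. */
    resync_count++;
    avg_drift_ppm += (drift_ppm - avg_drift_ppm) / (double)resync_count;

    /* ...then set the RTC to network_time; avg_drift_ppm is the
     * calibration offset you can feed into the RTC calibration. */
}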
Jack Peacock
2020-08-03 01:49 PM
The sig gen's signal frequency is assumed to be very accurate, unlike an internally generated signal.
If you study the application note AN3371, "Using the hardware real-time clock (RTC) in STM32 F0, F2, F3, F4 and L1 series of MCUs", wouldn't Chapter 1.5 "Synchronizing the RTC" be what could help, given that the gateway's more accurate time is periodically broadcast?
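In the Cube HAL that shift feature is exposed as HAL_RTCEx_SetSynchroShift(); a rough sketch of nudging the RTC toward the broadcast time without stopping it could look like this (assumes the default synchronous prescaler of 255 and an existing hrtc handle; whole-second errors would still be corrected by setting the time normally; untested):

#include "stm32l1xx_hal.h"

#define PREDIV_S_ASSUMED  255u   /* assumed RTC synchronous prescaler */

extern RTC_HandleTypeDef hrtc;

/* error_subsec: how far the RTC is ahead of the gateway time, in units of
 * 1/(PREDIV_S_ASSUMED + 1) s; negative means the RTC is behind. */
HAL_StatusTypeDef rtc_shift_subseconds(int32_t error_subsec)
{
    if (error_subsec > 0) {
        /* RTC ahead: delay it by subtracting that many fractions. */
        return HAL_RTCEx_SetSynchroShift(&hrtc, RTC_SHIFTADD1S_RESET,
                                         (uint32_t)error_subsec);
    }
    if (error_subsec < 0) {
        /* RTC behind: add one second, then subtract the remainder. */
        uint32_t subfs = (uint32_t)((int32_t)(PREDIV_S_ASSUMED + 1u) + error_subsec);
        return HAL_RTCEx_SetSynchroShift(&hrtc, RTC_SHIFTADD1S_SET, subfs);
    }
    return HAL_OK;   /* already in sync */
}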
2020-08-04 01:33 AM
Thank you for your replies.
Jack, you pointed out something obvious that I did not think through clearly enough. I thought that the HSI accuracy tolerance is more "predictable" when compared to the various elements that can affect the external clock, so I might be able to force the system to operate in a fault region that I could predict. Anyway, with that said, I do actually implement daily resyncs with my gateway device, but I felt this just covers up the problem instead of solving it, and that bothered me a bit.
I guess the best approach is to calibrate the RTC at production assembly with a sig gen. What I now take from your answers is that resyncs will be required periodically anyway, given the nature of the LSE.
Thanks!
2020-08-04 02:46 AM
> What I now take from your answers is that resyncs will anyway be required periodically given the nature of the LSE.
At the end of the day, yes; but at each resync you can also calculate how much the RTC deviated since the last resync, and that may give you an opportunity for a much better (longer-term) correction to apply to the RTC than a one-off short-term measurement at manufacturing.
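For illustration, applying such a long-term figure via the RTC smooth calibration (CALR) might look roughly like this (HAL names as in the Cube L1 package; assuming the usual 32 s calibration window and a 32.768 kHz LSE; drift_ppm > 0 meaning the RTC runs fast; untested):

#include <math.h>
#include "stm32l1xx_hal.h"

extern RTC_HandleTypeDef hrtc;

HAL_StatusTypeDef rtc_apply_drift_ppm(double drift_ppm)
{
    /* One pulse masked/added over the 32 s window is 1/2^20 = 0.954 ppm. */
    int32_t pulses = (int32_t)lround(drift_ppm * 1048576.0 / 1e6);

    if (pulses >= 0) {
        /* RTC fast: mask 'pulses' LSE cycles per window (CALM only). */
        if (pulses > 511) pulses = 511;
        return HAL_RTCEx_SetSmoothCalib(&hrtc, RTC_SMOOTHCALIB_PERIOD_32SEC,
                                        RTC_SMOOTHCALIB_PLUSPULSES_RESET,
                                        (uint32_t)pulses);
    } else {
        /* RTC slow: insert 512 extra pulses (CALP) and mask the excess. */
        int32_t calm = 512 + pulses;          /* pulses is negative here */
        if (calm < 0) calm = 0;
        return HAL_RTCEx_SetSmoothCalib(&hrtc, RTC_SMOOTHCALIB_PERIOD_32SEC,
                                        RTC_SMOOTHCALIB_PLUSPULSES_SET,
                                        (uint32_t)calm);
    }
}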
JW