RTC Timestamping Delay?

Lennart-Lutz
Associate II

Hi everyone,

I am currently working with an STM32L431 on a project that timestamps incoming IEEE 802.15.4 frames via an AT86RF233 radio. The radio module is configured to issue an interrupt when a message is received, and this interrupt signal is connected to PC13, which is the timestamp pin on the STM32L431. To enable the basic timestamp functionality I set the TSE bit in the RTC_CR register (nothing more), and I am able to query the timestamp time register (TSTR) to get the time at which the frame was received.
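For reference, the enable sequence amounts to roughly this (a minimal sketch; it assumes the RTC is already clocked and backup-domain write access is enabled):

/* Sketch: enable RTC timestamping on the TS pin (PC13). */
RTC->WPR = 0xCA;            /* unlock RTC write protection (key 1) */
RTC->WPR = 0x53;            /* unlock RTC write protection (key 2) */
RTC->CR &= ~RTC_CR_TSEDGE;  /* capture on the rising edge of the TS pin */
RTC->CR |= RTC_CR_TSE;      /* enable the timestamp function */
RTC->WPR = 0xFF;            /* relock write protection */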

However, I have a question regarding the timing precision of copying the sub-second register (SSR), time register (TR), and date register (DR) into their respective timestamp registers. Specifically, I am concerned about the duration of this process in microseconds, as I am implementing a time synchronization mechanism. Does copying these registers take a significant amount of time? And are there other sources of delay between the interrupt and the moment the timestamp registers are filled with the current time? Any insights or detailed timing information would be greatly appreciated.

Thank you in advance!

6 REPLIES

Not a direct answer to your question, but one possibility is to take the timestamp with a regular timer and combine it with the RTC to calculate the sub-seconds part yourself.

In the past I made a data logger that used a sub-millisecond timer to log events, and one of the events was the RTC 1-second tick (or x-second tick). I calculated the sub-seconds in post-processing using linear regression. This allowed jumps from time synchronization (the time could be set a few seconds forward or even backward) to be smoothed out, and it even allowed me to start logging before the RTC was valid (empty RTC battery while waiting for a GPS time signal). But you can do this calculation in real time too. This approach combines long-term accuracy with short-term high precision.
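The regression step could look roughly like this in C (an illustrative sketch, not the original logger code): fit the RTC seconds against the free-running timer ticks captured at each RTC tick, then map any event's tick to an absolute time with sub-second resolution.

#include <stdint.h>
#include <stddef.h>

typedef struct { double slope, intercept; } fit_t;

/* x[i] = timer tick captured at the i-th RTC second tick,
 * y[i] = RTC time in seconds at that tick */
static fit_t fit_ticks(const uint32_t *x, const uint32_t *y, size_t n)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += (double)x[i] * x[i];
        sxy += (double)x[i] * y[i];
    }
    fit_t f;
    f.slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    f.intercept = (sy - f.slope * sx) / n;
    return f;
}

/* Absolute time (seconds, with fractional part) of an event at 'tick' */
static double event_time(fit_t f, uint32_t tick)
{
    return f.slope * tick + f.intercept;
}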

Lennart-Lutz
Associate II

Hi,

thanks for your answer. The problem with this is that my long-term goal is to implement a TDMA protocol, which leverages the sleep-between-slots nature of TDMA to conserve power. I don't think I can use your approach, since I use the RTC to wake up at the right time, i.e., at a slot.

In addition to my first post: I have already implemented a time synchronization protocol of this sort and achieved a precision of around ±30 µs, which is the quantization error of the RTC. When I try to implement the same with the timestamping functionality, I get around 90-120 µs of error. I am 99% sure that I can rule out all external sources of error (interrupts, calculation delays, ...), except that the timestamping simply takes a very long time in the RTC itself...


@Lennart-Lutz wrote:

Hi,

thanks for your answer. The problem with this is that my long-term goal is to implement a TDMA protocol, which leverages the sleep-between-slots nature of TDMA to conserve power. I don't think I can use your approach, since I use the RTC to wake up at the right time, i.e., at a slot.


Won't sleeping and waking up introduce latency? I'm not familiar with the timestamping feature of the peripheral or with what its latency is.

 


@Lennart-Lutz wrote:

When I try to implement the same with the timestamping functionality, I get around 90-120 µs of error.


Can you share your code? This doesn't seem right. Could it be that it receives multiple packets and overwrites the timestamps with newer ones?


Hi,

I am programming the STM32L431 with RIOT OS and have implemented the "Delay Measurement Time Synchronization" (DMTS) protocol by Ping. This protocol relies on measuring the delays along the message transfer path. To mitigate receiver-side errors, I intended to use the timestamping feature of the RTC and the timing interrupt of the AT86RF233.

Providing detailed code examples is challenging since most of the functionality occurs in hardware. However, here's the conceptual overview:

On the sender side (the synchronization master), I send a message containing the transmit timestamp. On the receiver side, when the physical header (PHR) of the transmission is received (an IEEE 802.15.4 frame starts with a preamble and a synchronization header (SHR), followed by the PHR, the MAC header, and the payload), the receiver takes the timestamp via PC13. After validating the message (the payload contains a magic 32-bit number), the receiver retrieves the timestamp from the timestamp registers. The code to take a microsecond-resolution timestamp looks like this:

 

int rtc_get_timestamp_from_timestamp_reg(uint64_t *timestamp)
{
    /* First thing to do: save the captured time */
    uint32_t tr = RTC->TSTR;
    uint32_t dr = RTC->TSDR;

    struct tm time = { 0 };   /* zero-init so mktime() sees no garbage fields */
    *timestamp = 0;

    time.tm_year = bcd2val(dr, RTC_DR_YU_Pos, DR_Y_MASK) + YEAR_OFFSET;
    time.tm_mon  = bcd2val(dr, RTC_DR_MU_Pos, DR_M_MASK) - 1;
    time.tm_mday = bcd2val(dr, RTC_DR_DU_Pos, DR_D_MASK);
    time.tm_hour = bcd2val(tr, RTC_TR_HU_Pos, TR_H_MASK);
    time.tm_min  = bcd2val(tr, RTC_TR_MNU_Pos, TR_M_MASK);
    time.tm_sec  = bcd2val(tr, RTC_TR_SU_Pos, TR_S_MASK);

    *timestamp = (uint64_t)mktime(&time);

    return 0;
}

int rtc_get_timestamp_micros_from_timestamp_reg(uint64_t *timestamp)
{
    uint32_t ssr = RTC->TSSSR;
    rtc_get_timestamp_from_timestamp_reg(timestamp);

    /* The prescaler is static, so we can use it directly. Widen to 64 bit
     * first: multiplying (PRE_SYNC - ssr) by 1000 * 1000 in 32 bit overflows. */
    uint64_t numerator = PRE_SYNC - ssr;
    numerator *= 1000;
    numerator *= 1000;      /* scale to microseconds */
    numerator *= 1000;      /* extra factor of 1000 for rounding below */

    uint64_t micros = numerator / (PRE_SYNC + 1);
    micros += 500;          /* round to nearest */
    micros /= 1000;         /* remove the extra factor again */

    *timestamp *= 1000 * 1000;  /* seconds -> microsecond resolution */
    *timestamp += micros;

    return 0;
}

 

Note that I set the prescalers to the appropriate values to achieve the highest precision of 30.5 µs with a 32.768 kHz low-speed external oscillator.
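Concretely, the prescaler setup amounts to something like this (a sketch; PRE_SYNC is my own define, register names as in the STM32L4 CMSIS headers):

/* Sketch: maximum subsecond resolution from the 32.768 kHz LSE.
 * PREDIV_A = 0 and PREDIV_S = 32767 keep the calendar at 1 Hz and
 * give a subsecond tick of 1/32768 s (about 30.5 us). */
#define PRE_SYNC 32767u

RTC->WPR = 0xCA;                          /* unlock write protection */
RTC->WPR = 0x53;
RTC->ISR |= RTC_ISR_INIT;                 /* request initialization mode */
while (!(RTC->ISR & RTC_ISR_INITF)) {}    /* wait until it is entered */
RTC->PRER = PRE_SYNC;                     /* PREDIV_S, first of two writes */
RTC->PRER |= 0 << RTC_PRER_PREDIV_A_Pos;  /* PREDIV_A = 0, second write */
RTC->ISR &= ~RTC_ISR_INIT;                /* leave initialization mode */
RTC->WPR = 0xFF;                          /* relock */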

 

Once the timestamps are acquired, the receiver calculates the clock offset using this:

 

clock_offset = (rx_timestamp + DMTS_OFFSET) - rx_started_timestamp;

 

The variable rx_timestamp is the timestamp the sender sent, and rx_started_timestamp is the timestamp acquired from the timestamp registers when the message was received. The DMTS_OFFSET variable represents the delays along the message transfer path: taking a timestamp, transferring the packet to the AT86RF233, some radio delays (ramp-up, ...), the transmission time until the PHR is fully sent, and finally the time it takes to take a reception timestamp on the receiver side (interrupt delay, ...). These delays have been measured with a logic analyzer and are accurate, as I can achieve a precision of 0-30 µs using a similar concept that does not use the timing interrupt and the RTC timestamp function, but instead an RX-started interrupt and a regular timestamp taken from the RTC time shadow registers directly in the ISR (in software).

The real problem arises with the DMTS_OFFSET variable. I compensated for the delay between the interrupt and the moment the timestamp is taken in the RX-started interrupt (not using the RTC timestamp function). This delay is about 87 µs. If I exclude this delay when using the RTC timestamp function, I encounter an error of 90-120 µs, which aligns with this delay. This suggests that taking the timestamp with the RTC timestamp function takes as long as not using it. Therefore, I suspect there is a significant delay in the RTC. Could it be that the timestamp event is not handled directly in hardware, but instead an interrupt is issued and the CPU has to process it, i.e., copy the timestamp registers?

 


@unsigned_char_array wrote:

Can you share your code? This doesn't seem right. Could it be that it receives multiple packets and overwrites the timestamps with newer ones?




I can exclude this issue because the RTC ISR register has a TSF flag that indicates a timestamp event has occurred; subsequent events do not overwrite the timestamp registers. After validating the received message on the receiver side, I clear this bit and directly calculate the clock offset. I also verified the interrupts with a logic analyzer and confirmed that only a single message was received.
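For completeness, the flag handling around the timestamp read looks roughly like this (sketch):

/* Sketch: read the timestamp only when a capture happened, then clear
 * TSF so the next event can be recorded again. */
uint64_t ts_micros;

if (RTC->ISR & RTC_ISR_TSF) {
    rtc_get_timestamp_micros_from_timestamp_reg(&ts_micros);
    RTC->ISR &= ~RTC_ISR_TSF;    /* clear the event flag (rc_w0 bit) */
}
if (RTC->ISR & RTC_ISR_TSOVF) {
    RTC->ISR &= ~RTC_ISR_TSOVF;  /* a further event arrived and was dropped */
}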

I have an idea:

  • Use a µs-precision timer to create a pulse on a GPIO pin at a precise interval, for instance every 1.000001 seconds.
  • Connect the output to the timestamp pin.
  • Start the timer and clear the RTC at the same time.
  • Log the raw timestamp data from the timestamp interrupt in an array.
  • After x cycles, stop the program and analyse the results (print via serial or read using a debugger).

There should be no latency, only at most one extra RTC tick of delay, and perhaps some dither (see the sketch below).

I have several development boards with a different STM32 than yours, but they do have timestamping. I could replicate the results, since it won't need Ethernet.
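A rough sketch of the test loop (hypothetical, using RIOT's gpio and ztimer APIs since that is what you run; the pulse pin is an arbitrary choice, wire it to PC13):

#include "periph/gpio.h"
#include "ztimer.h"
#include "cpu.h"                       /* pulls in the RTC register definitions */

#define N_SAMPLES 32
#define PULSE_PIN GPIO_PIN(PORT_A, 0)  /* hypothetical pin, wire to PC13 */

int main(void)
{
    static uint32_t ts_ssr[N_SAMPLES];
    unsigned n = 0;

    gpio_init(PULSE_PIN, GPIO_OUT);

    while (n < N_SAMPLES) {
        ztimer_sleep(ZTIMER_USEC, 1000001);   /* 1.000001 s interval */
        gpio_toggle(PULSE_PIN);               /* rising edge every 2nd toggle */

        if (RTC->ISR & RTC_ISR_TSF) {         /* a capture happened */
            ts_ssr[n++] = RTC->TSSSR;         /* log the raw subseconds */
            RTC->ISR &= ~RTC_ISR_TSF;
        }
    }
    /* dump ts_ssr[] over serial and compare successive deltas */
    return 0;
}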

 

Lennart-Lutz
Associate II

Hi,

 

good idea, I will try it, and if I have some results, I will report them.