
UART maximum tolerated deviation

Moose
Associate II

So I am reading the RM0008 reference manual because I need to describe some limits for my system.
I am looking for the UART's tolerance to clock drift.
On page 800 I am confused by these tables:

 

[Attached image: NF is dont care.png]

In my case DIV_Fraction is different from 0 and the M bit is 0.
The way I read the table... the Noise Flag triggers as an error if there is more than 3.33% clock drift...
but it does not care if there is 3.88%.
This makes no sense, so... am I misunderstanding something, or?


Link to the document in question:
https://www.st.com/resource/en/reference_manual/rm0008-stm32f101xx-stm32f102xx-stm32f103xx-stm32f105xx-and-stm32f107xx-advanced-armbased-32bit-mcus-stmicroelectronics.pdf


3 REPLIES
mbarg.1
Senior III

I guess you have to spend some time and recall the basics of UARTs.

All real-world signals are analog, and a trigger transforms them to digital - therefore rise time and fall time play a great role in when signals get into the digital domain.

Once they are digital, they are oversampled - see page 974 - and the oversampling is vital to increase or decrease the tolerance to clock speed.

As the picture in the manual shows, the decision is between count a and count b to recover the length of a baud and decide if it is a 0 or a 1.

On top, you have clocks - TX and RX - that must have some tolerance to allow a system to sample and decode the sampled stream; but unless you first define how your system and your USART are set up, it is very difficult to talk of tolerances in absolute terms.

At last, you come to the table, which shows that if you decide that noise is a problem, that will reduce the tolerance; the same goes for the fractional divider, which generates an asymmetric clock, adding quantization noise to the signal.

mike

gbm
Principal

Yes, you are misunderstanding the meaning of the numbers in the table.

If YOU don't care about NF, then the allowed clock difference between the transmitter and receiver is 3.88%. If YOU require the NF flag check, then it's 3.33%.

Basically, with the default 16x oversampling, single-side clock deviation should not exceed 2%. It's not about the details of the STM32 UART - it's common sense. The total frame transfer time difference between RX and TX must be less than half of a bit time slice.
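
A rough back-of-envelope of that "half a bit over the whole frame" rule, as a small C sketch. The 10-bit frame (start + 8 data + stop) and the even split of the budget between TX and RX are my assumptions about where a figure near 2% could come from; neither is stated in the post or in RM0008.

```c
#include <stdio.h>

int main(void)
{
    const double frame_bits   = 10.0;                /* start + 8 data + stop bits          */
    const double total_budget = 0.5 / frame_bits;    /* half a bit accumulated over a frame */
    const double per_side     = total_budget / 2.0;  /* split between TX and RX clocks      */

    printf("total budget: %.1f%%, per side: %.2f%%\n",
           total_budget * 100.0, per_side * 100.0);
    /* Prints a 5.0% total budget and 2.50% per side; leaving some margin
     * for the 3-sample noise detection around the bit centre lands close
     * to a ~2% per-side rule of thumb. */
    return 0;
}
```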

My STM32 stuff on github - compact USB device stack and more: https://github.com/gbm-ii/gbmUSBdevice
Moose
Associate II

The fact that the "don't care" was written from the users perspective instead of the MCU was a useful hint from gbm
The rest of gbm's answer... not so much. ""common sense" is a vague term" and I am not sure where the 2% came from.. 

I also had to find out that there is no such thing as a "NF(Noise flag)" bit anywhere. I figure they are referring to the NE bit in the USART_SR register.
And what is meant by "is error" and "don't care" is "Is the EIE bit set in register USART_CR3"


I ended up reverse engineering the numbers to figure it out.

I noticed the numbers for "DIV_Fraction is 0" with the M bit set to 0 were very neat:
3.75% and 4.375%
Since this is for 10 bits, I could figure out that these are the fractions:
6/160 and 7/160


The 160 comes from the 16x oversampling of the 10 bits: 16 samples for each bit.
And the reason for the 6 and the 7 can be found in the oversampling-for-noise-detection diagram on page 796 (not page 974, as mbarg.1 wrote).
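
A quick sanity check of those fractions, as a small C sketch (my own, not from RM0008). The frame lengths of 10 bits for M = 0 and 11 bits for M = 1 are assumptions based on 1 start + 8 or 9 data + 1 stop bit.

```c
#include <stdio.h>

int main(void)
{
    const int oversampling  = 16;
    const int frame_bits[2] = { 10, 11 };   /* assumed: M = 0 -> 10 bits, M = 1 -> 11 bits */

    for (int m = 0; m < 2; m++) {
        int samples_per_frame = oversampling * frame_bits[m];      /* 160 or 176        */
        double nf_is_error    = 100.0 * 6 / samples_per_frame;     /* NF is an error    */
        double nf_dont_care   = 100.0 * 7 / samples_per_frame;     /* NF is don't care  */
        printf("M = %d: %.2f%% / %.3f%%\n", m, nf_is_error, nf_dont_care);
    }
    /* Prints 3.75% / 4.375% for M = 0, matching the "DIV_Fraction is 0" row
     * of the table, and 3.41% / 3.977% for M = 1 by the same reasoning. */
    return 0;
}
```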

If the signal is off by 6/16 of a bit time at the last bit, the receiver will take its 3 samples with 1 of them falling in a different bit, thus triggering an NE error by getting a 100 or 011.
But if NE error interrupts are not enabled with the EIE bit, then the 100 or 011 values will simply be read as 0 and 1.
But when you increase the error to 7/16 of a bit time, slower or faster, then the expected 0 will read 110 or the expected 1 will read 100, either of them being wrong.
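
A minimal C illustration of that 3-sample majority vote, as I understand the behaviour described above (a sketch, not code from RM0008):

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  value;   /* bit value decided by 2-out-of-3 majority vote   */
    bool noise;   /* true when the three samples were not all equal  */
} sampled_bit;

static sampled_bit sample_bit(int s1, int s2, int s3)
{
    sampled_bit b;
    b.value = (s1 + s2 + s3) >= 2;       /* majority of the three samples        */
    b.noise = !(s1 == s2 && s2 == s3);   /* 100, 011, etc. raise the noise flag  */
    return b;
}

int main(void)
{
    /* One sample slipped into the neighbouring bit (the 6/16 case):
     * the value is still correct, but noise is flagged. */
    sampled_bit one_off = sample_bit(0, 1, 1);
    /* Two samples slipped (the 7/16 case): the majority vote now
     * reads the wrong value. */
    sampled_bit two_off = sample_bit(0, 0, 1);

    printf("one off: value=%d noise=%d\n", one_off.value, one_off.noise);
    printf("two off: value=%d noise=%d\n", two_off.value, two_off.noise);
    return 0;
}
```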

Thank you gbm for setting me on the right track :)