2016-07-28 02:09 AM
Hello all, STM32F1xx ref. manual (RM0041) states different USART receiver tolerances to clock deviation depending on whether DIV_Fraction is zero or not (see attachment).
Could someone please explain to me why fractional baud-rate is influencing clock-deviation tolerances?
#uart #usart #tolerance
2016-07-28 04:47 AM
Because when the baud-rate divisor is fractional, the bit is not sampled at regular intervals.
Normally the bit is sampled at 7/16, 8/16 and 9/16 of the bit time. Say the divisor is 5.0: then sampling occurs at the trunc(5.0*7)=35th, trunc(5.0*8)=40th and trunc(5.0*9)=45th tick of the USART input clock, which is exactly 35/(5.0*16) = 7/16 etc., as it should be (i.e. 0.4375, 0.5000, 0.5625 of the bit time). If the divisor is, say, 5.25, then sampling occurs at the trunc(5.25*7)=36th, trunc(5.25*8)=42nd and trunc(5.25*9)=47th tick, which is 36/(5.25*16)=0.4286, 42/(5.25*16)=0.5000 and 47/(5.25*16)=0.5595 of the bit time. This deviation from the regular sampling positions changes the worst-case timing at which 2 of the 3 samples still fall within the correct bit. I'm not saying .25 is the worst-case fractional value, but you should see the point by now. There may also be some related nuances in detecting the start bit which affect this calculation.

Nevertheless, other factors enter the complete UART timing budget as well, often making the fractions of a percent related to the fractional baud rate irrelevant. Examples are source clock precision and jitter, asymmetric edge skew in the transmission chain (especially if level conversion is involved, whether true RS232 or RS485, or one of the open-collector schemes, or other), engineering margins, etc.

JW
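To make Jan's arithmetic concrete, here is a small C sketch. It is only an illustration of the numbers quoted above, not the actual USART receiver logic, and the trunc() model of the sample-point placement is taken from the explanation in this thread:

```c
#include <stdio.h>
#include <math.h>

/* For a given baud-rate divisor, show at which input-clock tick the
 * 7/16, 8/16 and 9/16 sample points fall, and where that lands within
 * the bit time compared with the ideal n/16 positions. */
static void sample_points(double divisor)
{
    printf("divisor = %.2f\n", divisor);
    for (int n = 7; n <= 9; n++) {
        int tick      = (int)floor(divisor * n);   /* integer clock tick actually used */
        double actual = tick / (divisor * 16.0);   /* position within the bit, 0..1    */
        double ideal  = n / 16.0;                  /* nominal n/16 sample position     */
        printf("  sample %d/16: tick %2d -> %.4f of bit time (ideal %.4f)\n",
               n, tick, actual, ideal);
    }
}

int main(void)
{
    sample_points(5.0);   /* integer divisor: samples land exactly at 7/16, 8/16, 9/16 */
    sample_points(5.25);  /* fractional divisor: samples shift, shrinking the margin   */
    return 0;
}
```

Running it reproduces the 0.4375/0.5000/0.5625 figures for the integer divisor and the 0.4286/0.5000/0.5595 figures for the fractional one.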
2016-07-28 05:00 AM
Hi Jan, thanks for this explanation - I think I got it now ;)
Best,
David