2020-10-05 07:54 AM
Please explain why 104.167 is converted to 104.1875, and how it is encoded as 0x683.
My understanding of the encoding is that 104 is converted from decimal to hex for the mantissa part, and 0.1875 is converted separately for the fraction part.
If I use fCLK/baudrate, i.e. 16 MHz / 9600 = 1666.666d ≈ 1667d = 0x683h.
I don't understand why that formula is used.
2020-10-05 08:17 AM
Because some amount of error is allowed in the USART clock rate, and 104.1875 is the closest achievable value. Typically under 1% error is fine; no clock has perfect accuracy anyway.
104.1875 = 0x683 / 16
2020-10-05 08:22 AM
The BRR uses a fixed-point representation, i.e. Q12.4 (12 integer bits, 4 fractional bits).
int((104.1875 * 16) + 0.5) = 1667 = 0x683
Personally I've been using BRR = APBCLK / BAUD for 13+ years; it is simpler to explain and compute.
You are basically computing the 16x rate. The USART divides the bit window into 16 pieces and can realign to the input based on the centre time of the bit: effectively there is a 16-bit shift register on the input, which is used to recognize the input edges.
2020-10-05 01:29 PM
> Personally I've been using BRR = APBCLK / BAUD for 13+ years, it is simpler to explain/compute.
+1, but the calculation gets trickier if you want to use 8x oversampling. OTOH, I have yet to find a reason for that (except extreme baud rates, which are not a good idea anyway).
JW