STM32F415 USB Full Speed Clock Accuracy
2022-07-13 11:24 PM
Hello all,
While searching online, I came across this specification regarding USB FS:
For full-speed communications, the data rate is specified to be 12 Mbps +/- 0.25%.
I decided to test this specification by connecting an STM32F415 MCU to a PC as a virtual COM port.
Instead of using the local 12 MHz crystal, I connected an external function generator, which lets me deviate from the 12 MHz center frequency.
I swept the reference up and down from 12 MHz until I saw a loss of communication between the MCU and the PC.
The deviation at which communication broke was about ±700 kHz from the 12 MHz reference, which is very far from the ±0.25% specification above.
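For reference, here is that arithmetic as a small standalone C snippet; the figures are just the ones quoted above, nothing more:

```c
/* Quick sanity check of the numbers above (a standalone sketch, not part of
 * any USB stack): what the FS spec allows vs. where the link actually broke. */
#include <stdio.h>

int main(void)
{
    const double f_ref_hz    = 12e6;    /* nominal 12 MHz reference            */
    const double spec_tol    = 0.0025;  /* +/-0.25% from the USB 2.0 FS spec   */
    const double observed_hz = 700e3;   /* deviation where communication broke */

    printf("spec allows   : +/-%.0f kHz (%.2f%%)\n",
           f_ref_hz * spec_tol / 1e3, spec_tol * 100.0);
    printf("observed limit: +/-%.0f kHz (%.2f%%)\n",
           observed_hz / 1e3, observed_hz / f_ref_hz * 100.0);
    return 0;
}
/* Prints roughly: spec allows +/-30 kHz (0.25%), observed +/-700 kHz (5.83%). */
```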
Can anyone explain the difference between the specification and the actual test?
Thank you all,
Nir
Labels: STM32F4 Series, USB
2022-07-14 12:49 AM
USB is not a UART. It's not just the data rate that must be matched: USB requires 0.5% clock accuracy to operate correctly, and anything beyond that yields random results.
2022-07-14 01:32 AM
I think there is a misunderstanding - let me try again:
I'm using the MCU's USB port as a virtual COM port: when I plug in the USB cable coming from the MCU, the PC recognizes the MCU as a "virtual serial port". It does not matter whether I actually run any UART communication or not; the USB bus itself runs at 12 Mbps. In this state, I increased/decreased the MCU's crystal clock by at least 6%, and only then did the PC Device Manager stop identifying the MCU as a virtual COM port. I was not referring to USB as a UART at any point.
Thanks,
Nir
2022-07-14 02:19 AM
This is the "it works for me" syndrome.
The fact that one particular receiver in your PC, with one particular setup of cables/hubs, works does not mean that all possible combinations of your device with all PCs or other hosts, and any set of hubs and cables, will work. The standard aims for the latter. That is why standards exist: to ensure interoperability under various, even the most adverse, conditions.
JW
2022-07-14 02:29 AM
This is the "it works for me" syndrome...sure, if we are discussing the differences between 0.5% to for example 1%...
0.5% against 6% is a bit puzzling to me...sounds like a huge "safety net"
Nir
2022-07-14 04:47 AM
Did you try your setting at +85°C and -20°C?
2022-07-14 06:59 AM
This is way more complex than just basic bit timing. At that level, USB is not that different from a UART: the specified maximum 20 ns rise/fall time of the driver, the expected 48 MHz sampling (undocumented, but quietly assumed, as it was the typical maximum frequency of cheap digital parts of that era), and mainly the bit stuffing together allow probably a few percent of bit-timing error. [EDIT: see next post]
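Here is that back-of-envelope reasoning as a small C sketch. The assumptions are mine, not the spec's own budget: a receiver that resynchronizes on every J/K transition and tolerates about half a bit of accumulated drift before mis-sampling, with a 4x-oversampling DPLL losing another quarter bit to quantization.

```c
/* Back-of-envelope model of FS bit-level tolerance (assumptions as stated
 * above - this is not the spec's own jitter budget). */
#include <stdio.h>

int main(void)
{
    const double max_bits_between_edges = 7.0;  /* bit stuffing guarantees a transition at least every 7 bits */
    const double sample_margin_bits     = 0.5;  /* drift allowed before sampling the wrong bit                */
    const double dpll_step_bits         = 0.25; /* 4x oversampling: resync granularity is 1/4 bit             */

    /* Ideal receiver that resynchronizes on every edge: drift accumulated
     * over 7 bits must stay below half a bit. */
    double ideal_tol = sample_margin_bits / max_bits_between_edges;

    /* Same, but losing a quarter bit to the 4x-oversampling DPLL quantization. */
    double dpll_tol = (sample_margin_bits - dpll_step_bits) / max_bits_between_edges;

    printf("ideal resync receiver: ~%.1f%% frequency error\n", ideal_tol * 100.0);
    printf("4x-oversampling DPLL : ~%.1f%% frequency error\n", dpll_tol * 100.0);
    return 0;
}
/* Prints roughly 7.1% and 3.6% - "a few percent", in the same ballpark as the
 * ~6% observed on a short, hub-free connection. */
```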
However, there are also higher-level timing requirements, to ensure that the whole chain from device through hub(s) to host remains consistent under various pathological situations. Read the first few subchapters of chapter 11 of USB 2.0 (the hub chapter): the main point is that if there's a runaway babbling device downstream, the hub must be able to cut it off so that it can still detect SOF. On the other hand, a hub must not cut short a downstream function's valid packet, so a downstream function (device; USB terminology is a mess) must not transmit beyond a certain point in time, which is why it too must obey a timing constraint. In other words, the host's scheduler knows that, with hubs and downstream devices not deviating by more than 0.25%, it can schedule 12000-42 "perfect" bit times for transactions within one frame.
As long as frame timing on the host port is not stretched to the limits, you won't see the effect of violating this constraint; yet it may result in very unpleasant effects (surprising unreliability/disconnects) in situations where the host scheduler starts to hit the limit.
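To get a feel for the scale of that constraint, a small sketch (my own illustration of magnitude, not the spec's actual end-of-frame margins):

```c
/* Scale of frame-timing disagreement per 1 ms frame at various clock errors
 * (an illustration of magnitude only; the spec's real EOF/scheduling margins
 * are defined in USB 2.0 chapter 11). */
#include <stdio.h>

int main(void)
{
    const double bits_per_frame = 12000.0;          /* 1 ms frame at 12 Mbps          */
    const double tolerances[]   = { 0.0025, 0.06 }; /* spec limit vs. the experiment  */

    for (int i = 0; i < 2; i++) {
        printf("%.2f%% clock error -> up to %.0f bit times of drift per frame\n",
               tolerances[i] * 100.0, bits_per_frame * tolerances[i]);
    }
    return 0;
}
/* 0.25% -> about 30 bit times per frame; 6% -> about 720 bit times, far beyond
 * any margin a host scheduler could reserve near the end of a frame. */
```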
There may be other reasons for the constraint, too. I did not write the USB spec.
In other words, as with all cases of "it works for me": go ahead, but then don't complain if it mysteriously fails to work in the future.
JW
PS. The standard of course stems from the technology of the era. RC oscillators are only good to a few percent, which was obviously rejected for FS straight off; the next best option was ceramic resonators, and that's where the 0.25% comes from.
2022-07-14 07:25 AM
Okay, so the 48 MHz sampling *is* mentioned in USB 2.0, as
A simple 4X over-sampling state machine DPLL can be built that satisfies these requirements.
in ch. 7.1.15.1. And the frequency tolerance forms part of the bit budget detailed in Table 7-4, as Source/Destination Frequency Tolerance (worst case): 0.21 ns/bit, 1.5 ns total [over the 7-bit period ensured by bit stuffing].
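A quick cross-check of those two figures, assuming only the nominal full-speed bit time:

```c
/* Cross-check of the Table 7-4 figures quoted above, assuming the nominal
 * full-speed bit time of 1/12 MHz = ~83.33 ns. */
#include <stdio.h>

int main(void)
{
    const double bit_time_ns = 1e9 / 12e6;  /* ~83.33 ns per full-speed bit */
    const double tol         = 0.0025;      /* +/-0.25% frequency tolerance */
    const double stuff_bits  = 7.0;         /* max bits between transitions */

    printf("per-bit error : %.2f ns\n", bit_time_ns * tol);
    printf("over 7 bits   : %.2f ns\n", bit_time_ns * tol * stuff_bits);
    return 0;
}
/* Prints ~0.21 ns and ~1.46 ns, matching the 0.21 ns/bit and ~1.5 ns figures
 * in the quoted bit budget. */
```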
Note that the major part of the budget is hub jitter, where the maximum allowable chain of 5 hubs is considered, at 3 ns per hub (hub timing requirements are given in the previous subchapter; if I understand this properly, the major portion is J-K vs. K-J transition timing mismatch). In the particular case of "testing" where no hub is used, or with today's hubs which quite likely redrive/retime even FS traffic with very high (sub-ns) timing precision, you can see a much higher tolerance for an individual endpoint's frequency mismatch.
JW
2022-07-17 12:04 AM
Thank you all for the interesting discussion,
Just to be clear, I have no intention to push the clock specification to its limit.
My MCU's actual crystal has a ±50 ppm tolerance (under all conditions), which is more than enough to meet the specification.
I just wanted to understand why the measured frequency-deviation limit came out so different from the ±0.25% specification.
Thanks again for all the explanations!
Nir
2024-12-13 08:52 AM
Hello NirAlon,
I know this thread is two years old, but I will share a thought you should consider.
The USB full-speed clock tolerance of ±0.25% is specified to guarantee operation over the maximum cable and hub topology. This means the complete turn-around time must be guaranteed from the host to a device up to 7 tiers away, i.e. through up to 5 USB hubs, with each link a 5 m cable segment.
If your setup is just a direct connection, host to device using a short cable, I would expect a higher tolerance margin.