USB Virtual Com to send/receive message

LMorr.3
Senior II

I have enabled USB Device FS and successfully use it as a virtual com port.

I now need to determine the best way to send messages from master to slave over this port, how to handle potentially corrupted data and timeouts, and whether enabling CRC will help.

My message format includes a start byte '12', then a payload size byte, then a command byte, followed by data payload bytes and 2 CRC-16 checksum bytes.
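To be concrete, this is roughly how I picture building a frame (a sketch only; crc16_compute is a placeholder for whatever CRC-16 routine I end up using, and I'm assuming 0x12 for the start byte and low-byte-first CRC order):

```c
/* Rough sketch of building one frame in the format above.
 * crc16_compute is a placeholder; the 0x12 start value and little-endian
 * CRC byte order are assumptions, not a fixed decision. */
#include <stdint.h>
#include <string.h>

#define FRAME_START       0x12u
#define FRAME_MAX_PAYLOAD 59u   /* 1 start + 1 size + 1 cmd + payload + 2 CRC <= 64 */

extern uint16_t crc16_compute(const uint8_t *data, uint16_t len);  /* placeholder */

uint16_t frame_build(uint8_t *buf, uint8_t cmd,
                     const uint8_t *payload, uint8_t payload_len)
{
    if (payload_len > FRAME_MAX_PAYLOAD)
        return 0;                                   /* refuse oversized payloads */

    buf[0] = FRAME_START;                           /* start byte         */
    buf[1] = payload_len;                           /* payload size byte  */
    buf[2] = cmd;                                   /* command byte       */
    memcpy(&buf[3], payload, payload_len);          /* data payload       */

    uint16_t crc = crc16_compute(buf, (uint16_t)(3u + payload_len));
    buf[3 + payload_len] = (uint8_t)(crc & 0xFFu);  /* CRC low byte       */
    buf[4 + payload_len] = (uint8_t)(crc >> 8);     /* CRC high byte      */

    return (uint16_t)(5u + payload_len);            /* total frame length */
}
```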

What happens if the master sends a payload of 2 bytes, but the slave receives corrupted data and interprets the payload size as 64 bytes?  I have the master retrying if it does not receive a response from the slave within 100ms, but should the slave also have a timeout to reset?  In that case I can see the slave expecting 64 bytes and potentially never parsing a complete valid message, or the master and slave becoming endlessly out of sync.  I'm also concerned the timeouts on the master and slave could trigger in a way that keeps them permanently out of sync.

Thank you

 

6 REPLIES
TDK
Guru

Technically the CDC device data is a stream, not packet based, but in practice you can count on packets being the expected length. On the STM32 side, you can guarantee this since you control the code. This lets you avoid worrying about partial packets being received.

If you stay within the max packet size, you shouldn't need to buffer anything and can simply discard the entire packet if something doesn't look right.

I would recommend the following approach: if a bad packet is received, pause sending for 100ms (or whatever, 10ms is probably plenty), discard anything received within that time, and then re-send a message after that time elapses. That should be enough to flush the system of bad data.
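A rough sketch of that on the master side, assuming the HAL millisecond tick and placeholder helpers for the CDC receive/send path:

```c
/* Sketch of the flush-and-resend idea above. vcp_discard_rx and vcp_send_frame
 * are placeholders for however you receive/send over the virtual COM port;
 * HAL_GetTick is the standard HAL millisecond tick. */
#include <stdbool.h>
#include <stdint.h>

extern uint32_t HAL_GetTick(void);
extern void     vcp_discard_rx(void);                               /* placeholder */
extern bool     vcp_send_frame(const uint8_t *frame, uint16_t len); /* placeholder */

#define FLUSH_WINDOW_MS 10u

void recover_and_resend(const uint8_t *frame, uint16_t len)
{
    uint32_t start = HAL_GetTick();

    /* Stop sending and throw away anything received during the flush window. */
    while ((HAL_GetTick() - start) < FLUSH_WINDOW_MS) {
        vcp_discard_rx();
    }

    /* The link should be quiet now; re-send the original message. */
    (void)vcp_send_frame(frame, len);
}
```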

If you feel a post has answered your question, please click "Accept as Solution".

Ok thanks.  What happens if the slave gets a bad sync byte (12) and then reads the actual start byte as a command byte?  The real start byte may get flushed along with the 'bad data'.

Also, it is the master that re-sends, so I'm not clear on whether both devices should run a 10ms timeout or just the master.

UPDATE: I think adding a 10ms timeout timer on the slave that 'pops' the first byte from the buffer and then re-parses the remaining bytes should work, thanks.
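Roughly what I have in mind (just a sketch; rx_buf/rx_len and crc16_compute stand in for my real RX buffer handling, and 0x12 for the start byte):

```c
/* Sketch of the resync idea: scan for a start byte, and on a 10ms timeout
 * drop one byte and re-parse. rx_buf/rx_len and crc16_compute are
 * placeholders for the real receive buffer and CRC routine. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define FRAME_START   0x12u
#define FRAME_MIN_LEN 5u      /* start + size + cmd + 2 CRC bytes */

extern uint8_t  rx_buf[64];
extern uint16_t rx_len;
extern uint16_t crc16_compute(const uint8_t *data, uint16_t len);  /* placeholder */

static void drop_one_byte(void)
{
    rx_len--;
    memmove(rx_buf, rx_buf + 1, rx_len);
}

bool try_parse_frame(void)
{
    while (rx_len >= FRAME_MIN_LEN) {
        if (rx_buf[0] != FRAME_START) {     /* not aligned on a start byte */
            drop_one_byte();
            continue;
        }

        uint16_t total = FRAME_MIN_LEN + rx_buf[1];
        if (total > sizeof(rx_buf)) {       /* impossible size: bogus start byte */
            drop_one_byte();
            continue;
        }
        if (rx_len < total)
            return false;                   /* wait for more bytes (or the timeout) */

        uint16_t crc_rx = (uint16_t)(rx_buf[total - 2] | (rx_buf[total - 1] << 8));
        if (crc_rx == crc16_compute(rx_buf, (uint16_t)(total - 2u))) {
            /* handle_frame(rx_buf, total); */
            rx_len -= total;
            memmove(rx_buf, rx_buf + total, rx_len);
            return true;
        }

        drop_one_byte();                    /* CRC failed: resync and keep scanning */
    }
    return false;
}

/* Called from the 10ms timeout when a partial frame has been pending too long. */
void slave_timeout_resync(void)
{
    if (rx_len > 0u) {
        drop_one_byte();
        (void)try_parse_frame();
    }
}
```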

Did you mean "you can count on packets being the expected length"?  If so, how do I 'control the code' to ensure the message is not split up?  I have one 'highest priority' timer interrupt at priority 5 and my USB, USART and SPI interrupts at priority 6, so the USB may get interrupted by USART or SPI.  The USB buffer size is large enough for the largest message.

I plan on keeping the max size to 64 bytes.

Thanks for the insight on this.

TDK
Guru

In your code, if you call CDC_Transmit_FS once for the entire packet, and the packet is <= 64 bytes, it will be sent as a single packet and not be broken up or appended to another packet.
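For example, something like this (assuming the CubeMX-generated usbd_cdc_if.c, where CDC_Transmit_FS returns USBD_BUSY while a previous transfer is still in flight):

```c
/* Send one complete frame with a single CDC_Transmit_FS call so it goes out
 * as one USB packet. Assumes the CubeMX-generated usbd_cdc_if.c; USBD_BUSY
 * just means the previous transfer hasn't finished yet. */
#include "usbd_cdc_if.h"

uint8_t send_frame(uint8_t *frame, uint16_t len)
{
    uint8_t result;

    do {
        result = CDC_Transmit_FS(frame, len);   /* one call == one packet (<= 64 bytes) */
    } while (result == USBD_BUSY);

    return result;
}
```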

Whether or not it gets broken up has nothing to do with interrupt priority.

 

USB has its own CRC data integrity check. Adding a CRC won't help much with detecting poor signal integrity, but it may help with other code bugs.
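If you add one anyway, a bitwise CRC-16 is cheap; here's a sketch using the CCITT-FALSE parameters as one common choice (any variant works as long as both sides agree):

```c
/* Sketch: software CRC-16 (CCITT-FALSE: poly 0x1021, init 0xFFFF), shown as
 * one common choice; the STM32 hardware CRC unit is another option. */
#include <stdint.h>

uint16_t crc16_compute(const uint8_t *data, uint16_t len)
{
    uint16_t crc = 0xFFFFu;

    for (uint16_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (uint8_t bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000u) ? (uint16_t)((crc << 1) ^ 0x1021u)
                                  : (uint16_t)(crc << 1);
        }
    }
    return crc;
}
```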

If you feel a post has answered your question, please click "Accept as Solution".
Johi
Senior III

Some serial protocols use STX to indicate the start of a message, ETX for the end of a message, etc. If STX or ETX occur inside the body of the message, they are preceded by DLE (see it as a kind of escape code), and of course DLE itself is also preceded by DLE. With this approach you can be sure where a message starts, as a real STX is never preceded by an odd number of DLEs.
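A rough sketch of the transmit-side stuffing, using the usual ASCII control values (the receiver simply drops each DLE and takes the next byte literally):

```c
/* Sketch of DLE byte-stuffing on the transmit side: STX/ETX frame the message,
 * and any STX, ETX or DLE inside the body is preceded by DLE. */
#include <stdint.h>

#define STX 0x02u
#define ETX 0x03u
#define DLE 0x10u

/* Returns the number of bytes written to out; size out for the worst case
 * of 2*len + 2 bytes. */
uint16_t dle_stuff(const uint8_t *in, uint16_t len, uint8_t *out)
{
    uint16_t n = 0;

    out[n++] = STX;                       /* frame start */
    for (uint16_t i = 0; i < len; i++) {
        if (in[i] == STX || in[i] == ETX || in[i] == DLE)
            out[n++] = DLE;               /* escape special bytes */
        out[n++] = in[i];
    }
    out[n++] = ETX;                       /* frame end */

    return n;
}
```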

 

LMorr.3
Senior II

The answer I had flagged as 'Accept as Solution' was apparently generated by AI, or someone used AI/ChatGPT to help write it.  I received a message from the mod stating that the AI-influenced message was deleted and asking me to re-post my comment.

I guess I was fooled by the AI, as I thought the answer was most helpful and detailed 🙂  My AI spidey senses should have been triggered, but I thought I was interacting with an actual hu-man.

I won't re-post the AI-based comment since I'm not sure what the ST policy on this is, and I don't want the ST moderation AI thinking I'm now AI.

But just to re-cap, the AI suggested I do this:

- Set up a max payload size on both sides

- Implement CRC-16 and have the slave send retransmission requests to the master on bad checksums, ensuring my app is not introducing corrupt data.

- Set reasonable timeouts for both slave and master.  The slave timeout ensures it does not wait indefinitely for a message which may never arrive.  If no response is received in time, initiate a retransmission.

- Set up incremental back-off to avoid overloading the system (roughly as sketched after this list).

- Include a sync message if I want to ensure master and slave are always in sync for longer messages.

- Add a logging mechanism for events, errors, and retries on master and slave to help debug.

- Remember to thoroughly test the various scenarios, and adjust timeouts, retry intervals, and error handling as needed.
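For the timeout/retry/back-off parts, something along these lines is what I'm picturing (a sketch only; vcp_send_frame and vcp_response_received are placeholders, and the numbers are arbitrary):

```c
/* Sketch of the master-side retry with incremental back-off. The helper
 * names and the timeout/retry counts are placeholders to illustrate the idea. */
#include <stdbool.h>
#include <stdint.h>

extern uint32_t HAL_GetTick(void);                               /* HAL ms tick  */
extern bool vcp_send_frame(const uint8_t *frame, uint16_t len);  /* placeholder  */
extern bool vcp_response_received(void);                         /* placeholder  */

#define BASE_TIMEOUT_MS 100u
#define MAX_RETRIES     5u

bool master_send_with_retry(const uint8_t *frame, uint16_t len)
{
    uint32_t timeout = BASE_TIMEOUT_MS;

    for (uint8_t attempt = 0; attempt < MAX_RETRIES; attempt++) {
        (void)vcp_send_frame(frame, len);

        uint32_t start = HAL_GetTick();
        while ((HAL_GetTick() - start) < timeout) {
            if (vcp_response_received())
                return true;               /* slave answered in time */
        }

        timeout += BASE_TIMEOUT_MS;        /* incremental back-off before retrying */
    }
    return false;                          /* give up and log/report the error */
}
```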