
USART Rx with DMA: IDLE interrupt vs timer?

Sean Kelly
Associate II
Posted on August 01, 2017 at 16:03

Hi Forum

I'm trying to implement USART Rx of variable, unknown-length messages at a 1 Mbps baud rate through DMA. I would like to minimize the latency from receiving the data to consuming it, so I'm trying to do this with some sort of idle detection, as opposed to having my consumer code periodically drain a buffer.

I can attach my code if necessary, but for right now I'm mostly looking for high-level answers on how this is best accomplished to make sure I'm on the right track.

My first attempt was to configure a buffer in DMA and enable the USART IDLE interrupt. In the USART interrupt handler, I first disable DMA (and wait for the enable bit to clear on the DMA stream) and then clear the IDLE flag (by reading the status register and then the data register). I copy the data out of the DMA buffer and then re-enable DMA in the interrupt to be ready for the next message.
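For reference, here's a trimmed sketch of the handler (SPL-style, STM32F4; UARTSLK_TYPE is my alias for USART6, and DMA2_Stream1 / consume_message stand in for my actual Rx stream and consumer code):

```c
extern uint8_t rx_buffer[RX_BUFFER_SIZE];

void __attribute__((used)) USART6_IRQHandler(void)
{
    if (USART_GetITStatus(UARTSLK_TYPE, USART_IT_IDLE) == SET)
    {
        /* Stop the stream and wait for the EN bit to actually clear */
        DMA_Cmd(DMA2_Stream1, DISABLE);
        while (DMA_GetCmdStatus(DMA2_Stream1) == ENABLE) { }

        /* Clear IDLE: read the status register, then the data register */
        (void)UARTSLK_TYPE->SR;
        (void)UARTSLK_TYPE->DR;

        /* Bytes received = buffer size minus what's left in NDTR */
        uint16_t received = RX_BUFFER_SIZE - DMA_GetCurrDataCounter(DMA2_Stream1);
        consume_message(rx_buffer, received);   /* stand-in for my consumer */

        /* Clear stream flags and re-arm for the next message */
        DMA_ClearFlag(DMA2_Stream1, DMA_FLAG_TCIF1 | DMA_FLAG_HTIF1);
        DMA_SetCurrDataCounter(DMA2_Stream1, RX_BUFFER_SIZE);
        DMA_Cmd(DMA2_Stream1, ENABLE);
    }
}
```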

Doing this gets mixed results -- I do get valid packets occasionally, but most often my received data falls into one or more of these failure categories:

  1. The valid data starts at byte 1 instead of byte 0 of the DMA buffer. The extra byte is very often a duplicate start byte. Where's the extra byte coming from? Should I be disabling the UART peripheral as well?
  2. Truncated message - A valid message is in the buffer, but it abruptly ends. It appears the IDLE interrupt might have come early, or the sender stalled (thus triggering IDLE)?
  3. Data underrun - NDTR says N bytes were received but memory inspection shows fewer than N bytes were actually touched. Do I need to wait a short while after the IDLE flag for DMA to complete before disabling the stream?

I've spent a lot of time searching for this online, and it seems the consensus is that using the IDLE interrupt is not the way to go, and I should be using a timer on the Rx pin (as described in AN3109). The reasons I have found are unclear, though. I'd like to understand more about why IDLE doesn't work for this, and what's the point of the IDLE interrupt if not for this sort of application?

Thanks

Sean

#uart-dma #usart #usart-idle #dma
15 REPLIES
Posted on August 02, 2017 at 16:20

> Which DMA registers would you consider relevant to read out for debug?

The registers of the stream you are using, plus the relevant status register (yes that's pretty much all of them, but that's still just 5 or 6 numbers).
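For example, something like this (assuming USART6 Rx on DMA2 Stream1; F4 register names, adjust for your stream):

```c
/* quick debug dump -- flags for streams 0..3 live in LISR, 4..7 in HISR */
printf("CR=%08lX NDTR=%lu PAR=%08lX M0AR=%08lX LISR=%08lX\r\n",
       (unsigned long)DMA2_Stream1->CR,  (unsigned long)DMA2_Stream1->NDTR,
       (unsigned long)DMA2_Stream1->PAR, (unsigned long)DMA2_Stream1->M0AR,
       (unsigned long)DMA2->LISR);
```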

> In the meantime, the fact that you're asking all of these questions leads me to believe that I'm not necessarily off in the woods, and I should be able to use the IDLE interrupt in this way - is that true?

I believe so; I haven't done it personally, but others apparently do. The devil, as usual, is hidden in the details.

JW

Posted on August 02, 2017 at 16:39

Oh, so the code (in the flat view I am using) appeared *above* rather than *below*... with a timestamp breaking the timeline... Jive's code behind the moderation feature is, ehm, well, how to say that politely... :|

In void __attribute__((used)) USART6_IRQHandler(void), can we be sure that only USART_IT_IDLE is enabled?

Also, does your scenario account for more than RX_BUFFER_SIZE bytes arriving between IDLEs? Is your testing transmitter well-behaved, i.e. does it transmit only up to RX_BUFFER_SIZE bytes and then idle long enough?

What are the interrupt priorities? Can these interrupts nest, and if so, how?

Other than that, I see nothing suspicious. I don't use 'libraries', so I can't say whether there's anything related to those.

JW

Posted on August 03, 2017 at 15:38

Yes, USART6_IRQHandler actually handles some other interrupts for the Tx side of things, but I pruned them out of the sample code to make reading easier. I do have an if(USART_GetITStatus(UARTSLK_TYPE, USART_IT_IDLE) == SET) check around the code that disables the DMA, etc.

Interrupts are the same priority, and I do use the NVIC.

I started digging deeper into your questions about whether my sender is well behaved. I'm not so sure now -- there may actually be scenarios where it can burst several messages without pausing long enough. I'm going to try to get this on a scope to take a look at the timing.

In the meantime, I've solved problem #1, and understand why it's usually a duplicate START byte: when I'm stepping through code debugging this (with my UART peripheral still enabled), of course it's going to go receive the next byte and have it sitting in DR! So, issue #1 is not a real issue and is user error. I haven't been able to reproduce that exact issue when I let the system free-run for a while and then set the breakpoint.

Finally, your questions about whether my transmitter is well behaved prompted me to look deeper. It turns out there ARE some cases where I get multiple (valid) packets before the IDLE interrupt fires, and I'm definitely not accounting for that. I thought I had ruled that problem out, but I think being on the debugger was masking that problem too.

So, in conclusion, I think this mechanism is working fine but I need to improve the robustness of my implementation to handle multiple packets in a row before IDLE is asserted. More likely, I'll have to switch tactics and have my upstream code drain the FIFO periodically, or just interrupt on half and full FIFO.
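Something like this sketch is what I have in mind for that tactic: the circular DMA stream is never stopped, and the HT/TC interrupts (plus IDLE, if I keep it, and/or a periodic call) all funnel into one drain routine. Again SPL-style; consume_byte is a stand-in for my consumer:

```c
static volatile uint16_t rd_idx;          /* next unread slot in rx_buffer */

static void drain_rx(void)
{
    /* DMA write position = buffer size minus what's left in NDTR */
    uint16_t wr_idx = RX_BUFFER_SIZE - DMA_GetCurrDataCounter(DMA2_Stream1);
    while (rd_idx != wr_idx)
    {
        consume_byte(rx_buffer[rd_idx]);  /* stand-in for my consumer */
        rd_idx = (rd_idx + 1) % RX_BUFFER_SIZE;
    }
}

void DMA2_Stream1_IRQHandler(void)
{
    if (DMA_GetITStatus(DMA2_Stream1, DMA_IT_HTIF1) == SET) {
        DMA_ClearITPendingBit(DMA2_Stream1, DMA_IT_HTIF1);
        drain_rx();
    }
    if (DMA_GetITStatus(DMA2_Stream1, DMA_IT_TCIF1) == SET) {
        DMA_ClearITPendingBit(DMA2_Stream1, DMA_IT_TCIF1);
        drain_rx();
    }
}
```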

Thanks for your help and much-needed kick in the pants to get me looking in the right direction!

Sean

Sean Kelly
Associate II
Posted on August 16, 2017 at 04:20

I have a follow up on this issue.

Regarding the discussion above about whether my transmitting device was well behaved -- I had found under a debugger that I was quite often filling up the entire DMA buffer before the IDLE interrupt fired, which shouldn't be possible given that my transmitting device is supposed to be sending bursts of data every 1ms. Fortunately, I also have source and JTAG access to the transmitting device and did some digging. It turns out it's behaving just fine: it takes about 240us to send a 15-byte message. That's a bit more than the theoretical 150us (15 bytes × 10 bits per byte at 1 Mbps), but the implementation is blocking through register access, so I'm not surprised there's some overhead. Either way, there should be plenty of idle time between messages for my receive side's IDLE interrupt to fire and be processed.

I went back to debugging on the STM receive side and I think this is another case of user error: I was using JTAG and breakpoints to try to detect this condition. I'm now finding that the act of setting or enabling a breakpoint is enough to overflow my DMA buffers. If I resume execution, I do not hit the condition again. If I enable or set another breakpoint (anywhere in the code, basically) my overflow breakpoint hits immediately. 

I guess the act of setting breakpoints is enough to stall the CPU? Is that in the realm of possibility? I've never had to think about the overhead of setting breakpoints before...

Thanks

Sean

Posted on August 16, 2017 at 09:06

This has nothing to do with *breakpoints* as such, but with you stopping the processor in the debugger.

Note that 32-bitters are *not* microcontrollers, they are SoCs - think of a board with a microprocessor and a bunch of peripheral chips, as we built them in the past. The debugger stops the processor, but not the surrounding circuitry, which continues to operate - including the USART and the DMA.

Some peripherals (timers, I2C, CAN, watchdogs) can be *optionally* 'frozen' during debugging, see DBGMCU_xxx registers.
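For example, on an F4 (CMSIS names):

```c
/* Freeze TIM2 and the independent watchdog while the core is halted.
   Note the USART and DMA have no such freeze bits -- they keep running
   regardless of the debugger. */
DBGMCU->APB1FZ |= DBGMCU_APB1_FZ_DBG_TIM2_STOP
                | DBGMCU_APB1_FZ_DBG_IWDG_STOP;
```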

JW

TLin.5
Associate III

I'm currently using a multi-buffering approach, mixing DMA Rx and IDLE for UART reception, with a modified HAL in order to achieve all that I need.

On IDLE, I pause the DMA reception to handle the receive event, keeping track of where in the buffer I finished reading last.

I used to have problems with IDLE and DMA receive-complete happening "at the same time", but I found that the interrupt priorities had been set incorrectly, which allowed IDLE to preempt the receive-complete handler.
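In rough outline the position tracking looks like this (illustrative sketch, not my exact code; reception is started once with HAL_UART_Receive_DMA in circular mode, and process() stands in for the receive-event handling):

```c
#define RX_BUF_SIZE 256                   /* illustrative size */
static uint8_t  rx_buf[RX_BUF_SIZE];
static uint16_t old_pos;                  /* where the last read ended */

void handle_uart_idle(UART_HandleTypeDef *huart)  /* call on IDLE IRQ */
{
    /* current DMA write position = buffer size minus remaining NDTR */
    uint16_t pos = RX_BUF_SIZE - __HAL_DMA_GET_COUNTER(huart->hdmarx);
    if (pos == old_pos)
        return;                            /* nothing new */
    if (pos > old_pos)                     /* one contiguous chunk */
        process(&rx_buf[old_pos], pos - old_pos);
    else {                                 /* chunk wrapped around */
        process(&rx_buf[old_pos], RX_BUF_SIZE - old_pos);
        process(&rx_buf[0], pos);
    }
    old_pos = pos;
}
```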