2018-01-16 11:46 PM
What is the difference between UART interrupt mode and UART DMA mode? What is the best procedure to use when I want to work with a data stream received from modules like GPS, where the exact data length is not known?
Can somebody tell me the specific applications for each of them (when to use interrupt and when to use DMA)?
2018-01-17 12:03 AM
The UART IT handler is called once to receive a byte from the UART, so you do not have to wait in blocking mode (doing nothing, just waiting for a byte): you can do anything else, and once a byte arrives the handler is called.
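Something like this, as a minimal sketch (assuming the STM32 HAL; the huart1 handle and names are illustrative):

#include "stm32f4xx_hal.h"      /* adjust to your part family */

extern UART_HandleTypeDef huart1;   /* assumed initialized elsewhere */
static uint8_t rx_byte;

void start_rx(void)
{
    /* Arm reception of one byte; returns immediately, CPU stays free */
    HAL_UART_Receive_IT(&huart1, &rx_byte, 1);
}

/* Called from the UART IRQ once the byte has arrived */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1) {
        /* use rx_byte here, then re-arm for the next one */
        HAL_UART_Receive_IT(&huart1, &rx_byte, 1);
    }
}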
In DMA mode the data you receive from the UART is stored in a RAM buffer. You can be alerted (an interrupt is generated) when the buffer is full, or test whether the buffer is full by polling (testing a bit).
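A minimal DMA sketch under the same assumptions (HAL, illustrative names); you can use the callback or skip it and poll the DMA counter instead:

#include "stm32f4xx_hal.h"      /* adjust to your part family */

#define RX_LEN 64               /* illustrative size */
extern UART_HandleTypeDef huart1;   /* assumed initialized elsewhere */
static uint8_t rx_buf[RX_LEN];

void start_rx_dma(void)
{
    /* DMA moves incoming bytes into rx_buf with no CPU involvement */
    HAL_UART_Receive_DMA(&huart1, rx_buf, RX_LEN);
}

/* Alert variant: fires once all RX_LEN bytes have landed */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    /* process rx_buf here, then re-arm if desired */
}

/* Polling variant: bytes received so far
   = RX_LEN - __HAL_DMA_GET_COUNTER(huart1.hdmarx) */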
I use DMA for collecting data from the ADC, where I know how many samples I need. It works perfectly.
For parsing data, as in your case, you need to build a state machine around the data. I mean that depending on the data you move through a certain decision tree: you interpret the data you receive and make a decision. I would probably use interrupts when the data size is unknown and, on receiving a byte, decide what comes next (an FSM). Think about how you want to parse the data; that helps with the decision.
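To illustrate the idea of a byte-driven FSM (a sketch around a hypothetical framed protocol: sync byte 0xAA, a length byte, then the payload; on_message() is a made-up consumer):

#include <stdint.h>

typedef enum { S_SYNC, S_LEN, S_PAYLOAD } st_t;

static st_t st = S_SYNC;
static uint8_t pay[64];
static uint8_t need, got;

extern void on_message(const uint8_t *p, uint8_t n);  /* made-up consumer */

/* Feed every received byte to this; the byte decides what comes next */
void fsm_feed(uint8_t b)
{
    switch (st) {
    case S_SYNC:                         /* hunt for the sync byte */
        if (b == 0xAA) st = S_LEN;
        break;
    case S_LEN:                          /* length byte tells us what to expect */
        if (b > 0 && b <= sizeof pay) { need = b; got = 0; st = S_PAYLOAD; }
        else st = S_SYNC;                /* implausible length: resync */
        break;
    case S_PAYLOAD:
        pay[got++] = b;
        if (got == need) { on_message(pay, got); st = S_SYNC; }
        break;
    }
}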
2018-01-17 02:56 AM
Thanks, but sometimes even receiving one byte per interrupt does not serve my requirement, as a block of data is received at a time. I need to re-enable the interrupt after each byte received, so the inter-character overhead adds up while a block of data is being received, and I am unable to capture the whole data stream.
2018-01-17 06:56 AM
I am not sure I understand properly what your problem is. Interrupt servicing time should be much less than the byte transmission time so there should be no problem receiving data as fast as it is transmitted and capturing the whole stream.
Are you analyzing the data in your interrupt routine? That might take too long, so it is best to only read the data (into a buffer), perhaps check for end of data (e.g. <CR><LF>), and leave the analysis and use of the data to something else.
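For example (a sketch assuming the HAL): the callback only stores the byte and raises a flag on <LF>; main() polls the flag and does the analysis.

#include "stm32f4xx_hal.h"      /* adjust to your part family */

static uint8_t rx_byte;         /* armed with HAL_UART_Receive_IT(&huart, &rx_byte, 1) */
static uint8_t line[96];
static volatile uint16_t len;
static volatile uint8_t line_ready;   /* main() polls this, parses line[], clears both */

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (len < sizeof line)
        line[len++] = rx_byte;         /* just store, no analysis in the ISR */
    if (rx_byte == '\n')               /* end-of-data check only */
        line_ready = 1;
    HAL_UART_Receive_IT(huart, &rx_byte, 1);   /* re-arm immediately */
}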
2018-01-17 07:35 AM
The question is framed in the context of the indeterminate length of the data vs the fixed DMA buffer size. One of the tricks with RX DMA in this case is to use it as a ring buffer (i.e. continuous, circular DMA) and periodically harvest the newly written words.
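A sketch of that harvest, assuming the HAL, a huart1 handle, and an RX DMA channel configured as circular (names are illustrative):

#include "stm32f4xx_hal.h"      /* adjust to your part family */

#define DMA_RX_LEN 256
extern UART_HandleTypeDef huart1;    /* assumed initialized elsewhere */
static uint8_t dma_buf[DMA_RX_LEN];
static uint16_t tail;                /* next unread position */

extern void parser_feed(uint8_t c);  /* made-up consumer */

void start_circular_rx(void)
{
    /* the RX DMA channel must be configured as DMA_CIRCULAR in the init code */
    HAL_UART_Receive_DMA(&huart1, dma_buf, DMA_RX_LEN);
}

/* Call periodically (main loop or a timer) to drain whatever is new */
void harvest(void)
{
    /* NDTR counts down from DMA_RX_LEN; head = where DMA will write next */
    uint16_t head = DMA_RX_LEN - __HAL_DMA_GET_COUNTER(huart1.hdmarx);

    while (tail != head) {
        parser_feed(dma_buf[tail]);
        tail = (tail + 1) % DMA_RX_LEN;
    }
}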
I would agree that a simple IRQ handler would be quite efficient, with parsing/processing of the data pushed to a worker thread/task. That is especially true of GPS/NMEA data, where the messages need cracking and there is often a lot of floating-point math, and people veer off down a rabbit hole, forgetting the byte-time constraints.
2018-01-17 07:48 AM
A DMA buffer in this case gives us more time for other processing (especially when the pace of the data is not constant but the average pace is acceptable from a timing perspective), and if we get back to processing the buffer contents before it fills up, we win (= don't miss bytes).
I usually use IT for GSM modules and language parsers because time is not critical there. I usually use fixed-point math, as I did on 8-bit parts.
So the whole picture and the requirements are needed to decide which approach is better, I guess.
2018-01-17 08:54 AM
kalpana lopinti wrote:
as block of data is received at a time.i need to enable interrupt after each byte received
As the others have said, there is really no reason why this should be a problem (unless, perhaps, you are running the UART extremely fast).
If you are having a problem with interrupts 'keeping up', it is far more likely to be due to bloat in your ISR, or other inappropriate handling in your code.
As already noted, a common approach is for the Rx ISR to just put each received character into a ring buffer.
http://www.avrfreaks.net/comment/2369221#comment-2369221
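At the register level (F1/F4-class USART names assumed, with RXNEIE enabled) such an ISR can be as small as:

#include "stm32f4xx.h"          /* adjust to your part family */

#define RB_SIZE 128             /* power of two for cheap wrapping */
static volatile uint8_t rb[RB_SIZE];
static volatile uint16_t head, tail;  /* ISR writes head, reader owns tail */

void USART1_IRQHandler(void)
{
    if (USART1->SR & USART_SR_RXNE) {       /* a byte is waiting */
        uint8_t c = (uint8_t)USART1->DR;    /* reading DR clears RXNE */
        uint16_t next = (head + 1) & (RB_SIZE - 1);
        if (next != tail) {                 /* if full, the byte is dropped */
            rb[head] = c;
            head = next;
        }
    }
}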
2018-01-17 12:23 PM
> (=don't miss bytes)
This is exactly the thing that confuses me. Async serial comm is prone to loss of single bytes.
Because of line noise or overrun (there's no FIFO in most ST models, correct?) or whatever.
Then the preset byte count won't be reached.
A naive timeout won't help, because transmission of the next block can begin and the first bytes of the next block will be consumed. Two blocks will be lost. Too bad.
-- pa
2018-01-17 12:34 PM
>>This is exactly the thing that confuses me. Async serial comm is prone to loss of single bytes.
I would say at the baud rates in question, over a 3' copper trace, it is fairly robust. I can push tens of millions of bytes without data loss.
Data loss in the context that most people experience it on these boards is due to ignorance about how long their processing code is running, or that it is in a spin-loop waiting for something else and completely ignoring the deadlines it needs to meet.
2018-01-17 01:21 PM
In addition to what Clive says, you don't need a timeout. As long as the serial RX is running and the RXNE interrupt is enabled, you should still get all the data after a word failure. So at most you should only lose the packet that has the corruption.
I don't know the application (and we appear to have two people asking the question) but a simple way of handling this would be:
1) Have a (volatile) circular buffer with associated in and out pointers/offsets, at least twice the size of the longest packet to be received. It probably doesn't need to be this long but it will do no harm.
2) RXNE interrupt puts received data in the buffer. If the buffer is full, just discard data (but still read it from the usart to prevent overrun at a hardware level).
3) Some other thread (e.g. main(), something under SysTick, or something else that is lower priority than the USART interrupt) will read the buffer looking for the start data; as soon as it sees it, it starts putting the data into a frame buffer until it sees the stop data. When it does, it can trigger processing of the packet.
If it finds start data while looking for stop data, it should discard the contents of the frame buffer (and hence the current packet) and start again.
4) Process the data then go back to stage 3.
I am not an expert on GPS, but if this is a basic NMEA sentence being received, you could use $ and ! to detect the start and <LF> to detect the end condition.
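A sketch of steps 3/4 with those markers (process_packet() is a made-up consumer; feed this with each byte drained from the circular buffer):

static enum { HUNT, COLLECT } fs = HUNT;
static char frame[96];
static unsigned flen;

extern void process_packet(const char *s);   /* made-up consumer, step 4 */

/* Step 3: call with each byte read out of the circular buffer */
void frame_feed(char c)
{
    if (c == '$' || c == '!') {      /* start marker: (re)start the frame,
                                        discarding any half-built packet */
        flen = 0;
        fs = COLLECT;
        return;
    }
    if (fs != COLLECT)
        return;
    if (c == '\n') {                 /* stop marker: hand the packet off */
        frame[flen] = '\0';
        process_packet(frame);
        fs = HUNT;
    } else if (flen < sizeof frame - 1) {
        frame[flen++] = c;
    } else {
        fs = HUNT;                   /* overlong frame: discard and resync */
    }
}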
Overrun will not occur at a hardware level as long as the UART interrupt is higher priority than anything else that could take a long time, and you don't turn off interrupts for any significant time. If you do see an overrun (perhaps something else had to turn off interrupts for too long), just clear the error on the UART and continue; you will lose the current packet but pick up the next one fine.
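On F1/F4-class parts the documented clearing sequence is a read of SR followed by a read of DR (sketch with CMSIS register names; newer parts clear ORE through an ICR register instead):

#include "stm32f4xx.h"          /* adjust to your part family */

/* Call from the USART IRQ or a periodic check */
static void clear_overrun_if_any(void)
{
    if (USART1->SR & USART_SR_ORE) {
        (void)USART1->SR;    /* clearing sequence: read SR ... */
        (void)USART1->DR;    /* ... then read DR; ORE is now cleared */
        /* current packet is suspect; parser resyncs on the next start marker */
    }
}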
Overrun at the circular buffer could occur, but only if steps 3/4 above cannot do their job in the period between data packets.
In this case you need a mechanism to detect it and decide what to do on failure (eg discard and wait for the next?). This will be true of any transmission system.
Corruption of the start and/or end markers will invalidate the sentence but should not affect the next one, as would any corruption of data within the sentence (found by either parity or checksums).
As Clive says, this sort of system is pretty robust if designed properly. We have systems like this running for years without detectable errors with the data going over hundreds of meters of cabling in very noisy environments. The key is to make sure you have proper error detection and either redundancy (ie repeated packets) or a handshake/ack system.
Hope that makes sense and doesn't have too many typos!