
Most Efficient solution for reading ADC through SPI

sde c.1
Senior II

Hi,

I'm always looking for the most efficient implementation of code in my STM32 controllers and was wondering what the most efficient way is to read an external ADC chip through SPI on an STM32G4 controller.

The ADC (MCP3461/2/4) has a continuous mode and asserts an interrupt line after each conversion. The 16-bit ADC value is then available in a buffer register to be read through SPI.

I see these three choices:

  • Using HAL_SPI_Receive_DMA, but as the ADC only sends 16 bits at a time I'm not sure this is the best solution. On each ADC conversion interrupt I need to start the DMA, which fires a transfer-complete interrupt to read the actual value.
  • Using HAL_SPI_Receive_IT: on each ADC conversion interrupt I need to start the interrupt-driven SPI read, which fires a transfer-complete interrupt to read the actual value.
  • Just reading the 16 bits in each ADC conversion interrupt with the blocking HAL_SPI_Receive function, as this goes super fast.

I'm wondering whether the overhead of the first two options makes it worthwhile to use the non-blocking SPI functions?
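
For reference, a minimal sketch of the third option, assuming a CubeMX-generated hspi1 handle and an EXTI interrupt on the ADC's IRQ pin. The pin names below are placeholders, and the MCP346x is assumed to be configured so the conversion result can be clocked out without a fresh read command each time (otherwise HAL_SPI_TransmitReceive with the ADCDATA read command from the datasheet would be needed instead of HAL_SPI_Receive):

#include "stm32g4xx_hal.h"

/* Placeholder pin names -- in a CubeMX project these come from main.h. */
#define ADC_IRQ_Pin        GPIO_PIN_4
#define ADC_CS_GPIO_Port   GPIOA
#define ADC_CS_Pin         GPIO_PIN_3

extern SPI_HandleTypeDef hspi1;             /* CubeMX-generated SPI handle */

static volatile int16_t latest_sample;

/* EXTI callback on the ADC conversion-complete (IRQ) pin. */
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
    if (GPIO_Pin == ADC_IRQ_Pin)
    {
        uint8_t rx[2];

        HAL_GPIO_WritePin(ADC_CS_GPIO_Port, ADC_CS_Pin, GPIO_PIN_RESET);
        /* Blocking 16-bit read; at 10+ MHz SCK this only takes a few microseconds. */
        if (HAL_SPI_Receive(&hspi1, rx, 2, 1) == HAL_OK)
        {
            latest_sample = (int16_t)(((uint16_t)rx[0] << 8) | rx[1]);
        }
        HAL_GPIO_WritePin(ADC_CS_GPIO_Port, ADC_CS_Pin, GPIO_PIN_SET);
    }
}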

Thank you


6 REPLIES
Danish1
Lead II

Caution: Rant follows.

You ask for efficiency. Efficiency in what?

HAL is aimed at reducing programmer effort. That's one type of efficiency.

But one thing it is not is efficient for the ARM processor. Having said that, under many circumstances the processor is way faster than e.g. the peripherals, so unless you need the processor speed for intensive calculations, the cycle count isn't something that needs too much effort.

Selection of the correct algorithm is where to put your attention.

There is an overhead to interrupts, because the ARM core has to push the processor state onto the stack, read the interrupt vector, execute the interrupt code and then restore the processor state.

But on the other hand a DMA of only a couple of bytes probably isn't worth the overhead of setting it up.

Anything blocking is imho bad, because the processor can't do anything else in that time (except for servicing a higher-priority interrupt). But it might be less bad than the other approaches in this particular case.

Where I can, I try to set up a DMA to grab a block of data into a circular buffer, and use halves of it at my leisure, making use of half-transfer and transfer-complete interrupts. But that might not work with the 16-bit transfers you need on each conversion-complete interrupt - unless you can set up a timer to produce two DMA-trigger pulses in response to each ADC conversion-complete pulse. (Assuming you have routed that signal to a pin that can fire a Timer).
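
To make the circular-buffer idea concrete, here is a minimal sketch, assuming a CubeMX-generated hspi1 whose RX DMA channel is set to Circular mode with 16-bit data size, and assuming something else (the timer or DMAMUX trickery discussed in this thread) paces one 16-bit frame per conversion. The buffer size and the process_samples() function are placeholders:

#include "stm32g4xx_hal.h"

extern SPI_HandleTypeDef hspi1;             /* CubeMX-generated handle, 16-bit data size assumed */

#define ADC_BUF_LEN 64u                     /* number of 16-bit samples, must be even */
static uint16_t adc_buf[ADC_BUF_LEN];

/* Placeholder: replace with whatever processing/queuing you need. */
static void process_samples(const uint16_t *samples, uint32_t count)
{
    (void)samples;
    (void)count;
}

void adc_stream_start(void)
{
    /* Started once; the circular DMA keeps refilling adc_buf forever. */
    HAL_SPI_Receive_DMA(&hspi1, (uint8_t *)adc_buf, ADC_BUF_LEN);
}

/* First half of the buffer is filled and safe to use. */
void HAL_SPI_RxHalfCpltCallback(SPI_HandleTypeDef *hspi)
{
    if (hspi == &hspi1)
        process_samples(&adc_buf[0], ADC_BUF_LEN / 2u);
}

/* Second half of the buffer is filled and safe to use. */
void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
{
    if (hspi == &hspi1)
        process_samples(&adc_buf[ADC_BUF_LEN / 2u], ADC_BUF_LEN / 2u);
}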

S.Ma
Principal

Most external ADCs (Analog Devices and co.) use the SPI clock for the conversion, especially if they are of the SAR type. If the SPI bus carries only one ADC, or several daisy-chained ADCs, and nothing else, consider tuning the SPI frequency for continuous conversion without the need for the interrupt. The interrupt is then just a sync signal to know where the 16-bit data starts, or to trigger a DMA transfer on the STM32s that have a DMAMUX.

The goal is to hardware-assist as much as possible, with critical latency as the top priority and core workload as the second.

Regarding the comments on blocking functions, one way to tune these things is an interrupt-based state machine using the SPI interrupts and maybe a timer compare interrupt to pace the data flow between the STM32 and the external ADCs.
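
As an illustration only, a minimal sketch of that kind of state machine, assuming hypothetical CubeMX-generated handles htim2 (interrupting at the desired pacing rate) and hspi1; the timer update interrupt is used here for simplicity, but an output-compare interrupt works the same way, and chip-select handling is left out:

#include "stm32g4xx_hal.h"

extern SPI_HandleTypeDef hspi1;             /* placeholder CubeMX-generated handles */
extern TIM_HandleTypeDef htim2;

static volatile uint8_t  adc_busy;          /* simple two-state machine: idle / read in progress */
static uint8_t           adc_rx[2];
static volatile uint16_t adc_sample;

/* Timer interrupt paces the reads. */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim == &htim2 && !adc_busy)
    {
        adc_busy = 1u;
        HAL_SPI_Receive_IT(&hspi1, adc_rx, 2);   /* non-blocking 16-bit read */
    }
}

/* SPI read finished: latch the sample and return to idle. */
void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
{
    if (hspi == &hspi1)
    {
        adc_sample = (uint16_t)(((uint16_t)adc_rx[0] << 8) | adc_rx[1]);
        adc_busy = 0u;
    }
}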

Thank you,

Unfortunately we have decided to use HAL for the ease and speed of getting to market; I prefer building up code from scratch as well.

Your description and consideration of the various options and their pros and cons is exactly what was on my mind, except for the timer DMA trigger after every ADC conversion. That's something I hadn't thought of, and it's the best solution in this case.

Thank you,

This type is a sigma-delta and uses an internal clock to do the conversion.

I'm going to use the conversion-done interrupt pin as the DMA trigger, and only use the DMA transfer-complete interrupt to get the data from the buffer.
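
A rough sketch of this kind of setup follows (placeholder names only, not the actual project code). It assumes the routing of the conversion-done EXTI line through the DMAMUX, so that it paces the SPI DMA requests, and the circular DMA configuration are both done in CubeMX; because the STM32 is the SPI master, a dummy TX buffer is transmitted just to generate the clocks:

#include "stm32g4xx_hal.h"

extern SPI_HandleTypeDef hspi1;                 /* CubeMX-generated handle, 16-bit data size assumed */

#define ADC_BLOCK_LEN 32u                       /* 16-bit samples per block (placeholder) */
static uint16_t adc_rx_block[ADC_BLOCK_LEN];
static uint16_t adc_tx_dummy[ADC_BLOCK_LEN];    /* clocked out only to generate SCK; contents don't matter */

void adc_dma_start(void)
{
    /* Started once; each conversion-done pulse lets one 16-bit frame through
       the DMAMUX, so the CPU is only interrupted once per ADC_BLOCK_LEN samples. */
    HAL_SPI_TransmitReceive_DMA(&hspi1,
                                (uint8_t *)adc_tx_dummy,
                                (uint8_t *)adc_rx_block,
                                ADC_BLOCK_LEN);
}

/* DMA transfer-complete: a full block of conversions is ready in adc_rx_block[]. */
void HAL_SPI_TxRxCpltCallback(SPI_HandleTypeDef *hspi)
{
    if (hspi == &hspi1)
    {
        /* Consume adc_rx_block[] here (copy to a queue, run filtering, ...). */
    }
}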

It works beautifully with the DMA synchronisation by the external ADC interrupt.

@sde c.1 Hi! Are you able to share the code that you have used for the MCP346x? It would be very helpful to see how you have implemented it.