Using DMA to sample external 16-bit ADCs (ADS8688) over SPI, in both single-ADC and daisy-chain configurations

AEaso.1
Associate

I am using an STM32F446ZE MCU for a data acquisition + engine control unit application. My application uses four ADS8688s to sample 32 sensors. 28 of them need to be logged at 100 Hz, while the other 8 need to be logged at 2 kHz. I have the 100 Hz sensors connected to three of the ADS8688s in a daisy-chain configuration, while the 2 kHz sensors are on the last ADS8688. These ADCs require 16 bits to be written to them to trigger the sampling of the next channel in the sequence; they then return 16 bits per ADC in the daisy chain, as per the timing diagram below.

[Image: ADS8688 daisy-chain SPI timing diagram]
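In code terms, one blocking read of the chain currently looks roughly like this (simplified; the SPI is configured for 16-bit frames so Size counts words, BSP_GPIO_CS_CHAIN is a placeholder for my real CS pin, and the all-zero NO_OP frames are from my reading of the datasheet, so I may have the exact framing slightly wrong):

/* One daisy-chain access: clock out NO_OP frames while one 16-bit
 * result per chained ADS8688 shifts back in on the same clocks. */
uint16_t tx[3] = { 0 };  /* NO_OP frames keep the conversion sequence stepping */
uint16_t rx[3];          /* one 16-bit result per chained ADC */

bsp_digital_output(BSP_GPIO_CS_CHAIN, false);  /* CS low for the whole frame */
HAL_SPI_TransmitReceive(&hspi1, (uint8_t *)tx, (uint8_t *)rx, 3, HAL_MAX_DELAY);
bsp_digital_output(BSP_GPIO_CS_CHAIN, true);   /* CS high starts the next conversion */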

I began this project using a timer interrupt to trigger the basic HAL_SPI_Transmit and HAL_SPI_Receive to sample the ADCs into a buffer of uint16_t. A 2 kHz timer interrupt samples each channel of the 2 kHz ADC sequentially, and every 20th interrupt it also samples all channels of the 100 Hz daisy-chained ADCs (equating to the required 100 Hz sampling rate). This has worked perfectly fine for testing up to now. However, these timer interrupts use around 18% CPU load because the SPI communication is blocking. I am looking to reduce this load by using a DMA controller. I am quite new to using DMA, having only used it once before, so I have some questions about how to go about it.
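In outline, the interrupt handler does something like this (simplified; sample_fast_adc and sample_daisy_chain stand in for my real blocking SPI routines, and the htim instance check is omitted):

/* Shape of my current 2 kHz timer callback. */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    static uint32_t tick = 0;

    sample_fast_adc();          /* next channel of the 2 kHz ADS8688 */

    if (++tick >= 20)           /* every 20th tick: 2 kHz / 20 = 100 Hz */
    {
        tick = 0;
        sample_daisy_chain();   /* all channels of the three chained ADCs */
    }
}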

My first thought was to use HAL_SPI_Transmit_DMA and then the HAL_SPI_TxCpltCallback to trigger a call to HAL_SPI_Receive_DMA to read the returned data, then use HAL_SPI_RxCpltCallback to trigger the next sample in the sequence, and so on. My thought was that this would allow the CPU to return to the main loop briefly while the SPI was transmitting, to attend to other tasks. However, when sending only 16 bits of data using HAL_SPI_Transmit_DMA, it would not leave the function before the transmission was complete, i.e. it was still blocking. To test that the DMA was working, I transmitted 20 bytes at a time, which correctly resulted in a non-blocking transfer after only 6 bits, as per the test code and scope output further below (the blue trace is the GPIO_PG9 pin).
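For reference, the chained sequence I have in mind would look something like this (a sketch only; ads_tx, ads_rx, and store_sample are placeholders, and the buffers assume 16-bit SPI frames so Size = 1 means one 16-bit word):

uint16_t ads_tx[1], ads_rx[1];  /* placeholders for the real command/result buffers */

void HAL_SPI_TxCpltCallback(SPI_HandleTypeDef *hspi)
{
    /* command frame sent: now clock in the returned sample */
    HAL_SPI_Receive_DMA(hspi, (uint8_t *)ads_rx, 1);
}

void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
{
    store_sample(ads_rx[0]);                           /* placeholder bookkeeping */
    HAL_SPI_Transmit_DMA(hspi, (uint8_t *)ads_tx, 1);  /* trigger the next channel */
}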

#define BUFFSIZE 10

uint16_t ads_data[BUFFSIZE];

for (int i = 0; i < BUFFSIZE; i++)
{
    ads_data[i] = 0xAAAA; /* fill with 1010... */
}

/* Size is in SPI data units: 10 x 16-bit frames = 20 bytes */
HAL_SPI_Transmit_DMA(&hspi1, (uint8_t *)ads_data, BUFFSIZE);

bsp_digital_output(BSP_GPIO_PG9, true); /* goes high when the call returns */

[Image: scope capture of the SPI transfer, with GPIO_PG9 (blue) going high while the transfer is still running]

Does this mean there is a minimum number of bytes required for a non-blocking transfer? If so, this approach would be unworkable for my single 16-bit frames.

After further reading, I found mentions of using a timer to trigger the DMA TX transfer, meaning no CPU intervention is required. If this is possible, how can I also have a timer trigger the DMA to read the returned value from the ADC, well synchronized with the TX DMA timer? I think I would need this too, otherwise reading the sample data from the ADC would still be blocking, as in my test above. And if the RX DMA is triggered by some sort of timer, how can I tell the DMA whether to receive 16 bits or 32 bits (depending on whether it is reading the daisy chain or the single ADC) if I am not calling HAL_SPI_Receive_DMA (which takes the data size as an input)?
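From what I have read so far, I imagine the setup would look something like the sketch below. This is register-level code for the F446 based on my reading of the RM0390 DMA2 request table (TIM1_UP on DMA2 Stream5 Channel 6, SPI1_RX on DMA2 Stream0 Channel 3); the buffer names and FRAMES_PER_CYCLE are placeholders, and I assume SPI1 is already configured as a 16-bit master and TIM1 is already running:

#include "stm32f4xx.h"

#define FRAMES_PER_CYCLE 8  /* placeholder: 16-bit frames clocked per cycle */

static uint16_t tx_frames[FRAMES_PER_CYCLE];  /* NO_OP/command pattern */
static uint16_t rx_frames[FRAMES_PER_CYCLE];  /* samples land here */

void dma_sampling_init(void)
{
    RCC->AHB1ENR |= RCC_AHB1ENR_DMA2EN;

    /* TX: each TIM1 update event pushes one 16-bit frame into SPI1->DR.
     * The timer period must exceed one 16-clock SPI frame or the SPI
     * will overrun - something I would tune on the scope. */
    DMA2_Stream5->PAR  = (uint32_t)&SPI1->DR;
    DMA2_Stream5->M0AR = (uint32_t)tx_frames;
    DMA2_Stream5->NDTR = FRAMES_PER_CYCLE;
    DMA2_Stream5->CR   = (6U << DMA_SxCR_CHSEL_Pos)          /* TIM1_UP       */
                       | DMA_SxCR_DIR_0                      /* mem -> periph */
                       | DMA_SxCR_MINC | DMA_SxCR_CIRC
                       | DMA_SxCR_MSIZE_0 | DMA_SxCR_PSIZE_0 /* 16-bit        */
                       | DMA_SxCR_EN;

    /* RX: paced by the SPI itself - RXNE raises a DMA request whenever a
     * frame has arrived, so no second timer and no sync problem. */
    DMA2_Stream0->PAR  = (uint32_t)&SPI1->DR;
    DMA2_Stream0->M0AR = (uint32_t)rx_frames;
    DMA2_Stream0->NDTR = FRAMES_PER_CYCLE;
    DMA2_Stream0->CR   = (3U << DMA_SxCR_CHSEL_Pos)          /* SPI1_RX       */
                       | DMA_SxCR_MINC | DMA_SxCR_CIRC
                       | DMA_SxCR_MSIZE_0 | DMA_SxCR_PSIZE_0
                       | DMA_SxCR_EN;

    SPI1->CR2 |= SPI_CR2_RXDMAEN;   /* SPI raises RX DMA requests on RXNE  */
    TIM1->DIER |= TIM_DIER_UDE;     /* TIM1 raises a DMA request on update */
}

If I have that right, the RX side would stay synchronized automatically, since the SPI only raises an RX request once a frame has actually arrived. What I still cannot see is how to switch between one frame for the single ADC and multiple frames for the daisy chain within a scheme like this, or how best to frame CS around each transfer.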

Any other solutions or tips would be greatly appreciated; I am happy to send in any other code or scope plots that might help 🙂
