
Is it possible to achieve an ADC sampling rate of 500 ksps or more via the IIO interface, and if so, how?

Led
Senior

Hi all,

I followed the instructions described on the wiki page ADC_device_tree_configuration.

Changing sampling_frequency, I can achieve a sampling rate of about 10 ksps; going any higher crashes the system.

Reducing the parameter 'min-sample-time-nsecs' in the device tree did not solve the issue.

Is it not possible to reach higher sampling rates through the IIO interface? Could I achieve higher sampling rates by writing a C program that accesses the ADC via mmap()?

Any hint is appreciated.

Denise


4 REPLIES
Jean-Marc S
ST Employee
(Accepted solution)

Hello,

We have used the ADC (in trigger mode) under Linux at rates well above 10 kHz (I could not get the exact performance figure, but 500 kHz seems OK).

The source of your limitation could be that you are not using the "buffer" feature of IIO.

Please check this link on Wiki:

https://wiki.st.com/stm32mpu/wiki/How_to_use_the_IIO_user_space_interface#Convert_one_or_more_channels_using_triggered_buffer_mode

  • You should:
    • define the sampling time
    • define the ADC frequency
    • define the length of your buffer (up to 4 KB)
    • define the watermark (the number of samples after which the CPU is asked to fetch the transferred data)

I suspect the last two items have not been set.

The watermark is set to 1 by default, so you might want to increase it; a sketch of the whole setup follows below.
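As a rough illustration, everything except the sampling time (which comes from the device tree) can be configured by writing the sysfs attributes, for example from a small C program. This is only a sketch under assumptions: the names iio:device0, trigger0, tim6_trgo and in_voltage0 are placeholders, so check /sys/bus/iio/devices/ on your board for the real ones.

/*
 * Minimal sketch: configure an IIO triggered buffer via sysfs.
 * iio:device0, trigger0, tim6_trgo and in_voltage0 are assumptions --
 * look in /sys/bus/iio/devices/ for the names on your board.
 */
#include <stdio.h>
#include <stdlib.h>

#define ADC "/sys/bus/iio/devices/iio:device0"

/* Write one value into a sysfs attribute file. */
static void sysfs_write(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(EXIT_FAILURE); }
    fprintf(f, "%s\n", value);
    fclose(f);
}

int main(void)
{
    /* ADC frequency: on STM32MP1 this is set on the timer trigger */
    sysfs_write("/sys/bus/iio/devices/trigger0/sampling_frequency", "500000");

    /* enable one channel and attach the trigger */
    sysfs_write(ADC "/scan_elements/in_voltage0_en", "1");
    sysfs_write(ADC "/trigger/current_trigger", "tim6_trgo");

    /* buffer length in samples, and the watermark after which a
     * blocked read() on the character device is woken up */
    sysfs_write(ADC "/buffer/length", "4096");
    sysfs_write(ADC "/buffer/watermark", "2048");

    /* start the capture */
    sysfs_write(ADC "/buffer/enable", "1");
    return 0;
}

Once buffer/enable has been written, the converted samples can be read from the matching character device (here /dev/iio:device0).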

I am not an expert on these parameters, and the wiki does not explain their exact usage; however, I encourage you to experiment with them and see whether your performance increases.

Also, do you use the OpenSTLinux kernel? (If you use another kernel, the DMA-MDMA chained transfer will not be implemented in the drivers.)

Last, mmap() would not be a recommended option: it would bypass the whole Linux framework and would certainly be hard to handle.

Hope this can help you.

JM

Led
Senior

Dear Jean-Marc,

Thanks a lot for the hints. Indeed, I had not paid attention to the watermark, and I had set the buffer length to 65535.

With a buffer length of 4096 and a watermark of 2048, I now achieve 900 ksps.

That's great :)

The issue I still have is that the data are not sampled continuously across the boundary of the buffer: when I cat the data from /dev/iio:device1, there is a jump in the signal after every 4096 samples. Looking at /proc/interrupts, I also cannot see any interrupts arriving, which surprises me.

I use the BSP from Phytec, which uses the OpenSTLinux Kernel.

Best Regards,

Denise

Jean-Marc S
ST Employee

Hello Denise,

1) The loss of data is characteristic of a buffer overrun:

  • Each time the buffer is half full ("watermark reached"), a DMA task is requested; my understanding is that this task must complete before the rest of the buffer is filled.
  • Linux not being a real-time OS, it can preempt tasks, and there is no obvious way to control this and make sure the requested DMA task completes on time (see the rough numbers and the reader sketch after this list).
  • Maybe reducing the sampling rate would help?
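To put rough numbers on it using your figures: at 900 ksps with a watermark of 2048 samples, a chunk becomes ready about every 2048 / 900000 ≈ 2.3 ms, and the second half of the 4096-sample buffer fills in roughly the same time, so userspace has on the order of 2 ms to drain each chunk before data is lost. A tight blocking read() loop in watermark-sized chunks gives you a better chance than cat. A minimal reader sketch, assuming a single 16-bit channel on /dev/iio:device1 (check scan_elements/in_voltage0_type for your real sample layout):

/*
 * Minimal reader sketch: drain watermark-sized chunks as fast as
 * possible. One enabled 16-bit channel is an assumption.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define WATERMARK 2048  /* must match buffer/watermark */

int main(void)
{
    int fd = open("/dev/iio:device1", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint16_t samples[WATERMARK];
    for (;;) {
        /* read() blocks until the watermark is reached, then returns
         * the accumulated data in one go */
        ssize_t n = read(fd, samples, sizeof(samples));
        if (n <= 0)
            break;
        fwrite(samples, 1, (size_t)n, stdout);  /* or process in place */
    }
    close(fd);
    return 0;
}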

2) The interrupt to watch is the DMA interrupt, not the ADC one, which is masked (perhaps it is the ADC interrupt that you are looking at?).

JM

Led
Senior

Thanks a lot, Jean-Marc, for the reply. I currently can't find time to investigate further; I will update as soon as I know more.

Denise