How to collect 50000 data samples

manoritesameer
Associate II
Posted on April 22, 2014 at 00:13

I have to store 50000 samples. The data would be stored in an array. Unfortunately, the largest array my program lets me allocate is 10000 elements; beyond that it complains it could not allocate the block.

How do I store all of this data?

thanks

#!rocketscience #basic-arithmetic #systems-analysis #filling-a-leaky-bucket #stm32-array #pigs-might-fly
14 REPLIES
Posted on April 24, 2014 at 02:07

Maybe; maybe not. Even if there is, the buffer requirement would still be reduced if it is constantly being ''drained''...

Indeed, and that's what you and I might do with a full set of specifications/facts, but I think my goal here is better served by keeping things simple and demonstrable.

Tips, Buy me a coffee, or three.. PayPal Venmo
Up vote any posts that you find helpful, it shows what's working..
Andrew Neil
Evangelist III
Posted on April 24, 2014 at 20:13

''a full set of specifications/facts''

We can but dream...
manoritesameer
Associate II
Posted on April 24, 2014 at 23:23

I cannot implement it in a streaming manner, since sending data over the UART would create some delay, and by that time I would miss the next sample.

And thanks, Clive, I will try your suggestion.

Posted on April 25, 2014 at 00:32

''I cannot implement in a streaming manner since the sending data over the UART would create some delay by the time I would miss the next sample.''

Buffering would permit you to decouple the two; the disparity between the rates determines how big the buffer needs to be, but it would be less than 50000 samples.

If you could send 10K in the time it takes to sample 50K, you'd need about 40K.
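That arithmetic can be sketched as a little function; the rates here are illustrative assumptions (a 5:1 sample-to-drain ratio), not figures from the original poster's hardware:

```c
#include <assert.h>

/* Peak backlog a buffer must absorb when capturing total_samples at
 * sample_rate while simultaneously draining to the UART at drain_rate
 * (both in samples/sec). Rates are hypothetical for illustration. */
unsigned required_buffer(unsigned total_samples,
                         unsigned sample_rate, unsigned drain_rate)
{
    /* Samples drained during the time it takes to capture them all */
    unsigned drained = (unsigned)((unsigned long long)total_samples
                                  * drain_rate / sample_rate);
    return total_samples - drained;
}
```

With the numbers above, `required_buffer(50000, 50000, 10000)` gives 40000, matching the "send 10K while sampling 50K" estimate.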

jpeacock2399
Associate II
Posted on April 26, 2014 at 00:56

If you stream data to the UART following behind the ADC DMA, you can overlap the operations and use the CPU more efficiently. DMA transfers are free or low-cost to the CPU, often running in parallel depending on how you lay out the memory.

The way to do this is to split your ADC samples into blocks, essentially a queue. As each block completes (signalled by the DMA HT or TC interrupt), it is queued for transmission to the UART. All the blocks live in one large contiguous buffer of 16-bit words (matching the ADC sample size). That's 100K bytes, within the range of STM32F4 SRAM.

A second task dequeues each ADC block out to the UART. After the first block is ready, set up a DMA transfer from the completed block to the UART. When it completes (DMA TC), start the next block (if it is ready; otherwise trigger the UART transfer at the end of the next ADC block). The data streaming to the UART trails the ADC conversion but runs almost in parallel, reducing the time you would otherwise spend sending data to the UART AFTER the ADC completes. With this method, only the time to flush the last ADC block defines the latency between the end of conversion and the end of the UART transfer, a small fraction of what sequential operation would take.
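A minimal sketch of that block queue, with the counters updated from the DMA interrupts. The block size, block count, and `start_uart_dma` stub are assumptions; in real firmware the stub would be a HAL/LL call starting a UART DMA transfer, and the two `on_*` handlers would be called from the ADC DMA HT/TC and UART DMA TC ISRs:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS    10     /* hypothetical: 50000 samples / 5000 per block */
#define BLOCK_SAMPLES 5000

static uint16_t samples[NUM_BLOCKS * BLOCK_SAMPLES]; /* one contiguous buffer */
static volatile unsigned blocks_ready;  /* advanced by ADC DMA HT/TC ISR */
static volatile unsigned blocks_sent;   /* advanced by UART DMA TC ISR   */
static volatile bool uart_busy;

/* Placeholder: kick off a UART DMA transfer of one block (HAL call in real code). */
static void start_uart_dma(unsigned block)
{
    (void)&samples[block * BLOCK_SAMPLES];
    uart_busy = true;
}

/* Called from the ADC DMA half/complete interrupt: one more block is full. */
void on_adc_block_done(void)
{
    blocks_ready++;
    if (!uart_busy && blocks_sent < blocks_ready)
        start_uart_dma(blocks_sent);
}

/* Called from the UART DMA transfer-complete interrupt. */
void on_uart_done(void)
{
    blocks_sent++;
    uart_busy = false;
    if (blocks_sent < blocks_ready)   /* chain the next block if one is waiting */
        start_uart_dma(blocks_sent);
}
```

The UART side thus trails the ADC side by at most one block, which is the overlap Jack describes.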

The only catch is making sure the ADC DMA has higher priority so you don't lose samples, and that there is sufficient time between ADC operations for the UART to catch up.

As for allocating a large buffer, that's a function of the linker in your toolchain. For an F4, make sure the array is mapped to the SRAM1 region (112KB) and the rest of your non-DMA RAM (stack, heap, etc.) is mapped to CCM (64KB). You will have to send the data to the UART as 16-bit binary and convert to float on the PC side; it's doubtful you will have the CPU time available to do it on the STM32 side.
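With the GNU toolchain that placement can be sketched with a section attribute; the section name `.sram1_bss` is an assumption and must match a section your linker script places in the SRAM1 region (on the F4, CCM at 0x10000000 is not reachable by the DMA controllers, so only non-DMA data like the stack and heap belongs there):

```c
#include <stdint.h>
#include <assert.h>

/* Pin the 50000-sample DMA buffer to a DMA-capable RAM region.
 * ".sram1_bss" is a hypothetical section name; your linker script
 * must assign it to SRAM1 (0x20000000, 112KB on the F4). */
static uint16_t samples[50000] __attribute__((section(".sram1_bss")));
```

At 2 bytes per sample the buffer is 100000 bytes, which fits in the 112KB SRAM1 region with room to spare.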

If you do want to send floats, you'll have to convert a block at a time from the ADC buffer into a temporary buffer and DMA that out to the UART. It's slower because you can't transfer in place, and it doubles your UART transfer time since you are moving twice the bytes (32-bit float vs. 16-bit sample).
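A sketch of that block-at-a-time conversion; the 3.3 V reference and 12-bit full scale are assumptions for illustration, not figures from the post:

```c
#include <stddef.h>
#include <stdint.h>

/* Convert one block of raw 12-bit ADC counts into volts, writing into a
 * temporary float buffer that a UART DMA stream would then send out.
 * The 3.3 V reference is an assumption for illustration. */
void block_to_float(const uint16_t *raw, float *out, size_t n)
{
    const float scale = 3.3f / 4095.0f;   /* volts per ADC count */
    for (size_t i = 0; i < n; i++)
        out[i] = (float)raw[i] * scale;
}
```

Each converted block occupies twice the bytes of the raw block, which is where the doubled UART transfer time comes from.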

  Jack Peacock