STM32F37x SDADC data count offset change

Associate II
Posted on March 25, 2013 at 18:25

I've been working with the SDADC hardware on the STM32F37x, using an SDADC clock of 6 MHz and software-triggering 3 injected channels in single-ended zero-reference mode every millisecond. I've noticed that with DC inputs applied to the three channels, the output counts periodically 'shift' downward by around 10 counts and then slowly drift back up to the previous average level. The shift appears to be temperature dependent: when I apply a heat gun to the PCB from a distance, the frequency of the 'shift' behavior increases. Has anyone else seen this behavior? Could it be a function of the underlying SDADC hardware implementation? When monitoring the inputs with an oscilloscope I do not see the shift that the SDADC is reporting.

I welcome any ideas or comments, and I'm including a screenshot of the plotted data around the time of the 'shift', which occurs in the counts reported for all channels.
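For reference, here's a rough sketch of the configuration described above (3 injected channels, single-ended zero-reference mode, software trigger) written against ST's current STM32F3 HAL. This is not the poster's actual code (the post predates the HAL); the channel numbers, reference selection, and timeouts are assumptions for illustration:

```c
/* Sketch: SDADC1 with 3 injected channels in single-ended zero-reference
 * mode, software-triggered (e.g. from a 1 ms timer tick).
 * Channel numbers and VREF selection are assumptions, not the OP's setup. */
#include "stm32f3xx_hal.h"

SDADC_HandleTypeDef hsdadc1;

void sdadc_setup(void)
{
    hsdadc1.Instance = SDADC1;
    hsdadc1.Init.IdleLowPowerMode   = SDADC_LOWPOWER_NONE;
    hsdadc1.Init.FastConversionMode = SDADC_FAST_CONV_DISABLE;
    hsdadc1.Init.SlowClockMode      = SDADC_SLOW_CLOCK_DISABLE;
    hsdadc1.Init.ReferenceVoltage   = SDADC_VREF_EXT;   /* assumption */
    HAL_SDADC_Init(&hsdadc1);

    SDADC_ConfParamTypeDef conf = {
        .InputMode  = SDADC_INPUT_MODE_SE_ZERO_REFERENCE,
        .Gain       = SDADC_GAIN_1,
        .CommonMode = SDADC_COMMON_MODE_VSSA,
        .Offset     = 0,
    };
    HAL_SDADC_PrepareChannelConfig(&hsdadc1, SDADC_CONF_INDEX_0, &conf);
    HAL_SDADC_AssociateChannel(&hsdadc1, SDADC_CHANNEL_4, SDADC_CONF_INDEX_0);
    HAL_SDADC_AssociateChannel(&hsdadc1, SDADC_CHANNEL_5, SDADC_CONF_INDEX_0);
    HAL_SDADC_AssociateChannel(&hsdadc1, SDADC_CHANNEL_6, SDADC_CONF_INDEX_0);

    /* One-time offset calibration before sampling. */
    HAL_SDADC_CalibrationStart(&hsdadc1, SDADC_CALIBRATION_SEQ_1);
    HAL_SDADC_PollForCalibEvent(&hsdadc1, HAL_MAX_DELAY);

    HAL_SDADC_SelectInjectedTrigger(&hsdadc1, SDADC_SOFTWARE_TRIGGER);
    HAL_SDADC_InjectedConfigChannel(&hsdadc1,
        SDADC_CHANNEL_4 | SDADC_CHANNEL_5 | SDADC_CHANNEL_6,
        SDADC_CONTINUOUS_CONV_OFF);
}

/* Called every millisecond: start the injected sequence and read results. */
void sdadc_sample_1ms(int16_t out[3])
{
    uint32_t ch;
    HAL_SDADC_InjectedStart(&hsdadc1);
    for (int i = 0; i < 3; ++i) {
        HAL_SDADC_PollForInjectedConversion(&hsdadc1, 2);
        out[i] = (int16_t)HAL_SDADC_InjectedGetValue(&hsdadc1, &ch);
    }
}
```

This fragment targets the on-chip peripheral and is not runnable off-target; error-return checks are omitted for brevity.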


#stm32f37x #sdadc
Posted on March 26, 2013 at 15:40

Two guesses:

(a) The peripheral is automatically applying a temperature calibration correction periodically. I don't see any mention of such in the data sheet or reference manual.

(b) The sigma-delta logic is applying a correction for bit flips in its front-end comparator.

A shift of 10 counts is within the data sheet offset spec for single-ended inputs.

Perhaps STOne-32 could shed some light on this. The time between offset corrections at room temperature might be a good clue.

Cheers, Hal

Associate II
Posted on March 26, 2013 at 16:49

Hi Hal,

Thanks kindly for your thoughts! To answer your point: "The time between offset corrections at room temperature might be a good clue."

What I'm seeing at room temperature is something in the neighborhood of 25 seconds between shifts.

Temperature compensation seems like a possible cause, but as you mentioned there's not much in the documentation either way.

Posted on October 23, 2013 at 09:49

Were you able to solve this problem in the end? 

I'm having exactly the same problem. The input is in differential mode, sampling at 12 kHz. 120 samples are added together into a signed integer and sent to the PC. The reading is very stable because of this averaging, but I also see the periodic shifts mentioned above.

Associate II
Posted on February 04, 2014 at 08:38

Hi All

Does anybody have a solution or a confirmation from ST about this issue? We seem to have observed the same behaviour of the SDADC.

Thanks in advance, Raphael

Igor Cesko
ST Employee
Posted on March 05, 2014 at 14:09

Hi all,

I personally have not seen this problem on my device. Could you please describe the exact configuration in which your SDADC is used:

- internal or external reference voltage (what is the VREFSD value)?

- is the SDADC supply voltage VDDSD common with VDD on the PCB?

- is it observed in single-ended mode, differential mode, or both? Which exactly?

- which gain is used?

- ... more details about the PCB design: were the supply paths and decoupling designed correctly?

My hypothesis is that the problem probably comes from the working principle of the internal voltage regulator (the 1.8 V regulator for the CPU core and peripherals). Its regulation scheme causes the 1.8 V output to follow a sawtooth-like shape over time, with a period of anywhere from 10 seconds up to 2 minutes (the period depends on ambient temperature). This would correspond to your observed ~25 seconds at room temperature. If the core 1.8 V voltage has this sawtooth variation, then the current drawn from it also varies with a similar sawtooth shape. If the PCB is not designed correctly, this can cause a voltage drop on the supply or reference voltages, depending on the schematic design (the source of the reference voltages) and the PCB layout (voltage drops along the supply traces).

Please check the points above and give me the information I would need to reproduce your problem.



Posted on August 19, 2014 at 20:17

Hi Igor,

we have been seeing this exact problem as well. We use the '373 as a digital DC restore for biopotential signals in the ~100 µV region. We have 5 bio boards in our system, each with 4 '373s (for a total of 20 ARMs), and each ARM has 4 channels (using 2 SDADCs). So we have a total of 80 channels where this is occurring. We have paid close attention to the layout and have been liberal with bypass caps. The 3.3 V power supply is generated using an LT8614, which is capable of supplying 4 A. The '373 is used as an integrator to do a DC restore on a signal which is then digitized using a 24-bit 2 MSPS sigma-delta ADC (AD7760). The ramp shows up very clearly. Given the number and location of bypass caps, and that the supply is capable of 4 A, I am very skeptical that the ARMs are creating any sort of external voltage ramp through their current consumption. Also, since there seem to be a few customers experiencing this problem, I doubt we all made the same layout errors.






Attachments :

DAC___DAC-ECO.pdf :

uARM___uARM-ECO.pdf :
Associate II
Posted on December 03, 2014 at 17:18

Hi Julian,

we have a similar effect. Has there been any progress on your side? I'd happily pay a bottle of wine for a solution :)

Associate II

I have the same problem. Is there a workaround for this behaviour?