
STM32F373 sdadc linearity at low input drops off

dhaselwood
Associate II
Posted on March 20, 2016 at 18:07

With a 'F373 I find that the SDADC counts-per-volt drops off rather dramatically when the input voltage decreases below about 40% of full scale.  I would like to understand the mechanism behind this before locking in the rest of the design and a board layout.

Before laying out a PC board, a test setup was made using a DiscoveryF3 board with the 'F303 processor removed and replaced with a 'F373, along with adding an 8 MHz crystal, several jumpers, and capacitors for Vrefsd+.

SDADC3 is set up to scan 9 ports, single-ended, with DMA and the 1.8 V internal reference.

The input to one port: a 13.8 V regulated bench supply feeds a 500 ohm pot.  The wiper of the pot drives a resistive divider, e.g. 6.8K|1.8K, with a 100 uF cap across the 1.8K (which is similar to the application).  A 900 ohm series resistor goes from the junction of the divider to the SDADC port pin.
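For reference, a quick sketch (Python) of the DC source impedance this network presents to the SDADC pin, using the 6.8K|1.8K divider values above:

```python
# DC Thevenin impedance the SDADC pin sees through the divider + series R.
def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_top, r_bot, r_series = 6800.0, 1800.0, 900.0   # ohms, per the setup above

z_source = parallel(r_top, r_bot) + r_series
print(f"DC source impedance ~ {z_source:.0f} ohms")   # ~2323 ohms
```

At the modulator's sampling frequency the 100 uF cap effectively shorts the divider, so the pin sees roughly just the 900 ohm series resistor.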

The following table is an example of the SDADC counts-per-volt changing versus the input level.  The first column is the voltage at the junction of the resistive divider; the second column the readings after filtering with two three-section CIC filters; and the third column the SDADC reading per volt (see the sketch after the table).  The total change is roughly 23%, which seems rather high.

V at divider (V)   Filtered counts   Counts/V
0.015                 445            29666.7
0.045                1540            34222.2
0.112                3973            35473.2
0.164                5878            35841.5
0.241                8703            36112.0
0.308               11148            36194.8
0.423               15321            36219.9
0.563               20426            36280.6
0.815               29616            36338.7
0.984               35804            36386.2
1.263               45984            36408.6
1.560               56847            36440.4
1.753               63910            36457.5
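As a minimal sketch of how the third column is computed (Python, reproducing the numbers above):

```python
import numpy as np

# First two columns of the table above.
v = np.array([0.015, 0.045, 0.112, 0.164, 0.241, 0.308, 0.423,
              0.563, 0.815, 0.984, 1.263, 1.560, 1.753])
counts = np.array([445, 1540, 3973, 5878, 8703, 11148, 15321,
                   20426, 29616, 35804, 45984, 56847, 63910])

# Counts-per-volt as a simple ratio; note this folds any constant
# offset into the low-end points, exaggerating the apparent droop.
print(counts / v)   # 29666.7, 34222.2, ... 36457.5
```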

Running this test with dividers of 75K|12K|10u and 604K|100K|1u, I get virtually identical results.  Normalized, the curves match quite well.  This suggests it is not a source-impedance type of issue.

The application is a battery-management monitoring system, and since the voltage range is expected to stay within a 2:1 range, the non-linearity in the lower half of the SDADC range is not as important, though the change in the upper half still cuts into the total tolerances allowable in the design.

Applying a correction computation, such as a least-squares polynomial fit or a table with interpolation, is a possibility; however, at the moment how this non-linearity changes with temperature and time is unknown.
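As a minimal sketch of the table-with-interpolation option, using the measured points above as the correction table (Python/NumPy, for illustration only):

```python
import numpy as np

# Measured (counts -> voltage) pairs from the table above, used as a
# piecewise-linear correction table.
counts = np.array([445, 1540, 3973, 5878, 8703, 11148, 15321,
                   20426, 29616, 35804, 45984, 56847, 63910])
v = np.array([0.015, 0.045, 0.112, 0.164, 0.241, 0.308, 0.423,
              0.563, 0.815, 0.984, 1.263, 1.560, 1.753])

def correct(raw):
    """Estimate the input voltage for a raw SDADC reading by linear
    interpolation in the measured table (counts must be increasing)."""
    return np.interp(raw, counts, v)

print(correct(15321))   # a table point: 0.423 V exactly
print(correct(18000))   # between points: interpolated
```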

Another issue that is a bit puzzling is the effect of the series resistor.  Using the 6.8K|1.8K|100u divider and varying the series resistor changes the readings, of course, but the curve shows a peak around 900 ohms.  One would expect the readings to increase as the series resistance drops from, say, 100K toward zero, the rate of change becoming less as the resistance nears zero; reaching a peak and then dropping off was a surprise.

Also, I noticed the noise of the raw readings is worse when the capacitor is from the port pin to ground, rather than on the resistive-divider side of the series resistor.

The noise also rides on top of a low-level waveform (roughly 5-20 ADC counts) with repetition periods of 4 and 64 SDADC clocks.  I couldn't find anything that correlated with these rates, and the CIC filtering takes this out.
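For reference, a minimal sketch of one three-section CIC decimating filter of the kind used for the filtering here; the decimation ratio is an assumption for illustration (Python):

```python
import numpy as np

def cic_decimate(x, stages=3, decim=64):
    """Three-section CIC decimator: cascaded integrators at the input
    rate, downsample by 'decim', then cascaded combs at the low rate.
    The decimation ratio here is just for illustration."""
    y = np.asarray(x, dtype=np.int64)   # wide accumulator, as in hardware
    for _ in range(stages):             # integrator section
        y = np.cumsum(y)
    y = y[::decim]                      # decimation
    for _ in range(stages):             # comb section (differential delay 1)
        y = np.diff(y, prepend=np.int64(0))
    return y / float(decim) ** stages   # normalize out the DC gain decim**stages
```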

Early in the testing I discovered that switching an LED on the Discovery board had a large effect (150-200 ADC counts) on the readings.  The LEDs are driven by port pins associated with the SDADC and share the same power pin, which makes clear that one should not be doing any switching of port pins associated with the SDADC.  Eliminating the switching of the LEDs removed the gross noise.

The DiscoveryF3 board layout is probably not optimal for minimizing noise, but the only thing running besides the SDADC is USART1 on port A, and given the heavy filtering the serial-port effect should be small.  The bigger issue, however, is the dependence of the calibration on input voltage.

Before locking in the design based on the 'F373 SDADC, I need to have a better understanding of the underlying mechanism that gives rise to the SDADC counts-per-volt changing with input level.

Any insights, or directions to documentation would be appreciated.

#stm32f373-sdadc #sigma-delta
2 REPLIES
re.wolff9
Senior
Posted on March 20, 2016 at 20:00

If I understand your table correctly, the first column is the voltage you measured at the input, and the second is the output of the SDADC.  From that you simply divide one by the other to get counts per volt.

Because ADCs and opamps have annoying offsets, I would first verify the differential performance: does the ADC read a certain number of counts higher for each additional input millivolt?  Luckily you already did the measurements.  You can get the difference between two counts, and the difference between two voltages, from your table...  Once I did that calculation correctly ( 🙂 ) I get reasonable results: about 36500 counts per volt.
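A minimal sketch of that differential calculation against the posted table (Python):

```python
import numpy as np

# (voltage, counts) pairs from the table in the first post.
v = np.array([0.015, 0.045, 0.112, 0.164, 0.241, 0.308, 0.423,
              0.563, 0.815, 0.984, 1.263, 1.560, 1.753])
counts = np.array([445, 1540, 3973, 5878, 8703, 11148, 15321,
                   20426, 29616, 35804, 45984, 56847, 63910])

# Differential slope between adjacent points: delta counts / delta volts.
# This cancels any constant offset, unlike the simple counts/volts ratio.
slope = np.diff(counts) / np.diff(v)
print(slope)          # clusters around ~36500 counts/V
print(slope.mean())
```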

In theory you would have 65536 counts for 1.8 V, or 36409 counts per volt.  This differs by 0.2 percent from the average value you measured.

This means that mostly what you've got is an offset in your system.  So... you have to take into account that the SDADC won't read 0 counts with (what you think is) 0 volts.

From your measurement the offset is about 3 mV (2.5 mV lowest, 3.8 mV highest).  The manual allows for an 80-count INL, or about 2 mV.  The measured 1.3 mV spread seems within that range.  On the other hand, the offset error is specified as less than ''2mV after offset calibration'', but you're seeing 3 mV.  So... are you doing offset calibration?  How?
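A minimal sketch of backing that offset out with a straight-line fit, again using the table from the first post (Python):

```python
import numpy as np

v = np.array([0.015, 0.045, 0.112, 0.164, 0.241, 0.308, 0.423,
              0.563, 0.815, 0.984, 1.263, 1.560, 1.753])
counts = np.array([445, 1540, 3973, 5878, 8703, 11148, 15321,
                   20426, 29616, 35804, 45984, 56847, 63910])

# Fit counts = slope * v + intercept; the intercept captures the offset.
slope, intercept = np.polyfit(v, counts, 1)
offset_mv = -intercept / slope * 1000.0   # offset referred to the input, in mV

print(f"slope  = {slope:.0f} counts/V")
print(f"offset = {offset_mv:.1f} mV")     # around the ~3 mV discussed above
```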

dhaselwood
Associate II
Posted on March 22, 2016 at 20:32

Roger,

Thanks.  Yes, subtracting out a small offset does flatten the plot considerably at the low end.  I missed that, assuming the calibration would take care of it.  It still goes off at the very low end, but it is an improvement.

Reviewing AN4550, it looks like one should stay within a 10-90% band with this SDADC, i.e. it doesn't do rail-to-rail very well.

BTW, where did you get the INL of about 80 counts?  The datasheet for the STM32F373xx shows 80, but for a gain of 8; I am using 1x gain and a Vref of 1.8 V, which should put it in the 23-31 count range.  3.3 mV seems to give a best fit, however.

One approach I tried was to take the readings from a number of runs (112) and find the best polynomial fit, which was order 8.  Recomputing the data from this polynomial, I ran another fit, which was order 6.  The two passes are essentially a smoothing operation (a third pass offered no improvement); a sketch of the idea follows below.  Using this 6th-order polynomial to correct the readings, the residual errors were less than my current test equipment can measure over about a 5-95% range.  This process would have taken out any implied offset.
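For illustration, a minimal sketch of the two-pass fit; the data here is a placeholder (the single table from the first post rather than the pooled 112 runs), and counts are scaled to avoid conditioning problems in the high-order fit:

```python
import numpy as np

# Placeholder data: one run's (counts, voltage) pairs; the real fit
# pooled 112 runs' worth of readings.
counts = np.array([445, 1540, 3973, 5878, 8703, 11148, 15321,
                   20426, 29616, 35804, 45984, 56847, 63910])
v = np.array([0.015, 0.045, 0.112, 0.164, 0.241, 0.308, 0.423,
              0.563, 0.815, 0.984, 1.263, 1.560, 1.753])

x = counts / 65536.0          # scale counts to [0, 1] for conditioning

# Pass 1: order-8 fit of voltage as a function of scaled counts.
p8 = np.polyfit(x, v, 8)

# Pass 2: re-fit order 6 to the order-8 curve on a dense grid,
# which acts as a smoothing step.
grid = np.linspace(x.min(), x.max(), 500)
p6 = np.polyfit(grid, np.polyval(p8, grid), 6)

def correct(raw):
    """Corrected voltage from a raw SDADC reading."""
    return np.polyval(p6, raw / 65536.0)

print(correct(15321))         # should land near 0.423 V
```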

Thanks again, I'll look into the offset issue some more.