
Understanding ADC Accuracy

Question asked by Thomas Watson on Mar 26, 2017
Latest reply on Mar 27, 2017 by waclawek.jan

I am looking to determine the accuracy of a sensor using the ADC on an STM32L0. For reference, the datasheet's specified accuracy is here: https://i.imgur.com/Q1w9Eq5.png.


The sensor outputs a voltage that varies with what it senses. Assuming the sensor's output voltage is error free, I'm trying to understand the measurement error introduced by the ADC. Does the "effective number of bits" figure mean that I can take the 10 most significant bits of the 12-bit result and assume that number is accurate, i.e. that the actual value is within ±0.5 LSB of those 10 bits? How does the hardware calibration routine affect this figure?
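
For concreteness, this is the hardware calibration sequence I mean, as a minimal register-level sketch following the steps in the reference manual. Register and bit names are taken from the CMSIS device header; clock setup and error handling are omitted, so treat it as an illustration rather than production code:

#include "stm32l0xx.h"   /* CMSIS device header, assumed to be in the project */

/* Run the ADC self-calibration: the ADC must be disabled, then ADCAL is set
 * and the hardware clears it once the calibration factor has been computed. */
static void adc_self_calibrate(void)
{
    /* Make sure the ADC is disabled before starting calibration */
    if (ADC1->CR & ADC_CR_ADEN)
    {
        ADC1->CR |= ADC_CR_ADDIS;           /* request ADC disable */
        while (ADC1->CR & ADC_CR_ADEN) { }  /* wait until it is actually off */
    }

    ADC1->CR |= ADC_CR_ADCAL;               /* launch the calibration */
    while (ADC1->CR & ADC_CR_ADCAL) { }     /* hardware clears the bit when done */
}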


Let's say that a 12-bit reading of 348 corresponds to a measured value of 4896, a reading of 349 to a value of 4880, and a reading of 352 to a value of 4834. Assuming the reading is accurate to ±0.5 LSB, the measured value is accurate to ±8, correct? If I instead take the maximum unadjusted error from the datasheet of ±4 LSB (or is it ±2 LSB?), does that mean the measured value is only accurate to ±64? If I just round the 12-bit reading to 10 bits and assume it is now accurate to ±0.5 LSB, I believe that works out to an error of ±32. Can I do the rounding, mark the result "accurate to ±32", and be correct every time, regardless of the error in that particular chip? How does this improve if I calibrate the chip using the hardware routine? I would like to claim as much accuracy as possible without having to characterize each chip manually.
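
To make the arithmetic concrete, here is the same reasoning as a small C program. The 16-units-per-count sensitivity is just the rough slope implied by my example readings (348 -> 4896, 349 -> 4880), so it is illustrative only:

#include <stdio.h>

#define UNITS_PER_LSB12 16.0  /* approximate sensor units per 12-bit LSB (illustrative) */

int main(void)
{
    double err_quantization = 0.5 * UNITS_PER_LSB12;  /* +/-0.5 LSB             -> +/-8  */
    double err_unadjusted   = 4.0 * UNITS_PER_LSB12;  /* +/-4 LSB unadjusted    -> +/-64 */
    double err_rounded      = 2.0 * UNITS_PER_LSB12;  /* +/-0.5 LSB at 10 bits
                                                          = +/-2 LSB at 12 bits -> +/-32 */

    printf("quantization only   : +/-%.0f units\n", err_quantization);
    printf("max unadjusted error: +/-%.0f units\n", err_unadjusted);
    printf("rounded to 10 bits  : +/-%.0f units\n", err_rounded);
    return 0;
}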
