STM32 Nucleo L476 Internal Temperature Sensor

simon_lauser
Associate
Posted on July 20, 2016 at 23:00

Hello,

I'm trying to get the internal temperature sensor to work, but the sampled values are always too low. If I use that fancy formula from the reference manual, it says about 10 °C, but believe me, it's not. The sampled value is always about 970, with a calibration value of 1040 at 30 °C. I have tried a lot of configurations, but now I don't know how to proceed. Maybe someone can spot the problem. Here is my code:

ADC Config by CubeMX:

ADC_MultiModeTypeDef multimode;
ADC_ChannelConfTypeDef sConfig;

/* Common config */
hadc1.Instance = ADC1;
hadc1.Init.ClockPrescaler = ADC_CLOCK_ASYNC_DIV8;
hadc1.Init.Resolution = ADC_RESOLUTION_12B;
hadc1.Init.DataAlign = ADC_DATAALIGN_RIGHT;
hadc1.Init.ScanConvMode = ADC_SCAN_DISABLE;
hadc1.Init.EOCSelection = ADC_EOC_SINGLE_CONV;
hadc1.Init.LowPowerAutoWait = DISABLE;
hadc1.Init.ContinuousConvMode = DISABLE;
hadc1.Init.NbrOfConversion = 1;
hadc1.Init.DiscontinuousConvMode = DISABLE;
hadc1.Init.ExternalTrigConv = ADC_EXTERNALTRIG_T1_CC1;
hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_NONE;
hadc1.Init.DMAContinuousRequests = DISABLE;
hadc1.Init.Overrun = ADC_OVR_DATA_PRESERVED;
hadc1.Init.OversamplingMode = DISABLE;
if (HAL_ADC_Init(&hadc1) != HAL_OK)
{
  Error_Handler();
}

/* Configure the ADC multi-mode */
multimode.Mode = ADC_MODE_INDEPENDENT;
if (HAL_ADCEx_MultiModeConfigChannel(&hadc1, &multimode) != HAL_OK)
{
  Error_Handler();
}

/* Configure regular channel */
sConfig.Channel = ADC_CHANNEL_TEMPSENSOR;
sConfig.Rank = 1;
sConfig.SamplingTime = ADC_SAMPLETIME_12CYCLES_5;
sConfig.SingleDiff = ADC_SINGLE_ENDED;
sConfig.OffsetNumber = ADC_OFFSET_NONE;
sConfig.Offset = 0;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);

Measuring:

while (HAL_ADC_PollForConversion(&hadc1, 10) != HAL_OK);
temp = (int)HAL_ADC_GetValue(&hadc1);

Thanks;

#adc #hal #stm32l476 #stm32cubemx
Walid FTITI_O
Senior II
Posted on July 22, 2016 at 17:15

Hi Knechter,

I recommend taking a look at the ''ADC_TemperatureSensor'' example inside

http://www.st.com/content/st_com/en/products/embedded-software/mcus-embedded-software/stm32-embedded-software/stm32cube-embedded-software/stm32cubel4.html

, at this path:

STM32Cube_FW_L4_V1.5.0\Projects\STM32L476RG-Nucleo\Examples_LL\ADC\ADC_TemperatureSensor

-Hannibal

raptorhal2
Lead
Posted on July 24, 2016 at 23:56

The internal temperature sensor is a high-impedance source, so a higher sampling-cycles count will make a significant difference.
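For example, picking one of the longest sampling times available on the L4 (a quick sketch reusing the channel config from the original post; choose the constant that suits your ADC clock):

sConfig.Channel = ADC_CHANNEL_TEMPSENSOR;
sConfig.Rank = 1;
sConfig.SamplingTime = ADC_SAMPLETIME_640CYCLES_5; /* instead of 12.5 cycles: give the high-impedance sensor time to charge the sampling capacitor */
HAL_ADC_ConfigChannel(&hadc1, &sConfig);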

Cheers, Hal

bryenton
Associate II
Posted on December 10, 2016 at 06:38

Hi Knetcher et al,

I was also struggling with the L4 temperature sensor, so here is some info that will hopefully help you or others out...

In my opinion the ST temp sensor examples are limited. The one Walid referred to above I find very complicated; you would almost need the internal schematics of the L476 to have a chance at getting all of those and the many other related LL steps right.

There is another example that is much more reasonable, but it is hidden away under an obscure name at: STM32Cube_FW_L4_V1.5.0\STM32Cube_FW_L4_V1.5.0\Projects\STM32L476G_EVAL\Examples\ADC\ADC_Sequencer. Unfortunately it uses a single-point calibration and the values are incorrect, but this can be fixed if you use the calibration readings from your L4xx, adjusted as described below.

Since you refer to the data sheets, you may notice that the description says the calibrations for 30 °C / 110 °C (Table 8) were taken at Vdda = 3.0 V. It is likely that your system is at 3.3 V, and if so I believe you will need to scale the L4-specific calibration values to match your CPU's Vdda, e.g. by 3.0/3.3 (or 3.0/2.7 if you have designed at 2.7 V, etc.). If you are using 3.3 V, that would give an ADC value of about 945 for TS_CAL30, and with your reading of 970 it would mean the CPU was at about 38 °C +/- 1 °C (assuming your system is 3.3 V). Note: the +/- 1 °C is based only on the 30/110 calibration curve's +/- 5 °C inaccuracy; many other factors will widen the error margin.
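To sanity-check that figure: with a 12-bit ADC at 3.3 V one LSB is roughly 3300 / 4096 ≈ 0.81 mV, and the datasheet gives a typical sensor slope of about 2.5 mV/°C, i.e. roughly 3.1 LSB/°C; so (970 - 945) / 3.1 ≈ 8 °C above the 30 °C calibration point, which is where the ~38 °C estimate comes from.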

Another thing to keep in mind is that the sampling time has a 5 µs minimum (Ts_temp, Table 74), so if you are running the ADC clock source faster than 16 MHz you will have to adjust hadc1.Init.ClockPrescaler = ADC_CLOCK_ASYNC_DIV8 and sConfig.SamplingTime = ADC_SAMPLETIME_12CYCLES_5 accordingly. Depending on what speed and modes you are running in, you might also need to consider Tstart and Tcal (Tables 63 and 74), which may call for some properly placed short delays.
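For example, if the asynchronous ADC clock source were the 80 MHz system clock (just an assumption about your clock tree), DIV8 gives a 10 MHz ADC clock, so 12.5 cycles is only 12.5 / 10 MHz = 1.25 µs, well below the 5 µs minimum; at that clock you would need at least 50 sampling cycles, i.e. ADC_SAMPLETIME_92CYCLES_5 (9.25 µs) or longer.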

And since oversampling is very easy to do on the L4xx, you might consider oversampling to de-noise the ADC readings a bit, even if it is not strictly necessary.
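As a rough sketch of what that can look like in the HAL init structure (16x oversampling with a 4-bit right shift keeps the result 12-bit):

hadc1.Init.OversamplingMode = ENABLE;
hadc1.Init.Oversampling.Ratio = ADC_OVERSAMPLING_RATIO_16;
hadc1.Init.Oversampling.RightBitShift = ADC_RIGHTBITSHIFT_4;
hadc1.Init.Oversampling.TriggeredMode = ADC_TRIGGEREDMODE_SINGLE_TRIGGER;
hadc1.Init.Oversampling.OversamplingStopReset = ADC_REGOVERSAMPLING_CONTINUED_MODE;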

Hope this is helpful,

Al

Seb
ST Employee
Posted on December 10, 2016 at 15:09

A few years back, on the STM32F437, I put together some sample code which might help. The coefficients are datasheet-dependent.

Here the idea was to factor out the VDD supply of the ADC from the measurement.

Here is an extract from an old test code. :)

This code is only for guidance, not to be reused 'as is'.

Hope this helps.

s32 Interpolate_s32(s32 x0, s32 x1, s32 y0, s32 y1, s32 x)
{
  // code for 32-bit MCU
  s32 dwQ;
  dwQ = (y1 - y0) * x + (x1 * y0) - (x0 * y1); // overflow not checked yet
  dwQ = dwQ / (x1 - x0);                       // we could also do rounding here
  return dwQ;
}

u32 ADC_Normal_LsbTo_mV(ADC_t* A, u32 Lsb) {
  u32 mV = Interpolate_s32(0, 0xFFF0, 0, A->VRef_mV, Lsb);
  return mV;
}

u32 ADC_Injected_LsbTo_mV(ADC_t* A, u32 Lsb) {
  u32 mV = Interpolate_s32(0, (s32)0x7FF8, 0, A->VRef_mV, (s32)(s16)Lsb);
  return mV;
}

u32 ADC_ConvertNormalTo_mV(ADC_t* A) {
  u8 n;
  for (n = 0; n < countof(A->Normal_Lsb); n++) {
    A->Normal_mV[n] = ADC_Normal_LsbTo_mV(A, A->Normal_Lsb[n]);
  }
  return n;
}

u32 ADC_ConvertInjectedTo_mV(ADC_t* A) {
  u8 n;
  for (n = 0; n < countof(A->Injected_Lsb); n++) {
    A->Injected_mV[n] = ADC_Injected_LsbTo_mV(A, A->Injected_Lsb[n]);
  }
  return n;
}

s32 ADC_Convert_mV_to_DegC_x10(ADC_t* A, u32 mV) {
  s32 Deg30Lsb  = (s32)(*((u16*)0x1FFF7A2C)); // F4 factory temp sensor reading at 30 C
  s32 Deg110Lsb = (s32)(*((u16*)0x1FFF7A2E)); // F4 factory temp sensor reading at 110 C
  // First convert the calibrated Flash values (LSB) to mV
  // (this removes the sign issue between normal and injected channels)
  u32 Deg30mV  = Interpolate_s32(0, 0x0FFF, 0, A->VRef_mV, Deg30Lsb);
  u32 Deg110mV = Interpolate_s32(0, 0x0FFF, 0, A->VRef_mV, Deg110Lsb);
  u32 DegC_x10 = Interpolate_s32(Deg30mV, Deg110mV, 300, 1100, (s32)(s16)mV);
  A->Temp_degC_x10 = DegC_x10; // capture the value for debugging or reference
  return DegC_x10;
}

s32 ADC_Convert_VRefByLsb(ADC_t* A, u32 Lsb) {
  // 0x1FFF7A2A contains the F4 ADC 12-bit right-aligned injected raw data for Vrefint at 30 C and 3.3 V
  s32 VRefLsb3p3V_Lsb = (s32)(*((u16*)0x1FFF7A2A));
  u32 Vdd_mV = (3300 * (VRefLsb3p3V_Lsb << 3)) / Lsb;
  A->MeasuredVdd_mV = Vdd_mV;
  return 0;
}

s32 ADC_FeedbackVdd(ADC_t* A, u32 Vdd_mV) {
  A->VRef_mV = A->MeasuredVdd_mV; // hmm....
  return 0;
}
Posted on December 10, 2016 at 18:37


Hi Sebastien,

Thanks for the pseudocode! I also have an F4 board and wasn't aware that it had the 2-point temp sensor calibration values too. Ours uses the F407, and I just checked the data sheet: yes, it also has 2-point calibration values, measured during manufacturing with Vdda at 3.3 V.

Using interpolation to account for different Vdda voltages is interesting.

However, in our case the Vdda is known and quite tightly controlled with an LDO, so all I did was adjust for our CPU's Vdda, and then ST's 2-point calibration formula from the RM works great, e.g. adjust the calibration ADC values as follows:

#define TS_CAL30_ADDRESS ((uint32_t)0x1FFF75A8) /* addr of temp sensor 30C calibration set during manufacturing */

tsCal30AdcValueAtVdda = (*(__IO uint16_t*)TS_CAL30_ADDRESS) * 3000 / VDDA_APPLIC_MV;
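For completeness, a sketch of how the adjusted value then feeds the usual RM0351 two-point formula (TS_CAL110_ADDRESS should be 0x1FFF75CA on the L476 per Table 8, but double-check your datasheet; tsAdcValue is a placeholder for the raw temperature sensor reading):

#define TS_CAL110_ADDRESS ((uint32_t)0x1FFF75CA) /* addr of temp sensor 110C calibration */

tsCal110AdcValueAtVdda = (*(__IO uint16_t*)TS_CAL110_ADDRESS) * 3000 / VDDA_APPLIC_MV;
tempDegC = (110 - 30) * ((int32_t)tsAdcValue - (int32_t)tsCal30AdcValueAtVdda)
           / ((int32_t)tsCal110AdcValueAtVdda - (int32_t)tsCal30AdcValueAtVdda) + 30;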

Al

Posted on December 10, 2016 at 20:00

One more point on this, with respect to the Vrefint calibration you were using. Since it is also taken (e.g. for your F437 board) at a specific Vdda, it also needs to be adjusted to properly compensate for the actual Vdda being used on your board, i.e.:

#define VREF_CAL30_ADDRESS ((uint32_t)0x1FFF75AA) /* addr of Vrefint 30C/3.0V calibration value set during manufacturing */

vRefCal30AdcValueAtVdda = (*(__IO uint16_t*)VREF_CAL30_ADDRESS) * 3000/VDDA_CPU_MV;
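As a reference, a sketch of the usual reference-manual way to use this value is to recover the actual Vdda from a reading of the Vrefint channel; on the L4, where the calibration is taken at 3.0 V, that would be (vrefintAdcValue is a placeholder for the raw Vrefint reading):

vddaMv = 3000UL * (*(__IO uint16_t*)VREF_CAL30_ADDRESS) / vrefintAdcValue;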

Since the F4 was calibrated at 3.3 V you may not have needed to do this, since many dev boards use 3.3 V, but if your board runs at a voltage other than 3.3 V then your Vref calibration value would have been incorrect, I believe.

Al

Posted on December 11, 2016 at 02:14

I haven't looked at the CPU temp sensors specifically, but many of ST's peripheral sensors, like MEMS inertial sensors, have chip temp sensors for improving reading accuracy. What had me going for a while, similar to your question, was the odd temperature readings. It seems that most of these sensors are explicitly accurate for temperature CHANGES but not against a given reference. I.e. 22 degrees C might return a value of 5. If this is the case, when the chip warms to 30 degrees the sensor will read 13 (5 + 8), an accurate change in temperature, but it does not tell you how either value relates to an absolute temperature. If this is the case for the CPU temperature sensors, a reading of 10 degrees at room temperature makes perfect sense. The only question is: if you heat it up, does it track changes in temperature? If so, it is up to you to calibrate the zero point. And rest assured that this changes from chip to chip.

Posted on December 11, 2016 at 02:24

Just to add to my above post, the L476 data sheet says exactly what I was describing, but it can be misread if you expect something different:

[Screenshot of the L476 data sheet: temperature sensor characteristics]

Note that the data sheet indicates the temperature sensor is ''suitable for applications that detect temperature changes only''. It then becomes confusing because on the next page it shows that the part is factory calibrated by testing at 30 degrees C and 110 degrees C. This might lead you to expect that these data points would yield accurate absolute temperature measurements. However, I think they only sample two points like that to trim linearity, so that you get an accurate difference in degrees, but referenced to nothing absolute.

Posted on December 11, 2016 at 03:23

Hi Aaron,

Use of 2 calibration points is actually very standard for linear temperature sensors, and ST providing them for an inexpensive sensor in a processor like this is, I believe, somewhat unique. The first portion of the data sheet you reference reads as if it was written before the 2 calibration points were available. If you have read a lot of ST documents and code you will notice they reuse a great deal, and this was possibly a cut-and-paste from some other STM32 chip datasheet. For the latest information you should refer to the temperature sensor section in RM0351 (p. 489) as well as Table 8, temperature sensor calibration values, in the datasheet.

I did some basic testing on this, trying a range of CPU frequencies, ADC clock prescalers, sampling cycles and oversampling ratios to see how accurate the chip spec (including sampling delay and temperature accuracies) was, and I would say that with the 2-point calibrations they are indeed about +/- 1 C. When I ran at higher speeds and faster sampling rates I was able to measure actual changes in the chip temperature, and verified these with a temperature probe. And when reducing the speed I could see the temperature drop back and stabilize at ambient room temperature, as expected.

I also don't understand how they are able to provide individual per-chip calibration points; it is possible that they do this by batch and factor in statistics, but they do seem to do the trick. And having them in the chip makes things easy.

I wouldn't necessarily rely on the absolute accuracy for a space mission, but to get a pretty good sense of CPU overheating, or possibly ambient/local temperature, it is quite good. And with calibrations, these are indeed absolute temperatures, not just relative temperatures.

cheers,

Al