
HAL issue with int8_t instead of uint8_t

Xenon02
Senior

Hello ! 

I've been working with the LIS3MDL, a magnetometer from ST. Reading X, Y, Z mostly gave good values, but sometimes a random axis was just zeroed: for example X = 0.14, Y = 0.30, Z = 0.14, and then suddenly X = 0.14, Y = 0.0, Z = 0.14.

The culprit was here:

 

    uint8_t data[6];
    HAL_I2C_Mem_Read(&hi2c1, LIS3MDLTR_Address, LIS3MDLTR_X_Y_Z_Axis, 1, data, 6, HAL_MAX_DELAY);

 

The problem: at first I used int8_t data[6] instead of uint8_t data[6]. I don't know why HAL had a problem with it; in binary it should be the same data, but somehow it didn't work. Does anyone know why?

 

    *x = ((int16_t)data[1] << 8 | (int16_t)data[0]);
    *y = ((int16_t)data[3] << 8 | (int16_t)data[2]);
    *z = ((int16_t)data[5] << 8 | (int16_t)data[4]);

 

Here I also cast to int16_t so the binary data ends up interpreted as a signed value, but for HAL_I2C it shouldn't make any difference, because it just copies bytes.

In summary: why did passing int8_t data[6] into the HAL_I2C function make it behave weirdly, like cutting some data or putting 0 instead of a value?

1 ACCEPTED SOLUTION

Accepted Solutions
unsigned_char_array
Senior III

Because you store the LSB as a signed byte, it becomes negative whenever it holds an unsigned value > 127. By casting it to a signed 16-bit int you sign-extend the int8_t to int16_t, and those upper bits get OR-ed over the MSB. Proof:

 

#include <stdio.h>
#include <stdint.h> // needed for uint8_t, int8_t, int16_t

int main()
{
   uint8_t data_unsigned[2];
   int8_t data_signed[2];
   int16_t a, b;

   data_unsigned[0] = 128;
   data_unsigned[1] = 0;

   data_signed[0] = (int8_t)128; // this is the problem
   data_signed[1] = 0;

   a = ((int16_t)data_unsigned[1] << 8 | (int16_t)data_unsigned[0]);
   b = ((int16_t)data_signed[1] << 8 | (int16_t)data_signed[0]);

   printf("%d, %d\n", a, b); // prints "128, -128"

   return 0;
}

 

(you can test this quickly with an online compiler such as https://cpp.sh/ or https://www.onlinegdb.com/online_c_compiler)

Better to do shifting and bit-wise logic with unsigned types unless you know what you are doing (such as deliberate sign extension of an int with a non-standard width, e.g. a 13-bit int).

This is how to do it properly:

 

 a = (int16_t) (data_unsigned[1] << 8 | data_unsigned[0]); // no need to typecast LSB as this automatically happens when using bitwise operators, cast to signed after shifting

 

In case you have an odd size two's complement integer such as 13 bit:

 

a = ((int16_t)((data_unsigned[1] << 8 | data_unsigned[0]) << (16-13))) >> (16-13); // 13-bit sign extension

 

This works by first left-shifting to move the 13-bit sign bit into the top position, then doing a signed (arithmetic) right shift to scale back down while replicating the sign bit. But in your case I assume the data is 16-bit, so that's not needed.

Kudo posts if you have the same problem and kudo replies if the solution works.
Click "Accept as Solution" if a reply solved your problem. If no solution was posted please answer with your own.


8 REPLIES 8
TDK
Guru

The int8_t and uint8_t data types are different. You need to use the correct one for your scenario. The value -1 can't be expressed in uint8_t, for example.

If you feel a post has answered your question, please click "Accept as Solution".

I do have a question though.
Because the function:

HAL_I2C_Mem_Read(&hi2c1, LIS3MDLTR_Address, LIS3MDLTR_X_Y_Z_Axis, 1, data, 6, HAL_MAX_DELAY);

takes the address of the variable "data". Isn't it that the same byte values are placed in this variable?

For example:

uint8_t data;

// Incoming data from I2C is 01011100
// So this value 01011100 is placed in data:

data = 0b01011100;

// Hence if data was of type int8_t, the same bit values from I2C are placed in data:

int8_t data;

data = 0b01011100;

So in both cases the data from I2C is taken the same way; of course the interpretation in decimal is different.
Or perhaps the data placed in "data" is not in bits but in decimals?

Because changing only the type from int8_t to uint8_t changed the behavior, although data from I2C or UART etc. is taken from a buffer and placed into the variable, so in bits it should be the same, I guess?
I mean, the I2C buffer holds 01011100 and it is placed in the "data" variable, so it should be the same? Or perhaps not?

PS.

So is it that, because this "data" variable was int8_t and not uint8_t, the sign extension added extra "1" bits that weren't supposed to be there?

 

It has nothing to do with the function; it is purely because of the datatype.

data = 0b10000000;

The value of this integer depends on the datatype. For int8_t it is -128 and for uint8_t it is 128. But you don't want negative numbers in your lower byte. For values between 0 and 127 there is no difference; that's why you didn't always see the problem.

Kudo posts if you have the same problem and kudo replies if the solution works.
Click "Accept as Solution" if a reply solved your problem. If no solution was posted please answer with your own.

Okay, so correct me here, or tell me on which part I am right and on which not.

So whether I give the function uint8_t or int8_t, it should give me the same bit data, right? Like 01011100: the same bit values will be pushed into uint8_t and into int8_t, so both of them hold 01011100.

The problem came after that: if it was int8_t and was expanded to int16_t, the gap is filled with "1"s; for uint8_t expanded to int16_t, the gap is filled with "0"s and not with "1"s.

Pretty interesting, because I thought that since it is expanded, a uint8_t expanded into int16_t would also become negative, because it has "1" in the MSB. Hmmmm


@Xenon02 wrote:

Okay, so correct me here, or tell me on which part I am right and on which not.

So whether I give the function uint8_t or int8_t, it should give me the same bit data, right? Like 01011100: the same bit values will be pushed into uint8_t and into int8_t, so both of them hold 01011100.


Yes, same bit value. In the memory viewer you would see the same hex value for each, since the function uses a pointer and expects a uint8_t.


@Xenon02 wrote:

The problem came after that: if it was int8_t and was expanded to int16_t, the gap is filled with "1"s; for uint8_t expanded to int16_t, the gap is filled with "0"s and not with "1"s.

Pretty interesting, because I thought that since it is expanded, a uint8_t expanded into int16_t would also become negative, because it has "1" in the MSB. Hmmmm


It is extended with 1s if the sign bit is 1, otherwise with 0s. This is so that the int16 has the same value as the int8.

The reason I put up a code snippet is that you can test it without hardware; it is pure C code. As long as the values from the chip are correct and you don't get read errors, it is all C. To be sure the values are correct, you can clear the bytes prior to reading, so if the read fails you get zero instead of an old value. And also check the return code of the read function.


 

Kudo posts if you have the same problem and kudo replies if the solution works.
Click "Accept as Solution" if a reply solved your problem. If no solution was posted please answer with your own.

@unsigned_char_array wrote:

@Xenon02 wrote:

Okay, so correct me here, or tell me on which part I am right and on which not.

So whether I give the function uint8_t or int8_t, it should give me the same bit data, right? Like 01011100: the same bit values will be pushed into uint8_t and into int8_t, so both of them hold 01011100.


Yes same bit value. In memory viewer you would see the same hex value for each since the function uses a pointer and expects a uint8_t.


Even if the function expected int8_t and I gave it uint8_t, the bit values would still be the same. The most important thing for the function is that it is a pointer; the cast doesn't matter much here, because both types are 8 bits.
So if the function expects uint8_t and I give it int8_t or uint8_t, both have the same result; same if the function expects int8_t and I give it int8_t or uint8_t. Because both have 8 bits, the type doesn't matter that much (of course I don't mean that char would work, or int16_t).


@unsigned_char_array wrote:

 


@Xenon02 wrote:

The problem came after that: if it was int8_t and was expanded to int16_t, the gap is filled with "1"s; for uint8_t expanded to int16_t, the gap is filled with "0"s and not with "1"s.

Pretty interesting, because I thought that since it is expanded, a uint8_t expanded into int16_t would also become negative, because it has "1" in the MSB. Hmmmm


It is extended with 1s if the sign bit is 1, otherwise with 0s. This is so that the int16 has the same value as the int8.

The reason I put up a code snippet is that you can test it without hardware; it is pure C code. As long as the values from the chip are correct and you don't get read errors, it is all C. To be sure the values are correct, you can clear the bytes prior to reading, so if the read fails you get zero instead of an old value. And also check the return code of the read function.


 



From my perspective:

a = ((int16_t)data_unsigned[1] << 8 | (int16_t)data_unsigned[0]);
b = ((int16_t)data_signed[1] << 8 | (int16_t)data_signed[0]);

Because data_unsigned is 10000000, I expected casting to int16_t to give 1111111110000000, but instead it was 0000000010000000, even though it was cast to int16_t.
For data_signed, 10000000 cast to int16_t gives 1111111110000000, and indeed it changed like that.

Both were converted from their data type into int16_t.

Or rather, is it that we have the value 128 unsigned, and because it is converted to int16_t it still has to be 128, so the rest is filled with 0? Same with -128 signed? I had interpreted the cast as a pure bit-pattern change, so 128 as uint8_t cast to int16_t would become negative because it had 10000000.

Dang, easy C problem that I just overcomplicated... I just didn't get what is processed first in the code, or how to understand the conversion to (int16_t) xD, because I thought the unsigned value should also become negative, since both of them have the same binary value.


@Xenon02 wrote:
Because data_unsigned is 10000000, I expected casting to int16_t to give 1111111110000000, but instead it was 0000000010000000, even though it was cast to int16_t.


The number 0b10000000 is 128 as a uint8. Typecast it to int16 and it remains 128; in binary it has been left-padded with 0s.

The number 0b10000000 is -128 as an int8. Typecast it to int16 and it remains -128; in binary it has been left-padded with the sign bit (which is 1 in this case).

Doing bitwise operations on uint8 or int8 first converts them to int (integer promotion). If they are unsigned, the zero-padding causes no problem with OR-ing, but when they are signed, the potential 1-padding can set data bits of the MSB.

This is all documented in the C standard. I suggest you read those sections and also play with it in code. It may not always be intuitive, but you can always test the edge and corner cases of your calculation.

 

Kudo posts if you have the same problem and kudo replies if the solution works.
Click "Accept as Solution" if a reply solved your problem. If no solution was posted please answer with your own.