HAL_I2C_Slave_Transmit injects single FF between first and second transmit bytes

FrankNatoli
Associate III

Have two STM32 eval systems, a Discovery [running as I2C master] and a touchscreen board [running as I2C slave]. I2C communications were successful when the master was built with HAL and the slave was built with LL. After converting the slave to HAL, I observe that messages received by the master from the slave have a single FF inserted between the first and second bytes. There is an otherwise perfect 37-byte message, exactly what the slave transmitted, except something is inserting an FF after the first byte [so the master actually receives 38 bytes]. What could be causing the slave to inject this spurious FF?
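
For reference, a minimal sketch of the two blocking calls involved; the device header, the hi2c1 handle name, and the slave address are placeholders from a typical CubeMX project, not details taken from this thread:

#include "main.h"      /* CubeMX-generated; pulls in the HAL I2C driver */

#define SLAVE_ADDR      (0x30 << 1)   /* placeholder 7-bit address, shifted for HAL */
#define MSG_LEN         37u
#define I2C_TIMEOUT_MS  100u

extern I2C_HandleTypeDef hi2c1;

/* Master side (Discovery): request the 37-byte message from the slave. */
HAL_StatusTypeDef master_read(uint8_t *rx)
{
    return HAL_I2C_Master_Receive(&hi2c1, SLAVE_ADDR, rx, MSG_LEN, I2C_TIMEOUT_MS);
}

/* Slave side (touchscreen): clock the 37-byte message out when addressed. */
HAL_StatusTypeDef slave_send(uint8_t *tx)
{
    return HAL_I2C_Slave_Transmit(&hi2c1, tx, MSG_LEN, I2C_TIMEOUT_MS);
}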

TDK
Guru

> excepted something is inserting an FF after the first byte

How are you detecting this? Are you looking at SDA/SCL signals? How do you know the message is correctly formed on the slave?

The code in HAL_I2C_Slave_Transmit is straightforward. You can view and debug it as if it were your own code. There's no "add a 0xFF after the first byte" checkbox in there.


I am a software guy, not a hardware guy, so no, I am not looking at the SDA/SCL signals.

I am detecting this by looking at the receive buffer after HAL_I2C_Master_Receive [obviously on the master] returns.

The receive buffer has this bizarre FF between the first and second bytes that the slave sent.

The buffer given to HAL_I2C_Slave_Transmit [obviously on the slave] absolutely, positively does not have an FF between the first and second bytes; in the master's receive buffer, the second through 37th bytes are all displaced by one.

The last two bytes of the message are a 16-bit CRC, which checks out for the 37 bytes the slave sent but not with the inserted FF, so the slave software has no idea where the FF came from.
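
To make that check concrete, here is a hedged sketch of the CRC verification; the thread does not say which 16-bit CRC or byte order the protocol uses, so CRC-16/CCITT-FALSE below is purely a stand-in:

#include <stdbool.h>
#include <stdint.h>

/* Stand-in CRC-16/CCITT-FALSE; the real polynomial and byte order are unknown. */
static uint16_t crc16_ccitt(const uint8_t *data, uint32_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Check a 37-byte message whose last two bytes carry the CRC of the first 35. */
static bool message_crc_ok(const uint8_t msg[37])
{
    uint16_t rx_crc = (uint16_t)((msg[35] << 8) | msg[36]);
    return crc16_ccitt(msg, 35) == rx_crc;
}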

I am guessing, just guessing, there's some I2C timing issue, where the slave fails to provide a byte and the hardware inserts an FF, but I'm hoping someone more informed than me can comment on it.

I don't think there's anything in hardware to support this. If DR isn't serviced fast enough (underrun), the I2C will re-transmit the same byte and the OVR bit will be set. That can only happen if clock stretching is disabled. Probably a bug in the software. I would also check the errata sheet just to be safe.
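
A hedged sketch of how that check could look on the slave after the blocking transmit returns; the hi2c1 handle, buffer, and timeout are assumed from the surrounding project, and whether the polling driver latches OVR on a given series is worth confirming in the HAL source:

#include "main.h"   /* CubeMX-generated; assumption */

extern I2C_HandleTypeDef hi2c1;

/* Slave side: send the message, then inspect the driver's error code and flags. */
void slave_send_and_check(uint8_t *tx_buf)
{
    HAL_StatusTypeDef st  = HAL_I2C_Slave_Transmit(&hi2c1, tx_buf, 37, 100);
    uint32_t          err = HAL_I2C_GetError(&hi2c1);

    if (st != HAL_OK || err != HAL_I2C_ERROR_NONE) {
        if (err & HAL_I2C_ERROR_OVR) {
            /* Overrun/underrun reported by the driver. */
        }
        if (err & HAL_I2C_ERROR_AF) {
            /* Acknowledge failure: the master stopped reading early. */
        }
        if (__HAL_I2C_GET_FLAG(&hi2c1, I2C_FLAG_OVR)) {
            /* OVR bit still set in the status register. */
        }
    }
}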


Again, speaking as a software guy, I am told by my hardware guy that if the master requests more bytes than the slave plans to send, FFs are "received" to fill the buffer. I don't know whether that means the slave actually sends FFs or the master merely presumes FFs, but I can confirm the behavior. If I initialize the master receive buffer to AA and ask for 256 bytes while the slave sends only 37, the 38th byte through the end of the buffer turns into FFs. However, if I have the master ask for exactly 37 bytes, the 38th byte through the end of the buffer remains AA. So the "fill" is real.
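
Roughly the code involved on the master side of that experiment; the slave address, handle name, and timeout are placeholders rather than the actual project values:

#include "main.h"
#include <string.h>

#define SLAVE_ADDR  (0x30 << 1)   /* placeholder address */

extern I2C_HandleTypeDef hi2c1;

/* Pre-fill the receive buffer with 0xAA, then request more bytes than the
 * slave intends to send and see which positions actually get overwritten. */
void probe_fill_bytes(void)
{
    uint8_t rx[256];
    memset(rx, 0xAA, sizeof(rx));

    /* Ask for 256 bytes although the slave only queues 37. */
    HAL_I2C_Master_Receive(&hi2c1, SLAVE_ADDR, rx, sizeof(rx), 1000);

    /* Observed: rx[0..36] holds the message, rx[37..255] reads 0xFF;
     * with Size = 37 instead, rx[37..255] stays 0xAA. */
}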

Is this spurious FF, between receive bytes 1 and 2, caused by a slave underrun? I don't know.

As I note in my original post, the slave code was originally configured by CubeMX to use LL for I2C.

When I took over the code, I used CubeMX to convert the slave code from LL to HAL for I2C.

Initially the project didn't compile because CubeMX had left some residual LL code behind.

I fixed that and it now compiles, but I have this bizarre FF.

I repeat, the original I2C LL code did not do this.

Perhaps there is still other I2C LL code lurking, left behind by CubeMX, that I missed.

I may have to rebuild the workspace from scratch.

TDK
Guru

> If I initialize the master receive buffer to AA and ask for 256 bytes while the slave sends only 37, the 38th byte through the end of the buffer turns into FFs.

If that is the case, there is an issue with the master receive code. There is an ACK bit on each byte (asserted by the slave). If this is not asserted, the master should immediately exit the transfer.

But in this case, you don't get the spurious 0xFF on the second byte? What changed?

You don't mention the particular board or software version you're using. It's possible there is a bug within HAL causing this.

> Is this spurious FF, between receive bytes 1 and 2, caused by a slave underrun? I don't know.

I doubt it, but why leave it a mystery? Check the bit.


I too would like to blame the master I2C receive code, except that it works just fine, i.e., no spurious FFs, when the slave uses I2C LL to transmit.

It was my switch of the slave from I2C LL to I2C HAL, done through CubeMX, that triggered this behavior.

Will have to get a logic analyzer on the signals.

FrankNatoli
Associate III

Both mysteries were triggered by user software.

The injected FF, between the first and second bytes, was an artifact of residual LL code left over after using CubeMX to reconfigure from LL to HAL. I had to rebuild the workspace from scratch to be sure all the LL code was gone.

The "filler" FFs were artifacts of the user LL code, which was detecting that the receiver wanted more data, so the sender sent "filler" FFs.