HAL_I2C_Slave_Receive_IT() with varying length

craig239955_stm1_st
Associate II
Posted on April 26, 2016 at 17:00

I've got an STM32F0 based I2C slave device which I'd like to send simple commands to, as well as updates containing additional bytes of data. 

However, HAL_I2C_Slave_Receive_IT() seems to require you to specify in advance the number of bytes you are waiting for, so every received message must be the same length. Is there any way around this using the HAL system?

#i2c #i2c-slave #no-hablo-hal #stm32
craig239955_stm1_st
Associate II
Posted on June 26, 2016 at 16:39

Still looking for a way to do this. Would really appreciate some input.

Posted on June 26, 2016 at 17:42

The protagonists for HAL really don't show up to support other users.

I2C is not a high-bandwidth connection; I suggest you process the bytes as they are received in a state machine to manage the stream, auto-increment the internal register count, etc.
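A minimal sketch of that idea, assuming the slave is fed one byte at a time (for example from the HAL receive-complete callback) and assuming a made-up framing of [command][length][payload...]:

```c
#include <stdint.h>

typedef enum { ST_CMD, ST_LEN, ST_PAYLOAD } RxState;

static RxState rx_state = ST_CMD;
static uint8_t rx_cmd;
static uint8_t rx_len;
static uint8_t rx_buf[64];
static uint8_t rx_pos;

/* Call this for every byte the slave receives. */
void slave_feed_byte(uint8_t b)
{
    switch (rx_state) {
    case ST_CMD:                                  /* first byte: command code */
        rx_cmd   = b;
        rx_state = ST_LEN;
        break;
    case ST_LEN:                                  /* second byte: payload length */
        rx_len   = (b <= sizeof rx_buf) ? b : sizeof rx_buf;   /* clamp to buffer */
        rx_pos   = 0;
        if (rx_len == 0) {
            /* zero-length command: handle rx_cmd here */
            rx_state = ST_CMD;
        } else {
            rx_state = ST_PAYLOAD;
        }
        break;
    case ST_PAYLOAD:                              /* remaining bytes: payload */
        rx_buf[rx_pos++] = b;
        if (rx_pos >= rx_len) {
            /* complete message: handle rx_cmd with rx_buf[0..rx_len-1] here */
            rx_state = ST_CMD;
        }
        break;
    }
}
```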

Luca R
Associate II
Posted on August 28, 2017 at 12:28

Hi,

your thread is a year old, and maybe you have already fixed everything.

I need to implement such functionality, and I thought of using HAL_I2C_Slave_Sequential_Receive_IT(), specifying '1' as the length, and iterating it until the STOP condition is detected.
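Something along these lines is a possible sketch of that approach (newer HAL versions name the function HAL_I2C_Slave_Seq_Receive_IT(); the handle name, buffer handling and the choice of listen mode are assumptions, and whether the last pending one-byte receive completes cleanly at the STOP depends on the HAL version):

```c
#include "stm32f0xx_hal.h"          /* or the HAL header for your device */

extern I2C_HandleTypeDef hi2c1;     /* assumed handle name */

static uint8_t  msg_buf[64];
static volatile uint16_t msg_len;
static uint8_t  byte_in;

void slave_listen_start(void)
{
    msg_len = 0;
    HAL_I2C_EnableListen_IT(&hi2c1);                 /* wait to be addressed */
}

void HAL_I2C_AddrCallback(I2C_HandleTypeDef *hi2c, uint8_t dir, uint16_t addr)
{
    (void)addr;
    if (dir == I2C_DIRECTION_TRANSMIT)               /* master is writing to us */
        HAL_I2C_Slave_Sequential_Receive_IT(hi2c, &byte_in, 1, I2C_FIRST_FRAME);
}

void HAL_I2C_SlaveRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
    if (msg_len < sizeof msg_buf)
        msg_buf[msg_len++] = byte_in;                /* store the byte just received */
    HAL_I2C_Slave_Sequential_Receive_IT(hi2c, &byte_in, 1, I2C_NEXT_FRAME);  /* ask for the next */
}

void HAL_I2C_ListenCpltCallback(I2C_HandleTypeDef *hi2c)
{
    /* STOP detected: msg_buf holds msg_len bytes -- process them, then re-arm */
    msg_len = 0;
    HAL_I2C_EnableListen_IT(hi2c);
}
```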

L

Harvey White
Senior III
Posted on August 28, 2017 at 22:07

Whether or not this will work for you depends on the I2C hardware implemented in your chip.  It does work for an L432.

The I2C protocol can handle variable-length messages. The HAL software generates the STOP condition after the specified number of bytes, which terminates the transmission at the slave.

However, the result is an AF error, because the slave software did not know how much to receive. Your message is there, it's fine, it just has that particular error. Check whether the interface is busy and whether you have any error other than an AF error; if it is not busy and the only error is AF, then your message is good. Set the maximum slave receive length to the length of the slave buffer, and set the slave buffer length to more than any message you expect to receive.

This has been verified with messages of 90 or so bytes sent from an F446 to an L432. Transmission from the L432 to the F446 uses known-length messages. It worked for over 24 hours and 2.5 million transactions.

Please note that the AF error is at the slave end, the master should not show any errors.
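A sketch of that oversized-buffer approach, assuming the handle name and computing the received length from the handle's remaining-byte counter (XferCount in current HAL versions, but check yours):

```c
#include "stm32f0xx_hal.h"          /* or the HAL header for your device */

#define RX_BUF_SIZE 128             /* larger than any message you expect */

extern I2C_HandleTypeDef hi2c1;     /* assumed handle name */

static uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint16_t rx_count;

void slave_arm(void)
{
    rx_count = 0;
    HAL_I2C_Slave_Receive_IT(&hi2c1, rx_buf, RX_BUF_SIZE);
}

void HAL_I2C_SlaveRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
    (void)hi2c;
    rx_count = RX_BUF_SIZE;         /* only reached if the master sent a full buffer */
}

void HAL_I2C_ErrorCallback(I2C_HandleTypeDef *hi2c)
{
    if (HAL_I2C_GetError(hi2c) == HAL_I2C_ERROR_AF) {
        /* AF only: the message is good; received = requested - still outstanding */
        rx_count = RX_BUF_SIZE - hi2c->XferCount;
        /* process rx_buf[0..rx_count-1], then re-arm with slave_arm() */
    } else {
        /* any other error: recover the interface as needed */
    }
}
```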

Posted on August 29, 2017 at 16:33

If the master wants to write some data, it works, and the transaction is completed with an AF error at the slave end.

But if the master wants to read some data, I still don't know how to make it work: how can the slave know when all the requested data have been sent to the master?

Posted on August 29, 2017 at 17:12

OK, this requires you to have commands that you can define, written to the slave.  It also requires master read from slave capability.  This is for talking to processors, not hardware devices.

1) Write a variable-length command to the slave (your choice of format; mine, by definition, are variable length). The slave gets an AF error; ignore that.

All commands to the slave are variable length.

2) Write a command to the slave asking it to return the message byte count, then read that integer from the slave (known length).

3) Write a command to the slave (if desired) to read the data. The master now knows how much to read. You could have just read from the slave, but this makes almost everything a command/response scenario, which is easier to manage.

Note that the only variable-length message is the one sent to the slave, and the slave handles it. All other transactions are of known length, as required by the HAL read routines.

The command/response scenario closely duplicates the board to board demo, and is reliable.
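On the master side, that scheme might look roughly like this (blocking calls for brevity; the command codes, the 7-bit address and the 2-byte big-endian count are all assumptions, not anything defined by the HAL):

```c
#include "stm32f0xx_hal.h"          /* or the HAL header for your device */

#define SLAVE_ADDR    (0x42 << 1)   /* made-up 7-bit address, shifted for HAL */
#define CMD_DO_WORK    0x10         /* made-up command codes */
#define CMD_GET_COUNT  0x11
#define CMD_GET_REPLY  0x12

extern I2C_HandleTypeDef hi2c1;     /* assumed handle name */

HAL_StatusTypeDef master_read_reply(uint8_t *out, uint16_t out_max, uint16_t *out_len)
{
    uint8_t  cmd;
    uint8_t  count_bytes[2];
    uint16_t count;

    /* 1) variable-length command to the slave (a single byte here for brevity) */
    cmd = CMD_DO_WORK;
    if (HAL_I2C_Master_Transmit(&hi2c1, SLAVE_ADDR, &cmd, 1, 100) != HAL_OK) return HAL_ERROR;

    /* 2) ask for, then read, the reply byte count (fixed two bytes, big-endian here) */
    cmd = CMD_GET_COUNT;
    if (HAL_I2C_Master_Transmit(&hi2c1, SLAVE_ADDR, &cmd, 1, 100) != HAL_OK) return HAL_ERROR;
    if (HAL_I2C_Master_Receive(&hi2c1, SLAVE_ADDR, count_bytes, 2, 100) != HAL_OK) return HAL_ERROR;
    count = (uint16_t)((count_bytes[0] << 8) | count_bytes[1]);
    if (count > out_max) return HAL_ERROR;

    /* 3) request and read the payload -- the master now knows exactly how many bytes */
    cmd = CMD_GET_REPLY;
    if (HAL_I2C_Master_Transmit(&hi2c1, SLAVE_ADDR, &cmd, 1, 100) != HAL_OK) return HAL_ERROR;
    if (HAL_I2C_Master_Receive(&hi2c1, SLAVE_ADDR, out, count, 100) != HAL_OK) return HAL_ERROR;

    *out_len = count;
    return HAL_OK;
}
```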

Posted on August 29, 2017 at 18:40

Thank you Harvey. I am defining a communication protocol for data retrieval, FW update and so on.

But what if the master doesn't follow the rules?

What I am after now is to implement something robust against unpredictable W/R messages coming from the master.

In my test environment I am using, as a master, an Aardvark I2C programmer. When using the Aardvark as a master, basically you can send two commands to an addressed slave:

1) MASTER WRITE

As already said, when the master is writing data there are no issues: bytes are received and written into a buffer in a cyclic way, so that if you try to write more data than the buffer size, it wraps and starts again from the beginning.

2) MASTER READ

The Aardvark programmer splits the read command into:

2.1 - 'Master Read' command: in the Aardvark GUI you can specify how many bytes you want to read, but this information is not sent in any way to the slave. I guess that the Aardvark will send a STOP condition after having received the requested bytes. But how can my slave detect it? I would have an internal buffer and start transmitting data (in a cyclic way, wrapping as in the previous case) until the master is satisfied.

2.2 - 'Master Register Read' command: here you have to specify the address (and its width) and the number of bytes to read. This command should result in a 'Master Write' + 'Master Read', and we are back to the previous question: how can the slave know when to stop transmitting data?

Posted on August 29, 2017 at 19:46

Luca R wrote:

Thank you Harvey. I am defining a communication protocol for data retrieval, FW update and so on.

But what if the master doesn't follow the rules?

What I am after now is to implement something robust against unpredictable W/R messages coming from the master.

Since I'm talking to multiple devices with different capabilities, I have a command group (256 possibilities) and a command (again, 256 possibilities).  If anything is sent that does not make sense, the slave device ignores it and sets an error flag.  In a programmed reply (not the count), that status is returned.

You're writing the master code, so you have some control here, but having the slave ignore what it can't do or doesn't understand works.
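The "ignore what it can't do and set an error flag" idea could be dispatched roughly like this (group and command codes are made up for illustration):

```c
#include <stdint.h>

#define GROUP_SYSTEM   0x01          /* made-up group/command codes */
#define CMD_PING       0x01

static volatile uint8_t slave_error_flag;   /* reported later in a status reply */

static void dispatch_command(uint8_t group, uint8_t cmd,
                             const uint8_t *payload, uint16_t len)
{
    (void)payload; (void)len;

    switch (group) {
    case GROUP_SYSTEM:
        if (cmd == CMD_PING) {
            /* handle a known command */
            return;
        }
        break;
    default:
        break;
    }

    /* Anything unrecognised is ignored; only the status flag is raised, and a
       later "read status" command returns it to the master. */
    slave_error_flag = 1;
}
```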

In my test environment I am using, as a master, an Aardvark I2C programmer. When using the Aardvark as a master, basically you can send two commands to an addressed slave:

1) MASTER WRITE

As already said, when the master is writing data there are no issues: bytes are received and written into a buffer in a cyclic way, so that if you try to write more data than the buffer size, it wraps and starts again from the beginning.

In the case of a buffer overrun in the slave, the slave NAKs that byte, and the master is to stop transmitting immediately.

I generally have a buffer size that exceeds message length, and code to automatically truncate a message at less than the buffer size.

2) MASTER READ

The Aardvark programmer splits the read command into:

2.1 - 'Master Read' command: in the Aardvark GUI you can specify how many bytes you want to read, but this information is not sent in any way to the slave. I guess that the Aardvark will send a STOP condition after having received the requested bytes. But how can my slave detect it? I would have an internal buffer and start transmitting data (in a cyclic way, wrapping as in the previous case) until the master is satisfied.

I suggest looking at the processor status flags in the processor manual for the I2C section.  The behavior of those flags can teach you a lot about the I2C protocol and what to expect.

The answer is that the command to get the count tells the slave to be ready to send two bytes (your choice about overhead) of data. That is a fixed message length that the slave has to send and the master expects to receive. So: the master writes the command, then the master reads the count.

Once the master has the byte count of the reply, it can receive the exact number of bytes in the message (i.e. issue a 'get reply' command of your choosing; the slave then goes into slave-transmit mode and waits to be addressed).
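On the slave side, the reply path could be wired up roughly as below, a sketch only: the command codes, the 2-byte big-endian count, and the "last_cmd" bookkeeping are assumptions, and newer HAL versions name the function HAL_I2C_Slave_Seq_Transmit_IT().

```c
#include "stm32f0xx_hal.h"          /* or the HAL header for your device */

#define CMD_GET_COUNT  0x11         /* must match the made-up codes used by the master */

extern I2C_HandleTypeDef hi2c1;     /* assumed handle name */

static uint8_t  last_cmd;           /* set by the slave-receive path when a command arrives */
static uint8_t  reply_buf[128];     /* prepared reply payload */
static uint16_t reply_len;
static uint8_t  count_out[2];

void HAL_I2C_AddrCallback(I2C_HandleTypeDef *hi2c, uint8_t dir, uint16_t addr)
{
    (void)addr;
    if (dir != I2C_DIRECTION_RECEIVE)            /* only handle master-read here */
        return;

    if (last_cmd == CMD_GET_COUNT) {             /* fixed 2-byte, big-endian count */
        count_out[0] = (uint8_t)(reply_len >> 8);
        count_out[1] = (uint8_t)(reply_len & 0xFF);
        HAL_I2C_Slave_Sequential_Transmit_IT(hi2c, count_out, 2, I2C_LAST_FRAME);
    } else {                                     /* "get reply": length is known by now */
        HAL_I2C_Slave_Sequential_Transmit_IT(hi2c, reply_buf, reply_len, I2C_LAST_FRAME);
    }
}
```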

2.2 - 'Master Register Read' command: here you have to specify the address (and its width) and the number of bytes to read. This command should result in a 'Master Write' + 'Master Read', and we are back to the previous question: how can the slave know when to stop transmitting data?

This command is made to read registers inside hardware devices, and is not needed for a smart device. Generally, slaves send data for as long as the master keeps clocking. A STOP resets the interface. Slaves accept data for as long as the master sends it. If the master gets a NAK from the slave, the master stops transmitting.

The limitation of having to specify how many bytes to send is not in the I2C protocol; it is a requirement that the writers of the HAL code put in for convenience. Termination of transmission or reception is handled by NAK or STOP.

Look at the message sequence, and you'll see that it is pretty safe with a few minor additions.

Posted on August 30, 2017 at 17:28

Thank you.

My uController has two I2C interfaces: one will act as a master, the other as a slave.

This uController manages a PCI card, with several I2C slave devices on it. The I2C master interface is connected to these devices.

The other I2C bus comes from the PCI connector, and on the other side (the motherboard) there is a BMC, which is the I2C master with respect to the uController on the PCI card.

What I have to do is implement a passthrough function to let the external BMC see every I2C device on the PCI card as if it were actually connected to the BMC itself. That's why I can't implement a single command interface.

One of the devices is a FRU EEPROM: with that EEPROM you can exceed the page length, because it implements a roll-over feature. The address is auto-incremented, or you can specify it with a 16-bit write operation.
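For reference, the master-side access to such an EEPROM is just a memory read with a 16-bit address, for example (the device address, handle name and timeout are assumptions):

```c
#include "stm32f0xx_hal.h"             /* or the HAL header for your device */

#define FRU_EEPROM_ADDR  (0x50 << 1)   /* typical EEPROM address, shifted for HAL */

extern I2C_HandleTypeDef hi2c2;        /* assumed: the master-side interface */

/* Read 'len' bytes starting at a 16-bit address; the EEPROM auto-increments
   (and rolls over) its internal address pointer during the read. */
HAL_StatusTypeDef fru_read(uint16_t mem_addr, uint8_t *buf, uint16_t len)
{
    return HAL_I2C_Mem_Read(&hi2c2, FRU_EEPROM_ADDR, mem_addr,
                            I2C_MEMADD_SIZE_16BIT, buf, len, 100);
}
```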

Other devices have other peculiarities and rules... It is tricky because you cannot know in advance which device is addressed by the master (well, actually you do know, because you can intercept the address, but then there are a lot of branches to follow to satisfy any request).