I2C using the HAL library in the IT mode.


I have implemented a basic I2C link between an STM32H7 (Main) and an STM32F4 (Secondary) processor. I am using an array of size 6 to transmit from the Main CPU to the Secondary CPU over I2C in IT mode.

The HAL API for I2C requires the data buffers to be of type uint8_t, but the HAL APIs also accept buffers of type uint32_t if they are cast to uint8_t *.
The next parameter, the data buffer size, should be uint16_t, which I have implemented as sizeof(txData)/sizeof(txData[0]), with the same implementation for rxDataBuff on the receiving end.

The HAL APIs that I am using are:
1.] On the Main CPU:
HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, SlaveAddress << 1, TxDataFrame, Size_TxD, I2C_FIRST_AND_LAST_FRAME);

2.] On the Secondary CPU:

HAL_I2C_Slave_Seq_Receive_IT(&hi2c1,rx_dataFrame, rxDataFrame_Size,I2C_FIRST_FRAME);

What I noticed is: if I initialize my "TxDataFrame" and "rx_dataFrame" as arrays of uint8_t, then the HAL I2C APIs successfully transmit and receive the complete contents of the array.

But if I initialize "TxDataFrame" and "rx_dataFrame" as arrays of uint32_t and cast the data buffers to uint8_t * inside the HAL API calls, then the Secondary CPU experiences data loss. In my case, the value stored at the last index of rx_dataFrame is lost.


For instance, 
On the MAIN CPU:
uint32_t TxDataFrame[6]  = {8,642,0,0,0,138};

HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, SlaveAddress << 1,(uint8_t *)TxDataFrame, Size_TxD, I2C_FIRST_AND_LAST_FRAME);

On the Secondary CPU:

uint32_t rx_dataFrame [6] = {0};

HAL_I2C_Slave_Seq_Receive_IT(&hi2c1,(uint8_t *)rx_dataFrame,(uint16_t)rxDataFrame_Size,I2C_FIRST_FRAME);

If I load the Tx buffer with another set of random data, then the Rx buffer is loaded with data only up to index position 1 (RxBuff[0], RxBuff[1]), while the remainder of the array is all 0s.

UPDATE: Resolved the issue for 32-bit data handling over I2C using the HAL API.

The fix for the size parameter was simple, and with the help of @Bob S I was able to implement the correct size for the buffers.

When casting a 32-bit buffer to an 8-bit pointer for I2C, the HAL APIs should be called as follows:

MAIN Processor:
HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, SlaveAddress << 1,(uint8_t *)TxDataFrame, Size_TxD * sizeof(uint32_t), I2C_FIRST_AND_LAST_FRAME);

SECONDARY Processor:

HAL_I2C_Slave_Seq_Receive_IT(&hi2c1,(uint8_t *)rx_dataFrame, RXBUFFERSIZE * sizeof(uint32_t), I2C_FIRST_FRAME);

With this corrected implementation, my data transfer using the HAL API succeeded.



Accepted Solutions
Bob S

What are the values of Size_TxD and rxDataFrame_Size? If your buffers are 6-element arrays of uint32_t, then those should be (6*sizeof(uint32_t)) = 24, because I2C transfer sizes are ALWAYS a number of BYTES.



Thank you for getting back to my query. 
The way I have been implementing the sizes for the TxDataFrame and rx_dataFrame is as follows:

Because the HAL API accepts the buffer size as a uint16_t:
uint16_t txDataFrame_Size = sizeof (tx_dataFrame)/sizeof(tx_dataFrame[0]);

uint16_t rxDataFrame_Size = sizeof (rx_dataFrame)/sizeof(rx_dataFrame[0]);

From your response, I understood that I should declare the sizes as follows:
uint16_t txDataFrame_Size = (6 * sizeof(uint32_t));

uint16_t rxDataFrame_Size = (6 * sizeof(uint32_t));
because my original TxDataFrame and rx_dataFrame are initialized as uint32_t arrays? Is my understanding correct?

In this case, do I also need to convert the 32-bit data in the transmit array into 8-bit values using some bit-operation, or would casting the original txDataFrame and rx_dataFrame to (uint8_t *) be fine, now that we have corrected the implementation of the buffer size?

Thank You. 

Another concern of mine is that even after casting tx_dataFrame and rx_dataFrame to uint8_t * inside the HAL I2C TX and RX APIs, the memory view for the data buffer displays the loaded data as 32-bit words instead of 8-bit values.

I tried configuring the size parameter for the buffers the way you explained, but it ended up in data loss on the transmitting side as well, and the receiving side wouldn't receive any data at all.

So I had to revert to my previous implementation of sizeof(tx_dataFrame) / sizeof(tx_dataFrame[0]), and the same for the Rx buffer.


Oh yes, you were right about the size. I was implementing it incorrectly. I have added the solution as an update to the original post.
Thank you for your insights though!!!