
Fast I2C with DMA transfer issue

LaoMa
Associate III

Hello,

I have successfully implemented the standard I2C protocol with an OLED display. To speed up the display operations, I increased the I2C clock, and it works fine up to 290 kHz.

The protocol relies on DMA transfer: the CPU stays in SLEEP until the DMA completes and HAL_I2C_MasterTxCpltCallback() executes HAL_PWR_DisableSleepOnExit().

Now, as I said, everything is fine up to 290 kHz, but at 300 kHz and above the MCU gets stuck after having executed HAL_PWR_DisableSleepOnExit(): it doesn't exit from sleep. (In debug, any interaction makes it run again, so I cannot really understand what's happening.)

I have tried changing the interrupt priorities, but it makes no difference. I suppose the I2C event handler should have higher priority than the DMA handler... I2C has priority 0,0 while DMA has 0,1.

Any suggestion? Running at 290 kHz is also acceptable, but I want to know what's going on.

Thanks for any reply.

Maurizio

19 Replies

Hi @JHOUD​ ,

I appreciate your efforts and your attempts to get to the bottom of the issue, but I'm afraid you haven't got the point yet: it isn't a matter of I2C. The communication behaves as expected from the first START to the last STOP at any speed, including 100, 200, 300 and 400 kHz (I also checked some intermediate speeds).

And, as I said in my posts, the callback is called after the I2C has handled the STOP (that is what the SDA and SCL signals captured with a scope show!).

The point is that the MCU doesn't act on HAL_PWR_DisableSleepOnExit(): when the code returns from the I2C callback, it remains stuck at the __WFI instruction. This happens at speeds above 291 kHz: at 291 kHz it works fine, at 292 kHz it doesn't!

I've given the details of the NVIC priorities and the code in the callback routine, hoping there is something there I haven't known or understood yet!

But it seems that you guys have no clue either, and are giving me pertinent but not really useful information for understanding it.

Let me repeat: 290 kHz is good enough for my end device, but I wanted to use the full 400 kHz capability of my OLED display. And the sleep is very important: the device is powered by a small lithium cell, so I cannot waste energy leaving the CPU running while waiting for the data transfer.

One test I have not yet run is to verify the DMA transfer without sleep... even though I think it will end as expected. I'll try on Monday when I'm back in the office.

Stay tuned for the results.

Maurizio

Piranha
Chief II
HAL_I2C_Master_Transmit_DMA(&hi2c1, OLED_ADD, &IIC_Data[0], len);
 
// This sequence puts the CPU in SLEEP mode while all the peripherals keep
// working; the core is halted but interrupts are still serviced.
HAL_SuspendTick();           // stop SysTick so it cannot wake the CPU
HAL_PWR_EnableSleepOnExit(); // re-enter sleep each time an ISR returns
HAL_PWR_EnterSLEEPMode(PWR_MAINREGULATOR_ON, PWR_SLEEPENTRY_WFI); // CPU sleeps until an interrupt
 
// Execution resumes here once the callback has called HAL_PWR_DisableSleepOnExit()
HAL_ResumeTick();

At some speed the I2C interrupt can finish before the HAL_PWR_EnableSleepOnExit() is called and therefore the HAL_PWR_DisableSleepOnExit() will do nothing. And, if the I2C interrupt happens before the HAL_PWR_EnterSLEEPMode(), then the CPU sleeps forever.

You are disabling SysTick so it doesn't wake the CPU, but that way you will lose ticks. And what will you do when the system becomes more complex, with many interrupts in use? Disable them all? How would you manage that in code? The "temporarily disable a bunch of things in this place... and this... and also this one..." style of software design is just completely broken.

That is why people build task schedulers and events to wait on; those then manage putting the CPU to sleep for the whole system, centrally, in one place. And, to avoid race conditions, the "is there something to do" test and the WFI instruction are executed in a critical section with interrupts disabled via the PRIMASK register. Yes, the WFI instruction will wake the CPU when an interrupt is pending regardless of the PRIMASK state, and it will not enter sleep mode at all if the interrupt is already pending.

Hello,

even though I continue to appreciate your contributions, because I always have something to learn, this looks like a dialogue of the deaf!

"At some speed the I2C interrupt can finish before the HAL_PWR_EnableSleepOnExit() is called and therefore the HAL_PWR_DisableSleepOnExit() will do nothing. And, if the I2C interrupt happens before the HAL_PWR_EnterSLEEPMode(), then the CPU sleeps forever."

This cannot be true! The HAL routine returns after it has set up the DMA transfer and issued the START sequence. Even if only 2 bytes are to be transferred, the total is 3 bytes because of the I2C address byte; and even assuming the interrupt is raised only for the last byte (which is not true either), the transfer time is no less than 25 µs, more than enough to execute the instruction that disables sleep-on-exit and then enter sleep.

In any case, the call to the TxCpltCallback is made from the I2C_EV handler of the HAL library, i.e. still before the final return from interrupt!

Or at least this is what I see in DEBUG, but you know, the debugger adds things that are not there at normal run time!

In any case, I changed the approach slightly and now use HAL_I2C_Master_Seq_Transmit_DMA (because I can pass a pointer to a stream in FLASH without copying the data to RAM), and this way it works fine at 400 kHz with the same disable-sleep-on-exit scheme. It works regardless of whether the data come from FLASH or RAM.
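For reference, a minimal sketch of that call, reusing the hi2c1 handle and OLED_ADD from the earlier snippet (frame and len are placeholder names here; I2C_FIRST_AND_LAST_FRAME is the standard HAL XferOptions value marking a single self-contained transfer):

```c
/* Sketch only: queue one complete frame as a sequential DMA transfer.
   The data pointer may point directly into FLASH, so no RAM copy is needed. */
if (HAL_I2C_Master_Seq_Transmit_DMA(&hi2c1, OLED_ADD,
                                    (uint8_t *)frame, len,
                                    I2C_FIRST_AND_LAST_FRAME) != HAL_OK)
{
    /* handle the error: bus busy, DMA not ready, ... */
}
```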

I haven't measured yet how much power is saved, but I'm gonna do it!

Thanks ...

Maurizio

Hello @LaoMa​ ,

this morning I found an old Nucleo with an F103RB and tried measuring the consumption of a DMA master I2C transfer with and without sleep. The difference is hardly noticeable, and I transferred a lot more than 3 bytes to make the difference bigger. I suggest trying to save power elsewhere. Or, if you are really concerned about power consumption, perhaps pick another device, like an L1, L0 or L4.

Rgds,

J

To give better visibility on the answered topics, please click on Accept as Solution on the reply which solved your issue or answered your question.

Hi @Piranha​ ,

thanks for your contribution, but with the tick set to the default 1 ms rate, it's probably useless to suspend the tick for a sleep that will only last a few microseconds anyway. In general, though, it's an important step.

J


LaoMa
Associate III

Hello @JHOUD​  and @Piranha​ 

all your comments have been useful to understand more and to keep looking for a root cause, which has not been found yet.

To close this issue, I want to say thanks to you both; I have rearranged the code applying your suggestions, even if in my own way.

About saving power elsewhere: I have already done that in the code, and I know the sleep time during the DMA transfer is small compared to the time wasted in the display refresh loops or waiting for the ADC's 1024 conversions (oversampling/decimating on 4 channels), but I hate keeping the CPU running when it is doing nothing...

Yeah, it could sound a little freaky but I am an "original" kind of engineer!

Thanks again, your contributions have made me a little more skilled!

Maurizio

Hello @LaoMa​ ,

thanks for your appreciation!

I also think highly of you, and I believe your approach of trying to save every last bit of battery power is commendable and more engineers should be as thorough in pursuing this goal as you are.

However, from my experience, short periods of sleep can even increase the overall power consumption of an application. I described this case in AN4635 (different communication interface, but some similarities are there: https://www.st.com/content/ccc/resource/technical/document/application_note/c3/77/09/03/41/31/46/0b/DM00151811.pdf/files/DM00151811.pdf/jcr:content/translations/en.DM00151811.pdf )

In particular, look at chapter 6.1.2.

Best of luck with your application.

J


It has nothing to do with the tick rate. A tick interrupt can happen at any moment regardless of the tick rate. In the author's code, the SysTick interrupt is disabled so it does not wake the CPU from the WFI sleep.

LaoMa
Associate III

Thanks all for the help given.

I'm not yet sure the original issue is closed, but the sequential-transfer approach works fine at any frequency.

As for AN4635, I had read it before posting the first question, when I was looking at the low-power modes and how to invoke them.

Thanks again.

Maurizio

Hi @Piranha​ ,

I know what you mean, and I agree: it can happen at any time. But it will happen rarely, since the SysTick period is much longer than the sleep duration. On the other hand, suspending and resuming the SysTick adds several instructions (~10) every time the sleep would be saving power. So unless the clock is set to an excessively high frequency, it's likely to increase the overall power consumption. That's why I'd go without tick suspension in this case.

J.
