2025-08-06 10:38 PM
Hello ST Community,
I'm currently developing a project on STM32H743 using TouchGFX 4.25.0 (RGB LTDC interface, framebuffer in external 16MB SDRAM) and LWIP + FreeRTOS (TCP client mode over Ethernet RMII). Everything works fine when TouchGFX is not included. However, after adding TouchGFX to the project, I'm facing strange Ethernet instability.
:magnifying_glass_tilted_left: Observations:
Without TouchGFX:
TCP Client connects to the server reliably.
No reconnection or data loss observed.
Using recv() with flags = 0 works flawlessly for long durations.
With TouchGFX + LTDC + DMA2D:
Frequent TCP reconnections start occurring.
Happens especially when using lwip_select() + recv() mechanism.
If I skip lwip_select() and use direct recv() with flags = 0, reconnections stop, but after some time communication lags or stalls.
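For reference, the select-then-recv pattern described above can be sketched as a small helper. This is a generic sketch, not the poster's actual code: it uses the BSD-style names (`select()`, `recv()`), which map to `lwip_select()`/`lwip_recv()` when `LWIP_COMPAT_SOCKETS` is enabled in `lwipopts.h`.

```c
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: wait up to timeout_ms for data, then recv().
 * On lwIP, select()/recv() alias lwip_select()/lwip_recv() when
 * LWIP_COMPAT_SOCKETS is set. Returns -1 on error, 0 on timeout,
 * otherwise the number of bytes received. */
int recv_with_timeout(int sock, void *buf, size_t len, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int ready = select(sock + 1, &rfds, NULL, NULL, &tv);
    if (ready < 0)
        return -1;              /* select error */
    if (ready == 0)
        return 0;               /* timeout, no data ready */

    return (int)recv(sock, buf, len, 0);  /* flags = 0, as in the post */
}
```

If the direct-`recv()` path behaves differently from this path, one thing worth checking is whether the timeout chosen here is short enough that the application-level keepalive logic treats a quiet link as a dead one.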
:light_bulb: Hypothesis:
I suspect the problem lies in AXI bus contention between:
LTDC/DMA2D accessing external SDRAM (for framebuffer)
Ethernet DMA accessing DTCM/SRAM or other AXI slaves
Possibly QSPI access if active for bitmap cache
I'm considering using AXI PMU (Performance Monitoring Unit) to measure:
Bus latency
AXI master stalls
Transaction overlap
However, documentation on AXI PMU in STM32H7 context is scarce, especially under FreeRTOS + TouchGFX + LwIP.
:question_mark: Questions to Community:
Has anyone faced similar TCP instability or lag after enabling TouchGFX or DMA-heavy peripherals?
Can AXI PMU registers be monitored in runtime to measure bus load/stalls in FreeRTOS?
Any suggestion to prioritize Ethernet DMA transactions on AXI bus?
Can DMA2D or LTDC be throttled or delayed to reduce congestion?
:package: Configuration:
MCU: STM32H743II
TouchGFX: 4.25.0 (LTDC RGB interface)
SDRAM: 16MB at 0xD0000000 (double-buffered framebuffer)
QSPI Flash: For external bitmap assets (W25Q128)
FreeRTOS: Default settings (preemptive)
Ethernet: LwIP TCP Client using RAW API
Compiler: STM32CubeIDE + GCC
I'm open to any insights, workaround ideas, or experiences. I’d also be happy to share code structure or bus matrix settings if needed.
Thanks in advance,
Jumman Jhinga
2025-08-06 11:00 PM
Ethernet in the current STM32Cube implementation needs top-priority access to stay stable (and I have found data streams capable of crashing it even with ST's own examples). Adding other peripherals will certainly slow response times, and that could be one cause of your troubles.
The internal blocks you suspect have limited impact at the 480 MHz clock; SDRAM uses far more bandwidth than the display driver does. I would focus on the Ethernet implementation before digging down to low-level hardware.
2025-08-06 11:22 PM
@mbarg.1 Thanks for your input.
Yes, I'm already managing priorities accordingly:
HAL_NVIC_SetPriority(ETH_IRQn, 5, 0); // High priority for Ethernet
HAL_NVIC_SetPriority(QUADSPI_IRQn, 6, 0); // Medium priority
HAL_NVIC_SetPriority(DMA2D_IRQn, 7, 0); // Lower priority for graphics
I'm using the following system clocks:
CPU Frequency: 240 MHz (set via CubeMX)
LTDC Pixel Clock: 60 MHz
To monitor runtime behavior, I'm measuring time using the DWT cycle counter:
TCP latency (send → receive): ~500 µs to 1 ms
Communication thread execution: ~12–200 ms, scaling with the overall delay I configure; the measured time tracks the configured delay exactly.
CPU load: ~93–97% in normal operation
These metrics remain stable unless packets are delayed or missed — then latency and thread timing increase accordingly.
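In case it helps others reproduce these measurements: below is a minimal sketch of DWT-based timing on a Cortex-M7, assuming CMSIS register definitions and the 240 MHz core clock stated above. The register setup is the standard CYCCNT enable sequence; the conversion helper and its name are mine, not from the project.

```c
#include <stdint.h>

#define CPU_HZ 240000000u   /* 240 MHz core clock, per the post */

/* Convert a DWT cycle-count delta to microseconds. */
static inline uint32_t cycles_to_us(uint32_t cycles)
{
    return cycles / (CPU_HZ / 1000000u);
}

#ifdef STM32H743xx   /* device-specific part, compiles only on target */
#include "stm32h7xx.h"

/* Standard Cortex-M7 CYCCNT enable sequence. */
static void dwt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable trace block */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start cycle counter */
}

/* Usage sketch:
 *   uint32_t t0 = DWT->CYCCNT;
 *   ... code under test ...
 *   uint32_t us = cycles_to_us(DWT->CYCCNT - t0);
 */
#endif
```

Unsigned subtraction of CYCCNT samples stays correct across one counter wrap, but at 240 MHz the 32-bit counter wraps roughly every 17.9 s, so intervals longer than that need a wrap count.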