2018-05-22 01:28 PM
So, we are developing a camera-to-display interface on an STM32F769. Our EE has created an FPGA which is wired in to our STM32F7, giving us good VSYNC, HSYNC, DATA, and CLK pulses (verified with a Saleae logic analyzer). We have configured our DCMI peripheral (via the BSP functions BSP_CAMERA_Init, BSP_CAMERA_ContinuousStart, etc.) to read in this data, configured our DMA2D peripheral to convert the incoming data from RGB565 to ARGB8888, and the output of the DMA2D is then picked up by our DSI peripheral, which drives our own little custom display.
Lots of links in this chain, but we were ecstatic to get it all working pretty quickly. Then we realized that when we increased the clock speed of the FPGA input to our DCMI from 20 kHz to 30 kHz, we would hang after one frame. What was happening is that we would receive an IT_OVR interrupt ('indicates the overrun of data reception' -- not much info available) from the DCMI peripheral, which causes the BSP functions to cancel the transfers.
Digging into the BSP_CAMERA_Init function, it configures a DMA transfer. So really we have [external FPGA] -> DCMI -> DMA -> DMA2D -> DSI -> [external display].
So the first question is...
Is it appropriate to use *both* DMA and DMA2D? Is this redundant? Should I instead configure DMA2D to be connected directly to the output of the DCMI? (i.e., should we be DCMI->DMA2D->DSI instead of DCMI->DMA->DMA2D->DSI?)
And the second question is
What the heck is an overrun exactly, and what causes the IT_OVR interrupt? What is being overrun? Is the DCMI receiving clock pulses too early? I feel like we are searching for a needle in a haystack without understanding exactly what this is, and it is *not* documented well. Thanks very much for any insight!
#dma2d #dcmi #dsi #stm32f7-dma #stm32f7

2018-05-22 01:51 PM
If I understand it, DMA2D is a bit-blit type interface, the LTDC is a raster generator, and DSI is a transport.
LTDC+DSI/LCD if heavily driven can cause a lot of bus contention. Use 32-bit wide memories.
The DCMI data and DMA2D both need to use memory to hold intermediate/transient data, i.e. you're not synchronously piping things; the data has to dwell somewhere due to the async nature and the disparity of the rates involved.
2018-05-23 10:48 AM
'Use 32-bit wide memories'... I'm not exactly sure how to do that. Is this configurable somehow, or dependent on location?
Do you happen to know exactly what the DCMI is detecting when it flags an overrun?
And I've heard DMA2D called a bit-blit before... but is it not a 'transport'? Do you think it's meant to be used in series with regular DMA, or should I use it *instead* of DMA if I also need a 2-D conversion?
2018-05-23 11:31 AM
It would be a design choice you make for your board; you didn't specify anything, so I assumed custom hardware. It would mean using the external data bus in D[0..31] mode rather than D[0..15] mode.
An overrun implies the data didn't meet delivery deadlines due to bus contention. You'd want to review the flight patterns of your assorted data streams and memories in the context of the bus matrix diagrams. In an ideal world you'd want to localize traffic to specific DMA units and memories, away from the memories your executing code is using.
It is not a 'transport' in the sense that it directly shovels data from one bus interface to another. I would view the DSI connection like the Channel Tunnel: a high-speed link, with two tunnels, that data is being crammed down. In this context the LTDC stages that data.
DMA2D strikes me as something that can interact with itself (pattern memory and display memory), and these interactions are both complex and transformative.
I come at this with an ATARI-ST/AMIGA mindset from the 1980's, with chip memory, bit-blit and sprites, I could be entirely wrong...