SDRAM framebuffer causing USB FS to suspend? Interoperability issue.

R_1
Associate II

Hello!

I am having an interesting problem and I was wondering if someone could help point me in the right direction. Thanks in advance kind people. Dust off your magnifying glass and get ready for some clues!

Problem description:

I can successfully run the USB OTG FS (acting as a Virtual COM Port) together with the 480x272 RGB-interface LCD if I store the (single) framebuffer for the LCD in internal SRAM. If I try to store the framebuffer (single or double) in external SDRAM, the LCD continues to operate but the USB will suspend. From what I have observed, the other peripherals continue to function normally, apart from an increased rate of garbling of the Serial Wire Viewer data.

Project background:

My project is running on an STM32F767NIH. The board includes an LCD, USB FS, and several other peripherals. From the profiling I have done, the CPU spends ~95% of its time idle. For the LCD, I am using the LTDC driver, DMA2D, and an external flash chip to store graphic assets (QSPI connection); an external SDRAM is also available on the board via the FMC. The USB runs at 48MHz and is not set for DMA operation. The processor runs at 200MHz so the FMC/SDRAM can operate at 100MHz (max speed).

Here are the relevant NVIC interrupt priorities:

OTG_FS (5) > (LTDC (6) == DMA2D (6)) > (QUADSPI (0xf) == SDRAM_DMA (0xf))
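For completeness, this is roughly how those priorities are set with the HAL (a sketch, not my exact code: the IRQn names come from the STM32F7 device header, the sub-priority of 0 is an assumption, and lower numbers preempt higher ones):

HAL_NVIC_SetPriority(OTG_FS_IRQn,  5, 0);    /* highest priority of the group        */
HAL_NVIC_SetPriority(LTDC_IRQn,    6, 0);
HAL_NVIC_SetPriority(DMA2D_IRQn,   6, 0);
HAL_NVIC_SetPriority(QUADSPI_IRQn, 0xF, 0);  /* lowest priority                      */

HAL_NVIC_EnableIRQ(OTG_FS_IRQn);
HAL_NVIC_EnableIRQ(LTDC_IRQn);
HAL_NVIC_EnableIRQ(DMA2D_IRQn);
HAL_NVIC_EnableIRQ(QUADSPI_IRQn);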

While testing, the only peripherals I enabled were the USB, LCD related peripherals, and the Serial Wire Viewer ITM output.

Operability Notes:

Without the SDRAM framebuffer enabled (the internal SRAM is used for the framebuffer), USB enumeration always succeeds. With the SDRAM framebuffer enabled, the USB -sometimes- enumerates, but the PC may then display ‘USB device not recognized’. The USB can sometimes even transmit a few packets before it suspends; with a terminal I can -sometimes- see a message or two successfully sent before the USB on the chip gets stuck in ‘suspend’. (I suspect whether or not the USB enumerates comes down to the timing of the FMC/SDRAM initialization.) Again, this problem only occurs when the SDRAM framebuffer is used.

If I enable the FMC/SDRAM but continue to store the framebuffer in internal SRAM, the USB seems to keep functioning. If the framebuffer is placed in SDRAM but the TouchGFX task is disabled (with the LTDC still running), the USB still does not work.

My suspicion is that this issue is the result of a bandwidth problem. Even with the internal SRAM buffer, if I increase the pixel clock by 25% then the USB will also suspend; the lower the pixel clock, the longer the USB can run without suspending. As for the bpp: when using internal SRAM I can only use 16bpp due to memory size constraints, and using either 16 or 24bpp with external SDRAM still breaks the USB. Reducing the pixel clock when using external SDRAM does not stop the USB from suspending. I also tried disabling DMA2D, but that had no effect on the problem. In case it is relevant, I’ll point out that I have disabled the D-cache (using the MPU) for the DTCM (unnecessary, since the TCMs are not cached anyway), SRAM1, and SRAM2 to prevent any peripheral DMA buffer issues. I have enabled caching and prevented unaligned access for the SDRAM using the MPU, and following the respective app note I also deactivate speculative/cache access to the first FMC bank to save FMC bandwidth. Lastly, I’ll note that when I tried double buffering in external SDRAM, I made sure the framebuffers were in different SDRAM banks.
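For reference, a rough sketch of that MPU setup using the HAL Cortex driver is below. The region numbers, the 8MB SDRAM size, and the write-through attributes are assumptions, not my exact code:

MPU_Region_InitTypeDef r = {0};

HAL_MPU_Disable();

/* SDRAM at 0xC0000000: normal memory, cacheable (write-through with C=1, B=0) */
r.Enable           = MPU_REGION_ENABLE;
r.Number           = MPU_REGION_NUMBER0;
r.BaseAddress      = 0xC0000000;
r.Size             = MPU_REGION_SIZE_8MB;
r.AccessPermission = MPU_REGION_FULL_ACCESS;
r.IsCacheable      = MPU_ACCESS_CACHEABLE;
r.IsBufferable     = MPU_ACCESS_NOT_BUFFERABLE;
r.IsShareable      = MPU_ACCESS_NOT_SHAREABLE;
r.TypeExtField     = MPU_TEX_LEVEL0;
r.DisableExec      = MPU_INSTRUCTION_ACCESS_ENABLE;
HAL_MPU_ConfigRegion(&r);

/* FMC bank 1 (0x60000000): Device-type, execute-never, so the core does not
 * issue speculative accesses there (per the F7 FMC app note recommendation) */
r.Number           = MPU_REGION_NUMBER1;
r.BaseAddress      = 0x60000000;
r.Size             = MPU_REGION_SIZE_256MB;
r.IsCacheable      = MPU_ACCESS_NOT_CACHEABLE;
r.IsBufferable     = MPU_ACCESS_BUFFERABLE;
r.DisableExec      = MPU_INSTRUCTION_ACCESS_DISABLE;
HAL_MPU_ConfigRegion(&r);

HAL_MPU_Enable(MPU_PRIVILEGED_DEFAULT);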

Please let me know if you would like any additional information. And if you are thinking at this point “By Jove! How could this knucklehead miss ___?”, I would appreciate you letting me know!

Thanks for your time!

7 REPLIES

Well, the LCD controller may choke the bus it accesses - and in the case of an external framebuffer that should be ONLY the FMC bus (i.e. the LTDC should not access any other bus), unless there's something you haven't told us, e.g. a second layer in some other memory.

OTG_FS is on AHB2, and the only bus master dealing with OTG_FS is the processor. On AHB2 there are also JPEG, DCMI (and RNG) - do you use any of these?

If not, then the only reason why OTG_FS does not get served regularly is, IMO, that the processor attempts to access the choked FMC. So, does the processor read/write any variable in the SDRAM, is some heap/stack allocated there, and is there anything else connected to the FMC other than the SDRAM?

JW

@waclawek.jan

Thank you very much for your response! I was hoping to be able to report back with tales of my success, but alas! I have still been unable to precisely find the culprit. I apologize for bothering you again!

Just in case I’m making some obvious error, I’ll enumerate my graphics setup: the graphic assets are stored in the external flash chip (which I program with an external loader I wrote) connected via QSPI. DMA2D is used to transfer assets from the external flash to the SDRAM framebuffer. I believe the LTDC only accesses the FMC bus. The FMC is connected to the external SDRAM (32-bit configuration). The LTDC fetches the framebuffer data and outputs it to the LCD.

I am only using a single layer for the LTDC. I am using RGB565 format for the layer, with 16bpp, so there shouldn’t be any alignment issues (AN4861 4.5.2). The DMA2D transfers are in ARGB8888 transfer mode with RGB565 color mode. Interestingly, if I decrease the layer window size in the CFBLR/CFBLNR registers (essentially only displaying a tiny bit on the screen) then the USB can operate. I’m guessing this has to do with fewer burst transfers occurring.
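(If it helps to see it concretely, the window-shrinking test can also be done through the HAL rather than by poking CFBLR/CFBLNR directly; the 64x64 test size and the hltdc handle name here are just placeholders:)

HAL_LTDC_SetWindowPosition(&hltdc, 0, 0, LTDC_LAYER_1);  /* keep the window at the origin  */
HAL_LTDC_SetWindowSize(&hltdc, 64, 64, LTDC_LAYER_1);    /* display only a small region    */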

On AHB2 I don’t use any of those peripherals (JPEG, DCIM, etc.). Only the OTG_FS is being enabled and used on that bus.

Only the SDRAM is attached to the FMC. I don’t have the SDRAM region set in my linker file, and in the generated map file there does not appear to be anything stored or used in external SDRAM. If, as a test, I set the FMC/SDRAM to disable write operations (and disable the DMA2D error callback for this test), the USB still doesn’t work.
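(For what it's worth, the "disable writes" test is essentially the HAL's SDRAM write-protection command; hsdram1 is the usual CubeMX handle name, which I'm assuming here:)

HAL_SDRAM_WriteProtection_Enable(&hsdram1);   /* SDRAM ignores writes from here on */
/* ... run the display path and watch the USB ... */
HAL_SDRAM_WriteProtection_Disable(&hsdram1);  /* restore normal operation          */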

The odd thing to me is that I would expect the contention issues to result in the graphics losing out, not the USB, but there are no underrun or other errors on the LTDC. There do not appear to be any noise problems on the USB DP/DM lines. The ITM data output is totally corrupted when the FMC/SDRAM is running (there is a slight amount of corruption when using the internal framebuffer). I have some printf's I was using to debug, and their contents are garbled when using the SDRAM framebuffer (even after removing all the printf statements, junk data still appears).

Thanks again for the help. I really appreciate it!

Edit: P.S. I should also mention that in my original post I mistakenly included the SDRAM DMA in my NVIC priority list; I don't actually use this interrupt or enable that DMA channel for the SDRAM.

I still suspect the processor attempts to access FMC and chokes there.

Where are stack and heap (if used) allocated?

The processor does not write anything to the framebuffer?

JW

Here's an experiment you can/should do: set a pin at USB ISR entry and clear it at exit. Now observe this pin when LTDC is disabled or cut back and USB works, and compare to the non-working case.
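A minimal sketch of that instrumentation, assuming a CubeMX-style project (the pin, port, and handler/handle names are placeholders):

/* in stm32f7xx_it.c */
extern PCD_HandleTypeDef hpcd_USB_OTG_FS;

void OTG_FS_IRQHandler(void)
{
  GPIOA->BSRR = GPIO_PIN_8;                   /* pin high on ISR entry */
  HAL_PCD_IRQHandler(&hpcd_USB_OTG_FS);
  GPIOA->BSRR = (uint32_t)GPIO_PIN_8 << 16;   /* pin low on ISR exit   */
}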

JW

R_1
Associate II

[duplicated post- sorry!]

@waclawek.jan

Thanks again for the feedback! I apologize for the delay but I wanted to make sure I had all my ducks in a row. I have reduced the problem further, with some interesting surprises. Get ready for a plot twist (if you have the time 😊)!

Ordinarily, after USB setup, there is no input or output packet transfer over USB; the USB OTG IRQ does not fire until my PC application sends data. With the LTDC (etc.) set up to use the internal framebuffer, I verified that no USB traffic occurs.

Now here’s where things get interesting. With the LTDC (etc.) using the external framebuffer, USB traffic appears! Next, I cut the LTDC out of the firmware and just tried writing to the SDRAM from the processor via a task that fires periodically. This produced even more surprising results: I found that I could generate these USB abnormalities by writing a couple thousand bytes continuously into SDRAM. [Is this a normal result of the processor ‘choking’ on the FMC, as you suggested?] The abnormalities showed up several milliseconds (20-70ms, as described below) after the completion of the write.
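To be concrete, the test task is essentially the following (a simplified sketch; the SDRAM base address, the write length, and the CMSIS-RTOS delay are the knobs I varied):

#include "cmsis_os.h"

static void SdramWriteTestTask(void const *argument)
{
  volatile uint16_t *sdram = (volatile uint16_t *)0xC0000000;  /* FMC SDRAM bank base */

  for (;;)
  {
    for (uint32_t i = 0; i < 2500; i++)  /* a few thousand bytes, back to back */
    {
      sdram[i] = (uint16_t)i;
    }
    osDelay(50);                         /* spacing between write bursts */
  }
}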

The ‘USB Problem’: The problem begins with the device sending a URB_BULK packet to the host (endpoint 0x81, status ‘Broken pipe’). The device then sends 15 packets to the host (endpoint 0x81, status ‘No such file or directory’). These transmits are half a millisecond to almost 2ms apart. The host then sends back a ‘CLEAR_FEATURE’ request packet, which the device responds to. The host then sends 16 URB_BULK packets back to the device. No more USB communication occurs until after the next SDRAM write. I’d venture that this number of transfers is tied to the endpoint count. The first OTG ISR to fire after the SDRAM write is triggered by the ‘setup packet received’ and ‘TX complete’ flags, indicating that a setup packet was transferred.

Now let me clarify: Not all writes to the SDRAM produce this issue. If I write to a 500-byte region anywhere in the framebuffer but rewrite over that area ‘480*272*2*sizeof(uint16)’ bytes [a framebuffer’s worth], the USB problem does not occur. But writing several thousand bytes in a row over a region several thousand bytes long does cause the USB problem. Writing the same data (cached) versus new data into the buffer: the same-data case completes more quickly but still causes USB problems. Doing the same number of writes over different buffer sizes, I found the time taken to execute the writes was the same, but only writing several thousand bytes into contiguous memory caused the USB issue. It didn’t matter whether the memory was being written towards higher or lower addresses. Reading from the SDRAM did not cause USB problems. I don’t see any USB control errors, core resets, reinitializations, or anything in that vein occurring when the problem appears.

I also experimented with delays between SDRAM writes. In this experiment, writing to SDRAM starts 5s after USB initialization. With SDRAM writes spaced 50ms apart, the USB interrupts start ~20ms after the end of the SDRAM write. With SDRAM writes spaced 15s apart, the USB interrupts start ~70ms after the writes complete. With SDRAM writes spaced 30s apart, the USB interrupts also start ~70ms after the writes complete.

SDRAM Time profiling:

1. Working USB: three full writes over 500 bytes: ~30.49us. 10 writes over 500 bytes: ~0.1ms. 100 full writes of 500 bytes: ~1ms. 480x272*2 of 544 bytes: ~5.254ms (with cache breaker: ~15.73ms).

2. Failing USB: 1 write over 5000 bytes: ~0.1ms; same times as the ‘working USB’ category for a matching byte write count.
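In case the measurement method matters: timings like these can be taken with the DWT cycle counter, roughly as below (a sketch; it assumes the 200MHz core clock and that the cycle counter is free for use):

CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace/DWT block  */
DWT->CYCCNT = 0;
DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start the cycle counter     */

uint32_t t0 = DWT->CYCCNT;
/* ... SDRAM write under test ... */
uint32_t cycles = DWT->CYCCNT - t0;
float us = (float)cycles / 200.0f;               /* 200 cycles per us at 200MHz */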

USB setup notes: I have the dedicated USB FIFO divided as: 0x80 for the RX FIFO, 0x40 for the control TX FIFO, and 0x80 for the TX FIFO. My USB structure is defined statically (not allocated with malloc).
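(If those sizes are in 32-bit words, as the HAL expects, the split corresponds roughly to the following calls; hpcd_USB_OTG_FS is the usual CubeMX handle name, assumed here:)

HAL_PCDEx_SetRxFiFo(&hpcd_USB_OTG_FS, 0x80);     /* shared RX FIFO: 0x80 words = 512 bytes */
HAL_PCDEx_SetTxFiFo(&hpcd_USB_OTG_FS, 0, 0x40);  /* EP0 (control) TX FIFO: 256 bytes       */
HAL_PCDEx_SetTxFiFo(&hpcd_USB_OTG_FS, 1, 0x80);  /* bulk IN endpoint TX FIFO: 512 bytes    */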

Other notes (to tie up loose ends from your initial questions): the stack and heap are allocated in internal RAM. In my original tests I -think- the processor was not writing to the framebuffer; only the DMA2D was writing data into it, but I’ll have to do some more tests to confirm.

So, with that, I am curious: Why does the device send these setup packets after the continuous SDRAM write? The other surprising element to this is the delay between the write and when the USB setup packet is transferred…

Thanks once more for all the help! 👍

I'd say it's your program which attempts to transmit through USB, probably unintentionally.

Instrument all instances where your program attempts to transmit through USB to confirm or disprove this.
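One way to do that, assuming the CubeMX CDC class where application sends go through CDC_Transmit_FS() in usbd_cdc_if.c (the wrapper name and counters are placeholders):

static volatile uint32_t cdc_tx_calls;      /* total transmit attempts             */
static volatile uint32_t cdc_tx_last_tick;  /* HAL tick of the most recent attempt */

uint8_t CDC_Transmit_Logged(uint8_t *Buf, uint16_t Len)
{
  cdc_tx_calls++;                    /* watch these in the debugger, or print them    */
  cdc_tx_last_tick = HAL_GetTick();  /* over SWO, and correlate with the SDRAM writes */
  return CDC_Transmit_FS(Buf, Len);
}

Route every application-side send through the wrapper (or put the same two lines at the top of CDC_Transmit_FS itself) and check whether the counter moves right after an SDRAM write burst.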

JW