How to get a TouchGFX demo working in DSI Video Mode

OLync.1
Associate III

I want to verify that the STM32L4R9 discovery kit can work in DSI video mode so that I can port the application to use a display driver that does not have a graphics buffer. I started with the TouchGFXWatch example and have that working. I then used TouchGFX Generator to change the Display Interface to Parallel RGB (LTDC) (per advice from the forum). This resulted in the initial screen being displayed, but no subsequent updates. I then changed the DSIHost to Video Mode, which resulted in stripy noise being displayed. I believe I am using the latest versions of all the tools: TouchGFX 4.19, CubeMX 6.5.0, STM32CubeIDE 1.8.0.

My forum reading suggests that this is hard, but possible. Can anyone enlighten me on what I need to modify under the hood to get this to work?

14 REPLIES
RetroInTheShade
Associate III

Hi,

The two steps you have listed are correct.

Some other items to investigate:

  • LTDC configuration (front/back porch timings etc.). These usually come from the display data sheet.
  • Display initialisation. The display is configured via DCS commands over the DSI link after boot. You may need to manually update the driver init function to configure the display for the specific video mode (burst, or non-burst with sync pulses or sync events). A rough sketch of the host-side video mode configuration follows this list.
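
To give an idea of what's involved on the host side, here is a rough sketch of a video mode configuration using the STM32 DSI HAL. It is not your generated code - every numeric value below is a placeholder that has to be derived from the panel datasheet, the handle name hdsi is just the usual CubeMX default, and (as I understand it) the horizontal timing fields are expressed in lane byte clock cycles rather than pixel clocks:

// Sketch only: host-side video mode setup. All values are placeholders taken
// from nowhere in particular - the real ones come from the panel datasheet and
// the generated CubeMX code.
DSI_VidCfgTypeDef VidCfg = {0};
VidCfg.VirtualChannelID     = 0;
VidCfg.ColorCoding          = DSI_RGB565;
VidCfg.Mode                 = DSI_VID_MODE_BURST;  // or DSI_VID_MODE_NB_PULSES / DSI_VID_MODE_NB_EVENTS
VidCfg.PacketSize           = 480;                 // active pixels per line
VidCfg.NumberOfChunks       = 0;
VidCfg.NullPacketSize       = 0;
VidCfg.HSPolarity           = DSI_HSYNC_ACTIVE_HIGH;
VidCfg.VSPolarity           = DSI_VSYNC_ACTIVE_HIGH;
VidCfg.DEPolarity           = DSI_DATA_ENABLE_ACTIVE_HIGH;
VidCfg.HorizontalSyncActive = 4;                   // placeholder, in lane byte clock cycles
VidCfg.HorizontalBackPorch  = 8;                   // placeholder, in lane byte clock cycles
VidCfg.HorizontalLine       = 522;                 // placeholder: HSA + HBP + HACT + HFP
VidCfg.VerticalSyncActive   = 4;                   // placeholder, in lines
VidCfg.VerticalBackPorch    = 8;                   // placeholder, in lines
VidCfg.VerticalFrontPorch   = 8;                   // placeholder, in lines
VidCfg.VerticalActive       = 480;
HAL_DSI_ConfigVideoMode(&hdsi, &VidCfg);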

There should be a DSI test pattern generation mode. This is worth investigating as a first step, as it allows you to isolate and test the DSI video mode functionality and ensure all timing parameters are correct before you add TouchGFX to the mix!
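
It is exposed directly by the DSI HAL, something like this (assuming the usual hdsi handle name from CubeMX):

// Drive colour bars from the DSI host itself, independent of the framebuffer
// contents. If the bars come out clean, the link and video timings are sound.
HAL_DSI_PatternGeneratorStart(&hdsi, 0, 0); // mode 0 = colour bars, orientation 0 = vertical
/* ... inspect the panel ... */
HAL_DSI_PatternGeneratorStop(&hdsi);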

Cheers.

OLync.1
Associate III

So I now have my TouchGFX project driving a display in DSI video mode. The remaining problem is display refresh: there are momentary black rectangles around the updated area as the display updates, probably caused by a lack of synchronization between TouchGFX frame buffer updates and LTDC/GFXMMU accesses. The code I started with has overrides for HAL_DSI_TearingEffectCallback() and HAL_DSI_EndOfRefreshCallback(), which I think don't occur in DSI video mode - calling these in the main loop gets the display to work, but with the flickering.

There is a generated HAL_LTDC_LineEventCallback() which is configured to be called when entering and exiting the active area. This code is very similar to the other callbacks, but display refresh does not happen when it is used and the others are disabled.

Is double buffering required for TouchGFX when operating in DSI video mode?

How should LTDC interrupts interact with TouchGFX (vSync() etc) when in DSI video mode?

Did you set the values in your init code to fit your display datasheet?

"Vidcfg...", "hltdc." and so on?

Double buffering is not needed just for video mode, but for transitions I had to activate the double buffer and the animation buffer.

OLync.1
Associate III

I think my display init values are good - static images display well. It is the transitions that cause the problem. What I'd like to understand is the required synchronization with the frame buffer when in DSI video mode. It seems that the LineEventCallback should be used to trigger events on entry to and exit from the active area. These events signal the TouchGFX framework to enable/disable frame buffer updates?

CubeMX generated the following code when I specified double buffering and provided both buffer addresses, but presumably more user code is required - for instance, how is buffer swapping managed?

void TouchGFXGeneratedHAL::initialize()
{
    HAL::initialize();
    registerEventListener(*(Application::getInstance()));
    setFrameBufferStartAddresses((void*)0x30000000, (void*)0x30070800, (void*)0);
}
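
My guess is that the missing piece is something along these lines (untested sketch - as far as I can tell setTFTFrameBuffer() is the hook the framework calls from HAL::swapFrameBuffers(); it may already be generated in TouchGFXGeneratedHAL, and with the GFXMMU in the path I'm not sure whether the address to program is the physical buffer or the GFXMMU virtual buffer):

extern LTDC_HandleTypeDef hltdc;

// Untested sketch: point the LTDC layer at the buffer the framework has just
// finished drawing. The framework decides which address to pass in; this
// override only has to program the hardware.
void TouchGFXHAL::setTFTFrameBuffer(uint16_t* address)
{
    HAL_LTDC_SetAddress(&hltdc, (uint32_t)address, 0); // layer index 0
}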

Exit0815
Senior

Playing around with my problems, I found something that may help you.

The DCS commands for the display interface driver are very time-sensitive. If the delays between some commands are not right, they will produce strange pixel behaviour.

Make sure that your init code for your display IC is correct.
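
For example (not your panel's real sequence - the exact command bytes and delays must come from the ST7701 datasheet or the vendor's init table; hdsi is just the usual handle name):

// Illustration only: a typical DCS init fragment. Getting the mandatory delays
// wrong is what produced the strange pixel behaviour for me.
HAL_DSI_ShortWrite(&hdsi, 0, DSI_DCS_SHORT_PKT_WRITE_P0, 0x11, 0x00); // Sleep Out
HAL_Delay(120);                                                       // panel needs time to wake
HAL_DSI_ShortWrite(&hdsi, 0, DSI_DCS_SHORT_PKT_WRITE_P0, 0x29, 0x00); // Display On
HAL_Delay(20);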

OLync.1
Associate III

I've now exhausted all the options I can think of to get rid of the flickering. My understanding is that it is an artifact of single buffering - there's always a chance that the framework will be writing into the framebuffer while the LTDC is reading it at the display refresh rate. So, I should change to double buffering. I tried using external RAM for this, but with no luck - I'm pretty sure the external RAM access times are too slow. I can fit two buffers into the internal RAM if I switch from 16 bpp to 8 bpp; my graphics requirements are simple and this will be OK. The STM32L4R9 datasheet indicates that it can support an 8 bpp framebuffer, but I'm unsure whether the TouchGFX Designer / framework does. Has anyone had experience with this?
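
My back-of-envelope arithmetic, ignoring any saving the GFXMMU gives for the circular area:

// STM32L4R9 internal SRAM is 640 KB = 655360 bytes.
enum
{
    FB_16BPP  = 480 * 480 * 2,   // 460800 bytes per buffer
    FB_8BPP   = 480 * 480 * 1,   // 230400 bytes per buffer
    TWO_16BPP = 2 * FB_16BPP,    // 921600 bytes - does not fit internally
    TWO_8BPP  = 2 * FB_8BPP      // 460800 bytes - fits with room to spare
};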

RetroInTheShade
Associate III

Hi Olync,

It can be a challenge to get the DSI video mode going.

Based on my experiences (albeit on different platforms), you should not be experiencing problems with double buffering using external RAM.

Can you provide a little more detail:

  • Your design is based on the STM32L4R9... have you incorporated the same PSRAM?
  • What display size are you using?
  • What video mode have you configured? Burst mode?
  • Is the Chrom-ART Accelerator configured for transfers?
  • Can you share relevant portions of your source?

Cheers.

OLync.1
Associate III

I am using the STM32L4R9 Discovery board, with the 16 Mbit PSRAM it comes with (IS66WV1M16EBLL-55BLI, 55 ns access time), accessed via the FMC.

Display is 480 x 480 circular (ST7701 driver chip without GRAM)

GFXMMU is used to save RAM

Video mode is burst mode

Chrom-ART Accelerator is used

I'm happy to share my source - which parts?

I gave up on external RAM because when I tried it the screen displayed mostly noise; at one setting (I can't remember what I'd changed) a glimmer of the image was discernible. I concluded that 480 x 480 x 60 fps = one read every 72 ns, which is too close to the PSRAM limits.

When I enabled the FMC, CubeMX complained that there was a configuration clash with I2C/OCTOSPI that I did not understand or resolve.

OLync.1
Associate III

Attached is an example of the flicker - I think the framework has invalidated the speedo needle region and is part way through writing to the framebuffer when the LTDC does the read. Synchronization is provided by the following generated code for DSI video mode:

void HAL_LTDC_LineEventCallback(LTDC_HandleTypeDef* hltdc)
{
    if (LTDC->LIPCR == lcd_int_active_line)
    {
        // entering active area
        HAL_LTDC_ProgramLineEvent(hltdc, lcd_int_porch_line);
        HAL::getInstance()->vSync();
        OSWrappers::signalVSync();
        // Swap frame buffers immediately instead of waiting for the task to be scheduled in.
        // Note: task will also swap when it wakes up, but that operation is guarded and will not have
        // any effect if already swapped.
        HAL::getInstance()->swapFrameBuffers();
        GPIO::set(GPIO::VSYNC_FREQ);
    }
    else
    {
        // exiting active area
        HAL_LTDC_ProgramLineEvent(hltdc, lcd_int_active_line);
        GPIO::clear(GPIO::VSYNC_FREQ);
        HAL::getInstance()->frontPorchEntered();
    }
}

I have varied many parameters to try to get rid of the flicker. They do change it, but not remove it completely. I think this is an intrinsic artifact of a single framebuffer in DSI video mode - the LTDC will be reading it at 60 Hz, and there's no guarantee that a framework redraw can be done atomically between LTDC frame reads?
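
To put rough numbers on that suspicion (the blanking line count below is a placeholder, not my actual timing):

// Placeholder timing values, just to get a feel for the scale of the problem.
const float frame_time_ms  = 1000.0f / 60.0f;   // ~16.7 ms per frame at 60 Hz
const int   active_lines   = 480;
const int   blanking_lines = 20;                // placeholder VSA + VBP + VFP
const float blanking_ms    = frame_time_ms * blanking_lines / (active_lines + blanking_lines); // ~0.67 ms
// Any redraw that takes longer than blanking_ms will overlap an LTDC read of
// the same (single) framebuffer - which looks exactly like the flicker on the
// speedo needle.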