Code structure for terminal application?

Martin294
Associate III

Hello all,

I have used TouchGFX for one project before (see https://www.eevblog.com/forum/projects/high-voltage-electronic-load-(500v-500ma)/msg4833872/#msg4833872)

which was lots of trial and error, following some tutorials. I still find the architecture (MVP) pretty confusing, so this time I am asking for your help to set this up properly:

I want to build a user-interface module (based on an existing board like the STM32F469I DISCO) that displays pre-defined screens. It will be connected to a host via a fast serial link (UART or SPI). From the host it might get commands like

Switch to screen 2

Put text "asdfasdf" in text field 23

set gauge 17 to value 1234... you get the idea.

In the opposite direction, the UI module should send messages for UI events like

button 5 pressed

slider 8 set to value 123

How do I structure the code? I'll have CubeMX generate the HAL interface for a full-duplex UART, generating interrupts for received characters. From there, a message handler would collect the characters and execute a command once a terminator (end of line) is received.
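Roughly, I picture the receive path like this sketch (huart3, the buffer size, and executeCommand() are placeholders I made up, not anything CubeMX generates):

#include "main.h"      // CubeMX-generated: pulls in the HAL and handles
#include <string.h>

#define CMD_BUF_LEN 64

extern UART_HandleTypeDef huart3;       // placeholder: whichever UART you use

void executeCommand(const char *cmd);   // hypothetical parser/dispatcher

static uint8_t  rxByte;
static char     cmdBuf[CMD_BUF_LEN];
static uint16_t cmdLen = 0;

void startReceiving(void)
{
    // Arm reception of a single byte; the callback below re-arms it.
    HAL_UART_Receive_IT(&huart3, &rxByte, 1);
}

// The HAL calls this from the UART interrupt when one byte has arrived.
// (Put it in a .c file, or wrap it in extern "C" when compiling as C++.)
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart3)
    {
        if (rxByte == '\n')                 // terminator: command complete
        {
            cmdBuf[cmdLen] = '\0';
            executeCommand(cmdBuf);         // note: runs in interrupt context!
            cmdLen = 0;
        }
        else if (cmdLen < CMD_BUF_LEN - 1)  // collect characters
        {
            cmdBuf[cmdLen++] = (char)rxByte;
        }
        HAL_UART_Receive_IT(&huart3, &rxByte, 1);   // re-arm for the next byte
    }
}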

In examples, I have seen FreeRTOS queues used for this kind of communication. Why would one want to do that? Why not call a function like "set gauge to value 123" directly from the message handler?

Thanks! Martin


Hello @Martin294,

I will try to explain from the beginning and hopefully answer your questions along the way. In the MVP architecture, your application's screens (the views) communicate with the hardware through the model, and the presenter is responsible for connecting each view to the model.

To be able to change your GUI from the hardware side, you first need to declare a virtual function in your ModelListener.hpp that the model calls whenever an action has to happen on the GUI. Each screen of your TouchGFX project has a corresponding view and presenter (these concepts are explained thoroughly in the TouchGFX documentation), and all the presenters inherit from ModelListener. In the relevant screen's presenter, you then override that virtual function so that it calls the view functions that implement your desired changes. So, if you want to push a change from the hardware onto your GUI, the path is model -> model listener -> presenter -> view, as in the sketch below.
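A simplified sketch of that path could look like this (gaugeValueChanged, Screen1, and the helper functions are only illustrative, and real presenters also inherit from touchgfx::Presenter):

class Model;   // generated by the TouchGFX Designer

// ModelListener.hpp (simplified): declare one virtual per event the
// model can raise. The empty default body means screens that do not
// care about an event simply ignore it.
class ModelListener
{
public:
    virtual ~ModelListener() {}
    void bind(Model* m) { model = m; }
    virtual void gaugeValueChanged(int value) {}
protected:
    Model* model = nullptr;
};

// Screen1View (normally generated, plus your user code): owns the widgets.
class Screen1View
{
public:
    void updateGauge(int value) { /* move the gauge widget here */ }
};

// Screen1Presenter: overrides the virtual and forwards the value to its view.
class Screen1Presenter : public ModelListener
{
public:
    Screen1Presenter(Screen1View& v) : view(v) {}
    virtual void gaugeValueChanged(int value) { view.updateGauge(value); }
private:
    Screen1View& view;
};

// Model.cpp: Model::tick() is called once per frame in the GUI task.
// When the serial side has delivered a new value, notify the presenter
// of whichever screen is currently active.
void Model::tick()
{
    if (newGaugeValueAvailable())                            // hypothetical helper
    {
        modelListener->gaugeValueChanged(latestGaugeValue()); // hypothetical helper
    }
}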

However, when the GUI should control the hardware (for instance, a button on the GUI that turns an LED on or off), the function calls go the other way: view -> presenter -> model, sketched below.
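A matching sketch for that direction (again with illustrative names):

// Screen1View.cpp: the Designer wires the button's click callback to a
// handler like this; the view only forwards the event upward.
void Screen1View::buttonClicked()
{
    presenter->userPressedButton(5);   // "button 5 pressed"
}

// Screen1Presenter.cpp: the presenter hands the event on to the model.
void Screen1Presenter::userPressedButton(int id)
{
    model->buttonPressed(id);
}

// Model.cpp: the model owns the link to the host, so this is where the
// "button 5 pressed" message gets queued for the UART transmitter.
void Model::buttonPressed(int id)
{
    sendButtonEvent(id);   // hypothetical serial-transmit helper
}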

The FreeRTOS message queues in the examples are just there to demonstrate how queues can be used; otherwise, you can call the functions of your hardware code directly from model.cpp. One practical reason to keep a queue is that your UART interrupt and the TouchGFX GUI task run in different contexts; a sketch of that pattern follows.
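For reference, a rough sketch of what the queue pattern from those examples looks like (queue depth and message layout are illustrative): the UART interrupt posts into the queue, and Model::tick(), which TouchGFX calls once per frame in the GUI task, drains it.

#include "FreeRTOS.h"
#include "queue.h"

// One small message per GUI update coming in from the host.
struct GuiMessage
{
    int widgetId;
    int value;
};

static QueueHandle_t guiQueue;   // created once at startup

void createGuiQueue(void)
{
    guiQueue = xQueueCreate(8, sizeof(GuiMessage));
}

// Called from the UART RX interrupt once a full command has been parsed.
void postFromIsr(const GuiMessage& msg)
{
    BaseType_t woken = pdFALSE;
    xQueueSendFromISR(guiQueue, &msg, &woken);
    portYIELD_FROM_ISR(woken);
}

// Model::tick() runs once per frame in the GUI task; draining the queue
// here means no GUI code ever executes in interrupt context.
void Model::tick()
{
    GuiMessage msg;
    while (xQueueReceive(guiQueue, &msg, 0) == pdTRUE)   // non-blocking
    {
        modelListener->gaugeValueChanged(msg.value);     // as in the sketch above
    }
}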

I hope I managed to clear some of the confusion. Don't hesitate to ask more questions!

Mohammad MORADI
ST Software Developer | TouchGFX
Martin294
Associate III

Thanks @Mohammad MORADI ESFAHANIASL, I'll try putting my interface logic in model.cpp. I'm sure I'll be back with more questions...

Regards, Martin