
TouchGFX Smart watch UI architecture design

DOkaz.1
Associate II

Hi,

Sorry for the long post. I was looking through the TouchGFX documentation for UI development and have some clarifying questions about the active-screen limitation, as well as some general software architecture questions.

I'm looking into creating a smartwatch UI with TouchGFX and am trying to figure out a software architecture that works within the framework's limitations, and whether my design is even possible.

For my design, I was thinking of having screens for each application/widget, a home screen that has icons to open said applications, a settings menu, and a screensaver screen.

According to the docs, it seems like there can only be one active screen at a time.

https://support.touchgfx.com/4.20/docs/development/ui-development/software-architecture/screen-definition-and-mvp

Given that limitation, I was curious how to handle background events like interrupts or triggers that need to dynamically change the UI. For example, notifications from unrelated applications, or media controls for videos/music that work no matter which application/screen is active.
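
For what it's worth, the pattern the TouchGFX docs describe for this is that the Model's `tick()` runs every frame, drains events posted from interrupts or RTOS tasks, and forwards them to whichever presenter is currently bound through the `ModelListener` interface. Below is a minimal self-contained sketch of that flow with the framework classes stubbed out; `NotificationEvent` and the method names on it are hypothetical, and a real project would use an RTOS message queue rather than a `std::deque`:

```cpp
#include <cassert>
#include <deque>

// Hypothetical event pushed from a background task or ISR.
struct NotificationEvent { int appId; };

// Stand-in for TouchGFX's ModelListener: the interface every presenter
// implements so the model can talk to whichever screen is active.
struct ModelListener {
    virtual ~ModelListener() {}
    virtual void notificationReceived(const NotificationEvent& e) = 0;
};

class Model {
public:
    // Called on every screen transition so the model always points at
    // the active screen's presenter.
    void bind(ModelListener* l) { listener = l; }

    // Called from the backend (ISR/task side in a real system).
    void postEvent(const NotificationEvent& e) { queue.push_back(e); }

    // Called once per frame by the framework; drains pending events and
    // notifies the active presenter.
    void tick() {
        while (!queue.empty()) {
            if (listener) listener->notificationReceived(queue.front());
            queue.pop_front();
        }
    }

private:
    ModelListener* listener = nullptr;
    std::deque<NotificationEvent> queue;  // stand-in for an RTOS queue
};
```

Because the model persists across screen changes, events that arrive while an unrelated screen is active can be buffered or reflected in model state, and the next bound presenter picks them up.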

I was also curious whether there are any constraints on how big a screen can be, or whether TouchGFX simply allocates the memory necessary for what's currently displayed. An example of an unbounded screen would be something like Google Maps, where the user can scroll/navigate to different areas and have more locations rendered onto the display. Is that even possible in TouchGFX? If so, how does it manage the RAM allocated to objects/UI artifacts that aren't on screen, and is there any control in software that I can have over that?
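
As a point of comparison, the usual embedded answer to "unbounded" content is windowing: keep RAM proportional to the viewport by giving live widgets only to items that currently intersect it, and recycle them as the user scrolls (TouchGFX's `ScrollList` does something like this for lists). The sketch below is not TouchGFX API, just the index math for a 1-D version of that idea:

```cpp
#include <algorithm>
#include <cassert>

struct Range { int first; int last; };  // inclusive item indices

// Given a 1-D "world" of itemCount items, each itemHeight pixels tall,
// return the indices that intersect a viewport of viewportHeight pixels
// starting at pixel offset. Everything outside this range needs no
// live widget and no RAM beyond its backing data.
Range visibleItems(int offset, int viewportHeight, int itemHeight, int itemCount)
{
    int first = std::max(0, offset / itemHeight);
    int last  = std::min(itemCount - 1, (offset + viewportHeight - 1) / itemHeight);
    return { first, last };
}
```

A 2-D map works the same way with tiles instead of rows: a handful of tile widgets get repositioned and re-filled as the viewport moves, so the "screen" can be arbitrarily large without the widget tree growing.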

I was also wondering whether there are any built-in performance-management daemons available in TouchGFX, or whether that isn't possible, or whether I would have to create it myself. For example, if a widget/application becomes unresponsive or starts to consume too many resources, could a daemon automatically kill the widget to restore functionality of the display?
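
As far as I know TouchGFX itself ships no such supervisor (the UI runs as a single task), but an application-level watchdog is straightforward to build on top: background tasks check in every tick, and anything silent for too long gets flagged so the UI can tear it down. This is an entirely hypothetical design, sketched self-contained:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Application-level watchdog: tasks call heartbeat() periodically; the
// supervisor (e.g. the Model's tick) asks for staleTasks() and decides
// what to kill or restart. Tick counts stand in for a real timebase.
class Watchdog {
public:
    explicit Watchdog(unsigned timeoutTicks) : timeout(timeoutTicks) {}

    void heartbeat(const std::string& task, unsigned nowTick) {
        lastSeen[task] = nowTick;
    }

    // Returns the tasks that have been silent longer than the timeout.
    std::vector<std::string> staleTasks(unsigned nowTick) const {
        std::vector<std::string> stale;
        for (const auto& kv : lastSeen)
            if (nowTick - kv.second > timeout)
                stale.push_back(kv.first);
        return stale;
    }

private:
    unsigned timeout;
    std::map<std::string, unsigned> lastSeen;
};
```

On an RTOS build the same idea is often delegated to the OS: give each application its own task and let a supervisor task (or the hardware IWDG) handle the truly stuck cases.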

For screen architecture, I was thinking of creating a base model that handles all the functionality common to the applications, and having each screen's model inherit from it to control the UI.
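
That maps fairly naturally onto TouchGFX's generated code, where every presenter already inherits from a common `ModelListener`; inserting one hand-written base class of your own between them gives every screen default handling of shared events while still allowing overrides. A minimal sketch of the inheritance idea, with illustrative class names and the framework stripped away:

```cpp
#include <cassert>
#include <string>

// Shared base for behavior every screen needs; concrete presenters
// override only what differs.
struct BasePresenter {
    virtual ~BasePresenter() {}

    // Default behavior shared by all screens, e.g. show a banner.
    virtual std::string onNotification(const std::string& text) {
        return "banner: " + text;
    }
};

struct MusicPresenter : BasePresenter {
    // The music screen overrides the default to show the text inline.
    std::string onNotification(const std::string& text) override {
        return "inline: " + text;
    }
};
```

Since the model typically talks to presenters through a pointer to the base interface, the dispatch stays polymorphic: whichever screen is active gets its own behavior without the model knowing which screen that is.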

Another way could be to have background processes that are responsible for updating the current active screen, which then changes the UI when it receives specific triggers. Or maybe a main process that just fully controls the entire UI dynamically?

For example, saving each model/view pair on a stack to keep track of the order of opened applications, so that when one application is closed the last opened application is resumed, or has its state saved so that it can be reopened where it left off. With this architecture there would be one presenter and multiple model/view pairs that get swapped depending on the user's actions?
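
One caveat with that idea: since TouchGFX destroys the outgoing view on every screen transition, a back-stack can only hold lightweight restore state (a screen id plus whatever is needed to rebuild the view), not live view objects. A hypothetical sketch of such a navigation stack:

```cpp
#include <cassert>
#include <stack>

// Illustrative screen ids and per-screen restore state; a real app
// would store whatever each screen needs to rebuild itself.
enum ScreenId { HOME, MUSIC, MAPS };

struct ScreenState { ScreenId id; int scrollOffset; };

class NavStack {
public:
    // Record a newly opened screen.
    void open(ScreenState s) { stack.push(s); }

    // Close the current screen and return the state to restore;
    // falls back to the home screen when the stack empties.
    ScreenState back() {
        if (!stack.empty()) stack.pop();
        return stack.empty() ? ScreenState{ HOME, 0 } : stack.top();
    }

private:
    std::stack<ScreenState> stack;
};
```

The returned `ScreenState` would then drive a normal TouchGFX screen transition, with the new screen's `setupScreen()` applying the saved state so the application appears to resume where it left off.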

The UI design process seems pretty rigid, being WYSIWYG; however, I was wondering whether it is possible to dynamically generate the UI at runtime. For example, the home screen showing a grid of icons for the applications currently installed on the device.
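
My understanding is that the Designer output is static, but the generated `setupScreen()` is ordinary C++ that hand-written code can extend, so widgets can be positioned (or drawn from a pre-allocated pool) in a loop at runtime. This sketch only computes grid positions for n app icons; the dimensions are illustrative, and none of it is TouchGFX API:

```cpp
#include <cassert>
#include <vector>

struct Pos { int x; int y; };

// Lay out n icons left-to-right, top-to-bottom in a grid of `cols`
// columns, each cell cellW x cellH pixels. The caller would assign
// these coordinates to pooled icon widgets in setupScreen().
std::vector<Pos> gridLayout(int n, int cols, int cellW, int cellH)
{
    std::vector<Pos> out;
    for (int i = 0; i < n; ++i)
        out.push_back({ (i % cols) * cellW, (i / cols) * cellH });
    return out;
}
```

The one constraint to plan for is that embedded builds avoid heap allocation, so "dynamic" usually means a fixed-capacity pool of widgets whose positions, bitmaps, and visibility are set at runtime, rather than widgets created on demand.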

I know that this kind of software isn't really meant for this kind of a dynamic load, but was trying to see if I could make it work!

Please let me know if you have any thoughts on these ideas and whether they are possible, and if some aren't, whether there are alternatives better suited to this use case.

Thanks!
