
Feature request: smaller generated image source files (e.g. image_EXAMPLE_IMAGE.cpp)

Currently image source files take up a lot of drive space.

My image folder TouchGFX\generated\images\src is 44.5MiB on my drive.

TouchGFX projects can take up a lot of drive space as is (https://community.st.com/t5/stm32-mcus-touchgfx-and-gui/gfxdesigner-option-to-auto-delete-touchgfx-folder-upon-exit/m-p/724758#M39655). So these files just add to it.

You could use build scripts or CMake to embed data from binary files into temporary source files, but making that platform-independent can be quite a hassle. It would save a lot of drive space, though, since binary files take up far less space than source files full of hex constants. (For example, using OBJCOPY.)
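For illustration, here is roughly what the OBJCOPY route could look like with GNU binutils. The file name, output target, and section name are made up for this sketch; a real embedded build would use the cross toolchain's arm-none-eabi-objcopy and a matching output target instead of the host one shown here:

```shell
# Stand-in for a generated binary image (8 bytes of pixel data).
printf '\x00\x01\x02\x03\x04\x05\x06\x07' > image_test_image.bin

# Wrap the raw bytes in a linkable object file. GNU objcopy creates the
# symbols _binary_image_test_image_bin_start / _end / _size automatically.
objcopy -I binary -O elf64-x86-64 -B i386:x86-64 \
    --rename-section .data=ExtFlashSection \
    image_test_image.bin image_test_image.o
```

The application would then declare extern const unsigned char _binary_image_test_image_bin_start[]; and let the linker script place the ExtFlashSection.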

There are ways to shrink the source files. One is to use uint64_t hex constants instead of uint8_t hex constants, which cuts the per-byte overhead of the "0x" prefixes, commas, and spaces.
Before:

LOCATION_PRAGMA("ExtFlashSection")
KEEP extern const unsigned char image_test_image[] LOCATION_ATTRIBUTE("ExtFlashSection") = { // a x b RGB565 pixels.
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 
0x0C, 0x0D, 0x0E, 0x0F, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 
0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, ...
};

 

After:

header file:

#include <array>   // std::array
#include <cstddef> // size_t
#include <cstdint> // uint8_t, uint64_t

template <typename... T>
constexpr std::array<uint8_t, sizeof...(T)*8> u64_array_to_u8_array(T&&... t)
{
    std::array<uint8_t, sizeof...(T)*8> out{};
    std::array<uint64_t, sizeof...(T)> in = {static_cast<uint64_t>(t)...}; // uint64_t, not int64_t: the constants are unsigned

    for (size_t i = 0; i < sizeof...(T)*8; ++i) {
        out[i] = static_cast<uint8_t>(in[i/8] >> ((7-i%8)*8)); // big endian
    }
    return out;
}

 

smaller source file:

LOCATION_PRAGMA("ExtFlashSection")
KEEP extern const auto image_test_image LOCATION_ATTRIBUTE("ExtFlashSection") =  // a x b RGB565 pixels. const auto: the helper returns a std::array<uint8_t, N>, which cannot initialize a plain unsigned char[].
u64_array_to_u8_array(
    0x0001020304050607,0x08090A0B0C0D0E0F,
    0x1011121314151617,0x18191A1B1C1D1E1F, ...
);

 

Large files can easily be 50% smaller on disk this way: "0x00, " costs 6 characters per byte, while "0x0001020304050607," costs 19 characters for 8 bytes, about 2.4 characters per byte. Even more is possible with wider lines and without whitespace.
I don't know how this affects compression of the source files.

Using base64 would work too, but that would require C++17 or C++20, and by default TouchGFX uses C++14. It also won't compress as well in zip or in git. Here is an example of base64: https://stackoverflow.com/a/79042473/15307950

 

Kudo posts if you have the same problem and kudo replies if the solution works.
Click "Accept as Solution" if a reply solved your problem. If no solution was posted please answer with your own.
13 REPLIES
Andrew Neil
Evangelist III

Now, I remember the days when 20MiB was a good size for a hard drive, but with terabyte drives being commonplace nowadays - is this really an issue?

Without trimming some files and removing build artifacts, a TouchGFX project folder easily grows to 1GiB. I find that a lot for embedded software source code. My folder of TouchGFX projects is more than 10GiB, which is still significant disk space. All these projects are also stored in version control and backed up, so every byte counts multiple times.
I keep several versions of TouchGFX installed, each about 1.5GiB, and STM32CubeIDE is 3GiB.
For my work PC this is not a problem, but my home PC runs out of disk space quite easily.

This may not seem like a lot, but combined with other methods (removing unused files - see the other topic I linked in my post), I think this can save many users a lot of disk space, and it is worth at least considering.


The first example is 20% spaces

But yes, I would generally agree that outputting 32-bit words or 64-bit double words would be more efficient, though it does create some portability issues related to endianness.

Tips, Buy me a coffee, or three.. PayPal Venmo
Up vote any posts that you find helpful, it shows what's working..

@Tesla DeLorean wrote:

it does create some portability issues related to endianness.


I use shifting, so no assumption is made about endianness and it will always work. But simply using a uint64_t array directly would also work, as long as it matches the endianness of the platform.
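A small sketch of the difference (arbitrary constant, nothing TouchGFX-specific):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

void endianness_demo()
{
    const uint64_t v = 0x0011223344556677ULL;

    // Shifting selects bytes by value, so the result is the same on
    // little- and big-endian targets: the most significant byte first.
    const uint8_t msb = static_cast<uint8_t>(v >> 56);
    assert(msb == 0x00);

    // Reading the same value through memory exposes the host byte order:
    uint8_t raw[8];
    std::memcpy(raw, &v, sizeof raw);
    assert(raw[0] == 0x77 || raw[0] == 0x00); // 0x77 little endian, 0x00 big endian
}
```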


Ok, but how much of the 10GB is clag generated at compile time?

One of the problems with bitmaps is holding them in original and intermediate forms, perhaps multiple copies.

Having them as C arrays makes for compatibility and compilability, but not for image editing.

The linker should be able to pull in binary data; other platforms allow binaries to be included directly via assembler or linker directives, but this is highly non-portable.

Lots of choices and options; most are unlikely to be desirable to the entire audience.

Editing tools (image) that can natively import/export a C source form would remove a number of headaches and the need to hold multiple forms of the same data.


@Tesla DeLorean wrote:

it does create some portability issues related to endianness.


But in the specific context of TouchGFX, is that an issue?

Hello @unsigned_char_array ,

Thanks a lot for your input. It is a valid one, and I will share it with the rest of the team to consider for future releases, but I cannot guarantee when it will be implemented.

Best regards,

Mohammad MORADI
ST Software Developer | TouchGFX

Here in 2024, I mean it is a waste of energy to swap these 8-bit constants for 64-bit ones. I am waiting for AI here, and for TouchGFX to show the generated source compared across multiple formats such as jpg, png, and vector, and let you choose...


@MM..1 wrote:

Here in 2024, I mean it is a waste of energy to swap these 8-bit constants for 64-bit ones.


It's a tradeoff. The compiler will take a fraction of a second longer and slightly more energy to process the file, but you save a lot of drive space. Your repo will be smaller too, which also means less server space for remote repos, less space in backups, and faster cloning. Not everyone has a 2TiB drive; many people still have 500GiB or even just 250GiB.

 


@MM..1 wrote:

I am waiting for AI here


An AI model will not save drive space in this scenario; those models can be huge. I don't see the need for AI here.

 
