Are all RAM-sections used in this linkerscript? (STM32H743ZI)

Kristof Mulier
Associate III
Posted on July 13, 2018 at 14:37

 

 

The original post was too long to process during our migration. Please click on the attachment to read the original post.
Posted on July 13, 2018 at 15:41

It is done that way because it is simple and efficient to do so.

If you want exotic code and data placement you're going to have to take ownership of that task based on your own specific requirements and goals.

You can floor-plan the sections and objects within the linker script, and then micro-manage that with #pragma and attribute settings down to a function/variable level.

Memory usage is going to depend on the application: whether you need DMA access, whether you have coherency requirements, how fast the memory is, cache utilization, etc. It gets complicated and unique quickly. Thus it is left to the responsible engineer to make the determination and balance the trade-offs.

Tips, Buy me a coffee, or three.. PayPal Venmo
Up vote any posts that you find helpful, it shows what's working..
Kristof Mulier
Associate III
Posted on July 14, 2018 at 13:48

Thank you, Turvey.Clive.002.

1. Without linkerscript modifications

So when using this linkerscript - without modifications - all data ends up in the DTCMRAM, right? What happens if the data is bigger than 128K? Will it overflow into the other RAM sections, or will the linker throw an error?

2. With linkerscript modifications

If I understand you correctly, one can micro-manage where data (or even code) ends up in the RAM. To do that, one has to make modifications in both the linkerscript and the source code. Could you give an example of how to do that precisely? I've never done it before.

Thank you very much ^_^

Posted on July 14, 2018 at 14:49

The GNU/LD linker isn't super smart when it comes to embedded; tools from Keil and IAR are a bit more aware of multiple memory regions.

Generally linkers find it hard to split/spill allocations between regions, and the tools usually don't have a chip-level understanding of the memory speed or how many buses it is away from the processor.

The linker will throw an error if it can't fit things based on your script.
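To sketch where that error comes from: a MEMORY block like the one below (region names and sizes resembling the CubeMX-generated STM32H743ZI script) caps each region, and if .data plus .bss exceed the 128K of DTCMRAM the linker stops with a "region `DTCMRAM' overflowed" style error rather than spilling into RAM_D1:

```ld
MEMORY
{
  FLASH (rx)    : ORIGIN = 0x08000000, LENGTH = 2048K
  DTCMRAM (xrw) : ORIGIN = 0x20000000, LENGTH = 128K
  RAM_D1 (xrw)  : ORIGIN = 0x24000000, LENGTH = 512K
  RAM_D2 (xrw)  : ORIGIN = 0x30000000, LENGTH = 288K
  RAM_D3 (xrw)  : ORIGIN = 0x38000000, LENGTH = 64K
  ITCMRAM (xrw) : ORIGIN = 0x00000000, LENGTH = 64K
}
```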

    *(.data)           /* .data sections */

    *(.data*)          /* .data* sections */

For a variable arrayfoo in foo.c built with -fdata-sections, the compiler emits a per-symbol section named .data.arrayfoo (or similar), which matches the second, wildcard pattern.

You could use more specificity, and direct allocations to a specific region

  foo.o(.data*) /* .data* sections from foo.c, the compiler is providing an object to the linker, so foo.o */

uint32_t arrayfoo[100]  __attribute__ ((section(".dtcmram")));

    *(.dtcmram)           /* .dtcmram sections */

    *(.dtcmram*)          /* .dtcmram* sections */
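Tying the two halves together, a hypothetical output section in the SECTIONS block could collect those input sections and route them to the DTCMRAM region (names here mirror the fragments above; AT> FLASH is only needed if the data is initialized, in which case startup code must also copy it down):

```ld
.dtcmram_data :
{
  *(.dtcmram)           /* .dtcmram sections */
  *(.dtcmram*)          /* .dtcmram* sections */
} >DTCMRAM AT> FLASH
```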

The H7 gets rather complicated, memory is spread around, and some requires clocks to be enabled, so startup code or SystemInit needs to bring that up before it copies data from ROM to RAM, or zeros other statics.

The F7 had some things where RAM mapped/shadowed to two locations, to give the appearance of one larger contiguous region.

Another option is a heap allocator that can manage multiple regions, or one where you can specify the memory pool to draw from. As memory expands, one should consider using malloc() during initialization rather than large static allocations. This allows buffer sizes to be configured by the user (or by run-time conditions) rather than frozen at compile/link time.

Stacks want to be in the fastest/closest memories, as using the wrong memory will drag down overall performance dramatically.

Linkers have been doing these tasks for multiple decades, sometimes achieving them in different ways, and it was a lot easier when they were very aware of the one specific cpu/machine they were targeting.
