2025-05-28 7:12 AM
I'm using the NUCLEO-H7S3L8 and have it set up with 192K of DTCM memory. However, when I start to allocate more than 64K of DTCM memory, the .elf file fails to load in the debugger. If I stay below 64K there is no problem.
volatile uint8_t __attribute__((section (".dtcm_data"))) __attribute__((aligned(32))) gShadowPQ_WFB[65536] = {1, 2, 3, 4, 5, 6, 7, 8};
volatile uint8_t __attribute__((section (".dtcm_data"))) __attribute__((aligned(32))) gShadowS1_WFB[8192]; //<-- gives problems with loading of STS_Bias.elf
2025-05-28 7:30 AM
Did you set the DTCM_AXI_SHARED option byte to allocate the max 192 kB to DTCM? Show it in CubeProgrammer.
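If it helps, a quick way to dump the current option bytes is the STM32CubeProgrammer command-line tool (a minimal sketch; the exact label used for DTCM_AXI_SHARED in the listing may differ by tool version):

STM32_Programmer_CLI -c port=SWD -ob displ   /* connect over SWD and display all option bytes */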
2025-05-28 10:50 AM
@TDK I set these bits via the MX tool, and they appear to increase the memory size. However, specifying more than 64K in my code still results in the .elf file failing to load in the debugger.
2025-05-28 11:49 AM - edited 2025-05-28 11:49 AM
.dtcm_data :
{
. = ALIGN(4);
*(.dtcm_data) /* Include data */
. = ALIGN(4);
} AT> DTCM
In this part of the .ld file, remove the "AT". If you want initialized data in DTCM or ITCM, please check the examples.
For simplicity and robustness, initialize variables in non-default data segments at runtime. Compile-time initialization requires extra startup code and is error-prone.
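For example (a minimal sketch, assuming the buffers do not need compile-time initialization; section and region names are the ones already used in this thread), the output section can be marked NOLOAD and placed directly in the DTCM region:

.dtcm_data (NOLOAD) :
{
  . = ALIGN(4);
  *(.dtcm_data)   /* uninitialized DTCM buffers */
  . = ALIGN(4);
} > DTCM

Any values the code needs can then be written at runtime, e.g. memset((void *)gShadowPQ_WFB, 0, sizeof gShadowPQ_WFB); early in main().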
2025-05-28 12:02 PM
Thanks, this is good to know. I do not need them initialized at all; I only did so for debugging purposes. However, I still have the same issue: the .elf file cannot be loaded if I place more than 64K in the DTCM area. I want to place some large arrays, at most 64K in size, and just cannot seem to do so without the .elf file refusing to load.
2025-05-28 12:23 PM
Then there must be other problems in the .ld file or in the code.
Try to obtain a more detailed log from the debugger.
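One thing worth double-checking (an assumption on my part, not something shown in this thread): the MEMORY block in the .ld file must also be enlarged to match the option-byte setting, otherwise the linker region stays at the default 64K even though the hardware now provides 192K. A sketch of the relevant excerpt:

MEMORY
{
  /* ... other regions unchanged ... */
  DTCM (xrw) : ORIGIN = 0x20000000, LENGTH = 192K   /* must match the DTCM_AXI_SHARED option byte; default scripts often say 64K */
}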