2025-03-11 6:13 PM
Hi all, I have an STM32H743 project that I've been working on. I've worked on a fair number of projects in STM32CubeIDE in the past, but now I have a problem I haven't encountered before: the linker is putting my .bss section in flash as well as RAM. I'm not sure why an uninitialized section would be placed in flash... I've attached the output from the build analyzer as well as my linker script. The script is almost the default, except that I've added D2/D3 RAM and moved the stack & heap to DTCM. I've used very similar scripts in other projects, and .bss ends up only in RAM in those.
Can anyone find what I'm doing wrong? Thanks!
2025-03-11 6:37 PM - edited 2025-03-11 7:41 PM
Can you show any of your other "good" .ld files?
These lines outside of section blocks are suspicious:
. = ALIGN(4);
2025-03-11 6:38 PM
Not sure I'd rely on the memory view to understand what's actually happening.
Use an ELF dumping tool like OBJDUMP, OBJCOPY or FROMELF, and see what's getting into PROGBITS / NOBITS sections, and how large the .bss footprint in FLASH actually is vs. the amount of RAM being cleared.
The .MAP might also help understand what's going on.
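As a rough sketch with the GNU tools (assuming the ELF is named project.elf; substitute your actual build output):

  arm-none-eabi-readelf -S project.elf   # section table: .data should be PROGBITS, .bss should be NOBITS
  arm-none-eabi-objdump -h project.elf   # section headers with VMA/LMA and CONTENTS/ALLOC flags
  arm-none-eabi-size -A project.elf      # per-section sizes, to compare what lands in FLASH vs RAM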
I don't have the fortitude or motivation to fight the .LD syntax and ordering for you, that'll need to be your battle. But it's likely to be something to do with the ordering and attributes that the linker thinks it's working with. It's not using a "best fit" or "multi-pass" approach, but for the most part a linear traversal of rules.
2025-03-11 7:06 PM
Right! Humans should not manually edit makefiles and .ld scripts. In the AI era, this will soon be beyond the capability of a typical human ))
2025-03-11 7:55 PM
Here's an .ld from another project that does the right thing with the .bss section. That project didn't need as much memory, so I didn't have to add D2RAM/D3RAM sections.
If there's no obvious error to point out, I'll have to dig into this further. I know next to nothing about the .ld format, so I guess it's time to learn!
2025-03-11 10:56 PM
Again - try to remove lines " . = ALIGN(4);" that are not within any { section }
(lines 139, 155).
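Roughly this shape (a sketch based on the CubeIDE default layout, not your actual script; region names assumed):

  .data :
  {
    . = ALIGN(4);      /* alignment statements inside the braces are fine */
    *(.data)
    *(.data*)
    . = ALIGN(4);
  } >RAM_D1 AT> FLASH

  . = ALIGN(4);        /* <- stray statement outside any output section: remove */

  .bss :
  {
    . = ALIGN(4);
    *(.bss)
    *(.bss*)
    *(COMMON)
    . = ALIGN(4);
  } >RAM_D1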
2025-03-12 10:00 AM
Thanks for catching that, not sure why that isn't flagged as an error. Unfortunately, it doesn't seem to help.
I started going through the documentation and tried some things. In the end, it appears that Tesla D. is right: it's all in the ordering. Putting the .bss section before .data in the script fixed it. It appears that, because .data is placed in both RAM_D1 and Flash (run address in RAM_D1, load image in Flash), any section that follows it and is placed in RAM_D1 automatically gets a load address in Flash as well.
Weird, though, that my other project's linker script works fine and it has .bss after .data.
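For reference, the ordering that works for me now looks roughly like this (a simplified sketch, not verbatim from my script; the symbol names are the CubeIDE defaults):

  /* .bss placed before .data so it never inherits .data's FLASH load region */
  .bss :
  {
    . = ALIGN(4);
    _sbss = .;
    *(.bss)
    *(.bss*)
    *(COMMON)
    . = ALIGN(4);
    _ebss = .;
  } >RAM_D1

  /* initialized data: run address in RAM_D1, load image stored in FLASH */
  _sidata = LOADADDR(.data);
  .data :
  {
    . = ALIGN(4);
    _sdata = .;
    *(.data)
    *(.data*)
    . = ALIGN(4);
    _edata = .;
  } >RAM_D1 AT> FLASH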
I'm going to forge ahead, cautiously optimistic, with what I've got and see what happens. Fortunately, this project has a long way to go before production.
Thanks everyone!
2025-03-12 10:18 AM
Well, at the very least it's extremely frustrating, and I'm not looking for unpaid tasks. GNU LD scripts could have a book written about them. I've written linkers in the past, so I'm not unfamiliar with the goals; it just makes dealing with intractable ones more frustrating.
The Arduino implementation at least constructs its own scatter tables.
The Keil scatter files and load regions are far less frustrating, and they're my tools of choice. Still, working-set optimization is something I'd opt to automate rather than attack manually.
2025-03-12 2:45 PM
> It appears that, because .data is placed in both RAM_D1 and Flash, any section that follows it that is placed in RAM_D1 is automatically also placed in Flash.
Interesting observation! One of so many "bug or feature?" moments. In any case: (NOLOAD) helps with anything that should not go to the image.
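For example, something like this (a sketch with a made-up section name, not taken from your script) keeps a RAM_D3 buffer region out of the flash image even though it comes after .data:

  /* lives in RAM_D3 at run time; (NOLOAD) tells the linker nothing needs to be stored in the image */
  .dma_buffers (NOLOAD) :
  {
    . = ALIGN(32);
    *(.dma_buffers)
    *(.dma_buffers*)
  } >RAM_D3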
2025-04-21 6:24 AM
Got the same issue. Just want to report that Tesla D.'s and your detailed suggestions work!
I had a more complex memory setup with this issue (.bss in flash). Initially I tried to simplify the code and the script, without any luck. Then I posted the linker script and the default ST scripts generated by STM32Cube to almost all of the LLMs, including the latest generations trained a few weeks ago, paid services included. ALL of them failed to find the issue; no clue about what was happening. I could include here the long list of possible issues the LLMs described, some in an authoritative (almost superior) tone, telling me that I was wrong in my large-array memory definitions.
We are still far from LLMs finding issues in problems introduced by humans. LLMs produce basic code that can help in some cases, but as soon as you need something sophisticated or more complex, they all fail. Training is based on what is public; the best code and the most innovative concepts are not public, which means they are not part of the training. We are consuming the public data along with a large amount of garbage. I'm anticipating a new generation of code full of bugs, problems, inefficiencies, and demands for exaggerated hardware.
For the embedded world, the impact will be noticeable. For other computing platforms, adding more hardware will solve the problem, so the LLMs with terrible code output will work either way.
Thank you Tesla D. and ron239955 for removing a nightmare of a day from my life.