2017-09-12 07:52 AM
Hi,
I'm using an STM32F415 with the ThreadX OS and NetX Duo.
When I try to allocate some data, it crashes once I'm over 64 KB.
I have tried configuring OpenOCD with this setting, but nothing changes:
configure -work-area-phys 0x20000000 -work-area-size 0x15000 -work-area-backup 0
Ludo
#sram #stm32f4 #memory #openocd
2017-09-12 08:03 AM
I have no idea what ThreadX OS and NetX Duo are, but a 64 kB limit sounds like you are allocating your variables into the CCMRAM, which is 64 KByte large and does not form a contiguous memory space with SRAM1 and SRAM2.
IMO OpenOCD has nothing to do with variable allocation; unless you have some dynamic allocation in mind, you have to allocate the variables using the facilities of the linker you are using.
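The placement JW describes can be sketched with GCC's section attribute. This is a minimal hypothetical sketch, not code from this thread: the section name `.ccmram`, the region name `CCMRAM`, and the buffer are all assumptions, and the matching output section must actually exist in your own .ld file.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: with GCC/GNU ld, a variable can be steered into
 * CCMRAM by assigning it to a named section. The names ".ccmram" and
 * "CCMRAM" are assumptions -- the linker script must define them, e.g.:
 *
 *   MEMORY { CCMRAM (rw) : ORIGIN = 0x10000000, LENGTH = 64K ... }
 *   SECTIONS { .ccmram (NOLOAD) : { *(.ccmram) } > CCMRAM ... }
 */
__attribute__((section(".ccmram"))) static uint8_t big_buffer[32 * 1024];

/* Small accessor so the placement can be exercised. */
size_t ccm_buffer_size(void)
{
    big_buffer[0] = 0xAA;   /* touch the buffer so it isn't optimized out */
    return sizeof big_buffer;
}
```

Note that on the STM32F4 the CCM data RAM is reachable only by the CPU, not by DMA, so only non-DMA buffers are candidates for moving there.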
JW
2017-09-12 08:05 AM
OK, perhaps you should look specifically at the code that is crashing, and at the address it is using.
Are the heap and stack crashing into each other? Where is each situated, and how large are the initial allocations?
Review the .MAP file to understand the memory utilization. Review the .LD file describing the memory usable by the linker.
The program is crashing, not the debugger, right? Why would changing the debugger settings change the code as built and linked?
2017-09-12 10:21 AM
Thanks for your reply,
Where can I verify whether I'm working in CCMRAM?
2017-09-12 10:28 AM
Thanks for this reply,
My program works fine when I reduce all the buffer sizes (static RAM = 64250 bytes).
When I use the normal values for the buffers (static RAM = 70125 bytes), the application stops.
You're right about the debugger, but I tried it in desperation.
2017-09-12 11:01 AM
The linker script and map file would describe a region around 0x10000000..0x1000FFFF
2017-09-12 11:05 AM
Well, if you have a Hard Fault Handler that isn't a while(1), you might learn whether you're ending up in there.
Do you use malloc()? Do you check if it returns NULL?
It stops where?
Use the debugger to better understand what your code is doing. Output diagnostic/telemetry data to better understand the flow, and sanity-check pointers, allocations, stack depth, etc.
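The checks being suggested can be sketched as a small probe routine (a hypothetical helper, not code from this thread) that allocates in fixed-size chunks, tests every malloc() result against NULL, and prints the returned pointers so you can see where in RAM the heap stops growing:

```c
#include <stdio.h>
#include <stdlib.h>

/* Probe the heap: allocate up to max_chunks blocks of chunk_size bytes,
 * printing each returned pointer, and report how many succeeded before
 * malloc() returned NULL. The blocks are deliberately never freed --
 * this is a one-shot diagnostic, not production code. */
static size_t fill_heap(size_t chunk_size, size_t max_chunks)
{
    size_t n = 0;
    while (n < max_chunks) {
        void *p = malloc(chunk_size);
        if (p == NULL) {          /* heap exhausted: malloc reports it */
            printf("malloc failed after %zu chunks\n", n);
            break;
        }
        printf("chunk %zu at %p\n", n, p);
        n++;
    }
    return n;
}
```

Running this once at startup shows both the addresses the allocator hands out and the point at which it gives up, which is exactly the information discussed later in this thread.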
2017-09-12 02:53 PM
Hard Fault Handler
2017-09-13 03:00 AM
Hi,
I can't use the debugger in my project; it isn't set up yet.
You are right, it isn't a crash; malloc is just returning NULL.
I printed the pointer returned by malloc(), and it works up to 0x2001B3FF (so I'm not in CCMRAM).
0x1B3FF = 111615 bytes; in the linker script I have SRAM (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
In the MAP file, I have 63656 bytes of static RAM.
If I understand correctly, with 63656 bytes of static RAM I should be able to use the SRAM up to
0x20020000 (0x20000000 + 128K).
Why does malloc() return NULL when I'm over 0x2001B3FF?
2017-09-13 03:20 AM
As I recall, the standard GNU/GCC allocator defines the heap as the space between the end of the statics and the bottom of the stack.
That would mean you're trying to allocate more space than is available.
Try moving some of your statics, or the heap, to the CCM RAM.
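The numbers in this thread can be sanity-checked with that model, assuming a newlib-style `_sbrk()` that grows the heap upward from the end of the statics and refuses to cross into the stack growing down from 0x20020000 (the helper names below are illustrative):

```c
/* Arithmetic check using the figures quoted in this thread:
 * statics end at 0x20000000 + 63656, and the last address malloc()
 * successfully served was 0x2001B3FF. */

/* Heap space malloc() actually delivered before returning NULL. */
static unsigned long heap_usable_bytes(void)
{
    const unsigned long heap_start = 0x20000000UL + 63656UL; /* end of statics */
    const unsigned long heap_top   = 0x2001B3FFUL + 1UL;     /* first refused addr */
    return heap_top - heap_start;                            /* = 47960 bytes */
}

/* Space left untouched below the top of the 128K SRAM region. */
static unsigned long gap_below_ram_top(void)
{
    return 0x20020000UL - (0x2001B3FFUL + 1UL);              /* = 19456 bytes */
}
```

The roughly 19 KB gap below 0x20020000 is consistent with the stack (plus whatever minimum-stack margin the startup code reserves), which would explain why malloc() gives up well before the end of SRAM rather than at 0x20020000.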