stm32f415 increase 64k RAM data

ludovic Wiart
Associate II
Posted on September 12, 2017 at 16:52

Hi,

I'm using an STM32F415 with ThreadX and NetX Duo.

When I want to allocate some data, it crashes once I'm over 64 KB.

I have tried configuring OpenOCD with the following, but it made no difference:

configure -work-area-phys 0x20000000 -work-area-size 0x15000 -work-area-backup 0

Ludo

#sram #stm32f4 #memory #openocd
Posted on September 12, 2017 at 17:03

I have no idea what ThreadX and NetX Duo are, but a 64 KB limit sounds like you are allocating your variables into the CCMRAM, which is 64 KB large and does not form a contiguous memory space with SRAM1 and SRAM2.

IMO OpenOCD has nothing to do with variable allocation; unless you have some dynamic allocation in mind, you have to place variables using the facilities of the linker you are using.
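As a minimal sketch (region names and lengths are illustrative, not taken from the poster's actual .ld file), the two areas typically appear as separate regions in a GCC linker script for this part:

```
MEMORY
{
  FLASH  (rx)  : ORIGIN = 0x08000000, LENGTH = 1024K
  SRAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 128K  /* SRAM1 + SRAM2, contiguous */
  CCMRAM (rw)  : ORIGIN = 0x10000000, LENGTH = 64K   /* CPU-only, NOT contiguous with SRAM */
}
```

Anything the linker places in the CCMRAM region is capped at 64 KB and cannot spill over into SRAM.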

JW

Posted on September 12, 2017 at 17:05

OK, perhaps you should look specifically at the code that is crashing, and the address it is using.

Are the heap/stack crashing into each other? Where is each situated, and how large are the initial allocations?

Review the .MAP file to understand the memory utilization. Review the .LD file describing the memory usable by the linker.

The program is crashing, not the debugger, right? Why would changing the debugger settings change the code as built and linked?

Tips, Buy me a coffee, or three.. PayPal Venmo
Up vote any posts that you find helpful, it shows what's working..
Posted on September 12, 2017 at 17:21

Thanks for your reply,

Where can I verify whether I'm working in CCMRAM?

Posted on September 12, 2017 at 17:28

Thanks for this reply,

My program works fine when I reduce all the buffer sizes (static RAM = 64,250 bytes).

When I use the normal buffer values (static RAM = 70,125 bytes), the application stops.

You're right about the debugger, but I tried it out of desperation.

Posted on September 12, 2017 at 18:01

The linker script and map file would describe a region around 0x10000000..0x1000FFFF

Posted on September 12, 2017 at 18:05

Well, if your Hard Fault Handler isn't just a while(1), you might learn whether you're ending up in there.

Do you use malloc()? Do you check if it returns NULL?

It stops where?

Use the debugger to better understand what your code is doing. Output diagnostic/telemetry data to better understand the flow, and sanity-check pointers, allocations, stack depth, etc.

Posted on September 12, 2017 at 21:53

Hard Fault Handler

 
Posted on September 13, 2017 at 10:00

Hi,

I can't use the debugger on my project; it isn't set up yet.

You are right, it isn't a crash; malloc just returns NULL.

I printed the pointer during malloc() and it works up to 0x2001B3FF (so I'm not in CCMRAM).

0x1B3FF = 111,615 bytes. In the linker script I have: SRAM (rwx) : ORIGIN = 0x20000000, LENGTH = 128K

In the MAP file, I have 63,656 bytes of static RAM.

If I understand correctly, with static RAM = 63,656 bytes I should normally be able to use the SRAM up to 0x20020000 (0x20000000 + 128 KB).

Why does malloc() return NULL when I'm over 0x2001B3FF?

Posted on September 13, 2017 at 12:20

As I recall, the standard GNU/GCC allocator defines the heap as the space between the end of the statics and the bottom of the stack.

That would mean you're trying to allocate more space than is available.

Try moving some of your statics or heap to the CCM RAM.
