2023-12-14 05:32 AM
If I understand malloc() correctly, it should either return NULL (if out of memory) or a valid pointer.
However, certain sequences of malloc() calls cause a hard fault instead. Here is a minimal sample project for an STM32L431 with 64 kB of RAM and a 1 kB configured main stack:
#include <stdlib.h>

int main(void)
{
  void *pv;

  pv = malloc(72000);
  if (pv != NULL)       // false - OK
  {
    free(pv);
  }
  pv = malloc(32000);
  if (pv != NULL)       // true - OK
  {
    free(pv);
  }
  pv = malloc(62000);
  if (pv != NULL)       // true - OK
  {
    free(pv);
  }
  pv = malloc(66000);   // hard fault in _malloc_r()
  if (pv != NULL)       // should be false, but never reached
  {
    free(pv);
  }

  /* Loop forever */
  for (;;)
    ;
}
The first three calls to malloc() behave as expected (each either returns NULL or a valid pointer), but the fourth does neither and ends in a hard fault. Does anyone understand why?
Note that I can also make it crash by asking for an amount of memory that is actually available, but to do that I need a long sequence of (random) malloc/free calls, which makes the problem harder to analyze. The above is the shortest example I could produce.
I did check the _sbrk() calls while running the above code, and they look reasonable. The last malloc() crashes before it ever calls _sbrk().
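For reference, the kind of _sbrk() I mean is the usual bump allocator with a stack-collision check; a minimal sketch (assuming the linker script provides the usual _end, _estack and _Min_Stack_Size symbols, as the generated one does) looks like this:

/* Minimal sketch of a typical bare-metal _sbrk(): a bump allocator that
 * refuses to grow the heap into the reserved stack area. Assumes the
 * linker script provides _end, _estack and _Min_Stack_Size. */
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

void *_sbrk(ptrdiff_t incr)
{
  extern uint8_t _end;             /* end of statically allocated data   */
  extern uint8_t _estack;          /* top of RAM / initial stack pointer */
  extern uint8_t _Min_Stack_Size;  /* reserved stack size (linker symbol)*/
  static uint8_t *heap_end;

  const uint8_t *max_heap = &_estack - (size_t)&_Min_Stack_Size;

  if (heap_end == NULL)
  {
    heap_end = &_end;
  }
  if (heap_end + incr > max_heap)
  {
    errno = ENOMEM;                /* lets malloc() return NULL cleanly */
    return (void *)-1;
  }

  uint8_t *prev_heap_end = heap_end;
  heap_end += incr;
  return prev_heap_end;
}

The point is that _sbrk() can only refuse an allocation by returning (void *)-1 with errno set; in my case the hard fault happens inside _malloc_r() before _sbrk() is even reached.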
I am using STM32CubeIDE 1.12.1 and a C project of type "empty" for the STM32L431VCTx, but I see the same problem in full-blown Cube-generated projects, too.
Is this a bug in newlib-nano (setting for Runtime library: Reduced C, --specs=nano.specs), or am I doing something wrong? If I link against standard newlib instead, there is no hard fault even if I attempt 10000 allocations of random sizes.
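Roughly, that stress test looked like this (a sketch; the exact size range is not important, only that it includes requests larger than the available heap):

#include <stdlib.h>

/* Sketch of the stress test: 10000 allocations of random size, each
 * freed immediately. Linked against standard newlib, every malloc()
 * either succeeds or returns NULL; with newlib-nano a similar random
 * sequence eventually hard-faults. */
void malloc_stress(void)
{
  for (int i = 0; i < 10000; i++)
  {
    /* request up to ~70 kB, i.e. sometimes more than the 64 kB of RAM */
    size_t size = (size_t)rand() % 70000u;
    void *pv = malloc(size);
    if (pv != NULL)
    {
      free(pv);
    }
  }
}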
2024-04-22 03:23 AM
I have the same issue.
I cannot use the nano version of the libs, and I have to supply a proprietary _sbrk() implementation.
So --specs=nosys.specs plus my own _sbrk() implementation works for me, but nano does not.
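For anyone trying the same workaround, the link step would look roughly like this (illustrative only; the object files and linker script name are placeholders for your own project):

arm-none-eabi-gcc main.o sysmem.o -mcpu=cortex-m4 -mthumb -T STM32L431VCTX_FLASH.ld --specs=nosys.specs -Wl,--gc-sections -o firmware.elf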