
speed & code size stm32cubeIDE vs iar

kshin.111
Associate III

Hi,

I use IAR Embedded Workbench for Arm.

After seeing the advertising and feature list of STM32CubeIDE, I tried STM32CubeIDE for testing.

There are two major drawbacks to STM32CubeIDE:

1. The generated hex code is almost twice as large as IAR's (in flash memory).

2. The execution speed of the program is almost 2 times slower than with IAR.

The amount of RAM used in STM32CubeIDE was slightly higher, which can be ignored.

A large project was used for the experiment, in which almost all of the features of the STM32F407VG micro were used.

The experiment was repeated on an STM32F030F4 micro with a small project, with the same result as before.

I urge developers to address these two important issues.

Friends, please share your experiences.

Thanks

1 ACCEPTED SOLUTION
mattias norlander
ST Employee

Hi @kshin.11​, 

This is quite common feedback, and it does not have a simple, straightforward answer. Yes, the IAR compiler is faster most of the time and produces smaller binaries. But not always; we have also seen examples where GCC does a better job. And the result should NOT be in the range of 2 times worse in either case 1 or case 2. Typically we see results within roughly ±5%.

It is important to compare apples with apples.

So, as @MM..1​ commented, one has to consider which optimization levels are used. -O0 in GCC, for example, has the fastest compilation times since it does not apply any optimizations. But if you want to compare against a binary that can be debugged with IAR, you should probably instead rely on -Og, which applies all optimizations that do not interfere with the debug experience!

Similar considerations need to be made for the higher optimization levels.

Non-optimal usage of the C run-time libraries quite often affects the results in a negative way. For a start, rely on Newlib-nano if you can.
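For reference, when invoking GCC outside the IDE, selecting Newlib-nano comes down to a spec-file flag. A rough sketch of such an invocation (main.c, the CPU flags, and the linker script name are placeholders for your project's values):

```shell
# Link against Newlib-nano instead of the full Newlib to shrink the binary
# (file names and -mcpu flags below are placeholders).
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -Os \
    --specs=nano.specs --specs=nosys.specs \
    main.c -T STM32F407VG.ld -o app.elf

# Newlib-nano's printf omits float formatting by default; if you need it, add:
#   -u _printf_float
```

In STM32CubeIDE the same choice is exposed in the project's MCU Settings as the "Use float with printf" and runtime-library options.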

Another thought: if the code was initially written with the IAR compiler and run-time libraries in mind, this could bias the result, since the developer may be well familiar with how to write optimal code for that compiler and its libraries. Such a project may not translate perfectly to GCC. Comparisons and benchmarks are not a straightforward and easy topic. :)

Kind regards, Mattias


5 REPLIES

>>I urge developers to address these two important issues.

Pretty sure the people writing the IDE, UI/GFX, Java automatic code generators and other w*nky stuff are NOT the compiler/linker guys.

MM..1
Chief II

Are you using a Release build in CubeIDE, with optimization -Os or -O3?


Ozone
Lead II

On average, you get what you pay for.

Keil, IAR, and others have spent man-years on their toolchains, and not only on a pretty look.

For a free tool, correctness and standards adherence are the most you can realistically demand.

For ST, it does not pay off to invest that many resources in a free tool, especially since software and tools are not their focus.

And if they did, there would be trouble with the professional competitors.

BTW, I can personally confirm that another, unnamed MCU vendor deliberately filled its applications with crap to coax users of its free tools/libs into paying.

KnarfB
Principal III

Just my 5ct: When using GCC (STM32CubeIDE), try -flto in both the compiler and linker options; that is link-time optimization, see https://gcc.gnu.org/onlinedocs/gccint/LTO-Overview.html. It helps reduce function-call overhead and improves register allocation when a lot of library code (from individual compilation units) is involved.