2018-10-16 10:38 AM
I am moving a large existing project from IAR to Atollic TrueSTUDIO for STM32 as an experiment. I got it to build and run without too much pain, but have noticed that the ELF file produced is *much* larger - more than twice the size with no optimisation, according to the respective map files.
I've been fiddling with various settings and optimisations to little avail. The -flto option is nice, and did result in a significant reduction, but the image is still more than 25% larger than for IAR.
I was a bit surprised by the scale of the difference. The underlying tool chain is arm-atollic-eabi. I've used essentially the same ARM tools in Compiler Explorer, and they seemed very efficient in that context.
Does anyone have experience of this? I am probably missing something. The project is C++ rather than C, and does make significant use of templates and so on. But IAR appears to cope just fine.
Thanks.
2018-10-16 10:49 AM
You mean gcc rather than Atollic.
I'd say this is one of the cases where money shows.
JW
2018-10-16 10:56 AM
Are you comparing the total size of the ELF file, or the size of the code segments within the ELF file? The ELF file contains lots of information besides just the code. Some of the difference *may* be due to debug symbol table differences, or something else not directly code related.
Better to compare binary files, or even hex file sizes.
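For example, something along these lines compares only what actually ends up in flash (just a sketch, assuming the GNU binutils that ship with the toolchain - in the Atollic install they are prefixed arm-atollic-eabi- rather than arm-none-eabi- - and firmware.elf is a placeholder name):

    # Section totals (text/data/bss) rather than the raw ELF file size
    arm-none-eabi-size firmware.elf

    # Or strip the ELF to a raw binary / S-record image and compare those
    arm-none-eabi-objcopy -O binary firmware.elf firmware.bin
    arm-none-eabi-objcopy -O srec firmware.elf firmware.srec
    ls -l firmware.bin firmware.srec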
Code size also depends on library implementations.
As JW alluded to, I would *expect* the IAR libraries to be aggressively optimized for code size and speed, since they cost real money and that is one of the features that would make them worth the price (at least to me).
2018-10-16 06:51 PM
GCC has many flags that would be relevant to your comparison.
-Os, optimises for size
-ffunction-sections, puts each function in its own section so the linker's --gc-sections can strip unused code (done by default in Atollic)
-fdata-sections, does the same for data objects (done by default in Atollic)
-fno-rtti, disables RTTI (done by default in Atollic)
-fno-exceptions, disables exceptions (done by default in Atollic)
-flto, enables compiler and linker link-time optimisation, which can substantially shrink template-heavy code through code folding (GNU gold is a great linker for that)
There is also a checklist by Atollic.
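To make that concrete, a compile/link pair using those flags would look roughly like this (a sketch only: the cortex-m4 target, the file names and the linker script are placeholders, and Atollic's tools use the arm-atollic-eabi- prefix instead of arm-none-eabi-):

    # compile: every function/object in its own section, no RTTI or exceptions
    arm-none-eabi-g++ -mcpu=cortex-m4 -mthumb -Os -flto \
        -ffunction-sections -fdata-sections -fno-rtti -fno-exceptions \
        -c main.cpp -o main.o

    # link: --gc-sections is what actually discards the unused sections
    arm-none-eabi-g++ -mcpu=cortex-m4 -mthumb -Os -flto \
        -Wl,--gc-sections -T stm32.ld main.o -o firmware.elf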
2018-10-16 10:33 PM
And C++ probably means a lot of libs.
Differences in variants and sizes will blur any comparison.
But having some experience with IAR WB too, I see it at the better end with regard to code density and performance optimization.
2018-10-17 12:41 AM
Thanks for all the replies.
I'm using all of those options, davidledger. I had a brief look at that checklist yesterday and thought I was already doing what it advised. I'll check again.
I've compared SREC outputs, and the gcc version is 25% larger with the -flto flag, and much larger without.
It is a fair point that C++ might mean a lot of libs of varying sizes, but I generally use C++ for its language capabilities rather than its library features. I make no use at all of the STL, for example. I guess I'll have to look a bit deeper to work out where the bytes are going.
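For the record, my plan for looking deeper is roughly the following (assuming the usual GNU binutils, here with the arm-atollic-eabi- prefix this toolchain uses; firmware.elf is a placeholder name):

    # list the largest symbols so the worst offenders stand out
    arm-atollic-eabi-nm --print-size --size-sort --radix=d firmware.elf | tail -n 40

    # per-section breakdown, to separate code from initialised data and tables
    arm-atollic-eabi-size -A firmware.elf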
And perhaps it doesn't matter much: the program still fits on the hardware. I suspect this is basically, as waclawek.jan indicates, a case of the cost of being free.
2018-10-17 01:16 AM
> I suspect this is basically, as waclawek.jan indicates, a case of the cost of being free.
Exactly.
Companies like Keil and IAR put a lot of manpower into their compilers and libs, and certainly charge you for that. When one k$/k€ spent on the toolchain lets you use a smaller MCU, or a variant with less Flash/RAM, it pays off quickly for mass-produced parts.
If you seriously want to compare toolchains, consider separate projects for specific use cases.
But for a hobbyist and experimenter, the decision is usually a no-brainer.
2018-10-17 04:38 AM
The comparison was an experiment for future work. Most of my projects are test rigs, prototypes or low volume devices based around Cortex-M devices. Naturally we select processors that are reasonably generous, but we do try to keep the BOM plausible. I think slightly bigger code would rarely be an issue in this case, and free tools would save our clients money should they wish to maintain the code after we hand it over.
But, as you say, for commercial volumes, should the device go that far, the cost of the licence for our clients would be largely irrelevant. Changing the tool chain might even be a reasonable cost-reduction exercise at that late stage, if it's not a complete pain to do...
I guess I was just a little surprised by gcc, but shouldn't have been. Need to manage my expectations better. :)
2018-10-17 06:28 AM
> I think slightly bigger code would rarely be an issue in this case, and free tools would save our clients money should they wish to maintain the code after we hand it over.
Besides licence questions, one needs to keep some other "soft factors" like long-term availability and debug port support in mind. I've recently been dealing with a device based on an obscure Japanese MCU, with exactly one toolchain and one debugger available.
Might not apply to your case, but toolchains with remote-server based licences become a PITA when they are discontinued ...
BTW, Crossworks is gcc-based as well. I purchased a private licence a while ago, and saw substantially smaller differences from the top toolchains (Keil/IAR) than with Atollic. Briefly, gcc is not the same as gcc ...
2018-10-17 06:32 AM
For completeness, I discovered that my specs weren't the best choice. Adding "-specs=nano.specs" to the command line makes a *huge* difference. The Atollic IDE has a setting to do this for you: on the Tool Settings tab, select "Reduced C and C++" for the runtime library. The output is now only 9% larger than IAR without -flto, 2% with -flto. I understand that LTO messes up debugging, so this is a nice result.
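For anyone doing this outside the IDE, the equivalent is simply passing the specs file to the compiler driver at link time (a sketch with placeholder file names; cortex-m4 is just an example target):

    # newlib-nano gives much smaller printf/malloc/C++ runtime support than full newlib
    arm-atollic-eabi-g++ -mcpu=cortex-m4 -mthumb -Os \
        -specs=nano.specs -Wl,--gc-sections \
        -T stm32.ld main.o -o firmware.elf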