Difference between compiler optimization -O0 and -Og

MaxC
Associate III

Hello everyone,

What are the differences (regarding debugging limitations) between the compiler options -O0 (no optimization) and -Og (optimized for debugging) in STM32CubeIDE (v1.14.1)?

Thanks

9 REPLIES
Pavel A.
Evangelist III

-Og means optimize only as far as it does not hinder debugging (by GDB). For example, it still allows removal of dead code.
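For illustration, a minimal sketch (invented names) of what that means in practice. At -O0 GCC keeps every statement and every local; at -Og it may eliminate the dead branch and the never-read variable below, which the debugger then reports as "optimized out":

#include <stdint.h>

uint32_t process(uint32_t x)
{
    uint32_t scratch = x * 2u;   /* written but never read: -Og may eliminate it */
    if (0) {                     /* provably dead branch: -Og may remove the code */
        scratch += 1u;
    }
    return x;                    /* only this survives under -Og */
}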

AScha.3
Chief III

Hi,

see here for what the settings do:

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

 

PHolt.1
Senior III

I don't think there is much difference when it comes to debugging.

Well, you sometimes see "optimised away" or some such, and a variable value cannot be examined, but usually you can still see the value by looking at a register which holds it. I have very rarely used -O0 to solve this.
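A hedged workaround sketch (not from this thread, names invented): if a particular value must stay inspectable regardless of optimisation level, making it volatile forces the compiler to keep every store to it in memory:

#include <stdint.h>

static uint32_t read_hw_status(void) { return 42u; }  /* stand-in for a real register read */

volatile uint32_t dbg_status;   /* volatile: the compiler must materialise every store */

void poll(void)
{
    dbg_status = read_hw_status();
    /* a breakpoint here will always show the real value of dbg_status */
}

The cost is slightly larger and slower code around each access, so it is best reserved for the few values you actually want to watch.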

Cube IDE is pretty crude and allows you to set a breakpoint on a line where there is no code. This could be a line containing C code which was stripped by the linker, or a line containing a comment 😂 and neither breakpoint will ever get hit. Both of these cases could have been handled elegantly (there are files which can be examined to see whether the code exists), but nobody bothered.

It is just a bad system.

The wider issue is that the "way you are supposed to work" (according to the purists), i.e. configure a Debug build and a Release build, each with different optimisations etc., debug in Debug, and ship Release as the production product, is dangerous, because a higher optimisation level can break code. For example, when talking to a device that needs a specific minimum /CS timing (all of them do), it is a hassle to re-check this with a scope for every possible optimisation level. In fact, sometimes one compiles a particular function that does this kind of timing at opt=0, just to make sure. And 100% regression testing is impossible once you have more than about 100 lines of code, especially over temperature, where /CS and other timing susceptibility can be large.

So I use Debug for everything, with -Og, and never change it, and I accept that the code is not as small or as fast as it could be. These chips are 100x fast enough for most jobs, and the tiny bits of code which are truly speed-critical can be optimised for speed one function at a time.
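As a sketch of that per-function approach (function names invented), GCC accepts an optimize attribute, so the rest of the project can stay at -Og:

#include <stdint.h>

/* Pin this routine to -O0 so its instruction timing does not move between builds. */
__attribute__((optimize("O0")))
void cs_pulse(void)
{
    /* timing-sensitive /CS sequence here */
}

/* And compile just the genuinely hot loop for speed. */
__attribute__((optimize("O3")))
int32_t sum16(const int16_t *buf, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += buf[i];
    return acc;
}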

Yes, it definitely tends to make things more fragile, but it can also surface latent defects in the code.

Debug what you ship. I tend to walk the optimized code and live with it moving data into registers and re-sequencing things.

Pavel A.
Evangelist III

> Cube IDE is pretty crude and allows you to set a breakpoint on a line where there is no code.

> It is just a bad system.

The Eclipse CDT editor lets you put a breakpoint at any line, but how can it know, before loading the debug symbols, which lines actually exist in the image? Just a little common sense, please. When the debug session starts and some breakpoints cannot be placed, the debugger should complain. If it does not, that is a defect.

 

PHolt.1
Senior III

Not sure if you are saying I am right or I am s.t.u.p.i.d (this site removes that word!) 😀
It could load the compiled data when the image gets sent to the debugger. It's all there at that time.

Cube IDE has always allowed breakpoints on comment lines 😀

Anyway, that's a side issue. The -Og vs -O0 one is a good discussion. My take is that -Og is good for most stuff, all the way to production. -O0 produces 30-40% more code, and mostly pointlessly; I guess one should not be getting the "optimised away" situation with variable names at -O0, but I would use it only very temporarily.

Another wider issue is that a change of compiler version can break timing. Think about that one 😂 It's unlikely, because most GCC changes are tiny, and tinier still if you use -Og (the new work is focused on really esoteric stuff around -O3 etc.), but Cube IDE does sometimes change the compiler version. This leads to not updating Cube when working in the final stages of a project. And when archiving a project, you wrap up the whole thing, Cube and all, into a VM; I use VMware for this purpose.


@PHolt.1 wrote:

Another wider issue is that a change of compiler version can break timing. 


That's why you should never rely on HLL source code for timing - use timers, etc., instead.
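A sketch of that timer approach, assuming a Cortex-M part with the DWT cycle counter and a CMSIS device header (shown here as stm32f4xx.h): the delay is counted in cycles by hardware, so it no longer depends on what instructions the compiler emits.

#include "stm32f4xx.h"   /* assumed device header; provides CoreDebug and DWT */

static void dwt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
    DWT->CYCCNT = 0u;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */
}

static void delay_cycles(uint32_t n)
{
    uint32_t start = DWT->CYCCNT;
    while ((DWT->CYCCNT - start) < n) { /* spin */ }
}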

 

Obviously that is the correct answer, but there are cases like /CS timing where the minimum time is very short, e.g. 30 ns, and this is easy to meet, with a large margin, by executing a few instructions. In such a case it is OK to select -O0 (no optimisation) for that piece of code.

Or even use assembler.
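A sketch of the assembler route: a few NOPs in a GCC inline-asm block that the optimiser is not allowed to delete. How long each NOP takes still depends on the core and the flash wait states, so verify the width on a scope.

static inline void short_delay(void)
{
    __asm volatile ("nop; nop; nop; nop");
}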

Not clear who you're responding to.

Personally I think optimization, dynamic memory allocation and floating point all have a place in embedded. The way we get bugs out of optimizers is to use them and validate the output. Most of the complaints tend to stem from coding errors, or from expectations, as what the compiler can do might change the linear order of things, dispense with apparently pointless code, or move it. A lot of people just aren't wired for multi-processor, multi-thread and concurrent behavior, pipelining, etc., especially with ARM's preference for no/few safety fences. The selfie-takers are going to fall down the mountain or waterfall occasionally.. However, their lack of self-awareness should not preclude others from utilizing the functionality in a more considered manner.

If optimization breaks timing or ordering, it's because you're playing on the edge and not expressing clearly what needs to happen. At the MCU level there are fencing methods to ensure in-order completion, and flushing of write buffers.
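For reference, a sketch of that fencing using the CMSIS barrier intrinsics (the device header, function and parameter names here are assumptions):

#include "stm32f4xx.h"   /* assumed device header; provides __DMB()/__DSB() */

void start_engine(volatile uint32_t *enable_reg)   /* hypothetical peripheral enable */
{
    /* ...fill a buffer the peripheral will read... */
    __DMB();             /* buffer writes are observed before the enable below */
    *enable_reg = 1u;
    __DSB();             /* the enable write completes before execution continues */
}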

What I can see as a problem is tool churn, where the compiler or library is a moving target, like Adobe Flash being updated every time the computer booted or you came back from lunch. I'd much rather be comfortable with the tools, so that when I deploy some code it can run for years without issues or updates.

Unoptimized code can be buggy too; optimization generally surfaces the bugs a lot quicker.

Debug and validate what you ship. I don't tend to maintain separate debug and release builds; the release build is what the customer gets and what the support team has to support. If it fails and dies silently, everyone is annoyed and angry..
