
C compiler benchmarks

ccowdery9
Associate III
Posted on December 18, 2009 at 10:20

C compiler benchmarks

12 REPLIES
gahelton12
Associate II
Posted on May 17, 2011 at 13:34

''Outperform'' was a poor choice of words on my part.

Tests show that the Keil CARM compiler produces smaller code when the full printf libraries are included. Speed was not measured in that particular test. Section 3.3 of the benchmark discusses the printf libraries as they relate to small embedded systems.

Personally, I've used printf in an embedded application. The code ''bloat'' was always more than I could stand.

st3
Associate II
Posted on May 17, 2011 at 13:34

Quote:

it was outperformed by the other compilers in most tests (with the exception of the full ''printf'' tests).

But what was the definition of ''outperform''?

In many cases with small embedded systems, memory usage is more important than raw ''performance''.

And were the benchmark tasks representative of real-world embedded applications?

anders239955
Associate II
Posted on May 17, 2011 at 13:34

There are several reasons why tool vendors do not publish comparative benchmarks. No matter how thorough you are when you run benchmarks as a tool vendor, you can by definition never be regarded as objective. Publishing results that compare your own tools with a competitor's is therefore considered rude!

I have been in this business for more than 20 years now, and I have seen some really horrifying examples where benchmarks have been used to deliberately mislead customers. The worst example was a vendor (no names!) that compared tools using their default settings. Their own default had maximum optimization turned on and used the smallest possible memory model, while the competitors' tools had optimization turned off and used the largest possible memory model by default. Such benchmarks are worse than useless!

So, are we tool vendors using benchmarks or not?

Yes, we are using them every day, but in-house, as a vital part of our development. Benchmarks indicate changes in code size and speed during development: if the current revision differs in the wrong direction from previous revisions, then something may be broken in the compiler. In that way, benchmarks complement all the other test suites that check conformance and functionality.

The second use is, of course, comparing our tools against the competitors' so that we are, preferably, #1 when we release new versions of our tools :)