
Optimized Multiplies for Cosmic Compiler?

paulrbryson
Associate III
Posted on November 11, 2017 at 23:27

Are there optimized basic math functions available for the STM8, such as an 8-bit by 8-bit multiply, an 8-bit by 16-bit multiply, and similar?

My application has a time-critical 8-bit by 16-bit multiply. The Cosmic compiler seems to always default to a 16-bit by 16-bit multiply, which is slower. I wrote an inline assembly macro that runs in about 2/3 the time of the compiler's output, but it was very tedious to write. I would rather not do this for every math function.

Any helpful information would be appreciated.

I include my macro here, in case anyone else finds it useful:

// Macro to perform an optimized UI8 by UI16 multiply:
//   RESULT_UI16 = (uint8_t)X_UI8 * (uint16_t)Y_UI16;
// Note: all arguments must be declared '@tiny'.
// The macro assumes that no overflow occurs.
// '_asm()' will load an 8-bit argument into reg A or a 16-bit argument into reg X.
#define MULT_8x16(X_UI8, Y_UI16, RESULT_UI16) {\
    _asm("LDW Y,X\n SWAPW X\n", (uint16_t)Y_UI16);\
    _asm("MUL X,A\n SWAPW X\n PUSHW X\n LDW X,Y\n", (uint8_t)X_UI8);\
    _asm("MUL X,A\n ADDW X,($1,SP)\n POPW Y\n");\
    _asm("CLRW Y\n LD YL,A\n LDW (Y),X\n", (@tiny uint16_t*)(&RESULT_UI16));\
}
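
For readers who want the idea without the Cosmic-specific syntax, here is a portable C sketch of the decomposition the macro implements: an 8x16->16 multiply built from two 8x8->16 multiplies, x*y = ((x*yh) << 8) + (x*yl). This is only an illustration of the technique, not the macro itself, and it assumes (as the macro does) that the full product fits in 16 bits.

```c
#include <stdint.h>

/* Portable sketch (not Cosmic-specific) of the 8x16->16 multiply:
 * split y into high and low bytes, do two 8x8->16 multiplies,
 * then recombine. Assumes the true product fits in 16 bits. */
uint16_t mul_8x16(uint8_t x, uint16_t y) {
    uint16_t lo = (uint16_t)(x * (uint8_t)(y & 0xFFu)); /* x * low byte  */
    uint16_t hi = (uint16_t)(x * (uint8_t)(y >> 8));    /* x * high byte */
    return (uint16_t)((uint16_t)(hi << 8) + lo);        /* recombine     */
}
```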

#compiler #math

Note: this post was migrated and contained many threaded conversations, some content may be missing.
24 REPLIES
S.Ma
Principal
Posted on November 12, 2017 at 15:40

I guess there would be a C rule for variable rank and promotion when operand types differ. Casting the variables does not necessarily mean the * operator won't convert them to something else. When things need to be optimized down to the cycle level, it makes sense to drop to assembly, as in this particular case. Check the ANSI math library for a specific function (not using *) in case one exists; this requires some reading of the compiler documentation.

Posted on November 13, 2017 at 13:54

Yes, C has its rules on types. But the compiler is still free to optimize, as long as the observable behaviour is the same. E.g. 8-bit types will always be promoted to at least 16-bit types by the rules. But compilers will still use 8x8->16 multiplication where they can.

Philipp
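
A minimal illustration of the point above, as portable C (the function name is mine, for illustration only): both uint8_t operands are promoted to int before the multiply, so the mathematical product survives even though it would overflow an 8-bit type, and a compiler is still free to emit a single 8x8->16 hardware multiply because the observable result is identical.

```c
#include <stdint.h>

/* Both operands are promoted to int per the usual arithmetic
 * conversions, so a*b yields 40000 for a = b = 200 even though
 * that overflows uint8_t. The cast back to uint16_t is lossless
 * here, and a compiler may implement the whole thing with one
 * 8x8->16 MUL instruction. */
uint16_t mul8(uint8_t a, uint8_t b) {
    return (uint16_t)(a * b); /* computed as int * int */
}
```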

Vyacheslav Azarov
Associate III
Posted on November 14, 2017 at 06:33

In my opinion, the arithmetic in COSMIC CXSTM8 is very high quality. However, for DSP this may not be enough.

Posted on November 14, 2017 at 11:37

I agree that the Cosmic compiler does well in all of the head-to-head comparisons I have seen; but I don't think I would refer to simple multiplication as 'DSP'.

The reason for this post was that I was hoping that someone official or unofficial had identified common operations that could be sped up and then created optimized functions or macros to perform those operations.  An application note would be great. 

Posted on November 14, 2017 at 11:43

I assume that, with any compiler, someone would have looked into common operations, especially multiplications, and how to speed them up. Multiplications can be quite time-intensive, and are important both in benchmarks and real-world applications. The fact that 8x16->16 multiplication where neither operand is a constant is not treated as a special case probably means it was not considered particularly common or important. Not even SDCC has such an optimization.

But if you provide examples from real-world code, where such an optimization matters a lot, requesting the feature from compiler developers might result in it getting implemented.

Philipp

Posted on November 14, 2017 at 13:46

1. I agree that the compilers should do their best to optimize multiplies. However, here, they seem to have failed.

2. Why would 8x16 be any less common than any other multiply? In an embedded application, on an 8-bit micro, where every byte of RAM and every CPU cycle counts, why would you NOT have a mix of 8-bit and 16-bit integers? A prudent coder uses the minimum precision practical - with the assumption that the compiler will too.

My particular application should be VERY common.  I am scaling the ADC reading (10 bits) with the least precision necessary (6 bits). 

3. It doesn't matter whether either factor is constant or not - multiplying by an arbitrary constant is the same as multiplying by a variable. 

4. Asking for a new version of the compiler is not really a practical solution.  
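
The ADC-scaling case mentioned in point 2 can be sketched as follows (a hypothetical illustration with names of my choosing, not the poster's actual code): a 10-bit reading times a 6-bit scale factor needs at most 10 + 6 = 16 bits, so an 8x16->16 multiply suffices and can never overflow.

```c
#include <stdint.h>

/* Hypothetical sketch of scaling a 10-bit ADC reading by a 6-bit
 * factor. Worst case: 1023 * 63 = 64449 < 65536, so the 16-bit
 * result cannot overflow and an 8x16->16 multiply is enough. */
uint16_t scale_adc(uint16_t adc10, uint8_t scale6) {
    return (uint16_t)(adc10 * scale6);
}
```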

Posted on November 14, 2017 at 13:47

I think it's because of the standard. According to the rules of the C language, before an arithmetic operation is performed, both operands are converted to the longer of the two types, and the result is then truncated to the result type. Special-casing such calculations is not mandatory, but some compilers support customization.

Posted on November 14, 2017 at 14:07

Do not blame the developers, please. Building a compiler is not a trivial task. In any case, you can always write a speed-critical section of code in assembler, matched to the architecture of the processor. You could also purchase the IAR compiler, which has deeper optimization.

Posted on November 14, 2017 at 14:13

4:

It might not be the fastest solution, but I'd still consider it practical. If the compiler developers get feedback indicating that 8x16->16 multiplication is speed-critical for some of their users, optimizations for it are likely to get implemented.

2.,3:

I worked on the multiplication optimizations in SDCC. To do so, I needed a good understanding of which multiplications are common, and looked at code for 8-bit µCs from various sources. However, I have to admit that my analysis of which types of multiplication are common was mostly based on code from RTOSes, benchmarks, and standard library functions. You might be more familiar with other code.

8x16->16 multiplications are a) less common and b) can't be done that much faster than 16x16->16 (yes, there is potential for optimization, but not as much as in 8x8->16 vs 16x16->16, or 16x16->32 vs 32x32->32). Most multiplications arise from accesses to arrays of structs/unions. If the programmer uses a cheap index type, such as uint_fast8_t, these are 8x8->16. If the programmer just uses int, these are 16x8->16. Naturally, the 8-bit operand is always a constant, and a constant known at compile time allows some optimizations that otherwise would not be possible. 16x16->16 multiplications happen once in a while, and there is some code that makes heavy use of 32-bit multiplications (in many variants of operand sizes), e.g. in matrix multiplication. 8x16 with both operands non-constant is less common. So are multiplications with 64-bit operands.

Philipp
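
The array-of-structs case described above can be sketched like this (struct and function names are mine, for illustration): indexing s[i] compiles to base + i * sizeof(struct), a multiply by a compile-time constant, which with a narrow index type becomes the common 8x8->16 case.

```c
#include <stdint.h>

/* s[i] compiles to base + i * sizeof(struct sample), i.e. a multiply
 * by a compile-time constant. With a uint_fast8_t index this is an
 * 8x8->16 multiply that the compiler can often strength-reduce to
 * shifts and adds. */
struct sample { uint8_t channel; uint16_t value; };

uint16_t read_value(const struct sample *s, uint_fast8_t i) {
    return s[i].value;
}
```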