2015-11-07 12:15 PM
Hi. I'm curious about how you program: in C, or in assembly language?
What are the real benefits of assembly compared to C? Speed of execution, probably? How much faster is it if you compare the same code written in assembly against code written in C with all optimizations enabled (level 3 in Keil, with ''optimize for time'' checked)? Also, when you move from one core to another (say, from a Cortex-M3 to an M4), you have to learn many new instructions, don't you? #32-bits-of-a-bus #assembly-language 2015-11-07 01:53 PM
Some people say C and ASM are obsolete and you should code in C++48!
Some people say C is neat and the most usable for IoT and small embedded applications. So language debates often tend to get infected and ugly. 2015-11-07 02:00 PM
2015-11-07 06:11 PM
Compilers are not good with algorithms and goals. They tend to focus on localized performance rather than system level, and are not good at identifying what's actually important/critical.
Compilers are good at tracking/managing registers, at applying the optimization tricks they know, and at pulling things out of loops. Someone with their wits about them can do the same. Coding in C tends to be quicker and less prone to errors; assembler can be applied to specific areas and algorithms where speed of execution is critical. With RISC there are fewer opportunities and fewer clever side-effects.

Yes, different architectures have different instructions and tricks to make things go faster. If it's important, you'll figure it out. I can code effectively in quite a number of assembler variants; the mechanics are all very similar. I've written quite sizeable code bases in assembler; it was a lot more important when things ran at several MHz.

I've built tools to do static analysis of code (assembler/machine) and provide timing/cycle-accurate estimates. When speed is important I do dynamic analysis using benchmarking and timing. These days I'd say 99% of the time I use C. When speed is critical and software's not the answer, I use hardware/logic. Software guys tend to just try what they know, regardless of whether it's the best solution. If you want orders-of-magnitude improvements in speed, you look harder at the algorithm and at whether there are better ways to do things. 2015-11-08 05:16 AM
clive1
Could you suggest the most advanced assembler tools for ARM, with the maximum number of extras? I'd like to observe the execution time of an individual mnemonic, its dependence on the previous instruction, and whether a new instruction can execute without depending on the old one. I'd like the IDE to show the tick counts automatically. I do have paper printouts of all the data that matters to me, but I can't always keep track of it. This matters for critical sections that must execute in a specific way: either very fast, or with minimal clobbering of registers. Granted, you can do it in C and fence the critical code off with GCC directives, but GCC quite often ignores explicit orders when it decides the speedup outweighs them. At the moment I assemble the critical sections in Keil, port the .S file to GCC, and then hand-tune the result. It's extremely awkward and unpleasant. 2015-11-08 11:23 AM
2015-11-08 04:08 PM
I think it is best to use C for your initial design, just to prove that your design works - even if you have to run the application at a higher clock speed to make it work.
Then you can try converting critical routines and critical inner loops to assembler. You can then use your C routines to test/compare/verify the outputs of the assembler routines.

Other factors worth considering: use integer arithmetic where possible, and use lookup tables where possible (even if implemented in C). There's plenty of memory for tables. Understand what precision is required and don't work beyond that level.

For me, the only reason to use assembly language is to save battery power (and reduce electromagnetic noise). There have always been micros fast enough for my applications; they were just too current- and voltage-hungry. If there is no pressing need for speed or efficiency, stay with C.

I like the M4, not just for the DSP instructions, but because there are fewer restrictions on which instructions can use which registers. The M3 is fine for most things. So stick with the M3 and M4 and get yourself a copy of ''The Definitive Guide to ARM® Cortex®-M3 and Cortex®-M4'' by Joseph Yiu, or download the relevant documents from the ARM web site (e.g. ''Cortex-M4 Devices Generic User Guide'' is a good start). 2015-11-09 05:14 AM
If you need ''fast as possible'' then use a bigger hammer for the nails: switch to an A9 or A15 (or quad A9s) instead of the M series. The engineering time spent optimizing code often isn't worth the effort; adding a couple of dollars to the BOM for a faster controller makes the product easier to code and maintain, and quicker to market.
Same argument for C vs. assembly. In terms of economics, C wins because less engineering time is consumed to produce the same product. A few assembly routines for pinpoint optimization are justified, but for an entire project it's not really practical.

As for dsPICs, yeah, I worked with a dsPIC33 and the 3-operand DSP instructions work very well, for a limited set of problems. But with the small X and Y spaces, being forced to write in assembly to get the lookahead fetch to work, 16-bit, no SIMD, plus the overall headaches of dealing with the PIC architecture, I'd prefer an M4/M7 any day. And if you look at ST, with the way SRAM and TCM are separate regions you can come close to the X and Y spaces of a dsPIC for simultaneous access. Jack Peacock 2015-11-09 05:36 AM
IMHO the dsPIC was a particularly bad example.
DSPs generally have multiple execution units that work in parallel, and even separate RAM sections that can be accessed in parallel, in one cycle. That allows them to execute somewhere from 4 up to 20 instructions (for video DSPs) in parallel. And other DSP vendors such as TI, Analog Devices, etc. (perhaps with the exception of Microchip) deliver a decent C compiler supporting their architecture. Hardly anyone goes the full-assembler route for larger projects. The pool of embedded software engineers is already small, and only a tiny fraction of those is really willing (or able) to descend into assembler coding. 2015-11-09 06:10 AM
The pool of embedded software engineers is already small, and only a tiny fraction of those is really willing (or capable) to descend into assembler coding.
A lot of late-forties white guys in the pool, if the seminar crowd is a good metric; definitely assembler skills there. The Java kiddies, perhaps less so.