??? 12/28/10 12:56 Read: times Msg Score: +1 +1 Good Answer/Helpful
#180315 - "Best code" Responding to: ???'s previous message
Richard said:
Well, first of all, the "best" code for one architecture, or even for one chip within that architecture, is not code written to be easily portable between architectures. This has been discussed a number of times. There is seldom any "best" code, and there is seldom any "best" architecture.

The majority of embedded problems don't put any special requirements on the core, so the developer is quite free to choose a processor based on other criteria. Remember all the projects implemented with the original 12 MHz 12-clocker? These projects still exist. With several architectures offering, at quite low prices, processors that manage 100 MHz one-clock instructions, it should be obvious that even HLL code running on a core that needs read/modify/write sequences (such as the ARM), or that doesn't have specific bit instructions, will manage just as well.

And because of the greatly lowered cost of transistors, the cost of memory has changed greatly, so having all - or at least the majority of - the code written in an HLL doesn't really affect the chip cost. Switching from one processor to a big brother with more memory normally costs more. But moving to another processor (a change of architecture, or the same architecture but another family or manufacturer) often allows a change to a processor with twice as much memory at a lower cost than the original processor.

So for the majority of projects, a very large percentage of the source code can be written in an HLL and can have very few "bindings" to a specific architecture. The code can then be moved to a completely different processor at quite low cost, without the developer needing to worry about how to optimize for the new core.

Fast enough is fast enough. Moving to a twice-as-fast processor (released at a lower price) does not mean the product gains any advantage from running twice as fast. How fast does a lamp timer need to be? Can anyone notice whether it "thinks" for 1 ms or 1 us before issuing the on or off command? The same goes for power consumption.
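To make the "few bindings" point concrete, here is a minimal sketch in C (all names are illustrative, not from any real HAL): the lamp-timer logic is plain portable C, and the only architecture-specific piece is a one-line macro that would become an SFR bit write on an 8051 or a register read/modify/write on an ARM. Here it is simulated with a plain variable so the sketch compiles anywhere.

```c
#include <stdint.h>

/* Simulated output register. On a real part this would be an 8051
 * SFR bit (e.g. a compiler-specific sbit) or a memory-mapped ARM
 * GPIO register; both names here are hypothetical. */
static uint8_t port_shadow;

/* The only architecture-specific line: swap this macro per target. */
#define LAMP_SET(on) \
    ((on) ? (port_shadow |= 0x01u) : (port_shadow &= (uint8_t)~0x01u))

/* Portable lamp-timer logic: lamp is on between on_min and off_min,
 * given the current time in minutes since midnight. Nothing in this
 * function knows or cares which core it runs on. */
static void lamp_update(uint16_t now_min, uint16_t on_min, uint16_t off_min)
{
    LAMP_SET(now_min >= on_min && now_min < off_min);
}
```

Porting this to a different processor means rewriting one macro, not the application logic - which is the whole argument above in miniature.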
A processor with 0.13 u features will draw less power even if the program is only 50% as efficient as super-optimized assembly code on a 0.4 u processor.

For a few products, the need really is super-tight, optimized instructions running on a core that is optimally efficient for the specific task. That may sometimes be a 100 MHz 8051 one-clocker. That may sometimes be a DSP. That may sometimes be a processor with a 32-bit ALU. But these needs are seldom the case.

For the majority of situations, it's the number of pins, the number of UARTs, the amount of RAM, the availability of EEPROM, lead times, purchase price, quality of documentation, availability of tools, ... that are the main selection criteria. And those are the criteria to optimize against. Hunting assembly optimizations when they aren't needed is often counter-productive.

It isn't a question of lowering the standards, but of selling the "right" product at the "right" price to the "right" customer. The customer doesn't care about the architecture or the percentage of assembler. They have completely different selection criteria.