??? 06/17/09 07:52 Read: times
#166172 - Versatility and diversification Responding to: ???'s previous message
You are still living in the 8-bit world. An embedded unit may have the capacity of a 100 MHz i486 PC. If it already has a working Linux kernel, then it is quite likely that any code that needs to be written can be written in C or C++ - even any new drivers.
Next thing. If you need nanosecond delays between signal transitions, and know of the __nop() intrinsic, you can manage an "at least long enough" loop without knowing more than the execution time of a nop instruction on the relevant processor. For longer delays, you can likely use the microsecond-resolution delays already available in the kernel. Using hardware timers or similar obviously does not require assembler knowledge either.

Next step up - writing efficient C code requires a basic knowledge of what makes the processor tick, but it does not require the ability to write assembler code for that processor.

You still have to get out of your box and realize that the embedded world spans a huge range of target hardware, and that the requirements vary quite a lot depending on where a specific product sits on this scale - in type of hardware, in production volume and in cost requirements. You may take your 10 MIPS 8-bit 8051 and fit it with a bar-code reader containing a 32-bit 100+ MIPS processor. Or maybe connect it to a GPRS module with a 400 MIPS processor having 32 MB of flash + 32 MB of RAM. That GPRS module is also running embedded software, but developed under different requirements than your 8051 code. Or you may have a multimedia box with a GB of RAM and flash, used with TB-sized disks.

So the company developed a multimedia box with a MIPS processor and then decides to switch to an ARM or a PPC or x86. You port the Linux kernel, recompile the software and run it on the new hardware. Then you decide that the developers who wrote the code have to leave the company and be replaced with other developers, because they don't know how to write in assembler for the new processor?
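Going back to the nanosecond-delay point above for a moment, here is a minimal sketch of such an "at least long enough" busy-wait between two signal transitions. It assumes a GCC-style inline nop (the exact __nop() intrinsic name varies by toolchain), and uses an ordinary variable as a stand-in for a memory-mapped GPIO register so the example stays self-contained:

```c
#include <stdint.h>

/* Stand-in for a memory-mapped output register; on real hardware this
 * would be a volatile pointer to a fixed peripheral address. */
static volatile uint32_t gpio_out;

/* Hold the pin(s) in `mask` high for at least `nops` NOP times.
 * On a core that retires one NOP per cycle at 100 MHz, each iteration
 * adds at least 10 ns - and the loop counter itself adds cycles, so
 * this is a lower bound on the delay, never an exact one. */
static void pulse_pin(uint32_t mask, uint32_t nops)
{
    gpio_out |= mask;                        /* rising edge */
    while (nops--)
        __asm__ volatile ("nop" ::: "memory");  /* barrier keeps the NOPs
                                                    between the two writes */
    gpio_out &= ~mask;                       /* falling edge */
}
```

The volatile qualifiers keep the compiler from deleting the "empty" loop or moving the register writes around it; checking the actual per-iteration cycle count against the target's pipeline is exactly the "what makes the processor tick" knowledge mentioned above - no assembler writing required.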
That is basically your view - they wouldn't be competent unless they happened to know how to develop in assembler for the new processor (and I have to assume that you mean write really good assembler, and not just be able to scribble something that sneaks past the assembler). The question I have to ask is of course: how many bigger processors do you know how to write efficient assembler for? Being a competent assembler programmer for a small PIC, 8051 or AVR is quite easy compared to learning how to write efficient assembler code for some of the bigger 32-bit processors.

In a world where you may need to produce a new model of a product every 6 months, you may have a situation where assembler code is seen as bad programming - representing big costs to the company, since you can't just recompile for a new target hardware.

In the end, it is vital to realize that the metrics used to measure competence or code quality must be defined on a case-by-case basis. It isn't possible to live in a one-size-fits-all world. While an embedded developer should be very familiar with the instruction sets of a number of different processor architectures, there will be many situations where there is no need to know how to write in assembler for a specific processor.

Your delay loop example? Locking down the code generated by the C compiler can be as simple as having the compiler produce an assembler listing, then lifting that output and modifying it into a stand-alone assembler function. The C-generated code may consume one or two instructions more than hand-optimized code, but if the target hardware happens to have megabytes of flash, that is probably irrelevant. And if you need an extra byte at a later time, you would probably have 10 or 100 thousand lines of C code where it would be quicker to squeeze out that byte.

I have been in situations where I spent a day writing an optimized assembler routine for a time-critical function.
Then I spent 4 hours writing and testing 10 different variants in C, and had 3 of the C implementations run within 3% of the time of my assembler code. For posterity, I could commit the assembler code just to have a backup of it, then directly change the application to use the most readable/maintainable of the three fastest C alternatives. That C code might not be the fastest alternative after moving the code to another processor, but it would be at least reasonably efficient, and it will compile. And after a platform change, the processor may well be one where the code runs several times faster than needed, allowing basically zero maintenance cost for moving the code.

In the end, I would be a lousy embedded developer if I ran around thinking that assembler is the big goal. That would be similar to having a hammer and thinking everything looks like a nail.
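That workflow can be sketched in portable C. The checksum workload and the function names here are invented for illustration: write two or more plain-C variants of the hot routine, verify they agree, and time them with the standard clock() before deciding which one to keep (or whether hand assembler is worth it at all):

```c
#include <stddef.h>
#include <stdint.h>
#include <time.h>

/* Variant 1: the obvious byte-at-a-time loop. */
static uint32_t checksum_simple(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += p[i];
    return sum;
}

/* Variant 2: manually 4-way unrolled - on many cores this lands
 * within a few percent of a hand-written assembler routine. */
static uint32_t checksum_unrolled(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        sum += (uint32_t)p[i] + p[i + 1] + p[i + 2] + p[i + 3];
    for (; i < n; i++)                 /* leftover tail bytes */
        sum += p[i];
    return sum;
}

/* Time one variant over `iters` passes; resolution depends on clock(). */
static double time_variant(uint32_t (*f)(const uint8_t *, size_t),
                           const uint8_t *p, size_t n, int iters)
{
    volatile uint32_t sink = 0;        /* keeps the calls from being elided */
    clock_t t0 = clock();
    for (int i = 0; i < iters; i++)
        sink += f(p, n);
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

The variant that wins on one core may lose on another, which is exactly why keeping the readable C version - with the old assembler routine committed only as a backup - is the cheap choice when the platform changes.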