#183827 - Just don't kick in open doors
Yes, C can make programs larger.
But what is normally debated here is that a very large percentage of people write lousy assembler, in which case they often fail to match the C compiler on size or speed. The average assembler programmer is not an expert with many years of assembler experience.

The next thing often debated is that it is very hard to produce a complex program in assembler and get even close to the development times you would get with C. And development time and time-to-market are often more critical parameters than program size.

The next thing often debated is that a C program is way easier to modify when customer requirements change. And many customers don't have the skills to know what they want until they have tested a prototype - or until they start to get feedback from real users.

The next thing often debated is that a C program is way easier to move to a different processor, whether because requirements changed or because a new processor was released that would give a significant cost reduction. Today, many products have life cycles of 1-3 years, so you want to be able to take one step back, make a completely new analysis of suitable hardware (cheap for mass production) and then lift as much as possible of the existing know-how onto the new hardware. Again, and again, and again. The first sketch at the end of this post shows the usual way C supports this.

The next thing often debated is that the size difference (or speed difference) between a program written in C and one written in assembler is normally so small that it doesn't affect processor cost. The cost difference between similarly spec'd processors from two different product families is way higher than the cost difference between two chips in the same family - one with extra flash or RAM. And a move to a newer processor family often gives more speed and memory at a lower current consumption and purchase price.

In my case, the issue wasn't the cost of the Mega48 versus the Mega88, but that the Mega88's release was significantly delayed. And fitting everything into half the space wasn't trivial just because of assembler - lots of not-too-nice tricks were needed. Tricks that make it way harder for someone else to modify the code.

A compiler can detect common tail code between different functions and automatically make use of it. Using the same trick in hand-written assembler makes the code very hard to modify: a change to function A will also affect function B, which assumes that the code after a specific label still performs the same thing, with the same register allocations. The second sketch at the end of this post shows the pattern.

The 8051 architecture is notoriously C-unfriendly. A general-purpose 32-bit core, on the other hand, is far more all-round, making it much easier for a compiler to produce extremely tight code. The compiler can perform brute-force optimizations (similar to looking n moves ahead in chess) in a way that is hard for a human. We can write "clean" assembler code, but with 32 general-purpose registers we get into trouble keeping track of what is stored in each of them at every single moment - and of when it is optimal to spill one of the registers to RAM in order to pick up other data (or to reuse the register as an index or pointer).
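First sketch: a minimal illustration of the porting argument. All names here are invented for the example, not taken from any real project. The idea is to keep every register access behind a thin hardware layer, so the application logic compiles unchanged on the next chip; on real hardware the hal_* bodies would poke the chip's registers, and they are stubbed with printf here only so the sketch runs anywhere.

    #include <stdio.h>

    /* HAL - the only place target-specific names are allowed.
     * A port to a new processor rewrites only these two bodies. */
    static void hal_init(void)      { /* clocks, ports, timers for this target */ }
    static void hal_led_set(int on) { printf("LED %s\n", on ? "on" : "off"); }

    /* Application logic: no register names, no chip headers. */
    int main(void)
    {
        hal_init();
        hal_led_set(1);
        return 0;
    }

In assembler, the equivalent separation is much harder to keep clean, because register conventions leak across the boundary.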
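Second sketch: the tail-code point made concrete (the register name and bit values are invented for illustration). Both functions end with the same statement, so a compiler doing tail merging can emit that epilogue once and branch to it from both places.

    #include <stdint.h>

    /* Hypothetical device status register. */
    volatile uint8_t status_reg;

    void stop_motor(void)
    {
        status_reg &= (uint8_t)~0x01u;  /* clear the motor run bit      */
        status_reg |= 0x80u;            /* identical tail: set done bit */
    }

    void stop_pump(void)
    {
        status_reg &= (uint8_t)~0x02u;  /* clear the pump run bit       */
        status_reg |= 0x80u;            /* identical tail: set done bit */
    }

The compiler re-derives the merge on every build, so editing one function simply un-merges them. Hand-code the same trick in assembler - stop_motor falling through into a label inside stop_pump - and the coupling is baked in: edit the tail of one function and the other silently changes behavior.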