??? 08/06/09 12:58
#168215 - If you can ignore those features
Responding to: ???'s previous message
Christoph Franck said:
Richard Erlacher said:
My apps don't have any time to waste either. I've yet to see/evaluate a 'C' compiler that could keep up with ASM. I've yet to see/evaluate a 'C' compiler that I had time to fool with. They're either time-limited, or they have too small a memory range to be "evaluatable." Now, that's a real waste of time!

Christoph Franck said:
4k should be plenty to evaluate the performance-critical parts of most applications for a '51 (usually, the CPU spends 90% of its time executing a handful of percent of the code).

It would be if 4K were all that the MCU had, as the original 8051 had, but if tables are in the upper half of the 64KB code space and the code resides in the lower portion, things can't be evaluated so easily. If you attempt to "evaluate" piece by piece, you get one result; if you evaluate as the end product will be configured, you get another. If that's good enough for you, fine. It's not good enough for me, though.

Most of my ASM code is based on lookup tables, dispatch tables, and table-driven decision trees. It's not unusual for half of my code space to be consumed by tables; often it's 10% code and 90% tables. At times it's faster to perform arithmetic by lookup than by computation. I could go on. None of those tasks can be evaluated in their 'C' form with so limited an evaluation version.

Christoph Franck said:
And as soon as you look at pipelined architectures (or, even worse, quirks like instructions with delayed execution), writing anything in assembly becomes a royal pain in the neck. And after that, you're not faster than the compiler yet; you merely have code that doesn't mess up because of pipeline conflicts. Beating the compiler will take another round of work.
If those features were properly and fully documented, so that one actually could determine precisely what their effects under various circumstances really are, it would be practical. Cut-and-try approaches aren't sufficiently rigorous to suit me. Architectures that approach non-determinism, or are documented as though they do, don't interest me.

The SiLabs parts are entirely deterministic. Consequently, their behavior can be precisely predicted. The lack of a precise simulator is not SiLabs' fault; it's the fault of those who claim they can simulate the output from their compiler. That's the thing about which the KEIL support folks have repeatedly attempted to mislead me. They've lied about that, saying that the timing of the Dallas parts can't be predicted, and they've said the same about the SiLabs parts. The real fact is that their simulator isn't written to simulate a specific core. That would be too much trouble for them.

Letting the compiler "handle" the various deviations from the "standard" core simply ignores those things you say are difficult in ASM. It doesn't solve the problem. That's consistent with relying on 'C' to do the work: it simply means one can proceed with inadequate attention to detail. If you do that, you get what you deserve.

RE