??? 10/27/11 21:35
#184403 - You missed my point again
Responding to: ???'s previous message
Per Westermark said:
Richard Erlacher said:
That's only true if the evaluation compiler puts the code where it can actually run on the target. That is, after all, the issue that initiated this discussion.

Doesn't matter if I have to use a processor with more code space when evaluating the compiler. The goal of evaluation is to find out if the compiler is good. It will be good whatever 8051 chip you use, since they all use the same instruction set.

I don't agree. If you want to evaluate the timing of a particular module, one small enough to run in the 2kB space, you need to run it on the target, else you need tools to evaluate the timing automatically under simulation, and those tools are even scarcer than "good" compilers. The easy, quick, simple, and deterministic way to do this is to run the code on the target MCU. Further, you have to be able to load the other modules with which the one of interest has to interact, else your evaluation will be little more than an arbitrary value judgment.

An evaluation version that allows the code to be placed at any location would mean that a user making use only of chips with 4kB of code space would never need to buy any commercial version of the compiler, so it really is important for the compiler vendor to give such users an incentive to switch from the evaluation version to a purchased one.
Richard said:
Nobody's discussing "ARM-class" processors at this point. I know ARM would like to supplant the 805x's, but there are places where they'll NEVER fit.

But you missed the important note here, which I gave in the next paragraph (quoted below): lots of embedded code is more or less one-to-one between source lines and peripheral accesses, so there is not much need for any optimization. We do not want the compiler to remove writes to our UART transmit registers or reads from GPIO pins, and the compiler may not change their order.

Richard said:
But even more importantly, the critical part of the code is likely to contain lots of GPIO accesses, where there is a one-to-one correspondence between source lines and hardware accesses. Exactly how can a compiler fail when there isn't anything to optimize?

I'm not sure optimization is the issue. Compilers from different vendors produce code that runs at different rates, uses differing amounts of code space, uses more or fewer resources...

But that can be evaluated within 4kB of code space for an 8051 compiler.

Just how would YOU do it without running it on the target MCU? Would it be anything more than your expert opinion?

Richard said:
After being repeatedly lied to by the KEIL people (not to suggest they're the only ones, and probably as much out of ignorance as out of evil intent), I've nearly given up on trying to deal with software vendors.

Strong claim from you. Care to back that up with some examples of how Keil staff have lied to you?

I'll give you one that stands out. When I asked, about a decade ago, about precisely simulating the address-mode switching in the DS89C420, the tech-droid with whom I was speaking told me that it had been proven to be impossible. He said that it was impossible to tell when the timing effects took place, hence it couldn't be simulated. I know better, as the device is entirely deterministic in its operation, and I started designing my own simulator, an effort I later abandoned because it's easier to hook the circuit up to a logic analyzer and observe the timing there. Of course, the code was, by that time, already written in ASM. I've mentioned this particular matter before.

I know a bit about simulators, having managed to get through grad school by doing little other than writing or modifying simulators. I doubt simulation has changed much since then. It's like math: the same process and language. What worked then works now, maybe even better, but it still works.

RE