??? 06/22/10 13:54
#176827 - Yes, but does it? Responding to: ???'s previous message
Per Westermark said:
It doesn't matter if you talk about single-bit or multi-bit operations. We are not talking about 32-bit general-purpose processors, but processors intended for microcontroller use. Many of the 32-bit processors have memory mappings that let the program use one port address to read all 32 bits, a different address to read the low 16 bits, a third address to read the high 16 bits, a fourth address to read the low 8 bits, ...
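[A minimal C sketch of that layout, assuming a hypothetical port at 0x40001000 mapped along the lines of the LPC family's fast-GPIO registers - all addresses here are made up for illustration:

    #include <stdint.h>

    /* Hypothetical memory-mapped port laid out as described above:
       the same 32 pins readable as one word, two halfwords, or
       individual bytes at adjacent addresses. */
    #define PORT_BASE   0x40001000u
    #define PORT_WORD   (*(volatile uint32_t *)(PORT_BASE))      /* all 32 bits  */
    #define PORT_HALF_L (*(volatile uint16_t *)(PORT_BASE + 0))  /* low 16 bits  */
    #define PORT_HALF_H (*(volatile uint16_t *)(PORT_BASE + 2))  /* high 16 bits */
    #define PORT_BYTE0  (*(volatile uint8_t  *)(PORT_BASE + 0))  /* low 8 bits   */

    uint8_t low_byte(void)
    {
        return PORT_BYTE0;  /* one plain 8-bit load, no shifting or masking */
    }

The point being that picking out the 8-bit sub-register costs a single load.]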
An NXP LPC23xx or LPC17xx can manage an 8-bit or 16-bit read or write just as well as it manages a full 32-bit access. And the LPC17xx goes further, with 32 unique addresses for reading or writing each individual bit (sketched in C below). When you need to get at the third and the seventh bit, no normal processor will have a dedicated instruction for that, so it's time for normal and/or operations. A 32-bit processor then automagically gets the freedom to and/or with 32-bit masks, making it able to match and surpass any 8-bit processor. There are also 32-bit processors that can perform a mask operation directly on the memory read - an advantage of having larger instructions and hence more bits available for the instruction decoder.

A 32-bit processor can always be built to match any instruction set you can have in an 8-bit processor. Look at the instruction sizes of the 8051: they could be implemented in a 32-bit processor, and there would still be ample instruction space left for adding full 32-bit instructions. So there can never be a general rule that an 8-bit processor can have an instruction set that cannot be matched by a 32-bit processor.

It is easier to build a fast 8-bit ALU than a fast 32-bit or 64-bit ALU, so in theory you could build an 8-bit processor that runs at a higher clock frequency than a 32-bit one. But in the real world, we already know that few 8-bit processors reach past 100 MHz - there isn't enough driving force to justify the cost of producing extremely fast 8-bit processors, since a program that needs such high speeds normally also needs the kind of computational performance that improves with 16-bit or 32-bit operations: doing a single add instead of one add and three add-with-carries (also sketched below). Right now, the absolute fastest 8051 implementations are sold as cores for incorporation into programmable logic, to customers who have already decided to take on the extra cost of custom hw acceleration.

In the end, it all comes back to the simple rule that you can find a specific 8-bit processor that beats a specific 32-bit processor at a specific operation. But that is a specific case that cannot be extrapolated - there is no physical limitation stopping a 32-bit processor from implementing the exact same instruction and processing it in the same number of clock cycles, or even superscalar. In the real world, there are economic factors that limit the availability of very fast 8-bit processors, or of 8-bit processors on the smallest geometries. At the same time, there are economic factors pushing 32-bit processors toward instruction sets that are as well-rounded as possible, which is one reason you don't normally see an "80351" 32-bit processor. The 80251 chip does exist, but never delivered enough to take a reasonable share of the market, and a badly selling 80251 hasn't inspired processor manufacturers to try copying the 8051 instruction set into 32-bit processors. But they are extremely interested in getting huge performance/mW from their 32-bit chips, so even if you do find 8051 instructions that can't be directly matched, you will find that the 32-bit processors are very, very fast anyway.
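[The "32 unique addresses per bit" mentioned above is the Cortex-M3 bit-band aliasing the LPC17xx inherits from its core: every bit in the peripheral region gets its own word-sized alias address. A sketch using ARM's documented alias formula - the macro and helper names are invented:

    #include <stdint.h>

    /* Cortex-M3 bit-banding: each bit in the peripheral region
       0x40000000-0x400FFFFF has a word-sized alias at
       0x42000000 + byte_offset*32 + bit*4. Reading or writing the
       alias touches just that one bit. */
    #define BITBAND_PERIPH(addr, bit) \
        (*(volatile uint32_t *)(0x42000000u + \
            (((uint32_t)(addr) - 0x40000000u) * 32u) + ((bit) * 4u)))

    /* Getting "the third and seventh bit" on any processor is plain
       mask work - and a 32-bit ALU tests a 32-bit pattern with the
       same single AND. */
    static inline uint32_t bits_3_and_7(uint32_t port_value)
    {
        return port_value & ((1u << 3) | (1u << 7));
    }
]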
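[And a sketch of the add-versus-add-with-carry point: the byte-wise loop below models in portable C what an 8-bit CPU's ADD/ADDC sequence has to do for one 32-bit sum; function names are invented:

    #include <stdint.h>

    /* On a 32-bit CPU a 32-bit sum is one ADD. */
    uint32_t add32(uint32_t a, uint32_t b)
    {
        return a + b;
    }

    /* An 8-bit CPU (e.g. an 8051) does ADD plus three ADDCs; this
       loop models that carry chain on little-endian byte limbs. */
    void add32_as_bytes(uint8_t a[4], const uint8_t b[4])
    {
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned sum = a[i] + b[i] + carry;
            a[i]  = (uint8_t)sum;   /* low 8 bits of the partial sum */
            carry = sum >> 8;       /* the ADDC input for the next pass */
        }
    }
]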
Every month, new interesting processors get released. We can't design based on 10-year-old truths; we have to look around, all the time. In some situations the critical factor is how much flash or RAM we need. In some situations it is the number of nWh needed to perform a specific task. In some situations we need to operate concurrently on many I/O signals. All we know is that the price of a processor does not reflect the number of transistors in the core, or the size of the flash or RAM. And the number of transistors does not reflect the current consumption. So as professional developers, we just have to choose a processor on some "best for the task" score, and not just on "it is my favourite". Is the processor (cost, learning time, speed, current consumption, availability now, availability 5 years from now, tool quality, ...) a good fit?

The above can basically lead to one of three things: 1) we end up working with a lot of different processors, since the needs differ so much between projects; 2) we end up cherry-picking projects that fit reasonably well with our favourite set of processors; 3) we use our favourite processors whether they are good or bad for the task, sometimes producing substandard products because we were too stubborn.

I tend to end up in the first group. One project may use an 8051. The next an AVR. The third a PIC. The fourth a PPC. The fifth an ARM. For some reason I have never done a commercial project with any MSP430, but hopefully one day... It doesn't bother me if I have to learn a new processor, as long as the required tools are commercial grade. I'm not too convinced about the tool quality for the PIC chips, but that is something to save for a different thread ;)

The real question is not whether a thing is possible, but whether it has been done. Nobody's going to wait for this manufacturer or that to build what they need. They have to solve their problems with what can be obtained now ... not only the hardware but the associated development tools as well.

As I've said, I often use an MCU as a piece of hardware, not as a computer. That relies on the MCU being able to do very specific things at a very specific rate in a very specific way. I'm not referring to "general" concepts but to very specific features. As you probably recall, I'm particularly enamored with the feature set of the Maxim/Dallas DS89C4x0, specifically the dual DPTR and its associated operations. Why? Because I can quickly move data around using those features, and I've found that I often need to do that (a rough C sketch of the pattern appears below).

If I were dealing with 32-bit data, a 32-bit MCU might seem appealing. However, my apps generally involve 8-bit data paths, or occasionally 16-bit paths. When I see a 32-bit MCU that can fetch an 8-bit value from a fixed location, move it to a location specified by a pointer, increment that pointer, and do it again in two clocks - as the Maxim/Dallas MCU can in two or three 30 ns clocks in page mode 1, or in 5 clocks in non-page mode - it'll get a serious look.

The DS89C4x0's aren't the cheapest 805x's, but 32-bitters tend to be quite pricey once you look at the faster and more feature-rich versions. The cost of a set of development tools is a factor in some cases as well, at least for me, since my volume is generally very low. All those things have to be considered in the context of what you have to do, how you intend to do it, and what it's going to cost. The MCU cost is only important when the design is ultimately intended for production, but the development cost, including tools, matters when it is not.
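[For reference, the access pattern being described, as plain C: a fixed source address drained through an incrementing destination pointer, which is the loop the dual-DPTR auto-increment hardware accelerates without reloading DPTR each pass. An illustrative sketch only; the function name is invented:

    #include <stdint.h>
    #include <stddef.h>

    /* Fetch bytes from one fixed address (say a memory-mapped FIFO)
       and store them through a pointer that increments each pass. */
    void copy_from_fixed(volatile const uint8_t *src_fixed,
                         uint8_t *dst, size_t n)
    {
        while (n--)
            *dst++ = *src_fixed;   /* source address never changes */
    }
]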
The system designer has to be aware of what his hardware can do. He has to trade off the relevant factors of hardware cost, development cost, development time, power, etc., as his requirements demand. You're certainly right that all things are possible if one is "rolling his own" MCU in programmable logic, but how many of the current 805x users are doing that? Since ARM isn't available as "free" IP, that's not being used. What do you have in mind with your notion that all instructions/operations can be implemented? Is there a core you'd choose for DIY MCU applications?

RE