#176870 - x bits are just one parameter among many
Jez doesn't have any problem with larger/smaller packaging. They have a problem with smaller geometries. But if you run an 8051 core in a 0.13u geometry, you'll end up with a similar result to an ARM core in 0.13u.
Limited capacity - the ARM chips don't require the user to try to move variables between different memory regions to maximize speed or minimize code size (a minimal C51 sketch of that juggling follows below). That is a significant difference between the architectures. It's up to people to pay attention to details, whatever processor you have. Anyone thinking that with processor X everything will take care of itself is a fool and is best ignored.

Corroding pins? You shouldn't have any corroding pins. If you play in salty environments, you should look at conformal coating, where everything is isolated. Using pin pitch as a timer for how many months or years the unit will survive before a failure is not a good metric.
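For anyone who hasn't fought with it, here is roughly what that region juggling looks like in 8051 C - a minimal sketch using Keil C51-style memory-space keywords (the variable names are made up for illustration, and other 8051 compilers spell the qualifiers differently):

/* Keil C51-style memory-space qualifiers (a compiler extension, not ISO C).
   Which space a variable lives in decides how many bytes and cycles every
   access costs - something a flat 32-bit address space never asks you to
   think about. */

unsigned char data  fast_counter;       /* internal RAM, direct addressing: cheapest access */
unsigned char idata spill_area[32];     /* upper internal RAM, indirect via R0/R1           */
unsigned char xdata big_buffer[1024];   /* external RAM, reached with MOVX through DPTR     */
unsigned char code  gamma_table[4] = {  /* code flash, read-only, reached with MOVC         */
    0x00, 0x01, 0x04, 0x09
};

void sample(void)
{
    /* The same C statements compile to very different code depending on
       which space the operands live in. */
    fast_counter++;
    big_buffer[fast_counter] = gamma_table[fast_counter & 0x03];
}

Put the wrong variable in the wrong space and the code gets slower and bigger; the compiler won't rescue you, which is exactly the "pay attention to details" point above.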
Justin said:
"Quite frankly, 32-bitters can do the same job an 8-bitter can do, maybe a little more poorly in some applications"

Once more - that's a sentence that should read "A specific 32-bitter can get more or less spanked by a specific 8-bitter for a specific task." It is not the width of the ALU that decides when an 8-bit processor will win or lose against a 32-bit processor. It's how well the instructions fit the task, and/or the total crunching power, and/or the worst-case response time, and/or the amount of memory, and/or the mapping from problem to peripherals, ... (see the small C illustration further down).

Being 32-bit just makes a processor "different" from an 8-bit processor. But as mentioned earlier, you can design a 32-bit processor with mnemonic-level support for every 8051 instruction, able to run any 8051 instruction in the same number of clock cycles. If that is all you do, you'll waste the extra bits of the ALU. So you'll use wider instructions, letting the extra bits describe new op-codes that take 16- or 32-bit arguments. And you make sure these extra instructions are valuable enough that they "pay for" the larger code flash needed when every instruction consumes n*16 or n*32 bits instead of n*8 bits. But since the cost of a chip isn't proportional to the number of transistors, a processor needing two or four times the number of code bytes can still be sold at a lower price, if enough customers exist to make it meaningful to produce it in high enough volume on a good enough process.
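To make that "fit" argument concrete, a small illustration in plain C (nothing vendor-specific; the function names exist only for this example). The first loop maps onto a single 32-bit add per element on an ARM-class core but a four-byte add-with-carry chain plus register shuffling on an 8051; the second loop never outgrows an 8-bit ALU, so the wide core buys little there beyond raw clock rate.

#include <stdint.h>

/* 32-bit accumulation: one ADD per element on a 32-bit ALU, a chain of
   ADD/ADDC instructions on an 8-bit 8051. */
uint32_t sum32(const uint32_t *v, uint8_t n)
{
    uint32_t acc = 0;
    while (n--)
        acc += *v++;
    return acc;
}

/* 8-bit checksum: the data never gets wider than the 8051's ALU, so the
   instruction fit is fine and the choice comes down to the other factors
   (crunching power, response time, memory, peripherals). */
uint8_t sum8(const uint8_t *v, uint8_t n)
{
    uint8_t acc = 0;
    while (n--)
        acc += *v++;
    return acc;
}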
Justin said:
"If that's not true, then the 8-bitter will never vanish from the market."

There are no real driving forces to remove 8-bit processors from the market. It's only a subset of problems where 8 bits are a real disadvantage. Even if 32-bit processors can compete at lower and lower costs, the market suitable for 8-bit processors will still continue to grow, since the total market for microprocessor-controlled equipment will continue to grow. Who cares whether there is an 8-bit or a 32-bit processor in the keyboard? Of course - there is better availability of 32-bit processors with a USB interface, so the 32-bit parts will take market share there.

But as long as power consumption continues to drop, there will always be new niches where microprocessors can be introduced. With a microprocessor in a lamp, you get a smart lamp. Right now we have RGB lamps that can change their color and intensity randomly or by remote control. We'll probably get more processor-controlled lamps.

It doesn't matter how much ARM grows. PCs and simulation tools will also continue to grow. The day ARM becomes so large that it can force its users into narrow tracks and leaves segments uncovered, "small" operators will be able to produce processors targeting those niches. Quite a lot of students have been tasked with designing their own processors (often not all the way to silicon, but some students are lucky enough to study at the right university), so there will always be possibilities for unexpected competitors to jump in and grab market share. Embedded applications often have short life spans, so compatibility with old code can often be ignored. This is a big difference from Windows and the ability/need to run old PC applications.

The reason this thread started to debate ARM chips wasn't any great love of ARM chips, but that this thread got a number of generalisations claiming that 8-bit processors in general, and 8051 chips especially, have magic advantages. One example was the ability to handle individual bits - but there are ARM chips that use peripheral hardware to make the ARM core behave as if it had bit instructions. Another example was the ability to read 8 bits at a time without needing to mask away the other 24 bits of a 32-bit port - but there are ARM (and other) chips that can do 8-bit reads/writes even if the core is 32 bits wide. (A minimal sketch of both is at the end of this post.)

The thing here is that 8-bit versus 80-bit is similar to one UART versus two. It isn't in itself a critical selector unless the specific project has a specific need. Without the need for a second UART, it will only be a nice-to-have. If you don't need to atomically add two 80-bit numbers, an 80-bit ALU will just be something nice-to-have. Does anything exist with an 80-bit ALU? Well, the old x87 floating-point unit did work with 80-bit numbers, but the reason people don't know about them is that not many found them important enough to use. But how do we make sure that people start a project with the right processor, instead of trying to implement a second UART in software?
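Since the bit-handling and byte-access points keep coming up, here is a minimal sketch of both. It assumes a Cortex-M3/M4-style bit-band alias region (one common mechanism that gives an ARM core something close to the 8051's SETB/CLR semantics) and a peripheral bus that tolerates byte-wide reads; GPIO_DATA_REG is a made-up address standing in for a real vendor register, while the bit-band arithmetic itself is defined by the architecture.

#include <stdint.h>

/* Cortex-M3/M4 bit-band alias: every bit in the peripheral region
   0x40000000-0x400FFFFF gets its own word address in the alias region
   starting at 0x42000000. Writing 0 or 1 to that word clears/sets the
   single bit without a software read-modify-write. */
#define BITBAND_PERIPH(addr, bit) \
    (*(volatile uint32_t *)(0x42000000u + \
        (((uint32_t)(addr) - 0x40000000u) * 32u) + ((bit) * 4u)))

#define GPIO_DATA_REG 0x40020000u   /* hypothetical GPIO data register */

void example(void)
{
    BITBAND_PERIPH(GPIO_DATA_REG, 3) = 1;   /* set bit 3, SETB-style   */
    BITBAND_PERIPH(GPIO_DATA_REG, 3) = 0;   /* clear bit 3, CLR-style  */

    /* Byte-wide access: on parts whose bus and peripheral support it,
       a plain 8-bit pointer reads one byte of the 32-bit port, so the
       other 24 bits never have to be masked away in software. */
    volatile uint8_t *port_byte = (volatile uint8_t *)GPIO_DATA_REG;
    uint8_t low_eight = *port_byte;
    (void)low_eight;
}

None of this makes the ARM "better" - it just removes two of the supposed magic advantages from the comparison, which is the whole point.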