??? 06/23/10 07:28
#176853 - Please don't generalize
Responding to: ???'s previous message
Justin said:
People already bought the s/w for an 8051 and they are unwilling to fork over more money to learn a new cpu and the massive time investment.

There is no massive time investment in looking at a new chip. You can buy a development board with demo software and be productive after 2-3 days. Faster, if you already have experience with a couple of different processors/processor architectures. Remember too that an 8051 is hell-on-earth for running a HLL. Most newer chips are very, very much easier to write a compiler for. For an ARM, you'll find that you really have to invest significant amounts of time to match or beat the ARM compiler. You don't have to be the best student at school to beat an 8051 compiler long before you graduate. And you can find free software, if you are willing to sacrifice the availability of a simulator.

Justin said:
People already bought the s/w for an 8051 and they are unwilling to fork over more money to learn a new cpu and the massive time investment.

You are confusing core size with cost or production complexity. 32-bit cores are nothing magic. They are good to use in any quantities in just about any-size project. It is only in very specific situations that you may decide that you _need_ an 8-bit processor or a 32-bit processor. Maybe your product really needs an 8-pin AVR chip. Maybe it really needs a 144-pin monster with multiple USB, CAN, Ethernet, ... But there is no significant difference in price or power consumption for most of the chips.

If I'm going to produce a unit in 10k/year, I don't magically select a 32-bit processor. If an 8-bit processor is $0.2 cheaper and as good at the task, I would save $2k/year using the 8-bit processor. It's just that there is no simple rule saying that an 8-bit processor will be $0.2 cheaper. It may be the same price or even more expensive. And even at the same price, it may require weeks of extra programming because most 8-bit processors have lousy peripherals.

Justin said:
The packages are larger, the pin spacing is more separated, etc.

I assume you are talking about hole-mounted components now - not many such components are available, since they are expensive to use in production. But are you saying that there is a difference in package size or pin spacing between a 32-pin ARM chip and a 32-pin 8051 chip? I doubt it very much. An ARM isn't automagically 100+ pins.

Justin said:
Rarely, does anyone need all the bells and whistles that accompany the ARM chips.

Rarely does anyone need all the configuration bits in a single 8051 chip either. But it doesn't cost you anything extra to have access to the X2 speed, ... Price and power consumption add up when you add extra features. But you wouldn't buy chips with extra features unless you considered them meaningful. Then you would notice that you may pay $0.2 for a CAN controller, or a similar amount for a USB device. Maybe $0.5 or $1 extra for a variant with USB host or USB OTG. But if you need USB OTG, then you need it.

Need all the bells and whistles? Most of them you get for free, giving you options when developing. And quite a lot of applications can take advantage of a 16-character FIFO on the UARTs. Quite a lot of applications benefit from a chip with two or four UARTs - how nice it is with a debug port, separated from the port(s) used by the business logic. Fast SPI with FIFO or DMA? Guess how much nicer it is to build a multiprocessor solution when you can drop off 16 bytes of data and have the SPI perform the full transfer in the background. How many applications have "needs" for 32-bit timers, allowing the timers to tick at 15MHz and still span 286 seconds - or prescale them to 1Hz and have them tick for 4 billion seconds, allowing them to store uptime for 136 years. Or maybe a timer with four separate match registers, letting you create four separate events with the same frequency but varying phase.

Many of the chips have excellent PWM - for example driving a three-phase motor with push-pull from 6 pins at zero processor load. That gives you a lot of CPU time to measure and decide what speed you really want. Many of the ARM chips have full IrDA support on at least one serial port. Possibly hardware acceleration for RS485. Often a fractional baudrate divisor allowing an exact baudrate with whatever crystal you use. Normally always a PLL allowing the processor speed to be changed dynamically.
Often multiple I2C. Not too seldom a 12MHz USB device with the required capacity to service it. Quite a number of applications can make good use of the peripherals - quite large parts of an application can be handled by the hardware and the ISRs.

Justin said:
Unfortunately, it is unnecessary power because a lot of applications are just bloated and unrealistic.

I don't think too many developers have time to spend on adding too many extra bells and whistles to embedded products. They tend to deliver the absolute minimum. It's the PC programs that get all the bloated extras, just because you can link in 10 more libraries and drag/drop some extra fields into the dialogs. And the difference between embedded and PC programs is that the PC program has an almost fixed distribution cost - a CD or DVD can hold quite a lot, and the bandwidth cost for downloading applications isn't so high either. For embedded, you may save $0.5 by using one step smaller flash, so there are incentives for not bloating.

But you have to separate a couple of things.
- Code seen on the web tends to be lousy, since it is mostly students posting it - they are proud that they "almost" got the code working.
- Many embedded products have lousy software - not bloated, but badly written and badly tested, resulting in huge numbers of bugs. That doesn't depend on the chip. Only time, education, goals, management, ...
- You only see the bad/broken/ugly - when an embedded device works well, you don't notice it. Most of them are expected to be almost invisible, while solving their tasks.

Justin said:
You even have to worry about the heat dissipation from these chips

Heat dissipation? You, as developer, may decide if you want to run your ARM chip at 3.3V 1mA or if you allow it to draw 50mA. It's just a question of how much you crunch and what parameters you use for the PLL. There are ARM chips that may draw 5-10mA/MHz. There are chips drawing 1mA/MHz. And there are chips at 0.1mA/MHz. It's just a question of generation and need. A Cortex-M3 in 90nm technology gives 12.5 DMIPS/mW. So you could get 125 million Dhrystone instructions/second at 10mW. Not so sure about your problems with power dissipation. 10mW doesn't sound like any huge heat problem...

1) A mobile phone consumes huge power for the display.
2) A mobile phone consumes a lot for a superscalar processor with 3D acceleration, because we stupid customers buy the phones with the most animated features.
3) A high-end mobile phone consumes a lot of power because we want WiFi, Bluetooth, ... besides the "phone".

But the above has hardly anything to do with our discussion. I have a Nokia N900. A computer with integrated phone, running a full Linux system capable of installing Debian installation packages for the ARM distribution. If I want to build a Linux system, then I have to select a suitable processor for running Linux. If I need to build a lamp timer, I'm better off selecting a different processor. You find a 100:1 or larger difference between high-end and low-end in the 8051 world just as in the ARM world. What goes for the high-end chips should not be extrapolated down to the low-end world.

The Nokia N900 runs a superscalar processor at 600MHz (and some people overclock it to 1GHz). One of the products I work with runs an ARM chip at 48MHz - the chip can do twice as much, but I don't need the extra speed. Most of the time, I don't need 48MHz either, but it's good for burst needs when multiple CAN channels run at max while all other peripherals are also busy.
For the majority of the time, the core is stopped. The main loop spends most of its time waiting at a halt instruction. Hot? If you buy me an IR-imaging camera (not the IR you get with a normal digital camera or standard IR film, but a camera that can actually measure temperature), I would be able to notice if the processor is powered or not. My finger isn't able to sense if it is up and running.

Justin said:
I will say I have never felt a hot 8052 and on top of that one can easily move the chips around on the board so that if they did heat up they would easily dissipate the heat.

Do you use tape to fix your 8052? If it's soldered, I can't see how you can easily move the chips around. If you have it soldered to a PCB, a CAD program would not care about the core used - it would move an 8052 chip with the same ease as it would move a PIC chip or an AVR chip or an ARM chip. If you have a processor with extreme quantities of pins? Then I really hope you are working with a product that needs such a chip - it would be stupid to add 60 pins "just in case". A 100-pin 8051 is no different from a 100-pin ARM. And you only select 100 pins if you see an advantage in doing so - for example by being able to remove a lot of external MUX chips.

In the end, there is nothing magic about the number of bits the ALU is wide. So there are no magic rules about what is hard and easy. Select a processor based on an unbiased analysis of the needs. If the analysis points at a specific 8051 variant - use it. If it points at a specific AVR - use it. Don't use core size as a holy grail. Don't use digits in the model name as a holy grail.

The 8051 will continue to live for a good many years. Not because it is best, but because it will always be "good enough" for a large number of tasks. But at the same time, it will be unsuitable for a large number of tasks too. The same goes for any other chip, whatever the manufacturer, architecture, core size, pin count, ... Make sure you don't go ahead and continue to use an 8051 where it is not suitable - just because of ancient history. And make sure you don't ignore other chips because of incorrect assumptions that are not applicable.