06/24/10 09:10
#176878 - Avoiding what issue? Responding to a previous message
Justin said:
I wasn't talking about my designs.

And why assume that I was talking about your designs? You shouldn't have any corroding pins even if you buy electronics off the shelf. There are many ways to make electronics work in corrosive environments, and changing the pin center-to-center distance is definitely not at the top of that list - hardly on the list at all. The packages you dislike are already in use in the most demanding environments, with good results. But salt should never be allowed to reach any electronic parts.

Justin said:
Obviously you missed that qualifier at the end that said "maybe a little more poorly".

No, I most definitely did not miss any "maybe a little more poorly". It's just that there is no magic factor producing any "little more poorly" in either direction without looking at a specific problem and specific chips. So without a specific problem and specific chips, the "maybe a little more poorly" part is moot. But do I have to start from scratch and repeat everything I have ever written in this thread in every single post, so as not to have "missed" something?

I have made several posts with the implied meaning that you will always have situations where a specific 8-bit processor wins over a specific 32-bit processor, and you will always have situations where a specific 32-bit processor wins over a specific 8-bit processor. That is not incompatible with another statement I have made at least twice: a 32-bit processor can always be designed to span 100% of the feature set and timing of an 8-bit processor, but the reverse is not true. So there can never be a general case where an 8-bitter must always be able to win over a 32-bitter in speed or suitability of instruction set.

There can be situations where an 8-bitter wins on power, but the 32-bitters tend to be released by the companies with the big money and access to the best factories. So you can normally get 32-bit processors in smaller geometries than the 8-bit processors, which in this case balances any extra power consumption from the extra transistors in the 32-bit cores. And while using a technology where transistors are cheap, you at the same time get access to great peripherals - the 8051 is a nice processor, but every part of it was designed on a very, very, very, very strict transistor budget. Most bang without adding an extra transistor. That makes the UARTs, timers, SPI, I2C, ... normally much nicer on 32-bit processors. Or more specifically - a low-end 32-bit processor normally has better peripherals than a high-end 8-bitter in general, 8051 chips included.

I have also specifically mentioned why I have used ARM as the example for 32-bit processors. It's the one big 32-bit architecture that is seriously fighting in the microcontroller arena. For other architectures you will find one or more odd variants, but for the most part they are competing in the PDA/phone/router/PC/... arenas. Release 10 great x86 chips with good microcontroller features, available from one source, and I might mention them. Release 1000 variants and I most definitely will. The old 80186/80188 chips did have a number of peripherals integrated, but the concept was still more "system on a chip" than microcontroller. I have used them in more than one design, but only because the availability of alternatives was so different at that time.

When looking for situations where an 8052 chip will start out ahead (but may or may not end up the best choice), you have to focus on something that very few other processors have: the dedicated bit instructions that take up a large part of the instruction set. On one hand, the large number of opcodes consumed by bit instructions means that instruction space was lost for some more advanced instructions (given the availability of more transistors later down the timeline). But the bit instructions make the chip excellent for programs performing single-bit decisions or single-bit control using a minimum of instructions.
A general-purpose processor will need extra and/or mask instructions to perform the same things, which adds instructions, bytes of code space and normally extra clock cycles. Some ARM chips do have hw acceleration outside the core emulating bit reads and bit writes (bit-banding). But the chip still does not have a bit test - it needs a separate instruction for the emulated bit read besides the test/jump. Since there are 32-bit chips that can manage work-arounds and still end up doing bit operations with similar timing (even if performing multiple 1- or 2-clock instructions), it won't be enough to just claim that projects with bit operations are projects where 8051 chips end up on top. It's the area where they start with an advantage. The rest of the project requirements will then tell if they manage to hold the lead, end up similar, or get dropped.
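To make the instruction-count difference concrete, here is a minimal sketch, not from the original discussion: the same single-bit poll written for both sides. The Keil C51 syntax on the 8051 side and the Cortex-M3 bit-band alias arithmetic are assumptions for illustration, and the register address and bit number on the 32-bit side are made up.

#ifdef __C51__                      /* Keil C51 toolchain for the 8051 side */
#include <reg51.h>

sbit FAULT_FLAG = P1^3;             /* P1.3 is directly bit-addressable */

void poll(void)
{
    if (FAULT_FLAG)                 /* one JB instruction: bit test and jump */
        FAULT_FLAG = 0;             /* one CLR instruction */
}

#else                               /* Cortex-M3 style, using the bit-band alias region */
#include <stdint.h>

/* Hypothetical peripheral register and bit number, for illustration only. */
#define DEV_STATUS_ADDR  0x40010000UL
#define FAULT_BIT        3U

/* Peripheral bit-band alias: 0x42000000 + byte-offset*32 + bit*4 */
#define BITBAND_PERIPH(addr, bit) \
    (*(volatile uint32_t *)(0x42000000UL + (((addr) - 0x40000000UL) * 32U) + ((bit) * 4U)))

void poll(void)
{
    if (BITBAND_PERIPH(DEV_STATUS_ADDR, FAULT_BIT))      /* load from alias + compare + branch */
        BITBAND_PERIPH(DEV_STATUS_ADDR, FAULT_BIT) = 0;  /* store to alias = hardware bit clear */
}
#endif

The point is only the number of instructions per bit access, not which side wins a given project.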
Justin said:

So, why am I working with an 8051/2? I mean you even go on to say that it is economical to misuse power with [...]

Misuse power? Power doesn't have anything to do with the number of transistors. Transistors are not a scarce resource that we have to count to be environmentally friendly. And 10 transistors in one geometry can consume less power than one transistor in another geometry, especially if the 10 transistors are through with the job faster and can then just idle. Current process technologies consume most of their energy on transition changes, i.e. the recharging of capacitances. A smaller transistor has a smaller capacitance and so needs less energy to toggle. It's not until leakage currents represent the majority of the loss that a processor will continue to draw a lot of energy even when sleeping.

Another thing - most newer microcontrollers have very advanced power-routing features. If you don't need UART3, you turn off the supply power to it. The transistors are still there, but since they are not powered they will not even leak current.
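As a rough illustration of those two points (dynamic power scales with switched capacitance, and unused blocks can be gated off entirely), here is a hedged sketch. The register name, address and bit are invented; real parts expose this through vendor-specific clock-enable and power-domain registers.

#include <stdint.h>

/* Dynamic power is roughly P = a * C * V^2 * f (activity * capacitance *
   voltage squared * frequency).  Smaller-geometry transistors have smaller C,
   and a block that finishes its work early can idle, so "more transistors"
   does not automatically mean "more power". */

/* Hypothetical peripheral clock-enable register, for illustration only. */
#define PERIPH_CLK_EN   (*(volatile uint32_t *)0x40021000UL)
#define UART3_CLK_BIT   (1UL << 18)

static void uart3_power_down(void)
{
    /* Stop the clock: no transitions means essentially no dynamic power. */
    PERIPH_CLK_EN &= ~UART3_CLK_BIT;

    /* Parts with true power gating also remove the supply from the block,
       so the unused transistors do not even leak. */
}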
Justin said:

You still avoided my original question and gave me that salesman-like approach. What is the "suitable market"?

Nope. I have not ignored the question. The problem with the thread is that it somehow makes the assumption that you can subdivide the embedded market into niches, and that each niche can then be tagged as 8051 model x, 8051 model y, PIC z, AVR w, ... Even tiny projects are multidimensional, meaning there will not be a single metric for selecting the best chip. And as soon as we have a multidimensional problem, we may end up with equations that can't be solved exactly or with a formula. So we need an iterative search, and we may not be able to prove that we can find the optimum - maybe our iterations can only find some sub-optima. We can identify a number of big no-nos and throw out the large groups of processors breaking those no-nos. At the end, we have a couple of handfuls of candidates left, and have to pick one with no more hard evidence to use. That is a good time to pick one you feel good about, even if that is a subjective measurement. But at least "I like" means you may have a happier time during the development stage. Kind of like working with people you like.

Saying that a specific problem area should always use an 8051 chip is similar to saying that a specific area should always use a Silabs C8051F92x-93x. It can only end up in a large number of "why"s that can't be answered. Why say that a problem area should always use a chip with 64kB of flash, specially designed for low power? Maybe the unit is always powered from 230V mains? Maybe the expected code size is 300 bytes? Maybe the need is only for 3 I/O pins?

It is possible to discuss generic selection processes - what is important to look at before selecting a chip. But there will never be a "best" chip. Subdivide the embedded world once, twice, ... twenty times. That could at most give you one million small pieces of the embedded world. But those pieces will probably still have one or more unhandled parameters - parameters that mean you can't pick one specific processor and say it is the "best" processor for that tiny niche. And even if you see a very suitable processor, it may end up suboptimal within weeks or months because of new processor releases. Since there is a huge overlap between different microcontroller families, you can't just loosen the rules a bit and upgrade from a single specific model from a specific manufacturer to saying that the niche is best handled by a specific architecture. You will normally find multiple architectures with multiple models that are well suited for the task. And there aren't even any hard natural limits between 8-bit, 16-bit or 32-bit problems.

In the end, you get a thread filled with debate. Nothing wrong with that, as long as you realize why you get a debate instead of a list of target areas where an 8051 should be used. Suitable areas^H^H^H^H^Hprojects for 8051 chips are areas^H^H^H^H^Hprojects where the chosen 8051 isn't a misfit. The exact same projects may be suitable for a PIC. Or for an AVR. Or an MSP430. Or a number of other architectures. It all comes down to the large range spanned by the different models in an architecture, and the large overlap between architectures. It is 10 or 100 or maybe even a million times easier to talk about "do not" than "do". Everyone would agree with "Do not use a PC-class processor for a lamp timer". But try the reverse. What is the best processor for a lamp timer?
Suddenly, you get a lot of open issues:
- extremely low power because it is driven by a capacitive feed? Or maybe months running on a supercap?
- number of outputs - one lamp or 16? Maybe just 8 bits is enough?
- able to PWM-control the lamp with great precision without burdening the core?
- an RTC to keep time individually, or a radio-controlled clock, or just counting cycles from the mains?
- a huge set of control points, or just four on/four off times?
- support to drive an LCD directly? Or send data using 4-bit, SPI or UART? Or just LEDs for feedback?
- ...

Even for a trivial product in a very well defined niche you get a huge set of open parameters. A generic 8051 with an intelligent LCD controller? Or an ARM chip with hw support to drive the LCD glass with no extra glue logic? Or a nanoWatt PIC? Or a custom 4-bit chip with special peripherals designed especially for shipping millions of lamp timers? All debates claiming that a 32-bit processor is overkill have to remember that 4-bit processors exist too, so you could just as well debate whether 8-bit processors are overkill. But are they? And when?

Justin said:
I will recommend the 32-bit PIC because everyone likes the ARM or has only worked with ARM.

That is silly. Recommend a processor because you see objective reasons to recommend it. Don't recommend anything just "because". What is your view on the development tools needed for your 32-bit PIC? Do you recommend them? And do you also recommend the PIC because of price and availability? And multitude? And the amount of independent sample code and documentation? Their CAN controllers? Or because Microchip claims they are "designed for best-in-class 32-bit performance and accompanied by a vast offering of software"? Or exactly what are your objective reasons for your recommendation? Just being anti isn't a good reason.

Justin said:
Not one time did I say "great love of ARM chips".

If you do read the sentence, you'll notice that it is I who am using the expression "[...] wasn't because the great love of ARM chips [...]", so why do you assume that anything I write is magically a claim that someone else has said the same thing?

Justin said:
Why is it assumed that 32-bitter inherently implies ARM? Isn't that just a bit odd.

The first post contained four references to "ARM". The second post, two. So obviously the ARM family is on people's minds. There must be a reason why, and probably not an odd one. They aren't exactly under heavy pressure from other 32-bit architectures in the embedded world...