??? 01/28/11 20:54
#180892 - Lucky you
Responding to: ???'s previous message
Richard said:
I doubt anyone using an 8-bit MCU would start using 16-bit anything as part of a protocol. That's why they make 16-bitters!

Pathetic comment. Do you know what Turing-complete means? This world doesn't need preachers claiming an 8-bit processor must only work with 8-bit data, while a 32-bit processor must only work with 32-bit data. So you have never, ever used anything but an 8-bit checksum or an 8-bit CRC with an 8051? You have never used a protocol where the address is 16 bits wide and each bit in the address represents one listener, allowing you to specify any combination of 16 listeners as a target? You have never worked with a protocol requiring more than 256 commands? It's OK to call them command and sub-command if you feel more comfortable looking at them as two 8-bit values. Doesn't matter. It's still the same thing.

Richard said:
Further, I doubt anyone would go so far as to use a keypad entry system for 200 commands. I'd guess it would take the form of a fairly simple command and a parameter or two, in which case the code would be tested separately, command recognition first, then parameter assessment.

Feel free to doubt. Your doubt doesn't make the programming manual for the alarm system I use as an alarm source any thinner. I think it's 80 pages of command sequences...

Richard said:
Use the UART? Probably not, and if it had to be used, it could be done at test speed and not necessarily at 300 baud.

It's impossible to know what you are discussing (besides a UART), since you don't quote any text. A system that takes its input through the UART doesn't leave you too many choices when it's time to test. I would give it a very high probability that system testing would use that same UART for sending the test vectors. You do remember me saying that a test vector isn't just a sequence of digital ones and zeroes?

Richard said:
When it's hard to reach certain modules, one can verify their proper function and program flow separately.

So Richard, how long did it take you to realize that? Didn't you notice that exactly that - module testing - has been discussed for a number of posts now? And module testing makes use of a software test harness that sends stimuli to a module and checks the results. But good that you did somehow get around to thinking about module testing. It's a necessity, since the multiplicative nature of a system results in an almost infinite state set if you only test with external stimuli.

Richard said:
For one who so often advocates for the use of large-core MCU's, I'm surprised that you'd suggest one use an 8-bitter as you've suggested.

Not sure why you are surprised. I have used 6800 chips. HC11. Z80. 8051, 68008 (32- or 8-bit depending on your view), 80186, AVR, PIC, ... I use whatever processor I feel is best suited for the task. Which means I do not argue processor choices based on some hidden agenda ("USE xx BRAND"). I can do 32-bit jobs with an 8051 and 8-bit jobs with an ARM.

The question isn't what width the numbers I need to work with are. The question is what total computational power I need, or what hardware acceleration. Or what purchase price. Or what availability. Or what power consumption. Or how small a case. Or whether I have a compiler available. Or whether I'm extending an existing program. Or ...

But what I have shown many times is that there is no significant price difference between a fast 8-bit processor and a slow 32-bit processor. And the slow 32-bit processor normally has enough flash that it doesn't matter if each instruction takes more code space. So people should not fight almost to the death with an 8-bit processor on problems a 32-bit processor can do trivially. Why bank-switch memory with an 8051 when an ARM has 4GB of linear address space?

But that is still irrelevant for this thread. You constantly argue "I can squeeze into a smaller...". Yes, if an 8-bit processor can do the task - why not let it? If a customer wants their data encrypted before it's sent in, and the existing product has an 8-bit processor with enough free code space and CPU cycles to do the encryption - let's do it. The receiving end doesn't care whether the sender had an 8-bit or a 32-bit ALU, as long as the data arrives at the expected time and is correctly encrypted.

If you think about it, it doesn't matter what processor is used. Testing is still to a large part a universal problem. And to a large degree you use the same methods.
Some processors do have advantages - a JTAG interface can allow some extra tricks. Richard said:
I'm not sure your suggestion of using a state-machine to interpret keypad-entered commands and data is wise. I think table-lookup makes it sounder, quicker, and much more testable. That's just my opinion, of course. That's why my 8KB of code space is often <1kB of code and >6kB of table space.

The big problem is that you are arguing against yourself. It doesn't matter whether you build a state machine with a switch statement or with arrays of tabular data. It's still a state machine. Each entry in the table represents state information, and every stimulus that may change - or not change - the state represents state information. Even trivial systems quickly reach hundreds of decision points, meaning 2^100+ state alternatives. It doesn't matter whether you see the math behind it or not. The math is still there and can't be avoided until we can use quantum computers for the testing.

Richard said:
With a table-lookup, all the valid entries are vectored through the table to their appropriate routine, while the invalid ones are vectored to an error handler. It's really simple.

In some situations a table will become too large, because there are too many stimuli to look at, making the lookup itself too big. In many situations, your 6kB of state tables could be expressed in 1kB of code, making it possible to convert 1kB of code + 6kB of table into 2kB of code. Why? Because the 1kB of code can select which stimuli to look at depending on the current state, and it can perform state changes based on history stored in a couple of bytes of RAM - something that has to be precomputed and stored at many different locations in a lookup table. A huge percentage of lookup tables are larger than the code that builds them. But they are used because they give a fixed-time state change, so they have advantages when maximum speed is required. A large keypad command processor cares nothing about speed. It does care about code size.

Richard said:
I think you've overcomplicated the problem. Maybe that's why you frequently find 32-bit MCU's more useful than 8-bitters.

Yes, I know you want to think that. Because if you were right, that would hopefully mean that I was wrong. And that would make you sleep better than having to ponder the implications of having spent the last 35 years not knowing about software test harnesses. I guess you have managed these 35 years without needing to incorporate memory or time-discrete decisions in your state machines. Lucky you. Just don't assume that this is the general situation.