#180920 - I'm not on board with all of that
Per Westermark said:
Richard said:
Per said:
So, you have never ever used anything but an 8-bit checksum or an 8-bit CRC with an 8051?
No, but I wouldn't be inclined to use a separate CRC32 with 16 bits of data.
Noted. So please point out where you saw me say I would use a CRC32 to protect a single 16-bit entity. Don't get too overexcited now.
Richard said:
I've no doubt that there are such things, but it matters a great deal how you partition the data. If you process it as bytes, you end up with different data structures than if you use words or long-words.
Stay focused now. This debate isn't related to people reading data 8 bits at a time or 16 bits at a time. Exactly what do you, or do you not, doubt? It is very much an open question, since you did respond to my response to:
Richard said:
Further, I doubt anyone would go so far as to use a keypad entry system for 200 commands.
I repeat: stay focused now - you are jumping all over with your comments, arguing against yourself. You either doubt, or you don't doubt. And make sure you remember exactly what it is you doubt or don't doubt.
Richard said:
Per said:
You do remember me saying that a test vector isn't just a sequence of digital ones and zeroes?
In my experience, that's exactly what they are. Perhaps it's just a semantic difference.
They are test data for testing hardware or software. If a hardware input is digital, then we are talking about a digital signal. If the hardware input is analog, then we are talking about an analog signal. If the hardware input has timing peculiarities, then we are talking about a signal with time-domain properties. If the software function takes an integer, then it's an integer. If the software function takes a float, then it's a float. A test vector depends on the requirements of the input, and has to take whatever type is required. That is a big reason why external test vector generators with waveform playback etc. exist but still can't really help much with testing software. And as has been mentioned quite a number of times now, a software test harness isn't focused on what digital test patterns your HP can emit.
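To make that concrete, here is a minimal C sketch of a software-side test vector. filter_sample(), the stimulus values, the expected results and the tolerances are all invented for illustration; the point is only that the vector carries whatever type the function under test requires, and that the harness - not a pattern generator - decides what a stimulus is.

    /* A minimal sketch of typed, software-side test vectors (all names and
       values invented). */
    #include <stdio.h>

    typedef struct {
        float input;       /* the function takes a float, so the vector holds one */
        float expected;    /* expected result for this stimulus                   */
        float tolerance;   /* acceptable deviation                                */
    } test_vector_t;

    /* Function under test: a trivial first-order low-pass filter with state. */
    static float filter_state = 0.0f;

    static float filter_sample(float in)
    {
        filter_state += 0.25f * (in - filter_state);
        return filter_state;
    }

    int main(void)
    {
        static const test_vector_t vectors[] = {
            { 0.0f, 0.0f,    0.01f },
            { 1.0f, 0.25f,   0.01f },
            { 1.0f, 0.4375f, 0.01f },   /* depends on the filter's history */
        };
        unsigned i, failures = 0;

        for (i = 0; i < sizeof vectors / sizeof vectors[0]; i++) {
            float out  = filter_sample(vectors[i].input);
            float diff = out - vectors[i].expected;
            if (diff < 0.0f) diff = -diff;
            if (diff > vectors[i].tolerance) {
                printf("vector %u FAILED: got %f, expected %f\n",
                       i, out, vectors[i].expected);
                failures++;
            }
        }
        printf("%u failure(s)\n", failures);
        return failures ? 1 : 0;
    }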
Richard said:
Per said:
A huge percentage of lookup tables are larger than the code that builds them. But they are used because they can give a fixed-time state change. So they have advantages when maximum speed is required. But a large keypad command processor cares nothing about speed. It does care about code size.
My tables are static because they're precomputed and not built by the executable/deliverable code.
What I did say was that a large percentage of lookup tables take more room than the code that builds them, so if you don't need constant-time processing, you can save space by using code instead of lookup tables - if/else or switch statements often scale better as the number of conditions grows, since a jump table quickly grows with the number of parameters it needs to index. Source code can prune that, since every state in a state machine normally only cares about a few conditions for jumping to another state. And some conditions are global to all states, so they need not be repeated in the code for each state. Many times you can have a state machine where only a tiny fraction of the evaluation gets translated into table lookups. Thanks to "locality of reference", each individual state knows what limited stimuli may affect it, so it can make do with a much smaller sub-table.
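As an illustration of that pruning, here is a minimal C sketch with invented states and events: each state only tests the few events that can move it somewhere else, and a condition global to all states is handled once, so no full states-by-events table is needed.

    /* A minimal sketch (states and events invented) of per-state pruning. */
    #include <stdio.h>

    typedef enum { ST_IDLE, ST_GOT_DIGIT, ST_WAIT_ENTER, ST_ERROR } state_t;
    typedef enum { EV_DIGIT, EV_ENTER, EV_CLEAR, EV_TIMEOUT } event_t;

    static state_t step(state_t state, event_t ev)
    {
        /* Global condition: CLEAR resets from anywhere, coded exactly once. */
        if (ev == EV_CLEAR)
            return ST_IDLE;

        switch (state) {
        case ST_IDLE:
            /* Only one event matters here; everything else is ignored. */
            return (ev == EV_DIGIT) ? ST_GOT_DIGIT : ST_IDLE;

        case ST_GOT_DIGIT:
            if (ev == EV_DIGIT)   return ST_GOT_DIGIT;   /* keep collecting */
            if (ev == EV_ENTER)   return ST_WAIT_ENTER;
            if (ev == EV_TIMEOUT) return ST_ERROR;
            return ST_GOT_DIGIT;

        case ST_WAIT_ENTER:
            return (ev == EV_TIMEOUT) ? ST_ERROR : ST_IDLE;

        default:                                         /* ST_ERROR */
            return ST_IDLE;
        }
    }

    int main(void)
    {
        static const event_t script[] =
            { EV_DIGIT, EV_DIGIT, EV_ENTER, EV_TIMEOUT, EV_CLEAR };
        state_t s = ST_IDLE;
        unsigned i;

        for (i = 0; i < sizeof script / sizeof script[0]; i++) {
            s = step(s, script[i]);
            printf("after event %u: state %d\n", i, (int)s);
        }
        return 0;
    }

A complete const transition table for the same machine would need one entry for every state/event pair, most of which would say nothing more than "stay where you are".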
Richard said:
I just don't call them test harnesses. I've always associated the term <harness> with a physical fixture. You have to figure out how to test your code. You have to design the test procedures before you write the code. The goal, after all, is to verify that the test criteria are met, and not that the tests fit the code.
Try the term "test harness" in Google.
First link: http://en.wikipedia.org/wiki/Test_harness
Second link: http://search.cpan.org/~andya/Te...Harness.pm
Third link: http://search.cpan.org/dist/Test-Harness/
Fourth link: http://www.testingreflections.com/node/view/3655
Fifth link: http://testharness.org/Open.Core/
Sixth link: http://msdn.microsoft.com/en-us/mag...63752.aspx
...
Maybe you should take a step back and look around a bit sometimes. Your terminology is quite ancient. And since you should have noticed that a couple of times already, you should react differently when you see people using a term in a way you didn't expect: ask "what do you mean?" or try Google instead of jumping in. Didn't you read my original post on the issue? http://www.8052.com/forumchat/read/180856
Richard said:
The goal, after all, is to verify that the test criteria are met, and not that the tests fit the code.
Have you seen anyone claim otherwise? Please show me where - I'm interested. By the way, I would recommend that you take a look at all the different methodologies that exist. Even if you aren't interested in them, it may be good to learn the terminology for next time: http://en.wikipedia.org/wiki/Software_testing
Richard said:
I've often quoted the old military saying that "Where you sit determines what you see." I haven't encountered a requirement for the sorts of features to which you refer.
It's enough that you work with a device containing an internal RTC. Suddenly you get a situation where some tests will give different results depending on the time. Your HP test vector generator can't know the time, unless maybe you can pick up lit LCD segments to use as feedback. A software test harness may include forcing the clock to a known state before starting a specific test. Or you may have a circuit with a microphone or vibration sensor, where the software performs some form of digital filtering on the data before reacting. Your HP test vector generator would have a hard time producing a very specific vibration. A software test harness can "reroute" the ADC and feed fixed tabulated data to the digital filter. Something as simple as estimating the state of a battery requires a filter, where the decision isn't based on a single measurement. So such a state machine contains history - history that is very hard to recreate from the outside. Without history, the code couldn't differentiate between a voltage dip from a short current surge, a voltage dip from a shorted battery cell, and a voltage dip from the battery charge becoming exhausted.
You are constantly arguing as if whatever you haven't done doesn't exist. Why not instead argue from the point: is there something here I may learn?
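A minimal C sketch of that kind of ADC "rerouting" (all names and values invented): production code reads the ADC through a function pointer, and the test harness swaps in a canned sample table so a battery filter that keeps history can be exercised deterministically.

    /* A minimal sketch of rerouting an ADC read so a software test harness
       can feed tabulated samples to a battery filter (names/values invented). */
    #include <stdint.h>
    #include <stdio.h>

    /* Production code reads the ADC through this pointer. */
    typedef uint16_t (*adc_read_fn)(void);

    static uint16_t adc_read_hw(void)
    {
        /* Real hardware access would go here; stubbed out for the sketch. */
        return 3700;
    }

    static adc_read_fn adc_read = adc_read_hw;

    /* The battery estimate keeps history: one dip must not look like an
       empty battery, so the value is smoothed rather than read directly. */
    static uint16_t filtered_mv = 3700;

    static uint16_t battery_filtered_mv(void)
    {
        int sample = (int)adc_read();
        filtered_mv = (uint16_t)((int)filtered_mv + (sample - (int)filtered_mv) / 8);
        return filtered_mv;
    }

    /* --- test harness side --- */
    static const uint16_t canned[] = { 3700, 3690, 3200, 3695, 3690, 3688 };
    static uint8_t pos;

    static uint16_t adc_read_canned(void)
    {
        uint16_t v = canned[pos];
        if (pos < sizeof canned / sizeof canned[0] - 1)
            pos++;                  /* one short dip among normal readings */
        return v;
    }

    int main(void)
    {
        uint8_t i;
        adc_read = adc_read_canned;           /* reroute the ADC for the test */
        for (i = 0; i < 6; i++)
            printf("filtered: %u mV\n", (unsigned)battery_filtered_mv());
        /* The filtered value never gets near a "battery empty" threshold of,
           say, 3300 mV, even though one raw sample dipped to 3200 mV. */
        return 0;
    }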
Richard said:
To me the key with FSMs is to keep the state count small. You clearly don't agree.
First of all, you haven't seen me say anything at all about my preferences regarding the size of an FSM. You are just forming an opinion based on what you hope I think, so that you can make me look as incompetent as possible by your own metrics.
I believe you grossly overestimate your importance. I don't care at all about your relative competence or lack of it, nor do I care how you go about your work. It's not likely ever to have any effect on me, as I don't care a whit what you think about me and mine. I'm simply pointing out, with all this, that people in different situations see things differently, and for good reason. I'm of the opinion, and freely admit that an opinion is all it is, that no one would attempt to do some of the things you've suggested one might do in the way you suggest one might do them. Perhaps I'm wrong, at least in that YOU might do them, but my confidence even in that is low.
The pattern generator, BTW, can be made to provide not only I/O stimuli but the clock and other controls as well, and the LA can monitor them in parallel. Even my old TEK portable can do that, though not at as high a rate. If you want to know what happens and when, you can assert pretty tight control, at least in that you know an event is so many clock cycles from T=0. My stuff is tested against external conditions and stimuli. Admittedly, I make considerable effort to keep my code short and simple, which is not everyone's approach.
Having read the items to which you provided links, I have to say I don't agree with the underlying philosophy displayed in some of them, at least in the semantics. I view testing as a process applied to systems that one believes are fully functional, and not to a subsystem that's just fresh out of the box. The first "smoke tests" are trial, not testing, IMHO.
In reality, the complexity of a system isn't related to the number of states of an FSM. You also have to include the number of conditions that must be fulfilled to stay in a state or to jump to another state. When you draw an FSM graphically, the complexity isn't in the nodes but in all the lines connecting them. And in the end, I'm not allowed to make these decisions; they are implied by the product specification. If the specification lists 100 unique keypad sequences, then that is what the FSM must handle - and it must correctly produce an error response for every deviation from those 100 required sequences. All I can do is make sure I partition the problem in a way that results in small and easy-to-maintain code. And even here, the size of the code and the number of states of an FSM don't track each other, since the code inside a state depends on the number of potential routes out of that state, and on how that may be pruned by intelligent use of helper functions. A GetCommand() function quickly prunes the options, since the caller only has to decide whether there was an error, or whether it's time to check if the command number is known.
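As a minimal C sketch of that pruning (the key protocol - decimal digits terminated by '#' - and the command range 0..199 are invented), a GetCommand()-style helper of this general shape swallows keys until it can hand back either a finished command number or an error, so the caller makes exactly one decision no matter how many sequences the specification lists:

    /* A minimal sketch of command pruning with a GetCommand()-style helper
       (key protocol and command range invented for the example). */
    #include <stdint.h>
    #include <stdio.h>

    #define CMD_NONE   0xFFFFu   /* still collecting keys */
    #define CMD_ERROR  0xFFFEu   /* malformed sequence    */

    /* Consumes one key; returns a finished command number, CMD_NONE while
       the sequence is incomplete, or CMD_ERROR if the sequence is bad. */
    static uint16_t GetCommand(char key)
    {
        static uint16_t value = 0;
        static uint8_t  count = 0;

        if (key >= '0' && key <= '9' && count < 3) {
            value = value * 10 + (uint16_t)(key - '0');
            count++;
            return CMD_NONE;
        }
        if (key == '#' && count > 0) {
            uint16_t cmd = value;
            value = 0;
            count = 0;
            return cmd;
        }
        value = 0;                /* too many digits, or an unexpected key */
        count = 0;
        return CMD_ERROR;
    }

    static void keypad_event(char key)
    {
        uint16_t cmd = GetCommand(key);

        if (cmd == CMD_NONE)
            return;                                 /* not complete yet        */
        if (cmd == CMD_ERROR || cmd > 199) {
            printf("error response\n");             /* one path for deviations */
            return;
        }
        printf("execute command %u\n", (unsigned)cmd);
    }

    int main(void)
    {
        const char *keys = "42#205#17#";            /* 205 is not a command */
        while (*keys)
            keypad_event(*keys++);
        return 0;
    }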
Richard said:
In order to ensure that my deliverable code is as specified, I keep it very simple, which, in turn, makes it cheap and easy to review and accept with confidence. That way, there won't be any last-minute band-aids.
Not sure exactly what path you are trying to debate now. You can't make your implementation simpler than what the requirements dictate. And no one has said anything about introducing random noise not dictated by the requirements specification.
I try very hard to do what the requirements dictate, which never allows me to ignore system requirements. However, system requirements generally require that the system ultimately function properly in less-than-ideal conditions, and requirements often dictate that the system be characterized as to its behavior when such conditions arise, even though the requirements don't dictate that the system function nominally as though the conditions were ideal.
The discussion has been about test harnesses, and about how to subdivide a complex solution into smaller, more testable modules - and about the fact that a number of tools exist that can autogenerate test harnesses and auto-locate corner cases (potential overflows or +/-1 errors). That isn't about making code more complex; it's about making it less complex, and about having automatic tools help locate errors.
Well, the latter certainly is an admirable goal. I've not worked with any tools that automatically generate test tools of any sort. It might be interesting to see how they apply to 805x and other small MCU tasks. There are, BTW, some other points I want to address, but I haven't the time at the moment. Perhaps later ... but not right away ...
RE