#170938 - about those test specs
James Hinnant said:
could be hardware, software, design, past end-of-life, etc., etc.
Better to fill unused executable memory with SJMP $. I seem to remember the ST docs even recommend filling memory with a trap pattern that is less likely to fail to trap, even when a jump to an unexpected address lands in the middle of an instruction (a sketch of one such pattern is at the end of this post).

___
http://en.wikipedia.org/wiki/Soft...ing_topics

"Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions." [12]

[12] Kaner, Cem; Falk, Jack; Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd ed. New York: John Wiley & Sons. 480 pp. ISBN 0-471-35846-0.

What testing against a comprehensive performance specification can do is verify conclusively that the device under test meets all the requirements set forth in the test specification. If that specification is properly devised, then the test cycle will show that the device meets those criteria. Further, if the test is set up to find limits, it can show at what point, in each category, the device no longer functions predictably.

In most cases, manufacturers want to know what to expect when each class of testable behavior breaks down. If, for example, the temperature limits are exceeded, how high or low can the temperature be driven, and for how long, before a failure occurs, and how does the device respond? The test can likewise be set up to determine behavior as supply noise exceeds specified limits, and so on for other parameters. All this must, of course, be considered in advance of the test process, and the likely behaviors on failure taken into account. After all, how do you know when it's failed?

One thing to keep in mind is the distinction between software and firmware. When firmware fails, it's actually a hardware failure, if, and only if, the firmware has been properly debugged before submission for testing. After all, one doesn't test a piece of hardware-plus-firmware that isn't believed to be fully worthy of testing. It should be considered absolutely perfect in terms of its specified operating parameters when submitted for testing.

A "smoke test" is really not a test at all. Just because the device doesn't burst into flame doesn't mean it "works" to any extent. Further, software standards can't really be applied to what will ultimately be viewed as a piece of hardware, as microcontroller-based circuits generally are. Software criteria are seldom relevant to this sort of product.

Too many people think that having the device do what they believe it should is sufficient indication that it "works." Until "works" is clearly defined in writing, nothing about it is known, aside, perhaps, from its cost.
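Regarding that fill pattern: the SJMP $ opcode is two bytes (80H 0FEH), so a runaway PC can land on the operand byte rather than the opcode. One pattern that tolerates that is a slide of one-byte NOPs (00H) ending in a jump to an explicit trap handler. A minimal sketch follows, in Keil A51 syntax; the region boundaries, the use of P1 to flag the fault, and the handler body are placeholders I've made up for illustration, not anything taken from an ST document.

            CSEG    AT 0800H        ; start of the unused code region (placeholder)
    TRAP:   CLR     EA              ; runaway PC caught: disable interrupts
            MOV     P1, #0FFH       ; make the failure observable (placeholder)
            SJMP    $               ; park here, or let the watchdog reset the part

            ; Pad the remainder of the region with 00H (NOP). NOP is one
            ; byte, so a PC that enters the fill anywhere, even "in the
            ; middle of an instruction", simply slides toward the jump below.
            DB      00H, 00H, 00H, 00H, 00H, 00H, 00H, 00H
            ; ... repeat (or use an assembler fill directive or a hex-file
            ; fill utility) up to 0FFDH ...

            CSEG    AT 0FFDH        ; last three bytes of the region (placeholder)
            LJMP    TRAP            ; the slide ends in the trap
            END

And note the tie-in to "how do you know when it's failed?" above: a bare SJMP $ just hangs the part silently, while an explicit trap can drive a pin, log a code, or force a watchdog reset, which gives the test fixture something it can actually observe.

RE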