??? 08/30/09 14:53 Modified: 08/30/09 14:54
#168626 - That's where the problem lies ... Responding to: ???'s previous message
Your perception that every case of "RESET problems" is, in fact, related to RESET has no rigorous basis. Nobody, to my knowledge, has ever made a rigorous connection between the "problem" and its cause, and certainly not to the RESET signal itself. From what I've read, they found no obvious cause for the perceived problem and made a circuit change, in the form of inserting a supervisor. The problem, which might well have been RESET-related, then "went away", or at least became less apparent, and the resulting assumption was that it was RESET-related and repaired.
Has anybody actually said, "I observed this behavior in the RESET (or other) signal(s), which violates required behavior, and therefore made the decision to insert a supervisor"? I don't remember it being so clearly stated. That's just one example, however.

Rigorous testing would force such matters to the surface, and, as a result, the application circuit's behavior could be remedied, to the extent that it could be forced within specified limits, OR the manufacturer of the MCU or other offending component could be made to bear the cost of the remedy. If failures aren't traced to their causes, then there is insufficient rigor in the process. I don't mean to suggest that all insertions of supervisors are the result of random action, but when they're done without rigorous justification, they're no better.

The manufacturers haven't helped much. They don't all specify a rise time or fall time for Vcc. They don't all specify a rise time and fall time for RESET, nor do they set any limits on its duration beyond the minimum that some provide. Further, they don't specify a maximum start-up time for their oscillator when a crystal is attached, so that the oscillator itself could be used as a duration-determining parameter.

Hardware design isn't supposed to be a creative art, and it isn't supposed to be based on guesses. Frankly, neither is firmware design. I've read the various "explanations" that are clearly a struggle to apply some sort of logic to what is little more than an assumption based on a guess.
Nobody has said, "I observed this <followed by some sequence of signal behaviors and durations that violate the manufacturer's specs, and the observed failure rate> and concluded this <followed by some rational set of conclusions>, on the basis of which I decided to do this <explaining what was done and how it was intended to remedy the previously described fault(s)>, and subsequently observed this <followed by a description of now-properly behaving signals and timings, all within manufacturers' specifications. As a consequence, the failure rate was ...>."

That's terribly pedantic and possibly boring to most people, particularly since it requires lots of time and diligent effort: designing and implementing a test procedure, applying those rigorous testing procedures, carefully recording the results, and drawing conclusions based on them and not on some conjecture.

I mention this not because I want to reopen an old wound, but because I want to make it clear why there are so many testing and verification positions relative to the rest of the employment market in electronics development engineering. It's the least glamorous field, seldom acknowledged by those who write the checks, and often understaffed and underequipped. The result is lots of unpaid overtime, and little recognition beyond the "those %$#@! test guys found another problem" comments.

It's a job not like driving the carriage, but more like picking up what the horse leaves behind. There are probably fewer openings for carriage drivers than for poop scoopers.