
01/15/10 17:01
#172496 - You'll get no argument from me
Per Westermark said:
Richard said:
Doesn't that require that those be part of the same routine? If your write routine simply checks whether the memory is unlocked and, if it isn't, gives up, your NV memory is safer. If those two functions, i.e. write and unlock, are disconnected, it's safe. If they are always used together, then you might as well not use that lock/unlock function.

People working with "locking" tend to write code that looks like:
unlock_memory();
write_changed_data();
lock_memory();


A program that jumps to a random location, because of some ESD glitch or similar, is quite likely to run this three-step procedure, whatever complexity you have for the lock/unlock code.

Whenever the code has logic to unlock the memory, stray execution may locate this chain of events.

Richard said:
Just how would you have a "pointer error" in code that's thoroughly debugged?

Your regular question. Apply an ESD gun to your product, and it will not matter how thoroughly your code is debugged. No memory access can be trusted when enough voltage or magnetic field is applied to a circuit. And no, shielding will not solve the problem; it just changes the amount of energy it takes before your product suffers a hiccup.

Actually, I believe that would qualify as a single-event upset rather than a "pointer error," but it should nonetheless be considered.

Same thing if one of your components isn't as good as you thought. The factory test may have accepted the unit without being able to detect that this specific unit will not work well over the full temperature range it was expected to support. Testing your prototypes is not the same as knowing that 100.000% of produced units will be at least as good as the prototypes.

That's why production units should be subjected to random short-term (100-hour) stress testing, while the qualification units ought to be 1000-hour tested under all stresses. You need statistical data on MTBF as well as comfort-zone data.

Another thing: your new device may be excellent, but 10-15 years later, a unit running at high temperatures will have suffered significant aging of all its components. Capacitors may have lost much of their capacitance, making the unit more sensitive to glitches on supply voltages or input/output signals.

It is foolish to decide that there is no need for a safety belt just because the code or electronics have been tested. Good code should still be written in a robust way, so that it either resets or self-repairs when possible.

Yes, and the hardware should be designed to do that as well.

A student may strive to write "correct" code. An experienced engineer should not stop there, but try to write robust code.

I didn't mean to suggest that all mishaps are caused by firmware. The hardware has to be robust as well: power supply, supply bypassing, clamps on external input signals, etc. all have their place. Engineering should always incorporate such "features" as will make the product bullet-proof, and let management decide what to omit for cost or other reasons. That is, after all, a business decision.

RE




