#162742 - Once more - think time instead of frequency (02/24/09 13:43)
I wasn't talking about the computed speed, i.e. miles/hour. I was talking about the lowest and highest pulse frequencies that would represent a valid speed.
The lowest pulse frequency would represent a timeout value, where you give up looking for the next pulse and flag the speed as zero. The highest pulse frequency would represent the highest interrupt load, in case you let every pulse generate a pin interrupt.

But once more: when you have a signal with a frequency, you have two choices. Count the number of pulses in a fixed time interval, or measure the time between two or more pulses.

For a very high-speed pulse train, it is easiest to just count the pulses during a fixed time interval. But for slow frequencies, that time interval must be very, very long to make sure that a single pulse more or less does not affect the precision. If you need 1% precision, then you must select a sample time that includes at least 100 pulses. If one pulse/second is an acceptable speed, then you have to measure over 1min40sec to get your 1% precision.

If you instead switch to the time domain and measure the time between pulses, you will get excellent precision from just a single pulse period when the pulse train is slow. If you can measure with 1ms resolution, then any pulse period longer than 100ms (i.e. any frequency below 10Hz) will meet your 1% target. If the pulse frequency increases to 100Hz, you only get 10ms between pulses, but you can then switch "range" and measure the time for ten periods or more. Then you will again have a measurement interval of at least 100 times your measurement resolution and can reach 1% precision.

The normal way to measure time between pulses is to let each pulse generate an interrupt. But if your highest pulse frequency is very high, then you can't do this without an external frequency divider. That is the reason why high-frequency signals are easier to measure by counting pulses during a fixed interval instead.

For low pulse frequencies, the jitter from the interrupt response time does not matter, so every single interrupt produces a valid speed, computed from the time since the previous interrupt (the time taken from a free-running timer, possibly extended with a software overflow counter in case your lowest pulse frequency is slower than the timer's rollover period - see the sketches below).

For high pulse frequencies, the varying response time of the interrupt would introduce a larger and larger error. But since the pulses then arrive at a higher rate, you can switch from computing the time between two pulses to computing the time between five, ten or fifty pulses.

In your case, you need one speedometer output every second, if I understood you correctly. In that case, the latencies in the interrupt handler do not matter, since they represent such a tiny fraction of one second.
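To make the time-domain method concrete, here is a minimal sketch in C of the low-frequency case: every pulse fires an interrupt, and the period is the difference between two timestamps taken from a free-running 16-bit timer that is extended to 32 bits with a software overflow counter. The timer width and the helper timer_count() are assumptions for illustration, not tied to any particular chip.

    #include <stdint.h>

    /* 16-bit free-running timer extended to 32 bits in software, so a
       measured period may span several timer rollovers.  timer_count()
       is an assumed helper that reads the hardware timer register. */
    extern uint16_t timer_count(void);

    volatile uint16_t timer_overflows;   /* bumped by the overflow ISR      */
    volatile uint32_t last_edge;         /* timestamp of the previous pulse */
    volatile uint32_t period_ticks;      /* latest measured pulse period    */
    volatile uint8_t  period_valid;

    /* Timer-overflow interrupt: extend the 16-bit counter. */
    void timer_ovf_isr(void)
    {
        timer_overflows++;
    }

    /* Pulse-edge interrupt: the period is simply "now minus previous
       edge".  A production version must also handle an overflow that is
       pending while this ISR runs (e.g. check the overflow flag before
       combining the two halves); omitted here for brevity. */
    void pulse_isr(void)
    {
        uint32_t now = ((uint32_t)timer_overflows << 16) | timer_count();

        period_ticks = now - last_edge;  /* unsigned wraparound is harmless */
        last_edge    = now;
        period_valid = 1;
    }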
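The zero-speed timeout then falls out naturally in the once-per-second output task: if the time since the last edge exceeds the period of your lowest valid pulse frequency, report zero. The tick rate, the timeout constant and the interrupt-masking helpers below are again made-up names, just to show the shape of it.

    /* Once-per-second output task: compute the pulse frequency from the
       latest period, or report zero if no pulse arrived within the
       timeout.  The constants are assumptions for the sketch. */
    #define TICKS_PER_SECOND     1000000UL  /* assumed 1MHz timer tick     */
    #define PULSE_TIMEOUT_TICKS  2000000UL  /* assumed: 2s without a pulse */

    extern void disable_interrupts(void);   /* assumed platform helpers */
    extern void enable_interrupts(void);

    /* Returns the pulse frequency in units of 0.01Hz (scaled by 100 to
       keep two decimals in integer arithmetic). */
    uint32_t pulse_freq_x100(void)
    {
        uint32_t now, period;

        /* Snapshot the shared variables with interrupts masked. */
        disable_interrupts();
        now    = ((uint32_t)timer_overflows << 16) | timer_count();
        period = period_valid ? period_ticks : 0;
        if (now - last_edge > PULSE_TIMEOUT_TICKS)
            period = 0;                     /* timed out: standing still */
        enable_interrupts();

        if (period == 0)
            return 0;

        /* frequency = tick rate / period */
        return (TICKS_PER_SECOND * 100UL) / period;
    }

Converting that frequency into miles/hour is then just one more scaling by your pulses-per-distance constant.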
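And the "switch range" idea is a small change to the same pulse ISR: count pulses and only take a timestamp on every Nth one, then divide. The thresholds (keep the measured interval between 100 and 1000 ticks) and the 1-5-25 range steps are arbitrary choices for the sketch.

    /* Variant of pulse_isr above: measure the time across N pulses so
       that the measured interval always spans many timer ticks and the
       interrupt-latency jitter averages out. */
    volatile uint8_t pulses_per_measure = 1;   /* current "range", N */
    volatile uint8_t pulse_count;

    void pulse_isr(void)
    {
        uint32_t now, interval;

        if (++pulse_count < pulses_per_measure)
            return;                            /* wait for the Nth pulse */

        now      = ((uint32_t)timer_overflows << 16) | timer_count();
        interval = now - last_edge;            /* time for N pulse periods */

        period_ticks = interval / pulses_per_measure;
        last_edge    = now;
        pulse_count  = 0;
        period_valid = 1;

        /* Aim for at least ~100 ticks per measurement (1% at one-tick
           resolution): widen the range when pulses come fast, narrow
           it when they come slow. */
        if (interval < 100 && pulses_per_measure < 25)
            pulses_per_measure *= 5;           /* 1 -> 5 -> 25 */
        else if (interval > 1000 && pulses_per_measure > 1)
            pulses_per_measure /= 5;
    }

Since period_ticks still holds the per-pulse period, the frequency computation in the previous sketch needs no change when the range switches.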