#145997 - Derivation
Responding to: ???'s previous message
> In other words, I'm obtaining about 2 samples per second; taking 4 or 16 or 32 samples is simply too long.
Well, if you want a stable result up to the last LSB, you will have to work with several samples. There's no "magic" way of knowing the precise end result from just one sample, apart from outright cheating (i.e. knowing what the input value is a priori). You _may_ be able to give an estimate of the precise value that is statistically better than the actual measured value if you have some a priori knowledge of the statistics of the signal (mean value, autocorrelation function) and just one sample, but even that is more educated guesswork than anything else.

> As I have explained, if you are using flash or SAR ADCs, then taking 50 samples and then dividing the end result by 50 does stabilise the last two digits, but doing this kills the performance of such fast ADCs, not to mention the added processing time.

Kills the performance ... how? One point of having such fast ADCs is being able to use them for oversampling applications. And doing a couple of additions and a bit shift should not present any issues to one of today's CPUs (there is a minimal sketch of this further down).

> Except hardware and compiler price, not to mention single cycle 64bit floating point+MAC's, barrel shifters and about 250 flops at your disposal?

What point are you trying to make here? You either have a very distorted definition of "digital signal processing", or you're trying to make fun of me. The former could be remedied; the latter isn't really conducive to this discussion. And if you're doing floating point DSP, what is the point of the barrel shifter? Last I heard, floating point numbers don't take too well to being bit-shifted, unless you're trying to approximate an inverse square root à la Quake 3. Not that floating point is necessary for signal processing. In fact, in many applications I'd rather decide myself where I put the decimal point. Not to mention that using floating point hardware on general signal processing tasks is a colossal waste of money that would be better spent on hiring a designer with a clue about numerics.

> There are lots of different numerical methods to solving first/second order differential equations besides Laplace, such as:

The Laplace transformation can be used to solve differential equations analytically. In the case of a simple, first order differential equation, you can arrive at the same end result (algorithm) by solving with one of the numerical methods or by using the various transformations. However, in digital signal processing, the latter approach is preferred. More advanced numeric solvers (especially the multi-step methods) require input data that is simply unavailable in signal processing (like the value of the signal between two samples, or the value of future samples). The numeric solvers are better suited for simulations.

The numeric solution of a first order differential equation would go like this:

1/a * y' + y = x
y' = a * (x - y)

Discretize in time and approximate the derivative by the first two terms of the Taylor series (-> explicit Euler method); Dt means delta t:

y(n+1) = y(n) + Dt * y'(n)
y(n+1) = y(n) + Dt * a * (x(n) - y(n))
y(n+1) = (1 - Dt * a) * y(n) + Dt * a * x(n)

For signal processing, Dt is usually simplified (normalized) to 1. This is different from numeric simulation, where playing with the step size might become important. So the difference equation that approximates the differential equation is:

y(n+1) = (1 - a) * y(n) + a * x(n)

which is basically the equation you'll find in Steve's or my earlier posts.
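To make that concrete, here is a minimal fixed-point sketch of this difference equation (my own illustration, not code from the earlier posts). It assumes a = 1/2^K, so the multiplications reduce to shifts, and it keeps K extra fractional bits in the state so truncation doesn't make the last digit jitter:

#include <stdint.h>

/* Single-pole low pass, y(n+1) = (1 - a)*y(n) + a*x(n), with a = 1/2^K.
 * K = 4 (a = 1/16) is an arbitrary choice for illustration.
 * acc holds the state y scaled by 2^K, i.e. with K extra fractional bits.
 */
#define K 4

static int32_t acc;                 /* filter state, = y * 2^K */

uint16_t lowpass_update(uint16_t x) /* x: raw ADC sample */
{
    acc += (int32_t)x - (acc >> K); /* y += a * (x - y), in scaled form */
    return (uint16_t)(acc >> K);    /* filtered value, same scale as x */
}

A larger K smooths more, but also settles more slowly.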
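And the averaging sketch referred to further up, assuming a power-of-two block size so the division becomes a shift (N = 6, i.e. 64 samples, is an arbitrary choice; averaging exactly 50 samples would need a real division):

#include <stdint.h>

/* Boxcar average of 2^N samples: one addition per sample plus a single
 * bit shift per block, which is why the "added processing time" quoted
 * above is negligible.
 */
#define N 6

uint16_t average_block(const uint16_t *samples)
{
    uint32_t sum = 0;                        /* 2^N * 16-bit fits in 32 bits */
    for (unsigned i = 0; i < (1u << N); i++)
        sum += samples[i];
    return (uint16_t)(sum >> N);             /* divide by 2^N */
}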
But this still won't help you if you want to stabilize the last digit after just one measurement, since this filter, like any other, has a certain settling time.
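To put a rough number on that settling time (a back-of-the-envelope addition of mine, using the filter derived above): after a step at the input, the remaining error decays as (1 - a)^n, so settling to within 1 LSB of a B-bit converter takes

(1 - a)^n < 2^(-B)
n > B * ln(2) / (-ln(1 - a))

With a = 1/16 and B = 12, that works out to about 129 samples; at the 2 samples per second mentioned at the top, roughly a minute.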