??? 10/05/10 19:38
#178931 - These assertions look entirely wrong to me. Responding to: ???'s previous message
Per Westermark said:
> Doesn't matter if the square wave only works for known parts. The first step is obviously to perform logic function detection, including such things as looking for tristated outputs. If you do know it's an '00, you can try to figure out whether it's an LS00, HC00, HCT00 etc. by starting to play with individual inputs and/or outputs.

First of all, how would you detect that an output is tristated? And how would a square wave and an XOR help with that?

> It really doesn't matter that an XOR is slow, since the critical part isn't the delay from input change to output change. The critical part is that a change in the quotient (the phase relationship) between the two inputs changes the percentage of time the output is high or low, resulting in an analogue value after low-pass filtering.

How did you come to that conclusion? What matters is precisely the input-to-output delay, along with Iol and Ioh, and an XOR won't help with those at all.

> Even very small changes in time delay on the input affect the average output voltage. The reason for choosing the fastest possible technology for the XOR gate is rather that fast rise and fall times on the output mean the majority of the analogue integration is performed in steady states and not during the edges of the output signal.

I don't see how that's relevant either, since a PC isn't capable of making observations at a rate high enough to detect differences in propagation delay, rise time, etc.

> But even the edges can be survived, since they add the same offset error for a fixed square-wave input frequency, until the quotient gets so bad that the output never goes fully high or fully low before the gate changes output state again.

I think you should provide a schematic, and at least a hint as to how you'd make such measurements with the aid of a PC and no other costly hardware.

> The precision is best when measuring delays through simple gates, such as an OR gate.
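For what it's worth, the duty-cycle argument can be modelled in a few lines. This is a hypothetical sketch, not anything from the thread: the function name and the 1 MHz / 5 V numbers are my own illustrative assumptions. An XOR of a square wave with a delayed copy of itself is high for a fraction of each period proportional to the delay, so a low-pass filter turns propagation delay into a DC voltage.

```python
# Hypothetical model of the XOR phase detector under discussion.
# All names and values are illustrative assumptions, not the
# original poster's design.

def xor_phase_detector_avg(delay_ns: float, period_ns: float, vdd: float = 5.0) -> float:
    """Average (low-pass filtered) XOR output voltage for a square wave
    XORed with a copy of itself delayed by delay_ns."""
    # Each period contains two edges, so the XOR output is high for
    # 2 * delay out of every period (for delays below half a period).
    frac_high = (2.0 * (delay_ns % period_ns)) / period_ns
    if frac_high > 1.0:            # folds back for delays past half a period
        frac_high = 2.0 - frac_high
    return vdd * frac_high

# A 10 ns gate delay against a 1 MHz (1000 ns) square wave shifts the
# DC output by 2% of Vdd: 0.1 V on a 5 V supply.
print(xor_phase_detector_avg(10.0, 1000.0))   # 0.1
```

The slope (here 10 mV per ns of delay) is what a slow PC-side ADC would read, which is the sense in which the PC never needs to observe the fast edges directly.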
> Having a more complex subject under test means longer delays between the change of an input and the change of an output. This requires a slower square wave and hence less resolution in the phase detector. But on the other hand, a chip with a slower signal path is normally not allowed in a design that is as sensitive to chip delays. If you use a ripple counter, you definitely don't want a design where a 100 ps change in ripple time makes the difference between working and failing; you need a design that handles the worst-case ripple time with some margin. More internal layers in the logic chip also give a working chip a wider spread, so measuring the ripple delay doesn't need the same precision as measuring the delay of a 74x00.

I don't see how you can do any of this with a PC and some number of XORs.

> The thing here is that even a hobbyist can get very interesting results with a simple phase detector. It's irrelevant what you can do if you have a 5 GS/s scope available, for the simple reason that most people don't have one.

Just how does a phase detector fit into this?

> A very large number of logic chips of the types we are discussing in this thread - not random CPLDs or FPGAs - have quite a number of internal parameters that can be "tasted" with quite simple "spoons". For some chips it may be hard to send an input logic signal that results in a suitable output logic signal. But there may be an /OE pin that changes the output, either between a data value and a fixed logic level or between a data value and tristate.

Just how do you go about detecting a floating output?

> The point of the discussion I started - doing some detection instead of just entering a chip ID and testing a logic table - was that much can be done at low cost.

Low cost (or actually any cost) means you have limitations in what you can do.
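On the floating-output question, one conventional answer (my own sketch, not something either poster describes) is a weak, switchable pull resistor on each tester pin: a driven output holds its level against the weak pull, while a tristated pin simply follows it. The `read_pin_with_pull` callable stands in for whatever tester hardware would actually exist; it is an assumed interface.

```python
# Hypothetical tristate-detection logic.  The hardware interface is an
# assumption: read_pin_with_pull(pull_high) applies a weak pull-up or
# pull-down to the pin and returns the measured logic level.

def classify_pin(read_pin_with_pull) -> str:
    """Classify one pin as floating (tristated), driven, or unknown."""
    with_pull_up = read_pin_with_pull(True)
    with_pull_down = read_pin_with_pull(False)
    if with_pull_up == 1 and with_pull_down == 0:
        return "floating"              # the pin just follows the weak pull
    if with_pull_up == with_pull_down:
        return "driven-%d" % with_pull_up  # holds its level against the pull
    return "unknown"                   # e.g. a weak or contending driver

# A tristated output reads 1 under a pull-up and 0 under a pull-down:
print(classify_pin(lambda pull_high: 1 if pull_high else 0))  # floating
# An output actively driving low ignores the weak pull:
print(classify_pin(lambda pull_high: 0))                       # driven-0
```

Toggling a suspected /OE pin and re-running `classify_pin` would then distinguish "data value vs. fixed level" from "data value vs. tristate" in the terms used above.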
> But low cost can still give a lot more results than just walking through 2^n input patterns, watching the output values, and declaring the chip a quad 2-input NAND.

Well, I don't agree, since you wouldn't have to walk through 2^n input patterns. What you'd have to do is walk through one input vector for each known device in the repertoire. If the part passes on one gate, it's then to be checked on all the gates; if it fails, that's not what it is. When you run out of test vectors, you have a junk part. You can either open it up and read what it says on the die, or you can toss it out.

> As long as the square-wave signal used for phase detection has a fixed and known frequency, you can manage some decent calibration of a phase detector with not-too-advanced methods. So you can get interesting results even without seed chips. The confidence of the tester would of course be better after measuring 100 chips of every model and family, but even when a chip model hasn't been seen before you can still measure time delays, with a resolution set by the square-wave frequency needed, i.e. the measurement range you configure the circuit for. When you get all the way into comparative tests based on seed chips, you reach a level where you have to consider the differences in delays between batches or manufacturers. But then you have reached a level that should normally not matter - a user should normally not make a design that only works with a Fairchild chip and not a TI chip.

Since it was never unusual for one vendor to buy dice from a competitor, it's hardly important to base any decisions on what manufacturer's brand the device carries.

Richard said:
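The "one vector set per known device" search can be sketched as a loop over a truth-table database. This is my own illustration of the idea, not the poster's tool: the four-entry database and the `probe` callable (which stands in for actually driving the chip's pins) are made-up stand-ins.

```python
# Hypothetical identification-by-repertoire loop.  DATABASE and probe
# are illustrative assumptions; a real tester would drive pins.

# candidate -> truth table for one 2-input gate, {inputs: output}
DATABASE = {
    "7400 quad NAND": {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0},
    "7408 quad AND":  {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
    "7432 quad OR":   {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1},
    "7486 quad XOR":  {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0},
}

def identify(probe):
    """probe(inputs) -> observed output for one gate of the chip."""
    for name, table in DATABASE.items():
        if all(probe(inputs) == out for inputs, out in table.items()):
            return name        # would then be re-checked on all gates
    return None                # repertoire exhausted: a junk part

# Pretend the unknown chip behaves as a NAND gate:
print(identify(lambda ab: int(not (ab[0] and ab[1]))))  # 7400 quad NAND
```

The cost is proportional to the repertoire size, not to 2^n over all pins, which is the point being made above.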
> If you don't know what it is, you try all the tables you have assembled. If you don't get a match, at least you know it's not one of them, though it could be defective.

To which Per replied:

> Remember that the tables can still be arranged as a decision tree, so you do not have to test all logic tables even if the chip is of an unsupported type. A single detected output means the decision tree can directly prune all chips that have an input at that position. Even with 250 different chips in the database, you probably don't need to test for more than 10 different chips - the other 240 get pruned along the way. And it isn't until you have done this pruning that you make use of any additional hardware to look for inputs with hysteresis, current drive capabilities, chip power consumption or possible gate delay times.

That decision tree is an interesting concept, though I've no idea how you'd make such decisions. As far as the vector-table database is concerned, I'd order it by pin count, since with unknown parts that's all we know. I'd appreciate a demonstration of how you'd make the decision based on what you get from a 74xx365 and a 74xx161. I'd also be interested in how you'd apply an XOR to the business of distinguishing a 74xx161 from a 74xx163. How the PC can help would be interesting too, since it's a couple of orders of magnitude too slow in its I/O.

RE
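For readers puzzling over the pin-role pruning being proposed, it can be sketched like this. The sketch is my own, not either poster's: the three-entry candidate map is abbreviated, though the output-pin positions match the real 14-pin 7400/7404/7408 pinouts as far as I know them.

```python
# Hypothetical decision-tree pruning step: once one pin is observed to
# be an output, every candidate with an input at that position drops
# out.  The candidate map is an illustrative, abbreviated assumption.

# candidate -> set of pin numbers that are outputs (14-pin DIP)
CANDIDATES = {
    "7400": {3, 6, 8, 11},           # quad NAND
    "7404": {2, 4, 6, 8, 10, 12},    # hex inverter
    "7408": {3, 6, 8, 11},           # quad AND
}

def prune(candidates, pin, is_output):
    """Keep only candidates consistent with the observed role of pin."""
    return {name: outs for name, outs in candidates.items()
            if (pin in outs) == is_output}

# Observing that pin 2 actively drives a level prunes everything
# except the hex inverter:
print(sorted(prune(CANDIDATES, 2, True)))   # ['7404']
```

Each observation narrows the candidate set before any truth table is walked, which is how 250 database entries could shrink to the roughly 10 that actually get tested.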