Model instruments to improve signal integrity simulation
Keywords: instrument models, signal integrity, simulation
By John Olah
Applications Manager
Agilent EEsof EDA
When signal integrity simulations agree with measurement results, design iterations can be performed in software with greater assurance, more accuracy, and less cost than multiple board turns. In signal integrity design, however, getting measurements and simulation results to agree requires special consideration.
Some important steps can be taken to make measured-versus-modeled comparisons more than just qualitative. Including the effects of the test system in circuit simulators improves understanding of the design and reduces the cost of design turns.
Signal integrity designers have a large variety of test equipment and simulators available to them, some from the digital design background and some from the analog microwave background. The goal is to use each tool to its best advantage and then combine the measurements and simulation to provide the necessary insight into the design.
As long as circuit simulations produce predictive, meaningful comparisons to measured results, they serve as an indispensable tool for further design refinement. Problems arise, however, when circuit simulation does not compare well with measurement. In this case, the simulation result is at best only used as a qualitative comparison rather than for rigorous quantitative comparison. To get meaningful simulation results as a design progresses requires discipline to build up a design knowledge base.
As prototypes or test structures are produced, a good practice is to use measurements to improve simulation accuracy. However, getting good agreement between simulation and measurement is not just the task of the simulator. Often, discrepancies result from not accounting for what the instrument actually measures, or from overlooking details of the measurement setup and calibration.
In the early phase, designers use ideal signal sources and analytic models to construct a circuit representation to predict performance. As a design progresses to the point where portions of the circuit are available as physical test structures, designers look for ways to incorporate measured data back into the simulation.
S-parameter measurements from network analyzers work well as a model for passive structures or linear devices. Through the use of convolution time-domain techniques, the S-parameters are directly usable as a component model in a circuit simulator, as opposed to using the data to extract some equivalent circuit representation.
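As a minimal illustration of the idea, the sketch below (plain Python, hypothetical data, not any simulator's actual implementation) turns a sampled S21 response into a discrete-time impulse response with a naive inverse DFT and then convolves an input waveform with it. A real flow would use the full S-parameter set on a frequency grid with the proper Hermitian symmetry, which is roughly what a convolution-capable simulator does internally.

```python
import cmath

def s21_to_impulse(s21):
    """Naive inverse DFT: turn a sampled S21 frequency response into a
    discrete-time impulse response. Assumes s21 is sampled on a full DFT
    grid; hypothetical data, for illustration only."""
    n = len(s21)
    return [sum(s21[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def convolve(x, h):
    """Direct time-domain convolution: drive the measured channel model h
    with the waveform x, as a convolution-based simulator would."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Hypothetical ideal channel: S21 = 1 at every frequency point, so the
# impulse response is a single unit sample and the step passes unchanged.
h = s21_to_impulse([1.0] * 8)
step = [0.0, 1.0, 1.0, 1.0]
out = convolve(step, h)
```

The point of the sketch is that the measured frequency-domain data becomes the component model directly; no equivalent-circuit fitting step is needed.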
For high-speed data rates, another important method is to capture waveforms with high-speed digital scopes, both before they are connected to the test structure and after they pass through the test structure. These waveforms can be used as a signal source in the simulator so that the circuit is being driven by the exact same waveform as the real test structure sees.
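One practical detail in reusing a captured waveform is getting it onto the simulator's time grid, since the scope's sample interval rarely matches the simulation time step. The sketch below (plain Python, hypothetical time/voltage data) linearly interpolates scope samples onto a uniform step so they can be fed in as the stimulus.

```python
def resample(times, volts, dt):
    """Linearly interpolate a captured scope waveform (parallel lists of
    time and voltage, assumed monotonically increasing in time) onto a
    simulator's uniform time step dt. Hypothetical data format."""
    out = []
    i = 0
    n = 0
    while True:
        t = times[0] + n * dt        # avoid accumulating float error
        if t > times[-1]:
            break
        while times[i + 1] < t:      # advance to the bracketing segment
            i += 1
        frac = (t - times[i]) / (times[i + 1] - times[i])
        out.append(volts[i] + frac * (volts[i + 1] - volts[i]))
        n += 1
    return out

# A captured rising edge at 0.1 ns spacing, resampled to 0.05 ns
t_meas = [0.0, 0.1, 0.2, 0.3]
v_meas = [0.0, 0.0, 1.0, 1.0]
src = resample(t_meas, v_meas, 0.05)
```

With the waveform resampled this way, the simulated circuit is driven by the same edge shape, overshoot, and jitter content that the real test structure saw.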
Another consideration is to account for everything that comes in between the actual source of the signal and the measurement of that signal, including any cables or connectors that are not calibrated out of the measurement. While at lower data rates there might be a reason to ignore these effects, at higher data rates they can be significant contributors.
The eye diagram measurement involves a digital oscilloscope to detect the signal, and a bit-error rate generator to produce a pseudo-random binary sequence signal. For most digital test systems, the measurement they make or the signal they produce is only valid at the instrument interface. Anything external to the instrument needs to be characterized and included in the simulation before a quantitative comparison can be made.
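The two instrument roles in an eye measurement can be mimicked in a few lines. The sketch below (an illustration, not any instrument's API) generates the standard PRBS-7 pattern a BERT-style source would produce, using the x^7 + x^6 + 1 polynomial, and folds a sampled waveform at the bit period, which is the essence of building an eye diagram.

```python
def prbs7(nbits, seed=0x7F):
    """PRBS-7 generator (polynomial x^7 + x^6 + 1), the kind of
    pseudo-random binary sequence a BERT pattern source produces.
    The sequence repeats every 127 bits."""
    state = seed
    bits = []
    for _ in range(nbits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        bits.append(newbit)
        state = ((state << 1) | newbit) & 0x7F
    return bits

def fold_eye(samples, samples_per_ui):
    """Fold a sampled waveform into eye-diagram traces by slicing it at
    the bit period (one trace per unit interval)."""
    return [samples[i:i + samples_per_ui]
            for i in range(0, len(samples) - samples_per_ui + 1,
                           samples_per_ui)]

pattern = prbs7(254)  # two full periods of the 127-bit sequence
```

In a real comparison, the folded traces would come from the simulated waveform at the point corresponding to the scope's input, so the simulated eye is built exactly the way the instrument builds the measured one.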
S-parameter measurements of the cables and connectors, and even of the test device (if it is a passive channel), are a good way to characterize the path between the signal source and the scope. Including the effects of the test system in simulation helps designers make meaningful, quantitative comparisons that improve understanding of the design, reduce costs, and reduce or eliminate design turns.
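Once each element of the path has been measured, the pieces can be cascaded mathematically. The sketch below is hypothetical: lossless, matched transmission-line sections stand in for the cables and a passive DUT, each represented by its ABCD matrix; multiplying the matrices cascades the elements, and the overall S21 of the path falls out at the end.

```python
import cmath

def mat2(a, b):
    """2x2 complex matrix multiply: cascades two ABCD matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0],
             a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0],
             a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def line_abcd(theta, z0=50.0):
    """ABCD matrix of a lossless transmission line of electrical length
    theta (radians) and characteristic impedance z0 (a stand-in for a
    measured cable or passive DUT)."""
    return [[cmath.cos(theta), 1j * z0 * cmath.sin(theta)],
            [1j * cmath.sin(theta) / z0, cmath.cos(theta)]]

def s21_from_abcd(m, z0=50.0):
    """Overall S21 of the cascade in a z0 reference system."""
    (a, b), (c, d) = m
    return 2.0 / (a + b / z0 + c * z0 + d)

# Cable + DUT + cable at one frequency (electrical lengths assumed)
cascade = mat2(mat2(line_abcd(0.4), line_abcd(1.1)), line_abcd(0.4))
s21 = s21_from_abcd(cascade)
# Matched lossless sections only add phase, so |S21| stays at 1.
```

Repeating this at each frequency point of the measured data yields the end-to-end response of the test path, which is what must be included in the simulation before the measured and simulated waveforms can be compared quantitatively.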
About the author
John Olah is applications manager at Agilent EEsof EDA. He can be contacted at john_olah@agilent.com.