One of the more common design techniques for power distribution networks (PDNs) is to determine the peak impedance that will keep the voltage excursions on the power rail within allowable limits, generally referred to as the target impedance. In theory, this evaluation is as basic as Ohm’s law: the allowable target impedance is the allowable voltage excursion divided by the change in load current. Significant benefits of this method are that it: 1) allows measurements to be made in the frequency domain, 2) is simple, 3) is inexpensive to perform, and 4) is wideband, allowing measurements to extend easily into the GHz range. A drawback of the frequency-domain measurement is that the results are small signal; they might not capture non-linear control behavior or boundary conditions that produce large-signal rather than small-signal responses.
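The Ohm's-law relation above can be sketched in a few lines. The rail voltage, tolerance, and load-step values below are illustrative assumptions, not figures from this article:

```python
# Target impedance from the Ohm's-law relation:
#   Z_target = allowable voltage excursion / change in load current
def target_impedance(rail_voltage, ripple_fraction, delta_i):
    """Return the target impedance in ohms.

    rail_voltage    -- nominal rail voltage (V)
    ripple_fraction -- allowed excursion as a fraction of the rail (e.g. 0.03)
    delta_i         -- worst-case change in load current (A)
    """
    return (rail_voltage * ripple_fraction) / delta_i

# Illustrative example: 1.0 V rail, 3% allowed excursion, 10 A load step
z = target_impedance(1.0, 0.03, 10.0)
print(f"Z_target = {z * 1000:.1f} mOhm")  # 3.0 mOhm
```

The design intent is that if the PDN impedance stays below this value at all frequencies of interest, the rail stays within tolerance for that load step; the article goes on to examine where this assumption breaks down.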
Measuring in the time domain offers a large-signal measurement solution; however, the method is much harder in practice because controlling very high-speed current steps is difficult and might not be possible. This article focuses on the fundamental flaws of using target impedance as an assessment method, using simple lumped-element models and simulations to highlight some of the key issues. A high-performance optimization simulator is used to determine the best- and worst-case voltage excursions for a given tolerance.
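A minimal lumped-element sketch of the kind of model the article refers to might look like the following: a VRM branch modeled as a series R-L in parallel with one decoupling capacitor modeled as a series R-L-C (ESR, ESL, C). All component values are illustrative assumptions, not the article's models or its optimization simulator:

```python
import numpy as np

def pdn_impedance(f, r_vrm=1e-3, l_vrm=10e-9, esr=5e-3, esl=1e-9, c=100e-6):
    """|Z| building blocks for a two-branch lumped PDN model.

    f     -- frequency in Hz (scalar or NumPy array)
    VRM branch:  r_vrm + j*w*l_vrm
    Cap branch:  esr + j*w*esl + 1/(j*w*c)
    Returns the complex parallel combination of the two branches.
    """
    w = 2 * np.pi * f
    z_vrm = r_vrm + 1j * w * l_vrm
    z_cap = esr + 1j * w * esl + 1 / (1j * w * c)
    return z_vrm * z_cap / (z_vrm + z_cap)

# Sweep 1 kHz to 100 MHz and locate the peak impedance
f = np.logspace(3, 8, 501)
z_mag = np.abs(pdn_impedance(f))
peak = z_mag.argmax()
print(f"peak |Z| = {z_mag[peak] * 1000:.1f} mOhm at {f[peak] / 1e6:.2f} MHz")
```

Comparing the peak of such a sweep against the target impedance is the small-signal, frequency-domain check described above; the article's point is that this check alone can miss the large-signal behavior.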