Pathfinding Work on No-Touch Leakage Testing, Part 2: Tales from the Intel I/O Test Road Map

With the AC I/O Loopback test method, Intel had embarked on not relying on Automatic Test Equipment (ATE) to explicitly test Input/Output (I/O) circuit characteristics. The device under test (DUT) remained connected to the ATE, and characteristics like I/O voltage levels and I/O pin leakage were still tested by the ATE. As noted in an earlier post, Intel planned to use a structural tester prior to the functional tester. Willamette (the code name for the Pentium 4 on the 0.18um process node) would be our microprocessor intercept for AC I/O Loopback. Intel’s first structural tester, the Teradyne J973, would also intercept Willamette.

Because the J973 was a limited pin-count/PEC tester, not all of the product’s pins would be connected. Hmmm…testing without connecting. Mike Tripp suggested we take on the stretch goal of performing all I/O tests on this structural tester. AC I/O Loopback provided the solution for I/O timings, but I/Os also require several DC tests that historically had been executed only via direct connection to an ATE: output voltage levels, input voltage levels, impedance, and I/O leakage. Typically called pin leakage, this last test would prove to be the one most fraught with challenges along Intel’s I/O test road map.

The standard method for leakage testing consists of putting pins into a high-impedance state while the tester forces a DC voltage and measures the resulting leakage current on each pin. Leakage between pins is checked in a similar manner: a DC voltage is applied to one pin while the adjacent pin is grounded, and the current is measured. The test takes considerable time because the explicit measurement of current requires settling time for the ATE instruments.
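To see why settling time dominates, here is a minimal sketch of the serial per-pin measurement cost. The settle and measure times below are illustrative assumptions, not figures from the post:

```python
# Rough model of conventional ATE pin-leakage test time. Each pin is
# forced and measured one at a time, and every measurement pays the
# instrument's settling time. SETTLE_S and MEASURE_S are assumed values.

SETTLE_S = 0.005   # PMU settling time per measurement (assumed, 5 ms)
MEASURE_S = 0.001  # current-measurement window (assumed, 1 ms)

def leakage_test_time(num_pins: int, settle_s: float = SETTLE_S,
                      measure_s: float = MEASURE_S) -> float:
    """Total test time when every pin is forced and measured serially."""
    return num_pins * (settle_s + measure_s)

# A few hundred pins measured one at a time adds up quickly:
print(f"{leakage_test_time(423):.3f} s for 423 pins")
```

Even with optimistic per-pin numbers, the test time scales linearly with pin count, which is why eliminating the explicit current measurement was attractive.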

With the team’s goal of doing all I/O testing on the structural tester, we leveraged Tawfik Arabi’s I/O leakage no-touch testing method that he explored on a Deschutes product. He had used the JTAG interface that was historically used for Boundary Scan board level test. The test concept uses the discharge property of an RC circuit. The steps:

  • Set a voltage level on a pin
  • Tristate (i.e. disconnect the pin from a driving source)
  • Let it discharge

If a pin has excessive leakage, its voltage will discharge faster than expected. The concept could even be applied to pin-to-pin leakage, which looks for defects that inadvertently connect two isolated pins.
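The steps above can be sketched with the standard RC discharge equation. The voltage levels, pin capacitance, and wait time below are illustrative assumptions, not values from the Deschutes or Willamette work:

```python
import math

# Sketch of the RC-decay leakage check: precharge a pin, tristate it,
# wait a fixed time, then sample the voltage. A leaky pin (low leakage
# resistance R) decays further than a good one. All values are assumed.

def voltage_after_decay(v0: float, r_leak_ohm: float, c_pin_f: float,
                        t_s: float) -> float:
    """V(t) = V0 * exp(-t / (R*C)) for a pin discharging through its leakage path."""
    return v0 * math.exp(-t_s / (r_leak_ohm * c_pin_f))

V0 = 1.5         # precharge level (V), assumed
C_PIN = 5e-12    # pin capacitance, assumed 5 pF
T_WAIT = 1e-6    # wait before sampling, assumed 1 us

good = voltage_after_decay(V0, 1e9, C_PIN, T_WAIT)   # ~1 Gohm: barely decays
leaky = voltage_after_decay(V0, 1e6, C_PIN, T_WAIT)  # ~1 Mohm defect: visible droop
print(f"good pin: {good:.3f} V, leaky pin: {leaky:.3f} V")
```

Because the pass/fail decision reduces to a voltage comparison after a fixed wait, no slow current measurement is needed, which is the appeal of the method on a structural tester.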

Luckily, we did not need to add much circuitry to enable the RC decay methodology. For Willamette, a limited number of I/Os would be connected to the structural tester (for the purpose of supporting scan-based logic tests); the remaining pins would be unconnected. In applying the RC decay method to the I/O pins, we would need to consider the differing impedance on each pin, connected vs. not connected, which could have a second-order effect on the RC decay constant. In addition, the Willamette Sort manager, Tom Lang, had concerns about process shifts and process variation.

As any VLSI test engineer can attest, shifts in the fab process can throw off a test’s implementation. Tom’s very valid concerns needed to be addressed. To investigate, I looked at data on Coppermine, the first microprocessor on the 0.18um process node. I collected data on the current pass/fail limits, pin/pad leakage on parts, and pin/pad capacitance variation. Using the RC decay time constant, I created an Excel spreadsheet to calculate the variation in leakage and decay times we might observe. By selecting data from several months, I figured I would be covered for process shifts: a shift would move the overall distribution toward less or greater leakage. I did not observe any such shift.
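A spreadsheet-style worst-case study like the one described can be sketched in a few lines. The leakage-resistance and capacitance ranges below are illustrative assumptions, not the Coppermine data:

```python
import math

# Bound the time for a tristated pin to decay to a threshold, sweeping
# the corners of assumed leakage-resistance and pin-capacitance ranges.
# t = R*C*ln(V0/Vth) follows from V(t) = V0*exp(-t/(R*C)).

def decay_time(r_leak_ohm: float, c_pin_f: float,
               v0: float = 1.5, v_thresh: float = 0.75) -> float:
    """Time for the pin voltage to fall from v0 to v_thresh."""
    return r_leak_ohm * c_pin_f * math.log(v0 / v_thresh)

r_range = (1e7, 1e9)      # assumed pass-limit to typical leakage resistance (ohms)
c_range = (3e-12, 8e-12)  # assumed pin-capacitance spread across process (farads)

times = [decay_time(r, c) for r in r_range for c in c_range]
print(f"decay to Vdd/2: {min(times)*1e6:.1f} us .. {max(times)*1e6:.1f} us")
```

Sweeping the corners like this shows how wide the decay-time window can get once process variation in both leakage and capacitance is folded in, which is exactly the margin question Tom raised.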

Despite all our preparation for testing on the structural tester, we did not implement RC decay leakage in production; we never got a chance to collect data, even pre-production. For I/O timings, AC I/O Loopback had to be adopted, but this no-touch leakage testing seemed suspicious to the product team. With a full pin-count tester still in the manufacturing flow, the familiar won out over the new. During the Willamette I/O test method development, engineers invented yet another approach to I/O leakage testing, and the engineers on a follow-on product expressed an interest in it. In the final installment of this series, I will share their story.

Have a Productive Day,

Anne Meixner

Dear Reader, what memory or question does this piece spark in you? Have you had to consider process variation in a project? Did a shift in process ever surprise you? Please share your comments or stories below. You, too, can write for The Engineers’ Daughter; see Contribute for more information.

2 Comments

  1. Ross R. Youngblood says:

    Was Googling “tristate” and low leakage. Liked this “no touch” leakage test idea.
    Years ago (in the ’90s) at Motorola, a Staff Scientist there asked me, “Has your company ever considered sample testing Parametric Tests?” Turns out that something like 75% of the test TIME was spent running low-performance DC/Parametric validation on the ATE, and 25% of the test TIME was spent doing high-speed digital performance testing. Also, only a small percentage of the DUTs failed out for DC failures. We were working together on “Typhoon”, which was the world’s first 1024-pin “structural” tester: essentially a 64 scan-pin 100MHz tester with 960 1MHz-capable “broadside” pins. Each of the 64 “scan pins” fed a shift register so full-width 1024-pin ASIC digital test patterns could be applied at 1MHz.
    This was back when DFT was considered “too expensive” and “difficult” to implement. So just getting scan chains designed in was an uphill battle.
    It’s nice to see that DFT has flourished over the years. I now see cases where ICs do onboard tests much faster than ATE could perform similar tests.

    1. Anne Meixner says:

      Hello Ross,

      Appreciate your informative response.
      With all the emphasis on testing digital logic, most DFT engineers don’t realize how much time is spent on parametric tests. Your 75% number is in the ballpark for the 1990s, for sure. With CPUs with large embedded arrays, I’ve seen memory test take up to 25% of the test time. Wafer probe test catches most of the memory defects, yet identical tests need to run at final test (aka package test, “class” at Intel).

      DFT has certainly progressed.
