Needles don’t occur in every haystack. Suppose you have developed a new method to detect a needle. Now prove it! Easy, you say: just toss some needles into a haystack and go find them. I faced this scenario when proving that Weak Write Test Mode would work. Two things needed to be proved:
- Did the new design for test circuitry work as intended?
- Did the new test method catch everything the existing test method caught?
This post concerns the first proof. I simulated the Weak Write Test Mode (WWTM) circuitry prior to taping out the microprocessor design. Actual silicon would provide the real proof, and if issues existed I had only a short window of time to correct the design. Data retention failures occurred on the order of thousands of parts per million, so I couldn’t cross my fingers and hope that they naturally appeared in the first manufactured P54CS units. So what’s a semiconductor test methods engineer to do? I had some ideas and also spoke to colleagues within Sort Test Technology Development (STTD). The table below summarizes the approaches considered:
| Approach | Pros | Cons |
|---|---|---|
| Insert a defect into an SRAM cell via Focused Ion Beam (FIB) techniques | Impacts only 1 unit at a time<br>Can precisely place the defect of interest | More urgent design fixes will take priority<br>Not a guarantee of being successful |
| Randomly throw particles during a processing step for a small set of wafers | No upfront design costs<br>Can mimic a likely defect mechanism that causes bad SRAM cells | No guarantee particles land on SRAMs<br>Particles could impact any circuit on the 500+ million transistor device, making them useless for debug<br>Difficult to convince a Fab manager to do this |
| Create defective SRAM cells and place them in all on-die memories with a “defect mask”; process a small set of wafers | Precisely places all the defects, covering all fault types in all on-die memories<br>Other circuits not impacted, so the silicon can be used for other activities<br>Can limit impacted silicon to 1 or 2 wafers | Masks are expensive<br>Involves an engineering fab run |
The middle idea had been suggested by Mike Mayberry, who had managed a Fab process integration team prior to starting the STTD integration team. It seemed a little risky, I thought, but its implementation cost would be minimal. I met with Mitch Taylor, a senior Fab group leader. His face grew more alarmed as I described the possible experiment. He had valid concerns about particles messing up the machines and essentially refused to support this option. I didn’t really blame him for that decision. Yet following the motto “Never hurts to ask,” I can honestly state I explored all options. I ended up pursuing the other two options.
The FIB option provided the initial proof. All five on-die memories used the same DFT circuitry, so we could insert defects in one memory array to verify that the circuitry worked. Along with Glenn King, another engineer on the Cache Design team, I identified a couple of defects to fulfill the first working proof. When first silicon debug began, I created a working test program on the IMS tester. Then we submitted our FIB request to the queue and waited. I recall the excitement when we received the part: would it work? A snowstorm did not delay my getting to the debug lab. I ran the tester with the FIB’d part, and it failed! I used the SEM machine to probe the signals of interest and actually saw the bits flip, comparing them against a cell that didn’t flip. I felt a bit like Dr. Frankenstein when he exclaimed, “It’s alive!”
The defect mask option verified that all five on-die SRAMs had functioning circuitry. Joe Schutz and Ken McQuhae approved the funds to create the defect mask. I worked with Harley, a mask designer, to create the defective SRAM bits and place them in the five different on-die memories, and I verified via simulation that the DFT circuitry should detect them. Harley took care of the changes in the SRAM cells and the five memories. We would only need 1-2 mask layers. With masks ordered, I worked with the Fab engineers to get a special engineering lot run with the engineering masks. Then we waited for the defective SRAMs to arrive.
This set of defective parts had value beyond my verification; it also assisted the engineering team responsible for manufacturing test program development. Two engineers, Bao Nguyen and Sirama Pedarla, worked on WWTM test program development. Four of the five memories found the “defective” bits. We scratched our heads a while before Rama determined the issue. The fifth memory had a minor design flaw: its weak write test mode circuitry had been flipped horizontally, so the Bit and Bit-bar connections were reversed. As a result, the test program weakly wrote the “same” state instead of the “opposite” state. Though a rookie mistake by that memory’s designer, the design as-is could work if we changed the test to reflect the flipped connection. While disappointed that the full 10X test improvement could not be achieved, I couldn’t make a strong argument for fixing the design; we could work around the bug. The impact to test time meant we couldn’t test all five memories in parallel and would need to test the fifth array separately, resulting in a 5X test time improvement.
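The polarity issue is easiest to see in a toy model. The sketch below is purely illustrative, based on my reading of the post: a weak write of the *opposite* state flips only defective (weak) cells, which a subsequent read then exposes. The `weak_write` function, the drive-strength numbers, and the `reversed_bitlines` flag are all hypothetical stand-ins, not the actual circuit or test program.

```python
# Toy model of Weak Write Test Mode (WWTM). All values are hypothetical.
WEAK_DRIVE = 0.4  # assumed drive strength of the weak write


def weak_write(cells, strengths, data, reversed_bitlines=False):
    """Apply a weak write of `data` to each cell. A cell flips only if the
    written value differs from its contents AND the cell is weaker than the
    drive. With reversed Bit/Bit-bar wiring, a cell sees the complement of
    the intended data."""
    result = []
    for value, strength, d in zip(cells, strengths, data):
        seen = (1 - d) if reversed_bitlines else d
        if seen != value and strength < WEAK_DRIVE:
            result.append(seen)   # weak (defective) cell flips
        else:
            result.append(value)  # strong cell retains its state
    return result


# Write all cells to 0; cell 1 is "defective" (strength below WEAK_DRIVE).
cells = [0, 0, 0]
strengths = [0.9, 0.2, 0.8]

# Correct wiring: weakly write the opposite state (all 1s).
# Only the weak cell flips, so reading back exposes it.
print(weak_write(cells, strengths, data=[1, 1, 1]))
# [0, 1, 0] -> defective cell detected

# Flipped Bit/Bit-bar wiring: the same pattern lands as the SAME state,
# so nothing flips and the defect escapes...
print(weak_write(cells, strengths, data=[1, 1, 1], reversed_bitlines=True))
# [0, 0, 0] -> defect missed

# ...unless the test program inverts its data for that memory, the
# workaround described above.
print(weak_write(cells, strengths, data=[0, 0, 0], reversed_bitlines=True))
# [0, 1, 0] -> defective cell detected again
```

The last call is the workaround in miniature: the design stays as-is, and only the test data for the affected memory is inverted.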
Seeing that what you simulated actually works gives you a satisfying thrill. Getting this far was a true team effort on the road to solving the sexy hard problem: knowing that everything works, you can move on to the next part, showing that you can find naturally occurring needles in the haystack.
Have a productive day,
Dear Reader, what memory or question does this piece spark in you? Did you ever need to prove you could find a needle in a haystack? Have you held your breath while waiting to see if your implementation actually worked? Please share your comments or stories below. You too can write for the Engineers’ Daughter; see Contribute for more information.