Satisfaction is in the Eye of the Beholder

Making rejects disappear is easy. Making failures disappear is another story.

When is a test program done? Seems like a question with an obvious answer, doesn’t it? A test program is complete and operational when it can be satisfactorily demonstrated to a qualified observer, via a representative lot of boards, that all the accessible component pins on the board under test are in fact accessed and stimulated, or otherwise verified to be installed in their designated locations, at the required values, properly oriented, with no signs of workmanship defects, according to the tolerances specified by the board schematic, bill of materials (BoM), and customer’s Statement of Work (SOW), assuming one exists.

Heck of a sentence but simple in concept, right?

Not so fast.

Notice I said pins and not components. That is a distinction with a difference. A huge difference. It’s the difference between pristine, laboratory-pure, abstract thinking vs. life as it is actually lived.

This matters. So does recognition of perspective. And acknowledging the existence of hidden motivations.

Therein lies the problem. Obscurity and obfuscation serve the interests of some. Put plainly, they don’t want you to know how good, or worse, how bad, your program really is. Even though you paid for it and are entitled to know.

Who are “they”? I’ll explain.

Two examples will help illustrate my point. Recently, one of our test engineers was reviewing an ICT fixture and program not designed by us. We were chartered to critique it and to make recommendations to our customer for coverage improvements. In the course of his “discovery” period, our Man determined that 40 field effect transistors (FETs) on this board were not being tested, despite the customer’s insistence that they were. The devices in question were enhancement mode FETs, whereas the test being employed was one for depletion mode FETs; specifically, it tested the protection diode across the drain to the source of the FET. Our engineer observed that the test turned on the diode (twice, in fact), raising the stimulus current enough for the protection diode to conduct. However, there was no actual test of the FET itself. For reference, the test methodology for depletion mode FETs is exactly the opposite of that for enhancement mode FETs.
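To make the distinction concrete, here is a minimal Python sketch. It is purely illustrative, not the customer’s actual test code; the function names, the threshold voltage, and the pass/fail model are my own assumptions. The point it shows: a check that only forward-biases the drain-to-source protection diode passes whether or not the channel works, while an enhancement-mode check that actually drives the gate catches a dead part.

    # Illustrative model only; not real ICT vector code.

    def protection_diode_check(fet, stimulus_ma):
        """What the program actually did: forward-bias the drain-to-source
        protection diode and confirm it conducts. A dead channel still passes."""
        return stimulus_ma > 0 and fet["protection_diode_ok"]

    def enhancement_mode_check(fet, gate_drive_v, threshold_v=2.0):
        """What an enhancement mode FET needs: channel off with the gate at 0 V,
        channel on once the gate is driven above its threshold voltage
        (threshold_v is a placeholder value)."""
        off_at_zero_gate = not fet["channel_stuck_on"]
        on_when_driven = gate_drive_v > threshold_v and fet["channel_ok"]
        return off_at_zero_gate and on_when_driven

    # A FET whose protection diode is fine but whose channel never turns on:
    suspect_fet = {"protection_diode_ok": True, "channel_ok": False, "channel_stuck_on": False}

    print(protection_diode_check(suspect_fet, stimulus_ma=10))    # True:  reported as a pass
    print(enhancement_mode_check(suspect_fet, gate_drive_v=5.0))  # False: defect actually caught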

When our Man brought this revelation to the attention of the customer’s test engineering manager, he was met with denial, followed by resistance, escalating eventually to belligerence. Denial that the test program was incorrect in that section; resistance in the form of the Argument from Authority; and belligerence of sufficient intensity to mask the fact that he got caught, and was hoping to divert unwanted attention.

To refresh, this is the Argument from Authority, aka The Oldest Trick in the Book: “I’ve been testing boards for (insert sufficiently awe-inspiring, discussion-ending time period) years. It follows that such longevity confers on me the authority to declare a test to be present, or not. Thus have I spoken; there is no appeal.”

Q.E.D.

Setting aside the complete absence of logic, his argument is irrefutable. At no time was this customer able to explain the proper technical basis for the test to be employed, nor was he able to distinguish depletion mode FETs from enhancement mode FETs. Until challenged, our customer believed his test was complete. In truth, after he checked with a trusted third party for a second opinion, it was revealed that the test he got was a default setting of the tester. No one had ever checked whether the test was appropriate for the application. Until now. Boards kept passing, so why rock the boat? Again, in their mind, and undisturbed by challenge, the test was done. One less hurdle to clear prior to revenue recognition. Thank you for your quaint concern. Now, where were we? Start shipping.

The second example concerns a recent in-circuit testing conference we attended. Several participating speakers gave talks with ambitious themes like Improved Test Coverage Analysis and Design for Testability/Test Technology Challenges. Such presentations were, in our opinion, products of the ivory tower view of the world: nice as an ideal, but quite removed from reality, where compromises in design, manufacturing, schedule and budget prevail.

One speaker walked us through a review of the traditional PCOLA (Presence, Correctness, Orientation, Live, Alignment), SOQ (Shorts, Opens, Quality) and FAM (Features, At-Speed, Measurement) acronyms that guide every qualified test engineer in developing ICT, AXI, JTAG/Bscan, BIST/BA-BIST with parametric measurements, and functional test. He then launched into a catalogue of emerging trends in package design and their testing implications. This roadmap included, in no particular order:

  • 3D electronic packages.
  • Impact of silicon technology evolution on established testability standards.
  • Need for better defect diagnostics.
  • Impact of higher speed signals.
  • Reduction in test coverage due to increased packaging density (smartphones, tablets, etc.).
  • Improve testability of lanes/signaling with AC coupling capacitors in high-speed differential pairs.
  • Improve overall testability of physical devices (especially using the AC pulse method as defined in IEEE 1149.6-2003).
  • Investigate the feasibility of board-assist built-in self-test (BA-BIST) diagnostics at PC-based JTAG/Bscan or ICT+JTAG/Bscan.
  • Improve testability of DDRx DRAMs that are soldered directly to the board.
  • Plan for implementation of powered opens with selective toggle using IEEE Standard 1149.8.1.
  • Anticipate use of embedded instrument features in ASICs using IEEE Standard 1687-2014.

Very impressive. If I had that much time on my hands, I could probably come up with an even bigger bowl of alphabet soup.
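To be fair, the useful kernel of that soup, PCOLA and SOQ in particular, is just disciplined bookkeeping. Here is a minimal Python sketch of a pin- and component-level coverage tally; the reference designators, the attributes marked as verified, and the flat scoring are assumptions of mine, not an IPC or IEEE definition.

    # Illustrative coverage tally against the PCOLA and SOQ attribute lists.
    # The example "board" and its verified attributes are made up.

    PCOLA = ("Presence", "Correctness", "Orientation", "Live", "Alignment")
    SOQ = ("Shorts", "Opens", "Quality")
    ALL_ATTRS = PCOLA + SOQ

    # For each component: which attributes the current program actually verifies.
    board = {
        "R101": {"Presence", "Correctness", "Shorts", "Opens"},
        "C202": {"Presence", "Shorts", "Opens"},
        "Q301": {"Presence"},   # e.g. only the protection diode gets exercised
    }

    def coverage_report(components):
        """Return per-component and overall fractions of PCOLA/SOQ attributes covered."""
        per_component = {
            ref: len(verified) / len(ALL_ATTRS) for ref, verified in components.items()
        }
        overall = sum(per_component.values()) / len(per_component)
        return per_component, overall

    per_component, overall = coverage_report(board)
    for ref, score in sorted(per_component.items()):
        print(f"{ref}: {score:.0%}")
    print(f"Overall: {overall:.0%}")

The arithmetic is trivial; the point is that a number like this, however crude, is exactly what the Argument from Authority never produces.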

The speaker continued with an extended lamentation about the quality of ICT fixtures and programs developed in Asia: time and distance prevented proper oversight and allowed errors and omissions to be “baked in” to a program and worked around, rather than resolved with optimal coverage in mind. How, he asked, could we expect reliable implementation of these new and wonderful technologies when the more mundane aspects of overseas test engineering were so shoddily managed today?

And this is somehow a surprise? You get what you pay for.

He concluded with an appeal for a common standard: industry would be best served by a shared benchmark for pin-level coverage, whether through IPC, IEEE, or some other standards-making body. Problem: How do you police such a standard (assuming one can be agreed on)? Worse: How do you enforce it, and what sanctions do you impose when you do enforce it?

Somebody is not thinking these things through. Ironic coming from an ivory tower.

That is why people who dwell in such rarified settings are referred to in polite company as Ivory Tower Thinkers. In non-polite company the descriptive adjectives have fewer syllables.

Meanwhile, back on Earth, where cash must flow and bills must be paid, what if somebody, say a contract manufacturer, is in a hurry (to ship, for example)? It’s quarter-end, and they need to make a certain number. There is a big fat metaphorical bullseye called “bonus” that the ops manager has in their crosshairs. Chances are, in that setting, they view testing not as an enhancement but as an obstacle to achieving what is rightly theirs. The human mind is endlessly creative at developing strategies to circumvent obstacles. New test technologies, especially those in their infancy, are often obstacles demanding to be disabled.

One final example: Another customer recently asked one of our engineers whether there was an industry standard or some other general rule for changing the tolerances on an ICT program. No kidding. The guy actually requested guidelines for widening certain tolerances and, by extension, making failures disappear. Anecdotal evidence suggests this is hardly an isolated case. By the way, we told him no, there is no such standard. The customer’s BoM and schematic, with tolerances clearly identified, are a sufficient standard to work with, aim for, and hit wherever possible.
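For the record, deriving ICT limits from the BoM is not mysterious. Here is a minimal sketch, with hypothetical part values and a guard band standing in for documented fixture and measurement uncertainty:

    # Test limits come from the BoM nominal and tolerance, optionally widened by a
    # documented measurement guard band; never widened ad hoc to hide failures.

    def ict_limits(nominal, part_tolerance, guard_band=0.0):
        """Return (low, high) limits. Tolerance and guard band are fractions, e.g. 0.01 for 1%."""
        total = part_tolerance + guard_band
        return nominal * (1 - total), nominal * (1 + total)

    print(ict_limits(10_000, 0.01))           # 1% 10 kOhm resistor: (9900.0, 10100.0)
    print(ict_limits(10_000, 0.01, 0.005))    # plus a 0.5% documented guard band

Anything wider than the BoM tolerance plus a justified guard band is not engineering; it is making failures disappear.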

So back to the original question: When is a test program complete, considering there is no customer SOW; the nontechnical buyer expects to get 100% coverage for their money; the guy “debugging” it is a glorified technician at best; the tester default outputs are accepted as Gospel Truth; and the operations manager has P&L numbers to meet? Oh, and it’s June 29, so think fast.

Nobody knows.