Board Test and Inspection Methodologies

What makes a test program complete? Who defines completeness?

This is another complicated question, often prone to misperception and consequent acrimony. To begin with, very few board designs are 100% testable (expressed as a percentage of total nets on the board) using a single test platform. The question usually comes down to determining how testable a board is for the dollars invested, and whether both parties (Test Provider and Customer) are on the same page once the work is complete.
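The coverage figure mentioned above is simple arithmetic: testable nets divided by total nets. A minimal sketch, with invented numbers purely for illustration (real coverage reports break this down further by fault class and access type):

```python
def coverage_percent(total_nets: int, accessible_nets: int) -> float:
    """Return test coverage expressed as a percentage of total nets on the board."""
    if total_nets <= 0:
        raise ValueError("total_nets must be positive")
    return 100.0 * accessible_nets / total_nets

# Hypothetical example: a board with 1200 nets, 1104 of which have test access.
print(f"{coverage_percent(1200, 1104):.1f}% of nets testable")  # 92.0% of nets testable
```

Even this toy version shows why the "how testable for the dollars invested" question arises: pushing the numerator toward 100% typically means added fixture probes or additional test platforms, each with its own cost.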

Frequently, in the design of a test program and/or fixture, we encounter problems not anticipated during the original quoting process. These problems arise from many factors: ECOs not known at the time of quoting, bare board (fab) changes, design discrepancies, or plain undocumented omissions from the bill of materials, to name just a few. The point is that when such anomalies are encountered, we notify the customer, advise on the potential time (and cost) impact to the project, requote and re-engineer where needed, and proceed again once the customer approves. In all cases we employ our best technical judgment to make cost-effective recommendations to the customer for eventual action.

And who is the Final Authority regarding completeness? Usually the customer defines completeness by reviewing test coverage data from the finished program and evaluating whether it meets the requirements defined either in a customer-written Statement of Work (SOW) or in the customer's Purchase Order. In either case the customer has ultimate signature authority, but it is anticipated that such authority is exercised collaboratively and iteratively with Datest engineers.

Watch this space as we provide technical backing for our answers to the vexing question (well, vexing to us at least!) of Program and Fixture Completeness.