EE Times-Asia

Changing perspectives on test and diagnoses (Part 1)

Posted: 22 Mar 2016

Keywords: DFT, Design for Testability, BIST, built-in self-test, ATE

Having been in the test and diagnosis arena for decades and having taught courses on test strategy, DFT (Design for Testability), BIST (built-in self-test), ATE, and test economics, I thought I was pretty well informed and knew the most common test terminology. But the same terms can carry different meanings depending on the setting.

For example, I was somewhat puzzled when I read an article about medical testing and diagnosis, and some of the terms that popped up were less than familiar to me. The article focused extensively on two terms, namely specificity and sensitivity. Trying to familiarise myself with these statistical measures of test performance led me to a Wikipedia article, where I was confronted by other unfamiliar terms, such as prevalence, diagnostic odds ratios, and positive (negative) likelihood ratios.

Familiar test terms such as accuracy and precision were defined differently from the way we test engineers use them. These terms are common to physicians and medical researchers, who use tests to find out whether their patients have a disease (positive) or can be ruled out from having the disease (negative). How different is a test used to determine whether an electronic system (probably less complex than the human body) is faulty and, if so, what part needs to be repaired or replaced?

The first thing I noticed is that medical people don't consider a test to "fail" when it actually "succeeds" in finding a fault, disease, or disorder. A result that finds what the test sets out to find is called "positive": if the disorder truly exists, it is a true positive (TP); if the test mistakenly says it exists, it is a false positive (FP). Similarly, if a test rules out the condition it tests for, the result is either a true negative (TN), meaning the condition really doesn't exist, or a false negative (FN), where the condition actually exists but the test says otherwise. Figure 1 shows the four possible outcomes.

Figure 1: There are four possible outcomes depending on whether the UUT is bad or good.
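The four outcomes above can be captured in a few lines of code. This is an illustrative sketch (the function and variable names are my own, not from the article) that maps a UUT's actual condition and its test result to one of the four labels:

```python
def classify_outcome(fault_present: bool, test_positive: bool) -> str:
    """Return TP, FP, TN, or FN for one unit under test (UUT).

    fault_present: whether the UUT is actually bad.
    test_positive: whether the test flagged the UUT as bad.
    """
    if test_positive:
        # The test says the UUT is bad: true if a fault really exists.
        return "TP" if fault_present else "FP"
    # The test says the UUT is good: true if no fault exists.
    return "TN" if not fault_present else "FN"

# A bad UUT that fails the test is a true positive;
# a good UUT that fails the test is a false positive.
print(classify_outcome(fault_present=True, test_positive=True))   # TP
print(classify_outcome(fault_present=False, test_positive=True))  # FP
```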

So the question we normally ask, "How good is a test?" is actually replaced by the more comprehensive question of "How good are the tests?" In examining Figure 1, we see that we're really performing two tests: one to prove that a bad UUT (unit under test) fails our test as TP, while the other test will prove that a good UUT will pass our test as TN. In a perfect set of tests, we won't have any false results, so FP and FN will have zero occurrences. Figure 2 illustrates such a perfect test scenario.

Figure 2: A normal distribution representation of a "Perfect Test" that passes all good and only good UUTs (True Negative) and fails all faulty and only faulty UUTs (True Positive). Since FP=FN=0, specificity and sensitivity = 100%.

Reflecting on this perfect test scenario, we can immediately formulate objections about the cost and "overkill" nature of such a test strategy because it requires every fault to be sensitive to a TP test and every specification to be demonstrated by TN. Sensitivity measures the likelihood that a failed test is due to a bad UUT and specificity measures the likelihood that a passing test is due to a good UUT. Since both FP and FN are zero, in a perfect test, both sensitivity and specificity are 100%.
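The two measures follow directly from the outcome counts: sensitivity is the fraction of bad UUTs that the test catches, and specificity is the fraction of good UUTs that the test passes. A minimal sketch (names assumed for illustration):

```python
def sensitivity(tp: int, fn: int) -> float:
    # Fraction of truly bad UUTs that the test correctly fails:
    # TP / (TP + FN)
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # Fraction of truly good UUTs that the test correctly passes:
    # TN / (TN + FP)
    return tn / (tn + fp)

# In a perfect test FP = FN = 0, so both measures are 100%.
print(sensitivity(tp=40, fn=0))   # 1.0
print(specificity(tn=60, fp=0))   # 1.0

# With 2 escapes out of 10 bad UUTs, sensitivity drops to 80%.
print(sensitivity(tp=8, fn=2))    # 0.8
```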

