What is Wrong With Success Testing?

Three prototypes survive the gauntlet of stresses and none fail. That is great news, or is it? Testing that produces no failures is what I call success testing.

We often want to create a design that is successful and therefore enjoy successful test results. No failures means we are successful, right?

Another aspect of success testing: for pass/fail type testing, we can minimize the sample size by planning for all prototypes to pass the test. If we plan on running the test until we have a failure or two, we need more samples. While that improves the statistics of the results, it costs more, and we nearly always have limited resources for testing.
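To make the sample-size tradeoff concrete, here is a minimal sketch of the standard binomial (success-run) calculation. The 90% reliability target, 90% confidence level, and function names are illustrative assumptions, not figures from the article.

```python
import math

def success_run_n(reliability: float, confidence: float) -> int:
    """Zero-failure (success-run) sample size: n = ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

def binomial_test_n(reliability: float, confidence: float,
                    allowed_failures: int) -> int:
    """Smallest n such that P(at most f failures | true R) <= 1 - C."""
    p_fail = 1 - reliability
    n = allowed_failures + 1
    while True:
        cdf = sum(math.comb(n, k) * p_fail ** k * reliability ** (n - k)
                  for k in range(allowed_failures + 1))
        if cdf <= 1 - confidence:
            return n
        n += 1

# Demonstrating 90% reliability at 90% confidence:
print(success_run_n(0.90, 0.90))       # 22 units, zero failures allowed
print(binomial_test_n(0.90, 0.90, 1))  # 38 units, one failure allowed
```

Allowing even one failure pushes the 90/90 sample size from 22 units to 38, which is exactly the cost pressure that makes success testing attractive.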

Let’s take a closer look at success testing and some of the issues you should consider before planning your next success test.

What Does Successfully Passing a Test Mean?

Not much, actually. If we run 5 prototypes for 2 weeks at elevated temperatures, what does that really mean? Does it suggest the product will last for 5 years under normal use conditions? Probably not.

How about placing 77 components from three batches in a chamber at 85°C and 85% relative humidity for 1,000 hours? If we then test all the components and they work as expected, can we make any claims about their operation in the real world? Probably not.

In both cases, we can pretty much only say that the units under test survived the specific conditions and durations. Only where we understand the effect of the applied stress on specific failure mechanisms can we employ an appropriate acceleration model to project to normal use conditions.
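When that understanding does exist, the projection itself is straightforward. Here is a minimal sketch of the Arrhenius acceleration-factor calculation commonly used for temperature stress; the 0.7 eV activation energy and the 25°C use / 85°C test temperatures are assumptions for illustration, and the result is only meaningful if temperature actually drives the failure mechanism in question.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_test_c: float) -> float:
    """Acceleration factor AF = exp((Ea/k) * (1/T_use - 1/T_test)), T in kelvin."""
    t_use_k = t_use_c + 273.15
    t_test_k = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

af = arrhenius_af(ea_ev=0.7, t_use_c=25.0, t_test_c=85.0)
print(f"AF ~ {af:.0f}")  # roughly 96x under these assumptions
print(f"1,000 test hours ~ {1000 * af:,.0f} hours at use conditions")
```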

Simply passing the test without analysis and connection to specific failure mechanisms is rather meaningless.

Sampling Error with Success Testing

For pass/fail type testing, a test designed and executed without any failures minimizes the number of samples necessary to conduct the testing. This is good.

This approach gives up the information gained from actually having failures. It also increases the risk that the sample will incur a Type II error (we accept the null hypothesis that the items under test are reliable when, in fact, the population is not). Designing a binomial test with even one failure significantly improves the power of the test, thus improving the chance that the results represent the population’s performance.

A discussion is always necessary to balance cost against the risk of the sample providing misleading results. Understanding those risks, including both Type I and Type II sampling errors, is a necessary first step.
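To see how easily a small success test can mislead, here is a minimal sketch computing the probability that a pass/fail test passes even when the true per-unit reliability is poor; the 5-unit sample and the reliability values are made-up illustrations.

```python
import math

def pass_probability(true_r: float, n: int, allowed_failures: int = 0) -> float:
    """P(the test passes | true per-unit reliability) for pass/fail testing."""
    return sum(math.comb(n, k) * (1 - true_r) ** k * true_r ** (n - k)
               for k in range(allowed_failures + 1))

# Five prototypes, zero failures allowed:
for r in (0.95, 0.85, 0.70, 0.50):
    print(f"true R = {r:.2f}: P(pass) = {pass_probability(r, 5):.2f}")
```

Even a design with only 70% per-unit reliability sails through a 5-unit, zero-failure test about 17% of the time. That is the Type II risk in action.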

Without Failures What Do We Learn?

One of the first success tests I ran went well. All the samples survived. While I was reporting on the results, a fellow engineer asked whether the applied stress would eventually have led to the failures we expected it to cause.

I didn’t know, since we didn’t experience any failures. We did have a model that the increased temperature would accelerate oxidation-based damage to the product, yet we didn’t measure such degradation nor witness any failures related to oxidation or anything else.

The design of the test relied on the assumption that the stress would accelerate a particular failure mechanism. If it didn’t do so as expected, then the test, while seemingly reassuring since everything passed, would provide evidence of something that wasn’t true.

With further work, we did find failures and showed that the earlier success-test results were very misleading, grossly overestimating the product’s reliability.

With failures, you learn something. You may learn:

  • About an unexpected failure mechanism
  • That the applied stress does lead to the expected failures
  • That the applied stress does not lead to the expected failure mechanism, invalidating the acceleration model
  • That there are multiple failure mechanisms at work (more than one failure may be necessary)
  • That the oven wasn’t even turned on during the test (happened to me once…)
  • About the margins between operation and failure for the design
  • About changes or variability in vendor-supplied materials or your own processes

With a failure, we can do failure analysis and understand the root cause of the failure. Without a failure, we can’t.

Summary

Test to failure. Yes, it costs more, takes a bit more work, and requires failure analysis, yet it provides insights, confirmation, and knowledge about your test and product.

Test to failure. You learn about your product in ways that no amount of success testing can.

Test to failure.

About Fred Schenkelberg

I am an experienced reliability engineering and management consultant with my firm FMS Reliability. My passion is working with teams to create cost-effective reliability programs that solve problems, create durable and reliable products, increase customer satisfaction, and reduce warranty costs.

2 thoughts on “What is Wrong With Success Testing?”

  1. Nice article.

    If management wants to know if the product is “good enough,” then testing with no failures works. My blood pressure rises when a vendor says, “we tested 100 units for 1000 hours with no failures, so our MTBF is greater than 100,000 hours.” Or other silly things, like multiplying the test “MTBF” by an Arrhenius factor. Even if I believe that the acceleration factor is 8, the best I can do is to say in the first 8000 hours of life, I’m x% confident that less than y% of the population will fail.

    The obvious problem with even that kind of statement is that there’s no assurance that the failure modes modeled by the Arrhenius equation (or any other model you’d like) will actually occur or (and more importantly) that all the important failure modes have been modeled. As you point out, the only way to find that out is to test to failure and see what modes show up.

    Thanks for the excellent reminder.
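The bound the commenter describes can be sketched directly. Taking the commenter’s 100 units, 1,000 hours, zero failures, and acceleration factor of 8, and assuming 90% confidence for illustration, the zero-failure binomial bound gives the x and y values:

```python
def reliability_lower_bound(n_units: int, confidence: float) -> float:
    """Lower confidence bound on reliability after zero failures: (1 - C)^(1/n)."""
    return (1 - confidence) ** (1.0 / n_units)

# 100 units, 1,000 hours, zero failures, acceleration factor of 8 (per the
# comment); the 90% confidence level is an assumed illustration.
r_lower = reliability_lower_bound(100, 0.90)
print(f"90% confident at most {100 * (1 - r_lower):.1f}% fail "
      f"in the first {1000 * 8:,} equivalent hours")  # ~2.3% in 8,000 h
```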
