Here’s a common problem: you have been tasked with peering into the future to predict when the next failure will occur.
Predictions are tough.
One way to approach this problem is to do a little analysis of the failure history of the component or system. The problem looms larger when you have only two observed failures from the population of systems in question.
While you can fit a straight line to two failures and account for all the systems that operated without failure, it is not very satisfactory. It is at best a crude estimate.
Let’s not consider calculating MTBF. That would not provide useful information, as regular readers already know. So what can you do, given just two failures, to create a meaningful estimate of future failures? Let’s explore a couple of options.
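As one option, here is a minimal sketch of a median rank regression Weibull fit, assuming two failures among ten fielded units with the remaining eight still running; all the numbers are hypothetical, not from a real dataset:

```python
import math

# Hypothetical example: 2 failures at 500 h and 1,200 h among 10 fielded
# units; the remaining 8 units are right-censored (still running) at 1,500 h.
failures = [500.0, 1200.0]
censored = [1500.0] * 8
n = len(failures) + len(censored)

# Johnson's adjusted ranks account for the suspensions; with all censoring
# after the last failure they reduce to plain ranks 1 and 2.
events = sorted([(t, True) for t in failures] + [(t, False) for t in censored])
rank, remaining = 0.0, n              # remaining = units still at risk
points = []
for t, is_failure in events:
    if is_failure:
        rank += (n + 1 - rank) / (remaining + 1)
        F = (rank - 0.3) / (n + 0.4)  # Benard's median rank approximation
        points.append((math.log(t), math.log(-math.log(1.0 - F))))
    remaining -= 1

# Two plotting positions define the fitted line exactly:
# slope = beta (shape), intercept recovers eta (characteristic life).
(x1, y1), (x2, y2) = points
beta = (y2 - y1) / (x2 - x1)
eta = math.exp(x1 - y1 / beta)
print(f"beta ~ {beta:.2f}, eta ~ {eta:.0f} h")
```

With only two plotting positions the line fit is exact, which is part of why the estimate is so crude; treat the resulting beta and eta as rough indications at best.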
Some years ago a few colleagues compared notes on the results of a Weibull analysis. Interestingly, we all started with the same data and got different results.
After a recent article on the many ways to accomplish data analysis, Larry mentioned that all one needs is shipments and returns to perform field data analysis.
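As a minimal sketch of that idea (with hypothetical numbers), a monthly shipment tally and the returns observed from each shipment cohort are enough to compute a crude cumulative return fraction:

```python
# Hypothetical shipments and returns by monthly cohort: each entry is units
# shipped that month and how many of that cohort have come back so far.
shipments = {"2023-01": 1000, "2023-02": 1200, "2023-03": 900}
returns   = {"2023-01": 18,   "2023-02": 14,   "2023-03": 5}

for month, shipped in shipments.items():
    fraction = returns[month] / shipped
    print(f"{month}: {shipped} shipped, {returns[month]} returned "
          f"({fraction:.1%} cumulative return fraction)")
```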
Let’s say we have a set of numbers, {2.3, 4.2, 7.1, 7.6, 8.2, 8.4, 8.7, 8.9, 9.0, 9.1} and that is all we have at the moment.
How many ways could you analyze this set of numbers? We could plot it a few different ways: a dot plot, stem-and-leaf plot, histogram, probability density plot, and probably a few others as well. We could calculate a few statistics about the dataset, such as mean, median, standard deviation, skewness, kurtosis, and so on.
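For instance, a short script covers the summary statistics mentioned above; skewness and excess kurtosis are computed from sample moments, since the standard library does not provide them directly:

```python
import statistics as stats

data = [2.3, 4.2, 7.1, 7.6, 8.2, 8.4, 8.7, 8.9, 9.0, 9.1]

mean = stats.mean(data)
median = stats.median(data)
stdev = stats.stdev(data)              # sample standard deviation

# Moment-based skewness and excess kurtosis (population form)
n = len(data)
m2 = sum((x - mean) ** 2 for x in data) / n
m3 = sum((x - mean) ** 3 for x in data) / n
m4 = sum((x - mean) ** 4 for x in data) / n
skewness = m3 / m2 ** 1.5
excess_kurtosis = m4 / m2 ** 2 - 3

print(f"mean={mean:.2f}  median={median:.2f}  stdev={stdev:.2f}")
print(f"skewness={skewness:.2f}  excess kurtosis={excess_kurtosis:.2f}")
```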
We gather and report loads of data nearly every day.
Is your data “good data”? Or does it fall into the “bad data” category?
Let’s define the difference between good and bad data. Good data is accurate, timely, and useful. Bad data is not. It may be time to look at each set of data you are collecting or reviewing and judge if it’s good or not. Then set plans in motion to minimize the presence of bad data in your organization.
Good data is accurate
By this I mean it truly represents the item or process being measured.
If the mass is 2.3 kilograms, then the measurement should be pretty close to 2.3 kg. This is a basic assumption we make when reviewing measurements, yet when was the last time you checked? Use a different measurement method, possibly one known to be accurate, to check.
Measurement system analysis (MSA) includes a few steps to determine whether the gage making a measurement is true or not. Calibration may come to mind, as it is a step to verify that the gage readings reflect standard measures. A meter is a meter is a meter across the many ways we can measure distance.
It also includes checking the common sources of measurement error:
Repeatability
Reproducibility
Bias
Linearity
Stability
You may also want to understand the resolution or discrimination of the measurement process.
If these terms, and how one goes about checking for accuracy, are unfamiliar, it may be time to learn a little about MSA.
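As a small illustration of two of these checks, here is a sketch of estimating bias and repeatability from repeated measurements of a known reference standard; the readings are hypothetical, and a full MSA study would go further:

```python
import statistics as stats

# Hypothetical gage check: 15 repeat readings of a certified 2.300 kg
# reference standard, taken by one operator under identical conditions.
reference = 2.300
readings = [2.31, 2.29, 2.30, 2.32, 2.30, 2.28, 2.31, 2.30,
            2.29, 2.31, 2.30, 2.32, 2.30, 2.29, 2.31]

bias = stats.mean(readings) - reference   # systematic offset from truth
repeatability = stats.stdev(readings)     # spread under identical conditions

print(f"bias = {bias:+.4f} kg, repeatability (1 sigma) = {repeatability:.4f} kg")
```

Reproducibility, linearity, and stability extend the same idea across operators, across the measurement range, and over time; a gage R&R study formalizes this.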
Good data is timely
If the experiment results are available a week after the decision to launch the product, they will not be considered in the decision; they are not useful for the launch decision. Had the data been available in time, it might have altered the decision. Arriving late, we will never know.
Timely means the data arrives in time for someone or some team to make a decision. Ideally, the data is available immediately. When a product fails in the field, we would like to know right away, not two or three months later. If a production line becomes unstable, knowing before another unit of scrap is produced would be timely.
Not all data gathering and reporting is immediate. Some data takes months or an entire year to gather. There are physical constraints in some situations that delay the gathering of data. For example, it takes on average 13 minutes, 48 seconds for radio signals from a space probe orbiting Mars to reach Earth [1]. If you are making important measurements on Earth, the delay should be much shorter.
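As a quick sanity check of that figure, the one-way light delay is just distance divided by the speed of light; here assuming an average Earth-Mars distance of about 2.48e8 km (the actual distance varies widely over the two orbits):

```python
# One-way light delay = distance / speed of light, assuming an average
# Earth-Mars distance of about 2.48e8 km (an assumed round figure).
SPEED_OF_LIGHT_KM_S = 299_792.458
distance_km = 2.48e8

delay_s = distance_km / SPEED_OF_LIGHT_KM_S
minutes, seconds = int(delay_s // 60), delay_s % 60
print(f"~{minutes} min {seconds:.0f} s one way")   # ~13 min 47 s
```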
The key point here is that the data should be available when it is needed to make decisions.
Good data is useful
Even if the data is accurate and timely, it may not be useful. The data could come from a perfect measurement process, yet measure something we do not need to know or consider. Such data does not help inform the decision at hand.
For example, if I’m perfectly measuring production throughput, it does not help me understand the causes of production line downtime. While related to some degree, instead of the tally of units produced per hour, what we would really find useful is data on the number of interruptions to production, plus details on the root cause of each.
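To make that concrete, here is a minimal sketch of the kind of record worth capturing per interruption; the field names and sample events are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

# Hypothetical downtime record: what we might capture per interruption
# instead of (or alongside) the hourly unit tally.
@dataclass
class DowntimeEvent:
    line: str
    start: datetime
    end: datetime
    root_cause: str                    # e.g. "feeder jam", "changeover"

    @property
    def minutes_lost(self) -> float:
        return (self.end - self.start).total_seconds() / 60

events = [
    DowntimeEvent("A", datetime(2024, 5, 1, 9, 10), datetime(2024, 5, 1, 9, 25), "feeder jam"),
    DowntimeEvent("A", datetime(2024, 5, 1, 13, 0), datetime(2024, 5, 1, 13, 40), "changeover"),
    DowntimeEvent("A", datetime(2024, 5, 2, 8, 5), datetime(2024, 5, 2, 8, 20), "feeder jam"),
]

# Tally interruptions by root cause -- the question a throughput tally can't answer.
print(Counter(e.root_cause for e in events))
```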
Setting up and maintaining the important measurements is difficult, as we often shift focus based on the current data. We spot a trend and want to learn more than the current data can provide. The point is we should not set up and forever rely on a fixed set of data collection processes. Ideally, your work to gather data is driven by the need to answer questions, such as:
Is the maintenance process improving the equipment operation?
Is our manufacturing process stable and capable of creating our product?
Will the current product design meet life expectations/requirements?
Have we confirmed the new design ‘fixed’ the faults seen in the last prototype?
We have questions, and we gather data to allow us to answer them.
How would you describe the data you will look at today? Good or Bad? And more importantly, do you know if your data is good or bad?
In all aspects of engineering, we only make improvements and innovations in technology by building on previous knowledge. Yet in the field of reliability engineering (and in particular, electronics assemblies and systems), sharing knowledge about field failures of electronics hardware and their true root causes is extremely limited. Without the ability to share data and teach what we know about the real causes of “un-reliability” in the field, it is easier to understand why the belief in the ability to model and predict the future of electronics life, and MTBF, continues to dominate the field of electronics reliability.