Tuesday 15 April 2014

What Is the Difference Between Type I and Type II Errors?

The statistical practice of hypothesis testing is widespread not only in statistics, but also throughout the natural and social sciences. When we conduct a hypothesis test, there are a couple of things that could go wrong. There are two kinds of errors that, by design, cannot be entirely avoided, and we must be aware that these errors exist. The errors are given the quite pedestrian names of type I and type II errors. What are type I and type II errors, and how do we distinguish between them?


Hypothesis Testing


The process of hypothesis testing can seem quite varied, with a multitude of test statistics, but the general process is the same. Hypothesis testing involves the statement of a null hypothesis and the selection of a level of significance. The null hypothesis is either true or false and represents the default claim for a treatment or procedure. For example, when examining the effectiveness of a drug, the null hypothesis would be that the drug has no effect on the disease.


After formulating the null hypothesis and choosing a level of significance, we acquire data through observation. Statistical calculations tell us whether or not we should reject the null hypothesis.
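To make this concrete, here is a minimal Python sketch of what such a calculation might look like, assuming a hypothetical two-group drug trial with simulated measurements (the group sizes, means, and effect below are made up for illustration, not taken from any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical trial: measurements for a treated and a control group.
treated = rng.normal(loc=1.2, scale=1.0, size=40)
control = rng.normal(loc=1.0, scale=1.0, size=40)

alpha = 0.05  # chosen level of significance

# Two-sample t-test of the null hypothesis "the drug has no effect".
t_stat, p_value = stats.ttest_ind(treated, control)

if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis")
```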


In an ideal world we would always reject the null hypothesis when it is false, and we would not reject the null hypothesis when it is indeed true. But there are two other scenarios that are possible, each of which will result in an error.


Type I Error


The first kind of error that is possible involves the rejection of a null hypothesis that is actually true. This kind of error is called a type I error, and is sometimes called an error of the first kind.


Type I errors are equivalent to false positives. Let’s go back to the example of a drug being used to treat a disease. If we reject the null hypothesis in this situation, then our claim is that the drug does in fact have some effect on the disease. But if the null hypothesis is true, then in reality the drug does not combat the disease at all. The drug is falsely claimed to have a positive effect on the disease.


Type I errors can be controlled. The value of alpha, which is related to the level of significance that we selected, has a direct bearing on type I errors. Alpha is the maximum probability that we make a type I error. For a 95% confidence level, the value of alpha is 0.05, meaning there is at most a 5% probability that we will reject a true null hypothesis. In the long run, one out of every twenty hypothesis tests that we perform on a true null hypothesis at this level will result in a type I error.
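We can see this "one in twenty" behaviour directly with a small simulation sketch (assuming NumPy and SciPy are available): we repeatedly test a null hypothesis that is actually true and count how often we wrongly reject it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 10_000

# Simulate many experiments in which the null hypothesis is TRUE:
# every sample is drawn from a population whose mean really is 0.
false_positives = 0
for _ in range(n_trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        false_positives += 1

# The observed rejection rate should be close to alpha (about 1 in 20).
print(f"Type I error rate: {false_positives / n_trials:.3f}")
```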


Type II Error


The other kind of error that is possible occurs when we do not reject a null hypothesis that is false. This sort of error is called a type II error, and is also referred to as an error of the second kind.


Type II errors are equivalent to false negatives. If we think back again to the scenario in which we are testing a drug, what would a type II error look like? A type II error would occur if we failed to reject the null hypothesis, concluding that the drug had no effect on the disease, when in reality it did.


The probability of a type II error is given by the Greek letter beta. This number is related to the power, or sensitivity, of the hypothesis test, which is denoted by 1 – beta.
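As a concrete illustration, beta and power can be computed directly for a simple one-sided z-test. The numbers below (true mean, standard deviation, sample size) are hypothetical assumptions chosen for the sketch, not values from the discussion above:

```python
from scipy import stats

# Power of a one-sided z-test, computed from first principles.
# Hypothetical setup: H0 says the mean is 0, the true mean is 0.5,
# the known standard deviation is 1, n = 25, and alpha = 0.05.
alpha, true_mean, sd, n = 0.05, 0.5, 1.0, 25
se = sd / n ** 0.5

# We reject H0 when the sample mean exceeds this critical value.
critical = stats.norm.ppf(1 - alpha) * se

# beta = P(fail to reject H0 | H0 is false)
beta = stats.norm.cdf((critical - true_mean) / se)
power = 1 - beta
print(f"beta = {beta:.3f}, power = {power:.3f}")  # roughly 0.20 and 0.80
```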


Type II errors can be reduced at the design stage of a study through power calculations, which indicate how many subjects are required in the trial to keep the risk of making a type II error acceptably low.
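For example, such a calculation can be done with the statsmodels library (one possible tool among several); the effect size and target power below are assumptions made purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size needed per group for a two-sample t-test.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed standardized difference (Cohen's d)
    alpha=0.05,        # acceptable type I error rate
    power=0.80,        # i.e. beta = 0.20
)
print(f"Required subjects per group: {n_per_group:.0f}")  # about 64
```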


[Figure: Type I and Type II errors]


How to Avoid Errors


Type I and type II errors are part of the process of hypothesis testing. Although the errors cannot be completely eliminated, we can minimize one type of error.


Typically, when we try to decrease the probability of one type of error, the probability of the other type increases. We could decrease the value of alpha from 0.05 to 0.01, corresponding to a 99% level of confidence. However, if everything else remains the same, the probability of a type II error will nearly always increase.
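Continuing the hypothetical one-sided z-test sketch from above, we can see this trade-off numerically: tightening alpha raises beta when everything else is held fixed.

```python
from scipy import stats

# Same assumed setup as before: true mean 0.5, sd 1, n = 25.
true_mean, sd, n = 0.5, 1.0, 25
se = sd / n ** 0.5

for alpha in (0.05, 0.01):
    critical = stats.norm.ppf(1 - alpha) * se
    beta = stats.norm.cdf((critical - true_mean) / se)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}")
# Moving alpha from 0.05 to 0.01 pushes beta from about 0.20 to about 0.43.
```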


Many times the real-world application of our hypothesis test will determine whether we are more accepting of type I or type II errors. This consideration then guides the design of our statistical experiment.


