Type I or Type II Errors
Error. The last word anyone wants to hear, especially in the world of science. Science plays such a vital role in the development of society that there is little room for mistakes. Therefore, it is very important to know where errors may occur in order to do everything possible to prevent them. This brings me to the most terrifying words to a budding scientist’s ears: Type I and Type II Errors. Now, when we consider the definitions of these errors they sound very similar in terms of their ‘seriousness’, but is one type of error worse than the other?
Type I Errors refer to the times when a person claims that their study’s or experiment’s results show a significant difference when there is no real significance (http://www.experiment-resources.com/type-I-error.html). Back in the early days of my first year at Bangor University I was under the impression that Type I Errors were much worse than Type II. The reason I thought this is that I assumed it was on some level the researcher’s ‘fault’ that they claimed a difference when there was not one (or ‘incorrectly rejected the null hypothesis’). I think that this was because I was very naïve/uninformed/clueless. I thought that a Type I Error meant that a researcher hadn’t done their job as well as they should have (which is almost NEVER the case). When you consider the methods of a scientific study, we use a confidence level of 95% to determine whether we are confident that our results are true. The part that I forgot/didn’t know about is the other 5%. This means that 5% of the time a scientist can complete their experiment following every rule to the letter and still make an error when determining the significance of their results. This link http://www.sportsci.org/resource/stats/errors.html provides a lovely way of explaining how (by the definition of the 95% confidence level) 1 out of 20 times you will claim a relationship exists in the sample when none exists in the population.
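You can actually watch that 1-in-20 happen for yourself. Here is a small Python sketch (my own illustration, not from any of the articles above) that runs thousands of pretend experiments where the null hypothesis is TRUE by construction — both groups come from the same population — and counts how often a t-test still declares a ‘significant’ difference at the 5% level:

```python
import random
import statistics

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

random.seed(42)
trials = 5000
n = 30          # participants per group
crit = 2.0      # roughly the two-tailed 5% cut-off for ~58 degrees of freedom

false_positives = 0
for _ in range(trials):
    # Both groups are drawn from the SAME population, so the null is true
    # and any 'significant' result is a Type I Error.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(two_sample_t(a, b)) > crit:
        false_positives += 1

print(false_positives / trials)  # close to 0.05 -- the 1 in 20 from above
```

Even though every simulated ‘researcher’ here did everything perfectly, about 5% of them would still report a difference that isn’t really there — which is exactly why a Type I Error is not the researcher’s fault.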
Type II Errors, on the other hand, occur when a claim is made that there isn’t a significant difference when there actually is (the null hypothesis is incorrectly accepted). This article (http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html#2err) describes how it is difficult to measure the probability of a Type II Error occurring, but that it is often due to sample sizes being too small. Consider a pregnancy test: some women may wish to do more than one test for confirmation, and a Type II Error (a false negative) would be very unfortunate in this case for some people!
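The ‘sample sizes being too small’ point can be demonstrated with the same sort of simulation (again my own sketch, with an assumed real difference of half a standard deviation between the groups). This time the difference genuinely exists, and we count how often each sample size MISSES it:

```python
import random
import statistics

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

random.seed(1)

def type_ii_rate(n, effect=0.5, trials=2000, crit=2.0):
    """Proportion of experiments that MISS a real difference of `effect`.

    `crit` is a rough two-tailed 5% cut-off; the real difference between
    the groups is `effect` standard deviations, so every non-significant
    result here is a Type II Error.
    """
    misses = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        if abs(two_sample_t(a, b)) <= crit:  # fail to reject the null
            misses += 1
    return misses / trials

print(type_ii_rate(10))   # small samples miss the real effect most of the time
print(type_ii_rate(100))  # larger samples rarely miss it
```

With only 10 people per group the real effect is missed far more often than not, while with 100 per group it is almost always found — which is exactly why underpowered studies are so prone to Type II Errors.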
The reason I changed my mind about Type I Errors definitely being the more serious is that I started to consider real life examples. For example, imagine a scientific study testing the effectiveness of a drug for cancer treatment: a Type I Error would lead us to conclude that the drug worked when in fact it did not, whereas a Type II Error would claim that the drug didn’t work when it did. I think that in this case both of the errors would be equally devastating.
However, consider the trial of a potential criminal: if a Type I Error were to occur, an innocent person would be put in jail, but if a Type II Error occurred, a guilty person would walk free. In this scenario I find that my opinion has completely switched as to which error may be the more serious. (This article http://intuitor.com/statistics/T1T2Errors.html gives a good account of both types of errors in the Justice System.) I believe that if a guilty man were to walk free it would devalue the whole point of the Justice System completely, and so it is slightly worse than a Type I (although I do think a Type I Error would be horrific, just not quite as bad as a Type II).
And so the only conclusion that I can make is that both Errors are very bad and can have huge, devastating consequences. Depending on circumstance (and A LOT of personal opinion) one type can be considered more serious than the other, but I no longer think that one should be avoided more than the other. A delicate balancing act must be attempted in order for our scientific results to be as true as possible.