Homework for my TA – due 22/02/2012

Here are my comments for this week:

http://psud1a.wordpress.com/2012/02/19/is-there-anything-that-cant-be-measured-by-psychologists/#comment-49

http://psychab.wordpress.com/2012/02/18/type-one-and-type-two-errors/#comment-40

http://mdscurr.wordpress.com/2012/02/15/why-we-should-not-offend-animals-by-saying-their-results-cannot-be-generalised/#comment-48

http://columsblog.wordpress.com/2012/02/19/the-advantages-and-disadvantages-of-case-studies/#comment-55


Week 4 Blog – Due 19/02/2012

Type I or Type II Errors

Error. The last word anyone wants to hear, especially when it comes to the world of science. Science plays such a vital role in the development of society that there is little room for mistakes. Therefore, it is very important to know where errors may occur in order to do everything possible to prevent them. This brings me to the most terrifying words to a budding scientist’s ears: Type I and Type II Errors. Now, when we consider the definitions of these errors they sound very similar in terms of their ‘seriousness’, but is one type of error worse than the other?

 

Type I Errors refer to the times when a person claims that their study’s or experiment’s results show a significant difference when there is no real significance (http://www.experiment-resources.com/type-I-error.html). Back in the early days of my first year at Bangor University I was under the impression that Type I Errors were much worse than Type II. The reason I thought this is that I assumed that on some level it was the researcher’s ‘fault’ that they claimed a difference when there was not one (or ‘incorrectly rejecting the null hypothesis’). I think that this was because I was very naïve/uninformed/clueless. I thought that a Type I Error meant that a researcher hadn’t done their job as well as they should have (which is almost NEVER the case). When you consider the methods of a scientific study, we use a confidence level of 95% to determine whether we are confident that our results are true. The part that I forgot/didn’t know about is the other 5%. This means that 5% of the time a scientist can complete their experiment following every rule to the letter but still make an error when determining the significance of their results. This link http://www.sportsci.org/resource/stats/errors.html provides a lovely way of explaining how (by the definition of the 95% confidence interval) 1 out of 20 times you will claim a relationship exists in the sample when none exists in the population.
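That “1 out of 20” figure can actually be checked with a quick simulation (a sketch of my own, not from any of the linked pages: the t-statistic, the cutoff of 2.0 and the group size of 30 are illustrative choices). If we run many experiments in which the null hypothesis is really TRUE, roughly 5% of them still come out ‘significant’ — those are the Type I Errors:

```python
import random
import statistics

random.seed(1)

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / na + vb / nb) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Run many "experiments" where the null hypothesis is TRUE:
# both groups are drawn from the very same population.
trials = 2000
false_positives = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    # |t| > 2.0 is roughly the two-tailed 5% cutoff for groups of 30
    if abs(two_sample_t(a, b)) > 2.0:
        false_positives += 1

print(false_positives / trials)  # close to 0.05, i.e. about 1 in 20
```

No matter how carefully each individual ‘experiment’ in the loop is run, about one in twenty of them declares a difference that is not there — which is exactly why a Type I Error is not the researcher’s ‘fault’.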

 

Type II Errors, on the other hand, occur when a claim is made that there isn’t a significant difference when there actually is (the null hypothesis is incorrectly accepted). This article (http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html#2err) describes how it is difficult to measure the probability of a Type II Error occurring, but that it is often due to sample sizes being too small. Consider a pregnancy test: some women may wish to do more than one test for confirmation, and a Type II Error would be very unfortunate in this case for some people!
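The sample-size point can be illustrated with another small simulation (again my own sketch; the effect size of 0.5, the cutoff of 2.0 and the sample sizes are assumptions, not taken from the article). Here a real difference genuinely exists, and we count how often each sample size MISSES it — that miss rate is the Type II Error rate:

```python
import random
import statistics

random.seed(2)

def miss_rate(n, effect=0.5, trials=1000):
    """Fraction of simulated experiments that FAIL to detect a real
    effect of the given size (the Type II error rate), using a rough
    two-tailed 5% cutoff of |t| > 2.0."""
    misses = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]       # control group
        b = [random.gauss(effect, 1) for _ in range(n)]  # real effect here
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) <= 2.0:  # not significant -> the real effect was missed
            misses += 1
    return misses / trials

print(miss_rate(10))   # small samples miss the effect most of the time
print(miss_rate(100))  # large samples rarely miss it
```

The same real effect that small samples miss the majority of the time is almost always picked up once the groups are large enough, which is why underpowered studies are so prone to Type II Errors.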

 

The reason I changed my mind about Type I Errors definitely being the more serious is real-life examples. For example, imagine a scientific study testing the effectiveness of a drug for cancer treatment: a Type I Error would lead us to conclude that the drug worked when in fact it did not, whereas a Type II Error would claim that the drug didn’t work when it did. I think that in this case both errors would be equally devastating.

However, if we were to consider the trial of a potential criminal: if a Type I Error were to occur, an innocent person would be put in jail, but if a Type II Error occurred, a guilty person would walk free. In this scenario I find that my opinion has completely switched as to which error may be the most serious. (This article http://intuitor.com/statistics/T1T2Errors.html gives a good account of both types of errors in the Justice System). I believe that if a guilty man were to walk free it would devalue the whole point of the Justice System completely, and so a Type II is slightly worse here (although I do think a Type I Error would be horrific, just not quite as bad as a Type II).

 

And so the only conclusion that I can make is that both Errors are very bad and can have huge, devastating consequences. Depending on circumstance (and A LOT of personal opinion) one type can be considered more serious than the other, but I do not think (anymore!) that one should be avoided more than the other. A delicate balancing act must be attempted in order for the results of our scientific studies to be as true as possible.

Homework for my TA due 08/02/12

http://ssbetween.wordpress.com/2012/02/05/2012-blog-1-boredom-what-psychologists-didnt-account-for/#comment-36

 

http://hls92.wordpress.com/2012/02/02/is-it-possible-to-prove-a-research-hypothesis/#comment-59

 

http://statisticsbyrachel.wordpress.com/2012/02/05/moral-justification-and-ethical-issues-with-non-human-animals/#comment-47

 

http://lisamarieoliver.wordpress.com/2012/02/05/the-file-drawer-problem-and-fabrication-of-results/#comment-27

 

http://stefftevs.wordpress.com/2012/02/03/are-single-case-designs-effective/#comment-18

 

http://jessicaaro.wordpress.com/2012/02/05/internet-researchyes-or-no/#comment-38

 

http://psucfa.wordpress.com/2012/02/05/has-psychology-reached-its-limits/#comment-18

 

Thank you!! 🙂

Week 2 Blog – Due 05/02/12

My issues with the media
(how they portray scientific statistics)

Statistics is the method we use to try to ensure that the results of studies are as correct and as true as possible. Statistics tell us whether the results are LIKELY to have been obtained by chance or not, and then we can make inferences from there. Unfortunately, in many of the cases I have come across, the media forget the LIKELY part. And in the very mainstream media, sometimes they neglect to mention any statistics or research methodology at all. I am going to use an article as an example of the things I think could be improved in the reporting of science in the news.

http://www.bbc.co.uk/news/science-environment-16811042 is an incredibly interesting article about the brain and the inner ‘voice’ we hear in our heads but there are a few issues to consider.

To begin with, I have a little issue with the title of the article: “Decodes ‘internal voice'”. I feel that the article uses this very hard-hitting headline simply to get people to read it (as you do when writing the news), but I don’t feel it accurately describes what the study is about at all. I don’t feel that the study in any way claims to have ‘decoded’ the inner voice; instead it has unearthed a part of the system that is perhaps responsible for the inner voice. Although the title may be an exaggeration, the role of the media is to get the public’s attention, and the article does go on to explain the real basics of the study, so it can probably be forgiven in this instance.

The next thing that bugs me about this article, and many others like it, is that there are virtually no numbers in it! From this article we know nothing about the experiment apart from the author’s descriptive interpretation of the study (with a couple of ‘choice quotes’ to make it sound convincing). There is no mention of the significance levels to tell us how we should be treating the results of the data. I can’t even find a couple of means in there! Perhaps it is because the general population is less interested in the numbers than those of us who have been forced to undergo deadly stats lectures, but if you don’t tell people they will never be interested or understand.

Although this article mentions that there were 15 participants and that they were undergoing surgery for epilepsy, it does not mention the potential consequences of this. As epilepsy is a disorder of certain aspects of the brain (http://www.epilepsy.com/Epilepsy/epilepsy_brain), it is important to consider whether the conclusions of the study could be automatically transferred to the rest of the human population (something which the actual study probably addresses but the article doesn’t).

Finally something that this article does (as do many others) that disappoints me is that it doesn’t tell you about the whole scientific process. This article http://www.scidev.net/en/features/how-journalism-can-hide-the-truth-about-science.html
explains very well that the audience of the media sometimes aren’t told about the times when science is not correct, and so they are disappointed/angry when experiments don’t go to plan. Also, as soon as they hear about something they want it, and they don’t realise there is a need for further trials/experiments/development. Another very important point that I think this article hits directly on the head is the media’s use of the word ‘breakthrough’. The use of this word often makes scientific advances sound sudden and surprising, instead of the reality of months/years of hard work by many people. Although this is a less serious issue in reporting science, I feel it almost belittles the work of scientists, which can be dangerous in terms of the respect and trust the public has in their results.

Despite all of these points, I would like to end this blog by saying that although I think articles like this perhaps could do with a little more science jargon, I applaud the optimism that news like this brings. The idea that perhaps one day we will have a device that could help a person who has lost all other means to communicate with their loved ones (and the rest of the world) makes a VERY welcome change from the majority of the news we get to hear: riots, murders, drugs and thieves.

 

Pictures from:

http://t2.gstatic.com/images?q=tbn:ANd9GcSjxsFIOVa_rosIKOdTO6sizQ3a1lQQuTtaYKAlL0ZU5vMFKmXK_A

http://www.cartoonstock.com/directory/n/negative.asp