Week 2 Blog – Due 05/02/12

My issues with the media
(how they portray scientific statistics)

Statistics is the method we use to try to ensure that the results of studies are as correct and as true as possible. It tells us whether results are LIKELY to have been obtained by chance or not, and from there we can make inferences. Unfortunately, in many of the cases I have come across, the media forget the LIKELY part. And in the very mainstream media, sometimes they neglect to mention any statistics or research methodology at all. I am going to use an article as an example of the things I think could be improved in the reporting of science in the news.

http://www.bbc.co.uk/news/science-environment-16811042 is an incredibly interesting article about the brain and the inner ‘voice’ we hear in our heads but there are a few issues to consider.

To begin with, I have a little issue with the title of the article: “Decodes ‘internal voice'”. I feel the article uses this very hard-hitting headline simply to get people to read it (as you do when writing the news), but I don’t feel it accurately describes what the study is about at all. The study doesn’t, in my view, claim to have ‘decoded’ the inner voice in any way; instead it has unearthed a part of the system that is perhaps responsible for the inner voice. Although the title may be an exaggeration, the role of the media is to get the public’s attention, and the article does go on to explain the real basics of the study, so it can probably be forgiven in this instance.

The next thing that bugs me about this article, and many others like it, is that there are virtually no numbers in it! From this article we know nothing about the experiment apart from the author’s descriptive interpretation of the study (with a couple of ‘choice quotes’ to make it sound convincing). There is no mention of the significance levels to tell us how we should be treating the results. I can’t even find a couple of means in there! Perhaps it is because the general population is less interested in the numbers than those of us who have been forced to undergo deadly stats lectures, but if you don’t tell people, they will never be interested or understand.

Although this article mentions that there were 15 participants and that they were undergoing surgery for epilepsy, it does not mention the potential consequences of this. As epilepsy is a disorder of certain aspects of the brain (http://www.epilepsy.com/Epilepsy/epilepsy_brain), it is important to consider whether the conclusions of the study can automatically be transferred to the rest of the human population (something which the actual study probably addresses but the article doesn’t).

Finally, something this article does (as do many others) that disappoints me is that it doesn’t tell you about the whole scientific process. This article http://www.scidev.net/en/features/how-journalism-can-hide-the-truth-about-science.html
explains very well that the media’s audience sometimes aren’t told about the times when science is not correct, and so are disappointed or angry when experiments don’t go to plan. Also, as soon as they hear about something, they want it, and don’t realise there is a need for further trials, experiments and development. Another very important point that I think this article hits directly on the head is the media’s use of the word ‘breakthrough’. This word often makes scientific advances sound sudden and surprising, rather than the reality of months or years of hard work by many people. Although this is a less serious failing in science reporting, I feel it almost belittles the work of scientists, which can be dangerous in terms of the respect and trust the public has in their results.

Despite all of these points, I would like to end this blog by saying that although I think articles like this could perhaps do with a little more science jargon, I applaud the optimism that news like this brings. The idea that perhaps one day we will have a device that could help a person who has lost all other means to communicate with their loved ones (and the rest of the world) makes a VERY welcome change from the majority of the news we get to hear: riots, murders, drugs and thieves.


Homework for my TA – Week 11









Happy Holidays!

Can statistics change behaviour? Week 11 Blog

When we consider the many warnings we receive from organisations such as the NHS about which behaviours are healthy or unhealthy for us, perhaps the answer is no. Smokers are more than twice as likely to die from heart disease as non-smokers (http://www.ic.nhs.uk/pubs/smoking09), and yet think about how many of the people you know are smokers. Despite many warnings from various sources about the dangers of driving under the influence of alcohol, on average 3,000 people are killed or seriously injured each year in drink-drive collisions (http://www.drinkdrivingfacts.com/drinkdriving/drink_driving_facts.aspx). With so many, and such strong, statistics, why do people still act this way? Perhaps statistics have no effect on our behaviour.


However, a few years ago I heard an advert on the radio from the NHS. The advert concerned the low number of donors regularly giving blood, and it truly shocked me to learn that only 4% of the population donate blood. As soon as I got home I went to the internet and found that this 4% referred only to the people who were ELIGIBLE to give blood. That meant that about 96% of the people who seemingly had no reason not to give blood weren’t doing so. This made me feel very guilty. I knew many people who are absolutely terrified of needles, which might discourage them, but I was never bothered by needles at all. So why wasn’t I giving blood? That day I checked the date of the next session in my local area and went along (dragging a few others with me!), and I continue to donate as regularly as I can. So perhaps statistics can change behaviour? Or am I just a very easily persuaded person?! (For adverts concerning blood donation see http://www.blood.co.uk/video-audio-leaflets/tv-radio-ads/ and for general information about blood donation visit http://www.blood.co.uk/)

According to some, there are four factors that influence how people are motivated to act: fear, fun, obligation and reward (http://www.selfcareforum.org/?tag=behaviour-change). I would argue that my decision to become a blood donor was due to at least three of these: fear that someone I could potentially help might die; obligation to help, as I was perfectly eligible; and the reward of feeling good about myself for doing something good. So perhaps we need to be aware of how our statistics are presented as well as how they are calculated? Maybe if we take into consideration some of these motivations for changing behaviour, we can create meaningful, truthful statistics that improve human behaviour.


And now, because it is that part of the year, there is just enough time to get a little festive (statistically, of course!). During the holiday season, strain can be felt by some people due to the sheer amount of money spent in such a short space of time. This website http://www.eauk.org/resources/info/statistics/christmas-quotes-surveys-and-statistics.cfm explains that about 19% of people feel less able to manage their own mental health due to the stress of handling money over Christmas. Also, over 50% admitted that they had spent more than they could afford during the holidays. If I tell you this, am I going to convince you not to spend as much money? Possibly not, as it only plays on the fear aspect of motivation. However, if I tell you that the same survey suggests 90% of under-18s would gladly receive fewer presents for Christmas in order for their families to feel less pressure, would that perhaps convince you that you don’t need to spend so much? As the saying my mum refers to almost every year goes: you spend X amount of pounds on a new toy for a child and they end up playing with the cardboard box it came in. Doesn’t that just warm the chambers of your heart?

So instead of going out this Christmas time and spending all your dollar, spread a little Christmas cheer. According to the statistics it will only make things better. But can statistics change YOUR behaviour?


Merry Christmas everyone!



Homework for my TA – Week 9





Thank you!!

My Views on Qualitative Data – Week 9 Blog


First of all, let me begin by saying that I think my brain works in a similar way to a computer. You throw a problem at it and, if it is capable, it spits back a solution. If it cannot work out the solution, all you get back is an indescribable error noise and a completely blank screen/face. I therefore find qualitative data analysis extremely difficult to do myself, and I tip my hat to anyone who manages it, because to my mind qualitative data has no set-in-stone answers. And this completely messes with my brain!

Qualitative data analysis seems, to me, much more difficult than quantitative analysis. I think this because it is a method that is completely dependent on the skills of the researcher. Instead of simply learning which buttons to push on a computer to complete an ANOVA, for example, a qualitative researcher needs to train and refine their skills in order to analyse data. This link http://www.simplypsychology.org/qualitative-quantitative.html goes some way to explaining the difficulties psychologists can encounter with qualitative data. Because so much of the analysis relies on the skill of the experimenter, it is often argued that the results are very subjective and that the same data can be interpreted in different ways by different researchers. This can make qualitative data seem unscientific. The analysis of this type of information also tends to take a very long time, and for this reason there are usually far fewer people studied in a qualitative experiment. This makes it harder to generalise the findings to the rest of the population, as the sample is much smaller (as stated by this link http://www.mondofacto.com/study-skills/research/how-to-start-research-at-university/02.html).

For me, qualitative data analysis is more of an art form: slightly subjective, but most people can appreciate the end result, which is more often than not a beautiful piece of work. That being said, science obviously needs to produce results that are as true to the world as possible. However, qualitative data also has huge advantages that I think are sometimes overlooked. It looks at participants in a huge amount of detail and creates an openness that can sometimes be stifled in quantitative research. This in-depth enquiry into individuals’ experiences can be hugely beneficial, especially in the early stages of theories, because, having no hypothesis, researchers avoid pre-judgements of the data. Here is a link that discusses the positives and negatives of qualitative research: http://www.learnhigher.ac.uk/analysethis/main/qualitative1.html.

To my mind, qualitative data analysis treats human beings as wholes and tries to gain a deep understanding of behaviour, rather than asking participants to push a button, like a mouse, every time they see a dot on a screen. And yet quantitative data is hugely important too, so perhaps a mixture of both types is the best solution. A full understanding of human behaviour is something I believe is vital to the development of psychology as a science.

I just might leave other psychologists to do it while I hide behind my calculator and SPSS…

Homework for my TA

Here are the comments I have made this week (due 28th October):




Thank you very much!

Does removing outliers make us liars?

There are a great many reasons why outliers occur in data. There can be technical faults with machinery, a fault in the design of the experiment, the participant could have misunderstood the instructions (or not really respected the concept of the experiment), or a certain participant could just have extraordinary results. Although it is often accepted that outliers can be removed in certain circumstances, I am going to go through some of these reasons and explain why I think outliers should not be excluded from the findings.

When there is a technical fault with machinery, the data obtained can be drastically affected. On these occasions, the results may not represent what the investigator is attempting to analyse. For example, a researcher may be interested in reaction times, but a faulty machine might record times that are hugely different from the time it actually took the participant to react. In instances like this it is easy to assume that outliers should simply be cast away, as they are not in fact valid. However, I would argue that if the machine had been faulty for some of the trials, there is no certainty that it worked for the rest. Perhaps the other results seem ‘normal’ only because the inaccuracies of the machinery were less obvious. I feel there is really no other way to proceed than to recollect the data from the participants (after ensuring the machinery is no longer faulty). Obviously this would come at a great cost in effort and time for the researchers, but as researchers of science surely we need to make sure that the results are true. Here is a link http://pareonline.net/getvn.asp?v=9&n=6 that describes how, using certain statistical methods, you can instead keep your outliers without violating your results.
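Since we are talking about suspect machine readings, here is a minimal sketch of how such values are typically flagged in the first place (in Python, which is just my choice of tool; the reaction times are invented, and the 2-standard-deviation cut-off is one common convention, not a rule):

```python
import statistics

# Hypothetical reaction times in seconds; the 9.80 could be a
# machine fault -- or a genuinely slow response. The numbers
# are invented for illustration only.
times = [0.31, 0.28, 0.35, 0.30, 0.29, 0.33, 9.80, 0.32]

mean = statistics.mean(times)
sd = statistics.stdev(times)

# Flag anything more than 2 standard deviations from the mean --
# a common (but arbitrary) convention for spotting outliers.
outliers = [t for t in times if abs(t - mean) / sd > 2]
print(outliers)  # -> [9.8]
```

Note that flagging a value is not the same as deciding what caused it; the code can only tell us which trials deserve a closer look.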

The next outlier issue I will discuss concerns participants. Sometimes participants can, purposefully or not, make mistakes when taking part in experiments. Although there are reasons for which people remove outliers from data, such as data entry mistakes, I believe that, regardless of whether the participant did not understand or simply didn’t do the task to the best of their abilities, outliers should not be removed from the results of the investigation. If the participants did not understand what was expected of them, there has been a fault in the methods of the experimenter (Rosenthal, 1994). The instructions should be written or delivered well enough for everyone to understand easily, which also ensures that participants are aware of what is expected of them, as ethical guidelines demand.

If participants have not completed the task suitably because of a lack of interest, it is widely accepted that the data are not valid and so should be dismissed. However, I think this sometimes completely overlooks the point. We, as psychologists, attempt to investigate human behaviour. If we ask a person to do a task, no matter how they react, what gives us the right to say that they are incorrect and so shouldn’t be counted? Any human reaction should be important to our full understanding of human behaviour, even if it is not the reaction we were looking or hoping for. We are not laboratory rats; we are humans.

This link http://pareonline.net/getvn.asp?v=9&n=6 also describes beautifully how sometimes the only valid scores look like outliers. For example, if you ask teenagers about their drug use, many of them might under-report their true score due to demand characteristics. The few honest scores would then seem exceptionally high and might be regarded as outliers, which is another thing we must be careful to consider.

Then finally, there is the occurrence of chance. Sometimes a person’s results are just very extreme compared to the rest of the sample. Whether this is because they come from a different population from the rest, or because some other factor is affecting them, should we really exclude them just because their results are different? And if we can do this, where do we draw the line? We might one day end up with studies removing any and all pieces of data that do not agree with their theories, and that is not really science, is it?
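To make the ‘where do we draw the line’ problem concrete, here is a small sketch (Python again, with invented scores) showing that the very same participant counts as an outlier under one arbitrary threshold and not under another:

```python
import statistics

# Invented scores for eight participants; the 71 is extreme
# relative to the rest, but is it "wrong"?
scores = [48, 50, 51, 52, 53, 55, 57, 71]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)

def excluded(k):
    """Return scores more than k standard deviations from the mean."""
    return [s for s in scores if abs(s - mean) / sd > k]

# The same participant is an 'outlier' or not depending only on
# the cut-off we happen to pick.
print(excluded(2))  # -> [71]  (stricter rule: excluded)
print(excluded(3))  # -> []    (looser rule: kept)
```

Nothing about the data changed between the two lines; only our convention did, which is exactly why exclusion decisions deserve justification rather than habit.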




Rosenthal, R. (1994). Science and ethics in conducting, analysing, and reporting psychological research. Psychological Science, 5(3), 127-134. doi: 10.1111/j.1467-9280.1994.tb00646.x