March 25, 2012 Leilla92

I came across the term “marginally significant” in one of my seminar groups, and it’s got me thinking ever since. How many research papers are saying this? “Well, we tested this drug, and it was nearly significant, so go ahead, it’s perfectly safe to use.” NO! I don’t think this is acceptable at all! Firstly, the significance level is set for a reason, and in case you don’t know, it is less than .05. Secondly, if you find your data is marginally significant, perhaps eyeballing your data and adding participants could make it significant.

Funnily enough, I found it very easy to find papers on Google Scholar suggesting their findings are marginally significant. One paper in particular talks about how cardiovascular disease is a big problem for the older generation, along with high cholesterol and high blood pressure. Participants were told to take a simvastatin tablet every day to see if their symptoms improved, and many different results were found. The results stated that there was a marginally significant reduction in vascular deaths (p = 0.07). I don’t think marginal is good enough, although it was close, I’ll give it that!

On the other side of things, I can see why some researchers do report that results were marginally significant. Some could argue that, yes, it doesn’t show significance now, but this could be due to outliers or other issues that swayed the results. Having said that, I don’t think research is sound enough to be presented unless it states either that a significant effect was found or that a significant effect was not found.

Another interesting paper I found discusses the defenders and opponents of significance testing. It states that this topic has been debated for decades, and it identifies three primary concerns: (1) the misuse of significance testing, (2) the misinterpretation of p values, and (3) the lack of accompanying statistics (effect sizes and confidence intervals). The paper is really good at presenting current thinking both for and against significance testing.

Personally, I think having a significance level to work from is a firm foundation in statistics. There could be many implications of not having a significance level; it must mean something if Karl Pearson laid the foundation for significance testing as early as 1901!

.05 or less, or not at all, is my opinion! THANK YOU!
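As a footnote to the “.05 for a reason” point: the significance level is just the false-positive rate you agree to accept before seeing your data. The sketch below is my own illustration (not from any of the papers mentioned); it uses a simple two-sample z-test with known unit variance and simulates experiments where the null is genuinely true, so every “significant” result is a false alarm. Moving the line from .05 to .07 simply buys you more false alarms.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims = 50_000
n = 20  # participants per group

# Simulate null experiments: both groups drawn from the same N(0, 1),
# compared with a simple two-sample z-test (known unit variance).
a = rng.normal(size=(n_sims, n))
b = rng.normal(size=(n_sims, n))
z = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(2.0 / n)

# Two-sided critical values: 1.960 corresponds to alpha = .05,
# 1.812 to alpha = .07 (a "marginal" cut-off).
rate_05 = np.mean(np.abs(z) > 1.960)
rate_07 = np.mean(np.abs(z) > 1.812)
print(f"null results called significant at .05: {rate_05:.3f}")
print(f"null results called significant at .07: {rate_07:.3f}")
```

The first rate comes out near 5% and the second near 7%, which is exactly what the threshold promises: the cut-off is arbitrary, but whichever one you pick, it is the error rate you have signed up for.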





15 Comments

  • 1. secretdiaryofapsychstudent  |  April 4, 2012 at 6:38 pm

    I totally agree with you on this. Like you say, 0.07 is close to 0.05, but it’s still not 0.05; it’s not significant, and that should be the end of it. More research can be conducted in the future, and those results may prove to be significant, but scientists need to take into account the results they have in hand. As you say, the 0.05 significance level has been a long-trusted foundation, and it should stay that way. If psychologists begin saying a result of 0.07 is marginally significant, that opens a door for many inaccuracies. There are other ways of describing data besides significance tests, including effect sizes, power analysis, and multiple models. Scientists could use these to suggest breakthroughs in the data without changing the rules of significance.

  • 2. cerijayne  |  April 13, 2012 at 10:32 pm

    When a researcher writes ‘marginally significant’, they are basically telling us that the results are in the predicted direction but just not quite significant. I think if the research paper has been designed well and rests on a solid theory, then why not use ‘marginally significant’? It is better to do this than to dismiss the work altogether and feed the file drawer problem by probably not publishing it at all, just because it is not exactly significant to the number. The term is not trying to fool anyone either; it is not lying. It is telling you the result is nearly significant, and therefore the findings are still interesting and could spur a replication of the study in question.

  • 3. raw2392  |  April 14, 2012 at 12:32 pm

    Hey Leila, really interesting blog, and on a subject I have never really focused on. You make a great point, especially in regard to the study on cardiovascular problems in the older generation. Cardiovascular problems are very serious and in many cases lead to death, so how can researchers even think to publish their results as marginally significant when the p value was 0.07? There should be rules in place not allowing this to happen. If people see that the treatment marginally helped some of the participants, they could assume the wrong thing and think that the drugs could help them too, when the results varied vastly!
    On the other hand, something I feel you have failed to point out is that ‘marginally significant’ could be taken in the other direction. A marginally significant result could be reported at 0.04; this shows significance, but not a great degree of it. Tiller and Reed (2005) carried out research into The Effect of Intention on Decreasing Human Anxiety and Depression via Broadcasting from an Intention-Host Device-Conditioned Experimental Space; the results were marginally significant at .03, yet many people could still find this useful and gain knowledge from it.
    Overall, I have to say I agree with your main argument: if research has produced results above the significance level of 0.05, then it should not be reported as marginally significant at all. Researchers should retain the null hypothesis and then work on their experiment further to see why they did not get the results they expected!

  • 4. Final Blog Comments «…  |  April 14, 2012 at 1:07 pm

    […] https://leilla92.wordpress.com/2012/03/25/marginally-significant-so-you-didnt-find-anything-then/#com… […]

  • 5. liamjw91  |  April 16, 2012 at 1:36 pm

    Interesting blog; however, I have to disagree with you. I think it’s better to report research as being marginally significant rather than just discarding what could be valuable data. I think this for a couple of reasons. Mainly, a researcher may be making a Type II error, as extraneous variables such as outliers may be the reason the findings weren’t significant. By reporting that the research was marginally significant, it is likely to lead to follow-up research by others who may design a better experiment and find significance between the variables.

  • 7. lmr92  |  April 18, 2012 at 1:38 pm

    Hi 🙂 I agree that the findings of a marginally significant study should not be applied in a real-life setting, especially if the research involves drugs or medicines. However, I do think the term “marginally significant” is a useful tool, as it helps researchers pick out studies that have the potential to be significant with further research.
    Essentially, it’s a way of the researcher saying “we didn’t find anything, but it’s worth you having a look”, as opposed to “we didn’t find anything and we doubt anyone ever will”.
    Without this distinction, findings with the potential to be significant could be overlooked.

  • 9. psuc98  |  April 18, 2012 at 5:22 pm

    While I agree with cerijayne’s comment that ‘marginally significant’ is informative rather than lying or completely disregarding the research, I think your point that it should be 0.05 or less is very important; we need to draw the line somewhere. If we allowed the 0.07 result you found, then another experimenter would ask why 0.08 was not allowed. We need a level of professionalism and clear limits in research, not simply a ‘close enough’ attitude. Your example of cardiovascular problems shows how serious our research can be; our results need to be taken seriously, and the rules need to be carefully followed.

    I agree with the comments above which say that ‘marginally significant’ is a useful term when describing data, and in addition I think areas of marginal significance can point to areas that need more conclusive research or indicate a basic direction. Overall, I think your point that marginal significance is simply not good enough to act on, for example to prescribe a drug, is strong and important to consider; the level of 0.05 is there for a reason.

  • 10. tomwall39  |  April 18, 2012 at 10:40 pm

    A good blog this week Lei 🙂

    I have to agree that it is idiotic to claim that something is marginally significant. It would be much better to say that there is evidence to suggest an effect may exist and that it would be beneficial to run a tweaked repeat of the study.

  • 11. Eric Marsh (@wemarsh)  |  November 28, 2012 at 12:01 am

    It was not “set for a reason”; Fisher said so himself. Whether or not you report marginal values, and regardless of what you call them, it is the people who blindly adhere to .05 who don’t understand statistics. I’m for reporting as much as possible so the reader can see the whole picture. In research, complaining that somebody gave you too much data is absurd.

  • 12. agusmuji  |  December 8, 2012 at 5:42 pm

    In social science it may be reasonable to use “marginally significant”, because in social terms there is no perfect or exact prediction; many situations and contexts affect the significance value. So researchers consider that such a finding may still offer an explanation rather than being pushed away; in another situation it may turn out differently. BUT in the exact sciences, like medicine, health, or chemistry, even 5% may not be significant enough; they need < 1%.

  • 13. Eric Marsh (@wemarsh)  |  April 4, 2013 at 1:04 pm

    It’s worth noting that your post currently comes up second on Google for “marginal significance,” right behind an article talking about how APA style requires it.

  • 14. Eric Marsh (@wemarsh)  |  April 4, 2013 at 1:13 pm

    Also, I just noticed you call for adding participants after seeing your data. This is a huge statistical no-no, an “experimenter degree of freedom.” If you test for significance and then test again after running more subjects, you increase the chance of a false-positive result (i.e., you have effectively pushed your .05 significance level toward .1). Keep in mind that if you run a test 20 times at a 5% significance level, you should expect to find significance about once even when no real effect exists. This happens in a lot of fields, but nobody who takes statistics seriously would ever advocate changing the number of trials in a study because it looks like you are approaching significance. All such design choices must be made before data collection begins, or else the p values lose their meaning. Even considering this option makes the entire question of what to set alpha at irrelevant.

  • 15. enrique  |  April 19, 2013 at 7:55 am

    There is nothing magical about .05. From a statistics perspective, it’s just an arbitrary level of acceptable error: it means you accept a 5% chance that the differences between conditions are actually due to chance when deciding to reject a null hypothesis. Applying this thinking to medicine, the argument of “I don’t care what they say, .07 is not .05” is kind of dumb, because it’s not like they are hiding anything; they are reporting p values. It’s not that they didn’t find anything, but that it’s somewhat more likely the cause was chance. The difference between that arbitrary “significance” level and, say, .07 simply translates to something like going from “there is a 5% chance these results are chance” to “there is a 7% chance these results are chance”. Regardless, I thought that for pharma and medicine, significance levels were set at .01 rather than .05 anyway?
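The optional-stopping problem raised in the thread (test, then add participants, then test again) is easy to demonstrate with a quick simulation. The sketch below is a minimal illustration of my own, not anyone’s published analysis: it uses a simple two-sample z-test with known unit variance, and both groups are always drawn from the same distribution, so any “significant” result is a false positive by construction.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 20_000
z_crit = 1.96  # two-sided critical value for alpha = .05

def significant(a, b):
    # Two-sample z-test with known unit variance: |z| > 1.96 means "p < .05".
    n = len(a)
    z = (a.mean() - b.mean()) / np.sqrt(2.0 / n)
    return abs(z) > z_crit

planned = 0   # false positives from one pre-planned test at n = 20 per group
peeking = 0   # false positives when we peek, then add 20 more per group

for _ in range(n_sims):
    # The null hypothesis is true: both groups come from the same N(0, 1).
    a = rng.normal(size=40)
    b = rng.normal(size=40)
    first_look = significant(a[:20], b[:20])
    planned += first_look
    # Optional stopping: if the first look isn't significant,
    # "add participants" up to n = 40 per group and test again.
    peeking += first_look or significant(a, b)

print(f"planned-test false-positive rate: {planned / n_sims:.3f}")
print(f"peek-then-add false-positive rate: {peeking / n_sims:.3f}")
```

The pre-planned test produces false positives at close to the nominal 5%, while the peek-then-add procedure lands clearly above it (around 8% in runs of this sketch), even though each individual test still uses the .05 cut-off. That inflation is exactly why the design, including the sample size, has to be fixed before the data come in.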
