Speaking from experience here, I'd say a .05 (5%) level is the norm. If you want to be more rigorous, the .01 level is ok, but almost every research paper I have ever read has used the standard .05 alpha level. The .10 (10%) level is rarely used.
The alpha level is the probability of a Type I error: the chance that you reject the null hypothesis and call a result significant when, in reality, there is no effect and the result is just due to chance. This is also known as a false positive (you accept a test as "positive" even though it is false). There is a second type of error, a Type II error or false negative, which is accepting a result as not significant even though a real effect is there.
So at the .05 level you are accepting a 5% risk of a false positive. If I find that the association between income and the purchase of Lamborghinis is positive and significant at the .05 level, that means that if income and Lamborghini purchases were really uncorrelated, there would be only a 5% chance of seeing an association this strong by chance alone; I cannot rule out that my "significant" result is one of those flukes.
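To make the Type I error idea concrete, here is a minimal simulation sketch (Python with NumPy and SciPy; the sample sizes and the use of a two-sample t-test are my own illustrative choices, not something from the original answer). When the null hypothesis is true, roughly 5% of tests still come out "significant" at the .05 level, and those are the false positives:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_sims = 10_000

false_positives = 0
for _ in range(n_sims):
    # Two samples drawn from the SAME distribution, so there is no real effect.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_sims:.3f}  (expected ~ {alpha})")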
There is no "hard and fast" rule about what level of significance you require. Clearly the .01 level is the most rigorous of the three, because you are only accepting a 1% false positive risk, as opposed to 5% or 10%. The only place I have seen the 10% level used is in fields like political science, sociology, or economics, where significance is hard to come by in many models and so the bar is lowered somewhat. But 10% is rare in almost all fields; again, 5% is the norm.
It also depends in part on how worried you are about accepting a result as significant even though it's not. If you are making life and death decisions, say you are developing a pharmaceutical drug against cancer, you want a smaller alpha level like .01, because you want to be as confident as possible that the drug's apparent efficacy isn't just due to chance.
A common reporting convention: if a value is significant at the .05 alpha level (recall that you run a t-test and look up the statistic in a table of Student's t scores), mark it with one asterisk, like 2.00*, and if it is significant at the .01 level, use two, e.g. 3.45**. Researchers usually disregard the .10 level unless they have no better values and are really desperate.
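For instance, a small helper like the one below maps a p-value to that asterisk notation (a Python sketch of my own; the function name and cutoffs are just for illustration):

def significance_stars(p_value):
    """Return the conventional asterisk notation for a p-value."""
    if p_value < 0.01:
        return "**"  # significant at the .01 level
    if p_value < 0.05:
        return "*"   # significant at the .05 level
    return ""        # not significant at the usual levels

# A coefficient with t = 3.45 and p = 0.004 would be reported as 3.45**:
print("3.45" + significance_stars(0.004))  # -> 3.45**
print("2.00" + significance_stars(0.048))  # -> 2.00*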
A note about reporting the results: if you find that a particular value fails to be significant at your chosen alpha level, you should simply state that it is not significant at that level. You should not say "X is not significant at the 5% alpha level, and therefore we are 95% sure there is no effect". When a result is not significant, the alpha level alone does not tell you how confident to be that there is truly no effect; that depends on the test's power against a Type II error.
Also, make sure you determine whether you have a one-tailed or two-tailed test. In most cases, if you just want to determine significance and you have no predicted sign (positive or negative), use a two-tailed test. However, don't look up the 95% point of the t distribution; look up the 97.5% point, because a two-tailed test puts 2.5% in each tail (hence "two-tailed"), and the two tails together add up to 5%.
As a rule of thumb, you are looking for t-scores above roughly 2 at the .05 level. The exact critical value depends on your degrees of freedom, which you calculate from the sample size and the number of variables in your model.
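To make those last two points concrete, here is a short sketch (Python with SciPy; my own addition, not part of the original answer) that looks up the two-tailed 5% critical value, i.e. the 97.5th percentile of Student's t, for several degrees of freedom. Notice that it sits a bit above 2 for small samples and approaches the normal-distribution value of 1.96 as the sample grows:

from scipy import stats

# Two-tailed test at alpha = .05: 2.5% in each tail, so look up the
# 97.5th percentile of Student's t for your degrees of freedom.
for df in (10, 20, 30, 60, 120, 1000):
    critical_t = stats.t.ppf(0.975, df)
    print(f"df = {df:4d}  ->  critical |t| = {critical_t:.3f}")

# Sample output:
# df =   10  ->  critical |t| = 2.228
# df = 1000  ->  critical |t| = 1.962   (close to the normal 1.96)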
2007-03-15 09:48:57 · answer #1 · answered by bloggerdude2005
The level of significance (or alpha) is set somewhat arbitrarily:
.05 by convention for most psychological studies,
.01 (or stricter) by convention for most medical studies.
Really it comes down to how stringent you want to be against Type I errors (false positives).
2007-03-15 09:49:17 · answer #3 · answered by Anonymous
The standard is 5%. 1% is great and using 10% is usually not a good idea.
2007-03-15 10:53:35 · answer #4 · answered by kyle b