
As the title says. I thought the greater the p, the more correct your experiment is. For my experiment I used three classes of beans and got a p of about 70-80% (0.7-0.8) in my results. I thought this was good because my expected was pretty close to my observed result (about 6 away from expected). I read online, though, that it's the opposite or something like that, and that the farther away from 0.05, the less reliable it is. The closer it is to 0.05, the more likely it occurred by chance, and the farther from 0.05, the more likely it was biased and something affected the results??? Can someone please explain to me what exactly p stands for??? I am very confused...

2007-12-12 16:28:01 · 4 answers · asked by Anonymous in Science & Mathematics Biology

4 answers

OK, you've got your three classes of beans, with so many in each class, right? Those are your observed results.

You also have a model that tells you how many of each class to expect.

The observed results aren't identical to the expected, so you want to know whether they're close enough that your model could be right, or so different that the model is probably wrong. That's what the chi squared test is for.

The p value is the probability of seeing that much difference between observed and expected just by chance, assuming the model is correct. So, as the p value gets smaller, it is less likely that your actual results are different from expected just by chance, and it's more likely that the difference is because the model is wrong.
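
To make that concrete, here's a minimal simulation sketch (Python with NumPy; my own illustration, not part of the original answer, and the bean counts are just the ones from the worked example further down). The idea: pretend the model is true, generate lots of experiments by chance alone, and count how often chance produces at least as much observed-vs-expected difference as the real data did.

```python
import numpy as np

rng = np.random.default_rng(0)

model_probs = np.array([0.25, 0.50, 0.25])  # hypothetical 1:2:1 model
observed = np.array([30, 42, 27])           # counts from the example below
n = observed.sum()
expected = model_probs * n

# Chi-squared statistic: how far observed is from expected, overall.
real_stat = ((observed - expected) ** 2 / expected).sum()

# Assume the model is true and simulate 100,000 experiments from it.
sims = rng.multinomial(n, model_probs, size=100_000)
sim_stats = ((sims - expected) ** 2 / expected).sum(axis=1)

# The p value is (roughly) the fraction of chance-only experiments that
# differ from expected at least as much as the real data did.
p_value = (sim_stats >= real_stat).mean()
print(p_value)  # about 0.3 here, so no reason to doubt the model
```

That fraction is essentially the p value a chi-squared table would give you for the same comparison.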

What matters with p is how big or small it is, not how close it is to 0.05. Because of the way p is calculated, it can't be bigger than 1, and it can't be less than or equal to zero. We usually use 0.05 as a somewhat arbitrary cut-off for interpreting the p value. When p is > 0.05, we say that the difference could be just by chance, and we shouldn't conclude the model is wrong. If p is < 0.05, we say the difference is probably not just by chance, and the model is probably wrong. The smaller p is, the more certain we can be that the model is wrong.

In your case, p is 0.7 - 0.8, which is much higher than 0.05. Thus, your observed results give you no reason to think your model is wrong.

However, that doesn't mean that your model is probably right! With a chi-squared test, a low p-value is a good reason to doubt your model (assuming no flaws in your experiment, etc.), but a high p value is not a good reason to believe your model. That's because there could be a lot of other models that also fit your observed data. A chi-squared test can't tell you if your model is more likely to be right than any of those others.

To help see this, imagine that your model (model 1) predicts 25 class A, 50 class B, and 25 class C, and you observe 30 A, 42 B, and 27 C. The p value by chi-squared in this case is about 0.29, so there's no reason to think model 1 is wrong.

Now imagine model 2, which predicts 33 1/3 of each class. Comparing observed versus expected for model 2 gives a p value of about 0.15, so there's no reason to think model 2 is wrong either. The difference in p values (0.29 versus 0.15) should not be interpreted to mean that model 1 is more likely than model 2.

In contrast, consider model 3, which predicts 20 A, 60 B, and 20 C. When you compare that to the observed results, the p value is about 0.002. In other words, if model 3 were right, there would only be about a 0.2% chance of getting those observed results. That's very unlikely, so we conclude that model 3 is probably wrong.
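
For what it's worth, here's a short sketch of those three comparisons done with an off-the-shelf goodness-of-fit test (scipy.stats.chisquare is my choice of tool, not something from the original answer; the counts and ratios are the ones above):

```python
from scipy.stats import chisquare

observed = [30, 42, 27]                       # classes A, B, C (99 beans total)
models = {
    "model 1 (25:50:25)": [0.25, 0.50, 0.25],
    "model 2 (equal)":    [1/3, 1/3, 1/3],
    "model 3 (20:60:20)": [0.20, 0.60, 0.20],
}

total = sum(observed)
for name, proportions in models.items():
    # Expected counts must add up to the same total as the observed counts.
    expected = [p * total for p in proportions]
    stat, p = chisquare(observed, f_exp=expected)
    print(f"{name}: chi-squared = {stat:.2f}, p = {p:.3g}")
```

Note that the expected counts get rescaled to the 99 observed beans before testing, so the p values can differ a little from a hand calculation that uses 25, 50, and 25 directly.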

I hope that clears things up a bit for you. And don't feel bad about being confused. Most of us find it confusing at first (me included).

2007-12-12 18:20:20 · answer #1 · answered by qetzal 4 · 0 0

One way I interpret "p" is as the percent certainty that what you did was the cause of your result.

For any test that uses p, the higher the number, the more your test shows a cause-and-effect relationship. Ideally you want to be in the 95% range or higher (95% certain that what you did caused your results) to claim a "significant" relationship, or 99% or above for a "highly significant" relationship.

So a 50% result would be what you could expect to happen by chance given two outcomes (think about flipping a coin). If you were below 50%, either your idea was totally inconsistent with what really was the cause, or you have too many variables within your population (maybe limiting it to a single age, size, gender, mass, etc. would improve the p), or you aren't controlling extraneous variables (temperature, light, dose, time, water, food, etc.).

A result of 70-80% isn't bad, but there may still be some variable you aren't accounting for or controlling that's influencing your result, or your sample size may simply be fairly small.

2007-12-12 16:57:22 · answer #2 · answered by Dean M. 7 · 0 0

P (probability) stands for the level of significance of your findings. First make sure your X^2 value is accurate: for each class, take (observed minus expected) squared, divided by expected, then add those terms up. Then find your degrees of freedom, which is the number of classes (n) minus one. At this point you need to get hold of a table of critical values for chi-square. Find the line that corresponds to your degrees of freedom, and compare your X^2 number to the value under the column you need (e.g. 0.1, 0.05, or 0.01). Those column headings are probabilities; multiply by 100 to express them as percentages, so 0.05 is 5 percent. Most scientists use the 5% column as the cutoff for deciding whether the deviation from expected is significant.
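
As a rough illustration of that recipe (a sketch in Python with SciPy; my own example with made-up counts), you can compute X^2 by hand, get the degrees of freedom, and look up the critical value instead of reading it from a printed table:

```python
from scipy.stats import chi2

observed = [30, 42, 27]           # hypothetical counts for three classes of beans
expected = [24.75, 49.50, 24.75]  # what a 1:2:1 model predicts for the same total

# X^2 = sum over classes of (observed - expected)^2 / expected
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

df = len(observed) - 1            # degrees of freedom: number of classes minus one

critical = chi2.ppf(1 - 0.05, df) # the table value in the 0.05 column for this df
print(chi_sq, critical)
```

If your X^2 is smaller than the critical value in the 0.05 column, the deviation from expected is within what chance alone could plausibly produce; if it's larger, the result is considered significant at the 5% level.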

2007-12-12 16:46:23 · answer #3 · answered by spirouack 2 · 1 0

