In statistics, the p value is a measure that helps scientists determine whether or not their hypotheses are supported by the data.

P values are used to determine whether the results of an experiment fall within the normal range of values for the events being observed.

Usually, if the p value of a data set is below a certain pre-determined threshold (for instance, 0.05), scientists will reject the “null hypothesis” of their experiment – in other words, they’ll rule out the hypothesis that the variables of their experiment had *no* meaningful effect on the results. Today, p values are usually found on a reference table after first calculating a *chi square* value.

# Calculation Of P Value

Usually, when scientists conduct an experiment and observe the results, they have an idea of what “normal” or “typical” results will look like beforehand.

This can be based on past experimental results, trusted sets of observational data, scientific literature, and/or other sources. For your experiment, determine your expected results and express them as a number.
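For instance (a minimal sketch with hypothetical numbers – the coin-flip experiment below is invented for illustration, not drawn from real data), expected results for 100 flips of a supposedly fair coin can be expressed as plain counts:

```python
# Hypothetical experiment: 100 flips of a coin we believe to be fair.
# Under that belief, we expect an even split between heads and tails.
total_flips = 100
expected = {"heads": 50, "tails": 50}  # expected counts, summing to total_flips

print(expected)  # {'heads': 50, 'tails': 50}
```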

**1.** Now that you’ve determined your expected values, you can conduct your experiment and find your actual (or “observed”) values. Again, express these results as numbers.

If we manipulate some experimental condition and the observed results *differ* from the expected results, there are two possibilities: either this happened by chance, or our manipulation of the experimental variables *caused* the difference. The purpose of finding a p value is to determine whether the observed results differ from the expected results to such a degree that the “null hypothesis” – the hypothesis that there is no relationship between the experimental variable(s) and the observed results – is unlikely enough to reject.
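To make the comparison concrete, here is a minimal sketch using hypothetical coin-flip counts (62 heads observed where 50 were expected); the steps that follow decide whether a gap like this is plausibly chance:

```python
# Hypothetical expected vs. observed counts for 100 coin flips
expected = {"heads": 50, "tails": 50}
observed = {"heads": 62, "tails": 38}

# Per-category gap between what we saw and what we expected
diffs = {category: observed[category] - expected[category] for category in expected}

print(diffs)  # {'heads': 12, 'tails': -12}
```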

**2.** Determine your experiment’s degrees of freedom, which measure the amount of variability involved in the research and are determined by the number of categories you are examining.

The equation for degrees of freedom is

**Degrees of freedom = n - 1**, where “n” is the number of categories or variables being analyzed in your experiment.

**3. Compare expected results to observed results using chi square.** Chi square (written “x^{2}”) is a numerical value that measures the difference between an experiment’s *expected* and *observed* values.

The equation for chi square is:

**x^{2} = Σ((o-e)^{2}/e)**, where “o” is the observed value and “e” is the expected value. Sum the results of this equation for all possible outcomes.

**4.** Now that we know our experiment’s degrees of freedom and our chi square value, there’s just one last thing we need to do before we can find our p value – we need to decide on a significance level.

Basically, the significance level is a measure of how certain we want to be about our results – a low significance level demands that the observed difference be very unlikely to have arisen by chance before we reject the null hypothesis.

Significance levels are written as a decimal (such as 0.01), which corresponds to the probability that random sampling would produce a difference as large as the one you observed if there were no underlying difference in the populations.
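The degrees-of-freedom and chi square formulas above, together with a pre-chosen significance level, can be sketched as follows (the counts are hypothetical, from an imagined 100-flip coin experiment):

```python
# Hypothetical counts: 100 coin flips, two categories (heads, tails)
observed = [62, 38]
expected = [50, 50]

# Degrees of freedom = n - 1, where n is the number of categories
degrees_of_freedom = len(observed) - 1

# Chi square: sum of (o - e)^2 / e over all categories
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Significance level, chosen before looking at the results
alpha = 0.05

print(degrees_of_freedom, chi_square)  # 1 5.76
```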

**5. Use a chi square distribution table to approximate your p value.** Scientists and statisticians use large tables of values to calculate the p value for their experiment.

These tables are generally set up with the vertical axis on the left corresponding to degrees of freedom and the horizontal axis on the top corresponding to p-value.

Use these tables by first finding your degrees of freedom, then reading that row across from left to right until you find the first value *bigger* than your chi square value. Look at the corresponding p value at the top of the column – your p value is between this value and the next-largest value (the one immediately to the left of it).
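The row-reading procedure can be sketched with a tiny hardcoded slice of a chi square table (the critical values below are the standard ones for 1 degree of freedom; the helper name `bracket_p_value` is made up for this illustration):

```python
# One row of a chi square table: (p value at top of column, critical value),
# for 1 degree of freedom, with columns listed left to right.
ROW_DF1 = [(0.10, 2.706), (0.05, 3.841), (0.01, 6.635)]

def bracket_p_value(chi_square, row):
    """Read across the row until the first critical value bigger than
    chi_square; p lies between that column's p and the column to its left.
    Returns (lower_bound, upper_bound); None means the table ran out."""
    for i, (p, critical) in enumerate(row):
        if critical > chi_square:
            upper = row[i - 1][0] if i > 0 else None
            return (p, upper)
    return (None, row[-1][0])  # chi_square beyond the table: p < last column

print(bracket_p_value(5.76, ROW_DF1))  # (0.01, 0.05): p is between 0.01 and 0.05
```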

**6. Decide whether to reject or keep your null hypothesis.** Since you have found an approximate p value for your experiment, you can decide whether or not to reject the null hypothesis (as a reminder, this is the hypothesis that the experimental variables you manipulated did *not* affect the results you observed). If your p value is lower than your significance level, congratulations – you’ve shown that your experimental results would be highly unlikely to occur if there were no real connection between the variables you manipulated and the effect you observed. If your p value is higher than your significance level, you can’t confidently make that claim.
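This final comparison can be sketched in a few lines (the p value of 0.016 is hypothetical, standing in for a value read off a table or reported by software):

```python
alpha = 0.05     # significance level chosen before the experiment
p_value = 0.016  # hypothetical p value for the observed results

if p_value < alpha:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"

print(decision)  # reject the null hypothesis
```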