Re: Null Hypothesis?
Reply #3 – 2016-10-28 20:55:57
The idea of a "null hypothesis" or rather "default position" has its roots in logic and philosophy. It is reasonable to stick with the default position: claims are rejected until sufficient evidence is available to convince one otherwise. Then the default position is abandoned and the alternative position, for which ideally overwhelming evidence was provided, is adopted. That's the rational way to do it, anyway.

A null hypothesis in statistics is what you posted as the definition. In the case of null hypothesis significance testing (NHST) the null is not accepted, because of the way the statistical analysis is carried out. Imagine flipping a coin 10 times. It lands tails 9 times. The null hypothesis (H0) would be that the coin is fair. An alternative hypothesis (H1) would be that the coin is biased towards tails. In typical frequentist fashion, we'd do a NHST: we calculate the probability of getting 9/10 tails or a more extreme result (which only leaves 10/10 in our case) assuming the coin is fair (our null hypothesis): P(X >= 9 | H0) ≈ 0.01. Note that here we calculate the probability of a random variable (X, the number of tails in the 10 tosses) given the assumption that H0 is true. This is called the p-value, and it is compared against a more or less arbitrary significance level; 0.05 is often used. Since our p-value is below that, we call the result statistically significant, i.e. the null should be rejected.

A Bayesian analysis of the situation would be a more natural approach imo, as it would allow one to accept or reject the "null" hypothesis, or any other hypothesis. It's more natural because Bayesian inference also makes use of a prior probability: you wouldn't assume that the coin was fair if it came from a magician's shop. You would give the "null" hypothesis a lower prior probability than e.g. the alternative hypothesis H1 that the coin is not fair.
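The p-value in the example can be checked with a few lines of Python. This is just a sketch of the calculation described above: under H0 the number of tails X in 10 tosses follows a Binomial(10, 0.5) distribution, and the one-sided p-value sums the tail of that distribution from 9 upwards.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p0 = 10, 0.5  # 10 tosses, fair coin under H0

# p-value = P(X >= 9 | H0) = P(X = 9) + P(X = 10)
p_value = sum(binom_pmf(k, n, p0) for k in range(9, n + 1))
print(round(p_value, 4))  # 11/1024 ≈ 0.0107, i.e. below the 0.05 level
```

Note this is a one-sided test (biased towards tails); a two-sided alternative ("the coin is not fair") would also count the 9-or-more-heads outcomes and double the p-value.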
Then, every time you flip the coin you calculate P(H1 | coin flipping result), and the resulting posterior probability is fed back into the same equation as the prior probability for the next coin flip. Note that here we actually calculate the probability of a hypothesis given the data we have gathered from our experiment. But I'll stop my rambling here.
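The sequential updating scheme above can be sketched in a few lines. The concrete numbers here are my own assumptions, not from the post: H1 models a tails-biased coin with P(tails | H1) = 0.9, and the "magician's shop" prior gives H0 only 0.3. Each flip applies Bayes' theorem once, and the posterior becomes the prior for the next flip.

```python
def update(prior_h1, flip, p_tails_h1=0.9, p_tails_h0=0.5):
    """One Bayes step: returns P(H1 | flip) given the current prior P(H1).

    The bias 0.9 under H1 is a hypothetical modelling choice.
    """
    like_h1 = p_tails_h1 if flip == "T" else 1 - p_tails_h1
    like_h0 = p_tails_h0 if flip == "T" else 1 - p_tails_h0
    num = like_h1 * prior_h1
    return num / (num + like_h0 * (1 - prior_h1))

posterior = 0.7  # prior P(H1): coin from a magician's shop, so H0 gets only 0.3
flips = "TTTTTTTTTH"  # 9 tails, 1 head, as in the example

for flip in flips:
    posterior = update(posterior, flip)  # posterior is fed back as the next prior

print(round(posterior, 3))  # ≈ 0.989: we can now directly accept/reject hypotheses
```

Unlike the p-value, this number is a probability of the hypothesis itself given the data, which is exactly the quantity the frequentist analysis never provides.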