Analysis (10)
The focus here is on comparing the Poisson fit to the negative binomial fit. Goodness of fit is assessed by comparing the Bayesian evidence for each of the two models.
Read in the data and find the mean:
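The notebook is written in the Wolfram Language and the data file is not reproduced here. As a stand-in, the following Python sketch simulates overdispersed (negative binomial) count data — the parameters r = 2, p = 0.3 and the sample size are assumptions for illustration only — and computes the sample mean:

```python
import numpy as np

# The original data file is not shown; simulate overdispersed count
# data as a stand-in (r = 2, p = 0.3, n = 200 are assumptions).
rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)

mean_count = counts.mean()
print(mean_count)
```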
The Poisson random variable has one parameter, namely the mean μ. Plot the log likelihood of the data under the assumption that they are Poisson distributed:
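Continuing the Python sketch (the notebook itself uses Mathematica), the Poisson log likelihood can be evaluated on a grid of μ values; the grid limits are assumptions chosen to bracket the sample mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

# log L(mu) = sum_i log Poisson(x_i | mu), evaluated on a grid of mu
mu_grid = np.linspace(0.1, 3 * counts.mean(), 400)
loglik = np.array([stats.poisson.logpmf(counts, mu).sum() for mu in mu_grid])

# the maximum-likelihood estimate of mu is the sample mean
mu_hat = mu_grid[np.argmax(loglik)]
```

Plotting `loglik` against `mu_grid` shows a smooth curve peaking at the sample mean.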
Plot the posterior probability density function for the parameter μ under the assumption that the prior probability distribution for μ is flat on the interval (μmin,μmax). Bayes' theorem tells us that the posterior probability distribution is proportional to the product of the prior and the likelihood. The constant of proportionality is known as the evidence and is found by numerical integration. Computations are performed on a log scale to avoid numerical underflow:
Compute the Bayesian evidence in favor of the Poisson model:
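Both steps — normalizing the posterior and extracting the evidence — can be sketched together in Python; the flat-prior endpoints μmin and μmax are assumptions, and the log-sum-exp trick plays the role of the log-scale computation described above:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

mu_min, mu_max = 0.1, 3 * counts.mean()           # prior support (assumption)
mu = np.linspace(mu_min, mu_max, 2000)
d_mu = mu[1] - mu[0]

log_prior = -np.log(mu_max - mu_min)              # flat prior density, log scale
log_lik = np.array([stats.poisson.logpmf(counts, m).sum() for m in mu])

# log evidence = log of the integral of prior x likelihood over mu,
# computed as a log-sum-exp over a Riemann sum
log_evidence_pois = logsumexp(log_lik + log_prior) + np.log(d_mu)

# normalized log posterior density on the grid
log_post = log_lik + log_prior - log_evidence_pois
```

By construction the posterior integrates to one: `np.exp(logsumexp(log_post)) * d_mu` is 1.0 up to floating-point error.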
The Bayesian evidence is only meaningful when compared to another model using the exact same data.
Now we turn our attention to the evidence computation under the negative binomial assumption. The negative binomial distribution has two unknown parameters, r and p. The log likelihood of the data under the assumption of the negative binomial distribution is:
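A Python counterpart of the negative binomial log likelihood, for the same simulated stand-in data (note that SciPy's `nbinom` calls the parameter r `n`):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

def nb_loglik(r, p, x):
    # sum_i log NB(x_i | r, p); SciPy's first shape parameter n is r
    return stats.nbinom.logpmf(x, r, p).sum()

print(nb_loglik(2.0, 0.3, counts))
```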
The evidence computation requires an integration over two dimensions. We assume as prior information that r is uniformly distributed on (0.1, 5.0) and that p is uniformly distributed on (0.05, 0.5). The results of the evidence computation are:
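The two-dimensional evidence integral can be approximated on a grid with the same log-sum-exp device; the grid resolution is an assumption:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

# flat priors on the ranges stated in the text
r = np.linspace(0.1, 5.0, 150)
p = np.linspace(0.05, 0.5, 150)
dr, dp = r[1] - r[0], p[1] - p[0]

# log likelihood at every (r, p) grid point, summed over the data
R, P = np.meshgrid(r, p, indexing="ij")
log_lik = stats.nbinom.logpmf(counts[:, None, None], R, P).sum(axis=0)

log_prior = -np.log((5.0 - 0.1) * (0.5 - 0.05))   # flat on the rectangle
log_evidence_nb = logsumexp(log_lik + log_prior) + np.log(dr * dp)
print(log_evidence_nb)
```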
As a check, compute the evidence using the function NIntegrate:
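NIntegrate is a Wolfram Language function; a rough Python analogue of the same check uses `scipy.integrate.dblquad` on the likelihood shifted by its value near the peak to avoid underflow (the shift point and tolerances are assumptions):

```python
import numpy as np
from scipy import stats
from scipy.integrate import dblquad

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

def loglik(r, p):
    return stats.nbinom.logpmf(counts, r, p).sum()

# shift by the log likelihood near its peak so the integrand is O(1)
shift = loglik(2.0, 0.3)

# dblquad integrates func(y, x): here y = p (inner) and x = r (outer)
val, err = dblquad(lambda p_, r_: np.exp(loglik(r_, p_) - shift),
                   0.1, 5.0, 0.05, 0.5, epsabs=0, epsrel=1e-4)

log_prior = -np.log((5.0 - 0.1) * (0.5 - 0.05))
log_evidence_check = np.log(val) + shift + log_prior
print(log_evidence_check)
```

The result should agree with the grid-based log evidence to within the quadrature tolerance.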
The difference between the evidence in favor of the negative binomial model and the Poisson model is huge. The observed data clearly favor the negative binomial model relative to the Poisson model:
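Putting the two grid computations side by side gives the log Bayes factor; with overdispersed data the negative binomial evidence dominates by a wide margin (the grids and prior ranges below repeat the earlier assumptions):

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

# Poisson log evidence, flat prior on mu
mu = np.linspace(0.1, 3 * counts.mean(), 2000)
ll_pois = np.array([stats.poisson.logpmf(counts, m).sum() for m in mu])
log_ev_pois = (logsumexp(ll_pois) + np.log(mu[1] - mu[0])
               - np.log(mu[-1] - mu[0]))

# negative binomial log evidence, flat priors on (0.1, 5.0) x (0.05, 0.5)
r = np.linspace(0.1, 5.0, 150)
p = np.linspace(0.05, 0.5, 150)
ll_nb = stats.nbinom.logpmf(counts[:, None, None],
                            r[:, None], p[None, :]).sum(axis=0)
log_ev_nb = (logsumexp(ll_nb) + np.log((r[1] - r[0]) * (p[1] - p[0]))
             - np.log((5.0 - 0.1) * (0.5 - 0.05)))

# log Bayes factor in favor of the negative binomial model
print(log_ev_nb - log_ev_pois)
```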
All the information in a Bayesian analysis is in the posterior. The posterior marginal probability density functions for the parameters r and p are:
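The marginals are obtained by integrating the gridded joint posterior over the other parameter; in the Python sketch (same assumed data and grids):

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

r = np.linspace(0.1, 5.0, 150)
p = np.linspace(0.05, 0.5, 150)
dr, dp = r[1] - r[0], p[1] - p[0]
R, P = np.meshgrid(r, p, indexing="ij")
log_lik = stats.nbinom.logpmf(counts[:, None, None], R, P).sum(axis=0)

# joint posterior density on the grid (the flat prior cancels on normalizing)
log_norm = logsumexp(log_lik) + np.log(dr * dp)
post = np.exp(log_lik - log_norm)

marg_r = post.sum(axis=1) * dp    # integrate out p
marg_p = post.sum(axis=0) * dr    # integrate out r

# posterior means as summaries of the two marginals
r_mean = (r * marg_r).sum() * dr
p_mean = (p * marg_p).sum() * dp
```

Plotting `marg_r` against `r` and `marg_p` against `p` reproduces the two marginal density curves.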
These two marginal probability density functions are clearly consistent with results from the function EstimatedDistribution:
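EstimatedDistribution is Wolfram's maximum-likelihood fitter; a comparable check in the Python sketch fits r and p by direct likelihood maximization (the starting point is an assumption):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.3, size=200)  # stand-in data (assumption)

def neg_loglik(theta):
    r, p = theta
    # keep the optimizer inside the parameter space via an inf penalty
    if r <= 0 or not 0 < p < 1:
        return np.inf
    return -stats.nbinom.logpmf(counts, r, p).sum()

# Nelder-Mead needs no gradients and tolerates the inf penalty above
fit = minimize(neg_loglik, x0=[1.0, 0.5], method="Nelder-Mead")
r_hat, p_hat = fit.x
print(r_hat, p_hat)
```

The maximum-likelihood point should sit near the peaks of the marginal posterior densities, which is the consistency check described in the text.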