Basic Bayes

A Very Quick Overview of Some of the Features and Benefits of Bayesian Analysis

Author

Andy Grogan-Kaylor

Published

August 29, 2023

1 Bayes

Thomas Bayes

2 Bayes' Theorem

\[P(H|D) = \frac{P(D|H) P(H)}{P(D)}\]

3 In Words:

\[\text{posterior} \propto \text{likelihood} \times \text{prior}\]

4 In Even More Words:

Our posterior (updated) beliefs about the probability of some hypothesis are proportional to our prior beliefs about the probability of that hypothesis, multiplied by the likelihood that, were our hypothesis true, it would have generated the data we observe.
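
As a concrete numerical illustration of this updating rule, here is a short Python sketch (not from the original text). The numbers, a 1% base rate, 90% sensitivity, and a 5% false-positive rate, are invented purely for illustration.

```python
# A minimal numerical sketch of Bayes' theorem, P(H|D) = P(D|H) P(H) / P(D).
# All probabilities below are illustrative assumptions, not from the text.

p_H = 0.01             # prior probability of the hypothesis, P(H)
p_D_given_H = 0.90     # likelihood of the data if H is true, P(D|H)
p_D_given_notH = 0.05  # likelihood of the data if H is false, P(D|~H)

# P(D), the marginal probability of the data, by the law of total probability.
p_D = p_D_given_H * p_H + p_D_given_notH * (1 - p_H)

# Posterior: the prior, updated by the likelihood.
p_H_given_D = p_D_given_H * p_H / p_D
print(p_H_given_D)  # about 0.154
```

Even with a positive result from a fairly accurate test, the posterior probability of the hypothesis remains modest, because the prior probability was low; the posterior is always a compromise between prior and likelihood.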

5 Benefits

  1. In Bayesian analysis, we are not assessing the plausibility of the data given the assumption of a null hypothesis (\(P(D|H_0)\), where \(H_0: \text{parameter value} = 0\)). This feature of Bayesian analysis has a few key implications:
    • We are actually estimating what we often think we are estimating in frequentist analysis: the probability of a particular hypothesis given the data. Thus, after conducting a Bayesian analysis, we can simply say, “The probability of our hypothesis given the data (\(P(H|D)\)) is x,” rather than engaging in the more convoluted statement, “Were the null hypothesis true, the probability of observing data as extreme as, or more extreme than, the data we actually observed would be p.” (This kind of direct probability statement is illustrated in the sketch following this list.)
    • Because we are directly estimating the probability of hypotheses, we can not only evaluate the probability of the null hypothesis (\(H_0\)), but also accept the null hypothesis, something that we are never supposed to be able to do in frequentist analysis. Being able to accept the null hypothesis may have implications for equivalence testing and theory simplification, and may reduce publication bias if we are not always looking for ways to reject \(H_0\).
  2. In Bayesian analysis, we are able to incorporate prior information (\(P(H)\)). Prior information may reflect particular beliefs about the likely value of a parameter, and may carry higher (a vague prior) or lower (an informative prior) uncertainty. Prior information may come from a number of sources:
    • Prior history of a project.
    • Meta-analyses or systematic reviews of the question being considered.
    • Expert knowledge, or community beliefs or wisdom.
  3. Because Bayesian estimation relies on Markov chain Monte Carlo (MCMC) simulation rather than on large-sample (asymptotic) approximations, Bayesian methods may perform better when samples are small. In multilevel modeling, Bayesian methods may perform better when there are small level 1 and/or level 2 samples. Adequate model performance in small samples will likely depend upon having good prior information. The sketch after this list combines these ideas: a small sample, an informative prior, and MCMC.
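
To make the benefits above concrete, here is a minimal sketch (not the author's code) that uses a hand-rolled random-walk Metropolis sampler to estimate the mean of a deliberately small sample, comparing a vague prior with an informative one. The data values, prior settings, and tuning choices are all illustrative assumptions.

```python
# A minimal sketch: random-walk Metropolis sampling for the mean of a small
# sample, comparing a vague prior with an informative prior. All numbers are
# illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2023)

# A deliberately small sample (Benefit 3: small-sample settings).
y = np.array([0.8, 1.5, -0.2, 2.1, 0.9, 1.2])
sigma = 1.0  # residual SD treated as known, purely to keep the sketch short


def sample_posterior(prior_mean, prior_sd, n_draws=10_000, burn_in=2_000):
    """Random-walk Metropolis for mu, with a Normal(prior_mean, prior_sd) prior."""

    def log_posterior(mu):
        # Unnormalized log posterior: log prior + log likelihood.
        log_prior = stats.norm.logpdf(mu, loc=prior_mean, scale=prior_sd)
        log_lik = np.sum(stats.norm.logpdf(y, loc=mu, scale=sigma))
        return log_prior + log_lik

    draws = np.empty(n_draws)
    mu_current = 0.0
    for i in range(n_draws):
        mu_proposal = mu_current + rng.normal(scale=0.5)
        # Accept with the usual Metropolis ratio (symmetric proposal).
        if np.log(rng.uniform()) < log_posterior(mu_proposal) - log_posterior(mu_current):
            mu_current = mu_proposal
        draws[i] = mu_current
    return draws[burn_in:]  # discard burn-in


# Benefit 2: prior information, here a vague prior versus an informative prior
# (e.g., a meta-analysis suggesting the parameter is near 1).
vague = sample_posterior(prior_mean=0.0, prior_sd=10.0)
informative = sample_posterior(prior_mean=1.0, prior_sd=0.5)

# Benefit 1: a direct probability statement about the hypothesis,
# P(mu > 0 | data), rather than a p-value computed under an assumed null.
for label, post in [("vague prior", vague), ("informative prior", informative)]:
    print(f"{label}: posterior mean = {post.mean():.2f}, "
          f"P(mu > 0 | data) = {np.mean(post > 0):.3f}")
```

In practice one would typically fit such a model with established software (e.g., Stan, PyMC, or Stata's bayes: prefix) rather than a hand-written sampler; the hand-rolled version is shown only to keep the sketch self-contained.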