Lecture 7
Tuesday 7th April, 2026
We want to find a balance between the two types of errors we could make: rejecting a true null hypothesis (a Type I error) and failing to reject a false one (a Type II error).
\[ P(\text{heads}) = \frac{\text{number of heads}}{\text{total number of flips}} \]
\[ P(\text{success}) = \frac{\text{number of successes}}{\text{total number of trials}} \]
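The empirical-probability idea above can be sketched with a small simulation. This is an illustrative example, not from the lecture: the function name and the use of a fair coin (true probability 0.5) are my own choices.

```python
import random

def empirical_probability(n_flips, seed=42):
    """Estimate P(heads) as (number of heads) / (total number of flips)."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# As the number of flips grows, the estimate settles near the true value 0.5.
print(empirical_probability(100))
print(empirical_probability(100_000))
```

With only 100 flips the estimate can wander noticeably; with 100,000 it sits very close to 0.5, which previews the convergence behaviour discussed later in these notes.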

\[ P(X = k) = \binom{n}{k} p^k (1-p)^{n-k} \]
where:
- \(n\) = number of trials
- \(k\) = number of successes
- \(p\) = probability of success on a single trial
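The binomial formula translates directly into code. A minimal sketch using only the standard library (the function name is mine):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability of exactly 3 heads in 10 fair coin flips.
print(binomial_pmf(3, 10, 0.5))  # 0.1171875
```

As a sanity check, the probabilities for \(k = 0, \dots, n\) sum to 1.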
\[ P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!} \]
where:
- \(k\) = number of events observed
- \(\lambda\) = mean number of events per interval
- \(e\) = Euler's number
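The Poisson formula is equally direct to compute. Again a stdlib-only sketch with names of my own choosing:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) = lambda^k * e^(-lambda) / k!"""
    return lam**k * exp(-lam) / factorial(k)

# Probability of exactly 2 events when the mean rate is 3 per interval.
print(poisson_pmf(2, 3.0))
```

Summing over all \(k\) (in practice, a large enough range) again recovers a total probability of 1.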
\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \]
where:
- \(x\) = observed value
- \(\mu\) = mean
- \(\sigma\) = standard deviation
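The normal density can be evaluated pointwise from the formula above. A minimal sketch (function name is mine):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2 * sigma^2))."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Density at the mean of a standard normal: 1 / sqrt(2*pi), about 0.3989.
print(normal_pdf(0.0, 0.0, 1.0))
```

Note the symmetry about \(\mu\): the density at \(\mu + a\) equals the density at \(\mu - a\).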
This is the central limit theorem: if we sample values from any distribution and calculate the mean, then as we increase our sample size, the distribution of the sample mean gets closer and closer to a normal distribution (Duthie 2025).
\[ \chi^2 = \sum_i \frac{(\text{Observed}_i - \text{Expected}_i)^2}{\text{Expected}_i} \]
where:
- \(\text{Observed}_i\) = observed count in category \(i\)
- \(\text{Expected}_i\) = expected count in category \(i\)
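The chi-square statistic is a straightforward sum over categories. A minimal sketch, using a fair-die example of my own devising:

```python
def chi_square_statistic(observed, expected):
    """Sum of (Observed_i - Expected_i)^2 / Expected_i over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A six-sided die rolled 60 times: each face is expected 10 times if fair.
observed = [8, 12, 9, 11, 10, 10]
expected = [10] * 6
print(chi_square_statistic(observed, expected))  # 1.0
```

A small statistic (here 1.0, well below the critical value for 5 degrees of freedom) means the observed counts are consistent with the expected ones.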