Welcome to Estimators! | Mathematical Statistics 1
Introduction to Point Estimation
Say we have some data \(\mathbf{X}\). It’s a vector, so just think of it like a list of \(n\) numbers. We want to learn something about how these data came to be. First, we will aggregate our data using a statistic.
Let \(X_1, \cdots, X_n\) be our data. A statistic is a function of that data. We will denote that statistic with \(\delta\). Crucially, this function cannot contain anything that we don’t know. It is purely a function of known quantities.
Statistics
Let’s say that our data comes from a normal distribution (a “bell curve”). We denote this with \(X \sim N(\mu, \sigma^2)\), where \(\mu\) is the mean of the distribution and \(\sigma^2\) is the variance. Also, to make our life easier, say we know the variance \(\sigma^2\). In practice, this will basically never be the case, but it will simplify our math for now.
We have infinitely many statistics to choose from. For example, we could use \(X_1\) (that is, the first data point in our vector). While we leave some data on the table in that case, it is certainly a statistic since \(\delta = X_1\) is a function of our data, and there are no unknowns.
Alternatively, we could use the sample mean of our observed data. We will call it \(\overline{X}\) (pronounced “\(X\) bar”), and it is defined as \[ \delta(\mathbf{X}) = \overline{X} = \frac{1}{n}\sum_{i=1}^n X_i. \]
Notice that this is also a statistic! Although it looks much more complicated, we are still using our data and no unknowns. Here, \(n\) is the number of data points that we have, which we know. And we know every \(X_i\) because each is part of our data vector \(\mathbf{X}\).
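To make this concrete, here is a minimal sketch in Python (using NumPy, with an invented data vector purely for illustration) showing that both \(\delta(\mathbf{X}) = X_1\) and \(\delta(\mathbf{X}) = \overline{X}\) can be computed from the data alone:

```python
import numpy as np

# A hypothetical data vector X with n = 5 observations (values invented for illustration).
X = np.array([2.3, 1.9, 2.8, 2.1, 2.5])

# delta(X) = X_1: a function of the data alone, so it is a statistic.
delta_first = X[0]

# delta(X) = X-bar: also a function of the data alone (n is known), so it is a statistic.
delta_mean = X.sum() / len(X)   # same as X.mean()

print(delta_first, delta_mean)
```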
Constants (e.g., 7) are also statistics, even though no data are involved in the calculation. If it feels like you would just be guessing at random by doing this, you’re right.
Now let’s look at an example of a function that is not a statistic: \[ T = \frac{\overline{X} - \mu}{\sigma/\sqrt{n}}. \] This function will come back in future articles, but for now, recall that we said we already know the variance \(\sigma^2\), so we also know \(\sigma\). We know \(n\) as well, as we mentioned earlier. But \(\mu\) is unknown to us. Because the unknown value \(\mu\) appears in the numerator, \(T\) is not a statistic.
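A sketch of \(T\) in code makes the problem visible: to compute it, we would have to pass in \(\mu\), which is exactly the quantity we do not know. (The function below is hypothetical and simply mirrors the formula above.)

```python
import numpy as np

def t_value(X, mu, sigma):
    """(X-bar - mu) / (sigma / sqrt(n)), mirroring the formula above.

    sigma and n are known in our setup, but mu is not. Because computing T
    requires the unknown mu as an input, T is not a statistic.
    """
    n = len(X)
    return (np.mean(X) - mu) / (sigma / np.sqrt(n))
```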
Point Estimators
To use a plain-language term that is largely despised among statisticians, we want to “guess” (*gasp*) at the value of our unknown parameter. In the previous example, that parameter was \(\mu\). To be more general, we will use the Greek letter \(\theta\) (pronounced “theta”) since \(\mu\) is often saved for the mean of a normal distribution.
Say we have data \(\mathbf{X}\) that comes from some probability distribution with an unknown parameter \(\theta\) that has some true fixed value. A point estimator is a statistic that estimates the true value of \(\theta\), and is denoted by \(\hat{\theta}\). That is, \[ \hat{\theta} = \delta(\mathbf{X}). \]
Point estimator example
Let’s keep going with our data, which comes from a normal distribution. But, to get used to using \(\theta\), say that \(X \sim N(\theta, \sigma^2)\). One possible estimator is the sample mean \(\overline{X}\) from earlier (i.e., the mean of the observed data). Alternatively, we can use a constant: say 5. Intuitively, it feels like \(\hat{\theta} = \overline{X}\) would be a better guess than a simple \(\hat{\theta} = 5\), because it is actually informed by the data. But how do we quantify that intuition? We will calculate and compare both the bias and the variance of each.
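As a quick sanity check of that intuition, here is a small simulation sketch. The true \(\theta\), \(\sigma\), and sample size are made up purely for illustration: we repeatedly draw data from \(N(\theta, \sigma^2)\) and compare \(\hat{\theta} = \overline{X}\) with \(\hat{\theta} = 5\) across the simulated data sets.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 3.0, 2.0, 25, 10_000   # hypothetical values, purely for illustration

# Draw `reps` independent data vectors, each with n observations from N(theta, sigma^2).
samples = rng.normal(theta, sigma, size=(reps, n))

xbar_hat = samples.mean(axis=1)    # theta-hat = X-bar for each simulated data set
const_hat = np.full(reps, 5.0)     # theta-hat = 5, regardless of the data

# On average, how far is each estimator from the truth, and how much does it vary?
print("X-bar:   avg =", xbar_hat.mean(), " var =", xbar_hat.var())
print("const 5: avg =", const_hat.mean(), " var =", const_hat.var())
```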
Bias
Bias tells us how far off, on average, our estimator is from the true value of our unknown parameter \(\theta\). Mathematically, we hope that the following quantity is as close to zero as possible: \[ \mathbb{E}[\delta(\mathbf{X}) | \theta] - \theta. \]
The confusing-looking term \(\mathbb{E}[\delta(\mathbf{X}) | \theta]\) is the expected value of our estimator, given the value of the unknown parameter \(\theta\). Basically, it is the expected value of \(\hat{\theta}\). If our estimator \(\hat{\theta}\) is exactly correct on average, then this whole difference is 0, and we call the estimator unbiased.
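For our two candidate estimators, we can work the bias out directly, using the fact that \(\mathbb{E}[X_i | \theta] = \theta\) for every observation: \[ \mathbb{E}[\overline{X} | \theta] - \theta = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[X_i | \theta] - \theta = \theta - \theta = 0, \] while \[ \mathbb{E}[5 | \theta] - \theta = 5 - \theta, \] which is 0 only if the true value of \(\theta\) happens to be exactly 5. So \(\overline{X}\) wins on bias.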
Variance
Variance describes the variability of our estimator around its expected value. Ideally, the variance is also small: intuitively, we are less “sure” about our estimate if it has a larger variance. We denote the variance with \(Var(\delta(\mathbf{X})|\theta)\).
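We can compare our two estimators here as well. Assuming, as usual in this setup, that the observations are independent draws from \(N(\theta, \sigma^2)\), \[ Var(\overline{X} | \theta) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i | \theta) = \frac{\sigma^2}{n}, \qquad Var(5 | \theta) = 0. \] The constant estimator actually wins on variance (a constant never varies), even though its bias can be terrible. We need a way to weigh these two properties against each other.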
However, notice that both bias and variance are conditional on the true value of our unknown parameter \(\theta\). Thus, we cannot calculate these quantities directly. To deal with this, we will introduce the concept of loss in the next article!