Stata presents a bewildering array of options for the confidence interval for a proportion. Which one should you use?

By default, Stata uses the "*exact*" confidence interval. The name is a bit misleading (this interval is also called the Clopper-Pearson confidence interval, a name that makes fewer implied claims!). The exact interval is exact only in the sense that it is never too narrow: the probability of the true proportion lying within the "exact" confidence interval is at least 95%. However, this means that in most cases the interval is wider than it needs to be.
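Under the hood, the Clopper-Pearson bounds are the proportions at which the observed count becomes just improbable enough. As a rough illustration (not Stata's actual implementation), here is a minimal Python sketch that recovers the bounds by bisection on the binomial tail probabilities:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """'Exact' (Clopper-Pearson) interval for x events in n trials."""
    def bisect(keep_low):
        # binary search for the p at which the tail probability crosses alpha/2
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if keep_low(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower bound: the p at which P(X >= x | p) rises to alpha/2
    lower = 0.0 if x == 0 else bisect(lambda p: 1 - binom_cdf(x - 1, n, p) < alpha / 2)
    # upper bound: the p at which P(X <= x | p) falls to alpha/2
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) >= alpha / 2)
    return lower, upper
```

For 2 events in 23 trials this reproduces the `cii 23 2, exact` output shown later in this piece: roughly 0.0107 to 0.2804.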
For an apparently simple problem, finding a formula that gives 95% confidence intervals for a proportion has turned out to be surprisingly hard to crack. The problem is that event counts are whole numbers, while proportions are continuous. Imagine that the real prevalence of smoking in your population is 25%, and you take a sample of 107 people. Your sample *cannot* have a 25% prevalence of smoking, because, well, that would be 26.75 people. So some sample sizes are "lucky" because they can show lots of proportions exactly, and some proportions are "lucky" because they can turn up exactly in lots of sample sizes. You begin to see the problem?

### Solutions from research

There have been quite a few studies that have used computer simulation to examine the performance of the different confidence interval formulas. The recommended alternatives are the Wilson or Jeffreys intervals for samples of fewer than 100, and the Agresti-Coull interval for samples of 100 or more. These give the best trade-off between coverage that falls below 95% and intervals that are wider than they need to be.
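In fact, because the data are binomial, the true coverage of any interval formula at a given sample size and proportion can be summed exactly rather than simulated. A hedged Python sketch (the Wilson formula below is the standard one; the chosen n and p are merely illustrative):

```python
from math import comb, sqrt

def wilson(x, n, z=1.96):
    """Wilson score interval for x events in n trials."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def exact_coverage(n, p, interval):
    """True probability that the interval contains p, summed over all outcomes."""
    return sum(
        comb(n, x) * p**x * (1 - p)**(n - x)
        for x in range(n + 1)
        if interval(x, n)[0] <= p <= interval(x, n)[1]
    )

# exact_coverage(50, 0.10, wilson) -> about 0.97, near the nominal 95%
```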

### What about the textbook formula that SPSS uses?

One option that Stata does not offer you is the formula you find in textbooks, which simply uses the standard error of the proportion to create a confidence interval. This is known as the normal approximation (or Wald) interval, and it is what SPSS uses. If you calculate the confidence interval for 2 events out of a sample of 23 using the normal approximation, you get roughly −3% to 20%. That's right: SPSS is suggesting that the true event rate could be minus three percent. Quite clearly this is wrong, as there is no such thing as minus three percent. The interval also includes another figure that is obviously wrong: if we have observed two cases, then the true value cannot be zero percent either. Less obviously, the upper end of the interval is also wrong. Using Wilson's formula instead gives a confidence interval of

```
. cii 23 2, wil

                                                         ------ Wilson ------
    Variable |        Obs        Mean    Std. Err.       [95% Conf. Interval]
-------------+---------------------------------------------------------------
             |         23    .0869565    .0587534         .02418    .2679598
```

2.4% to 26.8%. The "exact" method gives an interval that is slightly wider:

```
. cii 23 2, exact

                                                  -- Binomial Exact --
    Variable |        Obs        Mean    Std. Err.       [95% Conf. Interval]
-------------+---------------------------------------------------------------
             |         23    .0869565    .0587534        .01071    .2803793
```

1.1% to 28.0%.
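To make the comparison concrete, here is a minimal Python sketch of the two formulas (not SPSS's or Stata's actual code) applied to 2 events in 23 trials:

```python
from math import sqrt

Z = 1.96  # critical value for 95% confidence

def wald(x, n):
    """Textbook normal-approximation interval: p-hat +/- z * SE."""
    p = x / n
    se = sqrt(p * (1 - p) / n)
    return p - Z * se, p + Z * se

def wilson(x, n):
    """Wilson score interval."""
    p = x / n
    denom = 1 + Z * Z / n
    centre = (p + Z * Z / (2 * n)) / denom
    half = (Z / denom) * sqrt(p * (1 - p) / n + Z * Z / (4 * n * n))
    return centre - half, centre + half

print(wald(2, 23))    # lower bound is negative -- an impossible proportion
print(wilson(2, 23))  # roughly (0.024, 0.268), matching Stata's output
```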

So never calculate a binomial confidence interval by hand or using SPSS!

#### Skip to this bit for the answer

For such an apparently simple problem, the issue of the confidence interval for a proportion is mathematically pretty complex. Mercifully, a Stata user just has to remember three things:

- the "exact" interval is conservative, but has at least a 95% chance of including the true value;
- for N < 100, the Wilson or Jeffreys interval is less conservative and comes closest to an average coverage of 95%;
- for N ≥ 100, the Agresti-Coull interval is the best bet.
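Those rules of thumb can be captured in a few lines. The helper below is a hypothetical convenience function, not a Stata feature; it picks Wilson below N = 100 and Agresti-Coull otherwise:

```python
from math import sqrt

Z = 1.96  # critical value for 95% confidence

def wilson(x, n):
    """Wilson score interval for x events in n trials."""
    p = x / n
    denom = 1 + Z * Z / n
    centre = (p + Z * Z / (2 * n)) / denom
    half = (Z / denom) * sqrt(p * (1 - p) / n + Z * Z / (4 * n * n))
    return centre - half, centre + half

def agresti_coull(x, n):
    """Agresti-Coull: add z^2/2 pseudo-successes and failures, then use Wald."""
    n_adj = n + Z * Z
    p_adj = (x + Z * Z / 2) / n_adj
    half = Z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

def recommended_ci(x, n):
    """Rule of thumb from the text: Wilson for N < 100, Agresti-Coull for N >= 100."""
    return wilson(x, n) if n < 100 else agresti_coull(x, n)
```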