With case studies in drunk driving and birth control

Written on September 17, 2018

**Disclaimer: This post is written mostly for a non-math-savvy audience. However, there is going to be some math sprinkled in. If the technical parts are not for you, I encourage you to gloss over them, and keep reading for the sake of the intuitive concepts I introduce. A basic understanding of statistics is important for everyone, whether or not you’ll actually be performing calculations with them.**

Consider this: percentages are often not as extreme as you’d think.

It happens all the time that people say “\(99\%\)” or “\(99.9\%\)” or sometimes even “\(90\%\)” to mean “almost \(100\%\)”. It also happens that people *have* the precise statistics and don’t *report* exactly what they are, instead rounding to \(99\%\).

But often *how* "almost" really matters. The difference between \(99\%\) and \(99.9\%\) seems trivial in the abstract, but let's put it in context. If a surgery has a \(99\%\) recovery rate, you should imagine \(100\) people getting the surgery, and one of them never recovering. That's very different from a \(99.9\%\) recovery rate, where \(1,000\) people get the surgery and only one never recovers.

This brings us to the second of five principles for understanding large statistics: flip it around. Whenever I encounter a large statistic, I ask myself, what is the opposite of it? What does \(100\%-x\) signify, and how big is it? It often helps to also reframe it from a percentage to \(\approx \frac 1 N\) where \(N\) is some large number. These are all just different ways of writing down the same fact, but they can often give us very different intuitions.
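As a quick sketch of the "flip it around" habit, here's a tiny Python helper (the function name is my own invention, just for illustration) that restates a percentage as its complement and as roughly \(\frac 1 N\):

```python
def flip_it_around(percent):
    """Restate a success percentage as (complement, N),
    where the failure case happens roughly 1 time in N."""
    complement = round(100.0 - percent, 4)   # what 100% - x signifies
    n = round(100.0 / (100.0 - percent))     # failure is roughly 1 in N
    return complement, n

# 99% and 99.9% look similar, but flipped around they differ tenfold:
print(flip_it_around(99.0))   # (1.0, 100): 1 failure in ~100
print(flip_it_around(99.9))   # (0.1, 1000): 1 failure in ~1,000
```

Same fact, three ways of writing it down: \(99.9\%\), \(0.1\%\), and \(\approx\frac{1}{1000}\).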

This is a very general principle, really: when the same fact gives you different intuitions, it’s a good idea to pause and think about what’s really going on.

Here’s an example: my freshman year at college, there were placards around campus that gave statistics about how widespread and accepted different alcohol- and drug-related activities are on campus. The idea, I think, was to show people that drug abuse is not widespread or widely accepted, so you shouldn’t feel pressure to do it. In fact, you should feel pressure *not* to.

With this goal in mind, most of the statistics they published were reasonable, because all they were really trying to show is that the *majority* of Cornell students find some behavior unacceptable, not that *all* of them do. So a \(70\%\) or \(80\%\) statistic is actually pretty good.

But there was one placard that caught my eye. It reported that \(94\%\) of Cornell students say they had not driven after drinking in the past 30 days. That sounds pretty good, but once again, let’s flip it around. That means \(6\%\) of Cornell students *did* drive drunk in the last 30 days. That’s terrifying!

To get a real sense of how terrifying it is, it’s useful to look at the raw number, rather than a percentage. Yet again, we’re writing down the same fact in two different ways. Sometimes raw numbers give you a better intuition, and sometimes percentages do. If the two disagree, you should think carefully about what’s going on.

There are approximately \(23,000\) students at Cornell. \(6\%\) of that is \(1,380\) students. That’s \(1,380\) students driving drunk, and the university is reporting this statistic with *pride*.

\(1,380\) students… *every month*. That’s an average of \(46\) per day. The time period is an easy part of the equation to lose sight of, and it reflects another easy way to go wrong with statistics: not understanding what you’re measuring.
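The arithmetic above is just two lines:

```python
students = 23_000          # approximate Cornell enrollment
rate = 0.06                # the flipped-around 6%

per_month = students * rate   # students who drove drunk in a 30-day window
per_day = per_month / 30      # average per day over that window

print(per_month, round(per_day))   # 1380.0 46
```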

In this case, any answer to “how many students drive drunk on campus?” must specify *over how much time*. To explore this concept further, and to demonstrate ways of intuitively combining extreme statistics, I’m going to shift to a new example: How effective is birth control?

**Note: I’m completely ignoring the question of STI prevention here, which is also very important but not to the broader point I’m making in this post.**

We’ll focus on the following figures which I could only find organized well on Wikipedia. Feel free to check the sources, but I’ve seen similar statistics reported in multiple places including Planned Parenthood and the CDC.

Type of BC | Perfect Use | Typical Use |
---|---|---|
Implant | \(99.95\%\) | \(99.95\%\) |
Copper IUD | \(99.4\%\) | \(99.2\%\) |
The Pill | \(99.7\%\) | \(91\%\) |
Condom | \(98\%\) | \(82\%\) |
Pulling Out | \(96\%\) | \(78\%\) |

So let’s return to the topic at hand: what are we measuring? Condoms, with perfect use, are \(98\%\) effective. \(98\%\) of *what*?

It turns out the answer is: \(98\%\) of women who use condoms every time they have sex don’t get pregnant *in the first year*.

Just like “every month” was an important caveat before, “in the first year” is an important caveat here. If you are planning on using birth control for a year, this is the statistic you want. But for anyone who is planning on using birth control for longer, this statistic doesn’t really answer the question they need to be asking. The real question is, what are the chances *I* will accidentally get pregnant at some point in my life?

To answer that, we have to know how long you intend to have sex, while at the same time not wanting to get pregnant. This varies from individual to individual, but for our purposes we’ll go with 10 years.

To figure out what the success rates are over 10 years, we break the 10 years down into individual years, since we know the statistic for each year; then we combine them. It is easier to reason intuitively about combining statistics when they are small rather than large, so we will consider not the success rate of the birth control, which is large, but the failure rate, which is small.

So we come to our fourth principle: when there are multiple (unrelated) events, each of which is unlikely, the likelihood that *one* of them is going to happen is *not that low*.

In this case, the unlikely events are that in a given year the birth control will *fail*. So the likelihood that it fails during any one of the years is *not that low*.

Mathematically, we can say precisely how likely, writing \(\overline{P}\) for the complement \(1-P\):

\[\begin{align*}
P(A \text{ or } B) &= \overline{\overline{P(A)}\cdot\overline{P(B)}}\\
&= 1-(1-P(A))(1-P(B))
\end{align*}\]

In actually performing the calculations, another form of the equation is useful: \(P(A \text{ and } B) = P(A)\cdot P(B)\).
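Both forms of the equation can be checked numerically. A minimal sketch (helper names are mine, for illustration):

```python
def p_or(p_a, p_b):
    """Chance that at least one of two independent events occurs,
    via the complement: neither occurs with probability (1-p_a)*(1-p_b)."""
    return 1 - (1 - p_a) * (1 - p_b)

def p_and(p_a, p_b):
    """Chance that both of two independent events occur."""
    return p_a * p_b

# Two small risks (2% and 4%) already combine to ~6%:
print(round(p_or(0.02, 0.04), 4))   # 0.0592
```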

To calculate precisely the likelihood that I won’t get pregnant in any of the 10 years I’m having protected sex, we take the probability for one year and raise it to the tenth power: \(P(10\text{ years}) = P(1\text{ year})^{10}\).
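The ten-year table below can be reproduced in a few lines, starting from the one-year figures and assuming each year is independent:

```python
# One-year success rates (perfect use, typical use), from the table above.
one_year = {
    "Implant":     (0.9995, 0.9995),
    "Copper IUD":  (0.994,  0.992),
    "The Pill":    (0.997,  0.91),
    "Condom":      (0.98,   0.82),
    "Pulling Out": (0.96,   0.78),
}

# Treating each year as independent, raise the one-year rate to the 10th power.
for method, (perfect, typical) in one_year.items():
    print(f"{method}: {perfect ** 10:.1%} perfect, {typical ** 10:.1%} typical")
```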

Redoing the table for a ten year timeline, we get:

Type of BC | Perfect Use | Typical Use |
---|---|---|
Implant | \(99.5\%\) | \(99.5\%\) |
Copper IUD | \(94.2\%\) | \(92.3\%\) |
The Pill | \(97\%\) | \(39\%\) |
Condom | \(82\%\) | \(14\%\) |
Pulling Out | \(67\%\) | \(8.3\%\) |

Condoms and pulling out are clearly terrible. The pill with perfect use, the copper IUD, and the implant seem pretty good, though.

But let’s refer back to our first principle: *percentages are often not as extreme as you think.*

Let’s take the best there is, the implant. Even on the implant, one out of every \(200\) women will get pregnant. You probably personally know more than \(200\) women, and if they all used the implant, at least one of them would probably get pregnant. From there it just gets worse: the pill at perfect use still sees three out of a hundred women get pregnant; condoms, \(18\) out of \(100\).

So is that it? Does birth control just suck? Fortunately, no. There’s good news, which we’ll get from our fifth principle:

Before, we said if there are several (unrelated) rare events the likelihood that *one* of them will occur is not that rare.

The dual of that is that if there are several (unrelated) rare events, the likelihood of *all* of them occurring is *extremely* rare.

The mathematical equation is the same as before, just in the second form: \(P(A \text{ and } B) = P(A)\cdot P(B)\)

We can apply this general principle to our case of birth control failure. It is rare for the pill or the implant to fail. If we use *both*, then they must *both* fail, and if the failures are independent, we multiply the chances of failure together.

\[P(\text{pill and implant both fail}) = P(\text{pill fails}) \cdot P(\text{implant fails})\]

I made a mistake here that is actually very common and very important for understanding statistics: when I considered implants and the pill used together, I assumed that these were *independent* methods of birth control. Since both act on the hormonal system, they are *not independent*! Not only that, but I’m pretty sure it wouldn’t be safe or effective to use them both at the same time.

The table below reflects only the combinations that to the best of my knowledge can be used together and act on independent methods of pregnancy prevention.

With this in mind, we can make a chart of *combined* birth control methods.
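Both charts below follow from one small calculation: multiply the failure rates of the stacked methods, then (for the ten-year chart) raise the combined success rate to the tenth power. A sketch, with the helper name my own:

```python
def combined_success(rates, years=1):
    """Success rate when stacking independent methods: a pregnancy
    requires every method to fail, so the failure rates multiply."""
    failure = 1.0
    for r in rates:
        failure *= 1.0 - r          # chance this method also fails
    return (1.0 - failure) ** years

# The pill plus condoms, typical use (91% and 82%):
print(f"{combined_success([0.91, 0.82]):.1%}")            # one year: 98.4%
print(f"{combined_success([0.91, 0.82], years=10):.1%}")  # ten years: ~85%
```

The ten-year figure comes out a touch below the table’s \(85.1\%\) because the table rounds the one-year rate before raising it to the tenth power.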

Over one year:

Types of Birth Control | Perfect Use | Typical Use |
---|---|---|
Implant and Condoms | \(99.999\%\) | \(99.991\%\) |
Implant and Pulling Out | \(99.998\%\) | \(99.99\%\) |
Copper IUD and Condoms | \(99.988\%\) | \(99.86\%\) |
Copper IUD and Pulling Out | \(99.976\%\) | \(99.82\%\) |
The Pill and Condoms | \(99.994\%\) | \(98.4\%\) |
The Pill and Pulling Out | \(99.988\%\) | \(98\%\) |
Condoms and Pulling Out | \(99.92\%\) | \(96\%\) |

And over ten years:

Types of Birth Control | Perfect Use | Typical Use |
---|---|---|
Implant and Condoms | \(99.99\%\) | \(99.91\%\) |
Implant and Pulling Out | \(99.98\%\) | \(99.9\%\) |
Copper IUD and Condoms | \(99.88\%\) | \(98.6\%\) |
Copper IUD and Pulling Out | \(99.76\%\) | \(98.2\%\) |
The Pill and Condoms | \(99.94\%\) | \(85.1\%\) |
The Pill and Pulling Out | \(99.88\%\) | \(81.7\%\) |
Condoms and Pulling Out | \(99.2\%\) | \(66\%\) |

Some of these are still pretty bad. In particular, you really really want to make sure you’re using the birth control right, so you land closer to the perfect use column. But some of them are quite good! You have to evaluate for yourself if these are risks worth taking for you, and maybe you want to stack three birth control methods, which will make the risk even tinier. I’ll leave the specific calculations of that to you.

Let’s stop, though, and notice the comparison we’re actually doing. We said that the implant on its own, at \(99.5\%\) effective over ten years, is good but not great; that condoms plus pulling out, even at perfect use, is \(99.2\%\), which isn’t fantastic; and that, for example, the pill plus condoms, at \(99.94\%\), is pretty reliable.

All of that analysis is of the numbers after the decimal point. If we’d rounded, as many people do, all of those statistics would be \(99\%\), and we’d lose a lot of the important information.

So next time you come across a statistic close to \(100\%\), or \(0\%\), don’t assume you know what it means. Don’t assume that the person giving you the statistic knows what it means. Your, and their, intuition won’t distinguish the differences that are likely relevant. Stop, and think about what the statistic means. The five principles we’ve discussed can help guide that thought:

- Percentages are often not as extreme as you think. Don’t trust your first intuition.
- Write down your fact in multiple ways, and see if your intuition differs.
  - See what \(100\%-x\) means.
  - Write the numerical quantity, as well as the percentage.
- Know what question you really want the answer to. What is the statistic relevant to *you*? In particular, how long something lasts for is often relevant.
- If you’re thinking about combining statistics, remember to first convert your statistic into a *small* statistic, and then apply whichever principle is appropriate:
  - If *one* of several unlikely events must occur, it’s not that unlikely.
  - If *all* of several unlikely events must occur, it’s *extremely* unlikely.
