Type 1 and Type 2 Errors of Doctrine

Dr. Richard Beck recently had a couple of posts on his blog regarding “The Theology of Type 1 and Type 2 Errors”, specifically dealing with the ideas of “saved” and “lost”. His second post expanded on the (I think) compelling idea that the disagreements we have as Christians are fundamentally disagreements about what God is like. Both posts are worth reading, but they got me thinking about Type 1 and Type 2 errors in terms of things like doctrine.

For those of you who aren’t statisticians or scientists dealing with automated classification systems, “Type 1 error” and “Type 2 error” are the specific terms we use for the kinds of mistakes we can make when classifying or predicting events. Because I deal with classification more than with statistics per se, I tend to think of Type 1 and Type 2 errors in the slightly different but related vocabulary of “false positives” and “false negatives”. Simply put, a false positive occurs when we declare something to be true when it is in fact false, or say something happened when in fact it did not. A false negative occurs when we incorrectly say something is false when it was really true, or that nothing happened when in fact something did.
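
To make the vocabulary concrete, here’s a minimal sketch in Python (my own toy example, not from Beck’s posts) that tallies the two kinds of errors from a handful of made-up (truth, call) pairs:

```python
# Toy tally of the four outcomes of a binary decision. Each pair is
# (truth, call): what actually happened vs. what we declared.
pairs = [
    (True, True),    # true positive: it happened, and we said so
    (False, True),   # false positive (Type 1): we said it happened; it didn't
    (True, False),   # false negative (Type 2): it happened; we said it didn't
    (False, False),  # true negative: nothing happened, and we said so
]

type1 = sum(1 for truth, call in pairs if call and not truth)
type2 = sum(1 for truth, call in pairs if truth and not call)
print(f"Type 1 (false positives): {type1}")  # 1
print(f"Type 2 (false negatives): {type2}")  # 1
```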

One important aspect of Type 1 and Type 2 errors is that they are inherently related – we can set an arbitrary Type 1 error rate (even down to zero), but as we decrease our chance of making a Type 1 error, we increase our chance of making a Type 2 error. One of the easiest (and most classic) examples to illustrate this is the legal system. Consider a capital murder trial. The jury commits a Type 1 error if they convict the defendant when he or she was actually innocent. The verdict is a false positive, because we’re saying the defendant committed the crime when in fact they did not. We have falsely sentenced an innocent person, possibly to die. On the other hand, the jury commits a Type 2 error if they acquit the defendant when he or she was in fact guilty. This verdict is a false negative – we said the defendant didn’t commit the crime, though in fact they did. Notice how we can change, and indeed bias, the frequency of our errors. We can reduce our Type 1 error rate to zero by never convicting anyone, but then we guarantee that every guilty person goes free. Likewise, we can make sure no murderer ever escapes justice by convicting everyone, regardless of actual guilt. Short of these two extremes, however, we can never be certain that we won’t commit an error – indeed, we should expect that we will; the best we can do is bias ourselves toward making certain *types* of errors.
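
You can watch this tradeoff happen numerically. Here’s a quick sketch (the “evidence score” distributions are entirely invented for illustration) that sweeps a conviction cutoff and prints both error rates:

```python
import random

random.seed(42)

# Invented "strength of evidence" scores: guilty defendants tend to score
# higher, but the two populations overlap, so no cutoff is perfect.
innocent = [random.gauss(0.0, 1.0) for _ in range(10_000)]
guilty = [random.gauss(2.0, 1.0) for _ in range(10_000)]

# Convict whenever the evidence score exceeds the cutoff. Raising the
# cutoff drives the Type 1 rate (convicting the innocent) toward zero,
# but the Type 2 rate (acquitting the guilty) climbs in lockstep.
for cutoff in [0.0, 1.0, 2.0, 3.0, 4.0]:
    type1 = sum(s > cutoff for s in innocent) / len(innocent)
    type2 = sum(s <= cutoff for s in guilty) / len(guilty)
    print(f"cutoff {cutoff:.1f}: Type 1 rate = {type1:.3f}, Type 2 rate = {type2:.3f}")
```

As the cutoff rises, the first rate falls toward zero while the second climbs toward one – there is no setting that drives both to zero at once.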

Critically, both Type 1 and Type 2 errors are errors. This sounds obvious, but it isn’t always appreciated. A practical example from my field is the idea of “security” and “reliability” in circuit breakers. Reliability means the circuit breaker *must* open when there is a problem; failure to do so could mean the destruction of property and even death. In other words, a false negative is unacceptable – if there is a real problem, we need to act on it. On the other hand, we don’t want the device to operate when there isn’t a problem either. If your circuit breaker tripped every time you turned on a light switch, it would quickly become annoying – and eventually you would uninstall the technology that’s intended to protect you because, in effect, it kept crying “wolf”. This is what “security” means: a device that operates when it isn’t supposed to gives us headaches. In this example, both kinds of errors are bad. They are not, however, equally bad. Killing someone is much worse than annoying someone, so circuit breakers tend to be biased toward reliability at the expense of security. There are things we can do to reduce the rates of *both* types of errors, but we cannot eliminate both of them completely.
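
One way to see where that bias comes from is to weight the two errors by their cost. The sketch below (all numbers invented; real protection engineering is far more involved) picks the cheapest trip threshold, and the asymmetric costs pull it toward tripping early:

```python
import random

random.seed(7)

# Invented numbers: current draw (amps) under normal load vs. a real fault.
normal = [random.gauss(100, 10) for _ in range(20_000)]
fault = [random.gauss(300, 50) for _ in range(20_000)]

def best_threshold(cost_nuisance_trip, cost_missed_fault):
    """Trip whenever current exceeds the threshold; return the cheapest one."""
    def expected_cost(t):
        fp = sum(a > t for a in normal) / len(normal)  # nuisance trips
        fn = sum(a <= t for a in fault) / len(fault)   # missed faults
        return cost_nuisance_trip * fp + cost_missed_fault * fn
    return min(range(100, 301), key=expected_cost)

print("equal costs:", best_threshold(1, 1))
# When a missed fault is 500x worse than a nuisance trip, the cheapest
# threshold drops: the breaker is biased toward reliability.
print("missed fault 500x worse:", best_threshold(1, 500))
```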

Perhaps the trickiest part of this whole deal is that for any given instance, it’s impossible to *know* whether you’ve made an error simply by looking at the data. Statistically, the concepts of Type 1 and Type 2 errors are related to the probability that the results you saw could have been generated “by chance”. In other words, our conclusion about the data is supported – the data does appear to indicate that what we’re saying happened really happened. The problem is that there is a small (but finite) probability the data could have looked that way simply by chance. You can interpret the data “correctly” (by applying whatever criteria are appropriate), reach an incorrect conclusion, and furthermore be unaware that your conclusion is incorrect.
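
To put a number on “by chance” – here’s a small worked example of my own (not from Beck’s posts). Flip a fair coin ten times:

```python
from math import comb

# Suppose we see 9 heads in 10 flips, and our (perfectly reasonable)
# decision rule says "call the coin biased at 9 or more heads". A fair
# coin still produces a result that extreme about 1% of the time:
n, k = 10, 9
p_by_chance = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(9+ heads from a fair coin) = {p_by_chance:.4f}")  # 0.0107
```

About one run in a hundred, we would follow our rule correctly, declare the coin biased, be wrong, and have no way of knowing it from the data in front of us.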

But this was supposed to be a post about doctrine, right?

In the absence of certainty (i.e., actually being God), we have to start with the premise that there is at least a possibility we will be wrong about some of our doctrinal decisions. In fact, it’s more than a possibility – it is a near certainty that everybody is wrong about something. Obviously we aren’t aware of the doctrinal errors we make – if we were, we would correct them. Our reading of the text (the data) may be perfectly consistent, “correct”, and still be wrong. In other words, we could select a good, appropriate hermeneutic, apply it consistently and honestly to the full body of Scripture, and still come to a conclusion that is in fact not the way God will ultimately decide things. Furthermore, because we chose an appropriate method of interpretation and applied it correctly, there would be no way to externally verify that we had reached an incorrect (from God’s perspective) conclusion.

This seems problematic. If we can never be certain about doctrinal correctness (i.e., if we accept that we can read a text “correctly” and still commit a Type 1 or Type 2 error), does that bring everything to a standstill? No, I would suggest. Remember that the idea of Type 1 and Type 2 errors comes out of statistics, and the field of statistics didn’t collapse because we can’t be certain about things – in fact, it thrives because of it. In such a system, what criteria could we apply to produce an acceptable body of doctrine and belief? I think that, in general, we can look to scientific inquiry as a guide for improving our ability to avoid both Type 1 and Type 2 doctrinal errors.

First, in science, we require experiments to be repeatable. One scientist’s study doesn’t confirm something to be true. In 1989, scientists with credible reputations claimed to have discovered cold fusion – a claim that, if true, promised a safe and clean energy source that would essentially solve the world’s energy problems. The initial results were seemingly confirmed by major research labs at Texas A&M and Georgia Tech. But as a larger group of scientists attempted to replicate the results, there were problems – nobody could get it to work. The researchers who had initially confirmed the results discovered problems in their experimental setups that had caused erroneous readings. After additional investigation, the original claim was rejected. Doctrinally, I believe we can apply a similar principle of repeatability. Can other people, looking at the same data I am and reading with a similar method, at least verify that my conclusion is sound? To be clear, this is not a call for the democratization of doctrine, nor a suggestion to adopt the most widely held belief as true. Many Icelanders reportedly believe in elves and hidden folk, but that doesn’t make them real. Our doctrinal reading must conform to the text. But if I am the only person who reads the text this way, and almost nobody else can even see where I’m coming from, that calls into question how repeatable my conclusion really is.

Second, scientists generally follow a particular method in reaching their conclusions. I can’t change the method simply because it gives results more in line with what I want. If I can convince people there is something flawed about the method, then I might be able to propose a change – and in fact the method of scientific inquiry has changed over time. But changes to the method are made by the community as a whole, over time – not by a few rogue individuals who want different results. In this sense, the tradition of science is important. The scientific community decides which methods are “good” and which are “bad”, and these decisions are not arbitrary – there are often very good reasons why a particular method is followed. Likewise, doctrinally, adherence to a “good” hermeneutic is of paramount importance. If particular doctrines do not conform to reasonable, standard hermeneutics, as informed by the greater tradition of Christianity, we should be appropriately skeptical of them. This is not to say our hermeneutic must be static – indeed, it seems obvious that our understanding of God should grow and change over time. It is to say, however, that changes to our method need to be informed and accepted by the broader community before truly becoming orthodox.

Finally, even though this is far from egalitarian, experts should be trusted more than laypeople. We tend to trust Stephen Hawking more than Billy Joe Jones when it comes to theoretical physics. That’s not to say Hawking always gets it right, or that Billy Joe might not have some interesting things to say on the subject. It is to say that if our lives were on the line and we could only choose one person to answer a question about neutrinos, we’d be placing a call to Cambridge instead of Mobile. Modern Evangelical Christianity tends to push in the other direction, with a broad anti-academic bias in which experts are largely distrusted. Academics don’t always get it right, and laypeople don’t always get it wrong, but experts generally possess tools and training that allow them to make better sense of the data than someone without such training. In general, we are less likely to commit Type 1 and Type 2 errors when we assign greater weight to the opinions of people who have spent years of their lives not only learning about Christian theology, but also living lives shaped by a serious commitment to spiritual formation. We shouldn’t immediately dismiss the viewpoints of people who don’t meet these criteria, but we should be inherently suspicious of new viewpoints that arise (or old viewpoints that are perpetuated) primarily among people with little training and little obvious commitment to spiritual formation and discipline.
