The official blog of University of Missouri Skeptics, Atheists, Secular Humanists, & Agnostics

When Good Proofs Go Bad

I am at a conference on formal methods in computer security. In formal methods, researchers use math talk to prove things about systems. For example, cryptography is really important for computer security, and it would be nice if we could formally prove that a cryptographic system is secure. But the strangest thing keeps happening. Some intrepid formal methods researcher will publish a proof that a cryptosystem is secure, and then in a year or so some other intrepid researcher will publish a successful attack on the system. So, what is going on here? How could the proof turn out to be wrong? That’s been a big issue at the conference.

At a security symposium out in Oakland this year, some researchers presented a proof at 9:30 AM verifying that the TLS protocol is secure. At 1:30 PM, at the same symposium, someone presented a successful attack on it. This is clearly a problem. Is it unique to formal methods people in computer security, or is something else going on?

The consensus we’re moving towards is that the trouble comes from two parts of the reasoning process. The big picture reasoning process looks like this:

1. Create a formal model of the real world system you want to prove stuff about.

2. Use a formal language to prove stuff about the model.

3. Take the stuff you proved about the model, and say that it is true about the real world system.

Clearly, you can only make a mistake in step 2 by making some logically incorrect move. This does happen, but it is not an interesting way to mess up a proof, and it isn’t the type of error we care about, because it doesn’t cause the type of trouble we’re interested in. We are interested in cases where we have a rock-solid, legitimate proof with no logical errors, and then later on it turns out not to be true.

So, interesting errors can happen in steps 1 and 3. And they do. All the time. Those are the non-logical aspects of the reasoning process. By “non-logical”, I mean there is no step-by-step deductive method for translating the real world system into the formal system, and then back again.

At step 1, a human being has to make decisions about which parts of the real world system to focus on. You simply can’t accurately model the entire system. That would defeat the purpose of having a model. You might as well just use the real world system. So, every model is missing some details. Sometimes, humans leave out details that turn out to be important. Or they might abstract a little too much, and make vague a concept that needs to be more exact.

In step 3, when we transfer the proof results from the formal model to the real world system, we forget about those details we ignored earlier. So, we take our proof and say, “hey look at this proof. It’s basically the highest level of confidence you can achieve. So, yeah, *nerd snicker*, I guess you could say your system is secure…” But those details we ignored will haunt us.
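To make this concrete, here is a toy sketch of my own invention (not any real cryptosystem or published proof): a formal model that treats a password check as emitting a single yes/no bit, and a “real” implementation whose running time leaks extra information the model never mentions. The model-level “proof” that blind guessing is the best attack is perfectly valid — about the model. The real system falls to a character-by-character timing attack.

```python
# Toy illustration of the model/reality gap. All names and details here
# are hypothetical, invented for this sketch.

def check_model(secret: str, guess: str) -> bool:
    """What the formal model sees: the check emits one bit and nothing
    else, so at the model level an attacker can only guess blindly."""
    return secret == guess

def check_real(secret: str, guess: str) -> tuple[bool, int]:
    """What a naive implementation does: compare character by character
    and bail at the first mismatch. The 'work' counter stands in for
    running time -- the detail the model left out (step 1's omission)."""
    work = 0
    if len(guess) != len(secret):
        return False, work
    for a, b in zip(secret, guess):
        work += 1
        if a != b:
            return False, work
    return True, work

def timing_attack(secret_len: int, oracle) -> str:
    """Recover the secret one character at a time by watching which
    candidate makes the comparison run longest (step 3's haunting)."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    recovered = ""
    for _ in range(secret_len):
        def score(c: str) -> tuple[int, bool]:
            pad = "a" * (secret_len - len(recovered) - 1)
            ok, work = oracle(recovered + c + pad)
            return work, ok  # longer run (and then success) wins
        recovered += max(alphabet, key=score)
    return recovered

secret = "swordfish"
oracle = lambda guess: check_real(secret, guess)
print(timing_attack(len(secret), oracle))  # prints "swordfish"
```

The proof about `check_model` is not wrong; it just isn’t a proof about `check_real`. (This is, incidentally, why real crypto libraries provide constant-time comparison functions.)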

Even with a legitimate proof, we cannot be sure that the proof applies to the real world system we developed it for. Yes, it is still a proof, but its truth is only guaranteed about the FORMAL MODEL, not about the real world.

Is this problem unique to computer science? I don’t think so. I think it is a troll that lurks beneath every non-purely-mathematical discipline’s mathematical bridges. Engineering. Physics. Economics. Philosophy. In each of these disciplines we see the transfer of judgments from the deductively safe formal realm to the messy realm of nature. So while the equations themselves may be sound, their application to reality is always subject to error.

So if someone presents you with a proof and tries to force you to conclude something about concrete reality, it is okay to remain skeptical, and probe their methods in steps 1 and 3 of the reasoning process above.

You can always ask:

1. Did you do a good job formalizing the real world system?

2. Are you applying the formal stuff to the real stuff correctly?


These are thorny questions, and the proof advocate might double down and try to dismiss them. Don’t let this happen. Really probe for the processes of formalization and application and look for potential mistakes.


About Seth Kurtenbach

Philosophy grad student who wandered into a computer science PhD program with a backpack full of modal logic and decision theory.

2 comments on “When Good Proofs Go Bad”

  1. rocketkirchner
    July 25, 2013

    Seth, this is a real good article and timely for me, because I have been debating a friend of mine who thinks that facts are reality, when what we know as a fact today might change tomorrow. When I apply any kind of philosophy to him, like the concept of infinite regress, or Pi (approximation, infinite universe), or Quantum, or incompleteness theorems, or uncertainty principles in measuring, or most of all that because something is factual does not mean that it qualifies as objective absolute reality (which is a subject that is unprovable) … he, even with all his intelligence, won’t budge or concede a thing. GRRR!

    • Seth Kurtenbach
      July 26, 2013

      Thanks, Rocket!



This entry was posted on July 25, 2013 by Seth Kurtenbach in Science and Skepticism.