Gödel’s Incompleteness Theorems

This entry is about Gödel’s incompleteness theorems. I wrote this entry mostly for myself, to collect my own thoughts, though if it managed to entertain or teach a passerby while simultaneously being correct then that would bring me immeasurable joy.

Before Kurt Gödel, most mathematicians seemed pretty convinced that mathematics was complete. This basically means that if we have a true statement, like 2+2=4, then we can prove it to be true. The mathematician David Hilbert thought that this claim (that every true statement can be proven) was itself a true statement, and he asked for a proof of it: a proof that any true statement could, in principle, be proven from the axioms. Since it’s very complicated to consider every possible statement expressible in our mathematical language, Hilbert restricted his ask to basic arithmetic.

Originally, I thought it would be very hard to write an article about the Incompleteness Theorems, because the way I was taught them involves at least a week’s worth of lectures and several whiteboards’ worth of frantically scribbled text. The original argument I was exposed to involved an elaborate construction of self-referential statements through the use of prime numbers. However, after reading “Quantum Computing Since Democritus” by Scott Aaronson, I found I really liked the simplicity of one of his arguments. The arguments here are paraphrased from that book; all credit (and glory) goes to Aaronson.

Gödel’s First Incompleteness Theorem states that if we have a consistent, computable set of axioms, then there is a true statement about the integers that it cannot prove. Of course, now we must define what is meant by both computable and consistent. Consistent is the easier one: it means that we cannot derive both a statement and its negation. So we cannot prove from our axioms both that 2+2=4 and that 2+2 does not equal 4. Another way of phrasing this is that we cannot derive a contradiction from our axioms. Computable means that there is a finite algorithm that generates our list of axioms. This means that either the list of axioms is finite or, if it is infinite, there is at least a “good way to get them”. An example of an infinite computable set is the even numbers: it’s not hard to imagine writing a program that prints every even number (even if it takes forever, that’s still computable!). I realize this is a bit different, since we shifted from talking about axioms to talking about numbers instead, but please bear with me.
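To make “computable” a bit more concrete, here’s a minimal Python sketch of an infinite but computable set, the even numbers (the function name is mine, purely for illustration):

```python
def even_numbers():
    """Enumerate 0, 2, 4, ... forever.

    The set is infinite, but it is still computable in the sense above:
    a single finite program eventually produces every member.
    """
    n = 0
    while True:
        yield n
        n += 2

# Print the first five members of this infinite, computable set.
gen = even_numbers()
for _ in range(5):
    print(next(gen))  # 0, 2, 4, 6, 8
```

A computable set of axioms works the same way in spirit: some finite program that churns out axioms one after another, even if it never stops.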

So now that we have this statement, it’s time to prove it. Luckily for us, we already have a notion of what a computer should be, and therefore we will experience much less duress than would otherwise be necessary. We will be using Turing’s results to retcon Gödel’s original proof. The notion of a computer we are using here is the Turing machine, and it’s worth a quick Google if you do not know what it is. If you don’t want to do that, you can safely continue without consequence, as Turing machines and “programs” turn out to be quite interchangeable.

It might be known to many of you, if only in name, that the halting problem cannot be solved by any Turing machine. For the purposes of this endeavor, we are still going to define and prove this statement.

The halting problem asks whether we can construct a Turing machine (or program, if that suits your fancy) that, when given another Turing machine (program), can decide whether that program halts or not on a given input. For example, suppose we had a program (it’s too hard to write out Turing machines on WordPress) that says “if the input is 1, loop forever, and if the input is not 1, output 1”. It’s pretty easy to analyze this program and see when it halts and when it doesn’t, but can we construct a program to do this analysis for us? In this case, most certainly. Now the new question is: can we construct a program that will do this for every possible program and input, or, in other words, can we solve this problem in the general case?
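If it helps, here’s a minimal Python sketch of that toy program along with a hand-written analyzer that works for this one program only (both names are mine, just for illustration):

```python
def toy_program(x):
    """The example from the text: loop forever on input 1, otherwise output 1."""
    while x == 1:
        pass  # never terminates when x == 1
    return 1

def toy_halts_on(x):
    """A halting analyzer for toy_program alone: easy to write by inspection."""
    return x != 1

print(toy_halts_on(1))  # False: toy_program(1) loops forever
print(toy_halts_on(2))  # True:  toy_program(2) halts and outputs 1
```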

The answer is no. Suppose we had such a program to do this for us; let us call it program A. Now, let us create program B from program A. B will run forever if an input program, P, halts when given its own code as input, and it will halt if P runs forever when given its own code. Now feed B into B. There are two possibilities: either B halts on its own code, in which case (by construction) it runs forever, or B runs forever on its own code, in which case it halts. Either way, assuming program A exists leads to a contradiction, meaning no such program can exist in the first place.
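Here’s a hedged Python sketch of that construction. Program A is purely hypothetical (it can’t actually exist, which is the whole point), and the names are mine:

```python
def A(program, input_value):
    """Hypothetical: supposedly returns True iff program halts on input_value.
    No such program exists; this stub only makes B's definition concrete."""
    raise NotImplementedError("no general halting decider exists")

def B(program):
    """Built from A: do the opposite of whatever A predicts about
    the program running on its own code."""
    if A(program, program):
        while True:  # A says the program halts on itself, so B loops forever
            pass
    else:
        return       # A says the program loops on itself, so B halts

# Feeding B to itself, B(B), yields the contradiction described above:
# if B(B) halts, A says so and B(B) loops forever; if B(B) loops, B(B) halts.
```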

Now we have to tie this back to what we were doing. The key idea is that the concepts of proof, program, and computation are pretty closely tied together. The statement of the First Incompleteness Theorem says that if we have a consistent, computable set of axioms, then there is a true statement about the integers that cannot be proven. Well, suppose that’s false, and we have a set of consistent, computable axioms that can prove or disprove every statement about the integers. Then a program could search through every possible proof, looking for either a proof that a given program halts or a proof that it runs forever, because, as Aaronson adds, the statement that a computer program halts or not is ultimately a statement about the integers. This may seem a little wonky, but think about what computer programs actually manipulate and output, and couple that with the fact that when we say “statements about the integers” we are also including statements with quantifiers, multiple operations, and so on. For example, Goldbach’s conjecture, that every even number above 2 is the sum of two primes, is simply a statement about the integers. Perhaps a more helpful example: the statement that there are finitely many integers greater than 0 and less than 10 is also a statement about the integers. But a program that decides halting by proof search is exactly the program A we just showed cannot exist, so some statement about the integers can be neither proved nor disproved by our axioms; since either it or its negation is true, some true statement goes unproven.
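Here’s a rough Python sketch of that proof-searching strategy. The helpers enumerate_proofs and proves are hypothetical placeholders I made up for “list every proof our axioms can produce” and “check whether a proof establishes a statement”; both are computable in principle when the axioms are computable, but nothing here is meant as a real implementation:

```python
def enumerate_proofs():
    """Hypothetical: yield every finite proof in our formal system, one by one.
    A computable set of axioms makes this possible in principle."""
    raise NotImplementedError("placeholder for illustration")

def proves(proof, statement):
    """Hypothetical: check whether a proof establishes a statement.
    Proof checking is mechanical, so this step really is computable."""
    raise NotImplementedError("placeholder for illustration")

def decide_halting_by_proof_search(program, input_value):
    """Search every proof for one settling whether program halts on input_value."""
    halts = f"program {program!r} halts on input {input_value!r}"
    loops = f"program {program!r} runs forever on input {input_value!r}"
    for proof in enumerate_proofs():
        if proves(proof, halts):
            return True
        if proves(proof, loops):
            return False
    # If the axioms decided every such statement, one of the two proofs would
    # eventually appear, and this search would solve the halting problem.
```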

So, that’s the First Incompleteness Theorem. As the name “first” might imply, there is indeed a sequel. Let A be a program that, when given an input P, will try to decide whether P halts through the aforementioned strategy of searching through every possible proof or disproof. Once again, we will create a program B that halts if program P runs forever on input P (the same lettering is very intentional) and runs forever if program P halts on input P. Now we give program B the input B. Note that it must run forever without ever discovering a proof that it either halts or runs forever, because both of these would lead to the aforementioned contradictions.

Therefore, it seems natural to conclude that B, on input B, runs forever. In fact, it seems we have even proved it. So, what happened? Is this paradox resolved? And if so, why wasn’t this proof found by the search program?

The answer lies with our hidden assumption: that we were dealing with a consistent set of axioms the whole time. After all, if the axioms were inconsistent, the proof search could perfectly well turn up a “proof” that B halts even while B is, in reality, running forever.

So our argument really showed something conditional: if the axioms are consistent, then B runs forever on input B. If our axioms could prove that they themselves were consistent, then they could also prove that B runs forever on input B, which leads to our old contradiction, since B’s own proof search would find that proof. The conclusion, called Gödel’s Second Incompleteness Theorem, states that if the set of axioms is consistent, then it cannot prove its own consistency.

 

Senthil Rajasekaran
