## Basics of Theoretical Computer Science

I have a friend who is knowledgeable and skilled in many areas, including theoretical computer science. During our conversations he uses terms like “halting problem”, “Turing machine”, and “Gödel’s incompleteness theorem”. I felt the need to learn the basics of theoretical computer science, but quickly realized that this is a vast field, and beginners like me don’t know where to start. After looking around for tutorials, I decided that the best place to start is the article titled “Computability and Complexity” [1].

Computability and Complexity

“A mathematical problem is computable if it can be solved in principle by a computing device. Some common synonyms for “computable” are “solvable”, “decidable”, and “recursive”. Hilbert believed that all mathematical problems were solvable, but in the 1930’s Gödel, Turing, and Church showed that this is not the case. There is an extensive study and classification of which mathematical problems are computable and which are not. In addition, there is an extensive classification of computable problems into computational complexity classes according to how much computation—as a function of the size of the problem instance—is needed to answer that instance. It is striking how clearly, elegantly, and precisely these classifications have been drawn.” – [1]

In addition to reference [1], you can check out Scott Aaronson’s guide to computational complexity titled “Why Philosophers Should Care About Computational Complexity”.

Church-Turing thesis

For any computable problem, there is a Turing machine which computes it. Any problem not computable by a Turing machine is not computable by any finite means whatsoever.

“In the 1930’s, well before there were computers, various mathematicians from around the world invented precise, independent definitions of what it means to be computable. Alonzo Church defined the Lambda calculus, Kurt Gödel defined Recursive functions, Stephen Kleene defined Formal systems, Markov defined what became known as Markov algorithms, Emil Post and Alan Turing defined abstract machines now known as Post machines and Turing machines.” [1]

“Surprisingly, all of these models are exactly equivalent: anything computable in the lambda calculus is computable by a Turing machine and similarly for any other pairs of the above computational systems. After this was proved, Church expressed the belief that the intuitive notion of “computable in principle” is identical to the above precise notions. This belief, now called the “Church-Turing Thesis”, is uniformly accepted by mathematicians.” – [1]

Is there a proof of the Church-Turing thesis? Here’s an authoritative answer:

“There has never been a proof, but the evidence for its validity comes from the fact that every realistic model of computation, yet discovered, has been shown to be equivalent. If there were a device which could answer questions beyond those that a Turing machine can answer, then it would be called an oracle.” – Wolfram MathWorld

Turing machine

In his 1948 essay, “Intelligent Machinery”, Alan Turing described his machine as follows:

“…an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.”

In the modern definition of the Turing machine described below, the tape (memory) does not move; instead, a moving head (reader) shifts at most one cell per step. A Turing machine consists of:

• a finite set $Q$ of possible states
• an infinite tape (memory), consisting of consecutive cells $\sigma_1$, $\sigma_2$, $\sigma_3$, $\sigma_4$, …, each holding a symbol from some finite alphabet $\Sigma$. It is convenient to fix $\Sigma = \{ 0,1,b\}$, consisting of the binary alphabet plus the blank cell symbol.
• a read/write tape head, scanning tape cell $\sigma_h$
• a transition function $\delta$. Given a state $q$ and symbol $\sigma_h$ the transition function $\delta$ tells us the new state the machine should enter, the new symbol that should replace the scanned one, and the new head position, $h^\prime = h + d$, where $d \in$ {−1, 0, 1} is the displacement.
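The four components above can be sketched as a minimal simulator. This is my own illustration, not code from [1]; the function and state names (`run_tm`, `"q0"`, `"halt"`) and the dictionary encoding of $\delta$ are my inventions, and "halt" is treated as a distinguished stopping state.

```python
# A minimal Turing-machine sketch over the alphabet {0, 1, b}.
# delta maps (state, symbol) -> (new_state, new_symbol, displacement in {-1, 0, 1}).
def run_tm(delta, tape, state="q0", head=0, max_steps=10_000):
    tape = dict(enumerate(tape))              # sparse tape; absent cells are blank "b"
    for _ in range(max_steps):
        if state == "halt":                   # distinguished halting state
            break
        key = (state, tape.get(head, "b"))
        if key not in delta:                  # no rule: treat as a halting configuration
            break
        state, tape[head], d = delta[key]     # new state, overwrite scanned cell, get displacement
        head += d                             # move the head by d
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit until the first blank, then halt.
flip = {
    ("q0", "0"): ("q0", "1", 1),
    ("q0", "1"): ("q0", "0", 1),
    ("q0", "b"): ("halt", "b", 0),
}
```

Running `run_tm(flip, "0110")` walks the head rightward, flipping each bit, and stops on the first blank.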

Even though concrete imagery such as “tape” and “tape head” is used, the Turing machine is, of course, an abstract machine. There is no mention of how “states” are implemented, or of the physical mechanism behind the transition function. I suppose we can visualize the transition function as a look-up table. There is also no mention of how slow or fast the operations are.

“The linear nature of its memory tape, as opposed to random access memory, is a limitation on computation speed but not power: a Turing machine can find any memory location, i.e., tape cell, but this may be time consuming because it has to move its head step by step along its tape.” – [1]

Each Turing machine can be uniquely described by its transition table (also known as an action table or instruction set). For each state $q$ and symbol $\sigma$, the result of the transition function $\delta (q, \sigma)$ is the new state, the new symbol, and the head displacement. These transition tables can be written as finite strings of symbols, giving the complete set of instructions of each Turing machine. These strings can be listed in lexicographic order as $M_1$, $M_2$, $M_3$, …, where $M_i$ is the complete set of instructions for Turing machine number $i$. In other words, $M_i$ is the $i^{th}$ program.
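The flattening of a transition table into a string can be sketched concretely. The string format below is my own invention, one of many possible encodings; the point is only that each table becomes a finite string, and strings can be sorted lexicographically.

```python
# One possible (hypothetical) encoding of a transition table as a single string.
def encode(delta):
    # Each rule becomes "state,symbol->new_state,new_symbol,displacement".
    rows = [f"{q},{s}->{q2},{s2},{d}" for (q, s), (q2, s2, d) in sorted(delta.items())]
    return ";".join(rows)

# Two toy single-rule tables, listed in lexicographic order of their encodings:
m_a = {("q0", "0"): ("halt", "0", 0)}
m_b = {("q0", "1"): ("halt", "1", 0)}
programs = sorted([encode(m_a), encode(m_b)])   # programs[i] plays the role of M_i
```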

Universal Turing Machine (UTM)

Alan Turing showed that he could build a Turing machine that was universal, in the sense that it could run the programs of any other Turing machine.

The main idea of the UTM is to treat the $M_i$ mentioned above as input data. In other words, the instructions for the machine are placed in the same memory as the input data.

The UTM has its own set of instructions $U$, of course. In mathematical language: running the UTM on a program index $i$ together with an input $w$ produces the same result as running $M_i$ on $w$.

$U (i , w) = M_i (w)$

The details are not discussed here, but the UTM obviously has to build the lookup table corresponding to the specific instruction set $M_i$ before executing it on the input data $w$. I suppose the UTM also has to translate the alphabet of each $M_i$ into its own alphabet.

The UTM is the basis of modern computing: one computer can run all programs. We don’t have to purchase a new computer for each new task; instead, we write new programs and run them on the same computer.
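The equation $U(i, w) = M_i(w)$ can be pictured as a single interpreter that executes any stored program. This is a self-contained sketch with my own names and encoding (`run`, `machines`, `U`); a real UTM is itself a Turing machine, whereas here Python plays the role of the universal machine.

```python
# U(i, w) = M_i(w): one interpreter runs every stored program.
def run(delta, tape, state="q0", max_steps=10_000):
    tape, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        key = (state, tape.get(head, "b"))
        if state == "halt" or key not in delta:
            break
        state, tape[head], d = delta[key]
        head += d
    return "".join(tape[i] for i in sorted(tape))

# A table of programs: machines[i] is M_i's instruction set (data, not code).
machines = [
    {("q0", "0"): ("halt", "1", 0)},   # M_0: turn a leading 0 into 1
    {("q0", "1"): ("halt", "0", 0)},   # M_1: turn a leading 1 into 0
]

def U(i, w):
    return run(machines[i], w)         # the same interpreter executes any M_i
```

Note that the programs sit in the same place as data: adding a new task means appending a new table to `machines`, not building a new interpreter.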

Halting problem

Halting means terminating or exiting. A Turing machine can run forever (enter an infinite loop), or reach a particular state or set of conditions at which it is prescribed to halt.

In 1936, Alan M. Turing demonstrated that there is no general procedure which, given a Turing machine and an input, decides whether the machine will ever halt.

For simple programs the question is decidable, of course, but in general it is not. We can run a program for some number of steps and check whether it has halted. But if it has not halted yet, we cannot tell whether it will eventually halt or run forever.
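The “run it for some steps and check” idea can be sketched as a bounded checker. The step-function convention below (returning `None` to signal halting) is my own; the point is that a timeout gives a one-sided answer only.

```python
def halts_within(step_fn, state, max_steps):
    """Run a step function up to max_steps times.
    Returns True if the machine halted, None if we cannot tell."""
    for _ in range(max_steps):
        state = step_fn(state)
        if state is None:        # convention: None means the machine halted
            return True
    return None                  # not halted yet: no conclusion either way

# Toy program: count down from n, halting at 0.
countdown = lambda n: None if n == 0 else n - 1
```

With a generous step budget the checker confirms that `countdown` halts; with a small budget, or on a program that never halts, it can only answer “unknown”.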

Alan Turing’s argument concerns a general decision procedure that would apply to any program of any complexity. He concludes that no such general procedure can exist.
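The core of the argument can be sketched in Python. This is only an illustration of the diagonal trick, with hypothetical names (`make_paradox`, `never`): suppose a total function `halts(p, x)` existed; we build a program that does the opposite of whatever `halts` predicts about it.

```python
# Sketch of the diagonal argument: assume some candidate halts(p, x) -> bool exists.
def make_paradox(halts):
    def paradox(p):
        if halts(p, p):       # candidate says p halts on itself...
            while True:       # ...then do the opposite: loop forever
                pass
        return "halted"       # otherwise: halt immediately
    return paradox

# Any concrete candidate is wrong about the paradox program itself.
never = lambda p, x: False    # candidate decider: "nothing ever halts"
paradox = make_paradox(never)
# never(paradox, paradox) is False, yet paradox(paradox) returns "halted":
# this candidate mispredicts, and the same trap defeats every candidate.
```

Since every proposed `halts` is defeated this way, no general halting decider can exist.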

Simple proofs of Turing’s halting theorem are easy to find online.

Relationship between Gödel’s incompleteness theorem and the halting problem

“[Gödel] …incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.” – Wikipedia

“Instead, I simply observe Gödel’s Theorem as a trivial corollary of what I see as its conceptually-prior (even though historically-later) cousin: Turing’s Theorem on the unsolvability of the halting problem. For those of you who’ve never seen the connection between these two triumphs of human thought: suppose we had a sound and complete (and recursively-axiomatizable, yadda yadda yadda) formal system F, which was powerful enough to reason about Turing machines.  Then I claim that, using such an F, we could easily solve the halting problem.  For suppose we’re given a Turing machine M, and we want to know whether it halts on a blank tape.  Then we’d simply have to enumerate all possible proofs in F, until we found either a proof that M halts, or a proof that M runs forever.  Because F is complete, we’d eventually find one or the other, and because F is sound, the proof’s conclusion would be true.  So we’d decide whether M halts.  But since we already know (thanks to Mr. T) that the halting problem is undecidable, we conclude that F can’t have existed.” – Scott Aaronson

Models of computation

“…a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.” [2]

For a list of the models of computation you can see the “Model of computation” [2] article at Wikipedia.

Other types of computations not mentioned in [2] are DNA-computation [3][4] and quantum Turing machine [5][6].

It is important to note that a quantum Turing machine can be simulated by a classical Turing machine. Remember the Church-Turing thesis mentioned above: not just quantum Turing machines, but anything computable at all, can be computed by a Turing machine.

Turing complete

A system of data-manipulation rules (a programming language, a cellular automaton, etc.) is said to be Turing-complete, or computationally universal, if it can be used to simulate any Turing machine.

Stated in a different way, a programming language is Turing complete if every problem that can be computed by a Turing Machine can also be computed by writing a program in that language.

Noncomputable numbers and functions (few examples)

“The real numbers are uncountable so most real numbers are not computable. See computable number. The set of finitary functions on the natural numbers is uncountable so most are not computable. Concrete examples of such functions are Busy beaver, Kolmogorov complexity, or any function that outputs the digits of a noncomputable number, such as Chaitin’s constant.” [7]

References
