Universal TM / Halting Problem

Church/Turing Thesis

The Church/Turing thesis says that anything which is algorithmically computable can be computed by a Turing machine. The weight of evidence for this claim is that everything we have done up to this point, in terms of TM extensions, grammatical generation, etc., turns out to be equivalent in power to a standard 1-tape Turing Machine.

Another argument lies in the ability of a TM to act not just like a machine, i.e. hardware, but also like software, i.e., like a program run on a machine. The machine which runs other TMs is called a universal Turing Machine. An analogous idea is being asked to write a Java interpreter program which can run any Java program along with some input. If we did so, then our interpreter, itself a Java program, could "run itself" if given another Java program as the input. This ability to "run yourself" will be the source of the first undecidable problem, called the Halting Problem. Obscure as it may seem, it is the source of other undecidable problems which are not at all obscure.

TM Encodings

This is a sketch of the ideas used to create the Universal Turing Machine and prove the Halting Problem. The details of the encodings are found in the TM encoding details section below.

Use Universal Symbol and State Sets

Assume: Any TM M = (K,Σ,δ,s,H) draws its symbols and states from two fixed universal sets, i.e.
Σ is a finite subset of the universal symbol set
K is a finite subset of the universal state set
Enumerate the universal symbols and states by assigning arbitrary unary numbers to each, via an enumerator function N. For example,
N(#)=1, N(>)=11, N(L)=111, N(R)=1111,
N(1)=11111, N(c)=111111, N(a)=1111111, N(b)=11111111, ...

A TM can run on any string in any finite alphabet Σ

A TM M = (K,Σ,δ,s,H) expects δ to be a function, but if you think of it as a partial function, the meaning is that M "runs forever" if it reads a symbol σ ∈ Σ for which the transition δ(p,σ) is undefined. We have already discussed this issue in the TM Computations document. For example, suppose we have this partial TM:
({s,p,h}, {a,b,#,>}, δ, s, {h})
Here is how we could complete it, adding a "dead" (run forever) state:
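A minimal Python sketch of this completion (the helper names are my own, not from the text): every undefined (state, symbol) case of a non-halting state is routed into a new dead state d, in which the machine moves right forever.

```python
# Sketch (hypothetical names): complete a partial transition function
# by adding a "dead" state "d" on which the machine runs forever.
def complete(delta, states, halt_states, alphabet):
    """delta: dict (state, symbol) -> (state, action), where action is
    a symbol to write or "L"/"R".  Returns a total function on the
    non-halting states."""
    full = dict(delta)
    for q in (states - halt_states) | {"d"}:
        for a in alphabet:
            # once in "d" the machine keeps moving right forever,
            # which models "undefined => runs forever"
            full.setdefault((q, a), ("d", "R"))
    return full

# the partial machine ({s,p,h},{a,b,#,>},delta,s,{h}) from the text,
# with an illustrative (made-up) partial delta:
delta = {("s", ">"): ("s", "R"), ("s", "a"): ("p", "#")}
full = complete(delta, {"s", "p", "h"}, {"h"}, {"a", "b", "#", ">"})
```

Note that the halt state h receives no transitions: halting states have none, so only the non-halting states need to be totalized.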

2-symbol encoding of strings

Pick an alphabet of two symbols in Σ. We will use the alphabet {1,c} where 1 is the unary digit symbol already used for the enumerator function N. The encoding function, E, which we want to create maps into strings in {1,c}*.

Strings in Σ* can be encoded by using c as a separator of symbol numbers. A string
w = σ1σ2...σn
has the encoding:
E(w) = cN(σ1)cN(σ2)...cN(σn)
For example,
E(>#) = c11c1
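This encoding is easy to mechanize. A sketch in Python, using the unary table N given above:

```python
# Sketch of the encoding function E on strings, using the unary
# enumeration N from the text.
N = {"#": "1", ">": "11", "L": "111", "R": "1111",
     "1": "11111", "c": "111111", "a": "1111111", "b": "11111111"}

def E(w):
    # each symbol contributes "c" followed by its unary number
    return "".join("c" + N[s] for s in w)

# E(">#") == "c11c1"
```

Note that E is structure-preserving with respect to concatenation: E(ε) = ε and E(uv) = E(u)E(v), a fact spelled out in the encoding details section below.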

2-symbol encoding of a Turing Machine

Even if δ is a partial function, we can complete it, getting a true function.
δ: K × Σ  →  K × (Σ ∪ {L,R})
The encoding of a TM M is then built from E(s) and E(δ) and written as (see detail section below):
E(M) = E(s)E(δ)cc
It turns out that the set of encodings of TMs is a decidable language. The basic format of TM encodings gives a regular language: E(δ) has the form
(c1+c1+c1+c1+)*
with transitions encoded like this, for A,B,X,Y ∈ 1+:
cAcBcXcY
The fact that these transitions constitute a function goes beyond simply being regular, but it remains decidable: determining whether a given string in (c1+c1+c1+c1+)* satisfies these requirements is decidable. Say to yourself: could I program it? The answer is yes (hopefully), although it may be tedious to do so. Lists of numbers can be stored on a single tape.
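To back up the "could I program it?" question, here is a sketch of such a decider in Python. It uses a regex plus a set rather than a tape, purely as an illustration: check the regular format, then check that no source pair (A,B) appears twice.

```python
import re

# Sketch: decide whether a string in {1,c}* is a valid encoding E(delta)
# of a transition *function*: it must match (c1+c1+c1+c1+)* and no
# source pair (A,B) may repeat.
QUAD = re.compile(r"c(1+)c(1+)c(1+)c(1+)")

def is_function_encoding(x):
    if re.fullmatch(r"(c1+c1+c1+c1+)*", x) is None:
        return False                 # not even the right basic format
    seen = set()
    for a, b, _x, _y in QUAD.findall(x):
        if (a, b) in seen:           # two transitions from one (state, symbol)
            return False
        seen.add((a, b))
    return True
```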

A TM can run on its own encoding

This is the "uroboros" effect: a TM running itself. Given M = (K,Σ,δ,s,H), if {1,c} ⊈ Σ, then M will most likely run forever on its own encoding because it will encounter symbols for which it is not defined. However, if {1,c} ⊆ Σ then the behavior is unpredictable. This is not as ridiculous as it first may seem. For example, if the TM decides strings defined over our favorite 2-letter alphabet {a,b}, then it could easily be "mapped" to an equivalent TM which decides strings over {1,c}.

Running a TM on its own encoding is the essence of what proves the Halting Problem. In particular, we consider whether:
 M↓E(M)   M halts on its own encoding
 M↑E(M)   M runs forever on its own encoding

2-symbol encodings of TM/string pairs

Both strings and TMs can be encoded. We can put together the two encodings to create the encoding of a TM/string pair, written:
E(M,w) = E(M)E(w)
As with the other encodings, you can computationally recognize a pair encoding and decompose it into its constituent parts.
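As an illustration of that decomposition, a Python sketch: since every "c" inside E(δ) is followed by a "1", the first occurrence of "cc" must be the terminator of E(M), so splitting there recovers the two parts.

```python
# Sketch: recover E(M) and E(w) from a pair encoding E(M,w) = E(M)E(w).
# E(M) = N(s) E(delta) cc, and "cc" cannot occur inside E(delta),
# so the first "cc" closes E(M).
def split_pair(x):
    i = x.find("cc")
    if i < 0:
        raise ValueError("not a pair encoding")
    return x[:i + 2], x[i + 2:]      # (E(M), E(w))
```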

Universal Turing Machine

The universal TM, U, is a TM which takes as input an encoded machine/string pair E(M,w) and performs the actions of M running with input string w. The most important achievement is to simulate the accepting (i.e., halting) behavior of M. That is, we want:
M halts on w   if and only if   U halts on E(M,w)
or, in notational terms,
M↓w  if and only if  U↓E(M,w)
The input alphabet for U is {1,c}, and so it is legitimate to start U in any configuration:
>#x     x ∈ {1,c}*
The first step is to decide whether x is of the right form E(M,w). If not, we make U run forever.

Otherwise assume U's start configuration is >#E(M,w).
Make U be a 3-tape TM with these intended tape usages:
  • tape 1: the encoded tape contents of M, initially E(w)
  • tape 2: the encoded transition function, E(δ)
  • tape 3: the unary number of M's current state, initially N(s)

The initialization separates the encoded pair E(M,w) into the encoded string E(w) and the encoded machine E(M). E(M) is then separated into the encoded transition function E(δ) and the start state number N(s). These parts are written onto the 3 tapes as indicated above.

String Encoding on Tape 1

We need to say more about the tape 1 encoding. Assuming N(#)=1 and N(>)=11, the encoding of the initial part of the tape string ">#" is "c11c1", and so it is not immediately obvious what it means for # to be the "selected" symbol.

The idea is that the scan head should always be positioned at the "c" which precedes the selected symbol's unary encoding. Thus the encoding of the initial simulation contents ">#" appears on the universal machine's tape 1 as c11c1, with the head at the "c" preceding the encoded #. During the simulation of M, the scan head of tape 1 consistently marks the "c" of the encoded symbol which is selected. For example, if the first step in M is to move left, then after the simulated step the head of tape 1 rests on the first "c", selecting the encoded >.

Real and Encoded

Keep in mind that there is both a real left end, >, and an encoded left end, N(>) = 11. U will never see the real left end because M cannot go left of the encoded left end.

There also are real blanks beyond the encoded portion (which would not be explicitly expressed in the initial configuration):
Unlike the real left end, we may encounter a real blank when moving right beyond the initial content. In that case the U machine replaces the real blank by an encoded blank, "c1", appended to the current tape string. The simulation section below mentions this point.

Run the Simulation

After the initialization, start simulating M's behavior as follows.
  1. Reading from tapes 1 and 3, let
    • A be the unary state number on tape 3, initially N(s)
    • B be the selected unary symbol number on tape 1
    Scan the encoded transitions on tape 2 looking for a match of the first pair of the encoded transition
    ( (A,B), (X,Y) )
    If a match of (A,B) is found, go to step 2.

    If no match is found, it can only be that A is the encoding of a halt state which M has reached, so halt U.
  2. Simulate the transition:
    1. change the encoded state on tape 3 from A to X
    2. effect the change on tape 1 indicated by action Y:
      1. if Y is the unary encoding of a symbol to write:
        ...cBc...    ⊢U*    ...cYc...   
        This operation will require shifting the content of the tape right or left beyond the selected c. In either case you'll have to scan to the end of the content, which is marked by a real blank.
      2. If Y is the encoding of an L or R symbol, then scan either left or right, respectively, to the adjacent "c" marker. For example, going right looks like this (the selected "c" moves from the one preceding B to the one preceding Z):
        ...cBcZ...   ⊢U*    ...cBcZ...
        When going right, if we encounter a real blank, replace it by an encoded blank and select it, i.e., assuming N(#)=1,
        ...cB#   ⊢U*   ...cBc1
  3. Go back to step 1.
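Steps 1–3 above can be sketched in plain Python, operating on decoded unary numbers rather than a literal 3-tape machine. All names are illustrative, and a fuel bound stands in for "runs forever":

```python
# Sketch of U's main loop (steps 1-3).  trans: dict (A,B) -> (X,Y) of
# unary-encoded transitions; tape: list of unary symbol numbers;
# the head starts just right of the encoded ">".
N = {"#": "1", ">": "11", "L": "111", "R": "1111"}

def simulate(trans, start, tape, fuel=10000):
    state, pos = start, 1
    for _ in range(fuel):
        A, B = state, tape[pos]
        if (A, B) not in trans:      # step 1: no match => halt state reached
            return tape, pos
        X, Y = trans[(A, B)]         # step 2: simulate the transition
        state = X
        if Y == N["L"]:
            pos -= 1
        elif Y == N["R"]:
            pos += 1
            if pos == len(tape):     # real blank: append an encoded blank
                tape.append(N["#"])
        else:
            tape[pos] = Y            # write the encoded symbol
    raise RuntimeError("fuel exhausted; M may run forever")

# two transitions of the "a-writing machine" from the encoding details
# section, in unary: s=1, h=11, #=1, >=11, R=1111, a=1111111
trans = {("1", "1"): ("11", "1111111"),
         ("1", "11"): ("11", "1111")}
final_tape, final_pos = simulate(trans, "1", ["11", "1"])
```

Starting from the encoded >#, the machine writes an encoded a and then halts, because the halt state 11 has no matching transition.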

The Halting Problem

The halting problem is this: given a TM M and input string w, is there an "algorithm" to decide whether M halts on input w or not? Assuming that an "algorithm" means a TM, the Universal TM allows us to frame this question in a formal context. Let HALT be the language of machine encoding pairs (M,w) where M halts on input w:
HALT = { x ∈ {c,1}*: x = E(M,w) where M↓w }
HALT is precisely the language accepted by the Universal Turing Machine, U:
M↓w  if and only if  U↓E(M,w)
Therefore HALT is recursively enumerable since it is accepted by U. The essence of the Halting problem is that HALT, which is recursively enumerable, or semi-decidable, is not recursive, i.e., not decidable.

Statement of the Halting Problem

Theorem: HALT is not recursive, or, undecidable, i.e., there is no algorithm which can determine whether an arbitrary TM, or an arbitrary algorithm, will halt on an arbitrary string input.


Several facts go into the proof. Here is the proof working backward directly from the root cause; the textbook works forward to get a contradiction.
  1. Consider the language
    NOTRE = {E(M): M↑E(M)}
    Saying it in English, NOTRE is the set of encodings of TMs which run forever on their own encodings.
  2. Argue that NOTRE is not recursively enumerable, i.e., NOTRE cannot be accepted by any TM. Why? The proof uses a classic diagonalization construction.

    Suppose NOTRE is recursively enumerable, i.e., it is the language accepted by a TM, M′.

    Then ask yourself:
    Is E(M′) ∈ NOTRE   or   is E(M′) ∉ NOTRE ?
    The contradiction is to argue that neither is possible:
    E(M′) ∈ NOTRE
          ⇔  M′↑E(M′)                  (by definition of NOTRE)
          ⇔  M′ does not accept E(M′)  (by definition of acceptance by M′)
          ⇔  E(M′) ∉ NOTRE             (because NOTRE is the language accepted by M′)
    The assumption that NOTRE is recursively enumerable must be false.
  3. Conclude that NOTRE is not recursive (not R.E. implies not recursive).
  4. Conclude that the complement, ~NOTRE, is not recursive, since recursive languages are closed under complementation. Writing ~X for the complement of X within {1,c}*:
    ~NOTRE = ~{E(M): M↑E(M)} = {E(M): M↓E(M)} ∪ ~ENCTM
    ENCTM = {x ∈ {1,c}*: x is the encoding of a TM}
    We have argued that ENCTM is recursive, implying that ~ENCTM is also recursive. But since
    ~NOTRE = {E(M): M↓E(M)} ∪ ~ENCTM
    is not recursive, and recursive languages are closed under union, it must be true that
    {E(M): M↓E(M)} is not recursive.
  5. Conclude that the generalization,
    HALT = {E(M,w): M↓w}, is not recursive
    Because if we could decide whether M↓w for every TM M and every string w, then we could decide it for the specific string w = E(M).
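The diagonalization in this proof is the same self-reference as the classic programming version of the Halting Problem. The following Python sketch (entirely illustrative, not from the text) shows why no candidate halts decider can be right about the paradoxical program built from it:

```python
# Sketch of the diagonal argument in programming terms.  Suppose we
# had a total decider halts(prog, input).  The self-referencing
# program below defeats any candidate implementation.
def make_paradox(halts):
    def paradox(prog):
        if halts(prog, prog):
            while True:              # halts said "halts": loop forever
                pass
        return "halted"              # halts said "loops": halt at once
    return paradox

def candidate(prog, inp):            # a deliberately wrong stand-in
    return False                     # claims: nothing ever halts

paradox = make_paradox(candidate)
result = paradox(paradox)            # halts, refuting candidate's answer
```

Whatever candidate answers on (paradox, paradox), paradox does the opposite, so no total decider exists.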


  1. The language HALT corresponding to the Halting problem is recursively enumerable, but not recursive. In particular, the universal TM accepts HALT, but no TM can decide HALT.
  2. There are languages which are not recursively enumerable, in particular the language NOTRE in the proof.
  3. Recursively enumerable languages are not closed under complementation. In particular, from the proof, the language NOTRE is not recursively enumerable, but its complement, ~NOTRE, is recursively enumerable.

Encoding details

The textbook suggests a certain encoding, but ours is a bit more traditional. There is no point in trying to make this encoding "useful"; we simply want to ensure that it can be done in theory. Choose two dedicated symbols:
1,c ∈ Σ
Assign every state and symbol a number which we can express in unary notation using the 1 symbol as the basis. Let
N: K ∪ Σ → 1+
be the assignment of states and symbols to positive unary numbers. All known symbols must be assigned numbers:
N(#) = 1, N(>) = 11, N(L) = 111, N(R) = 1111, 
N(1) = 11111, N(c) = 111111
Writing "N(1) = 11111" looks confusing, but it simply means that the tape symbol 1 is the 5th symbol in the enumeration. We will describe our encoding function, E. It is necessary to encode both strings and Turing machines.

String encodings

A string is a sequence of symbols a1...an.
The encoding is this:
E(a1...an) = cN(a1)...cN(an)
Consider the symbols a and b. We can assume their encodings are the next available numbers in the sequence.
a = 1111111,  b = 11111111
And we encode "#ab" as follows:
E(#ab) = c1c1111111c11111111
Thus the language of string encodings is precisely the regular language
(c11*)* = (c1+)*
Furthermore, the encoding function is structure-preserving w.r.t. concatenation:
E(ε) = ε
E(u·v) = E(u)·E(v)

TM Encodings

A Turing machine can express almost all of its information in the transition function alone, where the transition function is an ordered list of transition quadruples of the form:
(state, symbol, target_state, target_symbol)
Our textbook assumes that the start state is the first in this list, but we will make explicit mention of the start state. Otherwise, as in the textbook, we assume the halt state set H consists of those states which appear as a second state but never as a first. Thus a machine consists of a start state s and list of transitions:
(s, (t1, t2, ..., tn))
To be a function means that there are no two different transitions
ti = (q,a,pi,bi)  and  tj = (q,a,pj,bj)
sharing the same source pair (q,a).
Each state and symbol has an encoded numerical value which can be used to create an encoding of a transition t = (q,a,p,b) as follows:
E(t) = cN(q)cN(a)cN(p)cN(b)
For example, assuming N(s) = 1, the encoding of the transition (s,>,s,R) is
c1c11c1c1111
The encoding of a machine M = (s, (t1, t2, ..., tn)) is
E(M) = N(s)E(t1)E(t2)...E(tn)cc
Thus the language of Turing Machine encodings is a subset of the regular language
1+(c1+c1+c1+c1+)*cc
Consider the "a-writing machine" M which has these transitions:
δ(s,#) = (h,a)
δ(s,>) = (h,R)
δ(s,a) = (h,a)
Also assuming N(s)=1, N(h)=11, and N(a)=1111111, then we can encode this machine as the string (spaces added for clarity only)
1 c1c1c11c1111111 c1c11c11c1111 c1c1111111c11c1111111 cc
s (s,#,h,a)       (s,>,h,R)     (s,a,h,a)
The textbook assumes that the transition quadruples are lexicographically ordered so that every TM has a unique code, but this "uniqueness" does not appear to be a crucial fact.
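As a sanity check on this example, a Python sketch that recomputes E(M) from the transition list, using the table values as assigned in the text (states and symbols are numbered independently, so N(s) = N(#) = 1 is not a conflict):

```python
# Sketch: compute E(M) for the a-writing machine from its transitions.
N = {"#": "1", ">": "11", "L": "111", "R": "1111",
     "1": "11111", "c": "111111", "a": "1111111",
     "s": "1", "h": "11"}

def E_trans(q, a, p, b):
    # E(t) = cN(q)cN(a)cN(p)cN(b)
    return "c" + N[q] + "c" + N[a] + "c" + N[p] + "c" + N[b]

def E_machine(start, transitions):
    # E(M) = N(s) E(t1) E(t2) ... E(tn) cc
    return N[start] + "".join(E_trans(*t) for t in transitions) + "cc"

code = E_machine("s", [("s", "#", "h", "a"),
                       ("s", ">", "h", "R"),
                       ("s", "a", "h", "a")])
```

Here the second transition (s,>,h,R) contributes c1c11c11c1111, since N(>) = 11.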

Machine/String pair encodings

Finally we want encodings of (M,w) pairs which can simply be the concatenation:
E(M,w) = E(M)E(w)
Thus (machine,input) pair encodings belong to the regular language:
1+(c1+c1+c1+c1+)*cc(c1+)*
For example, using the example machine in the above section with the input string w=ab, we get
E(M,w) = 1c1c1c11c1111111c1c11c11c1111c1c1111111c11c1111111ccc1111111c11111111

© Robert M. Kline