Turing Machines

Our next, and effectively last, model of computation is the Turing Machine. This model bears some similarities to our previous models in that there are finitely many states and the input string is presented on a "tape". Furthermore, the machine control has a read/write head which considers the current symbol and the current state, and makes a computational step based on these values. The differences are these:
  1. the tape is semi-infinite and the machine can pass back and forth across the input
  2. the tape can be written onto
The ability to write an unbounded amount of information was what gave the PDA its advantages over the DFA. The TM has the same unbounded read/write memory, but it also has the ability to use this memory in different ways, not just as a stack.

Church-Turing Thesis

With a PDA, the single stack also becomes a limitation, and we could imagine more powerful models with multiple stacks, etc. In contrast, we will see that the Turing machine represents a maximal model of computation in the sense that anything that can be decided algorithmically can be done with a single TM (albeit a very complicated one).

The argument that we're alluding to is called the Church-Turing Thesis: anything that is algorithmically computable can be computed by a Turing machine. One refers to this as a thesis because there is no effective definition of what algorithmically computable really means. The point is that every effort to extend the TM to a more powerful computational mechanism fails in the sense that any such new mechanism can be modeled by a TM.

Turing Machine Definition

A Turing machine M = (K,Σ,δ,s,H):
K = finite set of states
Σ = tape alphabet containing special symbols > (left end) and # (blank)
s ∈ K is the start state
H ⊆ K is the set of halting states
δ: (K-H)×Σ → K×(Σ∪{L,R})
Further qualifications are that the directional symbols L (go left) and R (go right) do not belong to Σ, and that δ has certain fixed behaviors with respect to the left end symbol:
  A. if δ(q,>) = (p,X), then X = R or X = >,   i.e., when the machine is at the left end, it can only rewrite the left end marker or go right
  B. if δ(q,σ) = (p,σ′) where σ ≠ >, then σ′ cannot be >,   i.e., we can never write the left end marker except at the left end
What's the intuitive meaning of all this? A TM, after reading an input symbol σ in state q, can do one of the following (subject to restrictions A and B):
  1. δ(q,σ) = (p,L): change to state p and move the read/write head one symbol left
  2. δ(q,σ) = (p,R): change to state p and move the read/write head one symbol right
  3. δ(q,σ) = (p,σ′): change to state p and write the symbol σ′ at the current position, without moving the read/write head
The TM steps in the model used in this textbook are atomic, in the sense that they cannot be decomposed into simpler moves. In other TM models, one allows compound moves whereby a symbol is always written and the tape head then moves one step left or right, or stays where it is.
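To make these semantics concrete, here is a minimal sketch in Python (ours, not the textbook's formalism): δ becomes a dictionary mapping (state, symbol) pairs to (state, action) pairs, the tape a list, and the r/w head an index.

LEFT_END, BLANK = '>', '#'

def step(delta, state, tape, pos):
    """Perform one atomic TM move; return the new (state, tape, pos)."""
    new_state, action = delta[(state, tape[pos])]   # delta is partial: a missing key is an error
    if action == 'L':
        pos -= 1                   # restriction A guarantees this never happens at the left end
    elif action == 'R':
        pos += 1
        if pos == len(tape):       # the tape is semi-infinite:
            tape.append(BLANK)     # extend it with a blank on demand
    else:
        tape[pos] = action         # write a symbol; the head stays put
    return new_state, tape, pos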

Behavior of the left end marker

The TM restrictions (A) and (B) above imply that the left end marker is what it purports to be, a symbol which resides on the left end exclusively throughout the computation and steers the computation away from "falling off the left." Not all TM models are defined in this way. In particular some models allow a TM to run off the left end, in which case it is said that the TM "hangs". Such behavior is considered effectively equivalent to running forever.

Halting states

Halting states have some of the connotation of final states in our other machines, except that the definition of δ excludes any further steps once a halting state is reached, and so the machine stops.

The point is that, unlike our other machines, the goal is not to consume the input by passing left-to-right over it. In fact, computations can go back and forth across the input multiple times. Therefore, to have some sense of being "done" we need a stronger notion of final state, thus the necessity of halting states.

In fact, we can really assume that there is only ever one halt state, h, i.e., H = {h}. In theory, some information might be expressed by the state in which the TM halts; however, in reality, all we ever care about is whether it halts or not, and if it does halt, we can look at the tape contents, not the halt state.

Starting point

Every TM starts with a finite portion of the tape filled in with input symbols. Usually the interpretation is that we have one or more non-blank strings separated by blanks, like this:
>aaa#aaa#####
Unlike PDAs and DFAs, we do not necessarily assume that the read/write head is at a specific position. The idea is that we want to describe TMs by "hooking them together", i.e., one starts where the previous one left off.

Examples: 4.1.1, 4.1.2

As you'll soon see, it is barely worth the effort to work up a graphical depiction of TM behavior, because we will shortly develop a whole new system for combining TMs.

4.1.1

This is how we could depict example 4.1.1:
This is a "blanking machine": when positioned at the left end, it moves right, replacing all non-blanks by blanks until a blank is seen, and then halts. Note that our depiction indicates a partial function: the combination (q1,a), among others, is never seen in any computation. The textbook defines values for these, but they are not used.
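The diagram itself does not reproduce here, but the transitions actually exercised by the machine are, plausibly, these (our reconstruction from the description; the textbook's exact table, including the unused entries, may differ):
δ(q0,a) = (q1,#)    erase the scanned non-blank
δ(q1,#) = (q0,R)    move right to the next square
δ(q0,#) = (h,#)     a blank is seen: halt
δ(q0,>) = (q0,R)    step off the left end marker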

4.1.2

This machine searches for a blank, starting from the current position and going left. If there is none, it will run forever, bouncing back and forth between the left end marker and the non-blank square just to its right. In other TM models, as we mentioned, the analogous machine would hang by falling off the left.
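Again the diagram is lost here; with Σ = {a,#,>}, a plausible transition table matching this description (ours, not necessarily the book's) is:
δ(q0,a) = (q0,L)    keep searching left
δ(q0,#) = (h,#)     found a blank: halt
δ(q0,>) = (q0,R)    forced right at the left end, hence the infinite bounce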

Configurations

A TM configuration for M = (K,Σ,δ,s,H) belongs to
K × >Σ* × (Σ*(Σ-{#})∪{ε})
The interpretation of a configuration is this:
( state,  tape up to and including the r/w head position,  tape after the r/w head and before the infinite blank sequence )

Notational Convention

In order to avoid writing the tape as two strings, it is easier to write a single string and mark the position of the r/w head. The textbook does this by underlining the symbol at the r/w head position; since underlining does not render here, we mark the scanned symbol with square brackets instead, e.g.,
(q0, >, aaa )    ⇒   (q0, [>]aaa)
(q, >aa#, bb)    ⇒   (q, >aa[#]bb)
A blank can appear anywhere within the string, but it can never be at the end of the string unless it is the scanned symbol.

Other textbooks use an even more compact notation whereby we get one string in Σ*KΣ* like this:
(q0, >, aaa )    ⇒   >q0aaa
(q, >aa#, bb)    ⇒   >aa#qbb
However, we'll stick with the former version used by this textbook.

The ⊢ relation

As can be expected, we want to define a computation as a series of related configurations obtained by the δ function. Consider our previous examples. In 4.1.1, we made the configuration transformations:
(q0,[>]aa) ⊢ (q0,>[a]a) ⊢ (q1,>[#]a) ⊢ (q0,>#[a]) ⊢ (q1,>#[#]) ⊢ (q0,>##[#])
         ⊢ (h,>##[#])
To define this relation formally gets a bit messy, with all the cases to consider. We want to consider the possible transformations of a given configuration
(q, w[a]u)
in which the r/w head is scanning the symbol a. Here are the possibilities:
  1. δ(q,a) = (p,b). If w = ε, then a = >, and so the restrictions on δ dictate that b = >. Either way
    (q,w[a]u) ⊢ (p,w[b]u) 
    
  2. δ(q,a) = (p,R) and u = ε:
    (q,w[a]) ⊢ (p,wa[#]) 
    
  3. δ(q,a) = (p,R) and u = σv:
    (q,w[a]σv) ⊢ (p,wa[σ]v) 
    
  4. δ(q,a) = (p,L). The restrictions on δ dictate that w ≠ ε, so write w = xσ:
    (q,xσ[a]u) ⊢ (p,x[σ]au) 
    (except that when a = # and u = ε, the trailing blank is dropped, per our convention above: (q,xσ[#]) ⊢ (p,x[σ]))
    
As usual, define ⊢* as the reflexive, transitive closure of ⊢.
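Here is a Python sketch (ours, with δ again as a dictionary) of the ⊢ relation on textbook configurations (q, wa, u), where the scanned symbol is the last character of the middle string; the demo reproduces the 4.1.1 chain above, using the transition table we guessed earlier.

def yields(delta, q, left, u):
    """One application of |- ; left ends with the scanned symbol."""
    a = left[-1]
    p, act = delta[(q, a)]
    if act == 'R':
        if u == '':                      # case 2: bring in a fresh blank
            return p, left + '#', ''
        return p, left + u[0], u[1:]     # case 3
    if act == 'L':                       # case 4: w != epsilon by restriction A
        u2 = '' if (a == '#' and u == '') else a + u
        return p, left[:-1], u2          # a trailing unscanned blank is dropped
    return p, left[:-1] + act, u         # case 1: write the symbol act

# the blanking machine of 4.1.1, run on input aa:
delta = {('q0', '>'): ('q0', 'R'), ('q0', 'a'): ('q1', '#'),
         ('q1', '#'): ('q0', 'R'), ('q0', '#'): ('h', '#')}
q, left, u = 'q0', '>', 'aa'
while q != 'h':
    print(q, left, u)
    q, left, u = yields(delta, q, left, u)
print(q, left, u)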

Combining TMs

We want to create a language and graphical structure for combining TMs.

Conditional continuation

The graphical representation is like this:
Let Mi = (Ki,Σ,δi,si,Hi), i=1,2. Assume state sets are disjoint. Choose a state h2 ∈ H2 arbitrarily. Then our new machine is
(K1∪K2,Σ,δ,s1,H2)
These are the transitions
δ(q,σ) = δ1(q,σ), q ∈ K1-H1
δ(q,σ) = δ2(q,σ), q ∈ K2-H2
δ(h,a) = (s2,a), h ∈ H1
δ(h,σ) = (h2,σ), σ ≠ a, h ∈ H1, h2 ∈ H2
In English,
  1. run M1 until it halts (if it halts)
  2. if the r/w head is reading the symbol a, leave the tape square alone and go to state s2 in M2
  3. if the r/w head is reading any other symbol, go to the chosen halt state h2 in M2 (i.e., halt immediately)
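As a sketch (ours, representing a machine as a (states, delta, start, halts) tuple of a set, a dictionary, a state, and a set), the combination can be built mechanically from the rules above:

def connect_on(M1, a, M2, sigma):
    """M1 -a-> M2: run M1; on halting while scanning a, continue with M2."""
    K1, d1, s1, H1 = M1
    K2, d2, s2, H2 = M2
    h2 = next(iter(H2))                # an arbitrarily chosen halt state of M2
    delta = dict(d1)                   # transitions of M1 ...
    delta.update(d2)                   # ... and of M2
    for h in H1:                       # halt states of M1 become live:
        for sym in sigma:
            if sym == a:
                delta[(h, sym)] = (s2, sym)   # rewrite sym in place, enter M2
            else:
                delta[(h, sym)] = (h2, sym)   # any other symbol: halt at once
    return K1 | K2, delta, s1, H2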
The textbook goes through a similar scenario on page 187.

Unconditional continuation

Graphically we would write:
or simply:
M1 M2
It means that, if M1 halts, keep the r/w head where it is and go to the start state of M2.

Inverted conditional continuation

Graphically we would write:
The ā is read as "a complement". It means that, if M1 halts, run M2 if the scanned symbol is anything other than a.

Component machines

The machines we use in these compositions are very simple: the symbol-writing machines σ (write the symbol σ) and the head-moving machines L and R. Both the σ and the L machines are constrained by the TM transition restrictions when scanning ">", so these component machines are not entirely trivial. Suppose that our alphabet were:
Σ = { a, b, #, > }
then these component machines would look like this:
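The diagram is lost here, but, for instance, the a-writing machine and the L machine might have the following tables, each with a single non-halting start state s and halt state h (our reconstruction; the behavior on > is one plausible choice forced by the restrictions, and the textbook's choice may differ):
a machine:   δ(s,a) = (h,a)   δ(s,b) = (h,a)   δ(s,#) = (h,a)   δ(s,>) = (h,R)
L machine:   δ(s,a) = (h,L)   δ(s,b) = (h,L)   δ(s,#) = (h,L)   δ(s,>) = (h,>)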

No-op machine

We also have a generic "no-op" machine, depicted with only a halt state, which is also the start state, and no transitions:
({h},Σ,∅,h,{h})
The no-op machine simply serves as a good "place marker" with no actions, for example, acting as a decision point:

Unconditional continuation

Unconditional continuations of these machines can be written simply as juxtapositions:
R → R  =  RR 
R → #  =  R#
(in the second, # is the blank-writing machine, not a subscript)
Although the textbook promotes notation like R² as being equivalent to RR, it causes inconsistencies when trying to write multitape Turing machines, in which a superscript represents the tape number.

Scanning machines

Our input strings to TMs are strings of Σ-{#,>} symbols. Blank symbols are used to delineate string input as well as to mark positions which have been read and processed within an input string. The important operations are the scanning machines, written with a subscript: Rσ moves right until σ is scanned, and Lσ moves left until σ is scanned; in particular, R# and L# find the nearest blank to the right or left of the current position.

More advanced machines

Our input strings are always assumed to be composed of the tape symbols excluding blanks and the left end symbol.

4.1.8

A copy machine, C, takes an initial tape configuration:
#w#
and modifies it to
#w#w#
We really want the outcome to be "position independent" on the tape, i.e., if the initial tape is this:
>x#w#
where x ∈ (Σ-{>})*, then the outcome is this:
>x#w#w#
Here is the copy machine, C:
The loop section of this machine:
suggests the idea of a "parameterized TM" somehow depending on σ:
# R# R# σ L# L# σ
In reality, there is a separate loop for each σ. For example, if a and b are the two "regular" alphabetic symbols, then the single loop is actually two loops joined at the "R" which is the base of the loop:
Here is a sample run, with the machine steps on the left and the resulting tape on the right (scanned symbol in brackets):
            #ab[#]
L# R        #[a]b
# R# R#     ##b#[#]
a           ##b#[a]
L# L# a     #[a]b#a
R           #a[b]#a
# R# R#     #a##a[#]
b           #a##a[b]
L# L# b     #a[b]#ab
R           #ab[#]ab
R#          #ab#ab[#]
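The same strategy reads naturally as ordinary code. Here is a Python sketch (ours) in which R# and L# become scanning helpers over a list tape; it mirrors the machine above rather than implementing the book's formal composition.

BLANK = '#'

def R_blank(tape, pos):                # the scanning machine R#
    pos += 1
    if pos == len(tape):
        tape.append(BLANK)             # extend the tape on demand
    while tape[pos] != BLANK:
        pos += 1
        if pos == len(tape):
            tape.append(BLANK)
    return pos

def L_blank(tape, pos):                # the scanning machine L#
    pos -= 1
    while tape[pos] != BLANK:
        pos -= 1
    return pos

def copy_machine(tape, pos):
    """Head starts on the blank after w; ends on the blank after the copy."""
    pos = L_blank(tape, pos) + 1       # L# R : onto the first symbol of w
    while tape[pos] != BLANK:          # loop while the scanned sigma != #
        sigma = tape[pos]
        tape[pos] = BLANK              # write # (erase sigma)
        pos = R_blank(tape, pos)       # R#
        pos = R_blank(tape, pos)       # R#
        tape[pos] = sigma              # deposit sigma at the far end
        pos = L_blank(tape, pos)       # L#
        pos = L_blank(tape, pos)       # L#
        tape[pos] = sigma              # restore sigma
        pos += 1                       # R : the base of the loop
    return tape, R_blank(tape, pos)    # the final R#

tape = list('>#ab#')
print(copy_machine(tape, 4))           # tape becomes > # a b # a b #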

Shift Left machine

A shift-left machine, SL, takes an initial tape configuration:
#w# 
and modifies it to
w# 
Again, the result is position independent, i.e., the initial tape is:
>x#w# 
where x ∈ (Σ-{>})*, and the outcome is this:
>xw# 
The machine is:
Here is a sample run, in the same format:
            #abc[#]
L# R        #[a]bc
L a R R     aa[b]c
L b R R     abb[c]
L c R R     abcc[#]
L #         abc[#]
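In the same illustrative Python style, the shift-left loop L σ R R with the closing L # might look like this (ours, mirroring the run above):

BLANK = '#'

def shift_left(tape, pos):
    """Head starts on the blank after w; w is preceded by a blank."""
    pos -= 1                        # L# R : back to the blank before w,
    while tape[pos] != BLANK:
        pos -= 1
    pos += 1                        # then onto its first symbol
    while tape[pos] != BLANK:       # loop: L sigma R R
        sigma = tape[pos]
        tape[pos - 1] = sigma       # copy the scanned symbol one cell left
        pos += 1                    # net effect of R R after the L
    tape[pos - 1] = BLANK           # L # : blank out the last duplicate
    return tape, pos - 1

tape = list('>#abc#')
print(shift_left(tape, 5))          # tape becomes > a b c # #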

Compute the w² function

We'll see what we mean by computing a function in the next section, but the machine:
R# → C → SL → L#
computes the function
f(w) = ww
in the sense that it takes an initial tape configuration:
>#w
to the halting configuration:
>#ww
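For example, with w = ab and the bracket marking the r/w head as before: starting from >[#]ab, the machine R# gives >#ab[#], C gives >#ab#ab[#], SL gives >#abab[#], and the final L# returns the head to the leading blank: >[#]abab.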


© Robert M. Kline