### Overview

The goal of the Theory of Computation is to produce a classification and analysis of formal languages which, in some way or another, can be computed. The computation can take a variety of forms:

- construct a "computational machine" of sorts and "run" strings from the language on it
- use a generative grammar to produce the strings

In the machine model, a string is *accepted* if, after processing it, the machine is in some notion of a "final state." Going further, one can ask what it means to *process* the string; in particular, is there a fixed, deterministic way to do so, or are there multiple ways to process it (non-deterministic)? The other, generative approach is that of grammars. Grammars are the basis of all modern computer languages. It's fairly easy to describe what it means for a grammar to generate a string, but it is quite difficult to understand what kinds of languages correspond to these grammars. Taking the computational model a step in an analytical direction, we're also interested in languages which are efficiently computable in some general sense. At this point the languages are expressed more succinctly as decision problems, and efficiency is characterized by "in polynomial time" in terms of the number of steps that a computing machine (i.e., an algorithm) would take.

### Sets: section 1.1

A set is a "collection" of objects belonging to some "universal collection," `A`. We make much use of set descriptor notation like this:

`S = { x : x ∈ A and x has some property P }`

A set is considered to be a *primitive*, in the sense that we don't define it in terms of other mathematical concepts. A naïve approach allows creation using any sense of "has some property"; however, this approach suffers from *Russell's Paradox*, which indicates that set creation must be made, somehow, more restrictive.
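Set descriptor notation maps directly onto Python's set comprehensions. A small sketch; the universal collection and the property `P` here are invented for illustration:

```python
# Set-builder notation S = { x : x ∈ A and x has some property P },
# rendered as a Python set comprehension. A and P are illustrative choices.
A = set(range(20))                # a small "universal collection"
P = lambda x: x % 3 == 0          # "has some property P"

S = {x for x in A if P(x)}
print(sorted(S))                  # [0, 3, 6, 9, 12, 15, 18]
```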

Basic set operations are: *union*, *intersection*, and *difference*. Algebraic properties include *DeMorgan's Laws*. The power set, `2^A` = the set of all subsets of `A`, is an important concept. We also use, on some occasions, the notion of a *partition* of `A`.
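For a finite set the power set can be enumerated directly. A sketch using the standard itertools recipe (the function name `power_set` is my own):

```python
from itertools import chain, combinations

def power_set(A):
    """Return 2^A, the set of all subsets of A, as a set of frozensets."""
    items = list(A)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1))}

P = power_set({1, 2, 3})
print(len(P))   # 8, i.e. 2^|A| with |A| = 3
```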

### Relations and functions: section 1.2

In order to define relations we need the notion of an *ordered pair* of elements, `(a,b)`. Although this is intuitive, it is still incumbent on us to **define** an ordered pair in terms of sets. This is done (see Wikipedia) by the so-called *Kuratowski* definition:

`(a,b) = {{a},{a,b}}`

This definition satisfies the assumed requirement:

`(a₁,b₁) = (a₂,b₂) if and only if a₁=a₂ and b₁=b₂`
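The Kuratowski definition can be checked concretely with Python frozensets. A sketch (the helper name `kpair` is my own):

```python
def kpair(a, b):
    """Kuratowski ordered pair: (a,b) = {{a},{a,b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

# The defining requirement: pairs are equal iff components match in order.
assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)                  # order matters, unlike plain sets
assert kpair(1, 1) == frozenset({frozenset({1})})  # degenerate case collapses
```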

The basis of relations is the *Cartesian product* of sets:

`A × B = { (a,b) : a ∈ A and b ∈ B }`

With 2 sets, a *binary* relation `R` is

`R ⊆ A × B`

It is very common to have a binary relation where `B = A`; this is called simply a binary relation on `A`, a subset of `A×A`. A *ternary* relation on `A,B,C` is `R ⊆ A×B×C`.

#### Functions

A function is a certain type of relation. A function from `A` to `B` is written `f: A ⇾ B`. We call `A` the *domain* and `B` the *range*. Being a function means two things:

- (a) functional: `(a,b),(a,c) ∈ f ⇒ b=c`
- (b) total: for all `a ∈ A`, there exists `b ∈ B` such that `(a,b) ∈ f`

Thus `f(a)` is meaningful for each `a ∈ A`. If only property (a) is satisfied, we call `f` a *partial function*, or in computer programming, a *map* or *associative array*. The domain element is called a *key*. In practice a map needs to have some way of determining whether `f(a)` is defined or not. Functions can use more than 2 sets, e.g. `f: A×B ⇾ C` is a subset of `(A×B)×C`. With 3 components, the domain of the function could be either `A×B`, or just `A`.
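A Python `dict` is exactly such a map: a partial function from keys to values, with membership testing as the way of determining whether `f(a)` is defined. A sketch; the particular `f` and the wrapper name `apply_at` are invented for illustration:

```python
# A partial function f on domain {0,...,9}: only even keys are defined.
f = {a: a * a for a in range(10) if a % 2 == 0}

def apply_at(f, a):
    """Apply a partial function, checking definedness first."""
    if a in f:                    # the map's test for "f(a) is defined"
        return f[a]
    raise KeyError(f"f({a}) is undefined")

print(apply_at(f, 4))             # 16
print(2 in f, 3 in f)             # True False
```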

### Special Binary Relations: section 1.3

For a binary relation on set `A`, i.e. `R ⊆ A×A`, some properties of importance are these: *reflexive*, *symmetric*, and *transitive*. A relation which has all three is an *equivalence* relation. Example: `R ⊆ N×N` with `R = { (n,m) : n%10 = m%10 }`. With an equivalence relation, one defines the equivalence class:

`[n] = { m : (m,n) ∈ R }`

The equivalence classes form a partition of `A`.
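The `n%10 = m%10` example partitions the naturals into ten classes, one per last digit. A sketch computing the classes on a finite slice of `N`:

```python
from collections import defaultdict

# Equivalence classes of R = { (n,m) : n % 10 == m % 10 } on {0,...,29}.
classes = defaultdict(set)
for n in range(30):
    classes[n % 10].add(n)    # [n] is determined by the last digit

print(sorted(classes[7]))     # [7, 17, 27]

# The classes form a partition: disjoint, and their union is the whole set.
assert set().union(*classes.values()) == set(range(30))
```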

### Finite and Infinite Sets: 1.4

A set `A` is countable if it is finite or there is a 1:1 correspondence with `N`. Technically, an infinite countable set `A` has a bijection `f: A ⇾ N`. These facts are useful, but we need not dwell on them.
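For instance, the integers are countably infinite. A sketch of one standard bijection `Z ⇾ N`, folding the negatives into the odd positions:

```python
def z_to_n(z):
    """A bijection Z -> N: 0, -1, 1, -2, 2, ... maps to 0, 1, 2, 3, 4, ..."""
    return 2 * z if z >= 0 else -2 * z - 1

# On the sample -5..5, the images are exactly 0..10: no collisions, no gaps.
images = {z_to_n(z) for z in range(-5, 6)}
assert images == set(range(11))
```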

### Proof Techniques: section 1.5

#### Proof by contradiction

The textbook doesn't consider this worthy of mention, but it's a common technique, called *reductio ad absurdum*. Assume some statement is true, derive a contradiction, i.e., a false statement, and conclude that the statement you assumed true is in fact false.

#### Induction

This is the most important proof technique in all subjects related to discrete math. It is used everywhere.

#### Pigeonhole Principle

It has specific, isolated usages, mostly along the lines of Theorem 1.5.3.

#### Diagonalization

The most important thing to know is that if `A` is any infinite (typically countable) set, then `2^A` is uncountable. For our purposes, the set `A` is the set of all strings on an alphabet, and `2^A` is the set of all languages. The classic proof is that `2^N` is uncountable. It is a proof by contradiction. Assume it were countable; then we could enumerate all the subsets of `N` as `(S₀, S₁, ...)`. Consider the *diagonal* set `D = { n : n ∉ Sₙ }`. Then, for any `n`, `D ≠ Sₙ`, because if `n ∈ D` then `n ∉ Sₙ`, and vice-versa. So `D` cannot be in this list. A contradiction. ■

### Closures and Algorithms: section 1.6

Ignore the discussion of algorithm complexity; this is taught in Csc560.

Closures are an important concept for us; in particular, the most common usage in the textbook is this: if `R` is a binary relation on `A`, we want to define:

R* = reflexive, transitive closure of R

The most instructive example is the "path" relationship on a graph. A directed graph is a set of vertices and edges, `(V,E)`, where `E ⊆ V × V`. That is, `E` is the "edge" relationship on vertices. The path relationship (allowing empty paths as well) is:

P = E* = reflexive, transitive closure of E

In general, for a binary relation on `A`, how do we know that `R*` exists? If `A` is finite, we can easily compute `R*` by following all "chains" of pairs. Think of it as computing all paths in a finite graph using a breadth-first search from every vertex. If the set `A` and `R` are infinite, we cannot compute `R*`, but we still want to know that it **exists** in a mathematical sense. Mathematically, `R*` is the smallest relation (i.e., the smallest subset of `A×A`) which contains `R` and is both reflexive and transitive. Here's the math:

R* = ∩ { R′ ⊆ A × A : R′ ⊇ R and R′ is reflexive and transitive }

We know that at least one such `R′` does exist, namely `A×A` itself, so the intersection is well-defined. Any such `R′` must contain `R`, and so `R*` also exists and contains `R`. We must argue that this `R*` is reflexive and transitive.

- Take `a ∈ A`. Every `R′` is reflexive, thus `(a,a) ∈` every `R′`, and so `(a,a) ∈ R*`.
- Take `a,b,c ∈ A` and suppose `(a,b),(b,c) ∈ R*`. This means `(a,b),(b,c) ∈` every `R′`, and thus `(a,c) ∈` every `R′` because each `R′` is transitive. Thus `(a,c) ∈ R*`.

Example: take `R ⊆ N×N` defined by `R = { (n,n+1) : n ∈ N }`. Then `R*` is the less-than-or-equal relation on `N`.
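For a finite `A`, the "follow all chains until nothing changes" computation can be sketched directly; here it is run on a truncated version of the `(n,n+1)` example (the function name `rt_closure` is my own):

```python
def rt_closure(A, R):
    """Reflexive, transitive closure of R ⊆ A×A, for finite A."""
    closure = set(R) | {(a, a) for a in A}       # add all reflexive pairs
    changed = True
    while changed:                               # follow chains until stable
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Truncated version of R = { (n,n+1) : n ∈ N } on {0,...,4}:
# R* is exactly <= on that set.
A = range(5)
R = {(n, n + 1) for n in range(4)}
assert rt_closure(A, R) == {(a, b) for a in A for b in A if a <= b}
```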

#### Generalization

The textbook generalizes this idea. The two properties, reflexivity and transitivity, are argued to be *closure* properties because they are "inclusive" in the sense that adding more pairs to a reflexive or transitive relation does not exclude other pairs already in it. The textbook proves, loosely speaking, that a relation `R` always has a closure under one or more closure properties. Symmetry is also a closure property, and the theorem allows us to define the common closures of a binary relation such as:

- transitive closure (one property)
- reflexive, transitive closure (two properties)
- symmetric, reflexive, transitive closure (three properties)

In contrast, consider the *antisymmetry* property of a relation `R ⊆ A×A`:

`(a,b) ∈ R ⇒ (b,a) ∉ R`

Antisymmetry is not "inclusive": adding the pair `(b,a)` to an antisymmetric relation containing `(a,b)` destroys the property, so it is not a closure property.
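Antisymmetry fails the "inclusive" test: enlarging an antisymmetric relation can break the property, so no smallest antisymmetric superset need exist. A sketch of that failure (the checker name `is_antisymmetric` is my own):

```python
def is_antisymmetric(R):
    """The property above: (a,b) ∈ R implies (b,a) ∉ R."""
    return all((b, a) not in R for (a, b) in R)

R = {(1, 2), (2, 3)}
assert is_antisymmetric(R)

# Enlarging R, as any closure construction would, can break the property:
assert not is_antisymmetric(R | {(2, 1)})
```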