
Simulations: We show that having multiple tapes or having nondeterminism does not change the inherent power of a Turing machine.

Multi-tape vs. single-tape Turing machines: A k-tape Turing machine simply maintains k instantaneous descriptions. In each step, its δ function specifies how each ID evolves. One can simulate this behavior on a single-tape Turing machine by placing the k logical tapes as segments, end to end, on a single actual tape, and remembering where the k tape heads are through “dots” kept on each segment. One step of the k-tape Turing machine then becomes a series of k activities conducted one after the other on the k tape segments.

Nondeterministic Turing machines: A nondeterministic Turing machine can be simulated on a single-tape deterministic Turing machine; however, the simulation is much more convenient to explain on a multi-tape deterministic Turing machine. For ease of exposition, we will carry this explanation out with respect to the example given in Figure 15.3.
We will employ a 3-tape deterministic Turing machine to simulate the NDTM in Figure 15.3 (hereafter called ww ndtm). The first tape maintains a read-only copy of the initial input string given to ww ndtm. We call it the input tape. The second tape maintains a tree path in the nondeterministic computation tree of ww ndtm. We call it the tree path tape. The third tape is the “working tape.”

Tree path conventions: Notice that state q2 is the only nondeterministic state in ww ndtm; and hence, the computation tree of ww ndtm will have a binary nondeterministic split every so often—whenever the nondeterminism in q2 is invoked. The tree paths in ww ndtm can be specified as follows, with the associated computations shown next to it:
ε—the empty computation beginning at q0,
0—the computation q0 → q1 of length 1,
1—the computation q0 → q2,
1, 0—the computation q0 → q2 → q2, and
1, 1—the computation q0 → q2 → q3.
We will uniformly use the 0 path for a “self” loop (if any), and 1 for an exit path; for example, 1, 0 and 1, 1 above. The only exception to this convention is at state q4, where there are three exits, which we number 0, 1, and 2, going left to right. Therefore, note that the tree path 1, 1, 1, 4, 0 refers to the walk q0, q2, q3, q4, q13, and back to q13. Notice that we do not have a path 0, 0 or 0, 1, as state q1 has no successor. When the next path in numeric order is generated below, we skip over such paths that do not exist in the computation tree and go on to the next one in numeric order.
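Numeric-order generation of tree paths can be sketched as follows. Since Figure 15.3 is not reproduced here, the successor table below is a hypothetical stand-in consistent only with what the text states: q1 is a dead end, q2 has a self loop (digit 0) and an exit (digit 1), and q4 has three exits numbered 0, 1, 2; the other entries are invented placeholders:

```python
# Sketch of numeric-order tree-path enumeration. The successor table is a
# hypothetical stand-in for Figure 15.3; only q1, q2, and q4 reflect facts
# given in the text, the rest is made up for illustration.
from itertools import count, product

succ = {                             # state -> successors, indexed by path digit
    "q0": ["q1", "q2"],
    "q1": [],                        # no successors: paths through q1 stop here
    "q2": ["q2", "q3"],              # 0 = self loop, 1 = exit
    "q3": ["q4"],
    "q4": ["q5", "q6", "q13"],       # three exits, numbered left to right
    "q5": [], "q6": [], "q13": ["q13"],
}

def exists(path, start="q0"):
    """Does this digit sequence name a path in the computation tree?"""
    state = start
    for d in path:
        nxt = succ.get(state, [])
        if d >= len(nxt):
            return False             # no such branch at this state: skip
        state = nxt[d]
    return True

def paths(start="q0"):
    """Yield existing tree paths in numeric order: shorter first, then by digits."""
    yield ()                         # the empty computation
    for length in count(1):
        found = False
        for p in product(range(3), repeat=length):   # digits 0..2 suffice here
            if exists(p, start):
                found = True
                yield p
        if not found:
            return                   # tree exhausted (never happens here: q2 loops)

gen = paths()
print([next(gen) for _ in range(6)])
# [(), (0,), (1,), (1, 0), (1, 1), (1, 0, 0)]
```

The first five outputs match the paths ε, 0, 1, "1, 0", "1, 1" listed above; nonexistent paths such as 0, 0 are silently skipped.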

The Simulation itself: Here is how the simulation proceeds, with the second tape (the tree path tape) initially containing ε:

  • If the machine has accepted, accept and halt.
  • Copy the first (input) tape to the third (working) tape.
  • Generate the next tree path in numeric order on the tree path tape.
  • Pursue the δ function of the NDTM according to the tree path specified on the tree path tape, making the required changes to the working tape. If, at any point, the tree path does not exist or the symbol under the tape head does not match what the δ function expects, abandon this path and move on to the next option in the tree path enumeration.
  • Repeat the above steps.
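The driver loop above can be sketched in Python. This is a generic sketch under our own conventions, not the ww machine of Figure 15.3: the NDTM is a table mapping (state, symbol) to a list of options, the generated digit sequence plays the role of the tree path tape, and a fresh working copy of the input is made for each path. The toy machine at the end is ours, invented for illustration:

```python
# Deterministic driver for a nondeterministic TM, as a sketch: try tree paths
# in numeric order; each path's digits resolve the nondeterministic choices.
from itertools import product

BLANK = "_"

def run_path(delta, input_str, path, q0="q0", accept="qA"):
    """Replay one tree path on a fresh working copy of the input."""
    tape, head, state = list(input_str) or [BLANK], 0, q0
    for d in path:
        if state == accept:
            break
        options = delta.get((state, tape[head]), [])
        if d >= len(options):
            return False                 # path does not exist: abandon it
        state, write, move = options[d]
        tape[head] = write
        head += move
        if head == len(tape):
            tape.append(BLANK)           # extend tape with a blank on demand
        if head < 0:
            return False                 # fell off the left end: reject
    return state == accept

def simulate(delta, input_str, max_len=12):
    """Accept iff some tree path of length <= max_len leads to acceptance."""
    width = max(len(v) for v in delta.values())
    for length in range(max_len + 1):
        for path in product(range(width), repeat=length):
            if run_path(delta, input_str, path):
                return True
    return False   # no accepting path found up to max_len

# Toy NDTM (ours): guess whether to skip a symbol or check that "01" starts here.
delta = {
    ("q0", "0"): [("q0", "0", 1), ("q1", "0", 1)],   # 0 = skip, 1 = commit
    ("q0", "1"): [("q0", "1", 1)],
    ("q1", "1"): [("qA", "1", 1)],
}
print(simulate(delta, "1001"), simulate(delta, "110"))  # True False
```

Note the `max_len` cutoff: a real DTM simulation has no such bound and simply diverges on inputs the NDTM neither accepts nor rejects; we bound it here only to keep the sketch terminating.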

For example, if the input 0, 1, 0, 1 is given on the input tape, the simulation will try many paths, all of which fail except the one that picks the correct midpoint and checks around it. In particular, note that in state q2, for input string 0, 1, 0, 1, the self-loop option can be exercised at most three times; after four self-loop turns, the Turing machine is faced with a blank on the tape and gets stuck (rejects). The outcome of this simulation is that the given NDTM accepts an input if and only if the DTM that simulates it as described above accepts that input.

The astute reader will, of course, have noticed that we are repeatedly walking the exponential-sized computation tree of the nondeterministic Turing machine. In fact, we are not even performing a breadth-first search (perhaps the first algorithm a reader would think of) because

  • Performing breadth-first search (BFS) requires maintaining the search frontier, which can itself grow exponentially large.
  • Even after the extra effort of maintaining a BFS frontier, the overall gain is not worth it: we would essentially achieve an O(2^n) computation instead of an O(2^(n+1)) computation, a mere factor-of-two improvement.
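The factor-of-two claim can be sanity-checked by counting nodes (our arithmetic, not from the text): a full binary tree of depth n has 2^(n+1) - 1 nodes; BFS touches each node once, while re-walking the tree once per path length touches roughly twice as many in total:

```python
# Quick check: compare total nodes touched by a single BFS of a full binary
# tree of depth n against re-walking the tree once for each path length.

def bfs_work(n):
    return 2 ** (n + 1) - 1          # nodes in a full binary tree of depth n

def rewalk_work(n):
    # walk the tree to depth d, for each d = 0..n
    return sum(2 ** (d + 1) - 1 for d in range(n + 1))

for n in (10, 20):
    print(rewalk_work(n) / bfs_work(n))   # ratio tends to 2 as n grows
```

Since the ratio tends to 2, maintaining a BFS frontier saves only a constant factor over the re-walking strategy; the computation remains exponential either way.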