
Ground Instance: Given a set S of clauses, a ground instance of a clause C is a clause obtained by substituting every variable in C by a member of the Herbrand
universe H of S. Atom Set: Let h1, . . . , hn be elements of H. The atom set of S is the set of ground atoms of the form P(h1, . . . , hn) for every n-place predicate P occurring in S.
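To make these constructions concrete, here is a minimal Python sketch (the string-based term encoding, the function names, and the example signature are assumptions for illustration, not from the source) that enumerates the level-i constant sets H0, H1, . . . and the atom set built over them.

```python
from itertools import product

def herbrand_levels(constants, functions, depth):
    """Enumerate the level-i constant sets H0, H1, ..., H_depth.

    constants: set of constant symbols, e.g. {"a"}
    functions: dict mapping each function symbol to its arity, e.g. {"f": 1}
    """
    H = [set(constants) if constants else {"a"}]   # H0; add a fresh constant if S has none
    for _ in range(depth):
        prev = H[-1]
        level = set(prev)
        for f, arity in functions.items():
            for args in product(sorted(prev), repeat=arity):
                level.add(f"{f}({', '.join(args)})")
        H.append(level)
    return H

def atom_set(predicates, H):
    """Ground atoms P(h1, ..., hn) for every n-place predicate P over the terms in H."""
    return {f"{p}({', '.join(args)})"
            for p, arity in predicates.items()
            for args in product(sorted(H), repeat=arity)}

# Hypothetical signature: constant a, one-place function f, one-place predicate P.
H = herbrand_levels({"a"}, {"f": 1}, depth=2)   # H2 = {a, f(a), f(f(a))}
print(sorted(H[-1]))
print(sorted(atom_set({"P": 1}, H[-1])))        # P(a), P(f(a)), P(f(f(a)))
```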

H-Interpretation: An H-interpretation of a set S of clauses is one that satisfies the following conditions:
1. Every constant in S maps to itself.
2. Each n-place function symbol f in S is assigned a function that maps each element (h1, . . . , hn) of Hn to the element f(h1, . . . , hn) of H.

Let the atom set A of S be {A1, . . . , An, . . .}. An H-interpretation can be written as a set

I = {e1, . . . , en, . . .} where ei is either Ai or ¬Ai, i ≥ 1. If ei is Ai, then Ai is assigned true; if ei is ¬Ai, then Ai is assigned false.
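Restricted to a finite prefix {A1, . . . , An} of the atom set, the H-interpretations are exactly the 2^n ways of choosing Ai or ¬Ai for each i. A small sketch under that assumption (the atom names are illustrative):

```python
from itertools import product

def h_interpretations(atoms):
    """Yield every H-interpretation over a finite prefix of the atom set,
    written as a set containing Ai when Ai is true and "~Ai" when it is false."""
    for signs in product((True, False), repeat=len(atoms)):
        yield {a if true else "~" + a for a, true in zip(atoms, signs)}

for I in h_interpretations(["P(a)", "Q(a)"]):
    print(I)   # the 4 interpretations {P(a), Q(a)}, {P(a), ~Q(a)}, {~P(a), Q(a)}, {~P(a), ~Q(a)}
```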
A set S of clauses is unsatisfiable iff S is false under all the H-interpretations of S. To systematically list all these interpretations, we use semantic trees [Robinson, 1968; Kowalski and Hayes, 1969]. The construction of a semantic tree for S, whether manually or mechanically, is the basis for proving the unsatisfiability of S. In fact, constructing a closed semantic tree (defined below) for a set S of clauses is equivalent to finding a proof of the unsatisfiability of S.

Semantic Tree: A semantic tree for a set S of clauses is a tree T in which each edge is attached with a finite set of atoms or negated atoms from S such that:
1. There are only finitely many immediate outgoing edges e1, . . . , en from each node v. Suppose Ci is the conjunction of all the literals attached to edge ei . Then the disjunction of these Ci s is a valid propositional formula.
2. Suppose I(v) is the union of all the sets attached to the edges of the branch of T from the root down to and including node v. Then I(v) does not contain any complementary pair of literals (a pair in which one literal is the negation of the other, as in the set {A, ¬A}).

Complete Semantic Tree and Failure Node: Let the atom set A of S be {A1, . . . , An, . . .}. A complete semantic tree is one in which, for every leaf node L, I(L) contains either Ai or ¬Ai for every i = 1, 2, . . . . A node N in a semantic tree is a failure node if I(N) falsifies a ground instance of a clause in S, and for each ancestor node N′ of N, I(N′) does not falsify any ground instance of a clause in S.

Closed Semantic Tree: A closed semantic tree is one in which every branch ends at a failure node.
Herbrand’s Theorem (Using Semantic Trees): A set S of clauses is unsatisfiable iff, for every complete semantic tree of S, there is a corresponding finite closed semantic tree.
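For a finite set of ground clauses, these definitions can be checked directly: branch on A1, then A2, and so on, and close a branch as soon as the partial interpretation I(v) falsifies some clause (a failure node); if every branch closes, the semantic tree is closed and the set is unsatisfiable. The following sketch assumes a binary semantic tree (edges labeled Ai and ¬Ai) and an illustrative clause encoding; it is one way to realize the idea, not a procedure from the source.

```python
def falsifies(assignment, clause):
    """A partial interpretation falsifies a ground clause iff every literal of the
    clause is assigned and evaluates to false.  Literals are (atom, positive?) pairs."""
    return all(atom in assignment and assignment[atom] != positive
               for atom, positive in clause)

def closed_tree(atoms, clauses, assignment=None):
    """True iff the complete (binary) semantic tree over `atoms` is closed, i.e.
    every branch ends at a failure node, so the ground clause set is unsatisfiable."""
    assignment = assignment or {}
    if any(falsifies(assignment, c) for c in clauses):
        return True                                   # failure node: this branch is closed
    if len(assignment) == len(atoms):
        return False                                  # a full branch falsifies no clause
    atom = atoms[len(assignment)]                     # branch on the next atom Ai / ~Ai
    return all(closed_tree(atoms, clauses, {**assignment, atom: value})
               for value in (True, False))

# Hypothetical ground set {P, ~P v Q, ~Q}: unsatisfiable, so its semantic tree is closed.
S = [{("P", True)}, {("P", False), ("Q", True)}, {("Q", False)}]
print(closed_tree(["P", "Q"], S))   # True
```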

Herbrand’s Theorem: A set S of clauses is unsatisfiable iff there is a finite unsatisfiable set S′ of ground instances of clauses of S.
Given an unsatisfiable set S, if a mechanical procedure can incrementally generate sets S1, S2, . . . and check each Si for unsatisfiability, then it can find a finite number n such that Sn is unsatisfiable. Gilmore’s computer program [Gilmore, 1960] followed this strategy, generating the Si s by replacing the variables in S with the constants in the i-level constant set Hi of S. It then used the multiplication method (for deriving the empty clause) in propositional logic to test the unsatisfiability of each Si, since each Si is a conjunction of ground clauses (with no quantifiers).
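A rough sketch of this level-saturation strategy is given below. To stay short it assumes a function-free signature (so every Hi equals the set of constants and one level suffices) and tests Si by brute-force truth assignment rather than by the multiplication method; the clause encoding and all names are illustrative assumptions.

```python
from itertools import product

# Clauses are frozensets of literals (positive?, predicate, args).  For this sketch
# the variables are assumed to be the names in VARS and the signature is assumed to
# be function-free, so every level-i constant set Hi equals H0, the set of constants.
VARS = {"x", "y", "z"}

def ground_instances(clause, constants):
    """All ground instances of one clause over the given constants."""
    vs = sorted({a for _, _, args in clause for a in args if a in VARS})
    for values in product(sorted(constants), repeat=len(vs)):
        sub = dict(zip(vs, values))
        yield frozenset((pos, p, tuple(sub.get(a, a) for a in args))
                        for pos, p, args in clause)

def propositionally_unsat(ground_clauses):
    """Brute-force test: is the conjunction of the ground clauses unsatisfiable?"""
    atoms = sorted({(p, args) for c in ground_clauses for _, p, args in c})
    for bits in product((True, False), repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        if all(any(model[(p, args)] == pos for pos, p, args in c)
               for c in ground_clauses):
            return False      # a satisfying truth assignment exists
    return True

def ground_and_test(clauses, constants):
    """Generate the set Si of ground instances and test it for unsatisfiability."""
    S_i = {g for c in clauses for g in ground_instances(c, constants)}
    return propositionally_unsat(S_i)

# Hypothetical example: {P(x), ~P(a)} is already unsatisfiable at the first level.
S = [frozenset({(True, "P", ("x",))}), frozenset({(False, "P", ("a",))})]
print(ground_and_test(S, {"a"}))   # True
```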

Proving Unsatisfiability of a Clause Set Using the Resolution Procedure: The above proof technique based on Herbrand’s theorem has a major shortcoming in that it requires the generation of sets of ground instances of clauses, and the number of elements in these sets grows exponentially. We now present the resolution principle [Robinson, 1965], which can be applied to any set of clauses (whether ground or not) to test its unsatisfiability. We need several additional definitions.

Substitution: A substitution is a finite set of the form {t1/v1, . . . , tn/vn} where the vi s are distinct variables and each ti is a term different from the corresponding vi. Here, there are n substitution components. Each vi is to be replaced by the corresponding ti. The substitution is a ground substitution if the ti s are ground terms. Greek letters are used to denote substitutions.
Example. θ1 = {a/y} is a substitution with one component. θ2 = {a/y, f (x)/x, g( f (b))/z} is a substitution with three components.
Variant: A variant (also called a copy or instance), denoted Cθ, of a clause C is any clause obtained from C by a one-to-one replacement of variables specified by the substitution θ. In other words, a variant of C is either C itself or C with its variables renamed.
Example. Continuing the above example, let C = (¬(R(x) ∧ O(y)) ∨ D(x, y)).
Then Cθ1 = ¬(R(x) ∧ O(a)) ∨ D(x, a).
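Applying a substitution is a simple structural walk over a term. The sketch below (term representation assumed: strings for variables and constants, tuples for compound terms) applies θ1 and θ2 from the examples above; note that the components are applied simultaneously, in a single pass.

```python
def apply(term, theta):
    """Apply substitution theta (a dict mapping variables to terms) to a term.
    Terms: a string (variable or constant) or a tuple (functor, arg1, ..., argn).
    The components are applied simultaneously, so x -> f(x) does not loop."""
    if isinstance(term, str):
        return theta.get(term, term)
    functor, *args = term
    return (functor, *(apply(a, theta) for a in args))

theta1 = {"y": "a"}
theta2 = {"y": "a", "x": ("f", "x"), "z": ("g", ("f", "b"))}

print(apply(("D", "x", "y"), theta1))   # ('D', 'x', 'a'), i.e. D(x, a) as in the example
print(apply(("P", "x", "z"), theta2))   # ('P', ('f', 'x'), ('g', ('f', 'b')))
```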

Separating Substitutions: A pair of substitutions θ1 and θ2 for a pair of clauses C1 and C2 is called separating if C1θ1 is a variant of C1, C2θ2 is a variant of C2, and C1θ1 and C2θ2 have no common variable.
Unifier: A substitution θ is a unifier for a set C = {C1, . . . , Cn} of expressions iff C1θ = · · · = Cnθ. A set C is unifiable if it has a unifier. A unifier θ for set C is a most general unifier (MGU) iff, for every unifier ρ for set C, there is a substitution λ such that ρ = θ ◦ λ.
Example. Let S = {P(a, x, z), P(y, f (b), g(c))}. Since the substitution θ = {a/y, f (b)/x, g(c)/z} is a unifier for S, S is unifiable.
Unification Theorem: If a finite set C of expressions is unifiable, then it has a most general unifier.
Now we are ready to describe an algorithm for finding a most general unifier for a finite set C of nonempty expressions (Figure 2.8). The algorithm also determines if the set is not unifiable.
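Since Figure 2.8 is not reproduced here, the following is a minimal sketch of a standard Robinson-style unification algorithm under the term representation used in the substitution sketch above; it returns a most general unifier as a dictionary, or None when the expressions are not unifiable. The set of variable names is an assumption made for the sketch.

```python
def is_var(t):
    # Assumption for this sketch: variables are the lowercase names listed here.
    return isinstance(t, str) and t in {"u", "v", "w", "x", "y", "z"}

def walk(t, theta):
    """Follow variable bindings in theta until an unbound variable or non-variable term."""
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def occurs(v, t, theta):
    """Occur check: does variable v occur in term t under theta?"""
    t = walk(t, theta)
    if t == v:
        return True
    return not isinstance(t, str) and any(occurs(v, a, theta) for a in t[1:])

def unify(s, t, theta=None):
    """Return a most general unifier extending theta, or None if s and t do not unify."""
    theta = dict(theta or {})
    s, t = walk(s, theta), walk(t, theta)
    if s == t:
        return theta
    if is_var(s):
        return None if occurs(s, t, theta) else {**theta, s: t}
    if is_var(t):
        return None if occurs(t, s, theta) else {**theta, t: s}
    if isinstance(s, str) or isinstance(t, str) or s[0] != t[0] or len(s) != len(t):
        return None                 # constant clash, functor clash, or arity mismatch
    for a, b in zip(s[1:], t[1:]):  # unify the arguments left to right
        theta = unify(a, b, theta)
        if theta is None:
            return None
    return theta

# The example above: P(a, x, z) and P(y, f(b), g(c)).
print(unify(("P", "a", "x", "z"), ("P", "y", ("f", "b"), ("g", "c"))))
# {'y': 'a', 'x': ('f', 'b'), 'z': ('g', 'c')}
```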
Resolvent: Let C1 and C2 be two clauses with no common variable and with separating substitutions θ1 and θ2, respectively. Let B1 ⊆ C1 and B2 ⊆ C2 be two nonempty sets of literals such that B1θ1 and ¬B2θ2 have a most general unifier ρ. Then the clause ((C1 − B1) ∪ (C2 − B2))ρ is a binary resolvent of C1 and C2, and the clause ((C1 − B1)θ1 ∪ (C2 − B2)θ2)ρ is a resolvent of C1 and C2.
Given a set S of clauses, R(S) contains S and all resolvents of clauses in S, that is, R(S) = S ∪ {T : T is a resolvent of two clauses in S}. For each i ≥ 0, R0(S) = S, Ri+1(S) = R(Ri(S)), and R∗(S) = ∪{Ri(S) | i ≥ 0}.
Example
Let C1 = ¬A(x) ∨ O(x) and C2 = ¬O(a). A resolvent of C1 and C2 is ¬A(a).
Let C3 = ¬R(x) ∨ ¬O(a) ∨ D(x, a) and C4 = R(b). A resolvent of C3 and C4 is ¬O(a) ∨ D(b, a).
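Both resolvents in this example can be computed mechanically. The sketch below (clause encoding assumed; it reuses unify and its helpers from the unification sketch above) forms the binary resolvents of two clauses that share no variables; iterating it over a clause set and collecting the results would compute R(S), R2(S), . . . until, for an unsatisfiable set, the empty clause appears.

```python
def subst(term, theta):
    """Apply a unifier returned by unify (see the unification sketch above) to a term."""
    if isinstance(term, str):
        return subst(theta[term], theta) if term in theta else term
    return (term[0], *(subst(a, theta) for a in term[1:]))

def resolvents(C1, C2):
    """Binary resolvents of two clauses, represented as frozensets of (sign, atom)
    literals with sign "+" or "-", assuming C1 and C2 share no variables.
    Relies on unify from the unification sketch above being in scope."""
    out = set()
    for s1, a1 in C1:
        for s2, a2 in C2:
            if s1 == s2:
                continue                      # only complementary literals resolve
            theta = unify(a1, a2)
            if theta is None:
                continue
            rest = (C1 - {(s1, a1)}) | (C2 - {(s2, a2)})
            out.add(frozenset((s, subst(a, theta)) for s, a in rest))
    return out

# The example: C1 = ~A(x) v O(x) and C2 = ~O(a) give the resolvent ~A(a).
C1 = frozenset({("-", ("A", "x")), ("+", ("O", "x"))})
C2 = frozenset({("-", ("O", "a"))})
print(resolvents(C1, C2))   # {frozenset({('-', ('A', 'a'))})}
```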