1 of 64

CMPSC 442: Artificial Intelligence

Lecture 8. Propositional Logic

Rui Zhang

Spring 2024


AIMA Chapter 7


Outline

Logic and Logical Agents (AIMA 7.1, 7.2, 7.3)

Propositional Logic (AIMA 7.4, 7.5, 7.6)

  • Model Checking
  • Theorem Proving
  • Inference Rules
  • Resolution


https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions


Can GPT answer this logical question (GPT-3.5)?


Can GPT answer this logical question (GPT-4)?


Can GPT answer this logical question (GPT-4)?


'Deduction' vs. 'Induction' vs. 'Abduction'


https://www.merriam-webster.com/grammar/deduction-vs-induction-vs-abduction


Wumpus World

Environment

  • Squares adjacent to the wumpus are smelly (Stench)
  • Squares adjacent to a pit are breezy (Breeze)
  • Glitter iff gold is in the same square
  • Shooting kills the wumpus if you are facing it in a straight line
  • Shooting uses up the only arrow
  • Grabbing picks up the gold if in the same square
  • The agent dies on entering a pit or a cell with a live wumpus
  • The agent can climb out of the cave from cell [1,1]

Actuators

  • Forward, TurnLeft, TurnRight, Shoot, Grab, Climb only from [1,1]

Sensors

  • Stench, Breeze, Glitter, Bump (when walk into a wall), Scream (when wumpus is killed)


Wumpus World

Performance measure

  • +1000 out of the cave with gold
  • -1000 falling into a pit or being eaten by the wumpus
  • -1 per step
  • -10 for using up the arrow


Wumpus World

  • Partially Observable: Only local perception
  • Deterministic: Action outcomes are exactly specified
  • Non-episodic / Sequential: Sequential at the level of actions
  • Static: Wumpus and Pits do not change while agent decides on action
  • Discrete
  • Single-agent


Agents with Explicit Knowledge Representation

Knowledge Base (KB) = A set of sentences (declarations) in a formal language

Adding to the KB

  • Agent Tells the KB what it perceives: Si
  • Inference: derive new statements from KB + Si

Using the KB

  • Agent asks the KB what action to take
  • Declaration of the action to take


Exploring Wumpus World: an Initial Action

Figure (a):

  • Agent in [1,1]
  • Percept: [None,None,None,None,None]
  • Inferences added to KB
    • [1,2] is OK
    • [2,1] is OK

Figure (b):

  • Actuator: Right → Agent in [2,1]
  • Percept: [None,Breeze,None,None,None]
  • Inferences added to KB
    • ?Pit in [2,2]
    • ?Pit in [3,1]


Exploring Wumpus World:

Figure (a):

  • Left → Agent in [1,1]
  • Up → Agent in [1,2]
  • Percept: [Stench,None,None,None,None]
  • Inferences added to KB
    • [2,2] is OK (not ?Pit in [2,2])
    • Wumpus in [1,3]

Figure (b):

  • Right → Agent in [2,2]
  • Up → Agent in [2,3]
  • Percept: [Stench,Breeze,Glitter,None,None]
  • Inferences added to KB
    • Gold in [2,3]
    • ?P in [2,4]
    • ?P in [3,3]


Generic Knowledge-Based (KB) Agent


Formalizing the KB

The Wumpus world illustrates a logical agent: the agent takes an action, which results in a new percept, leading to new facts to add to the KB

  • Facts can follow directly from percepts
  • Facts can follow from other facts

How could such an agent be implemented?

  • A logic language provides a way to represent and reason about facts
  • Propositional Logic is introduced next as a first step
  • First-order Logic is the second step


General Properties of a Logical Formalism

Logic: a way to formulate statements, interpret them, and derive conclusions

  • Syntax: vocabulary, rules of combination to make statements
  • Semantics: truth of statements in possible worlds, or models
  • Entailment: the relation that one statement necessarily follows from others


Semantics

Grounding: The connection between logical reasoning processes and the real environment in which the agent exists.


Models

We use the term model in place of “possible world”.

A logical model m is a formally structured world with respect to which truth can be evaluated.

Satisfaction

  • m is a model of a sentence α if α is true in m
  • M(α) is the set of all models of α

Entailment

  • KB ⊨ α iff M(KB)⊆M(α)
  • Example:
    • KB = Giants won and Reds lost
    • α = Giants won
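The set-inclusion definition of entailment above can be checked directly in a few lines of Python. This is a sketch, not part of the slides: sentences are represented as predicates over truth assignments, and the symbol names G ("Giants won") and R ("Reds lost") are illustrative.

```python
from itertools import product

symbols = ["G", "R"]  # G = "Giants won", R = "Reds lost"

def models(sentence):
    """Return the set of truth assignments in which `sentence` is true."""
    return {
        vals
        for vals in product([True, False], repeat=len(symbols))
        if sentence(dict(zip(symbols, vals)))
    }

KB = lambda m: m["G"] and m["R"]   # Giants won and Reds lost
alpha = lambda m: m["G"]           # Giants won

# KB |= alpha iff M(KB) is a subset of M(alpha)
print(models(KB) <= models(alpha))  # True
```

Note the asymmetry: alpha does not entail KB, since M(alpha) also contains the model where the Reds won.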


Entailment in Wumpus World

  • Eight possibilities for [1,2], [2,2], [3,1]


Entailment in Wumpus World

  • Eight possibilities for [1,2], [2,2], [3,1]

KB

  • Agent in [1,1], Percept [N,N,N,N,N]
  • Right → Agent [2,1], Percept [N,B,N,N,N]


Entailment in Wumpus World

  • Eight possibilities for [1,2], [2,2], [3,1]

KB

  • Agent in [1,1], Percept [N,N,N,N,N]
  • Right → Agent [2,1], Percept [N,B,N,N,N]
  • 3 (in red) consistent with KB


Model Checking and Wumpus World

Statement to check: α1 ([1,2] is safe)

KB ⊨ α1


Model Checking and Wumpus World

Statement to check: α2 ([2,2] is safe)

KB ⊭ α2
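Both checks can be reproduced by brute-force model enumeration. A minimal sketch, assuming the only unknowns are pits in [1,2], [2,2], and [3,1] (encoded as P12, P22, P31); the KB encodes "no breeze in [1,1]" and "breeze in [2,1]":

```python
from itertools import product

# The three frontier squares whose pit status is unknown.
squares = ["P12", "P22", "P31"]

def kb(m):
    # No breeze in [1,1]  =>  no pit in [1,2]
    # Breeze in [2,1]     =>  a pit in [2,2] or [3,1]
    return (not m["P12"]) and (m["P22"] or m["P31"])

alpha1 = lambda m: not m["P12"]   # "[1,2] is safe"
alpha2 = lambda m: not m["P22"]   # "[2,2] is safe"

def entails(kb, alpha):
    """Check KB |= alpha by enumerating all 2^3 = 8 models."""
    for vals in product([True, False], repeat=len(squares)):
        m = dict(zip(squares, vals))
        if kb(m) and not alpha(m):
            return False
    return True

print(entails(kb, alpha1))  # True:  KB |= alpha1
print(entails(kb, alpha2))  # False: KB does not entail alpha2
```

Only 3 of the 8 models satisfy the KB, and α1 holds in all of them, while α2 fails in the model with pits in both [2,2] and [3,1].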


Propositional Logic

The syntax of propositional logic defines the allowable sentences. Sentences are true or false.

The atomic sentences consist of a single proposition symbol. Each such symbol stands for a proposition that can be true or false.

  • A
  • B

Logical operators (connectives) combine simple sentences into complex sentences

  • negation: ¬A
  • conjunction: A ∧ B
  • disjunction: A ∨ B
  • implies: A ⇒ B
  • if and only if: A ⇔ B


Syntax of Propositional Logic

The proposition symbols S, S1, S2, etc. are sentences. If S, S1, and S2 are sentences, then:

  • ¬S is a sentence (negation)
  • S1 ∧ S2 is a sentence (conjunction)
  • S1 ∨ S2 is a sentence (disjunction)
  • S1 ⇒ S2 is a sentence (implication)
  • S1 ⇔ S2 is a sentence (biconditional)


Translating between Propositions and English


Truth Table for Connectives

For 3 statements (P1, P2, P3), there are 2^3 = 8 combinations of truth values, i.e., 8 models

Arbitrary sentences are evaluated recursively. If P1,2 = false, P2,2 = true, P3,1 = false:

  • ¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
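This recursive evaluation can be sketched in Python. The nested-tuple sentence representation below is an assumption made for illustration, not something from the slides:

```python
def pl_true(sentence, model):
    """Recursively evaluate a propositional sentence in a model.
    A sentence is a symbol (str) or a tuple: ('not', s), ('and', a, b),
    ('or', a, b), ('implies', a, b), ('iff', a, b)."""
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    a, b = (pl_true(arg, model) for arg in args)
    if op == "and":     return a and b
    if op == "or":      return a or b
    if op == "implies": return (not a) or b
    if op == "iff":     return a == b
    raise ValueError(f"unknown operator: {op}")

# ¬P1,2 ∧ (P2,2 ∨ P3,1) with P1,2 = false, P2,2 = true, P3,1 = false
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
print(pl_true(s, {"P12": False, "P22": True, "P31": False}))  # True
```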


Backus-Naur Form (BNF) Grammar of Propositional Logic


Operator Precedence

The BNF grammar by itself is ambiguous; a sentence with several operators can be parsed by the grammar in multiple ways.

To eliminate the ambiguity we define a precedence for each operator.

  • Unary operators (¬) precede binary operators (∧∨⇒⇔)
  • Conjunction and disjunction (∧∨) precede conditionals (⇒⇔)
  • Conjunction (∧) precedes disjunction (∨)
  • Implication (⇒) precedes biconditional (⇔)


Wumpus World Sentences

Let Pi,j be true if there is a pit in [i, j]

Let Bi,j be true if there is a breeze in [i, j]

¬ P1,1

¬ B1,1

Pits cause breezes in adjacent squares

B1,1 ⇔ (P1,2 ∨ P2,1)

B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)


Limitations of Propositional Logic

Can only state specific truths

  • For example, B2,1 (location 2,1 is breezy)

Cannot state generic truths

  • For example, cells next to pits are breezy


Logic Inference

Determine Entailment: Given a KB, how can the agent know whether a sentence is true or false? In other words, how can we know whether the KB entails the sentence?

1. Model Checking

  • Truth table enumeration: for n proposition symbols, O(2^n) models
  • Enumerating models and showing that the sentence must hold in all models
  • This is based on the definition of entailment

2. Theorem Proving

  • Natural Deduction by application of inference rules
  • Legitimate (sound) generation of new sentences from old
  • Proof = a sequence of inference rule applications
    • Use inference rules as operators in a standard search algorithm
  • Typically requires transformation of sentences into a normal form


An example in the Wumpus World

KB:

Given the KB, we want to prove that there is no pit in [1,2], i.e., ¬P1,2


Model Checking


Soundness and Completeness

Soundness

An inference algorithm that derives only entailed sentences is called sound or truth-preserving

Completeness

An inference algorithm is complete if it can derive any sentence that is entailed

For example, Model Checking is sound and complete.

  • sound, because it directly implements the definition of entailment
  • complete, because it works for any KB and α and always terminates: there are only finitely many models to examine


Validity, Tautologies, and Deduction Theorem

A sentence is valid if it is true in all models, e.g., the following are valid sentences

  • A∨¬A
  • A⇒A
  • (A∧(A⇒B))⇒B

Valid sentences are also known as tautologies—they are necessarily true.

Validity is connected to inference via the Deduction Theorem:

  • A ⊨ B if and only if (A ⇒ B) is valid


Satisfiability

A sentence is satisfiable if it is true in some model

  • e.g., A ∨ B

Determining satisfiability (the SAT problem) is NP-complete

A sentence is unsatisfiable if it is true in no models

  • e.g., A ∧ ¬A

Validity and satisfiability are connected

  • α is valid iff ¬ α is unsatisfiable
  • contrapositively, α is satisfiable iff ¬ α is not valid

Satisfiability is connected to inference via reductio ad absurdum (literally, “reduction to an absurd thing”)

  • A ⊨ B if and only if (A ∧¬ B) is unsatisfiable
  • Also known as proof by contradiction or proof by refutation
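The three formulations (direct entailment, the Deduction Theorem, and refutation) can be checked to agree by enumeration. A sketch, where the sentences A = P ∧ (P ⇒ Q) and B = Q are illustrative choices:

```python
from itertools import product

syms = ["P", "Q"]

def all_models():
    for vals in product([True, False], repeat=len(syms)):
        yield dict(zip(syms, vals))

A = lambda m: m["P"] and ((not m["P"]) or m["Q"])   # P and (P => Q)
B = lambda m: m["Q"]

entailed   = all(B(m) for m in all_models() if A(m))           # A |= B
valid_impl = all((not A(m)) or B(m) for m in all_models())     # (A => B) is valid
refuted    = not any(A(m) and not B(m) for m in all_models())  # (A and not B) is unsat

print(entailed, valid_impl, refuted)  # True True True
```

All three answers always coincide, which is exactly what the Deduction Theorem and reductio ad absurdum assert.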


Inference Rules

  • Modus Ponens: if A is true, and A ⇒ B is true, then B is true

  • And-Elimination: from a conjunction, any of the conjuncts can be inferred

  • Logical equivalence: Rewrite Rules, for example


Logical Equivalence: Rewrite rules


Query a Wumpus KB: Prove ¬P1,2


Uninformed Search to Derive Proofs

We found this proof by hand, but we can apply any of the search algorithms in Chapter 3 to find a sequence of steps that constitutes a proof. We just need to define a proof problem as follows:

  • Initial State: The sentences in the initial knowledge base
  • Actions: Apply inference rules where a KB sentence matches the l.h.s.
  • Result: Add r.h.s. of matched rules to KB
  • Goal: A state containing the sentence to prove

Sound, but not necessarily complete with the inference rules presented so far

  • Some entailed sentences cannot be derived with these rules alone; e.g., if biconditional elimination is removed, the proof on the previous slide fails


Monotonicity


Proof by Resolution

Inference rules covered in previous slides (Modus Ponens, And-Elimination, Logical Equivalence) are sound, but not necessarily complete

So, let us introduce a single inference rule, resolution, that yields a complete inference algorithm when coupled with any complete search algorithm.


Unit Resolution: A First Step towards Completeness

From the clause ℓ1 ∨ ⋯ ∨ ℓk and the unit clause m, infer ℓ1 ∨ ⋯ ∨ ℓi−1 ∨ ℓi+1 ∨ ⋯ ∨ ℓk, where each ℓ is a literal, and ℓi and m are complementary literals (i.e., one is the negation of the other).

For example, we can do unit resolution over P2,2 in these two clauses:

P1,1 ∨ P2,2 ∨ P3,1 and ¬P2,2

to get the following resolvent:

P1,1 ∨ P3,1

In English: if there’s a pit in one of [1,1], [2,2], and [3,1], and it’s not in [2,2], then it’s in [1,1] or [3,1].


Resolution

Generalization of unit resolution

Two clauses can be combined to produce a new clause as follows

  • If the first clause contains a literal ℓ, and the second clause contains its negation ¬ℓ
  • Then the inference step produces a new single clause that includes all the literals from both clauses except ℓ and ¬ℓ
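A sketch of this rule in Python. Clauses are represented as frozensets of string literals, with a leading '~' marking negation; both representation choices are my own, not from the slides:

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literals)."""
    neg = lambda lit: lit[1:] if lit.startswith("~") else "~" + lit
    resolvents = []
    for lit in c1:
        if neg(lit) in c2:
            # Keep every literal from both clauses except the complementary pair.
            resolvents.append((c1 - {lit}) | (c2 - {neg(lit)}))
    return resolvents

# Unit resolution over P22: (P11 or P22 or P31) with (~P22) gives (P11 or P31)
c1 = frozenset({"P11", "P22", "P31"})
c2 = frozenset({"~P22"})
print(sorted(resolve(c1, c2)[0]))  # ['P11', 'P31']
```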


Conjunctive Normal Form (CNF)

Every sentence of propositional logic is logically equivalent to a conjunction of clauses, where a clause is a disjunction of literals.

A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form or CNF.


Converting to Conjunctive Normal Form (CNF)

Resolution Requires Conjunctive Normal Form (CNF): the input to resolution consists of two clauses (implicit conjunction) that are each disjunctions of literals

A procedure for converting all propositions to conjunctive normal form (CNF)

  • Apply bi-conditional elimination if applicable
  • Apply implication elimination if applicable
  • Move all ¬ inward to literals (double-negation elimination; de Morgan)
  • Apply distributivity of ∨ over ∧
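The four steps can be sketched as three passes over a nested-tuple sentence representation (an illustrative encoding, not from the slides); each helper implements one or two of the steps above:

```python
def to_cnf(s):
    """Convert a sentence, e.g. ('iff', 'B11', ('or', 'P12', 'P21')), to CNF."""
    s = eliminate_iff_implies(s)
    s = move_not_inward(s)
    return distribute_or_over_and(s)

def eliminate_iff_implies(s):
    if isinstance(s, str):
        return s
    op, *args = s
    args = [eliminate_iff_implies(a) for a in args]
    if op == "iff":       # a <=> b  becomes  (~a or b) and (~b or a)
        a, b = args
        return ("and", ("or", ("not", a), b), ("or", ("not", b), a))
    if op == "implies":   # a => b  becomes  ~a or b
        a, b = args
        return ("or", ("not", a), b)
    return (op, *args)

def move_not_inward(s):
    if isinstance(s, str):
        return s
    op, *args = s
    if op == "not" and not isinstance(args[0], str):
        inner_op, *inner = args[0]
        if inner_op == "not":   # double-negation elimination
            return move_not_inward(inner[0])
        if inner_op == "and":   # de Morgan
            return ("or", *[move_not_inward(("not", x)) for x in inner])
        if inner_op == "or":
            return ("and", *[move_not_inward(("not", x)) for x in inner])
    if op == "not":
        return s
    return (op, *[move_not_inward(a) for a in args])

def distribute_or_over_and(s):
    if isinstance(s, str) or s[0] == "not":
        return s
    op, a, b = s[0], distribute_or_over_and(s[1]), distribute_or_over_and(s[2])
    if op == "or":
        if not isinstance(a, str) and a[0] == "and":
            return ("and", distribute_or_over_and(("or", a[1], b)),
                           distribute_or_over_and(("or", a[2], b)))
        if not isinstance(b, str) and b[0] == "and":
            return ("and", distribute_or_over_and(("or", a, b[1])),
                           distribute_or_over_and(("or", a, b[2])))
    return (op, a, b)

print(to_cnf(("iff", "B11", ("or", "P12", "P21"))))
```

On B1,1 ⇔ (P1,2 ∨ P2,1) this yields the three clauses (¬B1,1 ∨ P1,2 ∨ P2,1), (¬P1,2 ∨ B1,1), and (¬P2,1 ∨ B1,1), matching the example proof later in the lecture.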


CNF Example

Converting the sentence B1,1 ⇔ (P1,2 ∨ P2,1) into CNF


Resolution-based Theorem Prover

For any sentences A and B in propositional logic, the following algorithm based on resolution can decide whether A ⊨ B

  • Step 1: Start with the sentence (A ∧ ¬B)
  • Step 2: Convert (A ∧ ¬B) into conjunctive normal form (CNF)
  • Step 3: Apply resolution
  • Step 4: The process continues until one of two things happens:
    1. There are no new clauses that can be added, in which case A does not entail B
    2. Two clauses resolve to yield the empty clause, in which case A entails B, i.e., we prove by contradiction
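A minimal sketch of this refutation loop, with clauses as frozensets of string literals and '~' for negation (my own encoding); the KB used below is the a, a ⇒ b example from the next slide:

```python
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolution(clauses):
    """Decide whether the clause set (KB plus negated query) is unsatisfiable.
    Returns True iff the empty clause is derivable, i.e., the KB entails the query."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci == cj:
                    continue
                for lit in ci:
                    if neg(lit) in cj:
                        resolvent = (ci - {lit}) | (cj - {neg(lit)})
                        if not resolvent:
                            return True       # empty clause: contradiction found
                        new.add(resolvent)
        if new <= clauses:
            return False                      # no new clauses: no entailment
        clauses |= new

# KB = {a, a => b} = {a, ~a or b}; query b, so add ~b and test unsatisfiability
kb = [frozenset({"a"}), frozenset({"~a", "b"}), frozenset({"~b"})]
print(pl_resolution(kb))  # True: KB entails b
```

The loop always terminates because only finitely many clauses can be built from a finite set of proposition symbols.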


A quick example

Suppose our KB is

  • a
  • a ⇒ b

Now consider the following five queries over this KB:


Query | True or False (judged by you) | Resolution Result | How to interpret the Resolution Result
a | True | empty clause | KB entails a
¬a | False | no empty clause | KB does not entail ¬a (we cannot interpret this as KB entails a)
b | True | empty clause | KB entails b
c | Unknown / False | no empty clause | KB does not entail c (we cannot interpret this as KB entails ¬c)
¬c | Unknown / False | no empty clause | KB does not entail ¬c (we cannot interpret this as KB entails c)


Example Proof

KB = (B1,1 ⇔ (P1,2 ∨ P2,1 )) ∧ ¬ B1,1

Prove: ¬ P1,2

CNF: (¬P1,2 ∨ B1,1) ∧ (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P2,1 ∨ B1,1) ∧ ¬B1,1


We get an empty clause, so KB entails ¬P1,2


Resolution Algorithm


Resolution is Complete

Resolution gives a complete inference algorithm when coupled with any complete search algorithm (See AIMA 7.5 for proof).


Horn Clauses

In many practical situations, however, the full power of resolution is not needed. Some real-world knowledge bases satisfy restrictions on the form of the sentences they contain, which enables the use of a more restricted and efficient inference algorithm. One such restricted form is the Horn clause.

A Horn clause is a disjunction of literals of which at most one is positive.

  • Definite clause: a disjunction of literals of which exactly one is positive.
    • For example, the clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause, whereas (¬B1,1 ∨ P1,2 ∨ P2,1) is not.

  • Goal clause: a disjunction of literals of which none is positive.


Horn Clauses

Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal.

Similarly for goal clauses. Therefore, every Horn clause can be rewritten as either a single literal or (conjunction of literals) ⇒ literal.


Inference with Horn Clauses

Inference with Horn clauses can be done efficiently through the forward-chaining and backward-chaining algorithms.

Deciding entailment with Horn clauses can be done in time that is linear in the size of the knowledge base.


Forward or Backward Chaining

Require Horn Form

  • Horn clauses are written as a literal, or as (conjunction of literals) ⇒ literal
  • Horn Form is Conjunction of Horn clauses
  • e.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B)

Modus Ponens (for Horn Form): complete for Horn KBs

Forward chaining is sound and complete for Horn clauses, and runs in linear time

Backward chaining can run in much less than linear time, because it touches only facts relevant to the query
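A sketch of forward chaining over the Horn KB above. The second call adds D as an extra fact (my own variation, not from the slides) to show a successful derivation:

```python
from collections import deque

def pl_fc_entails(rules, facts, query):
    """Forward chaining over a Horn KB.
    rules: list of (premises, conclusion) pairs, e.g. (["C", "D"], "B");
    facts: symbols known to be true. Returns True iff query is derived."""
    count = [len(prem) for prem, _ in rules]   # unsatisfied premises per rule
    agenda = deque(facts)
    inferred = set()
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:              # all premises proved: fire the rule
                    agenda.append(concl)
    return False

# KB: C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B), query A
rules = [(["B"], "A"), (["C", "D"], "B")]
print(pl_fc_entails(rules, ["C"], "A"))       # False: D is unknown, so B never fires
print(pl_fc_entails(rules, ["C", "D"], "A"))  # True
```

Each symbol is processed at most once and each rule's premise counter is decremented at most once per premise, which is what makes the algorithm linear in the size of the KB.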


CNF, Horn Clauses, Definite Clauses


Efficient Model Checking Algorithms for PL

Backward chaining (Horn Clauses)

Forward chaining (Horn Clauses)

There are two more:

DPLL Algorithm (Davis, Putnam, Logemann, Loveland)

  • Efficient and complete backtracking
  • Can efficiently handle tens of millions of variables
  • Applications include hardware verification

WalkSAT

  • Local search, thus very efficient
  • Incomplete


Summary

Logical agents apply inference to a KB to derive new facts and make decisions

  • Syntax: formal structure of sentences
  • Semantics: truth of sentences wrt models
  • Entailment: necessary truth of one sentence given another
  • Inference: deriving sentences from other sentences
  • Soundness: derivations produce only entailed sentences
  • Completeness: derivations can produce all entailed sentences

Propositional Logic

  • Model Checking
  • Inference Rules
  • Resolution

Efficient search algorithms with inference procedures

  • Forward chaining and backward chaining are linear-time, complete for Horn clauses
