73 terms

Agent

Perceives its environment through sensors

Achieves its goals by acting on its environment using actuators

Simple Reflex Agents

Actions depend only on its immediate percepts

Model-Based Agents

Actions depend on history/unperceived aspects of the world. It needs to maintain an internal world model.

Goal-Based Agents

Agents with variable goals that form plans to achieve them

Utility-based Agents

Agents that juggle multiple, sometimes conflicting goals, optimising utility across the range of goals

Expected Utility

Utility of an outcome weighted by its probability of success

Partially Observable Environment

The agent's sensors don't fully describe the state of the environment

Deterministic Environment

The next state is fully determined by current state + actions

Stochastic Environment

State changes have a random element; the next state is not fully determined by the current state and actions

Episodic Environment

Next action is not dependent on the last action (think mail sorting)

Sequential Environment

Next action is dependent on the last action (think crossword)

Static Environment

Environment stays unchanged while agent deliberates (think crossword)

Dynamic Environment

Environment changes as the agent deliberates (think taxi driving)

Discrete Environment

Percepts, actions and episodes are discrete (think chess)

Continuous Environment

Percepts, actions and episodes are continuous (think robot car)

Single-Agent Environment

In the environment, only one object can be modelled as an agent (think crossword)

Multi-Agent Environment

In the environment, multiple objects can be modelled as agents (think poker)

Problem-Solving Agent

Agent that implements a "formulate, search, execute" design

Single-State Formulation

1. Initial state

2. Successor Function

3. Goal Test

4. Path Cost

Successor Function

S(x) = set of action-state pairs

Path Cost

C(x,a,y) = cost of taking action a in state x to reach state y

Tree search Algorithms

An offline, simulated exploration of state spaces

(Search Strategy) Completeness

Search strategy finds a solution if one exists

Time Complexity

Number of nodes generated by a search strategy

Space Complexity

Maximum number of nodes in memory

Optimality

Whether a search strategy always finds a least-cost solution

Breadth-First Search

Expands shallowest unexpanded node.

FIFO queue (puts successors at the end of the list)

(has space and time complexity problems)
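
The FIFO behaviour above can be sketched in Python. This is a minimal illustration; the graph, goal test and function names are my own, not from the source:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand the shallowest unexpanded node using a FIFO queue."""
    frontier = deque([[start]])  # queue of paths; successors go at the end
    explored = set()
    while frontier:
        path = frontier.popleft()  # FIFO: take from the front
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in successors(node):
            frontier.append(path + [child])  # put successors at the end
    return None

# Tiny hypothetical graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search('A', 'D', lambda n: graph[n]))  # ['A', 'B', 'D']
```

Because nodes are expanded level by level, the first path found to the goal is a shallowest one.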

Depth-First Search

Expands deepest unexpanded node.

LIFO queue (puts successors at the front)

(not optimal or complete; time complexity problems)
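
The LIFO behaviour above can be sketched the same way with a stack. Again a minimal sketch with an illustrative graph, not a definitive implementation:

```python
def depth_first_search(start, goal, successors):
    """Expand the deepest unexpanded node using a LIFO stack."""
    frontier = [[start]]  # stack of paths; successors go on top
    explored = set()
    while frontier:
        path = frontier.pop()  # LIFO: take from the top
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in successors(node):
            frontier.append(path + [child])  # put successors on top
    return None

# Tiny hypothetical graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(depth_first_search('A', 'D', lambda n: graph[n]))  # ['A', 'C', 'D']
```

Note the returned path is not necessarily the shortest one, which is why DFS is not optimal.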

Which search strategy to use when completeness/ optimal solutions is important?

Breadth-First Search

Which search strategy to use when solutions are dense/ low cost is important?

Depth-First Search

Depth-Limited search

Searches for a solution until depth limit i is reached

(not optimal or complete)

Iterative Deepening Search

Searches until depth i; if no solution is found, searches to depth i+1, repeating until a solution is found
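
The iterate-and-deepen loop can be sketched as follows. The graph is hypothetical and `max_depth` is an assumed safety cap, not part of the algorithm as described:

```python
def depth_limited_search(node, goal, successors, limit):
    """Depth-first search that stops expanding below depth `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result is not None:
            return result
    return None

# Tiny hypothetical graph for illustration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(iterative_deepening_search('A', 'D', lambda n: graph[n]))  # ['A', 'B', 'D']
```

Note it finds a shallowest solution (like BFS) while only ever storing one path (like DFS).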

Minimax Search

Chooses state with the highest minimax value

(Example: if you are MAX, look at your possible moves and choose the one whose minimum outcome, after your opponent's best reply, is highest.)
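
The MAX/MIN alternation can be sketched as follows. The two-ply game tree and leaf scores are invented for illustration:

```python
def minimax(state, successors, utility, maximizing=True):
    """Return the minimax value of `state`: MAX picks the highest value,
    MIN picks the lowest, with leaf values given by `utility`."""
    children = successors(state)
    if not children:          # terminal state: score it
        return utility(state)
    values = [minimax(c, successors, utility, not maximizing) for c in children]
    return max(values) if maximizing else min(values)

# Hypothetical two-ply game: MAX moves, then MIN moves, then leaves are scored.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaves = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
value = minimax('root', lambda s: tree.get(s, []), leaves.get)
print(value)  # 3: MAX chooses L, since min(3,5)=3 beats min(2,9)=2
```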

Alpha-Beta Pruning

Minimax search that stops exploring branches which cannot affect the final decision
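
The pruning condition can be sketched as follows; the game tree is invented for illustration and the alpha/beta bookkeeping follows the standard formulation, which the card only summarises:

```python
def alphabeta(state, successors, utility, maximizing=True,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon as
    it provably cannot change the final decision."""
    children = successors(state)
    if not children:
        return utility(state)
    if maximizing:
        value = float('-inf')
        for c in children:
            value = max(value, alphabeta(c, successors, utility, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # MIN will never let play reach this branch
                break
        return value
    value = float('inf')
    for c in children:
        value = min(value, alphabeta(c, successors, utility, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:       # MAX will never let play reach this branch
            break
    return value

# Same invented two-ply game as a minimax example tree.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaves = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(alphabeta('root', lambda s: tree.get(s, []), leaves.get))  # 3, same as minimax
```

Here the leaf R2 is never evaluated: once R1 scores 2, the R branch cannot beat the 3 already guaranteed by L.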

Greedy Best-First Search

Expands the node that appears closest to the goal. Uses evaluation function f(n) = h(n), the heuristic estimate of the cost from n to the goal

(not optimal or complete)

A* Search

Similar to Best-First but avoids already expensive paths using evaluation function f(n) = g(n) + h(n)

where f(n) = estimated total cost of the path through n to the goal, g(n) = cost so far, h(n) = heuristic estimate of the remaining cost
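
The f(n) = g(n) + h(n) evaluation can be sketched with a priority queue. The weighted graph and heuristic values are invented, with the heuristic chosen so it never overestimates (admissible):

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    # Priority queue ordered by f(n); entries are (f, g, node, path).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, step_cost in neighbours(node):
            g2 = g + step_cost
            if g2 < best_g.get(child, float('inf')):  # avoid already-expensive paths
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None, float('inf')

# Hypothetical weighted graph and admissible heuristic for illustration.
edges = {'A': [('B', 1), ('C', 4)], 'B': [('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 2, 'B': 5, 'C': 1, 'D': 0}.get
path, cost = a_star('A', 'D', lambda n: edges[n], h)
print(path, cost)  # ['A', 'C', 'D'] 5
```

Greedy best-first would use h alone; adding g is what steers A* away from paths that are already expensive.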

Knowledge Base

A set of sentences in a formal language

Entailment

Necessary truth of one sentence given another

(ie: a KB including "Celtics won" and "Hearts won" entails "Hearts won or Celtics won")

Inference

Deriving a sentence from another sentence

Soundness

Derivations produce only entailed sentences

Logical Completeness

Derivations produce all entailed sentences

Valid

A logical statement that is true in all models

Satisfiable

A logical statement that is true in some model

Unsatisfiable

A logical statement that is always false

Proof Methods (2)

Application of Inference Rules and Model Checking

Application of Inference Rules

Sound generation of new sentences from old that typically requires a transformation of sentences into normal form (ie: resolution)

Model Checking

Truth table enumeration (ie: DPLL method or heuristic search in model space)

2 families of efficient algorithms for propositional inference

1. Complete backtracking search algorithms

2. Incomplete local search algorithms

DPLL

A complete backtracking search algorithm that determines if an input propositional logic sentence is satisfiable. Works better than truth-table enumeration by using early termination, the pure symbol heuristic and the unit clause heuristic

Early Termination

Improvement of DPLL that states:

1. A clause is true if one of its literals is true

2. A sentence is false if any of its clauses is false

Pure Symbol Heuristic

Improvement of DPLL that states:

A symbol is "pure" if it appears with the same sign throughout all clauses

(ie: (¬A v B)(¬A v C)(¬B v C) <- A and C are pure)

DPLL algorithm makes all literals that are pure, true

A symbol is "pure" if it appears with the same sign throughout all clauses

(ie: (¬A v B)(¬A v C)(¬B v C) <- A and C are pure)

DPLL algorithm makes all literals that are pure, true

Unit Clause

Improvement of DPLL.

If a clause contains only one literal, assign that literal so the clause is true.

If all but one literal in a clause are false, the remaining literal must be made true.
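
The three improvements can be combined into a minimal DPLL sketch. The clause/literal encoding is my own (a negative literal is a string with a leading '-'); this is an illustration, not an optimised solver:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: early termination, the pure-symbol rule and the
    unit-clause rule, then backtracking over remaining symbols.
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}

    def literal_value(lit):
        sym = lit.lstrip('-')
        if sym not in assignment:
            return None
        return (not assignment[sym]) if lit.startswith('-') else assignment[sym]

    # Early termination: drop satisfied clauses; fail on an all-false clause.
    simplified = []
    for clause in clauses:
        vals = [literal_value(l) for l in clause]
        if True in vals:
            continue
        remaining = [l for l, v in zip(clause, vals) if v is None]
        if not remaining:
            return None          # a clause is false under this assignment
        simplified.append(remaining)
    if not simplified:
        return assignment        # every clause is already satisfied

    # Unit clause: a one-literal clause forces that literal to be true.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, lit.lstrip('-'): not lit.startswith('-')})

    # Pure symbol: a symbol appearing with one sign everywhere is made true.
    literals = {l for clause in simplified for l in clause}
    for lit in literals:
        sym = lit.lstrip('-')
        opposite = sym if lit.startswith('-') else '-' + sym
        if opposite not in literals:
            return dpll(clauses, {**assignment, sym: not lit.startswith('-')})

    # Branch: try both truth values for some unassigned symbol.
    sym = next(iter(literals)).lstrip('-')
    return (dpll(clauses, {**assignment, sym: True})
            or dpll(clauses, {**assignment, sym: False}))

# (¬A v B)(¬A v C)(¬B v C): A and C are pure, so a model is found quickly.
model = dpll([['-A', 'B'], ['-A', 'C'], ['-B', 'C']])
print(model is not None)  # True: the sentence is satisfiable
```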

WalkSAT

Incomplete local search algorithm that uses the min-conflict heuristic to minimise the number of unsatisfied clauses. A balance between greediness and randomness.
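
The greediness/randomness balance can be sketched as follows. The flip limit, probability p and clause encoding are assumed parameters for this illustration, not from the source:

```python
import random

def walksat(clauses, p=0.5, max_flips=10000, seed=0):
    """WalkSAT sketch: start from a random assignment; repeatedly pick an
    unsatisfied clause and flip either a random symbol in it (probability p)
    or the symbol whose flip minimises the number of unsatisfied clauses."""
    rng = random.Random(seed)
    symbols = sorted({l.lstrip('-') for c in clauses for l in c})
    model = {s: rng.choice([True, False]) for s in symbols}

    def satisfied(clause, m):
        return any((not m[l[1:]]) if l.startswith('-') else m[l] for l in clause)

    def unsat_count(m):
        return sum(not satisfied(c, m) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, model)]
        if not unsat:
            return model
        clause = rng.choice(unsat)
        if rng.random() < p:      # random-walk step
            sym = rng.choice(clause).lstrip('-')
        else:                     # greedy min-conflict step
            sym = min((l.lstrip('-') for l in clause),
                      key=lambda s: unsat_count({**model, s: not model[s]}))
        model[sym] = not model[sym]
    return None                   # incomplete: failure proves nothing

model = walksat([['-A', 'B'], ['-A', 'C'], ['-B', 'C']])
print(model is not None)
```

Because the algorithm is incomplete, returning None does not prove the sentence unsatisfiable, only that no model was found within the flip budget.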

Constraint Satisfaction Problems

Problems where:

-state = variables Xi with values from domains Di (i = 1...n)

-goal test = set of constraints specifying allowable combinations of values for variables

(ie: the Australian territories problem)

Standard Search Formulation

-Initial state: the empty assignment {}

-Successor function: assigns a value to an unassigned variable that doesn't conflict with the current assignment

-Goal test: the current assignment is complete

Backtracking Search

Depth-first search for CSPs that assigns a value to one variable at a time

Forward Checking

Keeps track of remaining legal values for unassigned variables and terminates a search branch when any variable has no legal values left
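
Backtracking search with forward checking can be sketched as follows, on a toy instance in the spirit of the Australian-territories example. The constraint function, region names and colours are illustrative:

```python
def backtracking_search(variables, domains, constraints, assignment=None):
    """Backtracking search for a binary CSP with forward checking: after each
    assignment, prune now-illegal values from unassigned variables' domains
    and backtrack as soon as any domain becomes empty."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Check consistency with everything already assigned.
        if all(constraints(var, value, v2, assignment[v2]) for v2 in assignment):
            # Forward checking: restrict the domains of unassigned variables.
            pruned = {v2: [x for x in domains[v2] if constraints(var, value, v2, x)]
                      for v2 in variables if v2 not in assignment and v2 != var}
            if all(pruned.values()):     # no domain wiped out
                result = backtracking_search(
                    variables, {**domains, **pruned}, constraints,
                    {**assignment, var: value})
                if result is not None:
                    return result
    return None

# Toy map-colouring instance: adjacent regions must differ in colour.
adjacent = {('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'), ('SA', 'Q')}
def different_if_adjacent(v1, x1, v2, x2):
    if (v1, v2) in adjacent or (v2, v1) in adjacent:
        return x1 != x2
    return True

solution = backtracking_search(
    ['WA', 'NT', 'SA', 'Q'],
    {v: ['red', 'green', 'blue'] for v in ['WA', 'NT', 'SA', 'Q']},
    different_if_adjacent)
print(solution is not None)  # True
```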

Universal Quantifier (∀)

When used, ∀x.P is true in an interpretation iff P is true with x being each possible object in the interpretation

(Example: Everyone at UoE is smart:

∀x. At(x,UoE) ⇒ Smart(x) )

Existential quantification (∃)

When used, ∃x.P is true in an interpretation iff P is true with x being some possible object in the interpretation

( Example: Someone at UoE is smart:

∃x. At(x,UoE) ∧ Smart(x) )

Unification

Inference rule

If we can find a substitution θ such that aθ = bθ,

we can unify(a, b)

example:

King(x) and Greedy(x) match King(John) and Greedy(y)

θ = {x/John, y/John} works

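
The substitution-finding step can be sketched as follows. The term encoding (lowercase strings as variables, tuples as compound terms) is my own, and the occurs-check is omitted, so this is a simplification:

```python
def is_var(t):
    """A term is a variable if it is a lowercase string, e.g. 'x'."""
    return isinstance(t, str) and t[0].islower()

def substitute(t, theta):
    """Follow the substitution chain for a variable."""
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def unify(a, b, theta=None):
    """Return a substitution making a and b equal, or None if none exists."""
    if theta is None:
        theta = {}
    a, b = substitute(a, theta), substitute(b, theta)
    if a == b:
        return theta
    if is_var(a):
        return {**theta, a: b}
    if is_var(b):
        return {**theta, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):           # unify argument lists pairwise
            theta = unify(x, y, theta)
            if theta is None:
                return None
        return theta
    return None

# King(x) ∧ Greedy(x) against King(John) ∧ Greedy(y):
theta = unify((('King', 'x'), ('Greedy', 'x')),
              (('King', 'John'), ('Greedy', 'y')))
print(theta)  # {'x': 'John', 'y': 'John'}
```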
Generalised Modus Ponens

If there are definite clauses (p'1, p'2, ... p'n) and an implication clause (p1 ∧ p2 ∧ ... ∧ pn ⇒ q),

and there is a unifier θ such that p'i θ = pi θ for all i,

then we can replace all the above clauses with qθ

Forward Chaining

Starting from the KB, use entailment clauses to chain forward until the goal clause is added to the KB
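
Chaining forward from known facts can be sketched propositionally as follows; the rule encoding is invented for illustration:

```python
def forward_chaining(kb_facts, rules, goal):
    """Propositional forward chaining: rules are (premises, conclusion) pairs.
    Repeatedly fire any rule whose premises are all known until the goal
    appears or nothing new can be inferred."""
    known = set(kb_facts)
    changed = True
    while changed:
        if goal in known:
            return True
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)   # fire the rule: add its conclusion
                changed = True
    return goal in known

# Hypothetical KB: A ∧ B ⇒ C, and C ⇒ D.
rules = [(['A', 'B'], 'C'), (['C'], 'D')]
print(forward_chaining(['A', 'B'], rules, 'D'))  # True
```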

Backwards Chaining

Starting with the goal clause, chain backwards using entailment clauses from the KB to prove the goal clause.

Ground Binary Resolution

If you have clauses (C v P) and (D v ¬P), you can infer (C v D)
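
The rule above can be sketched as follows; the string encoding of literals (a leading '-' for negation) is my own:

```python
def resolve(c1, c2):
    """Ground binary resolution: if clause c1 contains a literal P and clause
    c2 contains its complement ¬P, infer the clause with both removed."""
    for lit in c1:
        complement = lit[1:] if lit.startswith('-') else '-' + lit
        if complement in c2:
            return sorted((set(c1) - {lit}) | (set(c2) - {complement}))
    return None  # no complementary pair: the clauses don't resolve

# (C v P) and (D v ¬P) resolve to (C v D):
print(resolve(['C', 'P'], ['D', '-P']))  # ['C', 'D']
```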

Non Binary Resolution

If there are clauses (C v P) and (D v ¬P'), then we can infer (C v D)θ

(iff there is an MGU θ for P and P')

Example: (¬rich(x) v unhappy(x)), (rich(Ken))

= (unhappy(Ken))

Factoring

If you have a clause (C v P1 v P2 ... v Pn), then it can be replaced by (C v P1)θ, where θ is the MGU for all the Pi

Planning Domain Definition Language

A planning language that allows you to describe states, actions and goals.

Precondition in PDDL

Defines the states in which an action is executable

Forward State-Space Search (PDDL)

Start in initial state; consider action sequences until goal state is reached.

Backward State-Space Search (PDDL)

Start from goal state; consider action sequences until initial state is reached

Subgoal Decomposition

State space search heuristic where the original goal is broken into subgoals to more easily reach the main goal

Relaxed Problem

State space search heuristic: a simplified version of the original problem that ignores all preconditions and removes negative effects

Partial-order planning

Least commitment strategy where you add actions to a plan without committing to a specific order of actions (unless necessary).