EECS 348 Midterm
Goals of AI
Think like a human (Cognitive Modeling)
Think rationally (Logic-based Systems)
Act like a human (Turing Test)
Act rationally (Rational Agents)
Rational agent
An agent (program) that does the "right" thing, given its goals, its abilities, what it perceives of its environment and its prior knowledge
What does a rational agent do?
Our goal as AI programmers is to develop agents that behave rationally. This means we must specify what the agent does given:
- Its goals
- Its percepts (what it perceives)
- Its possible actions
- Its prior knowledge
agent's plan of action
policy or "agent function"
Policy Design and Implementation
1. Formulate problem and goal
2. Search for a sequence of actions that will lead to the goal (the policy)
3. Execute the actions one at a time
Policy Design and Implementation Techniques
Search, reasoning with utility, reasoning with knowledge and uncertainty, learning
tree search basic idea
def TreeSearch(problem, strategy):
    initialize the search tree using the initial state of problem
    loop:
        if there are no candidates for expansion, return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state, return the solution
        else expand the node and add the resulting nodes to the search tree
State
(representation of) a physical configuration
Node
data structure constituting part of a search tree; includes state, parent node, action, path cost g(x), depth
Expand function
creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states
Tree Search Algorithm
1. Add the initial state (root) to the <fringe>
2. Choose a node (curr) to examine from the <fringe> (if there is nothing in <fringe> - FAILURE)
3. Is curr a goal state?
If so, SOLUTION
If not, continue
4. Expand curr by applying all possible actions (add the new resulting states to the <fringe>)
5. Go to step 2
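The five steps above can be sketched in Python. The toy state space and goal test below are assumptions for the demo, not from the cards; with a FIFO fringe the sketch behaves like breadth-first search:

```python
from collections import deque

def tree_search(initial, goal_test, successors):
    """Generic tree search following the steps above.
    The fringe is a FIFO queue here, which gives breadth-first order."""
    fringe = deque([(initial, [initial])])   # step 1: add the root
    while fringe:                            # empty fringe -> FAILURE
        state, path = fringe.popleft()       # step 2: choose a node (curr)
        if goal_test(state):                 # step 3: goal check
            return path                      # SOLUTION
        for succ in successors(state):       # step 4: expand curr
            fringe.append((succ, path + [succ]))
    return None

# Hypothetical toy state space for the demo
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
path = tree_search('A', lambda s: s == 'D', lambda s: graph[s])
```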
Uninformed search strategies
use only the information available in the problem definition
- Depth First Search
- Breadth First Search
- Depth-limited search
- Iterative deepening search
Breadth-first search
• Expand shallowest unexpanded node
- fringe is a FIFO queue, i.e., new successors go at end
Depth-first search
• Expand deepest unexpanded node
- fringe = LIFO stack, i.e., put successors at front
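A minimal sketch of how the fringe discipline alone switches between the two orders. The toy tree is an invented example; note that with a plain stack, successors come off in reverse order:

```python
from collections import deque

def search_order(start, successors, lifo):
    """Return the order in which states are examined.
    lifo=False -> FIFO fringe (breadth-first order);
    lifo=True  -> LIFO fringe (depth-first order)."""
    fringe = deque([start])
    order = []
    while fringe:
        state = fringe.pop() if lifo else fringe.popleft()
        order.append(state)
        fringe.extend(successors(state))
    return order

# Hypothetical toy tree for the demo
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': [], 'E': []}
bfs = search_order('A', lambda s: tree[s], lifo=False)  # shallowest first
dfs = search_order('A', lambda s: tree[s], lifo=True)   # deepest first
```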
Breadth-first search time complexity
O(b^d), where b is the branching factor and d is the depth of the shallowest goal
Breadth-first search space complexity
O(b^d) (every node on the frontier is kept in memory)
Breadth-first search complete?
Yes (if b is finite)
Breadth-first search optimal?
Yes (if all step costs are equal)
Depth-first search time complexity
O(b^m), where m is the maximum depth of the state space
Depth-first search space complexity
O(bm) (branching factor × maximum depth)
Depth-first search complete?
No (yes, if space is finite and no circular paths)
Depth-first search optimal?
No
Breadth-first search problems
requires exponential space (the entire frontier must be kept in memory)
Depth-first search problems
not optimal and not necessarily complete
Depth limited depth-first search
Depth-first search, but with a depth limit L specified
- nodes at depth L are treated as if they have no successors
- we only search down to depth L
Depth limited depth-first search time complexity
O(b^L)
Depth limited depth-first search space complexity
O(bL)
Depth limited depth-first search complete?
No if solution is longer than L
Depth limited depth-first search optimal?
No
Iterative deepening search
For depth 0, 1, ...., ∞
run depth limited DFS
if solution found, return result
• Blends the benefits of BFS and DFS
- searches in a similar order to BFS
- but has the memory requirements of DFS
• Will find the solution when L is the depth of
the shallowest goal
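The iterative-deepening loop above can be sketched directly. The toy tree and goal are assumptions for the demo:

```python
def depth_limited_dfs(state, goal_test, successors, limit):
    """Depth-limited DFS: nodes at depth == limit get no successors."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for succ in successors(state):
        result = depth_limited_dfs(succ, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(state, goal_test, successors, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(state, goal_test, successors, limit)
        if result is not None:
            return result
    return None

# Hypothetical toy tree: the goal E sits at depth 2
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
path = iterative_deepening('A', lambda s: s == 'E', lambda s: tree[s])
```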
Iterative deepening search time complexity
O(b^d)
Iterative deepening search space complexity
O(bd)
Iterative deepening search complete?
Yes
Iterative deepening search optimal?
Yes (if all step costs are equal)
Best-first search
use an evaluation function f(n) for each node
- estimate of "desirability"
- Expand most desirable unexpanded node
Order the nodes in fringe in decreasing order of desirability
Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal
A* search
avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to goal (the evaluation of the desirability of n)
tree search algorithm -keeping track of visited
1. start with the initial node as curr
2. have I been to curr before? (is it in CLOSED)
3. is curr the goal?
4. if neither, expand curr - add children/successors to OPEN, add curr to CLOSED
5. choose a node curr according to the smallest f(n) & go to step 2
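A sketch of the algorithm above with f(n) = g(n) + h(n), i.e., A* with a CLOSED set. The weighted graph and heuristic table are invented for the demo:

```python
import heapq

def a_star(start, goal, successors, h):
    """Expand the OPEN node with the smallest f(n) = g(n) + h(n),
    skipping states already placed in CLOSED."""
    open_heap = [(h(start), 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == goal:
            return path, g
        if state in closed:            # have I been to curr before?
            continue
        closed.add(state)
        for succ, step_cost in successors(state):
            if succ not in closed:
                g2 = g + step_cost
                heapq.heappush(open_heap, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float('inf')

# Hypothetical weighted graph with a made-up admissible heuristic
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 5)],
         'B': [('G', 1)], 'G': []}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}.get
path, cost = a_star('S', 'G', lambda s: graph[s], h)
```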
• A heuristic h(n) is admissible if for every
node n, h(n) ≤ h'(n), where h'(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
Relaxed problem
• A problem with fewer restrictions on the actions
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
• If h2(n) ≥ h1(n) for all n (both admissible)
• then h2 dominates h1
• h2 is better for search
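A concrete illustration for the 8-puzzle: the misplaced-tiles count h1 and the Manhattan distance h2 are both admissible, and h2 dominates h1. The specific board below is an arbitrary example, not from the cards:

```python
def misplaced(state, goal):
    """h1: number of tiles out of place (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (0, 2, 3, 4, 5, 6, 7, 8, 1)   # tile 1 moved to the far corner
h1, h2 = misplaced(state, goal), manhattan(state, goal)
```

Here h1 = 1 but h2 = 4: h2 gives a tighter (still optimistic) estimate, so it is better for search.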
single-state problem: deterministic, fully observable
Agent knows exactly which state it will be in; solution is a sequence of actions
sensorless (conformant) problem
Agent may have no idea where it is; solution is a sequence
contingency problem: nondeterministic and/or partially observable
- percepts provide new information about current state
- often interleave search, execution
exploration problem: unknown state space
optimal strategy (minimax)
at least as good as any other strategy, no matter what the opponent does
- If there's a way to force a win, it will find it
- Will only lose if there's no other option
MINIMAX-VALUE(n)
if n is a terminal state
the UTILITY of n
else if MAX's turn
the MAXIMUM MINIMAX-VALUE of all possible successors to n
else if MIN's turn
the MINIMUM MINIMAX-VALUE of all possible successors to n
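The recursive definition above translates almost directly into code. The tiny game tree and leaf utilities are invented for the demo:

```python
def minimax_value(node, is_max, successors, utility):
    """MINIMAX-VALUE(n): the utility at terminal states, else the max
    (MAX's turn) or min (MIN's turn) over successor values."""
    children = successors(node)
    if not children:                      # n is a terminal state
        return utility(node)
    values = [minimax_value(c, not is_max, successors, utility)
              for c in children]
    return max(values) if is_max else min(values)

# Hypothetical game tree: MAX moves at the root, leaves hold utilities
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2'],
        'L1': [], 'L2': [], 'R1': [], 'R2': []}
leaf_utility = {'L1': 3, 'L2': 12, 'R1': 2, 'R2': 8}
value = minimax_value('root', True, lambda n: tree[n],
                      lambda n: leaf_utility[n])
```

MIN makes L worth min(3, 12) = 3 and R worth min(2, 8) = 2, so MAX picks L and the root value is 3.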
Alpha-beta pruning
eliminate parts of the tree from consideration; does not affect the final result
prunes away branches that can't possibly influence the final decision
Consider a node n
If a player has a better choice m (at a parent or
further up), then n will never be reached
So, once we know enough about n by looking at some successors, then we can prune it.
α is the value of the best (i.e., highest value)
choice found so far at any choice point along the path for max
• If v is worse than α, max will avoid it
=> prune that branch
• Define β similarly for min
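A sketch of minimax with alpha-beta pruning as described above, on the same kind of invented game tree; here the second successor of R should be pruned, since after seeing R1 = 2 MIN can hold R below α = 3:

```python
def alphabeta(node, is_max, successors, utility,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: alpha is the best value found so
    far for MAX along the path, beta the best for MIN."""
    children = successors(node)
    if not children:
        return utility(node)
    if is_max:
        value = float('-inf')
        for c in children:
            value = max(value, alphabeta(c, False, successors, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:     # MIN will avoid this branch: prune
                break
        return value
    value = float('inf')
    for c in children:
        value = min(value, alphabeta(c, True, successors, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:         # MAX will avoid this branch: prune
            break
    return value

# Hypothetical game tree and leaf utilities for the demo
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2'],
        'L1': [], 'L2': [], 'R1': [], 'R2': []}
leaf = {'L1': 3, 'L2': 12, 'R1': 2, 'R2': 8}
value = alphabeta('root', True, lambda n: tree[n], lambda n: leaf[n])
```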
cutting off search
- if TERMINAL-TEST(state) then return UTILITY(state)
- if CUTOFF-TEST(state,depth) then return EVAL(state)
• Introduces a fixed-depth limit
- the limit is selected so that the amount of time used will not exceed what the rules of the game allow
• When cutoff occurs, the evaluation is performed.
• Idea: produce an estimate of the expected utility of the game from a given position.
• Performance depends on quality of EVAL.
- EVAL should order terminal nodes in the same way as UTILITY.
- Computation must not take too long.
- For non-terminal states, EVAL should be strongly correlated with the actual chance of winning.
Constraint Satisfaction Problem
There's a set of variables. Each variable x has a domain D of possible values. Usually D is discrete and finite.
There's a set of constraints. Each constraint C involves a subset of variables and specifies the allowable combinations of values of these variables.
Solution to a CSP
Assign a value to every variable such that all constraints are satisfied
CSP-BACKTRACKING(a)
- If a is complete then return a
- X <= select an unassigned variable
- D <= select an ordering for the domain of X
- For each value v in D do
• If v is consistent with a then
- Add (X = v) to a
- result <= CSP-BACKTRACKING(a)
- If result ≠ failure then return result
- Remove (X = v) from a
- Return failure
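A runnable sketch of the backtracking procedure above, applied to a tiny made-up map-coloring CSP (the regions, colors, and consistency check are assumptions for the demo):

```python
def csp_backtracking(assignment, variables, domains, consistent):
    """Backtracking search following the steps above."""
    if len(assignment) == len(variables):       # a is complete
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value             # Add (X = v) to a
            result = csp_backtracking(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]                 # Remove (X = v), try next value
    return None                                 # failure

# Hypothetical CSP: color three mutually adjacent regions
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
domains = {v: ['red', 'green', 'blue'] for v in neighbors}

def consistent(var, value, assignment):
    return all(assignment.get(n) != value for n in neighbors[var])

solution = csp_backtracking({}, list(neighbors), domains, consistent)
```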
Constraint propagation
the process of determining how the possible values of one variable affect the possible values of other variables
Forward checking
After a variable X is assigned a value v, look at each unassigned variable Y that is connected to X by a constraint and delete from Y's domain any value that is inconsistent with v
Removal of Arc Inconsistencies
• removed <= false
• X <= label set of J
• Y <= label set of K
• For every label y in Y do
- If there exists no label x in X such that the constraint (x,y) is satisfied then
• Remove y from Y
• If Y is empty then contradiction <= true
• removed <= true
• Label set of K <= Y
• Return removed
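The arc-revision procedure above, sketched in Python for a made-up binary constraint x < y (variable names and the constraint are assumptions for the demo):

```python
def remove_arc_inconsistencies(domains, j, k, satisfied):
    """Remove from variable k's label set every value with no supporting
    value in variable j's label set under the constraint satisfied(x, y).
    Returns (removed_anything, contradiction)."""
    removed = False
    x_labels = domains[j]
    for y in list(domains[k]):          # iterate over a copy while removing
        if not any(satisfied(x, y) for x in x_labels):
            domains[k].remove(y)
            removed = True
    contradiction = len(domains[k]) == 0   # empty label set -> contradiction
    return removed, contradiction

# Hypothetical domains and the constraint j < k
domains = {'j': [1, 2, 3], 'k': [1, 2, 3]}
removed, contradiction = remove_arc_inconsistencies(
    domains, 'j', 'k', lambda x, y: x < y)
```

Here k = 1 has no x in {1, 2, 3} with x < 1, so it is removed; k's domain shrinks to [2, 3] without becoming empty.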
Local search basic idea
1. Select (random) initial state (initial guess at solution)
2. If not goal state, make local modification to improve current state
3. Repeat Step 2 until goal state found (or out of time)
- generate a random (probably-not-optimal) guess
- evaluate quality of guess
- move to other states (well-defined neighborhood function)
- do these operations quickly
local search algorithms
• Hill climbing
• Simulated annealing
• Local Beam Search
• Stochastic Beam Search
• Genetic Algorithms
Hill climbing
look at neighbor states and choose the best one
Random-restart hill climbing
when you hit a local maximum, start again
Stochastic Hill Climbing
look at neighbor states and randomly pick one that is better than the current state (a weighted random choice), so you are less likely to get stuck in a local maximum or minimum
Stochastic beam search
• Instead of choosing the k best from the pool, choose k at "random"
• Like natural selection
- Successors = offspring
- State = organism
- Value = fitness
First-choice hill climbing
generates successors randomly until you find one that is an uphill neighbor
Hill-climbing space complexity
O(1) (only the current state is stored)
Problems with hill-climbing
Local maxima/minima, ridges, and plateaux
Simulated annealing search
escape local maxima by allowing some "bad" moves but gradually decrease their frequency
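A minimal sketch: a downhill move of size delta is accepted with probability exp(delta/T), and the temperature T decays over time so bad moves become rare. The 1-D objective, neighbor function, and cooling schedule below are all invented for the demo:

```python
import math
import random

def simulated_annealing(state, value, neighbor, schedule, steps, rng):
    """Hill climbing that sometimes accepts "bad" moves: a downhill move
    (delta < 0) is taken with probability exp(delta / T)."""
    for t in range(steps):
        T = schedule(t)
        if T <= 0:
            break
        nxt = neighbor(state, rng)
        delta = value(nxt) - value(state)
        if delta > 0 or rng.random() < math.exp(delta / T):
            state = nxt
    return state

# Hypothetical 1-D objective: local maximum at x = -2, global at x = 3
def value(x):
    return -abs(x - 3) if x >= 0 else -2 - abs(x + 2)

rng = random.Random(0)
best = simulated_annealing(
    state=-2, value=value,
    neighbor=lambda x, r: x + r.choice([-1, 1]),   # random walk step
    schedule=lambda t: 10 * 0.95 ** t,             # geometric cooling
    steps=500, rng=rng,
)
```

Starting at the local maximum x = -2, the early high-temperature phase lets the walk cross the valley toward the global maximum; as T shrinks, only uphill moves survive.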
Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Genetic algorithm
1. Choose initial population
2. Evaluate fitness of each in population
3. Repeat the following until we hit a terminating condition:
a. Select the best-ranking individuals to reproduce
b. Breed using crossover and mutation
c. Evaluate the fitnesses of the offspring
d. Replace the worst-ranked part of the population with the offspring
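The loop above, sketched for a toy "one-max" problem (evolve a bit-string of all 1s). The population size, one-point crossover, and bit-flip mutation are assumptions for the demo:

```python
import random

def genetic_algorithm(population, fitness, crossover, mutate, generations, rng):
    """Rank by fitness, breed the top half, and replace the worst-ranked
    half of the population with the offspring."""
    population = list(population)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)    # evaluate and rank
        parents = population[: len(population) // 2]  # best-ranking survive
        offspring = []
        while len(offspring) < len(population) - len(parents):
            a, b = rng.sample(parents, 2)             # select two parents
            offspring.append(mutate(crossover(a, b, rng), rng))
        population = parents + offspring              # replace the worst
    return max(population, key=fitness)

LENGTH = 12
rng = random.Random(0)
population = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(20)]

def crossover(a, b, r):
    cut = r.randint(1, LENGTH - 1)                    # one-point crossover
    return a[:cut] + b[cut:]

def mutate(child, r):
    child = list(child)
    child[r.randrange(LENGTH)] ^= r.randint(0, 1)     # maybe flip one bit
    return child

initial_best = max(map(sum, population))              # fitness = count of 1s
best = genetic_algorithm(population, sum, crossover, mutate, 30, rng)
```

Because the top half always survives intact, the best fitness never decreases across generations.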