EECS 348 Midterm
Terms in this set (68)
Goals of AI
Think like a human (Cognitive Modeling)
Think rationally (Logic-based Systems)
Act like a human (Turing Test)
Act rationally (Rational Agents)
Rational Agent
An agent (program) that does the "right" thing, given its goals, its abilities, what it perceives of its environment and its prior knowledge
What does a rational agent do?
Our goal as AI programmers is to develop agents that behave rationally. This means we must specify what the agent does given:
- Its goals
- Its percepts (what it perceives)
- Its possible actions
- Its prior knowledge
agent's plan of action
policy or "agent function"
AI Programming
Policy Design and Implementation
Policy Design and Implementation Techniques
Search, reasoning with utility, reasoning with knowledge and uncertainty, learning
Search-based agent
1. Formulate problem and goal
2. Search for a sequence of actions that will lead to the goal (the policy)
3. Execute the actions one at a time
tree search basic idea
def TreeSearch(problem, strategy):
    initialize search tree using information in the problem
    while true:
        if there are no candidates for expansion, return failure
        choose a leaf node for expansion according to strategy
        if node contains goal state, return solution
        else expand node and add resulting nodes to search tree
state
(representation of) a physical configuration
node
a data structure constituting part of a search tree; includes state, parent node, action, path cost g(x), and depth
expand
creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Tree Search Algorithm
1. Add the initial state (root) to the <fringe>
2. Choose a node (curr) to examine from the <fringe> (if there is nothing in <fringe> - FAILURE)
3. Is curr a goal state?
If so, SOLUTION
If not, continue
4. Expand curr by applying all possible actions (add the new resulting states to the <fringe>)
5. Go to step 2
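The five steps above can be sketched in Python (a minimal illustration, not from the course materials; the +1/×2 successor function in the usage below is an invented toy problem). With a FIFO fringe the search order is breadth-first; swapping `popleft()` for `pop()` would make it depth-first.

```python
from collections import deque

def tree_search(initial_state, goal_test, successors):
    """Generic tree search with a FIFO fringe (breadth-first order).

    `successors(state)` returns (action, next_state) pairs.
    Returns the list of actions reaching a goal, or None on failure.
    """
    fringe = deque([(initial_state, [])])      # step 1: root into the fringe
    while fringe:                              # empty fringe -> FAILURE
        state, path = fringe.popleft()         # step 2: choose a node (FIFO)
        if goal_test(state):                   # step 3: goal check
            return path                        # SOLUTION
        for action, nxt in successors(state):  # step 4: expand curr
            fringe.append((nxt, path + [action]))
    return None
```

For example, `tree_search(1, lambda s: s == 5, lambda s: [("+1", s + 1), ("*2", s * 2)])` finds the shallowest action sequence turning 1 into 5.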
Uninformed search strategies
use only the information available in the problem definition
- Depth First Search
- Breadth First Search
- Depth-limited search
- Iterative deepening search
Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
- fringe is a FIFO queue, i.e., new successors go at end
Depth-first search
• Expand deepest unexpanded node
• Implementation:
- fringe = LIFO stack, i.e., put successors at front
Breadth-first search time complexity
O(b^(d+1))
Breadth-first search space complexity
O(b^(d+1))
Breadth-first search complete?
yes
Breadth-first search optimal?
yes (when all step costs are equal)
Depth-first search time complexity
O(b^m)
Depth-first search space complexity
O(bm), where b is the branching factor and m the maximum depth
Depth-first search complete?
No (yes, if space is finite and no circular paths)
Depth-first search optimal?
no
Breadth-first search problems
memory
Depth-first search problems
not optimal and not necessarily complete
Depth limited depth-first search
Depth-first search, but with a depth limit L specified
- nodes at depth L are treated as if they have no successors
- we only search down to depth L
Depth limited depth-first search time complexity
O(b^L)
Depth limited depth-first search space complexity
O(bL)
Depth limited depth-first search complete?
No (if the shallowest solution is deeper than L)
Depth limited depth-first search optimal?
no
Iterative deepening search
For depth L = 0, 1, ...., ∞:
    run depth-limited DFS with limit L
    if solution found, return result
• Blends the benefits of BFS and DFS
- searches in a similar order to BFS
- but has the memory requirements of DFS
• Will find the solution when L is the depth of
the shallowest goal
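The scheme above can be sketched as follows (an illustrative toy, not from the course; the `max_depth` parameter is added only to bound the otherwise infinite loop):

```python
def depth_limited_dfs(state, goal_test, successors, limit):
    """Depth-limited DFS: returns a path of states to a goal, or None.
    Nodes at the depth limit are treated as if they have no successors."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        result = depth_limited_dfs(nxt, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(state, goal_test, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(state, goal_test, successors, limit)
        if result is not None:
            return result
    return None
```

On the toy problem where successors of n are n+1 and 2n, `iterative_deepening(1, lambda s: s == 5, lambda s: [s + 1, s * 2])` returns the shallowest path `[1, 2, 4, 5]`.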
Iterative deepening search time complexity
O(b^d)
Iterative deepening search space complexity
O(bd)
Iterative deepening search complete?
yes
Iterative deepening search optimal?
yes
Best-first search
use an evaluation function f(n) for each node
- estimate of "desirability"
- Expand most desirable unexpanded node
Implementation:
Order the nodes in fringe in decreasing order of desirability
Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal
A* search
avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to goal (the evaluation of the desirability of n)
tree search algorithm - keeping track of visited
1. Start with the initial node as curr
2. Have I been to curr before? (is it in CLOSED?) If so, skip it
3. Is curr the goal? If so, SOLUTION
4. If neither, expand curr - add its children/successors to OPEN and add curr to CLOSED
5. Choose the node in OPEN with the smallest f(n) as curr and go to step 2
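Combining these steps with f(n) = g(n) + h(n) gives A*. A minimal sketch (the graph and heuristic values in the usage below are invented for illustration):

```python
import heapq

def astar(start, goal_test, successors, h):
    """A* with an OPEN priority queue ordered by f = g + h and a CLOSED set.

    `successors(state)` yields (next_state, step_cost) pairs; `h` is the
    heuristic estimate of cost-to-goal. Returns (path, cost) or None.
    """
    open_heap = [(h(start), 0, start, [start])]     # entries: (f, g, state, path)
    closed = set()
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)  # smallest f(n) first
        if state in closed:
            continue                                  # already expanded cheaper
        if goal_test(state):
            return path, g
        closed.add(state)
        for nxt, cost in successors(state):
            if nxt not in closed:
                heapq.heappush(open_heap,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None
```

With an admissible h (never overestimating), the first goal popped is on an optimal path.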
admissible
• A heuristic h(n) is admissible if for every
node n, h(n) ≤ h'(n), where h'(n) is the true cost to reach the goal state from n.
• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
relaxed problem
• A problem with fewer restrictions on the actions
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
dominance
• If h2(n) ≥ h1(n) for all n (both admissible)
• then h2 dominates h1
• h2 is better for search
single-state problem
deterministic, fully observable
Agent knows exactly which state it will be in; solution is a sequence of actions
sensorless (conformant) problem
Non-observable
Agent may have no idea where it is; solution is a sequence
contingency problem
Nondeterministic and/or partially observable
- percepts provide new information about current state
- often interleave search, execution
exploration problem
unknown state space
optimal strategy
at least as good as any other strategy, no matter what the opponent does
- If there's a way to force a win, it will find it
- Will only lose if there's no other option
Minimax Algorithm
MINIMAX-VALUE(n) =
if n is a terminal state
then Utility(n)
else if MAX's turn
the MAXIMUM MINIMAX-VALUE of all possible successors to n
else if MIN's turn
the MINIMUM MINIMAX-VALUE of all possible successors to n
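The recursive definition above translates almost directly into Python. A minimal sketch (the game tree in the usage below is an invented example, with states that have no successors treated as terminal):

```python
def minimax_value(state, is_max, successors, utility):
    """MINIMAX-VALUE: terminal states score with utility; MAX takes the
    maximum successor value, MIN the minimum."""
    children = successors(state)
    if not children:                 # terminal state
        return utility(state)
    values = [minimax_value(c, not is_max, successors, utility) for c in children]
    return max(values) if is_max else min(values)
```

For instance, with MAX to move at the root of a depth-2 tree whose leaves score 3, 12 (left branch) and 2, 8 (right branch), MIN drives each branch to its minimum (3 and 2) and MAX picks 3.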
Pruning
eliminate parts of the tree from consideration, does not affect final result
Alpha-Beta pruning
prunes away branches that can't possibly influence the final decision
Consider a node n
If a player has a better choice m (at a parent or
further up), then n will never be reached
So, once we know enough about n by looking at some successors, then we can prune it.
α is the value of the best (i.e., highest value)
choice found so far at any choice point along the path for max
• If v is worse than α, max will avoid it
=> prune that branch
• Define β similarly for min
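Folding α and β into the minimax recursion gives alpha-beta pruning. A sketch (the three-branch tree in the test is the standard textbook-style example, invented here):

```python
import math

def alphabeta(state, is_max, successors, utility, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: alpha is MAX's best value found so far
    along the path, beta is MIN's; prune once alpha >= beta."""
    children = successors(state)
    if not children:
        return utility(state)
    if is_max:
        value = -math.inf
        for c in children:
            value = max(value, alphabeta(c, False, successors, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # MIN above will never let play reach this node
        return value
    else:
        value = math.inf
        for c in children:
            value = min(value, alphabeta(c, True, successors, utility, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break  # MAX above already has a better option
        return value
```

Pruning never changes the value returned at the root; it only skips branches that cannot influence it.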
cutting off search
• Change:
- if TERMINAL-TEST(state) then return UTILITY(state)
- if CUTOFF-TEST(state,depth) then return EVAL(state)
• Introduces a fixed-depth limit
- Is selected so that the amount of time will not exceed what the rules of the game allow.
• When cutoff occurs, the evaluation is performed.
Heuristic EVAL
• Idea: produce an estimate of the expected utility of the game from a given position.
• Performance depends on quality of EVAL.
• Requirements:
- EVAL should order terminal nodes in the same way as UTILITY.
- Computation must not take too long.
- For non-terminal states the EVAL should be strongly correlated with the actual chance of winning.
Constraint Satisfaction Problem
There's a set of variables. Each variable x has a domain D of possible values. Usually D is discrete and finite.
There's a set of constraints. Each constraint C involves a subset of variables and specifies the allowable combinations of values of these variables.
Assign a value to every variable such that all constraints are satisfied
Backtracking Algorithm
CSP-BACKTRACKING(PartialAssignment a)
- If a is complete then return a
- X <= select an unassigned variable
- D <= select an ordering for the domain of X
- For each value v in D do
• If v is consistent with a then
- Add (X= v) to a
- result <= CSP-BACKTRACKING(a)
- If result ≠ failure then return result
- Return failure
CSP-BACKTRACKING({})
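The pseudocode above can be sketched in Python (the map-coloring instance in the usage is an invented three-region example; `consistent(var, val, assignment)` stands in for "v is consistent with a"):

```python
def csp_backtracking(assignment, variables, domains, consistent):
    """CSP-BACKTRACKING: extend a partial assignment one variable at a time,
    undoing the assignment and backtracking when a value leads to failure."""
    if len(assignment) == len(variables):
        return assignment                        # a is complete
    var = next(v for v in variables if v not in assignment)  # select unassigned X
    for val in domains[var]:                     # each value v in D
        if consistent(var, val, assignment):
            assignment[var] = val                # add (X = v) to a
            result = csp_backtracking(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]                  # undo and try the next value
    return None                                  # failure
```

Called as `csp_backtracking({}, ...)`, mirroring CSP-BACKTRACKING({}).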
Constraint Propagation
the process of determining how the possible values of one variable affect the possible values of other variables
Forward checking
After a variable X is assigned a value v, look at each unassigned variable Y that is connected to X by a constraint and delete from Y's domain any value that is inconsistent with v
Removal of Arc Inconsistencies
REMOVE-ARC-INCONSISTENCIES(J,K)
• removed <= false
• X <= label set of J
• Y <= label set of K
• For every label y in Y do
- If there exists no label x in X such that the constraint (x,y) is satisfied then
• Remove y from Y
• If Y is empty then contradiction <= true
• removed <= true
• Label set of K <= Y
• Return removed
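A direct Python rendering of the routine above (the x &lt; y constraint in the usage is an invented example; `satisfied(x, y)` plays the role of "the constraint (x, y) is satisfied"):

```python
def remove_arc_inconsistencies(domains, j, k, satisfied):
    """Remove from K's label set every value y that has no supporting
    value x in J's label set. Returns True if anything was removed."""
    removed = False
    for y in list(domains[k]):                          # iterate over a copy
        if not any(satisfied(x, y) for x in domains[j]):
            domains[k].remove(y)                        # no support: remove y
            removed = True
    return removed
```

If K's domain empties out, the caller can conclude a contradiction, as in the pseudocode.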
random search
1. Select (random) initial state (initial guess at solution)
2. If not goal state, make local modification to improve current state
3. Repeat Step 2 until goal state found (or out of time)
Requirements:
- generate a random (probably-not-optimal) guess
- evaluate quality of guess
- move to other states (well-defined neighborhood function)
- do these operations quickly
local search algorithms
• Hill-climbing
• Simulated annealing
• Local Beam Search
• Stochastic Beam Search
• Genetic Algorithms
hill-climbing search
look at neighbor states and choose the best one
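A minimal steepest-ascent sketch of this idea (the quadratic objective in the usage is invented for illustration):

```python
def hill_climbing(state, neighbors, value):
    """Hill climbing: look at the neighbor states, move to the best one,
    and stop at a state no neighbor improves on (a local maximum)."""
    while True:
        candidates = neighbors(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state                 # local (possibly global) maximum
        state = best
```

For example, maximizing -(x - 3)² with neighbors x ± 1 climbs from 0 to the peak at 3.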
Random-restart hill climbing
when you hit a local maximum, start again
Stochastic Hill Climbing
look at neighbor states and pick one that's better than the current state at random (a weighted random choice), so you are less likely to get stuck in a local maximum or minimum
Stochastic beam search
• Instead of choosing the k best from the pool, choose k at "random", weighted by value
• Like natural selection:
- Successors = offspring
- State = organism
- Value = fitness
First-choice hill climbing
generates successors randomly until you find one that is an uphill neighbor
Hill-climbing space complexity
O(1)
Problems with hill-climbing
Local maxima/minima and ridges
Simulated annealing search
escape local maxima by allowing some "bad" moves but gradually decrease their frequency
Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
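The loop above can be sketched as follows (a toy, not from the course; the one-dimensional objective in the usage is invented, and an iteration cap is added so the sketch always terminates):

```python
import heapq

def local_beam_search(states, value, successors, goal_test, iterations=100):
    """Local beam search: keep k states, generate all their successors each
    iteration, and keep only the k best from the complete pool."""
    for _ in range(iterations):
        for s in states:
            if goal_test(s):
                return s                         # a goal appeared: stop
        pool = [n for s in states for n in successors(s)]
        if not pool:
            break
        k = len(states)
        states = heapq.nlargest(k, pool, key=value)  # k best from the pool
    return max(states, key=value)
```

Unlike k independent restarts, the k slots share one pool, so beams that find promising regions crowd out the others.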
Genetic algorithms
1. Choose initial population
2. Evaluate fitness of each in population
3. Repeat the following until we hit a terminating condition:
1. Select best-ranking to reproduce
2. Breed using crossover and mutation
3. Evaluate the fitnesses of the offspring
4. Replace worst ranked part of population with offspring
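The four-step loop above can be sketched as follows (a minimal toy, not from the course: the selection scheme keeps the fitter half as parents, and the crossover/mutation operators are supplied by the caller; the seed is fixed only to make the sketch deterministic):

```python
import random

def genetic_algorithm(population, fitness, crossover, mutate,
                      generations=50, seed=0):
    """Minimal GA loop: rank by fitness, keep the fitter half as parents,
    breed offspring by crossover + mutation, and replace the worst-ranked
    half of the population with the offspring."""
    random.seed(seed)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)    # evaluate and rank
        parents = population[:len(population) // 2]   # select best-ranking
        offspring = []
        while len(offspring) < len(population) - len(parents):
            a, b = random.sample(parents, 2)          # breed: crossover
            offspring.append(mutate(crossover(a, b))) # ... then mutation
        population = parents + offspring              # replace worst ranked
    return max(population, key=fitness)
```

Because the fitter half always survives unchanged, the best fitness in the population never decreases from one generation to the next.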