CS 171 Final Exam
Terms in this set (97)
Assigns each sentence a degree of belief ranging from 0 to 1
P(a ∧ b | c) = P(a | c) P(b | c)
P(a ∧ b) = P(a) P(b)
Product Rule (Chain Rule)
P(a ∧ b ∧ c) = P(a | b ∧ c) P(b | c) P(c)
P(a | b) = P(b | a) P(a) / P(b)
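The product rule and Bayes' rule above can be checked numerically; the probabilities below are made-up values chosen only for illustration.

```python
# Illustrative check of Bayes' rule: P(a | b) = P(b | a) P(a) / P(b).
# All numbers here are hypothetical.
p_a = 0.01             # prior P(a)
p_b_given_a = 0.9      # likelihood P(b | a)
p_b_given_not_a = 0.2  # P(b | ¬a)

# Total probability: P(b) = P(b | a) P(a) + P(b | ¬a) P(¬a)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior by Bayes' rule
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))
```

Note how a strong likelihood (0.9) still yields a small posterior when the prior is tiny.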
Formal symbol system for representation and inference
True in every possible world
Inference system can derive any sentence that is entailed
Conjunctive Normal Form
A sentence expressed as a conjunction of clauses (disjunctions of literals)
Inference system derives only entailed sentences
True in at least one possible world
The idea that a sentence follows logically from other sentences
Degree of belief accorded after some evidence is obtained
Degree of belief accorded without any other information
Factored Representation (Probability Concept)
A possible world is represented by variable/value pairs
Takes values from its domain with specified probabilities
Joint Probability Distribution
Gives probability of all combinations of values of all variables
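A full joint distribution over two Boolean variables is just a table of probabilities over all value combinations; this tiny sketch (with made-up numbers) shows that the entries sum to 1 and how a marginal is obtained by summing out a variable.

```python
# Hypothetical full joint distribution over Boolean variables X and Y.
joint = {
    (True, True): 0.3, (True, False): 0.2,
    (False, True): 0.1, (False, False): 0.4,
}

# The entries of a joint distribution sum to 1.
total = sum(joint.values())

# Marginal P(X = true), by summing out Y.
p_x_true = sum(p for (x, _), p in joint.items() if x)
print(total, p_x_true)
```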
Perceives environment by sensors, acts by actuators
Agent's perceptual inputs at any given instant
Complete history of everything agent has perceived
Agent that acts to maximize its expected performance measure
Next state of environment is fixed by current state and action
Environment can change while agent is deliberating
Evaluates any given sequence of environment states for utility
Maps any given percept sequence to an action
Process of removing detail from a representation
Sensors give the complete state of environment at each time
All states reachable from the initial state by a sequence of actions
Set of all leaf nodes available for expansion at any given time
Uses no additional information beyond problem definition
Uses problem-specific knowledge beyond problem definition
Guaranteed to find lowest cost among all solutions
Guaranteed to find a solution if one is accessible
Expand a state
Apply each legal action to state, generating a new set of states
Maximum number of successors of any node
Estimates cost of cheapest path from current state to goal state
Tries to minimize the total estimated solution cost
Greedy Best-First Search
Tries to expand the node believed to be closest to the goal
For every successor n' of n generated by an action a, h(n) ≤ cost(n, a, n') + h(n')
Never over-estimates the cost of the cheapest path to a goal state
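The heuristic-search cards above can be tied together with a minimal A* sketch: it expands the frontier node with the lowest f(n) = g(n) + h(n). The toy graph and heuristic values below are assumptions for the example only.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A* sketch: pop the frontier node with lowest f = g + h.
    Returns the cost of the cheapest path found, or None."""
    frontier = [(h(start), 0, start)]   # (f, g, state)
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        if g > best_g.get(state, float("inf")):
            continue  # stale frontier entry
        for nxt, step_cost in neighbors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt))
    return None

# Toy graph with an assumed admissible heuristic, for illustration.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
h = {"A": 2, "B": 1, "C": 0}.get
print(a_star("A", "C", lambda s: graph[s], h))
```

With an admissible h, A* returns the optimal cost: here the path A-B-C (cost 2) beats the direct edge A-C (cost 4).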
Tree where nodes are game states and edges are game moves
Function that decides when to stop exploring this search branch
Returns same move as MiniMax, but may prune more branches
Weighted Linear Function
Vector dot product of a weight vector and a state feature vector
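A weighted linear evaluation function is just this dot product; the weights and feature counts below are hypothetical (a chess-like material count).

```python
def eval_linear(weights, features):
    """Weighted linear function: dot product of a weight vector
    and a state's feature vector."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical material-count features: queen, rook, bishop, knight, pawn.
weights = [9, 5, 3, 3, 1]   # assumed piece values
features = [1, 2, 2, 2, 8]  # assumed piece counts for one side
print(eval_linear(weights, features))
```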
In all game instances, total pay-off summed over all players is the same
Optimal strategy for 2-player zero-sum games of perfect information, but impractical given limited time to make each move
Function that specifies a player's move in every possible game state
Heuristic Evaluation Function
Approximates the value of a game state (i.e., of game position)
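The game-tree cards above can be sketched as minimax with alpha-beta pruning: it returns the same value as plain MiniMax but skips branches that cannot affect the result. The `moves` and `evaluate` callbacks and the toy tree are assumed for the example.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning (sketch). `moves` returns legal
    successor states; `evaluate` is a heuristic evaluation function."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off: MIN will avoid this branch
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, moves, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cut-off: MAX will avoid this branch
        return value

# Toy game tree: internal nodes are lists, leaves are payoffs for MAX.
tree = [[3, 5], [2, 9]]
moves = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
print(alphabeta(tree, 2, float("-inf"), float("inf"), True, moves, evaluate))
```

Here MAX gets min(3, 5) = 3 from the left branch; the right branch is cut off after the leaf 2, since MIN already guarantees a value below 3 there.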
Solution to a CSP
A complete and consistent assignment
Every variable is associated with a value
Nodes correspond to variables, links connect variables that participate in a constraint
All values in a variable's domain satisfy its binary constraints
When variable X is assigned, delete any value of other variables that is inconsistent with the assigned value of X
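The pruning step described above (forward checking) can be sketched as follows; the constraint encoding, a predicate per ordered variable pair, is a hypothetical interface chosen for the example.

```python
def forward_check(domains, constraints, var, value):
    """Forward checking sketch: after assigning var = value, delete any
    value of another variable that is inconsistent with it.
    Returns pruned copies of the domains, or None on a domain wipe-out."""
    new_domains = {v: list(d) for v, d in domains.items()}
    new_domains[var] = [value]
    for other in domains:
        if other == var:
            continue
        pred = constraints.get((var, other))
        if pred is None:
            continue  # no constraint between this pair
        new_domains[other] = [w for w in new_domains[other] if pred(value, w)]
        if not new_domains[other]:
            return None  # wipe-out: this assignment cannot be extended
    return new_domains

# Tiny map-colouring example: X and Y must take different colours.
domains = {"X": ["r", "g"], "Y": ["r", "g"]}
constraints = {("X", "Y"): lambda a, b: a != b}
print(forward_check(domains, constraints, "X", "r"))
```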
Associates values with some or all variables
All values in a variable's domain satisfy its unary constraints
Set of allowed values for some variable
Specifies an allowable combination of variable values
The values assigned to variables do not violate any constraints
Defines the truth of each sentence in each possible world
Specifies all the sentences that are well formed
Improves performance of future tasks after observing the world
Supervised learning with numeric output values
Surface in a high-dimensional space that separates the classes
Choose an over-complex model based on irrelevant data patterns
Randomly split the data into a training set and a test set
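That holdout split can be sketched in a few lines; the test fraction and seed are arbitrary choices for the example.

```python
import random

def holdout_split(examples, test_fraction=0.2, seed=0):
    """Holdout cross-validation sketch: shuffle the examples, then split
    them into a training set and a test set."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

train, test = holdout_split(list(range(10)))
print(len(train), len(test))
```

The test set is held out during learning and used only to estimate accuracy on unseen examples.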
Agent learns patterns in the input with no explicit feedback
Factored Representation (Machine Learning Concept)
Fixed set, list, or vector of features/attributes paired with a value
Agent observes input-output pairs & learns to map input to output
Examples distinct from training set, used to estimate accuracy
Example input-output pairs, from which to discover a hypothesis
Supervised learning with a discrete set of possible output values
Performance measure, environment, actuators, sensors
Uniform Cost Search
Iterative Deepening Search
(T/F) The information gain from an attribute A is how much classifier accuracy improves when attribute A is added to the example feature vectors in the training set.
(T/F) Overfitting is a general phenomenon that occurs with most or all types of learners.
(T/F) An agent is learning if it improves its performance on future tasks after making observations about the world.
(T/F) A decision tree can learn and represent any Boolean function.
(T/F) Cross-validation is a way to improve the accuracy of a learned hypothesis by reducing over-fitting using Ockham's razor.
(T/F) A constraint satisfaction problem (CSP) consists of a set of variables, a set of domains (one for each variable), and a set of constraints that specify allowable combinations of values.
(T/F) A consistent assignment is one in which every variable is assigned.
(T/F) A complete assignment is one that does not violate any constraints.
(T/F) A partial assignment is one that violates only some of the constraints.
(T/F) The nodes of a constraint graph correspond to variables of the problem, and a link connects any two variables that participate in a constraint.
(T/F) A constraint consists of a pair <scope, rel>, where scope is a tuple of variables that participate and rel defines the values those variables can take on.
(T/F) Performing constraint propagation involves using the constraints to reduce the number of legal values for a variable, which in turn can reduce the legal values for another variable, and so on.
(T/F) A variable in a CSP is arc-consistent iff, for each value in its domain and each of its binary constraints, that constraint is satisfied by that domain value together with some value in the domain of the other variable in that constraint.
(T/F) Constraint satisfaction problems are semi-decidable because they may never terminate if the problem has no legal solution.
(T/F) The minimum-remaining-values (MRV) heuristic chooses the variable with the fewest remaining legal values to assign next.
(T/F) The degree heuristic is used to set the temperature in methods for solving CSPs based on Simulated Annealing.
(T/F) The least-constraining-value heuristic prefers the value that rules out the fewest choices for the neighboring variables in the constraint graph.
(T/F) The min-conflicts heuristic for local search prefers the value that results in the minimum number of conflicts with other variables.
(T/F) The min-conflicts heuristic is rarely used because it is only effective when the constraint graph is a tree.
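The min-conflicts local search referred to in the questions above can be sketched as follows; the `conflicts(var, val, assignment)` callback and the tiny graph-colouring instance are assumptions for the example.

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=1000, seed=0):
    """Min-conflicts sketch: start from a random complete assignment,
    then repeatedly reassign a conflicted variable to the value that
    minimizes the number of conflicts with other variables."""
    rng = random.Random(seed)
    assignment = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment  # complete and consistent
        var = rng.choice(conflicted)
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None  # no solution found within the step budget

# 2-colouring of the path X - Y - Z, for illustration only.
edges = [("X", "Y"), ("Y", "Z")]
def conflicts(var, val, assignment):
    return sum(1 for a, b in edges
               if var in (a, b) and assignment[a if var == b else b] == val)

domains = {v: ["r", "g"] for v in "XYZ"}
solution = min_conflicts(["X", "Y", "Z"], domains, conflicts)
print(solution)
```

Unlike backtracking search, this is a local search over complete assignments, and it works well far beyond tree-structured constraint graphs.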