CS320 Exam #1
Terms in this set (65)
Measure of progress
Each step taken by the algorithm brings it closer to termination
-Something that strictly increases with each iteration
Perfect Matching
Every man is matched to exactly one woman and vice versa (a one-to-one correspondence).
Stable Match
A perfect match with no instabilities
-Instability: man and woman each prefer the other to their current partners
What does "correct" mean for Gale-Shapley?
-> Does it terminate?
-> If so, when?
-> Does it come up with a perfect matching?
-> Does it come up with a stable matching?
-Does it terminate? Yes
-When? When there are no free men
-Does it come up with a perfect matching? Yes
-> Does it come up with a stable matching? Yes
Gale-Shapley pseudocode for algorithm
Initially all men and women are free
While there is a man m who is free and hasn't proposed to every woman
-> Choose such a man
-> Let w be the highest-ranked woman in m's preference list to whom m has not yet proposed
-> If w is free then (m, w) become engaged
-> Else w is currently engaged to other man o
---> If w prefers o to m then m remains free (and moves on to the next highest-ranked woman in his preference list)
---> Else w prefers m to o: (m,w) become engaged and o becomes free
Return the set of engaged pairs
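The pseudocode above translates almost line-for-line into runnable code; a minimal sketch in Python, assuming preference lists are given as dicts mapping each person to a best-first list (the names and input format are illustrative, not from the card):

```python
def gale_shapley(men_prefs, women_prefs):
    """Gale-Shapley stable matching.
    men_prefs[m]   = list of women, best first.
    women_prefs[w] = list of men, best first."""
    # rank[w][m] = position of m in w's list (lower index = preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}  # index of next woman to propose to
    engaged_to = {}                            # w -> her current partner m
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:                    # w is free: (m, w) become engaged
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:  # w prefers m: she trades up
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:                                      # w prefers her current partner
            free_men.append(m)
    return {m: w for w, m in engaged_to.items()}
```

For example, with men_prefs = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']} and women_prefs = {'w1': ['m1', 'm2'], 'w2': ['m1', 'm2']}, the result pairs m1 with w1 and m2 with w2.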
T/F (GS): w remains engaged from the point at which she receives her first proposal
True
T/F (GS): The sequence of partners of a woman w to which she is engaged gets worse and worse
False - it gets better and better
T/F (GS): A man is free until he proposes to the highest ranked woman on his list; at this point he may or may not become engaged
True - he may alternate between being free and being engaged
T/F (GS): The sequence of women to whom m proposes to gets worse and worse
True
Prove the following statement:
If m is free at some point in the execution of the algorithm, then there is a woman to whom he has not yet proposed
Proof by contradiction:
Suppose there comes a point when m is free but has already proposed to every woman.
Then, each of the n women is engaged at this point in time.
Since the set of engaged pairs forms a matching, there must also be n engaged men at this point in time.
But there are only n men total and m is not engaged, so this is a contradiction.
Prove by contradiction that GS produces a perfect matching
Proof by contradiction
Suppose that the algorithm terminates with a free man m.
Since the loop terminates, m has already proposed to every woman. Once a woman becomes engaged, she remains engaged until the algorithm ends, so all women are engaged.
The set of engaged couples is a matching, so there must be n engaged men as well. Since there are only n men in total and m is not engaged, this is a contradiction.
Prove the GS returns a stable matching
Proof by contradiction:
Assume there is an instability with respect to S (the matching produced by the algorithm) and obtain a contradiction.
Assume a matching:
S = {(m, w'), (m', w)...} and an unstable pair: (m, w) where m and w both prefer each other over their current partners.
Case 1: m never considered w
-This means m prefers w' so (m, w) is not an instability
Case 2: m did consider w
-This means w rejected m by trading up to m', and w prefers m' over m, so (m,w) is not an instability
In either case (m,w) is not an instability so this is a contradiction
T/F: Executions of GS may differ depending on which man you start with
False - all executions of the GS algorithm yield the same matching
What type of algorithms solve the following problems:
Interval Scheduling
Greedy Algorithms
Ex: of interval scheduling - you have a resource (classroom) - and many people request to use it for certain periods of time. Assume the resource can be used by at most one person at a time.
-The scheduler wants to accept a subset of these requests, rejecting all others, so that the accepted requests do not overlap.
-Goal is to maximize the number of requests accepted
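The greedy rule that solves this optimally is "always accept the compatible request that finishes earliest." A minimal sketch, assuming requests are (start, finish) tuples (the representation is an assumption, not from the card):

```python
def interval_schedule(requests):
    """Greedy interval scheduling: repeatedly accept the compatible
    request with the earliest finish time.
    requests: list of (start, finish) tuples."""
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(requests, key=lambda r: r[1]):
        if start >= last_finish:   # compatible with everything accepted so far
            selected.append((start, finish))
            last_finish = finish
    return selected
```

Sorting dominates the running time, so this is O(n log n).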
What type of algorithms solve the following problems:
Weighted Interval Scheduling
Dynamic Programming
Ex: Each request has an associated value or weight. Our goal is to find a compatible subset of intervals of maximum total value
What type of algorithms solve the following problems:
Bipartite Matching
Maximum Flow (polynomial time)
A graph is bipartite if its nodes can be partitioned into two sets X and Y such that every edge goes from a node x in X to a node y in Y
What type of algorithms solve the following problems:
Independent Set
No known polynomial algorithm
Subset of nodes such that no two are joined by an edge
Common running-time growth rates, in increasing order
1 < log n < sqrt(n) < n < n log n < n^2 < 2^n < n!
Complexity of Brute-force checking every possible solution
2^n or worse for inputs of size n
N0 in upper/lower/tight bounds
The threshold: the point beyond which the bound holds, i.e. the inequality between f(n) and cg(n) is true for all n >= n0
Asymptotic Upper Bound
(Big O)
f(n) is always on or below cg(n) starting at some point N0
f(n) = O(g(n))
Asymptotic Lower Bound
(Big Omega)
f(n) is always above or on cg(n) starting at N0
f(n) = BIG OMEGA(g(n))
Asymptotic Tight Bound
(Big Theta)
f(n) is always above or on c1g(n) and below or on c2g(n) starting at some N0
-same function g(n) with different constants
f(n) = BIG THETA(g(n))
Function Ratios:
Want to determine the bounds of two functions - f(n) and g(n)
-what is their relation to each other as their input size gets bigger and bigger
Upper bound:
lim f(n)/g(n) = 0
n-> oo
-g(n) grows faster
-f(n) = O(g(n)) and f(n) = o(g(n))
Lower Bound:
lim f(n)/g(n) = oo
n-> oo
-f(n) grows faster
-f(n) = BigOmega(g(n)) and f(n) = w(g(n))
Tight Bound:
lim f(n)/g(n) = c (>0)
n-> oo
-they grow the same way with a constant difference between them
-f(n) = BigTheta(g(n)) (there is no "little" version of Theta)
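The limit test can be checked numerically; a small stdlib-only sketch comparing f(n) = n log n against g(n) = n^2 (the sample points are arbitrary, chosen just to show the trend):

```python
import math

def ratio(f, g, n):
    """f(n)/g(n) at a single n; tracking this as n grows hints at the limit."""
    return f(n) / g(n)

f = lambda n: n * math.log(n)   # n log n
g = lambda n: n ** 2            # n^2

# The ratio shrinks toward 0 as n grows, suggesting n log n = o(n^2),
# hence also n log n = O(n^2).
samples = [ratio(f, g, n) for n in (10, 10**3, 10**6)]
```

This is only numeric evidence, of course; the flashcard's limit definition is the actual test.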
T/F: A lower bound will always be smaller, and upper bound always larger, and a theta will be roughly "the same"
True
Priority Queue (definition)
A data structure that provides the following functions:
-Finding the min element
-Extracting the min element
-Inserting new element
Implementation of a priority queue containing at most n elements at any time so that the elements can be added and deleted, and the element with the minimum key selected, in O(logn) time per operation
Heap
Heap
Complete binary tree that has the heap property:
-all nodes have a lower value than the nodes in their subtrees (min heap)
-we insert new nodes at the end to maintain a complete binary tree and then swap nodes (heapify) as necessary to restore the heap property
Implementation of a Heap (how do we store it)
We can store a heap as an array with the following indexing:
-The root is at A[1]
-For any node i, the left child is at index 2i and the right is at 2i+1
-A[0] is -oo
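The array layout above can be sketched as a small class; a minimal sketch using None in place of the -oo sentinel at A[0] (an implementation choice, not part of the card):

```python
class MinHeap:
    """Binary min-heap stored in an array, 1-indexed as described above."""
    def __init__(self):
        self.A = [None]          # A[0] unused (stands in for the -oo sentinel)

    def insert(self, key):
        self.A.append(key)       # add at the end of the array...
        i = len(self.A) - 1
        while i > 1 and self.A[i] < self.A[i // 2]:   # ...then heapify-up
            self.A[i], self.A[i // 2] = self.A[i // 2], self.A[i]
            i //= 2              # parent of node i is at index i // 2

    def extract_min(self):
        A = self.A
        min_key = A[1]           # the root is the minimum
        A[1] = A[-1]             # move the last element to the root
        A.pop()
        i, n = 1, len(A) - 1
        while True:              # heapify-down to restore the heap property
            left, right, smallest = 2 * i, 2 * i + 1, i
            if left <= n and A[left] < A[smallest]:
                smallest = left
            if right <= n and A[right] < A[smallest]:
                smallest = right
            if smallest == i:
                break
            A[i], A[smallest] = A[smallest], A[i]
            i = smallest
        return min_key
```

Both heapify directions move along one root-to-leaf path, which is why they cost O(log n).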
Heap Complexity -> how many nodes can we have if our array is size 32?
We can have 31 nodes because A[0] = -oo
Heap - if root = height 0, How many nodes are there at each level?
2^h where h is height
Heap - How many total nodes in the tree at any height?
(from the root to that particular height)
2^(h+1) - 1
Heap - height of the tree for a given number of nodes?
height = floor(log2(n)), i.e. round down
-e.g. n = 7 gives height = 2
Heap Implementation Complexity
Note: all logs are base 2
-Find the minimum element: BigTheta(1)
-Extract the minimum element (remove root, then heapify-down): O(log n)
-Heapify-up: O(log n)
-Heapify-down: O(log n)
-Insert
--> add to the end of the array: BigTheta(1)
--> heapify-up/down: O(log n)
-Build up the heap initially - insert n times, once for each element: O(n log n)
Complexity of an ordinary array(list) as a priority queue and one pointer to min value
O(1) to find min value
O(n) to extract min value
O(1) to insert new element
Divide and Conquer- Strategy
1)
Divide
the problem into equal-sized sub-problems
2)
Conquer
the subproblems by solving them recursively.
3)
Combine
the solutions to the subproblems into the solutions for the original problems
Recurrence Relation
Equation that describes a function in terms of its value on smaller inputs
Two parts:
-Base Case
-Function for problem sizes of larger n
Merge Sort Recurrence Relation
T(n) = {c if n = 1,
2T(n/2) + cn otherwise}
-solve left half, solve right half, merge
Divide: constant d
Conquer: 2T(n/2)
Merge: cn
Generalized Master Theorem:
TOO MUCH TO TYPE. ADD TO YOUR CHEAT SHEET
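For the cheat sheet, the standard statement (for divide-and-conquer recurrences T(n) = aT(n/b) + f(n) with constants a >= 1, b > 1) is:

```latex
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1
\]
\[
T(n) =
\begin{cases}
\Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0\\[2pt]
\Theta\!\left(n^{\log_b a} \log n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right)\\[2pt]
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right) \text{ and } a f(n/b) \le c f(n) \text{ for some } c < 1
\end{cases}
```

E.g., merge sort has a = 2, b = 2, f(n) = cn = Theta(n^(log_2 2)) = Theta(n), so the middle case gives T(n) = Theta(n log n).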
Recurrence relation for Heapify
Heapify the left, heapify the right, then heapify the new element into it
T(n) = 2T(n/2) + O(log n), which solves to O(n)
Divide and Conquer vs Dynamic Programming
Divide and Conquer:
-partition the problem into
disjoint
subproblems
-solve the problems recursively
-and then combine their solutions to solve the original problem
Dynamic Programming:
-Applies when the subproblems overlap (aka when the sub problems share subproblems)
-solves each subproblem just once, and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it solves each subproblem
-(Divide and Conquer would need to recompute all of the subproblems each time)
Dynamic Programming
-What types of problems do we typically apply these to?
We typically apply dynamic programming to optimization problems
-Each solution has a value
-We wish to find a solution with the optimal (min/max value)
Dynamic Programming Strategy:
1) Characterize the structure (subproblems) of an optimal solution
2) Recursively define the value of an optimal solution
3) Compute the value of an optimal solution, typically in a bottom-up fashion
--> can be top-down memoization algos as well
4) Construct an optimal solution from computed information
-->if we need only the value of an optimal solution and not the solution itself, then we can omit step 4
Optimal Substructure
optimal solutions to a problem incorporate optimal solutions to related subproblems, which we may solve independently
T/F: Dynamic programming uses additional computation time to save memory
False - Dynamic programming uses additional memory to save computation time
time-memory trade off
T/F: Top-down with memoization and the bottom-up method are equivalent ways to implement a dynamic programming approach
True
Dynamic Programming: Top-Down with Memoization
We write the procedure recursively in a natural manner but modify it to save the result of each subproblem (usually in an array or hash table).
First checks to see whether it has previously solved this subproblem.
--> If so, it returns the saved value
--> If not, it computes the value in the usual manner
"Remembers" what results it has computed previously
Dynamic Programming: Bottom-up Method
Typically depends on some natural notion of "size" of a subproblem, such that solving any particular subproblem depends only on solving "smaller" subproblems
-We sort the subproblems by size and solve them in size order, smallest first.
--> When solving a particular subproblem, we have already solved all of the smaller subproblems its solution depends on, and we have saved their solutions
-We solve each subproblem only once, and when we first see it, we have already solved all of its prerequisite subproblems
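The bottom-up method for the same rod-cutting problem makes the size ordering explicit: lengths are solved smallest first, so every subproblem a length depends on is already in the table (price table in the test is illustrative):

```python
def cut_rod_bottom_up(prices, n):
    """Bottom-up rod cutting: solve subproblems in increasing size order.
    prices[i] = revenue for a piece of length i (prices[0] = 0)."""
    revenue = [0] * (n + 1)        # revenue[j] = best revenue for length j
    for j in range(1, n + 1):      # smallest subproblem first
        # every revenue[j - i] below is already computed and saved
        revenue[j] = max(prices[i] + revenue[j - i] for i in range(1, j + 1))
    return revenue[n]
```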
Dynamic Programming: Reconstructing a solution
In the example of the rod-cutting problem, we returned the value of an optimal solution (revenue), but did not return an actual solution (a list of the piece sizes)
Need to extend the approach to record not only the optimal value computed but also a choice that led to the optimal value
--> i.e. the optimal size of the first piece to cut off
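Recording the choice alongside the value can be sketched for rod cutting: store the size of the first piece that achieved each optimal revenue, then walk those recorded choices to rebuild the cut list (a sketch, with an illustrative price table in the test):

```python
def cut_rod_with_cuts(prices, n):
    """Bottom-up rod cutting that also records, for each length j, the
    size of the first piece in an optimal solution, so the actual list
    of piece sizes can be reconstructed afterwards."""
    revenue = [0] * (n + 1)
    first_cut = [0] * (n + 1)       # choice that achieved revenue[j]
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            if prices[i] + revenue[j - i] > revenue[j]:
                revenue[j] = prices[i] + revenue[j - i]
                first_cut[j] = i    # remember the choice, not just the value
    best = revenue[n]
    pieces = []
    while n > 0:                    # walk the recorded choices
        pieces.append(first_cut[n])
        n -= first_cut[n]
    return best, pieces
```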
Overlapping Subproblems
When a recursive algorithm revisits the same problem repeatedly
As opposed to a divide-and-conquer approach, which generates brand-new problems at each step of the recursion
T/F: in Dynamic Programming (both approaches) we usually store which choice we made in each subproblem in a table so that we do not have to reconstruct their info from the costs that we store
True
(2D array)
Difference in tables between memoization and bottom-up approach
Memoization: memoize the natural, but inefficient, recursive algorithm.
Both maintain a table with subproblem solutions, but the control structure for filling in the table in memoization is more like the recursive algorithm
In memoization, each entry in the table initially contains a special value to indicate that the entry has yet to be filled. When the subproblem is first encountered as the recursive algorithm unfolds, its solution is computed and then stored
T/F: Greedy Algorithms always provide an optimal solution
False - they do not always provide an optimal solution, but for many problems they do
Greedy Algorithm
Always makes the choice that looks best at the moment
-make a locally optimal choice in hope that this choice will lead to a globally optimal solution
-Doesn't need to explore all the choices considered in Dynamic Programming, so it doesn't need to save results to subproblems
T/F: Recursive approach is the same as top-down approach (without memoization)
True
Greedy Strategy
1) Determine optimal problem substructure
--> an optimal solution to the problem contains within it optimal solutions to subproblems
2) Develop recursive solution
3) Show that if we make the greedy choice only 1 subproblem remains
4) Prove it is always safe to make the greedy choice
--> Greedy Choice Property: we can assemble a globally optimal solution by making locally optimal choices
----> choices may depend on choices made so far (in the past) but cannot depend on any future choices or on solutions to future subproblems
5) Develop a recursive algorithm for the greedy strategy
6) Convert the recursive algorithm into an iterative algorithm
Are Greedy Algorithms top-down or bottom-up?
A top-down algorithm making a choice, and reducing the problem instance to a smaller one
--> because you can't look into the future when making decisions
More general version of Greedy Algorithm Strategy
1) Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve
2) Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe
3) Demonstrate optimal substructure by showing that, having made the greedy choice, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy choice we've made, we arrive at an optimal solution to the original problem