COP 4531 - Fall 14 Final Exam
Terms in this set (68)
Selection Sort
runtime: Theta(n^2)
runspace: in place
stable: no
Insertion Sort
runtime: Theta(n^2)
runspace: in place
stable: yes
Heap Sort
runtime: Theta(n log n)
runspace: in place
stable: no
Quick Sort
runtime: Theta(n^2)
runspace: +Theta(n)
stable: no
Merge Sort (array)
runtime: Theta(n log n)
runspace: +Theta(n)
stable: yes
Merge Sort (list)
runtime: Theta(n log n)
runspace: in place
stable: yes
Counting Sort
runtime: Theta(n+k)
runspace: +Theta(k)
stable: yes
note: best possible worst-case runtime
Radix Sort
runtime: Theta(d(n+k))
runspace: +Theta(k)
stable: yes
Bit Sort
runtime: Theta(nb)
runspace: +Theta(n)
stable: yes
Byte Sort
runtime: Theta(nB)
runspace: +Theta(n)
stable: yes
Among the three fsu:: sequential containers, name a minimal set of operations that make fsu::TVector<> uniquely qualified for client use.
a) PushFront ( element )
b) SetSize ( size )
c) operator++ ()
d) operator [] ( index )
e) PushBack ( element )
f) Insert ( iterator , element )
SetSize ( size )
Among the three fsu:: sequential containers, name a minimal set of operations that make fsu::TDeque<> uniquely qualified for client use.
operator [] ( index )
Insert ( iterator , element )
PushBack ( element )
SetSize ( size )
operator++ ()
PushFront ( element )
operator [] ( index )
PushFront ( element )
Among the three fsu:: sequential containers, name a minimal set of operations that make fsu::TList<> uniquely qualified for client use
Insert ( iterator , element )
PushFront ( element )
operator++ ()
PushBack ( element )
operator [] ( index )
SetSize ( size )
Insert ( iterator , element )
Consider the following adjacency list representation of a directed graph G:
[0]: 1
[1]:
[2]: 0
[3]: 0, 2
[4]: 2
[5]: 1, 2
A topological sort for G is:
3 4 5 2 0 1
Consider the following adjacency list representation of a directed graph G:
[0]: 1
[1]:
[2]: 1
[3]: 1, 2, 4
[4]: 2, 5
[5]: 0
A topological sort for G is:
3 4 2 5 0 1
Consider the following adjacency list representation of a graph G:
[0]: 1, 2, 4
[1]: 5
[2]: 1
[3]: 1
[4]: 0, 3, 5
[5]: 0
Show a trace for the LIFO control stack conStack during a depth-first search of G, starting at vertex 0.
conStack
------->
NULL
0
0 1
0 1 5
0 1
0
0 2
0
0 4
0 4 3
0 4
0
Consider the following adjacency list representation of a graph G:
[0]: 1
[1]: 5
[2]: 1
[3]: 1, 2, 4
[4]: 2, 5
[5]: 0
Show a trace for the FIFO control queue conQue during a breadth-first search of G, starting at vertex 3.
conQue
<-----
3
1 2 4
2 4 5
4 5
5
0
Given the following adjacency list representation for a graph G:
[0]->2,4,3
[1]->2,5,4
[2]->0,1,5,3
[3]->0,2,5
[4]->0,1,5
[5]->1,2,3,4
What is the discovery order of vertices produced by depth-first survey?
0 2 1 5 3 4
Given the following adjacency list representation for a graph G:
[0]->2,4,3
[1]->2,5,4
[2]->0,1,5,3
[3]->0,2,5
[4]->0,1,5
[5]->1,2,3,4
What is the finishing order of vertices produced by depth-first search?
3 4 5 1 2 0
Given the following adjacency list representation for a graph G:
[0]->2,4
[1]->2,5
[2]->5
[3]->0,2,5
[4]->1,5
[5]->
What is the discovery order of vertices produced by depth-first survey?
0 2 5 4 1 3
Given the following adjacency list representation for a graph G:
[0]->2,4
[1]->2,5
[2]->5
[3]->0,2,5
[4]->1,5
[5]->
What is the finishing order of vertices produced by depth-first survey?
5 2 1 4 0 3
Given the following adjacency list representation for a graph G:
[0]->1,5
[1]->0,5,2,3
[2]->1,3,4
[3]->1,2,4
[4]->3,2,5
[5]->1,4,0
What is the discovery order of the vertices for Breadth First Survey?
0 1 5 2 3 4
Given the following adjacency list representation for a graph G:
[0]->1
[1]->5
[2]->1
[3]->1,2,4
[4]->2,5
[5]->0
What is the discovery order of the vertices for Breadth First Survey?
0 1 5 2 3 4
Given the following adjacency list representation of the graph G:
[0]->1,5
[1]->0,5,2,3
[2]->1,3,4
[3]->1,2,4
[4]->3,2,5
[5]->1,4,0
What is the grouping of vertices by distance from root after Breadth First Survey?
[ ( 0 ) ( 1 5 ) ( 2 3 4 ) ]
Given the following adjacency list representation of the graph G:
[0]->1
[1]->5
[2]->1
[3]->1,2,4
[4]->2,5
[5]->0
What is the grouping of vertices by distance from root after Breadth First Survey?
[ ( 0 ) ( 1 ) ( 5 ) ] [ ( 2 3 ) ( 4 ) ]
Following is an adjacency matrix representation of a graph G:
0 1 2 3 4 5
0 - 1 - 1 - -
1 - - 1 1 - 1
2 - - - - - 1
3 1 - - - 1 -
4 - - - - - -
5 - - 1 1 - -
What is the adjacency list representation of G?
Note: For readability, '-' is used for zero matrix entries.
[0]: 1, 3
[1]: 2, 3, 5
[2]: 5
[3]: 0, 4
[4]:
[5]: 2, 3
Given the following adjacency list representation of a graph G:
[0]->1,2,4
[1]->5
[2]->1
[3]->1
[4]->0,3,5
[5]->0
G is a directed graph
Given the following adjacency list representation of a graph G:
[0]->1
[1]->5
[2]->1
[3]->1,2,4
[4]->2,5
[5]->0
G is a directed graph
Given the following graph G in adjacency list representation:
[0]->2,4,6,8
[1]->3,5,7,9
[2]->0,1,5,6,9
[3]->
[4]->1,2,5,7
[5]->
[6]->5,2
[7]->8,9
[8]->1,3
[9]->0,5,7
What is the inDegree of vertex 7 ?
3
Given the following graph G in adjacency list representation:
[0]->2,4,6,8
[1]->3,5,7,9
[2]->0,1,5,6,9
[3]->
[4]->1,2,5,7
[5]->
[6]->5,2
[7]->8,9
[8]->1,3
[9]->0,5,7
What is the outDegree of vertex 7 ?
2
Given the following code fragment:
Partition p(10);
p.Union(1,3);
p.Union(2,4);
p.Union(4,6);
p.Union(1,2);
p.Union(7,8);
What is the partition set structure? (p.Display(std::cout);)
{ 0 } { 1 2 3 4 6 } { 5 } { 7 8 } { 9 }
Given the following code:
Partition p(10);
p.Union(1,3);
p.Union(2,4);
p.Union(4,6);
p.Union(1,2);
p.Union(7,8);
What value is returned by the call Find (1,6) ?
true
Given the following code:
Partition p(10);
p.Union(1,3);
p.Union(2,4);
p.Union(4,6);
p.Union(1,2);
p.Union(7,8);
What value is returned by the call Find (4,5) ?
false
What is the value of log* 1000 ?
4
Inserting an element in a vector of size n (not at the end) has this asymptotic runtime (select the best answer available)
O(n)
Suppose the ADT Set is implemented as Set<T,H> using hashing-with-chaining. What is the average case run time [ACRT] of the three set operations Includes(t), Insert(t), and Remove(t)? (Here n is the size of the set and b is the number of buckets)
O(1+n/b)
Suppose the ADT Set is implemented as Set<T,P> using height-balanced trees. What is the average case run time [ACRT] of the three set operations Includes(t), Insert(t), and Remove(t)? (Here n is the size of the set.)
O(log n)
The worst case run time [WCRT] for TBinaryTree<>::Iterator::operator++() (where n = size of the tree) is:
Amortized Θ(1)
Θ(n)
The worst case run time [WCRT] for THashTable<>::Iterator::operator++() (where n = size of the table) is:
Amortized Θ(1)
Θ(n)
Begin with the vector v = [ 50 , 20 , 40 , 10 , 5 , 30 , 45] .
Show the vector after fsu::g_push_heap(v.Begin(), v.End()).
v = [ 50 , 20 , 45 , 10 , 5 , 30 , 40 ]
Code implementing merge_sort (A, p, r) for the index range [p,r) in the array A is:
void merge_sort(int* A, size_t p, size_t r)
{
  if (r - p > 1)
  {
    size_t q = (p+r)/2;
    merge_sort(A,p,q);
    merge_sort(A,q,r);
    merge(A,p,q,r); // defined in separate function using g_set_merge
  }
}
true or false?
true
Given the array a = [ F , G , A , H , B , D ] , show the result of the first call to Partition in Quick Sort, with the pivot value chosen to be the last element of the array.
[ A , B , D , H , G , F ]
The worst case run time [WCRT] of Byte Sort (n = size of input) is
Θ(n)
What is the theoretical lower bound on the worst case asymptotic runtime for comparison sorts? (n = size of input)
Ω (n log n)
The following sort is stable, in-place, and has best possible worst case runtime:
None of the other choices
Heap Sort
Quick Sort
Counting Sort
Merge Sort
None of the other choices
The following comparison sort is stable and has the best possible worst case runtime:
Merge Sort
The run space requirement of Byte Sort (n = size of input) is:
+Θ(n)
The worst case run time [WCRT] of Bit Sort (n = size of input) is
Θ(n)
Deriving the theoretical lower bound on the worst case asymptotic runtime for comparison sorts uses (select all that apply)
a) Stirling's Formula
b) The minimum height of a binary tree with N leaves
c) Random number generator
d) Runtime stack depth
e) Asymptotic runtime of vector operations
f) Amortization of runtime over several trials
g) A decision tree
h) The number of permutations of n items
Stirling's Formula
The minimum height of a binary tree with N leaves
A decision tree
The number of permutations of n items
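Study note: the four selected facts combine into the standard lower-bound argument. A comparison sort corresponds to a decision tree with at least n! leaves (one per permutation of the input). A binary tree with N leaves has height h >= log2 N, and the worst-case number of comparisons is the tree height, so

  WCRT >= h >= log2(n!) = Θ(n log n),

where the last step uses Stirling's formula, n! ≈ √(2πn)·(n/e)^n, to estimate log2(n!). Hence the Ω(n log n) bound.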
The following comparison sort is in-place and has the best possible worst case runtime:
Heap Sort
Suppose that we know Algorithm A has runtime Θ(n^2), where n is a measure of input size. Which of the following statements is also true? (Check all that apply.)
A has runtime Θ(3n^2 + 7n).
A has runtime O(n).
A has runtime Ω(n^2).
A has runtime O(n^2).
A has runtime O(n^3).
A has runtime Ω(n).
A has runtime Θ(n^3).
A has runtime Ω(n^3).
A has runtime Θ(3n^2 + 7n).
A has runtime Ω(n^2).
A has runtime O(n^2).
A has runtime Ω(n).
The following sort is a comparison sort:
Merge Sort
Heap Sort
Quick Sort
Selection Sort
Radix Sort
Counting Sort
Bit Sort
Insertion Sort
Merge Sort
Heap Sort
Quick Sort
Selection Sort
Insertion Sort
The following sort is in place (i.e., has run space +O(1)) when implemented as a generic algorithm.
Quick Sort
Insertion Sort
Merge Sort
Counting Sort
Radix Sort
Heap Sort
Selection Sort
Bit Sort
Insertion Sort
Heap Sort
Selection Sort
The following sort is stable:
Merge Sort
Heap Sort
Bit Sort
Quick Sort
Counting Sort
Insertion Sort
Radix Sort
Selection Sort
Merge Sort
Bit Sort
Counting Sort
Insertion Sort
Radix Sort
What is the theoretical lower bound on the worst case asymptotic runtime for comparison sorts? (n = size of input)
O(n log n)
Ω(n)
Ω (n log n)
O(n)
None of the other choices
Ω (n log n)
Deriving the theoretical lower bound on the worst case asymptotic runtime for comparison sorts uses (select all that apply)
Amortization of runtime over several trials
Asymptotic runtime of vector operations
The minimum height of a binary tree with N leaves
The number of permutations of n items
Stirling's Formula [ to estimate Θ(log n!) ]
Random number generator
A decision tree
Runtime stack depth
The minimum height of a binary tree with N leaves
The number of permutations of n items
Stirling's Formula [ to estimate Θ(log n!) ]
A decision tree
The following comparison sort is in-place and has the best possible worst case runtime:
Counting Sort
None of the other choices
Heap Sort
Merge Sort
Quick Sort
Heap Sort
The following comparison sort is stable and has the best possible worst case runtime:
Counting Sort
Quick Sort
Heap Sort
Merge Sort
None of the other choices
Merge Sort
The following sort has worst case runtime Θ(n2) and average case runtime Θ(n log n) [n = size of input]:
Heap Sort
Insertion Sort
Merge Sort
None of the other choices
Quick Sort
Quick Sort
The following sort is stable, in-place, and has runtime no worse than O(n log n) [n = size of input]
Merge Sort [implemented as a generic algorithm]
Heap Sort
Counting Sort
None of the other choices
Quick Sort
Merge Sort [implemented as a List member function]
Merge Sort [implemented as a List member function]
The worst case run time [WCRT] of Bit Sort (n = size of input) is
Θ(1)
Θ(n log n)
None of the other choices
Θ(log n)
Θ(n)
Θ(n)
The run space requirement of Bit Sort (n = size of input) is:
+Θ(n)
None of the other choices
+Θ(1)
+Θ(n log n)
+Θ(log n)
+Θ(n)
Begin with the vector v = [ 60 , 40 , 50 , 30 , 10 , 20 , 70 ] .
Show the vector after fsu::g_push_heap(v.Begin(), v.End()).
v = [ 70 , 40 , 60 , 30 , 10 , 20 , 50 ]
none of the other choices
v = [ 70 , 60 , 50 , 40 , 30 , 20 , 10 ]
v = [ 60 , 50 , 70 , 30 , 10 , 20 , 40 ]
v = [ 60 , 40 , 50 , 30 , 10 , 70 , 20 ]
v = [ 70 , 40 , 60 , 30 , 10 , 20 , 50 ]
Given the array a = [ M , F , D , Q , X , G ] , show the result of the first call to Partition in Quick Sort, with the pivot value chosen to be the last element of the array.
[ D , F , G , Q , X , M ]
[ F , D , G , M , X , Q ]
[ F , D , G , Q , X , M ]
None of the other choices
[ Q , X , M , F , D , G ]
[ F , D , G , Q , X , M ]
To copy the elements from an fsu::Deque<> d to an fsu::Vector<> v we can use the following generic algorithm call:
None of the other choices
fsu::g_copy (d.Begin(), d.End(), v);
fsu::g_copy (d.Begin(), d.End(), v.Begin(), v.End());
fsu::g_copy (d.Begin(), d.End(), v.Begin());
fsu::g_copy (d.Begin(), d.End(), v, v + d.Size);
fsu::g_copy (d.Begin(), d.End(), v.Begin());
Suppose the fsu::Vector<> v has elements in sorted order and we wish to insert the element x in correct order in v. The following generic algorithm call returns an iterator to the last correct location to insert x:
i = fsu::g_lower_bound( v.Begin() , v.End() , x );
i = fsu::g_lower_bound( v , x );
i = fsu::g_upper_bound( v.Begin() , v.End() , x );
i = fsu::g_upper_bound( v , x );
None of the other choices
i = fsu::g_upper_bound( v.Begin() , v.End() , x );
The algorithm fsu::HashTable<K,D,H>::Retrieve(const K& key, D& data) has average-case asymptotic runtime (where n = table size, b = number of buckets)
Amortized Θ(log n)
Θ(1)
None of the other choices.
Amortized Θ(1 + n/b)
Θ(log n)
Θ(1 + n/b)
Amortized Θ(1)
Θ(1 + n/b)
The algorithm fsu::HashTableIterator<K,D,H>::Iterator::operator++() has asymptotic runtime (where n = table size, b = number of buckets)
Amortized Θ(1)
Θ(log n)
Amortized Θ(log n)
None of the other choices
Θ(1)
Amortized Θ(1 + n/b)
Θ(1 + n/b)
Amortized Θ(1 + n/b)