CS 542 – Advanced Data Structures and Algorithms
Final Exam Solutions
Jonathan Turner, 5/12/2010

[Figure: one tree of the partition data structure. The nodes a, b, c, d, e, f, g form a single path from the root a down to g, with ranks 30, 29, 27, 26, 20, 16 and 3 respectively.]

1. (10 points) In the analysis that establishes an O(log log n) bound on the amortized time per operation for the partition data structure, a node x is defined to be dominant if Δ(x) > 2Δ(y) for all ancestors y of x. Define Δ(x).

Δ(x) is rank(p(x)) − rank(x).

The diagram above shows one tree in a partition data structure; the numbers represent ranks. Give a numerical lower bound on the number of nodes in the tree.

There must be at least 2^30 ≈ 10^9 nodes in the tree.

List the dominant nodes in the tree.

Nodes b, e and g are dominant.

Suppose a find operation is done at node g. For each non-root node u on the find path that is not dominant before the find, give the ratio of the new value of Δ(u) to the old value.

For nodes f, d and c, the ratios are 14/4, 4/1 and 3/2 respectively. Note that all are at least 3/2.
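These answers can be checked mechanically. The sketch below is a minimal, hypothetical model of the example tree only: it stores parent pointers and ranks taken from the figure, computes Δ(x) = rank(p(x)) − rank(x), tests dominance against every non-root ancestor, and reports how the Δ values of the non-dominant path nodes change when a find at g compresses the path onto the root. The representation and function names are illustrative assumptions, not the course's code.

```python
# Toy model of the tree from question 1 (parents and ranks read off the figure).
parent = {'g': 'f', 'f': 'e', 'e': 'd', 'd': 'c', 'c': 'b', 'b': 'a', 'a': None}
rank   = {'a': 30, 'b': 29, 'c': 27, 'd': 26, 'e': 20, 'f': 16, 'g': 3}

def delta(x):
    # Delta(x) = rank(p(x)) - rank(x), defined for non-root nodes.
    return rank[parent[x]] - rank[x]

def ancestors(x):
    y = parent[x]
    while y is not None:
        yield y
        y = parent[y]

def dominant(x):
    # x is dominant if Delta(x) > 2*Delta(y) for every ancestor y
    # (the root has no Delta, so it is skipped).
    return all(delta(x) > 2 * delta(y) for y in ancestors(x) if parent[y] is not None)

old = {x: delta(x) for x in parent if parent[x] is not None}
print(sorted(x for x in old if dominant(x)))            # ['b', 'e', 'g']

# A find at g compresses the whole find path onto the root a.
for x in ('g', 'f', 'e', 'd', 'c', 'b'):
    parent[x] = 'a'
print({x: (old[x], delta(x)) for x in ('f', 'd', 'c')})  # {'f': (4, 14), 'd': (1, 4), 'c': (2, 3)}
```

The final line prints the old and new Δ values for f, d and c, giving the ratios 14/4, 4/1 and 3/2 quoted in the answer.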


2. (10 points) In the Fibonacci heap shown below, the numbers are the key values.

How many credits are needed to ensure that the credit invariant used in the amortized analysis is satisfied? 6 + 2×4 = 14.

Show the heap that results from performing a deletemin on the heap.

How many credits are needed to maintain the invariant following the deletemin? 9.

[Figures: the Fibonacci heap before the deletemin and the heap that results from the deletemin; the nodes are labeled a through q and annotated with their key values.]
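As a reminder of where the 6 + 2×4 in the answer comes from: the credit invariant used in this analysis keeps one credit on each tree (root) and two credits on each marked node, so the required number of credits is just a count. A minimal sketch, with the counts read off the answer above rather than the figure:

```python
# Credit invariant for Fibonacci heaps: one credit per tree (root) plus two
# credits per marked node. The counts below (6 trees, 4 marked nodes before
# the deletemin) come from the answer above.
def credits_required(num_trees: int, num_marked: int) -> int:
    return num_trees + 2 * num_marked

print(credits_required(6, 4))   # 14 credits before the deletemin
```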

3. (10 points) The picture below shows an implementation of a dynamic tree, using linked paths represented by binary search trees. Draw the actual tree that this represents (with vertex costs). Assuming the actual tree is being used in a maximum flow computation, what is the residual capacity of the path from s to t in the actual tree? Which edges become saturated if the maximum possible amount of flow is added to this path?

The actual tree appears in the figure below. The path from s to t is s, k, f, e, b, a, d, t and its residual capacity is 2. The edge from b to a is the only one that becomes saturated after these 2 units of flow are added to the path.

[Figures: the linked-path (binary search tree) representation of the dynamic tree, with the Δmin and Δcost values stored at its nodes, and the actual tree it represents, with the vertex costs.]
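For intuition about what the dynamic tree is computing here, the sketch below does the same work on an explicit list of residual capacities along the s-to-t path: the residual capacity of the path is the minimum edge value, and the edges that reach zero after pushing that much flow are the ones that saturate. The capacity numbers are invented for illustration (only the bottleneck of 2 on edge (b, a) matches the answer); a real dynamic-tree implementation would obtain the same two answers with its path-minimum query followed by an add-cost over the path.

```python
# Illustrative only: explicit residual capacities along the path s-k-f-e-b-a-d-t.
# The value 2 on edge (b, a) matches the exam answer; the other numbers are invented.
path_edges = [('s', 'k', 5), ('k', 'f', 4), ('f', 'e', 6), ('e', 'b', 3),
              ('b', 'a', 2), ('a', 'd', 7), ('d', 't', 4)]

residual = min(cap for _, _, cap in path_edges)          # residual capacity of the path: 2
saturated = [(u, v) for u, v, cap in path_edges if cap == residual]
print(residual, saturated)                                # 2 [('b', 'a')]
```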

4. (15 points) In the analysis of the general preflow-push method for max flows, a potential function is used to account for the number of steps in which flow is added to an edge, but the edge is not saturated. The potential function is defined to be the sum of the distance labels of the unbalanced vertices (that is, the vertices that have positive excess flow). Suppose we have a graph with 100 vertices and 1000 edges. Give a numerical upper bound on the value of the potential function and explain your answer.

No vertex can have a label ≥ 200, because labels are only increased for unbalanced vertices and every unbalanced vertex has a path in the residual graph leading back to the source, which has label 100. Since each of the 100 labels is < 200, the sum of the distance labels can be no more than 100 × 200 = 20,000.

Explain why every step that adds flow to an edge (u, v) without saturating it causes the potential function to decrease.

Such a step makes u balanced, reducing the potential function by d(u). It may also make v unbalanced, increasing the potential function by d(v), but since d(u) > d(v), the net effect is to decrease the potential function by at least 1.

How is the potential function affected by a step that adds flow to edge (u, v) and saturates it?

Such a step may increase the potential function by up to d(v), since v may become unbalanced.

Give a big-O bound on the sum of the increases in the potential that could result from all such steps, assuming a graph with n vertices and m edges.

The number of such steps is O(mn) and since each can increase the potential function by at most 2n, the total increase in potential from all such steps is O(mn^2).

How is the potential function affected by a step that changes the label of a vertex u?

Such a step increases the potential function: u is unbalanced, so its label contributes to the potential, and the relabeling increases d(u).

Give a big-O bound on the sum of the increases in the potential that could result from all such steps.

Since the labels never get smaller, the maximum possible increase in potential from all such steps is O(n^2).
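A quick numeric check of the 20,000 bound, using the two facts in the answer above (every distance label stays below 2n, and at most n vertices can be unbalanced at once):

```python
# Numeric check of the potential-function bound from question 4.
n = 100                            # vertices; the 1000 edges do not enter this bound
label_cap = 2 * n                  # every distance label stays below 2n = 200
potential_bound = n * label_cap    # at most n unbalanced vertices contribute
print(potential_bound)             # 20000
```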
5. (15 points) Suppose we wanted to add the following operation to the dynamic trees data structure.

ancestor(x, y) returns true if y is an ancestor of x, else false

Describe how you could implement this operation using the path set operations and the expose operation.

First, do an expose at x. If y is an ancestor of x, they will now be in the same path p in the path set data structure and succ(p) will be null. So, we can return true if succ(findpath(y)) = null, and false otherwise. Alternatively, we could return true if findtail(x) = findtail(y) after the expose.

In the breadth-first scanning algorithm for finding shortest paths, cycles are detected using a pass-counting method. Suppose that we want to detect a negative cycle in the parent pointers at the time the cycle is about to be created. How can this be done?

This can be done by checking to see if a vertex x is an ancestor of y in the shortest path tree, whenever we're about to make x the parent of y.

The main loop of the basic version of the breadth-first-scanning algorithm is shown below. Show how you would modify it to use a dynamic tree data structure in place of the parent pointers, and use the ancestor operation to detect a negative cycle.

do queue ≠ [] ⇒
    v := queue(1); queue := queue[2..];
    for [v, w] ∈ out(v) ⇒
        if dist(v) + length(v, w) < dist(w) ⇒
            p(w) := v;
            dist(w) := dist(v) + length(v, w);
            if w ∉ queue ⇒ queue := queue & [w]; fi;
        fi;
    rof;
od;

Replace p(w) := v with:

if dt.ancestor(v, w) ⇒ stop, cycle found; fi;
dt.cut(w); dt.link(w, v);

What is the asymptotic running time for this version of the algorithm?

O(mn log n), since the dynamic tree operations have an amortized cost of O(log n) per operation.
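To make the modification concrete, here is a hedged Python sketch of the modified main loop. The dt object stands for an assumed dynamic-tree wrapper providing the ancestor, cut and link operations used in the answer above; dt itself, and the dist/length/out containers, are illustrative placeholders rather than the course's actual data structures.

```python
from collections import deque

def bfs_scan_with_dtree(source, out, length, dt):
    """Breadth-first scanning for shortest paths, sketched with a dynamic
    tree dt replacing the parent pointers. dt.ancestor detects a negative
    cycle at the moment it would be created; dt.cut is assumed to be a
    no-op when the vertex has no parent yet."""
    INF = float('inf')
    dist = {v: INF for v in out}
    dist[source] = 0
    queue = deque([source])
    in_queue = {v: v == source for v in out}

    while queue:
        v = queue.popleft()
        in_queue[v] = False
        for w in out[v]:
            if dist[v] + length[v, w] < dist[w]:
                # p(w) := v is replaced by the three dynamic-tree steps:
                if dt.ancestor(v, w):
                    return None                 # stop: negative cycle found
                dt.cut(w)
                dt.link(w, v)
                dist[w] = dist[v] + length[v, w]
                if not in_queue[w]:
                    queue.append(w)
                    in_queue[w] = True
    return dist
```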

6. (15 points) Define a k-matching of a bipartite graph to be a subset of its edges that defines a subgraph with maximum degree ≤ k (so, we can view an ordinary matching as a k-matching for k = 1). Describe (in words) an algorithm that can find a maximum size k-matching of any connected, bipartite graph, for any integer k ≥ 1, and explain briefly why it works. What is the running time of your algorithm?

We can find a maximum size k-matching by solving a flow problem, using essentially the same construction that is used to find an ordinary maximum size matching in bipartite graphs. Divide the vertices into two independent subsets (call them left and right). Add a source vertex with an edge of capacity k to each of the left vertices and a sink vertex with an edge of capacity k from each of the right vertices. Direct the edges connecting left and right vertices from left to right and give each of them a capacity of 1. Now, find a max flow in this graph using any max flow algorithm that finds an integer max flow, given integer capacities. The edges with positive flow joining left and right vertices are guaranteed to form a k-matching, because the capacity constraints on the source and sink edges ensure that the left vertices have at most k outgoing edges with positive flow and the right vertices have at most k incoming edges with positive flow. A construction sketch appears below.

For k > 1, the flow graph is not a unit network, hence there is no special advantage to using Dinic's algorithm, as there is in the case of ordinary matchings. Dinic's algorithm with dynamic trees gives a running time of O(mn log n). The FIFO preflow-push algorithm would give a running time of O(n^3).
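Here is the construction sketch referenced above. networkx's maximum_flow routine stands in for "any max flow algorithm that finds an integer max flow"; the vertex names and the small example graph are invented for illustration.

```python
# Sketch of the k-matching-as-max-flow construction from question 6.
import networkx as nx

def max_k_matching(left, right, edges, k):
    G = nx.DiGraph()
    for u in left:                      # source edges of capacity k
        G.add_edge('source', u, capacity=k)
    for v in right:                     # sink edges of capacity k
        G.add_edge(v, 'sink', capacity=k)
    for u, v in edges:                  # original edges, capacity 1, directed left -> right
        G.add_edge(u, v, capacity=1)
    _, flow = nx.maximum_flow(G, 'source', 'sink')
    # Edges carrying positive flow between left and right form the k-matching.
    return [(u, v) for u, v in edges if flow[u][v] > 0]

# Example: a 2-matching in a small complete bipartite graph.
print(max_k_matching(['l1', 'l2'], ['r1', 'r2'],
                     [('l1', 'r1'), ('l1', 'r2'), ('l2', 'r1'), ('l2', 'r2')], k=2))
```

The capacity-k source and sink edges are exactly what limits each left vertex to at most k matched edges out and each right vertex to at most k matched edges in, which is why the positive-flow edges form a k-matching.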