CLRS Chapter 5. Probabilistic Analysis and Randomized Algorithms


5.1. The hiring problem

Exercises

5.1-1. Show that the assumption that we are always able to determine which candidate is best, in line 4 of procedure hire-assistant, implies that we know a total order on the ranks of the candidates.

Being able to pick the best candidate from any set of candidates means, in particular, that any two candidates can be compared against each other. A relation under which every pair of elements is comparable is precisely a total order, so the assumption implies a total order on the ranks of the candidates. $\square$

5.1-2. Describe an implementation of the procedure random(a, b) that only makes calls to random(0, 1). What is the expected running time of your procedure, as a function of $a$ and $b$?

We first note that implementing random(a, b) is equivalent to implementing random(0, b-a), as the former can be obtained from the latter via one addition operation, and vice versa. random(0, n) can be implemented from random(0, 1) by generating $\log n + 1$ random bits, with which we can write down the binary representation of any integer between $0$ and $n$, inclusive. Unless $n$ is of the form $2^k-1$, the generated random bits may also produce numbers outside the range $0,\ldots,n$. In such cases, we simply generate another $(\log n + 1)$-bit random binary number and repeat the process.

from random import choice
from math import log, ceil

def random_integer(a, b):
    """Uniform random integer in [a, b].

    Named random_integer so as not to shadow the random module.
    """
    if a == 0 and b == 1:  # base case: one fair random bit
        return choice([0, 1])
    elif a > b:  # invalid case
        return None
    elif a == b:  # degenerate case
        return a
    else:
        n = b - a + 1
        random_bits = 0
        for i in range(ceil(log(n, 2))):  # generate ceil(log2(n)) fair bits
            random_bits += random_integer(0, 1) * (2 ** i)
        if random_bits < n:
            return a + random_bits
        else:  # out of range: reject and try again
            return random_integer(a, b)

# Generate a graphical representation of the distribution of the generated random integers
import seaborn as sns
import matplotlib.pyplot as plt

sns.distplot([random_integer(0, 9) for _ in range(500)], kde=False)
sns.distplot([random_integer(0, 9) for _ in range(1000)], kde=False)
sns.distplot([random_integer(0, 9) for _ in range(2000)], kde=False)


At each generation of a $(\log n + 1)$-bit random number, there is a probability of $p = (2^{\log n + 1} - (n+1)) / 2^{\log n + 1}$ that we have to generate another random number. Since each iteration is independent of the others, the probability of random(0, n) failing to terminate within $k$ iterations is $p^k$. It then follows that the probability of random(0, n) terminating after exactly $k+1$ iterations is $p^k(1-p)$.

Assuming it takes constant time $C$ to compute random(0, 1), each iteration costs $C(\log n + 1)$, and so the expected running time of random(0, n) is

$$\sum_{k=0}^\infty C(\log n + 1)(k+1)p^k(1-p).$$

Since

$$\sum_{k=0}^\infty x^k = \frac{1}{1-x}$$

for all $|x| < 1$, we differentiate both sides to obtain

$$\sum_{k=0}^\infty (k+1)x^k = \frac{1}{(1-x)^2}.$$

Using this formula, we finish the computation of the expected running time of random(0, n):

$$C(\log n + 1)(1-p)\sum_{k=0}^\infty (k+1)p^k = \frac{C(\log n + 1)(1-p)}{(1-p)^2} = \frac{C(\log n + 1)}{1-p} = O(\log n),$$

where the last equality holds because $p < 1/2$.

It is of course possible that none of the generated $(\log n + 1)$-bit numbers falls within the desired range. The probability of such an event is negligible, however, since $p^k$ converges to $0$ as $k \to \infty$.
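As a rough empirical check of this analysis, the following sketch (an illustration, not part of the original solution) uses Python's `random.getrandbits` as the source of fair bits and estimates the expected number of 4-bit draws needed for $n = 9$, where $p = 6/16$ and the expected count is $1/(1-p) = 1.6$:

```python
from random import getrandbits

def draws_until_accept(n, bits):
    """Count how many `bits`-bit draws rejection sampling needs for one value in [0, n]."""
    draws = 1
    x = getrandbits(bits)
    while x > n:  # out of range: reject and redraw
        draws += 1
        x = getrandbits(bits)
    return draws

n, trials = 9, 100_000  # values 0..9 need 4-bit draws; p = 6/16
avg = sum(draws_until_accept(n, 4) for _ in range(trials)) / trials
# The geometric distribution gives an expected 1/(1-p) = 1.6 draws per value.
assert abs(avg - 1.6) < 0.05
```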

In general, it is impossible to implement random(0, n) with guaranteed finite-time termination solely from random(0, 1). Indeed, a repeated application of random(0, 1) can produce selection operations with probability $r/2^s$, where $r$ and $s$ are nonnegative integers. Implementing random(0, n) requires selecting each value with probability $1/(n+1)$, which can be written as a finite sum of numbers of the form $r/2^s$ only when $n+1$ is a power of $2$. $\square$
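The dyadic-rational observation can be illustrated with a small sketch using Python's fractions module; `is_dyadic` is a hypothetical helper name, not from the text:

```python
from fractions import Fraction

def is_dyadic(q):
    """True iff q's lowest-terms denominator is a power of two,
    i.e., q is a finite sum of numbers of the form r / 2**s."""
    d = Fraction(q).denominator
    return d & (d - 1) == 0  # power-of-two bit trick

assert is_dyadic(Fraction(1, 8))      # n + 1 = 8 = 2**3: finite-time scheme exists
assert not is_dyadic(Fraction(1, 3))  # n + 1 = 3: no guaranteed-finite scheme
assert not is_dyadic(Fraction(1, 10))
```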

5.1-3. Suppose that you want to output $0$ with probability $1/2$ and $1$ with probability $1/2$. At your disposal is a procedure biased-random that outputs either 0 or 1. It outputs 1 with some probability $p$ and $0$ with some probability $1-p$, where $0 < p < 1$, but you do not know what $p$ is. Give an algorithm that uses biased-random as a subroutine, and returns an unbiased answer, returning $0$ with probability $1/2$ and $1$ with probability $1/2$. What is the expected running time of your algorithm as a function of $p$?

We generate two independent instances of biased-random, $x$ and $y$. Observe that

$$\mathbb{P}[x = 0, y = 1] = (1-p)p = p(1-p) = \mathbb{P}[x = 1, y = 0].$$

We can then generate $0$ and $1$ with equal probability by returning $0$ whenever $x < y$ and returning $1$ whenever $x > y$. If $x = y$, then we generate two more independent instances of biased-random and compare them, repeating until the two values differ.

import numpy as np

def biased_random(p):
    return np.random.choice([0, 1], 1, p=[1-p, p])[0]

def coin_toss_from_biased_random(p):
    x, y = biased_random(p), biased_random(p)
    if x < y:
        return 0
    elif x > y:
        return 1
    else:  # x == y: discard and try again
        return coin_toss_from_biased_random(p)

# Generate a graphical representation of the generated random 0-1s

import seaborn as sns
import matplotlib.pyplot as plt

sns.distplot([coin_toss_from_biased_random(0.2) for _ in range(1000)], kde=False)
sns.distplot([coin_toss_from_biased_random(0.5) for _ in range(1000)], kde=False)
sns.distplot([coin_toss_from_biased_random(0.8) for _ in range(1000)], kde=False)


Assuming it takes constant time $C$ to compute biased-random, each round makes two calls and succeeds with probability $2p(1-p)$, so the expected running time of the algorithm is

$$\sum_{k=0}^\infty 2C(k+1)\bigl(1 - 2p(1-p)\bigr)^k \cdot 2p(1-p) = \frac{2C}{2p(1-p)} = \frac{C}{p(1-p)}.$$

(See Exercise 5.1-2 for details on a similar computational problem.) Since $(1-[2p(1-p)])^k \to 0$ as $k \to \infty$, the probability of the algorithm never terminating is negligible. $\square$
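The expected number of rounds, $1/(2p(1-p))$, can be checked empirically with a quick sketch (an illustration under the stated model, using `random.random` to simulate the biased bit):

```python
from random import random

def pairs_until_decision(p):
    """Count biased pairs drawn until the two bits differ (the von Neumann trick)."""
    pairs = 1
    while True:
        x, y = random() < p, random() < p
        if x != y:
            return pairs
        pairs += 1

p, trials = 0.2, 100_000
avg = sum(pairs_until_decision(p) for _ in range(trials)) / trials
# Each pair decides with probability 2p(1-p) = 0.32, so the mean is 1/0.32 = 3.125.
assert abs(avg - 3.125) < 0.1
```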

5.2. Indicator random variables

Exercises

5.2-1. In hire-assistant, assuming that the candidates are presented in a random order, what is the probability that you hire exactly one time? What is the probability that you hire exactly $n$ times?

For notational convenience, we label the $n$ candidates by $a_1,\ldots,a_n$, where $a_i$ is a better candidate than $a_j$ whenever $i > j$.

To hire exactly once, $a_n$ must be the first interviewee. The probability of this happening is $1/n$.

To hire exactly $n$ times, the $k$th interviewee, for each $1 \leq k \leq n$, must be $a_k$; that is, the candidates must arrive in increasing order of quality. The probability of this happening is $1/n!$. $\square$
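Both probabilities can be sanity-checked by simulating hire-assistant on random interview orders; `hires` below is a hypothetical helper counting how many times the running best is replaced:

```python
from math import factorial
from random import shuffle

def hires(ranks):
    """Number of hirings hire-assistant makes on this interview order."""
    best = count = 0
    for r in ranks:
        if r > best:  # strictly better than everyone seen so far: hire
            best, count = r, count + 1
    return count

n, trials = 5, 200_000
ranks = list(range(1, n + 1))
once = exactly_n = 0
for _ in range(trials):
    shuffle(ranks)
    h = hires(ranks)
    once += (h == 1)
    exactly_n += (h == n)
assert abs(once / trials - 1 / n) < 0.01                   # P[one hire] = 1/n
assert abs(exactly_n / trials - 1 / factorial(n)) < 0.005  # P[n hires] = 1/n!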

5.2-2. In hire-assistant, assuming that the candidates are presented in a random order, what is the probability that you hire exactly twice?

For notational convenience, we label the $n$ candidates by $a_1,\ldots,a_n$, where $a_i$ is a better candidate than $a_j$ whenever $i > j$. Moreover, we also label the $n$ candidates in the order they are presented: $c_1,\ldots,c_n$.

If $c_1 = a_n$, then no hiring can occur after the first, so we must have $c_1 \neq a_n$. We thus fix $1 \leq k \leq n-1$ and assume that $c_1 = a_k$. In order to have precisely two hirings, all candidates better than $a_k$ must appear after $a_n$. Since the timing of appearances of $a_1,\ldots,a_{k-1}$ does not affect the number of hirings, we see that precisely two hirings occur if and only if $a_n$ is the first to be interviewed among the $(n-k)$-person subset

$$\{a_{k+1}, a_{k+2}, \ldots, a_n\}.$$
Since the candidates are presented in a random order, the probability of this happening is $\frac{1}{n-k}$.

We now denote by $E_n$ the event that precisely two hirings occur in the $n$-person setting; we have shown that $\mathbb{P}[E_n \mid c_1 = a_k]$, the probability of two hirings occurring given $c_1 = a_k$, equals $\frac{1}{n-k}$ for each $1 \leq k \leq n-1$. Since $c_1$ must be one of $a_1,\ldots,a_n$, the law of total probability implies that

$$\mathbb{P}[E_n] = \sum_{k=1}^{n} \mathbb{P}[E_n \mid c_1 = a_k]\,\mathbb{P}[c_1 = a_k] = \frac{1}{n}\sum_{k=1}^{n} \mathbb{P}[E_n \mid c_1 = a_k].$$

Since $c_1 = a_n$ cannot result in two hirings, we conclude that

$$\mathbb{P}[E_n] = \frac{1}{n}\sum_{k=1}^{n-1} \frac{1}{n-k} = \frac{1}{n}\sum_{j=1}^{n-1} \frac{1}{j}.$$

Page 1154 of CLRS shows that

$$\sum_{j=1}^{n-1} \frac{1}{j} = \ln(n-1) + O(1),$$

whence it follows that

$$\mathbb{P}[E_n] = \frac{\ln(n-1) + O(1)}{n} = O\!\left(\frac{\ln n}{n}\right). \qquad \square$$
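As a sanity check, the closed form $\frac{1}{n}\sum_{j=1}^{n-1}\frac{1}{j}$ can be compared against a simulation; `hires` below is a hypothetical helper counting the hirings made on a given interview order:

```python
from random import shuffle

def hires(ranks):
    """Number of hirings hire-assistant makes on this interview order."""
    best = count = 0
    for r in ranks:
        if r > best:
            best, count = r, count + 1
    return count

n, trials = 5, 200_000
ranks = list(range(1, n + 1))
twice = 0
for _ in range(trials):
    shuffle(ranks)
    twice += (hires(ranks) == 2)
exact = sum(1 / j for j in range(1, n)) / n  # (1/n) * H_{n-1} ~= 0.4167 for n = 5
assert abs(twice / trials - exact) < 0.01
```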

5.2-3. Use indicator random variables to compute the expected value of the sum of $n$ dice.

For each $1 \leq k \leq 6$, we let $I_k$ be the indicator random variable associated with the event that a die shows $k$. Since all six numbers are equally likely to show, we have $\mathbb{E}[I_k] = \frac{1}{6}$ for all $1 \leq k \leq 6$. The expected value of the sum of one die is

$$\mathbb{E}\left[\sum_{k=1}^6 k I_k\right] = \sum_{k=1}^6 k\,\mathbb{E}[I_k] = \frac{1}{6}\sum_{k=1}^6 k = \frac{7}{2}.$$

By linearity of expectation, the expected value of the sum of $n$ dice is

$$n \cdot \frac{7}{2} = \frac{7n}{2}. \qquad \square$$
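To double-check the $7n/2$ figure numerically, here is a quick simulation sketch for $n = 10$ dice:

```python
from random import randint

n, trials = 10, 100_000
avg = sum(sum(randint(1, 6) for _ in range(n)) for _ in range(trials)) / trials
# By linearity of expectation, the mean should be 7n/2 = 35 for n = 10.
assert abs(avg - 35) < 0.5
```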

5.2-4. Use indicator random variables to solve the following problem, which is known as the hat-check problem. Each of $n$ customers gives a hat to a hat-check person at a restaurant. The hat-check person gives the hats back to the customers in a random order. What is the expected number of customers who get back their own hat?

For each $1 \leq j \leq n$, we let $I_j$ be the indicator random variable associated with the event that customer $j$ receives their own hat back. Since only one of the $n$ hats is the desired hat, we have

$$\mathbb{E}[I_j] = \frac{1}{n}$$

for all $1 \leq j \leq n$. By linearity of expectation, the expected number of customers who get their own hat back is

$$\mathbb{E}\left[\sum_{j=1}^n I_j\right] = \sum_{j=1}^n \mathbb{E}[I_j] = n \cdot \frac{1}{n} = 1.$$

In other words, only one person can be expected to get their hat back, regardless of the number of customers. $\square$
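A random-permutation simulation illustrates this constant expectation; fixed points of the shuffled list play the role of customers getting their own hats back:

```python
from random import shuffle

n, trials = 20, 100_000
perm = list(range(n))
total = 0
for _ in range(trials):
    shuffle(perm)
    total += sum(perm[i] == i for i in range(n))  # fixed points = correct hats
# The expected number of fixed points is 1, regardless of n.
assert abs(total / trials - 1.0) < 0.05
```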

5.2-5. Let $A[1..n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i,j)$ is called an inversion of $A$. (See Problem 2-4 for more on inversions.) Suppose that the elements of $A$ form a uniform random permutation of $\langle 1,2,\ldots,n \rangle$. Use indicator random variables to compute the expected number of inversions.

For each $1 \leq i < j \leq n$, we let $I_{i,j}$ be the indicator random variable associated with the event that $(i,j)$ is an inversion of $A$. Since either $A[i] < A[j]$ or $A[i] > A[j]$, and both orderings are equally likely under a uniform random permutation, we have $\mathbb{E}[I_{i,j}] = \frac{1}{2}$ for all $1 \leq i < j \leq n$. The expected number of inversions is

$$\mathbb{E}\left[\sum_{i < j} I_{i,j}\right] = \sum_{i < j} \mathbb{E}[I_{i,j}] = \binom{n}{2}\cdot\frac{1}{2} = \frac{n(n-1)}{4}. \qquad \square$$
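Because the answer averages over all $n!$ permutations, it can be verified exactly by brute force for small $n$; `inversions` below is a hypothetical helper:

```python
from itertools import permutations

def inversions(A):
    """Count pairs i < j with A[i] > A[j]."""
    return sum(A[i] > A[j]
               for i in range(len(A))
               for j in range(i + 1, len(A)))

n = 5
perms = list(permutations(range(n)))
avg = sum(inversions(p) for p in perms) / len(perms)
assert avg == n * (n - 1) / 4  # exact average over all n! permutations: 5 for n = 5
```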

5.3. Randomized algorithms

Permuting input arrays

Here are Python implementations of the uniform random permutation algorithms in the text.

def permute_by_sorting(A):
    n = len(A)
    P = [0 for _ in range(n)]
    for i in range(n):
        P[i] = random_integer(1, n ** 3)  # random_integer from Exercise 5.1-2
    return sort_by_key(A, P)

def sort_by_key(A, P):
    """Use the built-in Python sort feature to sort A by the priorities in P."""
    key = sorted(
        [(i, P[i]) for i in range(len(P))],
        key=lambda pair: pair[1])
    return [A[key[i][0]] for i in range(len(P))]

def randomize_in_place(A):
    n = len(A)
    for i in range(n):
        random_index = random_integer(i, n - 1)
        A[i], A[random_index] = A[random_index], A[i]  # swap
    return A
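A quick uniformity sketch for the in-place shuffle: the version below substitutes `random.randrange` for `random_integer` so it runs standalone, and checks that all $3! = 6$ permutations of a 3-element array appear with roughly equal frequency:

```python
from collections import Counter
from random import randrange

def randomize_in_place(A):
    # Same loop as above, with randrange standing in for random_integer.
    n = len(A)
    for i in range(n):
        j = randrange(i, n)
        A[i], A[j] = A[j], A[i]
    return A

trials = 60_000
counts = Counter(tuple(randomize_in_place([0, 1, 2])) for _ in range(trials))
assert len(counts) == 6  # every permutation of (0, 1, 2) occurs
assert all(abs(c - trials / 6) < 500 for c in counts.values())
```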


Exercises

5.3-1. Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. He reasons that we could just as easily declare that an empty subarray contains no 0-permutations. Therefore, the probability that an empty subarray contains a 0-permutation should be 0, thus invalidating the loop invariant prior to the first iteration. Rewrite the procedure Randomize-In-Place so that its associated loop invariant applies to a nonempty subarray prior to the first iteration, and modify the proof of Lemma 5.5 for your procedure.