Approximation Algorithms
Course rationale
In the 1970s it was discovered that, in addition to decision problems like SAT and
Partition, many important discrete optimization problems are (roughly
speaking) NP-complete, and so are unlikely to admit polynomial-time optimal
algorithms. In such problems (e.g., traveling salesman, vertex cover, independent
set) there is a large set of possible solutions, and we search for a solution
of minimum cost (or of maximum value). In the decades since, a theory of
approximation algorithms has developed. Such an algorithm typically
guarantees that, given any problem instance, the solution found will be
within some constant (multiplicative) factor c of the optimal; i.e., the solution
cost is never greater than c · OPT, where c > 1 (or the solution value is never
less than c · OPT, where c < 1).
It may be surprising that for many optimization problems it
is possible to prove that our solutions will be close to OPT, even though we
don't (and can't efficiently) know what OPT is. A set of fundamental techniques
has emerged in recent decades for designing such algorithms and analyzing their
performance. In this course we will study these techniques and some of the most
significant instances of their application.
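To make the notion of a guaranteed factor concrete, here is a sketch (an illustration, not part of the course materials) of the classic maximal-matching 2-approximation for vertex cover. It exemplifies the point above: we bound the solution against OPT without ever computing OPT, because any cover must contain at least one endpoint of every matched edge.

```python
def vertex_cover_2approx(edges):
    """Greedily build a maximal matching and take both endpoints of each
    matched edge. Every vertex cover must pick at least one endpoint of
    each matched edge, so the returned cover has size <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.update((u, v))               # add both endpoints
    return cover

# Small example graph: a 4-cycle plus one chord.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
```

Here OPT is 2 (the set {1, 3} covers every edge), and the algorithm returns a cover of size 4, exactly the factor-2 guarantee.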
Course description
This is an advanced, graduate-level course on approximation algorithms. We
will survey major techniques used in the design and analysis of approximation
algorithms, including greedy algorithms, local search, obtaining PTASs, LP
rounding, randomized rounding, primal-dual, local ratio, factor-revealing LPs,
and SDP rounding. About a quarter of the course will focus on geometric
approximation problems, and we will discuss techniques such as quadtrees,
locality-sensitive hashing, and the Johnson-Lindenstrauss lemma. We will also
provide a high-level overview of hardness of approximation.
These general techniques will be introduced by studying their application to
various canonical problems such as set cover and max coverage, metric Steiner
tree and TSP, multiway cut, knapsack, bin packing, vertex cover, scheduling
problems, Steiner forest, Steiner network, facility location, multicommodity
flow, MAX SAT, and approximate nearest-neighbor.
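As a taste of the first of these pairings, a greedy technique applied to set cover, here is a sketch (an illustration under my own choice of instance, not from the course materials) of the classic greedy heuristic, which repeatedly picks the subset covering the most still-uncovered elements and achieves an H_n ≈ ln n approximation ratio.

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly choose the subset that covers the largest number of
    uncovered elements. The classic analysis bounds the number of sets
    chosen by H_n * OPT, where H_n ~ ln n is the n-th harmonic number."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Break ties by taking the first subset with maximum new coverage.
        best = max(subsets, key=lambda s: len(uncovered & s))
        chosen.append(best)
        uncovered -= best
    return chosen

# Toy instance: universe {1, ..., 7} and four candidate subsets.
universe = set(range(1, 8))
subsets = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 4, 7}]
picked = greedy_set_cover(universe, subsets)
assert set().union(*picked) >= universe  # every element is covered
```

On this instance the greedy rule selects three subsets; the analysis of why ln n is also the best ratio achievable (assuming P ≠ NP) connects this topic to the hardness-of-approximation material at the end of the course.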
Prerequisites
Students are expected to have a solid background in the analysis of algorithms
and be comfortable with mathematical proofs about the correctness
of algorithms. We will also assume knowledge of discrete mathematics,
elementary probability, and some familiarity with NP-completeness.
Learning objectives
Students who successfully complete this course will learn techniques developed
over the past couple of decades for obtaining approximation algorithms with
provable guarantees for NP-complete problems. These techniques will be applied
to about fifteen fundamental problems in the field of algorithms, and students
will learn the relations between problems and between techniques. This will help
them identify the right technique (or combination of techniques) to attack a
new hard problem that lacks good algorithms.
Assessment
Grades will be based on:
• Class participation: 10%
• Assignments: 60%
• Term project (presentation and report): 30%
Course reading
• The Design of Approximation Algorithms, Williamson and Shmoys.
• Approximation Algorithms, Vazirani.
• Geometric Approximation Algorithms, Har-Peled.
Course topics
• Introduction; set cover, k-center, facility location.
• Greedy algorithms and local search (k-center, TSP, scheduling on identical parallel machines).
• Rounding data and dynamic programming (knapsack, bin packing, maximum independent set in planar graphs).
• Deterministic rounding of linear programs (prize-collecting Steiner tree, generalized assignment problem).
• Randomized rounding of LPs and SDPs (MAX SAT and MAX CUT, uncapacitated facility location, Chernoff bounds).
• Primal-dual (feedback vertex set, shortest s-t path problem).
• Nearest neighbor problem, locality-sensitive hashing.
• Johnson-Lindenstrauss lemma and its applications.
• Examples of hardness of approximation, PCP.