Program – Thursday

SOCG - Session A
(AUD 4)
Session Chair: Antoine Vigneron
Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings
Daniel Dadush.
We give a 2^O(n) (1+1/eps)^n time and poly(n)-space deterministic algorithm for computing a (1+eps)^n approximation to the volume of a general convex body K, which comes close to matching the (1+c/eps)^(n/2) lower bound for volume estimation in the oracle model by Barany and Furedi (STOC 1986, Proc. Amer. Math. Soc. 1988). This improves on the previous results of Dadush and Vempala (Proc. Nat'l Acad. Sci. 2013), which gave the above result only for symmetric bodies and achieved a dependence of 2^O(n) (1 + log^(5/2)(1/eps)/eps^3)^n. For our methods, we reduce the problem of volume estimation in K to counting lattice points in K ⊆ R^n (via enumeration) for a specially constructed lattice L: a so-called thin covering of space with respect to K (more precisely, one for which L + K = R^n and vol_n(K)/det(L) = 2^O(n)). The trade-off between time and approximation ratio is achieved by scaling down the lattice. As our main technical contribution, we give the first deterministic 2^O(n)-time and poly(n)-space construction of thin covering lattices for general convex bodies. This improves on a recent construction of Alon et al. (STOC 2013), which requires exponential space and only works for symmetric bodies. For our construction, we combine the use of the M-ellipsoid from convex geometry (Milman, C.R. Math. Acad. Sci. Paris 1986) with lattice sparsification and densification techniques (Dadush and Kun, SODA 2013; Rogers, J. London Math. Soc. 1950).
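The core reduction above — estimating a volume by counting points of a lattice inside the body, with the lattice scaled to trade accuracy for enumeration time — can be illustrated with a toy version. The sketch below uses the plain integer lattice in 2-d rather than the paper's thin covering lattices, and all names are my own:

```python
def lattice_volume_estimate(in_body, radius, scale):
    """Estimate the volume (area) of a 2-d convex body K contained in
    [-radius, radius]^2 by counting points of the scaled integer lattice
    scale * Z^2 that fall in K; each lattice point accounts for a cell of
    area scale^2. Toy illustration only: the paper constructs special thin
    covering lattices rather than using Z^2."""
    steps = int(radius / scale) + 1
    count = sum(1
                for i in range(-steps, steps + 1)
                for j in range(-steps, steps + 1)
                if in_body((i * scale, j * scale)))
    return count * scale * scale
```

Shrinking `scale` tightens the estimate at the cost of enumerating more lattice points, mirroring the time/approximation trade-off obtained in the paper by scaling down the lattice.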

Optimal Deterministic Algorithms for 2-d and 3-d Shallow Cuttings
Timothy M. Chan and Konstantinos Tsakalidis.
We present optimal deterministic algorithms for constructing shallow cuttings in an arrangement of lines in two dimensions or planes in three dimensions. Our results improve the deterministic polynomial-time algorithm of Matousek (1992) and the optimal but randomized algorithm of Ramos (1999). This leads to efficient derandomization of previous algorithms for numerous well-studied problems in computational geometry, including halfspace range reporting in 2-d and 3-d, k nearest neighbors search in 2-d, (<= k)-levels in 3-d, order-k Voronoi diagrams in 2-d, linear programming with k violations in 2-d, dynamic convex hulls in 3-d, dynamic nearest neighbor search in 2-d, convex layers (onion peeling) in 3-d, epsilon-nets for halfspace ranges in 3-d, and more. As a byproduct, we also describe an optimal deterministic algorithm for constructing standard (non-shallow) cuttings in two dimensions, which is arguably simpler than the known optimal algorithms by Matousek (1991) and Chazelle (1993).

A Simpler Linear-Time Algorithm for Intersecting Two Convex Polyhedra in Three Dimensions
Timothy M. Chan.
Chazelle [FOCS'89] gave a linear-time algorithm to compute the intersection of two convex polyhedra in three dimensions. We present a simpler algorithm to do the same.
SOCG - Session B
(AUD 5)
Session Chair: Monique Teillaud
Approximability of the Discrete Fréchet Distance
Karl Bringmann and Wolfgang Mulzer.
The Fréchet distance is a popular and widespread distance measure for point sequences and for curves. About two years ago, Agarwal et al. [SIAM J. Comput. 2014] presented a new (mildly) subquadratic algorithm for the discrete version of the problem. This spawned a flurry of activity that has led to several new algorithms and lower bounds. In this paper, we study the approximability of the discrete Fréchet distance. Building on a recent result by Bringmann [FOCS 2014], we present a new conditional lower bound showing that strongly subquadratic algorithms for the discrete Fréchet distance are unlikely to exist, even in the one-dimensional case and even if the solution may be approximated up to a factor of 1.399. This raises the question of how well we can approximate the Fréchet distance (of two given d-dimensional point sequences of length n) in strongly subquadratic time. Previously, no general results were known. We present the first such algorithm by analysing the approximation ratio of a simple, linear-time greedy algorithm, which turns out to be 2^Θ(n). Moreover, we design an alpha-approximation algorithm that runs in time O(n log n + n^2/alpha), for any alpha in [1, n]. Hence, an n^epsilon-approximation of the Fréchet distance can be computed in strongly subquadratic time, for any epsilon > 0.
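As an illustration, here is a minimal sketch of a linear-time greedy traversal of the kind analysed in the abstract, next to the classical quadratic dynamic program it approximates. The particular greedy rule shown (always take the cheapest of the three admissible moves) is one natural reading of "simple greedy" and is an assumption, not necessarily the authors' exact algorithm:

```python
import math

def greedy_frechet(P, Q):
    """Greedy walk over two point sequences: repeatedly take whichever
    admissible move (advance in P, in Q, or in both) keeps the current pair
    closest; return the largest pairwise distance seen. The walk is a valid
    coupling, so the result upper-bounds the true discrete Frechet distance."""
    i, j = 0, 0
    best = math.dist(P[0], Q[0])
    while i < len(P) - 1 or j < len(Q) - 1:
        moves = []
        if i < len(P) - 1 and j < len(Q) - 1:
            moves.append((math.dist(P[i + 1], Q[j + 1]), i + 1, j + 1))
        if i < len(P) - 1:
            moves.append((math.dist(P[i + 1], Q[j]), i + 1, j))
        if j < len(Q) - 1:
            moves.append((math.dist(P[i], Q[j + 1]), i, j + 1))
        d, i, j = min(moves)
        best = max(best, d)
    return best

def exact_discrete_frechet(P, Q):
    """Classical O(n*m) dynamic program for the discrete Frechet distance."""
    n, m = len(P), len(Q)
    dp = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                dp[i][j] = d
            elif i == 0:
                dp[i][j] = max(dp[i][j - 1], d)
            elif j == 0:
                dp[i][j] = max(dp[i - 1][j], d)
            else:
                dp[i][j] = max(min(dp[i - 1][j], dp[i][j - 1],
                                   dp[i - 1][j - 1]), d)
    return dp[n - 1][m - 1]
```

The gap between the greedy value and the exact value is precisely the approximation ratio analysed in the paper, shown there to be 2^Θ(n) in the worst case.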

The Hardness of Approximation of Euclidean k-Means
Pranjal Awasthi, Moses Charikar, Ravishankar Krishnaswamy and Ali Kemal Sinop.
The Euclidean k-means problem is a classical problem that has been extensively studied in the theoretical computer science, machine learning, and computational geometry communities. In this problem, we are given a set of n points in Euclidean space R^d, and the goal is to choose k center points in R^d so that the sum of squared distances from each point to its nearest center is minimized. The best approximation algorithms for this problem include a polynomial-time constant-factor approximation for general k and a (1+c)-approximation that runs in time poly(n) exp(k/c). At the other extreme, the only known computational complexity result for this problem is NP-hardness [Aloise et al. '09]. The main difficulty in obtaining hardness results stems from the Euclidean nature of the problem and the fact that any point in R^d can be a potential center. This gap in understanding left open the intriguing possibility that the problem might admit a PTAS for all k, d. In this paper we provide the first hardness of approximation for the Euclidean k-means problem. Concretely, we show that there exists a constant c > 0 such that it is NP-hard to approximate the k-means objective to within a factor of (1+c). We show this via an efficient reduction from the vertex cover problem on triangle-free graphs: given a triangle-free graph, the goal is to choose the smallest number of vertices that are incident on all the edges. Additionally, we give a proof that the current best hardness results for vertex cover can be carried over to triangle-free graphs. To show this we transform G, a known hard vertex cover instance, by taking a graph product with a suitably chosen graph H, and show, using a spectral analysis, that the size of the (normalized) maximum independent set is almost exactly preserved in the product graph, which might be of independent interest.
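For concreteness, the k-means objective can be written down directly. The brute-force optimizer below uses the fact that the best center for a fixed cluster is its centroid; it is exponential in n and is meant only to make the objective precise, not to reflect the paper's reduction (all names are my own):

```python
import itertools
import math

def centroid(cluster):
    """Coordinate-wise mean of a nonempty list of d-dimensional points."""
    d = len(cluster[0])
    return tuple(sum(p[i] for p in cluster) / len(cluster) for i in range(d))

def kmeans_cost(points, centers):
    """The k-means objective: sum of squared distances from each point
    to its nearest center."""
    return sum(min(math.dist(p, c) ** 2 for c in centers) for p in points)

def exact_kmeans(points, k):
    """Exhaustive optimum: since the optimal center of a fixed cluster is its
    centroid, enumerating all assignments of points to k labels suffices.
    Exponential in n -- for illustrating the objective only."""
    best = math.inf
    for labels in itertools.product(range(k), repeat=len(points)):
        clusters = [[p for p, l in zip(points, labels) if l == j]
                    for j in range(k)]
        if any(not c for c in clusters):
            continue  # require every cluster to be nonempty
        best = min(best, kmeans_cost(points, [centroid(c) for c in clusters]))
    return best
```

The hardness result quoted above says that even approximating `exact_kmeans` to within a factor (1+c) is NP-hard for some constant c > 0.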

A Fire Fighter's Problem
Rolf Klein, Elmar Langetepe and Christos Levcopoulos.
Suppose that a circular fire spreads in the plane at unit speed. A fire fighter can build a barrier at speed v > 1. How large must v be to ensure that the fire can be contained, and how should the fire fighter proceed? We provide two results. First, we analyze the natural strategy where the fighter keeps building a barrier along the frontier of the expanding fire. We prove that this approach contains the fire if v > v_c = 2.6144... holds. Second, we show that any "spiralling" strategy must have speed v > 1.618, the golden ratio, in order to succeed.
10:30   Coffee Break
SOCG - Session A
(AUD 4)
Session Chair: Stefan Langerman
Approximate Geometric MST Range Queries
Sunil Arya, David Mount and Eunhui Park.
Range searching is a widely-used method in computational geometry for efficiently accessing local regions of a large data set. Typically, range searching involves either counting or reporting the points lying within a given query region, but it is often desirable to compute statistics that better describe the structure of the point set lying within the region, not just the count. In this paper we consider the geometric minimum spanning tree (MST) problem in the context of range searching where approximation is allowed. We are given a set P of n points in R^d. The objective is to preprocess P so that, given an admissible query region Q, it is possible to efficiently approximate the weight of the minimum spanning tree of the subset of P lying within Q. There are two natural sources of approximation error: first, by treating Q as a fuzzy object, and second, by approximating the MST weight itself. To model this, we assume that we are given two positive real approximation parameters eps_q and eps_w. Following the typical practice in approximate range searching, the range is expressed as two shapes Q- and Q+, where Q- ⊆ Q ⊆ Q+ and their boundaries are separated by a distance of at least eps_q * diam(Q). Points within Q- must be included and points external to Q+ cannot be included. A weight W is a valid answer to the query if there exist subsets P' and P'' of P, with P ∩ Q- ⊆ P' ⊆ P'' ⊆ P ∩ Q+, such that wt(MST(P')) <= W <= (1+eps_w) wt(MST(P'')). In this paper, we present an efficient data structure for answering such queries. Our approach uses simple data structures based on quadtrees, and it can be applied whenever Q- and Q+ are compact sets of constant combinatorial complexity. It uses space O(n), and it answers queries in time O(log n + 1/(eps_q eps_w)^(d+O(1))).
The O(1) term in the exponent is a small constant independent of dimension, and the hidden constant factor in the overall running time depends on d, but not on eps_q or eps_w. Preprocessing requires knowledge of eps_w, but not eps_q.
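The quantity being approximated — the MST weight of the points falling inside a query range — has a straightforward exact baseline. The sketch below (hypothetical names, axis-aligned box queries for simplicity, Prim's algorithm in O(n^2) per query) makes the query semantics concrete; the point of the paper's quadtree-based structure is to avoid this per-query cost:

```python
import math

def mst_weight(points):
    """Weight of the Euclidean MST via Prim's algorithm on the complete
    graph (O(n^2)); points are tuples of coordinates."""
    if len(points) < 2:
        return 0.0
    in_tree = {points[0]}
    rest = set(points[1:])
    total = 0.0
    while rest:
        # cheapest edge leaving the current tree
        d, q = min((math.dist(p, r), r) for p in in_tree for r in rest)
        total += d
        in_tree.add(q)
        rest.remove(q)
    return total

def mst_in_range(points, lo, hi):
    """Exact answer to the (non-fuzzy) query: MST weight of the subset of
    points inside the axis-aligned box [lo, hi]."""
    inside = [p for p in points
              if all(lo[i] <= p[i] <= hi[i] for i in range(len(lo)))]
    return mst_weight(inside)
```

In the fuzzy model of the abstract, any weight between the MST weight of some P' and (1+eps_w) times that of some P'' sandwiched between P ∩ Q- and P ∩ Q+ is an acceptable answer.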

Maintaining Contour Trees of Dynamic Terrains
Pankaj K. Agarwal, Thomas Mølhave, Morten Revsbæk, Issam Safa, Yusu Wang and Jungwoo Yang.
We study the problem of maintaining the contour tree T of a terrain Sigma, represented as a triangulated xy-monotone surface, as the heights of its vertices vary continuously with time. We characterize the combinatorial changes in T and how they relate to topological changes in Sigma. We present a kinetic data structure (KDS) for maintaining T efficiently. It maintains certificates that fail (i.e., an event occurs) only when the heights of two adjacent vertices become equal or two saddle vertices appear on the same contour. Assuming that the heights of two vertices of Sigma become equal only O(1) times and that these instances can be computed in O(1) time, the KDS processes O(kappa + n) events, where n is the number of vertices in Sigma and kappa is the number of events at which the combinatorial structure of T changes, and processes each event in O(log n) time. The KDS can be extended to maintain an augmented contour tree and a join/split tree.

Hyperorthogonal Well-Folded Hilbert Curves
Arie Bos and Herman Haverkort.
R-trees can be used to store and query sets of point data in two or more dimensions. An easy way to construct and maintain R-trees for two-dimensional points, due to Kamel and Faloutsos, is to keep the points in the order in which they appear along the Hilbert curve. The R-tree will then store bounding boxes of points along contiguous sections of the curve, and the efficiency of the R-tree depends on the size of the bounding boxes - smaller is better. Since there are many different ways to generalize the Hilbert curve to higher dimensions, this raises the question of which generalization results in the smallest bounding boxes. Familiar methods, such as the one by Butz, can result in curve sections whose bounding boxes are a factor Ω(2^(d/2)) larger than the volume traversed by that section of the curve. Most of the volume bounded by such bounding boxes would not contain any data points. In this paper we present a new way of generalizing Hilbert's curve to higher dimensions, which results in much tighter bounding boxes: they have at most 4 times the volume of the part of the curve covered, independent of the number of dimensions. Moreover, we prove that a factor of 4 is asymptotically optimal.
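In the Kamel-Faloutsos scheme mentioned above, points are stored in the order of their index along the Hilbert curve. The standard bit-manipulation routine below computes this index for the familiar two-dimensional curve; the paper's contribution concerns higher-dimensional generalizations, which this sketch does not attempt:

```python
def hilbert_index(order, x, y):
    """Index of cell (x, y) along the 2-d Hilbert curve on a
    2^order x 2^order grid (the standard bit-twiddling construction
    with per-level quadrant rotations/reflections)."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect so the next level sees a canonically oriented curve
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Sorting 2-d points by `hilbert_index` of their (quantized) coordinates yields the curve order in which an R-tree following this scheme would pack them.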
SOCG - Session B
(AUD 5)
Session Chair: Csaba Tóth
Topological Analysis of Scalar Fields with Outliers
Mickaël Buchet, Frederic Chazal, Tamal Dey, Fengtao Fan, Steve Oudot and Yusu Wang.
Given a real-valued function f defined over a manifold M embedded in R^d, we are interested in recovering structural information about f from the sole information of its values on a finite sample P. Existing methods provide approximations of the persistence diagram of f when the geometric and functional noise are bounded. However, they fail in the presence of aberrant values, also called outliers, both in theory and in practice. We propose a new algorithm that deals with outliers. We handle aberrant functional values with a method inspired by k-nearest-neighbor regression and local median filtering, while geometric outliers are handled using the distance to a measure. Combined with topological results on nested filtrations, our algorithm performs robust topological analysis of scalar fields under a wider range of noise models than current methods can handle. We provide theoretical guarantees and experimental results on the quality of our approximation of the sampled scalar field.

On Computability and Triviality of Well Groups
Peter Franek and Marek Krcál.
The concept of a well group, in a special but important case, captures homological properties of the zero set of a continuous map f: K → R^n on a compact space K that are invariant with respect to perturbations of f. The perturbations are arbitrary continuous maps within L_infty distance r of f, for a given r > 0. The main drawback of the approach is that the computability of well groups was shown only when dim K = n or n = 1. Our contribution to the theory of well groups is twofold: on the one hand we improve on the computability issue, but on the other hand we present a range of examples where the well groups are incomplete invariants, that is, they fail to capture certain important robust properties of the zero set. For the first part, we identify a computable subgroup of the well group that is obtained by cap product with the pullback of the orientation of R^n by f. In other words, well groups can be algorithmically approximated from below. When f is smooth and dim K < 2n-2, our approximation of the (dim K - n)-th well group is exact. For the second part, we find examples of maps f, f': K → R^n with all well groups isomorphic but whose perturbations have different zero sets. We discuss a possible replacement of the well groups of vector-valued maps by an invariant of better descriptive power and computability status.

Geometric Inference on Kernel Density Estimates
Jeff Phillips, Bei Wang and Yan Zheng.
We show that geometric inference of a point cloud can be calculated by examining its kernel density estimate with a Gaussian kernel. This allows one to consider kernel density estimates, which are robust to spatial noise, subsampling, and approximate computation in comparison to raw point sets. This is achieved by examining the sublevel sets of the kernel distance, which isomorphically map to superlevel sets of the kernel density estimate. We prove new properties about the kernel distance, demonstrating stability results and allowing it to inherit reconstruction results from recent advances in distance-based topological reconstruction. Moreover, we provide an algorithm to estimate its topology using weighted Vietoris-Rips complexes.
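The correspondence exploited here — sublevel sets of the kernel distance matching superlevel sets of the kernel density estimate — is already visible in the definitions. A direct sketch (naive O(n^2) evaluation, bandwidth fixed for simplicity, names my own):

```python
import math

def gaussian_kernel(p, q, sigma=1.0):
    """Gaussian similarity kernel between two points."""
    return math.exp(-math.dist(p, q) ** 2 / (2 * sigma ** 2))

def kde(points, x, sigma=1.0):
    """Gaussian kernel density estimate at x (naive O(n) evaluation,
    normalization constant omitted)."""
    return sum(gaussian_kernel(p, x, sigma) for p in points) / len(points)

def kernel_distance(points, x, sigma=1.0):
    """Kernel distance between the empirical measure on `points` and the
    single point x, computed from the kernel's RKHS inner products."""
    n = len(points)
    pp = sum(gaussian_kernel(p, q, sigma)
             for p in points for q in points) / (n * n)
    px = kde(points, x, sigma)
    xx = gaussian_kernel(x, x, sigma)  # equals 1 for the Gaussian kernel
    return math.sqrt(pp + xx - 2 * px)
```

Since the point-to-point term `pp` and the self term `xx` are fixed for a given sample, the kernel distance at x decreases exactly when the density estimate at x increases, which is the level-set correspondence the abstract refers to.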
Invited Speaker
Session A (Blauwe Zaal)
Session Chair: Lars Arge
Modeling Real-World Data Sets
Susanne Albers.
Traditionally, the performance of algorithms is evaluated using worst-case analysis. For a number of problems, this type of analysis gives overly pessimistic results: Worst-case inputs are rather artificial and do not occur in practical applications. In this lecture we review some alternative analysis approaches leading to more realistic and robust performance evaluations. Specifically, we focus on the approach of modeling real-world data sets. We report on two studies performed by the author for the problems of self-organizing search and paging. In these settings real data sets exhibit locality of reference. We devise mathematical models capturing locality. Furthermore, we present combined theoretical and experimental analyses in which the theoretically proven and experimentally observed performance guarantees match up to very small relative errors.
13:00   Catered Lunch
Session A (Blauwe Zaal)

Introduction to several models from stochastic geometry
Pierre Calka.

Stochastic geometry can be described as the study of Euclidean objects with random behaviour. The first problems in geometric probability were presented as recreational mathematics, but the field has grown substantially over the past 50 years, in particular because of its many applications, including experimental science and especially the analysis of geometric algorithms. The talk will be centered on the description of several classical models from stochastic geometry: random tessellations, in particular the Poisson-Voronoi and Poisson-Delaunay tessellations; random geometric graphs; random polytopes constructed as convex hulls of random point sets; etc. We will survey the basic properties of each of them and focus in particular on some of their asymptotic properties, which can provide valuable information in the context of computational geometry.

Introduction to the random generation of discrete structures
Philippe Duchon.

In many situations, one can define "natural" probability distributions on some set of discrete structures that one wants to study and/or simulate. This of course includes uniform distributions on finite sets, but in many cases the distribution is defined through some invariance property. Classical examples with a geometric flavor include tilings. The talk will focus on techniques used to algorithmically simulate, exactly or approximately, such a probability distribution - that is, to sample discrete structures according to some previously defined probability distribution on a finite (but large) or infinite set.

Session B (AUD 2)
Introduction to Categories, Part I: Key Concepts
Vin de Silva.
What are the key ideas that make category theory interesting and worthwhile? I will try to explain some of the following ideas: categories, functors, natural transformations, comma categories, universal constructions, equivalences of categories, adjunctions, representability.

The Interleaving Distance
Elizabeth Munch.
The interleaving distance arose as a metric on persistence modules. However, through category theory, this metric can be extended to a large class of functors which represent many useful constructions in applied topology, including Reeb graphs, Reeb spaces, and multidimensional persistence modules. In this talk, we will discuss the current work on the interleaving distance through examples.
Session C (AUD 7)
Interval Graphs in Data Streaming
Sergio Cabello.

In this talk we will advocate that basic graph problems should be considered for geometric intersection graphs in the data streaming model. We will look in some detail at the interval selection problem, that is, finding a largest independent set in an interval graph, in the data streaming model. This part is joint work with Pablo Perez-Lantero. Towards the end of the talk I will restate a couple of unrelated problems that, I think, should get some attention.

Coloring and Independent Sets of Rectangles: Recent Developments and New Challenges
Parinya Chalermsook.

This talk will focus on two problems on rectangle intersection graphs: (i) approximating the independence number of rectangle intersection graphs and (ii) upper bounding the chromatic number of a graph in terms of its clique number. The former question has received a lot of attention from the approximation algorithms community due to its close connections to other optimization problems, such as map labeling and network resource allocation. The latter question was raised in 1948, and there has been essentially no progress since 1960 (i.e., for about half a century). There is a close connection between (i) and (ii). I will start by giving a survey of recent developments on both problems. Then I will present some intermediate open problems that would help in gaining further insights into (i) and (ii). These intermediate problems could be of independent interest.

Intersection Graphs and Order Dimension
Stefan Felsner.

Containment graphs are comparability graphs and have, for a long time, been related to the notion of order dimension. In this talk we focus on connections between intersection graphs and order dimension. The starting point of the story is the observation that partial orders of height two whose comparability graph is a grid intersection graph have order dimension at most four. Refinements of this argument allow us to distinguish between various classes of graphs that are sandwiched between grid intersection graphs and bipartite permutation graphs. A construction involving grid intersection graphs is used for a new simple proof of the Brightwell-Trotter Theorem. Finally, we use intersection and containment graphs of triangles to show that recognition of partial orders of height two and dimension three is NP-complete. This resolves a long-standing complexity question.

15:30   Coffee Break
Session A (Blauwe Zaal)
Central limit theorems for random tessellations and random graphs
Matthias Schulte.

This talk is about random tessellations and random graphs, such as Poisson-Voronoi tessellations and the nearest-neighbour graph, which play an important role in stochastic geometry. Since the exact behaviour of related quantities, such as the total edge length, is often very hard to determine, one is interested in the asymptotic behaviour as the size of the underlying system gets large. For this situation, central limit theorems are presented that allow one to approximate random variables by standard Gaussian random variables if the size of the system is sufficiently large. All considered problems have in common that they are driven by an underlying Poisson point process. The central limit theorems can be deduced via general normal approximation results for random variables depending on Poisson point processes.

Generalised Eden growth model and random planar trees
Sergei Zuyev.

In the classical Eden growth model, a crystal on a grid extends to nearby nodes at a rate proportional to the number of already crystallised neighbours. This model is equivalent to first-passage percolation with exponential passage times. An extension of this model is to allow general weights for different numbers of crystallised neighbours. Classical subadditivity arguments establish the existence of a limiting shape for the most natural case of monotone rates: the more neighbours are crystallised, the sooner a boundary node crystallises too. However, a real challenge is posed by non-monotone rates, where all the classical approaches to establishing the shape result fail. Computer simulations suggest that even in this case a limiting shape does exist. We present partial results for such models and discuss some of their intriguing features: the existence of holes, an exponential bound on their size, and possible long-range structural dependence.

Session B (AUD 2)
Introduction to Categories, Part II: Persistence via Category Theory
Vin de Silva.
Following Liz's presentation on interleaving, I will discuss the theory in a more abstract way. This is joint work with Peter Bubenik and Jonathan Scott.

Topos Theory for Persistence
Primoz Skraba.

Persistent (co)homology has been studied from a variety of viewpoints: algorithmic, algebraic, and categorical.

In this talk, I will describe a complementary approach, based on encoding the time-varying nature of the underlying topological space directly at the set level. In classical persistence, this was done by encoding the filtration with the action of a graded ring on the underlying space (multiplication by t in [CZ04]).

One can generalize this by constraining how the topological space may change over time; e.g., in classical persistence it must grow, and in zig-zag persistence it may grow and shrink. We call these constraints the shape of the theory of persistence. Using tools from topos theory, I will show that if these constraints have a special structure (i.e., they form a Heyting algebra), we can automatically generate a corresponding persistence theory.

No previous knowledge of topos theory is necessary and in addition to definitions and structural properties, aspects such as computation and stability will also be discussed.

Session C (AUD 7)
"No k Pairwise Crossing" vs "No k Pairwise Disjoint"  [+]
Andrew Suk.

We will survey several problems surrounding an old conjecture which states that the number of edges in a graph drawn in the plane with no k pairwise crossing edges is at most linear in the number of vertices. We will also make comparisons to its "dual" counterpart, which is a conjecture of Pach and Tóth, stating that the number of edges in a graph drawn in the plane with no k pairwise disjoint edges is at most linear in the number of vertices. Both conjectures are still open, and in this talk, we will discuss several partial results and tools used in tackling these problems.

Colorings of Geometric Intersection Graphs and Related Problems
Bartosz Walczak.

This talk will survey some recent results and open problems concerning proper colorings and K_k-free colorings (i.e., colorings in which every color class is required to induce a K_k-free graph) of geometric intersection graphs with bounded clique number. In particular, the following questions will be addressed: (i) In which classes of geometric intersection graphs is the (K_k-free) chromatic number bounded in terms of the clique number? (ii) How big can the (K_k-free) chromatic number be with respect to the number of vertices for other classes of geometric intersection graphs with bounded clique number? The talk will also discuss connections to other problems, in particular, to the well-known conjecture that the number of edges in k-quasi-planar graphs (i.e., graphs drawn in the plane with no k pairwise crossing edges) is linear with respect to the number of vertices.

Open Problems Session