Program – SOCG papers

Listed in no particular order.

  • On-line Coloring between Two Lines [DOI]
    Stefan Felsner, Piotr Micek and Torsten Ueckerdt.

    We study on-line colorings of certain graphs given as intersection graphs of objects "between two lines", i.e., there is a pair of horizontal lines such that each object of the representation is a connected set contained in the strip between the lines and touches both. Some of the graph classes admitting such a representation are permutation graphs (segments), interval graphs (axis-aligned rectangles), trapezoid graphs (trapezoids) and cocomparability graphs (simple curves). We present an on-line algorithm coloring graphs given by convex sets between two lines that uses O(w^3) colors on graphs with maximum clique size w. In contrast, intersection graphs of segments attached to a single line may force any on-line coloring algorithm to use an arbitrary number of colors even when w=2. The left-of relation makes the complement of intersection graphs of objects between two lines into a poset. As an aside we discuss the relation of the class C of posets obtained from convex sets between two lines with some other classes of posets: all 2-dimensional posets and all posets of height 2 are in C but there is a 3-dimensional poset of height 3 that does not belong to C. We also show that the on-line coloring problem for curves between two lines is as hard as the on-line chain partition problem for arbitrary posets.
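
    As a minimal illustration of the on-line model above (and not of the paper's O(w^3)-color algorithm), the following sketch implements First-Fit coloring: each newly presented vertex receives the smallest color not used by its already-presented neighbors. The predicate intersects is a hypothetical black box for the geometric intersection test.

      def first_fit_online_coloring(vertices, intersects):
          # Vertices arrive one by one; each gets the smallest color not used
          # by any previously presented neighbor.  First-Fit can be forced to
          # use unboundedly many colors on some of the classes discussed above.
          colors = {}
          seen = []
          for v in vertices:
              used = {colors[u] for u in seen if intersects(u, v)}
              c = 0
              while c in used:
                  c += 1
              colors[v] = c
              seen.append(v)
          return colors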
  • From Proximity to Utility: A Voronoi Partition of Pareto Optima [DOI]
    Hsien-Chih Chang, Sariel Har-Peled and Benjamin Raichel.

    We present an extension of Voronoi diagrams where not only the distance to the site is taken into account when considering which site the client is going to use, but additional attributes (i.e., prices or weights) are also considered. A cell in this diagram is then the locus of all clients that consider the same set of sites to be relevant. In particular, the precise site a client might use from this candidate set depends on parameters that might change between usages, and the candidate set lists all of the relevant sites. The resulting diagram is significantly more expressive than Voronoi diagrams, but naturally has the drawback that its complexity, even in the plane, might be quite high. Nevertheless, we show that if the attributes of the sites are drawn from the same distribution (note that the locations are fixed), then the expected complexity of the candidate diagram is near linear. To this end, we derive several new technical results, which are of independent interest.
  • Pattern Overlap Implies Runaway Growth in Hierarchical Tile Systems [DOI]
    Ho-Lin Chen, David Doty, Jan Manuch, Arash Rafiey and Ladislav Stacho.

    We show that in the hierarchical tile assembly model, if there is a producible assembly that overlaps a nontrivial translation of itself consistently (i.e., the pattern of tile types in the overlap region is identical in both translations), then arbitrarily large assemblies are producible. The significance of this result is that tile systems intended to controllably produce finite structures must avoid pattern repetition in their producible assemblies that would lead to such overlap. This answers an open question of Chen and Doty (SODA 2012), who showed that so-called "partial-order" systems producing a unique finite assembly and avoiding such overlaps must require time linear in the assembly diameter. An application of our main result is that any system producing a unique finite assembly is automatically guaranteed to avoid such overlaps, simplifying the hypothesis of Chen and Doty's main theorem.
  • Shortest Path in a Polygon using Sublinear Space [DOI]
    Sariel Har-Peled.

    We resolve an open problem due to Tetsuo Asano, showing how to compute the shortest path in a polygon, given in read-only memory, using sublinear space and subquadratic time. Specifically, given a simple polygon P with n vertices in read-only memory, and additional working memory of size m, the new algorithm computes the shortest path (in P) in O(n^2 / m) expected time, assuming m = O(n / log^2 n). This requires several new tools, which we believe to be of independent interest. Specifically, we show that violator space problems, an abstraction of low dimensional linear-programming (and LP-type problems), can be solved using constant space and expected linear time, by modifying Seidel's linear programming algorithm and using pseudo-random sequences.
  • Semi-algebraic Ramsey Numbers [DOI]
    Andrew Suk.

    Given a finite set P of points from R^d, a k-ary semi-algebraic relation E on P is a set of k-tuples of points in P that is determined by a finite number of polynomial equations and inequalities in kd real variables. The description complexity of such a relation is at most t if the number of polynomials and their degrees are all bounded by t. The Ramsey number R^k_{d,t}(s,n) is the minimum N such that any N-element point set P in R^d equipped with a k-ary semi-algebraic relation E, such that E has complexity at most t, contains s members such that every k-tuple induced by them is in E, or n members such that every k-tuple induced by them is not in E. We give a new upper bound for R^k_{d,t}(s,n) for k=3 and s fixed. In particular, we show that for fixed integers d,t,s, R^3_{d,t}(s,n) = 2^{n^{o(1)}}, establishing a subexponential upper bound on R^3_{d,t}(s,n). This improves the previous bound of 2^{n^C} due to Conlon, Fox, Pach, Sudakov, and Suk, where C is a very large constant depending on d,t, and s. As an application, we give new estimates for a recently studied Ramsey-type problem on hyperplane arrangements in R^d. We also study multi-color Ramsey numbers for triangles in our semi-algebraic setting, achieving some partial results.
  • Hyperorthogonal Well-Folded Hilbert Curves [DOI]
    Arie Bos and Herman Haverkort.

    R-trees can be used to store and query sets of point data in two or more dimensions. An easy way to construct and maintain R-trees for two-dimensional points, due to Kamel and Faloutsos, is to keep the points in the order in which they appear along the Hilbert curve. The R-tree will then store bounding boxes of points along contiguous sections of the curve, and the efficiency of the R-tree depends on the size of the bounding boxes - smaller is better. Since there are many different ways to generalize the Hilbert curve to higher dimensions, this raises the question of which generalization results in the smallest bounding boxes. Familiar methods, such as the one by Butz, can result in curve sections whose bounding boxes are a factor Ω(2^{d/2}) larger than the volume traversed by that section of the curve. Most of the volume bounded by such bounding boxes would not contain any data points. In this paper we present a new way of generalizing Hilbert's curve to higher dimensions, which results in much tighter bounding boxes: they have at most 4 times the volume of the part of the curve covered, independent of the number of dimensions. Moreover, we prove that a factor 4 is asymptotically optimal.
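
    To make the Kamel-Faloutsos construction concrete, the sketch below computes the standard two-dimensional Hilbert index of a grid point (a textbook bit-manipulation formulation, not the higher-dimensional curves introduced in the paper); sorting the points by this index and cutting the sorted order into fixed-size runs yields the R-tree leaves whose bounding boxes are discussed above. The parameters order and B are illustrative.

      def hilbert_index(order, x, y):
          # Map integer grid coordinates (x, y), both in [0, 2**order), to the
          # position of that cell along a 2D Hilbert curve of the given order.
          side = 1 << order
          d, s = 0, side // 2
          while s > 0:
              rx = 1 if (x & s) > 0 else 0
              ry = 1 if (y & s) > 0 else 0
              d += s * s * ((3 * rx) ^ ry)
              if ry == 0:  # rotate/reflect the quadrant before descending
                  if rx == 1:
                      x, y = side - 1 - x, side - 1 - y
                  x, y = y, x
              s //= 2
          return d

      # Hypothetical usage: sort points along the curve, then pack consecutive
      # runs of B points into R-tree leaves.
      # pts = sorted(pts, key=lambda p: hilbert_index(16, p[0], p[1]))
      # leaves = [pts[i:i + B] for i in range(0, len(pts), B)]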
  • Tight Bounds for Conflict-Free Chromatic Guarding of Orthogonal Art Galleries [DOI]
    Frank Hoffmann, Klaus Kriegel, Subhash Suri, Kevin Verbeek and Max Willert.

    The chromatic art gallery problem asks for the minimum number of "colors" t so that a collection of point guards, each assigned one of the t colors, can see the entire polygon subject to some conditions on the colors visible to each point. In this paper, we explore this problem for orthogonal polygons using orthogonal visibility - two points p and q are mutually visible if the smallest axis-aligned rectangle containing them lies within the polygon. Our main result establishes that for a conflict-free guarding of an orthogonal n-gon, in which at least one of the colors seen by every point is unique, the number of colors is Θ(loglog n). By contrast, the best upper bound for orthogonal polygons under standard (non-orthogonal) visibility is O(log n) colors. We also show that the number of colors needed for strong guarding of simple orthogonal polygons, where all the colors visible to a point are unique, is Θ(log n). Finally, our techniques also help us establish the first non-trivial lower bound of Ω(loglog n / logloglog n) for conflict-free guarding under standard visibility. To this end we introduce and utilize a novel discrete combinatorial structure called multicolor tableau.
  • Building Efficient and Compact Data Structures for Simplicial Complexes [DOI]
    Jean-Daniel Boissonnat, Karthik C. S. and Sébastien Tavenas.

    The Simplex Tree (ST) is a recently introduced data structure that can represent abstract simplicial complexes of any dimension and allows efficient implementation of a large range of basic operations on simplicial complexes. In this paper, we show how to optimally compress the Simplex Tree while retaining its functionalities. In addition, we propose two new data structures called Maximal Simplex Tree (MxST) and Simplex Array List (SAL). We analyze the compressed Simplex Tree, the Maximal Simplex Tree, and the Simplex Array List under various settings.
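
    A minimal sketch of the idea underlying the Simplex Tree (a trie over sorted vertex labels in which every simplex of the complex is a root-to-node path), covering only plain insertion and membership queries; none of the compression or the MxST/SAL variants proposed in the paper is reflected here.

      from itertools import combinations

      class Node:
          def __init__(self):
              self.children = {}  # vertex label -> Node

      def insert_simplex(root, simplex):
          # Insert a simplex together with all of its faces, each stored as the
          # path of its sorted vertex labels.
          verts = sorted(simplex)
          for k in range(1, len(verts) + 1):
              for face in combinations(verts, k):
                  node = root
                  for v in face:
                      node = node.children.setdefault(v, Node())

      def contains_simplex(root, simplex):
          node = root
          for v in sorted(simplex):
              if v not in node.children:
                  return False
              node = node.children[v]
          return True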
  • Computing Teichmüller Maps between Polygons [DOI]
    Mayank Goswami, Xianfeng Gu, Vamsi Pritham Pingali and Gaurish Telang.

    By the Riemann mapping theorem, one can bijectively map the interior of an n-gon P to that of another n-gon Q conformally (i.e., in an angle preserving manner). However, when this map is extended to the boundary it need not necessarily map the vertices of P to those of Q. For many applications it is important to find the "best" vertex-preserving mapping between two polygons, i.e., one that minimizes the maximum angle distortion (the so-called dilatation). Such maps exist, are unique, and are known as extremal quasiconformal maps or Teichmüller maps. There are many efficient ways to approximate conformal maps, and the recent breakthrough result by Bishop computes a (1+epsilon)-approximation of the Riemann map in linear time. However, only heuristics have been studied in the case of Teichmüller maps. We present two results in this paper: one in the continuous setting and one in the discrete setting. In the continuous setting, we solve the problem of finding a finite time procedure for approximating Teichmüller maps. Our construction is via an iterative procedure that is proven to converge in O(poly(1/epsilon)) iterations to a (1+epsilon)-approximation of the Teichmüller map. Our method uses a reduction of the polygon mapping problem to the marked sphere problem, thus solving a more general problem. In the discrete setting, we reduce the problem of finding an approximation algorithm for computing Teichmüller maps to two basic subroutines, namely, computing discrete 1) compositions and 2) inverses of discretely represented quasiconformal maps. Assuming finite-time solvers for these subroutines we provide a (1+epsilon)-approximation algorithm.
  • Space Exploration via Proximity Search [DOI]
    Sariel Har-Peled, Nirman Kumar, David Mount and Benjamin Raichel.

    We investigate what computational tasks can be performed on a point set in R^d, if we are only given black-box access to it via nearest-neighbor search. This is a reasonable assumption if the underlying point set is either provided implicitly, or it is stored in a data structure that can answer such queries. In particular, we show the following: (A) One can compute an approximate bi-criteria k-center clustering of the point set, and more generally compute a greedy permutation of the point set. (B) One can decide if a query point is (approximately) inside the convex-hull of the point set. We also investigate the problem of clustering the given point set, such that meaningful proximity queries can be carried out on the centers of the clusters, instead of the whole point set.
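
    The sketch below computes a greedy permutation directly (a Gonzalez-style farthest-point traversal), to make the object in (A) concrete; the point of the paper is that this can be achieved with only nearest-neighbor black-box access, which is not reproduced here.

      import numpy as np

      def greedy_permutation(points):
          # Farthest-point traversal: repeatedly append the point farthest from
          # the points chosen so far.  The first k points of the output form a
          # 2-approximate k-center solution (Gonzalez).
          pts = np.asarray(points, dtype=float)
          order = [0]
          dist = np.linalg.norm(pts - pts[0], axis=1)
          for _ in range(1, len(pts)):
              nxt = int(dist.argmax())
              order.append(nxt)
              dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
          return order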
  • Computational Aspects of the Colorful Carathéodory Theorem [DOI]
    Wolfgang Mulzer and Yannik Stein.

    Let P_1,...,P_{d+1} be d-dimensional point sets such that the convex hull of each P_i contains the origin. We call the sets P_i color classes, and we think of the points in P_i as having color i. A colorful choice is a set with at most one point of each color. The colorful Carathéodory theorem guarantees the existence of a colorful choice whose convex hull contains the origin. So far, the computational complexity of finding such a colorful choice is unknown. We approach this problem from two directions. First, we consider approximation algorithms: an m-colorful choice is a set that contains at most m points from each color class. We show that for any fixed epsilon > 0, an (epsilon d)-colorful choice containing the origin in its convex hull can be found in polynomial time. This notion of approximation has not been studied before, and it is motivated through the applications of the colorful Carathéodory theorem in the literature. In the second part, we present a natural generalization of the colorful Carathéodory problem: in the Nearest Colorful Polytope problem (NCP), we are given d-dimensional point sets P_1,...,P_n that do not necessarily contain the origin in their convex hulls. The goal is to find a colorful choice whose convex hull minimizes the distance to the origin. We show that computing local optima for the NCP problem is PLS-complete, while computing a global optimum is NP-hard.
  • Realization Spaces of Arrangements of Convex Bodies [DOI]
    Michael Gene Dobbins, Andreas Holmsen and Alfredo Hubard.

    We introduce combinatorial types of arrangements of convex bodies, extending order types of point sets to arrangements of convex bodies, and study their realization spaces. Our main results witness a trade-off between the combinatorial complexity of the bodies and the topological complexity of their realization space. On one hand, we show that every combinatorial type can be realized by an arrangement of convex bodies and (under mild assumptions) its realization space is contractible. On the other hand, we prove a universality theorem that says that the restriction of the realization space to arrangements of convex polygons with a bounded number of vertices can have the homotopy type of any primary semialgebraic set.
  • Polynomials Vanishing on Cartesian Products: The Elekes-Szabó Theorem Revisited [DOI]
    Orit E. Raz, Micha Sharir and Frank de Zeeuw.

    Let F in Complex[x,y,z] be a constant-degree polynomial, and let A,B,C be sets of complex numbers with |A|=|B|=|C|=n. We show that F vanishes on at most O(n^{11/6}) points of the Cartesian product A x B x C (where the constant of proportionality depends polynomially on the degree of F), unless F has a special group-related form. This improves a theorem of Elekes and Szabo [ES12], and generalizes a result of Raz, Sharir, and Solymosi [RSS14a]. The same statement holds over R. When A, B, C have different sizes, a similar statement holds, with a more involved bound replacing O(n^{11/6}). This result provides a unified tool for improving bounds in various Erdős-type problems in combinatorial geometry, and we discuss several applications of this kind.
  • A Fire Fighter's Problem [DOI]
    Rolf Klein, Elmar Langetepe and Christos Levcopoulos.

    Suppose that a circular fire spreads in the plane at unit speed. A fire fighter can build a barrier at speed v > 1. How large must v be to ensure that the fire can be contained, and how should the fire fighter proceed? We provide two results. First, we analyze the natural strategy where the fighter keeps building a barrier along the frontier of the expanding fire. We prove that this approach contains the fire if v > v_c = 2.6144... holds. Second, we show that any "spiralling" strategy must have speed v > 1.618, the golden ratio, in order to succeed.
  • Topological Analysis of Scalar Fields with Outliers [DOI]
    Mickaël Buchet, Frederic Chazal, Tamal Dey, Fengtao Fan, Steve Oudot and Yusu Wang.

    Given a real-valued function f defined over a manifold M embedded in R^d, we are interested in recovering structural information about f from the sole information of its values on a finite sample P. Existing methods provide an approximation of the persistence diagram of f when geometric noise and functional noise are bounded. However, they fail in the presence of aberrant values, also called outliers, both in theory and practice. We propose a new algorithm that deals with outliers. We handle aberrant functional values with a method inspired by k-nearest-neighbor regression and local median filtering, while the geometric outliers are handled using the distance to a measure. Combined with topological results on nested filtrations, our algorithm performs robust topological analysis of scalar fields in a wider range of noise models than handled by current methods. We provide theoretical guarantees and experimental results on the quality of our approximation of the sampled scalar field.
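
    A sketch, under assumed parameters and using scipy's k-d tree, of the k-nearest-neighbor median filtering step mentioned above for taming aberrant function values; the distance-to-a-measure treatment of geometric outliers is not shown.

      import numpy as np
      from scipy.spatial import cKDTree

      def knn_median_filter(points, values, k=10):
          # Replace each sample's function value by the median of the values at
          # its k nearest neighbors (including the sample itself).
          points = np.asarray(points, dtype=float)
          values = np.asarray(values, dtype=float)
          _, idx = cKDTree(points).query(points, k=k)
          return np.median(values[idx], axis=1)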
  • Spanners and Reachability Oracles for Directed Transmission Graphs [DOI]
    Haim Kaplan, Wolfgang Mulzer, Liam Roditty and Paul Seiferth.

    Let P be a set of n points in d dimensions, each with an associated radius r_p > 0. The transmission graph G for P has vertex set P and an edge from p to q if and only if q lies in the ball with radius r_p around p. Let t > 1. A t-spanner H for G is a sparse subgraph of G such that for any two vertices p, q connected by a path of length l in G, there is a p-q-path of length at most tl in H. We show how to compute a t-spanner for G if d=2. The running time is O(n (log n + log Psi)), where Psi is the ratio of the largest and smallest radius of two points in P. We extend this construction to be independent of Psi at the expense of a polylogarithmic overhead in the running time. As a first application, we prove a property of the t-spanner that allows us to find a BFS tree in G for any given start vertex s of P in the same time. After that, we deal with reachability oracles for G. These are data structures that answer reachability queries: given two vertices, is there a directed path between them? The quality of a reachability oracle is measured by the space S(n), the query time Q(n), and the preprocessing time. For d=1, we show how to compute an oracle with Q(n) = O(1) and S(n) = O(n) in time O(n log n). For d=2, the radius ratio Psi again turns out to be an important measure for the complexity of the problem. We present three different data structures whose quality depends on Psi: (i) if Psi < sqrt(3), we achieve Q(n) = O(1) with S(n) = O(n) and preprocessing time O(n log n); (ii) if Psi >= sqrt(3), we get Q(n) = O(Psi^3 sqrt(n)) and S(n) = O(Psi^5 n^{3/2}); and (iii) if Psi is polynomially bounded in n, we use probabilistic methods to obtain an oracle with Q(n) = O(n^{2/3} log n) and S(n) = O(n^{5/3} log n) that answers queries correctly with high probability. We employ our t-spanner to achieve a fast preprocessing time of O(Psi^5 n^{3/2}) and O(n^{5/3} log^2 n) in case (ii) and (iii), respectively.
  • Approximability of the Discrete Fréchet Distance [DOI]
    Karl Bringmann and Wolfgang Mulzer.

    The Fréchet distance is a popular and widespread distance measure for point sequences and for curves. About two years ago, Agarwal et al. [SIAM J. Comput. 2014] presented a new (mildly) subquadratic algorithm for the discrete version of the problem. This spawned a flurry of activity that has led to several new algorithms and lower bounds. In this paper, we study the approximability of the discrete Fréchet distance. Building on a recent result by Bringmann [FOCS 2014], we present a new conditional lower bound showing that strongly subquadratic algorithms for the discrete Fréchet distance are unlikely to exist, even in the one-dimensional case and even if the solution may be approximated up to a factor of 1.399. This raises the question of how well we can approximate the Fréchet distance (of two given d-dimensional point sequences of length n) in strongly subquadratic time. Previously, no general results were known. We present the first such algorithm by analysing the approximation ratio of a simple, linear-time greedy algorithm to be 2^{Θ(n)}. Moreover, we design an alpha-approximation algorithm that runs in time O(n log n + n^2 / alpha), for any alpha in [1, n]. Hence, an n^epsilon-approximation of the Fréchet distance can be computed in strongly subquadratic time, for any epsilon > 0.
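
    For reference, the sketch below is the classical O(nm) dynamic program of Eiter and Mannila for the exact discrete Fréchet distance, i.e., the quadratic baseline that the approximation algorithms discussed above aim to beat; it is not the paper's greedy or alpha-approximation algorithm.

      import math

      def discrete_frechet(P, Q):
          # c[i][j] is the discrete Frechet distance between the prefixes
          # P[0..i] and Q[0..j]; points are coordinate tuples.
          n, m = len(P), len(Q)
          c = [[0.0] * m for _ in range(n)]
          for i in range(n):
              for j in range(m):
                  d = math.dist(P[i], Q[j])
                  if i == 0 and j == 0:
                      c[i][j] = d
                  elif i == 0:
                      c[i][j] = max(c[0][j - 1], d)
                  elif j == 0:
                      c[i][j] = max(c[i - 1][0], d)
                  else:
                      c[i][j] = max(min(c[i - 1][j], c[i][j - 1],
                                        c[i - 1][j - 1]), d)
          return c[n - 1][m - 1]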
  • Bisector Energy and Few Distinct Distances [DOI]
    Ben Lund, Adam Sheffer and Frank de Zeeuw.

    We introduce the bisector energy of an n-point set P in the real plane, defined as the number of quadruples (a,b,c,d) from P such that a and b determine the same perpendicular bisector as c and d. If no line or circle contains M(n) points of P, then we prove that the bisector energy is O(M(n)^{2/5} n^{12/5} + M(n) n^2). We also prove the lower bound M(n) n^2, which matches our upper bound when M(n) is large. We use our upper bound on the bisector energy to obtain two rather different results: (i) If P determines O(n / sqrt(log n)) distinct distances, then for any 0 < a < 1/4, either there exists a line or circle that contains n^a points of P, or there exist n^{8/5 - 12a/5} distinct lines that contain sqrt(log n) points of P. This result provides new information on a conjecture of Erdős regarding the structure of point sets with few distinct distances. (ii) If no line or circle contains M(n) points of P, then the number of distinct perpendicular bisectors determined by P is Ω(min{M(n)^{-2/5} n^{8/5}, M(n)^{-1} n^2}). This appears to be the first higher-dimensional example in a framework for studying the expansion properties of polynomials and rational functions over the real numbers, initiated by Elekes and Ronyai.
  • Combinatorial Discrepancy for Boxes via the gamma2 Norm [DOI]
    Jiri Matousek and Aleksandar Nikolov.

    The gamma_2 norm of a real m by n matrix A is the minimum number t such that the column vectors of A are contained in a 0-centered ellipsoid E that in turn is contained in the hypercube [-t, t]^m. This classical quantity is polynomial-time computable and was proved by the second author and Talwar to approximate the hereditary discrepancy: it bounds the hereditary discrepancy from above and from below, up to logarithmic factors. Here we provide a simplified proof of the upper bound and show that both the upper and the lower bound are asymptotically tight in the worst case. We then demonstrate on several examples the power of the gamma_2 norm as a tool for proving lower and upper bounds in discrepancy theory. Most notably, we prove a new lower bound of (log n)^{d-1} (up to constant factors) for the d-dimensional Tusnády problem, asking for the combinatorial discrepancy of an n-point set in d-dimensional space with respect to axis-parallel boxes. For d>2, this improves the previous best lower bound, which was of order approximately (log n)^{(d-1)/2}, and it comes close to the best known upper bound of O((log n)^{d+1/2}), for which we also obtain a new, very simple proof. Applications to lower bounds for dynamic range searching and lower bounds in differential privacy are given.
  • Incidences between Points and Lines in Three Dimensions [DOI]
    Micha Sharir and Noam Solomon.

    We give a fairly elementary and simple proof that shows that the number of incidences between m points and n lines in R^3, so that no plane contains more than s lines, is O(m^{1/2} n^{3/4} + m^{2/3} n^{1/3} s^{1/3} + m + n) (in the precise statement, the constant of proportionality of the first and third terms depends, in a rather weak manner, on the relation between m and n). This bound, originally obtained by Guth and Katz as a major step in their solution of Erdős's distinct distances problem, is also a major new result in incidence geometry, an area that has picked up considerable momentum in the past six years. Its original proof uses fairly involved machinery from algebraic and differential geometry, so it is highly desirable to simplify the proof, in the interest of better understanding the geometric structure of the problem, and providing new tools for tackling similar problems. This has recently been undertaken by Guth. The present paper presents a different and simpler derivation, with better bounds than those in Guth, and without the restrictive assumptions made there. Our result has a potential for applications to other incidence problems in higher dimensions.
  • On the Number of Rich Lines in Truly High Dimensional Sets [DOI]
    Zeev Dvir and Sivakanth Gopi.

    We prove a new upper bound on the number of r-rich lines (lines with at least r points) in a 'truly' d-dimensional configuration of points v_1,...,v_n over the complex numbers. More formally, we show that, if the number of r-rich lines is significantly larger than n^2/r^d then there must exist a large subset of the points contained in a hyperplane. We conjecture that the factor r^d can be replaced with a tight r^{d+1}. If true, this would generalize the classic Szemerédi-Trotter theorem which gives a bound of n^2/r^3 on the number of r-rich lines in a planar configuration. This conjecture was shown to hold in R^3 in the seminal work of Guth and Katz and was also recently proved over R^4 (under some additional restrictions) by Solomon and Sharir. For the special case of arithmetic progressions (r collinear points that are evenly distanced) we give a bound that is tight up to lower order terms, showing that a d-dimensional grid achieves the largest number of r-term progressions. The main ingredient in the proof is a new method to find a low degree polynomial that vanishes on many of the rich lines. Unlike previous applications of the polynomial method, we do not find this polynomial by interpolation. The starting observation is that the degree r-2 Veronese embedding takes r collinear points to r linearly dependent images. Hence, each collinear r-tuple of points gives us a dependent r-tuple of images. We then use the design-matrix method of Barak et al. to convert these 'local' linear dependencies into a global one, showing that all the images lie in a hyperplane. This then translates into a low degree polynomial vanishing on the original set.
  • The Number of Unit-Area Triangles in the Plane: Theme and Variations [DOI]
    Orit E. Raz and Micha Sharir.

    We show that the number of unit-area triangles determined by a set S of n points in the plane is O(n^{20/9}), improving the earlier bound O(n^{9/4}) of Apfelbaum and Sharir. We also consider two special cases of this problem: (i) We show, using a somewhat subtle construction, that if S consists of points on three lines, the number of unit-area triangles that S spans can be Ω(n^2), for any triple of lines (it is always O(n^2) in this case). (ii) We show that if S is a convex grid of the form A x B, where A, B are convex sets of n^{1/2} real numbers each (i.e., the sequences of differences of consecutive elements of A and of B are both strictly increasing), then S determines O(n^{31/14}) unit-area triangles.
  • Finding All Maximal Subsequences with Hereditary Properties [DOI]
    Drago Bokal, Sergio Cabello and David Eppstein.

    Consider a sequence s_1,...,s_n of points in the plane. We want to find all maximal subsequences with a given hereditary property P: find for all indices i the largest index j*(i) such that s_i,...,s_{j*(i)} has property P. We provide a general methodology that leads to the following specific results: - In O(n log^2 n) time we can find all maximal subsequences with diameter at most 1. - In O(n log n loglog n) time we can find all maximal subsequences whose convex hull has area at most 1. - In O(n) time we can find all maximal subsequences that define monotone paths in some (subpath-dependent) direction. The same methodology works for graph planarity, as follows. Consider a sequence of edges e_1,...,e_n over a vertex set V. In O(n log n) time we can find, for all indices i, the largest index j*(i) such that (V, {e_i,...,e_{j*(i)}}) is planar.
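
    A naive sketch of the pattern behind these results: because the property is hereditary, j*(i) is non-decreasing in i, so a two-pointer scan suffices. Here has_property is a hypothetical black-box test that re-examines the whole window, so this sketch does not attain the running times stated above; the paper replaces the re-check with problem-specific dynamic data structures.

      def maximal_indices(seq, has_property):
          # For each i, compute the largest j such that seq[i..j] has the
          # hereditary property (assuming every single element has it).
          n = len(seq)
          out = []
          j = 0
          for i in range(n):
              j = max(j, i)
              while j + 1 < n and has_property(seq[i:j + 2]):
                  j += 1
              out.append(j)
          return out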
  • On the Shadow Simplex Method for Curved Polyhedra [DOI]
    Daniel Dadush and Nicolai Hähnle.

    We study the simplex method over polyhedra satisfying certain "discrete curvature" lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al. (SOCG 2012), Brunsch and Röglin (ICALP 2013), and Eisenbrand and Vempala (2014) have improved our understanding of such polyhedra. We develop a new type of dual analysis of the shadow simplex method which provides a clean and powerful tool for improving all previously mentioned results. Our methods are inspired by the recent work of Bonifas and the first named author, who analyzed a remarkably similar process as part of an algorithm for the Closest Vector Problem with Preprocessing. For our first result, we obtain a constructive diameter bound of O((n^2 / delta) ln (n / delta)) for n-dimensional polyhedra with curvature parameter delta in (0, 1]. For the class of polyhedra arising from totally unimodular constraint matrices, this implies a bound of O(n^3 ln n). For linear optimization, given an initial feasible vertex, we show that an optimal vertex can be found using an expected O((n^3 / delta) ln (n / delta)) simplex pivots, each requiring O(mn) time to compute. An initial feasible solution can be found using O((m n^3 / delta) ln (n / delta)) pivot steps.
  • On Computability and Triviality of Well Groups [DOI]
    Peter Franek and Marek Krcál.

    The concept of well group in a special but important case captures homological properties of the zero set of a continuous map f from K to R^n on a compact space K that are invariant with respect to perturbations of f. The perturbations are arbitrary continuous maps within L_infinity distance r from f for a given r > 0. The main drawback of the approach is that the computability of well groups was shown only when dim K = n or n = 1. Our contribution to the theory of well groups is twofold: on the one hand we improve on the computability issue, but on the other hand we present a range of examples where the well groups are incomplete invariants, that is, fail to capture certain important robust properties of the zero set. For the first part, we identify a computable subgroup of the well group that is obtained by cap product with the pullback of the orientation of R^n by f. In other words, well groups can be algorithmically approximated from below. When f is smooth and dim K < 2n-2, our approximation of the (dim K - n)-th well group is exact. For the second part, we find examples of maps f, f' from K to R^n with all well groups isomorphic but whose perturbations have different zero sets. We discuss a possible replacement of the well groups of vector valued maps by an invariant of a better descriptive power and computability status.
  • Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings [DOI]
    Daniel Dadush.

    We give a 2^{O(n)} (1+1/eps)^n time and poly(n)-space deterministic algorithm for computing a (1+eps)^n approximation to the volume of a general convex body K, which comes close to matching the (1+c/eps)^{n/2} lower bound for volume estimation in the oracle model by Barany and Furedi (STOC 1986, Proc. Amer. Math. Soc. 1988). This improves on the previous results of Dadush and Vempala (Proc. Nat'l Acad. Sci. 2013), which gave the above result only for symmetric bodies and achieved a dependence of 2^{O(n)} (1+log^{5/2}(1/eps)/eps^3)^n. For our methods, we reduce the problem of volume estimation in K to counting lattice points in K subseteq R^n (via enumeration) for a specially constructed lattice L: a so-called thin covering of space with respect to K (more precisely, for which L + K = R^n and vol_n(K)/det(L) = 2^{O(n)}). The trade off between time and approximation ratio is achieved by scaling down the lattice. As our main technical contribution, we give the first deterministic 2^{O(n)}-time and poly(n)-space construction of thin covering lattices for general convex bodies. This improves on a recent construction of Alon et al. (STOC 2013) which requires exponential space and only works for symmetric bodies. For our construction, we combine the use of the M-ellipsoid from convex geometry (Milman, C.R. Math. Acad. Sci. Paris 1986) together with lattice sparsification and densification techniques (Dadush and Kun, SODA 2013; Rogers, J. London Math. Soc. 1950).
  • The Hardness of Approximation of Euclidean k-Means [DOI]
    Pranjal Awasthi, Moses Charikar, Ravishankar Krishnaswamy and Ali Kemal Sinop.

    The Euclidean k-means problem is a classical problem that has been extensively studied in the theoretical computer science, machine learning and the computational geometry communities. In this problem, we are given a set of n points in Euclidean space R^d, and the goal is to choose k center points in R^d so that the sum of squared distances of each point to its nearest center is minimized. The best approximation algorithms for this problem include a polynomial time constant factor approximation for general k and a (1+c)-approximation which runs in time poly(n) exp(k/c). At the other extreme, the only known computational complexity result for this problem is NP-hardness [Aloise et al.'09]. The main difficulty in obtaining hardness results stems from the Euclidean nature of the problem, and the fact that any point in R^d can be a potential center. This gap in understanding left open the intriguing possibility that the problem might admit a PTAS for all k, d. In this paper we provide the first hardness of approximation for the Euclidean k-means problem. Concretely, we show that there exists a constant c > 0 such that it is NP-hard to approximate the k-means objective to within a factor of (1+c). We show this via an efficient reduction from the vertex cover problem on triangle-free graphs: given a triangle-free graph, the goal is to choose the fewest number of vertices which are incident on all the edges. Additionally, we give a proof that the current best hardness results for vertex cover can be carried over to triangle-free graphs. To show this we transform G, a known hard vertex cover instance, by taking a graph product with a suitably chosen graph H, and showing that the size of the (normalized) maximum independent set is almost exactly preserved in the product graph using a spectral analysis, which might be of independent interest.
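
    For concreteness, the sketch below evaluates the Euclidean k-means objective whose approximability is discussed above; choosing the centers (here supplied by the caller) is exactly the hard part.

      import numpy as np

      def kmeans_cost(points, centers):
          # Sum of squared Euclidean distances from each point to its nearest
          # center.
          pts = np.asarray(points, dtype=float)
          ctr = np.asarray(centers, dtype=float)
          sq = ((pts[:, None, :] - ctr[None, :, :]) ** 2).sum(axis=2)
          return sq.min(axis=1).sum()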
  • Low-Quality Dimension Reduction and High-Dimensional Approximate Nearest Neighbor [DOI]
    Evangelos Anagnostopoulos, Ioannis Z. Emiris and Ioannis Psarros.

    The approximate nearest neighbor problem (epsilon-ANN) in Euclidean settings is a fundamental question, which has been addressed by two main approaches: Data-dependent space partitioning techniques perform well when the dimension is relatively low, but are affected by the curse of dimensionality. On the other hand, locality sensitive hashing has polynomial dependence in the dimension, sublinear query time with an exponent inversely proportional to (1+epsilon)^2, and subquadratic space requirement. We generalize the Johnson-Lindenstrauss Lemma to define "low-quality" mappings to a Euclidean space of significantly lower dimension, such that they satisfy a requirement weaker than approximately preserving all distances or even preserving the nearest neighbor. This mapping guarantees, with high probability, that an approximate nearest neighbor lies among the k approximate nearest neighbors in the projected space. These can be efficiently retrieved while using only linear storage by a data structure, such as BBD-trees. Our overall algorithm, given n points in dimension d, achieves space usage in O(dn), preprocessing time in O(dn log n), and query time in O(d n^rho log n), where rho is proportional to 1 - 1/loglog n, for fixed epsilon in (0, 1). The dimension reduction is larger if one assumes that point sets possess some structure, namely bounded expansion rate. We implement our method and present experimental results in up to 500 dimensions and 10^6 points, which show that the practical performance is better than predicted by the theoretical analysis. In addition, we compare our approach with E2LSH.
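
    A rough sketch of the pipeline described above: project with a Gaussian random matrix, collect the k nearest candidates in the projected space, and resolve the query among those candidates in the original space. The parameters target_dim and k are illustrative, and the candidate search is brute force rather than a BBD-tree.

      import numpy as np

      def approx_nn(points, query, target_dim=20, k=50, seed=0):
          # Low-quality dimension reduction followed by candidate refinement.
          rng = np.random.default_rng(seed)
          pts = np.asarray(points, dtype=float)
          q = np.asarray(query, dtype=float)
          G = rng.standard_normal((pts.shape[1], target_dim)) / np.sqrt(target_dim)
          proj_pts, proj_q = pts @ G, q @ G
          cand = np.argsort(((proj_pts - proj_q) ** 2).sum(axis=1))[:k]
          best = cand[np.argmin(((pts[cand] - q) ** 2).sum(axis=1))]
          return int(best)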
  • Order on Order Types [DOI]
    Alexander Pilz and Emo Welzl.

    Given P and P', equally sized planar point sets in general position, we call a bijection from P to P' crossing-preserving if crossings of connecting segments in P are preserved in P' (extra crossings may occur in P'). If such a mapping exists, we say that P' crossing-dominates P, and if such a mapping exists in both directions, P and P' are called crossing-equivalent. The relation is transitive, and we have a partial order on the obtained equivalence classes (called crossing types or x-types). Point sets of equal order type are clearly crossing-equivalent, but not vice versa. Thus, x-types are a coarser classification than order types. (We will see, though, that a collapse of different order types to one x-type occurs for sets with triangular convex hull only.) We argue that either the maximal or the minimal x-types are sufficient for answering many combinatorial (existential or extremal) questions on planar point sets. Motivated by this we consider basic properties of the relation. We characterize order types crossing-dominated by points in convex position. Further, we give a full characterization of minimal and maximal abstract order types. Based on that, we provide a polynomial-time algorithm to check whether a point set crossing-dominates another. Moreover, we generate all maximal and minimal x-types for small numbers of points.
  • A Short Proof of a Near-Optimal Cardinality Estimate for the Product of a Sum Set [DOI]
    Oliver Roche-Newton.

    In this note it is established that, for any finite set A of real numbers, there exist two elements a, b from A such that |(a + A)(b + A)| > c|A|^2 / log |A|, where c is some positive constant. In particular, it follows that |(A + A)(A + A)| > c|A|^2 / log |A|. The latter inequality had in fact already been established in an earlier work of the author and Rudnev, which built upon the recent developments of Guth and Katz in their work on the Erdős distinct distance problem. Here, we do not use those relatively deep methods, and instead we need just a single application of the Szemerédi-Trotter Theorem. The result is also qualitatively stronger than the corresponding sum-product estimate from the paper of the author and Rudnev, since the set (a + A)(b + A) is defined by only two variables, rather than four. One can view this as a solution for the pinned distance problem, under an alternative notion of distance, in the special case when the point set is a direct product A x A. Another advantage of this more elementary approach is that these results can now be extended for the first time to the case when A is a set of complex numbers.
  • Trajectory Grouping Structure under Geodesic Distance [DOI]
    Irina Kostitsyna, Marc Van Kreveld, Maarten Löffler, Bettina Speckmann and Frank Staals.

    In recent years trajectory data has become one of the main types of geographic data, and hence algorithmic tools to handle large quantities of trajectories are essential. A single trajectory is typically represented as a sequence of time-stamped points in the plane. In a collection of trajectories one wants to detect maximal groups of moving entities and their behaviour (merges and splits) over time. This information can be summarized in the trajectory grouping structure. Significantly extending the work of Buchin et al. [WADS 2013] into a realistic setting, we show that the trajectory grouping structure can be computed efficiently also if obstacles are present and the distance between the entities is measured by geodesic distance. We bound the number of critical events: times at which the distance between two subsets of moving entities is exactly epsilon, where epsilon is the threshold distance that determines whether two entities are close enough to be in one group. In case the n entities move in a simple polygon along trajectories with tau vertices each we give an O(tau n^2) upper bound, which is tight in the worst case. In case of well-spaced obstacles we give an O(tau (n^2 + m lambda_4(n))) upper bound, where m is the total complexity of the obstacles, and lambda_s(n) denotes the maximum length of a Davenport-Schinzel sequence of n symbols of order s. In case of general obstacles we give an O(tau min(n^2 + m^3 lambda_4(n), n^2 m^2)) upper bound. Furthermore, for all cases we provide efficient algorithms to compute the critical events, which in turn leads to efficient algorithms to compute the trajectory grouping structure.
  • On the Beer Index of Convexity and Its Variants [DOI]
    Martin Balko, Vít Jelínek, Pavel Valtr and Bartosz Walczak.

    Let S be a subset of R^d with finite positive Lebesgue measure. The Beer index of convexity b(S) of S is the probability that two points of S chosen uniformly independently at random see each other in S. The convexity ratio c(S) of S is the Lebesgue measure of the largest convex subset of S divided by the Lebesgue measure of S. We investigate a relationship between these two natural measures of convexity of S. We show that every subset S of the plane with simply connected components satisfies b(S) <= alpha c(S) for an absolute constant alpha, provided b(S) is defined. This implies an affirmative answer to the conjecture of Cabello et al. asserting that this estimate holds for simple polygons. We also consider higher-order generalizations of b(S). For 1 <= k <= d, the k-index of convexity b_k(S) of a subset S of R^d is the probability that the convex hull of a (k+1)-tuple of points chosen uniformly independently at random from S is contained in S. We show that for every d >= 2 there is a constant beta(d) > 0 such that every subset S of R^d satisfies b_d(S) <= beta c(S), provided b_d(S) exists. We provide an almost matching lower bound by showing that there is a constant gamma(d) > 0 such that for every epsilon from (0,1] there is a subset S of R^d of Lebesgue measure one satisfying c(S) <= epsilon and b_d(S) >= (gamma epsilon)/log2(1/epsilon) >= (gamma c(S))/log2(1/c(S)).
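
    A Monte Carlo sketch of the quantity b(S) itself for a polygonal S, using the shapely library's containment tests; it only makes the definition concrete and has nothing to do with the paper's proofs. The sample count is an arbitrary choice.

      import random
      from shapely.geometry import LineString, Point, Polygon

      def beer_index_estimate(polygon_coords, samples=10000, seed=0):
          # Estimate the probability that two uniform random points of the
          # polygon see each other inside it.
          rng = random.Random(seed)
          poly = Polygon(polygon_coords)
          minx, miny, maxx, maxy = poly.bounds

          def random_point():
              while True:  # rejection sampling in the bounding box
                  p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
                  if poly.contains(p):
                      return p

          hits = 0
          for _ in range(samples):
              p, q = random_point(), random_point()
              if poly.contains(LineString([p, q])):
                  hits += 1
          return hits / samples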
  • Approximate Geometric MST Range Queries [DOI]
    Sunil Arya, David Mount and Eunhui Park.

    Range searching is a widely-used method in computational geometry for efficiently accessing local regions of a large data set. Typically, range searching involves either counting or reporting the points lying within a given query region, but it is often desirable to compute statistics that better describe the structure of the point set lying within the region, not just the count. In this paper we consider the geometric minimum spanning tree (MST) problem in the context of range searching where approximation is allowed. We are given a set P of n points in R^d. The objective is to preprocess P so that given an admissible query region Q, it is possible to efficiently approximate the weight of the minimum spanning tree of the subset of P lying within Q. There are two natural sources of approximation error, first by treating Q as a fuzzy object and second by approximating the MST weight itself. To model this, we assume that we are given two positive real approximation parameters eps_q and eps_w. Following the typical practice in approximate range searching, the range is expressed as two shapes Q^- and Q^+, where Q^- is contained in Q, which is contained in Q^+, and their boundaries are separated by a distance of at least eps_q diam(Q). Points within Q^- must be included and points external to Q^+ cannot be included. A weight W is a valid answer to the query if there exist subsets P' and P'' of P, such that P' contains all points of P within Q^-, P'' contains no points of P outside Q^+, P' is contained in P'', and wt(MST(P')) <= W <= (1+eps_w) wt(MST(P'')). In this paper, we present an efficient data structure for answering such queries. Our approach uses simple data structures based on quadtrees, and it can be applied whenever Q^- and Q^+ are compact sets of constant combinatorial complexity. It uses space O(n), and it answers queries in time O(log n + 1/(eps_q eps_w)^{d + O(1)}). The O(1) term in the exponent is a small constant independent of dimension, and the hidden constant factor in the overall running time depends on d, but not on eps_q or eps_w. Preprocessing requires knowledge of eps_w, but not eps_q.
  • Effectiveness of Local Search for Geometric Optimization [DOI]
    Vincent Cohen-Addad and Claire Mathieu.

    What is the effectiveness of local search algorithms for geometric problems in the plane? We prove that local search with neighborhoods of magnitude 1/epsilon^c is an approximation scheme for the following problems in the Euclidean plane: TSP with random inputs, Steiner tree with random inputs, uniform facility location (with worst case inputs), and bicriteria k-median (also with worst case inputs). The randomness assumption is necessary for TSP.
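
    The sketch below is the plain single-swap local search loop for discrete k-median, just to fix ideas; the paper analyzes larger neighborhoods (of size 1/epsilon^c) and the other objectives listed above, none of which is reproduced here.

      import numpy as np

      def kmedian_cost(pts, centers):
          # Sum of Euclidean distances from each point to its nearest center.
          d = np.linalg.norm(pts[:, None, :] - pts[centers][None, :, :], axis=2)
          return d.min(axis=1).sum()

      def local_search_kmedian(points, k, seed=0):
          # Start from random centers and keep applying improving single swaps
          # (replace one center by one non-center) until none exists.
          pts = np.asarray(points, dtype=float)
          rng = np.random.default_rng(seed)
          centers = list(rng.choice(len(pts), size=k, replace=False))
          best = kmedian_cost(pts, centers)
          improved = True
          while improved:
              improved = False
              for i in range(k):
                  for cand in range(len(pts)):
                      if cand in centers:
                          continue
                      trial = centers[:i] + [cand] + centers[i + 1:]
                      cost = kmedian_cost(pts, trial)
                      if cost < best:
                          centers, best, improved = trial, cost, True
          return centers, best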
  • On Generalized Heawood Inequalities for Manifolds: A Van Kampen-Flores-type Nonembeddability Result [DOI]
    Xavier Goaoc, Isaac Mabillard, Pavel Patak, Zuzana Patakova, Martin Tancer and Uli Wagner.

    The fact that the complete graph K_5 does not embed in the plane has been generalized in two independent directions. On the one hand, the solution of the classical Heawood problem for graphs on surfaces established that the complete graph K_n embeds in a closed surface M if and only if (n-3)(n-4) is at most 6 b_1(M), where b_1(M) is the first Z_2-Betti number of M. On the other hand, Van Kampen and Flores proved that the k-skeleton of the n-dimensional simplex (the higher-dimensional analogue of K_{n+1}) embeds in R^{2k} if and only if n is less than or equal to 2k+2. Two decades ago, Kühnel conjectured that the k-skeleton of the n-simplex embeds in a compact, (k-1)-connected 2k-manifold with k-th Z_2-Betti number b_k only if the following generalized Heawood inequality holds: binom{n-k-1}{k+1} is at most binom{2k+1}{k+1} b_k. This is a common generalization of the case of graphs on surfaces as well as the Van Kampen-Flores theorem. In the spirit of Kühnel's conjecture, we prove that if the k-skeleton of the n-simplex embeds in a 2k-manifold with k-th Z_2-Betti number b_k, then n is at most 2 b_k binom{2k+2}{k} + 2k + 5. This bound is weaker than the generalized Heawood inequality, but does not require the assumption that M is (k-1)-connected. Our proof uses a result of Volovikov about maps that satisfy a certain homological triviality condition.
  • Limits of Order Types [DOI]
    Xavier Goaoc, Alfredo Hubard, Remi de Joannis de Verclos, Jean-Sebastien Sereni and Jan Volec.

    The notion of limits of dense graphs was invented, among other reasons, to attack problems in extremal graph theory. It is straightforward to define limits of order types in analogy with limits of graphs, and this paper examines how to adapt to this setting two approaches developed to study limits of dense graphs. We first consider flag algebras, which were used to open various questions on graphs to mechanical solving via semidefinite programming. We define flag algebras of order types, and use them to obtain, via the semidefinite method, new lower bounds on the density of 5- or 6-tuples in convex position in arbitrary point sets, as well as some inequalities expressing the difficulty of sampling order types uniformly. We next consider graphons, a representation of limits of dense graphs that enable their study by continuous probabilistic or analytic methods. We investigate how planar measures fare as a candidate analogue of graphons for limits of order types. We show that the map sending a measure to its associated limit is continuous and, if restricted to uniform measures on compact convex sets, a homeomorphism. We prove, however, that this map is not surjective. Finally, we examine a limit of order types similar to classical constructions in combinatorial geometry (Erdos-Szekeres, Horton...) and show that it cannot be represented by any somewhere regular measure; we analyze this example via an analogue of Sylvester's problem on the probability that k random points are in convex position.
  • Restricted Isometry Property for General p-Norms [DOI]
    Zeyuan Allen-Zhu, Rati Gelashvili and Ilya Razenshteyn.

    The Restricted Isometry Property (RIP) is a fundamental property of a matrix which enables sparse recovery. Informally, an m x n matrix satisfies RIP of order k for the L_p norm, if |Ax|_p is approximately |x|_p for every x with at most k non-zero coordinates. For every 1 <= p < infty we obtain almost tight bounds on the minimum number of rows m necessary for the RIP property to hold. Prior to this work, only the cases p = 1, 1 + 1/log(k), and 2 were studied. Interestingly, our results show that the case p=2 is a "singularity" point: the optimal number of rows m is Θ(k^p) for all p in [1, infty) \ {2}, as opposed to Θ(k) for p=2. We also obtain almost tight bounds for the column sparsity of RIP matrices and discuss implications of our results for the Stable Sparse Recovery problem.
  • Comparing Graphs via Persistence Distortion [DOI]
    Tamal Dey, Dayu Shi and Yusu Wang.

    Metric graphs are ubiquitous in science and engineering. For example, many data are drawn from hidden spaces that are graph-like, such as the cosmic web. A metric graph offers one of the simplest yet still meaningful ways to represent the non-linear structure hidden behind the data. In this paper, we propose a new distance between two finite metric graphs, called the persistence-distortion distance, which draws upon a topological idea. This topological perspective along with the metric space viewpoint provide a new angle to the graph matching problem. Our persistence-distortion distance has two properties not shared by previous methods: First, it is stable against the perturbations of the input graph metrics. Second, it is a continuous distance measure, in the sense that it is defined on an alignment of the underlying spaces of input graphs, instead of merely their nodes. This makes our persistence-distortion distance robust against, for example, different discretizations of the same underlying graph. Despite considering the input graphs as continuous spaces, that is, taking all points into account, we show that we can compute the persistence-distortion distance in polynomial time. The time complexity for the discrete case where only graph nodes are considered is much faster.
  • Bounding Helly Numbers via Betti Numbers [DOI]
    Xavier Goaoc, Pavel Patak, Zuzana Patakova, Martin Tancer and Uli Wagner.

    We show that very weak topological assumptions are enough to ensure the existence of a Helly-type theorem. More precisely, we show that for any non-negative integers b and d there exists an integer h(b,d) such that the following holds. If F is a finite family of subsets of R^d such that the i-th reduced Betti number (with Z_2 coefficients in singular homology) of the intersection of any proper subfamily G of F is at most b for every non-negative integer i less than or equal to (d-1)/2, then F has Helly number at most h(b,d). These topological conditions are sharp: not controlling any of these first Betti numbers allows for families with unbounded Helly number. Our proofs combine homological non-embeddability results with a Ramsey-based approach to build, given an arbitrary simplicial complex K, some well-behaved chain map from C_*(K) to C_*(R^d). Both techniques are of independent interest.
  • Sylvester-Gallai for Arrangements of Subspaces [DOI]
    Zeev Dvir and Guangda Hu.

    In this work we study arrangements of k-dimensional subspaces V_1,...,V_n over the complex numbers. Our main result shows that, if every pair V_a, V_b of subspaces is contained in a dependent triple (a triple V_a, V_b, V_c contained in a 2k-dimensional space), then the entire arrangement must be contained in a subspace whose dimension depends only on k (and not on n). The theorem holds under the assumption that the subspaces are pairwise non-intersecting (otherwise it is false). This generalizes the Sylvester-Gallai theorem (or Kelly's theorem for complex numbers), which proves the k=1 case. Our proof also handles arrangements in which we have many pairs (instead of all) appearing in dependent triples, generalizing the quantitative results of Barak et al. One of the main ingredients in the proof is a strengthening of a theorem of Barthe (from the k=1 to k>1 case) proving the existence of a linear map that makes the angles between pairs of subspaces large on average. Such a mapping can be found, unless there is an obstruction in the form of a low dimensional subspace intersecting many of the spaces in the arrangement (in which case one can use a different argument to prove the main theorem).
  • An Optimal Algorithm for the Separating Common Tangents of Two Polygons [DOI]
    Mikkel Abrahamsen.

    We describe an algorithm for computing the separating common tangents of two simple polygons using linear time and only constant workspace. A tangent of a polygon is a line touching the polygon such that all of the polygon lies to the same side of the line. A separating common tangent of two polygons is a tangent of both polygons where the polygons are lying on different sides of the tangent. Each polygon is given as a read-only array of its corners. If a separating common tangent does not exist, the algorithm reports that. Otherwise, two corners defining a separating common tangent are returned. The algorithm is simple and implies an optimal algorithm for deciding if the convex hulls of two polygons are disjoint or not. This was not known to be possible in linear time and constant workspace prior to this paper. An outer common tangent is a tangent of both polygons where the polygons are on the same side of the tangent. In the case where the convex hulls of the polygons are disjoint, we give an algorithm for computing the outer common tangents in linear time using constant workspace.
  • 1-String B2-VPG Representation of Planar Graphs [DOI]
    Therese Biedl and Martin Derka.

    In this paper, we prove that every planar graph has a 1-string B2-VPG representation - a string representation using paths in a rectangular grid that contain at most two bends. Furthermore, two paths representing vertices u, v intersect precisely once whenever there is an edge between u and v.
  • Optimal Morphs of Convex Drawings [DOI]
    Patrizio Angelini, Giordano Da Lozzo, Fabrizio Frati, Anna Lubiw, Maurizio Patrignani and Vincenzo Roselli.

    We give an algorithm to compute a morph between any two convex drawings of the same plane graph. The morph preserves the convexity of the drawing at any time instant and moves each vertex along a piecewise linear curve with linear complexity. The linear bound is asymptotically optimal in the worst case.
  • Geometric Inference on Kernel Density Estimates [DOI]
    Jeff Phillips, Bei Wang and Yan Zheng.

    We show that geometric inference of a point cloud can be calculated by examining its kernel density estimate with a Gaussian kernel. This allows one to consider kernel density estimates, which are robust to spatial noise, subsampling, and approximate computation in comparison to raw point sets. This is achieved by examining the sublevel sets of the kernel distance, which isomorphically map to superlevel sets of the kernel density estimate. We prove new properties about the kernel distance, demonstrating stability results and allowing it to inherit reconstruction results from recent advances in distance-based topological reconstruction. Moreover, we provide an algorithm to estimate its topology using weighted Vietoris-Rips complexes.
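
    A small sketch of the central object: an (unnormalized) Gaussian kernel density estimate of a point cloud evaluated at query locations; the paper studies the topology of superlevel sets of such a function (equivalently, sublevel sets of the kernel distance), which is not computed here. The bandwidth sigma is an arbitrary choice.

      import numpy as np

      def gaussian_kde(points, queries, sigma=1.0):
          # Average Gaussian kernel value between each query location and the
          # points of the cloud.
          P = np.asarray(points, dtype=float)
          Q = np.asarray(queries, dtype=float)
          sq = ((Q[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-sq / (2.0 * sigma ** 2)).mean(axis=1)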
  • Optimal Deterministic Algorithms for 2-d and 3-d Shallow Cuttings [DOI]
    Timothy M. Chan and Konstantinos Tsakalidis.

    We present optimal deterministic algorithms for constructing shallow cuttings in an arrangement of lines in two dimensions or planes in three dimensions. Our results improve the deterministic polynomial-time algorithm of Matousek (1992) and the optimal but randomized algorithm of Ramos (1999). This leads to efficient derandomization of previous algorithms for numerous well-studied problems in computational geometry, including halfspace range reporting in 2-d and 3-d, k nearest neighbors search in 2-d, (<= k)-levels in 3-d, order-k Voronoi diagrams in 2-d, linear programming with k violations in 2-d, dynamic convex hulls in 3-d, dynamic nearest neighbor search in 2-d, convex layers (onion peeling) in 3-d, epsilon-nets for halfspace ranges in 3-d, and more. As a side product we also describe an optimal deterministic algorithm for constructing standard (non-shallow) cuttings in two dimensions, which is arguably simpler than the known optimal algorithms by Matousek (1991) and Chazelle (1993).
  • Riemannian Simplices and Triangulations [DOI] [+]
    Ramsay Dyer, Gert Vegter and Mathijs Wintraecken.

    We study a natural intrinsic definition of geometric simplices in Riemannian manifolds of arbitrary finite dimension, and exploit these simplices to obtain criteria for triangulating compact Riemannian manifolds. These geometric simplices are defined using Karcher means. Given a finite set of vertices in a convex set on the manifold, the point that minimises the weighted sum of squared distances to the vertices is the Karcher mean relative to the weights. Using barycentric coordinates as the weights, we obtain a smooth map from the standard Euclidean simplex to the manifold. A Riemannian simplex is defined as the image of the standard simplex under this barycentric coordinate map. In this work we articulate criteria that guarantee that the barycentric coordinate map is a smooth embedding. If it is not, we say the Riemannian simplex is degenerate. Quality measures for the "thickness" or "fatness" of Euclidean simplices can be adapted to apply to these Riemannian simplices. For manifolds of dimension 2, the simplex is non-degenerate if it has a positive quality measure, as in the Euclidean case. However, when the dimension is greater than two, non-degeneracy can be guaranteed only when the quality exceeds a positive bound that depends on the size of the simplex and local bounds on the absolute values of the sectional curvatures of the manifold. An analysis of the geometry of non-degenerate Riemannian simplices leads to conditions which guarantee that a simplicial complex is homeomorphic to the manifold.
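
    To make the Karcher-mean definition concrete, here is a rough sketch restricted to the unit sphere (a simple Riemannian manifold), using the standard fixed-point iteration; it is illustrative only and assumes the weights are barycentric (nonnegative, summing to one).

        import numpy as np

        def log_map(base, p):
            # Riemannian logarithm on the unit sphere: the tangent vector at `base`
            # pointing toward p whose length is the geodesic distance arccos(<base, p>).
            c = np.clip(np.dot(base, p), -1.0, 1.0)
            theta = np.arccos(c)
            v = p - c * base
            n = np.linalg.norm(v)
            return np.zeros_like(base) if n < 1e-12 else (theta / n) * v

        def exp_map(base, v):
            # Riemannian exponential on the unit sphere.
            n = np.linalg.norm(v)
            return base if n < 1e-12 else np.cos(n) * base + np.sin(n) * (v / n)

        def karcher_mean(vertices, weights, iters=100):
            # Point minimising the weighted sum of squared geodesic distances to the
            # vertices; varying the barycentric weights over the standard simplex
            # traces out the Riemannian simplex described in the abstract.
            vertices = [np.asarray(v, dtype=float) for v in vertices]
            x = vertices[0] / np.linalg.norm(vertices[0])
            for _ in range(iters):
                g = sum(w * log_map(x, p) for w, p in zip(weights, vertices))
                x = exp_map(x, g)
            return x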
  • Geometric Spanners for Points Inside a Polygonal Domain [DOI] [+]
    Mohammad Ali Abam, Marjan Adeli, Hamid Homapour and Pooya Zafar Asadollahpoor.

    Let P be a set of n points inside a polygonal domain D. A polygonal domain with h holes (or obstacles) consists of h disjoint polygonal obstacles surrounded by a simple polygon which itself acts as an obstacle. We first study t-spanners for the set P with respect to the geodesic distance function d, where for any two points p and q, d(p,q) equals the Euclidean length of the shortest path from p to q that avoids the interiors of the obstacles. For the case where the polygonal domain is a simple polygon (i.e., h=0), we construct a (sqrt(10)+eps)-spanner with O(n log^2 n) edges, where eps is a given positive real number. For the case of h holes, our construction gives a (5+eps)-spanner of size O(sqrt(h) n log^2 n). Moreover, we study t-spanners for the visibility graph of P (VG(P), for short) with respect to a hole-free polygonal domain D. The graph VG(P) is not necessarily a complete graph or even connected. In this case, we propose an algorithm that constructs a (3+eps)-spanner of size almost O(n^{4/3}). In addition, we show that there is a set P of n points such that any (3-eps)-spanner of VG(P) must contain almost n^2 edges.
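
    As a small aside (not the construction from the paper), the defining property of a t-spanner is easy to check on small instances: compare shortest-path distances in the candidate spanner with the underlying metric. The helper below is a generic, hypothetical sketch; dist would be the geodesic distance of the domain, and vertices are assumed hashable and comparable (e.g., coordinate tuples).

        import heapq

        def stretch_factor(vertices, edges, dist):
            # Smallest t for which the graph (vertices, edges), with edge weights
            # dist(u, v), is a t-spanner of the metric dist: the maximum over all
            # pairs of (graph distance) / dist(u, v).
            adj = {v: [] for v in vertices}
            for u, v in edges:
                w = dist(u, v)
                adj[u].append((v, w))
                adj[v].append((u, w))

            def dijkstra(src):
                d = {v: float("inf") for v in vertices}
                d[src] = 0.0
                heap = [(0.0, src)]
                while heap:
                    du, u = heapq.heappop(heap)
                    if du > d[u]:
                        continue
                    for v, w in adj[u]:
                        if du + w < d[v]:
                            d[v] = du + w
                            heapq.heappush(heap, (d[v], v))
                return d

            t = 1.0
            for u in vertices:
                du = dijkstra(u)
                for v in vertices:
                    if v != u and dist(u, v) > 0:
                        t = max(t, du[v] / dist(u, v))
            return t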
  • Star Unfolding from a Geodesic Curve [DOI] [+]
    Stephen Kiazyk and Anna Lubiw.

    There are two known ways to unfold a convex polyhedron without overlap: the star unfolding and the source unfolding, both of which use shortest paths from vertices to a source point on the surface of the polyhedron. Non-overlap of the source unfolding is straightforward; non-overlap of the star unfolding was proved by Aronov and O'Rourke in 1992. Our first contribution is a much simpler proof of non-overlap of the star unfolding. Both the source and star unfolding can be generalized to use a simple geodesic curve instead of a source point. The star unfolding from a geodesic curve cuts the geodesic curve and a shortest path from each vertex to the geodesic curve. Demaine and Lubiw conjectured that the star unfolding from a geodesic curve does not overlap. We prove a special case of the conjecture. Our special case includes the previously known case of unfolding from a geodesic loop. For the general case we prove that the star unfolding from a geodesic curve can be separated into at most two non-overlapping pieces.
  • Recognition and Complexity of Point Visibility Graphs [DOI] [+]
    Jean Cardinal and Udo Hoffmann.

    A point visibility graph is a graph induced by a set of points in the plane, where every vertex corresponds to a point, and two vertices are adjacent whenever the two corresponding points are visible from each other, that is, the open segment between them does not contain any other point of the set. We study the recognition problem for point visibility graphs: given a simple undirected graph, decide whether it is the visibility graph of some point set in the plane. We show that the problem is complete for the existential theory of the reals. Hence the problem is as hard as deciding the existence of a real solution to a system of polynomial inequalities. The proof involves simple substructures forcing collinearities in all realizations of some visibility graphs, which are applied to the algebraic universality constructions of Mnev and Richter-Gebert. This solves a longstanding open question and paves the way for the analysis of other classes of visibility graphs. Furthermore, as a corollary of one of our constructions, we show that there exist point visibility graphs that do not admit any geometric realization with points having integer coordinates.
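
    Just to pin down the definition (this has nothing to do with the hardness reduction), the point visibility graph of a small point set can be computed by brute force with exact integer arithmetic; the names below are made up for illustration.

        from itertools import combinations

        def collinear(a, b, c):
            # True iff c lies on the line through a and b (exact with integer coordinates).
            return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

        def strictly_between(a, b, c):
            # Assuming collinearity, true iff c lies in the open segment ab.
            return (c != a and c != b and
                    min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                    min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

        def point_visibility_graph(points):
            # Two points are adjacent iff no third point of the set lies on the open
            # segment between them; O(n^3) time, points given as coordinate tuples.
            edges = set()
            for p, q in combinations(points, 2):
                blocked = any(collinear(p, q, r) and strictly_between(p, q, r)
                              for r in points if r != p and r != q)
                if not blocked:
                    edges.add((p, q))
            return edges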
  • Maintaining Contour Trees of Dynamic Terrains [DOI] [+]
    Pankaj K. Agarwal, Thomas Mølhave, Morten Revsbæk, Issam Safa, Yusu Wang and Jungwoo Yang.

    We study the problem of maintaining the contour tree T of a terrain Sigma, represented as a triangulated xy-monotone surface, as the heights of its vertices vary continuously with time. We characterize the combinatorial changes in T and how they relate to topological changes in Sigma. We present a kinetic data structure (KDS) for maintaining T efficiently. It maintains certificates that fail (i.e., an event occurs) only when the heights of two adjacent vertices become equal or two saddle vertices appear on the same contour. Assuming that the heights of two vertices of Sigma become equal only O(1) times and these instances can be computed in O(1) time, the KDS processes O(kappa + n) events, where n is the number of vertices in Sigma and kappa is the number of events at which the combinatorial structure of T changes, and it processes each event in O(log n) time. The KDS can be extended to maintain an augmented contour tree and a join/split tree.
  • A Simpler Linear-Time Algorithm for Intersecting Two Convex Polyhedra in Three Dimensions [DOI] [+]
    Timothy M. Chan.

    Chazelle [FOCS'89] gave a linear-time algorithm to compute the intersection of two convex polyhedra in three dimensions. We present a simpler algorithm to do the same.
  • A Geometric Approach for the Upper Bound Theorem for Minkowski Sums of Convex Polytopes [DOI] [+]
    Menelaos I. Karavelas and Eleni Tzanaki.

    We derive tight expressions for the maximum number of k-faces, k=0,...,d-1, of the Minkowski sum, P_1+...+P_r, of r convex d-polytopes P_1,...,P_r in R^d, where d >= 2 and r < d, as a (recursively defined) function of the number of vertices of the polytopes. Our results coincide with those recently proved by Adiprasito and Sanyal [1]. In contrast to Adiprasito and Sanyal's approach, which uses tools from Combinatorial Commutative Algebra, our approach is purely geometric and uses basic notions such as f- and h-vector calculus, stellar subdivisions and shellings, and generalizes the methodology used in [10] and [9] for proving upper bounds on the f-vector of the Minkowski sum of two and three convex polytopes, respectively. The key idea behind our approach is to express the Minkowski sum P_1+...+P_r as a section of the Cayley polytope C of the summands; bounding the k-faces of P_1+...+P_r then reduces to bounding the subset of the (k+r-1)-faces of C that contain vertices from each of the r polytopes. We end our paper with a sketch of an explicit construction that establishes the tightness of the upper bounds.
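
    The bounds above concern high dimensions, but a toy planar example already shows what is being counted: the Minkowski sum of two convex polygons and its number of vertices can be obtained naively as the convex hull of all pairwise sums. This is a generic illustration, not the Cayley-polytope machinery of the paper.

        import numpy as np
        from scipy.spatial import ConvexHull

        def minkowski_sum_vertices(P, Q):
            # Vertices of P + Q = {p + q : p in P, q in Q} for convex polygons P, Q
            # given as arrays of 2D points; naive O(nm log(nm)) construction.
            sums = np.array([p + q for p in np.asarray(P) for q in np.asarray(Q)])
            hull = ConvexHull(sums)
            return sums[hull.vertices]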
  • Combinatorial Redundancy Detection [DOI] [+]
    Komei Fukuda, Bernd Gaertner and May Szedlak.

    The problem of detecting and removing redundant constraints is fundamental in optimization. We focus on the case of linear programs (LPs) in dictionary form, given by n equality constraints in n+d variables, where the variables are constrained to be nonnegative. A variable x_r is called redundant if, after removing its nonnegativity constraint, the LP still has the same feasible region. The time needed to solve such an LP is denoted by LP(n,d). It is easy to see that solving n+d LPs of the above size is sufficient to detect all redundancies. The currently fastest practical method is the one by Clarkson: it solves n+d linear programs, but each of them has at most s variables, where s is the number of nonredundant constraints. In the first part we show that knowing all of the finitely many dictionaries of the LP is sufficient for the purpose of redundancy detection. A dictionary is a matrix that can be thought of as an enriched encoding of a vertex in the LP. Moreover - and this is the combinatorial aspect - it is enough to know only the signs of the entries; the actual values do not matter. Concretely, we show that for any variable x_r one can find a dictionary such that its sign pattern is either a redundancy or a nonredundancy certificate for x_r. In the second part we show that, considering only the sign patterns of the dictionary, there is an output-sensitive algorithm of running time of order d(n+d) s^{d-1} LP(s,d) + d s^d LP(n,d) that detects all redundancies. In the case where all constraints are in general position, the running time is of order s LP(n,d) + (n+d) LP(s,d), which is essentially the running time of the Clarkson method. Our algorithm extends naturally to the more general setting of arrangements of oriented topological hyperplanes.
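
    A hedged sketch of the naive baseline mentioned above (one LP per variable, each of full size), using scipy; the function names are hypothetical, and this is neither Clarkson's method nor the sign-pattern algorithm of the paper.

        import numpy as np
        from scipy.optimize import linprog

        def is_redundant(A, b, r):
            # For the LP in dictionary form {A x = b, x >= 0}: the nonnegativity
            # constraint of x_r is redundant iff x_r >= 0 already follows from
            # A x = b and x_i >= 0 for all i != r.  Test this by minimising x_r.
            n_vars = A.shape[1]
            c = np.zeros(n_vars)
            c[r] = 1.0
            bounds = [(None, None) if i == r else (0, None) for i in range(n_vars)]
            res = linprog(c, A_eq=A, b_eq=b, bounds=bounds, method="highs")
            if res.status == 2:    # remaining system infeasible: region empty either way
                return True
            if res.status == 3:    # x_r unbounded below once its sign constraint is dropped
                return False
            return res.fun >= 0.0

        def redundant_variables(A, b):
            # Naive detection: n + d LPs of size LP(n, d) each.
            return [r for r in range(A.shape[1]) if is_redundant(A, b, r)]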
  • A Linear-Time Algorithm for the Geodesic Center of a Simple Polygon [DOI] [+]
    Luis Barba, Prosenjit Bose, Matias Korman, Jean-Lou De Carufel, Hee-Kap Ahn and Eunjin Oh.

    Let P be a closed simple polygon with n vertices. For any two points in P, the geodesic distance between them is the length of the shortest path that connects them among all paths contained in P. The geodesic center of P is the unique point in P that minimizes the largest geodesic distance to all other points of P. In 1989, Pollack, Sharir and Rote [Disc. & Comput. Geom. 89] gave an O(n log n)-time algorithm that computes the geodesic center of P. Since then, it has been a longstanding open question whether this running time can be improved (explicitly posed by Mitchell [Handbook of Computational Geometry, 2000]). In this paper we answer this question affirmatively and present a linear-time algorithm for the problem.
  • On the Smoothed Complexity of Convex Hulls [DOI] [+]
    Olivier Devillers, Marc Glisse, Xavier Goaoc and Rémy Thomasse.

    We establish an upper bound on the smoothed complexity of convex hulls in R^d under uniform Euclidean (L2) noise. Specifically, let {p_1*, p_2*, ..., p_n*} be an arbitrary set of n points in the unit ball in R^d and let p_i = p_i* + x_i, where x_1, x_2, ..., x_n are chosen independently from the ball of radius r. We show that the expected complexity, measured as the number of faces of all dimensions, of the convex hull of {p_1, p_2, ..., p_n} is O(n^{2-4/(d+1)} (1+1/r)^{d-1}); the magnitude r of the noise may vary with n. For d=2 this bound improves to O(n^{2/3} (1+r^{-2/3})). We also analyze the expected complexity of the convex hull of L2 and Gaussian perturbations of a nice sample of a sphere, giving a lower bound for the smoothed complexity. We identify the different regimes in terms of the scale, as a function of n, and show that as the magnitude of the noise increases, the complexity varies monotonically for Gaussian noise but non-monotonically for L2 noise.
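
    For intuition only (this is not part of the paper), one can estimate the smoothed complexity empirically: perturb a fixed point set with uniform noise in a ball of radius r and count hull vertices, averaged over trials. Function and parameter names are made up.

        import numpy as np
        from scipy.spatial import ConvexHull

        def uniform_ball_noise(n, d, r, rng):
            # n points uniform in the d-dimensional ball of radius r:
            # uniform direction times a radius with density proportional to rho^(d-1).
            dirs = rng.normal(size=(n, d))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            radii = r * rng.random(n) ** (1.0 / d)
            return dirs * radii[:, None]

        def smoothed_hull_complexity(base_points, r, trials=20, seed=0):
            # Average number of hull vertices of {p_i* + x_i} over random perturbations;
            # the vertex count is used as a simple proxy for the total face count.
            rng = np.random.default_rng(seed)
            n, d = base_points.shape
            counts = [len(ConvexHull(base_points + uniform_ball_noise(n, d, r, rng)).vertices)
                      for _ in range(trials)]
            return float(np.mean(counts))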
  • Strong Equivalence of the Interleaving and Functional Distortion Metrics for Reeb Graphs [DOI] [+]
    Ulrich Bauer, Elizabeth Munch and Yusu Wang.

    The Reeb graph is a construction that studies a topological space through the lens of a real-valued function. It has been commonly used in applications; however, its use on real data means that it is desirable, and increasingly necessary, to have methods for comparing Reeb graphs. Recently, several metrics on the set of Reeb graphs have been proposed. In this paper, we focus on two: the functional distortion distance and the interleaving distance. The former is based on the Gromov-Hausdorff distance, while the latter utilizes the equivalence between Reeb graphs and a particular class of cosheaves. However, both are defined by constructing a near-isomorphism between the two graphs under study. We show that the two metrics are strongly equivalent on the space of Reeb graphs. Our result also implies the bottleneck stability for persistence diagrams in terms of the Reeb graph interleaving distance.
  • Shortest Path to a Segment and Quickest Visibility Queries [DOI] [+]
    Esther Arkin, Alon Efrat, Christian Knauer, Joseph Mitchell, Valentin Polishchuk, Gunter Rote, Lena Schlipf and Topi Talvitie.

    We show how to preprocess a polygonal domain with a fixed starting point s in order to answer efficiently the following queries: Given a point q, how should one move from s in order to see q as soon as possible? This query resembles the well-known shortest-path-to-a-point query, except that the latter asks for the fastest way to reach q, instead of seeing it. Our solution methods include a data structure for a different generalization of shortest-path-to-a-point queries, which may be of independent interest: to report efficiently a shortest path from s to a query segment in the domain.
  • An Edge-Based Framework for Enumerating 3-Manifold Triangulations [DOI] [+]
    Benjamin A. Burton and William Pettersson.

    A typical census of 3-manifolds contains all manifolds (under various constraints) that can be triangulated with at most n tetrahedra. Although censuses are useful resources for mathematicians, constructing them is difficult: the best algorithms to date have not gone beyond n=12. The underlying algorithms essentially (i) enumerate all relevant 4-regular multigraphs on n nodes, and then (ii) for each multigraph G they enumerate possible 3-manifold triangulations with G as their dual 1-skeleton, of which there could be exponentially many. In practice, a small number of multigraphs often dominate the running times of census algorithms: for example, in a typical census on 10 tetrahedra, almost half of the running time is spent on just 0.3% of the graphs. Here we present a new algorithm for stage (ii), which is the computational bottleneck in this process. The key idea is to build triangulations by recursively constructing neighbourhoods of edges, in contrast to traditional algorithms which recursively glue together pairs of tetrahedron faces. We implement this algorithm, and find experimentally that whilst the overall performance is mixed, the new algorithm runs significantly faster on those "pathological" multigraphs for which existing methods are extremely slow. In this way the old and new algorithms complement one another, and together can yield significant performance improvements over either method alone.
  • Two Proofs for Shallow Packings [DOI] [+]
    Kunal Dutta, Esther Ezra and Arijit Ghosh.

    We refine the bound on the packing number, originally shown by Haussler, for shallow geometric set systems. Specifically, let V be a finite set system defined over an n-point set X; we view V as a set of indicator vectors over the n-dimensional unit cube. A delta-separated set of V is a subcollection W such that the Hamming distance between each pair u, v in W is greater than delta, where delta > 0 is an integer parameter. The delta-packing number is then defined as the cardinality of the largest delta-separated subcollection of V. Haussler showed an asymptotically tight bound of Θ((n/delta)^d) on the delta-packing number if V has VC-dimension (or primal shatter dimension) d. We refine this bound for the scenario where, for any subset X' of X of size m <= n and for any parameter 1 <= k <= m, the number of vectors of length at most k in the restriction of V to X' is only O(m^{d_1} k^{d-d_1}), for a fixed integer d > 0 and a real parameter 1 <= d_1 <= d (this generalizes the standard notion of bounded primal shatter dimension when d_1 = d). In this case, when V is "k-shallow" (all vector lengths are at most k), we show that its delta-packing number is O(n^{d_1} k^{d-d_1} / delta^d), matching Haussler's bound for the special cases where d_1 = d or k = n. We present two proofs: the first is an extension of Haussler's approach, and the second extends the proof of Chazelle, originally presented as a simplification of Haussler's proof.
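
    To make the quantity concrete (a toy sketch, unrelated to either proof), a greedy pass over the indicator vectors yields a delta-separated subcollection and hence a lower bound on the delta-packing number; the names are illustrative.

        import numpy as np

        def greedy_delta_packing(vectors, delta):
            # vectors: 0/1 indicator vectors over the ground set X.
            # Keep a vector only if its Hamming distance to every vector kept so far
            # exceeds delta; the result is delta-separated by construction.
            packing = []
            for v in vectors:
                v = np.asarray(v)
                if all(np.count_nonzero(v != w) > delta for w in packing):
                    packing.append(v)
            return packing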