Program – Tuesday


   
9:10
SOCG - Session A
(AUD 4)
Session Chair: Monique Teillaud
An Optimal Algorithm for the Separating Common Tangents of Two Polygons [DOI] [+]
Mikkel Abrahamsen.
We describe an algorithm for computing the separating common tangents of two simple polygons using linear time and only constant workspace. A tangent of a polygon is a line touching the polygon such that all of the polygon lies to the same side of the line. A separating common tangent of two polygons is a tangent of both polygons where the polygons are lying on different sides of the tangent. Each polygon is given as a read-only array of its corners. If a separating common tangent does not exist, the algorithm reports that. Otherwise, two corners defining a separating common tangent are returned. The algorithm is simple and implies an optimal algorithm for deciding if the convex hulls of two polygons are disjoint or not. This was not known to be possible in linear time and constant workspace prior to this paper. An outer common tangent is a tangent of both polygons where the polygons are on the same side of the tangent. In the case where the convex hulls of the polygons are disjoint, we give an algorithm for computing the outer common tangents in linear time using constant workspace.
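
The linear-time, constant-workspace algorithm is the paper's contribution; as a point of comparison, the definition itself can be checked by brute force in O(nm(n+m)) time. The sketch below (Python; the helper names are ours, not the paper's) tries every pair of corners and verifies the separation condition with orientation tests.

    def orient(a, b, c):
        """Sign of the cross product (b-a) x (c-a): >0 left of line ab, <0 right, 0 collinear."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def separating_common_tangent(P, Q, eps=1e-12):
        """Brute-force search for a separating common tangent of polygons P and Q.

        P and Q are lists of corners (x, y).  Returns a pair (p, q) of corners
        spanning a line with P in one closed half-plane and Q in the other,
        or None if no separating common tangent exists.
        """
        for p in P:
            for q in Q:
                if p == q:
                    continue
                sides_P = [orient(p, q, r) for r in P]
                sides_Q = [orient(p, q, r) for r in Q]
                p_left = all(s >= -eps for s in sides_P)
                p_right = all(s <= eps for s in sides_P)
                q_left = all(s >= -eps for s in sides_Q)
                q_right = all(s <= eps for s in sides_Q)
                if (p_left and q_right) or (p_right and q_left):
                    return p, q
        return None

    # Two triangles with disjoint convex hulls: a separating common tangent exists.
    print(separating_common_tangent([(0, 0), (1, 0), (0, 1)],
                                    [(3, 0), (4, 0), (3, 1)]))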

A Linear-Time Algorithm for the Geodesic Center of a Simple Polygon [DOI] [+]
Luis Barba, Prosenjit Bose, Matias Korman, Jean-Lou De Carufel, Hee-Kap Ahn and Eunjin Oh.
Let P be a closed simple polygon with n vertices. For any two points in P, the geodesic distance between them is the length of the shortest path that connects them among all paths contained in P. The geodesic center of P is the unique point in P that minimizes the largest geodesic distance to all other points of P. In 1989, Pollack, Sharir and Rote [Disc. & Comput. Geom. 89] gave an O(n log n)-time algorithm that computes the geodesic center of P. Since then, a longstanding question has been whether this running time can be improved (explicitly posed by Mitchell [Handbook of Computational Geometry, 2000]). In this paper we affirmatively answer this question and present a linear-time algorithm to solve this problem.

On the Smoothed Complexity of Convex Hulls [DOI] [+]
Olivier Devillers, Marc Glisse, Xavier Goaoc and Rémy Thomasse.
We establish an upper bound on the smoothed complexity of convex hulls in R^d under uniform Euclidean (L2) noise. Specifically, let {p_1*, p_2*, ..., p_n*} be an arbitrary set of n points in the unit ball in R^d and let p_i = p_i* + x_i, where x_1, x_2, ..., x_n are chosen independently from the unit ball of radius r. We show that the expected complexity, measured as the number of faces of all dimensions, of the convex hull of {p_1, p_2, ..., p_n} is O(n^{2-4/(d+1)} (1+1/r)^{d-1}); the magnitude r of the noise may vary with n. For d=2 this bound improves to O(n^{2/3} (1+r^{-2/3})). We also analyze the expected complexity of the convex hull of L2 and Gaussian perturbations of a nice sample of a sphere, giving a lower bound for the smoothed complexity. We identify the different regimes in terms of the scale, as a function of n, and show that as the magnitude of the noise increases, that complexity varies monotonically for Gaussian noise but non-monotonically for L2 noise.
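
The bound can be explored empirically. The sketch below (Python with numpy/scipy; parameters chosen arbitrarily) perturbs points in the unit disk by uniform noise of radius r and reports the average number of hull vertices, which is proportional to the total face count when d=2.

    import numpy as np
    from scipy.spatial import ConvexHull

    def random_in_disk(n, radius, rng):
        """n points drawn uniformly from a disk of the given radius."""
        theta = rng.uniform(0.0, 2.0 * np.pi, n)
        rad = radius * np.sqrt(rng.uniform(0.0, 1.0, n))
        return np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))

    def smoothed_hull_size(base_points, noise_radius, trials=50, seed=0):
        """Average number of hull vertices after i.i.d. uniform-disk perturbation."""
        rng = np.random.default_rng(seed)
        n = len(base_points)
        sizes = []
        for _ in range(trials):
            perturbed = base_points + random_in_disk(n, noise_radius, rng)
            sizes.append(len(ConvexHull(perturbed).vertices))
        return float(np.mean(sizes))

    # A worst-case-looking input: n points on the unit circle (all on the hull).
    n = 2000
    angles = 2.0 * np.pi * np.arange(n) / n
    circle = np.column_stack((np.cos(angles), np.sin(angles)))
    for r in (0.01, 0.1, 1.0):
        print(r, smoothed_hull_size(circle, r))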

Finding All Maximal Subsequences with Hereditary Properties [DOI] [+]
Drago Bokal, Sergio Cabello and David Eppstein.
Consider a sequence s_1, ..., s_n of points in the plane. We want to find all maximal subsequences with a given hereditary property P: find for all indices i the largest index j*(i) such that s_i, ..., s_{j*(i)} has property P. We provide a general methodology that leads to the following specific results:
- In O(n log^2 n) time we can find all maximal subsequences with diameter at most 1.
- In O(n log n loglog n) time we can find all maximal subsequences whose convex hull has area at most 1.
- In O(n) time we can find all maximal subsequences that define monotone paths in some (subpath-dependent) direction.
The same methodology works for graph planarity, as follows. Consider a sequence of edges e_1, ..., e_n over a vertex set V. In O(n log n) time we can find, for all indices i, the largest index j*(i) such that (V, {e_i, ..., e_{j*(i)}}) is planar.
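
Because the property is hereditary, j*(i) is non-decreasing in i, so a simple two-pointer scan already computes all maximal subsequences. The sketch below (Python) does this for "diameter at most 1" with a naive property check, giving a quadratic baseline rather than the paper's O(n log^2 n) bound.

    from math import dist  # Python 3.8+

    def maximal_diameter_windows(points, threshold=1.0):
        """For each index i, the largest j such that points[i..j] has diameter <= threshold.

        Uses the monotonicity of j*(i): once a window is valid, shrinking it from
        the left keeps it valid, so the right pointer never moves backwards.
        The validity test is naive (check the new point against the whole window).
        """
        n = len(points)
        jstar = [0] * n
        j = 0
        for i in range(n):
            if j < i:
                j = i
            while j + 1 < n and all(dist(points[j + 1], points[k]) <= threshold
                                    for k in range(i, j + 1)):
                j += 1
            jstar[i] = j
        return jstar

    print(maximal_diameter_windows([(0.0, 0.0), (0.5, 0.0), (0.9, 0.5), (2.0, 0.0)]))  # -> [1, 2, 2, 3]
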
SOCG - Session B
(AUD 5)
Session Chair: Donald Sheehy
Riemannian Simplices and Triangulations [DOI] [+]
Ramsay Dyer, Gert Vegter and Mathijs Wintraecken.
We study a natural intrinsic definition of geometric simplices in Riemannian manifolds of arbitrary finite dimension, and exploit these simplices to obtain criteria for triangulating compact Riemannian manifolds. These geometric simplices are defined using Karcher means. Given a finite set of vertices in a convex set on the manifold, the point that minimises the weighted sum of squared distances to the vertices is the Karcher mean relative to the weights. Using barycentric coordinates as the weights, we obtain a smooth map from the standard Euclidean simplex to the manifold. A Riemannian simplex is defined as the image of the standard simplex under this barycentric coordinate map. In this work we articulate criteria that guarantee that the barycentric coordinate map is a smooth embedding. If it is not, we say the Riemannian simplex is degenerate. Quality measures for the "thickness" or "fatness" of Euclidean simplices can be adapted to apply to these Riemannian simplices. For manifolds of dimension 2, the simplex is non-degenerate if it has a positive quality measure, as in the Euclidean case. However, when the dimension is greater than two, non-degeneracy can be guaranteed only when the quality exceeds a positive bound that depends on the size of the simplex and local bounds on the absolute values of the sectional curvatures of the manifold. An analysis of the geometry of non-degenerate Riemannian simplices leads to conditions which guarantee that a simplicial complex is homeomorphic to the manifold.
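
A quick way to get a feel for the barycentric coordinate map is to compute Karcher means on the unit sphere. The sketch below (Python/numpy, a generic fixed-point iteration that is not taken from the paper) computes the weighted point minimising the sum of squared geodesic distances to vertices inside a small geodesic ball, which is exactly the image of the given barycentric coordinates under the map described above.

    import numpy as np

    def log_map(x, y):
        """Tangent vector at unit vector x pointing to y, with norm equal to the geodesic distance."""
        c = np.clip(np.dot(x, y), -1.0, 1.0)
        theta = np.arccos(c)
        if theta < 1e-12:
            return np.zeros_like(x)
        v = y - c * x
        return theta * v / np.linalg.norm(v)

    def exp_map(x, v):
        """Follow the geodesic from x in direction v for length |v|."""
        t = np.linalg.norm(v)
        if t < 1e-12:
            return x
        return np.cos(t) * x + np.sin(t) * v / t

    def karcher_mean(vertices, weights, iters=100):
        """Weighted Karcher mean on the unit sphere via fixed-point (gradient) iteration."""
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
        x = sum(w * v for w, v in zip(weights, vertices))
        x = x / np.linalg.norm(x)          # start from the projected Euclidean mean
        for _ in range(iters):
            grad = sum(w * log_map(x, v) for w, v in zip(weights, vertices))
            if np.linalg.norm(grad) < 1e-12:
                break
            x = exp_map(x, grad)
        return x

    # A small geodesic triangle near the north pole; weights (1/3, 1/3, 1/3) give its Riemannian barycentre.
    tri = [np.array(p) / np.linalg.norm(p)
           for p in ((0.1, 0.0, 1.0), (0.0, 0.1, 1.0), (-0.1, -0.1, 1.0))]
    print(karcher_mean(tri, [1/3, 1/3, 1/3]))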

An Edge-Based Framework for Enumerating 3-Manifold Triangulations [DOI] [+]
Benjamin A. Burton and William Pettersson.
A typical census of 3-manifolds contains all manifolds (under various constraints) that can be triangulated with at most n tetrahedra. Although censuses are useful resources for mathematicians, constructing them is difficult: the best algorithms to date have not gone beyond n=12. The underlying algorithms essentially (i) enumerate all relevant 4-regular multigraphs on n nodes, and then (ii) for each multigraph G they enumerate possible 3-manifold triangulations with G as their dual 1-skeleton, of which there could be exponentially many. In practice, a small number of multigraphs often dominate the running times of census algorithms: for example, in a typical census on 10 tetrahedra, almost half of the running time is spent on just 0.3% of the graphs. Here we present a new algorithm for stage (ii), which is the computational bottleneck in this process. The key idea is to build triangulations by recursively constructing neighbourhoods of edges, in contrast to traditional algorithms which recursively glue together pairs of tetrahedron faces. We implement this algorithm, and find experimentally that whilst the overall performance is mixed, the new algorithm runs significantly faster on those "pathological" multigraphs for which existing methods are extremely slow. In this way the old and new algorithms complement one another, and together can yield significant performance improvements over either method alone.

Order on Order Types [DOI] [+]
Alexander Pilz and Emo Welzl.
Given P and P', equally sized planar point sets in general position, we call a bijection from P to P' crossing-preserving if crossings of connecting segments in P are preserved in P' (extra crossings may occur in P'). If such a mapping exists, we say that P' crossing-dominates P, and if such a mapping exists in both directions, P and P' are called crossing-equivalent. The relation is transitive, and we have a partial order on the obtained equivalence classes (called crossing types or x-types). Point sets of equal order type are clearly crossing-equivalent, but not vice versa. Thus, x-types are a coarser classification than order types. (We will see, though, that a collapse of different order types to one x-type occurs for sets with triangular convex hull only.) We argue that either the maximal or the minimal x-types are sufficient for answering many combinatorial (existential or extremal) questions on planar point sets. Motivated by this we consider basic properties of the relation. We characterize order types crossing-dominated by points in convex position. Further, we give a full characterization of minimal and maximal abstract order types. Based on that, we provide a polynomial-time algorithm to check whether a point set crossing-dominates another. Moreover, we generate all maximal and minimal x-types for small numbers of points.

Limits of Order Types [DOI] [+]
Xavier Goaoc, Alfredo Hubard, Remi de Joannis de Verclos, Jean-Sebastien Sereni and Jan Volec.
The notion of limits of dense graphs was invented, among other reasons, to attack problems in extremal graph theory. It is straightforward to define limits of order types in analogy with limits of graphs, and this paper examines how to adapt to this setting two approaches developed to study limits of dense graphs. We first consider flag algebras, which were used to open various questions on graphs to mechanical solving via semidefinite programming. We define flag algebras of order types, and use them to obtain, via the semidefinite method, new lower bounds on the density of 5- or 6-tuples in convex position in arbitrary point sets, as well as some inequalities expressing the difficulty of sampling order types uniformly. We next consider graphons, a representation of limits of dense graphs that enables their study by continuous probabilistic or analytic methods. We investigate how planar measures fare as a candidate analogue of graphons for limits of order types. We show that the map sending a measure to its associated limit is continuous and, if restricted to uniform measures on compact convex sets, a homeomorphism. We prove, however, that this map is not surjective. Finally, we examine a limit of order types similar to classical constructions in combinatorial geometry (Erdos-Szekeres, Horton...) and show that it cannot be represented by any somewhere regular measure; we analyze this example via an analogue of Sylvester's problem on the probability that k random points are in convex position.
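
The densities entering the flag-algebra computation are easy to estimate by sampling. The sketch below (Python with numpy/scipy; a plain Monte Carlo estimate, unrelated to the semidefinite method) estimates the probability that 5 random points of a given point set are in convex position.

    import numpy as np
    from scipy.spatial import ConvexHull

    def convex_position_density(points, k=5, samples=20000, seed=0):
        """Monte Carlo estimate of the probability that k points drawn uniformly
        at random (without replacement) from `points` are in convex position."""
        rng = np.random.default_rng(seed)
        points = np.asarray(points, dtype=float)
        hits = 0
        for _ in range(samples):
            idx = rng.choice(len(points), size=k, replace=False)
            # k points in general position are in convex position iff all of
            # them are vertices of their convex hull.
            if len(ConvexHull(points[idx]).vertices) == k:
                hits += 1
        return hits / samples

    rng = np.random.default_rng(1)
    square_sample = rng.uniform(0.0, 1.0, size=(200, 2))   # uniform points in a square
    print(convex_position_density(square_sample, k=5))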
 
10:30   Coffee Break
11:00
SOCG - Session A
(AUD 4)
Session Chair: János Pach
Combinatorial Redundancy Detection [DOI] [+]
Komei Fukuda, Bernd Gaertner and May Szedlak.
The problem of detecting and removing redundant constraints is fundamental in optimization. We focus on the case of linear programs (LPs) in dictionary form, given by n equality constraints in n+d variables, where the variables are constrained to be nonnegative. A variable x_r is called redundant if, after removing its nonnegativity constraint, the LP still has the same feasible region. The time needed to solve such an LP is denoted by LP(n,d). It is easy to see that solving n+d LPs of the above size is sufficient to detect all redundancies. The currently fastest practical method is the one by Clarkson: it solves n+d linear programs, but each of them has at most s variables, where s is the number of nonredundant constraints. In the first part we show that knowing all of the finitely many dictionaries of the LP is sufficient for the purpose of redundancy detection. A dictionary is a matrix that can be thought of as an enriched encoding of a vertex in the LP. Moreover - and this is the combinatorial aspect - it is enough to know only the signs of the entries; the actual values do not matter. Concretely, we show that for any variable x_r one can find a dictionary such that its sign pattern is either a redundancy or nonredundancy certificate for x_r. In the second part we show that, considering only the sign patterns of the dictionary, there is an output-sensitive algorithm with running time of order d(n+d) s^{d-1} LP(s,d) + d s^d LP(n,d) to detect all redundancies. In the case where all constraints are in general position, the running time is of order s LP(n,d) + (n+d) LP(s,d), which is essentially the running time of the Clarkson method. Our algorithm extends naturally to the more general setting of arrangements of oriented topological hyperplanes.
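
For context, the textbook (non-combinatorial) way to test redundancy is one LP per constraint: constraint i of a system Ax <= b is redundant iff maximising a_i.x over the remaining constraints cannot exceed b_i. A sketch of that baseline in Python with scipy, stated for inequality form rather than the dictionary form used in the paper:

    import numpy as np
    from scipy.optimize import linprog

    def redundant_constraints(A, b, tol=1e-9):
        """Indices i such that a_i . x <= b_i is implied by the other constraints of A x <= b."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        m = len(b)
        redundant = []
        for i in range(m):
            keep = [j for j in range(m) if j != i]
            # Maximise a_i . x subject to the remaining constraints
            # (linprog minimises, hence the sign flip; variables are free).
            res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep], bounds=(None, None))
            if res.status == 0 and -res.fun <= b[i] + tol:
                redundant.append(i)
        return redundant

    # x <= 1, y <= 1, x + y <= 3: the last constraint is redundant.
    A = [[1, 0], [0, 1], [1, 1]]
    b = [1, 1, 3]
    print(redundant_constraints(A, b))   # -> [2]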

Effectiveness of Local Search for Geometric Optimization [DOI] [+]
Vincent Cohen-Addad and Claire Mathieu.
What is the effectiveness of local search algorithms for geometric problems in the plane? We prove that local search with neighborhoods of magnitude 1/epsilon^c is an approximation scheme for the following problems in the Euclidean plane: TSP with random inputs, Steiner tree with random inputs, uniform facility location (with worst-case inputs), and bicriteria k-median (also with worst-case inputs). The randomness assumption is necessary for TSP.
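
As a reminder of what such a local search looks like in its simplest form, the sketch below (Python) runs the classical single-swap heuristic for k-median on points in the plane; the paper's local search uses neighborhoods of size 1/epsilon^c (exchanging many facilities at once) together with the random-input and bicriteria analyses, none of which this toy version attempts.

    from math import dist

    def kmedian_cost(points, centers):
        """Sum of distances from each point to its nearest chosen centre."""
        return sum(min(dist(p, c) for c in centers) for p in points)

    def local_search_kmedian(points, k):
        """Single-swap local search: swap one open centre for one closed candidate
        as long as the k-median cost strictly decreases."""
        centers = list(points[:k])            # arbitrary initial solution
        cost = kmedian_cost(points, centers)
        improved = True
        while improved:
            improved = False
            for i in range(len(centers)):
                for candidate in points:
                    if candidate in centers:
                        continue
                    trial = centers[:i] + [candidate] + centers[i + 1:]
                    trial_cost = kmedian_cost(points, trial)
                    if trial_cost < cost - 1e-12:
                        centers, cost, improved = trial, trial_cost, True
                        break
                if improved:
                    break
        return centers, cost

    pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    print(local_search_kmedian(pts, 2))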

On the Shadow Simplex Method for Curved Polyhedra [DOI] [+]
Daniel Dadush and Nicolai Hähnle.
We study the simplex method over polyhedra satisfying certain "discrete curvature" lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al. (SOCG 2012), Brunsch and Röglin (ICALP 2013), and Eisenbrand and Vempala (2014) have improved our understanding of such polyhedra. We develop a new type of dual analysis of the shadow simplex method which provides a clean and powerful tool for improving all previously mentioned results. Our methods are inspired by the recent work of Bonifas and the first named author, who analyzed a remarkably similar process as part of an algorithm for the Closest Vector Problem with Preprocessing. For our first result, we obtain a constructive diameter bound of O((n^2 / delta) ln (n / delta)) for n-dimensional polyhedra with curvature parameter delta in (0, 1]. For the class of polyhedra arising from totally unimodular constraint matrices, this implies a bound of O(n^3 ln n). For linear optimization, given an initial feasible vertex, we show that an optimal vertex can be found using an expected O((n^3 / delta) ln (n / delta)) simplex pivots, each requiring O(mn) time to compute. An initial feasible solution can be found using O((mn^3 / delta) ln (n / delta)) pivot steps.
SOCG - Session B
(AUD 5)
Session Chair: Lars Arge
Pattern Overlap Implies Runaway Growth in Hierarchical Tile Systems [DOI] [+]
Ho-Lin Chen, David Doty, Jan Manuch, Arash Rafiey and Ladislav Stacho.
We show that in the hierarchical tile assembly model, if there is a producible assembly that overlaps a nontrivial translation of itself consistently (i.e., the pattern of tile types in the overlap region is identical in both translations), then arbitrarily large assemblies are producible. The significance of this result is that tile systems intended to controllably produce finite structures must avoid pattern repetition in their producible assemblies that would lead to such overlap. This answers an open question of Chen and Doty (SODA 2012), who showed that so-called "partial-order" systems producing a unique finite assembly and avoiding such overlaps must require time linear in the assembly diameter. An application of our main result is that any system producing a unique finite assembly is automatically guaranteed to avoid such overlaps, simplifying the hypothesis of Chen and Doty's main theorem.

Space Exploration via Proximity Search [DOI] [+]
Sariel Har-Peled, Nirman Kumar, David Mount and Benjamin Raichel.
We investigate what computational tasks can be performed on a point set in R^d, if we are only given black-box access to it via nearest-neighbor search. This is a reasonable assumption if the underlying point set is either provided implicitly, or it is stored in a data structure that can answer such queries. In particular, we show the following: (A) One can compute an approximate bi-criteria k-center clustering of the point set, and more generally compute a greedy permutation of the point set. (B) One can decide if a query point is (approximately) inside the convex hull of the point set. We also investigate the problem of clustering the given point set, such that meaningful proximity queries can be carried out on the centers of the clusters, instead of the whole point set.
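
The greedy permutation mentioned in (A) is the familiar farthest-first traversal. The sketch below (Python) computes it with full access to the point set; the point of the paper is that an approximate version can be obtained when the set is only reachable through nearest-neighbor queries, which this sketch does not attempt.

    from math import dist, inf

    def greedy_permutation(points):
        """Farthest-first traversal: each point is the farthest one from the prefix chosen so far.

        Returns (order, radii), where radii[i] is the distance of the i-th chosen
        point to the previously chosen prefix; prefix k is a 2-approximate k-center.
        """
        n = len(points)
        nearest = [inf] * n            # distance of each point to the chosen prefix
        order, radii = [0], [inf]      # start from an arbitrary point
        for _ in range(n - 1):
            last = points[order[-1]]
            for j in range(n):
                nearest[j] = min(nearest[j], dist(points[j], last))
            far = max(range(n), key=lambda j: nearest[j])
            order.append(far)
            radii.append(nearest[far])
        return order, radii

    pts = [(0, 0), (1, 0), (0, 1), (10, 10), (10, 11), (5, 5)]
    print(greedy_permutation(pts))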

Star Unfolding from a Geodesic Curve [DOI] [+]
Stephen Kiazyk and Anna Lubiw.
There are two known ways to unfold a convex polyhedron without overlap: the star unfolding and the source unfolding, both of which use shortest paths from vertices to a source point on the surface of the polyhedron. Non-overlap of the source unfolding is straightforward; non-overlap of the star unfolding was proved by Aronov and O'Rourke in 1992. Our first contribution is a much simpler proof of non-overlap of the star unfolding. Both the source and star unfolding can be generalized to use a simple geodesic curve instead of a source point. The star unfolding from a geodesic curve cuts the geodesic curve and a shortest path from each vertex to the geodesic curve. Demaine and Lubiw conjectured that the star unfolding from a geodesic curve does not overlap. We prove a special case of the conjecture. Our special case includes the previously known case of unfolding from a geodesic loop. For the general case we prove that the star unfolding from a geodesic curve can be separated into at most two non-overlapping pieces.
 
12:00
Invited Speaker
(Blauwe Zaal)
Session Chair: János Pach
The Dirac-Motzkin Problem on Ordinary Lines and the Orchard Problem [DOI] [+]
Ben Green.
Suppose you have n points in the plane, not all on a line. A famous theorem of Sylvester-Gallai asserts that there is at least one ordinary line, that is to say a line passing through precisely two of the n points. But how many ordinary lines must there be? It turns out that the answer is at least n/2 (if n is even) and roughly 3n/4 (if n is odd), provided that n is sufficiently large. This resolves a conjecture of Dirac and Motzkin from the 1950s. We will also discuss the classical orchard problem, which asks how to arrange n trees so that there are as many triples of collinear trees as possible, but no four in a line. This is joint work with Terence Tao and reports on the results of [Green and Tao, 2013].
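
For small point sets the count of ordinary lines is easy to obtain by brute force, which makes the n/2 and 3n/4 thresholds pleasant to verify experimentally. A sketch (Python, using integer coordinates so that the collinearity test is exact):

    from itertools import combinations

    def count_ordinary_lines(points):
        """Number of lines passing through exactly two of the given points.

        Each ordinary line is identified by its unique pair of points, so counting
        pairs with no third collinear point counts ordinary lines exactly once.
        Assumes integer coordinates so the collinearity test is exact.
        """
        def collinear(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

        count = 0
        for a, b in combinations(points, 2):
            if not any(collinear(a, b, c) for c in points if c != a and c != b):
                count += 1
        return count

    # The 3x3 grid has 8 lines containing three points; every other pair spans an ordinary line.
    grid = [(x, y) for x in range(3) for y in range(3)]
    print(count_ordinary_lines(grid))   # -> 12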
 
13:00   Catered Lunch
14:00
YRF
Session A (Blauwe Zaal)

Drawing Planar Cubic 3-Connected Graphs with Minimal Visual Complexity
Alexander Igamberdiev, Wouter Meulemans and André Schulz.

1-String B1-VPG Representations of Planar Partial 3-Trees
Therese Biedl and Martin Derka.

An Optimal Algorithm for Tiling the Plane with a Translated Polyomino
Andrew Winslow.

Recognizing Weighted and Seeded Disk Graphs
Boris Klemz, Martin Nöllenburg and Roman Prutkin.

Automatic Proofs for Formulae Enumerating Proper Polycubes
Mira Shalah and Gill Barequet.

Realization of simply connected polygonal linkages
Clinton Bowen, Stephane Durocher, Maarten Löffler, Anika Rounds, André Schulz and Csaba Tóth.

Session B (AUD 2)

Statistical Approaches to Topological Data Analysis
Brittany Terese Fasy.

Tutorial on the R Package TDA
Jisu Kim.

Session C (AUD 7)
Wireless networks: Do not disturb my circles  [+]
Roger Wattenhofer.

Is geometry the key to a deeper understanding of wireless networks? In my talk I will present a few exciting research questions that I have encountered while studying wireless networks: Can we derive the geometry of a network merely by logging communication? Can we improve the capacity of a network by solving geometric optimization problems? Can we model a wireless network by geometry -- or should we disturb the circles?


On the minimum-cost range assignment problem  [+]
Lilach Chaitman.

We study the problem of assigning transmission ranges to radio stations in the plane such that any pair of stations can communicate within h hops and the cost of the network is minimized. The cost of transmitting in a range r is proportional to r^alpha, where alpha ≥ 1. We consider two settings of this problem: collinear station locations and arbitrary locations.

For the case of collinear stations, we give the first polynomial-time exact algorithm for any alpha ≥ 1 and constant h, and thus conclude that the 1D version of the problem for constant h is in P. For an arbitrary h, not necessarily a constant, and alpha=1, we propose a 1.5-approximation algorithm. This improves the approximation ratio of the best known algorithm, which admits a 2-approximation [CFPPS00]. For h=n-1, i.e., the unbounded case, we propose a new approach that achieves an exact O(n^2)-time algorithm. This improves the running time of the best known algorithm [DGN07] by a factor of n. Moreover, we show that this new technique can be utilized to achieve a polynomial-time algorithm for finding the minimum-cost range assignment of collinear points whose induced communication graph is a t-spanner, for any t ≥ 1.

For the case of stations placed arbitrarily in the plane and alpha=1, we present a (3+epsilon)-approximation algorithm, for any epsilon > 0 and a constant h. This improves the best known approximation ratio of 4(9^{h-2})/(2^{1/h} - 1), due to [KP09]. Moreover, we show a (1.5+epsilon)-approximation algorithm for the case where a deviation of one hop (h+1 hops in total) is acceptable. For the unbounded case (h=n-1), the best known approximation ratio is 1.5 [ACPRS05]. We show a new approximation algorithm that breaks the 1.5 ratio.
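
For intuition (and for checking approximation ratios on tiny instances), the 1D problem can be solved by exhaustive search: in an optimal assignment every station's range equals its distance to some other station, so it suffices to try all such combinations. A sketch in Python, assuming the directed model in which station i reaches j whenever |x_i - x_j| <= r_i; this is exponential in n and is not one of the algorithms of the talk.

    from itertools import product
    from collections import deque

    def within_h_hops(ranges, xs, h):
        """Check that every ordered pair of stations can communicate within h hops."""
        n = len(xs)
        adj = [[j for j in range(n) if j != i and abs(xs[i] - xs[j]) <= ranges[i]]
               for i in range(n)]
        for s in range(n):
            hops = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if hops[u] == h:
                    continue
                for v in adj[u]:
                    if v not in hops:
                        hops[v] = hops[u] + 1
                        queue.append(v)
            if len(hops) < n:
                return False
        return True

    def min_cost_range_assignment(xs, h, alpha=1.0):
        """Exhaustive search over candidate ranges (distances to other stations)."""
        n = len(xs)
        candidates = [sorted({abs(xs[i] - xs[j]) for j in range(n) if j != i}) for i in range(n)]
        best = None
        for ranges in product(*candidates):
            if within_h_hops(ranges, xs, h):
                cost = sum(r ** alpha for r in ranges)
                if best is None or cost < best[0]:
                    best = (cost, ranges)
        return best

    print(min_cost_range_assignment([0.0, 1.0, 3.0, 6.0], h=2))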


 
15:30   Coffee Break
16:00
YRF
Session A (Blauwe Zaal)
17:30

Diamonds are a Quiver's Best Friend
Clément Maria.

A hom-tree lower bound for the Reeb graph interleaving distance
Vin de Silva, Elizabeth Munch and Anastasios Stefanou.

The Offset Filtration of Convex Objects
Dan Halperin, Michael Kerber and Doron Shaharabani.

Session B (AUD 2)
Describing High-Order Statistical Dependence Using "concurrence topology", with Application to Functional MRI Brain Data [+]
Steven P. Ellis.
In multivariate data analysis, dependence beyond pairwise can be important. With many variables, however, the number of simple summaries of even third-order dependence can be unmanageably large.

"Concurrence topology" is an apparently new method for describing high-order dependence among up to dozens of dichotomous (i.e., binary) variables (e.g., seventh-order dependence in 32 variables). This method generally produces summaries of dependence of manageable size. (But computing time can be lengthy.) For time series, this method can be applied in both the time and Fourier domains.

Write each observation as a vector of 0's and 1's. A "concurrence" is a group of variables all labeled "1" in the same observation. The collection of concurrences can be represented as a filtered simplicial complex. Holes in the filtration indicate relatively weak or negative association among the variables. The pattern of the holes in the filtration can be analyzed using persistent homology.

We applied concurrence topology on binarized, resting-state, functional MRI data acquired from patients diagnosed with attention-deficit hyperactivity disorder and from healthy controls. An exploratory analysis finds a number of differences between patients and controls in the topologies of their filtrations, demonstrating that concurrence topology can find high-order structure in data of real-world relevance.

Independence among pairwise disjoint groups of variables is reflected in nontrivial homology cross products of classes in the concurrence homology of the variable groups. In "canonical concurrence topology" one looks for nontrivial cross products and interprets that as an indication of "independence-like" behavior among the groups of variables.
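
As a concrete illustration of the construction described above, the sketch below (Python, a toy example rather than the speaker's software) extracts the concurrences of a 0/1 data matrix and reduces them to the maximal ones, which generate the simplicial complex; the frequency filtration and its persistent homology would then be computed with a TDA package.

    def concurrences(data):
        """Supports of the observations: each row's set of variables labeled 1."""
        return [frozenset(j for j, bit in enumerate(row) if bit) for row in data]

    def maximal_concurrences(data):
        """Concurrences not contained in any other concurrence; these generate the complex."""
        supports = set(concurrences(data))
        return [s for s in supports if not any(s < t for t in supports)]

    # Rows are observations, columns are dichotomous variables.
    data = [
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 1],
        [1, 0, 0, 1],
    ]
    for s in maximal_concurrences(data):
        print(sorted(s))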


Multi-Scale Local Shape Analysis and Feature Selection in Machine Learning Applications [+]
Ellen Gasparovic.

We introduce a method called multi-scale local shape analysis for extracting features that describe the local structure of points within a dataset. The method uses both geometric and topological features at multiple levels of granularity to capture diverse types of local information for subsequent machine learning algorithms operating on the dataset. Using synthetic and real dataset examples, we demonstrate significant performance improvement of classification algorithms constructed for these datasets with correspondingly augmented features.
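
One concrete (and hypothetical, i.e., not necessarily the authors' exact feature set) instance of such multi-scale local descriptors: for every point and every radius in a list, take the covariance of the neighbors within that radius and record its eigenvalues, which capture how line-like, plane-like, or scattered the data looks at that scale.

    import numpy as np

    def multiscale_local_features(points, radii):
        """For each point, concatenate sorted covariance eigenvalues of its
        neighborhoods at each radius -- a simple geometric feature vector."""
        points = np.asarray(points, dtype=float)
        n, d = points.shape
        features = np.zeros((n, len(radii) * d))
        for i, p in enumerate(points):
            dists = np.linalg.norm(points - p, axis=1)
            for s, r in enumerate(radii):
                nbrs = points[dists <= r]
                if len(nbrs) < 2:
                    eig = np.zeros(d)        # not enough neighbors at this scale
                else:
                    cov = np.cov(nbrs, rowvar=False)
                    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
                features[i, s * d:(s + 1) * d] = eig
        return features

    rng = np.random.default_rng(0)
    # A noisy circle: small scales look 1-dimensional, large scales look 2-dimensional.
    t = rng.uniform(0, 2 * np.pi, 300)
    pts = np.column_stack((np.cos(t), np.sin(t))) + 0.02 * rng.normal(size=(300, 2))
    feats = multiscale_local_features(pts, radii=[0.1, 0.5, 2.0])
    print(feats.shape, feats[0])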

This is joint work with Paul Bendich, John Harer, Linda Ness, and Rauf Izmailov.


Using Cycles in High Dimensional Data to Analyze Protein Binding [+]
Giseon Heo.
Persistent homology captures the evolution of topological features of a model as a parameter changes. The two standard summary statistics of persistent homology are the barcode and the persistence diagram. A third summary statistic, the persistence landscape, was recently introduced by Bubenik. It is a functional summary, so it is easy to calculate sample means and variances, and it is straightforward to construct various test statistics. Implementing a permutation test, we detect conformational changes between closed and open forms of the maltose-binding protein, a large biomolecule consisting of 370 amino acid residues. Furthermore, persistence landscapes can be applied to machine learning methods. A hyperplane from a support vector machine shows the clear separation between the closed and open protein conformations. Moreover, because our approach captures dynamical properties of the protein, our results may help in identifying residues susceptible to ligand binding; we show that the majority of active site residues and allosteric pathway residues are located in the vicinity of the most persistent loop in the corresponding filtered Vietoris-Rips complex. This finding was not observed in the classical anisotropic network model.
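
For readers unfamiliar with the landscape: it turns a persistence diagram into a sequence of piecewise-linear functions, the k-th being the k-th largest "tent" value over the diagram points. A minimal sketch in Python/numpy (evaluation on a grid, not the speaker's implementation):

    import numpy as np

    def persistence_landscape(diagram, grid, k_max=3):
        """Evaluate the first k_max landscape functions of a persistence diagram on a grid.

        diagram: iterable of (birth, death) pairs.
        Returns an array of shape (k_max, len(grid)); lambda_k(t) is the k-th
        largest value of max(0, min(t - birth, death - t)) over the diagram.
        """
        grid = np.asarray(grid, dtype=float)
        tents = []
        for birth, death in diagram:
            tents.append(np.maximum(0.0, np.minimum(grid - birth, death - grid)))
        tents = np.array(tents) if tents else np.zeros((1, len(grid)))
        tents = np.sort(tents, axis=0)[::-1]           # largest first, per grid value
        landscapes = np.zeros((k_max, len(grid)))
        k = min(k_max, len(tents))
        landscapes[:k] = tents[:k]
        return landscapes

    diagram = [(0.0, 4.0), (1.0, 3.0), (2.0, 6.0)]
    grid = np.linspace(0.0, 6.0, 13)
    print(persistence_landscape(diagram, grid)[0])      # the first landscape, lambda_1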

Persistence Images: An Alternative Persistent Homology Representation [+]
Lori Ziegelmeier.
Topological data analysis, namely persistent homology, allows the extraction of coarse topological features from data. While the roots of this tool are deep in algebraic topology, the methods therein are finding broad application in the scientific community at large. There are two standard representations of persistent homology: barcodes and persistence diagrams, which summarize topological features of data. The space of persistence diagrams can be given a metric structure, enabling certain machine learning techniques to benefit from the features captured in persistence diagrams. We present an alternative representation that ‘vectorizes’ the information of persistent homology, which we refer to as a persistence image. Persistence images allow the use of a wider set of distance measures and enable a broader list of applicable tools from machine learning. We present analysis of a data set consisting of common topological spaces as well as data arising from the linked-twist map, a dynamical system which simulates turbulent mixing.
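
A minimal numpy sketch of the idea (one common choice of weighting and kernel; the talk's definition may differ in details such as the weight function): map each diagram point (birth, death) to (birth, persistence), spread a Gaussian around it weighted by its persistence, and evaluate the sum on a fixed regular grid.

    import numpy as np

    def persistence_image(diagram, resolution=20, sigma=0.1, extent=None):
        """Rasterise a persistence diagram into a resolution x resolution array.

        Each point (b, d) becomes a Gaussian bump at (b, d - b), weighted linearly
        by its persistence d - b, evaluated on a regular grid over the
        birth-persistence plane.
        """
        diagram = np.asarray(diagram, dtype=float)
        births = diagram[:, 0]
        pers = diagram[:, 1] - diagram[:, 0]
        if extent is None:
            extent = (births.min(), births.max(), 0.0, pers.max())
        bx = np.linspace(extent[0], extent[1], resolution)
        py = np.linspace(extent[2], extent[3], resolution)
        BX, PY = np.meshgrid(bx, py)
        image = np.zeros_like(BX)
        for b, p in zip(births, pers):
            bump = np.exp(-((BX - b) ** 2 + (PY - p) ** 2) / (2.0 * sigma ** 2))
            image += p * bump            # linear weighting by persistence
        return image

    diagram = [(0.0, 1.0), (0.2, 0.9), (0.5, 2.0)]
    img = persistence_image(diagram, resolution=10, sigma=0.2)
    print(img.shape, img.max())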

Ends at 18:00

Session C (AUD 7)
Euclidean traveling salesman tours through stochastic neighborhoods  [+]
Subhash Suri.

We consider the problem of planning a shortest tour through a collection of neighborhoods in the plane, where each neighborhood is a disk whose radius is an i.i.d. random variable drawn from a known probability distribution. This is a stochastic version of the classic traveling salesman problem with neighborhoods (TSPN). Planning such tours under uncertainty, a fundamental problem in its own right, is motivated by a number of applications, including the following data gathering problem in sensor networks: a robotic data mule needs to collect data from n geographically distributed wireless sensor nodes whose communication range r is a random variable influenced by environmental factors. We propose a polynomial-time algorithm that achieves a factor O(loglog n) approximation of the expected length of an optimal tour. In data mule applications, the problem has an additional complexity: the radii of the disks are only revealed when the robot reaches the disk boundary (transmission success). For this online version of the stochastic TSPN, we achieve an approximation ratio of O(log n). In the special case where the disks with their mean radii are disjoint, we achieve an O(1) approximation even for the online case.


Competitive local routing with constraints  [+]
Andre van Renssen.

Let P be a set of n points in the plane and S a set of non-crossing line segments between vertices in P, called constraints. These constraints can be viewed as obstacles that block communication in the network. Two vertices are visible if the straight line segment connecting them does not properly intersect any constraints. The constrained theta_m-graph is constructed by partitioning the plane around each vertex into m disjoint cones, each with aperture theta = 2pi/m, and adding an edge to the 'closest' visible vertex in each cone. We consider how to route on the constrained theta_6-graph. We first show that no deterministic 1-local routing algorithm is o(sqrt{n})-competitive on all pairs of vertices of the constrained theta_6-graph. After that, we show how to route between any two visible vertices of the constrained theta_6-graph using only 1-local information. Our routing algorithm guarantees that the returned path has length at most 2 times the Euclidean distance between the source and destination. To the best of our knowledge, this is the first local routing algorithm in the constrained setting with guarantees on the length of the returned path.
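
For readers less familiar with theta-graphs, the sketch below (Python; the unconstrained theta_m-graph only, ignoring constraints and visibility) shows the construction: around each vertex the plane is split into m cones of angle 2pi/m, and in each cone an edge goes to the vertex whose projection onto the cone's bisector is nearest, which is what 'closest' means here.

    from math import atan2, cos, pi, hypot

    def theta_graph(points, m=6):
        """Edges of the (unconstrained) theta_m-graph on the given points."""
        theta = 2.0 * pi / m
        edges = set()
        for i, (ux, uy) in enumerate(points):
            best = {}                       # cone index -> (projection length, vertex index)
            for j, (vx, vy) in enumerate(points):
                if i == j:
                    continue
                angle = atan2(vy - uy, vx - ux) % (2.0 * pi)
                cone = int(angle // theta) % m
                bisector = (cone + 0.5) * theta
                # Distance of v measured along the bisector of u's cone.
                proj = hypot(vx - ux, vy - uy) * cos(angle - bisector)
                if cone not in best or proj < best[cone][0]:
                    best[cone] = (proj, j)
            for _, j in best.values():
                edges.add((min(i, j), max(i, j)))
        return sorted(edges)

    pts = [(0.0, 0.0), (2.0, 0.1), (1.0, 2.0), (-1.5, 1.0), (0.5, -2.0)]
    print(theta_graph(pts, m=6))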


An overview on compact routing  [+]
Cyrille Gavoille.

In this talk I present an overview on the field of compact routing. I give the main variants of the problems (labeled, name-independent, metric space routing, ...), followed by a state-of-the-art of the field which is declined for several families of graphs (from general to planar graphs). As illustration, I give also a simple stretch-3 compact routing scheme of general metrics. A list of main open questions is given.


Open Problem Session

Ends at 18:00