The informal research seminar of the ALGO group. Talks last roughly 25 minutes, with five extra minutes allocated for discussion. Many talks cover recent conference papers or serve as practice talks for upcoming conferences. New members are often asked to give an overview of their field of research. Talks by invited speakers may take up to 45–60 minutes including questions.
To be kept up-to-date about noon seminar presentations, please subscribe to the algoseminar-l mailing list.
This is the schedule for 2017. Schedules are available for the years 2005 through 2024.
This project uses a map matching technique for cartographic schematization. In map matching, paths are mapped onto a graph, for example mapping GPS coordinates to a road network. To use this for cartographic schematization, in this project country outlines (paths) are mapped onto a grid (graph). However, the requirements for map matching and cartographic schematization differ. In map matching the input and output can be any path and can therefore contain intersections and shared edges and vertices. In this project the inputs (country outlines) are simple polygons, so the output is preferred to be simple as well. However, is the output of a map matching algorithm simple whenever the input is? For this project we focused on the algorithm by Alt et al., in which the Fréchet distance is used as a quality measure. We investigate how we can extend this algorithm, or manipulate its output, to force the output to be a simple polygon again, and how much this impacts the recognizability of the input. We present multiple post-processing methods to convert a non-simple output polygon into a simple one. The performance of these methods is compared based on the Fréchet distance, the symmetric difference, and the reuse of edges and vertices. The segment cut-out method shows the most promising results, since it was able to convert most output polygons into simple polygons with a limited impact on the Fréchet distance.
With the recent rise of sensor-sourced positioning data, the need to derive meaningful conclusions from trajectory data increases. A prominent trajectory comparison metric, the Fréchet distance, is often used but is inefficient to compute for large input sets. This thesis attempts to improve the efficiency of Fréchet-related computations for representative problems, with representative data. The investigated approaches can be integrated piecemeal into other computation systems, and the thesis examines when each approach is most effective and how. The algorithms proposed in this thesis have been shown to be successful in practice, reaching second place in the ACM SIGSPATIAL GIS Cup 2017.
An area cartogram is a map in which regions have been resized such that their areas correspond to a value, e.g. population, that should be mapped. A common way to generate cartograms is the diffusion-based method; however, the resulting areas can be difficult to compare because regions may have very different shapes. To make comparing the sizes of regions easier, Dorling introduced so-called Dorling cartograms, in which each region is represented by a disk. Unfortunately, it is not always possible to make every region a disk of the correct size while maintaining the topology of the map. In this thesis, we introduce near-Dorling cartograms: regions should be as round as possible, while maintaining all adjacencies between regions. We present an algorithm to create near-Dorling cartograms, which combines the diffusion method with a local area-preserving operation that reduces the boundary lengths of regions and in this way makes the regions more circular.
More than 25 years ago, Chazelle et al. (FOCS 1991) studied the following question: Is it possible to cut any set of $n$ lines in $R^3$ into a subquadratic number of fragments such that the resulting fragments admit a depth order? They proved an $O(n^{9/4})$ bound for the very special case of bipartite weavings of lines. Since then little progress was made, until a recent breakthrough by Aronov and Sharir (STOC 2016), who showed that $O(n^{3/2} \text{polylog}(n))$ fragments suffice for any set of lines. In a follow-up paper, Aronov, Miller and Sharir (SODA 2017) proved an $O(n^{3/2+\varepsilon})$ bound for triangles, but their method results in pieces with curved boundaries. Moreover, their method uses polynomial partitions, for which currently no algorithm is known. Thus the most natural version of the problem is still wide open: Can we cut any collection of $n$ disjoint triangles in $R^3$ into a subquadratic number of triangular fragments that admit a depth order? And if so, can we compute the cuts efficiently?
We answer this question by presenting an algorithm that cuts any set of $n$ disjoint triangles in $R^3$ into $O(n^{7/4} \text{polylog}(n))$ triangular fragments that admit a depth order. The running time of our algorithm is $O(n^{3.69})$. We also prove a refined bound that depends on the number, $K$, of intersections between the projections of the triangle edges onto the $xy$-plane: we show that $O(n^{1+\varepsilon}+n^{1/4}K^{3/4} \text{polylog}(n))$ fragments suffice to obtain a depth order. This result extends to $xy$-monotone surface patches bounded by a constant number of bounded-degree algebraic arcs, constituting the first subquadratic bound for surface patches. As a byproduct of our approach we obtain a faster algorithm than that of Aronov and Sharir to cut a set of lines into $O(n^{3/2} \text{polylog}(n))$ fragments that admit a depth order.
Repetition is a fundamental concept in music, and it extends across derivative works: previous research has shown that folk songs display high similarity in repeating patterns across the various versions of a song. For human experts these repeating patterns are distinguishable, but tedious to discover by hand. Therefore, pattern discovery algorithms have been developed to perform this task. Most of these algorithms, however, are not suited to the intricacies of dealing with several works at a time. They also lack the capability of handling all the differences between what humans would consider similar patterns.
We perform a qualitative analysis of the weaknesses in various pattern discovery algorithms with respect to groups of songs. We relate these weaknesses directly to what we find to be common properties in patterns discovered by a human analyst. These properties and weaknesses are then broken down into a set of requirements, which we try to meet with our own algorithmic approach.
Our solution comes in the form of a pipeline. This pipeline consists of four tasks, each with responsibility for a subset of the established requirements. The first task transforms the set of songs, such that they retain their information, but are similar in the time- and pitch-domains. The second task is a typical pattern discovery algorithm. The third task combines the discovered patterns into clusters. Finally, the fourth task selects a coherent subset of the clustered patterns. We propose a set of solutions for each of these tasks.
We demonstrate our algorithmic solution on the same families of folk songs from our initial qualitative analysis. We then evaluate it in terms of the requirements derived from that analysis. We conclude that our pipeline results in a manageable set of patterns, which display properties typically only handled well by human experts. These properties include rhythmic variation between pattern occurrences, embellishments, and positional context.
Next to our algorithmic solution, we developed a tool for visualizing patterns in songs. We used this tool for the qualitative analysis, both of previous pattern discovery algorithms and of our own. The tool has also been evaluated in an expert user study.
Given a set of base stations in the plane, we wish to assign each of them a frequency so as to keep the same coverage and avoid any interference, while minimizing the number of frequencies used. This problem can be modeled using so-called conflict-free colorings. In this talk, I will introduce some dynamics on the base stations; namely, I will talk about what happens when some of them break down or are newly constructed. I will present a specific solution that can be used in the Minecraft universe (i.e., when base stations are squares), and a much more general one that can be applied to any shapes but uses more colors.
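As a minimal illustration of the conflict-free condition used above (our own sketch, not code from the talk; axis-aligned squares stand in for the square base stations, and all names are ours): among the base stations covering any point, at least one frequency (color) must occur exactly once.

```python
# Sketch: verify the conflict-free condition at sample points.
# A square is (cx, cy, half_side); colors[i] is the frequency of squares[i].

def covers(square, pt):
    """True if pt lies inside the axis-aligned square."""
    cx, cy, h = square
    return abs(pt[0] - cx) <= h and abs(pt[1] - cy) <= h

def is_conflict_free(squares, colors, query_points):
    """At every covered query point, some color must occur exactly once."""
    for pt in query_points:
        present = [c for s, c in zip(squares, colors) if covers(s, pt)]
        if present and not any(present.count(c) == 1 for c in set(present)):
            return False
    return True
```

For example, three overlapping unit squares colored 1, 2, 1 are conflict-free at a point covered by all three, because color 2 occurs exactly once there.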
We describe a general framework for subexponential algorithms that
applies to many classical NP-complete graph problems in geometric
intersection graphs. The framework is applicable in Euclidean space of
dimension at least $2$, where the intersection graph is defined by
similarly sized fat objects. We get the ``square root phenomenon''
running time $\exp(O(n^{1-1/d}))$, for example, for the following problems:
Hamiltonian Cycle, Independent Set, (Connected) Feedback Vertex Set,
(Connected) Dominating Set. Our framework makes no restrictions on the
density of the objects defining the intersection graph, and it can be
applied to the graphs themselves, without the geometric representation
being given. In the second part, I talk about a lower bound framework
that is based on a constructive embedding of graphs into d-dimensional
grids. The result together with classical gadgetry and under the
Exponential Time Hypothesis gives matching lower bounds for these
problems, even for much more specialized graph classes (for most
problems even in $d$-dimensional grid graphs). This talk is based on
ongoing work, which is joint with M. de Berg, H.L. Bodlaender, D. Marx
and T.C. van der Zanden.
In a geometric $k$-clustering problem the goal is to partition a set of points in $R^d$ into $k$ subsets such that a certain cost function of the clustering is minimized. We present data structures for orthogonal “range-clustering queries” on a point set $S$: given a query box $Q$ and an integer $k>1$, compute an optimal $k$-clustering for $S \cap Q$. We obtain the following results.
- We present a general method to compute a $(1+\varepsilon)$-approximation to a range-clustering query, where $\varepsilon>0$ is a parameter that can be specified as part of the query. Our method applies to a large class of clustering problems, including $k$-center clustering in any $L_p$-metric and a variant of $k$-center clustering where the goal is to minimize the sum (instead of maximum) of the cluster sizes.
- We extend our method to deal with capacitated $k$-clustering problems, where each of the clusters should not contain more than a given number of points.
- For the special cases of rectilinear $k$-center clustering in $R^1$, and in $R^2$ for $k=2,3$, we present data structures that answer range-clustering queries exactly.
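Outside the query setting, the one-dimensional special case mentioned in the last item has a simple exact solution. The sketch below is our own illustration, not the paper's data structure (all names are ours): rectilinear $k$-center on a line amounts to covering the points with $k$ intervals of length $2r$ for the smallest feasible radius $r$, and the optimum is always half of some pairwise gap.

```python
from itertools import combinations

def k_center_1d(xs, k):
    """Exact rectilinear k-center on a line: minimize the maximum
    cluster 'radius' (half the length of the covering interval)."""
    xs = sorted(xs)

    def feasible(r):
        # Greedily cover the points left to right with intervals of length 2r.
        count, i = 0, 0
        while i < len(xs):
            count += 1
            right = xs[i] + 2 * r
            while i < len(xs) and xs[i] <= right:
                i += 1
        return count <= k

    # The optimum is half of some pairwise gap; try candidates in order.
    cands = sorted({(b - a) / 2 for a, b in combinations(xs, 2)} | {0.0})
    for r in cands:
        if feasible(r):
            return r
```

On the point set {0, 1, 2, 10, 11, 12} with $k=2$, the optimal radius is 1: one interval covers [0, 2] and the other [10, 12].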
This talk will discuss our approach to solving the GIS Cup challenge of 2017. The challenge was as follows:
Consider a set $P$ of trajectories (polygonal lines in two dimensions), and a query given by a trajectory $Q$ and a threshold $\epsilon > 0$. To answer this query, we wish to find all trajectories $p$ in $P$ such that $d_F(p,Q) \leq \epsilon$, where $d_F$ denotes the Fréchet distance. We present an approach to efficiently answer a large number of queries for the same set $P$.
Key ingredients of our approach are (a) precomputing a spatial hash that allows us to quickly find trajectories that have endpoints near $Q$; (b) precomputing simplifications of all trajectories in $P$; and (c) using the simplifications and optimizations of the decision algorithm to efficiently decide $d_F(p,Q) \leq \epsilon$ for most $p$ in $P$. Our approach succeeded in reaching the top 3 of all entries.
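The decision problem at the heart of such queries can be illustrated with the *discrete* Fréchet distance, whose decision version admits a simple dynamic program (a hedged sketch of the standard textbook approach, not the talk's optimized continuous decider; the function name is ours).

```python
from math import dist  # Euclidean distance, Python 3.8+

def discrete_frechet_at_most(P, Q, eps):
    """Decide whether the discrete Frechet distance between point
    sequences P and Q is at most eps: walk the |P| x |Q| grid and
    mark index pairs reachable by monotone steps through pairs
    whose points are within eps of each other."""
    n, m = len(P), len(Q)
    ok = [[False] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if dist(P[i], Q[j]) > eps:
                continue  # this pair of points is too far apart
            if i == 0 and j == 0:
                ok[i][j] = True
            else:
                ok[i][j] = ((i > 0 and ok[i-1][j]) or
                            (j > 0 and ok[i][j-1]) or
                            (i > 0 and j > 0 and ok[i-1][j-1]))
    return ok[n-1][m-1]
```

For two parallel three-point polylines at vertical distance 1, the answer is yes for eps = 1 and no for eps = 0.9.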
Motivated by a new way of visualizing hypergraphs, we study the following problem. Consider a rectangular grid and a set of colors $\chi$. Each cell $s$ in the grid is assigned a subset of colors $\chi_s \subseteq \chi$ and should be partitioned such that for each color $c\in \chi_s$ at least one piece in the cell is identified with $c$. Cells assigned the empty color set remain white. We focus on the case where $\chi = \{\text{red},\text{blue}\}$. Is it possible to partition each cell in the grid such that the unions of the resulting red and blue pieces form two connected polygons? We analyze the combinatorial properties and derive a necessary and sufficient condition for such a painting. We show that if a painting exists, there exists a painting with bounded complexity per cell. This painting has at most five colored pieces per cell if the grid contains white cells, and at most two colored pieces per cell if it does not.
Treemaps are a popular tool to visualize hierarchical data: items are represented by nested rectangles and the area of
each rectangle corresponds to the data being visualized for this item. The visual quality of a treemap is commonly measured via the
aspect ratio of the rectangles. If the data changes, then a second important quality criterion is the stability of the treemap: how much
does the treemap change as the data changes? We present a novel stable treemapping algorithm that has very high visual quality.
Whereas existing treemapping algorithms generally recompute the treemap every time the input changes, our algorithm changes the
layout of the treemap using only local modifications. This approach not only gives us direct control over stability, but it also allows us
to use a larger set of possible layouts, thus provably resulting in treemaps of higher visual quality compared to existing algorithms.
We further prove that we can reach all possible treemap layouts using only our local modifications. Furthermore, we introduce a new
measure for stability that better captures the relative positions of rectangles. We finally show via experiments on real-world data that
our algorithm also outperforms existing treemapping algorithms in practice, in visual quality, stability, or both. Our algorithm scores
high on stability regardless of whether we use an existing stability measure or our new measure.
The theory of kernelization can be used to rigorously analyze data reduction for graph coloring problems. Here, the aim is to reduce a $q$-Coloring input to an equivalent but smaller input whose size is provably bounded in terms of structural properties, such as the size of a minimum vertex cover. In this paper we settle two open problems about data reduction for $q$-Coloring.
First, we use a recent technique of finding redundant constraints by representing them as low-degree polynomials, to obtain a kernel of bitsize $O(k^{q-1} \log k)$ for $q$-Coloring parameterized by Vertex Cover, for any $q \geq 3$. This size bound is optimal up to $k^{o(1)}$ factors assuming NP is not in coNP/poly, and improves on the previous-best kernel of size $O(k^q)$.
Our second result shows that $3$-Coloring does not admit non-trivial sparsification: assuming NP is not in coNP/poly, the parameterization by the number of vertices $n$ admits no (generalized) kernel of size $O(n^{2-\varepsilon})$ for any $\varepsilon > 0$. Until now, such a lower bound was only known for coloring with $q \geq 4$ colors.
Simplifying polygonal curves at different levels of detail is an important problem with many applications. Existing geometric optimization algorithms for this problem minimize the complexity of the simplified curve for a single scale. Other algorithms simplify for multiple scales, but do not minimize the complexity. We present a new simplification algorithm that bridges the gap between these approaches, and produces consistent simplifications for several scales while minimizing the combined simplification complexity. This algorithm uses so-called shortcut graphs, which are graphs in which each edge represents a single line segment that is a valid simplification for a contiguous subsequence of the input curve. To speed up the algorithm in practice, we present an efficient representation for the shortcut graph, and techniques for constructing shortcut graphs for multiple levels of detail.
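To illustrate the shortcut-graph idea at a single scale, here is a sketch in the spirit of the classic minimum-link simplification of Imai and Iri (our own illustration, not the multi-scale algorithm of the talk; names and the distance criterion are ours): an edge $(i,j)$ of the shortcut graph is valid if every skipped vertex lies within distance eps of the segment, and a BFS shortest path through this graph yields a simplification of minimum complexity.

```python
from math import hypot
from collections import deque

def seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify_min_links(curve, eps):
    """Minimum-link simplification: BFS shortest path in the
    shortcut graph from the first to the last vertex."""
    n = len(curve)
    valid = lambda i, j: all(seg_dist(curve[k], curve[i], curve[j]) <= eps
                             for k in range(i + 1, j))
    prev = {0: None}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(i + 1, n):
            if j not in prev and valid(i, j):
                prev[j] = i  # first discovery = fewest links
                queue.append(j)
    path, v = [], n - 1
    while v is not None:
        path.append(curve[v]); v = prev[v]
    return path[::-1]
```

A nearly flat four-vertex curve with eps larger than its wiggles collapses to the single segment between its endpoints.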
We study the problem of connecting a new vertex $p$ to a Euclidean graph $G$ using $k$ new edges, called feed links, which connect $p$ to the boundary of the plane face $f$ of $G$ containing $p$. We aim to connect the vertex in such a way that the maximum dilation from $p$ to a set of destination vertices in the graph is minimized, where we define dilation as the ratio between the distance in the graph and the Euclidean distance. Previous work mainly focuses on reducing the dilation to points on the boundary of $f$ rather than to points anywhere in $G$. We give two different methods to solve this problem in polynomial time for a fixed $k$. We show that the problem is NP-hard when $k$ is not fixed, by a reduction from Set Cover. We give three different approximation algorithms that place $k$ feed links such that the maximal dilation is approximated. The first algorithm gives a $(1+\varepsilon)$-approximation for arbitrarily small $\varepsilon$; however, its running time is still exponential in $k$. The second gives a $(t+1)$-approximation when $f$ is a $t$-spanner and runs in $O(m^2 \log n)$ time, where $m$ denotes the number of vertices on the boundary of $f$ and $n$ denotes the total number of vertices in the graph. The third approximation algorithm has a running time of $O(mn^2 \log n)$ and is obtained after a comparison between dilation and distance in the context of feed link placement. Its approximation ratio depends on the difference in Euclidean distances between $p$ and the destination vertices.
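For intuition, the dilation measure itself is straightforward to evaluate on a given graph. The following sketch (our own, with hypothetical names; not one of the talk's algorithms) computes the dilation between two vertices as graph distance over Euclidean distance, using Dijkstra's algorithm on an edge-weighted graph.

```python
import heapq
from math import dist

def shortest_path_length(adj, s, t):
    """Dijkstra on an adjacency dict {u: [(v, weight), ...]}."""
    pq, best = [(0.0, s)], {}
    while pq:
        d, u = heapq.heappop(pq)
        if u in best:
            continue  # already settled with a shorter distance
        best[u] = d
        for v, w in adj[u]:
            if v not in best:
                heapq.heappush(pq, (d + w, v))
    return best.get(t, float('inf'))

def dilation(points, adj, p, t):
    """Ratio of graph distance to Euclidean distance between p and t."""
    return shortest_path_length(adj, p, t) / dist(points[p], points[t])
```

On the two-edge path $(0,0)$–$(1,0)$–$(1,1)$, the dilation between the endpoints is $2/\sqrt{2} = \sqrt{2}$.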
This project uses a map matching technique for cartographic schematization. In map matching, paths are mapped onto a graph, for example mapping GPS coordinates to a road network. To use this for cartographic schematization, in this project countries (paths) are mapped onto a grid (graph). However, the requirements for map matching and cartographic schematization are different. In map matching the input and output can be any path and can therefore contain intersections and shared edges and vertices. In cartographic schematization, by contrast, the inputs (countries) tend to be simple polygons, and therefore the output is preferred to be simple as well. However, is this always the case? For this project I focused on the algorithm Matching Planar Maps by Helmut Alt, Alon Efrat, Günter Rote and Carola Wenk, in which the Fréchet distance is used as a quality measure. How can we extend this algorithm, or manipulate its output, to force the output to be a simple polygon again, and how much will this extension impact the recognizability of the input?
We implemented a visual analytics tool that explores migration patterns
in gull data. Migration is the seasonal spatial displacement of moving
objects, for example between winter and spring. We compute and visualize
a trajectory aggregation, a density map on top of geographical maps, a
calendar view, and a list of gulls. Our evaluation with an expert user in
ecology shows that our approach helps ecologists identify interesting
moving objects and thereby supports them in their analysis workflow.
This is joint work with Pieter Gijsbers, Ferry Timmers, Emiel van Loon,
Michel Westenberg, and Kevin Buchin.
Drainage networks on terrains have been studied extensively from an algorithmic perspective. However, in drainage networks water flow cannot bifurcate, and hence they do not model braided rivers (multiple channels that split and join, separated by sediment bars). We initiate the algorithmic study of braided rivers by employing the descending quasi Morse-Smale complex on the river bed (a polyhedral terrain), and extending it with a certain ordering of bars from one river bank to the other. This allows us to compute a graph that models a representative channel network, consisting of locally lowest paths.
To ensure that channels in this network are sufficiently different we define a sand function that represents the volume of sediment separating them. We show that in general the problem of computing a maximum network of non-crossing channels which are $\delta$-different from each other (as measured by the sand function) is NP-hard. However, using our ordering between the river banks, we can compute a maximum $\delta$-different network that respects this order in polynomial time. We implemented our approach and applied it to simulated and physical models of braided rivers.
This is joint work with Maarten Kleinhans, Marc van Kreveld, Tim Ophelders, Bettina Speckmann, and Kevin Verbeek.
For two curves on a surface, one can quantify how similar they are based on how much one has to stretch the curves in order to continuously deform one into the other. Such a deformation will be a homotopy from one curve to the other that minimizes the length of the longest intermediate curve.
We give some structural properties that some such homotopy is guaranteed to have in general, as well as in more restricted cases. The first such restriction assumes that the two curves lie on the boundary of the surface.
Secondly, we consider a discrete variant of this problem, where the surface is represented as an edge-weighted combinatorial map. We show that this variant lies in NP by giving a polynomial bound on the complexity of such a homotopy. Furthermore, we show that this discrete variant is dual to an interesting graph-drawing problem, related to a plane variant of cut-width.
It is not known whether a polynomial time algorithm exists that gives a sub-logarithmic approximation factor.
As a third variant, we consider the surface to be a polygon in the plane with weighted point-obstacles. For this case, we give an exact polynomial time algorithm.
This presentation combines results of several collaborations. Collaborators include Benjamin Burton, Erin Chambers, Gregory Chambers, Marc van Kreveld, Arnaud de Mesmay, Wouter Meulemans, Regina Rotman and Bettina Speckmann.
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.’s ‘Visual Line-up’ method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n = 361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran’s I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people’s abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
Beecham, R., Dykes, J., Meulemans, W., Slingsby, A., Turkay, C. & Wood, J.
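The experiments above control pairs of choropleth maps for spatial autocorrelation using Moran's I statistic. As an illustrative sketch (our own, not code from the paper; all names are ours), Moran's I for values on a grid with rook adjacency can be computed directly from its definition, $I = \frac{N}{W}\,\frac{\sum_{i,j} w_{ij}(x_i-\bar x)(x_j-\bar x)}{\sum_i (x_i-\bar x)^2}$.

```python
def morans_i(values, neighbors):
    """Moran's I for a list of values and a symmetric 0/1 neighbor
    relation given as a set of ordered index pairs (i, j), i != j."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(dev[i] * dev[j] for i, j in neighbors)  # sum of w_ij products
    den = sum(d * d for d in dev)
    W = len(neighbors)  # total weight: each edge counted in both directions
    return (n / W) * (num / den)

def rook_pairs(rows, cols):
    """Ordered rook-adjacency pairs on a rows x cols grid."""
    pairs = set()
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                j = i + 1; pairs |= {(i, j), (j, i)}
            if r + 1 < rows:
                j = i + cols; pairs |= {(i, j), (j, i)}
    return pairs
```

A checkerboard pattern yields negative autocorrelation, while a smooth gradient yields positive autocorrelation, matching the intuition behind the autocorrelated null maps.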
The complexity of computing the Fréchet distance between curves in 1D is a challenging open problem: in 2D, and for the discrete version of the Fréchet distance in 1D, no algorithm for this problem can be substantially subquadratic unless the Strong Exponential Time Hypothesis fails. For the (continuous) Fréchet distance in 1D no non-trivial lower bound is known, but the fastest known algorithm runs in quadratic time. I will discuss a special case for which we can give a faster algorithm.
This is joint work with Jinhee Chun, Maarten Löffler, Aleksandar Markovic, Wouter Meulemans, Yoshio Okamoto, and Taichi Shiitada.
We study the exact complexity of the Hamiltonian Cycle and the $q$-Coloring problem in disk graphs. We show that the Hamiltonian Cycle problem can be solved in $2^{O(\sqrt n)}$ time on $n$-vertex disk graphs where the ratio of the largest and smallest disk radius is $O(1)$. We also show that this is optimal: there is no $2^{o(\sqrt n)}$-time algorithm for Hamiltonian Cycle on unit disk graphs, unless the (weak) Exponential Time Hypothesis fails. Our general approach can also be applied to $q$-Coloring arbitrary disk graphs, for any fixed $q$: there is no $2^{o(\sqrt n)}$-time algorithm, but the problem is solvable in $2^{O(\sqrt n)}$ time.
We study the problem of computing straight-line drawings of non-planar graphs with few crossings. We assume that a crossing-minimization algorithm is applied first, yielding a planarization, i.e., a planar graph with a dummy vertex for each crossing, that fixes the topology of the resulting drawing. We present and evaluate two different approaches for drawing a planarization in such a way that the edges of the input graph are as straight as possible. The first approach is based on the planarity-preserving force-directed algorithm ImPrEd, the second approach, which we call Geometric Planarization Drawing, iteratively moves vertices to their locally optimal positions in the given initial drawing.
Graph Drawing uses a well-established set of complexity
measures to determine the quality of a drawing, most notably the area
of the drawing and the complexity of the edges. For contact representations
the complexity of the shapes representing vertices also clearly
contributes to the complexity of the drawing. Furthermore, if a contact
representation does not fill its bounding shape completely, then the
complexity of its complement is also visually salient.
We study the complexity of contact representations with variable shapes,
specifically mosaic drawings. Mosaic drawings are drawn on a tiling of the
plane and represent vertices by configurations: simply-connected sets of
tiles. The complement of a mosaic drawing with respect to its bounding
rectangle is also a set of simply-connected tiles, the channels.
We prove that simple mosaic drawings without channels may require
$\Omega(n^2)$ area. This bound is tight. If we use only straight channels, then
outerplanar graphs with $k$ ears may require $\Omega(\min(nk, n^2/k))$ area. This
bound is partially tight: we show how to draw outerplanar graphs with
$k$ ears in $O(nk)$ area with L-shaped vertex configurations and straight
channels. Finally, we argue that L-shaped channels are strictly more
powerful than straight channels, but may still require $\Omega(n^{7/6})$ area.
Let $P$ be a finite set of points in the plane in general position,
that is, no three points of $P$ are on a common line.
We say that a set $H$ of five points from $P$ is a $5$-hole in $P$
if $H$ is the vertex set of a convex $5$-gon containing no other points of $P$.
For a positive integer $n$, let $h_5(n)$ be the minimum number of 5-holes
among all sets of $n$ points in the plane in general position.
Despite many efforts in the last 30 years, the best known asymptotic lower and upper bounds for $h_5(n)$ have been of order $\Omega(n)$ and $O(n^2)$, respectively.
We show that $h_5(n) = \Omega(n\log^{4/5}{n})$, obtaining the first superlinear lower bound on $h_5(n)$.
The following structural result, which might be of independent interest, is a crucial step in the proof of this lower bound.
If a finite set $P$ of points in the plane in general position is
partitioned by a line $\ell$ into two subsets,
each of size at least 5 and not in convex position,
then $\ell$ intersects the convex hull of some 5-hole in $P$.
The proof of this result is computer-assisted.
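On tiny point sets, 5-holes are easy to count by brute force directly from the definition. The following sketch is our own illustration (certainly not the computer-assisted proof): a 5-subset is a 5-hole if its points are in convex position and the resulting pentagon contains no other point of the set.

```python
from itertools import combinations
from math import atan2

def cross(o, a, b):
    """Signed area test: positive if o -> a -> b is a left turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_ring(pts):
    """Return the points in CCW convex order, or None if they are
    not in convex position (general position assumed)."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    ring = sorted(pts, key=lambda p: atan2(p[1]-cy, p[0]-cx))
    m = len(ring)
    if all(cross(ring[i], ring[(i+1) % m], ring[(i+2) % m]) > 0
           for i in range(m)):
        return ring
    return None

def count_5_holes(points):
    """Brute-force count of 5-holes: 5-subsets in convex position
    whose convex pentagon contains no other point of the set."""
    total = 0
    for H in combinations(points, 5):
        ring = convex_ring(H)
        if ring is None:
            continue
        rest = [p for p in points if p not in H]
        m = len(ring)
        if not any(all(cross(ring[i], ring[(i+1) % m], q) > 0
                       for i in range(m)) for q in rest):
            total += 1
    return total
```

Five points in convex position form exactly one 5-hole; adding a point inside the pentagon destroys it.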
We consider drawings of graphs that contain dense subgraphs. We introduce intersection-link representations for such graphs, in which each vertex u is represented by a geometric object R(u) and in which each edge (u,v) is represented by the intersection between R(u) and R(v) if it belongs to a dense subgraph or by a curve connecting the boundaries of R(u) and R(v) otherwise. We study a notion of planarity, called Clique Planarity, for intersection-link representations of graphs in which the dense subgraphs are cliques.
Joint work with Patrizio Angelini, Giuseppe Di Battista, Fabrizio Frati, Maurizio Patrignani, and Ignaz Rutter.
A rectangular layout (or simply layout) L is a partition of a rectangle into a finite set of interior-disjoint rectangles. Rectangular layouts have many applications, for example as rectangular cartograms in cartography, chip designs in VLSI and floorplans in architecture. A rectangular cartogram is a map in which the regions are replaced by scaled rectangles representing some quantity.
In the rectangular cartogram application it is desirable to find a layout that always keeps the same adjacencies between the interior rectangles when their sizes change. For example, when displaying a cartogram of a single quantity at different points in time, we prefer to do this using a set of layouts with the same adjacencies. Such layouts are area-universal.
The interior of a rectangular layout contains vertical and horizontal line segments. Any line segment that cannot extend farther on either side is a maximal segment. A rectangular layout is k-sided if every maximal segment has at most k rectangles on one of its sides. A rectangular dual of a graph G is a rectangular layout whose adjacencies are the same as those of G. A necessary and sufficient condition for a layout to be area-universal is that it is one-sided, that is k-sided with k=1. Unfortunately, not all graphs with a rectangular dual have a one-sided dual.
For graphs without a one-sided dual, a k-sided dual with k as small as possible is often a dual that has the smallest risk of adjacencies changing when changing the size of the interior rectangles.
In this thesis we prove that there is no constant $k$ such that every graph admitting a rectangular dual also has a $k$-sided dual. However, for certain graphs, namely those without separating 4-cycles, a constant bound might be possible. We do not quite obtain this result. Instead, we present an algorithm giving a $(d-1)$-sided dual, where $d$ is the maximum degree of the vertices of $G$ in $\bar{G}$, a so-called corner assignment.
Many complexity classes capturing intractability in Parameterized
Complexity Theory can be defined in terms of random access machines with
restricted forms of nondeterminism, thanks to the work of Chen, Flum,
and Grohe (2005). We give a short introduction to these machine-based
characterizations, and then briefly discuss our work on two related
questions: 1. What is the appropriate way to endow such machines with
access to an oracle, and 2. Can interactive proof systems be used to
characterize certain parameterized complexity classes?
Using Pokemon Go as inspiration, we define a search problem on geometric graphs and analyse whether a competitive strategy for it exists. Given a trainer on the graph and a Pokemon anywhere in the plane, the trainer has a sighting of the Pokemon if it is within distance $r$. He/she can catch it when it is at distance at most $d$, which is sufficiently smaller than $r$ but of the same order. The problem is to catch the Pokemon quickly once it is sighted, with little detour.
We show that for general geometric graphs, there is no constant c such that a c-competitive search strategy for the Pokemon Go search problem exists. On the other hand, if the geometric graph has convex faces or constant geometric dilation, then we can design a c-competitive search strategy for a constant c.
Ortho-radial drawings are a generalization of orthogonal drawings to grids that are formed by concentric circles and straight-line spokes from the center.
We show that bend-free planar ortho-radial drawings can be combinatorially described in terms of the distribution of the angles around the vertices. Previously, such a characterization was only known for paths, cycles, and theta graphs [Hasheminezhad et al. '09], and, in the special case of rectangular drawings, for cubic graphs [Hasheminezhad et al. '10], where the contour of each face is required to be a rectangle. This is an important ingredient in establishing an ortho-radial analogue of Tamassia's Topology-Shape-Metrics framework for bend minimization in planar orthogonal drawings.